Everything posted by kaffeeundsalz

  1. That's not how the Recolor adjustment works. If you need to apply specific color values, use Fill layers, which can be given precise values. Then use Blend Modes and Blend Ranges to control how the color affects your original image.
  2. It's spam. @etiennepatierno is either a bot or a human fake account. He used the other post he made just for the purpose of linking to a poster template site.
  3. If you use Overlay, you can see the background for sure because the overlay is semi-transparent. If even that doesn't make masked/unmasked image parts apparent enough for you to see, I can't help you. You do need at least some kind of visual representation of a mask, otherwise you wouldn't have any clue about how it currently looks, right? The key to a good mask is of course to toggle between the preview modes as needed so you get a good overview of how your mask works under different conditions. For example, "Black Matte" lets you evaluate other aspects of your mask's quality than e.g. "Black and White" does and vice versa.
  4. When you go into Quick Mask mode, you can use the Paint Brush tool to manipulate the current selection. As I wrote in my previous post, it doesn't work with mask layers. Quick Mask mode creates selections, not masks. To create a mask from a selection, add a new Mask layer while having an active selection. To load a mask as a selection, Cmd+Click the Mask layer in the Layers panel (on Windows, it's Ctrl+Click, I think).
  5. Two things. First: There's also the Quick Mask mode ("Edit Selection as Layer") which basically lets you paint your selection with brushes while showing the same red overlay that you're used to from the Refine Selection dialog. Unfortunately, this works with selections only and not with existing masks, but you can load your mask as a selection and create a new one after painting. Also, what I told you in my first answer still holds: If you paint in just a bit from the background, the edges of the subject become clearly visible and you can work your way back until only the subject is selected. Second: Sometimes what counts with selections is credibility rather than accuracy. In some cases, it just doesn't matter whether you managed to follow the exact contour of, say, a person's clothing as long as your selection result looks natural. In other cases, hair selection is so difficult that it looks better to manually paint in some artificial strands of hair instead of trying to select them from the original photo. Don't try to be overly precise when what you've got is already quite realistic.
  6. I also don't understand your question, but I'll try to work through your post.
Well, if you tried it, you should be able to tell for yourself. If the technique works for you, use it. If it doesn't, then don't.
In theory, yes, but the workflow shown in the video is specifically tailored to image content where these automatic tools don't work reliably. This is thoroughly explained in the video, including the factors that make smart selection algorithms fail. Please watch the introduction.
This is all specific to the very image you're trying to work on. You need soft brushes for blurry edges and hard brushes for sharp edges. And if you create an initial mask and paint back in too much, you can of course very clearly see which parts of the image belong to the foreground or to the background. But again, the video covers all of this in much detail, so I'm not entirely sure what your questions are.
Again, I'm unsure if I got you right, but where this advice probably comes from is that in Affinity Photo, image content is treated differently depending on how you output the selection refinement. If Output is set to "New layer" or "New layer with mask", Photo applies additional processing to reduce color bleed from the background into the selection, which makes these output options better suited for cutting out or compositing. This is covered in more detail in the help files.
It may sound repetitive, but the experience with all these techniques depends on the image content. With some images, they'll work. With some, they won't. It's always been that way.
  7. But isn't this a bug then? Shouldn't a "Subset fonts" feature ensure to include all glyphs that are actually used in a document?
  8. I think the confusion here comes from your assumption that the marching ants, i.e. the moving dotted line in your image, are an accurate visual representation of your selection. They're not. Pixels can be selected, not selected – and also partially selected, which means their selection involves some degree of transparency. But Affinity Photo will only show marching ants for areas where the selection has >50 percent opacity. That's why in your example it seems that the Eraser brush doesn't respect the limits of your selection. In reality, your selection area is simply larger than the marching ants suggest because it contains pixels that are selected with <=50 percent opacity. To get an accurate view of your selection, use the Quick Mask feature by pressing Q or do what @NotMyFault recommends in this post. You need a different selection to achieve what you want.
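To make that 50 percent threshold concrete, here's a toy sketch (the numbers are invented, and this is an illustration of the idea, not Affinity's actual internals) of how a soft selection edge relates to what the marching ants show:

```python
import numpy as np

# toy 1-D "selection": a soft-edged mask with partial opacity at the fringe
selection = np.array([0.0, 0.2, 0.4, 0.6, 0.9, 1.0, 0.9, 0.6, 0.4, 0.2, 0.0])

# marching ants only outline pixels selected at more than 50% opacity...
shown_by_ants = selection > 0.5

# ...but tools like the Eraser act on every pixel with any degree of
# selection at all, weighted by its opacity
affected = selection > 0.0

print(int(shown_by_ants.sum()))  # 5 pixels outlined
print(int(affected.sum()))       # 9 pixels actually touched
```

So four of the nine affected pixels fall outside the dotted line, which is exactly the effect described above.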
  9. I assume that by Stamp tool you mean the Clone Brush Tool, because that's what you use in your screen recording. At least in your video, the reason why the Clone Brush doesn't work is that you have no active layer, so Affinity Photo doesn't know what pixel information to use as a source (and target). You need to select the Background layer in the Layers panel first to get this working.
  10. I would simply use the Flood Selection Tool to isolate the red tones from the text and copy the selection over to a new layer. You can then do whatever you want with that layer to achieve a uniform color fill. Three minute quick and dirty example: Edit: I forgot to mention that if you invert your selection, you can edit the color of the text in a quite similar way.
  11. The cross in your example screenshot is not dim, it's invisible. You need to hover the mouse cursor over the tab to make it appear.
  12. Since Affinity Publisher doesn't have a built-in barcode generator, I wouldn't bother too much with replicating the exact appearance of the reference image in Publisher alone. You'd have to convert the barcode font to curves, manually add lines to the code, increase their length and realign the numbers accordingly. I would simply use one of the many online barcode generators, output the EAN13 code to PDF or SVG and import that into Publisher.
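As a side note, the only non-trivial arithmetic in an EAN-13 code is its final check digit, which any generator computes for you. A minimal sketch of that calculation (function name is mine):

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit from the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) are weighted 1, digits in
    even positions are weighted 3; the check digit brings the weighted
    sum up to the next multiple of 10.
    """
    assert len(first12) == 12 and first12.isdigit()
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# example with a known EAN-13 (4006381333931, whose check digit is 1):
print(ean13_check_digit("400638133393"))  # 1
```

So if a generated code ever looks suspicious, you can at least sanity-check its last digit this way.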
  13. Maybe so, but since the letters are pretty geometrical, it's also not that hard to trace them with the pen tool to fill the gaps. Snapping helps a lot in this case. I know my version is not perfect, but it's just supposed to be a quick example. The hardest part was the letter G because there were so many key points missing and I don't think I got this one right. Anyway, it should be doable.
  14. Not sure how you'd define a newcomer, but Scrivener has been around for ~15 years now.
  15. Since G'MIC doesn't seem to work as an Affinity Photo plugin on macOS, you may want to have a look at Exposure Software's SnapArt if you're using a Mac.
  16. Running a crappy business while being convinced that you are an expert when really you are not is what I'd consider a sad story. But I know this is probably an entirely subjective matter. For me personally, I really, really appreciate people who know what they're doing, and I find there are too many people out there who don't.
  17. @Ron P. This is probably the saddest story I've heard in a long time. 😢
  18. Try ImageMagick; it shouldn't matter how many files you throw at it. You need to be familiar with the command line, though. Provided that you work on a Mac, you could also try GraphicConverter if you prefer a GUI solution.
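For example, with ImageMagick 7 a whole folder of TIFFs converts with a single `magick mogrify -format jpg *.tif`. If you'd rather drive it per file from a script, here's a hedged Python sketch (paths and the output format are placeholders; on ImageMagick 6, replace `magick` with `convert`):

```python
import subprocess
from pathlib import Path

def build_convert_commands(src_dir: str, out_format: str = "jpg"):
    """Build one `magick` command per TIFF file in src_dir.

    Assumes the ImageMagick 7 `magick` binary is on the PATH.
    """
    commands = []
    for src in sorted(Path(src_dir).glob("*.tif")):
        dst = src.with_suffix("." + out_format)
        commands.append(["magick", str(src), str(dst)])
    return commands

def run(commands, dry_run=True):
    """Print the commands (dry run) or actually execute them."""
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Doing it file by file like this also makes it easy to log failures instead of having one bad file abort a batch of thousands.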
  19. I don't have any personal experience with converting that many files, but it would probably help to know which applications you've tested and what you mean by "nothing works". Also, what operating system do you use? Are the files you want to convert located on a network drive?
  20. Affinity Photo is not a suitable solution for what you want to achieve. Use a dedicated batch processing application that specializes in handling large numbers of files.
  21. Please excuse me if this becomes kind of a long read, but I'd really appreciate your input on this from a workflow perspective. I recently had the opportunity to make use of Affinity Publisher's StudioLink feature while working on a project for a client because I was able to do most of the design work myself. This included retouching the main photograph, drawing some vectors and also creating the prepress files. You would think that this is ideal for StudioLink to prove its strengths – and overall, it really was. What I noticed afterwards was that things could have been even more seamless if I had done everything in one color space from the beginning. On the other hand, this would have broken other parts of my workflow.
So here's the scenario: It was certain from the very beginning where the project would be printed, so I already had the correct settings for bleed, margins and color profiles in place. I created the Affinity Publisher document accordingly and worked in CMYK from start to finish. Apart from some basic blemish removal and color corrections, the selected photo mainly needed some sharpening. Playing around with the available filters, I got the best results by placing a High Pass filter on top of the layer stack and changing its blend mode to Linear Light, which is a common technique for output sharpening and gave a very natural look in this case.
In theory, I could have done the entire retouching work for the main image in Affinity Publisher using the Photo persona. The big, big advantage of this would have been the perfectly non-destructive approach, giving me access to all adjustments and live filter layers (and their masks) directly in Publisher. This would have allowed me to fine-tune the retouching at all times, even while mainly working on the project design rather than the photo.
I ended up not doing this because the entire idea fell apart with the chosen sharpening method: Using the High Pass filter as described only really works in RGB, and at least to my knowledge, there is no way to get it to work in CMYK with the available blend modes (or is there?). I think this is not specific to Affinity Photo; I used to have the same problem in e.g. Adobe Photoshop. This situation is of course part of a bigger issue when it comes to working in different color spaces: There are a number of filters and adjustments in various image editing applications which only show the expected behavior when applied to RGB images. It's just that the problem becomes very obvious in an integrated environment like StudioLink.
My solution was to retouch the image in Affinity Photo separately, converting the final (flattened) document to the correct CMYK color space and finally placing the exported TIFF image into the Publisher document. I also kept the original .afphoto file with layers. The exported image did not contain any color corrections; these were in fact done in Publisher because this made it easier for me to match the color appearance of the image to the color scheme of the design. The disadvantage was that whenever I needed to fine-tune the image (which happened rather often, in part because the client had specific requests after each draft), I had to go back to Affinity Photo, edit the image, flatten the document, convert it to CMYK, export it again and refresh the linked image in Publisher.
What I could have done instead:
  • Stay entirely in RGB with the project as long as possible. With this, I would have actually been able to work exclusively in Publisher, perfectly making use of the StudioLink scenario described above. The conversion to CMYK would have been done as a last step during export for print.
  • Place the RGB .afphoto document in Publisher instead of the exported TIFF file. Not as integrated as having all the filters/adjustments available directly in Publisher's layer stack, but still a good solution. It would have been a matter of just double-clicking the image to get back to editing it. The reason why I did not do this was that I couldn't find a way to control how the placed .afphoto document would be converted to CMYK. How does Publisher determine the rendering intent to be used in this case? Changing the settings in the Color Preferences seemed to have no effect. The colors were always slightly off compared to the flattened TIFF image.
  • Avoid using filters and adjustments that can't reasonably be applied in color spaces other than RGB. Here, it would have meant choosing a different sharpening method, e.g. Unsharp Mask or Clarity. This would have made a CMYK-only workflow possible.
So, I'm interested in what your solution would have been. Of course, it doesn't need to be one of the above. Maybe I missed additional options? Is there something obvious (or not so obvious) that I didn't consider? Thanks for your thoughts and ideas.
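Incidentally, the reason the High Pass + Linear Light combination sharpens at all is a simple identity: Linear Light computes base + 2×blend − 1, and a High Pass layer holds (original − blurred) + 0.5, so the blend collapses to original + 2×(original − blurred), i.e. unsharp-mask-style detail boosting. A toy numpy sketch to check this numerically (a box blur stands in for the Gaussian, values normalized to [0, 1]; not Affinity's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))  # toy grayscale image in [0, 1]

# crude 3x3 box blur via edge-padding and averaging (Gaussian stand-in)
padded = np.pad(img, 1, mode="edge")
blurred = sum(padded[i:i+8, j:j+8] for i in range(3) for j in range(3)) / 9.0

high_pass = img - blurred + 0.5               # mid-gray plus detail residual
linear_light = img + 2.0 * high_pass - 1.0    # Linear Light: base + 2*blend - 1

# the combination is exactly detail amplification around the original:
assert np.allclose(linear_light, img + 2.0 * (img - blurred))
```

The identity assumes that mid-gray (0.5) is the neutral point and that channels blend independently, which holds in RGB but breaks down once a CMYK-with-black-channel representation is involved.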
  22. Thanks @Alfred, this is very interesting. I know there are many people around here coming from the legacy Plus range of Serif products. I'm not one of them, so I'm doomed to work my way backwards through Serif's history. 😉
  23. Yes, I think we can both agree on this 👍 It's clear that more RAM is always the fastest option if you really need it. But RAM demand has always been pretty low with Apple's ARM chipsets. We've seen similar effects e.g. on the iPhone and especially the iPad. Earlier models had little RAM compared to their Android counterparts, yet were much faster in benchmarks. And I'm tempted to say that the number of people out there who really need 32 GB of RAM or more in a Mac has somewhat decreased with the release of the M1 chips. Also, I think buying one of the new MBPs with 16 GB RAM will still make the Affinity Suite fly. As always, it depends on what you intend to do with it.
  24. That may be the case for traditional x86 systems. With Apple Silicon's ARM implementation, it's simply wrong. The architecture is different in a number of ways, and all chips of the M1 range get along with much less RAM, as was proven in countless reviews of the 2020 and 2021 models. Also, I fail to see how the base specs of the new MBP would be anything less than sufficient to outperform the vast majority of systems on which the Affinity suite is currently in use.
  25. @spidermurph No missing feature here. The same thing is possible in all three Affinity applications.