Everything posted by kaffeeundsalz

  1. Please do such a test, post the export results here so we can compare them, and thoroughly describe what exactly you think should convince us that the image quality is superior in other applications. You've taken the entire discussion this far, yet you never actually told us your definition of "better quality". Why?
  2. Off topic: Most hardcore audiophiles don't give a damn about double-blind listening tests. They simply claim they can hear a difference.
  3. It's not only snapping that overrides Force Pixel Alignment; proportional scaling does, too. This means that if you have e.g. a rectangle of 6×10 pixels and proportionally scale the shorter edge up to 7 pixels, the longer edge will scale to 11.7 pixels (rather than 12 pixels); see the quick calculation after this list. There are probably more operations that undesirably introduce decimal places to pixel values, but this is a very common one to keep in mind.
  4. From the Affinity Designer product website at https://affinity.serif.com/en-us/designer/ :
  5. I very much doubt that. There already are a number of ways to desaturate layers, and Serif has no interest in precisely replicating Adobe Photoshop's behavior.
  6. That's not how the Recolor adjustment works. If you need to apply specific color values, use Fill layers. They can be given precise color values. Then, use Blend Modes and Blend Ranges to control how the color affects your original image.
  7. It's spam. @etiennepatierno is either a bot or a human fake account. He used the other post he made just for the purpose of linking to a poster template site.
  8. If you use Overlay, you can see the background for sure because the overlay is semi-transparent. If even that doesn't make masked/unmasked image parts apparent enough for you to see, I can't help you. You do need at least some kind of visual representation of a mask, otherwise you wouldn't have any clue about how it currently looks, right? The key to a good mask is of course to toggle between the preview modes as needed so you get a good overview of how your mask works under different conditions. For example, "Black Matte" lets you evaluate other aspects of your mask's quality than e.g. "Black and White" does, and vice versa.
  9. When you go into Quick Mask mode, you can use the Paint Brush Tool to manipulate the current selection. As I wrote in my previous post, it doesn't work with mask layers: Quick Mask mode creates selections, not masks. To create a mask from a selection, add a new Mask layer while having an active selection. To load a mask as a selection, Cmd+Click the Mask layer in the Layers panel (on Windows, it's Ctrl+Click, I think).
  10. Two things. First: There's also the Quick Mask mode ("Edit Selection as Layer") which basically lets you paint your selection with brushes while showing the same red overlay that you're used to from the Refine Selection dialog. Unfortunately, this works with selections only and not with existing masks, but you can load your mask as a selection and create a new one after painting. Also, what I told you in my first answer still holds: If you paint in just a bit from the background, the edges of the subject become clearly visible and you can work your way back until only the subject is selected. Second: Sometimes what counts with selections is credibility rather than accuracy. In some cases, it just doesn't matter whether you managed to follow the exact contour of, say, a person's clothing as long as your selection result looks natural. In other cases, hair selection is so difficult that it looks better to manually paint in some artificial strands of hair instead of trying to select them from the original photo. Don't try to be overly precise when what you've got is already quite realistic.
  11. I also don't understand your question, but I'll try to work through your post. Well, if you tried it, you should be able to tell for yourself. If the technique works for you, use it. If it doesn't, then don't. In theory, yes, but the workflow shown in the video is specifically tailored to image content where these automatic tools don't work reliably. This is thoroughly explained in the video, including the factors that make smart selection algorithms fail. Please watch the introduction. This is all specific to the very image you're trying to work on. You need soft brushes for blurry edges and hard brushes for sharp edges. And if you create an initial mask and paint back in too much, you can of course very clearly see which parts of the image belong to the foreground or to the background. But again, the video covers all of this in much detail, so I'm not entirely sure what your questions are. Again, I'm not sure I got you right, but this advice probably comes from the fact that in Affinity Photo, image content is treated differently depending on how you output the selection refinement. If Output is set to "New layer" or "New layer with mask", Photo applies additional processing to reduce color bleed from the background into the selection, which makes these output options better suited to cutting out or compositing. This is covered in more detail in the help files. It may sound repetitive, but the experience with all these techniques depends on the image content. With some images, they'll work. With some, they won't. It's always been that way.
  12. But isn't this a bug then? Shouldn't a "Subset fonts" feature ensure that all glyphs actually used in a document are included?
  13. I think the confusion here comes from your assumption that the marching ants, i.e. the moving dotted line in your image, are an accurate visual representation of your selection. They're not. Pixels can be selected, not selected – and also partially selected, which means their selection involves some degree of transparency. But Affinity Photo will only show marching ants for areas where the selection has >50 percent opacity. That's why in your example it seems that the Eraser brush doesn't respect the limits of your selection. In reality, your selection area is simply larger than the marching ants suggest because it contains pixels that are selected with <=50 percent opacity (there's a small illustration after this list). To get an accurate view of your selection, use the Quick Mask feature by pressing Q or do what @NotMyFault recommends in this post. You need a different selection to achieve what you want.
  14. I assume that by Stamp tool you mean the Clone Brush Tool, because that's what you use in your screen recording. At least in your video, the reason the Clone Brush doesn't work is that you have no active layer, so Affinity Photo doesn't know what pixel information to use as a source (and target). You need to select the Background layer in the Layers panel first to get this working.
  15. I would simply use the Flood Select Tool to isolate the red tones from the text and copy the selection over to a new layer. You can then do whatever you want with that layer to achieve a uniform color fill. Three-minute quick-and-dirty example: Edit: I forgot to mention that if you invert your selection, you can edit the color of the text in a quite similar way.
  16. The cross in your example screenshot is not dim, it's invisible. You need to hover the mouse cursor over the tab to make it appear.
  17. Since Affinity Publisher doesn't have a built-in barcode generator, I wouldn't bother trying to replicate the exact appearance of the reference image with Publisher alone. You'd have to convert the barcode font to curves, manually add lines to the code, increase their length and realign the numbers accordingly. I would simply use one of the many online barcode generators, output the EAN-13 code to PDF or SVG and import that into Publisher (a scripted alternative is sketched after this list).
  18. Maybe so, but since the letters are pretty geometrical, it's also not that hard to trace them with the pen tool to fill the gaps. Snapping helps a lot in this case. I know my version is not perfect, but it's just supposed to be a quick example. The hardest part was the letter G because there were so many key points missing and I don't think I got this one right. Anyway, it should be doable.
  19. Not sure how you'd define a newcomer, but Scrivener has been around for ~15 years now.
  20. Since G'MIC doesn't seem to work as an Affinity Photo plugin on macOS, you may want to have a look at Exposure Software's SnapArt if you're using a Mac.
  21. Running a crappy business while being convinced that you're an expert when you're really not is what I'd consider a sad story. But I know this is probably an entirely subjective matter. For me personally, I really, really appreciate people who know what they're doing, and I find there are too many people out there who don't.
  22. Try ImageMagick; it shouldn't matter how many files you throw at it. You need to be familiar with the command line, though (there's a small batch-conversion sketch after this list). Provided that you work on a Mac, you could also try GraphicConverter if you prefer a GUI solution.
  23. I don't have any personal experience with converting that many files, but it would probably help to know which applications you've tested and what you mean by "nothing works". Also, what operating system do you use? Are the files you want to convert located on a network drive?
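
A quick numeric check of the proportional-scaling example from post 3. This is just the arithmetic behind the 6×10 px rectangle mentioned there; the numbers come straight from that post.

```python
# Proportionally scaling a 6 x 10 px rectangle so the shorter edge becomes 7 px.
short_edge, long_edge = 6.0, 10.0
new_short = 7.0

scale = new_short / short_edge   # 7 / 6 ~ 1.1667
new_long = long_edge * scale     # ~ 11.67 px, no longer a whole pixel value

print(f"{new_short:g} x {new_long:.2f} px")  # 7 x 11.67 px
```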
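
To illustrate the marching-ants point from post 13: the sketch below is not Affinity's internal code, just a toy model of a feathered selection stored as per-pixel opacity, with the 50 percent display threshold described in the post.

```python
# Toy model of a feathered selection edge stored as per-pixel opacity (0..1).
# The marching ants outline only traces pixels with more than 50 % opacity,
# so the visible outline is narrower than the area a tool actually affects.
selection = [0.0, 0.2, 0.4, 0.6, 0.9, 1.0, 0.9, 0.6, 0.4, 0.2, 0.0]

inside_ants = [p for p in selection if p > 0.5]        # what the dotted line shows
actually_selected = [p for p in selection if p > 0.0]  # what the Eraser respects

print(len(inside_ants), "pixels appear inside the marching ants")        # 5
print(len(actually_selected), "pixels are partially or fully selected")  # 9
```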
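
As a scripted alternative to the online generators mentioned in post 17, here is a minimal sketch assuming the third-party python-barcode package is installed (pip install python-barcode); the article number and output filename are placeholders. It writes the EAN-13 code as SVG, which Publisher can then place.

```python
# Minimal sketch, assuming the python-barcode package is available.
# Renders an EAN-13 barcode as SVG for placing into Affinity Publisher.
import barcode
from barcode.writer import SVGWriter

ean = barcode.get("ean13", "590123412345", writer=SVGWriter())  # placeholder digits
filename = ean.save("ean13_example")  # writes ean13_example.svg
print("wrote", filename)
```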
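
For the batch conversion in post 22, a minimal sketch of driving ImageMagick from Python; it assumes ImageMagick 7 (the magick command) is installed and on the PATH, and the folder path and formats are placeholders.

```python
# Batch-convert every PNG in a folder by calling ImageMagick ("magick input output"
# converts based on the output file extension). Paths and formats are placeholders.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("~/Pictures/to-convert").expanduser()  # placeholder folder
TARGET_FORMAT = "jpg"                                    # placeholder format

for src in SOURCE_DIR.glob("*.png"):
    dst = src.with_suffix(f".{TARGET_FORMAT}")
    subprocess.run(["magick", str(src), str(dst)], check=True)
    print(f"converted {src.name} -> {dst.name}")
```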