  1. I've been using Affinity Photo to process some TIFF scans of negatives. Once I figured out the basic set of filters I wanted, I've been copying them from one file to the next. When I began work on this photo I selected the group of adjustments from another file and pasted it in. It was then that I noticed a surprising bounding box with the Move tool selected. I tracked the culprits down to these two layers. I have no idea how they got to be those shapes. The fill layer affects the entire underlying pixel layer, but I am not sure if the noise reduction one goes outside of the box shown. How on earth did these get this way, does it matter, and how do I get them back to "normal" aside from just removing and recreating those layers?
  2. Thanks for the confirmation. It seems counterintuitive because filters such as noise reduction work on pixel sizes, which, as illustrated, is going to create markedly different results depending on scale. This means Photo is not a WYSIWYG editor, or at the very least there should be a warning in the export window. Thanks, I think Merge Visible is the way to go for me, as it is easily created and removed for reprocessing.
  3. I don't think it has anything to do with zoom levels. I'm talking about exported results looking different between performing the scale in the export window or later, using the same scaling algorithm. That is the only difference between my attached examples.
  4. I am processing large (24 Mpix, 48-bit) TIFF files which are scans of negatives. I have applied many adjustment and filter layers: Fill (to remove colour cast), noise reduction, unsharp mask, curves, vibrance, and vignette. Once I have the result looking good at full size I want to export as a 1600px wide JPEG. However, when I do this, the effect of noise reduction and sharpening is far too pronounced. As a comparison, I exported at full size, then loaded that JPEG back into Photo and downsized using the same algorithm, and got a much smoother result. This implies Photo is downsizing the pixel layer and then applying my adjustments. Is this the case, and if so, can I get the downsize on export to apply to the fully processed, full-sized image? Attached are examples of the difference. Both downsized from 6479 pixels wide to 1600 pixels using Lanczos 3 (separable). Resized JPEG: Resized in Export window:
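The order-of-operations effect described above can be sketched with a small NumPy example. This is purely illustrative, not Affinity Photo's actual pipeline: a simple wraparound box blur stands in for noise reduction, and 2x2 block averaging stands in for Lanczos resampling. The point is that a fixed pixel-radius filter covers twice the subject area after a 2x downscale, so filtering before and after resizing give measurably different results.

```python
import numpy as np

def box_blur(img, radius=1):
    """Naive box blur with wraparound: a stand-in for any fixed pixel-radius filter."""
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

def downscale2x(img):
    """Downscale by 2 using 2x2 block averaging: a stand-in for Lanczos resampling."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
noisy = rng.random((64, 64))  # stand-in for a noisy scan

# Full-size processing first, then the export-time resize:
filter_then_scale = downscale2x(box_blur(noisy))
# Resize first, then apply the filter (the behaviour the post describes):
scale_then_filter = box_blur(downscale2x(noisy))

# The two pipelines do not commute: the same 1-pixel radius spans twice
# the subject area on the downscaled image, so its effect is stronger.
print(np.abs(filter_then_scale - scale_then_filter).max())
```

Because the two operators do not commute, the maximum per-pixel difference printed at the end is nonzero, mirroring the visibly different JPEGs in the attached examples.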
  5. Thanks @Gabe for acknowledgement of the bug, and @garrettm30 for the workaround, which I have replicated and can now use successfully!
  6. I completely understand the process as you described it @garrettm30. It makes perfect sense to me, which is why, when I first spotted it in the style after @Gabe's post, I thought I had my answer. The problem I now have, however, is that for some reason I cannot set either Base or Body to [No change]. In both cases they switch themselves back to Regular. Which brings me back to the final clause of my original post: "...I am unclear on why this cannot be changed in the style editor." I would be reasonably confident that if I burned down the style structure and started from scratch I could get something to work, but I'd really like to know if I have found a bug or if there is some factor I am not understanding about why that setting will not change on either style. Remember the inheritance chain: [No style] -> Base -> Body.
I've also discovered an interesting fact. The reason "...and Preserve Local Formatting" does not alter the font is because it is impossible to have unstyled text without a locally set font. Try this: create a text frame anywhere and make sure you have [No Style] set in the toolbar for both Paragraph and Character styles. As soon as you type any text, it will carry local font formatting based on the currently selected font details in the toolbar. As such, the "default style" that @Wosven referred to earlier is not really a default style so much as a default setting of the toolbar.
Which means I have to have the style itself actively preserve the local formatting for bold and italic but not the font. Which means I need to get past my inability to set that in my styles.
I've attached a cut-down version of my document where I stripped out all my real content but left the style structure as is, along with a test passage of text. The first para is a direct copy of the Markdown; the second para was copied and pasted as rich text. Try getting my Body style applied to the second para while retaining the bold and italics. Also try editing the Body or Base styles to see if you can get the Font trait to stay as [No change]. Styles.afpub
  7. I agree. Yes, I tried all of those options. None of them work. Both "Preserve" options do indeed retain the bold and italic phrases, but neither changes the font. This is why I initially contacted Ulysses support, believing the pasted rich text was specifically tagged with the Helvetica font. They say it's not. I don't know how to verify that.
  8. I should clarify one thing — the title of this thread should use the word "local" not "character". I am not applying Character Styles at any point in this troubleshooting. I copy rich text from a Markdown document in Ulysses, and paste it into Publisher. That gives me Helvetica text with bold and italic phrases. All I want to do is apply my Body style without losing the bold and italic. I believe that should be possible, but I cannot figure out how. The Ulysses support people commented that applying a font that does not have Bold and Italic variants will strip the formatting, but if I apply my Palatino just as a font then the formatting is preserved. It is only when applying my Body style that they disappear.
  9. Sure, but it's a cognitive load, and it has to be remembered every time I paste text, and let's not forget search and replace cannot currently be scoped so I have to be careful about this action. Anyway, I don't believe this should be necessary, so I'm pushing for a proper solution. I understand that but why is it impossible to set [No change] on either Base or Body? If this is by design, that means it is not possible to apply a paragraph style over local formatting while retaining the latter. That seems wrong, and this is supported by the option to "Apply style while keeping local formatting". The trouble with that option is it also does not apply the font, which I also do not understand. I had previously contacted the vendor of the software I am copying from asking why rich text automatically came with the Helvetica font applied and they have responded that it doesn't. Either there is something at play that I am missing (why does setting Font Trait: [No change] not persist?) or there are bugs in this area. I believe it should be possible to apply my Body style while keeping the original rich text attributes as local formatting. So far it seems impossible.
  10. Ahhhh! It's based on "Base" which has Font traits: Regular set. That explains the result of the style, but surely the style definition for Body should retain the [No change] in itself? And in fact Base itself exhibits this same behaviour. It is based on [No style] and if I set the Font traits to [No change] it reverts itself to Regular. So aside from the proper inheritance on the page, the style definition still seems to be incorrect in that neither will take [No change] as a value and keep it. I can find other styles that have [No change] set for Font trait, but all are greyed out and I don't understand why. I also tried the Apply Style and Preserve ... options, but they do not change the font from the default Helvetica. I can't see a way out of this if I cannot force Base or Body to leave italics and bold alone. Short of burning these styles down to the ground and building up new ones — but I didn't build them, I inherited them from somewhere (I think they are defaults?) I am familiar with @Seneca's workaround but that seems like a lot of work when the styles should be able to respect the original local formatting.
  11. What I am trying to do is paste rich text into my Publisher document and then apply my body style without losing italic and bold text. If I paste in rich text, it always seems to appear in Helvetica with [No Style] for Paragraph and Character styles. The text does still contain bold and italicised words. If I then select the whole passage of text and apply, say, the Palatino font, the font is switched as expected and the bold and italics remain in force. If I select the whole passage of text and apply my Body style, the font also changes to Palatino (because that's in the style) but the bold and italic words are forced back to regular. I brought up my Body style in the style editor and noticed the setting highlighted below. I set that to [No change] but that does not remove the "Italic: Off" in the style description, and when I commit that change and then return to the editor, it once again says "Regular" in the dropdown. I suspect this is a bug, in that setting [No change] should remove the "Italic:" entry altogether yet it doesn't. I assume the Font weight: Normal setting is what is getting rid of the bold text, but I am unclear on why this cannot be changed in the style editor.
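The behaviour being asked for can be modelled with a small sketch. This is a hypothetical model of style resolution, not Publisher's actual implementation: an attribute a style sets to a "no change" sentinel should leave the local formatting intact, while any concrete value (such as Regular) overrides it, which is how the bold and italic get stripped.

```python
NO_CHANGE = object()  # sentinel: the style leaves this attribute alone

def apply_style(local_format: dict, style: dict) -> dict:
    """Apply a paragraph style over local formatting.

    Attributes the style maps to NO_CHANGE keep their local value;
    concrete values (e.g. 'Regular') override the local formatting.
    """
    result = dict(local_format)
    for attr, value in style.items():
        if value is not NO_CHANGE:
            result[attr] = value
    return result

pasted = {"font": "Helvetica", "trait": "Italic"}  # rich text as pasted

# Body style as it currently behaves: Font traits forced to Regular.
body_regular = {"font": "Palatino", "trait": "Regular"}
print(apply_style(pasted, body_regular))    # italic is stripped

# Body style with the desired [No change] on traits: italic survives.
body_no_change = {"font": "Palatino", "trait": NO_CHANGE}
print(apply_style(pasted, body_no_change))  # italic preserved
```

Under this model, the reported problem is that the style editor refuses to store the "no change" value, silently replacing it with the concrete value Regular.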
  12. I think we're saying the same thing @NoSi. Your mention of bold, italic, green colour, and boxes made me think you were advocating a markup to annotate these. We're passing each other on the terms we use to describe these things. I like the term "semantic markup," which I learned many years ago in the context of HTML, because it conveys that the markup reflects meaning and not presentation. Depending on what type of content is being created, I can see that some presentation decisions may be made that do not reflect meaning but design. For instance a creator might choose to set different background colours to a series of block quotes and these differences serve only to alter the design, not the fact that the quotes differ in some way.
  13. Strictly speaking, all of Markdown is "structural" although I would use the word semantic. Markdown (as per the original Gruber spec) does not offer italic and bold but rather emphasis and strong emphasis. The "block" and "span" features of Markdown should be directly applicable to Paragraph and Character styles respectively. I believe when you start trying to address actual formatting (like colour choice, borders, backgrounds) then it gets very messy very quickly. Markdown writing environments are generally geared to semantic writing only, and so they should be. It's the modern equivalent of the old adage applied in the days when Word ruled the roost: write your content before you format it.
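The block/span correspondence described above could be modelled as a simple lookup. The construct names and style names here are hypothetical illustrations, not Publisher defaults or identifiers from the Markdown spec:

```python
# Hypothetical mapping from Markdown block constructs to Paragraph styles
# and span constructs to Character styles; all names are illustrative.
BLOCK_TO_PARAGRAPH_STYLE = {
    "paragraph": "Body",
    "heading1": "Heading 1",
    "blockquote": "Block Quote",
    "code_block": "Code",
}

SPAN_TO_CHARACTER_STYLE = {
    "emphasis": "Emphasis",        # Markdown *em*: semantic, not "italic"
    "strong_emphasis": "Strong",   # Markdown **strong**: semantic, not "bold"
    "code_span": "Inline Code",
}

def style_for(markdown_construct: str) -> tuple[str, str]:
    """Return (style_kind, style_name) for a Markdown construct."""
    if markdown_construct in BLOCK_TO_PARAGRAPH_STYLE:
        return ("paragraph", BLOCK_TO_PARAGRAPH_STYLE[markdown_construct])
    if markdown_construct in SPAN_TO_CHARACTER_STYLE:
        return ("character", SPAN_TO_CHARACTER_STYLE[markdown_construct])
    raise KeyError(f"no style mapping for {markdown_construct!r}")

print(style_for("emphasis"))    # ('character', 'Emphasis')
print(style_for("blockquote"))  # ('paragraph', 'Block Quote')
```

The deliberate omission of colours, borders, and backgrounds from the mapping reflects the point above: semantic constructs translate cleanly, presentational ones get messy quickly.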
  14. Just been pointed to this thread. The very simple reason I would like an Affinity DAM is that Serif have proven they know how to build great software, and the same reason Adobe built Lightroom is a reason for Affinity to build a "darkroom" product. I'm sure there are photographers who use Photoshop to process RAW files, but I reckon an awful lot more use Lightroom because that's what it is designed to do. I believe both the Photoshop and Affinity Photo names are less than ideal. Both products are way, way more than photo-based tools, and because of this, they aren't well geared to a photographer's workflow. Indeed they shouldn't be.
In my past and/or current repertoire of "darkroom" applications are: Lightroom, Aperture, Luminar, PhotoLab, ON1, and Apple Photos. Aperture is no longer with us, and without it nothing equals Lightroom when it comes to managing and processing photographs. Lightroom I now find boring, but it can't be beaten as an all-rounder (subscription pricing aside). Luminar is a capable processor with some interesting features (not including sky replacement) but its DAM is barely there. PhotoLab has fantastic RAW conversion, a good feature set, and a workable DAM, but with some critical issues; it also has some severe performance issues. ON1 is a decent all-rounder, most closely matching Lightroom but not quite there just yet, and bettered in some areas by others in my list. Apple Photos... not sure why I included it in the list to be honest, but it does have a DAM. Why? Because Apple recognise that photos need a place to live where you can find them again.
But... try putting a 100 MB, 24 megapixel, 16-bit, full colour TIFF scan of a 35mm slide into any of these applications and they struggle. Again, Lightroom is capable. Some of the others almost seize up. One of them actually crashed my computer.
Put that same TIFF in Affinity Photo, whack a new pixel layer on for a non-destructive workflow, and use the Inpainting Brush to remove hundreds of dust specks, and it doesn't even break a sweat. And I can do it on my iPad with a Pencil, too! Affinity product performance is next level. For an Affinity DAM, I would not be surprised to see jaw-dropping performance with great workflow, and, of course, fantastic integration. Imagine selecting "Place" in Publisher and having your DAM library as an option to pick the file, just like Apple do with Pages and Photos.
However, the current Affinity RAW conversion appears to be relatively basic. Elsewhere, I asked that Serif buy the technology developed by DxO for PhotoLab 3, because it does astounding things with RAW files. By profiling actual lenses on actual cameras they are able to bring out razor-sharp images that no other product in my list, nor Affinity Photo, can match. Their noise reduction is the most impressive I've seen, too, because it de-noises the RAW data based on those same camera and lens profiles, not the decoded image like everyone else does. When it comes to clever coding, fantastic implementation, and next-level performance, Affinity products have it in spades, but DxO put some legwork into working with the hardware that is providing the images, and it shows. I don't expect Serif to go that far, but the results speak for themselves.
  15. I'm struggling. I have or have used Adobe Lightroom Classic, Apple's (defunct) Aperture 3, Skylum Luminar 3, DxO PhotoLab 3, and now ON1 2020, as well as, of course, Affinity Photo. I currently use Lightroom to catalogue because of the peerless keyword support and PhotoLab to process my RAW files because of its peerless lens modules. ON1 is introducing ON1 360 which looks fantastic. And finally, nothing beats Affinity Photo for speed. In my perfect world, Affinity would produce a world class DAM as I know only they could, but would also buy DxO's lens module technology and build or buy a 360 service like ON1. I can dream, can't I? But seriously... Serif, please buy DxO's module technology. I cannot use any other RAW processor now because of it.