Everything posted by Olivier.A

  1. The "develop" term comes from the analog film world. When you needed to "develop" you film using chemicals and masks. Depending of what chemicals, quantity and time, you would get different developed images. With digital, you have basically the same, but everything is done either in camera or on computer/phone/tablet. Let's say you set your camera to produce RAW+JPEG, you get 2 files on your card. As you mentioned, RAW is just data from the sensor. It can't be even displayed because they are just voltage of the sensor. JPEG is an RGB version of the image, and is coming from the RAW data too. What separate RAW from JPEG on the card, is that the camera processed all the numbers and converted then into RGB format, and added some process that nobody knows but the manufacturer. The image is not only converted but also processed. Then, it is saved in a quality loss format : jpeg. You can be really be happy with the result, as the camera manufacturer has a process that xou like. This is specifically the case for Fujifilm user who really like the different looks available. But each manufacturer has its own receipe. Sony jpegs are more cold / neutral, Canon jpegs more warm. Now, you may be not happy with what the camera do to the RAW file. Don't like the look for example, or you want to get your hand dirty and use the RAW to developp yourself, using all the tools available in Affinity Photo. Yes you are right, developing a RAW add complexcity, but gives you the opportunity to add you tweak during the process. In comparison, you could say, why do I need a raw meast, when I can eat a cooked piece of beef... you can choose the restaurant or chef you like to cook it for you. you might be pleased with the result, and you don't care about how it's been cooked, boiled, grilled etc... and that's toally fine. Now, you might want to have your raw meat and cook it yourself, the way you want. Yes, it add complexity because you have to find out how to cook this raw meat... If you let the camera manufacturer do the developement for you, along the process, some data have to be left out, and you can't decide which one. It's part of their process. If you decide to start with the raw, it's up to you to find out how to keep the details and make them hide or shine. You decide. Back of the piece of meat, either the chef or you decide what flavour you want to enhance when cooking. It's all in the process. You might want to trust your chef, or you can master the skill of enhancing the flavour you like.
  2. From your screenshot, Affinity Photo has loaded the file IMG_4200.jpeg. If it were an iPhone Pro Max RAW file, it would be named IMG_4200.DNG. You should check the Camera settings on the phone to see whether Apple ProRAW is switched on. When it isn't, the camera produces JPEGs. Affinity doesn't convert the file format on disk; that only happens when you use the Export menu and select the format you want to save to.
  3. When creating a new stacked image, the target document uses the sRGB colour profile. Bit depth is correctly managed by the assistant. There should be an option to set the colour profile of the target document (as is possible when going from the Develop Persona to the Photo Persona). My use case: I'm loading multiple RAW files (DNG) into the stack. I like to keep as much information as possible, but the resulting document always defaults to sRGB. The current workaround is to first develop each RAW file (DNG) with the colour profile set to my preferred profile (ROMM RGB) and export to TIFF, then stack the TIFFs; that is an extra step in the process. Regards,
  4. You may be right: stacking may not support RAW files, and AP 2 may only be using the embedded JPEGs instead. I think I found it! It's in the Preferences pane, under the Colour tab: when "Convert open files to workspace" is UNchecked (as it was at the beginning), stacked photos use the sRGB colour profile (but why sRGB, when the sources are raw?). When "Convert open files to workspace" is CHECKED, stacked photos use the ROMM RGB profile that is set as the RGB colour profile above. This also means that any photos (RAWs as well as JPEGs, TIFFs, etc.) will now open with the ROMM RGB profile, which is fine with me.
  5. Hi, I usually develop photos using the ROMM RGB colour space (and 16-bit depth). I've set up the Develop Assistant and the preferences that way. I'm trying to get the same colour space when using Stack photos, but once the stack is made I always get an sRGB / 16-bit document. I guess the 16 bits comes from the Develop Assistant, but I can't find how to change the default target profile from sRGB to ROMM RGB. Any clue? Thank you for your help
  6. Hi, I thought about that too. Affinity may not be the cause, but I prefer to mention it just in case; macOS might be the cause. The follow-up question is what differs between the two files such that: - macOS calls the right Affinity thumbnail function for the original DNG on both Ventura and Monterey, - macOS doesn't call the right (or the same) thumbnail function for the tagged one on Ventura, but does on Monterey. I'll do further tests with and without Affinity and other applications. Thank you.
  7. Hi @stokerg, Thanks for the reply. I uploaded two files, original.dng and tagged.dng, so you can compare. Let me know if you need more information. Regards
  8. Hi, I've noticed that Affinity Photo 2.0 and now 2.1 (and I checked v2.1.1) is not able to create the preview icon or the Quick Look preview on M1 Macs once a DNG raw file has been tagged (whatever the metadata change) by Adobe Bridge. This doesn't happen on Intel Macs. This is not a direct bug in the Photo interface, but in how Photo decodes the embedded JPEG on M1 Macs. I'm sure Bridge has its part of the responsibility too. How to reproduce: take a RAW file in DNG format; open Adobe Bridge (any version works); add a star or a keyword to the file; close Adobe Bridge. Open a Finder window on an M1 machine and you should get this preview: "imacbookpro m1.f.jpg" (if you Quick Look it with the spacebar, you get the Affinity Photo icon). Open the same image on an Intel machine and you should get this preview: "imac intel.f.jpg". I can provide the DNG images, but each is 15 MB; let me know how I can best share them. Regards, iMac 2015, Monterey 12.6.6, Affinity Photo 2.1.0, Adobe Bridge 13.0 MacBook Pro 2020, Ventura 13.4, Affinity Photo 2.1.0, Adobe Bridge 13.0 Photo is the default application for opening RAW files; Designer and Publisher are not installed.
  9. Hi, I have used Photo v1 and v2; there is a mistranslation of the Scope panel name. The French word does not have the intended meaning; I guess the translator didn't have the context for that word. The first sense of "scope" means extent or range, and that is how the French translation "Étendues" is used. However, the second sense of "scope" refers to the viewing display of a device, and I suppose that is the intended meaning here. Could the Scope panel be renamed, for example to "Range"? A better name for that panel is not a direct translation of "scope" in this case; an interpretation should be considered instead: "Vues" (plural), meaning Views / Viewers / Displays. Hope this helps. Regards
  10. I've been using Flickr for ages and forgot they have a huge Creative Commons library program. The foundation: https://www.flickr.org/ The library: https://www.flickr.com/commons These are mainly old public images, shared by libraries and institutions.
  11. Getty bought Unsplash last summer or so. They have since been monetizing the Unsplash API, as it is used in a lot of design software. That's also why it disappeared from Affinity Photo.
  12. You can also try Wikimedia Commons (the asset site for all the Wikipedias): https://commons.wikimedia.org/
  13. The original question was: the question might be simple, but is layer order applied bottom to top in a group? And we might have drifted a bit. From the previous posts, it seems the answer is yes when there is a pixel layer inside that group. I want to know how I should place layers so that I get the expected result. Sometimes layer order can give different outcomes, and whether layers are applied top to bottom or the other way around can change the image.
  14. Just did the test, and it works as said! Wow. The bottom group has its own pixel layer (a crop of the bottom background layer). The pixel layer at the top is the same cropped pixel layer, with the adjustments duplicated. The cropped pixel layers are identical. The very bottom pixel layer is not affected at all.
  15. It seems I can state a "rule", thanks to your test: filter and adjustment layers affect the first pixel layer they find going down from the filter. That determines which pixel layer is affected. Following your grouping, let's take your adjustment group. To know which pixel layer will be affected, take any of the live/adjustment layers and go down until you find a pixel layer; in your case this is the background layer inside the live filter group. If there were no pixel layer there, you would go further down and find the background layer at the very bottom. Also, it seems that a group containing a pixel layer and adjustments is itself treated as a pixel layer (one that has adjustments, but basically a pixel layer). I'm pretty sure that if there were no pixel layer in the live filter group, the filter would affect the background layer, and the adjustment layers would too. Or, if you prefer: a group containing a pixel layer plus adjustments and filters is equivalent to a pixel layer containing the same adjustment layers and filters. Basically what @Pšenda said (but put differently). What do you think? (I've sketched this rule as toy code just below.)
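Just to make my hypothesis concrete, here is a toy Python model of the rule — purely an assumption on my part, nothing like Affinity's real compositing engine: a pixel layer replaces whatever is below it, an adjustment transforms what is below it, and a group behaves as one flattened pixel layer.

```python
# Toy model of the hypothesised rule (an assumption, not Affinity internals).
def composite(layers, base):
    """Apply a bottom-to-top stack of layers to a starting pixel value."""
    value = base
    for layer in layers:                     # layers listed bottom -> top
        if isinstance(layer, list):          # a group: flatten it first,
            value = composite(layer, value)  # then treat the result as one pixel layer
        else:
            value = layer(value)             # adjustment/filter acting on what's below
    return value

# Example: background -> (group: pixel layer + "curve") -> global "exposure" tweak
background = 0.5
group = [lambda _below: 0.4,        # pixel layer inside the group replaces what's below it
         lambda v: v ** 0.8]        # "curve" adjustment inside the group
exposure = lambda v: min(v * 1.2, 1.0)

print(composite([group, exposure], background))  # the bottom background value is untouched
```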
  16. When I move the background layer inside the group, at the bottom of it, I don't see any difference compared to having it outside (and below) the group (first screenshot). When I move the background layer inside the group, at the top of it, the result is that no adjustment layer or live filter is applied (second screenshot).
  17. Sorry, I've been vague here. My TV has a mode and an app to grab images from a local server on my network, just like a USB stick but remote. I can imagine new services providing the same online, as it's a connected TV. JPEG would work, of course, but without taking advantage of the HDR and extended dynamic range of the TV. Basically: what is the "JPEG" of HDR images?
  18. It seems there is no clear answer to this question. From my screenshot, it looks like all the layers are merged together and then applied to the background layer. As there is no pixel layer in the group, I'm wondering what changes are actually made... Or is a folder just a way to group layers, with no effect on the order in which they are applied to the layer below the group (here, the background layer)?
  19. After editing an image I want to display it on an HDR-capable TV. What file format, colour space and bit depth should I use in this case to get the most out of the TV? I don't know yet whether the image will be on a USB stick plugged into the TV or downloaded from the internet over the Wi-Fi connection. Thanks!
  20. Hi, The question might be simple, but is layer order applied bottom to top in a group? When you have normal layers above the background layer, they are applied from the background layer up. But is it the same when the layers are in a group, like in the screenshot below? Background, then White Balance, then Noise Reduction, etc.? Or is it Background, then Curve, then Shadows & Highlights? And is it also the case when the layers are "inside" a pixel layer (as the folder is a pixel layer in the screenshot)? Thanks!
  21. That sentence alone can be misinterpreted... I meant that with 32 bits alone, colour grading doesn't make much sense, because 32 bits is about tones. However, when you combine ROMM RGB with 32 bits, then yes: first, the colour space is large enough that colour grading has room to find the right chroma, and with the 32-bit depth added there is also room for the right intensity, without being stuck between two values (0 and 1). That's why I mentioned ROMM RGB or Rec.2020 for colour rather than tones. A good combination is a large colour space and a large bit depth, and that's why I'm trying to find a process in Affinity that keeps this combination as long as possible before exporting. (A small numeric illustration follows below.) As for my background, I was using Lightroom as my developing software and wanted to move away from it. Last year I started looking around and tried Darktable. That is where I learned most of what I know about colour spaces, linear workflows, etc. (and started a YouTube channel about my learning journey). I'm far from knowing everything; that's why I'm asking questions here.
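To illustrate just the bit-depth half of that point with a made-up number (this is only a sketch of the general idea, not anything Affinity-specific):

```python
# A 16-bit integer channel is bounded and quantised, while a 32-bit float
# channel can hold values far above 1.0 until you tone map or export.
import numpy as np

bright_light = 7.5                               # a "scene" value well above diffuse white

as_float32 = np.float32(bright_light)            # kept as-is: 7.5, still recoverable later
as_uint16  = np.uint16(np.clip(bright_light, 0.0, 1.0) * 65535)  # clipped to 65535 (= 1.0)

print(as_float32)         # 7.5
print(as_uint16 / 65535)  # 1.0 -> the highlight information above 1.0 is gone
```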
  22. Thank you very much for all this information and the process! It's very valuable! I recently learned about the three sharpening steps (capture, creative, export) and the reasoning behind them, and I've added that to my process. About the 32/16-bit process, I'm confused now. For me, 32 bits means tones are not bounded, unlike in 16 bits. Does colour grading in 32 bits not make much sense? Colour grading should however be done in a ROMM RGB or Rec.2020 colour space rather than in sRGB, no?
  23. Moin moin @NotMyFault, Thanks for the insight! I'm happy to see I'm on the right path. I obviously didn't know about the added contrast (I always have Trey Ratcliff's images in mind). Can't this effect be achieved using filter layers in the Photo Persona, or is it specific to the Tone Mapping Persona? You suggest using a merged visible layer because, I guess, the Tone Mapping Persona is a destructive process, right? (That makes sense in this case.) I'm trying to keep as much data as possible until the final export, which is why I'm trying to use the widest colour space and tone depth possible. And yes, I've noticed that exporting a JPEG from a ROMM RGB, 32-bit document doesn't give great results. That's the reason for digging into the Tone Mapping Persona and adapting for the output. Thanks, I'll keep digging!
  24. Hi, I'm trying to better understand when to use (or not use) the Tone Mapping Persona when it's not for merging photos. Please let me know if/where I'm wrong. I watched several videos about it, but it wasn't that clear; a video by James Ritson gave me some clues about how it works and when I should use it. From my understanding, the Tone Mapping Persona is trying to fit a foot into a small shoe (sort of): I mean an unbounded (32-bit) image into a bounded space. The tools are very similar to the Develop Persona, except for the compression sliders and the "Clamp to SDR" checkbox. Am I correct that the only time to use the Tone Mapping Persona is when I have a pixel layer in the Photo Persona in a 32-bit document (whatever the colour space, from sRGB to ROMM RGB)? When the document is 16 or 8 bits deep, there is no value in using the Tone Mapping Persona, because all tones are already bounded between 0 and 1. When I say no value, I mean clamping to SDR and compression have no effect, do they? (There's a small sketch of what I mean by clamping versus compressing just below.) Also, something I don't quite grasp yet in that persona: with a 32-bit image, I can't see the areas that would be clipped, like the red overlay available in the Develop Persona for clipped highlights (red) and shadows (blue). Maybe I'm uncomfortable because I have no visual reference for where the values above 1 are... but do I need one? (Still thinking.) Any thoughts? Thanks
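Here is what I mean, as a generic sketch using a Reinhard-style curve (an assumption for illustration only, not the Tone Mapping Persona's actual algorithm):

```python
# Compressing unbounded 32-bit values into [0, 1] versus simply clamping,
# and why clamping already-bounded data changes nothing.
import numpy as np

hdr = np.array([0.05, 0.5, 1.0, 4.0, 12.0], dtype=np.float32)   # values above 1 exist

clamped    = np.clip(hdr, 0.0, 1.0)   # "clamp to SDR": everything above 1 flattens to 1
compressed = hdr / (1.0 + hdr)        # Reinhard-style compression: highlight detail kept

sdr = np.array([0.05, 0.5, 1.0], dtype=np.float32)    # already bounded, like 8/16-bit data
print(np.array_equal(np.clip(sdr, 0.0, 1.0), sdr))    # True -> clamping is a no-op here

print(clamped)     # [0.05 0.5  1.   1.   1.  ]        -> bright areas all merge at 1.0
print(compressed)  # [~0.048 ~0.333 0.5 0.8 ~0.923]    -> bright areas stay distinct
```

Note that compression also shifts the mid-tones (0.5 becomes about 0.33 here), which is why tone mapping is a creative step and not just a technical clamp.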
  25. I understand what you mean. Doing a lot of images for the web, sharpening has little real value there. I also shoot photos and try to develop them with the best quality I can. For this I try to learn and better understand the tools so I can use them as well as possible. Sadly, a lot of people are "just" attracted to shiny, throwaway images, and I've had some contacts wanting exactly that.