
kirkt
Reputation Activity
-
kirkt reacted to StevenC2020 in How to Turn Off Noise Reduction When Processing RAW Files?
Thanks, everyone! These are helpful tips. I'll try enabling Develop Assistant to take no action on the NR and try capture sharpening to see if that helps.
@Kirk, yeah, the stars are intentional lol. I used a Hoya 8-point star filter for this effect.
-
kirkt got a reaction from walt.farrell in Brush settings to multiply color on overlap??
Also take a look at the difference in brush application of color between OPACITY and FLOW. For a SINGLE CONTINUOUS STROKE, opacity will lay down partial color at the specified opacity and not any more color, even if you overlap that continuous stroke on itself. On the other hand, Flow will lay down partial color in a continuous stroke that will build up ("intensify") if you overlap the continuous stroke on itself.
If you overlap multiple strokes (not a single, continuous stroke), then opacity appears to give a result a lot like flow.
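A toy numerical model (my own sketch, not Affinity's actual compositing code) makes the difference concrete:

def stroke_flow(dabs, flow):
    # Flow: each overlapping dab within one stroke composites "over" what is
    # already there, so coverage builds up toward 1.0.
    a = 0.0
    for _ in range(dabs):
        a = a + flow * (1 - a)
    return a

def stroke_opacity(dabs, opacity):
    # Opacity: the whole continuous stroke is rendered first, then composited
    # once at the set opacity, so self-overlap never adds more color.
    return opacity if dabs > 0 else 0.0

for n in (1, 2, 4):
    print(n, round(stroke_flow(n, 0.5), 3), stroke_opacity(n, 0.5))
# Flow climbs 0.5 -> 0.75 -> 0.938; Opacity stays at 0.5 however much the
# stroke overlaps itself. Separate opacity strokes stack like flow dabs do.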
Kirk
-
kirkt reacted to walt.farrell in Affinity Photo file size creates greater then 5X Raw files
That's under your control in the More... options in the File > Export dialog. The default is ZIP compression, which for me gives somewhat better size results than choosing NONE, and much better than choosing LZW, for the 16-bit image I just tested with.
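To see the same effect outside Affinity, here is a minimal sketch using Pillow (an assumption on my part, not what walt.farrell used) to write the same image with each TIFF compression choice and compare file sizes:

import os
import numpy as np
from PIL import Image

# Random noise compresses poorly; real photographs will show larger gaps
# between the three options, as described above.
arr = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
img = Image.fromarray(arr)

for name, kwargs in [("none", {}),
                     ("zip", {"compression": "tiff_adobe_deflate"}),
                     ("lzw", {"compression": "tiff_lzw"})]:
    path = f"test_{name}.tif"
    img.save(path, **kwargs)
    print(name, os.path.getsize(path), "bytes")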
-
kirkt reacted to smadell in How do I get rid of this space on my Affinity Photo window?
Instead of that, go to View > Studio > Hide Left Studio (or wording to that effect...)
-
kirkt got a reaction from JDM in How do I get rid of this space on my Affinity Photo window?
@JDM It is the Macro and Library panels. Go to View > Studio > and uncheck Macro and Library.
Kirk
-
kirkt reacted to Old Bruce in Red, Yellow, Green
Friends don't let friends use Migration Assistant.
For that reason and myriad others.
-
kirkt got a reaction from LuiG in RAW DNG Negative Film Scanned from Plustek Optic Film
It looks like the EXIF data in the DNG indicate, in the tag called "As Shot White XY," a cue for the effective white balance of the image. It appears that AP is using this information behind the scenes and automatically removing the orange mask. Try enabling the White Balance checkbox in the Develop module and you will see automatic settings appear. Zero the Tint correction, set the Temperature correction to 6500 K, and then slide the Temperature slider to even higher CCTs - the orange mask will look correct. I used a CCT of 6500 K because that is what Adobe Camera Raw sets as the "As Shot" WB when I open the image in PS.
Interestingly, when I changed the As Shot White XY values to 1.0 and 1.0 with EXIFTool, the White Balance operation in AP did not do anything. This leads me to believe that the As Shot White XY values are being used by AP to understand white balance in the image.
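If you want to reproduce that experiment, here is a minimal sketch of the kind of ExifTool calls involved, driven from Python (the filename is hypothetical; check the tag/value syntax against the ExifTool docs for your DNG):

import subprocess

# Read the current AsShotWhiteXY values from the DNG.
subprocess.run(["exiftool", "-AsShotWhiteXY", "scan.dng"], check=True)

# Overwrite the tag with neutral chromaticity values (two numbers, x and y),
# as described above. ExifTool keeps a backup copy of the file by default.
subprocess.run(["exiftool", "-AsShotWhiteXY=1 1", "scan.dng"], check=True)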
Kirk
-
kirkt reacted to walt.farrell in Change my lenses correction
For Samyang 14mm f2_8.ARW, the EXIF data that seem relevant are:
Lens Type : E-Mount, T-Mount, Other Lens or no lens
Lens Spec : E 14mm F2.8
Lens Mount : E-mount
Lens Type 3 : Samyang AF 14mm F2.8 or Samyang AF 35mm F2.8
Lens Type 2 : Samyang AF 14mm F2.8 or Samyang AF 35mm F2.8
Lens ID : Samyang AF 35mm F2.8
Make : SONY
Camera Model Name : ILCE-7M3
Photo identifies it by the Lens ID as SAMYANG AF 35MM F2.8.
The Lensfun database has two entries that are similar, but that do not match:
<model>Samyang 14mm f/2.8 AE ED AS IF UMC</model>
One entry is for a crop factor of 1, the other for 1.523. But note that the AF is missing.
--------
For Samyang 85mm f1_4.ARW:
Lens Type : E-Mount, T-Mount, Other Lens or no lens
Lens Spec : E 85mm F1.4
Lens Mount : E-mount
Lens Type 3 : Samyang AF 35mm F1.4
Lens Type 2 : Samyang AF 35mm F1.4
Lens ID : Samyang AF 35mm F1.4
Make : SONY
Camera Model Name : ILCE-7M3
The strange thing about this one is that you have said it's an 85mm lens, and the Lens Spec field says 85mm, but the camera/lens is reporting the lens type and ID as a 35mm lens. The only entry in the lensfun database for a similar lens is this one:
<model>Samyang 35mm f/1.4 AS UMC</model>
<!-- This model is really only available for Nikon. For the other brands, the 35/1.4 may have different characteristics. -->
Note that the model name does not match (which means that Photo won't find it), and also that this lens entry is only for a Nikon camera.
Result: basically, both lenses are unknown to Lensfun. You need either to wait for the Lensfun project to calibrate them and add them to the database, or to calibrate them yourself and submit the results for inclusion. The Lensfun website has information about how to request a calibration, and about how to calibrate a lens yourself.
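As an aside (not something used in this thread), if you have Python available, the third-party lensfunpy bindings give a quick way to check whether a given lens name resolves against your local Lensfun database - a minimal sketch, assuming lensfunpy is installed:

import lensfunpy

# Look up the camera body, then search for the lens by maker and model name.
db = lensfunpy.Database()
cams = db.find_cameras("Sony", "ILCE-7M3")
lenses = db.find_lenses(cams[0], "Samyang", "Samyang AF 14mm F2.8")

# An empty result means a Lensfun-based lookup will also fail until the
# database entry's <model> matches the lens name in the EXIF data.
print([lens.model for lens in lenses])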
-
kirkt reacted to David in Яuislip in Change my lenses correction
You can persuade Photo to apply an existing, similar lens correction instead
Download the relevant file from https://github.com/lensfun/lensfun/blob/master/data/db/mil-samyang.xml
Modify the model descriptions
<model>Samyang AF 14mm F2.8</model> - was Samyang 14mm f/2.8 AE ED AS IF UMC
<model>SAMYANG AF 85mm F1.4</model> - was Samyang 85mm f/1.4 IF UMC Aspherical
The attached xml file contains these changes, so load it into Photo's Lens Profiles folder (Preferences > General, then click the lens profiles folder button). If you would rather script the renaming, see the sketch after these steps.
Restart Photo
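For anyone with many entries to rename, here is a minimal Python sketch of the same edit (the file names are taken from this post; the script is an illustration, not part of the original workflow):

import xml.etree.ElementTree as ET

# Rename lensfun <model> entries so they match the EXIF lens names that
# Photo reads from the ARW files.
renames = {
    "Samyang 14mm f/2.8 AE ED AS IF UMC": "Samyang AF 14mm F2.8",
    "Samyang 85mm f/1.4 IF UMC Aspherical": "SAMYANG AF 85mm F1.4",
}

tree = ET.parse("mil-samyang.xml")
for model in tree.getroot().iter("model"):
    # Only plain (non-localized) model names carry no lang attribute.
    if model.get("lang") is None and model.text in renames:
        model.text = renames[model.text]

tree.write("Samyang1485.xml", encoding="utf-8", xml_declaration=True)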
That's enough for the 14mm lens, but the 85 needs a bit more help as the exif data is so mangled. Exiftool to the rescue, available here: https://exiftool.org/
exiftool -lens* 85.ARW
displays:
Lens Type : E-Mount, T-Mount, Other Lens or no lens
Lens Spec : E 85mm F1.4
Lens Zoom Position : 0%
Lens Mount 2 : E-mount
Lens Type 3 : Samyang AF 35mm F1.4
Lens E-mount Version : 1.70
Lens Firmware Version : Ver.03.002
Lens Mount : E-mount
Lens Format : Full-frame
Lens Type 2 : Samyang AF 35mm F1.4
Lens Spec Features : E
Lens Info : 85mm f/1.4
Lens Model : SAMYANG AF 85mm F1.4
Lens ID : Samyang AF 35mm F1.4
To fix this do
exiftool -lenstype2#=32823 -lenstype3#=32823 85.ARW
and you'll see:
Lens Type : E-Mount, T-Mount, Other Lens or no lens
Lens Spec : E 85mm F1.4
Lens Zoom Position : 0%
Lens Mount 2 : E-mount
Lens Type 3 : Sony FE 85mm F1.4 GM or Samyang AF 85mm F1.4
Lens E-mount Version : 1.70
Lens Firmware Version : Ver.03.002
Lens Mount : E-mount
Lens Format : Full-frame
Lens Type 2 : Sony FE 85mm F1.4 GM or Samyang AF 85mm F1.4
Lens Spec Features : E
Lens Info : 85mm f/1.4
Lens Model : SAMYANG AF 85mm F1.4
Lens ID : Samyang AF 85mm F1.4
32823 is the lens number from https://exiftool.org/TagNames/Sony.html
Samyang1485.xml
-
kirkt reacted to dkallan in Live Equirectangular Projection in Develop Persona
Thanks, @kirkt. I will use the Feature Requests forum.
Appreciate your advice on caution with adjustments. The raw 360° photos I have developed in Affinity Photo look surprisingly good for what seems like a toy camera (no issues with mismatches along the "seamless" edges). But that 360° preview capability in Develop would make life easier from a decision-making perspective. Because the live projection is layer-based, there are some special considerations with adjustment layers, but that's later in the workflow anyway.
Yes, the stitching software is very basic for raw photos: take in an unstitched DNG, check the stitching and horizon, spit out a stitched DNG or stitched DNG+JPEG (no color adjustment, no offset/shifting/recentering of image content). After it spits out the stitched DNG, that software has no more role in the workflow—it's all in the hands of other photo software to grade/develop the DNG and do other adjustments. Although there's no "feedback loop" for this particular stitching software, your Smart Objects idea would be useful in so many other applications. Keep up the crusade!
-
kirkt reacted to walt.farrell in Cant export ACES 1.2 OCIO what I see in 32-bit preview
For the JPEG export, it's going to be RGB-8 rather than RGB-32, which is likely to have a noticeable effect of some kind.
-
kirkt got a reaction from InigoRotaetxe in Cant export ACES 1.2 OCIO what I see in 32-bit preview
@d3d13 - What are you trying to accomplish specifically within AP? Toward the end of your original post you mention trying to save your AP work to ACES [ACEScg, presumably], but then you mention that you are trying to export to JPEG or PNG. Are you trying to export a gamma-encoded version of the AP document to an 8-bit JPEG or PNG in ACES or in sRGB?
See this video - take a deep breath and watch the entire thing before pointing me to the original post about not wanting to use Blender Filmic. Even though the video is about using Blender Filmic LUTs, it spells out exactly how to generate correct output for a gamma-encoded format like JPEG or TIFF.
Here is the TL/DR version:
1) When you open a 32bit file into AP and have an OCIO configuration enabled in AP, you need to select "ICC Display Transform" in the 32bit Preview panel. This is critical to getting your gamma-encoded, exported JPEG or TIFF to look correct.
2) Once that is sorted out, you can use your OCIO adjustment layers to do whatever it is you are trying to do - remember that you are now manually overriding color management to a certain extent. For example, transform your linear ACES data into sRGB to do texture work.
3) Make your edits.
4) Use an OCIO adjustment to bring the file from the transformed sRGB state back to linear.
5) Export to a JPEG, or whatever you need to do.
The key to getting the "correct" low bit depth, gamma-encoded output is enabling the ICC Display Transform in the 32bit Preview panel, and keeping track of your OCIO transforms to make sure your data are transformed correctly for output. The attached screenshot depicts a 32bit EXR opened in AP with the above workflow, and the exported JPEG of the composite 32bit document. I used a Curves and a Levels adjustment AFTER the OCIO transform from linear ACES to sRGB (gamma-encoded data) and BEFORE the transform from sRGB back to linear ACES to manually tone map the bright, unbounded lighting of the ceiling lights back into the overall exposure range for the rest of the image.
As Walt noted, there will be some differences between the two files and how they are displayed, because of the differences in bit depth and the preview accuracy (like the shadow tones in the 32bit file displayed on the left side of the screenshot). But that is minimal compared to the differences when ICC Display Transform is not used throughout the process.
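Outside of AP, you can sanity-check the same round trip with OCIO's Python bindings - a minimal sketch, assuming an ACES OCIO config on disk and the color space names used by the ACES 1.x configs:

import PyOpenColorIO as OCIO

# Load the same OCIO config you point AP at (path is hypothetical).
config = OCIO.Config.CreateFromFile("config.ocio")

# Mirror steps 2 and 4 above: linear ACEScg -> sRGB output, then back.
to_srgb = config.getProcessor("ACES - ACEScg", "Output - sRGB").getDefaultCPUProcessor()
to_linear = config.getProcessor("Output - sRGB", "ACES - ACEScg").getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]            # linear mid-gray
encoded = to_srgb.applyRGB(pixel)     # "edits" would happen in this state
restored = to_linear.applyRGB(encoded)
print(encoded, restored)              # restored should land close to 0.18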
Have fun!
Kirk
-
kirkt reacted to Nick Battersby in Highlighting in RAW
Thanks so much! So simple. Many thanks for the help from both of you.
-
kirkt got a reaction from Invictus in New Affinity Photo User - Mac Book Problem ?
@Invictus - it looks like your machine is an older iMac (not a MacBook) - what OS are you currently running? What version of AP? AP takes advantage of your GPU to display and compute, and your video card may not be up to the task. In the Preferences > Display section, you may want to disable GPU (Metal) computing and see if that helps.
Here are the minimum spec/system requirements:
https://affinity.serif.com/en-gb/photo/full-feature-list/#system-requirements
Kirk
-
kirkt reacted to daveb2 in Color Blind Photographer
Kirk:
I read your post. I have a hard time understanding or following what you are trying to say.
I am reading again and looking at other info about the LAB method.
If you could tell me again in a simplified version it would be appreciated.
-
kirkt got a reaction from walt.farrell in Color Blind Photographer
One thing to consider is learning how to interpret color "by the numbers" in the L*a*b* color model (also called "Lab"). There are several advantages to using Lab to assess and talk about color, even if you ultimately work in an RGB color space. You can set up the Info panel in AP to read Lab color values and use those values to get an idea of color throughout your image - the great thing about Lab is that it inherently separates lightness in the image from color. Color is modeled on two axes ("a" and "b") such that the "a" axis represents green-magenta and the "b" axis represents blue-yellow - this is very similar in some respects to how white balance adjustment tools characterize color temperature and tint. Each axis is centered around 0 - that is, if a or b has a value of zero, then that represents no color, or gray, in that channel. For the a channel, -a means more green and +a means more red. The further from 0, the more color. Similarly, for the b channel, -b means more blue and +b means more yellow. For clarity, negative values in Lab are often noted in parentheses - for example, a = -10 would be written "a(10)" using this convention.
Reading color in Lab is as simple as reading the a and b numbers and understanding what specific color that combination of numbers represents. For example, vegetation is usually "green" but that green typically contains a lot of yellow - a typical value of green leaves might be a(10) b40, where a(10) means greenish, or negative a, and b40 means yellowish, or positive b, with more yellow than green. If you sample an area that you know should be green and the a and b values do not make sense, it may require further investigation and adjustment.
Green in vegetation is also characterized as a "memory" color and can be affected by various cultural and individual preferences of the person seeing green; however, you can probably find a bunch of reference images with various kinds of vegetation in them and sample the various greens with an Lab color sampler tool and make note of the relationship between a and b (usually negative a and positive b) and the absolute amount and ratio between a and b. The de facto reference for understanding Lab color is the written work of Dan Margulis. I know that some of his books contain specific discussions about color blindness in the context of evaluating color - for example: https://www.peachpit.com/articles/article.aspx?p=608635&seqNum=6
Instead of seeking a special tool, you can use standard tools included in all image processing applications if you familiarize yourself with "by the numbers" assessment of color, and specifically in Lab. You will find that when you examine an image and find that several different areas of the image appear to be off, and off by the same kind of error, there is a color cast that you can isolate and correct. Once this cast is removed, you can examine color in the vegetation, for example, and see if it falls within your range of values for a and b.
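If you want to experiment with these numbers outside an image editor, the sRGB-to-Lab conversion is easy to script - a minimal sketch using the standard CIE formulas (the sample color is just an illustrative leafy green):

# Convert an 8-bit sRGB color to L*a*b* (D65 white) so you can read the
# a/b values "by the numbers" as described above.

def srgb_to_lab(r8, g8, b8):
    # 1. Undo the sRGB transfer curve (decode to linear light).
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r8), lin(g8), lin(b8)

    # 2. Linear RGB -> CIE XYZ (sRGB primaries, D65 white point).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # 3. XYZ -> L*a*b*, normalized to the D65 reference white.
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# A leafy green: expect negative a (greenish) and larger positive b (yellowish).
L, a, b = srgb_to_lab(90, 140, 40)
print(f"L={L:.0f} a={a:.0f} b={b:.0f}")  # roughly L=53 a(34) b46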
Good luck!
kirk
-
kirkt got a reaction from Jowday in Dan Margulis books and Affinity Photo
I think we both agree that AP's curve dialog could use some additional tools, although the fact that the user can change the input range is pretty great, especially when editing 32 bit per channel, HDR data. Numerical input would be a great addition.
I was also trying to kludge together the max/min reversed curve adjustment layer with a Procedural Texture live filter to invert the output of the curve, but even when the document is in Lab mode, the live filter version of the Procedural Texture only permits equations in RGB, even for presets made in Lab. Boo!
Kirk
-
kirkt got a reaction from ThatMikeGuy in Node-based UI for AP. Please?
While I do not profess to know how AP is structured under the hood (bonnet!), it seems like a lot of the tools are implemented in a real-time, live way that suggests they would work in a node-based workflow - for example, like the node editors in Blender or DaVinci Resolve. If this is the case, it would be an incredibly terrific feature if the user could choose between the current "traditional" interface and workflow for AP and a node-based interface. I would love to be able to create an image-processing pipeline as a network of nodes, with preview renders along the way to see each stage of the workflow and variations of the node chain. It would be terrific if node groups could be saved as "presets" that became single nodes themselves, which could be expanded and the contents exposed for tweaking and customization.
Please consider this approach, if it is possible. Rendering low-res preview proxies during node assembly would hopefully be a lot less taxing on the interface than the current full-res rendering of Live Filters, which tends to get laggy when there are even a modest number of layers in the stack. You could save full, non-destructive workflows as pre-built node chains, you could have a single node chain branch into multiple variants, and you could have a batch node that feeds an entire directory of images into the node chain for processing. Maybe even macro nodes, etc. It would be so much more flexible and would serve to further differentiate AP from PS.
The output of the node-based workflow could be fed into the "traditional" photo persona (a Photo persona node) for local, destructive edits, painting on masks, etc.
One can dream.... LOL
Thanks for pushing the boundaries with your applications.
Kirk
-
kirkt reacted to walt.farrell in Brush Tool Opacity is not 100%
Welcome to the Serif Affinity forums.
Check other settings, like Wet Edge. If that doesn't do it, please let us see a screenshot that includes the Context Toolbar.
-
kirkt got a reaction from sfriedberg in Substance Alchemist Equalizer
It is advisable to remove the uneven illumination (the large areas of luminance variation) across the source image before you tile the source. One way is to create a copy of the source and perform a Gaussian blur to remove the high-frequency detail and leave the low-frequency blobs of illumination variation - make sure to enable "preserve alpha" so the blur is retained to the edges. Invert the blurred result and set the blend mode of the inverted, blurred result layer to something like Vivid Light (try the various contrast blend modes to see which works best for your source art). This will neutralize the luminance variation.
To remove the tiling borders, use the Affine transform (Filters > Distort > Affine) and dial in an Offset in X and Y of 50%. The edges of the source will be moved to the center of the image, in a cross-like arrangement where you can inspect and deal with the discontinuity of the source tile at the edges. You can Inpaint or clone the seam away at the center cross. Then reverse the Affine transform by repeating it. Make sure that when you perform the Affine transform, you have the "Wrap" option selected. Now the edges of the source tile are continuous across the boundaries of the source.
Now your source tile has the uneven illumination neutralized and the tiled edge discontinuities removed and is ready for repeating seamlessly.
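A minimal numerical sketch of both steps on a grayscale float image (my own illustration - AP's Vivid Light blend is approximated here by simple subtraction of the low-frequency component, and the filename/blur radius are assumptions):

import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(512, 512).astype(np.float32)  # stand-in for the source

# Step 1: neutralize uneven illumination. A heavy wrap-mode Gaussian blur
# isolates the low-frequency lighting blobs; subtracting them (re-centered
# on the mean) flattens the luminance without touching the fine detail.
low = gaussian_filter(img, sigma=64, mode="wrap")
flat = np.clip(img - (low - low.mean()), 0.0, 1.0)

# Step 2: the 50% x/y Affine offset with Wrap is exactly np.roll - the old
# tile edges meet in a central cross, where you would inpaint the seams.
h, w = flat.shape
offset = np.roll(flat, (h // 2, w // 2), axis=(0, 1))
# ... inpaint/clone the central cross, then roll back:
restored = np.roll(offset, (-(h // 2), -(w // 2)), axis=(0, 1))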
Kirk
-
kirkt got a reaction from Dmi3ryd in Work in Affinity Photo with ACES
@Dmi3ryd - What specifically are you trying to accomplish in ACES within AP? Is your source EXR in ACES? Are you simply trying to use the Blender Filmic OCIO with an ACES source file, or are you trying to use ACES as a working space, etc.?
Kirk
-
kirkt reacted to Kaffeepause in Uninstall old versions of Affinity Photo on iMac
I share markw's assumption that the second installation is on an external drive or another partition, because in one of the screenshots showing the sidebar there is the drive "Macintosh SSD - Datos".
But I would like to mention one more thought, which probably does not apply to your problem, but is still worth checking.
In macOS there is the possibility to install apps for all users of the Mac or only for a certain user.
So there can be two Applications folders in the system where apps are installed:
in the root directory of the system, in "Macintosh HD/Applications"
in the user's home folder, in "Macintosh HD/Users/YourUserName/Applications"
Therefore my suggestion: check your home folder to see if there is an Applications folder and, if so, whether there is an installation of Affinity Photo inside.
Based on the screenshots, I have come to the assumption that there is an installation of Affinity Photo in the Applications folder in the root directory and another installation of Affinity Photo in the Applications folder inside the Home folder. With CMD + I you can find out the version and then delete the old one.
As written, this is just a thought and does not have to apply to your problem, but I didn't want to leave it unmentioned; I think markw's hint is more accurate.
-
kirkt got a reaction from MEB in HDRI Neutralization method
@cgiout - How much control do you have over the scene you are photographing, and how long do you have to acquire all of the source images that you ultimately use to make your HDR composite? The reason I ask is that, when you use a full 32 bit per channel workflow, you maintain the physical integrity of the lighting in your scene, making it easy to manipulate that lighting after capture. However, to give yourself the most flexibility, you want to sample the scene one light at a time and then add all of the lighting together in post. That is, let's say your scene has 3 lights in it. Ideally, you would want to shoot the entire scene with each light illuminated individually, with all of the other lights off. In post, you can combine your HDR exposure sequence for each light into its own HDR file (32 bit per channel) and then bring each HDR file into a working document and add the light sources together in your 32 bit per channel working environment. In this scenario, you can add an exposure adjustment layer and a color filter adjustment layer clipped to each light layer and use these controls to change the intensity and color of the contribution of that light to the scene. This gives you the power to recolor each light and adjust its contribution to the scene as you see fit. Not only can you neutralize the color temperature of each light, if that is what you want to accomplish, but you can add any color filter, completely relighting the scene with some look or mood.
Essentially, you stack the light layers and set the blend mode of each one that is above the background layer to "Add" - because you are working in a 32 bit per channel document, the light will add linearly, just as it does in "real life."
Attached are a few images of the process based on an example I wrote up several years ago, but it is no different now (the example is in Photoshop, but it is the same in AP).
The first three images show the three light sources in the scene, each one illuminating the scene without the other two. An HDR sequence was shot for each light. A color checker card is included near the light source that is being imaged. The color checker can also be cloned out or the image sequence can be shot again without the color checker.
Next, the layer stack that is constructed to mix the lighting - note that each image of a light has a color filter ("Gel") and an Exposure control to modulate the properties of the light. It is like having a graphic equalizer for the scene! Also note the Master Exposure control at the top of the stack, giving you control over the overall intensity of the scene (you could add a master color filter layer too).
The next image demonstrates how a local white balance for one of the lamps is accomplished to bring its CCT (correlated color temperature) into line with the other lamps in the scene. In this scene, two of the lamps were LEDs with a daylight CCT, and one lamp was a tungsten filament light bulb with a much warmer CCT. I balanced the warmer lamp to bring its color into line with the other LED lamps by adjusting the warmer lamp's color filter layer.
Finally, the rendered results for a "literal" tone mapping of the scene, and then a moody, funky relighting of the scene using the exposure and color filter layers for each image. Note that the scene is rendered "correctly" when you make large and extreme changes to the lighting because you spent the time to capture each light's contribution to the scene (for example, the mixing of colors and reflections within the scene). You can also add ambient lighting to the scene by acquiring a separate HDR sequence taken in that ambient lighting condition (daylight from outside, for example) and mix that into the scene as well. You just need to keep your tripod set up locked down within the scene and wait for the ambient lighting conditions you want. For example, set up your tripod and camera and shoot the ambient scene during the time of day you want (or several different times of day) and then shoot the individual lamps in the scene at night, where there is no ambient light in the scene.
This process takes a lot of time to sort out and to acquire the image sequences, but it gives you an incredible amount of data to work with when compiling your HDR image. It sounds like you are also acquiring spherical panoramic HDRs for image-based lighting - the process is no different, but it will take time and diligent management of the workflow. You can mix your scene in 32 bits per channel and then export a 32 bit per channel flattened EXR to use for your CGI rendering.
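A minimal sketch of the mixing step in numbers (file names are hypothetical; it assumes each light's merged HDR result is available as a linear 32-bit float array). In linear light, the "Add" blend is literally per-pixel addition, so each light gets its own exposure (scalar gain) and gel (per-channel RGB gain):

import numpy as np

lights = [np.load(f"light_{i}.npy") for i in range(3)]  # HxWx3 linear floats

exposures = [1.0, 0.5, 2.0]            # per-light Exposure adjustments
gels = [
    np.array([1.0, 1.0, 1.0]),         # neutral
    np.array([1.0, 0.8, 0.6]),         # warm the second light
    np.array([0.7, 0.8, 1.0]),         # cool the third light
]

# "Add" blending of the light layers, with each layer's gel and exposure.
scene = sum(e * g * img for e, g, img in zip(exposures, gels, lights))

master_exposure = 1.0                  # Master Exposure at the top of the stack
scene *= master_exposure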
Have fun!
Kirk
-
kirkt got a reaction from Patrick Connor in HDRI Neutralization method