Everything posted by kirkt

  1. Magenta highlights in clipped areas occur when the green channel clips (red+blue = magenta) and the raw converter assumes an incorrect sensor saturation value. This is a problem on the raw converter side. Try changing the raw converter used, in the Develop Assistant. kirk
  2. A 16-bit-per-channel RGB file is that size. It has nothing to do with Affinity Photo. For example, a 6000x4000 pixel RGB image at 16 bits per channel is:
     6000 x 4000 pixels x 3 channels x 16 bits per channel / 8 bits per byte = 144 MB
     Use 8 bit, or save as a 16 bit TIFF with compression. Your raw files converted to RGB images at 16 bpc have always been this big, until you save them as JPEG. Then they get automatically converted to 8 bpc (JPEG is not a 16 bpc file type) and then compressed.
     Raw files are not images; they are a matrix of digital numbers in a pattern that gets processed into a full three-channel RGB image through a process called demosaicing. Each “pixel” in the raw file has one “channel” and is often encoded in 14 bits:
     6000 x 4000 x 14 bits per pixel / 8 bits per byte = 42 MB
     The raw file may actually be smaller than that if you use compressed raw, for example. kirk
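     For reference, a quick back-of-the-envelope check of those numbers in Python (using the 6000x4000 example above):

         # Uncompressed size arithmetic for the example above.
         width, height = 6000, 4000

         rgb_16bpc = width * height * 3 * 16 // 8   # three channels, 16 bits each
         raw_14bit = width * height * 1 * 14 // 8   # one value per photosite, 14-bit encoding

         print(f"16-bit RGB: {rgb_16bpc / 1e6:.0f} MB")  # 144 MB
         print(f"14-bit raw: {raw_14bit / 1e6:.0f} MB")  # 42 MB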
  3. You can load the spare channel (alpha) to its own pixel layer and export the AP file to a PSD. Then, in PS, copy and paste that layer into a new alpha channel in PS. This way the alpha in AP does not get premultiplied into the composite in AP and stays separate, on its own layer, for export to PS. Kirk
  4. Makes sense Walt. Thank you for the clarification. I’m curious, then, as to what the OP is missing when editing text. kirk
  5. I think there was confusion earlier in the thread about 0 being at the center of the image. kirk
  6. That is strange, considering the Help file for AP indicates that "⌘B" is the default keyboard shortcut for bold text. In my copy of AP, on a Mac, the "⌘B" keyboard shortcut listed in the preferences is blank as well. Perhaps the default shortcuts have been changed and the Help document is out of date. kirk
  7. On1's image editor was designed relatively recently and is essentially a very slow and clunky raw converter/image editor that is a clone of Lightroom, with a bunch of "filters" to make things look nice. The fact that many of Lightroom's features, keywords and edits can be imported into the On1 environment and rendered is not surprising, seeing as how they have literally copied Lightroom's approach - the migration utility is part of their marketing of On1 as a replacement for Lightroom for people who do not like Adobe's subscription model. Even if you can reverse engineer Adobe's editing pipeline, tone curves, profiles, etc., other image editors and raw converters are probably quite different from Lightroom (in structure and editing controls/conversion workflow), so the "migrated conversion" may not really translate very well, or at all. Imagine migrating thousands of raw image files into a new editing environment thinking that the results of translating your previous edits will be identical to their previous conversions in the old software, and then realizing you have to reconstruct thousands upon thousands of edits.... The raw file contains the data from the scene you shot, but the rendered RGB files contain your edits. Otherwise, you are depending upon the instructions from one raw converter being understood by another raw converter, which will rarely be successful. Kirk
  8. You can also do this with fewer steps using the Apply Image command and Equations. For this example, assume that you have a new document open, with the three source images layered in it. The first layer will be the image that will give us the RED channel for the Harris composite - call this layer RED. Same for the other two layers: call them GREEN (second layer) and BLUE (third layer). On the top of the layer stack, create a new pixel layer called HARRIS - make sure you fill the Alpha channel (select the HARRIS layer, then in the Channels palette, right-click on HARRIS Alpha and select "Fill" to make the Alpha channel filled with white). Now the fun begins.
     1) If it is not the active layer, select the HARRIS layer to make it active - this is the layer upon which the Apply Image filter operates, so it needs to be the active layer before invoking the Apply Image command.
     2) Select Filters > Apply Image...
     3a) For this step, we are going to place the red channel from the RED image layer into the red channel of the HARRIS layer. To do this, drag the RED layer from the Layers palette onto the upper area in the Apply Image dialog to make the RED layer the source for the Apply Image operation.
     3b) Next, check the "Equations" box and make sure the Equation Color Space is set to RGB. In the equations boxes below, you are going to specify the channels for the HARRIS layer (the "Destination" layer) based on the channels in the RED layer (the "Source" layer). In this step, we want to place the red channel from RED into the red channel of HARRIS, and leave the green and blue channels of HARRIS alone. To do this, we enter the following equations:
     DR = SR
     DG = DG
     DB = DB
     That is, the Destination Red (DR) channel (the red channel of HARRIS) equals the Source Red (SR) channel (the red channel of RED). Note that the DG = DG and DB = DB equations basically mean that the Destination Green (and Blue) equals whatever it already is (in this case, nothing).
     3c) Repeat 3b with the GREEN and BLUE layers as sources for their respective channels in the HARRIS layer. So, for the green channel of the HARRIS layer, make sure HARRIS is the active layer, select Filters > Apply Image..., drag the GREEN layer onto the Apply Image dialog, check the Equations box and enter:
     DR = DR (leave red alone)
     DG = SG (place the green from the Source [GREEN] into the Destination [HARRIS])
     DB = DB (leave blue alone)
     For the blue channel in HARRIS, drag the BLUE layer onto the Apply Image dialog - the equations will be:
     DR = DR
     DG = DG
     DB = SB
     Taa daaah! This is a more elegant method, but if you do not understand how to use Apply Image, it can be very confusing. A rough code sketch of the same channel arithmetic follows below. Kirk
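     For illustration, here is a rough numpy/Pillow sketch of those three Apply Image passes (the file names are placeholders for the three source layers exported as flat images of the same size):

         import numpy as np
         from PIL import Image

         red_src   = np.asarray(Image.open("red_layer.png").convert("RGB"))
         green_src = np.asarray(Image.open("green_layer.png").convert("RGB"))
         blue_src  = np.asarray(Image.open("blue_layer.png").convert("RGB"))

         harris = np.zeros_like(red_src)      # the empty HARRIS destination layer
         harris[..., 0] = red_src[..., 0]     # pass 1: DR = SR, DG = DG, DB = DB
         harris[..., 1] = green_src[..., 1]   # pass 2: DR = DR, DG = SG, DB = DB
         harris[..., 2] = blue_src[..., 2]    # pass 3: DR = DR, DG = DG, DB = SB

         Image.fromarray(harris).save("harris.png")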
  9. @stitch - To create the Harris Shutter effect, you need to take three images and put the red channel from the first, the green channel from the second, and the blue channel from the third into a single document. Objects in all of the images that are stationary with respect to the frame will appear as normal, full color; objects that move relative to the frame will create a rainbow-like offset effect. To do this, you can import the three images onto three layers in your working document. Then you can select the red channel from the first layer, the green channel from the second layer and the blue channel from the third layer, for example, and place them into their respective channels on a new pixel layer. This is done easily using spare channels. Create a spare channel from the RED of the first image and rename it "RED." Create a spare channel from the GREEN of the second image layer and rename it "GREEN" and create a spare channel from the BLUE channel of the third image layer and rename it "BLUE." Then, make a new pixel layer and make it the active layer in the stack - let's call this layer "Harris." Right-click on the RED spare channel and select "Load to Harris Red" - repeat for the GREEN and BLUE spare channels, selecting "Load to Harris Green" or "Load to Harris Blue." In the attached example, I took the red channel from the first image, the green channel from the second image and the blue channel from the third image and combined them as outlined above to produce the Harris Shutter result. Kirk
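     If you want to try the same combination outside of AP, a minimal Pillow sketch (assuming the three frames are the same size; the file names are placeholders):

         from PIL import Image

         r1 = Image.open("frame1.jpg").convert("RGB")
         g2 = Image.open("frame2.jpg").convert("RGB")
         b3 = Image.open("frame3.jpg").convert("RGB")

         # R from the first frame, G from the second, B from the third
         harris = Image.merge("RGB", (r1.getchannel("R"),
                                      g2.getchannel("G"),
                                      b3.getchannel("B")))
         harris.save("harris_shutter.jpg")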
  10. Also - when you are defringing something, you are really targeting a specific photographic issue that occurs at high-contrast edges of objects with optics that do not focus the spectrum of light evenly on the sensor. Find an example of fringing (chromatic aberration) and test the filter on it - you will find that it does a pretty good job. In the image you posted, there is no fringing, which may be causing some of your frustration. Kirk
  11. @DarkClown - If you are zoomed in to 100% to view the image, then mousing on the image area to manipulate the slider gives you finer levels of control. And, when applying Unsharp Masking at such small radii, you should be making the adjustment at 100% zoom anyway, so this makes sense. That is, the incremental changes in the slider when you use your mouse to manipulate it are related to the zoom level - i.e., the pixel location on the image. This is a clever implementation of this interface feature. Perhaps in the future, a modifier key (like SHIFT) could be employed to divide the current increment by 10, to temporarily override the current zoom-level increment. It sounds like the reference point for "0" is established on the image area when you first click the mouse (and then hold it to start sliding the mouse) - so pick a feature in your image that is easily identifiable and click on it to start your mousing to change the slider value. This way, you know where zero is on your image if you need to slide your mouse back toward it. Another feature that would be nice for this interface mode would be a key (something akin to TABbing through web fields) that would permit the user to step through dialog fields while mousing the field values. This way you could mouse the value for Radius, then hit the TAB key, for example, to change to the Factor field, mouse the value of that, tap the TAB key again, and change the Threshold field by mousing, and so on. Currently, the TAB key hides the interface, but you get the idea. Kirk
  12. It looks like the image has had some cloning performed and some FFT filtration applied. Do you have the original without any of the edits applied? Kirk
  13. I assume you are working with a pixel-based image (as opposed to a vector image - you mentioned "instances" but I am assuming that these are not separate objects). The sparks are the brightest parts of the image and are well-defined, so you can make a mask where the sparks are white and everything else is black. Then make a rectangle with the rectangle tool and fill it with the color you want (or a gradient, or whatever) and apply the mask. Set the blend mode of the rectangle layer to something like Linear Light. See example attached - I used a gradient. Because you use a rectangle shape (instead of filling a pixel layer) you can edit the fill once it is initially laid down, or change it to another fill type (color to a gradient, etc.). Is this what you are going for? Kirk
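     As a rough illustration of the same idea in code (a minimal sketch, assuming a pixel image; the luminance threshold and gradient colors are made up and would need tuning):

         import numpy as np
         from PIL import Image

         img = np.asarray(Image.open("sparks.png").convert("RGB")).astype(np.float32) / 255.0

         # Mask: bright, well-defined sparks go to white, everything else to black.
         lum = img @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
         mask = np.clip((lum - 0.7) / 0.3, 0.0, 1.0)[..., None]

         # A simple horizontal gradient standing in for the filled rectangle.
         h, w, _ = img.shape
         grad = np.linspace([1.0, 0.3, 0.1], [0.1, 0.3, 1.0], w)
         fill = np.broadcast_to(grad, (h, w, 3))

         # Linear Light blend of the fill over the image, applied through the mask.
         blended = np.clip(img + 2.0 * fill - 1.0, 0.0, 1.0)
         out = img * (1.0 - mask) + blended * mask

         Image.fromarray((out * 255).astype(np.uint8)).save("sparks_tinted.png")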
  14. Attached is the Python code, updated to run in Python3. The code expects the file you want to transform to be named "lookup.png" kirk genc64clut.py
  15. There's also an iPhone app which allows you to choose from several old consoles and computer systems from back in the day: https://apps.apple.com/us/app/consolecam/id1496896085 kirk Attached is the output of the ConsoleCam app, for the C64 hi resolution machine, with the "less detail" setting enabled.
  16. Here is the result of: 1) Running the test image from above through the Inferred LUT; 2) Running the test image through the Python code directly. Kirk
  17. @Rongkongcoma - Here is a HALD Identity Image run through the Python code to which @R C-R linked. In AP, you can use the HALD identity image and its transformed mate as a pair of images in a LUT adjustment layer using "Infer LUT." Otherwise, you can run your image through the Python code and it will transform the image itself. To control the distribution of color, I would add a Posterize adjustment layer below the LUT layer. kirk Attached images: 1) "lookup1024.png" the identity image 2) "paletteC64_first.png" - the transformed identity image mapped to the C64 color palette. FYI - the code looks at the R, G and B values of each pixel and then figures out the distances from that color to the 16 colors in the C64 palette. It then sorts the list of distances and saves the C64 palette color associated with the closest distance to the pixel's color.
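      A minimal sketch of that nearest-color lookup (the palette entries below are commonly cited approximate C64 RGB values, not necessarily the exact ones used by the attached script):

          from PIL import Image

          # Approximate RGB values for the 16 C64 colors; exact values vary by source.
          C64_PALETTE = [
              (0x00, 0x00, 0x00), (0xFF, 0xFF, 0xFF), (0x88, 0x00, 0x00), (0xAA, 0xFF, 0xEE),
              (0xCC, 0x44, 0xCC), (0x00, 0xCC, 0x55), (0x00, 0x00, 0xAA), (0xEE, 0xEE, 0x77),
              (0xDD, 0x88, 0x55), (0x66, 0x44, 0x00), (0xFF, 0x77, 0x77), (0x33, 0x33, 0x33),
              (0x77, 0x77, 0x77), (0xAA, 0xFF, 0x66), (0x00, 0x88, 0xFF), (0xBB, 0xBB, 0xBB),
          ]

          def nearest_c64(rgb):
              # Squared Euclidean distance in RGB; the closest palette entry wins.
              return min(C64_PALETTE, key=lambda c: sum((a - b) ** 2 for a, b in zip(rgb, c)))

          img = Image.open("lookup.png").convert("RGB")
          out = Image.new("RGB", img.size)
          out.putdata([nearest_c64(p) for p in img.getdata()])
          out.save("lookup_c64.png")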
  18. Here is a link to the palette of 16 colors that the C64 had available for display: https://www.c64-wiki.com/wiki/Color You can use this to construct a custom palette in an application that supports such things and see if that works for you. I encountered the long-standing lack of a way to specify a custom palette for GIF export as well and got the same error as in the thread to which @Medical Officer Bones linked.
  19. As a kludge, and obviously depending highly upon the edits you do in post to the rendering, you can bake a LUT that will replace all of the edits and can be stacked on top of the rendering that may change, subject to client feedback, etc. Because you cannot export a LUT from AP that is based on pixel operations (like using ColorEfex or something similar), you will need to export two images after your first editing session: 1) the original render and 2) the edited render. Then you can open the original render, add a LUT adjustment layer and then use the Infer LUT option to infer the edits between the original and the post-production editing. Again, this will only work for global edits, like color grading and adjustments to global tone, as opposed to edits that change local portions of the image. Hey, it ain't pretty but it works. When the client needs changes made to the original render, you can make those changes, render the new image and replace the stale (background) image with the new render, and all of the color grading and tonal changes made during the first post session will be applied via the LUT adjustment layer. If you make a bunch of edits in, for example, ColorEfex, you can also save them as recipes in ColorEfex and, while it will require you to revisit ColorEfex to apply the filters to the new render, they will be fully editable in ColorEfex. In this example, all a SmartObject is doing is saving the filter recipe in the SO instead of as a preset in ColorEfex. Kirk
  20. @vendryes - AP uses the Lensfun correction database. See: https://lensfun.github.io/calibration/ to help include your specific lenses. Kirk
  21. You can use the web application "WhatTheFont?" (or similar font ID web pages) and drop an image of the type (like the one you posted) into the web app and it will return suggested typefaces that resemble the image. Feed it a JPEG or PNG. https://lmgtfy.com/?q=what+the+font Kirk
  22. If you need your noise to be more noticeable in the final exported JPEG and you plan to resize (i.e., change the number of pixels) the JPEG export for final delivery, then reduce the JPEG in size first to the final output dimensions, and then add the noise. This way when you view the preview at 100%, you will be able to see the size and quality of the noise that will appear in the JPEG output at final dimensions. Kirk
  23. With some operations, like sharpening and noise, you need to examine the preview/result at 100% zoom. Otherwise, you will get a scaled, likely inaccurate, preview. Also, with images that contain fine detail, like noise, your JPEG export settings should be very high quality (like >90% or so - test the amount of compression you can get away with if file size is an issue). The attached images depict what the noise looks like in AP before you export and after you export - in the first attachment, the image is scaled to fit the display (about 25% zoom). In the second attachment, the image is being viewed at 100% zoom. The two images in each attachment are the preview in AP (left) and the resulting exported JPEG (right). As you can see, the versions that are not viewed at 100% look very different (as the OP suggests, the noise "disappears") but the images viewed at 100% look very similar. Kirk
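      A quick Pillow sketch of that resize-first, high-quality-export order of operations (the output dimensions, noise amount and JPEG quality are placeholders to tune):

          import numpy as np
          from PIL import Image

          img = Image.open("source.tif").convert("RGB")
          small = img.resize((1920, 1280), Image.LANCZOS)   # reduce to final delivery size first

          arr = np.asarray(small).astype(np.float32)
          noise = np.random.normal(0.0, 6.0, arr.shape)     # then add the grain
          noisy = np.clip(arr + noise, 0, 255).astype(np.uint8)

          Image.fromarray(noisy).save("final.jpg", quality=92)  # high JPEG quality keeps fine detail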
  24. Here's what it would look like without the processing (just tiling the original image). Kirk