
kirkt

Members
Posts: 440

Reputation Activity

  1. Like
    kirkt got a reaction from R C-R in How to reduce colors in Photos to create C64 looking Photos?!   
    There's also an iPhone app which allows you to choose from several old consoles and computer systems from back in the day:
    https://apps.apple.com/us/app/consolecam/id1496896085
    kirk
    Attached is the output of the ConsoleCam app, for the C64 hi resolution machine, with the "less detail" setting enabled.

  2. Like
    kirkt reacted to Medical Officer Bones in How to reduce colors in Photos to create C64 looking Photos?!   
    Here is a TRUE Commodore 64 version: one that would be displayed exactly like this, including the infamous 3 colours + 1 background colour per 4 by 8 tile!
    The zoomed version:

    ...and the 320x200 version at double pixel res:

To generate an image like this, you need specialized software that keeps track of the physical resolution limitations of these older 8-bit machines. In this case I used Affinity Photo to resize the image to 640x400 (double 320x200), opened it in Krita, used the Palettize filter with a C64 Lospec PAL file, and applied Pixelate at 4:2 (this is the reason I needed to feed it a 640x400 image: the Pixelate filter cannot go below 2). This generates a version which can be downsampled in Affinity with nearest neighbour to 160x200 pixels. All double pixels are maintained this way in the next step!
To ensure the hardware limitations of the original C64 (no more than 3+1 colours per 4 by 8 pixel tile) are maintained, I created a C64 Multi Color image in Pro Motion NG. Pro Motion NG is "aware" of the colour screen mode constraints of the older 8-bit machines, and it automatically fixes any tile that exceeds the aforementioned hardware limit.
    I pasted the 160x200 image in PM, and saved the final result.
    This version could be displayed on a real Commodore 64!
Anyway, in this little project I used three different apps to create the final result. I'd suggest being pragmatic about software: use the right tool for the right job, and have a range of tools in your tool bag. I use Affinity for certain things, PhotoLine for others, Krita, and even an older version of PS CS6 and GIMP for various jobs, when they call for it.
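The 3+1-colour tile constraint mentioned above can be sketched as a simple validity check. This is a hypothetical pure-Python illustration (not Pro Motion NG's actual code): at 160x200, each 4x8-pixel character cell may use at most 3 colours plus one background colour shared by the whole screen.

```python
# Check the C64 multicolour constraint: each 4x8 tile of a 160x200
# indexed image may use at most 3 colours besides the shared background.

def tiles_over_limit(indexed, background, tile_w=4, tile_h=8):
    """Return (x, y) origins of tiles using more than 3 non-background colours.

    `indexed` is a list of rows, each a list of palette indices.
    """
    height, width = len(indexed), len(indexed[0])
    bad = []
    for ty in range(0, height, tile_h):
        for tx in range(0, width, tile_w):
            colours = {indexed[y][x]
                       for y in range(ty, ty + tile_h)
                       for x in range(tx, tx + tile_w)}
            colours.discard(background)  # background doesn't count toward the 3
            if len(colours) > 3:
                bad.append((tx, ty))
    return bad
```

A converter like Pro Motion NG would then "fix" each offending tile by merging its least-used colours into the nearest surviving ones.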
     
  3. Thanks
    kirkt got a reaction from R C-R in How to reduce colors in Photos to create C64 looking Photos?!   
    @Rongkongcoma - Here is a HALD Identity Image run through the Python code to which @R C-R linked.  In AP, you can use the HALD identity image and its transformed mate as a pair of images in a LUT adjustment layer using "Infer LUT."  Otherwise, you can run your image through the Python code and it will transform the image itself. 
    To control the distribution of color, I would add a Posterize adjustment layer below the LUT layer.
    kirk
    Attached images:
    1) "lookup1024.png" the identity image
    2) "paletteC64_first.png" - the transformed identity image mapped to the C64 color palette.
     
    FYI - the code looks at the R, G and B values of each pixel and then figures out the distances from that color to the 16 colors in the C64 palette.  It then sorts the list of distances and saves the C64 palette color associated with the closest distance to the pixel's color.
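The distance search described in that FYI note can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the actual script linked in the thread; the hex values below are the commonly used "Pepto" approximation of the C64 palette (the real hardware's colours are analogue, so exact RGB values are a matter of convention).

```python
# Map each pixel to the nearest of the 16 C64 palette colours, using
# squared Euclidean distance in RGB. Palette values: the widely used
# "Pepto" approximation (a convention, not hardware-exact).

C64_PALETTE = [
    (0x00, 0x00, 0x00), (0xFF, 0xFF, 0xFF), (0x68, 0x37, 0x2B), (0x70, 0xA4, 0xB2),
    (0x6F, 0x3D, 0x86), (0x58, 0x8D, 0x43), (0x35, 0x28, 0x79), (0xB8, 0xC7, 0x6F),
    (0x6F, 0x4F, 0x25), (0x43, 0x39, 0x00), (0x9A, 0x67, 0x59), (0x44, 0x44, 0x44),
    (0x6C, 0x6C, 0x6C), (0x9A, 0xD2, 0x84), (0x6C, 0x5E, 0xB5), (0x95, 0x95, 0x95),
]

def nearest_c64(rgb):
    """Return the palette colour with the smallest RGB distance to `rgb`."""
    r, g, b = rgb
    return min(C64_PALETTE,
               key=lambda p: (p[0] - r) ** 2 + (p[1] - g) ** 2 + (p[2] - b) ** 2)

def remap(pixels):
    """Remap an iterable of (r, g, b) tuples to the C64 palette."""
    return [nearest_c64(px) for px in pixels]
```

Run the HALD identity image through `remap` and you get the "transformed mate" that AP's Infer LUT can pair with the identity.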


  4. Like
    kirkt got a reaction from Alfred in How to reduce colors in Photos to create C64 looking Photos?!   
    There's also an iPhone app which allows you to choose from several old consoles and computer systems from back in the day:
    https://apps.apple.com/us/app/consolecam/id1496896085
    kirk
    Attached is the output of the ConsoleCam app, for the C64 hi resolution machine, with the "less detail" setting enabled.

  5. Like
    kirkt got a reaction from Medical Officer Bones in How to reduce colors in Photos to create C64 looking Photos?!   
    Here is the result of:
    1) Running the test image from above through the Inferred LUT;
    2) Running the test image through the Python code directly.
    Kirk


  6. Like
    kirkt got a reaction from Medical Officer Bones in How to reduce colors in Photos to create C64 looking Photos?!   
    @Rongkongcoma - Here is a HALD Identity Image run through the Python code to which @R C-R linked.  In AP, you can use the HALD identity image and its transformed mate as a pair of images in a LUT adjustment layer using "Infer LUT."  Otherwise, you can run your image through the Python code and it will transform the image itself. 
    To control the distribution of color, I would add a Posterize adjustment layer below the LUT layer.
    kirk
    Attached images:
    1) "lookup1024.png" the identity image
    2) "paletteC64_first.png" - the transformed identity image mapped to the C64 color palette.
     
    FYI - the code looks at the R, G and B values of each pixel and then figures out the distances from that color to the 16 colors in the C64 palette.  It then sorts the list of distances and saves the C64 palette color associated with the closest distance to the pixel's color.


  7. Like
    kirkt got a reaction from Alfred in How to reduce colors in Photos to create C64 looking Photos?!   
    Here is the result of:
    1) Running the test image from above through the Inferred LUT;
    2) Running the test image through the Python code directly.
    Kirk


  8. Like
    kirkt got a reaction from Alfred in How to reduce colors in Photos to create C64 looking Photos?!   
    @Rongkongcoma - Here is a HALD Identity Image run through the Python code to which @R C-R linked.  In AP, you can use the HALD identity image and its transformed mate as a pair of images in a LUT adjustment layer using "Infer LUT."  Otherwise, you can run your image through the Python code and it will transform the image itself. 
    To control the distribution of color, I would add a Posterize adjustment layer below the LUT layer.
    kirk
    Attached images:
    1) "lookup1024.png" the identity image
    2) "paletteC64_first.png" - the transformed identity image mapped to the C64 color palette.
     
    FYI - the code looks at the R, G and B values of each pixel and then figures out the distances from that color to the 16 colors in the C64 palette.  It then sorts the list of distances and saves the C64 palette color associated with the closest distance to the pixel's color.


  9. Like
    kirkt got a reaction from Medical Officer Bones in How to reduce colors in Photos to create C64 looking Photos?!   
Here is a link to the palette of 16 colors that the C64 had available for display:
https://www.c64-wiki.com/wiki/Color
    You can use this to construct a custom palette in an application that supports such things and see if that works for you.  I encountered the long-standing lack of a way to specify a custom palette for GIF export as well and got the same error as in the thread to which @Medical Officer Bones linked.
  10. Like
    kirkt reacted to GarryP in How to make Gulshat in A. Designer?   
I don’t have time to write down all of the steps fully, but this GIF might give you the clues you need.
(The end result in the GIF doesn’t look very pleasing, but that’s just because I did it quickly; you’ll need to take more care to get a better result, but all of the steps are there.)

  11. Like
    kirkt got a reaction from World View in What Font is This?   
You can use the web application "WhatTheFont?" (or a similar font-ID web page): drop an image of the type (like the one you posted) into the web app and it will return suggested typefaces that resemble the image.  Feed it a JPEG or PNG.
    https://lmgtfy.com/?q=what+the+font
    Kirk
  12. Like
    kirkt reacted to GarryP in What Font is This?   
    If you want to find out which font is used in a web page component, and you’re not averse to a bit of ‘technical digging’, you can use a web page “Inspector” to see how the web page has been created. (Most modern web browsers have some sort of “Inspector” but it might be called something different.)
    My attached video shows how to quickly use such a tool in Firefox to check the font name used in a couple of places.
    Note: The browser goes down the list of font names until it finds one it can use so the one you see on-screen might not be the first one in the list given in the Inspector.
    2020-07-18_08-35-34.mp4
  13. Like
    kirkt reacted to Alfred in What Font is This?   
    For those wondering, the OP’s screenshot comes from this page: https://www.alexperry.com.au/product/alex-3/ 

    The name ‘ALEX’ is in Hind Bold. The price and description are in Varela Round (at two different font sizes).
    The above font families are both available for free from Google Fonts or Font Squirrel.
    https://fonts.google.com/specimen/Hind?preview.text=ALEX&preview.text_type=custom
    https://www.fontsquirrel.com/fonts/hind
    https://fonts.google.com/specimen/Varela+Round?preview.text=$2,500.00+SATIN+CREPE&preview.text_type=custom
    https://www.fontsquirrel.com/fonts/varela-round
  14. Like
    kirkt reacted to R C-R in Infer LUT confusion   
FWIW, http://www.quelsolaar.com/technology/clut.html includes a lot of detailed info about HALD CLUTs, the identity versions, & so on that I found very helpful in understanding what they are & how they can be used.
Also, mostly for my own amusement, I created nine macros based on the resources from the patdavid.net HaldCLUT download, using AP's "Infer LUT..." with the Hald_CLUT_Identity_12.tif as the source file & one of the CreativePack-1 PNGs as the adjusted one. Each of them includes 5 steps like this one for FallColors.png:

    Each of the macros executes very quickly even on my old iMac. Because they start with a Deselect Layers step, were recorded with the Assistant Manager set to add adjustments as new layers, & end with a "Move Inside" step (automatically applied during the rename step), they should work well with any AP file as long as the topmost layer is a pixel one like the Background layer you get by default when opening a photo file. 
    Together, these 10 files from the download alone require about 10 MB of file space. The nine macros exported to a single .afmacros Affinity library file uses only about 400 KB of file space. Because everything is 'baked' into the .afmacros file, this would be a great way to share macros like these with other AP users, who would not have to download the large HaldCLUT file to use them.
    I would like to upload my Creative 1 LUTs.afmacros file here or in the Resources forum, but I won't do that without first checking with Pat David to make sure it is within the scope of the license for his work. Accordingly, I am going to contact him through his 'about' link & refer to this post to explain what I want to do.
  15. Like
    kirkt reacted to JimmyJack in Masking issue...   
    @Barrowman you can still use the Selection Brush if you want/need to 🙂.
Instead of committing the Refine process to Mask...
Commit to Selection and, on a new pixel layer, fill with any solid color. Drag it into the mask position.
(Or, if you fill with white, you can hit Rasterize to Mask and leave it above, or nest it as in the first option, as needed.)

    move with mask.mp4
  16. Like
    kirkt got a reaction from Al Edwards in Even Lighting on a Texture   
Here is the result of applying the above suggestions to the concrete tile the OP posted.  The Affine transform moves the seams at the edges to the middle of the tile, which makes them easy to blend/clone out.  Then, when you are satisfied with the blending and removal of the seams at the edges of the original tile, you perform the same (reverse) affine transform to get back to the original tile, but now the edges are contiguous.  Now you can tile this image seamlessly.  Of course, it will repeat, and the repetition might be pretty obvious, but that depends on the application in which the texture is used.
    Kirk

  17. Like
    kirkt got a reaction from Al Edwards in Even Lighting on a Texture   
    You can try duplicating the original on a new layer above the original.  Then apply a very large gaussian blur* to the image to obliterate the small details and leave the very large variations in tone (i.e., uneven lighting).  Then invert the result and apply it to the original in screen, overlay or soft light blend mode.  Adjust the black and white points of the composite to restore contrast and tonal range.
    Then you can apply an Affine transformation (Filters > Distort > Affine) and offset the image 50% in X and Y.  Use this as the basis for your tile, cloning and inpainting the seams that intersect in the middle of the tile for a seamless tile.
    Kirk
    * Although the Gaussian blur slider in AP only goes to 100 pixels, you can enter larger values into the numerical radius field to get blur radii larger than 100 pixels.  Try something like 300 for your image.
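The two steps above (flattening the uneven lighting, then the 50% affine offset) can be sketched in pure Python on a single-channel grayscale image stored as a list of rows. This is an illustrative approximation, not AP's implementation: a crude box blur stands in for the large-radius Gaussian, and the offset is the same wrap-around shift that AP's Affine filter performs at 50%/50%.

```python
# Sketch of "flatten the lighting, then offset 50%" on a grayscale
# (0-255) image. Box blur approximates the large Gaussian blur.

def box_blur(img, radius):
    """Separable box blur with edge clamping (stand-in for Gaussian)."""
    h, w = len(img), len(img[0])
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    # horizontal pass
    tmp = [[sum(img[y][clamp(x + d, 0, w - 1)] for d in range(-radius, radius + 1))
            // (2 * radius + 1) for x in range(w)] for y in range(h)]
    # vertical pass
    return [[sum(tmp[clamp(y + d, 0, h - 1)][x] for d in range(-radius, radius + 1))
             // (2 * radius + 1) for x in range(w)] for y in range(h)]

def flatten_lighting(img, radius):
    """Blur, invert, and combine with the original in Screen blend mode."""
    low = box_blur(img, radius)
    # screen(original, inverted_blur) = 255 - (255 - o) * blur / 255
    return [[255 - (255 - o) * b // 255
             for o, b in zip(orow, brow)]
            for orow, brow in zip(img, low)]

def offset_half(img):
    """Affine-style 50% wrap-around offset in X and Y (seams move to the centre)."""
    h, w = len(img), len(img[0])
    return [[img[(y + h // 2) % h][(x + w // 2) % w] for x in range(w)]
            for y in range(h)]
```

After `offset_half`, the former image edges meet in the middle of the frame, where they can be cloned/inpainted away; offsetting again by 50% restores the original framing with continuous edges.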
  18. Like
    kirkt got a reaction from firstdefence in Even Lighting on a Texture   
Here is the result of applying the above suggestions to the concrete tile the OP posted.  The Affine transform moves the seams at the edges to the middle of the tile, which makes them easy to blend/clone out.  Then, when you are satisfied with the blending and removal of the seams at the edges of the original tile, you perform the same (reverse) affine transform to get back to the original tile, but now the edges are contiguous.  Now you can tile this image seamlessly.  Of course, it will repeat, and the repetition might be pretty obvious, but that depends on the application in which the texture is used.
    Kirk

  19. Like
    kirkt got a reaction from R C-R in Infer LUT confusion   
    @Claire_Marie - As @Lee D notes, the Infer LUT operation compares the before and after color of an image and tries to reverse engineer the color in the after image based on the before image.  Some images that are used in the Infer LUT operation may not have a very wide variety of tone or color represented in them, so when inferring a LUT from them, the inferred LUT only captures part of the toning (the toning restricted to the colors present in the image).  One way that LUTs are stored in a graphical format is to use a before and after version of a special color image called an Identity (ungraded, neutral) HALD CLUT (color lookup table) image like this one:

    As you can see, this special image is essentially a grid of colors with a wide range of tonal and hue variation.  Copy this HALD image and run it through your filter and then use the before and after versions of it as your Infer LUT base images.  The Identity HALD image contains a lot of colors and will capture your filter's color transform fully.  As with all LUTs, the HALD images need to be in the color space of the image you are editing for the color transform of the LUT to work as expected.
Here is a link to a page of technical LUTs which includes the original HALD image I posted here:
    https://3dlutcreator.com/3d-lut-creator---materials-and-luts.html
And here is a webpage that contains links to several HALD CLUTs that capture color transforms for several film simulations.  You can use these in AP to apply a film look to your image with a LUT adjustment layer and the Infer LUT feature.
    https://patdavid.net/2015/03/film-emulation-in-rawtherapee.html
    Kirk
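For the curious, the layout of a HALD identity image follows a simple convention (described in detail on the quelsolaar page linked earlier in this thread): a level-N identity is an N³ x N³ image sampling each RGB channel at N² levels, with red varying fastest, then green, then blue. The generator below is an illustrative sketch of that convention, not any particular tool's code.

```python
# Generate the pixel data of a HALD CLUT identity image of a given level.
# Level N -> image is (N^3) x (N^3) pixels, N^2 samples per channel.
# e.g. level 8 gives the common 512x512 identity with 64 levels/channel.

def hald_identity(level):
    """Return a flat row-major list of (r, g, b) pixels for the identity."""
    cube = level * level          # samples per channel
    side = level ** 3             # image is side x side
    def scale(v):
        return v * 255 // (cube - 1)   # spread samples across 0..255
    return [(scale(i % cube),                  # red varies fastest
             scale((i // cube) % cube),        # then green
             scale(i // (cube * cube)))        # then blue
            for i in range(side * side)]
```

Save this pixel list as a PNG, run the copy through your filter, and the before/after pair is exactly what "Infer LUT" needs.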
  20. Like
    kirkt got a reaction from Aftemplate in Node-based UI for AP. Please?   
    @kirk23 - Also - I have noticed the increased number of CG/3D artists that are suggesting more and more features, or tweaking of existing elements of AP.  This is great, in my opinion.  I have dabbled in 3D rendering but I am primarily a photo/image processing person.  I think the input from the 32bit, multi-channel, OCIO folks who deal with these things regularly in their workflow is solid gold.  This is where image processing needs to go.  Thank you!  From someone who shot HDR mirror ball images and used Paul Debevec's HDRShop decades ago to light his CG scenes!
    Kirk
  21. Like
    kirkt got a reaction from kirk23 in Node-based UI for AP. Please?   
    Exactly.  This is the model I would love to see Affinity follow.  I have no problem with nodes that render the output to a low-res proxy throughout the node tree.  This speeds up the editing process and gets you where you need to be so that you can then put a full res output node at the end of the tree.  Blender is terrific for so many reasons and is an example of how an application can evolve with feedback from an incredibly diverse user base and a bunch of really talented designers and programmers who have support.
I am also a fan of PhotoLine, for many reasons, but the interface can be clunky and a little obtuse, which adds to the learning curve. 
    Kirk (t, not 23 - LOL - how many times does a Kirk run into another Kirk?!  I've met three in my lifetime.  Now, virtually, four.)
  22. Like
    kirkt reacted to Medical Officer Bones in Node-based UI for AP. Please?   
    In my experience no layer-based image editor is perfect. PhotoLine does support pretty much a full non-destructive workflow with 32bpc images, and, like Photoshop, fully implements a "smart objects" ("placeholder layers") workflow, up to the point of allowing external PS plugins to be applied as live filters, and using placeholder layers as masks for other placeholder layers, and live instancing of layers. Krita also supports instanced layers, and a non-destructive filter layer.
    The thing that really bothers me about Photoshop is its reliance on clipped layers to create stacks of combined masks. It just works better to allow multiple (grouped) layer masks.
    But the issue remains that layer-based editors do slow down at some point, and things become rather complex fast: a nodal approach is often easier and more effective to work with when things heat up in terms of complex compositing. I would love to see a nodal layer of some sort to be implemented.
    If memory serves me, I recall a mac-based image editor that (years ago) implemented a kind-of stackable puzzle approach on top of the traditional layer stack. I forget its name; it was quite intriguing, but development stopped at some point.
  23. Like
    kirkt reacted to kirk23 in Node-based UI for AP. Please?   
Thanks, Medical Officer Bones, for reminding me about PhotoLine. It's promising, for sure. They have probably made quite a lot of progress since I last tried it. And I agree that Photoshop's "group clipping" is a clunky way to introduce non-destructiveness: I often couldn't figure out a thing in my own mess of groups and smart objects there. PhotoLine was definitely a touch of simplicity and elegance in that regard.
I recall that a lack of transform "chain" links annoyed me. I tried to perform a typical depth-based object/layer combine trick, and the lack of "chains" made it fall apart every time I needed to re-compose something.
Same problem in Affinity Photo: no links or multi-selection, so if you accidentally touch something, it breaks. I re-subscribed to Photoshop because of that.
Still, every time I work in Substance Designer I miss the ability to instantly select some object on screen, move/scale it around, paintbrush a mask or vector shape, and manually scatter vector-based particles along a spline. Substance Designer's SVG and bitmap paint nodes are absolutely horrendous and unusable.
Whatever mess of group clipping Photoshop may have, a Gordian knot of those Substance Designer nodes with their connection lines is on a totally different level.
I wish Affinity Designer could simply use Substance Designer .sbs files, or Filter Forge ones, as live filters. But the issue of inputs remains: we usually need inputs from other layers too, to do something meaningful with nodes.
I also use Blender's compositing mode a lot since they introduced "cryptomatte" last year. Perhaps some kind of bridge with the Affinity software could be cool too. In general, I love Blender's approach to node editors. Imo they are the best I've seen, much easier to work with than both Substance Designer's and Filter Forge's.
I suspect one day Blender may become quite a solution for image editing too. 2D or 3D, it's all the same, basically.
  24. Like
    kirkt reacted to AlejandroJ in Colour EQ   
Thanks again, Kirkt. I found the tutorials the other day and have spent the last two days watching them all. I am only missing the ones that talk about HDR, panoramas, 3D renders, and the ones about PSD files, which I won't be watching. The rest I have already seen. Very interesting, and I have learnt a lot of things about the program that I would never have discovered by myself or by reading the manual.
  25. Like
    kirkt reacted to Andy Somerfield in Affinity Photo Customer Beta (1.8.4.184)   
    Status: Beta
    Purpose: Features, Improvements, Fixes
    Requirements: Purchased Affinity Photo
    Mac App Store: Not submitted
    Download ZIP: Download
    Auto-update: Available
     
    Hello,
    We are pleased to announce the immediate availability of the second build of Affinity Photo 1.8.4 for macOS.
    If this is your first time using a customer beta of an Affinity app, it’s worth noting that the beta will install as a separate app - alongside your store version. They will not interfere with each other at all and you can continue to use the store version for critical work without worry.
    This beta is an incremental update to the 1.8.3 version recently released to all customers (though it installs parallel to the release, as described above). We recommend that you use this beta in preference to the store version if you are affected by any of the issues listed below.
    Affinity Photo Team  
     
    Changes Since Last Build
     
    - Improved selective viewing of LAB document channels in the Channels panel.
    - Improvements for Force Touch Trackpad users.
    - Improved layers panel representation of empty groups.
    - Improvements to Canon EOS-1D Mk. III metadata import.
    - Improved reliability of Canon CR3 RAW loading.
    - Improved support for .fff RAW files.
    - Right click in the Move Tool will offer a tree of layers to select from.
    - Honour the monochrome flag in X3F files.
    - “Save document with history” will now show up as a recorded macro step.
    - Resetting the Voronoi filter now resets to the correct values.
    - Mask thumbnails now look the same in dark / light UI mode.
    - General layers panel tidying & drag improvements.
    - Fixed a crash when refining selection.
    - Fixed an issue where the history page could get out of step in Develop.
    - Fixed erroneous blend mode imports for PSD files.
    - Fixed potential hang when using “Move inside”.
    - Fixed issue with the Colour panel when switching away from a mask.
    - Fixed a crash in red-eye removal.
    - Fixed loading of X3F files with non-square pixels.
    - Localisation improvements.
    - Help improvements.
     
    Changes Since 1.8.3
     
    - Integrated x3f-tools project, significantly improving support for X3F RAW.
    - Added ability to show folders as icons in the layers panel.
    - Added option to always show folders as “small” in the layers panel.
    - Improved dragging behaviour in the layers panel.
    - PDF import performance improvements.
    - Document save performance improvements.
    - Detect Edges improved on 32bit documents.
    - Fixed the lens profile popup reporting an incorrect state.
    - Fixed a crash caused by RW2 files.
    - Text performance improvements.
    - Selection brush width is now remembered after being changed using modifier keys.
    - Placing RAW files now produces Pixel layers, as opposed to Image layers.
    - Fixed reading of XP metadata under certain conditions.
    - Fix for Nikon D90 RAW loading issues.
    - Fixed High Pass filter when using LAB colour.
    - Assorted small bug fixes.
    - Help improvements.
    - Localisation improvements.
     
    To be notified about all future macOS beta updates, please follow this notification thread 
    To be notified when this Photo update comes out of beta and is fully released to all Affinity Photo on macOS customers, please follow this thread