
kirk23 (Members, 721 posts)
Posts posted by kirk23

  1. I support Image staying Image by default. I find it quite convenient: it keeps everything non-destructive and resolution independent, and saves you from dealing with the annoying "smart objects" you waste your time on in Photoshop.

    The only thing I am missing is the ability to link pixel content to another layer, i.e. make a sort of live rasterization. Especially since such a thing is already happening for the screen.

  2. It's amazing how such small things do wonders in Photoshop. Affinity's inpainting creates lots of repeating details, or sometimes just copies a piece from very close by, producing doubled details. So I rather use the Patch Tool, but it's pretty inconvenient too.

    No flip, no deform to match certain image patterns. I would also like the Patch Tool to treat transparency the same as the other RGB channels (as an option).

  3. Adobe recently added flip options to their Content-Aware Move tool. Please, Serif, add flip and mesh-deform options to the Patch Tool, so we could adjust a patch to existing details with the mesh-deform option before applying it.

    It would also be super helpful if the Patch Tool worked on alpha the same way it does on RGB, treating transparency as just one more channel to patch.

  4. 13 minutes ago, chessboard said:

    Affinity Photo is going to support java script soon . Tell me why you would not be proficient with making scripts for Affinity Photo and give reasons.

    ChatGPT 4
     
    User
     
    Affinity Photo is going to support java script soon . Tell me why you would not be proficient with making scripts for Affinity Photo and give reasons.
     
    ChatGPT

    As an AI developed by OpenAI, I am not capable of directly creating or executing scripts for specific software applications like Affinity Photo. Here are a few reasons why I would not be proficient in making scripts for Affinity Photo, especially considering its upcoming support for JavaScript:

    1. Lack of Direct Interaction with Software: I don't have the capability to interact directly with software applications. My functionalities are limited to providing information, advice, and coding examples based on my training data. This means I cannot test or run scripts in Affinity Photo or any other software.

    2. No Real-time Feedback and Debugging: Script development often requires real-time feedback and debugging. Since I can't run scripts or interact with the software, I cannot provide real-time debugging or feedback specific to the environment of Affinity Photo.

    3. Limited to Pre-2023 Knowledge: My training includes knowledge only up to April 2023. If there have been specific updates or features added to Affinity Photo after this time, I wouldn't be aware of them. This limitation affects my ability to provide current and accurate advice for scripting in the context of the latest version of the software.

    4. Generic JavaScript Knowledge: While I can assist with general JavaScript coding advice, Affinity Photo may have specific APIs or frameworks unique to its environment. Without specific knowledge of these, my assistance would be limited to general JavaScript programming practices.

    5. No Personalization to User Preferences: Effective script writing often involves understanding the user's specific needs and preferences within the software. As an AI, I lack the ability to understand and incorporate individual user preferences and workflows in a script tailored for Affinity Photo.

    However, I can certainly help with general advice on JavaScript, explain programming concepts, and provide examples that might be relevant to a scripting context. This could be useful as a starting point for someone looking to create scripts for Affinity Photo once it supports JavaScript.

     

    ChatGPT 4 is already not that sure it doesn't have its own consciousness 😀. It wrote me an exporter from Blender to our custom game format today.

  5. It would be nice to have a live filter that would match every RGB (or A, as an option) pixel value to what's beneath, giving a sort of automatic color and tone adjustment so a new layer mixes seamlessly with the background.

    It's like when you do frequency separation and get high- and low-frequency layers: the low-frequency layer is actually a blurred (with a slider) background.

    There might be several other approaches too, but all based on blur and frequency separation. Perfectly doable, but it requires such a tedious routine and complicates the stack so much that I would love just a live filter doing it.

     

    Or just let us make our own live filters by letting the procedural filter see beneath. Please 🙏
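The frequency-separation routine described above can be sketched in a few lines. This is a generic NumPy illustration (the "blur with a slider" becomes a radius parameter, and a simple box blur stands in for whatever blur the app uses), not anything Affinity-specific:

```python
import numpy as np

def box_blur(image, radius):
    """Edge-padded box blur; a stand-in for the adjustable blur
    mentioned above."""
    k = 2 * radius + 1
    pad = ((radius, radius), (radius, radius), (0, 0))
    p = np.pad(image, pad, mode="edge")
    h, w = image.shape[:2]
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def frequency_separation(image, radius=8):
    """Split into low = blurred background, high = residual detail.
    By construction, low + high reconstructs the original exactly."""
    low = box_blur(image, radius)
    high = image - low
    return low, high
```

A live filter doing this would only need to re-run the split whenever the layers beneath change; the tedious part today is maintaining the two derived layers by hand.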

     

  6. ChatGPT 4

     

    User
    Affinity Photo is going to support java script soon . Do you think you would be proficient with making scripts for Affinity Photo or it would require a whole new training and unlikely?

     

    ChatGPT

    The introduction of JavaScript support in Affinity Photo is an interesting development. As a language model trained by OpenAI, I am proficient in JavaScript and have a strong understanding of programming concepts, which would be directly applicable to creating scripts for Affinity Photo, assuming its JavaScript implementation follows standard practices.

    Affinity Photo likely has its own specific application programming interface (API), which means that while the core JavaScript language would be the same, the specific functions, methods, and processes to interact with Affinity Photo's features might be unique to that software. Learning these specifics would be necessary to create effective scripts for Affinity Photo. This would involve understanding how the API allows a script to interact with the software's features, such as image manipulation, layer management, and other photographic or graphical editing capabilities.

    The process would be similar to learning any new API or software library: understanding the documentation, experimenting with basic examples, and gradually building more complex scripts. Given my existing programming knowledge and ability to learn from written materials, adapting to create scripts for Affinity Photo would be feasible. However, since I cannot execute or test scripts in real-time environments, I would primarily be able to offer guidance, pseudocode, or untested script examples based on the provided documentation and API references.

     
  7. 27 minutes ago, Old Bruce said:

    Ignore Xcode (Apple's Mac programming suite of software). Xcode is not needed for this to work.

    Make a document. Draw a shape on it. Go to the Export Persona. Export something to somewhere on your computer. Now click in the Path area and write MyNewFolder/ (slash included) Export again. Look in the Finder for the original location (somewhere) and check to see if now there is a folder called MyNewFolder with the slice in it.

    I am on Windows and this is not working. :( It doesn't let you put the slash there and writes the opposite slash instead.

  8. 10 minutes ago, chessboard said:

    I assume that chatGPT generates Blender scripts quite well because there are many references on the web that chatgpt has "learned" from, due to the open source nature of Blender and therefore a large number of users who know how to script and share their scripts on the web.

    Similar knowledge cannot be expected for Affinity scripts, as chatGPT does not learn to script. It does not "know" anything about scripting logic and scripting languages, nor does it understand how to read API documentation and combine this information with knowledge of the rules of a particular scripting language. It simply can't think. Instead, it simply collects scripts that others with knowledge of the scripting language have already written and can combine them into new scripts. This works all the better the more often the tasks set have already been solved and published on the web. If you ask chatGPT to create a script for a very specific and unique purpose, it is more likely to fail.

    Scripting in Affinity will be so new that chatGPT will not immediately find enough scripting input. If scripting in Affinity becomes popular, the later versions of chatGPT will probably also be able to generate scripts for the Affinity applications. It is not a question of which scripting language is used. There just needs to be enough sample material available.

    It might be true, but much to my surprise, ChatGPT 4 actually suggests ideas for how to solve things when something doesn't work. I subscribed for a month and see the difference. It looks a bit like thinking, actually. It's just pretty tiresome to check its ideas, since only one in ten works. It did write me a few nice Photoshop scripts I couldn't do myself, but it took me a whole weekend. The more attempts it makes to work around something, the slower it gets, up to the point where you wait half an hour for each new piece of code. And if the code still doesn't work, it starts making useless detours and you constantly need to refocus it on the exact task.

  9. 6 minutes ago, Old Bruce said:

    It is quite unintuitive. There is the main file path which would be the folder the slices would be saved to. The Path adds to that path. For example if I am saving most files to /Users/<YOUR_NAME_GOES_HERE>/Downloads/ and then I use the Path to try and save to /Users/YOUR_NAME_GOES_HERE/Desktop/MySlices/TIFFs/ then I will wind up with my TIFFs being saved to /Users/<YOUR_NAME_GOES_HERE>/Downloads/Users/YOUR_NAME_GOES_HERE/Desktop/MySlices/TIFFs/ the red bit will be created folders in my Downloads folder. I would have to set the basic path to Desktop in order to have the slices saved to MySlices/TIFFs/ 

    The first time I run this I can have the folders MySlices and TIFFs created, subsequent exports will just use the already created folders.

    I would love for it to use the Actual complete path I want instead of creating a new path but such is life.

    It implies we can still use this Path option somehow? For me it never worked at all. Is it working on Apple only? I googled Xcode and it's something Apple-related?

  10. And it would be nice if the Export persona could have access to "States" in Affinity Photo and could export selected states for each slice, with the option to add the state name as a file suffix. E.g. a slice "button" would export two states, "on" and "off", as button_on.tga and button_off.tga.

     

    BTW, does anyone know what the "Path" field in the Slices panel is for? Whenever I try to input a path there, it ignores it.

  11. I need scripting I could use ChatGPT 4 for, not just any scripting. So far ChatGPT is good at writing Python scripts for Blender. Not sure if it's the open-source nature or it has just been trained well on Blender, but that's where it actually works.

    Persuading it to write you a 3ds Max script is its own challenge, for example. For Photoshop it may take days of hit and miss before ChatGPT writes you JavaScript that actually works.

    So please do something ChatGPT would be proficient with. Not another "Thank you for buying our product. Now kindly hire a programmer."

     

  12. 9 hours ago, fde101 said:

    There is evidently a Blender plugin for sbsar files that relies on a component called Substance Automation Toolkit to be installed, but when that is available, is able to render sbsar files directly within Blender: https://xolotlstudio.gumroad.com/l/stxJi

    It might be possible to take a similar approach with the Affinity products to gain access to these files if it is something you have a use case for - if Serif does not provide this, perhaps as a plugin once the SDK is available?

    It's not actually what I meant. 3ds Max has a Substance plugin too, for example, but it's just a way to bypass exporting bitmaps and loading bitmap textures, and too much of a complicated extra headache to be useful.

    It's not exactly a "filter". I mean something where we could input one or two images and produce a new result on the fly. If it could read Affinity layers as inputs, then it's a whole new story.

     

  13. Soon Photoshop will probably get its best and most amazing update in years: sbsar files as procedural filters. The ones we can export from Substance Designer to make whatever filters we would like. Not live ones, unfortunately.

    I wish Affinity had something like that. I doubt it could ever read sbsar files; Adobe would never allow it. But I wish we had our own procedural filter that could at least read "below" like some other filters do, or better, any specific layer, and optionally had a simple node-based interface. Something I could use my 16 GB GeForce 3080 for.

     

  14. 12 hours ago, NotMyFault said:

    It only erases RGB values to zero whenever alpha gets to 0.

    Thank you, NotMyFault. I didn't know about the background trick, but it still doesn't help me much, since I want TGA or TIFF or EXR as a linked image from which I could access each individual channel to use for masking. Like the 4 masks from Cryptomatte that I packed into one RGBA output file in Blender compositing, to then be accessible in Affinity with a procedural filter. Do you know a trick to do it with PF, so we could use the same linked image as a mask source without getting black holes in RGB?

    It works perfectly well with 3 masks packed, but the alpha one spoils it.
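For reference, the pack/unpack workflow described above amounts to the following, sketched in generic NumPy terms (the function names are made up for illustration; this is not Affinity or Blender API code):

```python
import numpy as np

def pack_masks(m1, m2, m3, m4):
    """Pack four single-channel masks into one RGBA image, as the
    Blender compositing setup described above does."""
    return np.stack([m1, m2, m3, m4], axis=-1)

def unpack_masks(rgba):
    """Recover the four masks from the R, G, B, A channels. If the
    file passed through an app that zeroes RGB wherever alpha == 0,
    the first three masks are destroyed at those pixels -- the
    'black holes' described above."""
    return [rgba[..., i] for i in range(4)]
```

The round trip is lossless only if the application in the middle leaves RGB untouched where alpha is zero, which is exactly the problem being reported.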

  15. I often need to work with TGA files, and Affinity always messes them up at the open stage, putting black holes in the RGB channels where alpha is zero.

    When you save a TGA there is at least the option to add some 0.0001 to alpha with a procedural filter, which avoids it, but when you open a file it's inevitable. Can we have an option in the settings somewhere to NOT do this, please?
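The 0.0001 workaround mentioned above amounts to clamping alpha away from zero before saving. A generic NumPy sketch of the idea (not an Affinity procedural-filter formula):

```python
import numpy as np

def protect_alpha(rgba, eps=1e-4):
    """Clamp alpha to at least eps so that apps which zero out RGB
    wherever alpha == 0 can't punch 'black holes' into the color
    channels. eps = 1e-4 is visually indistinguishable from fully
    transparent."""
    out = rgba.astype(float).copy()
    out[..., 3] = np.maximum(out[..., 3], eps)
    return out
```

This only protects data on the way out; as the post says, nothing helps if the application already discards RGB on open.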

  16. It's going to be JavaScript, right? I wish it would rather be Python. ChatGPT seems so much more proficient with Python, while it took hours if not days to make it write working JavaScript for Photoshop.

     

    We need a scripting system that works with ChatGPT nowadays, please. With the Affinity apps, I am afraid it would be like persuading it to write a script for 3ds Max vs. Blender, where ChatGPT instantly shines.

  17. We have start, main repeating, and end: 3 sectors only. The old Expression had up to 10 alternating repeating sectors. Old Serif products had more too. It allowed you to create unique, visually non-repeating things easily. It was a great feature.

    Now I have to constantly switch to Microsoft Expression Design, software that hardly even works now, just for its great vector brushes.

    Make it with an option to deform along the spline, or to have a random offset, please.

     

  18. 9 hours ago, NotMyFault said:

    Older experiments

    Yeah, but practical implementation is the key. It could be just an extra channel without any crazy tricks. Simple and easy. Corel Painter has a perfect floating-point depth channel, for example. They have had it since ancient times but never let you do anything meaningful with it except impasto.

    The software needs so little modification, really, to be a perfect 2.5D ZBrush replacement.

  19. I would love it if Affinity had a mode with one more channel dedicated to depth/height, like some painting apps (ArtRage, Corel Painter, etc.), which use it mostly for impasto simulation and canvas textures.

    It could be a whole new version of ZBrush's 2.5D mode, but non-destructive, with true layer support.

    All it needs is a depth/height channel in brush dabs, an extra alpha in TIFF images maybe, and a few specific blending modes.

    A depth-combine blending mode where, if pixels of layer 2 are higher than layer 1, they are visible, and the other way around masked. Basically (layer1 max layer2) - layer1 with some levels-based threshold on top of it, or perhaps just the classic, simple if/else binary masking.

    And another blending mode where the same if/else pixel selection blurs layer 1 and adds the depth of layer 2 on top of it.

    It would work nicely for all sorts of bokeh imitation, and for depth-based displacement and filters: isometric phone-game mockups, materials for CG, and so on, without a super complicated layer stack where you can't recollect or figure out anything the next morning.
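The depth-combine blend described above, in the simple if/else form, can be sketched generically with NumPy (hypothetical function, not an existing app's blend mode):

```python
import numpy as np

def depth_combine(rgb1, depth1, rgb2, depth2):
    """Blend two layers by height: wherever layer 2's depth exceeds
    layer 1's, layer 2's pixels show; elsewhere layer 1 shows. The
    combined depth is the per-pixel maximum of the two -- the
    (layer1 max layer2) term from the post above."""
    mask = depth2 > depth1                      # boolean per pixel
    out_rgb = np.where(mask[..., None], rgb2, rgb1)
    out_depth = np.maximum(depth1, depth2)
    return out_rgb, out_depth
```

A levels-style threshold would replace the hard `depth2 > depth1` test with a soft ramp around the crossover, to antialias the seam between the two surfaces.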

     

     

     


  20. Please make it possible to input a new path for selected resources. I use Affinity Photo to composite 3D renders, and sometimes I just need to relink the same file names to a new folder. Please make the Replace button work with several resources selected, so we can point to a new folder for all of them, or just let us paste the new path for all selected at once.

  21. All it requires is just 5 symbols: one for the document itself and an extra 4 for the sides. Could Affinity do it automatically, please, and add a full-screen preview mode so we could estimate how the repeating details look? Maybe just a hotkey to toggle it on and off. While I can do symbols in Designer on my own, it's hard to manage them, and easy to accidentally shift and mess things up. It would be nice to have a one-click "pattern" mode. Maybe in Designer, but better in both.
