kirk23
-
Posts: 731
Joined
Last visited
Posts posted by kirk23
-
-
7 hours ago, bbrother said:
I think it's more a matter of not understanding how they work. At least sometimes.
Often, if the user's ideas about the logic of a function differ from how it actually works, the conclusion that we are dealing with a bug is drawn too quickly. I have caught myself doing this several times, so I'll wait for information from the moderators and devs. For me, the documentation is quite clear on this matter ↓
My understanding of this is that the update button is only intended to update the layer states of those layers it originally captured when creating that particular state.
P.S. We'll see what comes of it, because as @walt.farrell mentioned, the matter was discussed between moderators and devs and is awaiting a final decision, even though the devs' first reaction was that it is by design.
That's why I think it's supposed to work the same as layer comps in Photoshop: when you add a new layer, it shouldn't be necessary to update the already existing "states" at all. The fact that the newly added layer stays visible in all the older "states" that didn't originally include it is the bug.
-
8 hours ago, Old Bruce said:
10 out of 10 topics with 16 bit cmyk in the topic title from the last 8 years have been started by you. Could you not just choose one of your already existing topics and restate your desire for this?
Yeah, I have sort of lost track of how many years I've been asking for this simple feature, whose omission cuts off half the possibilities the software would otherwise have. Like the ability to do true impasto painting as in Corel Painter and export depth as a separate channel, composite depth using a procedural filter, and so on and on.
I am using Corel Painter and it's outdated as hell, with only cosmetic changes each year. Even the old Blender addon I used to extract impasto depth from Corel files doesn't work anymore.
-
5 hours ago, bbrother said:
The scenario will differ based on what the current visibility status of 'Main' and 'COLOR_Tweak1' is.
Scenario A
If your starting point is that they are visible along with all other layers, you need to select all other layers and hide them.
Then "Main" and "COLOR_Tweak1" will remain visible.
Use this regex → ^(?!Main|COLOR).* — it will match all layers except those whose names start with 'Main' or 'COLOR', so you can hide them.
Scenario B
If your starting point is that they are hidden but the other layers are visible, you need to select and show them, while at the same time hiding all the other layers. Follow the steps below:
- Use this regex → ^(Main|COLOR).* — it will match all layers whose names start with 'Main' or 'COLOR'.
- Tick the checkbox labeled [And show/hide others]
- Click the show option icon in the query panel.
What will happen after these steps: all layers whose names start with 'Main' or 'COLOR' will be shown, and at the same time all other layers that don't match the name criterion will be hidden.
@kirk23 I hope I managed to make it a bit clearer for you how it all works with these regular expressions, queries and the "States" panel.
The key to success is good knowledge of regular expressions.
My knowledge is at an intermediate level because I am a full-stack web developer and JS and regular expressions are part of my work.
But trust me, I've seen some crazy things you can do with regular expressions. It is worth taking a closer look at this topic, because it offers many possibilities.
Thank you for the response. I still feel I need ChatGPT's help to make it work. As for "states", they obviously have a bug, not updating. Although I am not sure why I need to update them at all when I add new layers. You don't in Photoshop, for example.
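For what it's worth, the two queries can be sanity-checked outside Affinity with plain JavaScript. The layer names below are just examples, and how the States query panel actually consumes a regex is an assumption; only the matching itself is verified here:

```javascript
// Example layer names; how Affinity's query panel applies the regex
// is an assumption -- only the pattern matching is checked here.
const layers = ["Main", "COLOR_Tweak1", "Sketch", "Background"];

// Scenario A: every layer whose name does NOT start with "Main" or "COLOR"
const toHide = layers.filter(name => /^(?!Main|COLOR)/.test(name));

// Scenario B: every layer whose name DOES start with "Main" or "COLOR".
// Note the parentheses: without them, "^Main|COLOR" would mean
// "starts with Main, OR contains COLOR anywhere".
const toShow = layers.filter(name => /^(Main|COLOR)/.test(name));

console.log(toHide); // ["Sketch", "Background"]
console.log(toShow); // ["Main", "COLOR_Tweak1"]
```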
-
So we could do a small displacement/shift of layer pixels in a gradually fading manner, like smearing effects. Or have emboss applied an incremented number of times until it's a nice normal map on top of a grayscale image. There could be a gazillion helpful applications.
Adobe did it for Substance Designer recently and it's a hell of a puzzle to set up there. Let's have it here, simple and easy, as a live effect.
-
So we could put down a 32-bit float UV layer and put a picture on top of it that would deform according to the UV of the layer you are painting on, for example. That way we could use Affinity Photo for a sort of limited 3D model painting, to project something onto an object's UVs, whatever split into patches or orientation it may have.
Or we could use this UV layer as a 2D displacement map, where RG is the direction of displacement and B is the distance. It could be a truly limitless image deformation technique.
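The RG-direction / B-distance idea boils down to simple per-pixel math. A minimal sketch, assuming a 0..1 channel range remapped to a signed direction and a made-up 32 px maximum distance; nothing here is an Affinity API:

```javascript
// Decode one pixel of a hypothetical RGB "displacement map":
// R and G encode a direction (0..1 remapped to -1..1), B encodes distance.
// The function name, encoding and 32 px scale are invented for illustration.
function decodeDisplacement(r, g, b, maxDistance = 32) {
  const dx = r * 2 - 1;                // -1 .. 1
  const dy = g * 2 - 1;
  const len = Math.hypot(dx, dy) || 1; // avoid division by zero
  const dist = b * maxDistance;        // pixels to shift
  return { dx: (dx / len) * dist, dy: (dy / len) * dist };
}

// A pixel with r=1 (full +x), g=0.5 (no y), b=0.5 shifts 16 px along +x:
const d = decodeDisplacement(1, 0.5, 0.5);
console.log(d); // { dx: 16, dy: 0 }
```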
Photoshop now supports custom-made sbsars for filters, but it's inconvenient as hell there. Let's do something better and simpler.
-
I love the patch tool, but it would be so much easier if we could not just scale and rotate it but flip it in x and y too, or even do a mesh deform to match the patch to the underlying subject better.
Or just make it a live layer effect, please. I mean a layer doing what patch does: matching colors to the underlying content. I bet it's all the same blur / frequency separation approach, so why not have it live?
-
I have states A and B saved, and then I add a new layer and save another state C with this new layer included. A and B say 6 layers; C says 7 layers.
Now I switch to state A and this new layer is still on. Why? I turn the new layer's visibility off and "update" state A. It still says 6 layers, not 7, for some uncertain reason. The only remedy is to delete state A and create it again. It defeats the whole purpose of having states if you have to re-create them all over again after adding a single new layer.
Am I missing something?
Then the query. From what I figured out reading the help, it's a kind of state based on certain parameters. A cool thing indeed. I tried it with layer names and expressions. Nothing ever worked. Now I am totally lost. Why couldn't it be just as simple as layer comps in Photoshop?
Could somebody show an expression to show only layers named "Main" and "COLOR_Tweak1", please?
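A hedged guess at such an expression: an anchored alternation that matches the two exact names. Whether the query panel accepts a bare regex like this is an assumption; the extra names are only there for contrast:

```javascript
// Exact-name match for the two layers asked about. The anchors ^ and $
// make it exact, so e.g. "COLOR_Tweak2" does not slip through.
const query = /^(Main|COLOR_Tweak1)$/;

const names = ["Main", "COLOR_Tweak1", "COLOR_Tweak2", "Background"];
const matched = names.filter(n => query.test(n));
console.log(matched); // ["Main", "COLOR_Tweak1"]
```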
-
Photoshop has them, and it makes inpainted areas so much less repetitive.
-
Please add 16-bit CMYK, or just a mode with one extra channel. I use 8-bit CMYK mode and it never provides enough precision for gradients and other details. Photoshop can do it; surely you can too.
-
I support "Image stays Image" by default. I find it quite convenient. It helps keep everything non-destructive and resolution-independent, and saves you from dealing with the annoying "smart objects" you waste your time on in Photoshop.
The only thing I am missing is the ability to link pixel content to another layer, i.e. make a sort of live rasterization. Especially since such a thing is already happening for the screen.
-
It's amazing how such small things do wonders in Photoshop. Affinity inpainting creates lots of repeating details, or sometimes just copies a piece from super close by, doubling details. So I'd rather use the patch tool, but it's pretty inconvenient too.
No flip, no deform to match certain image patterns. I would also like the patch tool to deal with transparency the same as the other RGB channels (as an option).
-
Adobe recently added flip options to their Content-Aware Move tool. Please, Serif, add flip and mesh deform options to the patch tool, so we could adjust something to existing details with the mesh deform option before applying the patch.
It would also be super helpful if the patch tool worked on alpha the same way it does on RGB, as transparency would be just one more channel to patch.
-
13 minutes ago, chessboard said:
Affinity Photo is going to support JavaScript soon. Tell me why you would not be proficient with making scripts for Affinity Photo and give reasons.
ChatGPT: As an AI developed by OpenAI, I am not capable of directly creating or executing scripts for specific software applications like Affinity Photo. Here are a few reasons why I would not be proficient in making scripts for Affinity Photo, especially considering its upcoming support for JavaScript:
-
Lack of Direct Interaction with Software: I don't have the capability to interact directly with software applications. My functionalities are limited to providing information, advice, and coding examples based on my training data. This means I cannot test or run scripts in Affinity Photo or any other software.
-
No Real-time Feedback and Debugging: Script development often requires real-time feedback and debugging. Since I can't run scripts or interact with the software, I cannot provide real-time debugging or feedback specific to the environment of Affinity Photo.
-
Limited to Pre-2023 Knowledge: My training includes knowledge only up to April 2023. If there have been specific updates or features added to Affinity Photo after this time, I wouldn't be aware of them. This limitation affects my ability to provide current and accurate advice for scripting in the context of the latest version of the software.
-
Generic JavaScript Knowledge: While I can assist with general JavaScript coding advice, Affinity Photo may have specific APIs or frameworks unique to its environment. Without specific knowledge of these, my assistance would be limited to general JavaScript programming practices.
-
No Personalization to User Preferences: Effective script writing often involves understanding the user's specific needs and preferences within the software. As an AI, I lack the ability to understand and incorporate individual user preferences and workflows in a script tailored for Affinity Photo.
However, I can certainly help with general advice on JavaScript, explain programming concepts, and provide examples that might be relevant to a scripting context. This could be useful as a starting point for someone looking to create scripts for Affinity Photo once it supports JavaScript.
ChatGPT 4 is already not so sure it doesn't have its own consciousness 😀. It wrote me an exporter from Blender to our custom game format today.
-
-
It would be nice to have a live filter that would match every RGB or A (as an option) pixel value to what's beneath, giving a sort of automatic color and tone adjustment for a new layer to mix seamlessly with the background.
It's like when you do frequency separation and get high- and low-frequency layers: the low-frequency layer is actually a blurred (with a slider) background.
There might be several other approaches too, but all based on blur and frequency separation. Perfectly doable, but it requires such a tedious routine and complicates the stack so much that I would love a live filter doing it.
Or just let us make our own live filters by letting the procedural filter see beneath. Please 🙏
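The frequency-separation fact this leans on (low = blurred source, high = source minus low, so the two recombine exactly) can be shown in miniature. A 1-D grayscale row and a 3-tap box blur stand in for a real image and a Gaussian blur; this is only the arithmetic, not anything Affinity-specific:

```javascript
// 3-tap box blur on a 1-D row of grayscale values, clamping at the edges.
function boxBlur(row) {
  return row.map((_, i) => {
    const a = row[Math.max(i - 1, 0)];
    const b = row[i];
    const c = row[Math.min(i + 1, row.length - 1)];
    return (a + b + c) / 3;
  });
}

const src  = [30, 30, 300, 30, 30];
const low  = boxBlur(src);                   // low frequency = blurred source
const high = src.map((v, i) => v - low[i]);  // high frequency = residual detail

// Recombining low + high gives back the original row exactly:
const recombined = low.map((v, i) => v + high[i]);
console.log(recombined); // [30, 30, 300, 30, 30]
```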
-
I don't care about CMYK and publishing; I just need one more extra channel. I can do it in Photoshop but can't here. Your layer system and layer linking are already way more convenient than Photoshop's. It would be just another step. I am pretty sure it shouldn't be much of a problem.
-
ChatGPT 4
Affinity Photo is going to support JavaScript soon. Do you think you would be proficient with making scripts for Affinity Photo, or would it require whole new training and be unlikely?
ChatGPT:
The introduction of JavaScript support in Affinity Photo is an interesting development. As a language model trained by OpenAI, I am proficient in JavaScript and have a strong understanding of programming concepts, which would be directly applicable to creating scripts for Affinity Photo, assuming its JavaScript implementation follows standard practices.
Affinity Photo likely has its own specific application programming interface (API), which means that while the core JavaScript language would be the same, the specific functions, methods, and processes to interact with Affinity Photo's features might be unique to that software. Learning these specifics would be necessary to create effective scripts for Affinity Photo. This would involve understanding how the API allows a script to interact with the software's features, such as image manipulation, layer management, and other photographic or graphical editing capabilities.
The process would be similar to learning any new API or software library: understanding the documentation, experimenting with basic examples, and gradually building more complex scripts. Given my existing programming knowledge and ability to learn from written materials, adapting to create scripts for Affinity Photo would be feasible. However, since I cannot execute or test scripts in real-time environments, I would primarily be able to offer guidance, pseudocode, or untested script examples based on the provided documentation and API references.
-
27 minutes ago, Old Bruce said:
Ignore Xcode (Apple's Mac programming suite of software). Xcode is not needed for this to work.
Make a document. Draw a shape on it. Go to the Export persona. Export something to somewhere on your computer. Now click in the Path area and write MyNewFolder/ (slash included). Export again. Look in the Finder at the original location (somewhere) and check whether there is now a folder called MyNewFolder with the slice in it.
I am on Windows, and this is not working. It doesn't let you put the slash there and writes the opposite slash instead.
-
10 minutes ago, chessboard said:
I assume that chatGPT generates Blender scripts quite well because there are many references on the web that chatgpt has "learned" from, due to the open source nature of Blender and therefore a large number of users who know how to script and share their scripts on the web.
Similar knowledge cannot be expected for Affinity scripts, as chatGPT does not learn to script. It does not "know" anything about scripting logic and scripting languages, nor does it understand how to read API documentation and combine that information with knowledge of the rules of a particular scripting language. It simply can't think. Instead, it simply collects scripts that others with knowledge of the scripting language have already written and combines them into new scripts. This works all the better the more often the tasks set have already been solved and published on the web. If you ask chatGPT to create a script for a very specific and unique purpose, it is more likely to fail.
Scripting in Affinity will be so new that chatGPT will not immediately find enough scripting input. If scripting in Affinity becomes popular, later versions of chatGPT will probably also be able to generate scripts for the Affinity applications. It is not a question of which scripting language is used; there just needs to be enough sample material available.
It might be true, but much to my surprise, ChatGPT 4 actually suggests ideas for how to solve things when something doesn't work. I subscribed for a month and see the difference. It looks a bit like thinking, actually. It's just pretty tiresome to check its ideas, since only one out of 10 works. It did me a few nice Photoshop scripts I couldn't have done myself, but it took me a whole weekend. The more attempts it makes to work around something, the slower it gets, up to the point where you wait half an hour for each new piece of code. And if the code still doesn't work, it starts doing useless detours and backtracks, and you constantly need to refocus it on the exact task.
-
6 minutes ago, Old Bruce said:
It is quite unintuitive. There is the main file path, which would be the folder the slices are saved to. The Path adds to that path. For example, if I am saving most files to /Users/<YOUR_NAME_GOES_HERE>/Downloads/ and then I use the Path to try and save to /Users/YOUR_NAME_GOES_HERE/Desktop/MySlices/TIFFs/, then I will wind up with my TIFFs being saved to /Users/<YOUR_NAME_GOES_HERE>/Downloads/Users/YOUR_NAME_GOES_HERE/Desktop/MySlices/TIFFs/, where the second half of that path ends up as newly created folders in my Downloads folder. I would have to set the basic path to Desktop in order to have the slices saved to MySlices/TIFFs/
The first time I run this I can have the folders MySlices and TIFFs created, subsequent exports will just use the already created folders.
I would love for it to use the Actual complete path I want instead of creating a new path but such is life.
It implies we can still use this Path option somehow? For me it never worked at all. Does it work on Apple only? I googled Xcode and it's something Apple-related?
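The behaviour Old Bruce describes amounts to the Path field being appended to the export folder rather than replacing it. Plain string joining reproduces the pitfall; the folder names are only examples, not anything Affinity actually uses:

```javascript
// The slice "Path" field is treated as relative to the export folder,
// so pasting an absolute-looking path just nests new folders inside it.
const exportFolder = "/Users/me/Downloads/";
const pathField    = "Users/me/Desktop/MySlices/TIFFs/";

const actualDestination = exportFolder + pathField;
console.log(actualDestination);
// "/Users/me/Downloads/Users/me/Desktop/MySlices/TIFFs/"
```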
-
And it would be nice if the Export persona could have access to "States" in Affinity Photo and could export selected states for each slice, with an option to add the state name as a file suffix. For example, a slice "button" would export two states, "on" and "off", as button_on.tga and button_off.tga.
BTW, does anyone know what the "path" field in the Slices panel is for? Whenever I try to input a path there, it ignores it.
-
I need scripting I could use ChatGPT 4 for, not just any scripting. So far ChatGPT is good at writing Python scripts for Blender. Not sure if it's the open source nature or it has just been trained well on Blender, but that's where it actually works.
Persuading it to write you a 3ds Max script is its own challenge, for example. For Photoshop it may take days of hit and miss before ChatGPT writes you JavaScript that actually works.
So please do something ChatGPT would be proficient with. Not another "Thank you for buying our product. Now kindly hire a programmer."
-
9 hours ago, fde101 said:
There is evidently a Blender plugin for sbsar files that relies on a component called Substance Automation Toolkit to be installed, but when that is available, it is able to render sbsar files directly within Blender: https://xolotlstudio.gumroad.com/l/stxJi
It might be possible to take a similar approach with the Affinity products to gain access to these files if it is something you have a use case for - if Serif does not provide this, perhaps as a plugin once the SDK is available?
That's not actually what I meant. 3ds Max has a Substance plugin too, for example, but it's just a way to bypass exporting bitmaps and loading bitmap textures, and too much complicated extra headache to be useful.
I don't mean exactly a "filter". I mean something we could feed one or two images into and have produce a new result on the fly. If it could read Affinity layers as inputs, then it's a whole new story.
-
Soon Photoshop will get probably its best and most amazing update in years: sbsar files as procedural filters, the ones we can export from Substance Designer to make whatever filters we'd like. Not live ones, unfortunately.
I wish Affinity had something like that. I doubt it could ever read sbsar files; Adobe would never allow it. But I wish we could have our own procedural filter that can at least read "below", like some other filters do, or better, any specific layer, with a simple node-based interface as an option. Something I could use my 16 GB GeForce 3080 for.
-
Scripting
in Feedback for the Affinity V2 Suite of Products
Posted
Nowadays I think any API should be one that ChatGPT can understand and work with, whatever that may require. I am switching from 3ds Max to Blender just because ChatGPT writes not just scripts but whole working addons for Blender, while when I try to do the same for 3ds Max nothing ever works.