Everything posted by JIPJIP

  1. Thank you for your response, John. So it is still not possible in a macro to resize images of different sizes to the same size AND choose the resizing algorithm. I hope this will be implemented.
  2. Oh, I forgot: and with no problems handling images of different sizes.
  3. Hi, I don't know if there is a more recent topic on this subject. Is there now (2024) a way to resize an image AND specify the method (bilinear, bicubic, etc.) in a macro? Thanks
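     Outside Affinity, the kind of batch resize with an explicit resampling method being asked about here can be sketched in Python with Pillow; the folder names and target size below are hypothetical placeholders, not anything from the thread:

     ```python
     from pathlib import Path

     from PIL import Image

     SRC = Path("input")    # hypothetical source folder
     DST = Path("resized")  # hypothetical output folder
     DST.mkdir(exist_ok=True)

     for path in SRC.glob("*.jpg"):
         img = Image.open(path)
         # The resampling method is an explicit argument here:
         # NEAREST, BILINEAR, BICUBIC, LANCZOS, ...
         img = img.resize((800, 800), Image.Resampling.BICUBIC)
         img.save(DST / path.name)
     ```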
  4. Windows or your antivirus (both can have this feature) does not allow unapproved applications to write to some protected areas of your disk. The Desktop is one of those areas. You have to find out whether it is Windows, your antivirus, or both, and then there is a list of allowed applications (one for Windows, one for your antivirus if it has this feature) to which you have to add the new Affinity version to allow it to write to the Desktop.
  5. @R C-R Interesting: I noticed this afternoon that there is a step in my macro which kills the sharpness of bicubic downsizing in Affinity. I know this is not about vector-to-bitmap conversion, but maybe a related process is involved; look at the rest: I found that bicubic downsizing alone is much better than when the whole macro is processed (I hadn't noticed that before, but I'm still looking at this problem; maybe I'm old school, but I can't tolerate the majors' behavior, so I'm looking for ways to get rid of them).

     What ruins the sharpness of the bicubic downsizing is when the black pixel layer (for the background) and the downsized photo pixel layer are merged. (This does not happen with the macro in CS6.) And what I noticed is that after downsizing the largest side of my photo to the size of the square thumbnail's side, the other side of my photo is not an integer but a decimal number (kept at full precision). So what I suspect is that when the layers are merged, another kind of pixel sampling happens to stretch the decimal-sized side to an integer value, and that ruins the quality of the bicubic algorithm. Maybe CS6 is less fussy about precision and its bicubic downsizing rounds both sides to integers in one step, or something like that.

     So about converting vector to bitmap, maybe something of the same kind is involved: rounding some values at the end of the process and resampling some very small offsets, which ruins the final result. I'm not sure how transparency is stored in that PNG (not every PNG variant carries a full alpha channel), but maybe it involved a kind of layer-merging process which gives the same result as my macro. It would be interesting to check whether the vector-to-bitmap conversion gives the same result with formats with alpha channels, or a better one.

     And for my case, is there a way to force the downsizing operation to "snap" to an integer pixel count instead of sticking to decimal precision when applying the bicubic algorithm?
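     The "snap to integers" idea can be sketched outside Affinity in Python with Pillow: round both target dimensions to whole pixels first, so the entire downsize happens in a single bicubic pass and no fractional-pixel resample is left for a later merge step. This is a hedged workaround sketch, not Affinity's actual pipeline:

     ```python
     from PIL import Image

     def snap_resize(img: Image.Image, target_long: int) -> Image.Image:
         """Downsize so the longest side equals target_long, rounding the
         other side to a whole pixel count BEFORE resampling."""
         w, h = img.size
         scale = target_long / max(w, h)
         new_w = max(1, round(w * scale))
         new_h = max(1, round(h * scale))
         return img.resize((new_w, new_h), Image.Resampling.BICUBIC)
     ```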
  6. Just curious, which camera and which anamorphic lens do you use?
  7. This is what I think too. And in my bitmap reductions, it seems that not all bicubic filters are the same in every piece of software.
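     That matches how cubic convolution works: the standard "bicubic" (Keys) kernel has a free parameter a, and different programs pick different values (a = -0.5 gives Catmull-Rom; a = -0.75 is also common), which visibly changes sharpness. A minimal sketch of the kernel:

     ```python
     def cubic_kernel(x: float, a: float = -0.5) -> float:
         """Keys cubic convolution kernel. a = -0.5 is Catmull-Rom;
         other software uses other values (e.g. a = -0.75)."""
         x = abs(x)
         if x <= 1:
             return (a + 2) * x**3 - (a + 3) * x**2 + 1
         if x < 2:
             return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
         return 0.0
     ```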
  8. I tried downsampling those thumbnails while cranking the DPI up to 1200, but for bitmap images this trick does nothing.
  9. @R C-R Hi R C-R, here are 2 images downsized into thumbnails from a batch of 34 (with a macro which makes the image square, enlarges the canvas, and puts in a black background if the base image is not square). I even had 3 bugged images when doing this batch. I've included one of those bugged images so you can see it (the second one); the bug is a strange blur. I tried several times and got bugs each time. I didn't try single-threaded; maybe the bugs wouldn't happen then, I don't know. I tried all the available algorithms and chose bicubic, which gave me the best result (Lanczos was sharper but too crispy to be usable for me). So the first of each pair is Affinity (bicubic), and the second is CS6 (bicubic too, but the original images, very large at 50 Mpixel, were sharpened with a complex macro before reduction, a process that can't be done in Affinity). Thumbnail 1: Thumbnail 2 (one of the 3 bugged in the Affinity batch, with the strange blur):
  10. I deleted the samples I posted with the thumbnails because it seemed small images were scaled up when uploaded, and this aggravated the problems in the Affinity thumbnails, which look better than what was shown on the forum (but strangely it did not aggravate the CS6 thumbnails nearly as much; they stayed sharper and really clean...). Sorry, my browser was slightly zoomed, so there is probably no scaling on upload to the forum. So I'll do it again.
  11. I got a similar problem when downsampling photos to web size, and when creating thumbnails too. I tried every sampling option and got results that were either too smooth or too sharp and crispy. But Photoshop CS6 did much better, and allowed complex, fine-tuned sharpening too (lots of steps involved, but easily reduced to one tweakable macro). And without crashing!
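      One common recipe behind that kind of pipeline (downsample, then restore apparent sharpness with an unsharp mask) can be sketched outside Affinity in Python with Pillow; the radius/percent values here are placeholder assumptions to tune, not the poster's actual settings:

      ```python
      from PIL import Image, ImageFilter

      def web_thumbnail(path: str, long_side: int = 800) -> Image.Image:
          """Downsample, then apply a light unsharp mask to restore
          the acutance lost in the resampling step."""
          img = Image.open(path)
          w, h = img.size
          scale = long_side / max(w, h)
          img = img.resize((round(w * scale), round(h * scale)),
                           Image.Resampling.BICUBIC)
          # Placeholder sharpening values; tune per image source.
          return img.filter(ImageFilter.UnsharpMask(radius=1.0,
                                                    percent=80,
                                                    threshold=2))
      ```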
  12. Or by screens with slightly different response curves, which don't show antialiasing blending the same way. I don't know if it is still the case, but Macs did not use the standard gamma value. A valid test to check this point would be to look at the same image side by side on a Mac and on a PC.
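      For reference, classic Mac OS defaulted to gamma 1.8 while PCs targeted 2.2 (macOS switched to 2.2 in 10.6), and that difference alone changes how antialiased edge pixels read. A tiny sketch of the effect under a simplified pure power-law response:

      ```python
      def decoded_luminance(pixel: int, gamma: float) -> float:
          """Relative luminance a display produces for an 8-bit code
          value, assuming a pure power-law response (a simplification
          that ignores the linear toe of real sRGB)."""
          return (pixel / 255.0) ** gamma

      edge = 128  # a typical antialiased edge pixel
      print(decoded_luminance(edge, 1.8))  # ~0.29 on a legacy Mac
      print(decoded_luminance(edge, 2.2))  # ~0.22 on a typical PC
      ```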
  13. I will give this a try. I used the icon and a mouse click to record the fill action. What's strange is that the macro alone works as intended, but not in batch mode... And more than that, it worked as intended in batch mode the first time I used it (I still have my first pack of images, processed correctly from that first run). But that was the only time... from the second try on, the problem was there. In between, I closed and reopened Affinity Photo.
  14. When I created the macro, the color was right (black). When I use the macro alone (on one photo), the color is right (black). And if I change the color in the color picker before running the macro, the macro runs with the chosen color. When I used the macro in a batch job the first time, the color was right (black). When I use the macro in a batch job now, the color is wrong (white), regardless of the color in the color picker. Any idea?
  15. This is the best possible representation of a circle in a bitmap format (like PNG or any other bitmap format) at the resolution you chose when creating it. (As opposed to vector formats, which describe shapes mathematically and are rasterized on the fly at the displayed resolution.) This is not a bug. This is how a circle can be antialiased at this resolution, and it is the best any existing software can do at this resolution. The best way to increase quality with a bitmap image like this is to increase the resolution when creating your circle (or when converting it from a vector format to a bitmap format) and to choose a resolution adapted to (or larger than) your final work. If you want to be resolution independent, you have to work in a vector format.
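      One standard way to get the cleanest antialiased circle a given bitmap size allows is supersampling: rasterize at several times the target resolution, then downsample. A minimal Pillow sketch of that idea (the 4x factor is an arbitrary assumption):

      ```python
      from PIL import Image, ImageDraw

      def antialiased_circle(size: int, factor: int = 4) -> Image.Image:
          """Draw the circle at factor x the target resolution, then
          downsample; the resampling step produces the smooth edge."""
          big = size * factor
          img = Image.new("L", (big, big), 255)
          ImageDraw.Draw(img).ellipse((0, 0, big - 1, big - 1), fill=0)
          return img.resize((size, size), Image.Resampling.LANCZOS)
      ```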
  16. I have to add that when I run the macro alone (not in a batch), it works fine and keeps the color I set in the color selector before running it.
  17. Hi, I have a macro where one step fills a pixel layer with a color. The color I chose isn't recorded in the macro (if there is a way to record it, please let me know how). I used this macro in a batch job once and everything worked fine. But now it does not fill the pixel layer with the right color. I tried changing colors in the color selector before creating and running the batch, but it seems the color selector is reset the wrong way any time I run the batch now. It worked once, but won't anymore. What am I doing wrong? Thanks.
  18. @Deckonym You should explain what you want to do with your layers. It seems you have to adapt your 3D workflow first (I've been in the 3D trade for about 30 years). Most 3D software can render lots of different output buffers, which allow many possibilities in compositing.
  19. Still not sure what you mean by transparent. What do you need an alpha channel for here? From what I think I understand of what you want to do, you should, if your 3D software allows it, use render passes (or whatever they are called in your 3D software; which one do you use?) and separate each light and the environment lighting into different passes, then recompose them afterwards, so you can turn each light on and off as needed when compositing. In this case the layers should be composited additively. (This assumes your base rendering, with all lights, is as you want it, as it seems to be.) (To me, the light in the small room seems too strong and burns out the walls around it; or you are showing this image in linear mode and have not yet applied a transfer curve, which should be done on a 32-bit-per-channel linear render to keep the values precise enough.) I think you want to be able to turn each light on and off as needed; is that what you want to do?
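      A minimal NumPy sketch of that additive recombination, assuming each pass is a float32 linear-light array (the file names are hypothetical):

      ```python
      import numpy as np

      # Hypothetical per-light passes, float32 linear light, shape (H, W, 3).
      key_light = np.load("pass_key.npy")
      fill_light = np.load("pass_fill.npy")
      env_light = np.load("pass_env.npy")

      # In linear space, lighting is additive: the full render is the
      # sum of the individual light contributions.
      beauty = key_light + fill_light + env_light

      # Turning a light off means leaving its pass out of the sum;
      # dimming it means scaling the pass before adding.
      no_fill = key_light + env_light
      dim_fill = key_light + 0.5 * fill_light + env_light
      ```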
  20. In fact, I did the inverse in my case. I had a light the director didn't want. It was faster to compute the scene with only this light than to compute the whole scene without it. So I computed the scene with only this light and subtracted it in compositing from the original scene. It worked perfectly.
  21. I'm not sure I understand exactly what you want. But if I do, I have already done that, with a subtractive blending mode between your two rendered images: subtract the layer without the light from the layer with the light, and you get only that light's contribution.
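      The same idea as the additive sketch above, in NumPy form, again assuming float32 linear-light passes (file names hypothetical):

      ```python
      import numpy as np

      with_light = np.load("render_all_lights.npy")      # full scene
      without_light = np.load("render_minus_light.npy")  # one light off

      # In linear light, rendering is additive, so the difference
      # isolates exactly the missing light's contribution.
      light_only = np.clip(with_light - without_light, 0.0, None)

      # The inverse trick from the post above: remove an unwanted light
      # from the full render by subtracting its isolated contribution.
      cleaned = np.clip(with_light - light_only, 0.0, None)
      ```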
  22. Oh? It doesn't sound like it would need many lines of code to implement... It's strange and disappointing that this wasn't done from the first step of programming this macro recorder. Being able to modify work you have already done is a basic way of working...