Jörn Reppenhagen

Members
  • Posts: 98
  • Joined
  • Last visited

Reputation Activity

  1. Confused
    Jörn Reppenhagen got a reaction from Westerwälder in AI generative Fill in Affinity   
    I am badly missing two specific topics in this AI discussion: 1) should AI be integrated at all, and 2) should it run in the cloud or locally?
    As said before, PS (Photoshop) and LM (Luminar) went for a cloud approach.
    The benefits are quick and easy implementation without specific hardware requirements and no need to download and install huge data volumes before use.
    But the drawbacks are serious:
    • NO PRIVACY anymore. The AI needs to see the whole image to decide which AI model to use - there are countless AI models for multiple purposes, each providing its own picture style, so the model has to be chosen or adapted according to the image being processed. In addition, each and every image needs to be scanned for prohibited content so that the cloud service can refuse to process it. Thus every image we process with AI has to be transferred to unknown destinations.
    • Massive content restrictions, especially sensitive when it comes to anything sexual. I have seen AI models refusing to generate anything containing terms like "butt", so you need to get really creative if you wish to remove anything near such body parts. Many AI models also restrict celebrity rendering, styles similar to those of famous modern artists, and copies of other works. Of course there's a high risk of false positives.
    • Restricted flexibility. There's a wealth of great extensions for AI generators like Stable Diffusion, free of cost. You cannot install such extensions in a cloud environment.
    • Endless operating costs. A cloud solution would force Serif to introduce one of the dreaded subscription models - the reason why we abandoned Adobe and the reason why we chose Affinity. Of course AI features could be offered as a separate service, but that would lead to the same ugly result: a subscription. In that case we could just as well return to Adobe - and that's exactly what would happen.
    • Legal liability if abusive images get rendered by the provider's cloud service. The provider would therefore need to examine every single image very closely, worsening the privacy issue mentioned above and increasing the provider's own risk.
    That's why I believe a local install would be the only way to go.
    The question is not if Serif needs to implement AI features, the question is how this should be done in the best way possible.
    I guess it would be quite beneficial if we tried to collect some ideas to help Serif with their decision.
  2. Like
    Jörn Reppenhagen got a reaction from Ash777 in Color replacement brush not working as expected   
    Banacan: For this use case, you might be better off with a simple HSL adjustment - especially because of the transparent background and the uniform color range.
    Select HSL adjustment, choose red color circle, select the picker, click on a bright red area, set Hue Shift to e. g. -62, Saturation Shift to 56, Luminosity Shift to 49 - or to your liking.
    Then make sure the HSL layer is selected, invert the layer (color reverts to red), choose the Paint Brush Tool [B], select pure white as a color (255, 255, 255), then paint over the areas you wish to recolor, Opacity controls intensity. Take care not to select "Wet Edges" and "Protect Alpha".
    That's essentially the same as Recolor - but the above also works with more complex color combinations (not just with "everything red" things).
    Workaround for setting the correct hue with Recolor: set your target color using RGB, then switch the color panel from RGB to HSL. The Hue value now shown is the value to enter in the Recolor dialogue (a small conversion sketch follows after this post).
    Personally, I also find the behavior of the color replacement brush rather strange, even senseless - it just doesn't do what it's supposed to do.
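    For illustration only, here is a minimal Python sketch of that RGB-to-HSL hue conversion using the standard-library colorsys module; the example colour is made up, and this is not anything Affinity itself exposes.

        import colorsys

        # Compute the Hue value (0-360 degrees) that an HSL panel would show for
        # an RGB target colour - the number to enter in the Recolor dialogue.
        def recolor_hue(r, g, b):
            h, _l, _s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
            return round(h * 360)

        print(recolor_hue(230, 140, 40))  # hypothetical target colour -> roughly 32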
  3. Like
    Jörn Reppenhagen got a reaction from Nomad Raccoon in AI generative Fill in Affinity   
    (This is the same post as quoted under item 1 above.)
  4. Like
    Jörn Reppenhagen got a reaction from Nomad Raccoon in AI generative Fill in Affinity   
    I didn't read the full discussion (16 pages is a bit much), but here are three points to consider:
    1) Looking away (or making fun of the technology) does not help at all. People WANT it. And even Skylum (Luminar) has already jumped, or rather stumbled, onto the train.
    2) Everybody who has ever done some more complex image corrections will LOVE this new tool.
    3) Serif will lose a major part of their customer base to PS and LM if they keep on ignoring reality.
    Now for a possible solution ...
    I read things like "Cloud" and "legal" ... That sounds like the wrong approach to me. It's the approach PS and LM chose, but there's another, maybe better, solution: a local install.
    If Serif provides a local installation of Stable Diffusion (the most flexible AI solution today, and also free to use) combined with a "safe" pre-installed model/checkpoint that disallows nudity as well as celebrity and artist fakes, they should be on the safe side: images would be generated on the user's computer, not by a cloud service run by Serif. However, users could use other models/checkpoints different from the "safe" one (like URPM) at their very own discretion (a rough sketch of such a local pipeline follows after this post).
    That way, there wouldn't be any cloud costs for Serif to cover, and Serif would not be liable for naughty things users might do if they dared to download and install models/checkpoints other than the preinstalled and highly recommended one.
    There are several more benefits, but I wish to keep this short.
    Drawback: users need a modern GPU, starting from an RTX 2070 with 8 GB of VRAM, and about 30 GB of free disk space. That's what I would call the minimum requirements.
     
    Think about it, correct me if I am wrong.
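    To make the idea more concrete, here is a minimal, hedged sketch of what a local generative-fill backend built on Stable Diffusion could look like, using the open-source diffusers library. The model ID and file names are assumptions for illustration; this is not Serif's implementation.

        import torch
        from diffusers import StableDiffusionInpaintPipeline
        from PIL import Image

        # Load an inpainting checkpoint once; everything runs on the local GPU,
        # so no image ever leaves the user's machine.
        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "runwayml/stable-diffusion-inpainting",  # example model ID
            torch_dtype=torch.float16,
        ).to("cuda")

        image = Image.open("photo.png").convert("RGB")  # hypothetical input image
        mask = Image.open("mask.png").convert("RGB")    # white = area to fill
        result = pipe(prompt="empty sandy beach", image=image, mask_image=mask).images[0]
        result.save("filled.png")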
  5. Like
    Jörn Reppenhagen got a reaction from Tomasz.Bossi in AI generative Fill in Affinity   
    (This is the same post as quoted under item 1 above.)
  6. Sad
    Jörn Reppenhagen got a reaction from R C-R in AI generative Fill in Affinity   
    Of course there's no universal way - but that's the exact purpose of discussing such a topic: finding out what Affinity users actually want, and to what extent that is congruent with Serif's plans. But that requires talking about possible implementation options, their drawbacks and benefits.
    I already answered that in my previous post. 😉
  7. Like
    Jörn Reppenhagen got a reaction from Frozen Death Knight in AI generative Fill in Affinity   
    (This is the same post as quoted under item 1 above.)
  8. Like
    Jörn Reppenhagen got a reaction from jc4d in Moon again. To stack or not to stack   
    I still like the single image better.
    The moon is a horribly bright object, so there shouldn't be any strong noise-related issues. The problems are equipment, correct adjustment and of course "seeing"/atmospheric turbulence.
    "Lucky imaging" often works great, meaning: take a lot of single shots in a series and later pick the sharpest one (a small frame-picking sketch follows after this post). Exposure times should be as short as possible.
    @irandar - Honestly, I wonder how you managed to get such low picture quality out of your gear. Noise levels and graininess are unbelievably high - both the camera and the MAK should provide far better results.
    Thus it could be interesting to know a bit more about your setup and your procedure.
    I'd first try lowering the ISO value as far as possible. Plus, I'd make sure the picture is focused correctly (maybe a Bahtinov mask would be the way to go); there are several focusing methods, which I can explain if you like. Plus (this is a wild guess): switch the camera off for a few minutes to let it cool down, then switch it on and take the photos as quickly as possible. I know, a bit hard without a tracking mount, but still doable. Reason: noise levels usually increase with the sensor temperature.
    And a general hint: if the stars "twinkle", that's good for a romantic mood but bad for taking astrophotos. The more the stars twinkle, the worse the atmospheric turbulence, making sharp pictures very, very difficult if not impossible. But that's not the reason for the noise and grain in your photos.
    Another idea: you didn't take the photos from inside a warm room, but outside in the cold, right? If not - you should.
    ---
    Two example pics, single shots, not stacked in any way - the first one (no editing at all, straight from the camera) showing the size of the moon at 750 mm focal length, the second one giving an idea of the quality a single shot should provide. The pictures were among my very first astrophotos, unfortunately taken without a field flattener and edited without much expertise.
    Info: taken with a cheap Canon 750D, first picture at ISO 100, 1/100 s; second picture at ISO 400, 1/60 s, both with a Skywatcher Newton 150/750.
    Hint: right-click the pictures and select Open LINK(!) in new tab - then you'll be able to view them at their original size.
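    As a hedged illustration of the "pick the sharpest frame" step in lucky imaging, here is a minimal Python/OpenCV sketch; the folder name is made up, and the variance-of-Laplacian score is just one common sharpness measure.

        import glob
        import cv2

        def sharpness(path):
            """Variance of the Laplacian: higher means more fine detail / better focus."""
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            return cv2.Laplacian(img, cv2.CV_64F).var()

        frames = sorted(glob.glob("moon_series/*.jpg"))  # hypothetical burst of single shots
        best = max(frames, key=sharpness)
        print("sharpest frame:", best)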
     


  9. Like
    Jörn Reppenhagen got a reaction from ronnyb in Moon again. To stack or not to stack   
    (This is the same post as quoted under item 8 above.)
  10. Like
    Jörn Reppenhagen got a reaction from NotMyFault in Moon again. To stack or not to stack   
    (This is the same post as quoted under item 8 above.)
  11. Like
    Jörn Reppenhagen got a reaction from NotMyFault in Stacking without tracking   
    Sure it's possible. Walking from Berlin to Moscow is also possible.
    If you concentrate on the moon, you won't need any tracking mount. Jupiter and Saturn may also come into reach.
    It is also possible to do some more basic astrophotography without one - but it's like walking from Berlin to Moscow, with the results usually being "suboptimal", to put it politely.
    If you love this hobby, do yourself a favor and invest in a tracking mount. There are inexpensive solutions that give you far more observing and general comfort and allow longer exposure times (but don't expect to manage exposures of several minutes).
    But your first step should be achieving sharp, focused pictures without excessive noise and grain - see the thread linked by NotMyFault.
  12. Like
    Jörn Reppenhagen reacted to NotMyFault in Stacking without tracking   
    I would recommend to just give it a try.
    We had a report where a user tried to stack images taken in 2012 with a very noisy Sony APS-C camera and an f/8 mirror lens.
    That did not work well, due to insufficient source image quality.
    My own experiments with a 2016 EOS 80D gave quite good results. There is no hard limit. What is holding you back?
    You may read this post and download the Sony/Canon RAW files to see what I mean.
    It's only about stacking for noise reduction, not about alignment; it is meant to show examples of sufficient and insufficient source images.
     
  13. Like
    Jörn Reppenhagen reacted to NotMyFault in Moon again. To stack or not to stack   
    Below you can find the result of a stack of 8 images. It reduces the noise without losing detail - unlike a denoise filter - and proves that Affinity works well when used with source images of sufficient quality (a minimal averaging sketch follows after this post).
    I fear your camera gear - a Sony NEX-5N (according to the EXIF data) released in 2011 and a Celestron C90 1000 mm lens (2009?) - is unable to provide sufficient image quality in the RAW files.
    It is not possible, and it does not make any sense, to try to restore image quality that was never captured in the first place.
    The level of noise and other quality-degrading defects in the RAW files is far too high to be compensated for in post-processing.
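    As a hedged illustration of what "stacking for noise reduction" does, here is a minimal Python/NumPy sketch that simply averages an already-aligned series of frames; the file names are made up, and this is not Affinity's internal algorithm.

        import glob
        import cv2
        import numpy as np

        # Averaging N aligned frames reduces random noise by roughly sqrt(N)
        # while keeping real detail intact.
        frames = [cv2.imread(p).astype(np.float64) for p in sorted(glob.glob("aligned/*.png"))]
        stack = np.mean(frames, axis=0)
        cv2.imwrite("stacked.png", np.clip(stack, 0, 255).astype(np.uint8))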

  14. Like
    Jörn Reppenhagen reacted to MEB in Applying perspective transformation to both layer and mask   
    Hi Wimads,
    Welcome to Affinity Forums
    Use a Live Perspective Filter (menu Layer ▸ New Live Filter Layer ▸ Perspective Filter) and nest it to the pixel layer with the mask also nested to the pixel layer. See the layer structure below (to nest masks, adjustments/filters etc to a pixel layer drag them over the thumbnail of the pixel layer in the Layers panel):

  15. Like
    Jörn Reppenhagen reacted to MEB in Applying perspective transformation to both layer and mask   
    Hi @Roelf,
    Welcome to Affinity Forums
    Group the layers and apply the perspective correction to the group.
  16. Like
    Jörn Reppenhagen reacted to Dan C in Restaurieren funktioniert nicht   
    Hello Sabillie,
    Welcome to the forums
    Make sure you are using the Inpainting tool on a (Pixel) layer and not on an (Image) layer. If it is an Image layer, right-click and rasterise the layer before continuing.
    Check the context toolbar with the painting tool selected and make sure the brush settings are correct and the opacity is not set to 0%.
    Finally, make sure you are using the tool on the correct layer. If you have an adjustment layer selected, the tool will process (showing the progress bar), but no changes will be made to your image. I hope this helps!
  17. Like
    Jörn Reppenhagen got a reaction from NotMyFault in Restaurieren funktioniert nicht   
    These are the mistakes I'm particularly fond of making myself:
    • Several layers present, but the Pixel layer is not selected (not highlighted in blue). Happens all the time when you apply some adjustment, which is added as a new layer and is therefore automatically selected. Click the Pixel layer and it works again.
    • The image is not a Pixel layer. In that case: right-click the layer and choose "Rasterise...".
    • Opacity at 0 or very low. The lower the opacity, the weaker the effect.
    • Flow at 0 or very low. Then it doesn't work either, or at least not properly.
    Apart from Flow, all of this was already mentioned above by Dan C and Komatös, but perhaps it's a bit more digestible described this way.
  18. Haha
    Jörn Reppenhagen reacted to Komatös in Restaurieren funktioniert nicht   
    Hello @AnMaSp and welcome to the forum.
    Well, the "Klönen" function (Low German for "having a chat") only works in a chat program. 😝
    The Clone and Paint brushes only work on pixel layers. Please check whether the layers are shown as Image layers. If so, you have to rasterise the layer first - either via Layer -> Rasterise... in the menu bar, or by right-clicking and choosing it from the drop-down menu.
  19. Like
    Jörn Reppenhagen got a reaction from Pauls in Stacking jpg astro files. Strange results or not?   
    @irandar: The results are typical - it cannot work this way.
    Stacking expects pictures where the objects have a predictable, identical offset - example:
    Photo 1: Star A at position 100 x 200, star B at position 150 x 170.
    Photo 2: Star A at position 110 x 205, star B at position 160 x 175.
    So we've got an offset of 10 on the X axis and an offset of 5 on the Y axis - the same offsets for star A and star B, and the same offsets for stars C, D, E, ... (a small sketch for estimating such a global offset follows after this post).
    Your main problem is not exposure times; it's that you are looking towards the rotational axis of the sky, the celestial pole, while there are significant time differences between your photos. Your AP stack shows that the North Star, Polaris, is somewhere near the upper right corner of the picture - the pivot point of the sky. This causes objects near the upper right corner to "move" more slowly than objects farther away, so the offsets increase more and more with the distance from the North Star and are no longer identical. You can see this in the different lengths of the star trails.
    That's why the stacking algorithms produce those star trails - they simply cannot work with stars that have different offsets in the same picture.
    Just imagine sitting inside a dome with stars painted on its walls, the dome rotating around you. If you look up, the stars near the top move more slowly than the stars at the sides. And if you look parallel to the ground, the stars seem to move at the same speed. If you take photos of the top and stack them, you'll get the same results as with your stacked photos. But if you take pictures parallel to the ground, the results will be much better, as the offsets become almost identical.
    Remedies: a) Very short exposure times, as already mentioned in this thread. PLUS: Photos need to be taken immediately after each other. b) Taking photos more parallel to the ground plane. c) Use of a less expensive motorized mount like Star Adventurer or AZ-GTi, or a "real" mount starting from the EQ5 class, not below.
    I'd go for option c) if you wish to dive deeper into astrophotography as these mounts also allow mounting of smaller telescopes. You won't be able to achieve real long exposure times of several minutes, but this would solve your problem. If you wish to stick with your DSLR and shorter focal lengths for astrophotography, mounts like Star Adventurer and AZ-GTi are your friends. For everything else a "real" mount (EQ5 and up) is the way to go.
    Additional info:
    a) Your photos are out of focus - stars are discs, not points. Focusing on stars isn't easy; manual focusing at maximum magnification of the preview might help, as might a Bahtinov mask.
    With a higher focal length/magnification, a Bahtinov mask would be the way to go. With a lower focal length/magnification (like 55 mm), it's better to focus via the preview at maximum magnification, take a sample picture (short exposure time, higher ISO) and check whether the stars are points by viewing the photo at maximum magnification, then correct the focus if needed. Nothing is more frustrating than finding out all your pictures are out of focus - wasting the results of several hours of imaging.
    b) Take some dark frames, as Pauls already mentioned. You've got masses of defective pixels in your photos - those tiny red, blue, green or white dots that just about every digital camera produces with longer exposure times. Dark frames help eliminate these (a small dark-frame sketch also follows after this post). Playing with the Stacking Options (Threshold and Clipping iterations) might help as well.
    c) Use RAW photos. JPGs usually look better straight out of the camera, but the camera's JPG algorithms apply a lot of changes, often in a more or less random and unpredictable way - and astrophotography and stacking need unmodified data to achieve the best results. Just set your camera output to RAW + highest JPG quality, so you always get a pair of RAW and JPG and can never forget to set the output to RAW.
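    As a hedged illustration of the "identical offset" requirement described above, here is a minimal Python/OpenCV sketch that estimates one single global shift between two frames via phase correlation; the file names are made up. Such a single (dx, dy) pair only describes the frames correctly when every star moved by the same amount - field rotation around the celestial pole breaks that assumption and produces the trails.

        import cv2
        import numpy as np

        # Load two frames as float32 grayscale, as required by phaseCorrelate.
        a = cv2.imread("frame1.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
        b = cv2.imread("frame2.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        (dx, dy), response = cv2.phaseCorrelate(a, b)
        print(f"global shift: dx={dx:.1f} px, dy={dy:.1f} px (confidence {response:.2f})")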
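    And as a hedged sketch of point b), dark-frame calibration boils down to building a master dark and subtracting it from each light frame; the folder and file names are made up, and real stacking tools do this (and more) internally.

        import glob
        import cv2
        import numpy as np

        # Master dark: median of several dark frames taken with the lens cap on,
        # at the same ISO, exposure time and sensor temperature as the lights.
        darks = [cv2.imread(p).astype(np.float64) for p in glob.glob("darks/*.png")]
        master_dark = np.median(darks, axis=0)

        # Subtract the master dark from a light frame to remove hot pixels.
        light = cv2.imread("light_0001.png").astype(np.float64)
        calibrated = np.clip(light - master_dark, 0, 255).astype(np.uint8)
        cv2.imwrite("light_0001_calibrated.png", calibrated)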
  20. Like
    Jörn Reppenhagen got a reaction from Dan C in 1.9 Astro Stacking Fuji RAF - No Color   
    @Kevin Barry- ASI071MC Pro uses a Sony sensor. This thread is about Fujifilm.
    @Dan C - I am reluctant to open a new thread if there's already an old thread dealing with the matter. In most forums you'd immediately get lectured for redundancy.
    I'll try astrostacking again, maybe I just missed setting a vital option - and if the result is still the same, I'll open a new thread.
    Note: it's no longer a total absence of colors as before - now there's just ONE color; in my case it's yellow, with all the other colors not showing up.
  21. Like
    Jörn Reppenhagen reacted to firstdefence in Remove dark circles under eyes   
    Filters > Frequency Separation (set the radius to about 3 px)
    Create a new Pixel layer and place it in between the high- and low-frequency layers
    Change the blend mode to Lighter Colour
    Select a basic brush with some blurring and pick a colour from the cheek area using the picker on the Colour panel
    Use the brush to paint over the areas under the eyes
    You can alter the pixel layer's opacity if needed (a conceptual sketch of frequency separation follows after this post).
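    Conceptually, frequency separation just splits the image into a blurred low-frequency layer and a residual high-frequency layer; here is a minimal Python/OpenCV sketch of that split (made-up file name, not Affinity's actual implementation).

        import cv2
        import numpy as np

        img = cv2.imread("portrait.jpg").astype(np.float32)  # hypothetical input
        low = cv2.GaussianBlur(img, (0, 0), sigmaX=3)         # low frequencies (~3 px radius)
        high = img - low + 128                                # high frequencies around mid-grey

        # Recombining low + (high - 128) restores the original image exactly,
        # which is why retouching either layer separately works.
        recombined = np.clip(low + (high - 128), 0, 255).astype(np.uint8)
        cv2.imwrite("recombined.jpg", recombined)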
  22. Like
    Jörn Reppenhagen reacted to iconoclast in Can't capture results of Divide blend mode   
    Hi!
    Rasterise your stock images before merging (right-click > Rasterize image). I'm not sure that this is the reason, but it's worth a try.
    Sorry, I'm in a hurry.
  23. Like
    Jörn Reppenhagen reacted to AJS in Can't capture results of Divide blend mode   
    Working with a stock image to try out some Affinity Photo techniques, I got an interesting-looking result by applying the Divide blend mode to a layer.
    Problem is, I can't capture that result to work with it further. If I merge the layers, the image changes. Even weirder, if I Copy Flattened and paste onto a new layer, the image changes, when I thought the entire idea of Copy Flattened is that it captures & reproduces exactly what you see. Then I tried taking a Snapshot and creating a Snapshot layer. That also changed the image! I can't believe that both these methods failed at exactly the task they're supposed to do.
    Btw, while writing this, the reCAPTCHA kept expiring and making me check the box again every twenty seconds, and the 822kb image upload failed, so Affinity's kinda whiffing it in an exciting variety of ways today.
    Meanwhile I tweeted the example images, you can see them there.
  24. Like
    Jörn Reppenhagen got a reaction from walt.farrell in Color replacement brush not working as expected   
    (This is the same post as quoted under item 2 above.)
  25. Like
    Jörn Reppenhagen reacted to Chris B in Affinity Photo 1.10.1.1142 crashes   
    Do you have any selections active when you zoom with the mousewheel?
    Active selections on the Windows version can consume a lot of CPU power because I think they are refreshing too fast compared to the macOS version.