Jörn Reppenhagen

Members
  • Posts

    98
  • Joined

  • Last visited

Profile Information

  • Gender
    Male
  • Location
    Grave of democracy (Germany)

  1. Of course there's no universal way - but that's the exact purpose of discussing such a topic: finding out what Affinity users actually want, and to what extent that is congruent with Serif's plans. That requires talking about possible implementation options, their drawbacks and benefits. I already answered that in my previous post. 😉
  2. That's a secondary thought, as the heart of most AI features is the models/checkpoints plus a local infrastructure including a Python installation. Whether you wish to use generative infill, inpainting, outpainting, upscaling, importing custom objects/faces into the database, or even swapping things (or faces) in movies and the like - it all builds on the same base infrastructure. And it all works with a local installation, without sending all your personal work to foreign servers, keeping your property your property. That's also a secondary step, as a suggestion should be something we are sure of. I don't see much sense in drowning a suggestion in countless pro and con posts; the Serif folks would very quickly lose any interest in following an endless discussion. So I find it better to discuss an idea first and come to a mutual conclusion, THEN launch it as a suggestion.
  3. I am badly missing two specific topics in this AI discussion: 1) AI, 2) cloud or local AI use? As said before, PS and LM voted for a cloud approach. The benefits: quick and easy implementation without specific hardware requirements, and no need to download and install huge data volumes before use. But the drawbacks are serious:
     - No privacy anymore. The AI needs to see the whole image to decide which model to use - there are countless AI models for different purposes, each providing its own picture style, so the model has to be chosen or adapted according to the image being processed. In addition, every image has to be scanned for prohibited content so the cloud service can refuse to process it. That means every image we process with AI gets transferred to unknown destinations.
     - Massive content restrictions, especially sensitive when it comes to anything sexual. I've seen AI models refusing to generate anything containing terms like "butt", so you need to get really creative if you wish to remove anything near such body parts. Many models also restrict celebrity rendering, styles similar to those of famous modern artists, and copies of other works - with a high risk of false positives, of course.
     - Restricted flexibility. There is a wealth of great, free extensions for AI generators like Stable Diffusion; you cannot install such extensions in a cloud environment.
     - Endless operating costs. A cloud solution would force Serif to introduce one of the dreaded subscription models - the reason why we abandoned Adobe, the reason why we chose Affinity. Of course AI features could be offered as a separate service, but this would lead to the same ugly result: subscription. In that case we could just as well return to Adobe - and that's exactly what would happen.
     - Legal liability if abusive images get rendered by the provider's cloud service. The provider would have to examine every single image very closely, worsening the privacy issue mentioned above and increasing the provider's own risk.
     That's why I believe a local install would be the only way to go. The question is not whether Serif needs to implement AI features, the question is how this should be done in the best way possible. I guess it would be quite beneficial if we try to collect some ideas to help Serif with their decision.
  4. I didn't read the full discussion (16 pages is a bit much), but here are three points to consider: 1) Looking away (or making fun of the technology) does not help at all. People WANT it. And even Skylum (Luminar) has already jumped - or better, stumbled - onto the train. 2) Everybody who has ever done some more complex image corrections will LOVE this new tool. 3) Serif will lose a major part of their customer base to PS and LM if they keep on ignoring reality. Now for a possible solution ... I read things like "cloud" and "legal" ... that sounds like the wrong approach. It's the approach PS and LM chose, but there's another, maybe better solution: a local install. If Serif provides a local installation of Stable Diffusion (the most flexible AI solution today, also free to use) combined with a "safe" pre-installed model/checkpoint that disallows nudity as well as celebrity and artist fakes, they should be on the safe side: images are generated by the user's computer, not by a cloud service owned by Affinity. Users could still use models/checkpoints other than the "safe" one (like URPM) at their very own discretion. That way there wouldn't be any cloud costs for Serif to cover, and Serif would not be liable for naughty things users might do if they dare to download and install models/checkpoints other than the preinstalled and highly recommended one (see the local inpainting sketch after this post list). There are several more benefits, but I wish to keep this short. Drawback: users need a modern GPU, starting from an RTX 2070 with 8 GB of VRAM, and about 30 GB of free disk space - that's what I would call minimum requirements. Think about it, correct me if I am wrong.
  5. Sure it's possible. Walking from Berlin to Moscow is also possible. If you concentrate on the moon, you won't need any tracking mount; Jupiter and Saturn may also come into reach. Some more basic astrophotography without one is possible too - but it's like walking from Berlin to Moscow, with the results usually being "suboptimal", to put it politely. If you love this hobby, do yourself a favor and invest in a tracking mount; there are inexpensive solutions that give you far more observing and general comfort and allow longer exposure times (but don't expect to be able to expose for minutes). Your first step, though, should be achieving sharp, focused pictures without excessive noise and grain - see the thread linked by NotMyFault.
  6. I still like the single image better. The moon is a horribly bright object, so there shouldn't be any strong noise-related issues. The problems are equipment, correct adjustment and of course "seeing"/atmospheric turbulence. "Lucky imaging" often works great: take a lot of single shots in a series and later pick the sharpest one (see the frame-selection sketch after this post list). Exposure times should be as short as possible. @irandar - Honestly, I wonder how you managed to get such low picture quality out of your gear; noise levels and graininess are unbelievably high. Both the camera and the MAK should provide far better results, so it would be interesting to know a bit more about your setup and your procedure. I'd first try lowering the ISO value as far as possible. I'd also make sure the picture is focused correctly (maybe a Bahtinov mask would be the way to go); there are several focusing methods, I can explain them if you like. Plus (this is a wild guess): switch the camera off for a few minutes to cool down, then switch it on and take the photos as fast as possible. I know, a bit hard without a tracking mount, but still doable. Reason: noise levels usually increase with the sensor temperature. And a general hint: if the stars "twinkle", that's good for a romantic mood but bad for taking astrophotos. The more the stars twinkle, the worse the atmospheric turbulence is, making sharp pictures very, very difficult if not impossible. But that's not the reason for the noise and grain in your photos. Another idea: you didn't take the photos from a warm room, but outside in the cold? If not, you should. --- Two example pics, single shots, not stacked in any way - the first one (no editing at all, straight from the camera) showing the size of the moon at 750 mm focal length, the second one giving an idea of the quality a single shot should provide. These were among my very first astrophotos, unfortunately taken without a field flattener and edited with no great expertise. Info: taken with a cheap Canon 750D, first picture ISO 100, 1/100 s; second picture ISO 400, 1/60 s, both with a Skywatcher Newton 150/750. Hint: right-click the pictures and select "Open LINK(!) in new tab" - then you'll be able to view them at their original size.
  7. These are the mistakes I'm particularly good at making: Several layers present, but the pixel layer is not selected - the pixel layer isn't highlighted in blue. Happens all the time when you apply some adjustment that gets added as a new layer and is thereby automatically selected; click the pixel layer and it works again. The image is not a pixel layer - in that case, right-click the layer and choose "Rasterise...". Opacity at 0 or very low - the lower the opacity, the weaker the effect. Flow at 0 or very low - then it doesn't work either, or not properly. Apart from Flow, all of this was already mentioned above by Dan C and Komatös, but perhaps it's a slightly more digestible morsel described this way.
  8. @irandar: The results are typical - it cannot work this way. Stacking expects pictures where the objects have a predictable, identical offset. Example - photo 1: star A at position 100 x 200, star B at position 150 x 170; photo 2: star A at 110 x 205, star B at 160 x 175. So we've got an offset of 10 on the X axis and an offset of 5 on the Y axis - the same offsets for star A and star B, and the same offsets for stars C, D, E, ... Your main problem is not exposure times, it's looking towards the rotational axis of the sky, the celestial pole, while there are significant time differences between your photos. Your AP stack shows that the North Star, Polaris, is somewhere near the upper right corner of the picture - the pivot point of the sky. This causes objects near the upper right corner to "move" more slowly than objects far away, so the offsets increase more and more with the distance from the North Star and aren't identical anymore. You can see this in the different lengths of the star trails. That's why the stacking algorithm produces those star trails - it just cannot work with stars of different offsets in the same picture (see the offset sketch after this post list). Just imagine sitting inside a dome with stars painted on its walls, the dome rotating around you. If you look up, the stars near the top move more slowly than the stars at the sides; if you look parallel to the ground, the stars seem to move at the same speed. If you take photos of the top and stack them, you'll get the same results as with your stacked photos; if you take pictures parallel to the ground, the results will be much better, as the offsets become almost identical. Remedies: a) Very short exposure times, as already mentioned in this thread, PLUS photos taken immediately after each other. b) Taking photos more parallel to the ground plane. c) Using a less expensive motorized mount like the Star Adventurer or AZ-GTi, or a "real" mount starting from the EQ5 class, not below. I'd go for option c) if you wish to dive deeper into astrophotography, as these mounts also allow mounting smaller telescopes. You won't be able to achieve really long exposure times of several minutes, but this would solve your problem. If you wish to stick with your DSLR and shorter focal lengths, mounts like the Star Adventurer and AZ-GTi are your friends; for everything else a "real" mount (EQ5 and up) is the way to go. Additional info: a) Your photos are out of focus - stars are discs, not points. Focusing on stars isn't easy; manual focusing at the preview's maximum magnification might help, as might a Bahtinov mask. With higher focal length/magnification, a Bahtinov mask would be the way to go. With lower focal length/magnification (like 55 mm), it's better to focus via the preview at maximum magnification, take a sample picture (short exposure time, higher ISO), check at maximum magnification whether the stars are points, and correct the focus if needed. Nothing is more frustrating than finding out all your pictures are out of focus, wasting the results of several hours of imaging. b) Take some dark frames, as Pauls already mentioned. You've got masses of defective pixels in your photos - those tiny red, blue, green or white dots that just about every digital camera produces with longer exposure times. Dark frames help eliminate these. Playing with the stacking options (Threshold and Clipping iterations) might also help. c) Use RAW photos. JPGs usually look better straight out of the camera, but the camera's JPG algorithms apply a lot of changes, often in a more or less random and unpredictable way - and astrophotography and stacking need unmodified data to achieve the best results. Just set your camera output to RAW + highest JPG quality, so you always get a pair of RAW and JPG and can never forget to set the output to RAW.
  9. @Bruno106: Could you post two of your original RAW files and two of the FITS files? I am no AP expert, but I could give this a try.
  10. Stacking adds a Curves and a Levels adjustment on top of the picture layer, so the visible result is indeed stretched. Just deactivate or delete the Curves and Levels adjustment layers to work with the unstretched image.
  11. Had a similar problem. My mistake had been the white balance setting in the stacking dialog - it was set to "Daylight", which caused all colors to be rendered yellow/white.
  12. See this video by James Ritson: Plus, you may want to check out this YouTuber and enter "masking" into the search box. You'll find countless useful hints about masking and working with masks: https://www.youtube.com/watch?v=j8jszUpmSM0
  13. Plus, a JPG usually gets heavily processed in-camera before saving, and it usually takes quite some effort to make a RAW look as good as the JPG. That's why I always set my camera to save RAW and JPG - to get a JPG I can use immediately, and a RAW I can really play with. You've got some harsh contrasts in your image, from pure white to pure black; to avoid overexposure, the shadows need to end up quite dark in the RAW. Still, it seems a bit overdone here. If you post the original RAW file, we could check and compare the RAW development with other RAW converters (see the RAW development sketch after this post list). Comparing highly processed JPGs with pristine RAWs just doesn't work.
  14. @Dan C - Mystery solved: a typical case of HAUS (highly advanced user stupidity). A mysterious entity (ME) had set the white balance in the stacking dialog to "daylight".
  15. @Kevin Barry - The ASI071MC Pro uses a Sony sensor; this thread is about Fujifilm. @Dan C - I am reluctant to open a new thread if there's already an old thread dealing with the matter; in most forums you'll immediately get lectured for redundancy. I'll try astro stacking again - maybe I just missed setting a vital option - and if the result is still the same, I'll open a new thread. Note: there's no total absence of colors like before - there's just ONE color; in my case it's yellow, with all the other colors not showing up.
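
Appendix to posts 3 and 4 - a minimal sketch of what a LOCAL generative inpainting setup could look like, using the open-source Hugging Face diffusers library on the user's own GPU. The model name, file paths and parameters are my own illustrative assumptions, not anything Serif has announced; the point is only that nothing ever leaves the user's machine.

    # Local generative inpainting sketch - runs entirely on the user's GPU.
    # Assumptions: Python with torch, diffusers and Pillow installed, an
    # RTX-class GPU with ~8 GB VRAM; model and file names are examples.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # Downloaded once into the local cache; afterwards it works offline.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = Image.open("portrait.png").convert("RGB")      # source image (example path)
    mask = Image.open("portrait_mask.png").convert("RGB")  # white = area to regenerate

    result = pipe(
        prompt="clean studio background",  # what to paint into the masked area
        image=image,
        mask_image=mask,
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    result.save("portrait_inpainted.png")

Swapping the pre-installed "safe" checkpoint for a different one would just be a different from_pretrained() argument - exactly the user-side decision described in post 4.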
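
Appendix to post 6 - a minimal "lucky imaging" frame-selection sketch. It scores each frame by the variance of the Laplacian (one common, simple sharpness metric; higher means sharper) using OpenCV and keeps the best ones. The folder name and the number of kept frames are made-up examples.

    # Pick the sharpest frames from a burst of moon shots (lucky imaging).
    import glob
    import cv2

    def sharpness(path: str) -> float:
        # Variance of the Laplacian: blurry frames have little high-frequency
        # detail, so their Laplacian response has low variance.
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(img, cv2.CV_64F).var()

    frames = glob.glob("moon_series/*.jpg")    # example folder of single shots
    ranked = sorted(frames, key=sharpness, reverse=True)

    for path in ranked[:10]:                   # keep the 10 sharpest, discard the rest
        print(f"{sharpness(path):10.1f}  {path}")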
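
Appendix to post 8 - the offset argument in a few lines of numpy. With a pure translation every star gets the same (dx, dy); with a rotation around a pivot (the celestial pole in the frame) the offset grows with the distance from the pivot, which is exactly why a translation-only alignment produces trails. All coordinates and the rotation angle are invented for illustration.

    # Why translation-only stacking fails when the celestial pole is in frame.
    import numpy as np

    stars = np.array([[100.0, 200.0],   # star A
                      [150.0, 170.0],   # star B
                      [400.0, 600.0]])  # star C, much farther from the pivot

    # Case 1: pure translation - every star shows the identical offset.
    shift = np.array([10.0, 5.0])
    print((stars + shift) - stars)

    # Case 2: rotation around a pivot (hypothetical pole position in the image).
    pivot = np.array([800.0, 50.0])
    theta = np.deg2rad(0.5)             # sky rotation between two exposures
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    rotated = (stars - pivot) @ rot.T + pivot
    print(rotated - stars)              # different offset per star -> star trails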
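
Appendix to post 13 - developing the RAW with neutral settings outside any particular converter makes the comparison fair. A minimal sketch using the open-source rawpy library (a LibRaw wrapper); the development choices shown are just one reasonable set of assumptions, and the file names are placeholders.

    # Neutral RAW development for a fair RAW-vs-JPG comparison.
    import rawpy
    import imageio.v3 as iio

    with rawpy.imread("IMG_0001.CR2") as raw:   # placeholder file name
        rgb = raw.postprocess(
            use_camera_wb=True,     # keep the camera's white balance
            no_auto_bright=True,    # don't let LibRaw brighten automatically
            output_bps=16,          # 16-bit output preserves shadow detail
        )

    iio.imwrite("IMG_0001_raw_developed.tiff", rgb)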