Everything posted by Asser82

  1. I can answer this more precisely. As in any RAW development tool, the non-destructive edit operations are stored only in the tool's catalog or in an XMP (or similar) sidecar file next to the RAW on disk; which one depends on your Lightroom settings. However, the information in such a file is only fully meaningful to the tool that produced it. Some parts, like star ratings or embedded previews, can be reused by other tools, but other settings like "Clarity" or your local adjustments cannot be used 1:1 by other tools because the underlying RAW engines are different. This is why Lightroom produces a "pre-developed" TIFF when it transfers the image to another tool via "Edit In": the other tool simply does not understand the Lightroom language, and even if it understood it, it would not have the code to reproduce the same results. What this means is: 1) If you use "Edit In", you automatically generate large additional files, which eat up your disk space. Additionally, your image manipulation process becomes destructive at this point: you cannot change your Lightroom edits afterwards and reapply the external editor's changes. 2) If an external editor is to use the RAWs directly instead of generated TIFFs, then the external tool must provide a Lightroom plugin which passes the selected RAWs to it. But again, the RAWs are provided as they were imported from the camera; your Lightroom edits stay in Lightroom. This is not Affinity specific. (A small sketch of what such a sidecar contains follows below.)
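
To make the split concrete, here is a minimal Python sketch. The sidecar string is a trimmed, hypothetical example of what Lightroom writes; only xmp:Rating is portable, while the crs: attributes are Lightroom-specific develop settings:

```python
# Read the portable star rating from a Lightroom XMP sidecar while ignoring
# the Lightroom-only develop settings (crs: namespace). The sidecar string
# below is a trimmed, hypothetical example.
import re

sidecar = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:xmp="http://ns.adobe.com/xap/1.0/"
        xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
        xmp:Rating="3"
        crs:Clarity2012="+25" crs:Exposure2012="+0.40"/>
  </rdf:RDF>
</x:xmpmeta>"""

rating = re.search(r'xmp:Rating="(\d+)"', sidecar)
print(rating.group(1))        # "3" - reusable by any tool
# crs:Clarity2012 etc. only tell Lightroom's own engine what to do; another
# RAW engine cannot reproduce them 1:1, as described above.
```
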
  2. If the Develop module covers your needs, you don't need it. My needs are not covered, because I need non-destructive RAW processing with sidecar files, integrated into a photo management software where the developed changes can be previewed. The tool must support camera body and lens profiles and auto-correct photos according to these.
  3. Highlights, shadows, midtones and blacks can be adjusted globally with the Selective Tone panel. DxO 12 (PhotoLab) now also has non-destructive local adjustments with gradient masks, smart masks with edge detection, and Nik-like U Points.
  4. What do you need layers for in a RAW development tool? If you use DxO, you do not need Affinity Photo's Develop persona. Layers, retouching, and photo combining (panorama, stacking, ...) are the domain of Affinity Photo (for me).
  5. This was one of the reasons for me, in addition to RAW development quality and lens support, to buy DxO PhotoLab this week. This constellation works really well: 1) Lightroom 6 as DAM (Import + Library) 2) DxO PhotoLab Elite for RAW (Develop) 3) Affinity Photo (Post-Develop). No software renting. 2) and 3) will be developed further in the future. For 1) I do not need further development because the DAM part is complete for me.
  6. You do not have MSI Afterburner or the RivaTuner Statistics Server (RTSS) running while your NVIDIA GPU is active, do you? I have experienced problems with the way this tool hooks itself into the process.
  7. I have now performed further measurements on my PC:
     1 core: 11 sec
     2 cores: 9 sec
     3 cores: 7 sec
     4 cores: 6.5 sec
     5 cores: 6 sec
     6 cores: 6 sec
     7 cores: 5.5 sec
     8 cores: 5 sec
     8 cores + "take no action" in the assistant: 4 sec
     So most of the improvement happens on the first 4 cores. The Retina setting did not have any speed effect here. But even on one core this PC is twice as fast as my old one with 4 cores. Maybe it is not only the CPU, but also the much better graphics card with fast VRAM. Old: Phenom II X4 2.8 GHz + 16 GB 1333 MHz DDR3 RAM + Radeon HD 5770 1 GB graphics. New: Core i7 7700K 4.5 GHz + 16 GB 3200 MHz DDR4 RAM + NVIDIA GTX 1070 8 GB graphics. (A quick Amdahl's-law check of these numbers follows below.)
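
These timings fit Amdahl's law quite well. A back-of-envelope sketch in plain Python, assuming the measured 1-core and 8-core times above are representative:

```python
# Fit Amdahl's law T(n) = T1 * (s + (1 - s) / n) to the measured timings
# above and predict the payoff of more cores. No external dependencies.
t1, t8 = 11.0, 5.0                        # measured: 1 core, 8 cores

# Solve t8/t1 = s + (1 - s)/8 for the serial fraction s.
s = (t8 / t1 - 1 / 8) / (1 - 1 / 8)
print(f"serial fraction ~ {s:.2f}")       # ~0.38: over a third never parallelizes

for n in (4, 8, 16):
    print(n, "cores:", round(t1 * (s + (1 - s) / n), 1), "sec")
# 4 cores: ~5.9 sec, 8 cores: 5.0 sec, 16 cores: ~4.6 sec - consistent with
# the observation below that a 16-thread Ryzen would gain little more.
```
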
  8. I once read a book which covered some Adobe Lightroom internals. In the Library module they start with the embedded JPEG preview if you do not trigger RAW-based preview generation while importing the photos into the catalog. The user gets the preview immediately, so that he sees something. When he starts developing a RAW, the RAW data is read and the development process is applied to get a RAW-data-based preview, if this did not already happen on import. So in the worst case the new image might look completely different from the embedded preview, but normally that is not the case. Next, there are different previews for different zoom levels: standard previews of approximately the size of the screen, used for overview zoom levels, and 1:1 previews for 100% and above. After RAW-data-based previews are generated, they are stored in a central preview cache together with early-stage development data, so the next time the RAW is opened the previews are generated with the help of the cache, which shortens the process. The important part is that you always see a preview, and it is not a good situation when you need more than 1 or 2 seconds to see the first preview, even if it is not that accurate. My feeling is that the Affinity preview generation process is not that well structured and that there is no preview cache available. That's why we have to wait so long every time we open the same RAW. (A sketch of such a cache follows below.)
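
A minimal sketch of that two-tier scheme in Python. read_embedded_jpeg() and render_raw() are hypothetical stand-ins for the real decode and develop steps:

```python
# Two-tier preview scheme as described above: show the embedded JPEG
# immediately, then cache RAW-based renders keyed by file identity and zoom
# tier so reopening the same RAW is fast.
import os
import threading

_cache = {}                                   # (path, mtime, tier) -> rendered image

def open_raw(path, tier, display):
    key = (path, os.path.getmtime(path), tier)    # mtime invalidates stale entries
    if key in _cache:
        display(_cache[key])                  # reopened file: instant and accurate
        return
    display(read_embedded_jpeg(path))         # first open: instant, approximate
    def develop():                            # accurate render in the background
        _cache[key] = render_raw(path, tier)  # slow: demosaic + develop pipeline
        display(_cache[key])
    threading.Thread(target=develop, daemon=True).start()
```
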
  9. It looks like all cores get a job to do. But it is difficult to tell whether an 8/16-core Ryzen would lead to a further improvement. It is often not the case.
  10. I recently updated my system to a Core i7 7700K with 3200 MHz DDR4 RAM, and my CR2 RAW opening times dropped from 22 seconds to 5. Moving the photos from HDD to SSD did not change opening times, so the bottleneck is not I/O, at least not when getting data from disk. It seems the RAW data processing algorithms are CPU-heavy. While opening a RAW file all logical cores are busy, so Affinity already splits the job across all cores. It feels like it does too much to set up its internal data structures compared to other tools, or it does not use the preview which is stored in RAW files to display an image fast and analyze the RAW data in the background while the preview is visible (see the sketch below). I installed IrfanView just for fun; displaying the RAWs there does not consume noticeable CPU time. On the other hand, my CPU is not fully used when opening a RAW file in Affinity, while it is when I move the sliders after the file is loaded, so there is some other aspect. Analyzing the RAM usage I can see that the software allocates 1 GB(!) of RAM for each RAW file I open. Allocating big buffers in a fragmented environment can also cause big delays. The memory consumption when opening the files in IrfanView is difficult to measure, but it is not more than 100 MB. And even if the allocation is not the problem, building a data model one GB in size can consume a lot of time. This is not needed in IrfanView because you cannot develop RAWs there.
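
To illustrate the embedded-preview shortcut, a sketch using the third-party rawpy package (this is my own illustration, not how Affinity or IrfanView actually work internally):

```python
# Pull the embedded JPEG preview out of a RAW (e.g. a CR2) without
# demosaicing the sensor data, via the third-party rawpy package.
import rawpy

def embedded_preview(path: str) -> bytes:
    with rawpy.imread(path) as raw:
        thumb = raw.extract_thumb()          # reads only the embedded preview
        if thumb.format == rawpy.ThumbFormat.JPEG:
            return thumb.data                # JPEG bytes, ready to display
        raise ValueError("no embedded JPEG preview in this file")

# Displaying these bytes is near-instant; the expensive full develop
# (raw.postprocess()) can then run in the background.
```
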
  11. What is the definition of "personal use" and "control"? If I am the only one who uses it, is that sufficient for personal use, even if someone else profits from it? I have a dedicated/non-shared PC at work which I do not own. Is this enough for "control"? In short: can I use one license for my private photos at home and as an image editor at work?
  12. In Topaz Clarity all colors seem to be washed out when the photo is a 16-bit ProPhoto file. It does not even look right with AdobeRGB; only sRGB seems to be OK. So I have to convert to sRGB in AFP to see accurate previews in Topaz. Is this a known issue? Processing chain: LR6 -> AFP -> Topaz. (A script version of the workaround follows below.)
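
The workaround in script form, as a Pillow sketch. "ProPhoto.icm" is a placeholder path to a ProPhoto RGB ICC profile on disk, and this assumes an 8-bit RGB copy (Pillow's 16-bit TIFF support is limited):

```python
# Convert a ProPhoto-tagged image to sRGB before handing it to a tool that
# only renders sRGB correctly.
from PIL import Image, ImageCms

img = Image.open("photo.tif").convert("RGB")   # 8-bit RGB copy
srgb = ImageCms.createProfile("sRGB")
converted = ImageCms.profileToProfile(img, "ProPhoto.icm", srgb, outputMode="RGB")
converted.save("photo_srgb.tif")
```
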
  13. I am currently not at home, but maybe you are using the live filter where I used the static Gaussian filter. If it is not reproducible, I will take a second look at it and will post a YouTube clip if I can reproduce it.
  14. Hi MEB, it also bleeds at the canvas borders: create a new file, add a pixel layer, fill with black, blur. If the whole thing is by design, maybe the blur filters could get a slider to control the bleeding amount in a future release. (See the boundary-handling sketch below.)
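
How much a blur bleeds at the canvas edge is exactly a boundary-handling choice. A small numpy/scipy illustration (my own sketch, not Affinity's implementation):

```python
# Boundary handling decides canvas-edge "bleed" in a Gaussian blur.
import numpy as np
from scipy.ndimage import gaussian_filter

canvas = np.zeros((64, 64))                   # pixel layer filled black

# 'constant' pads with cval beyond the canvas, so a non-black pad bleeds in:
bleeds = gaussian_filter(canvas, sigma=5, mode='constant', cval=1.0)
print(bleeds.max() > 0)                       # True: the edges turn gray

# 'nearest' extends the border pixels instead, so black stays black:
clean = gaussian_filter(canvas, sigma=5, mode='nearest')
print(clean.max() == 0)                       # True: no bleeding
```
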
  15. Hi, there was a thread here which has not been commented on yet: https://forum.affinity.serif.com/index.php?/topic/34531-blurring/ For many people the usefulness of blurring is very limited when there is no way to prevent bleeding at selection/image borders. See near the end of the thread for a summary.
  16. If I had to say where the small freezes come from, I would say they are caused by garbage collection. Affinity's frontend is written in .NET/WPF. In .NET, memory is managed by a subsystem which periodically scans all object instances to detect the ones that are no longer referenced by the application, so that their memory can be freed. While this "garbage collection" runs, the UI thread's execution can be affected, which results in a non-responsive UI. Another problem occurs when the underlying data structures are not designed to be thread-safe. This introduces the requirement that a data structure be modified by only a single thread at a time, which is often enough the UI thread; while the UI thread is busy => no UI feedback. (A sketch of the usual way around this follows below.) The third point is that the backend libraries with all the algorithmic stuff will be written in a language common to Mac and Windows, which will be something like C/C++, so that the same code can be compiled for the different target systems. I think there might also be a minor performance penalty at the border between the managed and the native world. Now to the Mac side of life: because there is no full .NET Framework or WPF for Mac, the frontend will be written in a native language like C++/Qt. C++ produces more lightweight unmanaged code, and the developer has much more control over when and how memory is allocated and freed. In addition, there should be no performance loss in the communication between frontend and backend, because both are native.
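
The UI-thread contention part is a general pattern, independent of .NET. A minimal sketch in Python (heavy_develop() and redraw() are hypothetical stand-ins; any GUI toolkit works the same way):

```python
# Keep heavy work off the UI thread: the UI thread only draws and reacts,
# a worker thread computes, and results come back through a queue that the
# UI thread drains.
import queue
import threading

results = queue.Queue()

def on_slider_moved(value):                  # runs on the UI thread
    threading.Thread(target=worker, args=(value,), daemon=True).start()

def worker(value):                           # runs off the UI thread
    image = heavy_develop(value)             # slow, CPU-bound work
    results.put(image)                       # hand the result back safely

def on_ui_tick():                            # called periodically by the UI loop
    try:
        redraw(results.get_nowait())         # cheap: just paint the new image
    except queue.Empty:
        pass                                 # nothing ready; UI stays responsive
```
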
  17. For your information: I have updated what I found above to the current state of things. See above.
  18. Maybe move this issue to the common bugs section? The beta section does not seem to be moderated...
  19. I have drawn a picture, see attachment. The question is whether the pixel colors outside of the mask (marked red) should be taken into the calculation of the pixel colors inside, weighted by the Gaussian curve in this case. I would also think that they should not. If the pixels on the left side were all black and on the right all white, I would not expect any shade of gray after blurring. I have created screenshots which show what happens in Affinity and GIMP: both produce gray gradients near the mask border. In Ps there seems to be an option (at least in some filters) whether to bleed or not; see https://www.youtube.com/watch?v=tdfQ38YlzeY from 3:05. Btw, I did not find a way to blur the black square in Affinity without introducing bleeding. I tried separate layers, masks, adjustments; no way. I even copied the black box into a new image without any white pixels and without masks. Even there bleeding is introduced, where GIMP keeps the image black. So in terms of non-bleeding black-box blurring capabilities :-) : 1) Ps 2) GIMP 3) AFP. (A sketch of a non-bleeding blur follows below.)
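
What "do not bleed" means mathematically is a normalized (masked) convolution: pixels outside the selection get zero weight instead of leaking in. A numpy/scipy sketch (my own illustration, not how any of the three tools implements it):

```python
# Blur only inside a mask: weight each pixel by the mask, blur both the
# weighted image and the mask, and renormalize. Outside pixels contribute
# zero weight, so black next to white stays black after blurring.
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_blur(img, mask, sigma):
    m = mask.astype(float)
    num = gaussian_filter(img * m, sigma)    # mask-weighted blur of the image
    den = gaussian_filter(m, sigma)          # blur of the weights themselves
    out = img.copy()
    inside = mask.astype(bool)
    out[inside] = num[inside] / np.maximum(den[inside], 1e-12)
    return out

img = np.zeros((64, 64)); img[:, 32:] = 1.0  # black left half, white right half
mask = np.zeros((64, 64), bool); mask[:, :32] = True
assert masked_blur(img, mask, 3.0)[:, :32].max() == 0.0   # no gray bleeds in
```
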
  20. Duplicate of?: https://forum.affinity.serif.com/index.php?/topic/34411-blur-filter/
  21. I have recorded some simple steps to reproduce something similar: https://youtu.be/FXBosYgmT0g Thx Chris for the hint about the added scrollbar. Knowing that, I only had to construct a case where the panel has a hard time deciding whether the scrollbar is needed. Your developers will hate me for that :-/
  22. Might be related to: https://forum.affinity.serif.com/index.php?/topic/33262-afphoto-files-are-getting-bigger-and-bigger/ https://forum.affinity.serif.com/index.php?/topic/33969-ap-15145-beta-win-file-size-issue/ It is Affinity Photo there, but I am sure that the same file format is used. To me it looks like a memory leak, where the history items remain in the document even if the History panel does not know anything about them.
  23. Here is the relevant snapshot, which might help. It looks like a layout problem when there are many items to load.
  24. While the Affinity guys are thinking about a PDF or online help, I have helped myself by adapting the HTML help for Windows to be usable on my Android device. Maybe this can help you too: https://forum.affinity.serif.com/index.php?/topic/33786-instruction-reading-affinity-help-on-android/
  25. Yes, that was a test that disabling layers works. Who knows... But it is interesting to see the opposite character of these colors. Ah, wait, are you drawing on the green channel only? White adds green, black removes green?