
Asser82

Members · 141 posts

Posts posted by Asser82

  1. 2 hours ago, Whitedog said:

    I'm curious. How does Affinity Photo handle Adobe Lightroom edits?

I can answer this more precisely. As in any RAW development tool, the non-destructive edit operations are stored only in the tool's catalog or in an XMP (or similar) sidecar file next to the RAW on disk; which one depends on your Lightroom settings. However, the information in such a file is only fully meaningful to the tool that produced it. Some parts, like star ratings or embedded previews, can be reused by other tools, but settings like "Clarity" or your local adjustments cannot be used by other tools 1:1, because the underlying RAW engines are different. This is why Lightroom produces a "pre-developed" TIFF when it transfers an image to another tool via "Edit In". Another tool simply does not understand the Lightroom language, and even if it did, it lacks the code to reproduce the same results. What this means is:

1) If you use "Edit In", you automatically generate large additional files, which consume your disk space. Your image manipulation process also becomes destructive at this point: you cannot change your Lightroom edits afterwards and reapply the external editor's changes.

2) If an external editor is to use the RAWs directly instead of generated TIFFs, the external tool must provide a Lightroom plugin that passes the selected RAWs to it. But again, the RAWs are provided as they were imported from the camera. Your Lightroom edits stay in Lightroom.

    This is not Affinity specific.
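Star ratings are one of the few portable parts of a sidecar. As a minimal sketch (the sidecar text and helper name here are hypothetical and stripped down; real XMP files carry many more, mostly tool-specific, properties), the standard xmp:Rating property can be read with nothing but the Python standard library:

```python
# Minimal sketch: reading the star rating from a Lightroom-style XMP
# sidecar. The sidecar text below is a hypothetical, stripped-down
# example; real files carry many more, mostly tool-specific, properties.
import xml.etree.ElementTree as ET

XMP_NS = "http://ns.adobe.com/xap/1.0/"
RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

sidecar = f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="{RDF_NS}">
    <rdf:Description xmlns:xmp="{XMP_NS}" xmp:Rating="4"/>
  </rdf:RDF>
</x:xmpmeta>"""

def read_rating(xmp_text):
    """Return the xmp:Rating value if present, else None."""
    root = ET.fromstring(xmp_text)
    for desc in root.iter("{%s}Description" % RDF_NS):
        rating = desc.get("{%s}Rating" % XMP_NS)
        if rating is not None:
            return int(rating)
    return None

print(read_rating(sidecar))  # → 4
```

Properties in the proprietary namespaces (develop settings, local adjustments) would parse just as easily, but, as said above, no other tool has the engine to turn them into the same pixels.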

     

     

     

  2. 12 minutes ago, Rick G said:

    Is not Dxo 12 a competing product in the $100 - $150 price range? Why do I need it?

If the develop module covers your needs, you don't need it. My needs are not covered, because I need non-destructive raw processing with sidecar files, integrated into a photo management tool where the developed changes can be previewed. The tool must support body and lens profiles and auto-correct photos according to them.

  3. 1 hour ago, TEcHNOpls said:

I do a lot of shadow/highlight recovery; this works best in RAW (i.e. DNG, CR2) format, but I can do only global adjustments in DxO.

So I want to import the DNG file into AP, but then I can't save what my settings were to e.g. recover some pixels, or redevelop the raw file (brighter/darker spots).

Highlights, shadows, midtones and blacks can be adjusted globally with the selective tone panel. DxO 12 (PhotoLab) now also has non-destructive local adjustments with gradient masks, smart masks with edge detection, and Nik-like U-points.

  4. On 25.10.2017 at 7:42 PM, WMax70 said:

    Ahw.... this hurts. 

    So many request to get Nik tools working. Now it is acquired by DXO. 

    But we still want to have Vivenza2 working... 

    please ensure it will... 

It will be continued. But there are competitors stepping in, and they will add value here.

     

    https://photorumors.com/2017/10/25/breaking-dxo-acquires-nik-collection-from-google-will-continue-development/

     

This was one of the reasons, in addition to raw development quality and lens support, for me to buy DxO PhotoLab this week. This constellation works really well:

     

    1) Lightroom 6 as DAM (Import+Library)

    2) Dxo PhotoLab Elite for RAW (Develop)

    3) Affinity Photo (Post Develop)

     

No software renting. 2) and 3) will continue to be developed. For 1) I do not need further development, because the DAM part is complete for me.

  5.  

Crash when blending layers using an NVIDIA GPU (driver version 381.65); note that it doesn't happen when using the Intel HD4600 integrated GPU:
     
    Faulting application name: Photo.exe, version: 1.5.2.66, time stamp: 0x590348d3
    Faulting module name: ucrtbase.dll, version: 10.0.15063.0, time stamp: 0xaf2e320b
    Exception code: 0xc0000409
    Fault offset: 0x00000000000734be
    Faulting process id: 0xd68
    Faulting application start time: 0x01d2c45d37fd94c1
    Faulting application path: C:\Program Files\Affinity\Affinity Photo Customer Beta\Photo.exe
    Faulting module path: C:\WINDOWS\System32\ucrtbase.dll
     
Sorry, there is no crash report, but the log and system info are attached. There is a dump file which can be downloaded from: https://www.dropbox.com/s/uz3vhkbccp9x4q2/Photo.exe.3432.dmp?dl=0

     

     

Are you sure you don't have MSI Afterburner or the RivaTuner Statistics Server (RTSS) running while your NVIDIA GPU is active? I have experienced problems with the way that tool hooks itself into processes.

  6. I have now performed further measurements on my PC:

     

    1 core: 11 sec

    2 cores: 9 sec

    3 cores: 7 sec

4 cores: 6.5 sec

    5 cores: 6 sec

    6 cores: 6 sec

7 cores: 5.5 sec

    8 cores: 5 sec

8 cores + Take No Action in the assistant: 4 sec

     

So most of the improvement happens on the first 4 cores. The retina setting did not have any speed effect here.
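The numbers above roughly follow Amdahl's law. A sketch fitting the posted measurements (the parallel fraction is my own estimate, derived from the 1-core and 8-core times, not a profiled figure from Affinity):

```python
# Rough Amdahl's-law check against the measured opening times above:
# T(n) = T1 * ((1 - p) + p / n), where p is the parallelizable fraction.
def amdahl_time(t1, p, n):
    return t1 * ((1 - p) + p / n)

t1, t8 = 11.0, 5.0           # measured: 1 core and 8 cores
speedup = t1 / t8            # 2.2x
# Solve speedup = 1 / ((1 - p) + p / 8) for p:
p = (1 - 1 / speedup) / (1 - 1 / 8)   # ~0.62

for n in (1, 2, 4, 8):
    print(n, "cores:", round(amdahl_time(t1, p, n), 1), "sec")
```

With p around 0.62 the model predicts roughly 7.6 sec on 2 cores and 5.9 sec on 4, close to the measured 9 and 6.5, which matches the observation that most of the gain comes from the first few cores.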

     

But even on one core this PC is twice as fast as my old one with 4 cores. Maybe it is not only the CPU, but also the much better graphics card with fast VRAM.

     

Old: Phenom II X4 2.8 GHz + 16 GB 1333 MHz DDR3 RAM + Radeon HD 5770 1 GB graphics

New: Core i7 7700K 4.5 GHz + 16 GB 3200 MHz DDR4 RAM + NVIDIA GTX 1070 8 GB graphics

If that is true, then the JPG preview is a nice full-resolution, excellent-quality image.

I once read a book which covered some Adobe Lightroom internals. In the library module they start with the embedded JPEG preview, if you do not trigger raw-based preview generation while importing the photos into the catalog. The user gets the preview immediately, so that he sees something. When he starts developing a raw, the raw data is read and the development process is applied to produce a raw-based preview, if that did not already happen on import. So in the worst case the new image might look completely different from the embedded preview, but normally that is not the case. Next, there are different previews for different zoom levels: standard previews, approximately the size of the screen, used for overview zoom levels, and 1:1 previews for 100% and above. After raw-based previews are generated, they are stored in a central preview cache together with early-stage development data. So the next time the raw is opened, the raw-based previews are generated with the help of the cache, which shortens the process.

     

The important part is that you always see a preview. And it is not a good situation when you need more than 1 or 2 seconds to see the first preview, even if it is not that accurate. My feeling is that the Affinity preview generation process is not that well structured, and that there is no preview cache available. That's why we have to wait so long every time we open the same raw.
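The two-tier flow described above can be sketched as follows (all names are hypothetical; this is my reading of the book's description, not actual Lightroom code):

```python
# Sketch of the two-tier preview strategy described above: show the
# embedded JPEG immediately, and cache raw-based renders per zoom level
# so reopening the same raw is fast. All names are hypothetical.
class PreviewCache:
    def __init__(self):
        self._cache = {}                        # (path, size) -> preview

    def first_preview(self, raw_path):
        """Show something immediately: a cached render wins, else the
        embedded JPEG, which needs no raw development."""
        cached = self._cache.get((raw_path, "standard"))
        return cached if cached else self._embedded_jpeg(raw_path)

    def get_preview(self, raw_path, size="standard"):
        """Accurate preview: rendered from raw once, then cached."""
        key = (raw_path, size)
        if key not in self._cache:
            self._cache[key] = self._render_from_raw(raw_path, size)
        return self._cache[key]

    def _embedded_jpeg(self, raw_path):
        return "embedded-jpeg:" + raw_path      # stand-in for JPEG extraction

    def _render_from_raw(self, raw_path, size):
        return "raw-render:%s@%s" % (raw_path, size)  # stand-in for the slow develop step

cache = PreviewCache()
print(cache.first_preview("IMG_0001.CR2"))  # embedded JPEG, instant
cache.get_preview("IMG_0001.CR2")           # slow raw render, now cached
print(cache.first_preview("IMG_0001.CR2"))  # cached accurate preview
```

The point is that the slow raw render happens at most once per raw and zoom level; every later open is a cache lookup.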

Hi Asser82, do you feel then that the critical part of a machine spec would be the number of cores available, rather than whether it's an i5 or i7?

Looking at MEB's topic on performance I was kind of getting that feeling. Or is it primarily how the data model is getting built?

If it's the first, at least I can factor that into my buying decisions; if it's the second, I may as well hold tight and see how things progress!

     

    Regards

It looks like all cores get a job to do. But it is difficult to tell whether an 8/16-core Ryzen would lead to a further improvement. It is often not the case.

I recently updated my system to a Core i7 7700K with 3200 MHz DDR4 RAM, and my CR2 raw opening times dropped from 22 seconds to 5. Moving the photos from HDD to SSD did not change opening times. So the bottleneck is not I/O, at least not when getting data from disk.

     

It seems that the raw data processing algorithms are CPU-heavy. While opening a raw file all logical cores are busy, so Affinity already splits the job across all cores. It feels like it does too much to set up its internal data structures compared to other tools, or it does not use the preview stored in raw files to display an image fast and analyze the raw data in the background while the preview is visible. I installed IrfanView just for fun. Displaying the RAWs there does not consume noticeable CPU time.

     

On the other hand, my CPU is not fully used when opening a RAW file in Affinity, while it is when I move the sliders after the file is loaded. So there is some other aspect. Analyzing the RAM usage, I can see that the software allocates 1 GB(!) of RAM for each RAW file I open. Allocating big buffers in a fragmented environment can also cause big delays. The memory consumption when opening the files in IrfanView is difficult to measure, but it is not more than 100 MB.

     

But even if the allocation is not the problem, building a data model one GB in size can consume much time. This is not needed in IrfanView, because you cannot develop RAWs there.
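A back-of-envelope check makes the 1 GB figure plausible (the assumptions are mine, not Affinity's documented internals): decoding a roughly 24-megapixel raw to 4 channels of 32-bit floats already takes a few hundred MB per buffer, so a handful of working buffers lands near 1 GB.

```python
# Back-of-envelope check of the ~1 GB per-file figure. Assumptions (mine,
# not Affinity's documented internals): ~24-megapixel raw, decoded to
# 4 channels of 32-bit floats, with about three full-resolution buffers
# (decode target, working copy, composite) alive at once.
megapixels = 24e6
bytes_per_pixel = 4 * 4                  # 4 channels x 4-byte float
one_buffer_gb = megapixels * bytes_per_pixel / 1024**3

print(round(one_buffer_gb, 2))           # ~0.36 GB per buffer
print(round(3 * one_buffer_gb, 2))       # ~1.07 GB for three buffers
```

IrfanView only has to decode the embedded JPEG to 8-bit screen pixels, which explains why its footprint stays two orders of magnitude smaller.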

Have you checked Preserve Alpha in the Filter dialog? I'm not getting this with layers...

Maybe I'm missing something?

     

I am currently not at home, but maybe you are using the live filter, where I used the static Gaussian filter. If it is not reproducible, I will take a second look at it and will post a YouTube clip if I can reproduce it.

  11. Hi all,

    Thanks for your feedback. It does indeed bleed to the selection (Show Pixel Selection unticked).

Photoshop does offer more control here. I'm not sure if I'd call this an improvement or a bug, but I will pass it to the dev team to be looked at.

     

[attachment: blur_bleeding.png]

     

    Hi MEB,

     

It also bleeds at the canvas borders:

Create a new file, add a pixel layer, fill it with black, and blur.

     

If the whole thing is by design, maybe the blur filters could get a slider to control the bleeding amount in a future release.

If I had to say where the small freezes come from, I would say they are caused by garbage collection. Affinity's frontend is written in .NET/WPF. In .NET, memory is managed by a subsystem which periodically scans all object instances to detect the ones no longer referenced by the application, so that their memory can be freed. While this "garbage collection" runs, UI thread execution can be affected, which results in a non-responsive UI.

     

Another problem occurs when the underlying data structures are not designed to be thread-safe. This introduces the requirement that the data structures be modified by only a single thread at a time, which is often enough the UI thread. While the UI thread is busy => no UI feedback.

     

The third point is that the backend libraries with all the algorithmic stuff will be written in a language common to Mac and Windows, which will be something like C/C++, so that the same code can be compiled for the different target systems. I think there might also be a minor performance penalty at the border between the managed and the native world.

     

    Now to the mac side of life:

     

Because there is no full .NET Framework or WPF for Mac, the frontend will be written in a native language like C++/Qt. C++ produces more lightweight unmanaged code, and the developer has much more control over when and how memory is allocated and freed. In addition, there should be no performance loss in the communication between frontend and backend, because both are native.
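The .NET behavior itself cannot be demonstrated here, but the same class of pause exists in any garbage-collected runtime. A Python analogy (an illustration only; CPython's collector differs from .NET's in design and typical pause times):

```python
# Python analogy (not .NET): a full garbage collection runs on the
# calling thread and blocks it for its whole duration -- the same class
# of pause that can freeze a managed UI thread.
import gc
import time

gc.disable()                       # defer collection while garbage builds up
junk = []
for _ in range(100_000):
    node = []
    node.append(node)              # reference cycle: only the GC can reclaim it
    junk.append(node)
del junk                           # the cycles are now unreachable

start = time.perf_counter()
freed = gc.collect()               # explicit full collection: blocks right here
pause_ms = (time.perf_counter() - start) * 1000
gc.enable()

print("collected", freed, "objects in", round(pause_ms, 1), "ms")
```

If that blocking call ran on a UI thread, the window would be frozen for exactly that duration, which is the kind of micro-freeze described above.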

I have drawn a picture, see attachment. The question is whether the pixel colors outside of the mask (marked red) should be taken into account when calculating the pixel colors inside, weighted by the Gaussian curve in this case. I would also think that they should not. If on the left side the pixels were all black and on the right all white, I would not expect any shade of gray after blurring.

     

I have created screenshots which show what happens in Affinity and GIMP. Both produce gray gradients near the mask border. In Ps there seems to be an option (at least in some filters) whether to bleed or not. See https://www.youtube.com/watch?v=tdfQ38YlzeY from 3:05.
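The non-bleeding behavior can be implemented with a standard trick known as normalized convolution: blur both image×mask and the mask itself with the same kernel, then divide, so pixels outside the mask contribute zero weight. A 1-D pure-Python sketch (my illustration of the technique, not how any of these programs actually implements it):

```python
# 1-D sketch of "normalized convolution": a blur that does not bleed
# across a mask edge, because outside-mask pixels get zero weight.
def blur(values, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(values)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(values):
                acc += w * values[j]
        out.append(acc)
    return out

def masked_blur(image, mask, kernel):
    blurred_img = blur([p * m for p, m in zip(image, mask)], kernel)
    blurred_msk = blur(mask, kernel)
    # Renormalize by the blurred mask; outside-mask values never leak in.
    return [bi / bm if bm > 0 else 0.0
            for bi, bm in zip(blurred_img, blurred_msk)]

kernel = [0.25, 0.5, 0.25]            # small gaussian-like kernel
image = [0, 0, 0, 0, 1, 1, 1, 1]      # black on the left, white on the right
mask  = [1, 1, 1, 1, 0, 0, 0, 0]      # blur only the black half
print(masked_blur(image, mask, kernel))   # the black half stays exactly 0.0
```

A plain blur of the same data would pull white into the black half near the edge (the gray gradient seen in the screenshots); the renormalized version keeps it black, which is the behavior expected in the picture above.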

     

Btw. I did not find a way to blur the black square in Affinity without introducing bleeding. I tried separate layers, masks, adjustments: no way. I even copied the black box into a new image without any white pixels and without masks. Even there bleeding is introduced, where GIMP keeps the image black. So in terms of non-bleeding blurring capabilities of a black box :-) :

     

    1) Ps

    2) GIMP

    3) AFP

    post-32068-0-16544100-1485426554_thumb.png

    post-32068-0-49680400-1485455044_thumb.png

    post-32068-0-68258300-1485455056_thumb.png

  14. Might be related to:

     

    https://forum.affinity.serif.com/index.php?/topic/33262-afphoto-files-are-getting-bigger-and-bigger/

    https://forum.affinity.serif.com/index.php?/topic/33969-ap-15145-beta-win-file-size-issue/

     

It is AF Photo there, but I am sure that the same file format is used. To me it looks like a memory leak, where the history items remain in the file even if the history panel does not know anything about them.

  15. I'd really like a pdf Help as well - I can read it when away from Affinity/my desktop machine.

     

    PLEASE provide a Mac and Windows PDF manual. I don't care about printing, but using it on a phone or 2nd screen is very useful. Also, as mentioned earlier ePub (on phone apps) and PDF conversions do not provide all information, which makes those options insufficient. This is very important to me.

     

    Isn't PagePlus X9 among the best applications available for PDF manual creation?

     

While the Affinity guys are thinking about a PDF or online help, I have helped myself by adapting the HTML help for Windows to be usable on my Android device. So maybe this can help you:

     

    https://forum.affinity.serif.com/index.php?/topic/33786-instruction-reading-affinity-help-on-android/
