Saijin_Naib

Bug: EXR CMOS RAF Imported at Half Resolution


I've run into an issue importing RAF files from a Fujifilm X-S1, which uses the EXR CMOS sensor with a non-traditional photosite orientation similar to the SuperCCD layout. Affinity Photo imports my 12MP RAF files at 6.11MP (2848x2144 RGBA/32) instead of the native 12.00MP (4000x3000 RGBA/32). The same RAF files are read correctly by the Windows 10 Photos app and by SilkyPix (Fujifilm's bundled raw converter).

Thanks!

X-S1_RAF_JPEG_Sample.7z

16 hours ago, Chris_K said:

Hi Saijin_Naib

Thanks for raising this, I have passed it on to the development team

Cheers

Chris,

As I understand it, Serif may use dcraw/LibRaw, so I've raised the issue with the maintainer of dcraw as well. I haven't heard anything yet.

Thanks!

Quote

Dear Sir:

X-S1 files contain two subframes (~6 Mpix each). Depending on camera settings, the second subframe may have lower data values (in extended dynamic range mode the second subframe looks underexposed). The two pixel (sub-)sets are shifted by '0.5 pix'.

LibRaw extracts both; you need to set imgdata.params.shot_select to 1 (before calling LibRaw::open_file) to extract the second subframe. Our RawDigger also shows both frames (use the Frame: selector at the bottom of the program window).

 

FastRawViewer extracts and shows only the 1st frame, because it is built for speed.

To create a higher-resolution image, one needs to merge these two subframes. SilkyPix (most likely) does this. The camera itself also does this when recording its JPEG output.

Alex Tutubalin, LibRaw team

 



LibRaw is able to extract both subframes from these RAF files (and from DNG files produced by Adobe's converter; those files also contain all the data).

Two calls of open_file+unpack are needed for such extraction (shot_select=0 and shot_select=1). I'm not sure Affinity Photo does this; most likely it uses just a single open_file+unpack call, extracting only the 1st subframe.
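As a rough sketch of the two-pass extraction described above (C++ with the LibRaw API; the output filenames and the minimal error handling are illustrative assumptions, not anyone's shipping code):

```cpp
#include <libraw/libraw.h>
#include <string>

// Extract both EXR subframes from one RAF: one open_file+unpack pass per
// subframe, selected via imgdata.params.shot_select before open_file.
int extract_subframes(const char* raf_path)
{
    for (unsigned shot = 0; shot < 2; ++shot) {
        LibRaw proc;
        proc.imgdata.params.shot_select = shot;  // pick subframe 0 or 1
        proc.imgdata.params.output_tiff = 1;     // write TIFF instead of PPM
        if (proc.open_file(raf_path) != LIBRAW_SUCCESS) return -1;
        if (proc.unpack() != LIBRAW_SUCCESS) return -1;
        proc.dcraw_process();                    // demosaic the selected subframe
        std::string out = std::string(raf_path) + "_shot"
                        + std::to_string(shot) + ".tiff";
        proc.dcraw_ppm_tiff_writer(out.c_str()); // e.g. photo.RAF_shot0.tiff
    }
    return 0;
}
```

Running this on an X-S1 RAF should yield two ~6 Mpix TIFFs, one per subframe.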

 

LibRaw does not contain any code to merge these subframes in postprocessing, so one needs to implement one's own subframe merge. Also, an accurate merge may not be as simple as it sounds, because the colors (at least the white balance) look slightly different between the two subframes. An accurate merge may require a lot of camera-specific work (or vendor-specific, if all the needed data is contained in the RAF files' metadata). I'm very unsure this specific work is worth the result, because all these cameras are very old (launched 7-14 years ago), so most users have upgraded to something more common.
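To make the merge problem concrete, here is a deliberately naive sketch of a DR-mode merge, assuming the second subframe is underexposed by a known EV offset. The blend rule, the function name, and the parameters are all illustrative assumptions; as noted above, a real merge would also have to handle per-subframe white balance and the half-pixel offset.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Naive EXR DR-mode merge sketch: scale the underexposed subframe back up
// by the EV offset, use it alone where the long exposure clips, and
// average the two elsewhere. NOT Fujifilm's actual algorithm.
std::vector<float> merge_dr_subframes(const std::vector<uint16_t>& long_exp,
                                      const std::vector<uint16_t>& short_exp,
                                      float ev_offset,     // e.g. 2.0 for DR400
                                      uint16_t clip_level) // sensor saturation
{
    const float gain = std::pow(2.0f, ev_offset); // bring short frame to long-frame scale
    std::vector<float> out(long_exp.size());
    for (size_t i = 0; i < long_exp.size(); ++i) {
        if (long_exp[i] >= clip_level)
            out[i] = short_exp[i] * gain;  // clipped highlight: trust short frame
        else
            out[i] = 0.5f * (long_exp[i] + short_exp[i] * gain);
    }
    return out;
}
```

Even this toy version shows why the work is camera-specific: the EV offset, clip level, and per-channel gains all have to come from (or be calibrated against) the camera's metadata.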


Cameras that implement two subframes are:
  - S3 Pro and S5 Pro (but not S2 Pro)
  - many fixed-lens cameras with the EXR suffix in the model name (not sure about the full list; I just checked S200 EXR, F900 EXR, and HS20 EXR samples)
  - X-S1 (although it is an X-named camera, it is not X-Trans, but a usual Bayer 2x2 pattern)


Current Fuji X-Trans cameras are 'single subframe', so a subframe merge is not needed (the X-Trans pattern is another story: there are NO fast and good-enough demosaic methods for it, so its hypothetical moiré resistance is either not realized /with fast X-Trans demosaic methods/ or real but at the cost of very slow processing).

Alex



I know this is a big ask, given that the EXR CMOS is a bit antiquated by most standards (though personally I believe it can still return beautiful images), but I would LOVE to see Affinity Photo support these types of RAW images properly/natively. It would be one of the only commercial programs to do so, aside from the (awful) SilkyPix and Adobe Photoshop/Lightroom (ACR 10.2 tested).


Any updates on this?

I've also noticed that when I shoot in 6MP mode, the second subframe (which should be a different exposure) is disregarded entirely, cutting out a large amount of the dynamic range of my images, which is the entire purpose of the EXR sensor arrangement.


Hi @Saijin_Naib, we currently have no plans to implement support for FujiFilm EXR CMOS. As we've only had 1 piece of feedback from yourself, and it's an old image format, we can't justify the investment in time that it would take in order to perform the multiple sub-frame merging. 

Can Fujifilm's SilkyPix output a merged 16-bit image (e.g. JPEG-XR)? Is your camera supported by Fujifilm's new Raw Studio?

http://www.fujifilm.com/news/n171130.html

16 hours ago, Mark Ingram said:

Hi @Saijin_Naib, we currently have no plans to implement support for FujiFilm EXR CMOS. As we've only had 1 piece of feedback from yourself, and it's an old image format, we can't justify the investment in time that it would take in order to perform the multiple sub-frame merging. 

Can Fujifilm's SilkyPix output a merged 16-bit image (e.g. JPEG-XR)? Is your camera supported by Fujifilm's new Raw Studio? 

http://www.fujifilm.com/news/n171130.html

@Mark Ingram,

SilkyPix does use SOME of the second subframe. It depends on the mode the shot was taken in, as well as settings and image content. It apparently can also output 16-bit TIFF, but again, I have no control over what it uses from the subframes or how, which brings me back to my problem and my request of you.

So, for shots taken at full resolution, SilkyPix demosaics both 6.1MP subframes into the full 12MP image.

For shots taken for full dynamic range, SilkyPix uses SOME of both subframes, applying whatever internal algorithm it has to decide which data to keep from the metered and under-exposed subframes.

Adobe Camera Raw (Lightroom) does a much better job than SilkyPix and uses far more of the data from both subframes, but it still throws some of it away.

The best workflow I've found is to use dcraw to dump all subframes to separate 16-bit TIFFs with no color management, white balance, or other changes. Raw RAW, if you will. I then create a new HDR image in Affinity, align, and choose not to tone-map. This gives me an image very close to what the SOOC JPEG represents, but with much more processing latitude. I can then develop or tone-map and get some really great results, seemingly beyond what I can get from a straight RAF import into Affinity or from other tools (Lightroom/SilkyPix).

Here's an export that I uploaded to Twitter.

XA5.thumb.jpg.fa60c67df2d9be40d16b9553355b932e.jpg

I understand that it's old, but your concept of HDR/image stacks fits well with how the EXR sensor works, and some of your tutorials relate directly to how EXR sensor data works. SN mode on EXR is basically the equivalent of making an image stack and setting it to median to filter out sensor noise. I've done it from dcraw-dumped TIFFs, and it looks awesome.
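The SN-mode analogue described above (a per-pixel median across aligned subframes) can be sketched in a few lines; the function name and flat-array layout are illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Per-pixel median across N aligned frames, the analogue of a
// median-mode image stack (EXR SN mode) for sensor-noise rejection.
std::vector<uint16_t> median_stack(const std::vector<std::vector<uint16_t>>& frames)
{
    const size_t n = frames.size(), px = frames.front().size();
    std::vector<uint16_t> out(px);
    std::vector<uint16_t> samples(n);
    for (size_t i = 0; i < px; ++i) {
        for (size_t f = 0; f < n; ++f)
            samples[f] = frames[f][i];
        // nth_element is O(n) per pixel vs. a full sort
        std::nth_element(samples.begin(), samples.begin() + n / 2, samples.end());
        out[i] = samples[n / 2]; // median (upper median for even n)
    }
    return out;
}
```

With only two subframes the median degenerates to picking one value, which is why SN mode on the camera averages instead; the stack approach shines with three or more dumped frames.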

HDR image creation is again very similar to how EXR DR mode works: combining multiple exposures with different shutter speeds/EVs and then tone-mapping.

I see a good fit here: the possibility of using existing frameworks and concepts within your tools/personas to get more out of the data. From what I've been reading, a number of other cameras/sensors also employ multiple subframes for similar (and other) functions, so I don't see this request as valid ONLY for EXR data; in my case I'm asking primarily for EXR data since that's what I'm using. dcraw and LibRaw both support enumerating and extracting any or all subframes from raw data, so I don't think this is out of place to ask for: your raw engine can already get me this data but, as configured, simply throws out half of it.

 


31 minutes ago, Mark Ingram said:

Whilst we can get both halves of the data, as the LibRaw team commented, merging the data together (with differing white balance, exposure, orientation, etc) is incredibly difficult, and will be time consuming to implement. 

Fair point. It sounds like this is an all-or-nothing type implementation, correct?

