A general discussion of dynamic range


With the upcoming R5ii supposedly having 16 stops of dynamic range, and the R1 rumored to have a sensor where every pixel is exposed at two ISO levels at the same time, it got me wondering about dynamic range.

I mean printing paper can only be so black when fully exposed or printed with the darkest ink, and it can never be whiter than the color of the paper. And a screen can never be blacker than when it is at 0,0,0 RGB nor whiter than when set at max 255,255,255.

So what is actually happening when we say a camera has 16 stops of dynamic range?
 
Dynamic range is limited by noise - a lower noise floor means increased dynamic range. You may not be able to print 16 stops, but that large dynamic range gives you flexibility in post with shadow detail, etc., while keeping noise as low as possible.
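To put a number on that: engineering dynamic range is usually quoted as the ratio between the largest signal a photosite can hold and the noise floor, expressed in stops. A minimal Python sketch with made-up full-well and read-noise figures (not any particular camera):

```python
import math

# Hypothetical sensor numbers for illustration only (not any specific camera):
full_well_electrons = 60000   # brightest signal a photosite can hold before clipping
read_noise_electrons = 3.0    # noise floor of the readout chain

# Engineering dynamic range in stops = log2(brightest usable signal / noise floor)
dr_stops = math.log2(full_well_electrons / read_noise_electrons)
print(f"Dynamic range ≈ {dr_stops:.1f} stops")   # ≈ 14.3 stops for these numbers
```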
 
Two things before the thread devolves. First, dynamic range is the number of discernible steps between those absolute blacks and whites; the more, the better (the human eye can distinguish about 20 stops). Second, and practically, there is the relationship between dynamic range and exposure latitude, which is a direct, fairly linear, high-r^2 correlation. That means "bringing out the details in the shadows." So with a generic MFT sensor with, say, 8 stops, if you don't nail exposure you won't be able to bring up the shadows. With a Sony FF with 13 stops, no problem.

Curious if they can actually produce 16 stops. The R5C claimed 15, but every test I've seen questioned it and the best testers in my book (Cine D) pegged it at about 11.
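On the latitude point above, here's a rough simulation (made-up numbers, nothing camera-specific): underexpose a shadow patch by four stops, push it back in post, and see how the noise floor decides whether there's anything usable left.

```python
import numpy as np

rng = np.random.default_rng(0)

def pushed_shadow_snr(true_signal_e, underexpose_stops, read_noise_e, n=100_000):
    """Simulate underexposing a patch, then pushing it back up in post.

    true_signal_e     -- electrons the patch would collect at 'correct' exposure
    underexpose_stops -- how many stops the shot was underexposed
    read_noise_e      -- the camera's noise floor (electrons)
    Returns the signal-to-noise ratio after the push.
    """
    captured = true_signal_e / 2**underexpose_stops
    # photon shot noise (Poisson) plus read noise (Gaussian)
    samples = rng.poisson(captured, n) + rng.normal(0, read_noise_e, n)
    pushed = samples * 2**underexpose_stops          # "bring up the shadows" in post
    return pushed.mean() / pushed.std()

# Two hypothetical cameras differing only in noise floor (i.e. in dynamic range)
for read_noise in (3.0, 15.0):
    snr = pushed_shadow_snr(true_signal_e=500, underexpose_stops=4, read_noise_e=read_noise)
    print(f"read noise {read_noise:>4} e-  ->  shadow SNR after a 4-stop push ≈ {snr:.1f}")
```

The push multiplies signal and noise alike, so what matters is how much noise was sitting under the shadows to begin with, which is exactly what more dynamic range buys you.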
 
So say I have a theoretical scene on a sunny day that includes some rocks of various values on a field of fresh snow. I use my blinkies to set the highest possible exposure that doesn't blow the brightest snow. In theory, if there were 16 stops, what would I see on the computer? For example, would a shadow directly under a rock have some detail?
 
In your example you might see solid, unrecoverable black from a camera with, say, 10 stops of dynamic range, but get some detail with a camera with 13 stops of dynamic range. Of course, it depends on the light in the scene and the difference between the brightest brights and the darkest darks. :)
 
So starting with the whitest white just barely having detail, dynamic range is the number of times the light reaching the camera can be halved and still have detail that is not pure blackness or noise?

Or starting with the blackest black, one could double 16 times and still not blow the whites?
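That's essentially the engineering definition. A minimal sketch with made-up numbers, just to test that reading: keep halving the brightest usable signal until it drops into the noise floor, and count the halvings.

```python
import math

# Hypothetical numbers, just to test the "keep halving the light" reading of DR
full_well_e = 60000    # signal at the brightest level that still holds detail
noise_floor_e = 3.0    # below this, detail drowns in noise

stops = 0
signal = full_well_e
while signal / 2 > noise_floor_e:   # halve the light as long as detail survives
    signal /= 2
    stops += 1

print(stops)                                              # 14 halvings for these numbers
print(round(math.log2(full_well_e / noise_floor_e), 1))   # same answer (~14.3) straight from the ratio
```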
 
So say I have a theoretical scene on a sunny day that includes some rocks of various values on a field of fresh snow. I use my blinkies to set the highest possible exposure that doesn't blow the brightest snow. In theory, if there were 16 stops, what would I see on the computer? For example, would a shadow directly under a rock have some detail?
That depends in part on the contrast ratio/dynamic range of your monitor, and that includes what it can accept in terms of bit depth of image files. If, like most monitors, you're limited to 8-bit image depth (255,255,255 whites), then the extra DR in the image won't be directly visible. But that extra bit depth at capture and in processing (e.g. via 14-bit 'hi-bit' processing) can allow you to pull up shadows without excessive noise or quantization artifacts so that those shadow areas fit into your monitor (or print) output capabilities.

As you noted in your first post, the output medium has a dynamic range limit itself, and if your monitor doesn't have as many stops of dynamic range, you won't see deeply into shadows even if the image was shot with sufficient DR and you processed in a hi-bit-depth mode. That's even more dramatic in prints, where many print processes like commercial offset or inkjet printing may be limited to around 5 stops of dynamic range regardless of the captured scene, the camera's capabilities and the processing chain. One of the tricks is to process images to fit within the DR of the output medium via tools like Curves, highlight and shadow recovery, dodging, burning, etc.

FWIW, I look at it as a series of steps each with their own DR characteristics:

- The scene itself and the lighting create the initial DR demands. A relatively low contrast scene in soft light might only have a few stops of DR, but a high contrast scene in direct light with shadows might push things very hard. The classic example is a black wool tux next to a white satin wedding dress photographed under harsh midday lighting, a very common situation for outdoor wedding photographers that can push DR requirements very hard if you want detail in those blacks and whites under hard lighting that casts shadows. If you can't fill with flash or bounce cards, that kind of scene can push DR requirements to or beyond the limit of many cameras.

- The camera's DR at the shooting ISO. This is the one we tend to focus on and the place where we can buy our way to more latitude as technology improves, but it's only one step in the chain. Even in great light with a moderate contrast scene, more DR never hurts, as it lets us make modest exposure mistakes without penalty. But in high DR scenes/lighting it can be very important.

- The processing, as in whether you work on 8-bit images or do the RAW conversion into a hi-bit mode on import. If you keep everything in 8-bit mode, then harsh scenes like the example above may be clipped or may end up showing quantization artifacts (banding, posterization) after processing to try to recover detail (there's a quick sketch of that effect right after this list).

- The output medium and its DR capabilities, which can range from just a handful of useful stops when printing to more on a high contrast monitor. None of this is infinite, so it places restrictions on what's possible in the displayed image from a DR perspective. And since we can't control the DR of remote viewers' monitors when sharing on the web, we process to an 8-bit sRGB image that will look good to most, even if it doesn't reveal the huge DR or color gamut capabilities of the rest of the image processing chain.
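Here's the banding/posterization effect from the processing step above, sketched with a synthetic gradient (arbitrary numbers, not tied to any specific raw converter): quantize a smooth, very dark gradient at two bit depths, push it three stops, and count how many distinct output levels survive.

```python
import numpy as np

# A smooth, very dark gradient (bottom ~3% of the tonal range), as linear light 0..1
gradient = np.linspace(0.001, 0.03, 2000)

def push_after_quantizing(signal, bits, push_stops=3):
    """Quantize to the given bit depth, push the shadows, then write to 8-bit output."""
    levels = 2**bits - 1
    stored = np.round(signal * levels) / levels          # what survives capture/storage
    pushed = np.clip(stored * 2**push_stops, 0, 1)       # +3 stops in post
    out8 = np.round(pushed * 255).astype(np.uint8)       # final 8-bit display/output
    return len(np.unique(out8))

for bits in (8, 14):
    print(f"{bits}-bit pipeline -> {push_after_quantizing(gradient, bits)} distinct output levels")
```

Fewer surviving levels means visible steps (banding) in what should be a smooth shadow ramp.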
 

This was so helpful and it now makes me laugh watching all of the Z6iii debates.
So basically enjoy your DR with your favorite beverage - I am drinking my espresso and roasting coffee - I think it has some nice DR going on.
I am bookmarking your post for sure.
 
What is the max theoretically possible DR for a 14 bit sensor?
 
Higher bit depth does not translate directly into higher dynamic range; higher bit depths simply slice the signal more finely and do not affect the ratio between the highest and lowest brightnesses that a sensor can detect.
Higher bit depth translates into more colors, shades of gray, etc.
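A tiny illustration of that point (normalized, hypothetical numbers): changing the ADC bit depth changes only how finely the sensor's analog range is sliced, not where that range starts or ends.

```python
# The ADC bit depth sets how finely the sensor's analog range is sliced,
# not where that range starts or ends. Hypothetical numbers for illustration.
analog_min, analog_max = 0.0, 1.0   # the sensor's usable analog output, normalized

for bits in (12, 14, 16):
    steps = 2**bits
    step_size = (analog_max - analog_min) / steps
    print(f"{bits}-bit ADC: {steps:>6} code values, step size {step_size:.2e} "
          f"-- same analog range either way")
```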
 
If a 14 bit sensor records the whitest white at 16,384, wouldn't one stop fall at half that value, two stops at 1/4, etc.? It would have to hit one eventually, no?
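Counting halvings of the code values does indeed bottom out after 14 steps, as this trivial sketch shows, but as the next reply explains, the usable stops are set by the sensor's noise floor rather than by the digital value 1.

```python
# Halve the top 14-bit code value until it reaches 1 and count the steps.
value, halvings = 16384, 0
while value > 1:
    value //= 2
    halvings += 1
print(halvings)   # 14 -- the code values run out after 14 halvings
```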
 
I've not done designs with camera sensors, but I have done designs that require analog to digital conversion. Given that, here is how I think this unfolds...
The sensor photosite's analog level is determined by how many photons it collects. Through testing (and sensor design) Nikon determines how many photons are needed for black (it may well be too noisy to use 0 or 1 photons as black, but all that is part of the sensor design). Photosite design also determines the maximum sensor readout level. These two things determine the sensor's dynamic range. Care must be taken downstream not to degrade it.

The next step is conversion to digital and there are countless ways to accomplish that each with pros and cons. Generally speaking 14 bits can usually be converted faster than 16 bits. But 16 bits may not necessarily be an improvement over 14 bits for a variety of reasons. In any event [and with good design] both 14 bit and 16 bit converters capture the entire dynamic range of the sensor - it's just how finely they do it.
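A quick sketch of that black-level point with assumed numbers (3 e- read noise, 60,000 e- full well, not any real sensor): even a fully dark exposure reads out non-zero values, and the spread of that noise covers a few 14-bit code values, which is why "black" gets defined above zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical photosite readout with the shutter closed: even at zero light the
# output is not exactly zero, which is why "black" is defined above the noise.
read_noise_e = 3.0          # assumed readout noise, electrons
full_well_e = 60000         # assumed saturation level, electrons
dark = rng.normal(0.0, read_noise_e, 100_000)

e_per_code_14bit = full_well_e / 2**14       # electrons represented by one 14-bit step
print(f"dark readout spread ≈ ±{3 * dark.std():.0f} e-")
print(f"one 14-bit code value ≈ {e_per_code_14bit:.1f} e-, so the noise floor spans a few code values")
```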

 
A while ago I messed around with RawDigger, which maps the raw value, 0 through 16384 (if 14 bit), for each red, green, green, and blue pixel. One thing I did was shoot something uniform at base ISO, filling the whole frame with something like a white piece of paper or a white monitor screen. I centered the meter for the first shot, then increased exposure for a series of shots until I was well past blinkies. I then had it graph the whole R, G1, G2, and B channels.

One interesting finding was that when the meter was centered, it was always around 3 1/3 stops from blowing out. It might possibly have been 3 1/2, but I was using 1/3-stop increments. To me that meant that most of the latitude for dynamic range was below what the meter called middle. The meter was always going for that 3 1/3 mark. Someone did some math and said that meant the meter was going for around 12.9% reflectance.

Another interesting thing was that when shooting a standard profile in camera, I had about 2/3 stop beyond the highest non-blinkie exposure to push it before the pixels were actually blown.

Another thing, and I don't know what it means, was that green always seemed to be the first to blow out.
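For anyone who wants to repeat that experiment without RawDigger, here's roughly the same per-channel check using the open-source rawpy library (the filename is a placeholder): it reads the undemosaiced CFA values and reports how close each channel is to the clip point.

```python
# A rough re-creation of the RawDigger experiment with the rawpy library
# (pip install rawpy). The filename is just a placeholder.
import rawpy
import numpy as np

with rawpy.imread("white_target_test.NEF") as raw:
    values = raw.raw_image_visible          # undemosaiced CFA values
    colors = raw.raw_colors_visible         # 0/1/2/3 = which CFA channel each photosite is
    names = raw.color_desc.decode()         # typically "RGBG"

    for idx, name in enumerate(names):
        channel = values[colors == idx]
        clipped = np.mean(channel >= raw.white_level) * 100
        print(f"{name}{idx}: max {channel.max():>6}  "
              f"clipped {clipped:.1f}%  (white level {raw.white_level})")
```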
 
I suspect that's because the Bayer matrix color distribution is half green micro-filters, one quarter red, and one quarter blue. There are a lot more photosites collecting light for the green channel.


This is before demosaicing, so for whatever area of the sensor is selected, RawDigger shows separate channels for green 1 and green 2. I'm wondering if white light, depending on whether it is cool or warm, when bouncing off something white will have a little bump in one part of the spectrum rather than being exactly uniform.
 
Yes, of course photosite data is converted to RGB pixel data during demosaicing, but the sensor as a whole, and the data that drives the demosaicing, is roughly a stop more sensitive to green light than to red or blue light.

Bayer discussed this a bit, and the intent was to more closely mimic the human eye, which apparently is more sensitive to green light than to red or blue due to the way our rods and cones work and their relative proportions in the eye.

I think your observation that the green channel clips first when presented with a nominally white target makes sense from this standpoint, but you'd have to drill pretty deep into the demosaicing algorithm to know exactly how that works. For instance, one simple and crude way to demosaic RAW data is to use nearest-neighbor interpolation to estimate the red and blue levels for a native green photosite. But each photosite has more green neighbors than red and blue neighbors, so more complex algorithms are used, and without knowing those details it's hard to know how much, if any, residual green bias exists after demosaicing. But Bayer of Kodak's stated intent was to bias the color sensitivity towards green, like the human eye.
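To make the "crude demosaic" idea concrete, here's a minimal bilinear demosaic of an RGGB mosaic (the RGGB layout is assumed; this is an illustration only, not what any camera or raw converter actually ships):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Crude bilinear demosaic of an RGGB Bayer mosaic -- an illustration only;
    real raw converters use far more sophisticated algorithms."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    out = np.zeros((h, w, 3))
    for i, name in enumerate("RGB"):
        mask = masks[name].astype(float)
        # weighted average of the known samples of this color around each pixel
        out[..., i] = convolve(mosaic * mask, kernel) / convolve(mask, kernel)
    return out

# Tiny synthetic mosaic of a flat grey scene seen through an RGGB filter
mosaic = np.full((8, 8), 0.5)
rgb = bilinear_demosaic(mosaic)
print(rgb.shape, rgb.min().round(3), rgb.max().round(3))   # (8, 8, 3) 0.5 0.5
```

Even in this toy version, every pixel has more green neighbors than red or blue ones inside the 3x3 window, which is part of why real converters treat the channels differently.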
 

Not disagreeing, but mentioning that this is not demosaiced, just each raw pixel. Interesting that the green is more sensitive; I know our eyes are more sensitive to that middle part of the spectrum.
 
Yes, I understood that.

Also, unless you used a known white card under controlled-temperature lighting, it's hard to know what the green component of the reflected light was. As this is RAW data, it's also prior to any WB adjustments in-camera or during RAW conversion. It's quite possible that the reflected light hitting the camera was itself biased a bit. But it could just be normal sensor behavior.

That also brings up a point about color channel headroom at the RAW data level. After WB adjustments are applied during RAW conversion or in post, one or more channels can have more or less headroom than what we see in the linear RAW data. IOW, if, say, the red channel has to be bumped up to achieve the desired WB, then its headroom after RAW conversion will be less than what we see in the RAW data prior to WB adjustments. The same goes for the other channels; the final headroom in each color channel after RAW conversion and WB adjustments can be quite different from the data off the sensor (the sketch at the end of this post shows the idea with made-up numbers).

[edit] This discussion is reminiscent of many discussions back in the early commercial DSLR days about ideal sensor architectures. Early on there was a lot of interest in Foveon sensors, which don't use a Bayer filter but pick off the different RGB channels as light penetrates deeper through a layered sensor. One of the concerns with Foveon designs was that the amplitude of each channel was different, since light had to penetrate the upper layers to reach the deeper, different-wavelength lower layers. It turned out that wasn't a big deal, and it was fine to have sensors with somewhat different sensitivity for each color channel. I believe Sigma purchased Foveon and may have cameras based on this approach.

Back then there were a bunch of discussions, both on photo forums and among engineering teams, about ideal color optical sensor strategies, with proponents for RGB Bayer, Foveon and CMY Bayer based sensors. It seems that for the most part RGB Bayer became the most used approach.
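To make the headroom point a couple of paragraphs up concrete, here's a small numeric sketch with made-up white-balance multipliers and raw values: the same near-clipping raw data can end up with very different per-channel headroom once the WB gains are applied.

```python
import numpy as np

# Hypothetical white-balance multipliers and raw values, to show how per-channel
# headroom changes once WB scaling is applied during raw conversion.
white_level = 16383
wb_multipliers = {"R": 2.0, "G": 1.0, "B": 1.6}      # made-up daylight-ish gains
raw_values = {"R": 9000, "G": 15000, "B": 8000}      # a bright, near-neutral patch

for ch, value in raw_values.items():
    before = np.log2(white_level / value)
    after = np.log2(white_level / min(value * wb_multipliers[ch], white_level))
    print(f"{ch}: headroom {before:.2f} stops in the raw data, "
          f"{after:.2f} stops after WB scaling")
```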
 
OK, so thinking about dynamic range: light of whatever quality strikes a scene and is absorbed or bounced around, eventually reflecting back to the camera at various levels of luminance depending on whether the objects were light or dark, reaching a pixel element in the sensor, where it is eventually converted to a number representing red or green or blue of a certain strength, from zero to 16,384 for 14 bit. The raw converter does its thing, and one is left with a pixel with 3 values representing not only the brightness of the original scene but also hue and saturation.

Assuming the brightest parts of a well-lit, contrasty scene were exposed to be placed near the highest number but not blown, then it seems there are around 3.5 halvings of that highest brightness before hitting the level indicated by the meter. So most of the dynamic range is below the middle? If a given camera can end up with 10 stops of DR, 6.5 of them are below middle and 3.5 above?
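That arithmetic works out under those assumptions. A throwaway sketch just tying the numbers together (the 3.5 stops of headroom and the 10 total stops are taken from the posts above, not from any measurement):

```python
# Putting the RawDigger observation together with a camera's total DR.
headroom_above_meter = 3.5          # stops from metered "middle" up to clipping
camera_dr = 10                      # total usable stops for some hypothetical camera

stops_below_meter = camera_dr - headroom_above_meter
print(f"{headroom_above_meter} stops above the metered level, "
      f"{stops_below_meter} stops below it before hitting the noise floor")

# And the 14-bit raw code values sitting at each stop below clipping:
for stop in range(5):
    print(f"{stop} stops below clipping ≈ raw value {round(16383 / 2**stop)}")
```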
 
With the upcoming R5ii supposedly having 16 stops of dynamic range, and the R1 rumored to have a sensor where every pixel is exposed at two ISO levels at the same time, it got me wondering about dynamic range.

I mean printing paper can only be so black when fully exposed or printed with the darkest ink, and it can never be whiter than the color of the paper. And a screen can never be blacker than when it is at 0,0,0 RGB nor whiter than when set at max 255,255,255.

So what is actually happening when we say a camera has 16 stops of dynamic range?
Don't worry about specs - they often lie anyway.
Dual ISO is usually consecutive and not co-existent.
Printing can be more forgiving than most people think.
Big sensors with big photosites can have better dynamic range, but that is at a base ISO of 100 or 64 (if you are lucky). 🦘
 

This dual ISO doesn't seem to be dual gain the way current cameras do it; instead, each pixel is read out at two gain levels.
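Nobody outside Canon knows how the R1 would actually implement that, but here's a toy sketch of the general idea of reading one exposure out at two gains and merging the results; all the numbers and the merge rule are assumptions for illustration only.

```python
import numpy as np

# Toy illustration of reading the *same* exposure out at two gains and merging.
# Numbers and merge rule are assumptions, not any published Canon design.
rng = np.random.default_rng(2)

full_well_e = 60000
scene_e = np.geomspace(5, full_well_e, 10_000)                 # electrons hitting each pixel
signal = rng.poisson(scene_e).astype(float)

def read_out(electrons, gain, read_noise_e, max_code=16383):
    """Digitize one readout at a given gain with its own read noise, then clip."""
    noisy = electrons + rng.normal(0, read_noise_e, electrons.shape)
    return np.clip(noisy * gain, 0, max_code)

high = read_out(signal, gain=1.0, read_noise_e=2.0)    # clean shadows, clips early
low = read_out(signal, gain=0.25, read_noise_e=2.0)    # keeps highlights, noisier shadows

# Merge: trust the high-gain sample unless it clipped, otherwise scale up the low-gain one
merged = np.where(high < 16383, high, low / 0.25)
print(f"high-gain alone clips at ~{16383:.0f} e-, merged covers up to ~{merged.max():.0f} e-")
```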
 
I recall somewhere I was told my D3X has around 8 stops, and up to 16 bits only in the blacks. Does this make sense, or is that even possible?
 