Traditional 18% Grey Exposure


Why haven't Nikon, Canon, Sony, et al., given all of us an option to override the 18%...
Anybody can meter a subject any way they want.

That aside, start with the often overlooked point: 18% GRAY REFLECTANCE IS NOT A MID TONE.
It is not an international standard either.

The instructions with a Kodak Gray Card say (with some gobbledegook) "aim the surface of the gray card toward one third of the compound angle between your camera and the main light source"!
There is diagrammatic guidance as to how to achieve this in the card instructions.

Doing this is equivalent to reducing the effective reflectance by about half a stop, to roughly 12-14%.
Most handheld meters - when they were still in production - were calibrated to 12 or 14% reflectance.
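As a quick check of that half-stop figure - a sketch of the arithmetic only:

# Half a stop below an 18% reference: each stop halves reflectance,
# so half a stop multiplies it by 2**-0.5.
print(round(18 * 2 ** -0.5, 1))   # -> 12.7, i.e. within the 12-14% range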

Kodak clarify that 18% is a medium-light tone in their long out-of-print "Kodak Professional PhotoGuide".

In the era, more than 40 years ago, when normal black-and-white film processing limited results to 8-9 stops of dynamic range, 12-14% reflectance generally worked OK.

Using a grey card can have the advantage of avoiding overexposure – a potential disaster for highlight detail with digital images.

Using in-camera spot or centre-weighted metering is an alternative to a gray card - when you know that light green spring foliage is about 18% reflectance, darker summer green foliage is about 12%, light brown soil is about 18%, etc.

Using Nikon Matrix metering has the potential advantage of taking into account subject contrast range, variable lens aperture and focus-breathing light transmission, the mix of colours in the image, and the distance to the main subject, as well as the average scene reflectance - for a generally more accurate exposure of most individual photographic subjects.

It is difficult to dispute that using Nikon Active D-Lighting (in camera) can reduce exposure by the equivalent of changing the subject reflectance used for metering - to reduce the chance of burned-out highlights.

Other brands have their own specific metering methods - and all can take account (if set in the menu) of the exposure most suited to landscapes, or to portraits, etc.

The camera histogram - especially on cameras able to show it pre-exposure - gives far more exposure information than "last century" metering methods, including contrast range (individually for R, G and B if selected), possible burned-out highlights and lost shadow detail.
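As a rough illustration of the per-channel information a histogram display conveys (a sketch only; cameras compute theirs from their own processed preview, and the details vary by maker):

import numpy as np

# Sketch: per-channel histograms plus clipping warnings for an 8-bit RGB image.
def exposure_report(rgb, bins=64):
    for c, name in enumerate("RGB"):
        chan = rgb[..., c]
        hist, edges = np.histogram(chan, bins=bins, range=(0, 256))
        blown = np.mean(chan >= 255) * 100    # pixels piled at the right edge
        blocked = np.mean(chan <= 0) * 100    # pixels piled at the left edge
        print(f"{name}: peak bin near {edges[hist.argmax()]:.0f}, "
              f"{blown:.2f}% clipped highlights, {blocked:.2f}% blocked shadows")

rgb = np.random.randint(0, 256, size=(400, 600, 3), dtype=np.uint8)  # stand-in image
exposure_report(rgb)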

Back to your question - modern metering in-camera is generally much more comprehensive and usually more accurate than relying solely on mid highlight (18%) reflectance.
 
It's not worth losing a wink of sleep over the precise value of the 18% tone (on which cameras meter light), as the value is not universal anyway, and camera manufacturers don't spell out an exact value. As we know, since the halcyon film era of the Sunny 16 Rule and Kodak's guide, digital photography has changed the arena of exposure theory and practice somewhat.... Here's one essay justifying the underlying rationale for correct exposure of digital sensors, which followed up on the classic proposal for ETTR by the late Michael Reichmann, founder of Luminous Landscape.


However, that was 2003, and cameras - especially sensors - have evolved. As I understand how Nikon ILCs work, and use them in the wild, the metering sensor/algorithm samples the scene fairly accurately, handling even challenging tones - extremes of contrast etc. - rather well. Since its release in the FA, four decades back, Nikon's matrix mode has improved greatly at averaging the luminance of the scene.

As we know, with spot metering (a light meter) the luminance of parts of the scene can be measured 'manually' to inform a more precise exposure. Nikon's spot metering works rather well at this, as the metered spot not only corresponds in area to the single-point autofocus point, but also tracks its position as it moves.

The histogram is our friend nonetheless in the overall process of checking and tweaking exposure. Optimal exposure in some conditions can also require manual white balance to avoid blowing out a colour channel.

It's important IMHO - to expose effectively - to understand how sensors work. Jim Kasson has just published part I of an essay on this topic, over at LensRentals. Steve also uses Dick Feynman's raindrops-filling-buckets metaphor for exposure; in fact, he's penned an entire book around the subject. [EDIT: a recheck since my last read confirms that Steve's explanations kick into touch a whole pile of published 'advice' on the wild World Wide Web.... Dare I add that his Advanced Exposure Techniques chapter is a remedy to the bedlam of misinformation!]

 
Well, I have a much simpler POV -- which is tied to the fact that I shoot Lossless RAW with a FLAT Picture Control profile -- my "job" is to expose the scene, prioritising the subject, so as NOT to clip whites or blacks, or to do so only in the knowledge that any data in those areas will be limited. It is in post-processing that I make image exposure choices (which in the old Ansel Adams system means mid-tones), and also choices about tonal values across the rest of the range throughout the image (globally and locally: the overall white point and black point, and the balance between highlights and shadows).

For a very long time the Flat profile has been claimed to give the in-camera histogram closest to an actual histogram based on the RAW data. I find this claim to be reasonable.

I will strongly resist any attempt to change what we old folk have grown up with and got used to. Particularly since, given the controls available to us in the newest cameras (and many older versions too), I consider it unnecessary for manufacturers to make any further changes. We already have everything we need.

So read on -- in-camera metering (matrix, evaluative, etc.) has always attempted to vary the exposure settings to achieve a predefined value, AND that value is set by the manufacturer and adjusted by the settings chosen -- full-frame matrix; small, standard or average centre-weighted; spot; AND highlight-weighted (or protection) metering. AND also metering with face detection enabled (b4: human faces only), where exposure is adjusted for the faces of human portrait subjects detected by the camera when Matrix metering is selected, and so on.

All the parameters of the optimal exposure that each of the metering choices seeks to achieve can be "fine-tuned" -- b6 in the Z9 (3.10) -- and can be saved for each Custom Settings Bank. While such adjustments are similar to using exposure compensation, the amount of the adjustment is hidden unless one looks in b6 -- and this is one very good reason for not using them. I just use the EC setting to vary the relative exposure of what I am seeking to shoot -- the old small-white-subject-on-dark-background OR dark-subject-on-bright-background rules apply, AND since I shoot many different types of subject in many different settings and light, even on one game drive, there is NO WAY I want to have hidden "fine-tuning" settings.
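To make the "hidden adjustment" point concrete, here is a minimal sketch, assuming (as I understand it - check your own body) that the b6 fine-tune offset simply adds to whatever EC is dialled in while only the EC portion shows on the display:

# Assumed behaviour, for illustration only: fine-tune and EC offsets add up,
# but only the EC part appears on the exposure compensation indicator.
fine_tune_ev = -1.0 / 3.0    # hidden b6 exposure fine-tune offset
dialled_ec_ev = -2.0 / 3.0   # what the EC display shows
print(round(fine_tune_ev + dialled_ec_ev, 2))   # -1.0 EV actually applied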

THAT SAID - back in 2015/16 - I attended workshops and seminars AND then two safaris with a famous South African wildlife photographer and guide. I was shooting the D810, D5 and D500 (or the previous versions, D4s etc.), AND I was taken by his suggestion that we "seek" to shoot at -1.0 EV in 14-bit RAW when using these bodies -- he put me on to using exposure fine-tuning. AND YES, it worked and saved me 1 stop of ISO with seemingly no penalty.

ALL can be fine but you have to remember that you made these changes.

You have to do the math to see whether changing an exposure fine-tune setting to -1 EV turns the 18% gray into another number -- it does, and the number will be lower. BUT by how much, I don't know or really care.
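For anyone who does want the number, a rough sketch of the arithmetic: each EV is a factor of two, so a -1 EV offset is equivalent to metering for half the reference reflectance.

# Each EV step is a factor of two in exposure, so an N-EV offset scales
# the effective metering reference by 2**N.
reference = 18.0      # nominal 18% grey reference
offset_ev = -1.0      # exposure fine-tune (or EC) offset in EV
print(round(reference * 2 ** offset_ev, 1))   # -> 9.0, i.e. roughly a 9% reference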

With the latest mirrorless bodies (like the Z9) I have yet to understand how much further subject-detection-based exposure optimisation can be taken -- we already have the ability to shoot multi-shot HDR in camera and receive either one combined image (RAW format) or multiple individual files (like a bracketed sequence) - Z9 Photo Shooting Menu > HDR Overlay - which extends the dynamic range that can be captured.

So what more do we really need? Just do what I do: maximise the data you capture without clipping, and fix the image's "exposure" in post. As noted, if you want to achieve a darker outcome then use EC or fine-tuning, but please don't suggest the rest of us have to follow you over this cliff like lemmings.
 
Well, when the light meter is fooled by the 18% rule, have the "option" to switch that off.

Manual exposure. Completely 100% manual exposure settings (no semi-auto anything) will (as you know) not allow any exposure changes to be made by any part of the camera that reads for 18% gray (though that meter is still working, as you can see in the exposure bar in most camera displays). This also means that the exposure compensation dial does nothing.

Stating the obvious, that means that your brain is required to make exposure decisions (as the guy in the video showed that he automatically makes EC adjustments with his gray matter without thinking). That sounds like "off" to me, and the replacement everyone in this thread is asking for is quite simply: the brain.

Best value: go ahead and use that brain in semi-auto and let your eyes/brain make EC adjustments on top of the camera's 18% gray interpretation of the scene. It's much easier.

This is the way.
 
Well, when the light meter is fooled by the 18% rule, have the "option" to switch that off. It isn't replacing anything. It's just an added option to use if you wish, to see what the end results would be. If it doesn't work, then go back to the regular system.
I have basically learned when my camera's light meter will be fooled, and instead of using exposure compensation I just use the needle or reading of my meter to get the compensation. If I need one stop I just adjust my shutter, aperture or ISO to get what I need. I shoot a D500, usually in manual mode. It keeps me from making the mistake of leaving compensation on. Lol. I also found that the actual focus point is much smaller than the displayed box; on mine it is in the lower-left corner of the box. Knowing that allows me to shoot through heavy branches and still get a lock on the bird. Basically, knowing your equipment is the best. Know the meter, know the focus point, and you are ahead of the curve. Steve taught me about the focus point in one of his books, I believe.
 
I have been wondering about this topic for a while now. Why haven't Nikon, Canon, Sony, et al., given all of us an option to override the 18% if and when we feel it is necessary? I think it would be more than beneficial in some cases. How hard can it be for the camera makers to make this a reality?

What do you think of this option?

This video is what prompted me to finally ask the question.
Exposure compensation is there for just that problem - although cameras do a better job these days, few scenes average out to 18% ... 🦘
 
I agree that the camera needs some point of reference;
Perhaps some yes and perhaps some no.
There can be more to accurate exposure than taking a single reflectance measurement.

It can be overlooked that one of the pluses of Nikon's early "distance" metering was that a bright distant scene was considered likely to be low contrast and to benefit from some underexposure - and that was what was applied.

More recent in-camera metering takes scene contrast into account using the ability to measure light levels at different points across the frame, is likely to benefit from some analysis by a separate RGB metering unit, and can expose differently if, for example, Portrait or Landscape - maybe with a contrast adjustment - is selected in the Nikon Picture Control menu. Other camera brands have equivalents.

Modern in camera metering can take into account much more than a single neutral card reading - with the ability to more accurately calculate a subject specific exposure.

Many photographers aged under 40 have never seen a gray card - and would not know how to use one to a good standard!

A limitation of "advanced" in camera metering is that it is a form of AI - and AI does not always get everything 100% right.
Fortunately in the digital era the histogram gives a lot of information about the content of a scene.

Photographers able to reasonably interpret histogram information can soon tell from the histogram detail whether the image captured is likely to produce the intended result.
 
Maybe I'm weird (correction...I'm weird), but I just don't have exposure problems. Here and there I may mess up, but I don't generally miss shots due to botched exposure. Between the advanced metering, exp. comp., and being able to shoot manual...I'm good. Now missed focus shots...I can fill a 100TB drive with those suckers.
 
Your camera meter turns colors into tones of grey. But our world is not grey; it is full of colors. Thus, there will be instances when the camera will provide an incorrect interpretation of reflectance.

In camera metering (and AF) does not detect colour as colour - it detects colour as shades of gray plus black and white.

When I wrote, "Quit thinking of 18% grey as a B-W choice, think instead of 18% as a MID TONE, regardless of the color. ALL colors have a mid tone equivalent." in post 31, above, I was responding to the OP's line about the world being in color. Mid grey is the same reflectance (18%) and mid green (fresh lawn grass", for example.
 
When I wrote, "Quit thinking of 18% grey as a B-W choice, think instead of 18% as a MID TONE, regardless of the color. ALL colors have a mid tone equivalent." in post 31, above, I was responding to the OP's line about the world being in color. Mid grey is the same reflectance (18%) and mid green (fresh lawn grass", for example.
That was certainly the standard line when photography was analog. I wonder whether it is still true with digital cameras. Since the sensors are detecting R, G, and B values, and any conversion of color to grayscale I've ever seen employs an unequal weighting of each color channel, it would not surprise me to find that it is no longer true for digital camera sensors.
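For what it's worth, one common unequal weighting is the Rec. 709 luma formula, sketched below purely for illustration (cameras may well use different weightings internally):

# Rec. 709 luma: a widely used unequal weighting of R, G and B.
def luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(round(luma(0.18, 0.18, 0.18), 4))  # 0.18 - a neutral 18% patch stays at 18%
print(round(luma(0.05, 0.30, 0.05), 4))  # 0.2288 - green counts far more than a plain average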
 
That was certainly the standard line when photography was analog. I wonder whether it is still true with digital cameras. Since the sensors are detecting R, G, and B values, and any conversion of color to grayscale I've ever seen employs an unequal weighting of each color channel, it would not surprise me to find that it is no longer true for digital camera sensors.
This is a great question. In respect to the OP, the EC works against a single average.

We can view the RGB histogram, see the differences, and make gray-matter decisions on those values (we may or may not care if one channel is clipping, as long as the important one is not). They're often in the same general region, but not always (especially not in infrared photography, but I digress).

Is the monochrome histogram a conglomerate of those three separate channels, and is the basic averaging meter working on that conglomerate or on its own 18% average - and based on what?

Chris
 
If I may, let me write down what has been kicking around in my brain for a while.

Imagine this: the sensor, being shutterless, could have different shutter speeds within the same exposure.
For example: in a room with a window, the meter would pick up the brighter light, ‘understand’ it’s a window, and for those pixels covering the window area raise the shutter speed.

If it’s shutterless, why can’t it perform a variable exposure?!
 
Too complicated, folks 😂
If the exposure is not right, just turn one of the three dials left or right to make it right. This is a first-world problem: when the camera does most of the work, people complain they cannot customize; when it doesn’t, people complain it doesn’t work correctly.

I am sure that if you can put out a tried-and-tested working proposal for an alternative scheme to replace conventional metering that works in more situations, manufacturers will look into it. However, unless you can let the camera know your intent, or it can understand the context of the scene, it is clearly not possible now. Heck, we can’t even agree with each other… 🤪
 
If I may, let me write down what has been kicking around in my brain for a while.

Imagine this: the sensor, being shutterless, could have different shutter speeds within the same exposure.
For example: in a room with a window, the meter would pick up the brighter light, ‘understand’ it’s a window, and for those pixels covering the window area raise the shutter speed.

If it’s shutterless, why can’t it perform a variable exposure?!
Not a bad idea.
Actually, there is something like that already in development:
 
That was certainly the standard line when photography was analog. I wonder whether it is still true with digital cameras. Since the sensors are detecting R, G, and B values, and any conversion of color to grayscale I've ever seen employs an unequal weighting of each color channel, it would not surprise me to find that it is no longer true for digital camera sensors.
If you stop for a minute: if 18% is a mid tone (Kodak "invented" the 18% reflectance card and say it is not), then 36% reflectance is 1 stop brighter, 72% is 2 stops brighter, and 100% reflectance is only about 2.5 stops brighter!

This helps explain why the Kodak card instructions imply working to about 12% - and handheld meters are generally calibrated to 12-14%.
With 12% calibration, 1 stop brighter is 24% reflectance, 2 stops brighter is 48%, and 3 stops brighter is 96% - ignoring the minor discrepancy that even new white paint reflects only about 90% of light.
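For anyone who wants the exact figures behind those rounded stop counts, a quick sketch:

import math

# Stops between two reflectances: each stop is a doubling of reflectance.
def stops(lower, upper):
    return math.log2(upper / lower)

print(round(stops(18, 100), 2))   # ~2.47 stops from an 18% card up to 100%
print(round(stops(12, 100), 2))   # ~3.06 stops from a 12% calibration up to 100%
print(round(stops(12, 18), 2))    # ~0.58 stops between 12% and 18%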

Calibration about 3 stops below 100% subject reflectance is understandable going back about 50 years, when multi-coated lenses were still rare, 8 stops of dynamic range in an image was good for an "average" lens/camera using B&W film, and the human eye is better at detecting shadow detail than highlight detail anyway.

EDIT: digital at low ISOs can accommodate a much wider dynamic range than 8 stops. End of edit.

Sensors do not exactly detect R, G and B values.
At the "elementary explanation" level, the 1 R, 2 G and 1 B coloured filters above the sensor each pass only shades of grey - not enough information, without further processing, to produce around 1,000,000 separate colours.
The in-camera electronics use information from about 40 surrounding filter sites to predict, very accurately, the colour to allocate to each filter site.

The reason for 2 G filters for every 1 R and 1 B is reported to be that the human eye is most sensitive to the green segment of the colour spectrum - in line with your supposition that variable colour weighting might be appropriate for different subjects.
Those who read instruction books are likely to be aware that selecting Portrait in Nikon Picture Control processes images for skin tones with natural texture and a rounded feel, Vivid emphasises primary colours, etc.
Other camera brands have similar options.

It helps to understand that the finished image straight out of camera is the result of much more than an exposure reading.

Getting slightly advanced, those familiar with spot metering might take a highlight and a shadow reading to work out a mid point appropriate for a subject with a high contrast range.
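A minimal sketch of that midpoint calculation, assuming the two spot readings are expressed in EV (the values below are made up):

# Midpoint between a highlight and a shadow spot reading, in EV.
highlight_ev = 15.0   # hypothetical reading off the brightest important area
shadow_ev = 9.0       # hypothetical reading off the darkest important area
print((highlight_ev + shadow_ev) / 2)   # 12.0 EV - pitched between the two
print(highlight_ev - shadow_ev)         # 6 stops of subject contrast to accommodate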

Similarly, in studio portraiture it is common to measure the light falling on the subject from each light source, to ensure the contrast ratio between lights is appropriate for the intended result.
A gray card and handheld meter are useful for this, though they were generally replaced toward the end of last century by flash meters able to accurately measure the short duration of an electronic flash.
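For illustration, the ratio between two lights can be worked out from their incident readings in f-numbers - a sketch with made-up readings:

import math

# Lighting ratio between key and fill from incident flash-meter readings.
key_f, fill_f = 8.0, 5.6                 # hypothetical readings in f-numbers
stops = 2 * math.log2(key_f / fill_f)    # one stop per factor of sqrt(2) in f-number
print(round(stops, 1), f"{2 ** stops:.0f}:1")   # ~1.0 stop difference -> roughly 2:1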

I often rely on Nikon Matrix metering; though over a number of years I have learned when - for the results I want - a bit of minus or a bit of plus exposure compensation is appropriate – or when spot metering might be more appropriate.

Others may choose to use different metering methods from mine :)
 

It definitely gets into the math weeds. But these days sensors are often 14-bit. So with demosaicing that gives 2^14 for red times 2^14 for green times 2^14 for blue, which is a lot of color.
 
It seems to me that most of these arguments/opinions/observations are assuming the presence of one single correct exposure. Exposure is a creative exercise, and there could be several "correct" exposures for every shot that vary with the intent of the person behind the camera. This is made so much easier with the real-time image shown in the EVF of most modern mirrorless cameras.
 
these days sensors are often 14-bit. So with demosaicing that gives 2^14 for red times 2^14 for green times 2^14 for blue, which is a lot of color.
Yes - but perhaps as the camera initially records only white to black tones for 3 primary colours (perhaps 42 individual measurements), internal processing is required to achieve around 1,000,000 different colours between the primary R, G and blue primary colours above the sensor chip.
 
Yes - but perhaps as the camera initially records only white to black tones for 3 primary colours (perhaps 42 individual measurements), internal processing is required to achieve around 1,000,000 different colours between the primary R, G and blue primary colours above the sensor chip.

I'm not sure I understand the 42 individual measurements. A 14-bit pixel/individual sensor element records 2^14 different levels of exposure. The raw value for a red pixel, for example, might be a number from 0 to 16383. But in demosaicing the raw converter also uses the neighboring green and blue values, each 0 to 16383. So I get more than 4 trillion. JPEGs are limited to 8 bits per channel, so that would end up at more than 16 million, but TIFFs can be 16-bit or maybe even 32-bit.
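The arithmetic behind those figures, for anyone checking:

# Colour counts implied by per-channel bit depth (three channels after demosaicing).
for bits in (14, 8):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels per channel, {levels ** 3:,} colours")
# -> roughly 4.4 trillion colours at 14 bits per channel, about 16.7 million at 8 bits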
 
I'm not sure I understand the 42 individual measurements.
You are right.
I was overlooking that the recording was not limited to single bit steps.
A 14-bit pixel/individual sensor element records 2^14 different levels of exposure. The raw value for a red pixel, for example, might be a number from 0 to 16383. But in demosaicing the raw converter also uses the neighboring green and blue values, each 0 to 16383. So I get more than 4 trillion.

JPEGs are limited to 8 bits per channel, so that would end up at more than 16 million, but TIFFs can be 16-bit or maybe even 32-bit.
It matters little that Nikon bodies do not, so far as I know, record 16-bit TIFF files.
 
It seems to me that most of these arguments/opinions/observations are assuming the presence of one single correct exposure. Exposure is a creative exercise, and there could be several "correct" exposures for every shot that vary with the intent of the person behind the camera. This is made so much easier with the real-time image shown in the EVF of most modern mirrorless cameras.
Well, that's why exposure compensation exists, as does manual mode.

I feel like this topic has gone around the loop twice, from "why can't we adjust what the camera is exposing for" to using manual mode/spot metering/exposure compensation, and all the way around again.
 