Yes, Sunny 16 rules. But incident light does not lie, and matrix metering or auto ISO would give really strange results on a movie.
Anybody can meter a subject any way they want.

Why haven't Nikon, Canon, Sony, et al, given all of us an option to override the 18%,
Well, when the light meter is fooled by the 18% rule, have the "option" to switch that off.
In camera metering (and AF) does not detect colour as colour - it detects colour as shades of gray plus black and white.

ALL colors have a mid tone equivalent.
I have learned basically when my camera light meter will be fooled, and instead of using exposure compensation I just use the needle or reading of my meter to get the compensation. If I need one stop, I just adjust my shutter or aperture or ISO to get what I need. I shoot a D500, usually in manual mode. It keeps me from making the mistake of leaving compensation on. Lol.

I also found that the actual focus point is much smaller than the box; on mine it sits in the lower left corner of the box. Knowing that allows me to shoot through heavy branches and still get a lock on the bird. Basically, knowing your equipment is the best. Know the meter, know the focus point, and you are ahead of the curve. Steve taught me about the focus point in one of his books, I believe.

Well, when the light meter is fooled by the 18% rule, have the "option" to switch that off. It isn't replacing anything. It's just an added option to use if you wish, to see what the end results would be. If it doesn't work, then go back to the regular system.
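For anyone following along, here is a tiny illustration of what moving "one stop" looks like on each control; the starting values are invented for the example:

    # One stop of extra exposure, applied three equivalent ways.
    # Starting values are invented - move whichever control you can afford to.

    shutter_s, aperture_f, iso = 1 / 1000, 8.0, 400

    print(1 / (shutter_s * 2))      # halve the shutter speed: 1/1000 -> 1/500
    print(aperture_f / (2 ** 0.5))  # open the aperture one stop: f/8 -> ~f/5.6
    print(iso * 2)                  # or double the ISO: 400 -> 800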
Exposure Compensation is just for that problem - although cameras do a better job these days, few scenes are equal to 18% ...

I have been wondering about this topic for a while now. Why haven't Nikon, Canon, Sony, et al, given all of us an option to override the 18%, if and when we feel it is necessary? I think it would be more than beneficial in some cases. How hard can it be for the camera makers to make this a reality?
What do you think of this option?
This video is what prompted me to finally ask the question.
Perhaps some yes and perhaps some no. I agree that the camera needs some point of reference;
Your camera meter turns colors into tones of grey. But our world is not grey; it is full of colors. Thus, there will be instances when the camera will provide an incorrect interpretation of reflectance.
In camera metering (and AF) does not detect colour as colour - it detects colour as shades of gray plus black and white.
That was certainly the standard line when photography was analog. I wonder whether it is still true with digital cameras. Since the sensors are detecting R, G, and B values, and any conversion of color to grayscale I've ever seen employs an unequal weighting of each color channel, it would not surprise me to find that it is no longer true for digital camera sensors.

When I wrote, "Quit thinking of 18% grey as a B-W choice, think instead of 18% as a MID TONE, regardless of the color. ALL colors have a mid tone equivalent." in post 31, above, I was responding to the OP's line about the world being in color. Mid grey has the same reflectance (18%) as mid green (fresh lawn grass), for example.
This is a great question. In respect to the OP, the EC works against a single average.
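To make the channel-weighting point concrete, here is a minimal sketch using the standard Rec. 709 luma coefficients; the weighting an actual camera meter applies is proprietary and may well differ:

    # Reducing an RGB sample to a single "grey" luminance number.
    # Rec. 709 luma coefficients assumed; real metering weights may differ.

    def relative_luminance(r, g, b):
        """Linear RGB values in 0.0-1.0 -> single luminance in 0.0-1.0."""
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    print(relative_luminance(0.18, 0.18, 0.18))  # mid-grey card -> 0.18
    print(relative_luminance(0.00, 0.50, 0.00))  # pure green patch -> ~0.36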
Not bad, your idea. If I may, let me write down what has been kicking around in my brain for a while.
Imagine this: the sensor, being shutterless, could apply different shutter speeds within the same exposure.

For example: a room with a window - the meter would pick up the brighter light, 'understand' it is a window, and raise the shutter speed for just those pixels covering the window area.

If it's shutterless, why can't it perform a variable exposure?!
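Purely to illustrate the idea (a hypothetical sketch, not any manufacturer's implementation; the region names and numbers are invented):

    # Hypothetical sketch of the "variable exposure" idea above: meter each
    # region, then give brighter regions (e.g. the window) a shorter exposure
    # so every region lands near a mid-tone target.

    MID_TONE = 0.18  # the familiar 18% target

    def per_region_exposure(region_luminance, base_shutter_s):
        """Map each region's metered luminance to its own shutter time."""
        times = {}
        for name, luminance in region_luminance.items():
            # Brighter region -> proportionally shorter exposure.
            times[name] = base_shutter_s * (MID_TONE / luminance)
        return times

    scene = {"window": 0.90, "wall": 0.20, "shadowed corner": 0.05}
    print(per_region_exposure(scene, base_shutter_s=1 / 60))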
Nikon's industrial and healthcare customers are the market for this type of solution -- but expect it to be a fair while before some of this tech flows down into consumer products for those who can afford it.
Actually, there is something like that already in development:
If you stop for a minute - if 18% is a mid tone (Kodak "invented" the 18% reflectance card and says it is not), then 36% reflectance is 1 stop brighter, 72% is 2 stops brighter, and 100% reflectance is only about 2.5 stops brighter!
This helps explain why the Kodak card instructions imply working to about 12% - and hand-held meters are generally calibrated to 12-14%.
12% calibration plus 1 stop brighter is 24% reflectance, 48% is 2 stops brighter and 96% is 3 stops brighter - ignoring the minor discrepancy that even new white paint reflects only about 90% of light.
Calibrating about 3 stops below 100% subject reflectance is understandable going back about 50 years, when multi-coated lenses were still rare, 8 stops of dynamic range in an image was good for an "average" lens/camera using B&W film, and the human eye is better at detecting shadow detail than highlight detail anyway.
EDIT: digital at low ISOs can accommodate a much wider dynamic range than 8 stops. End of edit.
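The stop arithmetic above is easy to check; the 12% figure is the approximation quoted in this post, not a published constant for any particular meter:

    import math

    def stops_above(reflectance, reference):
        """How many stops brighter one reflectance is than another."""
        return math.log2(reflectance / reference)

    # Relative to an 18% mid tone:
    print(stops_above(0.36, 0.18))  # 1.0 stop
    print(stops_above(0.72, 0.18))  # 2.0 stops
    print(stops_above(1.00, 0.18))  # ~2.5 stops
    # Relative to a ~12% meter calibration:
    print(stops_above(0.96, 0.12))  # 3.0 stops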
Sensors do not exactly detect R, G and B values.
At the "elementary explanation level", the 1 R, 2 G and 1 B coloured filters above the sensor each detect only shades of gray - not enough information, without further processing, to produce around 1,000,000 separate colours.
The "in camera electronics" use information from about 40 surrounding filter sites to very accurately predict the colour to allocate to each filter site.
The reason for 2 G for every 1 R and 1 B filter is reported to be that the human eye is most sensitive to the G segment of the colour spectrum - in line with your supposition that variable colour weighting might be appropriate for different subjects.
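A much-simplified sketch of that interpolation idea, assuming the common RGGB Bayer layout and plain averaging of the four nearest neighbours; real in-camera demosaicing, as described above, draws on far more surrounding sites and much smarter maths:

    # Each photosite records one channel's brightness; the missing channels
    # are estimated from neighbours. A 4x4 patch of raw values, RGGB layout:
    # even row/even column = R, odd row/odd column = B, the rest = G.
    raw = [
        [10, 52, 12, 50],
        [48, 30, 46, 28],
        [11, 50, 13, 49],
        [47, 29, 45, 27],
    ]

    def green_at(y, x):
        """Estimate green at a red or blue site from its 4 direct neighbours."""
        neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
        values = [raw[j][i] for j, i in neighbours
                  if 0 <= j < len(raw) and 0 <= i < len(raw[0])]
        return sum(values) / len(values)

    print(green_at(2, 2))  # green estimate at the red site in row 2, column 2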
Those who read instruction books are likely to be aware that setting Portrait in Nikon Picture Control processes images for skin tones with natural texture and a rounded feel, Vivid emphasises primary colours, etc.
Other camera brands have similar options.
It helps to understand that the finished image straight out of camera is the product of much more than an exposure reading.
Getting slightly advanced, those familiar with spot metering might take a highlight and a shadow reading to work out a mid point appropriate for a subject with a high contrast range.
Similarly in studio portraiture it is common to measure the light falling on the subject from each light source to ensure the contrast ratio between lights is appropriate for the intended result.
A gray card and hand-held meter are useful for this, though they were generally replaced toward the end of the last century by flash meters able to accurately measure the short duration of an electronic flash.
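As a rough worked example of the highlight/shadow and lighting-ratio arithmetic (all readings here are invented for illustration):

    # Spot metering: find the midpoint between the brightest and darkest
    # areas you want to hold. EV numbers are invented for the example.
    ev_highlight = 15.0
    ev_shadow = 9.0
    print(ev_highlight - ev_shadow)          # 6 stops of subject contrast
    print((ev_highlight + ev_shadow) / 2.0)  # expose around EV 12

    # Studio version: compare incident readings taken at each light.
    key_ev, fill_ev = 11.0, 10.0
    print(2 ** (key_ev - fill_ev))  # the key delivers 2x the light of the fill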
I often rely on Nikon Matrix metering, though over a number of years I have learned when - for the results I want - a bit of minus or a bit of plus exposure compensation is appropriate, or when spot metering might be more appropriate.
Others may choose to use different metering methods from mine!
Yes - but perhaps, as the camera initially records only white-to-black tones for 3 primary colours (perhaps 42 individual measurements), internal processing is required to achieve around 1,000,000 different colours from the R, G and B primary colours above the sensor chip.

These days sensors are often 14 bit. So with demosaicing that gives 2^14 for red times 2^14 for green times 2^14 for blue, which is a lot of color.
You are right.

I'm not sure I understand the 42 individual measurements.
A 14-bit pixel (individual sensor element) records 2^14 different levels of exposure. The raw value for a red pixel, for example, might be a number from 0 to 16383. But in demosaicing, the raw converter also uses the neighboring green and blue values, each 0 to 16383. So I get more than 4 trillion.
It matters little that Nikon bodies do not, so far as I know, record 16-bit TIFF files.

JPEGs are limited to 8 bit, so that would end up at more than 16 million, but TIFFs can be 16 bit or maybe even 32 bit.
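The colour counts under discussion are straightforward to verify; this is plain arithmetic, nothing camera-specific:

    # Levels per channel at each bit depth, and the resulting combinations.
    levels_14bit = 2 ** 14  # 16,384 levels per channel in 14-bit raw
    levels_8bit = 2 ** 8    # 256 levels per channel in 8-bit JPEG
    levels_16bit = 2 ** 16  # 65,536 levels per channel in 16-bit TIFF

    print(levels_14bit ** 3)  # ~4.4 trillion R/G/B combinations after demosaicing
    print(levels_8bit ** 3)   # 16,777,216 - the "more than 16 million"
    print(levels_16bit ** 3)  # ~2.8 x 10**14 for a 16-bit TIFF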
Well, that's why the exposure compensation adjustments exist, as does manual mode.

It seems to me that most of these arguments/opinions/observations are assuming the presence of one single correct exposure. Exposure is a creative exercise, and there could be several "correct" exposures for every shot that vary with the intent of the person behind the camera.

This is made so much easier with the real-time image shown in the EVF of most modern mirrorless cameras.