How does crop mode or cropping in post impact subject isolation/DOF?


I read the PhotographyLife article and it draws the same conclusions as an article that I referenced previously. If the Hasselblad user changes backs to use a smaller format, the depth of field will be the same if he uses the same settings from the same position, e.g. f/8 with an 80 mm lens at 5 meters. The entrance pupil will be the same and so the DOF will be the same also, but the field of view will be different. To get the same field of view and depth of field after switching to a smaller format, the user would have to use a shorter focal length and open up the diaphragm. It is all explained in the articles, which are well worth reading critically. Input from multiple sources and civil discussion has resolved the contradictions. Good work by all (or most all).
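For what it's worth, that equivalence can be written compactly. With C as the crop factor between the two formats and the shooting position fixed, a sketch of the articles' conclusion (subscript 1 is the larger format, 2 the smaller):

```latex
% Equivalence across formats (crop factor C, same shooting position):
%   f_2 = f_1 / C   shorter focal length restores the field of view
%   N_2 = N_1 / C   wider aperture keeps the entrance pupil f/N the same
%   c_2 = c_1 / C   the smaller frame is enlarged C times more to the same print
\[
f_2 = \frac{f_1}{C}, \qquad N_2 = \frac{N_1}{C}, \qquad c_2 = \frac{c_1}{C}
\;\Longrightarrow\; \text{same field of view and same depth of field.}
\]
```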
Cheers,
Bill


What I get out of the PL article is that the Hasselblad user doing nothing else but changing backs to a smaller size would experience reduced dof if they then viewed both images equally sized side by side. I think all agree there would be no difference in dof if the smaller one was viewed smaller, without normalizing the image size.

I guess in my view everything has to be considered in the context of the final image size and viewing distance. It would be like trying to get better dof for macro by shooting farther away and cropping in post. It doesn't work because the dof doesn't stay the same with cropping.
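To make that concrete, here is a minimal sketch using the standard hyperfocal-based DOF formulas; the 100 mm macro at f/8, 0.5 m, and the 0.030 mm full-frame CoC are illustrative assumptions, not numbers from the thread. Cropping 2x and enlarging to the same print halves the effective CoC, and the computed DOF halves with it.

```python
# Minimal sketch of the standard DOF formulas (all lengths in mm).
# Example values are assumptions for illustration, not from the thread.

def hyperfocal(f, n, c):
    """Hyperfocal distance: H = f^2 / (N * c) + f."""
    return f * f / (n * c) + f

def dof_limits(f, n, c, s):
    """Near and far limits of acceptable sharpness at focus distance s."""
    h = hyperfocal(f, n, c)
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return near, far

f, n, s = 100.0, 8.0, 500.0                 # 100 mm macro, f/8, focused at 0.5 m
near1, far1 = dof_limits(f, n, 0.030, s)    # full frame viewed whole
near2, far2 = dof_limits(f, n, 0.015, s)    # 2x crop enlarged to the same print

print(f"uncropped DOF: {far1 - near1:.1f} mm")   # ~9.6 mm
print(f"2x-cropped DOF: {far2 - near2:.1f} mm")  # ~4.8 mm
```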

Anyway, good talk, thanks to all for participating.
 
I only mentioned Hassy backs because the sensor size does not affect depth of field.
Resizing does not change the depth of field either. The DOF is not about image quality; it is the range from the closest to the furthest part of the image that is in reasonably sharp focus.
Re- Hyperfocal distance...🦘
 

So why do depth of field calculators change hyperfocal distance results when the only field changed is switching from a full frame camera to a crop camera? I think it is because they are equalizing the image size as viewed. Otherwise the hyperfocal result shouldn't change.
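If that is right, the arithmetic bears it out: the calculators quietly swap in a smaller CoC for the crop body. Using the common conventions of roughly 0.030 mm for full frame and 0.020 mm for a 1.5x crop (these exact values are an assumption; calculators differ slightly), a 50 mm lens at f/8 gives:

```latex
\[
H = \frac{f^{2}}{N\,c} + f:\qquad
H_{\mathrm{FF}} = \frac{50^{2}}{8 \times 0.030} + 50 \approx 10.5\ \mathrm{m},
\qquad
H_{\mathrm{DX}} = \frac{50^{2}}{8 \times 0.020} + 50 \approx 15.7\ \mathrm{m}.
\]
```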
 
Angle of view changes with different sized sensors, but DOF and hyperfocal distance remain the same.
[Attachment: DOF Formula.png]
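The attachment doesn't come through here; for reference, the standard relations (presumably close to what the image shows, with s the focus distance, N the f-number and c the circle of confusion) are:

```latex
\[
H = \frac{f^{2}}{N\,c} + f, \qquad
D_{\mathrm{near}} = \frac{s\,(H - f)}{H + s - 2f}, \qquad
D_{\mathrm{far}} = \frac{s\,(H - f)}{H - s}.
\]
```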
 
We seem to be reading the same material and arriving at very different results. FWIW, I fully agree with your observations on equivalency and what happens if you change one or more parameters (e.g. lens focal length or aperture) to maintain the same field of view in addition to changing sensor/media size. But what you posted above about no change in DoF when you keep everything the same except sensor/media size is not what I read from the Photography Life article. Specifically:

4.3.1) Smaller Sensor = decreased depth of field (if identical focus distance, physical focal length and physical f-number)

When you put photographs from two cameras next to each other to compare them, you are typically looking at these images at the same size. However, the image sensors that generated these two images may be very different in size. For example: the iPhone has a sensor that is less than one seventh the size of a 35mm full-frame DSLR in each of its dimensions. This means that the physical image that was projected onto the image plane of the iPhone was magnified by a factor more than seven times more than the DSLR’s image so that it could be displayed at the same size in the side-by-side comparison in this post.

This magnification magnifies everything – also imperfections and blurring in the projected image. This means that, at the same distance from your subject, at the same physical focal length and aperture setting, a camera with a smaller sensor will have shallower depth of field than the one with a larger sensor. The images will have the same perspective, but different fields of view (framing), so it is a bit of an apples and oranges comparison. However, the result is real, and goes contrary to common knowledge and what one might have expected!

This is exactly what I was referring to in terms of DoF changing with a crop/smaller sensor when all else is held constant. But this doesn't in any way challenge the idea of equivalency when you change multiple parameters to hold other things like field of view constant.

I'm not sure how you think this article and the quote above agree with your statement about the Hasselblad user changing backs but keeping focal length and aperture the same; the portion I quoted above says just the opposite, that DoF does change in that scenario. But as Bill and I have both pointed out, this is based on comparing final images at the same size, not just accepting the cropped image as a smaller portion of the original image, and this assumption is implicit in the online DoF calculators.
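To put rough numbers on the article's iPhone example: if the blur criterion on the final print corresponds to a CoC of about 0.030 mm on full frame (a common convention, assumed here), a sensor one seventh the linear size must hold the projected image to roughly

```latex
\[
c_{\mathrm{phone}} \approx \frac{c_{\mathrm{FF}}}{7} \approx \frac{0.030\ \mathrm{mm}}{7} \approx 0.004\ \mathrm{mm},
\]
```

because its image is enlarged seven times more on the way to the same print, which is exactly the shallower-DOF effect the quoted passage describes.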
 
I have no quarrel with the formula, but notice the circle of confusion portion. That, I think, might be why we come to different conclusions. The circle of confusion is not a fixed property of the lens or sensor, but rather the size of blur we humans can still call in focus. It changes with viewing distance, image size, and the visual acuity of the viewer. In the CoC calculator below, check out how the CoC changes if you switch from a full frame body to a crop body but do nothing else. Plug the new CoC into the formula and the resulting DoF is different.

Quote source from the link below:

"The Circle of Confusion is just a number that represents the diameter or the maximum size that a blur spot, on the image captured by the camera sensor, will be seen as a point in the final image by a viewer for a given viewing conditions (print size, viewing distance and viewer’s visual acuity).

As a result, once you’ve decided the values of sensor size, max. print size, viewing distance and visual acuity, you can calculate the Circle of Confusion. By doing so, you’re establishing the convention of what is considered to be acceptably sharp in the image."
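For anyone wanting to reproduce what the calculator does, one common convention (several online calculators use something like it, though the divisor varies) derives the CoC from the sensor diagonal. A minimal sketch, with typical published sensor dimensions assumed:

```python
# Sketch of the diagonal-based CoC convention: c = diagonal / k.
# k = 1500 is one common choice; some calculators use ~1442 or 1730 instead.
from math import hypot

def coc(width_mm, height_mm, divisor=1500):
    """Circle of confusion from sensor dimensions, in mm."""
    return hypot(width_mm, height_mm) / divisor

print(f"full frame: {coc(36.0, 24.0):.3f} mm")   # ~0.029 mm
print(f"1.5x crop:  {coc(23.5, 15.6):.3f} mm")   # ~0.019 mm
```

Note that only the sensor dimensions enter this convention, which is why any 1.5x body gives the same CoC in the calculator regardless of megapixels.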

 

The problem with that is that the CoC is not directly linked to sensor size.
The CoC of a sensor is more related to pixel density than to sensor size.
 

In the calculator, any crop body gives the same result and any full frame body gives the same result. It's the sensor size, not the pixel density.
 
One scenario one sees often is the idea of shooting macro from a greater distance in the hope of avoiding a narrow dof, and cropping later instead. I'm pretty sure you covered that situation, and it turns out to be a 'no free lunch' kind of thing, since the cropping would take back the dof that shooting from farther away initially gained.
Nobody seems to have answered this detail.

In the Mirrorless era, some might suggest this is an obsolete issue :cautious:
This is because at f/5.6 and wider, with good eyesight, you normally see the exact depth of field in the viewfinder.
In addition, at apertures smaller than f/5.6, a custom function showing real-time depth of field gives a bright image that clearly shows the depth of field you are getting.

When you can distinctly see the zone of sharpness in the viewfinder (as you can with ML), debates over depth of field formulae are much less needed.

For most practical purposes in macro work there is no real difference between focal lengths, though there is a small mathematical difference.

As an example, if you focus at 1/40th of the hyperfocal distance, the near limit of sharpness falls at 1/41 of the hyperfocal distance and the far limit at 1/39.
In practical photographic terms, dof is equal on both sides of the point of focus for macro image sizes.

If you double the focal length (and double the focus distance to keep the framing), you are likely to be at 1/80 of the new hyperfocal distance, with the limits of sharpness at 1/81 and 1/79 of it.
The result is no practical difference in depth of field.
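For anyone checking the fractions, they drop straight out of the hyperfocal relations (ignoring the small focal-length terms): focusing at s = H/n gives

```latex
\[
D_{\mathrm{near}} = \frac{H\,s}{H + s} = \frac{H}{n+1},
\qquad
D_{\mathrm{far}} = \frac{H\,s}{H - s} = \frac{H}{n-1},
\]
```

so n = 40 gives the 1/41 and 1/39 figures, and n = 80 gives 1/81 and 1/79.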

There is a "however" with Nikon macro lenses (and a few other lenses):
At infinity focus these lenses use a true f/8 aperture size when f/8 is selected.

By 1:1 they physically open up the f/8 aperture to around f/5.6, giving 1 stop less dof.
In addition they focus breathe by about 1 stop, losing about another stop of depth of field.

The overall effect is that at 1:1 the infinity exposure time can be used, rather than the 2 extra stops that the close-up exposure formula suggests are required at 1:1 focus.
Some old symmetrical lens designs that do neither have about 2 stops more dof at 1:1 than Nikon AF macro lenses, offset by a 2-stop increase in exposure time.
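The 2-stop figure is the usual bellows-factor relation for a simple symmetrical, non-breathing lens: the effective aperture scales with magnification m, so at 1:1 the marked f-number doubles, i.e. 2 stops.

```latex
\[
N_{\mathrm{eff}} = N\,(1 + m)
\quad\Longrightarrow\quad
N_{\mathrm{eff}} = 2N \ \text{at}\ m = 1.
\]
```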

Another important detail for relative novices is that depth of field formulae assume as a starting point that the image, whether cropped first or not, is enlarged enough to produce a 10 x 8 inch print.
Achieving this requires more enlargement of the DX image, mathematically needing a smaller circle of confusion for depth of field calculations.

Although it may initially sound counter intuitive, the shorter focal length needed for the same angle of view more than offsets that smaller circle of confusion, and the net result is an increase in depth of field.
As a result you get about one stop more depth of field when maintaining the focus distance, angle of view and aperture with DX compared to FX.
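As a rough check on the one-stop figure, using the approximation DOF ≈ 2Ncs²/f² (valid when the focus distance is well inside the hyperfocal distance) and the ~0.030/0.020 mm CoC conventions assumed earlier, matching the angle of view puts the DX focal length at the FX one divided by 1.5:

```latex
\[
\frac{\mathrm{DOF}_{\mathrm{DX}}}{\mathrm{DOF}_{\mathrm{FX}}}
= \frac{c_{\mathrm{DX}}}{c_{\mathrm{FX}}}\left(\frac{f_{\mathrm{FX}}}{f_{\mathrm{DX}}}\right)^{2}
\approx \frac{0.020}{0.030}\times 1.5^{2} = 1.5,
\]
```

and since DOF scales roughly linearly with the f-number, a 1.5x gain is about one stop's worth.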

Repeating what I said near the start, one of the advantages of ML is that with good eyesight you can easily see the depth of field you are getting in the viewfinder, making dof calculations less necessary than they used to be.

One of the advantages of some current PP software is that you can focus stack several images to double or treble dof, but perhaps let's not go down that rabbit hole.
 