Will new "blur" LR Classic feature make fast f/4 wildlife lenses obsolete?


Well, not really. You can clone a layer, apply a blur of a given radius, and then paint on a mask to vary the opacity of that blur across the image. But you still have a fixed-radius blur applied at different opacities throughout the image, which isn't what happens optically with an actual wider-aperture lens.

Optically, the far background has a much larger blur radius than the background close to the subject; it's not the same as one blur radius applied at varying opacities. Sure, you could conceivably create several layers, each with a different blur radius, and use masks to apply them selectively, but the transitions between them still wouldn't be as continuous and smooth as they are with actual optics.

Stated differently, the size of the circles of confusion increases as you move deeper into the frame, further behind (and also further in front of) the subject, so out-of-focus objects bloom into larger and larger areas the deeper you go. A single blur radius applied at varying opacity to different parts of the image doesn't behave that way.
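To make that concrete, here's a minimal sketch (my own illustration using the standard thin-lens approximation, not anything built into LR) of how the blur-circle diameter grows with distance behind the subject:

```python
# Thin-lens approximation for the blur-circle (circle-of-confusion) diameter:
#   c = (f / N) * (f / (s - f)) * |d - s| / d
# f = focal length, N = f-number, s = focused distance, d = object distance (all in mm).

def coc_mm(f, N, s, d):
    """Approximate blur-circle diameter on the sensor, in mm."""
    aperture = f / N                       # entrance-pupil diameter
    return aperture * (f / (s - f)) * abs(d - s) / d

f, N, s = 600.0, 4.0, 10_000.0             # 600 mm f/4 focused at 10 m
near_bg = coc_mm(f, N, s, 15_000.0)        # background 5 m behind the subject
far_bg = coc_mm(f, N, s, 60_000.0)         # background 50 m behind the subject

# The far background blooms into a much larger blur circle than the near one,
# which is exactly what a single fixed-radius blur cannot reproduce.
assert far_bg > 2 * near_bg
```

With these (made-up but plausible) distances, the far background's blur circle is roughly 2.5 times the diameter of the near background's.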

BTW, this is what I took from @Steve's examples further up in this thread. The far background doesn't seem to transition as smoothly, or reach as large a blur radius, in the software blur solution as it does with an actual wider-aperture lens. Ideally there would be some sort of distance or depth transfer function that would emulate an increasing blur radius as you get further from the subject. To do that accurately, the camera would somehow have to know not just the distance to the subject but also the distances to the background, and the varying distance across the background for things like landscape shots taken from a high vantage point, which seems quite difficult.

Lots of things can be accomplished with enough clever software, so maybe this could all be done in time, but I don't think the technology is there just yet, especially in fully automated form, though someday it might be.
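As a sketch of the multi-layer idea discussed above (my own toy example in NumPy, not how LR implements it), you can pre-blur the image at a few radii and blend between adjacent layers with a depth map; the coarse layer steps are exactly where the smoothness problem shows up:

```python
import numpy as np

def box_blur(img, radius):
    """Simple 1-D box blur; radius 0 returns an unblurred copy."""
    if radius == 0:
        return img.copy()
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(img, kernel, mode="same")

def depth_blur(img, depth, radii=(0, 2, 6)):
    """Blend pre-blurred layers per pixel; depth in [0, 1], 1 = farthest."""
    layers = [box_blur(img, r) for r in radii]
    # Map depth to a fractional layer index, then lerp between neighbours.
    idx = depth * (len(radii) - 1)
    lo = np.clip(np.floor(idx).astype(int), 0, len(radii) - 2)
    frac = idx - lo
    stack = np.stack(layers)               # shape: (layers, pixels)
    cols = np.arange(img.size)
    return (1 - frac) * stack[lo, cols] + frac * stack[lo + 1, cols]

img = np.zeros(64)
img[32] = 1.0                              # a single bright point
depth = np.linspace(0.0, 1.0, 64)          # depth increases left to right
out = depth_blur(img, depth)
```

With only three layers the blur radius changes in visible steps; a real variable-aperture simulation would need a continuous per-pixel radius, which is the hard part.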
I would agree. The key words you used were "single blur".
 
This thread is very informative. What I find interesting is that there seems to be an assumption that isolating the bit of the image that is deemed important is best achieved by blurring the rest of the image. As I understand things, we see with our brains: our eyes scan the view and the brain makes up the image we see. Seeing is an acquired ability. What is done by fixing an image on a flat surface, whether by photography, drawing, or painting, is the creator's choice. Our eyes do roam around the flat image, but that's not the same as looking at what the image is of. At the present time it is fashionable to have bokeh. 'Bokeh' is in, as is the concept of AI. [For what it's worth, I don't understand how there can be 'artificial intelligence' when nobody knows what natural intelligence is.] Dave mentions AI above. I am of the opinion AI is a lot less than it is made out to be.
Anyway IMO we each see the world in our own way. Whether or not that individual's 'way of seeing' is commercial is something else.
 
I don't want to put words in Stefan's mouth and do not wish to claim any special insight into what he was thinking. I can say the way I read it and what I tend to think about the topic.
1) There is a risk of photos all starting to have the same look. A few years back, everyone was posting and shooting the "animal portrait" or "hero" shot. Those are still great, but it got kind of old. I have always gravitated toward more environmental shots. Hero shots still have their place and do showcase magnificent creatures, but variety is the spice of life.

2) Completely blown-out backgrounds can, at times, detract from an image, especially when they strip the sense of time and place from the photo. Sometimes it is difficult for the viewer to discern whether the image was taken in the backcountry wilderness, a city park, a zoo, or an animal farm. Sometimes context is important.

Kind of like the old joke: "How can you tell the difference between one disco song and another?" "They have different names."

Again, I don't presume to have any special insight into what the other fellow was thinking, just musing my personal reaction to his post.

The cool thing about photography is it is a form of art, thus there really are no correct or incorrect opinions.

Jeff
I am not disagreeing with you :)
 
I think it's a good thing, but in Photoshop, if you export an image that you used AI on, the metadata is stamped with a Content Credential stating that fact. Even if it was something basic, like filling in a rotated crop where you would normally use Content-Aware Fill, if you use Generative Fill the file gets permanently labeled.

Does Lightroom label denoise or blur that way?
I don't see that in LR yet. I would hope they would write some description of the edits into the metadata. It's just as important to describe what was not done as what was done with AI. I think the distinction in PS is that you can add content that was not in the original photo.
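For anyone curious, a crude way to spot-check whether an exported file carries that label is to scan it for C2PA/JUMBF markers, which is how Adobe embeds Content Credentials. This is a rough heuristic of my own, not a real Content Credentials parser, and the byte strings below are synthetic stand-ins for actual JPEG files:

```python
# Rough heuristic: Adobe's Content Credentials are stored as a C2PA manifest
# inside a JUMBF box, so the "c2pa" / "jumb" marker bytes usually appear
# somewhere in a labeled file. A real check would parse the manifest properly.

def looks_like_it_has_content_credentials(data: bytes) -> bool:
    return b"c2pa" in data or b"jumb" in data

# Synthetic example bytes, for illustration only (not real JPEG payloads):
plain_jpeg = b"\xff\xd8\xff\xe0JFIF...image data...\xff\xd9"
labeled_jpeg = b"\xff\xd8\xff\xe0JFIF...jumb c2pa manifest...\xff\xd9"

assert not looks_like_it_has_content_credentials(plain_jpeg)
assert looks_like_it_has_content_credentials(labeled_jpeg)
```

In practice a tool like exiftool, or Adobe's own Content Credentials verify site, is the right way to inspect this; the sketch just shows that the label is detectable in the file itself.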
 
LR does include Denoise in the JPG's metadata.

Untitled-1.jpg
 
Personally, I think the best use for this tool isn't using it at 100% :)

I think the best way to use it is for just knocking off the hard edges in the background and not trying to completely simulate a faster lens - I think when you push it that far, it tends to look a little unrealistic (although, this can also depend on the shot - texture of the background, that sort of thing).

I've been trying this out on a variety of my photos for a little while now, and this is exactly the comment I would make. I've tried it on a variety of subjects, from photographing the local high school marching band to birds, critters, portraits, etc. I've tried it with a variety of lenses, all of which are pretty fast and have fairly good bokeh to begin with: primarily the new 135 f/1.8 Plena, the 50 f/1.2, the 600 f/4 TC, the 800 f/6.3 Z, and the 70-200 f/2.8.

What I found was that if I had more DOF than I wanted, using the tool to get the DOF I wanted (and would’ve gotten with a better aperture selection) just ended up creating a fairly fake look. There were often hard edges between in-focus and out-of-focus that wouldn’t normally exist. I was able to get it a lot better, with a fair bit of time using the “visualize depth” option and adding/removing/softening as needed, but it took a lot of effort to get something I didn’t think was too obviously digitally-altered.

However for cases where the DOF was already pretty close to what I wanted, and I already had a pretty nice bokeh, I found some images benefited from a little bit of lens blur - often in the 30-50% range. But I still often needed to do a little bit of touch-up with the visualize depth enabled.

This feature will almost certainly get better over time. However, I think it's going to be difficult for photos taken with current cameras to truly match high-quality, fast glass. Now, camera phones… that might be a different story for the ones which have depth sensors, but that would primarily benefit the algorithms built into those devices until/unless they include that information in their raw format (and the offline tools are made to read and use it).
 
Two years ago I photographed a Swallow-tailed Kite sitting on a tree branch with a very leafy background in Malahide, Ontario, Canada. The photo was taken wide open at f/4, yet the background was still a distraction. At the time I used software called InPixio to blur the background. The resulting image was good enough for Flickr's software to pick it up for Explore, showcasing it on the day I uploaded it. Viewers liked it, and so did Flickr's AI. Funny, isn't it? I guess beauty is in the eye of the beholder. Since then I've tried to do the same with a few other images, turning them more into art than real imagery. Because of the effort it took, and not always getting optimal results, I eventually stopped using the software. Here's the link to that image if anyone wants to take a look.
Background blur
 
The technology isn't there yet - but I have no doubt it will be at some point. The biggest issue I see with it is that it's not great at transitions from sharp to blurry like a real lens - it's applied in too general a way without enough subtle granularity in the transitions.

Still, let's play a game using some bokeh examples from one of my recent videos. In this case, we're going to compare the 600 6.3 to the 600 F/4. Also keep in mind this is an INCREDIBLY easy and unrealistic scene - real wildlife shots would be much tougher.

First, just what the two lenses look like side by side, the backgrounds unmolested by technology :)

View attachment 73732

Now, how it looks with 100% blur applied to the F/6.3 lens.


View attachment 73733

At first glance, the 600PF looks really good here, but look closer. Notice how the blur "spreads out" in the 600 F/4 shot, as opposed to the 600 6.3, where it simply takes the background and applies uniform blur to it. You can especially see this in the greens in the lower left and also the yellows right next to the clamp (note the size of the dark areas). Also, note the transitions between the blurry areas - they are still contrasty and hard compared to the 600 F/4. However, the tool does soften the textures in the background, which is a plus. Will a casual Facebook user notice things like this? Probably not. However, they may notice that the effect looks somehow "fake" without being able to put a finger on it.

Here's a quick side by side with just the 600 6.3 so you can see the difference the filter made.

View attachment 73734

Also, keep in mind that you can apply this to the F/4 shots too. In this case, both at 100% blur.


View attachment 73735

Personally, I think the best use for this tool isn't using it at 100% :)

I think the best way to use it is for just knocking off the hard edges in the background and not trying to completely simulate a faster lens - I think when you push it that far, it tends to look a little unrealistic (although, this can also depend on the shot - texture of the background, that sort of thing).
Nice comparison, Steve. I was about to do something like that to see the differences myself, so thanks for doing this. Although I do feel it can help a little to soften the bokeh in the background for slower zoom lenses when it's a simple image, it is far too much work for me to bother with for more complicated scenes.
 
$5k for 600 F6.3, 3.2 lbs and 5-10 extra minutes in post processing to get 90% of the way there.
VS.
$15k for 600 F4 at 7 lbs to gain the extra 10% that likely won't be noticed by anyone I share with.
I love the smooth look of the faster lens, but most of my photography time involves some hiking, so the weight factor is important. I'm in for the "blur" effect. Take out the texture and clarity, maybe mute the contrast, sharpening and saturation a bit and you can get pretty close.
Yes, it takes time and the pro is not going to want to do this, but it's worth the trade off for me.
I think having to shoot at the higher ISO or lower shutter speed has the potential to muddy the shot more.
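The ISO cost of the slower lens mentioned above is easy to quantify with standard exposure math (a back-of-the-envelope calculation, nothing LR-specific):

```python
import math

def stops_between(n_wide, n_narrow):
    """Exposure difference in stops between two f-numbers."""
    return math.log2((n_narrow / n_wide) ** 2)

stops = stops_between(4.0, 6.3)    # about 1.3 stops
iso_factor = 2 ** stops            # about 2.5x the ISO at the same shutter speed
```

So at the same shutter speed, the f/6.3 lens needs roughly two and a half times the ISO of the f/4, which is the potential shot-muddying trade-off being weighed here.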
 
I would agree. The key words you used were "single blur".

I haven't used it much, but the Iris Blur tool in Photoshop solves that as it simulates the effect of DOF. I think this new tool is better, though, especially used in Photoshop in combination with masking. That whole blur gallery in PS has interesting possibilities.
 
I’ve been using Photoshop for a number of years to selectively blur images. Blurring images has a number of advantages including subject isolation and noise reduction in the background. I’ve used Gaussian Blur, Field Blur, and Iris Blur. With Cloud masking in Photoshop, most images can be accurately blurred in five minutes. The technique is dependent upon highly accurate masking and the ability to combine layer masks to create selective masking. In my experience, the Cloud masking in Photoshop is more accurate than the masking produced by Lightroom. You can also perform blurring on a gradient mask to smooth transitions if required. Alternatively, Iris Blur can be used to perform this function.

For those individuals who cannot afford a $15,000 f/4 prime lens, I would suggest these techniques as an alternative. In general, the blurring produced by Photoshop is not comparable to the creamy blur produced by an f/4 prime but does improve most images.

I’ve included a number of images that have been selectively blurred in Photoshop for your evaluation. The last image uses a blur on a linear gradient over the water in the proximal foreground to hopefully lead the viewer into the image.
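The gradient technique in the last paragraph can be sketched in a few lines (a toy NumPy stand-in for the Photoshop layer stack, using a simple box blur in place of Gaussian/Field/Iris Blur):

```python
import numpy as np

def box_blur(img, radius):
    """Simple 1-D box blur standing in for a Photoshop blur filter."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(img, kernel, mode="same")

def gradient_blur(img, radius=4):
    """Blend sharp and blurred copies through a 0-to-1 linear gradient mask."""
    blurred = box_blur(img, radius)
    mask = np.linspace(0.0, 1.0, img.size)   # 0 = fully sharp, 1 = fully blurred
    return (1 - mask) * img + mask * blurred

row = np.zeros(100)
row[10] = 1.0                                # detail near the sharp end
row[90] = 1.0                                # detail near the blurred end
out = gradient_blur(row)
# The spike near the sharp end survives; the one near the blurred end is softened.
```

This is the same idea as blurring through a linear gradient layer mask: the transition from sharp to blurred is continuous, which avoids a visible seam.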
06152023-GLP-180-NEF-BCG.jpg
06152023-GLP-186-NEF-BCG.jpg
06152023-GLP-187-NEF-BCG.jpg
06152023-GLP-197-NEF-BCG.jpg
06152023-GLP-205-NEF-BCG.jpg
10152023-GLP-BCG-0548.jpg
 
Can you post the original for comparison? It would be interesting to see what you added.
Simonsi:

Sorry it took so long. I had computer problems. I will repost the original RAW (converted to JPG, no adjustments) and the adjusted picture where I used Adobe Lens Blur.

Tom
WR011599.jpg
WR011599.jpg
 
I assume this cropped image is the processed version. If so, something strange and jittery is going on with the out-of-focus bushes and grass behind the cat's back, and up and to the right from its hindquarters all the way to the right border of the frame. It looks like what sometimes happens with Topaz Sharpen in out-of-focus areas with sharp lines. Did you run some sharpening, or is that from the blurring in post?
Sorry it took so long. I had computer problems. I will repost the original RAW (converted to JPG, no adjustments) and the adjusted picture where I used Adobe Lens Blur.

Tom
WR011599.jpg
 
I assume this cropped image is the processed version. If so, something strange and jittery is going on with the out-of-focus bushes and grass behind the cat's back, and up and to the right from its hindquarters all the way to the right border of the frame. It looks like what sometimes happens with Topaz Sharpen in out-of-focus areas with sharp lines. Did you run some sharpening, or is that from the blurring in post?
I ran Topaz Photo Ai
 
I ran Topaz Photo Ai
FWIW, when I get these kinds of artifacts in semi-out of focus backgrounds from the Topaz products I open the original and the post-Topaz image as layers in Photoshop and use a layer mask to detune or eliminate the background artifacts. You can also use the masking built into Topaz products but I like layer masks in PS as I can make more careful adjustments and easily back things out if I go too far.
 
I have not even tried the new blur feature. I can only say, in general terms, that everyone does some computer manipulation of every image; this is just another tool. I think cheaper good lenses, because of all the manipulations at our disposal, are catching up. There is nothing any computer can do to capture the creamy bokeh of a great prime... which is why I still lug along my extremely heavy old Nikon f/4 prime as well as my 500PF, which is light and excellent but just not quite the same incredible quality.
 
Depends on whether you now see photography as a mix of images and art.
Remember what the camera captures is not what you see with your eye, so some editing is needed to ”create” the image.
At some point your effort is part art and part image.

That all said, version 1 of this feature is decent, and even better if you separate the subject to its own layer.

I think many that are comfortable with ACR will do a combination of both.

I think you are right on the button. A photograph is an artefact, made by a camera with a range of controls available to the photographer, the most significant of which is what you point it at. Anything goes. Whether you choose to treat the untouched image as the finished article or to indulge in a lot of post-processing is really up to you. There are no rules. Each photographer will employ as many (or as few) of the available controls on the camera and on the computer screen as they judge necessary to get the image that best suits their tastes.
There are aspects that I dislike (long exposures to turn the sea into a layer of mist is one example) but I don't consider this a sin.
A RAW file is not sacred, it has to be converted to make it printable.

ISTM that the aging of the photography population is signing the death warrant of the fast long lenses -- folks just can't lug them around anymore. Software allows the older photographer to get the look he wants with equipment he can carry.

Yep, as a creaky 75 year old I will join that line. My longest lens is an F mount 500PF and that is challenging enough for my back.

Chaz
 
Short answer…it won’t make them obsolete. What it will do…with careful application…is provide those of us who for either financial or weight reasons don’t have the 600TC lens the ability to get our images a little closer to what it does.
 
Since we can now considerably blur backgrounds with the new "blur" feature in LR Classic and Adobe Camera Raw, will this feature become so refined that expensive f/4 telephoto lenses become obsolete? I doubt it, but what are your thoughts?
No serious photographer will ever rely on software solutions if the initial product is of low quality. Personally, I would never be proud of my picture if I knew the RAW file wasn't up to my standards.
 