Lightroom Classic Version 13.0 just out


Ivan Rothman

Lightroom has just released its new version 13.0, which includes some major upgrades in the right panel of the Develop Module.
1. Lens Blur, to blur out the background. I have played with this a little and it handles images well.
2. Color Mixer. This replaces the classic HSL Color panel. You can still select the familiar HSL controls by choosing MIXER, but for more precise targeted adjustments choose POINT COLOR.
3. HDR Output. This makes it easier to view HDR images both on new monitors that support HDR and on older Standard Dynamic Range (SDR) displays.

Anthony Morganti has put out a nice video showing these new features:
 
The lens blur tool is the first one of these AI tools that really troubles me. I think this is because - no pun intended - it seems to me that it really starts to blur the line between what is authentic and what isn't, or maybe more to the point, between what is truly "of the artist" and what is from a machine.

Previously, you could use AI to do a few things, one of which was inserting new elements in a photo. This is a pretty clear line: if you're concerned with the authenticity of a photo, you're not going to want totally new things inserted into it. Or, if you see a photo you know had something inserted by AI, you know that the photo is ultimately the work of a machine. This can also apply to removing things using AI by generating a background.

Now I think there may be at least some gray area here when we're talking about very small things. I don't consider a photo any less the work of the artist if someone removes a twig from a photo (though maybe it does bother you). It doesn't bother me that much if someone extends the edge of a frame by 5% or something by generating a background. It would make me look at the photo differently if someone removed a whole bunch of foliage and generated a third of a bird to fill in the gaps. It would make me look differently if someone extended a frame by 50% and added a whole background scene in there.

I think what it comes down to is whether the final product is substantially the same photo as the original, or something of an entirely different character. A minor extension of the edge to give a subject more headroom or slightly improve the composition is the sort of thing that could easily be as natural as the difference made by the VR element re-centering before a shot and slightly changing the framing from the original. A change like this doesn't bother me because the final product has the same essential character as the original. Most people would look at the two photos and say, "that's basically the same photo, it's just framed slightly differently." The same applies to a photo with and without the single twig in the way.

I'm not saying that there's no element of skill involved in getting that composition perfect in the first place or getting the bird without the twig in the original image. There can be, but ultimately I see the difference as so minor that it doesn't strike me as "no longer the same photo."

There are edge cases. In one example I remember, Duade Paton had a photo with a nice setting sun, the top half or so of which had been cut off, so he extended the frame to put in the rest of the sun. That's a harder case because in some ways the change was essential to the character of the photo. I think some would be less okay with this than others.

Having played with it a bit, I think this new tool blurs the line even more than Duade's sun. I can't share it right now, but earlier I took an old shot of a heron in flight that was always pretty mediocre and blurred the background, and it's like a totally different photo. Many of us on this forum who have spent a lot of time around nature photography have heard Steve and others say that at skilled levels of wildlife photography the photographer is usually looking at the background first and foremost, even before considering the subject. It's that important - so being able to take just about any background and turn it into a great background is, to me, often going to change an image to the point where it's really just not the same photo.

On the other hand, there's a degree to which someone might reasonably say that this is just sort of functioning as an extension of a photographer's equipment, giving them the chance to do in post what they already would have intentionally done in the field if they weren't prevented by economics. I can't afford an f/2.8 telephoto lens, but if I could, I'd have one, and that heron would have been photographed with that nice background in the first place - so did I really change my photo's character, or did I just get the equivalent of a much more affordable fast aperture?

These sorts of dynamics and questions leave me feeling pretty uncomfortable about how to view photography in a way other advancements haven't.
 
One word of warning - there is a problem with the SDK that blocks Negative Lab Pro conversions. Do not upgrade if you are using Negative Lab Pro until there is a bug fix.

In other respects, the updates look good and are significant. Morganti's video is very good and offers a slightly different perspective from Julieanne Kost's.

Here is a link to a blog post from Lightroom Queen:

Point Color is a big change to HSL, so here is the latest from Julieanne Kost:

Here is Julieanne's take on Lens Blur. It is an AI-based enhancement from Adobe.
 
I’m on the other side of the fence…I care more about what the final image on the blog looks like…so I have been darkening and de-contrasting backgrounds for a long time. And while one would almost always prefer to have either the $14K f/4 exotic prime or the perfect positioning of the subject relative to the background to get the nicest bokeh…a lot of the time you just can’t get that perfect positioning…and realistically, if you’re not making money with your cameras, it’s awfully hard to justify the $15.5K lens and probably another $2K for the appropriate tripod and gimbal head, especially if it’s not a lens you’re going to use every time out, and you have to be willing to carry it and the tripod a lot and maybe not get any decent shots out of that effort today. Sure, some non-pros will get them anyway, and it’s cheaper than some other hobbies…but most of us aren’t going to spend that kind of money…and the reality of backgrounds being what they are is just that…reality. So the choice comes down to putting a busy-BG image on the blog or your wall…or processing it to get the best final image out of what you got. That doesn’t preclude trying to get better images out of the camera and improving your skill set…but a lot of people are going to go for the best-looking image from what they got in the field, for whatever reason. I played with the lens blur a bit on an early-AM backlit Tricolor I got at Black Point a couple weeks back…and comparing the best I could do without lens blur vs. adding in the lens blur with a Select Background mask and a subtracted linear gradient so only the BG got blurred, the lens blur version came out better. Not as good as an f/4 lens obviously…but still better. I think that unless you’re in journalism or entering a contest with rules, PP is just fine…cloning out trash or sticks, sky replacement as long as you do it well, or whatever is just fine, and there’s no need to tell anybody what you did because most people don’t care what you did, they care what it looks like.
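The "Select Background mask minus a linear gradient" combination described above is really just mask arithmetic: the background selection gives you a 0..1 weight per pixel, and subtracting a gradient fades the effect out across the frame. A toy numpy sketch (the array sizes and regions are made up purely for illustration, not anything from Lightroom):

```python
import numpy as np

# Masks are floats in 0..1: 1 = fully affected by the blur, 0 = untouched.
h, w = 6, 8
background = np.ones((h, w))
background[2:4, 3:5] = 0.0  # subject region excluded from the selection

# A linear gradient that is strongest on the left and fades to zero on the right.
gradient = np.tile(np.linspace(1.0, 0.0, w), (h, 1))

# Subtracting the gradient removes the effect from the left side of the frame,
# so only the right-hand background ends up fully blurred.
mask = np.clip(background - gradient, 0.0, 1.0)
```

The same logic underlies any "intersect/subtract" masking step: it is just per-pixel arithmetic on weights, clipped back into the 0..1 range.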

As we all know…there is no such thing as a no-post-processing photo…it just doesn’t exist. JPEG out of camera is processed by whatever algorithm is in there. RAW looks like crap unless processed. Ansel Adams PP’ed his images…but he used the enlarger to do it; we use LR and AI today so that we don’t necessarily need to develop pro-level editing skills…again though, we all try to improve, but the goal is something people like to look at…so PP in just about any manner one wants is A-OK with me. And if you tell me, it’s fine…but it’s also OK if you don’t tell me, unless I specifically ask and you lie about it, which makes you a jerk for lying.
 
I can appreciate Ivan's idea that the tool is just taking the place of a better lens, but if we're talking about the idea of just doing whatever is possible to get a final image that sells... if that's the goal, then at a certain point what's the value of a photographer at all? If one can just sit in a room and ask a computer for a picture of an eagle feeding its young, and it will make one that looks as good as or better than a photo, why take the photo?
 
Adobe is very heavily invested in AI tools and capabilities. They have a lot of different audiences, and for the commercial world these changes are quite important. I'm sure they will also spread into stock imagery, which is more about creativity than standard images.

In terms of how and when you choose to use those tools, it's a personal choice. I do wish Adobe would adopt some sort of standard that embeds editing choices in the metadata. At some point the specific AI edits used may become important, and a documented record would be useful.
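The metadata idea could be as simple as a structured edit log stored with (or alongside) the file, with a digest so it can't be silently altered. This is purely a hypothetical sketch - it is not Adobe's format, and every field name here is invented - using only the Python standard library:

```python
import hashlib
import json

def make_edit_record(image_name, edits):
    """Build a hypothetical provenance record listing each edit applied.

    `edits` is a list of dicts such as {"tool": "Lens Blur", "ai": True}.
    A SHA-256 digest over the serialized list makes later tampering detectable.
    """
    payload = json.dumps(edits, sort_keys=True).encode("utf-8")
    return {
        "image": image_name,
        "edits": edits,
        "digest": hashlib.sha256(payload).hexdigest(),
    }

record = make_edit_record(
    "heron_in_flight.dng",
    [
        {"tool": "Lens Blur", "ai": True, "amount": 50},
        {"tool": "Point Color", "ai": False},
    ],
)
```

Something along these lines (or the industry's emerging Content Credentials / C2PA effort, which Adobe is involved in) would let a contest judge see at a glance which AI tools touched a file.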

Many photo contest organizations are having trouble with these issues and are behind the curve. At one extreme, some of them are banning certain AI based tools. But others are describing limits, specific adjustments, and trying to find a middle ground.

I'm trying to use the tools and learn to recognize where they struggle so I can incorporate that in judging photos. Often there is a halo or odd behavior in transition areas. I still think it's important to consider the background rather than assuming you can fix it when editing. But I do think you can take a good background and make it better - turning an f/4 or f/5.6 background into something that looks like f/2.8. For many images, that's a game changer.
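The "blend a blurred copy back in through a mask" behavior under discussion can be sketched in a few lines of numpy. This uses a crude box blur, not Adobe's depth-aware lens blur - it is only meant to show why the transition zone, where the mask ramps between 0 and 1, is exactly where halos appear:

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude separable box blur - a stand-in for a real lens-blur kernel."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def masked_blur(img, mask, radius=2):
    """Blend a blurred copy back in only where `mask` (0..1) marks background."""
    return mask * box_blur(img, radius) + (1.0 - mask) * img

rng = np.random.default_rng(0)
img = rng.random((16, 16))         # stand-in single-channel image
mask = np.ones((16, 16))
mask[6:10, 6:10] = 0.0             # subject region: fully protected

result = masked_blur(img, mask)
```

Where the mask is exactly 0 the subject pixels come back untouched; where it ramps, the output is a mix of sharp and blurred pixels, which is where the characteristic edge halos come from.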
 
Some people will use an AI-generated original and some will start with their own image…but neither is inherently right or wrong. It depends on why the image is being created…and while I would only manipulate my own images, the next person might want to do everything via AI and then sell it. The "why take the photo at all" is different for each case…I want to modify my own photos; the guy with the AI original is simply producing what the art director wants at the lowest cost.
 
I agree…much better to have a good BG to start with…but a lot of that depends on whether one is willing to spend the money for the exotic lens or live with what you get for the BG…and if the only opportunity you had for the wedge-tailed rainbow nutwarbler (because they’re endangered and skittish, say) is at 30-yard range with the BG a foot behind…then editing is just fine, and using PP to add bokeh is OK absent the few situations when it isn’t. Just like a sky replacement because the sky was crappy the one day you were on location but yesterday, 3 miles down the coast, it was a perfect sky…as long as you don’t run afoul of those other situations. My view is that PP is just fine, except for when it’s not. But all of that is, as we know, a personal choice, and I’m good with that...and understand that others’ mileage may vary.
 
This debate is of course predictable. All of this AI coming to photography represents major change, and people are going to have very different perspectives. I welcome the ability to remove distracting objects (which we could do to a lesser extent, and with more difficulty, for a long time) as I don't think it changes the narrative. However, I am on the same side of the fence as SCoombs; the lens blur begins to make me uncomfortable.

Having these changes show up in the metadata, per Eric's suggestion, is I think a very good idea.

But regardless of your stance, these things are here to stay, and what we have now will almost assuredly seem trivial compared to what we will have in a very short period of time.

I do absolutely love the new color grading tools; more PS power inside LR is always a good thing.
 
I agree to an extent. However, I sometimes will slightly soften a background and have been for a long time now, keeping in character with the photo. The thing is, background blur is tricky, and it can look VERY out of place if not done properly. You need to start with an already somewhat blurry background, and the amount of blur has to stay believable. If you have a bird with sticks just a few inches behind it, it'll be painfully obvious the background was blurred. I think that this new tool - used in careful moderation - will be more of a help than a hindrance. I'd use it to "knock the edge" off of a background that's already in the ballpark, but not with something where it would look out of place.

Don't worry, we'll see lots of examples of it being done incorrectly as people discover it :ROFLMAO:
 
Agree completely. Once masking came into LR, you could lower texture and apply a bit of blur to the BG. And depending on the photo, it could look clearly contrived. But in the right circumstances, i.e., when all the background was, well, really in the background (as in far away), it could have a positive effect, and I often used it, albeit sparingly. And of course, you have been able to do this even better in PS for years.

So why is this different? It isn't, really, other than that it's capable of so much more that there is concern skills are no longer needed, or needed to a lesser extent. I think that is the core of many people's fear of these new tools.

But they sure aren't going anywhere, so I will embrace them, use what I am comfortable with, and skip what I am not. All any of us can do.
 

I agree with Steve about the way it will look when used in the wrong situation, with an important caveat: I suspect a big part of why blur in the wrong situations looks wrong to most people is simply what they're used to. If more and more people use it in the wrong situations and viewers see it more regularly - i.e., as it is normalized - then blur of objects right next to the subject is going to start to look right to all but experienced photographers. This has happened with a few other things in the past, though not so much that I can think of in the areas we're discussing here.

So I do think that's a potential concern.

In any case, I agree with Platalea ajaja here that the thing that starts to bother me here is that it's not a human being putting in skilled - and more importantly creative - work to do some of these things. I do think there are a lot of layers and nuance here. If an AI can create an outstanding, photorealistic image of an Osprey and its prey out of nowhere I don't have a problem with that image existing and I don't have a problem with that image being used for various things, commercial uses, etc. I think there is a concern about how this kind of thing can impact photographers' work and incomes, of course, but leaving that aside I think the ability to create images like this is fine.

On the other hand, I think something very important is lost if images like this become the only way we see images of ospreys with their prey, and if photography that is authentically artistic is not practiced, recognized, and valued - or, if we want to go in that direction, even "protected" (whatever that means). Without going into specifics, to avoid running afoul of the forum's rules, I suspect that in the coming years and decades we are going to see many religions start to think about, focus on, and preach about the value and dignity of human work and creativity and the dangers of losing it to AI. In other words, I really do think the question of a machine doing everything vs. a human being being the primary artistic agent is central to the whole thing.
 
In playing with this more, it does actually seem pretty hit or miss right now, and it's unclear how much better - and how quickly - it will get in the area where it has trouble, which is detecting the subject. So far almost every photo I've tried produces some very bad artifacts because it can't quite map out the right parts of the image to keep in focus. Some of this might be fixable with the refinement brush - for instance, it keeps reading herons' beaks as part of the background and basically erasing them (it's pretty weird). Others seem very hard to fix even with refinement: one thing it seems awful at is including hairs on a person's head or whiskers on an animal in the in-focus area. Adjusting the range doesn't help, and it's almost impossible to get the refinement brush on all the hairs correctly.
 
As it currently stands, I don't think the blur tool is going to put an end to sales of wide-aperture lenses, nor eliminate the need to be aware of what's in the BG. Aside from the masking accuracy issues, with the degree of blur it currently applies it's not going to make an image shot with a TC at f/8 and a close BG look like it was taken with an exotic prime and a distant BG. Here's an example image.

1) As shot. I had been shooting these two cubs with the mom, so I had my aperture cranked down to f/8 (accidentally f/7.1 in this frame). When they settled down I forgot to reset back to max aperture.
_NZ99368.jpg
2) With blur tool set to max and depth left at default setting which looked pretty good. Not a huge difference.
_NZ99368-2.jpg
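There's simple optics behind the observation that a close background at f/8 can't be made to look like a distant background at f/2.8. In the thin-lens approximation, the blur disc a background point makes on the sensor grows with the aperture diameter, the magnification, and how far the background sits behind the subject. This is a back-of-the-envelope sketch with made-up distances, not a claim about any particular shot:

```python
def blur_disc_mm(focal_mm, f_number, subject_m, background_m):
    """Approximate blur-disc diameter (mm on the sensor) for a point
    at background_m when focused at subject_m (thin-lens model)."""
    f = focal_mm / 1000.0                      # focal length in metres
    aperture = f / f_number                    # entrance pupil diameter
    magnification = f / (subject_m - f)
    spread = (background_m - subject_m) / background_m
    return aperture * magnification * spread * 1000.0  # back to mm

# 400mm at f/8, background 0.5 m behind a subject 10 m away:
close_bg = blur_disc_mm(400, 8, 10.0, 10.5)
# 400mm at f/2.8, background 30 m behind the same subject:
far_bg = blur_disc_mm(400, 2.8, 10.0, 40.0)
```

With these hypothetical numbers, the close-background f/8 shot produces a blur disc of roughly 0.1 mm on the sensor, while the distant-background f/2.8 shot produces one around 4.5 mm - about 45x larger. That's the gap a software blur is being asked to bridge, which is why the fake tends to look fake.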
 
Everything that the blur function can do - could be done before. Just took hours and sometimes days and not seconds. Adobe is just making it easier.
If the tool is overdoing it - it's because you were heavy-handed.

Embrace the advances in technology. And do exactly what you like to do with it. That's why you subscribe to it. I doubt anyone would LOVE to go back to the days before Autofocus on cameras.

Also, keep in mind - photographers are not the only Photoshop users. Graphic designers (and other types) form a HUGE part of the user group, and more often than not they earn money from their work as graphic designers, whereas photographers are mostly non-earning users.
 
Everything that the blur function can do - could be done before. Just took hours and sometimes days and not seconds.
Well said.
Back in the days of black and white darkrooms, we would do things like dodging and burning. Now in the digital world we can do that so much better and easier.
I don't believe that blurring a background takes away from the honesty of what the picture is showing. And I do agree with Steve that blurring a background (with whatever technique you use) is best done with a gentle hand to make the result look more realistic.
 
The lens blur tool is the first one of these AI tools that really sort of troubles or bothers me. I think this is because - no pun intended - it seems to me that it really starts to blur the lines between what is authentic and what isn't, or maybe more to the point between what is truly "of the artist" and what is from a machine.

Previously, you could use AI to do a few things, one of which was inserting new elements in a photo. This is a pretty clear line: if you're concerned with the authenticity of a photo, you're not going to want totally new things inserted into it. Or, if you see a photo you know had something inserted by AI, you know that the photo is ultimately the work of a machine. This can also apply to removing things using AI by generating a background.

Now I think there may be at least some gray area here when we're talking about very small things. I don't consider a photo any less the work of the artist if someone removes a twig from a photo (though maybe it does bother you). It doesn't bother me that much if someone extends the edge of a frame by 5% or something by generating a background. It would make me look at the photo differently if someone removed a whole bunch of foliage and generated a third of a bird to fill in the gaps. It would make me look differently if someone extended a frame by 50% and added a whole background scene in there.

I think what it comes down to is whether the final product represents a photo that is substantially the same as the original, or whether it creates something of an entirely different character. A minor extension of the edge, to give a subject more headroom or make the composition look slightly better, could easily be as natural a difference as the VR element re-centering before a shot and slightly changing the framing from the original. A change like this doesn't bother me because the final product has the same essential character as the original. Most people would look at the two photos and say, "that's basically the same photo, just framed slightly differently." The same applies to a photo with and without the single twig in the way.

I'm not saying that there's no element of skill involved in getting that composition perfect in the first place or getting the bird without the twig in the original image. There can be, but ultimately I see the difference as so minor that it doesn't strike me as "no longer the same photo."

There are edge cases. In one example I remember Duade Paton had a photo with a nice setting sun the top half or so of which had been cut off, so he extended the frame to put in the rest of the sun. That's a harder case because in some ways the change was essential to the character of the photo. I think some would be less okay with this than others.

Having played with it a bit, I think this new tool really blurs the line even more than Duade's sun. I can't share it now, but earlier I took an old shot of a heron in flight, which was always pretty mediocre, and blurred the background - and it's like a totally different photo. Many of us on this forum who have spent a lot of time around nature photography have heard Steve and others say that at high levels of skilled wildlife photography, the photographer is usually looking at the background first and foremost, even before considering the subject. It's that important. So being able to take just about any background and turn it into a great background is, to me, often going to change an image to the point where it's really just not the same photo.

On the other hand, there's a degree to which someone might reasonably say that this tool just functions as an extension of a photographer's equipment, giving them the chance to do in post what they already would have intentionally done in the field if economics didn't prevent it. I can't afford an f/2.8 telephoto lens, but if I could, I'd have one, and that heron would have been photographed with that nice background in the first place. So did I really change my photo's character, or did I just get the equivalent of a much more affordable fast aperture?

These sorts of dynamics and questions leave me feeling pretty uncomfortable about how to view photography in a way other advancements haven't.
Me too, but I'm afraid we are "kicking against the pricks."
 
My LR Classic became unstable after this latest auto-update on Windows 11 Pro and required a complete system reboot.

It is now working after an uninstall and reinstall.
 
You're right, Elsa. Photographers have been using blending techniques in PS for years to accomplish this.
 