Make ISO 12,800 Look Like ISO 400: Lightroom Denoise Master Class


Thanks, Steve. I'll watch your video later.

I use DxO PureRAW for noise reduction. Is the general consensus that Lightroom is now equal to it, or better? TBH, although I think DxO does a good job, I'd prefer not to use multiple software packages to achieve the same result.
I've used PureRAW and it's really good. However, I still like my Lightroom technique better since I have more control. With PureRAW, I dump in my files and it's all automatic (at least the last time I used it). Results were good, but at times I thought they were overly sharp. I think DxO is probably a better choice if you have a LOT of photos to process, but I only work on a limited number from any given trip. Figure maybe 1-5 photos for every day of a trip (some places are better than others, hence the range).

Lightroom does take longer, but don't let the video length fool you - it takes a LOT longer to talk about it than to do it :) I can knock out a shot in a few minutes.
 
Sorry. I must be getting too old to wander around in this fantasyland.

I have to be the advocate of truth on this one and suggest a few corrections to the topic.

If someone willingly shoots improperly exposed garbage, no amount of sharpening will make them a photographer. A cell phone that does AI is all they need to reach their maximum photographic potential. The bar will be sufficiently lowered to guarantee success.

Denoise does not make ISO 12,800 or any other ISO "look like ISO 400." ISO 400 always looks like ISO 400 and ISO 12,800 always looks like ISO 12,800. They are electronic gain values. Ones and zeros and RAW data never lie. AI simply distorts the data into an illusion of what people WISH they had the pride and skill to create. Garbage in gets faked quality out. Did anyone else bother to read the Hogan guides, or study induced noise, or what dual gain is, or any of the other principles that apply to the tools they paid thousands of bucks for?

AI denoise discards what the photographer selects and replaces it with computer-generated information.

But whatever turns you on. I've read great articles about how the Samsung S23 Ultra is even better than what you are suggesting.

Does anyone reading any of this actually believe that any publisher of serious photography doesn't recognize the difference between faked-out BS and AI-converted throwaways versus professional images created with human intelligence and pride?

When did grain in photographs become an evil concept? How many great historical images get thrown out of the MMoA when edge-to-edge AI sharpness becomes a curator's rule of thumb? What makes obvious fakery more marketable than classic, realistic photographs that were shot precisely the way the creator intended and displayed warts and all?

If AI makes proper photography this easy, why are so many saps buying long f/4 lenses at 8 or 10 or 16 grand a pop? Are people really so unmotivated that they will spend big bucks on fanboy bodies and lenses and use them to shoot garbage to correct with CGI?

I'm sure the AI photography crowd can't stand music performed live because it has ambience. Live music is never sharp "edge to edge." AI music always is.

I remember when photographic technique was the main topic on the BCG forums. Oh well... it's a TikTok, Auto-Tune world now and nothing is real. People sell out and forget where they began. But that's the business end of the internet "photography" world. You have to preach to the choir or the donation plate doesn't get filled. The choir wants to hear how to shoot pictures just like Steve or Simon or Gregory without putting in the time to learn the tools of the craft.

Why should they, when the photographers they admire say, "Don't bother shooting until you get it right... AI all your trash instead of throwing it out and you'll be just as good as me. Sort of..."?

Ironically, I remember very well being influenced by Steve to buy my Wimberley head and learn to use it properly. I remember reading that if you are chasing the subject, you are going the wrong way. I remember Steve saying that getting closer is the solution to not having a long enough lens for a shot. I remember Steve emphasizing the value of being a competent naturalist in getting the shot right in the first place. All that ancient photography information was fascinating and helped me create.

It doesn't seem to carry any weight in the crowded "show me the money" internet photography world, now that preserving throwaways shot at 20 frames a second is the most profitable aspect of photography to promote.

The only reason some people need to cheat is that someone tells them it's acceptable.
I think you're exaggerating the impact of what is being discussed here: all we're talking about is noise reduction, not using AI to make up for a lack of photographic skill or to replace content.

Was using ISO 100 film cheating compared to using ISO 3200 in the days of film cameras?

Noise has been an issue throughout the history of photography, and it's great that we now have technology that can be used as a tool as and when it's needed.
 
If you want the DxO result (noise reduction and lens optics modules), you can also use DxO PhotoLab Elite. The current version is 7, I think. It has sliders to control the amount of the effect you get.

I also find that the best noise reduction software can vary depending on the image you are working with. Sometimes one does better than another, and the best one is not always the same. But for higher ISO images, I'm tending to use the ACR version of the LR noise reduction you describe, or DxO, either PureRAW or PhotoLab.
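
If you ever want something more objective than eyeballing when comparing denoisers, one crude check is to export the same image from each tool and measure the standard deviation of an identical flat background patch (sky, smooth water). A minimal Python sketch, with made-up filenames and patch coordinates that you'd adjust to your own exports:

```python
# Rough, numeric comparison of denoiser outputs: measure the standard
# deviation of the same flat background patch in each exported file.
# Lower std on a featureless patch ~ less residual noise, though this says
# nothing about detail retention. Filenames and coordinates are hypothetical.
import numpy as np
from PIL import Image

PATCH = (slice(100, 200), slice(300, 400))  # rows, cols of a flat area

def patch_noise(path):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return gray[PATCH].std()

for name in ["lightroom_denoise.tif", "dxo_pureraw.tif", "topaz_denoise.tif"]:
    print(f"{name}: flat-patch std = {patch_noise(name):.2f}")
```

It won't tell you which tool kept the most feather detail, but it does put a number on which one left more grain in the sky.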
 
Thanks, Steve. I'll watch your video later and give it a try on a few images.

Here in the PNW (BC), noise is a constant issue for 50% of the year (at a conservative estimate), especially when needing faster shutter speeds.
 
I was convinced that Topaz DeNoise was the best tool available, then I really dug into Lightroom's denoise and found I could do as well or better with Lightroom. I'll be trying out a couple of your tips with regard to the order of processing and especially the background tricks you showed. Thanks, great video, sir!
 
Your video enhanced my ideas about the use of Denoise in LR. Thank you for it. I was using lower values (15-25); I am going to experiment with higher ones using your tips.
I also really liked your rules of thumb for finding candidate pictures before starting the process.

From my attempts, it seems that a picture usually looks more natural if, instead of increasing Amount, you push up Detail on the sharpening sliders after Denoise. The rationale is that the Detail slider acts on shorter wavelengths (finer spatial frequencies) of the image data than the Amount slider, so it more directly counterbalances the Denoise process, but on the new DNG file.
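
If the "shorter wavelengths" idea sounds abstract, here's a toy Python illustration, not Adobe's actual algorithm, just plain unsharp masking at two different radii, showing how a small-radius boost touches only the finest detail while a larger radius also lifts broader edges (the input filename is hypothetical):

```python
# Toy illustration (NOT Lightroom's code): unsharp masking with a small blur
# radius boosts only the finest detail (short wavelengths), while a larger
# radius also strengthens coarser edges and can produce halos. One way to
# picture why a Detail-style control can counteract Denoise smoothing more
# surgically than a blanket Amount-style boost. "denoised.tif" is hypothetical.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("denoised.tif").convert("L"), dtype=np.float64)

def unsharp(image, sigma, strength):
    blurred = gaussian_filter(image, sigma=sigma)
    return np.clip(image + strength * (image - blurred), 0, 255)

fine = unsharp(img, sigma=0.8, strength=1.0)   # emphasizes only very fine texture
broad = unsharp(img, sigma=3.0, strength=1.0)  # also lifts coarser structures

Image.fromarray(fine.astype(np.uint8)).save("sharpen_fine.png")
Image.fromarray(broad.astype(np.uint8)).save("sharpen_broad.png")
```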
 
While I use all the sliders, I do tend to use the Sharpen one the most - I'll play with the Detail a little more in the future :) Thanks!
 
I've been using the denoise in LR a lot more lately, so I'm looking forward to watching the video to learn a bunch of stuff I never knew :). Thanks for providing it for free!
 
I am right there with you with regard to the serious ethical and artistic problems that AI can introduce, and in fact I'm so harsh on it that I actually think there are serious philosophical and even religious issues with AI content creation and AI in general.

However, I think you're conflating a lot of different things that aren't actually the same. That's partly the fault of companies like Adobe and others because of their very broad use of the term AI. For instance, if you spend much time around portrait photography as I do, you'll see there are a ton of people and companies selling things branded as "AI" which are not remotely the same thing as AI image generation or ChatGPT or that sort of thing. I get a half dozen ads a day for "AI presets" for Lightroom which claim to use AI to edit your portraits for you - but if you know much about Lightroom, that might make you wonder what the heck they're talking about, since there's no apparent way to create a preset which would do this. Well, the answer is that what these really do is use LR's masking to try to create different looks. They call them "AI" because Adobe calls it "AI" when LR selects the subject or the eyes or the lips or whatever to put a mask around it so you can dodge/burn it, adjust the saturation or contrast, etc.

It DOESN'T mean that these presets actually use AI in the sense of MidJourney's image generation or even Adobe's own much more limited generative fill. They're not creating anything or adding anything new to the photos - they're just selecting parts of the photo to mask, something photographers have been doing manually since the first digital cameras 20+ years ago. Yet Adobe calls it AI because they can and (presumably) because they think it sounds good.

Does the AI denoise do more than this? Sure, but it's also not anything like what most people mean by "AI" these days. Again, I'm with you on AI being overall very harmful to the creative arts, but I really think this is barking up the wrong tree. AI denoise is not fundamentally changing photos or replacing what a photographer photographed with computer-generated information. In fact, remember that for a digital photograph, the process of taking the data the sensor records about how many photons struck one photosite vs. another and turning that into an image involves the camera/RAW processor - i.e., a computer - making a lot of assumptions and choices that could be made differently, and that are made differently by different RAW processors and/or cameras. The very act of using a digital camera in the first place means that a computer has interpreted the data that the sensor recorded. In one sense, so-called "AI denoising" amounts to using empiricism to try to get the interpretation of the recorded data to be more accurate to the original scene.
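
To make the "a computer has interpreted the data" point concrete: a Bayer sensor records only one color value per photosite, and the RAW converter estimates the other two. The deliberately naive Python sketch below (plain bilinear interpolation on an assumed RGGB layout, nowhere near what ACR or any camera actually does) shows that most of the color in any digital photo is already a software interpretation before denoising even enters the picture.

```python
# Deliberately naive bilinear demosaic of an RGGB Bayer mosaic, purely to
# illustrate that two thirds of the color values in any digital photo are
# estimated by software rather than measured by the sensor. Real RAW
# converters use far more sophisticated (and opinionated) algorithms.
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """mosaic: 2D float array of raw sensor values, RGGB layout assumed."""
    rows, cols = np.indices(mosaic.shape)
    r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)
    b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask

    k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])

    def interp(mask, kernel):
        # Weighted average of the measured samples that fall under the kernel.
        return convolve(mosaic * mask, kernel, mode="mirror") / np.maximum(
            convolve(mask, kernel, mode="mirror"), 1e-6)

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])

# Example: rgb = bilinear_demosaic(raw_values)  # raw_values: 2D array from a RAW reader
```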

Either way, it's not remotely the same thing as generative AI, which is what gets all the buzz and which really is problematic in many ways.
 
I'd argue that 90% of what we call AI actually isn't AI.
 
I have put aside an hour or so this afternoon to watch Steve’s Lightroom Denoise video; however, as an interim question, can the same or similar processes be used with Adobe Camera Raw’s Denoise function?
 
Thanks for this Steve.
How would you handle the occasional bit of color noise? The AI noise reduction in Lightroom doesn't seem to do anything to it. You can adjust it in the manual noise reduction panel, but only if you also move the luminance slider.

I've never noticed color noise at all in the DNGs after LR's noise reduction, but I have had color noise "problems" because of a mix-up it can cause. When LR creates a DNG with the noise reduction, it defaults the luminance and chroma noise reduction sliders for that DNG to zero. If you then copy the develop settings to another photo, make a preset out of them, or go back to the original pre-noise-reduced photo and paste the settings onto that, it copies that 0 chroma noise reduction onto the other photo, which winds up looking awful until you realize it and fix it.
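
If you're worried those zeroed sliders got pasted around your catalog, here's a rough way to hunt for them, assuming you save settings to XMP sidecars. The crs:ColorNoiseReduction and crs:LuminanceSmoothing attribute names are my best recollection of what Camera Raw writes, so verify against one of your own sidecars before trusting the output:

```python
# Hedged sketch: flag XMP sidecars whose Camera Raw develop settings have
# color noise reduction at 0, which can happen when settings are pasted from
# a Denoise-generated DNG (where LR zeroes the manual NR sliders). Assumes
# settings are written to .xmp sidecars and that the crs:ColorNoiseReduction
# and crs:LuminanceSmoothing attributes are present - check a known file first.
import re
from pathlib import Path

def slider(xmp_text, name):
    match = re.search(rf'crs:{name}="(-?[0-9.]+)"', xmp_text)
    return float(match.group(1)) if match else None

for sidecar in Path("~/Pictures").expanduser().rglob("*.xmp"):
    text = sidecar.read_text(errors="ignore")
    if slider(text, "ColorNoiseReduction") == 0.0:
        print(f"{sidecar}: ColorNoiseReduction=0 "
              f"(LuminanceSmoothing={slider(text, 'LuminanceSmoothing')})")
```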
 
This thread is about using LR to deal with noise in high ISO images. PERIOD. It is NOT about Generative AI.

Comments should be about the information provided in the video, or about how the techniques Steve presents are similar or different, better or worse, etc., than other ways to tame noise in high ISO images.

Thank you!
 
This is such an amazing video. I was tempted for a minute to return to LR, but then I remembered the agony I suffered after a few catastrophic catalog crashes and was jolted back to reality. Thanks for supplying such a well-reasoned and thoughtful workflow; I think the rationales and results speak for themselves. PS, I wouldn't expect any Christmas cards from Topaz.
 
I appreciate the time Steve put into making this video, and I was pleased to learn some new tricks, such as the use of negative texture, clarity and sharpness to enhance noise reduction. But I have to disagree with his opinion on third-party noise reduction plugins. I have found Topaz Sharpen AI and Denoise AI to be as good as, and in some cases significantly better than, Lightroom Denoise and Sharpen. I posted an article on my blog about the use of these plugins in salvaging a severely underexposed ISO 12800 image: https://erkesphoto.com/photography-...aging-a-severely-underexposed-high-iso-image/
After watching his video, I reprocessed these images using Lightroom and couldn't get anything usable.
Below are sample images from the article:
[Image: 20230619_SonyA1_0161-2.jpg]
[Image: 20230619_SonyA1_0161.jpg]
[Image: 20230619_SonyA1_0161-Edit.jpg]
[Image: 20230619_SonyA1_0279-Edit.jpg]

The first image is the original raw, the second is the raw with exposure raised +2.88. The third image is the finished, processed image.
The fourth image is another image from that same shoot. I've posted it to illustrate a second disagreement I have with Steve's video. I disagree with his opinions on evaluating sharpness and deciding what counts as an acceptably sharp image. I never look at images above 100% view. 100% provides a view of 1 image pixel to 1 monitor pixel. Anything higher requires the software to manufacture pixels to fill in for the increased magnification. It also gets you far into the realm of pixel peeping and is unnecessary in my opinion.
The fourth image would probably not qualify as being acceptably sharp by Steve's criteria. But I've printed it up to 24 inches (on the long side) and it is more than sharp enough. I've shown it to photographers and nonphotographers alike and no one has commented that it looks soft or out of focus. In fact I sometimes get comments about how sharp it looks. This image will be on the cover of a national magazine in April of this year.
"Sharpness" is a relative thing and difficult to evaluate and compare in different images. It depends on subject magnification, light quality ( soft light, direct light, backlight, use of flash) and atmospheric conditions. I look at images on my computer and can't necessarily pick out those taken with my Sony 200-600 from those taken with my 600mm, although I know from photos of my resolution charts (using the same image magnification) that the 600mm is sharper. I tend to use the 200-600mm more often because of its compositional versatility. One thing I can conclude after over forty years as a serious photographer is that composition trumps resolution any day.
I appreciate all the info that Steve provides in his books and videos. I've learned a lot from him. But I felt like I had to voice my opinion on this video and his previous video on evaluating sharpness.
 

The video was not about salvaging a photo. It was about taking a good photo with noise and cleaning it up.
 
No, it was about taking a high ISO image and using Lightroom tools to denoise and sharpen. My article was about noise reduction and sharpening techniques using Topaz products, first illustrated with a normal high ISO image, then demonstrating the technique on an extreme example, essentially an ISO 102,400 image--something that LR tools cannot duplicate because they do not have the variety of Denoise and Sharpen algorithms that Topaz products have. Steve claims that LR methods are superior to Topaz methods and I think I have demonstrated that that is not true. You are either hung up on the word "salvage" or you think that an image to be used on the cover of a national magazine is not "good."
 
So I guess you are saying Topaz could have given a better outcome on Steve's images.
 