Lightroom Classic Update - introduces AI-powered Denoise noise reduction.


Processing time for my circa-2017 Dell XPS 15 (i7-7700, 3.4GHz, 32GB, GTX 1050 4GB) was slightly under 4 minutes for the same cat photo I posted at the outset of this thread. My old editing system (i7-6700K, 4.2GHz, 32GB, GTX 1660 Super 6GB) took 65 seconds on the same photo. GPU is definitely the key with Adobe's Denoise AI feature. One other detail of note: though they were being used, neither of these two systems came anywhere close to showing the level of GPU usage that I saw on the RTX 3090.

At this point I can't say whether it's a matter of insufficient VRAM on the graphics board or the much more advanced hardware in the 30-series board...or both. These two 10/16-series boards are getting a bit long in the tooth. It will be interesting to hear what those running 20-series boards, or other 30-series boards of different levels, are experiencing.
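For anyone who wants to watch GPU load the same way while a Denoise pass runs, here's a rough Python sketch of the sort of thing I mean (nothing official - it just polls nvidia-smi once a second and writes a CSV; the interval and the log file name are arbitrary choices, and it's NVIDIA-only):

```python
# Rough sketch: poll nvidia-smi once a second and log GPU load + VRAM use
# while a Denoise run is in progress. NVIDIA-only; the interval and the
# log file name are arbitrary, not anything Adobe provides.
import subprocess
import time
from datetime import datetime

LOG_FILE = "denoise_gpu_log.csv"   # hypothetical output file
INTERVAL_SECONDS = 1.0

with open(LOG_FILE, "w") as log:
    log.write("timestamp,gpu_util_percent,vram_used_mib\n")
    try:
        while True:
            out = subprocess.check_output(
                ["nvidia-smi",
                 "--query-gpu=utilization.gpu,memory.used",
                 "--format=csv,noheader,nounits"],
                text=True,
            ).strip()
            # take the first GPU's line if the system has more than one
            util, mem = [field.strip() for field in out.splitlines()[0].split(",")]
            log.write(f"{datetime.now().isoformat()},{util},{mem}\n")
            log.flush()
            time.sleep(INTERVAL_SECONDS)
    except KeyboardInterrupt:
        pass  # Ctrl+C when the Denoise pass finishes
```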
 
I think you are correct on GPU usage. I re-ran the test on my system after allowing optimization/increased clocking, and I cut the processing time in half, from just over 20 seconds to 11. Not an expert, but for the clocking change to have made a difference, LR must also be using the CPU in addition to the GPU?
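If you want to check whether the CPU is actually being loaded too, a quick sketch along these lines (just psutil, nothing from Adobe) prints overall CPU utilization once a second while Denoise runs:

```python
# Minimal sketch: print overall CPU utilization once a second so you can
# see whether Denoise is loading the CPU as well as the GPU.
# Requires: pip install psutil
import psutil

try:
    while True:
        # cpu_percent(interval=1.0) blocks for one second and returns the
        # average utilization across all cores over that second.
        print(f"CPU: {psutil.cpu_percent(interval=1.0):5.1f}%")
except KeyboardInterrupt:
    pass  # stop with Ctrl+C when the Denoise pass is done
```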

I'm going to look for a sale on 4090 GPUs I think :)
 
Very interesting ... I'll need to try this out tonight, but I'm going to guess that it will take a few rounds of improvement to see the quality level match Topaz and PureRaw. I'll be very happy if I'm wrong.
I was like you…but then I tried it on a couple of shots today…they were ISO 9,000…and compared DxO 2, DxO 3, Topaz DeNoise, Topaz Photo AI, and LR, starting with some SOOC RAW Z9 shots. DxO 3 was the best, but only because it has sharpening as well as NR; with sharpening off in the high NR mode (can't remember the exact name), DxO 3 and LR were tied in NR and about the same in detail retention. Both of the Topaz products were about the same as far as NR goes, but with sharpening off in Photo AI the loss of fine detail was worse than either DxO 3 or LR. I'm very impressed…and there was a video posted here earlier (post 13) that I watched…the guy did shots up to ISO 100,000…and was amazed by the better detail retention (for example the greenish edge in a foreground glass partition at a concert hall) than either of the Topaz products he was comparing. Maybe the reason Adobe took so long was because, like Apple, they didn't want to rush to market and wanted to get it right. LR has the percentage slider like Topaz does and DxO does not. I will have to do some testing after my next outing…but I'm impressed enough that if I hadn't already upgraded to DxO 3 I would wait, and I am waiting before deciding whether to pay Topaz for another year of updates to their suite. Very pleasantly surprised and impressed.

And…shoulda added this…LR was actually quicker on my M1 Pro Mac Studio than the others were, from start to finish through the whole process.
 
You may not care and I may not care, but given the frequent gnashing of teeth and whining about file sizes of higher resolution files these days, a LOT of people DO care...hence my reason for pointing it out. Consider it a PSA...:)

Cheers!
Yeah…PSAs are good…and TBH I used to worry about file sizes…but then I just told myself that hard drives are cheap. Even my 16TB OWC ThunderBay mini was under $1,000…and I think nothing of spending that much on a lens or half that for a new backpack from Nya-Evo…so I just don’t worry about file sizes because I never look at them. Making a RAW into a layered PS file does essentially the same thing…as does upping the resolution with Gigapixel or LR. I wasn’t dumping on you at all…so Cheers!…back to ya.
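That said, for anyone who does want to keep an eye on it, a quick sketch like this will total up what the enhanced DNGs add versus the original raws (the folder path is a placeholder, and the "-Enhanced-NR" suffix is my assumption about LR's default naming - adjust to whatever your copies are actually called):

```python
# Quick sketch: total up the size of original NEFs vs the Denoise output DNGs
# sitting alongside them. Folder path and the "-Enhanced-NR" suffix are
# assumptions based on LR's default naming; adjust to taste.
from pathlib import Path

folder = Path("~/Pictures/2023-04-shoot").expanduser()   # placeholder path

nef_bytes = sum(p.stat().st_size for p in folder.glob("*.NEF"))
dng_bytes = sum(p.stat().st_size for p in folder.glob("*-Enhanced-NR.dng"))

print(f"Original NEFs : {nef_bytes / 1e9:6.2f} GB")
print(f"Enhanced DNGs : {dng_bytes / 1e9:6.2f} GB")
```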
 
I think the new LrC Enhance module is awesome, although I don't know if it's really any better or worse than the other similar software. Here are an uncropped, unprocessed JPEG straight out of LR, and a version treated with all 3 Enhance modules - Denoise at 59%, Raw Details on, then Super Resolution - and further cropped square. (Note: I had previously posted an earlier version of this image in one of the other forums here.) To me the results are pretty impressive.

[Attached image: LIWO-0257.jpg]

[Attached image: LIWO closeup- copy.jpeg]
 

DxO PhotoLab has the same NR engine as PureRAW, but it adds an amount slider and a little preview window to see the effect.
 
No worries, I didn't feel dumped on, but I do appreciate that many people on fixed or marginal incomes enjoy this hobby, which can often turn into a major money pit...even when you can buy a portable 4TB HDD for <$90 USD. And those are perfectly serviceable for occasional access or archival storage.
 
I suspect the sluggish performance you're seeing is due to the nature of the M1 chip and its graphics optimizations being geared more towards certain video codecs and not necessarily the sort of computational methods Adobe has chosen for their denoise algorithms. I do understand that for some types of graphics operations the M1 chips are about equivalent to the mobile version of the GTX 1050, though in other operations they're way beyond that.

I'd like to hear the results from other Mac users with different configurations. Given the memory usage I'm seeing on my GPU, it can be very intensive.

Nothing I've done on my fairly beefy Win 10 system has taken over 10 seconds. I've not yet updated LRC on my Dell XPS 15 laptop (GTX 1050 discrete GPU), but will do that shortly and run some tests for comparison to my desktop. I've never used any of the Topaz products, so I can't speak to their performance or their optimal computing requirements. I've stuck with LRC, PS and DxO for years and that has served me very well.
This concerns me. I had decided to replace my aging Windows 10 Dell desktop with either a Mac Mini M2 Pro or a Mac Studio M1 Max, so I'd be very interested in hearing reports on how they work with the Lightroom Classic AI denoise functionality.
I have no problems sticking with a PC, but I was looking forward to having my mobile devices, travel laptop (M1 MB Air), and desktop in the Apple ecosystem.
 
I have an M1 MBP with a 24-core GPU and 32 GB RAM. What I am seeing is the following - previews in 3-5 seconds, processing of a lossless compressed or HE* file in 45 seconds, and a DX-sized file in 20 seconds.

I am now glad I went for the additional GPU cores as well. I am not sure this answers the question related to video, but it gives you some times to work with.
 

Possibly useful article.

 
Have you tried using the new Lightroom denoise functionality on your laptop? The Mac Mini M2 Pro and Mac Studio I'm speccing out have similar specs: 24 GPU cores and 32 GB RAM. The Studio can be upgraded to a 32-core GPU and 64 GB RAM if needed. I just wonder, though, how they work with LR denoise.
It used to be that LR wasn't really GPU intensive, but it appears that is now changing. So I'm trying to find out if a PC's discrete graphics card with its own RAM would handle things better than Apple's integrated graphics and shared memory. I don't need screaming fast, but I don't want to wait minutes for things to process.
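As a very rough back-of-envelope (my own assumptions, nothing measured from LR): a 45 MP frame held as 32-bit floats already works out to roughly half a gigabyte per full-resolution RGB buffer, and an AI denoise pass presumably keeps several intermediate buffers around, which may be part of why 4-6 GB cards struggle where bigger ones don't:

```python
# Back-of-envelope only: rough memory footprint of full-resolution buffers
# for a 45.7 MP image held as 32-bit floats. The channel count and the
# number of intermediate buffers are guesses, not anything Adobe publishes.
megapixels = 45.7
channels = 3            # assumed RGB working buffers
bytes_per_value = 4     # float32
buffers = 6             # guess at intermediate tensors a network might keep

one_buffer_gb = megapixels * 1e6 * channels * bytes_per_value / 1e9
print(f"One full-res buffer: {one_buffer_gb:.2f} GB")              # ~0.55 GB
print(f"{buffers} buffers:    {buffers * one_buffer_gb:.2f} GB")   # ~3.3 GB
```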
 
@PKW my posted times were in reply to you. YES, that is LR Denoise AI. So now you know what to expect with your proposed specs on an M1. Could it be much faster on an M2, or is it just core driven? If you can find someone with more GPU cores or RAM you might see if it's worth it.

 
I reread your post more carefully after I made my reply to you and realized you were probably talking about running denoise on your laptop, so thanks for reaffirming that. That's acceptable performance for my use. And like you said, now I need to see if the M2 chip on the Mac Mini or more GPU cores and RAM on the Mac Studio will be even faster. The PC I was looking at (from Puget Systems) spec'd for Lightroom is really expensive, so it's good to know the Apple silicon Macs can deliver at a much lower cost.
Thanks, BarkingBeans!
 
For what it is worth, with a 32 GB MacBook Pro with an M2 Max chip I am getting a similar preview time and 35 seconds for a full NEF file pre-cropped to about DX.

There is some YouTube debate as to whether the full file is read first, with the XMP data (including the crop) applied at the end of the denoise cycle.
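One way to poke at that (assuming exiftool is installed; the file names below are just placeholders) is to compare the pixel dimensions of the source NEF and the Denoise output DNG - if the DNG comes back at full sensor size, the crop is only metadata that gets applied afterwards:

```python
# Sketch: compare pixel dimensions of the source NEF and the Denoise DNG
# using exiftool (must be on PATH). File names are placeholders.
import subprocess

def dimensions(path: str) -> str:
    out = subprocess.check_output(
        ["exiftool", "-s", "-ImageWidth", "-ImageHeight", path],
        text=True,
    )
    return " ".join(out.split())

print("NEF:", dimensions("Z9_1234.NEF"))
print("DNG:", dimensions("Z9_1234-Enhanced-NR.dng"))
```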
 
As of this evening the 800 PF has come into stock for next day delivery at Grays of Westminster.

How long it will remain in stock I do not know.

Whether you switch is your decision.

You could ask Grays for a part exchange indication on your 500mm, as the 20% VAT included in the UK 800mm street price is usually reduced when there is a part exchange.
 

I don't get how pre-cropping can have an impact if the software requires the full raw file to work?
 
Okay, so your MacBook Pro with an M2 chip runs about the same as BarkingBeans' M1…how many GPU cores does yours have?
 
Maybe for once I got my financial timing right :)

I was getting to the stage where I was having to seriously consider paying for something like Topaz for noise reduction.

What I can say is that in the group I meet with every 2 weeks to discuss mainly computer issues, my Mac's LRC denoise performs much faster with 45 MP files than Topaz does on PCs with files from lower-MP cameras - though that clarifies nothing unless the PC and Mac have similar specifications.

I expect in about a months time the Group will compare Topaz denoise with Lightroom denoise.

LRC denoise may be only a little beyond beta development - with improvements promised over the coming months.
Topaz may also up their denoise performance.

The spec of the Mac desktop that I use is a 12-core CPU, 38-core GPU and 16-core Neural Engine, which to some extent work together.
I expect lesser LRC denoise performance from my 4-year-old 27-inch iMac when it returns from repair next week.
 
Yeah, unfortunately to date none of the long prime Z lenses have been profiled. So far the only "wildlife" lens is the 100-400mm. Which of course I don't use.


One of us must be doing something different. Either I'm not using LR well or you're doing something odd in PL6. By "sharpening," if you mean the lens correction panel, I've never had that produce any artifacts at all. I typically leave it at the default value, but in the limited testing I did at higher settings I never saw any artifacts. As opposed to Topaz, which can produce some pretty funky looking stuff. So much so that when I used to use Topaz DeNoise AI I always turned sharpening off. But I've completely abandoned Topaz in favor of PL.
I think that's the difference; I use DeepPRIME in PureRAW 2. There you have the option to turn detail sharpening on/off (no strength or level option), and I have to turn it off to avoid details looking very "crispy".
 

If you upgrade to PureRAW 3, there are levels of control for the lens correction, I think 5 levels. I've had it on the one just below the maximum. PhotoLab goes further, giving a slider to control the amount. In PhotoLab you can turn off global sharpening, which is similar to unsharp mask. They recommend you turn it off if there is a DxO lens profile and just use lens sharpening, which is AI.
 
@PKW that creates some interesting choices. It seems like when the M1 was out there were more GPU options. Interesting in that this morning my processing times are 25 seconds; I can't figure out why it's gone down, so making a decision on the GPU is tougher. It seems like you would want the Studio over the Mini, which seems to cap at 16 GPU cores. Maybe consider a 16" MBP (the benefit is you can use it as one of your screens and move it if needed), where you have other options in the range of the Studio.
 