A little more about LRC Denoise GPU matters...

MotoPixel

Well-known member
Supporting Member
Marketplace
I did a little research today, between periods of work in the yard, and came across some relevant info regarding Denoise performance as it relates to both Windows and Mac systems.

There's an interesting writeup by Eric Chan of Adobe here, in which he states:

"Need for speed. Denoise is by far the most advanced of the three Enhance features and makes very intensive use of the GPU. For best performance, use a GPU with a large amount of memory, ideally at least 8 GB. On macOS, prefer an Apple silicon machine with lots of memory. On Windows, use GPUs with ML acceleration hardware, such as NVIDIA RTX with TensorCores. A faster GPU means faster results"

On Windows machines, NVIDIA GPUs with ML (machine learning) acceleration hardware are those with Tensor Cores, which first appeared in the RTX 20-series boards. The older 9, 10, and 16 series do not include Tensor Cores. A good overview of NVIDIA GPU configurations is found here if you're interested. That would certainly explain why I saw much lower GPU utilization running Denoise on a GTX 1660 or GTX 1050 as compared to my RTX 3090 machine.
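
If you want to check what your own NVIDIA card reports before worrying about Denoise times, here's a minimal sketch (just a convenience script of my own, nothing from Adobe) that asks nvidia-smi for the card name, VRAM, and driver version, then makes a rough name-based guess about Tensor Cores per the generations above. It assumes the NVIDIA driver is installed so that nvidia-smi is on the PATH.

```python
# Minimal sketch: report what the local NVIDIA GPU looks like.
# Assumes the NVIDIA driver is installed and nvidia-smi is on the PATH.
import subprocess

def gpu_summary() -> None:
    # Ask nvidia-smi for name, total VRAM, and driver version as CSV.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    for line in out.splitlines():
        name, vram, driver = [field.strip() for field in line.split(",")]
        # Rough name-based guess: Tensor Cores arrived with the RTX cards,
        # so GTX 9/10/16-series boards won't have them (per the post above).
        likely_tensor = "RTX" in name.upper()
        print(f"{name}: {vram} VRAM, driver {driver}, "
              f"Tensor Cores likely: {'yes' if likely_tensor else 'no'}")

if __name__ == "__main__":
    gpu_summary()
```

The Tensor Core line is only a heuristic based on the card name; the GTX 16-series cards are the usual gotcha, since they are the same generation as the RTX 20-series but lack Tensor Cores.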

Although it wasn't stated explicitly, Adobe Denoise may not be optimized for the AMD line of GPUs, as they don't include the dedicated ML hardware (Tensor Cores) found in NVIDIA's GPUs.

On the Mac side, there's a good overview of the differences in the various M1 and M2 chip capabilities here. All the M1 and M2 chips appear to have 16 Neural Engine cores except the M1 Ultra, but Eric did not specifically mention how much the application relies on the ML hardware (Neural Engine cores) when running on Macs.

I'll update here with any additional information I happen across.

Cheers!
 
"Apple silicon machine with lots of memory" is a bit rich for my blood but my Win PC with older 3090 GPU with 24Gb of VRAM processes a Z9 file in 7-8 seconds. The GPU gets to around 80% utilization. This is the highest utilization I have seen of the GPU from any Adobe product to date.
 
That makes me feel better about the $$ spent on a MacBook Pro with an M2 chip!
Me too - with an M2 MAX chip.

Perhaps we have to increasingly recognise that, to take full advantage of whatever post-processing tools become available in the future, the computers we use may have a useful lifespan of no more than about 5 years.
 
Len, I think that's a pretty accurate statement, though both Apple and Microsoft seem to find ways to obsolete our older hardware with new hardware requirements, too. Win 11 basically made all CPUs prior to 8th Gen Intel incompatible with the OS. My almost antique, but perfectly serviceable, i7-6700K-based system, when paired with 64 GB of memory and the latest GPU, was plenty fast for anything I needed to do with image or video editing. It's the same issue with some of the older Macs not supporting the latest macOS, though they've rarely been as upgradeable as Windows or Linux machines.

Another aspect of this is that to minimize your cost and maximize your performance, you'd better have a clear understanding of what your software requires in terms of hardware and where to put your money. So often I see people buy the latest/fastest CPU, memory, motherboard, storage, etc., then skimp on the GPU and wonder why some of their applications are slow. Yeah, I know it's different on the Mac side... take what we make and like it, more or less. :)

More and more, the "heavy lifting" is being done by the GPU, not the CPU. It's going to be incumbent on application developers to provide more detailed guidance on where the performance bottlenecks can occur in different operations within their software. They're not being very transparent about that, perhaps for IP reasons.
 
Update: I reduced my processing time from over 20 seconds to between 6 and 7 seconds by swapping my 3060 Ti card for an RTX 4090. This was a full-size NEF from a D850.
That's a nice bump in speed, though at 20 seconds, that 3060 Ti seems to be a good value option for someone not doing heavy video editing and not wanting to make the jump all the way to a 3090 or 4090. 20 seconds seems like a tolerable amount of time.

I'll be curious to hear a report on times with a 4070 or 4060 when they hit the market. The 40-series cards have next-generation, improved Tensor Cores, though there's no real info on how that might benefit what's going on in LrC.
 
The 4090 is so BIG, and cabling is an issue, as well as the power supply. When I built this PC a few months back I installed a 1000 W power supply just because, thinking it might be overkill, and now with this graphics card I find it's just barely big enough.
 
FWIW, I have an NVIDIA RTX 3070 with 8 GB. When running a batch of 100+ images on my 2-year-old, well-configured Windows system (64 GB RAM, all internal SSDs), the GPU utilization fluctuates but tends to be in the 90% range most of the time. My NEF files are from a Nikon Z9, HE*.
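
For anyone wanting to watch these utilization numbers themselves while a Denoise batch runs, here's a minimal sketch (my own convenience script, not anything built into Lightroom) that polls nvidia-smi once a second and prints GPU utilization and VRAM in use. It assumes a single NVIDIA card with nvidia-smi on the PATH; Task Manager's GPU graphs show much the same thing.

```python
# Minimal sketch: log GPU utilization and VRAM once a second while a
# Denoise batch runs. Assumes a single NVIDIA card and nvidia-smi on the PATH.
import subprocess
import time

def sample() -> tuple[int, int]:
    # "nounits" strips the "%" and "MiB" labels so the values parse as ints.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    util, mem = (int(v) for v in out.split(","))
    return util, mem

if __name__ == "__main__":
    peak = 0
    try:
        while True:  # stop with Ctrl+C when the batch is done
            util, mem = sample()
            peak = max(peak, util)
            print(f"GPU {util:3d}%   VRAM {mem} MiB   (peak so far {peak}%)")
            time.sleep(1)
    except KeyboardInterrupt:
        print(f"Peak GPU utilization seen: {peak}%")
```

Start it before you hit Enhance on the batch, then Ctrl+C when the batch finishes to see the peak.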
 
Bill, thanks for the data point. Is the processing time acceptable for you?
 
I too can confirm that the GPU is now of major importance for the AI Denoise feature. I was running an NVIDIA GT 1030 before, and not only did Lightroom estimate around 5 minutes for an image, it also crashed the driver, so it was unusable. Without the GPU it estimated and took around 25 minutes per image, but at least it completed. So I got a new RTX 3060 GPU, which was fairly reasonable at $300, to see if it would improve things. It seems to process flawlessly now, and everything I have tried completes in 20 seconds or less. Whew!
 
As a decades-long Windows user, I bought a Lenovo mini desktop in Dec 2022 for use with Lightroom Classic, with a 32 GB RAM upgrade and an Intel i7 processor. I thought it would be good even with the integrated Intel GPU, but it's glacial with the new LR Denoise routine: 6-8 minutes or more for a 24 MB raw file, with no fix possible. While Topaz Denoise works pretty quickly, I sort of wish I had bought a Mac instead, despite the learning curve.
 
I have a Windows machine (13th-generation Intel + 128 GB RAM) with a fairly old Radeon RX Vega 64.
It is not lightning fast for Denoise, but it is done in a couple of minutes.
Considering that I am only denoising a select few pictures, which are destined for publication, that is reasonable. The results are worth it, IMO.
 
I made a comment on 4/29/2023 and was asked a question... I apologize for not answering at that time. I simply must not have noticed the email indicating this thread had a new entry... my bad.

Answering the question "Is the processing time acceptable for you?": I would say yes. I should mention that my use of Adobe Denoise does not always involve the exact same disk drive configuration.

For example, I do volunteer work for a dance company, and I keep their catalog and images on an external Seagate expansion drive. My large Adobe Denoise batch runs tend to be for that activity. I have never timed a batch run of, say, 50-100 images, but yes, I am OK with the stability and time of a batch run.

My personal images that have been imported to LR, the LR catalog, and all related files are stored on internal SSDs (M.2 and SATA drives). Adobe Denoise for my personal files tends to be a single file or a small batch of, say, 2-10 images at a time. A rough guess of the processing time per image for that configuration is about 15-20 seconds (about the same as using Topaz Denoise).
 
This article might be useful. I don't have a particularly powerful Windows computer, but I get the Denoise result back from a 45 MP file in reasonable time. I'd especially pay attention to making sure the GPU is actually being used, and to increasing the Camera Raw cache size to 20 GB.

 
Older thread, but I do have a quick question on the relative level/performance of different GPUs.

Update: I reduced my processing time from over 20 seconds to between 6 and 7 seconds by swapping my 3060 Ti card for an RTX 4090. This was a full-size NEF from a D850.

@Viseguy, did you need the 4090 to get that performance boost, or could a 4060, 4070, 4070 Ti, 4080, or now any of the "Super" replacement cards give that same boost? In other words, will LR actually use everything the 4090 has to offer, or was that overkill and you do gaming and other things with the card as well?

More and more, the "heavy lifting" is being done by the GPU, not the CPU.

@MotoPixel, definitely agree, but to what extent? How much GPU is needed for LR before I hit diminishing returns/an overkill card? Does anyone have experience or knowledge of what level of card is needed? That 4090 is a beast and it's expensive. I would hate to pop for that and then find a $300 card would do the same (in LR).

Thanks for any insight into the GPU/LR mystery.
 
Are you using the AI-based Denoise function of LR? Those AI-based functions are the areas where LR properly uses the GPU. Otherwise, CPU, memory, and disk speed are all more important for overall performance.
 
I'm just starting to use the Denoise functionality on some of my hard cases. I did monitor my CPU and GPU utilization when I kicked off the Denoise; the GPU definitely hits the 90s. I'll need to watch during normal photo functions, which is what I'm more concerned with at this point.

Thanks!
 
You might find this (long) thread a helpful read: https://www.lightroomqueen.com/comm...ience-gigabyte-nvidia-rtx-4070-ti-12gb.47572/ .

--Ken
 
I use Denoise a bit and run a 3090 in my system. It takes around 9 seconds for a 45 MP raw, and I can live with that.

For my use of LR, the biggest performance priority is the rendering of previews as you move between images and between modules, and I always like to have full-size previews. For this, CPU speed, disk speed, and having plenty of memory are the most important factors, along with having the correct settings for storage and retention of previews to match the way you use LR. I have a 13900K CPU and Samsung 990 M.2 drives, and this gives very good LR performance for the way I use it.

If I had to prioritize where to spend money to run LR, it would be: 1. CPU, 2. drive speed, 3. memory sufficiency (64 GB is good), 4. GPU. But this is for how I use it; YMMV.
 
How much GPU is needed for LR before I hit diminishing returns/an overkill card? Does anyone have experience or knowledge of what level of card is needed? That 4090 is a beast and it's expensive. I would hate to pop for that and then find a $300 card would do the same (in LR).

I ran the latest LrC AI Denoise on a couple of Sony A7IV large compressed RAW (~46 MB) images using a Win 11 box with an RTX 3050 Ventus 2X OC 8 GB GDDR6 GPU (currently $220 on Amazon) and the latest NVIDIA Studio driver. From the time I tapped the "Enhance" button to when the ~74 MB DNG files arrived in LrC was approximately 25 seconds. The GPU punched up to 90-96%.

I suppose the "need for speed" for AI denoising all depends on your wallet and how impatient you may be.
 
I could live with 9 seconds! I'm running an i7-9700K with 32 GB of memory, an RTX 2060, and two Samsung 990 M.2 drives. After a fresh reboot with nothing else open, a 49.7 MB file from a Z9 took 28 seconds from hitting the "Enhance" button to done. The CPU and memory were sleeping; the GPU hit 97%. As little as I use that function, I can live with that as well. But my guess is that the more accustomed I become to it, the more photos I'll use it on, maybe even some batch jobs. And like you, I'm a bit more concerned with rendering and the Develop module. I don't like any lag or sluggishness.

It's good to hear I don't need a 4090 for good LR performance; that's crazy money! I'll go back through and check all my LR settings again to make sure everything is tweaked correctly, and then see if I can talk myself into accepting the existing LR speeds. Thanks for the data, much appreciated!
 
I suppose the "need for speed" for AI denoising all depends on your wallet and how impatient you may be.
Hi Phil, thanks for the info! I do appreciate all the feedback from everyone, as it puts a $$ figure on the processing time to help me with a decision. At this point I don't think saving 20 seconds is worth $500-$1,000. But who knows what the future will bring. Thanks again.
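
Just to put rough numbers on that trade-off, here's a small back-of-the-envelope sketch. The per-image times, card price, and image volume below are only placeholders loosely based on figures quoted earlier in the thread, so substitute your own.

```python
# Back-of-the-envelope sketch: is a faster GPU worth it for Denoise alone?
# Every number below is a placeholder loosely taken from this thread - edit to taste.

current_secs_per_image = 28    # e.g. the RTX 2060 / Z9 time reported above
faster_secs_per_image = 7      # e.g. the RTX 4090 / D850 time reported above
upgrade_cost_usd = 1000        # rough price premium for the faster card
images_per_week = 50           # frames you actually run Denoise on

saved_per_image = current_secs_per_image - faster_secs_per_image
hours_saved_per_year = saved_per_image * images_per_week * 52 / 3600
cost_per_hour_saved = upgrade_cost_usd / hours_saved_per_year

print(f"Seconds saved per image:       {saved_per_image}")
print(f"Hours saved per year:          {hours_saved_per_year:.1f}")
print(f"Upgrade cost per hour saved:  ${cost_per_hour_saved:.0f}")
```

With these placeholder numbers it works out to roughly $65 per hour of waiting saved, which makes the "is it worth it" question easier to answer for your own workload.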
 