I did a little research today between periods of work in the yard, and came across some relevant info on Denoise performance as it relates to both Windows and Mac systems.
There's an interesting writeup by Eric Chan of Adobe here, in which he states:
"Need for speed. Denoise is by far the most advanced of the three Enhance features and makes very intensive use of the GPU. For best performance, use a GPU with a large amount of memory, ideally at least 8 GB. On macOS, prefer an Apple silicon machine with lots of memory. On Windows, use GPUs with ML acceleration hardware, such as NVIDIA RTX with TensorCores. A faster GPU means faster results"
On Windows machines, the NVIDIA GPUs with ML (machine learning) hardware are those with Tensor Cores, which first appeared in the RTX 20 series boards. The older 900, 10, and 16 series do not include Tensor Cores. A good overview of NVIDIA GPU configurations is found here if you're interested. That would certainly explain why I saw much lower GPU utilization on a GTX 1660 or GTX 1050 as compared to my RTX 3090 machine.
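As a quick reference, the series-to-Tensor-core relationship described above can be sketched as a simple lookup. This is just my own summary of the consumer generations, not anything official from NVIDIA or Adobe:

```python
# Rough lookup of which NVIDIA consumer GPU series include Tensor Cores
# (the ML acceleration hardware Denoise benefits from). My own summary;
# not an exhaustive or official list.
TENSOR_CORE_SERIES = {
    "GTX 900": False,  # Maxwell - no Tensor Cores
    "GTX 10":  False,  # Pascal - no Tensor Cores
    "GTX 16":  False,  # Turing-based, but the GTX 16xx parts omit Tensor Cores
    "RTX 20":  True,   # Turing RTX - first consumer Tensor Cores
    "RTX 30":  True,   # Ampere
    "RTX 40":  True,   # Ada Lovelace
}

def has_tensor_cores(series: str) -> bool:
    """Return True if the given series is known to include Tensor Cores."""
    return TENSOR_CORE_SERIES.get(series, False)

print(has_tensor_cores("RTX 30"))  # True  - e.g. an RTX 3090
print(has_tensor_cores("GTX 16"))  # False - e.g. a GTX 1660
```

That lines up with the utilization difference I saw: the GTX 1660 and GTX 1050 fall on the "no Tensor Cores" side, the RTX 3090 on the other.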
Although it wasn't stated explicitly, Adobe's Denoise may not be optimized for AMD's GPU line, since those cards don't include the dedicated ML hardware (Tensor Cores) found in NVIDIA's RTX GPUs.
On the Mac side, there's a good overview of the differences in the various M1 and M2 chip capabilities here. All of the M1 and M2 chips appear to have a 16-core Neural Engine except the M1 Ultra, which has 32, but Eric did not specifically mention how heavily the application relies on the ML hardware (Neural Engine cores) when running on Macs.
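For reference, the Neural Engine core counts can be tabulated the same way. These numbers are my own summary from Apple's published chip specs, not from the linked article:

```python
# Neural Engine core counts for Apple silicon chips, summarized from
# Apple's published specs (not from the linked writeup).
NEURAL_ENGINE_CORES = {
    "M1": 16, "M1 Pro": 16, "M1 Max": 16, "M1 Ultra": 32,
    "M2": 16, "M2 Pro": 16, "M2 Max": 16,
}

# Every chip here has a 16-core Neural Engine except the M1 Ultra,
# which fuses two M1 Max dies and so doubles up to 32.
for chip, cores in NEURAL_ENGINE_CORES.items():
    print(f"{chip}: {cores}-core Neural Engine")
```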
I'll update here with any additional information I happen across.
Cheers!