2024 - Camera parity, what's next?


I think that is already being delivered in the extraordinary wildlife lenses available for mirrorless systems. (I own one.)
The Canon 100-500 is quite lightweight and a joy to carry in the field. At f/7.1 it isn't a low-light monster, but with today's cameras and noise-reduction software that hasn't been a big limiting factor for me. Even at f/7.1, the images look brighter than those from the 200-500 I used to use.

I agree that lighter lenses with modern composite barrel components will be welcome. Most composites are stronger than aluminum and won't distort with temperature changes.
 
The real value of some of these features is for video or JPEG shooting. While I can always make a better image from a Raw file, in-camera tools are useful when the camera produces the final (or near-final) output. It's the same idea as Auto exposure mode: it's usually quite good, but with a different mode, exposure compensation, and a Picture Control you can always do better. Look how many people here use Photoshop, Lightroom, Topaz, or DxO for editing.

Nikon's thinking for enthusiast and pro cameras is that you'd rather post-process and optimize images yourself. But if you prefer a preset, they offer a wide range of Picture Controls, plus the ability to download other Picture Control presets or make your own. That is a significant advancement in the Z6III.
Near-term, I agree, but it doesn't have to be so. There's nothing stopping a RAW file from containing multiple images, nor is there anything stopping the camera from compositing multiple frames into a single output file.

Imagine you either set your camera to "HDR" mode or the camera auto-detects it, and when you take the photo, the camera captures three images and stores them in one .NEF file. Opening LR, you might see the "middle" exposure but have considerably more dynamic range available thanks to the high/low exposures. As cameras get more capable (CPU-wise and in frame rate), perhaps the camera does the combination itself, and we get a single RAW of arbitrary bit depth based on the requirements of the scene.
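To make the multi-frame idea concrete, here's a toy sketch of merging a bracketed set of linear exposures into one higher-dynamic-range array. The function name and the mid-tone weighting are my own illustration, not any manufacturer's actual pipeline:

```python
import numpy as np

def merge_bracket(frames, ev_offsets):
    """Merge bracketed linear frames into one higher-dynamic-range array.

    frames: arrays of linear sensor values scaled to [0, 1]
    ev_offsets: each frame's exposure offset, in stops, vs. the metered frame
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weights = np.zeros_like(acc)
    for frame, ev in zip(frames, ev_offsets):
        radiance = frame / (2.0 ** ev)   # normalize to a common exposure
        # trust mid-tones; down-weight clipped highlights and noisy shadows
        w = np.clip(1.0 - np.abs(frame - 0.5) * 2.0, 1e-3, None)
        acc += w * radiance
        weights += w
    return acc / weights                 # values may exceed 1.0
```

Feed it a -2/0/+2 bracket and it recovers highlight values the middle frame clipped, which is exactly the "arbitrary bit depth based on the scene" idea.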

Or perhaps you capture a focus stack, and your RAW container holds all the frames in one file. Olympus already outputs a single JPEG in-camera; it would be easy enough today to make a single-file RAW stack with the processed JPEG embedded.
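The stacking step itself is simple enough to sketch: per pixel, keep whichever frame is locally sharpest. A toy illustration assuming registered grayscale frames (real implementations also align frames and blend smoothly; `np.roll` wrap-around at the edges is ignored here for brevity):

```python
import numpy as np

def focus_stack(frames):
    """Pick, per pixel, the frame with the highest local contrast."""
    stack = np.stack(frames)                       # (n, H, W)
    # absolute discrete Laplacian as a simple sharpness measure
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)                  # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```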

I think today camera manufacturers assume you’re going to PP your files, but I think they also see potential competitive advantage in their “rendering engines”. People already pick brands based on “color science”, but there’s so much more potential in software, as iPhones and the like have proven with the relative miracles they’ve pulled off with their tiny sensors!

One thing is for sure, it’s a lot easier to develop software than it is to fight physics to double sensor density or optical resolution.
 
We can expect more innovation in sensor technology and in end-user features (e.g. resolution, dynamic range, and particularly autofocus), as well as more efficient fab technology that lowers production costs.

The low-light performance of high-resolution sensors is also likely to improve.


Autofocus technology is an active area of research, particularly improving the robustness and speed of on-sensor AF systems; most rely on horizontal PDAF sensor structures [interesting interview with Fujifilm engineers]. A recent example is the quad-pixel AF rumoured to be coming from Canon. It will apparently work similarly to cross-type sensors (key to the highly advanced AF performance of the Nikon D6).
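The horizontal-PDAF limitation is easy to see in code: phase detection reduces to finding the shift that aligns the signals from the two photodiode halves, and a purely horizontal edge sampled along a horizontal array gives a flat signal with no shift to find, which is what cross-type and quad-pixel layouts address. A toy sketch of the shift search (my own illustration, not any manufacturer's algorithm):

```python
import numpy as np

def pdaf_shift(left, right, max_shift=8):
    """Estimate defocus as the shift best aligning the two PDAF signals.

    left, right: 1-D intensity samples from the two photodiode halves.
    Returns the integer shift (in samples) minimizing the mean absolute
    difference; 0 means in focus.
    """
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s):len(left) + min(0, s)]
        b = right[max(0, -s):len(right) + min(0, -s)]
        cost = np.mean(np.abs(a - b))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```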


Advances in LiDAR-based AF, which DJI already markets, are another possibility.

 