The Surprising and Puzzling Paradox with Modern Cameras and Lenses
I’ve spent a lot of time pondering a critical issue with cameras: in many ways, we’re still caught in a problem that tradition never let go of. Lots of things about modern cameras are rooted in tradition, and that’s wonderful. Photography should stay loyal to where it began. But the industry still hasn’t truly embraced what digital makes possible. One of the most perplexing examples has to do with lenses, and my hope is that it doesn’t take long for that to change.
To get into this, I need to explain a few things. Let’s start with modern camera lenses. Most lenses are marketed in a way that ignores the romance and highlights the technology. The talk is clinical: MTF charts, pixel peeping, corner checking, bokeh shaping, etc. I mean, when did onion bokeh suddenly become a thing? Who was pixel peeping that hard on an image and suddenly decided the bokeh was that awful? Modern lens marketing is all about how good the lenses are, and more importantly, it implies how much less post-production you have to do because the photo comes out so clean.
And that’s fine, I guess. But the paradox here is that cameras are marketed differently.
Talk about cameras, and the conversation is very different. Dynamic range is discussed because it tells you how much you can recover in post-production. Color range is pitched the same way. High ISO noise is covered to show you how clean the files are, which is the most sensible part of it. Then there’s frames per second, autofocus tracking, etc. Essentially, cameras are marketed around how much more work you can do in post-production compared to the other cameras out there.
Why? Why are lenses marketed to show how little post-production you need, while cameras are marketed to show how much? Why can’t the two work together?
Some of you are probably going to give some obvious answers. But why not just work with the post-production companies and build their processing into the camera to begin with? Phase One already does this, and so does Zeiss, but it needs to be widespread. And with what AI can now do, much more is possible.
My proposal: allow me to do the following (a rough sketch of how this could work follows the list):
- Boot the camera up
- Sync it to an app
- Tell the app what sort of genres I shoot often
- Feed it 10 of my favorite images
- Let the app digest the images and come up with an average idea of the images I want to produce
- Fine-tune the result
- Export that data to the camera
- Let the camera apply that look and spit out RAWs that I can still edit if I wish
- Let the camera also spit out finished JPEGs with that look already applied
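To make this concrete, here’s a minimal sketch of what the “digest my favorites and export a look” steps could look like in software. Everything in it is an assumption for illustration: the statistics it averages, the JSON profile format, and the idea that a camera app would accept such a file. As far as I know, no manufacturer actually exposes anything like this today.

```python
# Hypothetical sketch of "learn my look from 10 favorites and export it".
# The statistics, profile schema, and sync step are all invented for illustration.
import json
import numpy as np
from PIL import Image

def summarize_look(paths):
    """Average a few simple color/tone statistics across the reference images."""
    stats = []
    for path in paths:
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        stats.append({
            "mean_rgb": img.mean(axis=(0, 1)),                       # overall color balance
            "contrast": float(img.std()),                            # rough global contrast
            "saturation": float((img.max(axis=2) - img.min(axis=2)).mean()),
        })
    return {
        "mean_rgb": np.mean([s["mean_rgb"] for s in stats], axis=0).tolist(),
        "contrast": float(np.mean([s["contrast"] for s in stats])),
        "saturation": float(np.mean([s["saturation"] for s in stats])),
    }

def export_profile(look, genres, out_path="my_look.json"):
    """Write the learned look to a file a camera app could (hypothetically) sync over."""
    profile = {"version": 1, "genres": genres, "look": look}
    with open(out_path, "w") as f:
        json.dump(profile, f, indent=2)
    return out_path

if __name__ == "__main__":
    favorites = [f"favorites/img_{i:02d}.jpg" for i in range(10)]  # my 10 reference shots
    look = summarize_look(favorites)
    print(export_profile(look, genres=["street", "portrait"]))
```

A real version would obviously use something far smarter than three averaged numbers, but the shape of the workflow is the point: learn from my favorites, fine-tune, push the result to the camera.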
Cameras can already do many things in-camera, like multiple exposures, stroboscopic flash, second-curtain sync, etc. What this could do is give birth to a totally different type of creator. More importantly, it could open up a whole new market for manufacturers built around their own apps and platforms. Everyone could genuinely have their own unique look. You wouldn’t constantly be feeding the same images over and over to satisfy an algorithm. Humans wouldn’t be so robotic when it comes to creating. And truthfully, it would mean photographers could actually create more in-camera.
I hope this happens one day and that the industry stops working against itself with modern cameras.