To paraphrase R.E.M.: “Is this the end of the world as we know it?”
Certainly there will be many who think the end is nigh and that photography as a means of creative control through camera technique is in its death throes. I’m not talking about the ‘art’ photography we are supposed to admire, like the recent winners of the DEUTSCHE BÖRSE PHOTOGRAPHY PRIZE who don’t make photographs whilst still entering a photography competition; I am talking about the regular, everyday photography that you and everyone you know engages in, whether with a camera or a phone.
In an article for the BBC, Leo Kelion tells how, in the not-too-distant future, the way you take a photograph will be largely irrelevant, because computational photography (software) will allow you to change where and what you focus on after the fact.
Imagine a camera that allows you to see through a crowd to get a clear view of someone who would otherwise be obscured, a smartphone that matches big-budget lenses for image quality, or a photograph that lets you change your point of view after it’s taken. The ideas may sound outlandish but they could become commonplace if “computational photography” lives up to its promise.

Unlike normal digital photography – which uses a sensor to capture a single two-dimensional image of a scene – the technique records a richer set of data to construct its pictures. Instead of trying to mimic the way a human eye works, it opens the activity up to new software-enhanced possibilities.

Pelican Imaging is one of the firms leading the way…
A companion app uses this information to let the snapper decide which parts of their photo should be in focus after it has been taken. This includes the unusual ability to choose multiple focal planes. For example, a photographer in New York could choose to make the details of her husband’s face and the Statue of Liberty behind him sharp but everything else – including the objects in between them – blurred.
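To make the idea concrete, here is a rough sketch in Python of how multiple focal planes might be applied after the fact, given a per-pixel depth map. Pelican hasn’t published its actual pipeline, so the function, the depth values and the blur settings are all my own illustrative assumptions:

```python
# A minimal sketch of post-capture refocusing with multiple focal
# planes, assuming we already have a per-pixel depth map.
# (Illustrative only: not Pelican's actual algorithm.)
import cv2
import numpy as np

def refocus(image, depth, focal_depths, tolerance=0.05):
    """Keep pixels near any chosen focal depth sharp; blur the rest.

    image        -- HxWx3 uint8 photo
    depth        -- HxW float32 depth map, normalised to [0, 1]
    focal_depths -- list of depths (e.g. [0.1, 0.9]) to keep in focus
    tolerance    -- how far from a focal plane still counts as sharp
    """
    blurred = cv2.GaussianBlur(image, (21, 21), 0)

    # Build a mask that is 1 near any selected focal plane.
    sharp_mask = np.zeros(depth.shape, dtype=np.float32)
    for d in focal_depths:
        sharp_mask = np.maximum(
            sharp_mask, (np.abs(depth - d) < tolerance).astype(np.float32))

    # Feather the mask so the sharp/blurred transition is gradual.
    sharp_mask = cv2.GaussianBlur(sharp_mask, (31, 31), 0)[..., None]

    # Blend: sharp where the mask is 1, blurred where it is 0.
    out = sharp_mask * image.astype(np.float32) + \
          (1 - sharp_mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)

# Example from the quote: husband's face (near, depth ~0.1) and the
# Statue of Liberty (far, depth ~0.9) sharp, the bits in between blurred:
# result = refocus(photo, depth_map, focal_depths=[0.1, 0.9])
```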
We have already featured the Lytro camera, which allows this, but the new technology is of a whole different order, and the suggestion is that even camera phones will do this, along with sophisticated HDR that actually looks good.
For now, high dynamic range (HDR) imaging offers a ready-to-use taste of computational photography. It uses computer power to combine photos taken at different exposures to create a single picture whose light areas are not too bright and dim ones not too dark.
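If you fancy trying that yourself today, here is a minimal sketch using OpenCV’s exposure-fusion (Mertens) merger, one common way of combining bracketed exposures. The file names are placeholders, and a real pipeline would also align the frames first:

```python
# A minimal sketch of the multi-exposure HDR idea described above,
# using OpenCV's exposure-fusion merger.
import cv2

# Load the same scene shot at three different exposures.
exposures = [cv2.imread(f) for f in
             ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation and
# well-exposedness, so highlights come mostly from the dark frame
# and shadows mostly from the bright frame.
merger = cv2.createMergeMertens()
fused = merger.process(exposures)   # float32, roughly in [0, 1]

# Scale back to 8-bit for saving or display.
cv2.imwrite("hdr_fused.jpg",
            (fused * 255).clip(0, 255).astype("uint8"))
```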
However, if the subject matter isn’t static there can be problems stitching the images together. Users commonly complain of moving objects in the background looking as if they’re breaking apart.

One solution – currently championed by chipmaker Nvidia – is to boost processing power to cut the time between each snap. But research on an alternative technique which only requires a single photo could prove superior.

“Imagine you have a sensor with pixels that have different levels of sensitivity,” explains Prof Shree Nayar, head of Columbia University’s Computer Vision Laboratory. “Some would be good at measuring things in dim light and their neighbours good at measuring very bright things.

“You would need to apply an algorithm to decode the image produced, but once you do that you could get a picture with enormous range in terms of brightness and colour – a lot more than the human eye can see.”

Even if current HDR techniques fall out of fashion, computational photography offers other uses for multi-shot images.
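Prof Nayar’s single-shot idea can be sketched in a few lines too. This is a toy illustration only: the two gain values, the checkerboard layout and the crude neighbour fill-in are my assumptions for the sake of example, not his actual decoding algorithm:

```python
# A toy sketch of the "pixels with different sensitivities" idea:
# alternate pixels capture the scene at two gains, and a decoding
# step recombines them into one wide-range image.
import numpy as np

def decode_sve(raw, gain_low=1.0, gain_high=8.0, saturation=255):
    """Reconstruct radiance from a checkerboard of two pixel gains.

    raw -- HxW array where checkerboard-even pixels used gain_high
           (good in dim light) and the others used gain_low (good
           in bright light).
    """
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    high = (yy + xx) % 2 == 0           # high-sensitivity sites

    # Undo each pixel's gain to estimate scene radiance.
    radiance = np.where(high, raw / gain_high, raw / gain_low)

    # High-gain pixels that clipped carry no information; replace
    # them with the average of their (low-gain) horizontal neighbours.
    clipped = high & (raw >= saturation)
    left = np.roll(radiance, 1, axis=1)
    right = np.roll(radiance, -1, axis=1)
    radiance[clipped] = 0.5 * (left[clipped] + right[clipped])
    return radiance
```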
So, do you want to embrace this, or does it fill you with loathing?
Pelican makes a phone camera that allows two subjects to be in focus but not objects in between them
Here, to make you feel better, is a picture by Henri Cartier-Bresson.