Oxford School of Photography

insights into photography


OVER 200 SEMINARS TO HELP YOU BECOME A BETTER PHOTOGRAPHER

This event in London in October might be something you would find interesting. We over here at OSP Towers don’t know much about it, but we have the links for you to check it out and decide for yourself. Personally, a photography event whose website only has pictures of people with their cameras leaves me a bit cold, but then I don’t really care about kit, I care about photography.

PhotoLive 2013 is the UK’s biggest and best photography training event.

Taking place over the weekend of 26-27 October at the Hotel Novotel London West, PhotoLive 2013 brings the experts to you, with over 200 big name-led seminars designed to help you improve your photography.

THE PASSION

Whether you’re passionate about landscapes, portraits, wildlife, travel, macro or Photoshop, you’ll come away from PhotoLive 2013 inspired and informed.

We’ve designed the show to suit everyone. Tickets start at just £20.

You can attend as many seminars as you like and there’s an identical programme on both the Saturday and Sunday, enabling you to come on both days and take in as much as you can. See the Schedule page for more details.

THE PEOPLE

Sign up today to receive expert tuition and insight from legendary photographers and Photoshop experts including Steve Bloom, Kate Hopewell-Smith, Glyn Dewis, David Noton, Andy Rouse, Tom Mackie and George Cairns.

Our aim is to help you become a better photographer, so if you love photography and want to take your skills to the next level, meet like-minded photography enthusiasts and have a brilliant day (or weekend) out, then PhotoLive 2013 is for you!

Hotel Novotel London West, 1 Shortlands, London W6 8DR

 

Computational photography: the snap is only the start

To paraphrase R.E.M.: “Is this the end of the world as we know it?”

Certainly there will be many who think the end is nigh and that photography as a means of creative control through camera techniques is in its death throes. I’m not talking about the ‘art’ photography we are supposed to admire, like the recent winners of the DEUTSCHE BÖRSE PHOTOGRAPHY PRIZE who don’t make photographs whilst still entering a photography competition; I am talking about the regular, everyday photography that you and everyone you know engages in, whether with a camera or a phone.

From the BBC, an article by Leo Kelion explains how, in the not too distant future, what you do at the moment of taking a photograph will become largely irrelevant, because computational photography (software) will allow you to change where and what you focus on afterwards.

Imagine a camera that allows you to see through a crowd to get a clear view of someone who would otherwise be obscured, a smartphone that matches big-budget lenses for image quality, or a photograph that lets you change your point of view after it’s taken. The ideas may sound outlandish but they could become commonplace if “computational photography” lives up to its promise. Unlike normal digital photography – which uses a sensor to capture a single two-dimensional image of a scene – the technique records a richer set of data to construct its pictures. Instead of trying to mimic the way a human eye works, it opens the activity up to new software-enhanced possibilities. Pelican Imaging is one of the firms leading the way…..

A companion app uses this information to let the snapper decide which parts of their photo should be in focus after they are taken. This includes the unusual ability to choose multiple focal planes. For example a photographer in New York could choose to make the details of her husband’s face and the Statue of Liberty behind him sharp but everything else – including the objects in between them – blurred.
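To give a flavour of how that “focus later” trick might work, here is a rough sketch in Python. It assumes the camera has already produced an all-in-focus image plus a per-pixel depth map (which is roughly what Pelican’s array camera is said to deliver); the chosen focal depths, the tolerance band and the single blurred copy are illustrative simplifications on our part, not anything Pelican has published.

```python
# A minimal sketch of post-capture refocusing from a depth map.
# Assumption: we already have an all-in-focus image and a normalised
# per-pixel depth map; the two focal planes picked below are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focal_depths, tolerance=0.05, max_blur=6.0):
    """Keep pixels near any chosen focal depth sharp; blur everything else.

    image        : HxWx3 float array, the all-in-focus capture
    depth        : HxW float array, normalised 0..1 depth per pixel
    focal_depths : list of depths (0..1) the photographer wants sharp
    """
    # Distance from each pixel's depth to the nearest chosen focal plane.
    dist = np.min(np.abs(depth[..., None] - np.asarray(focal_depths)), axis=-1)

    # Sharpness weight: 1 inside the tolerance band, falling off outside it.
    weight = np.clip(1.0 - (dist - tolerance) / tolerance, 0.0, 1.0)[..., None]

    # One uniformly blurred copy stands in for true depth-dependent defocus.
    blurred = gaussian_filter(image, sigma=(max_blur, max_blur, 0))

    return weight * image + (1.0 - weight) * blurred

# Toy example: a synthetic image with a simple left-to-right depth ramp.
rng = np.random.default_rng(0)
img = rng.random((120, 160, 3))
depth_map = np.tile(np.linspace(0.0, 1.0, 160), (120, 1))
result = refocus(img, depth_map, focal_depths=[0.2, 0.8])
```

The interesting part, as in the New York example above, is that both focal planes stay sharp while everything between them is blurred away.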

We have already featured the Lytro camera, which allows this, but the new technology is of a whole different order, and the suggestion is that even camera phones will be able to do this, along with sophisticated HDR that actually looks good.

For now, high dynamic range (HDR) imaging offers a ready-to-use taste of computational photography. It uses computer power to combine photos taken at different exposures to create a single picture whose light areas are not too bright and dim ones not too dark.
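For the curious, here is a very rough sketch of that multi-exposure merge in Python. The “well-exposedness” weighting and the crude tone mapping at the end are illustrative assumptions on our part, not the method any particular camera or app actually uses.

```python
# Rough sketch of a bracketed-exposure HDR merge: highlights come from the
# short exposure, shadows from the long one, everything weighted per pixel.
import numpy as np

def merge_exposures(exposures, times):
    """exposures: list of HxWx3 float arrays in 0..1; times: exposure times."""
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        # Pixels near mid-grey are trusted most; clipped pixels get ~0 weight.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        num += w * (img / t)      # divide out exposure time -> relative radiance
        den += w
    radiance = num / np.maximum(den, 1e-6)
    # Crude tone mapping back down to a displayable 0..1 image.
    return radiance / (1.0 + radiance)

# Toy bracket: the same scene "captured" at three exposure times.
rng = np.random.default_rng(1)
scene = rng.random((100, 150, 3)) * 4.0                 # linear scene radiance
brackets = [np.clip(scene * t, 0.0, 1.0) for t in (0.25, 1.0, 4.0)]
hdr = merge_exposures(brackets, times=[0.25, 1.0, 4.0])
```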

However, if the subject matter isn’t static there can be problems stitching the images together. Users commonly complain of moving objects in the background looking as if they’re breaking apart. One solution – currently championed by chipmaker Nvidia – is to boost processing power to cut the time between each snap. But research on an alternative technique which only requires a single photo could prove superior.

“Imagine you have a sensor with pixels that have different levels of sensitivity,” explains Prof Shree Nayar, head of Columbia University’s Computer Vision Laboratory. “Some would be good at measuring things in dim light and their neighbours good at measuring very bright things.

“You would need to apply an algorithm to decode the image produced, but once you do that you could get a picture with enormous range in terms of brightness and colour – a lot more than the human eye can see.”

Even if current HDR techniques fall out of fashion, computational photography offers other uses for multi-shot images.
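And here is a back-of-the-envelope sketch of Prof Nayar’s single-shot idea as we read it: neighbouring pixels with different sensitivities, decoded back into one high-dynamic-range image. The 2x2 sensitivity pattern and the simple “normalise, then borrow from neighbours where a pixel has clipped” decode are purely illustrative assumptions, not his published algorithm.

```python
# Sketch of decoding a spatially-varying-exposure sensor: a checkerboard of
# high- and low-sensitivity pixels captured in a single shot.
import numpy as np

def decode_sve(raw, gains):
    """raw: HxW sensor values in 0..1; gains: 2x2 pattern of pixel sensitivities."""
    h, w = raw.shape
    gain_map = np.tile(gains, (h // 2 + 1, w // 2 + 1))[:h, :w]

    # Normalising by sensitivity turns each pixel into a radiance estimate.
    radiance = raw / gain_map

    # Saturated pixels are unreliable; borrow the average of their neighbours,
    # which in this checkerboard pattern have the other (lower) sensitivity.
    clipped = raw >= 0.99
    padded = np.pad(radiance, 1, mode="edge")
    neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    radiance[clipped] = neighbour_mean[clipped]
    return radiance

# Toy example: a bright scene sampled through a checkerboard of sensitivities.
rng = np.random.default_rng(2)
scene = rng.random((64, 64)) * 3.0
pattern = np.array([[1.0, 0.25], [0.25, 1.0]])    # high / low sensitivity pixels
raw_capture = np.clip(scene * np.tile(pattern, (32, 32)), 0.0, 1.0)
hdr_estimate = decode_sve(raw_capture, pattern)
```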
So do you want to embrace this or does it fill you with loathing?
Pelican makes a phone camera that allows two subjects to be in focus but not objects in between them
Here to make you feel better is a picture by Henri Cartier-Bresson