Pointed out by my father-in-law. Info below via Fast Company:
“So what's the fuss all about? It's called light field, or plenoptic, photography, and the core thinking behind Lytro is contained neatly in one paper from the original Stanford research--though the basic principle is simple. Normal cameras work in roughly the same way your eye does, with a lens at the front that gathers rays of light from the view in front of it and focuses them through an aperture onto a sensor (the silicon in your DSLR or the retina in your eye). To focus your eye or a traditional camera, you adjust the lens in different ways to capture light rays from different parts of the scene and throw them onto the sensor. Easy. This does have a number of side effects, including the need to focus on one thing at a time. That adds complexity and, if used well, beauty to a photo.
But Lytro's technology includes a large array of microlenses in front of the camera sensor. Think of them as a synthetic equivalent of the thousands of tiny lenses on a fly's eye. The physics and math get a bit tricky here, but the overall result is this: Instead of the camera's sensor recording a single image that's shaped by the settings of your camera's lens, aperture, and so on, the sensor records a complex pattern that represents light coming from all the parts of the scene in front of it, not just the bits you would've focused on using a normal camera. The image is then passed to software, which can decode it.
And this is where things get freaky. Because the system captures data about the direction of light rays from the scene, it can be programmed to ‘focus’ on any depth in the photo--years after you took the original image.”
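For the curious, here's a minimal sketch of how that after-the-fact refocusing can work, based on the "shift-and-add" technique from the Stanford plenoptic research the article mentions. The array layout, the names (`light_field`, `alpha`), and the use of NumPy are my assumptions for illustration; this is not Lytro's actual pipeline.

```python
# Synthetic refocusing over a 4D light field: one (S, T) sub-aperture
# image per (u, v) lens position. Shift each sub-aperture image in
# proportion to its offset from the lens centre, then average. Objects
# at the chosen depth line up and come out sharp; everything else
# smears into defocus blur.
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Refocus a light field of shape (U, V, S, T).

    alpha is the ratio of the virtual focal plane's depth to the
    as-shot one; alpha = 1.0 reproduces the original focus.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            shift_s = (1.0 - 1.0 / alpha) * (u - U / 2.0)
            shift_t = (1.0 - 1.0 / alpha) * (v - V / 2.0)
            out += np.roll(light_field[u, v],
                           (int(round(shift_s)), int(round(shift_t))),
                           axis=(0, 1))
    return out / (U * V)

# Sweep the virtual focal plane through the scene, long after capture:
# images = [refocus(lf, a) for a in (0.8, 1.0, 1.2)]
```

Because every (u, v) view is stored, the choice of focal plane is just a parameter of this sum rather than something baked in at exposure time, which is exactly the "freaky" part the article describes.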
Read the rest here; more can be found here, here, and here. Visit the Lytro Picture Gallery here.