
Google Camera Brings Lens Blurring Manipulation to Mobile Photography

DSLR cameras are, for the most part, bulky but beautiful machines for capturing what our eyes see. They also give photographers a chance to manipulate the perception of distance, and that’s a feat mobile photography is about to accomplish.

Yesterday, Google released its Google Camera app, and with it the ability to manipulate lens blurring, also known as the bokeh effect. With a small amount of blur, a photo can look real, the way the human eye views the world. With a lot more bokeh, a photo can look miniature, fake, even animated – like those tilt-shift images that make cities appear as miniature universes.


The lens blurring that comes with the app goes beyond capturing light the way traditional lenses do. Google is using 3D imaging to capture an entire spatial experience so that users can later pick a perspective of their choosing:

Lens Blur replaces the need for a large optical system with algorithms that simulate a larger lens and aperture. Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames. From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the depth (distance) to every point in the scene. Here’s an example — on the left is a raw input photo, in the middle is a “depth map” where darker things are close and lighter things are far away, and on the right is the result blurred by distance.
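To make that last step concrete, here is a minimal, hypothetical sketch of "blurred by distance", assuming we already have the photo and its depth map as NumPy arrays. This is not Google's implementation – the function name and parameters are illustrative, and it approximates a lens by blending progressively blurred copies of the frame by depth band, using SciPy's gaussian_filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_by_depth(image, depth, focal_depth, max_sigma=8.0, bands=6):
    """image: HxWx3 float array; depth: HxW array on the same scale as focal_depth."""
    # Defocus grows with distance from the chosen focal plane.
    defocus = np.abs(depth - focal_depth)
    defocus = defocus / (defocus.max() + 1e-8)  # 0 = in focus, 1 = most defocused
    result = np.zeros_like(image)
    edges = np.linspace(0.0, 1.0, bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # One Gaussian blur per depth band, strength set by the band's midpoint.
        sigma = max_sigma * (lo + hi) / 2.0
        blurred = np.stack([gaussian_filter(image[..., c], sigma)
                            for c in range(3)], axis=-1)
        # Paste the banded blur only where this band's pixels live.
        mask = ((defocus >= lo) & (defocus <= hi))[..., None]
        result = np.where(mask, blurred, result)
    return result
```

Because the depth map is fixed and only the simulated focus changes, picking a different focal_depth after the fact is what lets users re-pick a perspective later.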

Here’s how we do it. First, we pick out visual features in the scene and track them over time, across the series of images. Using computer vision algorithms known as Structure-from-Motion (SfM) and bundle adjustment, we compute the camera’s 3D position and orientation and the 3D positions of all those image features throughout the series.
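The post doesn't include code for this step, but a hedged two-frame sketch of the "track features, then solve for camera pose" idea might look like the following, using OpenCV's stock routines in place of Google's production SfM and bundle-adjustment pipeline. The intrinsics matrix K and all names here are assumptions, and a real pipeline would refine the poses jointly across the whole upward sweep:

```python
import cv2
import numpy as np

def two_frame_pose(frame1_gray, frame2_gray, K):
    """frame1_gray, frame2_gray: uint8 grayscale frames; K: assumed 3x3 intrinsics."""
    # 1. Pick out visual features in the first frame.
    pts1 = cv2.goodFeaturesToTrack(frame1_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=8)
    # 2. Track them into the second frame with pyramidal Lucas-Kanade flow.
    pts2, status, _err = cv2.calcOpticalFlowPyrLK(frame1_gray, frame2_gray,
                                                  pts1, None)
    ok = status.ravel() == 1
    good1, good2 = pts1[ok], pts2[ok]
    # 3. Estimate the essential matrix from the correspondences (RANSAC
    #    rejects bad tracks), then decompose it into the camera's rotation R
    #    and translation t between the two frames.
    E, _inliers = cv2.findEssentialMat(good1, good2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
    _, R, t, _mask = cv2.recoverPose(E, good1, good2, K)
    return R, t, good1, good2
```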

Once we’ve got the 3D pose of each photo, we compute the depth of each pixel in the reference photo using Multi-View Stereo (MVS) algorithms. MVS works the way human stereo vision does: given the location of the same object in two different images, we can triangulate the 3D position of the object and compute the distance to it. How do we figure out which pixel in one image corresponds to a pixel in another image? MVS measures how similar they are — on mobile devices, one particularly simple and efficient way is computing the Sum of Absolute Differences (SAD) of the RGB colors of the two pixels.
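The SAD cost itself is simple enough to show directly. A minimal sketch with illustrative names: half=0 compares the single RGB pixels exactly as described above, while half>0 compares small patches around them, a common variant that makes the match more robust.

```python
import numpy as np

def sad(img_a, img_b, xa, ya, xb, yb, half=0):
    """Sum of absolute differences of RGB values between two candidate pixels
    (or, for half > 0, the (2*half+1)^2 patches around them)."""
    pa = img_a[ya - half:ya + half + 1, xa - half:xa + half + 1].astype(np.int32)
    pb = img_b[yb - half:yb + half + 1, xb - half:xb + half + 1].astype(np.int32)
    return int(np.abs(pa - pb).sum())  # lower = more similar
```

Scanning candidate pixels in the second image and keeping the lowest-SAD match gives the correspondence; triangulating that match from the two camera poses then yields the depth that goes into the depth map.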


So there you have it. Lens blurring for mobile cameras. It’s the same image, but a different eye.
