In 1838, Sir Charles Wheatstone first described stereopsis: the process by which humans perceive three dimensions from two nearly identical images, one seen by each eye. Or, the process by which Avatar looks like a mind-blowingly immersive alien landscape instead of a bunch of brightly colored fuzz.
3D technology has come a long way since Wheatstone developed his stereoscope, which was first used to view static drawings and, eventually, photographs. Now we get to wear Wayfarer knock-offs and enjoy 3D films, television shows and video games.
For some people, seeing cool images might be enough. But others might be curious how Pandora was brought to life, or how TRON: Legacy zapped them into its glowing world. The answer is both reassuringly simple and inordinately complex, depending on who you ask and how you look at it.
How do 3D films work? What’s the difference between polarization and anaglyph (we’ll get there), and what are the next steps for 3D gadgets and imagery? Have a look below for a breakdown of how today’s “it” technology functions. Plus, we put in some sweet-looking pictures. What’s not to love?
A tremendous thank you to David Leitner, Rob Willox and Professor Ian Howard for their collective insight and help in describing the various forms of 3D technology below.
Stereoscopy 101
Big words! Academic nomenclature! Relax, this is actually the easy part. 3D, or “stereoscopy,” refers to how your eyes and brain create the impression of a third dimension. Human eyes are approximately 50 mm to 75 mm apart — accordingly, each eye sees a slightly different part of the world. Don’t believe me? Hold up a pen, pencil or any other thin object. Close one eye. Now switch.
The image on either side should be pretty similar but slightly offset, like that line behind the woman’s head in the picture above. These two slightly different images enter the brain, which then does some high-powered geometry to reconcile the disparity between them. That reconciliation is what we experience as “3D”: your brain turning two slightly different perspectives of the same thing into a sense of depth.
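If you want to see that geometry in rough numbers, here is a minimal Python sketch of the triangulation involved. It assumes a simplified pinhole-style model, and the eye spacing, “focal length” and pixel offset are purely illustrative:

```python
# A rough sketch of the triangulation a stereo system (or your brain) performs:
# given the separation between two viewpoints and how far an object's image
# shifts between them, you can recover its distance. Numbers are illustrative.

def depth_from_disparity(baseline_mm, focal_length_px, disparity_px):
    """Classic pinhole-stereo relation: depth = baseline * focal / disparity."""
    return baseline_mm * focal_length_px / disparity_px

# Example: eyes ~63 mm apart, a notional "focal length" of 1500 px,
# and a pen whose image is offset 90 px between the two views.
print(depth_from_disparity(63, 1500, 90))  # ~1050 mm, i.e. about a meter away
```

Your visual system runs the equivalent of that calculation continuously, without ever showing you the arithmetic.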
This is also, essentially, what modern 3D technology is trying to replicate. All those silly sunglasses and silver-coated projectors are all designed to feed your individual eyes different perspectives of the same image. Easy, right?
Well, yes. Your brain handles the disparity between the two images effortlessly, working out all the angles and geometry automatically and fusing the two views into one. The hard part is getting a camera to do the same thing, and then getting those individual images to your individual eyes without butchering the whole effect.
What We Watch
Films
Film has been one of the pioneers of 3D, thanks to its hefty budgets and some technological daring. 3D has largely been achieved in motion pictures in two ways: anaglyph and polarized glasses.
Anaglyph is a fancy way of referring to the red-and-blue glasses we used to wear. The film is projected as two overlapping images, one tinted red and one tinted blue; each colored lens filters out one of them, so each eye gets its own perspective and your brain puts the 3D effect together. Other color pairs could be used, provided they were distinct enough to be separated on screen. This technique, however, didn’t allow for a full range of color and had a tendency to “ghost,” with the once-distinct images bleeding into one another. Not cool.
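Here’s a rough sketch of how a single anaglyph frame can be assembled from a left/right image pair. This is the common red-cyan variant rather than pure red-blue, the function name is mine, and it glosses over the color tuning that real anaglyph encoders do:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a left/right image pair into a single anaglyph frame.

    left_rgb, right_rgb: HxWx3 uint8 arrays of the same scene from each eye's
    viewpoint. The red lens passes only the left image's red channel; the
    cyan lens passes only the right image's green and blue channels.
    """
    anaglyph = np.zeros_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red channel   -> left eye
    anaglyph[..., 1] = right_rgb[..., 1]   # green channel -> right eye
    anaglyph[..., 2] = right_rgb[..., 2]   # blue channel  -> right eye
    return anaglyph
```

It also makes the color problem obvious: each eye only ever receives a subset of the color channels, which is why anaglyph never looked quite right.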
Much more common is the use of polarized glasses, which take advantage of the fact that light can be polarized, or given different orientations. For example, one image can be projected with horizontal polarization while the second is projected with vertical polarization. The corresponding glasses let horizontally polarized light into one eye and vertically polarized light into the other. The problem is that this kind of 3D requires you to keep your head still, à la A Clockwork Orange. Tilting your head changes how the light waves line up with each lens, messing with the color and the 3D effect. Also not cool.
This is the tricky part. To counteract this, modern 3D uses circular polarization, meaning the two projected images are given opposite “spins.” The glasses then pick out those opposite rotations (clockwise for one eye, counterclockwise for the other) to separate the images. Now you can tilt your head, or rest it on your boy/girlfriend’s shoulder, and still be able to watch the movie.
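If you’re curious why the tilt matters so much for linear polarization, here’s a tiny Python sketch using Malus’s law (transmission goes as cos² of the misalignment angle). It assumes ideal filters that start exactly 90 degrees apart, so the numbers are only illustrative:

```python
import math

def linear_leakage(tilt_degrees):
    """Fraction of the *wrong* eye's image that leaks through a linear
    polarizing lens when the head is tilted (Malus's law, cos^2)."""
    misalignment = math.radians(90 - tilt_degrees)  # lenses start 90 deg apart
    return math.cos(misalignment) ** 2

for tilt in (0, 10, 30, 45):
    print(f"{tilt:2d} deg tilt -> {linear_leakage(tilt):.0%} crosstalk")
# 0 deg -> 0%, 10 deg -> ~3%, 30 deg -> 25%, 45 deg -> 50%
# Circular polarization has no preferred axis, so tilting adds no crosstalk.
```

At a 45-degree tilt, half of the wrong eye’s image bleeds through a linear filter; a circularly polarized system has no axis to misalign, which is why it shrugs off head tilt.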
Television
It’s possible to use the same techniques found in film projection for home theaters, but you would need some serious cash. Cinemas use special silver-coated screens that reflect light back to the audience without scrambling its polarization. Your television, unfortunately, is not silver-coated. There are, however, two ways to get 3D at home: active and passive.
The most common, active 3D, involves wearing those electronic RoboCop glasses. The glasses are synced to your television and actively open and close shutters in front of your eyes, allowing only one eye to see the screen at a time. This sounds like a recipe for a stroke, but the shutters move so quickly that they’re hardly noticeable. These shutter lenses are made possible by the television’s refresh rate: 3D-enabled sets redraw the on-screen image very quickly, alternating between left-eye and right-eye frames. Through the glasses, each eye receives what looks like one constant image instead of a flicker.
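A minimal sketch of that frame-sequential idea, assuming a 120 Hz set (a common figure for shutter-glasses TVs; the function and timing loop are just illustrative):

```python
import itertools

DISPLAY_HZ = 120  # a common refresh rate for shutter-glasses 3D TVs
frame_time = 1.0 / DISPLAY_HZ

# Alternate left-eye and right-eye frames; the synced glasses open only
# the matching shutter, so each eye effectively sees 60 images per second.
def frame_schedule(n_frames):
    eyes = itertools.cycle(["LEFT", "RIGHT"])
    for i, eye in zip(range(n_frames), eyes):
        yield (round(i * frame_time, 4), eye)

for timestamp, eye in frame_schedule(6):
    print(f"t={timestamp:0.4f}s  show {eye} frame, open {eye} shutter")
```

Each eye ends up with 60 images per second, which is why active 3D leans so heavily on high refresh rates.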
Passive systems are less common but work much like 3D in the cinema. These televisions have a thin lenticular layer over the standard display: a series of incredibly narrow magnifying strips that steer a slightly different slice of the screen to each eye, as illustrated above. While this technology doesn’t require bulky, expensive glasses, it can limit image quality, because each eye only sees half of the screen at any given time. For example, if a screen had 100 pixels, 50 of them would be magnified toward the left eye and the other 50 toward the right eye. In practice, your brain puts the two halves back together, so the picture you perceive retains most of that 100-pixel detail.
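Here’s a quick sketch of the column interleaving such a screen relies on. The specific layout (even columns to the left eye, odd columns to the right) is an assumption for illustration; real panels and lens arrangements vary:

```python
import numpy as np

def interleave_columns(left, right):
    """Build the single frame a lenticular-style screen displays: even pixel
    columns carry the left-eye view, odd columns the right-eye view, so each
    eye is steered only half of the screen's horizontal resolution."""
    assert left.shape == right.shape
    combined = np.empty_like(left)
    combined[:, 0::2] = left[:, 0::2]    # even columns -> left eye
    combined[:, 1::2] = right[:, 1::2]   # odd columns  -> right eye
    return combined
```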
How It’s Made
Cameras
There is a lot of fancy footwork that goes into creating 3D, but the real heavy lifting is a matter of geometry and precision. To get a 3D image, you essentially need two versions of the same scene, filmed from angles that match the way your two eyes would view it. Filmmakers need to work out the distance between the two cameras and make sure both are converged on the same object. They also need to zoom and track (move) at exactly the same speed, otherwise the images won’t sync up. In modern film rigs, the two cameras are bolted into place to prevent any unwanted jostling or disparity.
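As a toy example of the kind of triangulation a stereographer does, here’s a small Python sketch for a symmetric “toe-in” rig: given the spacing between the lenses and the distance to the subject, it returns how far each camera has to angle inward. The numbers are illustrative, not values from any real production:

```python
import math

def toe_in_angle_deg(interaxial_mm, convergence_distance_mm):
    """Angle each camera must be rotated inward so both optical axes
    cross at the subject (simple symmetric 'toe-in' rig geometry)."""
    half_base = interaxial_mm / 2.0
    return math.degrees(math.atan2(half_base, convergence_distance_mm))

# Example: lenses 65 mm apart, converging on an actor 3 m away.
print(f"{toe_in_angle_deg(65, 3000):.2f} degrees per camera")  # ~0.62 deg
```

Fractions of a degree matter here, which is part of why the cameras get bolted down.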
Close-ups, a staple of modern film, are hard to capture in 3D because the cameras need to be extraordinarily close together to mimic the angle of your eyes. To solve this, filmmakers sometimes use mirror rigs, built around a small half-silvered mirror: one camera films straight through the mirror while a second, mounted at an angle, records the image reflected off it. Provided there are no imperfections on the mirror (scratches, dirt or warping), the close-up can be filmed in 3D.
Computer Graphics
There is a difference between creating three-dimensional computer graphics and creating images that actually appear 3D in the theater. Again, it’s all a matter of some high-tech geometry. To get a movie like Toy Story 3 into 3D, animators render two versions of each frame, one from the perspective of each virtual eye. Because computer-generated movies don’t need physical cameras, it’s much easier to get perfectly synced images and to fine-tune any mistakes in post-production. The downside is that this still takes a lot of time and elbow grease to get perfect.
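A minimal sketch of the two-virtual-cameras idea, assuming a standard 4x4 view matrix and a made-up 65 mm virtual eye separation (the function name and numbers are mine, not any studio’s pipeline):

```python
import numpy as np

def stereo_view_matrices(view_matrix, eye_separation=0.065):
    """Given a single (4x4) camera view matrix, derive left- and right-eye
    view matrices by sliding the camera along its own x (right) axis by
    half the interocular distance. Values are illustrative."""
    half = eye_separation / 2.0
    left_offset = np.eye(4)
    right_offset = np.eye(4)
    left_offset[0, 3] = +half    # shifting the world right = moving the eye left
    right_offset[0, 3] = -half
    return left_offset @ view_matrix, right_offset @ view_matrix

# Each animation frame is then rendered twice, once per matrix,
# producing the two perspectives that are later shown to each eye.
```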
It’s possible to create a 3D video game using the same technique; however, games add their own complications. Films and shows are largely pre-recorded and all have a fixed perspective — you can’t move the camera’s focus or orientation when you’re watching a film. Video games allow you to change the perspective by moving your on-screen character. This creates a labor-intensive problem since animators need to create objects that can be seen in 3D from a variety of angles depending on where the user is looking and moving.
The Future
One of the toughest problems to solve with 3D technology is the fundamental halving of any image. Lenticular screens send half the image to each eye, shutter lens glasses physically block one eye from seeing the image, and polarized glasses only send half the displayed light to each eye.
The human eye needs approximately 50 frames per second in order to see film as one continuous image. 3D effectively halves that, so on an ordinary display each eye would only get around 25 frames per second and some nauseating flicker. Modern technology has been able to push the frame rate (or refresh rate, in televisions) high enough that each eye still sees a smooth picture, and the illusion of 3D holds.
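Using that roughly 50-frames-per-second threshold, the arithmetic looks like this (the display rates below are just common examples):

```python
# Time-multiplexed 3D splits the display's refresh rate between the two eyes.
for display_hz in (48, 60, 120, 240):
    per_eye = display_hz / 2
    verdict = "flickers badly" if per_eye < 50 else "looks smooth"
    print(f"{display_hz:3d} Hz display -> {per_eye:5.1f} Hz per eye ({verdict})")
```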
Advances in computing and memory have also made 3D possible in a number of handheld and consumer products. There are already prototypes for 3D laptops, cameras, camcorders, and a variety of other tech.
In the coming years, keep an eye out for technology that uses autostereoscopy, or 3D that doesn’t require glasses at all. The Nintendo 3DS, Nintendo’s new portable 3D gaming handheld, is one example. One of its tricks is syncing its glasses-free display with its front-facing camera: by using eye recognition, it can track where the user’s face is and shift the display so the 3D effect holds up no matter how the user views the screen. Look for autostereoscopy to test the waters on handheld devices before it heads to large-format screens.
We’re just at the start of what 3D can offer, with plenty of successes and failures still to come. Let us know in the comments what you hope to see from the future of 3D, or what 3D-enabled tech you’re looking to scoop up.
Source: www.mashable.com