How do stereograms work?

In an earlier post I introduced the Wireframe Stereogram and mentioned that it grew out of a need to simplify stereograms. I needed to be able to generate them fast enough for an interactive display, and the typical SIRDS, SIS and MTS kind of stereograms took too long to produce, at least with my very basic technology and programming skills. To illustrate that, here’s a little introduction to stereograms and how they work.

The most basic type of stereogram is the stereo pair. This was all the rage just a few years after photography was invented, and today we can find lots of stereophotographs on flickr and the like.

stereo pair

Our brain has many ways to work out depth in a visual scene. The one that stereograms make use of is stereopsis, where the brain compares the two slightly different views our eyes have of the same scene, and interprets the differences as depth. Stereograms simply present these two different views next to each other, and it’s then up to you to look at the left view with your left eye, and the right view with your right eye (in stereograms designed for “parallel”, or divergent, viewing).
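The geometry behind this is just similar triangles: a point behind the image plane appears at two slightly different horizontal positions, one per eye, and that offset is what the brain reads as depth. A minimal sketch of that relationship (the function name and the default measurements are my own illustrative assumptions, not from the post):

```python
def separation(z, eye_spacing=65.0, viewing_distance=600.0):
    """Horizontal distance (in mm) between the two images of one scene point,
    for parallel viewing, by similar triangles.

    z is the point's depth behind the image plane (same units as the other
    arguments). At z = 0 the point sits on the plane, so both eyes see it
    in the same spot; as z grows, the separation approaches the eye spacing.
    """
    return eye_spacing * z / (z + viewing_distance)
```

So a point 600 mm behind a screen viewed from 600 mm away would be drawn 32.5 mm apart for eyes 65 mm apart, which is exactly the kind of offset a stereo pair encodes.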

This works well, but only for fairly narrow scenes. Once the two views start to overlap, the pair may still look fine in 2D, but when we fuse it we discover a problem in 3D: the brain can't tell which view the overlapping areas belong to, because they carry conflicting information.

overlapping stereo pair

That’s where Single Image Stereograms come in. Be it Random Dot (SIRDS), patterned (SIS), Mapped Texture (MTS) or whatever, they make sure that overlapping areas look the same in both views, eliminating that conflict. In other words, an MTS like the one below re-colours the left and right parts of the cube so they look the same when they overlap in the two views.

mapped texture stereogram

There are many ways to do this, but all of them require a lot of computation to work out what colour each pixel should be so that it looks right on every surface it ends up on in the 3D scene. That was too slow to do in real time for an interactive display.
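To make the per-pixel constraint concrete, here is a deliberately simplified single-scanline random-dot sketch (my own toy version, not the post's code): each pixel is forced to match the pixel a certain separation to its left, with the separation shrinking as the surface comes nearer. Even this crude version touches every pixel and chases constraints along the row, which hints at why full SIRDS/SIS/MTS generation gets expensive.

```python
import random

def sirds_row(depths, eye_sep=90, max_depth_shift=30):
    """Colour one scan line of a random-dot stereogram (simplified).

    depths: one value in [0, 1] per pixel (0 = far plane, 1 = nearest).
    Each pixel is constrained to equal the pixel s positions to its left,
    where s shrinks as the surface comes nearer; pixels with no partner
    inside the row get a free random dot.
    """
    row = []
    for x, z in enumerate(depths):
        s = eye_sep - round(z * max_depth_shift)  # nearer surface, smaller s
        if x - s >= 0:
            row.append(row[x - s])                # linked pixel: copy colour
        else:
            row.append(random.choice((0, 1)))     # unconstrained: random dot
    return row
```

On a flat depth map the row simply repeats with period `s`; depth variation bends that period, and the mismatch between the two eyes' views is what pops out as 3D. A real implementation also has to resolve pixels that end up constrained from both sides, which this sketch ignores.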

The Wireframe Stereogram eliminates that problem by getting rid of all the surfaces of the objects in the scene, and making all edges the same colour. This means that the two views we are combining are mostly transparent, and where edges in both views cross, they are the same colour, so there is no conflict.
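A rough sketch of that idea (the projection model and names here are my own assumptions, not the post's implementation): project each 3D edge once per eye, and because every edge is drawn in the same single colour, the two resulting line sets can overlap freely without conflict.

```python
def project(point, eye_x, viewer_dist=5.0):
    """Perspective-project a 3D point for one eye sitting at x = eye_x."""
    x, y, z = point
    t = viewer_dist / (viewer_dist + z)        # points shrink toward the eye
    return (eye_x + (x - eye_x) * t, y * t)

def wireframe_views(edges, eye_sep=0.6):
    """Project a wireframe's edges for the left and right eyes.

    edges: list of (a, b) pairs of 3D endpoints. The two returned edge
    lists are drawn side by side in one colour; where lines from the two
    views cross, they look identical, so there is nothing to re-colour.
    """
    half = eye_sep / 2
    left = [(project(a, -half), project(b, -half)) for a, b in edges]
    right = [(project(a, +half), project(b, +half)) for a, b in edges]
    return left, right
```

This is just two cheap perspective transforms per edge, with no per-pixel work at all, which is why the scene can be re-projected every frame as the viewer moves it around.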

wireframe stereogram

Of course, losing all the surfaces and textures is a great sacrifice, and a Wireframe Stereogram will never have the same impact as a good Mapped Texture Stereogram. But this is compensated for by the fact that we can now move things around in the 3D scene in real time. (To experience that, check out the Interactive Wireframe Stereograms.) We can’t have everything…