With the re-introduction of 3D to our screens following the success of Avatar, there is a lot of fear and misinformation about 3D (stereoscopic) production. It's only natural that people should have questions, but it's also important that they get the right answers to those questions.
In particular, there has been a lot of discussion about how shallow depth of field works when making 3D images. Some people are unsure about how the distance between the cameras (otherwise known as the 'interaxial') should be affected by the depth of field. Some even go so far as to suggest that the merest hint of depth of field makes for uncomfortable viewing, because in 'real life' nothing is blurry.
The problem this creates is that it clashes with the standard toolset cinematographers have used for over a hundred years to focus the viewer's attention on specific elements of the frame, not to mention the problems it throws up for shooting formats.
By way of example, the 2D version of a 3D movie is typically generated simply by discarding 'one eye' from the stereoscopic presentation. If the proponents of the 'no depth of field in 3D' camp are correct, this creates an interesting question for producers: if you plan to release in both 2D and 3D, how do you shoot both with and without depth of field? How do cinematographers even frame the story in such a situation?
Compounding this is the tendency of professional cameras to employ larger sensor formats, and the fact that a mirror rig is de rigueur for most stereo work. Together, these factors pretty much ensure that eradicating any background blurriness is a very difficult proposition indeed, unless you happen to be shooting in bright sunlight or on a very wide lens.
The good news is that it's business as usual: there is no difference in viewing comfort between the same stereoscopic images shot with or without depth of field.
To illustrate this, I shot a test series with varying interaxial distances, f-stops and focal lengths. To see the 3D effect you will need to view them stereoscopically. If you have a pair of red/cyan glasses, you may want to click on the anaglyph thumbnails. If not, you can try the cross-eye freeviewing thumbnail, if you are able to do this. Alternatively, if you have a stereo monitor, you can load the colour images in Stereoscopic Player, making sure to select side by side, right image first as your viewing option (not left, as the frames are reversed for normal side by side as compared to the freeview format).
As I briefly mentioned above, shooting in 3D has a tendency towards a shallower depth of field than 2D.
To begin with, a camera with a large sensor gives a shallower depth of field than a comparable setup with a small sensor, and the use of a mirror rig compounds the issue.
For those who don't know, typical interaxial values vary between about 5 and 60mm for most 3D work. Pro lenses will typically have a barrel radius that exceeds this by a factor of 2 or more. How, then, do you get the lenses close enough together without them bumping into one another? The answer is to mount the cameras on a mirror rig. The cameras are positioned at 90 degrees to each other: one looks at the reflection in a half-silvered mirror, while the other looks directly through the back of the same mirror. In this way you can have an interaxial of 0mm (which, astute readers will have realised, means shooting '2D', as the image is the same in both cameras).
The problem is that the mirror divides the light in half, so each camera loses a stop. To maintain exposure, the cameras must open up by a stop, which correspondingly makes for a shallower depth of field. Also, close-ups are usually shot on medium-to-long lenses to avoid distorting the features of the face with foreshortening, and of course, the longer the lens, the shallower the depth of field. Is the language of 3D cinematography, then, to be limited to brightly lit wide-angle shots?
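To put that one-stop penalty into numbers, here is a quick sketch using the standard depth-of-field formulas. The specific values (45mm lens, 2m subject distance, 0.03mm circle of confusion, F8 versus F5.6) are illustrative assumptions, not measurements from my shoot:

```python
# Illustrative sketch: how losing a stop to the mirror shrinks depth of field.
# Assumed values (not from the shoot described here): 45mm lens, subject at 2m,
# 0.03mm circle of confusion (a common figure for full-frame sensors).

def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return the (near, far) limits of acceptable sharpness, in mm."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# Suppose we could shoot at F8 without the mirror; the lost stop forces F5.6.
n8, f8 = dof_mm(45, 8.0, 2000)
n56, f56 = dof_mm(45, 5.6, 2000)
print(f"F8  : {(f8 - n8) / 1000:.2f} m of sharp field")    # ~0.98 m
print(f"F5.6: {(f56 - n56) / 1000:.2f} m of sharp field")  # ~0.67 m
```

Under these assumptions, the single stop lost to the mirror cuts roughly a third off the sharp zone, before you even factor in the longer lenses favoured for close-ups.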
The test, then, is to shoot the same image with both a shallow and a deep depth of field and compare the differences, which I have done below using a Canon 5D Mk2. This camera has quite a large sensor (24 x 36mm), so producing a shallow depth of field should be easy; in fact, getting a deep depth of field proved to be more difficult.
First, I shot a simple scene with a background parallax of about 2% of screen width, which is suitable for screens up to about 50". To really get a sense of the stereoscopic effect, try clicking on the images to see them full screen. The convergence point was set on the near side of the subject of our scene: the Chinese incense burner, 0.4m away. The furthest object in the background is at 2.46m, and the focal length was about 45mm.
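For the curious, the interaxial behind a parallax figure like that can be estimated with a simple approximation: on a converged rig, the sensor parallax of an object at distance Z, with convergence at distance C, is roughly focal length x interaxial x (1/C - 1/Z). This is a back-of-the-envelope sketch using the scene numbers above, not the calculation used on set:

```python
# Back-of-the-envelope parallax estimate for a converged rig.
# Scene figures from the test above: 45mm lens, convergence at 0.4m,
# furthest background object at 2.46m, 36mm sensor width.

def parallax_percent(interaxial_mm, focal_mm, conv_mm, dist_mm, sensor_w_mm=36.0):
    """Parallax of an object at dist_mm, as a percentage of image width."""
    p_on_sensor = focal_mm * interaxial_mm * (1.0 / conv_mm - 1.0 / dist_mm)
    return 100.0 * p_on_sensor / sensor_w_mm

# ~7.5mm of interaxial lands near the 2% target for this scene;
# 24mm (used in the later series) lands near 6%.
for t in (7.5, 24.0):
    pct = parallax_percent(t, 45, 400, 2460)
    print(f"{t:>4} mm interaxial -> {pct:.1f}% background parallax")
```

Reassuringly, the same formula applied to the 24mm series later in this article gives roughly 6%, which matches the figure quoted there.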
In the first image (taken at F22) the background is…well, I wouldn't say it's perfectly sharp; that's one of the problems with such a large sensor. Even at F22 it can be difficult to get a really deep depth of field, which is the opposite of the problem you have with a small-chip camera. While not perfectly sharp, it's sufficiently sharp for our purposes.
Comparing these two sets of images, there is no greater eyestrain or discomfort with the version shot at F4 as compared to those shot at F22. Some people complain that it feels strange to be converging on objects that are out of focus, as you can in the above images. This brings me back to the point we touched on above: the entire point of selective focus in film language is to guide the viewer's attention.
The only time you will see the subject of the shot go out of focus is when the cinematographer is directing your attention to another part of the frame, for example, the 'monster over the shoulder' shot. The irresistible force that draws your eye to the monster, rather than letting it linger on our blurry, hapless hero in the foreground, is the same force at work here. Of course you can force yourself to look at an out-of-focus section of the image, but the eye seeks resolution by finding the point of focus. It feels wrong in 2D or 3D.
Another common misconception is that if the background is sufficiently blurry you can get away with having more parallax. I took the same series of photographs with an interaxial of 24mm, which gives a background parallax of 6% of image width. First, the F22 images:
Now let's see what happens when we decrease the depth of field by shooting the same set of images at F4:
I moved the camera back and zoomed in to 105mm. At F22 it looks like this:
The same applies to foreground objects, but care must be taken to deal with edge violations, which is a discussion for another time; the principles, however, remain the same.
In the end, the result is a thorough debunking of two persistent myths of 3D: that you can't shoot with shallow depth of field, and that you can get away with 'turning up' the 3D if the background is blurry.