IVCam: Rendering with Real-Lens Optics

David M. Sirkin, Alex C. Snoeren, and David C. Zhang
6.837: Introduction to Computer Graphics
Massachusetts Institute of Technology
{sirkin, snoeren, davidz}@mit.edu

Abstract: This paper presents a set of Ivray extensions to render scenes using a physically-based camera model. By using actual lens prescriptions, the resulting renderer, Ivcam, is able to accurately reproduce lens effects including focus, depth of field, barrel distortion, aperture, and exposure. Ivcam replaces the pinhole camera used in Ivray with a film plane and lens system and simulates the propagation of individual light rays through the various lens elements and aperture stops. The Ivcam UI contains extensions to support lens selection and focusing functionality. The Inventor camera parameters are continually updated to present an approximate preview, although Inventor is clearly unable to accurately reflect many of the effects. We show sample images rendered using a variety of lens systems, including fish-eye, telephoto, double-Gauss, and wide-angle. In addition, images rendered with a wide-angle lens are compared to actual photographs of the scene using a similar lens.

Introduction

Ivray ray-traces scenes as if they were viewed through a pinhole lens onto a focal plane. The result is that the focal length is fixed and the entire scene is sharply focused with no distortion. Real cameras have variable focal lengths, apertures, and focal planes with non-trivial area and finite depth of field (DOF).

Composing a photograph can be considered a balancing of these four effects. Wide-angle lenses cover a larger viewing angle than normal or telephoto lenses from the same subject distance, but also increase the perceived depth between foreground and background. Moving the camera position can compensate for lens coverage, but at the cost of a change in perspective. The effect can be used to expand or flatten apparent depth in a photograph.
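The contrast between the pinhole model and a real lens can be made concrete with a thin-lens approximation, the simplest physically-based camera. The sketch below is our own illustration (Ivray and Ivcam are not written this way, and all names are ours): the pinhole version maps a film point to a single ray, while the thin-lens version samples a point on the aperture disk and aims at the plane of sharp focus, which is what produces finite depth of field.

```python
import math
import random

def pinhole_ray(px, py, focal_length):
    """Primary ray through an ideal pinhole: fixed origin, no blur.
    Every film point maps to exactly one direction, so everything is sharp."""
    d = (px, py, focal_length)
    norm = math.sqrt(sum(c * c for c in d))
    return (0.0, 0.0, 0.0), tuple(c / norm for c in d)

def thin_lens_ray(px, py, focal_length, aperture_radius, focus_dist, rng=random):
    """Primary ray for a thin-lens camera: sample the aperture disk,
    then aim at the point where the central ray meets the focus plane."""
    _, d = pinhole_ray(px, py, focal_length)
    # Point on the plane of sharp focus hit by the central (pinhole) ray.
    t = focus_dist / d[2]
    focus_pt = (d[0] * t, d[1] * t, focus_dist)
    # Uniform sample on the unit disk by rejection, scaled to the aperture.
    while True:
        lx, ly = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if lx * lx + ly * ly <= 1.0:
            break
    lens_pt = (lx * aperture_radius, ly * aperture_radius, 0.0)
    d2 = tuple(f - l for f, l in zip(focus_pt, lens_pt))
    norm = math.sqrt(sum(c * c for c in d2))
    return lens_pt, tuple(c / norm for c in d2)
```

Every aperture sample of a thin-lens ray passes through the same point on the focus plane, so geometry on that plane stays sharp; geometry off the plane is smeared over a circle of confusion that grows with `aperture_radius`, and setting `aperture_radius` to zero degenerates back to the pinhole camera.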
Small apertures create larger DOF from the same subject distance with the same focal length, but also decrease the light intensity on the film plane. Opening up the aperture can compensate for exposure, but at the cost of a change in DOF. The effect can be used to de-focus distracting backgrounds behind an important foreground subject.

Lenses of various sorts create their own unique image distortions. Lenses with large surfaces or many elements suffer from chromatic aberration and barrel or pincushion distortion. Lenses with small surface areas (or, equivalently, small apertures) suffer from diffraction and loss of critical focus. In practice, photographers must trade off perspective, focal length, and exposure against lens- and aperture-induced distortions. Our project incorporates the influence of such camera realities, allowing computer-generated scenes to be viewed alongside film or video more convincingly.

The remainder of this paper is organized as follows. Section 2 itemizes the specific goals of this project. The following section describes the large subset of these goals that were achieved, and details many of the technical difficulties encountered along the way. The contribution of each team member is listed in Section 4, and the paper concludes by discussing some of the many lessons learned during the course of the project.

Goals

Our goal was to extend Ivray to capture the various effects of lens imaging by developing a new application, Ivcam, which wraps around the basic ray-tracing functionality of Ivray. Ivcam's user interface would allow the user to adjust various aspects of the camera model, including lens structure, focus, and aperture. Specifically, we hoped to account for the following four effects:

1. Perspective variations due to distance between the subject and lens.
2. Depth of field (DOF) of sharp focus, which depends on factors such as image size and distance, lens focal length, and aperture.
3. Exposure variations due to the radius and geometry of lens elements and aperture, including vignetting.
4. Reduced image quality due to distortion (diffraction and aberration) introduced by the lens design and aperture.

Figure 1: Simulated Wide-Angle
Figure 2: Simulated Telephoto

Figures 1 and 2 demonstrate the type of effects we set out to capture. These images were not rendered, but rather manually doctored to simulate the effects of photo-realistic rendering. Both images are focused on the plane of the light bulb, and have equally-sized bases.

Figure 1 simulates the results of using a wide-angle lens. The DOF covers the entire scene, perspective is very strong, and there is significant barrel distortion (note the curvature in the armature). For comparison, Figure 2 shows the same image through a telephoto lens. Here, the DOF is very shallow (note the out-of-focus armature and front shade edge), there is little sense of perspective, and distortion is minimal.

Our initial research produced two separate approaches, which we hoped to implement and compare, both to each other and to actual photographs. The obvious approach involved interposing actual lens elements in the scene between the eye and world-space objects. Assuming the ray-tracing engine properly modeled the optical effects, this should produce images similar to those captured with a camera. The second, more rigorous method involved modifying the camera model to include a lens system and accurately simulating the radiometry inside the camera itself. By accounting for the lens effects inside the camera, we hoped not only to see a speed increase, but to obtain more accurate results as well.

Lens Rendering

The first approach is straightforward. One could use a standard rendering engine (which correctly handles refraction calculations for multiple adjacent or wholly-contained objects; note that the current implementation of Ivray does not do this), and then render through lenses in world space.
This involves modeling lens objects in Inventor.

While we believe many aspects of a real lens system can be modeled in this fashion, the additional computation required to traverse a lens system for each ray cast adds significantly to the cost of rendering. Furthermore, without intelligent super-sampling methods, various subtle characteristics of real lenses, such as variability in exposure and aperture, may not be captured.

Camera Model

A second, more sophisticated approach involves including the camera model in the rendering engine itself. This technique was used in previous work by Kolb, Mitchell, and Hanrahan.
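That style of camera model propagates each film-plane ray through the lens prescription one surface at a time. The following is our own simplified sketch of a single spherical refracting surface, not Ivcam's code: it assumes surfaces centered on the optical (z) axis and a single wavelength (no dispersion), and all names and conventions are ours.

```python
import math

def refract(d, n, eta):
    """Snell refraction of unit direction d at unit normal n (facing the
    incoming ray); eta = n_incident / n_transmitted.
    Returns None on total internal reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    k = eta * cos_i - math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + k * ni for di, ni in zip(d, n))

def trace_element(origin, d, z_vertex, radius, half_aperture, eta):
    """Propagate a ray through one spherical refracting surface whose vertex
    sits at z_vertex on the optical axis. radius is the signed radius of
    curvature; eta = index before / index after the surface.
    Returns (hit_point, new_direction), or None if the ray is blocked."""
    center = (0.0, 0.0, z_vertex + radius)
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(o * di for o, di in zip(oc, d))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None  # ray misses the surface entirely
    sq = math.sqrt(disc)
    # Choose the intersection on the vertex side of the sphere.
    t = -b - sq if radius > 0.0 else -b + sq
    if t <= 0.0:
        return None
    hit = tuple(o + t * di for o, di in zip(origin, d))
    if hit[0] ** 2 + hit[1] ** 2 > half_aperture ** 2:
        return None  # blocked by the element's rim (a source of vignetting)
    # Surface normal, oriented against the incoming ray.
    n = tuple((h - c) / radius for h, c in zip(hit, center))
    if sum(ni * di for ni, di in zip(n, d)) > 0.0:
        n = tuple(-ni for ni in n)
    d_new = refract(d, n, eta)
    return None if d_new is None else (hit, d_new)
```

A full tracer folds `trace_element` over the surfaces of a lens prescription (radius, thickness, index, and aperture per element) from the film toward the scene; rays that return None simply contribute no radiance, which is how aperture stops and vignetting fall out of the simulation rather than being modeled separately.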