UT PSY 380E - Image formation: geometrical optics


III. Image formation: geometrical optics

Surfaces typically reflect incident light in all directions (unless they happen to be perfectly specular, like perfect mirrors). For example, Lambertian surfaces reflect light equally in all directions. This fact poses a serious obstacle to obtaining useful visual information from the environment, and undoubtedly it greatly slowed the evolution of vision in biological organisms. To see this, consider a flat piece of white paper and imagine that you have attached to each point on the paper a small sensor (a receptor or a photocell). As the paper is turned in various directions the light falling on the sensors goes up and down, but for each direction the paper is pointed the light is very nearly uniform across all the sensors. Clearly the responses of these sensors would supply very little information about objects in the environment. The problem, of course, is that light is reflected in all directions, and hence each sensor receives light from every object in front of it. Useful vision requires that the light waves be sorted out so that the light from each direction falls on a unique region of the sensor array. This sorting process is called image formation. Evolution has produced two general types of solutions to the problem of image formation, which are described below.

A. Visual scene as a collection of point sources

To understand image formation it is useful to note that the visual scene can be regarded as a collection of point sources; a point source is an infinitesimal light source that emits light in all directions.

B. Two point sources

The problem of image formation is illustrated in Figure 3.1A for the special case where there are just two small objects (point sources) in the environment. The light waves from the two point sources are represented by concentric circles. As can be seen, even in this simple case, the light from the two point sources is completely confounded in the sensor array. Clearly something must be done if separate images of the two point sources are to be formed on the sensor array.

C. Imaging with pinholes

One solution is to place the sensor array behind an opaque surface that has a small aperture (a pinhole) in it. The principle is illustrated in Figure 3.1B. Because of the pinhole, the light waves from the two point sources fall on different parts of the sensor array. This method of image formation can produce moderate-resolution images (pinhole cameras were popular several decades ago), and the method is found in some primitive biological vision systems (e.g., the Nautilus eye).

[Figure 3.1. (A) Two point sources and a bare sensor array. (B) Pinhole imaging. (C) Imaging with a lens.]

However, pinhole imaging has some serious weaknesses. It is clear from Figure 3.1B that the smaller the pinhole, the more localized the images of the two point sources will be, and hence the better the resolution of images in general. Unfortunately, shrinking the aperture only works up to a certain point. Once the aperture becomes smaller than some critical size, diffraction of the light waves by the aperture causes the images of point sources to start increasing in size (diffraction will be discussed in more detail later). Also, the need for a small aperture implies that little light from the objects will reach the sensor array, which can also result in poor resolution under low ambient lighting conditions. The numerical sketch below makes this tradeoff concrete.
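The following short Python sketch is illustrative and not part of the original notes; it assumes a standard rule of thumb in which the blur of a distant point source is roughly the larger of the geometric blur (about the aperture diameter d) and the diffraction blur (about the Airy-disk diameter 2.44 * lam * L / d, for wavelength lam and pinhole-to-array distance L). Equating the two blurs gives an approximately optimal pinhole size.

# Illustrative sketch (assumed rule of thumb, not from the notes):
# geometric vs. diffraction blur for a pinhole imager viewing a
# distant point source.
import math

def geometric_blur(d):
    # Geometric image of a distant point source: roughly the aperture itself.
    return d

def diffraction_blur(d, L, lam):
    # Approximate Airy-disk diameter produced by diffraction at the aperture.
    return 2.44 * lam * L / d

def optimal_pinhole(L, lam):
    # Diameter where the two blurs are equal: d = sqrt(2.44 * lam * L).
    return math.sqrt(2.44 * lam * L)

L = 0.1        # assumed 10 cm pinhole-to-sensor distance
lam = 550e-9   # assumed green light, 550 nm

for d in (1e-3, 0.5e-3, 0.2e-3, 0.1e-3):
    total = max(geometric_blur(d), diffraction_blur(d, L, lam))
    print(f"d = {d*1e3:.2f} mm -> blur ~ {total*1e3:.3f} mm")

print(f"optimal d ~ {optimal_pinhole(L, lam)*1e3:.3f} mm")

With these assumed values the optimum comes out near 0.4 mm: larger apertures are dominated by geometric blur, smaller ones by diffraction, which is the non-monotonic behavior described in the paragraph above.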
D. Imaging with lenses

A more elegant solution is to use lenses in combination with relatively large apertures. Lenses are transparent objects that have approximately spherical surfaces. A typical lens shape is shown in Figure 3.1C. The fundamental principle that allows lenses to form images is that the speed of light is lower in dense materials, such as glass or water, than in air. Consider the light wave from the upper point source that is just about to enter the lens. The part of the wave touching the lens will enter it first and will be momentarily slowed down relative to the rest of the wave. The result is that once the wave is inside the lens it will be much less curved, or may even be curved the other way. Furthermore, the part of the light wave that enters the lens first will be the last to exit it, so the rest of the light wave again gains ground on the part that entered first. The end result is that the lens converts diverging spherical light waves from point sources into converging spherical light waves that collapse back into point sources at some distance from the lens.

1. The image and object planes

The plane where the image of the point source is located is called the image plane. In Figure 3.1C, the image plane coincides with the sensor array. The plane where the point-source object is located is called the object plane. The distance from the lens to the image plane is called the image distance, di. The distance from the lens to the object plane is the object distance, do.

2. The image distance depends on the object distance

With a pinhole imaging system, the size of the point-source image on the sensor array depends little on the distance from the point source to the sensor array or on the position of the aperture (see Figure 3.1B). The distances are usually much more critical with lens imaging systems. It is apparent from Figure 3.1C that if the sensor array were moved closer to the lens, the image of the point source would become blurred. Similarly, if the sensor array were moved farther away, the image would also become blurred, because the light waves expand again after they collapse to a point. Thus, in order to obtain optimally sharp images, the sensor array must be placed at a precise distance from the lens. Unfortunately, this introduces another problem, because the distance from the lens to the image plane depends on the distance of the object. For example, suppose a point source is moved very far away from the lens. When the light waves reach the lens they will have much less curvature than those from a close point source (like the one in Figure 3.1C); indeed, if the source is at an infinite distance the waves will be flat. Because the light waves are flatter, they will be more concave to the right when they exit the lens than those in Figure 3.1C, and hence will collapse to a point in front of the sensor array (and be blurred on the sensor array). Thus, one can see intuitively that the image distance is a decreasing function of object distance. To maintain sharp images of objects at different distances, lens-based imaging systems must therefore be able to adjust their focus.
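This dependence can be made explicit with the thin-lens equation of standard geometrical optics, 1/do + 1/di = 1/f, where f is the focal length of the lens. The short Python sketch below is illustrative rather than part of the notes, and the 50 mm focal length is an assumed example value; it shows di falling toward f as the object recedes.

# Illustrative sketch: thin-lens equation, 1/do + 1/di = 1/f,
# with an assumed focal length f = 0.05 m (50 mm).

def image_distance(d_o, f=0.05):
    # d_i = 1 / (1/f - 1/d_o); for d_o = infinity, 1/d_o = 0 and d_i = f.
    return 1.0 / (1.0 / f - 1.0 / d_o)

for d_o in (0.1, 0.2, 0.5, 1.0, 10.0, float("inf")):
    print(f"object at {d_o:>5} m -> image at {image_distance(d_o)*1000:.1f} mm")

The printout shows the image distance shrinking from 100 mm toward the 50 mm focal length as the object moves from 0.1 m out to infinity, i.e., the image distance is a decreasing function of object distance, as stated above.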

