NMT EE 552 - Digital Image Stabilization and Sharpening

Digital Image Stabilization and Sharpening
Kyle Chavez
Evan Sproul
4/21/2011
EE 552

1. Introduction

One promising method of solar power concentration is the heliostat array. A heliostat consists of a large mirror with the mechanisms and circuitry necessary to actuate it, such that it reflects sunlight onto a given target throughout the day. A heliostat array is a collection of heliostats that focuses sunlight continuously on a central receiver, often called a power tower. Figure 1 shows an example of a heliostat array.

Figure 1: Sandia National Laboratories National Solar Thermal Test Facility heliostat array. This array consists of 222 heliostats and a single 200 ft tall power tower.

Optimally concentrating sunlight with heliostats requires that each heliostat mirror is correctly canted (positioned) and focused (shaped). In order to minimize facet canting and focal errors, Sandia National Labs and New Mexico Tech are developing heliostat optical analysis tools. In many cases these tools utilize a camera positioned on top of the power tower (200 feet above ground level). This camera position, combined with the large heliostat field size, places the camera a total distance of 300-700 feet from the heliostat mirror surface being analyzed. Figure 2 displays the camera and heliostat positioning.

Figure 2: Camera and heliostat positions during optical analysis.

To compensate for the large distance between the heliostat and camera, a long-range optical zoom lens (80-400 mm focal length) is used when acquiring images and video of heliostat mirrors. This lens allows the camera to acquire high-resolution images at great distances. However, one major drawback of this system is the high sensitivity of the zoom lens and camera to small position disturbances. These disturbances include camera and lens position shifts due to wind, structural vibrations, and thermally induced strain.
Although these disturbances are small, they can create large shifts in the camera's field of view. The optical analysis of heliostats relies heavily upon determining the exact positions of specific features in acquired images and video. As a result, any shift in a camera's field of view will have a negative effect on analysis by altering the location of these features within an image. This alteration results in poor analysis and incorrect heliostat canting and focusing.

The following project attempts to implement a software solution to track and correct for shifts in camera position. In the project we reviewed and compared literature on multiple methods for motion correction. We then implemented one method on sample video that mimics the camera position change commonly seen during optical analysis. As a secondary portion of the project, the blur induced by camera motion was investigated and corrected for a single frame using methods previously demonstrated in the EE 552 course.

2. Preexisting Methods for Motion Correction

Our investigation of the preexisting methods for motion correction followed a project paper from a Northwestern University digital image processing course [1]. In the paper, Brooks discusses the various methods for motion tracking and correction. These methods include spatio-temporal approaches such as block matching [2], direct optical flow estimation [3], and least mean-square error matrix inversion [4], as well as matching methods including bit-plane matching [5], point-to-line correspondence [6], feature tracking [7], pyramidal approaches [8], and block matching [9]. Upon investigating these references we decided to pursue a gray-coded bit-plane matching (GCBPM) method [5]. This method requires minimal computation and is ideal for quickly determining and correcting motion.
3. Implementation of the GCBPM Method

In order to implement the selected correction method, we first had to generate a video file that accurately mimicked the sort of motion experienced by the camera on top of the power tower. Our initial attempt to acquire actual tower-top video was unsuccessful due to unusually high winds that exceeded the scope of this project. As a result, we instead developed a generic video (acquired in an indoor office setting) that had camera motions similar to those induced by average wind speeds and other disturbances. Using the webcam also allowed us to utilize a slightly lower image resolution. This smaller resolution allowed us to capture a longer uncompressed video segment that we could then attempt to process.

Upon creating the video, development of the bit-plane algorithm began. During development we followed the methods presented by Gonzalez [10] and Ko [5]. Initially, we used the dollar bill image presented in Figure 3.14 of Gonzalez to verify our algorithm's accuracy. Figure 3 displays our algorithm's results for comparison purposes.

Figure 3: Bit-plane slices of the 100 dollar bill image.

After successful implementation of the initial bit-plane algorithm, we moved towards adapting the algorithm for gray-coded bit-plane separation as discussed in Ko's paper. Equations 1 and 2 were used to generate the final gray-coded bit-planes:

f_t(x,y) = a_{K-1} 2^{K-1} + a_{K-2} 2^{K-2} + ... + a_0 2^0    (1)

g_{K-1} = a_{K-1}
g_k = a_k XOR a_{k+1},  0 <= k <= K-2    (2)

In Equation 1, f_t(x,y) represents the gray level of the t-th image frame at coordinates (x, y). In Equation 2, g_k represents the k-th bit of the (here eight-bit, K = 8) gray code, and in both equations the a_k are the standard binary coefficients.

Our results after applying Equations 1 and 2 to the initial frame of our video segment are shown in Figure 4. From the results of Figure 4 we chose bit-planes four and five as good candidates for analysis. These planes showed a large amount of detail, making them ideal for detecting changes in the image.
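The gray-code transform in Equations 1 and 2 can be applied to an entire 8-bit frame at once, because g_k = a_k XOR a_{k+1} for every bit position is equivalent to XOR-ing each pixel value with itself shifted right by one bit. The sketch below illustrates this in Python with NumPy (an assumption for illustration; the original course work was presumably done in MATLAB, and the function name is ours):

```python
import numpy as np

def gray_coded_bit_planes(frame):
    """Split an 8-bit grayscale frame into its 8 gray-coded bit-planes.

    Implements Equations 1 and 2: g_{K-1} = a_{K-1}, g_k = a_k XOR a_{k+1}.
    For unsigned integers this is the classic transform g = a ^ (a >> 1).
    """
    frame = frame.astype(np.uint8)
    gray = frame ^ (frame >> 1)  # Equation 2 applied to every pixel at once
    # planes[k] holds bit k of the gray code for each pixel (values 0 or 1)
    return [(gray >> k) & 1 for k in range(8)]

# Tiny demo on a synthetic 4x4 "frame"
frame = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
planes = gray_coded_bit_planes(frame)
```

With a real video frame, `planes[4]` and `planes[5]` would correspond to the mid-level bit-planes chosen for analysis above.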
Figure 4: Gray-coded bit-planes of the initial video frame.

After determining an ideal bit-plane, we created the appropriate code to calculate the approximate motion vector between successive video frames. The code analyzes four small sub-images of the single-frame bit-plane image. The sub-images of the bit-plane image are shown in Figure 5.

Figure 5: Single bit-plane with highlighted and zoomed sub-images.

During analysis, the code uses Equation 3 to find the error function of each sub-image between successive frames. In this equation W is the width of the sliding block and p is the range of pixels being
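The preview truncates Equation 3, but in bit-plane matching schemes of this kind the error function is typically the count (or fraction) of mismatched bits: an XOR between a W-by-W block of the current frame's bit-plane and each candidate block in the previous frame's bit-plane, searched over a range of +/- p pixels. The sketch below is a minimal illustration under that assumption; the function name, the mean normalization, and the default W and p values are hypothetical, not taken from the paper:

```python
import numpy as np

def block_motion(prev_plane, cur_plane, top, left, W=32, p=4):
    """Estimate the (dy, dx) shift of one W x W sub-block between two
    binary gray-coded bit-planes.

    The error for each candidate shift is the fraction of mismatched
    bits (an XOR count over the block); the shift with minimum error
    wins. The caller must choose (top, left) so that the whole +/- p
    search window stays inside the planes.
    """
    block = cur_plane[top:top + W, left:left + W]
    best, best_err = (0, 0), np.inf
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            ref = prev_plane[top + dy:top + dy + W, left + dx:left + dx + W]
            err = np.mean(block ^ ref)  # XOR mismatch rate for this shift
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

In a full stabilizer this would be run on each of the four sub-images, and the four local vectors combined (for example by majority vote or median) into one global motion vector used to shift the frame back into registration.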