NMT EE 552 - Localized Deblurring of High Speed Photography



Localized Deblurring of High Speed Photography
Charlie Jensen
EE 552: Digital Image Processing

Abstract: There have been significant contributions to the area of deblurring images blurred by camera motion. In many cases, innovative image impulse response estimation and deconvolution processes applied over the entire image were proposed. While these approaches have focused on still cameras and real-time video, little work has been conducted on the iterative process of localized deblurring of high-velocity objects captured with high speed cameras. In many high speed videos there is an object of interest that blurs as a result of poor lighting conditions, camera exposure limitations, or a combination of both. This imposes limitations on both qualitative and quantitative post-capture data processing. This paper discusses a solution to the localized deblurring of high-velocity objects that involves the subtraction of two images. As will be discussed, the processing of the residual window varies with the camera settings and the speed of the object of interest.

Introduction: A common occurrence in high speed photography is that the object of interest travels at such high velocity that it blurs from frame to frame. Blurring is caused by limits on a camera's exposure time, insufficient external light, and extreme object/projectile velocities. For example, a projectile traveling at 2 km/s will blur approximately ¾" with a 10 µs exposure time. Localized deblurring offers two advantages for high speed photography diagnostics. First, better image quality of the object of interest greatly enhances visual test results. Second, test data such as velocity or angle may be processed more easily and accurately. This report identifies a process that uses image comparison to locate a blurred object of interest.
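As a quick sanity check on the figure above, the blur length is simply velocity multiplied by exposure time. A minimal sketch using the values from the text (2 km/s, 10 µs):

```python
# Blur length = projectile velocity x exposure time.
velocity_m_per_s = 2000.0   # 2 km/s projectile
exposure_s = 10e-6          # 10 microsecond exposure

blur_m = velocity_m_per_s * exposure_s   # distance traveled during exposure
blur_in = blur_m / 0.0254                # metres -> inches

print(f"blur = {blur_m * 1000:.1f} mm = {blur_in:.2f} in")  # ~20 mm, ~0.79 in
```

This confirms the approximately ¾" figure quoted in the introduction.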
The object is then passed through a series of filters, performing a localized deblurring of the object. A significant amount of work has been conducted on single-image motion deblurring caused by a camera moving as an image was captured. Deblurring an image involves three steps: prior understanding of the capture-time conditions, in the form of the exposure in relation to the object of interest; point spread function (PSF) estimation; and image deconvolution. There are many methods of estimating the impulse response of an image, or PSF, such as the method in Single-image Motion Deblurring Using Adaptive Anisotropic Regularization, which combines maximum-likelihood estimation with an auxiliary edge-preserving regularization term. In this application, not all of the proposed PSF-estimation algorithms apply, because the blur is localized rather than spread over the full image.

Localized Blur Detection: First, the proposed localized deblurring assumes prior knowledge of the PSF estimate for the object of interest, plus some trial and error to fine-tune results. Because high-velocity projectiles vary widely, the technique must be adjusted from video to video. An example of a typical image encountered in high speed photography is shown in Figure 1. To find the blurred object from frame to frame, a simple image subtraction is implemented. The resulting image clearly illustrates the area of the image that was blurred. Image subtraction presents an issue for the first frame, since the initial frame showing the object of interest has no earlier frame to subtract against. If a later image is subtracted from a previous image, a residual of the object in the first image is obtained. Using this process, the location of the blur may be found. Depending on the velocity of the object, the blur may overlap between frames, which yields only a partial residual in the initial frame.
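The frame-to-frame comparison described above can be sketched with NumPy. The function and variable names here are illustrative, not from the paper; the frames are assumed to be equally sized grayscale arrays:

```python
import numpy as np

def blur_residual(frame_prev, frame_curr):
    """Absolute difference of consecutive frames; large values mark
    the pixels where the moving (blurred) object was or now is."""
    return np.abs(frame_curr.astype(np.float64) - frame_prev.astype(np.float64))

# Toy example: static background, object that moved between frames.
prev = np.zeros((8, 8))
curr = np.zeros((8, 8))
prev[3:5, 1:3] = 1.0   # object position in the earlier frame
curr[3:5, 4:6] = 1.0   # object position in the later frame

residual = blur_residual(prev, curr)
# residual is nonzero only at the old and new object positions,
# giving the location of the blur to window in later steps.
```

The static background cancels in the subtraction, so only the object of interest survives into the residual.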
To ensure that the entire object of interest is included in the deblurring process, the partial residual has to be expanded. A Gaussian low-pass filter is used to complete the expansion: by setting a large standard deviation and a sufficient mask size, the window becomes large enough to accommodate the entire area of interest. An averaging filter of all ones could also filter the data if the residual is sparse, but the Gaussian weighted average was chosen to provide a large residual window. Finally, to isolate the object in motion, a window threshold was applied based on the minimum and maximum pixel values in the image: the threshold intensity is found by adding to the minimum a percentage of the difference between the maximum and minimum pixel values. This allows better control of the residual window size. The resulting black-and-white image lets all of the white pixels be replaced by the corresponding values in the original image. The final blur-detection step is to subtract the residual image from the original image; the residual will later be replaced with its deblurred version.

Deblurring the Original Image: In the previous section, a window was established to allow a deblurred window to be combined with the original image. The next step is to apply a Wiener filter, in combination with the PSF, to deblur the entire original image. The PSF, as mentioned, may be roughly estimated from the camera parameters along with additional inputs (e.g., the velocity of the projectile). One benefit of this approach is that artifacts of the deblurring filter, such as ringing, typically affect the edges of an image, allowing the residual window to contain the deblurred object alone. Once the deblurred image is obtained, its values are placed into the thresholded residual image. The residual image may then be recombined with the original subtracted image.
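The window-expansion, thresholding, and Wiener steps might be sketched as follows. This is a simplified reconstruction, not the author's code: the Gaussian kernel size, the threshold percentage, and the noise-to-signal constant `k` are all illustrative, the Wiener filter is implemented directly with NumPy FFTs, and the magnitude of the blurred frame stands in for the frame-difference residual (the background here is zero):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """1-D Gaussian, normalized to sum to 1 (used separably in 2-D)."""
    x = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def expand_residual(residual, size=15, sigma=5.0):
    """Gaussian low-pass the residual so the window grows to cover the object."""
    g = gaussian_kernel(size, sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, residual)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

def window_mask(expanded, pct=0.2):
    """Threshold at min + pct * (max - min), as described in the text."""
    lo, hi = expanded.min(), expanded.max()
    return expanded > lo + pct * (hi - lo)

def motion_psf(length, shape):
    """Horizontal linear-motion PSF, roughly estimated from velocity and exposure."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deblur(image, psf, k=1e-3):
    """Frequency-domain Wiener filter with noise-to-signal ratio k."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H)**2 + k) * G))

# Synthetic demonstration: a sharp object, blurred by the motion PSF.
original = np.zeros((32, 32))
original[12:20, 8:12] = 1.0
psf = motion_psf(5, original.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(original) * np.fft.fft2(psf)))

# Deblur the whole frame, then keep only the windowed region.
deblurred = wiener_deblur(blurred, psf)
mask = window_mask(expand_residual(np.abs(blurred)))
result = np.where(mask, deblurred, blurred)
```

Because the Wiener step is applied to the full frame but only the masked window is kept, ringing near the frame edges never reaches the recombined image, matching the benefit noted above.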
Because of the blur of the object of interest, the residual may look discontinuous against the original image, since the Wiener-filtered area is bounded by the Gaussian filtering. If this is the case, a second round of thresholding may be implemented to incorporate more of the original image background.

Results: An original image containing a blurred object of interest is shown in Figure 1. The projectile velocity was validated through the use of a chronograph; combined with the high speed camera settings, this allowed the PSF to be estimated and fine-tuned. The resulting deblurred image is illustrated in Figure 4. Some unwanted noise as a result of the Gaussian and Wiener filtering

