Visible and Long-Wave Infrared Image Fusion Schemes for Situational Awareness

Multi-Dimensional Digital Signal Processing
Literature Survey

Nathaniel Walker
The University of Texas at Austin
[email protected]
20, 2008

Abstract

Image fusion schemes remain a popular area of academic research, but the growing field has few agreed-upon standards. Most schemes rely on multi-scale decompositions. These algorithms contain many steps and complex decision models that may be difficult to implement in real-time video systems. They are also susceptible to artifacts and noise enhancement because they treat the source images as equally likely contributors to the fused result. This paper proposes the study of a new kind of image fusion scheme for situational awareness applications in which a visible-light camera and an infrared camera are used together. In the proposed scheme, the image with the most content is defined as the primary image and serves as the base for the fused image. The other source image is defined as secondary and is used to overlay high-contrast object information onto the primary image. During daylight hours the algorithm would use the visible-light camera as primary and add objects of interest from the infrared sensor. This model is consistent with the intent of situational awareness systems: to complement the human visual system with infrared content. During nighttime operation, the infrared sensor would be primary and the algorithm would add objects of interest from the visible-light camera.

I. Introduction

Situational awareness is the ability to see, understand, and interpret one's surroundings well enough to make informed decisions. In environments where the human senses are obscured, distracted, or overwhelmed, it is desirable to deploy electronic systems that perform the functions of the human senses and communicate the received information to a user. Such systems also provide an opportunity to extend sensing capability beyond that of a typical human. It is no easy task to build electronics that recreate the complexity of the human senses in taking in information, or the power of the human brain in combining and interpreting that information to make rational decisions. Even so, remarkable progress has been made in sensing technology for visual awareness. High-resolution optical cameras have long captured the world as we see it, and infrared detectors now provide imagery in spectral bands that the human eye cannot see, revealing thermal content in a scene as well as offering visibility in low-light and adverse weather conditions.

However, much work remains to determine the best way to combine these two sources of information for interpretation by the human visual system. Situational awareness systems are often limited not by their ability to sense the environment, but by their ability to effectively combine and communicate information to the user [1]. Image fusion techniques are often evaluated subjectively by peer reviewers [2], and it would be helpful to have quantitative criteria for evaluating them objectively. Some work has been done to define a set of quantitative evaluation criteria [3], but no ideal fused-image quality metric is yet agreed upon.

Since humans do not normally see emissions in the infrared spectrum, we are not used to interpreting that data.
Fusion algorithms must strike a balance between taking advantage of the new information available from infrared sensors and presenting that information in a way that is familiar, not distracting, and easy to interpret. Specific applications must also be considered, because the optimal fusion approach may depend on the scene content or the application objectives.

Effectiveness is only one important characteristic of an image fusion system, although it has received most of the academic community's attention on this topic. Speed, efficiency, and algorithmic complexity are also concerns in situational awareness applications. Most situational awareness implementations operate on streaming video from multiple sensors for near-real-time display. Low latency and high frame rates (30 fps) are essential, which drives a trade-off between image quality on one hand and algorithm complexity and speed on the other. Many situational awareness applications also run on mobile platforms (vehicles, UAVs, humans) where size, weight, and power must drive the design, so sacrifices in image quality are often accepted if an algorithm can save system resources.

The focus of this paper will be to propose an image fusion algorithm that maintains image quality while improving on the simplicity and speed of those used in existing situational awareness systems. I am interested in applying the strengths of leading high-quality image fusion algorithms while developing a simpler implementation for use in mobile situational awareness systems (a rough sketch of such a scheme appears at the end of Section II).

II. Background

A number of image fusion techniques have been proposed and studied, along with multiple frameworks for classifying and categorizing them [4] [5]. Proposals for image quality metrics and fused-image evaluation have also varied [2] [6] [7]. This fragmentation reflects the topic's early stage of development: image fusion technology is still immature and growing, with few standards in place and a limited history of implementations outside of academia.

The simplest form of fusion algorithm is additive: the fused result is a linear combination of the source images. This approach is not commonly used, and consequently little analysis of it is available in the research literature (a minimal sketch appears at the end of this section).

Most popular image fusion algorithms are based on multi-scale decomposition (MSD) techniques; an excellent survey of existing methods can be found in [4]. The MSD representation decomposes an image into contributions from different spatial frequencies. An MSD transform is performed on each source image, the resulting coefficients are combined in some intelligent manner determined by the fusion algorithm of choice, and an inverse MSD transform then produces the fused image (see the wavelet sketch at the end of this section).

MSD fusion schemes generally combine source image data by identifying edges and local areas of high contrast in each source, then transferring those edges to the fused image. One trouble is that many algorithms inadvertently transfer noise from both sources in addition to actual scene content. The fused image
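For concreteness, additive fusion is simple enough to state in a few lines. The sketch below is a minimal illustration, assuming two co-registered single-channel images scaled to [0, 1] and a free blending weight alpha; the weight and normalization are illustrative choices, not prescribed by any reference above.

import numpy as np

def additive_fusion(visible, infrared, alpha=0.5):
    """Additive fusion: the fused result is a linear combination
    of the two co-registered source images.

    visible, infrared : 2-D float arrays scaled to [0, 1]
    alpha             : weight given to the visible image (assumed free parameter)
    """
    if visible.shape != infrared.shape:
        raise ValueError("source images must be co-registered (same shape)")
    fused = alpha * visible + (1.0 - alpha) * infrared
    return np.clip(fused, 0.0, 1.0)

The weakness is immediate: every pixel of both sources contributes equally, so a low-contrast region in one image washes out detail in the other, which helps explain why the approach sees so little use.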

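The MSD pipeline described above can be illustrated with a discrete wavelet transform. The following sketch uses the PyWavelets package with a choose-max-magnitude rule for the detail coefficients and a simple average for the coarse approximation; the wavelet, decomposition level, and combination rules are illustrative assumptions, not the method of any specific scheme surveyed in [4].

import numpy as np
import pywt

def wavelet_fusion(img_a, img_b, wavelet="db2", level=3):
    """Fuse two co-registered grayscale images with a wavelet MSD.

    Forward-transform each source, keep the larger-magnitude detail
    coefficient at every position (edges and high contrast win),
    average the coarse approximations, then invert the transform.
    """
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    # Coarse approximation: simple average of the two sources.
    fused = [0.5 * (ca[0] + cb[0])]

    # Detail subbands (horizontal, vertical, diagonal) at each scale:
    # choose-max by absolute value, a common MSD combination rule.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in ((ha, hb), (va, vb), (da, db))
        ))

    return pywt.waverec2(fused, wavelet)

The choose-max rule is what transfers the stronger edge response at each position and scale into the fused image; it is also why noise, which appears as large detail coefficients, can be transferred along with real scene content, the very problem noted above.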

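Finally, the primary/secondary overlay idea from the abstract can be sketched as well. This preview does not define how "most content" is measured or how objects of interest are detected, so the histogram-entropy content measure and the local-contrast mask below are stand-in assumptions for illustration only, not the author's method.

import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the image histogram, one plausible stand-in
    for the unspecified 'content' measure."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def primary_secondary_fusion(visible, infrared, contrast_thresh=0.1):
    """Pick the source with more content as the primary (base) image,
    then overlay high-local-contrast regions from the secondary source.
    The threshold would need tuning per sensor; it is a guess here."""
    if entropy(visible) >= entropy(infrared):      # e.g. daytime scene
        primary, secondary = visible, infrared
    else:                                          # e.g. nighttime scene
        primary, secondary = infrared, visible

    # Local contrast of the secondary source: deviation from a 3x3 box
    # mean, a crude high-pass stand-in for a real object-of-interest
    # detector.
    pad = np.pad(secondary, 1, mode="edge")
    h, w = secondary.shape
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    mask = np.abs(secondary - local_mean) > contrast_thresh

    fused = primary.copy()
    fused[mask] = secondary[mask]   # overlay high-contrast objects
    return fused

Because only masked pixels of the secondary image are copied in, noise in the secondary source contributes only where it exceeds the contrast threshold, consistent with the stated motivation of not treating the two sources as equally likely contributors.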