Computer Vision Based Fire Detection

Nicholas True
University of California, San Diego
9500 Gilman Drive, La Jolla, CA
[email protected]

Abstract

In this paper we use a combination of techniques to detect fire in video data. First, the algorithm locates regions of the video where there is movement. From these regions, fire-colored pixels are extracted using a perceptron. Lastly, we use dynamic texture analysis to confirm that these moving, fire-colored regions have the temporal and motion characteristics of fire.

1. Introduction

Automatic fire detection devices have been around since the first smoke alarm was invented by Francis Upton in 1890 [2]. After further technological advances in the mid 1960s reduced the price of smoke detectors, these devices started showing up in buildings all over the world, becoming the ubiquitous and essential devices that they are today [2]. However, automated fire detection devices such as smoke detectors have significant limitations which make them useless in many important situations. For instance, smoke detectors, the most common type of fire detection device, only work well in small enclosed spaces like those found in homes and offices. In large open spaces such as warehouses, atriums, theaters, and the outdoors, smoke detectors are ineffective because they require smoke to build up to a sufficient level to set them off. In open spaces, the fire is usually out of control by the time smoke has built up enough to set off the alarm. Heat sensors suffer from the same shortcomings as smoke detectors.

Video-based fire detection does not suffer from the space constraints that smoke and heat detection do. Cameras can detect and pinpoint fire from long distances as soon as the fire starts, allowing the fire to be dealt with before it gets out of control. Furthermore, cameras can cover very large areas, potentially mitigating their high cost compared to other fire detection technologies. Video-based fire detection even has the potential to be placed on mobile platforms such as planes and robots.

2. Approach

Since fire is a complex and unusual visual phenomenon, we decided upon a multi-feature-based approach for our algorithm. The goal of such an algorithm is to find a combination of features whose joint occurrence leaves fire as the only plausible cause.

Fire has distinctive features such as color, motion, shape, growth, and smoke behavior. For this project we focused on color and motion; we hope to include additional feature analysis in future work.

Figure 1. The fire detection algorithm outline. (Image sources: faculty.ksu.edu.sa, www.sumo1.com, hubblesite.org, cksinfo.com.)

To reduce the total computational load, the first step of our algorithm is to perform frame differencing to get a rough idea of where motion occurred. The regions of the video which are moving are fed to a fire-color classification algorithm.

There are a number of different ways of detecting fire-colored pixels. In [15], the authors used a mixture of Gaussians in the RGB color space to classify pixels as fire-colored or not. In [7], pixels whose color landed in a specific segment of the HSV color space were classified as fire-colored. We decided to use a multilayer perceptron, like [9], to classify pixels as fire-colored or not. Spatial clustering and analysis is then performed on the pixels that are classified as fire-colored.

The next stage grabs a short 50 to 100 frame sequence of the video centered on each of these moving, fire-colored regions. These short video sequences are then fed to a module which performs dynamic texture analysis on them. The result is that each target region is classified as either fire or not fire.

2.1. Motion Detection

The first step of our algorithm is to find regions in the video stream where there is motion. This is done through frame differencing based on image intensity values.

Given that flames often flicker and jump, the algorithm has a built-in timer which keeps track of how long it has been since there was movement at a particular pixel. This helps 'smooth' out the results for the query 'where is there motion in the video?'.

Figure 2. Original frame sequence.

Figure 3. Motion detected by frame differencing.
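To make this step concrete, the following is a minimal sketch, not the paper's actual implementation, of intensity-based frame differencing with a per-pixel hold timer that keeps a pixel marked as moving for a few frames after its last detected change. The threshold and hold length are assumed values, and frames_rgb stands for any iterable of RGB frames.

```python
import numpy as np

DIFF_THRESHOLD = 15   # assumed intensity-change threshold (0-255 scale)
HOLD_FRAMES = 10      # assumed number of frames a pixel stays "moving"

def to_intensity(frame_rgb):
    """Convert an HxWx3 uint8 RGB frame to a float grayscale image."""
    return frame_rgb.astype(np.float32).mean(axis=2)

def motion_mask_stream(frames_rgb):
    """Yield a boolean motion mask per frame: frame differencing plus a
    per-pixel countdown that smooths out the flicker of flames."""
    prev = None
    timer = None
    for frame in frames_rgb:
        gray = to_intensity(frame)
        if prev is None:
            timer = np.zeros(gray.shape, dtype=np.int32)
            mask = np.zeros(gray.shape, dtype=bool)
        else:
            changed = np.abs(gray - prev) > DIFF_THRESHOLD
            # Reset the timer where motion occurred, otherwise count it down.
            timer = np.where(changed, HOLD_FRAMES, np.maximum(timer - 1, 0))
            mask = timer > 0
        prev = gray
        yield mask
```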
2.2. Detecting Fire-Colored Pixels

To classify pixels as fire-colored or not, we decided to use a multilayer perceptron. The perceptron has two layers: three input nodes, one for each color channel, and one node in the hidden layer. (Given that perceptrons are well described in the literature and known in many fields, I will forgo their description here.)

At first glance one might say that fire-color classification fails miserably because it tends to label many non-fire pixels as fire. However, the goal of this first classifier is to get a very high true positive rate (equivalently, a low false negative rate), regardless of the false positive rate. This is because the color classifier is the first in a sequence of classifiers whose job is to weed out the color classifier's false positives. Thus, these additional classification algorithms help reduce the overall false positive rate.

One of the advantages of choosing a perceptron to perform color classification is that it is easy to retrain. Given that the input images might come from different types of cameras with different levels of saturation and different color ranges, ease of retraining is an important feature. This way, the technology can be applied to existing camera systems with a minimum of time and difficulty.

Figure 4. Original image.

Figure 5. Red denotes pixels that were classified as being the color of fire.
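The sketch below illustrates a perceptron of the shape described above: three color inputs, a single hidden unit, and one output. The architecture follows the text, but the activation function, loss, learning rate, and class name are assumptions made for illustration rather than the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FireColorPerceptron:
    """Tiny multilayer perceptron: 3 color inputs -> 1 hidden unit -> 1 output."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(3,))  # input -> hidden weights
        self.b1 = 0.0
        self.w2 = rng.normal(scale=0.5)             # hidden -> output weight
        self.b2 = 0.0

    def forward(self, rgb):
        """rgb: (N, 3) array scaled to [0, 1]. Returns hidden and output activations."""
        h = sigmoid(rgb @ self.w1 + self.b1)        # (N,) hidden activations
        y = sigmoid(self.w2 * h + self.b2)          # (N,) fire-color scores
        return h, y

    def train(self, rgb, labels, lr=0.5, epochs=2000):
        """Plain gradient descent on squared error; labels are 1 for fire-colored pixels."""
        n = len(labels)
        for _ in range(epochs):
            h, y = self.forward(rgb)
            err = y - labels
            dy = err * y * (1.0 - y)                # output-layer delta
            dh = dy * self.w2 * h * (1.0 - h)       # hidden-layer delta
            self.w2 -= lr * np.dot(dy, h) / n
            self.b2 -= lr * dy.mean()
            self.w1 -= lr * (rgb.T @ dh) / n
            self.b1 -= lr * dh.mean()

    def classify(self, rgb, threshold=0.5):
        """Return a boolean fire-color decision per pixel."""
        _, y = self.forward(rgb)
        return y > threshold
```

Retraining for a new camera would then amount to sampling a small set of labeled fire and non-fire pixels from that camera, scaling them to [0, 1], and calling train again.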
2.3. Motion + Color

The following images show the result of passing the image through the motion detection module and then passing those results through the fire-color classifier.

Figure 6. Fire-color classification sans motion detection.

Figure 7. Fire-color classification plus motion detection.

As you can see, the combined results are much more accurate than motion or color alone. Furthermore, by passing only the regions of the image that display movement, significantly fewer pixels have to be fed to the color classifier, resulting in sizable time savings. Chaining these classifiers also reduces the size and number of regions that the dynamic texture classifier has to check.
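One possible way to chain the two stages, reusing the hypothetical motion_mask_stream and FireColorPerceptron helpers sketched above: only the pixels flagged as moving are handed to the color classifier, which is where the time savings come from. The spatial clustering mentioned in Section 2 is omitted here.

```python
import numpy as np

def fire_candidate_mask(frame_rgb, motion_mask, perceptron):
    """Classify only the moving pixels as fire-colored or not and
    return a full-frame boolean mask of fire candidates."""
    candidate = np.zeros(motion_mask.shape, dtype=bool)
    moving = np.flatnonzero(motion_mask)
    if moving.size == 0:
        return candidate
    # Scale the moving pixels' RGB values to [0, 1] and run the color classifier.
    pixels = frame_rgb.reshape(-1, 3)[moving].astype(np.float32) / 255.0
    candidate.flat[moving] = perceptron.classify(pixels)
    return candidate
```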
2.4. Dynamic Texture Analysis

The idea behind dynamic textures is that certain video sequences of moving scenes exhibit specific