Spatio-Temporal Frequency Analysis for Removing Rain and Snow from Videos

Peter Barnum    Takeo Kanade    Srinivasa G. Narasimhan
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA, USA
{pbarnum,tk,srinivas}@cs.cmu.edu

Abstract

Capturing good videos outdoors can be challenging due to harsh lighting, unpredictable scene changes, and, most relevant to this work, dynamic weather. Particulate weather, such as rain and snow, creates complex flickering effects that are irritating to people and confusing to vision algorithms. Although each raindrop or snowflake only affects a small number of pixels, collections of them have predictable global spatio-temporal properties. In this paper, we formulate a model of these global dynamic weather frequencies. To begin, we derive a physical model of raindrops and snowflakes that is used to determine the general shape and brightness of a single streak. This streak model is combined with the statistical properties of rain and snow to determine how they affect the spatio-temporal frequencies of an image sequence. Once detected, these frequencies can then be suppressed. At a small scale, many things appear the same as rain and snow, but by treating them as global phenomena, we achieve better performance than with just a local analysis. We show the effectiveness of removal on a variety of complex video sequences.

1. Introduction

A movie captured on a day with rain or snow will have images covered with bright streaks from moving raindrops or snowflakes. Not only can they annoy or confuse a human viewer, but they also degrade the effectiveness of any computer vision algorithm that depends on small features, such as feature point tracking or object recognition.

Fortunately, the effect of each particle is predictable. Several local removal methods that look at individual pixels or small spatio-temporal blocks have been proposed. But as with the aperture problem of stereo vision, using only local information is problematic. While it is true that rain and snow have a predictable local effect, many other things have a similar appearance, such as panning along a picket fence, viewing a referee's shirt during a football game, or viewing a mailbox during a snowstorm (Figure 1). Even a human viewing a scene with rain would have difficulty pointing out individual streaks, but rather would comment on its general, global properties. In this work, we demonstrate how to use such global properties over space and time to detect and remove rain and snow from videos.

Figure 1. (a) Part of one image from a video sequence with snow, (b) removing the snow with per-pixel temporal median filtering, and (c) removing by looking at the global frequencies. By examining global patterns, the snow can be removed while leaving the rest of the image unchanged.

1.1. Previous work on local methods

The first methods for removing dynamic weather used a temporal median filter for each pixel [12, 18]. This works because in moderate-intensity storms, each pixel is clear more often than corrupted. However, any movement will cause blurring artifacts. Zhang et al. [20] extended this idea by correcting for camera motion, although this is only effective when the video frames can be aligned and there are no moving objects in the scene.

Garg and Nayar [8] suggested modifying camera parameters during video acquisition, based on an explicit statistical model. They suggest using temporal and spatial blurring, either by increasing the exposure time or reducing the depth of field. This removes rain for the same reasons as the per-pixel median filtering, but will also cause unwanted blurring in scenes with moving objects or objects at different depths.
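As a concrete illustration of the per-pixel temporal median baseline discussed above, the following is a minimal sketch in the spirit of [12, 18], not the original authors' code; the function name, the window radius, and the assumption of a static camera are ours.

import numpy as np

def temporal_median_filter(frames: np.ndarray, radius: int = 2) -> np.ndarray:
    """Replace each pixel with the median over a sliding temporal window.

    frames: array of shape (T, H, W) or (T, H, W, C); assumes a static camera.
    radius: half-width of the temporal window (window length = 2*radius + 1).
    """
    num_frames = frames.shape[0]
    out = np.empty_like(frames)
    for t in range(num_frames):
        lo, hi = max(0, t - radius), min(num_frames, t + radius + 1)
        # In moderate-intensity storms each pixel is clear more often than
        # corrupted, so the temporal median suppresses streaks; any scene or
        # camera motion is blurred as a side effect, as noted above.
        out[t] = np.median(frames[lo:hi], axis=0).astype(frames.dtype)
    return out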
In a different paper, Garg and Nayar [7] suggested that each streak can be individually segmented by finding small blocks of pixels in individual streaks that change over space and time in the same way as rain. Searching for individual streaks can theoretically work for dynamic scenes with a moving camera. In practice, unless the rain is seen against a relatively textureless background, it is hard to segment streaks in this way.

1.2. Using global frequency information

Rain and snow change large regions of the image in a consistent and predictable way. In order to determine the influence of rain and snow on a video, we develop a model in frequency space, based on their physical and statistical characteristics.

The dynamics of falling particles are well understood, and it is simple to determine the general properties of the streak that a given raindrop or snowflake will create. The general shape and appearance of a streak can be approximated with a motion-blurred, circular Gaussian. The statistical characteristics of rain and snow have been studied in the atmospheric sciences, and it is possible to predict the expected number and sizes of the streaks as well.

The information of how one streak appears, combined with a prediction of the range of streak sizes, allows us to estimate the appearance of rain and snow in frequency space. This frequency model is fit to the image sequence, from which the rain and snow can be detected and removed.

Sequences with light precipitation or simple motion can be cleared up with many algorithms. To test the robustness of our work, we performed tests on several challenging sequences. Some have several moving objects, others have a cluttered foreground with high-frequency textures, and all of them are taken with a moving camera. We present qualitative results of removal accuracy and demonstrate how the removal increases the performance of feature point tracking.

2. The properties of rain and snow streaks

In this section, we derive the distribution of the shapes and sizes of rain and snow streaks in videos. To determine the expected properties of the streaks in an image, we use the physical and statistical properties of raindrops and snowflakes. Later, in Section 3.1, we present a frequency space analysis.

2.1. Imaging a single streak

Over a camera's integration time, raindrops and snowflakes move significantly and therefore appear as motion-blurred streaks. For a particle of a given size, speed, and distance from the camera, the length and breadth of the corresponding streak can be computed.

For common altitudes and temperatures, a raindrop's speed s can be predicted by a polynomial in its diameter a [6]:

    s(a) = −0.2 + 5.0a − 0.9a² + 0.1a³    (1)

Finding the speed of snowflakes is more difficult [14, 1], because of their complex shape. We find that even if the exact characteristics of the snow
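Equation (1) and the motion-blurred circular Gaussian streak approximation from Section 1.2 can be made concrete with a short sketch. This is an illustrative reconstruction, not the paper's implementation: the pinhole projection of drop size and fall distance to pixels, the vertical fall direction, and all parameter names are assumptions made for this example.

import numpy as np

def raindrop_speed(a_mm: float) -> float:
    """Terminal speed (m/s) of a raindrop of diameter a (mm), per Eq. (1) [6]."""
    a = a_mm
    return -0.2 + 5.0 * a - 0.9 * a**2 + 0.1 * a**3

def render_streak(diameter_mm: float, depth_m: float, exposure_s: float,
                  focal_px: float, patch: int = 64) -> np.ndarray:
    """Render one streak as a circular Gaussian blurred along the fall direction.

    The drop's image width and fall distance are projected with a simple
    pinhole model, size_px = focal_px * size_m / depth_m (an assumption here).
    """
    width_px = focal_px * (diameter_mm * 1e-3) / depth_m            # streak breadth
    length_px = focal_px * raindrop_speed(diameter_mm) * exposure_s / depth_m
    sigma = max(width_px / 2.0, 0.5)

    ys, xs = np.mgrid[0:patch, 0:patch]
    cx = patch / 2.0
    streak = np.zeros((patch, patch))
    # Motion blur: average Gaussians placed along the (assumed vertical) fall
    # path, spreading the drop's brightness over the streak's length.
    n_steps = max(int(np.ceil(length_px)), 1)
    for k in range(n_steps):
        cy = (patch - length_px) / 2.0 + k * length_px / n_steps
        streak += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return streak / n_steps

# Example: a 2 mm drop 5 m from the camera, 1/60 s exposure, 800 px focal length.
streak_img = render_streak(2.0, 5.0, 1.0 / 60.0, 800.0)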

