Surface Ship Detection from Active Sonar Image Data
Daniel Huff
05/05/05
Image from Applied Research Labs, UT

Optical Flow Method
•separate static image and dynamic return sequence via time-domain filtering
•input images must be exactly aligned

[Excerpt from Lane et al., "Robust Tracking of Multiple Objects in Sector-Scan Sonar Image Sequences"]

Fig. 3. Processing stages for moving object detection, motion characterization, and tracking.
Fig. 4. FFT processing to separate moving and static observations in a sonar image sequence.

Model-based tracking algorithms [21]–[23] are well suited for polyhedral and manufactured objects where a wire-frame model exists. In these methods, 3-D polyhedral models of the objects are given. Detection and segmentation of the moving target thus reduces to a problem of recognition, which for sonar observations requires the motion information as a classification feature [3]–[6].

C. Paper Structure

To better deal with the difficulties identified in Section II-A, we have chosen to investigate an intuitively appealing approach to observation detection and tracking using frequency-domain filtering, optical flow, and a delayed-decision tracking tree. Fig. 3 shows a block diagram overviewing the basic stages, which also provides the paper structure.

Initially, raw digitized sonar images are preprocessed using a median filter to remove noise [3]. To identify and distinguish pixels corresponding to observations of moving and static objects, a frequency-domain technique using lowpass and bandpass filtering has proved effective and is described in Section III. Those observations which are identified as moving then have their motion information characterized (described in Section IV), using an optical flow method from [24].
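
The FFT processing of Fig. 4 can be sketched as a per-pixel temporal-frequency split: take the FFT of each pixel's time series, keep only the near-DC bins for the static scene, and the higher bins for moving returns. This is a minimal sketch under assumptions of mine (the function name, the `cutoff_bin` parameter, and the hard lowpass/highpass split are illustrative; the paper's actual filter design may differ):

```python
import numpy as np

def separate_static_dynamic(frames, cutoff_bin=2):
    """Split an aligned sonar image sequence into static and moving parts.

    frames: array of shape (T, H, W), T co-registered frames.
    cutoff_bin: temporal-frequency bins below this count as static.
    Assumes frames are exactly aligned, as the slide above requires.
    """
    spectrum = np.fft.rfft(frames, axis=0)      # FFT along time, per pixel
    static_spec = spectrum.copy()
    static_spec[cutoff_bin:] = 0                # lowpass: near-DC = static scene
    dynamic_spec = spectrum.copy()
    dynamic_spec[:cutoff_bin] = 0               # highpass: moving returns
    static = np.fft.irfft(static_spec, n=frames.shape[0], axis=0)
    dynamic = np.fft.irfft(dynamic_spec, n=frames.shape[0], axis=0)
    return static, dynamic
```

A constant background lands entirely in the static output, while a pixel whose intensity oscillates over the sequence lands in the dynamic output.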
The method is augmented by smoothing to maintain the velocity-gradient constraint in space and time, and by an association stage to produce results for only significant observations in the image.

Section V then describes a relaxation method which probabilistically associates velocity information from observations in contiguous frames to perform the tracking. The optical flow data is used both to constrain the search areas and to provide information for matching observations in consecutive images. The degree of match is measured by a compatibility measure and recorded in a tracking tree. As multiple tracks are kept and cumulative totals are maintained, the system has the ability to revise its decisions in the light of new …

[Lane, Chantler, & Dai 1996]

Clutter Map Processing
Figure from Lo & Ferguson, 2004
•Geometric fading algorithm
•Suppress image features that appeared in previous frames
•Only works if successive images are exactly aligned

Moving Platform Problem: must align images
•Previous work: fixed platform or target
•Current work: moving platform and target
•Inertial & compass data too coarse
•Manual ad-hoc image alignment works*
•Proposal: use coarse navigation data as an initial constraint for automatic image alignment

Previous work: [Lo & Ferguson 2004], [Perry & Guan 2004], [Lane, Chantler, & Dai 1996]
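
The proposal above — coarse navigation data as an initial constraint for automatic alignment — could take the following shape: the nav-predicted shift seeds a small correlation search window, so alignment never strays far from the inertial/compass estimate. This is a sketch under my own assumptions (the function name, the sum-of-squared-differences criterion, and the integer-pixel `np.roll` search are illustrative choices, not a method from the cited work):

```python
import numpy as np

def align_with_nav_prior(ref, img, nav_shift, search=3):
    """Estimate the (row, col) shift that aligns img to ref.

    nav_shift: coarse (row, col) shift predicted from inertial/compass
    data; the search is constrained to +/- `search` pixels around it.
    """
    r0, c0 = nav_shift
    best, best_score = nav_shift, np.inf
    for dr in range(r0 - search, r0 + search + 1):
        for dc in range(c0 - search, c0 + search + 1):
            shifted = np.roll(np.roll(img, dr, axis=0), dc, axis=1)
            score = np.sum((shifted - ref) ** 2)   # sum of squared differences
            if score < best_score:
                best_score, best = score, (dr, dc)
    return best
```

Because the search window is centered on the navigation estimate, even a nav fix that is a pixel or two off still lets the correlation step recover the exact alignment.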
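
The geometric-fading clutter map described in the slides can be sketched as a running geometric average of past frames that is subtracted from each new frame, leaving only newly appearing features. This is a minimal sketch under assumptions of mine (the `fade` parameter and the subtract-and-clip step are illustrative, not Lo & Ferguson's exact formulation), and like the original it assumes exactly aligned successive frames:

```python
import numpy as np

def suppress_clutter(frames, fade=0.8):
    """Suppress features that appeared in previous frames.

    clutter holds a geometrically faded memory of past frames;
    subtracting it from each new frame keeps only new features.
    """
    clutter = np.zeros_like(frames[0], dtype=float)
    out = []
    for f in frames:
        out.append(np.clip(f - clutter, 0.0, None))   # keep only new features
        clutter = fade * clutter + (1.0 - fade) * f   # geometric fading update
    return out
```

A feature present in every frame fades from the output as the clutter map absorbs it, while a feature appearing for the first time passes through at full strength.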