BACKGROUND MODELING AND SUBTRACTION BY CODEBOOK CONSTRUCTION

Kyungnam Kim(1), Thanarat H. Chalidabhongse(2), David Harwood(1), Larry Davis(1)
(1) Computer Vision Lab, University of Maryland, College Park, MD 20742, USA
(2) Faculty of Information Technology, King Mongkut's Institute of Technology, Thailand
(1) {knkim|harwood|[email protected]}, (2) {[email protected]}

ABSTRACT

We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.

1. INTRODUCTION

In visual surveillance, a common approach for discriminating moving objects from the background is detection by background subtraction. Some background models assume that the series of intensity values of a pixel can be modeled by a single unimodal distribution. This basic model is used in [1, 2]. However, a single-mode model cannot handle multiple backgrounds, such as waving trees. The generalized mixture of Gaussians (MOG) has been used to model complex, non-static backgrounds [3, 4]. The MOG has some disadvantages. A background with fast variations cannot be modeled accurately with just a few Gaussians, causing problems for sensitive detection. To overcome these problems, a non-parametric technique [5] was developed for estimating background probabilities at each pixel from many recent samples over time using kernel density estimation. These pixel-based techniques assume that the time series of observations is independent at each pixel. In contrast, some researchers employ a region- or frame-based approach by segmenting an image into regions or by refining low-level classification obtained at the pixel level [4, 6].

Our codebook (CB) background subtraction algorithm was intended to sample values over long times, without making parametric assumptions. The key features of our algorithm are the following: (1) resistance to artifacts of acquisition, digitization and compression; (2) capability of coping with illumination changes; (3) adaptive and compressed background models that can capture structural background motion over a long period of time under limited memory; (4) unconstrained training that allows moving foreground objects in the scene during the initial training period.

In Sec. 2, we describe the background modeling method along with the color metric and foreground detection. We show, in Sec. 3, that the method is suitable for both stationary and moving backgrounds and is robust with respect to image quality. Conclusions and future work are presented in Sec. 4.

2. BACKGROUND MODELING

The CB algorithm adopts a quantization/clustering technique [7] to construct a background model. Samples at each pixel are clustered into a set of codewords. The background is encoded on a pixel-by-pixel basis.

2.1. Construction of the codebook

Let X be a training sequence for a single pixel consisting of N RGB-vectors: X = {x_1, x_2, ..., x_N}. Let C = {c_1, c_2, ..., c_L} represent the codebook for the pixel, consisting of L codewords. Each pixel has a different codebook size based on its sample variation. Each codeword c_i, i = 1...L, consists of an RGB vector v_i = (R̄_i, Ḡ_i, B̄_i) and a 6-tuple aux_i = ⟨Ǐ_i, Î_i, f_i, λ_i, p_i, q_i⟩. The tuple aux_i contains intensity (brightness) values and temporal variables described below.

Ǐ, Î : the min and max brightness, respectively, that the codeword accepted;
f : the frequency with which the codeword has occurred;
λ : the maximum negative run-length (MNRL), defined as the longest interval during the training period in which the codeword has NOT recurred;
p, q : the first and last access times, respectively, at which the codeword has occurred.

In the training period, each value x_t sampled at time t is compared to the current codebook to determine which codeword c_m (if any) it matches (m is the matching codeword's index). We use the matched codeword as the sample's encoding approximation. To determine which codeword will be the best match, we employ a color distortion measure and brightness bounds. The detailed algorithm is given below.

Algorithm for Codebook Construction

I. L ← 0 (← means assignment), C ← ∅ (the empty set)
II. for t = 1 to N do
  i. x_t = (R, G, B), I ← R + G + B
  ii. Find the codeword c_m in C = {c_i | 1 ≤ i ≤ L} matching x_t based on two conditions (a) and (b):
    (a) colordist(x_t, v_m) ≤ ε_1
    (b) brightness(I, ⟨Ǐ_m, Î_m⟩) = true
  iii. If C = ∅ or there is no match, then L ← L + 1. Create a new codeword c_L by setting
    • v_L ← (R, G, B)
    • aux_L ← ⟨I, I, 1, t − 1, t, t⟩.
  iv. Otherwise, update the matched codeword c_m, consisting of v_m = (R̄_m, Ḡ_m, B̄_m) and aux_m = ⟨Ǐ_m, Î_m, f_m, λ_m, p_m, q_m⟩, by setting
    • v_m ← ((f_m R̄_m + R)/(f_m + 1), (f_m Ḡ_m + G)/(f_m + 1), (f_m B̄_m + B)/(f_m + 1))
    • aux_m ← ⟨min{I, Ǐ_m}, max{I, Î_m}, f_m + 1, max{λ_m, t − q_m}, p_m, t⟩.
  end for
III. For each codeword c_i, i = 1...L, wrap around λ_i by setting λ_i ← max{λ_i, N − q_i + p_i − 1}.

The two conditions (a) and (b), detailed in Eqs. 2 and 3 later, are satisfied when the pure colors of x_t and c_m are close enough and the brightness of x_t lies between the acceptable brightness bounds of c_m. Instead of finding the nearest neighbor, we just find the first codeword satisfying these two conditions. ε_1 is the sampling threshold (bandwidth).

Note that reordering the training set almost always results in codebooks with the same detection capacity. Reordering the training set would require maintaining all or a large part of it in memory. Experiments show that one-pass training is sufficient. Retraining (i.e., iteration of the codebook construction algorithm) does not affect detection significantly.
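To make the construction loop concrete, the following is a minimal Python sketch for a single pixel. Since Eqs. 2 and 3 appear later in the paper and are not part of this excerpt, the colordist and brightness implementations below are assumed formulations (color distortion as distance to the line through the origin and the codeword color; brightness bounds scaled from the observed range), and the names construct_codebook, EPS1, ALPHA, and BETA are illustrative, not the paper's.

```python
import numpy as np

# Illustrative parameters (not taken from this excerpt).
EPS1 = 10.0    # sampling threshold (bandwidth) epsilon_1
ALPHA = 0.5    # brightness lower-bound ratio, assumed 0 < ALPHA < 1
BETA = 1.25    # brightness upper-bound ratio, assumed BETA > 1

def colordist(x, v):
    """Assumed color distortion: distance from sample x to the line through
    the origin and the codeword color v, so that brightness changes along v
    do not count as color change."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    x2 = float(x @ x)
    p2 = float(x @ v) ** 2 / max(float(v @ v), 1e-12)  # squared projection onto v
    return np.sqrt(max(x2 - p2, 0.0))

def brightness(I, I_min, I_max):
    """Assumed brightness test: accept I if it lies within bounds derived
    from the codeword's observed brightness range <I_min, I_max>."""
    low = ALPHA * I_max
    high = min(BETA * I_max, I_min / ALPHA)
    return low <= I <= high

def construct_codebook(X):
    """One-pass codebook construction for one pixel (steps I-III above).
    X is the training sequence x_1..x_N of (R, G, B) samples.
    Each codeword is stored as [v, I_min, I_max, f, lam, p, q]."""
    codebook = []
    for t, (R, G, B) in enumerate(X, start=1):   # II.
        x, I = (R, G, B), R + G + B              # II.i
        match = None
        for cw in codebook:                      # II.ii: first match, not nearest
            if colordist(x, cw[0]) <= EPS1 and brightness(I, cw[1], cw[2]):
                match = cw
                break
        if match is None:                        # II.iii: create a new codeword
            codebook.append([np.array(x, dtype=float), I, I, 1, t - 1, t, t])
        else:                                    # II.iv: update the matched codeword
            v, I_min, I_max, f, lam, p, q = match
            match[0] = (f * v + np.asarray(x, dtype=float)) / (f + 1)
            match[1] = min(I, I_min)
            match[2] = max(I, I_max)
            match[3] = f + 1
            match[4] = max(lam, t - q)           # extend the negative run-length
            match[6] = t                         # last access time q
    N = len(X)
    for cw in codebook:                          # III: wrap lambda around
        cw[4] = max(cw[4], N - cw[6] + cw[5] - 1)
    return codebook
```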
2.2. Maximum Negative Run-Length

We refer to the codebook obtained from the previous step as the fat codebook. In the temporal filtering step, we refine the fat codebook by separating the codewords that might contain moving foreground objects from the true background codewords, thus allowing moving foreground objects during the initial training period. The true background, which includes both static pixels and moving background pixels, usually is quasi-periodic (values recur in a bounded period). This motivates the temporal criterion: a codeword belonging to the true background should have a small MNRL λ, and codewords whose λ exceeds a threshold (typically half the training period N) are eliminated from the codebook.
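A minimal sketch of this filtering step, continuing the hypothetical construct_codebook code above; the default threshold of half the training period is an assumption consistent with the quasi-periodicity argument, since a background value that recurs within a bounded period keeps its MNRL small.

```python
def temporal_filter(codebook, N, t_m=None):
    """Refine the fat codebook: keep only codewords whose MNRL (lam,
    index 4 in the sketch above) is at most t_m. Quasi-periodic
    background values recur within a bounded period, so their MNRL
    stays small; transient foreground codewords leave long gaps."""
    if t_m is None:
        t_m = N // 2   # assumed default: half the training period
    return [cw for cw in codebook if cw[4] <= t_m]

# Example: two alternating background colors plus a brief foreground object.
X = [(100, 120, 90), (30, 60, 120)] * 50 + [(200, 30, 30)] * 3
fat = construct_codebook(X)
background = temporal_filter(fat, N=len(X))
# The two recurring colors survive (small MNRL); the 3-frame foreground
# codeword, absent for the first 100 frames, is filtered out.
```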

