Clemson ECE 847 - Tracking Under Low-light Conditions Using Background Subtraction

Tracking Under Low-light Conditions Using Background Subtraction

Matthew Bennink
Clemson University
Clemson, South Carolina

Contents

1 Introduction
2 Methods
2.1 Camera Calibration
2.2 Background Subtraction
2.3 Algorithm
3 Experimental Results
3.1 Full Uniform Light Source
3.2 Single Light Source
3.3 No Light Source
4 Conclusion

Abstract

A low-light tracking system was developed using background subtraction. Results are given for a full uniform light source, a single light source, and no light source. The results show that although the system performs well with full uniform light, it is easily confused by shadows given a single light source, and is almost useless with no light, even though low-light cameras are used. The results are discussed along with the methods used to obtain them, and some possible solutions are presented to improve the tracking system for low-light conditions.

1 Introduction

Tracking has a variety of applications. For example, a security team may want to track suspicious persons, or a manufacturing plant may want to follow a product through the assembly process. One common method used in video tracking is background subtraction. Abbott and Williams used background subtraction with connected-components analysis to segment video [1], Davis and Sharma used background subtraction with thermal cameras for tracking purposes [2], and Hoover used it with regular video feeds, also for tracking [3]. In this paper, we follow an algorithm very similar to Hoover's, but we use low-light cameras in place of regular cameras. In doing so, we hope to track objects with little or no light present. These cameras carry a small number of LEDs around the lens to provide ambient illumination without producing any light visible to the naked eye.

2 Methods

Before any code is written, it is necessary to set up the tracking area and the cameras. In our case, we used masking tape to mark out a rectangle approximately 4 m long by 3 m wide. The cameras were positioned above the tracking area, facing its center. Once the initial setup is complete, the cameras are calibrated. Using the calibration matrices and background subtraction, pixels are highlighted where the tracking system believes an object exists. We first discuss camera calibration and background subtraction, then present the algorithm.

2.1 Camera Calibration

Camera calibration is necessary to map image coordinates to real-world coordinates. A brief overview of the calibration matrices is presented, followed by a discussion of the image calibration tool we used.

Calibration requires two sets of information: the intrinsic values specific to the camera, and the extrinsic values dependent on the world geometry. The intrinsic values include the focal length of the camera, the aspect ratio, the principal point, and the skew. Rotation and translation make up the extrinsic values. The reduced equation mapping world coordinates (X, Y, Z) to image coordinates (x, y) is given below, where f is the focal length and (u_0, v_0) is the principal point. We assume that the skew is zero and the aspect ratio is 1:1.

\[
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\left(
\begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
+
\begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}
\right)
\]
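As an illustration, here is a minimal sketch of this projection in Python with NumPy (our choice of language for examples; the paper does not specify one). It assumes the intrinsic matrix K and the extrinsics R, T have already been estimated, and the homogeneous result is divided through by its third component to obtain pixel coordinates. All numeric values below are placeholders, not measurements from the paper.

import numpy as np

# Intrinsics: focal length f and principal point (u0, v0),
# with zero skew and a 1:1 aspect ratio as assumed in the text.
f, u0, v0 = 800.0, 320.0, 240.0          # placeholder values
K = np.array([[f, 0.0, u0],
              [0.0, f, v0],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation R and translation T (placeholder pose).
R = np.eye(3)
T = np.array([0.0, 0.0, 5.0])

def world_to_image(K, R, T, Xw):
    """Project a 3-D world point Xw = (X, Y, Z) to pixel coordinates."""
    p = K @ (R @ Xw + T)                 # homogeneous image coordinates
    return p[:2] / p[2]                  # normalize by the third component

print(world_to_image(K, R, T, np.array([1.0, 2.0, 0.0])))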
Camera calibration is not trivial, but several tools are available to calibrate cameras. We chose a third-party Matlab toolbox [4]. The toolbox, developed by Jean-Yves Bouguet, computes both the intrinsic and the extrinsic values of the camera. Calibration requires a calibration image, usually a black-and-white chessboard of sorts. We constructed a 3 x 3 chessboard using black foam board and white 11" x 8.5" printer paper. Images were captured with the board tilted at various angles, and the calibration software uses these images to produce the intrinsic values. The board was then placed at the origin of our tracking area. The origin may be placed anywhere, but for ease of computation we chose a corner, allowing only positive world coordinates. A single image was then captured and used to determine the extrinsic values, rotation and translation. Because rotation and translation are extrinsic, these matrices must be updated periodically: they change as the room is used, for example if a camera is accidentally shifted.

2.2 Background Subtraction

Background subtraction, on the other hand, is trivial. With grey-level values, a difference map is produced by computing the absolute difference between each pixel in the current image and the corresponding pixel in the background image. This difference image is then thresholded to suppress any difference values below some fixed threshold. Background subtraction is only effective when the foreground objects differ from the background. For example, black objects tracked on a black surface will not show up in a difference image because their grey-level values are too similar. To produce good results, it is recommended that the images be pre-processed beforehand to remove noise and to widen the range of grey-level intensities.
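The difference-and-threshold step can be sketched in a few lines of NumPy. The function name and the threshold value here are our own placeholders, not the paper's; any pre-processing (smoothing, histogram stretching) would be applied to both images beforehand.

import numpy as np

def difference_map(current, background, threshold=25):
    """Absolute grey-level difference, thresholded to a binary map.

    current, background: 2-D uint8 arrays of grey-level values.
    Returns 1 where the images differ by at least `threshold`, else 0.
    """
    # Cast to a wider type so the subtraction cannot wrap around.
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    return (diff >= threshold).astype(np.uint8)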
2.3 Algorithm

Now that we have provided some background, here is the algorithm used. First, we calibrate the cameras. Second, we create a mapping of image coordinates to world coordinates; this speeds up the computation tremendously. Background images are stored, and mask images are produced so that only the tracking area itself is tracked. We then loop over time. At each iteration, all occupancy-map pixels are set to 1, indicating that none of the floor can be seen. Then, for each camera, a difference image is computed between the current image and the background image. Wherever the difference is less than some threshold, the floor can be seen, and a 0 is placed in the occupancy map. After looping through all the cameras, the occupancy map is displayed. A value of 0 indicates that at least one camera can see the floor; a value of 1 indicates that no camera can see the floor.

Pseudocode (a Python sketch follows the listing):

Calibrate the cameras
Create a lookup table of image coordinates to world coordinates
Capture background images of empty tracking area
Create mask image (1 is trackable, 0 is untrackable)
Loop over time
    Set Occupancy Map to 1 for all pixels
    For each camera
        Compute difference of current image with background image
        If the difference is within desired threshold
            Set Occupancy Map to 0 at that location
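To make the per-frame loop concrete, here is a minimal Python sketch of the occupancy update. It assumes (our interpretation, not stated in the paper) that the lookup table stores, for each image pixel, the occupancy-map cell of the floor point that pixel images, and that frames arrive as grey-level NumPy arrays. All names are ours.

import numpy as np

def update_occupancy(frames, backgrounds, lookups, masks,
                     occupancy_shape, threshold=25):
    """One iteration of the tracking loop over all cameras.

    frames, backgrounds: per-camera grey-level images (uint8 arrays).
    lookups: per-camera (H, W, 2) integer arrays giving the
             occupancy-map (row, col) for every image pixel.
    masks:   per-camera binary arrays, 1 where the pixel is trackable.
    Returns an occupancy map: 0 where at least one camera sees the
    floor, 1 where no camera does.
    """
    # Start by assuming no floor is visible anywhere.
    occupancy = np.ones(occupancy_shape, dtype=np.uint8)
    for frame, bg, lut, mask in zip(frames, backgrounds, lookups, masks):
        diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
        # Trackable pixels that match the background: floor is visible.
        visible = (diff < threshold) & (mask == 1)
        rows = lut[..., 0][visible]
        cols = lut[..., 1][visible]
        occupancy[rows, cols] = 0
    return occupancy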
3 Experimental Results

The results varied across the test cases. With full uniform light, we achieved fairly good results. With a single light source, the results were not very good.