CSE486, Penn State
Robert Collins

Lecture 28: Intro to Tracking
Some overlap with T&V Section 8.4.2 and Appendix A.8.

Recall: Blob Merge/Split

When two objects pass close to each other, they are detected as a single blob. Often, one object will become occluded by the other one. One of the challenging problems is to maintain correct labeling of each object after they split again. (Figure: two trajectories that merge into a single blob, pass through an occlusion, and split apart again.)

Data Association

More generally, we seek to match a set of blobs across frames, to maintain continuity of identity and generate trajectories.

Data Association Scenarios

Multi-frame matching: matching the observations in a new frame to a set of tracked trajectories. How do we determine which observations to add to which track?

Tracking Matching

How to determine which observations to add to which track?
- Intuition: predict the next position along each track.
- Intuition: a match should be close to the predicted position.
- Intuition: some matches are highly unlikely.

Gating

A method for pruning matches that are geometrically unlikely from the start; it allows us to decompose matching into smaller subproblems. Each track gets a gating region around its predicted position, and only observations falling inside that region are considered as candidate matches for the track.

Filtering Framework

Discrete-time state-space filtering: we want to recursively estimate the current state every time a measurement is received. Two-step approach:
1) Prediction: propagate the state pdf forward in time, taking process noise into account (translate, deform, and spread the pdf).
2) Update: use Bayes' theorem to modify the prediction pdf based on the current measurement.

Prediction

Kalman filtering is a common approach: the system model and measurement model are linear, the noise is zero-mean Gaussian, and the pdfs are all Gaussian.
1) System model: x_k = F_k x_{k-1} + v_k, with process noise p(v_k) = N(v_k | 0, Q_k)
2) Measurement model: z_k = H_k x_k + n_k, with measurement noise p(n_k) = N(n_k | 0, R_k)
More detail is found in T&V Section 8.4.2 and Appendix A.8.

Kalman Filter

All pdfs are then Gaussian. (Note: all marginals of a Gaussian are Gaussian.)
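As a concrete sketch of the two-step recursion, here is a minimal numpy implementation of the standard linear Kalman predict/update. The constant-velocity state [x, y, vx, vy] and the particular F, H, Q, R values below are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Prediction step: propagate the Gaussian state pdf forward in time."""
    x_pred = F @ x                    # predicted state mean
    P_pred = F @ P @ F.T + Q          # predicted covariance, spread by process noise
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Update step: correct the prediction using the current measurement z."""
    y = z - H @ x_pred                      # innovation (measurement residual)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative constant-velocity model in 2D: state = [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = 0.1 * np.eye(4)                          # process noise covariance Q_k
R = 1.0 * np.eye(2)                          # measurement noise covariance R_k

x, P = np.array([0., 0., 1., 1.]), np.eye(4)
x_pred, P_pred = kalman_predict(x, P, F, Q)
x, P = kalman_update(x_pred, P_pred, np.array([1.2, 0.9]), H, R)
```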
Example

(Figure: the predicted Gaussian covariance induces an ellipsoidal gating region around the predicted position.)

Simpler Prediction/Gating

Constant position: predict the new position to be the current position p_k, with a gating region of radius r given by a bound on the maximum interframe motion.
Three-frame constant velocity: predict p_{k+1} = p_k + (p_k - p_{k-1}). Typically the gating region can then be smaller.

Aside: Camera Motion

Hypothesis: a constant velocity target motion model is adequate provided we first compensate for the effects of any background camera motion.

Camera Motion Estimation

Approach: estimate sparse optic flow using the Lucas-Kanade algorithm (KLT), then fit a parametric model (affine) of the scene image motion. Note: this offers a low computational cost alternative to image warping and frame differencing approaches. It is used for motion prediction and for zoom detection.

Target Motion Estimation

Let P_f denote the target position in frame f, and T_{f,g} the camera motion from frame f to frame g. The approach is a constant velocity estimate, after compensating for camera motion:

    P_t = T_{t-1,t} * [ P_{t-1} + (P_{t-1} - T_{t-2,t-1} * P_{t-2}) ]

That is, map P_{t-2} into the coordinates of frame t-1, form the velocity estimate there, add it to P_{t-1}, and map the predicted position into frame t.

Global Nearest Neighbor (GNN)

Evaluate each observation in a track's gating region, and choose the "best" one to incorporate into the track. Let a_{1j} be the score for matching observation j to track 1. The score could be based on Euclidean or Mahalanobis distance to the predicted location (e.g. exp{-d^2}), or on similarity of appearance (e.g. a template correlation score).

Data Association

We have been talking as if our objects are points, which they are if we are tracking corner features or radar blips. But our objects are blobs: they are an image region, and have an area. A constant velocity model assumes V(t) = V(t+1); we map the object region forward in time to predict a new region, not just a point.

Data Association

Determining the correspondence of blobs across frames is based on feature similarity between blobs. Commonly used features: location, size/shape, velocity, and appearance. For example, location, size, and shape similarity can be measured by bounding box overlap:

    score = 2 * area(A ∩ B) / (area(A) + area(B))

where A is the bounding box at time t and B is the bounding box at time t+1.
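A minimal sketch of this overlap score, assuming boxes are given as (x0, y0, x1, y1) corner tuples (an illustrative convention, not specified on the slides):

```python
def bbox_overlap_score(A, B):
    """Overlap between two axis-aligned boxes (x0, y0, x1, y1):
    2 * area(A ∩ B) / (area(A) + area(B)).  Returns a value in [0, 1]."""
    # Intersection rectangle (degenerates to zero area if boxes are disjoint)
    ix0, iy0 = max(A[0], B[0]), max(A[1], B[1])
    ix1, iy1 = min(A[2], B[2]), min(A[3], B[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_A = (A[2] - A[0]) * (A[3] - A[1])
    area_B = (B[2] - B[0]) * (B[3] - B[1])
    return 2.0 * inter / (area_A + area_B)

# Identical boxes score 1.0, disjoint boxes score 0.0.
print(bbox_overlap_score((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.5
```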
Appearance Information

Correlation of image templates between frames is an obvious choice. The loop: extract blobs, perform data association via normalized correlation, then update the appearance template of each blob.

Appearance via Color Histograms

A color distribution can be represented as a 1D histogram normalized to have unit weight. Discretize each 8-bit channel down to nbits:

    R' = R >> (8 - nbits)
    G' = G >> (8 - nbits)
    B' = B >> (8 - nbits)

The total (joint) histogram size is (2^nbits)^3. For example, a 4-bit encoding of the R, G, and B channels yields a histogram of size 16*16*16 = 4096.

Smaller Color Histograms

Histogram information can be much, much smaller if we are willing to accept a loss in color resolvability: keep only the marginal R, G, and B distributions instead of the joint histogram. The total histogram size is then 3 * 2^nbits; for example, a 4-bit encoding of the R, G, and B channels yields a histogram of size 3*16 = 48.

Color Histogram Example

(Figure: an example image with its red, green, and blue marginal histograms.)

Comparing Color Distributions

Given an n-bucket model histogram {m_i | i = 1, ..., n} and a data histogram {d_i | i = 1, ..., n}, we follow Comaniciu, Ramesh and Meer in using the distance function

    D(m, d) = sqrt( 1 - sum_{i=1}^{n} sqrt(m_i * d_i) )

Why?
1) It shares optimality properties with the notion of Bayes error.
2) It imposes a metric structure.
3) It is relatively invariant to object scale, since the histograms are normalized to unit weight.
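A minimal sketch tying the quantized joint RGB histogram to the distance function above. The helper names and the nbits=4 default are illustrative assumptions; the input is assumed to be an H x W x 3 array of 8-bit RGB values.

```python
import numpy as np

def color_histogram(img, nbits=4):
    """Joint RGB histogram with 2^nbits bins per channel, normalized to unit weight."""
    shift = 8 - nbits
    q = img.astype(np.uint8) >> shift            # quantize each channel to nbits
    # Flatten the (R', G', B') triple into a single bucket index
    idx = (q[..., 0].astype(int) << (2 * nbits)) | \
          (q[..., 1].astype(int) << nbits) | q[..., 2].astype(int)
    hist = np.bincount(idx.ravel(), minlength=(1 << nbits) ** 3).astype(float)
    return hist / hist.sum()                     # normalize to unit weight

def histogram_distance(m, d):
    """Comaniciu-Ramesh-Meer distance: sqrt(1 - sum_i sqrt(m_i * d_i))."""
    bc = np.sum(np.sqrt(m * d))                  # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))           # clamp for numerical safety

# Usage: identical regions give distance ~0; dissimilar ones approach 1.
a = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
print(histogram_distance(color_histogram(a), color_histogram(a)))  # ~0.0
```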

