Rapid object detection
Notes, CNS/CS/EE 148, 15 May 2003
Anelia Angelova

Attention
Bottom-up attention (Itti, Koch, Niebur, 1998): use simple features to quickly discard large 'uninteresting' regions.

Simple features (Viola, Jones)
- Example features: weak learners
- Fast evaluation of features: the Integral Image (code sketch below)
- The resulting features are too simple on their own

Combine the features into a complex classifier
• AdaBoost
  - linear combination of features
  - greedy selection
• Cascade of classifiers
  - take only high-confidence responses of features
  - combine features in conjunction

AdaBoost (Freund, Schapire)
$T = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ (training set)
$g : \mathbb{R}^D \to \{-1, +1\}$, $G = \{g_\alpha\}$ (hypothesis space)
$\tilde g(x) = \sum_{i=1}^{M} \alpha_i g_i(x)$ (aggregated hypothesis)
$\hat g(x) = \mathrm{sign}(\tilde g(x))$ (decision)

Algorithm (code sketch below):
$w_0(n) = 1/N$
for $i = 1$ to $M$:
  find the hypothesis $g_i \in G$ with minimum error $\epsilon_i$:
    $\epsilon_i = \min_\alpha \sum_{n=1}^{N} w_{i-1}(n)\,\frac{|y_n g_\alpha(x_n) - 1|}{2}$
  $w_i(n) = w_{i-1}(n)\,\frac{1-\epsilon_i}{\epsilon_i}$ if $g_i(x_n)\,y_n = -1$
  $w_i(n) = w_i(n) \big/ \sum_{k=1}^{N} w_i(k)$ (renormalize)
  $\alpha_i = \log\frac{1-\epsilon_i}{\epsilon_i}$
Output hypothesis: $\hat g(x) = \mathrm{sign}\big(\sum_{i=1}^{M} \alpha_i g_i(x)\big)$
(Note that $|y_n g_\alpha(x_n) - 1|/2$ is $1$ when $g_\alpha(x_n) \neq y_n$ and $0$ otherwise, so $\epsilon_i$ is simply the weighted misclassification rate.)

AdaBoost as gradient descent (Mason et al.)
$T = \{(x_n, y_n)\}_{n=1}^{N}$, $G = \{g_\alpha\}$
Assume that before step $m$ we have already selected $G(x) = \sum_{i=1}^{m-1} \alpha_i g_i(x)$.
We want to select a new function $g_m \in G$ to add to the linear combination, i.e. $G(x) + \alpha g_m(x)$. According to what criterion?
Define a cost functional over $\mathrm{lin}(G)$:
  $C(F(x)) = \frac{1}{N}\sum_{n=1}^{N} c(F(x_n))$, where $c(\cdot)$ is a cost function.
Finally we need to find
  $g_m = \arg\min_{g \in G} C(G(x) + \alpha g(x))$.
How to do that? We need a direction (a function) in which our cost decreases most: gradient descent!
Define an inner product space with $\langle F, G \rangle = \frac{1}{N}\sum_{n=1}^{N} F(x_n) G(x_n)$. Then for $\alpha > 0$,
  $C(F(x) + \alpha f(x)) \approx C(F(x)) + \langle \nabla C(F(x)), \alpha f(x) \rangle$,
so
  $\min C(F(x) + \alpha f(x)) \Leftrightarrow \min \langle \nabla C(F(x)), \alpha f(x) \rangle$.

Concrete example: a cost of the margin, $C(\cdot) = C(yG(x))$.
  $\min \langle \nabla C(yG(x)), g_m(x) \rangle = \min \frac{1}{N}\sum_{n=1}^{N} c'(y_n G(x_n))\, y_n g_m(x_n)$
  $\Leftrightarrow \max \frac{1}{Z}\sum_{n=1}^{N} c'(y_n G(x_n))\, y_n g_m(x_n)$, where $Z = \sum_{n=1}^{N} c'(y_n G(x_n))$; note $Z < 0$, which flips min into max.
Denote $D_n = c'(y_n G(x_n))/Z$. Then
  $\max \sum_{n=1}^{N} D_n y_n g_m(x_n) = \max\Big[\sum_{n:\,y_n = g_m(x_n)} D_n - \sum_{n:\,y_n \neq g_m(x_n)} D_n\Big] = \sum_{\text{right}} D_n - \sum_{\text{wrong}} D_n = 1 - 2\sum_{\text{wrong}} D_n$
  $\Leftrightarrow \min \sum_{\text{wrong}} D_n$
⇔ minimum error over the examples if they are taken with weights $D_n$!

Finally, the gradient descent view:
- finds the next best function in the direction of the gradient
- prescribes how to assign weights over the examples
- gives a way to compute the weights at each iteration
⇒ finding the best function is equivalent to finding the function with minimum error with respect to those weights!

Assume $g_m$ is found. How to find $\alpha$? Line search:
  $\frac{d\,C(G(x) + \alpha g_m(x))}{d\alpha} = 0$
For AdaBoost, $c(x) = e^{-x}$:
  $\sum_{n=1}^{N} e^{-y_n G(x_n) - \alpha y_n g_m(x_n)}\, y_n g_m(x_n) = 0$
  $\sum_{\text{right}} e^{-y_n G(x_n)}\, e^{-\alpha} = \sum_{\text{wrong}} e^{-y_n G(x_n)}\, e^{\alpha}$
  $\alpha = \frac{1}{2}\ln\frac{\sum_{\text{right}} D_n}{\sum_{\text{wrong}} D_n} = \frac{1}{2}\log\frac{1-\epsilon_m}{\epsilon_m}$
(code sketch below)

Cascade of Classifiers (Viola, Jones)
At each stage:
  - if nonface: declare nonface and exit
  - if face: evaluate the next stage
(code sketch below)

Rapid object detection, Viola and Jones: Outline
* Define a hypothesis space
  - overcomplete basis of simple features
  - image preprocessed for speed optimization: the Integral Image
* Algorithm: AdaBoost
  - iteratively selects the best weak learner
* Optimization: cascade of AdaBoost learners
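Code sketches
The sketches below are not part of the original slides; they are minimal Python/NumPy illustrations of the pieces above, and all function and variable names (integral_image, rect_sum, adaboost, cascade_classify, ...) are chosen for illustration. First, the Integral Image: it turns any rectangle sum into four array lookups, which is what makes the simple rectangle features cheap to evaluate at every scale and position.

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] = sum of img[0:y, 0:x].

    Padded with a leading row and column of zeros so rectangle
    sums at the image border need no special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum over the h-by-w rectangle with top-left corner (y, x),
    computed from four lookups in the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """A two-rectangle (Haar-like) feature: difference of the sums
    over two horizontally adjacent rectangles of equal size."""
    return rect_sum(ii, y, x, h, w) - rect_sum(ii, y, x + w, h, w)
```

The sketch computes the two sums independently for clarity; adjacent rectangles share corners, which is why the paper can evaluate such features with even fewer lookups.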
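Next, the AdaBoost loop from the slide, translated nearly line for line. A sketch assuming the weak learners are supplied as a list of callables returning predictions in $\{-1, +1\}$; it uses the slide's multiplicative update (misclassified weights scaled by $(1-\epsilon_i)/\epsilon_i$, then renormalized) and $\alpha_i = \log((1-\epsilon_i)/\epsilon_i)$:

```python
import numpy as np

def adaboost(X, y, weak_learners, M):
    """AdaBoost as on the slide.

    X: (N, D) array of examples; y: (N,) labels in {-1, +1};
    weak_learners: list of callables h(X) -> (N,) predictions in {-1, +1};
    M: number of boosting rounds. Assumes each selected learner
    achieves 0 < eps < 1/2. Returns a list of (alpha_i, g_i) pairs."""
    N = len(y)
    w = np.full(N, 1.0 / N)                      # w_0(n) = 1/N
    ensemble = []
    for _ in range(M):
        # Weighted error; equals the slide's sum of w * |y*g - 1| / 2.
        errors = [np.sum(w * (h(X) != y)) for h in weak_learners]
        i = int(np.argmin(errors))               # greedy selection
        g, eps = weak_learners[i], errors[i]
        alpha = np.log((1 - eps) / eps)
        # Scale up the weights of misclassified examples, renormalize.
        w = np.where(g(X) * y == -1, w * (1 - eps) / eps, w)
        w /= w.sum()
        ensemble.append((alpha, g))
    return ensemble

def predict(ensemble, X):
    """Output hypothesis: sign of the alpha-weighted vote."""
    return np.sign(sum(a * g(X) for a, g in ensemble))
```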
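The gradient-descent view becomes concrete for the exponential cost $c(x) = e^{-x}$: since $c'(x) = -e^{-x}$ and $Z < 0$, the negative signs cancel and $D_n$ reduces to the normalized $e^{-y_n G(x_n)}$, i.e. exactly AdaBoost's example weights, while the line search gives the closed form for $\alpha$ above. A small numeric sketch (the margins and agreement values are made-up data for illustration):

```python
import numpy as np

# Margins y_n * G(x_n) of the current ensemble on the training set.
margins = np.array([1.3, -0.4, 0.8, 0.2, -1.1])
# Candidate weak learner g_m, recorded as correct (+1) / wrong (-1).
agreement = np.array([1, 1, -1, 1, 1])

# Weights from the gradient view: D_n = c'(y_n G(x_n)) / Z with
# c(x) = e^{-x}; the minus signs in c' and Z cancel, leaving
# normalized exp(-margin), the usual AdaBoost weights.
D = np.exp(-margins)
D /= D.sum()

# Closed-form line search: alpha = 1/2 ln(sum_right D / sum_wrong D).
right = D[agreement == 1].sum()
wrong = D[agreement == -1].sum()
alpha = 0.5 * np.log(right / wrong)
print(alpha)  # equals 0.5 * log((1 - eps_m) / eps_m) with eps_m = wrong
```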
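Finally, the cascade: a window is declared a face only if it passes every stage, and the vast majority of windows exit after the first cheap stages. A sketch assuming each stage is a thresholded AdaBoost ensemble in the format returned by adaboost above (the stage interface is an assumption, not the paper's exact design):

```python
def cascade_classify(window, stages):
    """stages: list of (ensemble, threshold) pairs, cheapest first.

    Each stage evaluates its AdaBoost score; a score below the
    stage threshold means nonface and the window exits immediately."""
    for ensemble, threshold in stages:
        score = sum(alpha * g(window) for alpha, g in ensemble)
        if score < threshold:
            return False      # declare nonface, exit
    return True               # survived all stages: declare face
```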
References:
Itti, Koch, Niebur, 'A Model of Saliency-Based Visual Attention for Rapid Scene Analysis', IEEE Trans. PAMI, 20(11):1254-1259, 1998.
Freund, Schapire, 'A short introduction to boosting', Int. Joint Conf. on Artificial Intelligence, 1999.
Mason, Baxter, Bartlett, Frean, 'Boosting algorithms as gradient descent in function space', 1999.
Viola, Jones, 'Rapid object detection using a boosted cascade of simple features', CVPR 2001.
