UH ECE 4371 - Multihypothesis Sequential Probability Ratio Tests

Multihypothesis Sequential Probability Ratio Tests
Presented by: Yi (Max) Huang
Advisor: Prof. Zhu Han
Wireless Network, Signal Processing & Security Lab, University of Houston, USA
2009-11-12

Based on the two-part paper by V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli:
Part I: Asymptotic Optimality
Part II: Accurate Asymptotic Expansions for the Expected Sample Size

Outline
- Introduction
- Objectives
- System Model
- MSPR Tests: Test δa (Bayesian optimality), Test δb (LLR test)
- Asymptotic Optimality of MSPR Tests: the case of i.i.d. observations
- Higher-order Approximation for accurate results
- Conclusions

Introduction
- MSPRT = multiple hypotheses + SPRT (sequential probability ratio test).
- Many applications need M ≥ 2 hypotheses: multiple-resolution radar systems, CDMA systems.
- The quasi-Bayesian MSPRT and its asymptotic expected sample size in the i.i.d. case were established by V. Veeravalli.
- The need: make accurate decisions in the shortest observation time with an MSPRT.

Objectives
- Continue investigating the asymptotic behavior of two MSPR tests:
  Test δa – quasi-Bayesian test
  Test δb – log-likelihood-ratio (LLR) test
- Goal: show that the tests are asymptotically optimal with respect to any positive moment of the stopping-time distribution, i.e., they minimize the average observation time, and generalize the results to broad classes of models.

System Model
- Sequential testing of M hypotheses H_1, …, H_M. A sequential test δ = (T, d) consists of a stopping time T and a terminal decision function d; d = i means H_i is accepted.
- π_i is the prior probability of hypothesis H_i.
- W(i, j) is the loss for accepting H_j when H_i is true; with zero-one loss, the error terms reduce to the conditional error probabilities α_ij = Pr_i[accept H_j].
- R_i is the risk associated with accepting H_i; with zero-one loss, R_i = Pr[accepting H_i incorrectly].
- The class of tests under consideration contains all tests whose risks R_i do not exceed predefined values.
- Z_ij(n) = ln[ p_i(X_1, …, X_n) / p_j(X_1, …, X_n) ] is the log-likelihood ratio between H_i and H_j up to time n.
- The positive thresholds are defined in terms of the prescribed risks.
- E_i[T] is the expected stopping time (average observation time) under H_i.

Recap — next: MSPR Tests (Test δa, Test δb).

Test δa
- Test δa is built in a Bayesian framework: stop at the first time n at which the posterior probability of some hypothesis exceeds its threshold, and accept that hypothesis.
- In the special case of zero-one loss: stop as soon as the largest posterior probability Pr(H_i | X_1, …, X_n) exceeds a threshold A, and accept the maximizing H_i.

Test δb
- Test δb corresponds to an LLR test. ν_i is the accepting time for H_i: the first time n at which the LLR of H_i against every other hypothesis H_j (j ≠ i) exceeds the corresponding positive threshold.
- The test stops at the earliest of the accepting times and accepts the hypothesis that is accepted first.
- In the special case of zero-one loss the thresholds simplify.

Recap — next: Asymptotic Optimality of E_i[T] in the MSPRT (the case of i.i.d. observations).

Asymptotic Optimality
- D_ij = E_i[Z_ij(1)] is the Kullback-Leibler (KL) information distance between H_i and H_j.
- D_i = min over j ≠ i of D_ij is the minimal distance between H_i and the other hypotheses.
- Constraints: the D_ij must be positive and finite, and the risks R_i tend to 0.
- Asymptotic lower bounds hold for any positive moment of the stopping time.
- The minimal expected stopping time, attained asymptotically by both tests, behaves roughly as E_i[T] ≈ |ln R_i| / D_i as R_i → 0.
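Before turning to the higher-order refinement, the two stopping rules and the first-order approximation can be illustrated with a small simulation. The sketch below is not from the slides or the papers: it assumes M = 3 unit-variance Gaussian hypotheses with uniform priors and ad hoc thresholds A and a, and the function names (mspr_delta_a, mspr_delta_b, log_lik) are invented for this example. It implements Test δa as a posterior-threshold rule and Test δb as a pairwise-LLR rule, so the simulated average sample sizes can be compared with the rough first-order value a / D_i.

```python
# Illustrative sketch (not from the slides): simulating the two MSPRT stopping
# rules for i.i.d. Gaussian observations.  Thresholds A and a are ad hoc.
import numpy as np

rng = np.random.default_rng(0)

# M = 3 hypotheses: X_k ~ N(mu_i, 1) under H_i, uniform priors.
mus = np.array([-1.0, 0.0, 1.0])
M = len(mus)
priors = np.full(M, 1.0 / M)

def log_lik(x, mu):
    # Log density of N(mu, 1) at x.
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

def mspr_delta_a(xs, A=0.99):
    # Test delta_a (zero-one loss): stop when the largest posterior
    # probability exceeds the threshold A and accept that hypothesis.
    log_post = np.log(priors)
    post = priors.copy()
    for n, x in enumerate(xs, start=1):
        log_post = log_post + log_lik(x, mus)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= A:
            return n, int(post.argmax())
    return len(xs), int(post.argmax())        # ran out of data

def mspr_delta_b(xs, a=4.6):
    # Test delta_b: accept H_i at the first time n at which the LLR of H_i
    # against every other hypothesis exceeds the threshold a.
    loglik = np.zeros(M)
    for n, x in enumerate(xs, start=1):
        loglik = loglik + log_lik(x, mus)
        for i in range(M):
            z = loglik[i] - np.delete(loglik, i)   # Z_ij(n) for all j != i
            if np.all(z >= a):
                return n, i
    return len(xs), int(loglik.argmax())      # ran out of data

# Average sample size under the middle hypothesis (true mean 0.0); D_i = 0.5
# here, so the first-order approximation is roughly a / D_i = 9.2 observations.
true_i = 1
n_a, n_b = [], []
for _ in range(2000):
    xs = rng.normal(mus[true_i], 1.0, size=1000)
    n_a.append(mspr_delta_a(xs)[0])
    n_b.append(mspr_delta_b(xs)[0])
print("mean sample size, delta_a:", np.mean(n_a))
print("mean sample size, delta_b:", np.mean(n_b))
```

With a = 4.6 ≈ ln 100 and D_i = 0.5, the first-order value is about 9 observations; the simulated averages typically come out somewhat larger, which is exactly the gap that the overshoot and max-of-normals terms in the higher-order approximation below are meant to capture.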
Higher-order Approximation
- To obtain accurate results, a higher-order approximation of the expected stopping time is derived.
- Using nonlinear renewal theory, the stopping statistics of δa and δb are written as a random walk crossing a constant boundary plus a "slowly changing" nonlinear term.
- Because that term is slowly changing, the limiting overshoot (ξ) of the random walk over a fixed threshold is unchanged.
- The boundary is redefined accordingly; here r = M − 1 and "\i" denotes exclusion of i from the index set.
- h_{r,i} is the expected value of the maximum of r zero-mean normal random variables.
- The higher-order approximations of the expected stopping time for Test δb and Test δa then refine the first-order term a / D_i with the expected overshoot and the h_{r,i} correction, up to a vanishing term.

Conclusion
- The proposed MSPR tests are asymptotically optimal under fairly general conditions: discrete or continuous time, general stochastic models, etc.
- They asymptotically minimize any positive moment of the stopping time (average observation time), both in the general first-order analysis as the risks go to zero and in the higher-order approximations, which are accurate up to a vanishing term by nonlinear renewal theory.

References
V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli, "Multihypothesis Sequential Probability Ratio Tests – Part I: Asymptotic Optimality," IEEE Transactions on Information Theory, vol. 45, no. 7, Nov. 1999.
V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli, "Multihypothesis Sequential Probability Ratio Tests – Part II: Accurate Asymptotic Expansions for the Expected Sample Size," IEEE Transactions on Information Theory, vol. 46, no. 4, Jul. 2000.

Backup slides
- Higher-order Approximation (derivation details): estimating the higher-order expected stopping time; decomposition into a random-walk part plus a nonlinear term.
- Expected Overshoot.
- Expected maximum of r terms and D_i: relax the condition on the boundaries and rewrite it accordingly.
- Asymmetry of MSPR tests: in the symmetric case, the threshold choice guarantees that each risk's rate of decay to zero keeps pace with the others'.
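As a companion to the higher-order and backup-slide material, the short sketch below (again illustrative and not from the slides) uses Monte Carlo to estimate the two ingredients of the expansion: the limiting expected overshoot of a cumulative-LLR random walk over a large boundary, and the expected value of the maximum of r zero-mean normal random variables. I.i.d. standard normals are used purely as a simplification, and the Gaussian model, boundary value, and function names are assumptions of this example.

```python
# Illustrative sketch (not from the slides): Monte Carlo estimates of the
# expected overshoot and of E[max of r zero-mean normals].
import numpy as np

rng = np.random.default_rng(1)

def expected_overshoot(mu0=0.0, mu1=1.0, a=25.0, trials=5000):
    # Overshoot of the random walk S_n (cumulative LLR of N(mu0,1) vs N(mu1,1)
    # under the mu0 model) at the first n with S_n >= a; a is taken fairly
    # large so the average approximates the limiting expected overshoot.
    overshoots = []
    for _ in range(trials):
        s = 0.0
        while s < a:
            x = rng.normal(mu0, 1.0)
            # LLR increment ln[p0(x)/p1(x)]; its mean is the KL distance (0.5 here).
            s += 0.5 * ((x - mu1) ** 2 - (x - mu0) ** 2)
        overshoots.append(s - a)
    return float(np.mean(overshoots))

def expected_max_normal(r, trials=200_000):
    # E[max of r zero-mean, unit-variance normals], i.i.d. case for simplicity.
    return float(rng.normal(size=(trials, r)).max(axis=1).mean())

print("expected overshoot:", expected_overshoot())
print("E[max of r=2 normals]:", expected_max_normal(2))   # about 1/sqrt(pi) ~ 0.56
```

For the unit-variance Gaussian pair used here the KL distance is 0.5, and the expected maximum of two independent standard normals is 1/sqrt(pi) ≈ 0.56, which the second estimate should reproduce; the first estimate plays the role of the overshoot term that enters the higher-order expected-sample-size expressions.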

