Stanford CS 262 - Heuristic Local Aligners

Heuristic Local Aligners

1. The basic indexing & extension technique
2. Indexing: techniques to improve sensitivity (pairs of words, patterns)
3. Systems for local alignment

Indexing-based local alignment

• Dictionary: all words of length k (k ≈ 10) occurring in the database
• Alignment initiated between words with alignment score ≥ T (typically T = k, i.e. exact word matches)
• Alignment: ungapped extensions until the score drops below a statistical threshold
• Output: all local alignments with score above the statistical threshold

[Figure: the query is scanned against the indexed database; each word hit seeds an alignment]

A minimal code sketch of this seed-and-extend scheme appears at the end of this part.

Indexing-based local alignment: extensions

[Figure: dot plot of ACGAAGTAAGGTCCAGT against CTGATCCTGGATTGCGA, with a word hit extended along its diagonal]

• Gapped extensions until threshold: extend with gaps until the score falls more than C below the best score seen so far
• Output, e.g.:

    GTAAGGTCCAGT
    GTTAGGTC-AGT

Sensitivity-speed tradeoff

• Long words (k = 15): faster scan, lower sensitivity
• Short words (k = 7): higher sensitivity, slower scan

[Figure: sensitivity vs. speed as a function of word length; Kent WJ, Genome Research 2002]

Methods to improve sensitivity/speed:
1. Using pairs of words
2. Using inexact words
3. Patterns: non-consecutive positions

Pairs of words (two nearby short hits trigger an alignment):
……ATAACGGACGACTGATTACACTGATTCTTAC……
……GGCACGGACCAGTGACTACTCTGATTCCCAG……

Inexact word / pattern match:
……ATAACGGACGACTGATTACACTGATTCTTAC……
……GGCGCCGACGAGTGATTACACAGATTGCCAG……

[Figure: a pattern over TTTGATTACACAGAT, with only a subset of positions required to match]

Measured improvement

[Figure: measured sensitivity/speed improvement; Kent WJ, Genome Research 2002]

Non-consecutive words: patterns

Patterns increase the likelihood of at least one match within a long conserved region.

[Figure: overlapping placements of consecutive words share most of their positions (3, 5, 7 in common), while shifted non-consecutive patterns overlap in fewer positions (e.g. 6 in common)]

On a 100-long, 70% conserved region:

                            Consecutive   Non-consecutive
  Expected # hits               1.07           0.97
  Prob[at least one hit]        0.30           0.47

(A simulation sketch of this comparison also follows this part.)

Advantage of patterns

[Figure: comparison of an 11-position consecutive word against 11- and 10-position patterns over the same region]

Multiple patterns

• K patterns
  § Takes K times longer to scan
  § Patterns can complement one another
• Computational problem:
  § Given: a model (probability distribution) for homology between two regions
  § Find: the set of K patterns that maximizes Prob(at least one match)

[Figure: several complementary patterns over TTTGATTACACAGAT]

Buhler et al., RECOMB 2003; Sun & Buhler, RECOMB 2004

How long does it take to search the query?
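To make the basic scheme concrete, here is a minimal Python sketch of seed-and-extend: index every length-K word of the database, look up the query's words, and grow each hit with an ungapped X-drop extension. The +1/-1 scoring, the word length, the X-drop cutoff, and the fixed reporting threshold are illustrative choices, not the parameters of any particular system; exact word lookup corresponds to the T = k trigger above.

from collections import defaultdict

K = 10           # word (k-mer) length, ~10 as on the slide
X_DROP = 10      # stop extending once the score falls this far below the best
THRESHOLD = 12   # toy reporting threshold (real systems use a statistical one)

def match_score(a, b):
    # Toy +1/-1 scoring in place of a real substitution matrix.
    return 1 if a == b else -1

def build_index(db):
    # Dictionary: every length-K word in the database -> list of positions.
    index = defaultdict(list)
    for i in range(len(db) - K + 1):
        index[db[i:i + K]].append(i)
    return index

def ungapped_extension(query, db, qi, di):
    # Grow a word hit at (qi, di) left and right without gaps, stopping
    # when the running score drops X_DROP below the best seen so far.
    best = score = sum(match_score(query[qi + j], db[di + j]) for j in range(K))
    left, right = qi, qi + K
    q, d = qi + K, di + K                       # extend to the right
    while q < len(query) and d < len(db) and score > best - X_DROP:
        score += match_score(query[q], db[d])
        if score > best:
            best, right = score, q + 1
        q, d = q + 1, d + 1
    score = best
    q, d = qi - 1, di - 1                       # extend to the left
    while q >= 0 and d >= 0 and score > best - X_DROP:
        score += match_score(query[q], db[d])
        if score > best:
            best, left = score, q
        q, d = q - 1, d - 1
    return best, left, right

def scan(query, db):
    # Look up each query word in the index and extend every hit.
    index = build_index(db)
    hits = []
    for qi in range(len(query) - K + 1):
        for di in index.get(query[qi:qi + K], ()):
            score, lo, hi = ungapped_extension(query, db, qi, di)
            if score >= THRESHOLD:
                hits.append((score, lo, query[lo:hi]))
    return hits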
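The consecutive-vs-pattern comparison can be checked by simulation. The sketch below models a conserved region as i.i.d. positions that each match with probability 0.7, and estimates the expected number of hits and Prob(at least one hit) for an 11-position consecutive word versus a weight-11, span-18 spaced seed (the layout used here is the well-known PatternHunter seed). The slide's exact seed and region model are not given in this preview, so the output illustrates the effect in the table above rather than reproducing its numbers.

import random

def seed_stats(seed, L=100, p=0.7, trials=10_000):
    # seed: offsets that must all match, e.g. (0, 1, ..., 10).
    span = max(seed) + 1
    total_hits = hit_regions = 0
    for _ in range(trials):
        region = [random.random() < p for _ in range(L)]
        hits = sum(all(region[i + o] for o in seed)
                   for i in range(L - span + 1))
        total_hits += hits
        hit_regions += hits > 0
    return total_hits / trials, hit_regions / trials

consecutive = tuple(range(11))                   # 11 consecutive positions
spaced = (0, 1, 2, 4, 7, 9, 12, 13, 15, 16, 17)  # weight 11, span 18
for name, s in (("consecutive", consecutive), ("spaced", spaced)):
    exp_hits, p_hit = seed_stats(s)
    print(f"{name:12s} E[#hits] ~ {exp_hits:.2f}   P[>=1 hit] ~ {p_hit:.2f}")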
Hidden Markov Models

[Figure: HMM trellis; K states per column, one column per emitted symbol x_1, x_2, x_3, …]

Example: the dishonest casino

A casino has two dice:
• Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
• Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2

The casino player switches back and forth between the fair and loaded die about once every 20 turns.

Game:
1. You bet $1
2. You roll (always with a fair die)
3. The casino player rolls (maybe with the fair die, maybe with the loaded die)
4. Highest number wins $2

Question # 1 – Evaluation

GIVEN: a sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How likely is this sequence, given our model of how the casino works?

This is the EVALUATION problem in HMMs. (For this model and sequence: Prob ≈ 1.3 × 10^-35.)

Question # 2 – Decoding

GIVEN: a sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: What portion of the sequence was generated with the fair die, and what portion with the loaded die?

This is the DECODING question in HMMs. (Answer here: FAIR, then LOADED, then FAIR.)

Question # 3 – Learning

GIVEN: a sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How "loaded" is the loaded die? How "fair" is the fair die? How often does the casino player change from fair to loaded, and back?

This is the LEARNING question in HMMs. (Estimated here: Prob(6) = 64%.)

The dishonest casino model

[State diagram: FAIR and LOADED states; self-transitions 0.95, switch probability 0.05]

  P(1|F) = 1/6    P(1|L) = 1/10
  P(2|F) = 1/6    P(2|L) = 1/10
  P(3|F) = 1/6    P(3|L) = 1/10
  P(4|F) = 1/6    P(4|L) = 1/10
  P(5|F) = 1/6    P(5|L) = 1/10
  P(6|F) = 1/6    P(6|L) = 1/2

(This model is written out as code at the end of this section.)

Definition of a hidden Markov model

A hidden Markov model (HMM) consists of:
• An alphabet Σ = {b_1, b_2, …, b_M}
• A set of states Q = {1, …, K}
• Transition probabilities between any two states: a_{ij} = transition probability from state i to state j, with a_{i1} + … + a_{iK} = 1 for all states i = 1…K
• Start probabilities a_{0i}, with a_{01} + … + a_{0K} = 1
• Emission probabilities within each state: e_k(b) = P(x_i = b | π_i = k), with e_k(b_1) + … + e_k(b_M) = 1 for all states k = 1…K

(End probabilities a_{i0} appear in Durbin; not needed here.)

An HMM is memoryless

At each time step t, the only thing that affects future states is the current state π_t:

  P(π_{t+1} = k | "whatever happened so far")
    = P(π_{t+1} = k | π_1, π_2, …, π_t, x_1, x_2, …, x_t)
    = P(π_{t+1} = k | π_t)

Likewise, the only thing that affects the emission x_t is the current state π_t:

  P(x_t = b | "whatever happened so far")
    = P(x_t = b | π_1, π_2, …, π_t, x_1, x_2, …, x_{t-1})
    = P(x_t = b | π_t)

A parse of a sequence

Given a sequence x = x_1…x_N, a parse of x is a sequence of states π = π_1, …, π_N.

[Figure: trellis with one column of K states per position]

Generating a sequence with the model

Given an HMM, we can generate a sequence of length n as follows (see the sampling sketch below):
1. Start at state π_1 according to probability a_{0π_1}
2. Emit letter x_1 according to probability e_{π_1}(x_1)
3. Go to state π_2 according to probability a_{π_1π_2}
4. … and so on, until emitting x_n

Likelihood of a parse

Given a sequence x = x_1…x_N and a parse π = π_1, …, π_N, how likely is this scenario under our HMM?

  P(x, π) = P(x_1, …, x_N, π_1, …, π_N)
          = P(x_N | π_N) P(π_N | π_{N-1}) … P(x_2 | π_2) P(π_2 | π_1) P(x_1 | π_1) P(π_1)
          = a_{0π_1} a_{π_1π_2} … a_{π_{N-1}π_N} e_{π_1}(x_1) … e_{π_N}(x_N)

A compact way to write this: defining π_0 = 0,

  P(x, π) = ∏_{i=1}^{N} a_{π_{i-1}π_i} e_{π_i}(x_i)
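To tie the casino slides together, here is the model written out as plain data, following the transition and emission numbers above. The slides do not give start probabilities, so a uniform 0.5/0.5 start is assumed here.

STATES = ("FAIR", "LOADED")
START = {"FAIR": 0.5, "LOADED": 0.5}    # assumption: uniform start, not on the slide
TRANS = {                                # switch ~once every 20 turns
    "FAIR":   {"FAIR": 0.95, "LOADED": 0.05},
    "LOADED": {"FAIR": 0.05, "LOADED": 0.95},
}
EMIT = {                                 # P(roll | state), as on the slide
    "FAIR":   {r: 1 / 6 for r in "123456"},
    "LOADED": {"1": 0.1, "2": 0.1, "3": 0.1, "4": 0.1, "5": 0.1, "6": 0.5},
}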
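Sampling from this model follows steps 1 to 4 of "Generating a sequence with the model": draw π_1 from the start distribution, emit, transition, and repeat. A minimal sketch, reusing the STATES/START/TRANS/EMIT tables above:

import random

def sample(n):
    # Pick pi_1 from START, emit from EMIT, move via TRANS, repeat n times.
    path, rolls = [], []
    state = random.choices(STATES, weights=[START[s] for s in STATES])[0]
    for _ in range(n):
        path.append(state)
        faces = sorted(EMIT[state])
        rolls.append(random.choices(faces, weights=[EMIT[state][f] for f in faces])[0])
        state = random.choices(STATES, weights=[TRANS[state][s] for s in STATES])[0]
    return "".join(rolls), path

# rolls, path = sample(100)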
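The likelihood of a parse is a direct transcription of the product formula above, computed in log space since a product of N small factors underflows quickly (the Prob ≈ 1.3 × 10^-35 quoted for the 67-roll sequence shows why).

import math

def log_joint(x, path):
    # log P(x, pi) = log a_{0,pi_1} + sum_t [ log e_{pi_t}(x_t) + log a_{pi_t, pi_{t+1}} ]
    lp = math.log(START[path[0]])
    for t, (roll, state) in enumerate(zip(x, path)):
        lp += math.log(EMIT[state][roll])
        if t + 1 < len(path):
            lp += math.log(TRANS[state][path[t + 1]])
    return lp

# e.g. ten sixes under an all-fair vs. an all-loaded parse:
# log_joint("6" * 10, ["FAIR"] * 10)    ->  log[(1/2) * (1/6)^10 * 0.95^9]
# log_joint("6" * 10, ["LOADED"] * 10)  ->  log[(1/2) * (1/2)^10 * 0.95^9]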

