BU CS 333 - Dynamic Programming Technique

Outline:
Dynamic Programming Technique
Slide 2
When is Dynamic Programming used?
Fibonacci's Series
Slide 5
What does the Execution Tree look like?
The Main Idea of Dynamic Programming
Dynamic Programming Solution for Fibonacci
The Binomial Coefficient
The recursive algorithm
Dynamic Solution
The B Matrix
Compute B[4,2]
Dynamic Program
Dynamic programming
Number of iterations
Principle of Optimality (Optimal Substructure)
Principle of optimality - shortest path problem
Principle of optimality
Principle of optimality - MST problem
Slide 22
A problem that does not satisfy the Principle of Optimality

Dynamic Programming Technique

Slide 2
• The term Dynamic Programming comes from Control Theory, not computer science. Programming refers to the use of tables (arrays) to construct a solution.
• Used extensively in "Operations Research", taught in the Math dept.

When is Dynamic Programming used?
• Used for problems in which an optimal solution for the original problem can be found from optimal solutions to subproblems of the original problem.
• Often a recursive algorithm can solve the problem, but the algorithm computes the optimal solution to the same subproblem more than once and is therefore slow.
• The following two examples (Fibonacci and the binomial coefficient) have such a recursive algorithm.
• Dynamic programming reduces the time by computing the optimal solution of a subproblem only once and saving its value. The saved value is then used whenever the same subproblem needs to be solved.

Fibonacci's Series
• Definition: S(0) = 0, S(1) = 1, and S(n) = S(n-1) + S(n-2) for n > 1:
  0, 1, 1, 2, 3, 5, 8, 13, 21, …
• Applying the recursive definition we get:

  fib(n)
  1. if n < 2
  2.    return n
  3. else return (fib(n-1) + fib(n-2))

Slide 5

  fib(n)
  1. if n < 2
  2.    return n
  3. else return (fib(n-1) + fib(n-2))

What is the recurrence equation?
The run time can be shown to be very slow: T(n) = Θ(φ^n), where φ = (1 + √5)/2 ≈ 1.61803.

What does the Execution Tree look like?

  Fib(5)
  ├── Fib(4)
  │   ├── Fib(3)
  │   │   ├── Fib(2)
  │   │   │   ├── Fib(1)
  │   │   │   └── Fib(0)
  │   │   └── Fib(1)
  │   └── Fib(2)
  │       ├── Fib(1)
  │       └── Fib(0)
  └── Fib(3)
      ├── Fib(2)
      │   ├── Fib(1)
      │   └── Fib(0)
      └── Fib(1)

The Main Idea of Dynamic Programming
• In dynamic programming we usually reduce time by increasing the amount of space.
• We solve the problem by solving subproblems of increasing size and saving each optimal solution in a table (usually).
• The table is then used for finding the optimal solution to larger problems.
• Time is saved since each subproblem is solved only once.

Dynamic Programming Solution for Fibonacci
• Builds a table with the first n Fibonacci numbers:

  fib(n)
  1. A[0] = 0
  2. A[1] = 1
  3. for i ← 2 to n
  4.    do A[i] = A[i-1] + A[i-2]
  5. return A

Is there a recurrence equation?
What is the run time?
What are the space requirements?
If we only need the nth number, can we save space?
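
As a concrete companion to the table-based fib above, here is a minimal Python sketch (the names fib_table and fib_two_vars are my own, not from the slides). The second version answers the last question: since only the previous two entries of A are ever read, two variables suffice.

  def fib_table(n):
      # Bottom-up, as in the slide: fill A[0..n], solving each subproblem once.
      if n < 2:
          return n
      A = [0] * (n + 1)
      A[1] = 1
      for i in range(2, n + 1):
          A[i] = A[i - 1] + A[i - 2]
      return A[n]                        # O(n) time, O(n) space

  def fib_two_vars(n):
      # Same recurrence, keeping only the last two values: O(1) space.
      prev, curr = 0, 1
      for _ in range(n):
          prev, curr = curr, prev + curr
      return prev

  assert fib_table(10) == fib_two_vars(10) == 55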
The Binomial Coefficient
• Definition:

  C(n, k) = C(n-1, k-1) + C(n-1, k)   for 0 < k < n
  C(n, k) = 1                          for k = 0 or k = n

  and, in closed form, C(n, k) = n! / (k! (n-k)!).

The recursive algorithm

  binomialCoef(n, k)
  1. if k = 0 or k = n
  2.    then return 1
  3. else return (binomialCoef(n-1, k-1) + binomialCoef(n-1, k))

The Call Tree

  C(n, k)
  ├── C(n-1, k-1)
  │   ├── C(n-2, k-2)
  │   │   ├── C(n-3, k-3)
  │   │   └── C(n-3, k-2)
  │   └── C(n-2, k-1)
  │       ├── C(n-3, k-2)
  │       └── C(n-3, k-1)
  └── C(n-1, k)
      ├── C(n-2, k-1)
      │   ├── C(n-3, k-2)
      │   └── C(n-3, k-1)
      └── C(n-2, k)
          ├── C(n-3, k-1)
          └── C(n-3, k)

Dynamic Solution
• Use a matrix B of n+1 rows and k+1 columns, where B[n, k] = C(n, k).
• Establish a recursive property. Rewrite in terms of matrix B:
  B[i, j] = B[i-1, j-1] + B[i-1, j]   for 0 < j < i
  B[i, j] = 1                          for j = 0 or j = i
• Solve all "smaller instances of the problem" in a bottom-up fashion by computing the rows of B in sequence, starting with the first row.

The B Matrix

        j=0   1   2   3   4  ...  k
  i=0     1
    1     1   1
    2     1   2   1
    3     1   3   3   1
    4     1   4   6   4   1
   ...
    n

  Each interior entry is B[i, j] = B[i-1, j-1] + B[i-1, j].

Compute B[4,2] = C(4,2)
• Row 0: B[0,0] = 1
• Row 1: B[1,0] = 1, B[1,1] = 1
• Row 2: B[2,0] = 1, B[2,1] = B[1,0] + B[1,1] = 2, B[2,2] = 1
• Row 3: B[3,0] = 1, B[3,1] = B[2,0] + B[2,1] = 3, B[3,2] = B[2,1] + B[2,2] = 3
• Row 4: B[4,0] = 1, B[4,1] = B[3,0] + B[3,1] = 4, B[4,2] = B[3,1] + B[3,2] = 6

Dynamic Program

  bin(n, k)
  1. for i = 0 to n                      // every row
  2.    for j = 0 to minimum(i, k)
  3.       if j = 0 or j = i             // column 0 or diagonal
  4.          then B[i, j] = 1
  5.       else B[i, j] = B[i-1, j-1] + B[i-1, j]
  6. return B[n, k]

What is the run time?
How much space does it take?
If we only need the last value, can we save space? (See the Python sketch at the end of this document.)

Dynamic programming
• All values in column 0 are 1.
• All values in the first k+1 diagonal cells are 1.
• The bounds j ≤ i and 0 ≤ j ≤ min{i, k} ensure we only compute B[i, j] for j ≤ i, and only in the first k+1 columns.
• Elements above the diagonal (B[i, j] for j > i) are not computed, since C(i, j) is undefined for j > i.

Number of iterations
The inner loop runs min(i, k) + 1 times for each i, so the total number of iterations is

  Σ_{i=0}^{n} (min(i, k) + 1)
    = Σ_{i=0}^{k-1} (i + 1) + Σ_{i=k}^{n} (k + 1)
    = k(k+1)/2 + (n - k + 1)(k + 1)
    = (2n - k + 2)(k + 1)/2 ∈ Θ(nk)

Principle of Optimality (Optimal Substructure)
The principle of optimality applies to a problem (not an algorithm). A large number of optimization problems satisfy this principle.
Principle of optimality: Given an optimal sequence of decisions or choices, each subsequence must also be optimal.

Principle of optimality - shortest path problem
Problem: Given a graph G and vertices s and t, find a shortest path in G from s to t.
Theorem: A subpath P' (from s' to t') of a shortest path P is a shortest path from s' to t' in the subgraph G' induced by P'. (Subpaths are paths that start or end at an intermediate vertex of P.)
Proof: If P' were not a shortest path from s' to t' in G', we could replace the subpath from s' to t' in P with the shortest path in G' from s' to t'. The result would be a shorter path from s to t than P, contradicting our assumption that P is a shortest path from s to t.

Principle of optimality
[Figure: a weighted graph G on vertices a, b, c, d, e, f, with the path P = {(a,b), (b,c), (c,d), (d,e)} highlighted and the subgraph G' induced by the subpath P' = {(c,d), (d,e)}.]
P = {(a,b), (b,c), (c,d), (d,e)}
P' = {(c,d), (d,e)}
P' must be a shortest path from c to e in G'; otherwise P cannot be a shortest path from a to e in G.

Principle of optimality - MST problem
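
Returning to the bin(n, k) table from the Dynamic Program slide, here is a minimal runnable sketch in Python (identifiers are my own). The single-row variant answers the space question raised there: row i depends only on row i-1, so one row of k+1 entries suffices if it is updated right to left.

  def bin_table(n, k):
      # Bottom-up table, mirroring bin(n, k): Theta(nk) time and space.
      B = [[0] * (k + 1) for _ in range(n + 1)]
      for i in range(n + 1):                      # every row
          for j in range(min(i, k) + 1):
              if j == 0 or j == i:                # column 0 or diagonal
                  B[i][j] = 1
              else:
                  B[i][j] = B[i - 1][j - 1] + B[i - 1][j]
      return B[n][k]

  def bin_one_row(n, k):
      # Single row, updated right to left so row[j-1] still holds
      # B[i-1, j-1] when B[i, j] is computed: Theta(k) space.
      row = [1] + [0] * k
      for i in range(1, n + 1):
          for j in range(min(i, k), 0, -1):
              row[j] = row[j] + row[j - 1]
      return row[k]

  assert bin_table(4, 2) == bin_one_row(4, 2) == 6   # matches the worked example B[4,2]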

