MSU CSE 830 - Lecture 10: Dynamic Programming

This preview shows pages 1-2, 14-15, and 29-30 of 30.

Contents
Dynamic Programming; Optimization Problems; Example Problem; Slide 4; Fibonacci numbers; Recursive Computation; Bottom-up computation; Key implementation steps; Example: Matrix Multiplication; Example Input; Identify subsolutions; Develop a recurrence relation; Set up table of values; Order of Computation of Values; Representing optimal solution; Pseudocode; Backtracking; Principle of Optimality; Example 1; Example 2; Example 3; Example 4: The Traveling Salesman Problem; No!; Summary of bad examples; When is dynamic programming effective?; Efficient Top-Down Implementation; Trading Post Problem; Longest Common Subsequence Problem; Longest Increasing Subsequence Problem; Book Stacking Problem

Dynamic Programming
•Optimization Problems
•Dynamic Programming Paradigm
•Example: Matrix multiplication
•Principle of Optimality
•Exercise: Trading post problem

Optimization Problems
•In an optimization problem, there are typically many feasible solutions for any input instance I.
•For each solution S, we have a cost or value function f(S).
•Typically, we wish to find a feasible solution S such that f(S) is either minimized or maximized.
•Thus, when designing an algorithm to solve an optimization problem, we must prove the algorithm produces a best possible solution.

Example Problem
You have six hours to complete as many tasks as possible, all of which are equally important.
    Task A - 2 hours        Task D - 3.5 hours
    Task B - 4 hours        Task E - 2 hours
    Task C - 1/2 hour       Task F - 1 hour
How many can you get done?
•Is this a minimization or a maximization problem?
•Give one example of a feasible but not optimal solution along with its associated value.
•Give an optimal solution and its associated value.

Dynamic Programming
•The key idea behind dynamic programming is that it is a divide-and-conquer technique at heart.
•That is, we solve larger problems by patching together solutions to smaller problems.
•However, dynamic programming is typically faster because we compute these solutions in a bottom-up fashion.

Fibonacci numbers
•F(n) = F(n-1) + F(n-2)
    –F(0) = 0
    –F(1) = 1
•Top-down recursive computation is very inefficient.
    –Many F(i) values are computed multiple times.
•Bottom-up computation is much more efficient.
    –Compute F(2), then F(3), then F(4), etc., using stored values for smaller F(i) to compute the next value.
    –Each F(i) value is computed just once.

Recursive Computation
F(n) = F(n-1) + F(n-2); F(0) = 0, F(1) = 1
Recursive solution: F(6) = 8
[Figure: the recursion tree for F(6), showing that subproblems such as F(2), F(3), and F(4) are recomputed many times.]

Bottom-up computation
We can calculate F(n) in linear time by storing small values.

    F[0] = 0
    F[1] = 1
    for i = 2 to n
        F[i] = F[i-1] + F[i-2]
    return F[n]

Moral: we can sometimes trade space for time.

Key implementation steps
•Identify subsolutions that may be useful in computing the whole solution.
    –Often need to introduce parameters.
•Develop a recurrence relation (recursive solution).
•Set up the table of values/costs to be computed.
    –The dimensionality is typically determined by the number of parameters.
    –The number of values should be polynomial.
•Determine the order of computation of values.
•Backtrack through the table to obtain the complete solution (not just the solution value).

Example: Matrix Multiplication
•Input
    –A list of n matrices to be multiplied together using traditional matrix multiplication.
    –The dimensions of the matrices are sufficient.
•Task
    –Compute the optimal ordering of multiplications to minimize the total number of scalar multiplications performed.
•Observations:
    –Multiplying an X × Y matrix by a Y × Z matrix takes X × Y × Z scalar multiplications.
    –Matrix multiplication is associative but not commutative.

Example Input
•Input: M1, M2, M3, M4
    –M1: 13 × 5
    –M2: 5 × 89
    –M3: 89 × 3
    –M4: 3 × 34
•Feasible solutions and their values:
    –((M1 M2) M3) M4: 10,582 scalar multiplications
    –(M1 M2) (M3 M4): 54,201 scalar multiplications
    –(M1 (M2 M3)) M4: 2856 scalar multiplications
    –M1 ((M2 M3) M4): 4055 scalar multiplications
    –M1 (M2 (M3 M4)): 26,418 scalar multiplications

Identify subsolutions
•Often need to introduce parameters.
•Define the dimensions to be (d0, d1, ..., dn), where matrix Mi has dimensions d(i-1) × d(i).
•Let M(i,j) be the matrix formed by multiplying matrices Mi through Mj.
•Define C(i,j) to be the minimum cost for computing M(i,j).

Develop a recurrence relation
•Definitions
    –M(i,j): matrices Mi through Mj
    –C(i,j): the minimum cost for computing M(i,j)
•Recurrence relation for C(i,j)
    –C(i,i) = ???
    –C(i,j) = ???
•We want to express C(i,j) in terms of "smaller" C terms.

Set up table of values
•Table
    –The dimensionality is typically determined by the number of parameters.
    –The number of values should be polynomial.

    C | 1  2  3  4
    --+-----------
    1 | 0
    2 |    0
    3 |       0
    4 |          0

Order of Computation of Values
•Many orders are typically OK.
    –Just need to obey some constraints.
•What are valid orders for this table? (The entries below show one valid order of computation.)

    C | 1  2  3  4
    --+-----------
    1 | 0  1  2  3
    2 |    0  4  5
    3 |       0  6
    4 |          0

Representing optimal solution
P(i,j) records the intermediate multiplication k used to compute M(i,j). That is, P(i,j) = k if the last multiplication was M(i,k) M(k+1,j).

    C | 1     2     3     4
    --+--------------------
    1 | 0     5785  1530  2856
    2 |       0     1335  1845
    3 |             0     9078
    4 |                   0

    P | 1  2  3  4
    --+-----------
    1 | 0  1  1  3
    2 |    0  2  3
    3 |       0  3
    4 |          0

Pseudocode

    int MatrixOrder()
        for all i, j: C[i,j] = 0
        for j = 2 to n
            for i = j-1 down to 1
                C[i,j] = min over i <= k <= j-1 of (C[i,k] + C[k+1,j] + d(i-1)*d(k)*d(j))
                P[i,j] = the k achieving the minimum
        return C[1,n]

Backtracking

    procedure ShowOrder(i, j)
        if i = j
            write("Mi")
        else
            k = P[i,j]
            write("(")
            ShowOrder(i, k)
            write(" ")
            ShowOrder(k+1, j)
            write(")")

Principle of Optimality
•In the book, this is termed "optimal substructure."
•An optimal solution contains within it optimal solutions to subproblems.
•More detailed explanation:
    –Suppose solution S is optimal for problem P.
    –Suppose we decompose P into subproblems P1 through Pk, and that S can be decomposed into pieces S1 through Sk corresponding to the subproblems.
    –Then each solution Si is an optimal solution for subproblem Pi.

Example 1
•Matrix Multiplication
    –In our solution for computing matrix M(1,n), we have a final step of multiplying matrices M(1,k) and M(k+1,n).
    –Our subproblems then would be to compute M(1,k) and M(k+1,n).
    –Our solution uses optimal solutions for computing M(1,k) and M(k+1,n) as part of the overall solution.

Example 2
•Shortest Path Problem
    –Suppose a shortest path from s to t visits u.
    –We can decompose the path into s-u and u-t.
    –The s-u path must be a shortest path from s to u, and the u-t path must be a shortest path from u to t.
•Conclusion: dynamic programming can be used for computing shortest paths.
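The six-task exercise can be checked mechanically. A minimal sketch (the function name and the strategy are my own illustration, not from the slides): since all tasks count equally, taking tasks in increasing order of duration maximizes how many fit in the budget.

```python
def max_tasks(durations, budget):
    """Schedule shortest tasks first; return how many fit within `budget` hours."""
    count, used = 0, 0.0
    for d in sorted(durations):
        if used + d <= budget:
            used += d
            count += 1
    return count

# Tasks A-F from the slide: 2, 4, 0.5, 3.5, 2, 1 hours, with a 6-hour budget.
print(max_tasks([2, 4, 0.5, 3.5, 2, 1], 6))  # -> 4
```

Here the optimal value is 4 (e.g., tasks C, F, A, E take 5.5 hours), while doing B and A alone (exactly 6 hours) is feasible but not optimal, with value 2.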
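The bottom-up Fibonacci pseudocode translates directly into running code. A sketch in Python, with a memoized top-down variant added for comparison (the function names are illustrative; the memoized version is the "efficient top-down implementation" idea from the slide titles):

```python
from functools import lru_cache

def fib_bottom_up(n):
    """Fill the table F[0..n] in order; each F[i] is computed exactly once."""
    if n < 2:
        return n
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

@lru_cache(maxsize=None)
def fib_top_down(n):
    """Top-down recursion made efficient by caching each F(i) after its first computation."""
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

print(fib_bottom_up(6), fib_top_down(6))  # 8 8, matching the F(6) = 8 recursion-tree slide
```

Without the cache, the top-down version recomputes the same F(i) values exponentially many times; with it, both versions do O(n) additions.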
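The MatrixOrder and ShowOrder routines can be fleshed out as follows. This is a sketch, not the slides' exact code: the tables are 1-indexed with an unused row/column 0, and the outer loop runs by chain length rather than column by column (a different valid order of computation, as the order-of-computation slide allows). The test input is the M1..M4 example above.

```python
def matrix_order(dims):
    """dims = (d0, d1, ..., dn); matrix Mi is d(i-1) x d(i).
    Returns the cost table C and split table P, both 1-indexed."""
    n = len(dims) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    P = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = float("inf")
            for k in range(i, j):                 # last multiplication: M(i,k) * M(k+1,j)
                cost = C[i][k] + C[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < C[i][j]:
                    C[i][j], P[i][j] = cost, k
    return C, P

def show_order(P, i, j):
    """Backtrack through P to recover the optimal parenthesization as a string."""
    if i == j:
        return f"M{i}"
    k = P[i][j]
    return f"({show_order(P, i, k)} {show_order(P, k + 1, j)})"

C, P = matrix_order((13, 5, 89, 3, 34))
print(C[1][4], show_order(P, 1, 4))  # 2856 ((M1 (M2 M3)) M4)
```

The computed table reproduces the values on the slides: C(1,2) = 5785, C(1,3) = 1530, C(2,4) = 1845, C(3,4) = 9078, and the optimal cost C(1,4) = 2856 with parenthesization (M1 (M2 M3)) M4.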
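Example 2's optimal-substructure argument is exactly what shortest-path dynamic programs rely on. A minimal illustration on a directed acyclic graph (the graph, vertex names, and weights are my own toy example): processing vertices in topological order guarantees that dist[u], the optimal s-u subpath length, is final before it is used to extend paths.

```python
def dag_shortest_paths(topo_order, edges, source):
    """edges maps u -> list of (v, weight); topo_order lists every vertex
    in topological order.  Each dist[v] is built from already-optimal dist[u]."""
    INF = float("inf")
    dist = {v: INF for v in topo_order}
    dist[source] = 0
    for u in topo_order:
        if dist[u] == INF:
            continue                          # u is unreachable from source
        for v, w in edges.get(u, []):
            if dist[u] + w < dist[v]:         # relax using the optimal s-u subpath
                dist[v] = dist[u] + w
    return dist

edges = {"s": [("u", 1), ("t", 4)], "u": [("t", 2)]}
print(dag_shortest_paths(["s", "u", "t"], edges, "s"))  # {'s': 0, 'u': 1, 't': 3}
```

The shortest s-t path here goes through u because the optimal s-t solution (cost 3) is assembled from the optimal s-u subproblem (cost 1) plus the edge u-t, just as the principle of optimality predicts.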