Lecture Notes on Dynamic Programming
15-122: Principles of Imperative Computation
Frank Pfenning
Lecture 23
November 16, 2010

Contents: Introduction; Fibonacci Numbers; Top-Down Dynamic Programming; Bottom-Up Dynamic Programming; Implementing ROBDDs; Encoding the n-Queens Problem

1 Introduction

In this lecture we introduce dynamic programming, which is a high-level computational thinking concept rather than a concrete algorithm. Perhaps a more descriptive title for the lecture would be "sharing", because dynamic programming is about sharing computation. We have already seen that sharing of space is also crucial: binary decision diagrams in which subtrees are shared are (in practice) much more efficient than binary decision trees in which there is no sharing.

In order to apply dynamic programming, we generally look for the following conditions:

1. The optimal solution to a problem is composed of optimal solutions to subproblems, and

2. if there are several optimal solutions, we do not care which one we get.

The emphasis on optimality in these conditions dates back to the 1950s, when dynamic programming was developed. The name also refers to programming in the sense of the operations research literature (like, for example, integer programming) and not to programming as we understand it today.

Under the above conditions, the idea of dynamic programming is to build an exhaustive table with optimal solutions to subproblems. Then we combine these solutions according to properties of the specific problem we are addressing. The table shares computation: each result is stored once and reused instead of being recomputed.

We start with a simple example and then discuss the implementation of BDDs, which uses dynamic programming in several places.

2 Fibonacci Numbers

As a very simple example, we consider the computation of the Fibonacci numbers. They are defined mathematically by

  f_0 = 0
  f_1 = 1
  f_{n+2} = f_{n+1} + f_n    (n >= 0)

A direct (and very inefficient) implementation is a recursive function:

int fib0(int n) {
  REQUIRES(n >= 0);
  if (n == 0) return 0;
  else if (n == 1) return 1;
  else return fib0(n-1) + fib0(n-2);
}

When we draw the top part of the tree of recursive calls that have to be made, we notice that values are computed multiple times:

                  fib(n)
                /        \
         fib(n-1)        fib(n-2)
         /      \        /      \
   fib(n-2)  fib(n-3)  fib(n-3)  fib(n-4)

Before we move on to improve the efficiency of this program, did you notice that the program above is buggy in C, and that the corresponding version would be questionable in C0? Think about it before reading on.

The problem is that addition can overflow. In C, signed overflow is undefined behavior, so arbitrary behavior by the code would be acceptable. In C0 the result would be defined, but it would be computed modulo 2^32, which could clearly be the wrong answer. We can fix this by explicitly checking for overflow.

#include <limits.h>

int safe_plus(int x, int y) {
  if ( (x > 0 && y > INT_MAX-x)
    || (x < 0 && y < INT_MIN-x) ) {
    fprintf(stderr, "integer overflow\n");
    abort();
  } else {
    return x+y;
  }
}

int fib1(int n) {
  REQUIRES(n >= 0);
  if (n == 0) return 0;
  else if (n == 1) return 1;
  else return safe_plus(fib1(n-1), fib1(n-2));
}

Now the program will abort in a well-defined manner upon overflow, instead of exhibiting undefined behavior that might silently give us the wrong result.

3 Top-Down Dynamic Programming

In top-down dynamic programming we store values as we compute them recursively. Then, if we need a value we have already computed, we simply reuse it. A characteristic pattern for top-down dynamic programming is a top-level function that allocates an array or similar structure to save computed results, and a recursive function that maintains this array.

int fib2_rec(int n, int* A) {
  REQUIRES(n >= 0);
  if (n == 0) return 0;
  else if (n == 1) return 1;
  else if (A[n] > 0) return A[n];
  else {
    int result = safe_plus(fib2_rec(n-1,A), fib2_rec(n-2,A));
    A[n] = result;  /* store A[n] == fib(n) */
    return result;
  }
}

int fib2(int n) {
  REQUIRES(n >= 0);
  int* A = calloc(n+1, sizeof(int));
  if (A == NULL) { fprintf(stderr, "allocation failed\n"); abort(); }
  /* calloc initializes the array with 0s */
  int result = fib2_rec(n, A);
  free(A);
  return result;
}

We also call this programming technique memoization.

We might be tempted to improve this function slightly, by looking up the second value instead of making the second recursive call:

int fib2_rec(int n, int* A) {
  REQUIRES(n >= 0);
  if (n == 0) return 0;
  else if (n == 1) return 1;
  else if (A[n] > 0) return A[n];
  else {
    int result = safe_plus(fib2_rec(n-1,A), A[n-2]);
    A[n] = result;  /* store A[n] == fib(n) */
    return result;
  }
}

This would be incorrect, but why?

The problem is that C does not guarantee the order of evaluation except in some specific circumstances, such as short-circuiting conjunction. So it might evaluate A[n-2] first, before fib2_rec(n-1,A). At that point, A[n-2] might still be 0 and we would obtain an incorrect answer.
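As an aside, the lookup can be made correct. One possible sketch (using the hypothetical names fib2b_rec and fib2b, and reusing safe_plus and REQUIRES from above) is to sequence the recursive call into a local variable, so that it is guaranteed to complete before A[n-2] is read, and to seed A[1] in the wrapper, since the recursion never stores the base cases itself.

int fib2b_rec(int n, int* A) {
  REQUIRES(n >= 0);
  if (n == 0) return 0;
  else if (n == 1) return 1;
  else if (A[n] > 0) return A[n];
  else {
    /* the statement boundary forces this call to finish
       before A[n-2] is read on the next line */
    int left = fib2b_rec(n-1, A);
    int result = safe_plus(left, A[n-2]);
    A[n] = result;  /* store A[n] == fib(n) */
    return result;
  }
}

int fib2b(int n) {
  REQUIRES(n >= 0);
  int* A = calloc(n+1, sizeof(int));
  if (A == NULL) { fprintf(stderr, "allocation failed\n"); abort(); }
  if (n >= 1) A[1] = 1;  /* seed fib(1); A[0] == 0 == fib(0) from calloc */
  int result = fib2b_rec(n, A);
  free(A);
  return result;
}

After the call fib2b_rec(n-1, A) returns, A[n-2] does hold fib(n-2): it is either one of the seeded base entries or it was stored while fib(n-1) was being computed.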
4 Bottom-Up Dynamic Programming

Top-down dynamic programming retains the structure of the original (inefficient) recursive function. Bottom-up dynamic programming inverts the order and starts from the bottom of the recursion, building up the table of values. In bottom-up dynamic programming, recursion is often profitably replaced by iteration.

In our example, we would like to compute A[0], A[1], A[2], ... in this order.

int fib3(int n) {
  REQUIRES(n >= 0);
  if (n == 0) return 0;  /* otherwise A[1] = 1 below would write past the single allocated cell */
  int i;
  int* A = calloc(n+1, sizeof(int));
  if (A == NULL) { fprintf(stderr, "allocation failed\n"); abort(); }
  A[0] = 0; A[1] = 1;
  for (i = 2; i <= n; i++) {
    /* loop invariant: 2 <= i && i <= n+1 */
    /* loop invariant: A[k] == fib(k) for all k in [0,i) */
    A[i] = safe_plus(A[i-1], A[i-2]);
  }
  ASSERT(i == n+1);
  int result = A[n];
  free(A);
  return result;
}

We have indicated the loop invariant here only informally, although we could refer to one of the earlier implementations if we wanted to, perhaps viewing fib0 as a specification.

5 Implementing ROBDDs

In the implementation of ROBDDs, dynamic programming plays a pervasive role. Recall from Lecture 19 that ROBDDs are binary decision diagrams (BDDs) satisfying two conditions:

Irredundancy: The low and high successors of every node are distinct.

Uniqueness: There are no two distinct nodes testing the same variable with the same successors.

These two conditions guarantee that, for a fixed variable order, every boolean function is represented by a unique ROBDD.
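The uniqueness condition in particular is typically maintained by memoizing node construction: before allocating a node, we look it up in a table of all nodes created so far. Below is a minimal sketch of this idea, with hypothetical names (bdd_node, make_node, unique_table) and a fixed-size, linear-scan table standing in for the hash table a real implementation would use; it illustrates the general technique, not the code from these notes.

#include <stdio.h>
#include <stdlib.h>

struct bdd_node {
  int var;                 /* variable tested at this node; -1 for the leaves */
  struct bdd_node *low;    /* successor when the variable is false */
  struct bdd_node *high;   /* successor when the variable is true */
};

/* the two leaves, representing the constant functions */
static struct bdd_node bdd_false = { -1, NULL, NULL };
static struct bdd_node bdd_true  = { -1, NULL, NULL };

#define MAX_NODES 1024
static struct bdd_node *unique_table[MAX_NODES];  /* all interior nodes built so far */
static int num_nodes = 0;

struct bdd_node *make_node(int var, struct bdd_node *low, struct bdd_node *high) {
  /* irredundancy: never build a node whose successors coincide */
  if (low == high) return low;
  /* uniqueness: reuse an existing node with the same variable and successors */
  for (int i = 0; i < num_nodes; i++) {
    struct bdd_node *t = unique_table[i];
    if (t->var == var && t->low == low && t->high == high) return t;
  }
  /* otherwise allocate a fresh node and remember it */
  if (num_nodes == MAX_NODES) { fprintf(stderr, "node table full\n"); abort(); }
  struct bdd_node *node = malloc(sizeof(struct bdd_node));
  if (node == NULL) { fprintf(stderr, "allocation failed\n"); abort(); }
  node->var = var; node->low = low; node->high = high;
  unique_table[num_nodes++] = node;
  return node;
}

int main(void) {
  /* building the node for variable 1 twice yields the same pointer: uniqueness */
  struct bdd_node *a = make_node(1, &bdd_false, &bdd_true);
  struct bdd_node *b = make_node(1, &bdd_false, &bdd_true);
  /* a node with identical successors collapses away: irredundancy */
  struct bdd_node *c = make_node(2, a, a);
  printf("%d %d\n", a == b, c == a);   /* prints: 1 1 */
  return 0;
}

Operations that combine ROBDDs are memoized in the same style, with a table keyed on the pairs of nodes already processed, which is where dynamic programming enters again.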

