UCSD CSE 101 - Greedy Algorithm

Contents
• Greedy Algorithm
• Greed
• Elements of the Greedy Algorithm
• Proof of Kruskal's Algorithm
• Greedy Algorithm: Huffman Codes
• Proof That Huffman's Merge is Optimal
• Dynamic Programming
• Elements of Dynamic Programming
• Dynamic Programming: Matrix Chain Product
• Dynamic Programming: Longest Common Subsequence
• Dynamic Programming: Knapsack

Greedy Algorithm
• A greedy algorithm always makes the choice that looks best at the moment.
• Key point: greed makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
• Note: greedy algorithms do not always yield optimal solutions, but for SOME problems they do.

Greed
• When do we use greedy algorithms?
– When we need a heuristic (e.g., for hard problems like the Traveling Salesman Problem)
– When the problem itself is "greedy", i.e., it has both the
• greedy-choice property (CLRS 16.2), and the
• optimal-substructure property (shared with DP) (CLRS 16.2)
• Examples:
– Minimum spanning tree (Kruskal's algorithm)
– Optimal prefix codes (Huffman's algorithm)

Elements of the Greedy Algorithm
• Greedy-choice property: "A globally optimal solution can be arrived at by making a locally optimal (greedy) choice."
– We must prove that a greedy choice at each step yields a globally optimal solution.
• Optimal-substructure property: "A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. This property is a key ingredient for assessing the applicability of greedy algorithms and dynamic programming."

Proof of Kruskal's Algorithm
We show by induction that the tree T built so far is always promising, i.e., a subgraph of some MST.
• Basis: |T| = 0, trivial.
• Induction step: T is promising by the I.H., so it is a subgraph of some MST, call it S. Let ei be the smallest edge in E such that ei ∉ T and T ∪ {ei} has no cycle.
– If ei ∈ S, we are done.
– Suppose ei ∉ S. Then S ∪ {ei} has a unique cycle C containing ei, and every other arc in C has cost ≤ c(ei), because S is an MST (a costlier cycle edge could be swapped for ei to give a cheaper spanning tree).
– Observe that C \ {ei} cannot lie entirely in T, because T ∪ {ei} is acyclic (Kruskal adds ei).
– Then C must contain some edge ej ∉ T with ej ≠ ei. Since T ∪ {ej} ⊆ S is acyclic, ej was also a candidate when Kruskal chose ei, so we also know c(ej) ≥ c(ei).
– Let S' = S ∪ {ei} \ {ej}. Then c(S') = c(S) + c(ei) - c(ej) ≤ c(S), so S' is an MST, and T ∪ {ei} ⊆ S' is promising.
[Figure: the cycle C in S ∪ {ei}, with the edges ei and ej marked.]

Greedy Algorithm: Huffman Codes
• Prefix codes:
– one code per input symbol
– no code is a prefix of another
• Why prefix codes? Easy decoding:
– Since no codeword is a prefix of any other, the codeword that begins an encoded file is unambiguous.
– Identify the initial codeword, translate it back to the original character, and repeat the decoding process on the remainder of the encoded file.
• Huffman coding:
– Given: the frequencies with which source symbols (e.g., A, B, C, …, Z) appear in a message.
– Goal: minimize the expected encoded message length.
• Create a tree (leaf) node for each symbol that occurs with nonzero frequency; node weights = frequencies.
• Find the two nodes with smallest frequency.
• Create a new node with these two nodes as children, and with weight equal to the sum of the weights of the two children.
• Continue until a single tree remains.

Example frequencies:

Symbol:    A  E  G  I  M  N  O  R  S  T  U  V  Y  Blank
Frequency: 1  3  2  2  1  2  2  2  2  1  1  1  1  3

Heap-based construction (a sketch in code follows this list):
1. Place the elements into a min-heap (keyed by frequency).
2. Remove the first two elements from the heap.
3. Combine these two elements into one.
4. Insert the new element back into the heap; repeat from step 2 until a single tree remains.
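The loop above is short enough to write out. Below is a minimal Python sketch of this construction; the function names, the integer tie-breaking counter, and the use of "_" for the blank symbol are my additions, not from the slides.

import heapq

def build_huffman_tree(freqs):
    """Build a Huffman tree from a {symbol: frequency} map.

    A tree is either a bare symbol (leaf) or a (left, right) pair.
    Heap entries are (weight, tiebreak, tree); the integer tiebreak
    keeps comparisons well-defined when two weights are equal.
    """
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)                    # step 1: min-heap keyed by frequency
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)    # step 2: remove the two smallest
        w2, _, t2 = heapq.heappop(heap)
        tiebreak += 1
        # steps 3-4: combine them and reinsert the merged tree
        heapq.heappush(heap, (w1 + w2, tiebreak, (t1, t2)))
    return heap[0][2]

def codewords(tree, prefix=""):
    """Read codes off the tree: 0 for a left branch, 1 for a right branch."""
    if not isinstance(tree, tuple):        # leaf: a single symbol
        return {tree: prefix or "0"}
    left, right = tree
    codes = codewords(left, prefix + "0")
    codes.update(codewords(right, prefix + "1"))
    return codes

# Frequencies from the slide's example ("_" stands for the blank symbol):
freqs = {"A": 1, "E": 3, "G": 2, "I": 2, "M": 1, "N": 2, "O": 2,
         "R": 2, "S": 2, "T": 1, "U": 1, "V": 1, "Y": 1, "_": 3}
print(codewords(build_huffman_tree(freqs)))

Note that ties among equal weights can be broken in any order, so the codes printed may differ from the slides' tree while achieving the same total encoded length.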
(In the slides' figures, a circle denotes a tree node and a rectangle its weight, i.e., its frequency.)

[Figures, Steps 1-9: the tree is built by repeatedly merging the two lowest-weight nodes, e.g., Step 1 merges A and M into a node of weight 2, and Step 2 merges T and U; the merges continue until Step 9 yields a single tree of total weight 24.]

Finally:
• Note that the 0's (left branches) and 1's (right branches) along the path from the root give the codeword for each symbol.

Proof That Huffman's Merge is Optimal
• Let T be an optimal prefix-code tree in which a, b are siblings at the deepest level, so L(a) = L(b).
• Suppose that x, y are two other nodes that are merged by the Huffman algorithm.
– x, y have the lowest weights, because Huffman chose them.
– WLOG w(x) ≤ w(a) and w(y) ≤ w(b), and L(a) = L(b) ≥ L(x), L(y).
– Swap a and x: the cost difference between T and the new tree T' is
w(x)L(x) + w(a)L(a) - w(x)L(a) - w(a)L(x) = (w(a) - w(x))(L(a) - L(x)) ≥ 0,
since both factors are non-negative. So T' costs no more than the optimal T.
– A similar argument for b, y shows that the Huffman choice is also optimal.
[Figure: the tree T, with x and y nearer the root and a, b as siblings at the deepest level.]

Dynamic Programming
• Dynamic programming: divide the problem into overlapping subproblems and recursively solve each in the same way.
• This is similar to divide-and-conquer (D&C), so what is the difference?
– D&C partitions the problem into independent subproblems.
– DP breaks it into overlapping subproblems, that is, subproblems that share subsubproblems.
– So DP saves work compared with D&C by solving every subproblem just once (when subproblems overlap).

Elements of Dynamic Programming
• Optimal substructure: a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. Whenever a problem exhibits optimal substructure, it is a good clue that DP might apply (a greedy method might apply as well).
• Overlapping subproblems: a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems.

Dynamic Programming: Matrix Chain Product
• Matrix-chain multiplication problem: given a chain of n matrices A1, A2, …, An to be multiplied, compute the product A1A2⋯An with the minimum number of scalar multiplications.
• Because of the associative law of matrix multiplication, there are many possible orderings in which to calculate the product of the same matrix chain, and their costs can differ greatly.
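The preview cuts off before the recurrence, but the standard DP formulation is: if Ai has dimensions p(i-1) × p(i), then m[i][j], the cheapest cost of computing Ai⋯Aj, satisfies m[i][j] = min over i ≤ k < j of m[i][k] + m[k+1][j] + p(i-1)·p(k)·p(j). Here is a minimal Python sketch of that recurrence; the function name and the example dimensions are mine, not from the slides.

def matrix_chain_order(dims):
    """Minimum number of scalar multiplications to compute A1*A2*...*An,
    where matrix Ai is dims[i-1] x dims[i]."""
    n = len(dims) - 1                      # number of matrices in the chain
    # m[i][j] = cheapest cost of computing the product Ai..Aj (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):         # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)       # k = split point: (Ai..Ak)(Ak+1..Aj)
            )
    return m[1][n]

# Example: A1 is 10x30, A2 is 30x5, A3 is 5x60.
# (A1 A2) A3 costs 10*30*5 + 10*5*60 = 4500 scalar multiplications,
# while A1 (A2 A3) costs 30*5*60 + 10*30*60 = 27000.
print(matrix_chain_order([10, 30, 5, 60]))  # -> 4500

The table is filled in order of increasing chain length because each m[i][j] depends only on entries for strictly shorter subchains.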

