UCSD CSE 101 - Greedy Algorithm


Greedy Algorithm

• A greedy algorithm always makes the choice that looks best at the moment.
• Key point: greed makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
• Note: greedy algorithms do not always yield optimal solutions, but for SOME problems they do.

Greed

• When do we use greedy algorithms?
  – When we need a heuristic (e.g., for hard problems like the Traveling Salesman Problem)
  – When the problem itself is "greedy", i.e., it has:
    • the Greedy Choice Property (CLRS 16.2)
    • the Optimal Substructure Property (shared with DP) (CLRS 16.2)
• Examples:
  – Minimum Spanning Tree (Kruskal's algorithm)
  – Optimal Prefix Codes (Huffman's algorithm)

Elements of the Greedy Algorithm

• Greedy-choice property: "A globally optimal solution can be arrived at by making a locally optimal (greedy) choice."
  – Must prove that a greedy choice at each step yields a globally optimal solution.
• Optimal substructure property: "A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems." This property is a key ingredient in assessing the applicability of greedy algorithms and dynamic programming.

Greedy Algorithm: Minimum Spanning Tree

• Kruskal's minimum spanning tree algorithm
  – INPUT: edge-weighted graph G = (V, E), with |V| = n
  – OUTPUT: T, a spanning tree of G (touches all vertices, and therefore has n-1 edges) of minimum cost (= total edge weight)
  – Grows a set of edges T until it contains n-1 edges
  – Algorithm: start with T empty, then iteratively add to T the smallest (minimum-cost) remaining edge in the graph, as long as that edge does not form a cycle in T (a runnable sketch follows the proof below)

Proof of Kruskal's Algorithm

Claim: Greedy-MST (Kruskal) is correct.

Proof (by induction):
• Define a promising edge set E' ⊆ E to be any edge set that is a subgraph of some MST. We will show by induction that as T is constructed, T is always promising.
• Basis: |T| = 0, trivial (the empty set is a subgraph of every MST).
• Induction step: T is promising by the induction hypothesis, so it is a subgraph of some MST, call it S. Let e_i be the smallest edge in E such that e_i ∉ T and T ∪ {e_i} has no cycle.
• If e_i ∈ S, we're done.
• Suppose e_i ∉ S. Then S ∪ {e_i} has a unique cycle containing e_i, and every other edge in that cycle has cost ≤ c(e_i), because S is an MST (a costlier cycle edge could be swapped for e_i to give a cheaper spanning tree).
• Call the cycle C. Observe that C cannot lie entirely inside T ∪ {e_i}, because T ∪ {e_i} is acyclic (that is why Kruskal adds e_i).
• Then C must contain some edge e_j ∉ T ∪ {e_i}. Since T ∪ {e_j} ⊆ S is also acyclic, e_j was a candidate when Kruskal chose e_i, so c(e_j) ≥ c(e_i); combined with the bound above, c(e_j) = c(e_i).
• Let S' = S ∪ {e_i} \ {e_j}. Then S' is a spanning tree of the same cost as S, hence an MST, and T ∪ {e_i} ⊆ S', so T ∪ {e_i} is promising. ∎
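To make Kruskal's greedy rule concrete, here is a minimal runnable sketch in Python, assuming a basic union-find (disjoint-set) structure for the cycle test; the function name `kruskal` and the edge-list format are illustrative choices, not from the slides:

```python
def kruskal(n, edges):
    """Minimum spanning tree by Kruskal's greedy rule.

    n     -- number of vertices, labeled 0..n-1
    edges -- list of (cost, u, v) tuples
    Returns the n-1 chosen edges (assumes the graph is connected).
    """
    parent = list(range(n))            # union-find: parent links, roots point to themselves

    def find(x):
        while parent[x] != x:          # walk up to the component's root
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    T = []
    for cost, u, v in sorted(edges):   # consider edges in increasing cost
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: adding (u, v) makes no cycle
            parent[ru] = rv            # union the two components
            T.append((cost, u, v))
            if len(T) == n - 1:        # spanning tree complete
                break
    return T

# Example: kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)])
# returns [(1, 0, 1), (2, 1, 2)] -- the two cheapest edges, skipping the cycle edge.
```

The union-find structure is what makes the "does this edge form a cycle in T?" test cheap: two endpoints close a cycle exactly when they already share a root.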
Greedy Algorithm: Huffman Codes

• Prefix codes
  – one code per input symbol
  – no code is a prefix of another
• Why prefix codes?
  – Easy decoding.
  – Since no codeword is a prefix of any other, the codeword that begins an encoded file is unambiguous.
  – Identify the initial codeword, translate it back to the original character, and repeat the decoding process on the remainder of the encoded file.

• Huffman coding
  – Given: the frequencies with which source symbols (e.g., A, B, C, ..., Z) appear in a message
  – Goal: minimize the expected encoded message length
• Create a tree (leaf) node for each symbol that occurs with nonzero frequency
  – Node weights = frequencies
• Find the two nodes with smallest weight
• Create a new node with these two nodes as children, and with weight equal to the sum of the weights of the two children
• Continue until there is a single tree (a heap-based sketch appears after the optimality proof below)

• Example frequencies:

  Symbol:    A  E  G  I  M  N  O  R  S  T  U  V  Y  Blank
  Frequency: 1  3  2  2  1  2  2  2  2  1  1  1  1  3

  1. Place the elements into a minimum heap (by frequency).
  2. Remove the first two elements from the heap.
  3. Combine these two elements into one.
  4. Insert the new element back into the heap.

  Note: in the diagrams, a circle is a node and a rectangle is its weight (= frequency).

  (Steps 1-9 of the original slides draw the forest after each merge: the weight-1 pairs A/M, T/U, and V/Y are merged first; then the lowest-weight nodes are paired off — V/Y with A/M, N with T/U, O with R, S with G, I with E — and Blank joins the V/Y/A/M subtree; finally the subtrees are combined (5 + 4 = 9, 4 + 4 = 8, 7 + 8 = 15, 9 + 15 = 24) into a single tree of total weight 24.)

Finally:
• Note that the 0's (left branches) and 1's (right branches) give the code words for each symbol.

Proof That Huffman's Merge is Optimal

• Let T be an optimal prefix-code tree in which a, b are siblings at the deepest level, so L(a) = L(b).
• Suppose that x, y are two other nodes that are merged by the Huffman algorithm.
  – x, y have the lowest weights, because Huffman chose them.
  – WLOG w(x) ≤ w(a) and w(y) ≤ w(b); also L(a) = L(b) ≥ L(x), L(y).
  – Swap a and x: the cost difference between T and the new tree T' is
      w(x)L(x) + w(a)L(a) − w(x)L(a) − w(a)L(x) = (w(a) − w(x))(L(a) − L(x)) ≥ 0,
    since both factors are non-negative.
  – A similar argument for b and y shows that the Huffman choice is also optimal.
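As a sketch of the heap-based procedure described above (steps 1-4), here is a minimal Python version; the frequencies are the example's, and `huffman_codes` plus the tuple-based tree representation are illustrative choices, not from the slides:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """Build a Huffman code from a {symbol: frequency} map.

    Returns {symbol: bit string}, labeling left branches 0 and right branches 1.
    """
    tie = count()  # tiebreaker so equal weights never force comparing trees
    # Heap entries: (weight, tiebreaker, tree); a tree is a symbol
    # (leaf) or a (left, right) tuple (internal node).
    heap = [(w, next(tie), sym) for sym, w in freq.items()]
    heapq.heapify(heap)                           # step 1: min-heap by frequency
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)           # step 2: two smallest trees
        w2, _, t2 = heapq.heappop(heap)
        merged = (w1 + w2, next(tie), (t1, t2))   # step 3: combine into one node
        heapq.heappush(heap, merged)              # step 4: reinsert into the heap
    _, _, root = heap[0]

    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):               # internal node: recurse
            walk(tree[0], prefix + "0")           # left branch = 0
            walk(tree[1], prefix + "1")           # right branch = 1
        else:
            codes[tree] = prefix or "0"           # leaf: record its codeword
    walk(root, "")
    return codes

# The example's frequencies (Blank written as a space):
freq = {'A': 1, 'E': 3, 'G': 2, 'I': 2, 'M': 1, 'N': 2, 'O': 2,
        'R': 2, 'S': 2, 'T': 1, 'U': 1, 'V': 1, 'Y': 1, ' ': 3}
codes = huffman_codes(freq)
# Expected encoded length = sum(freq[s] * len(codes[s]) for s in freq).
```

Exact codewords depend on how the heap breaks ties among equal weights, but every tie-breaking order yields the same minimum expected length.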
Dynamic Programming

• Dynamic programming (DP): divide the problem into overlapping subproblems; recursively solve each in the same way.
• Similar to divide-and-conquer (DQ), so what's the difference?
  – DQ partitions the problem into independent subproblems.
  – DP breaks it into overlapping subproblems, that is, subproblems that share sub-subproblems.
  – So DP saves work compared with DQ by solving every subproblem just once (when subproblems are overlapping).

Elements of Dynamic Programming

• Optimal substructure: a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. Whenever a problem exhibits optimal substructure, it is a good clue that DP might apply (a greedy method might apply also).
• Overlapping subproblems: a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems.

Dynamic Programming: Matrix Chain Product

• Matrix-chain
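The preview cuts off mid-slide here. As a hedged illustration of the two DP ingredients just listed, applied to the matrix-chain problem this last heading introduces, here is a minimal memoized sketch; the function name and example dimensions are mine, not from the slides:

```python
from functools import lru_cache

def matrix_chain_cost(dims):
    """Minimum scalar multiplications to compute A1 * A2 * ... * An,
    where Ai has shape dims[i-1] x dims[i].

    Optimal substructure: the best split of a chain contains best
    solutions for its two halves. Overlapping subproblems: the same
    (i, j) ranges recur in many splits, so each is cached and solved once.
    """
    @lru_cache(maxsize=None)
    def best(i, j):
        # Cheapest way to multiply the subchain Ai..Aj (1-indexed).
        if i == j:
            return 0  # a single matrix needs no multiplication
        return min(
            best(i, k) + best(k + 1, j) + dims[i - 1] * dims[k] * dims[j]
            for k in range(i, j)  # try every split point
        )
    return best(1, len(dims) - 1)

# Example: shapes 10x30, 30x5, 5x60.
# Best order is (A1*A2)*A3: 10*30*5 + 10*5*60 = 1500 + 3000 = 4500.
print(matrix_chain_cost([10, 30, 5, 60]))  # -> 4500
```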

