CISC320 algorithms, 10S, final exam review notes

The first half (Chapter 0 through section 4.4) is summarized on the midterm review sheet; this sheet continues from there.

Chapter 4, sections 4.5-4.8:

Key points:

Consider Dijkstra's single-source shortest path algorithm on a weighted directed graph of n vertices and m edges with no negative edge weights.
• It uses a priority queue which must
1. be built on n vertices (using n insert()'s, for instance), then
2. experience n extract-min()'s as the algorithm proceeds, and
3. experience a total of up to m decrease-key operations as the distances to neighbors are updated for each vertex processed.
• A binary heap may be used to give O(m log(n)) time for Dijkstra's algorithm (sketched in Python after the Chapter 6 notes below).
• A d-ary heap, for d about m/n, performs better, especially as m approaches n^2. (pg 114)

When negative edges (but no negative cycles) are involved, the Bellman-Ford algorithm works in O(mn) time.

Chapter 5, greedy algorithms:

• Minimal spanning trees: Kruskal's and Prim's algorithms are both greedy style, but with different strategies of "best greedy choice".
• Prim's is very similar to Dijkstra's algorithm for a different problem and uses a priority queue. It gradually builds an MST as a growing tree.
• Kruskal's algorithm considers edges in increasing order by weight and needs a union-find dynamic disjoint set implementation. It manages a forest of trees that are gradually joined to form the MST (union-find is sketched below).
• Union-find with the union-by-weight heuristic performs n union() and find() operations in total time O(n log(n)).
• Union-find with also the path-compression heuristic performs n union() and find() operations in total time O(n log*(n)).
• log*(n) is a very slowly growing function. For instance, log*(n) ≤ 5 for any n < googol^2, where googol = 10^100.
• Huffman codes for data compression: variable-length bit codes for letters are designed greedily based on the frequency of occurrence of the letters of the alphabet used (sketched below).
• Importantly, the bit codes have the prefix property. Know what this means and why it matters.
• Provably, the Huffman codes are optimal over all variable-length bit codes with the prefix property.

Chapter 6, Dynamic Programming:

Strategy: Find a function of a set of parameters that
1. has a clear conceptual definition,
2. has a recurrence relation expressing its value at given parameters in terms of other values at parameters in which at least one is decreased, and
3. solves the given problem for some set of parameter values.
Then figure out the table of values needed and the order of construction.

The run time cost is the size of the table times the cost to compute one entry. The latter is typically dominated by the number of other entries looked up to compute the given entry.

Memoization can do the table work for you. In the recursive function: (1) first see if the parameters hash to a table entry already computed; if so, use it. Otherwise compute the value according to the recursive definition and put it in the table just before returning it (see the memoized edit-distance sketch below).

Many problems yield good dynamic programming solutions:
• Longest increasing subsequence
• Edit distance (note the relevance to bioinformatics)
• Knapsack in O(nW)
• Chain multiplication in O(n^3)
• Traveling salesman in O(n^2 2^n)
• The many chapter-end exercises
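
To make the review above more concrete, the next few blocks are minimal Python sketches of algorithms from Chapters 4-6. They are illustrative only: the function names, input formats, and data representations are my assumptions, not taken from the course materials or the textbook. First, Dijkstra's algorithm with a binary heap (Chapter 4). Python's heapq has no decrease-key, so this sketch uses the common lazy-deletion variant (push a fresh entry, skip stale ones), which keeps the O(m log(n)) bound.

    import heapq

    def dijkstra(adj, source):
        """Illustrative sketch: single-source shortest paths, non-negative weights.

        adj: assumed representation, a dict mapping vertex -> list of (neighbor, weight).
        Returns a dict of shortest distances from source.
        """
        dist = {source: 0}
        heap = [(0, source)]                      # (distance estimate, vertex)
        while heap:
            d, u = heapq.heappop(heap)            # extract-min
            if d > dist[u]:
                continue                          # stale entry; u already finalized
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd                  # "decrease-key" by pushing a fresh entry
                    heapq.heappush(heap, (nd, v))
        return dist

    # Example: dijkstra({'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}, 'a')
    # returns {'a': 0, 'b': 2, 'c': 3}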
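
Next, a union-find sketch for Chapter 5, using union by size (one form of the union-by-weight heuristic) together with path compression; the class and method names are assumptions.

    class UnionFind:
        """Illustrative sketch of a disjoint-set forest.

        With union by size alone, n operations take O(n log(n)) total time;
        adding path compression improves this to O(n log*(n)).
        """
        def __init__(self, elements):
            self.parent = {x: x for x in elements}
            self.size = {x: 1 for x in elements}

        def find(self, x):
            root = x
            while self.parent[root] != root:      # walk up to the root
                root = self.parent[root]
            while self.parent[x] != root:         # path compression: repoint nodes at the root
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return False                      # same set already; the edge would close a cycle
            if self.size[rx] < self.size[ry]:     # union by size: hang the smaller tree under the larger
                rx, ry = ry, rx
            self.parent[ry] = rx
            self.size[rx] += self.size[ry]
            return True

Kruskal's algorithm then sorts the edges by weight and, scanning in that order, keeps exactly those edges (u, v) for which union(u, v) returns True.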
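
A greedy Huffman construction, also from Chapter 5; the tuple-based tree representation is an assumption made for brevity. The returned codes have the prefix property (no code is a prefix of another), so a concatenated encoding decodes unambiguously.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        """Illustrative sketch: letter -> bit-string codes built greedily by frequency."""
        freq = Counter(text)
        # Heap entries: (frequency, tie-breaker, tree); a tree is a leaf letter or a (left, right) pair.
        heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if not heap:
            return {}
        if len(heap) == 1:                        # degenerate one-letter alphabet
            return {heap[0][2]: "0"}
        tie = len(heap)
        while len(heap) > 1:                      # repeatedly merge the two least frequent subtrees
            f1, _, t1 = heapq.heappop(heap)
            f2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, tie, (t1, t2)))
            tie += 1
        codes = {}
        def walk(tree, prefix):                   # left edge = '0', right edge = '1'
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:
                codes[tree] = prefix
        walk(heap[0][2], "")
        return codes

    # Example: huffman_codes("aaaabbc") gives 'a' a 1-bit code and 'b', 'c' 2-bit codes.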
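
Finally, the Chapter 6 memoization strategy applied to edit distance. The function E(i, j) below follows the recipe above: a clear definition (the edit distance between a[:i] and b[:j]), a recurrence on smaller parameters, and a table lookup before any recomputation.

    def edit_distance(a, b):
        """Illustrative sketch: memoized edit distance between strings a and b.

        Recurrence: E(i, j) = min(E(i-1, j) + 1, E(i, j-1) + 1,
                                  E(i-1, j-1) + (a[i-1] != b[j-1])).
        The table has O(len(a) * len(b)) entries and each entry looks up three
        earlier entries, so the total cost is O(len(a) * len(b)).
        """
        memo = {}
        def E(i, j):
            if (i, j) in memo:                    # already computed: reuse the table entry
                return memo[(i, j)]
            if i == 0 or j == 0:                  # one prefix empty: insert/delete the rest
                val = i + j
            else:
                val = min(E(i - 1, j) + 1,                          # delete a[i-1]
                          E(i, j - 1) + 1,                          # insert b[j-1]
                          E(i - 1, j - 1) + (a[i - 1] != b[j - 1])) # match or substitute
            memo[(i, j)] = val                    # store just before returning
            return val
        return E(len(a), len(b))

    # Example: edit_distance("snowy", "sunny") == 3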

Chapter 8, NP-complete problems:

Classes of problems:
• P - search problems having an algorithm running in O(n^c) time for some constant c on problem instances with input of size n. Sometimes problems in P are called "tractable". A problem not known to be in P is "hard".
• NP - search problems having a "checker" algorithm C(x, y) that verifies in polynomial time whether y is a solution for instance x.
• Reduction: show that problem A has an algorithm that works by calling an algorithm for problem B together with polynomially much additional work. If A is a hard problem, reduction to B is a way to show that B must also be hard.
• NP-complete - problems in NP to which all problems in NP can be reduced.

Hard problems:
• SAT, 3SAT, circuit-SAT
• Vertex cover, clique, independent set
• Traveling salesman, Rudrata cycle, Rudrata path
• Knapsack, subset sum
• ...

Examples of reductions about which exam questions could be asked:
• Traveling salesman to Rudrata cycle
• Rudrata cycle to Rudrata path
• SAT to 3SAT
• 3SAT to independent vertex set (a sketch of the gadget construction appears at the end of these notes)
• Bisection and doubling: optimization problems to search problems
• Search problem to decision problem

Reduction categories (generalizations, uses of gadgets).

Cook's theorem: reduction of any NP problem to circuit-SAT.

Overall: Things to know about each algorithm:
• Why is it correct (vis-à-vis its input/output specification)? What are the theorems and properties used to explain its workings?
• What measure, n, of its input is used as the basis for analysis (for instance, n = a bound on the number of bits in the numbers, or n = the size of an array)?
• What formula (a function of that n) estimates its runtime? Usually the formula is a recurrence relation.
• What is the solution of that formula/recurrence?

Kinds of questions: multiple choice, short answer, write algorithm, track
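
As a concrete illustration of one reduction from the list above, here is a sketch of the standard 3SAT-to-independent-set gadget construction; the clause input format and helper names are my assumptions, not from the notes.

    def threesat_to_independent_set(clauses):
        """Illustrative sketch of the 3SAT -> independent set gadget construction.

        clauses: assumed format, a list of clauses, each a list of literal strings
        such as ['x1', '!x2', 'x3'] ('!' marks negation). One vertex is created per
        literal occurrence: each clause becomes a triangle (so an independent set
        can pick at most one literal per clause), and every pair of contradictory
        occurrences (x and !x) is joined by an edge. The formula is satisfiable
        iff the graph has an independent set of size len(clauses).
        Returns (vertices, edges, goal).
        """
        def var(lit):
            return lit.lstrip('!')                 # variable name without the negation mark

        vertices = []                              # vertex = (clause index, literal)
        edges = set()

        def add_edge(u, v):
            edges.add(tuple(sorted((u, v))))       # store undirected edges in a canonical order

        for ci, clause in enumerate(clauses):
            members = [(ci, lit) for lit in clause]
            vertices.extend(members)
            for i in range(len(members)):          # clause gadget: a triangle
                for j in range(i + 1, len(members)):
                    add_edge(members[i], members[j])
        for u in vertices:                         # conflict edges between x and !x occurrences
            for v in vertices:
                if u < v and var(u[1]) == var(v[1]) and u[1] != v[1]:
                    add_edge(u, v)
        return vertices, edges, len(clauses)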

