UMD CMSC 351 - Lecture Notes

Lecture Notes CMSC 251

[Figure 34: Chain Matrix Multiplication. Shows the m[i,j] table, the s[i,j] table, and the final multiplication order for the example below.]

In the figure above we show an example. This algorithm is tricky, so it would be a good idea to trace through this example (and the one given in the text). The initial set of dimensions is ⟨5, 4, 6, 2, 7⟩, meaning that we are multiplying A1 (5 × 4) times A2 (4 × 6) times A3 (6 × 2) times A4 (2 × 7). The optimal sequence is ((A1(A2A3))A4).

Lecture 27: NP-Completeness: General Introduction
(Tuesday, May 5, 1998)

Read: Chapt 36, up through Section 36.4.

Easy and Hard Problems: At this point of the semester you have hopefully learned a few things about what it means for an algorithm to be efficient, and how to design algorithms and determine their efficiency asymptotically. All of this is fine if it helps you discover an acceptably efficient algorithm to solve your problem. The question that often arises in practice is what to do when you have tried every trick in the book, and still your best algorithm is not fast enough. Although your algorithm can solve small problems reasonably efficiently (e.g. n ≤ 20), for the really large applications that you want to solve (e.g. n ≥ 100) your algorithm does not terminate quickly enough. When you analyze its running time, you realize that it is running in exponential time, perhaps n^√n, or 2^n, or 2^(2^n), or n!, or worse.

Towards the end of the 60's and in the early 70's great strides were made in finding efficient solutions to many combinatorial problems. But at the same time there was also a growing list of problems for which there seemed to be no known efficient algorithmic solutions. The best solutions known for these problems required exponential time.
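As an aside, the chain matrix multiplication example above can be checked mechanically. The following is a hypothetical Python rendering of the dynamic program (not the text's own code); matrix A_i has dimensions p[i-1] × p[i]:

```python
def matrix_chain_order(p):
    """Dynamic program for chain matrix multiplication.
    p is the dimension sequence; matrix A_i is p[i-1] x p[i] (1-indexed)."""
    n = len(p) - 1                              # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min scalar mults for A_i..A_j
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split point k
    for length in range(2, n + 1):              # chain length, from short to long
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split (A_i..A_k)(A_{k+1}..A_j)
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

def parenthesize(s, i, j):
    """Recover the optimal parenthesization from the s table."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"

m, s = matrix_chain_order([5, 4, 6, 2, 7])
print(m[1][4])                 # 158 scalar multiplications
print(parenthesize(s, 1, 4))   # ((A1(A2A3))A4)
```

Running this on the dimension sequence ⟨5, 4, 6, 2, 7⟩ reproduces the optimal order ((A1(A2A3))A4) from the figure.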
People began to wonder whether there was some unknown paradigm that would lead to a solution to these problems, or perhaps some proof that these problems are inherently hard to solve and no algorithmic solutions exist that run in less than exponential time.

Around this time a remarkable discovery was made. It turns out that many of these "hard" problems were interrelated in the sense that if you could solve any one of them in polynomial time, then you could solve all of them in polynomial time. In the next couple of lectures we will discuss some of these problems and introduce the notions of P, NP, and NP-completeness.

Polynomial Time: We need some way to separate the class of efficiently solvable problems from inefficiently solvable problems. We will do this by considering problems that can be solved in polynomial time.

We have measured the running time of algorithms using worst-case complexity, as a function of n, the size of the input. We have defined input size variously for different problems, but the bottom line is the number of bits (or bytes) that it takes to represent the input using any reasonably efficient encoding. (By a reasonably efficient encoding, we assume that there is not some significantly shorter way of providing the same information. For example, you could write numbers in unary notation, 11111111 in unary = 1000 in binary = 8, rather than binary, but that would be unacceptably inefficient.)

We have also assumed that operations on numbers can be performed in constant time. From now on, we should be more careful and assume that arithmetic operations require at least as much time as there are bits of precision in the numbers being stored.

Up until now all the algorithms we have seen have had the property that their worst-case running times are bounded above by some polynomial in the input size, n. A polynomial time algorithm is any algorithm that runs in time O(n^k) where k is some constant that is independent of n.
A problem is said to be solvable in polynomial time if there is a polynomial time algorithm that solves it.

Some functions do not "look" like polynomials but are. For example, a running time of O(n log n) does not look like a polynomial, but it is bounded above by the polynomial O(n²), so it is considered to be polynomial time.

On the other hand, some functions that do "look" like polynomials are not. For example, a running time of O(n^k) is not considered polynomial time if k is an input parameter that could vary as a function of n. The important constraint is that the exponent in a polynomial function must be a constant that is independent of n.

Decision Problems: Many of the problems that we have discussed involve optimization of one form or another: find the shortest path, find the minimum cost spanning tree, find the knapsack packing of greatest value. For rather technical reasons, most NP-complete problems that we will discuss will be phrased as decision problems. A problem is called a decision problem if its output is a simple "yes" or "no" (or you may think of this as True/False, 0/1, accept/reject).

We will phrase many optimization problems in terms of decision problems. For example, rather than asking what is the minimum number of colors needed to color a graph, we would instead phrase this as a decision problem: given a graph G and an integer k, is it possible to color G with k colors? Of course, if you could answer this decision problem, then you could determine the minimum number of colors by trying all possible values of k (or, if you were more clever, by doing a binary search on k).

One historical artifact of NP-completeness is that problems are stated in terms of language-recognition problems. This is because the theory of NP-completeness grew out of automata and formal language theory.
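To make the graph-coloring reduction above concrete: given any routine that answers the decision problem for a fixed k, binary search on k recovers the optimization answer, since k-colorability is monotone in k. The brute-force test below is a hypothetical illustration only (it runs in exponential time, which is exactly the situation this lecture is about):

```python
from itertools import product

def colorable(n, edges, k):
    """Decision problem: can the n-vertex graph (vertices 0..n-1, given as an
    edge list) be properly colored with k colors? Brute force: try every
    assignment of colors, so this takes O(k^n) time."""
    for assignment in product(range(k), repeat=n):
        if all(assignment[u] != assignment[v] for u, v in edges):
            return True
    return False

def chromatic_number(n, edges):
    """Recover the optimization answer (minimum number of colors) from the
    decision problem by binary search on k."""
    lo, hi = 1, n                     # n colors always suffice
    while lo < hi:
        mid = (lo + hi) // 2
        if colorable(n, edges, mid):
            hi = mid                  # mid colors suffice; try fewer
        else:
            lo = mid + 1              # mid colors do not suffice
    return lo

# A 4-cycle needs 2 colors; a triangle needs 3.
print(chromatic_number(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
print(chromatic_number(3, [(0, 1), (1, 2), (2, 0)]))          # 3
```

The binary search asks only O(log n) decision questions, so an efficient decision procedure would immediately give an efficient optimization procedure; the exponential cost here is entirely inside `colorable`.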
We will not be taking this approach, but you should be aware that if you look in the book, it will often describe NP-complete problems as languages.

Definition: Define P to be the set of all decision problems that can be solved in polynomial time.

NP and Polynomial Time Verification: Before talking about the class of NP-complete problems, it is important to introduce the notion of a verification algorithm. Many language recognition problems may be very hard to solve, but they have the property that it is easy to verify whether a claimed answer is correct.

Consider the following problem, called the undirected Hamiltonian cycle problem (UHC). Given an undirected graph G, does G have a cycle that visits every vertex exactly once?

An interesting aspect of this problem is that if the graph did contain a Hamiltonian cycle, then it would be easy for someone to convince you that it did. They would simply say "the cycle is ⟨v3, v7, v1, ..., v13⟩". We could then
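Checking such a claimed cycle takes only polynomial time. The following is a hypothetical sketch of a certificate checker for UHC (the graph representation and the small example below are assumptions, not from the text):

```python
def verify_hamiltonian_cycle(adj, cycle):
    """Verify a claimed Hamiltonian cycle certificate in polynomial time.
    adj: dict mapping each vertex to the set of its neighbors (undirected graph).
    cycle: a proposed ordering of the vertices."""
    n = len(adj)
    # The certificate must list every vertex exactly once...
    if len(cycle) != n or set(cycle) != set(adj):
        return False
    # ...and each consecutive pair (wrapping around) must be an edge of G.
    return all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n))

# A 4-cycle graph: 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(verify_hamiltonian_cycle(adj, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_cycle(adj, [0, 2, 1, 3]))  # False: 0 and 2 are not adjacent
```

The check is a single pass over the certificate, so verification is easy even though finding such a cycle appears to be hard; this asymmetry is exactly what the class NP captures.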

