MIT OpenCourseWare
http://ocw.mit.edu

6.080 / 6.089 Great Ideas in Theoretical Computer Science
Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

Scribe Notes: Introduction to Computational Complexity
Jason Furtado
February 28, 2008

Motivating Complexity Theory

Penrose's Argument (continued)

Last lecture, Roger Penrose's argument was introduced. It says that it is impossible for a computer to simulate the human brain. His argument is based on Gödel's Incompleteness Theorem: within any formal system F, we can construct a Gödel sentence G(F) that the computer cannot prove but a human can.

Professor Aaronson proposes a simple way to see that there must be a problem with Penrose's argument, or any similar argument. Penrose is attempting to show that no computer could pass the Turing Test. The Turing Test deems a computer intelligent if it can carry on a textual conversation with a human such that the human is unable to tell whether he or she is conversing with a computer or with another human. A Turing Test lasts a finite amount of time (it is a conversation), so even if you could type incredibly quickly, say 5000 characters per minute, the number of possible instant-messaging conversations you could have is finite. Therefore, in principle, a giant lookup table could be constructed that stores an intelligent human response to every possible question that could be asked. The computer could then simply consult the lookup table whenever a question is put to it.

The problem with this lookup table is that it would have to be incredibly large: the storage required would be many times larger than the observable universe (which contains maybe 10^80 atoms). Such a system would therefore be infeasible to create. The argument has now shifted from theoretical possibility to feasibility given limited resources; the objection concerns the amount of memory, processing power, and so on needed to build a system that could reliably pass the Turing Test. Since the system is theoretically possible, the question is whether the amount of resources necessary is reasonable.

To have that conversation, we need a way to measure the efficiency of algorithms. The field that studies such issues is called computational complexity theory, and it will occupy us for most of the rest of the course. In contrast to what we saw before, computational complexity is a field where almost all of the basic questions are still unanswered, but also one where some of what we do know is at the forefront of mathematics.

Dijkstra's Algorithm

In 1959, the famous computer scientist Edsger Dijkstra published an algorithm that finds the shortest path between two vertices of a graph in time nearly linear in the size of the graph. The algorithm was named after him (Dijkstra's Algorithm), and something like it is used (for example) in Google Maps and every other piece of software that computes directions. There is a famous anecdote in which Dijkstra (who worked in a math department) was trying to explain his new result to a colleague. The colleague said: "But I don't understand. Given any graph, there's at most a finite number of non-intersecting paths. Every finite set has a minimal element, so just enumerate them all and you're done." The colleague's argument does, indeed, show that the shortest path problem is computable. But from a modern perspective, showing that a problem is computable does not say much. The important question is whether a problem is solvable efficiently.
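For concreteness, here is a minimal sketch of Dijkstra's algorithm in Python. The adjacency-list representation and the names are illustrative, not from the lecture.

import heapq

def dijkstra(graph, source, target):
    """Shortest-path distance from source to target.

    graph maps each vertex to a list of (neighbor, weight) pairs,
    where all weights are non-negative.
    """
    dist = {source: 0}              # best known distance to each vertex
    heap = [(0, source)]            # (distance so far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                # stale heap entry; skip it
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")             # target unreachable

graph = {
    "a": [("b", 2), ("c", 5)],
    "b": [("c", 1), ("d", 4)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(graph, "a", "d"))    # 4, via a -> b -> c -> d

With a binary heap, as above, the running time is O(E log V) for a graph with V vertices and E edges, exponentially faster than enumerating every path as the colleague suggested.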
Efficiency

Computer scientists are interested in finding efficient algorithms to solve problems. Efficiency is typically measured by considering an algorithm's running time as a function of the size of the input, where input size usually (though not always) means the number of bits needed to describe the input. Measuring efficiency this way avoids being too tied to a particular machine model (Macs, PCs, Turing machines, etc.), and lets us focus on more "intrinsic" properties of the problem and of the algorithms for solving it.

Circuit Complexity

A good place to begin studying complexity questions is circuit complexity. Historically, this is also one of the first places where complexity questions were asked. Let's pose the following general question: given a Boolean function f : {0,1}^n → {0,1}, what is the size of the smallest circuit that computes it? (That is, how many gates are needed?)

As an example, suppose we're trying to compute the XOR of the n input bits x_1, ..., x_n. How many gates do we need? (For simplicity, let's suppose that a two-input XOR gate is available.) Certainly n - 1 gates are sufficient: XOR x_1 and x_2, then XOR the result with x_3, then XOR that result with x_4, and so on. Can we show that n - 1 gates are necessary?

Claim: Using two-input gates, you need at least n - 1 gates for the single output bit to depend on all n input bits.

Proof: Using the "chocolate-breaking" argument as a basis, view each input as its own group. Each two-input gate joins at most two groups into one, so each gate reduces the total number of groups by at most one. Merging the n inputs into a single group therefore requires at least n - 1 gates.

Note: In lecture, it was brought up that another measure of efficiency in circuits is the depth of the circuit: the maximum number of gates on a path from an input to the output. For an XOR circuit on n inputs built from two-input XOR gates, one can easily show that a depth of log n is necessary and sufficient; sufficiency comes from arranging the gates in a balanced binary tree. (A code sketch of both the chain and tree constructions appears at the end of these notes.)

Shannon's Counting Argument

Is there a Boolean function with n inputs that requires a circuit of size exponential in n? Well, let's look at some possible candidates. Does the Majority function require an exponential-size circuit? No, it doesn't. What about SAT and other NP-complete problems? That's getting ahead of ourselves, but the short answer is that no one knows! But in 1949, Claude Shannon, the father of information theory, mathematical cryptography and several other fields (and who we'll meet again
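For reference, the standard counting argument can be sketched in a few lines; the notation and constants below are ours, not the lecture's. There are 2^(2^n) Boolean functions f : {0,1}^n → {0,1}. A circuit with T two-input gates can be described by choosing, for each gate, one of the 16 possible two-input gate types and two sources from among the n inputs and the other gates, so there are at most

    (16 (T + n)^2)^T

distinct circuits of size T. If every function had a circuit with T gates, we would need (16 (T + n)^2)^T ≥ 2^(2^n); taking logarithms gives T · log2(16 (T + n)^2) ≥ 2^n, which forces T ≥ 2^n / O(n). So some (in fact, almost every) Boolean function on n inputs requires circuits of exponential size.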
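Finally, here is the code sketch of the two XOR constructions promised in the Circuit Complexity section: a minimal Python illustration (the names are ours) in which each ^ stands for one two-input XOR gate.

from math import ceil, log2

def xor_chain(bits):
    """Linear chain: n - 1 gates, depth n - 1."""
    acc = bits[0]
    for b in bits[1:]:
        acc = acc ^ b                      # one two-input XOR gate
    return acc

def xor_tree(bits):
    """Balanced tree: still n - 1 gates, but depth ceil(log2 n)."""
    if len(bits) == 1:
        return bits[0]
    mid = len(bits) // 2
    return xor_tree(bits[:mid]) ^ xor_tree(bits[mid:])

bits = [1, 0, 1, 1, 0, 1, 0, 1]            # n = 8 inputs
assert xor_chain(bits) == xor_tree(bits) == 1
print("gates:", len(bits) - 1, "tree depth:", ceil(log2(len(bits))))

Both circuits use exactly n - 1 gates, matching the claim above; the balanced tree is what achieves the log n depth mentioned in the note.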

