MIT OpenCourseWare
http://ocw.mit.edu

6.080 / 6.089 Great Ideas in Theoretical Computer Science
Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms

6.080/6.089 GITCS    DATE
Lecture 12
Lecturer: Scott Aaronson    Scribe: Mergen Nachin

1 Review of last lecture

• NP-completeness in practice. We discussed many of the approaches people use to cope with NP-complete problems in real life. These include brute-force search (for small instance sizes), cleverly optimized backtrack search, fixed-parameter algorithms, approximation algorithms, and heuristic algorithms like simulated annealing (which don't always work but often do pretty well in practice).

• Factoring, as an intermediate problem between P and NP-complete. We saw how factoring, though believed to be hard for classical computers, has a special structure that makes it different from any known NP-complete problem.

• coNP.

2 Space complexity

Now we will categorize problems in terms of how much memory they use.

Definition 1 L is in PSPACE if there exists a poly-space Turing machine M such that for all x, x is in L if and only if M(x) accepts.

Just like with NP, we can define PSPACE-hard and PSPACE-complete problems. An interesting example of a PSPACE-complete problem is n-by-n chess: given an arrangement of pieces on an n-by-n chessboard, and assuming a polynomial upper bound on the number of moves, decide whether White has a winning strategy. (Note that we need to generalize chess to an n-by-n board, since standard 8-by-8 chess is a finite game that's completely solvable in O(1) time.)

Let's now define another complexity class called EXP, which is apparently even bigger than PSPACE.

Definition 2 Deterministic f(n)-Time, denoted DTIME(f(n)), is the class of decision problems solvable by a Turing machine in time O(f(n)).

Definition 3 Exponential Time, denoted EXP, equals the union of DTIME(2^{p(n)}) over all polynomials p.

Claim 4 PSPACE ⊆ EXP.

[Figure 1: A general picture that we believe: nested regions labeled P, NP, NP-Complete, PSPACE, PSPACE-Complete, and EXP. Figure by MIT OpenCourseWare.]

Proof Just like in problem set 2. In order for the machine to halt, any "configuration" must be visited at most once. Assuming an upper bound of p(n) on the number of tape squares, there are 2^{p(n)} possible settings of 0's and 1's on the tape, p(n) possible locations for the Turing machine head, and s possible states for the head. So an upper bound on the number of steps is 2^{p(n)} · p(n) · s, which is exponential in p(n).

Claim 5 P ⊆ PSPACE.

Proof Obviously a P machine can't access more than a polynomial number of memory locations.

Claim 6 NP ⊆ PSPACE.

Proof Enumerate all possible polynomial-size proofs, reusing the same space to check each one.

Remark Does NP = PSPACE? Just like P vs. NP, this is a longstanding open problem! It's conjectured that they're different.

Definition 7 LOGSPACE is the class of decision problems solvable by a Turing machine with O(log n) bits of memory.

Remark But isn't the input already n bits? A technicality: the LOGSPACE machine has read-only access to the input. It's only the read-write memory that's logarithmically bounded.

Claim 8 LOGSPACE ⊆ P.

Proof Same as Claim 4, except "scaled down by an exponential." There are at most s · 2^{c log n} · c log n = n^{O(1)} possible configurations of a Turing machine with c log n tape squares.

Remark Does LOGSPACE = P? Another longstanding open problem! Again, the conjecture is that they are different.
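To make the counting in Claims 4 and 8 concrete, here is a minimal Python sketch, not taken from the notes: the machine encoding and the step function are hypothetical stand-ins. The point it illustrates is that a deterministic machine which runs for more steps than it has distinct configurations must repeat a configuration, and therefore loops forever, so a space bound immediately yields a time bound.

```python
# Configuration-counting argument behind Claims 4 and 8 (illustrative sketch).
# `step` is an assumed transition function mapping a configuration to the
# next one, or to None once the machine halts.

def config_bound(num_states: int, input_len: int, space: int) -> int:
    """Upper bound on distinct configurations: control state, read-only
    input head position, work-tape head position, binary work-tape contents."""
    return num_states * (input_len + 2) * space * 2 ** space

def halts(step, start_config, num_states, input_len, space) -> bool:
    """True iff the space-bounded deterministic machine eventually halts."""
    config = start_config
    for _ in range(config_bound(num_states, input_len, space)):
        config = step(config)
        if config is None:        # reached a halting configuration
            return True
    # More steps than distinct configurations: by pigeonhole some
    # configuration repeated, so the machine is in an infinite loop.
    return False
```

With space = c·log n the bound is roughly s · n · (c log n) · n^c, i.e. polynomial in n, which is the simulation behind LOGSPACE ⊆ P; with space = p(n) it is roughly 2^{p(n)} · poly(n), which is the simulation behind PSPACE ⊆ EXP.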
We can prove LOGSPACE ≠ PSPACE, using a Space Hierarchy Theorem similar to the Time Hierarchy Theorem that we saw in Lecture 7. As usual, it's not hard to prove that more of the same resource (time, space, etc.) provides more computational power than less of it. The hard part is to compare different resources (like time vs. space, or determinism vs. nondeterminism).

3 Why do we believe P ≠ NP if we can't prove it?

• Hardness of solving NP-complete problems in practice: the empirical case.

• There are "vastly easier" problems than NP-complete ones (like factoring) that we already have no idea how to solve in P.

• As Gödel argued, P = NP would mean mathematical creativity could be automated. God would not be so kind!

• We know that LOGSPACE ≠ PSPACE. But this means either LOGSPACE ≠ P, or P ≠ NP, or NP ≠ PSPACE! And if one is true, then why not all three?

Incidentally, let me tell you one of the inside jokes of complexity theory. Let LINSPACE be the set of problems solvable in linear space. Then one of the very few separations we can prove is that LINSPACE ≠ P. Why? Well, suppose P = LINSPACE. Then P = PSPACE also. Why? Pad the inputs! But that means LINSPACE = PSPACE, which is ruled out by the Space Hierarchy Theorem! The punchline is that while we know P and LINSPACE are different, we have no idea which one is not contained in the other (or, as is most likely, whether neither is contained in the other). We just know that they're different!

4 Why is it so hard to prove P ≠ NP?

• Because P = NP!

• Because there really are lots of clever, non-obvious polynomial-time algorithms. Any proof that 3SAT is hard will have to fail to show 2SAT is hard, even though the "handwaving intuition" seems the same for both (there are 2^n possible solutions, and clearly each one takes at least constant time to check!). Simple as it seems, this criterion already rules out almost every amateur P ≠ NP proof in Prof. Aaronson's inbox...

• Let NPSPACE (Nondeterministic PSPACE) be the class of problems solvable by a PSPACE machine that can make nondeterministic transitions. Then, by analogy to P ≠ NP, you might conjecture that PSPACE ≠ NPSPACE. But Savitch's Theorem shows that this conjecture is false: PSPACE = NPSPACE. So any proof of P ≠ NP will have to fail when the polynomially-bounded resource is space rather than time. (A sketch of the recursion behind Savitch's Theorem appears after this list.)

• The Baker-Gill-Solovay argument. Techniques borrowed from logic and computability theory, like the ones used to prove the Time Hierarchy Theorem, all seem to relativize. In other words, they all seem to go through without change if we assume there's a magical oracle in the world ...
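The Savitch's Theorem bullet above deserves a concrete picture. Below is a minimal Python sketch of the recursion that drives the theorem, phrased as plain graph reachability; the graph, vertex names, and step budget are made up for illustration, whereas in the actual proof the vertices would be configurations of the nondeterministic space-bounded machine and the edges its allowed transitions.

```python
# Savitch-style reachability sketch: can we get from u to v in at most
# `steps` moves?  The trick is that the two recursive calls reuse the
# same space, so total space is (recursion depth) x (one midpoint),
# even though exponentially many paths get examined.

def reachable(graph, u, v, steps):
    if steps == 0:
        return u == v
    if steps == 1:
        return u == v or v in graph.get(u, ())
    half = steps // 2
    # "Guess" a midpoint w, then check both halves recursively.
    return any(reachable(graph, u, w, half) and
               reachable(graph, w, v, steps - half)
               for w in graph)

# Tiny made-up example: a path a -> b -> c -> d.
g = {"a": ["b"], "b": ["c"], "c": ["d"], "d": []}
print(reachable(g, "a", "d", 4))   # True
print(reachable(g, "d", "a", 4))   # False
```

Applied to the configurations of an NPSPACE machine, the recursion depth is about log2 of the (exponential) number of configurations, i.e. polynomial, and each level of the recursion stores only one candidate midpoint configuration, which is why the deterministic simulation still fits in polynomial space.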