MIT OpenCourseWare
http://ocw.mit.edu

6.080 / 6.089 Great Ideas in Theoretical Computer Science, Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

6.080/6.089 GITCS                                                March 4, 2008

Lecture 8

Lecturer: Scott Aaronson                                 Scribe: Hristo Paskov

1 Administrivia

There will be two scribe notes per person for the term. In addition, you may now use the book to review anything not clear from lecture, since we're on complexity.

2 Recap

In most cases of interest to us, the real question is not what's computable; it's what's computable with a reasonable amount of time and other resources. Almost every problem in science and industry is computable, but not all are efficiently computable. And even when we come to more speculative matters, like whether it's possible to build a machine that passes the Turing Test, we see that with unlimited resources it's possible, almost trivially: just create a giant lookup table. The real question is: can we build a thinking machine that operates within the time and space constraints of the physical universe? This is part of our motivation for switching focus from computability to complexity.

2.1 Early Results

2.1.1 Claude Shannon's counting argument

Most Boolean functions require enormous circuits to compute them, i.e., the number of gates is exponential in the number of inputs. (The reason is a counting argument: there are $2^{2^n}$ Boolean functions on $n$ inputs, but far fewer distinct small circuits, so most functions cannot have small circuits.) The bizarre thing is that we know this, yet we still don't have a good example of a function that has this property.

2.1.2 Time Hierarchy Theorem

Is it possible that everything that is computable at all is computable in linear time? No. There is so much that we don't know about the limits of feasible computation, so we must savor what we do know. There are more problems that we can solve in $n^3$ than in $n^2$ steps. Similarly, there are more problems that we can solve in $3^n$ than in $2^n$ steps. The reason this is true is that we can consider a problem like: "Given a Turing machine, does it halt in $n^3$ steps?" Supposing it were solvable in fewer than $n^3$ steps, we could take the program that would solve the problem and feed it itself as input to create a contradiction. This is the finite analog of the halting problem.

2.1.3 Existence of a fastest algorithm

One thing we have not mentioned is a bizarre phenomenon in runtime that people discovered in the 60's. Does there have to be a fastest algorithm for every problem? Or, on the contrary, could there be an infinite sequence of algorithms to solve a problem, each faster than the last, but no fastest one?

2.1.4 Concrete example: Matrix Multiplication

Given two $n \times n$ matrices, find their product:

\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix}
b_{11} & b_{12} & \cdots & b_{1n} \\
b_{21} & b_{22} & \cdots & b_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
b_{n1} & b_{n2} & \cdots & b_{nn}
\end{pmatrix}
=
\begin{pmatrix}
c_{11} & c_{12} & \cdots & c_{1n} \\
c_{21} & c_{22} & \cdots & c_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
c_{n1} & c_{n2} & \cdots & c_{nn}
\end{pmatrix}
\]

2.1.5 Straightforward way

The straightforward way takes $n^3$ steps, because of the way we multiply rows and columns. However, there do exist better algorithms (a sketch of the straightforward method appears after the next subsection).

2.1.6 Strassen's algorithm

In 1968, Strassen found an algorithm that takes only $O(n^{2.78})$ steps. His algorithm used a divide-and-conquer approach, repeatedly dividing the original matrix into $\frac{n}{2} \times \frac{n}{2}$ matrices and combining them in a clever way. It's a somewhat practical algorithm: people actually use it in scientific computing and other areas. Strassen's algorithm served as one of the early inspirations for the theory of efficient algorithms: after all, if people had missed something so basic for more than a century, then what else might be lurking in algorithm-land?
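To make the straightforward method of 2.1.5 concrete, here is a minimal sketch in Python. The code is not from the lecture; the function name matmul_naive and the list-of-lists representation of matrices are my own choices. Each of the $n^2$ output entries is an inner product of a row of A with a column of B, which is where the $n^3$ scalar multiplications come from.

    def matmul_naive(A, B):
        # Multiply two n x n matrices, given as lists of lists, the obvious way.
        # Each entry of C is an inner product of a row of A with a column of B,
        # so the total work is n * n * n scalar multiplications.
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                total = 0
                for k in range(n):
                    total += A[i][k] * B[k][j]
                C[i][j] = total
        return C

For example, matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].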
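And here is a hedged sketch of the divide-and-conquer idea behind Strassen's algorithm from 2.1.6, again in Python. It is a simplified illustration rather than the lecture's presentation: it assumes $n$ is a power of 2, the helper names (add, sub, split_blocks, join_blocks) are invented, and M1 through M7 are the standard Strassen product combinations. The point is that seven recursive multiplications of $\frac{n}{2} \times \frac{n}{2}$ blocks, instead of the obvious eight, give the recurrence $T(n) = 7\,T(n/2) + O(n^2)$, whose solution is $O(n^{\log_2 7})$, an exponent of about 2.8.

    def add(X, Y):
        # Entrywise sum of two equal-size matrices.
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        # Entrywise difference of two equal-size matrices.
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def split_blocks(X):
        # Split an n x n matrix (n even) into four n/2 x n/2 blocks.
        h = len(X) // 2
        return ([row[:h] for row in X[:h]], [row[h:] for row in X[:h]],
                [row[:h] for row in X[h:]], [row[h:] for row in X[h:]])

    def join_blocks(C11, C12, C21, C22):
        # Reassemble four n/2 x n/2 blocks into one n x n matrix.
        top = [r1 + r2 for r1, r2 in zip(C11, C12)]
        bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
        return top + bottom

    def strassen(A, B):
        # Multiply two n x n matrices (n a power of 2) using 7 recursive products.
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        A11, A12, A21, A22 = split_blocks(A)
        B11, B12, B21, B22 = split_blocks(B)
        M1 = strassen(add(A11, A22), add(B11, B22))
        M2 = strassen(add(A21, A22), B11)
        M3 = strassen(A11, sub(B12, B22))
        M4 = strassen(A22, sub(B21, B11))
        M5 = strassen(add(A11, A12), B22)
        M6 = strassen(sub(A21, A11), add(B11, B12))
        M7 = strassen(sub(A12, A22), add(B21, B22))
        C11 = add(sub(add(M1, M4), M5), M7)
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(sub(add(M1, M3), M2), M6)
        return join_blocks(C11, C12, C21, C22)

In practice, as subsection 2.1.8 below notes, one would fall back to the naive method under some block-size cutoff, since the recursion's constant factors dominate for small matrices.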
2.1.7 Faster and faster algorithms

Some time later, an algorithm that takes $O(n^{2.55})$ time was found. The best currently known algorithm is due to Coppersmith and Winograd and takes $O(n^{2.376})$ time. Many people believe that we should be able to do better. The natural limit is $n^2$, since in the best case we have to look at all the entries of the matrices. Some people conjecture that for all $\epsilon > 0$ there exists an algorithm that takes $O(n^{2+\epsilon})$ time.

2.1.8 Practical considerations

If the matrices are reasonably small, then you're better off multiplying them with the naive $O(n^3)$ algorithm. It's an empirical fact that as you go to asymptotically faster algorithms, the constant prefactor seems to get larger and larger. You would want to switch to a more sophisticated algorithm only for larger matrices, since there the constant prefactor doesn't matter as much. For all we know, it could be that as you go to bigger and bigger matrices, there's an unending sequence of algorithms that come into play. Or the sequence could terminate; we don't know yet. Note that we can't obviously just make an algorithm that chooses the best algorithm for a given matrix size, since each new, faster algorithm might require some extra insight to come up with. If there were an algorithm to produce these algorithms, then we'd have a single algorithm after all. This illustrates the notion of an infinite progression of algorithms with no best one. Note that this sequence is countable, since there's only a countable number of algorithms.

2.1.9 Blum Speedup Theorem

In 1967, Blum showed that we can construct problems for which there's provably no best algorithm. Indeed, there exists a problem such that for every $O(f(n))$ algorithm there's also an $O(\log f(n))$ algorithm. How could this be? The problem would take a lot of time to solve by any algorithm (some giant stack of exponentials), such that taking the log any number of times won't get it down to a constant. We're not going to prove this theorem, but be aware that it exists.

2.1.10 Summary

So for every problem, there won't necessarily be a best algorithm. We don't know how often this weird phenomenon arises, but we do know that it can in principle occur. Incidentally, there's a lesson to be learned from the story of matrix multiplication: it was intuitively obvious to people that $n^3$ was the best we could do, and then we came up with algorithms that beat this bound. This illustrates why it's so hard to prove lower bounds: you have to rule out every possible way of being clever, and there are many non-obvious ways.

3 The meaning of efficient

We've been talking about efficient algorithms, but what exactly do we mean by that? One definition computer scientists find useful, though not the only one, is that efficient = polynomial time. That is, an algorithm is efficient if there exists a $k$ such that all instances of size $n$ are solved in $O(n^k)$ time. Things like $n \log n$ are not polynomials, but there is a polynomial larger than them, hence they're also efficient. No sooner do you postulate this criterion for efficiency than people think ...
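For reference, the efficiency criterion just stated can be written out formally; this is a standard rendering of the definition in the paragraph above, not additional material from the lecture:

\[
M \text{ is efficient} \iff \exists k, c > 0 \;\; \forall n \;\; \forall x \text{ with } |x| = n : \;\; \mathrm{time}_M(x) \le c \cdot n^k .
\]

For example, a running time of $n \log_2 n$ is not itself a polynomial, but $n \log_2 n \le n^2$ for all $n \ge 1$, so any such algorithm is in particular an $O(n^2)$ algorithm and counts as efficient.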

