MIT 6 01 - Algorithms and Complexity

6.081, Spring Semester, 2007—Lecture 4 Notes

MASSACHVSETTS INSTITVTE OF TECHNOLOGY
Department of Electrical Engineering and Computer Science
6.081—Introduction to EECS I
Spring Semester, 2007
Lecture 4 Notes

Algorithms and Complexity

Problems and Algorithms

In computer science, we speak of problems, algorithms, and implementations. These things are all related, but not the same, and it's important to understand the difference and keep straight in our minds which one we're talking about.[1]

Generally speaking, a problem is defined by a goal: we'd like to have this list of numbers sorted, to map an image represented by an array of digital pixels into a string describing the image in English, or to compute the nth digit of π. The goals are about a pure computational problem, in which inputs are provided and an output is produced, rather than about an interactive process between computer and world. So, for example, we wouldn't consider "keep the robot driving down the center of the hallway" to be an algorithmic goal. (That's a great and important type of goal, but it needs to be treated with a different view, as we shall see.)

An algorithm is a step-by-step strategy for solving a problem. It's sometimes likened to a recipe, but the strategy can involve potentially unboundedly many steps, controlled by iterative or recursive constructs, like "do something until a condition happens." Generally, algorithms are deterministic, but there is an important theory and practice of randomized algorithms. An algorithm is correct if it terminates with an answer that satisfies the goal of the problem. There can be many different algorithms for solving a particular problem: you can sort numbers by finding the smallest, then the next smallest, etc.; or you can sort them by dividing them into two piles (all the big ones in one, all the small ones in the other), then dividing those piles, etc. In the end these methods both solve the problem, but they involve very different sets of steps.
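The two sorting strategies described above can be sketched in Python. This is an illustrative sketch, not code from the notes; the function names are invented here:

```python
def sort_by_selection(nums):
    # Repeatedly find the smallest remaining number and append it to the result.
    remaining = list(nums)
    result = []
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        result.append(smallest)
    return result

def sort_by_piles(nums):
    # Divide the numbers into two piles around a pivot (small ones in one pile,
    # big ones in the other), sort each pile the same way, then recombine.
    if len(nums) <= 1:
        return list(nums)
    pivot = nums[0]
    small = [x for x in nums[1:] if x < pivot]
    big = [x for x in nums[1:] if x >= pivot]
    return sort_by_piles(small) + [pivot] + sort_by_piles(big)
```

Both produce the same sorted output, but by very different sequences of steps; the second strategy is essentially the idea behind quicksort.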
There is a formal definition of the class of algorithms as being made up of basic computational steps, and a famous thesis, due to Alonzo Church and Alan Turing, that any function that could possibly be computed can be computed by a Turing machine, which is equivalent to what we know of as a computer, but with a potentially infinite memory.

An implementation is an actual physical instantiation of an algorithm. It could be a particular computer program, in Python or Scheme or FORTRAN, or a special-purpose circuit, or (my personal favorite) the consequence of a bunch of water-driven valves and fountains in your garden. There is some latitude in going from an algorithm to an implementation; we usually expect the implementation to have freedom to choose the names of variables, or whether to use for or while or recursion, as long as the overall structure and basic number and type of steps remains the same.

It is very important to maintain these distinctions. When you are approaching a problem that requires a computational solution, the first step is to state the problem (goal) clearly and accurately. When you're doing that, don't presuppose the solution: you might add some extra requirements that your real problem doesn't have, and thereby exclude some reasonable solutions.

[1] This distinction is nicely described in David Marr's book, Vision.

Given a problem statement, you can consider different algorithms for solving the problem. Algorithms can be better or worse along different dimensions: they can be slower or faster to run, be easier or harder to implement, or require more or less memory. In the following, we'll consider the running-time requirements of some different algorithms.

It's also important to consider that, for some problems, there may not be any algorithm that is even reasonably efficient that can be guaranteed to get the exact solution to your problem.
(Finding the minimum value of a function, for example, is generally arbitrarily difficult.) In such cases, you might also have to consider trade-offs between the efficiency of an algorithm and the quality of the solutions it produces.

Once you have an algorithm, you can decide how to implement it. These choices are in the domain of software engineering (unless you plan to implement it using fountains or circuits). The goal will be to implement the algorithm as stated, with goals of maintaining efficiency as well as minimizing time to write and debug the code, and making it easy for other software engineers to read and modify.

Computational Complexity

Here's a program for adding the integers up to n:

def sumInts(n):
    count = 0
    i = 0
    while i < n:
        count = count + i
        i = i + 1
    return count

How long do we expect this program to run? Answering that question in detail requires a lot of information about the particular computer we're going to run it on, what other programs are running at the same time, who implemented the Python interpreter, etc. We'd like to be able to talk about the complexity of programs without getting into quite so much detail.

We'll do that by thinking about the order of growth of the running time. That is, as we increase something about the problem, how does the time to compute the solution increase? To be concrete, let's think about the order of growth of the computation time as we increase n, the number of integers to be added up. In actual fact, on some particular computer (with no other processes running), this program would take some time R(n) to run on an input of size n.
This is too specific to be useful as a general characterization; instead, we'll say that

Definition 1. For a process that uses resources R(n) for a problem of size n, R(n) has an order of growth Θ(f(n)) if there are positive constants k1 and k2, independent of n, such that

    k1 f(n) ≤ R(n) ≤ k2 f(n),

for n sufficiently large.

To get an idea of what this means, let's explore the assertion that our sumInts program has order of growth Θ(n). This means that there are some constants, let's imagine 5 and 10, such that the time to run our program is always between 5n and 10n milliseconds (you're free to pick the units along with the constants). This means that each new integer we propose to add in doesn't cost any more than any of the previous integers. Looking at the program above, that seems roughly right. We'd say, then, that this is a linear-time algorithm.

There are at least a couple of things to think about, before
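One concrete way to see this order of growth, without timing any particular machine, is to count how many times the loop body runs as n grows. The instrumented function below is a sketch (not from the notes) that mirrors the sumInts loop:

```python
def sum_ints_counting_steps(n):
    # Same loop as sumInts, but also count the number of loop iterations.
    count = 0
    i = 0
    steps = 0
    while i < n:
        count = count + i
        i = i + 1
        steps = steps + 1
    return count, steps
```

The loop runs exactly n times for input n, so doubling n doubles the step count; if each iteration costs at most some fixed amount of time, the total running time is trapped between k1·n and k2·n for suitable constants, which is exactly the Θ(n) condition in Definition 1.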

