15-251: Great Theoretical Ideas in Computer Science
This is The Big Oh!
Lecture 21 (March 31, 2009)

How to add 2 n-bit numbers
[Slide animation: two rows of n bits added column by column, the carry rippling from right to left — "Grade school addition"]

T(n) = amount of time grade school addition uses to add two n-bit numbers

Time complexity of grade school addition
What do we mean by "time"?

Our Goal
We want to define "time" in a way that transcends implementation details and allows us to make assertions about grade school addition in a very general yet useful way.

Roadblock???
A given algorithm will take different amounts of time on the same inputs depending on such factors as:
- Processor speed
- Instruction set
- Disk speed
- Brand of compiler

On any reasonable computer, adding 3 bits and writing down the two-bit answer can be done in constant time. Pick any particular computer M and define c to be the time it takes to perform that one-column addition on M.
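The column-by-column procedure can be written out as a loop that does one constant-time, three-bit addition per column. This is an illustrative Python sketch, not from the lecture; representing numbers as least-significant-first bit lists is my choice.

```python
def grade_school_add(a_bits, b_bits):
    """Add two n-bit numbers given as lists of bits, least significant first.

    Each iteration handles one column: add the two operand bits plus the
    carry (at most 3 bits), write one sum bit, keep one carry bit.
    n columns => n constant-time steps => linear time overall.
    """
    n = len(a_bits)
    result, carry = [], 0
    for i in range(n):                 # one pass per column
        s = a_bits[i] + b_bits[i] + carry
        result.append(s % 2)           # the sum bit for this column
        carry = s // 2                 # the carry into the next column
    result.append(carry)               # final carry gives the (n+1)st bit
    return result

# 6 + 3 = 9: bits least significant first, 0110 -> [0,1,1,0], 0011 -> [1,1,0,0]
print(grade_school_add([0, 1, 1, 0], [1, 1, 0, 0]))  # -> [1, 0, 0, 1, 0]
```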
Total time to add two n-bit numbers using grade school addition: cn
(i.e., c time for each of n columns)

On another computer M', the time for one column may be a different constant c'.
Total time to add two n-bit numbers using grade school addition: c'n
(c' time for each of n columns)

[Plot: time vs. # of bits in the numbers — Machine M gives the line cn, Machine M' the line c'n]

The fact that we get a line is invariant under changes of implementation. Different machines result in different slopes, but the time taken grows linearly as input size increases.

Thus we arrive at an implementation-independent insight:
Grade School Addition is a linear-time algorithm.

This process of abstracting away details and determining the rate of resource usage in terms of the problem size n is one of the fundamental ideas in computer science.

Time vs Input Size
For any algorithm, define Input Size = # of bits to specify its inputs.
Define TIME(n) = the worst-case amount of time used by the algorithm on inputs of size n.
We often ask: What is the growth rate of TIME(n)?

How to multiply 2 n-bit numbers
[Slide diagram: the grade school multiplication tableau — n shifted rows of n bits each, n^2 bit-products in all]
The total time is bounded by cn^2 (abstracting away the implementation details).

Grade School Addition: Linear time
Grade School Multiplication: Quadratic time
[Plot: time vs. # of bits in the numbers — the line cn versus the parabola cn^2]
No matter how dramatic the difference in the constants, the quadratic curve will eventually dominate the linear curve.

How much time does it take to square the number n using grade school multiplication?
Input size is measured in bits, unless we say otherwise, and the number n takes about log n bits to write down. Grade school multiplication is quadratic in the number of bits, so squaring the number n takes c(log n)^2 time.

How much time does it take?
Nursery School Addition
Input: Two n-bit numbers, a and b
Output: a + b
Start at a and increment (by 1) b times.
If b = 000...0000, then NSA takes almost no time.
If b = 111...1111, then NSA takes c·n·2^n time.

Worst Case Time
Worst Case Time T(n) for algorithm A:
T(n) = Max over all permissible inputs X of size n of Runtime(A, X)
Runtime(A, X) = running time of algorithm A on input X.

What is T(n)?
Kindergarten Multiplication
Input: Two n-bit numbers, a and b
Output: a * b
Start with a and add a, b−1 times.
Remember, we always pick the WORST CASE input for the input size n. Thus, T(n) = c·n·2^n.

Thus, Nursery School addition and Kindergarten multiplication are exponential time. They scale HORRIBLY as input size grows. Grade school methods scale polynomially: just linear and quadratic. Thus, we can add and multiply fairly large numbers.

If T(n) is not polynomial, the algorithm is not efficient: the run time scales too poorly with the input size. This will be the yardstick with which we will measure "efficiency".

Multiplication is efficient; what about "reverse multiplication"? Let's define FACTORING(N) to be any method to produce a non-trivial factor of N, or to assert that N is prime.

Factoring The Number N By Trial Division
Trial division up to √N:
for k = 2 to √N do
    if k | N then
        return "N has a non-trivial factor k"
return "N is prime"
This takes c·√N·(log N)^2 time, if each division takes c(log N)^2 time.
Is this efficient?
No! The input length is n = log N.
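The trial-division loop can be sketched in Python (an illustrative version, not from the slides). Checking divisors only up to √N suffices because any factorization N = a·b with 1 < a ≤ b has a ≤ √N.

```python
import math

def factor_by_trial_division(N):
    """Return a non-trivial factor of N, or None if N is prime.

    Trying k = 2, 3, ..., floor(sqrt(N)) is about sqrt(N) = 2^(n/2)
    divisions for an n-bit input -- exponential in the input length
    n = log N, even though it looks tame written in terms of N.
    """
    for k in range(2, math.isqrt(N) + 1):
        if N % k == 0:           # k | N: found a non-trivial factor
            return k
    return None                  # no divisor up to sqrt(N) => N is prime

print(factor_by_trial_division(91))   # -> 7    (91 = 7 * 13)
print(factor_by_trial_division(97))   # -> None (97 is prime)
```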
Hence we're using c·2^(n/2)·n^2 time.
Can we do better? We know of methods for FACTORING that are sub-exponential (about 2^(n^(1/3)) time), but nothing efficient.

Notation to Discuss Growth Rates
For any monotonic function f from the positive integers to the positive integers, we say "f = O(n)" or "f is O(n)" if some constant times n eventually dominates f.
[Formally: there exists a constant c such that for all sufficiently large n: f(n) ≤ c·n]
[Plot: time vs. # of bits in numbers — f = O(n) means that there is a line that can be drawn that stays above f from some point on]

Other Useful Notation: Ω
For any monotonic function f from the positive integers to the positive integers, we say "f = Ω(n)" or "f is Ω(n)" if f eventually dominates some constant times n.
[Formally: there exists a constant c such that for all sufficiently large n: f(n) ≥ c·n]
[Plot: time vs. # of bits in numbers]
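The "for all sufficiently large n" in these definitions can be made concrete numerically. In this sketch the constants (10^6 for the line, 10^-3 for the parabola) are my choices, picked to be as lopsided as possible: even so, the quadratic overtakes the linear function, which is the sense in which constants don't affect growth rates.

```python
# Illustrative constants (my choice): a generously tuned linear cost
# versus a badly tuned quadratic one.
slow_linear = lambda n: 1_000_000 * n        # c * n    with c  = 10^6
fast_quadratic = lambda n: 0.001 * n * n     # c' * n^2 with c' = 10^-3

# Double n until the quadratic curve rises above the line; "eventually
# dominates" promises such an n exists (here the true crossover is n = 10^9).
n = 1
while fast_quadratic(n) <= slow_linear(n):
    n *= 2
print(n)  # -> 1073741824, i.e. 2^30, the first doubling step past 10^9
```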