Toronto CSC 148 - Time Analysis of Algorithms Without Recursion

TIME ANALYSIS OF ALGORITHMS WITHOUT RECURSION

Time efficiency

We like to know the time efficiency of a program for several reasons:
• To get an estimate of how long a program will run. (Waiting for it to finish may not be feasible!)
• To get an estimate of how large an input the program can handle without taking too long.
• To compare the efficiency of different programs for solving the same problem.
We could run timing tests on the program, but we prefer to analyze the efficiency of the algorithm before we've even written the program. Why?

Big differences

We have seen that, for a given problem, one algorithm may be vastly more efficient than others. Examples:
Searching a list of n elements:
• linear search takes time on the order of n.
• binary search takes time on the order of log_2 n.
Calculating the nth Fibonacci number:
• the obvious recursive algorithm takes time on the order of the Fibonacci number itself, roughly φ^n.
• using a helper function, the algorithm takes time on the order of n.
[Is "on the order of" bothering you? Good; we'll define things properly very soon.]

Small differences

We can sometimes fine-tune a given algorithm to make it a little faster.
Example: Linear search with a dummy record is faster than ordinary linear search because there is less work per iteration.
(But any kind of linear search takes time on the order of n, because there are roughly n iterations in the worst case.)

Big differences are the first priority

When analyzing algorithms for a given problem, it makes sense to pay attention to the big differences first.
• Example: We prefer to calculate Fibonacci numbers using the more complicated algorithm rather than the simple but slow one.
Only then does it make sense to worry about the small differences.
• Example: There is no sense in fine-tuning the simple algorithm. Any effort towards fine-tuning should be spent on the helper-function algorithm, where for example we might be able to cut out one parameter. Both algorithms are sketched below.
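To make the Fibonacci comparison concrete, here is a minimal Python sketch of both algorithms. The names fib_naive and fib_linear and the exact shape of the helper are illustrative assumptions, not taken from the lecture; the course's helper-function version may differ in detail but has the same order-n running time.

    def fib_naive(n):
        """Return the nth Fibonacci number (fib(0) = 0, fib(1) = 1).

        Recomputes the same subproblems over and over, so its running
        time grows roughly like the answer itself: on the order of φ^n.
        """
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_linear(n):
        """Return the nth Fibonacci number in time on the order of n."""
        def helper(i, prev, curr):
            # prev and curr hold fib(i) and fib(i + 1); each call does
            # constant work, and there are only n calls in total.
            if i == n:
                return prev
            return helper(i + 1, curr, prev + curr)
        return helper(0, 0, 1)

For example, fib_linear(30) returns 832040 essentially instantly, while fib_naive(30) makes over two million recursive calls to produce the same answer.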
What we need

We need a technique for analyzing algorithm efficiency that:
• is precise about what we mean by "on the order of"
• can distinguish the big differences
• (ideally) allows for quick and easy analysis
Solution: We will learn a technique that estimates time to within a constant factor. Because we ignore the constant factors, the analysis is both easier and machine-independent. And it still makes the big distinctions.

Ignoring constant factors

Consider two programs A and B for solving a given problem, with these running times on inputs of size n:
    T_A(n) = n^3 and T_B(n) = 8n + 3.
[Plot: T_A(n) and T_B(n) for n = 1 to 5, with T(n) from 0 to 125; the two curves cross near n = 3.]
Which program is faster?
For inputs of size less than 3: program A.
For inputs of size greater than 3: program B.
n = 3 is called the breakpoint.

B is eventually faster, no matter what the constants

What if program A were a million times faster and program B a million times slower, i.e., if:
    T_A(n) = n^3 / 1,000,000 and T_B(n) = (8n + 3) × 1,000,000.
It would still be true that:
• B would eventually be faster than A, and
• B's superiority would grow as n increases.
(The breakpoint would change, however.)
This is true no matter what the constants are!
Conclusion: For large values of n, the form of the mathematical function has more effect on its growth rate than a constant multiple. The short sketch below tabulates the original T_A and T_B to locate the breakpoint.
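To see the breakpoint directly, here is a small Python sketch that simply tabulates the two running-time formulas above (the names t_a and t_b are ours, not from the lecture):

    def t_a(n):
        """Running time of program A: n^3."""
        return n ** 3

    def t_b(n):
        """Running time of program B: 8n + 3."""
        return 8 * n + 3

    # Tabulate both functions and report which program is faster.
    for n in range(1, 7):
        faster = "A" if t_a(n) < t_b(n) else "B"
        print(n, t_a(n), t_b(n), faster)

At n = 1 and 2 program A wins (1 vs. 11 and 8 vs. 19); at the breakpoint n = 3 the two are tied at 27; from n = 4 onward program B wins, and its lead keeps growing.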
Growth rates of various functions

Approximate value of T(n) for various input sizes:

    T(n)        n = 10    n = 100    n = 1,000    n = 10,000    n = 10^5
    log n            3          6            9            13          16
    √n               3         10           31           100         316
    n               10        100        1,000        10,000        10^5
    n log n         30        600        9,000     13 × 10^4   16 × 10^5
    n^2            100     10,000         10^6          10^8       10^10
    n^3          1,000       10^6         10^9         10^12       10^15
    2^n          1,024      10^30       10^300      10^3,000   10^30,000

There is a computational cliff when we reach "exponential" functions: ones in which the variable appears in the exponent.
To get a sense of scale:
• there are 10^43 atoms in the universe
• there have been 10^17 seconds since the big bang
If we can perform 1 billion operations per second,
• 10^16 operations take 1 year
• 10^20 operations take 10,000 years!
Would it help if we could do 100 million billion per second? How about 10^50?

Constants Can Matter

Constant factors are insignificant relative to the form of the mathematical function. But they are important in these situations:
• When we've already picked the algorithm to use, and we're ready to fine-tune it. We may be able to reduce the constants.
• When two algorithms have the same order. Considering the constants may reveal that one is faster than the other.
• When the problem size is small. We may be below the breakpoint.
Sometimes, we need to know the actual running time of a program, e.g., with real-time systems. We can determine running time by directly measuring it (on the desired computer and for various sizes of input with various characteristics).

Towards a Definition

Say we have an algorithm (or program) whose running time on an input of size n is f(n). We don't know what f(n) is, but we want to estimate it to within a constant factor. Let's call that estimate g(n).
We would be happy with our estimate g(n) even if the relationship between g(n) and f(n) holds only after some point B. In other words, from B onwards, we want g(n) to be an upper bound on f(n), to within a constant factor.
That is, we want there to be constants c and B such that
    f(n) ≤ c · g(n) for all n ≥ B.
We don't care what these constants are; we just need them to exist.

O Notation, or "big-oh" notation

Consider any two functions f, g defined on the nonnegative integers N = {0, 1, 2, ...} such that f(n), g(n) ≥ 0 for all n ∈ N.
Definition: f(n) is O(g(n)) if there exist positive constants c and B such that
    f(n) ≤ c · g(n) for all n ≥ B.
This means that, to within a constant factor, f(n) grows no faster than g(n). We pronounce this:
    f has order g, or
    f is "oh" or "big-oh" of g.
In CSC 148, we are not going to use this formal definition in any particularly rigorous way, but whenever your instructor is covering an example, you need to realize that this definition is in the back of her or his mind.

2 Key Properties of O Notation

Constant factors disappear. If d > 0, then d · f(n) is O(f(n)) and f(n) is O(d · f(n)).
Examples:
• 6n and n/2 are O(n).
• n is O(29n) and O(642n).
Low-order terms disappear. If h(n) is generally smaller than g(n), then we can ignore h(n). More formally: if lim_{n→∞} h(n)/g(n) = 0, then g(n) + h(n) is O(g(n)).
Examples:
• n^5 + n^3 + 6n^2 is O(n^5).
• n^2 + n(log n)^3 is O(n^2).

More O Notation Examples

6n is O(n) and O(3n) and O(2n). In fact, it is all of these:
    O(n), O(n + 8), O(3n), O(6n − 99), O(n^3), O(2^n), ...
But it is not any of these:
    O(log n), ...
A sketch that checks such claims against the formal definition follows.
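The definition can be sanity-checked mechanically. Here is a hedged Python sketch (holds_up_to is a made-up helper, and a finite loop can only gather evidence, since the definition quantifies over every n ≥ B) that tries supplied constants c and B over a range of inputs:

    import math

    def holds_up_to(f, g, c, B, limit=10000):
        """Check that f(n) <= c * g(n) for every n with B <= n <= limit.

        A True result is evidence, not a proof, because the definition
        requires the inequality for all n >= B, not just up to limit.
        """
        return all(f(n) <= c * g(n) for n in range(B, limit + 1))

    # 6n is O(n): the constants c = 6 and B = 1 witness the definition.
    print(holds_up_to(lambda n: 6 * n, lambda n: n, c=6, B=1))  # True

    # 6n is not O(log n): any fixed c fails once n is large enough.
    print(holds_up_to(lambda n: 6 * n, lambda n: math.log(n), c=1000, B=2))  # False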

