Analysis of Algorithms

Contents: Time and space; What does "size of the input" mean?; Characteristic operations; Exact values; Higher-level languages; Average case and worst case; Constant time; Linear time; Constant time is (usually) better than linear time; The array subset problem; member; subset; Analysis of array subset algorithm; What about the constants?; Simplifying the formulae; Big-O notation; Can we justify Big-O notation?; y = x^2 + 3x + 5, for x = 1..10; y = x^2 + 3x + 5, for x = 1..20; Common time complexities; The End

Time and space
• To analyze an algorithm means:
  – developing a formula that predicts how fast the algorithm runs, based on the size of the input (time complexity), and/or
  – developing a formula that predicts how much memory the algorithm requires, based on the size of the input (space complexity)
• Usually time is our biggest concern
  – Most algorithms require a fixed amount of space

What does "size of the input" mean?
• If we are searching an array, the "size" of the input is the size of the array
• If we are merging two arrays, the "size" is the sum of the two array sizes
• If we are computing the nth Fibonacci number, or the nth factorial, the "size" is n
• We choose the "size" to be the parameter that most influences the actual time/space required
  – It is usually obvious what this parameter is
  – Sometimes we need two or more parameters

Characteristic operations
• In computing time complexity, one good approach is to count characteristic operations
  – What counts as a "characteristic operation" depends on the particular problem
  – If searching, it might be comparing two values
  – If sorting an array, it might be swapping the contents of two array locations
  – Sometimes we just look at how many times the innermost loop is executed

Exact values
• It is sometimes possible to compute exact time and space requirements in assembly language
  – We know exactly how many cycles each machine instruction takes
  – For a problem with a known sequence of steps (factorial, Fibonacci), we can determine how many instructions of each type are required
• However, often the exact sequence of steps cannot be known in advance
  – The steps required to sort an array depend on the actual numbers in the array (which we do not know in advance)

Higher-level languages
• In a higher-level language (such as Java), we do not know how long each operation takes
  – Which is faster, x < 10 or x <= 9?
  – We don't know exactly what the compiler does with this
  – The compiler probably optimizes the test anyway (replacing the slower version with the faster one)
• In a higher-level language we cannot do an exact analysis
  – Our timing analyses will use major oversimplifications
  – Nevertheless, we can get some very useful results

Average case and worst case
• Usually we would like to find the average time to perform an algorithm
• However,
  – sometimes the "average" isn't well defined
  – sometimes finding the average is too difficult
• Often we have to be satisfied with finding the worst (longest) time required
  – Sometimes this is even what we want (say, for time-critical operations)

Constant time
• Constant time means there is some constant k such that the operation always takes at most k nanoseconds
• A Java statement takes constant time if:
  – it doesn't include a loop, and
  – it doesn't call a method whose time is unknown or isn't constant
• If a statement involves a choice (if or switch) among operations, each of which takes constant time, we consider the statement to take constant time
  – This is consistent with worst-case analysis

Linear time
• We may not be able to predict to the nanosecond how long a Java program will take, but we do know some things about timing:

      for (i = 0; i < n; i++) j = j * i;

  takes linear time: some constant (call it k) times n
  – The constant k is how long one pass through the loop takes (the time for j = j * i, plus the loop overhead)
  – n is the number of times through the loop (we can use this as the "size" of the problem)
  – The total time is k*n + c, for some constants k and c; this time is linear in n

Constant time is (usually) better than linear time
• Suppose we have two algorithms to solve a task:
  – Algorithm A takes 5000 time units
  – Algorithm B takes 100*n time units
• Which is better?
  – Clearly, algorithm B is better if our problem size is small, that is, if n < 50
  – Algorithm A is better for larger problems, with n > 50
  – So B is better on small problems, which are quick anyway
  – But A is better for large problems, where it matters more
• We usually care most about very large problems

The array subset problem
• Suppose you have two sets, represented as unsorted arrays:

      int[] sub = { 7, 1, 3, 2, 5 };
      int[] sup = { 8, 4, 7, 1, 2, 3, 9 };

  and you want to test whether every element of the first set (sub) also occurs in the second set (sup):

      System.out.println(subset(sub, sup));

  (super would be the natural name for the larger array, but super is a reserved word in Java, so we call it sup)
• The answer in this case should be false, because sub contains the integer 5 and sup doesn't
• Let's start with a helper method, member, to test whether one number is in an array

member

      boolean member(int x, int[] a) {
          int n = a.length;
          for (int i = 0; i < n; i++) {
              if (x == a[i]) return true;
          }
          return false;
      }

• If x is not in a, the loop executes n = a.length times
  – This is the worst case
• If x is in a, the loop executes n/2 times on average
• Either way, linear time is required: k*n + c

subset

      boolean subset(int[] sub, int[] sup) {
          int m = sub.length;
          for (int i = 0; i < m; i++) {
              if (!member(sub[i], sup)) return false;
          }
          return true;
      }

• The loop (and the call to member) will execute:
  – m = sub.length times, if sub is a subset of sup
  – fewer than sub.length times otherwise
• This is a linear number of times through the loop
• But the loop body doesn't take constant time, since it calls member, which takes linear time

Analysis of array subset algorithm
• We've seen that the loop in subset executes m = sub.length times (in the worst case)
• Also, the loop in subset calls member, which executes in time linear in n = sup.length
• Hence, the execution time of the array subset method is roughly m*n, along with assorted constants
  – We go through the loop in subset m times, calling member each time
  – We go through the loop in member n times
  – If m and n are similar, this is roughly quadratic

What about the constants?
• Forget the constants!
• An added constant, f(n) + c, becomes less and less important as n gets larger
• A constant multiplier, k*f(n), does not get less important, but it does not change how the time grows with n
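The trade-off between the constant-time algorithm A (5000 time units) and the linear-time algorithm B (100*n time units) can be checked with a few lines of Java. This is an illustrative sketch: the class name CrossoverDemo and the costA/costB functions are mine, chosen only to encode the numbers from the slides.

```java
// CrossoverDemo.java -- compare the two hypothetical costs from the
// slides: algorithm A at a flat 5000 time units vs. algorithm B at
// 100*n time units. The crossover point is n = 50.
public class CrossoverDemo {
    static long costA(long n) { return 5000; }       // constant time
    static long costB(long n) { return 100 * n; }    // linear time

    public static void main(String[] args) {
        for (long n : new long[] { 10, 50, 100, 1000 }) {
            String winner = costA(n) < costB(n) ? "A"
                          : costA(n) > costB(n) ? "B" : "tie";
            System.out.println("n = " + n + ": A = " + costA(n)
                    + ", B = " + costB(n) + " -> " + winner);
        }
    }
}
```

For n = 10, B costs 1000 and wins; at n = 50 the two tie; for n = 100 and beyond, A wins by a growing margin, which is why we usually care most about the large-n behavior.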
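The member and subset methods from the slides can be assembled into a single runnable class. The class name SubsetDemo and the main method are additions of mine; the larger array is named sup rather than super because super is a reserved word in Java.

```java
// SubsetDemo.java -- the member/subset methods from the slides,
// collected into one runnable class. (Class name and main are
// illustrative additions; `super` is a reserved word in Java,
// so the larger array is called `sup`.)
public class SubsetDemo {

    // Linear search: true if x occurs in a.
    // Worst case (x absent): n = a.length comparisons.
    static boolean member(int x, int[] a) {
        int n = a.length;
        for (int i = 0; i < n; i++) {
            if (x == a[i]) return true;
        }
        return false;
    }

    // True if every element of sub also occurs in sup.
    // Up to m = sub.length passes, each doing up to n work in member,
    // so roughly m*n steps in the worst case.
    static boolean subset(int[] sub, int[] sup) {
        int m = sub.length;
        for (int i = 0; i < m; i++) {
            if (!member(sub[i], sup)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] sub = { 7, 1, 3, 2, 5 };
        int[] sup = { 8, 4, 7, 1, 2, 3, 9 };
        System.out.println(subset(sub, sup));  // prints false: 5 is not in sup
    }
}
```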
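One way to see the roughly quadratic m*n behavior of the subset analysis is to count the characteristic operation (element comparisons) directly instead of measuring time. The counter field and the worst-case test arrays below are illustrative additions of mine, not part of the slides: sup holds its only matching value in the last position, so every call to member scans all of sup.

```java
// CountOps.java -- instrument member with a comparison counter to
// observe the m*n worst case of subset empirically. (Counter and
// test setup are illustrative additions.)
public class CountOps {
    static long comparisons = 0;

    static boolean member(int x, int[] a) {
        for (int i = 0; i < a.length; i++) {
            comparisons++;              // one comparison per loop pass
            if (x == a[i]) return true;
        }
        return false;
    }

    static boolean subset(int[] sub, int[] sup) {
        for (int i = 0; i < sub.length; i++) {
            if (!member(sub[i], sup)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        for (int m : new int[] { 10, 20, 40 }) {
            int n = m;                      // keep the arrays the same size
            int[] sub = new int[m];
            int[] sup = new int[n];         // all zeros...
            java.util.Arrays.fill(sub, 1);  // ...except every lookup is for 1,
            sup[n - 1] = 1;                 // which sits at the very end of sup
            comparisons = 0;
            subset(sub, sup);
            // Doubling m (and n) quadruples the count: 100, 400, 1600
            System.out.println("m = n = " + m + ": " + comparisons + " comparisons");
        }
    }
}
```

Each call to member costs n comparisons and subset makes m such calls, so the counts come out to exactly m*n; doubling the input size quadruples the work, the signature of quadratic growth.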