Algorithm Analysis
Cpt S 223, School of EECS, WSU

Contents
- Algorithm Analysis
- Purpose
- Problem – Algorithm – Data Structures & Techniques
- Model that we will assume
- Other Models
- What to Analyze: T(n)
- How to Analyze T(n)?
- Algorithmic Notation & Analysis
- Asymptotic Notation
- Some Standard Function Curves
- Big-oh: O()
- Example for big-Oh
- Ponder this
- Little-oh: o()
- Big-omega: Ω()
- Little-omega: ω()
- Theta: Θ()
- Example (for theta)
- The Asymptotic Notations
- Asymptotic Growths
- Rules of Thumb while using Asymptotic Notations (two parts)
- A few examples
- Reflexivity
- Symmetry
- Transpose symmetry
- Transitivity
- More rules
- Some More Proofs (two parts)
- Maximum subsequence sum problem
- MaxSubSum: Solutions 1–5, with analyses (Solution 5 is another Θ(N) algorithm)
- Algorithm 5 pseudocode
- MaxSubSum Running Times
- Logarithmic Behavior
- Summary

Purpose
Why bother analyzing code; isn't getting it to work enough?
- Estimate time and memory in the average case and worst case
- Identify bottlenecks, i.e., where to reduce time
- Compare different approaches
- Speed up critical algorithms

Problem – Algorithm – Data Structures & Techniques
A problem (e.g., searching) is given by a specification (input => output). An algorithm (e.g., binary search) solves the problem, and it uses data structures and techniques (e.g., an array is a data structure; sorting is a technique). Many algorithms can exist to solve one problem, and when designing algorithms we may end up breaking the problem into smaller "sub-problems".

Algorithm Analysis
Predict the resource utilization of an algorithm:
- Running time
- Memory
The prediction depends on the architecture and computing model:
- Serial
- Parallel
- Quantum
- DNA
- ...
Model that we will assume
A simple serial computing model: a CPU (control unit, ALU, registers) connected to RAM (main memory), with secondary storage (disk) and I/O devices (input: keyboard, mouse; output: monitor, printer).

Other Models
- Multi-core models
- Co-processor model
- Multi-processor model
- Shared memory machine: multiple processors sharing common RAM. Memory is cheap (about $20 per GB), but memory bandwidth is NOT.
  Example: the Cray XMT supercomputer — up to 64 TB (65,536 GB) of shared memory, up to 8,000 processors, 128 independent threads per processor, roughly $150M.
- Distributed memory model (see www.top500.org). Fastest supercomputer as of June 2012: Sequoia at Lawrence Livermore National Lab — 16.32 PetaFlop/s (16.32 x 10^15 floating-point operations per second), IBM BlueGene/Q architecture, 1.5M processing cores, 1,572 TB aggregate memory, and a price tag in the millions of dollars. Supercomputer speed is measured in floating-point operations per second (FLOPS).

What to Analyze: T(n)
Running time T(n), where N or n denotes the size of the input. What is the input size for:
- Sorting?
- Multiplying two integers?
- Multiplying two matrices?
- Traversing a graph?
T(n) measures the number of primitive operations performed — e.g., addition, multiplication, comparison, assignment.

How to Analyze T(n)?
As the input size n grows, what happens to the time T(n)?

Algorithmic Notation & Analysis
Asymptotic notations help us quantify and compare the costs of different solutions:
- Big-O: O() — upper bound
- Omega: Ω() — lower bound
- Little-o: o()
- Little-omega: ω()
- Theta: Θ() — tight bound
Worst-case, average-case, and best-case describe the input to an algorithm; lower bound, upper bound, and tight bound describe the problem and its solution ("optimality").

Asymptotic Notation
- Theta: tight bound
- Big-Oh: upper bound
- Omega: lower bound
The main idea: express cost relative to standard functions (e.g., lg n, n, n^2, etc.).
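To make "counting primitive operations" concrete, here is a minimal sketch (not from the slides; the function name and the exact operation tally are illustrative) that counts assignments, comparisons, and additions while summing 1..n, showing that T(n) grows linearly:

```python
def sum_first_n(n):
    """Sum 1..n while tallying primitive operations (assignments,
    comparisons, additions) as a rough stand-in for T(n)."""
    ops = 0
    total = 0          # 1 assignment
    ops += 1
    i = 1              # 1 assignment
    ops += 1
    while True:
        ops += 1       # loop comparison: i <= n?
        if i > n:
            break
        total += i     # 1 addition + 1 assignment
        ops += 2
        i += 1         # 1 addition + 1 assignment
        ops += 2
    return total, ops

# Each extra input element costs a constant number of extra
# primitive operations, so T(n) is linear in n (here 5n + 3).
total, t10 = sum_first_n(10)
_, t20 = sum_first_n(20)
```

Doubling n from 10 to 20 adds exactly 5 operations per extra iteration, which is the linear behavior the next slides formalize with asymptotic notation.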
Some Standard Function Curves
Linear, quadratic, and cubic growth curves (not drawn to scale). Initial aberrations do NOT matter — only the long-run growth counts.

Big-oh: O()
f(N) = O(g(N)) if there exist positive constants c and n0 such that f(N) ≤ c*g(N) for all N > n0.
This is an asymptotic upper bound, possibly tight: f(N) is what you want to measure; g(N) is a standard function (e.g., n, n^2).

Example for big-Oh
Let f(n) = 10n. Then f(n) = O(n^2).
Proof: by the definition, we must show that a <c, n0> combination exists with f(n) ≤ c*g(n) for all n > n0. Take c = 1, n0 = 9. (Remember: always try to find the lowest possible n0.)

  n     10n    c*n^2 (c = 1)
  1      10      1
  2      20      4
  3      30      9
  4      40     16
  5      50     25
  ...    ...    ...
  9      90     81
  10    100    100
  11    110    121
  ...    ...    ...

Up to the breakeven point n0, f(n) may fluctuate above and below c*g(n); beyond n0, c*g(n) stays at or above f(n).

Ponder this
If f(n) = 10n, then:
- Is f(n) = O(n)? Yes: e.g., for <c = 10, n0 = 1>
- Is f(n) = O(n^2)? Yes: e.g., for <c = 1, n0 = 9>
- Is f(n) = O(n^3)? Is f(n) = O(n^4)? Is f(n) = O(2^n)? Yes to all.
If all of the above hold, what is the best answer? f(n) = O(n).

Little-oh: o()
f(N) = o(g(N)) if for every positive constant c there exists an n0 such that f(N) < c*g(N) when N > n0. A strict upper bound for f(N).
E.g., f(n) = 10n, g(n) = n^2 ==> f(n) = o(g(n)).

Big-omega: Ω()
f(N) = Ω(g(N)) if there exist positive constants c and n0 such that f(N) ≥ c*g(N) for all N > n0. A lower bound for f(N), possibly tight.
E.g., f(n) = 2n^2, g(n) = n log n ==> f(n) = Ω(g(n)).

Little-omega: ω()
f(N) = ω(g(N)) if for every positive constant c there exists an n0 such that f(N) > c*g(N) when N > n0. A strict lower bound for f(N).
E.g., f(n) = 100n^2, g(n) = n ==> f(n) = ω(g(n)).

Theta: Θ()
f(N) = Θ(h(N)) if there exist positive constants c1, c2, and n0 such that c1*h(N) ≤ f(N) ≤ c2*h(N) for all N > n0.
Equivalently: f(N) = Θ(h(N)) if and only if f(N) = O(h(N)) and f(N) = Ω(h(N)). A tight bound for f(N).
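The breakeven behavior from the big-Oh table can be checked numerically. A small sketch (illustrative, not part of the slides) using the slide's witness pair <c = 1, n0 = 9> for f(n) = 10n and g(n) = n^2:

```python
def f(n):
    return 10 * n

def g(n):
    return n * n

c, n0 = 1, 9  # the <c, n0> witness pair from the slide

# Below the breakeven point, f(n) is allowed to exceed c*g(n)...
below = [n for n in range(1, n0 + 1) if f(n) > c * g(n)]

# ...but for every n > n0 the bound f(n) <= c*g(n) must hold.
holds = all(f(n) <= c * g(n) for n in range(n0 + 1, 1000))
```

Checking a finite range is only evidence, not a proof; the proof is the algebraic fact that 10n ≤ n^2 whenever n ≥ 10.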
Example (for theta)
f(n) = n^2 – 2n. f(n) = Θ(?)
Guess: f(n) = Θ(n^2).
Verification: can we find valid c1, c2, and n0 such that c1*n^2 ≤ n^2 – 2n ≤ c2*n^2 for all n > n0? Yes: e.g., c1 = 1/2, c2 = 1, n0 = 4 (for n ≥ 4 we have n^2 – 2n ≥ n^2/2, and n^2 – 2n ≤ n^2 holds for all n ≥ 0).

The Asymptotic Notations

  Notation   Purpose                        Definition
  O(g(n))    Upper bound, possibly tight    f(n) ≤ c*g(n)
  Ω(g(n))    Lower bound, possibly tight    f(n) ≥ c*g(n)
  Θ(g(n))    Tight bound                    c1*g(n) ≤ f(n) ≤ c2*g(n)
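The theta example above (n^2 – 2n = Θ(n^2)) can also be checked numerically. A sketch (not from the slides; c1 = 1/2, c2 = 1, n0 = 4 are one valid choice of constants, not the only one):

```python
def f(n):
    return n * n - 2 * n

c1, c2, n0 = 0.5, 1.0, 4  # candidate theta constants (one valid choice)

# The sandwich c1*n^2 <= n^2 - 2n <= c2*n^2 must hold for all
# n > n0; check a large range as numerical evidence.
sandwiched = all(c1 * n * n <= f(n) <= c2 * n * n
                 for n in range(n0 + 1, 10_000))

# Below n0 the lower bound is allowed to fail, e.g. at n = 1,
# where f(1) = -1 < c1 * 1.
fails_below = f(1) < c1 * 1 * 1
```

As with the big-Oh check, the range test is evidence only; the algebraic argument (n^2 – 2n ≥ n^2/2 iff n ≥ 4) is what proves the bound.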