CSE332: Data Abstractions
Lecture 26: Amortized Analysis
Tyler Robison
Summer 2010

Contents: Amortized Analysis; Amortized Complexity; Amortized != Average Case; Amortized != Average Case Example; Example #1: Resizing stack; Amount of copying; Why doubling? Other approaches; Example #2: Queue with two stacks; Analysis; Amortized bounds are not limited to array-based data structures; Amortized useful?; Not always so simple

Slide 2: Amortized Analysis
- Recall our plain-old stack, implemented as an array that doubles its size if it runs out of room.
- How can we claim push is O(1)? Sure, most calls will take O(1), but some will take O(n) to resize.
- We can't claim O(1) as a guaranteed per-operation run-time, but we can claim an O(1) amortized bound.
- (Very) rough idea: resizing won't happen 'very often'.
- Amortized bounds are not a hand-wavy concept, though:
  - It's a provable bound on the running time: not necessarily for every operation, but over some sequence of operations.
  - Don't use "amortized" to mean "we don't really expect it to happen much".

Slide 3: Amortized Analysis
- This lecture: What does amortized mean? How can we prove an amortized bound?
- We will do a couple of simple examples; the text has more complicated examples and proof techniques.
- The idea of how amortized describes average cost is essential.

Slide 4: Amortized Complexity
- If a sequence of M operations takes O(M f(n)) time, we say the amortized runtime is O(f(n)).
- The worst-case time per operation can be larger than f(n): for example, maybe f(n) = 1 but the worst case is n.
- But the worst case for any sequence of M operations is O(M f(n)); the best case could, of course, be better.
- The amortized guarantee ensures that the average time per operation for any sequence is O(f(n)).

Slide 5: Amortized Complexity
- Another way to think about it: if every possible sequence of M operations runs in O(M f(n)) time, we have an amortized bound of O(f(n)).
- A good way to convince yourself that an amortized bound does or does not hold: try to come up with a worst-possible sequence of operations.
- Example: Do BST inserts have an amortized bound of O(log M)? We can come up with a sequence of M inserts that takes O(M^2) time (M in-order inserts); so, no: the total is not O(M log M), and O(log M) is not an amortized bound.

Slide 6: Amortized != Average Case
- The 'average case' is generally probabilistic: what data / series of operations are we most likely to see?
- Example: the average case for insertion into a BST is O(log n); the worst case is O(n).
  - For 'expected' data, operations take about O(log n) each.
  - But we can come up with a huge number of operations, performed in a row, that each take O(n) time, so O(log n) is not an amortized bound for BST operations.
- Amortized bounds are not probabilistic: it's not "we expect ..."; it's "we are guaranteed ...".
- If the amortized bound is O(log n), then there does not exist a long series of operations whose average cost is greater than O(log n).

Slide 7: Amortized != Average Case Example
- Consider a hashtable using separate chaining.
- We generally expect O(1)-time lookups; why isn't O(1) an amortized bound?
- Imagine a worst case where all n elements are in one bucket; looking one of them up will likely take about n/2 time.
- Because we can concoct this worst-case scenario, O(1) cannot be an amortized bound.
- But the expected case is O(1), because of the expected distribution of keys across buckets (given a decent hash function, a prime table size, etc.).

Slide 8: Example #1: Resizing stack
- From Lecture 1: a stack implemented with an array, where we double the size of the array if it becomes full.
- Claim: any sequence of push/pop/isEmpty is amortized O(1), even though resizing, when it occurs, takes O(n).
- We need to show that any sequence of M operations takes O(M) time:
  - Recall that the non-resizing work is O(M) (i.e., M * O(1)).
  - So we need to show that the resizing work is also O(M) (or less).
- The resizing work is proportional to the total number of element copies we do when resizing.
- We'll show that after M operations, we have done <
2M total element copies (so the number of copies per operation is bounded by a constant).

Slide 9: Amount of copying
- How much copying gets done? Take M = 100: say we start with an empty array of length 32 and finish with 100 elements.
  - We will resize to 64, then resize to 128.
  - Each resize copies half that many elements (32 the first time, 64 the second).
  - In this case, 96 total copies, and 96 < 2M.
- Show: after M operations, we have done < 2M total element copies.
  - Let n be the size of the array after M operations.
  - Every time we're too full to insert, we resize by doubling.
  - Total element copies = INITIAL_SIZE + 2*INITIAL_SIZE + ... + n/8 + n/4 + n/2 < n.
  - To get the array to resize to size n, we need at least half that many pushes: M >= n/2.
  - So 2M >= n > the number of element copies.

Slide 10: Why doubling? Other approaches
- If the array grows by a constant amount (say 1000), operations are not amortized O(1):
  - Every 1000 inserts, we do additional O(n) work copying.
  - So the work per insert is roughly O(1) + O(n/1000) = O(n).
  - After M operations, you may have done Θ(M^2) copies.
- If the array shrinks when it is 1/2 empty, operations are not amortized O(1). Why not?
  - When the array is just over half full: pop once and shrink, push once and grow, pop once and shrink, ...
- Guesses for shrinking when 3/4 empty? If the array shrinks when it is 3/4 empty, operations are amortized O(1).
  - The proof is more complicated, but the basic idea remains: by the time an expensive operation occurs, many cheap ones have occurred.

Slide 11: Example #2: Queue with two stacks
- A queue implementation using only stacks (as on a recent homework):

    class Queue<E> {
      Stack<E> in  = new Stack<E>();
      Stack<E> out = new Stack<E>();
      void enqueue(E x) { in.push(x); }
      E dequeue() {
        if (out.isEmpty()) {
          while (!in.isEmpty()) {
            out.push(in.pop());
          }
        }
        return out.pop();
      }
    }

- (Diagram: after enqueue A, B, C, stack "in" holds C, B, A from top to bottom; "out" is empty.)

Slide 12: Example #2: Queue with two stacks
- A clever and simple queue implementation using only stacks (same code as above).
- (Diagram: after dequeue, "in" is empty; "out" holds B, C from top to bottom; output: A.)

Slide 13: Example #2: Queue with two stacks
- A clever and simple queue implementation using only stacks (the remaining frames continue stepping through the same code).
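The copy-counting argument on the "Amount of copying" slide is easy to check empirically. Below is a minimal sketch, not the course's actual stack code (the class name CopyCountingStack and its fields are illustrative): a doubling array stack instrumented to count element copies. For M = 100 pushes starting from capacity 32, it performs exactly the 96 copies computed on the slide, which is below the 2M = 200 bound.

```java
// Illustrative sketch (not the lecture's stack code): an array stack that
// doubles its array when full, counting every element copy done by resizes.
public class CopyCountingStack {
    private Object[] data;
    private int size = 0;
    public long copies = 0;  // total element copies performed by resizes

    public CopyCountingStack(int initialCapacity) {
        data = new Object[initialCapacity];
    }

    public void push(Object x) {
        if (size == data.length) {              // full: double the array
            Object[] bigger = new Object[2 * data.length];
            for (int i = 0; i < size; i++) {
                bigger[i] = data[i];            // one element copy
                copies++;
            }
            data = bigger;
        }
        data[size++] = x;
    }

    public static void main(String[] args) {
        int M = 100;
        CopyCountingStack s = new CopyCountingStack(32);
        for (int i = 0; i < M; i++) {
            s.push(i);
        }
        // Resizes: 32 -> 64 (32 copies), then 64 -> 128 (64 copies) = 96 total
        System.out.println(s.copies);           // prints 96
        System.out.println(s.copies < 2L * M);  // prints true
    }
}
```

Rerunning with a constant growth amount instead of doubling makes the copy count grow roughly quadratically in M, matching the "Why doubling?" slide.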