Lecture 11
Unbounded Arrays

15-122: Principles of Imperative Computation, Spring 2024
Rob Simmons, Frank Pfenning

Arrays have efficient O(1) access to elements given an index, but their size is set at allocation time. This makes storing an unknown number of elements problematic: if the size is too small, we may run out of places to put them, and if it is too large, we will waste memory. Linked lists do not have this problem at all, since they are extensible, but accessing an element is O(n). In this lecture we introduce unbounded arrays, which, like lists, can hold an arbitrary number of elements, but also allow these elements to be retrieved in O(1) time. What gives? Adding or removing an element of an unbounded array has cost either O(1) or O(n), but in the long run the average cost of each such operation is constant; the challenge will be to prove this last statement!

Additional Resources
- Review slides (https://cs.cmu.edu/15122/handouts/slides/review/11-uba.pdf)
- Code for this lecture (https://cs.cmu.edu/15122/handouts/code/11-uba.tgz)
- There is one short video associated with this lecture:
  - Amortized Analysis (https://youtu.be/L8cXZ_4RHt8)

This maps to our learning goals as follows:

Programming: We introduce unbounded arrays and operations on them.

Algorithms and Data Structures: Analyzing them requires amortized analysis, a particular way to reason about sequences of operations on data structures.

Computational Thinking: We also briefly talk again about data structure invariants and interfaces, which are crucial computational thinking concepts.

But first, let's introduce the idea of amortized analysis on a simpler example.

1 The n-bit Counter

A simple example we use to illustrate amortized analysis is the idea of a binary counter that we increment by one at a time. If we have to flip each bit individually, flipping n bits takes O(n) time. Obviously, if we have an n-bit counter, the worst-case running time of a single increment operation is O(n). But does it follow that the worst-case running time of k operations is O(kn)? Not necessarily. Let's look more carefully at the cases where the operation we have to perform is the most expensive operation we've yet considered. We can observe two things informally. First, the most expensive operations get further and further apart as time goes on. Second, whenever we reach a most-expensive-so-far operation at step k, the total cost of all the operations up to and including that operation is 2k - 1. Can we extend this reasoning to say that the total cost of performing k operations will never exceed 2k?

One metaphor we frequently use when doing this kind of analysis is banking. It's difficult to think in terms of savings accounts full of microseconds, so when we use this metaphor we usually talk about tokens, representing an abstract notion of cost. With a token, we can pay for the cost of a particular operation, in this case the constant-time operation of flipping a bit. If we reserve (or budget) two tokens every time we perform any increment, putting any excess into a savings account, then we see that after each of the expensive operations we've looked at, our savings account contains 1 token. Our savings account appears never to run out of money.

This is good evidence, but it still isn't a proof. To offer something like a proof, as always, we need to talk in terms of invariants. And we can see a very useful invariant: the number of 1 bits always matches the number of tokens in our savings account. This observation leads us to the last trick that we'll use when we perform amortized analysis in this class: we associate one token with each 1 in the counter as part of a meta data structure invariant. Like normal data structure invariants, this meta data structure invariant should hold before and after carrying out an operation on the data structure. Differently from normal data structure invariants, it is not captured in code; it resides in our minds only.
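The lecture's own code (written in C0 and linked above) is not included in this preview. Purely as an illustration of the token bookkeeping described in this section, here is a sketch in C; the type and function names (struct counter, counter_new, inc, ones) are invented for this sketch and do not come from the course handout. The tokens field and the assertions exist only to make the meta invariant checkable; deleting them leaves an ordinary binary increment.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* An n-bit counter together with the "savings account" of tokens from
     * the amortized analysis.  Meta invariant: the number of saved tokens
     * equals the number of 1 bits in the counter. */
    struct counter {
      size_t n;       /* number of bits */
      bool *bit;      /* bit[0] is the lowest-order bit */
      size_t tokens;  /* saved tokens; bookkeeping for the analysis only */
    };

    static size_t ones(struct counter *C) {
      size_t count = 0;
      for (size_t i = 0; i < C->n; i++)
        if (C->bit[i]) count++;
      return count;
    }

    struct counter *counter_new(size_t n) {
      struct counter *C = malloc(sizeof(struct counter));
      C->n = n;
      C->bit = calloc(n, sizeof(bool));  /* all bits start out 0 */
      C->tokens = 0;                     /* no 1 bits, so no saved tokens */
      return C;
    }

    /* Increment by one, budgeting two fresh tokens; each bit flip costs one. */
    void inc(struct counter *C) {
      assert(C->tokens == ones(C));  /* meta invariant holds on entry */
      C->tokens += 2;                /* reserve two tokens for this increment */

      size_t i = 0;
      /* Flip the low-order 1s to 0, paying with the token saved with each. */
      while (i < C->n && C->bit[i]) {
        C->bit[i] = false;
        C->tokens--;
        i++;
      }
      if (i < C->n) {
        /* Flip the lowest-order 0 to 1: one fresh token pays for the flip,
         * and the other stays associated with the new 1. */
        C->bit[i] = true;
        C->tokens--;
      } else {
        /* The counter wrapped around to all zeroes: throw away the two
         * fresh tokens, since no 1 bits remain that need savings. */
        C->tokens -= 2;
      }

      assert(C->tokens == ones(C));  /* meta invariant restored on exit */
    }

Each call to inc reserves two tokens and never leaves the savings account short, which is exactly the informal argument above.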
2 Amortized Analysis With Data Structure Invariants

Whenever we increment the counter, we'll always flip some number (maybe zero) of lower-order 1s to 0, and then we'll flip exactly one 0 to 1, unless we're out of bits in the counter. For example, incrementing the 8-bit counter 10010011 yields 10010100: the two rightmost 1s got flipped to 0s, the rightmost 0 was flipped into a 1, and all other bits were left unchanged.

We can explain budgeting two tokens for each increment as follows: one token is used to pay for flipping the rightmost 0 in the current operation, and one token is saved for when the resulting 1 will need to be flipped back into a 0 as part of a future increment. So how do we pay for flipping the rightmost 1s in this example? By using the tokens that were saved when they got flipped from a 0 into the current 1. No matter how many lower-order 1 bits there are, the flipping of those low-order bits is paid for by the tokens associated with those bits. Then, because we're always gaining 2 tokens whenever we perform an increment, one of those tokens can be used to flip the lowest-order 0 to a 1, and the other one can be associated with that new 1 in order to make sure the data structure invariant is preserved. Graphically, any time we increment the counter, it looks like this:

[Figure not included in this text preview.]

Well, not every time: if the counter is limited to n bits and they're all 1, then we'll flip all the bits to 0. In this case we can just throw away (or lose track of) our two new tokens, because we can restore the data structure invariant without needing them. In the accounting or banking view, when this happens we observe that our savings account now has some extra savings that we'll never need.

Now that we've rephrased our operational argument about the amount of savings as a data structure invariant that is always preserved by the increment operation, we can securely say that each time we increment the counter, it suffices to reserve exactly two tokens. This means that a series of k increments of the n-bit counter, starting when the counter is all zeroes, will take time in O(k). We can also say that each individual operation has an amortized running time of 2 bit flips, which means that the amortized cost of each operation is in O(1). It's not at all contradictory for bit flips to have an amortized running time in O(1) and a worst-case running time in O(n).

In summary, to talk about the amortized running time or, more generally, …
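The claim in this section, that k increments of an initially all-zero n-bit counter perform at most 2k bit flips in total even though a single increment can flip up to n bits, can also be checked empirically. The following self-contained C program is a sketch under the same assumptions as the previous one (invented names, not part of the lecture code); it counts the actual flips over many increments and asserts the 2k bound after every step.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define N 16  /* number of bits in the counter */

    /* Increment the counter by one and return how many bits were flipped. */
    static int inc(bool bit[N]) {
      int flips = 0;
      int i = 0;
      /* Flip the trailing 1s to 0 ... */
      while (i < N && bit[i]) { bit[i] = false; flips++; i++; }
      /* ... then flip the lowest-order 0 to 1, unless the counter wrapped. */
      if (i < N) { bit[i] = true; flips++; }
      return flips;
    }

    int main(void) {
      bool bit[N] = { false };   /* the counter starts at all zeroes */
      long total = 0;            /* total number of bit flips so far */
      for (long k = 1; k <= 100000; k++) {
        total += inc(bit);
        assert(total <= 2 * k);  /* amortized bound: at most 2k flips after k increments */
      }
      printf("total flips after 100000 increments: %ld\n", total);
      return 0;
    }

Note that the counter wraps around during this run (N is only 16 bits) and the bound still holds, which matches the observation above that overflow only leaves extra, unused savings.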

