6.826—Principles of Computer Systems                                        2000
Handout 8. Generalizing Abstraction Functions

8. Generalizing Abstraction Functions

In this handout, we give a number of examples of specs and implementations for which simple abstraction functions (of the kind we studied in handout 6 on abstraction functions) don't exist, so that the abstraction function method doesn't work to show that the implementation satisfies the spec. We explain how to generalize the abstraction function method so that it always works.

We begin with an example in which the spec maintains state that doesn't actually affect its behavior. An optimized implementation can simulate the spec without having enough state to generate all the state of the spec. By adding history variables to the implementation, we can extend its state enough to define an abstraction function, without changing its behavior. An equivalent way to get the same result is to define an abstraction relation from the implementation to the spec.

Next we look at implementations that simulate a spec without taking exactly one step for each step of the spec. As long as the external behavior is the same in each step of the simulation, an abstraction function (or relation) is still enough to show correctness, even when an arbitrary number of transitions in the specification correspond to a single transition in the implementation.

Finally, we look at an example in which the spec makes a non-deterministic choice earlier than the choice is exposed in the external behavior. An implementation may make this choice later, so that there is no abstraction relation that generates the premature choice in the spec's state. By adding prophecy variables to the implementation, we can extend its state enough to define an abstraction function, without changing its behavior. An equivalent way to get the same result is to define a backward simulation from the implementation to the spec.

If we avoided extra state, too few or too many transitions, and premature choices in the spec, the simple abstraction function method would always work. You might therefore think that all these problems are not worth solving, because it sounds as though they are caused by bad choices in the way the spec is written. But this is wrong. A spec should be written to be as clear as possible to the clients, not to make it easy to prove the correctness of an implementation. The reason for these priorities is that we expect to have many more clients for the spec than implementers. The examples below should make it clear that there are good reasons to write specs that create these problems for abstraction functions. Fortunately, with all three of these extensions we can always find an abstraction function to show the correctness of any implementation that actually is correct.

A statistical database

Consider the following specification of a "statistical database" module, which maintains a collection of values and allows the size, mean, and variance of the collection to be extracted. Recall that the mean m of a sequence db of size n > 0 is just the average

    m = \frac{1}{n} \sum_i db(i)

and the variance is

    \frac{1}{n} \sum_i \bigl(db(i) - m\bigr)^2 \;=\; \frac{1}{n} \sum_i db(i)^2 \;-\; m^2 .

(We make the standard assumptions of commutativity, associativity, and distributivity for the arithmetic here.)
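The equality between the two forms of the variance is the routine algebra that the optimized implementation below relies on; spelling out the step once, using the arithmetic assumptions just stated:

    \frac{1}{n}\sum_i \bigl(db(i) - m\bigr)^2
      = \frac{1}{n}\sum_i \bigl(db(i)^2 - 2\,m\,db(i) + m^2\bigr)
      = \frac{1}{n}\sum_i db(i)^2 \;-\; 2m\cdot\frac{1}{n}\sum_i db(i) \;+\; m^2
      = \frac{1}{n}\sum_i db(i)^2 \;-\; m^2 ,

since \frac{1}{n}\sum_i db(i) = m.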
MODULE StatDB [ V WITH {Zero: ()->V, "+": (V,V)->V, "*": (V,V)->V,
                        "-": (V,V)->V, "/": (V,Int)->V} ]
    EXPORT Add, Size, Mean, Variance =

VAR db : SEQ V := {}                                    % a multiset

APROC Add(v) = << db + := {v}; RET >>

APROC Size() -> Int = << RET db.size >>

APROC Mean() -> V RAISES {empty} = <<
    IF db = {} => RAISE empty
    [*] VAR sum := (+ : db) | RET sum/Size()
    FI >>

APROC Variance() -> V RAISES {empty} = <<
    IF db = {} => RAISE empty
    [*] VAR avg := Mean(), sum := (+ : {v :IN db | | (v - avg)**2}) |
        RET sum/Size()
    FI >>

END StatDB

This spec is a very natural one that directly follows the definitions of mean and variance.

The following implementation of the StatDB module does not retain the entire collection of values. Instead, it keeps track of the size, sum, and sum of squares of the values in the collection. Simple algebra shows that this is enough to compute the mean and variance in the manner done below.

MODULE StatDBImpl                                       % implements StatDB
[ V WITH {Zero: ()->V, "+": (V,V)->V, "*": (V,V)->V,
          "-": (V,V)->V, "/": (V,Int)->V} ]
    EXPORT Add, Size, Mean, Variance =

VAR count     := 0
    sum       := V.Zero()
    sumSquare := V.Zero()

APROC Add(v) = << count + := 1; sum + := v; sumSquare + := v**2; RET >>

APROC Size() -> Int = << RET count >>

APROC Mean() -> V RAISES {empty} = <<
    IF count = 0 => RAISE empty [*] RET sum/count FI >>

APROC Variance() -> V RAISES {empty} = <<
    IF count = 0 => RAISE empty
    [*] VAR avg := Mean() | RET sumSquare/count - avg**2
    FI >>

END StatDBImpl

StatDBImpl implements StatDB, in the sense of trace set inclusion. However, we cannot prove this using an abstraction function, because each nontrivial state of the implementation corresponds to many states of the specification. This happens because the specification contains more information than is needed to generate the desired external behavior. In this example, the states of the specification could be partitioned into equivalence classes based on the possible future behavior: two states are equivalent if they give rise to the same future behavior. Then any two equivalent states yield the same future behavior of the module. Each of these equivalence classes corresponds to a state of the implementation. To get an abstraction function we must add history variables, as explained in the next section.

History variables

The problem in the StatDB example is that the specification states contain more information than the implementation states. A history variable is a variable that is added to the state of the implementation T in order to keep track of the extra information in the specification S that was left out of the implementation. Even though the implementation has been optimized not to retain certain information, we can put it back in to prove the implementation correct, as long as we do it in a way that does not change the behavior of the implementation. What we do is to construct a new implementation TH that has the same behavior as T, but a bigger state. If we can show
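As a concrete illustration of the construction just described, here is a minimal sketch in Python rather than the handout's Spec notation (class and method names such as StatDBImplH are invented for this example). It extends the optimized implementation with a history variable db that records every value ever added; the history variable is written but never read by the operations, so the externally visible behavior is unchanged, and the abstraction function can simply return db as the spec's state.

# Minimal Python sketch, not part of the handout: StatDBImpl extended with
# a history variable. Names and numeric types are illustrative assumptions.

class StatDBImplH:
    """StatDBImpl plus a history variable `db` recording every added value."""

    def __init__(self):
        self.count = 0
        self.sum = 0.0
        self.sum_square = 0.0
        self.db = []          # history variable: the spec's multiset of values

    def add(self, v):
        self.count += 1
        self.sum += v
        self.sum_square += v * v
        self.db.append(v)     # writing the history variable never affects
                              # count/sum/sum_square, so behavior is unchanged

    def size(self):
        return self.count

    def mean(self):
        if self.count == 0:
            raise ValueError("empty")   # stands in for the spec's `empty` exception
        return self.sum / self.count

    def variance(self):
        if self.count == 0:
            raise ValueError("empty")
        m = self.mean()
        return self.sum_square / self.count - m * m

    def abstraction(self):
        """Abstraction function: the spec state is just the recorded multiset."""
        return list(self.db)


if __name__ == "__main__":
    t = StatDBImplH()
    for v in [2.0, 4.0, 4.0, 6.0]:
        t.add(v)
    # Compare the optimized answers with what the spec would compute
    # from the multiset recovered by the abstraction function.
    db = t.abstraction()
    m = sum(db) / len(db)
    var = sum((x - m) ** 2 for x in db) / len(db)
    assert abs(t.mean() - m) < 1e-9
    assert abs(t.variance() - var) < 1e-9
    print(t.size(), t.mean(), t.variance())

The final assertions check, for one run, that the mean and variance computed from the optimized state agree with those the spec would compute from the recorded multiset, which is exactly the commuting property an abstraction function is supposed to capture.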

