Lecture Notes on Choice and Replication
15-816: Linear Logic
Frank Pfenning
Lecture 5
February 1, 2012

Contents: Offering Output; Termination; Uncovering More Parallelism; Example: File Indexing; Taking Stock, Part I; External Choice; Internal Choice; Example: File Indexing Revisited; Replication; Cut as Composition Revisited; Sharing; Taking Stock, Part II; Exercises; References.

We continue to examine a computational interpretation of linear logic, where propositions are session types and proofs are π-calculus processes. The material is drawn from a recent paper by Caires et al. [CPT12], which contains additional details and further references.

1 Offering Output

We have already seen that A ⊸ B, offering an input, also requires output in order to use the offer. But how do we offer an output? A natural candidate is A ⊗ B, which is true if we can prove A and B separately, each using a portion of the available linear hypotheses.

    ∆ ⊢ A    ∆′ ⊢ B
    ----------------- ⊗R
     ∆, ∆′ ⊢ A ⊗ B

At first glance, it would appear that a process offering A ⊗ B along x should output two channels, one offering A and another offering B.

    ∆ ⊢ P :: y : A    ∆′ ⊢ Q :: z : B
    ----------------------------------- ⊗R?
        ∆, ∆′ ⊢ P ? Q :: x : A ⊗ B

But we see that this doesn't quite work: P offers along y and Q offers along z, but neither y nor z is even mentioned in the final sequent. Looking at the pattern of ⊸R, we see that we need to exploit the linearity of the channel x along which A ⊗ B is offered. So we output only a single new channel of type A along x and then offer B, again along x. We could equally well output a new channel of type B and then behave as A, but it seems more natural if the continuation is on the right of the connective.

    ∆ ⊢ P :: y : A    ∆′ ⊢ Q :: x : B
    ------------------------------------------ ⊗R
    ∆, ∆′ ⊢ (νy) x⟨y⟩.(P | Q) :: x : A ⊗ B

Again, this is a form of bound output, but this time as part of an offer.

Conversely, to communicate with a process offering A ⊗ B along x we have to input an A along x, after which we still have to communicate with x, now of type B.

    ∆, y:A, x:B ⊢ Q :: z : C
    ------------------------------ ⊗L
    ∆, x:A ⊗ B ⊢ x(y).Q :: z : C

When we remove the process we recognize the usual left rule.

    ∆, A, B ⊢ C
    --------------- ⊗L
    ∆, A ⊗ B ⊢ C

We should verify the cut reduction property and see which form of process reduction it suggests. First, on bare sequents:

    ∆₁ ⊢ A    ∆₂ ⊢ B            ∆, A, B ⊢ C
    ------------------ ⊗R       -------------- ⊗L
      ∆₁, ∆₂ ⊢ A ⊗ B            ∆, A ⊗ B ⊢ C
    ------------------------------------------- cut
                  ∆, ∆₁, ∆₂ ⊢ C

        −→

               ∆₁ ⊢ A    ∆, A, B ⊢ C
               ----------------------- cut
      ∆₂ ⊢ B       ∆, ∆₁, B ⊢ C
      --------------------------------- cut
              ∆, ∆₁, ∆₂ ⊢ C

We assign names and processes to the open premises:

    ∆₁ ⊢ P₁ :: y : A
    ∆₂ ⊢ P₂ :: x : B
    ∆, w:A, x:B ⊢ Q :: z : C

With this process assignment, the reduction viewed on processes becomes:

    ∆, ∆₁, ∆₂ ⊢ (νx)((νy) x⟨y⟩.(P₁ | P₂) | x(w).Q) :: z : C
        −→
    ∆, ∆₁, ∆₂ ⊢ (νx)(P₂ | (νy)(P₁ | Q{y/w})) :: z : C

Again, this is a simple instance of the usual input/output reduction of the π-calculus, modulo some structural congruences.
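As a rough illustration of this operational reading (not part of the notes themselves), the following Haskell sketch mimics the ⊗R/⊗L interaction with ordinary Control.Concurrent channels standing in for π-calculus names. Linearity and session order are not enforced, and the message type Msg, the functions provide and consume, and the string payloads are all invented for the example: the provider of A ⊗ B along x allocates a fresh y, outputs it along x, and then P communicates along y while Q continues along x; the client inputs y and then uses both channels.

```haskell
-- Rough sketch only: pi-calculus names become ordinary Haskell channels;
-- nothing here enforces linearity or session fidelity.
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- A message is either a freshly created channel (the bound output of y)
-- or a string standing in for whatever the abstract types A and B require.
data Msg = NewChan (Chan Msg) | Payload String

-- Provider of x : A ⊗ B, i.e. (nu y) x<y>.(P | Q):
-- create a fresh y, output it along x, then P runs on y and Q continues on x.
provide :: Chan Msg -> IO ()
provide x = do
  y <- newChan                                                  -- (nu y)
  writeChan x (NewChan y)                                       -- output x<y>
  _ <- forkIO (writeChan y (Payload "P's offer of A along y"))  -- P :: y : A
  writeChan x (Payload "Q's offer of B along x")                -- Q :: x : B

-- Client of x : A ⊗ B, i.e. x(y).Q':
-- input the fresh channel y along x, then communicate along y and along x.
consume :: Chan Msg -> IO ()
consume x = do
  NewChan y <- readChan x        -- input y along x
  Payload a <- readChan y        -- use y at type A
  Payload b <- readChan x        -- continue with x at type B
  putStrLn (a ++ " / " ++ b)

main :: IO ()
main = do
  x <- newChan                   -- the cut (nu x)(provider | client)
  _ <- forkIO (provide x)
  consume x
```

After the communication on x, the remaining threads communicate along y and along x independently, matching the reduct (νx)(P₂ | (νy)(P₁ | Q{y/w})) above; the discipline that each channel is used exactly once, in the order prescribed by its type, is exactly what the typing rules add on top of the raw π-calculus.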
2 Termination

The logical constant 1 means we have no antecedents. Consequently, a linear antecedent 1 is just replaced by the empty collection of antecedents, which means it is simply erased.

    -------- 1R
     · ⊢ 1

    ∆ ⊢ C
    ---------- 1L
    ∆, 1 ⊢ C

A fruitful way to think of 1 is as the nullary version of ⊗, since it is in fact its unit. We would output nothing and have no continuation:

    ------------------- 1R
    · ⊢ x⟨⟩.0 :: x : 1

In the π-calculus, the inactive or terminated process is represented by 0, which is the unit of parallel composition in that P | 0 ≡ P.

Conversely, we can expect nothing from an offer of 1 along x, except to terminate the connection.

    ∆ ⊢ Q :: z : C
    -------------------------- 1L
    ∆, x:1 ⊢ x().Q :: z : C

The two match perfectly, with the following cut reduction.

    -------- 1R     ∆ ⊢ C
     · ⊢ 1          ---------- 1L
                    ∆, 1 ⊢ C
    ---------------------------- cut        −→        ∆ ⊢ C
              ∆ ⊢ C

On process expressions, labeling the open premise as ∆ ⊢ Q :: z : C:

    (νx)(x⟨⟩.0 | x().Q) −→ Q

This can be seen as an interaction in the polyadic π-calculus.

3 Uncovering More Parallelism

Even though the assignment above perfectly fulfills our goal of a Curry-Howard isomorphism, we have proposed some alternatives in order to achieve more parallelism in the composition of independent processes. In general, a process P that does not offer anything is typed as ∆ ⊢ P :: x : 1. Once composed with appropriate processes as required by ∆, it should be a closed process that evolves by internal actions only. If we have a second such process ∆′ ⊢ Q :: z : 1, each should be able to evolve independently, without interaction. We could achieve this by adding x:1 to ∆′ and then using cut.

                        ∆′ ⊢ Q :: z : 1
                        -------------------------- 1L
    ∆ ⊢ P :: x : 1      ∆′, x:1 ⊢ x().Q :: z : 1
    ---------------------------------------------- cut
         ∆, ∆′ ⊢ (νx)(P | x().Q) :: z : 1

However, because of the action prefix x() guarding Q, this corresponds to a sequential composition (P before Q) rather than a parallel composition. If we cut the other way, we have Q before P. In other words, we cannot compose noninteracting processes in a truly parallel manner.

There are several ways we can obtain more parallelism. One is to make the 1L rule silent, in that we have the same proof term in the premise and the conclusion [CP10]. In that case multiple different proofs collapse to the same process, so we no longer have an isomorphism. We say that proofs are contracted to programs, a phenomenon which also occurs in numerous other applications of the idea underlying the Curry-Howard isomorphism. We could also allow process reduction under a prefix, which is, however, unusual for the π-calculus.

We briefly explore writing (x().0) | Q as the proof term for 1L, recovering parallelism.

    ------------------- 1R
    · ⊢ x⟨⟩.0 :: x : 1

    ∆ ⊢ Q :: z : C
    --------------------------------- 1L
    ∆, x:1 ⊢ (x().0) | Q :: z : C

Again we give up a full isomorphism, because we cannot tell exactly at which point 1L has been applied, due to the convention that we type modulo structural congruences.

We can make corresponding improvements to other rules (as suggested by Favonia in class; see also Exercise 1). For example, in the ⊗R rule

    ∆ ⊢ P :: y : A    ∆′ ⊢ Q :: x : B
    ------------------------------------------ ⊗R
    ∆, ∆′ ⊢ (νy) x⟨y⟩.(P | Q) :: x : A ⊗ B

we notice that P can depend only on y, but not on x (since it is typed in ∆ and offers A along y). We can therefore safely rewrite it as:

    ∆ ⊢ P :: y : A    ∆′ ⊢ Q :: x : B
    ------------------------------------------ ⊗R
    ∆, ∆′ ⊢ (νy)(P | x⟨y⟩.Q) :: x : A ⊗ B

This allows P to reduce and even interact with its environment in ∆ instead of waiting until the channel y has been sent along x. Since the only occurrence of y is the one sent along x, communication along y will have to wait, however, until y has been received by the process using x.
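To see the sequentialization problem from the beginning of this section concretely, here is a small Haskell sketch (again an invented illustration, not from the notes): an MVar () stands in for the channel x : 1, and the names procP, guardedQ and the printed messages are made up. It models the composition (νx)(P | x().Q), where both P and Q offer 1: although they never exchange any data, Q cannot begin until P has terminated and closed x.

```haskell
-- Rough sketch of (nu x)(P | x().Q) where both P and Q offer 1:
-- the prefix x() forces Q to wait for P even though they never interact.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- P :: x : 1 -- performs some internal steps, then closes x (the term x<>.0).
procP :: MVar () -> IO ()
procP x = do
  putStrLn "P: working"
  threadDelay 100000          -- stand-in for internal reductions of P
  putStrLn "P: done, closing x"
  putMVar x ()                -- x<>.0

-- x().Q -- the input prefix on x guards Q, sequentializing the composition.
guardedQ :: MVar () -> IO ()
guardedQ x = do
  takeMVar x                  -- wait for the close of x
  putStrLn "Q: only now allowed to run"

main :: IO ()
main = do
  x <- newEmptyMVar           -- the cut (nu x)(P | x().Q)
  _ <- forkIO (procP x)
  guardedQ x
```

Under the silent 1L rule of [CP10], or with the proof term (x().0) | Q above, the wait on x would no longer guard Q's own actions, and the two processes could run truly in parallel.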

