0x1A Great Papers in Computer Security
Vitaly Shmatikov
CS 380S
http://www.cs.utexas.edu/~shmat/courses/cs380s/

slide 2: Secure Multi-Party Computation
General framework for describing computation between parties who do not trust each other
Example: elections
  • N parties, each one has a “Yes” or “No” vote
  • Goal: determine whether the majority voted “Yes”, but no voter should learn how other people voted
Example: auctions
  • Each bidder makes an offer
  • Goal: determine whose offer won without revealing losing offers

slide 3: More Examples
Example: distributed data mining
  • Two companies want to compare their datasets without revealing them
    – For example, compute the intersection of two customer lists
Example: database privacy
  • Evaluate a query on the database without revealing the query to the database owner
  • Evaluate a statistical query without revealing the values of individual entries

slide 4: A Couple of Observations
We are dealing with distributed multi-party protocols
  • “Protocol” describes how parties are supposed to exchange messages on the network
All of these tasks can be easily computed by a trusted third party
  • Secure multi-party computation aims to achieve the same result without involving a trusted third party

slide 5: How to Define Security?
Must be mathematically rigorous
Must capture all realistic attacks that a malicious participant may try to stage
Should be “abstract”
  • Based on the desired “functionality” of the protocol, not a specific protocol
  • Goal: define security for an entire class of protocols

slide 6: Functionality
K mutually distrustful parties want to jointly carry out some task
Model this task as a “functionality” f: ({0,1}*)^K → ({0,1}*)^K
  • K inputs (one per party); each input is a bitstring; K outputs
Assume that this functionality is computable in probabilistic polynomial time

slide 7: Ideal Model
Intuitively, we want the protocol to behave “as if” a trusted third party collected the parties’ inputs and computed the desired functionality
  • Computation in the ideal model is secure by definition!
[Diagram: A and B hand inputs x1 and x2 to the trusted party; A receives f1(x1,x2) and B receives f2(x1,x2)]

slide 8: Slightly More Formally
A protocol is secure if it emulates an ideal setting where the parties hand their inputs to a “trusted party,” who locally computes the desired outputs and hands them back to the parties [Goldreich-Micali-Wigderson 1987]
[Diagram: same ideal-model picture as on slide 7]

slide 9: Adversary Models
Some participants may be dishonest (corrupt)
  • If all were honest, we would not need secure multi-party computation
Semi-honest (aka passive; honest-but-curious)
  • Follows the protocol, but tries to learn more from received messages than he would learn in the ideal model
Malicious
  • Deviates from the protocol in arbitrary ways, lies about his inputs, may quit at any point
For now, focus on semi-honest adversaries and two-party protocols

slide 10: Correctness and Security
How do we argue that the real protocol “emulates” the ideal protocol?
Correctness
  • All honest participants should receive the correct result of evaluating functionality f
    – Because a trusted third party would compute f correctly
Security
  • All corrupt participants should learn no more from the protocol than what they would learn in the ideal model
  • What does a corrupt participant learn in the ideal model?
    – His own input and the result of evaluating f
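To make the ideal model of slides 7 and 8 (and the ideal-world baseline of slide 10) concrete, here is a minimal Python sketch of a trusted third party evaluating the election functionality from slide 2. The names majority_vote and TrustedParty are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch of the "ideal model": a trusted third party collects every
# party's private input, evaluates the functionality f locally, and hands
# each party only its own output. Names here are hypothetical.

from typing import Callable, List


def majority_vote(votes: List[int]) -> List[int]:
    """Election functionality f: each of the K parties inputs a bit
    (1 = "Yes", 0 = "No"); every party's output is the majority bit only."""
    result = int(sum(votes) > len(votes) / 2)
    return [result] * len(votes)          # K outputs, one per party


class TrustedParty:
    """Ideal-model evaluator: secure by definition, because parties never
    see each other's inputs, only their own output of f."""
    def __init__(self, f: Callable[[List[int]], List[int]]):
        self.f = f

    def run(self, inputs: List[int]) -> List[int]:
        return self.f(inputs)


if __name__ == "__main__":
    ballots = [1, 0, 1, 1, 0]             # five voters' private inputs
    outputs = TrustedParty(majority_vote).run(ballots)
    print(outputs)                        # each voter learns only the majority: [1, 1, 1, 1, 1]
```

Everything a voter learns here is its own input plus the majority result, which is exactly the ideal-world baseline that a real protocol must not exceed.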
slide 11: Simulation
Corrupt participant’s view of the protocol = record of messages sent and received
  • In the ideal world, this view consists simply of his input and the result of evaluating f
How to argue that the real protocol does not leak more useful information than the ideal-world view?
Key idea: simulation
  • If the real-world view (i.e., messages received in the real protocol) can be simulated with access only to the ideal-world view, then the real-world protocol is secure
  • Simulation must be indistinguishable from the real view

slide 12: Technicalities
Distance between probability distributions A and B over a common set X is
  dist(A,B) = ½ Σx∈X |Pr(A=x) − Pr(B=x)|
Probability ensemble Ai is a set of discrete probability distributions
  • Index i ranges over some set I
Function f(n) is negligible if it is asymptotically smaller than the inverse of any polynomial
  • ∀ constant c ∃ m such that |f(n)| < 1/n^c ∀ n > m

slide 13: Indistinguishability Notions
Distribution ensembles Ai and Bi are equal
Distribution ensembles Ai and Bi are statistically close if dist(Ai,Bi) is a negligible function of i
Distribution ensembles Ai and Bi are computationally indistinguishable (Ai ≈ Bi) if, for any probabilistic polynomial-time algorithm D, |Pr(D(Ai)=1) − Pr(D(Bi)=1)| is a negligible function of i
  • No efficient algorithm can tell the difference between Ai and Bi except with a negligible probability

slide 14: SMC Definition (First Attempt)
Protocol for computing f(XA,XB) between A and B is secure if there exist efficient simulator algorithms SA and SB such that for all input pairs (xA,xB) …
Correctness: (yA,yB) ≈ f(xA,xB)
  • Intuition: outputs received by honest parties are indistinguishable from the correct result of evaluating f
Security: viewA(real protocol) ≈ SA(xA,yA)
          viewB(real protocol) ≈ SB(xB,yB)
  • Intuition: a corrupt party’s view of the protocol can be simulated from its input and output
This definition does not work! Why?

slide 15: Randomized Ideal Functionality
Consider a coin-flipping functionality f() = (b, −), where b is a random bit
  • f() flips a coin and tells A the result; B learns nothing
The following protocol “implements” f()
  1. A chooses bit b randomly
  2. A sends b to B
  3. A outputs b
It is obviously insecure (why?)
Yet it is correct and simulatable according to our attempted definition (why?)

slide 16: SMC Definition
Protocol for computing f(XA,XB) between A and B is secure if there exist efficient simulator algorithms SA and SB such that for all input pairs (xA,xB) …
Correctness: (yA,yB) ≈ f(xA,xB)
Security: (viewA(real protocol), yB) ≈ (SA(xA,yA), yB)
          (viewB(real protocol), yA) ≈ (SB(xB,yB), yA)
  • Intuition: if a corrupt party’s view of the protocol is correlated with the honest party’s output, the simulator must be able to capture this correlation
Does this fix the problem with the coin-flipping f?

slide 17: Oblivious Transfer (OT)
Fundamental SMC
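The counterexample on slides 15 and 16 can also be checked empirically. Under the first-attempt definition, B’s view (a single random bit) is trivially simulatable by outputting a fresh coin; once the honest party’s output yA is appended to both distributions, as slide 16 requires, an efficient distinguisher separates them. The sketch below is a minimal illustration; the helper names (real_protocol, simulator_B, ideal_world, distinguisher) are assumptions of this example, not from the slides.

```python
# Sketch of the slide-15 counterexample: A picks a random bit b, sends it to B,
# and outputs it. B's stand-alone view is simulatable, but the JOINT
# distribution (view_B, y_A) is far from (S_B(x_B, y_B), y_A).

import random


def real_protocol():
    """A picks a random bit, sends it to B, and outputs it."""
    b = random.randint(0, 1)
    view_B = b          # B's view: the single message received from A
    y_A = b             # A's output
    return view_B, y_A


def simulator_B():
    """S_B(x_B, y_B): B has no input and no output, so the best it can do
    is emit a uniformly random bit as the simulated view."""
    return random.randint(0, 1)


def ideal_world():
    """In the ideal world f() hands A a fresh random coin and B nothing,
    so A's output is independent of the simulated view."""
    y_A = random.randint(0, 1)          # output of f() given to A
    return simulator_B(), y_A


def distinguisher(view, y_A):
    """Efficient test: output 1 iff the claimed view matches A's output."""
    return int(view == y_A)


trials = 100_000
real = sum(distinguisher(*real_protocol()) for _ in range(trials)) / trials
ideal = sum(distinguisher(*ideal_world()) for _ in range(trials)) / trials
print(real, ideal)   # ~1.0 vs ~0.5: the gap is non-negligible, so the naive
                     # protocol fails the corrected (slide 16) definition.
```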