Off by Default!

Hitesh Ballani∗, Yatin Chawathe†, Sylvia Ratnasamy†, Timothy Roscoe†, Scott Shenker‡
∗Cornell University   †Intel Research   ‡ICSI/UC Berkeley

I. INTRODUCTION

The original Internet architecture was designed to provide universal reachability; any host can send any amount of traffic (modulo congestion control) to any destination. This blanket openness enabled the Internet to adopt a single, globally routable address space. Unfortunately, today's less trustworthy Internet environment has revealed the downside of such openness: every host is vulnerable to attack by any other host(s). In the face of mounting security concerns, a primitive set of protective mechanisms (such as firewalls and NATs) has been widely deployed, while the research community has produced numerous proposals that address security vulnerabilities in a more comprehensive fashion [1], [2], [3], [4], [5], [6], [7]. These proposals use various sophisticated architectures and approach the problem from many different perspectives. However, none of them takes the simplest and most direct approach: allow each host to explicitly declare to the network routing infrastructure what traffic it wants routed to it.

The goal of this paper is to explore the basic feasibility of such an approach. We describe an IP-level control protocol by which endhosts signal, and routers exchange, reachability constraints on different destination prefixes. Our interest in such a protocol stems from the conjecture that, if feasible, a reachability control protocol could encompass a number of previous security proposals (as described in Section II) while enabling a network that is intrinsically less trusting. Specifically, under our proposal, a router may forward a packet from host A to host B only if B has explicitly informed the network of its willingness to accept incoming traffic from A. In effect, we are proposing to flip the default constraint on host reachability from "on" to "off".
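The default-off forwarding rule just described can be sketched in a few lines. The sketch below is purely illustrative: the class and method names are our own, and real routers would maintain such state per destination prefix via the control protocol rather than per host in a Python dictionary.

```python
# Illustrative sketch (not the paper's protocol) of a default-off
# forwarding check: a router forwards from src to dst only if dst
# has explicitly declared its willingness to hear from src.

class Router:
    def __init__(self):
        # dst -> set of sources dst has opted in to. No entry means
        # the default applies: "off", i.e. the packet is dropped.
        self.allowed = {}

    def signal_reachability(self, dst, src):
        """Destination dst declares it will accept traffic from src."""
        self.allowed.setdefault(dst, set()).add(src)

    def revoke_reachability(self, dst, src):
        """Destination dst withdraws its willingness to hear from src."""
        self.allowed.get(dst, set()).discard(src)

    def forward(self, src, dst):
        """Forward only if dst has opted in to traffic from src."""
        return src in self.allowed.get(dst, set())
```

Note that the burden of proof is inverted relative to today's Internet: absent any signaling, `forward` returns False, whereas a firewall-free default-on network would return True.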
Given current security woes, we believe this more conservative default is appropriate.

Yet it is important to preserve the opportunity for openness. The great strength of the existing "default-on" model is the flexibility it gives applications in their choice of communication models (client-to-server, server-to-server, peer-to-peer), which has been credited with enabling the variety of Internet applications we enjoy today. To preserve this flexibility, our protocol allows hosts to dynamically modify and inform the network of their current reachability constraints; i.e., our conservatism extends only to the network's default behavior.

On the face of it, requiring the network to dynamically maintain reachability information for every destination would seem to place an intractable burden on routers. Our feasibility analysis suggests that this is not necessarily the case, and that a default-off Internet might well be a practical option.

We do not claim that such a default-off approach is sufficient or optimal. On the contrary, the general problem (control over host reachability) is a non-trivial one with a large design space, and it is likely too early for any particular approach to claim the prize. Moreover, given the complementary tradeoffs between various solutions (as pointed out in the next section), it is quite likely that the "sweet spot" in the design space involves more than one approach. Nonetheless, we hope that exploring an extreme design point will better reveal (and stimulate discussion on) the different options and hence initiate a more principled approach to arriving at the ideal solution.

TABLE I
Access control solutions for various attacks

                               Proactive     Proactive     Reactive
  Solution ⇒ / Criteria        at victim     in network    in network
  -------------------------------------------------------------------
  Compromise    Assumptions    High          High          Not useful
  attacks       Effectiveness  High          High          Not useful
                Complexity     Low           High          Not useful
  -------------------------------------------------------------------
  Resource-     Assumptions    Not useful    High          Medium
  exhaustion    Effectiveness  Not useful    High          Medium
  attacks       Complexity     Not useful    High          Medium
  -------------------------------------------------------------------
  Examples                     Firewalls     Mayday, i3    Pushback,
                                                           Handley et al.,
                                                           AITF

II. TAXONOMY OF PROBLEM AND SOLUTIONS

Before describing our solution, we first briefly discuss some broad categories of attacks and defenses. Not everything fits neatly into this taxonomy, but our goal is not to achieve completeness but to provide some pedagogical context that hopefully will make the nature of our proposal clearer.

The attacks we consider fall into two broad categories: compromise attacks and resource-exhaustion attacks. Compromise attacks are those that subvert the victim, be it an end host (client or server) or a router. A common approach to dealing with such attacks is to control access to the victim. Applying access control thus requires identifying malicious traffic. Moreover, because compromise attacks need only one or a few packets to cause damage, such access control should be proactive; that is, the network must prevent such packets from reaching the intended victim.¹ This access control can be exercised anywhere in the network, either at the victim or closer to the source.

Resource-exhaustion attacks leave the victim intact but unable to provide much service to legitimate clients. Here too, access control serves as one type of defense. In this case, however, the access control can also be reactive, in that the victim can invoke it after an attack is detected (though, of course, proactive defenses are still preferable, in that no outage need occur).
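The distinction between proactive and reactive access control can be made concrete with a small sketch of the reactive case: the victim detects an attack in progress and only then asks an upstream filtering point to block the offending source. All names and the crude rate-based detection threshold below are illustrative assumptions, not mechanisms from this paper.

```python
# Illustrative sketch of *reactive* in-network access control for a
# resource-exhaustion attack: filtering is installed upstream only
# after the victim's own (here, very crude) detection fires.

from collections import defaultdict

class UpstreamFilter:
    """A filtering point in the network, closer to the source."""
    def __init__(self):
        self.blocked = set()

    def block(self, src):
        self.blocked.add(src)

    def admits(self, src):
        return src not in self.blocked

class Victim:
    def __init__(self, upstream, threshold=100):
        self.upstream = upstream
        self.threshold = threshold          # packets per detection window
        self.counts = defaultdict(int)

    def receive(self, src):
        """Count traffic per source; react once a source exceeds the threshold."""
        if not self.upstream.admits(src):
            return False                    # already filtered upstream
        self.counts[src] += 1
        if self.counts[src] > self.threshold:
            self.upstream.block(src)        # reactive: invoked after detection
            return False
        return True
```

The sketch also shows why reactive control is second-best: the victim absorbs the first `threshold` packets of the attack before any filtering takes effect, which is precisely the outage that a proactive scheme avoids.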
If the resource being exhausted is host-specific, such as disk or CPU, then the control could be exercised near the victim; but if bandwidth is the exhausted resource, then the control must be applied closer to the source.

A second form of defense against resource-exhaustion attacks involves resource-sharing mechanisms that control how resources are allocated across all requesting users (for that resource) without attempting to classify users as legitimate or not. For example, there are a variety of mechanisms such as Fair Queuing and its many variants that can help alleviate bandwidth-exhaustion attacks, while careful CPU scheduling and memory allocation by the operating system can serve to protect resources at the endhost. Such defenses avoid the need to identify malicious users but are somewhat less effective in that they cannot completely deny service to malicious users, are vulnerable in the face of spoofed addresses/identifiers,

¹Of course, building secure operating systems would be the first line of defense against such attacks, but here we concern ourselves only with network-level defenses.
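The resource-sharing idea behind Fair Queuing can be illustrated with a minimal per-source round-robin scheduler. This is a deliberately simplified sketch under our own assumptions (one queue per source, packets served one at a time), not any specific published FQ variant, and it inherits the spoofing vulnerability noted above, since fairness is keyed on the claimed source address.

```python
# Minimal round-robin approximation of Fair Queuing: per-source queues
# are served in turn, so a single heavy sender cannot starve the rest,
# even though no sender is ever classified as malicious.

from collections import deque

class RoundRobinScheduler:
    def __init__(self):
        self.queues = {}        # src -> deque of queued packets
        self.order = deque()    # round-robin order over known sources

    def enqueue(self, src, packet):
        if src not in self.queues:
            self.queues[src] = deque()
            self.order.append(src)
        self.queues[src].append(packet)

    def dequeue(self):
        """Serve one packet from the next source with traffic pending."""
        for _ in range(len(self.order)):
            src = self.order.popleft()
            self.order.append(src)          # rotate to the back
            if self.queues[src]:
                return self.queues[src].popleft()
        return None                         # all queues empty
```

Even if a flooding source enqueues thousands of packets while an honest source enqueues one, the honest source's packet is served within a single round; the flood degrades everyone's share rather than denying service outright, which is exactly the partial protection the text describes.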