15-744: Computer Networking
L-18 Naming

Today's Lecture
• Naming and CDNs
• Required readings
  • Middleboxes No Longer Considered Harmful
  • Internet Indirection Infrastructure
• Optional readings
  • Democratizing content publication with Coral

Overview
• Akamai
• i3
• Layered naming
• DOA
• SFR

How Akamai Works
[Figure: end-user, cnn.com (content provider), DNS root server, Akamai high-level DNS server, Akamai low-level DNS server, Akamai servers. The client resolves cnn.com via the DNS root and fetches index.html from the content provider; the embedded object's hostname is then resolved through Akamai's high-level and low-level DNS servers, and the client fetches /cnn.com/foo.jpg from a nearby Akamai server, which fetches foo.jpg from the origin on a miss.]

Akamai – Subsequent Requests
[Figure: same setup; subsequent requests reuse cached DNS answers, skipping the DNS root and going straight through Akamai's DNS hierarchy to a nearby Akamai server for /cnn.com/foo.jpg.]

Coral: An Open CDN
• Implement an open CDN
• Allow anybody to contribute
• Works with unmodified clients
• CDN only fetches once from origin server
• Pool resources to dissipate flash crowds
[Figure: origin server surrounded by Coral nodes, each running an HTTP proxy (httpprx) and a DNS server (dnssrv), together serving many browsers.]

Using CoralCDN
• Rewrite URLs into "Coralized" URLs
  • www.x.com → www.x.com.nyud.net:8090
  • Directs clients to Coral, which absorbs load
• Who might "Coralize" URLs?
  • Web server operators Coralize URLs
  • Coralized URLs posted to portals, mailing lists
  • Users explicitly Coralize URLs

CoralCDN Components
[Figure: the browser's resolver asks dnssrv for www.x.com.nyud.net; DNS redirection returns a proxy, preferably one near the client (e.g., 216.165.108.10); the httpprx proxies then cooperate to fetch the data from a nearby cache, falling back to the origin server.]

Functionality Needed
• DNS: Given network location of resolver, return a proxy near the client
  • put (network info, self)
  • get (resolver info) → {proxies}
• HTTP: Given URL, find proxy caching object, preferably one nearby
  • put (URL, self)
  • get (URL) → {proxies}

Use a DHT?
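Both index operations above reduce to the same put/get pattern. As a rough sketch, here is a toy in-memory index with TTLs; `CoralIndex`, its method names, and the TTL default are hypothetical, not Coral's actual API, and the real index is distributed across nodes rather than a local dict.

```python
import time

class CoralIndex:
    """Toy in-memory stand-in for the put/get index (hypothetical API).

    Both uses from the slide map onto the same interface:
      - DNS:  put(resolver_netinfo, proxy_addr); get(resolver_netinfo) -> {proxies}
      - HTTP: put(url, proxy_addr);              get(url) -> {proxies}
    """

    def __init__(self):
        self._entries = {}  # key -> {value: expiry timestamp}

    def put(self, key, value, ttl=60.0):
        # The real put is routed through the distributed index;
        # here we simply store locally with an expiry time.
        self._entries.setdefault(key, {})[value] = time.time() + ttl

    def get(self, key):
        # Return the subset of values whose TTL has not yet expired.
        now = time.time()
        return {v for v, exp in self._entries.get(key, {}).items() if exp > now}

index = CoralIndex()
index.put("http://www.x.com/foo.jpg", "proxy-nyu.example:8090")
index.put("http://www.x.com/foo.jpg", "proxy-columbia.example:8090")
proxies = index.get("http://www.x.com/foo.jpg")  # set of live proxies
```

A DHT provides exactly this interface, which is why it looks like a natural fit; the next slides explain why the vanilla semantics fall short.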
• Supports put/get interface using key-based routing
• Problems with using DHTs as given
  • Lookup latency
  • Transfer latency
  • Hotspots
[Figure: nodes at NYU, Columbia, Germany, and Japan; a lookup between two NYC nodes may be routed through distant nodes even when the value is stored nearby.]

Coral Distributed Index
• Insight: Don't need hash table semantics
  • Just need one well-located proxy
• put (key, value, ttl)
  • Avoid hotspots
• get (key)
  • Retrieves some subset of values put under key
  • Prefer values put by nodes near requestor
• Hierarchical clustering groups nearby nodes
  • Expose hierarchy to applications
• Rate-limiting mechanism distributes puts

Key-Based XOR Routing
• Nodes grouped into clusters by RTT thresholds: < 20 ms, < 60 ms, none (global)
• Minimizes lookup latency
• Prefer values stored by nodes within faster clusters
[Figure: identifier space 000… to 111…; routing by XOR distance to the key proceeds first within the fastest (< 20 ms) cluster, then the < 60 ms cluster, then globally.]

Prevent Insertion Hotspots
• Always storing at the closest node causes a hotspot: O(log n)·β requests/min converge on it
• Halt put routing at a full and loaded node
  • Full → M values/key with TTL > ½ insertion TTL
  • Loaded → β puts traverse node in past minute
• Store at furthest, non-full node seen
• Store value once in each level cluster

Coral Contributions
• Self-organizing clusters of nodes
  • NYU and Columbia prefer one another to Germany
• Rate-limiting mechanism
  • Everybody caching and fetching same URL does not overload any node in system
• Decentralized DNS redirection
  • Works with unmodified clients
• No centralized management or a priori knowledge of proxies' locations or network configurations

Overview
• i3
• Layered naming
• DOA
• SFR

Multicast
[Figure: senders S1, S2 and receivers C1, C2 joined through routers R and a rendezvous point (RP).]

Mobility
[Figure: Mobile IP — the sender (5.0.0.3) reaches the mobile node via a home agent (HA) on the home network (5.0.0.1) and a foreign agent (FA) on the visited network (12.0.0.4).]

i3: Motivation
• Today's Internet based on point-to-point abstraction
• Applications need more:
  • Multicast
  • Mobility
  • Anycast
• Existing solutions:
  • Change IP layer
  • Overlays
• So, what's the problem?
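The clustered XOR routing above can be sketched as follows. This mirrors the slide's idea only in spirit: `xor_distance` and `coral_lookup` are hypothetical names, and real Coral routing involves multiple hops per level rather than a single greedy choice.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia-style XOR metric between two node/key identifiers."""
    return a ^ b

def coral_lookup(key: int, nodes_by_level):
    """Pick a target node for a key, preferring faster (lower-RTT) clusters.

    nodes_by_level: lists of node ids, ordered fastest cluster first
    (e.g. [<20 ms cluster, <60 ms cluster, global]).  We fall through to
    a slower cluster only when the faster one has no candidate nodes.
    """
    for level_nodes in nodes_by_level:
        if level_nodes:
            # Within a cluster, move greedily toward the key by XOR distance.
            return min(level_nodes, key=lambda n: xor_distance(n, key))
    return None

# A node in a slower cluster may be closer by XOR distance,
# but the faster cluster is still preferred:
fast = [0b1010, 0b0001]   # < 20 ms cluster
slow = [0b1111]           # global cluster, XOR-closer to the key
target = coral_lookup(0b1110, [fast, slow])  # picks 0b1010 from the fast cluster
```

This is the sense in which Coral trades strict closest-node placement for latency: lookups resolve inside a nearby cluster whenever possible.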
A different solution for each service

The i3 Solution
• Add an indirection layer on top of IP
• Implement using overlay networks
• Solution components:
  • Naming using "identifiers"
  • Subscriptions using "triggers"
  • DHT as the gluing substrate

Indirection
• "Every problem in CS …"
• Only primitive needed

i3: Rendezvous Communication
• Packets addressed to identifiers ("names")
• Trigger = (identifier, IP address): inserted by receiver
• Senders decoupled from receivers
[Figure: sender issues send(ID, data); the trigger (ID, R) rewrites it to send(R, data), delivering to receiver R.]

i3: Service Model
• API
  • sendPacket(id, p);
  • insertTrigger(id, addr);
  • removeTrigger(id, addr); // optional
• Best-effort service model (like IP)
• Triggers periodically refreshed by end-hosts
• Reliability, congestion control, and flow control implemented at end-hosts

i3: Implementation
• Use a Distributed Hash Table
  • Scalable, self-organizing, robust
  • Suitable as a substrate for the Internet
[Figure: the receiver inserts its trigger with DHT.put(id); a sender's send(ID, data) is routed through the DHT to the trigger's node and delivered with IP.route(R).]

Mobility and Multicast
• Mobility supported naturally
  • End-host inserts trigger with new IP address, transparent to sender
  • Robust and supports location privacy
• Multicast
  • All receivers insert triggers under same ID
  • Sender uses that ID for sending
  • Can optimize tree construction to balance load

Mobility
• The change of the receiver's address from R to R' is transparent to the sender

Multicast
• Every packet (id, data) is forwarded to each receiver Ri that inserted the trigger (id, Ri)

Anycast
• Generalized matching
  • First k bits have to match; longest prefix match among the rest
• Related triggers must be on same server
• Server selection (randomize last bits)
[Figure: a sender's packet addressed to a|b is delivered to one of the receivers holding triggers (a|b1, R1), (a|b2, R2), (a|b3, R3), chosen by longest prefix match.]

Generalization: Identifier Stack
• Stack of identifiers
  • i3 routes packet through these identifiers
• Receivers
  • Trigger maps id to <stack of ids>
• Sender can also specify id-stack in packet
• Mechanism:
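The trigger, multicast, and anycast behavior described above can be sketched in one toy rendezvous node. `I3Node` and its attributes are hypothetical names for illustration; real i3 partitions triggers across a DHT and forwards over IP rather than logging deliveries.

```python
class I3Node:
    """Toy i3 rendezvous point (hypothetical class; real i3 runs over a DHT).

    Identifiers are fixed-length bit strings.  A trigger (id, addr) makes
    packets sent to id forward to addr; multiple triggers on one id give
    multicast, and prefix matching over the tail gives anycast.
    """

    def __init__(self, exact_bits: int = 4):
        self.exact_bits = exact_bits  # first k bits must match exactly
        self.triggers = []            # list of (identifier, addr) pairs
        self.delivered = []           # (addr, data) log standing in for IP delivery

    def insert_trigger(self, ident: str, addr: str):
        self.triggers.append((ident, addr))

    def remove_trigger(self, ident: str, addr: str):
        self.triggers.remove((ident, addr))

    def _prefix_len(self, a: str, b: str) -> int:
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def send_packet(self, ident: str, data):
        # Anycast matching from the slides: the first k bits must match
        # exactly; among those triggers, take the longest prefix match.
        k = self.exact_bits
        candidates = [(t, a) for t, a in self.triggers if t[:k] == ident[:k]]
        if not candidates:
            return  # best-effort, like IP: no trigger, packet is dropped
        best = max(self._prefix_len(t, ident) for t, _ in candidates)
        # Every trigger tied at the best prefix length receives the packet;
        # identical ids tie at full length, which yields multicast.
        for t, addr in candidates:
            if self._prefix_len(t, ident) == best:
                self.delivered.append((addr, data))

node = I3Node(exact_bits=4)
node.insert_trigger("10110001", "R1")
node.insert_trigger("10110001", "R2")  # same id => multicast group
node.insert_trigger("10111111", "R3")
node.send_packet("10110001", "hello")  # reaches R1 and R2, not R3
```

Mobility falls out for free: the receiver removes the trigger (id, R) and inserts (id, R'), and senders keep addressing the unchanged id.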
  • First id used to match trigger
  • Rest added to the RHS of trigger
  • Recursively continued

Service Composition
• Receiver mediated: R sets up chain and passes