Internet Architecture and Assumptions
David Andersen
CMU Computer Science

Course status
• 27 registered (goal: 24)
• 24 on the waitlist (goal: 0)
• So – still not looking so good.
  – If you’re dropping, remember to actually drop!
• Remember: project groups!

Internet Architecture
• Background
  – “The Design Philosophy of the DARPA Internet Protocols” (David Clark, 1988)
• Fundamental goal: effective network interconnection
• Goals, in order of priority:
  1. Continue despite loss of networks or gateways
  2. Support multiple types of communication service
  3. Accommodate a variety of networks
  4. Permit distributed management of Internet resources
  5. Cost effectiveness
  6. Host attachment should be easy
  7. Resource accountability

Priorities
• Technical lessons
  – Packet switching
  – Fate sharing / soft state
• The effects of the order of items in that list are still felt today
  – E.g., resource accounting is a hard, current research topic
• Let’s look at them in detail

Fundamental Goal
• A “technique for multiplexed utilization of existing interconnected networks”
• Multiplexing (sharing)
  – Shared use of a single communications channel
• Existing networks (interconnection)
  – Tries to define an “easy” set of requirements for the underlying networks, so as to support as many of them as possible

Sharing and Multiplexing
• Question #1: How do you avoid an all-to-all network topology?
  – Multiplexing!
  – How can you do it?
    • TDMA, FDMA, CDMA
  – And you can do statistical multiplexing
• Stat mux: efficient sharing of resources
  – A link can always transmit when it has data!

Datagram Switching
• The information needed to forward traffic is contained in the destination address of the packet
• No state established ahead of time (helps fate sharing)
• Basic building block – must build things like TCP on top
• Pretty much implies statistical multiplexing
• Alternatives:
  – Circuit switching: a signaling protocol sets up the entire path out-of-band (cf. the phone network)
  – Virtual circuits: a hybrid approach; packets carry “tags” that indicate the path, forwarding over IP
  – Source routing: the complete route is contained in each data packet

Preview: An Age-Old Debate
It is held that packet switching was one of the Internet’s greatest design choices. Of course, there are constant attempts to shoehorn the best aspects of circuits into packet switching. Examples: capabilities, MPLS, ATM, IntServ QoS, etc.
• Circuits vs. packets?
  – Circuits: guaranteed QoS, dedicated connection, easy accounting
  – Packets: efficiency, simplicity

Survivability
• If the network is disrupted and reconfigured:
  – Communicating entities should not care!
  – No higher-level state reconfiguration
  – Ergo, the transport interface only knows “working” and “not working.” Not working == complete partition.
• How to achieve such reliability?
  – Where can communication state be stored?

                     Network          Host
  Failure handling   Replication      “Fate sharing”
  Net engineering    Tough            Simple
  Switches           Maintain state   Stateless
  Host trust         Less             More

Fate Sharing
• Lose state information for an entity if (and only if?) the entity itself is lost.
• Examples:
  – OK to lose TCP state if one endpoint crashes
  – NOT okay to lose it if an intermediate router reboots
• Is this still true in today’s network?
  – NATs and firewalls
• Survivability compromise: a heterogeneous network -> less information available to end hosts and Internet-level recovery mechanisms

[Figure: connection state – “state” vs. “no state”]

Types of Service
• TCP vs. UDP
  – Elastic apps that need reliability: remote login or email
  – Inelastic, loss-tolerant apps: real-time voice or video
  – Others in between, or with stronger requirements
  – Biggest cause of delay variation: reliable delivery
    • Today’s net: ~100 ms RTT
    • Reliable delivery can add seconds
• Original Internet model: “TCP/IP” as one layer
  – The first app was remote login…
  – But then came debugging, voice, etc.
  – These differences caused the layer split and added UDP
• No QoS support assumed from below
  – In fact, some underlying nets only supported reliable delivery
    • Made the Internet datagram service less useful!
  – Hard to implement without network support
  – QoS is an ongoing debate…

Varieties of Networks
• Interconnect the ARPANET, X.25 networks, LANs, satellite networks, packet networks, serial links…
• Minimum set of assumptions for the underlying net:
  – Minimum packet size
  – Reasonable delivery odds, but not 100%
  – Some form of addressing, unless point-to-point
• Important non-assumptions:
  – Perfect reliability
  – Broadcast, multicast
  – Priority handling of traffic
  – Internal knowledge of delays, speeds, failures, etc.
• Much engineering then only has to be done once

So, how do you support them?
• Need to interconnect many existing networks
• Hide the underlying technology from applications
• Decisions:
  – Network provides minimal functionality
  – “Narrow waist”
• Tradeoff: no assumptions, no guarantees.

  Applications:   email, WWW, phone, …
                 SMTP, HTTP, RTP, …
                     TCP, UDP, …
                         IP
                  ethernet, PPP, …
                CSMA, async, sonet, …
  Technology:    copper, fiber, radio, …

The “Curse of the Narrow Waist”
• IP over anything, anything over IP
  – Has allowed for much innovation both above and below the IP layer of the stack
  – An IP stack gets a device on the Internet
• Drawbacks:
  – Difficult to make changes to IP
    • But… people are trying (cf. GENI)
  – Only a small amount of information is available about lower levels (cf. wireless)
Goal #4: Distributed Management
• Independently managed as a set of independent “Autonomous Systems”
  – ISPs
  – CMU
  – Etc.
• BGP (the Border Gateway Protocol) connects ASes together
  – Completely (well…) decentralized routing
  – Is this a good thing? (wait two slides)

A problem: Management
• “Some of the most significant problems with the Internet today relate to lack of sufficient tools for distributed management, especially in the area of routing.”
• The Internet is now a hugely complex beast
  – 18,000 constituent networks
  – Routing tables
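The destination-based forwarding described on the Datagram Switching slide can be sketched as a longest-prefix-match table lookup. This is a toy illustration, not how real routers are built: the prefixes and next-hop names below are invented, and production routers use specialized structures (tries, TCAMs) rather than a linear scan.

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next hop. A packet carries only
# its destination address; the router keeps no per-connection state.
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "gateway-A",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-B",  # more specific route
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",   # default route
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: pick the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return TABLE[best]
```

Because the table is consulted independently for every packet, nothing is set up ahead of time — which is exactly the property that enables fate sharing: a router can reboot and lose nothing the endpoints depend on.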
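The stat-mux claim on the Sharing and Multiplexing slide — bursty senders can share a link far smaller than the sum of their peak rates — can be illustrated with a toy simulation. The sender count, burst probability, and link capacity are invented for illustration:

```python
import random

random.seed(1)
SENDERS, SLOTS = 50, 10_000
P_BURST = 0.1    # each sender has a packet to send in a slot with prob. 0.1
CAPACITY = 10    # link serves 10 packets/slot, not the 50-packet peak sum

dropped = offered = 0
for _ in range(SLOTS):
    # Independent bursty arrivals; anything beyond capacity is dropped.
    arrivals = sum(random.random() < P_BURST for _ in range(SENDERS))
    offered += arrivals
    dropped += max(0, arrivals - CAPACITY)

print(f"loss rate: {dropped / offered:.4%}")
```

With peaks summing to 50 packets/slot, a capacity-10 link still drops only a small fraction of traffic, because the senders rarely burst simultaneously. A circuit-switched design would instead reserve peak capacity per sender, leaving most of the link idle.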