Final Review
CS144 Review Session 9
June 4, 2008
Derrick Isaacson, Maria Kazandjieva, Ben Nham

Announcements
• Upcoming dates
– Final Exam: June 6, 12:15 p.m.

Final Review
1. Physical & Link Layers
2. NIC Hardware
3. Wireless Link Layers
4. Wireless Routing
5. Network Coding
6. Security
7. SIP

Physical & Link Layers
• Chips vs. bits – chips are the data transferred at the physical layer; bits are the data above the physical layer
• Encoding motivations
– DC balancing, synchronization
– Can recover from some chip errors
– More chips per bit -> fewer bits per second, but more robust
– Fewer chips per bit -> more bits per second, but less robust

Link Layer
• Single-hop addressing (Ethernet address)
• Media Access Control (MAC) – regulate access to a shared medium and maximize efficiency
– Time Division Multiple Access (TDMA)
– Carrier Sense Multiple Access, Collision Detection (CSMA/CD)
– Carrier Sense Multiple Access, Collision Avoidance (CSMA/CA)
– Request-to-send, clear-to-send (RTS/CTS)

More Link Layer
• Collision Detection
– Constrains the maximum wire length and the minimum frame length
– Randomized exponential backoff on collision detection
– Uses the link less efficiently when collisions are frequent
• Collision Domain
– Hubs connect segments to create a larger shared collision domain
– Switches store and forward packets between separate collision domains

NIC Hardware & OS Overview
• Hardware user/kernel boundary – expensive to switch between modes
– System calls – calls into the kernel on behalf of the currently running process
– Interrupts – code not acting on behalf of the current process, e.g., NIC-generated interrupts and TCP/IP processing
• OS gives each process a virtual address space for fault isolation
– Paging – divide memory into chunks and map between virtual and physical pages of memory
• Device communication – between processor and device over an I/O bus
– Memory-mapped devices
– Special I/O instructions
– DMA

More NIC/OS
• Expensive context switches can affect
networking performance
– TCP push bit hints to the OS when to wake the listening process
– Send and receive packets in batches
– Minimize latency for TCP
• Device driver architecture
– Polling – loop asking the card whether a buffer is free / a packet has arrived; wastes CPU, and latency is high if the poll is scheduled for later
– Interrupt-driven – what most OSes use; low latency, but interrupts are expensive, so performance is poor in high-throughput scenarios
– Best is an adaptive algorithm that switches between interrupts and polling
• Socket implementation – buffering
– Need to encapsulate data easily
– Solution – don't store packets in contiguous memory

Wireless Link Layers & Wireless Routing
• See Section 8
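The randomized exponential backoff bullet above can be sketched as follows. This is a toy model of truncated binary exponential backoff, not driver code; the constants follow classic 10 Mb/s Ethernet (51.2 µs slot time, exponent capped at 10, abort after 16 attempts):

```python
import random

SLOT_TIME_US = 51.2          # one slot = 512 bit times on 10 Mb/s Ethernet
MAX_BACKOFF_EXPONENT = 10    # classic Ethernet caps the exponent at 10
MAX_ATTEMPTS = 16            # give up after 16 collisions

def backoff_delay_us(collision_count, rng=random):
    """Return a randomized delay (in microseconds) to wait after the
    nth collision, using truncated binary exponential backoff."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("too many collisions; abort transmission")
    k = min(collision_count, MAX_BACKOFF_EXPONENT)
    slots = rng.randrange(0, 2 ** k)   # uniform in [0, 2^k - 1] slots
    return slots * SLOT_TIME_US
```

Doubling the backoff window after each collision is what lets many stations share one collision domain: the more contention there is, the more the retransmissions spread out in time.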
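The "adaptive algorithm between interrupts and polling" bullet can be sketched as a toy model (the names `Nic`, `poll`, and `BUDGET` are hypothetical; Linux's NAPI uses a similar structure). On the first interrupt the driver masks further interrupts and drains packets in polling mode; it re-arms interrupts only once the queue is empty:

```python
from collections import deque

BUDGET = 8  # max packets drained per poll pass

class Nic:
    """Toy NIC model: a receive queue plus a maskable interrupt flag."""
    def __init__(self):
        self.rx_queue = deque()
        self.interrupts_enabled = True

def rx_interrupt_handler(nic, deliver):
    # On interrupt: mask further interrupts and switch to polling mode.
    nic.interrupts_enabled = False
    poll(nic, deliver)

def poll(nic, deliver):
    # Drain up to BUDGET packets, then decide: keep polling or re-arm.
    for _ in range(BUDGET):
        if not nic.rx_queue:
            nic.interrupts_enabled = True   # queue empty: back to interrupts
            return
        deliver(nic.rx_queue.popleft())
    # Budget exhausted while still under load: caller keeps polling.
```

This gets the low latency of interrupts when traffic is light and the low per-packet overhead of polling when traffic is heavy.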