Network Applications: High-Performance Network Server
2/7/2012

Outline
- Admin and recap
- High-performance HTTP server

Admin
- Questions on programming assignment 0?

Recap: HTTP
- HTTP message flow
- stateless server
  • each request is self-contained; thus cookies and authentication are needed in each message
- reducing latency
  • persistent HTTP (the problem is introduced by layering!)
  • conditional GET reduces server/network workload and latency
  • caches and proxies reduce traffic and latency
- HTTP message format
  • simple methods, rich headers
  • URL does not specify content type
- Is the application extensible, scalable, robust, secure?

Recap: Server Processing Steps
Accept Client Connection -> Read Request -> Find File -> Send Response Header -> Read File -> Send Data
- accepting a connection may block waiting on the network; reading a file may block waiting on disk I/O
- Want to be able to process requests concurrently.

Concurrency: Limit by the Bottleneck
[figure: before concurrency, CPU, disk, and network each sit idle while the others work; after overlapping requests, the stages overlap and the bottleneck resource stays busy]

Recap: Writing High-Performance Servers: Using Multi-Threads
- A thread is a sequence of instructions which may execute in parallel with other threads
- Basic design: on-demand, per-request thread
  • one thread created for each client connection
  • only the flow (thread) processing a particular request is blocked

Recap: Implementing Threads

  class RequestHandler extends Thread {
    RequestHandler(Socket connSocket) { … }
    public void run() {
      // process request
    }
    …
  }

  Thread t = new RequestHandler(connSocket);
  t.start();

  class RequestHandler implements Runnable {
    RequestHandler(Socket connSocket) { … }
    public void run() {
      // process request
    }
    …
  }

  RequestHandler rh = new RequestHandler(connSocket);
  Thread t = new Thread(rh);
  t.start();

Recap: Problem of Per-Request Thread
- high thread creation/deletion overhead
- too many threads -> resource overuse -> throughput meltdown -> response-time explosion
- Q: how many threads are active at any instant of time?

Background: Little's Law (1961)
For any system with no (or low) loss.
Assume mean arrival rate λ, mean time a request spends at the device R, and mean number of requests at the device Q.
Then the relationship between Q, λ, and R is: Q = λR
Example: Yale College admits 1500 students each year, and the mean time a student stays is 4 years; how many students are enrolled? Q = λR = 1500 × 4 = 6000 students.

Little's Law
[figure: derivation of Little's Law; arrivals plotted over time t, where the area under the curve equals both Q·t and (λt)·R, giving Q = λR]

Question: Using a Fixed Set of Threads (Thread Pool)
- What are some design possibilities?

Design 1: Threads Share Access to the welcomeSocket (sketch; not working code)

  WorkerThread {
    void run() {
      while (true) {
        Socket myConnSock = welcomeSocket.accept();
        // process myConnSock
        myConnSock.close();
      } // end of while
    }
  }

[figure: worker threads 1..K, each calling accept() on the shared welcome socket]

Design 2: Producer/Consumer (sketch; not working code)
A main thread accepts connections and adds them to a dispatch queue Q; worker threads 1..K remove connections from Q and process them.

  main {
    void run() {
      while (true) {
        Socket con = welcomeSocket.accept();
        Q.add(con);
      } // end of while
    }
  }

  WorkerThread {
    void run() {
      while (true) {
        Socket myConnSock = Q.remove();
        // process myConnSock
        myConnSock.close();
      } // end of while
    }
  }

Common Issues Facing Designs 1 and 2
- Both designs involve multiple threads modifying the same data concurrently
  • Design 1: welcomeSocket
  • Design 2: Q
- In our original TCPServerMT, do we have multiple threads modifying the same data concurrently?

Outline
- Recap
- High-performance server
  • multi-thread basics
  • thread concurrency and shared data

Concurrency and Shared Data
- Concurrency is easy if threads don't interact: each thread does its own thing, ignoring other threads
- Typically, however, threads need to communicate with each other
- Communication/coordination can be done through shared data
- In Java, different threads may access static and heap data simultaneously, causing problems

Simple Example

  public class ShareExample extends Thread {
    private static int cnt = 0; // shared state

    public void run() {
      int y = cnt;
      cnt = y + 1;
    }

    public static void main(String args[]) throws Exception {
      Thread t1 = new ShareExample();
      Thread t2 = new ShareExample();
      t1.start();
      t2.start();
      Thread.sleep(1000);
      System.out.println("cnt = " + cnt);
    }
  }

What is the potential result?

Simple Example (cont.)
What if we add a println between the read and the write?

  int y = cnt;
  System.out.println("Calculating…");
  cnt = y + 1;

What Happened?
- A thread was preempted in the middle of an operation
- Reading and writing cnt was supposed to be atomic: to happen with no interference from other threads
- But the scheduler interleaves threads and caused a race condition
- Such bugs can be extremely hard to reproduce, and so hard to debug

Question
If instead of

  int y = cnt;
  cnt = y + 1;

we had written

  cnt++;

would this avoid the race condition?
Answer: NO! Don't depend on your intuition about atomicity.

Synchronization
Refers to mechanisms allowing a programmer to control the execution order of some operations across different threads in a concurrent program.
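As a minimal sketch of such a mechanism (a `synchronized` block; the deck's later slides cover Java locks and `synchronized` in detail), the race in ShareExample disappears once the read-modify-write of `cnt` is guarded by a lock and the main thread joins its workers instead of sleeping. The class and lock names here are illustrative, not from the slides:

```java
// Sketch: the ShareExample race fixed by guarding the read-modify-write
// of the shared counter with a lock, via a synchronized block.
public class SyncExample extends Thread {
    static int cnt = 0;                       // shared state
    private static final Object lock = new Object();

    public void run() {
        synchronized (lock) {                 // at most one thread inside at a time
            int y = cnt;
            cnt = y + 1;                      // read + write now execute as one atomic unit
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new SyncExample();
        Thread t2 = new SyncExample();
        t1.start();
        t2.start();
        t1.join();                            // join instead of sleeping: wait for both
        t2.join();                            // increments to finish before reading cnt
        System.out.println("cnt = " + cnt);   // always prints "cnt = 2"
    }
}
```

With the lock, no interleaving can lose an update, so the printed value is 2 on every run, not "1 or 2" as in the unguarded version.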
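The rest of the deck builds toward wait/notify and the blocking queues added in Java 1.5. As a forward-looking sketch of Design 2 on that basis, the dispatch queue Q can be a `java.util.concurrent.ArrayBlockingQueue`, whose `put()` and `take()` are thread-safe and block instead of racing. Strings stand in for accepted connection sockets, and the STOP sentinel is an illustration-only shutdown device, not part of the slides' design:

```java
// Sketch of Design 2 (producer/consumer thread pool) using
// java.util.concurrent.ArrayBlockingQueue (available since Java 1.5).
// put() blocks when the queue is full; take() blocks when it is empty,
// so workers never busy-wait and never race on the queue.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    static final BlockingQueue<String> Q = new ArrayBlockingQueue<>(10);
    static final AtomicInteger processed = new AtomicInteger();
    static final String STOP = "STOP";        // sentinel telling a worker to exit

    public static void main(String[] args) throws InterruptedException {
        final int K = 3;                      // fixed pool of K worker threads
        Thread[] workers = new Thread[K];
        for (int i = 0; i < K; i++) {
            workers[i] = new Thread(() -> {
                try {
                    while (true) {
                        String conn = Q.take();       // consumer: blocks if Q is empty
                        if (conn.equals(STOP)) break;
                        processed.incrementAndGet();  // stand-in for "process myConnSock"
                    }
                } catch (InterruptedException e) {
                    // exit on interrupt
                }
            });
            workers[i].start();
        }
        for (int i = 0; i < 8; i++) {
            Q.put("conn-" + i);               // producer: stand-in for accept()
        }
        for (int i = 0; i < K; i++) {
            Q.put(STOP);                      // one sentinel per worker
        }
        for (Thread w : workers) {
            w.join();
        }
        System.out.println("processed = " + processed.get());  // prints "processed = 8"
    }
}
```

All 8 "connections" are processed exactly once regardless of how the scheduler interleaves the K workers, which is precisely the safety property the slides' hand-built Q has to establish.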