Yale CPSC 433 - Network Applications: High-Performance Network Server

This preview shows pages 1-5, 37-42, and 74-78 out of 78 pages.

Network Applications: High-Performance Network Server
2/7/2012

Contents (full deck): Outline; Admin; Recap: HTTP; Recap: Server Processing Steps; Concurrency: Limit by the Bottleneck; Recap: Writing High Performance Servers: Using Multi-Threads; Recap: Implementing Threads; Recap: Problem of Per-Request Thread; Background: Little's Law (1961); Little's Law; Question: Using a Fixed Set of Threads (Thread Pool); Design 1: Threads Share Access to the welcomeSocket; Design 2: Producer/Consumer; Common Issues Facing Design 1 and 2; Concurrency and Shared Data; Simple Example; What Happened?; Question; Synchronization; Java Lock (1.5); Java Synchronized; Discussion; Synchronization on this; Synchronized Method; Example; State Analysis of K Thread Pool; Summary of Key Ideas; Summary: Problem and Solution; Why not Synchronization; Synchronization Overhead; Main; Worker; Problem of Busy-wait; Solution: Suspension; Alternative: Suspension; Wait-sets and Notification; Wait-set and Notification (cont); Notification; Summary: Thread-Based Network Server; Summary: Guarding via Suspension: Waiting; Summary: Guarding via Suspension: Changing a Condition; Note; Java (1.5); Producer/Consumer Example; Blocking Queues in Java 1.5; Correctness; Correctness Properties; Safety Properties; Make Program Explicit; Statements; Check Safety; Real Implementation of wait; States; Liveness Properties; Main Thread can always add to Q; All elements in Q can be removed

Outline
- Admin and recap
- High performance HTTP server

Admin
- Questions on programming assignment 0?

Recap: HTTP
- HTTP message flow: stateless server
  - each request is self-contained; thus cookie and authentication are needed in each message
- reducing latency:
  - persistent HTTP (the problem is introduced by layering!)
  - conditional GET reduces server/network workload and latency
  - cache and proxy reduce traffic and latency
- HTTP message format: simple methods, rich headers
- URL does not specify content type
- Is the application extensible, scalable, robust, secure?

Recap: Server Processing Steps
Accept Client Connection -> Read Request -> Find File -> Send Response Header -> Read File -> Send Data
- accepting a connection and reading a request may block waiting on the network
- finding and reading the file may block waiting on disk I/O
- Want to be able to process requests concurrently.

Concurrency: Limit by the Bottleneck
[Figure: CPU, DISK, and NET activity before and after overlapping; with concurrent processing, throughput is limited only by the bottleneck resource.]

Recap: Writing High Performance Servers: Using Multi-Threads
- A thread is a sequence of instructions which may execute in parallel with other threads
- Basic design: on-demand per-request thread
  - one thread created for each client connection
  - only the flow (thread) processing a particular request is blocked

Recap: Implementing Threads

    class RequestHandler extends Thread {
        RequestHandler(Socket connSocket) { ... }
        public void run() {
            // process request
        }
        ...
    }

    Thread t = new RequestHandler(connSocket);
    t.start();

    class RequestHandler implements Runnable {
        RequestHandler(Socket connSocket) { ... }
        public void run() {
            // process request
        }
        ...
    }

    RequestHandler rh = new RequestHandler(connSocket);
    Thread t = new Thread(rh);
    t.start();

Recap: Problem of Per-Request Thread
- High thread creation/deletion overhead
- Too many threads -> resource overuse -> throughput meltdown -> response time explosion
- Q: how many threads are active at any instance of time?

Background: Little's Law (1961)
For any system with no (or low) loss:
- Assume mean arrival rate λ, mean time at device R, and mean number of requests at device Q.
- Then the relationship between Q, λ, and R is: Q = λ R

Little's Law
[Figure: arrivals over time; computing the area under the arrival curve two ways yields Q = λ R.]
Example: Yale College admits 1500 students each year, and the mean time a student stays is 4 years; how many students are enrolled? By Little's Law, Q = 1500/year × 4 years = 6000 students.

Question: Using a Fixed Set of Threads (Thread Pool)
- What are some design possibilities?

Design 1: Threads Share Access to the welcomeSocket
[Figure: threads 1..K all accept on the shared welcome socket.]

    // sketch; not working code
    WorkerThread {
        void run {
            while (true) {
                Socket myConnSock = welcomeSocket.accept();
                // process myConnSock
                myConnSock.close();
            } // end of while
    }

Design 2: Producer/Consumer
[Figure: the main thread accepts connections and adds them to a dispatch queue Q; worker threads 1..K remove connections from Q.]

    // sketch; not working code
    main {
        void run {
            while (true) {
                Socket con = welcomeSocket.accept();
                Q.add(con);
            } // end of while
    }

    WorkerThread {
        void run {
            while (true) {
                Socket myConnSock = Q.remove();
                // process myConnSock
                myConnSock.close();
            } // end of while
    }

Common Issues Facing Design 1 and 2
- Both designs involve multiple threads modifying the same data concurrently
  - Design 1: welcomeSocket
  - Design 2: Q
- In our original TCPServerMT, do we have multiple threads modifying the same data concurrently?

Outline
- Recap
- High-performance server
  - Multi-threads basics
  - Thread concurrency and shared data

Concurrency and Shared Data
- Concurrency is easy if threads don't interact: each thread does its own thing, ignoring other threads
- Typically, however, threads need to communicate with each other
- Communication/coordination can be done through shared data
- In Java, different threads may access static and heap data simultaneously, causing problems

Simple Example

    public class ShareExample extends Thread {
        private static int cnt = 0; // shared state

        public void run() {
            int y = cnt;
            cnt = y + 1;
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new ShareExample();
            Thread t2 = new ShareExample();
            t1.start();
            t2.start();
            Thread.sleep(1000);
            System.out.println("cnt = " + cnt);
        }
    }

What is the potential result?

Simple Example
What if we add a println between the read and the write?

    int y = cnt;
    System.out.println("Calculating...");
    cnt = y + 1;

What Happened?
- A thread was preempted in the middle of an operation
- Reading and writing cnt was supposed to be atomic: to happen with no interference from other threads
- But the scheduler interleaves threads and caused a race condition
- Such bugs can be extremely hard to reproduce, and so hard to debug

Question
If instead of

    int y = cnt;
    cnt = y + 1;

we had written

    cnt++;

would this avoid the race condition?
Answer: NO! Don't depend on your intuition about atomicity.

Synchronization
Refers to mechanisms allowing a programmer to control the execution order of some operations across different threads in a concurrent program.
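The preview cuts off at the start of the synchronization material. As a minimal sketch of where it is headed, the ShareExample race above can be eliminated by wrapping the read-modify-write in a synchronized block; the class name SafeShareExample, the helper runCounter, and the per-thread count N are illustrative choices, not from the slides:

```java
// Sketch: the shared-counter race from ShareExample, fixed by serializing
// the read-modify-write with a synchronized block on a shared lock object.
public class SafeShareExample extends Thread {
    private static int cnt = 0;                  // shared state
    private static final Object lock = new Object();
    private static final int N = 100_000;        // increments per thread (illustrative)

    public void run() {
        for (int i = 0; i < N; i++) {
            synchronized (lock) {                // at most one thread at a time
                int y = cnt;                     // read
                cnt = y + 1;                     // write, with no interleaving
            }
        }
    }

    public static int runCounter() throws InterruptedException {
        cnt = 0;
        Thread t1 = new SafeShareExample();
        Thread t2 = new SafeShareExample();
        t1.start();
        t2.start();
        t1.join();                               // wait for completion instead of sleep(1000)
        t2.join();
        return cnt;                              // always 2 * N with the lock held
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("cnt = " + runCounter());   // prints "cnt = 200000"
    }
}
```

Note that cnt++ compiles to the same unsynchronized read-modify-write sequence, which is why the slide's answer is NO; making the whole sequence mutually exclusive (or using java.util.concurrent.atomic.AtomicInteger.incrementAndGet) is what restores atomicity.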

