UT Dallas CS 4337 - #Sebesta ch13 concurrency - to use - short (45 pages)

School: University of Texas at Dallas
Course: CS 4337 - Organization of Programming Languages

Chapter 13: Concurrency

Chapter 13 Topics
- Introduction
- Introduction to Subprogram-Level Concurrency
- Semaphores
- Monitors
- Message Passing
- C Support for Concurrency (process, thread)
- Java Threads
- C# Threads
- Concurrency in Functional Languages
- Statement-Level Concurrency

Copyright © 2012 Addison-Wesley. All rights reserved.

Introduction
- Concurrency can occur at four levels:
  - Machine instruction level
  - High-level language statement level
  - Unit level
  - Program level
- Because there are no language issues in instruction-level and program-level concurrency, they are not addressed here

Multiprocessor Architectures
- Late 1950s: one general-purpose processor and one or more special-purpose processors for input and output operations
- Early 1960s: multiple complete processors, used for program-level concurrency
- Mid-1960s: multiple partial processors, used for instruction-level concurrency
- Single-Instruction Multiple-Data (SIMD) machines
- Multiple-Instruction Multiple-Data (MIMD) machines
- A primary focus of this chapter is shared-memory MIMD machines (multiprocessors)

Categories of Concurrency
- Physical concurrency: multiple independent processors (multiple threads of control)
- Logical concurrency: the appearance of physical concurrency is presented by time-sharing one processor; software can be designed as if there were multiple threads of control
- Coroutines (quasi-concurrency) have a single thread of control
- A thread of control in a program is the sequence of program points reached as control flows through the program

Motivations for the Use of Concurrency
- Multiprocessor computers capable of physical concurrency are now widely used
- Even if a machine has just one processor, a program written to use concurrent execution can be faster than the same program written for nonconcurrent execution
- Involves a different way of designing
software that can be very useful: many real-world situations involve concurrency
- Many program applications are now spread over multiple machines, either locally or over a network

Introduction to Subprogram-Level Concurrency
- A task (or process, or thread) is a program unit that can be in concurrent execution with other program units
- Tasks differ from ordinary subprograms in that:
  - A task may be implicitly started
  - When a program unit starts the execution of a task, it is not necessarily suspended
  - When a task's execution is completed, control may not return to the caller
- Tasks usually work together

Two General Categories of Tasks
- Heavyweight tasks execute in their own address space
- Lightweight tasks all run in the same address space (more efficient)
- A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way

Task Synchronization
- A mechanism that controls the order in which tasks execute
- Two kinds of synchronization: cooperation synchronization and competition synchronization
- Task communication is necessary for synchronization, provided by:
  - Shared nonlocal variables
  - Parameters
  - Message passing

Kinds of Synchronization
- Cooperation: task A must wait for task B to complete some specific activity before task A can continue its execution (e.g., the producer-consumer problem)
- Competition: two or more tasks must use some resource that cannot be simultaneously used (e.g., a shared counter)
- Competition is usually provided by mutually exclusive access (approaches are discussed later)

Need for Competition Synchronization
- Task A: TOTAL = TOTAL + 1
- Task B: TOTAL = 2 * TOTAL
- Depending on the order of the fetches and stores, there could be four different results
Scheduler
- Providing synchronization requires a mechanism for delaying task execution
- Task execution control is maintained by a program called the scheduler, which maps task execution onto available processors

Task Execution States
- New: created but not yet started
- Ready: ready to run but not currently running (no available processor)
- Running
- Blocked: has been running, but cannot now continue (usually waiting for some event to occur)
- Dead: no longer active in any sense

Liveness and Deadlock
- Liveness is a characteristic that a program unit may or may not have
- In sequential code, it means the unit will eventually complete its execution
- In a concurrent environment, a task can easily lose its liveness
- If all tasks in a concurrent environment lose their liveness, it is called deadlock

Design Issues for Concurrency
- Competition and cooperation synchronization (the most important issue)
- Controlling task scheduling
- How can an application influence task scheduling
- How and when tasks start and end execution
- How and when tasks are created

Methods of Providing Synchronization
- Semaphores
- Monitors
- Message passing

Semaphores
- Dijkstra, 1965
- A semaphore is a data structure consisting of a counter and a queue for storing task descriptors
- A task descriptor is a data structure that stores all of the relevant information about the execution state of the task
- Semaphores can be used to implement guards on the code that accesses shared data structures
- Semaphores have only two operations, wait and release (originally called P and V by Dijkstra)
- Semaphores can be used to provide both competition and cooperation synchronization

Cooperation Synchronization with Semaphores
- Example: a shared buffer
- The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the only ways to access the buffer
- Use two semaphores for cooperation: emptyspots and fullspots
- The semaphore counters are used to store the numbers of empty spots and full spots in the buffer

Cooperation Synchronization with Semaphores (continued)
- DEPOSIT must first check emptyspots to see if there is room in the buffer
- If there is room, the counter of emptyspots is decremented and …

