UMD CMSC 433 - Concurrency abstractions

CMSC 433
Concurrency abstractions
Oct 30th, 2007

Project 3, due Nov 14th
•parallel graph exploration from a start node
•find all reachable nodes
•perform a computation at each node
•both finding the neighbors of a node and performing the computation might be expensive
•want to use concurrency to accomplish this quickly

Nodes

    public interface Node<N extends Node<N, V>, V> {
        V compute();
        Collection<N> neighbors();
    }

Explorer

    public interface Explorer<N extends Node<N, V>, V> {
        public void explore(N start, int parallelism) throws InterruptedException;
        public Collection<N> reached();
        public V getValue(N n);
    }

Implementing explore
•I'll give you a sequential implementation of explore
•You have to write a concurrent one
•explore is only called once on an explorer
•can return before all computation is done
•but all computation should be queued up for execution
•if interrupted, computation should be aborted promptly

Mock Framework
•We'll provide a mock framework
•can configure graph layout
•time required for calls to neighbors() and compute()
•can mock processors
•want to mock 600 processors?

Concurrent correctness
•We will be executing these projects on a system with 32 cores
•We will ask you to explain why certain bad things can't happen

Efficiency matters
•There will be some test cases that check that your implementation isn't too inefficient
•and that it also reaches a moderate level of parallelism
•Some sort of non-grade prize for the most efficient solution

Simplicity
•You don't need to create threads, synchronize, or use Locks, Conditions, or volatile variables to complete this project
•I encourage you not to
•We might require it, or dock you points if you do

Concurrency without threads
•You can write concurrent applications that don't use explicit threads or synchronization
•Use built-in abstractions that support coordination and parallel execution
•a lot of this is covered in Section I-5.1 of JCIP

Java 6 alert
•Some of the features I'm going to be talking about are available only in Java 6+, not in Java 5
•If you've got an Apple computer, we're still screwed
•Leopard doesn't provide Java 6, as many people had hoped/expected

Key concepts
•thread-safe collections
•concurrent collections
•blocking queues
•synchronizers
•thread locals
•executors

Thread-safe collections
•Standard collections or other abstractions that are intended to be thread safe
•Generally limited to one thread operating on them at a time
•watch out for sequences that need to be atomic
•Can use the Collections wrapper methods (e.g., Collections.synchronizedMap)

ReadWriteLock
•You can use a ReadWriteLock implementation
•e.g., ReentrantReadWriteLock
•Protect read-only operations with a read lock
•allow simultaneous reads
•BE CAREFUL: some operations you might think are read-only are actually read/write operations
•e.g., WeakHashMap.get()

Concurrent collections
•Designed to allow multiple simultaneous accesses and updates
•blocking only when they "conflict"
•Higher space overhead
•not much time overhead
•Many of the concurrent collections do not allow null keys or values

ConcurrentHashMap
•Allows simultaneous reads, and by default up to 16 simultaneous writers
•can increase the number of simultaneous writers
•Use Collections.newSetFromMap to construct a concurrent set (see the sketch below)
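
A minimal sketch of the newSetFromMap idea, applied to a thread-safe "reached" set of the kind Project 3 needs. The class and method names here are illustrative, not part of the project handout:

    import java.util.Collections;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class ReachedSet {
        // Collections.newSetFromMap (Java 6+) gives Set semantics backed by the
        // keys of any Map; backing it with a ConcurrentHashMap yields a concurrent set.
        // The third constructor argument raises the expected number of simultaneous
        // writers above the default of 16.
        private final Set<String> reached =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>(16, 0.75f, 64));

        // Returns true only for the first thread to add this node id,
        // so it can double as a "should I explore this node?" check.
        public boolean markReached(String nodeId) {
            return reached.add(nodeId);
        }
    }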
Special methods
•V putIfAbsent(K key, V value)
•store the value only if the key has no mapping
•return the old value (null if none)
•boolean remove(K key, V oldValue)
•remove the mapping only if it has the specified value
•boolean replace(K key, V oldValue, V newValue)
•update the mapping only if it has the specified value

ConcurrentSkipLists
•Skip lists are a probabilistic alternative to balanced trees
•something I invented back in 1988
•ConcurrentSkipListMap and ConcurrentSkipListSet provide concurrent sorted map/set implementations
•and lots of other API improvements over TreeMaps
•Java 6 only

CopyOnWriteArrayList
•Uses locking to ensure only one thread can update it at a time
•any update copies the backing array
•thus, read-only operations don't need any locks
•iteration uses a snapshot of the array
•so iteration tolerates concurrent modification
•Suitable only if updates are rare

Important use case
•Keeping track of listeners to an Observable
•while iterating through the list of listeners, one of them might ask to be unsubscribed
•a "concurrent update", even though we only have one thread

Blocking Queues and Deques
•A Queue is a first-in, first-out queue
•A Deque is a double-ended queue
•allows addition and removal at both ends
•a deque can also serve as a stack

What happens when an operation can't immediately succeed?

                Throws exception    Returns special value    Blocks
    Insert      add(e)              offer(e)                 put(e)
    Remove      remove()            poll()                   take()
    Examine     element()           peek()                   n/a

Queue notes
•Blocking queues also offer timed offer and poll methods
•Several different implementations, each with its own advantages
•ConcurrentLinkedQueue
•doesn't support blocking, but allows simultaneous addition/deletion
•ArrayBlockingQueue, LinkedBlockingQueue, and LinkedBlockingDeque

Synchronizers
•Other ways to wait for some condition to be true
•CountDownLatch
•FutureTask
•Semaphore
•Barriers

CountDownLatch
•A variable that can be decremented
•never incremented
•You can wait for it to get to zero
•You can also find out the current value
•most of the time, you won't need to find out the current value

FutureTask<V>
•implements Runnable
•constructor takes a Callable<V>
•like Runnable, except it returns a value of type V
•need to ask someone to run the FutureTask
•then any thread can call get() on the FutureTask
•blocks until the result is available

Semaphore
•Contains a count of the number of permits available
•You can acquire or release permits
•no checking that you are releasing permits you actually hold
•really, just a counter
•acquire blocks if not enough permits are available

Fairness
•Consider the blocking queue example from the midterm
•What happens if one thread wants to atomically remove 10 elements from a queue that can contain up to 20 elements
•but there is a constant stream of other threads that want to remove smaller numbers of elements?
•Starvation!

Some abstractions have fair variants
•For example, fair semaphores and fair reentrant locks
•Generally, fair guarantees first-come, first-served
•But fairness almost always reduces throughput
•over and above the implementation cost
•letting running threads run improves throughput

Fair semaphores
•We could have used a fair semaphore for our blocking queue (see the sketch below)
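
A minimal sketch of that idea, assuming a bounded buffer with a capacity of 20 where a fair semaphore counts the elements available to remove. The class and method names are invented for illustration, and this is not the midterm solution:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Semaphore;

    public class FairBuffer<E> {
        private final Queue<E> items = new ConcurrentLinkedQueue<E>();
        private final Semaphore available = new Semaphore(0, true);   // true = fair (FIFO) ordering
        private final Semaphore space = new Semaphore(20, true);      // capacity of 20

        public void put(E e) throws InterruptedException {
            space.acquire();        // wait for room
            items.add(e);
            available.release();    // signal one more element present
        }

        public List<E> takeMany(int n) throws InterruptedException {
            available.acquire(n);   // fair: a bulk request keeps its place in line
            List<E> result = new ArrayList<E>(n);
            for (int i = 0; i < n; i++)
                result.add(items.poll());   // n permits guarantee n elements are present
            space.release(n);
            return result;
        }
    }

Because the available semaphore is fair, a thread calling takeMany(10) keeps its place in line instead of being overtaken indefinitely by a stream of threads each asking for one element.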

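Returning to the FutureTask slide above, a minimal, self-contained example of the pattern it describes; the computation here is just a stand-in for something expensive:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.FutureTask;

    public class FutureTaskDemo {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            // A Callable is like a Runnable, except it returns a value.
            FutureTask<Integer> task = new FutureTask<Integer>(new Callable<Integer>() {
                public Integer call() {
                    return 6 * 7;   // stands in for an expensive computation
                }
            });

            new Thread(task).start();     // FutureTask implements Runnable, so someone has to run it
            Integer result = task.get();  // any thread can call get(); blocks until the result is ready
            System.out.println(result);   // prints 42
        }
    }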
