UNO CSCI 8530 - Study Notes

Real-World Concurrency
By Bryan Cantrill and Jeff Bonwick
Communications of the ACM, November 2008, Vol. 51, No. 11
DOI: 10.1145/1400214.1400227

What does the proliferation of concurrency mean for the software you develop?

Software practitioners today could be forgiven if recent microprocessor developments have given them some trepidation about the future of software. While Moore's Law continues to hold (that is, transistor density continues to double roughly every 18 months), due to both intractable physical limitations and practical engineering considerations, that increasing density is no longer being spent on boosting clock rate, but rather on putting multiple CPU cores on a single CPU die. From the software perspective, this is not a revolutionary shift, but rather an evolutionary one: multicore CPUs are not the birthing of a new paradigm, but rather the progression of an old one (multiprocessing) into more widespread deployment. From many recent articles and papers on the subject, however, one might think that this blossoming of concurrency is the coming of the apocalypse, that "the free lunch is over."[10]

As practitioners who have long been at the coal face of concurrent systems, we hope to inject some calm reality (if not some hard-won wisdom) into a discussion that has too often descended into hysterics. Specifically, we hope to answer the essential question: what does the proliferation of concurrency mean for the software that you develop? Perhaps regrettably, the answer to that question is neither simple nor universal—your software's relationship to concurrency depends on where it physically executes, where it is in the stack of abstraction, and the business model that surrounds it. And given that many software projects now have components in different layers of the abstraction stack spanning different tiers of the architecture, you may well find that even for the software that you write, you do not have one answer but several: some of your code may be able to be left forever executing in sequential bliss, and some of your code may need to be highly parallel and explicitly multithreaded. Further complicating the answer, we will argue that much of your code will not fall neatly into either category: it will be essentially sequential in nature but will need to be aware of concurrency at some level. While we will assert that less—much less—code needs to be parallel than some might fear, it is nonetheless true that writing parallel code remains something of a black art. We will therefore also give specific implementation techniques for developing a highly parallel system. As such, this article will be somewhat dichotomous: we will try both to argue that most code can (and should) achieve concurrency without explicit parallelism, and at the same time to elucidate techniques for those who must write explicitly parallel code. Indeed, this article is half stern lecture on the merits of abstinence and half Kama Sutra.
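The dichotomy described above, code that is essentially sequential but must be aware of concurrency at some level versus code that is explicitly parallel, can be made concrete with a small example. The following is a minimal sketch, not taken from the article, assuming a POSIX threads (pthreads) environment and compilation with cc -pthread; the names handle_request, worker, NTHREADS, and NREQS are illustrative only. The request handler is straight-line sequential code whose only awareness of concurrency is the mutex protecting a shared counter; the explicit parallelism is confined to the small harness that fans the handler out across threads.

/*
 * Minimal sketch (not from the article): a sequential request handler
 * behind a small multithreaded harness.  Assumes POSIX threads.
 * Build (assumption): cc -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NREQS    1000

static long requests_done;                      /* shared state */
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;

/* Essentially sequential: no thread is created or joined here. */
static void
handle_request(int id)
{
    char buf[64];

    /* Straight-line "work"; nothing here knows about threads. */
    snprintf(buf, sizeof (buf), "request %d processed", id);

    /*
     * The handler's only concession to concurrency: the shared
     * counter it updates must be protected by a lock.
     */
    pthread_mutex_lock(&stats_lock);
    requests_done++;
    pthread_mutex_unlock(&stats_lock);
}

/* The explicit parallelism is confined to this small harness. */
static void *
worker(void *arg)
{
    int base = *(int *)arg;

    for (int i = 0; i < NREQS; i++)
        handle_request(base + i);
    return NULL;
}

int
main(void)
{
    pthread_t tids[NTHREADS];
    int bases[NTHREADS];

    for (int t = 0; t < NTHREADS; t++) {
        bases[t] = t * NREQS;
        pthread_create(&tids[t], NULL, worker, &bases[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tids[t], NULL);

    printf("handled %ld requests\n", requests_done);
    return 0;
}

In practice the role of such a harness is usually played by software the application author did not write, such as an operating system, a web server, or a database, which is how a great deal of code can remain essentially sequential while the system as a whole still benefits from concurrency.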
Some Historical Context

Before discussing concurrency with respect to today's applications, it is helpful to explore the history of concurrent execution: even by the 1960s—when the world was still wet with the morning dew of the computer age—it was becoming clear that a single central processing unit executing a single instruction stream would result in unnecessarily limited system performance. While computer designers experimented with different ideas to circumvent this limitation, it was the introduction of the Burroughs B5000 in 1961 that captured the idea that ultimately proved to be the way forward: disjoint CPUs concurrently executing different instruction streams, but sharing a common memory. In this regard (as in many) the B5000 was at least a decade ahead of its time. But it was not until the 1980s that the need for multiprocessing became clear to a wider body of researchers, who over the course of the decade explored cache coherence protocols (for example, the Xerox Dragon and DEC Firefly), prototyped parallel operating systems (for example, multiprocessor Unix running on the AT&T 3B20A), and developed parallel databases (for example, Gamma at the University of Wisconsin).

In the 1990s, the seeds planted by researchers in the 1980s bore the fruit of practical, shipping systems, with many computer companies (for example, Sun, SGI, Sequent, Pyramid) placing big bets on symmetric multiprocessing. These bets on concurrent hardware necessitated corresponding bets on concurrent software: if an operating system cannot execute in parallel, not much else in the system can either. These companies came to the realization (independently) that their operating systems must be rewritten around the notion of concurrent execution. These rewrites took place in the early 1990s, and the resulting systems were polished over the decade. In fact, much of the resulting technology can today be seen in open source operating systems like OpenSolaris, FreeBSD, and Linux.

Just as several computer companies made big bets around multiprocessing, several database vendors made bets around highly parallel relational databases; upstarts like Oracle, Teradata, Tandem, Sybase, and Informix needed to use concurrency to achieve a performance advantage over the mainframes that had dominated transaction processing until that time.[5] As in operating systems, this work was conceived in the [...] DEC's Piranha—for a detailed discussion of this motivation.[1]) Were software not ready, these microprocessors would not be commercially viable today. So if anything, the "free lunch" that some decry as being "over" is in fact, at long last, being served—one need only be hungry and know how to eat!

Concurrency Is for Performance

The most important conclusion from our foray into the history of concurrency is that concurrency has always been employed for one purpose: to improve the performance of the system. This seems almost too obvious to make explicit. Why else would we want concurrency if not to improve performance? And yet for all its obviousness, concurrency's [...]

