22S:166 Parallel computing
Lecture 27, Nov. 21, 2008
Kate Cowles
374 SH, [email protected]

What is Parallel Computing?

Traditionally, software has been written for serial computation:
• To be run on a single computer having a single Central Processing Unit (CPU);
• A problem is broken into a discrete series of instructions.
• Instructions are executed one after another.
• Only one instruction may execute at any moment in time.

Parallel computing

• In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
  – To be run using multiple CPUs
  – A problem is broken into discrete parts that can be solved concurrently
  – Each part is further broken down to a series of instructions
  – Instructions from each part execute simultaneously on different CPUs
• The compute resources can include:
  – A single computer with multiple processors;
  – An arbitrary number of computers connected by a network;
  – A combination of both.
• The computational problem usually demonstrates characteristics such as the ability to be:
  – Broken apart into discrete pieces of work that can be solved simultaneously;
  – Executed as multiple program instructions at any moment in time;
  – Solved in less time with multiple compute resources than with a single compute resource.
• Traditionally, parallel computing has been considered to be "the high end of computing" and has been motivated by numerical simulations of complex systems and "Grand Challenge Problems" such as:
  – weather and climate
  – chemical and nuclear reactions
  – biological, human genome
  – geological, seismic activity
  – mechanical devices - from prosthetics to spacecraft
  – electronic circuits
  – manufacturing processes

Commercial applications are providing an equal or greater driving force in the development of faster computers
• parallel databases, data mining
• oil exploration
• web search engines, web based business services
• computer-aided diagnosis in medicine
• management of national and multi-national corporations
• advanced graphics and virtual reality, particularly in the entertainment industry
• networked video and multi-media technologies
• collaborative work environments

Why Use Parallel Computing?

• The primary reasons for using parallel computing:
  – Save time - wall clock time
  – Solve larger problems
  – Provide concurrency (do multiple things at the same time)
• Other reasons might include:
  – Taking advantage of non-local resources - using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
  – Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer.
  – Overcoming memory constraints - single computers have very finite memory resources. For large problems, using the memories of multiple computers may overcome this obstacle.
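The lecture does not tie these ideas to any particular language, but the pattern of breaking a problem into discrete parts and solving them concurrently can be sketched in a few lines of Python using only the standard library. Everything below (the sum-of-squares workload, the helper name partial_sum, the choice of four workers) is an illustrative assumption, not material from the original notes.

# Minimal sketch: the same problem solved serially and then in parallel.
# The workload, chunking scheme, and helper name are illustrative choices.
import time
from multiprocessing import Pool

def partial_sum(chunk):
    """One discrete part of the problem: sum of squares over a slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(2_000_000))
    n_workers = 4

    # Break the problem into discrete parts that can be solved concurrently.
    step = len(data) // n_workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    # Serial execution: one instruction stream on one CPU.
    t0 = time.perf_counter()
    serial_result = partial_sum(data)
    serial_time = time.perf_counter() - t0

    # Parallel execution: each chunk is handled by a separate process, so
    # instructions from each part execute simultaneously on different CPUs.
    t0 = time.perf_counter()
    with Pool(processes=n_workers) as pool:
        parallel_result = sum(pool.map(partial_sum, chunks))
    parallel_time = time.perf_counter() - t0

    assert serial_result == parallel_result
    print(f"serial:   {serial_time:.2f} s")
    print(f"parallel: {parallel_time:.2f} s")

Note that shipping each chunk to a worker process is itself a cost, so for a workload this light the parallel run may not actually beat the serial one; that cost is exactly the communication and parallel overhead defined in the terminology that follows.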
General terminology in Parallel Computing

• Task: A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor.
• Parallel Task: A task that can be executed by multiple processors safely (yields correct results).
• Serial Execution: Execution of a program sequentially, one statement at a time. In the simplest sense, this is what happens on a one-processor machine. However, virtually all parallel tasks will have sections of a parallel program that must be executed serially.
• Parallel Execution: Execution of a program by more than one task, with each task being able to execute the same or a different statement at the same moment in time.
• Shared Memory: From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.
• Distributed Memory: In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.
• Communications: Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed.
• Synchronization: The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task (or tasks) reaches the same or a logically equivalent point. Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
• Granularity: In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.
  – Coarse: relatively large amounts of computational work are done between communication events
  – Fine: relatively small amounts of computational work are done between communication events
• Observed Speedup: Observed speedup of a code which has been parallelized, defined as:
  (wall-clock time of serial execution) / (wall-clock time of parallel execution)
  One of the simplest and most widely used indicators of a parallel program's performance.
• Parallel Overhead: The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as:
  – Task start-up time
  – Synchronizations
  – Data communications
  – Software overhead imposed by parallel compilers, libraries, tools, operating system, etc.
  – Task termination time
• Scalability: Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include:
  – Hardware - particularly memory-cpu bandwidths and network communications
  – Application algorithm
  – Parallel overhead related
  – Characteristics of your specific application and co…
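As a concrete illustration of observed speedup (again a sketch, not material from the original notes), the following Python fragment times the same workload serially and with 2 and 4 worker processes and reports the ratio of wall-clock times. The Monte Carlo workload, the draw count, and the worker counts are arbitrary placeholders.

# Sketch: observed speedup = serial wall-clock time / parallel wall-clock time.
# The Monte Carlo workload is an illustrative stand-in, not the course's application.
import random
import time
from multiprocessing import Pool

def monte_carlo_pi(n_draws):
    """One task: estimate pi from n_draws uniform points in the unit square."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n_draws))
    return 4.0 * hits / n_draws

def timed(fn):
    """Return (result, elapsed wall-clock seconds) for a zero-argument callable."""
    t0 = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - t0

if __name__ == "__main__":
    total_draws = 4_000_000

    # Serial execution: the whole problem on one CPU.
    _, t_serial = timed(lambda: monte_carlo_pi(total_draws))

    for p in (2, 4):
        # Parallel execution: p tasks, each handling total_draws // p draws.
        with Pool(processes=p) as pool:
            _, t_parallel = timed(
                lambda: pool.map(monte_carlo_pi, [total_draws // p] * p))
        speedup = t_serial / t_parallel
        print(f"{p} processes: observed speedup = {speedup:.2f} (ideal {p})")

The printed ratio t_serial / t_parallel is the observed speedup defined above; the amount by which it falls short of the ideal value p is largely parallel overhead (task start-up, communication, synchronization, and termination).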

