October 16, 1999 10:26 PM

Computers

by Vojin G. Oklobdzija
Advanced Computer Engineering Laboratory
Electrical and Computer Engineering Department
University of California
Davis, CA 95616

A computer is a system capable of solving various scientific and non-scientific problems; storing, retrieving, and manipulating data; communicating and sharing data; controlling processes; and interacting with the surrounding environment.

Today, a computer typically consists of a complex electronic system containing a massive number of very highly integrated components. Even the simplest computer today is far more complex than the computers of the not-so-distant past and often contains millions, or even hundreds of millions, of transistors. A micro-photograph of the IBM 620 microprocessor, containing 7 million transistors, is shown in Figure 1.

Computers emerged from complex digital systems and controllers in areas where the behavioral specification for such a system could be satisfied by some sort of general-purpose digital system. One of the first mini-computers, the 18-bit PDP-4, was built as a generalization of an atomic plant controller. A similar fate befell the first microprocessor, the Intel 4004 (M. Shima), which was originally commissioned as a generalized calculator chip.

Because of computers' versatility and general-purpose orientation, there is hardly any place today that does not contain a computer in one form or another. This is made possible by two facts:

(a) their structure and organization are general, and therefore they can be easily customized in many different forms;
(b) because of their general structure, they can be mass-produced, which keeps their cost down.

The four- and eight-bit microcontrollers, which still represent over 90% of the microprocessors sold, generally sell for well under a dollar, owing to the fact that they can be produced in very large quantities.

By their power, computers are traditionally classified into four major categories: personal computers, mid-range machines or workstations, main-frames, and super-computers, though the boundaries between these categories are blurred. The reason is that the same technology is used for all four categories, the only exception being main-frames and super-computers, which resort to bipolar and gallium-arsenide technologies. Performance increases are therefore achieved mainly through improvements in technology, and performance roughly doubles every two years, as shown in Figure 2. Changes in architecture contribute little to these performance increases, because the architecture issue has been settled around RISC, and machines utilizing the RISC architecture have clearly demonstrated their advantage over CISC.

RISC stands for Reduced Instruction Set Computer, while CISC stands for Complex Instruction Set Computer. RISC is characterized by simple instructions in an instruction set architected to fit the machine pipeline in such a way that one instruction can be issued in every cycle. CISC is characterized by complex instructions, which grew mainly out of the micro-programming design style of the computer.

Another distinguished class of computers is the so-called "super-computers." These are the machines that have driven all of the advanced concepts in architecture, as well as being the vehicles for pushing the technology to its limits.
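The single-issue property that distinguishes RISC above — one simple instruction entering the pipeline every cycle — can be illustrated with a toy cycle count. This is only a sketch of the classic five-stage textbook pipeline, not a model of any particular machine discussed here:

```python
# Toy model of a single-issue RISC pipeline: one instruction enters
# the pipeline every cycle, so n instructions finish in
# n + (stages - 1) cycles instead of n * stages.
# (Illustrative sketch; the five stage names are the standard
# textbook pipeline, not any specific processor.)

STAGES = ["IF", "ID", "EX", "MEM", "WB"]  # fetch ... write-back

def pipelined_cycles(n_instructions, n_stages=len(STAGES)):
    """Cycles needed when one instruction issues every cycle."""
    return n_instructions + n_stages - 1

def unpipelined_cycles(n_instructions, n_stages=len(STAGES)):
    """Cycles needed when each instruction runs to completion alone."""
    return n_instructions * n_stages

if __name__ == "__main__":
    n = 100
    print(pipelined_cycles(n))    # 104 cycles
    print(unpipelined_cycles(n))  # 500 cycles
```

For 100 instructions the pipelined machine needs 104 cycles against 500 for the unpipelined one, and the advantage grows with program length — which is why architecting the instruction set to fit the pipeline matters.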
Though several computers have been designated "super-computers" in the past, such as the IBM Stretch, the IBM System/360 Model 91, and Control Data's CDC 6600, the real era of super-computers started with the CRAY-1, engineered and designed by Seymour Cray. Probably the best description of a super-computer is a design in which performance is the prime objective and cost is ignored. Super-computers are manufactured in small numbers for very special customers who require very high performance and are willing to pay a premium for it. The first CRAY-1 was introduced in 1976 and had a clock cycle of 12.5 ns. The latest CRAY is the CRAY-4, built in gallium-arsenide technology with a cycle time of 1 ns and capable of achieving 256 gigaflops in a 128-processor configuration. The CRAY-4 is truly state of the art in almost all aspects of engineering.

Today, a typical high-performance computer system employs more than one processor in various arrangements. There has been a long effort to parallelize the execution of programs and take advantage of a number of relatively inexpensive processors in order to achieve a high processing rate. These efforts have so far met with limited success. A number of parallel machines have been introduced, with varying degrees of success. They can be divided into several categories; however, most of the machines introduced fall into one of two typical structures: SIMD and MIMD. The first, Single Instruction Multiple Data (SIMD), is characterized by executing one instruction at a time, operating on an array of data elements in parallel. A typical example of the SIMD architecture is the so-called "Connection Machine" CM-1, introduced by Thinking Machines Corporation of Cambridge, Massachusetts, in the first half of 1984. This machine is characterized by an array of up to 64K processors, divided into four quadrants containing 16K processors each. The CM-1 has since been superseded by the CM-2, CM-3, and CM-5.
The operations of the processors are controlled by the same instruction, issued from the central instruction unit. Another example of a parallel SIMD architecture is the IBM GF-11 machine, capable of a peak execution rate of 11 billion floating-point operations per second. The GF-11 was used to calculate the masses of elementary particles, a calculation that took three months to finish.

The current trend is toward distributed computing on a large, even global, scale. This involves a network of workstations, connected via high-bandwidth, low-latency networks, acting as a single computing platform. The goal is to take advantage of a large resource pool of workstations, comprising hundreds of gigabytes of memory, terabytes of disk space, and hundreds of gigaflops of processing power that often sits idle. This new paradigm in computing is expected to affect the fundamental design techniques for large systems and their ability to solve large problems, serve large numbers of users, and provide a
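The SIMD execution model described above — a central unit broadcasting one instruction at a time, with every processing element applying it to its own data element in lock-step — can be sketched in a few lines. The "instructions" below are hypothetical stand-ins, not the Connection Machine's actual instruction set:

```python
# Sketch of the SIMD execution model: a central unit broadcasts one
# instruction at a time; every processing element applies it to its
# own local data element before the next instruction is issued.
# (Illustrative only -- the operations are made-up examples.)

def simd_execute(program, data):
    """Apply each broadcast instruction to every data element."""
    for instruction in program:                # one instruction at a time...
        data = [instruction(x) for x in data]  # ...over all elements in parallel
    return data

# A tiny "program" of two broadcast instructions (hypothetical ops):
program = [lambda x: x + 1, lambda x: x * 2]
print(simd_execute(program, [0, 1, 2, 3]))  # [2, 4, 6, 8]
```

Each pass over the list stands in for one broadcast cycle: all elements see the same instruction, which is exactly what distinguishes SIMD from MIMD, where each processor runs its own instruction stream.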

