CS 1550: Introduction to Operating Systems

Contents
- Class outline
- Overview: Chapter 1
- What is an operating system?
- Operating system timeline
- First generation: direct input
- Second generation: batch systems
- Structure of a typical 2nd generation job
- Spooling
- Third generation: multiprogramming
- Timesharing
- Types of modern operating systems
- Components of a simple PC
- CPU internals
- Storage pyramid
- Disk drive structure
- Memory
- Anatomy of a device request
- Operating systems concepts
- Processes
- Inside a (Unix) process
- Deadlock
- Hierarchical file systems
- Interprocess communication
- System calls
- Making a system call
- System calls for files & directories
- More system calls
- A simple shell
- Monolithic OS structure
- Virtual machines
- Microkernels (client-server)
- Metric units

Chapter 1
CS 1550: Introduction to Operating Systems
Prof. Ahmed Amer
[email protected]
http://www.cs.pitt.edu/~amer/cs1550
CS 1550, cs.pitt.edu (originally modified by Ethan L. Miller and Scott A. Brandt)

Class outline
- Introduction, concepts, review & historical perspective
- Processes
- Synchronization
- Scheduling
- Deadlock
- Memory management, address translation, and virtual memory
- Operating system management of I/O
- File systems
- Security & protection
- Distributed systems (as time permits)

Overview: Chapter 1
- What is an operating system, anyway?
- Operating systems history
- The zoo of modern operating systems
- Review of computer hardware
- Operating system concepts
- Operating system structure
- User interface to the operating system
- Anatomy of a system call
What is an operating system?
- A program that runs on the "raw" hardware and supports:
  - Resource abstraction
  - Resource sharing
- Abstracts and standardizes the interface to the user across different types of hardware
  - The virtual machine hides the messy details that must be performed
- Manages the hardware resources
  - Each program gets time with the resource
  - Each program gets space on the resource
- May have potentially conflicting goals:
  - Use hardware efficiently
  - Give maximum performance to each user

Operating system timeline
- First generation (1945–1955): vacuum tubes, plug boards
- Second generation (1955–1965): transistors, batch systems
- Third generation (1965–1980): integrated circuits, multiprogramming
- Fourth generation (1980–present): large-scale integration, personal computers
- Next generation: ??? Systems connected by high-speed networks? Wide-area resource management?

First generation: direct input
- Run one job at a time
  - Enter it into the computer (might require rewiring!)
  - Run it
  - Record the results
- Problem: lots of wasted computer time!
  - Computer was idle during the first and last steps
  - Computers were very expensive!
- Goal: make better use of an expensive commodity: computer time

Second generation: batch systems
- Bring cards to the 1401
- Read cards onto the input tape
- Put the input tape on the 7094
- Perform the computation, writing results to the output tape
- Put the output tape on the 1401, which prints the output

Structure of a typical 2nd generation job
Figure: the card deck, read in order:
- $JOB, 10,6610802, ETHAN MILLER
- $FORTRAN
- FORTRAN program
- $LOAD
- $RUN
- Data for program
- $END
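To make the job-deck layout concrete, here is a small sketch of how a monitor might split a deck into its program and data sections. The card names follow the $JOB/$FORTRAN/$LOAD/$RUN/$END layout above, but the function `split_deck` and its parsing logic are illustrative inventions, not a real 2nd-generation monitor.

```python
def split_deck(cards):
    """Return (program_cards, data_cards) from a batch job deck.

    Control cards start with '$'; the cards after $FORTRAN are program
    source, and the cards after $RUN are the program's input data.
    """
    program, data, section = [], [], None
    for card in cards:
        if card.startswith("$FORTRAN"):
            section = program        # program source follows this card
        elif card.startswith("$RUN"):
            section = data           # program input follows this card
        elif card.startswith("$"):
            section = None           # $JOB, $LOAD, $END carry no payload
        elif section is not None:
            section.append(card)
    return program, data

deck = [
    "$JOB, 10,6610802, ETHAN MILLER",
    "$FORTRAN",
    "      PRINT *, 'HELLO'",
    "$LOAD",
    "$RUN",
    "42",
    "$END",
]
prog, data = split_deck(deck)
```

Running this on the sample deck leaves one FORTRAN card in `prog` and one data card in `data`; everything else is control information for the monitor.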
Spooling
- Original batch systems used tape drives
- Later batch systems used disks for buffering
  - Operator read cards onto a disk attached to the computer
  - Computer read jobs from disk
  - Computer wrote job results to disk
  - Operator directed that job results be printed from disk
- Disks enabled simultaneous peripheral operation on-line (spooling)
  - Computer overlapped the I/O of one job with the execution of another
  - Better utilization of the expensive CPU
  - Still only one job active at any given time

Third generation: multiprogramming
- Multiple jobs in memory, each in its own memory partition (figure: the operating system plus partitions for Job 1, Job 2, and Job 3)
  - Protected from one another
  - Operating system protected from each job as well
- Resources (time, hardware) split between jobs
- Still not interactive:
  - User submits a job
  - Computer runs it
  - User gets results minutes (hours, days) later

Timesharing
- Multiprogramming allowed several jobs to be active at one time
  - Initially used for batch systems
  - Cheaper hardware terminals -> interactive use
- Computer use got much cheaper and easier
  - No more "priesthood"
  - Quick turnaround meant quick fixes for problems

Types of modern operating systems
- Mainframe operating systems: MVS
- Server operating systems: FreeBSD, Solaris
- Multiprocessor operating systems: Cellular IRIX
- Personal computer operating systems: Windows, Unix
- Real-time operating systems: VxWorks
- Embedded operating systems
- Smart card operating systems
Some operating systems can fit into more than one category.

Components of a simple PC
Figure: computer internals (inside the "box"): the CPU and memory on a bus with a video controller, USB controller, hard drive controller, and network controller; the network controller connects to the outside world.
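The payoff of spooling can be sketched with a little arithmetic: card reading, computation, and printing become a three-stage pipeline across jobs instead of running strictly one job at a time. The model below is illustrative (the stage times are made-up numbers, and the recurrence is the standard flow-shop completion-time formula), not a description of any particular machine.

```python
def sequential_time(jobs):
    """Total time when each job is fully read, run, and printed
    before the next job starts (no spooling)."""
    return sum(sum(stages) for stages in jobs)

def spooled_time(jobs):
    """Pipeline makespan with spooling: stage s of a job waits for the
    previous job on the same device and for this job's previous stage."""
    finish = [0, 0, 0]  # completion time of the latest job at each stage
    for stages in jobs:
        for s, t in enumerate(stages):
            prev = finish[s - 1] if s > 0 else 0
            finish[s] = max(finish[s], prev) + t
    return finish[-1]

# Three identical jobs: 2 time units to read, 5 to compute, 3 to print.
jobs = [(2, 5, 3)] * 3
# sequential_time(jobs) == 30; spooled_time(jobs) == 20
```

With overlap, the reader and printer work while the CPU computes, so total time drops from 30 to 20 units here; the CPU is the bottleneck stage, which is exactly the slide's point about utilizing the expensive CPU.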
CPU internals
Figure: a pipelined CPU (fetch unit, decode unit, and execute unit in sequence) versus a superscalar CPU (multiple fetch and decode units feeding a buffer that dispatches to several execute units).

Storage pyramid
Goal: a really large memory with very low latency
- Latencies are smaller at the top of the hierarchy
- Capacities are larger at the bottom of the hierarchy
Solution: move data between levels to create the illusion of a large memory with low latency.

Level               | Access latency | Capacity
Registers           | 1 ns           | < 1 KB
Cache (SRAM)        | 2–5 ns         | 1 MB
Main memory (DRAM)  | 50 ns          | 256 MB
Magnetic disk       | 5 ms           | 40 GB
Magnetic tape       | 50 sec         | > 1 TB
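The "illusion" the storage pyramid describes can be sketched with a small fast level in front of a large slow one. The latencies below loosely follow the pyramid's numbers (cache ~5 ns, DRAM ~50 ns), but the cache size, LRU policy, and access trace are invented for illustration.

```python
from collections import OrderedDict

CACHE_SIZE = 4          # entries in the small, fast level
CACHE_NS, MEMORY_NS = 5, 50

def run_trace(trace):
    """Return total access time (ns) with an LRU-managed cache
    in front of main memory."""
    cache = OrderedDict()
    total = 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)        # hit: pay the fast level only
            total += CACHE_NS
        else:
            total += MEMORY_NS             # miss: fetch from the slow level
            cache[addr] = True
            if len(cache) > CACHE_SIZE:
                cache.popitem(last=False)  # evict the least recently used
    return total

# A loop touching the same 4 addresses repeatedly: after the first
# pass, every access hits in the cache.
trace = [0, 1, 2, 3] * 5
# run_trace(trace) == 280 ns, versus 20 * 50 == 1000 ns with no cache
```

Average latency drops from 50 ns to 14 ns per access in this trace. The trick only works because real programs exhibit locality; a trace that never reuses an address would pay the slow level's latency every time.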