Chapter 2
Basics of Algorithm Analysis

Analyzing algorithms involves thinking about how their resource requirements (the amount of time and space they use) will scale with increasing input size. We begin this chapter by talking about how to put this notion on a concrete footing, as making it concrete opens the door to a rich understanding of computational tractability. Having done this, we develop the mathematical machinery needed to talk about the way in which different functions scale with increasing input size, making precise what it means for one function to grow faster than another.

We then develop running-time bounds for some basic algorithms, beginning with an implementation of the Gale-Shapley algorithm from Chapter 1 and continuing to a survey of many different running times and certain characteristic types of algorithms that achieve these running times. In some cases, obtaining a good running-time bound relies on the use of more sophisticated data structures, and we conclude this chapter with a very useful example of such a data structure: priority queues and their implementation using heaps.

2.1 Computational Tractability

A major focus of this book is to find efficient algorithms for computational problems. At this level of generality, our topic seems to encompass the whole of computer science; so what is specific to our approach here?

First, we will try to identify broad themes and design principles in the development of algorithms. We will look for paradigmatic problems and approaches that illustrate, with a minimum of irrelevant detail, the basic approaches to designing efficient algorithms. At the same time, it would be pointless to pursue these design principles in a vacuum, so the problems and approaches we consider are drawn from fundamental issues that arise throughout computer science, and a general study of algorithms turns out to serve as a nice survey of computational ideas that arise in many areas.

Another property shared by many of the problems we study is their fundamentally discrete nature. That is, like the Stable Matching Problem, they will involve an implicit search over a large set of combinatorial possibilities, and the goal will be to efficiently find a solution that satisfies certain clearly delineated conditions.

As we seek to understand the general notion of computational efficiency, we will focus primarily on efficiency in running time: we want algorithms that run quickly. But it is important that algorithms be efficient in their use of other resources as well. In particular, the amount of space (or memory) used by an algorithm is an issue that will also arise at a number of points in the book, and we will see techniques for reducing the amount of space needed to perform a computation.

Some Initial Attempts at Defining Efficiency

The first major question we need to answer is the following: how should we turn the fuzzy notion of an "efficient" algorithm into something more concrete? A first attempt at a working definition of efficiency is the following.

Proposed Definition of Efficiency (1): An algorithm is efficient if, when implemented, it runs quickly on real input instances.

Let's spend a little time considering this definition. At a certain level, it's hard to argue with: one of the goals at the bedrock of our study of algorithms is solving real problems quickly. And indeed, there is a significant area of research devoted to the careful implementation and profiling of different algorithms for discrete computational problems.

But there are some crucial things missing from this definition, even if our main goal is to solve real problem instances quickly on real computers. The first is the omission of where, and how well, we implement an algorithm. Even bad algorithms can run quickly when applied to small test cases on extremely fast processors; even good algorithms can run slowly when they are coded sloppily. Also, what is a "real" input instance? We don't know the full range of input instances that will be encountered in practice, and some input instances can be much harder than others. Finally, this proposed definition does not consider how well, or badly, an algorithm may scale as problem sizes grow to unexpected levels. A common situation is that two very different algorithms will perform comparably on inputs of size 100; multiply the input size tenfold, and one will still run quickly while the other consumes a huge amount of time.
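To make the last point concrete, here is a minimal sketch (in Python; the functions and the specific timings are illustrative assumptions, not part of the text) of two algorithms for the same simple task, finding the largest value in a list. One makes a single pass; the other certifies each candidate by comparing it against every element. On inputs of size 100 both look fast; grow the input tenfold, and then tenfold again, and the gap becomes hard to ignore.

import time

def max_single_pass(values):
    # Linear time: one pass, remembering the best element seen so far.
    best = values[0]
    for v in values:
        if v > best:
            best = v
    return best

def max_all_pairs(values):
    # Quadratic time: accept a candidate only after comparing it to every element.
    for v in values:
        if all(v >= w for w in values):
            return v

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        # Ascending input makes the all-pairs version reject every candidate
        # except the last, roughly n^2 / 2 comparisons in total.
        data = list(range(n))
        results = []
        for algorithm in (max_single_pass, max_all_pairs):
            start = time.perf_counter()
            algorithm(data)
            results.append(time.perf_counter() - start)
        print(f"n = {n:6d}   single pass: {results[0]:.5f}s   all pairs: {results[1]:.5f}s")

The exact numbers printed depend entirely on the machine and the language, which is precisely the point: what stays stable across platforms is how the two running times grow with n, and that is the quantity the rest of this section sets out to measure.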
So what we could ask for is a concrete definition of efficiency that is platform-independent, instance-independent, and of predictive value with respect to increasing input sizes. Before focusing on any specific consequences of this claim, we can at least explore its implicit, high-level suggestion: that we need to take a more mathematical view of the situation.

We can use the Stable Matching Problem as an example to guide us. The input has a natural "size" parameter N; we could take this to be the total size of the representation of all preference lists, since this is what any algorithm for the problem will receive as input. N is closely related to the other natural parameter in this problem: n, the number of men and the number of women. Since there are 2n preference lists, each of length n, we can view N = 2n^2, suppressing more fine-grained details of how the data is represented. In considering the problem, we will seek to describe an algorithm at a high level and then analyze its running time mathematically as a function of this input size N.

Worst-Case Running Times and Brute-Force Search

To begin with, we will focus on analyzing the worst-case running time: we will look for a bound on the largest possible running time the algorithm could have over all inputs of a given size N, and see how this scales with N. The focus on worst-case performance initially seems quite draconian: what if an algorithm performs well on most instances and just has a few pathological inputs on which it is very slow? This certainly is an issue in some cases, but in general the worst-case analysis of an algorithm has been found to do a reasonable job of capturing its efficiency in practice. Moreover, once we have decided to go the route of mathematical analysis, it is hard to find an effective alternative to worst-case analysis. Average-case analysis, the obvious appealing alternative in which one studies the performance of an algorithm averaged over "random" instances, can sometimes provide considerable insight, but very often it can also become a quagmire. As we observed earlier, it's very hard to ...

