UMD CMSC 351 - Lecture 22: Graphs Representations and BFS

Lecture Notes CMSC 251

Observation: For a digraph, e ≤ n^2 = O(n^2). For an undirected graph, e ≤ (n choose 2) = n(n − 1)/2 = O(n^2).

A graph or digraph is allowed to have no edges at all. One interesting question is: what is the minimum number of edges that a connected graph must have?

We say that a graph is sparse if e is much less than n^2. For example, for the important class of planar graphs (graphs which can be drawn in the plane so that no two edges cross one another), e = O(n). In most application areas, very large graphs tend to be sparse. This is important to keep in mind when designing graph algorithms, because when n is really large, an O(n^2) running time is often unacceptably slow for real-time response.

Lecture 22: Graphs Representations and BFS
(Thursday, April 16, 1998)

Read: Sections 23.1 through 23.3 in CLR.

Representations of Graphs and Digraphs: We will describe two ways of representing graphs and digraphs. First we show how to represent digraphs. Let G = (V, E) be a digraph with n = |V| and e = |E|. We will assume that the vertices of G are indexed {1, 2, ..., n}.

Adjacency Matrix: An n × n matrix defined for 1 ≤ v, w ≤ n by

    A[v, w] = 1 if (v, w) ∈ E,
              0 otherwise.

If the digraph has weights we can store the weights in the matrix. For example, if (v, w) ∈ E then A[v, w] = W(v, w) (the weight on edge (v, w)). If (v, w) ∉ E then generally W(v, w) need not be defined, but often we set it to some "special" value, e.g. A[v, w] = −1 or ∞. (By ∞ we mean, in practice, some number which is larger than any allowable weight, such as a machine-dependent constant like MAXINT.)

Adjacency List: An array Adj[1..n] of pointers where, for 1 ≤ v ≤ n, Adj[v] points to a linked list containing the vertices which are adjacent to v (i.e. the vertices that can be reached from v by a single edge).
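The two digraph representations above can be sketched in Python; the function names here are illustrative, not from CLR, and vertices are indexed 1..n with index 0 unused, as in the notes.

```python
# Sketch of the two digraph representations described above.
# Vertices are indexed 1..n; `edges` is a list of directed pairs (v, w).
# Function names are illustrative, not from CLR.

def build_adjacency_matrix(n, edges):
    # A[v][w] = 1 if (v, w) is an edge, 0 otherwise (row/column 0 unused).
    A = [[0] * (n + 1) for _ in range(n + 1)]
    for v, w in edges:
        A[v][w] = 1
    return A

def build_adjacency_list(n, edges):
    # Adj[v] holds the vertices reachable from v by a single edge.
    Adj = [[] for _ in range(n + 1)]
    for v, w in edges:
        Adj[v].append(w)
    return Adj

# A small digraph on 3 vertices.
edges = [(1, 2), (1, 3), (2, 3), (3, 1)]
A = build_adjacency_matrix(3, edges)
Adj = build_adjacency_list(3, edges)
```

A weighted digraph could store W(v, w) in place of the 1, with a special value such as −1 or ∞ marking absent edges, as described above.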
If the edges have weights then these weights may also be stored in the linked list elements.

Figure 23: Adjacency matrix and adjacency list for digraphs.

We can represent undirected graphs using exactly the same representation, but we will store each edge twice. In particular, we represent the undirected edge {v, w} by the two oppositely directed edges (v, w) and (w, v). Notice that even though we represent undirected graphs in the same way that we represent digraphs, it is important to remember that these two classes of objects are mathematically distinct from one another.

This can cause some complications. For example, suppose you write an algorithm that operates by marking edges of a graph. You need to be careful that when you mark edge (v, w) in the representation you also mark (w, v), since they are both the same edge in reality. When dealing with adjacency lists, it may not be convenient to walk down the entire linked list, so it is common to include cross links between corresponding edges.

Figure 24: Adjacency matrix and adjacency list (with cross links) for graphs.

An adjacency matrix requires Θ(n^2) storage and an adjacency list requires Θ(n + e) storage (one entry for each vertex in Adj, and each list has outdeg(v) entries, which when summed is Θ(e)). For sparse graphs the adjacency list representation is more cost effective.

Shortest Paths: To motivate our first algorithm on graphs, consider the following problem. You are given an undirected graph G = (V, E) (by the way, everything we will be saying can be extended to directed graphs, with only a few small changes) and a source vertex s ∈ V. The length of a path in a graph (without edge weights) is the number of edges on the path. We would like to find the shortest path from s to each other vertex in G.
If there are ties (two shortest paths of the same length) then either path may be chosen arbitrarily.

The final result will be represented in the following way. For each vertex v ∈ V, we will store d[v], which is the distance (length of the shortest path) from s to v. Note that d[s] = 0. We will also store a predecessor (or parent) pointer π[v], which indicates the first vertex along the shortest path if we walk from v backwards to s. We will let π[s] = NIL.

It may not be obvious at first, but these single predecessor pointers are sufficient to reconstruct the shortest path to any vertex. Why? We make use of a simple fact which is an example of a more general principle of many optimization problems, called the principle of optimality. For a path to be a shortest path, every subpath of the path must be a shortest path. (If not, then the subpath could be replaced with a shorter subpath, implying that the original path was not shortest after all.)

Using this observation, we know that if the last edge on the shortest path from s to v is the edge (u, v), then the first part of the path must consist of a shortest path from s to u. Thus by following the predecessor pointers we will construct the reverse of the shortest path from s to v.

Obviously, there is a simple brute-force strategy for computing shortest paths. We could simply start enumerating all simple paths starting at s, and keep track of the shortest path arriving at each vertex. However, since there can be as many as n! simple paths in a graph (consider a complete graph), this strategy is clearly impractical.

Here is a simple strategy that is more efficient. Start with the source vertex s. Clearly, the distance to each of s's neighbors is exactly 1. Label all of them with this distance. Now consider the unvisited neighbors of these neighbors. They will be at distance 2 from s.

Figure 25: Breadth-first search for shortest paths (vertices are shown as Finished, Discovered, or Undiscovered).
Next consider the unvisited neighbors of the neighbors of the neighbors, and so on. Repeat this until there are no more unvisited neighbors left to visit. This algorithm can be visualized as simulating a wave propagating outwards from s, visiting the vertices in bands at ever increasing distances from s.

Breadth-first search: Given a graph G = (V, E), breadth-first search starts at some source vertex s and "discovers" which vertices are reachable from s. Define the distance between a vertex v and s to be the minimum number of edges on a path from s to v. Breadth-first search discovers vertices in increasing order of distance, and hence can be used as an algorithm for computing shortest paths. At any given time there is a "frontier" of vertices that have been discovered, but not yet processed.
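The wave-propagation strategy and the predecessor pointers described above can be sketched as follows. This is a minimal Python sketch, not the CLR pseudocode; the function names and the use of a deque as the frontier queue are assumptions of this sketch.

```python
from collections import deque

def bfs_shortest_paths(Adj, n, s):
    # Breadth-first search from s over adjacency lists Adj[1..n].
    # Returns d (shortest-path distances; None means unreachable) and
    # pi (predecessor pointers; pi[s] is None, playing the role of NIL).
    d = [None] * (n + 1)
    pi = [None] * (n + 1)
    d[s] = 0
    frontier = deque([s])          # discovered but not yet processed
    while frontier:
        u = frontier.popleft()
        for v in Adj[u]:
            if d[v] is None:       # v is undiscovered
                d[v] = d[u] + 1    # one edge farther than its discoverer
                pi[v] = u
                frontier.append(v)
    return d, pi

def reconstruct_path(pi, v):
    # Walk the predecessor pointers from v back to s, then reverse,
    # yielding the shortest path from s to v (assumes v is reachable).
    path = []
    while v is not None:
        path.append(v)
        v = pi[v]
    path.reverse()
    return path

# Undirected 4-cycle 1-2-3-4-1, each edge stored in both directions.
Adj = [[], [2, 4], [1, 3], [2, 4], [1, 3]]
d, pi = bfs_shortest_paths(Adj, 4, 1)
```

In this example vertex 3 lies at distance 2 from s = 1 and is reachable through either neighbor; which predecessor it receives depends on the order in which the adjacency lists are scanned, illustrating that ties may be broken arbitrarily.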

