CS 240A: Graph and hypergraph partitioning

Thanks to Aydin Buluc, Umit Catalyurek, Alan Edelman, and Kathy Yelick for some of these slides.

Outline
• Motivation and definitions
  • Motivation from parallel computing
  • Theory of graph separators
• Heuristics for graph partitioning
  • Iterative swapping
  • Spectral
  • Geometric
  • Multilevel
• Beyond graphs
  • Shortcomings of the graph partitioning model
  • Hypergraph models of communication in MatVec
  • Parallel methods for partitioning hypergraphs

Sparse Matrix Vector Multiplication

Definition of Graph Partitioning
• Given a graph G = (N, E, WN, WE):
  • N = nodes (or vertices), E = edges
  • WN = node weights, WE = edge weights
• Often nodes are tasks, edges are communication, and weights are costs.
• Choose a partition N = N1 ∪ N2 ∪ … ∪ NP such that:
  • the total weight of nodes in each part is "about the same";
  • the total weight of edges connecting nodes in different parts is small.
• That is, balance the work load while minimizing communication.
• Special case N = N1 ∪ N2: graph bisection.
[Figure: example graph with node weights in parentheses and weighted edges.]

Applications
• Telephone network design: the original application; algorithm due to Kernighan.
• Load balancing while minimizing communication.
• Sparse matrix times vector multiplication, solving PDEs:
  • N = {1, …, n}; (j, k) in E if A(j, k) is nonzero;
  • WN(j) = number of nonzeros in row j; WE(j, k) = 1.
• VLSI layout: N = {units on chip}, E = {wires}, WE(j, k) = wire length.
• Sparse Gaussian elimination: used to reorder rows and columns to increase parallelism and to decrease "fill-in".
• Data mining and clustering.
• Physical mapping of DNA.

Partitioning by Repeated Bisection
• To partition into 2^k parts, bisect the graph recursively k times.

Separators in theory
• If G is a planar graph with n vertices, there exists a set of at most sqrt(6n) vertices whose removal leaves no connected component with more than 2n/3 vertices. ("Planar graphs have sqrt(n)-separators.")
• "Well-shaped" finite element meshes in 3 dimensions have n^(2/3)-separators.
• Some other classes of graphs also have small separators: trees, graphs of bounded genus, chordal graphs, bounded-excluded-minor graphs, …
• Mostly these theorems come with efficient algorithms, but the algorithms aren't used much in practice.

Separators in practice
• Graph partitioning heuristics have been an active research area for many years, often motivated by partitioning for parallel computation.
• Some techniques:
  • iterative swapping (Kernighan-Lin, Fiduccia-Mattheyses);
  • spectral partitioning (uses eigenvectors of the Laplacian matrix of the graph);
  • geometric partitioning (for meshes with specified vertex coordinates);
  • breadth-first search (fast but dated).
• Many popular modern codes (e.g. Metis, Chaco, Zoltan) use multilevel iterative swapping.

Iterative swapping: Kernighan/Lin, Fiduccia/Mattheyses
• Take an initial partition and iteratively improve it.
  • Kernighan/Lin (1970): cost = O(|N|^3), but simple.
  • Fiduccia/Mattheyses (1982): cost = O(|E|), but more complicated.
• Start with a weighted graph and a partition A ∪ B, where |A| = |B|.
  • T = cost(A, B) = Σ {weight(e) : e connects nodes in A and B}
• Find subsets X of A and Y of B with |X| = |Y|.
• Swapping X and Y should decrease the cost:
  • newA = (A - X) ∪ Y and newB = (B - Y) ∪ X
  • newT = cost(newA, newB) < cost(A, B)
• Compute newT efficiently for many possible X and Y (there is not time to try all), then choose the smallest.

Simplified Fiduccia-Mattheyses: Example
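The partitioning objective defined above (balanced node weight, small edge cut) can be made concrete in a few lines of Python. This is an illustrative sketch, not code from the slides; the helper names and the tiny example graph are made up here.

```python
# Cut weight and part weights for a 2-way partition.
# Graph given as: edges = list of (u, v, w_e); node_w = {node: weight}.

def cut_weight(edges, part):
    """Total weight of edges whose endpoints lie in different parts.
    `part` maps node -> 0 or 1."""
    return sum(w for u, v, w in edges if part[u] != part[v])

def part_weights(node_w, part):
    """Total node weight in each part (the balance criterion)."""
    totals = [0, 0]
    for n, w in node_w.items():
        totals[part[n]] += w
    return totals

# Tiny example: 4 unit-weight nodes, a square 1-2-3-4 plus diagonal 1-3.
node_w = {1: 1, 2: 1, 3: 1, 4: 1}
edges = [(1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 1, 1), (1, 3, 1)]
part = {1: 0, 2: 0, 3: 1, 4: 1}
print(cut_weight(edges, part))    # edges (2,3), (4,1), (1,3) cross -> 3
print(part_weights(node_w, part)) # [2, 2], perfectly balanced
```

A good partitioner minimizes the first quantity subject to the second being "about the same" in each part.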
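The repeated-bisection slide above can be sketched as a short recursion. The `bisect` stand-in below (split a sorted node list in half) is purely illustrative; a real code would plug in a spectral, geometric, or multilevel bisector here.

```python
# Partition into 2**k parts by bisecting recursively k times.

def bisect(nodes):
    """Placeholder bisector: split the nodes into two equal halves.
    Real codes replace this with a genuine graph bisection heuristic."""
    nodes = sorted(nodes)
    mid = len(nodes) // 2
    return nodes[:mid], nodes[mid:]

def partition(nodes, k):
    """Return a list of 2**k parts by recursive bisection."""
    if k == 0:
        return [list(nodes)]
    left, right = bisect(nodes)
    return partition(left, k - 1) + partition(right, k - 1)

print(partition(range(8), 2))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

Note that each bisection only sees its own half of the graph, which is what makes the scheme easy to apply but, as later slides discuss, not always globally optimal.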
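The simplified Fiduccia-Mattheyses example that the following slides walk through can be sketched as a single pass: repeatedly move the unlocked node with the best gain from the larger side, lock it, and finally keep the balanced prefix of moves with the smallest cut seen. This is a naive illustration under assumed simplifications: real FM uses a bucket structure to find the best gain in O(1) and updates gains incrementally, giving the O(|E|) cost quoted above, whereas this sketch recomputes gains from scratch.

```python
# One pass of a simplified Fiduccia-Mattheyses-style heuristic.
# adj = {node: [(neighbor, edge_weight), ...]}, symmetric; part maps node -> 0/1.

def gain(adj, part, n):
    """Cut reduction if n moves sides: external minus internal edge weight."""
    return sum(w if part[m] != part[n] else -w for m, w in adj[n])

def fm_pass(adj, part):
    """Move best-gain unlocked nodes one at a time, locking each, and
    return the evenly balanced intermediate partition with smallest cut."""
    part = dict(part)
    # Each undirected edge appears twice in a symmetric adjacency list.
    cut = sum(w for u in adj for v, w in adj[u] if part[u] != part[v]) // 2
    best, best_cut = dict(part), cut
    locked = set()
    while len(locked) < len(adj):
        ones = sum(part.values())               # nodes currently in part 1
        big = 1 if ones * 2 > len(part) else 0  # move from the larger side
        cand = [n for n in adj if n not in locked and part[n] == big]
        if not cand:
            break
        n = max(cand, key=lambda x: gain(adj, part, x))
        cut -= gain(adj, part, n)               # moving n changes the cut
        part[n] = 1 - part[n]
        locked.add(n)
        if 2 * sum(part.values()) == len(part) and cut < best_cut:
            best, best_cut = dict(part), cut    # record best balanced prefix
    return best, best_cut

# Square 1-2-3-4 with diagonal 1-3, unit weights; the initial cut is 4.
adj = {1: [(2, 1), (4, 1), (3, 1)], 2: [(1, 1), (3, 1)],
       3: [(2, 1), (4, 1), (1, 1)], 4: [(3, 1), (1, 1)]}
best, cut = fm_pass(adj, {1: 0, 2: 1, 3: 0, 4: 1})
print(cut)  # 3
```

Because moved nodes stay locked, the pass can climb out of local minima (some moves temporarily worsen the cut); taking the best prefix at the end is what makes that safe.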
UCSB CS 240A - Graph and hypergraph