Algebraic Multigrid

We're going to discuss algebraic multigrid, but first we'll discuss ordinary multigrid. Both deal with scale space, examining the image at multiple scales. This is important for segmentation, because an image segmentation is really a representation at a coarser scale than the pixel. To a first approximation, think of multigrid as being like diffusion, and algebraic multigrid as being like anisotropic diffusion; in both cases we build a scale space on top of them.

Multigrid

The basic idea behind multigrid comes from the following insight: if we smooth something, we can then subsample it without losing any information. In multigrid, we use this to solve PDEs. We start with some random initialization, in which our error is like white noise. Then we use an iterative solver that has the effect of smoothing the error. After a while, the error is all low frequency, so we can subsample to a coarser scale without losing information. We repeat, but everything is cheaper at the coarser scale. Eventually, we reach a scale small enough that we can inexpensively get an exact solution. Then, knowing the solution at the coarse scale, we interpolate it to get an approximate solution at the finer scale. This new solution has only high-frequency error, which we can get rid of fairly efficiently.

We're going to break this into two parts. First, we'll consider just the representation constructed by multigrid. Then we'll see how it is used to solve PDEs.

Gaussian Pyramid

The basic idea behind multigrid is also used in vision, where it is called the Gaussian pyramid. It was introduced by Burt and Adelson in the early 80s, but seems to have appeared in multigrid earlier. As we discussed previously, we can't just shrink an image by sampling it: if we do that, we get aliasing, and high frequencies have a big, random effect on the sampled image. So we first smooth, to get rid of high frequencies. Then when we sample, we don't lose any information.
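As a concrete illustration, here is a minimal sketch of the smooth-then-subsample construction, using the 5-tap binomial kernel [1 4 6 4 1]/16 that Burt and Adelson proposed as a cheap Gaussian approximation. The function names and the edge-padding choice are my own assumptions, not from the notes.

```python
import numpy as np

# 5-tap binomial approximation to a Gaussian (Burt & Adelson's generating kernel).
KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def blur(image):
    """Separable smoothing: convolve rows, then columns, with the 5-tap kernel."""
    padded = np.pad(image, 2, mode="edge")
    # Convolve along rows (each shifted slice has the original width).
    rows = sum(KERNEL[k] * padded[:, k:k + image.shape[1]] for k in range(5))
    # Convolve along columns.
    return sum(KERNEL[k] * rows[k:k + image.shape[0], :] for k in range(5))

def gaussian_pyramid(image, levels):
    """Repeatedly smooth, then subsample by 2; smoothing first prevents aliasing."""
    pyramid = [image]
    for _ in range(levels - 1):
        image = blur(image)[::2, ::2]  # safe to drop samples after smoothing
        pyramid.append(image)
    return pyramid
```

Each level halves the resolution, so the whole pyramid costs only about 4/3 the storage of the original image.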
If we smooth with a Gaussian, and repeat this, we get the Gaussian pyramid. This has been suggested for a number of vision applications:

Motion: We want to find the transformation that relates two images taken from different viewpoints. If we start at the top of the pyramid, it's easy to find the best solution, because there are so few possibilities. Then, as we move down the pyramid, we can assume that we have an approximate solution from the coarser level. We can take a Taylor series expansion around that solution, and analytically find the solution at the finer scale. This kind of approach is widely used.

Matching: Here we have an image and a template. We match them at a coarse scale, and then at the finer scale we search near the coarse match. This is like motion, but a different application, and we may not be able to solve anything analytically.

Compression: An early suggestion was to use this pyramid for compression, through the Laplacian pyramid. This is like the Gaussian pyramid, but at each level we store the difference between the image and the upsampled version of the coarser image. The idea is that this difference should be less correlated and lower energy, and so easier to compress. It has the nice advantage that it makes progressive encoding easy. Some version of this is included in JPEG. The Laplacian pyramid isn't really used directly in compression, but the multiscale idea is one of the forerunners of wavelets, which are used in JPEG 2000.

Multigrid for solving PDEs

We'll look at one example of this, using the Poisson equation. I will be very sloppy about boundary conditions and discretization, and just try to give the intuitions. We want to solve

    \nabla^2 u = f(x, y), i.e., u_{xx} + u_{yy} = f(x, y)

If we write this discretely, for a grid with spacing h, we get:

    h^2 f(x, y) = u(x+1, y) + u(x-1, y) + u(x, y+1) + u(x, y-1) - 4 u(x, y)

This leads to an iterative algorithm, where we base one of these values on previous versions of the others.
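As a concrete sketch of such an iteration, here is a damped Jacobi sweep for the 5-point discretization above, written out in the (1 - w)u + wz form with w = 4/5. The zero Dirichlet boundary, the grid shapes, and all function names are my own assumptions, not from the notes.

```python
import numpy as np

def jacobi_sweep(u, f, h, w=0.8):
    """One damped Jacobi sweep for the discrete Poisson equation
    u(x+1,y) + u(x-1,y) + u(x,y+1) + u(x,y-1) - 4 u(x,y) = h^2 f(x,y).
    Boundary values of u are held fixed (Dirichlet conditions)."""
    z = u.copy()
    # z(x,y) = (1/4) [ sum of the four neighbors - h^2 f(x,y) ]
    z[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]
                            - h * h * f[1:-1, 1:-1])
    # Damped update: u <- (1 - w) u + w z
    return (1.0 - w) * u + w * z

def residual(u, f, h):
    """Max norm of how far u is from satisfying the discrete equation (interior only)."""
    r = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
         - 4.0 * u[1:-1, 1:-1] - h * h * f[1:-1, 1:-1])
    return np.abs(r).max()
```

An exact discrete solution gives z = u, so the sweep leaves it unchanged; starting from a random interior initialization, repeated sweeps drive the residual down.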
)1,()1,(),1(),1(),(4121yxuyxuyxuyxuyxfhzm Then we can set: 11 mmzu Note that at least it’s easy to see that the right solution is a fixed point of this iteration. We can study convergence rates, and issues of local minima, but not here. This is called the Jacobi iteration. Other methods (Gauss-Seidel) are better, but this is simpler. It may be a good idea to slow down the updates a bit, as: 11)1(mmmwzwuu It turns out that w = 4/5 is a good choice, and we get:nconvolutioindicateswheref5hu01011101051u2m1m That is, we are updating u by basically smoothing the old u with an averaging filter. So, let’s look at what happens to the error with this iteration. We define the error at iteration m to be: mmuuv  Then we can show: )1,()1,(),(),1(),1(51),(1yxvyxvyxvyxvyxvyxvmmmmmm Substituting, the left side equals: ),(),(),(11yxuyxuyxvmm  while the right side is: )1,()1,()1,()1,(),(),(),1(),1(),1(),1(),1(),1(51)1,()1,(),(),1(),1(51yxuyxuyxuyxuyxuyxuyxuyxuyxuyxuyxuyxuyxvyxvyxvyxvyxvmmmmmmmmmmm We have 4),()1,()1,(),(),1(),1(51),(21hyxfyxuyxuyxuyxuyxuyxummmmmm So, canceling some terms, we just need to show that: 4),()1,()1,(),(),1(),1(51),(2hyxfyxuyxuyxuyxuyxuyxu This is just the discrete statement of the Poisson equation. So, the point is that this iteration smooths the error in our original estimate. Intuitively, this makes sense, since at every iteration we are estimating a point by averaging the neighbors, which averages the error. We are smoothing with something like a low passfilter, so what happens is that the high frequency components of the noise disappear quickly, but low frequency noise is slow to disappear. So when high frequency noise is mostly gone, we downsample, and get a smaller problem, where the noise is high frequency again. We recursively repeat this process, and get an exact solution to the coarser problem. Then we upsample. 
When we upsample a noise-free solution, we may introduce new error, but only at high frequencies, so smoothing quickly gets rid of it. Of course, we need to fill in some details here:
- First, we don't actually downsample u^m, because if f has significant high frequency


UMD CMSC 828 - Algebraic Multigrid
