ID5130-Parallel-Scientific-Computing

Parallel implementation of the Multigrid method

This project was part of the course ID5130 Parallel Scientific Computing taken during the Jan-May 2024 semester. The main objective of this work is to parallelize the multigrid method using OpenMP for shared-memory systems and MPI for distributed-memory systems. Further, the performance of standard iterative solvers such as the Jacobi and Gauss-Seidel methods is compared with that of the multigrid method, for both serial and parallel implementations.

A short summary of the project is presented below; a detailed explanation of the work can be found in the project report, and the C codes implementing all the methods have been uploaded to the codes folder. The data collected for the different methods, such as the number of iterations, runtime, and speedup, is tabulated in the supplementary information.

Motivation:
Tackling engineering problems with a computational approach often involves solving partial differential equations. When these equations are solved numerically using methods such as the finite volume or finite difference method, they generate a system of linear equations that needs to be solved. The solution to this system is generally obtained using iterative methods such as the Jacobi and Gauss-Seidel methods. It has been shown that as the error decreases, the rate of convergence of these methods also decreases, making them slow.

The multigrid method is a principle that can be applied to such iterative methods to accelerate their convergence and obtain solutions with reduced computational effort, which is especially important for larger problems. In this work, we parallelize the multigrid method to investigate the potential speedup when the method is distributed across several processors.

Figure 1: The Multigrid Algorithm

Multigrid Method:
Decomposing the error field over the discretized domain for iterative solvers using a Fourier series reveals some interesting and useful properties of the error. It is observed that high-frequency error components are eliminated quickly, but the low-frequency components slow down the convergence, requiring a large number of iterations to smooth out. The multigrid method makes use of this property of the error field.
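This behaviour can be made concrete for the weighted Jacobi method applied to the 1D model problem -u'' = f on a grid of spacing h. The damping factor per sweep of the k-th Fourier error mode is a standard result, quoted here only as motivation (the weight omega is a free parameter; none of this is taken from the project report):

```latex
% Damping factor of the k-th Fourier error mode per weighted-Jacobi sweep
% on the 1D model problem (grid spacing h, weight \omega):
\lambda_k = 1 - 2\omega \sin^2\!\left(\tfrac{k \pi h}{2}\right), \qquad k = 1, \dots, n-1.
% Low-frequency modes (k h small):  \lambda_k \approx 1 - \tfrac{\omega k^2 \pi^2 h^2}{2},
%   which is close to 1, so these modes decay very slowly.
% High-frequency modes (k near n):  \lambda_k \approx 1 - 2\omega,
%   so they are damped strongly within a few sweeps.
% On a coarser grid of spacing 2h the same mode k has
%   \lambda_k(2h) = 1 - 2\omega \sin^2(k \pi h),
% i.e. it behaves like a higher-frequency mode and is removed much faster.
```

The coarse-grid transfer described next exploits exactly this effect.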

The key idea in the multigrid method is that low-frequency error components on a fine mesh appear as high-frequency components on a coarse mesh, where they can be eliminated quickly. Hence, in the multigrid method, the errors are transferred from a fine mesh to a coarser mesh, where they can be smoothed out faster. This methodology can be applied to any iterative solver, which is known as a smoother in this context; we use the Jacobi and Gauss-Seidel methods as smoothers in this study. The multigrid algorithm is depicted in Fig. 1, which shows an example of a V-cycle multigrid with 3 mesh levels.

Figure 2: Types of Multigrid cycles

Based on the number of coarse and fine mesh levels, as well as the pattern in which data is transferred across the meshes, there are 3 types of multigrid cycles: the V cycle, the W cycle, and the F cycle, as shown in Fig. 2. Further, the method can be classified as Geometric Multigrid (GMG) or Algebraic Multigrid (AMG): GMG requires the geometric information of all mesh levels, while AMG does not need such data and is therefore useful for more complicated cases such as unstructured meshes.

In this project, we use a 2-level V-cycle geometric multigrid for simplicity, as the main aim is to parallelize the code and compare its performance with other methods.
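To make the structure of the method concrete, below is a minimal, self-contained sketch in C of one two-level V-cycle for a Poisson problem on the unit square with zero Dirichlet boundaries. The damped-Jacobi smoother, injection restriction, bilinear prolongation, sweep counts, and function names are illustrative assumptions for this sketch and are not the exact routines used in the codes folder.

```c
/* Minimal sketch of one two-level V-cycle for -lap(u) = f on the unit square
 * with zero Dirichlet boundaries. Smoother, transfer operators, sweep counts
 * and the damping factor are illustrative choices, not the routines from the
 * codes folder. Grids have n intervals per side (n even), stored as
 * (n+1) x (n+1) row-major arrays with spacing h = 1/n. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define ID(i, j, n) ((i) * ((n) + 1) + (j))

/* A few damped-Jacobi sweeps on -lap(u) = f. */
static void smooth(double *u, const double *f, int n, int sweeps)
{
    double h2 = 1.0 / ((double)n * n), w = 0.8;
    double *t = malloc((size_t)(n + 1) * (n + 1) * sizeof *t);
    for (int s = 0; s < sweeps; s++) {
        for (int i = 1; i < n; i++)
            for (int j = 1; j < n; j++)
                t[ID(i, j, n)] = (1 - w) * u[ID(i, j, n)] + w * 0.25 *
                    (u[ID(i - 1, j, n)] + u[ID(i + 1, j, n)] +
                     u[ID(i, j - 1, n)] + u[ID(i, j + 1, n)] + h2 * f[ID(i, j, n)]);
        for (int i = 1; i < n; i++)
            for (int j = 1; j < n; j++)
                u[ID(i, j, n)] = t[ID(i, j, n)];
    }
    free(t);
}

/* Restrict the fine-grid residual r = f - A*u to the coarse grid by injection. */
static void restrict_residual(const double *u, const double *f, double *rc, int nf)
{
    int nc = nf / 2;
    double inv_h2 = (double)nf * nf;
    for (int I = 1; I < nc; I++)
        for (int J = 1; J < nc; J++) {
            int i = 2 * I, j = 2 * J;
            rc[ID(I, J, nc)] = f[ID(i, j, nf)] - inv_h2 *
                (4.0 * u[ID(i, j, nf)] - u[ID(i - 1, j, nf)] - u[ID(i + 1, j, nf)]
                                       - u[ID(i, j - 1, nf)] - u[ID(i, j + 1, nf)]);
        }
}

/* Bilinear interpolation of the coarse-grid error, added to u as a correction. */
static void prolongate_and_correct(double *u, const double *ec, int nf)
{
    int nc = nf / 2;
    for (int i = 1; i < nf; i++)
        for (int j = 1; j < nf; j++) {
            int I = i / 2, J = j / 2;
            double fx = 0.5 * (i % 2), fy = 0.5 * (j % 2);
            u[ID(i, j, nf)] += (1 - fx) * (1 - fy) * ec[ID(I, J, nc)]
                             + fx * (1 - fy) * ec[ID(I + 1, J, nc)]
                             + (1 - fx) * fy * ec[ID(I, J + 1, nc)]
                             + fx * fy * ec[ID(I + 1, J + 1, nc)];
        }
}

/* One V-cycle: pre-smooth, coarse-grid correction, post-smooth. */
static void v_cycle(double *u, const double *f, int nf)
{
    int nc = nf / 2;
    double *rc = calloc((size_t)(nc + 1) * (nc + 1), sizeof *rc);
    double *ec = calloc((size_t)(nc + 1) * (nc + 1), sizeof *ec);
    smooth(u, f, nf, 3);                /* pre-smoothing on the fine grid            */
    restrict_residual(u, f, rc, nf);    /* transfer the residual to the coarse grid  */
    smooth(ec, rc, nc, 500);            /* many sweeps stand in for an exact solve   */
    prolongate_and_correct(u, ec, nf);  /* interpolate the error and correct u       */
    smooth(u, f, nf, 3);                /* post-smoothing on the fine grid           */
    free(rc);
    free(ec);
}

int main(void)
{
    int n = 64;                         /* fine grid: 65 x 65 points */
    double pi = acos(-1.0);
    double *u = calloc((size_t)(n + 1) * (n + 1), sizeof *u);
    double *f = calloc((size_t)(n + 1) * (n + 1), sizeof *f);
    for (int i = 0; i <= n; i++)        /* RHS chosen so that u = sin(pi x) sin(pi y) */
        for (int j = 0; j <= n; j++)
            f[ID(i, j, n)] = 2.0 * pi * pi * sin(pi * i / n) * sin(pi * j / n);
    for (int k = 0; k < 10; k++)
        v_cycle(u, f, n);
    printf("u at the centre after 10 V-cycles: %.4f (exact value 1.0)\n",
           u[ID(n / 2, n / 2, n)]);
    free(u);
    free(f);
    return 0;
}
```

Repeating such V-cycles until the convergence criterion is met gives the kind of 2-level multigrid solver whose runtime and speedup are compared in the results below.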

Codes:
We consider an example of a 2D Poisson equation discretized using the finite difference method, which leads to a system of linear equations to be solved. To solve these equations, we implement the Jacobi and Gauss-Seidel methods both with and without the multigrid principle, in serial and parallel versions of each. An analytical solution exists for the chosen Poisson equation, and it is used to define the convergence criterion.

Jacobi Method:
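For the 5-point finite-difference stencil of the Poisson equation, a Jacobi sweep computes every interior point of the new iterate purely from the previous iterate, so the loop over grid points has no data dependencies and parallelizes directly. Below is a minimal sketch of one OpenMP-parallel sweep; the array layout and function name are assumptions of this sketch rather than the exact code in the repository.

```c
/* Sketch of one Jacobi sweep for -lap(u) = f on an (n+1) x (n+1) grid with
 * spacing h = 1/n (row-major arrays). Illustrative only, not the repo code. */
void jacobi_sweep(const double *u_old, double *u_new, const double *f, int n)
{
    double h2 = 1.0 / ((double)n * n);
    /* Each new value depends only on u_old, so the interior points can be
     * updated in any order and the loop can be split across OpenMP threads. */
    #pragma omp parallel for collapse(2)
    for (int i = 1; i < n; i++)
        for (int j = 1; j < n; j++)
            u_new[i * (n + 1) + j] = 0.25 *
                (u_old[(i - 1) * (n + 1) + j] + u_old[(i + 1) * (n + 1) + j] +
                 u_old[i * (n + 1) + (j - 1)] + u_old[i * (n + 1) + (j + 1)] +
                 h2 * f[i * (n + 1) + j]);
}
```

After each sweep the roles of u_old and u_new are swapped and the iteration repeats until the convergence criterion is met.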

Red-Black Gauss-Seidel Method:
The Red-Black Gauss-Seidel method is a modification of the standard Gauss-Seidel algorithm that allows it to run in parallel, which is otherwise not possible due to the data dependency inherent in the method. More details about this method can be found here.
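The grid is coloured like a checkerboard: "red" points (i + j even) have only "black" neighbours and vice versa, so each half-sweep updates one colour with no data dependency and can be distributed across threads. A minimal OpenMP sketch follows, again with the array layout and function name as assumptions of this sketch rather than the repository's exact code.

```c
/* Sketch of one Red-Black Gauss-Seidel sweep for -lap(u) = f on an
 * (n+1) x (n+1) grid with spacing h = 1/n. Illustrative only. */
void red_black_gs_sweep(double *u, const double *f, int n)
{
    double h2 = 1.0 / ((double)n * n);
    /* color 0 updates points with (i + j) even, color 1 those with (i + j) odd;
     * within one color every update reads only points of the other color,
     * so the i-loop of each half-sweep can be split across OpenMP threads. */
    for (int color = 0; color < 2; color++) {
        #pragma omp parallel for
        for (int i = 1; i < n; i++)
            for (int j = 2 - (i + color) % 2; j < n; j += 2)   /* first point of this color in row i */
                u[i * (n + 1) + j] = 0.25 *
                    (u[(i - 1) * (n + 1) + j] + u[(i + 1) * (n + 1) + j] +
                     u[i * (n + 1) + (j - 1)] + u[i * (n + 1) + (j + 1)] +
                     h2 * f[i * (n + 1) + j]);
    }
}
```

Within each colour the updates are independent, so the sweep gives the same result regardless of how many threads are used.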

To run the parallel codes, it is necessary to install OpenMP and MPI support for C, without which the codes may not compile and run properly. For example, with GCC the OpenMP versions are compiled with the -fopenmp flag, while the MPI versions are compiled with the mpicc wrapper and launched with mpirun.

Results:
Some of the results for the Jacobi and Gauss-Seidel methods with multigrid are shown below in Figs. 3 and 4. We analyze the variation of the speedup with problem size for a fixed number of processors, as well as the variation of the speedup with the number of processors for a fixed problem size.

Figure 3: Speedup vs Problem Size for Parallel Multigrid Methods using 4 processors

Figure 4: Speedup vs Number of Processors for Parallel Multigrid Methods using a problem size of 513

Conclusions:
Some of the key conclusions from this study are as follows:

  • As expected, the multigrid method requires significantly fewer iterations than the Jacobi or Gauss-Seidel methods.
  • The multigrid method with the Red-Black Gauss-Seidel smoother was found to be the fastest among all the methods.
  • Furthermore, the OpenMP parallelization of this method performs better than the MPI version. For OpenMP, 16 threads give the best performance, while for MPI, 8 processors provide the best speedup.
  • This work can be further extended by using more mesh levels and other cycle types for the multigrid method, as well as by using GPU acceleration to parallelize the multigrid algorithm.

References:
[1] Mazumder, S. (2015). Numerical methods for partial differential equations: Finite Difference and Finite Volume Methods. Academic Press.
