
GLU3.0: Fast GPU-based Parallel Sparse LU Factorization for Circuit Simulation

Published 1 Aug 2019 in cs.DC and cs.DS | arXiv:1908.00204v3

Abstract: LU factorization of sparse matrices is the most important computing step in many engineering and scientific computing problems, such as circuit simulation. But parallelizing LU factorization on Graphics Processing Units (GPUs) remains challenging due to high data dependency and irregular memory accesses. Recently, a GPU-based hybrid right-looking sparse LU solver, called GLU (versions 1.0 and 2.0), was proposed to exploit the fine-grained parallelism of GPUs. However, a new type of data dependency (called double-U dependency) introduced by GLU slows down the preprocessing step. Furthermore, GLU uses a fixed GPU thread allocation strategy, which limits parallelism. In this article, we propose a new GPU-based sparse LU factorization method, called {\it GLU3.0}, which solves the aforementioned problems. First, it introduces a much more efficient data dependency detection algorithm. Second, we observe that the potential parallelism changes as the factorization progresses. We therefore develop three different GPU kernel modes that adapt to different stages of the factorization to accommodate the changing computing tasks. Experimental results on circuit matrices from the University of Florida Sparse Matrix Collection (UFL) show that GLU3.0 delivers 2-3 orders of magnitude speedup over GLU2.0 for data dependency detection. Furthermore, GLU3.0 achieves 13.0$\times$ (arithmetic mean) or 6.7$\times$ (geometric mean) speedup over GLU2.0, and 7.1$\times$ (arithmetic mean) or 4.8$\times$ (geometric mean) speedup over the recently proposed enhanced GLU2.0 sparse LU solver, on the same set of circuit matrices.
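The parallelism that solvers like GLU exploit comes from grouping columns into dependency "levels": columns in the same level do not depend on one another and can be factored concurrently (e.g., by one batch of GPU kernel launches per level). As a hedged illustration of that general idea (not the paper's actual detection algorithm), a minimal level computation over a given column-dependency structure might look like:

```python
def column_levels(deps):
    """Illustrative sketch, not GLU's algorithm: deps[j] is the set of
    earlier columns that column j depends on (all indices < j).
    Returns level[j] = 1 + max level among j's dependencies, or 0 if
    column j has no dependencies. Columns sharing a level value are
    mutually independent and could be factored in parallel."""
    levels = []
    for j in range(len(deps)):
        # default=-1 makes an independent column land in level 0
        levels.append(1 + max((levels[k] for k in deps[j]), default=-1))
    return levels

# Hypothetical dependency pattern: column 2 depends on column 0;
# column 3 depends on columns 1 and 2.
print(column_levels([set(), set(), {0}, {1, 2}]))  # -> [0, 0, 1, 2]
```

The per-level structure is also why the available parallelism varies over the course of the factorization, which motivates switching among kernel modes as the abstract describes.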

Citations (27)
