Any matrix with a significant number of zero-valued elements is referred to as a sparse matrix. The meaning of "significant" in the preceding definition is rather vague. It is pinned down (in a circular way) by defining a sparse matrix to be a matrix with enough zeros to benefit from the use of sparsity techniques. The intent of this definition is to emphasize that sparse matrix procedures carry a computational overhead. If the degree of sparsity in a matrix compensates for the algorithmic overhead, sparsity techniques should be employed; otherwise, dense matrix algorithms should be used. This argument simply restates a fundamental rule of numerical analysis: a priori information concerning the nature of a problem tends to result in more efficient solution techniques.
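The overhead in question is largely bookkeeping: a sparse format stores indices alongside values, so each nonzero costs more than it would in a dense array, and the savings appear only when the zeros dominate. As a rough illustration, the following sketch builds a compressed sparse row (CSR) representation in plain Python; the function name and layout are illustrative, not taken from any particular library.

```python
def to_csr(dense):
    """Convert a dense row-major matrix to CSR form: a list of nonzero
    values, their column indices, and row pointers marking where each
    row's entries begin and end in the value list."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

A = [[5, 0, 0, 0],
     [0, 8, 0, 0],
     [0, 0, 3, 0],
     [0, 6, 0, 0]]
values, col_idx, row_ptr = to_csr(A)
# values  == [5, 8, 3, 6]
# col_idx == [0, 1, 2, 1]
# row_ptr == [0, 1, 2, 3, 4]
```

Here 16 dense entries shrink to 4 values plus 9 integers of index data; for a matrix with few zeros the index arrays would outweigh the savings, which is the trade-off the definition above alludes to.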
The next few sections will explore the application of sparsity techniques to the solution of large systems of linear equations. The standard approach is to break the solution into three phases:
Analyze. Determine an ordering of the equations such that the LU decomposition will retain as much sparsity as possible. Finding the optimal ordering has been shown to be NP-complete (i.e. an optimal solution cannot be determined efficiently); however, a number of satisfactory heuristics are available. The analysis phase of the solution usually produces a complete definition of the sparsity pattern that will result when the LU decomposition is computed.
Factor. Compute the LU decomposition.
Solve. Use the LU decomposition to compute a solution to the system of equations (i.e. perform forward and backward substitution).
The degree to which these phases are distinct depends on the implementation.