The preceding sections examined dense matrix algorithms for solving systems of linear equations.
It was seen that significant savings in storage and computation are achieved by exploiting the
structure of symmetric matrices. An even more dramatic performance gain is possible by exploiting
the sparsity intrinsic to many classes of large systems. Sparse matrix algorithms are based on
the simple concept of avoiding the unnecessary storage of zeros and unnecessary arithmetic
associated with zeros (such as multiplication by zero or addition of zero). Recognizing and
taking advantage of sparsity often permits the solution of problems that are otherwise
computationally intractable. Practical examples provided by Tinney and Hart [17]
suggest that in the analysis of large power system networks the use of sparse matrix algorithms
makes both the storage and computational requirements approximately linear with respect to the
size of the network. In other words, data storage is reduced from an O(n^2) problem to an
O(n) problem, and computational complexity diminishes from O(n^3) to O(n).
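The idea of storing only nonzeros can be illustrated with the compressed sparse row (CSR) scheme, one common sparse storage format (not specifically prescribed by this text). The sketch below shows a matrix-vector product that touches only the stored nonzeros; all names are illustrative.

```python
def csr_matvec(values, col_index, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector x.

    values    -- the nonzero entries, row by row
    col_index -- the column of each nonzero
    row_ptr   -- row_ptr[i]:row_ptr[i+1] indexes row i's nonzeros
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # Only the nonzeros of row i participate in the dot product,
        # so work is proportional to the number of nonzeros, not n^2.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_index[k]]
    return y

# The 3x3 matrix [[4, 0, 1], [0, 3, 0], [2, 0, 5]] stores only 5 nonzeros:
values = [4.0, 1.0, 3.0, 2.0, 5.0]
col_index = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]

print(csr_matvec(values, col_index, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 3.0, 7.0]
```

For a matrix with a fixed number of nonzeros per row, both storage and the matrix-vector product above scale as O(n), which is the behavior the Tinney and Hart results describe.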

- 8.1 Sparse Matrix Methodology
- 8.2 Abstract Data Types for Sparse Matrices
- 8.3 Pivoting To Preserve Sparsity
- 8.4 Symbolic Factorization of Sparse Matrices
- 8.5 Creating PAP^T from a Symbolic Factorization
- 8.6 Numeric Factorization of Sparse Matrices
- 8.7 Solving Sparse Linear Systems
- 8.8 Sparse LU Factor Update