3.4 Computational Complexity of Linear Systems

As mentioned in Section 3.1, the decomposition algorithm for solving linear equations is motivated by the computational inefficiency of matrix inversion. Inverting a dense $n \times n$ matrix A requires

$$2n^{3}+O\left(n^{2}\right)$$

floating point operations. Computing the LU decomposition of A requires

$$\frac{2}{3}n^{3}+\frac{1}{2}n^{2}+\frac{1}{6}n$$

or

$$\frac{2}{3}n^{3}+O\left(n^{2}\right)$$

floating point operations. Computing x from the factorization requires

$$2n^{2}+n$$

or

$$2n^{2}+O\left(n\right)$$

floating point operations, which is roughly the cost of forming the product $\mathbf{A^{-1}b}$ once $\mathbf{A^{-1}}$ is available. Therefore, solving a linear system of equations by matrix inversion requires approximately three times as much work as a solution via LU decomposition.
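
To make the flop counts above concrete, here is a minimal sketch in Python using NumPy and SciPy (libraries not assumed elsewhere in this document); the matrix size n, the random test matrix, and the right-hand side are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 1000                               # example size, chosen arbitrarily
A = rng.standard_normal((n, n))        # dense test matrix, assumed nonsingular
b = rng.standard_normal(n)

# Factor-and-solve: ~(2/3)n^3 flops for the LU decomposition,
# plus ~2n^2 flops for the forward and back substitutions.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# Explicit inversion: ~2n^3 flops to form A^{-1},
# plus ~2n^2 flops for the product A^{-1} b.
x_inv = np.linalg.inv(A) @ b

# Both routes give the same solution (up to rounding error),
# but inversion performs roughly three times the work.
print(np.allclose(x_lu, x_inv))
```

In practice one would typically call a library solver such as scipy.linalg.solve directly; it performs the LU-based factor-and-solve internally rather than forming the inverse.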

When A is a sparse matrix, the computational gap between the two methods becomes even more pronounced. The reason is straightforward. In general,

  • Inversion destroys the sparsity of A, whereas

  • LU decomposition preserves the sparsity of A.

Much work can be avoided by taking advantage of the sparsity of a matrix and of its triangular factors. Algorithms for solving sparse systems of equations are described in detail in Section 8 of this document.
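
The effect of sparsity can likewise be illustrated with a short sketch, again assuming SciPy (its SuperLU-based splu routine); the tridiagonal test matrix is a hypothetical example chosen because its inverse is known to be completely dense.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Tridiagonal matrix of a 1-D Poisson problem: only about 3n nonzeros.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
             shape=(n, n), format="csc")
b = np.ones(n)

# Sparse LU decomposition: the triangular factors remain sparse,
# so both the factorization and the solve are cheap.
lu = splu(A)
x = lu.solve(b)
print("nonzeros in A:      ", A.nnz)
print("nonzeros in L and U:", lu.L.nnz + lu.U.nnz)

# Explicit inversion destroys the sparsity: the inverse of this
# matrix is completely dense, with all n^2 entries nonzero.
A_inv = np.linalg.inv(A.toarray())
print("nonzeros in A^{-1}: ", np.count_nonzero(A_inv))
```

For this matrix the triangular factors remain sparse while the explicit inverse is fully dense, which is precisely the gap that the sparse algorithms of Section 8 are designed to exploit.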