The preceding sections examined dense matrix algorithms for solving systems of linear equations. It was seen that significant savings in storage and computation are achieved by exploiting the structure of symmetric matrices. An even more dramatic performance gain is possible by exploiting the sparsity intrinsic to many classes of large systems. Sparse matrix algorithms are based on the simple concept of avoiding the unnecessary storage of zeros and the unnecessary arithmetic associated with zeros (such as multiplication by zero or addition of zero). Recognizing and taking advantage of sparsity often permits the solution of problems that are otherwise computationally intractable. Practical examples provided by Tinney and Hart [13] suggest that in the analysis of large power system networks the use of sparse matrix algorithms makes both the storage and computational requirements approximately linear in the size of the network. In other words, data storage is reduced from an O(n²) problem to an O(n) problem, and computational complexity diminishes from O(n³) to O(n).
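As a concrete illustration of this storage-and-arithmetic principle, the following sketch (in Python, with illustrative names; the compressed sparse row layout shown here is one common choice of sparse format, not necessarily the scheme used in [13]) stores only the nonzero entries of a matrix and performs a matrix-vector product that never multiplies by a stored zero.

    # A minimal sketch of sparse storage and arithmetic, assuming a
    # compressed sparse row (CSR) layout; names are illustrative only.

    def csr_from_dense(A):
        """Store only the nonzero entries of a dense matrix A (list of rows)."""
        values, col_idx, row_ptr = [], [], [0]
        for row in A:
            for j, a in enumerate(row):
                if a != 0.0:             # skip zeros: no storage cost for them
                    values.append(a)
                    col_idx.append(j)
            row_ptr.append(len(values))  # end of this row in the value array
        return values, col_idx, row_ptr

    def csr_matvec(values, col_idx, row_ptr, x):
        """Compute y = A x touching only the stored nonzeros."""
        n = len(row_ptr) - 1
        y = [0.0] * n
        for i in range(n):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]   # no multiplications by zero
        return y

    # Example: a 4 x 4 matrix with 6 nonzeros needs 6 stored values, not 16.
    A = [[4.0, 0.0, 0.0, 1.0],
         [0.0, 3.0, 0.0, 0.0],
         [0.0, 0.0, 2.0, 0.0],
         [1.0, 0.0, 0.0, 5.0]]
    vals, cols, ptrs = csr_from_dense(A)
    print(csr_matvec(vals, cols, ptrs, [1.0, 1.0, 1.0, 1.0]))  # [5.0, 3.0, 2.0, 6.0]

In such a layout the memory required grows with the number of nonzero entries rather than with n², and the work in the product grows with the number of nonzeros rather than with the full n² dense operation count, which is the mechanism behind the roughly linear behavior cited above for networks whose matrices have only a few nonzeros per row.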