Eigenvalue computations are prone to several sources of error: rounding, truncation, ill-conditioning, and algorithmic instability. Rounding errors arise because finite-precision arithmetic is used at every step of an eigensolver, for instance in the repeated similarity transformations of the QR algorithm. Truncation errors arise when an infinite or continuous process is approximated by a finite or discrete one, for example when the eigenvalues of a differential operator are approximated by those of a finite difference or finite element matrix. Ill-conditioning means that small perturbations in the input matrix can cause large changes in the computed eigenvalues; this is typical of matrices whose eigenvectors are nearly parallel, making the spectrum sensitive to noise or data errors. Algorithmic instability means that the numerical method itself can amplify errors during the iteration or convergence process, leading to inaccurate or divergent results.
To quantify the errors in an eigenvalue computation, you need error measures that compare the computed eigenvalues with exact or reference ones. One common measure is the relative error: the absolute difference between a computed and an exact eigenvalue, divided by the magnitude of the exact eigenvalue. Another common measure is the residual norm: the norm of the input matrix applied to the computed eigenvector, minus the computed eigenvalue times that eigenvector. The smaller the relative error or the residual norm, the more accurate the eigenvalue solution.
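As a concrete sketch (using NumPy, and a small symmetric matrix chosen here because its exact eigenvalues are known), both measures can be computed directly:

```python
import numpy as np

# A small symmetric matrix whose exact eigenvalues are known:
# for [[2, 1], [1, 2]] they are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
exact = np.array([1.0, 3.0])

# Computed eigenpairs (eigh returns eigenvalues in ascending order).
computed, V = np.linalg.eigh(A)

# Relative error: |computed - exact| / |exact|, per eigenvalue.
rel_err = np.abs(computed - exact) / np.abs(exact)

# Residual norm: ||A v - lambda v|| for each computed eigenpair.
residuals = [np.linalg.norm(A @ V[:, i] - computed[i] * V[:, i])
             for i in range(len(computed))]

print(rel_err)    # near machine precision for this well-conditioned problem
print(residuals)  # likewise near machine precision
```

For a problem without a known exact answer, the residual norm is the measure you can always evaluate, since it needs only the computed eigenpair and the input matrix.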
To estimate the errors, you can use error bounds that provide upper or lower limits based on properties of the input matrix and the numerical method. One example is the Gershgorin circle theorem: every eigenvalue of a matrix lies within at least one Gershgorin disc, a circle in the complex plane centered at a diagonal entry of the matrix, with radius equal to the sum of the absolute values of the off-diagonal entries in the same row (or, in the column variant, the same column). Another example is the Bauer-Fike theorem: the eigenvalues of a perturbed matrix lie close to those of the original matrix, within a factor that depends on the condition number of the eigenvector matrix and the norm of the perturbation.
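A minimal NumPy sketch of the Gershgorin bound using the row discs (the helper name `gershgorin_discs` is ours for illustration, not a library function):

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs for the row discs of a square matrix."""
    A = np.asarray(A)
    centers = np.diag(A)
    # Radius = sum of |off-diagonal entries| in the row.
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

discs = gershgorin_discs(A)
eigvals = np.linalg.eigvalsh(A)

# The theorem: every eigenvalue lies in at least one disc.
for lam in eigvals:
    assert any(abs(lam - c) <= r + 1e-12 for c, r in discs)
```

The discs give cheap a-priori localization before any eigensolver runs, which makes them useful as a sanity check on computed results.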
To understand the sources and effects of errors, you can use error analysis techniques that examine how errors propagate through the numerical method and the input matrix. One example is backward error analysis, which measures how much the input matrix would have to be perturbed for the computed eigenvalues to be exact. Another is forward error analysis, which measures how much the computed eigenvalues differ from the exact ones due to errors in the input matrix and the numerical method.
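The backward-error idea can be illustrated directly: for a computed eigenpair with a unit eigenvector, a rank-one perturbation built from the residual makes the pair exact, so the residual norm itself bounds the backward error. A NumPy sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2  # symmetrize so the eigenpairs are real

lam_all, V = np.linalg.eigh(A)
lam, v = lam_all[0], V[:, 0]  # one computed eigenpair, ||v|| = 1

# Residual of the computed pair.
r = A @ v - lam * v

# Rank-one backward perturbation: (A + E) v = lam v holds exactly,
# since E v = -r (v . v) = -r for a unit vector v, and ||E||_2 = ||r||_2.
E = -np.outer(r, v)

assert np.allclose((A + E) @ v, lam * v)
print(np.linalg.norm(E, 2))  # tiny: the computed pair solves a nearby problem
```

This is the backward-error viewpoint in miniature: the computed pair is the exact answer to a problem whose distance from the original is measured by the residual.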
To improve the accuracy and reliability of eigenvalue computations, you can use error reduction strategies that minimize the errors introduced by the input matrix and the numerical method. One example is scaling and balancing, which applies a diagonal similarity transformation so that corresponding row and column norms become comparable; this leaves the eigenvalues unchanged while reducing their sensitivity to rounding errors. Another is deflation, which removes already-computed eigenvalues and eigenvectors from the problem and repeats the numerical method on the reduced matrix, so that further eigenpairs can be found without the converged ones interfering.
To verify the validity and correctness of eigenvalue computations, you can use error verification methods that check the consistency of the computed eigenvalues and eigenvectors with the input matrix. One example is inverse iteration, which uses a computed eigenvalue as a shift, repeatedly solves a linear system with the shifted matrix, and normalizes the solution to obtain a refined eigenvector. Another is the Rayleigh quotient, which uses a computed eigenvector to evaluate a scalar function of the input matrix and that eigenvector, and compares the result with the computed eigenvalue to obtain a refined eigenvalue.
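A small NumPy sketch of both refinement steps together, assuming a rough eigenvalue estimate is available (here 2.9 for an exact eigenvalue of 3):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # eigenvalues 1 and 3

shift = 2.9  # rough estimate of the eigenvalue near 3

# Inverse iteration: solve (A - shift*I) w = v and normalize, repeatedly.
# The iterate converges to the eigenvector whose eigenvalue is nearest the shift.
rng = np.random.default_rng(1)
v = rng.standard_normal(2)
v /= np.linalg.norm(v)
for _ in range(8):
    w = np.linalg.solve(A - shift * np.eye(2), v)
    v = w / np.linalg.norm(w)

# Rayleigh quotient: refine the eigenvalue from the refined unit eigenvector.
lam = v @ A @ v

print(lam)  # close to the exact eigenvalue 3
```

The two steps reinforce each other: inverse iteration sharpens the eigenvector, and the Rayleigh quotient then squares the eigenvector's accuracy when producing the eigenvalue, which is why the combination is a standard refinement and verification pass.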