1. Introduction
The least squares method is a core approach for solving overdetermined linear systems, widely used in data fitting, signal processing, and machine learning. However, directly solving the normal equations may lead to numerical instability, especially when the condition number of the matrix is large [1].
Matrix decomposition transforms the original matrix into a more manageable form, significantly improving computational efficiency and stability [2]. This paper focuses on QR decomposition, SVD decomposition, and Cholesky decomposition, analyzing their numerical properties in least squares problems and validating their applicability through experiments. Furthermore, the paper compares the performance of direct solving (without decomposition) against the decomposition methods, highlighting the advantages of the latter [3].
2. Theoretical background
2.1. Least squares problem

Given an overdetermined system $Ax \approx b$ with $A \in \mathbb{R}^{m \times n}$, $m > n$, the least squares solution minimizes $\|Ax - b\|_2$. The analytical solution follows from the normal equations,

$$x = (A^T A)^{-1} A^T b,$$

but direct computation may amplify errors due to the squared condition number of $A^T A$, since $\kappa(A^T A) = \kappa(A)^2$.
2.2. Matrix decomposition methods
(1) QR Decomposition

Decompose $A = QR$, where $Q$ has orthonormal columns and $R$ is upper triangular; the least squares solution is obtained from $Rx = Q^T b$.

Features: Avoids forming $A^T A$, so the condition number is not squared; good numerical stability.
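As a concrete illustration of this route (a minimal sketch, not the paper's code; matrix sizes and names are our own), the reduced QR factorization solves the problem without ever forming $A^T A$:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Minimal sketch: least squares via reduced QR, A = QR.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))        # overdetermined: m = 100 > n = 5
x_true = rng.standard_normal(5)
b = A @ x_true

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 100x5, R is 5x5
x_qr = solve_triangular(R, Q.T @ b)      # back-substitution on R x = Q^T b
print(np.linalg.norm(x_qr - x_true))     # near machine precision here
```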
(2) SVD Decomposition

Decompose $A = U\Sigma V^T$, where $U$ and $V$ are orthogonal and $\Sigma$ contains the singular values; the least squares solution is $x = V\Sigma^{+}U^T b$.

Features: Optimal stability, capable of handling rank-deficient matrices.
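A minimal sketch of the SVD route (our own illustrative code; the truncation tolerance `rcond` is an assumption, not a value from the paper):

```python
import numpy as np

def lstsq_svd(A, b, rcond=1e-12):
    """Least squares via SVD, truncating tiny singular values
    so that (near-)rank-deficient A is handled gracefully."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V^T
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)   # drop small sigmas
    return Vt.T @ (s_inv * (U.T @ b))
```

NumPy's own `np.linalg.lstsq` follows the same SVD-based approach internally.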
(3) Cholesky Decomposition

When $A^T A$ is symmetric positive definite, decompose $A^T A = LL^T$ with $L$ lower triangular, then solve $LL^T x = A^T b$ by two triangular solves.

Features: High computational efficiency, but sensitive to ill-conditioned matrices, and applicable only to positive definite matrices.
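A minimal sketch of the normal-equations route (illustrative only; `lstsq_cholesky` is our name):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lstsq_cholesky(A, b):
    """Least squares via Cholesky on the normal equations.
    Note: forming A^T A squares the condition number, and
    cho_factor raises LinAlgError if A^T A is not numerically SPD."""
    G = A.T @ A
    factor = cho_factor(G)               # G = L L^T
    return cho_solve(factor, A.T @ b)    # two triangular solves
```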
2.3. Condition number and numerical stability
(1) Condition Number
The matrix condition number

$$\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$$

measures the sensitivity of the solution to small perturbations in the input data. For least squares problems specifically, the relative error satisfies a bound of the form

$$\frac{\|\delta x\|}{\|x\|} \lesssim \kappa(A)\,\frac{\|\delta b\|}{\|b\|},$$

and the effective condition number rises to $\kappa(A)^2$ when the normal equations are formed. As a result, the error in the solution can be magnified dramatically if the condition number is large.
For the Cholesky decomposition, direct calculation of $A^T A$ squares the condition number, $\kappa(A^T A) = \kappa(A)^2$, leading to poor stability for Cholesky decomposition on ill-conditioned problems.

For the QR decomposition, the original matrix is decomposed into the product of an orthogonal matrix $Q$ and an upper triangular matrix $R$; because orthogonal transformations preserve the 2-norm, $\kappa(R) = \kappa(A)$ and no squaring occurs.

For the SVD, the orthogonal matrices $U$ and $V$ play the same protective role, and the algorithm additionally permits truncation of small singular values, which implicitly reduces the condition number and avoids ill-conditioning problems.
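The squaring effect is easy to observe numerically; a small demonstration (our own, with an illustrative near-dependency between two columns):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
A[:, 4] = A[:, 3] + 1e-6 * A[:, 4]   # two nearly dependent columns

print(np.linalg.cond(A))             # kappa(A): roughly 1e6
print(np.linalg.cond(A.T @ A))       # kappa(A^T A): roughly 1e12
```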
(2) Floating-point Arithmetic
Because floating-point arithmetic carries only finite precision, matrix computations inevitably propagate rounding errors [6].

For the Cholesky decomposition, the prerequisite is that the matrix be positive definite, so rounding can destroy positive definiteness in floating point and produce attempted square roots of negative values. The square roots and divisions performed during the factorization also introduce round-off error.
For the QR decomposition, classical Gram-Schmidt orthogonalization may lose orthogonality due to accumulated round-off, so the computed factor satisfies $Q^T Q \neq I$. Improved methods such as Householder transformations and Givens rotations maintain orthogonality far better, although round-off errors still exist.
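The orthogonality loss of classical Gram-Schmidt can be made visible with a short experiment (a sketch under our own test setup; NumPy's `np.linalg.qr` uses Householder reflections and serves as the stable baseline):

```python
import numpy as np

def classical_gram_schmidt(A):
    """Textbook (unstable) classical Gram-Schmidt orthogonalization."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
A[:, 1] = A[:, 0] + 1e-8 * A[:, 1]            # near-dependent columns

Q_cgs = classical_gram_schmidt(A)
Q_hh, _ = np.linalg.qr(A)                     # Householder-based QR
I = np.eye(20)
print(np.linalg.norm(Q_cgs.T @ Q_cgs - I))    # large: orthogonality lost
print(np.linalg.norm(Q_hh.T @ Q_hh - I))      # near machine epsilon
```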
For SVD, the iterative algorithms that compute the singular values and singular vectors also produce round-off errors; consequently, error propagation may degrade the orthogonality of the computed singular vector matrices.
3. Experimental design and implementation
3.1. Experimental setup
In the numerical experiments, matrices of a fixed dimension are generated randomly and solved by the various decomposition approaches as well as by direct solving. The results are reported as the error between the exact solution and each numerical solution, with computation time as a second index of algorithmic efficiency. Two categories of matrices are generated: well-conditioned ones are random and full-rank, while ill-conditioned ones are constructed by adding small perturbations so as to approach singularity and attain a high condition number.
Given that Cholesky decomposition can only be used when the matrix is positive definite, two groups of matrices are tested: the first group consists of general full-rank matrices (solved by QR, SVD, and direct solving), and the second group consists of symmetric positive definite matrices (solved additionally by Cholesky decomposition).
Tools: Python + NumPy/SciPy.
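The paper does not list its generation code; a minimal sketch consistent with the setup described above (the SVD-based construction and all names are our own assumptions):

```python
import time
import numpy as np

def make_matrix(m, n, cond, seed=0):
    """Random m x n matrix with prescribed 2-norm condition number,
    built from its SVD: A = U diag(s) V^T."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(0.0, -np.log10(cond), n)   # sigmas from 1 down to 1/cond
    return U @ np.diag(s) @ V.T

A_ill = make_matrix(200, 20, cond=1e12)
x_true = np.ones(20)
b = A_ill @ x_true

t0 = time.perf_counter()
x = np.linalg.lstsq(A_ill, b, rcond=None)[0]
elapsed = time.perf_counter() - t0
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"time: {elapsed:.5f} s, relative error: {rel_err:.2e}")
```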
3.2. Experimental results
Table 1. Group 1: general full-rank matrices (Cholesky not applicable).

| Method | Well-conditioned error | Well-conditioned time | Ill-conditioned error | Ill-conditioned time |
|--------|------------------------|-----------------------|-----------------------|----------------------|
| QR     | 1.03e-12 | 0.00027 s | 4.89e-06 | 0.0037 s |
| SVD    | 8.39e-13 | 0.00084 s | 2.88e-08 | 0.0076 s |
| Direct | 2.57e-12 | 0.00030 s | 4.34e-01 | 0.0094 s |
Table 2. Group 2: symmetric positive definite matrices.

| Method   | Well-conditioned error | Well-conditioned time | Ill-conditioned error | Ill-conditioned time |
|----------|------------------------|-----------------------|-----------------------|----------------------|
| QR       | 3.12e-14 | 0.00097 s | 6.45e-07 | 0.0029 s |
| SVD      | 2.98e-14 | 0.00182 s | 3.82e-09 | 0.0086 s |
| Cholesky | 1.05e-14 | 0.00062 s | 9.87e-03 | 0.0012 s |
| Direct   | 1.33e-14 | 0.00046 s | 8.46e-01 | 0.0040 s |
3.3. Discussion
As shown in Table 1 and Table 2, Cholesky decomposition attains the smallest error and the highest efficiency among the decomposition methods on well-conditioned problems, but it applies only to positive definite matrices. Direct solving achieves a similar error there, yet with poorer stability.
For ill-conditioned matrices, SVD decomposition has the smallest error and the highest numerical stability, albeit with the longest computation time, followed by QR decomposition with moderate numerical stability. Cholesky decomposition and direct solving fail due to instability under ill-conditioning.
4. Practical application case study
4.1. Structural engineering analysis: ill-conditioned stiffness matrix in bridge problems
(1) Background
In bridge structural mechanics analysis, the stiffness matrix $K$ of the linear system $Ku = f$ (with $u$ the nodal displacements and $f$ the load vector) can become severely ill-conditioned, for example when element stiffnesses differ by many orders of magnitude [7].
(2) Experimental Design
Generate an ill-conditioned stiffness matrix: construct a symmetric positive definite matrix $K$ with a very large condition number.
Solution methods: use Cholesky decomposition, QR decomposition, and SVD decomposition to solve for the displacement vector $u$, with direct solving as a baseline.
Evaluation metrics: the relative error of the computed solution and the computation time. A minimal sketch of this experiment appears below.
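A hedged sketch of such an experiment (the eigenvalue-based construction of $K$ is illustrative, not the paper's exact matrix):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

n = 100
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.logspace(0, -10, n)          # eigenvalues spanning ten decades
K = (Q * eigs) @ Q.T                   # SPD stiffness-like matrix, kappa ~ 1e10
u_true = np.ones(n)
f = K @ u_true

U, s, Vt = np.linalg.svd(K)
u_svd = Vt.T @ ((U.T @ f) / s)         # SVD solve
u_chol = cho_solve(cho_factor(K), f)   # Cholesky solve (fragile here)

for name, u in [("SVD", u_svd), ("Cholesky", u_chol)]:
    print(name, np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```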
(3) Results
Table 3. Solving the ill-conditioned stiffness system.

| Method   | Relative error | Computation time |
|----------|----------------|------------------|
| Cholesky | 9.73e-01 | 0.00011 s |
| QR       | 2.15e-06 | 0.00235 s |
| SVD      | 1.04e-08 | 0.00731 s |
| Direct   | 1.23e+00 | 0.00043 s |
As shown in Table 3, efficiency is a secondary concern in this engineering problem, since real-time response is not required, while precision is the key factor. Ill-conditioning prevents the application of Cholesky decomposition because of its numerical instability; SVD is therefore the most suitable decomposition for this problem.
4.2. Financial modeling: well-conditioned covariance matrix in portfolio optimization
(1) Background
In portfolio optimization, the covariance matrix $\Sigma$ of asset returns determines portfolio risk; it is symmetric positive definite and, in practice, typically well-conditioned, and the optimal weights are obtained by solving linear systems involving $\Sigma$.
(2) Experimental Design
Step 1: Generate a well-conditioned covariance matrix: construct a symmetric positive definite matrix $\Sigma$.
Step 2: Solution methods: use Cholesky decomposition and QR decomposition to solve the resulting linear system.
Step 3: Evaluation metrics: the difference between the computed objective function value and the theoretical optimum, and the computation time. A minimal sketch of this setup appears below.
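A hedged sketch of the portfolio computation (minimum-variance weights $w \propto \Sigma^{-1}\mathbf{1}$ are used here as the illustrative objective; the return data are synthetic):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
R = rng.standard_normal((500, 10))       # 500 days of returns, 10 assets
Sigma = np.cov(R, rowvar=False)          # well-conditioned SPD covariance

ones = np.ones(10)
w = cho_solve(cho_factor(Sigma), ones)   # solve Sigma w = 1
w /= w.sum()                             # normalize weights to sum to one
print(w)
```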
(3) Results
Table 4. Solving the well-conditioned covariance system.

| Method   | Relative error | Computation time |
|----------|----------------|------------------|
| Cholesky | 1.22e-21 | 0.00012 s |
| QR       | 4.50e-22 | 0.00161 s |
As shown in Table 4, financial problems place a premium on algorithmic efficiency, since opportunities must be captured quickly to maximize profit and avoid losses, and market trends must be analyzed flexibly. Given that covariance matrices are usually well-conditioned and positive definite, Cholesky decomposition is the optimal choice here.
4.3. Signal processing: moderately conditioned design matrix in filtering
(1) Background
In signal denoising, the design matrix of the least squares filtering problem typically has a moderate condition number, between the well-conditioned and ill-conditioned extremes of the previous cases.
(2) Experimental Design
Step 1: Generate a moderately conditioned matrix: construct a design matrix with a moderate condition number.
Step 2: Solution methods: use QR decomposition and SVD decomposition to solve the filtering problem.
Step 3: Evaluation metrics: the signal-to-noise ratio (SNR) improvement achieved by the computed filter, and the computation time. A minimal sketch of this setup appears below.
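A hedged sketch of the denoising setup (the Fourier-basis design matrix and noise level are our own illustrative choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 3 * t)
rng = np.random.default_rng(5)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Design matrix: low-order Fourier basis.
H = np.column_stack([np.sin(2 * np.pi * k * t) for k in range(1, 6)] +
                    [np.cos(2 * np.pi * k * t) for k in range(1, 6)])

Q, R = np.linalg.qr(H)                           # QR-based least squares fit
denoised = H @ np.linalg.solve(R, Q.T @ noisy)

def snr_db(sig, ref):
    return 10 * np.log10(np.sum(ref**2) / np.sum((sig - ref)**2))

print(snr_db(noisy, clean), snr_db(denoised, clean))  # SNR before vs. after
```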
(3) Results
Table 5. Solving the moderately conditioned filtering problem.

| Method | Relative error | Computation time |
|--------|----------------|------------------|
| SVD    | 1.03e-01 | 0.00124 s |
| QR     | 9.99e-02 | 0.00876 s |
As shown in Table 5, the two approaches differ only slightly in error, SNR improvement, and computation time, and neither exhibits numerical instability. As a result, both algorithms can be applied to the equations arising in signal processing.
5. Conclusion
In terms of numerical stability and computational efficiency, SVD decomposition performs best on ill-conditioned problems and attains the highest precision in solving least squares problems. However, its computational cost is higher, and for high-dimensional matrices it cannot deliver answers instantaneously because of its computational complexity. QR decomposition offers a balance between stability and efficiency: it provides moderate numerical stability on ill-conditioned problems while retaining relatively high computational efficiency. Cholesky decomposition is the fastest for well-conditioned matrices thanks to its low computational complexity, but it is sensitive to the condition number and is restricted to positive definite matrices.
As for the limitations of direct solving: it produces large errors for ill-conditioned matrices and is not recommended, since it lacks the orthogonal transformations that keep the condition number from being squared, so significant errors propagate through the matrix multiplication and inversion.
This paper therefore puts forward the following application recommendations. For high-precision requirements: prioritize SVD decomposition. For real-time computation: use Cholesky decomposition for well-conditioned problems and QR decomposition for ill-conditioned problems.
References
[1]. Slock D T M. Backward consistency concept and round-off error propagation dynamics in recursive least-squares algorithms [J]. Optical Engineering, 1992, 31(6): 1153-1169.
[2]. Björck Å. Numerical Methods for Least Squares Problems [M]. SIAM, 1996.
[3]. Demmel J. Applied Numerical Linear Algebra [M]. Tsinghua University Press/SIAM, 2011.
[4]. Watkins D S. Fundamentals of Matrix Computations [M]. Wiley, 2019.
[5]. Gardiner J D. Fundamentals of Matrix Computations (David S. Watkins) [J]. SIAM Review, 2006, 35(3): 520-521.
[6]. Chen S C. Statistical Modeling, Matrix Computation and Stochastic Process [J]. 2014; see also Higham N J. Accuracy and Stability of Numerical Algorithms (2nd ed.) [M]. SIAM, 2002.
[7]. Bathe K J, Ramm E, Wilson E L. Finite element formulations for large deformation dynamic analysis [J]. International Journal for Numerical Methods in Engineering, 1975, 9(2): 353-386.
[8]. Noble J. Statistical Digital Signal Processing and Modeling [J]. 2016.
[9]. Narsimha B, Reddy P K A. Digital Signal Processing Lab Using MATLAB [J]. 2019.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.