No time to fix this but there seem to be errors

I only had a look at the section "Using for solution to linear inverse problems" because I was going to point students to something for further reading on economy/deluxe decomposition, but that section seems to be full of mistakes. I'm about to catch a flight, but maybe I'll come back to this article later; since that part looks so weird, I am suspicious of everything. Maybe someone else can look at it if I get distracted and don't come back. Loisel (talk) 18:06, 21 February 2020 (UTC)Reply

translation question

I'm trying to translate this article into French. I'm wondering about the example of the Householder reflections method and especially the calculation for the length of the vector  . I see that  , I would prefer   or  . PierreLM April 13th 2005

You know you can just use that in your French version. Dysprosia 13:30, 14 Apr 2005 (UTC)

I'm so sorry,  . So there is no problem. PierreLM April 15th 2005

Sign of alpha in Householder

The section on the Householder method says that   should take its sign from the first element in  . In the example which follows, however, the second matrix generated appears not to use the sign operation. Am I missing something, or should this be fixed?--rlhatton 16:11, 25 January 2006 (UTC)Reply

Using the other sign does not matter if you use exact computations, as in the example. I tried to clarify this. However, perhaps it is better to change the example to match the sign convention, or at least make a remark at that point. -- Jitse Niesen (talk) 13:44, 27 January 2006 (UTC)Reply
He's right though. When you do it correctly, the signs change in the second Q matrix. The correct answer has signs flipped. — Preceding unsigned comment added by 50.40.70.99 (talk) 16:37, 30 November 2016 (UTC)Reply

QR decomposition with rank deficiency

Is there somebody to editorialize the definition without assuming the full rank? S. Jo 14:03, 7 May 2006 (UTC)Reply

Aha, I didn't notice that this was the point behind the edit I reverted. Better now? -- Jitse Niesen (talk) 01:45, 8 May 2006 (UTC)Reply
Somehow, I don't think that we need to start with a square matrix. When we solve an over-determined system of equations in order to minimize the unitarily invariant norm of the residuals, we sometimes use the QR-factorization to reduce the computational complexity. That kind of application reveals one of the essential properties that the QR-factorization has. Also, except for the uniqueness, the QR factorization has good features even for non-square matrices, I guess. For example, we can recognize that an orthonormal basis of the column space of A (=QR) consists of the columns of Q associated with non-zero diagonal entries of R. S. Jo 02:21, 8 May 2006 (UTC)Reply
I agree that it's possible to give only the general definition (complex rectangular matrices), and that the general definition is useful, but I think that it's better from an educational point of view to start with the simpler definition (real square matrices). However, I do not feel strongly about this.
I do not understand your last sentence. If we take
 
then A = QR where Q is the 3-by-3 identity matrix and R = A, but the column space of A is not spanned by the first and third columns of Q. -- Jitse Niesen (talk) 05:00, 8 May 2006 (UTC)Reply
You are right. My claim was true only if A is full rank, I guess. S. Jo 21:38, 8 May 2006 (UTC)Reply
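
For later readers, a small Octave illustration of the point (the matrix is my own example, not one from the article or the discussion above): in the rank-deficient case a diagonal entry of R can vanish, and the columns of Q at the non-zero diagonal positions need not span the column space of A.

    A = [1 1 2; 1 1 0; 0 0 1];       % rank(A) = 2, column 2 equals column 1
    [Q, R] = qr(A);
    diag(R)                          % second diagonal entry is (numerically) zero
    rank([Q(:,1) Q(:,3) A])          % gives 3, so col(A) is not contained in span{q1, q3}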

Householder-Example: QR=A ?

Hello there,

I'm just wondering why (in the Householder example) the resulting Q and R matrices don't multiply to A. Furthermore, the product of the given Q and Q^T is not equal to I. Is the example correct?

When I calculate the products, I get QR = A and QQ^T = I. Could you please be more specific? What value do you take for Q, and what do you get for QQ^T? -- Jitse Niesen (talk) 12:59, 8 June 2006 (UTC)Reply

Oh, you're right. I'm sorry, math is too hard for me :-)

 

No worries, happens to me all the time ;) -- Jitse Niesen (talk) 13:54, 9 June 2006 (UTC)Reply
Why isn't there a simpler example (easier numbers — these n/175 fractions scare me ;-) )? 84.72.103.163 (talk) 22:35, 11 August 2009 (UTC)Reply

I do not understand the example. The first step is still clear:  

In the next step,   with the Euclidean norm  . Apparently,   is used instead. The reason should be made explicit. Then the text gives the equation for the Householder matrix as  , but in the example the term   suddenly appears. Textbooks give  , with   the dot product, which is 2/14 (arguably, this can be written as  ).
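
For later readers, a short Octave sketch of the first step, using the example matrix that is quoted elsewhere on this page. The textbook factor 2/(u'*u) is absorbed by normalising u to v, and either sign of alpha gives a valid reflector:

    A = [12 -51 4; 6 167 -68; -4 24 -41];
    x = A(:,1);                      % x = [12; 6; -4], norm(x) = 14
    alpha = norm(x);                 % here +14; alpha = -sign(x(1))*norm(x) also works
    u = x - alpha*[1; 0; 0];         % u = [-2; 6; -4]
    v = u / norm(u);
    H1 = eye(3) - 2*(v*v');          % same matrix as eye(3) - (2/(u'*u))*(u*u')
    H1 * A                           % first column becomes [14; 0; 0]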

typo ?

"We can similarly form Givens matrices G2 and G3, which will zero the sub-diagonal elements a21 and a32, forming a rectangular matrix" should this actually say triangular instead of rectangular? -- User:Richard Giuly

Yes, indeed. Now fixed. -- Jitse Niesen (talk) 12:43, 22 October 2006 (UTC)Reply

A more stable version of Householder?

Hi, I am new to the Wikipedia community, as far as editing goes... this is my first so please let me know if this is against protocol. After studying numerical linear algebra using the textbook by Trefethen and Bau, I noticed that there is a crucial step in the Householder implementation of the QR decomposition in this article that makes this article less than optimal in numerical stability. The operation that I would like to verify is the last one in the following snippet of pseudo code from "Numerical Linear Algebra":

for k = 1 to n
    x = A_{k:m, k}
    v_k = sign(x_1) ||x||_2 e_1 + x
    v_k = v_k / ||v_k||_2
    A_{k:m, k:n} = A_{k:m, k:n} - 2 v_k (v_k* A_{k:m, k:n})

//---------

Notation:

* = conjugate transpose
A_{k:m, k:n} = sub-matrix of A starting at the k-th row and k-th column
sign = sign of the argument

The reason I say this is more accurate is that the version presented on this page requires A to be multiplied by the successive new Qn at every iteration of the loop, which adds to the numerical instability of the algorithm. The pseudo code proposed by Trefethen and Bau reduces that by having a fresh copy of the original A every iteration. --Jasper 06:58, 23 January 2007 (UTC)Reply

I'm afraid that I don't quite understand what you mean. What is the difference between the statement
Ak:m,k:n = Ak:m,k:n - 2vk(vk*Ak:m,k:n)
in Trefethen and Bau's code, and the operation
 
in the article?
I readily agree that the pseudocode is much easier to understand than the explanation in the article. Indeed, I've never been happy how the recursion is explained in the article (I think I am myself to blame for this), so please do go ahead and improve the text. -- Jitse Niesen (talk) 12:06, 24 January 2007 (UTC)Reply
The reason I say that is because the operation you had was:
Qn = I - ...
After trying to implement that, I noticed that you have to do tempA = tempA*Qn (after initializing tempA to A) with every iteration of Q, whereas with the technique I gave you can work from a fresh copy of A every time. —The preceding unsigned comment was added by 144.118.52.106 (talk) 17:36, 31 January 2007 (UTC).Reply
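
For anyone who wants to try the Trefethen–Bau variant discussed above, here is a rough GNU Octave translation of the pseudo code (a sketch only, assuming a real m×n matrix with m ≥ n; the function name and the idea of returning the reflector vectors in V are mine):

    function [V, R] = householder_qr(A)
      % In-place Householder QR: R overwrites A, reflector vectors go into V.
      % No special handling of an all-zero sub-column in this sketch.
      [m, n] = size(A);
      V = zeros(m, n);
      for k = 1:n
        x = A(k:m, k);
        e1 = zeros(m - k + 1, 1);  e1(1) = 1;
        s = sign(x(1));  if s == 0, s = 1; end
        v = s * norm(x) * e1 + x;            % v_k = sign(x_1)*||x||_2*e_1 + x
        v = v / norm(v);
        V(k:m, k) = v;
        A(k:m, k:n) = A(k:m, k:n) - 2 * v * (v' * A(k:m, k:n));
      end
      R = A;                                 % upper triangular up to rounding
    end

Q never has to be formed explicitly; products such as Q'*b can be applied afterwards by running the same rank-one updates on b using the stored vectors in V.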

QR decomposition for complex matrices?

Hi. It seems that the algorithms discussed on this page are only compatible with matrices that have real elements, and it seems that they can't be modified easily to be compatible with complex matrices. Please guide me. —The preceding unsigned comment was added by Sina kh (talkcontribs) 11:59, 14 February 2007 (UTC).Reply

Actually, I think that the algorithms all work for complex matrices if you replace the transpose with the conjugate transpose. For the first algorithm (Gram-Schmidt), this is mentioned in the section listed in the references. -- Jitse Niesen (talk) 04:44, 15 February 2007 (UTC)Reply
Hi everyone. I had checked converting the transpose to the conjugate transpose before, and it doesn't work in the Householder reflections algorithm (Q will be unitary but R won't be upper triangular). In Gram-Schmidt, the algorithm has the constraint that "A" must be square (the author has mentioned it in the "definition" section). Sina kh 13:25, 27 February 2007 (UTC)Reply
I'm afraid I don't understand you. I guess the A you're considering is not square. Why do you think R is not upper triangular when doing QR via Householder reflections? Where does it say that A has to be square for Gram-Schmidt? Did you check whether that is in fact true? -- Jitse Niesen (talk) 13:59, 28 February 2007 (UTC)Reply

Hi there. I have performed QR decomposition via the Householder reflection algorithm for a small non-square complex matrix. I tried to follow the algorithm exactly:

A=

    -10+4i    4-5i
    2-i       8+6i
    6         7i

a1=A(:,1)=

         -10+4i
         2-i
         6

alfa=norm(a1)=sqrt(a1'*a1) ("'" means transpose conjugate)

alfa=12.53

u1=a1-(alfa*e) e=(1,0,0)t

u1=

   -22.53+4i
   2-i
   6

v1=u1/norm(u1)=

               -0.9482+0.1683i
               0.0842-0.0421i
               0.2525

q1=I(3*3)-2*v1*v1'=

                    -0.8548             0.1738 + 0.0515i   0.4789 - 0.0850i
                    0.1738 - 0.0515i    0.9823            -0.0425 + 0.0213i
                    0.4789 + 0.0850i   -0.0425 - 0.0213i   0.8725          

q1*a=

         11.8198 - 4.0000i   -1.7425 + 9.0803i
         0.1775 + 0.3551i    8.1473 + 4.5214i
         -0.0000 + 1.0652i   2.1279 + 3.6281i

As is clear, the elements (2,1) and (3,1) of the matrix "q1*a" are not zero (contrary to the algorithm). I continued the algorithm but I don't want to take more of your time than this. Finally:

Q=

  -0.8548            -0.2307 + 0.4265i   0.0231 - 0.1837i
  0.1738 - 0.0515i   -0.1566 - 0.0464i    -0.5619 - 0.7904i
  0.4789 + 0.0850i   -0.4857 + 0.7088i   0.1520 + 0.0464i

and

R=

 11.8198 - 4.0000i  -1.7425 + 9.0803i
  0.8315 - 0.6607i   0.3750 - 4.5215i
 -0.3873 + 0.1202i  -7.9021 + 4.6355i


I also performed QR decomposition in MATLAB and the result was:

Q=

    -0.7981 + 0.3192i  -0.3036 - 0.2224i   0.3220 - 0.1260i
     0.1596 - 0.0798i  -0.7472 - 0.4151i  -0.4190 + 0.2490i
     0.4789 - 0.0000i  -0.1779 - 0.3101i   0.8017 - 0.0085i

and

R=

    12.5300            -3.9904 + 7.6616i
       0               -10.7413          
       0               0       

PS: I apologize if the text has some grammatical or vocabulary mistakes. I'm not a native speaker, I am not fluent, and I am not familiar with the other languages of the article. Thank you. Sina kh 17:07, 5 March 2007 (UTC)Reply

It took me a while before I had time to go through it; sorry. It also was more complicated than I'd expected. Householder reflections behave a bit differently in complex spaces.
You are right, the algorithm does not work for complex matrices. However, it's possible to fix it. Instead of

Q = I − 2vv*

you have to take

Q = I − (1 + ω)vv*, where ω = conj(v*x)/(v*x), x is the column being zeroed (a1 in your computation), and v* denotes the conjugate transpose.
Of course, if v and x are real vectors then ω = 1 and we get the same algorithm as before.
In your Matlab code, write q1 = eye(3) - (1 + conj(v1'*a1)/(v1'*a1)) * v1*v1' instead of q1 = eye(3) - 2*v1*v1', and it should work.
This formulation of the method is taken from Dirk Laurie's post to NA Digest. Apparently, this is somewhere in Golub and Van Loan; I'll look it up and add to the Wikipedia article. Let me know if you need some more help.
PS: Your English is fine. -- Jitse Niesen (talk) 06:45, 10 March 2007 (UTC)Reply
The link to Dirk Laurie's post above is dead. I found Dirk's post at netlib.org. I am not sure whether I have the privilege to change Jitse's note above, so I am appending this one. If it is acceptable under Wikipedia guidelines to change Jitse's note, I hope someone will please do so, and then delete this note. Thanks. Mike Mannon (talk) 01:37, 8 September 2014 (UTC)Reply
In fact, Golub & Van Loan and Stoer & Bulirsch use a slightly different (but equivalent) approach. I added a sentence about it to the article. -- Jitse Niesen (talk) 07:19, 14 March 2007 (UTC)Reply
The   referred to above in Dirk Laurie's post is only necessary if   (or its conjugate  ) is potentially not real (as stated in the same post). However, in this case it is always real:
 
  and   are always real, and the remaining term   has only one non-zero element  :
 
which is also real. In which case, the top and bottom of   are always the same. So I think the   can be replaced with 2. -- Rogueigor (talk) 19:07, 7 September 2017 (UTC)Reply

Apparently the way the sign of alpha is computed is also different. Even with these two changes I didn't get the correct answer. Can anyone please add an example for a complex matrix too? Thanks. — Preceding unsigned comment added by 192.54.222.11 (talk) 11:43, 30 December 2013 (UTC)Reply
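
Since an example was requested: a hedged Octave sketch for the 3×2 complex matrix from the thread above, using the (1 + ω) correction from Dirk Laurie's post and the Matlab one-liner given earlier (variable names are mine; this is an illustration, not the article's exact formulation):

    A = [-10+4i, 4-5i; 2-1i, 8+6i; 6, 7i];
    R = A;  Q = eye(3);
    for k = 1:2
      x = R(k:3, k);
      e1 = zeros(length(x), 1);  e1(1) = 1;
      u = x - norm(x) * e1;                  % alpha = norm(x) is real
      v = u / norm(u);
      w = v' * x;                            % generally complex
      Hk = eye(length(x)) - (1 + conj(w)/w) * (v * v');
      H = eye(3);  H(k:3, k:3) = Hk;         % embed the reflector
      R = H * R;
      Q = Q * H';                            % H is unitary, so Q stays unitary
    end
    Q * R                                    % reproduces A up to rounding
    R                                        % upper triangular with a real diagonal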

FULL QR decomposition

Hello everyone. Is anyone interested in adding some information on the FULL QR decomposition? Currently, everything seems to be in reduced form. —The preceding unsigned comment was added by Yongke (talkcontribs) 22:11, 6 March 2007 (UTC).Reply

QR algorithm

There is a page on the same topic at QR algorithm, which I think should be merged into this page, and then QR algorithm should link to this page.

--Fbianco 22:12, 15 April 2007 (UTC)Reply

I don't think they should be merged. This page explains the QR decomposition and algorithms for computing it. The QR algorithm describes an algorithm for computing the eigenvalues of a matrix, which uses the QR decomposition. Those are different topics. -- Jitse Niesen (talk) 03:35, 16 April 2007 (UTC)Reply

Suggested fix for number of iterations using Householder reflections

The text has: "After   iterations of this process,  ". I think it should be  . Consider  . It needs no iterations and correctly  .

But

  needs one iteration. Incorrectly  

If the #rows > #columns every column needs to be "triangularized" in turn
If the #columns >= #rows then every row except the first needs to be "triangularized".
Thanks for all the work ( I stumbled on this while implementing)

I am making the change. -chad
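
For later readers, the count works out to t = min(m − 1, n) for an m×n matrix: each step zeroes the entries below the diagonal of one column, and there is nothing left to do once either all n columns have been processed or only the last row remains. For example, a 1×n matrix needs 0 steps and an m×1 column with m > 1 needs exactly one.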

Householder Problem

In the Householder section, since R = Qm···Q2Q1A, shouldn't it follow that the QR decomposition is given by Q = Q1^T Q2^T ··· Qm^T instead of just Q = Q1Q2···Qm? -- anonymous

Yep, you're right. Thanks for bringing this to our attention. -- Jitse Niesen (talk) 16:58, 27 October 2007 (UTC)Reply
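
For the record, the step is just inverting the product of orthogonal factors: from $R = Q_m \cdots Q_2 Q_1 A$ we get $A = Q_1^{-1} Q_2^{-1} \cdots Q_m^{-1} R = Q_1^T Q_2^T \cdots Q_m^T R$, hence $Q = Q_1^T Q_2^T \cdots Q_m^T$.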

Mention Octave instead of Matlab

I guess Wikipedia doesn't have a specific policy on using non-free software for mathematics, but I thought it would be nice to mention GNU Octave instead of Matlab in the section where it mentions the numerical QR factorisation. I note that Octave's result is the negation (−Q and −R) of Matlab's result.

I know lots of people have vociferous opposition to free software in mathematics, so this is why I ask before I go ahead and do it. I don't want to start an edit war. Swap 17:39, 26 October 2007 (UTC)Reply

I'm not sure why Matlab is mentioned at all. It doesn't seem to add anything, so I removed that sentence.
On my computer, the R matrix returned by Matlab has minus signs on the diagonal just like Octave. That's not so surprising because both of them use the same routines (LAPACK) to do the factorization. I guess different versions of Matlab yield different results and that's why the article had R with the opposite sign. -- Jitse Niesen (talk) 16:58, 27 October 2007 (UTC)Reply

total flop counts

Does someone know the total flop counts for the various QR algorithms (in particular, for a general m x n matrix, not necessarily square)?

Also, this link at the bottom of the page is bad -- maybe someone knows the new URL? The failed URL is http://www.cs.ut.ee/~toomas_l/linalg/lin2/node5.html Lavaka (talk) 16:18, 4 April 2008 (UTC)Reply

I'll answer my own question, thanks to Golub and Van Loan's "Matrix Computations". The setup is: A is an m x n matrix, with m >= n. I assume that if m < n, then everything still holds if you swap m and n. The flop counts for various algorithms are:
"Householder QR":  
"Givens QR":  
"Hessenberg QR via Givens" (i.e. A is in Hessenberg form):  
"Fast Givens QR":  
"Modified Gram-Schmidt":  
So, if m << n, then it looks like Modified Gram-Schmidt is the winner. 131.215.105.118 (talk) 19:10, 4 April 2008 (UTC)Reply

Why writing transposed Qi instead of Qi?

Why in the formula

Q = Q_1^T Q_2^T ··· Q_t^T

is there Q_i^T instead of simply Q_i? After all, Q_i^T = Q_i because it's symmetric, isn't it?! —Preceding unsigned comment added by Milimetr88 (talkcontribs) 16:27, 6 December 2009 (UTC)Reply

No...   — Preceding unsigned comment added by 131.211.53.159 (talk) 14:05, 13 December 2016 (UTC)Reply
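
A note for later readers: a real Householder reflector $I - 2vv^T$ is indeed symmetric, so in the real case $Q_i^T = Q_i$ and the two ways of writing the product coincide. In the complex case the reflector $I - 2vv^*$ is Hermitian ($Q_i^* = Q_i$) but in general not symmetric, so the (conjugate-)transposed form is the one that stays correct in general.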

Using Householder reflections O(n^3) incorrect

If you use the trick of applying each reflection directly to A (and not explicitly forming Q), then the algorithm is O(n^3).

However, the current explanation loops over an n-by-n matrix multiplication n times, making the given algorithm O(n^4) (yes, you can do matrix multiplication faster than O(n^3), but it is still more than O(n^2), making the algorithm still more than O(n^3)).

Can you even form the Q matrix of the QR factorization by Householder reflections in O(n^3) time? —Preceding unsigned comment added by 139.222.243.106 (talk) 22:51, 1 November 2010 (UTC)Reply
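
As far as I can tell the answer is yes: if the reflector vectors are stored (as in the Trefethen–Bau pseudo code quoted further up this page), Q can be accumulated with the same kind of rank-one updates, for O(m^3) total work (O(n^3) for a square matrix). A hedged Octave sketch, assuming V(:,k) holds the k-th normalised reflector vector with zeros above row k:

    [m, n] = size(V);
    Q = eye(m);
    for k = n:-1:1                   % backward accumulation
      v = V(k:m, k);
      Q(k:m, k:m) = Q(k:m, k:m) - 2 * v * (v' * Q(k:m, k:m));
    end
    % Q is now Q1*Q2*...*Qn; the k-th update only touches an (m-k+1)-square block,
    % so the total cost is O(m^3) rather than n full matrix-matrix products.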

Example is grossly incorrect

The example given is just wrong. Using R, I get this:

   > x
      V1  V2  V3
   1 12 -51   4
   2  6 167 -68
   3 -4  24 -41
   > qr.R(qr(x))
         V1   V2  V3
   [1,] -14  -21  14
   [2,]   0 -175  70
   [3,]   0    0 -35

Testing the result given in the example gives this:

   > as.matrix(q) %*% as.matrix(r)
              V1        V2         V3
   [1,] 11.99996 -50.98650   4.002192
   [2,]  5.99998 166.97736  24.000335
   [3,]  3.99994 -67.98786 -40.998769
   > q
           V1      V2      V3
   1 -0.85714  0.3110 -0.4106
   2 -0.42857 -0.8728  0.2335
   3 -0.28571  0.3761  0.8814
   > r
      V1        V2       V3
   1 -14   -8.4286  -2.0000
   2   0 -187.1736 -35.1241
   3   0    0.0000 -32.1761

Clearly the example is screwed up just after the first step (the first column of Q is correct as is the first element of R). — Preceding unsigned comment added by 67.160.196.149 (talk) 08:23, 19 January 2012 (UTC)Reply

The example is correct. The QR decomposition is not unique. Q is orthogonal, R is upper triangular, and Q*R = A. What part worries you? You even used a calculator to prove the example was correct. JackSchmidt (talk) 19:24, 19 January 2012 (UTC)Reply

I and others prefer forcing the R factor's diagonal to be positive real, which reduces the range of different solutions. — Preceding unsigned comment added by 149.199.62.254 (talk) 15:57, 17 May 2016 (UTC)Reply
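
For reference, that normalisation is a couple of lines in Octave/Matlab (assuming no exact zeros on the diagonal of R; the matrix is the article's example as quoted above):

    A = [12 -51 4; 6 167 -68; -4 24 -41];
    [Q, R] = qr(A);
    D = diag(sign(diag(R)));
    Q = Q * D;                       % still orthogonal
    R = D * R;                       % diagonal now positive; Q*R is unchanged since D*D = I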

Inconsistencies in matrix dimension for rectangular matrices

The given dimensions of the matrices in the "Rectangular matrix" paragraph seem inconsistent to me. If, e.g., R is n by n as stated in the first sentence, then its bottom (m-n) rows are not zero. If "Q1 is m×n, Q2 is m×(m−n)", then Q should be m by m and not "an m×n unitary matrix Q". Am I wrong? — Preceding unsigned comment added by 195.176.179.210 (talk) 12:54, 15 February 2012 (UTC)Reply

Well spotted. That was the result of this edit of 5 February, so not long ago. I've just undone it. Hope that section makes more sense now. Qwfp (talk) 15:28, 15 February 2012 (UTC)Reply
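
For reference, a consistent set of dimensions for $m \ge n$ is: $Q$ is $m \times m$ and unitary, $R$ is $m \times n$ consisting of an $n \times n$ upper triangular block $R_1$ on top of an $(m-n) \times n$ zero block, and partitioning $Q = [Q_1 \; Q_2]$ with $Q_1$ of size $m \times n$ and $Q_2$ of size $m \times (m-n)$ gives $A = QR = Q_1 R_1$, the "thin" factorization.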

Origin of the name?

It would be interesting from a historical point of view (of just for curiosity's sake) to know who first used the letters "QR", and why (if any special reason exists). — Preceding unsigned comment added by Boriaj (talkcontribs) 07:57, 27 July 2012 (UTC)Reply

The QR algorithm article has some history in its lead and its history section. It appears that QR decomposition hadn't been invented prior to its use in the QR algorithm, though I could be wrong. The Francis (1961) paper referenced there says (bottom left of first page): "... the QR transformation, as I have (somewhat arbitrarily) named this modification of Rutishauser's algorithm...", so it appears that John G.F. Francis is responsible for naming the orthogonal/unitary matrix Q. The R is an obvious name for a right triangular matrix (more often referred to as an upper triangular matrix these days, it seems). Qwfp (talk) 12:22, 27 July 2012 (UTC)Reply

Stability of inverse using QR decomposition

In the section "Using for solution to linear inverse problems", there is the sentence Compared to the direct matrix inverse, inverse solutions using QR decomposition are more numerically stable as evidenced by their reduced condition numbers [Parker, Geophysical Inverse Theory, Ch1.13].

That particular referenced book is not available to me, so I don't know what's written there. The argument by itself, however, is clearly wrong: if $A = QR$, then $\mathrm{cond}(A) = \mathrm{cond}(R)$ (at least in the 2-norm). Hence, the condition number of the linear system one must solve (now involving R) has not changed. There might still be advantages due to more subtle issues (though I doubt that the advantage would be big).

Can anybody say more about the statement? I'd actually be in favor of removing it completely, rather than having it here in its current form

Wmmrmchl (talk) 14:10, 10 December 2012 (UTC)Reply

The referenced text discusses this in terms of the condition numbers of   versus   (in this article's notation; the original uses   instead of  ). Specifically,  , where   indicates the largest eigenvalue of a matrix  . Contrast this with  . Therefore, least-squares solutions computed using the QR transform, requiring solving   can potentially be more accurate than those relying on solving   (for some   and  ). The sentence in question really should be reworded. Aldebrn (talk) 12:45, 18 June 2013 (UTC)Reply
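
In 2-norm terms (assuming $A$ has full column rank): $\kappa_2(A^T A) = \kappa_2(A)^2$, while $\kappa_2(R) = \kappa_2(A)$ because $Q$ has orthonormal columns. So solving $Rx = Q^T b$ works with the condition number of $A$ itself, whereas forming the normal equations $A^T A x = A^T b$ squares it; that is the advantage a reworded sentence should express.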

Sign of alpha in householder example

The part about Householder says: u = x - alpha * e1. In the example below, alpha is -14 and x is [12,6,-4]^T. 12 - (-14) is 26, so according to the above, u should be [26,6,-4]^T. However, it's [-2,6,-4]^T in the rest of the example, so alpha was added rather than subtracted. It still seems to work, though; it works with both. Still, why is the example inconsistent with the article? Might be worth explaining. Thanks! 85.2.5.221 (talk) 00:42, 6 January 2014 (UTC)Reply
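
For later readers, both choices are consistent with the method: with $\alpha = 14$, $u = x - \alpha e_1 = (-2, 6, -4)^T$ and the reflector sends $x$ to $(14, 0, 0)^T$; with $\alpha = -14$, $u = (26, 6, -4)^T$ and $x$ is sent to $(-14, 0, 0)^T$. Both give a valid QR factorization. The convention $\alpha = -\operatorname{sign}(x_1)\lVert x \rVert$ is the numerically safer one, because $x_1 - \alpha$ then never suffers cancellation.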

Is QR decomposition basis-dependent?

QR decomposition is basis-dependent, yes? --VictorPorton (talk) 23:07, 25 April 2014 (UTC)Reply

Remark about "Using for solution to linear inverse problems"

The first paragraph states that QR decomposition has a lower condition number than the direct matrix inverse. However, the "condition number" is a property of the problem (solving a linear system) and not of the method. So AFAIK, this sentence makes no sense. Does somebody know the actual reason why QR decomposition is more "numerically stable"? — Preceding unsigned comment added by Alexshtf (talkcontribs) 12:02, 17 August 2014 (UTC)Reply

This statement should be made in reference to another method of solving Ax=b, such as LU decomposition. If you are solving the linear equations Ax=b with LUx = b versus QRx=b, then the condition numbers of L and Q are relevant to the accuracy of the solution. L can have a large condition number; however, Q is orthogonal and thus has condition number 1. — Preceding unsigned comment added by 128.61.55.207 (talk) 19:36, 16 February 2016 (UTC)Reply
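
To make the comparison concrete, a hedged Octave sketch (random data, purely illustrative):

    m = 100; n = 5;
    A = randn(m, n);  b = randn(m, 1);
    x_normal = (A' * A) \ (A' * b);  % normal equations: condition number squared
    [Q, R] = qr(A, 0);               % thin QR
    x_qr = R \ (Q' * b);             % solve R*x = Q'*b with the condition number of A
    norm(x_normal - x_qr)            % small here; the gap grows when A is ill-conditioned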

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on QR decomposition. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 10:21, 21 July 2016 (UTC)Reply

Alternative QR decomposition for rectangular matrix

In our linear algebra course we used the term QR decomposition of an m×n matrix to mean a Q (of dimensions m×n) with orthonormal columns and an upper triangular R (of dimensions n×n). This would be A=Q₁R₁ in the article's section Rectangular matrix. However, I don't know how to edit the section to mention that QR decomposition can sometimes mean Q₁R₁, or whether I should edit it at all. Michal Grňo (talk) 03:53, 17 June 2019 (UTC)Reply
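
For what it's worth, that Q₁R₁ form is what Matlab and Octave return as the "economy-size" decomposition:

    A = randn(5, 3);
    [Q1, R1] = qr(A, 0);             % Q1 is 5x3 with orthonormal columns, R1 is 3x3 upper triangular
    size(Q1), size(R1)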

Q is not always orthogonal

At the very beginning of the page, there is a mistake: "In linear algebra, a QR decomposition, also known as a QR factorization or QU factorization, is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R." I think it's worth noting that Q is only orthogonal if it's square. Alternatively, it could be changed to "a matrix Q with orthonormal columns". — Preceding unsigned comment added by Astadnik (talkcontribs) 16:30, 18 October 2021 (UTC)Reply

Is Someone Willing to Include a Year of Publication for a paper on QR Factorization?

It would be useful to know how old QR factorization is.

For example, when studying algorithms for the maximum flow problem, we could say that...

  1. first came the Ford–Fulkerson algorithm (1956)
  2. second came the Edmonds–Karp algorithm (1970)
  3. third came the Preflow–push algorithm (1974)
  4. fourth came the Chen, Kyng, Liu, Peng, Gutenberg and Sachdeva's algorithm (2022)

The Maximum Flow problem is not related to QR factorization of matrices.

However, I was hoping that the article on QR factorization could be edited to say when QR factorization was first published. If someone published it even earlier, that is good; the Wikipedia page is openly editable by anyone, and we can amend the official date of publication and the name of the author.

The year of publication is useful for determining what method was popular before QR factorization and what method of solving linear systems computationally was popular after QR factorization.

People are always publishing faster, more efficient approaches to solving these problems.

At the time of my writing, this article on QR factorization does not say anything about when the method was first published. In lieu of a date of first publication, we could cite the older method or approach (the preceding popular technique) and maybe also cite one of the newer techniques. That way, the position of QR factorization in time is available to the reader.

Usually, the old approaches were easy to understand but take a long time to compute.

The old techniques for matrix decomposition are good for undergraduate students.

The newer techniques run quickly on GPUs (graphics processing units), but are too complicated for most undergraduate students to understand.

We must guide people through the progression of different techniques for solving linear systems through time, so that students see the easy-to-understand stuff first, and they see the complicated cutting-edge research sometime later. 174.215.16.174 (talk) 16:50, 6 June 2023 (UTC)Reply

Clarification needed on assumptions in ‘Cases and definitions’ Section

At the end of the subsection "square matrix" there is the following statement: "If A has n linearly independent columns,[...] More generally, the first k columns of Q form an orthonormal basis for the span of the first k columns of A for any 1 ≤ k ≤ n". However, this is not generally true. It assumes that the first k columns of A are linearly independent. Shouldn't this be stated explicitly? Simonedl (talk) 02:54, 12 January 2024 (UTC)Reply

Householder reflections Example is Wrong

I see others have commented on the incorrect result of the example given in the "Householder reflections" section. This is easily verified by multiplying QR [any matrix calculator will verify this] and it is not equal to the original matrix A. I discovered this by trying to use it as an example for a unit test I was writing, and I found that if all the values in both the Q and R matrices are negated [multiplied by -1] then QR=A. — Preceding unsigned comment added by 81.187.174.43 (talk) 08:21, 28 October 2024 (UTC)Reply
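
A side note for later readers: since $(-Q)(-R) = QR$, negating both factors cannot change the product, so if flipping both signs appears to fix the check, the original comparison most likely had a sign slip in only one of the factors. The genuine non-uniqueness is of the form $Q \to QD$, $R \to DR$ with $D$ diagonal and $D^2 = I$, i.e. flipping corresponding columns of Q and rows of R, as discussed in the "Example is grossly incorrect" thread above.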