Talk:Determinant/Archive 3


Condense the lead

I propose a shorter, simpler, more concise, more user-friendly, more informative lead:

In linear algebra, the determinant is a useful value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|.
In the case of a 2x2 matrix, the specific formula for the determinant is simply the upper left element times the lower right element, minus the product of the other two elements. Similarly, suppose we have a 3x3 matrix A, and we want the specific formula for its determinant |A|:
<math>|A| = \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.</math>
Each of the 2x2 determinants in this equation is called a "minor". The same sort of procedure can be used to find the determinant of a 4x4 matrix, and so forth.
A square matrix has an inverse if and only if its determinant is nonzero. Various properties can be proved, including that the determinant of a product of matrices is always equal to the product of their determinants, and that the determinant of a Hermitian matrix is always real.
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and the determinant is used to solve those equations. The use of determinants in calculus includes the Jacobian determinant in the substitution rule for integrals of functions of several variables. Determinants are also used to define the characteristic polynomial of a matrix, which is essential for eigenvalue problems in linear algebra. Sometimes, determinants are used merely as a compact notation for expressions that would otherwise be unwieldy to write down.

Of course, wikilinks would be included as appropriate. Anythingyouwant (talk) 05:16, 3 May 2015 (UTC)

I went ahead and installed this. Anythingyouwant (talk) 19:46, 3 May 2015 (UTC)
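A minimal Python sketch for anyone who wants to check the cofactor expansion described above concretely (the sample 3x3 matrix and the helper name det_first_row_expansion are purely illustrative):

<syntaxhighlight lang="python">
# Cofactor (Laplace) expansion along the first row, checked against numpy.
import numpy as np

def det_first_row_expansion(M):
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)
        total += (-1) ** j * M[0, j] * det_first_row_expansion(minor)
    return total

A = [[2, 5, 3], [1, -2, -1], [1, 3, 4]]
print(det_first_row_expansion(A))  # -20.0
print(np.linalg.det(A))            # the same value, up to floating-point rounding
</syntaxhighlight>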

But what is it?

The lead should summarize what a determinant is, but does not: "the determinant is a useful value that can be computed from the elements of a square matrix."

This is equivalent to saying "π is a useful number in geometry."

Is there a better concise description? 15.234.216.87 (talk) 21:00, 26 June 2015 (UTC)

agreed - I came here to find out what it /represented/ conceptually, and I still don't know 92.24.197.193 (talk) 18:17, 18 October 2015 (UTC)
Part of the problem is that there isn't a simple answer: it depends on the underlying number system (field or ring). When the matrix entries are from the real numbers, the determinant is a real number that equals the scale factor by which a unit volume (or area or n-volume) in R^n will be transformed by the matrix (i.e., by the transformation represented by the matrix). (In addition, a negative value indicates that the matrix reverses orientation.) That's why matrices with determinant = 0 don't have inverses: unit volumes get collapsed by (at least) one dimension, and that can't be reversed by a linear transformation. But when the numbers come from, say, the complex numbers, what the determinant means is harder to describe. (And besides, the geometric interpretation above is not necessarily important in the application at hand; e.g., to solve a system of linear equations, all you really care about is whether the determinant is non-zero, not precisely how the coefficient matrix transforms the solution space.) -- Elphion (talk) 22:24, 18 October 2015 (UTC)
Ah great! The scale factor explanation at least gives me a mental picture of something I can relate to, and makes the corollary about matrices of determinant 0 lacking inverses obvious. And the "care about [...] whether the determinant is non-zero" is comparable to caring merely whether the discriminant of a quadratic is positive or not. For me this is a great start to an answer - thanks! 92.25.0.45 (talk) 21:02, 20 October 2015 (UTC)
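To make the scale-factor reading above concrete, here is a minimal numerical sketch (the 2x2 matrix is an arbitrary example); the images of the two unit basis vectors span a parallelogram whose signed area is exactly the determinant:

<syntaxhighlight lang="python">
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
e1, e2 = M[:, 0], M[:, 1]                # images of the unit basis vectors
signed_area = e1[0]*e2[1] - e1[1]*e2[0]  # signed area of the image parallelogram
print(signed_area)                       # 5.0
print(np.linalg.det(M))                  # the same scale factor, up to rounding
</syntaxhighlight>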

determinant expansion problem

Regarding this edit, I question the statement "The sums and the expansion of the powers of the polynomials involved only need to go up to n instead of ∞, since the determinant cannot exceed O(A^n)." The expression O(A^n) has no standard defined meaning that I'm aware of. The O() notation is not usually defined for matrix arguments. You can't fix the notational problem by writing instead that det(kA) = O(k^n), since the inner infinite sum becomes divergent if k gets too large. In fact the truncated version is true for all A but the original is not true except in some unclear formal sense. One cannot prove a true statement from a false one, so the attempt to do so is futile. McKay (talk) 05:57, 10 March 2016 (UTC)

I fear you may be overthinking it... All the formula does is write down Jacobi's formula det(I+A) = exp(tr log(I+A)) in terms of formal series expansions for matrix functions, which one always does! ... unless you had a better way, maybe using matrix norms, etc... But, in the lore of the Cayley-Hamilton theorem, one always counts matrix elements, powers of matrices, traces, etc... as a given power, just to keep track, using your scaling variable k, and few mistakes are made. Fine, take the order-n counting to apply to all matrix elements: that includes powers of A! The crucial point is that this is an algorithm for getting the determinant quickly, with the instruction to cut off the calculation at order n, as anything past that automatically vanishes, by the Cayley-Hamilton theorem, "as it should". Check the third order for 2x2 matrices to see exactly how it works. If you had a slicker way of putting it, fine, but just cutting off the sums at finite order won't do, since they mix orders. You have to cut everything consistently at order n. Perhaps you could add a footnote, with caveats on formal expansions, etc... which however would not really tell a pierful of fishermen that properly speaking there is a proof fish don't exist, a cultural glitch applications-minded people often have to endure! The "as it should" part reminds the reader he need not check the Cayley-Hamilton magic of suppressing higher orders--it must work. But, yes, it is only an algorithm, as most of its users have always understood. Cuzkatzimhut (talk) 11:58, 10 March 2016 (UTC)
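For anyone who wants to see the order-by-order bookkeeping in action, here is a minimal numerical sketch for the 2x2 case (the sample matrix is arbitrary): collecting the exp(tr log(I+A)) expansion by total order in A, the terms through order 2 already reproduce det(I+A), and the order-3 combination vanishes identically, as the Cayley-Hamilton argument predicts.

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[0.3, -1.2],
              [0.7,  0.5]])
t1 = np.trace(A)
t2 = np.trace(A @ A)
t3 = np.trace(A @ A @ A)

order_le_2 = 1 + t1 + (t1**2 - t2) / 2   # terms of total order 0, 1, 2 in A
order_3    = t3/3 - t1*t2/2 + t1**3/6    # the total-order-3 combination

print(order_le_2, np.linalg.det(np.eye(2) + A))  # agree, up to rounding
print(order_3)                                   # ~0 for every 2x2 matrix
</syntaxhighlight>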

physical meaning of determinant

Suppose we have vectors that make up a 2x2 matrix and we take its determinant; does this value represent their magnitude or something else? What is the meaning of the determinant? Please explain it using vectors. Muzammalsafdar (talk) 15:55, 19 April 2016 (UTC)

It is the product of the scaling factors (eigenvalues) associated with its eigenvectors. You may wish to go to the eigenvalue and eigenvector article; this is the wrong place. Here, the physical connection to areas and volumes is expounded and illustrated in sections 1.1, 1.2, and 8.3. Cuzkatzimhut (talk) 19:31, 19 April 2016 (UTC)
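A quick numerical illustration of the reply above (the symmetric sample matrix is arbitrary):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.prod(np.linalg.eigvals(A)))  # product of the eigenvalues
print(np.linalg.det(A))               # the same value, up to rounding
</syntaxhighlight>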

Simple proof of inequality

Let λi be the positive eigenvalues of A. The inequality 1 - 1/λi ≤ ln(λi) ≤ λi - 1 is known from standard courses in math. By taking the sum over i we obtain Σi(1 - 1/λi) ≤ ln(Πiλi) ≤ Σi(λi - 1). In terms of the trace and determinant functions, tr(I - Λ⁻¹) ≤ ln(det(Λ)) ≤ tr(Λ - I), where Λ = diag(λ1, λ2, ..., λn). Substituting Λ = UAU⁻¹ and eliminating U, we obtain the inequality tr(I - A⁻¹) ≤ ln(det(A)) ≤ tr(A - I). Trompedo (talk) 12:00, 17 July 2016 (UTC)

You may wish to insist on strictly positive-definite matrix A. Cuzkatzimhut (talk) 14:45, 17 July 2016 (UTC)
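A rough numerical sanity check of the inequality, assuming a symmetric positive-definite A (generated at random here, so all eigenvalues are positive):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)         # symmetric positive definite

I = np.eye(4)
lower  = np.trace(I - np.linalg.inv(A))
middle = np.linalg.slogdet(A)[1]    # ln(det(A)), computed stably
upper  = np.trace(A - I)
print(lower <= middle <= upper)     # True
</syntaxhighlight>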

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Determinant. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 20:24, 11 December 2016 (UTC)

Replacing false claim

I am replacing this:

An important arbitrary dimension n identity can be obtained from the Mercator series expansion of the logarithm when the expansion converges:
<math>\det(I + A) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-1)^j}{j} \operatorname{tr}\left(A^{j}\right) \right)^{k},</math>
where I is the identity matrix. The sums and the expansion of the powers of the polynomials involved only need to go up to n instead of ∞, since the determinant cannot exceed O(A^n).

The main problem is the last sentence, which is simply wrong. It is only necessary to try a random 2x2 matrix to see that it does not hold. The reason given is meaningless as far as I can tell. In case the reasoning is that convergent power series in an nxn matrix are equal to a polynomial of degree n in that matrix: yes, but you get the polynomial by factoring by the minimal polynomial of the matrix, not by truncating the power series. McKay (talk) 01:57, 20 January 2017 (UTC)

You are right that the last sentence is clumsy, but you threw out the baby with the bathwater. Insert a parameter s in front of A: the left-hand side is a polynomial of order s^n, and so must be the right-hand side. By the Cayley–Hamilton theorem all orders > n on the right-hand side must vanish, and so can be discarded. Set s = 1. The argument is presented properly on the C-H thm page, but was badly mangled here. Perhaps you may restore an echo of it. Cuzkatzimhut (talk) 14:47, 20 January 2017 (UTC)
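Spelled out slightly, the scaling argument sketched above reads, as formal power series in s,

<math display="block">\det(I + sA) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-1)^j}{j}\, s^{j} \operatorname{tr}\left(A^{j}\right) \right)^{k} = \sum_{m=0}^{n} c_m(A)\, s^{m},</math>

where c_m(A) collects every contribution of total order m in A. Since the left-hand side is a polynomial of degree n in s, the coefficients of s^m for m > n vanish identically (this is where the Cayley–Hamilton theorem does its work), so both sums may be truncated at order n before setting s = 1.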

Decomposition Methods

The article in general seems to be packed full of overly complicated ways of doing simple things. For instance the Decomposition Methods section fails to mention Gaussian Elimination (yes, it is mentioned in the Further Methods section). It makes no sense to me (so either I don't understand or the section is simply wrong) that det(A) = e*det(L)*det(U) given that det(A) = det(L') = det(U') for the upper triangular matrix U' and the lower triangular matrix L' that are EASILY arrived at (in O(n^3) operations) by Gaussian Elimination. In other words, given how easy and efficient Gaussian Elimination is, LU Decomposition, etc. should be clearly justified here, and it is (they are) not. I also note the Wiki-article on Gaussian Elimination claims right up front that it operates on Rows of a matrix. This is just WRONG, it can operate on either Rows or Columns and if memory serves me (it may not) it can operate on both. Restricting discussion to row reduction is nonsense. Anyway, my main point is why should LU decomposition be mentioned here if it is MORE EXPENSIVE than Gaussian Elimination? Before discussing its details, that needs to be addressed - since as far as I can see it is only useful for reasons other than calculation of the determinant of a matrix. 98.21.212.196 (talk) 22:55, 27 May 2017 (UTC)

-- umm, dude, the usual LU decomposition *is* gaussian elimination - U is the resulting row echelon form, L contains the sequence of pivot multipliers used during the elimination procedure. Which, btw, is generally done on rows because the RHS of a set of simultaneous equations - used to augment a matrix to solve those equations, which was the whole *point* of gaussian elimination - adds one or more additional columns (not rows), so the corresponding elimination procedure must act on rows. — Preceding unsigned comment added by 174.24.230.124 (talk) 06:26, 1 June 2017 (UTC)
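To tie the two views together, here is a minimal Python sketch using SciPy's LU factorization with partial pivoting, i.e. Gaussian elimination (the sample matrix is arbitrary, and the ±1 permutation sign is assumed to be what the article's factor e denotes):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])
P, L, U = lu(A)                     # A = P @ L @ U
sign = np.linalg.det(P)             # +1 or -1 from the row exchanges
# L has unit diagonal, so det(L) = 1; det(U) is the product of its diagonal.
print(sign * np.prod(np.diag(L)) * np.prod(np.diag(U)))
print(np.linalg.det(A))             # the same value, up to rounding
</syntaxhighlight>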

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Determinant. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 14:05, 9 September 2017 (UTC)


Determinant inequalities

There is a problem with the determinant inequalities in the section on properties of the determinant. If we take the determinant of a commutator

 

then

 

But if   and   has a non-zero determinant, there is a contradiction. — Preceding unsigned comment added by Username6330 (talkcontribs) 21:56, 10 November 2017 (UTC)

Geometric explanation for 2x2 matrices

I would like to clarify only the following part of the explanation:

"yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. a = (-b, a), such that |a||b|cosθ' , which can be determined by the pattern of the scalar product to be equal to adbc"

As far as I understand it, for the scalar product to be ad − bc the vector a⊥ must be (−b, a). So I would change "to a perpendicular vector, e.g." to "to the perpendicular vector". Moreover, if that is the case, the moduli of a⊥ and a are the same, so the final expression could be better written in terms of |a|. Happy to do it, I would like just to confirm. Conjugado (talk) 00:52, 31 January 2018 (UTC)
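For reference, the dot-product pattern being discussed, written out with rows a = (a, b) and b = (c, d), so that a⊥ = (−b, a):

<math display="block">\mathbf{a}^{\perp} \cdot \mathbf{b} = (-b,\, a) \cdot (c,\, d) = ad - bc = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \left|\mathbf{a}^{\perp}\right| = \left|\mathbf{a}\right|.</math>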

Determinant identities

I suggest merging Determinant identities into this article. There's not enough there to justify a separate article. The original article creator was blocked a few times for creating inappropriate article fragments in the mainspace. -Apocheir (talk) 21:26, 8 April 2018 (UTC)

I would be in mild, perichiral, agreement. Cuzkatzimhut (talk) 21:30, 8 April 2018 (UTC)

Computationality

The page says: "although other methods of solution are much more computationally efficient."

No example is given HumbleBeauty (talk) 09:23, 3 June 2019 (UTC)

Permutations

Is this already in it? - I couldn't find it.

Let S_n be the set of all permutations of {1,..,n}.

Then each permutation σ in S_n picks one entry from each row of the matrix (e.g. 2314 means the 2nd element of row 1, the 3rd of row 2, the 1st of row 3, and the 4th of row 4).

Multiply these entries together, multiply by -1 or +1 according to whether the permutation is odd or even, and add the results over all permutations to get the determinant.

Darcourse (talk) 15:03, 21 May 2020 (UTC)

Your description is obscure, but seems to refer to the formula given at the beginning of section Determinant#n × n matrices. D.Lazard (talk) 15:50, 21 May 2020 (UTC)

Found it! Thanks. @dl

Darcourse (talk) 02:03, 22 May 2020 (UTC)
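For reference, a runnable Python sketch of the permutation (Leibniz) formula described above (the 4x4 sample matrix is arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from itertools import permutations

def leibniz_det(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sign of the permutation, via the number of inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for row, col in enumerate(perm):   # one entry from each row, column given by the permutation
            prod *= A[row, col]
        total += sign * prod
    return total

A = [[1, 2, 0, 1], [0, 3, 1, 2], [2, 1, 0, 0], [1, 0, 2, 1]]
print(leibniz_det(A), np.linalg.det(A))    # the two values agree
</syntaxhighlight>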

3x3 column vectors

It would be very useful to simply state in the 3x3 section that when a 3x3 matrix is given by its column vectors A = [ a | b | c ] then det(A) = a^T (b × c). — Preceding unsigned comment added by 2A00:23C6:5486:4A00:7DEB:F569:CA90:8032 (talk) 11:52, 21 June 2020 (UTC)
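A quick numerical check of that identity (the three column vectors are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, -1.0, 1.0])

A = np.column_stack((a, b, c))     # A = [ a | b | c ]
print(np.dot(a, np.cross(b, c)))   # scalar triple product a . (b x c)
print(np.linalg.det(A))            # the same value, up to rounding
</syntaxhighlight>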

Determinants in "The Nine Chapters"

I have read the Russian translation by Berezkina and found nothing that can be considered as finding the determinant. In the 8th "book", Gaussian elimination is considered, but the question of the existence of solutions is not discussed. In case there is such a topic in the book (maybe I missed it or maybe the translation is flawed), it would be helpful to have a citation and direct the reader to the particular problem in the book. Medvednikita (talk) 13:21, 16 October 2020 (UTC)

Interesting. I don't have access to this, but maybe someone else can weigh in? I'd also question the leap from "system is (in)consistent" to full-blown "determinant" for anyone, as this section seems to be doing. –Deacon Vorbis (carbon • videos) 13:34, 16 October 2020 (UTC)

Illustration of "Sarrus' scheme" (for 3x3 matrices) - why is it missing a "d"?

Is there a reason why this illustration is missing the letter "d" - i.e. why the matrix elements are listed as "a b c e f g h i j", rather than "a b c d e f g h i" (as in the preceding two illustrations for the Laplace and Leibniz formulas)? (If the "d" is included, then the resulting formula would come out to be identical to the Leibniz formula, as one would expect.) — Preceding unsigned comment added by PatricKiwi (talkcontribs) 09:45, 26 February 2021 (UTC)

Using a single style for the design of formulas

In this article, several styles of formatting mathematical expressions are mixed ({{math|...}}, {{math|''...''}}, ''...'' and <math>...</math>), so the same variable, e.g. n, is rendered differently depending on the markup. This is very noticeable when styles are mixed within a paragraph or sentence. The suggestion is to bring all formulas to a single style, for example <math>...</math>. — Preceding unsigned comment added by Alexey Ismagilov (talkcontribs) 12:18, 17 October 2021 (UTC)

Please do not forget to sign your contributions on talk pages with four tildes (~~~~).
This has been discussed many times and the consensus is summarized in MOS:FORMULA. Since the last version of this manual of style, the rendering of <math> </math> has been improved, and the consensus has slightly evolved. Namely, it is recommended to replace raw html (''n'' and n) by {{math|''n''}} or {{mvar|n}} (these are equivalent for isolated variables), under the condition of doing this in a whole article, or at least in a whole section. The change from {{math|}} to <math></math> is recommended for formulas that contain special characters (see MOS:BBB, for example). Otherwise, the preference of preceding editors must be retained per MOS:VAR. D.Lazard (talk) 11:00, 17 October 2021 (UTC)

According to the page you are referring to, the community has not come to a single agreement on the style of formulas. However, displayed formulas have to be written in LaTeX notation. Therefore, for inline formulas, it seems to me, it is also worth using LaTeX in order to maintain uniformity of presentation. I agree that the edits that I made to the article, which were later reverted by the user D.Lazard, contain several bad changes. I apologize for these changes. The expressions do not contain special characters (MOS:BBB), so I think it is better not to touch anything, per MOS:VAR. Alexey Ismagilov (talk) 12:26, 17 October 2021 (UTC)

Multiplication in determinants is not commutative

Am I wrong in believing many of the equations presented in this article are incorrect? E.g. the very first equation

<math>|A| = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc</math>

should be written as

 

The original equation works fine if all of the terms are just single numbers but does not work in some cases when they are vectors. Below is an identity seen in semidefinite programming, where   is a vector

 

Since the matrix is positive semidefinite, the determinant of the matrix must be nonnegative. Using the original equation presented,

 

This is impossible to evaluate since the dimensions do not match. Instead, using the second equation,

 

which is possible to evaluate.

--Karsonkevin2 (talk) 00:16, 7 December 2021 (UTC)

This looks like an anomalous cancellation. In any case, the last formula is not well formed, as it expresses the equality of a scalar (a determinant) and a non-scalar matrix.
Moreover, it is explicitly said in § Definition that commutativity is required for multiplication of matrix entries. So, in the 2x2 case, ad − bc = da − cb. Therefore, there is no reason not to follow the alphabetical order. D.Lazard (talk) 07:50, 7 December 2021 (UTC)
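As a side note that may help the thread above: when the entries are themselves blocks (matrices or vectors) the scalar 2x2 formula does not apply; assuming the top-left block A is invertible, the standard Schur complement identity is used instead:

<math display="block">\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\,\det\left(D - C A^{-1} B\right).</math>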

Article wrong by many omissions?

The Example about "In a triangular matrix, the product of the main diagonal is the determinant"

There are many triangular matrices which can be derived. Unless you state more constraints (e.g. Hermite Normal Form or alike), you cannot simply postulate that the diagonal product is the determinant. For example, if I produce some (remember - it is not unique) upper triangular matrix with row operations, instead of column operations, I cannot reproduce the findings in the example. So, either constraints or insights are clearly missing.

2003:E5:2709:8B91:29A5:FE96:3F33:E513 (talk) 07:50, 4 August 2022 (UTC)

No, in any upper or lower triangular matrix, the determinant is the product of the main diagonal: all the terms in the expansion of the determinant along any column are 0 except for the term involving the cofactor determined by the matrix entry at the intersection of the column and the main diagonal -- and by induction the determinant of that cofactor is the product of its main diagonal (which is the rest of the diagonal of the full matrix). The explanation following the example tacitly assumes that the matrix is nonsingular, but the claim remains true for singular triangular matrices as well. -- Elphion (talk) 14:05, 4 August 2022 (UTC)
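For a concrete instance of the induction sketched above, expand a 3x3 upper triangular determinant along its first column:

<math display="block">\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ 0 & a_{33} \end{vmatrix} = a_{11}\, a_{22}\, a_{33},</math>

and the same pattern repeats for any n, whether or not some of the diagonal entries are zero.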

"the linear map" in the introduction

There are two mentions of "the linear map" associated to a matrix and one of "(the matrix of) a linear transformation". The word "the" in all of them is incorrect, since in each case the noun referred to is not unique.

The first two occurrences should be replaced with "a". The last sentence is correct without the parenthesized clause, as a sentence about the determinant of a linear transformation. Alternatively the parenthesis can be adjusted to "a matrix representing". Thatwhichislearnt (talk) 14:32, 28 February 2024 (UTC)

"The" is correct, since there is exactly one matrix that represents a given linear map or transformation on given bases. When one use "the matrix of a linear map" this supposes that bases are implicitely chosen. D.Lazard (talk) 15:10, 28 February 2024 (UTC)
None of the sentences mention the basis. Thus it is incorrect. Thatwhichislearnt (talk) 15:17, 28 February 2024 (UTC)
For example, there is the sentence "Its value characterizes some properties of the matrix and the linear map represented by the matrix." There is no such thing as "the linear map represented by the matrix". Thatwhichislearnt (talk) 15:19, 28 February 2024 (UTC)
Also, that "convention" made up by you that "this supposes that bases are implicitely [sic] chosen" need not be the assumption of a general reader of Wikipedia, even more of someone that is just beginning to learn about determinants and linear transformations. It is precisely a common point of failure that students misinterpret the correspondence between matrices and linear maps as one-to-one. Thus explicit is better than implicit. Thatwhichislearnt (talk) 15:31, 28 February 2024 (UTC)
As far as I know, for most readers, bases are always given, and they do not distinguish between a vector and its coordinate vector. So, it is convenient to not complicate the lead by discussing the choice of bases. However things must be clarified in the body of the article, although there are other inaccuracies that are more urgent to fix. D.Lazard (talk) 15:54, 28 February 2024 (UTC)
Well "they do not distinguish between a vector and its coordinate vector" and "bases are always given" are both fundamental errors. And regarding "for most readers", citation needed, if anything those would be the readers that have not learned the content properly or haven't learned it yet. Thatwhichislearnt (talk) 16:02, 28 February 2024 (UTC)
Also, using "a" instead of "the" is not a complication. A reader without sufficient attention to detail might not notice the wording; yet the article wouldn't be lying to them, and on further readings they might notice. Between the two choices that don't lie to the reader, using "a" and using "the matrix" plus mentioning the basis, "a" is the one that introduces no complication to the article's introduction. Thatwhichislearnt (talk) 16:08, 28 February 2024 (UTC)
Reverted your edit to give you the opportunity to fix the absurd edit summary. Also, here you complain about "not complicating" the article and then you choose the more complicated option? Thatwhichislearnt (talk) 16:23, 28 February 2024 (UTC)
I maintain that the indefinite article is wrong here. If you disagree wait a third person opinion. In any case, do not edit war for trying to impose your opinion. D.Lazard (talk) 17:30, 28 February 2024 (UTC)
D.Lazard's phrasing is much better, and more informative. -- Elphion (talk) 17:57, 28 February 2024 (UTC)
You maintain? On what basis? Where is the citation? Plus it is also your opinion that the introduction should not be complicated. Thatwhichislearnt (talk) 18:00, 28 February 2024 (UTC)
For citation, any linear algebra text. -- Elphion (talk) 18:03, 28 February 2024 (UTC)
No no. That is not what I am asking. Citation for using "a" being wrong. Again, all of those linear algebra texts say that the matrix does depend on the basis. Thus the absence of a mention of the basis calls for the article "a". Thatwhichislearnt (talk) 18:05, 28 February 2024 (UTC)
That is not his phrasing. That is one of the phrasings that I said should be done and he objected ("to not complicate"). Thatwhichislearnt (talk) 18:03, 28 February 2024 (UTC)
It is always the same issue with him. Editing Wikipedia became his entertainment in retirement and all over the place defends incorrect wording on the basis of "simplicity". Thatwhichislearnt (talk) 18:09, 28 February 2024 (UTC)

Sorry, I meant the text resulting from D.Lazard's edit of 17:24. I don't care whose text it is, it is superior to using just an indefinite article. The key point of matrices is that for a given choice of bases there is a 1-1 correspondence between linear transformations and matrices of appropriate size. That's where the definite article comes from. And please refrain from attacking another user; keep the discussion on the article. -- Elphion (talk) 18:16, 28 February 2024 (UTC)

The entire reason why I posted this section. The initial version of the article was wrong for implying there is "the matrix of a linear map". Then his "opinion" passes through the following stages:
1. Gaslighting that there is some made-up convention that bases are implicitly assumed. That could work with non-mathematicians, but there is no such thing.
2. That it would "complicate" the article.
3. The (demonstrably false) opinion that "a" is wrong, when clearly, if one does not make a choice of basis, the association between linear maps and matrices is one-to-many. Thatwhichislearnt (talk) 18:46, 28 February 2024 (UTC)
Also, note that his edit, which you consider superior, still left another occurrence of the same mistake: the mention in the part about orientation. Thatwhichislearnt (talk) 18:50, 28 February 2024 (UTC)
And the grammar was also inadequate. Thatwhichislearnt (talk) 18:52, 28 February 2024 (UTC)

Look, I agree with D.Lazard that the definite article is superior; I agree with you that some reference to choice of bases is appropriate. And I repeat, casting shade on a fellow editor ("gaslighting" above) is not helpful, and will eventually get you blocked. -- Elphion (talk) 19:06, 28 February 2024 (UTC)

The helpful way to proceed at this point is to suggest a concrete prospective wording here on the talk page so we can discuss it. -- Elphion (talk) 19:20, 28 February 2024 (UTC)

All the occurrences are fixed now. For the last one that was left unfixed, regarding orientation, I used "a" again. In that case it is talking about determining orientation, and for orientation every matrix of the endomorphism, in every basis, can be used. Do you also think "the" + "basis" is better in that case? The "a" removes the error of "the" with no "basis", and it allows for the whole picture of independence from the basis. Thatwhichislearnt (talk) 19:29, 28 February 2024 (UTC)