Talk:Matrix norm


Old and unsigned


What is sup(x) ???? Dr. Universe (talk) 00:08, 8 June 2015 (UTC)Reply

> sup(x) stands for the supremum, or least upper bound, of x. Themumblingprophet (talk) 01:43, 16 April 2020 (UTC)Reply

I removed the condition that the matrix be square for the induced norm (when p = 2) to be equivalent to the largest singular value. Indeed, this equivalence is true for non-square matrices too.
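For concreteness, a small NumPy sketch of this point (the 3×2 matrix is an arbitrary example): the induced 2-norm of a rectangular matrix agrees with its largest singular value.

  import numpy as np

  # Arbitrary non-square (3x2) example matrix
  A = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])

  spectral = np.linalg.norm(A, 2)                          # induced 2-norm
  sigma_max = np.linalg.svd(A, compute_uv=False)[0]        # largest singular value

  print(spectral, sigma_max)        # both ~9.5255
  assert np.isclose(spectral, sigma_max)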

The following page will be replaced by a table.--wshun 01:34, 8 Aug 2003 (UTC)

The most "natural" of these operator norms is the one which arises from the Euclidean norms ||.||2 on Km and Kn. It is unfortunately relatively difficult to compute; we have

||A||_2 = σ_max(A), the largest singular value of A

(see singular value). If we use the taxicab norm ||.||1 on Km and Kn, then we obtain the operator norm

||A||_1 = max_j Σ_i |a_ij|   (the maximum absolute column sum)

and if we use the maximum norm ||.||∞ on Km and Kn, we get

||A||_∞ = max_i Σ_j |a_ij|   (the maximum absolute row sum)

The following inequalities hold among the various matrix norms discussed above, for an m-by-n matrix A:

 
 
 
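A hedged NumPy sketch of the standard relations between these norms for an m-by-n matrix (the specific inequalities checked below are the commonly cited ones, stated as an assumption rather than a quote of the old text):

  import numpy as np

  m, n = 4, 3
  rng = np.random.default_rng(0)
  A = rng.standard_normal((m, n))

  n1   = np.linalg.norm(A, 1)        # maximum absolute column sum
  n2   = np.linalg.norm(A, 2)        # largest singular value
  ninf = np.linalg.norm(A, np.inf)   # maximum absolute row sum
  nfro = np.linalg.norm(A, 'fro')    # entrywise 2-norm (Frobenius)

  # Commonly cited relations between these norms for an m-by-n matrix
  assert n2 <= nfro <= np.sqrt(n) * n2
  assert n1 / np.sqrt(m) <= n2 <= np.sqrt(n) * n1
  assert ninf / np.sqrt(n) <= n2 <= np.sqrt(m) * ninf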

This site needs to be linked to http://de.wikipedia.org/wiki/Matrixnorm

--91.113.18.247 (talk) 19:03, 5 January 2011 (UTC)Reply

What's wrong with Frobenius norm?


Why does the article say that the Frobenius norm is not sub-multiplicative? It does satisfy the condition ||AB||_F ≤ ||A||_F ||B||_F, which can be proved easily with the Cauchy–Schwarz inequality: ||AB||_F^2 = Σ_{i,j} |Σ_k a_ik b_kj|^2 ≤ Σ_{i,j} (Σ_k |a_ik|^2)(Σ_k |b_kj|^2) = ||A||_F^2 ||B||_F^2. --Igor 21:21, Feb 18, 2005 (UTC)
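A quick numerical sketch of the inequality with NumPy (random rectangular matrices, chosen arbitrarily):

  import numpy as np

  rng = np.random.default_rng(1)
  A = rng.standard_normal((3, 4))
  B = rng.standard_normal((4, 2))

  lhs = np.linalg.norm(A @ B, 'fro')
  rhs = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
  assert lhs <= rhs   # ||AB||_F <= ||A||_F ||B||_F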

Is it true that the Frobenius norm is ||A||_2 when p=2? It seems to me that it is the ||A||_2 norm that is mentioned earlier in the article. Also, is it really the same as the Hilbert–Schmidt norm? The page for the Hilbert–Schmidt norm says that it is only analogous to the Frobenius norm.--kfrance 13:40, Oct 9, 2007 (MST)

@KFrance, That is not true. The Frobenius norm is the Hilbert–Schmidt norm, but it is not the same as ||A||_2 (the 'spectral norm'). For vectors, ||x||_2 is the Euclidean norm, which is the same as the Frobenius norm if the input vector is treated as a matrix, but when the input is a matrix, the notation ||A||_2 usually denotes the spectral norm, which is not the Frobenius norm. @Igor, that is true. Lavaka (talk) 17:54, 9 July 2014 (UTC)Reply
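An illustrative NumPy sketch of the distinction (the matrices are arbitrary examples): the spectral and Frobenius norms generally differ for a matrix, but both reduce to the Euclidean norm for a vector treated as an n×1 matrix.

  import numpy as np

  A = np.array([[1.0, 0.0],
                [0.0, 2.0]])
  print(np.linalg.norm(A, 2))      # spectral norm: 2.0 (largest singular value)
  print(np.linalg.norm(A, 'fro'))  # Frobenius norm: sqrt(5) ~ 2.236

  x = np.array([[3.0], [4.0]])     # a vector treated as a 2x1 matrix
  print(np.linalg.norm(x, 2))      # 5.0 (its only singular value)
  print(np.linalg.norm(x, 'fro'))  # 5.0 -- the two coincide for vectors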

What happened to the article?


The above discussion suggests that the article used to be more extensive. However, the revision history of the current article shows only one edit, by CyborgTosser on 25 Feb 2005. Did something drastic happen to the article? -- Jitse Niesen 11:36, 2 Mar 2005 (UTC)

I'm not quite sure what happened. Apparently there used to be an article here, but the content must have been moved. I'm not sure where and I'm not sure why, but a lot of articles link here, so I figured we needed the article. Hopefully whoever moved the content will replace whatever is relevant. CyborgTosser (Only half the battle) 03:21, 11 Mar 2005 (UTC)
I don't know either. I couldn't find the old page on wikipedia with google, but I've put a copy (from a wikipedia clone) at Matrix norm/old. Lupin 13:50, 11 Mar 2005 (UTC)
It seems that User:RickK deleted this page after it had been vandalised. Idiot. I've asked him to restore it with edit history to a subpage if possible. Lupin 14:10, 11 Mar 2005 (UTC)

Induced norm


I'm a little confused where the article says that "any induced norm satisfies the inequality ...". Is the intended meaning that the operator norm satisfies that inequality, or are there other norms which are also known as induced norms which satisfy that inequality? If the former, it should be rephrased as "the induced norm satisfies..." and if the latter, an explanation of what is meant by an induced norm should be given. Lupin 01:24, 11 May 2005 (UTC)Reply

The terms "induced norm" and "operator norm" are synonymous. I used "any induced norm" instead of "the induced norm" because there are several operator norms. One example is the spectral norm; another arises when one takes the ∞-norm on Kn, defined by
||x||_∞ = max_i |x_i|;
the resulting operator norm is
||A||_∞ = max_i Σ_j |a_ij|   (the maximum absolute row sum).
I hope this resolves the confusion; feel free (of course) to edit the article to make it clearer. -- Jitse Niesen 10:23, 11 May 2005 (UTC)Reply

Submultiplicativity


I feel that this article is quite unclear about when submultiplicativity applies. In particular, it should be made clear that for matrix norms induced by vector p-norms, ||AB||_p ≤ ||A||_p ||B||_p holds for A in K^(m×n) and B in K^(n×k). This is shown in Proposition 2.7.2 on the following page [1].

You are right that this could be added. So, why don't you change the article to include this? You can edit the article by clicking on "edit this page", see How to edit a page for details. Don't worry about making mistakes; you will be corrected if necessary. I look forward to your contributions, Jitse Niesen (talk) 11:24, 12 August 2005 (UTC)Reply
It took me two days to figure out that the statement on Wikipedia about the submultiplicative property was misleading. As said, the submultiplicative property also holds for consistent p-norms, except that in this case you are actually splitting ||AB|| into two different norms. That is probably the reason why it is mentioned that the submultiplicative property holds for square matrices only. However, in the special case of the 2-norm this is wrong. And even apart from that special case it is misleading for the reader, since the "submultiplicative" definition is used in a much wider range of settings than a norm that splits only into two equal norms. See page 5 of [2]. — Preceding unsigned comment added by 94.210.213.220 (talk) 14:00, 20 September 2016 (UTC)Reply

Update: I edited the page. Can somebody check? Does it need references? — Preceding unsigned comment added by 94.210.213.220 (talk) 14:50, 20 September 2016 (UTC)Reply
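As a hedged numerical sketch of the consistency property discussed above: for arbitrary rectangular matrices and the induced 1-, 2- and ∞-norms, ||AB||_p ≤ ||A||_p ||B||_p (NumPy, random matrices):

  import numpy as np

  rng = np.random.default_rng(2)
  A = rng.standard_normal((3, 5))   # m x n
  B = rng.standard_normal((5, 2))   # n x k

  for p in (1, 2, np.inf):
      lhs = np.linalg.norm(A @ B, p)
      rhs = np.linalg.norm(A, p) * np.linalg.norm(B, p)
      assert lhs <= rhs             # ||AB||_p <= ||A||_p ||B||_p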

Bad Notation

  Resolved
Moreover, when m = n, then for any vector norm | · |, there exists a unique
positive number k such that k| · | is a (submultiplicative) matrix norm.

A matrix norm || · || is said to be minimal if there exists no other matrix norm
| · | satisfying |A|≤||A|| for all |A|.

Doesn't |A| denote the absolute value? Using the correct notation yields ||A||≤||A|| for all ||A||. Isn't that self-evident? Furthermore, m and n are not specified. Therefore I have removed this section until someone can clarify this content. It looks as though someone partially moved content such that its meaning was lost. —The preceding unsigned comment was added by ANONYMOUS COWARD0xC0DE (talkcontribs) 02:53, 24 December 2006 (UTC).Reply

So sorry; I don't know what I was thinking. I will just change |A| to ||A||_q and ||A|| to ||A||_p; it's clear from the sentence what |A| referred to. I was reading a book earlier in which |A| referred to the determinant of A. Moreover, I will just add these statements back in and reword them. --ANONYMOUS COWARD0xC0DE 01:06, 29 December 2006 (UTC)Reply

Matrix Norm not Vector Norm

* 
* 
* 
* 
* 
* 

These are properties of vectors of the form x ∈ K^n and not of matrices of the form A ∈ K^(m×n). --ANONYMOUS COWARD0xC0DE 03:38, 24 December 2006 (UTC)Reply

equivalence of norms


The article is not really clear about the equivalence of norms: since we are talking about matrices of finite size, all vector norms should be equivalent. The bunch of inequalities at the bottom could (mis)lead the reader into thinking otherwise. If, in addition, submultiplicativity is required, does this change? (Apparently so; the article seems to imply the Banach algebra topology is not unique.) Mct mht 14:08, 13 February 2007 (UTC)Reply

trace norm vs. Frobenius norm


It isn't true that the trace norm, sum(sigma), is <= the Frobenius norm, sqrt(sum(sigma^2)); e.g. suppose all sigma < 1. Lpwithers 16:34, 8 October 2007 (UTC)Reply
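A NumPy sketch of this counterexample (a diagonal matrix with both singular values below 1):

  import numpy as np

  A = np.diag([0.5, 0.5])                                  # both singular values < 1
  trace_norm = np.linalg.svd(A, compute_uv=False).sum()    # 0.5 + 0.5 = 1.0
  frob_norm  = np.linalg.norm(A, 'fro')                    # sqrt(0.25 + 0.25) ~ 0.707
  assert trace_norm > frob_norm   # so the trace norm is NOT <= the Frobenius norm here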

trace norm


The article doesn't explain why the "trace norm" is an "entry-wise norm". sattath (talk) 14:49, 23 July 2008 (UTC) Fixed. --sattath (talk) 13:02, 27 April 2011 (UTC)Reply

Gradient of the Norm


I'm interested in learning about the gradient of the matrix norm but I can't seem to find this information within wikipedia. I guess I'm requesting a new article and I don't know where to do that, but it seems logical for this article to point me to the gradient of the norm (maybe under see also). —Preceding unsigned comment added by Arbitrary18 (talkcontribs) 01:00, 23 September 2008 (UTC)Reply

Matrix Norm Definition


A matrix norm on the set of all n×n matrices is a real-valued function ||.|| defined on this set, satisfying for all n×n matrices A and B and all real numbers α (a numerical check of these axioms appears after the list):

  • ||A|| ≥ 0, and ||A|| = 0 if and only if A = 0
  • ||αA|| = |α| ||A|| for all α in R and all matrices A in R^(n×n)
  • ||A + B|| ≤ ||A|| + ||B|| for all matrices A and B in R^(n×n)
  • ||AB|| ≤ ||A|| ||B||
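As a sketch, here is a quick numerical spot check of these axioms for the Frobenius norm, using NumPy (random matrices; the choice of the Frobenius norm as the test norm is just an illustration):

  import numpy as np

  rng = np.random.default_rng(3)
  A = rng.standard_normal((3, 3))
  B = rng.standard_normal((3, 3))
  alpha = -2.5
  norm = lambda M: np.linalg.norm(M, 'fro')   # Frobenius norm as the test norm

  assert norm(A) > 0 and np.isclose(norm(np.zeros((3, 3))), 0)  # non-negativity / definiteness
  assert np.isclose(norm(alpha * A), abs(alpha) * norm(A))      # absolute homogeneity
  assert norm(A + B) <= norm(A) + norm(B)                       # triangle inequality
  assert norm(A @ B) <= norm(A) * norm(B)                       # submultiplicativity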


Matrix Norm Example


The following two functions are examples of matrix norms:

||A||_1 = max_j Σ_i |a_ij|

and

||A||_∞ = max_i Σ_j |a_ij|


For example, with the matrix A:


 


We have:  

And:

  = |3|+|5|+|7|=15

Note: In the above example ||A||_1 is the maximum absolute column sum of the matrix, and ||A||_∞ is the maximum absolute row sum of the matrix. In addition, both ||A||_1 and ||A||_∞ are special cases of a more general norm, the induced p-norm built on the p-norms for vectors.
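A sketch of the same computation in NumPy (the matrix below is a hypothetical stand-in, since the original example matrix is not preserved above):

  import numpy as np

  A = np.array([[3, 5, 7],
                [2, 6, 4],
                [0, 2, 8]])

  print(np.linalg.norm(A, 1))       # 19.0 -- maximum absolute column sum (|7|+|4|+|8|)
  print(np.linalg.norm(A, np.inf))  # 15.0 -- maximum absolute row sum (|3|+|5|+|7|)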


max?


In some of the definitions I wasn't sure if max should actually be the supremum. I thought a maximum is guaranteed to exist for compact sets of real numbers, but not necessarily for open sets. In the case of linear, finite-dimensional operators (open sets are mapped to open sets) wouldn't this be equivalent to the domain being compact? In the case of the induced norm that would imply (from my perspective) max in the case ||x|| <= 1 and supremum in the case x not equal to zero. I am not sure if it is actually an issue or not because, at least in the case of the induced 2-norm, the supremum is actually part of the range. That in turn implies to me that the supremum is reached for any similarly defined induced norm because of the equivalence of norms in finite-dimensional spaces. Can someone with experience maybe point out the disconnect I seem to be having? —Preceding unsigned comment added by 79.235.159.125 (talk) 18:49, 19 July 2010 (UTC)Reply

The domain is usually a sphere. These are closed and bounded, and thus compact by Heine-Borel. — Preceding unsigned comment added by 79.131.226.245 (talk) 17:06, 1 August 2011 (UTC)Reply
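A small NumPy illustration of that point for the induced 2-norm: the supremum of ||Ax||_2 over the unit sphere is attained at the top right singular vector, so it is in fact a maximum (random matrix, arbitrary example):

  import numpy as np

  rng = np.random.default_rng(6)
  A = rng.standard_normal((3, 3))

  # The sup of ||Ax||_2 over the unit sphere is attained at the top right
  # singular vector v1, so the supremum is a maximum and equals sigma_max.
  U, s, Vt = np.linalg.svd(A)
  v1 = Vt[0]                                    # unit vector
  assert np.isclose(np.linalg.norm(A @ v1), s[0])
  assert np.isclose(np.linalg.norm(A, 2), s[0])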

spectral radius


There is a statement in the article: "For a symmetric or hermitian matrix A, we have equality for the 2-norm, since in this case the 2-norm is the spectral radius of A"

I guess the equality actually holds in a more general case: it holds for any diagonalizable A. (Note that symmetric/hermitian matrices are a special case of normal matrices, i.e. those diagonalizable by a unitary matrix, which in turn are a special case of diagonalizable matrices.)

Trivial proof: Let A = P D P^(-1). Then   (since the set of eigenvalues of AB is the same as the set of eigenvalues of BA)

Does anybody see any problem with this argument? - Subh83 (talk | contribs) 18:47, 7 February 2013 (UTC)Reply

That argument was wrong. If eig(M) gives the set of eigenvalues of a matrix M, then eig(M_1 M_2 ⋯ M_k) = eig(M_σ(1) ⋯ M_σ(k)) if σ is a cyclic permutation, but not for an arbitrary permutation. - Subh83 (talk | contribs) 04:23, 8 February 2013 (UTC)Reply
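A NumPy sketch supporting the retraction: a diagonalizable but non-normal matrix whose 2-norm is much larger than its spectral radius (the matrix is an arbitrary illustration):

  import numpy as np

  A = np.array([[1.0, 10.0],
                [0.0,  2.0]])    # distinct eigenvalues, so diagonalizable; not normal

  rho = np.abs(np.linalg.eigvals(A)).max()   # spectral radius = 2
  two_norm = np.linalg.norm(A, 2)            # ~10.24
  assert two_norm > rho                      # so ||A||_2 = rho(A) fails for this diagonalizable A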

Thank you


This article was very useful. I was getting confused with that double-meaning notation and this article clarified it. Sorry for my English.--147.83.79.107 (talk) 15:31, 19 October 2013 (UTC)Reply


Centralized discussion on proofs


See WT:MATH#Proofs, revisitedArthur Rubin (talk) 17:58, 29 September 2015 (UTC)Reply

Poor article layout


It would be much clearer if the definitions of the norms and their properties were more clearly demarcated. At present, being sub-multiplicative is defined at the top, but the fact that all induced norms are sub-multiplicative is just mentioned in passing in the discussion of induced norms. Contrast this with consistency, for which the fact that induced norms are consistent is mentioned next to the definition of consistency.

I would suggest one of the following two layouts:

  1. Start with the definition of a matrix norm, and the formal definition of each property that such a norm might have. Then go through the definitions of induced, Frobenius etc. norms, with clear results for each norm on which properties it does (not) possess.
  2. Start with the definition of a matrix norm, then go through the definitions of induced, Frobenius etc. norms as examples. Then go through the definitions of each property matrix norms might have, with clear results on which norms (do not) possess the given property.

In either approach, a table of norms and properties might help the presentation.

--cfp (talk) 20:56, 14 November 2015 (UTC)Reply

Article contains no motivations or applications


I came here looking for an introduction to the concept of matrix norms and an understanding of why they are important and what their applications are. The article lacks any of this information - it would be very useful to have here. 36.53.254.212 (talk) 14:07, 22 December 2015 (UTC)Reply

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Matrix norm. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 15:43, 21 January 2018 (UTC)Reply


Frobenius norm - corrected definition


Dear fellow Wikipedians,

Previously, the definition

||A||_F = sqrt(trace(A^T A))

was given for the Frobenius norm, which only holds for real matrices (without any reference to this restriction). I have now changed this, adding the correct definition (using the notation A* for the conjugate transpose of A, which is used in other sections of this article), and I also changed the other parts of this section accordingly. One more thing: the inequality between the induced 2-norm and the Frobenius norm is mentioned before the Frobenius norm section, so probably we should change this.

Zimboras (talk) 12:19, 10 August 2019 (UTC)Reply

Horrible clashing notations


The clashing notations here are so confusing. I see people use ||T||_p for the Schatten norms all the time, but I haven't seen this notation used to mean anything else. For the sake of having a readable article, I would suggest we use different notations for the other ones. Do other people think the other kinds of norms take precedence for this notation?

Sam W

2607:9880:1A18:10A:64C9:2106:FDEB:3FFD (talk) 06:58, 5 June 2021 (UTC)Reply

Hölder's inequality for matrices


The text as of 2013-03-29 claimed that   holds as a matrix generalization of Hölder's inequality. It turns out this was for the Schatten norm, not for the induced p-norm. So I moved it to the Schatten norm section with a hint about how to derive it.
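A hedged NumPy sketch of the Schatten-norm version, assuming the form ||AB||_S1 ≤ ||A||_Sp ||B||_Sq with 1/p + 1/q = 1 (the exact formula from the old text is not preserved above, so this form is an assumption):

  import numpy as np

  def schatten(M, p):
      """Schatten p-norm: the vector p-norm of the singular values of M."""
      return np.linalg.norm(np.linalg.svd(M, compute_uv=False), p)

  rng = np.random.default_rng(4)
  A = rng.standard_normal((4, 4))
  B = rng.standard_normal((4, 4))

  p, q = 3.0, 1.5                       # Hoelder conjugates: 1/3 + 2/3 = 1
  assert schatten(A @ B, 1) <= schatten(A, p) * schatten(B, q)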

correction to the correction


All matrix norms induced by vector norms upper-bound the spectral radius; in particular,

ρ(A) ≤ ||A|| for every induced norm ||.||.

This is an important inequality, so I think it should be re-included on this page.
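A quick NumPy check of this inequality for the induced 1-, 2- and ∞-norms (random matrix, arbitrary example):

  import numpy as np

  rng = np.random.default_rng(5)
  A = rng.standard_normal((4, 4))
  rho = np.abs(np.linalg.eigvals(A)).max()   # spectral radius

  for p in (1, 2, np.inf):
      assert rho <= np.linalg.norm(A, p)     # rho(A) <= ||A|| for each induced norm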

User:Jfessler

Submultiplicativity?


The section on the Frobenius norm contains this sentence:

"The Frobenius norm is sub-multiplicative and is very useful for numerical linear algebra. The sub-multiplicativity of Frobenius norm can be proved using Cauchy–Schwarz inequality."

It would be a useful improvement to this article if the meaning of this submultiplicativity were also stated in mathematical notation.

I hope someone knowledgeable about this subject can add the appropriate inequality to the article.