Talk:Law of the iterated logarithm
This article is rated Start-class on Wikipedia's content assessment scale.
Iterated logarithm?
Do Iterated logarithm and Law of the iterated logarithm have something in common besides the name? --Abdull 13:31, 3 March 2006 (UTC)
- No, nothing in common, I believe. Boris Tsirelson (talk) 08:50, 14 May 2009 (UTC)
Leo Breiman
I put the late Leo Breiman's name in Wiki cites. Would someone with better acquaintance with Breiman and his works be willing to start an article about him? He was a distinguished statistician and deserves recognition. Bill Jefferys 22:40, 26 April 2006 (UTC)
Puzzling picture
I like the idea of the picture added recently by User:Dean P Foster, but looking more closely I got puzzled. The law of the iterated logarithm states that Sn is sometimes of order sqrt(2n log log n) (but not more). Most of the time, however, it is of order sqrt(n). The latter is probably what we see on the picture. The former is probably what we should see on a relevant picture but do not see on this picture. The logarithmic curve should be the envelope of rare large deviations (short peaks). Boris Tsirelson (talk) 08:43, 14 May 2009 (UTC)
I start to understand; it happens because of a very nonlinear (logarithmic, in fact) vertical axis. The point is that on such a plot the graph of sqrt(2n log log n) is quite close to the graph of sqrt(n), even though their ratio tends to infinity. To avoid puzzling the reader it is probably better to show both curves on the picture and to add some words of explanation. Boris Tsirelson (talk) 09:00, 14 May 2009 (UTC)
- Oops... The problem is considerably harder! The log log factor becomes relevant only for huge n (such as 10^10^10). Straightforward simulation of a random walk on this scale is unfeasible. On the other hand, on this scale the random walk is so close to Brownian motion that their difference cannot be seen on a picture. Thus, one should simulate Brownian motion instead. Further, logarithmic axes turn the Brownian motion into an Ornstein-Uhlenbeck process (see Wiener process#Related processes). The latter behaves in the large like a sequence of i.i.d. N(0,1) random variables. Thus the envelope of its large deviations is of order sqrt(2 log t), where t = log n. Boris Tsirelson (talk) 10:23, 14 May 2009 (UTC)
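To make the strategy concrete, here is a rough Python sketch (my own illustration, with parameters I chose arbitrarily, not the code behind the article's figure) of simulating the Ornstein-Uhlenbeck process directly via its exact discrete transition, instead of a random walk out to astronomically large n:

```python
import numpy as np

# On a logarithmic time axis, Brownian motion B becomes an
# Ornstein-Uhlenbeck process U(t) = exp(-t/2) * B(exp(t)), so we
# simulate U directly using its exact AR(1) one-step transition.
rng = np.random.default_rng(0)
dt = 0.01
n_steps = 5000                  # simulate up to time T = 50,
T = n_steps * dt                # i.e. walk lengths up to n = e^50
u = np.zeros(n_steps)
a = np.exp(-dt / 2)             # exact one-step decay of the OU process
s = np.sqrt(1 - a ** 2)         # matching innovation standard deviation
for i in range(1, n_steps):
    u[i] = a * u[i - 1] + s * rng.standard_normal()

# The envelope of the large deviations of U grows roughly like
# sqrt(2 log t), i.e. sqrt(2 log log n) back on the original scale.
running_max = np.maximum.accumulate(u)
print("max over [0, T]:", running_max[-1],
      "vs sqrt(2 log T) =", np.sqrt(2 * np.log(T)))
```

The running maximum of a single path is of course noisy; the sqrt(2 log t) envelope only emerges as the typical order of magnitude.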
- I will generate another picture with both the log(log(n)) line and the sqrt(n) line. I agree that n has to be huge for this difference to matter. But what I do like about the graph is that it shows how to prove the theorem (as Boris points out). Namely, over each order of magnitude, whether the walk is above or below is independent of where it was before (this is seen in the switching at about that frequency). Dean P Foster (talk)
- It would help if the plot indicated exactly what transformation is being used for the x-axis. Otherwise it looks fairly good. For some of the above discussion, and with the extra line included, it would be good to have a second version concentrating on the right hand end, say from 10^8 onwards. Melcombe (talk) 09:35, 15 May 2009 (UTC)
- Done. I think putting both graphs on the main page may be too much. The description of the transformations is in the discussion of the picture. If you think it should be in the caption on the main page, please add what you think best. Thanks everyone for the suggestions! Dean P Foster (talk) —Preceding undated comment added 12:08, 15 May 2009 (UTC).
- Interesting! I guess it is in fact Ornstein-Uhlenbeck simulated. Boris Tsirelson (talk) 14:41, 15 May 2009 (UTC)
- These figures are now excellent and the information about the transformation is in the right place. If possible it would be good to copy the caption down so that it can be seen when looking at the expanded graphs. Melcombe (talk) 11:13, 18 May 2009 (UTC)
The picture and its caption are still not clear. Contrary to the comments above, there does not appear to be any explanation of the axes in the text. My understanding is that the x-axis is n and the y-axis is Sn/n. There is also some logarithmic scaling of the axes in plotting, but the exact transformation eludes me. In any case, my guess at the y-axis seems to match the way the blue lines converge to zero as n tends to infinity. Would it be better to (1) change the y-axis to Sn/sqrt(2n log log n) so that the blue lines remain parallel, and (2) state explicitly the axes definitions and the plotting scaling? — Preceding unsigned comment added by 67.164.26.84 (talk) 07:01, 1 October 2013 (UTC)
- Did you try to click on the picture? This way you get many details (that are present but hardly visible on the small picture) and additional explanations: File:Law of large numbers.gif. Boris Tsirelson (talk) 10:37, 1 October 2013 (UTC)
- I am talking about the largest resolution of the picture. I can see all the labels. However, the labels are confusing, as they do not suggest a consistent interpretation of what the lines in the picture stand for. From the way the blue lines converge to zero at infinity, I can see that they are consistent with ±sqrt(2 log log n / n). But then what is the red line labeled as "x-bar"? Certainly it cannot be Sn/sqrt(2n log log n), as that is supposed to exceed one infinitely many times, not just exceed the blue line infinitely many times. — Preceding unsigned comment added by 67.164.26.84 (talk) 09:24, 10 October 2013 (UTC)
- It turns out that x-bar is indeed Sn/n. This is obvious in hindsight, but quite confusing without proper labeling. Since the main article does not use the x-bar variable, it is better to label the lines in the figure exclusively in terms of n and Sn. The existence of a detailed description of the axes transformation in the figure summary was not obvious even when one clicks on the figure. I've edited the caption to make these clearer (hopefully).
The figure might be clearer if it were scaled to the variance given by the CLT. That is, show Sn/sqrt(n) in red, the constant 1 in blue, and sqrt(2 log log n) in green. — Preceding unsigned comment added by 67.164.26.84 (talk) 17:02, 15 October 2013 (UTC)
I find the picture, though pretty, misleading in the context of the main article. The law of the iterated logarithm is an asymptotic result, which cannot be captured in a graph. Indeed, the picture shows the sum (or is it an average here?) regularly exceeding the asymptotic bound, which it cannot do when n increases without bound. So what is the point of the picture? Again, it does look good, but it invites the reader to somehow apply the law to finite n, where it in fact does not apply. Chafe66 (talk) 22:43, 9 December 2015 (UTC)
- Not quite so. True, in principle, a limit says nothing about a finite n, be it even as large as 10^10^10. But in practice it usually does. Otherwise, why at all are we interested in calculating limits? Boris Tsirelson (talk) 05:53, 10 December 2015 (UTC)
- You said, in effect, "a limit says nothing about a finite n, but in practice it usually does", which confuses me. Perhaps you have in mind something like the CLT, which, though it's an asymptotic result, is typically used to approximate distributions with finite n. (I've done so a million times myself.) But what I'm saying about the graph is that, look, you can see that the sum (sorry, I keep forgetting whether it's a sum or avg here) regularly exceeds the limit result. So if it's being used to illustrate the principle, it simply winds up illustrating the opposite, i.e., that the law is violated (often) for finite n. I'm not saying the picture is not illuminating in some ways; it just does not illustrate the law in question, because it cannot. Chafe66 (talk) 18:59, 11 December 2015 (UTC)
- Yes, it regularly exceeds the limit. So what? The law does not say that it approximates the limit from below. Or do you see that it regularly exceeds the limit by some epsilon that does not tend to 0? Boris Tsirelson (talk) 19:27, 11 December 2015 (UTC)
- Nevermind, I don't think we're getting anywhere. Chafe66 (talk) 19:44, 11 December 2015 (UTC)
- Happy editing. Boris Tsirelson (talk) 20:07, 11 December 2015 (UTC)
practical use?
So, can this be used to construct confidence intervals? For example, instead of saying that Pr[ |θ̂ − θ| ≤ 1.96·se(θ̂) ] → 95%,
we could have been saying that Pr[ |θ̂ − θ| ≤ sqrt(2 ln ln n)·se(θ̂) ] → 100%,
which is a nice alternative, because frankly speaking the number "95%" is quite arbitrary… For practical sample sizes the two quantities are quite similar (which could be the reason why 0.95 was chosen in the first place); in particular, they are the same when n = 921, since sqrt(2 ln ln 921) ≈ 1.96. Of course this all would depend on the rate of convergence of the left-hand side to the limit — does anybody know the performance of the limiting quantity in mid-size samples? … stpasha » 22:12, 6 December 2009 (UTC)
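The n = 921 coincidence is easy to check numerically (a quick illustrative check of my own, not from the original discussion):

```python
import math

# sqrt(2 ln ln n) meets the familiar 95% normal quantile 1.96 near n = 921.
multiplier = math.sqrt(2 * math.log(math.log(921)))
print(round(multiplier, 4))  # ≈ 1.9599

# For practical sample sizes the two multipliers stay fairly close:
for n in (100, 921, 10_000, 1_000_000):
    print(n, round(math.sqrt(2 * math.log(math.log(n))), 3))
```

The multiplier grows without bound, but only very slowly, which is why it stays near 2 over the whole range of practical sample sizes.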
- I am afraid it could be drastically different from the whole philosophy of statistics; definitely original research (to be discussed elsewhere, if you like). "Arbitrary" numbers like 0.95 are inevitable (according to that philosophy) and have an important practical meaning: the probability of rejecting a true hypothesis. Boris Tsirelson (talk) 20:30, 22 December 2009 (UTC)
Wrong claim
The article claims that Sn/sqrt(n log log n) converges to sqrt(2) almost surely, yet elsewhere states that these quantities do not converge in probability. However, the article on convergence of random variables says that almost sure convergence implies convergence in probability. How is the above claim possible?
- You are right: the "Discussion" section added recently by User:Stpasha is not accurate. Could Stpasha provide either references or proofs? Boris Tsirelson (talk) 20:24, 22 December 2009 (UTC)
- No, the almost sure convergence to sqrt(2) is false; the factor in the LIL does not establish almost sure convergence, but a limit situation where all states in the interval [-2,2] are visited infinitely often. — Preceding unsigned comment added by 163.117.87.70 (talk) 16:25, 12 November 2013 (UTC)
- The article (as of now) does not claim almost sure convergence; note "lim sup", not "lim". Yes, all states in the interval are visited infinitely often. But lim sup feels only the upper end of the interval (and lim inf only the lower end). The constant in the article is right; I just checked it against the book "Probability" by R. Durrett (Theorem (9.7)). Boris Tsirelson (talk) 19:32, 12 November 2013 (UTC)
- This is a very valid point, so I went ahead and removed the supremums from the first formula. I guess somebody more knowledgeable about statistics will have to provide further explanation of what's going on with this law. The reference I used was: A.W. van der Vaart, Asymptotic Statistics, ISBN 9780521784504. … stpasha » 20:36, 22 December 2009 (UTC)
- Now better, but still a bit inaccurate. Boris Tsirelson (talk) 20:33, 22 December 2009 (UTC)
Also, the text that follows the above equality is
- Thus, although the quantity |Sn|/sqrt(n log log n) is less than any predefined ε > 0 with probability approaching one, that quantity will nevertheless be dropping out of that interval infinitely often, and in fact will be visiting the neighborhoods of any point in the interval (0,√2) almost surely.
and it implies that in fact |Sn|/sqrt(n log log n) → 0 in probability. Is this true? How does it follow from the law of the iterated logarithm? —Preceding unsigned comment added by 193.77.126.73 (talk) 16:52, 21 December 2009 (UTC)
- It follows from the central limit theorem. Since by the CLT Sn/√n is asymptotically normal, the probability for |Sn|/sqrt(n log log n) to lie within the interval (−ε, ε) will be approximately equal to 2Φ(ε·sqrt(log log n)) − 1, which tends to one.
- And that implies the convergence in probability claim. … stpasha » 20:36, 22 December 2009 (UTC)
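The CLT argument above can be put into numbers (my own illustrative sketch, assuming the approximation P(|Sn|/sqrt(n log log n) < ε) ≈ 2Φ(ε·sqrt(log log n)) − 1); it also shows how glacially slow the convergence is:

```python
from math import erf, log, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

eps = 0.1
probs = []
for n in (10 ** 3, 10 ** 6, 10 ** 12, 10 ** 100):
    # P(|S_n| / sqrt(n log log n) < eps) ~ 2 Phi(eps sqrt(log log n)) - 1
    p = 2.0 * Phi(eps * sqrt(log(log(n)))) - 1.0
    probs.append(p)
    print(p)

# The limit is 1 for every eps > 0 (convergence in probability to 0),
# but even at n = 10^100 the probability is still far from 1.
```

This slow growth of sqrt(log log n) is the same phenomenon discussed in the picture thread above: the iterated logarithm only becomes visible at astronomically large n.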
- Now all formulas are basically correct (to the best of my knowledge), but still, two remarks. First: "...converge in distribution to a standard normal, which implies that these quantities do not converge to anything neither in probability nor almost surely" – really? Yes, in fact they do not converge, but does it follow from convergence in distribution? A sequence of random variables can converge (very easily) in distribution and also in probability (and also almost surely). Second: I never saw such notation as "↛ ∀"; is it a good idea? Boris Tsirelson (talk) 21:56, 22 December 2009 (UTC)
Expansion
I don't know the first thing about advanced math, but I do know this article's a little puny. Anyone with those sources should be able to build it up some. Ten Pound Hammer, his otters and a clue-bat • (Otters want attention) 20:50, 20 January 2011 (UTC)
Notation
Could we get a reference or link for the notation used, particularly the "-> forall" bit? Intuitively I know what it means (and I kinda like it), but more info would be nice, especially for people visiting this page who might not be familiar with that symbol. 24.220.188.43 (talk) 13:28, 15 April 2011 (UTC)
- I repeat (see several lines above): "I never saw such notation; is it a good idea?" I suspect it is someone's neologism (neo-notation :-) ). --Boris Tsirelson (talk) 15:08, 15 April 2011 (UTC)
Error in statements about Laws of Large Numbers
One passage reads:
"
There are two versions of the law of large numbers — the weak and the strong — and they both claim that the sums Sn, scaled by n^(−1), converge to zero, respectively in probability and almost surely:
"
But, these statements about the two kinds of limiting behavior of Sn/n ought to be saying that as n →∞, Sn/n → 1 (not 0).
Maybe someone knowledgeable on the subject can fix this. Daqu (talk) 16:34, 11 February 2013 (UTC)
- Sorry, I did not understand why you believe they tend to 1. They tend to the expectation of the Yi, which is assumed to be 0. Boris Tsirelson (talk) 17:10, 11 February 2013 (UTC)
- Of course you are right. The reason I thought they tend to 1 was because my brain wasn't working right. Of course they should tend to the expectation of any of the i.i.d. Yi's. My apologies for my stupid error. Daqu (talk) 17:51, 11 February 2013 (UTC)
Another completely erroneous statement
At the end of the Discussion section, this statement appears:
"Thus, although the quantity |Sn|/sqrt(n log log n) is less than any predefined ε > 0 with probability approaching one, that quantity will nevertheless be dropping out of that interval infinitely often, and in fact will be visiting the neighborhoods of any point in the interval (0,√2) almost surely."
1) Clearly the first part of this sentence (before the comma) cannot be true, since for any positive integer n ≥ 2, the quantity in question is positive. It appears that some condition on the index n is omitted here.
2) The part after the comma makes no sense at all, because "that quantity will nevertheless be dropping out of that interval " is meaningless gobbledygook. I have no idea which interval is meant here, and dropping out of an interval makes no mathematical sense whatsoever.
3) Finally, "will be dropping out" uses the future progressive tense -- a completely inappropriate tense for a mathematical statement.
Maybe someone who is knowledgeable on the subject and who can explain things clearly can fix this. Daqu (talk) 17:01, 11 February 2013 (UTC)
- Well, some condition on the index n is indeed omitted there, but is clear from the context (see the formula): n tends to infinity, of course. Accordingly, "will be dropping out" means "as n tends to infinity" (which is treated as running in time, which is not rigorous but rather usual). "Which interval?" The interval according to the inequality mentioned before. Boris Tsirelson (talk) 17:17, 11 February 2013 (UTC)
- Okay -- fine, I'm glad that whoever wrote this had some sensible meaning in mind.
- But because this is intended for readers who are not necessarily familiar with the material in advance, it is much better to spell things out exactly, and not to expect people to know what the writer was thinking. There is no place for untrue statements -- regardless of which true statement was actually "meant".
- If quantifiers are needed for a statement to be literally true, then they certainly need to be there. Daqu (talk) 20:43, 12 February 2013 (UTC)
- Maybe. Really, this is a universal problem: a formulation is either formal (rigorous) or intuitive. Probably the best thing is to give both. Boris Tsirelson (talk) 06:50, 13 February 2013 (UTC)
limsup=\infty implies no convergence in probability?
In the text of the article, there's a claim that, given that the limsup and liminf equal \infty and -\infty a.s., the sequence of R.V.'s can't converge in probability or a.s.:
...\limsup_n \frac{S_n}{\sqrt{n}}=\infty with probability 1. An identical argument shows that \liminf_n \frac{S_n}{\sqrt{n}}=-\infty with probability 1 as well. This implies that these quantities converge neither in probability nor almost surely:
\frac{S_n}{\sqrt n} \ \stackrel{p}{\nrightarrow}\ \forall, \qquad \frac{S_n}{\sqrt n} \ \stackrel{a.s.}{\nrightarrow}\ \forall, \qquad \text{as}\ \ n\to\infty.
I can see how the argument for no convergence a.s. goes. However, I disagree that these two facts about limsup/liminf imply no convergence in probability. Consider a sequence of R.V.'s Z_n such that Z_n = 0 with probability 1-\frac{1}{n} and Z_n = n*(-1)^n with probability \frac{1}{n}. The liminf and limsup are -\infty and +\infty with probability 1, yet Z_n \to 0 in probability.
Note: I do not claim that there exists a RV such that the sequence in the article converges to it in probability, simply that there seems to be a very large leap in reasoning here that is easy to provide a counterexample to. Can someone clarify the argument or provide a reference specific to this part of the article?
P.S. Apologies for the formatting - I can't figure out how to make it work like TEX.
165.124.129.146 (talk) 19:39, 26 January 2015 (UTC) Sergey
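The counterexample above is easy to sanity-check by simulation (an illustrative sketch of my own; the variable names are mine):

```python
import numpy as np

# Independent Z_n with P(Z_n = n*(-1)^n) = 1/n and P(Z_n = 0) = 1 - 1/n.
# Then P(|Z_n| > eps) = 1/n -> 0, so Z_n -> 0 in probability, yet by
# Borel-Cantelli (sum 1/n diverges) Z_n != 0 infinitely often a.s.,
# giving limsup = +inf and liminf = -inf.
rng = np.random.default_rng(42)
N = 100_000
n = np.arange(1, N + 1)
hit = rng.random(N) < 1.0 / n                  # the event {Z_n != 0}
z = np.where(hit, n * (-1.0) ** n, 0.0)

print("nonzero fraction for n in (N/2, N]:", hit[N // 2:].mean())  # tiny
print("largest |Z_n| observed:", np.abs(z).max())                  # large
```

The nonzero fraction in the upper half is tiny (convergence in probability), while the largest |Z_n| keeps growing along the sequence, illustrating the unbounded limsup/liminf.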
- Oh yes, you are right, thank you. Now fixed. Boris Tsirelson (talk) 19:59, 26 January 2015 (UTC)
figure caption
The caption says the n^(−1/2) variance is given by the CLT, but this is the exact variance. Also, the caption refers to bounds given by the LLN, but I usually think of the CLT as giving this scaling factor. — Preceding unsigned comment added by Snarfblaat (talk • contribs) 20:05, 4 May 2015 (UTC)
- Yes, you are right, thank you. Now fixed. In addition: that is (plus or minus) the square root of the variance. Boris Tsirelson (talk) 05:32, 5 May 2015 (UTC)
Kolmogorov's zero–one law
Kolmogorov's zero–one law is invoked to assert that for any fixed M, the probability that the event {lim sup_n Sn/√n ≥ M} occurs is 0 or 1. However, (Sn) is not a sequence of i.i.d. random variables, so the hypotheses of Kolmogorov's zero–one law are not satisfied (or, if one wants to consider the sequence (Xn) of the i.i.d. summands, then the considered event is not a tail event). I think one should rather invoke the Hewitt–Savage zero–one law. — Preceding unsigned comment added by Lderafe (talk • contribs) 23:02, 13 October 2016 (UTC)
Why "log" instead of "ln"?
Is there any good reason to use in this article the symbol "log" (which is really a binary function log_b(x), but [mis]used by different people to denote log_10(x), log_2(x), or ln(x), depending on their preferences) instead of the unambiguous "ln"? — Mikhail Ryazanov (talk) 03:10, 27 December 2017 (UTC)
- "log" seems to be essentially standard in the literature. IAmAnEditor (talk) 14:10, 1 January 2018 (UTC)
- Yes; and "ln" is more usual in Russia. Boris Tsirelson (talk) 15:46, 1 January 2018 (UTC)
- Nevertheless, the article Natural logarithm uses "ln" throughout. And regarding "log" being "standard in the literature": in computer science "log" usually means log_2. — Mikhail Ryazanov (talk) 20:47, 1 January 2018 (UTC)
- The literature about the Law of the iterated logarithm seems to generally use "log". For examples, see the Encyclopedia of Statistical Sciences and the International Encyclopedia of Statistical Science, as well as books such as Limit Theorems of Probability Theory by Petrov. Wikipedia should follow the literature on this. IAmAnEditor (talk) 21:14, 1 January 2018 (UTC)
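As a side note on how overloaded the bare symbol is (my own illustration, not part of the original thread): programming languages disagree as well. In Python's standard library, "log" with no base argument is the natural logarithm:

```python
import math

# math.log defaults to base e; log2 and log10 are separate functions.
print(math.log(math.e))       # natural log of e: 1.0
print(math.log(8, 2))         # explicit base, computed as log(8)/log(2)
print(math.log2(8), math.log10(100))
```

So even "what log means by default" depends on the community: base e in analysis and Python, base 2 in computer science, base 10 in older engineering tables.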
Wrong intra-picture description of last graph
The last graph (Exhibition of Limit Theorems and their interrelationship) seems to have switched the descriptions of the LLN and CLT: the upper-right picture seems to be the graph for the CLT (the description gives it as LLN), while the lower-right picture is the graph for the LLN (convergence to a constant; the description gives it as CLT). Oragonof (talk) 11:07, 17 February 2021 (UTC)
Is the Statement Correct?
I am not an expert, but my understanding is that the LIL does not always hold under finite mean and variance. Don't there have to be some conditions on other moments? Or alternatively, boundedness of the random variables? — Preceding unsigned comment added by Cihan (talk • contribs) 22:27, 13 March 2022 (UTC)
- I'm looking at the Breiman cited source right now, and his Theorem 3.52 agrees with the statement in the article, as far as I can tell. Theorem 3.52 actually has a sigma in the denominator, so variance 1 is not assumed. It is not clear to me that Breiman even assumes mean zero, but it must be in there somewhere. Doctormatt (talk) 03:01, 14 March 2022 (UTC)