Talk:Kolmogorov's zero–one law
This article is rated Start-class on Wikipedia's content assessment scale.
There is a minor sloppiness in language. The term is the limit of the series.
- That's not a series; that's a sequence! Michael Hardy 00:36, 1 Nov 2004 (UTC)
The latter converges or not; the former exists or not.
- What does it mean for a sum of random variables not to exist? I think you've misunderstood the point.
- An infinite sum of random variables does not exist if the probability that the sum does not converge (or does not exist, the wording is up to you) is positive. GaborPete (talk) 22:07, 2 April 2024 (UTC)
Don't know how to correct this without an overhead of explanation.
Btw., what are the exact requirements on for the series to converge at all? Certainly is needed, but not sufficient. I even believe the convergence of the series would be equivalent to .
- No on both counts. See e.g. the central limit theorem.
- ouch, now I see where I was wrong: in many theorems the random variables have to be identically distributed, not so here. 217.230.28.82 12:16, 31 Oct 2004 (UTC)
- The discussion is badly edited, so I don't know who is saying what and when, but bringing up the Central Limit Theorem was off track:
- The example in the article is fine, the writer is talking about the almost sure convergence of the sum, which is indeed a tail event. Almost sure convergence is what people usually mean by the convergence of a random sum, so the writing is not awful (at least right now).
- The Central Limit Theorem does NOT talk about the almost sure convergence of a sum of random variables. It is about convergence in distribution, which has to do with the article only as a negative example: Kolmogorov's 0-1 law implies that we cannot have almost sure convergence in any CLT. GaborPete (talk) 22:07, 2 April 2024 (UTC)
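To make the distinction concrete: whether a series of independent random variables converges almost surely is a tail event, so the zero-one law forces its probability to be 0 or 1. Here is a small Monte Carlo sketch of that dichotomy (the random-signs harmonic series is my own illustrative choice, not an example from the article):

```python
import random

# Random-signs harmonic series: X_n = eps_n / n with i.i.d. fair signs eps_n.
# "The series converges" is a tail event, so by Kolmogorov's 0-1 law its
# probability is 0 or 1; here it is 1 (Kolmogorov's three-series theorem,
# since sum 1/n^2 < infinity). Note this is almost sure convergence, not
# the convergence in distribution of the CLT.
rng = random.Random(0)
s = 0.0
checkpoints = []  # partial sums recorded every 5000 terms
for n in range(1, 20001):
    s += rng.choice((-1.0, 1.0)) / n
    if n % 5000 == 0:
        checkpoints.append(s)
# Along one sample path the partial sums settle down: late checkpoints
# differ only slightly, consistent with almost sure convergence.
print(checkpoints)
```

Running this with other seeds shows the same settling behavior on every path, which is exactly what "probability 1" predicts.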
How about a rigorous definition for a tail event or tail sigma algebra? —Preceding unsigned comment added by 71.207.219.120 (talk) 07:48, 23 April 2009 (UTC)
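For the record, the definition being asked for has a standard textbook formulation (this is the usual statement, not text taken from the article):

```latex
% Tail sigma-algebra of a sequence X_1, X_2, \ldots of random variables:
\mathcal{T} \;=\; \bigcap_{n=1}^{\infty} \sigma\!\left(X_n, X_{n+1}, X_{n+2}, \ldots\right)
% A tail event is any A \in \mathcal{T}: it is unaffected by changing any
% finite number of the X_i. Kolmogorov's zero-one law then states that if
% the X_n are independent, \Pr(A) \in \{0, 1\} for every A \in \mathcal{T}.
```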
Are and the same or do I miss something? Pr.elvis (talk) 14:15, 10 August 2009 (UTC)
A couple of issues I have with this article
First, I think you need to say more about the definition of a tail event, because consider:
Let be i.i.d. Bernoulli random variables on satisfying . Define as the set of all such that . Using the axiom of choice, choose a family of subsets of such that for each there exists unique such that the symmetric difference of and is finite. Now let be a subset of of cardinality whose complement is also of cardinality . Let be the set . Let be the σ-algebra on generated by the and let be the σ-algebra generated by . An arbitrary element of can be expressed in the form where . Define the probability of such an event to be . Then the event is independent of all finite subsets of the and is uniquely determined by the 'tail' for any , but has probability 1/2.
The reason this doesn't contradict the theorem is that the theorem should only be stated for events belonging to .
Another issue is that you define a tail event as being one that is independent of all finite subsets of the , but this itself follows from the hypothesis that the are independent. In fact, under the article's definition of 'tail event', surely we don't even need to assume that the are independent for the result to be true (provided it's only stated for events in )? —Preceding unsigned comment added by 92.15.133.253 (talk) 10:38, 6 August 2010 (UTC)
More information needed
This article lacks a proof. 240F:7C:FC1A:1:6505:ECD6:F741:7B7E (talk) 01:06, 20 December 2014 (UTC)
Unclear sentence
An invertible measure-preserving transformation on a standard probability space that obeys the 0-1 law is called a Kolmogorov automorphism.
What "obeys" the 0-1 law? The space? The transformation? Neither makes sense. Indeed, it's hard to see how a thing can "obey" the 0-1 law since the 0-1 law is not formulated as a property. 76.118.180.76 (talk) 02:00, 15 December 2015 (UTC)
- X_n(ω) = X_{n-1}(T(ω))
- The sequence and the transformation are one and the same. Izmirlig (talk) 04:24, 7 August 2024 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on Kolmogorov's zero–one law. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20050115091525/http://kolmogorov.com/ to http://www.kolmogorov.com/
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 01:37, 12 December 2017 (UTC)
some more counterexamples
Firstly, the chosen example is more technical than illuminating. Second, and most importantly, the concluding counterexample needs more discussion.
"Without independence we can consider a sequence that's either (0,0,0,...) or (1,1,1,...) with probability 1/2 each. In this case the sum converges with probability 1/2."
The sequence of partial sums is a dependent sequence, and it converges only on the event {X_n} = (0,0,0,...), which has probability 1/2, so clearly the independence hypothesis of Kolmogorov's 0/1 law does not hold and its conclusion does not apply. However, the sequence of partial sums is always dependent, even when the increments are independent. What matters here is the mixture of deterministic sequences used to construct the X_n: it creates the two events of the tail field on which the limit of the partial sums is 0 and infinity, respectively, each with probability 1/2. As a counterexample it checks all the boxes, but it's a bit contrived. Now that we understand this counterexample, we can use it to make some more.
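The quoted counterexample is easy to check numerically; here is a minimal sketch (variable names and simulation parameters are my own):

```python
import random

# The dependent counterexample from the discussion above: one fair coin flip
# decides whether the whole sequence is (0,0,0,...) or (1,1,1,...).
rng = random.Random(42)
trials = 10000
converged = 0
for _ in range(trials):
    x = 0 if rng.random() < 0.5 else 1  # X_n = x for every n
    # The partial sums S_n = n * x are bounded (hence convergent) iff x == 0.
    if x == 0:
        converged += 1
frac = converged / trials
print(frac)  # empirically near 1/2 -- impossible for a tail event of an independent sequence
```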
1. The above sequence {X_n} is itself dependent; it converges with probability 1 to a random variable X taking the values 0 and 1 with probability 1/2 each, and its tail field is identical to the above, with two atoms of probability 1/2 each: the two events {lim X_n = 0} and {lim X_n = 1} are tail events, each with probability 1/2.
2. Suppose the sequence {X_n} is i.i.d. and non-negative, bounded below by a positive constant with probability 1/2 and equal to 0 with probability 1/2. The sequence of partial sums violates the independence assumption, and its tail field has two atoms of probability 1/2 each.
3. Pólya's urn: an infinite exchangeable sequence without a trivial tail field. E.g., start with an urn containing R red balls and G green balls. Draw a ball, replace it, and add D more balls of the same color. Let X_n = 1 if the n-th draw is green and 0 otherwise. {X_n} is a dependent sequence. It is called an exchangeable sequence because the joint probability of any collection of its elements is invariant under permutations of the indices. That is to say, for example, that P(X_99=1, X_163=0, X_164=1) = P(X_163=1, X_164=0, X_99=1).
You can use this principle to derive the limiting density of the empirical mean S_n/n. It's a really fun constructive calculation. Try it. Write
n P{ x - 1/(2n) < S_n/n < x + 1/(2n) }
= n P{ nx green and n(1-x) red among the first n draws, in whichever order }
= n C(n, nx) P{ nx green draws followed by n(1-x) red draws }   (by exchangeability)
= a ratio of factorials
~ Stirling's formula applied twice on top and once on bottom
--> beta density
The sequence of empirical means S_n/n converges with probability 1 to a Beta(G/D, R/D) random variable. The sequence of empirical means is dependent, and its tail field is non-trivial: it contains the sigma field generated by a continuous random variable.
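A quick simulation supports the Beta(G/D, R/D) limit. The function and parameter names below are mine, and R = G = D = 1 is chosen so that the limit law Beta(1, 1) is uniform on [0, 1]:

```python
import random

def polya_green_fraction(R, G, D, n_draws, rng):
    """Run one Polya urn (R red, G green to start; each drawn ball is
    replaced along with D extra balls of its color) and return the
    fraction of the first n_draws draws that were green."""
    red, green = R, G
    green_draws = 0
    for _ in range(n_draws):
        if rng.random() < green / (green + red):
            green_draws += 1
            green += D
        else:
            red += D
    return green_draws / n_draws

rng = random.Random(7)
# With R = G = D = 1 the limiting law is Beta(1, 1), i.e. uniform on [0, 1]:
# across independent urns the long-run green fractions should spread over
# the whole interval, with mean about 1/2 -- a non-trivial tail field.
fracs = [polya_green_fraction(1, 1, 1, 500, rng) for _ in range(1000)]
print(sum(fracs) / len(fracs))
```

A histogram of `fracs` looks flat, in contrast to an i.i.d. sequence, where the empirical mean would concentrate at a single deterministic value.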
I think that this expanded discussion should generate some good thinking.