In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger.[1]
It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have
{\displaystyle \kappa (X_{1},\dots ,X_{n})=\sum _{\pi }\kappa (\kappa (X_{i}:i\in B\mid Y):B\in \pi ),}
where
κ(X1, ..., Xn) is the joint cumulant of the n random variables X1, ..., Xn, and
the sum is over all partitions π of the set {1, ..., n} of indices, and
"B ∈ π ;" means B runs through the whole list of "blocks" of the partition π , and
κ(Xi : i ∈ B | Y) is a conditional cumulant given the value of the random variable Y. It is therefore a random variable in its own right, a function of the random variable Y.
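Because the sum runs over all set partitions, any concrete use of the law begins with enumerating partitions. Below is a minimal Python sketch (the helper name set_partitions is illustrative, not from any standard library) that generates every partition of {1, ..., n} recursively; the number of terms grows as the Bell numbers, giving 2 terms for n = 2, 5 for n = 3, and 15 for n = 4.

def set_partitions(elements):
    """Yield every partition of `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # Insert the first element into each existing block in turn...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ...or give it a singleton block of its own.
        yield [[first]] + partition

for n in (2, 3, 4):
    print(n, sum(1 for _ in set_partitions(list(range(1, n + 1)))))
# Expected counts: 2, 5, 15 (the Bell numbers B_2, B_3, B_4).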
The special case of just one random variable and n = 2 or 3
Only when n = 2 or 3 is the nth cumulant the same as the nth central moment. The case n = 2 is well known (see the law of total variance). Below is the case n = 3. The notation μ3 denotes the third central moment.
{\displaystyle \mu _{3}(X)=\operatorname {E} (\mu _{3}(X\mid Y))+\mu _{3}(\operatorname {E} (X\mid Y))+3\operatorname {cov} (\operatorname {E} (X\mid Y),\operatorname {var} (X\mid Y)).}
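As a sanity check, the identity can be verified numerically. The sketch below is a minimal illustration, assuming Y is a fair Bernoulli variable and that X given Y is Gamma-distributed (any conditional family with nonzero skewness would do); it estimates the left side by simulation and computes the three right-hand terms exactly from the Gamma formulas E = kθ, var = kθ², μ3 = 2kθ³.

import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Given Y = y (a fair coin flip), let X ~ Gamma(shape=k[y], scale=s[y]).
p = np.array([0.5, 0.5])
k = np.array([2.0, 5.0])
s = np.array([1.0, 0.5])

y = rng.integers(0, 2, size=n)
x = rng.gamma(shape=k[y], scale=s[y])

lhs = np.mean((x - x.mean()) ** 3)      # empirical third central moment of X

m = k * s                               # E(X | Y)
v = k * s ** 2                          # var(X | Y)
mu3 = 2 * k * s ** 3                    # mu_3(X | Y)
mbar, vbar = p @ m, p @ v
rhs = p @ mu3 + p @ (m - mbar) ** 3 + 3 * (p @ ((m - mbar) * (v - vbar)))

print(lhs, rhs)                         # the two values should nearly agree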
General 4th-order joint cumulants
For general 4th-order cumulants, the rule gives a sum of 15 terms, as follows:
{\displaystyle {\begin{aligned}&\kappa (X_{1},X_{2},X_{3},X_{4})\\[5pt]={}&\kappa (\kappa (X_{1},X_{2},X_{3},X_{4}\mid Y))\\[5pt]&\left.{\begin{matrix}&{}+\kappa (\kappa (X_{1},X_{2},X_{3}\mid Y),\kappa (X_{4}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{1},X_{2},X_{4}\mid Y),\kappa (X_{3}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{1},X_{3},X_{4}\mid Y),\kappa (X_{2}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{2},X_{3},X_{4}\mid Y),\kappa (X_{1}\mid Y))\end{matrix}}\right\}({\text{partitions of the }}3+1{\text{ form}})\\[5pt]&\left.{\begin{matrix}&{}+\kappa (\kappa (X_{1},X_{2}\mid Y),\kappa (X_{3},X_{4}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{1},X_{3}\mid Y),\kappa (X_{2},X_{4}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{1},X_{4}\mid Y),\kappa (X_{2},X_{3}\mid Y))\end{matrix}}\right\}({\text{partitions of the }}2+2{\text{ form}})\\[5pt]&\left.{\begin{matrix}&{}+\kappa (\kappa (X_{1},X_{2}\mid Y),\kappa (X_{3}\mid Y),\kappa (X_{4}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{1},X_{3}\mid Y),\kappa (X_{2}\mid Y),\kappa (X_{4}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{1},X_{4}\mid Y),\kappa (X_{2}\mid Y),\kappa (X_{3}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{2},X_{3}\mid Y),\kappa (X_{1}\mid Y),\kappa (X_{4}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{2},X_{4}\mid Y),\kappa (X_{1}\mid Y),\kappa (X_{3}\mid Y))\\[5pt]&{}+\kappa (\kappa (X_{3},X_{4}\mid Y),\kappa (X_{1}\mid Y),\kappa (X_{2}\mid Y))\end{matrix}}\right\}({\text{partitions of the }}2+1+1{\text{ form}})\\[5pt]&{\begin{matrix}{}+\kappa (\kappa (X_{1}\mid Y),\kappa (X_{2}\mid Y),\kappa (X_{3}\mid Y),\kappa (X_{4}\mid Y)).\end{matrix}}\end{aligned}}}
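The grouping 1 + 4 + 3 + 6 + 1 = 15 mirrors the block-size shapes of the partitions of a four-element set, which can be tallied directly; here is a short self-contained check in Python (the enumerator is the same illustrative helper sketched earlier).

from collections import Counter

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition

# Tally the partitions of {1, 2, 3, 4} by their block-size shape.
shapes = Counter(
    tuple(sorted((len(block) for block in part), reverse=True))
    for part in set_partitions([1, 2, 3, 4])
)
for shape, count in sorted(shapes.items()):
    print(shape, count)
# (1, 1, 1, 1) 1; (2, 1, 1) 6; (2, 2) 3; (3, 1) 4; (4,) 1; total 15 terms.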
Cumulants of compound Poisson random variables
Suppose Y has a Poisson distribution with expected value λ, and X is the sum of Y copies of W that are independent of each other and of Y:
{\displaystyle X=\sum _{y=1}^{Y}W_{y}.}
All of the cumulants of the Poisson distribution are equal to each other, and so in this case all equal λ. Also recall that if the random variables W1, ..., Wm are independent, then the nth cumulant is additive:
{\displaystyle \kappa _{n}(W_{1}+\cdots +W_{m})=\kappa _{n}(W_{1})+\cdots +\kappa _{n}(W_{m}).}
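Since the third cumulant equals the third central moment, additivity is easy to illustrate numerically; the following is a minimal sketch with arbitrary distribution choices.

import numpy as np

rng = np.random.default_rng(1)
a = rng.exponential(2.0, size=1_000_000)   # kappa_3 = 2 * 2**3 = 16
b = rng.gamma(3.0, 1.5, size=1_000_000)    # kappa_3 = 2 * 3 * 1.5**3 = 20.25

def k3(z):
    """Third cumulant, estimated as the third central moment."""
    return np.mean((z - z.mean()) ** 3)

print(k3(a + b), k3(a) + k3(b))  # both near 36.25, since a and b are independent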
We will find the 4th cumulant of X. Given Y, X is a sum of Y independent copies of W, so by additivity κn(X | Y) = Yκn(W). We have:
{\displaystyle {\begin{aligned}\kappa _{4}(X)={}&\kappa (X,X,X,X)\\[8pt]={}&\kappa _{1}(\kappa _{4}(X\mid Y))+4\kappa (\kappa _{3}(X\mid Y),\kappa _{1}(X\mid Y))+3\kappa _{2}(\kappa _{2}(X\mid Y))\\&{}+6\kappa (\kappa _{2}(X\mid Y),\kappa _{1}(X\mid Y),\kappa _{1}(X\mid Y))+\kappa _{4}(\kappa _{1}(X\mid Y))\\[8pt]={}&\kappa _{1}(Y\kappa _{4}(W))+4\kappa (Y\kappa _{3}(W),Y\kappa _{1}(W))+3\kappa _{2}(Y\kappa _{2}(W))\\&{}+6\kappa (Y\kappa _{2}(W),Y\kappa _{1}(W),Y\kappa _{1}(W))+\kappa _{4}(Y\kappa _{1}(W))\\[8pt]={}&\kappa _{4}(W)\kappa _{1}(Y)+4\kappa _{3}(W)\kappa _{1}(W)\kappa _{2}(Y)+3\kappa _{2}(W)^{2}\kappa _{2}(Y)\\&{}+6\kappa _{2}(W)\kappa _{1}(W)^{2}\kappa _{3}(Y)+\kappa _{1}(W)^{4}\kappa _{4}(Y)\\[8pt]={}&\kappa _{4}(W)\lambda +4\kappa _{3}(W)\kappa _{1}(W)\lambda +3\kappa _{2}(W)^{2}\lambda +6\kappa _{2}(W)\kappa _{1}(W)^{2}\lambda +\kappa _{1}(W)^{4}\lambda \\[8pt]={}&\lambda \operatorname {E} (W^{4})\qquad {\text{(the punch line; see the explanation below).}}\end{aligned}}}
We recognize the next-to-last sum as the sum, over all partitions of the set {1, 2, 3, 4}, of the product over all blocks of the partition of cumulants of W of order equal to the size of the block. That is precisely the 4th raw moment of W (see cumulant for a more leisurely discussion of this fact). Hence the cumulants of X are the moments of W multiplied by λ.
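A short simulation makes the punch line concrete. The sketch below assumes, purely for illustration, that W is Exponential(1), so that E(W⁴) = 4! = 24, and estimates the fourth cumulant of X from central moments via κ4 = μ4 − 3μ2².

import numpy as np

rng = np.random.default_rng(2)
lam, n = 3.0, 1_000_000

y = rng.poisson(lam, size=n)                    # Y ~ Poisson(lam)
w = rng.exponential(1.0, size=y.sum())          # all the W summands at once
owner = np.repeat(np.arange(n), y)              # which X each W belongs to
x = np.bincount(owner, weights=w, minlength=n)  # X = sum of Y copies of W

xc = x - x.mean()
m2 = np.mean(xc ** 2)
m4 = np.mean(xc ** 4)
k4 = m4 - 3 * m2 ** 2            # 4th cumulant from central moments
print(k4, lam * 24.0)            # kappa_4(X) should be near lam * E(W^4)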
In this way we see that every moment sequence is also a cumulant sequence. The converse is not true, since cumulants of even order ≥ 4 are in some cases negative, and also because the cumulant sequence of the normal distribution is not the moment sequence of any probability distribution.
Conditioning on a Bernoulli random variable
Suppose Y = 1 with probability p and Y = 0 with probability q = 1 − p. Suppose the conditional probability distribution of X given Y is F if Y = 1 and G if Y = 0. Then we have
{\displaystyle \kappa _{n}(X)=p\kappa _{n}(F)+q\kappa _{n}(G)+\sum _{\pi <{\widehat {1}}}\kappa _{\left|\pi \right|}(Y)\prod _{B\in \pi }(\kappa _{\left|B\right|}(F)-\kappa _{\left|B\right|}(G))}
where {\displaystyle \pi <{\widehat {1}}} means π is a partition of the set {1, ..., n} that is strictly finer than the coarsest partition; the sum is over all partitions except that one. For example, if n = 3, then we have
{\displaystyle \kappa _{3}(X)=p\kappa _{3}(F)+q\kappa _{3}(G)+3pq(\kappa _{2}(F)-\kappa _{2}(G))(\kappa _{1}(F)-\kappa _{1}(G))+pq(q-p)(\kappa _{1}(F)-\kappa _{1}(G))^{3}.}
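For comparison, taking n = 2 leaves only the partition into two singletons, for which |π| = 2 and κ2(Y) = pq, so the formula reduces to the familiar mixture form of the law of total variance:
{\displaystyle \kappa _{2}(X)=p\kappa _{2}(F)+q\kappa _{2}(G)+pq(\kappa _{1}(F)-\kappa _{1}(G))^{2}.}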
^ David Brillinger, "The calculation of cumulants via conditioning", Annals of the Institute of Statistical Mathematics, Vol. 21 (1969), pp. 215–218.