Ball covariance is a statistical measure that can be used to test the independence of two random variables defined on metric spaces.[1] The ball covariance is zero if and only if the two random variables are independent, making it an attractive measure of dependence. Its main contribution is to provide an alternative measure of independence in metric spaces: previously, distance covariance in metric spaces[2] could characterize independence only for metrics of strong negative type, whereas ball covariance can determine independence for any metric.

Ball covariance uses a permutation test to compute the p-value. The ball covariance of the two samples is computed first, and this observed value is then compared with the values obtained by recomputing the statistic over many random permutations of one of the samples.
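The permutation procedure can be sketched as follows. This is a minimal illustration, assuming real-valued samples, constant weights, and the indicator form of the sample statistic; the function names are chosen for this example, not taken from any library:

```python
import numpy as np

def ball_cov(x, y):
    """Sample ball covariance of two real-valued samples,
    with constant weights and absolute-difference distances."""
    dx = np.abs(x[:, None] - x[None, :])  # pairwise distances in X
    dy = np.abs(y[:, None] - y[None, :])  # pairwise distances in Y
    # in_x[i, j, k] = 1 iff X_k lies in the closed ball B(X_i, |X_i - X_j|)
    in_x = dx[:, None, :] <= dx[:, :, None]
    in_y = dy[:, None, :] <= dy[:, :, None]
    d_xy = (in_x & in_y).mean(axis=2)
    return ((d_xy - in_x.mean(axis=2) * in_y.mean(axis=2)) ** 2).mean()

def permutation_pvalue(x, y, n_perm=199, seed=0):
    """Permutation p-value: shuffling y breaks any dependence
    while preserving both marginal distributions, so the shuffled
    statistics approximate the null distribution."""
    rng = np.random.default_rng(seed)
    observed = ball_cov(x, y)
    exceed = sum(
        ball_cov(x, rng.permutation(y)) >= observed
        for _ in range(n_perm)
    )
    # add-one correction keeps the p-value strictly positive
    return (1 + exceed) / (1 + n_perm)
```

With strongly dependent samples the observed statistic typically exceeds every permutation value, giving the smallest attainable p-value of 1/(1 + n_perm).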

Background

Correlation, as a fundamental measure of dependence in statistics, has been extensively developed in Hilbert spaces, exemplified by the Pearson correlation coefficient,[3] the Spearman correlation coefficient,[4] and Hoeffding's dependence measure.[5] In recent years, however, many fields have required measuring the dependence or independence of complex objects, such as in medical imaging, computational biology, and computer vision. Examples of complex objects include Grassmann manifolds, planar shapes, tree-structured data, matrix Lie groups, deformation fields, symmetric positive definite (SPD) matrices, and shape representations of cortical and subcortical structures. These objects mostly live in non-Hilbert spaces and are inherently nonlinear and high-dimensional (or even infinite-dimensional). Traditional statistical techniques, developed for Hilbert spaces, may not be directly applicable to them, so analyzing objects that reside in non-Hilbert spaces poses significant mathematical and computational challenges.

A groundbreaking earlier work on independence testing in metric spaces is the distance covariance in metric spaces proposed by Lyons (2013).[2] That statistic equals zero if and only if the random variables are independent, provided the metric spaces are of strong negative type. Testing the independence of random variables in spaces that do not satisfy the strong negative type condition, however, required new tools.

Definition

Ball covariance

We first introduce ball covariance in detail, starting with the definition of a ball. Let $(\mathcal{X}, \|\cdot\|_{\mathcal{X}})$ and $(\mathcal{Y}, \|\cdot\|_{\mathcal{Y}})$ be two Banach spaces, where the norms $\|\cdot\|_{\mathcal{X}}$ and $\|\cdot\|_{\mathcal{Y}}$ also denote their induced distances. Let $\theta$ be a Borel probability measure on $\mathcal{X} \times \mathcal{Y}$, let $\mu$ and $\nu$ be two Borel probability measures on $\mathcal{X}$ and $\mathcal{Y}$, and let $(X, Y)$ be a $(\mathcal{X} \times \mathcal{Y})$-valued random variable defined on a probability space such that $(X, Y) \sim \theta$, $X \sim \mu$, and $Y \sim \nu$. Denote the closed ball with center $x_1$ and radius $\|x_1 - x_2\|_{\mathcal{X}}$ in $\mathcal{X}$ by $\bar{B}(x_1, \|x_1 - x_2\|_{\mathcal{X}})$ or $\bar{B}(x_1, x_2)$, and the closed ball with center $y_1$ and radius $\|y_1 - y_2\|_{\mathcal{Y}}$ in $\mathcal{Y}$ by $\bar{B}(y_1, \|y_1 - y_2\|_{\mathcal{Y}})$ or $\bar{B}(y_1, y_2)$. Let $\{(X_i, Y_i)\}_{i=1}^{\infty}$ be an infinite sequence of iid samples of $(X, Y)$, and let $\omega_1$ and $\omega_2$ be positive weight functions on the support sets of $\mu$ and $\nu$, respectively. Then the population ball covariance can be defined as follows:

$$\mathbf{BCov}_\omega^2(X, Y) = \mathbb{E}\left[\left(\theta - \mu \otimes \nu\right)^2\!\left(\bar{B}(X_1, X_2) \times \bar{B}(Y_1, Y_2)\right) \omega_1(X_1, X_2)\, \omega_2(Y_1, Y_2)\right],$$

where $(\theta - \mu \otimes \nu)(A \times B) = \theta(A \times B) - \mu(A)\,\nu(B)$ for Borel sets $A \subseteq \mathcal{X}$ and $B \subseteq \mathcal{Y}$.

Next, we introduce another form of the population ball covariance. Suppose $\delta^{X}_{ij,k} = I\left(X_k \in \bar{B}(X_i, X_j)\right)$, which indicates whether $X_k$ is located in the closed ball $\bar{B}(X_i, X_j)$, and define $\delta^{Y}_{ij,k} = I\left(Y_k \in \bar{B}(Y_i, Y_j)\right)$ analogously. Then let $\delta^{XY}_{ij,k} = \delta^{X}_{ij,k}\, \delta^{Y}_{ij,k}$, which indicates whether both $X_k$ is located in $\bar{B}(X_i, X_j)$ and $Y_k$ is located in $\bar{B}(Y_i, Y_j)$. Let $(X_1, Y_1), (X_2, Y_2), (X_3, Y_3)$ be iid samples from $\theta$. Another form of the population ball covariance can be shown as

$$\mathbf{BCov}_\omega^2(X, Y) = \mathbb{E}\left[\left(\mathbb{E}\left[\delta^{XY}_{12,3} \mid X_1, Y_1, X_2, Y_2\right] - \mathbb{E}\left[\delta^{X}_{12,3} \mid X_1, X_2\right] \mathbb{E}\left[\delta^{Y}_{12,3} \mid Y_1, Y_2\right]\right)^2 \omega_1(X_1, X_2)\, \omega_2(Y_1, Y_2)\right].$$

Now we can finally express the sample ball covariance. Consider the random sample $\{(X_i, Y_i)\}_{i=1}^{n}$. Let $\hat{\omega}_1$ and $\hat{\omega}_2$ be the estimates of $\omega_1$ and $\omega_2$. Denote $\Delta^{XY}_{ij,n} = \frac{1}{n} \sum_{k=1}^{n} \delta^{XY}_{ij,k}$, $\Delta^{X}_{ij,n} = \frac{1}{n} \sum_{k=1}^{n} \delta^{X}_{ij,k}$, and $\Delta^{Y}_{ij,n} = \frac{1}{n} \sum_{k=1}^{n} \delta^{Y}_{ij,k}$; the sample ball covariance is

$$\mathbf{BCov}_{\omega,n}^2(X, Y) = \frac{1}{n^2} \sum_{i,j=1}^{n} \left(\Delta^{XY}_{ij,n} - \Delta^{X}_{ij,n}\, \Delta^{Y}_{ij,n}\right)^2 \hat{\omega}_1(X_i, X_j)\, \hat{\omega}_2(Y_i, Y_j).$$
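As a concrete illustration, the sample statistic can be computed directly for real-valued data. The following is a minimal sketch assuming constant weights ($\hat{\omega}_1 = \hat{\omega}_2 = 1$) and absolute differences as distances; the function name is chosen for this example:

```python
import numpy as np

def sample_ball_covariance(x, y):
    """Sample ball covariance of two real-valued samples with
    constant weights: the average over (i, j) of
    (Delta_xy - Delta_x * Delta_y)^2."""
    dx = np.abs(x[:, None] - x[None, :])  # pairwise distances in X
    dy = np.abs(y[:, None] - y[None, :])  # pairwise distances in Y

    # delta_x[i, j, k] = 1 iff X_k lies in the closed ball
    # centered at X_i with radius |X_i - X_j|
    delta_x = dx[:, None, :] <= dx[:, :, None]
    delta_y = dy[:, None, :] <= dy[:, :, None]

    d_x = delta_x.mean(axis=2)                # Delta^X_{ij,n}
    d_y = delta_y.mean(axis=2)                # Delta^Y_{ij,n}
    d_xy = (delta_x & delta_y).mean(axis=2)   # Delta^{XY}_{ij,n}
    return ((d_xy - d_x * d_y) ** 2).mean()
```

For samples from genuinely dependent variables the statistic is noticeably larger than for independent ones, which is what the permutation test exploits.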

Ball correlation

Just like the relationship between the Pearson correlation coefficient and covariance, we can define the ball correlation coefficient through the ball covariance. The ball correlation is defined as the square root of

$$\mathbf{BCorr}_\omega^2(X, Y) = \frac{\mathbf{BCov}_\omega^2(X, Y)}{\sqrt{\mathbf{BCov}_\omega^2(X, X)\, \mathbf{BCov}_\omega^2(Y, Y)}},$$

where $\mathbf{BCov}_\omega^2(X, X)$ and $\mathbf{BCov}_\omega^2(Y, Y)$ are the ball variances of $X$ and $Y$, assumed positive. The sample ball correlation is defined similarly,

$$\mathbf{BCorr}_{\omega,n}^2(X, Y) = \frac{\mathbf{BCov}_{\omega,n}^2(X, Y)}{\sqrt{\mathbf{BCov}_{\omega,n}^2(X, X)\, \mathbf{BCov}_{\omega,n}^2(Y, Y)}},$$

where $\mathbf{BCov}_{\omega,n}^2(X, X)$ and $\mathbf{BCov}_{\omega,n}^2(Y, Y)$ are the sample ball variances.
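Numerically, the normalization mirrors the Pearson case. A minimal sketch, assuming the squared ball covariances have already been computed by some routine; the function name and the zero-denominator convention are choices made for this example:

```python
import math

def ball_correlation(bcov2_xy, bcov2_xx, bcov2_yy):
    """Ball correlation: the square root of the squared ball
    covariance normalized by the two ball variances."""
    denom = math.sqrt(bcov2_xx * bcov2_yy)
    if denom == 0.0:
        return 0.0  # degenerate marginals; convention for this sketch
    return math.sqrt(bcov2_xy / denom)
```

By the Cauchy–Schwarz type inequality for ball covariance, the result always lies in the interval [0, 1].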

Properties

1. Independence-zero equivalence property: Let $S_X$, $S_Y$ and $S_{(X,Y)}$ denote the support sets of $\mu$, $\nu$ and $\theta$, respectively. Then $\mathbf{BCov}_\omega(X, Y) = 0$ implies that $X$ and $Y$ are independent if one of the following conditions holds:

(a) $\mathcal{X} \times \mathcal{Y}$ is a finite dimensional Banach space with $S_{(X,Y)} = S_X \times S_Y$.

(b) $\omega_1 \equiv c_1$ and $\omega_2 \equiv c_2$, where $c_1$ and $c_2$ are positive constants, $\mu$ is a discrete measure, and $\nu$ is an absolutely continuous measure with a continuous Radon–Nikodym derivative with respect to the Gaussian measure.

2. Cauchy–Schwarz type inequality: $\mathbf{BCov}_\omega^2(X, Y) \le \sqrt{\mathbf{BCov}_\omega^2(X, X)\, \mathbf{BCov}_\omega^2(Y, Y)}$, which guarantees that the ball correlation lies in $[0, 1]$.

3. Consistency: If $\hat{\omega}_{1,n}$ and $\hat{\omega}_{2,n}$ uniformly converge to $\omega_1$ and $\omega_2$ with probability one, respectively, we have $\mathbf{BCov}_{\omega,n}^2(X, Y) \xrightarrow{a.s.} \mathbf{BCov}_\omega^2(X, Y)$ and $\mathbf{BCorr}_{\omega,n}^2(X, Y) \xrightarrow{a.s.} \mathbf{BCorr}_\omega^2(X, Y)$.

4. Asymptotics: If $\hat{\omega}_{1,n}$ and $\hat{\omega}_{2,n}$ uniformly converge to $\omega_1$ and $\omega_2$ with probability one, respectively, then:

(a) under the null hypothesis of independence, we have $n\, \mathbf{BCov}_{\omega,n}^2(X, Y) \xrightarrow{d} \sum_{j=1}^{\infty} \lambda_j Z_j^2$, where the $Z_j$ are independent standard normal random variables and the $\lambda_j$ are nonnegative constants determined by the distribution of $(X, Y)$;

(b) under the alternative hypothesis, we have $n\, \mathbf{BCov}_{\omega,n}^2(X, Y) \xrightarrow{P} \infty$.

References

  1. ^ Pan, Wenliang; Wang, Xueqin; Zhang, Heping; Zhu, Hongtu; Zhu, Jin (2019-04-11). "Ball Covariance: A Generic Measure of Dependence in Banach Space". Journal of the American Statistical Association. 115 (529): 307–317. doi:10.1080/01621459.2018.1543600. ISSN 0162-1459. PMC 7720858. PMID 33299261.
  2. ^ a b Lyons, Russell (2013-09-01). "Distance covariance in metric spaces". The Annals of Probability. 41 (5). arXiv:1106.5758. doi:10.1214/12-AOP803. ISSN 0091-1798.
  3. ^ Pearson, Karl (1895-12-31). "VII. Note on regression and inheritance in the case of two parents". Proceedings of the Royal Society of London. 58 (347–352): 240–242. doi:10.1098/rspl.1895.0041. ISSN 0370-1662.
  4. ^ Spearman, C. (January 1904). "The Proof and Measurement of Association between Two Things". The American Journal of Psychology. 15 (1): 72–101. doi:10.2307/1412159. ISSN 0002-9556. JSTOR 1412159.
  5. ^ Hoeffding, Wassily (1948). "A Non-Parametric Test of Independence". The Annals of Mathematical Statistics. 19 (4): 546–557. doi:10.1214/aoms/1177730150. ISSN 0003-4851. JSTOR 2236021.