Issue
4open
Volume 5, 2022
Statistical Inference in Markov Processes and Copula Models
Article Number 20
Number of page(s) 9
Section Mathematics - Applied Mathematics
DOI https://doi.org/10.1051/fopen/2022022
Published online 21 December 2022

© V.A. González-López & V. Litvinoff Justus, Published by EDP Sciences, 2022

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Introduction

In this article, we address the notion of conditional independence and its impact on unconditional dependence. Here, unconditional dependence is represented by a copula function, and the impact of conditional independence on unconditional dependence is studied by imposing the assumption of conditional independence on the copula function. Independence allows us to carry out the usual inferential procedures, for example those obtained by computing the likelihood function, and for that reason it is a relevant notion in statistical inference. Under the independence assumption it is possible to apply the classic techniques of point estimation, hypothesis testing, and all those procedures based on the likelihood function. The concept of independence therefore occupies a prominent place in statistics and, for this reason, is widely found in the literature. Several efforts have been made to promote its identification; see for instance García and González-López [1, 2], where statistics for independence detection are proposed, and also Genest and Rémillard [3] and Hollander et al. [4]. Such statistics work under certain conditions imposed on the nature of the underlying variables. Other authors are concerned with measuring dependence; for example, García et al. [5] propose an index with that purpose. Despite its enormous use, independence is a very restrictive case within the diversity of types of dependence that can occur. For example, if we consider the random vector (X, Y) with bivariate Normal distribution and density function $f\left(x,y\right)=\frac{1}{2\pi (1-{\rho }^2)^{1/2}}\exp\left(-\frac{{x}^2+{y}^2-2\rho xy}{2\left(1-{\rho }^2\right)}\right)$, $\forall \left(x,y\right)\in {\mathbb{R}}^2$, $\rho \in (-1,1)$, then independence only occurs when ρ = 0, which shows that independence is a restrictive assumption.
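As a minimal numerical sketch of the claim above (evaluation points chosen for illustration), the bivariate standard normal density factors into the product of its standard normal margins exactly when ρ = 0, and fails to factor otherwise:

```python
import math

def f_joint(x, y, rho):
    """Bivariate standard normal density with correlation rho, |rho| < 1."""
    return math.exp(-(x*x + y*y - 2*rho*x*y) / (2*(1 - rho**2))) \
        / (2*math.pi*math.sqrt(1 - rho**2))

def f_marginal(x):
    """Standard normal density (the marginal of the bivariate normal)."""
    return math.exp(-x*x/2) / math.sqrt(2*math.pi)

x, y = 0.7, -1.2
# With rho = 0 the joint density factors exactly into the margins...
assert abs(f_joint(x, y, 0.0) - f_marginal(x)*f_marginal(y)) < 1e-12
# ...while rho != 0 breaks the factorization, i.e. X and Y are dependent.
assert abs(f_joint(x, y, 0.5) - f_marginal(x)*f_marginal(y)) > 1e-3
```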
The theory of copulas has been extensively used to investigate such diversity. Nelsen [6] and Joe [7] show the enormous diversity of possible dependencies, addressed from the notion of copulas.

This article explores conditional independence, seeking to preserve some inferential procedures, and we also seek to characterize the impact of conditional independence on dependence. Conditional independence requires the identification of the entity Θ under which we can declare conditional independence (conditional on Θ); that requirement can be a constraint in practice, since there is no clear procedure for identifying Θ. The first question we seek to answer is how conditional independence impacts unconditional dependence. For the characterization of unconditional dependence, we rely on the notion of copula. Secondly, we approach the impact of the notion of infinite exchangeability on conditional independence, based on the fact that such a notion guarantees conditional independence [8, 9]. With that notion in hand, we return to the investigation of the structure of the copula representing unconditional dependence. That is, in this paper we investigate the impact of conditional independence on unconditional dependence and extend this question to a stronger notion, namely infinite exchangeability.

Independence is a very restrictive case within the diversity of types of dependence; conditional independence (given Θ), on the other hand, is a much more flexible concept, and for this reason it is the condition that we explore in this article. In our approach, we take the entity Θ to be a random variable, since in this way we generalize the notion. In addition, we make room for the reader to access the interpretation that de Finetti offers of such an entity.

Here we describe the content of this paper. In Section 2 we address the notion of conditional independence and introduce a theorem that gives a representation of the dependence (copula). We also show corollaries addressing situations of interest that result from that theorem. In Section 3 we introduce the notion of infinite exchangeability, which allows us to identify the quantity Θ guaranteeing conditional independence. We then show a result offering a representation of the dependence (copula density function) given the property of infinite exchangeability. The final considerations are given in Section 4.

Conditional independence

We begin this section by introducing the notion of conditional independence and then exemplifying it. Next, we introduce the notion of copula function. From these notions follows our main result, Theorem 2.1, which shows the impact of conditional independence on the dependence, that is, on the copula structure. We emphasize the consequences of Theorem 2.1 in two corollaries and also show some examples.

Definition 2.1

Given two continuous random variables X and Y, X and Y are conditionally independent given Θ, where Θ is a random variable, if

$$ f_{X,Y|\theta}(s,t) = f_{X|\theta}(s)\, f_{Y|\theta}(t), \quad \forall s,t \in \overline{\mathbb{R}},\ \theta \text{ an arbitrary value of } \Theta, $$

with $f_{X,Y|\theta}$, $f_{X|\theta}$ and $f_{Y|\theta}$ denoting the conditional density functions, given θ, of (X,Y), X and Y, respectively.

The next example shows a case of conditional independence given Θ.

Example 2.1

Set the joint density function of X, Y, Θ, as

$$ f_{X,Y,\Theta}(x,y,\theta) = k\lambda\,\theta^2 e^{-\theta(x+ky+\lambda)}\, I_{(0,\infty)}(x)\, I_{(0,\infty)}(y)\, I_{(0,\infty)}(\theta), \qquad \lambda>0,\ k>0, $$

where $I_A(x) = 1$ if $x \in A$, and $I_A(x) = 0$ if $x \notin A$. The marginal distributions $f_{X,Y}$, $f_X$, $f_Y$ and $f_\Theta$ are, respectively, $f_{X,Y}(x,y)=\frac{2k\lambda}{(x+ky+\lambda)^3}I_{(0,\infty)}(x)I_{(0,\infty)}(y)$, $f_X(x)=\frac{\lambda}{(x+\lambda)^2}I_{(0,\infty)}(x)$, $f_Y(y)=\frac{k\lambda}{(ky+\lambda)^2}I_{(0,\infty)}(y)$ and $f_\Theta(\theta)=\lambda e^{-\lambda\theta}I_{(0,\infty)}(\theta)$. Moreover, $f_{X,Y|\theta}(x,y)=f_{X|\theta}(x)\times f_{Y|\theta}(y)=\theta e^{-\theta x}I_{(0,\infty)}(x)\times k\theta e^{-k\theta y}I_{(0,\infty)}(y)$; then X and Y are conditionally independent given Θ = θ, with different conditional distributions, becoming identically distributed given θ when k = 1.

From the previous example we see that conditional independence does not impose the same conditional distribution. Note that Definition 2.1 allows θ to be considered as a constant value, and in such case θ is a parameter.
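The computations in Example 2.1 can be checked numerically. The sketch below (with illustrative, assumed values of k, λ, x, y) integrates θ out of the joint density by a midpoint Riemann sum and confirms the closed-form marginal, and then verifies that the conditional density factors into two exponential densities:

```python
import math

k, lam = 2.0, 1.5          # illustrative parameter choices (k > 0, lambda > 0)
x, y = 0.8, 0.3

def f_xyt(x, y, t):
    """Joint density of (X, Y, Theta) from Example 2.1."""
    return k*lam*t*t*math.exp(-t*(x + k*y + lam))

# Marginal f_{X,Y}(x, y): integrate theta out on (0, 30) by midpoint rule.
dt = 1e-4
fxy_num = sum(f_xyt(x, y, (i+0.5)*dt)*dt for i in range(300000))
fxy_exact = 2*k*lam/(x + k*y + lam)**3
assert abs(fxy_num - fxy_exact) < 1e-6

# The conditional density given theta factors into two exponential densities.
t = 0.9
f_theta = lam*math.exp(-lam*t)
cond = f_xyt(x, y, t)/f_theta
assert abs(cond - (t*math.exp(-t*x))*(k*t*math.exp(-k*t*y))) < 1e-12
```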

The result that we show in Theorem 2.1 seeks to understand the meaning that Θ plays in the construction of the dependence between X and Y (not conditioned to Θ = θ). For dependence between X and Y we understand the copula between them. In rough terms, the copula is a function that, combined with the marginal cumulative distributions, returns the joint cumulative distribution.

Next we formally introduce the notion of copula explaining how this notion serves to represent the dependence between X and Y.

Definition 2.2

A copula (in dimension 2) is a function C: [0, 1]2 → [0, 1] with the following properties,

  1. ∀u, v ∈ [0, 1], C(u,0) = C(0,v) = 0, and C(u,1) = u, C(1,v) = v;

  2. for every $u_i, v_i \in [0, 1]$, $i = 1, 2$, such that $u_1 \le u_2$ and $v_1 \le v_2$,

$$ C(u_2,v_2) - C(u_2,v_1) - C(u_1,v_2) + C(u_1,v_1) \ge 0. $$

It is possible to show that every copula given by Definition 2.2 is a cumulative distribution function on [0, 1]2, in the strict sense of the term. We now show how Definition 2.2 is formally related to the dependence between the variables of a random vector.

According to Sklar’s theorem [6], if $H_{X,Y}$ is the joint distribution function of (X, Y) with margins $F_X$ and $F_Y$, there exists a copula $C_{X,Y}$ (called the 2-copula of (X, Y)) such that $\forall x, y \in \overline{\mathbb{R}}$,

$$ H_{X,Y}(x,y) = C_{X,Y}(F_X(x), F_Y(y)). $$

If FX and FY are continuous, then CX,Y is unique. Conversely, if C is a copula (following Definition 2.2), F and G are distribution functions, then the function H(x,y) = C(F(x),G(y)) is a joint distribution function with margins F and G.

From what is given in Sklar’s theorem, the dependence between X and Y is represented by the 2-copula of (X,Y), CX,Y, since the margins FX and FY depend on X and Y, respectively. Then, our purpose is to see the effect of Definition 2.1 in CX,Y.

The next theorem provides an analytical representation of the copula, knowing that the variables are conditionally independent given the entity Θ.

Theorem 2.1

Given X and Y under the assumptions of Definition 2.1, if $C_{X,Y}$ is the 2-copula of (X,Y) and Θ is the random variable of Definition 2.1 with density function $\pi_\Theta$, then

$$ C_{X,Y}(u,v) = \int_{-\infty}^{\infty} \frac{\partial C_{X,\Theta}(u, F_\Theta(\theta))}{\partial\theta}\, \frac{\partial C_{Y,\Theta}(v, F_\Theta(\theta))}{\partial\theta}\, \pi_\Theta(\theta)\, \mathrm{d}\theta, $$(1)

with $C_{X,\Theta}$ and $C_{Y,\Theta}$ denoting the 2-copulas of (X,Θ) and (Y,Θ), respectively, and $F_\Theta$ denoting the cumulative distribution function of Θ.

Proof. Since X, Y are continuous random variables we can write the cumulative distribution HX,Y between X and Y as follows, applying Definition 2.1,

$$ H_{X,Y}(x,y) = \int_{-\infty}^{x}\int_{-\infty}^{y}\int_{-\infty}^{\infty} f_{X|\theta}(s)\, f_{Y|\theta}(t)\, \pi_\Theta(\theta)\, \mathrm{d}\theta\, \mathrm{d}t\, \mathrm{d}s. $$(2)

Considering the cumulative distribution between X and Θ, by Sklar’s theorem, $H_{X,\Theta}(s,\theta) = C_{X,\Theta}(F_X(s), F_\Theta(\theta))$; then the joint density function of X and Θ is given by
$$ f_{X,\Theta}(s,\theta) = \frac{\partial}{\partial s}\frac{\partial}{\partial\theta} C_{X,\Theta}(F_X(s), F_\Theta(\theta))\, f_X(s)\, \pi_\Theta(\theta) = c_{X,\Theta}(F_X(s), F_\Theta(\theta))\, f_X(s)\, \pi_\Theta(\theta), $$(3)

with $c_{X,\Theta}$ denoting the density function related to the copula $C_{X,\Theta}$. Then, from equation (3), we obtain

$$ f_{X|\theta}(s) = \frac{\partial}{\partial s}\frac{\partial}{\partial\theta} C_{X,\Theta}(F_X(s), F_\Theta(\theta))\, f_X(s). $$(4)

We proceed to integrate equation (4). Define $u = F_X(s)$; then $\mathrm{d}u = f_X(s)\,\mathrm{d}s$ and

$$ \int_{-\infty}^{x} f_{X|\theta}(s)\,\mathrm{d}s = \int_{-\infty}^{x} \frac{\partial}{\partial s}\frac{\partial}{\partial\theta} C_{X,\Theta}(F_X(s), F_\Theta(\theta))\, f_X(s)\,\mathrm{d}s = \int_{0}^{F_X(x)} \frac{\partial}{\partial u}\frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta))\,\mathrm{d}u = \frac{\partial}{\partial\theta} C_{X,\Theta}(F_X(x), F_\Theta(\theta)). $$(5)

We have similar expressions for $f_{Y,\Theta}(t,\theta)$ and $f_{Y|\theta}(t)$. Then, applying equation (5) in equation (2), we obtain

$$ H_{X,Y}(x,y) = \int_{-\infty}^{\infty} \frac{\partial}{\partial\theta} C_{X,\Theta}(F_X(x), F_\Theta(\theta))\, \frac{\partial}{\partial\theta} C_{Y,\Theta}(F_Y(y), F_\Theta(\theta))\, \pi_\Theta(\theta)\,\mathrm{d}\theta. $$(6)

Given u and v in [0, 1], there exist values x and y such that $u = F_X(x)$ and $v = F_Y(y)$; then, by Sklar’s theorem and equation (6),

$$ C_{X,Y}(u,v) = C_{X,Y}(F_X(x), F_Y(y)) = H_{X,Y}(x,y) = \int_{-\infty}^{\infty} \frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta))\, \frac{\partial}{\partial\theta} C_{Y,\Theta}(v, F_\Theta(\theta))\, \pi_\Theta(\theta)\,\mathrm{d}\theta. $$

 □

In the following example we use a family of copulas that corresponds to weak dependence: the Farlie–Gumbel–Morgenstern family, which encompasses the case of independence. See Nelsen [6].

Example 2.2

Consider (X,Y) with a 2-copula

$$ C_{X,Y}(u,v) = uv + \frac{\alpha\beta}{3}\, u(1-u)\, v(1-v), $$(7)

with parameters α ∈ [−1, 1] and β ∈ [−1, 1]. Then, (X,Y) has a Farlie–Gumbel–Morgenstern 2-copula with parameter $\delta \in \left[-\frac{1}{3}, \frac{1}{3}\right]$, $\delta = \frac{\alpha\beta}{3}$. Consider Θ as a continuous random variable with Uniform distribution on (0,1).

Set the 2-copulas of (X, Θ) and (Y, Θ) to be Farlie–Gumbel–Morgenstern 2-copulas with parameters α ∈ [−1, 1] and β ∈ [−1, 1], respectively. Since Θ ~ U(0,1), we have

$$ C_{X,\Theta}(u, F_\Theta(\theta)) = u\theta + \alpha u(1-u)\,\theta(1-\theta) \quad\text{and}\quad C_{Y,\Theta}(v, F_\Theta(\theta)) = v\theta + \beta v(1-v)\,\theta(1-\theta). $$

We then obtain

$$ \frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta)) = u + \alpha u(1-u)(1-2\theta) \quad\text{and}\quad \frac{\partial}{\partial\theta} C_{Y,\Theta}(v, F_\Theta(\theta)) = v + \beta v(1-v)(1-2\theta). $$

Then,

$$ \frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta))\, \frac{\partial}{\partial\theta} C_{Y,\Theta}(v, F_\Theta(\theta)) = uv + \alpha uv(1-u)(1-2\theta) + \beta uv(1-v)(1-2\theta) + \alpha\beta\, u(1-u)\, v(1-v)(1-2\theta)^2. $$

As a consequence,

$$ \int_{0}^{1} \frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta))\, \frac{\partial}{\partial\theta} C_{Y,\Theta}(v, F_\Theta(\theta))\,\mathrm{d}\theta = C_{X,Y}(u,v). $$

One question that arises is whether every expression given by the right-hand side of equation (1), where $C_{X,\Theta}$ and $C_{Y,\Theta}$ are copulas, is itself a copula. Theorem 1 in González-López and Litvinoff Justus [10] states that the expression is always a copula. That is, postulating two copulas for (X, Θ) and (Y, Θ), we get a copula that can be used to deal with the dependence between X and Y. An instance of this statement appears in the previous example: taking (X, Θ) and (Y, Θ) with Farlie–Gumbel–Morgenstern copulas, with parameters α and β respectively, the resulting copula between X and Y (using the right-hand side of equation (1)) is a Farlie–Gumbel–Morgenstern copula with parameter $\delta = \frac{\alpha\beta}{3}$.
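The closure property described above can be checked numerically. The sketch below (with illustrative, assumed values of α, β, u, v) integrates the product of the two θ-partial derivatives of the Farlie–Gumbel–Morgenstern pair copulas over θ ∈ (0,1) and compares the result with the FGM copula with parameter αβ/3:

```python
# Numeric check (assumed parameter values) that integrating
#   [u + a*u*(1-u)*(1-2t)] * [v + b*v*(1-v)*(1-2t)]
# over t in (0,1) returns the FGM copula with parameter a*b/3.
a, b = 0.6, -0.4
u, v = 0.35, 0.8

dt = 1e-5
integral = sum((u + a*u*(1-u)*(1-2*t))*(v + b*v*(1-v)*(1-2*t))*dt
               for t in ((i+0.5)*dt for i in range(100000)))
fgm = u*v + (a*b/3)*u*(1-u)*v*(1-v)
assert abs(integral - fgm) < 1e-9
```

The odd powers of (1 − 2θ) integrate to zero and the square integrates to 1/3, which is exactly how the parameter αβ/3 arises.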

The following corollary shows the version of Theorem 2.1 in terms of density functions of copulas.

Corollary 2.1

Under the assumptions of Theorem 2.1, if $c_{X,Y}$, $c_{X,\Theta}$ and $c_{Y,\Theta}$ are the density functions related to the 2-copulas $C_{X,Y}$, $C_{X,\Theta}$ and $C_{Y,\Theta}$, respectively,

$$ c_{X,Y}(u,v) = \int_{-\infty}^{\infty} c_{X,\Theta}(u, F_\Theta(\theta))\, c_{Y,\Theta}(v, F_\Theta(\theta))\, \pi_\Theta(\theta)\,\mathrm{d}\theta. $$(8)

Proof: Consider that $c_{X,Y}(u,v) = \frac{\partial}{\partial u}\frac{\partial}{\partial v} C_{X,Y}(u,v)$, $c_{X,\Theta}(u, F_\Theta(\theta)) = \frac{\partial}{\partial u}\frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta))$ and $c_{Y,\Theta}(v, F_\Theta(\theta)) = \frac{\partial}{\partial v}\frac{\partial}{\partial\theta} C_{Y,\Theta}(v, F_\Theta(\theta))$. The result follows by differentiating both sides of equation (1) and interchanging the integral with $\frac{\partial}{\partial u}\frac{\partial}{\partial v}$. □

Example 2.3

Consider again Example 2.2. Since $\frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta)) = u + \alpha u(1-u)(1-2\theta)$, we have $\frac{\partial}{\partial u}\frac{\partial}{\partial\theta} C_{X,\Theta}(u, F_\Theta(\theta)) = 1 + \alpha(1-2\theta)(1-2u)$. Then,

$$ \int_{-\infty}^{\infty} c_{X,\Theta}(u, F_\Theta(\theta))\, c_{Y,\Theta}(v, F_\Theta(\theta))\, \pi_\Theta(\theta)\,\mathrm{d}\theta = \int_{0}^{1} \{1 + \alpha(1-2\theta)(1-2u)\}\{1 + \beta(1-2\theta)(1-2v)\}\,\mathrm{d}\theta = 1 + \frac{\alpha\beta}{3}(1-2u)(1-2v), $$(9)

and, according to equation (7), this is the copula density function of a Farlie–Gumbel–Morgenstern copula with parameter $\frac{\alpha\beta}{3}$.
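Equation (9) can also be verified numerically. The sketch below (with illustrative, assumed values of α, β, u, v) evaluates the θ-integral of the product of the two conditional copula densities by a midpoint rule and compares it against the closed form:

```python
# Numeric check of equation (9): the product of the conditional copula
# densities, integrated over theta in (0,1), gives the FGM copula density
# 1 + (a*b/3)*(1-2u)*(1-2v).  Parameter values are illustrative.
a, b = 0.5, 0.9
u, v = 0.2, 0.65

dt = 1e-5
integral = sum((1 + a*(1-2*t)*(1-2*u))*(1 + b*(1-2*t)*(1-2*v))*dt
               for t in ((i+0.5)*dt for i in range(100000)))
density = 1 + (a*b/3)*(1-2*u)*(1-2*v)
assert abs(integral - density) < 1e-9
```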

The following corollary shows the version of Theorem 2.1 under the assumption of identical conditional margins. The impact of the assumption appears in the integrand of equation (1). This assumption about margins will become relevant later in this paper.

Corollary 2.2

Under the assumptions of Theorem 2.1, if the variables X and Y are conditionally (to Θ) independent and identically distributed,

$$ C_{X,Y}(u,v) = \int_{-\infty}^{\infty} \frac{\partial C_{X,\Theta}(u, F_\Theta(\theta))}{\partial\theta}\, \frac{\partial C_{X,\Theta}(v, F_\Theta(\theta))}{\partial\theta}\, \pi_\Theta(\theta)\,\mathrm{d}\theta. $$

Example 2.4

Continuing Example 2.2, if α = β, the copula of (X,Y) is a Farlie–Gumbel–Morgenstern copula with parameter $\gamma = \frac{\alpha^2}{3} \in \left[0, \frac{1}{3}\right]$. Then, the restriction (on the margins) imposes on the relationship between X and Y a nonnegative Kendall’s τ coefficient ($\tau = \frac{2}{9}\cdot\frac{\alpha^2}{3} = \frac{2\alpha^2}{27}$).
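The sign of Kendall’s τ in Example 2.4 can be checked numerically through the standard identity $\tau = 4\int\int C(u,v)\,c(u,v)\,\mathrm{d}u\,\mathrm{d}v - 1$, using an illustrative (assumed) value of α:

```python
# Numeric check (illustrative alpha) that the FGM copula with parameter
# g = alpha**2/3 has Kendall's tau equal to 2*g/9, computed via
#   tau = 4 * E[C(U,V)] - 1  on a midpoint grid.
alpha = 0.8
g = alpha**2/3

def C(u, v):
    """FGM copula with parameter g."""
    return u*v + g*u*(1-u)*v*(1-v)

def c(u, v):
    """FGM copula density with parameter g."""
    return 1 + g*(1-2*u)*(1-2*v)

n = 400
h = 1.0/n
tau_num = 4*sum(C((i+0.5)*h, (j+0.5)*h)*c((i+0.5)*h, (j+0.5)*h)*h*h
                for i in range(n) for j in range(n)) - 1
assert abs(tau_num - 2*g/9) < 1e-3
assert tau_num > 0   # the common-margin restriction forces a nonnegative tau
```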

Theorem 2.1 shows that the conditional independence of X and Y given Θ determines the structure of the copula between X and Y. See equation (1) for the form of the copula and Corollary 2.1 for the form of the copula density function between X and Y. We exemplify both results in Examples 2.2 and 2.3, respectively. Corollary 2.2 shows the case covered by Theorem 2.1 under the assumption of identical conditional margins; the latter is exemplified in Example 2.4. Corollary 2.2 is especially interesting for the situation that we address in the next section. The results of this section describe the analytical form of the copula between X and Y, no longer depending on Θ. This means we have offered a representation for the predictive copula between X and Y.

The key to determining conditional independence between X and Y is knowing that there exists a variable Θ establishing this independence. This leads us to the following reflections, showing a context in which such an entity can be identified. That is the goal of de Finetti’s representation theorems.

Exchangeability

We start the section with the definition of infinite exchangeability. This notion is weaker than the assumption of independent and identically distributed variables, and it makes it possible to identify Θ. We then exemplify certain characteristics that result from this imposition. Finally, we present in Corollary 3.1 the impact of this notion on the dependence structure, that is, on the copula function.

Definition 3.1

A sequence $X_1, X_2, X_3, \ldots$ of random variables is infinitely exchangeable if, for each collection of n variables $X_{i_1}, X_{i_2}, \ldots, X_{i_n}$, the joint distribution is identical to that of any permuted set of variables $X_{\sigma(i_1)}, X_{\sigma(i_2)}, \ldots, X_{\sigma(i_n)}$, with σ: {1, 2,…,n} → {1, 2,…,n} any permutation function.

The success in identifying the entity Θ stems from de Finetti’s representation theorem; see de Finetti [8]. de Finetti [8] proves that a binary sequence $X_1, X_2, \ldots$ follows Definition 3.1 if, and only if, there exists a distribution function $F_\Theta$ on [0, 1] such that for all n the joint probability mass function is $P(X_1 = x_1, \ldots, X_n = x_n) = \int_0^1 \theta^{\sum_{i=1}^n x_i}(1-\theta)^{n - \sum_{i=1}^n x_i}\,\mathrm{d}F_\Theta(\theta)$, where $F_\Theta$ is the cumulative distribution function of the entity $\Theta = \lim_{n\to\infty} \sum_{i=1}^n \frac{X_i}{n}$, and $x_i \in \{0, 1\}$. As a consequence, the conditional probability mass function decomposes into a product of Bernoulli probabilities,

$$ P(X_1 = x_1, \ldots, X_n = x_n \mid \Theta = \theta) = \theta^{\sum_{i=1}^n x_i}(1-\theta)^{n - \sum_{i=1}^n x_i} = \prod_{i=1}^{n} P(X_i = x_i \mid \Theta = \theta), $$

with $P(X_i = x_i \mid \Theta = \theta) = \theta^{x_i}(1-\theta)^{1-x_i}$, where θ is a value of the variable Θ. This result is extended to all types of random variables in Hewitt and Savage [9]. For a sequence $X_1, X_2, \ldots$ of continuous random variables following Definition 3.1, there exists a variable Θ such that the conditional density function $f_{X_1,\ldots,X_n|\theta}(x_1,\ldots,x_n)$ can be decomposed as $f_{X_1,\ldots,X_n|\theta}(x_1,\ldots,x_n) = \prod_{i=1}^n f_{X_i|\theta}(x_i)$. The previous relation holds for each value θ of the variable Θ; hence the structure imposed by Definition 3.1 on the sequence $X_1, X_2, \ldots$ guarantees the existence of the entity Θ.
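A minimal numerical sketch of de Finetti’s binary representation, under the assumed choice of a uniform prior for Θ, illustrates the key point: the mixture produces an exchangeable sequence that is nevertheless unconditionally dependent, since $P(X_1=1, X_2=1) = 1/3$ while $P(X_1=1)P(X_2=1) = 1/4$:

```python
# de Finetti's binary representation with a uniform prior on Theta
# (an assumed choice): P(X1=x1,...,Xn=xn) = integral of
# theta^s * (1-theta)^(n-s) d(theta), with s the number of ones.
dt = 1e-5
def integrate(f):
    """Midpoint-rule integral of f over (0, 1)."""
    return sum(f((i+0.5)*dt)*dt for i in range(100000))

p11 = integrate(lambda t: t*t)          # P(X1=1, X2=1)
p10 = integrate(lambda t: t*(1-t))      # P(X1=1, X2=0)
p01 = integrate(lambda t: (1-t)*t)      # P(X1=0, X2=1)
p1 = integrate(lambda t: t)             # P(X1=1)

assert abs(p11 - 1/3) < 1e-9
assert abs(p10 - p01) < 1e-12           # exchangeability of the pair
assert abs(p11 - p1*p1) > 0.05          # dependence: 1/3 != 1/4
```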

We see in the remark to follow (Remark 3.1) that this notion leads us to conditional independence, and implies identical margins. But as we show in Example 3.1, exchangeability coexists with dependence.

Remark 3.1

Infinitely exchangeable variables (following Definition 3.1) are identically distributed conditionally on θ (a value of the variable Θ identified by de Finetti’s theorem). If the infinitely exchangeable sequence is composed of absolutely continuous random variables and Θ has a density function $\pi_\Theta(\cdot)$, then for each $i, j: i \ne j$, $i, j \ge 1$, $f_{X_i|\theta}(x) = f_{X_j|\theta}(x)$, $\forall x$; as a consequence, $f_{X_i}(x) = f_{X_j}(x)$, $\forall x$, since

$$ f_{X_i}(x) = \int_{-\infty}^{\infty} f_{X_i|\theta}(x)\,\pi_\Theta(\theta)\,\mathrm{d}\theta = \int_{-\infty}^{\infty} f_{X_j|\theta}(x)\,\pi_\Theta(\theta)\,\mathrm{d}\theta = f_{X_j}(x), \quad \forall x. $$

The concept of infinite exchangeability could be considered unrealistic, but it appears in common inferential strategies. If a sample is available, say $X_1, \ldots, X_n$, its elements are considered independent given a certain parameter when the likelihood function is constructed. How is this assumption related to Definition 3.1? If the distribution of $X_1, \ldots, X_n$ is invariant under permutations of the elements of $X_1, \ldots, X_n$, we say that $X_1, \ldots, X_n$ is exchangeable. If $X_1, \ldots, X_n$ is exchangeable $\forall n \in \mathbb{N}$, we say that $X_1, X_2, \ldots$ is infinitely exchangeable. If $X_1, \ldots, X_n$ can be embedded in an infinitely exchangeable sequence $X_1, X_2, \ldots$, we say that $X_1, \ldots, X_n$ is infinitely exchangeably extendable. Bayesian and frequentist approaches, in general, treat observable values as infinitely exchangeably extendable and, as a consequence of de Finetti’s representations, as independent and identically distributed conditional on some unknown entity Θ.

Although independent and identically distributed random variables must be exchangeable, identically distributed exchangeable random variables need not be independent. Consider, for instance, the bivariate Gumbel distribution, as shown in the next example.

Example 3.1

Let (X,Y) be a random vector and H its joint distribution function, given by $H(x,y) = 1 - e^{-x} - e^{-y} + e^{-(x+y+\alpha xy)}$, $x \ge 0$, $y \ge 0$, and H(x,y) = 0 otherwise, with α a parameter in [0, 1]. We see that X and Y are exchangeable. Note also that X and Y have identical marginal distributions (exponential). But, since the 2-copula of (X,Y) is $C_{X,Y}(u,v) = u + v - 1 + (1-u)(1-v)e^{-\alpha \ln(1-u)\ln(1-v)}$, $u, v \in [0, 1]$, X and Y are not independent.
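The three claims of Example 3.1 can be checked numerically. The sketch below (α and the evaluation points are illustrative assumptions) verifies the exponential margin, the symmetry of H (hence exchangeability), Sklar’s relation C(F(x), F(y)) = H(x, y), and the failure of independence:

```python
import math

# Numeric sketch of Example 3.1 (alpha chosen arbitrarily in [0, 1]).
alpha = 0.7

def H(x, y):
    """Bivariate Gumbel distribution function of Example 3.1."""
    return 1 - math.exp(-x) - math.exp(-y) + math.exp(-(x + y + alpha*x*y))

x, y = 0.4, 1.3
big = 50.0  # stands in for +infinity
assert abs(H(x, big) - (1 - math.exp(-x))) < 1e-12   # exponential margin
assert abs(H(x, y) - H(y, x)) < 1e-12                # symmetry of H

# The copula at (u, v) = (F(x), F(y)) versus the independence copula uv.
u, v = 1 - math.exp(-x), 1 - math.exp(-y)
Cuv = u + v - 1 + (1-u)*(1-v)*math.exp(-alpha*math.log(1-u)*math.log(1-v))
assert abs(Cuv - H(x, y)) < 1e-12   # Sklar: C(F(x), F(y)) = H(x, y)
assert abs(Cuv - u*v) > 1e-3        # not the independence copula
```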

The previous example shows a case where exchangeability holds and the variables share the same marginal distribution, yet X and Y are not independent. Thus exchangeability is a notion distinctly different from independence. It also follows from Example 3.1 that identically distributed variables can be dependent, since every copula is a joint distribution with uniform marginal distributions; in particular, the copula of Example 3.1.

The next example shows a case of dependent variables with identical marginal distributions that are not exchangeable.

Example 3.2

Let (X,Y) be a random vector and H its joint distribution function, given by $H(x,y) = xy - x^3 y(1-x)(1-y)$, $x, y \in [0, 1]$. Then H is a copula, since it satisfies the conditions of Definition 2.2. X and Y are identically distributed but not exchangeable, since H is not symmetric.
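A small numerical sketch supports Example 3.2: the function satisfies the boundary conditions of Definition 2.2, its density $\partial^2 H/\partial x\,\partial y = 1 - (3x^2 - 4x^3)(1-2y)$ is nonnegative on a grid (so H is 2-increasing), and H is not symmetric. Evaluation points are illustrative:

```python
# Numeric sketch of Example 3.2: H(x, y) = x*y - x**3*y*(1-x)*(1-y).
def H(x, y):
    return x*y - x**3*y*(1-x)*(1-y)

def h(x, y):
    """Density d^2 H / dx dy = 1 - (3x^2 - 4x^3)*(1 - 2y)."""
    return 1 - (3*x**2 - 4*x**3)*(1 - 2*y)

assert H(0.3, 0.0) == 0.0 and H(0.0, 0.8) == 0.0    # grounded on the axes
assert abs(H(0.3, 1.0) - 0.3) < 1e-15               # uniform margins
assert abs(H(1.0, 0.8) - 0.8) < 1e-15
n = 200
assert all(h((i+0.5)/n, (j+0.5)/n) >= 0
           for i in range(n) for j in range(n))     # nonnegative density
assert abs(H(0.2, 0.7) - H(0.7, 0.2)) > 1e-3        # H is not symmetric
```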

Bearing in mind the previous example, it is worth mentioning Theorem 2.7.4 of [6], which characterizes the margins and the copula that guarantee exchangeability between a pair of variables. Let X and Y be continuous random variables with joint distribution function H, margins $F_X$ and $F_Y$, respectively, and copula C. Then X and Y are exchangeable if and only if $F_X = F_Y$ and C is symmetric (C(u,v) = C(v,u), ∀(u,v) ∈ [0, 1]²).

In the treatment and modeling via copulas, it is very common to operate with the original variables transformed by their margins, so below we summarize the impact of Definition 3.1 on such variables.

Remark 3.2

A property that follows directly from Definition 3.1 is the preservation of infinite exchangeability from the original sequence to the standardized one. Namely, if the sequence $X_1, X_2, X_3, \ldots$ of random variables is absolutely continuous, the sequence $U_1, U_2, U_3, \ldots$ is also infinitely exchangeable, where $U_i := F_{X_i}(X_i)$, $i \ge 1$. Indeed, for any collection $i_1, \ldots, i_n$, with arbitrary n, any permutation σ, and arbitrary values $u_1, \ldots, u_n \in [0, 1]$, $P(U_{i_1} \le u_1, \ldots, U_{i_n} \le u_n) = P(U_{\sigma(i_1)} \le u_1, \ldots, U_{\sigma(i_n)} \le u_n)$, since

$$ P(U_{i_1} \le u_1, \ldots, U_{i_n} \le u_n) = P(F_{X_{i_1}}(X_{i_1}) \le u_1, \ldots, F_{X_{i_n}}(X_{i_n}) \le u_n) = P(X_{i_1} \le F_{X_{i_1}}^{-1}(u_1), \ldots, X_{i_n} \le F_{X_{i_n}}^{-1}(u_n)) $$

$$ = P(X_{\sigma(i_1)} \le F_{X_{i_1}}^{-1}(u_1), \ldots, X_{\sigma(i_n)} \le F_{X_{i_n}}^{-1}(u_n)) $$(10)

$$ = P(F_{X_{\sigma(i_1)}}(X_{\sigma(i_1)}) \le u_1, \ldots, F_{X_{\sigma(i_n)}}(X_{\sigma(i_n)}) \le u_n), $$(11)

where equation (10) results from Definition 3.1 and equation (11) follows from Remark 3.1 ($F_{X_i}(\cdot) = F_{X_1}(\cdot)$, $\forall i \ge 1$).

We note that if the variables $U_1, U_2, U_3, \ldots$ of Remark 3.2 are infinitely exchangeable, this does not imply that the variables $X_1, X_2, X_3, \ldots$ are infinitely exchangeable. It is enough that one of the marginal distributions $F_{X_{i_0}}$ differs from the rest $\{F_{X_i}\}_{i \ne i_0}$, which would violate Remark 3.1.

Under the framework of Definition 3.1, and assuming the continuity of the variable identified by the de Finetti representation, say Θ, the copula between pairs of variables of the infinitely exchangeable sequence is given by the copula between one member of the sequence (the first one is enough) and the variable Θ, as we see next.

Corollary 3.1

Let X1, X2,… be a sequence of continuous random variables following Definition 3.1. If Θ is the variable identified in the de Finetti representation, producing the conditional independence, and Θ has density function πΘ(·), then, for each pair of variables Xi and Xj, i ≠ j,

$$ c_{X_i,X_j}(u,v) = \int_{-\infty}^{\infty} c_{X_1,\Theta}(u, F_\Theta(\theta))\, c_{X_1,\Theta}(v, F_\Theta(\theta))\, \pi_\Theta(\theta)\,\mathrm{d}\theta, $$(12)

where $c_{X_i,X_j}$ and $c_{X_1,\Theta}$ are the density functions related to the 2-copulas $C_{X_i,X_j}$ and $C_{X_1,\Theta}$, respectively.

Proof. Since infinitely exchangeable variables are conditionally independent (given the random variable of de Finetti’s theorem), $ c_{X_i,X_j}(u,v)=\int_{-\infty}^{\infty} c_{X_i,\Theta}(u,F_{\Theta}(\theta))\,c_{X_j,\Theta}(v,F_{\Theta}(\theta))\,\pi_{\Theta}(\theta)\,\mathrm{d}\theta $ for each pair $X_i$ and $X_j$ (from Corollary 2.1).

Infinitely exchangeable sequences of absolutely continuous random variables share their unconditional marginal densities $f_{X_i}$, from Remark 3.1. Also, for each value θ of Θ, $ c_{X_i,\Theta}(u,F_{\Theta}(\theta))=c_{X_1,\Theta}(u,F_{\Theta}(\theta)),\ \forall u\in[0,1] $, since,

$$ {f}_{{X}_i|\theta}(x)=\frac{{f}_{{X}_i,\Theta}(x,\theta)}{{\pi}_{\Theta}(\theta)}=\frac{{c}_{{X}_i,\Theta}({F}_{{X}_i}(x),{F}_{\Theta}(\theta))\,{f}_{{X}_i}(x)\,{\pi}_{\Theta}(\theta)}{{\pi}_{\Theta}(\theta)}={c}_{{X}_i,\Theta}({F}_{{X}_i}(x),{F}_{\Theta}(\theta))\,{f}_{{X}_i}(x), $$(13)

where equation (13) follows from Sklar’s theorem. And, using that $f_{X_i}(x)=f_{X_1}(x)$, we can see that $ c_{X_i,\Theta}(u,F_{\Theta}(\theta))=c_{X_1,\Theta}(u,F_{\Theta}(\theta)),\ \forall i\ne 1 $. □
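The Sklar factorization in equation (13) can be checked pointwise in a concrete model. A minimal numerical sketch under a hypothetical normal specification (not the paper’s example): $X_i\mid\theta\sim N(\theta,1)$ with $\Theta\sim N(0,1)$, so $(X_i,\Theta)$ is bivariate normal with correlation $1/\sqrt{2}$ and $c_{X_i,\Theta}$ is the Gaussian copula density; then $f_{X_i|\theta}(x)=c_{X_i,\Theta}(F_{X_i}(x),F_{\Theta}(\theta))\,f_{X_i}(x)$ holds numerically:

```python
import math
from scipy.stats import norm, multivariate_normal

rho = 1.0 / math.sqrt(2.0)  # Corr(X, Θ) when X|θ ~ N(θ,1) and Θ ~ N(0,1)
biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_density(u, v):
    # c(u,v) = φ₂(a,b;ρ) / (φ(a)·φ(b)), with a = Φ⁻¹(u), b = Φ⁻¹(v)
    a, b = norm.ppf(u), norm.ppf(v)
    return biv.pdf([a, b]) / (norm.pdf(a) * norm.pdf(b))

def lhs(x, theta):
    # left side of (13): the conditional density f_{X|θ}(x), i.e. N(θ, 1)
    return norm.pdf(x, loc=theta, scale=1.0)

def rhs(x, theta):
    # right side of (13): c_{X,Θ}(F_X(x), F_Θ(θ)) · f_X(x), with X ~ N(0, 2)
    u = norm.cdf(x, scale=math.sqrt(2.0))    # F_X(x)
    v = norm.cdf(theta)                      # F_Θ(θ)
    fx = norm.pdf(x, scale=math.sqrt(2.0))   # f_X(x)
    return gaussian_copula_density(u, v) * fx

for x, theta in [(0.3, -0.5), (1.2, 0.8), (-0.7, 1.5)]:
    assert abs(lhs(x, theta) - rhs(x, theta)) < 1e-10
```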

However, a question that arises is whether there is any restriction on the type of copula that can result from Corollary 3.1; a hint is given in O’Neill [11] and formulated in the next remark.

Remark 3.3

Let X1, X2,… be a sequence of continuous random variables following Definition 3.1. Then, from Theorem 2 of O’Neill [11], for each pair of variables Xi and Xj, i ≠ j, i, j ≥ 1, $\rho_{X_i,X_j}\ge 0$, with $\rho_{X_i,X_j}$ denoting Pearson’s correlation coefficient. Then, from Remark 3.2, since the variables U1, U2,… follow Definition 3.1, for each pair of variables Xi and Xj, i ≠ j, i, j ≥ 1, $\rho_{X_i,X_j}^{*}\ge 0$, with $\rho_{X_i,X_j}^{*}$ denoting Spearman’s correlation coefficient.

The above remark indicates that the possible copulas identified in Corollary 3.1 cannot correspond to negative correlations, under Definition 3.1.

The following example shows a case under Definition 3.1, and the resulting predictive copula.

Example 3.3

Let X1, X2,… be an infinite sequence, under Definition 3.1, of exponential random variables with parameter θ, conditionally independent given θ, a value of the continuous random variable Θ with exponential distribution and hyperparameter λ. Then, each conditional density function is $f_{X_i|\theta}(x_i)=\theta e^{-\theta x_i} I_{(0,\infty)}(x_i)$ and the prior density function of Θ is $\pi_{\Theta}(\theta)=\lambda e^{-\lambda\theta} I_{(0,\infty)}(\theta)$. The joint density $f_{X_1,\dots,X_n}(x_1,\dots,x_n)$ is proportional to $\frac{\lambda\, n!}{(\lambda+\sum_{i=1}^{n} x_i)^{n+1}}$, for $n\in\mathbb{N}$. Since $c_{X_i,X_j}(u,v)=\frac{f_{X_i,X_j}(x_i,x_j)}{f_{X_i}(x_i)\,f_{X_j}(x_j)}$, for $u=F_{X_i}(x_i)$ and $v=F_{X_j}(x_j)$, using the previous proportionality we obtain $c_{X_i,X_j}(u,v)\propto \frac{(x_i+\lambda)^2 (x_j+\lambda)^2}{\lambda (x_i+x_j+\lambda)^3}$. On the other hand, $c_{X_i,\Theta}(u,F_{\Theta}(\theta))=\frac{f_{X_i|\theta}(x_i)}{f_{X_i}(x_i)}$, for $u=F_{X_i}(x_i)$; then, the right side of equation (12) is proportional to $\frac{(x_i+\lambda)^2 (x_j+\lambda)^2}{\lambda (x_i+x_j+\lambda)^3}$, verifying Corollary 3.1.

Then, since $F_{X_i}(x)=\frac{x}{\lambda+x},\ \forall x\ge 0,\ \forall i$, the copula density function is $c_{X_i,X_j}(u,v)=\frac{2(1-u)(1-v)}{(1-uv)^3}$, $u,v\in[0,1],\ \forall i,j\ge 1$, and the predictive 2-copula is

$$ {C}_{{X}_i,{X}_j}(u,v)=\frac{uv\left(2-u-v\right)}{\left(1-uv\right)},\hspace{1em}u,v\in[0,1],\ \forall i,j\ge 1. $$
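The closed forms in Example 3.3 can be checked numerically. A minimal sketch (taking λ = 1.5, an arbitrary choice of hyperparameter for the check) that evaluates the mixture integral on the right side of equation (12) against the closed-form density, and then confirms by finite differences that this density is the mixed partial derivative of the predictive 2-copula above:

```python
import math
from scipy.integrate import quad

lam = 1.5  # hyperparameter λ, arbitrary choice for the check

def c_x_theta(u, theta):
    # c_{X1,Θ}(u, F_Θ(θ)) = f_{X1|θ}(x)/f_{X1}(x), with x = F⁻¹(u) = λu/(1−u)
    x = lam * u / (1.0 - u)
    f_cond = theta * math.exp(-theta * x)   # f_{X1|θ}(x) = θ e^{−θx}
    f_marg = lam / (x + lam) ** 2           # f_{X1}(x) = λ/(x+λ)²
    return f_cond / f_marg

def c_mixture(u, v):
    # right-hand side of equation (12), mixed over the prior π_Θ(θ) = λe^{−λθ}
    integrand = lambda t: c_x_theta(u, t) * c_x_theta(v, t) * lam * math.exp(-lam * t)
    val, _ = quad(integrand, 0.0, math.inf)
    return val

def c_closed(u, v):
    # copula density from Example 3.3
    return 2.0 * (1.0 - u) * (1.0 - v) / (1.0 - u * v) ** 3

def C_closed(u, v):
    # predictive 2-copula from Example 3.3
    return u * v * (2.0 - u - v) / (1.0 - u * v)

h = 1e-5
for u, v in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]:
    # equation (12) reproduces the closed-form copula density
    assert abs(c_mixture(u, v) - c_closed(u, v)) < 1e-6
    # c = ∂²C/∂u∂v, checked by a central finite difference
    fd = (C_closed(u + h, v + h) - C_closed(u + h, v - h)
          - C_closed(u - h, v + h) + C_closed(u - h, v - h)) / (4.0 * h * h)
    assert abs(fd - c_closed(u, v)) < 1e-4
```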

We started the section with some notes and examples so that Corollary 3.1 appears as a natural consequence of the impact of conditional independence in the framework of infinite exchangeability. Remark 3.1 shows that infinite exchangeability implies conditional independence and identical margins. While independence implies permutability, the converse does not hold (see Example 3.1). Likewise, equal margins alone do not imply permutability (see Example 3.2). Under Definition 3.1, we show in Corollary 3.1 that infinite exchangeability allows us to obtain a representation of the copula between pairs of elements of the infinite sequence in terms of the copula between each element of the sequence and the entity Θ, the quantity identified by de Finetti’s theorem. Also, note that the right side of equation (12) reflects that it is only necessary to know the copula between one element of the infinite sequence and Θ to obtain the predictive copula between pairs of elements of the sequence. Example 3.3 shows how the corollary works in practice.
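As a sanity check connecting Remark 3.3 with Example 3.3, Spearman’s rho of the predictive copula obtained there can be evaluated numerically via $\rho^{*}=12\int_0^1\!\int_0^1 C(u,v)\,\mathrm{d}u\,\mathrm{d}v-3$, and it is indeed non-negative; a minimal sketch:

```python
from scipy.integrate import dblquad

def C(u, v):
    # predictive 2-copula from Example 3.3
    return u * v * (2.0 - u - v) / (1.0 - u * v)

# Spearman's rho: ρ* = 12 ∫₀¹∫₀¹ C(u,v) du dv − 3
integral, _ = dblquad(lambda v, u: C(u, v), 0.0, 1.0, 0.0, 1.0)
rho_star = 12.0 * integral - 3.0
print(rho_star)  # ≈ 0.478, strictly positive, consistent with Remark 3.3
assert rho_star >= 0.0
```

Note that $C(u,v)\ge uv$ here, since $(2-u-v)-(1-uv)=(1-u)(1-v)\ge 0$, so positive quadrant dependence (and hence $\rho^{*}\ge 0$) also follows analytically.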

Conclusion

In the framework of absolutely continuous random variables X and Y, we use the notion of conditional independence between X and Y (conditional on Θ) to obtain a representation of the predictive 2-copula between X and Y (Theorem 2.1, see also Corollary 2.1). Such a result tells us that the representation involves the 2-copula between X and Θ, the 2-copula between Y and Θ, and the distribution of Θ. Since the property of being identically distributed is useful for inferential procedures, in Corollary 2.2 we reveal the impact of such an assumption on the representation of the predictive 2-copula between X and Y. There are indications (see Example 2.4) that the restriction imposed by the condition of identical margins could lead to restrictions on the values of dependence coefficients, such as Kendall’s τ and Spearman’s ρ.

Since the notion of conditional independence requires the identification of the entity Θ making X and Y conditionally independent given Θ, we introduce the concept of infinite exchangeable sequences X1, X2, X3,…. This condition appears as one of the inferential strategies applicable when working with finite samples, since it is reasonable to suppose that a finite sequence is part of an infinite one. Under Definition 3.1, by the representation theorems [8, 9] we can guarantee that Θ exists and that it is a random variable. To give the reader a more precise notion of the meaning of infinite exchangeability, we show that it imposes the same margins on X and Y (conditional or not, see Remark 3.1), that it does not impose unconditional independence (see Example 3.1), and that it does impose conditional independence given Θ. Then, under infinite exchangeability, we revisit the copula density function structure and prove Corollary 3.1, which shows the structure of the predictive 2-copula density function between Xi and Xj, i ≠ j, elements of the infinite sequence. We complete our findings with examples and notes for the reader. Corollaries 2.2 and 3.1, although they start from different assumptions, show that the predictive 2-copula density function between pairs of variables X and Y can be analytically expressed as a function of the copula density between only one variable (say X) and Θ. That is to say, the dependence represented by the predictive 2-copula between X and Y is only a consequence of the dependence between X and Θ. This fact shows the relevance of the identification of Θ for the description of the predictive 2-copula.

Acknowledgments

Vinícius Litvinoff Justus gratefully acknowledges the support provided by CNPq through a scientific initiation fellowship at the University of Campinas. The authors wish to thank the referees and editors for their many helpful comments and suggestions on an earlier draft of this paper.

References

  1. García JE, González-López VA (2020), Random permutations, non-decreasing subsequences and statistical independence. Symmetry 12, 9, 1415. https://doi.org/10.3390/sym12091415.
  2. García JE, González-López VA (2014), Independence tests for continuous random variables based on the longest increasing subsequence. J Multivar Anal 127, 126–146. https://doi.org/10.1016/j.jmva.2014.02.010.
  3. Genest C, Rémillard B (2004), Test of independence and randomness based on the empirical copula process. Test 13, 335–369. https://doi.org/10.1007/BF02595777.
  4. Hollander M, Wolfe D, Chicken E (2013), Nonparametric statistical methods, John Wiley & Sons, New York.
  5. García JE, González-López VA, Nelsen RB (2013), A new index to measure positive dependence in trivariate distributions. J Multivar Anal 115, 481–495. https://doi.org/10.1016/j.jmva.2012.11.007.
  6. Nelsen RB (2006), An introduction to copulas, Springer-Verlag, New York. https://doi.org/10.1007/0-387-28678-0.
  7. Joe H (2014), Dependence modeling with copulas, Chapman and Hall/CRC, New York. https://doi.org/10.1201/b17116.
  8. De Finetti B (1929), Funzione caratteristica di un fenomeno aleatorio, in: Atti del Congresso Internazionale dei Matematici: Bologna, 3–10 settembre 1928 (Comunicazioni, sezione IV (A)–V–VII), vol. 6, pp. 179–190.
  9. Hewitt E, Savage LJ (1955), Symmetric measures on Cartesian products. Trans Am Math Soc 80, 2, 470–501. https://doi.org/10.2307/1992999.
  10. González-López VA, Litvinoff Justus V, A method for the elicitation of copulas (preprint).
  11. O’Neill B (2009), Exchangeability, correlation, and Bayes’ effect. Int Stat Rev 77, 2, 241–250. https://doi.org/10.1111/j.1751-5823.2008.00059.x.

Cite this article as: González-López VA & Litvinoff Justus V 2022. Conditional independence and predictive copula. 4open, 5, 20.
