latentcor uses semi-parametric latent Gaussian copula models to estimate latent correlations between mixed data types (continuous/binary/ternary/truncated or zero-inflated). Below we review the definitions for each type.
Definition of continuous model (Fan et al. 2017)
A random $X\in\cal{R}^{p}$ satisfies the Gaussian copula (or nonparanormal) model if there exist monotonically increasing $f = (f_{j})_{j=1}^{p}$ with $Z_{j} = f_{j}(X_{j})$ satisfying $Z \sim N_{p}(0, \Sigma)$, $\sigma_{jj} = 1$; we denote $X \sim NPN(0, \Sigma, f)$.
X = gen_data(n = 6, types = "con")$X
X
#> [,1]
#> [1,] 0.73481316
#> [2,] 0.67904642
#> [3,] 0.06914448
#> [4,] 0.14939556
#> [5,] 0.08930517
#> [6,] 0.09801953
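Because the transforms f are monotone, Kendall's τ computed on the observed X equals Kendall's τ computed on the latent Z, which is the key fact behind the rank-based estimation described below. The sketch here is illustrative Python (not part of latentcor) that checks this invariance:

```python
# Illustrative check (not latentcor code): strictly increasing marginal
# transforms leave Kendall's tau unchanged, since tau depends only on ranks.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
r = 0.6
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=500)

# Strictly increasing transforms (exp and cube) give non-Gaussian margins.
X = np.column_stack([np.exp(Z[:, 0]), Z[:, 1] ** 3])

tau_Z, _ = kendalltau(Z[:, 0], Z[:, 1])
tau_X, _ = kendalltau(X[:, 0], X[:, 1])
assert abs(tau_Z - tau_X) < 1e-12  # identical up to floating point
```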
Definition of binary model (Fan et al. 2017)
A random $X\in\cal{R}^{p}$ satisfies the binary latent Gaussian copula model if there exists W ∼ NPN(0, Σ, f) such that Xj = I(Wj > cj), where I(⋅) is the indicator function and cj are constants.
X = gen_data(n = 6, types = "bin")$X
X
#> [,1]
#> [1,] 0
#> [2,] 1
#> [3,] 0
#> [4,] 0
#> [5,] 1
#> [6,] 1
Definition of ternary model (Quan, Booth, and Wells 2018)
A random $X\in\cal{R}^{p}$ satisfies the ternary latent Gaussian copula model if there exists W ∼ NPN(0, Σ, f) such that Xj = I(Wj > cj) + I(Wj > c′j), where I(⋅) is the indicator function and cj < c′j are constants.
X = gen_data(n = 6, types = "ter")$X
X
#> [,1]
#> [1,] 1
#> [2,] 0
#> [3,] 2
#> [4,] 1
#> [5,] 0
#> [6,] 1
Definition of truncated or zero-inflated model (Yoon, Carroll, and Gaynanova 2020)
A random $X\in\cal{R}^{p}$ satisfies the truncated latent Gaussian copula model if there exists W ∼ NPN(0, Σ, f) such that Xj = I(Wj > cj)Wj, where I(⋅) is the indicator function and cj are constants.
X = gen_data(n = 6, types = "tru")$X
X
#> [,1]
#> [1,] 0.1334432
#> [2,] 0.0000000
#> [3,] 0.0000000
#> [4,] 0.0000000
#> [5,] 0.8616549
#> [6,] 0.4751267
Mixed latent Gaussian copula model
The mixed latent Gaussian copula model jointly models W = (W1, W2, W3, W4) ∼ NPN(0, Σ, f) such that X1j = W1j, X2j = I(W2j > c2j), X3j = I(W3j > c3j) + I(W3j > c′3j) and X4j = I(W4j > c4j)W4j.
set.seed(234820)
X = gen_data(n = 100, types = c("con", "bin", "ter", "tru"))$X
head(X)
#> [,1] [,2] [,3] [,4]
#> [1,] -0.5728663 0 0 0.0000000
#> [2,] -1.5632883 0 0 0.0000000
#> [3,] 0.4600555 1 1 0.1840785
#> [4,] -1.5186510 1 2 0.0000000
#> [5,] -1.5438165 0 1 0.0000000
#> [6,] -0.5656219 0 1 0.0000000
The estimation of the latent correlation matrix Σ is achieved via the bridge function F, which is defined such that E(τ̂jk) = F(σjk), where σjk is the latent correlation between variables j and k, and τ̂jk is the corresponding sample Kendall’s τ.
Kendall’s τ (τ_a)
Given observed $\mathbf{x}_{j}, \mathbf{x}_{k}\in\cal{R}^{n}$,
$$ \hat{\tau}_{jk}=\hat{\tau}(\mathbf{x}_{j}, \mathbf{x}_{k})=\frac{2}{n(n-1)}\sum_{1\le i<i'\le n}\mathrm{sign}(x_{ij}-x_{i'j})\,\mathrm{sign}(x_{ik}-x_{i'k}), $$ where n is the sample size.
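The τ_a formula above can be transcribed directly. The sketch below is illustrative Python (latentcor computes Kendall’s τ internally in R):

```python
# Direct O(n^2) transcription of the tau-a formula above (illustrative only).
import itertools

def kendall_tau_a(x, y):
    """2/(n(n-1)) * sum over i < i' of sign(x_i - x_i') * sign(y_i - y_i')."""
    n = len(x)
    sgn = lambda v: (v > 0) - (v < 0)
    s = sum(sgn(x[i] - x[j]) * sgn(y[i] - y[j])
            for i, j in itertools.combinations(range(n), 2))
    return 2.0 * s / (n * (n - 1))

# Two concordant pairs and one discordant: tau = 2 * (2 - 1) / (3 * 2) = 1/3
print(kendall_tau_a([1, 2, 3], [1, 3, 2]))
```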
latentcor calculates the pairwise Kendall’s τ̂ as part of the estimation process:
estimate = latentcor(X, types = c("con", "bin", "ter", "tru"))
K = estimate$K
K
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.2541414 0.2436364 0.3327273
#> [2,] 0.2541414 1.0000000 0.1838384 0.2105051
#> [3,] 0.2436364 0.1838384 1.0000000 0.2466667
#> [4,] 0.3327273 0.2105051 0.2466667 1.0000000
Using F and τ̂jk, a moment-based estimator is σ̂jk = F−1(τ̂jk) with the corresponding Σ̂ being consistent for Σ (Fan et al. 2017; Quan, Booth, and Wells 2018; Yoon, Carroll, and Gaynanova 2020).
The explicit form of the bridge function F has been derived for all combinations of continuous (C), binary (B), ternary (N) and truncated (T) variable types, and we summarize the corresponding references below. Each of these combinations is implemented in latentcor.
Type | continuous | binary | ternary | zero-inflated (truncated) |
---|---|---|---|---|
continuous | Liu, Lafferty, and Wasserman (2009) | - | - | - |
binary | Fan et al. (2017) | Fan et al. (2017) | - | - |
ternary | Quan, Booth, and Wells (2018) | Quan, Booth, and Wells (2018) | Quan, Booth, and Wells (2018) | - |
zero-inflated (truncated) | Yoon, Carroll, and Gaynanova (2020) | Yoon, Carroll, and Gaynanova (2020) | See Appendix | Yoon, Carroll, and Gaynanova (2020) |
Below we provide an explicit form of F for each combination.
Theorem (explicit form of bridge function) Let $W_{1}\in\cal{R}^{p_{1}}$, $W_{2}\in\cal{R}^{p_{2}}$, $W_{3}\in\cal{R}^{p_{3}}$, $W_{4}\in\cal{R}^{p_{4}}$ be such that W = (W1, W2, W3, W4) ∼ NPN(0, Σ, f) with p = p1 + p2 + p3 + p4. Let $X=(X_{1}, X_{2}, X_{3}, X_{4})\in\cal{R}^{p}$ satisfy Xj = Wj for j = 1, ..., p1, Xj = I(Wj > cj) for j = p1 + 1, ..., p1 + p2, Xj = I(Wj > cj) + I(Wj > c′j) for j = p1 + p2 + 1, ..., p1 + p2 + p3 and Xj = I(Wj > cj)Wj for j = p1 + p2 + p3 + 1, ..., p with Δj = fj(cj). The rank-based estimator of Σ based on the observed n realizations of X is the matrix R̂ with r̂jj = 1, r̂jk = r̂kj = F−1(τ̂jk) with block structure
$$ \mathbf{\hat{R}}=\begin{pmatrix} F_{CC}^{-1}(\hat{\tau}) & F_{CB}^{-1}(\hat{\tau}) & F_{CN}^{-1}(\hat{\tau}) & F_{CT}^{-1}(\hat{\tau})\\ F_{BC}^{-1}(\hat{\tau}) & F_{BB}^{-1}(\hat{\tau}) & F_{BN}^{-1}(\hat{\tau}) & F_{BT}^{-1}(\hat{\tau})\\ F_{NC}^{-1}(\hat{\tau}) & F_{NB}^{-1}(\hat{\tau}) & F_{NN}^{-1}(\hat{\tau}) & F_{NT}^{-1}(\hat{\tau})\\ F_{TC}^{-1}(\hat{\tau}) & F_{TB}^{-1}(\hat{\tau}) & F_{TN}^{-1}(\hat{\tau}) & F_{TT}^{-1}(\hat{\tau}) \end{pmatrix} $$ $$ F(\cdot)=\begin{cases} CC:\ 2\sin^{-1}(r)/\pi \\ \\ BC: \ 4\Phi_{2}(\Delta_{j},0;r/\sqrt{2})-2\Phi(\Delta_{j}) \\ \\ BB: \ 2\{\Phi_{2}(\Delta_{j},\Delta_{k};r)-\Phi(\Delta_{j})\Phi(\Delta_{k})\} \\ \\ NC: \ 4\Phi_{2}(\Delta_{j}^{2},0;r/\sqrt{2})-2\Phi(\Delta_{j}^{2})+4\Phi_{3}(\Delta_{j}^{1},\Delta_{j}^{2},0;\Sigma_{3a}(r))-2\Phi(\Delta_{j}^{1})\Phi(\Delta_{j}^{2})\\ \\ NB: \ 2\Phi_{2}(\Delta_{j}^{2},\Delta_{k},r)\{1-\Phi(\Delta_{j}^{1})\}-2\Phi(\Delta_{j}^{2})\{\Phi(\Delta_{k})-\Phi_{2}(\Delta_{j}^{1},\Delta_{k},r)\} \\ \\ NN: \ 2\Phi_{2}(\Delta_{j}^{2},\Delta_{k}^{2};r)\Phi_{2}(-\Delta_{j}^{1},-\Delta_{k}^{1};r)-2\{\Phi(\Delta_{j}^{2})-\Phi_{2}(\Delta_{j}^{2},\Delta_{k}^{1};r)\}\{\Phi(\Delta_{k}^{2})-\Phi_{2}(\Delta_{j}^{1},\Delta_{k}^{2};r)\} \\ \\ TC: \ -2\Phi_{2}(-\Delta_{j},0;1/\sqrt{2})+4\Phi_{3}(-\Delta_{j},0,0;\Sigma_{3b}(r)) \\ \\ TB: \ 2\{1-\Phi(\Delta_{j})\}\Phi(\Delta_{k})-2\Phi_{3}(-\Delta_{j},\Delta_{k},0;\Sigma_{3c}(r))-2\Phi_{3}(-\Delta_{j},\Delta_{k},0;\Sigma_{3d}(r)) \\ \\ TN: \ -2\Phi(-\Delta_{k}^{1})\Phi(\Delta_{k}^{2}) + 2\Phi_{3}(-\Delta_{k}^{1},\Delta_{k}^{2},\Delta_{j};\Sigma_{3e}(r))+2\Phi_{4}(-\Delta_{k}^{1},\Delta_{k}^{2},-\Delta_{j},0;\Sigma_{4a}(r))+2\Phi_{4}(-\Delta_{k}^{1},\Delta_{k}^{2},-\Delta_{j},0;\Sigma_{4b}(r)) \\ \\ TT: \ -2\Phi_{4}(-\Delta_{j},-\Delta_{k},0,0;\Sigma_{4c}(r))+2\Phi_{4}(-\Delta_{j},-\Delta_{k},0,0;\Sigma_{4d}(r)) \\ \end{cases} $$
where Δj = Φ−1(π0j), Δk = Φ−1(π0k), Δj1 = Φ−1(π0j), Δj2 = Φ−1(π0j + π1j), Δk1 = Φ−1(π0k), Δk2 = Φ−1(π0k + π1k),
$$ \Sigma_{3a}(r)= \begin{pmatrix} 1 & 0 & \frac{r}{\sqrt{2}} \\ 0 & 1 & -\frac{r}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 \end{pmatrix}, \;\;\; \Sigma_{3b}(r)= \begin{pmatrix} 1 & \frac{1}{\sqrt{2}} & \frac{r}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & 1 & r \\ \frac{r}{\sqrt{2}} & r & 1 \end{pmatrix}, \;\;\; \Sigma_{3c}(r)= \begin{pmatrix} 1 & -r & \frac{1}{\sqrt{2}} \\ -r & 1 & -\frac{r}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 \end{pmatrix}, $$
$$ \Sigma_{3d}(r)= \begin{pmatrix} 1 & 0 & -\frac{1}{\sqrt{2}} \\ 0 & 1 & -\frac{r}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 \end{pmatrix}, \;\;\; \Sigma_{3e}(r)= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & r & 1 \end{pmatrix}, \;\;\; \Sigma_{4a}(r)= \begin{pmatrix} 1 & 0 & 0 & \frac{r}{\sqrt{2}} \\ 0 & 1 & -r & \frac{r}{\sqrt{2}} \\ 0 & -r & 1 & -\frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 \end{pmatrix}, $$
$$ \Sigma_{4b}(r)= \begin{pmatrix} 1 & 0 & r & \frac{r}{\sqrt{2}} \\ 0 & 1 & 0 & \frac{r}{\sqrt{2}} \\ r & 0 & 1 & \frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 1 \end{pmatrix}, \;\;\; \Sigma_{4c}(r)= \begin{pmatrix} 1 & 0 & \frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} \\ 0 & 1 & -\frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 & -r \\ -\frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -r & 1 \end{pmatrix}\;\;\text{and}\;\; \Sigma_{4d}(r)= \begin{pmatrix} 1 & r & \frac{1}{\sqrt{2}} & \frac{r}{\sqrt{2}} \\ r & 1 & \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{r}{\sqrt{2}} & 1 & r \\ \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & r & 1 \end{pmatrix}. $$
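As a numerical sanity check on the theorem, the two simplest cases, CC and BC, can be evaluated directly. The sketch below is illustrative Python using scipy for the normal CDFs (latentcor's own implementation is in R); it checks that F_CC(1) = 1 and that F_BC vanishes at r = 0:

```python
# Illustrative evaluation of the CC and BC bridge functions stated above.
import math
from scipy.stats import norm, multivariate_normal

def F_cc(r):
    # CC: 2 * asin(r) / pi
    return 2.0 * math.asin(r) / math.pi

def F_bc(r, delta_j):
    # BC: 4 * Phi_2(delta_j, 0; r / sqrt(2)) - 2 * Phi(delta_j)
    c = r / math.sqrt(2.0)
    phi2 = multivariate_normal([0.0, 0.0], [[1.0, c], [c, 1.0]])
    return 4.0 * phi2.cdf([delta_j, 0.0]) - 2.0 * norm.cdf(delta_j)

assert abs(F_cc(1.0) - 1.0) < 1e-12  # perfect latent correlation gives tau = 1
assert abs(F_bc(0.0, 0.5)) < 1e-4    # independent latent variables give tau = 0
```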
Given the form of the bridge function F, obtaining the moment-based estimator σ̂jk requires inversion of F. latentcor implements two methods for this inversion:

- method = "original" (see the subsection on the original method and its parameter tol)
- method = "approx" (see the subsection on the approximation method and its parameter ratio)
Both methods apply the inverse bridge function to each element of the sample Kendall’s τ matrix. Because the calculation is performed point-wise (separately for each pair of variables), the resulting point-wise estimator of the correlation matrix may not be positive semi-definite. latentcor projects the point-wise estimator onto the space of positive semi-definite matrices, and allows shrinkage towards the identity matrix via the parameter nu (see the subsection on the adjustment of the point-wise estimator and the parameter nu).
method = "original"

The original estimation approach relies on numerical inversion of F by solving a univariate optimization problem. Given the calculated τ̂jk (the sample Kendall’s τ between variables j and k), the estimate of the latent correlation σ̂jk is obtained by calling the optimize function to solve the following optimization problem: r̂jk = arg min_r {F(r) − τ̂jk}². The parameter tol controls the desired accuracy of the minimizer and is passed to optimize, with a default precision of 1e-8.
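The inversion step can be sketched outside R as follows (illustrative Python; scipy's bounded scalar minimizer stands in for R's optimize, and the CC bridge is used because its inverse has the closed form sin(πτ/2) to check against):

```python
# Numerical inversion of a bridge function by minimizing {F(r) - tau_hat}^2,
# mirroring the optimize() call described above (illustrative sketch).
import math
from scipy.optimize import minimize_scalar

def F_cc(r):
    return 2.0 * math.asin(r) / math.pi  # continuous/continuous bridge

tau_hat = 0.5
res = minimize_scalar(lambda r: (F_cc(r) - tau_hat) ** 2,
                      bounds=(-0.999, 0.999), method="bounded",
                      options={"xatol": 1e-8})
r_hat = res.x

# Closed-form inverse for the CC case: r = sin(pi * tau / 2)
assert abs(r_hat - math.sin(math.pi * tau_hat / 2.0)) < 1e-4
```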
Algorithm for Original method
Input: F(r) = F(r, Δ) - bridge function based on the type of variables j, k
Step 1. Calculate τ̂jk, the sample Kendall’s τ of the observed data (returned by latentcor as K):
estimate$K
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.2541414 0.2436364 0.3327273
#> [2,] 0.2541414 1.0000000 0.1838384 0.2105051
#> [3,] 0.2436364 0.1838384 1.0000000 0.2466667
#> [4,] 0.3327273 0.2105051 0.2466667 1.0000000
Step 2. Estimate Δ̂ from the observed proportions of each level (returned by latentcor as zratios; NA for continuous variables):
estimate$zratios
#> [[1]]
#> [1] NA
#>
#> [[2]]
#> [1] 0.5
#>
#> [[3]]
#> [1] 0.3 0.8
#>
#> [[4]]
#> [1] 0.5
Step 3. For each pair (j, k), compute F⁻¹(τ̂jk) as the minimizer of {F(r) − τ̂jk}², found by the optimize function in R with accuracy tol.

method = "approx"

A faster approximation method is based on multi-linear interpolation of the pre-computed inverse bridge function on a fixed grid of points (Yoon, Müller, and Gaynanova 2021). This is possible because the inverse bridge function is an analytic function of at most 5 parameters:
In short, d-dimensional multi-linear interpolation uses a weighted average of 2^d neighbors to approximate the function values at points within the d-dimensional cube spanned by those neighbors. To perform the interpolation, latentcor takes advantage of the R package chebpol (Gaure 2019). This approximation method was first described in Yoon, Müller, and Gaynanova (2021) for the continuous/binary/truncated cases. In latentcor, we additionally implement the ternary case, and optimize the choice of grid as well as the interpolation boundary for faster computations with a smaller memory footprint.
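The interpolation step can be sketched as follows (illustrative Python; scipy's RegularGridInterpolator stands in for chebpol, and the interpolated surface is a toy placeholder rather than a real inverse bridge function):

```python
# d-dimensional multi-linear interpolation: each query point is a weighted
# average of its 2^d grid neighbors (here d = 2, so 4 neighbors).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

grid_x = np.linspace(0.0, 1.0, 11)
grid_y = np.linspace(0.0, 1.0, 11)
# Pre-computed values on the grid (a toy surrogate for a tabulated F^{-1}).
values = grid_x[:, None] ** 2 + grid_y[None, :]

interp = RegularGridInterpolator((grid_x, grid_y), values, method="linear")
approx = interp([[0.55, 0.35]])[0]
exact = 0.55 ** 2 + 0.35
assert abs(approx - exact) < 1e-2  # close for a smooth function on a fine grid
```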
Algorithm for Approximation method
Input: Let ǧ = h(g), pre-computed values F−1(h−1(ǧ)) on a fixed grid $\check{g}\in\check{\cal{G}}$ based on the type of variables j and k. For the binary/continuous case, ǧ = (τ̌jk, Δ̌j); for the binary/binary case, ǧ = (τ̌jk, Δ̌j, Δ̌k); for the truncated/continuous case, ǧ = (τ̌jk, Δ̌j); for the truncated/truncated case, ǧ = (τ̌jk, Δ̌j, Δ̌k); for the ternary/continuous case, ǧ = (τ̌jk, Δ̌j1, Δ̌j2); for the ternary/binary case, ǧ = (τ̌jk, Δ̌j1, Δ̌j2, Δ̌k); for the ternary/truncated case, ǧ = (τ̌jk, Δ̌j1, Δ̌j2, Δ̌k); for the ternary/ternary case, ǧ = (τ̌jk, Δ̌j1, Δ̌j2, Δ̌k1, Δ̌k2).
Step 1 and Step 2 same as Original method.
Step 3. If |τ̂jk| ≤ ratio × τ̄jk(⋅), apply interpolation; otherwise apply Original method.
To avoid interpolation in areas with high approximation errors close to the boundary, we use a hybrid scheme in Step 3. The parameter ratio controls the size of the region where the interpolation is performed (ratio = 0 means no interpolation, ratio = 1 means interpolation is always performed). For the derivation of the approximate bound for the BC, BB, TC, TB and TT cases see Yoon, Müller, and Gaynanova (2021). The derivation of the approximate bound for the NC, NB, NN and NT cases is in the Appendix.
$$ \bar{\tau}_{jk}(\cdot)= \begin{cases} 2\pi_{0j}(1-\pi_{0j}) & for \; BC \; case\\ 2\min(\pi_{0j},\pi_{0k})\{1-\max(\pi_{0j}, \pi_{0k})\} & for \; BB \; case\\ 2\{\pi_{0j}(1-\pi_{0j})+\pi_{1j}(1-\pi_{0j}-\pi_{1j})\} & for \; NC \; case\\ 2\min(\pi_{0j}(1-\pi_{0j})+\pi_{1j}(1-\pi_{0j}-\pi_{1j}),\pi_{0k}(1-\pi_{0k})) & for \; NB \; case\\ 2\min(\pi_{0j}(1-\pi_{0j})+\pi_{1j}(1-\pi_{0j}-\pi_{1j}), \\ \;\;\;\;\;\;\;\;\;\;\pi_{0k}(1-\pi_{0k})+\pi_{1k}(1-\pi_{0k}-\pi_{1k})) & for \; NN \; case\\ 1-(\pi_{0j})^{2} & for \; TC \; case\\ 2\max(\pi_{0k},1-\pi_{0k})\{1-\max(\pi_{0k},1-\pi_{0k},\pi_{0j})\} & for \; TB \; case\\ 1-\{\max(\pi_{0j},\pi_{0k},\pi_{1k},1-\pi_{0k}-\pi_{1k})\}^{2} & for \; TN \; case\\ 1-\{\max(\pi_{0j},\pi_{0k})\}^{2} & for \; TT \; case\\ \end{cases} $$
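For instance, the BC and BB bounds above, and the resulting Step 3 decision, can be evaluated directly (illustrative Python):

```python
# Two of the tau-bar bounds above, plus the hybrid interpolation rule.
def tau_bar_bc(pi0j):
    # BC case: 2 * pi0j * (1 - pi0j)
    return 2.0 * pi0j * (1.0 - pi0j)

def tau_bar_bb(pi0j, pi0k):
    # BB case: 2 * min(pi0j, pi0k) * (1 - max(pi0j, pi0k))
    return 2.0 * min(pi0j, pi0k) * (1.0 - max(pi0j, pi0k))

assert abs(tau_bar_bc(0.5) - 0.5) < 1e-12   # balanced binary caps |tau| at 0.5
assert abs(tau_bar_bb(0.3, 0.6) - 0.24) < 1e-12

# Step 3: interpolate only when |tau_hat| <= ratio * tau_bar.
ratio, tau_hat = 0.9, 0.4
use_interpolation = abs(tau_hat) <= ratio * tau_bar_bc(0.5)
assert use_interpolation  # 0.4 <= 0.9 * 0.5 = 0.45
```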
By default, latentcor uses ratio = 0.9, the value recommended in Yoon, Müller, and Gaynanova (2021) as a good balance of accuracy and computational speed. This value, however, can be modified by the user.
#latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.99)$R
#latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.4)$R
latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original")$R
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.5491345 0.4441468 0.5814024
#> [2,] 0.5491345 1.0000000 0.4754056 0.5279002
#> [3,] 0.4441468 0.4754056 1.0000000 0.5214781
#> [4,] 0.5814024 0.5279002 0.5214781 1.0000000
The lower the ratio, the closer the approximation method is to the original method (ratio = 0 is equivalent to method = "original"), but the higher the computational cost.
library(microbenchmark)
#microbenchmark(latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.99)$R)
#microbenchmark(latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.4)$R)
microbenchmark(latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original")$R)
#> Unit: milliseconds
#> expr
#> latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original")$R
#> min lq mean median uq max neval
#> 25.21197 25.42875 25.97904 25.58194 25.78802 30.06851 100
Rescaled Grid for Interpolation
Since |τ̂| ≤ τ̄, the grid does not need to cover the whole domain τ ∈ [−1, 1]. To optimize memory associated with storing the grid, we rescale τ as follows: τ̌jk = τjk/τ̄jk ∈ [−1, 1], where τ̄jk is as defined above.
In addition, for a ternary variable j, it always holds that Δj2 > Δj1 since Δj1 = Φ−1(π0j) and Δj2 = Φ−1(π0j + π1j). Thus, the grid does not need to cover the area corresponding to Δj2 ≤ Δj1. We thus rescale as follows: Δ̌j1 = Δj1/Δj2 ∈ [0, 1]; Δ̌j2 = Δj2 ∈ [0, 1].
Speed Comparison
To illustrate the speed improvement of method = "approx", we plot the run-time scaling behavior of method = "approx" and method = "original" (setting types for gen_data by replicating c("con", "bin", "ter", "tru") multiple times) with increasing dimension p = [20, 40, 100, 200, 400] at sample size n = 100 using simulated data. The figure below summarizes the observed scaling in a log-log plot. For both methods we observe the expected O(p²) scaling with dimension p, i.e., a linear trend in the log-log plot. However, method = "approx" is at least one order of magnitude faster than method = "original", independent of the dimension of the problem.
Since the estimation is performed point-wise, the resulting matrix of estimated latent correlations is not guaranteed to be positive semi-definite. For example, this could be expected when the sample size is small (so that the estimation error for each pairwise correlation is larger):
set.seed(234820)
X = gen_data(n = 6, types = c("con", "bin", "ter", "tru"))$X
X
#> [,1] [,2] [,3] [,4]
#> [1,] -0.5182800 1 1 0.01585472
#> [2,] -1.3017092 0 0 0.00000000
#> [3,] 0.3145191 1 2 0.47477310
#> [4,] -0.6093291 1 0 1.12427204
#> [5,] -1.3175490 0 1 0.00000000
#> [6,] -0.7807245 0 1 0.00000000
out = latentcor(X, types = c("con", "bin", "ter", "tru"))
out$Rpointwise
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.9990000 0.6010872 0.8548518
#> [2,] 0.9990000 1.0000000 0.3523666 0.9990000
#> [3,] 0.6010872 0.3523666 1.0000000 0.1429834
#> [4,] 0.8548518 0.9990000 0.1429834 1.0000000
eigen(out$Rpointwise)$values
#> [1] 3.09750556 0.93152347 0.02204824 -0.05107726
latentcor automatically corrects the point-wise estimator to be positive definite by making two adjustments. First, if Rpointwise has a smallest eigenvalue less than zero, latentcor projects this matrix onto the nearest positive semi-definite matrix. The user is notified of this adjustment through a message (suppressed in the previous code chunk).
Second, latentcor shrinks the adjusted matrix of correlations towards the identity matrix using the parameter ν with a default value of 0.001 (nu = 0.001), so that the resulting R is strictly positive definite with its minimal eigenvalue greater than or equal to ν. That is, R = (1 − ν)R̃ + νI, where R̃ is the nearest positive semi-definite matrix to Rpointwise.
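Both adjustments can be sketched as follows (illustrative Python; the eigenvalue-clipping projection here is a simple stand-in and may differ from latentcor's actual nearest-PD routine). The input is the Rpointwise matrix from the example above:

```python
# Nearest-PSD projection by eigenvalue clipping, then shrinkage towards the
# identity: R = (1 - nu) * R_tilde + nu * I (illustrative sketch).
import numpy as np

def adjust(R_pointwise, nu=0.001):
    w, V = np.linalg.eigh(R_pointwise)
    R_tilde = (V * np.clip(w, 0.0, None)) @ V.T  # clip negative eigenvalues
    return (1.0 - nu) * R_tilde + nu * np.eye(R_pointwise.shape[0])

R_pw = np.array([[1.0,       0.999,     0.6010872, 0.8548518],
                 [0.999,     1.0,       0.3523666, 0.999],
                 [0.6010872, 0.3523666, 1.0,       0.1429834],
                 [0.8548518, 0.999,     0.1429834, 1.0]])

assert np.linalg.eigvalsh(R_pw).min() < 0           # indefinite point-wise estimate
R = adjust(R_pw)
assert np.linalg.eigvalsh(R).min() >= 0.001 - 1e-9  # now strictly positive definite
```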
out = latentcor(X, types = c("con", "bin", "ter", "tru"), nu = 0.001)
out$Rpointwise
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.9990000 0.6010872 0.8548518
#> [2,] 0.9990000 1.0000000 0.3523666 0.9990000
#> [3,] 0.6010872 0.3523666 1.0000000 0.1429834
#> [4,] 0.8548518 0.9990000 0.1429834 1.0000000
out$R
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.9600157 0.5973955 0.8704932
#> [2,] 0.9600157 1.0000000 0.3569106 0.9718665
#> [3,] 0.5973955 0.3569106 1.0000000 0.1407140
#> [4,] 0.8704932 0.9718665 0.1407140 1.0000000
As a result, R and Rpointwise can be quite different when the sample size n is small. When n is large and p is moderate, the difference is typically driven by the parameter nu.
set.seed(234820)
X = gen_data(n = 100, types = c("con", "bin", "ter", "tru"))$X
out = latentcor(X, types = c("con", "bin", "ter", "tru"), nu = 0.001)
out$Rpointwise
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.5494941 0.4441974 0.5816633
#> [2,] 0.5494941 1.0000000 0.4765409 0.5282279
#> [3,] 0.4441974 0.4765409 1.0000000 0.5110558
#> [4,] 0.5816633 0.5282279 0.5110558 1.0000000
out$R
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.5489446 0.4437532 0.5810817
#> [2,] 0.5489446 1.0000000 0.4760643 0.5276996
#> [3,] 0.4437532 0.4760643 1.0000000 0.5105447
#> [4,] 0.5810817 0.5276996 0.5105447 1.0000000
Without loss of generality, let j = 1 and k = 2. By the definition of Kendall’s τ, $$ \tau_{12}=E(\hat{\tau}_{12})=E\left[\frac{2}{n(n-1)}\sum_{1\leq i< i' \leq n} \mathrm{sign}\{(X_{i1}-X_{i'1})(X_{i2}-X_{i'2})\}\right]. $$ Since X1 is ternary, Since X2 is truncated, C2 > 0 and Since f is monotonically increasing, sign(X2 − X2′) = sign(Z2 − Z2′), From the definition of W, let Zj = fj(Wj) and Δj = fj(Cj) for j = 1, 2. Using sign(x) = 2I(x > 0) − 1, we obtain Since $\{\frac{Z_{2}'-Z_{2}}{\sqrt{2}}, -Z_{1}\}$, $\{\frac{Z_{2}'-Z_{2}}{\sqrt{2}}, Z_{1}'\}$ and $\{\frac{Z_{2}'-Z_{2}}{\sqrt{2}}, -Z_{2}'\}$ are standard bivariate normally distributed variables with correlation $-\frac{1}{\sqrt{2}}$, $r/\sqrt{2}$ and $-\frac{r}{\sqrt{2}}$, respectively, by the definition of Φ3(⋅, ⋅, ⋅; ⋅) and Φ4(⋅, ⋅, ⋅, ⋅; ⋅) we have Using the facts that and So that,
It is easy to get the bridge function for truncated/ternary case by switching j and k.
Let $n_{0x}=\sum_{i=1}^{n_x}I(x_{i}=0)$, $n_{2x}=\sum_{i=1}^{n_x}I(x_{i}=2)$, $\pi_{0x}=\frac{n_{0x}}{n_{x}}$ and $\pi_{2x}=\frac{n_{2x}}{n_{x}}$, then
For ternary/binary and ternary/ternary cases, we combine the two individual bounds.
Let x ∈ ℛn and y ∈ ℛn be the observed n realizations of ternary and truncated variables, respectively. Let $n_{0x}=\sum_{i=1}^{n}I(x_{i}=0)$, $\pi_{0x}=\frac{n_{0x}}{n}$, $n_{1x}=\sum_{i=1}^{n}I(x_{i}=1)$, $\pi_{1x}=\frac{n_{1x}}{n}$, $n_{2x}=\sum_{i=1}^{n}I(x_{i}=2)$, $\pi_{2x}=\frac{n_{2x}}{n}$, $n_{0y}=\sum_{i=1}^{n}I(y_{i}=0)$, $\pi_{0y}=\frac{n_{0y}}{n}$, $n_{0x0y}=\sum_{i=1}^{n}I(x_{i}=0 \;\&\; y_{i}=0)$, $n_{1x0y}=\sum_{i=1}^{n}I(x_{i}=1 \;\&\; y_{i}=0)$ and $n_{2x0y}=\sum_{i=1}^{n}I(x_{i}=2 \;\&\; y_{i}=0)$, then Since n0x0y ≤ min (n0x, n0y), n1x0y ≤ min (n1x, n0y) and n2x0y ≤ min (n2x, n0y) we obtain
It is easy to get the approximate bound for truncated/ternary case by switching x and y.