17.4.2.4 Algorithms (Repeated Measures ANOVA)


One-way/Two-way Repeated Measures

For the details of the algorithms for the one-way and two-way balanced repeated measures designs, please see Repeated Measures ANOVA.pdf.

Two-way Mixed-Design

Multivariate Tests

Consider the model: a two-way mixed-design repeated measures ANOVA with one between-subjects factor A and one within-subjects factor B.

Let k be the number of levels of factor A, p be the number of levels of factor B, n_i be the number of subjects at the ith level of factor A, and y_{ij}^{T} = (y_{ij1},...,y_{ijp}) be the observations for the jth subject at the ith level of factor A.

Define the error matrix as E = \sum_{i=1}^{k}\sum_{j=1}^{n_i}(y_{ij}-\bar{y_{i.}})(y_{ij}-\bar{y_{i.}})^{T}, the hypothesis matrix as H = \sum_{i=1}^{k}n_i(\bar{y_{i.}}-\bar{y_{..}})(\bar{y_{i.}}-\bar{y_{..}})^{T}, and the hypothesis matrix for the intercept as H_{int} = \frac{1}{\sum_{i=1}^{k}\frac{1}{n_i}}(\sum_{i=1}^{k}\frac{y_{i.}}{n_i})(\sum_{i=1}^{k}\frac{y_{i.}}{n_i})^{T},

where

y_{i.} = \sum_{j=1}^{n_i}y_{ij}, y_{..} = \sum_{i=1}^{k}\sum_{j=1}^{n_i}y_{ij}, \bar{y_{i.}} = \frac{y_{i.}}{n_i}, \bar{y_{..}} = \frac{y_{..}}{N} and N = \sum_{i=1}^{k}n_i.

The corresponding degrees of freedom are d_E = N-k and d_H = k-1, respectively.
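
A minimal numerical sketch of these quantities in Python/NumPy follows. The data layout (a list of n_i × p blocks) and all variable names are illustrative assumptions, not Origin's internal representation.

```python
# Compute E, H, H_int and the degrees of freedom from the formulas above.
import numpy as np

rng = np.random.default_rng(0)
n = [4, 5, 6]                                       # n_i: subjects per level of factor A
k, p = len(n), 3                                    # k levels of A, p levels of B
groups = [rng.normal(size=(ni, p)) for ni in n]     # rows of block i are the y_{ij}^T

N = sum(n)
ybar_i = [g.mean(axis=0) for g in groups]           # \bar{y}_{i.}
ybar = sum(g.sum(axis=0) for g in groups) / N       # \bar{y}_{..}

# E = sum_i sum_j (y_ij - ybar_i)(y_ij - ybar_i)^T
E = sum((g - m).T @ (g - m) for g, m in zip(groups, ybar_i))

# H = sum_i n_i (ybar_i - ybar)(ybar_i - ybar)^T
H = sum(ni * np.outer(m - ybar, m - ybar) for ni, m in zip(n, ybar_i))

# H_int = (sum_i ybar_i)(sum_i ybar_i)^T / sum_i (1/n_i)
s = sum(ybar_i)
H_int = np.outer(s, s) / sum(1.0 / ni for ni in n)

d_E, d_H = N - k, k - 1                             # error and hypothesis degrees of freedom
```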

Suppose the mean vectors of factor A's levels are \mu_1,...,\mu_k, and we let \bar{\mu_.} = \frac{1}{k}\sum_{i=1}^{k}\mu_i.

Main effect of within factor B

Let the contrast matrix be


B_{(p-1)\times p}=\begin{bmatrix}
1 & -1 & \cdots & 0 & 0\\ 
0 & 1 & \cdots & 0 & 0\\ 
\vdots & \vdots & \ddots & \vdots & \vdots\\ 
0 & 0 & \cdots & 1 & -1
\end{bmatrix}.

In order to test H_0: B\bar{\mu_.} = 0, we can compute the values of Wilks' Lambda, the Hotelling-Lawley Trace, Pillai's Trace and Roy's Largest Root. The SS&CP matrices are:

S_H = BH_{int}B^{T},S_E = BEB^{T}.

Note: All sums of squares are calculated as Type III.
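
Continuing from the previous sketch (reusing E, H_int and p defined there), the sketch below builds the difference-contrast matrix B displayed above and forms the two SS&CP matrices; it is illustrative only.

```python
# Build the (p-1) x p difference-contrast matrix B and the SS&CP matrices
# for the main effect of the within factor B.
import numpy as np

def diff_contrast(p):
    """Rows (1,-1,0,...), (0,1,-1,...), ..., (0,...,1,-1), as displayed above."""
    B = np.zeros((p - 1, p))
    for r in range(p - 1):
        B[r, r], B[r, r + 1] = 1.0, -1.0
    return B

B = diff_contrast(p)
S_H = B @ H_int @ B.T       # hypothesis SS&CP for the within-factor main effect
S_E = B @ E @ B.T           # error SS&CP
```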

Interaction effect of B*A

The null hypothesis is H_0: B\mu_1 = B\mu_2 = \cdots = B\mu_k. The SS&CPs are:

S_H = BHB^{T},S_E = BEB^{T}.
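
The four multivariate statistics are computed from any hypothesis/error SS&CP pair, so one helper covers both the main effect of B (S_H = BH_{int}B^T) and the A*B interaction (S_H = BHB^T). The sketch below uses the standard textbook definitions; it is not Origin's internal routine.

```python
# Wilks' Lambda, Pillai's Trace, Hotelling-Lawley Trace and Roy's Largest Root
# from a hypothesis SS&CP matrix S_H and an error SS&CP matrix S_E.
import numpy as np

def multivariate_stats(S_H, S_E):
    eig = np.linalg.eigvals(np.linalg.solve(S_E, S_H)).real   # eigenvalues of S_E^{-1} S_H
    wilks = np.linalg.det(S_E) / np.linalg.det(S_E + S_H)     # Wilks' Lambda
    pillai = np.trace(S_H @ np.linalg.inv(S_H + S_E))         # Pillai's Trace
    hotelling = eig.sum()                                      # Hotelling-Lawley Trace
    roy = eig.max()                                            # Roy's Largest Root
    return wilks, pillai, hotelling, roy
```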

Mauchly's Test of Sphericity

Let the design matrix be


X = \begin{bmatrix}
1_{n_1\times 1} & 1_{n_1\times 1} &  &  & \\ 
1_{n_2\times 1} &  & 1_{n_2\times 1} &  & \\ 
\vdots &  &  & \ddots & \\ 
1_{n_k\times 1} &  &  &  & 1_{n_k\times 1}
\end{bmatrix}.

The residual matrix is obtained by R = Y - X\left( (X^TX)^{-}X^TY \right), where Y is the N\times p matrix whose rows are the y_{ij}^{T} and (X^TX)^{-} denotes a generalized inverse (X as defined above is not of full column rank).

Let M be a (p-1)\times p orthogonal contrast matrix, which can be set as


M = \begin{bmatrix}
p-1 & -1 & \cdots & -1 & -1 \\ 
0 & p-2 & \cdots & -1 & -1 \\ 
\vdots & \vdots & \ddots & \vdots & \vdots \\ 
0 & 0 & \cdots & 1 & -1
\end{bmatrix}.

Let T = M(R^TR)M^T.

Let d=p-1, \tau=\frac{2d^2+d+2}{6d}-(N-r_X), and \varsigma  = \frac{(d+2)(d-1)(d-2)(2d^3+6d^2+3d+2)}{288d^2\tau^2}, where r_X = rank(X).

Then Mauchly's W Statistic is

W = \frac{det(T)}{(tr(T)/d)^d}.

The chi-square test statistic is \chi^2 = \ln(W)\tau, with degrees of freedom df = d(d+1)/2-1.
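
The sketch below assembles Mauchly's W and its chi-square approximation from the steps above. It assumes the rows of M are rescaled to unit length (orthonormal contrasts, as the standard W statistic requires), uses a pseudoinverse because X is rank deficient, and appends the usual upper-tail chi-square probability; the data layout and names are illustrative only.

```python
# Mauchly's W and its chi-square approximation for a list of (n_i x p) group blocks.
import numpy as np
from scipy import stats

def mauchly(groups):
    n = [g.shape[0] for g in groups]
    k, p = len(groups), groups[0].shape[1]
    N, d = sum(n), p - 1

    Y = np.vstack(groups)
    # Design matrix X: a column of ones followed by one indicator column per group.
    X = np.column_stack([np.ones(N)] + [np.repeat(np.eye(k)[:, i], n) for i in range(k)])
    r_X = np.linalg.matrix_rank(X)
    R = Y - X @ np.linalg.pinv(X) @ Y                 # residual matrix

    # Helmert-type contrasts as displayed above, each row normalized to unit length.
    M = np.zeros((d, p))
    for r in range(d):
        M[r, r] = p - 1 - r
        M[r, r + 1:] = -1.0
        M[r] /= np.linalg.norm(M[r])

    T = M @ R.T @ R @ M.T
    W = np.linalg.det(T) / (np.trace(T) / d) ** d
    tau = (2 * d**2 + d + 2) / (6 * d) - (N - r_X)
    chi2 = np.log(W) * tau
    df = d * (d + 1) // 2 - 1
    return W, chi2, df, stats.chi2.sf(chi2, df)
```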

  • Greenhouse-Geisser

\varepsilon_{gg}  = \frac{tr(T)^2}{tr(T^TT)d}.

  • Huynh-Feldt

\varepsilon_{hf}  = \min\left( \frac{Nd\varepsilon_{gg}-2}{d(N-r_X)-d^2\varepsilon_{gg}} , 1\right).

  • Lower-Bound

\varepsilon_{lb}  = \frac{1}{d}.

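Continuing from the Mauchly sketch (T, N, r_X and d as computed there), the three corrections above can be evaluated directly from T; this is a sketch, not Origin's implementation.

```python
# Greenhouse-Geisser, Huynh-Feldt and lower-bound epsilons from T.
import numpy as np

def sphericity_epsilons(T, N, r_X, d):
    eps_gg = np.trace(T) ** 2 / (np.trace(T.T @ T) * d)                        # Greenhouse-Geisser
    eps_hf = min((N * d * eps_gg - 2) / (d * (N - r_X) - d**2 * eps_gg), 1.0)  # Huynh-Feldt
    eps_lb = 1.0 / d                                                           # lower bound
    return eps_gg, eps_hf, eps_lb
```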

Within and Between Tests

Some basic calculations:

  • Sum Square of Total:

SS_T = \sum_{i,k,j}(y_{ikj}-\bar{y_{...}})^2, with degrees of freedom df = Np-1.

  • Sum Square of Between Subjects:

SS_{bs} = \sum_{i,k,j}(\bar{y_{ik.}}-\bar{y_{...}})^2, with degrees of freedom df = N-1.

  • Sum Square of Within Subjects:

SS_{ws} = \sum_{i,k,j}(y_{ikj}-\bar{y_{ik.}})^2, with degrees of freedom df = Np-N.

Where

\bar{y_{i..}} = \frac{1}{n_ip}\sum_{k,j}y_{ikj}, \bar{y_{...}} = \frac{1}{Np}\sum_{i,k,j}y_{ikj}, \bar{y_{..j}} = \frac{1}{N}\sum_{i,k}y_{ikj}, \bar{y_{i.j}} = \frac{1}{n_i}\sum_{k}y_{ikj}, \bar{y_{ik.}} = \frac{1}{p}\sum_{j}y_{ikj}, N = \sum_{i=1}^{k}n_i.
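
A numerical sketch of these basic sums of squares follows, storing the data as a list of n_i × p blocks so that a row mean is \bar{y_{ik.}}; the layout and names are illustrative assumptions only.

```python
# Total, between-subjects and within-subjects sums of squares.
import numpy as np

rng = np.random.default_rng(1)
n = [5, 6, 7]                                        # n_i subjects per level of A
groups = [rng.normal(size=(ni, 4)) for ni in n]      # k = 3 groups, p = 4 repeated measures
k, p = len(groups), groups[0].shape[1]
N = sum(n)

Y = np.vstack(groups)                                # N x p, one row per subject
grand = Y.mean()                                     # \bar{y}_{...}
subj = Y.mean(axis=1)                                # \bar{y}_{ik.}, one value per subject

SS_T  = ((Y - grand) ** 2).sum()                     # total,            df = Np - 1
SS_bs = p * ((subj - grand) ** 2).sum()              # between subjects, df = N - 1
SS_ws = ((Y - subj[:, None]) ** 2).sum()             # within subjects,  df = Np - N
```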

Tests of Within-Subjects Effects

  • Sum Square of factor B for Within test

SSW_B = \sum_{i,k,j}(\bar{y_{..j}}-\bar{y_{...}})^2, with degrees of freedom df = p-1.

  • Sum Square of interaction A*B for Within test

SSW_{AB} = \sum_{i,k,j}(\bar{y_{i.j}}-\bar{y_{i..}} - \bar{y_{..j}} + \bar{y_{...}})^2, with degrees of freedom df = (k-1)(p-1).

  • Sum Square of error (factor B) for Within test

SSW_{E} = \sum_{i,k,j}(y_{ikj}-\bar{y_{i.j}} - \bar{y_{ik.}} + \bar{y_{i..}})^2, with degrees of freedom df = (N-k)(p-1).
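
Continuing from the previous sketch, the within-subjects sums of squares can be accumulated group by group. The F ratios at the end are the usual mean-square ratios (effect over its error term), which the text above does not spell out; they are included as an assumption about the standard test.

```python
# Within-subjects sums of squares and the usual F ratios.
import numpy as np

time_mean = Y.mean(axis=0)                           # \bar{y}_{..j}
grp_mean  = np.array([g.mean() for g in groups])     # \bar{y}_{i..}
grp_time  = [g.mean(axis=0) for g in groups]         # \bar{y}_{i.j}

SSW_B  = N * ((time_mean - grand) ** 2).sum()                                  # df = p - 1
SSW_AB = sum(ni * ((gt - gm - time_mean + grand) ** 2).sum()
             for ni, gt, gm in zip(n, grp_time, grp_mean))                     # df = (k-1)(p-1)
SSW_E  = sum(((g - gt - g.mean(axis=1, keepdims=True) + gm) ** 2).sum()
             for g, gt, gm in zip(groups, grp_time, grp_mean))                 # df = (N-k)(p-1)

F_B  = (SSW_B / (p - 1)) / (SSW_E / ((N - k) * (p - 1)))
F_AB = (SSW_AB / ((k - 1) * (p - 1))) / (SSW_E / ((N - k) * (p - 1)))
```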

Tests of Between-Subjects Effects

  • Sum Square of intercept for Between test

SSB_{int1} = \frac{p}{\sum_{i=1}^{k}\frac{1}{n_i}}\left( \sum_{i}\sum_{k}\frac{\bar{y_{ik.}}}{n_i}  \right)^2, with degrees of freedom df = 1.

  • Sum Square of intercept for Between test (when the intercept is set to 0)

SSB_{int0} = Y^{T}X_{A}(X_{A}^{T}X_{A})^{-1}X_{A}^{T}Y, with degrees of freedom df = k-1. Here X_A is the design matrix associated with effect A, and Y is the Np \times 1 vector of the stacked observations.

  • Sum Square of between factor A for Between test

SSB_A = \sum_{i,k,j}(\bar{y_{i..}}-\bar{y_{...}})^2, with degrees of freedom df = k-1.

  • Sum Square of error (factor A) for Between test

SSB_E = \sum_{i,k,j}(\bar{y_{ik.}}-\bar{y_{i..}})^2, with degrees of freedom df = N-k.
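
Continuing with the same variables, a sketch of the between-subjects sums of squares; SSB_int follows the unweighted-means (Type III) intercept formula above, and F_A is again the usual mean-square ratio, included here as an assumption about the standard test.

```python
# Between-subjects sums of squares and the F ratio for factor A.
import numpy as np

grp_mean      = np.array([g.mean() for g in groups])         # \bar{y}_{i..}
subj_by_group = [g.mean(axis=1) for g in groups]              # \bar{y}_{ik.} within each group

SSB_int = p * grp_mean.sum() ** 2 / sum(1.0 / ni for ni in n)                       # df = 1
SSB_A   = p * sum(ni * (gm - grand) ** 2 for ni, gm in zip(n, grp_mean))            # df = k - 1
SSB_E   = p * sum(((s - gm) ** 2).sum() for s, gm in zip(subj_by_group, grp_mean))  # df = N - k

F_A = (SSB_A / (k - 1)) / (SSB_E / (N - k))
```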

Multiple Means Comparisons

There are various methods for multiple means comparisons in Origin, and the ocstat_dlsm_mean_comparison() function is used to perform them.

There are two types of multiple means comparison methods:

  • Single-step methods. These construct simultaneous confidence intervals to show how the means differ, and include the Tukey-Kramer, Bonferroni, Dunn-Sidak, Fisher's LSD and Scheffé methods.

  • Stepwise methods. These perform the hypothesis tests sequentially, and include the Holm-Bonferroni and Holm-Sidak tests.
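
To illustrate the distinction, the sketch below contrasts a single-step adjustment (Bonferroni) with a stepwise one (Holm-Bonferroni) applied to a set of pairwise p-values. It is generic p-value adjustment code for illustration, not the ocstat_dlsm_mean_comparison() implementation.

```python
# Single-step (Bonferroni) versus stepwise (Holm-Bonferroni) p-value adjustment.
import numpy as np

def bonferroni(pvals):
    m = len(pvals)
    return np.minimum(np.asarray(pvals, dtype=float) * m, 1.0)   # every p scaled by m

def holm_bonferroni(pvals):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):                # step down from the smallest p
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(running_max, 1.0)
    return adj

print(bonferroni([0.01, 0.02, 0.30]))        # [0.03 0.06 0.9 ]
print(holm_bonferroni([0.01, 0.02, 0.30]))   # [0.03 0.04 0.3 ]
```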