lcca {lcca} | R Documentation
Fits a latent-class causal model using an EM algorithm.
lcca(formula.treatment, formula.outcome, data, nclass = 2, reference = 1,
   outcome.distribution = "NORMAL", iseeds = NULL, iter.max = 5000,
   tol = 1e-06, starting.values = NULL, flatten.rhos = 0,
   stabilize.alphas = 0, stabilize.betas = 0, flatten.gammas = 0,
   se.method = "STANDARD", r.matrix = NULL, freq, weights, clusters,
   strata, subpop)
formula.treatment |
an object of class formula specifying the latent-class treatment model; see DETAILS. |
formula.outcome |
an object of class formula specifying the model for the potential outcomes; see DETAILS. |
data |
an optional data frame, list or environment containing
the variables in the model. If not found in data, the variables are taken from the environment from which lcca is called. |
nclass |
number of latent classes to be fit. Response items are assumed to be conditionally independent within classes. |
reference |
latent class to be used as the reference group in the baseline-category logistic model for class membership. |
outcome.distribution |
the distribution assumed for the outcome variable; defaults to "NORMAL". |
iseeds |
two integers to initialize the random number generator; see DETAILS. |
iter.max |
maximum number of iterations to be performed. Each iteration consists of an Expectation or E-step followed by a Maximization or M-step. The procedure halts if it has not converged by this many iterations. |
tol |
convergence criterion. The procedure halts if the maximum absolute change in all parameters (gammas and rhos) from one iteration to the next falls below this value. |
starting.values |
optional starting values for the model
parameters. This must be a list with four named components,
rho, alpha, beta and sigma2; see DETAILS. |
flatten.rhos |
optional flattening constant for estimation of rho-parameters (item-response probabilities); see DETAILS. |
stabilize.alphas |
optional constant for stabilizing the estimated logistic coefficients; see DETAILS. |
stabilize.betas |
optional constant for stabilizing the estimated logistic coefficients; see DETAILS. |
flatten.gammas |
optional flattening constant for estimation of marginal gamma-parameters (class prevalences); see DETAILS. |
se.method |
method to use for computing standard errors; see DETAILS. |
r.matrix |
optional matrix specifying the inestimable correlations among the potential outcomes; see DETAILS. |
freq |
optional frequency or count variable, to be used when data are aggregated. |
weights |
optional numeric variable containing sampling weights for survey data; see DETAILS. |
clusters |
optional integer or factor variable containing sampling cluster identifiers; see DETAILS. |
strata |
optional integer or factor variable containing sampling stratum identifiers; see DETAILS. |
subpop |
optional logical variable indicating a subpopulation to which the model is to be fit; see DETAILS. |
This function computes model-based estimates of average
treatment effects in populations where the treatment variable is a latent
class. Treatment effects are defined in terms of potential outcomes, the
hypothetical results that could have been seen for each individual
under the different treatments.
The model assumed for the treatment is a latent-class model with
covariates of the type that can be fit with lcacov
. The
model for each potential outcome is a linear regression of the
response on another, possibly different, set of covariates.
Estimated parameters are computed by an EM algorithm.
The formula.treatment
argument should have the form
cbind(U1,U2,...)~X1+X2+...
. The term on the left-hand
side of ~
is a matrix (not a data frame) of items that
measure class membership. Each item should consist of
integer codes 1,2,...
. Items may also be factors,
in which case they will be converted to integer codes (as in the
function unclass
), and the levels
of the factors will be
ignored. The terms on the right-hand side of formula
are
covariates that predict class membership.
Missing values in U1,U2,...
are allowed and should be
conveyed by the R missing value code NA
. Cases with missing
items are retained in the fitting procedure, and the missing
values are assumed to be ignorably missing or missing at random.
The formula.outcome
argument should have the form
Y~Z1+Z2+...
. The variable on the left-hand
side of ~
is the outcome, which should be numeric. The terms
on the right-hand side of the model are covariates predicting the
outcome. Missing values in Y
are allowed and should be
conveyed by the R missing value code NA
.
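For example, a call with three categorical items measuring the latent
treatment and a numeric outcome might be sketched as follows (all
variable names here are hypothetical):

```r
# U1, U2, U3 are items coded 1,2,... (or factors); Y is numeric
# X1, X2 predict class membership; Z1, Z2 predict the outcome
fit <- lcca(formula.treatment = cbind(U1, U2, U3) ~ X1 + X2,
            formula.outcome   = Y ~ Z1 + Z2,
            data = mydata, nclass = 2)
```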
The number of latent treatment classes to be fit is determined by
nclass
, which must be at least 2.
The unknown parameters to be estimated are: the coefficients in the logistic treatment model, which we call alpha's; the item-response probabilities, which we call rho's; the coefficients in the linear model for potential outcomes, which we call beta's; and the residual variances for the potential outcomes, which we call sigma-squared's. Estimation proceeds by an EM algorithm which, by default, computes maximum-likelihood estimates of these parameters. EM may converge to a global or local maximum, possibly on the boundary of the parameter space where one or more rho-parameters are zero. Sparseness in the data may cause one or more logistic coefficients to diverge to plus or minus infinity. These conditions may be remedied through the use of flattening and stabilizing constants, which are described below.
Upon convergence, population average class prevalences, which we call marginal gamma's, are estimated by averaging the estimated posterior probabilities of class membership across individuals. Estimates are provided for the means of the potential outcomes in the overall population, which we call mu's. Contrasts among these means are the average treatment effects. This function also estimates the means of the potential outcomes within each treatment class. Contrasts among these are class-specific treatment effects.
Optional starting values for parameters are provided through
starting.values
. This argument should be a list with four
named components: rho
, alpha
, beta
and
sigma2
. The component rho
should be an array of
dimension c(nitems,maxlevs,nclass)
, where nitems
is the
number of items on the left-hand side of formula.treatment
,
maxlevs
is the maximum number of levels (distinct response
categories) among the items, and nclass
is the number of
treatment classes. The element starting.values$rho[j,k,c]
is
the probability that an individual in class c
supplies a
response of k
to item j
. The component alpha
should be a matrix of dimension c(ncovs.alpha,nclass)
, where
ncovs.alpha
is the number of predictors in the logistic
treatment model (including a constant term for the intercept, if
present). The elements of starting.values$alpha[,c]
are the
coefficients determining the log-odds of membership in class c
,
versus the reference class. If c
is the reference class, then
all elements of starting.values$alpha[,c]
must be zero. The
component beta
should be a matrix of dimension
c(ncovs.beta,nclass)
, where
ncovs.beta
is the number of predictors in the outcome
model (including a constant term for the intercept, if
present). The elements of starting.values$beta[,c]
are the
coefficients for predicting the potential outcomes for class c
.
The component sigma2
should be a numeric vector of length
nclass
containing residual variances for the potential
outcomes.
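Under these conventions, a starting-values list for a model with, say,
three items, two response categories per item, two classes, and
intercept-only covariate matrices could be sketched as follows (the
numeric values are illustrative, not recommended defaults):

```r
# illustrative dimensions: nitems = 3, maxlevs = 2, nclass = 2,
# ncovs.alpha = ncovs.beta = 1 (intercept only)
sv <- list(
  rho    = array(0.5, dim = c(3, 2, 2)),    # item-response probabilities
  alpha  = matrix(0, nrow = 1, ncol = 2),   # reference-class column must be zero
  beta   = matrix(0, nrow = 1, ncol = 2),   # outcome-model coefficients
  sigma2 = c(1, 1)                          # residual variances
)
# then: lcca(..., starting.values = sv)
```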
Because only one potential outcome can be seen for each individual, the
observed data provide no information about the correlations among the
potential outcomes. Inferences about average treatment effects are
insensitive to assumptions about these correlations. Nevertheless,
assumed values for these correlations may be provided via the argument
r.matrix
, which should be a symmetric, positive definite matrix
of dimension c(nclass,nclass)
with diagonal elements equal to
1
. If none is supplied, then correlations among potential
outcomes are set to zero.
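For instance, to assume a common correlation of 0.5 among the
potential outcomes in a two-class model, one could supply:

```r
R <- matrix(c(1.0, 0.5,
              0.5, 1.0), nrow = 2)  # symmetric, positive definite, unit diagonal
# lcca(..., nclass = 2, r.matrix = R)
```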
This function, unlike lca
and lcacov
,
does not support multi-group analyses.
If starting.values
is not supplied, or if
starting.values$rho=NULL
, then starting values for the
rho-parameters will be randomly generated. This function uses its own
internal random number generator which is seeded by two integers, for
example, iseeds=c(123,456)
, which allows results to be
reproduced in the future. If iseeds=NULL
then the function will
seed itself with two random integers from R. Therefore, results can
also be made reproducible by calling set.seed
beforehand
and taking iseeds=NULL
. Different starting values for the
rho's may lead to solutions in which the classes have different
orderings. To reorder the classes in the printed output summaries,
use the function permute.class
. If
starting.values$alpha=NULL
, then starting values for the
alpha's will be set to zero, which implies uniform class-membership
probabilities in all classes for all individuals. If
starting.values$beta=NULL
, then starting values for the beta's
will be determined by an ordinary least-squares (OLS) regression of
the observed outcome on the predictors in the outcome model, and will
be assumed to be equal across classes, which implies that all average
treatment effects are zero. If starting.values$sigma2=NULL
, then
starting values for the sigma-squared's will be determined by the
residual variance from the OLS regression.
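The two seeding strategies described above can be sketched as:

```r
# Strategy 1: seed the internal generator directly
fit1 <- lcca(..., iseeds = c(123, 456))

# Strategy 2: seed R's generator, then let lcca() draw its own seeds
set.seed(123)
fit2 <- lcca(..., iseeds = NULL)
```

Either call can be rerun later to reproduce the same random starting
values for the rho-parameters.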
Rho-parameters at or near a boundary, which are commonplace in
latent-class analysis, create difficulty when computing
derivative-based standard errors, because the likelihood function at
the solution may not be log-concave. The argument flatten.rhos
allows the user to supply a positive flattening constant to smooth the
estimated rho's toward the interior of the parameter space. A value
of flatten.rhos=1
, which should be adequate in most cases,
supplies information equivalent to one prior observation for each
response item in each class and each group, distributed fractionally
across the response categories in equal amounts.
During the estimation procedure, one or more estimated
alpha-parameters may diverge toward plus or minus infinity, a
condition that is sometimes called quasi-separation (Agresti, 2002).
It suggests that all the individuals in a given class have identical
values for one or more covariates, or that a group of covariates is
collinear within a class. This tends to happen in sparse-data
situations where the number of covariates is large or one or more
classes are rare. If the logistic estimation procedure fails to
converge, one possible remedy is to eliminate some covariates from the
model. Another possibility is to introduce a small amount of prior
information to stabilize the coefficients. Clogg et al. (1991)
described a procedure for sparse logistic regression that smooths the
estimated slopes toward zero. It can be viewed as an application of a
penalty function or a data-dependent prior distribution. In effect,
this prior adds a fictitious fractional number of observations to each
case in the dataset, distributed across classes in proportions
determined by the class prevalences from a model without
covariates. The total number of fictitious observations is equal to
ncovs.alpha
multiplied by stabilize.alphas
. Setting
stabilize.alphas=1
should, in most cases, be adequate to solve
the problem. If the value of stabilize.alphas
is positive,
then lcca
will first run the EM algorithm for a treatment model
without covariates to estimate the class prevalences. Poor starting
values may cause one or more class prevalences from this model to go
to zero. Prevalences for the model without covariates may be flattened
by the argument flatten.gammas
, and a value of 1
should
usually be adequate to solve the problem. If
stabilize.alphas=0
, then flatten.gammas
will be ignored.
In summary, any problems due to
sparseness, boundary estimates or poor starting values can usually be
overcome by setting flatten.rhos=1
, stabilize.alphas=1
and flatten.gammas=1
.
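That recommended remedy can be applied in a single call, sketched
below (formula and data arguments stand in for a full specification):

```r
# smooth the rho's, stabilize the alpha's, and flatten the
# class prevalences of the preliminary no-covariate model
fit <- lcca(formula.treatment, formula.outcome, data = mydata,
            flatten.rhos = 1, stabilize.alphas = 1, flatten.gammas = 1)
```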
Acceptable values for the argument se.method
are
"STANDARD"
,"FAST"
, "SANDWICH"
or "NONE"
.
If "STANDARD"
, then standard errors are obtained by inverting the
matrix of (minus one times) the second derivatives of the loglikelihood
function (plus penalty terms for flattening constants, if present) at
the solution. If "FAST"
, then the matrix of second derivatives
is approximated by the sum of the outer products of individuals'
contributions to the score functions (first derivatives). If
"SANDWICH"
, then the sum of the score outer-products is pre-
and post-multiplied by the inverse of the second derivatives. If
"NONE"
, then computation of first- and second-derivatives and
standard errors is suppressed.
By default, each case (row) of data
or the model environment is
assumed to represent one observational unit or individual. Data may
also be aggregated, with individuals bearing identical responses to all
variables (including NA
's, if present) collapsed into a single
case, with frequencies conveyed through the numeric variable
freq
.
Survey weights may be supplied via the argument weights
.
Weights should not be confused with frequencies as supplied by
freq
. A frequency of 10
indicates that ten individuals
in the sample exhibited the given pattern of responses, but a survey
weight of 10
indicates that one sampled individual is
representing ten individuals in the population. The same variable
supplied as freq
or weights
will lead to identical
estimates, but the standard errors may be drastically different. You
cannot supply both freq
and weights
; data from a survey
with unequal probabilities of selection must be supplied in
disaggregated form. If a weights
variable is present, then
se.method
is automatically set to "SANDWICH"
, and the
inner matrix of the sandwich formula (i.e., the meat of the sandwich)
is an estimated covariance matrix for the total quasi-score.
Weights provided with large, nationally-representative survey datasets
are often very large, because their sum estimates the size of the
population. Many data analysts are accustomed to rescaling survey
weights to have a mean of one, so that they sum to the sample size
rather than the population size. Rescaling a weights
variable
will have no effect on estimates or standard errors, because the scale
factor cancels out when se.method="SANDWICH"
.
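For example, rescaling weights to have mean one changes neither the
estimates nor the standard errors (w is a hypothetical weight
variable):

```r
# w.scaled now sums to the sample size rather than the population size
mydata$w.scaled <- mydata$w / mean(mydata$w)
# lcca(..., weights = w.scaled) gives the same results as weights = w
```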
Additional information about the sample design may be supplied via
clusters
and strata
. If weights
is supplied,
then the sampling plan is assumed to fall within the general class of
with-replacement (WR) designs. At the first stage, clusters are drawn
with replacement. Then individuals are selected within clusters,
possibly with unequal probabilities, possibly in multiple stages. The
clusters
and strata
variables should be integers or
factors. The integers serve merely as identifiers; the actual values
are unimportant. Cluster identifiers are assumed to be unique within
strata, so that cluster 1
in stratum 1
and cluster
1
in stratum 2
are assumed to be different. If
clusters
is not supplied, then each sampled individual is
assumed to be a cluster. If strata
is not supplied, then one
stratum is assumed for the whole population. Note that a weights variable is
required for complex survey data; if weights=NULL
, then
clusters
and strata
will be ignored.
It is often useful to fit a model that describes only a part of the
full population (e.g., females). With a simple random sample, it is
acceptable to remove the sampled individuals who are not in this
subpopulation (e.g., males) from the data frame or model environment,
because those who remain will then be a simple random sample of the
subpopulation of interest. With a complex survey design, however,
discarding individuals who do not belong to the subpopulation may lead
to incorrect standard errors, because the overall design may not scale
down to the subpopulation. To fit a model to a subpopulation when
weights
is present, supply a logical variable subpop
whose elements are TRUE
for members of the subpopulation. If
subpop
is supplied and weights=NULL
, individuals outside
of the subpopulation will be ignored when computing estimates and
standard errors.
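A survey-design call restricted to a subpopulation might look like
this (all variable names are hypothetical):

```r
fit <- lcca(formula.treatment, formula.outcome, data = svydata,
            weights = wt, clusters = psu, strata = stratum,
            subpop = (sex == "F"))  # fit to females, keep the full design
```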
A list whose class
attribute has been set to "lcca"
.
A nicely formatted summary of this object may be seen by applying
the summary
method, but its components may also be accessed
directly. Components which may be of interest include:
ncases |
number of cases (rows) from the data frame or model environment used in the procedure. |
nitems |
number of response variables appearing in the model. |
nlevs |
integer vector of length nitems giving the number of response categories for each item. |
iter |
number of EM iterations performed. |
converged |
logical value indicating whether the algorithm
converged by iter.max iterations. |
loglik |
vector of length iter reporting the loglikelihood or pseudo-loglikelihood after each iteration. |
loglik.final |
value of the loglikelihood or pseudo-loglikelihood function after the final iteration. |
logpost |
vector of length iter reporting the log-posterior density after each iteration. |
logpost.final |
value of the log-posterior density after the final iteration. |
AIC |
Akaike's information criterion (smaller is better). Will
be |
BIC |
Bayesian information criterion (smaller is better). Will
be |
param |
estimated parameters after the final
iteration. This is a list with four named components, rho, alpha, beta and sigma2. |
post.probs |
matrix of estimated posterior probabilities of class membership given the observed items and outcomes. This is a matrix with rows corresponding to cases or rows of the dataset and columns corresponding to classes. |
se.fail |
logical value equal to TRUE if the computation of standard errors failed. |
se.rho |
array with the same dimensions as param$rho containing estimated standard errors for the rho-parameters. |
se.alpha |
array with the same dimensions as param$alpha containing estimated standard errors for the alpha-parameters. |
se.beta |
array with the same dimensions as param$beta containing estimated standard errors for the beta-parameters. |
se.sigma2 |
array with the same dimensions as param$sigma2 containing estimated standard errors for the sigma-squared parameters. |
dim.theta |
number of free parameters estimated in the
model. The free parameters correspond to the elements of
theta. |
theta |
vector of length dim.theta containing the estimated free parameters. |
cov.theta |
estimated covariance matrix for the
parameters in theta. |
score |
vector of first derivatives of the loglikelihood or
pseudo-loglikelihood (plus penalty terms for flattening constants,
if present)
with respect to the free parameters in theta. |
hessian |
matrix of (minus one times) the second derivatives of
the loglikelihood or pseudo-loglikelihood (plus penalty terms for
flattening constants, if present) with respect to the free parameters in
theta. |
sandwich.meat |
inner matrix of the sandwich variance formula.
This is an empirically estimated covariance matrix for the total score or quasi-score. |
deff.trace |
the design-effect trace, if weights is supplied. |
marg.gamma |
estimated marginal class prevalences. This is a vector
of length nclass. |
mu.c |
estimated marginal means for the potential
outcomes. This is a vector of length nclass. |
cov.mu.c |
estimated covariance matrix for mu.c. |
mu.cd |
estimated means for the potential
outcomes within classes. This is a matrix of dimension
c(nclass,nclass). |
cov.mu.cd |
estimated covariance matrices for mu.cd. |
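Given a fitted object, the average treatment effect comparing class 2
to class 1, together with a delta-method standard error, could be
extracted from the components listed above as follows (a sketch for a
two-class model):

```r
# contrast of the marginal potential-outcome means: mu_2 - mu_1
ate <- fit$mu.c[2] - fit$mu.c[1]
# variance of the contrast from the covariance matrix of mu.c
v <- c(-1, 1) %*% fit$cov.mu.c %*% c(-1, 1)
se.ate <- sqrt(as.numeric(v))
```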
Joe Schafer
Send questions to mchelpdesk@psu.edu
Agresti, A. (2002) Categorical Data Analysis (2nd Ed.). New York: Wiley.
Clogg, C.C., Rubin, D.B., Schenker, N., Schultz, B., and Weidman,
L. (1991) Multiple imputation of industry and occupation codes in
census public-use samples using Bayesian logistic
regression. Journal of the American Statistical Association,
86, 68-78.
For more information on using this function and other functions in
the LCCA package, see the manual LCCA Package for R, Version 1
in the subdirectory doc
.
compare.fit
,
lca
,
lcacov
,
lcca.datasim
,
permute.class
,
summary.lcca
## fit a two-class model to dieting study data
data(diet)
set.seed(123)  # for reproducibility
fit <- lcca(
   formula.treatment = cbind(U.1, U.2, U.3) ~ DISTRESS.1 + BLACK + NBHISP +
      GRADE + SLFHLTH + SLFWGHT + WORKHARD + GOODQUAL + PHYSFIT + PROUD +
      LIKESLF + ACCEPTED + FEELLOVD,
   formula.outcome = DISTRESS.2 ~ DISTRESS.1 + BLACK + NBHISP + GRADE +
      SLFHLTH + SLFWGHT + WORKHARD + GOODQUAL + PHYSFIT + PROUD + LIKESLF +
      ACCEPTED + FEELLOVD,
   nclass = 2, outcome.distribution = "NORMAL", data = diet)
summary(fit)