Understanding the Output of summary(glmer(…)) in R
In this article, we will delve into the output of the summary(glmer(...)) function in R, which is used to summarize the results of a generalized linear mixed model (GLMM). We will explore what each part of the output represents and how to interpret it.
What is a Generalized Linear Mixed Model (GLMM)?
A GLMM is a statistical model that extends the generalized linear model (GLM) to include both fixed and random effects. In R, the glmer function from the lme4 package fits GLMMs; like glm, it handles non-Gaussian responses (binomial, Poisson, and so on), but it additionally models grouping structure in the data through random effects.
The Output of summary(glmer(…)) Explained
The output of summary(glmer(...)) provides several key pieces of information about the model:
- AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion): These metrics balance goodness of fit against model complexity. When comparing models fitted to the same data, lower values indicate a better trade-off.
- logLik and deviance: These are the log-likelihood of the fitted model and the deviance, both of which measure how well the model fits the data.
- df.resid: This is the residual degrees of freedom, i.e. the number of observations minus the number of estimated parameters.
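Since the article's own data are not shown, here is a minimal sketch of how to extract each of these quantities, using lme4's built-in cbpp data as a stand-in model:

```r
library(lme4)

# Binomial GLMM on lme4's built-in cbpp data, used here as a
# stand-in for the article's (unshown) model.
gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp, family = binomial)

# The fit metrics reported at the top of summary(gm1):
AIC(gm1)          # Akaike Information Criterion
BIC(gm1)          # Bayesian Information Criterion
logLik(gm1)       # log-likelihood
deviance(gm1)     # deviance
df.residual(gm1)  # residual degrees of freedom
```

Each accessor works on any fitted merMod object, so you can compare candidate models with, for example, `AIC(m1, m2)`.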
Fixed Effects
The fixed effects part of the output shows the estimated coefficients for each predictor variable in the model. These estimates represent the change in the response, on the link scale (e.g. log-odds for a binomial model, log counts for a Poisson model), for a one-unit change in the predictor variable, while holding all other predictors constant.
In the example provided, the model has two predictor variables, closed_chosen_stim and day, plus an intercept. (The term (1 | ID) is not a fixed effect; it is the random intercept, discussed below.) The fixed effects output shows the estimated coefficients:
- closed_chosen_stim: 0.4783
- day: -0.2476
These estimates can be interpreted in the context of the model, but we need to consider the standard errors and p-values associated with each coefficient.
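As a sketch (again using the cbpp stand-in model, since the original data are not available), the estimates and their standard errors, z values, and p-values can be pulled out programmatically:

```r
library(lme4)

# Stand-in fit on lme4's built-in cbpp data.
gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp, family = binomial)

# Named vector of fixed-effect estimates only:
fixef(gm1)

# Full fixed-effects table: Estimate, Std. Error, z value, Pr(>|z|)
coef(summary(gm1))
```

The matrix returned by `coef(summary(...))` is convenient for building your own results tables.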
Random Effects
The random effects part of the output shows the estimated variance components for each grouping variable (in this case, ID). A variance component represents the amount of variation in the response attributable to differences between the levels of that grouping factor.
In this example, we have one random effect term (1 | ID), which means that the model includes a random intercept for each ID group. The output shows the estimated variance for this term:
- Variance: 0
- Standard Deviation: 0
A random-intercept variance of exactly zero means the fit is singular: the model estimates no variation between ID groups, so the random intercept contributes nothing and the fit effectively reduces to an ordinary GLM. Recent versions of lme4 flag this situation with a "boundary (singular) fit" message.
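The following sketch reproduces this situation with hypothetical simulated data, using a grouping factor that has no real effect so the random-intercept variance is estimated at (or very near) zero:

```r
library(lme4)

# Hypothetical data: the grouping factor g has no real effect,
# so the random-intercept variance should be estimated at (near) zero.
set.seed(1)
d <- data.frame(y = rpois(90, lambda = 3),
                g = gl(9, 10))  # 9 groups of 10 observations
m <- glmer(y ~ 1 + (1 | g), data = d, family = poisson)

VarCorr(m)      # variance and standard deviation for (1 | g)
isSingular(m)   # TRUE when a variance component sits on the boundary
```

`isSingular()` is the documented way to test for a boundary fit rather than eyeballing the printed variance.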
Weights (Prior Weights)
The weights in a GLM or GLMM are not probabilities attached to predictor variables; they are per-observation prior weights that control how much each observation contributes to the likelihood. For a binomial model fitted to aggregated data, for example, the prior weight of an observation is its number of trials.
In the example provided, every observation has a weight of 1, which is the default: each observation contributes equally to the fit, and none is up- or down-weighted.
Accessor: weights(object, type = "working")
The weights() accessor returns per-observation weights from the fitted model. By default (type = "prior") it returns the prior weights described above, which are all 1 unless you supplied others; with type = "working" it instead returns the working weights from the final iteration of the (penalized) iteratively reweighted least squares fit.
The check usually shown alongside this example is not about weights at all; it compares the fixed-effect estimates from glmer with the coefficients from the corresponding glm fit:
all.equal(fixef(glmer.D93), coef(glm.D93)) ## TRUE
This confirms that when the random-effect variance is estimated as zero, the glmer fixed effects coincide with the ordinary glm coefficients.
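As a sketch of the two weight types (using the cbpp stand-in model, since the article's data are not shown):

```r
library(lme4)

# Stand-in fit; the response is given as a success/failure matrix,
# so the prior weights are the binomial trial sizes.
gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp, family = binomial)

# Prior weights: one value per observation.
head(weights(gm1))

# Working weights from the final IRLS step; generally different
# from the prior weights for non-Gaussian families.
head(weights(gm1, type = "working"))
```

Both calls return one value per row of the data, which is another reminder that weights belong to observations, not predictors.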
Conclusion
In this article, we explored the output of summary(glmer(...)) in R and provided an explanation for each part of the summary. We also demonstrated how to access the per-observation prior weights using the weights() accessor and how the glmer fixed effects relate to ordinary glm coefficients. By understanding these concepts, you can gain a deeper insight into your models and make more informed decisions about model selection and interpretation.
Trivial Example: Matching GLM and glmer Results
The example provided in the original post demonstrates that the results of glm() (a generalized linear model with fixed effects only) match those obtained from glmer() (a generalized linear mixed model). This happens because the estimated variance of the random-intercept term (1 | ID) is zero, so the random effect contributes nothing to the fit.
More generally, glm() and glmer() fits are directly comparable only when the random-effects structure in glmer() explains no variation. The example code fits a model with a dummy grouping variable simply to make glmer() run; such a dummy term does not reproduce the grouping structure of the original post's GLMM.
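The comparison can be sketched with the Poisson example from ?glm (Dobson, 1990), adding a hypothetical dummy grouping factor purely so that glmer() can be fitted; the dummy variable and its levels are my own illustration, not from the original post:

```r
library(lme4)

# Poisson example data from ?glm, plus a hypothetical dummy grouping
# factor (unrelated to the response) so glmer() has a random term.
d.AD <- data.frame(counts    = c(18, 17, 15, 20, 10, 20, 25, 13, 12),
                   outcome   = gl(3, 1, 9),
                   treatment = gl(3, 3),
                   dummy     = factor(c(1, 2, 3, 2, 3, 1, 3, 1, 2)))

glm.D93   <- glm(counts ~ outcome + treatment, data = d.AD,
                 family = poisson)
glmer.D93 <- glmer(counts ~ outcome + treatment + (1 | dummy),
                   data = d.AD, family = poisson)

# With the random-intercept variance estimated at (essentially) zero,
# the fixed effects should agree closely with the glm coefficients:
all.equal(fixef(glmer.D93), coef(glm.D93), tolerance = 1e-4)
```

The agreement holds precisely because the dummy grouping explains nothing; with a grouping factor that does explain variation, the two fits would diverge.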
By following these steps and understanding the output of summary(glmer(...)) in R, you can better interpret the results of your generalized linear mixed models and make more informed decisions about model selection and interpretation.
Last modified on 2023-06-01