# Quantitative Methods Consulting Corner

Dear Stats Consultant,
I am running a two-way ANOVA and have a couple of questions regarding Type I error inflation: 1) If I am analyzing both main effects and the interaction, should I divide my total probability of a Type I error (the familywise α, FWα) across the three sets of analyses? 2) One of my main effects has 3 levels and the other has 4 levels. What is the best FWα procedure for conducting all pairwise comparisons?

Sincerely,
Bon Faroni

Dear Mr. Faroni,
Before delving into the specifics of your questions, the most important thing to consider is that the appropriate multiplicity control depends on the nature of the research. If your research is exploratory, it might not be necessary to impose multiplicity control at all. In other words, if the consequences of making a Type I error are minimal, does it make sense to impose multiplicity control and thereby reduce statistical power? On the other hand, if a Type I error carries severe consequences, then the strategy changes and efforts must be directed towards minimizing the probability of a false positive.

Now, to your questions. For Question #1, it is not recommended that researchers analyze main effects in the presence of a significant interaction, so there are really only two scenarios: analyze the interaction (a single effect) or analyze the two main effects. Typically no multiplicity control is imposed when following up on an interaction, since the goal is simply to understand its nature. Whether to control for multiplicity over the two main effects is a harder question. It depends on whether the research is exploratory, on one's theoretical position on how to define a 'family' for imposing FWα, and so on. If the research is confirmatory and one subscribes to a conservative FWα strategy, then some sort of control (e.g., splitting the Type I error probability in half and testing each main effect at α/2) is warranted. In that case, it is important to ensure that there is sufficient a priori power for detecting effects at level α/2 rather than α.
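To make the α-splitting concrete, here is a minimal Python sketch. The p-values are hypothetical placeholders, not results from any real analysis:

```python
# Splitting the familywise alpha (Bonferroni-style) across the two
# main-effect tests of a two-way ANOVA. The p-values are hypothetical.
fw_alpha = 0.05
p_main_effects = {"factor_A": 0.031, "factor_B": 0.012}

per_test_alpha = fw_alpha / len(p_main_effects)  # each test run at alpha/2
decisions = {name: p < per_test_alpha for name, p in p_main_effects.items()}

print(per_test_alpha)  # 0.025
print(decisions)       # factor_A: False, factor_B: True
```

Note that the hypothetical factor_A effect (p = .031) would have been significant at α = .05 but not at α/2 = .025, which is precisely why power should be planned at the adjusted level.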

This brings us to Question #2. If a main effect has 3 levels and the interest is in all pairwise comparisons, then Fisher's Least Significant Difference (LSD) procedure ensures FWα control with maximum power: if the omnibus test for that main effect is significant, each of the three pairwise comparisons can be conducted without adjustment. (This protection holds only with exactly three groups.) With four or more levels, a more conservative multiple comparison procedure is necessary. Sequential procedures (e.g., Holm) provide strict familywise error control and can be more powerful than simultaneous procedures (e.g., Bonferroni, Tukey).
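For the 4-level factor, the Holm step-down procedure is easy to implement directly. A minimal sketch, using hypothetical p-values for the six pairwise comparisons:

```python
def holm(pvals, alpha=0.05):
    """Holm step-down: compare the i-th smallest p-value (0-indexed rank)
    to alpha / (m - i); stop at the first non-rejection."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all larger p-values are also retained
    return reject

# Hypothetical p-values for the 6 pairwise comparisons of a 4-level factor.
pvals = [0.004, 0.030, 0.015, 0.20, 0.011, 0.009]
print(holm(pvals))  # [True, False, True, False, True, True]
```

With these p-values, Holm rejects four of the six hypotheses, whereas a simultaneous Bonferroni cutoff of 0.05/6 ≈ .0083 would reject only the smallest one — an illustration of the power advantage of sequential procedures.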

Sincerely,
Stats Consultant

Dear QM Consultant,
I have three subscale scores that I would like to include as predictor variables in a regression model. All of the constructs represented by the subscales are theoretically important predictors. When I run the model with all three subscales included, only two of them are significant. However, running separate regression models with one subscale at a time produces significant results for all three. Why does this happen, and which is the correct model?
Sincerely, Jane Tukey

Dear Jane Tukey,
This is a common problem in regression. Your predictors are likely significant in the individual models but not in the multiple regression model because the coefficients in the full model are partial regression coefficients, whereas in a simple regression they are not. That is, each coefficient in the full model reflects only the unique contribution of its predictor, over and above the others. When predictors share a large amount of variance with one another, that shared variance is not credited to any single predictor — and it is typical for subscale scores to be highly related to one another.
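The pattern is easy to reproduce. As a hypothetical illustration (simulated data, not your subscales), the following Python sketch builds three predictors that share a common factor and compares each simple-regression slope to the partial slopes from the full model:

```python
import numpy as np

# Simulate three "subscales" that all load on one common factor.
rng = np.random.default_rng(42)
n = 500
common = rng.normal(size=n)                                  # shared construct
X = np.column_stack([common + 0.3 * rng.normal(size=n) for _ in range(3)])
y = common + 0.5 * rng.normal(size=n)                        # outcome driven by the shared part

# Simple regressions: each slope captures shared + unique variance.
simple = [np.polyfit(X[:, j], y, 1)[0] for j in range(3)]

# Multiple regression: partial slopes capture only the unique contributions.
Xd = np.column_stack([np.ones(n), X])                        # add intercept
partial = np.linalg.lstsq(Xd, y, rcond=None)[0][1:]

print("simple slopes: ", np.round(simple, 2))   # each close to 0.9
print("partial slopes:", np.round(partial, 2))  # each much smaller
```

Every simple slope is large because each predictor carries the shared signal on its own, while the partial slopes split that same signal three ways — exactly the drop in apparent "significance" you observed.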

It is important for you to determine which variability you are interested in: 1) the shared variability; 2) the unique variability; or 3) the unique plus shared variability. If you are interested in the shared variability, then you could extract the common factors from a factor analysis and use the factor scores as predictors in your regression model, or run a structural equation model with a latent variable (or variables) representing the shared variability among the predictors. If you are interested in the unique variability, then you will want to leave all predictors in the regression. Be sure to check, though, that collinearity is not an issue, since highly correlated predictors inflate the error variances of the coefficient estimates. Lastly, if you are interested in predicting the outcome from both the unique and shared variability, then you could run a separate simple regression for each predictor. Generally this is not what researchers are interested in, and it may provide misleading information about the strength of the relationship between each predictor and the outcome variable (e.g., the sum of the separate R² values does not represent the total variability explained by the three predictors). When running regression models, including all theoretically relevant predictors (whether in raw form, factor form, etc.) is the best way to obtain unbiased regression estimates and maximize the variance accounted for in your outcome variable.
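A standard way to check collinearity is the variance inflation factor (VIF). Below is a small self-contained Python sketch (the data are simulated purely for illustration); values above roughly 5 to 10 are commonly taken as a warning sign:

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j of X on all remaining columns (plus an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return out

# Simulated predictors: x2 is nearly a copy of x1, x3 is independent.
rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + 0.2 * rng.normal(size=n)
x3 = rng.normal(size=n)
vifs = vif(np.column_stack([x1, x2, x3]))
print(np.round(vifs, 1))  # x1 and x2 have large VIFs; x3 is near 1
```

Large VIFs do not mean the model is wrong — only that the individual coefficient estimates (and their significance tests) for the collinear predictors will be unstable, which is the situation described in your letter.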

Sincerely, Your Friendly Neighbourhood Stats Consultant