I recently had two colleagues get their manuscripts rejected from medium-tier journals for having cross-sectional mediation analyses. To be clear, these were their central analyses, not auxiliary analyses. Their main hypotheses concerned the reasons why a predictor and outcome were related. The journal reviewers’ main justification for rejection was that the manuscripts were based on cross-sectional mediation analyses. Ever since Scott E. Maxwell and David A. Cole’s two simulation studies (Maxwell & Cole, 2007; Maxwell, Cole, & Mitchell, 2011) on cross-sectional versus longitudinal mediation, the field of soft psychology (i.e., personality, social, developmental, clinical, I/O, etc.), and social science in general, has cursed the practice of cross-sectional mediation. I plan to argue that this ban against cross-sectional mediation is 1) a massive double standard in social science and 2) bad for the progression of scientific theory.
Many prediction analyses in soft psychology seek to imply causation. There are predictors and there are outcomes, and while authors of manuscripts never say the word “cause,” they say everything but it in their discussion sections. Of course, most studies in soft psychology are observational; they are not experimental designs with randomized control groups. Therefore, causation cannot be determined. However, not all observational studies support equally strong causal inference.
There are three requirements for causality: 1) correlation, 2) temporal precedence, and 3) removal of confounding (aka third, extraneous) variables. A cross-sectional study satisfies only the first – correlation. A longitudinal study satisfies the first and second – correlation and temporal precedence. However, no observational study can satisfy the third – removal of confounding variables. Yet studies can control for key, plausible confounds. For example, in most longitudinal studies the previous time point of the outcome is a key confounding variable. Instead of the predictor at time 1 causing the outcome at time 2, it may simply be that both are correlated with the outcome at time 1 (i.e., the confounding variable). Essentially, the outcome at time 1 causes the outcome at time 2, while the predictor at time 1 does not.
This focus on controlling for the outcome at time 1, or baseline, was popularized by David Rogosa in his seminal 1980 paper, “A critique of cross-lagged correlation.” He critiqued David Kenny’s 1975 paper on – you can guess… cross-lagged correlations. David Kenny suggested the best way to test the causal direction of two variables was to compare their correlations across time. So to determine whether variable A causes variable B or B causes A, one would compare the correlation between variable A at time 1 and variable B at time 2 with the correlation between variable B at time 1 and variable A at time 2. If one of these correlations was statistically significantly larger in magnitude than the other, then there was support for it being the primary causal direction.
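Kenny’s comparison can be sketched in a few lines of Python on simulated data. This is a minimal illustration, not a reproduction of his method: the variable names, sample size, and effect sizes below are all made up, and the data are generated so that A at time 1 truly influences B at time 2 but not the reverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical two-wave data: A1 influences B2, but B1 does not influence A2.
a1 = rng.normal(size=n)
b1 = 0.3 * a1 + rng.normal(size=n)
a2 = 0.7 * a1 + rng.normal(size=n)
b2 = 0.5 * a1 + 0.4 * b1 + rng.normal(size=n)

def corr(x, y):
    """Pearson correlation between two vectors."""
    return np.corrcoef(x, y)[0, 1]

# Kenny-style comparison: each variable at time 1 with the other at time 2.
r_a1_b2 = corr(a1, b2)  # candidate "A causes B" direction
r_b1_a2 = corr(b1, a2)  # candidate "B causes A" direction
print(f"r(A1, B2) = {r_a1_b2:.2f}")
print(f"r(B1, A2) = {r_b1_a2:.2f}")
```

Here the first cross-lagged correlation comes out clearly larger than the second, which under Kenny’s logic would support A as the primary causal direction.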
However, David Rogosa showed that the time 1 measurement of the outcome (in either direction) is a key confound for the cross-lagged correlation. Depending on how strongly the outcome at time 1 correlates with the predictor at time 1 and the outcome at time 2, the standardized partial regression coefficient (although see my most recent blog post for why I believe he should have used semi-partial correlations instead) can look very different from the cross-lagged correlation. David Rogosa argued that the standardized partial regression coefficients, sometimes known as cross-lagged autoregression coefficients, should be compared rather than the cross-lagged correlations.
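Rogosa’s point can be illustrated with another small simulation (again with made-up names and effect sizes). Here B2 depends only on B1, and A1 is related to B1 only through a shared confound, so A1 has no causal effect on B2. The raw cross-lagged correlation nonetheless looks substantial, while the standardized partial regression coefficient controlling for baseline is near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical spurious case: B2 depends only on B1;
# A1 and B1 merely share a common cause.
confound = rng.normal(size=n)
a1 = 0.8 * confound + rng.normal(size=n)
b1 = 0.8 * confound + rng.normal(size=n)
b2 = 0.7 * b1 + rng.normal(size=n)

def standardize(x):
    return (x - x.mean()) / x.std()

z_a1, z_b1, z_b2 = map(standardize, (a1, b1, b2))

# Raw cross-lagged correlation: looks meaningful.
r_a1_b2 = np.corrcoef(a1, b2)[0, 1]

# Rogosa-style cross-lagged autoregression: regress standardized B2
# on standardized A1 while controlling for the baseline B1.
X = np.column_stack([z_a1, z_b1])
beta, *_ = np.linalg.lstsq(X, z_b2, rcond=None)
print(f"cross-lagged correlation r(A1, B2): {r_a1_b2:.2f}")
print(f"partial beta for A1 controlling B1: {beta[0]:.2f}")
```

The partial coefficient for A1 shrinks to roughly zero once B1 is in the model, which is exactly why Rogosa preferred these coefficients to the raw cross-lagged correlations.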
Maxwell and Cole’s two papers on cross-sectional mediation essentially make the same argument as David Rogosa, but specifically applied to the context of cross-sectional mediation. Maxwell and Cole showed that although researchers may find evidence for cross-sectional mediation, results will differ from David Rogosa’s cross-lagged autoregression approach if 1) the longitudinal correlations (i.e., time 1 with time 2) differ from the cross-sectional correlations (i.e., time 1 with time 1) and 2) after controlling for previous time points of the mediator and outcome, the standardized partial regression coefficients of the predictor and mediator become zero. It is important to understand that these are two different influences. I often hear people say “you need three time points to test for mediation.” If you are referring to Maxwell and Cole’s studies, that is not enough: three time points are necessary, but not sufficient. You need three time points and previous time points of the mediator and outcome to control for. Longitudinal mediation without controlling for previous time points of the mediator and outcome is analogous to David Kenny’s cross-lagged correlations, which Rogosa, Maxwell, and Cole would all likely disapprove of.
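A minimal sketch of that three-wave requirement, under assumed names and effect sizes, might look like the following. The “a path” is estimated controlling for the earlier mediator, and the “b path” controlling for the earlier outcome; this uses plain ordinary least squares rather than the full structural equation models typical of this literature.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Hypothetical three-wave design with true mediation X1 -> M2 -> Y3.
x1 = rng.normal(size=n)
m1 = rng.normal(size=n)
y2 = rng.normal(size=n)
m2 = 0.5 * x1 + 0.4 * m1 + rng.normal(size=n)
y3 = 0.5 * m2 + 0.4 * y2 + rng.normal(size=n)

def ols(y, *xs):
    """OLS slopes (intercept dropped) of y on the given predictors."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# a path: X1 -> M2, controlling for the previous mediator M1.
a = ols(m2, x1, m1)[0]
# b path: M2 -> Y3, controlling for the previous outcome Y2 (and X1).
b = ols(y3, m2, y2, x1)[0]
print(f"a path = {a:.2f}, b path = {b:.2f}, indirect effect = {a * b:.2f}")
```

With the autoregressive controls in place, the a and b paths recover the simulated effects; drop the `m1` and `y2` terms and you are back to something analogous to Kenny’s cross-lagged correlations.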
However, if you are going to be against cross-sectional mediation, then to be logically consistent, you need to also be against any cross-sectional main effect analyses with two different outcomes. Maxwell and Cole’s simulation studies tested the influence of 1) temporal precedence and 2) controlling for previous time points of the mediator and outcome on two effects: the “a path” and “b path” in mediation. The key is the word two, not the word mediation. A main effect analysis with a predictor and two outcomes also has two effects that are influenced by 1) temporal precedence and 2) controlling for previous time points of the outcomes. The bias is the same (although technically the bias is greater in the two-outcome case because it is possible to have significant mediation without both the “a path” and “b path” significant). If there are three or more outcomes in the cross-sectional main effect analyses, then the total bias is greater than cross-sectional mediation! But in my experience, studies with cross-sectional main effect analyses do not get rejected as often as those with cross-sectional mediation. I have had colleagues get manuscripts accepted with cross-sectional main effect analyses and no reviewer comments of “needing two time points to test for main effects.”
If you are going to ban cross-sectional mediation, then all cross-sectional prediction analyses (or longitudinal analyses that don’t control for previous time points of the outcomes) should be banned as well because every prediction analysis needs two things: 1) temporal precedence of the outcome measures after the predictors and 2) the previous time points of the outcomes controlled for. This includes not only cross-sectional main effect analyses, but also cross-sectional moderation analyses. Now, some people argue that mediation is by definition a causal analysis and thus it has different standards than main effects or moderating effects. Maxwell and Cole even say this in the abstract of their 2007 paper: “mediation consists of causal processes that unfold over time.” And here may lie the crux of the double standard. Main effects and moderation effects are not defined as processes that unfold over time. Except if used as prediction analyses, they are! Look no further than the diathesis-stress theory of psychopathology, which is tested with moderation analyses. A person has a diathesis, such as pessimism, that interacts with a stressor, such as the death of a loved one, to cause a psychological disorder, such as depression. The theory is a process that unfolds over time. It is the claims of the scientific theory, not the mathematics of the statistics, which determine if an analysis reflects a process that unfolds over time. Any prediction analysis seeks to test a process that unfolds over time, whether a mediation analysis or some other analysis.
This potentially leaves us with a pretty grim scenario – a ban on all cross-sectional prediction analyses. However, I do not take this stance, because I do not advocate for a ban of cross-sectional mediation. I believe cross-sectional prediction analyses, whether mediation analyses or others, have a role in social science for two reasons. First, to force every researcher to conduct a longitudinal study is too burdensome on burgeoning researchers. Longitudinal studies require follow-ups, and follow-ups require money to pay participants. If you don’t have grant money (or aren’t willing to use up your personal savings), longitudinal studies are not always possible.
Second, great theoretical contributions can come from cross-sectional mediation analyses. For example, one of the seminal papers on chronic pain used cross-sectional mediation (Rudy, Kerns, & Turk, 1988). The paper showed that interference with life activities and self-control mediated the relationship between chronic pain and depression. This helped lay the foundation for the cognitive-behavioral theory of chronic pain and cognitive-behavioral therapy for chronic pain. Now, what if the authors had submitted this study for publication and the reviewers had rejected it because the authors didn’t have three time points and didn’t control for previous time points of interference with life activities, self-control, and depression? Maybe the authors get a grant and the money to conduct a longitudinal study on the topic, or maybe they don’t have time to write the grant, or they apply for one and don’t get funded. It is possible the cognitive-behavioral theory of chronic pain never develops! What a loss to the science of clinical psychology that could have been. And it is possible other fruitful psychological theories are not being studied because of rejected cross-sectional mediation results.
I am upset at the reviewers of my two colleagues’ manuscripts. Not because they rejected their manuscripts, but because they were so blindly against cross-sectional mediation analyses. The presence of cross-sectional mediation is not enough to reject a study from a low- or medium-tier journal (one could argue top-tier journals are reserved for grant-funded longitudinal studies that have the money to conduct the follow-ups). In order to reject their manuscripts, I believe the reviewers should have primarily argued that the theoretical contribution of their mediation models was weak or limited. As we saw with the chronic pain study, cross-sectional mediation analyses can be very influential. If the reviewers believed these mediation models were not meaningful theoretical contributions, then they should have rejected them on those grounds. But rejecting manuscripts solely on the presence of cross-sectional mediation is bad peer-reviewing and bad for the progression of social science.