Topics III
Which effects are identified by the random assignment of \(X\)?
An obvious approach is to first examine the (average) effect of \(X\) on \(M_1\) and then use another manipulation to examine the (average) effect of \(M_1\) on \(Y\).
Both approaches face problems of unobserved confounding between \(M\) and \(Y\):
Another somewhat obvious approach is to see how the effect of \(X\) on \(Y\) in a regression is reduced when you control for \(M\).
If the effect of \(X\) on \(Y\) passes through \(M\) then surely there should be no effect of \(X\) on \(Y\) after you control for \(M\).
This common strategy, associated with Baron and Kenny (1986), is also not guaranteed to produce reliable results; see for instance Green, Ha, and Bullock (2010).
library(fabricatr)

# Y depends on X only through M, yet controlling for M
# leaves the coefficient on X unchanged (see table below)
df <- fabricate(N = 1000,
                U = rbinom(N, 1, .5), X = rbinom(N, 1, .5),
                M = ifelse(U == 1, X, 1 - X), Y = ifelse(U == 1, M, 1 - M))

list(lm(Y ~ X, data = df),
     lm(Y ~ X + M, data = df)) |> texreg::htmlreg()
| | Model 1 | Model 2 |
|---|---|---|
| (Intercept) | 0.00*** | 0.00*** |
| | (0.00) | (0.00) |
| X | 1.00*** | 1.00*** |
| | (0.00) | (0.00) |
| M | | 0.00 |
| | | (0.00) |
| R2 | 1.00 | 1.00 |
| Adj. R2 | 1.00 | 1.00 |
| Num. obs. | 1000 | 1000 |

***p < 0.001; **p < 0.01; *p < 0.05
The bad news is that although a single experiment might identify the total effect, it cannot separately identify the direct and indirect effects.
So:
Check the formal requirement for identification under a single-experiment design ("sequential ignorability": conditional on actual treatment, it is as if the value of the mediation variable is randomly assigned relative to potential outcomes). But this is strong (and in fact unverifiable), and if it does not hold, bounds on effects always include zero (Imai et al.).
Consider sensitivity analyses (see the sketch after this list).
You can use interactions with covariates if you are willing to assume no heterogeneity of direct treatment effects over covariates.
E.g. you think that money makes people get to work faster because they can buy better cars; you look at the marginal effect of more money on time to work for people with and without cars and find it larger for the latter.
This might imply mediation through transport, but only if there is no direct-effect heterogeneity (e.g. people with cars are less motivated by money).
Weaker assumptions can justify a parallel design.
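One sketch of such a sensitivity analysis uses the mediation package; the data-generating process below is purely illustrative (partial mediation, no confounding), not taken from the slides:

```r
library(mediation)

# Illustrative data: X affects Y partly through M (hypothetical dgp)
set.seed(42)
N <- 1000
X <- rbinom(N, 1, .5)
M <- .5 * X + rnorm(N)
Y <- .3 * X + .4 * M + rnorm(N)
dat <- data.frame(X, M, Y)

model_m <- lm(M ~ X, data = dat)      # mediator model
model_y <- lm(Y ~ X + M, data = dat)  # outcome model

med_out <- mediate(model_m, model_y, treat = "X", mediator = "M", sims = 200)

# Sensitivity: how large would the correlation (rho) between the mediator
# and outcome errors have to be to drive the estimated ACME to zero?
sens_out <- medsens(med_out, rho.by = 0.1)
summary(sens_out)
```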
Takeaway: Understanding mechanisms is harder than you think. Figure out what assumptions fly.
CausalQueries
Let's imagine that sequential ignorability does not hold. What are our posteriors on mediation quantities when in fact all effects are mediated, effects are strong, and we have lots of data?
CausalQueries
We imagine a true model and consider estimands:
library(CausalQueries)
library(knitr)

# True model: effects of X on Y run entirely through M
truth <- make_model("X -> M -> Y") |>
  set_parameters(c(.5, .5, .1, 0, .8, .1, .1, 0, .8, .1))

queries <-
  list(
    indirect = "Y[X = 1, M = M[X=1]] - Y[X = 1, M = M[X=0]]",
    direct = "Y[X = 1, M = M[X=0]] - Y[X = 0, M = M[X=0]]"
  )

truth |> query_model(queries) |> kable()
| label | query | given | using | case_level | mean | sd | cred.low | cred.high |
|---|---|---|---|---|---|---|---|---|
| indirect | Y[X = 1, M = M[X=1]] - Y[X = 1, M = M[X=0]] | - | parameters | FALSE | 0.64 | NA | 0.64 | 0.64 |
| direct | Y[X = 1, M = M[X=0]] - Y[X = 0, M = M[X=0]] | - | parameters | FALSE | 0.00 | NA | 0.00 | 0.00 |
CausalQueries
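A sketch of the updating step: we allow unobserved confounding between \(M\) and \(Y\) (so sequential ignorability is not assumed), draw a large dataset from the truth, and query the posteriors (the sample size here is an arbitrary choice):

```r
# Analyst's model: unobserved confounding between M and Y is allowed
model <- make_model("X -> M -> Y; M <-> Y")

# Lots of data from the true, fully mediated model
data <- make_data(truth, n = 5000)

# Update and query posteriors on the same estimands
model |>
  update_model(data) |>
  query_model(queries, using = "posteriors") |>
  kable()
```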
Why such poor behavior? Why isn’t weight going onto indirect effects?
It turns out the data are consistent with direct effects only: specifically, whenever \(M\) is responsive to \(X\), \(Y\) is responsive to \(X\).
Spillovers
Spillovers can lead you to estimate weaker effects even when true effects are stronger.
The key problem is that \(Y(1)\) and \(Y(0)\) are not sufficient to describe potential outcomes: a unit's outcome can depend on the entire assignment profile.
| Unit | Location | \(D_\emptyset\) | \(y(D_\emptyset)\) | \(D_1\) | \(y(D_1)\) | \(D_2\) | \(y(D_2)\) | \(D_3\) | \(y(D_3)\) | \(D_4\) | \(y(D_4)\) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A | 1 | 0 | 0 | 1 | 3 | 0 | 1 | 0 | 0 | 0 | 0 |
| B | 2 | 0 | 0 | 0 | 3 | 1 | 3 | 0 | 3 | 0 | 0 |
| C | 3 | 0 | 0 | 0 | 0 | 0 | 3 | 1 | 3 | 0 | 3 |
| D | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 3 |

Table: Potential outcomes for four units for different treatment profiles. \(D_i\) is an allocation and \(y_j(D_i)\) is the potential outcome for (row) unit \(j\) given (column) \(D_i\).
| Unit | Location | \(D_\emptyset\) | \(y(D_\emptyset)\) | \(D_1\) | \(y(D_1)\) | \(D_2\) | \(y(D_2)\) | \(D_3\) | \(y(D_3)\) | \(D_4\) | \(y(D_4)\) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A | 1 | 0 | 0 | 1 | 3 | 0 | 1 | 0 | 0 | 0 | 0 |
| B | 2 | 0 | 0 | 0 | 3 | 1 | 3 | 0 | 3 | 0 | 0 |
| C | 3 | 0 | 0 | 0 | 0 | 0 | 3 | 1 | 3 | 0 | 3 |
| D | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 3 |
| \(\bar{y}_\text{treated}\) | | | - | | 3 | | 3 | | 3 | | 3 |
| \(\bar{y}_\text{untreated}\) | | | 0 | | 1 | | 4/3 | | 4/3 | | 1 |
| \(\bar{y}_\text{neighbors}\) | | | - | | 3 | | 2 | | 2 | | 3 |
| \(\bar{y}_\text{pure control}\) | | | 0 | | 0 | | 0 | | 0 | | 0 |
| ATT-direct | | | - | | 3 | | 3 | | 3 | | 3 |
| ATT-indirect | | | - | | 3 | | 2 | | 2 | | 3 |
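The \(D_2\) column, for instance, can be checked directly (a quick sketch, transcribing the table values by hand):

```r
# Outcomes under D_2 (only B, at location 2, treated)
y_D2      <- c(A = 1, B = 3, C = 3, D = 0)
treated   <- c(FALSE, TRUE, FALSE, FALSE)  # B is treated
neighbors <- c(TRUE, FALSE, TRUE, FALSE)   # A and C are adjacent to B

mean(y_D2[treated])                # y-bar treated: 3
mean(y_D2[!treated])               # y-bar untreated: 4/3
mean(y_D2[neighbors])              # y-bar neighbors: 2
mean(y_D2[!treated & !neighbors])  # y-bar pure control: 0
```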
library(DeclareDesign)

# Outcome depends on own treatment and (nonlinearly) on the number of
# treated units in one's group: interference across units
dgp <- function(i, Z, G) Z[i]/3 + sum(Z[G == G[i]])^2/5 + rnorm(1)

spillover_design <-
  declare_model(G = add_level(N = 80),                        # 80 groups
                j = add_level(N = 3, zeros = 0, ones = 1)) +  # 3 units each
  # direct effect: just i treated vs no one treated
  declare_inquiry(direct = mean(sapply(1:240, function(i) {
    Z_i <- (1:240) == i
    dgp(i, Z_i, G) - dgp(i, zeros, G)
  }))) +
  # indirect effect: all but i treated vs no one treated
  declare_inquiry(indirect = mean(sapply(1:240, function(i) {
    Z_i <- (1:240) == i
    dgp(i, ones - Z_i, G) - dgp(i, zeros, G)
  }))) +
  declare_assignment(Z = complete_ra(N)) +
  declare_measurement(
    neighbors_treated = sapply(1:N, function(i) sum(Z[-i][G[-i] == G[i]])),
    one_neighbor = as.numeric(neighbors_treated == 1),
    two_neighbors = as.numeric(neighbors_treated == 2),
    Y = sapply(1:N, function(i) dgp(i, Z, G))
  ) +
  declare_estimator(Y ~ Z,
                    inquiry = "direct",
                    model = lm_robust,
                    label = "naive") +
  declare_estimator(Y ~ Z * one_neighbor + Z * two_neighbors,
                    term = c("Z", "two_neighbors"),
                    inquiry = c("direct", "indirect"),
                    label = "saturated",
                    model = lm_robust)
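Diagnosing the design then shows how the naive and saturated estimators fare against the two inquiries (a sketch; the number of simulations is an arbitrary choice):

```r
# Simulate the design repeatedly and compare estimates to estimands
diagnosis <- diagnose_design(spillover_design, sims = 500)
diagnosis
```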
You can in principle define and estimate a rich set of direct and indirect (spillover) quantities. But to estimate effects you still need some SUTVA-like assumption.
In this example, if you compare outcomes between treated units and all control units that are at least \(n\) positions away from a treated unit, you will get the wrong answer unless \(n \geq 7\).
Experimental researchers are deeply engaged in the movement towards more transparent social science research.
Contentious issues (mostly):
Data: how soon should you make your data available? My view: as soon as possible, along with working papers and before publication, and before it affects policy in any case. Own the ideas, not the data.
Data: where should you make your data available? Dataverse is focal for political science; not a personal website (mea culpa).
Data: what data should you make available? Disagreement is over how raw your data should be. My view: as raw as you can, but at least post-cleaning and pre-manipulation.
Registration: should you register? Hard to find reasons against, but the case is strongest in the testing phase rather than the exploratory phase.
Registration: when should you register? My view: before treatment assignment (not just before analysis; mea culpa).
Registration: should you deviate from a preanalysis plan if you change your mind about optimal estimation strategies? My view: yes, but make the case and describe both sets of results.
File drawer bias (Publication bias)
Analysis bias (Fishing)
– Say in truth \(X\) affects \(Y\) in 50% of cases.
– Researchers conduct multiple excellent studies, but they only write up the 50% that produce "positive" results.
– Even if each individual study is indisputably correct, the account in the research record – that \(X\) affects \(Y\) in 100% of cases – will be wrong.
Exacerbated by:
– Publication bias – the positive results get published
– Citation bias – the positive results get read and cited
– Chatter bias – the positive results get blogged, tweeted, and TED-ed.
– Say in truth \(X\) affects \(Y\) in 50% of cases.
– But say that researchers enjoy discretion to select measures for \(X\) or \(Y\), or enjoy discretion to select statistical models after seeing \(X\) and \(Y\) in each case.
– Then, with enough discretion, 100% of analyses may report positive effects, even if all studies get published.
– Try it yourself: An Exact Fishy Test (https://macartan.shinyapps.io/fish/)
– What’s the problem with this test?
When your conclusions do not really depend on the data
E.g.:
– some evidence will always support your proposition
– some interpretation of the evidence will always support your proposition
Knowing the mapping from data to inference in advance gives a handle on the false positive rate (see the sketch below).
[Figure. Source: Gerber and Malhotra]
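A minimal simulation of the point, assuming a hypothetical dgp with no true effects: a researcher who reports the better of two outcome measures roughly doubles the nominal false positive rate.

```r
# No true effects anywhere: every "significant" result is a false positive
set.seed(1)
fished_p <- replicate(2000, {
  X  <- rbinom(100, 1, .5)
  Y1 <- rnorm(100)  # outcome measure 1
  Y2 <- rnorm(100)  # outcome measure 2
  # fishing: keep whichever outcome gives the smaller p-value on X
  min(summary(lm(Y1 ~ X))$coefficients[2, 4],
      summary(lm(Y2 ~ X))$coefficients[2, 4])
})
mean(fished_p < .05)  # about 1 - .95^2 = .0975, double the nominal .05
```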
The implication, in summary: we do not know when we can or cannot trust claims made by researchers.
[Not a tradition-specific claim]
Simple idea: specify your tests before you see your data. But there are lots of misunderstandings around registration.
Fishing can happen in very subtle ways, and may seem natural and justifiable.
Example: our journal review process is largely organized around advising researchers how to adjust analyses in light of findings in the data.
– Frequentists can do it.
– Bayesians can do it too.
– Qualitative researchers can also do it.
– You can even do it with descriptive statistics.
The key distinction is between prospective and retrospective studies, not between experimental and observational studies.
A reason (from the medical literature) why registration is especially important for experiments: you owe it to your subjects.
A reason why registration is less important for experiments: in an experimental study the intended analysis is more likely to be implied by the design. Researcher degrees of freedom may be greatest for observational qualitative analyses.
Registration will produce some burden, but it does not require the creation of content that is not needed anyway.
It does shift the preparation of analyses forward.
It can also increase the burden of developing analysis plans even for projects that do not work out. But that is, in part, the point.
The upside is that the ultimate analyses may be much easier.
In neither case would the creation of a registration facility prevent exploration.
What it might do is make it less credible for someone to claim that they have tested a proposition when in fact the proposition was developed using the data used to test it.
Registration communicates whether researchers are engaged in exploration or not. We love exploration and should be proud of it.
Incentives and strategies
| Inquiry | In the preanalysis plan | In the paper | In the appendix |
|---|---|---|---|
| Gender effect | X | X | |
| Age effect | X | | |
| Inquiry | Following A from the PAP | Following A from the paper | Notes |
|---|---|---|---|
| Gender effect | estimate = 0.6, s.e. = 0.31 | estimate = 0.6, s.e. = 0.25 | Difference due to change in control variables [provide cross references to tables and code] |