DeclareDesign
Day 1: Intro
Day 2: Causality
Day 3: Estimation and Inference
Day 4:
Day 5:
A passing paper will illustrate subtle features of a method; a good paper will identify unknown properties of a method; an excellent paper will develop a new method.
Teams should prepare 15-20 minute presentations on set puzzles. Typically the task is to:
Take a puzzle
Declare and diagnose a design that shows the issue under study (e.g. some estimator produces unbiased estimates under some condition)
Modify the design to show behavior when conditions are violated
Share a report with the class. Best as a self-contained document for easy third-party viewing, e.g. .html via .qmd or .Rmd
Presentations should be about 10 minutes for a given puzzle.
pacman is handy for package management: p_load() installs any missing packages and loads them in one call, so the document sets up its own dependencies.
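A minimal setup chunk along these lines (the package list is illustrative, matching what this document uses):

```r
# install.packages("pacman")  # once, if pacman is not yet installed
pacman::p_load(
  DeclareDesign,  # declare, diagnose, redesign
  knitr,          # kable()
  kableExtra      # kable_styling()
)
```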
First best: if someone has access to your .Rmd/.qmd file they can hit render or compile and the whole thing reproduces first time. So: nothing local, everything relative; please do not include hardcoded paths to your computer.
But: often you need ancillary files for data and code. That’s OK, but the aim should still be that, given a self-contained folder, someone can open a main.Rmd file, hit compile, and get everything. I usually have an input and an output subfolder.
Share the folder via a syncing or version control service (git, osf, Dropbox, Drive, Nextcloud).
Raw data goes in the input folder and is never edited directly.
DeclareDesign
How to define and assess research designs
DeclareDesign: key resources
Model: set of models of what causes what and how
Inquiry: a question stated in terms of the model
Data strategy: the set of procedures we use to gather information from the world (sampling, assignment, measurement)
Answer strategy: how we summarize the data produced by the data strategy
Design declaration is telling the computer (and readers) what M, I, D, and A are.
Design diagnosis is figuring out how the design will perform under imagined conditions.
Estimating “diagnosands” like power, bias, RMSE, error rates, ethical harm, “amount learned”.
Diagnosis takes account of model uncertainty: it aims to identify models for which the design works well and models for which it does not.
Redesign is the fine-tuning of features of the data and answer strategies to understand how changing them affects the diagnosands.
declare_model()
declare_inquiry()
declare_sampling()
declare_assignment()
declare_measurement()
declare_estimator()
and there are more declare_ functions!
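Put together, a declaration chains these steps. Here is a minimal sketch of a canonical two-arm design (the effect size 0.2 and the names are illustrative); each line maps onto one of M, I, D, or A:

```r
library(DeclareDesign)

N <- 100  # kept outside the design so redesign() can vary it later

design <-
  declare_model(
    N = N,
    U = rnorm(N),
    potential_outcomes(Y ~ 0.2 * Z + U)
  ) +
  declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
  declare_sampling(S = complete_rs(N, n = 50)) +
  declare_assignment(Z = complete_ra(N, prob = 0.5)) +
  declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
  declare_estimator(Y ~ Z, inquiry = "ATE")
```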
draw_data(design)
draw_estimands(design)
draw_estimates(design)
get_estimates(design, data)
run_design(design), simulate_design(design)
diagnose_design(design)
redesign(design, N = 200)
compare_designs(), compare_diagnoses()
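In use, assuming the two-arm design sketched earlier:

```r
draw_data(design) |> head()          # one simulated dataset
draw_estimands(design)               # realized value(s) of the inquiry
draw_estimates(design)               # estimates from a single run
diagnose_design(design, sims = 500)  # simulate repeatedly, report diagnosands

designs <- redesign(design, N = c(100, 200, 400))  # vary the sample size
diagnose_design(designs)             # diagnose all variants side by side
```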
https://raw.githubusercontent.com/rstudio/cheatsheets/master/declaredesign.pdf
?DeclareDesign
Steps are combined with + to form a design.
Each step is a function (or rather: a function that generates functions) and each function presupposes what is created by previous functions.
Most steps take the main data frame in and push the main data frame out; this data frame normally builds up as you move along the pipe.
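You can see the function-like behavior by calling a single declared step (a small sketch):

```r
# A declared step is itself a function; calling it executes just that step.
model_step <- declare_model(N = 5, Y = rnorm(N))
model_step()  # returns the main data frame: 5 rows with columns ID and Y
```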
declare_estimator steps take the main data frame in and send out an estimator_df data frame.
declare_inquiry steps take the main data frame in and send out an estimand_df data frame.
You can also just run through the whole design once by typing the name of the design:
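For reference, a declaration that matches the summary printed below (a reconstruction; mean is a parameter, here set to 0):

```r
mean <- 0  # location parameter referenced in the model step

simplest_design <-
  declare_model(N = 100, Y = rnorm(N, mean)) +
  declare_inquiry(Q = 0) +
  declare_estimator(Y ~ 1)

simplest_design  # printing a design shows the declaration and one run
```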
Research design declaration summary
Step 1 (model): declare_model(N = 100, Y = rnorm(N, mean)) ---------------------
Step 2 (inquiry): declare_inquiry(Q = 0) ---------------------------------------
Step 3 (estimator): declare_estimator(Y ~ 1) -----------------------------------
Run of the design:
inquiry estimand estimator term estimate std.error statistic p.value
Q 0 estimator (Intercept) 0.00249 0.0916 0.0272 0.978
conf.low conf.high df outcome
-0.179 0.184 99 Y
Or by asking for a run of the design
one_run <- simplest_design |> run_design()
one_run |> kable(digits = 2) |> kable_styling(font_size = 18)
| inquiry | estimand | estimator | term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Q | 0 | estimator | (Intercept) | 0 | 0.09 | -0.03 | 0.97 | -0.18 | 0.18 | 99 | Y |
A single run creates data, calculates estimands (the answers to the inquiries), and calculates estimates plus ancillary statistics.
Or by asking for many runs of the design:
some_runs <- simplest_design |> simulate_design(sims = 1000)
some_runs |> kable(digits = 2) |> kable_styling(font_size = 16)
| design | sim_ID | inquiry | estimand | estimator | term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| simplest_design | 1 | Q | 0 | estimator | (Intercept) | 0.01 | 0.09 | 0.16 | 0.88 | -0.17 | 0.20 | 99 | Y |
| simplest_design | 2 | Q | 0 | estimator | (Intercept) | 0.15 | 0.09 | 1.65 | 0.10 | -0.03 | 0.33 | 99 | Y |
| simplest_design | 3 | Q | 0 | estimator | (Intercept) | -0.20 | 0.11 | -1.91 | 0.06 | -0.41 | 0.01 | 99 | Y |
| simplest_design | 4 | Q | 0 | estimator | (Intercept) | -0.08 | 0.11 | -0.74 | 0.46 | -0.30 | 0.14 | 99 | Y |
| simplest_design | 5 | Q | 0 | estimator | (Intercept) | -0.14 | 0.10 | -1.36 | 0.18 | -0.34 | 0.06 | 99 | Y |

(remaining simulation rows omitted)
simplest_design | 506 | Q | 0 | estimator | (Intercept) | 0.03 | 0.11 | 0.33 | 0.74 | -0.17 | 0.24 | 99 | Y |
simplest_design | 507 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.37 | 0.72 | -0.23 | 0.16 | 99 | Y |
simplest_design | 508 | Q | 0 | estimator | (Intercept) | 0.07 | 0.08 | 0.86 | 0.39 | -0.09 | 0.23 | 99 | Y |
simplest_design | 509 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.85 | 0.40 | -0.27 | 0.11 | 99 | Y |
simplest_design | 510 | Q | 0 | estimator | (Intercept) | -0.15 | 0.12 | -1.25 | 0.21 | -0.38 | 0.09 | 99 | Y |
simplest_design | 511 | Q | 0 | estimator | (Intercept) | 0.00 | 0.10 | -0.02 | 0.99 | -0.20 | 0.20 | 99 | Y |
simplest_design | 512 | Q | 0 | estimator | (Intercept) | -0.29 | 0.10 | -2.95 | 0.00 | -0.48 | -0.09 | 99 | Y |
simplest_design | 513 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.75 | 0.46 | -0.29 | 0.13 | 99 | Y |
simplest_design | 514 | Q | 0 | estimator | (Intercept) | -0.13 | 0.10 | -1.19 | 0.24 | -0.33 | 0.08 | 99 | Y |
simplest_design | 515 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.44 | 0.66 | -0.15 | 0.24 | 99 | Y |
simplest_design | 516 | Q | 0 | estimator | (Intercept) | 0.16 | 0.10 | 1.53 | 0.13 | -0.05 | 0.36 | 99 | Y |
simplest_design | 517 | Q | 0 | estimator | (Intercept) | 0.21 | 0.09 | 2.41 | 0.02 | 0.04 | 0.38 | 99 | Y |
simplest_design | 518 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.90 | 0.37 | -0.11 | 0.29 | 99 | Y |
simplest_design | 519 | Q | 0 | estimator | (Intercept) | -0.11 | 0.09 | -1.21 | 0.23 | -0.28 | 0.07 | 99 | Y |
simplest_design | 520 | Q | 0 | estimator | (Intercept) | -0.10 | 0.10 | -0.96 | 0.34 | -0.29 | 0.10 | 99 | Y |
simplest_design | 521 | Q | 0 | estimator | (Intercept) | 0.03 | 0.11 | 0.23 | 0.82 | -0.19 | 0.25 | 99 | Y |
simplest_design | 522 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.40 | 0.69 | -0.23 | 0.15 | 99 | Y |
simplest_design | 523 | Q | 0 | estimator | (Intercept) | 0.00 | 0.11 | -0.04 | 0.97 | -0.22 | 0.21 | 99 | Y |
simplest_design | 524 | Q | 0 | estimator | (Intercept) | 0.16 | 0.10 | 1.60 | 0.11 | -0.04 | 0.36 | 99 | Y |
simplest_design | 525 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.72 | 0.47 | -0.13 | 0.28 | 99 | Y |
simplest_design | 526 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.57 | 0.57 | -0.26 | 0.14 | 99 | Y |
simplest_design | 527 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.09 | 0.93 | -0.18 | 0.20 | 99 | Y |
simplest_design | 528 | Q | 0 | estimator | (Intercept) | 0.07 | 0.09 | 0.71 | 0.48 | -0.12 | 0.25 | 99 | Y |
simplest_design | 529 | Q | 0 | estimator | (Intercept) | -0.12 | 0.11 | -1.07 | 0.29 | -0.33 | 0.10 | 99 | Y |
simplest_design | 530 | Q | 0 | estimator | (Intercept) | -0.25 | 0.10 | -2.51 | 0.01 | -0.45 | -0.05 | 99 | Y |
simplest_design | 531 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.66 | 0.51 | -0.13 | 0.25 | 99 | Y |
simplest_design | 532 | Q | 0 | estimator | (Intercept) | 0.14 | 0.11 | 1.30 | 0.20 | -0.07 | 0.35 | 99 | Y |
simplest_design | 533 | Q | 0 | estimator | (Intercept) | 0.03 | 0.11 | 0.25 | 0.81 | -0.19 | 0.24 | 99 | Y |
simplest_design | 534 | Q | 0 | estimator | (Intercept) | -0.15 | 0.10 | -1.44 | 0.15 | -0.35 | 0.06 | 99 | Y |
simplest_design | 535 | Q | 0 | estimator | (Intercept) | 0.05 | 0.09 | 0.53 | 0.60 | -0.13 | 0.23 | 99 | Y |
simplest_design | 536 | Q | 0 | estimator | (Intercept) | 0.18 | 0.10 | 1.73 | 0.09 | -0.03 | 0.39 | 99 | Y |
simplest_design | 537 | Q | 0 | estimator | (Intercept) | 0.03 | 0.11 | 0.24 | 0.81 | -0.19 | 0.24 | 99 | Y |
simplest_design | 538 | Q | 0 | estimator | (Intercept) | -0.14 | 0.10 | -1.40 | 0.16 | -0.34 | 0.06 | 99 | Y |
simplest_design | 539 | Q | 0 | estimator | (Intercept) | 0.10 | 0.10 | 1.02 | 0.31 | -0.09 | 0.30 | 99 | Y |
simplest_design | 540 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.31 | 0.76 | -0.24 | 0.17 | 99 | Y |
simplest_design | 541 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.34 | 0.74 | -0.23 | 0.16 | 99 | Y |
simplest_design | 542 | Q | 0 | estimator | (Intercept) | 0.23 | 0.09 | 2.51 | 0.01 | 0.05 | 0.41 | 99 | Y |
simplest_design | 543 | Q | 0 | estimator | (Intercept) | -0.17 | 0.11 | -1.55 | 0.13 | -0.39 | 0.05 | 99 | Y |
simplest_design | 544 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.07 | 0.94 | -0.20 | 0.19 | 99 | Y |
simplest_design | 545 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.16 | 0.87 | -0.22 | 0.18 | 99 | Y |
simplest_design | 546 | Q | 0 | estimator | (Intercept) | 0.21 | 0.10 | 2.04 | 0.04 | 0.01 | 0.42 | 99 | Y |
simplest_design | 547 | Q | 0 | estimator | (Intercept) | -0.12 | 0.10 | -1.19 | 0.24 | -0.33 | 0.08 | 99 | Y |
simplest_design | 548 | Q | 0 | estimator | (Intercept) | 0.07 | 0.09 | 0.79 | 0.43 | -0.11 | 0.26 | 99 | Y |
simplest_design | 549 | Q | 0 | estimator | (Intercept) | 0.10 | 0.10 | 1.04 | 0.30 | -0.09 | 0.29 | 99 | Y |
simplest_design | 550 | Q | 0 | estimator | (Intercept) | 0.10 | 0.11 | 0.88 | 0.38 | -0.12 | 0.31 | 99 | Y |
simplest_design | 551 | Q | 0 | estimator | (Intercept) | -0.01 | 0.11 | -0.12 | 0.91 | -0.23 | 0.21 | 99 | Y |
simplest_design | 552 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.71 | 0.48 | -0.13 | 0.27 | 99 | Y |
simplest_design | 553 | Q | 0 | estimator | (Intercept) | 0.17 | 0.11 | 1.50 | 0.14 | -0.05 | 0.39 | 99 | Y |
simplest_design | 554 | Q | 0 | estimator | (Intercept) | 0.00 | 0.10 | -0.01 | 0.99 | -0.20 | 0.20 | 99 | Y |
simplest_design | 555 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.74 | 0.46 | -0.27 | 0.12 | 99 | Y |
simplest_design | 556 | Q | 0 | estimator | (Intercept) | -0.05 | 0.09 | -0.51 | 0.61 | -0.22 | 0.13 | 99 | Y |
simplest_design | 557 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.09 | 0.93 | -0.21 | 0.19 | 99 | Y |
simplest_design | 558 | Q | 0 | estimator | (Intercept) | 0.05 | 0.11 | 0.48 | 0.63 | -0.17 | 0.28 | 99 | Y |
simplest_design | 559 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.56 | 0.58 | -0.14 | 0.25 | 99 | Y |
simplest_design | 560 | Q | 0 | estimator | (Intercept) | 0.18 | 0.10 | 1.89 | 0.06 | -0.01 | 0.37 | 99 | Y |
simplest_design | 561 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.25 | 0.80 | -0.22 | 0.17 | 99 | Y |
simplest_design | 562 | Q | 0 | estimator | (Intercept) | -0.08 | 0.09 | -0.92 | 0.36 | -0.26 | 0.09 | 99 | Y |
simplest_design | 563 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.10 | 0.92 | -0.20 | 0.18 | 99 | Y |
simplest_design | 564 | Q | 0 | estimator | (Intercept) | 0.14 | 0.10 | 1.51 | 0.14 | -0.05 | 0.33 | 99 | Y |
simplest_design | 565 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.08 | 0.93 | -0.21 | 0.19 | 99 | Y |
simplest_design | 566 | Q | 0 | estimator | (Intercept) | 0.01 | 0.09 | 0.10 | 0.92 | -0.17 | 0.19 | 99 | Y |
simplest_design | 567 | Q | 0 | estimator | (Intercept) | -0.06 | 0.09 | -0.62 | 0.53 | -0.25 | 0.13 | 99 | Y |
simplest_design | 568 | Q | 0 | estimator | (Intercept) | -0.15 | 0.10 | -1.54 | 0.13 | -0.35 | 0.04 | 99 | Y |
simplest_design | 569 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.70 | 0.48 | -0.26 | 0.12 | 99 | Y |
simplest_design | 570 | Q | 0 | estimator | (Intercept) | 0.11 | 0.10 | 1.04 | 0.30 | -0.10 | 0.31 | 99 | Y |
simplest_design | 571 | Q | 0 | estimator | (Intercept) | 0.13 | 0.10 | 1.36 | 0.18 | -0.06 | 0.33 | 99 | Y |
simplest_design | 572 | Q | 0 | estimator | (Intercept) | -0.18 | 0.11 | -1.69 | 0.09 | -0.40 | 0.03 | 99 | Y |
simplest_design | 573 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.31 | 0.76 | -0.17 | 0.24 | 99 | Y |
simplest_design | 574 | Q | 0 | estimator | (Intercept) | 0.06 | 0.11 | 0.53 | 0.60 | -0.17 | 0.29 | 99 | Y |
simplest_design | 575 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.20 | 0.85 | -0.19 | 0.23 | 99 | Y |
simplest_design | 576 | Q | 0 | estimator | (Intercept) | 0.06 | 0.09 | 0.68 | 0.50 | -0.11 | 0.23 | 99 | Y |
simplest_design | 577 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.11 | 0.92 | -0.19 | 0.21 | 99 | Y |
simplest_design | 578 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.74 | 0.46 | -0.12 | 0.26 | 99 | Y |
simplest_design | 579 | Q | 0 | estimator | (Intercept) | -0.13 | 0.10 | -1.35 | 0.18 | -0.32 | 0.06 | 99 | Y |
simplest_design | 580 | Q | 0 | estimator | (Intercept) | -0.14 | 0.09 | -1.51 | 0.13 | -0.33 | 0.05 | 99 | Y |
simplest_design | 581 | Q | 0 | estimator | (Intercept) | -0.08 | 0.11 | -0.75 | 0.45 | -0.29 | 0.13 | 99 | Y |
simplest_design | 582 | Q | 0 | estimator | (Intercept) | 0.06 | 0.09 | 0.62 | 0.53 | -0.13 | 0.24 | 99 | Y |
simplest_design | 583 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.68 | 0.50 | -0.14 | 0.28 | 99 | Y |
simplest_design | 584 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.65 | 0.51 | -0.28 | 0.14 | 99 | Y |
simplest_design | 585 | Q | 0 | estimator | (Intercept) | 0.06 | 0.08 | 0.73 | 0.47 | -0.10 | 0.23 | 99 | Y |
simplest_design | 586 | Q | 0 | estimator | (Intercept) | 0.04 | 0.09 | 0.48 | 0.63 | -0.13 | 0.22 | 99 | Y |
simplest_design | 587 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.71 | 0.48 | -0.28 | 0.13 | 99 | Y |
simplest_design | 588 | Q | 0 | estimator | (Intercept) | -0.20 | 0.10 | -1.95 | 0.05 | -0.40 | 0.00 | 99 | Y |
simplest_design | 589 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.92 | 0.36 | -0.11 | 0.30 | 99 | Y |
simplest_design | 590 | Q | 0 | estimator | (Intercept) | -0.01 | 0.09 | -0.14 | 0.89 | -0.19 | 0.17 | 99 | Y |
simplest_design | 591 | Q | 0 | estimator | (Intercept) | 0.15 | 0.09 | 1.60 | 0.11 | -0.04 | 0.33 | 99 | Y |
simplest_design | 592 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.17 | 0.87 | -0.22 | 0.19 | 99 | Y |
simplest_design | 593 | Q | 0 | estimator | (Intercept) | 0.10 | 0.10 | 1.00 | 0.32 | -0.10 | 0.29 | 99 | Y |
simplest_design | 594 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.81 | 0.42 | -0.29 | 0.12 | 99 | Y |
simplest_design | 595 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.31 | 0.76 | -0.17 | 0.23 | 99 | Y |
simplest_design | 596 | Q | 0 | estimator | (Intercept) | 0.10 | 0.10 | 0.99 | 0.32 | -0.10 | 0.31 | 99 | Y |
simplest_design | 597 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.90 | 0.37 | -0.28 | 0.11 | 99 | Y |
simplest_design | 598 | Q | 0 | estimator | (Intercept) | 0.14 | 0.10 | 1.38 | 0.17 | -0.06 | 0.34 | 99 | Y |
simplest_design | 599 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.12 | 0.91 | -0.21 | 0.19 | 99 | Y |
simplest_design | 600 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.85 | 0.40 | -0.28 | 0.11 | 99 | Y |
simplest_design | 601 | Q | 0 | estimator | (Intercept) | -0.02 | 0.09 | -0.24 | 0.81 | -0.21 | 0.16 | 99 | Y |
simplest_design | 602 | Q | 0 | estimator | (Intercept) | 0.10 | 0.11 | 0.90 | 0.37 | -0.11 | 0.31 | 99 | Y |
simplest_design | 603 | Q | 0 | estimator | (Intercept) | 0.00 | 0.09 | -0.01 | 0.99 | -0.19 | 0.18 | 99 | Y |
simplest_design | 604 | Q | 0 | estimator | (Intercept) | 0.00 | 0.09 | 0.01 | 0.99 | -0.19 | 0.19 | 99 | Y |
simplest_design | 605 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.80 | 0.42 | -0.27 | 0.12 | 99 | Y |
simplest_design | 606 | Q | 0 | estimator | (Intercept) | 0.08 | 0.10 | 0.81 | 0.42 | -0.12 | 0.27 | 99 | Y |
simplest_design | 607 | Q | 0 | estimator | (Intercept) | -0.13 | 0.10 | -1.30 | 0.20 | -0.34 | 0.07 | 99 | Y |
simplest_design | 608 | Q | 0 | estimator | (Intercept) | -0.11 | 0.10 | -1.12 | 0.27 | -0.30 | 0.08 | 99 | Y |
simplest_design | 609 | Q | 0 | estimator | (Intercept) | -0.13 | 0.09 | -1.35 | 0.18 | -0.31 | 0.06 | 99 | Y |
simplest_design | 610 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.25 | 0.80 | -0.23 | 0.18 | 99 | Y |
simplest_design | 611 | Q | 0 | estimator | (Intercept) | -0.05 | 0.10 | -0.51 | 0.61 | -0.24 | 0.14 | 99 | Y |
simplest_design | 612 | Q | 0 | estimator | (Intercept) | -0.12 | 0.10 | -1.28 | 0.20 | -0.31 | 0.07 | 99 | Y |
simplest_design | 613 | Q | 0 | estimator | (Intercept) | 0.10 | 0.09 | 1.08 | 0.28 | -0.08 | 0.28 | 99 | Y |
simplest_design | 614 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.40 | 0.69 | -0.15 | 0.23 | 99 | Y |
simplest_design | 615 | Q | 0 | estimator | (Intercept) | 0.19 | 0.10 | 1.92 | 0.06 | -0.01 | 0.38 | 99 | Y |
simplest_design | 616 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.43 | 0.67 | -0.23 | 0.15 | 99 | Y |
simplest_design | 617 | Q | 0 | estimator | (Intercept) | -0.11 | 0.09 | -1.26 | 0.21 | -0.29 | 0.07 | 99 | Y |
simplest_design | 618 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.20 | 0.84 | -0.23 | 0.18 | 99 | Y |
simplest_design | 619 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.32 | 0.75 | -0.22 | 0.16 | 99 | Y |
simplest_design | 620 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.49 | 0.63 | -0.15 | 0.25 | 99 | Y |
simplest_design | 621 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.70 | 0.48 | -0.13 | 0.28 | 99 | Y |
simplest_design | 622 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.15 | 0.88 | -0.19 | 0.23 | 99 | Y |
simplest_design | 623 | Q | 0 | estimator | (Intercept) | -0.03 | 0.11 | -0.33 | 0.74 | -0.24 | 0.17 | 99 | Y |
simplest_design | 624 | Q | 0 | estimator | (Intercept) | -0.12 | 0.10 | -1.16 | 0.25 | -0.32 | 0.08 | 99 | Y |
simplest_design | 625 | Q | 0 | estimator | (Intercept) | 0.16 | 0.11 | 1.48 | 0.14 | -0.05 | 0.37 | 99 | Y |
simplest_design | 626 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.92 | 0.36 | -0.29 | 0.11 | 99 | Y |
simplest_design | 627 | Q | 0 | estimator | (Intercept) | -0.05 | 0.10 | -0.44 | 0.66 | -0.25 | 0.16 | 99 | Y |
simplest_design | 628 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.51 | 0.61 | -0.15 | 0.25 | 99 | Y |
simplest_design | 629 | Q | 0 | estimator | (Intercept) | 0.01 | 0.12 | 0.08 | 0.94 | -0.23 | 0.25 | 99 | Y |
simplest_design | 630 | Q | 0 | estimator | (Intercept) | 0.07 | 0.11 | 0.64 | 0.52 | -0.14 | 0.28 | 99 | Y |
simplest_design | 631 | Q | 0 | estimator | (Intercept) | -0.05 | 0.11 | -0.41 | 0.68 | -0.27 | 0.18 | 99 | Y |
simplest_design | 632 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.37 | 0.71 | -0.16 | 0.24 | 99 | Y |
simplest_design | 633 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.11 | 0.91 | -0.21 | 0.19 | 99 | Y |
simplest_design | 634 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.10 | 0.92 | -0.19 | 0.21 | 99 | Y |
simplest_design | 635 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.23 | 0.82 | -0.23 | 0.18 | 99 | Y |
simplest_design | 636 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.40 | 0.69 | -0.25 | 0.16 | 99 | Y |
simplest_design | 637 | Q | 0 | estimator | (Intercept) | 0.04 | 0.09 | 0.37 | 0.71 | -0.15 | 0.22 | 99 | Y |
simplest_design | 638 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.41 | 0.68 | -0.15 | 0.23 | 99 | Y |
simplest_design | 639 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.11 | 0.92 | -0.20 | 0.22 | 99 | Y |
simplest_design | 640 | Q | 0 | estimator | (Intercept) | 0.06 | 0.09 | 0.74 | 0.46 | -0.11 | 0.24 | 99 | Y |
simplest_design | 641 | Q | 0 | estimator | (Intercept) | -0.15 | 0.10 | -1.52 | 0.13 | -0.35 | 0.05 | 99 | Y |
simplest_design | 642 | Q | 0 | estimator | (Intercept) | 0.14 | 0.11 | 1.29 | 0.20 | -0.08 | 0.36 | 99 | Y |
simplest_design | 643 | Q | 0 | estimator | (Intercept) | -0.17 | 0.09 | -1.87 | 0.06 | -0.36 | 0.01 | 99 | Y |
simplest_design | 644 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.19 | 0.85 | -0.22 | 0.18 | 99 | Y |
simplest_design | 645 | Q | 0 | estimator | (Intercept) | -0.17 | 0.09 | -1.89 | 0.06 | -0.35 | 0.01 | 99 | Y |
simplest_design | 646 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.17 | 0.87 | -0.20 | 0.24 | 99 | Y |
simplest_design | 647 | Q | 0 | estimator | (Intercept) | -0.13 | 0.10 | -1.27 | 0.21 | -0.33 | 0.07 | 99 | Y |
simplest_design | 648 | Q | 0 | estimator | (Intercept) | 0.13 | 0.11 | 1.19 | 0.24 | -0.09 | 0.35 | 99 | Y |
simplest_design | 649 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.64 | 0.52 | -0.26 | 0.14 | 99 | Y |
simplest_design | 650 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.42 | 0.68 | -0.23 | 0.15 | 99 | Y |
simplest_design | 651 | Q | 0 | estimator | (Intercept) | -0.12 | 0.09 | -1.35 | 0.18 | -0.29 | 0.06 | 99 | Y |
simplest_design | 652 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.23 | 0.82 | -0.22 | 0.17 | 99 | Y |
simplest_design | 653 | Q | 0 | estimator | (Intercept) | 0.16 | 0.11 | 1.39 | 0.17 | -0.07 | 0.38 | 99 | Y |
simplest_design | 654 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.27 | 0.79 | -0.18 | 0.24 | 99 | Y |
simplest_design | 655 | Q | 0 | estimator | (Intercept) | -0.07 | 0.11 | -0.63 | 0.53 | -0.27 | 0.14 | 99 | Y |
simplest_design | 656 | Q | 0 | estimator | (Intercept) | 0.14 | 0.11 | 1.30 | 0.20 | -0.07 | 0.36 | 99 | Y |
simplest_design | 657 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.08 | 0.94 | -0.19 | 0.20 | 99 | Y |
simplest_design | 658 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.43 | 0.67 | -0.22 | 0.14 | 99 | Y |
simplest_design | 659 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.24 | 0.81 | -0.23 | 0.18 | 99 | Y |
simplest_design | 660 | Q | 0 | estimator | (Intercept) | -0.19 | 0.09 | -2.10 | 0.04 | -0.36 | -0.01 | 99 | Y |
simplest_design | 661 | Q | 0 | estimator | (Intercept) | -0.17 | 0.09 | -1.79 | 0.08 | -0.35 | 0.02 | 99 | Y |
simplest_design | 662 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.60 | 0.55 | -0.14 | 0.26 | 99 | Y |
simplest_design | 663 | Q | 0 | estimator | (Intercept) | 0.01 | 0.11 | 0.06 | 0.95 | -0.21 | 0.22 | 99 | Y |
simplest_design | 664 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.23 | 0.82 | -0.23 | 0.18 | 99 | Y |
simplest_design | 665 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.05 | 0.96 | -0.21 | 0.20 | 99 | Y |
simplest_design | 666 | Q | 0 | estimator | (Intercept) | 0.00 | 0.11 | 0.04 | 0.97 | -0.21 | 0.22 | 99 | Y |
simplest_design | 667 | Q | 0 | estimator | (Intercept) | -0.11 | 0.09 | -1.17 | 0.24 | -0.29 | 0.07 | 99 | Y |
simplest_design | 668 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.66 | 0.51 | -0.27 | 0.13 | 99 | Y |
simplest_design | 669 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.20 | 0.84 | -0.19 | 0.23 | 99 | Y |
simplest_design | 670 | Q | 0 | estimator | (Intercept) | 0.01 | 0.09 | 0.08 | 0.94 | -0.18 | 0.19 | 99 | Y |
simplest_design | 671 | Q | 0 | estimator | (Intercept) | 0.08 | 0.09 | 0.82 | 0.41 | -0.11 | 0.26 | 99 | Y |
simplest_design | 672 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.49 | 0.63 | -0.23 | 0.14 | 99 | Y |
simplest_design | 673 | Q | 0 | estimator | (Intercept) | 0.02 | 0.09 | 0.22 | 0.83 | -0.17 | 0.21 | 99 | Y |
simplest_design | 674 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.30 | 0.77 | -0.17 | 0.22 | 99 | Y |
simplest_design | 675 | Q | 0 | estimator | (Intercept) | -0.13 | 0.11 | -1.24 | 0.22 | -0.35 | 0.08 | 99 | Y |
simplest_design | 676 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.40 | 0.69 | -0.16 | 0.24 | 99 | Y |
simplest_design | 677 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.07 | 0.94 | -0.19 | 0.21 | 99 | Y |
simplest_design | 678 | Q | 0 | estimator | (Intercept) | -0.19 | 0.10 | -1.89 | 0.06 | -0.40 | 0.01 | 99 | Y |
simplest_design | 679 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.85 | 0.40 | -0.12 | 0.29 | 99 | Y |
simplest_design | 680 | Q | 0 | estimator | (Intercept) | -0.23 | 0.11 | -2.04 | 0.04 | -0.46 | -0.01 | 99 | Y |
simplest_design | 681 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.30 | 0.77 | -0.16 | 0.22 | 99 | Y |
simplest_design | 682 | Q | 0 | estimator | (Intercept) | 0.10 | 0.12 | 0.89 | 0.37 | -0.13 | 0.33 | 99 | Y |
simplest_design | 683 | Q | 0 | estimator | (Intercept) | -0.22 | 0.11 | -1.99 | 0.05 | -0.43 | 0.00 | 99 | Y |
simplest_design | 684 | Q | 0 | estimator | (Intercept) | 0.06 | 0.11 | 0.57 | 0.57 | -0.15 | 0.27 | 99 | Y |
simplest_design | 685 | Q | 0 | estimator | (Intercept) | -0.06 | 0.11 | -0.54 | 0.59 | -0.27 | 0.15 | 99 | Y |
simplest_design | 686 | Q | 0 | estimator | (Intercept) | -0.15 | 0.10 | -1.43 | 0.16 | -0.35 | 0.06 | 99 | Y |
simplest_design | 687 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.40 | 0.69 | -0.16 | 0.24 | 99 | Y |
simplest_design | 688 | Q | 0 | estimator | (Intercept) | -0.02 | 0.11 | -0.14 | 0.89 | -0.23 | 0.20 | 99 | Y |
simplest_design | 689 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.35 | 0.73 | -0.16 | 0.22 | 99 | Y |
simplest_design | 690 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.69 | 0.49 | -0.13 | 0.26 | 99 | Y |
simplest_design | 691 | Q | 0 | estimator | (Intercept) | -0.18 | 0.10 | -1.79 | 0.08 | -0.38 | 0.02 | 99 | Y |
simplest_design | 692 | Q | 0 | estimator | (Intercept) | -0.06 | 0.08 | -0.69 | 0.49 | -0.22 | 0.11 | 99 | Y |
simplest_design | 693 | Q | 0 | estimator | (Intercept) | 0.00 | 0.10 | -0.04 | 0.97 | -0.20 | 0.20 | 99 | Y |
simplest_design | 694 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.46 | 0.65 | -0.21 | 0.13 | 99 | Y |
simplest_design | 695 | Q | 0 | estimator | (Intercept) | 0.04 | 0.11 | 0.38 | 0.70 | -0.17 | 0.26 | 99 | Y |
simplest_design | 696 | Q | 0 | estimator | (Intercept) | -0.04 | 0.11 | -0.36 | 0.72 | -0.25 | 0.17 | 99 | Y |
simplest_design | 697 | Q | 0 | estimator | (Intercept) | 0.05 | 0.09 | 0.49 | 0.63 | -0.14 | 0.23 | 99 | Y |
simplest_design | 698 | Q | 0 | estimator | (Intercept) | 0.16 | 0.11 | 1.52 | 0.13 | -0.05 | 0.37 | 99 | Y |
simplest_design | 699 | Q | 0 | estimator | (Intercept) | 0.08 | 0.11 | 0.77 | 0.44 | -0.13 | 0.30 | 99 | Y |
simplest_design | 700 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.57 | 0.57 | -0.14 | 0.25 | 99 | Y |
simplest_design | 701 | Q | 0 | estimator | (Intercept) | -0.08 | 0.09 | -0.89 | 0.38 | -0.26 | 0.10 | 99 | Y |
simplest_design | 702 | Q | 0 | estimator | (Intercept) | -0.06 | 0.12 | -0.47 | 0.64 | -0.29 | 0.18 | 99 | Y |
simplest_design | 703 | Q | 0 | estimator | (Intercept) | -0.02 | 0.11 | -0.20 | 0.84 | -0.24 | 0.20 | 99 | Y |
simplest_design | 704 | Q | 0 | estimator | (Intercept) | 0.02 | 0.09 | 0.19 | 0.85 | -0.16 | 0.20 | 99 | Y |
simplest_design | 705 | Q | 0 | estimator | (Intercept) | 0.21 | 0.10 | 2.11 | 0.04 | 0.01 | 0.42 | 99 | Y |
simplest_design | 706 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.49 | 0.63 | -0.15 | 0.25 | 99 | Y |
simplest_design | 707 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.48 | 0.64 | -0.15 | 0.24 | 99 | Y |
simplest_design | 708 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.21 | 0.84 | -0.20 | 0.24 | 99 | Y |
simplest_design | 709 | Q | 0 | estimator | (Intercept) | 0.14 | 0.08 | 1.71 | 0.09 | -0.02 | 0.30 | 99 | Y |
simplest_design | 710 | Q | 0 | estimator | (Intercept) | 0.15 | 0.09 | 1.69 | 0.09 | -0.03 | 0.33 | 99 | Y |
simplest_design | 711 | Q | 0 | estimator | (Intercept) | 0.25 | 0.11 | 2.28 | 0.02 | 0.03 | 0.46 | 99 | Y |
simplest_design | 712 | Q | 0 | estimator | (Intercept) | -0.11 | 0.10 | -1.11 | 0.27 | -0.31 | 0.09 | 99 | Y |
simplest_design | 713 | Q | 0 | estimator | (Intercept) | 0.16 | 0.10 | 1.58 | 0.12 | -0.04 | 0.37 | 99 | Y |
simplest_design | 714 | Q | 0 | estimator | (Intercept) | -0.16 | 0.11 | -1.44 | 0.15 | -0.38 | 0.06 | 99 | Y |
simplest_design | 715 | Q | 0 | estimator | (Intercept) | 0.18 | 0.10 | 1.75 | 0.08 | -0.02 | 0.38 | 99 | Y |
simplest_design | 716 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.24 | 0.81 | -0.23 | 0.18 | 99 | Y |
simplest_design | 717 | Q | 0 | estimator | (Intercept) | 0.00 | 0.09 | 0.02 | 0.98 | -0.18 | 0.18 | 99 | Y |
simplest_design | 718 | Q | 0 | estimator | (Intercept) | -0.07 | 0.11 | -0.69 | 0.49 | -0.28 | 0.14 | 99 | Y |
simplest_design | 719 | Q | 0 | estimator | (Intercept) | 0.11 | 0.09 | 1.30 | 0.20 | -0.06 | 0.29 | 99 | Y |
simplest_design | 720 | Q | 0 | estimator | (Intercept) | 0.21 | 0.10 | 1.98 | 0.05 | 0.00 | 0.41 | 99 | Y |
simplest_design | 721 | Q | 0 | estimator | (Intercept) | -0.10 | 0.11 | -0.88 | 0.38 | -0.32 | 0.12 | 99 | Y |
simplest_design | 722 | Q | 0 | estimator | (Intercept) | 0.03 | 0.09 | 0.38 | 0.71 | -0.15 | 0.22 | 99 | Y |
simplest_design | 723 | Q | 0 | estimator | (Intercept) | -0.05 | 0.09 | -0.54 | 0.59 | -0.22 | 0.13 | 99 | Y |
simplest_design | 724 | Q | 0 | estimator | (Intercept) | -0.02 | 0.09 | -0.23 | 0.82 | -0.19 | 0.15 | 99 | Y |
simplest_design | 725 | Q | 0 | estimator | (Intercept) | -0.05 | 0.10 | -0.54 | 0.59 | -0.25 | 0.14 | 99 | Y |
simplest_design | 726 | Q | 0 | estimator | (Intercept) | 0.07 | 0.11 | 0.66 | 0.51 | -0.14 | 0.29 | 99 | Y |
simplest_design | 727 | Q | 0 | estimator | (Intercept) | -0.21 | 0.11 | -1.90 | 0.06 | -0.42 | 0.01 | 99 | Y |
simplest_design | 728 | Q | 0 | estimator | (Intercept) | -0.01 | 0.11 | -0.09 | 0.93 | -0.23 | 0.22 | 99 | Y |
simplest_design | 729 | Q | 0 | estimator | (Intercept) | -0.14 | 0.10 | -1.44 | 0.15 | -0.34 | 0.05 | 99 | Y |
simplest_design | 730 | Q | 0 | estimator | (Intercept) | 0.11 | 0.10 | 1.12 | 0.27 | -0.08 | 0.30 | 99 | Y |
simplest_design | 731 | Q | 0 | estimator | (Intercept) | 0.27 | 0.09 | 2.87 | 0.01 | 0.08 | 0.46 | 99 | Y |
simplest_design | 732 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.26 | 0.79 | -0.17 | 0.22 | 99 | Y |
simplest_design | 733 | Q | 0 | estimator | (Intercept) | 0.13 | 0.09 | 1.47 | 0.15 | -0.05 | 0.30 | 99 | Y |
simplest_design | 734 | Q | 0 | estimator | (Intercept) | 0.03 | 0.09 | 0.39 | 0.70 | -0.14 | 0.21 | 99 | Y |
simplest_design | 735 | Q | 0 | estimator | (Intercept) | 0.00 | 0.10 | -0.02 | 0.99 | -0.21 | 0.20 | 99 | Y |
simplest_design | 736 | Q | 0 | estimator | (Intercept) | -0.02 | 0.11 | -0.18 | 0.86 | -0.24 | 0.20 | 99 | Y |
simplest_design | 737 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.90 | 0.37 | -0.11 | 0.29 | 99 | Y |
simplest_design | 738 | Q | 0 | estimator | (Intercept) | 0.31 | 0.11 | 2.96 | 0.00 | 0.10 | 0.52 | 99 | Y |
simplest_design | 739 | Q | 0 | estimator | (Intercept) | -0.06 | 0.09 | -0.58 | 0.56 | -0.24 | 0.13 | 99 | Y |
simplest_design | 740 | Q | 0 | estimator | (Intercept) | -0.02 | 0.11 | -0.24 | 0.81 | -0.23 | 0.18 | 99 | Y |
simplest_design | 741 | Q | 0 | estimator | (Intercept) | -0.17 | 0.09 | -1.84 | 0.07 | -0.35 | 0.01 | 99 | Y |
simplest_design | 742 | Q | 0 | estimator | (Intercept) | 0.23 | 0.10 | 2.26 | 0.03 | 0.03 | 0.44 | 99 | Y |
simplest_design | 743 | Q | 0 | estimator | (Intercept) | -0.10 | 0.11 | -0.98 | 0.33 | -0.32 | 0.11 | 99 | Y |
simplest_design | 744 | Q | 0 | estimator | (Intercept) | -0.10 | 0.11 | -0.95 | 0.35 | -0.32 | 0.11 | 99 | Y |
simplest_design | 745 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.19 | 0.85 | -0.20 | 0.25 | 99 | Y |
simplest_design | 746 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.84 | 0.40 | -0.29 | 0.12 | 99 | Y |
simplest_design | 747 | Q | 0 | estimator | (Intercept) | -0.06 | 0.11 | -0.52 | 0.60 | -0.28 | 0.16 | 99 | Y |
simplest_design | 748 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.69 | 0.49 | -0.27 | 0.13 | 99 | Y |
simplest_design | 749 | Q | 0 | estimator | (Intercept) | 0.11 | 0.11 | 0.96 | 0.34 | -0.11 | 0.33 | 99 | Y |
simplest_design | 750 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.43 | 0.67 | -0.15 | 0.23 | 99 | Y |
simplest_design | 751 | Q | 0 | estimator | (Intercept) | 0.25 | 0.10 | 2.57 | 0.01 | 0.06 | 0.44 | 99 | Y |
simplest_design | 752 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.81 | 0.42 | -0.28 | 0.12 | 99 | Y |
simplest_design | 753 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.56 | 0.58 | -0.14 | 0.25 | 99 | Y |
simplest_design | 754 | Q | 0 | estimator | (Intercept) | 0.04 | 0.11 | 0.33 | 0.74 | -0.18 | 0.26 | 99 | Y |
simplest_design | 755 | Q | 0 | estimator | (Intercept) | -0.11 | 0.10 | -1.19 | 0.24 | -0.31 | 0.08 | 99 | Y |
simplest_design | 756 | Q | 0 | estimator | (Intercept) | 0.04 | 0.09 | 0.43 | 0.67 | -0.14 | 0.22 | 99 | Y |
simplest_design | 757 | Q | 0 | estimator | (Intercept) | 0.03 | 0.09 | 0.35 | 0.73 | -0.14 | 0.21 | 99 | Y |
simplest_design | 758 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.49 | 0.63 | -0.15 | 0.25 | 99 | Y |
simplest_design | 759 | Q | 0 | estimator | (Intercept) | 0.13 | 0.10 | 1.27 | 0.21 | -0.07 | 0.33 | 99 | Y |
simplest_design | 760 | Q | 0 | estimator | (Intercept) | 0.03 | 0.09 | 0.40 | 0.69 | -0.14 | 0.21 | 99 | Y |
simplest_design | 761 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.05 | 0.96 | -0.21 | 0.19 | 99 | Y |
simplest_design | 762 | Q | 0 | estimator | (Intercept) | 0.12 | 0.09 | 1.24 | 0.22 | -0.07 | 0.30 | 99 | Y |
simplest_design | 763 | Q | 0 | estimator | (Intercept) | -0.15 | 0.10 | -1.53 | 0.13 | -0.35 | 0.05 | 99 | Y |
simplest_design | 764 | Q | 0 | estimator | (Intercept) | -0.14 | 0.10 | -1.44 | 0.15 | -0.34 | 0.05 | 99 | Y |
simplest_design | 765 | Q | 0 | estimator | (Intercept) | 0.11 | 0.09 | 1.17 | 0.24 | -0.08 | 0.29 | 99 | Y |
simplest_design | 766 | Q | 0 | estimator | (Intercept) | -0.07 | 0.11 | -0.66 | 0.51 | -0.28 | 0.14 | 99 | Y |
simplest_design | 767 | Q | 0 | estimator | (Intercept) | 0.03 | 0.11 | 0.27 | 0.79 | -0.19 | 0.25 | 99 | Y |
simplest_design | 768 | Q | 0 | estimator | (Intercept) | -0.15 | 0.10 | -1.41 | 0.16 | -0.36 | 0.06 | 99 | Y |
simplest_design | 769 | Q | 0 | estimator | (Intercept) | -0.05 | 0.10 | -0.49 | 0.63 | -0.26 | 0.16 | 99 | Y |
simplest_design | 770 | Q | 0 | estimator | (Intercept) | 0.00 | 0.10 | 0.03 | 0.97 | -0.19 | 0.20 | 99 | Y |
simplest_design | 771 | Q | 0 | estimator | (Intercept) | 0.05 | 0.11 | 0.41 | 0.68 | -0.18 | 0.27 | 99 | Y |
simplest_design | 772 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.13 | 0.90 | -0.19 | 0.22 | 99 | Y |
simplest_design | 773 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.52 | 0.60 | -0.14 | 0.24 | 99 | Y |
simplest_design | 774 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.57 | 0.57 | -0.27 | 0.15 | 99 | Y |
simplest_design | 775 | Q | 0 | estimator | (Intercept) | 0.06 | 0.09 | 0.62 | 0.53 | -0.13 | 0.24 | 99 | Y |
simplest_design | 776 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.14 | 0.89 | -0.18 | 0.20 | 99 | Y |
simplest_design | 777 | Q | 0 | estimator | (Intercept) | 0.07 | 0.09 | 0.76 | 0.45 | -0.11 | 0.25 | 99 | Y |
simplest_design | 778 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.64 | 0.52 | -0.14 | 0.27 | 99 | Y |
simplest_design | 779 | Q | 0 | estimator | (Intercept) | -0.23 | 0.11 | -2.19 | 0.03 | -0.44 | -0.02 | 99 | Y |
simplest_design | 780 | Q | 0 | estimator | (Intercept) | 0.04 | 0.11 | 0.41 | 0.68 | -0.17 | 0.26 | 99 | Y |
simplest_design | 781 | Q | 0 | estimator | (Intercept) | -0.08 | 0.11 | -0.77 | 0.44 | -0.29 | 0.13 | 99 | Y |
simplest_design | 782 | Q | 0 | estimator | (Intercept) | -0.08 | 0.11 | -0.76 | 0.45 | -0.29 | 0.13 | 99 | Y |
simplest_design | 783 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.43 | 0.67 | -0.25 | 0.16 | 99 | Y |
simplest_design | 784 | Q | 0 | estimator | (Intercept) | 0.14 | 0.09 | 1.53 | 0.13 | -0.04 | 0.31 | 99 | Y |
simplest_design | 785 | Q | 0 | estimator | (Intercept) | 0.01 | 0.09 | 0.08 | 0.93 | -0.17 | 0.18 | 99 | Y |
simplest_design | 786 | Q | 0 | estimator | (Intercept) | -0.16 | 0.09 | -1.64 | 0.10 | -0.34 | 0.03 | 99 | Y |
simplest_design | 787 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.71 | 0.48 | -0.28 | 0.13 | 99 | Y |
simplest_design | 788 | Q | 0 | estimator | (Intercept) | -0.01 | 0.09 | -0.06 | 0.95 | -0.19 | 0.18 | 99 | Y |
simplest_design | 789 | Q | 0 | estimator | (Intercept) | -0.04 | 0.11 | -0.41 | 0.69 | -0.25 | 0.17 | 99 | Y |
simplest_design | 790 | Q | 0 | estimator | (Intercept) | 0.05 | 0.09 | 0.51 | 0.61 | -0.13 | 0.23 | 99 | Y |
simplest_design | 791 | Q | 0 | estimator | (Intercept) | -0.07 | 0.09 | -0.81 | 0.42 | -0.25 | 0.11 | 99 | Y |
simplest_design | 792 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.50 | 0.62 | -0.15 | 0.25 | 99 | Y |
simplest_design | 793 | Q | 0 | estimator | (Intercept) | -0.07 | 0.09 | -0.78 | 0.44 | -0.26 | 0.11 | 99 | Y |
simplest_design | 794 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.22 | 0.83 | -0.22 | 0.18 | 99 | Y |
simplest_design | 795 | Q | 0 | estimator | (Intercept) | 0.02 | 0.11 | 0.21 | 0.83 | -0.19 | 0.24 | 99 | Y |
simplest_design | 796 | Q | 0 | estimator | (Intercept) | 0.16 | 0.10 | 1.64 | 0.10 | -0.03 | 0.34 | 99 | Y |
simplest_design | 797 | Q | 0 | estimator | (Intercept) | -0.05 | 0.09 | -0.56 | 0.58 | -0.22 | 0.12 | 99 | Y |
simplest_design | 798 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.53 | 0.60 | -0.14 | 0.24 | 99 | Y |
simplest_design | 799 | Q | 0 | estimator | (Intercept) | -0.18 | 0.11 | -1.70 | 0.09 | -0.39 | 0.03 | 99 | Y |
simplest_design | 800 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.75 | 0.45 | -0.26 | 0.12 | 99 | Y |
simplest_design | 801 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.91 | 0.37 | -0.11 | 0.29 | 99 | Y |
simplest_design | 802 | Q | 0 | estimator | (Intercept) | -0.19 | 0.09 | -2.09 | 0.04 | -0.37 | -0.01 | 99 | Y |
simplest_design | 803 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.84 | 0.40 | -0.12 | 0.29 | 99 | Y |
simplest_design | 804 | Q | 0 | estimator | (Intercept) | 0.06 | 0.09 | 0.70 | 0.48 | -0.12 | 0.25 | 99 | Y |
simplest_design | 805 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.12 | 0.91 | -0.20 | 0.18 | 99 | Y |
simplest_design | 806 | Q | 0 | estimator | (Intercept) | 0.00 | 0.09 | -0.01 | 0.99 | -0.17 | 0.17 | 99 | Y |
simplest_design | 807 | Q | 0 | estimator | (Intercept) | 0.11 | 0.10 | 1.03 | 0.31 | -0.10 | 0.31 | 99 | Y |
simplest_design | 808 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.91 | 0.36 | -0.29 | 0.11 | 99 | Y |
simplest_design | 809 | Q | 0 | estimator | (Intercept) | 0.13 | 0.09 | 1.48 | 0.14 | -0.04 | 0.30 | 99 | Y |
simplest_design | 810 | Q | 0 | estimator | (Intercept) | 0.09 | 0.10 | 0.87 | 0.39 | -0.11 | 0.29 | 99 | Y |
simplest_design | 811 | Q | 0 | estimator | (Intercept) | 0.13 | 0.11 | 1.19 | 0.24 | -0.09 | 0.35 | 99 | Y |
simplest_design | 812 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.43 | 0.67 | -0.23 | 0.15 | 99 | Y |
simplest_design | 813 | Q | 0 | estimator | (Intercept) | 0.12 | 0.09 | 1.27 | 0.21 | -0.07 | 0.30 | 99 | Y |
simplest_design | 814 | Q | 0 | estimator | (Intercept) | 0.03 | 0.11 | 0.29 | 0.77 | -0.18 | 0.24 | 99 | Y |
simplest_design | 815 | Q | 0 | estimator | (Intercept) | 0.10 | 0.09 | 1.17 | 0.24 | -0.07 | 0.27 | 99 | Y |
simplest_design | 816 | Q | 0 | estimator | (Intercept) | 0.05 | 0.11 | 0.48 | 0.63 | -0.16 | 0.27 | 99 | Y |
simplest_design | 817 | Q | 0 | estimator | (Intercept) | -0.23 | 0.09 | -2.53 | 0.01 | -0.42 | -0.05 | 99 | Y |
simplest_design | 818 | Q | 0 | estimator | (Intercept) | -0.07 | 0.09 | -0.74 | 0.46 | -0.25 | 0.11 | 99 | Y |
simplest_design | 819 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.08 | 0.94 | -0.20 | 0.21 | 99 | Y |
simplest_design | 820 | Q | 0 | estimator | (Intercept) | -0.18 | 0.09 | -1.86 | 0.07 | -0.36 | 0.01 | 99 | Y |
simplest_design | 821 | Q | 0 | estimator | (Intercept) | -0.07 | 0.11 | -0.61 | 0.54 | -0.29 | 0.15 | 99 | Y |
simplest_design | 822 | Q | 0 | estimator | (Intercept) | 0.04 | 0.11 | 0.39 | 0.70 | -0.17 | 0.25 | 99 | Y |
simplest_design | 823 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.42 | 0.68 | -0.23 | 0.15 | 99 | Y |
simplest_design | 824 | Q | 0 | estimator | (Intercept) | -0.13 | 0.11 | -1.27 | 0.21 | -0.34 | 0.08 | 99 | Y |
simplest_design | 825 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.42 | 0.67 | -0.23 | 0.15 | 99 | Y |
simplest_design | 826 | Q | 0 | estimator | (Intercept) | 0.02 | 0.09 | 0.25 | 0.81 | -0.16 | 0.20 | 99 | Y |
simplest_design | 827 | Q | 0 | estimator | (Intercept) | -0.06 | 0.11 | -0.57 | 0.57 | -0.28 | 0.15 | 99 | Y |
simplest_design | 828 | Q | 0 | estimator | (Intercept) | -0.05 | 0.10 | -0.55 | 0.58 | -0.25 | 0.14 | 99 | Y |
simplest_design | 829 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.43 | 0.67 | -0.22 | 0.14 | 99 | Y |
simplest_design | 830 | Q | 0 | estimator | (Intercept) | -0.10 | 0.10 | -1.03 | 0.30 | -0.31 | 0.10 | 99 | Y |
simplest_design | 831 | Q | 0 | estimator | (Intercept) | -0.02 | 0.09 | -0.21 | 0.84 | -0.20 | 0.16 | 99 | Y |
simplest_design | 832 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.50 | 0.62 | -0.14 | 0.24 | 99 | Y |
simplest_design | 833 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.21 | 0.83 | -0.23 | 0.18 | 99 | Y |
simplest_design | 834 | Q | 0 | estimator | (Intercept) | 0.08 | 0.10 | 0.81 | 0.42 | -0.11 | 0.27 | 99 | Y |
simplest_design | 835 | Q | 0 | estimator | (Intercept) | -0.03 | 0.11 | -0.29 | 0.77 | -0.25 | 0.18 | 99 | Y |
simplest_design | 836 | Q | 0 | estimator | (Intercept) | -0.06 | 0.12 | -0.52 | 0.60 | -0.30 | 0.17 | 99 | Y |
simplest_design | 837 | Q | 0 | estimator | (Intercept) | 0.04 | 0.09 | 0.41 | 0.68 | -0.15 | 0.23 | 99 | Y |
simplest_design | 838 | Q | 0 | estimator | (Intercept) | -0.13 | 0.11 | -1.19 | 0.24 | -0.35 | 0.09 | 99 | Y |
simplest_design | 839 | Q | 0 | estimator | (Intercept) | -0.12 | 0.10 | -1.17 | 0.24 | -0.32 | 0.08 | 99 | Y |
simplest_design | 840 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.92 | 0.36 | -0.28 | 0.10 | 99 | Y |
simplest_design | 841 | Q | 0 | estimator | (Intercept) | 0.04 | 0.12 | 0.38 | 0.70 | -0.19 | 0.28 | 99 | Y |
simplest_design | 842 | Q | 0 | estimator | (Intercept) | -0.25 | 0.10 | -2.42 | 0.02 | -0.45 | -0.04 | 99 | Y |
simplest_design | 843 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.28 | 0.78 | -0.23 | 0.17 | 99 | Y |
simplest_design | 844 | Q | 0 | estimator | (Intercept) | 0.08 | 0.10 | 0.81 | 0.42 | -0.11 | 0.27 | 99 | Y |
simplest_design | 845 | Q | 0 | estimator | (Intercept) | -0.18 | 0.10 | -1.89 | 0.06 | -0.37 | 0.01 | 99 | Y |
simplest_design | 846 | Q | 0 | estimator | (Intercept) | -0.07 | 0.11 | -0.63 | 0.53 | -0.28 | 0.15 | 99 | Y |
simplest_design | 847 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.47 | 0.64 | -0.22 | 0.14 | 99 | Y |
simplest_design | 848 | Q | 0 | estimator | (Intercept) | -0.07 | 0.11 | -0.62 | 0.54 | -0.28 | 0.15 | 99 | Y |
simplest_design | 849 | Q | 0 | estimator | (Intercept) | 0.15 | 0.09 | 1.55 | 0.12 | -0.04 | 0.33 | 99 | Y |
simplest_design | 850 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.42 | 0.68 | -0.22 | 0.14 | 99 | Y |
simplest_design | 851 | Q | 0 | estimator | (Intercept) | -0.14 | 0.09 | -1.57 | 0.12 | -0.32 | 0.04 | 99 | Y |
simplest_design | 852 | Q | 0 | estimator | (Intercept) | 0.08 | 0.09 | 0.93 | 0.36 | -0.09 | 0.25 | 99 | Y |
simplest_design | 853 | Q | 0 | estimator | (Intercept) | -0.03 | 0.09 | -0.30 | 0.76 | -0.20 | 0.15 | 99 | Y |
simplest_design | 854 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.10 | 0.92 | -0.19 | 0.20 | 99 | Y |
simplest_design | 855 | Q | 0 | estimator | (Intercept) | 0.14 | 0.09 | 1.48 | 0.14 | -0.05 | 0.33 | 99 | Y |
simplest_design | 856 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.61 | 0.55 | -0.27 | 0.14 | 99 | Y |
simplest_design | 857 | Q | 0 | estimator | (Intercept) | 0.13 | 0.11 | 1.20 | 0.23 | -0.08 | 0.34 | 99 | Y |
simplest_design | 858 | Q | 0 | estimator | (Intercept) | -0.05 | 0.11 | -0.50 | 0.62 | -0.26 | 0.16 | 99 | Y |
simplest_design | 859 | Q | 0 | estimator | (Intercept) | -0.08 | 0.10 | -0.80 | 0.42 | -0.29 | 0.12 | 99 | Y |
simplest_design | 860 | Q | 0 | estimator | (Intercept) | -0.16 | 0.10 | -1.59 | 0.12 | -0.36 | 0.04 | 99 | Y |
simplest_design | 861 | Q | 0 | estimator | (Intercept) | -0.12 | 0.10 | -1.19 | 0.23 | -0.31 | 0.08 | 99 | Y |
simplest_design | 862 | Q | 0 | estimator | (Intercept) | 0.09 | 0.09 | 1.00 | 0.32 | -0.09 | 0.28 | 99 | Y |
simplest_design | 863 | Q | 0 | estimator | (Intercept) | 0.08 | 0.10 | 0.75 | 0.46 | -0.13 | 0.29 | 99 | Y |
simplest_design | 864 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.55 | 0.58 | -0.26 | 0.15 | 99 | Y |
simplest_design | 865 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.41 | 0.68 | -0.15 | 0.23 | 99 | Y |
simplest_design | 866 | Q | 0 | estimator | (Intercept) | 0.04 | 0.10 | 0.42 | 0.68 | -0.16 | 0.25 | 99 | Y |
simplest_design | 867 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.66 | 0.51 | -0.27 | 0.13 | 99 | Y |
simplest_design | 868 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.13 | 0.90 | -0.18 | 0.21 | 99 | Y |
simplest_design | 869 | Q | 0 | estimator | (Intercept) | 0.07 | 0.09 | 0.84 | 0.40 | -0.10 | 0.25 | 99 | Y |
simplest_design | 870 | Q | 0 | estimator | (Intercept) | -0.06 | 0.09 | -0.61 | 0.54 | -0.24 | 0.13 | 99 | Y |
simplest_design | 871 | Q | 0 | estimator | (Intercept) | -0.12 | 0.09 | -1.31 | 0.19 | -0.29 | 0.06 | 99 | Y |
simplest_design | 872 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.66 | 0.51 | -0.25 | 0.13 | 99 | Y |
simplest_design | 873 | Q | 0 | estimator | (Intercept) | -0.03 | 0.09 | -0.30 | 0.77 | -0.21 | 0.15 | 99 | Y |
simplest_design | 874 | Q | 0 | estimator | (Intercept) | 0.05 | 0.09 | 0.60 | 0.55 | -0.13 | 0.23 | 99 | Y |
simplest_design | 875 | Q | 0 | estimator | (Intercept) | -0.04 | 0.09 | -0.40 | 0.69 | -0.22 | 0.15 | 99 | Y |
simplest_design | 876 | Q | 0 | estimator | (Intercept) | 0.23 | 0.10 | 2.35 | 0.02 | 0.04 | 0.43 | 99 | Y |
simplest_design | 877 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.73 | 0.47 | -0.12 | 0.26 | 99 | Y |
simplest_design | 878 | Q | 0 | estimator | (Intercept) | -0.03 | 0.10 | -0.31 | 0.76 | -0.24 | 0.17 | 99 | Y |
simplest_design | 879 | Q | 0 | estimator | (Intercept) | 0.08 | 0.10 | 0.80 | 0.42 | -0.11 | 0.27 | 99 | Y |
simplest_design | 880 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.42 | 0.68 | -0.24 | 0.15 | 99 | Y |
simplest_design | 881 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.89 | 0.37 | -0.30 | 0.11 | 99 | Y |
simplest_design | 882 | Q | 0 | estimator | (Intercept) | -0.15 | 0.09 | -1.69 | 0.09 | -0.32 | 0.03 | 99 | Y |
simplest_design | 883 | Q | 0 | estimator | (Intercept) | 0.11 | 0.09 | 1.15 | 0.25 | -0.08 | 0.29 | 99 | Y |
simplest_design | 884 | Q | 0 | estimator | (Intercept) | 0.02 | 0.10 | 0.17 | 0.86 | -0.18 | 0.22 | 99 | Y |
simplest_design | 885 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.58 | 0.56 | -0.25 | 0.14 | 99 | Y |
simplest_design | 886 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.12 | 0.90 | -0.20 | 0.18 | 99 | Y |
simplest_design | 887 | Q | 0 | estimator | (Intercept) | -0.13 | 0.09 | -1.38 | 0.17 | -0.31 | 0.06 | 99 | Y |
simplest_design | 888 | Q | 0 | estimator | (Intercept) | 0.11 | 0.09 | 1.27 | 0.21 | -0.06 | 0.29 | 99 | Y |
simplest_design | 889 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.67 | 0.51 | -0.14 | 0.27 | 99 | Y |
simplest_design | 890 | Q | 0 | estimator | (Intercept) | -0.02 | 0.09 | -0.17 | 0.86 | -0.20 | 0.17 | 99 | Y |
simplest_design | 891 | Q | 0 | estimator | (Intercept) | -0.16 | 0.08 | -1.93 | 0.06 | -0.33 | 0.00 | 99 | Y |
simplest_design | 892 | Q | 0 | estimator | (Intercept) | 0.07 | 0.11 | 0.70 | 0.49 | -0.14 | 0.28 | 99 | Y |
simplest_design | 893 | Q | 0 | estimator | (Intercept) | -0.06 | 0.10 | -0.60 | 0.55 | -0.26 | 0.14 | 99 | Y |
simplest_design | 894 | Q | 0 | estimator | (Intercept) | 0.05 | 0.11 | 0.45 | 0.66 | -0.16 | 0.26 | 99 | Y |
simplest_design | 895 | Q | 0 | estimator | (Intercept) | 0.15 | 0.09 | 1.59 | 0.11 | -0.04 | 0.34 | 99 | Y |
simplest_design | 896 | Q | 0 | estimator | (Intercept) | 0.05 | 0.09 | 0.50 | 0.62 | -0.14 | 0.23 | 99 | Y |
simplest_design | 897 | Q | 0 | estimator | (Intercept) | -0.06 | 0.11 | -0.54 | 0.59 | -0.29 | 0.17 | 99 | Y |
simplest_design | 898 | Q | 0 | estimator | (Intercept) | -0.23 | 0.11 | -2.02 | 0.05 | -0.46 | 0.00 | 99 | Y |
simplest_design | 899 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.14 | 0.89 | -0.19 | 0.21 | 99 | Y |
simplest_design | 900 | Q | 0 | estimator | (Intercept) | -0.07 | 0.10 | -0.68 | 0.50 | -0.26 | 0.13 | 99 | Y |
simplest_design | 901 | Q | 0 | estimator | (Intercept) | 0.00 | 0.09 | 0.04 | 0.97 | -0.17 | 0.18 | 99 | Y |
simplest_design | 902 | Q | 0 | estimator | (Intercept) | 0.02 | 0.09 | 0.20 | 0.84 | -0.17 | 0.20 | 99 | Y |
simplest_design | 903 | Q | 0 | estimator | (Intercept) | -0.07 | 0.09 | -0.76 | 0.45 | -0.26 | 0.11 | 99 | Y |
simplest_design | 904 | Q | 0 | estimator | (Intercept) | -0.10 | 0.10 | -1.08 | 0.28 | -0.30 | 0.09 | 99 | Y |
simplest_design | 905 | Q | 0 | estimator | (Intercept) | 0.02 | 0.10 | 0.17 | 0.86 | -0.18 | 0.21 | 99 | Y |
simplest_design | 906 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.41 | 0.68 | -0.24 | 0.16 | 99 | Y |
simplest_design | 907 | Q | 0 | estimator | (Intercept) | 0.04 | 0.09 | 0.48 | 0.63 | -0.14 | 0.23 | 99 | Y |
simplest_design | 908 | Q | 0 | estimator | (Intercept) | 0.07 | 0.10 | 0.67 | 0.51 | -0.13 | 0.27 | 99 | Y |
simplest_design | 909 | Q | 0 | estimator | (Intercept) | -0.11 | 0.11 | -1.04 | 0.30 | -0.33 | 0.10 | 99 | Y |
simplest_design | 910 | Q | 0 | estimator | (Intercept) | 0.10 | 0.09 | 1.11 | 0.27 | -0.08 | 0.29 | 99 | Y |
simplest_design | 911 | Q | 0 | estimator | (Intercept) | 0.11 | 0.11 | 0.99 | 0.32 | -0.11 | 0.32 | 99 | Y |
simplest_design | 912 | Q | 0 | estimator | (Intercept) | 0.18 | 0.10 | 1.90 | 0.06 | -0.01 | 0.37 | 99 | Y |
simplest_design | 913 | Q | 0 | estimator | (Intercept) | 0.02 | 0.09 | 0.26 | 0.79 | -0.16 | 0.21 | 99 | Y |
simplest_design | 914 | Q | 0 | estimator | (Intercept) | 0.04 | 0.11 | 0.38 | 0.70 | -0.17 | 0.25 | 99 | Y |
simplest_design | 915 | Q | 0 | estimator | (Intercept) | 0.08 | 0.09 | 0.96 | 0.34 | -0.09 | 0.25 | 99 | Y |
simplest_design | 916 | Q | 0 | estimator | (Intercept) | -0.12 | 0.09 | -1.31 | 0.19 | -0.31 | 0.06 | 99 | Y |
simplest_design | 917 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.24 | 0.81 | -0.22 | 0.17 | 99 | Y |
simplest_design | 918 | Q | 0 | estimator | (Intercept) | 0.02 | 0.09 | 0.27 | 0.79 | -0.16 | 0.21 | 99 | Y |
simplest_design | 919 | Q | 0 | estimator | (Intercept) | 0.00 | 0.10 | 0.02 | 0.98 | -0.20 | 0.20 | 99 | Y |
simplest_design | 920 | Q | 0 | estimator | (Intercept) | -0.14 | 0.10 | -1.40 | 0.16 | -0.33 | 0.06 | 99 | Y |
simplest_design | 921 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.62 | 0.53 | -0.14 | 0.26 | 99 | Y |
simplest_design | 922 | Q | 0 | estimator | (Intercept) | -0.13 | 0.11 | -1.19 | 0.24 | -0.35 | 0.09 | 99 | Y |
simplest_design | 923 | Q | 0 | estimator | (Intercept) | 0.03 | 0.09 | 0.35 | 0.73 | -0.15 | 0.22 | 99 | Y |
simplest_design | 924 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.30 | 0.77 | -0.17 | 0.23 | 99 | Y |
simplest_design | 925 | Q | 0 | estimator | (Intercept) | 0.05 | 0.10 | 0.46 | 0.65 | -0.16 | 0.25 | 99 | Y |
simplest_design | 926 | Q | 0 | estimator | (Intercept) | -0.09 | 0.10 | -0.86 | 0.39 | -0.29 | 0.12 | 99 | Y |
simplest_design | 927 | Q | 0 | estimator | (Intercept) | -0.10 | 0.09 | -1.07 | 0.29 | -0.28 | 0.08 | 99 | Y |
simplest_design | 928 | Q | 0 | estimator | (Intercept) | 0.11 | 0.11 | 0.97 | 0.33 | -0.11 | 0.33 | 99 | Y |
simplest_design | 929 | Q | 0 | estimator | (Intercept) | 0.15 | 0.09 | 1.55 | 0.12 | -0.04 | 0.34 | 99 | Y |
simplest_design | 930 | Q | 0 | estimator | (Intercept) | -0.08 | 0.11 | -0.74 | 0.46 | -0.30 | 0.14 | 99 | Y |
simplest_design | 931 | Q | 0 | estimator | (Intercept) | -0.01 | 0.09 | -0.09 | 0.93 | -0.20 | 0.18 | 99 | Y |
simplest_design | 932 | Q | 0 | estimator | (Intercept) | 0.18 | 0.10 | 1.79 | 0.08 | -0.02 | 0.38 | 99 | Y |
simplest_design | 933 | Q | 0 | estimator | (Intercept) | -0.10 | 0.09 | -1.08 | 0.28 | -0.28 | 0.08 | 99 | Y |
simplest_design | 934 | Q | 0 | estimator | (Intercept) | -0.11 | 0.10 | -1.15 | 0.25 | -0.30 | 0.08 | 99 | Y |
simplest_design | 935 | Q | 0 | estimator | (Intercept) | -0.19 | 0.10 | -1.88 | 0.06 | -0.40 | 0.01 | 99 | Y |
simplest_design | 936 | Q | 0 | estimator | (Intercept) | 0.11 | 0.10 | 1.16 | 0.25 | -0.08 | 0.31 | 99 | Y |
simplest_design | 937 | Q | 0 | estimator | (Intercept) | -0.13 | 0.09 | -1.42 | 0.16 | -0.31 | 0.05 | 99 | Y |
simplest_design | 938 | Q | 0 | estimator | (Intercept) | 0.20 | 0.10 | 2.06 | 0.04 | 0.01 | 0.40 | 99 | Y |
simplest_design | 939 | Q | 0 | estimator | (Intercept) | -0.01 | 0.11 | -0.10 | 0.92 | -0.23 | 0.20 | 99 | Y |
simplest_design | 940 | Q | 0 | estimator | (Intercept) | 0.08 | 0.10 | 0.83 | 0.41 | -0.11 | 0.27 | 99 | Y |
simplest_design | 941 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.22 | 0.83 | -0.23 | 0.18 | 99 | Y |
simplest_design | 942 | Q | 0 | estimator | (Intercept) | 0.20 | 0.10 | 2.11 | 0.04 | 0.01 | 0.39 | 99 | Y |
simplest_design | 943 | Q | 0 | estimator | (Intercept) | -0.02 | 0.10 | -0.25 | 0.80 | -0.21 | 0.17 | 99 | Y |
simplest_design | 944 | Q | 0 | estimator | (Intercept) | 0.03 | 0.10 | 0.31 | 0.75 | -0.18 | 0.24 | 99 | Y |
simplest_design | 945 | Q | 0 | estimator | (Intercept) | -0.01 | 0.10 | -0.06 | 0.95 | -0.20 | 0.19 | 99 | Y |
simplest_design | 946 | Q | 0 | estimator | (Intercept) | -0.03 | 0.11 | -0.28 | 0.78 | -0.24 | 0.18 | 99 | Y |
simplest_design | 947 | Q | 0 | estimator | (Intercept) | -0.13 | 0.10 | -1.29 | 0.20 | -0.34 | 0.07 | 99 | Y |
simplest_design | 948 | Q | 0 | estimator | (Intercept) | -0.07 | 0.09 | -0.73 | 0.47 | -0.25 | 0.12 | 99 | Y |
simplest_design | 949 | Q | 0 | estimator | (Intercept) | 0.18 | 0.10 | 1.76 | 0.08 | -0.02 | 0.37 | 99 | Y |
simplest_design | 950 | Q | 0 | estimator | (Intercept) | 0.01 | 0.10 | 0.11 | 0.91 | -0.19 | 0.22 | 99 | Y |
simplest_design | 951 | Q | 0 | estimator | (Intercept) | 0.10 | 0.11 | 0.92 | 0.36 | -0.12 | 0.33 | 99 | Y |
simplest_design | 952 | Q | 0 | estimator | (Intercept) | -0.04 | 0.10 | -0.41 | 0.68 | -0.23 | 0.15 | 99 | Y |
simplest_design | 953 | Q | 0 | estimator | (Intercept) | -0.01 | 0.09 | -0.10 | 0.92 | -0.18 | 0.17 | 99 | Y |
simplest_design | 954 | Q | 0 | estimator | (Intercept) | -0.05 | 0.10 | -0.47 | 0.64 | -0.24 | 0.15 | 99 | Y |
simplest_design | 955 | Q | 0 | estimator | (Intercept) | 0.06 | 0.10 | 0.62 | 0.54 | -0.14 | 0.26 | 99 | Y |
… (simulations 956 through 1000 continue in the same way, one row per run)
Once you have simulated many times you can “diagnose”. This is the next topic.
For instance we can ask about bias: the average difference between the estimate and the estimand:
mean_estimate | mean_estimand | bias |
---|---|---|
0 | 0 | 0 |
diagnose_design()
does this in one step for a set of common “diagnosands”:
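A sketch of the call (the sims count is inferred from the table below):

diagnosis <- diagnose_design(simplest_design, sims = 500)
diagnosis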
Design | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|
simplest_design | 500 | 0.00 | -0.00 | -0.00 | 0.10 | 0.10 | 0.05 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.01) | (0.01) |
The diagnosis object is also a list, of class diagnosis:
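For instance (a sketch of its main components):

diagnosis$simulations_df            # one row per simulation
diagnosis$diagnosands_df            # one row per design-inquiry-estimator
diagnosis$bootstrap_replicates_df   # diagnosands computed on each bootstrap resample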
design | sim_ID | inquiry | estimand | estimator | term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
simplest_design | 1 | Q | 0 | estimator | (Intercept) | 0.03 | 0.09 | 0.31 | 0.76 | -0.16 | 0.21 | 99 | Y |
simplest_design | 2 | Q | 0 | estimator | (Intercept) | 0.10 | 0.09 | 1.07 | 0.29 | -0.09 | 0.29 | 99 | Y |
simplest_design | 3 | Q | 0 | estimator | (Intercept) | -0.16 | 0.10 | -1.54 | 0.13 | -0.37 | 0.05 | 99 | Y |
simplest_design | 4 | Q | 0 | estimator | (Intercept) | -0.08 | 0.11 | -0.72 | 0.48 | -0.30 | 0.14 | 99 | Y |
simplest_design | 5 | Q | 0 | estimator | (Intercept) | -0.14 | 0.10 | -1.34 | 0.18 | -0.34 | 0.07 | 99 | Y |
simplest_design | 6 | Q | 0 | estimator | (Intercept) | -0.08 | 0.09 | -0.90 | 0.37 | -0.26 | 0.10 | 99 | Y |
design | inquiry | estimator | outcome | term | mean_estimand | se(mean_estimand) | mean_estimate | se(mean_estimate) | bias | se(bias) | sd_estimate | se(sd_estimate) | rmse | se(rmse) | power | se(power) | coverage | se(coverage) | n_sims |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
simplest_design | Q | estimator | Y | (Intercept) | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0.1 | 0 | 0.05 | 0.01 | 0.95 | 0.01 | 500 |
design | bootstrap_id | inquiry | estimator | outcome | term | mean_estimand | mean_estimate | bias | sd_estimate | rmse | power | coverage |
---|---|---|---|---|---|---|---|---|---|---|---|---|
simplest_design | 1 | Q | estimator | Y | (Intercept) | 0 | 0.00 | 0.00 | 0.1 | 0.10 | 0.05 | 0.95 |
simplest_design | 2 | Q | estimator | Y | (Intercept) | 0 | -0.01 | -0.01 | 0.1 | 0.11 | 0.06 | 0.94 |
simplest_design | 3 | Q | estimator | Y | (Intercept) | 0 | -0.01 | -0.01 | 0.1 | 0.10 | 0.05 | 0.95 |
simplest_design | 4 | Q | estimator | Y | (Intercept) | 0 | -0.01 | -0.01 | 0.1 | 0.10 | 0.05 | 0.95 |
simplest_design | 5 | Q | estimator | Y | (Intercept) | 0 | 0.00 | 0.00 | 0.1 | 0.10 | 0.05 | 0.95 |
simplest_design | 6 | Q | estimator | Y | (Intercept) | 0 | 0.00 | 0.00 | 0.1 | 0.10 | 0.05 | 0.95 |
The bootstraps dataframe is produced by resampling from the simulations dataframe and producing a diagnosis dataframe from each resampling.
This lets us generate estimates of uncertainty around our diagnosands.
It can be controlled thus:
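A sketch (argument values assumed):

diagnose_design(simplest_design, sims = 500, bootstrap_sims = 100)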
It’s reshapeable: as a tidy dataframe, ready for graphing
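A sketch:

diagnosis |> tidy()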
design | inquiry | estimator | outcome | term | diagnosand | estimate | std.error | conf.low | conf.high |
---|---|---|---|---|---|---|---|---|---|
simplest_design | Q | estimator | Y | (Intercept) | mean_estimand | 0.00 | 0.00 | 0.00 | 0.00 |
simplest_design | Q | estimator | Y | (Intercept) | mean_estimate | 0.00 | 0.00 | -0.01 | 0.00 |
simplest_design | Q | estimator | Y | (Intercept) | bias | 0.00 | 0.00 | -0.01 | 0.00 |
simplest_design | Q | estimator | Y | (Intercept) | sd_estimate | 0.10 | 0.00 | 0.10 | 0.11 |
simplest_design | Q | estimator | Y | (Intercept) | rmse | 0.10 | 0.00 | 0.10 | 0.11 |
simplest_design | Q | estimator | Y | (Intercept) | power | 0.05 | 0.01 | 0.03 | 0.07 |
simplest_design | Q | estimator | Y | (Intercept) | coverage | 0.95 | 0.01 | 0.93 | 0.97 |
Or turn into a formatted table:
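A sketch:

diagnosis |> reshape_diagnosis() |> kable() |> kable_styling(font_size = 20)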
Design | Inquiry | Estimator | Outcome | Term | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|---|---|---|---|
simplest_design | Q | estimator | Y | (Intercept) | 500 | 0.00 | -0.00 | -0.00 | 0.10 | 0.10 | 0.05 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.01) | (0.01) |
mean_se = mean(std.error)
type_s_rate = mean((sign(estimate) != sign(estimand))[p.value <= alpha])
exaggeration_ratio = mean((estimate/estimand)[p.value <= alpha])
var_estimate = pop.var(estimate)
mean_var_hat = mean(std.error^2)
prop_pos_sig = mean(estimate > 0 & p.value <= alpha)
mean_ci_length = mean(conf.high - conf.low)
my_diagnosands <-
declare_diagnosands(median_bias = median(estimate - estimand))
diagnose_design(simplest_design, diagnosands = my_diagnosands, sims = 10) |>
reshape_diagnosis() |> kable() |> kable_styling(font_size = 20)
Design | Inquiry | Estimator | Outcome | Term | N Sims | Median Bias |
---|---|---|---|---|---|---|
simplest_design | Q | estimator | Y | (Intercept) | 10 | -0.02 |
(0.04) |
You can diagnose multiple designs or a list of designs
You can partition the simulations data frame into groups before calculating diagnosands.
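A sketch of a grouped diagnosis (the significant flag is computed from the simulations’ p values):

diagnose_design(design_1, make_groups = vars(significant = p.value <= 0.05))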
Design | Significant | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|---|
design_1 | FALSE | 474 | 0.00 | -0.00 | -0.00 | 0.09 | 0.09 | 0.00 | 1.00 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |||
design_1 | TRUE | 26 | 0.00 | -0.02 | -0.02 | 0.23 | 0.23 | 1.00 | 0.00 |
(0.00) | (0.04) | (0.04) | (0.01) | (0.01) | (0.00) | (0.00) |
Note especially the mean estimate, the power, the coverage, the RMSE, and the bias. (Bias is not large because we have both under- and over-estimates.)
Consider for instance this sampling design:
Compare these two diagnoses:
diagnosis | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|
diagnosis_1 | 5000 | 1.00 | 1.00 | -0.00 | 1.01 | 0.90 | 0.17 | 0.97 |
diagnosis_1 | (0.01) | (0.01) | (0.01) | (0.01) | (0.01) | (0.01) | (0.00) | |
diagnosis_2 | 5000 | 1.22 | 1.22 | -0.00 | 0.91 | 0.91 | 0.20 | 0.97 |
diagnosis_2 | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) |
In the second the estimand is drawn just once. The SD of the estimate is lower. But the RMSE is not very different.
Diagnosis alerts us to problems in a design. Consider the following simple alternative design.
Here we define the inquiry as the sample average \(Y\) (instead of the population mean). But otherwise things stay the same.
What do we think of this design?
Here is the diagnosis
Design | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|
simplest_design_2 | 500 | -0.00 | -0.00 | 0.00 | 0.10 | 0.00 | 0.04 | 1.00 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.01) | (0.00) |
Redesign is the process of taking a design and modifying it in some way.

There are a few ways to do this:

* use replace_step, insert_step or delete_step
* use redesign

We will focus on the redesign approach.
A design parameter is a modifiable quantity of a design.
These quantities are objects that were in your global environment when you made your design, get referred to explicitly in your design, and got scooped up when the design was formed.
In our simplest design above we had a fixed N
, but we could make N
a modifiable quantity like this:
Note that N
is defined in memory; and it gets called in one of the steps. It has now become a parameter of the design and it can be modified using redesign.
Here is a version of the design with N = 200
:
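A sketch of the call:

simplest_design_200 <- redesign(simplest_design, N = 200)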
Here is a list of three different designs with different Ns.
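A sketch; passing a vector of values returns a list of designs:

designs <- redesign(simplest_design, N = c(100, 200, 300))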
The good thing here is that it is now easy to diagnose over multiple designs and compare diagnoses. The parameter names then end up in the diagnosis_df
Consider this:
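A sketch of a two-parameter design, consistent with the output below:

N <- 100
m <- 0

design <-
  declare_model(N = N, Y = rnorm(N, m)) +
  declare_inquiry(Q = m) +
  declare_estimator(Y ~ 1)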
Then:
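A sketch:

designs <- redesign(design, N = c(100, 200, 300), m = c(0, 0.1, 0.2))

diagnose_design(designs) |> tidy() |> kable(digits = 2)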
Output:
N | m | diagnosand | estimate | std.error | conf.low | conf.high |
---|---|---|---|---|---|---|
100 | 0.0 | mean_estimand | 0.00 | 0.00 | 0.00 | 0.00 |
100 | 0.0 | mean_estimate | 0.00 | 0.00 | -0.01 | 0.01 |
100 | 0.0 | bias | 0.00 | 0.00 | -0.01 | 0.01 |
100 | 0.0 | sd_estimate | 0.10 | 0.00 | 0.10 | 0.11 |
200 | 0.0 | mean_estimand | 0.00 | 0.00 | 0.00 | 0.00 |
200 | 0.0 | mean_estimate | 0.00 | 0.00 | -0.01 | 0.00 |
200 | 0.1 | mean_estimand | 0.10 | 0.00 | 0.10 | 0.10 |
200 | 0.1 | mean_estimate | 0.10 | 0.00 | 0.09 | 0.10 |
300 | 0.2 | bias | 0.00 | 0.00 | 0.00 | 0.00 |
300 | 0.2 | sd_estimate | 0.06 | 0.00 | 0.05 | 0.06 |
300 | 0.2 | rmse | 0.06 | 0.00 | 0.05 | 0.06 |
300 | 0.2 | power | 0.93 | 0.01 | 0.91 | 0.95 |
300 | 0.2 | coverage | 0.95 | 0.01 | 0.92 | 0.97 |
Graphing after redesign is especially easy:
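For instance, a sketch (assuming the tidyverse is loaded and designs is the list created above):

diagnose_design(designs) |>
  tidy() |>
  filter(diagnosand == "power") |>
  ggplot(aes(N, estimate, color = factor(m))) +
  geom_point() +
  geom_line()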
When redesigning with arguments that are vectors, use list()
in redesign, with each list item representing a design you wish to create
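A sketch (coefs here is a hypothetical vector-valued parameter of the design):

redesign(design, coefs = list(c(0, 1), c(0, 2)))   # two designs, one per list item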
A parameter has to be called correctly, and you get no warning if you misname it.
N <- 100
my_N <- function(n = N) n   # N enters only as a default argument here

simplest_design_N2 <-
  declare_model(N = my_N(), Y = rnorm(N)) +   # N is never passed explicitly in a step
  declare_inquiry(Q = 0) +
  declare_estimator(Y ~ 1)

simplest_design_N2 |> redesign(N = 200) |> draw_data() |> nrow()

[1] 100
Why not 200? A parameter has to be called explicitly:

N <- 100
my_N <- function(n = N) n

simplest_design_N2 <-
  declare_model(N = my_N(N), Y = rnorm(N)) +   # N now appears explicitly in the step
  declare_inquiry(Q = 0) +
  declare_estimator(Y ~ 1)

simplest_design_N2 |> redesign(N = 200) |> draw_data() |> nrow()

[1] 200

OK
Here is an example of redesigning where the “parameter” is a function
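A sketch (f is a hypothetical data generating function scooped up by the design):

f <- function(n) rnorm(n)

design_f <-
  declare_model(N = 100, Y = f(N)) +
  declare_inquiry(Q = 0) +
  declare_estimator(Y ~ 1)

# swap in a different function on redesign
design_f |> redesign(f = function(n) runif(n)) |> draw_data() |> head()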
What can you do with a design once you have it?
We will start with a very simple experimental design (more on the components of this later)
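A declaration consistent with the data below (the assignment scheme is assumed):

design <-
  declare_model(N = 100,
                U = rnorm(N),
                potential_outcomes(Y ~ Z + U)) +
  declare_inquiry(ate = mean(Y_Z_1 - Y_Z_0)) +
  declare_assignment(Z = complete_ra(N)) +
  declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
  declare_estimator(Y ~ Z, inquiry = "ate")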
ID | U | Y_Z_0 | Y_Z_1 | Z | Y |
---|---|---|---|---|---|
001 | 0.8939241 | 0.8939241 | 1.8939241 | 1 | 1.8939241 |
002 | 1.3350334 | 1.3350334 | 2.3350334 | 1 | 2.3350334 |
003 | 0.8329075 | 0.8329075 | 1.8329075 | 1 | 1.8329075 |
004 | -0.2886946 | -0.2886946 | 0.7113054 | 0 | -0.2886946 |
005 | -0.3062044 | -0.3062044 | 0.6937956 | 1 | 0.6937956 |
006 | 0.6443779 | 0.6443779 | 1.6443779 | 1 | 1.6443779 |
Play with the data:
Using your actual data:
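A sketch of both:

dat <- draw_data(design)            # play: data implied by the model and data strategy
head(dat)

get_estimates(design, data = dat)   # apply the answer strategy to data you supply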
design | sim_ID | inquiry | estimand | estimator | term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
design | 1 | ate | 1 | estimator | Z | 1.50 | 0.19 | 7.92 | 0 | 1.12 | 1.88 | 98 | Y |
design | 2 | ate | 1 | estimator | Z | 1.27 | 0.19 | 6.64 | 0 | 0.89 | 1.65 | 98 | Y |
design | 3 | ate | 1 | estimator | Z | 0.87 | 0.19 | 4.58 | 0 | 0.49 | 1.24 | 98 | Y |
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
1.00 | 0.00 | 0.19 | 0.19 | 1.00 | 0.95 |
(0.02) | (0.02) | (0.01) | (0.01) | (0.00) | (0.02) |
diagnosand | mean_1 | mean_2 | mean_difference | conf.low | conf.high |
---|---|---|---|---|---|
mean_estimand | 0.50 | 0.50 | 0.00 | 0.00 | 0.00 |
mean_estimate | 0.48 | 0.50 | 0.02 | -0.01 | 0.04 |
bias | -0.02 | 0.00 | 0.02 | -0.01 | 0.04 |
sd_estimate | 0.28 | 0.20 | -0.08 | -0.10 | -0.06 |
rmse | 0.28 | 0.20 | -0.08 | -0.10 | -0.06 |
power | 0.38 | 0.71 | 0.32 | 0.26 | 0.37 |
coverage | 0.97 | 0.96 | -0.01 | -0.04 | 0.01 |
Recall: the power of a design is the probability that you will reject a null hypothesis.
inquiry | estimand | estimator | term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|---|---|---|
ate | 0.5 | estimator | Z | 0.57 | 0.2 | 2.88 | 0 | 0.18 | 0.96 | 98 | Y |
sim_ID | estimate | p.value |
---|---|---|
1 | 0.81 | 0.00 |
2 | 0.40 | 0.04 |
3 | 0.88 | 0.00 |
4 | 0.72 | 0.00 |
5 | 0.38 | 0.05 |
6 | 0.44 | 0.02 |
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
0.50 | 0.00 | 0.20 | 0.20 | 0.70 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) |
b | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|
0 | -0.00 | -0.00 | 0.20 | 0.20 | 0.05 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |
0.25 | 0.25 | -0.00 | 0.20 | 0.20 | 0.23 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |
0.5 | 0.50 | 0.00 | 0.20 | 0.20 | 0.70 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |
1 | 1.00 | 0.00 | 0.20 | 0.20 | 1.00 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) |
We start with a simple experimental design and then show ways to extend.
* the fabricatr package (and others) for the model
* the randomizr package (and others) for the data strategy
* the estimatr package (and others) for the answer strategy

A few new elements here:

* declare_model can be used much like mutate, with multiple columns created in sequence
* the potential_outcomes function is a special function that creates potential outcome columns
* reveal_outcomes is used to reveal the outcome; Z and Y are the defaults
* lm_robust is the default estimation method

Note that order matters: e.g. if you sample before defining the inquiry you get a different inquiry than if you sample after you define the inquiry.
You can generate hierarchical data like this:
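A sketch using fabricatr’s add_level (names and numbers assumed):

M <- declare_model(
  households = add_level(N = 100, wealth = rnorm(N)),
  individuals = add_level(N = 4, age = sample(18:80, N, replace = TRUE))
)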
You can generate panel data like this:
M <-
declare_model(
countries = add_level(
N = 196,
country_shock = rnorm(N)
),
years = add_level(
N = 100,
time_trend = 1:N,
year_shock = runif(N, 1, 10),
nest = FALSE
),
observation = cross_levels(
by = join_using(countries, years),
observation_shock = rnorm(N),
Y = 0.01 * time_trend + country_shock + year_shock + observation_shock
)
)
countries | country_shock | years | time_trend | year_shock | observation | observation_shock | Y |
---|---|---|---|---|---|---|---|
001 | -1.01 | 001 | 1 | 7.24 | 00001 | 0.14 | 6.38 |
002 | 1.59 | 001 | 1 | 7.24 | 00002 | 1.10 | 9.94 |
003 | 0.18 | 001 | 1 | 7.24 | 00003 | 0.94 | 8.37 |
004 | -2.07 | 001 | 1 | 7.24 | 00004 | 0.21 | 5.40 |
005 | 0.22 | 001 | 1 | 7.24 | 00005 | 1.08 | 8.55 |
006 | -0.37 | 001 | 1 | 7.24 | 00006 | 1.22 | 8.11 |
You can repeat steps and play with the order, always conscious of the direction of the pipe
design <-
declare_model(N = N, X = rep(0:1, N/2)) +
declare_model(U = rnorm(N), potential_outcomes(Y ~ b * Z * X + U)) +
declare_assignment(Z = block_ra(blocks = X), Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ate = mean(Y_Z_1 - Y_Z_0)) +
declare_inquiry(cate = mean(Y_Z_1[X==0] - Y_Z_0[X==0])) +
declare_estimator(Y ~ Z, inquiry = "ate", label = "ols") +
declare_estimator(Y ~ Z*X, inquiry = "cate", label = "fe")
Many causal inquiries are simple summaries of potential outcomes:
Inquiry | Units | Code |
---|---|---|
Average treatment effect in a finite population (PATE) | Units in the population | mean(Y_D_1 - Y_D_0) |
Conditional average treatment effect (CATE) for X = 1 | Units for whom X = 1 | mean(Y_D_1[X == 1] - Y_D_0[X == 1]) |
Complier average causal effect (CACE) | Complier units | mean(Y_D_1[D_Z_1 > D_Z_0] - Y_D_0[D_Z_1 > D_Z_0]) |
Causal interactions of \(D_1\) and \(D_2\) | Units in the population | mean((Y_D1_1_D2_1 - Y_D1_0_D2_1) - (Y_D1_1_D2_0 - Y_D1_0_D2_0)) |
Generating potential outcomes columns gets you far
Often though we need to define inquiries as functions of continuous variables. For this, writing a potential outcomes function can make life easier. It helps, for example, with defining complex counterfactuals and with effects of continuous treatments, as in the next two examples.
Here is an example of using functions to define complex counterfactuals:
f_M <- function(X, UM) 1*(UM < X)
f_Y <- function(X, M, UY) X + M - .4*X*M + UY
design <-
declare_model(N = 100,
X = simple_rs(N),
UM = runif(N),
UY = rnorm(N),
M = f_M(X, UM),
Y = f_Y(X, M, UY)) +
declare_inquiry(Q1 = mean(f_Y(1, f_M(0, UM), UY) - f_Y(0, f_M(0, UM), UY)))
design |> draw_estimands() |> kable() |> kable_styling(font_size = 20)
inquiry | estimand |
---|---|
Q1 | 1 |
Here is an example of using functions to define effects of continuous treatments.
f_Y <- function(X, UY) X - .25*X^2 + UY
design <-
declare_model(N = 100,
X = rnorm(N),
UY = rnorm(N),
Y = f_Y(X, UY)) +
declare_inquiry(
Q1 = mean(f_Y(X+1, UY) - f_Y(X, UY)),
Q2 = mean(f_Y(1, UY) - f_Y(0, UY)),
Q3 = (lm_robust(Y ~ X)|> tidy())[2,2]
)
design |> draw_estimands() |> kable() |> kable_styling(font_size = 20)
inquiry | estimand |
---|---|
Q1 | 0.857143 |
Q2 | 0.750000 |
Q3 | 1.363886 |
Which one is the ATE?
The randomizr
package has a set of functions for different types of block and cluster assignments.
* Simple random assignment: simple_ra(N = 100, prob = 0.25)
* Complete random assignment: complete_ra(N = 100, m = 40)
* Block random assignment: block_ra(blocks = regions)
* Cluster random assignment: cluster_ra(clusters = households)
* Block-and-cluster assignment: cluster random assignment within blocks of clusters: block_and_cluster_ra(blocks = regions, clusters = villages)
You can combine these in various ways. For example, with saturation random assignment, first clusters are assigned to a saturation level, then units within clusters are assigned to treatment conditions according to the saturation level:
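A sketch (names and saturation levels assumed):

villages <- rep(1:12, each = 25)   # 12 villages of 25 units

# step 1: villages are assigned a saturation level
saturation <- cluster_ra(clusters = villages, conditions = c(0, 0.25, 0.5, 0.75))

# step 2: units are assigned to treatment with probability given by their village's saturation
Z <- block_ra(blocks = villages, prob_unit = as.numeric(as.character(saturation)))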
By default declare_estimator() assumes you are interested in the first term after the constant from the output of an estimation procedure.

But you can say what you are interested in directly using term, and you can also associate different terms with different quantities of interest using inquiry.
design <-
declare_model(
N = 100,
X1 = rnorm(N),
X2 = rnorm(N),
X3 = rnorm(N),
Y = X1 - X2 + X3 + rnorm(N)
) +
declare_inquiries(ate_2 = -1, ate_3 = 1) +
declare_estimator(Y ~ X1 + X2 + X3,
term = c("X2", "X3"),
inquiry = c("ate_2", "ate_3"))
design |> run_design() |> kable(digits = 2) |> kable_styling(font_size = 20)
inquiry | estimand | term | estimator | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|---|---|---|
ate_2 | -1 | X2 | estimator | -0.85 | 0.09 | -9.80 | 0 | -1.02 | -0.68 | 96 | Y |
ate_3 | 1 | X3 | estimator | 0.99 | 0.10 | 9.87 | 0 | 0.79 | 1.18 | 96 | Y |
Sometimes it can be confusing what the name of a term is, but you can figure this out by running the estimation strategy directly. Here’s an example where the term names might be confusing.
lm_robust(Y ~ A*B,
data = data.frame(A = rep(c("a", "b"), 3),
B = rep(c("p", "q"), each = 3),
Y = rnorm(6))) |>
coef() |> kable() |> kable_styling(font_size = 20)
x | |
---|---|
(Intercept) | 0.984547 |
Ab | -1.172676 |
Bq | -1.976603 |
Ab:Bq | 2.115862 |
The names as they appear in this output are the names of the terms that declare_estimator will look for.
DeclareDesign works natively with estimatr, but you can use whatever packages you like. You do have to make sure, though, that DeclareDesign gets back a nice tidy dataframe of estimates, and that might require some tidying.
design <-
declare_model(N = 1000, U = runif(N),
potential_outcomes(Y ~ as.numeric(U < .5 + Z/3))) +
declare_assignment(Z = simple_ra(N), Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ate = mean(Y_Z_1 - Y_Z_0)) +
declare_estimator(Y ~ Z, inquiry = "ate",
.method = glm,
family = binomial(link = "probit"))
Note that we passed additional arguments to glm
; that’s easy.
It’s not a good design though. Just look at the diagnosis:
if(run)
diagnose_design(design) |> write_rds("saved/probit.rds")
read_rds("saved/probit.rds") |>
reshape_diagnosis() |>
kable() |>
kable_styling(font_size = 20)
Design | Inquiry | Estimator | Term | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|---|---|---|
design | ate | estimator | Z | 500 | 0.33 | 0.97 | 0.64 | 0.09 | 0.64 | 1.00 | 0.00 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) |
Why is it so terrible?
Because the probit estimate does not target the ATE directly; you need to do more work to get there.
You essentially have to write a function to get the estimates, calculate the quantity of interest and other stats, and turn these into a nice dataframe.
Luckily you can use the margins
package with tidy
to create a .summary
function which you can pass to declare_estimator
to do all this for you
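A sketch of this approach (helper name and exact arguments assumed):

tidy_margins <- function(x)
  broom::tidy(margins::margins(x, data = x$data), conf.int = TRUE)

design <-
  design +
  declare_estimator(Y ~ Z,
                    .method = glm,
                    family = binomial(link = "probit"),
                    .summary = tidy_margins,
                    inquiry = "ate",
                    label = "margins")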
if(run)
diagnose_design(design) |> write_rds("saved/probit_2.rds")
read_rds("saved/probit_2.rds") |> reshape_diagnosis() |> kable() |>
kable_styling(font_size = 20)
Design | Inquiry | Estimator | Term | N Sims | Mean Estimand | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|---|---|---|---|---|
design | ate | estimator | Z | 500 | 0.33 | 0.97 | 0.64 | 0.09 | 0.64 | 1.00 | 0.00 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |||||
design | ate | margins | Z | 500 | 0.33 | 0.31 | -0.02 | 0.02 | 0.03 | 1.00 | 0.90 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.01) |
Much better
Causation as difference making
The intervention based motivation for understanding causal effects:
The problem is that you need to know what would have happened if things were different. You need information on a counterfactual.
Idea: A causal claim is (in part) a claim about something that did not happen. This makes it metaphysical.
Now that we have a concept of causal effects available, let’s answer two questions:
TRANSITIVITY: If for a given unit \(A\) causes \(B\) and \(B\) causes \(C\), does that mean that \(A\) causes \(C\)?
A boulder is flying down a mountain. You duck. This saves your life.
So the boulder caused the ducking and the ducking caused you to survive.
So: did the boulder cause you to survive?
CONNECTEDNESS Say \(A\) causes \(B\) — does that mean that there is a spatiotemporally continuous sequence of causal intermediates?
The counterfactual model is about contribution and attribution in a very specific sense.
Consider an outcome \(Y\) that might depend on two causes \(X_1\) and \(X_2\):
\[Y(0,0) = 0\] \[Y(1,0) = 0\] \[Y(0,1) = 0\] \[Y(1,1) = 1\]
What caused \(Y\)? Which cause was most important?
The counterfactual model is about attribution in a very conditional sense.
This is a problem for research programs that define “explanation” in terms of figuring out the things that cause \(Y\).

There are real difficulties in conceptualizing what it means to say that one cause is more important than another cause. What does that mean?
Erdogan’s increasing authoritarianism was the most important reason for the attempted coup
More uncomfortably:
What does it mean to say that the tides are caused by the moon? What exactly do we have to imagine…
Jack exploited Jill
It’s Jill’s fault that bucket fell
Jack is the most obstructionist member of Congress
Melania Trump stole from Michelle Obama’s speech
Activists need causal claims
This is sometimes called a “switching equation”
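Concretely, with \(Z_i\) denoting assignment:

\[Y_i = Z_iY_i(1) + (1 - Z_i)Y_i(0)\]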
In DeclareDesign
\(Y\) is realised from potential outcomes and assignment in this way using reveal_outcomes
Say \(Z\) is a random variable, then this is a sort of data generating process. BUT the key thing to note is that the potential outcomes \(Y_i(1), Y_i(0)\) are treated as fixed; only \(Z_i\) (and so which potential outcome you observe) is random.
Now for some magic. We really want to estimate: \[ \tau_i = Y_i(1) - Y_i(0)\]
BUT: We never can observe both \(Y_i(1)\) and \(Y_i(0)\)
Say we lower our sights and try to estimate an average treatment effect: \[ \tau = \mathbb{E} [Y(1)-Y(0)]\]
Now make use of the fact that \[\mathbb E[Y(1)-Y(0)] = \mathbb E[Y(1)]- \mathbb E [Y(0)] \]
In words: The average of differences is equal to the difference of averages; here, the average treatment effect is equal to the difference in average outcomes in treatment and control units.
The magic is that while we can’t hope to measure the differences; we are good at measuring averages.
This provides a positive argument for causal inference from randomization, rather than simply saying with randomization “everything else is controlled for”
Let’s discuss:
Idea: random assignment is random sampling from potential worlds: to understand anything you find, you need to know the sampling weights
Idea: We now have a positive argument for claiming unbiased estimation of the average treatment effect following random assignment
But is the average treatment effect a quantity of social scientific interest?
The average of the differences \(\approx\) difference of averages
Question: \(\approx\) or \(=\)?
Consider the following potential outcomes table:
Unit | Y(0) | Y(1) | \(\tau_i\) |
---|---|---|---|
1 | 4 | 3 | |
2 | 2 | 3 | |
3 | 1 | 3 | |
4 | 1 | 3 | |
5 | 2 | 3 |
Questions for us: What are the unit level treatment effects? What is the average treatment effect?
Consider the following potential outcomes table:
In treatment? | Y(0) | Y(1) |
---|---|---|
Yes | 2 | |
No | 3 | |
No | 1 | |
Yes | 3 | |
Yes | 3 | |
No | 2 |
Questions for us: Fill in the blanks.
What is the actual treatment effect?
Take a short break!
Experiments often give rise to endogenous subgroups. The potential outcomes framework can make it clear why this can cause problems.
Problems arise in analyses of subgroups when the categories themselves are affected by treatment
Example from our work:
V(0) | V(1) | R(0,1) | R(1,1) | R(0,0) | R(1,0) | |
---|---|---|---|---|---|---|
Type 1 (reporter) | 1 | 1 | 1 | 1 | 0 | 0 |
Type 2 (non reporter) | 1 | 0 | 0 | 0 | 0 | 0 |
Expected reporting given violence in control = Pr(Type 1)
Expected reporting given violence in treatment = 100%
Question: What is the actual effect of treatment on the propensity to report violence?
It is possible that in truth no one’s reporting behavior has changed; what has changed is the propensity of people with different propensities to report to experience violence:
 | Reporter | No Violence | Violence | % Report |
---|---|---|---|---|
Control | Yes | 25 | 25 | \(\frac{25}{25+25}=50\%\) |
Control | No | 25 | 25 | |
Treatment | Yes | 25 | 25 | \(\frac{25}{25+0}=100\%\) |
Treatment | No | 50 | 0 | |
This problem can arise as easily in seemingly simple field experiments. Example:
What’s the problem?
Question for us:
Which problems face an endogenous subgroup issue?:
In such cases you can:
Pair | I | I | II | II | |
---|---|---|---|---|---|
Unit | 1 | 2 | 3 | 4 | Average |
Y(0) | 0 | 0 | 0 | 0 | |
Y(1) | -3 | 1 | 1 | 1 | |
\(\tau\) | -3 | 1 | 1 | 1 | 0 |
Pair | I | I | II | II | |
---|---|---|---|---|---|
Unit | 1 | 2 | 3 | 4 | Average |
Y(0) | 0 | 0 | 0 | ||
Y(1) | 1 | 1 | 1 | ||
\(\hat{\tau}\) | 1 |
Pair | I | I | II | II | |
---|---|---|---|---|---|
Unit | 1 | 2 | 3 | 4 | Average |
Y(0) | [0] | 0 | 0 | ||
Y(1) | [-3] | 1 | 1 | ||
\(\hat{\tau}\) | 1 |
Note: The right way to think about this is that bias is a property of the strategy over possible realizations of data and not normally a property of the estimator conditional on the data.
Multistage games can also present an endogenous group problem since collections of late stage players facing a given choice have been created by early stage players.
Question: Does visibility alter the extent to which subjects follow norms to punish antisocial behavior (and reward prosocial behavior)? Consider a trust game in which we are interested in how information on receivers affects their actions.
Average % returned:

Visibility Treatment | % invested (average) | ...when 10% invested | ...when 50% invested |
---|---|---|---|
Control: Masked information on respondents | 30% | 20% | 40% |
Treatment: Full information on respondents | 30% | 0% | 60% |
What do we think? Does visibility make people react more to investments?
Imagine you could see all the potential outcomes, and they looked like this:
Responder’s return decision (given type):

Offered behavior | Nice 1 | Nice 2 | Nice 3 | Mean 1 | Mean 2 | Mean 3 | Avg. |
---|---|---|---|---|---|---|---|
Invest 10% | 60% | 60% | 60% | 0% | 0% | 0% | 30% |
Invest 50% | 60% | 60% | 60% | 0% | 0% | 0% | 30% |
Conclusion: Both the offer and the information condition are completely irrelevant for all subjects.
Unfortunately you only see a sample of the potential outcomes, and that looks like this:
Responder’s return decision (given type):

Offered behavior | Nice 1 | Nice 2 | Nice 3 | Mean 1 | Mean 2 | Mean 3 | Avg. |
---|---|---|---|---|---|---|---|
Invest 10% | | | | 0% | 0% | 0% | 0% |
Invest 50% | 60% | 60% | 60% | | | | 60% |
False Conclusion: When not protected, responders condition behavior strongly on offers (because offerers can select on type accurately)
In fact: The nice types invest more because they are nice. The responders return more to the nice types because they are nice.
Unfortunately you only see a (noisier!) sample of the potential outcomes, and that looks like this:
Responder’s return decision (given type):

Offered behavior | Nice 1 | Nice 2 | Nice 3 | Mean 1 | Mean 2 | Mean 3 | Avg. |
---|---|---|---|---|---|---|---|
Invest 10% | 60% | | | 0% | 0% | | 20% |
Invest 50% | | 60% | 60% | | | 0% | 40% |
False Conclusion: When protected, responders condition behavior less strongly on offers (because offerers can select on type less accurately)
What to do?
Solutions?
Take away: Proceed with extreme caution when estimating effects beyond the first stage.
Take a short break!
Directed Acyclic Graphs
The most powerful results from the study of DAGs give procedures for figuring out when conditioning aids or hinders causal identification.
Pearl’s book Causality is the key reference. Pearl (2009) (Though see also older work such as Pearl and Paz (1985))
There is a lot of excellent material on Pearl’s page http://bayes.cs.ucla.edu/WHY/
See also excellent material on Felix Elwert’s page http://www.ssc.wisc.edu/~felwert/causality/?page_id=66
Say you don’t like graphs. Fine.
Consider this causal structure:
Say \(Z\) is temporally prior to \(X\); it is correlated with \(Y\) (because of \(U_1\)) and with \(X\) (because of \(U_2\)).
Question: Would it be useful to “control” for \(Z\) when trying to estimate the effect of \(X\) on \(Y\)?
Answer: Hopefully by the end of today you should see that the answer is obviously (or at least, plausibly) “no.”
Variable sets \(A\) and \(B\) are conditionally independent, given \(C\) if for all \(a\), \(b\), \(c\):
\[\Pr(A = a | C = c) = \Pr(A = a | B = b, C = c)\]
Informally; given \(C\), knowing \(B\) tells you nothing more about \(A\).
Now we have what we need to simplify: if the Markov condition is satisfied, then instead of writing the full probability as \(P(x) = P(x_1)P(x_2|x_1)P(x_3|x_1, x_2)\) we can write \(P(x) = \prod_i P(x_i |pa_i)\).
If \(P(a,b,c)\) is Markov relative to this graph then: \(C\) is independent of \(A\) given \(B\)
And instead of
\[\Pr(a,b,c) = \Pr(a)\Pr(b|a)\Pr(c|a, b)\]

we could now write:

\[\Pr(a,b,c) = \Pr(a)\Pr(b|a)\Pr(c|b)\]
We want the graphs to be able to represent the effects of interventions.
Pearl uses do
notation to capture this idea.
\[\Pr(X_1, X_2,\dots | do(X_j = x_j))\] or
\[\Pr(X_1, X_2,\dots | \hat{x}_j)\]
denotes the distribution of \(X\) when a particular node (or set of nodes) is intervened upon and forced to a particular level, \(x_j\).
do operations

Note, in general: \[\Pr(X_1, X_2,\dots | do(X_j = x_j')) \neq \Pr(X_1, X_2,\dots | X_j = x_j')\]

As an example we might imagine a situation where \(Y\) is determined by a fair coin and \(X\) simply copies \(Y\) (so \(Y\) causes \(X\), not the other way around).

In that case \(\Pr(Y=1 | X = 1) = 1\) but \(\Pr(Y=1 | do(X = 1)) = .5\)
do operations

A DAG is a “causal Bayesian network” or “causal DAG” if (and only if) the probability distribution resulting from setting some set \(X_i\) to \(\hat{x'}_i\) (i.e. do(X=x')) is:
) is:
\[P_{\hat{x}_i}: P(x_1,x_2,\dots x_n|\hat{x}_i) = \mathbb{I}(x_i = x_i')\prod_{-i}P(x_j|pa_j)\]
This means that there is only probability mass on vectors in which \(x_i = x_i'\) (reflecting the success of control) and all other variables are determined by their parents, given the values that have been set for \(x_i\).
do operations

Illustration: say we have binary \(X\) which causes binary \(M\) which causes binary \(Y\); say we intervene and set \(M=1\). Then what is the distribution of \((x,m,y)\)?
It is:
\[\Pr(x,m,y) = \Pr(x)\mathbb I(M = 1)\Pr(y|m)\]
We now have a well defined sense in which the arrows on a graph represent a causal structure and capture the conditional independence relations implied by the causal structure.
Of course any graph might represent many different probability distributions \(P\)
We can now start reading off from a graph when there is or is not conditional independence between sets of variables
\(A\) and \(B\) are conditionally independent, given \(C\) if on every path between \(A\) and \(B\):

* there is some non-collider node (a chain or fork) that is in \(C\)

or

* there is some collider node such that neither it nor any of its descendants is in \(C\)
Notes:
Are \(A\) and \(D\) unconditionally independent?

Now: say we removed the arrow from \(X\) to \(Y\):

* Would you expect to see a correlation between \(X\) and \(Y\) if you did not control for \(Z\)?
* Would you expect to see a correlation between \(X\) and \(Y\) if you did control for \(Z\)?
A “causal model” is:

1. Variables: an ordered set of endogenous nodes \(\mathcal{V}\) and exogenous nodes \(\Theta\)
2. Functions: a list of \(n\) functions \(\mathcal{F}= (f^1, f^2,\dots, f^n)\), one for each element of \(\mathcal{V}\), such that each \(f^i\) takes as arguments \(\theta^i\) as well as elements of \(\mathcal{V}\) that are prior to \(V^i\) in the ordering
3. Distributions: a probability distribution over \(\Theta\)
Learning about effects given a model means learning about \(\mathcal{F}\) and also the distribution of shocks (\(\Theta\)).
For discrete data this can be reduced to a question about learning about the distribution of \(\Theta\) only.
For instance the simplest model consistent with \(X \rightarrow Y\):
Endogenous Nodes = \(\{X, Y\}\), both with range \(\{0,1\}\)
Exogenous Nodes = \(\{\theta^X, \theta^Y\}\), with ranges \(\{\theta^X_0, \theta^X_1\}\) and \(\{\theta^Y_{00}\theta^Y_{01}, \theta^Y_{10}, \theta^Y_{11}\}\)
Functional equations:
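A minimal sketch, using the convention that the subscripts on \(\theta^Y\) give \(Y\)’s value when \(X=0\) and when \(X=1\):

\[X = \begin{cases} 0 & \text{if } \theta^X = \theta^X_0 \\ 1 & \text{if } \theta^X = \theta^X_1\end{cases} \qquad Y = \begin{cases} 0 & \text{if } \theta^Y = \theta^Y_{00} \\ X & \text{if } \theta^Y = \theta^Y_{01} \\ 1 - X & \text{if } \theta^Y = \theta^Y_{10} \\ 1 & \text{if } \theta^Y = \theta^Y_{11}\end{cases}\]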
Distributions on \(\Theta\): \(\Pr(\theta^i = \theta^i_k) = \lambda^i_k\)
What is the probability that \(X\) has a positive causal effect on \(Y\)?
This is equivalent to: \(\Pr(\theta^Y =\theta^Y_{01}) = \lambda^Y_{01}\)
So we want to learn about the distributions of the exogenous nodes
Well posed questions
dagitty
Say that units are randomly assigned to treatment in different strata (maybe just one); with fixed, though possibly different, shares assigned in each stratum. Then the key estimands and estimators are:
Estimand | Estimator |
---|---|
\(\tau_{ATE} \equiv \mathbb{E}[\tau_i]\) | \(\widehat{\tau}_{ATE} = \sum\nolimits_{x} \frac{w_x}{\sum\nolimits_{j}w_{j}}\widehat{\tau}_x\) |
\(\tau_{ATT} \equiv \mathbb{E}[\tau_i | Z_i = 1]\) | \(\widehat{\tau}_{ATT} = \sum\nolimits_{x} \frac{p_xw_x}{\sum\nolimits_{j}p_jw_j}\widehat{\tau}_x\) |
\(\tau_{ATC} \equiv \mathbb{E}[\tau_i | Z_i = 0]\) | \(\widehat{\tau}_{ATC} = \sum\nolimits_{x} \frac{(1-p_x)w_x}{\sum\nolimits_{j}(1-p_j)w_j}\widehat{\tau}_x\) |
where \(x\) indexes strata, \(p_x\) is the share of units in each stratum that is treated, and \(w_x\) is the size of a stratum.
In addition, each of these can be targets of interest:
And for different subgroups,
The CATEs are conditional average treatment effects, for example the effect for men or for women. These are straightforward.
However we might also imagine conditioning on unobservable or counterfactual features.
\[LATE = \frac{1}{|C|}\sum_{j\in C}(Y_j(X=1) - Y_j(X=0))\] \[C:=\{j:X_j(Z=1) > X_j(Z=0) \}\]
We will return to these in the study of instrumental variables.
Other ways to condition on potential outcomes:
Many inquiries are averages of individual effects, even if the groups are not known. But they do not have to be:
Inquiries might relate to distributional quantities such as:
You might even be interested in \(\min(Y_i(1) - Y_i(0))\).
There are lots of interesting “spillover” estimands.
Imagine there are three individuals and each person’s outcome depends on the assignments of all others. For instance \(Y_1(Z_1, Z_2, Z_3)\), or more generally, \(Y_i(Z_i, Z_{i+1 (\text{mod }3)}, Z_{i+2 (\text{mod }3)})\).
Then three estimands might be:
Interpret these. What others might be of interest?
A difference in CATEs is a well defined estimand that might involve interventions on one node only:
It captures differences in effects.
An interaction is an effect on an effect:
Note in the latter the expectation is taken over the whole population.
Say \(X\) can affect \(Y\) directly, or indirectly through \(M\). then we can write potential outcomes as:
We can then imagine inquiries of the form:
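For instance (standard mediation inquiries, stated here as one possible set):

* controlled direct effect: \(\mathbb{E}[Y(1, m) - Y(0, m)]\), for fixed \(m\)
* natural direct effect: \(\mathbb{E}[Y(1, M(0)) - Y(0, M(0))]\)
* natural indirect effect: \(\mathbb{E}[Y(1, M(1)) - Y(1, M(0))]\)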
Interpret these. What others might be of interest?
Again we might imagine that these are defined with respect to some group:
here, among those for whom \(X\) has a positive effect on \(Y\), for what share would there be a positive effect if \(M\) were fixed at 1?
In qualitative research a particularly common inquiry is “did \(X=1\) cause \(Y=1\)?”
This is often given as a probability, the “probability of causation” (though at the case level we might better think of this probability as an estimate rather than an estimand):
\[\Pr(Y_i(0) = 0 | Y_i(1) = 1, X = 1)\]
Intuition: What’s the probability \(X=1\) caused \(Y=1\) in an \(X=1, Y=1\) case drawn from a large population with the following experimental distribution:
Y=0 | Y=1 | All | |
---|---|---|---|
X=0 | 1 | 0 | 1 |
X=1 | 0.25 | 0.75 | 1 |
Intuition: What’s the probability \(X=1\) caused \(Y=1\) in an \(X=1, Y=1\) case drawn from a large population with the following experimental distribution:
Y=0 | Y=1 | All | |
---|---|---|---|
X=0 | 0.75 | 0.25 | 1 |
X=1 | 0.25 | 0.75 | 1 |
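As a benchmark: under monotonicity the probability of causation is \(\frac{\Pr(Y=1|X=1) - \Pr(Y=1|X=0)}{\Pr(Y=1|X=1)}\), and in general this is a lower bound. That gives \(\frac{0.75 - 0}{0.75} = 1\) in the first table and \(\frac{0.75 - 0.25}{0.75} = \frac{2}{3}\) in the second.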
Other inquiries focus on distinguishing between causes.
For the Billy Suzy problem (Hall 2004), Halpern (2016) focuses on “actual causation” as a way to distinguish between Suzy and Billy:
Imagine Suzy and Billy, simultaneously throwing stones at a bottle. Both are excellent shots and hit whatever they aim at. Suzy’s stone hits first, knocks over the bottle, and the bottle breaks. However, Billy’s stone would have hit had Suzy’s not hit, and again the bottle would have broken. Did Suzy’s throw cause the bottle to break? Did Billy’s?
Actual Causation:
An inquiry: for what share in a population is a possible cause an actual cause?
Pearl (e.g. Pearl and Mackenzie (2018)) describes three types of inquiry:
Level | Activity | Inquiry |
---|---|---|
Association | “Seeing” | If I see \(X=1\) should I expect \(Y=1\)? |
Intervention | “Doing” | If I set \(X\) to \(1\) should I expect \(Y=1\)? |
Counterfactual | “Imagining” | If \(X\) were \(0\) instead of 1, would \(Y\) then be \(0\) instead of \(1\)? |
We can understand these as asking different types of questions about a causal model
Level | Activity | Inquiry |
---|---|---|
Association | “Seeing” | \(\Pr(Y=1|X=1)\) |
Intervention | “Doing” | \(\mathbb{E}[\mathbb{I}(Y(1)=1)]\) |
Counterfactual | “Imagining” | \(\Pr(Y(1)=1 \& Y(0)=0)\) |
The third is qualitatively different because it requires information about two mutually incompatible conditions for units. This is not (generally) recoverable directly from knowledge of \(\Pr(Y(1)=1)\) and \(\Pr(Y(0)=0)\).
Given a causal model over nodes with discrete ranges, inquiries can generally be described as summaries of the distributions of exogenous nodes.
We already saw two instances of this:
What it is. When you have it. What it’s worth.
Informally a quantity is “identified” if it can be “recovered” once you have enough data.
Say for example average wage is \(x\) in some very large population. If I gather lots and lots of data on the wages of individuals and take the average, then my estimate will ultimately let me figure out \(x\).
Identifiability: Let \(Q(M)\) be a query defined over a class of models \(\mathcal M\); then \(Q\) is identifiable if \(P(M_1) = P(M_2) \rightarrow Q(M_1) = Q(M_2)\).

Identifiability with constrained data: Let \(Q(M)\) be a query defined over a class of models \(\mathcal M\); then \(Q\) is identifiable from features \(F(M)\) if \(F(M_1) = F(M_2) \rightarrow Q(M_1) = Q(M_2)\).
Based on Defn 3.2.3 in Pearl.
Our goal in causal inference is to estimate quantities such as:
\[\Pr(Y|\hat{x})\]
where \(\hat{x}\) is interpreted as \(X\) set to \(x\) by “external” control. Equivalently: \(do(X=x)\) or sometimes \(X \leftarrow x\).
If this quantity is identifiable then we can recover it with infinite data.
If it is not identifiable, then, even in the best case, we are not guaranteed to get the right answer.
Are there general rules for determining whether this quantity can be identified? Yes.
Note first, identifying
\[\Pr(Y|x)\]
is easy.
But we are not always interested in identifying the distribution of \(Y\) given observed values of \(x\), but rather, the distribution of \(Y\) if \(X\) is set to \(x\).
If we can identify the controlled distribution we can calculate other causal quantities of interest.
For example for a binary \(X, Y\) the causal effect of \(X\) on the probability that \(Y=1\) is:
\[\Pr(Y=1|\hat{x}=1) - \Pr(Y=1|\hat{x}=0)\]
Again, this is not the same as:
\[\Pr(Y=1|x=1) - \Pr(Y=1|x=0)\]
It’s the difference between seeing and doing.
The key idea is that you want to find a set of variables such that when you condition on these you get what you would get if you used a do
operation.
Intuition:
The backdoor criterion is satisfied by \(Z\) (relative to \(X\), \(Y\)) if:

* no node in \(Z\) is a descendant of \(X\), and
* \(Z\) blocks every path between \(X\) and \(Y\) that contains an arrow into \(X\)
In that case you can identify the effect of \(X\) on \(Y\) by conditioning on \(Z\):
\[P(Y=y | \hat{x}) = \sum_z P(Y=y| X = x, Z=z)P(z)\] (This is eqn 3.19 in Pearl (2000))
\[P(Y=y | \hat{x}) = \sum_z P(Y=y| X = x, Z=z)P(z)\]
\[P(Y=y | \hat{x}) - P(Y=y | \hat{x}')\]
Following Pearl (2009), Chapter 11. Let \(T\) denote the set of parents of \(X\): \(T := pa(X)\), with (possibly vector valued) realizations \(t\). These might not all be observed.
If the backdoor criterion is satisfied, we have:
We bring \(Z\) into the picture by writing: \[p(y|\hat{x}) = \sum_{t\in T} p(t) \sum_z p(y|x, t, z)p(z|x, t)\]
Then using the two conditions above:
This gives: \[p(y|\hat x) = \sum_{t \in T} p(t) \sum_z p(y|x, z)p(z|t) \]
So, cleaning up, we can get rid of \(T\):
\[p(y|\hat{x}) = \sum_z p(y|x, z)\sum_{t\in T} p(z|t)p(t) = \sum_z p(y| x, z)p(z)\]
For intuition:
We would be happy if we could condition on the parent \(T\), but \(T\) is not observed. However we can use \(Z\) instead making use of the fact that:
See Shpitser, VanderWeele, and Robins (2012)
The adjustment criterion is satisfied by \(Z\) (relative to \(X\), \(Y\)) if:

* no element of \(Z\) lies on, or is a descendant of a node (other than \(X\)) that lies on, a proper causal path from \(X\) to \(Y\), and
* \(Z\) blocks all non-causal paths from \(X\) to \(Y\)
Note:
Here \(Z\) satisfies the adjustment criterion but not the backdoor criterion:
\(Z\) is descendant of \(X\) but it is not a descendant of a node on a path from \(X\) to \(Y\). No harm adjusting for \(Z\) here, but not necessary either.
Consider this DAG:
Why?
If:

* \(M\) intercepts all directed paths from \(X\) to \(Y\)
* there is no unblocked backdoor path from \(X\) to \(M\)
* all backdoor paths from \(M\) to \(Y\) are blocked by \(X\)
Then \(\Pr(y| \hat x)\) is identifiable and given by:
\[\Pr(y| \hat x) = \sum_m\Pr(m|x)\sum_{x'}\left(\Pr(y|m,x')\Pr(x')\right)\]
We want to get \(\Pr(y | \hat x)\)
From the graph the joint distribution of variables is:
\[\Pr(x,m,y,u) = \Pr(u)\Pr(x|u)\Pr(m|x)\Pr(y|m,u)\] If we intervened on \(X\) we would have (\(\Pr(X = x |u)=1\)):
\[\Pr(m,y,u | \hat x) = \Pr(u)\Pr(m|x)\Pr(y|m,u)\] If we sum up over \(u\) and \(m\) we get:
\[\Pr(m,y| \hat x) = \Pr(m|x)\sum_u\left(\Pr(y|m,u)\Pr(u)\right)\] \[\Pr(y| \hat x) = \sum_m\Pr(m|x)\sum_u\left(\Pr(y|m,u)\Pr(u)\right)\]
The first part is fine; the second part however involves \(u\) which is unobserved. So we need to get the \(u\) out of \(\sum_u\left(\Pr(y|m,u)\Pr(u)\right)\).
Now, from the graph:

1. \(U\) is d-separated from \(M\) by \(X\):

\[\Pr(u|m, x) = \Pr(u|x)\]

2. \(X\) is d-separated from \(Y\) by \(M\) and \(U\):

\[\Pr(y|x, m, u) = \Pr(y|m,u)\]

That’s enough to get \(u\) out of \(\sum_u\left(\Pr(y|m,u)\Pr(u)\right)\)
\[\sum_u\left(\Pr(y|m,u)\Pr(u)\right) = \sum_x\sum_u\left(\Pr(y|m,u)\Pr(u|x)\Pr(x)\right)\]
Using the 2 equalities we got from the graph:
\[\sum_u\left(\Pr(y|m,u)\Pr(u)\right) = \sum_x\sum_u\left(\Pr(y|x,m,u)\Pr(u|x,m)\Pr(x)\right)\]
So:
\[\sum_u\left(\Pr(y|m,u)\Pr(u)\right) = \sum_x\left(\Pr(y|m,x)\Pr(x)\right)\]
Intuitively: \(X\) blocks the back door between \(M\) and \(Y\) just as well as \(U\) does
Substituting we are left with:
\[\Pr(y| \hat x) = \sum_m\Pr(m|x)\sum_{x'}\left(\Pr(y|m,x')\Pr(x')\right)\]
(The \('\) is to distinguish the \(x\) in the summation from the value of \(x\) of interest)
It’s interesting that \(x\) remains in the right hand side in the calculation of the \(m \rightarrow y\) effect, but this is because \(x\) blocks a backdoor from \(m\) to \(y\)
Bringing all this together into a claim we have:
If:

* \(M\) intercepts all directed paths from \(X\) to \(Y\)
* there is no unblocked backdoor path from \(X\) to \(M\)
* all backdoor paths from \(M\) to \(Y\) are blocked by \(X\)
Then \(\Pr(y| \hat x)\) is identifiable and given by:
\[\Pr(y| \hat x) = \sum_m\Pr(m|x)\sum_{x'}\left(\Pr(y|m,x')\Pr(x')\right)\]
There is a package (Textor et al. 2016) for figuring out what to condition on.
Define a dag using dagitty syntax:
There is then a simple command to check whether two sets are d-separated by a third set:
And a simple command to identify the adjustments needed to identify the effect of one variable on another:
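A sketch covering all three steps (node names assumed):

library(dagitty)

g <- dagitty("dag{Z -> X ; Z -> Y ; X -> Y}")

dseparated(g, "X", "Y", "Z")                      # FALSE: the path X -> Y remains open given Z

adjustmentSets(g, exposure = "X", outcome = "Y")  # { Z }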
Example where \(Z\) is correlated with \(X\) and \(Y\) and is a confounder
Example where \(Z\) is correlated with \(X\) and \(Y\) but it is not a confounder
But controlling can also cause problems. In fact conditioning on a temporally pre-treatment variable could cause problems. Who’d have thunk? Here is an example from Pearl (2005):
U1 <- rnorm(10000); U2 <- rnorm(10000)
Z <- U1+U2
X <- U2 + rnorm(10000)/2
Y <- U1*2 + X
lm_robust(Y ~ X) |> tidy() |> kable(digits = 2)
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
(Intercept) | -0.02 | 0.02 | -1.21 | 0.23 | -0.06 | 0.01 | 9998 | Y |
X | 1.02 | 0.02 | 56.52 | 0.00 | 0.98 | 1.05 | 9998 | Y |
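Now conditioning on \(Z\) (a sketch of the sort of call that produces the next table):

lm_robust(Y ~ X + Z) |> tidy() |> kable(digits = 2)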
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
(Intercept) | -0.01 | 0.01 | -1.13 | 0.26 | -0.03 | 0.01 | 9997 | Y |
X | -0.34 | 0.01 | -34.98 | 0.00 | -0.36 | -0.32 | 9997 | Y |
Z | 1.67 | 0.01 | 220.37 | 0.00 | 1.65 | 1.68 | 9997 | Y |
g <- dagitty("dag{U1 -> Z ; U1 -> y ; U2 -> Z ; U2 -> x -> y}")
adjustmentSets(g, exposure = "x", outcome = "y")
{}
Which means, no need to condition on anything.
A bind: from Pearl 1995.
For a solution for a class of related problems see Robins, Hernan, and Brumback (2000)
g <- dagitty("dag{U1 -> Z ; U1 -> y ;
U2 -> Z ; U2 -> x -> y;
Z -> x}")
adjustmentSets(g, exposure = "x", outcome = "y")
{ U1 }
{ U2, Z }
which means you have to adjust for an unobservable. Here we double-check whether including or not including \(Z\) is enough:
So we cannot identify the effect here. But can we still learn about it?
Estimation and testing
Unbiased estimates of the (sample) average treatment effect can be obtained (whether or not there is imbalance on covariates) using:
\[ \widehat{ATE} = \frac{1}{n_T}\sum_TY_i - \frac{1}{n_C}\sum_CY_i, \]
df <- fabricatr::fabricate(N = 100, Z = rep(0:1, N/2), Y = rnorm(N) + Z)
# by hand
df |>
summarize(Y1 = mean(Y[Z==1]),
Y0 = mean(Y[Z==0]),
diff = Y1 - Y0) |> kable(digits = 2)
Y1 | Y0 | diff |
---|---|---|
1.07 | -0.28 | 1.35 |
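The same with estimatr (a sketch of the call producing the next table):

estimatr::difference_in_means(Y ~ Z, data = df) |>
  tidy() |> kable(digits = 2)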
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
Z | 1.35 | 0.17 | 7.94 | 0 | 1.01 | 1.68 | 97.98 | Y |
We can also do this with regression:
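A sketch:

lm_robust(Y ~ Z, data = df) |> tidy() |> kable(digits = 2)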
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
(Intercept) | -0.28 | 0.12 | -2.33 | 0.02 | -0.51 | -0.04 | 98 | Y |
Z | 1.35 | 0.17 | 7.94 | 0.00 | 1.01 | 1.68 | 98 | Y |
See Freedman (2008) on why regression is fine here
Say now different strata or blocks \(\mathcal{S}\) had different assignment probabilities. Then you could estimate:
\[ \widehat{ATE} = \sum_{S\in \mathcal{S}}\frac{n_{S}}{n} \left(\frac{1}{n_{S1}}\sum_{S\cap T}y_i - \frac{1}{n_{S0}}\sum_{S\cap C}y_i \right) \]
Note: you cannot just ignore the blocks because assignment is no longer independent of potential outcomes: you might be sampling units with different potential outcomes with different probabilities.
However, the formula above works fine because selecting is random conditional on blocks.
As a DAG this is just classic confounding:
Data with heterogeneous assignments:
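A sketch of such a data generating process (parameter values assumed; the numbers below come from the original draw):

df <- fabricate(
  N = 500,
  X = rep(0:1, N / 2),              # two blocks
  p = 0.2 + 0.5 * X,                # assignment probability depends on the block
  Z = rbinom(N, 1, p),              # heterogeneous assignment
  ip = Z / p + (1 - Z) / (1 - p),   # inverse probability weights, used below
  Y = 0.5 * Z + X + rnorm(N)        # true effect is 0.5
)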
True effect is 0.5, but:
Averaging over effects in blocks
# by hand
estimates <-
df |>
group_by(X) |>
summarize(Y1 = mean(Y[Z==1]),
Y0 = mean(Y[Z==0]),
diff = Y1 - Y0,
W = n())
estimates$diff |> weighted.mean(estimates$W)
[1] 0.7236939
# with estimatr
estimatr::difference_in_means(Y ~ Z, blocks = X, data = df) |>
tidy() |> kable(digits = 2)
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
Z | 0.72 | 0.11 | 6.66 | 0 | 0.51 | 0.94 | 496 | Y |
This also corresponds to the difference in the weighted average of treatment outcomes (with weights given by the inverse of the probability that each unit is assigned to treatment) and control outcomes (with weights given by the inverse of the probability that each unit is assigned to control).
# by hand
df |>
summarize(Y1 = weighted.mean(Y[Z==1], ip[Z==1]),
Y0 = weighted.mean(Y[Z==0], ip[Z==0]), # note !
diff = Y1 - Y0)|>
kable(digits = 2)
Y1 | Y0 | diff |
---|---|---|
0.59 | -0.15 | 0.74 |
# with estimatr
estimatr::difference_in_means(Y ~ Z, weights = ip, data = df) |>
tidy() |> kable(digits = 2)
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
Z | 0.74 | 0.11 | 6.65 | 0 | 0.52 | 0.96 | 498 | Y |
But inverse propensity weighting is a more general principle, which can be used even if you do not have blocks.
The intuition for it comes straight from sampling weights — you weight up in order to recover an unbiased estimate of the potential outcomes for all units, whether or not they are assigned to treatment.
With sampling weights, however, you can include units even if their weight is 1. Why can you not include such units when doing inverse propensity weighting?
Say you made a mess and used a randomization that was correlated with some variable, \(U\). For example:
Bad assignment: some randomization process you can’t fully understand (but can replicate) that results in unequal probabilities. The result is a sampling distribution that is not centered on the true effect (0).
To fix you can estimate the assignment probabilities by replicating the assignment many times:
and then use these assignment probabilities in your estimator
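A sketch (messy_assignment is an assumed stand-in for the replicable but opaque procedure):

Z_sims <- replicate(5000, messy_assignment(df))        # re-run the assignment many times
df$prob <- rowMeans(Z_sims)                            # estimated Pr(Z = 1) for each unit
df$ip <- df$Z / df$prob + (1 - df$Z) / (1 - df$prob)   # implied inverse probability weights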
Implied weights
Improved results
This example is surprising but it helps you see the logic of why inverse weighting gets unbiased estimates (and why that might not guarantee a reasonable answer)
Imagine there is one unit with potential outcomes \(Y(1) = 2, Y(0) = 1\). So the unit level treatment effect is 1.
You toss a coin.

* Heads: the unit is treated, and your inverse propensity weighted estimate is \(\frac{2}{0.5} - 0 = 4\)
* Tails: the unit is in control, and your estimate is \(0 - \frac{1}{0.5} = -2\)

So your expected estimate is: \[0.5 \times 4 + 0.5 \times (-2) = 1\]
Great on average but always lousy
\[\hat{\overline{Y_1}} = \frac{1}n\left(\sum_i \frac{Z_iY_i(1)}{\pi_i}\right)\] With independent assignment the expected value of \(\hat{\overline{Y_1}}\) is just:
\[\mathbb{E}[\hat{\overline{Y_1}}] =\frac1n\left( \left(\pi_1 \frac{1\times Y_1(1)}{\pi_1} + (1-\pi_1) \frac{0\times Y_1(1)}{\pi_1}\right) + \left(\pi_2 \frac{1\times Y_2(1)}{\pi_2} + (1-\pi_2) \frac{0\times Y_2(1)}{\pi_2}\right) + \dots\right)\]
\[\mathbb{E}[\hat{\overline{Y_1}}] =\frac1n\left( Y_1(1) + Y_2(1) + \dots\right) = \overline{Y_1}\]
and similarly for \(\mathbb{E}[\hat{\overline{Y_0}}]\) and so using linearity of expectations:
\[\mathbb{E}[\widehat{\overline{Y_1 - Y_0}}] = \overline{Y_1 - Y_0}\]
Let’s talk about “inference”
In classical statistics we characterize our uncertainty over an estimate using an estimate of variance of the sampling distribution of the estimator.
The key idea is that we want to be able to say how likely we would have been to get such an estimate if the distribution of estimates associated with our design looked a given way.
More specifically we want to estimate “standard error” or the “standard deviation of the sampling distribution”
(See Wooldridge (2023), where the standard error is understood as the “estimate of the standard deviation of the sampling distribution”)
Given:
The variance of the estimator over \(n\) repeated ‘runs’ of a design is: \(Var(\hat{\tau}) = \frac{1}n\sum_i(\hat\tau_i - \overline{\hat\tau_i})^2\)
And the standard error is:
\(se(\hat{\tau}) = \sqrt{\frac{1}n\sum_i(\hat\tau_i - \overline{\hat\tau_i})^2}\)
If we have a good measure for the shape of the sampling distribution we can start to make statements of the form:
If the sampling distribution is roughly normal, as it may be with large samples, then we can use procedures such as: “there is a 5% probability that an estimate would be more than 1.96 standard errors away from the mean of the sampling distribution”
Key idea: You can estimate variance straight from the data, given knowledge of the assignment process and assuming well defined potential outcomes.
Recall that in general \(Var(x) = \frac{1}n\sum_i(x_i - \overline{x})^2\). Here the \(x_i\)s are the treatment effect estimates we might get under different random assignments, \(n\) is the number of possible assignments (assumed here all equally likely, though otherwise we can weight), and \(\overline{x}\) is the truth.
For intuition imagine we have just two units \(A\), \(B\), with potential outcomes \(A_1\), \(A_0\), \(B_1\), \(B_0\).
When there are two units with outcomes \(x_1, x_2\), the variance simplifies like this:
\[Var(x) = \frac{1}2\left(x_1 - \frac{x_1 + x_2}{2}\right)^2 + \frac{1}2\left(x_2 - \frac{x_1 + x_2}{2}\right)^2 = \left(\frac{x_1 - x_2}{2}\right)^2\]
In the two unit case the two possible treatment estimates are: \(\hat{\tau}_1=A_1 - B_0\) and \(\hat{\tau}_2=B_1 - A_0\), depending on what gets put into treatment. So the variance is:
\[Var(\hat{\tau}) = \left(\frac{\hat{\tau}_1 - \hat{\tau}_2}{2}\right)^2 = \left(\frac{(A_1 - B_0) - (B_1 - A_0)}{2}\right)^2 =\left(\frac{(A_1 - B_1) + (A_0 - B_0)}{2}\right)^2 \] which we can re-write as:
\[Var(\hat{\tau}) = \left(\frac{A_1 - B_1}{2}\right)^2 + \left(\frac{A_0 - B_0}{2}\right)^2+ 2\frac{(A_1 - B_1)(A_0-B_0)}{4}\] The first two terms correspond to the variance of \(Y(1)\) and the variance of \(Y(0)\). The last term is a bit pesky though: it corresponds to twice the covariance of \(Y(1)\) and \(Y(0)\).
How can we go about estimating this?
\[Var(\hat{\tau}) = \left(\frac{A_1 - B_1}{2}\right)^2 + \left(\frac{A_0 - B_0}{2}\right)^2+ 2\frac{(A_1 - B_1)(A_0-B_0)}{4}\]
In the two unit case it is quite challenging because we do not have an estimate for any of the three terms: we do not have an estimate for the variance in the treatment group or in the control group because we have only one observation in each case; and we do not have an estimate for the covariance because we don’t observe both potential outcomes for any case.
Things do look a bit better however with more units…
From Freedman Prop 1 / Example 1 (using combinatorics!) we have:
\(V(\widehat{ATE}) = \frac{1}{n-1}\left[\frac{n_C}{n_T}V_1 + \frac{n_T}{n_C}V_0 + 2C_{01}\right]\)
… where \(V_0, V_1\) denote variances and \(C_{01}\) covariance
This is usefully rewritten as:
\[ \begin{split} V(\widehat{ATE}) & = \frac{1}{n-1}\left[\frac{n - n_T}{n_T}V_1 + \frac{n - n_C}{n_C}V_0 + 2C_{01}\right] \\ & = \frac{n}{n-1}\left[\frac{V_1}{n_T} + \frac{V_0}{n_C}\right] - \frac{1}{n-1}\left[V_1 + V_0 - 2C_{01}\right] \end{split} \]
where the final term is nonnegative: \(V_1 + V_0 - 2C_{01}\) is the variance of the unit-level treatment effects. Since this term is subtracted, using just the first term yields a conservative (upward-biased) estimate of the variance.
Note: the first term corresponds to the standard Neyman variance estimator and to the usual (HC2) robust standard errors (see Samii and Aronow (2012)).
For the case with blocking, the conservative estimator is:
\(V(\widehat{ATE}) = {\sum_{S\in \mathcal{S}}{\left(\frac{n_{S}}{n}\right)^2} \left({\frac{s^2_{S1}}{n_{S1}}} + {\frac{s^2_{S0}}{n_{S0}}} \right)}\)
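A sketch of this blocked conservative estimator computed by hand (the column names S, Z, Y and the fabricated data here are assumptions):

library(dplyr)
blocked_neyman_var <- function(df)
  df |>
    group_by(S) |>
    summarize(n_S = n(),
              v_S = var(Y[Z == 1]) / sum(Z == 1) +
                    var(Y[Z == 0]) / sum(Z == 0)) |>
    summarize(V = sum((n_S / sum(n_S))^2 * v_S)) |>
    pull(V)

df <- fabricatr::fabricate(N = 100, S = rep(1:5, each = 20),
                           Z = randomizr::block_ra(blocks = S),
                           Y = rnorm(N) + Z)
blocked_neyman_var(df)   # take sqrt() for the conservative standard error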
An illustration of how conservative the conservative estimator of variance really is (numbers in the plot are correlations between \(Y(1)\) and \(Y(0)\)).
We confirm that:
\(\tau\) | \(\rho\) | \(\sigma^2_{Y(1)}\) | \(\Delta\) | \(\sigma^2_{\tau}\) | \(\widehat{\sigma}^2_{\tau}\) | \(\widehat{\sigma}^2_{\tau(\text{Neyman})}\) |
---|---|---|---|---|---|---|
1.00 | -1.00 | 1.00 | -0.04 | 0.00 | -0.00 | 0.04 |
1.00 | -0.67 | 1.00 | -0.03 | 0.01 | 0.01 | 0.04 |
1.00 | -0.33 | 1.00 | -0.03 | 0.01 | 0.01 | 0.04 |
1.00 | 0.00 | 1.00 | -0.02 | 0.02 | 0.02 | 0.04 |
1.00 | 0.33 | 1.00 | -0.01 | 0.03 | 0.03 | 0.04 |
1.00 | 0.67 | 1.00 | -0.01 | 0.03 | 0.03 | 0.04 |
1.00 | 1.00 | 1.00 | 0.00 | 0.04 | 0.04 | 0.04 |
Here \(\rho\) is the unobserved correlation between \(Y(1)\) and \(Y(0)\); and \(\Delta\) is the final term in the sample variance equation that we cannot estimate.
The conservatism comes from the fact that you do not know the covariance between \(Y(1)\) and \(Y(0)\).
Example:
# Computes sharp upper/lower bounds on the variance of the
# difference-in-means estimator from the two marginal outcome distributions
sharp_var <- function(yt, yc, N = length(c(yt, yc)), upper = TRUE) {
  m <- length(yt)                     # number of treated units
  n <- m + length(yc)                 # total number of units
  # finite-population variance
  V <- function(x, N) (N - 1) / (N * (length(x) - 1)) * sum((x - mean(x))^2)
  yt <- sort(yt)
  # upper bound: couple outcomes comonotonically; lower bound: antimonotonically
  yc <- if (upper) sort(yc) else sort(yc, decreasing = TRUE)
  p_i <- unique(sort(c(seq(0, n - m, 1) / (n - m), seq(0, m, 1) / m))) -
    .Machine$double.eps^.5
  p_i[1] <- .Machine$double.eps^.5
  yti <- yt[ceiling(p_i * m)]
  yci <- yc[ceiling(p_i * (n - m))]
  p_i_minus <- c(NA, p_i[1:(length(p_i) - 1)])
  ((N - m) / m * V(yt, N) + (N - (n - m)) / (n - m) * V(yc, N) +
     2 * sum(((p_i - p_i_minus) * yti * yci)[2:length(p_i)]) -
     2 * mean(yt) * mean(yc)) / (N - 1)
}
n <- 1000000
Y <- c(rep(0,n/2), 1000*rnorm(n/2))
X <- c(rep(0,n/2), rep(1, n/2))
lm_robust(Y~X) |> tidy() |> kable(digits = 2)
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
(Intercept) | 0.00 | 0.00 | 1.73 | 0.08 | 0.00 | 0.00 | 999998 | Y |
X | 1.21 | 1.41 | 0.86 | 0.39 | -1.56 | 3.98 | 999998 | Y |
c(sharp_var(Y[X==1], Y[X==0], upper = FALSE),
sharp_var(Y[X==1], Y[X==0], upper = TRUE)) |>
round(2)
[1] 1 1
The sharp bounds are \([1,1]\) but the conservative estimate is \(\sqrt{2}\).
However you can do hypothesis testing even without an estimate of the standard error.
Up next
A procedure for using the randomization distribution to calculate \(p\) values
Illustrating \(p\) values via “randomization inference”
Say you randomized assignment to treatment and your data looked like this.
Unit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
Treatment | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Health score | 4 | 2 | 3 | 1 | 2 | 3 | 4 | 8 | 7 | 6 |
Then:
Unit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
Treatment | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Health score | 4 | 2 | 3 | 1 | 2 | 3 | 4 | 8 | 7 | 6 |
Then:
ri estimate of \(p\):
# data
set.seed(1)
df <- fabricate(N = 1000, Z = rep(c(0,1), N/2), Y= .1*Z + rnorm(N))
# test stat
test.stat <- function(df) with(df, mean(Y[Z==1])- mean(Y[Z==0]))
# test stat distribution
ts <- replicate(4000, df |> mutate(Z = sample(Z)) |> test.stat())
# test
mean(ts >= test.stat(df)) # One sided p value
[1] 0.025
The \(p\) value is the mass to the right of the vertical line at the observed test statistic.
ri2
You can do the same using Alex Coppock’s ri2 package:
term | estimate | upper_p_value |
---|---|---|
Z | 0.1321367 | 0.02225 |
You’ll notice a slightly different answer. This is because although the procedure is “exact,” it is subject to simulation error.
Observed: Y(0) | Observed: Y(1) | Null (effect 0): Y(0) | Null (effect 0): Y(1) | Null (effect 2): Y(0) | Null (effect 2): Y(1) |
---|---|---|---|---|---|
1 | NA | 1 | 1 | 1 | 3 |
2 | NA | 2 | 2 | 2 | 4 |
NA | 4 | 4 | 4 | 2 | 4 |
NA | 3 | 3 | 3 | 1 | 3 |
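The imputation step behind such a table is mechanical; a minimal sketch (a data frame df with columns Y and Z is assumed):

# impute both potential outcomes under a sharp null of a constant effect tau_0
impute_under_null <- function(df, tau_0 = 0)
  df |> dplyr::mutate(Y_0 = Y - Z * tau_0,
                      Y_1 = Y + (1 - Z) * tau_0)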
ri and CIs
It is possible to use this procedure to generate confidence intervals with a natural interpretation.
ri and CIs in practice
Warning: calculating confidence intervals this way can be computationally intensive.
ri with DeclareDesign
DeclareDesign can do randomization inference natively.
Design | Estimator | Outcome | Term | N Sims | One Sided Pos | One Sided Neg | Two Sided |
---|---|---|---|---|---|---|---|
design | estimator | Y | Z | 1000 | 0.02 | 0.98 | 0.04 |
(0.00) | (0.00) | (0.01) |
ri: interactions
Let’s now imagine a world with two treatments where we are interested in using ri to assess the interaction. (Code from Coppock, ri2.)
The approach is to declare a null model that is nested in the full model. The \(F\) statistic from the model comparison is then taken as the test statistic, and its distribution is built up under re-randomizations.
ri: interactions with DeclareDesign
Let’s imagine a true model with interactions. We take an estimate and then ask how likely such an estimate would be under a null model with constant effects.
Note: this is quite a sharp hypothesis
df <- fabricate(N = 1000, Z1 = rep(c(0,1), N/2), Z2 = sample(Z1), Y = Z1 + Z2 - .15*Z1*Z2 + rnorm(N))
my_estimate <- (lm(Y ~ Z1*Z2, data = df) |> coef())[4]
null_model <- function(df) {
  # fit the null (no-interaction) model and use its coefficients to impute
  # potential outcomes under constant additive effects
  M0 <- lm(Y ~ Z1 + Z2, data = df)
  d1 <- coef(M0)[2]
  d2 <- coef(M0)[3]
  df |> mutate(
    Y_Z1_0_Z2_0 = Y - Z1*d1 - Z2*d2,
    Y_Z1_1_Z2_0 = Y + (1-Z1)*d1 - Z2*d2,
    Y_Z1_0_Z2_1 = Y - Z1*d1 + (1-Z2)*d2,
    Y_Z1_1_Z2_1 = Y + (1-Z1)*d1 + (1-Z2)*d2)
}
ri: interactions with DeclareDesign
Imputed potential outcomes look like this:
ID | Z1 | Z2 | Y | Y_Z1_0_Z2_0 | Y_Z1_1_Z2_0 | Y_Z1_0_Z2_1 | Y_Z1_1_Z2_1 |
---|---|---|---|---|---|---|---|
0001 | 0 | 0 | -0.18 | -0.18 | 0.76 | 0.68 | 1.61 |
0002 | 1 | 0 | 0.20 | -0.73 | 0.20 | 0.12 | 1.06 |
0003 | 0 | 0 | 2.56 | 2.56 | 3.50 | 3.42 | 4.36 |
0004 | 1 | 0 | -0.27 | -1.21 | -0.27 | -0.35 | 0.59 |
0005 | 0 | 1 | -2.13 | -2.99 | -2.05 | -2.13 | -1.19 |
0006 | 1 | 1 | 3.52 | 1.72 | 2.66 | 2.58 | 3.52 |
ri: interactions with DeclareDesign
Design | Estimator | Outcome | Term | N Sims | One Sided Pos | One Sided Neg | Two Sided |
---|---|---|---|---|---|---|---|
design | estimator | Y | Z1:Z2 | 1000 | 0.95 | 0.05 | 0.10 |
(0.01) | (0.01) | (0.01) |
ri in practice
ri Applications

Session | Capacity | T1 | T2 | T3 |
---|---|---|---|---|
Thursday | 40 | 10 | 30 | 0 |
Friday | 40 | 10 | 0 | 30 |
Saturday | 10 | 10 | 0 | 0 |
Optimal assignment to treatment given constraints due to facilities
Subject type | N | Available |
---|---|---|
A | 30 | Thurs, Fri |
B | 30 | Thurs, Sat |
C | 30 | Fri, Sat |
Constraints due to subjects
ri Applications
If you think hard about assignment you might come up with an allocation like this.
Allocations

Subject type | N | Available | Thurs | Fri | Sat |
---|---|---|---|---|---|
A | 30 | Thurs, Fri | 15 | 15 | NA |
B | 30 | Thurs, Sat | 25 | NA | 5 |
C | 30 | Fri, Sat | NA | 25 | 5 |
Assignment of people to days
ri Applications
That allocation balances as much as possible. Given the allocation, you might randomly assign individuals to different days as well as randomly assigning them to treatments within days. If you then figure out assignment propensities, this is what you would get:
Assignment Probabilities

Subject type | N | Available | T1 | T2 | T3 |
---|---|---|---|---|---|
A | 30 | Thurs, Fri | 0.250 | 0.375 | 0.375 |
B | 30 | Thurs, Sat | 0.375 | 0.625 | 0.000 |
C | 30 | Fri, Sat | 0.375 | NA | 0.625 |
ri Applications
Even under the assumption that the day of measurement does not matter, these assignment probabilities have big implications for analysis.
Assignment Probabilities

Subject type | N | Available | T1 | T2 | T3 |
---|---|---|---|---|---|
A | 30 | Thurs, Fri | 0.250 | 0.375 | 0.375 |
B | 30 | Thurs, Sat | 0.375 | 0.625 | 0.000 |
C | 30 | Fri, Sat | 0.375 | NA | 0.625 |
Only the type \(A\) subjects could have received any of the three treatments.
There are no two treatments for which it is possible to compare outcomes for subpopulations \(B\) and \(C\)
A comparison of \(T1\) versus \(T2\) can only be made for population \(A \cup B\)
However subpopulation \(A\) is assigned to \(T1\) (versus \(T2\)) with probability \(2/5\), while subpopulation \(B\) is assigned with probability \(3/8\)
ri Applications
Implications for design: need to uncluster treatment delivery
Implications for analysis: need to take account of propensities
Idea: Wacky assignments happen but if you know the propensities you can do the analysis.
ri Applications: Indirect assignments
A particularly interesting application is where a random assignment combines with existing features of the world to determine assignment to an “indirect” treatment.
Consider for example this data.
Should you believe it?
Offers by | To: Baganda | To: Banyankole |
---|---|---|
Baganda | 64% | 16% |
Banyankole | 16% | 4% |
Offers by | To: Baganda | To: Banyankole |
---|---|---|
Baganda | 50 | 50 |
Banyankole | 20 | 20 |
So that’s a problem
Control?
Compare:
Instead you can use the formula above for \(\hat{\tau}_{ATE}\) to estimate the ATE.
Alternatively…
You should have noticed that the logic for controlling for a covariate here is equivalent to the logic we saw for heterogeneous assignment propensities. These are really the same thing.
Returning to prior example:
df <- fabricatr::fabricate(
N = 500,
X = rep(0:1, N/2),
Z = rbinom(N, 1, .2 + .3*X),
Y = rnorm(N) + Z*X)
lm_robust(Y ~ Z*X_c, data = df |> mutate(X_c = X - mean(X))) |>
tidy() |> kable(digits = 2)
term | estimate | std.error | statistic | p.value | conf.low | conf.high | df | outcome |
---|---|---|---|---|---|---|---|---|
(Intercept) | -0.02 | 0.06 | -0.42 | 0.68 | -0.14 | 0.09 | 496 | Y |
Z | 0.59 | 0.10 | 6.05 | 0.00 | 0.40 | 0.78 | 496 | Y |
X_c | 0.16 | 0.11 | 1.42 | 0.15 | -0.06 | 0.39 | 496 | Y |
Z:X_c | 0.64 | 0.20 | 3.28 | 0.00 | 0.26 | 1.02 | 496 | Y |
Demeaning interactions
Let’s:
f_Y <- function(X1, X2, u) .1 + .2*X1 + .3*X2 + u*X1*X2
where u is distributed \(U[0,1]\).
f_Y <- function(X1, X2, u) .1 + .2*X1 + .3*X2 + u*X1*X2
design <-
declare_model(N = 1000, u = runif(N),
X1 = complete_ra(N), X2 = block_ra(blocks = X1),
X1_demeaned = X1 - mean(X1),
X2_demeaned = X2 - mean(X2),
Y = f_Y(X1, X2, u)) +
declare_inquiry(
base = mean(f_Y(0, 0, u)),
average = mean(f_Y(0, 0, u) + f_Y(0, 1, u) + f_Y(1, 0, u) + f_Y(1, 1, u))/4,
CATE_X1_given_0 = mean(f_Y(1, 0, u) - f_Y(0, 0, u)),
CATE_X2_given_0 = mean(f_Y(0, 1, u) - f_Y(0, 0, u)),
ATE_X1 = mean(f_Y(1, X2, u) - f_Y(0, X2, u)),
ATE_X2 = mean(f_Y(X1, 1, u) - f_Y(X1, 0, u)),
I_X1_X2 = mean((f_Y(1, 1, u) - f_Y(0, 1, u)) - (f_Y(1, 0, u) - f_Y(0, 0, u)))
) +
declare_estimator(Y ~ X1*X2,
inquiry = c("base", "CATE_X1_given_0", "CATE_X2_given_0", "I_X1_X2"),
term = c("(Intercept)", "X1", "X2", "X1:X2"),
label = "natural") +
declare_estimator(Y ~ X1_demeaned*X2_demeaned,
inquiry = c("average", "ATE_X1", "ATE_X2", "I_X1_X2"),
term = c("(Intercept)", "X1_demeaned", "X2_demeaned", "X1_demeaned:X2_demeaned"),
label = "demeaned")
Estimator | Inquiry | Term | Mean Estimand | Mean Estimate |
---|---|---|---|---|
demeaned | ATE_X1 | X1_demeaned | 0.45 | 0.45 |
demeaned | ATE_X2 | X2_demeaned | 0.55 | 0.55 |
demeaned | I_X1_X2 | X1_demeaned:X2_demeaned | 0.50 | 0.50 |
demeaned | average | (Intercept) | 0.48 | 0.48 |
natural | CATE_X1_given_0 | X1 | 0.20 | 0.20 |
natural | CATE_X2_given_0 | X2 | 0.30 | 0.30 |
natural | I_X1_X2 | X1:X2 | 0.50 | 0.50 |
natural | base | (Intercept) | 0.10 | 0.10 |
It’s all good. But you need to match the estimator to the inquiry: demean for average marginal effects; do not demean for conditional marginal effects.
If you have different groups with different assignment propensities you can do any or all of these:
You cannot (reliably):
When does controlling for covariates improve things, and when does it make things worse?
For a great walk through of what you can draw from graphical models for the decision to control see:
A Crash Course in Good and Bad Controls by Cinelli, Forney, and Pearl (2022)
Aside: these implications generally refer to using controls as covariates, e.g. by implementing blocked differences in means or similar. For a Bayesian model of the form used in CausalQueries, the information from “bad controls” is used wisely.
Conditional Bias and Precision Gains from Controls
Experimental motivation: Controls can reduce noise and improve precision. This is an argument for using variables that are correlated with the output (not with the treatment).
However: Introducing controls can create complications
As argued by Freedman (summary from Lin (2012)), we can get: “worsened asymptotic precision, invalid measures of precision, and small-sample bias”\(^*\)
These adverse effects are essentially removed with an interacted model
See discussions in Imbens and Rubin (2015) (7.6, 7.7) and especially Theorem 7.2 for the asymptotic variance of the estimator
\(^*\) though note that the precision concern does not hold when treatment and control groups are equally sized
We will illustrate by comparing:
With:
# https://book.declaredesign.org/library/experimental-causal.html
prob <- 0.5
control_slope <- -1
declaration_18.3 <-
declare_model(N = 100, X = runif(N, 0, 1),
U = rnorm(N, sd = 0.1),
Y_Z_1 = 1*X + U, Y_Z_0 = control_slope*X + U
) +
declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
declare_assignment(Z = complete_ra(N = N, prob = prob)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_estimator(Y ~ Z, inquiry = "ATE", label = "DIM") +
declare_estimator(Y ~ Z + X, .method = lm_robust, inquiry = "ATE", label = "OLS") +
declare_estimator(Y ~ Z, covariates = ~X, .method = lm_lin,
inquiry = "ATE", label = "Lin")
The variances and covariance of potential outcomes depend on the slope parameter
Diagnosis:
a <- 0
b <- 0
design <-
declare_model(N = 100,
X = rnorm(N),
Z = complete_ra(N),
Y_Z_0 = a*X + rnorm(N),
Y_Z_1 = a*X + correlate(given = X, rho = b, rnorm) + 1,
Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
declare_estimator(Y ~ Z, covariates = ~X, .method = lm_lin, label = "Lin") +
declare_estimator(Y ~ Z, label = "No controls") +
declare_estimator(Z ~ X, label = "Condition")
The design implements estimation controlling and not controlling for \(X\) and also keeps track of the results of a test for the relation between \(Z\) and \(X\).
We simulate with many simulations over a range of designs
We see that the standard errors are larger when you control in cases in which the control is not predictive of the outcome and is correlated with the treatment. Otherwise they can be smaller.
See Mutz et al
We also see “conditional bias” when we do not control: where the distribution of errors depends on the correlation with the covariate.
Puzzle: Does the sample average treatment effect on the treated depend on the covariate balance?
Doubly robust estimation combines:
Using the two together, estimating potential outcomes with propensity weighting, lets you do well even if one of the two models is wrong.
Each part can be done using nonparametric methods, resulting in an overall semi-parametric procedure.
Estimate of causal effect:
\[\hat{\tau} = \frac{1}{n}\sum_{i=1}^n\left(\frac{Z_i}{\hat{\pi}_i}\left(Y_i - \hat{Y}_{i1}\right) - \frac{1-Z_i}{1-\hat{\pi}_i}\left(Y_i - \hat{Y}_{i0}\right) + \left(\hat{Y}_{i1} - \hat{Y}_{i0}\right)\right)\]
Note that if the \(\hat{Y}_{iz}\) are correct then the first parts drop out and we get the right answer.
So if you can impute the potential outcomes, you are good (though hardly surprising).
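As a minimal sketch, the estimator as an R function (the propensities pi_hat and imputed outcomes Y1_hat, Y0_hat are assumed to be estimated elsewhere):

aipw_estimate <- function(Y, Z, pi_hat, Y1_hat, Y0_hat)
  mean(Z / pi_hat * (Y - Y1_hat) -
       (1 - Z) / (1 - pi_hat) * (Y - Y0_hat) +
       (Y1_hat - Y0_hat))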
To see this, imagine that with probability \(\pi\) we assign unit 1 to treatment and unit 2 to control (otherwise unit 1 to control and unit 2 to treatment).
Then our expected estimate is:
\[\frac{1}{2}\left(\pi\left(\frac{1}{\pi}\left(Y_{11} - \hat{Y}_{11}\right) - \frac{1}{\pi}\left(Y_{20} - \hat{Y}_{20}\right)\right) + (1-\pi)\left(\frac{1}{1-\pi}\left(Y_{21} - \hat{Y}_{21}\right) - \frac{1}{1-\pi}\left(Y_{10} - \hat{Y}_{10}\right)\right) + \left(\hat{Y}_{11} - \hat{Y}_{10}\right) + \left(\hat{Y}_{21} - \hat{Y}_{20}\right)\right)\]
The \(\pi\)s cancel and the imputed values net out:
\[\frac{1}{2}\left(Y_{11} - Y_{10} + Y_{21} - Y_{20} + \left(\hat{Y}_{10} + \hat{Y}_{20} - \hat{Y}_{11} - \hat{Y}_{21}\right) + \left(\hat{Y}_{11} - \hat{Y}_{10} + \hat{Y}_{21} - \hat{Y}_{20}\right)\right)\]
\[= \frac{1}{2}\left(Y_{11} - Y_{10} + Y_{21} - Y_{20}\right)\]
which is the true average treatment effect.
Robins, Rotnitzky, and Zhao (1994)
Consider this data (with confounding):
# df with true treatment effect of 1
# (0.5 if race = 0; 1.5 if race = 1)
df <- fabricatr::fabricate(
N = 5000,
class = sample(1:3, N, replace = TRUE),
race = rbinom(N, 1, .5),
Z = rbinom(N, 1, .2 + .3*race),
Y = .5*Z + race*Z + class + rnorm(N),
qsmk = factor(Z),
class = factor(class),
race = factor(race)
)
Naive regression produces biased estimates, even with controls. Lin regression gets the right result however.
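A sketch of that comparison using estimatr (results vary by draw; the claim is about expectations):

library(estimatr)
lm_robust(Y ~ Z + race + class, data = df)             # biased for the ATE here
lm_lin(Y ~ Z, covariates = ~ race + class, data = df)  # recovers roughly 1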
drtmle
is an R package that uses doubly robust estimation to compute “marginal means of an outcome under fixed levels of a treatment.”
[1] 2.997512 1.983561
$drtmle
est cil ciu
E[Y(0)]-E[Y(1)] -1.014 -1.079 -0.949
$drtmle
zstat pval
H0:E[Y(0)]-E[Y(1)]=0 -30.688 0
Resource: https://muse.jhu.edu/article/883477
Challenge: Use DeclareDesign to compare the performance of drtmle and lm_lin.
Report the analysis that is implied by the design.
 | T2 = N | T2 = Y | All | Diff |
---|---|---|---|---|
T1 = N | \(\overline{y}_{00}\) (sd) | \(\overline{y}_{01}\) (sd) | \(\overline{y}_{0x}\) (sd) | \(d_2 \mid T1=0\) (sd) |
T1 = Y | \(\overline{y}_{10}\) (sd) | \(\overline{y}_{11}\) (sd) | \(\overline{y}_{1x}\) (sd) | \(d_2 \mid T1=1\) (sd) |
All | \(\overline{y}_{x0}\) (sd) | \(\overline{y}_{x1}\) (sd) | \(\overline{y}\) (sd) | \(d_2\) (sd) |
Diff | \(d_1 \mid T2=0\) (sd) | \(d_1 \mid T2=1\) (sd) | \(d_1\) (sd) | \(d_1 d_2\) (sd) |
This is instantly recognizable from the design and returns all the benefits of the factorial design including all main effects, conditional causal effects, interactions and summary outcomes. It is much clearer and more informative than a regression table.
Updating on causal quantities
stan
CausalQueries
Bayesian methods are just sets of procedures to figure out how to update beliefs in light of new information.
We begin with a prior belief about the probability that a hypothesis is true.
New data then allow us to form a posterior belief about the probability of the hypothesis.
Bayesian inference takes into account:
I draw a card from a deck and ask What are the chances it is a Jack of Spades?
Now I tell you that the card is indeed a spade. What would you guess?
What if I told you it was a heart?
What if I said it was a face card and a spade?
These answers are applications of Bayes’ rule.
In each case the answer is derived by assessing what is possible, given the new information, and then assessing how likely the outcome of interest among the states that are possible. In all the cases you calculate:
\[\text{Prob Jack of Spades | Info} = \frac{\text{Is the Jack of Spades consistent w/ Info? (1 or 0)}}{\text{Number of cards consistent w/ Info}}\]
You take a test to see whether you suffer from a disease that affects 1 in 100 people. The test is good in the following sense: if you have the disease it comes back positive with 99% probability, and if you do not it comes back negative with 99% probability.
The test result says that you have the disease. What are the chances you have it?
It is not 99%. 99% is the probability of the result given the disease, but we want the probability of the disease given the result.
The right answer is 50%, which you can think of as the share of people that have the disease among all those that test positive.
For example: if there were 10,000 people, then 100 would have the disease and 99 of these would test positive. But 9,900 would not have the disease and 99 of these would also test positive. So the people with the disease that test positive are half of the total number testing positive.
What’s the probability of being a circle given you are black?
As an equation this might be written:
\[\text{Prob You have the Disease | Pos} = \frac{\text{How many have the disease and test pos?}}{\text{How many people test pos?}}\]
Consider last an old puzzle described in Gardner (1961).
To be explicit about the puzzle, we will assume that the information that one child is a boy is given as a truthful answer to the question “is at least one of the children a boy?”
Assuming also that there is a 50% probability that a given child is a boy.
As an equation:
\[\text{Prob both boys | Not both girls} = \frac{\text{Prob both boys}}{\text{Prob not both girls}} = \frac{\text{1 in 4}}{\text{3 in 4}}\]
Can anyone describe the Monty Hall puzzle?
Formally, all of these equations are applications of Bayes’ rule which is a simple and powerful formula for deriving updated beliefs from new data.
The formula is given as:
\[\Pr(H|\mathcal{D})=\frac{\Pr(\mathcal{D}|H)\Pr(H)}{\Pr(\mathcal{D})}=\frac{\Pr(\mathcal{D}|H)\Pr(H)}{\sum_{H'}\Pr(\mathcal{D}|H')\Pr(H')}\]
For continuous distributions and parameter vector \(\theta\):
\[p(\theta|\mathcal{D})=\frac{p(\mathcal{D}|\theta)p(\theta)}{\int_{\theta'}p(\mathcal{D}|\theta')p(\theta')d\theta'}\]
Consider the share of people in a population that voted. This is a quantity between 0 and 1.
Here the parameter of interest is a share. The Beta and Dirichlet distributions are particularly useful for representing beliefs on shares.
An attractive feature is that if one has a prior Beta(\(\alpha\), \(\beta\)) over the probability of some event, and one then observes a positive case, the Bayesian posterior distribution is also a Beta, with parameters \(\alpha+1, \beta\). Thus if people start with uniform priors and build up knowledge by observing outcomes, their posterior beliefs should be Beta distributions.
Here is a set of such distributions.
The Dirichlet distributions are generalizations of the Beta to the situation in which there are beliefs not just over a proportion, or a probability, but over collections of probabilities.
If four outcomes are possible and each is likely to occur with probability \(p_k\), \(k=1,2,3,4\) then beliefs are distributions over a three dimensional unit simplex.
The distribution has as many parameters as there are outcomes and these are traditionally recorded in a vector, \(\alpha\).
As with the Beta distribution, an uninformative prior (the Jeffreys prior) has \(\alpha\) parameters of \((.5,.5,.5,\dots)\) and a uniform (“flat”) distribution has \(\alpha = (1,1,1,\dots)\).
The Dirichlet updates in a simple way. If you have a Dirichlet prior with parameter \(\alpha = (\alpha_1, \alpha_2, \dots)\) and you observe outcome \(1\), for example, then the posterior distribution is also Dirichlet, with parameter vector \(\alpha' = (\alpha_1+1, \alpha_2,\dots)\).
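Both conjugate updates in a minimal R sketch (the observed data here are made up):

# Beta: uniform prior, then 20 successes in 100 trials
a <- 1; b <- 1
k <- 20; n <- 100
c(a + k, b + n - k)        # posterior is Beta(21, 81)

# Dirichlet: flat prior over four outcomes, then one observation of outcome 2
alpha <- c(1, 1, 1, 1)
alpha[2] <- alpha[2] + 1
alpha                      # posterior parameter vector (1, 2, 1, 1)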
Bayes on a Grid
Now with a strongish prior on 50%:
fabricate(N = 100,
parameters = seq(.01, .99, length = N),
prior = dbeta(parameters, 20, 20),
prior = prior/sum(prior),
likelihood = dbinom(20, 100, parameters),
posterior = likelihood*prior/sum(likelihood*prior)) |>
ggplot(aes(parameters, posterior)) + geom_line() + theme_bw() +
geom_line(aes(parameters, prior), color = "red")
This approach is sound, but if you are dealing with many continuous parameters, the full parameter space can get very large and so the number of calculations you do increases rapidly.
Luckily other approaches have been developed.
In this short lecture we:
The good news: There is lots of help online. Start with: https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started
We will jump straight into things and work through a session.
To implement a stan model you write the code in a text editor and save it as a text file (you can also write it directly in your script). You can then bring the file into R or call the file directly.
I saved a simple model called one_var.stan locally. Here it is:
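A sketch of what such a model plausibly looks like (the names N, X, Y, a, b, and sigma come from the discussion below; the rest is an assumption):

one_var <- '
data {
  int<lower=0> N;      // number of observations
  vector[N] X;         // predictor
  vector[N] Y;         // outcome
}
parameters {
  real a;              // intercept
  real b;              // slope
  real<lower=0> sigma; // sd of outcomes
}
model {
  Y ~ normal(a + b * X, sigma);  // likelihood; priors left flat
}
'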
The key features here are (read from the bottom up!):
The model block says that Y is drawn from a normal distribution with mean a + bX and standard deviation sigma.
The parameters block declares the parameters to be estimated: a, b, and sigma.
The data block declares the incoming N and X, Y data.
We feed data to the model in the form of a list. The idea of a list is that the data can include all sorts of objects, not just a single dataset.
When you run the model you get a lot of useful output on the estimation and the posterior distribution. Here though are the key results:
mean | sd | Rhat | |
---|---|---|---|
a | -0.179 | 0.214 | 1 |
b | 0.738 | 0.183 | 1 |
sigma | 0.950 | 0.175 | 1 |
These look good.
The Rhat at the end tells you about convergence. You want this very close to 1.
The model output contains the full posterior distribution.
With the full posterior you can look at marginal posterior distributions over arbitrary transformations of parameters.
Let’s go back to the code.
There we had three key blocks: data, parameters, and model.
More generally the blocks you can specify are:
data
(define the vars that will be coming in from the data list)transformed data
(can be used for preprocessing)parameters
(required: defines the parameters to be estimated)transformed parameters
(transformations of parameters useful for computational reasons and sometimes for clarity)model
(give priors and likelihood)generated quantities
(can be used for post processing)The parameters block declared the set of parameters that we wanted to estimate. In the simple model these were a
, b
, and sigma
. Note in the declaration we also:
Instead of defining a and b as two separate real parameters, we could have defined a vector of coefficients (something like vector[2] coef;) and then referenced coef[1] and coef[2] in the model block.
Or we could have imposed the constraint that the slope coefficient is positive by declaring it with a lower bound (something like real<lower=0> b;).
In the model block we give the likelihood.
But we can also give priors (if we want to). If priors are not provided, flat (possibly improper) priors are assumed.
In our case, for example, we could have provided something like this:
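(A sketch of the relevant model block; only the center of the prior at -10 is given in the text, so the scale here is an assumption.)

model {
  b ~ normal(-10, 1);             // prior on b centered on -10; scale assumed
  Y ~ normal(a + b * X, sigma);   // likelihood as before
}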
This suggests that we start off believing b is centered on -10. That will surely matter for our conclusions. Let’s try it:
This time I will write the model right in the editor:
mean | sd | Rhat | |
---|---|---|---|
a | -1.338 | 2.444 | 1.003 |
b | -7.875 | 1.172 | 1.003 |
sigma | 10.988 | 2.499 | 1.001 |
Note that we get a much lower estimate for b
with the same data.
Now imagine a setting in which there are 10 villages, each with 10 respondents. Half in each village are assigned to treatment \(X=1\), and half to control \(X=0\).
Say that there is possibly a village specific average outcome: \(Y_v = a_v + b_vX\) where \(a_v\) and \(b_v\) are each drawn from some distribution with a mean and variance of interest. The individual outcomes are draws from a village level distribution centered on the village specific average outcome.
This all implies a multilevel structure.
Here is a model for this
ml_model <- '
data {
  vector[100] Y;                 // outcomes
  int<lower=0,upper=1> X[100];   // treatment
  int village[100];              // village id for each respondent
}
parameters {
  vector<lower=0>[3] sigma;      // sds of intercepts, slopes, and outcomes
  vector[10] a;                  // village intercepts
  vector[10] b;                  // village treatment effects
  real mu_a;                     // average intercept
  real mu_b;                     // average treatment effect
}
transformed parameters {
  vector[100] Y_vx;              // village-specific expected outcomes
  for (i in 1:100) Y_vx[i] = a[village[i]] + b[village[i]] * X[i];
}
model {
  a ~ normal(mu_a, sigma[1]);    // intercepts drawn from a common distribution
  b ~ normal(mu_b, sigma[2]);    // effects drawn from a common distribution
  Y ~ normal(Y_vx, sigma[3]);    // individual outcomes
}
'
Here is a slightly more general version: https://github.com/stan-dev/example-models/blob/master/ARM/Ch.17/17.1_radon_vary_inter_slope.stan
Let’s create some multilevel data. Looking at this, can you tell what the typical village level effect is? How much heterogeneity is there?
mean | sd | Rhat | |
---|---|---|---|
mu_a | -0.29 | 0.25 | 1.00 |
mu_b | 1.89 | 0.26 | 1.00 |
sigma[1] | 0.61 | 0.25 | 1.00 |
sigma[2] | 0.47 | 0.28 | 1.01 |
sigma[3] | 0.98 | 0.08 | 1.00 |
Parameters drawn from theory
Say that a set of people in a population are playing sequential prisoner’s dilemmas.
In such games selfish behavior might suggest defections by everyone everywhere. But of course people often cooperate. Why might this be?
We will capture some of this intuition with a behavioral type model in which
In all, this means that a player with propensity \(r_i>.5\) will cooperate with probability \(1-r_i\); a player with propensity \(r_i<.5\) will cooperate with probability \(1\).
Interestingly the not-very-rational people sometimes cooperate strategically but the really rational people never cooperate strategically because they think it won’t work.
What then are the probabilities of each of the possible outcomes?
where \(p\) is the density function on \(r_i\) given \(\theta\)
Given the assumption on \(p\)
We have data on the actions of the first movers and the second movers and are interested in the distribution of the \(r_i\)s.
Let’s collapse the data into a simple list of the number of each type of game outcome.
And say we start off with a uniform prior over \(\theta\).
What should we conclude about \(\theta\)?
Here’s a model:
Note we define event weights as transformed parameters on a simplex. We also constrain \(\theta\) to be \(>.5\). Obviously we are relying a lot on our model.
What is the probability of observing strategic first round cooperation?
A player with rationality \(r_i\) will cooperate strategically with probability \(r_i\) if \(r_i<.5\) and 0 otherwise. Thus we are interested in \(\int_0^{.5}r_i/\theta dr_i = .125/\theta\)
CausalQueries
CausalQueries brings these elements together by allowing users to declare, update, and query causal models:
CausalQueries figures out all principal strata and places a prior on these
CausalQueries writes a stan model and updates on all parameters
CausalQueries figures out which parameters correspond to a given causal query
figures out which parameters correspond to a given causal queryevent | strategy | count |
---|---|---|
Z0X0Y0 | ZXY | 158 |
Z1X0Y0 | ZXY | 52 |
Z0X1Y0 | ZXY | 0 |
Z1X1Y0 | ZXY | 23 |
Z0X0Y1 | ZXY | 14 |
Z1X0Y1 | ZXY | 12 |
Z0X1Y1 | ZXY | 0 |
Z1X1Y1 | ZXY | 78 |
Note that in compact form we simply record the number of units (“count”) that display each possible pattern of outcomes on the three variables (“event”).
query | given | mean | sd | cred.low.2.5% | cred.high.97.5% |
---|---|---|---|---|---|
Y[X=1] - Y[X=0] | - | 0.55 | 0.10 | 0.37 | 0.73 |
Y[X=1] - Y[X=0] | X==0 & Y==0 | 0.64 | 0.15 | 0.37 | 0.89 |
Y[X=1] - Y[X=0] | X[Z=1] > X[Z=0] | 0.70 | 0.05 | 0.59 | 0.80 |
Focus on randomization schemes
DeclareDesign
Experiments are investigations in which an intervention, in all its essential elements, is under the control of the investigator. (Cox & Reid)
Two major types of control:
In general you might want to set things up so that your randomization is replicable. You can do this by setting a seed:
and again:
Even better is to set things up so that you can reproduce lots of possible draws, letting you check the propensities for each unit.
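One way to build up such a set of draws with randomizr (the seed and the number of draws here are arbitrary choices):

library(randomizr)
set.seed(343)
# 1000 draws of a complete random assignment of 5 of 10 units to treatment
P <- replicate(1000, complete_ra(N = 10, m = 5))
round(rowMeans(P), 3)   # per-unit assignment propensities, all near 0.5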
[1] 0.519 0.496 0.510 0.491 0.524 0.514 0.535 0.497 0.470 0.506
Here the \(P\) matrix gives 1000 possible ways of allocating 5 of 10 units to treatment. We can then confirm that the average propensity is 0.5.
People often wonder: did randomization “work”? Common practice is to implement a set of \(t\)-tests to check balance. This makes little sense.
If you doubt whether randomization was implemented properly, do an \(F\) test. If you worry about variance, specify controls in advance as a function of their relation with outcomes (more on this later). If you worry about conditional bias, then look at substantive differences between groups, not \(t\)-tests.
If you want realizations to have particular properties: build it into the scheme in advance.
Note: clusters are part of your design, not part of the world.
Often used if intervention has to function at the cluster level or if outcome defined at the cluster level.
Disadvantage: loss of statistical power
However: perfectly possible to assign some treatments at cluster level and then other treatments at the individual level
Principle: (unless you are worried about spillovers) generally make clusters as small as possible
Principle: Surprisingly, variability in cluster size makes analysis harder. Try to control assignment so that cluster sizes are similar in treatment and in control.
Be clear about whether you believe effects are operating at the cluster level or at the individual level. This matters for power calculations.
Be clear about whether spillover effects operate only within clusters or also across them. If within only you might be able to interpret treatment as the effect of being in a treated cluster…
Surprisingly, if clusters are of different sizes the difference in means estimator is not unbiased, even if all units are assigned to treatment with the same probability.
Here’s the intuition. Say there are two clusters, each with homogeneous treatment effects:
Cluster | Size | Y0 | Y1 |
---|---|---|---|
1 | 1000000 | 0 | 1 |
2 | 1 | 0 | 0 |
Then: What is the true average treatment effect? What do you expect to estimate from cluster random assignment?
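A quick check using the numbers in the table:

# the two equally likely cluster assignments
est_1 <- 1 - 0   # cluster 1 treated: treated mean 1, control mean 0
est_2 <- 0 - 0   # cluster 2 treated: treated mean 0, control mean 0
mean(c(est_1, est_2))   # expected estimate: 0.5
1000000 / 1000001       # true ATE: almost exactly 1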
The solution is to block by cluster size. For more see: http://gking.harvard.edu/files/cluster.pdf
There are more or less efficient ways to randomize.
Consider a case with four units and two strata. There are 6 possible assignments of 2 units to treatment:
ID | X | Y(0) | Y(1) | R1 | R2 | R3 | R4 | R5 | R6 |
---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 |
2 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
3 | 2 | 1 | 2 | 0 | 1 | 0 | 1 | 0 | 1 |
4 | 2 | 1 | 2 | 0 | 0 | 1 | 0 | 1 | 1 |
\(\widehat{\tau}\): | | | | 0 | 1 | 1 | 1 | 1 | 2 |
Even with a constant treatment effect and everything uniform within blocks, there is variance in the estimation of \(\widehat{\tau}\). This can be eliminated by excluding R1 and R6.
Simple blocking in R (5 pairs):
1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|
1 | 1 | 0 | 1 | 1 |
0 | 0 | 1 | 0 | 0 |
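A sketch of one way to generate such an assignment with randomizr (the pair labels are made up):

library(randomizr)
pairs <- rep(1:5, each = 2)          # five pairs of two units
Z <- block_ra(blocks = pairs)        # one treated unit per pair
matrix(Z, nrow = 2, dimnames = list(NULL, paste0("pair_", 1:5)))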
DeclareDesign can help with this.
Load up:
\(T2=0\) | \(T2=1\) | |
---|---|---|
T1 = 0 | \(50\%\) | \(0\%\) |
T1 = 1 | \(50\%\) | \(0\%\) |
Spread out:
\(T2=0\) | \(T2=1\) | |
---|---|---|
T1 = 0 | \(25\%\) | \(25\%\) |
T1 = 1 | \(25\%\) | \(25\%\) |
Three arm it?:
\(T2=0\) | \(T2=1\) | |
---|---|---|
T1 = 0 | \(33.3\%\) | \(33.3\%\) |
T1 = 1 | \(33.3\%\) | \(0\%\) |
Bunch it?:
\(T2=0\) | \(T2=1\) | |
---|---|---|
T1 = 0 | \(40\%\) | \(20\%\) |
T1 = 1 | \(20\%\) | \(20\%\) |
This speaks to “spreading out.” Note: the “bunching” example may not pay off and has the undesirable feature of introducing a correlation between treatment assignments.
Two ways to do factorial assignments in DeclareDesign:
In practice if you have a lot of treatments it can be hard to do full factorial designs – there may be too many combinations.
In such cases people use fractional factorial designs, like the one below (5 treatments but only 8 units!)
Variation | T1 | T2 | T3 | T4 | T5 |
---|---|---|---|---|---|
1 | 0 | 0 | 0 | 1 | 1 |
2 | 0 | 0 | 1 | 0 | 0 |
3 | 0 | 1 | 0 | 0 | 1 |
4 | 0 | 1 | 1 | 1 | 0 |
5 | 1 | 0 | 0 | 1 | 0 |
6 | 1 | 0 | 1 | 0 | 1 |
7 | 1 | 1 | 0 | 0 | 0 |
8 | 1 | 1 | 1 | 1 | 1 |
Then randomly assign units to rows. Note columns might also be blocking covariates.
In R, look at library(survey)
Muralidharan, Romero, and Wüthrich (2023) write:
Factorial designs are widely used to study multiple treatments in one experiment. While t-tests using a fully-saturated “long” model provide valid inferences, “short” model t-tests (that ignore interactions) yield higher power if interactions are zero, but incorrect inferences otherwise. Of 27 factorial experiments published in top-5 journals (2007–2017), 19 use the short model. After including interactions, over half of their results lose significance. […]
Anything to be done on randomization to address external validity concerns?
DeclareDesign
A design with hierarchical data and different assignment schemes.
design <-
declare_model(
school = add_level(N = 16,
u_school = rnorm(N, mean = 0)),
classroom = add_level(N = 4,
u_classroom = rnorm(N, mean = 0)),
student = add_level(N = 20,
u_student = rnorm(N, mean = 0))
) +
declare_model(
potential_outcomes(Y ~ .1*Z + u_classroom + u_student + u_school)
) +
declare_assignment(Z = simple_ra(N)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
declare_estimator(Y ~ Z, .method = difference_in_means)
Here are the first couple of rows and columns of the resulting data frame.
school | u_school | classroom | u_classroom | student | u_student | Y_Z_0 | Y_Z_1 | Z | Y |
---|---|---|---|---|---|---|---|---|---|
01 | 1.35 | 01 | 1.26 | 0001 | -1.28 | 1.33 | 1.43 | 0 | 1.33 |
01 | 1.35 | 01 | 1.26 | 0002 | 0.79 | 3.40 | 3.50 | 1 | 3.50 |
01 | 1.35 | 01 | 1.26 | 0003 | -0.12 | 2.49 | 2.59 | 0 | 2.49 |
01 | 1.35 | 01 | 1.26 | 0004 | -0.65 | 1.96 | 2.06 | 1 | 2.06 |
01 | 1.35 | 01 | 1.26 | 0005 | 0.36 | 2.97 | 3.07 | 1 | 3.07 |
01 | 1.35 | 01 | 1.26 | 0006 | -0.96 | 1.65 | 1.75 | 0 | 1.65 |
Here is the distribution between treatment and control:
We can draw a new set of data and look at the number of subjects in the treatment and control groups.
But what if all students in a given class have to be assigned the same treatment?
assignment_clustered <-
declare_assignment(Z = cluster_ra(clusters = classroom))
estimator_clustered <-
declare_estimator(Y ~ Z, clusters = classroom,
.method = difference_in_means)
design_clustered <-
design |>
replace_step("assignment", assignment_clustered) |>
replace_step("estimator", estimator_clustered)
assignment_clustered_blocked <-
declare_assignment(Z = block_and_cluster_ra(blocks = school,
clusters = classroom))
estimator_clustered_blocked <-
declare_estimator(Y ~ Z, blocks = school, clusters = classroom,
.method = difference_in_means)
design_clustered_blocked <-
design |>
replace_step("assignment", assignment_clustered_blocked) |>
replace_step("estimator", estimator_clustered_blocked)
Design | Power | Coverage |
---|---|---|
simple | 0.16 | 0.95 |
(0.01) | (0.01) | |
complete | 0.20 | 0.96 |
(0.01) | (0.01) | |
blocked | 0.42 | 0.95 |
(0.01) | (0.01) | |
clustered | 0.06 | 0.96 |
(0.01) | (0.01) | |
clustered_blocked | 0.08 | 0.96 |
(0.01) | (0.01) |
In many designs you seek to assign an integer number of subjects to treatment from some set.
Sometimes however your assignment targets are not integers.
Example:
Two strategies:
Can also be used to set targets
# remotes::install_github("macartan/probra")
library(probra)
set.seed(1)
fabricate(N = 4, size = c(47, 53, 87, 25), n_treated = prob_ra(.5*size)) |>
janitor::adorn_totals("row") |>
kable(caption = "Setting targets to get 50% targets with minimal variance")
ID | size | n_treated |
---|---|---|
1 | 47 | 23 |
2 | 53 | 27 |
3 | 87 | 43 |
4 | 25 | 13 |
Total | 212 | 106 |
Can also be used to do complete assignment with heterogeneous propensities.
Indirect control
Indirect assignments are generally generated by applying a direct assignment and then figuring out an implied indirect assignment.
Looks better: but there are trade-offs between the direct and indirect distributions.
Figuring out the optimal procedure requires full diagnosis.
A focus on power
In the classical approach to testing a hypothesis we ask:
How likely are we to see data like this if indeed the hypothesis is true?
How unlikely is “not very likely”?
When we test a hypothesis we decide first on what sort of evidence we need to see in order to decide that the hypothesis is not reliable.
Othello has a hypothesis that Desdemona is innocent.
Iago confronts him with evidence:
Note that Othello is focused on the probability of the events if she were innocent but not the probability of the events if Iago were trying to trick him.
He is not assessing his belief in whether she is faithful, but rather how likely the data would be if she were faithful.
So:
Illustrating \(p\) values via “randomization inference”
Say you randomized assignment to treatment and your data looked like this.
Unit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
Treatment | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Health score | 4 | 2 | 3 | 1 | 2 | 3 | 4 | 8 | 7 | 6 |
Then:
Power is just the probability of getting a significant result, that is, of rejecting a hypothesis.
Simple enough, but it presupposes:
I want to test the hypothesis that a six never comes up on this dice.
Here’s my test:
What is the power of this test?
Power sometimes seems more complicated because hypothesis rejection involves a calculated probability and so you need the probability of a probability.
I want to test the hypothesis that this dice is fair.
Here’s my test:
Now:
For this we need to figure a rule for rejection. This is based on identifying events that should be unlikely under the hypothesis.
Here is how many 6’s I would expect if the dice is fair:
I can figure out from this that 143 or fewer is really very few and 190 or more is really very many:
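A sketch of where those cut-offs come from, assuming the test is based on 1,000 rolls (the number of rolls is an assumption here):

n_rolls <- 1000
qbinom(c(.025, .975), n_rolls, 1/6)   # roughly 144 and 190
pbinom(143, n_rolls, 1/6)             # P(143 or fewer sixes | fair): ~.025
1 - pbinom(189, n_rolls, 1/6)         # P(190 or more sixes | fair): ~.025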
Now we need to stipulate some belief about how the world really works—this is not the null hypothesis that we plan to reject, but something that we actually take to be true.
For instance: we think that in fact sixes appear 20% of the time.
Now what’s the probability of seeing at least 190 sixes?
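Still assuming 1,000 rolls:

1 - pbinom(189, 1000, .2)   # chance of at least 190 sixes if p = .2: about .8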
So given I think 6s appear 20% of the time, I think it likely I’ll see at least 190 sixes and reject the hypothesis of a fair dice.
Simplest intuition on power:
What is the probability of getting a significant estimate given the sampling distribution is centered on \(b\) and the standard error is 1?
Add these together: probability of getting an estimate above 1.96 or below -1.96.
This is essentially what is done by pwrss::power.z.test, and it produces nice graphs!
See:
Substantively: if in expectation an estimate will be just significant, then your power is 50%
power <- function(b, alpha = 0.05, critical = qnorm(1-alpha/2))
1 - pnorm(critical - b) + pnorm(-critical - b)
power(1.96)
[1] 0.5000586
Intuition:
Of course the standard error will depend on the number of units and the variance of outcomes in treatment and control.
Say \(N\) subjects are divided into two groups and potential outcomes have standard deviation \(\sigma\) in treatment and control. Then the variance of the treatment effect estimate is (approximately / conservatively):
\[Var(\tau)=\frac{\sigma^2}{N/2} + \frac{\sigma^2}{N/2} = 4\frac{\sigma^2}{N}\]
and so the (conservative / approx) standard error is:
\[\sigma_\tau=\frac{2\sigma}{\sqrt{N}}\]
Note here we seem to be using the actual standard error but of course the tests we actually run will use an estimate of the standard error…
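Putting the pieces together, a sketch of an approximate power function for a two-arm design (this uses a normal approximation, not the exact \(t\) calculation below):

approx_power <- function(tau, N, sigma = 1, alpha = 0.05) {
  se <- 2 * sigma / sqrt(N)            # conservative standard error from above
  critical <- qnorm(1 - alpha / 2)
  1 - pnorm(critical - tau / se) + pnorm(-critical - tau / se)
}
approx_power(tau = .1, N = 100)        # ~0.08, close to the pwrss output below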
This can be done, e.g., with pwrss like this:
pwrss::pwrss.t.2means(mu1 = .2, mu2 = .1, sd1 = 1, sd2 = 1,
n2 = 50, alpha = 0.05,
alternative = "not equal")
Difference between Two means
(Independent Samples t Test)
H0: mu1 = mu2
HA: mu1 != mu2
------------------------------
Statistical power = 0.079
n1 = 50
n2 = 50
------------------------------
Alternative = "not equal"
Degrees of freedom = 98
Non-centrality parameter = 0.5
Type I error rate = 0.05
Type II error rate = 0.921
Power calculations for more complex designs mostly involve figuring out the right standard error.
Consider a cluster randomized trial, with each unit having a cluster level shock \(\epsilon_k\) and an individual shock \(\nu_i\). Say these have variances \(\sigma^2_k\), \(\sigma^2_i\).
The standard error is:
\[\sqrt{\frac{4\sigma^2_k}{K} + \frac{4\sigma^2_i}{nK}}\]
Define \(\rho = \frac{\sigma^2_k}{\sigma^2_k + \sigma^2_i}\)
\[\sqrt{\rho \frac{4\sigma^2}{K} + (1- \rho)\frac{4\sigma^2}{nK}}\]
\[\sqrt{((n - 1)\rho + 1)\frac{4\sigma^2}{nK}}\]
where \(\sigma^2 = \sigma^2_k + \sigma^2_i\).
Plug in these standard errors and proceed as before
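For example, a sketch of the clustered standard error with hypothetical values:

cluster_se <- function(K, n, rho, sigma = 1)
  sqrt(((n - 1) * rho + 1) * 4 * sigma^2 / (n * K))
cluster_se(K = 10, n = 10, rho = .5)   # K clusters of n units each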
Simulation-based power calculation is arbitrarily flexible:
sim_ID | estimate | p.value |
---|---|---|
1 | 0.81 | 0.00 |
2 | 0.40 | 0.04 |
3 | 0.88 | 0.00 |
4 | 0.72 | 0.00 |
5 | 0.38 | 0.05 |
6 | 0.44 | 0.02 |
Power is obviously related to the distribution of estimates you might get.
A valid \(p\)-value satisfies \(\Pr(p \leq x) \leq x\) for every \(x \in [0,1]\) (under the null).
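A sketch of what validity means in practice: under a true null, rejection at the 5% level should happen no more than 5% of the time (the design and estimator here are made up):

p_values <- replicate(2000, {
  df <- data.frame(Z = randomizr::complete_ra(100), Y = rnorm(100))
  estimatr::lm_robust(Y ~ Z, data = df)$p.value["Z"]
})
mean(p_values <= 0.05)   # should be about (and no more than) 0.05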
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
0.50 | 0.00 | 0.20 | 0.20 | 0.70 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) |
b | Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|---|
0 | -0.00 | -0.00 | 0.20 | 0.20 | 0.05 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |
0.25 | 0.25 | -0.00 | 0.20 | 0.20 | 0.23 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |
0.5 | 0.50 | 0.00 | 0.20 | 0.20 | 0.70 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) | |
1 | 1.00 | 0.00 | 0.20 | 0.20 | 1.00 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.00) |
coming up:
We often focus on sample sizes
But
Power also depends on
Say we have access to a “pre” measure of the outcome Y_now; call it Y_base. Y_base is informative about potential outcomes. We are considering using Y_now - Y_base as the outcome instead of Y_now.
N <- 100
rho <- .5
design <-
declare_model(N,
Y_base = rnorm(N),
Y_Z_0 = 1 + correlate(rnorm, given = Y_base, rho = rho),
Y_Z_1 = correlate(rnorm, given = Y_base, rho = rho),
Z = complete_ra(N),
Y_now = Z*Y_Z_1 + (1-Z)*Y_Z_0,
Y_change = Y_now - Y_base) +
declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
declare_estimator(Y_now ~ Z, label = "level") +
declare_estimator(Y_change ~ Z, label = "change")+
declare_estimator(Y_now ~ Z + Y_base, label = "RHS")
Punchline:
You can see from the null design that power is great but bias is terrible and coverage is way off.
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
1.59 | 1.59 | 0.12 | 1.60 | 1.00 | 0.00 |
(0.01) | (0.01) | (0.00) | (0.01) | (0.00) | (0.00) |
Power without unbiasedness corrupts, absolutely
another_bad_design <-
declare_model(
N = 100,
female = rep(0:1, N/2),
U = rnorm(N),
potential_outcomes(Y ~ female * Z + U)) +
declare_assignment(
Z = block_ra(blocks = female, block_prob = c(.1, .5)),
Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ate = mean(Y_Z_1 - Y_Z_0)) +
declare_estimator(Y ~ Z + female, inquiry = "ate",
.method = lm_robust)
Here too power looks fine, but the estimate is biased and coverage is off.
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
0.76 | 0.26 | 0.24 | 0.35 | 0.84 | 0.85 |
(0.01) | (0.01) | (0.01) | (0.01) | (0.01) | (0.02) |
clustered_design <-
declare_model(
cluster = add_level(N = 10, cluster_shock = rnorm(N)),
individual = add_level(
N = 100,
Y_Z_0 = rnorm(N) + cluster_shock,
Y_Z_1 = rnorm(N) + cluster_shock)) +
declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
declare_assignment(Z = cluster_ra(clusters = cluster)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_estimator(Y ~ Z, inquiry = "ATE")
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
-0.00 | -0.00 | 0.64 | 0.64 | 0.79 | 0.20 |
(0.01) | (0.01) | (0.01) | (0.01) | (0.01) | (0.01) |
What alerts you to a problem?
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
0.00 | -0.00 | 0.66 | 0.65 | 0.06 | 0.94 |
(0.02) | (0.02) | (0.01) | (0.01) | (0.01) | (0.01) |
design_uncertain <-
declare_model(N = 1000, b = 1+rnorm(1), Y_Z_1 = rnorm(N), Y_Z_2 = rnorm(N) + b, Y_Z_3 = rnorm(N) + b) +
declare_assignment(Z = complete_ra(N = N, num_arms = 3, conditions = 1:3)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ate = mean(b)) +
declare_estimator(Y ~ factor(Z), term = TRUE)
draw_estimands(design_uncertain)
inquiry estimand
1 ate -0.3967765
inquiry estimand
1 ate 0.7887188
Say I run two tests and want to correct for multiple comparisons.
Two approaches. First, by hand:
b = .2
design_mc <-
declare_model(N = 1000, Y_Z_1 = rnorm(N), Y_Z_2 = rnorm(N) + b, Y_Z_3 = rnorm(N) + b) +
declare_assignment(Z = complete_ra(N = N, num_arms = 3, conditions = 1:3)) +
declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
declare_inquiry(ate = b) +
declare_estimator(Y ~ factor(Z), term = TRUE)
design_mc |>
simulate_designs(sims = 1000) |>
filter(term != "(Intercept)") |>
group_by(sim_ID) |>
mutate(p_bonferroni = p.adjust(p = p.value, method = "bonferroni"),
p_holm = p.adjust(p = p.value, method = "holm"),
p_fdr = p.adjust(p = p.value, method = "fdr")) |>
ungroup() |>
summarize(
"Power using naive p-values" = mean(p.value <= 0.05),
"Power using Bonferroni correction" = mean(p_bonferroni <= 0.05),
"Power using Holm correction" = mean(p_holm <= 0.05),
"Power using FDR correction" = mean(p_fdr <= 0.05)
)
Power using naive p-values | Power using Bonferroni correction | Power using Holm correction | Power using FDR correction |
---|---|---|---|
0.7374 | 0.6318 | 0.6886 | 0.7032 |
The alternative approach (generally better!) is to design with a custom estimator that includes your corrections.
my_estimator <- function(data)
lm_robust(Y ~ factor(Z), data = data) |>
tidy() |>
filter(term != "(Intercept)") |>
mutate(p.naive = p.value,
p.value = p.adjust(p = p.naive, method = "bonferroni"))
design_mc_2 <- design_mc |>
replace_step(5, declare_estimator(handler = label_estimator(my_estimator)))
run_design(design_mc_2) |>
select(term, estimate, p.value, p.naive) |> kable()
term | estimate | p.value | p.naive |
---|---|---|---|
factor(Z)2 | 0.2508003 | 0.0021145 | 0.0010573 |
factor(Z)3 | 0.2383963 | 0.0052469 | 0.0026235 |
Let’s try the same thing for a null model (using redesign(design_mc_2, b = 0)).
…and power:
Mean Estimate | Bias | SD Estimate | RMSE | Power | Coverage |
---|---|---|---|---|---|
0.00 | 0.00 | 0.08 | 0.08 | 0.02 | 0.95 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.01) |
-0.00 | -0.00 | 0.08 | 0.08 | 0.02 | 0.96 |
(0.00) | (0.00) | (0.00) | (0.00) | (0.00) | (0.01) |
bothered?
Introduction to observational strategies and more advanced topics
Sometimes you give a medicine but only a nonrandom sample of people actually try to use it. Can you still estimate the medicine’s effect?
X=0 | X=1 | |
---|---|---|
Z=0 | \(\overline{y}_{00}\) (\(n_{00}\)) | \(\overline{y}_{01}\) (\(n_{01}\)) |
Z=1 | \(\overline{y}_{10}\) (\(n_{10}\)) | \(\overline{y}_{11}\) (\(n_{11}\)) |
Say that people are one of 3 types: always-takers (\(a\): take the treatment whatever their assignment), never-takers (\(n\): never take it), and compliers (\(c\): take it if and only if assigned to it).
Sometimes you give a medicine but only a nonrandom sample of people actually try to use it. Can you still estimate the medicine’s effect?
X=0 | X=1 | |
---|---|---|
Z=0 | \(\overline{y}_{00}\) (\(n_{00}\)) | \(\overline{y}_{01}\) (\(n_{01}\)) |
Z=1 | \(\overline{y}_{10}\) (\(n_{10}\)) | \(\overline{y}_{11}\) (\(n_{11}\)) |
We can figure out something about the types:
\(X=0\) | \(X=1\) | |
---|---|---|
\(Z=0\) | \(\frac{\frac{1}{2}n_c}{\frac{1}{2}n_c + \frac{1}{2}n_n} \overline{y}^0_{c}+\frac{\frac{1}{2}n_n}{\frac{1}{2}n_c + \frac{1}{2}n_n} \overline{y}_{n}\) | \(\overline{y}_{a}\) |
\(Z=1\) | \(\overline{y}_{n}\) | \(\frac{\frac{1}{2}n_c}{\frac{1}{2}n_c + \frac{1}{2}n_a} \overline{y}^1_{c}+\frac{\frac{1}{2}n_a}{\frac{1}{2}n_c + \frac{1}{2}n_a} \overline{y}_{a}\) |
You give a medicine to 50% of people, but only a nonrandom sample of them actually try to use it. Can you still estimate the medicine’s effect?
\(X=0\) | \(X=1\) | |
---|---|---|
\(Z=0\) | \(\frac{n_c}{n_c + n_n} \overline{y}^0_{c}+\frac{n_n}{n_c + n_n} \overline{y}_n\) | \(\overline{y}_{a}\) |
(n) | (\(\frac{1}{2}(n_c + n_n)\)) | (\(\frac{1}{2}n_a\)) |
\(Z=1\) | \(\overline{y}_{n}\) | \(\frac{n_c}{n_c + n_a} \overline{y}^1_{c}+\frac{n_a}{n_c + n_a} \overline{y}_{a}\) |
(n) | (\(\frac{1}{2}n_n\)) | (\(\frac{1}{2}(n_a+n_c)\)) |
Key insight: the contributions of the \(a\)s and \(n\)s are the same in the \(Z=0\) and \(Z=1\) groups so if you difference you are left with the changes in the contributions of the \(c\)s.
Average in \(Z=0\) group: \(\frac{{n_c} \overline{y}^0_{c}+ \left(n_{n}\overline{y}_{n} +{n_a} \overline{y}_a\right)}{n_a+n_c+n_n}\)
Average in \(Z=1\) group: \(\frac{{n_c} \overline{y}^1_{c} + \left(n_{n}\overline{y}_{n} +{n_a} \overline{y}_a \right)}{n_a+n_c+n_n}\)
So, the difference is the ITT: \(({\overline{y}^1_c-\overline{y}^0_c})\frac{n_c}{n}\)
Last step:
\[ITT = ({\overline{y}^1_c-\overline{y}^0_c})\frac{n_c}{n}\]
\[\leftrightarrow\]
\[LATE = \frac{ITT}{\frac{n_c}{n}}= \frac{\text{Intent to treat effect}}{\text{First stage effect}}\]
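A sketch of the full chain on simulated data (the type shares, the complier effect of 1, and all variable names here are assumptions):

library(estimatr)
df <- fabricatr::fabricate(
  N = 1000,
  type = sample(c("a", "n", "c"), N, replace = TRUE),      # compliance types
  Z = randomizr::complete_ra(N),                           # random encouragement
  X = ifelse(type == "a", 1, ifelse(type == "n", 0, Z)),   # take-up by type
  Y = rnorm(N) + X * (type == "c"))                        # effect 1 for compliers
ITT <- coef(lm_robust(Y ~ Z, data = df))["Z"]   # intent-to-treat effect
FS  <- coef(lm_robust(X ~ Z, data = df))["Z"]   # first stage: complier share
ITT / FS                                        # LATE, roughly 1
# equivalently: iv_robust(Y ~ X | Z, data = df)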
With and without an imposition of monotonicity
data("lipids_data")
models <-
list(unrestricted = make_model("Z -> X -> Y; X <-> Y"),
restricted = make_model("Z -> X -> Y; X <-> Y") |>
set_restrictions("X[Z=1] < X[Z=0]")) |>
lapply(update_model, data = lipids_data, refresh = 0)
models |>
query_model(query = list(CATE = "Y[X=1] - Y[X=0]",
Nonmonotonic = "X[Z=1] < X[Z=0]"),
given = list("X[Z=1] > X[Z=0]", TRUE),
using = "posteriors")
With and without an imposition of monotonicity:
model | query | mean | sd |
---|---|---|---|
unrestricted | CATE | 0.70 | 0.05 |
restricted | CATE | 0.71 | 0.05 |
unrestricted | Nonmonotonic | 0.01 | 0.01 |
restricted | Nonmonotonic | 0.00 | 0.00 |
In one case we assume monotonicity; in the other we update on it (easy in this case because of the empirically verifiable nature of one-sided non-compliance).
Key idea: the evolution of units in the control group allow you to impute what the evolution of units in the treatment group would have been had they not been treated
We have group \(A\) that enters treatment at some point and group \(B\) that never does
The estimate:
\[\hat\tau = (\mathbb{E}[Y^A | post] - \mathbb{E}[Y^A | pre]) -(\mathbb{E}[Y^B | post] - \mathbb{E}[Y^B | pre])\] (how different is the change in \(A\) compared to the change in \(B\)?)
can be written:
\[\hat\tau = (\mathbb{E}[Y_1^A | post] - \mathbb{E}[Y_0^A | pre]) -(\mathbb{E}[Y_0^B | post] - \mathbb{E}[Y_0^B | pre])\]
Cleaning up
\[\hat\tau = (\mathbb{E}[Y_1^A | post] - \mathbb{E}[Y_0^A | pre]) -(\mathbb{E}[Y_0^B | post] - \mathbb{E}[Y_0^B | pre])\]
\[\hat\tau = (\mathbb{E}[Y_1^A | post] - \mathbb{E}[Y_0^A | post]) + ((\mathbb{E}[Y_0^A | post] - \mathbb{E}[Y_0^A | pre]) -(\mathbb{E}[Y_0^B | post] - \mathbb{E}[Y_0^B | pre]))\]
\[\hat\tau_{ATT} = \tau_{ATT} + \text{Difference in trends}\]
n_units <- 2
design <-
declare_model(
unit = add_level(N = n_units, I = 1:N),
time = add_level(N = 6, T = 1:N, nest = FALSE),
obs = cross_levels(by = join_using(unit, time))) +
declare_model(potential_outcomes(Y ~ I + T^.5 + Z*T)) +
declare_assignment(Z = 1*(I>(n_units/2))*(T>3)) +
declare_measurement(Y = reveal_outcomes(Y~Z)) +
declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0),
ATT = mean(Y_Z_1[Z==1] - Y_Z_0[Z==1])) +
declare_estimator(Y ~ Z, label = "naive") +
declare_estimator(Y ~ Z + I, label = "FE1") +
declare_estimator(Y ~ Z + as.factor(T), label = "FE2") +
declare_estimator(Y ~ Z + I + as.factor(T), label = "FE3")
Here only the two-way fixed effects estimator is unbiased, and only for the ATT.
The ATT here is averaging over effects for treated units (later periods only). We know nothing about the size of effects in earlier periods when all units are in control!
Inquiry | Estimator | Bias |
---|---|---|
ATE | FE1 | 2.25 |
ATE | FE2 | 6.50 |
ATE | FE3 | 1.50 |
ATE | naive | 5.40 |
ATT | FE1 | 0.75 |
ATT | FE2 | 5.00 |
ATT | FE3 | 0.00 |
ATT | naive | 3.90 |
Things get much more complicated when there is (a) heterogeneous timing in treatment take up and (b) heterogeneous effects
It’s only recently been appreciated how tricky things can get
But we already have an intuition from our analysis of trials with heterogeneous assignment and heterogeneous effects:
in such cases fixed effects analysis weights stratum level treatment effects by the variance in assignment to treatment
and something similar happens here.
Just two units assigned at different times:
trend = 0
design <-
declare_model(
unit = add_level(N = 2, ui = rnorm(N), I = 1:N),
time = add_level(N = 6, ut = rnorm(N), T = 1:N, nest = FALSE),
obs = cross_levels(by = join_using(unit, time))) +
declare_model(
potential_outcomes(Y ~ trend*T + (1+Z)*(I == 2))) +
declare_assignment(Z = 1*((I == 1) * (T>3) + (I == 2) * (T>5))) +
declare_measurement(Y = reveal_outcomes(Y~Z),
I_c = I - mean(I)) +
declare_inquiry(mean(Y_Z_1 - Y_Z_0)) +
declare_estimator(Y ~ Z, label = "1. naive") +
declare_estimator(Y ~ Z + I, label = "2. FE1") +
declare_estimator(Y ~ Z + as.factor(T), label = "3. FE2") +
declare_estimator(Y ~ Z + I + as.factor(T), label = "4. FE3") +
declare_estimator(Y ~ Z*I_c + as.factor(T), label = "5. Sat")
Estimator | Mean Estimand | Mean Estimate |
---|---|---|
1. naive | 0.50 | -0.12 |
(0.00) | (0.00) | |
2. FE1 | 0.50 | 0.36 |
(0.00) | (0.00) | |
3. FE2 | 0.50 | -1.00 |
(0.00) | (0.00) | |
4. FE3 | 0.50 | 0.25 |
(0.00) | (0.00) | |
5. Sat | 0.50 | 0.50 |
(0.00) | (0.00) |
The estimand is 0.5: this comes from weighting the effect for unit 1 (0) and the effect for unit 2 (1) equally
The naive estimate is wildly off because it does not take into account that units with different treatment shares have different average levels in outcomes
The estimate when we control for time and unit is 0.25
This is actually a lot harder to interpret:
We can figure out what it is from the “Goodman-Bacon decomposition” in Goodman-Bacon (2021)
In this case we can think of our data having the following structure:
 | y1 | y2 |
---|---|---|
pre | 0 | 1 |
mid | 0 | 1 |
post | 0 | 2 |
TWFE gives a weighted average of these two \(2 \times 2\) difference-in-differences comparisons (early vs. not-yet-treated, and late vs. already-treated), putting a 3/4 weight on the first and a 1/4 weight on the second
Specifically:
\[\hat\beta^{DD} = \mu_{12}\hat\beta^{2 \times 2, 1}_{12} + (1-\mu_{12})\hat\beta^{2 \times 2, 2}_{12}\]
where \(\mu_{12} = \frac{1-\overline{Z_1}}{1-(\overline{Z_1}-\overline{Z_2})} = \frac{1-\frac36}{1-\left(\frac36-\frac16\right)} = \frac34\)
\[\frac34\hat\beta^{2 \times 2, 1}_{12} + \frac14 \hat\beta^{2 \times 2, 2}_{12}\] (weights formula from WP version)
And:
\[\hat\beta^{2 \times 2, 1}_{12} = (\bar{y}_1^{mid} - \bar{y}_1^{pre}) - (\bar{y}_2^{mid} - \bar{y}_2^{pre})\]
which in the simple example without time trends is \((0 - 0) - (1-1) = 0\)
\[\hat\beta^{2 \times 2, 2}_{12} = (\bar{y}_2^{post} - \bar{y}_2^{mid}) - (\bar{y}_1^{post} - \bar{y}_1^{mid})\]
which is \((2 - 1) - (0 - 0) = 1\)
So quite complex weighting of different comparisons
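A quick check of this arithmetic, reconstructing the two-unit, six-period data from the table above and running TWFE by hand:

```r
# unit 1 treated from period 4 (effect 0), unit 2 treated in period 6 (effect 1)
df <- expand.grid(I = 1:2, T = 1:6)
df$Z <- with(df, 1 * ((I == 1 & T > 3) | (I == 2 & T > 5)))
df$Y <- with(df, (1 + Z) * (I == 2))  # no trends

coef(lm(Y ~ Z + factor(I) + factor(T), data = df))["Z"]
#> 0.25  # = 3/4 * 0 + 1/4 * 1
```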
See excellent review by Roth et al. (2023)
library(DIDmultiplegt)
library(rdss)
design <-
declare_model(
unit = add_level(N = 4, I = 1:N),
time = add_level(N = 8, T = 1:N, nest = FALSE),
obs = cross_levels(by = join_using(unit, time),
potential_outcomes(Y ~ T + (1 + Z)*I))) +
declare_assignment(Z = 1*(T > (I + 4))) +
declare_measurement(
Y = reveal_outcomes(Y~Z),
Z_lag = lag_by_group(Z, groups = unit, n = 1, order_by = T)) +
declare_inquiry(
ATT_switchers = mean(Y_Z_1 - Y_Z_0),
subset = Z == 1 & Z_lag == 0 & !is.na(Z_lag)) +
declare_estimator(
Y = "Y", G = "unit", T = "T", D = "Z",
handler = label_estimator(rdss::did_multiplegt_tidy),
inquiry = c("ATT_switchers"),
label = "chaisemartin"
)
Note the inquiry: the ATT among "switchers", units observed just as they enter treatment
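A sketch of checking it with one draw of the design (assuming the `DIDmultiplegt` backend loaded above is installed), comparing the estimate against the switchers inquiry:

```r
# one run: estimand and estimate side by side
run_design(design)
```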
A response to concerns that double differencing is not enough is to triple difference
When you think that there may be a violation of parallel trends but you have other outcomes that would pick up the same difference in trends
See Olden and Møen (2022)
You are interested in the effects of an influx of refugees on right-wing voting
You want to do differences in differences comparing these states before and after
However you worry that things change differentially in the conservative and liberal states: no parallel trends
But you can identify areas within states that are more or less likely to be exposed and compare differences in differences in the exposed and unexposed groups.
So:
\[Y = \beta_0 + \beta_1 L + \beta_2 B + \beta_3 Post + \beta_4 LB + \beta_5 L Post + \beta_6 B Post + \beta_7L B Post + \epsilon\]
\[\frac{\partial ^3Y}{\partial L \partial B \partial Post} = \beta_7\]
The level among the \(B=1\) types is:
\[Y = \beta_0 + \beta_1 L + \beta_2 + \beta_3 Post + \beta_4 L + \beta_5 L Post + \beta_6 Post + \beta_7L Post + \epsilon\] If you did simple before / after differences among the \(B\) types you would get
\[\Delta Y| L = 1, B = 1 = \beta_3 + \beta_5 + \beta_6 + \beta_7\] \[\Delta Y| L = 0, B = 1 = \beta_3 + \beta_6\]
And so if you differenced again you would get:
\[\Delta^2 Y| B = 1 = \beta_5 + \beta_7\] So the problem is that you have \(\beta_5\) in here, which corresponds exactly to how \(L\) states change over time.
But we can figure out \(\beta_5\) by doing a diff-in-diff among the \(B=0\)'s.
\[Y|B = 0 = \beta_0 + \beta_1 L + \beta_3 Post + \beta_5 L Post\]
\[\Delta^2 Y| B = 0 = \beta_5\]
The identifying assumption is that absent treatment the differences in trends between \(L=0\) and \(L=1\) would be the same for units with \(B=0\) and \(B=1\).
See equation 5.4 in Olden and Møen (2022)
\[ \left(E[Y_0|L=1, B=1, {\textit {Post}}=1] - E[Y_0|L=1, B=1, {\textit {Post}}=0]\right) \\ \quad - \ \left(E[Y_0|L=1, B=0, {\textit {Post}}=1] - E[Y_0|L=1, B=0, {\textit {Post}}=0]\right) \\ = \nonumber \\ \left(E[Y_0|L=0, B=1, {\textit {Post}}=1] - E[Y_0|L=0, B=1, {\textit {Post}}=0]\right) \\ \quad - \ \left(E[Y_0|L=0, B=0, {\textit {Post}}=1] - E[Y_0|L=0, B=0, {\textit {Post}}=0]\right)\]
In a sense this is one parallel trends assumption, not two
But there are four counterfactual quantities in this expression.
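A simulation sketch of the logic (numbers hypothetical): build in a differential trend for \(L\)-states (\(\beta_5 = 0.5\)) plus a true effect of 1 on exposed areas in \(L\)-states (\(\beta_7 = 1\)), then compare the DiD among \(B=1\) units with the triple difference:

```r
library(fabricatr)
library(estimatr)

dat <- fabricate(
  N = 4000,
  L = rbinom(N, 1, 0.5),        # liberal state
  B = rbinom(N, 1, 0.5),        # exposed area
  Post = rbinom(N, 1, 0.5),
  Y = 0.5 * L * Post +          # differential trend in L states (beta_5)
      1.0 * L * B * Post +      # true effect (beta_7)
      rnorm(N)
)

# DiD among exposed areas picks up beta_5 + beta_7 (about 1.5)
lm_robust(Y ~ L * Post, data = dat, subset = B == 1)

# the triple interaction recovers beta_7 (about 1)
lm_robust(Y ~ L * B * Post, data = dat)
```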
Puzzle: Is it possible for an effect to be correctly identified by a difference-in-differences design but incorrectly by a triple-differences design?
Errors and diagnostics
See excellent introduction: Lee and Lemieux (2010)
Kids born on 31 August start school a year younger than kids born on 1 September: does starting younger help or hurt?
Kids born on 12 September 1983 are more likely to register Republican than kids born on 10 September 1983: can this identify the effects of registration on long term voting?
A district in which Republicans got 50.1% of the vote get a Republican representative while districts in which Republicans got 49.9% of the vote do not: does having a Republican representative make a difference for these districts?
Setting:
Two arguments:
Continuity: \(\mathbb{E}[Y(1)|X=x]\) and \(\mathbb{E}[Y(0)|X=x]\) are continuous (at \(x=0\)) in \(x\): so \(\lim_{\hat x \rightarrow 0}\mathbb{E}[Y(0)|X=\hat x] = \mathbb{E}[Y(0)|X=0]\)
Local randomization: tiny things that determine exact values of \(x\) are as if random and so we can think of a local experiment around \(X=0\).
Note:
Exclusion restriction is implicit in continuity: if something else happens at the threshold then the conditional expectation functions jump at the threshold
Implicit: \(X\) is exogenous in the sense that units cannot adjust \(X\) in order to be on one or the other side of the threshold
Typically researchers show:
In addition:
Sometimes:
library(rdss) # for helper functions
library(rdrobust)
cutoff <- 0.5
bandwidth <- 0.5
control <- function(X) {
as.vector(poly(X, 4, raw = TRUE) %*% c(.7, -.8, .5, 1))}
treatment <- function(X) {
as.vector(poly(X, 4, raw = TRUE) %*% c(0, -1.5, .5, .8)) + .25}
rdd_design <-
declare_model(
N = 1000,
U = rnorm(N, 0, 0.1),
X = runif(N, 0, 1) + U - cutoff,
D = 1 * (X > 0),
Y_D_0 = control(X) + U,
Y_D_1 = treatment(X) + U
) +
declare_inquiry(LATE = treatment(0) - control(0)) +
declare_measurement(Y = reveal_outcomes(Y ~ D)) +
declare_sampling(S = X > -bandwidth & X < bandwidth) +
declare_estimator(Y ~ D*X, term = "D", label = "lm") +
declare_estimator(
Y, X,
term = "Bias-Corrected",
.method = rdrobust_helper,
label = "optimal"
)
Note rdrobust implements:
See Calonico, Cattaneo, and Titiunik (2014) and related papers: ?rdrobust::rdrobust
Estimator | Mean Estimate | Bias | SD Estimate | Coverage |
---|---|---|---|---|
lm | 0.23 | -0.02 | 0.01 | 0.64 |
(0.00) | (0.00) | (0.00) | (0.02) | |
optimal | 0.25 | 0.00 | 0.03 | 0.89 |
(0.00) | (0.00) | (0.00) | (0.01) |
Geographic regression discontinuity designs are popular in political science:
See Keele and Titiunik (2015)
Spillovers can result in the estimation of weaker effects when effects are actually stronger.
The key problem is that \(Y(1)\) and \(Y(0)\) are not sufficient to describe potential outcomes
Unit | Location | \(D_\emptyset\) | \(y(D_\emptyset)\) | \(D_1\) | \(y(D_1)\) | \(D_2\) | \(y(D_2)\) | \(D_3\) | \(y(D_3)\) | \(D_4\) | \(y(D_4)\) |
---|---|---|---|---|---|---|---|---|---|---|---|
A | 1 | 0 | 0 | 1 | 3 | 0 | 1 | 0 | 0 | 0 | 0 |
B | 2 | 0 | 0 | 0 | 3 | 1 | 3 | 0 | 3 | 0 | 0 |
C | 3 | 0 | 0 | 0 | 0 | 0 | 3 | 1 | 3 | 0 | 3 |
D | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 3 |
\(\bar{y}_\text{treated}\) | | | - | | 3 | | 3 | | 3 | | 3 |
\(\bar{y}_\text{untreated}\) | | | 0 | | 1 | | 4/3 | | 4/3 | | 1 |
\(\bar{y}_\text{neighbors}\) | | | - | | 3 | | 2 | | 2 | | 3 |
\(\bar{y}_\text{pure control}\) | | | 0 | | 0 | | 0 | | 0 | | 0 |
ATT-direct | | | - | | 3 | | 3 | | 3 | | 3 |
ATT-indirect | | | - | | 3 | | 2 | | 2 | | 3 |
Table: Potential outcomes for four units for different treatment profiles. \(D_i\) is an allocation and \(y_j(D_i)\) is the potential outcome for (row) unit \(j\) given (column) \(D_i\). Summary rows give means over treated, untreated, neighboring, and pure-control units, and the implied direct and indirect (neighbor) effects under each allocation.
dgp <- function(i, Z, G) Z[i]/3 + sum(Z[G == G[i]])^2/5 + rnorm(1)
spillover_design <-
declare_model(G = add_level(N = 80),
j = add_level(N = 3, zeros = 0, ones = 1)) +
declare_inquiry(direct = mean(sapply(1:240, # just i treated v no one treated
function(i) { Z_i <- (1:240) == i
dgp(i, Z_i, G) - dgp(i, zeros, G)}))) +
declare_inquiry(indirect = mean(sapply(1:240,
function(i) { Z_i <- (1:240) == i # all but i treated v no one treated
dgp(i, ones - Z_i, G) - dgp(i, zeros, G)}))) +
declare_assignment(Z = complete_ra(N)) +
declare_measurement(
neighbors_treated = sapply(1:N, function(i) sum(Z[-i][G[-i] == G[i]])),
one_neighbor = as.numeric(neighbors_treated == 1),
two_neighbors = as.numeric(neighbors_treated == 2),
Y = sapply(1:N, function(i) dgp(i, Z, G))
) +
declare_estimator(Y ~ Z,
                  inquiry = "direct",
                  .method = lm_robust,
                  label = "naive") +
declare_estimator(Y ~ Z * one_neighbor + Z * two_neighbors,
                  term = c("Z", "two_neighbors"),
                  inquiry = c("direct", "indirect"),
                  label = "saturated",
                  .method = lm_robust)
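A sketch of the diagnosis: the naive difference-in-means is biased for the direct inquiry because "control" units sit in groups with treated members:

```r
# diagnose both estimators against the direct and indirect inquiries
diagnose_design(spillover_design, sims = 500)
```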
You can in principle:
But to estimate effects you still need some SUTVA like assumption.
In this example, if you compare outcomes between treated units and all control units that are at least \(n\) positions away from a treated unit, you get the wrong answer unless \(n \geq 7\).
Which effects are identified by the random assignment of \(X\)?
An obvious approach is to first examine the (average) effect of X on M1 and then use another manipulation to examine the (average) effect of M1 on Y.
Both instances of unobserved confounding between \(M\) and \(Y\):
Another somewhat obvious approach is to see how the effect of \(X\) on \(Y\) in a regression is reduced when you control for \(M\).
If the effect of \(X\) on \(Y\) passes through \(M\) then surely there should be no effect of \(X\) on \(Y\) after you control for \(M\).
This common strategy associated with Baron and Kenny (1986) is also not guaranteed to produce reliable results. See for instance Green, Ha, and Bullock (2010)
df <- fabricate(
  N = 1000,
  U = rbinom(N, 1, .5),            # unobserved confounder of M and Y
  X = rbinom(N, 1, .5),            # randomized treatment
  M = ifelse(U == 1, X, 1 - X),    # full mediation: M always responds to X
  Y = ifelse(U == 1, M, 1 - M))    # note Y = X for every unit

list(lm(Y ~ X, data = df),
     lm(Y ~ X + M, data = df)) |> texreg::htmlreg()
Model 1 | Model 2 | |
---|---|---|
(Intercept) | -0.00*** | -0.00*** |
(0.00) | (0.00) | |
X | 1.00*** | 1.00*** |
(0.00) | (0.00) | |
M | -0.00 | |
(0.00) | ||
R2 | 1.00 | 1.00 |
Adj. R2 | 1.00 | 1.00 |
Num. obs. | 1000 | 1000 |
***p < 0.001; **p < 0.01; *p < 0.05 |
The bad news is that although a single experiment might identify the total effect, it cannot identify these elements of the direct effect.
So:
Check the formal requirement for identification under the single-experiment design ("sequential ignorability": that, conditional on actual treatment, it is as if the value of the mediation variable is randomly assigned relative to potential outcomes). But this is strong (and in fact unverifiable), and if it does not hold, bounds on effects always include zero (Imai et al.)
Consider sensitivity analyses
You can use interactions with covariates if you are willing to make assumptions on no heterogeneity of direct treatment effects over covariates.
e.g. you think that money makes people get to work faster because they can buy better cars; you look at the marginal effect of more money on time to work for people with and without cars and find it is larger for the latter.
This might imply mediation through transport, but only if there is no direct effect heterogeneity (e.g. people with cars are less motivated by money).
Weaker assumptions justify parallel design
Takeaway: Understanding mechanisms is harder than you think. Figure out what assumptions fly.
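If you are nevertheless willing to invoke sequential ignorability, the Imai et al. approach is implemented in the mediation package, with medsens probing sensitivity to violations. A minimal sketch, assuming a hypothetical data frame dat with randomized treatment X, mediator M, and outcome Y:

```r
library(mediation)

# models for the mediator and the outcome (dat is hypothetical)
med_fit <- lm(M ~ X, data = dat)
out_fit <- lm(Y ~ X + M, data = dat)

# ACME and direct effect estimates lean on sequential ignorability
med_out <- mediate(med_fit, out_fit, treat = "X", mediator = "M", sims = 500)
summary(med_out)

# sensitivity: how does the ACME move as the correlation rho between
# mediator and outcome disturbances departs from zero?
summary(medsens(med_out, rho.by = 0.1))
```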
CausalQueries
Let's imagine that sequential ignorability does not hold. What are our posteriors on mediation quantities when in fact all effects are mediated, effects are strong, and we have lots of data?
CausalQueries
We imagine a true model and consider estimands:
truth <- make_model("X -> M -> Y") |>  # no direct X -> Y path: full mediation
  set_parameters(c(.5, .5, .1, 0, .8, .1, .1, 0, .8, .1))
queries <-
list(
indirect = "Y[X = 1, M = M[X=1]] - Y[X = 1, M = M[X=0]]",
direct = "Y[X = 1, M = M[X=0]] - Y[X = 0, M = M[X=0]]"
)
truth |> query_model(queries) |> kable()
query | given | using | case_level | mean | sd | cred.low | cred.high |
---|---|---|---|---|---|---|---|
indirect | - | parameters | FALSE | 0.64 | NA | 0.64 | 0.64 |
direct | - | parameters | FALSE | 0.00 | NA | 0.00 | 0.00 |
CausalQueries
Why such poor behavior? Why isn’t weight going onto indirect effects?
It turns out the data are consistent with direct effects only: specifically, whenever \(M\) is responsive to \(X\), \(Y\) is responsive to \(X\).
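A sketch of the simulation behind this claim: draw abundant data from the truth, update a model that allows unobserved M–Y confounding (so sequential ignorability is not assumed), and query the posteriors:

```r
# draw a large dataset from the true, fully mediated model
data <- make_data(truth, n = 5000)

# update an agnostic model that allows unobserved M-Y confounding
model <- make_model("X -> M -> Y; M <-> Y") |>
  update_model(data, refresh = 0)

model |> query_model(queries, using = "posteriors")
```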
CausalQueries
Multiple survey experimental designs have been developed to make it easier for subjects to answer sensitive questions
The key idea is to use inference rather than measurement.
Subjects are placed in different conditions and the conditions affect the answers that are given in such a way that you can infer some underlying quantity of interest
This is an obvious DAG, but the main point is to be clear that the value is the quantity of interest and that it is not affected by the treatment, \(Z\).
The list experiment supposes that:
In other words: sensitivities notwithstanding, they are happy for the researcher to make correct inferences about them or their group
Respondents are given a short list and a long list.
The long list differs from the short list in having one extra item—the sensitive item
We ask how many items in each list does a respondent agree with:
How many of these do you agree with:
 | Short list | Long list | “Effect” |
---|---|---|---|
 | “2 + 2 = 4” | “2 + 2 = 4” | |
 | “2 * 3 = 6” | “2 * 3 = 6” | |
 | “3 + 6 = 8” | “Climate change is real” | |
 | | “3 + 6 = 8” | |
Answer | Y(0) = 2 | Y(1) = 3 | Y(1) - Y(0) = 1 |
[Note: this is obviously not a good list. Why not?]
declaration_17.3 <-
declare_model(
N = 500,
control_count = rbinom(N, size = 3, prob = 0.5),
Y_star = rbinom(N, size = 1, prob = 0.3),
potential_outcomes(Y_list ~ Y_star * Z + control_count)
) +
declare_inquiry(prevalence_rate = mean(Y_star)) +
declare_assignment(Z = complete_ra(N)) +
declare_measurement(Y_list = reveal_outcomes(Y_list ~ Z)) +
declare_estimator(Y_list ~ Z, .method = difference_in_means,
inquiry = "prevalence_rate")
diagnosands <- declare_diagnosands(
bias = mean(estimate - estimand),
mean_CI_width = mean(conf.high - conf.low)
)
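The table below is then produced by diagnosing the design with these diagnosands:

```r
diagnose_design(declaration_17.3, diagnosands = diagnosands)
```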
Design | Inquiry | Bias | Mean CI Width |
---|---|---|---|
declaration_17.3 | prevalence_rate | 0.00 | 0.32 |
(0.00) | (0.00) |
declaration_17.4 <-
declare_model(
N = N,
U = rnorm(N),
control_count = rbinom(N, size = 3, prob = 0.5),
Y_star = rbinom(N, size = 1, prob = 0.3),
W = case_when(Y_star == 0 ~ 0L,
Y_star == 1 ~ rbinom(N, size = 1, prob = proportion_hiding)),
potential_outcomes(Y_list ~ Y_star * Z + control_count)
) +
declare_inquiry(prevalence_rate = mean(Y_star)) +
declare_assignment(Z = complete_ra(N)) +
declare_measurement(Y_list = reveal_outcomes(Y_list ~ Z),
Y_direct = Y_star - W) +
declare_estimator(Y_list ~ Z, inquiry = "prevalence_rate", label = "list") +
declare_estimator(Y_direct ~ 1, inquiry = "prevalence_rate", label = "direct")
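N and proportion_hiding are left as free parameters above; a sketch of exploring them (values hypothetical), showing how hiding biases the direct estimate but not the list estimate:

```r
# vary the share of hiders and diagnose both estimators
designs <- redesign(declaration_17.4, N = 500, proportion_hiding = c(0, 0.25, 0.5))
diagnose_design(designs)
```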
rho <- -.8
correlated_lists <-
declare_model(
N = 500,
U = rnorm(N),
control_1 = rbinom(N, size = 1, prob = 0.5),
control_2 = correlate(given = control_1, rho = rho, draw_binary, prob = 0.5),
control_count = control_1 + control_2,
Y_star = rbinom(N, size = 1, prob = 0.3),
potential_outcomes(Y_list ~ Y_star * Z + control_count)
) +
declare_inquiry(prevalence_rate = mean(Y_star)) +
declare_assignment(Z = complete_ra(N)) +
declare_measurement(Y_list = reveal_outcomes(Y_list ~ Z)) +
declare_estimator(Y_list ~ Z)
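A sketch of why rho matters: negatively correlated control items reduce variance in the control count, which tightens the list estimate's confidence intervals:

```r
# compare CI widths across control-item correlations
diagnose_design(redesign(correlated_lists, rho = c(-0.8, 0, 0.8)),
                diagnosands = diagnosands)
```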
This is typically used to estimate average levels
However you can use it in the obvious way to get average levels for groups: this is equivalent to calculating group level heterogeneous effects
Extending the idea you can even get individual level estimates: for instance you might use causal forests
You can also use this to estimate the effect of an experimental treatment on an item that’s measured using a list, without requiring individual level estimates:
\[Y_i = \beta_0 + \beta_1Z_i + \beta_2Long_i + \beta_3Z_iLong_i\]
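A sketch of the corresponding estimation, where survey_data, the randomized treatment Z, and the long-list indicator Long are all hypothetical:

```r
library(estimatr)

# beta_3, the coefficient on Z:Long, estimates the effect of the treatment
# on the sensitive item without eliciting any individual's answer
lm_robust(Y ~ Z * Long, data = survey_data)
```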
Note that here we looked at "hiders": people not answering the direct question truthfully
See Li (2019) on bounds when the "no liars" assumption is threatened: this is about whether people respond truthfully to the list experiment question
Good questions studied well
Prospects and priorities
There is no foundationless answer to this question. So let’s take some foundations from the Belmont report and seek to ensure:
Unfortunately, operationalizing these requires further ethical theories. Let’s assume that (1) is operationalized by informed consent (a very liberal idea). We are a bit at sea for (2) and (3) (the Belmont report suggests something like a utilitarian solution).
The major focus on (1) by IRBs might follow from the view that if subjects consent, then they endorse the ethical calculations made for 2 and 3 — they think that it is good and fair.
This is a little tricky, though, since the study may not be good or fair because of implications for non-subjects.
The problem is that many (many) field experiments have nothing like informed consent.
For example, whether the government builds a school in your village, whether an ad appears on your favorite radio show, and so on.
Consider three cases:
In all cases, there is no consent given by subjects.
In 2 and 3, the treatment is possibly harmful for subjects, and the results might also be harmful. But even in case 1, there could be major unintended harmful consequences.
In cases 1 and 3, however, the “intervention” is within the sphere of normal activities for the implementer.
Sometimes it is possible to use this point of difference to make a “spheres of ethics” argument for “embedded experimentation.”
Spheres of Ethics Argument: Experimental research that involves manipulations that are not normally appropriate for researchers may nevertheless be ethical if:
Otherwise keep focus on consent and desist if this is not possible
Experimental researchers are deeply engaged in the movement towards more transparent social science research.
Contentious issues (mostly):
Data. How soon should you make your data available? My view: as soon as possible. Along with working papers and before publication. Before it affects policy in any case. Own the ideas not the data.
Where should you make your data available? Dataverse is focal for political science. Not personal website (mea culpa)
What data should you make available? Disagreement is over how raw your data should be. My view: as raw as you can but at least post cleaning and pre-manipulation.
Experimental researchers are deeply engaged in the movement towards more transparent social science research.
Should you register? Hard to find reasons against. But the case is strongest in the testing phase rather than the exploratory phase.
Registration: When should you register? My view: Before treatment assignment. (Not just before analysis, mea culpa)
Registration: Should you deviate from a preanalysis plan if you change your mind about optimal estimation strategies? My view: Yes, but make the case and describe both sets of results.
File drawer bias (Publication bias)
Analysis bias (Fishing)
– Say in truth \(X\) affects \(Y\) in 50% of cases.
– Researchers conduct multiple excellent studies. But they only write up the 50% that produce “positive” results.
– Even if each individual study is indisputably correct, the account in the research record – that X affects Y in 100% of cases – will be wrong.
Exacerbated by:
– Publication bias – the positive results get published
– Citation bias – the positive results get read and cited
– Chatter bias – the positive results get blogged, tweeted and TEDed.
– Say in truth \(X\) affects \(Y\) in 50% of cases.
– But say that researchers enjoy discretion to select measures for \(X\) or \(Y\), or enjoy discretion to select statistical models after seeing \(X\) and \(Y\) in each case.
– Then, with enough discretion, 100% of analyses may report positive effects, even if all studies get published.
– Try it: An Exact Fishy Test (https://macartan.shinyapps.io/fish/)
– What’s the problem with this test?
When your conclusions do not really depend on the data
E.g.: some evidence will always support your proposition; some interpretation of evidence will always support your proposition
Knowing the mapping from data to inference in advance gives a handle on the false positive rate.
Source: Gerber and Malhotra
Implications are:
Summary: we do not know when we can or cannot trust claims made by researchers.
[Not a tradition specific claim]
Simple idea:
Fishing can happen in very subtle ways, and may seem natural and justifiable.
Example:
Our journal review process is largely organized around advising researchers how to adjust analysis in light of findings in the data.
Frequentists can do it
Bayesians can do it too.
Qualitative researchers can also do it.
You can even do it with descriptive statistics
The key distinction is between prospective and retrospective studies.
Not between experimental and observational studies.
A reason (from the medical literature) why registration is especially important for experiments: because you owe it to subjects
A reason why registration is less important for experiments: because it is more likely that the intended analysis is implied by the design in an experimental study. Researcher degrees of freedom may be greatest for observational qualitative analyses.
Registration will produce some burden but does not require the creation of content that is not needed anyway
It does shift preparation of analyses forward
And it can also increase the burden of developing analysis plans even for projects that don't work. But that is, in part, the point.
Upside is that ultimate analyses may be much easier.
In neither case would the creation of a registration facility prevent exploration.
What it might do is make it less credible for someone to claim that they have tested a proposition when in fact the proposition was developed using the data used to test it.
Registration communicates whether researchers are engaged in exploration or not. We love exploration and should be proud of it.
Incentives and strategies
Inquiry | In the preanalysis plan | In the paper | In the appendix |
---|---|---|---|
Gender effect | X | X | |
Age effect | X |
Inquiry | Following A from the PAP | Following A from the paper | Notes |
---|---|---|---|
Gender effect | estimate = 0.6, s.e. = 0.31 | estimate = 0.6, s.e. = 0.25 | Difference due to change in control variables [provide cross references to tables and code] |