
Countering Counter-Factual COVID Confusion

We don’t know what we don’t know, and we will never know what we can’t know, but by insisting on properly designed studies we can know if pandemic policies are working, Sarah Hamersma notes.

Sarah Hamersma
4 minute read

There are so many policy questions that need answers during the COVID-19 pandemic.  We would like to know how every intervention has affected the spread: have extended testing, school closures, stay-at-home orders, or even just officially-recommended social distancing made a difference?  And how much?

Let me start with some bad news: we can never know exactly.   

The reason is almost too simple: to truly know the effects of a policy, we need to watch the world unfold with the policy, then rewind, and watch the world unfold in its absence without changing anything else.  If we could just compare the real world to this world that might have been – which social scientists call the “counterfactual” and we geeks call an “alternative timeline” – then we would know what effect the policy had.  Like a control group in an experiment, the counterfactual anchors our measurement of a policy’s effects.  But without a time machine, it’s unknowable.

I wonder if this is why so many people throw up their hands and say, “We just can’t really know anything about all this.” Or, perhaps more commonly, people latch on to evidence corresponding to their own inclinations, and it then seems obvious that their preferred narrative is true.  As studies accumulate by the day, how is the general public supposed to decide which analyses are treating the data with respect and care, and which are shoving it into a pre-fabricated story mold?  How can we judge? 

This is where the good news comes in. If the problem is an unknowable counterfactual – and it always is! – then a study can provide solid, trustworthy results if it has developed a credible substitute.  If we want to know the effects of stay-at-home orders on COVID-19 case growth, the challenge is to use all of the available information to estimate what case growth would have been in the absence of stay-at-home orders.  This is far from impossible.  Social scientists have been working on methods for developing credible counterfactuals for decades, and there are objectively better and worse ways to do it.   

Consider an example.  Suppose I’d like to know the effect of food banks on hunger.  The data tell me that people living near food banks are hungrier, while people without a local food pantry are doing fine.  More access corresponds to more hunger.  In other words, food banks make things worse.  Does this sound wrong to you?  It should.  It is not the data, but the analysis, that is creating the problem. It assumes that if neighbourhoods with food banks didn’t have them, their hunger levels would be more like neighbourhoods that currently don’t have food banks.  

Yet, neighbourhoods without food banks are likely wealthier, with many advantages that improve their food access.  They are not good counterfactuals because of important, underlying differences.  There may also be a cause-and-effect relationship running in the opposite direction from what we hope to measure: if pantry locations were actually chosen to be in poor neighbourhoods, local hunger (in a sense) “causes” food pantry access.  In the presence of both important secondary factors and these reverse causal paths, it is hard to identify the effect of a food pantry on hunger.
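
For readers who like to see the mechanics, here is a small simulation of the food-bank example. The numbers are made up purely for illustration (this is a sketch, not real survey data), but it shows how targeting pantries at poorer neighbourhoods makes a naive with-versus-without comparison point the wrong way, even when the pantry's true effect is to reduce hunger.

```python
# Hypothetical simulation of the food-bank example: pantries are placed in
# poorer neighbourhoods, so comparing hunger across neighbourhoods with and
# without pantries gives the wrong sign, even though the pantry's true
# effect here is to *reduce* hunger.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Neighbourhood wealth drives baseline hunger (more wealth, less hunger).
wealth = rng.normal(0, 1, n)
baseline_hunger = 5 - 2 * wealth + rng.normal(0, 1, n)

# Pantries are targeted at poorer (and therefore hungrier) neighbourhoods.
has_pantry = (wealth + rng.normal(0, 0.5, n)) < 0

# True causal effect of a pantry in this simulation: hunger falls by 1 unit.
hunger = baseline_hunger - 1.0 * has_pantry

naive_gap = hunger[has_pantry].mean() - hunger[~has_pantry].mean()
print("True effect of a pantry on hunger: -1.0")
print(f"Naive with-vs-without comparison:  {naive_gap:+.2f}")
# The naive gap comes out positive: pantry neighbourhoods look hungrier,
# because the comparison neighbourhoods were never a credible counterfactual.
```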

What does all this have to do with COVID-19 research?  Well, all of the same issues apply.  You may have seen point-in-time case comparisons of states that adopted a stay-at-home order to those that did not.  This is the weakest type of analysis, making the (literally in-credible) assumption that these states are otherwise equivalent. Non-adopters are an inappropriate counterfactual for adopters for at least two reasons: the two sets of states do not have the same overall characteristics, and non-adopters may be waiting to adopt policies until outcomes look worse (which will always make those policies look less effective).

Any solid study must address both of these issues directly to generate a credible counterfactual.   A nice example is a recent peer-reviewed study in Health Affairs.  The study uses data on daily COVID-19 case counts for every county in the United States for most of March and April.  The authors then connect the data from every county, every day, to its policies intended to reduce the spread of COVID-19.  The study is designed to develop smart counterfactuals, comparing each policy-affected county to multiple alternatives: both similar counties at the same time without that policy and the county itself before it had the policy.  

These comparisons are made in a single integrated model, using statistical methods designed expressly for this purpose.  Every technical decision the authors make is scrutinized to ensure that the assumptions made about comparability are supported.  They find that stay-at-home orders and restaurant and bar closures have prevented millions of COVID-19 cases.  I believe them.
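
That comparison logic, each county measured against itself before the policy and against similar counties on the same day, is in the spirit of a difference-in-differences design with county and day fixed effects. The sketch below is illustrative only, built on simulated data with invented numbers; it is not the authors' code or their exact specification, but it shows how both kinds of comparison can sit inside a single integrated model.

```python
# Illustrative two-way fixed-effects difference-in-differences sketch
# (simulated data, not the Health Affairs study's actual code or data).
# County fixed effects absorb stable differences between counties, day fixed
# effects absorb common trends, and the policy coefficient is identified by
# comparing adopters to themselves before adoption and to non-adopters.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
counties, days = 40, 30

rows = []
for c in range(counties):
    county_level = rng.normal(0, 1)                           # stable county differences
    adoption_day = rng.integers(10, 25) if c < 20 else None   # half the counties adopt
    for d in range(days):
        treated = int(adoption_day is not None and d >= adoption_day)
        day_shock = 0.05 * d                                  # trend shared by all counties
        growth = 1.0 + county_level + day_shock - 0.3 * treated + rng.normal(0, 0.2)
        rows.append({"county": c, "day": d, "stay_home": treated, "case_growth": growth})

df = pd.DataFrame(rows)

# The fixed effects build the counterfactual; the stay_home coefficient
# estimates the policy effect (its true value in this simulation is -0.3).
model = smf.ols("case_growth ~ stay_home + C(county) + C(day)", data=df).fit()
print(f"Estimated effect of the stay-at-home order: {model.params['stay_home']:+.2f}")
```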

There are more studies emerging every day, claiming new insights on these policies. We are not stuck in a world where we the public must either believe every study (“we’re not experts, after all”) or reject every study (“no one can really know anyway”).   We are in a world where researchers ought to – indeed, have an obligation to – convince both their fellow experts and the public that they have designed their study to make the right comparisons.  Without a credible design, neither mountains of data nor complex statistical techniques can help us correctly understand the effects of policy.  

And so when you read that next article describing the changes in COVID-19 outcomes from a policy intervention, ask yourself a simple question: compared to what?  The researchers should tell you.  That is, unless they’ve built a time machine.
