Forget the models, follow the R(t)
In my new Bloomberg column I suggest that R(t), which is a hyperparameter in most Covid-19 models, is a much better and more trustworthy figure to follow than any particular data set.
One reason, which didn’t make it into the column, is that R(t) can be estimated from most of the other daily data sources, like hospitalizations, cases, or even deaths, albeit with lags. That means we can piece together a patchwork quilt of R(t)’s that might be more trustworthy than any single version.
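For the nerds who want to see what that might look like in practice, here’s a toy sketch in Python. It’s only an illustration of the idea, not the machinery behind any published estimate: the ratio-of-counts estimator, the 4-day serial interval, the 7-day smoothing, and the patchwork_rt helper with its reporting-lag arguments are all simplifying assumptions.

```python
import pandas as pd

def estimate_rt(daily_counts, serial_interval=4, smoothing_window=7):
    """Rough R(t) estimate from any daily count series (cases,
    hospitalizations, or deaths), assuming the series is roughly
    proportional to true infections and the serial interval is fixed."""
    # Smooth out day-of-week reporting noise first.
    counts = pd.Series(daily_counts, dtype=float).rolling(
        smoothing_window, center=True).mean()
    # Under simple exponential-growth assumptions, the ratio of smoothed
    # counts one serial interval apart approximates R(t).
    return counts / counts.shift(serial_interval)

def patchwork_rt(series_and_lags):
    """Stitch several R(t) estimates into one series: shift each back by
    its (assumed) reporting lag, then average whatever overlaps."""
    estimates = [estimate_rt(s).shift(-lag) for s, lag in series_and_lags]
    return pd.concat(estimates, axis=1).mean(axis=1)

# Hypothetical usage: cases report quickly, hospitalizations lag about a
# week, deaths lag roughly three weeks (all placeholder figures).
# rt = patchwork_rt([(cases, 0), (hospitalizations, 7), (deaths, 21)])
```

Each input series carries its own lag and its own bias, but the shape of its R(t) estimate should agree with the others, which is what makes the quilt more trustworthy than any single patch.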
Moreover, R(t) is insulated from the bias we know exists in these figures (due mostly to not enough tests), because it only cares about trends: it’s estimated from how the counts change over time, so as long as the bias is consistent it cancels out and we don’t care about it.
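To make the “consistent bias” point concrete: if only a fixed fraction of infections ever shows up in the data, that fraction cancels out of any trend-based estimate. Here’s a tiny check, with made-up counts and a made-up 20% detection rate:

```python
import numpy as np
import pandas as pd

# Toy illustration: a constant undercount cancels out of the ratio, so a
# trend-based R(t) estimate is unchanged as long as the bias is consistent.
true_counts = pd.Series([100.0, 130, 169, 220, 286, 372, 483, 628])
observed = 0.2 * true_counts          # only 20% of infections ever detected
serial_interval = 3                   # placeholder value, in days
rt_true = true_counts / true_counts.shift(serial_interval)
rt_observed = observed / observed.shift(serial_interval)
assert np.allclose(rt_true.dropna(), rt_observed.dropna())
```

The assertion only fails if the detection rate itself changes over time, which is exactly the manipulation problem described next.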
The caveat here is that we’ve seen many states manipulating their Covid-19 data (Texas, Florida, and Georgia, for example) in order to open up sooner than they honestly should. Basically, they’re juicing the numbers. That’s a kind of political bias we cannot easily overcome (unless they forget to manipulate some of the data!).
Anyway, that’s a nerdy postscript on the following:
Here’s a Covid-19 Number Worth Watching
My other Bloomberg columns are available here.
You wrote:
R(t) can be estimated from most of the other daily data sources, like hospitalizations, cases, or even deaths, albeit with lags. That means we can piece together a patchwork quilt of R(t)’s that might be more trustworthy than any single version.
How does one do this? Is anyone doing this, on a local, state, or national basis? (Excuse me if I’ve missed it: the “daily briefings” have been unwatchable, on all levels, for a long time now.) If not, why not?
I’m not sanguine that we can assume the bias hasn’t changed. News reports, for instance, suggest that numbers in Florida and Georgia have gone from undercounting merely due to technical limitations (not enough tests), to undercounting more because government officials are actively trying to suppress the reported numbers.