Artificially Intelligent

Any mimicry distinguishable from the original is insufficiently advanced.

France Bet Postmortem

On the 18th of October 2020, I assigned a 50% chance that France’s daily COVID cases would not exceed 40,000 by the end of October. My friend Sidney assigned a 70% chance, so we bet on it. As of October 22nd, I have lost this bet.

Abram Demski recommends that people who lose bets write a post-mortem explaining why they think they lost. In general, I think it’s very valuable to look at the past and ask yourself how you could have done better.

My model of COVID before I looked at the graph of France’s daily cases was something like:

  • Most cases are driven by large social gatherings
  • Things are exponential early on because of this
  • After an initial period of exponential growth, people stop attending large social gatherings
  • This implies that exponential growth stops faster than you think

This model was mostly built from having thought, back in March, that COVID would continue growing exponentially for a long time, and being very wrong about that. I think I updated too far in the wrong direction and concluded that COVID growth was mostly linear, with a brief period of exponential growth at the beginning; the truth was closer to “growth didn’t remain exponential for weeks after things got bad” than to “growth stopped being exponential as soon as things got bad”.

In the future, I think I should pay more attention to the specificity of claims like “things will stop being exponential this week”. Additionally, I failed to assign a high enough probability to the scenario where the curve stops being exponential but the number of daily cases still exceeds 40,000.

I also briefly googled France and saw that they were instituting an aggressive lockdown. I realized there would be a delay between the lockdown and a decrease in daily cases; however, I thought the lockdown implied that people had started doing basic social distancing approximately a week earlier. In hindsight, this reasoning seems pretty convoluted, and I didn’t downweight it accordingly.

Overall, I think my main error was not sufficiently penalizing complexity. Frustratingly, I am well aware that increased complexity demands an exponential penalty; apparently, I still cannot apply this in practice. It’s also likely that I engaged in wishful thinking, so in the future I should slightly increase the probability I assign to events that I don’t want to happen.
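As a toy illustration of the exponential penalty I mean (the numbers here are mine for illustration, not from the bet): a claim built from a chain of roughly independent sub-claims loses probability multiplicatively, so even a chain of individually plausible steps quickly becomes unlikely.

```python
def conjunction_prob(p_each: float, n_steps: int) -> float:
    """Probability that n roughly independent sub-claims,
    each holding with probability p_each, all hold at once."""
    return p_each ** n_steps

# Even fairly likely steps compound quickly: a 5-step chain of
# 80%-confident sub-claims is already down to about one in three.
for n in range(1, 6):
    print(n, round(conjunction_prob(0.8, n), 3))
```

A multi-step story like “lockdown was announced, so people started distancing a week ago, so growth stops this week” is exactly this kind of conjunction, and should be priced accordingly.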