Bad predictions are useful
Back when I was writing about college basketball regularly, I was asked to vote in preseason polls a couple of times. That posed a bit of a dilemma: there were already statistical models that projected outcomes well. Should I just follow those? Or should I come up with my own independent forecasts?
I now feel it’s usually best to make predictions as independently as possible—even if they’re worse than other available sources.
Why? Using other sources would certainly make my own predictions better. But thinking independently makes the collective prediction better.1
If everyone follows the statistical models, then the only information available is what those models say.
But if people make independent forecasts, you can now see all of those opinions and the models too. This means the collective prediction is better—anyone looking for information now has more of it available. (In fact, at least in college basketball, combining models and human polls results in better predictions than using either one individually.)
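To make the logic concrete, here's a small simulation of my own (the numbers are illustrative, not from any real poll or model): a statistical model with modest error, plus human forecasters who are individually worse but whose errors are independent of the model's. Averaging them in still improves the consensus; echoing the model adds nothing.

```python
# A minimal sketch (illustrative numbers, not from the post) of why independent
# forecasts improve the *collective* prediction even when each one is worse.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
n_humans = 10
model_sd, human_sd = 3.0, 6.0   # assumed: humans are individually twice as noisy

truth = rng.normal(0.0, 10.0, trials)              # true outcome for each trial
model = truth + rng.normal(0.0, model_sd, trials)  # model forecast

# Scenario A: everyone just echoes the model -> the consensus is the model.
echo_consensus = model

# Scenario B: humans forecast independently (noisier, but with independent
# errors), and the consensus averages the model with the mean human forecast.
humans = truth[:, None] + rng.normal(0.0, human_sd, (trials, n_humans))
indep_consensus = 0.5 * model + 0.5 * humans.mean(axis=1)

def rmse(pred):
    return np.sqrt(np.mean((pred - truth) ** 2))

print(f"model alone / echo consensus : {rmse(echo_consensus):.2f}")  # ~3.0
print(f"typical single human forecast: {rmse(humans[:, 0]):.2f}")    # ~6.0
print(f"model + independent humans   : {rmse(indep_consensus):.2f}") # ~1.8
```

Each human forecast is worse than the model on its own, yet the combined estimate beats the model, because the independent errors partly cancel. That's the whole argument for sticking your neck out.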
There’s a tricky collective action problem here: if you make a prediction that’s different from the best available information, you’ll probably be wrong. And it’s uncomfortable to make predictions that might be wrong!
That’s true even for frivolous topics like ranking basketball teams. It’s even more true for important topics like election polling. Polls inherently have sampling error; for example, 5% of the time, an 800-person poll will get the margin wrong by at least seven percentage points. So even if you’re doing all of your modeling and sample selection perfectly, you might get a Trump +5 result when the true value is Biden +2. If you publish the former when everyone is expecting the latter, people will freak out. And since the average of all other polls is more informative than your single poll, you’ll probably be wrong, and you’ll look bad.
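If you want to check that arithmetic yourself, here's a quick simulation (mine, with made-up but plausible numbers): sample 800 voters when the true two-way race is Biden +2 and see how often the observed margin lands at least seven points away.

```python
# A rough sanity check (my sketch, not from the post) of the sampling-error
# claim: poll 800 voters when the true race is Biden +2 (51/49 two-way) and
# count how often the observed margin is off by at least 7 points.
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
n = 800
true_biden_share = 0.51                       # true margin: Biden +2
true_margin = (2 * true_biden_share - 1) * 100

biden_votes = rng.binomial(n, true_biden_share, trials)
observed_margin = (2 * biden_votes / n - 1) * 100   # in percentage points

miss_by_7 = np.mean(np.abs(observed_margin - true_margin) >= 7)
trump_plus_5 = np.mean(observed_margin <= -5)

print(f"polls off by >= 7 points        : {miss_by_7:.1%}")   # ~5%
print(f"polls showing Trump +5 or better: {trump_plus_5:.1%}") # a few percent
```

The standard error on the margin with 800 respondents is about 3.5 points, so a seven-point miss is roughly a two-sigma event; the 5% figure falls straight out of that.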
Do you publish that poll anyway? Or do you tweak some assumptions to nudge your number closer to the consensus, or maybe throw it away entirely? There’s some evidence that pollsters refuse to publish outliers (known as “pollster herding”), at least late in campaigns. This makes them look better, but it’s a disservice to the public, because it reduces the total information available.
This isn’t a hard-and-fast rule—some people are looked to as experts precisely because they’re good at synthesizing and weighting other perspectives, and they should continue doing that. But when you know your voice will be one of many, you should stick your neck out, even if it means making a bad prediction.2
1. In more technical terms: "A Bayesian wants everyone else to be non-Bayesian".

2. In fact, you can argue that the best thing you can do is to make a bad prediction and be a huge jerk about it; not only are you adding your perspective to the chorus, but you’re making yourself a target for everyone else who disagrees to add their own perspectives and explain exactly why you’re wrong.