Schools Proliferating Without Practitioners

It's been more than a decade since Eliezer Yudkowsky started writing The Sequences. A lot has happened in the realm of rationality research since then. Philip Tetlock creamed everyone else in a competition to predict global events by building a platform for measuring forecasting ability. The replication crisis hit, showing tons of studies in psychology, behavioral economics, and related disciplines to be hocus-pocus. Effective Altruism splintered off from rationality as its own movement, raising millions of dollars for charity and fundamentally changing the way many wealthy philanthropists, bankers, computer programmers, and other high income earners think about their contributions. You know what didn't change much? The prototype of Bayesian Rationality put forth by Eliezer. It's not that there's nothing to update. Eliezer's sketch was a rough, unpolished thing that he actively invited his readers to improve upon and iterate on. Despite that, the top 'rationality textbook' a newcomer to the LessWrong school can expect to be recommended in 2018 is still The Sequences, with a few additions and light editing. You could say that the LessWrong rationality community, figuratively and literally, has failed to update.

Bayes Rule and The Failure To Update

Ironically enough, Bayes Theorem is something that the community at large has updated on without really officially acknowledging it. None of those updates have really backpropagated into The Sequences however.

I suspect most of my readers are already familiar with Bayes Theorem, but if you're not, I certainly shouldn't be the one to explain it to you. This explanation from Better Explained is probably your best bet for getting a quick handle on the concept.

I know it's been a while since most of you have read The Sequences (if ever), so a few quick reminders are in order. You probably remember that Eliezer spends a lot of time talking about Bayes Theorem and Bayesian Reasoning and why Frequentist interpretations of statistics are insane. You might not remember that he spends all that time talking about it because he believes Bayes is the centerpiece of his philosophy. For example, in his series of essays on Quantum Physics, Eliezer tries to force a confrontation between the reader's intuitions about science and their intuitions about Bayesian inference:

Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first? Are you going to make a major modification to a scientific model, and believe in zillions of other worlds you can’t see, without a defining moment of experimental triumph over the old model? Or are you going to reject probability theory? Will you give your allegiance to Science, or to Bayes?

(Source: The Dilemma: Science or Bayes?)

Eliezer speaks of his Bayesian Enlightenment, and how it made him realize his entire approach to 'rationality' had been deeply flawed; he hadn't been holding himself to a rigorous enough standard:

But it was Probability Theory that did the trick. Here was probability theory, laid out not as a clever tool, but as The Rules, inviolable on pain of paradox. If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal. Jaynes would do his calculations different ways to show that the same answer always arose when you used legitimate methods; and he would display different answers that others had arrived at, and trace down the illegitimate step. Paradoxes could not coexist with his precision. Not an answer, but the answer. And so—having looked back on my mistakes, and all the an-answers that had led me into paradox and dismay—it occurred to me that here was the level above mine. I could no longer visualize trying to build an AI based on vague answers— like the an-answers I had come up with before—and surviving the challenge.

(Source: My Bayesian Enlightenment)

And Eliezer provides short notes on how to think better with Bayes, clearly showing he intends it to be used by the reader, not just as a 'model' or 'metaphor' for correct thinking:

This isn’t the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P/(1−P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O/(1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience. For example, it’s more convenient to use odds when you’re doing Bayesian updates. Let’s say that I roll a six-sided die: If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell. Now I roll the die, and hear a bell. What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40). Then I convert back into a probability, if I like, and get (0.4/1.4) = 2/7 = ∼29%.

(Source: 0 and 1 Are Not Probabilities)
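If you want to check that arithmetic yourself, here is a minimal sketch of the same odds-form update in Python. The die/bell numbers are Eliezer's; the helper function names are mine:

```python
def prob_to_odds(p):
    """Convert a probability to odds, expressed as a real number (O = P / (1 - P))."""
    return p / (1 - p)

def odds_to_prob(o):
    """Convert odds back to a probability (P = O / (1 + O))."""
    return o / (1 + o)

# Prior: probability 1/6 that the face showing is 1, i.e. prior odds of 1:5.
prior_odds = prob_to_odds(1 / 6)  # 0.2

# Likelihood ratio: P(bell | face 1) / P(bell | other face) = 0.2 / 0.1 = 2.
likelihood_ratio = 0.2 / 0.1

# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
posterior_odds = prior_odds * likelihood_ratio  # 2/5 = 0.4

print(odds_to_prob(posterior_odds))  # 0.2857... ≈ 29%, matching the quote above
```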

All of this is to support a claim that should be fairly obvious, but that I suspect many readers will try to wriggle out of if I state it without extensive justification: Bayesian methods are a core feature of Eliezer Yudkowsky's version of rationality. You might even say that Eliezer's variant could be called "Bayesian Rationality". It's not a 'technique' or a 'tool'; to Eliezer, Bayes is the law, the irrefutable standard that provides a precise, unchanging figure for exactly how much you should update in response to a new piece of evidence. Bayes shows you that there is in fact a right answer to this question, and that you're almost certainly getting it wrong.

This in turn points toward the uncomfortable fact that Bayes does not seem to have helped the Bayesian Rationalists develop useful approximations of correct inference. It's not that we started with primitive approximations and then improved on them; rather, the Bayesian feature of Eliezer's philosophy seems to have left no conceptual descendants in the meme pool. For example, the Center for Applied Rationality's 2017 handbook does not include the phrase "Bayes Theorem" even once. Bayes is treated by the current cohort as something of a status symbol, a neat novelty you can claim knowledge of to boost your prestige.

Meanwhile, Philip Tetlock figured out how to get humans to approximate Bayesian reasoning in their actual predictions. He used a scoring rule called the Brier Score to measure the predictive accuracy of participants in forecasting tournaments. This let him work out the rules of reasoning that humans can actually implement to become very good at predicting the future. In his book Superforecasting, Bayes Theorem gets a brief aside explaining that it's largely irrelevant to his top performers' success. In fact, Tetlock takes the reader aside for a bit of myth-busting, stating explicitly that Bayes Theorem is not necessary for the level of ability superforecasters demonstrate:

This may cause the math-averse to despair. Do forecasters really have to understand, memorize, and use a―shudder―algebraic formula? For you, I have good news: no, you don't.

He goes on to explain that while forecasters might occasionally use Bayes Theorem to ground their predictions, empirically its use is not all that necessary for strong performance. Tetlock uses the example of Tim Minto, a superforecaster who understands the basics of Bayes Theorem and used it an astonishing zero times while updating and considering his forecasts. The mental movements that Bayes Theorem suggests for updating your beliefs are incredibly useful and crucially important to good performance, but the equation itself seems to be of limited benefit in real-world prediction tournaments. Even in describing superforecasters as a whole, Tetlock says that 'many' know about Bayes Theorem, implying the number does not even constitute a majority.
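For readers who haven't run into it, here is a minimal sketch of the Brier Score calculation Tetlock's measurement rests on, written in Python. The forecasts and outcomes below are invented for illustration, and this is the simple two-outcome form (Tetlock's tournaments score multi-category questions), but the idea is the same: squared error between what you predicted and what actually happened, so lower is better.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and what actually happened."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts: the probability assigned to each event happening,
# paired with whether it actually happened (1) or not (0).
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes  = [1,   1,   0,   0]

print(brier_score(forecasts, outcomes))  # 0.125; a perfect forecaster scores 0.0
```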

There is an entire fascinating discussion we could have about what it means that Tetlock's measurement-based, empirical perspective accomplished the goal that Eliezer's rational, model-based perspective didn't. But we shouldn't veer into too many subjects in one essay; it makes for messy reading.

So How About That Replication Crisis?

The case of Bayes underscores a larger point about the way this sort of thing is treated by LW-flavor rationalists. The replication crisis destroyed a lot of what we thought we knew about human psychology in a very short period of time. Naively, I would expect this to be a hair-on-fire moment for the community. A lot of these findings were treated as important in The Sequences, so if people are actually practicing things based on them, their sudden deletion from the scientific canon should have caused quite a bit of chaos and reshuffling. Instead, there was almost no reaction, and to the extent a reaction occurred it mostly treated the issue as a spectator sport rather than something which applies to LessWrongers personally.

The replication crisis provides us with a natural experiment, and I invite you to consider what would happen if it were performed in some other discipline. Imagine, for example, what the reaction would be in medicine if it were found that 3/4 of pharmaceutical drugs were actually placebos, or had effect sizes so small that they were nearly indistinguishable from noise. You'd see doctors going through the five stages of grief, they'd be so shaken up by it. It would provoke a great deal of argument, drama, and vicious denials, and once the dust settled, grim acceptance of the new reality. And yet:

The crisis intensified in 2015 when a group of psychologists, which included Nosek, published a report in Science with evidence of an overarching problem: When 270 psychologists tried to replicate 100 experiments published in top journals, only around 40 percent of the studies held up. The remainder either failed or yielded inconclusive data. And again, the replications that did work showed weaker effects than the original papers. The studies that tended to replicate had more highly significant results compared to the ones that just barely crossed the threshold of significance. (Resnick, 2018)

This is fairly close to the situation we find ourselves in with the bias literature. But nobody seems particularly shaken, and why should they be? Our naive impression is just that: naivete. The straightforward conclusion is that if deleting knowledge from the canon causes no reaction, then it clearly wasn't important to people. And the straightforward conclusion from that postulate is that whatever rationalists do, the practice isn't based on the bias literature. And the practice presumably isn't based on Bayes either. After all, if people were doing things based on Bayesian inference, they wouldn't need Philip Tetlock to tell them the equation itself is nearly useless as a supplement to most human reasoning. Yet a newcomer to the community would get the impression that the bias literature and Bayes Theorem are central features.

Apathy implies inaction, which implies something very strange about Bayesian Rationality. In The Sequences, Eliezer warns about schools proliferating without evidence, and about how you need measurement, testing, statistics, and so on for your organized practice of X to mean anything. You are probably expecting me to tell you that Bayesian Rationality is a school proliferating without evidence, but the conclusion is so much odder than that. More than just proliferating without evidence, Bayesian Rationality seems to be a school proliferating without practitioners. It still replicates memetically, but nobody is doing anything directly based on the ideas it espouses.

The Sequences: Trapped In Amber

If you were to join the LessWrong rationalist community in 2018, you would probably be told to read The Sequences. The more time that passes since their publication date, the less sensible this seems. Certain portions are timeless; other parts could do with some revision. Beyond the things which are outdated, there are plenty of new developments from the past decade that could be added. It's not as though Bayes Theorem was somehow invalidated; it is a mathematical law, after all. Rather, we've since learned that it's possible to iterate on inference and get measurably better at it using the Brier Score. Other candidates for inclusion exist, such as Jonathan Haidt's research on the six moral foundations.

It's very difficult for our collective understanding to advance when the introductory material starts people off where we were in 2009. The knowledge of some LessWrongers is quite deep, but the conversations they can have with that knowledge in public are bottlenecked by a lack of common knowledge among other potential participants. In my next post I'll show how this dynamic came about, and what it looks like when a community actively updates its knowledge in response to new information and events.


(0): Resnick, Brian. (2018, August 27). More social science studies just failed to replicate. Here's why this is good. Retrieved from https://www.vox.com/science-and-health/2018/8/27/17761466/psychology-replication-crisis-nature-social-science