Coincidence as Scientific Evidence

In their paper “From Mere Coincidences to Meaningful Discoveries,” Thomas Griffiths and Joshua Tenenbaum (hereafter GT) argue that coincidences inspire our greatest scientific discoveries as well as our greatest superstitious blunders. This post explains what coincidences are and how to analyze their evidential weight.

Using a simple coin flip example, GT dispel the common misconception that coincidences are simply the occurrence of unlikely events. Consider the following two coin flip sequences: HTHHTT and HHHHHH. Given a fair coin, both are equally likely, but the low algorithmic complexity of the second sequence suggests that something strange is afoot, and it is only the second sequence that would feel like a coincidence. This motivates the following definition:
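To make the “equally likely” claim concrete, here is a minimal Python sketch (the helper function is mine, not from GT’s paper):

```python
# Probability of a specific heads/tails sequence given P(heads).
def p_sequence(seq, p_heads=0.5):
    prob = 1.0
    for flip in seq:
        prob *= p_heads if flip == "H" else 1 - p_heads
    return prob

print(p_sequence("HTHHTT"))  # 0.015625 = (1/2)**6
print(p_sequence("HHHHHH"))  # 0.015625 = (1/2)**6
```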

A coincidence is an event that provides support for an alternative to a currently favored causal theory, but not necessarily enough support to accept that alternative in light of its low prior probability.

The second coin flip sequence suggests an alternative theory: the coin is rigged. But if the coin is just some random coin off the street, that theory has a low prior probability. The likelihood ratio P(sequence | rigged)/P(sequence | not rigged) supports rigged, but weighed against the low prior, we would consider the event a mere coincidence.
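As a rough illustration, suppose a rigged coin lands heads 99% of the time (the 99% figure is my assumption, purely for illustration):

```python
# Likelihood of HHHHHH under each hypothesis.
p_given_rigged = 0.99 ** 6  # ~0.941, assuming a 99%-heads rigged coin
p_given_fair = 0.5 ** 6     # 0.015625

likelihood_ratio = p_given_rigged / p_given_fair
print(round(likelihood_ratio, 1))  # 60.3: the data favor "rigged" roughly 60 to 1
```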

But a mere coincidence is evidence nonetheless. The question is: how strong? From a Bayesian perspective, rational agents should update their beliefs when confronted with coincidences, but erroneous beliefs can stem from taking coincidences too seriously. GT point out three ways a belief update can fail:

  1. Failing to accurately estimate the likelihood ratio
  2. Failing to accurately estimate priors
  3. Failing to combine the likelihoods and priors appropriately (the sketch below makes this combination explicit)
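In odds form, Bayes’ rule makes that combination a single multiplication: posterior odds = prior odds × likelihood ratio. Each failure mode above corresponds to getting one of these three pieces wrong. A minimal sketch, with an illustrative prior I have made up:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = 1 / 10_000   # assumed: random street coins are almost never rigged
likelihood_ratio = 60.3   # from the coin example above

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)     # ~0.006, i.e. still about 166 to 1 against "rigged"
```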

Based on their experiments, GT conclude that the main point of failure, or “locus of irrationality,” is (2): estimating priors. This is unsurprising once you consider elaborate conspiracy theories or supernatural phenomena. How could our priors be well-calibrated in such cases?

GT also point out another factor that causes us to overestimate the significance of coincidences: it is computationally easier for us to notice coincidences than non-coincidences. We notice when someone calls us right at the moment we happen to be thinking about them. We don’t notice all the times we thought about them and they didn’t call. That is, we count the hits and not the misses.

Despite GT’s claim that bad priors are the locus of irrationality, I think we suck at estimating likelihoods as well. For instance, 9/11 conspiracy theorists argue that the World Trade Center towers could not possibly have collapsed the way they did unless by controlled demolition. Similarly, people often make claims of miraculous healing based on their estimate that the person who was healed could not possibly have gotten better without divine intervention or the help of some alternative medicine. In both cases, people underestimate the probability of the evidence under the mundane explanation, which inflates the likelihood ratio in favor of the improbable theory.
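To see how sensitive the conclusion is to a misjudged likelihood, consider a toy version of the healing example (all numbers are hypothetical):

```python
# Underestimating the mundane likelihood inflates the case for a miracle.
p_recovery_given_natural = 0.01  # spontaneous remission happens
p_recovery_assumed = 1e-6        # "could not possibly have gotten better"
p_recovery_given_miracle = 1.0   # grant the miracle full explanatory power

lr_true = p_recovery_given_miracle / p_recovery_given_natural  # 100
lr_inflated = p_recovery_given_miracle / p_recovery_assumed    # 1,000,000
print(lr_inflated / lr_true)  # 10000.0: the evidence is overweighted 10,000-fold
```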

Coincidences can lead us astray, but they can also lead to scientific discovery because they represent clues that our current understanding of the world might be incorrect. Such clues provide scientists with a pool of promising theories which, upon further testing, can be confirmed or disconfirmed. Once an alternative theory is confirmed, it becomes the default worldview, and the observations that once seemed coincidental no longer do.

The descent into madness occurs when the further-testing step is skipped and we jump straight from coincidence to conclusion. With limited testing, beliefs are driven less by the evidence and more by priors, which GT identify as the locus of irrationality. So the more data we can collect, the better.
