It seems to me that a blog whichs (a word I made up for the possessive of an inanimate object, pronounced “witches”) epistemology is based largely in uncertainty about the universe should start by explaining what it thinks truth is. My trouble in doing this is that a lot of this post draws on arguments I would like to make in future posts, which leaves me unsure what order to write things in. Nevertheless, I’ve decided on this order, and in the rather likely event that I abandon this project after several posts, I will at least have a post written on this topic.
I’ve defined the ethos of this blog to be about how to deal with uncertainty in one’s beliefs, and how one’s admitted ignorance of the world around em (Spivak pronouns will be the default for this blog; however, because I am not as conscious of these things as I should be, I will likely mess this up at some point. Please remind me if you notice this.) should affect eir actions. This, of course, assumes that one admits some ignorance of the world around em, which I understand some people claim not to have. I tend not to take these people seriously. That being said, the best data I have (PredictionBook) indicates people are only right about 92% of the time they are 100% confident of something. If you think you can do better, I would invite you to do so.
One of my biggest issues with mainstream philosophy is that it seems to look at the problem of uncertainty in the world and conclude that the answer is nihilism and moral relativism, and that discovering truth is impossible. I have seen it taken even further, with the absurdity of these positions used as a strawman to knock down uncertainty itself. But just because there is some uncertainty does not mean that statements about how likely things are to happen can’t be made.
So how do we define truth in a probabilistic world? If, a year ago, I had said “I think the world will end on December 21st, 2012 with 75% probability” and December 22nd rolled around, I could have just made the excuse, “Oh, well that was just the 25%”, and gone on. What’s more, there’s a pretty decent chance that I’m right – one in four isn’t terribly bad odds. But, statistically, I can only make this excuse about three times before there’s a 98% chance I’m just full of crap. (1) By repeatedly evaluating a theory’s, belief’s, or person’s predictions, we can get a very good idea of how usefully it or ey can predict the future. (2) Truth, then, is a relative term that best applies when comparing two sets of predictions. To quote The Big Bang Theory, “It’s a little wrong to say a tomato is a vegetable. It’s very wrong to say it’s a suspension bridge.”
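The arithmetic behind that 98% figure is worth spelling out. Here is a minimal sketch (the 75% confidence and three failures are the numbers from the paragraph above):

```python
# If a prediction made with 75% confidence fails, the "that was just the 25%"
# excuse is available once: a well-calibrated predictor misses 25% of the time.
p_miss = 1 - 0.75

# But three independent misses in a row happen by bad luck only 0.25^3 of the time.
luck = p_miss ** 3           # 0.015625 – about 1.6%
miscalibrated = 1 - luck     # about 98.4%, the "full of crap" threshold

print(f"three straight misses by luck alone: {luck:.4f}")
print(f"evidence of miscalibration: {miscalibrated:.1%}")
```

So after three failed 75% predictions, the excuse has worn out: there is roughly a 98% chance the predictor’s stated confidence doesn’t match reality.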
This may be a confusing example at first, because it doesn’t sound like the theories are making any predictions. But the words have predictions inherently associated with them – the former predicts that I will be able to eat it, while the latter predicts I’ll be able to drive my car over it. We could devise any number of tests for determining whether the tomato is a vegetable and whether it is a suspension bridge. The key difference is that the tomato would pass a lot of tests for vegetability and very few tests for being a suspension bridge. That makes the tomato much more like the category “vegetable” than the category “suspension bridge”, even if it’s not a perfect member of either category.
So if truth is relative, we need a metric (or at least one) for comparing more true things to less true things. For any theory, belief, or person, we can generate a set of predictions with associated probabilities. We can test those, or wait for them to happen, and assign a 1 to those that happened and a 0 to those that did not. The result minus the predicted probability gives us how far over or under someone predicted the result. We can take the mean and standard deviation of these errors and use them as a way of describing the population of errors. If the average error is +0.05, or +5%, your results come in 5% above your stated probabilities – you average being 5% under-confident. (Coincidentally, your author is, on average, 5% under-confident.) Over the long haul, however, you could average an error of 0 simply by predicting with random probabilities. But in this case, you would expect to see high variation. We can use the inverse of the variance (3) (hereafter IOV) as a way of measuring how directionally correct one’s predictions are. I did a great deal of testing on simulated data to see what kind of scale I was working with on IOV (one to infinity). Random probabilities and random results give an IOV of exactly 3, so IOVs below 3 are directionally incorrect, and IOVs above 3 are directionally correct. My personal IOV is around 7.0 (n=51). It’s important to note, though, that being consistently under- or over-confident will tend to raise one’s IOV. I don’t have a great way of determining if a 5% average error and an IOV of 6 is better or worse than a 6% average error and an IOV of 8. I think it comes down to preference – ultimately, for a set of predictions, IOV measures how accurate you are, and average error only measures a systematic error towards over- or under-confidence. (4)
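For the curious, here is a minimal sketch of the scoring procedure described above (this is illustrative code, not the actual simulation I ran; the sample data is generated at random):

```python
import random
from statistics import mean, pvariance

def calibration_stats(predictions, outcomes):
    """Average error and inverse-of-variance (IOV) for a set of predictions.

    predictions: stated probabilities in [0, 1].
    outcomes: 1 if the event happened, 0 if it did not.
    Error is outcome minus predicted probability, as described above.
    """
    errors = [o - p for p, o in zip(predictions, outcomes)]
    return mean(errors), 1 / pvariance(errors)

# Sanity check of the baseline claim: random probabilities against independent
# coin-flip outcomes should give an IOV near 3, because the error variance is
# Var(outcome) + Var(prediction) = 1/4 + 1/12 = 1/3.
random.seed(0)
preds = [random.random() for _ in range(100_000)]
outs = [random.randint(0, 1) for _ in range(100_000)]
avg_err, iov = calibration_stats(preds, outs)
print(f"average error: {avg_err:+.3f}, IOV: {iov:.2f}")  # IOV lands close to 3
```

Feeding in a real track record of (probability, outcome) pairs instead of random data gives the two numbers discussed above: a systematic bias (average error) and a directional accuracy score (IOV).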
So what do I think truth is? Absolute truth is actually pretty easy: something is absolutely true only if it has 0 average error and infinite IOV. Everything else is wrong in some way. But the two metrics still give us a very good description of how accurate and precise a theory is, and allow us to compare two admittedly flawed theories and reject the worse one. If this blog is to have a defined epistemology, it is this: the fact that truth is relative needn’t stop us from knowing things. The process of making predictions, measuring error, and adjusting our theories will lead us to more accurate beliefs over time. Go forth and measure.