Gamers, Girls, and Human Nature

Editor’s Note: I’m posting this a few weeks after I wrote the original draft, both because I don’t get the urge to write very often and because I wanted to let the original stories die down a bit.

It’s been a bad few weeks for nerd culture. After Zoe Quinn, an indie game developer, was recently publicly accused by her ex of cheating/sleeping around in order to advance her career, internet warriors on Reddit and 4chan began leaking her personal information and using it to harass and threaten her, even after her ex denounced the harassment. Validity or invalidity of the accusations aside, personal threats should never be tolerated. During all this, Anita Sarkeesian posted the newest (and possibly best) of her excellent Tropes vs. Women video series. Sadly, the resulting backlash seems to indicate that personal threats are the norm for women on the internet. Eventually, the threats caused her to leave her house out of fear for her safety. The crowning gem, and the only story which has received (to my knowledge) mainstream press, was, of course, the hacking and subsequent dissemination of multiple celebrities’ nude photographs on (drumroll please) sites such as Reddit and 4chan.

There’s quite a bit of backlash over this, and rightly so. Kotaku wonders if the identity of “gamer” itself is dead or dying. As someone who wholly self-identifies as almost every stereotype of “nerd” and probably plays close to forty hours of video games a week, I’ve got to say I agree pretty heartily with nerd icon and personal hero Wil Wheaton on this one. Possibly my favorite response goes to Chris Kluwe.

It’s not just women under attack by nerd culture. It’s no secret that “the internet” is wildly homophobic, overtly sexist, and oftentimes rather racist. In fact, the most upsetting thing is that stories of this ilk are far from uncommon; they’re closer to par for the course – all three have happened before. But the question that always grates at me is: why? It’s not as though “the internet” can stand on centuries of religious tradition, which is the typical justification (at least for the first two). In fact, wouldn’t you expect nerds – a demographic that is literally the stereotype of the bullying victim (OK, I really just wanted to link to that video), the counterculture individual, and the freethinker – to band together to stand up for people being systematically oppressed by “the man”? Sadly, it seems that just as children of abusive parents are more likely to become abusive, the geeks, nerds, and dweebs who have been cast out by popular culture are perfectly happy to complete the cycle and become bullies themselves.

It’s been bothering me recently that practical jokes are as much a part of our culture as they are. I work in a blue-collar environment, and I see a lot of them. One day I started wondering why. The joker risks his trustworthiness and reputation (two traits of no small value) to display his dominance over another person and make them the fool. It says a lot about human nature that jokers are viewed as funny and friendly (high status), not dishonorable and untrustworthy (low status), while the jokees are viewed as gullible and stupid (low status), not trusting and empathetic (high status). It seems almost as though the character traits displayed are irrelevant; perceived status is determined almost solely by one person’s dominance over another. It certainly seems that this is the lesson nerd culture has taken away – dominate, or you will be dominated.

There are obviously differences between a practical joke and hacking and disseminating nude pictures of someone, or threatening them with violence. But I believe they are more closely related than one might think. Instead of learning the lesson that nerds should stand up for people being oppressed by the system, we’ve learned that the way to gain status is to have another group that you can point to and say, “Well, look, I’m still better than them.”

Ultimately, this is what’s most depressing to me about these situations – the above opinion seems to be in the vast minority (note that the linked posts are all massively upvoted). Most people simply don’t seem to care. Macklemore and Ryan Lewis make much the same point about hip-hop culture and homosexuality: “Our culture founded from oppression / yet we don’t have acceptance for ‘em, / [we] call each other faggots / behind the keys of a message board.”

I’m often asked (well, I would be if I talked to people) why I’m so cynical about human nature. What these episodes imply about human nature is the answer. Instead of learning to recognize oppression and learning empathy, nerds turn their keyboards on every minority out there for no reason other than to avoid being the bullied one. And I’m just not sure I can think of any other word for that than profoundly, profoundly sad.

The Blind, Idiot God of Capitalism

Scott Alexander of Slate Star Codex recently wrote what I think may be a candidate for the most outstanding blog post of all time. Go ahead, read it. I’ll wait. It’s a lot more intelligent/witty/intellectually original than anything I have to say.

Done yet?

Great.

I think what affected me so strongly about it was that I have never seen my view of society explained so eloquently:

“The system is not your friend. The system is not your enemy. The system is a retarded giant throwing wads of $100 bills and books of rules in random directions while shouting “LOOK AT ME! I’M HELPING! I’M HELPING!” Sometimes by luck you catch a wad of cash, and you think the system loves you. Other times by misfortune you get hit in the gut with a rulebook, and you think the system hates you. But either one is giving the system too much credit.”

Which is not to say I think Scott got it exactly right – if he did, I’d just spam a link to his post instead of writing my own. What I think he misses is that the education delusion is only one piece of an even more disturbing societal trend.

We now live in a society that tells us that everyone needs a college degree. You need this degree or you won’t be able to get a good job. You need the job so you can buy a house, have two point four kids, and, oh yeah, pay off all those student loans. Except your degree isn’t even getting you a good job – to set yourself apart from the pack, now you really need to go to graduate school. Except postgraduate education is increasingly looking like a horrible choice. So you settle for something outside your field, or maybe longer hours than you’d like, a longer commute, or less pay. You spend your drive home passing billboards for products you don’t want, and when you get home and open your favorite blog, it might have a banner or two on the side. And then, on Facebook, there’s a link to a website telling you 26 ways only people like you understand XYZ (with more ads on the side). And somehow, at some point, the idea gets into your head that maybe you need a new car – Jessie just got one and ey loves it. You wouldn’t be able to afford it, except you just got your merit raise at work, and they’re offering 0% interest for 18 months. And suddenly, despite your 3% raise, you are still barely making do and living paycheck to paycheck. Sadly, this (or worse) is the end result for a huge proportion of the population.

I tend to look at evolution and capitalism in very similar ways. They are both blind idiot gods, and both are capable of enormously powerful creations. Most importantly, they both operate by the single truism that the only things that will continue to exist are those that are sustainable. Genes sustain themselves in the gene pool by passing themselves on from generation to generation. In business, sustainability takes the form of creating demand. This can drive extremely positive outcomes – it’s the force behind innovation and economic growth. But just as often, it takes the form of advertising or designing products to fail.

On a very basic level, every business needs at least two things to survive: customers and employees. In a pinch, you can lose the employees. By some amazing coincidence, capitalism has evolved a system that perfectly sustains both. Artificial demand keeps employees buying things, creating customers. Those customers continue to need money, creating employees. The end result is about a third of everyone’s day devoted to nothing but spinning the wheel of capitalism.

I’m probably being more unfair to capitalism here than I should be. It is, like I said, responsible for most of the innovation of the 20th and 21st centuries. Capitalism isn’t evil – it’s just purposeless. As Scott said, the system is a retarded giant. Unless we, as a society, impose a purpose on it – through responsible consumerism, moderation, and in some cases regulation – the only results will be those that benefit the system itself.

Crimes Against Language: Missing the Trees for the Forest

I was once listening to a class in which the lecturer was speaking about the value of different perspectives. Attempting to emphasize that two people can often interpret the same situation differently, he drew an analogy to his own eyes. The fundamental structure of his eyes differs from eye to eye – the various rods and cones are arranged in different patterns and different numbers. And the wiring to the brain differs too – the cells transmit the information along different paths and in different ways. “If I don’t see the same thing from one eye to the other,” he concluded, “how can I be upset with someone else for seeing something differently than me?”

I was, to put it mildly, frustrated. This type of statement is a huge pet peeve of mine. I completely agree with his statements about the anatomy of eyes, and I agree that multiple perspectives are tremendously valuable. But he makes not one but two inferential leaps to get from one to the other that are simply not allowed. First, he infers that merely because the electrical relays differ, he sees things differently from eye to eye. If he genuinely believes this, I would like him to take the simple test of closing one eye, then the other, and reporting his observations. Perhaps he will be surprised, but I doubt it. (1) After this, however, and perhaps more frustrating, is the shift in the meaning of the word “see”. He originally uses it to mean “a visual perception”, then completely shifts meaning, equating that to the broader definition of “perspective on a situation”. This makes about as much sense as Netflix hiring Cards Against Humanity to make a deck advertising the season two premiere of “House of Cards”, noting, to everyone’s surprise, that the names of both products contain the word “cards”. Rumor has it that Netflix has also lobbied Extreme Makeover: Home Edition to build a literal house of cards for the premiere of season three. The most disturbing part of all of this is that the lucky recipient will fall asleep looking at Frank Underwood every night.

These errors may not seem like a huge deal to many people. If someone agrees with your premise A and your conclusion B, and the two are tied together linguistically, they will often overlook the fact that A and B aren’t even talking about the same thing. People who agree with you will simply note that you’re scoring a point for their side and that your argument sounds remotely reasonable. They will smile, nod, and smugly think to themselves, “I’m so smart for thinking B is true; I knew it all along.” Meanwhile, opponents will immediately realize that A and B are completely unrelated and will be more likely to dismiss future arguments for B. Making bad arguments makes both sides more confident of their opinions, regardless of which side you’re arguing for.

This accomplishes nothing but further polarizing the debate, perpetuating a culture that emphasizes winning an argument at any cost, regardless of the logical missteps made along the way. And the more polarized the debates become, the more likely people are to grasp at straws and make bad arguments, resulting in a vicious cycle. Political (and I’m not only talking about public policy) arguments have become so visceral and so heated that the only thing that matters is winning. But if we are ever going to improve at decision-making as a society, we must deemphasize the answers in favor of the methods used to get those answers. In fact, the answers can be crackpot ideas, but if the methods used were rigorous, they deserve the same deliberate investigation as other, more “scientific” theories. (2)

At any rate, this kind of language error is fairly common. For some reason, people seem to be under the impression that confusing the various meanings of a word makes someone wise or reveals inner truths about the world. As someone who avidly maintains a collection of quotes fifteen pages long, I see this a lot. Ayn Rand once remarked, “The smallest minority on earth is the individual. Those that deny individual rights cannot claim to be defenders of minorities.” Indeed, mathematically, the smallest natural number is one. I used to really like this quote: it scored points for my team, and did so in a semi-plausible manner. It completely misses the point, however, that by “minority” what is generally implied is “a group of persons with a sociological history of being oppressed by another group of persons”. This is exactly the reason that, despite being less than 50% of the population, white males are not typically referred to as a “minority”.

Not to lack for examples, Jonathan Swift is quoted as saying, “For in reason, all government without the consent of the governed is the very definition of slavery.” While I’m sure this makes me un-American, I must admit that I can see a few differences between communist China and the pre-Civil War American South. While the communist regime in China is by no means progressive, I think the slaves might have been slightly worse off. At a bare minimum, I wouldn’t suggest arguing otherwise in polite company.

But these linguistic errors don’t make you a Zen Buddhist monk; they just reinforce bad thinking whenever they’re passed on as “deep wisdom”. To paraphrase Sam Harris (at 2:53): in the best case, they provide bad reasons for correct beliefs where good reasons are actually available, and in the worst case, they decouple beliefs on both sides from the rational processes one should go through to reach them. In almost every instance, the answer you get matters very little, but how you get there is everything.



  1. It is possible (although I think unreasonable, particularly given that this charitable interpretation makes the second half of his sentence make even less sense) that our dispute is actually one of definition. This reminds me of the argument “if a tree falls in the forest, does it make a sound?” But since Eliezer has already written an excellent piece on this, and I would simply be plagiarizing him, I have no intention of delving into the topic here.
  2. An unfortunate consequence of treating “absurd” hypotheses seriously is that sometimes the data comes back in a completely unexpected way. This should, I hope, reinforce just how overwhelmingly conclusive the evidence is for the things you believe. Hint: not very.

How to Know Things

It seems to me that a blog whichs (a word I made up for the possessive of an inanimate object, pronounced “witches”) epistemology is based largely in uncertainty about the universe should start by explaining what it thinks truth is. My trouble in doing this is that a lot of this post draws on arguments I would like to make in future posts, which makes me unsure of the order in which I should write things. Nevertheless, I’ve decided on this order, and in the rather likely event that I abandon this project after several posts, I will at least have a post written on this topic.

I’ve defined the ethos of this blog to be about how to deal with uncertainty in one’s beliefs, and how one’s admitted ignorance of the world around em (Spivak pronouns will be the default for this blog; however, because I am not as conscious of these things as I should be, I will likely mess this up at some point. Please remind me if you notice this.) should affect eir actions. This, of course, assumes that one admits some ignorance of the world around em, which I understand some people claim not to have. I tend not to take these people seriously. That being said, the best data I have (PredictionBook) indicates people are only right about 92% of the time they are 100% confident of something. If you think you can do better, I would invite you to do so.

One of my biggest issues with mainstream philosophy is that it seems to look at the problem of uncertainty in the world and conclude that the answer is nihilism and moral relativism, and that discovering truth is impossible. I have seen it taken even further, with the absurdity of these positions used as a strawman to knock down uncertainty itself. But just because there is some uncertainty does not mean that statements about how likely things are to happen can’t be made.

So how do we define truth in a probabilistic world? If, a year ago, I had said, “I think the world will end on December 21st, 2012 with 75% probability,” and December 22nd rolled around, I could have just made the excuse, “Oh, well, that was just the 25%,” and gone on. What’s more, there’s a pretty decent chance that I’m right – one in four isn’t terribly bad odds. But, statistically, I can only make this claim about three times before there’s a 98% chance I’m just full of crap. (1) By repeatedly evaluating a theory’s, belief’s, or person’s predictions, we can get a very good idea of how usefully it or ey can predict the future. (2) Truth, then, is a relative term that best applies when comparing two sets of predictions. To quote The Big Bang Theory, “It’s a little wrong to say a tomato is a vegetable. It’s very wrong to say it’s a suspension bridge.”
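For the skeptical, here is the arithmetic behind “about three times”, as a throwaway Python sketch of my own, assuming each failed prediction is independent:

    # How long can a "75% confident" doomsday prophet keep blaming bad luck?
    # Each miss has a 25% chance of being "just the 25%", so the probability
    # that k straight misses are all bad luck is 0.25**k.
    for k in range(1, 5):
        luck = 0.25 ** k
        print(f"{k} miss(es): {luck:.4f} chance it's luck, "
              f"{1 - luck:.1%} chance I'm full of crap")
    # 3 miss(es): 0.0156 chance it's luck, 98.4% chance I'm full of crap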

The tomato example may be confusing at first, because it doesn’t sound like the theories are making any predictions. But the words have predictions inherently associated with them – the former predicts that I will be able to eat it, while the latter predicts I’ll be able to drive my car over it. We could devise any number of tests for determining whether the tomato is a vegetable and whether it is a suspension bridge. The key difference is that the tomato would pass a lot of tests for vegetability and would not pass many tests for being a suspension bridge. That makes the tomato much more like the category “vegetable” than the category “suspension bridge”, even if it’s not a perfect member of either category.

So if truth is relative, we need at least one metric for comparing more true things to less true things. For any theory, belief, or person, we can generate a set of predictions with associated probabilities. We can test those, or wait for them to happen, and assign a 1 to those that happened and a 0 to those that did not. The result minus the predicted probability gives us how far over or under someone predicted the result. We can take the mean and standard deviation of these errors and use them to describe the population of errors. If the average error is +0.05, or +5%, you average being 5% under-confident. (Coincidentally, your author is, on average, 5% under-confident.) Over the long haul, however, you could average an error of 0 simply by predicting with random probabilities. But in that case, you would expect to see high variation. We can use the inverse of the variance (3) (hereafter IOV) as a way of measuring how directionally correct one’s predictions are. I did a great deal of testing on simulated data to see what kind of scale I was working with for the IOV (one to infinity). Random probabilities and random results give an IOV of exactly 3, so IOVs below 3 are directionally incorrect, and IOVs above 3 are directionally correct. My personal IOV is around 7.0 (n=51). It’s important to note, though, that being consistently under- or over-confident will tend to raise one’s IOV. I don’t have a great way of determining whether a 5% average error and an IOV of 6 is better or worse than a 6% average error and an IOV of 8. I think it comes down to preference – ultimately, for a set of predictions, the IOV measures how accurate you are, and the average error only measures a systematic bias towards over- or under-confidence. (4)
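In code, the bookkeeping looks something like this (a minimal sketch of my own; the function name and the sample track record are invented for illustration):

    from statistics import mean, pvariance

    def score_predictions(pairs):
        """pairs: (predicted probability, outcome) tuples, with outcome 1 if
        it happened and 0 if it didn't. Returns (average error, IOV).
        Note: a perfect predictor has zero error variance, so the IOV
        would divide by zero, i.e., be infinite."""
        errors = [outcome - p for p, outcome in pairs]
        avg_error = mean(errors)     # positive = under-confident, negative = over-confident
        iov = 1 / pvariance(errors)  # inverse of the variance of the errors
        return avg_error, iov

    # A made-up track record: four predictions and what actually happened.
    history = [(0.9, 1), (0.7, 1), (0.6, 0), (0.2, 0)]
    print(score_predictions(history))  # ≈ (-0.1, 8.7)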

So what do I think truth is? Absolute truth is actually pretty easy: something is absolutely true only if it has 0 average error and infinite IOV. Everything else is wrong in some way. But the two metrics still give us a very good description of how accurate and precise a theory is, and allow us to compare two admittedly flawed theories and reject the worse one. If this blog is to have a defined epistemology, it is this: the fact that truth is relative needn’t paralyze us from knowing things. The process of making predictions, measuring error, and adjusting our theories will lead us to more accurate beliefs over time. Go forth and measure.


1. One can observe that a person who claims to be absolutely certain can still be modeled in this way, by using probabilities of 100% and 0%, but I suspect ey will quickly be proven wrong if ey is truly making testable predictions with more than remote chances of going wrong.
2. One might argue that beliefs do not necessarily need to predict the future. My favorite demonstration of this need is an argument by Carl Sagan entitled “The Dragon in My Garage”. He tells the story of a person (unsurprisingly) claiming to have a dragon in eir garage. All eir friends ask to go see the dragon, but lo, the dragon is invisible. One friend suggests putting an oxygen monitor in the garage and watching the dragon consume oxygen and exhale CO2, but, to everyone’s surprise, this dragon has no need to breathe. Another suggests putting a thermometer in the room to measure the heat of the fire, but the claimant says that the fire is heatless. This continues until the friends have exhausted all the ways they could possibly detect the dragon, and the claimant has explained away all of them. Sagan’s point is that the claimant doesn’t actually believe he has a dragon in the garage – he knows what tests he will have to explain away before they are even done. His only true belief is in the idea that he believes there is a dragon; he doesn’t believe in the dragon itself.
3. I prefer to use the inverse of the variance as opposed to the standard deviation because it makes higher = better and is a larger number. It also has a number of interesting properties that I can’t explain mathematically but seem really important: the IOVs for several reasonable methods of generating test data all come out to be suspiciously tidy numbers. If someone could explain this, I would be immensely grateful, as it might help me come up with a way to normalize the IOV into a more intuitively meaningful number. To see what kind of scale I was working with, I set up several tests (the names sort of make sense, but are really arbitrary):
   - Opposite Prediction: random outcome, estimated probability = 1 − outcome. IOV = 1.
   - Optimal Non-solution: random estimated probability, outcome = 1 − ROUND(estimated probability). IOV ≈ 1.714.
   - Perfect Antiprediction: random estimated probability, outcome is a random result with probability 1 − estimated probability. IOV = 2.
   - Random Prediction: random estimated probability, random outcome. IOV = 3.
   - Constant Prediction: random outcome, estimated probability = a constant. IOV = 4.
   - Perfect Prediction: random estimated probability, outcome is random with probability equal to the estimated probability. IOV = 6.
   - Optimal Solution: random estimated probability, outcome = ROUND(estimated probability). IOV = 12.
   - Exact Prediction: random outcome, estimated probability = outcome. IOV = ∞.
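For anyone who wants to poke at these numbers, here is a rough re-creation of the tests in Python. This is my own sketch, assuming “random estimated probability” means uniform on [0, 1] and “random outcome” means a fair coin; Exact Prediction is omitted because its zero error variance makes the IOV infinite:

    import random
    from statistics import pvariance

    N = 200_000  # trials per test

    def iov(pairs):
        """Inverse of the variance of (outcome - predicted probability)."""
        return 1 / pvariance([o - p for p, o in pairs])

    def opposite_prediction():
        o = random.randint(0, 1)          # random outcome
        return 1 - o, o                   # prediction is exactly wrong

    def optimal_non_solution():
        p = random.random()
        return p, 1 - round(p)

    def perfect_antiprediction():
        p = random.random()
        return p, int(random.random() < 1 - p)

    def random_prediction():
        return random.random(), random.randint(0, 1)

    def constant_prediction():
        return 0.5, random.randint(0, 1)  # any constant works; 0.5 chosen here

    def perfect_prediction():
        p = random.random()
        return p, int(random.random() < p)

    def optimal_solution():
        p = random.random()
        return p, round(p)

    tests = [
        ("Opposite Prediction (expect 1)", opposite_prediction),
        ("Optimal Non-solution (expect ~1.714)", optimal_non_solution),
        ("Perfect Antiprediction (expect 2)", perfect_antiprediction),
        ("Random Prediction (expect 3)", random_prediction),
        ("Constant Prediction (expect 4)", constant_prediction),
        ("Perfect Prediction (expect 6)", perfect_prediction),
        ("Optimal Solution (expect 12)", optimal_solution),
    ]

    for name, gen in tests:
        print(f"{name}: {iov([gen() for _ in range(N)]):.2f}")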
4. As a side note, this paragraph is the only part of this blog post that is a remotely original thought. I haven’t seen such a quantification anywhere else, which does not necessarily mean it has never been done before, but I am rather proud of the idea. I have also looked hard for prior work, because I would very much like an answer to footnote 3, and to why the math works out as exactly as it does for the IOV.