Monday, October 29, 2012

Being a “free man” costs a lot of money

Never before in the United States, we're told, has having a college education been more important for finding a good-paying job. Yet many Americans are critical of the traditional four-year college plan, questioning whether college is worthwhile and whether too many kids are going to college these days. Together, these two points of conventional wisdom suggest that many people nowadays think it isn't worthwhile to have a job that pays well.

I have mixed feelings about this. On the one hand, I am college educated and have done well from it. I went to a traditional school for a traditional four years using a traditional all-expenses-paid-by-my-parents financial plan. I got a degree in computer science, and since then the job market has been, on average, good for people who aren't afraid of computers. So I can't say that my life would have been easier and more materially profitable if I hadn't gone to college. Probably it would have been neither.

But college for me ended eleven years ago. I recently looked at estimated expenses for my alma mater and discovered they've nearly doubled since I graduated in 2001. It's the same story as everywhere else: college costs are steadily growing faster than inflation, and middle-class people are eating the costs. So when is college no longer worth it?

Though this is a subjective question, there's an important objectivity to it; a college education may be assigned a monetary value just as any annuity may be. Suppose a degree allows you to earn $X more per year than you would earn without it. Further suppose your degree costs $Y in direct expenses, plus the opportunity cost of having missed $Z of income for four years while you're busy going to frat parties and playing intramural sports. How much money is that degree worth? For a sufficiently small value of $X combined with sufficiently large values of $Y and $Z, a college degree is worth a negative amount. That is, it pays back less than what it costs to obtain.
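To make that concrete, here's a rough back-of-the-envelope sketch in Python. The numbers are made up for illustration, and a more careful version would discount future earnings the way an annuity valuation does, but the only point is that the sign of the answer can flip.

    # Rough sketch: lifetime value of a degree, with hypothetical numbers.
    def degree_net_value(extra_income_per_year,   # $X: yearly earnings premium from the degree
                         direct_cost,             # $Y: tuition, room, board, books
                         forgone_income,          # $Z: wages skipped during four years of school
                         working_years=40):
        """Undiscounted total: lifetime earnings premium minus what college cost."""
        return extra_income_per_year * working_years - (direct_cost + forgone_income)

    # A $5,000/year premium against $220,000 of total cost comes out behind;
    # a $15,000/year premium comes out well ahead.
    print(degree_net_value(5_000, 120_000, 100_000))    # -20000
    print(degree_net_value(15_000, 120_000, 100_000))   # 380000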

All exponential trends fail eventually, and rising college costs will prove no different. However, I wonder if maybe the biggest factor that will cause this trend to fail will be college becoming a (perceived) negative-returning investment for too many people. That is, many people will stop trying to get into college, satisfied instead to take lower-paying jobs indefinitely or else to scrape out a good income the old-fashioned way, by learning a trade and running a business doing it. Not that either of these alternatives leads to a cushier life than what follows from muddling through tests and writing clutter-filled papers for four years at an esteemed university, but the universe can be an uncaring place when it comes to one's personal problems.

I pity today's parents for the education decisions they face. In the next few decades, many families and their would-be college-bound kids will opt out of taking on a lot of debt, instead forgoing college and a better chance of working a higher-paying job in order to come out ahead by earning less. But lost somewhere amidst the dollar figures and the class-status stigma of this decision is the subjective, intrinsic value of a good education and the habit of thinking critically about the world.

Thursday, October 25, 2012

“Quiz time!” Recap

Last week's Quiz time! post about the Allais Paradox generated the most diverse and on-topic set of reader comments for a JEC post in a while, so today I'm going to recap.

What I neglected to mention in last week's post is that a lot of people answer A-B-A or A-B-B rather than one of the two rational sets of answers, A-A-A or B-B-B. One explanation for this phenomenon is a psychological effect called the certainty effect.

The certainty effect happens when a person assigns a premium to a certain outcome for the sake of certainty. For example, imagine you have a 100% chance of winning $1 million. Now imagine your chance of winning decreases to 90%. How much worse do you feel at 90% than you did at 100%? Now further imagine your chance of winning decreases to 80%. How much worse do you feel at 80% than you did at 90%? Many people feel the change from 100% to 90% more acutely than they do the change from 90% to 80%, and they're willing to pay a premium for it. A common rationale is that the 100% chance is a sure thing while both 90% and 80% aren't sure things. Nevertheless, the decrease in expected value from 100% to 90% is the same as the decrease from 90% to 80%. Objectively, if we were playing only the odds, we wouldn't favor certainty for the sake of certainty.
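In expected-value terms, the two drops are worth exactly the same amount:

\[ (100\% - 90\%) \times \$1 \text{ million} = (90\% - 80\%) \times \$1 \text{ million} = \$0.1 \text{ million} \]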

But should we play only the odds? There's more than one way to look at it.

Firstly, there's subjectivity in any chancy game. As last week's first question showed, some people would prefer to keep a sure $1 million, while some people would prefer to give up one percentage point of certainty to gain a 10-percentage-point chance of doubling their winnings. But if you were strictly playing the numbers, then you would always pick the 10-for-1.

\[ (89\% \times \$1 \text{ million}) + (10\% \times \$2 \text{ million}) = \$1.09 \text{ million} \]

But to many people, the extra $0.09 million isn't worth risking $1 million—no matter what the odds are. They value winning differently than, say, a billionaire who likes thrills.

However, though we expect different people to value risk differently, we might expect each person to be consistent with respect to their own risk valuations. But often this isn't the case, and that's what the three questions in last week's post together show: people say they would tolerate a one-percentage-point smaller chance of winning only when that decrease is from 11% to 10%, not when it's from 100% to 99%. That's akin to saying that sometimes the extra $0.09 million is worth risking $1 million for, but sometimes it isn't.
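If you tally only the expected values, the two questions point the same way, and by the same margin. A quick sketch in Python, just restating the arithmetic from the questions:

    # Expected values for Question #1 and Question #2, in millions of dollars.
    q1_a = 1.00 * 1                      # a sure $1 million
    q1_b = 0.89 * 1 + 0.10 * 2           # = 1.09
    q2_a = 0.11 * 1                      # = 0.11
    q2_b = 0.10 * 2                      # = 0.20

    print(round(q1_b - q1_a, 2))   # 0.09 -- option B is worth $0.09 million more
    print(round(q2_b - q2_a, 2))   # 0.09 -- the same gap again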

Is there a good reason to waver on that choice?

Monday, October 22, 2012

Presidential Election 2012

In fifteen days We the People of the United States will vote to elect our National Hood Ornament for the next four years. Today's blog post constitutes, I hope, my only direct political commentary about that upcoming election.

I don't care who you vote for. I don't care whether you vote based on an informed decision or an uninformed decision. But I would like to convince you to vote for the candidate whom you would most like to see win, regardless of whether that candidate hails from one of the two major political parties or else is a third-party candidate who has no chance of winning.

The core of my argument is that you have no good reason not to vote for a third-party candidate, should you feel inclined to do so. Don't be afraid of throwing your vote away. By voting for either Mr. Romney or President Obama, you're also throwing your vote away, so you may as well vote for the candidate you think best represents you.

As to why you'll be throwing your vote away regardless of whom you vote for, that's revealed by a moment's consideration: your vote will not be a tie-breaking vote. You won't tip a majority of the country's electoral college one way or the other. You won't be as Kevin Costner was in the movie Swing Vote. Your vote won't matter, at least not in the sense that voting for one candidate is worthwhile and voting for another candidate is a waste. These are probabilistic facts made certain if you reside in a non-swing state, as most Just Enough Craig readers do.

So vote for whom you like. And keep in mind that though We the People don't choose which two political parties sit atop the ballot and receive ample free coverage in the news, we do influence which issues the two major parties talk about and which they ignore and do nothing about. Despite all the bitter disagreement between the Republicans and the Democrats, those parties agree more than they disagree, and every vote that is thrown away on a third party gives more incentive to both the Republicans and Democrats to do something about those ignored issues in order to capture a third party's votes.

It's an election of percentage points. Don't be a captive constituent.

Thursday, October 18, 2012

Quiz time!

Quiz time! Don't worry, there are no wrong answers. But please think about and answer each question in turn, before moving on to the next question.

Question #1: Which of the following would you prefer to have?

(A) A 100% chance of winning $1 million

— or —

(B) An 89% chance of winning $1 million, a 10% chance of winning $2 million, and a 1% chance of winning nothing?

Question #2: Which of the following would you prefer to have?

(A) An 11% chance of winning $1 million and an 89% chance of winning nothing

— or —

(B) A 10% chance of winning $2 million and a 90% chance of winning nothing?

Question #3: For this question, imagine there's a box in front of you. You have no idea what's in the box: It could be something good, such as a billion dollars; or it could be something bad, such as a poisonous spider; or it could be something neutral, such as a used pencil. With that in mind, which of the following would you prefer to have?

(A) An 89% chance of winning whatever is in the box and an 11% chance of winning $1 million

— or —

(B) An 89% chance of winning whatever is in the box, a 10% chance of winning $2 million, and a 1% chance of winning nothing?


If by now you suspect these are trick questions, you're right. While there's no wrong way to answer the questions, all the questions taken together have only two rational sets of answers: all A or all B. Any mixing of A and B answers leads to a contradiction. Here's why.

The first question is entirely based on preference: would you rather have the sure thing or take a small risk to go for a bigger gain? There's no correct answer.

Question #2 phrases the same question differently by removing an 89% chance of winning $1 million from each choice. However, for many people who answer A to Question #1, the same choice seems too prudent for Question #2. Why increase your chance of winning by a mere percentage point at the cost of giving up half the winnings?

Question #3 shows the similarity between the two previous questions by replacing the missing 89% chance with a mystery box. The box shouldn't affect your answer because both the A and B answers give an identical 89% chance to win the box. So you should decide which answer to pick based on the remaining odds: an 11% chance to win $1 million versus a 10% chance to win $2 million—the same choice as in Question #2.

However, the contradiction is that the same reasoning works for Question #1, too. To see that, imagine that Question #1 were phrased as follows:

Question #1-B: Which of the following would you prefer to have?

(A) An 89% chance of winning $1 million and an 11% chance of winning $1 million

— or —

(B) An 89% chance of winning $1 million, a 10% chance of winning $2 million, and a 1% chance of winning nothing?

Question #1-B is the same as Question #1, and it's also the same as Question #3 but with the mystery box replaced with $1 million. Therefore, based on the similarity we already established between Question #2 and Question #3, all three questions are asking the same thing, the one difference being what's offered at an 89% chance: $1 million, nothing, or a mystery box. And the 89% chance shouldn't affect your choice in any of your answers, so you should choose the same answer for all three questions.

If you live on Planet Rational.
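If you want to check the bookkeeping, here's a small sketch in Python that builds all six options from the same two 11% "tails" and an interchangeable 89% filler. (The prize labels are just strings for illustration.)

    # Each question offers the same pair of 11% tails; only the 89% filler changes.
    tail_a = {"$1M": 0.11}                      # an 11% chance of $1 million
    tail_b = {"$2M": 0.10, "nothing": 0.01}     # 10% of $2 million, 1% of nothing

    def build_option(common_prize, tail):
        """Combine an 89% chance of common_prize with one of the 11% tails."""
        option = {common_prize: 0.89}
        for prize, p in tail.items():
            option[prize] = round(option.get(prize, 0) + p, 2)
        return option

    # Question #1 fills the 89% slot with $1M, Question #2 with nothing,
    # and Question #3 with the mystery box. A and B differ only in the tails.
    for filler in ("$1M", "nothing", "the box"):
        print(filler, build_option(filler, tail_a), build_option(filler, tail_b))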

These questions make up what's called the Allais Paradox. I've lifted it from another William Poundstone book I've started reading—this one called Priceless: The Myth of Fair Value (and How to Take Advantage of It).

Monday, October 15, 2012

Stag hunt, deadlock, & the sickle cell anemia–malaria game

In last week's post, inspired by William Poundstone's book Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb, I owned up to my years-long mistake of calling social dilemmas in general prisoner's dilemmas, and I described a distinctly different social dilemma called chicken. Today I'll describe two more dilemmas from the book.

Stag hunt

The stag hunt is like prisoner's dilemma but with mutual cooperation giving the greatest good. Here's the payoff table. (Note that, unlike in last week's post, lower numbers are better here: 1 is the best outcome and 4 is the worst.)

            Cooperate   Defect
Cooperate     1, 1       4, 2
Defect        2, 4       3, 3

Because in the stag hunt everyone is best off cooperating, there should in theory be no dilemma: the rational choice is to always cooperate. But that only happens on the make-believe planet inhabited only in the minds of renowned economists, the place where everyone is rational. In the real world the situation is more interesting because cooperating with an irrational player who defects causes you to end up with the worst possible result: a 4. Thus, there's a preventive incentive to defect—just in case your opponent is thinking the same thing.

This makes stag hunt more of a tragedy than prisoner's dilemma and chicken. Whereas in those two games the players are victims of circumstance, the problems born of a stag hunt are self-made owing to a lack of trust.
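A few lines of Python make the two pulls explicit. This just restates the table above (lower rank is better): cooperating is the best reply to a cooperator, but defecting is the best reply to a defector.

    # Stag hunt payoffs as (my rank, opponent's rank); 1 is best, 4 is worst.
    payoff = {
        ("cooperate", "cooperate"): (1, 1),
        ("cooperate", "defect"):    (4, 2),
        ("defect",    "cooperate"): (2, 4),
        ("defect",    "defect"):    (3, 3),
    }

    def best_reply(opponent_move):
        """My move with the best (lowest) rank against a fixed opponent move."""
        return min(("cooperate", "defect"), key=lambda me: payoff[(me, opponent_move)][0])

    print(best_reply("cooperate"))   # cooperate -- trust rewards trust
    print(best_reply("defect"))      # defect    -- but distrust punishes trust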

A good real-world fit for a stag hunt meltdown is nearly any kind of financial bubble, in which reason is subordinate to greed and fear. I heard more than one person in Phoenix saying after the housing bubble popped that they felt they had to buy a house during the run-up in prices because they feared otherwise becoming forever priced out of the market. This thinking follows the defect-before-they-do logic that destroys a stag hunt.

Deadlock

The weakest of the four social dilemmas is deadlock.

            Cooperate   Defect
Cooperate     3, 3       4, 1
Defect        1, 4       2, 2

When I read Poundstone's book and first saw the payoff table for deadlock, I tried without success to imagine what this scenario describes. I should have taken a hint from the book's title: Deadlock describes nearly any attempt by two countries to agree to reduce their nuclear arsenals. Cooperation is equivalent to going along with the agreement, and defection is equivalent to breaking the agreement—presumably in secret. In such a scenario, the best outcome for any country is to secretly keep their arsenal while the other country dismantles theirs. Second best is mutual defection, in which case that country at least maintains their nuclear privilege over the have-not countries. The worst outcome is going along with the agreement when the other country defects, in which case there's still a threat of nuclear annihilation and now the country with a dismantled arsenal has no counter-threat.

As its name implies, deadlock leads rational players to always defect, just as in prisoner's dilemma.

Sickle cell anemia and resistance to malaria

So of the four games I described—prisoner's dilemma, chicken, stag hunt, and deadlock—which best describes the conflict inherent in the genetic mutation that leads to increased resistance to malaria but also to having sickle cell anemia? As you may remember, the conflict is symmetrical: any parent having (a single copy of) the mutation benefits from increased malarial resistance, but children of two parents both possessing the mutation may end up with sickle cell anemia.

Imagine the game as being played between the parents, with each parent choosing either to cooperate by not having the mutation or to defect by having the mutation. Here's the payoff table.

            Cooperate   Defect
Cooperate     3, 3       2, 1
Defect        1, 2       4, 4

As I've assigned the values, the best outcome for an individual is to have the mutation while one's mate does not. But the second best outcome is to switch roles so that one's children still have a chance at getting one copy of the mutation. Mutual cooperation is third best, and mutual defection, which leads to the possibility of children with sickle cell anemia, is worst.

So it turns out the sickle cell anemia–malaria game doesn't match any of the four social dilemmas I described. Indeed, I'm not sure whether it's strictly a social dilemma at all. In an iterated version, the best course would be to take turns cooperating and defecting while the other player does the opposite. In a one-shot version—which is how the game must be played in real life—the dilemma is over choosing who gets to defect, with the loser still getting the second best outcome. Because a sole defection beats mutual cooperation, this game may lack an ingredient necessary for it to be considered social.
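As a check on that last claim, here's a small Python sketch of the table above (same lower-is-better ranks) confirming that both parents rank a sole defection ahead of mutual cooperation:

    # Sickle cell anemia-malaria game as (my rank, mate's rank); 1 is best, 4 is worst.
    payoff = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (2, 1),
        ("defect",    "cooperate"): (1, 2),
        ("defect",    "defect"):    (4, 4),
    }

    sole_defection     = payoff[("defect", "cooperate")]      # one parent has the mutation
    mutual_cooperation = payoff[("cooperate", "cooperate")]   # neither parent has it

    # Ranks 1 and 2 both beat 3 and 3, so a sole defection is better for both parents.
    print(all(d < c for d, c in zip(sole_defection, mutual_cooperation)))   # True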

Thursday, October 11, 2012

Prisoner's dilemma & chicken

I recently finished reading William Poundstone's book, Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb, so I've got game theory on my mind, and that's what today's post is about. My apologies to reader Jill, who'll stop reading about…now.

Prisoner's dilemma

For years I've called any social dilemma where there exists a conflict between the good of the group and the good of the individual a prisoner's dilemma, but this is wrong. In the language of game theory, a prisoner's dilemma is a specific kind of social dilemma, and the other social dilemmas have their own names.

What all social dilemmas have in common is two or more people deciding, independently of others, whether to cooperate or to defect. In each dilemma, defection brings about a reward for the defector and punishment for the cooperator, though mutual defection is an undesired outcome. Prisoner's dilemma is the best known of the dilemmas because it's the most brutal: defecting is always the better choice for the individual despite mutual defection being undesirable. As a decision table, it looks like this.

            Cooperate   Defect
Cooperate     3, 3       1, 4
Defect        4, 1       2, 2

The way to read the table goes as follows: in each cell, the first number represents the payoff for the player whose choice determines which row to use, and the second number represents the payoff for the player whose choice determines the column. Each payoff is ranked from 1 to 4, with higher numbers being better for that person. For example, if the row player (Player 1) defects, and the column player (Player 2) cooperates, then we follow the defect row and the cooperate column and see that Player 1 is rewarded with a 4 (top score) and Player 2 is punished with a 1 (lowest score).

As you can see from the table, a prisoner's dilemma player is always better off defecting than cooperating even though mutual cooperation beats mutual defection. To see this, imagine you're Player 1, so you must select a row. Player 2 has already made their choice, though you don't know what that choice is. Should Player 2 have decided to defect, then you ought to select the best cell for you in the second column. Your choices are to cooperate and score a 1 or to defect and score a 2, so you're better off defecting. Whereas, should Player 2 have decided to cooperate then you should select the best cell in the first column. Your choices then would be to cooperate and score a 3 or to defect and score a 4, so again you would be better off defecting. The dilemma here is that Player 2 will likely arrive at the same conclusion and thus defect, too. Therefore, in a prisoner's dilemma two rational players will both defect, bringing about a score of 2 for both players. Whereas, two irrational players might have both cooperated and each scored a 3.
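The same argument fits in a few lines of Python. This just restates the table (higher number is better): whichever column Player 2 picks, the defect row scores higher for Player 1.

    # Prisoner's dilemma payoffs as (row player, column player); 4 is best, 1 is worst.
    payoff = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (1, 4),
        ("defect",    "cooperate"): (4, 1),
        ("defect",    "defect"):    (2, 2),
    }

    # Defection strictly dominates: it scores higher against either choice by Player 2.
    defect_dominates = all(
        payoff[("defect", other)][0] > payoff[("cooperate", other)][0]
        for other in ("cooperate", "defect")
    )
    print(defect_dominates)   # True, even though mutual defection (2, 2) loses to mutual cooperation (3, 3)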

That's prisoner's dilemma. What are the other dilemmas?

Chicken

Another dilemma is called chicken. Here's its decision table.

            Cooperate   Defect
Cooperate     3, 3       2, 4
Defect        4, 2       1, 1

Chicken gets its name from any number of popular uses, one being a game where two people each drive a car straight at the other to see who swerves out of the way first. The winner is the macho player who doesn't swerve (score 4) and the loser is the coward who does swerve (score 2), though the catastrophic outcome is when both players try to win, in which case both macho drivers are dead on impact (score 1).

Chicken is a useful scenario for describing a situation in which everyone involved needs someone to commit to a sacrificial action, but everyone has incentive not to be that person. One example is stopping a well-armed and suicidal hijacker; someone needs to stop the hijacker for the good of the group, but individually the best course of action is for someone else to be the hero. In some ways this is worse than the prisoner's dilemma because with prisoner's dilemma there's a fixed rational choice—always defect—but in chicken there's no fixed rational choice: sometimes a player is better off defecting and sometimes they're better off cooperating. Indeed, in a version of chicken where your behavior can affect the other player's decision (though this is not allowed in the strict game-theory version of the game, where both players must make their decision independently of the other), one rational course of action is to convince the other player of your own irrationality, thereby making the other player more fearful of the looming catastrophe of mutual defection and thus more likely to cooperate. But of course the other player can try the same trick on you! Just as with prisoner's dilemma, chicken is a dilemma with no clear, ideal solution.
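The lack of a fixed rational choice shows up directly in the table. A short Python sketch (higher number is better, as above): the best reply flips depending on what the other driver does.

    # Chicken payoffs as (row player, column player); 4 is best, 1 is worst.
    payoff = {
        ("cooperate", "cooperate"): (3, 3),   # both swerve
        ("cooperate", "defect"):    (2, 4),   # I swerve, they don't
        ("defect",    "cooperate"): (4, 2),   # they swerve, I don't
        ("defect",    "defect"):    (1, 1),   # neither swerves
    }

    def best_reply(other_move):
        """Row player's best (highest-scoring) reply to a fixed move by the other driver."""
        return max(("cooperate", "defect"), key=lambda me: payoff[(me, other_move)][0])

    print(best_reply("cooperate"))   # defect    -- stay the course against a swerver
    print(best_reply("defect"))      # cooperate -- swerve if the other driver won't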

Next post, I'll describe two more social dilemmas from the book.

Monday, October 1, 2012

Cottonwood – Prescott Valley – Cottonwood

The shortest path between Cottonwood and Prescott Valley is the zig-zaggy AZ 89A, which passes through the town of Jerome before going up and over Mingus Mountain. Thursday last week I drove Laura's car to Cottonwood and biked the route as an out-and-back.

Those of you who don't live in Arizona likely have never heard of Jerome (jah-ROAM). It was established in the late 1800s as a mining town, and like many mining towns of that era, Jerome went through a cycle of extreme boom and bust. According to its Wikipedia article, the town's population peaked at over 15,000 people in 1929 and plummeted to about 50 by the late 1950s. Since that low, Jerome, like many of the more fortunate former mining towns, has reestablished its economy by eking out an existence based on tourism.

What makes Jerome stand out is its topography: it's built on the side of a mountain. As you pass along the switchbacks of its main street, past the numerous restaurants and art galleries, nearly every view gives a panorama of the Verde Valley over 1000ft below. My point-and-shoot camera can't capture the beauty and gradients of the town, so I settled for this photo of the street overlooked by the J on the mountainside above.

But my trip wasn't a tourist trip. It was a bike ride with an aim to do some climbing. The pass over Mingus Mountain is about 7000ft above sea level, Cottonwood is about 3400ft, and Prescott Valley is about 5000ft. I opted to start in Cottonwood (as opposed to Prescott Valley) so that I would do the big climb first and the big descent last. This strategy worked out well because by the time I made my second pass up and over the mountain, it was during the heat of the day, and I was looking forward to a cool, breezy descent down the mountain.

A frequently asked question about a bike ride like this is: How fast did you go down the mountain? In truth I don't know because my nifty high-tech Garmin GPS had a dead battery before I left the parking lot to start my ride. But for a ride like this—which took five hours, including a stop at a Subway to eat an egg-and-cheese omelet sandwich—nearly all the time was spent slowly cranking up the mountain, and I enjoyed only a few minutes on the way down that seemed faster than they were. It's like eating a full plate of Brussels sprouts and then having one small bite of a cookie. But oh is that cookie a lot of fun.