Thursday, January 31, 2013

Reasons and Persons: self-effacement

Last week I described how the rational self-interest theory is self-defeating when it's combined with the belief that one ought to be never self-denying. I described Parfit's example of the car breaking down in the desert at night and how it was better for the stranded person to be trustworthy than to be never self-denying. However, this example doesn't mean that rational self-interest—abbreviated hereafter as S, as in the book—is always self-defeating. Rather, it means that S tells a person to be self-denying at least in some circumstances. If a person is never self-denying anyway, despite what S says, then it's the fault of the person, not the theory.

But what if a person's belief in S causes that person, mistakenly, to become never self-denying? Would S be at fault? Would S fail in its own terms? Before answering that question, I'll explain why the situation it describes is plausible.

There are many ways to be incompetent at following S. Some ways are tractable. For example, I may believe that I'm overall better off eating broccoli, but instead I succumb to temptation and eat a donut. This is entirely my own fault, caused by the straightforward human failure of valuing the present too much over the future. My failure wasn't in any way caused by S or my belief in S. Theoretically, I could have had a stronger will and better followed my own interests.

But some other ways of being incompetent at following S are intractable. Sometimes, as an adherent to S, my failure to do what's in my own self-interest may be the result of my inability to assess a complex situation well and to accurately predict the future. For example, I might believe as a young person that I'm better off making a lot of money and as a result choose to spend my prime years working long, hard hours. But maybe that choice turns out wrong, and my life would, in fact, have gone better had I spent more time with my friends and family. While my choice follows from incompetence at following S, it's impossible to blame my failure on any single, straightforward mistake, such as a failure to resist temptation. Rather, I made a choice as a young man that seemed best at the time and that only after many years of experience proved bad. Scenarios like this are too plausible to be passed off as mere incompetence; we must judge S according to the reality that we're dealing with imperfect humans.

Let's now return to the question: what if a person's belief in S causes that person, mistakenly, to become wrongly self-denying, such as by devoting one's life to a bad pursuit? Would S be at fault? Would S fail in its own terms?

The answer is no. In that circumstance, where my belief in S would otherwise make my life go worse by motivating me to work too hard, S tells me not to believe in S. That is, S gives me reason to believe in some other theory, one that leads to my choosing to spend more time with friends and family. In this case, S is said to be self-effacing: it tells us to believe in some other theory.

We may want the best ethical theory to be not self-effacing, but that's irrelevant to whether the best ethical theory actually is self-effacing. It may very well be that the ethical theory that makes our lives go best also happens to give us reason to believe in some entirely different ethical theory. Some people find this a discomforting or depressing idea; they feel that the best ethical theory ought to be aligned with our beliefs about truth, that a belief in the truth will also be a belief in the best ethical theory. However, this is a preference—and possibly one that the universe doesn't fulfill—and not a valid objection to S.

But one objection that is valid is the concern that it might be impossible for a person to reject S and to change their disposition, even when doing so is necessary for them to make their lives go best. For example, that person whose car broke down in the desert might be too attached to believing in S and, furthermore, stuck on being never self-denying, with no realistic possibility of changing their mind otherwise. Would such a scenario cause S to fail?

I'll pick up here next week.

Monday, January 28, 2013

Illuminati stats

Three games held: 3 winners and 14 losers.

Player standings
Player  Games  Wins  Kills
Laura       3     2      0
Rich        2     1      0
Craig       3     0      2
Jace        2     0      1
Nick        2     0      0
Rick        2     0      0
Alex        1     0      0
Jill        1     0      0
Matt        1     0      0

Note: A kill is when a player eliminates another player from the game. It counts for nothing but the glory, or condemnation, of one's viciousness.

Wins by Group
Group                     Games  Wins
The Servants of Cthulhu       3     1
The Society of Assassins      3     1
The Bavarian Illuminati       1     0
The Bermuda Triangle          2     0
The Discordian Society        0     0
The Gnomes of Zurich          3     1
The UFOs                      2     0
The Network                   3     0

Thursday, January 24, 2013

Reasons and Persons: What is a self-defeating theory?

Reasons and Persons is separated into four parts. The first part is about self-defeating theories.

Before getting into self-defeating theories, let's make it clear what a theory is. Whereas in science a theory is an explanation that ties together observations into a more general idea, in Reasons and Persons a theory is a set of principles that state how people ought to act. For example, utilitarianism constitutes an ethical theory that states all people ought to act so as to maximize the happiness of all people. This use of the word theory, denoting a principle or practice rather than an explanation, is more like its use in music theory than its use in the theory of gravity.

Ethical theories, like any theories, may succeed or fail. What causes an ethical theory to fail? If we were scientific about the matter, we might try setting up social experiments whereby we would have groups of people live their lives according to different theories, and then we would observe which theories work best. However, it would be impossible to set adequate controls for such an experiment, making it impractical. Moreover, being philosophers, we're more apt to settle things from the comfort of our armchairs, by reasoning through the details.

A common way theories fail is by being measured according to the values of another theory. For example, a utilitarian, who believes we each ought to act so as to maximize everyone's happiness, would conflict with someone who believes the most important principle is that we each worship God. Though there might be points of compatibility between the two theories, where worshiping God would also maximize happiness, there would also be inevitable points of conflict—a dilemma of having to either worship God or maximize happiness, but not both. No matter how rare such points of conflict would be, any one conflict would be enough to cause each theory to fail in terms of the other.

Failure in the terms of another theory isn't ipso facto failure. If two theories conflict with each other—let's call those theories A and B—it could be that A fails according to B because B is wrong. This wouldn't make A wrong; at most it might make adherents of B wrong in their belief that A is wrong. It could turn out that A is either right or wrong, but in either case B would be useless for measuring A.

Is there a better, more objective way to measure a theory? Yes, there is. A theory always fails, regardless how other theories measure it, if that theory is self-defeating. A self-defeating theory fails in its own terms.

How can a theory be self-defeating? Here's an example. Take the following theory about self-interest: (1) each of us ought to act so as to bring about outcomes that are best for ourselves (without regard to others' circumstances), and (2) each of us ought never to deny ourselves the fulfillment of our desires. It turns out that this theory fails in its own terms. Here's a hypothetical scenario described by Parfit that causes the theory to fail.

Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver in this desert. I manage to stop you, and I offer you a great reward if you drive me to my home. I cannot pay you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if [I] pay you the promised reward. Since I know that I never do what will be worse for me, I know that I would break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise. I am stranded in the desert throughout the night. This happens to me because I am never self-denying. It would have been better for me if I was trustworthy, disposed to keep my promises even when doing so will be worse for me. You would then have driven me home.

(I've written about this scenario in a previous post.)

Some of you may recognize the above scenario as a colorful instance of a social dilemma. Specifically, the narrator's payoffs are the same as in a prisoner's dilemma, though the stranger's payoffs are different, so this scenario isn't a true prisoner's dilemma.

             Be honest    Lie
Give ride      3, 4       4, 1
Don't help   1, 3 (tie)  2, 3 (tie)

Here's how to read the table. The narrator chooses the column, and the stranger chooses the row. The pair of numbers in each cell denotes the payoffs for the narrator and the stranger, in that order. A higher number is better for that person and irrelevant to the other person.

Regardless of the choice the stranger makes, the narrator is always better off lying and stiffing the stranger, as measured by the narrator's own principle of never being self-denying. The stranger has a more difficult choice: he's better off helping the narrator only if the narrator is honest; otherwise the stranger is better off driving off without helping. However, because of the narrator's transparency, the stranger knows the narrator will choose to lie, and thus the stranger will choose not to help. This leads to the outcome in the lower-right cell, which is the Nash equilibrium for this scenario.
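The reasoning above can be checked mechanically. Here's a short Python sketch that enumerates all four strategy pairs and confirms the lone equilibrium; the strategy names and the payoff dictionary are my own encoding of the table, not anything from Parfit.

```python
# The desert scenario as a 2x2 game. Payoffs are keyed as
# payoffs[(stranger_move, narrator_move)] = (narrator_payoff, stranger_payoff),
# matching the table: narrator picks the column, stranger picks the row.
payoffs = {
    ("Give ride", "Be honest"):  (3, 4),
    ("Give ride", "Lie"):        (4, 1),
    ("Don't help", "Be honest"): (1, 3),
    ("Don't help", "Lie"):       (2, 3),
}

stranger_moves = ["Give ride", "Don't help"]
narrator_moves = ["Be honest", "Lie"]

def is_nash(stranger, narrator):
    """A strategy pair is a Nash equilibrium when neither player can do
    better by unilaterally changing only their own choice."""
    n_pay, s_pay = payoffs[(stranger, narrator)]
    if any(payoffs[(stranger, alt)][0] > n_pay for alt in narrator_moves):
        return False  # the narrator would switch columns
    if any(payoffs[(alt, narrator)][1] > s_pay for alt in stranger_moves):
        return False  # the stranger would switch rows
    return True

equilibria = [(s, n) for s in stranger_moves for n in narrator_moves
              if is_nash(s, n)]
print(equilibria)  # the lone equilibrium: the stranger doesn't help, the narrator lies
```

The check also shows why the stranger's reasoning is so simple: lying strictly dominates honesty for the narrator (4 beats 3 in the top row, 2 beats 1 in the bottom), so the transparent narrator is doomed to the lower-right cell.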

A consequence of all this is that in order for the narrator to create an outcome that's best for the narrator, the narrator must do what's not in his immediate self-interest: the narrator must be honest with the stranger. More to the point, the narrator ought to convince himself that he is better off denying himself the immediate fulfillment of his own desires, thus giving him reason to reject his own ethical theory.

This is a peculiar result, with a possible consequence that goes beyond the mere success or failure of this one ethical theory. I'll elaborate on this in subsequent weeks.

Monday, January 21, 2013

Out sick

The flu, like a cold, is caused by a virus. Anti-bacterial soaps and disinfectants don't help against viruses any more than regular soap or countertop-wiping do. Yet washing one's hands is helpful for destroying the viruses on our hands, some of which would go on to infect us through our noses, mouths, or eyes.

I once heard, I think on the NPR show Science Friday, that the important thing about washing our hands isn't the soap, or even the water, but rather the simple act of rubbing, which creates friction and destroys the viruses, or at least dislodges them from the oils on our skin. So when Laura got the flu a little over a week ago, I began wringing my hands with restless regularity. However, it didn't work. I became infected nevertheless.

Thursday, January 17, 2013

Reasons and Persons: Undefining terms

To follow Derek Parfit's arguments in Reasons and Persons, you needn't change your mind about what constitutes right, wrong, good, and bad. Parfit thinks most of us already have a good, working understanding of these concepts and that, owing to our built-in conscience, we're already good enough at judging one circumstance to be better or worse than another.

Here are Parfit's words from the book's Introduction.

My central concepts are few. We have reasons for acting. We ought to act in certain ways, and some ways of acting are morally wrong. Some outcomes are good or bad, in a sense that has moral relevance: it is bad for example if people become paralyzed, and we ought, if we can, to prevent this. Most of us understand my last three sentences well enough to understand my arguments.

So if we already know so much about right and wrong and good and bad, what use is ethics? It turns out there's still room for improvement. Parfit believes that though we know enough to evaluate morality, we make bad choices in acting on it. Part of the cause of our bad choices is an inevitable failure of character, such as when we give in to temptation and do something bad despite knowing we ought to do otherwise. But failure of character is only part of the problem. According to Parfit, most of us have false beliefs about ourselves, and these false beliefs lead us to make bad moral choices as a seemingly rational act. That is, even when we act as we believe we ought to act, we still often make bad choices. Here's more from the Introduction.

I believe that most of us have false beliefs about our own nature, and our identity over time, and that, when we see the truth, we ought to change some of our beliefs about what we have reason to do. We ought to revise our moral theories, and our beliefs about rationality.

So Reasons and Persons isn't a book whose thesis is that we ought to change our values; it's about changing our ideas and strategies for best bringing about those values. This makes Parfit's arguments harder to dismiss than many other ethical arguments, ones that require the reader to change their fundamental moral view of things. Reasons and Persons begins with common ground by granting most readers' moral assumptions, and only then attacks their conclusions.

An analogy might help show the value in this. Imagine you're interested in making bread, and you're reading a book about it. Further imagine the book tries to convince you that your preferences about bread—how, say, you prefer wheat bread to rye and yeast bread to unleavened—are wrong and need to be revised. The book isn't likely to change your mind. You know what you like, and you're not likely to change your mind about it. Instead, imagine reading a different book, one that accepts your opinions about bread as they are and instead tries to convince you that you're not making the best bread you can, as measured by your own preferences. The book may say, If you're making leavened wheat bread then there's a good chance you're making some common mistakes and not making the best bread you can. Here's how to make it better. But if instead you're into rye flatbread, here's what to do. Such a book is more likely to be useful to you because the book begins with common ground. So it goes with Reasons and Persons. The book's premise is that our core moral values are OK, but most of us aren't following the best recipe for leading the best lives we can.

In the posts that'll follow, I'll write a lot about right and wrong and good and bad, yet I'll never define the terms. This may be a strange way of dealing with an ethics book, but it's how Reasons and Persons goes. As you read my summaries of some of Parfit's arguments, substitute your own values into the arguments, and see what you get.

Monday, January 14, 2013

Riddle

Time to rack your brines: today's post is a riddle.

The answer has eight letters. The clue is: Pickle?

Friday, January 11, 2013

Reasons and Persons

On an episode of the TV show The Big Bang Theory, supernerds Sheldon Cooper and Amy Fowler argue about whose scientific field is more fundamental: physics or neurobiology. Sheldon, the physicist, says his field is more fundamental because a Grand Unified Theory would explain everything in the universe, including brains and anything else neurobiologists study. Not so, says Amy, for a complete theory in neurobiology would explain how physicists' brains would work in deriving that Grand Unified Theory, thus subsuming physics into neurobiology. Or, as Amy says it: My colleagues and I are mapping the neurological substrates that subserve global information processing, which is required for all cognitive reasoning, including scientific inquiry, making my research ipso facto prior in the ordo cognoscendi.

Physics and neurobiology aren't the only fields that vie for being most fundamental. I think of ethics as having an even stronger claim, for what's the use of explaining anything—including particles, waves, and brains—without having started with some notion of the Good and what's worth doing? Everything we humans do, including scientific inquiry, starts with ought. That puts ethics first.

But whereas the sciences have instilled most people with a strong sense of progress—that we really do know more about particles, waves, brains, etc.—ethics has achieved no such thing. Ethics may have gotten more abstract during the last few centuries, just as most other active fields of knowledge have, but we're no closer to being better people as a result—presumably the end goal of any pursuit in ethics. Instead, humans rely as much as ever on base mammalian responses such as emotion and intuition to guide themselves through difficult moral choices. Ethics remains as theoretical and irrelevant as ever as it relates to how people act in real life, with the few attempts during the past century to sell ethical systems wholesale to the public being, by most accounts, disasters—e.g., Soviet communism and religious fundamentalism.

I'm an ethics agnostic: I believe it's impossible to make objectively true statements one way or the other about value propositions. I also believe in moral dissensus, that there's rarely a universal best way to act in any given circumstance for any given person. And yet I love ethics. Despite its continuing pursuit of the objectivity and universality I don't believe in, modern ethics thrills me as a set of logic puzzles where tricky, sometimes paradoxical problems are presented for resolution. Working through those problems may not make me a better person in the direct sense, but they help me to see the weaknesses in other people's ideas. There's solace in having, if nothing else, a healthy mental immune system strong in skepticism.

This year I'm going to hone that defense a little more. In the last half of 2011, I posted a series of essays here on JEC as I read through two religion books, first John Michael Greer's A World Full of Gods (all posts here) and then Edward Feser's Aquinas (all posts here). Though I have my doubts whether many of you readers got much out of those posts—just last week, for example, Laura expressed confusion as to whether Thomas Aquinas was a moral relativist—I found the toil of taking notes and later drafting analyses and critiques to be intrinsically rewarding. That's reason enough to repeat the process and post a new series of essays on a new book.

The book I'll be writing about is an ethics book, Derek Parfit's Reasons and Persons. To put it bluntly, this book scares me. Many times I've started it, never yet getting past the first hundred pages. But being unable to finish isn't my only concern; unlike the Greer and Feser books, Reasons and Persons is an academic book, not targeted to a popular audience. More specifically, it's an academic philosophy book, with a degree of precision and a density of logic that make it unsuitable for most unpopular audiences too.

Furthermore, the book is separated into 154 sections, each ranging in length from one to a few pages, and I'm unsure how to break that down into a schedule suitable for blogging. Two years ago, with the two religion books, I aimed to do about one chapter each week, and the books were conducive to me doing just that and finishing in a couple months. Pacing a section per week with Reasons and Persons would have me wrapping it up not until after the 2015 World Series—far too much time. I expect to settle into a faster schedule as I go.

So with all these challenges, why have I chosen this book? In short, Reasons and Persons raises good questions about two ethical concerns that fascinate me: time and personal identity. As living things we move through time as surely as we move through anything, and so it would be appropriate to know how our moral choices affect not just the present but also the future, including the far future. Yet it turns out that's really hard to do, and most ethical systems fail at it, breaking down into paradox or outright self-contradiction given the right questions.

The second concern, personal identity, intrigues me because conventional concepts of what a person is—what makes me a different person than you, yet what makes me the same person as I was five years ago, etc.—don't hold up well to scrutiny. And yet we base most of our moral choices chiefly around such dubious notions of the self, starting with how what we choose affects other people. Shouldn't we have a better idea of what we are when we make those choices?

I don't expect to find a lot of answers in Reasons and Persons—none, maybe. But I hope to gain a few new questions.

Monday, January 7, 2013

Canal path super loop

Spurred by one of Laura's recent observations—or was it a challenge?—that a person can ride a loop around a huge chunk of the Phoenix Metro Area using almost exclusively multi-use paths along canals and washes and drainage channels, today I rode my bike around the canal path super loop. Starting on the Arizona Canal path near my home near the Metrocenter Mall, I rode the AC path westward until its terminus, near the Hwy 101 at Olive Ave. Then I took a surface street three miles due south to the Grand Canal, and from there I rode along the GC path east through the slums of Phoenix to Tempe. There I passed by the lake until I reached the Scottsdale Greenbelt, and on that I rode until I reached the Arizona Canal again, whose path I took westward to complete the loop and return home. The loop ended up being just shy of 67 miles and took me 5½ hours to do. I also had my second and third flat tires of the year.

Friday, January 4, 2013

First flat of the year

Did anyone lose a screw on the Arizona Canal path, between 19th and 25th Ave? Black, Phillips head, 5/8 inch long with 1/8 inch pitch? Because today I found it, when I ran over it with my bike.

In all my miles of biking, I've run over lots of sharp, protruding things that have punctured a bike tire: goatheads, sharp rocks, glass shards, goatheads, nails, staples, goatheads, and countless unidentified metal snags. Oh, and goatheads. But never before today had I gotten a flat with a screw, just as never before today had I removed a foreign object from my tire using a screwdriver in the usual way one uses a screwdriver. I suppose I could have pulled the screw out using macho strength, but it was so easy just to give it a few counterclockwise turns. Out it came.