Much of ethics, historically, is what I call stateless ethics: it does not tie what is right or what is good to time. As an example, take a utilitarian system that weights each person's happiness equally and claims that we should maximize happiness. How can this work? Does one unit of a person's happiness today equal one unit of a person's happiness tomorrow? What about the happiness of a person not yet born? How should their happiness be valued? Should we be concerned with people's happiness 100 years from now? If so, how much? According to such a system, how we are to act right now, such as when deciding between two possible courses of action, A and B, depends on whether A produces more units of happiness than B. But if we don't know how to relate future happiness to present happiness, how can we ever expect to value A and B meaningfully? This is a difficult question, and much of ethics flatly ignores it.
What makes the question difficult is that the answer clearly lies somewhere between two extremes. The first extreme is that future happiness is worth nothing compared to present happiness. In such a framework, we wouldn't hesitate, if faced with the possibility, to cause 6.7 billion people to suffer tomorrow if it meant we could make one person happier right now. This defies our intuition. The other extreme values future happiness as exactly equal to present happiness. This would have us fretting over the eventual death of the sun and all sorts of far-off future scenarios when an asteroid could very well smash into the earth next year and cause the suffering (and extinction) of all humans. That the future is necessarily uncertain means there is some natural bias toward present happiness over future happiness. If we knew for certain that a killer asteroid would smash into the earth next year, then we may as well begin the end-of-the-world party right now and forsake all human happiness two years from now, because there won't be any humans alive in two years.
The answer, clearly, is that we should value future happiness somewhat less than present happiness. But how much less? What is the happiness discount rate? Now we are talking about stateful ethics, where the decisions and events of today inexorably affect the decisions and events of tomorrow, and where what is right or good is complex and probabilistic.
Say, for example, we value happiness at a 10% discount rate, meaning that one unit of happiness one year from now is worth 90% as much as one unit of happiness right now. It's not that happiness is worth less in the future when it is actually occurring; rather, the 10% discount accounts for the uncertainties, so that if decision A causes a net increase of 9 units of happiness today and decision B is expected to cause a net increase of 10 units of happiness one year from now, then A and B, valued right now, are equally good decisions. After all, that killer asteroid (or whatever contingency) may make B inconsequential, so its expected 10 units of happiness are discounted.
With a 10% discount rate, a unit of happiness about seven years from now is worth half as much as a unit of happiness today. A unit of happiness 30 years from now is worth only about four percent as much as a unit of happiness today. You don't have to go out very far before units of happiness become negligible, which I think accurately reflects humans' ability to predict the future.
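To make the arithmetic concrete, here is a minimal sketch in Python of the exponential discounting described above. The 10% rate is the one from the example; the function and variable names are mine, purely for illustration.

```python
# A minimal sketch of the exponential happiness discounting described above.
# The 10% annual rate is the one from the example; the names are my own
# illustration, not a canonical formula.
import math

ANNUAL_DISCOUNT = 0.10  # 10% per year


def present_value(units_of_happiness, years_from_now, rate=ANNUAL_DISCOUNT):
    """Value future happiness in today's units: each year multiplies by (1 - rate)."""
    return units_of_happiness * (1 - rate) ** years_from_now


# Decision B: 10 expected units one year out are worth 9 units today,
# so B ties with decision A's 9 units right now.
print(present_value(10, 1))  # 9.0

# How long until a future unit is worth half of a present unit?
half_life_years = math.log(0.5) / math.log(1 - ANNUAL_DISCOUNT)
print(round(half_life_years, 1))  # about 6.6 years, i.e. "about seven years"

# Thirty years out, a unit is worth only a few percent of a unit today.
print(round(present_value(1, 30), 3))  # about 0.042
```

Nothing here depends on the programming language; the point is only that, under this convention, a unit of happiness loses half its present value in roughly seven years and is down to around four percent after thirty.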
But 10% is only an example. What is the best happiness discount rate? Utilitarianism is likewise only an example. Should we be more concerned with discount rates that apply to goals other than happiness? Should we study history and asteroid deflection technology to help us both better predict the future and ensure our survival? Can we make the discount rate lower? Should we? This goes meta in a hurry.