Monday, March 29, 2010
I don't maintain rigorous accounting of my finances, but I do keep some simple notes, enough to make a few useful observations about my spending habits. One such observation I made for the previous year was that about a fourth of my expenses went towards the purchase of the two new bicycles that make up my new fleet.
The purchases were needed in the sense that any material thing outside of food, water, clothing, shelter, etc. is ever needed; the bicycles of my former fleet were showing signs of age and owner neglect as well as a lack of specialization important for my increasingly bicycle-oriented lifestyle. Neither bicycle was optimal for fulfilling the role of practical transportation, and neither was optimal for fulfilling the role of racing, so I replaced them both with (1) a touring bicycle, optimal for practical transportation, and (2) a carbon-fiber bicycle, optimal for going fast in dorky clothes.
Directing one-fourth of one's expenses towards motor transportation seems like a lot, and directing it towards bicycles seems like even more, so one may assume that the two bicycles I bought constitute special, high-tech equipment and that I'm happy with the results. They don't, and I'm not. At least, I'm not entirely happy with the results.
More specifically, I've decided after several months of riding it that I'm not entirely happy with the racing bicycle. (The touring bicycle is great and fulfills its role superbly.) Even more specifically, I've decided that I'm unsatisfied with carbon fiber as a bicycle frame material. It's not that I have a specific complaint about the riding quality of carbon. I'm not even sure I have a complaint about the riding quality of any suitable frame material; I suspect that (excluding weight and fit) tires, wheels, and the saddle account for the vast majority of a bicycle's feel. So my beef with carbon isn't snob appeal about some made-up (and subjective) disadvantage of the material, and it isn't a snob appeal to the soft flex of steel or any made-up (and subjective) awesomeness of another material; rather, my beef is that it doesn't make economic sense for us amateurs to be riding carbon.
Carbon fiber is a useful technology in bicycles and one that should continue to be explored; however, I think its application makes obvious sense only for pros. Carbon fiber makes for the fastest possible bicycle today, but it does so only through trade-offs; mainly, it (1) increases cost; (2) decreases durability; and (3) increases the chance of catastrophic, unsafe failure. For pros, these trade-offs don't matter much: (1) the bicycle is purchased on a sponsor's dollar, so cost is irrelevant; (2) team cars follow behind the riders, so durability isn't as critical; and (3) pros are paid to assume a tremendous amount of risk as it is, so the slightly increased chance of catastrophic failure is negligible for them. Meanwhile, for pros, performance matters. A lot.
For the rest of us, performance shouldn't matter, but it ends up mattering. The only reason why I considered a carbon-fiber bicycle is that so many amateurs ride them, and I thought that if I want to be competitive, whether in a fast-paced Saturday-morning ride or a hill-climb race, then I need the additional lightness and stiffness that carbon provides. So basically I selected carbon for my frame material out of peer pressure. Why did my peers select carbon? Probably because of pressure as well.
I suppose it all started with one club rider somewhere in the world for whom the additional cost and decreased reliability of carbon were no objections. He then proceeded to buy a carbon-fiber bicycle and become faster relative to the other riders in his club. Maybe he made it to the top of the hill first a few times when before, on his old steel-frame bicycle, he never had a chance. The other riders saw these gains as easy and undignified and thus felt compelled to do the same, to upgrade their bicycles and bring the group back to equilibrium. Eventually everyone in the group fitted themselves with better bicycles, and the group overall became faster while each individual rider remained the same relative to the others. And so it is that we cyclists now find ourselves trapped in a hideous carbon gap, each of us racing to the top of the hill while racing to the bottom by buying the most expensive, flimsiest thing on two wheels, most of us merely trying to maintain our positions as mid-pack cyclists.
That's my complaint with carbon: buying it makes me feel like a sucker; I should have the guts to be slow. But this isn't the worst complaint possible; after all I shaved 4½ pounds off a piece of equipment in a sport where weight is measured in ounces and grams. My new bicycle is faster, as anyone aware of gravity would predict, so I suppose I'll just have to enjoy my awesome new bicycle. While it lasts. Hey, it's paid for.
Meanwhile, I wonder what frame material I should choose for my new TT/triathlon bicycle...
Thursday, March 25, 2010
Two maxims for practical software development
My previous software development job may not have been good, but my boss there was. Eric was thoroughly the practical sort who didn't care much to philosophize about the what-ifs or shoulds of his circumstances but instead gave his attention to making good software with what he had. It shouldn't be too surprising that, with my theoretical leanings, I had my frustrations with some of his decisions at the time, but in the nearly four years since, whatever disagreements we had have been buried, and meanwhile I've retained some of his practical insights. As a result, I've become a more balanced developer, though of course I'm as inclined as ever to discuss the what-ifs and shoulds of my circumstances.
In the years since that job, I've elevated two maxims of Eric's to a sort of gospel status, and in today's blog post I'll share those ideas.
Bugs that go away mysteriously have a way of returning mysteriously.
I get the most mileage out of this in software, but it's also applicable in other disciplines, such as bicycle maintenance and interpersonal relationships. Stated another way: don't fix effects; fix the cause.
Here's an example. Recently at my current job I was doing some embedded development and ran into some timing issues whereby the unit's LED display was showing incomplete or garbled messages. I could fix the problem, by which I mean that I could make the display correct, by inserting a slight delay into the code. I did so and mentally filed a note that I had a system that, according to Eric, chapter 1, was hiding a bug. Sure enough it took only a few days of continuing work on the system before my existing delay proved insufficient and the truncated-message symptom reemerged due to some seemingly unrelated code changes. I decided to figure out the display problem's cause, and after some time determined that I had misinterpreted the hardware documentation. I changed a register value assignment (according to my new interpretation of the documentation), and the display worked correctly without the original delay hack. Bugs that go away for known reasons have a way of returning rarely.
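To make the contrast concrete, here's a minimal sketch in C of the two approaches. The register, flag, and function names below are entirely hypothetical stand-ins, not the actual hardware or code from the unit I was working on; the point is only the shape of a symptom fix versus a cause fix.

#include <stdint.h>

/* Hypothetical stand-ins for the real hardware abstraction layer. */
static volatile uint8_t display_ctrl_reg;        /* pretend control register      */
#define DISPLAY_WAIT_BUSY_FLAG 0x01u             /* pretend "wait while busy" bit */
static void lcd_send(char const *msg) { (void)msg; }
static void delay_ms(unsigned ms)     { (void)ms; }

/* Symptom fix: paper over the timing problem with a delay.  It "works"
   until unrelated code changes shift the timing and the bug returns.   */
static void show_message_hack(char const *msg)
{
    lcd_send(msg);
    delay_ms(5);
}

/* Cause fix: configure the controller the way the documentation intends,
   after which no arbitrary delay is needed at all.                       */
static void display_init(void)
{
    display_ctrl_reg |= DISPLAY_WAIT_BUSY_FLAG;
}

static void show_message(char const *msg)
{
    lcd_send(msg);   /* no delay hack once the cause is fixed */
}

int main(void)
{
    display_init();
    show_message_hack("garbled?");
    show_message("hello");
    return 0;
}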
People will be quick to forget that you were late to release but will be slow to forget that your product was initially buggy.
Professional software development is often an exercise in blundering. It's well known, at least within our industry, that software is hard to make; it's terrifically complex stuff, its development is notorious for missing deadlines and exceeding budgets, and a lot of software released into the wild just plain sucks. Despite this, releasing a product of poor workmanship on time is not hard and won't earn you much distinction as a developer. What is hard, and what will earn you distinction, is releasing something that does what it needs to do and does it well. This makes sense: the three months you're late with a project may feel like an eternity while you're in them, but they'll pass soon enough, and if you end up releasing software that can be trusted, then its users will usually be satisfied and those three months won't seem like such a big deal after the fact. But if you release buggy software, then even if it's on time, and even if you've successfully fixed all the problems (and fixing released software is usually harder than fixing unreleased software), the software's users will generally hold a vague distrust towards the system for a long time (if not forever). Once broken, never trusted.
I have a theory about this tendency to rush out software releases of poor quality, and it assigns a lot of the blame to the college experience. In college, the first priority of any assignment (for most students) is the deadline; the quality of the work comes second. After all, for papers and tests the student generally must be there on time or else a lot of credit is irrevocably forfeited. But even a mediocre effort on that paper or test all too often lands a solid grade, sometimes even full credit. I remember from my own experience many times turning in a paper at the last minute after barely having taken the time to massage a rough draft into even the illusory appearance of something final—and all to be rewarded the next week with a grade only a notch or two below what a great paper (and significantly more effort) would have earned. And so it is that our colleges annually release into the white-collar world legions of nervous deadline-appeasers who are quick to dismiss the necessity of high-quality work.
Deadlines do matter, and software developers should have at least some inkling of an understanding of their decisions' budgetary consequences. However, a developer's primary responsibility is to his software, not his budget. Often this means standing up to a trigger-happy manager or lead and telling him, “No, you're going to have to wait.” It's a small price and often, in the end, none at all. The world has little need for more bad software.
Monday, March 22, 2010
Somewhere in the middle of all possible worlds
What nonsense it is, this idea that we live in the best of all possible worlds! Earthquakes, floods, tornadoes. War, poverty, disease. Pain, suffering, misery. How could any of these things exist in the best of all possible worlds?
I've written in the past that I consider myself neither an optimist nor a pessimist, and so it is that I've lived for most of my adult life with the worldview that our world is neither the best nor the worst of worlds but likely lies somewhere in the middle of a sort of hypothetical world-quality continuum.
But consider this clue: for ages people have had unlimited potential for conjuring up and describing elaborate hells but have yet to turn up a compelling vision for heaven. It doesn't take much critical thinking to realize that getting everything you wish for (and not having to work at all for it, at that!) is a poor strategy for achieving happiness or fulfillment, and yet we can do no better when imagining Eden. Many, I suspect, are holding out with a vague hope that somehow the whole thing turns out to be beyond human reason, whatever that means, but what we do know is that in the best of cases our idea of paradise is that of a pleasant place to be eternally bored.
That we can imagine much worse than reality but little better than it is, I think, an important clue that our world is closer on that world-quality continuum to the “best” than I initially thought. But how much closer?
Recently I've adopted a new working ethical hypothesis: happiness is the harmonious operation of one's instincts. Like most of my good ideas, this one isn't mine; I've borrowed it from Will Durant and his Story of Philosophy. It suggests that happiness is the state of being what one is suited for being and being it within the environment one is suited for being in. It suggests a man at balance.
Aristotle posited that virtue is the mean between the extremes, and I take that to be a similar idea because it similarly connotes a sort of balance as a necessary ingredient of happiness. Man is a battleground of warring instincts and emotions, and it is only when they are brought into balance with each other that he is capable of living the Good Life. For example, somewhere between our instinct for flight (fear and cowardice) and our instinct to fight (recklessness and foolhardiness) lies the golden mean of courage.
Aristotle did not benefit from a theory of evolution, and so it's not surprising that his idea of the happy life necessarily involves so static a function as the application of reason. Nowadays our view of the world contains more dynamism and arbitrariness and less permanence and fixedness, and we understand how, in the coarser biological sense, man's core function is simpler and baser than what Aristotle proposed. It is this: he consumes, he survives, and he reproduces. How he does each of these things has to do with his particular adaptations to his environment, just like any other living thing.
It doesn't matter so much what the particular goings-on are in either the best of all possible worlds or the worst of all possible worlds; man is likeliest happy when he is in the element he has evolved for, because that is the environment most conducive to bringing his instincts into harmonious operation. For example, most of us like warm weather and sunshine because they are what we evolved to handle when, long ago, we took to two legs in the savanna, and our instincts are brought into better balance when we are experiencing warm weather and sunshine. It is in this sense that this world is the best of all possible worlds, as if by definition, because man is better suited for no other world.
But there's a problem with this: it's not true. Man's very nature is to change his environment with greater speed than his capacity for adaptation can match. Or at least, it seems as though for the last 10,000 years man has changed his environment with greater speed. We till the soil and grow grains, and in turn our new diet rich in carbohydrates gives us diabetes and tooth decay. We make tools and clothes that enable us to move into colder regions, and the ensuing lack of sunshine and warmth makes us depressed and moody. Our technology enables greater consumption, and the resulting pollution harms our health and sense of beauty.
This suggests to me an ironic sort of progression: our world was once the best of all possible worlds, but it was a dangerous one for our ancestor, and so he acted according to his nature and set about making the world safer and more hospitable. And though he succeeded in making the world more hospitable—and the evidence for this is that there are more of us around today—he did so by shifting the world just a little more towards the middle on that continuum.
Thursday, March 18, 2010
To const or not to const?
I am a C expert. I've put in my ten-plus years with it, and by now my understanding of the language is not limited merely to knowing keywords, the standard library, and language-use conventions; I see C programs as complex and abstract patterns. When I code in C, I do not translate thoughts to C; I think in C. I would call myself a master of the language if only my experience had more breadth. Even so, I've used C in academic, personal, and corporate settings; I've done projects as a lackey and projects as a lead; I know C99; I've used many different compilers on many different operating systems; recently I've done work on a small embedded system with no operating system; so the breadth of my experience is good even if it isn't great.
That said, let it be known that I struggle with one of the C language's simplest constructs, the const keyword.
Sure, I know the const keyword. I know its syntax and meaning. I know some of its peculiarities, such as how the declaration char const * * const foo declares foo to be a mutable array of immutable C strings, with foo itself being immutable. I know that the const keyword trips up a lot of intermediate-level C developers when it's mixed with pointers. I'm not one of those people.
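To illustrate, here's a minimal sketch of what that declaration allows and forbids; the variable names and string contents are made up purely for the example.

#include <stdio.h>

int main(void)
{
    char const *strings[] = { "alpha", "beta" };
    char const * * const foo = strings;

    foo[0] = "gamma";        /* fine: the pointers in the array are mutable    */
    /* foo[0][0] = 'G'; */   /* error: the characters pointed to are immutable */
    /* foo = NULL;      */   /* error: foo itself is immutable                 */

    printf("%s %s\n", foo[0], foo[1]);
    return 0;
}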
Like I said, I see C programs as patterns, and here lies my problem with the const keyword; I don't understand how it best fits into a program's pattern. Here's an example.
This is an example of a function, foo_copy, that takes an object of type foo_t and creates and returns a copy of the object. The question is: should the function argument be of type foo_t const * or foo_t *? To const or not to const?

struct foo;
typedef struct foo foo_t;

foo_t *foo_copy(foo_t const *src);
In most cases, adding the const keyword provides a pleasant safeguard, both for the function foo_copy itself and for its callers (and their callers and so on). Calling functions can be assured that the state of the object pointed to by src will not change. And we expect that its state won't change.
But there is at least one case in which the object's state should change, and that is when the object is not actually copied but is instead reference-copied. Take the following implementation of foo_copy.
foo_t *foo_copy(foo_t const *src)
{
    assert(src != NULL);
    src->ref_cnt++;
    return src;
}

This will generate a compiler error, though the spirit of the function is correct. The developer then has two options: one, to remove the const keyword from the argument specification, and two, to add a const-removing cast to the offending line like so: ((foo_t *) src)->ref_cnt++. The first option potentially breaks existing code that expects the function's const-ness. The second option is impure and, I would argue, more dangerous than goto, both in principle and in practice.
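For completeness, here's what option two looks like as a compilable sketch. It assumes a struct foo with a ref_cnt member, which I haven't actually shown, so treat the details as hypothetical; note too that the return statement needs the same const-removing treatment, and that casting away const like this is legal only when the underlying object wasn't itself defined const.

#include <assert.h>
#include <stddef.h>

struct foo { int ref_cnt; };        /* hypothetical definition for illustration */
typedef struct foo foo_t;

foo_t *foo_copy(foo_t const *src)
{
    assert(src != NULL);
    ((foo_t *) src)->ref_cnt++;     /* cast away const to bump the reference count */
    return (foo_t *) src;           /* the return value needs a cast as well       */
}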
This example exposes a fundamental problem with the const keyword, which is that the keyword breaks the barrier between interface and implementation. Const poses as a keyword that denotes interface, but it is actually a keyword that specifies, in part, implementation. It transforms, however slightly, a black box into a gray box.
I want to discard the const keyword and avoid it in my function prototypes, but the problem with that is that const does have its value. Like assert, it cannot guard against all faults but it can catch some of them, and it does so without increasing size or hindering performance. But languages such as Java do not have const. Historically, I have believed this to be a deficiency in those languages, but now I wonder whether those languages' designers well understood const's deficiencies and saw the construct as a dangerous (though well meant) crutch and decided to take that crutch away from their developers.
Only one thing about the matter is certain to me: I am a C expert, not a C master.
Monday, March 15, 2010
Century post meta extravaganza!
Writing about ideas is hard. It often goes that at first I am full of ideas about a topic in which I am not an expert, such as politics or economics. I take these ideas and with my enthusiasm begin to write, and in doing so, I begin to think more deeply about the matter. Often I soon find many of my original ideas to be riddled with holes: my certainties become uncertainties; my answers become questions; and my arguments soon fail to convince even myself. Sometimes I then give up the endeavor altogether and turn to a different topic; other times I limp forward anyway and end up with a stream of prose that meets my self-imposed deadline of publishing twice a week but does little else.
In too many cases, to write, I must decide to be ignorant until the job is done. I'm reminded of a bit by the English philosopher Herbert Spencer about this very thing.
There is a story of a Frenchman who, having been three weeks here, proposed to write a book on England; who, after three months, found that he was not quite ready; and who, after three years, concluded that he knew nothing about it.

Some people may look at the situation and say that more humility is needed, that one should write well and only about things he knows. I say that's nonsense. I say that I'm liberated.
* * *

Edward: Jack, do you suppose it's possible for two characters in a dialog to choose what they say?
Jack: You mean how, for example, if a character were to exclaim, “Fliggle phlasm phooey flooey!” with no prior context for doing so then would he be exhibiting free choice?
Edward: Well, not exactly but—
Jack: —Because even then I think the answer is a fat no. How would it be anyone but the author choosing the words a character says?
Edward: Well, sure, yes, but consider the difference between a new character with no history and a preexisting character with an established history?
Jack: Okay. [Jack furrows his brow for a few silent moments.] What exactly am I considering?
Edward: That a character without a history is free to say anything he wishes because the reader would have no expectations for what he says whereas a character with a history would be more constrained within the limits of the expectations of the readers.
Jack: I think I see what you mean. So you're saying, for example, that if you were a character within a dialog and readers thought of you as an intelligent, well-spoken fellow and without warning you laid down a long, incomplete sentence then you wouldn't so much be expressing the free choice of your words as much as the author would be inadequately hiding his laziness for maintaining the consistency of his character, but then a new character with no established history could break all the rules in grammar just fine and would be all the freer for doing so. Is that what you mean?
Edward: Well, not exactly but—
Jack: —Because I think I see that now. I think I should like to meet such a character one day.
Edward: Fliggle phlasm phooey flooey!
Thursday, March 11, 2010
Two arguments for the elimination of worry about your liberties, pt. 2
Meaningful liberties are yours to take.
I'm reminded of Former Coworker Randy, who was fond of saying that the only real freedom is financial freedom. It's not strictly true, because civil liberties do matter, but for most people in the western world today, whose core civil liberties are not threatened, it's a good approximation of the truth. Even a maximum of free speech, due process, and the like won't enable an individual to achieve happiness. Happiness requires free time.
In Walden Two Revisited (the preface to Walden Two), B. F. Skinner opines that the great historical revolutions have not been politically driven but ideological: moral, religious, philosophical, scientific, etc. In the United States, the state has egregiously trampled over civil liberties since the early days of the republic, such as with the Sedition Act of 1798, which made it illegal to speak out against the government. The act was repealed a short time later, but our history since has been a noisy sequence of setbacks and resets with respect to individual liberties. Usually the victims have been the same: people whom the state perceives as a threat to its power. Worriers would have it that these days the state's threat to our liberties is real, that things are different this time, and that we all are threatened with a great and sudden loss of liberty. The odds are not on the worriers' side, but what if they're right? So what. It's doubtful that political activism will improve the situation for the individual; it's better to let other people continue taking political action and for the individual to take personal action towards securing freedoms that are more under his control. It's better to pursue financial freedom than to pursue political ends.
The freedom from work is not only about winning freedom over one's time. Needing someone else's money as one's own income necessarily reduces options. Whether the individual is self-employed or working for a large corporation, that he needs to work means that there necessarily exist things he cannot say, behaviors he cannot adopt, and ideas that are unsafe for him to pursue. The situation is good when the individual gives up only what he cares little for, but the situation is best when the individual needn't give up anything at all. The path to maximal practical freedom for an individual is for him to reduce his financial obligations so as to win his freedoms, not from the state but from his economic situation. In modern times, everything else is, to some extent, an abstraction.
Conclusion
I was tempted to write a third argument about how the natural course for most people is to favor security over freedom and that much discussion about the erosion of civil liberties is empty talk. Perhaps in a later post I'll return to this topic, but for now I'll end with a quote by Mark Twain.
I’ve seen many troubles in my time, only half of which ever came true.
- Mark Twain
Monday, March 8, 2010
Two arguments for the elimination of worry about your liberties, pt. 1
I'm decreasingly worried about the erosion of my liberties, civil or otherwise. Here are two arguments for why you should be decreasingly worried about yours.
The erosion of individuals' liberties is inevitable.
For this argument, I make one key assumption, which is that nation states have a natural life cycle similar to that of living organisms. Perhaps you already agree with this. Personally, I find it hard to read history and not think of states as living things, what with their continual rises and falls, their dynamism and responsiveness.
If you accept the proposition that nations undergo life cycles similar to living organisms then I think you must also accept that your liberties will inevitably be eroded.
The reason for this is that the growth of a living organism necessarily correlates with an increase in its overall, systemic complexity, and systemic complexity itself is defined, in part, by an increase in the heterogeneity of the parts composing the whole. A whole thing cannot be made more complex without either making some of its parts more different from each other or else grouping existing parts in ways that create substructures with new, emergent characteristics. An organism in its first stages of life comprises a small number of cells that are all similar to each other in both form and function, and the organism's capacity for adaptation is limited. As the organism grows, cells differentiate and acquire distinct forms and functions. For example, some cells become skin or nerves; some others become the intestines or pancreas. In all cases, this differentiation allows the organism greater potential for adaptation to its environment but only at the cost of the individual parts becoming more specialized, more limited in the scope of their behavior. A cell once specialized into a skin cell cannot change into a pancreatic cell.
So it is with the state. We individuals are the cells and tissues of the state, and the state uses us through a hegemony of specialization into highly functional parts who are each increasingly dependent upon the whole for their survival. If you don't believe this then try to earn a better living by employing only generalized skills and no specialized ones. The whole has use only for a small number of well rounded parts that are capable of all or most functions; it needs most of its parts to be capable of doing one or a few functions and doing them exceedingly well. So it is with us.
The principal benefit we as individuals gain from this arrangement is improved odds of survival, because our host system is more robust and adaptable than we are as individuals; the principal cost we suffer is decreased flexibility in our choices for how to live our lives. This is a necessary trade-off.
It is common for an individual to ignore the benefit and to look only at the cost. Some people see the politicians, lawyers, and bureaucrats--a sort of nervous system--as imposing upon their freedom to live as they wish. Some people see the military, law enforcement, censors, and tax collection agencies--a sort of immune system--as similarly imposing. Of course they're imposing! That's their role. If our nation host didn't possess a brain and an immune system, which are two features ubiquitous among complex organisms, then surely it would not be responsive to its international environment, nor would it be capable of warding off infection, either exogenous or endogenous.
As individuals, many of us flock to large cities knowing that we lose many of our freedoms by doing so but also knowing that we gain greater potential to prosper through our labors. I as a software developer do well for myself by plying my trade, but I do so only under the umbrella of protection and stability afforded to me by my society. In return my behaviors are limited, and I am not as free in thought or action as a hermit. Though, if I were responsible for my own constant protection as well as supplying my every need, as the hermit is, then my specialized skill of developing software would be nigh worthless. Similarly, a pancreas cell suddenly thrust into the world outside the body would find itself unequipped to procure its own survival; its beautifully specialized skill of hormone production would be nigh worthless.
My nation host currently affords me a degree of freedom that is sufficient. Some people would have me focus on the half-empty portion, though, and concern myself with the delta, the decrease in my freedom over time. These people ignore that my nation host is young yet and continuing to develop into full maturity and that our increasingly rigid social and political structure is a natural byproduct of that growth.
Thursday, March 4, 2010
A reminder to myself
The seed germinates, and a sapling grows. Would you have the sapling remain a sapling forever? No. The sapling is to pass some years growing into a full, healthy tree; and in doing so cells and tissues will continue to differentiate, heterogeneity will increase, parts' roles within the whole will become more rigidly set.
The tree bears fruit. Would you weep for the fruit of the tree? No. The fruit provides you with nourishment, though it signals the height of bloom for the tree, that the period of its quick growth is over, and that its terminal decline is imminent.
The tree withers and falls. Do you despair? No. The tree has spread its germ, and the soil will be enriched by its decay. The cycle will begin anew.
So why then would you worry for your country? For your eroding freedoms? For your fouled nest? On principle? You are nothing but a part of the tree. You are fortunate enough to be alive during the time of fruit-bearing.
Now, continue with reading B. F. Skinner's Walden Two and laugh.
Monday, March 1, 2010
Mystery, beauty, and significance
When we "prove" or "disprove" a philosophy we are merely offering another one, which, like the first, is a fallible compound of experience and hope. As experience widens and hope changes, we find more "truth" in the "falsehoods" we denounced, and perhaps more falsehood in our youth's eternal truths. When we are lifted up on the wings of rebellion we like determinism and mechanism, they are so cynical and devilish; but when death looms up suddenly at the foot of the hill we try to see beyond it into another hope. Philosophy is a function of age.I am a rebel by Mr. Durant's definition because I am a mechanist; a materialist; or, to shake off some rust from those two terms, a physicalist. For most of my life, it has seemed prudent to suppose that all the activities of the mind, from conscious thinking to unconscious feeling, at their core lie firmly within the domain of the same physical laws that we use to explain and predict all other interactions of matter and energy.
Will Durant
The Story of Philosophy
I read the above quoted passage from The Story of Philosophy in the section about Henri Bergson. I hadn't previously known of Bergson, and the eighteen brief pages I read about him were enough for me to dismiss him along with other non-materialists, those other wishful thinkers who so badly want the human mind to be above and beyond the natural, those who so badly want their bodily composition to be more special than rare bits of dust from fantastic explosions of long-dead stars, those who opine, however eloquently, about how life is too complex and improbable to be subject to the physics textbook.
I reject materialism's opposite, free will, for two reasons. The first is that free will, as an explanation for the mind, gains us nothing practical; by definition it fails to help us explain the whys and wherefores. It's safer and more rational to assume that free will doesn't exist than to assume that it does, because the former belief can be falsified while the latter cannot. Meanwhile, by initially rejecting free will, we may develop a useful theory of mind that aids us in pursuits such as education and the treatment of mental illness.
The second reason I reject free will is that the concept reeks of psychological bias. This is no proof of materialism, to be sure, but it's based on the same type of simplified reasoning I use to reject unicorns, leprechauns, and magical teapots in space—all things whose existence cannot be disproved. People believe in free will because they want free will to exist.[1] They dismiss materialism because they feel that as a philosophy it reduces all of life to an unbearably industrial process lacking mystery, beauty, and significance. I think this is way off. I think that materialism has greater potential for mystery, beauty, and significance and that those who reject it are missing out on a more sublime way of perceiving the universe.
* * *
Physicalism is the idea that our minds are composed only of the same basic types of matter and energy as can be found throughout the rest of the universe and that the workings of our minds may be explained by a consistent physical process. Our mind differs little, if at all, from a machine, however terrific and complex it may be; given our mind's initial state and its environment's initial state, one may simulate the ensuing physical interactions within and without it and so predict our thoughts and feelings with total accuracy. It is this conclusion, that our thoughts and feelings are mere byproducts of ancient natural processes, that, for so many people, rankles the spirit and defies the intuition.
Such defiance of the intuition may be for good reason, though the axioms of materialism are as solid and well supported by evidence as ever. What is not supported, though, is the idea that materialism allows for perfect prediction of the mind. Predictability is a misleading if not outright wrong deduction from materialism, for two reasons. Firstly, our most commonly accepted understanding of the physical laws suggests that no initial state can be known with total precision. This is the uncertainty principle, and its relation to free will has been done over many times, so I won't cover it here.
What I will cover is the second reason, which is that supposing for the moment that we do indeed know the initial states of our mind and our mind's environment, it may still prove impossible to predict our thoughts and feelings because the interactions of matter and energy may themselves, though governed by consistent physical law, prove unpredictable.
The universe is running down. Out of a primordial seed of order, chaos continuously emerges. But what is chaos? Chaos is information. Whereas order is pattern, a whole tending toward homogeneity that may be described without describing all of the parts individually, chaos is the lack of pattern, a whole tending toward heterogeneity that may only be described by describing all of its parts individually.[2] If the quantity of chaos is continuously increasing then the quantity of information is continuously increasing.
Coworker Shafik thinks of a hypothetical computer that is powerful enough to simulate all of the universe, from its origin till its end. When I think of such a computer, I think of a machine that requires an ever increasing store of memory to hold the simulation's state because, at each step in the simulation, results occur that could not be predicted given the information previously known. Wholly new information is created.
Or, at least, this is one interpretation of our current understanding of the universe. It may turn out that one of the bedrocks of modern science—the idea that the universe is running down—is flawed. Or maybe ever increasing chaos is nothing more than an emergent property of an expanding universe; maybe our concept of information is flawed and what we think of as chaos is yet another predictable outcome of existing circumstances and consistent law. Maybe the workings of the mind have nothing to do with chaos at all. Materialism may be determinism; materialism may not be determinism. In light of these questions and uncertainties, does materialism not hold, as a philosophy, great potential for mystery, beauty, and significance?
Later in The Story of Philosophy, I read a quoted passage of another philosopher previously unknown to me that captures this spirit divinely.
A theory is not an unemotional thing. If music can be full of passion, merely by giving form to a single sense, how much more beauty or terror may not a vision be pregnant with which brings order and method into everything that we know... If you are in the habit of believing in special providences, or of expecting to continue your romantic adventures in a second life, materialism will dash your hopes most unpleasantly, and you may think for a year or two that you have nothing left to live for. But a thorough materialist, one born to the faith and not half plunged into it by an unexpected christening in cold water, will be like the superb Democritus, a laughing philosopher. His delight in a mechanism that can fall into so many marvellous and beautiful shapes, and can generate so many exciting passions, should be of the same intellectual quality as that which the visitor feels in a museum of natural history, where he views the myriad butterflies in their cases, the flamingoes and shell-fish, the mammoths and gorillas. Doubtless there were pangs in that incalculable life; but they were soon over; and how splendid meantime was the pageant, how infinitely interesting the universal interplay, and how foolish and inevitable those absolute little passions.

George Santayana
Reason in Common Sense

[1] Coworker Shafik tells me that this, what I call "psychological bias", is correctly called argumentum ad consequentiam—appeal to consequences.
[2] If the idea of chaos as information seems odd to you, imagine this scenario. There exist two libraries. One library is a well ordered library with all books in their proper spots on the shelves. The other library has been ransacked, and its books have been ripped apart with their pages lying scattered throughout the library. Now I ask you: in which library do you require more information to find all the pages to the book you're looking for? In the well ordered library you need only a call number and a few signs directing you to the correct shelf. In the ransacked library you would need...