Thursday, December 29, 2011

Reading log, 2011

All during 2011 I kept a list of the books I read. I don't feel like writing any book reports, or even short blurbs, so here it is: just the list.

  • Chris Townsend
    The Backpacker's Handbook (2nd ed)

  • John Steinbeck
    The Grapes of Wrath

  • Lisa Rogak
    Moving to the Country Once and for All

  • William Gibson
    Neuromancer

  • Jared Diamond
    Guns, Germs & Steel

  • William Zinsser
    On Writing Well

  • Wanda Urbanska & Frank Levering
    Moving to a Small Town

  • (various essayists)
    Is the Internet Changing the Way You Think?

  • T. C. Boyle
    Drop City

  • Margaret Atwood
    The Year of the Flood

  • John Steinbeck
    Travels with Charley: In Search of America

  • J. K. Rowling
    Harry Potter and the Sorcerer's Stone

  • John Michael Greer
    A World Full of Gods

  • Henry D. Thoreau
    Walden

  • John L. Casti & Werner De Pauli
    Gödel: A Life of Logic

  • Dan Brown
    The Lost Symbol

  • Edward Feser
    Aquinas

  • J. K. Rowling
    Harry Potter and the Chamber of Secrets

  • J. K. Rowling
    Harry Potter and the Prisoner of Azkaban

  • Sean Carroll
    From Eternity to Here: The Quest for the Ultimate Theory of Time

  • J. K. Rowling
    Harry Potter and the Goblet of Fire

  • J. K. Rowling
    Harry Potter and the Order of the Phoenix

  • J. K. Rowling
    Harry Potter and the Half-Blood Prince

  • J. K. Rowling
    Harry Potter and the Deathly Hallows

Monday, December 26, 2011

From Eternity to Here

I wish I could write something insightful about Sean Carroll's book From Eternity to Here, but instead I wrote today's post. This is not to say anything bad about Carroll's book—it's a great book. Rather, it's to say there's nothing like learning a few facts to realize just how little I know, and that's what I got from reading Carroll's book: a few facts and a big dose of humility. Cosmology isn't my strength.

Carroll's main point has to do with explaining time and why we experience it as moving forward. The short answer is that time appears to move in the direction of increasing entropy. But this raises another question: why is entropy steadily increasing? The majority of the book explores this question from a multitude of angles, and along the way I learned some interesting facts, which I've summarized in bullet-list form.

  • The laws of physics, even at their most fundamental level, may be reversible, which is to say time's arrow may not arise from low-level interactions. Particle physics appears reversible along the three reflections of nature: time, parity (i.e., right and left, like what a mirror changes), and charge (i.e., positive and negative). When all three reflections are inverted, a particle or a system of particles will run backwards. So, for example, imagine you start with a box, mostly empty save for gas particles crammed into one corner. That's a low-entropy state. Then let the particles bounce around until they fill the box uniformly, which is a high-entropy state. If, at some time after the particles settle into a uniform distribution, you invert each particle along all three reflections, then the particles will move in reverse, with the effect that entropy will decrease from high to low in the box. (A toy version of this reversal appears in code after this list.)

  • Entropy isn't one-to-one with disorder. Counterexample: oil and vinegar, when mixed and allowed to increase in entropy, will separate into a more ordered state. Thus, sometimes an increase in entropy denotes a decrease in disorder. So it's a good idea to be precise with the terminology and say entropy when that's what you mean, not disorder.

  • There is something called a Boltzmann brain, which is a hypothetical brain, or mind, that floats in outer space unattached to any body. But the brain is alive, thinking and feeling just like any human brain does. As extraordinarily unlikely as it is that a Boltzmann brain actually exists (for the odds of a brain forming in a near vacuum are extremely tiny), it's still more likely for a Boltzmann brain to exist than for Boltzmann himself. This is because Boltzmann (the physicist) comprises a brain and a body, which is an even lower-entropy configuration than just a brain.

  • Indeed, Boltzmann brains are maybe the biggest reason why it's important for guys like Carroll to figure out what time is. Boltzmann brains tell us—not the actual brains, mind you, just their possible existence—that we ourselves are more likely to be Boltzmann brains than real people on a planet, just as it's more likely for the universe to spontaneously generate a loaf of bread than to generate a loaf of bread and a baker. But we're not Boltzmann brains, so cosmology ought to account for why the universe has much less entropy than it strictly needs for there to exist someone who, like us, observes what's going on. If Boltzmann brains were impossible, you could merely posit a low-entropy beginning condition—i.e., the big bang—and say the universe had an infinite amount of time before then in which to fluctuate into the big bang's hot, dense, low-entropy state. But once you allow for the possibility of Boltzmann brains, and for how much more likely we'd be Boltzmann brains than real people on a planet, you need to explain why the universe's past low-entropy condition was lower than it needed to be—i.e., low enough to produce us.
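
To make the reversal thought experiment concrete, here's a toy sketch in Go, my own construction rather than anything from Carroll's book, and it inverts only velocities (the time reflection) instead of all three: particles in a small wrap-around box start bunched in one corner, spread out, and then retrace their paths exactly once their velocities are flipped.

package main

import "fmt"

const boxSize = 101 // cells in a periodic (wrap-around) box
const ticks = 137   // how long to run before reversing

type particle struct {
  pos, vel int
}

// step advances a particle one tick, wrapping around the box. The map is
// exactly reversible: a step taken with a negated velocity undoes a step
// taken with the original velocity.
func step(p *particle) {
  p.pos = ((p.pos+p.vel)%boxSize + boxSize) % boxSize
}

// spread counts how many distinct cells are occupied: a crude stand-in for
// entropy, small when the particles are bunched and large when they aren't.
func spread(particles []*particle) int {
  occupied := make(map[int]bool)
  for _, p := range particles {
    occupied[p.pos] = true
  }
  return len(occupied)
}

func main() {
  // Low-entropy start: twenty particles crammed into one corner.
  particles := make([]*particle, 20)
  for i := range particles {
    particles[i] = &particle{pos: i % 5, vel: i%7 + 1}
  }
  fmt.Println("cells occupied at start:", spread(particles)) // 5

  for t := 0; t < ticks; t++ { // run forward; the particles disperse
    for _, p := range particles {
      step(p)
    }
  }
  fmt.Println("cells occupied after running:", spread(particles)) // 20

  for _, p := range particles { // the reflection: invert every velocity
    p.vel = -p.vel
  }

  for t := 0; t < ticks; t++ { // same dynamics, movie now runs backwards
    for _, p := range particles {
      step(p)
    }
  }
  fmt.Println("cells occupied after reversal:", spread(particles)) // 5 again
}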

Carroll offers one possible model in his book. But that's all you'll read about it here.

Thursday, December 22, 2011

Does this ever happen to you?

At least this much happens to everyone: you've closed your eyes but haven't yet fallen deeply into sleep. Suddenly there's a loud sound, such as the slamming of a door or the ringing of a phone, and you're startled awake.

What does that sound feel like to you?

I feel a curious sensation whenever I'm startled awake this way. I see the sound. The part of me that's jolted awake is jolted by sight. What I see—every time—is a sudden flash that looks a lot like TV static. Whatever I was seeing before, whatever nascent dream was working its way through my unconscious mind, is replaced by a blinding flash of snowy noise that then fades out, just as if a CRT were being switched off. Then my mind hears the sound, and I awake and open my eyes—peeved but aware of what awoke me.

How many other people feel this sensation?

Monday, December 19, 2011

Information and entropy

I have a correction to make. A few weeks ago, I wrote that information increases as entropy increases. I was wrong. The relationship between entropy and information depends on who you ask.

Sean Carroll, in his book From Eternity to Here, repeatedly states that information decreases as entropy increases. I take this to be a view more common than mine, though I had long thought it was mistaken. Reading this book changed my mind. It turns out there's more than one way to define information, and unsurprisingly, not everyone chooses to define it as though they're a computer scientist.

As to why there exist different definitions for information, some leading to opposite descriptions of the world around us, that's something of a riddle. Today I'm going to describe that riddle.

First, take the view that information and entropy are inversely proportional—i.e., the view Carroll expresses in From Eternity to Here. Carroll uses an example of a glass containing warm water and an ice cube. The ice cube melts, causing the water to become cool. This change entails an increase in entropy. But as Carroll puts it, information is lost along the way. That's because the situation ends with a glass of cool water, but a glass of cool water can result from either an ice cube melting in warm water or else a glass whose water was cool to begin with. Two possible states evolved into one—i.e., information decreased.

However, that's not how I normally think about entropy. I take the view that information and entropy are directly proportional. To see it my way, take another example. Imagine you flip a coin a thousand times, and it comes up heads every time. That's an unlikely, low-entropy result. It's also the simplest result to describe; the two-word description 1000 heads suffices. Contrast that to any high-entropy result you're likely to achieve with a fair coin, where no discernible pattern emerges. In a patternless result, the only way to describe all coin tosses is by listing each toss individually—e.g., heads, heads, tails, heads, tails, tails, tails, etc. That's the meaning of patternless. Thus, higher entropy requires more information to describe what's going on.

So what's going on with the difference between these two scenarios? Which way is the right way of looking at the relationship between information and entropy? I wish Carroll had elaborated on this in his book, but From Eternity to Here is chiefly about time, not information, so I didn't learn why physicists find it compelling to look at entropy and information as being inversely proportional. I understand only my own view, which stems from a background in computation.

My perspective is that of dealing with computer stuff, including data compression and the shortest program to do X. Put simply: the more random something is, the less it can be compressed—the longer a program must be to reproduce it. That leads computer scientists to the counterintuitive notion that high-entropy randomness is full of information, whereas patterns are not. That means we look at a TV showing static as containing more information than a TV showing a show, just as a shredded book contains more information than an intact book. As you may imagine, this view takes some getting used to.
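
To make the coin-flip example literal, here's a quick sketch using Go's standard DEFLATE compressor (my example, not Carroll's):

package main

import (
  "bytes"
  "compress/flate"
  "fmt"
  "math/rand"
)

// compressedSize returns how many bytes data occupies after DEFLATE
// compression at the highest setting.
func compressedSize(data []byte) int {
  var buf bytes.Buffer
  w, _ := flate.NewWriter(&buf, flate.BestCompression)
  w.Write(data)
  w.Close()
  return buf.Len()
}

func main() {
  // Low entropy: 1000 heads. Two words describe it; few bytes store it.
  heads := bytes.Repeat([]byte("H"), 1000)

  // High entropy: 1000 fair flips. No pattern for the compressor to exploit.
  flips := make([]byte, 1000)
  for i := range flips {
    if rand.Intn(2) == 0 {
      flips[i] = 'H'
    } else {
      flips[i] = 'T'
    }
  }

  fmt.Println("1000 heads compress to:", compressedSize(heads), "bytes")
  fmt.Println("1000 random flips compress to:", compressedSize(flips), "bytes")
  // Typically the all-heads run shrinks to about a dozen bytes, while the
  // random flips stay above a hundred bytes: roughly 1 bit of information
  // per flip is a floor no compressor can beat.
}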

As to the riddle of the two examples, and what causes the difference between the two views, the difference is whether one's view is macroscopic or microscopic. The macroscopic view leads to the physics perspective, where molecules are coarse-grained into big states, such as warm water with an ice cube. As entropy increases, the number of possible macroscopic states decreases, and that's perceived as information loss.

The microscopic view leads to the computer science perspective, where there is no coarse-graining and one keeps track of each individual bit. As entropy increases, the number of possible microscopic states increases, and that's perceived as information gain.
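
Here are both perspectives in one toy calculation, again a sketch of my own: count the states of ten coin flips at the two levels of description.

package main

import "fmt"

// choose returns the binomial coefficient C(n, k): how many microstates
// (exact flip-by-flip sequences) sit behind the macrostate "k heads".
func choose(n, k int) int {
  result := 1
  for i := 1; i <= k; i++ {
    result = result * (n - i + 1) / i
  }
  return result
}

func main() {
  const n = 10 // ten coin flips: 2^10 = 1024 microstates in all
  for k := 0; k <= n; k++ {
    fmt.Printf("macrostate %2d heads: %4d microstates\n", k, choose(n, k))
  }
  // The physicist coarse-grains 1024 microstates into just 11 macrostates,
  // and the high-entropy middle macrostates (252 microstates for 5 heads)
  // blur together the most detail. The computer scientist, tracking every
  // bit, sees those same high-entropy states as the ones needing the most
  // information to pin down.
}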

That solves that riddle, but it suggests another riddle entirely: what is information, really? May we say something objective about it?

Thursday, December 15, 2011

The new commute

The new commute is tough. But because it's by bike, being tough isn't necessarily bad.

The shortest and fastest legal-and-not-too-likely-to-get-me-killed route to work is the one straight down Central Ave, from the AZ Canal path to downtown. It's a tad over twelve miles, descends about 150 feet, and takes me 45-50 minutes to complete most mornings, door to desk. My guess is that's about 15 minutes slower than taking the freeway by car and a little faster than taking the express bus, again figuring total time from door to desk.

The route home is different. It ascends 150 feet, and though that's not much spread over twelve miles, it makes enough of a difference when biking in rush hour traffic. Fighting the gradient subtracts several KPH from my average speed, which makes it harder to ride with traffic, to time lights, and to evade trouble. Add to that the phenomenon—as I observe it—that afternoon rush hour traffic is more aggressive and unpredictable than morning rush hour traffic, and I have sufficient motivation to take a more out-of-the-way route home, one that's calmer. That route sends me all the way to 20th St and the AZ Canal path near my old neighborhood. It's more than fifteen miles but traverses only a dozen traffic lights or so, which is remarkable considering I cross through downtown at the start, and downtown is a dense matrix of traffic lights. But the afternoon route's elevation gain and added distance make it slower than my morning route; it takes between 60 and 80 minutes, depending on how much power I put to the pedals.

On another note, last week I had the foresight to buy a set of fenders for my touring & commuter bike. I installed them Monday night and tested them in the wet and muddy conditions during Tuesday's commutes. Conclusion: fenders are amazingly effective. Upon finishing both the morning ride and the afternoon ride, my legs were dry and clean, and that's after speeding through inch-deep puddles. Had I ridden a bike without fenders, I would have been a sopping mess from waist to toe—I know that from experience. Tuesday alone made me pro-fenders.

Monday, December 12, 2011

The sleeping mind

What modern materialist explanations of mind get right, I suspect, is their assumption that minds are entirely material phenomena that abide all the same physical laws as any other material we observe. But what materialist explanations get wrong, I further suspect, is nearly everything else.

It's not just the details we're wrong about; the metaphors we're using for our basic understanding of the mind are misleading. To show what I'm talking about, take sleep as an example.

Sleep is ubiquitous among animals possessing a nervous system of sufficient complexity. Even fruit flies appear to sleep. But sleep patterns vary to extremes from one species to another. For example, humans sleep an average of eight hours per day, with nearly all of that sleep happening in one burst. Giraffes and cows sleep only about four hours per day, and armadillos sleep about eighteen. A house cat may sleep twice as much as its human roommate without ever sleeping more than a couple hours at a time. Ostriches sleep fifteen minutes or so at a time. Some animals sleep nocturnally; some animals' diurnal phasing changes with the seasons. Some birds and many aquatic mammals sleep with half their brain still awake, though REM sleep always requires both hemispheres. Seals sleep both in the water and on land, but they attain REM sleep only on land. And so on.

Given the wide range of sleep behaviors across different species, it strikes me as more than an accident that all animals of sufficient neural complexity sleep. Rather, it seems as though sleep is a necessary condition for self-sustained neural complexity, and the wide range of sleep behaviors we observe reflects animals' diverse ways of coping with the necessary but strategically disadvantaged position of being unconscious on a murderous planet.

Yet, as far as I can tell, sleep figures prominently in no modern explanation of the mind. Modern materialism's guiding metaphor for the mind remains the digital computer and its mechanical manipulation of information. But digital computers don't need sleep, so as a metaphor I doubt they'll take us more than partway, if anywhere, toward figuring out what's going on in the mind. Indeed, my guess is that computers' freedom from sleep marks one of the major limitations preventing us from making machines humanly smart, though how we should make computers need sleep is anyone's guess.

I suspect a good theory of mind will make strong claims about sleep and answer many of the puzzling questions we have about it. It will explain why complex neural systems need to shut down and reboot from time to time. And it will explain dreaming.

Thursday, December 8, 2011

opinions.go

package main

import "time"

func main() {

  // FIXME: This is not thread-safe. Then again, that just makes it
  // interesting.

  // All opinions strengthen over time, given no facts.
  go func() {
    for {
      time.Sleep(24 * time.Hour) // 1 day
      for _, op := range opinions {
        op.strengthen(stuckInMyWaysDelta) // presumably >0 but small
        // NOTE: stuckInMyWaysDelta keeps getting increased each time we
        // release. This is becoming a problem.
      }
    }
  }()

  // Accept incoming facts, adjust opinions accordingly. Congruent facts
  // strengthen opinions, incongruent facts weaken opinions.
  go func() {
    for f := range factChan {
      for _, op := range opinions {
        agree, value := op.arbitrateFact(f)
        if !agree {
          value = -value
        }
        relevancy := op.factRelevancy(f) // >= 0.0
        op.strengthen(value * relevancy)
      }
    }
  }()

  // Contrariness loop:
  // Disabled if pleasant or uninteresting.
  if !pleasant && interesting {
    go func() {
      // For each incoming opinion (from another program), reconcile with
      // existing, local opinions. Unlike facts, opinions are strengthened
      // because of disagreement, not agreement.
      for inOp := range opinionChan {
        for _, op := range opinions {
          agree, value := op.arbitrate(inOp)
          if agree {
            value = -value
          }
          value *= howMuchICareCoefficient // see social.go
          relevancy := op.relevancy(inOp) // >= 0.0
          op.strengthen(value * relevancy)
        }
      }
    }()
  }

  go inspireNewOpinions()
  go garbageCollectStaleOpinions()

  metabolize() // doesn't return until SIGTERM
}
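
opinions.go won't compile on its own, of course; it leans on declarations that presumably live in social.go and elsewhere. For anyone who wants to watch it run, here's one minimal, hypothetical scaffolding (every type, value, and behavior below is invented for illustration):

// stubs.go: one hypothetical scaffolding that makes opinions.go compile.
package main

import "math/rand"

type fact struct{ congruence float64 }

type opinion struct{ strength float64 }

func (op *opinion) strengthen(delta float64) { op.strength += delta }

// arbitrateFact reports whether a fact agrees with this opinion and how
// much weight to give it. Real arbitration logic is left to the reader.
func (op *opinion) arbitrateFact(f fact) (agree bool, value float64) {
  return f.congruence >= 0, rand.Float64()
}

func (op *opinion) factRelevancy(f fact) float64 { return rand.Float64() } // >= 0.0

// arbitrate compares an incoming opinion against this one.
func (op *opinion) arbitrate(inOp *opinion) (agree bool, value float64) {
  return rand.Float64() < 0.5, rand.Float64()
}

func (op *opinion) relevancy(inOp *opinion) float64 { return rand.Float64() } // >= 0.0

var (
  opinions    = []*opinion{{strength: 1.0}}
  factChan    = make(chan fact)
  opinionChan = make(chan *opinion)

  stuckInMyWaysDelta      = 0.01 // presumably >0 but small
  howMuchICareCoefficient = 1.0  // see social.go
  pleasant                = false
  interesting             = true
)

func inspireNewOpinions()          {}
func garbageCollectStaleOpinions() {}

func metabolize() { select {} } // blocks until the process is killed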

Sunday, December 4, 2011

On the origin of minds

From Douglas Hofstadter's I Am a Strange Loop:

But consciousness is not a power moonroof (you can quote me on that). Consciousness is not an optional feature that one can order independently of how the brain is built. You cannot order a car with a two-cylinder motor and then tell the dealer, Also, please throw in Racecar Power® for me. (To be sure, nothing will keep you from placing such an order, but don't hold your breath for it to arrive.) Nor does it make sense to order a car with a hot sixteen-cylinder motor and then to ask, Excuse me, but how much more would I have to throw in if I also want to get Racecar Power®?

Like my fatuous notion of optional Racecar Power®, which in reality is nothing but the upper end of a continuous spectrum of horsepower levels that engines automatically possess as a result of their design, consciousness is nothing but the upper end of a spectrum of self-perception levels that brains automatically possess as a result of their design. Fancy 100-huneker-and-higher racecar brains like yours and mine have a lot of self-perception and hence a lot of consciousness, while very primitive wind-up rubber-band brains like those of mosquitoes have essentially none of it, and lastly, middle-level brains, with just a handful of hunekers (like that of a two-year-old, or a pet cat or dog) come with a modicum of it.

Consciousness is not an add-on option when one has a 100-huneker brain; it is an inevitable emergent consequence of the fact that the system has a sufficiently sophisticated repertoire of categories. Like Gödel's strange loop, which arises automatically in any sufficiently powerful formal system of number theory, the strange loop of selfhood will automatically arise in any sufficiently sophisticated repertoire of categories, and once you've got self, you've got consciousness. Élan mental is not needed.

Hofstadter's analogy between consciousness and Racecar Power® succinctly explains what I find lacking about non-materialist criticisms of materialism. Just as you can tear apart a racecar engine block, grind it to metal shavings, and never once observe an atom of Racecar Power®, so too you should never expect to discover consciousness as a tangible entity anywhere in nature. But we don't go around claiming that Racecar Power® is an immaterial entity that defies materialist explanations; to the contrary, Racecar Power® is engineered by precise and intentional exploitation of physical laws. So too consciousness is a consequence of physical laws applied to plain, ordinary material stuff.

Nevertheless, I find Hofstadter's strange loop view of consciousness lacking. Though the view makes more sense of what I see than non-materialist views, it strikes me as being like guessing the right answer on a test: it doesn't show that we understand what's going on. As yet another analogy, the strange-loop view is like pre-Darwin ideas about evolution, which also were good guesses but guesses nevertheless.

It may surprise some people to know that ideas about evolution predate Darwin, but as far as we know, the idea goes back at least to the Greek philosopher Anaximander, who in the 6th century BC proposed that life began in the seas. Darwin's big contribution was natural selection. Natural selection provides the framework through which we can say how evolution occurs and even a little about what forms it takes. In other words, natural selection is the glue that binds evolution to falsifiability, transforming a weak explanation that says life changes into a stronger one that says life changes as a result of selective pressures of the environment. Pre-Darwin, ideas about evolution were speculative; post-Darwin, evolution serves as a framework that points us to explanations and further questions.

Materialist views of consciousness such as Hofstadter's strange loop are interesting but speculative. What's missing from them is consciousness's analog to evolution's natural selection—i.e., the driving force that explains how the emergent phenomenon works. As for what that analogous thing is, no one knows. But evolution as an idea was around for at least 2,300 years before Darwin entered the scene, and a theory of material consciousness may take as long or longer to emerge—though, if consciousness has no analog to evolution's Galapagos Islands, the problem may be intractable.

Thursday, December 1, 2011

Of shirts and cars

On Monday, for my first day at work, I forgot my shirt. How's that for a first impression?

Wallets, keys, phones, papers, packed lunches, etc.—these are easy things to forget. But not shirts. It's hard to slip out the front door of your home having accidentally forgotten your shirt. Unless, of course, your shirt is supposed to be out of sight, packed in one of your panniers because you're dressed in cycling attire. Then it's easy to forget that shirt—the one you pressed the night before, which was a big deal because you haven't ironed anything in years and the iron emitted an aroma of melting plastic because it hadn't ironed anything in years either—draped over a kitchen chair.

The company I work for makes chargers for electric cars. Some people thought the future arrived when they first watched a movie on their phone. For me the future has arrived in that I have the guts of an EV charger on my desk at work.

For a guy like me who's pro-bike, it may seem strange to work somewhere that's furthering car culture—even if it's a new fringe part of it. I also spent more than four years working at a company that made software for car dealerships—and car dealerships are evil, no doubt. The truth is: most software development in the world is, well, corporate, and I'm not above selling out.

That said, I don't believe electric cars are the way of the future. (Shh, don't tell any of my coworkers I said that!) Electric cars require a lot of system complexity just to maintain the status quo. I liken our culture's sudden enthusiasm for EVs to a sign that we're entering the bargaining phase in our grief over the continued erosion of our way of life. Please, please let me keep driving a one-and-a-half-ton vehicle 60MPH on the freeway. I'll do anything—even put up with limited range, higher costs, and the extra inconvenience of electric charging compared to pumping fuel. Please?

While sitting at my new desk and familiarizing myself with the details of EV charging, I ran some Physics 101 calculations to compare pumping gas to electric charging. Here's what I got:

  • There are 4,184 watt-seconds in a Calorie.
  • A gallon of gasoline contains 31,000 Calories—or what I like to call 31 burrito units.
  • At the gas pump, gas flows up to 10 gallons per minute, though I suppose most pumps do about half that—let's say 5 gallons per minute.
  • There are 60 seconds in a minute.
  • Thus, 4,184 watt-seconds per Calorie, times 31,000 Calories per gallon, times 5 gallons per minute, times 1 minute per 60 seconds comes to about 10 megawatts (sketched in code below).
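
And since this blog already speaks Go, here's that arithmetic as a program, using the same rough figures from the list above:

package main

import "fmt"

// Back-of-the-envelope: the power flowing through a single gas pump.
func main() {
  const (
    wattSecondsPerCalorie = 4184.0  // one (food) Calorie in watt-seconds
    caloriesPerGallon     = 31000.0 // Calories (31 burrito units) per gallon
    gallonsPerMinute      = 5.0     // a typical pump, roughly
    secondsPerMinute      = 60.0
  )
  watts := wattSecondsPerCalorie * caloriesPerGallon * gallonsPerMinute / secondsPerMinute
  fmt.Printf("one pump: %.1f megawatts\n", watts/1e6) // prints 10.8
}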

To make that clear, whenever you pump gas for your car, you're controlling an energy flow equivalent to about two to three thousand McMansions. Granted, you fill your tank in only a few minutes, whereas those McMansions keep sucking power all day. But the next time you stop and fill-'er-up during rush hour, look around at the other ten or so pumps in use and keep in mind that that gas station is outputting as much power as a small coal-fired power plant.

This is to say people's dreams of electric cars replacing cars as we know them probably aren't going to come true. Even if we solve battery shortcomings—and that's a huge if—there's still the problem of replacing the convenience of fueling at a rate that's three orders of magnitude greater than what your house consumes. I just don't see that happening. Ever.

The way of the future for personal transportation will involve the word smaller and probably the words slower and nearer. Everything between now and then is bargaining and depression.

As for me and my new job, I'm just glad they don't mind me wearing an undershirt to work.