Sunday, August 3, 2008

Sweet Dreams Are Not Made Of This

In the last few years, I've noticed hard drives failing at a spectacular rate. This phenomenon has cut across vendors. I've had Western Digital, Maxtor, IBM and Seagate drives all fail on me, whether mounted internally or externally. Just this week, one of the drives in my Buffalo DriveStation Quattro failed, and so did one of the drives mounted in one of my machines at work. External housings also fail at an alarming rate (controllers or power supplies have failed on about 40% of the ones I own).

Electronic devices in general are either failing more often, or their sheer ubiquity is making apparent failure rates that have always existed. My new computer shipped with one or two chips of bad RAM. Our GPS navigational device failed recently. All manner of components fail regularly, often bringing down an entire system with them.

As someone who engages in discussions and debates about Transhumanism, I have come to believe that while it is fascinating to contemplate the high-level questions about the feasibility of uploading (whether the human mind can be sufficiently represented in determinate hardware for uploading to be feasible at all, or whether scanning brain states or understanding the underlying logic is the more tenable approach), a much more mundane concern must first be addressed before any of those discussions move beyond the realm of fantastical speculation. That concern is reliability, of both hardware and software.

Simply put, nothing lasts forever. But, if we want to explore the idea of allowing our minds to outlast our body hardware, we must deal with the decidedly unsexy problem of reliability. Hardware and software will always fail, though hopefully not always as frequently as is the case now. A major component of reliability is trust, which is placed in the hands of other people. In the case of Transhumanism, that trust is placed in the hands of future generations who may have no real attachment to their wards, depending on what differences would emerge between uploads and humans should this become feasible.

What we are trusting others to do, even in contemporary society, is to maintain systems that keep us alive. Aircraft mechanics, hospital equipment technicians, traffic systems engineers, and a whole host of other people are entrusted to maintain and repair systems we've come to utterly depend upon. Uploaded minds would not only need more reliable hardware, but also a trustworthy system of stewardship in which redundancy is provided, timely and complete backups are maintained, and hardware and software failures are addressed before minds are literally lost.
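The value of the redundancy mentioned above can be made concrete with a toy calculation. Assuming replicas fail independently (a generous assumption in practice, since shared power, firmware, and stewardship create correlated failures), the probability of losing every copy of a stored mind falls exponentially with the number of replicas. The function name and the failure rates below are illustrative assumptions, not measured figures:

```python
# Toy model: probability of losing every replica of some critical data
# in a given year, assuming each replica fails independently with
# probability p. Illustrative only; real failures are often correlated.

def p_total_loss(p_per_replica: float, replicas: int) -> float:
    """Probability that all replicas fail in the same period,
    given independent failures."""
    return p_per_replica ** replicas

# With a pessimistic 10% annual failure rate per copy:
for n in range(1, 5):
    print(f"{n} replica(s): {p_total_loss(0.10, n):.6f}")
```

Even under this crude model, three copies at a 10% annual failure rate bring annual total-loss probability down to roughly one in a thousand; the hard part is the stewardship that keeps the failures independent and the replicas repaired.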

One limiting factor here is cost. Doing all these things requires resources, and as more and more minds were uploaded, more and more resources would be needed. Incentives would be needed to give the stewards reason to perform their duties: both the promise of future uploading for themselves and material supplies and comforts in their biological lives.

Perhaps this all becomes a moot point once computers can reliably maintain, repair, upgrade and replace themselves without loss of data. But we're so far from this point that there is a legitimate bootstrapping concern here. Waving it away with hopes for a magical "hard singularity" by which a rapid transition causes the universe to go from chaotic and flawed to perfectly managed in a matter of minutes may palliate some people's concerns in this area. Personally, I don't see how we could get anywhere near a Transhumanist ideal of preserving our thoughts, memories, and personalities in such a way as to afford "eternal" life with continuity of consciousness, without first addressing immediate concerns about reliability.

Moore's law hoodwinks Transhumanists into believing that eventually all mundane problems of computing capacity will "solve themselves" with time, but I contend that these pragmatic issues of computer hardware and software development pose a far greater challenge to Transhumanism via uploading than the dyed-in-the-wool idealists are willing to accept.

Even without taking into account Transhumanist fantasy, the problem of system reliability is a very real and pressing issue as our society becomes increasingly computerized. Transport, medicine, finance, the military, and law enforcement all rely heavily on computerized systems. These are systems upon which lives, sometimes millions of lives at once, utterly depend. Reliable systems engineering is an incredibly boring area of research, which also happens to be utterly essential if we're to continue to expand the degree to which we rely upon technology to maintain our way of life, and indeed life itself.
