Monday, June 11, 2007

Technology in the LookingGlass world (#1 of several)


Nailing down the technology level in a science fiction world is a tricky thing. Because Looking Glass is set a little less than 20 years in the future, a lot of the technology is like today's, only smaller, faster, and cheaper. It's not exactly rocket science to predict that. The gadgets are probably more recyclable, too, though some of Shroud's comments suggest otherwise, and given the shortsightedness of corporations in my world, as in the real one, maybe they're not. Looking Glass doesn't spend all that much time on the matter. Nor does the next book, thus far.

There are, however, some quantum leaps of technology in Looking Glass. Direct neural interfacing is in its second generation by 2025. A variety of power technologies have replaced petroleum energy, petroleum having become too expensive to burn. Deep virtual realities have become commonplace. It might seem that at least some of these technologies were chosen arbitrarily because this is a cyberpunk story.

There's some truth to that, actually. You need virtual reality for cyberpunk, pure and simple. However, in 2004, when I was writing LookingGlass, a new technology for directly interfacing neurons with silicon was in the news. If the NSF jack I mention in the story sounds like that technology, it should. The next generation of direct neural interfaces, which I describe at some length, is a product of the nanotech revolution, and those interfaces are among the only active nanomachines I use in the novel. Effective neural interfaces do represent a quantum leap, it's true. But these leaps happen, and they ripple through conventional technologies in strange ways, making those technologies re-express themselves in new but familiar forms.

Look at your cell phone for a moment. What you have there is a state-of-the-art wireless digital data network node. What was the quantum leap that made it possible? Microprocessors. Transmitters? 19th-century tech, incrementally improved. Batteries? 18th-century tech, incrementally improved. Touch-tone dialing? Came out in the early 1960s. Microelectronics outside the CPU? Incremental improvements on electronics that began in the early 20th century. The one quantum leap, the microprocessor, rippled out through those related technologies and produced a product that would have been impossible before it, and yet, it's still a telephone. You dial a number, and someone answers on the other end. This is how technology evolves. The electronic and digital revolutions followed this classic pattern, as did steam power before them, and there's no reason to believe that biotechnology and nanotechnology won't go the same way.

Speaking of nanotech, why aren't there more active nanomachines in the LookingGlass world? In recent science fiction, it's become common to see nanotechnology used, essentially, as magic: a technology without constraints that serves as the universal solution to AI, androids, immortality, weapons, and pretty much anything else the writer in question wants to assign to it. No wonder. This is how nanotech has been hyped by the likes of Drexler.

Here's the thing. Nanotechnology will have constraints: what it can do now, what it can do in the future, what it can ever do, and, most important, what it will cost. All of these factors will sharply limit how much nanotech we actually see on the streets. There are a great many technologies today that we never see simply because it isn't economically feasible to use them to manufacture things people will buy. A great example: better alternatives to the transistor have existed for several decades now, and yet if you look inside the chips that make our world work today, you will find millions upon millions of transistors. Why? Because the silicon industry has an enormous investment in time, equipment, and expertise for dealing with transistors, and none of the "better" technologies offers a fiscal advantage large enough to make manufacturers change.

This has happened before. After World War II, the American consumer electronics industry virtually ignored the transistor. Transistors were expensive to make and had some serious limitations, but most of all, American manufacturers had just invested millions, if not billions, of dollars in miniaturizing vacuum tubes, and they were tooled up to make those in great quantity. Transistors only took off because the Japanese, whose electronics industry had been all but destroyed by wartime bombing, were starting from scratch and elected to license transistor technology and run with it rather than wade back into tubes. The first inexpensive Japanese transistor sets hit American shores in 1956, and by the 1970s, the American consumer electronics industry was itself all but destroyed.

Anyway. The upshot is that I think nanomaterials will be all the rage in 2025. The machines that make them will be confined to factories, but the resulting products will be available, and if you need the extraordinary properties of a given nanomaterial, you'll pay for them. Active nanomachines are much, much harder to make, harder to sustain, and so on, and self-reproducing nanomachines… my feeling is that while they will quite likely be possible in 2025, they'll be so hard to make, so dangerous to have around, and thus, so expensive that you wouldn't use them if you had any alternatives at all. Neurofibers are active nanomachines, yes. Sort of. (More on that in the next installment.) They most emphatically do not reproduce, however. And even so, it bears noting that a neurofiber jack is usually purchased with a mortgage, like a house. Figure a quarter of a million dollars and up in today's money. They're hard to make, harder to make in quantity, and thus, they're expensive.
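To put that mortgage comparison in rough numbers, here's a back-of-the-envelope amortization sketch in Python. The quarter-million-dollar price comes from the paragraph above; the 30-year term and 6% interest rate are illustrative assumptions of mine, not figures from the book.

```python
# Rough monthly cost of financing a neurofiber jack like a house,
# using the standard fixed-payment amortization formula:
#   M = P * r * (1 + r)**n / ((1 + r)**n - 1)

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment for a standard amortized loan."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# $250,000 jack, 30-year term at an assumed 6% APR: roughly $1,500 a month.
print(f"${monthly_payment(250_000, 0.06, 30):,.2f} per month")
```

In other words, under those assumptions, even a low-end jack costs about as much per month as a house payment, which is the point: the hardware exists, but it isn't casual.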

(To be continued…)
