Just a quick update. Very nice review of Looking Glass on The Speculist. Full disclosure: I've known Mike, the author, for the majority of my life. - JRS
Monday, June 25, 2007
Friday, June 15, 2007
Biotechnology is actually a stronger player in the LookingGlass world, although you don't see much of it in Looking Glass. You do see a little of it in shots, which make you immune to all forms of sexually transmitted diseases and also serve as nearly perfect birth control. Admittedly, shots began as a plot convenience, but as the world has evolved, particularly in the later novels in the series, they've had interesting social ramifications. Remember for a moment that the sexual revolution of the 1960s occurred in no small part because the most common forms of VD became treatable with antibiotics, and because birth control pills, along with condoms and other barrier methods, were developed, made legal, became cheap, were readily available, and were reasonably effective. This, combined with a population explosion of young, horny people who'd grown up with these factors in place, changed our society significantly. The LookingGlass world is much less conservative about matters of sexuality, and shots (along with the segregation of most religious extremists to the UCSA) are the reason. As Shroud says, "I've had my shots, how bad could it be?"
Biotech in the LookingGlass world respects reality a little more than classic cyberpunk. One of the hallmarks of cyberpunk as a genre is treating human bodies like cars - there are chop shops and scrap yards, and having other people’s body parts grafted into your own body is no big deal. Well, in the LookingGlass world there’s some of that, but transplants are still a big deal. Your immune system is not to be trifled with. Besides, transplants are largely being supplanted by clone tissue auto-plants anyway, which I think is the future of transplant science. Technology marches on.
Biotechnology in the LookingGlass world shares a rather fuzzy border with nanotechnology, to be honest. Neurofibers live on that border. They have a metabolism, of sorts. They conduct electricity directly, rather than passing an electric charge by moving salt ions through their membranes. They clearly have some processing capability of their own, much like neurons do. If I had to pin down exactly how they work, I'd say they're cyborgs - nanomachines and biological components freely intermingled. This idea goes back to the earliest thoughts I had on the LookingGlass world, so it fits. Even Drexler says that the first active nanomachines will likely be engineered cells.
Computer technology in the LookingGlass world is where I've actually gotten the most heat. One friend in particular found the deck/ice model hard to swallow. The short version, for those who haven't read Looking Glass yet, is that your deck is a shell. It provides resources, such as the interface to your brain, the interface to the network, wifi, and so forth. It provides these resources to ice via an optical connection. (Virtual reality tanks do the same things, they just do them better.) An OS deck, the smallest you'll encounter in Looking Glass, is the size of a regular iPod™, which was the form factor I had in mind when I wrote the story. Other decks are larger, with the average being about the size of a PlayStation 2™. Tanks are, obviously, much larger. Why these sizes? Because they're well-established form factors, easy to use with your hands and easy to carry. This is why your standard iPod™ is very similar in size and shape to the transistor radios of the 1970s, why your PlayStation 2™ is pretty similar in size to the Atari 2600™ that I grew up with, and so forth and so on.
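The division of labor in the deck/ice model can be sketched in a few lines. This is purely my illustration of the shell idea described above - the class names and the particular resource list are invented, not canon from the novel:

```python
# Toy sketch of the deck/ice split: the deck is a shell that owns
# shared resources; the ice does its own computing and merely
# borrows those resources over the "optical connection."

class Deck:
    """A shell that owns shared resources and lends them to ice."""

    def __init__(self):
        # Resources the deck provides, per the post: brain interface,
        # network, wifi, and so forth.
        self.resources = {
            "neural_interface": "NSF jack",
            "network": "optical uplink",
            "wifi": "short-range radio",
        }

    def plug_in(self, ice):
        # Hand the ice a view of the deck's resources and let it
        # run on its own internal processor.
        return ice.run(self.resources)


class Ice:
    """A self-contained application: program plus its own CPU and memory."""

    def __init__(self, name):
        self.name = name

    def run(self, resources):
        # The ice uses the deck's resources but does its own computing.
        return f"{self.name} running, using {sorted(resources)}"


deck = Deck()
print(deck.plug_in(Ice("mail")))
# → mail running, using ['network', 'neural_interface', 'wifi']
```

A tank, in this sketch, would just be a Deck with a richer resource dictionary; the ice wouldn't change at all, which is the point of the model.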
All your applications are in ice: slabs of plastic containing a processor, memory, the program itself, and any extra electronics that program needs to run. Ice tend to be transparent or translucent, save for the circuitry inside, which is where the name comes from. An ice stick is about the size of a stick of gum. And yes, if this bears a strong resemblance to how classic video games worked, with a cartridge containing software in ROM and any extra circuitry the game needed, that's intentional. We got away from that model because software expanded beyond what ROMs could hold, and because more and more electronics got packed into the game console itself. I see this trend reversing as integrated circuits get cheaper and cheaper, and the complexities and vulnerabilities of centralized operating systems and computers become less and less manageable. You plug the ice in, and it just works. And even if it doesn't just work, even if it crashes, only that ice is inconvenienced. By adding what amounts to redundant hardware - multiple CPUs doing one job each and sharing resources - rather than trying to make all these pieces of software play nice together in a monolithic computer, the complexity of the computing system is reduced dramatically. Doing so also removes the need for selling software on media which can be read by your computer but not by someone else's. If you're selling software, you burn it into ROM on an ice, make sure the software is encrypted and that your ice can decrypt it as needed, and it's much, much harder to copy. Remember that intellectual property law varies greatly from country to country in the Looking Glass world, and that some countries have no intellectual property law at all. Copy protection becomes much more important.
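The copy-protection flow described above is easy to sketch. The XOR "cipher" and the factory-burned key here are stand-ins for real cryptography, chosen only to show the shape of the scheme, not how an ice would actually do it:

```python
# Toy illustration of encrypted-ROM copy protection: the program
# ships on the ice already encrypted, and only the ice, which holds
# the key, can decrypt it at load time. XOR is NOT real crypto;
# it just makes the round trip visible.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# At the factory: encrypt the program with a key baked into this ice.
ice_key = b"factory-burned-key"
program = b"10 PRINT 'HELLO'"
rom = xor_bytes(program, ice_key)      # what actually ships in ROM

# In the field: the ice decrypts its own ROM as it loads the program.
loaded = xor_bytes(rom, ice_key)
assert loaded == program

# Copying the ROM without the key gets you ciphertext, not software.
print(rom != program, loaded == program)   # → True True
```

With real hardware you'd want the key physically unreadable from outside the ice, so that dumping the ROM buys a pirate nothing.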
I didn't make up the idea of separating the application and its computing from the user interface resources. X Windows works exactly that way. In X Windows, you have an X server running somewhere. Applications call that server and ask it to draw a window, set the background, and so forth and so on. My impression is that most GUI-based computing works that way, to be honest. The difference with X Windows is that it can reach across a TCP/IP network to send display info to a completely different computer, or a dedicated X terminal, or whatever.
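The X-style request model can be boiled down to a few lines. To be clear, the request names below are mine for illustration, not actual X protocol messages, and a real client talks to the server over a socket rather than a method call:

```python
# Toy version of the X model: the application never draws anything
# itself. It sends requests ("create a window", "set the background")
# to a display server, which could just as well be on another machine.

class DisplayServer:
    def __init__(self):
        self.windows = {}

    def handle(self, request, **args):
        # Dispatch on the request name, the way a server dispatches
        # on protocol message types.
        if request == "create_window":
            self.windows[args["wid"]] = {"background": None}
        elif request == "set_background":
            self.windows[args["wid"]]["background"] = args["color"]
        return "ok"

class Application:
    def __init__(self, server):
        # In real X this handle is a socket, possibly reaching across
        # a TCP/IP network to a different computer entirely.
        self.server = server

    def start(self):
        self.server.handle("create_window", wid=1)
        self.server.handle("set_background", wid=1, color="grey")

server = DisplayServer()
Application(server).start()
print(server.windows)   # → {1: {'background': 'grey'}}
```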
Plan 9 from Bell Labs extended this model to all the computing resources needed by an application. In Plan 9, an app might run on one processing resource, hand threads or subprocesses out to a bunch of others, use storage resources in another building, and display in another country. The box on your desk that looks like a desktop computer is really just a gateway to the network and a collection of resources offered to a given list of others.
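In miniature, the Plan 9 idea above amounts to a namespace that binds resource names to machines elsewhere on the network. The machine names here are invented for illustration; real Plan 9 does this through per-process namespaces and the 9P file protocol:

```python
# Sketch of the Plan 9 notion that "your computer" is really a
# namespace mapping resources to machines scattered across a network.

namespace = {
    "cpu": "crunch.example.com",         # where the app actually runs
    "threads": "farm.example.com",       # subprocesses handed out here
    "storage": "vault.next-building",    # disks in another building
    "display": "terminal.other-country", # the screen you're looking at
}

def run(app, namespace):
    # The app asks for resources by name and never cares where
    # they physically live.
    return {need: namespace[need] for need in ("cpu", "storage", "display")}

print(run("editor", namespace))
```

The box on your desk, in this picture, contributes nothing but the namespace itself and whatever local resources it chooses to offer.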
This is a little more extreme than I really use in Looking Glass. I'm not a Plan 9 expert, and I've never even run it, but it seems to me that there'd be a lot of problems with latency and overhead spreading your computing out like that. And in my world, at least, it's not necessary. Very few computing tasks that we wouldn't think of as supercomputing tasks today are beyond what a single ice processor can do, and even some lower-end supercomputing tasks are within the reach of clusters of ice. (More on clusters in a minute.) If you're running big environment servers - Omnimart's online shopping environment, for example - you're in a different hardware world. I picture servers being as they are today, racks and racks of electronics dedicated to a specific task. But I don't really nail it down. They could as easily be racks and racks of ice, basically. It's not that important to the story, so I don't spend a lot of time on the server side of things. They exist, and they're managed, and that's all that's important.
Overall, the computing world of Looking Glass is an expression of the end of Moore's Law. The popular version of Moore's Law states that every 18 months, the computing power of a microprocessor will double for the same price. I believe that in time, this law will hit physical limitations. You can only make a transistor so small. You can only make conductors so small. I asked myself what would happen to the world of computing when the exponential growth of processing power finally ground to a halt. The computers in the LookingGlass world are the result.
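For a sense of what's being given up when that growth stops, here's the rough arithmetic for the 18-month doubling cited above, run over the roughly twenty years between when the novel was written and when it's set:

```python
# If the 18-month doubling held from 2005 to 2025, how much faster
# would a same-priced processor get?

years = 2025 - 2005
doublings = years / 1.5                      # one doubling per 18 months
growth = 2 ** doublings
print(round(doublings, 1), round(growth))    # → 13.3 10321
```

About thirteen doublings, or roughly a ten-thousandfold speedup, which is the headroom that quietly vanishes once the curve flattens.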
Posted by Jim Strickland at 1:20 PM
Monday, June 11, 2007
Nailing down the technology level in a science fiction world is a tricky thing. Because Looking Glass is set a little less than 20 years in the future, a lot of the technology is like today's, only smaller, faster, and cheaper. Not really rocket science to predict that. They're probably more recyclable, too, though some of Shroud's comments suggest otherwise, and given the shortsightedness of corporations in my world, as in the real one, maybe they're not. Looking Glass doesn't spend all that much time on the matter. Nor does the next book, thus far.
There are, however, some quantum leaps of technology in Looking Glass. Direct neural interfacing is on its second-generation technology by 2025. A variety of power technologies have replaced petroleum energy, as petroleum has become too expensive to burn. Deep virtual realities have become commonplace. It might seem that at least some of these technologies were arbitrarily selected because this is a cyberpunk story.
There's some truth to that, actually. You need virtual reality for cyberpunk, pure and simple. However, in 2004 when I was writing LookingGlass, this was in the news: a new technology for directly interfacing neurons with silicon. If the NSF jack I mention in the story sounds like that technology, it should. The next generation of direct neural interfaces, which I mention at some length, are the product of the nanotech revolution, and are among the only active nanomachines I use in the novel. Effective neuro-interfaces do represent a quantum leap, it's true. But these leaps happen, and they ripple through conventional technologies in strange ways, making those conventional technologies re-express themselves in new, but familiar ways.
Look at your cell phone for a moment. What you have there is a state of the art, wireless digital data network node. What was the quantum leap that made it possible? Microprocessors. Transmitters? 19th century tech, incrementally improved. Batteries? 18th century tech, incrementally improved. Touchtone? Came out in the early 1960s. Micro-electronics outside the CPU? Incremental improvements on electronics that began in the early 20th century. The one quantum leap, microprocessors, rippled out through those related technologies and produced a product which would have been impossible before it, and yet, it's still a telephone. You dial a number, and someone answers on the other end. This is how technology evolves. The electronic revolution and digital revolutions followed this classic pattern, as did steam power before them, and there's no reason to believe that bio-technology and nanotechnology won't go the same way.
Speaking of nanotech, why aren't there more active nanomachines in the LookingGlass world? In recent science fiction, it's become common to see nanotechnology used, essentially, as magic: a technology without constraints that is the universal solution to AI, androids, immortality, weapons, and pretty much anything else the writer in question wants to assign to it. No wonder. This is how nanotech has been hyped by the likes of Drexler.
Here's the thing. Nanomachines will have constraints: what the technology can do now, what it can do in the future, what it can ever do, and, most important, how much it will cost. All these factors will sharply affect how much nanotech we actually see on the streets. There are a great many technologies today that we never see simply because it's not economically feasible to manufacture things people will buy with them. A great example: there have been better choices than transistors for electronics for several decades now, and yet if you look inside the chips that make our world work today, you will find millions and millions of transistors. Why? Because the silicon industry has an enormous investment in time, equipment, and expertise to deal with transistors, and none of the "better" technologies offer fiscal advantages large enough for manufacturers to change.
This has happened before. After World War II, the American consumer electronics industry virtually ignored transistors altogether. They were expensive to make and had some serious limitations, but most of all, the industry had just invested millions if not billions of dollars miniaturizing vacuum tubes, and it was tooled up to manufacture those in great quantity. Transistors only took off because the Japanese, whose electronics industry had been pretty much destroyed by bombing in the war, were starting from scratch and elected to license transistor technology and run with it rather than wade back into tubes. The first inexpensive Japanese transistor sets hit American shores in 1956, and by the 1970s, the American consumer electronics industry was all but destroyed.
Anyway. The upshot is that I think nanomaterials will be all the rage in 2025. The machines that make them will be confined to factories, but the resulting products will be available, and if you need the extraordinary properties of a given nanomaterial, you'll pay for them. Active nanomachines are much, much harder to make, harder to sustain, and so on, and self-reproducing nanomachines… my feeling is that while they will quite likely be possible in 2025, they'll be so hard to make, so dangerous to have around, and thus, so expensive that you wouldn't use them if you had any alternatives at all. Neurofibers are active nanomachines, yes. Sort of. (More on that in the next installment.) They most emphatically do not reproduce, however. And even so, it bears noting that a neurofiber jack is usually purchased with a mortgage, like a house. Figure a quarter of a million dollars and up in today's money. They're hard to make, harder to make in quantity, and thus, they're expensive.
(To be continued…)
Posted by Jim Strickland at 2:45 PM