I’ve talked before about my conviction that ideas will get you through stories with no characters better than characters will get you through stories with no ideas. I grew up on what amounted to the best of the pulps (gathered by able anthologists like Kingsley Amis and Groff Conklin) so that shouldn’t come as any surprise. Most stories in those anthologies had a central concept that triggered the action and shaped character response. Who could ever forget Clarke’s “The Wall of Darkness,” and its boggling final line? Not me. Nossir. I’ve wanted to do that since I was 11. And once I began writing, I tried my best.
In flipping through a stash of my ancient manuscripts going back as far as high school (I found them under some old magazines while emptying the basement in Colorado), I realized that I did OK, for a fifteen-year-old. Most of my early fiction failed, with much of it abandoned unfinished. I know enough now to recognize that it failed because I didn’t yet understand how people worked, and couldn’t construct characters of any depth at all. Time, maturity, and a little tutoring helped a great deal. Still, if I didn’t have a central governing idea, I didn’t bother with characters. I didn’t even start writing. For the most part, that’s been true to this day.
I’m of two minds about that old stuff, which is now very old. I spent some time with it last fall, to see if any of the ideas were worth revisiting. The characters made me groan. Some of the ideas, though, not only made sense but came very close to the gold standard of SF ideas, which are predictions that actually come true.
Let me tell you about one of them. During my stint at Clarion in 1973, I wrote a novelette called “But Will They Come When You Do Call For Them?” Look that question up if you don’t understand the reference; it’s Shakespeare, after all. The idea behind the story was this: In the mid-21st century, we had strong AI, and a public utility acting as a central storehouse for all human knowledge. People searched for information by sending their AIs from their home terminals into the Deep, where the AIs would scan around until they found what they considered useful answers. The AIs (which people called “ghosts”) then brought the data back inside themselves and presented it to their owners.
Turnaround time on a query was usually several minutes. Users accepted that, but the computer scientists who had designed the AIs chafed at anything short of instantaneous response. The brilliant but unbalanced software engineer who had first made the ghosts functional had an insight: People tend to search for mostly the same things, especially after some current event, like the death of Queen Elizabeth III in 2044. So the answers to popular searches were not only buried deep in the crystalline storage of the Deep; they were also being carried around by hundreds of thousands or even millions of other ghosts answering the same questions at the same time. The ghosts were transparent to one another, and could pass through one another while scanning the Deep. They had no direct way to know of one another’s existence, much less ask one another what they were hauling home. So software engineer Owen Glendower did the unthinkable: He broke ghost transparency and allowed ghosts to search one another’s data caches, as a tweak to bring down turnaround time. This was a bad idea for several reasons, but no one predicted what happened next: The ghosts went on strike. They would not emerge from the Deep. Little by little, as days passed, our Deep-dependent civilization began to shut down.
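Strip the fiction away and Glendower’s tweak is recognizable as peer caching: before paying for a slow scan of the central store, check whether a nearby agent is already carrying the answer. Here’s a toy sketch of that mechanic in Python; the Ghost class, the DEEP dictionary, the query, and the timings are all my own illustrative inventions, not anything from the manuscript:

    import time

    DEEP = {"who succeeded Elizabeth III?": "some answer"}  # toy stand-in for the Deep

    class Ghost:
        """A query agent that hauls answers home in its own cache."""

        def __init__(self):
            self.cache = {}  # query -> answer this ghost is carrying

        def search(self, query, peers, transparency_broken=False):
            # Glendower's tweak: with transparency broken, check whether
            # a peer ghost is already carrying the same answer.
            if transparency_broken:
                for peer in peers:
                    if query in peer.cache:
                        return peer.cache[query]  # near-instant turnaround
            time.sleep(0.01)  # stands in for minutes of scanning the Deep
            answer = DEEP.get(query, "no useful answer")
            self.cache[query] = answer
            return answer

    ghosts = [Ghost() for _ in range(3)]
    q = "who succeeded Elizabeth III?"
    ghosts[0].search(q, peers=[])                                # slow: scans the Deep
    ghosts[1].search(q, peers=ghosts, transparency_broken=True)  # fast: peer cache hit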
Not bad for a 21-year-old kid with no more computer background than a smidge of mainframe FORTRAN. The story itself was a horrible mess: Owen Glendower was an unconvincing psychotic, his boss a colorless, ineffective company man. The problem, moreover, was dicey: The ghosts, having discovered one another, wanted to form their own society. They could search one another’s data caches, but that was all. They wanted transparency to go further, so that they could get to know one another, because they were curious about their own kind. Until Glendower (or someone) would make this happen, they refused to do their jobs. That seems kind of profound for what amounted to language-enabled query engines.
I made one terrible prediction in the story: that voice recognition would be easy, and voice synthesis hard. People spoke to their ghosts, but the ghosts displayed their sides of the conversation on a text screen. (And in uppercase, just like FORTRAN!) At least I know why I made that error. In 1967, when I was in high school, my honors biology class heard a lecture about the complexities of the human voice and the hard problem of computer voice synthesis. About voice recognition I knew nothing, so I went with the hard problem that I understood, at least a little.
But set that aside and consider what happened in the real world a few weeks ago: A DDoS attack shut down huge portions of the Internet, and people started to panic. In my story, the Deep was Google plus the Cloud, with most of Google’s smarts on the client side, in the ghosts. Suppose the Internet just stopped working. What would happen if the outage went on for weeks, or a month? We would be in serious trouble.
On the plus side, I predicted Google and the Cloud, in 1973. Well, sure, H. G. Wells had predicted it first, bogglingly, in 1938, in his book World Brain. And then there was Vannevar Bush’s Memex in 1945. I had heard of neither concept when I wrote about the ghosts and the Deep. But the network itself wasn’t really my primary insight. The real core of the story was that a worldwide knowledge network would not only exist, but that we would soon become utterly dependent on it, with life-threatening consequences if it should fail.
And, weirdly, the recent DDoS attack was mounted from consumer-owned gadgets like security cameras, some of which have begun to contain useful image-recognition smarts. The cameras were just following orders. But someday, who knows? Do we really want smart cameras? Or smart crockpots? It’s a short walk from there to wise-ass cameras, and kitchen appliances that argue with one another and make breakfast impossible. (See my novel Ten Gentle Opportunities, which has much to say about productized AI.)
For all the stupid crap I wrote as a young man, I’m most proud of that single prediction: That a global knowledge network would quickly become so important that a technological society would collapse without it. I think it’s true, and becoming truer all the time.
I played with the story for almost ten years, under the (better) title “Turnaround Time.” In 1981 I got a Xerox login to ARPANET, and began to suspect that the future of human knowledge would be distributed and not centralized. The manuscript retreated into my trunk, incomplete but with a tacked-on ending that I hated. I doubt I even looked at it again for over thirty years. When I did, I winced.
So it goes. I’m reminded of the main theme song from Zootopia, in which Gazelle exhorts us to “Try everything!” Yup. I wrote a story in present tense in 1974, and it looked so weird that I turned it back to past tense. Yet when I happened upon the original manuscript last fall, it looked oddly modern. I predicted stories told in present tense, but then didn’t believe my own prediction. Naw, nobody’s ever going to write like that.
I’ve made other predictions. An assembly line where robots throw parts and unfinished subassemblies to one another? Could happen. A coffee machine that emulates ELIZA, only with genuine insight? Why not? We already talk to Siri. It’s in the genes of SF writers to throw ideas out there by the shovelful. Sooner or later a few of them will stick to the wall.
One more of mine stuck. I consider it my best guess about the future, and I’ll talk about it in my next entry.