The subtle threat from smart machines.
In this morning’s NYT, John Tierney had one of those “gee, the future’s coming” pieces on robotic “smart cars.” He concluded with the prediction that:
even if humans stubbornly cling to the steering wheel, they could still end up sharing the road with smart cars. By around 2030, according to some believers in Moore’s Law, there will be computers more powerful than the human brain, leading to the emergence of superintelligent “post-humans.” If these beings do appear, I have no doubt how they’ll get around. They’d never be stupid enough to get in a car driven by a member of Mr. Magoo’s species.
Yeah, funny. But come to think of it, isn’t the prospect of “superintelligent ‘post-humans’” really a lot more inkworthy than the prospect of safe-driving robot cars?
And isn’t it a lot less appealing?
It certainly used to be. When Karel Čapek wrote his 1921 play, R.U.R. (Rossum’s Universal Robots), thereby introducing the term robot to the language, it probably seemed natural to his audience that his machine-men ultimately rebelled and snuffed out their human creators. Rebellion and havoc-making were what creatures made by hubristic mortals always did, from Frankenstein’s monster back to the Hebrew Golem. Even God had his problems with those wayward kids, Adam and Eve.
By the 1930s, robots had become standard monster-figures in pulp sci-fi. But by the early 1940s, Isaac Asimov had begun to refer in his stories to the restraining “Three Laws of Robotics,” and lawful robots began to mingle with rebellious ones.
To anyone who works in AI these days, the Three Laws must seem absurdly naive—just the sort of thing a sci-fi writer would have come up with back in the early Forties. But the “Laws” did what Asimov had wanted them to do. They got sci-fi out of the robophobe rut it had been in, by persuading readers that smart machines could be sympathetic, peaceful characters, even trustworthy, Tonto-like companions to humans.
The “good robot” theme hasn’t always prevailed since then in Western culture. Films like Westworld, 2001: A Space Odyssey, The Terminator and The Matrix have occasionally brought our deeper fears to the surface. On the non-fiction side, writers including Bill Joy and Bill McKibben have raised warning flags, too. But since the turn of the century, it seems to me, the robophiles have been stomping the competition in the mass media. The three big robot films of this decade so far—A.I., I, Robot and Transformers—have all featured good robots who prevail in the end.
How did the robophiles gain the upper hand in this culture war?
One big reason, I suspect, is that there are now robots, even if mostly in toy form, and people are starting to think seriously of all the roles in which they might be useful, from housekeeping to construction work to sex work. For related reasons, the advertising industry also now has an interest in portraying robots positively.
Yet for all our newfound enthusiasm for robots, the existential threat they pose hasn’t gone away. In fact, that threat now seems closer and less hypothetical than ever.
I don’t mean that robots necessarily threaten us with violence. To me it’s plausible that the humanoid machines living and working among us twenty years from now will all be as gentle and unassuming as the C-3PO character from Star Wars. They might even have such lifelike “skin” that they visually fit right in. But their presence would still be cataclysmic.
Merely by their low cost and utility, they would make human labor obsolete. Working constantly, never complaining, consuming only electric power and the occasional spare part, they would be, dollar for dollar, more productive by far than the cheapest Third World sweatshop toiler. And they would evolve their way up the labor value chain too swiftly for any human to stay in the game.
A few years ago, Salon ran a piece on this topic, and among others interviewed Robert Reich, a former Secretary of Labor. Reich’s point was that “There are all sorts of jobs that can’t be done by robots because the essence of the job is providing personal attention.” And that was essentially the conclusion of the piece: that robots in the foreseeable future would merely hasten the labor market tilt, in America and other developed countries, towards personal-service and high-creativity jobs, and away from jobs that machines and cheap foreign workers can do.
This forecast would be bleak enough even if it were correct, given the labor market upheaval it predicts. But I think Reich’s idea is actually wrong, in a way that is probably typical of people who don’t know much about robots or AI. He assumes that the robots of tomorrow will be like the computer-driven automated systems of today. Even Tierney’s comment about Moore’s Law reflects a common misunderstanding.
“Moore’s Law” was just Gordon Moore’s observation that the computer chip industry tends to advance quickly enough to double the maximum density of chip elements every 18-24 months. To some extent the widespread faith in this “law” ensures its accuracy. But there is no guarantee that it will continue to hold. In any case, Moore’s Law refers to computer chips, not to the vastly different, brain-like architecture needed to make recognizably “smart” robots and AI systems.
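The arithmetic behind forecasts like Tierney’s “around 2030” is easy to sketch. Here is a minimal back-of-the-envelope calculation, assuming illustrative and much-contested 2007-era figures (roughly 10^11 operations per second for a desktop PC, roughly 10^16 per second for a human brain) and the optimistic 18-month doubling period; none of these constants come from this essay, and futurists disagree about all of them.

```python
import math

def years_until_parity(current_ops, target_ops, doubling_months=18):
    """Years until current_ops reaches target_ops, assuming capacity
    doubles every doubling_months months (the popular reading of
    Moore's Law)."""
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_months / 12

# Illustrative, contested figures: ~1e11 ops/sec for a 2007 PC,
# ~1e16 ops/sec as a rough futurist estimate of the brain.
print(round(years_until_parity(1e11, 1e16)))  # 25, i.e. roughly 2032
```

Stretch the doubling period to 24 months and the crossover slips past 2040, which is why such forecasts are so sensitive to the assumed constants.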
Brain-like architecture is essentially parallel-processing and hyper-interconnected, not serial-processing and centralized like computer CPUs. It is true that AI researchers now often use traditional computer chips to run software modelling how brains work, and with this inefficient architecture, brain-modelling does require great processing power. But researchers are already beginning to experiment with more “neural” hardware, which is enormously more efficient at performing animal-like tasks.
True neural hardware could be scalable in ways that modern computer chips aren’t. Mammalian brains consist to a great extent of repeating structures known as neocortical columns, so if the basic architecture is right, and the initial wiring/programming is right, most of the ground between small robot brains and big ones could be covered with more neurons and more interconnections.
Obviously, some further design changes would be needed to turn, for example, a mouselike brain (~15 million neurons) into a humanlike brain (~100 billion neurons), but those changes could prove to be relatively minor, and in any case, given that they apply to a totally different architecture, they are unlikely to be limited by the state of traditional computer-chip technology.
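To put those figures side by side (both neuron counts are the rough estimates used above, not precise measurements, and the synapse average is a standard illustrative figure rather than a measured one):

```python
mouse_neurons = 15e6       # the mouse-brain estimate used above (rough)
human_neurons = 100e9      # the commonly cited human figure (rough)
synapses_per_neuron = 1e4  # an illustrative average, not a measurement

neuron_ratio = human_neurons / mouse_neurons
human_synapses = human_neurons * synapses_per_neuron

print(f"{neuron_ratio:,.0f}x more neurons")  # 6,667x
print(f"{human_synapses:.0e} synapses")      # 1e+15
```

On these numbers the mouse-to-human gap is a factor of thousands in component count, not millions; if the repeating cortical structure really does scale, that looks like a fabrication problem more than a redesign problem.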
The point is that walking, talking (or at least chirping or barking) robots could become a reality very quickly—long before “Moore’s Law” gives traditional computers the power to model brain processes at human-like scale and speed.
Robots and AI systems with artificial brains won’t seem like the automated systems we have today. They won’t even seem like machines at all. They will seem like the living, sentient creatures in whose images they are made.
Will they be conscious? Probably not—but they won’t have to be conscious to perform virtually all the economic functions of humans, from building houses to writing novels and doing advanced theoretical physics.
And waiting on tables. And assisting shoppers in retail stores. And serving as executive assistants. The idea that these artificial creatures would necessarily be inept at personal services is ludicrous; it seems to rest on nothing more than the old stereotype of the “emotionless” robot. From a neuro-engineering standpoint, the ability to recognize emotions appropriately is not inherently more difficult than, say, the ability to recognize faces or words or terrain patterns. So robots should soon be able to exceed humans in this department as well as all the others.
Personal services already represent a huge growth area for robotics. Even before the technology is really in place, the Japanese are making a major push to build personal-service robots—housekeepers, butlers, receptionists, street-corner direction-givers, hospital orderlies, trashmen, home companions for the elderly, even prostitutes—because their population is declining and they would rather not import workers from “lower” countries and risk cultural dilution.
Even if we were to assume, conservatively, that robots and AI systems with a broad range of human or superhuman abilities won’t be around until 2030, we’d have to believe that lesser but still useful automatons will be available much sooner. With robots, even a little utility is likely to go a long way. Any product—for example—that can walk reliably, can recognize a few hundred faces and objects and words, can hold things as dexterously as we can, and in addition can interface directly and rapidly with computers and the Internet, will be able to do what waiters and waitresses do, what counter clerks do, what office staff do, what pilots do, and what common laborers do, only at far lower cost. How far are we from such a prospect? Fifteen years? I doubt it will be even that long.
And again, taking this still-relatively-crude robot technology and scaling up its brain and skillset could turn out to be a relatively simple matter. In any case it seems a fair bet that a child born today, even a gifted child with the best possible education, will graduate from college, about 21 years from now, into a labor market where humans have become a decidedly inferior product.
Conceivably we humans will be able to earn money in a robot-worker economy by running our own businesses or otherwise managing assets. But as robots march into the upper reaches of the labor market, they will start to compete even with human entrepreneurs. Operating from huge robot-worker conglomerates, controlled by dwindling numbers of colossally wealthy human CEOs and senior managers, they will be able to exterminate smaller, human-run businesses all the way down the “long tail.” In a free market, there will be nowhere for expensive, high-maintenance humans to run.
And robots will be able to achieve this conquest while remaining the passive, gentle chattel of humans—appliances with legs! Should they go on to acquire the same civil rights we have, we’ll be out of political options too. Think this won’t happen? The post-humanists consider it inevitable. And they have a point: The more sympathy robots evoke in us, the more rights we will want to cede to them. Believe me, there will be money in it for anyone who designs robots to evoke sympathy.
Like global warming, the functional obsolescence of humans, and their consequent demoralization and cultural decay, would be one of those “unintended consequences” of our more or less freely-evolving market system. Unlike global warming, this self-destruct process would not be solvable by technological innovation. Technological innovation would be the problem, not the solution.
Roboticists, unsurprisingly, tend to see technological innovation—“evolution”—as sacred, unquestionable, unstoppable. Carnegie Mellon professor Hans Moravec, one of the pioneers of modern robotics, has argued that we should accept the obsolescence of humanity the way we have always accepted our demises as individuals. In other words, we should “silently fade away,” passing on the torch of existence to robots as if they were our children. “We have very little choice, if our culture is to remain viable,” he wrote in his 1988 book Mind Children. “Societies and economies are surely as subject to competitive evolutionary processes as are biological organisms.”
Seemingly less suicidal, but not really, is the proposal of the post-humanists, whose most prominent representative these days is an inventor and futurist named Ray Kurzweil. In his recent book, The Singularity Is Near, Kurzweil whooped and cheered about the technologies that would soon “enable us to transcend our biological limitations,” i.e., by turning ourselves into robots. Kurzweil saw this happening in the next two or three decades.
There are a few shortcomings to this approach. One is that humans have “human” needs, for other people and so on, whereas a robot wired for economic superiority wouldn’t be held back by such needs. To become such a creature, totally inhuman, merely to keep up with a supposedly “inexorable” technological evolution, strikes me as even more idiotic, suicidal and inhumane than Moravec’s idea—and Moravec set the bar pretty high. Yet we seem to be chasing this insane goal already.
There is also the consciousness problem. We don’t know—and so far we have no good reason to believe—that the circuitry of a robot brain can generate the sense of conscious awareness that humans and other animals experience. Kurzweil nevertheless blithely suggests that we’ll all be able to transfer the contents of our old, fragile, wetware brains to new, solid-state brains and live happily ever after.
Apart from the murky issue of consciousness, a brain-state “transfer” from one medium to another would, at best, represent a copying process. Whether or not self-awareness could be generated in the new brain, the old self would remain and die in the old brain. Conceivably, if non-biological material could generate consciousness (and again, there is zero evidence for this), one could transform a wetware brain, slowly and in place, into solid-state robot-stuff, and the subject of this freakish experiment might feel enough continuity with his old self, throughout this process, to believe that he had lived through it.
But wouldn’t it be a lot easier, and more sane, and a lot more humane, simply to take control of our cultural and technological development, and to block it where appropriate, before this creeping dystopia overwhelms us?
That, of course, is the third possible solution to the problem posed by “post-human” robots. It has been suggested already by others, including Bill Joy, Bill McKibben and Francis Fukuyama.
To no avail. Theirs have been the proverbial voices crying in the wilderness—mocked for their archaic notion that “progress” could ever be stopped.
Originally published December 4, 2007