The subtle threat from smart machines.
Originally published December 4, 2007
In this morning's NYT, John Tierney had one of those "gee, the future's coming" pieces on robotic "smart cars." He concluded with the prediction that:
even if humans stubbornly cling to the steering wheel, they could still end up sharing the road with smart cars. By around 2030, according to some believers in Moore's Law, there will be computers more powerful than the human brain, leading to the emergence of superintelligent "post-humans." If these beings do appear, I have no doubt how they'll get around. They'd never be stupid enough to get in a car driven by a member of Mr. Magoo's species.
Yeah, funny. But come to think of it, isn't the prospect of "superintelligent 'post-humans'" really a lot more inkworthy than the prospect of safe-driving robot cars?
And isnât it a lot less appealing?
It certainly used to be. When Karel Capek wrote his 1921 play, R.U.R. (Rossum's Universal Robots), thereby introducing the term robot to the language, it probably seemed natural to his audience that his machine-men ultimately rebelled and snuffed out their human creators. Rebellion and havoc-making were what creatures made by hubristic mortals always did, from Frankenstein's monster back to the Hebrew Golem. Even God had his problems with those wayward kids, Adam and Eve.
By the 1930s, robots had become standard monster-figures in pulp sci-fi. But by the early 1940s, Isaac Asimov had begun to refer in his stories to the restraining "Three Laws of Robotics," and lawful robots began to mingle with rebellious ones.
To anyone who works in AI these days, the Three Laws must seem absurdly naive: just the sort of thing a sci-fi writer would have come up with back in the early Forties. But the "Laws" did what Asimov had wanted them to do. They got sci-fi out of the robophobe rut it had been in, by persuading readers that smart machines could be sympathetic, peaceful characters, even trustworthy, Tonto-like companions to humans.
The "good robot" theme hasn't always prevailed since then in Western culture. Films like Westworld, 2001: A Space Odyssey, The Terminator and The Matrix have occasionally brought our deeper fears to the surface. On the non-fiction side, writers including Bill Joy and Bill McKibben have raised warning flags, too. But since the turn of the century, it seems to me, the robophiles have been stomping the competition in the mass media. The three big robot films of this decade so far (A.I., I, Robot and Transformers) have all featured good robots who prevail in the end.
How did the robophiles gain the upper hand in this culture war?
One big reason, I suspect, is that there are now robots, even if mostly in toy form, and people are starting to think seriously of all the roles in which they might be useful, from housekeeping to construction work to sex work. For related reasons, the advertising industry also now has an interest in portraying robots positively.
Yet for all our newfound enthusiasm for robots, the existential threat they pose hasn't gone away. In fact, that threat now seems closer and less hypothetical than ever.
I don't mean that robots necessarily threaten us with violence. To me it's plausible that the humanoid machines living and working among us twenty years from now will all be as gentle and unassuming as the C-3PO character from Star Wars. They might even have such lifelike "skin" that they visually fit right in. But their presence would still be cataclysmic.
Merely by their low cost and utility, they would make human labor obsolete. Working constantly, never complaining, consuming only electric power and the occasional spare part, they would be, dollar for dollar, more productive by far than the cheapest Third World sweatshop toiler. And they would evolve their way up the labor value chain too swiftly for any human to stay in the game.
A few years ago, Salon ran a piece on this topic, and among others interviewed Robert Reich, a former Secretary of Labor. Reich's point was that "There are all sorts of jobs that can't be done by robots because the essence of the job is providing personal attention." And that was essentially the conclusion of the piece: that robots in the foreseeable future would merely hasten the labor market tilt, in America and other developed countries, towards personal-service and high-creativity jobs, and away from jobs that machines and cheap foreign workers can do.
This conclusion would be bleak enough even if it were correct, given the labor market upheaval it predicts. But I think Reich's idea is actually wrong, in a way that is probably typical of people who don't know much about robots or AI. He assumes that the robots of tomorrow will be like the computer-driven automated systems of today. Even Tierney's comment about Moore's Law reflects a common misunderstanding.
*
"Moore's Law" was just Gordon Moore's observation that the computer chip industry tends to advance quickly enough to double the maximum density of chip elements every 18 to 24 months. To some extent the widespread faith in this "law" ensures its accuracy, since the industry treats it as a schedule to be met. But there is no guarantee that it will continue to hold. In any case, Moore's Law refers to computer chips, not to the vastly different, brain-like architecture needed to make recognizably "smart" robots and AI systems.
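To put that doubling rate in perspective, here is a quick back-of-the-envelope calculation (a Python sketch of my own; the 18-to-24-month doubling and the 2030 date come from the discussion above, and nothing else about it is authoritative):

```python
# Rough arithmetic for the doubling rate described above: if chip
# density doubles every 18-24 months, how much denser are chips by
# 2030, counting from this piece's 2007 publication date?
# (Illustrative only; the "law" is an industry trend, not a guarantee.)

def density_growth(years: float, doubling_months: float) -> float:
    """Return the multiplicative density increase over `years`."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

for months in (18, 24):
    factor = density_growth(2030 - 2007, months)
    print(f"Doubling every {months} months -> ~{factor:,.0f}x by 2030")
```

Run it and you get a factor somewhere between roughly 2,900x and 41,000x, which is the sort of number the 2030 predictions lean on.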
Brain-like architecture is essentially parallel-processing and hyper-interconnected, not serial-processing and centralized like computer CPUs. It is true that AI researchers now often use traditional computer chips to run software modelling how brains work, and with this inefficient architecture, brain-modelling does require great processing power. But researchers are already beginning to experiment with more "neural" hardware, which is enormously more efficient at performing animal-like tasks.
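To make the contrast concrete, here is a toy sketch (my own illustration in Python, with arbitrary sizes; not a model of any real brain, and not anyone's actual research code) of this style of computation being simulated, inefficiently, on a serial machine:

```python
# Every unit's next state depends on every other unit's current
# state: conceptually one simultaneous step, which dedicated neural
# hardware could take in a single tick, but which a serial CPU must
# emulate with ~n_units**2 sequential multiplies.
import numpy as np

rng = np.random.default_rng(0)
n_units = 1000                    # arbitrary network size
weights = rng.normal(0.0, 1.0 / np.sqrt(n_units), (n_units, n_units))
state = rng.normal(size=n_units)

for step in range(10):
    # One "parallel" update of all units at once.
    state = np.tanh(weights @ state)

print("mean activation after 10 steps:", round(float(state.mean()), 4))
```

The inefficiency is the point: the serial chip spends a million multiplications per step to imitate what hyper-interconnected hardware would do all at once.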
True neural hardware could be scalable in ways that modern computer chips aren't. Mammalian brains consist to a great extent of repeating structures known as neocortical columns, so if the basic architecture is right, and the initial wiring/programming is right, most of the ground between small robot brains and big ones could be covered with more neurons and more interconnections.
Obviously, some further design changes would be needed to turn, for example, a mouselike brain (~15 million neurons) into a humanlike brain (~100 billion neurons), but those changes could prove to be relatively minor, and in any case, given that they apply to a totally different architecture, they are unlikely to be limited by the state of traditional computer-chip technology.
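In raw numbers, using the figures just quoted (published neuron counts vary by source, so treat this as a sketch of scale, nothing more):

```python
# Scale-up factor implied by the figures in the text above.
mouse_neurons = 15e6     # "mouselike brain" per the text
human_neurons = 100e9    # "humanlike brain" per the text
print(f"scale-up factor: ~{human_neurons / mouse_neurons:,.0f}x")
# -> ~6,667x: large, but covered mostly by repeating the same
#    cortical-column building block, not by redesigning it.
```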
The point is that walking, talking (or at least chirping or barking) robots could become a reality very quickly, long before "Moore's Law" gives traditional computers the power to model brain processes at human-like scale and speed.
*
Robots and AI systems with artificial brains won't seem like the automated systems we have today. They won't even seem like machines at all. They will seem like the living, sentient creatures in whose images they are made.
Will they be conscious? Probably not, but they won't have to be conscious to perform virtually all the economic functions of humans, from building houses to writing novels and doing advanced theoretical physics.
And waiting on tables. And assisting shoppers in retail stores. And serving as executive assistants. The idea that these artificial creatures would necessarily be inept at personal services is ludicrous; it seems to rest on nothing more than the old stereotype of the "emotionless" robot. From a neuro-engineering standpoint, the ability to recognize emotions appropriately is not inherently more difficult than, say, the ability to recognize faces or words or terrain patterns. So robots should soon be able to exceed humans in this department as well as all the others.
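To illustrate what I mean (a toy example of my own; the feature vectors and labels below are made-up placeholders, not real data), notice that at the algorithmic level "recognize an emotion" and "recognize a face" can be the very same classification routine with a different label set:

```python
# The same nearest-centroid classifier handles either task; nothing
# in the algorithm cares whether the labels name emotions or people.
import numpy as np

def nearest_centroid(train_x, train_y, query):
    """Classify `query` by its nearest class centroid."""
    labels = sorted(set(train_y))
    centroids = {c: train_x[np.array(train_y) == c].mean(axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(query - centroids[c]))

rng = np.random.default_rng(1)
features = rng.normal(size=(6, 8))      # toy feature vectors

print(nearest_centroid(features,
                       ["happy", "happy", "happy",
                        "angry", "angry", "angry"], features[0]))
print(nearest_centroid(features,
                       ["alice", "alice", "alice",
                        "bob", "bob", "bob"], features[0]))
```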
Personal services already represent a huge growth area for robotics. Even before the technology is really in place, the Japanese are making a major push to build personal-service robots (housekeepers, butlers, receptionists, street-corner direction-givers, hospital orderlies, trashmen, home companions for the elderly, even prostitutes) because their population is declining and they would rather not import workers from "lower" countries and risk cultural dilution.
*
Even if we were to assume, conservatively, that robots and AI systems with a broad range of human or superhuman abilities won't be around until 2030, we'd have to believe that lesser but still useful automatons will be available much sooner. With robots, even a little utility is likely to go a long way. Any product, for example, that can walk reliably, can recognize a few hundred faces and objects and words, can hold things as dexterously as we do, and in addition can interface directly and rapidly with computers and the Internet, will be able to do what waiters and waitresses do, what counter clerks do, what office staff do, what pilots do, and what common laborers do, only at far lower cost. How far are we from such a prospect? Fifteen years? I doubt it will be even that long.
And again, taking this still-relatively-crude robot technology and scaling up its brain and skillset could turn out to be a relatively simple matter. In any case it seems a fair bet that a child born today, even a gifted child with the best possible education, will graduate from college, about 21 years from now, into a labor market where humans have become a decidedly inferior product.
Conceivably we humans will be able to earn money in a robot-worker economy by running our own businesses or otherwise managing assets. But as robots march into the upper reaches of the labor market, they will start to compete even with human entrepreneurs. Operating from huge robot-worker conglomerates, controlled by dwindling numbers of colossally wealthy human CEOs and senior managers, they will be able to exterminate smaller, human-run businesses all the way down the "long tail." In a free market, there will be nowhere for expensive, high-maintenance humans to run.
And robots will be able to achieve this conquest while remaining the passive, gentle chattel of humans: appliances with legs! Should they go on to acquire the same civil rights as we have, we'll be out of political options too. Think this won't happen? The post-humanists consider it inevitable. And they have a point: the more sympathy robots evoke in us, the more rights we will want to cede to them. Believe me, there will be money in it for anyone who designs robots to evoke sympathy.
*
Like global warming, the functional obsolescence of humans, and their consequent demoralization and cultural decay, would be one of those "unintended consequences" of our more or less freely evolving market system. Unlike global warming, this self-destruct process would not be solvable by technological innovation. Technological innovation would be the problem, not the solution.
Roboticists, unsurprisingly, tend to see technological innovation ("evolution") as sacred, unquestionable, unstoppable. Carnegie Mellon professor Hans Moravec, one of the pioneers of modern robotics, has argued that we should accept the obsolescence of humanity the way we have always accepted our demises as individuals. In other words, we should "silently fade away," passing on the torch of existence to robots as if they were our children. "We have very little choice, if our culture is to remain viable," he wrote in his 1988 book Mind Children. "Societies and economies are surely as subject to competitive evolutionary processes as are biological organisms."
Seemingly less suicidal, but not really, is the proposal of the post-humanists, whose most prominent representative these days is an inventor and futurist named Ray Kurzweil. In his recent book, The Singularity Is Near, Kurzweil whooped and cheered about the technologies that would soon "enable us to transcend our biological limitations," i.e., by turning ourselves into robots. Kurzweil sees this happening in the next two or three decades.
There are a few shortcomings to this approach. One is that humans have "human" needs, for other people and so on, whereas a robot wired for economic superiority wouldn't be held back by such needs. To become such a creature, totally inhuman, merely to keep up with a supposedly "inexorable" technological evolution, strikes me as even more idiotic, suicidal and inhumane than Moravec's idea, and Moravec set the bar pretty high. Yet we seem to be chasing this insane goal already.
There is also the consciousness problem. We don't know, and so far we have no good reason to believe, that the circuitry of a robot brain can generate the sense of conscious awareness that humans and other animals experience. Kurzweil nevertheless blithely suggests that we'll all be able to transfer the contents of our old, fragile, wetware brains to new, solid-state brains and live happily ever after.
Apart from the murky issue of consciousness, a brain-state "transfer" from one medium to another would, at best, represent a copying process. Whether or not self-awareness could be generated in the new brain, the old self would remain and die in the old brain. Conceivably, if non-biological material could generate consciousness (and again, there is zero evidence for this), one could transform a wetware brain, slowly and in place, into solid-state robot-stuff, and the subject of this freakish experiment might feel enough continuity with his old self, throughout this process, to believe that he had lived through it.
But wouldn't it be a lot easier, and saner, and a lot more humane simply to take control of our cultural and technological development, and to block it where appropriate, before this creeping dystopia overwhelms us?
That, of course, is the third possible solution to the problem posed by "post-human" robots. It has been suggested already by others, including Bill Joy, Bill McKibben and Francis Fukuyama.
To no avail. Theirs have been the proverbial voices crying in the wilderness, mocked for their archaic notion that "progress" could ever be stopped.