Learning To Be Human

Wednesday, September 19, 2007

We are entering an uncertain age, stumbling toward an understanding of true spirituality. If we are to put the brakes on the current trend toward fundamentalist, intolerant ideologies, we need to see this problem from as many angles as we can.
While researching Ivan Illich's work, I came upon the Spring 2003 issue of Whole Earth, apparently the last issue published. It contains a good amount of information about the man, as well as references to his works and to the large number of people he influenced (he died in 2002). More about him later.
Also in the Spring 2003 issue, Alex Steffen interviews Jaron Lanier, a humanist.
As the thrust of Lanier's thoughts on current linear approaches to solving the world's non-linear problems goes to the heart of Ivan Illich's body of work, let's see what a technically savvy, self-professed humanist has to say.
WHOLE EARTH SPRING 2003
WHAT KEEPS JARON LANIER AWAKE AT NIGHT
Artificial Intelligence, Cybernetic Totalism, and the Loss of Common Sense
Alex Steffen interviews Jaron Lanier
Jaron Lanier is an artist, musician, and programmer, as well as the coiner of the term “virtual reality.” He’s also one of the more astute observers of technology’s impacts on society. We met over coffee in June in New York City, just blocks from Ground Zero, and our conversation left me with seven pages of some of the most interesting notes I’ve ever taken. Jaron agreed to do another interview to discuss technology, society, and the future. —AS
Alex Steffen: When we met in June, you said that there remain “earthshaking questions” about how computation, biotechnology, and materials science will develop in this new century. You went on to say that you see “cybernetic totalism” as “the characteristic delusion of our times,” and a danger. Why?
Jaron Lanier: Cybernetic totalism is the confusion of linear and nonlinear systems. A typical example of that is the enormous amount of funding and attention devoted to ideas about artificial intelligence that can clearly be seen to be hanging on an irrational idea about what intelligence might be and might not be. We have a notion that—just because you can show that a Turing machine, a common computer, can hypothetically do an enormous range of things—somehow all analytical problems are solvable. It’s sort of a new style of reductionism. Instead of saying my little abstraction proves everything, it’s saying that because we’ve proved a hypothetical equivalence between some range of computational problems, there’s no functional difference in the amount of time or effort it might take to solve those problems. All problems are solvable.
There’s another level though, an aesthetic or spiritual level. A whole class of scientists and engineers have adopted what you might call a new religion, where they hope to find comfort in the face of life’s uncertainty, especially in relation to questions of mortality, by turning themselves into machines, or hope to find immortality by downloading themselves into computers—that sort of thing.
AS: What kinds of questions might these attitudes obscure?
JL: Essential questions about how far we can go with technology and how fast. Some kinds of problems we know are hard to solve, but they’re hard in a predictable, brute-force way: we understand how hard we have to work to solve them. The best example of this is weather prediction. We have a pretty good idea of the nature of the problem of predicting the weather, and we know that if we want to do a better job of it, we have to get so much better at writing software for very large-scale computation, and we also have to get so much better at gathering very large amounts of data, and so forth.
Our engineering process, for weather prediction, is relatively linear—we know that if we put in given effort, money, and intellectual struggle, we’ll get out a roughly commensurate improvement. The natural system itself is complex, but the engineering effort to model it is a reasonably linear thing. Now in the case of studying how the human mind works, we have an entirely different scenario. We don’t have any idea what low-hanging fruit we might be about to discover, and we don’t have any idea what aspects of human cognition might make it extremely, extremely difficult to come up with a useful model. This is true for a whole lot of questions in biology.
Now, it’s rare to find a computer-oriented scientist or engineer who’s able to recognize the difference between these two kinds of problems. About once a month, there’ll be a headline that says researchers can tell us that some gene has something to do with an aspect of behavior—that, for instance, it might contribute to an aspect of depression—but really, the results they have are the most meager sort of explanations, based on terms that we don’t even fully know how to define yet, so we don’t have any sense of how important these results are, or even if they’re in any way helpful for building a bigger picture. This is a classic confusion of linear and nonlinear results.
AS: Would it be fair to say that one of the characteristic beliefs of cybernetic totalism is that if enough brute force is applied, all problems are linear?
JL: Exactly. Yes. And since brute force comes in the form of Moore’s Law, and the amount of brute force we can apply is growing at an exponential rate, there’s this idea that all problems are solvable, very soon. And it’s a short step to saying essentially there are no problems. That idea leads to this religious feeling of imminent transcendence. It’s a completely irrational stream of thought. It’s sort of a nerd’s path to religious ecstasy.
AS: You have spoken about the errors that can arise when we move from treating Moore’s Law as a (thus far) accurate prediction about the speed at which hardware improves, to treating it as a metaphor for the speed at which our power to understand and manipulate the world improves. Why is this an important distinction? What happens when we get it wrong?
JL: That distinction’s important because honest self-assessment is the first step in effective action of any kind. That’s why the entire scientific method is designed to prevent us from fooling ourselves. One place you see this fallacy is in artificial intelligence. Go back to the very origins of cybernetic totalism and visit poor Alan Turing—who probably made as big a contribution to protecting human freedom as any single person, having broken the Nazis’ secret Enigma code during World War II. You’d find him under house arrest, being forced to take massive doses of female sex hormones to treat his homosexuality, developing breasts, and becoming increasingly depressed, then eventually committing a strangely eloquent suicide in which he injects cyanide into an apple and then eats it.
During that time he develops this longing for transcendence (or psychological denial if you prefer). He imagines himself a computer and creates this test—known now as the Turing test—where a judge is asked to distinguish a computer from a person based only on written interaction with each, and declares that if the judge can’t distinguish them, the computer is as smart as a human.
The problem with Turing’s idea is not only that the judge can only compare one very limited means of human communication, but that the human in the test is just as likely to become stupid by conforming to the artificial limits of the situation as the machine is to be getting smarter.
Artificial intelligences are not people. They’re not really even intelligent. They’re programs. We forget this at our peril.
AS: Sherry Turkle has said that she thinks it’s a sign of great progress that little children take their apparently intelligent toys at “interface value.” You’ve said the danger is not that runaway AIs will get superintelligent, take over the world and stop needing us, but that mock AIs will be accepted as intelligent, even sapient, when they aren’t, and thus obscure the fact that somebody who is error-prone and serving his own agenda has programmed them.
JL: I think this is still true, but what I’m getting at is something larger. The claim of machine sentience is fundamentally false. The idea of sentient technology grabs attention and helps sell the technology. But we don’t fully grasp what consciousness itself means, so the idea that we can fully replicate it in machines, and then trust those machines to do our thinking for us, is really a departure from reality. The more we imagine ourselves becoming machines, the more we risk losing our humanity. We’re modeling ourselves after our own technologies—becoming some sort of anti-Pinocchios—and it’s insane.
AS: You sound worried about where we’re headed.
JL: I don’t know what sort of future we’re entering here, but it’s entirely possible that the twenty-first century is going to be a profoundly unhappy one—a century in which there isn’t much more technological advancement because the climate crashes or we slide into a series of horrible wars or whatever. But let’s imagine that’s not the case, and that there’s a large portion of the world where things continue to work. Imagine, too, that Moore’s Law continues to hold true. If that’s the future we enter, then we’ll all live in a world where every facet of our lives is saturated in technology.
In that world, this new religion—cybernetic totalism—will be much more mainstream. Instead of just being a cult among a relatively small group of scientists and technologists, it could become a major movement. The metaphor I’d draw is to something like Marxism in the nineteenth century—there’s a seed of ideology here, and it’s not certain how big it will get or how far from its origins it will drift. It’s possible this whole thing will just float away and it won’t be all that important to criticize it. It seems important to come up with a critical response to it now, though, because I think it’d be a really grossly dysfunctional mass movement.
All forms of fundamentalism dehumanize those who don’t share their ideas. But cybernetic totalism in a sense dehumanizes the human race, saying that people are just a stepping stone to some other evolutionary machine being or destination.
No person matters anymore. This is incredibly dangerous. If we had some way of knowing that we were on the path to creating some greater being, maybe we would all be willing to commit suicide to bring it about. The problem is, because of the Turing test paradox, we can’t know that. Any postulated posthuman being is at this point absolutely a matter of fantasy.
AS: And yet, there’s very little intelligent critique of the idea of The Singularity. Most people who criticize it seem to be just run-of-the-mill technophobes.
JL: Unfortunately, of the people who’ve written works cautioning against the ideology of The Singularity, few have technological backgrounds. Part of the reason is that the preponderance of the elite computer science community is at least partially sympathetic. Another is that many engineers are relatively tone-deaf to aesthetics or morality because they’re people who dwell in the realm of value-free problem solving.
There’s something about that that’s sort of charming. But it leads to very few critics having any idea what they’re talking about. There’s a gap between those who understand these technologies and the, I almost said, secular world.
AS: You write of a danger that “the gulf between the richest and the rest could become transcendently grave. The possibilities that they will become essentially different species are so obvious and so terrifying that there is almost a banality in stating them. The rich could have their children made genetically more intelligent, beautiful, and joyous. Perhaps they could even be genetically disposed to have a superior capacity for empathy, but only to other people who meet some narrow range of criteria. Even stating these things seems beneath me, as if I were writing pulp science fiction, and yet the logic of the possibility is inescapable.” You said that you were concerned that medical and biological research were tending to favor outcomes that would make real advances available only to a tiny fraction of the Earth’s people.
AS: Why is this a problem, and what can be done about it?
JL: This is the thing I really most worry about. Out in the wider world, though, there’s a rebellion brewing precisely as a result of the sort of wild pronouncements about technology you see more and more often in press releases from places like MIT and Berkeley. There has long been a sense of economic injustice, but there’s a brewing sense of spiritual injustice. There’s this sense that it’s one thing if rich people in America drive fancy cars and have lower infant mortality, but this notion that some elite somewhere is defining the soul or making the soul into an obsolete idea or is going to transform what it means to be human or is going to be first in line for immortality— that idea strikes so deep it creates a sense of panic. And I believe this is the explanation for one of the weird features of our time, that every major religion has a terribly violent fundamentalist wing at the same time.
I think if this continues, we’ll get the worst of both—on the one hand, the people who fear being left behind will believe that the first are getting everything—not only riches, but immortality and superintelligence and whatever—while on the other hand, because they’re fooling themselves about what they do and don’t really know, those who are first in line won’t be getting nearly what they dream of. They may get riches. They may get much greater longevity and designer babies. But they certainly won’t be getting the transcendence they dream of. A world with all the conflicts and little of the progress.
AS: What do you think of the adoption of open source as a model for how people might create distributed, collaborative, emergent political movements for evaluating and guiding technological policy (for example, the “open source biology” movement)?
JL: Right now we know how to act on two paradigms. We can use an open, free, collaborative, equality oriented system, sort of the Napster model—whether we’re talking about code or music or medical information. Or we can do this proprietary system, where everything’s closed and owned and those who are first in line become arbitrarily rich, like Bill Gates.
I think there is a sort of split in our culture, with one half committed to the past, to the idea of paying for the use of Mickey Mouse even though Walt Disney’s been dead for decades, and the other half ideologically crying for the “free as in beer” model, where everyone can use anything.
The problems with the intellectual property method have been very well documented by those who oppose it. Among those problems are that, for instance, applied to music, you get a terribly corrupt music industry that puts out terrible music based on the fantasies industry executives had when they were adolescents. Applied to software, you get monocultures of inferior code at high prices. Applied to medicine, you get pharmaceutical companies that drive the process of diagnosis, multitiered health care, and millions dying of easily preventable diseases. And so on.
The problem is that the open and free solutions also have real shortcomings. One is that a relatively small number of people have the ability to totally disrupt the community—the spam problem. But open systems suffer from two other problems. One is the “housekeeping” problem others have talked about: it’s hard to find people to do the boring, routine tasks that are required for a functional system. More importantly, open systems have a hard time plotting strategy and encouraging major risk-taking for innovation. There’s a trackless wilderness in between the two, of course.
AS: You said that creativity is now the source of innovation, that innovation is the source of wealth, but that there’s “creative inequity” between the classes which is far more troubling than the financial inequities, and that “Spreading creativity is a survival question: the alternative to a planet of artists is a planet of corpses.”
JL: What I mean is this—the level of complexity in the problems we face is such that they can only be solved with the help of a whole lot of creative people. We have to look at distributed models of creativity, like gaming communities, where there’s a shared virtual world and you invite everyone to be creative. What you get instead of universal creativity is a power-law distribution: a very small number of people are very creative and do all sorts of amazing things, a slightly larger number make the occasional interesting contribution, and a vast number of people are either not very creative or their creativity doesn’t amount to much. So widespread creativity would seem to have little value, but there’s a paradox: you often can’t find those uniquely valuable creative people unless you invite everyone, and if you invite everyone, the results of creativity can spread remarkably rapidly.
The dawn of the Web was driven by something like this. When the Web started up, there was a period of about a year when there was absolutely no commercial interest in it, and during that time it spread from nothing to tens of millions of users based only on the urge to create and the urge to connect. There was no advertising, no charismatic figure, and no money to be made; no structure, no hierarchy. This is one of the most optimistic signals we’ve ever gotten about how the future could be better.
Take Grameen. Grameen Bank is a celebrated experimental bank that pioneered the idea of microfinance. Grameen makes tiny loans to groups of people who vouch for each other—say, villagers in Bangladesh. These are tiny loans, to start tiny businesses, but the recipients are mutually responsible for one another, so the loans are almost always repaid. It turns out to be a great banking method, yet it also has an incredible social effect of spurring the growth of small, local businesses. The idea is spreading now, but I’ve wondered if we might apply a similar principle to promoting creativity—perhaps having groups of people be mutually responsible for using resources together to solve some problem creatively. At any rate, we need to find new methods to encourage systems of creativity to grow. We need to figure out a way for nearly everyone to have an opportunity to contribute to something vital and constructive, to have a way to find yourself and make a name for yourself without resorting to conflict and violence and terror.
AS: I read that the total number of children and teenagers in the world—over two billion—is more people than were ever alive from when humans first walked upright until after 1930. This wave of kids is the biggest baby boom in all of history. For most of those kids, there are few avenues for education, meaningful work, or participation in democracy.
JL: This global teenager problem is terribly scary. The first time a society encounters mass media, its propaganda goes insane. Europe, Japan, and America went through that in the first half of the twentieth century, and we fought two world wars. China went through it during the Cultural Revolution. The Muslim and African worlds are going through that now. They’re still in the middle of their first encounter with the full force of modern propaganda. It just makes people nuts, and it takes a generation to get used to it, to get desensitized. The coincidence of this wave of teenagers with a third wave of McLuhanesque shock is incredibly alarming.
The only way to respond to this is through technology. It’s the only conceivable way to educate and involve several billion people, right away. There’s no time to build enough new schools, or train enough new teachers. We have to imagine somehow inviting all these people online, imagine propagating some sort of cheap wireless devices that create widespread high-quality access to needed information and collaboration across the entire developing world, imagine accelerating the process whereby kids in the Third World can become jaded to propaganda and open to new frontiers in their own lives—become educated, and capable of creating real work for themselves, and able to solve their communities’ problems with collective wisdom. I don’t know of any plans to do this, but every happy scenario I can imagine has something like this in it. If I were in charge, it would be my first priority.
AS: What keeps you awake at night?
JL: That we’re losing sight of an extremely simple common sense idea—that there’s a set of ideas (democracy, technological optimism, entrepreneurship, the sense we can find commonality through science and exploration) which have provided us with almost everything good about our world. That those ideas are fundamental to any hopes we have.
The anti-globalization sort of people have become entirely too cynical. They just view the whole class of entrepreneurial and technologically optimistic people with suspicion. As a result, they discount the very real almost utopian possibilities if we all learn better ways of working together. Then there are the religious fundamentalists, who just seem to want to go back to the twelfth century. Nobody’s advocating for progress and problem-solving and the really good things about modernity.
AS: There are very few people out there who are willing to stand up for rational inquiry and the humanist project as something that benefits all mankind?
JL: Exactly. It’s extraordinary. I am a humanist, and it’s very hard to find allies these days. The academy’s gone all postmodern, and the sciences seem dominated by these extremes of commercialism and radical cybernetic totalism. We don’t have any major voices advocating this most basic, simple, and obvious thing—and that keeps me up at night.
I like to think of myself as an out-there thinker, exploring the far reaches, but I end up spending a lot of time talking about this simple idea. It’s so disheartening and surprising to feel alone so often in doing that in technological and scientific circles. There’s so much talk about singularities. And if you believe in the ability to rationally improve destiny, a singularity would be a terrible thing, because by definition it’d be something we don’t understand.
AS: You said, “We should do everything we can to avoid singularities. A singularity is a sign that we’ve failed.” Is that what you meant?
JL: Yes. We live in this remarkable time, and the possibilities are astounding. But we need a rate of progress that allows us sufficient predictability to know we’re making good decisions. We have to go fast enough to deliver solutions to the world’s problems in some kind of useful time frame, but slow enough that we retain control. That’s not easy.
Posted by No Apology at 4:15 PM
Labels: Artificial Intelligence, Cybernetic totalism, Jaron Lanier, The Singularity