John McCarthy, a computer scientist who helped design the foundation of
today’s Internet-based computing and who is widely credited with coining the
term for a frontier of research he helped pioneer, Artificial Intelligence, or
A.I., died on Monday at his home in Stanford, Calif. He was 84.
The cause was complications of heart disease, his daughter Sarah McCarthy said.
Dr. McCarthy’s career followed the arc of modern computing. Trained as a
mathematician, he was responsible for seminal advances in the field and was
often called the father of computer time-sharing, a major development of the
1960s that enabled many people and organizations to draw simultaneously from a
single computer source, like a mainframe, without having to own one.
By lowering costs, it allowed more people to use computers and laid the
groundwork for the interactive computing of today.
Though he did not foresee the rise of the personal computer, Dr. McCarthy was
prophetic in describing the implications of other technological advances decades
before they gained currency.
“In the early 1970s, he presented a paper in France on buying and selling by
computer, what is now called electronic commerce,” said Whitfield Diffie, an
Internet security expert who worked as a researcher for Dr. McCarthy at the
Stanford Artificial Intelligence Laboratory.
And in the study of artificial intelligence, “no one is more influential than
John,” Mr. Diffie said.
While teaching mathematics at Dartmouth in 1956, Dr. McCarthy was the principal
organizer of the first Dartmouth Conference on Artificial Intelligence.
The idea of simulating human intelligence had been discussed for decades, but
the term “artificial intelligence” — originally used to help raise funds to
support the conference — stuck.
In 1958, Dr. McCarthy moved to the Massachusetts Institute of Technology, where,
with Marvin Minsky, he founded the Artificial Intelligence Laboratory. It was at
M.I.T. that he began working on what he called List Processing Language, or
Lisp, a computer language that became the standard tool for artificial
intelligence research and design.
Around the same time he came up with a technique called garbage collection, in which memory that is no longer needed by a running computation is automatically identified and reclaimed for reuse.
He developed the technique in 1959 and added it to Lisp. That technique is now
routinely used in Java and other programming languages.
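At its core the technique is a reachability test: anything the running program can still reach is kept, and everything else is reclaimed. Below is a minimal mark-and-sweep sketch in Python, offered purely to illustrate the idea rather than McCarthy’s original 1959 Lisp implementation; the class and function names are invented for the example.

```python
# Toy mark-and-sweep garbage collector: objects unreachable from the "roots"
# of a computation are treated as garbage and dropped from the heap.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []        # objects this object points to
        self.marked = False

def mark(obj):
    """Recursively mark every object reachable from obj."""
    if obj.marked:
        return
    obj.marked = True
    for child in obj.refs:
        mark(child)

def collect(heap, roots):
    """Keep objects reachable from the roots; everything else is garbage."""
    for obj in heap:
        obj.marked = False
    for root in roots:
        mark(root)
    return [o for o in heap if o.marked]

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)                      # a -> b; nothing points to c
heap = [a, b, c]
print([o.name for o in collect(heap, roots=[a])])   # ['a', 'b']; c is collected
```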
His M.I.T. work also led to fundamental advances in software and operating
systems. In one, he was instrumental in developing the first time-sharing system
for mainframe computers.
The power of that invention would come to shape Dr. McCarthy’s worldview to such
an extent that when the first personal computers emerged with local computing
and storage in the 1970s, he belittled them as toys.
Rather, he predicted, wrongly, that in the future everyone would have a
relatively simple and inexpensive computer terminal in the home linked to a
shared, centralized mainframe and use it as an electronic portal to the worlds
of commerce and news and entertainment media.
Dr. McCarthy, who taught briefly at Stanford in the early 1950s, returned there
in 1962 and in 1964 became the founding director of the Stanford Artificial
Intelligence Laboratory, or SAIL. Its optimistic, space-age goal, with financial
backing from the Pentagon, was to create a working artificial intelligence
system within a decade.
Years later he developed a healthy respect for the challenge, saying that
creating a “thinking machine” would require “1.8 Einsteins and one-tenth the
resources of the Manhattan Project.”
Artificial intelligence is still thought to be far in the future, though
tremendous progress has been made in systems that mimic many human skills,
including vision, listening, reasoning and, in robotics, the movements of limbs.
From the mid-’60s to the mid-’70s, the Stanford lab played a vital role in creating some of these technologies, including robotics, machine vision and natural-language understanding.
In 1972, the laboratory drew national attention when Stewart Brand, the founder
of The Whole Earth Catalog, wrote about it in Rolling Stone magazine under the
headline “SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums.”
The article evoked the esprit de corps of a group of researchers who had been
freed to create their own virtual worlds, foreshadowing the emergence of
cyberspace. “Ready or not, computers are coming to the people,” Mr. Brand wrote.
Dr. McCarthy had begun inviting the Homebrew Computer Club, a Silicon Valley
hobbyist group, to meet at the Stanford lab. Among its growing membership were
Steven P. Jobs and Steven Wozniak, who would go on to found Apple. Mr. Wozniak
designed his first personal computer prototype, the Apple I, to share with his
Homebrew friends.
But Dr. McCarthy still cast a jaundiced eye on personal computing. In the second
Homebrew newsletter, he suggested the formation of a “Bay Area Home Terminal
Club,” to provide computer access on a shared Digital Equipment computer. He
thought a user fee of $75 a month would be reasonable.
Though Dr. McCarthy would initially miss the significance of the PC, his early
thinking on electronic commerce would influence Mr. Diffie at the Stanford lab.
Drawing on those ideas, Mr. Diffie began thinking about what would replace the
paper personal check in an all-electronic world.
He and two other researchers went on to develop the basic idea of public key
cryptography, which is now the basis of all modern electronic banking and
commerce, providing secure interaction between a consumer and a business.
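The best-known product of that work, the Diffie-Hellman key exchange, lets two parties agree on a shared secret over a public channel using nothing more than modular exponentiation. The Python sketch below shows only the arithmetic, with toy numbers chosen for illustration; real deployments use primes hundreds of digits long.

```python
# Toy Diffie-Hellman key exchange. The modulus here is far too small for real
# use; the point is only to show how two parties reach the same secret
# without ever transmitting it.

p = 4294967291            # a small prime, for illustration only
g = 5                     # public generator, agreed on in advance

alice_secret = 123456789  # known only to one party (say, the consumer)
bob_secret = 987654321    # known only to the other (say, the business)

# Each party publishes g**secret mod p.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each party combines the other's public value with its own secret.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

assert alice_key == bob_key   # both sides now hold the same shared key
```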
A chess enthusiast, Dr. McCarthy had begun working on chess-playing computer
programs in the 1950s at Dartmouth. Shortly after joining the Stanford lab, he
engaged a group of Soviet computer scientists in an intercontinental chess match
after he discovered they had a chess-playing computer. Played by telegraph, the
match consisted of four games and lasted almost a year. The Soviet scientists
won.
John McCarthy was born on Sept. 4, 1927, into a politically engaged family in
Boston. His father, John Patrick McCarthy, was an Irish immigrant and a labor
organizer.
His mother, the former Ida Glatt, a Lithuanian Jewish immigrant, was active in
the suffrage movement. Both parents were members of the Communist Party. The
family later moved to Los Angeles in part because of John’s respiratory
problems.
He entered the California Institute of Technology in 1944 and went on to
graduate studies at Princeton, where he was a colleague of John Forbes Nash Jr., the mathematician and Nobel laureate in economics who was the subject of Sylvia Nasar’s book “A Beautiful Mind,” later adapted into a movie.
At Princeton, in 1949, he briefly joined the local Communist Party cell, which
had two other members: a cleaning woman and a gardener, he told an interviewer.
But he quit the party shortly afterward.
In the ’60s, as the Vietnam War escalated, his politics took a conservative turn
as he grew disenchanted with leftist politics.
In 1971 Dr. McCarthy received the Turing Award, the most prestigious award given by the Association for Computing Machinery, for his work in artificial intelligence.
He was awarded the Kyoto Prize in 1988, the National Medal of Science in 1991
and the Benjamin Franklin Medal in 2003.
Dr. McCarthy was married three times. His second wife, Vera Watson, a member of
the American Women’s Himalayan Expedition, died in a climbing accident on
Annapurna in 1978.
Besides his daughter Sarah, of Nevada City, Calif., he is survived by his wife,
Carolyn Talcott, of Stanford; another daughter, Susan McCarthy, of San
Francisco; and a son, Timothy, of Stanford.
He remained an independent thinker throughout his life. Some years ago, one of
his daughters presented him with a license plate bearing one of his favorite
aphorisms: “Do the arithmetic or be doomed to talk nonsense.”
YORKTOWN HEIGHTS, N.Y. — In the end, the humans on “Jeopardy!” surrendered
meekly.
Facing certain defeat at the hands of a room-size I.B.M. computer on Wednesday
evening, Ken Jennings, famous for winning 74 games in a row on the TV quiz show,
acknowledged the obvious. “I, for one, welcome our new computer overlords,” he
wrote on his video screen, borrowing a line from a “Simpsons” episode.
From now on, if the answer is “the computer champion on ‘Jeopardy!,’” the question will be, “What is Watson?”
For I.B.M., the showdown was not merely a well-publicized stunt and a $1 million
prize, but proof that the company has taken a big step toward a world in which
intelligent machines will understand and respond to humans, and perhaps
inevitably, replace some of them.
Watson, specifically, is a “question answering machine” of a type that
artificial intelligence researchers have struggled with for decades — a computer
akin to the one on “Star Trek” that can understand questions posed in natural
language and answer them.
Watson showed itself to be imperfect, but researchers at I.B.M. and other
companies are already developing uses for Watson’s technologies that could have
significant impact on the way doctors practice and consumers buy products.
“Cast your mind back 20 years and who would have thought this was possible?”
said Edward Feigenbaum, a Stanford University computer scientist and a pioneer
in the field.
In its “Jeopardy!” project, I.B.M. researchers were tackling a game that
requires not only encyclopedic recall, but the ability to untangle convoluted
and often opaque statements, a modicum of luck, and quick, strategic button
pressing.
The contest, which was taped in January here at the company’s T. J. Watson
Research Laboratory before an audience of I.B.M. executives and company clients,
played out in three televised episodes concluding Wednesday. At the end of the
first day, Watson was in a tie with Brad Rutter, another ace human player, at
$5,000 each, with Mr. Jennings trailing with $2,000.
But on the second day, Watson went on a tear. By night’s end, Watson had a
commanding lead with a total of $35,734, compared with Mr. Rutter’s $10,400 and
Mr. Jennings’ $4,800.
But victory was not cemented until late in the third match, when Watson was in
Nonfiction. “Same category for $1,200,” it said in a manufactured tenor, and
lucked into a Daily Double. Mr. Jennings grimaced.
Even later in the match, however, had Mr. Jennings won another key Daily Double, it might have come down to Final Jeopardy, I.B.M. researchers acknowledged.
The final tally was $77,147 for Watson, to Mr. Jennings’ $24,000 and Mr. Rutter’s $21,600.
More than anything, the contest was a vindication for the academic field of artificial intelligence, which began with great promise in the 1960s with the vision of creating a thinking machine and which became the laughingstock of Silicon Valley in the 1980s, when a series of heavily funded start-up companies went bankrupt.
Despite its intellectual prowess, Watson was by no means omniscient. On Tuesday
evening during Final Jeopardy, the category was U.S. Cities and the clue was:
“Its largest airport is named for a World War II hero; its second largest for a
World War II battle.”
Watson drew guffaws from many in the television audience when it responded “What
is Toronto?????”
The string of question marks indicated that the system had very low confidence
in its response, I.B.M. researchers said, but because it was Final Jeopardy, it
was forced to give a response. The machine did not suffer much damage. It had
wagered just $947 on its result.
“We failed to deeply understand what was going on there,” said David Ferrucci,
an I.B.M. researcher who led the development of Watson. “The reality is that
there’s lots of data where the title is U.S. cities and the answers are
countries, European cities, people, mayors. Even though it says U.S. cities, we
had very little confidence that that’s the distinguishing feature.”
The researchers also acknowledged that the machine had benefited from the
“buzzer factor.”
Both Mr. Jennings and Mr. Rutter are accomplished at anticipating the light that
signals it is possible to “buzz in,” and can sometimes get in with virtually
zero lag time. The danger is to buzz too early, in which case the contestant is
penalized and “locked out” for roughly a quarter of a second.
Watson, on the other hand, does not anticipate the light, but has a weighted
scheme that allows it, when it is highly confident, to buzz in as quickly as 10
milliseconds, making it very hard for humans to beat. When it is less confident, it buzzes more slowly. In the second round, Watson beat the others to
the buzzer in 24 out of 30 Double Jeopardy questions.
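I.B.M. has not published the details of that weighting, but the behavior the researchers describe (press almost instantly when confident, hold back when not) can be captured in a few lines. The Python sketch below is purely illustrative; every threshold and delay in it is an assumption apart from the roughly 10-millisecond floor quoted above.

```python
from typing import Optional

def buzz_delay_ms(confidence: float) -> Optional[float]:
    """Return how long to wait after the buzzer opens before pressing,
    or None to sit the clue out. All numbers are illustrative guesses."""
    if confidence < 0.50:
        return None                          # too unsure to risk a wrong answer
    if confidence >= 0.90:
        return 10.0                          # near-certain: press almost at once
    # In between, slow down as confidence drops, leaving room to be outbuzzed.
    return 10.0 + (0.90 - confidence) * 500.0

for c in (0.95, 0.80, 0.60, 0.40):
    print(c, buzz_delay_ms(c))
```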
“It sort of wants to get beaten when it doesn’t have high confidence,” Dr.
Ferrucci said. “It doesn’t want to look stupid.”
Both human players said that Watson’s button pushing skill was not necessarily
an unfair advantage. “I beat Watson a couple of times,” Mr. Rutter said.
When Watson did buzz in, it made the most of it. Showing the ability to parse
language, it responded to, “A recent best seller by Muriel Barbery is called
‘This of the Hedgehog,’ ” with “What is Elegance?”
It showed its facility with medical diagnosis. With the answer: “You just need a
nap. You don’t have this sleep disorder that can make sufferers nod off while
standing up,” Watson replied, “What is narcolepsy?”
The coup de grâce came with the answer, “William Wilkinson’s ‘An Account of the
Principalities of Wallachia and Moldavia’ inspired this author’s most famous
novel.” Mr. Jennings wrote, correctly, Bram Stoker, but realized he could not
catch up with Watson’s winnings and wrote out his surrender.
Both players took the contest and its outcome philosophically.
“I had a great time and I would do it again in a heartbeat,” said Mr. Jennings.
“It’s not about the results; this is about being part of the future.”
For I.B.M., the future will happen very quickly, company executives said. On
Thursday it plans to announce that it will collaborate with Columbia University
and the University of Maryland to create a physician’s assistant service that
will allow doctors to query a cybernetic assistant. The company also plans to
work with Nuance Communications Inc. to add voice recognition to the physician’s
assistant, possibly making the service available in as little as 18 months.
“I have been in medical education for 40 years and we’re still a very
memory-based curriculum,” said Dr. Herbert Chase, a professor of clinical
medicine at Columbia University who is working with I.B.M. on the physician’s
assistant. “The power of Watson-like tools will cause us to reconsider what it
is we want students to do.”
I.B.M. executives also said they are in discussions with a major consumer
electronics retailer to develop a version of Watson, named after I.B.M.’s
founder, Thomas J. Watson, that would be able to interact with consumers on a
variety of subjects like buying decisions and technical support.
Dr. Ferrucci shares none of the fears that have been expressed by theorists and science fiction writers about the potential of computers to usurp humans.
“People ask me if this is HAL,” he said, referring to the computer in “2001: A
Space Odyssey.” “HAL’s not the focus, the focus is on the computer on ‘Star
Trek,’ where you have this intelligent information seek dialog, where you can
ask follow-up questions and the computer can look at all the evidence and tries
to ask follow-up questions. That’s very cool.”
STANFORD, Calif. — At the dawn of the modern computer era, two
Pentagon-financed laboratories bracketed Stanford University. At one laboratory,
a small group of scientists and engineers worked to replace the human mind,
while at the other, a similar group worked to augment it.
In 1963 the mathematician-turned-computer scientist John McCarthy started the
Stanford Artificial Intelligence Laboratory. The researchers believed that it
would take only a decade to create a thinking machine.
Also that year the computer scientist Douglas Engelbart formed what would become
the Augmentation Research Center to pursue a radically different goal —
designing a computing system that would instead “bootstrap” the human
intelligence of small groups of scientists and engineers.
For the past four decades that basic tension between artificial intelligence and
intelligence augmentation — A.I. versus I.A. — has been at the heart of progress
in computing science as the field has produced a series of ever more powerful
technologies that are transforming the world.
Now, as the pace of technological change continues to accelerate, it has become
increasingly possible to design computing systems that enhance the human
experience, or now — in a growing number of cases — completely dispense with it.
The implications of progress in A.I. are being brought into sharp relief now by
the broadcasting of a recorded competition pitting the I.B.M. computing system
named Watson against the two best human “Jeopardy!” players, Ken Jennings and Brad
Rutter.
Watson is an effort by I.B.M. researchers to advance a set of techniques used to
process human language. It provides striking evidence that computing systems
will no longer be limited to responding to simple commands. Machines will
increasingly be able to pick apart jargon, nuance and even riddles. In attacking
the problem of the ambiguity of human language, computer science is now closing
in on what researchers refer to as the “Paris Hilton problem” — the ability, for
example, to determine whether a query is being made by someone who is trying to
reserve a hotel in France, or simply to pass time surfing the Internet.
If, as many predict, Watson defeats its human opponents on Wednesday, much will
be made of the philosophical consequences of the machine’s achievement.
Moreover, the I.B.M. demonstration also foretells profound sociological and
economic changes.
Traditionally, economists have argued that while new forms of automation may
displace jobs in the short run, over longer periods of time economic growth and
job creation have continued to outpace any job-killing technologies. For
example, over the past century and a half the shift from being a largely
agrarian society to one in which less than 1 percent of the United States labor
force is in agriculture is frequently cited as evidence of the economy’s ability
to reinvent itself.
That, however, was before machines began to “understand” human language. Rapid
progress in natural language processing is beginning to lead to a new wave of
automation that promises to transform areas of the economy that have until now
been untouched by technological change.
“As designers of tools and products and technologies we should think more about
these issues,” said Pattie Maes, a computer scientist at the M.I.T. Media Lab.
Not only do designers face ethical issues, she argues, but increasingly as
skills that were once exclusively human are simulated by machines, their
designers are faced with the challenge of rethinking what it means to be human.
I.B.M.’s executives have said they intend to commercialize Watson to provide a
new class of question-answering systems in business, education and medicine. The
repercussions of such technology are unknown, but it is possible, for example,
to envision systems that replace not only human experts, but hundreds of
thousands of well-paying jobs throughout the economy and around the globe.
Virtually any job that now involves answering questions and conducting
commercial transactions by telephone will soon be at risk. It is only necessary
to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of
what could happen.
To be sure, anyone who has spent time waiting on hold for technical support, or
trying to change an airline reservation, may welcome that day. However, there is
also a growing unease about the advances in natural language understanding that
are being heralded in systems like Watson. As rapidly as A.I.-based systems are
proliferating, there are equally compelling examples of the power of I.A. —
systems that extend the capability of the human mind.
Google itself is perhaps the most significant example of using software to mine
the collective intelligence of humans and then making it freely available in the
form of a digital library. The search engine was originally based on a software algorithm called PageRank, which mined the linking choices millions of Web authors had already made and used them to rank the pages matching a particular typed query by relevance.
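The idea behind PageRank is simple to state: a page matters if pages that matter link to it. The Python sketch below is the textbook power-iteration simplification of that idea, not Google’s production system, and the toy link graph in it is invented for the example.

```python
# A compact power-iteration sketch of the PageRank idea: a page is important
# if important pages link to it.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing if outgoing else pages   # dangling page: spread evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(toy_web)
print(sorted(ranks, key=ranks.get, reverse=True))   # "c" ranks first in this toy graph
```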
The Internet is widely used for applications that employ a range of human
capabilities. For example, experiments in Web-based games designed to harness
the human ability to recognize patterns — which still greatly exceeds what is
possible by computer — are generating a new set of scientific tools. Games like
FoldIt, EteRNA and Galaxy Zoo make it possible for individuals to compete and
collaborate in fields ranging from astronomy to biology, medicine and possibly even materials science.
Personal computing was the first step toward intelligence augmentation that
reached a broad audience. It created a generation of “information workers,” and
equipped them with a set of tools for gathering, producing and sharing
information. Now there is a cyborg quality to the changes that are taking place
as personal computing has evolved from desktop to laptop and now to the
smartphones that have quickly become ubiquitous.
The smartphone is not just a navigation and communication tool. It has rapidly
become a nearly seamless extension of almost all of our senses. It is not only
a reference tool but is quickly evolving to be an “information concierge” that
can respond to typed or spoken queries or simply volunteer advice.
Further advances in both A.I. and I.A. will increasingly confront engineers and computer scientists with clear choices about how technology is used. “There
needs to be an explicit social contract between the engineers and society to
create not just jobs but better jobs,” said Jaron Lanier, a computer scientist
and author of “You Are Not a Gadget: A Manifesto.”
The consequences of human design decisions can be clearly seen in the competing
online news systems developed here in Silicon Valley.
Each day Katherine Ho sits at a computer and observes which news articles
millions of Yahoo users are reading.
Her computer monitor displays the results of a cluster of software programs
giving her almost instant updates on precisely how popular each of the news
articles on the company’s home page is, based on her readers’ tastes and
interests.
Ms. Ho is a 21st-century version of a traditional newspaper wire editor. Instead
of gut and instinct, her decisions on which articles to put on the Yahoo home
page are based on the cues generated by the software algorithms.
Throughout the day she constantly reorders the news articles that are displayed
for dozens of demographic subgroups that make up the Yahoo readership. An
article that isn’t drawing much interest may last only minutes before she
“spikes” it electronically. Popular articles stay online for days and sometimes
draw tens of millions of readers.
Just five miles north at Yahoo’s rival Google, however, the news is produced in
an entirely different manner. Spotlight, a popular feature on Google’s news
site, is run entirely by a software algorithm which performs essentially the
same duties as Ms. Ho does.
Google’s software prowls the Web looking for articles deemed interesting,
employing a process that is similar to the company’s PageRank search engine
ranking system to make decisions on which articles to present to readers.
In one case, software-based technologies are being used to extend the skills of a human worker; in the other, technology replaces her entirely.
Similar design decisions about how machines are used and whether they will
enhance or replace human qualities are now being played out in a multitude of
ways, and the real value of Watson may ultimately be in forcing society to
consider where the line between human and machine should be drawn.
Indeed, for the computer scientist John Seely Brown, machines that are facile at
answering questions only serve to obscure what remains fundamentally human.
“The essence of being human involves asking questions, not answering them,” he
said.
December 4, 2008
The New York Times
By JOHN MARKOFF
Oliver G. Selfridge, an innovator in early computer science and artificial
intelligence, died on Wednesday in Boston. He was 82.
The cause was injuries suffered in a fall on Sunday at his home in nearby
Belmont, Mass., said his companion, Edwina L. Rissland.
Credited with coining the term “intelligent agents,” for software programs
capable of observing and responding to changes in their environment, Mr.
Selfridge theorized about far more, including devices that would not only
automate certain tasks but also learn through practice how to perform them
better, faster and more cheaply.
Eventually, he said, machines would be able to analyze operator instructions to
discern not just what users requested but what they actually wanted to occur,
not always the same thing.
His 1958 paper “Pandemonium: A Paradigm for Learning,” which proposed a
collection of small components dubbed “demons” that together would allow
machines to recognize patterns, was a landmark contribution to the emerging
science of machine learning.
An early enthusiast about the potential of interactive computing, Mr. Selfridge
saw his ideas summarized in a famous 1968 paper, “The Computer as a Communication Device,” written by J. C. R. Licklider and Robert W. Taylor and
published in the journal Science and Technology.
Honoring Mr. Selfridge, the authors proposed a device they referred to as
Oliver, an acronym for On-Line Interactive Vicarious Expediter and Responder.
Oliver was one of the clearest early descriptions of a computerized personal
assistant.
With four other colleagues, Mr. Selfridge helped organize a 1956 conference at
Dartmouth that led directly to creation of the field of artificial intelligence.
“Oliver was one of the founding fathers of the discipline of artificial
intelligence,” said Eric Horvitz, a Microsoft researcher who is president of the
Association for the Advancement of Artificial Intelligence. “He has been well
known in the field for his early and prescient writings on the challenge of
endowing machines with the ability to learn to recognize patterns.”
Oliver Gordon Selfridge, a grandson of H. Gordon Selfridge, the American who
founded Selfridges department store in London, was born in London on May 10,
1926. The family lost control of the business during the Depression and
emigrated to the United States at the onset of World War II.
Mr. Selfridge attended Middlesex School in Concord, Mass., and the Massachusetts
Institute of Technology, from which he graduated at 19 with a degree in
mathematics. After service in the Navy, he embarked on graduate study at M.I.T.
under Norbert Wiener, the pioneering theorist of cybernetics. He became one of Wiener’s collaborators but plunged into the working world of computer science
before earning an advanced degree.
In the 1960s Mr. Selfridge was associate director for Project MAC, an early
time-shared computing research project at M.I.T. He did much of this work at the
M.I.T. Lincoln Laboratory, a federally financed research center for security
technology. He then worked at Bolt, Beranek & Newman, now BBN Technologies,
which develops computer and communications-related technology. In 1983 he became
chief scientist for the telecommunications company GTE.
He began advising the nation’s national security leaders in the 1950s, among
other tasks serving on the President’s Foreign Intelligence Advisory Board and
the Scientific Advisory Board of the National Security Agency.
His first marriage, to Allison Gilman Selfridge, and his second, to Katherine
Bull Selfridge, ended in divorce. Besides his companion, his survivors include
their daughter, Olivia Selfridge Rissland of Belmont; three children from his
first marriage, Peter Selfridge of Bethesda, Md.; Mallory Selfridge of Eastford,
Conn.; and Caroline Selfridge of Saratoga, Calif.; a sister, Jennifer Selfridge
MacLeod of Princeton Junction, N.J.; and six grandchildren.
Along with producing scholarly papers and technical books, Mr. Selfridge wrote
“Fingers Come in Fives,” “All About Mud” and “Trouble With Dragons,” all books
for children. At his death he was working on a series of books he hoped might
one day become an arithmetic equivalent of summer reading projects for
schoolchildren.
Mr. Selfridge never stopped theorizing, speaking and writing on what he saw as
the future of artificial intelligence.
“I want an agent that can learn and adapt as I might,” he once told a meeting
organized by I.B.M. Such an agent would “infer what I would want it to do, from
the updated purposes it has learned from working for me,” he went on, and “do as
I want rather than the silly things I might say.”
“BEWARE of geeks bearing formulas.” So saith Warren Buffett, the Wizard of
Omaha. Words to bear in mind as we bail out banks and buy up mortgages and tweak
interest rates and nothing, nothing seems to make any difference on Wall Street
or Main Street. Years ago, Mr. Buffett called derivatives “financial weapons of mass destruction” — an apt metaphor considering that the Manhattan Project’s
math and physics geeks bearing formulas brought us the original weapon of mass
destruction, at Trinity in New Mexico on July 16, 1945.
In a 1981 documentary called “The Day After Trinity,” Freeman Dyson, a reigning
gray eminence of math and theoretical physics, as well as an ardent proponent of
nuclear disarmament, described the seductive power that brought us the ability
to create atomic energy out of nothing.
“I have felt it myself,” he warned. “The glitter of nuclear weapons. It is
irresistible if you come to them as a scientist. To feel it’s there in your
hands, to release this energy that fuels the stars, to let it do your bidding.
To perform these miracles, to lift a million tons of rock into the sky. It is
something that gives people an illusion of illimitable power, and it is, in some
ways, responsible for all our troubles — this, what you might call technical
arrogance, that overcomes people when they see what they can do with their
minds.”
The Wall Street geeks, the quantitative analysts (“quants”) and masters of “algo
trading” probably felt the same irresistible lure of “illimitable power” when
they discovered “evolutionary algorithms” that allowed them to create vast
empires of wealth by deriving the dependence structures of portfolio credit
derivatives.
What does that mean? You’ll never know. Over and over again, financial experts
and wonkish talking heads endeavor to explain these mysterious, “toxic”
financial instruments to us lay folk. Over and over, they ignobly fail, because
we all know that no one understands collateralized debt obligations and credit default swaps, except perhaps Mr. Buffett and the computers who created them.
Somehow the genius quants — the best and brightest geeks Wall Street firms could
buy — fed $1 trillion in subprime mortgage debt into their supercomputers, added
some derivatives, massaged the arrangements with computer algorithms and — poof!
— created $62 trillion in imaginary wealth. It’s not much of a stretch to
imagine that all of that imaginary wealth is locked up somewhere inside the
computers, and that we humans, led by the silverback males of the financial
world, Ben Bernanke and Henry Paulson, are frantically beseeching the monolith
for answers. Or maybe we are lost in space, with Dave the astronaut pleading,
“Open the bank vault doors, Hal.”
As the current financial crisis spreads (like a computer virus) on the earth’s
nervous system (the Internet), it’s worth asking if we have somehow managed to
colossally outsmart ourselves using computers. After all, the Wall Street titans
loved swaps and derivatives because they were totally unregulated by humans.
That left nobody but the machines in charge.
How fitting then, that almost 30 years after Freeman Dyson described the almost
unspeakable urges of the nuclear geeks creating illimitable energy out of
equations, his son, George Dyson, has written an essay (published at Edge.org)
warning about a different strain of technical arrogance that has brought the
entire planet to the brink of financial destruction. George Dyson is an
historian of technology and the author of “Darwin Among the Machines,” a book
that warned us a decade ago that it was only a matter of time before technology
out-evolves us and takes over.
His new essay — “Economic Dis-Equilibrium: Can You Have Your House and Spend It
Too?” — begins with a history of “stock,” originally a stick of hazel, willow or
alder wood, inscribed with notches indicating monetary amounts and dates. When
funds were transferred, the stick was split into identical halves — with one
side going to the depositor and the other to the party safeguarding the money —
and represented proof positive that gold had been deposited somewhere to back it
up. That was good enough for 600 years, until we decided that we needed more
speed and efficiency.
Making money, it seems, is all about the velocity of moving it around, so that
it can exist in Hong Kong one moment and Wall Street a split second later. “The
unlimited replication of information is generally a public good,” George Dyson
writes. “The problem starts, as the current crisis demonstrates, when
unregulated replication is applied to money itself. Highly complex
computer-generated financial instruments (known as derivatives) are being
produced, not from natural factors of production or other goods, but purely from
other financial instruments.”
It was easy enough for us humans to understand a stick or a dollar bill when it
was backed by something tangible somewhere, but only computers can understand
and derive a correlation structure from observed collateralized debt obligation
tranche spreads. Which leads us to the next question: Just how much of the
world’s financial stability now lies in the “hands” of computerized trading
algorithms?
•
Here’s a frightening party trick that I learned from the futurist Ray Kurzweil.
Read this excerpt and then I’ll tell you who wrote it:
But we are suggesting neither that the human race would voluntarily turn power
over to the machines nor that the machines would willfully seize power. What we
do suggest is that the human race might easily permit itself to drift into a
position of such dependence on the machines that it would have no practical
choice but to accept all of the machines’ decisions. ... Eventually a stage may
be reached at which the decisions necessary to keep the system running will be
so complex that human beings will be incapable of making them intelligently. At
that stage the machines will be in effective control. People won’t be able to
just turn the machines off, because they will be so dependent on them that
turning them off would amount to suicide.
Brace yourself. It comes from the Unabomber’s manifesto.
Yes, Theodore Kaczynski was a homicidal psychopath and a paranoid kook, but he
was also a bloodhound when it came to scenting all of the horrors technology
holds in store for us. Hence his mission to kill technologists before machines
commenced what he believed would be their inevitable reign of terror.
•
We are living, we have long been told, in the Information Age. Yet now we are
faced with the sickening suspicion that technology has run ahead of us. Man is a
fire-stealing animal, and we can’t help building machines and machine
intelligences, even if, from time to time, we use them not only to outsmart
ourselves but to bring us right up to the doorstep of Doom.
We are still fearful, superstitious and all-too-human creatures. At times, we
forget the magnitude of the havoc we can wreak by off-loading our minds onto
super-intelligent machines, that is, until they run away from us, like mad
sorcerers’ apprentices, and drag us up to the precipice for a look down into the
abyss.
As the financial experts all over the world use machines to unwind Gordian knots
of financial arrangements so complex that only machines can make — “derive” —
and trade them, we have to wonder: Are we living in a bad sci-fi movie? Is the
Matrix made of credit default swaps?
When Treasury Secretary Paulson (looking very much like a frightened primate)
came to Congress seeking an emergency loan, Senator Jon Tester of Montana, a
Democrat still living on his family homestead, asked him: “I’m a dirt farmer.
Why do we have one week to determine that $700 billion has to be appropriated or
this country’s financial system goes down the pipes?”
“Well, sir,” Mr. Paulson could well have responded, “the computers have demanded
it.”
Richard Dooling is the author of “Rapture for the Geeks: When A.I. Outsmarts I.Q.”