Technology > Computing >
Science, Pioneers, Chips, Microprocessors
Time Covers - The '50s
TIME cover 01-23-1950
caricature of the US Navy's Mark III computer.
Date taken: January 23, 1950
Photographer: Boris Artzybasheff
a single piece of silicon
Silicon Valley USA
computer chips USA
nanotube chip breakthrough USA
IBM Develops a New Chip That Functions Like a Brain
chip / semiconductor chips / processor
chip designer > ARM Holdings
Julius Blank USA
who helped start
a computer chip company in the 1950s
that became a prototype for high-tech start-ups
and a training ground for a generation
of Silicon Valley entrepreneurs
chip maker Texas Instruments Inc.
state of the art
A bit is a single unit of data,
expressed as either a "0" or a "1" in binary code.
A string of eight bits equals one byte.
Any character,
such as a letter of the alphabet,
a number, or a punctuation mark,
requires eight binary bits to describe it.
A = 01000001
B = 01000010
a = 01100001
b = 01100010
6 = 00110110
7 = 00110111
! = 00100001
@ = 01000000
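The table above can be reproduced with a few lines of code. This is a minimal sketch; the helper name `to_bits` is illustrative, not from the original text.

```python
# Format a character's ASCII code as an 8-bit binary string,
# matching the A/B/a/b/6/7/!/@ table above.
def to_bits(ch: str) -> str:
    """Return the 8-bit binary representation of one character."""
    return format(ord(ch), "08b")

for ch in "ABab67!@":
    print(f"{ch} = {to_bits(ch)}")
```

Running it prints exactly the eight lines of the table, e.g. `A = 01000001`.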
What is the Difference Between a Bit and a Byte?
A bit, short for binary digit,
is the smallest unit of measurement
used for information storage in computers.
A bit is represented
by a 1 or a 0 with a value of true or false,
sometimes expressed as on or off.
Eight bits form a single byte of information,
also known as an octet.
The difference between a bit and a byte is size,
or the amount of information stored.
1000 bits = 1 kilobit
1000 kilobits = 1 megabit
1000 megabits = 1 gigabit
Megabits per second (Mbps)
refers to data transfer speeds as measured in megabits (Mb).
This term is commonly used
in communications and data technology
to demonstrate the speed at which a transfer takes place.
A megabit is one million bits,
so "Mbps" indicates the transfer of one million bits of data each second.
Data can be moved even faster than this,
measured by terms like gigabits per second (Gbps).
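The bits-versus-bytes distinction matters in practice when estimating transfer times, since link speeds are quoted in megabits but file sizes usually in megabytes. A small sketch (the function name and the 100 MB / 50 Mbps figures are illustrative assumptions):

```python
# How long a transfer takes at a given link speed.
# Speeds are in megabits per second; file sizes in megabytes.
def transfer_seconds(size_mb: float, speed_mbps: float) -> float:
    """Seconds to move `size_mb` megabytes over a `speed_mbps` link."""
    size_megabits = size_mb * 8      # 1 byte = 8 bits
    return size_megabits / speed_mbps

# A 100 MB file over a 50 Mbps connection:
print(transfer_seconds(100, 50))     # 16.0 seconds
```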
Megabytes (MBs) are collections of digital information.
The term commonly refers to two different numbers of bytes,
where each byte contains eight bits.
The first definition of megabyte,
used mainly in the context of computer memory,
denotes 1,048,576, or 2^20, bytes.
The other definition,
used in most networking
and computer storage applications,
means 1,000,000, or 10^6, bytes.
Using either definition,
a megabyte is roughly the file size of a 500-page e-book.
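The two megabyte definitions can be put side by side to see how far apart they are (a small sketch; the constant names are illustrative):

```python
# The two definitions of "megabyte" described above.
MB_DECIMAL = 10**6    # 1,000,000 bytes (networking, most storage)
MB_BINARY = 2**20     # 1,048,576 bytes (computer memory)

print(MB_BINARY)                  # 1048576
print(MB_BINARY - MB_DECIMAL)     # 48576 bytes apart
print(MB_BINARY / MB_DECIMAL)     # 1.048576, about 4.9% larger
```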
A gigabit (Gb) is a unit of measurement used in computers,
equal to one billion bits of data.
A bit is the smallest unit of data.
It takes eight bits
to form or store a single character of text.
These 8-bit units are known as bytes.
Hence, the difference between a gigabit and a gigabyte
is that the latter is 8x greater, or eight billion bits.
A gigabyte (GB) is a term that indicates a definite value of data quantity
with regard to storage capacity or content.
It refers to an amount of something,
usually data of some kind, often stored digitally.
A gigabyte typically refers to 1 billion bytes.
A terabyte (TB)
is a large allocation of data storage capacity
applied most often to hard disk drives.
Hard disk drives are essential to computer systems,
as they store the operating system, programs, files and data
necessary to make the computer work.
Depending on what type of storage is being measured,
it can be equal to either 1,000 gigabytes (GB) or 1,024 GB;
disk storage is usually measured as the first,
while processor storage as the second.
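The mismatch between the two terabyte definitions explains a familiar puzzle: a drive sold as "1 TB" shows up in the operating system as roughly 931 GB, because the maker counts decimal bytes while the OS displays binary gigabytes. A quick sketch of the arithmetic (variable names are illustrative):

```python
# Why a "1 TB" drive appears as about 931 GB in the OS.
drive_bytes = 1_000_000_000_000   # 1 TB as sold: 10^12 bytes
binary_gb = 1024**3               # one binary gigabyte in bytes

print(round(drive_bytes / binary_gb, 1))   # 931.3
```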
In the late 1980s,
the average home computer system
had a single hard drive
with a capacity of about 20 megabytes (MB).
By the mid-1990s,
average capacity had increased to about 80 MB.
Just a few years later,
operating systems alone required more room than this,
while several hundred megabytes
represented an average storage capacity.
As of 2005,
computer buyers think in terms of hundreds of gigabytes,
and this is already giving way to even greater storage.
In the world of ever-growing data capacity,
a petabyte represents the frontier just ahead of the terabyte,
which itself runs just ahead of the gigabyte.
In other words, 1,024 gigabytes is one terabyte,
and 1,024 terabytes is one petabyte.
To put this in perspective,
a petabyte is about one million gigabytes (1,048,576).
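The whole ladder of units above, and the gigabit/gigabyte factor of eight from earlier, reduce to a few lines of arithmetic (a minimal sketch; the constant names are illustrative):

```python
# The binary (1,024-based) unit ladder used in the text.
GB = 1024**3        # bytes in a gigabyte
TB = 1024**4        # 1,024 gigabytes
PB = 1024**5        # 1,024 terabytes

print(TB // GB)     # 1024 gigabytes per terabyte
print(PB // GB)     # 1048576 gigabytes per petabyte

# The gigabit/gigabyte distinction: 8 bits per byte.
gigabit = 10**9            # one billion bits
gigabyte_bits = 8 * 10**9  # one billion bytes, expressed in bits
print(gigabyte_bits // gigabit)   # 8
```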
computing pioneer > Steven J. Wallach
computer science USA
personal computer PC
PC tuneup USA
supercomputer UK / USA
performing complex calculations
thousands of times faster
supercomputers UK / USA
computerized surveillance systems
MITS Altair, the first inexpensive general-purpose microcomputer
and minuscule sensors stuffed inside pills
The world's first multi-tasking computer - video
We take the ability to multi-task
on our computers for granted,
but it all started
with the Pilot ACE computer
and the genius of Alan Turing.
Alan Turing's Pilot ACE computer
12 April 2013
Built in the 1950s
and one of the Science Museum's 20th-century icons,
the Pilot ACE "automatic computing engine"
was the world's first general-purpose computer
– and for a while was the fastest computer in the world.
We now take the ability to carry out
a range of tasks on our computers for granted,
but it all started with the principles developed
by mathematician Alan Turing
and his design for the ACE.
In this film,
Professor Nick Braithwaite
of the Open University
discusses its significance with Tilly Blyth,
curator of Computing
at the Science Museum
IBM PC UK
the IBM PC 5150 computer
construction of the ENIAC machine
University of Pennsylvania in 1943 UK
ECToR, the UK's fastest machine
computer services company
laptop / notebook
widescreen wireless notebook
James Loton Flanagan
pioneer in the field
the technical foundation
for speech recognition,
and the more efficient digital transmission
of human conversation
designer of an early
portable computer, the Kaypro II,
that became a smash
hit in the early 1980s
before his company
fell into bankruptcy in the ’90s
as the computer
industry leapfrogged ahead of him
For a time, Mr. Kay’s company
was the world’s
largest portable computer maker,
ranked fourth in the
PC industry over all behind IBM,
Apple Computer and
Jack Tramiel (born Jacek Trzmiel in Lodz, Poland),
immensely popular Commodore computers
helped ignite the personal computer industry
the way Henry Ford’s Model T
kick-started the mass production of automobiles
Kenneth Harry Olsen USA
Ken Olsen helped reshape the computer industry
as a founder of the
Digital Equipment Corporation,
at one time the world’s second-largest computer company
Computers Make Strides in Recognizing Speech
Date taken: 1961
Photographer: Walter Sanders
John McCarthy, 84, Dies;
Computer Design Pioneer
October 25, 2011
The New York Times
By JOHN MARKOFF
John McCarthy, a computer scientist who helped design the foundation of
today’s Internet-based computing and who is widely credited with coining the
term for a frontier of research he helped pioneer, Artificial Intelligence, or
A.I., died on Monday at his home in Stanford, Calif. He was 84.
The cause was complications of heart disease, his daughter Sarah McCarthy said.
Dr. McCarthy’s career followed the arc of modern computing. Trained as a
mathematician, he was responsible for seminal advances in the field and was
often called the father of computer time-sharing, a major development of the
1960s that enabled many people and organizations to draw simultaneously from a
single computer source, like a mainframe, without having to own one.
By lowering costs, it allowed more people to use computers and laid the
groundwork for the interactive computing of today.
Though he did not foresee the rise of the personal computer, Dr. McCarthy was
prophetic in describing the implications of other technological advances decades
before they gained currency.
“In the early 1970s, he presented a paper in France on buying and selling by
computer, what is now called electronic commerce,” said Whitfield Diffie, an
Internet security expert who worked as a researcher for Dr. McCarthy at the
Stanford Artificial Intelligence Laboratory.
And in the study of artificial intelligence, “no one is more influential than
John,” Mr. Diffie said.
While teaching mathematics at Dartmouth in 1956, Dr. McCarthy was the principal
organizer of the first Dartmouth Conference on Artificial Intelligence.
The idea of simulating human intelligence had been discussed for decades, but
the term “artificial intelligence” — originally used to help raise funds to
support the conference — stuck.
In 1958, Dr. McCarthy moved to the Massachusetts Institute of Technology, where,
with Marvin Minsky, he founded the Artificial Intelligence Laboratory. It was at
M.I.T. that he began working on what he called List Processing Language, or
Lisp, a computer language that became the standard tool for artificial
intelligence research and design.
Around the same time he came up with a technique called garbage collection, in
which pieces of computer code that are not needed by a running computation are
automatically removed from the computer’s random access memory.
He developed the technique in 1959 and added it to Lisp. That technique is now
routinely used in Java and other programming languages.
His M.I.T. work also led to fundamental advances in software and operating
systems. In one, he was instrumental in developing the first time-sharing system
for mainframe computers.
The power of that invention would come to shape Dr. McCarthy’s worldview to such
an extent that when the first personal computers emerged with local computing
and storage in the 1970s, he belittled them as toys.
Rather, he predicted, wrongly, that in the future everyone would have a
relatively simple and inexpensive computer terminal in the home linked to a
shared, centralized mainframe and use it as an electronic portal to the worlds
of commerce and news and entertainment media.
Dr. McCarthy, who taught briefly at Stanford in the early 1950s, returned there
in 1962 and in 1964 became the founding director of the Stanford Artificial
Intelligence Laboratory, or SAIL. Its optimistic, space-age goal, with financial
backing from the Pentagon, was to create a working artificial intelligence
system within a decade.
Years later he developed a healthy respect for the challenge, saying that
creating a “thinking machine” would require “1.8 Einsteins and one-tenth the
resources of the Manhattan Project.”
Artificial intelligence is still thought to be far in the future, though
tremendous progress has been made in systems that mimic many human skills,
including vision, listening, reasoning and, in robotics, the movements of limbs.
From the mid-’60s to the mid-’70s, the Stanford lab played a vital role in
creating some of these technologies, including robotics and machine vision.
In 1972, the laboratory drew national attention when Stewart Brand, the founder
of The Whole Earth Catalog, wrote about it in Rolling Stone magazine under the
headline “SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums.”
The article evoked the esprit de corps of a group of researchers who had been
freed to create their own virtual worlds, foreshadowing the emergence of
cyberspace. “Ready or not, computers are coming to the people,” Mr. Brand wrote.
Dr. McCarthy had begun inviting the Homebrew Computer Club, a Silicon Valley
hobbyist group, to meet at the Stanford lab. Among its growing membership were
Steven P. Jobs and Steven Wozniak, who would go on to found Apple. Mr. Wozniak
designed his first personal computer prototype, the Apple I, to share with his
fellow hobbyists.
But Dr. McCarthy still cast a jaundiced eye on personal computing. In the second
Homebrew newsletter, he suggested the formation of a “Bay Area Home Terminal
Club,” to provide computer access on a shared Digital Equipment computer. He
thought a user fee of $75 a month would be reasonable.
Though Dr. McCarthy would initially miss the significance of the PC, his early
thinking on electronic commerce would influence Mr. Diffie at the Stanford lab.
Drawing on those ideas, Mr. Diffie began thinking about what would replace the
paper personal check in an all-electronic world.
He and two other researchers went on to develop the basic idea of public key
cryptography, which is now the basis of all modern electronic banking and
commerce, providing secure interaction between a consumer and a business.
A chess enthusiast, Dr. McCarthy had begun working on chess-playing computer
programs in the 1950s at Dartmouth. Shortly after joining the Stanford lab, he
engaged a group of Soviet computer scientists in an intercontinental chess match
after he discovered they had a chess-playing computer. Played by telegraph, the
match consisted of four games and lasted almost a year. The Soviet scientists won the match.
John McCarthy was born on Sept. 4, 1927, into a politically engaged family in
Boston. His father, John Patrick McCarthy, was an Irish immigrant and a labor organizer.
His mother, the former Ida Glatt, a Lithuanian Jewish immigrant, was active in
the suffrage movement. Both parents were members of the Communist Party. The
family later moved to Los Angeles in part because of John’s respiratory problems.
He entered the California Institute of Technology in 1944 and went on to
graduate studies at Princeton, where he was a colleague of John Forbes Nash Jr.,
the Nobel Prize-winning economist and subject of Sylvia Nasar’s book “A
Beautiful Mind,” which was adapted into a movie.
At Princeton, in 1949, he briefly joined the local Communist Party cell, which
had two other members: a cleaning woman and a gardener, he told an interviewer.
But he quit the party shortly afterward.
In the ’60s, as the Vietnam War escalated, his politics took a conservative turn
as he grew disenchanted with leftist politics.
In 1971 Dr. McCarthy received the Turing Award, the most prestigious given by
the Association for Computing Machinery, for his work in artificial intelligence.
He was awarded the Kyoto Prize in 1988, the National Medal of Science in 1991
and the Benjamin Franklin Medal in 2003.
Dr. McCarthy was married three times. His second wife, Vera Watson, a member of
the American Women’s Himalayan Expedition, died in a climbing accident on
Annapurna in 1978.
Besides his daughter Sarah, of Nevada City, Calif., he is survived by his wife,
Carolyn Talcott, of Stanford; another daughter, Susan McCarthy, of San
Francisco; and a son, Timothy, of Stanford.
He remained an independent thinker throughout his life. Some years ago, one of
his daughters presented him with a license plate bearing one of his favorite
aphorisms: “Do the arithmetic or be doomed to talk nonsense.”
Computer Wins on ‘Jeopardy!’:
Trivial, It’s Not
February 16, 2011
The New York Times
By JOHN MARKOFF
YORKTOWN HEIGHTS, N.Y. — In the end, the humans on “Jeopardy!” surrendered meekly.
Facing certain defeat at the hands of a room-size I.B.M. computer on Wednesday
evening, Ken Jennings, famous for winning 74 games in a row on the TV quiz show,
acknowledged the obvious. “I, for one, welcome our new computer overlords,” he
wrote on his video screen, borrowing a line from a “Simpsons” episode.
From now on, if the answer is “the computer champion on ‘Jeopardy!,’ ” the
question will be, “What is Watson?”
For I.B.M., the showdown was not merely a well-publicized stunt and a $1 million
prize, but proof that the company has taken a big step toward a world in which
intelligent machines will understand and respond to humans, and perhaps
inevitably, replace some of them.
Watson, specifically, is a “question answering machine” of a type that
artificial intelligence researchers have struggled with for decades — a computer
akin to the one on “Star Trek” that can understand questions posed in natural
language and answer them.
Watson showed itself to be imperfect, but researchers at I.B.M. and other
companies are already developing uses for Watson’s technologies that could have
significant impact on the way doctors practice and consumers buy products.
“Cast your mind back 20 years and who would have thought this was possible?”
said Edward Feigenbaum, a Stanford University computer scientist and a pioneer
in the field.
In its “Jeopardy!” project, I.B.M. researchers were tackling a game that
requires not only encyclopedic recall, but the ability to untangle convoluted
and often opaque statements, a modicum of luck, and quick, strategic button pushing.
The contest, which was taped in January here at the company’s T. J. Watson
Research Laboratory before an audience of I.B.M. executives and company clients,
played out in three televised episodes concluding Wednesday. At the end of the
first day, Watson was in a tie with Brad Rutter, another ace human player, at
$5,000 each, with Mr. Jennings trailing with $2,000.
But on the second day, Watson went on a tear. By night’s end, Watson had a
commanding lead with a total of $35,734, compared with Mr. Rutter’s $10,400 and
Mr. Jennings’ $4,800.
But victory was not cemented until late in the third match, when Watson was in
Nonfiction. “Same category for $1,200,” it said in a manufactured tenor, and
lucked into a Daily Double. Mr. Jennings grimaced.
Even later in the match, however, had Mr. Jennings won another key Daily Double
it might have come down to Final Jeopardy, I.B.M. researchers acknowledged.
The final tally was $77,147 to Mr. Jennings’ $24,000 and Mr. Rutter’s $21,600.
More than anything, the contest was a vindication for the academic field of
computer science, which began with great promise in the 1960s with the vision of
creating a thinking machine and which became the laughingstock of Silicon Valley
in the 1980s, when a series of heavily funded start-up companies went bankrupt.
Despite its intellectual prowess, Watson was by no means omniscient. On Tuesday
evening during Final Jeopardy, the category was U.S. Cities and the clue was:
“Its largest airport is named for a World War II hero; its second largest for a
World War II battle.”
Watson drew guffaws from many in the television audience when it responded “What is Toronto?????”
The string of question marks indicated that the system had very low confidence
in its response, I.B.M. researchers said, but because it was Final Jeopardy, it
was forced to give a response. The machine did not suffer much damage. It had
wagered just $947 on its result.
“We failed to deeply understand what was going on there,” said David Ferrucci,
an I.B.M. researcher who led the development of Watson. “The reality is that
there’s lots of data where the title is U.S. cities and the answers are
countries, European cities, people, mayors. Even though it says U.S. cities, we
had very little confidence that that’s the distinguishing feature.”
The researchers also acknowledged that the machine had benefited from its speed on the buzzer.
Both Mr. Jennings and Mr. Rutter are accomplished at anticipating the light that
signals it is possible to “buzz in,” and can sometimes get in with virtually
zero lag time. The danger is to buzz too early, in which case the contestant is
penalized and “locked out” for roughly a quarter of a second.
Watson, on the other hand, does not anticipate the light, but has a weighted
scheme that allows it, when it is highly confident, to buzz in as quickly as 10
milliseconds, making it very hard for humans to beat. When it was less
confident, it buzzed more slowly. In the second round, Watson beat the others to
the buzzer in 24 out of 30 Double Jeopardy questions.
“It sort of wants to get beaten when it doesn’t have high confidence,” Dr.
Ferrucci said. “It doesn’t want to look stupid.”
Both human players said that Watson’s button pushing skill was not necessarily
an unfair advantage. “I beat Watson a couple of times,” Mr. Rutter said.
When Watson did buzz in, it made the most of it. Showing the ability to parse
language, it responded to, “A recent best seller by Muriel Barbery is called
‘This of the Hedgehog,’ ” with “What is Elegance?”
It showed its facility with medical diagnosis. With the answer: “You just need a
nap. You don’t have this sleep disorder that can make sufferers nod off while
standing up,” Watson replied, “What is narcolepsy?”
The coup de grâce came with the answer, “William Wilkenson’s ‘An Account of the
Principalities of Wallachia and Moldavia’ inspired this author’s most famous
novel.” Mr. Jennings wrote, correctly, Bram Stoker, but realized he could not
catch up with Watson’s winnings and wrote out his surrender.
Both players took the contest and its outcome philosophically.
“I had a great time and I would do it again in a heartbeat,” said Mr. Jennings.
“It’s not about the results; this is about being part of the future.”
For I.B.M., the future will happen very quickly, company executives said. On
Thursday it plans to announce that it will collaborate with Columbia University
and the University of Maryland to create a physician’s assistant service that
will allow doctors to query a cybernetic assistant. The company also plans to
work with Nuance Communications Inc. to add voice recognition to the physician’s
assistant, possibly making the service available in as little as 18 months.
“I have been in medical education for 40 years and we’re still a very
memory-based curriculum,” said Dr. Herbert Chase, a professor of clinical
medicine at Columbia University who is working with I.B.M. on the physician’s
assistant. “The power of Watson-like tools will cause us to reconsider what it
is we want students to do.”
I.B.M. executives also said they are in discussions with a major consumer
electronics retailer to develop a version of Watson, named after I.B.M.’s
founder, Thomas J. Watson, that would be able to interact with consumers on a
variety of subjects like buying decisions and technical support.
Dr. Ferrucci sees none of the fears that have been expressed by theorists and
science fiction writers about the potential of computers to usurp humans.
“People ask me if this is HAL,” he said, referring to the computer in “2001: A
Space Odyssey.” “HAL’s not the focus, the focus is on the computer on ‘Star
Trek,’ where you have this intelligent information-seeking dialog, where you can
ask follow-up questions and the computer can look at all the evidence and tries
to ask follow-up questions. That’s very cool.”
Ken Olsen, Who Built DEC Into a Power,
Dies at 84
February 7, 2011
The New York Times
By GLENN RIFKIN
Ken Olsen, who helped reshape the computer industry as a founder of the
Digital Equipment Corporation, at one time the world’s second-largest computer
company, died on Sunday. He was 84.
His family announced the death but declined to provide further details. He had
recently lived with a daughter in Indiana and had been a longtime resident of
Lincoln, Mass.
Mr. Olsen, who was proclaimed “America’s most successful entrepreneur” by
Fortune magazine in 1986, built Digital on $70,000 in seed money, founding it
with a partner in 1957 in the small Boston suburb of Maynard, Mass. With Mr.
Olsen as its chief executive, it grew to employ more than 120,000 people at
operations in more than 95 countries, surpassed in size only by I.B.M.
At its peak, in the late 1980s, Digital had $14 billion in sales and ranked
among the most profitable companies in the nation.
But its fortunes soon declined after Digital began missing out on some critical
market shifts, particularly toward the personal computer. Mr. Olsen was
criticized as autocratic and resistant to new trends. “The personal computer
will fall flat on its face in business,” he said at one point. And in July 1992,
the company’s board forced him to resign.
Six years later, Digital, or DEC, as the company was known, was acquired by the
Compaq Computer Corporation for $9.6 billion.
But for 35 years the enigmatic Mr. Olsen oversaw an expanding technology giant
that produced some of the computer industry’s breakthrough ideas.
In a tribute to him in 2006, Bill Gates, the Microsoft co-founder, called Mr.
Olsen “one of the true pioneers of computing,” adding, “He was also a major
influence on my life.”
Mr. Gates traced his interest in software to his first use of a DEC computer as
a 13-year-old. He and Microsoft’s other founder, Paul Allen, created their first
personal computer software on a DEC PDP-10 computer.
In the 1960s, Digital built small, powerful and elegantly designed
“minicomputers,” which formed the basis of a lucrative new segment of the
computer marketplace. Though hardly “mini” by today’s standards, the computer
became a favorite alternative to the giant, multimillion-dollar mainframe
computers sold by I.B.M. to large corporate customers. The minicomputer found a
market in research laboratories, engineering companies and other professions
requiring heavy computer use.
In time, several minicomputer companies sprang up around Digital and thrived,
forming the foundation of the Route 128 technology corridor near Boston.
Digital also spawned a generation of computing talent, lured by an open
corporate culture that fostered a free flow of ideas. A frequently rumpled
outdoorsman who preferred flannel shirts to business suits, Mr. Olsen, a brawny
man with piercing blue eyes, shunned publicity and ran the company as a large,
sometimes contentious family.
Many within the industry assumed that Digital, with its stellar engineering
staff, would be the logical company to usher in the age of personal computers,
but Mr. Olsen was openly skeptical of the desktop machines. He thought of them
as “toys” used for playing video games.
Still, most people in the industry say Mr. Olsen’s legacy is secure. “Ken Olsen
is the father of the second generation of computing,” said George Colony, who is
chief executive of Forrester Research and a longtime industry watcher, “and that
makes him one of the major figures in the history of this business.”
Kenneth Harry Olsen was born in Bridgeport, Conn., on Feb. 20, 1926, and grew up
with his three siblings in nearby Stratford. His parents, Oswald and Elizabeth
Svea Olsen, were children of Norwegian immigrants.
Mr. Olsen and his younger brother Stan lived their passion for electronics in
the basement of their Stratford home, inventing gadgets and repairing broken
radios. After a stint in the Navy at the end of World War II, Mr. Olsen headed
to the Massachusetts Institute of Technology, where he received bachelor’s and
master’s degrees in electrical engineering. He took a job at M.I.T.’s new
Lincoln Laboratory in 1950 and worked under Jay Forrester, who was doing
pioneering work in the nascent days of interactive computing.
In 1957, itching to leave academia, Mr. Olsen, then 31, recruited a Lincoln Lab
colleague, Harlan Anderson, to help him start a company. For financing they
turned to Georges F. Doriot, a renowned Harvard Business School professor and
venture capitalist. According to Mr. Colony, Digital became the first successful
venture-backed company in the computer industry. Mr. Anderson left the company
shortly afterward, leaving Mr. Olsen to put his stamp on it for more than three decades.
In Digital’s often confusing management structure, Mr. Olsen was the dominant
figure who hired smart people, gave them responsibility and expected them “to
perform as adults,” said Edgar Schein, who taught organizational behavior at
M.I.T. and consulted with Mr. Olsen for 25 years. “Lo and behold,” he said,
“they performed magnificently.”
One crucial employee was Gordon Bell, a DEC vice president and the technical
brains behind many of Digital’s most successful machines. “All the alumni think
of Digital fondly and remember it as a great place to work,” said Mr. Bell, who
went on to become a principal researcher at Microsoft.
After he left Digital, Mr. Olsen began another start-up, Advanced Modular
Solutions, but it eventually failed. In retirement, he helped found the Ken
Olsen Science Center at Gordon College, a Christian school in Wenham, Mass.,
where an archive of his papers and Digital’s history is housed. His family
announced his death through the college.
Mr. Olsen’s wife of 59 years, Eeva-Liisa Aulikki Olsen, died in March 2009. A
son, Glenn, also died. Mr. Olsen’s survivors include a daughter, Ava Memmen,
another son, James; his brother Stan; and five grandchildren.
Advances Offer Path
to Shrink Computer Chips Again
August 30, 2010
The New York Times
By JOHN MARKOFF
Scientists at Rice University and Hewlett-Packard are reporting this week
that they can overcome a fundamental barrier to the continued rapid
miniaturization of computer memory that has been the basis for the consumer electronics revolution.
In recent years the limits of physics and finance faced by chip makers had
loomed so large that experts feared a slowdown in the pace of miniaturization
that would act like a brake on the ability to pack ever more power into ever
smaller devices like laptops, smartphones and digital cameras.
But the new announcements, along with competing technologies being pursued by
companies like IBM and Intel, offer hope that the brake will not be applied any time soon.
In one of the two new developments, Rice researchers are reporting in Nano
Letters, a journal of the American Chemical Society, that they have succeeded in
building reliable small digital switches — an essential part of computer memory
— that could shrink to a significantly smaller scale than is possible using conventional methods.
More important, the advance is based on silicon oxide, one of the basic building
blocks of today’s chip industry, thus easing a move toward commercialization.
The scientists said that PrivaTran, a Texas startup company, has made
experimental chips using the technique that can store and retrieve information.
These chips store only 1,000 bits of data, but if the new technology fulfills
the promise its inventors see, single chips that store as much as today’s
highest capacity disk drives could be possible in five years. The new method
involves filaments as thin as five nanometers in width — thinner than what the
industry hopes to achieve by the end of the decade using standard techniques.
The initial discovery was made by Jun Yao, a graduate researcher at Rice. Mr.
Yao said he stumbled on the switch by accident.
Separately, H.P. is to announce on Tuesday that it will enter into a commercial
partnership with a major semiconductor company to produce a related technology
that also has the potential of pushing computer data storage to astronomical
densities in the next decade. H.P. and the Rice scientists are making what are
called memristors, or memory resistors, switches that retain information without
a source of power.
“There are a lot of new technologies pawing for attention,” said Richard
Doherty, president of the Envisioneering Group, a consumer electronics market
research company in Seaford, N.Y. “When you get down to these scales, you’re
talking about the ability to store hundreds of movies on a single chip.”
The announcements are significant in part because they indicate that the chip
industry may find a way to preserve the validity of Moore’s Law. Formulated in
1965 by Gordon Moore, a co-founder of Intel, the law is an observation that the
industry has the ability to roughly double the number of transistors that can be
printed on a wafer of silicon every 18 months.
That has been the basis for vast improvements in technological and economic
capacities in the past four and a half decades. But industry consensus had
shifted in recent years to a widespread belief that the end of physical progress
in shrinking the size of modern semiconductors was imminent. Chip makers are now
confronted by such severe physical and financial challenges that they are
spending $4 billion or more for each new advanced chip-making factory.
I.B.M., Intel and other companies are already pursuing a competing technology
called phase-change memory, which uses heat to transform a glassy material from
an amorphous state to a crystalline one and back.
Phase-change memory has been the most promising technology for so-called flash
chips, which retain information after power is switched off.
The flash memory industry has used a number of approaches to keep up with
Moore’s law without having a new technology. But it is as if the industry has
been speeding toward a wall, without a way to get over it.
To keep up speed on the way to the wall, the industry has begun building
three-dimensional chips by stacking circuits on top of one another to increase
densities. It has also found ways to get single transistors to store more
information. But these methods would not be enough in the long run.
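The second trick, getting a single transistor to store more information, is the multi-level-cell approach used in flash: a cell whose electronics can reliably distinguish 2^b charge levels holds b bits. A minimal illustration:

```python
import math

# Bits stored per cell as a function of how many distinct charge levels
# the cell's electronics can reliably tell apart (multi-level-cell flash).
def bits_per_cell(levels: int) -> float:
    return math.log2(levels)

for levels in (2, 4, 8, 16):
    print(levels, "levels ->", bits_per_cell(levels), "bits")
```

Each extra bit requires doubling the number of levels a cell must reliably distinguish, which is one reason such gains could not substitute for new device physics in the long run.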
The new technology being pursued by H.P. and Rice is considered a dark horse
by industry powerhouses like Intel, I.B.M., Numonyx and Samsung. Researchers at
those competing companies said that the phenomenon exploited by the Rice
scientists had been seen in the literature as early as the 1960s.
“This is something that I.B.M. studied before and which is still in the research
stage,” said Charles Lam, an I.B.M. specialist in semiconductor memories.
H.P. has for several years been making claims that its memristor technology can
compete with traditional transistors, but the company will report this week that
it is now more confident that its technology can compete commercially.
In contrast, the Rice advance must still be proved. Acknowledging that
researchers must overcome skepticism because silicon oxide has been known as an
insulator by the industry until now, Jim Tour, a nanomaterials specialist at
Rice, said he believed the industry would have to look seriously at the research
team’s new approach.
“It’s a hard sell, because at first it’s obvious it won’t work,” he said. “But
my hope is that this is so simple they will have to put it in their portfolio to …”
Advances Offer Path to Shrink Computer Chips Again, NYT, 30.8.2010,
Max Palevsky, a Pioneer in Computers, Dies at 85
May 6, 2010
The New York Times
By WILLIAM GRIMES
Max Palevsky, a pioneer in the computer industry and an early investor in the
computer-chip giant Intel who used his fortune to back Democratic presidential
candidates and to amass an important collection of American Arts and Crafts
furniture, died on Wednesday at his home in Beverly Hills, Calif. He was 85.
The cause was heart failure, said Angela Kaye, his assistant.
Mr. Palevsky became intrigued by computers in the early 1950s when he heard the
mathematician John von Neumann lecture at the California Institute of Technology
on the potential for computer technology. Trained in symbolic logic and
mathematics, Mr. Palevsky was studying and teaching philosophy at the University
of California, Los Angeles, but followed his hunch and left the academy.
After working on logic design for the Bendix Corporation’s first computer, in
1957 he joined the Packard Bell Computer Corporation, a new division of the
electronics company Packard Bell.
In 1961, he and 11 colleagues from Packard Bell founded Scientific Data Systems
to build small and medium-size business computers, a market niche they believed
was being ignored by giants like I.B.M. The formula worked, and in 1969 Xerox
bought the company for $1 billion, with Mr. Palevsky taking home a 10 percent
share of the sale.
In 1968 he applied some of that money to financing a small start-up company in
Santa Clara to make semiconductors. It became Intel, today the world’s largest
producer of computer chips.
A staunch liberal, Mr. Palevsky first ventured into electoral politics in the
1960s when he became involved in the journalist Tom Braden’s race for lieutenant
governor of California and Robert F. Kennedy’s campaign for the presidency.
Mr. Palevsky pursued politics with zeal and whopping contributions of money. He
bet heavily on Senator George McGovern in his 1972 run for the presidency,
donating more than $300,000 to the campaign.
His financial support and organizing work for Tom Bradley, a Los Angeles city
councilman, propelled Mr. Bradley, in 1973, to the first of his five terms as
mayor. During the campaign, Mr. Palevsky recruited Gray Davis as Mr. Bradley’s
chief fund-raiser, opening the door to a political career for Mr. Davis that
later led to the governorship.
Mr. Palevsky later became disenchanted with the power of money in the American
political system and adopted campaign finance reform as his pet issue.
Overcoming his lifelong aversion to Republican candidates, he raised money for
Senator John McCain of Arizona, an advocate of campaign finance reform, during
the 2000 presidential primary. Mr. Palevsky also became a leading supporter of
the conservative-backed Proposition 25, a state ballot initiative in 2000 that
would limit campaign contributions by individuals.
Mr. Palevsky donated $1 million to the Proposition 25 campaign, his largest
political contribution ever. “I am making this million-dollar contribution in
hopes that I will never again legally be allowed to write huge checks to
California political candidates,” he told Newsweek.
His support put him in direct conflict with Governor Davis, the state Democratic
Party and labor unions, whose combined efforts to rally voter support ended in
the measure’s defeat.
Max Palevsky was born on July 24, 1924, in Chicago. His father, a house painter
who had immigrated from Russia, struggled during the Depression, and Mr.
Palevsky described his childhood as “disastrous.”
During World War II he served with the Army Air Corps doing electronics repair
work on airplanes in New Guinea. On returning home, he attended the University
of Chicago on the G.I. Bill, earning bachelor’s degrees in mathematics and
philosophy in 1948. He went on to do graduate work in mathematics and philosophy
at the University of Chicago, the University of California, Berkeley, and the
University of California, Los Angeles.
Money allowed him to indulge his interests. He collected Modernist art, but in
the early 1970s, while strolling through SoHo in Manhattan, he became fixated on
a desk by the Arts and Crafts designer Gustav Stickley. Mr. Palevsky amassed an
important collection of Arts and Crafts furniture and Japanese woodcuts, which
he donated to the Los Angeles County Museum of Art.
He also plunged into film production. He helped finance Terrence Malick’s
“Badlands” in 1973, and, with the former Paramount executive Peter Bart,
produced “Fun With Dick and Jane” in 2005, and, in 1977, “Islands in the
Stream.” In 1970 he rescued the foundering Rolling Stone magazine by buying a
substantial block of its stock.
Mr. Palevsky married and divorced five times. He is survived by a sister, Helen
Futterman of Los Angeles; a daughter, Madeleine Moskowitz of Los Angeles; four
sons: Nicholas, of Bangkok, Alexander and Jonathan, both of Los Angeles, and
Matthew, of Brooklyn; and four grandchildren.
Despite his groundbreaking work in the computer industry, Mr. Palevsky remained
skeptical about the cultural influence of computer technology. In a catalog
essay for an Arts and Crafts exhibition at the Los Angeles County Museum of Art
in 2005, he lamented “the hypnotic quality of computer games, the substitution
of a Google search for genuine inquiry, the instant messaging that has replaced …”
He meant it too. “I don’t own a computer,” he told The Los Angeles Times in
2008. “I don’t own a cellphone, I don’t own any electronics. I do own a radio.”
Max Palevsky, a Pioneer in Computers, Dies at 85, NYT, 6.5.2010,
H. Edward Roberts, PC Pioneer, Dies at 68
April 2, 2010
The New York Times
By STEVE LOHR
Not many people in the computer world remembered H. Edward Roberts, not after
he walked away from the industry more than three decades ago to become a country
doctor in Georgia. Bill Gates remembered him, though.
As Dr. Roberts lay dying last week in a hospital in Macon, Ga., suffering from
pneumonia, Mr. Gates flew down to be at his bedside.
Mr. Gates knew what many had forgotten: that Dr. Roberts had made an early and
enduring contribution to modern computing. He created the MITS Altair, the first
inexpensive general-purpose microcomputer, a device that could be programmed to
do all manner of tasks. For that achievement, some historians say Dr. Roberts
deserves to be recognized as the inventor of the personal computer.
For Mr. Gates, the connection to Dr. Roberts was also personal. It was writing
software for the MITS Altair that gave Mr. Gates, a student at Harvard at the
time, and his Microsoft partner, Paul G. Allen, their start. Later, they moved
to Albuquerque, where Dr. Roberts had set up shop.
Dr. Roberts died Thursday at the Medical Center of Middle Georgia, his son
Martin said. He was 68.
When the Altair was introduced in the mid-1970s, personal computers — then
called microcomputers — were mainly intriguing electronic gadgets for hobbyists,
the sort of people who tinkered with ham radio kits.
Dr. Roberts, it seems, was a classic hobbyist entrepreneur. He left his mark on
computing, built a nice little business, sold it and moved on — well before
personal computers moved into the mainstream of business and society.
Mr. Gates, as history proved, had far larger ambitions.
Over the years, there was some lingering animosity between the two men, and Dr.
Roberts pointedly kept his distance from industry events — like the 20th
anniversary celebration in Silicon Valley of the introduction of the I.B.M. PC
in 1981, which signaled the corporate endorsement of PCs.
But in recent months, after learning that Dr. Roberts was ill, Mr. Gates made a
point of reaching out to his former boss and customer. Mr. Gates sent Dr.
Roberts a letter last December and followed up with phone calls, another son,
Dr. John David Roberts, said. Eight days ago, Mr. Gates visited the elder Dr.
Roberts at his bedside in Macon.
“Any past problems between those two were long since forgotten,” said Dr. John
David Roberts, who had accompanied Mr. Gates to the hospital. He added that Mr.
Allen, the other Microsoft founder, had also called the elder Dr. Roberts
frequently in recent months.
On his Web site, Mr. Gates and Mr. Allen posted a joint statement, saying they
were saddened by the death of “our friend and early mentor.”
“Ed was willing to take a chance on us — two young guys interested in computers
long before they were commonplace — and we have always been grateful to him,”
the statement said.
When the small MITS Altair appeared on the January 1975 cover of Popular
Electronics, Mr. Gates and Mr. Allen plunged into writing a version of the Basic
programming language that could run on the machine.
Mr. Gates dropped out of Harvard, and Mr. Allen left his job at Honeywell in
Boston. The product they created for Dr. Roberts’s machine, Microsoft Basic, was
the beginning of what would become the world’s largest software company and
would make its founders billionaires many times over.
MITS was the kingpin of the fledgling personal computer business only briefly.
In 1977, Mr. Roberts sold his company. He walked away a millionaire. But as a
part of the sale, he agreed not to design computers for five years, an eternity
in computing. It was a condition that Mr. Roberts, looking for a change, accepted.
He first invested in farmland in Georgia. After a few years, he switched course
and decided to revive a childhood dream of becoming a physician, earning his
medical degree in 1986 from Mercer University in Macon. He became a general
practitioner in Cochran, 35 miles northwest of the university.
In Albuquerque, Dr. Roberts, a burly, 6-foot-4 former Air Force officer, often
clashed with Mr. Gates, the skinny college dropout. Mr. Gates was “a very bright
kid, but he was a constant headache at MITS,” Dr. Roberts said in an interview
with The New York Times at his office in 2001.
“You couldn’t reason with him,” he added. “He did things his way or not at all.”
His former MITS colleagues recalled that Dr. Roberts could be hardheaded as
well. “Unlike the rest of us, Bill never backed down from Ed Roberts face to
face,” David Bunnell, a former MITS employee, said in 2001. “When they
disagreed, sparks flew.”
Over the years, people have credited others with inventing the personal
computer, including the Xerox Palo Alto Research Center, Apple and I.B.M. But
Paul E. Ceruzzi, a technology historian at the Smithsonian Institution, wrote in
“A History of Modern Computing” (MIT Press, 1998) that “H. Edward Roberts, the
Altair’s designer, deserves credit as the inventor of the personal computer.”
Mr. Ceruzzi noted the “utter improbability and unpredictability” of having one
of the most significant inventions of the 20th century come to life from such a
seemingly obscure origin. “But Albuquerque it was,” Mr. Ceruzzi wrote, “for it
was only at MITS that the technical and social components of personal computing …”
H. Edward Roberts was born in Miami on Sept. 13, 1941. His father, Henry Melvin
Roberts, ran a household appliance repair service, and his mother, Edna Wilcher
Roberts, was a nurse. As a young man, he wanted to be a doctor and, in fact,
became intrigued by electronics working with doctors at the University of Miami
who were doing experimental heart surgery. He built the electronics for a
heart-lung machine. “That’s how I got into it,” Dr. Roberts recalled in 2001.
So he abandoned his intended field and majored in electrical engineering at
Oklahoma State University. Then, he worked on a room-size I.B.M. computer. But
the power of computing, Dr. Roberts recalled, “opened up a whole new world. And
I began thinking, What if you gave everyone a computer?”
In addition to his sons Martin, of Glenwood, Ga., and John David, of Eastman,
Ga., Dr. Roberts is survived by his mother, Edna Wilcher Roberts, of Dublin,
Ga., his wife, Rosa Roberts of Cochran; his sons Edward, of Atlanta, and Melvin
and Clark, both of Athens, Ga.; his daughter, Dawn Roberts, of Warner Robins,
Ga.; three grandchildren and one great-grandchild.
His previous two marriages, to Donna Mauldin Roberts and Joan C. Roberts, ended
in divorce.
His sons said Dr. Roberts never gave up his love for making things, for
tinkering and invention. He was an accomplished woodworker, making furniture for
his household, family and friends. He made a Star Wars-style light saber for a
neighbor’s son, using light-emitting diodes. And several years ago he designed
his own electronic medical records software, though he never tried to market it,
his son Dr. Roberts said.
“Once he figured something out,” he added, “he was on to the next thing.”
H. Edward Roberts, PC Pioneer, Dies at 68, NYT, 2.5.2010,
Robert J. Spinrad, a Pioneer in Computing, Dies at 77
September 7, 2009
The New York Times
By JOHN MARKOFF
Robert J. Spinrad, a computer designer who carried out pioneering
work in scientific automation at Brookhaven National Laboratory and who later
was director of Xerox’s Palo Alto Research Center while the personal computing
technology invented there in the 1970s was commercialized, died on Wednesday in
Palo Alto, Calif. He was 77.
The cause was Lou Gehrig’s disease, his wife, Verna, said.
Trained in electrical engineering before computer science was a widely taught
discipline, Dr. Spinrad built his own computer from discarded telephone
switching equipment while he was a student at Columbia.
He said that while he was proud of his creation, at the time most people had no
interest in the machines. “I may as well have been talking about the study of
Kwakiutl Indians, for all my friends knew,” he told a reporter for The New York
Times in 1983.
At Brookhaven he would design a room-size, tube-based computer he named Merlin,
as part of an early generation of computer systems used to automate scientific
experimentation. He referred to the machine, which was built before transistors
were widely used in computers, as “the last of the dinosaurs.”
After arriving at Brookhaven, Dr. Spinrad spent a summer at Los Alamos National
Laboratory, where he learned about scientific computer design by studying an
early machine known as Maniac, designed by Nicholas Metropolis, a physicist. Dr.
Spinrad’s group at Brookhaven developed techniques for using computers to run
experiments and to analyze and display data as well as to control experiments
interactively in response to earlier measurements.
Later, while serving as the head of the Computer Systems Group at Brookhaven,
Dr. Spinrad wrote a cover article on laboratory automation for the Oct. 6, 1967,
issue of Science magazine.
“He was really the father of modern laboratory automation,” said Joel Birnbaum,
a physicist who designed computers at both I.B.M. and Hewlett-Packard. “He had a
lot of great ideas about how you connected computers to instruments. He realized
that it wasn’t enough to just build a loop between the computer and the
apparatus, but that the most important piece of the apparatus was the …”
After leaving Brookhaven, Dr. Spinrad joined Scientific Data Systems in Los
Angeles as a computer designer and manager. When the company was bought by the
Xerox Corporation in an effort to compete with I.B.M., he participated in
Xerox’s decision to put a research laboratory next to the campus of Stanford.
Xerox’s Palo Alto Research Center pioneered the technology that led directly to
the modern personal computer and office data networks.
Taking over as director of the laboratory in 1978, Dr. Spinrad oversaw a period
when the laboratory’s technology was commercialized, including the first modern
personal computer, the ethernet local area network and the laser printer.
However, as a copier company, Xerox was never a comfortable fit for the emerging
computing world, and many of the laboratory researchers left Xerox, often taking
their innovations with them.
At the center, Dr. Spinrad became adept at bridging the cultural gulf between
the company’s button-down East Coast headquarters and its unruly and innovative
West Coast laboratory.
Robert Spinrad was born in Manhattan on March 20, 1932. He received an
undergraduate degree in electrical engineering from Columbia and a Ph.D. from
the Massachusetts Institute of Technology.
In addition to his wife, Verna, he is survived by two children, Paul, of San
Francisco, and Susan Spinrad Esterly, of Palo Alto, and three grandchildren.
Flying between Norwalk, Conn., and Palo Alto frequently, Dr. Spinrad once
recalled how he felt like Superman in reverse because he would invariably step
into the airplane’s lavatory to change into a suit for his visit to the company.
Robert Spinrad, a Pioneer in Computing, Dies at 77, NYT, 7.9.2009,
Intel Adopts an Identity in Software
May 25, 2009
The New York Times
By ASHLEE VANCE
SANTA CLARA, Calif. — Intel has worked hard and spent a lot of
money over the years to shape its image: It is the company that celebrates its
quest to make computer chips ever smaller, faster and cheaper with a quick
five-note jingle at the end of its commercials.
But as Intel tries to expand beyond the personal computer chip business, it is
changing in subtle ways. For the first time, its long unheralded software
developers, more than 3,000 of them, have stolen some of the spotlight from its
hardware engineers. These programmers find themselves at the center of Intel’s
forays into areas like mobile phones and video games.
The most attention-grabbing element of Intel’s software push is a version of the
open-source Linux operating system called Moblin. It represents a direct assault
on the Windows franchise of Microsoft, Intel’s longtime partner.
“This is a very determined, risky effort on Intel’s part,” said Mark
Shuttleworth, the chief executive of Canonical, which makes another version of
Linux called Ubuntu.
The Moblin software resembles Windows or Apple’s Mac OS X to a degree, handling
the basic functions of running a computer. But it has a few twists as well that
Intel says make it better suited for small mobile devices.
For example, Moblin fires up and reaches the Internet in about seven seconds,
then displays a novel type of start-up screen. People will find their
appointments listed on one side of the screen, along with their favorite
programs. But the bulk of the screen is taken up by cartoonish icons that show
things like social networking updates from friends, photos and recently used files.
With animated icons and other quirky bits and pieces, Moblin looks like a fresh
take on the operating system. Some companies hope it will give Microsoft a
strong challenge in the market for the small, cheap laptops commonly known as
netbooks. A polished second version of the software, which is in trials, should
start appearing on a variety of netbooks this summer.
“We really view this as an opportunity and a game changer,” said Ronald W.
Hovsepian, the chief executive of Novell, which plans to offer a customized
version of Moblin to computer makers. Novell views Moblin as a way to extend its
business selling software and services related to Linux.
While Moblin fits netbooks well today, it was built with smartphones in mind.
Those smartphones explain why Intel was willing to needle Microsoft.
Intel has previously tried and failed to carve out a prominent stake in the
market for chips used in smaller computing devices like phones. But the company
says one of its newer chips, called Atom, will solve this riddle and help it
compete against the likes of Texas Instruments and Qualcomm.
The low-power, low-cost Atom chip sits inside most of the netbooks sold today,
and smartphones using the chip could start arriving in the next couple of years.
To make Atom a success, Intel plans to use software for leverage. It needs
Moblin because most of the cellphone software available today runs on chips
whose architecture is different from Atom’s. To make Atom a worthwhile choice
for phone makers, there must be a supply of good software that runs on it.
“The smartphone is certainly the end goal,” said Doug Fisher, a vice president
in Intel’s software group. “It’s absolutely critical for the success of this …”
Though large, Intel’s software group has remained out of the spotlight for
years. Intel considers its software work a silent helping hand for computer makers.
Mostly, the group sells tools that help other software developers take advantage
of features in Intel’s chips. It also offers free consulting services to help
large companies wring the most performance out of their code, in a bid to sell
more chips.
Renee J. James, Intel’s vice president in charge of software, explained, “You
can’t just throw hardware out there into the world.”
Intel declines to disclose its revenue from these tools, but it is a tiny
fraction of the close to $40 billion in sales Intel racks up every year.
Still, the software group is one of the largest at Intel and one of the largest
such organizations at any company.
In the last few years, Intel’s investment in Linux, the main rival to Windows,
has increased. Intel has hired some of the top Linux developers, including Alan
Cox from Red Hat, the leading Linux seller, last year. Intel pays these
developers to improve Linux as a whole and to further the company’s own projects.
“Intel definitely ranks pretty highly when it comes to meaningful
contributions,” Linus Torvalds, who created the core of Linux and maintains the
software, wrote in an e-mail message. “They went from apparently not having much
of a strategy at all to having a rather wide team.”
Intel has also bought software companies. Last year, it acquired OpenedHand, a
company whose work has turned into the base of the new Moblin user interface.
It has also bought a handful of software companies with expertise in gaming and
graphics technology. Such software is meant to create a foundation to support
Intel’s release of new high-powered graphics chips next year. Intel hopes the
graphics products will let it compete better against Nvidia and Advanced Micro
Devices and open up another new business.
Intel tries to play down its competition with Microsoft. Since Moblin is open
source, anyone can pick it up and use it. Companies like Novell will be the ones
actually offering the software to PC makers, while Intel will stay in the
background. Still, Ms. James says that Intel’s relationship with Microsoft has
turned more prickly.
“It is not without its tense days,” she said.
Microsoft says Intel faces serious hurdles as it tries to stake a claim in the
operating system market.
“I think it will introduce some challenges for them just based on our experience
of having built operating systems for 25 years or so,” said James DeBragga, the
general manager of Microsoft’s Windows consumer team.
While Linux started out as a popular choice on netbooks, Microsoft now dominates
the market. Microsoft doubts whether something like Moblin’s glossy interface
will be enough to woo consumers who are used to Windows.
Intel says people are ready for something new on mobile devices, which are
geared more to the Internet than to running desktop-style programs.
“I am a risk taker,” Ms. James of Intel said. “I have that outlook that if
there’s a possibility of doing something different, we should explore trying it.”
Intel Adopts an Identity in Software, NYT, 25.5.2009,
I.B.M. Unveils Real-Time Software
to Find Trends in Vast Data
May 21, 2009
The New York Times
By ASHLEE VANCE
New software from I.B.M. can suck up huge volumes of data from many sources
and quickly identify correlations within it. The company says it expects the
software to be useful in analyzing finance, health care and even space weather.
Bo Thidé, a scientist at the Swedish Institute of Space Physics, has been
testing an early version of the software as he studies the ways in which things
like gas clouds and particles cast off by the sun can disrupt communications
networks on Earth. The new software, which I.B.M. calls stream processing, makes
it possible for Mr. Thidé and his team of researchers to gather and analyze vast
amounts of information at a record pace.
“For us, there is no chance in the world that you can think about storing data
and analyzing it tomorrow,” Mr. Thidé said. “There is no tomorrow. We need a
smart system that can give you hints about what is happening out there right now.”
I.B.M., based in Armonk, N.Y., spent close to six years working on the software
and has just moved to start selling a product based on it called System S. The
company expects it to encourage breakthroughs in fields like finance and city
management by helping people better understand patterns in data.
Steven A. Mills, I.B.M.’s senior vice president for software, notes that
financial companies have spent years trying to gain trading edges by sorting
through various sets of information. “The challenge in that industry has not
been ‘Could you collect all the data?’ but ‘Could you collect it all together
and analyze it in real time?’ ” Mr. Mills said.
To that end, the new software harnesses advances in computing and networking
horsepower in a fashion that analysts and customers describe as unprecedented.
Instead of creating separate large databases to track things like currency
movements, stock trading patterns and housing data, the System S software can
meld all of that information together. In addition, it could theoretically then
layer on databases that tracked current events, like news headlines on the
Internet or weather fluctuations, to try to gauge how such factors interplay
with the financial data.
Most computers, of course, can digest large stores of information if given
enough time. But I.B.M. has succeeded in performing very quick analyses on
larger hunks of combined data than most companies are used to handling.
“It’s that combination of size and speed that had yet to be solved,” said Gordon
Haff, an analyst at Illuminata, a technology industry research firm.
Conveniently for I.B.M., the System S software matured in time to match up with
the company’s “Smarter Planet” campaign. I.B.M. has flooded the airwaves with
commercials about using technology to run things like power grids and hospitals.
The company suggests, for example, that a hospital could tap the System S
technology to monitor not only individual patients but also entire patient
databases, as well as medication and diagnostics systems. If all goes according
to plan, the computing systems could alert nurses and doctors to emerging problems.
Analysts say the technology could also provide companies with a new edge as they
grapple with doing business on a global scale.
“With globalization, more and more markets are heading closer to perfect
competition models,” said Dan Olds, an analyst with Gabriel Consulting. “This
means that companies have to get smarter about how they use their data and find
previously unseen opportunities.”
Buying such an advantage from I.B.M. has its price. The company will charge at
least hundreds of thousands of dollars for the software, Mr. Mills said.
I.B.M. Unveils Real-Time Software to Find Trends in Vast Data Sets,
Intel Prepares New Chip
Fortified by Constant Tests
The New York Times
By JOHN MARKOFF
Ore. — Rows and rows of computers in Intel’s labs here relentlessly
torture-tested the company’s new microprocessor for months on end.
But on a recent tour of the labs, John Barton, an Intel vice president in charge
of product testing, acknowledged that he was still feeling anxious about the
possibility of a last-minute, show-stopping flaw.
After all, even the slightest design error in the chip could end up being
enormously costly.
“I’m not sleeping well yet,” Mr. Barton said.
Intel’s Core i7 microprocessor, code-named Nehalem, which goes on sale Monday,
has already received glowing technical reviews. But it is impossible for Mr.
Barton to predict exactly how the chip will function in thousands of computers
running tens of thousands of programs.
The design and testing of an advanced microprocessor chip is among the most
complex of all human endeavors. To ensure that its products are as error-free as
possible, Intel, based in Santa Clara, Calif., now spends a half-billion dollars
annually in its factories around the world, testing the chips for more than a
year before selling them.
There is good reason for the caution. In 1994, the giant chip maker was humbled
by a tiny error in the floating point calculation unit of its Pentium chips. The
flaw, which led to an embarrassing recall, prompted a wrenching cultural shift
at the company, which had minimized the testing requirements of the Pentium.
A series of bugs last year in the Barcelona microprocessor from Intel’s main
competitor, Advanced Micro Devices, was equally devastating.
A.M.D., based in Sunnyvale, Calif., had been making steady progress, offering
new processor technologies long before Intel and handily winning the
power-efficiency war. But the quality problems that slammed A.M.D. cost the
company revenue for several quarters and landed it in a hole from which it has
yet to dig out.
If Nehalem is a hit for Intel, it will represent vindication for Andrew Grove,
the company’s former chief, who acknowledged that he had been blindsided by the
Pentium problems and then set out to reform the company.
The Pentium bug badly damaged Intel’s brand with consumers. The company quickly
became a laughingstock as jokes made the rounds of the Internet: Q: Know how the
Republicans can cut taxes and pay the deficit at the same time? A: Their
spreadsheet runs on a Pentium computer.
After initially appearing to stonewall, Intel reversed course and issued an
apology while setting aside $420 million to pay for the recall.
The company put Mr. Grove’s celebrated remark about the situation on key chains:
“Bad companies are destroyed by crisis. Good companies survive them. Great
companies are improved by them.”
Those words weigh heavily on the shoulders of Mr. Barton and his colleagues — as
does the pressure from Intel’s customers around the world whose very survival is
based on the ability to create new products with the company’s chips at their
heart. Nehalem is initially aimed at desktop computers, but the company hopes it
will eventually be found in everything from powerful servers to laptops.
“Our business model is now predicated on saying to the consumer, ‘You will get a
new set of functionality by a particular date,’ ” Mr. Barton said. “We did get a
new dimension of business pressure that says we can’t take our merry time
turning it out whenever we feel like it.”
The pressure for a successful product is especially intense now as the overall
technology industry faces a serious slump. Intel’s chief executive, Paul S.
Otellini, said last month that the company was getting “mixed signals” from
customers about future spending. Intel’s stock fell 7.7 percent on Friday to
$13.32, a six-year low, in a broad market drop.
With Nehalem, Intel’s designers took the company’s previous generation of chips
and added a host of features, each of which adds complexity and raises the
possibility of unpredictable interactions.
“Now we are hitting systemic complexity,” said Aart de Geus, chief executive of
Synopsys, a Silicon Valley developer of chip design tools. “Things that came
from different angles that used to be independent have become interdependent.”
Trying to define the complexity that Mr. Barton and his team face is itself a
challenge. Even in the late 1970s, chips were being designed that were as
complicated as the street map of a large city.
Mr. Barton’s love affair with the world of electronics began as a child, when he
took apart a walkie-talkie his father had given him and counted its transistors:
a total of seven. The change in his lifetime, he said, has been “mind-boggling.”
Going from the Intel 8088 — the processor used in the IBM PC 27 years ago — to
the Nehalem involves a jump from 29,000 transistors to 731 million, on a silicon
chip roughly the same size.
Mr. Barton equates the two by comparing a city the size of Ithaca, N.Y., to the
continent of Europe. “Ithaca is quite complex in its own right if you think of
all that goes on,” he said. “If we scale up the population to 730 million, we
come to Europe as about the right size. Now take Europe and shrink it until it
all fits in about the same land mass as Ithaca.”
Even given a lifetime, it would be impossible to test more than the smallest
fraction of the total possible "states" into which the Nehalem chip can be put,
configurations that easily outnumber all the atoms in the universe.
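That comparison is less hyperbolic than it sounds. The number of atoms in the observable universe is commonly estimated at around 10^80, and the number of configurations of a device grows as 2^n in its state bits, so a few hundred bits of internal state are already enough to exceed it, as a quick check shows:

```python
import math

ATOMS_IN_UNIVERSE = 1e80  # common order-of-magnitude estimate

# A device with n bits of internal state has 2**n possible configurations.
# How many bits before that count passes the number of atoms?
n = math.ceil(math.log2(ATOMS_IN_UNIVERSE))
print(n)                           # 266
assert 2 ** n > ATOMS_IN_UNIVERSE  # 266 bits of state already suffice
```

A chip with 731 million transistors holds vastly more than 266 bits of state, so exhaustive testing is out of the question.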
Modern designers combat complexity by turning to modular design techniques,
making it possible to simplify drastically what needs to be tested.
“Instead of testing for every possible case, you break up the problem into
smaller pieces,” said G. Dan Hutcheson, chief executive of VLSI Research, a
semiconductor consulting firm.
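The arithmetic behind Mr. Hutcheson's point is stark: checking modules against their interfaces adds their case counts, while testing every cross-module combination multiplies them. A sketch with made-up case counts (purely illustrative, not figures from Intel):

```python
from math import prod

# Hypothetical per-module test-case counts, for illustration only.
module_cases = [1000, 500, 200, 100]

exhaustive = prod(module_cases)  # every cross-module combination
modular = sum(module_cases)      # each module checked at its interface

print(f"{exhaustive:,}")  # 10,000,000,000
print(f"{modular:,}")     # 1,800
```

The catch is that the interfaces between modules must themselves be specified and tested; the savings come from not crossing every internal case of one module with every internal case of another.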
After the Pentium flaw, Intel also fundamentally rethought the way it designed
its processors, trying to increase the chance that its chips would be error-free
even before testing. During the late 1990s it turned to a group of mathematical
theoreticians in the computer science field who had developed advanced
techniques for evaluating hardware and software, known as formal methods.
“For several years Intel hired everyone in the world in formal methods,” said
Pat Lincoln, director of SRI International’s Computer Science Laboratory.
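Formal methods differ from testing in that they establish a property over all inputs rather than over a sample. The idea can be miniaturized: below, a gate-level 4-bit ripple-carry adder is checked against the arithmetic specification over its entire input space. (Exhaustive enumeration works here only because the toy space has 256 cases; real formal verification uses symbolic techniques precisely because full chips cannot be enumerated, and this sketch is not Intel's methodology.)

```python
# Miniature "formal" equivalence check: a gate-level ripple-carry adder
# verified against the arithmetic specification for every input.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add4(x, y):
    carry, total = 0, 0
    for i in range(4):  # ripple the carry through, bit by bit
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total | (carry << 4)  # 5-bit result

# Exhaustive proof over all 256 input pairs: implementation == spec.
assert all(add4(x, y) == x + y for x in range(16) for y in range(16))
print("verified")
```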
The Intel designers have also done something else to help defend against the
errors that will inevitably sneak into the chip. Nehalem contains a significant
amount of software that can be changed even after the microprocessor leaves the
factory. That gives the designers a huge safety net.
It is one that Mr. Barton and his team are hoping they will not have to use.
Burned Once, Intel Prepares New Chip Fortified by Constant Tests
Computing Pioneer Has a New Idea
November 17, 2008
The New York Times
By JOHN MARKOFF
SAN FRANCISCO — Steven J. Wallach is completing the soul of
his newest machine.
Thirty years ago, Mr. Wallach was one of a small team of computer designers
profiled by Tracy Kidder in his Pulitzer Prize winning best seller, “The Soul of
a New Machine.”
It was Mr. Wallach, then 33, who served as the architect and baby sitter for his
“microkids,” the young team that designed the Data General MV 8000, the underdog
minicomputer that kept the company alive in its brutal competition with the
Digital Equipment Corporation.
At 63, he is still at it. He plans to introduce his new company, Convey
Computer, and to describe the technical details of a new supercomputer intended
for scientific and engineering applications at a supercomputing conference in
Austin, Tex., this week.
Mr. Wallach thinks he has come upon a new idea in computer design in an era when
it has become fashionable to say that there are no new ideas. So far, he has
persuaded some of the leading thinkers in the high performance computing world
that he might be right. Both Intel and a second chip maker, Xilinx, have joined
as early investors.
“Steve comes from a long history of building successful machines,” said Jack
Dongarra, a computer scientist at the University of Tennessee who helps maintain
the list of the world’s fastest 500 computers.
After leaving Data General, Mr. Wallach helped found Convex in 1982 to build a
minisupercomputer.
Mr. Wallach may be one of the few people remaining to recall a bold generation
of computer designers once defined by Seymour Cray, the engineer who created the
world’s first commercial supercomputers during the 1960s.
His newest effort in computing design is intended to tackle one of the principal
limitations in the world of supercomputing. Typically supercomputers are
intended to excel in solving a single class of problems. They may simulate the
explosion of a nuclear weapon or model global climate change at blinding speed,
but for other problems they will prove sluggish and inefficient.
Today’s supercomputers are assembled from thousands or even tens of thousands of
microprocessors, and they often consume as much electricity as a small city.
Moreover, they can prove to be frightfully difficult to program. Many new
supercomputers try to deal with the challenge of solving different classes of
problems by connecting different kinds of processors together Lego-style. This
can give programmers fits.
For decades, computer designers have struggled with different ways to sidestep
the complexity of programming multiple chips, in order to break up problems into
pieces to be computed simultaneously so that they can be solved more quickly.
Mr. Wallach came up with his new design idea in 2006 after he found himself
rejecting many of the start-up companies who were coming to the venture capital
companies he was advising.
“I would say, ‘No, no, no, they’re clueless,’ ” he said. “I find it difficult to
think of myself as the old man of the industry, but it feels the same as it did
in the early 1980s.”
One of the venture capitalists grew frustrated with Mr. Wallach’s repeated
criticisms and said to him, “All right, Mr. Bigshot, what would you do?”
Two weeks later, Mr. Wallach had a new idea. He had long been fascinated with a
chip technology called Field Programmable Gate Arrays. These chips are widely
used to make prototype computer systems because they can be easily reprogrammed
and yet offer the pure speed of computer hardware. There have been a number of
start-ups and large supercomputer companies that have already tried to design
systems based on the chips, but Mr. Wallach thought that he could do a better
job.
The right way to use them, he decided, was to couple them so tightly to the
microprocessor chip that it would appear they were simply a small set of
additional instructions to give a programmer an easy way to turbocharge a
program. Everything had to look exactly like the standard programming
environment. In contrast, many supercomputers today require programmers to be
intimately familiar with the underlying hardware.
“The past 40 years has taught us that ultimately the system that is easiest to
program will always win,” he said.
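Mr. Wallach's premise, an accelerator that appears to the programmer as a few extra instructions in an otherwise standard environment, can be caricatured in software. In the hypothetical sketch below (not Convey's actual interface), loading a "personality" swaps in a different implementation behind the same operation name, so the calling code never changes:

```python
# Hypothetical sketch of the "accelerator as extra instructions" idea:
# the program issues one operation; whether it runs on a loaded FPGA
# "personality" or in plain software is invisible to the programmer.

def dot_software(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

def dot_accelerated(xs, ys):
    # Stand-in for an FPGA personality; here it is just another routine.
    return sum(x * y for x, y in zip(xs, ys))

class Machine:
    def __init__(self, personality=None):
        # Loading a personality swaps in accelerated implementations
        # behind the same "instruction" names.
        self.isa = {"dot": dot_software}
        if personality == "bioinformatics":
            self.isa["dot"] = dot_accelerated

    def run(self, op, *args):
        return self.isa[op](*args)

plain = Machine()
fast = Machine(personality="bioinformatics")
# Same call either way; the programming model never changes.
assert plain.run("dot", [1, 2], [3, 4]) == fast.run("dot", [1, 2], [3, 4]) == 11
print("ok")
```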
Mr. Wallach approached Advanced Micro Devices about partnering, but it was
skeptical. So he went to Intel, where he knew Justin Rattner, the company’s
chief technology officer and a veteran supercomputer designer.
“We’ve had enough debates over the years that Justin has some respect for me,”
he said.
The Convey computer will be based around Intel’s microprocessors. It will
perform like a shape-shifter, reconfiguring with different hardware
“personalities” to compute problems for different industries, initially aiming
at bioinformatics, computer-aided design, financial services, and oil and gas
exploration.
Mr. Wallach acknowledges that starting a company going into a recession in the
face of stiff competition from Cray, I.B.M., Hewlett-Packard, Sun Microsystems
and more than a dozen smaller companies is daunting. However, Convey was put
together in just two years on a shoestring. It has raised just $15.1 million.
“In a lot of ways, it’s easier than it was in 1982,” he said. “You need less
money and I don’t think a lot of people have grasped this.”
One who does get the idea and who is enthusiastic about it is Larry Smarr, an
astrophysicist who is director of the California Institute for
Telecommunications and Information Technology at the University of California,
San Diego. He believes that the most important quality of the Convey computer is
that it will be a green supercomputer.
“The I.T. industry is going to become the boogeyman for global warming,” he
said.
Three decades after designing the computer that brought the idea of computing
into the public consciousness, Mr. Wallach gives no hint that he is slowing
down.
He still wears the earring that he began wearing 15 years ago when his daughter
suggested that he was getting old.
“Isn’t that required to be a computer architect?” he asked recently.
The Rise of the Machines
October 12, 2008
The New York Times
By RICHARD DOOLING
“BEWARE of geeks bearing formulas.” So saith Warren Buffett, the Wizard of
Omaha. Words to bear in mind as we bail out banks and buy up mortgages and tweak
interest rates and nothing, nothing seems to make any difference on Wall Street
or Main Street. Years ago, Mr. Buffett called derivatives “weapons of financial
mass destruction” — an apt metaphor considering that the Manhattan Project’s
math and physics geeks bearing formulas brought us the original weapon of mass
destruction, at Trinity in New Mexico on July 16, 1945.
In a 1981 documentary called “The Day After Trinity,” Freeman Dyson, a reigning
gray eminence of math and theoretical physics, as well as an ardent proponent of
nuclear disarmament, described the seductive power that brought us the ability
to create atomic energy out of nothing.
“I have felt it myself,” he warned. “The glitter of nuclear weapons. It is
irresistible if you come to them as a scientist. To feel it’s there in your
hands, to release this energy that fuels the stars, to let it do your bidding.
To perform these miracles, to lift a million tons of rock into the sky. It is
something that gives people an illusion of illimitable power, and it is, in some
ways, responsible for all our troubles — this, what you might call technical
arrogance, that overcomes people when they see what they can do with their
minds.”
The Wall Street geeks, the quantitative analysts (“quants”) and masters of “algo
trading” probably felt the same irresistible lure of “illimitable power” when
they discovered “evolutionary algorithms” that allowed them to create vast
empires of wealth by deriving the dependence structures of portfolio credit
derivatives.
What does that mean? You’ll never know. Over and over again, financial experts
and wonkish talking heads endeavor to explain these mysterious, “toxic”
financial instruments to us lay folk. Over and over, they ignobly fail, because
we all know that no one understands collateralized debt obligations and
derivatives,
except perhaps Mr. Buffett and the computers who created them.
Somehow the genius quants — the best and brightest geeks Wall Street firms could
buy — fed $1 trillion in subprime mortgage debt into their supercomputers, added
some derivatives, massaged the arrangements with computer algorithms and — poof!
— created $62 trillion in imaginary wealth. It’s not much of a stretch to
imagine that all of that imaginary wealth is locked up somewhere inside the
computers, and that we humans, led by the silverback males of the financial
world, Ben Bernanke and Henry Paulson, are frantically beseeching the monolith
for answers. Or maybe we are lost in space, with Dave the astronaut pleading,
“Open the bank vault doors, Hal.”
As the current financial crisis spreads (like a computer virus) on the earth’s
nervous system (the Internet), it’s worth asking if we have somehow managed to
colossally outsmart ourselves using computers. After all, the Wall Street titans
loved swaps and derivatives because they were totally unregulated by humans.
That left nobody but the machines in charge.
How fitting then, that almost 30 years after Freeman Dyson described the almost
unspeakable urges of the nuclear geeks creating illimitable energy out of
equations, his son, George Dyson, has written an essay (published at Edge.org)
warning about a different strain of technical arrogance that has brought the
entire planet to the brink of financial destruction. George Dyson is an
historian of technology and the author of “Darwin Among the Machines,” a book
that warned us a decade ago that it was only a matter of time before technology
out-evolves us and takes over.
His new essay — “Economic Dis-Equilibrium: Can You Have Your House and Spend It
Too?” — begins with a history of “stock,” originally a stick of hazel, willow or
alder wood, inscribed with notches indicating monetary amounts and dates. When
funds were transferred, the stick was split into identical halves — with one
side going to the depositor and the other to the party safeguarding the money —
and represented proof positive that gold had been deposited somewhere to back it
up. That was good enough for 600 years, until we decided that we needed more
speed and efficiency.
Making money, it seems, is all about the velocity of moving it around, so that
it can exist in Hong Kong one moment and Wall Street a split second later. “The
unlimited replication of information is generally a public good,” George Dyson
writes. “The problem starts, as the current crisis demonstrates, when
unregulated replication is applied to money itself. Highly complex
computer-generated financial instruments (known as derivatives) are being
produced, not from natural factors of production or other goods, but purely from
other financial instruments.”
It was easy enough for us humans to understand a stick or a dollar bill when it
was backed by something tangible somewhere, but only computers can understand
and derive a correlation structure from observed collateralized debt obligation
tranche spreads. Which leads us to the next question: Just how much of the
world’s financial stability now lies in the “hands” of computerized trading
systems?
Here’s a frightening party trick that I learned from the futurist Ray Kurzweil.
Read this excerpt and then I’ll tell you who wrote it:
But we are suggesting neither that the human race would voluntarily turn power
over to the machines nor that the machines would willfully seize power. What we
do suggest is that the human race might easily permit itself to drift into a
position of such dependence on the machines that it would have no practical
choice but to accept all of the machines’ decisions. ... Eventually a stage may
be reached at which the decisions necessary to keep the system running will be
so complex that human beings will be incapable of making them intelligently. At
that stage the machines will be in effective control. People won’t be able to
just turn the machines off, because they will be so dependent on them that
turning them off would amount to suicide.
Brace yourself. It comes from the Unabomber’s manifesto.
Yes, Theodore Kaczynski was a homicidal psychopath and a paranoid kook, but he
was also a bloodhound when it came to scenting all of the horrors technology
holds in store for us. Hence his mission to kill technologists before machines
commenced what he believed would be their inevitable reign of terror.
We are living, we have long been told, in the Information Age. Yet now we are
faced with the sickening suspicion that technology has run ahead of us. Man is a
fire-stealing animal, and we can’t help building machines and machine
intelligences, even if, from time to time, we use them not only to outsmart
ourselves but to bring us right up to the doorstep of Doom.
We are still fearful, superstitious and all-too-human creatures. At times, we
forget the magnitude of the havoc we can wreak by off-loading our minds onto
super-intelligent machines, that is, until they run away from us, like mad
sorcerers’ apprentices, and drag us up to the precipice for a look down into
the abyss.
As the financial experts all over the world use machines to unwind Gordian knots
of financial arrangements so complex that only machines can make — “derive” —
and trade them, we have to wonder: Are we living in a bad sci-fi movie? Is the
Matrix made of credit default swaps?
When Treasury Secretary Paulson (looking very much like a frightened primate)
came to Congress seeking an emergency loan, Senator Jon Tester of Montana, a
Democrat still living on his family homestead, asked him: “I’m a dirt farmer.
Why do we have one week to determine that $700 billion has to be appropriated or
this country’s financial system goes down the pipes?”
“Well, sir,” Mr. Paulson could well have responded, “the computers have
demanded it.”
Richard Dooling is the author
of “Rapture for the Geeks: When A.I. Outsmarts I.Q.”
Military Supercomputer Sets Record
June 9, 2008
The New York Times
By JOHN MARKOFF
SAN FRANCISCO — An American military supercomputer, assembled from components
originally designed for video game machines, has reached a long-sought-after
computing milestone by processing more than 1.026 quadrillion calculations per
second.
The new machine is more than twice as fast as the previous fastest
supercomputer, the I.B.M. BlueGene/L, which is based at Lawrence Livermore
National Laboratory in California.
The new $133 million supercomputer, called Roadrunner in a reference to the
state bird of New Mexico, was devised and built by engineers and scientists at
I.B.M. and Los Alamos National Laboratory, based in Los Alamos, N.M. It will be
used principally to solve classified military problems to ensure that the
nation’s stockpile of nuclear weapons will continue to work correctly as they
age. The Roadrunner will simulate the behavior of the weapons in the first
fraction of a second during an explosion.
Before it is placed in a classified environment, it will also be used to explore
scientific problems like climate change. The greater speed of the Roadrunner
will make it possible for scientists to test global climate models with higher
accuracy.
To put the performance of the machine in perspective, Thomas P. D’Agostino, the
administrator of the National Nuclear Security Administration, said that if all
six billion people on earth used hand calculators and performed calculations 24
hours a day and seven days a week, it would take them 46 years to do what the
Roadrunner can in one day.
The machine is an unusual blend of chips used in consumer products and advanced
parallel computing technologies. The lessons that computer scientists learn by
making it calculate even faster are seen as essential to the future of both
personal and mobile consumer computing.
The high-performance computing goal, known as a petaflop — one thousand trillion
calculations per second — has long been viewed as a crucial milestone by
military, technical and scientific organizations in the United States, as well
as a growing group including Japan, China and the European Union. All view
supercomputing technology as a symbol of national economic competitiveness.
By running programs that find a solution in hours or even less time — compared
with as long as three months on older generations of computers — petaflop
machines like Roadrunner have the potential to fundamentally alter science and
engineering, supercomputer experts say. Researchers can ask questions and
receive answers virtually interactively and can perform experiments that would
previously have been impractical.
“This is equivalent to the four-minute mile of supercomputing,” said Jack
Dongarra, a computer scientist at the University of Tennessee who for several
decades has tracked the performance of the fastest computers.
Each new supercomputing generation has brought scientists a step closer to
faithfully simulating physical reality. It has also produced software and
hardware technologies that have rapidly spilled out into the rest of the
computer industry for consumer and business products.
Technology is flowing in the opposite direction as well. Consumer-oriented
computing began dominating research and development spending on technology
shortly after the cold war ended in the late 1980s, and that trend is evident in
the design of the world’s fastest computers.
The Roadrunner is based on a radical design that includes 12,960 chips that are
an improved version of an I.B.M. Cell microprocessor, a parallel processing chip
originally created for Sony’s PlayStation 3 video-game machine. The Sony chips
are used as accelerators, or turbochargers, for portions of calculations.
The Roadrunner also includes a smaller number of more conventional Opteron
processors, made by Advanced Micro Devices, which are already widely used in
corporate servers.
“Roadrunner tells us about what will happen in the next decade,” said Horst
Simon, associate laboratory director for computer science at the Lawrence
Berkeley National Laboratory. “Technology is coming from the consumer
electronics market and the innovation is happening first in terms of cellphones
and embedded electronics.”
The innovations flowing from this generation of high-speed computers will most
likely result from the way computer scientists manage their complexity. The
Roadrunner, which consumes roughly three megawatts of power, about as much as a
large suburban shopping center, requires three separate programming tools
because it has three types of processors. Programmers have to
figure out how to keep all of the 116,640 processor cores in the machine
occupied simultaneously in order for it to run effectively.
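The stakes of keeping every core occupied can be quantified with Amdahl's law (not named in the article, but the standard rule of thumb): whatever fraction of a program runs serially bounds the speedup, no matter how many processors are added.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of a program parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

cores = 116_640  # Roadrunner's processor-core count, from the article

# A program that is 99.9% parallel still wastes almost all of the machine:
print(round(amdahl_speedup(0.999, cores)))  # 992, not 116,640
print(round(amdahl_speedup(1.0, cores)))    # 116640, the ideal case
```

Even one-tenth of one percent of serial work caps the machine at under a thousandfold speedup, which is why the programming problem looms as large as the hardware.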
“We’ve proved some skeptics wrong,” said Michael R. Anastasio, a physicist who
is director of the Los Alamos National Laboratory. “This gives us a window into
a whole new way of computing. We can look at phenomena we have never seen
before.”
Solving that programming problem is important because in just a few years
personal computers will have microprocessor chips with dozens or even hundreds
of processor cores. The industry is now hunting for new techniques for making
use of the new computing power. Some experts, however, are skeptical that the
most powerful supercomputers will provide useful examples.
“If Chevy wins the Daytona 500, they try to convince you the Chevy Malibu you’re
driving will benefit from this,” said Steve Wallach, a supercomputer designer
who is chief scientist of Convey Computer, a start-up firm based in Richardson,
Tex.
Those who work with weapons might not have much to offer the video gamers of the
world, he suggested.
Many executives and scientists see Roadrunner as an example of the resurgence of
the United States in supercomputing.
Although American companies had dominated the field since its inception in the
1960s, in 2002 the Japanese Earth Simulator briefly claimed the title of the
world’s fastest by executing more than 35 trillion mathematical calculations per
second. Two years later, a supercomputer created by I.B.M. reclaimed the speed
record for the United States. The Japanese challenge, however, led Congress and
the Bush administration to reinvest in high-performance computing.
“It’s a sign that we are maintaining our position,” said Peter J. Ungaro, chief
executive of Cray, a maker of supercomputers. He noted, however, that “the real
competitiveness is based on the discoveries that are based on the machines.”
Having surpassed the petaflop barrier, I.B.M. is already looking toward the next
generation of supercomputing. “You do these record-setting things because you
know that in the end we will push on to the next generation and the one who is
there first will be the leader,” said Nicholas M. Donofrio, an I.B.M.
executive.
By breaking the petaflop barrier sooner than had been generally expected, the
United States’ supercomputer industry has been able to sustain a pace of
continuous performance increases, improving a thousandfold in processing power
in 11 years. The next thousandfold goal is the exaflop, which is a quintillion
calculations per second, followed by the zettaflop and the yottaflop.
Intel Previews a New Family
of Power-Saving Chips
September 19, 2007
The New York Times
By LAURIE J. FLYNN
SAN FRANCISCO, Sept. 18 — Intel gave the first public demonstration on
Tuesday of a new generation of computer processors that significantly increase
performance without consuming more power.
The company’s chief executive, Paul S. Otellini, told developers at its
semiannual technology conference that Intel expected to finish the new family of
chips in the second half of 2008, in keeping with its promise of a new chip
architecture every other year. The new family of chips, code-named Nehalem, will
use as many as eight processing cores and will offer better graphics and video
performance.
Intel had been late to respond to technological challenges in energy efficiency
and heat dissipation, and it has spent the better part of two years racing to
catch up with its smaller but feisty competitor, Advanced Micro Devices.
A year ago, Intel announced a painful corporate overhaul, including a round of
cost-cutting that reduced the work force by 10 percent and trimmed $5 billion in
expenses. Since then, the company has begun to regain lost market share, and
last week raised its sales forecast for this quarter.
As part of the corporate revamping, Intel executives last year outlined what
they called a tick-tock strategy, referring to the development of a new chip
architecture every other year and to a new manufacturing technology in the
alternate years. Mr. Otellini said the strategy would accelerate the pace of
innovation.
The manufacturing-technology innovation, a new silicon technology component, is
almost ready. Intel’s Penryn family of processors, to be introduced on Nov. 12,
will be the industry’s first high-volume 45-nanometer processors. (The current
standard is 65 nanometers.)
Mr. Otellini said the company planned to introduce 15 new 45-nanometer
processors by the end of the year and 20 more in the first quarter of 2008.
A.M.D. has said it will move to 45-nanometer technology in mid-2008.
“We expect our Penryn processors to provide up to a 20 percent performance
increase while improving energy efficiency,” Mr. Otellini said.
He said that 32-nanometer technology, which is on track to begin production in
2009, would offer even greater performance. The 32-nanometer chips use
transistors so small that more than 4 million of them could fit on the head of
a pin.
“Smaller is better, smaller is cheaper,” Mr. Otellini said.
The company also disclosed plans for a new graphics-oriented product, called
Larrabee, which will compete with products from Nvidia and from Advanced Micro
Devices’ ATI graphics unit. Larrabee will include 12 cores,
or computing brains.
On Monday, A.M.D. unveiled its own strategic change: a desktop chip with three
cores, unusual in an industry that tends to grow in even numbers, routinely
doubling performance. The announcement came as a surprise to analysts, as the
company had promoted the advantages of four processors only last week.
A.M.D. executives, referring to a recent survey by Mercury Research, said that
quad-core processors accounted for only 2 percent of all desktop computer
systems, suggesting that they had been slower to catch on than expected.
It is hoping that its new three-core chip, called Phenom, will appeal to
midrange customers who are looking for better performance than dual-core systems
can provide, but do not see the need for quad-core systems. A corporate vice
president, Robert Brewer, predicted that “it’s naturally going to resonate with
customers,” who he said would appreciate having another choice.
But Nathan Brookwood, principal analyst with Insight 64, a consulting firm in
Saratoga, Calif., said the triple-core chip could prove confusing to customers.
It is due in the first quarter of 2008, the quarter after Advanced Micro is
scheduled to release its quad-core chip. In some cases, the triple-core chip may
actually perform faster than a quad core.
Reshaping the Architecture of Memory
September 11, 2007
The New York Times
By JOHN MARKOFF
SAN JOSE, Calif. — The ability to cram more data into less space on a memory
chip or a hard drive has been the crucial force propelling consumer electronics
companies to make ever smaller devices.
It shrank the mainframe computer to fit on the desktop, shrank it again to fit
on our laps and again to fit into our shirt pockets.
Now, if an idea that Stuart S. P. Parkin is kicking around in an I.B.M. lab here
is on the money, electronic devices could hold 10 to 100 times the data in the
same amount of space. That means the iPod that today can hold up to 200 hours of
video could store every single TV program broadcast during a week on 120
channels.
The tech world, obsessed with data density, is taking notice because Mr. Parkin
has done it before. An I.B.M. research fellow largely unknown outside a small
fraternity of physicists, Mr. Parkin puttered for two years in a lab in the
early 1990s, trying to find a way to commercialize an odd magnetic effect of
quantum mechanics he had observed at supercold temperatures. With the help of a
research assistant, he was able to alter the magnetic state of tiny areas of a
magnetic data storage disc, making it possible to store and retrieve information
in a smaller amount of space. The huge increases in digital storage made
possible by giant magnetoresistance, or GMR, made consumer audio and video
iPods, as well as Google-style data centers, a reality.
Mr. Parkin thinks he is poised to bring about another breakthrough that could
increase the amount of data stored on a chip or a hard drive by a factor of a
hundred. If he proves successful in his quest, he will create a “universal”
computer memory, one that can potentially replace dynamic random access memory,
or DRAM, and flash memory chips, and even make a “disk drive on a chip”
possible.
It could begin to replace flash memory in three to five years, scientists say.
Not only would it allow every consumer to carry data equivalent to a college
library on small portable devices, but a tenfold or hundredfold increase in
memory would be disruptive enough to existing storage technologies that it would
undoubtedly unleash the creativity of engineers who would develop totally new
entertainment, communication and information products.
Currently the flash storage chip business is exploding. Used as storage in
digital cameras, cellphones and PCs, the commercially available flash drives
with multiple memory chips store up to 64 gigabytes of data. Capacity is
expected to reach about 50 gigabytes on a single chip in the next half-decade.
However, flash memory has an Achilles’ heel. Although it can read data quickly,
it is very slow at storing it. That has led the industry on a frantic hunt for
alternative storage technologies that might unseat flash.
Mr. Parkin’s new approach, referred to as “racetrack memory,” could outpace
both solid-state flash memory chips and computer hard disks, making it a
technology that could transform not only the storage business but the entire
electronics industry.
“Finally, after all these years, we’re reaching fundamental physics limits,” he
said. “Racetrack says we’re going to break those scaling rules by going into
the third dimension.”
His idea is to stand billions of ultrafine wire loops around the edge of a
silicon chip — hence the name racetrack — and use electric current to slide
infinitesimally small magnets up and down along each of the wires to be read and
written as digital ones and zeros.
His research group is able to slide the tiny magnets along notched nanowires at
speeds greater than 100 meters a second. Since the tiny magnetic domains have to
travel only submolecular distances, it is possible to read and write magnetic
regions with different polarization as quickly as a single nanosecond, or one
billionth of a second — far faster than existing storage technologies.
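The shift-register behavior described above can be sketched in a few lines of Python. This is a toy model, with all names invented for illustration: one nanowire is treated as a queue of magnetic domains that a current pulse slides past a fixed read/write head.

```python
# Toy model of racetrack memory: one nanowire holds a pattern of magnetic
# domains, and a current pulse slides the whole pattern past a fixed
# read/write head. All names here are invented for illustration.
from collections import deque

class RacetrackWire:
    def __init__(self, num_domains):
        self.domains = deque([0] * num_domains)  # 0/1 = two polarizations

    def shift(self, steps=1):
        """A current pulse moves every domain along the wire in lockstep."""
        self.domains.rotate(steps)

    def read(self):
        """Sense the polarization of the domain under the fixed head."""
        return self.domains[0]

    def write(self, bit):
        """Set the polarization of the domain under the fixed head."""
        self.domains[0] = bit

wire = RacetrackWire(8)
for bit in [1, 0, 1, 1]:    # store four bits, shifting between writes
    wire.write(bit)
    wire.shift()

readback = []
for _ in range(4):          # shifting backward reads them most-recent-first
    wire.shift(-1)
    readback.append(wire.read())
# readback == [1, 1, 0, 1], i.e. [1, 0, 1, 1] reversed
```

The point of the sketch is that reads and writes never move the head; only the stored pattern moves, which is what lets the scheme avoid mechanical parts.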
If the racetrack idea can be made commercial, he will have done what has so far
proved impossible — to take microelectronics completely into the third dimension
and thus explode the two-dimensional limits of Moore’s Law, the 1965 observation
by Gordon E. Moore, a co-founder of Intel, that the number of transistors on a
silicon chip doubles roughly every two years.
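The doubling behind Moore’s Law is easy to make concrete. The sketch below is a back-of-the-envelope model, assuming a strict doubling cadence and using Intel’s first microprocessor, the 4004 of 1971 with about 2,300 transistors, as a baseline; the function and its parameters are illustrative, not from the article.

```python
# Back-of-the-envelope sketch of Moore's Law as a strict doubling schedule.
# Baseline: Intel's 4004 (1971), roughly 2,300 transistors. The cadence is
# a parameter, since sources quote both 18-month and two-year periods.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Projected transistor count under an idealized doubling schedule."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Ten doublings over twenty years compound to a 1,024-fold increase.
growth = transistors(1991) / transistors(1971)
```

The exponential compounding is the whole story: a factor of two per step looks modest, but twenty years of it multiplies capacity a thousandfold.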
Just as with Mr. Parkin’s earlier work in GMR, there is no shortage of skeptics
at this point.
Giant storage companies like Seagate Technology are starting to turn toward
flash to create a generation of hybrid storage systems that combine silicon and
rotating disk technologies for speed and capacity. But Seagate is still looking
in the two-dimensional realm for future advances.
“There are a lot of neat technologies, but you have to be able to make them
cost-effectively,” said Bill Watkins, Seagate’s chief executive.
So far, the racetrack idea is far from the Best Buy shelves and it is very much
still in Mr. Parkin’s laboratory here. His track record, however, suggests that
the storage industry might do well to take notice of the implications of his
novel nanowire-based storage system in the not too distant future.
“Stuart marches to a little bit of a different drummer, but that’s what it takes
to have enough courage to go off the beaten path,” said James S. Harris, an
electrical engineering professor at Stanford University and co-director of the
I.B.M.-Stanford Spintronic Science and Applications Center.
A visit to Mr. Parkin’s crowded office reveals him to be a 51-year-old
British-American scientist for whom the term hyperactive is a modest
understatement at best. During interviews he is constantly in motion. When he
speaks publicly at scientific gatherings, his longtime technology assistant,
Kevin Roche, is careful to see that Mr. Parkin empties the change from his
pockets, lest he distract his audience with the constant jingling of coins and
keys.
Today, a number of industry analysts think there are important parallels between
Mr. Parkin’s earlier GMR research and his new search for racetrack materials.
“We’re on the verge of exciting new memory architectures, and his is one of the
leading candidates,” said Richard Doherty, director of the Envisioneering Group,
a computing and consumer electronics consulting firm based in Seaford, N.Y.
Mr. Parkin said he had recently shifted his focus and now thought that his
racetracks might be competitive with other storage technologies even if they
were laid horizontally on a silicon chip.
I.B.M. executives are cautious about the timing of the commercial introduction
of the technology. But ultimately, the technology may have even more dramatic
implications than just smaller music players or wristwatch TVs, said Mark Dean,
vice president for systems at I.B.M. Research.
“Something along these lines will be very disruptive,” he said. “It will not
only change the way we look at storage, but it could change the way we look at
processing information. We’re moving into a world that is more data-centric than
computing-centric.”
This is just a hint, but it suggests that I.B.M. may think that racetrack memory
could blur the line between storage and computing, providing a key to a new way
to search for data, as well as store and retrieve data.
And if it is, Mr. Parkin’s experimental physics lab will have transformed the
computing world yet again.
Architecture of Memory, NYT, 11.9.2007,
A New Entry From A.M.D.
in Chip Wars
September 10, 2007
The New York Times
By LAURIE J. FLYNN
Advanced Micro Devices is counting on a new high-performance computer chip to
hold on to hard-fought market share it has won from its principal rival, Intel.
The company, based in Sunnyvale, Calif., is set today to release the next
generation in its Opteron line of processors for computer servers. The new chip
puts four processors on one piece of silicon, a technology known as quad-core,
allowing for faster calculating and greater energy efficiency, features sought
by companies running large data centers and server farms.
Mario Rivas, executive vice president for computing products at A.M.D., said the
latest Opteron chip is the company’s most significant new product in several
years.
For Advanced Micro, the stakes are high, with the new chip arriving just as it
struggles to maintain its hard-earned gains from Intel, its far larger rival.
A.M.D.’s product introduction comes less than a week after Intel tried to
upstage it with a server update of its own: new Xeon server processors that
bundle together two chips that each have the circuitry of two processing cores.
In July, A.M.D. reported a $600 million loss for the second quarter, its third
loss in a row, as it grappled with the renewed competition from Intel and
falling chip prices. But it also said that shipments of microprocessors rose 38
percent from the first quarter, and that it had begun to win back market share
after several quarters of slipping.
Intel and A.M.D. have been locked in a race to deliver high-performing chips for
several years. A.M.D. was first to market with a dual-core chip more than two
years ago as Intel struggled to get its dual-core strategy off the ground.
When A.M.D. introduced the Opteron server chip in 2003, the industry was slow to
warm to the product, but the company says that this time will be different. Four
years ago, Intel’s server processors were favored by nearly all major hardware
suppliers. But delays at Intel induced Dell, I.B.M., Hewlett-Packard and Sun
Microsystems to gradually turn to the Opteron as an alternative.
A.M.D. gained market share, particularly in the desktop and server markets,
though Intel managed to keep a tight grip on fast-growing notebook PCs.
In recent quarters, Intel has responded with a succession of processors, and has
managed to win back some of the share it lost. Intel is now leading in the
market for servers, analysts say.
Analysts expect the new Opteron to take off more quickly this time because the
major hardware companies are already A.M.D. customers. “This chip will have a
much faster impact on A.M.D.’s business,” said Nathan Brookwood of Insight64, a
chip industry consulting firm, “but a lot will be riding on just how good it
is.”
A New Entry From A.M.D.
in Chip Wars, NYT, 10.9.2007,
Techies Ponder Computers
Smarter Than Us
September 9, 2007
By THE ASSOCIATED PRESS
Filed at 12:45 a.m. ET
The New York Times
SAN FRANCISCO (AP) -- At the center of a black hole there lies a point called
a singularity where the laws of physics no longer make sense. In a similar way,
according to futurists gathered Saturday for a weekend conference, information
technology is hurtling toward a point where machines will become smarter than
their makers. If that happens, it will alter what it means to be human in ways
almost impossible to conceive, they say.
''The Singularity Summit: AI and the Future of Humanity'' brought together
hundreds of Silicon Valley techies and scientists to imagine a future of
self-programming computers and brain implants that would allow humans to think
at speeds nearing today's microprocessors.
Artificial intelligence researchers at the summit warned that now is the time to
develop ethical guidelines for ensuring these advances help rather than harm.
''We and our world won't be us anymore,'' Rodney Brooks, a robotics professor at
the Massachusetts Institute of Technology, told the audience. When it comes to
computers, he said, ''who is us and who is them is going to become a different
sort of question.''
Eliezer Yudkowsky, co-founder of the Palo Alto-based Singularity Institute for
Artificial Intelligence, which organized the summit, researches the development
of so-called ''friendly artificial intelligence.'' His greatest
fear, he said, is that a brilliant inventor creates a self-improving but amoral
artificial intelligence that turns hostile.
The first use of the term ''singularity'' to describe this kind of fundamental
technological transformation is credited to Vernor Vinge, a California
mathematician and science-fiction author.
High-tech entrepreneur Ray Kurzweil raised the profile of the singularity
concept in his 2005 book ''The Singularity is Near,'' in which he argues that
the exponential pace of technological progress makes the emergence of
smarter-than-human intelligence the future's only logical outcome.
Kurzweil, director of the Singularity Institute, is so confident in his
predictions of the singularity that he has even set a date: 2029.
Most ''singularists'' feel they have strong evidence to support their claims,
citing the dramatic advances in computing technology that have already occurred
over the last 50 years.
In 1965, Intel co-founder Gordon Moore accurately predicted that the number of
transistors on a chip would double about every two years. By comparison,
according to Singularity Institute researchers, the entire evolution of modern
humans from primates has resulted in only a threefold increase in brain
capacity.
With advances in biotechnology and information technology, they say, there's no
scientific reason that human thinking couldn't be pushed to speeds up to a
million times faster.
Some critics have mocked singularists for their obsession with
''techno-salvation'' and ''techno-holocaust'' -- or what some wags have called
the coming ''nerdocalypse.'' Their predictions are grounded as much in science
fiction as science, the detractors claim, and may never come to pass.
But advocates argue it would be irresponsible to ignore the possibility of dire
consequences.
''Technology is heading here. It will predictably get to the point of making
artificial intelligence,'' Yudkowsky said. ''The mere fact that you cannot
predict exactly when it will happen down to the day is no excuse for closing
your eyes and refusing to think about it.''
Techies Ponder Computers
Smarter Than Us, NYT, 9.9.2007,
Microsoft Unveils New
Surface Computer
May 30, 2007
By THE ASSOCIATED PRESS
Filed at 7:39 a.m. ET
The New York Times
SEATTLE (AP) -- Microsoft Corp. has taken the wraps off ''Surface,'' a
coffee-table shaped computer that responds to touch and to special bar codes
attached to everyday objects.
The machines, which Microsoft planned to debut Wednesday at a technology
conference in Carlsbad, Calif., are set to arrive in November in T-Mobile USA
stores and properties owned by Starwood Hotels & Resorts Worldwide Inc. and
Harrah's Entertainment Inc.
Surface is essentially a Windows Vista PC tucked inside a shiny black table
base, topped with a 30-inch touchscreen in a clear acrylic frame. Five cameras
that can sense nearby objects are mounted beneath the screen. Users can interact
with the machine by touching or dragging their fingertips and objects such as
paintbrushes across the screen, or by setting real-world items tagged with
special bar-code labels on top of it.
Unlike most touchscreens, Surface can respond to more than one touch at a time.
During a demonstration with a reporter last week, Mark Bolger, the Surface
Computing group's marketing director, ''dipped'' his finger in an on-screen
paint palette, then dragged it across the screen to draw a smiley face. Then he
used all 10 fingers at once to give the face a full head of hair.
With a price tag between $5,000 and $10,000 per unit, Microsoft isn't
immediately aiming for the finger painting set. (The company said it expects
prices to drop enough to make consumer versions feasible in three to five
years.)
Some of the first Surface models are planned to help customers pick out new cell
phones at T-Mobile stores. When customers plop a phone down on the screen,
Surface will read its bar code and display information about the handset.
Customers can also select calling plans and ringtones by dragging icons toward
the phone.
Guests sitting in some Starwood Hotel lobbies will be able to cluster around the
Surface to play music, then buy songs using a credit card or rewards card tagged
with a bar code. In some hotel restaurants, customers will be able to order food
and drinks, then split the bill by setting down a card or a room key and
dragging their menu items ''onto'' the card.
At Harrah's locations, visitors will be able to learn about nearby Harrah's
venues on an interactive map, then book show tickets or make dinner
reservations.
Microsoft is working on a limited number of programs to ship with Surface,
including one for sharing digital photographs.
Bolger placed a card with a bar code onto Surface's surface; digital photographs
appeared to spill out of the card into piles on the screen. Several people
gathered around the table pulled photos across the screen using their
fingertips, rotated them in circles and even dragged out the corners to enlarge
the images -- behavior made possible by the advanced graphics support deep
inside Windows Vista.
''It's not a touch screen, it's a grab screen,'' Bolger said.
Historically, Microsoft has focused on creating new software, giving computer
programmers tools to build applications on its platforms, and left hardware
manufacturing to others. (Some recent exceptions include the Xbox 360 and the
Zune music player, made by the same Microsoft division that developed Surface.)
For now, Microsoft is making the Surface hardware itself, and has given only six
outside software development firms the tools they need to make Surface
applications.
Matt Rosoff, an analyst at the independent research group Directions on
Microsoft, said in an interview that keeping the technology's inner workings
under wraps will limit what early customers -- the businesses Microsoft is
targeting first with the machine -- will be able to do with it.
But overall, analysts who cover the PC industry were wowed by Surface.
Surface is ''important for Microsoft as a promising new business, as well as
demonstrating very concretely to the market that Microsoft still knows how to
innovate, and innovate in a big way,'' said Michael Gartenberg, an analyst at
Microsoft Unveils New
Surface Computer, NYT, 30.5.2007,
Intel Says Chips Will Run Faster,
Using Less Power
January 27, 2007
The New York Times
By JOHN MARKOFF
Intel, the world’s largest chip maker, has overhauled the basic building
block of the information age, paving the way for a new generation of faster and
more energy-efficient processors.
Company researchers said the advance represented the most significant change in
the materials used to manufacture silicon chips since Intel pioneered the modern
integrated-circuit transistor more than four decades ago.
The microprocessor chips, which Intel plans to begin making in the second half
of this year, are designed for computers but they could also have applications
in consumer devices. Their combination of processing power and energy efficiency
could make it possible, for example, for cellphones to play video at length — a
demanding digital task — with less battery drain.
The work by Intel overcomes a potentially crippling technical obstacle that has
arisen as a transistor’s tiny switches are made ever smaller: their tendency to
leak current as the insulating material gets thinner. The Intel advance uses new
metallic alloys in the insulation itself and in adjacent components.
Word of the announcement, which is planned for Monday, touched off a war of
dueling statements as I.B.M. rushed to announce that it was on the verge of a
similar advance.
I.B.M. executives said their company was planning to introduce a comparable type
of transistor in the first quarter of 2008.
Many industry analysts say that Intel retains a six-month to nine-month lead
over the rest of the industry, but I.B.M. executives disputed the claim and said
the two companies were focused on different markets in the computing industry.
The I.B.M. technology has been developed in partnership with Advanced Micro
Devices, Intel’s main rival. Modern microprocessor and memory chips are created
from an interconnected fabric of hundreds of millions and even billions of the
tiny switches that process the ones and zeros that are the foundation of digital
computing.
They are made using a manufacturing process that has been constantly improving
for more than four decades. Today transistors, for example, are made with
systems that can create wires and other features that are finer than the
resolving power of a single wavelength of light.
The Intel announcement is new evidence that the chip maker is maintaining the
pace of Moore’s Law, the technology axiom that states that the number of
transistors on a chip doubles roughly every two years, giving rise to a constant
escalation of computing power at lower costs.
“This is evolutionary as opposed to revolutionary, but it will generate a big
sigh of relief,” said Vivek Subramanian, associate professor of electrical
engineering and computer sciences at the University of California, Berkeley.
For several decades there have been repeated warnings about the impending end of
the Moore’s Law pace for chip makers. In response the semiconductor industry has
repeatedly found its way around fundamental technical obstacles, inventing
techniques that at times seem to defy basic laws of physics.
The chip industry measures its progress by manufacturing standards defined by a
width of one of the smallest features of a transistor for each generation.
Currently much of the industry is building chips in what is known as
90-nanometer technology. At that scale, about 1,000 transistors would fit in the
width of a human hair. Intel began making chips at 65 nanometers in 2005, about
nine months before its closest competitors.
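The scale claims above are easy to sanity-check. The sketch below assumes a human hair about 90 micrometers across (an assumed mid-range value; real hairs vary from roughly 50 to 100 micrometers) and counts how many minimum-size features fit across it at each process node named in the article.

```python
# Sanity check of the scale claim: at a 90-nanometer feature size, about
# 1,000 features span the width of a human hair, taken here to be 90
# micrometers (an assumed mid-range value).
HAIR_WIDTH_NM = 90_000  # 90 micrometers expressed in nanometers

features_across_hair = {node: HAIR_WIDTH_NM // node for node in (90, 65, 45)}
# 90 nm node: 1,000 features; 45 nm node: 2,000 features across one hair.
```

The halving of the feature size from 90 nm to 45 nm doubles the linear count, and so roughly quadruples how many transistors fit in the same area.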
Now the company is moving on to the next stage of refinement, defined by a
minimum feature size of 45 nanometers. Other researchers have recently reported
progress on molecular computing technologies that could reduce the scale even
further by the end of the decade.
Intel’s imminent advance to 45 nanometers will have a huge impact on the
industry, Mr. Subramanian said. “People have been working on it for over a
decade, and this is tremendously significant that Intel has made it work,” he
said.
Intel’s advance came in part from finding a new insulator composed of an alloy
of hafnium, a metallic element that has previously been used in filaments and
electrodes and as a neutron absorber in nuclear power plants. It will replace
the use of silicon dioxide — essentially the material that window glass is made
of, but only several atoms thick.
Intel is also shifting to new metallic alloy materials — it is not identifying
them specifically — in transistor components known as gates, which sit directly
on top of the insulator. These are ordinarily made from a particular form of
silicon called polysilicon.
The new approach to insulation appears at least temporarily to conquer one of
the most significant obstacles confronting the semiconductor industry: the
tendency of tiny switches to leak electricity as they are reduced in size. The
leakage makes chips run hotter and consume more power.
Many executives in the industry say that Intel is still recovering from a
strategic wrong turn it made when the company pushed its chips to extremely high
clock speeds — the ability of a processor to calculate more quickly. That
obsession with speed at any cost left the company behind its competitors in
shifting to low-power alternatives.
Now Intel is coming back. Although the chip maker led in the speed race for many
years, the company has in recent years shifted its focus to low-power
microprocessors that gain speed by breaking up each chip into multiple computing
“cores.” In its new 45-nanometer generation, Intel will gain the freedom to seek
either higher performance or substantially lower power, while at the same time
increasing the number of cores per chip.
“They can adjust the transistor for high performance or low power,” said David
Lammers, director of WeSRCH.com, a Web portal for technical professionals.
The Intel development effort has gone on in a vast automated factory in
Hillsboro, Ore., that the company calls D1D. It features huge open manufacturing
rooms that are kept surgically clean to prevent dust from contaminating the
silicon wafers that are whisked around the factory by a robotic conveyor system.
The technology effort was led by Mark T. Bohr, a longtime Intel physicist who is
director of process architecture and integration. The breakthrough, he said, was
in finding a way to deal with the leakage of current. “Up until five years ago,
leakage was thought to increase with each generation,” he said.
Several analysts said that the technology advance could give Intel a meaningful
advantage over competitors in the race to build ever more powerful
microprocessors.
“It’s going to be a nightmare for Intel’s competitors,” said G. Dan Hutcheson,
chief executive of VLSI Research. “A lot of Mark Bohr’s counterparts are going
to wake up in terror.”
An I.B.M. executive said yesterday that the company had also chosen hafnium as
its primary insulator, but that it would not release details of its new process
until technical papers are presented at coming conferences.
“It’s the difference between can openers and Ferraris,” said Bernard S.
Meyerson, vice president and chief technologist for the systems and technology
group at I.B.M. He insisted that industry analysts who have asserted that Intel
has a technology lead were mistaken, and that I.B.M. had simply chosen to
deploy its new process in chips that are part of high-performance systems aimed
at the high end of the computer industry.
Intel said it had already manufactured prototype microprocessor chips in the new
45-nanometer process that run on three major operating systems: Windows, Mac OS
X and Linux.
Intel Says Chips Will
Run Faster, Using Less Power, NYT, 27.1.2007,
Researchers Go Molecular
in Design of a Denser Chip
January 25, 2007
The New York Times
By KENNETH CHANG
Scientists have built a memory chip that is roughly the size of a white blood
cell, about one-2,000th of an inch on a side.
Although the chip is modest in capacity — with 160,000 bits of information — the
bits are crammed together so tightly that it is the densest ever made. The
achievement points to a possible path toward continuing the exponential growth
of computing power even after current silicon chip-making technology hits
fundamental limits in 10 to 20 years.
The scientists, led by James R. Heath of the California Institute of Technology
and J. Fraser Stoddart of the University of California, Los Angeles, will report
their findings today in the journal Nature. As far back as 1999, Dr. Heath and
Dr. Stoddart reported on aspects of their work, which included specially
designed molecular switches and a novel technique for making ultrathin wires.
The new work pulls the components into an integrated circuit.
“Our goal always was to develop a manufacturing technique that works at the
molecular scale,” said Dr. Heath, a professor of chemistry. “It’s a scientific
demonstration, but it’s a sort of a stake in the ground.”
The density of bits on the chip — about 100 billion per square centimeter — is
about 40 times as much as current memory chips, Dr. Heath said. Improvements to
the technique could increase the density by a factor of 10, he said.
But Dr. Heath said he did not know if this technique would be commercially
useful. “I don’t know if the world needs memory like this,” he said. “I do know
if you can manufacture at these dimensions, it’s a fundamentally enabling
technology.”
For example, the wires used in the chip are about the same width as proteins,
and that could make possible tiny circuits that could detect cancer or other
diseases. The researchers are making transistors and simple logic circuits using
similar techniques, Dr. Heath said.
A crucial component of the chip is its molecular switch, designed by Dr.
Stoddart. The switch, which belongs to a class of molecules known as rotaxanes,
looks like a dumbbell with a ring that can slide along the central bar. Voltage
pulses push the ring between two positions on the bar, which represent the zeros
and ones used by computers to store data. The dumbbell shape keeps the ring from
sliding off.
To build the chip, the researchers etched 400 parallel wires, each less than a
millionth of an inch wide and separated by about one-750,000th of an inch from
its neighbors. On top of the wires, they deposited a layer of the molecular
switches, the dumbbells standing vertically, and then a second set of 400 wires
turned 90 degrees to the first set.
Each crossing point between two perpendicular wires, with about 100 of the
molecular switches wedged in between, is the storage location of one bit of
information.
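The crossbar layout lends itself to a direct sketch. The class below is a toy model, not the researchers’ design, and the density arithmetic simply converts the wire spacing quoted above.

```python
# Toy model of the crossbar described above: 400 horizontal wires crossed
# by 400 vertical wires, one bit stored at each crossing. The class and
# its methods are illustrative, not the researchers' interface.
class CrossbarMemory:
    def __init__(self, rows=400, cols=400):
        self.rows, self.cols = rows, cols
        self.bits = [[0] * cols for _ in range(rows)]  # one bit per crossing

    def write(self, row, col, bit):
        # Selecting one wire from each layer addresses a single crossing.
        self.bits[row][col] = bit

    def read(self, row, col):
        return self.bits[row][col]

mem = CrossbarMemory()
mem.write(17, 250, 1)
capacity = mem.rows * mem.cols  # 160,000 bits, as reported

# Density check: a wire spacing of one-750,000th of an inch is about 34 nm,
# so one bit occupies roughly a 34 nm square, which works out close to the
# quoted figure of 100 billion bits per square centimeter.
pitch_cm = 2.54 / 750_000
bits_per_cm2 = 1 / pitch_cm ** 2  # about 8.7e10
```

The addressing scheme is the crossbar’s appeal: 800 wires suffice to reach 160,000 storage locations, since each bit is selected by one wire from each layer.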
While many researchers are looking for ways to make molecular-size electronics,
most are still building circuits containing only a handful of bits, compared
with the 160,000 in the new chip. That suggests the new process Dr. Heath and
Dr. Stoddart developed can be scaled up to a viable manufacturing process, said
Vivek Subramanian, a professor of electrical engineering and computer sciences
at the University of California, Berkeley.
“This is sort of the capstone in that they’ve pulled all this together,” said
Dr. Subramanian, who was not involved in the research.
Not everything about the chip works yet. When the researchers tested a small
part of it, they found that only 30 percent of the bits actually worked. But it
is possible to use only the working parts of the chip, and the researchers
successfully wrote and read information to those parts, though even there the
success was temporary. The switches routinely broke after being flipped about 10
times.
The researchers readily concede that their chip is merely a demonstration and is
not refined enough for any applications. “We’re just happy it works,” Dr. Heath
said.
Researchers Go Molecular
in Design of a Denser Chip, NYT, 25.1.2007,
What Won’t Be Possible?
October 31, 2006
The New York Times
By STEVE LOHR
Computer science is not only a comparatively
young field, but also one that has had to prove it is really science. Skeptics
in academia would often say that after Alan Turing described the concept of the
“universal machine” in the late 1930’s — the idea that a computer in theory
could be made to do the work of any kind of calculating machine, including the
human brain — all that remained to be done was mere engineering.
The more generous perspective today is that decades of stunningly rapid advances
in processing speed, storage and networking, along with the development of
increasingly clever software, have brought computing into science, business and
culture in ways that were barely imagined years ago. The quantitative changes
delivered through smart engineering opened the door to qualitative changes.
Computing changes what can be seen, simulated and done. So in science, computing
makes it possible to simulate climate change and unravel the human genome. In
business, low-cost computing, the Internet and digital communications are
transforming the global economy. In culture, the artifacts of computing include
the iPod, YouTube and computer-animated movies.
What’s next? That was the subject of a symposium in Washington this month held
by the Computer Science and Telecommunications Board, which is part of the
National Academies and the nation’s leading advisory board on science and
technology. Joseph F. Traub, the board’s chairman and a professor at Columbia
University, titled the symposium “2016.”
Computer scientists from academia and companies like I.B.M. and Google discussed
topics including social networks, digital imaging, online media and the impact
on work and employment. But most talks touched on two broad themes: the impact
of computing will go deeper into the sciences and spread more into the social
sciences, and policy issues will loom large, as the technology becomes more
powerful and more pervasive.
Richard M. Karp, a professor at the University of California, Berkeley, gave a
talk whose title seemed esoteric: “The Algorithmic Nature of Scientific
Yet he presented a fundamental explanation for why computing has had such a
major impact on other sciences, and Dr. Karp himself personifies the trend. His
research has moved beyond computer science to microbiology in recent years. An
algorithm, put simply, is a step-by-step recipe for calculation, and it is a
central concept in both mathematics and computer science.
“Algorithms are small but beautiful,” Dr. Karp observed. And algorithms are good
at describing dynamic processes, while scientific formulas or equations are more
suited to static phenomena. Increasingly, scientific research seeks to
understand dynamic processes, and computer science, he said, is the systematic
study of algorithms.
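A textbook algorithm makes Dr. Karp’s contrast concrete. Euclid’s procedure for the greatest common divisor (a standard illustration, not an example from the talk) is a step-by-step recipe describing a dynamic process, rather than a static formula for the answer.

```python
# Euclid's greatest-common-divisor procedure: a step-by-step recipe that
# repeatedly transforms a pair of numbers until the answer emerges. Each
# loop iteration is one step of the dynamic process.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

result = gcd(1071, 462)  # 21
```

The formula-versus-recipe distinction is exactly the one in the text: the value 21 is not computed from a closed-form expression but produced by running the process.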
Biology, Dr. Karp said, is now understood as an information science. And
scientists seek to describe biological processes, like protein production, as
algorithms. “In other words, nature is computing,” he said.
Social networks, noted Jon Kleinberg, a professor at Cornell, are
pre-technological creations that sociologists have been analyzing for decades. A
classic example, he noted, was the work of Stanley Milgram of Harvard, who in
the 1960’s asked each of several volunteers in the Midwest to get a letter to a
stranger in Boston. But the path was not direct: under the rules of the
experiment, participants could send a letter only to someone they knew. The
median number of intermediaries was six — hence, the term “six degrees of
separation.”
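Milgram’s “degrees of separation” corresponds to shortest-path length in a social graph, which breadth-first search computes directly. The tiny friendship graph below is invented for illustration.

```python
# Degrees of separation as shortest-path length, computed by breadth-first
# search over an undirected friendship graph. The graph is invented here.
from collections import deque

def degrees_of_separation(graph, start, target):
    """Return the number of hops from start to target, or -1 if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return hops
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return -1

friends = {
    "ann": ["bob"], "bob": ["ann", "cara"],
    "cara": ["bob", "dev"], "dev": ["cara"],
}
separation = degrees_of_separation(friends, "ann", "dev")  # 3 hops
```

On Internet-scale data the same computation runs over millions of nodes, which is what makes the “revolution in measurement” described below possible.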
But with the rise of the Internet, social networks and technology networks are
becoming inextricably linked, so that behavior in social networks can be tracked
on a scale never before possible.
“We’re really witnessing a revolution in measurement,” Dr. Kleinberg said.
The new social-and-technology networks that can be studied include e-mail
patterns, buying recommendations on commercial Web sites like Amazon, messages
and postings on community sites like MySpace and Facebook, and the diffusion of
news, opinions, fads, urban myths, products and services over the Internet. Why
do some online communities thrive, while others decline and perish? What forces
or characteristics determine success? Can they be captured in a computing
model?
Social networking research promises a rich trove for marketers and politicians,
as well as sociologists, economists, anthropologists, psychologists and others.
“This is the introduction of computing and algorithmic processes into the social
sciences in a big way,” Dr. Kleinberg said, “and we’re just at the beginning.”
But having a powerful new tool for tracking the online behavior of groups and
individuals also raises serious privacy issues. That became apparent this summer
when AOL inadvertently released Web search logs of 650,000 users.
Future trends in computer imaging and storage will make it possible for a
person, wearing a tiny digital device with a microphone and camera, to
essentially record his or her life. The potential for communication, media and
personal enrichment is striking. Rick Rashid, a computer scientist and head of
Microsoft’s research labs, noted that he would like to see a recording of the
first steps of his grown son, or listen to a conversation he had with his father
many years ago. “I’d like some of that back,” he said. “In the future, that will
be possible.”
But clearly, the technology could also enable a surveillance society. “We’ll
have the capability, and it will be up to society to determine how we use it,”
Dr. Rashid said. “Society will determine that, not scientists.”
Computing, 2016: What Won’t Be Possible?, NYT, 31.10.2006,
A Chip That Can Transfer Data
September 18, 2006
The New York Times
By JOHN MARKOFF
SAN FRANCISCO, Sept. 17 — Researchers plan to
announce on Monday that they have created a silicon-based chip that can produce
laser beams. The advance will make it possible to use laser light rather than
wires to send data between chips, removing the most significant bottleneck in
computer design.
As a result, chip makers may be able to put the high-speed data communications
industry on the same curve of increased processing speed and diminishing costs —
the phenomenon known as Moore’s law — that has driven the computer industry for
the last four decades.
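Moore’s law, invoked above, is the empirical observation that the number of transistors on a chip doubles roughly every two years. As a back-of-the-envelope sketch (the two-year doubling period and the Intel 4004 figure are general background, not from the article):

```python
def transistors(start_count, years, doubling_period=2.0):
    """Idealized Moore's-law projection: the count doubles once
    every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Intel's 4004 (1971) had roughly 2,300 transistors; four decades of
# doubling every two years projects into the billions:
print(f"{transistors(2300, 40):.2e}")  # ~2.41e+09
```

The real curve is not this smooth, but the exponential form is why four decades of it compound into a factor of about a million.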
The development is a result of research at Intel, the world’s largest chip
maker, and the University of California, Santa Barbara. Commercializing the new
technology may not happen before the end of the decade, but the prospect of
being able to place hundreds or thousands of data-carrying light beams on
standard industry chips is certain to shake up both the communications and
computer industries.
Lasers are already used to transmit high volumes of computer data over longer
distances — for example, between offices, cities and across oceans — using fiber
optic cables. But in computer chips, data moves at great speed over the wires
inside, then slows to a snail’s pace when it is sent chip-to-chip inside a
computer.
With the barrier removed, computer designers will be able to rethink computers,
packing chips more densely both in home systems and in giant data centers.
Moreover, the laser-silicon chips — composed of a spider’s web of laser light in
addition to metal wires — portend a vastly more powerful and less expensive
national computing infrastructure. For a few dollars apiece, such chips could
transmit data at 100 times the speed of laser-based communications equipment,
called optical transceivers, that typically cost several thousand dollars.
Currently fiber optic networks are used to transmit data to individual
neighborhoods in cities where the data is then distributed by slower
conventional wire-based communications gear. The laser chips will make it
possible to send avalanches of data to and from individual homes at far less
cost.
They could also give rise to a new class of supercomputers that could share data
internally at speeds not possible today.
The breakthrough was achieved by bonding a layer of light-emitting indium
phosphide onto the surface of a standard silicon chip etched with special
channels that act as light-wave guides. The resulting sandwich has the potential
to create on a computer chip hundreds and possibly thousands of tiny, bright
lasers that can be switched on and off billions of times a second.
“This is a field that has just begun exploding in the past 18 months,” said Eli
Yablonovitch, a physicist at the University of California, Los Angeles, a
leading researcher in the field. “There is going to be a lot more optical
communications in computing than people have thought.”
Indeed, the results of the development work, which will be reported in a coming
issue of Optics Express, an international journal, indicate that a high-stakes
race is under way worldwide. While the researchers at Intel and Santa Barbara
are betting on indium phosphide, Japanese scientists in a related effort are
pursuing a different material, the chemical element erbium.
Although commercial chips with built-in lasers are years away, Luxtera, a
company in Carlsbad, Calif., is already selling test chips that incorporate most
optical components directly into silicon and then inject laser light from an
external source.
The Intel-Santa Barbara work proves that it is possible to make complete
photonic devices using standard chip-making machinery, although not entirely out
of silicon. “There has always been this final hurdle,” said Mario Paniccia,
director of the Photonics Technology Lab at Intel. “We have now come up with a
solution that optimizes both sides.”
In the past it has proved impossible to couple standard silicon with the exotic
materials that emit light when electrically charged. But the university team
supplied a low-temperature bonding technique that does not melt the silicon
circuitry. The approach uses an electrically charged oxygen gas to create a
layer of oxide just 25 atoms thick on each material. When heated and pressed
together, the oxide layer fuses the two materials into a single chip that
conducts information both through wires and on beams of reflected light.
“Photonics has been a low-volume cottage industry,” said John E. Bowers,
director of the Multidisciplinary Optical Switching Technology Center at the
University of California, Santa Barbara. “Everything will change and laser
communications will be everywhere, including fiber to the home.”
Photonics industry experts briefed on the technique said that it would almost
certainly pave the way for commercialization of the long-sought convergence of
silicon chips and optical lasers. “Before, there was more hype than substance,”
said Alan Huang, a former Bell Laboratories researcher who is a pioneer in the
field and is now chief technology officer of the Terabit Corporation, a
photonics start-up company in Menlo Park, Calif. “Now I believe this will lead
to future applications in optoelectronics.”
Chip That Can Transfer Data Using Laser Light, NYT, 18.9.2006,
In 1981 these men
changed how we live
The IBM PC was born 25 years ago this week,
but not all of its inventors were as lucky as Bill Gates
Sunday August 6, 2006
David Smith, technology correspondent
'IBM Corporation today announced its smallest,
lowest-priced computer system - the IBM Personal Computer,' ran the press
release 25 years ago this week. 'Designed for business, school and home, the
easy-to-use system sells for as little as $1,565. It offers many advanced
features and, with optional software, may use hundreds of popular application
programs.'
On 12 August 1981 no one could guess quite how
profound an impact the announcement from International Business Machines would
have on hundreds of millions of lives. Nor how wildly divergent would be the
fortunes of three men who were there at the genesis of the IBM PC 5150 - an
invention to rank in importance with the motor car, telephone and television.
One of those men was David Bradley, 57, a member of the original 12 engineers
who worked on the secret project and who is still amazed by its profound
consequences, from email and iPods to Google and MySpace. Speaking from his home
in North Carolina last week, he said: 'Computers have improved the productivity
of office workers and become a toy for the home. I don't want to assert that the
PC invented the internet, but it was one of the preconditions.'
The man with perhaps most cause to toast the industry standard PC's 25th
birthday on Saturday, even more than the engineers who built it, is Bill Gates.
His software for the IBM PC, and nearly all the computers that followed it, made
him the world's richest man. But for IBM, the story was arguably one of defeat
snatched from the jaws of victory.
Bradley was also working on a similar machine when, in September 1980, he was
recruited to the IBM team and sent to Boca Raton in Florida to come up with a PC
that would rival the pioneering Apple II. A few months later the team had grown
and got its leader - Don Estridge, a photographer's son from Florida who had
worked for the army and Nasa. Racing against a 12-month deadline, the engineers
scoured the country for components, and asked Intel, then a manufacturer of
memory chips, to deliver the central processing unit, or 'brain'.
IBM also needed operating system software. The man in the right place at the
right time was a young geek who had dropped out of Harvard. Bill Gates of
Microsoft specialised in more modest computer languages but assured the IBM team
that he could come up with an operating system for their new machines in just a
few days. After Estridge's task force had left for their hotel, Gates went
around the corner to a tiny company which had written a system for the Intel
processor and bought it out for £26,000. He then customised the system for IBM
and sold it to them for £42,000. Critically, Gates retained the right to license
the system to other manufacturers who could, and would, clone the IBM design. A
quarter of a century later, he has an estimated wealth of £26bn.
IBM's failure to secure exclusive rights to Gates's software is often regarded
as a blunder comparable to that of the music executives who spurned The Beatles.
But Bradley disagrees, saying that there was a higher purpose - he and his
colleagues used 'open architecture', off-the-shelf parts which others could
acquire, and so defined a standard that allowed others to build compatible
machines capable of running the same software.
Experts generally regard this as the result of haste rather than altruism on
IBM's part, but Bradley points out that in the spirit of openness it published
technical manuals to explain how the PC worked. Unlike Apple, which stuck to its
proprietary system and lost the lion's share of the market, IBM made its PC an
invitation to rivals eager to imitate and improve upon it.
Bradley said: 'I believe the primary reason it was so successful is that it was
an open system. There was a microprocessor from Intel and an operating system
from Microsoft. We published everything we knew so that if you wanted to work on
an application program you had all the information to do it and you could be
reasonably confident IBM wouldn't change things later.
'The participation of the rest of the industry was important because IBM alone
could not possibly have invented all the applications that people would want.'
The IBM PC 5150 weighed 25lbs, stood just under six inches high and had 64
kilobytes of memory and a five-and-a-quarter inch floppy disk drive. Initial
sales forecasts expected 242,000 to be sold over five years, but the figure was
exceeded in a single month. It was a personal triumph for Estridge, the 'father of
the PC', but he would not live to see its full legacy in the democratisation of
computing.
On 2 August 1985 Estridge was on Delta Air Lines Flight 191 from Fort
Lauderdale, Florida approaching Dallas-Fort Worth airport. It was caught in a
freak wind and plummeted to the ground, bursting into flames. Of 152 passengers
on board, 128 died, including 48-year-old Estridge, his wife and several IBM
colleagues.
IBM was overtaken in the PC market by Compaq in 1994. IBM sold its PC division
to Chinese giant Lenovo for £628m last year. 'I'm sad and disillusioned that IBM
got out of the computer business since I was there at the very beginning,' added
Bradley. 'But as an IBM stockholder I think it was an extremely sensible
decision.'
Bradley quit IBM in 2004 after 28 years and lives in comfortable retirement. He
mused: 'I have no regrets about what happened. I was there when it was just a
glimmer in everybody's eye and it's a privilege to still be here to talk about
it. And no, I don't envy Bill Gates.'
1981 these men changed how we live, O, 6.8.2006,
Brainy Robots Start Stepping Into Daily Life
July 18, 2006
The New York Times
By JOHN MARKOFF
Robot cars drive themselves across the desert,
electronic eyes perform lifeguard duty in swimming pools and virtual enemies
with humanlike behavior battle video game players.
These are some fruits of the research field known as artificial intelligence,
where reality is finally catching up to the science-fiction hype. A half-century
after the term was coined, both scientists and engineers say they are making
rapid progress in simulating the human brain, and their work is finding its way
into a new wave of real-world products.
The advances can also be seen in the emergence of bold new projects intended to
create more ambitious machines that can improve safety and security, entertain
and inform, or just handle everyday tasks. At Stanford University, for instance,
computer scientists are developing a robot that can use a hammer and a
screwdriver to assemble an Ikea bookcase (a project beyond the reach of many
humans) as well as tidy up after a party, load a dishwasher or take out the
trash.
One pioneer in the field is building an electronic butler that could hold a
conversation with its master — à la HAL in the movie “2001: A Space Odyssey” —
or order more pet food.
Though most of the truly futuristic projects are probably years from the
commercial market, scientists say that after a lull, artificial intelligence has
rapidly grown far more sophisticated. Today some scientists are beginning to use
the term cognitive computing, to distinguish their research from an earlier
generation of artificial intelligence work. What sets the new researchers apart
is a wealth of new biological data on how the human brain functions.
“There’s definitely been a palpable upswing in methods, competence and
boldness,” said Eric Horvitz, a Microsoft researcher who is president-elect of
the American Association for Artificial Intelligence. “At conferences you are
hearing the phrase ‘human-level A.I.,’ and people are saying that without
blushing.”
Cognitive computing is still more of a research discipline than an industry that
can be measured in revenue or profits. It is pursued in various pockets of
academia and the business world. And despite some of the more startling
achievements, improvements in the field are measured largely in increments:
voice recognition systems with decreasing failure rates, or computerized cameras
that can recognize more faces and objects than before.
Still, there have been rapid innovations in many areas: voice control systems
are now standard features in midpriced automobiles, and advanced artificial
reasoning techniques are now routinely used in inexpensive video games to make the
characters’ actions more lifelike.
A French company, Poseidon Technologies, sells underwater vision systems for
swimming pools that function as lifeguard assistants, issuing alerts when people
are drowning, and the system has saved lives in Europe.
Last October, a robot car designed by a team of Stanford engineers covered 132
miles of desert road without human intervention to capture a $2 million prize
offered by the Defense Advanced Research Projects Agency, part of the Pentagon.
The feat was particularly striking because 18 months earlier, during the first
such competition, the best vehicle got no farther than seven miles, becoming
stuck after driving off a mountain road.
Now the Pentagon agency has upped the ante: Next year the robots will be back on
the road, this time in a simulated traffic setting. It is being called the
Urban Challenge.
At Microsoft, researchers are working on the idea of “predestination.” They
envision a software program that guesses where you are traveling based on
previous trips, and then offers information that might be useful based on where
the software thinks you are going.
Tellme Networks, a company in Mountain View, Calif., that provides voice
recognition services for both customer service and telephone directory
applications, is a good indicator of the progress that is being made in
relatively constrained situations, like looking up a phone number or
transferring a call.
Tellme supplies the system that automates directory information for toll-free
business listings. When the service was first introduced in 2001, it could
correctly answer fewer than 37 percent of phone calls without a human operator’s
help. As the system has been constantly refined, the figure has now risen to 74
percent.
More striking advances are likely to come from new biological models of the
brain. Researchers at the École Polytechnique Fédérale de Lausanne in Lausanne,
Switzerland, are building large-scale computer models to study how the brain
works; they have used an I.B.M. parallel supercomputer to create the most
detailed three-dimensional model to date of a column of 10,000 neurons in the
rat brain.
“The goal of my lab in the past 10 to 12 years has been to go inside these
little columns and try to figure out how they are built with exquisite detail,”
said Henry Markram, a research scientist who is head of the Blue Brain project.
“You can really now zoom in on single cells and watch the electrical activity.”
Blue Brain researchers say they believe the simulation will provide fundamental
insights that can be applied by scientists who are trying to simulate brain
functions.
Another well-known researcher is Robert Hecht-Nielsen, who is seeking to build
an electronic butler called Chancellor that would be able to listen, speak and
provide in-home concierge services. He contends that with adequate resources, he
could create such a machine within five years.
Although some people are skeptical that Mr. Hecht-Nielsen can achieve what he
describes, he does have one successful artificial intelligence business under
his belt. In 1986, he founded HNC Software, which sold systems to detect credit
card fraud using neural network technology designed to mimic biological circuits
in the brain. HNC was sold in 2002 to the Fair Isaac Corporation, where Mr.
Hecht-Nielsen is a vice president and leads a small research group.
Last year he began speaking publicly about his theory of “confabulation,” a
hypothesis about the way the brain makes decisions. At a recent I.B.M.
symposium, Mr. Hecht-Nielsen showed off a model of confabulation, demonstrating
how his software program could read two sentences from The Detroit Free Press
and create a third sentence that both made sense and was a natural extension of
the previous text.
For example, the program read: “He started his goodbyes with a morning audience
with Queen Elizabeth II at Buckingham Palace, sharing coffee, tea, cookies and
his desire for a golf rematch with her son, Prince Andrew. The visit came after
Clinton made the rounds through Ireland and Northern Ireland to offer support
for the flagging peace process there.”
The program then generated a sentence that read: “The two leaders also discussed
bilateral cooperation in various fields.”
Artificial intelligence had its origins in 1950, when the mathematician Alan
Turing proposed a test to determine whether or not a machine could think or be
conscious. The test involved having a person face two teleprinter machines, only
one of which had a human behind it. If the human judge could not tell which
terminal was controlled by the human, the machine could be said to be
intelligent.
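The setup Turing proposed can be sketched as a small program (the function names and framing here are illustrative, not from the article):

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Minimal sketch of Turing's imitation game.

    judge:        function taking a transcript dict and naming the terminal
                  ("A" or "B") it believes hides the human
    human_reply / machine_reply:
                  functions mapping a question to an answer
    Returns True if the judge correctly identified the human.
    """
    terminals = {"A": human_reply, "B": machine_reply}
    # Hide the assignment from the judge by swapping at random.
    if random.random() < 0.5:
        terminals["A"], terminals["B"] = terminals["B"], terminals["A"]
    # The judge only ever sees the anonymized transcripts.
    transcript = {name: [reply(q) for q in questions]
                  for name, reply in terminals.items()}
    guess = judge(transcript)
    return terminals[guess] is human_reply
```

If, over many rounds, judges do no better than chance at picking the human terminal, the machine passes the test in Turing's sense.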
In the late 1950’s a field of study emerged that tried to build systems that
replicated human abilities like speech, hearing, manual tasks and reasoning.
During the 1960’s and 1970’s, the original artificial intelligence researchers
began designing computer software programs they called “expert systems,” which
were essentially databases accompanied by a set of logical rules. They were
handicapped both by underpowered computers and by the absence of the wealth of
data that today’s researchers have amassed about the actual structure and
function of the biological brain.
Those shortcomings led to the failure of a first generation of artificial
intelligence companies in the 1980’s, which became known as the A.I. Winter.
Recently, however, researchers have begun to speak of an A.I. Spring emerging as
scientists develop theories on the workings of the human mind. They are being
aided by the exponential increase in processing power, which has created
computers with millions of times the power of those available to researchers in
the 1960’s — at consumer prices.
“There is a new synthesis of four fields, including mathematics, neuroscience,
computer science and psychology,” said Dharmendra S. Modha, an I.B.M. computer
scientist. “The implication of this is amazing. What you are seeing is that
cognitive computing is at a cusp where it’s knocking on the door of potentially
huge applications.”
At Stanford, researchers are hoping to make fundamental progress in mobile
robotics, building machines that can carry out tasks around the home, like the
current generation of robotic floor vacuums, only more advanced. The field has
recently been dominated by Japan and South Korea, but the Stanford researchers
have sketched out a three-year plan to bring the United States to parity.
At the moment, the Stanford team is working on the first steps necessary to make
the robot they are building function well in an American household. The team is
focusing on systems that will consistently recognize standard doorknobs and is
building robot hands to open doors.
“It’s time to build an A.I. robot,” said Andrew Ng, a Stanford computer
scientist and a leader of the project, called Stanford Artificial Intelligence
Robot, or Stair. “The dream is to put a robot in every home.”
Brainy Robots Start Stepping Into Daily Life,