
The Ever-Shrinking Transistor and the Invention of Google

Innovators are often unreasonable people: restless, quarrelsome, unsatisfied, and ambitious. Often, they are immigrants, especially on the west coast of America. Not always, though. Sometimes they can be quiet, unassuming, modest, and sensible stay-at-home types. The person whose career and insights best capture the extraordinary evolution of the computer between 1950 and 2000 was one such. Gordon Moore was at the centre of the industry throughout this period and he understood and explained better than most that it was an evolution, not a revolution. Apart from graduate school at Caltech and a couple of unhappy years out east, he barely left the Bay Area, let alone California. Unusually for a Californian, he was a native, who grew up in the small town of Pescadero on the Pacific coast just over the hills from what is now called Silicon Valley, going to San Jose State College for undergraduate studies. There he met and married a fellow student, Betty Whitaker.

As a child, Moore had been taciturn to the point that his teachers worried about it. Throughout his life he left it to partners like his colleague Andy Grove, or his wife, Betty, to fight his battles for him. “He was either constitutionally unable or simply unwilling to do what a manager has to do,” said Grove, a man toughened by surviving both Nazi and Communist regimes in his native Hungary. Moore’s chief recreation was fishing, a pastime that requires patience above all else. And unlike some entrepreneurs he was—and is, now in his 90s—just plain nice, according to almost everybody who knows him. His self-effacing nature somehow captures the point that innovation in computers was and is not really a story of heroic inventors making sudden breakthroughs, but an incremental, inexorable, inevitable progression driven by the needs of what Kevin Kelly calls “the technium” itself. He embodies that truth better than flamboyant figures like Steve Jobs, who managed to build a personality cult in a revolution that was not really about personalities.

Gordon Moore (left) and Robert Noyce founded Intel in 1968 when they left Fairchild Semiconductor.

In 1965 Moore was asked by an industry magazine called Electronics to write an article about the future. He was then at Fairchild Semiconductor, having been one of the “Traitorous Eight” who in 1957 had defected from the firm run by the dictatorial and irascible William Shockley to set up their own company, where they invented the integrated circuit of miniature transistors printed on a silicon chip. Moore and Robert Noyce would defect again to set up Intel in 1968. In the 1965 article Moore predicted that miniaturization of electronics would continue and that it would one day deliver “such wonders as home computers… automatic controls for automobiles, and personal portable communications equipment”. But that prescient remark is not why the article deserves a special place in history. It was this paragraph that gave Gordon Moore, like Boyle and Hooke and Ohm, his own scientific law:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years.

Moore was effectively forecasting the steady but rapid progress of miniaturization and cost reduction, doubling every year, through a virtuous circle in which cheaper circuits led to new uses, which would lead to more investment, which would lead to microchips that were cheaper still for the same computing power. The unique feature of this technology is that a smaller transistor not only uses less power and generates less heat, but can be switched on and off faster, so it works better and is more reliable. The faster and cheaper chips got, the more uses they found. Moore’s colleague Robert Noyce deliberately under-priced microchips so that more people would use them in more applications, thereby growing the market.

By 1975 the number of components on a chip had passed 65,000, just as Moore had forecast, and it kept on growing as the size of each transistor shrank and shrank, though in that year Moore revised his estimate of the rate of change to doubling the number of transistors on a chip every two years. By then Moore was chief executive of Intel and presiding over its explosive growth and the transition to making microprocessors, rather than memory chips: essentially programmable computers on single silicon chips. Calculations by Moore’s friend and champion, Carver Mead, showed that there was a long way to go before miniaturization hit a limit.
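
The arithmetic behind those forecasts is simple compounding. Here is a minimal sketch, assuming the roughly 64 components per chip from which Moore’s 1965 extrapolation started (that starting figure is an assumption for illustration, not a number given in this article):

    # Back-of-envelope compounding of Moore's Law (illustrative only).
    # Assumes ~64 components per chip in 1965, doubling yearly until 1975,
    # then doubling every two years after Moore's 1975 revision.

    def components(year, start_year=1965, start_count=64):
        if year <= 1975:
            return start_count * 2 ** (year - start_year)
        return components(1975) * 2 ** ((year - 1975) // 2)

    print(components(1975))   # 65,536: "passed 65,000" by 1975
    print(components(2015))   # ~68.7 billion: billions per chip, as below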

Moore’s Law kept on going not just for 10 years but for about 50 years, to everybody’s surprise. Yet it probably has now at last run out of steam. The atomic limit is in sight. Transistors have shrunk to less than 100 atoms across, and there are billions on each chip. Since there are now trillions of chips in existence, that means there are billions of trillions of transistors on Planet Earth. They are probably now within an order of magnitude of equalling the number of grains of sand on the planet. Most sand grains, like most microchips, are made largely of silicon, albeit in oxidized form. But whereas sand grains have random—and therefore probable—structures, silicon chips have highly non-random, and therefore improbable, structures.

Looking back over the half-century since Moore first framed his Law, what is remarkable is how steady the progression was. There was no acceleration, there were no dips and pauses, no echoes of what was happening in the rest of the world, no leaps as a result of breakthrough inventions. Wars and recessions, booms and discoveries, seemed to have no impact on Moore’s Law. Also, as Ray Kurzweil was to point out later, Moore’s Law in silicon turned out to be a progression, not a leap, from the vacuum tubes and mechanical relays of previous years: The number of switches delivered for a given cost in a computer trundled upwards, showing no sign of sudden breakthrough when the transistor was invented, or the integrated circuit. Most surprising of all, discovering Moore’s Law had no effect on Moore’s Law. Knowing that the cost of a given amount of processing power would halve in two years ought surely to have been valuable information, allowing an enterprising innovator to jump ahead and achieve that goal now. Yet it never happened. Why not? Mainly because each incremental stage was needed to work out how to reach the next one.

This was encapsulated in Intel’s famous “tick-tock” corporate strategy: the “tick” was a shrink of the manufacturing process in one year; the “tock” was a new chip design built on that refined process the following year, preparatory to the next shrink. But there was also a degree of self-fulfilling prophecy about Moore’s Law. It became a prescription for, not a description of, what was happening in the industry. Gordon Moore, speaking in 1976, put it this way:

This is the heart of the cost reduction machine that the semiconductor industry has developed. We put a product of given complexity into production; we work on refining the process, eliminating the defects. We gradually move the yield to higher and higher levels. Then we design a still more complex product utilizing all of the improvements, and put that into production. The complexity of our product grows exponentially with time.

Silicon chips alone could not bring about a computer revolution. For that, there needed to be new computer designs, new software, and new uses. Throughout the 1960s and 1970s, as Moore foresaw, there was a symbiotic relationship between hardware and software, as there had been between cars and oil. Each industry fed the other with innovative demand and innovative supply. Yet even as the technology went global, more and more the digital industry became concentrated in Silicon Valley, a name coined in 1971, for reasons of historical accident: Stanford University’s aggressive pursuit of defence research dollars led it to spawn a lot of electronics startups, and those startups gave birth to others, which spawned still others. Yet the role of academia in this story was surprisingly small. Though it educated many of the pioneers of the digital explosion in physics or electrical engineering, and though of course there was basic physics underlying many of the technologies, neither hardware nor software followed a simple route from pure science to applied.

Companies as well as people were drawn to the west side of San Francisco Bay to seize opportunities, catch talent and eavesdrop on the industry leaders. As the biologist and former vice-chancellor of Buckingham University, Terence Kealey, has argued, innovation can be like a club: you pay your dues and get access to its facilities. The corporate culture that developed in the Bay Area was egalitarian and open: In most firms, starting with Intel, executives had no reserved parking spaces, large offices, or hierarchical ranks, and they encouraged the free exchange of ideas sometimes to the point of chaos. Intellectual property hardly mattered in the digital industry: There was not usually time to get or defend a patent before the next advance overtook it. Competition was ruthless and incessant, but so were collaboration and cross-pollination.

The Intel 4004 was the world’s first microprocessor, released in 1971

The innovations came rolling off the silicon, digital production line: the microprocessor in 1971, the first video games in 1972, the TCP/IP protocols that made the Internet possible in 1973, the Xerox PARC Alto computer with its graphical user interface in 1974, Steve Jobs’s and Steve Wozniak’s Apple I in 1975, the Cray-1 supercomputer in 1976, the Atari video game console in 1977, the laser disc in 1978, the “worm”, ancestor of the first computer viruses, in 1979, the Sinclair ZX80 hobbyist computer in 1980, the IBM PC in 1981, Lotus 1-2-3 software in 1982, the CD-ROM in 1983, the word “cyberspace” in 1984, Stewart Brand’s Whole Earth ’Lectronic Link (the WELL) in 1985, the Connection Machine in 1986, the GSM standard for mobile phones in 1987, Stephen Wolfram’s Mathematica language in 1988, Nintendo’s Game Boy and Toshiba’s Dynabook in 1989, the World Wide Web in 1990, Linus Torvalds’s Linux in 1991, the film Terminator 2 in 1991, Intel’s Pentium processor in 1993, the Zip disc in 1994, Windows 95 in 1995, the Palm Pilot in 1996, the defeat of the world chess champion, Garry Kasparov, by IBM’s Deep Blue in 1997, Apple’s colourful iMac in 1998, Nvidia’s consumer graphics processing unit, the GeForce 256, in 1999, The Sims in 2000. And on and on and on.

It became routine and unexceptional to expect radical innovations every few months, an unprecedented state of affairs in the history of humanity. Almost anybody could be an innovator, because thanks to the inexorable logic unleashed and identified by Gordon Moore and his friends, the new was almost always automatically cheaper and faster than the old. So invention meant innovation too.

Not that every idea worked. There were plenty of dead ends along the way. Interactive television. Fifth-generation computing. Parallel processing. Virtual reality. Artificial intelligence. At various times each of these phrases was popular with governments and in the media, and each attracted vast sums of money, but proved premature or exaggerated. The technology and culture of computing were advancing by trial and error on a massive and widespread scale, in hardware, software and consumer products. Looking back, history endows the tryers who made the fewest errors with the soubriquet of genius, but for the most part they were lucky to have tried the right thing at the right time. Gates, Jobs, Brin, Page, Bezos, Zuckerberg were all products of the technium’s advance, as much as they were causes. In this most egalitarian of industries, with its invention of the sharing economy, a surprising number of billionaires emerged.

Again and again, people were caught out by the speed of the fall in cost of computing and communicating, leaving future commentators with a rich seam of embarrassing quotations to mine. Often it was those closest to the industry about to be disrupted who least saw it coming. Thomas Watson, the head of IBM, said in 1943 that “there is a world market for maybe five computers.” Tunis Craven, commissioner of the Federal Communications Commission, said in 1961: “there is practically no chance communications space satellites will be used to provide better telephone, telegraph, television or radio service inside the United States.” Marty Cooper, who has as good a claim as anybody to have invented the mobile phone, or cell phone, said, while director of research at Motorola in 1981: “Cellular phones will absolutely not replace local wire systems. Even if you project it beyond our lifetimes, it won’t be cheap enough.” Tim Harford points out that in the futuristic film Blade Runner, made in 1982, robots are so life-like that a policeman falls in love with one, but to ask her out he calls her from a payphone, not a mobile.

* * *

The surprise of search engines and social media

I use search engines every day. I can no longer imagine life without them. How on Earth did we manage to track down the information we needed? I use them to seek out news, facts, people, products, entertainment, train times, weather, ideas, and practical advice. They have changed the world as surely as steam engines did. In instances where they are not available, like finding a real book on a real shelf in my house, I find myself yearning for them. They may not be the most sophisticated or difficult of software tools, but they are certainly the most lucrative. Search is probably worth nearly a trillion dollars a year and has eaten the revenue of much of the media, as well as enabled the growth of online retail. Search engines, I venture to suggest, are a big part of what the Internet delivers to people in real life—that and social media.

I use social media every day too, to keep in touch with friends, family and what people are saying about the news and each other. Hardly an unmixed blessing, but it is hard to remember life without it. How on Earth did we manage to meet up, to stay in touch or to know what was going on? In the second decade of the 21st century social media exploded into the Internet’s biggest use and, after search, its second most lucrative, and it is changing the course of politics and society.

Yet here is a paradox. There is an inevitability about both search engines and social media. If Larry Page had never met Sergey Brin, if Mark Zuckerberg had not got into Harvard, then we would still have search engines and social media. Both already existed when they started Google and Facebook. Yet before search engines or social media existed, I don’t think anybody forecast that they would exist, let alone grow so vast, certainly not in any detail. Something can be inevitable in retrospect, and entirely mysterious in prospect. This asymmetry of innovation is surprising.

The developments of the search engine and social media follow the usual path of innovation: incremental, gradual, serendipitous, and inexorable; few eureka moments or sudden breakthroughs. You can choose to go right back to the posse of MIT defence-contracting academics, such as Vannevar Bush and J. C. R. Licklider, in the post-war period, writing about the coming networks of computers and hinting at the idea of new forms of indexing and networking. Here is Bush in 1945: “The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.” And here is Licklider in his influential essay, written in 1964, on “Libraries of the Future”, imagining a future in which, over the weekend, a computer replies to a detailed question: “Over the weekend it retrieved over 10,000 documents, scanned them all for sections rich in relevant material, analyzed all the rich sections into statements in a high-order predicate calculus, and entered the statements into the data base of the question-answering subsystem.” But frankly such prehistory tells you only how little they foresaw instant search of millions of sources. A series of developments in the field of computer software made the Internet possible, which made the search engine inevitable: time sharing, packet switching, the World Wide Web, and more. Then in 1990 the very first recognizable search engine appeared, though inevitably there are rivals to the title.

Its name was Archie, and it was the brainchild of Alan Emtage—a student at McGill University in Montreal—and two of his colleagues. This was before the World Wide Web was in public use: Archie indexed the file listings of public FTP archives rather than web pages. By 1993 Archie was commercialized and growing fast. Its speed was variable: “While it responds in seconds on a Saturday night, it can take five minutes to several hours to answer simple queries during a weekday afternoon.” Emtage never patented it and never made a cent.

By 1994 WebCrawler and Lycos were setting the pace with their new text-crawling bots, gathering links and key words to index and dump in databases. These were soon followed by AltaVista, Excite, and Yahoo!. Search engines were entering their promiscuous phase, with many different options for users. Yet still nobody saw what was coming. Those closest to the front still expected people to wander into the Internet and stumble across things, rather than arrive with specific goals in mind. “The shift from exploration and discovery to the intent-based search of today was inconceivable,” said Srinija Srinivasan, Yahoo!’s first editor-in-chief.
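
What those crawlers built was, in essence, an index mapping each word to the pages that contained it. A minimal sketch of the idea (the pages and words here are invented; no real engine’s code is implied) might look like this:

    # Toy inverted index of the kind early text-crawling engines built:
    # map each word to the set of pages containing it, then answer a
    # query by intersecting those sets. Illustrative only.

    from collections import defaultdict

    pages = {
        "page1.html": "cheap flights to montreal",
        "page2.html": "montreal weather this weekend",
        "page3.html": "cheap hotels near stanford",
    }

    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.split():
            index[word].add(url)

    def search(query):
        sets = [index[word] for word in query.split()]
        return sorted(set.intersection(*sets)) if sets else []

    print(search("cheap montreal"))   # ['page1.html']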

Larry Page and Sergey Brin, founders of Google Inc., September 2003

Then Larry met Sergey. Taking part in an orientation programme before joining graduate school at Stanford, a university addicted by then to spinning out tech companies, Larry Page found himself guided by a young student named Sergey Brin. “We both found each other obnoxious,” said Brin later. Both were second-generation academics in technology. Page’s parents were academic computer scientists in Michigan; Brin’s were a mathematician and an engineer in Moscow, then Maryland. Both young men had been steeped in computer talk, and hobbyist computers, since childhood.

Page began to study the links between web pages, with a view to ranking them by popularity, and had the idea, reportedly after waking from a dream in the night, of cataloguing every link on the exponentially expanding web. He created a web crawler to go from link to link, and soon had a database that ate up half of Stanford’s Internet bandwidth. But the purpose was annotating the web, not searching it. “Amazingly, I had no thought of building a search engine. The idea wasn’t even on the radar,” Page said. That asymmetry again.

By now Brin had brought his mathematical expertise and his effervescent personality to Page’s project, named BackRub, then PageRank, and finally Google, a misspelling of “googol”, the name for a huge number, which worked well as a verb. When they began to use it for search, they realized they had a much more intelligent engine than anything on the market, because it ranked sites that the world thought important enough to link to above those that merely happened to contain the key words. Page discovered that three of the four biggest search engines could not even find themselves online. As Walter Isaacson has argued:

Their approach was in fact a melding of machine and human intelligence. Their algorithm relied on the billions of human judgments made by people when they created links from their own websites. It was an automated way to tap into the wisdom of humans—in other words, a higher form of human–computer symbiosis.
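
The nub of the idea can be shown in a few lines: a page’s score is fed by the scores of the pages that link to it. This is a toy illustration in the spirit of PageRank, with an invented four-page web, not Google’s actual algorithm or data:

    # Toy link-based ranking in the spirit of PageRank (hypothetical graph).
    # A page's score depends on the scores of the pages linking to it,
    # so being linked to by important pages counts for more than keywords.

    links = {
        "A": ["B", "C"],   # page A links to pages B and C
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    damping = 0.85
    ranks = {page: 1.0 / len(links) for page in links}

    for _ in range(50):  # iterate until the scores settle
        ranks = {
            page: (1 - damping) / len(links)
            + damping * sum(ranks[p] / len(links[p]) for p in links if page in links[p])
            for page in links
        }

    for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))
    # C ranks highest: it is the page everyone else links to.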

Bit by bit, they tweaked the programs till they got better results. Both Page and Brin wanted to start a proper business, not just invent something that others would profit from, but Stanford insisted they publish, so in 1998 they produced their now famous paper ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’, which began: “In this paper, we present Google…” With eager backing from venture capitalists they set up in a garage and began to build a business. Only later were they persuaded by the investor Andy Bechtolsheim to make advertising the central generator of revenue.

Extracted from How Innovation Works: And Why It Flourishes In Freedom.

Matt Ridley is a British journalist and businessman. He is the author of several books, including The Red Queen (1994), Genome (1999), The Rational Optimist (2010), The Evolution of Everything (2015), and How Innovation Works: And Why It Flourishes In Freedom. You can follow him on Twitter at @mattwridley.

Comments

  1. Fun article, especially on the Google guys.

    For those who haven’t read it, I highly recommend Tom Wolfe’s article on Noyce.

  2. So if this article is strictly about transistors and Google, or strictly about transistors and computing, then let’s assume the following is irrelevant. But if the piece is about innovation and the transistor, I would like to point out, as a person with an MSEE, the neglect of any mention of analog electronics and digital signal processing, which depend upon math which has its roots in analog signalling. Here are some points to illustrate: (1) the core of a digital camera, and digital imaging in general, is an analog device, e.g. a charge-coupled device (CCD); (2) all communications depend upon physical principles which are analog, not digital. Thus the transmission of all communications is inherently analog, with digital codes ‘riding’, so to speak, on those analog signals. In the case of short-distance links such as USB, Firewire, etc. it is called baseband signalling (no modulation), but attention always has to be paid to dealing with the analog nature of nature. (3) the gathering of information from physical events or processes is an analog process. It has been hidden from lay readers that the progression of this ‘art’ has also evolved to the point of resolution into the sub-microvolt domain, very important for some types of sensors, some of which are suggested next. (4) the conversion of real-world measurement information into digital information is performed by analog devices called analog-to-digital converters, ubiquitously present for converting voice, imaging, seismic, astronomical, and high-energy physics data and all kinds of other data, down to the remaining charge on batteries in portable devices and their real-time power usage. I should by this point have indicated the parallel development of signalling and measurement alongside modern digital technology, which has been overlooked in this piece.

  3. Analog computing is neat as all hell. My wife and I went on an in-depth tour of the USS Missouri here on Oahu, and the main targeting computer for the massive battleship guns is run by an analog computer. It compensates aiming the gun in real-time with the movement of the waves and the ship on the water. Having worked in computing for almost 20 years now, I was super excited to see a real-world analog computer!

    My understanding is that both digital and analog have their place, but due to the lack of any meaningful storage medium, analog only has a use in real-time processing?

  4. I can answer that: as usual, nature has the last word, and nature is analog. The storage of data as bits in flash memory or EEPROM is the storage of charge, and the reading of charge level is done by analog sensing in the memory device. A 2-level flash memory device discriminates between two quantities of charge in a cell, for 1 bit of storage. A 4-level flash memory discriminates between 4 possible quantities of charge per cell, for 2 bits of storage per cell, 8-level for 3 bits/cell, etc. But to answer your question more directly, I don’t personally know of any application, now or in the past, where an analog value is stored, but I can imagine that it exists, as storage of a continuous value of charge on a capacitor, e.g. in analog computing. I can tell you that engine controllers (turbine, aircraft) from the ’70s were analog and very sophisticated. I was working for AiResearch in the ’70s, and cabin pressure control was done by analog controllers, and when my company started thinking about microprocessor-controlled cabin pressure, I thought it was a ridiculous idea. But that was when microprocessors were very new and development tools were not plentiful.

  5. In some (but not all) respects the semiconductor revolution stopped around 15 years ago. However, in other important respects it has continued and will continue.

    Single-thread, single-core performance and clock frequency peaked around 15 years ago and may have declined since then. One important point is that other sources don’t agree.

    However, even the contrary data sources show a severe slowdown in performance gains starting after 2000.

    The problems with this theory are at least 4-fold.

    1. Even if single-thread performance has peaked, the number of cores has exploded since 2000. These days you can find systems with 160 (not a typo) cores. For example, see “Ampere launches new chip built from ground up for cloud workloads”. To some degree, the astounding rise of multi-core architectures explains the rise of cloud (versus desktop) computing. It is certainly true that multi-core chips are entirely well-suited to the cloud. However, the rise of cloud computing has also been driven by the ramp-up of Internet speeds, and the inherent value of shared, hosted data. For example, most folks can’t manage the complexity of hosting email on their desktop. By contrast, using Gmail is easy. The value of multi-core machines on the desktop is limited (by comparison) but not zero. Most desktop applications are deeply single-threaded. However, a few (such as browsers) are not. These days even smartphones have multiple cores with some (but not overwhelming) value.
    2. New types of chips (systems, boards) have exploded since 2000, that do take advantage of multiple cores. For example you can now get an NVIDIA graphics card (Titan V) with 21 billion transistors and capable of 0.11 petaflops (peta means 1000 trillion). It is possible to get a graphics card with 4,608 cores. These systems were originally built for computer games, but have been adapted/adopted for AI. Much of the AI revolution of the last few years has been driven by exploitation of computer graphics cards. Computer graphics cards are roughly 200,000 times faster than they were in 2000.
    3. The revolution in specialized silicon for AI is very much underway at this time. Historically, computer graphics cards were used for AI. Now chips/cards/boards optimized for AI are being designed and built. Of course, they will work better than systems designed for games and adapted for AI. Future systems may have “millions” of cores (nodes), each of which will be quite simple, but more than good enough for AI (like neurons in the human brain).
    4. There is a long-term shift from Intel to ARM. This is mostly a business change, not a technology change… However, it has elements of a technology change in it. ARM has a mostly RISC (Reduced Instruction Set Computer) architecture. Intel (x86) has a mostly CISC (Complex Instruction Set Computer) architecture. ARM is far more efficient per-core, per-watt, per-joule, per-dollar, etc. than Intel (x86). Mostly this technology difference is hidden from users who run programs written in high-level languages. However, while the technology differences are mostly hidden, the economic differences are not.

    The bottom line is that the semiconductor revolution continues, albeit in different (and slower in some respects) ways. It will probably continue until around 2040. The impact of the semiconductor revolution will continue for many more decades thereafter. After all, Watt invented his famous steam engine in the later part of the 18th century. Chlorinated water came after 1900.
