The new worm in Phil Porras’s digital petri dish was announced in the usual way: a line of small black type against a white backdrop on one of his three computer screens, displaying just the barest of descriptors—time of arrival . . . server type . . . point of origin . . . nineteen columns in all.
17:52:00 . . . Win2K-f . . . 201.212.167.29 (NET.AR): PRIMA S.A, BUENOS AIRES, BUENOS AIRES, AR. (DSL) . . .
It was near the end of the workday for most Californians, November 20, 2008, a cool evening in Menlo Park. Phil took no notice of the newcomer at first. Scores of these digital infections were recorded on his monitor every day, each a simple line on his Daily Infections Log—actually, his “Multiperspective Malware Infection Analysis Page.” This was the 137th that day.
It had an Internet Protocol (IP) address from Argentina. Spread out across the screen were the infection’s vitals, including one column that noted how familiar it was to the dozens of antivirus (AV) companies that ride herd on malicious software (malware). Most infections were instantly familiar. For instance, the one just above was known to all 33 of the applicable AV vendors. The one before that: 35 out of 36.
This one registered a zero in the recognition column: 0 of 37. That was what caught his eye when he scanned his Log.
Zero.
Outside it was dark, but as usual Phil was still at his desk in a small second-story office on the grounds of SRI International, a busy hive of labs, hundreds of them, not far from Stanford University. It is a crowded cluster of very plain three-story tan-and-maroon buildings arrayed around small parking lots like rectangular building blocks. There is not a lot of green space. It is a node of condensed brainpower, one of the best-funded centers for applied science in the world, and with about seventeen hundred workers is the second-largest employer in Menlo Park. It began life as the Stanford Research Institute—hence the initials SRI—but it was spun off by the university forty years ago. It’s a place where ideas become reality, the birthplace of gizmos like the computer mouse, ultrasound imaging machines, and tiny robot drones. The trappings of Phil’s office are simple: a white leather couch, a lamp, and a desk, which is mostly taken up by his array of three computer monitors. On the walls are whiteboards filled with calculations and schematics and several framed photos of vintage World War II fighter planes, vestiges of a boyhood passion for model building. The view out his window, through a few leafy branches, is of an identical building across an enclosed yard. It could be any office in any industrial park in any state in America. But what’s remarkable about the view from behind Phil’s desk has nothing to do with what’s outside his window. It’s on those monitors. Spread out in his desktop array of glowing multicolored pixels is a vista of cyberspace equal to . . . say, the state of Texas.
One of the inventions SRI pioneered was the Internet. The research center is a cornerstone of the global phenomenon; it owned one of the first two computers formally linked together in 1969, the first strand of a web that today links billions. This was more than two decades before Al Gore popularized the term “information superhighway.” There at the genesis, every computer that connected to the nascent network was assigned its own 32-bit identity number or IP address, represented in four octets of ones and zeros. Today the sheer size of the Internet has necessitated a new system that uses 128-bit addresses. SRI ceded authority for assigning and keeping track of such things years ago, but it retains ownership of a very large chunk of cyberspace. Phil’s portion of it is a relatively modest, nothing-to-brag-about-but-damned-hard-to-get, “Slash 16,” a block of the original digital universe containing 65,536 unique IP addresses—in other words, the last two octets of its identity number are variable, so that there are two to the sixteenth (2^16) possible distinct addresses, one for each potential machine added to its network. It gives him what he calls “a large contact surface” on the Internet. He’s like a rancher with his boots propped on the rail on the front porch before a wide-open prairie with, as the country song says, miles of lonesome in every direction. It’s good for spotting intruders.
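For the arithmetic-minded, the size of such a block is easy to verify with a few lines of Python; the prefix below is a made-up example, not SRI’s actual allocation:

```python
# Counting the address space in a "Slash 16." The prefix here is a
# made-up example: the first two octets are fixed, the last two vary.
import ipaddress

block = ipaddress.ip_network("198.51.0.0/16")
print(block.num_addresses)         # 65536, i.e. 2**16
print(block[0], "...", block[-1])  # 198.51.0.0 through 198.51.255.255
```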
Phil’s specialty is computer security, or, rather, Internet security, because few computers today are not linked to others. Each is part of a network tied to another larger network that is in turn linked to a still larger one, and so on, forming an intricate invisible web of electrons that today circles the Earth and reaches even beyond it (if you count those wayfaring NASA robot vehicles sending back cool snapshots from mankind’s farthest reach into space). This web is the singular marvel of the modern age, a kind of global brain, the world at everyone’s fingertips. It is a tool so revolutionary that we have just begun to glimpse its potential—for good and for evil.
Out on his virtual front porch, Phil keeps his eyes peeled for trouble. Most of what he sees is routine, the viral annoyances that have bedeviled computer users everywhere for decades, illustrating the principle that any new tool, no matter how helpful, will also be used for harm. Viruses are responsible for such things as the spamming of your in-box with come-ons for penis enlargement or million-dollar investment opportunities in Nigeria. Some malware is designed to damage or destroy your computer, or threaten to do so unless you purchase a remedy (which turns out to be fake). When you get hit, you know it. But the newest, most sophisticated computer viruses, like the most successful biological viruses, have bigger ambitions, and are designed for stealth. They would be noticed only by the most technically capable and vigilant of geeks. For these, you have to be looking.
Anything new was enough to make Phil’s spine tingle. He had been working with computers since he was in high school in Whittier, California, and had sent away in 1984 for a build-it-yourself personal computer. Back then personal computers had begun to establish a wider market, but there were still small companies that catered to a fringe community of users, most of them teenagers, who were excited enough and smart enough to order kits and assemble the machines themselves, using them mostly to play games, or configuring them to perform simple household or business chores. Phil’s dad was an accountant, and his mom ran a care center for senior citizens, so he amazed them by programming his toy to handle time-consuming, monotonous tasks. But mostly he played games. He took computer classes in high school, contributing at least as much as he took away, and in college at the University of California, Irvine, he fell in with a group of like-minded geeks who amused themselves by showing off their programming skills. At the time—this was in the late 1980s—Sun Microsystems dominated the software world with SunOS, the Unix-based forerunner of Solaris, an operating system with a reputation for state-of-the-art security features. Phil and his friends engaged in a game of one-upmanship, hacking into the terminals in their college labs and playing pranks on one another. Some of the stunts were painful. Victims might lose a whole night of work because their opponent had remotely reprogrammed their keyboard to produce gibberish. So Phil’s introduction to computer warfare, even at this prank stage, had real consequences. It was a world where you either understood the operating system well enough to fend off an attack, or got screwed.
This kind of competition—mind you, these were very few geeks competing for very small stakes—nevertheless turned Phil into an aggressive expert in computer security. So much so that when he graduated, he had to go shopping for a professor at the graduate level who could teach him something. He found one in Richard Kemmerer at the University of California at Santa Barbara (UCSB), one of the few computer security academics in the country at the time, who quickly recognized Phil as more of a peer than a student. The way you capitalized on superior hacking skills in academia was to anticipate invasion strategies and devise ways of detecting and fending them off. Phil was soon recognized as an expert in the newly emerging field. Today, UCSB has one of the most advanced computer security departments in the world, but back in the early 1990s, Phil was it. When UNIX System V was purported to be the most secure operating system in the business, Phil cooked up fifty ways to break into it. When he was twenty years old, he was invited to a convention on computer security at SRI, where he presented his first attempts to design software that would auto-detect his impressive array of exploits. The research institute snapped him up when he finished his degree, and over the next two decades Phil’s expertise evolved with the industry.
Phil has seen malware grow from petty vandalism to major crime. Today it is often crafted by organized crime syndicates or, more recently, by nation-states. An effusive man with light brown skin and a face growing rounder as he approaches middle age, he wears thin-framed glasses that seem large for his face, and has thick brown hair that jumps straight up on top. Phil is a nice guy, a good guy. One might even say he’s a kind of superhero. In cyberspace, there really are bad guys and good guys locked in intense cerebral combat; one side cruises the Internet for pillage and plunder, the other to prevent it. In this struggle, Phil is nothing less than a giant in the army of all that is right and true. His work is filled with urgent purpose and terrific challenges, a high-stakes game of one-upmanship in a realm that few people comprehend. Like most people who love their work, Phil enjoys talking about it, to connect, to explain—but the effort is often doomed:
. . . So what we ended up doing is, see, we ended up becoming really good at getting ourselves infected. Like through a sandnet. Executing the malware. Finding the IRC site and channel that was being exploited by the botmaster and simply going after it. Talking to the ISP and directly attacking. Bringing it down. Bringing down the IRC server or redirecting all IRC communications to use . . .
He tries hard. He speaks in clipped phrases, ratcheting down his natural mental velocity. But still the sentences come fast. Crisp. To the point. You can hear him straining to avoid the tricky territory of broader context, but then, failing, inevitably, as his unstoppable enthusiasm for the subject matter slips out of low gear and he’s off at turbo speed into Wired World: . . . bringing down the IRC server . . . the current UTC date . . . exploiting the buffer’s capacity . . . utilizing the peer-to-peer mechanism . . . Suffice it to say, Phil is a man who has come face-to-face many times with the Glaze, the unmistakable look of profound confusion and uninterest that descends whenever a conversation turns to the inner workings of a computer.
The Glaze is familiar to every geek ever called upon to repair a malfunctioning machine—Look, dude, spare me the details, just fix it! Most people, even well-educated people with formidable language skills, folks with more than a passing knowledge of word-processing software and spreadsheets and dynamic graphical displays, people who spend hours every day with their fingertips on keyboards, whose livelihoods and even leisure-time preferences increasingly depend on fluency with a variety of software, remain utterly clueless about how any of it works. The innards of mainframes and operating systems and networks are considered not just unfathomable but somehow unknowable, or even not worth knowing, in the way that many people are content to regard electricity as voodoo. The technical side of the modern world took a sharp turn with the discovery of electricity, and then accelerated off the ramp with electromagnetism into the Realm of the Hopelessly Obtuse, so that everyday life has come to coexist in strict parallel with a mysterious techno dimension. Computer technology rubs shoulders with us every day, as real as can be, even vital, only . . . also . . . not real. Virtual. Transmitting signals through thin air. Grounded in machines with no visible moving parts. This techno dimension is alive with . . . what exactly? Well-ordered trains of electrons? Binary charges?
That digital ranch Phil surveys? It doesn’t actually exist, of course, at least not in the sense of dust and sand and mesquite trees and whirling buzzards and distant blue buttes. It exists only in terms of capacity, or potential. Concepts like bits and bytes, domain names, ISPs, IPAs, RPCs, P2P protocols, infinite loops, and cloud computing are strictly the province of geeks or nerds who bother to pay attention to such things, and who are, ominously, increasingly essential in some obscure and vaguely disturbing way to the smooth functioning of civilization. They remain, by definition, so far as the stereotype goes, odd, remote, reputed to be borderline autistic, and generally opaque to anyone outside their own tribe—They are mutants, born with abilities far beyond those of normal humans. The late M.I.T. professor Joseph Weizenbaum identified and described the species back at the dawn of the digital age, in his 1976 book Computer Power and Human Reason:
Wherever computer centers have become established, that is to say, in countless places in the United States, as well as in all other industrial regions of the world, bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at their computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: Cokes, sandwiches. If possible, they sleep on cots near the computer. But only for a few hours—then back to the console or printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and the world in which they move. They exist, at least when so engaged, only through and for computers. These are computer bums, compulsive programmers. They are an international phenomenon.
The Geek Tribe today has broadened to include a wider and more wholesome variety of characters—Phil played a lot of basketball in high school and actually went out with girls—and there is no longer any need for “printouts” to obsess over—everything is on-screen—but the Tribe remains international and utterly obsessed, linked 24/7 by email and a host of dedicated Internet chat channels. In one sense, it is strictly egalitarian. You might be a lonely teenager with pimples in some suburban basement, too smart for high school, or the CEO of some dazzling Silicon Valley start-up, but you can join the Tribe so long as you know your stuff. Nevertheless, its upper echelons remain strictly elitist; they can be as snobby as the hippest Soho nightclub. Some kind of sniff test applies. Phil himself, for instance, was kept out of the inner circle of geeks fighting this new worm for about a month, even though he and his team at SRI had been at it well before the Cabal came together, and much of the entire effort rested on their work. Access to a mondo mainframe or funding source might gain you some cachet, but real traction comes only with savvy and brainpower. In a way, the Tribe is as virtual as the cyberworld itself. Many members have known each other for years without actually having ever met in, like, real life. Phil seems happiest here, in the glow of his three monitors, plugged into his elite global confederacy of the like-minded.
The world they inhabit didn’t even exist, in any form, when Phil was born in 1966. At that point the idea of linking computers together was just that, an idea, and a half-baked one. It was the brainchild of a group of forward-thinking scientists at the Pentagon’s Advanced Research Projects Agency (ARPA). The agency was housed in and funded by the Pentagon, and this fact has led to false stories about the Internet’s origins, that it was official and military and therefore inherently nefarious. But ARPA was one of the least military enterprises in the building. Indeed, the agency was created and sustained as a way of keeping basic civilian research alive in an institution otherwise entirely focused on war. One of the things ARPA did was underwrite basic science at universities, supporting civilian academic scientists in projects often far afield from any obvious military application. Since at that time the large laboratories were using computers more and more, one consequence of coordinating ARPA’s varied projects was that it accumulated a variety of computer terminals in its Pentagon offices, each wired to mainframes at the different labs. Every one of these terminals was different. They varied in appearance and function, because each was a remote arm of the hardware and software peculiar to its host mainframe. Each had its own method of transferring and displaying data. ARPA’s Pentagon office had begun to resemble the tower of Babel.
Computers were then so large that if you bought one, you needed a loading dock to receive it, or you needed to lift off the roof and lower it into position with a crane. Each machine had its own design and its own language and, once it had been put to work in a particular lab, its own culture, because each was programmed and managed to perform certain functions peculiar to the organization that bought it. Most computers were used to crunch numbers for military or scientific purposes. As with many new inventions that have vast potential, those who first used them didn’t look far past their own immediate needs, which were demanding and remarkable enough, like calculating the arc through the upper atmosphere of a newly launched missile, or working out the variable paths of subatomic particles in a physics experiment. Computers were very good at solving large, otherwise time-consuming calculations very quickly, thus enabling all kinds of amazing technological feats, not the least of which was to steer six teams of astronauts to the surface of the moon and back.
Most thinkers were busy with all of the immediate miracles computers had made suddenly doable; only those at the farthest speculative frontiers were pondering the machines’ broader possibilities. The scientists at ARPA, J. C. R. Licklider and Bob Taylor and Larry Roberts, as described in Where Wizards Stay Up Late, by Katie Hafner and Matthew Lyon, were convinced that the computer might someday be the ultimate aid to human intelligence, that it might someday be, in a sense, perched on mankind’s shoulder making instant connections that few would have the knowledge, experience, or recall to make on their own, connecting minds around the world in real time, providing instant analysis of concepts that in the past might require years of painstaking research. The first idea was just to share data between labs, but it was only a short leap from sharing data to sharing resources: in other words, enabling a researcher at one lab to tap into the special capabilities and libraries of a computer at a distant one. Why reinvent a program on your own mainframe when it was already up and running elsewhere? The necessary first step in this direction would be linkage. A way had to be found to knit the independent islands of computers at universities and research centers into a functional whole.
There was resistance. Some of those operating mainframes, feeling privileged and proprietary and comfortably self-contained, saw little or no advantage in sharing them. For one thing, competition for computing time in the big labs was already keen. Why invite more competition from remote locations? Since each mainframe spoke its own language, and many were made by competing companies, how much time and effort and precious computing power would it take to enable smooth communication? The first major conceptual breakthrough was the idea of building separate computers just to resolve these issues. Called Interface Message Processors (IMPs), they grew out of an idea floated by Washington University professor Wesley Clark in 1967: instead of asking each computer operator to design protocols for sending and receiving data to every other computer on the net, why not build a subnet just to manage the traffic? That way each host computer would need to learn only one language, that of the IMP. And the IMPs would manage the routing and translating problems. This idea even dangled before each lab the prospect of a new mainframe to play with at no extra cost, since the government was footing the bill. It turned an imposition into a gift. By the early 1970s, there were dozens of IMPs scattered around the country, a subnet, if you will, managing traffic on the ARPANET. As it happens, the first two computers linked in this way were a Scientific Data Systems (SDS) 940 model in Menlo Park, and an older model, SDS Sigma-7, at UCLA. That was in October 1969. Phil Porras was just out of diapers.
The ARPANET’s designers had imagined resource- and data-sharing as its primary purpose, and a greatly simplified way to coordinate the agency’s scattered projects, but as the authors of new life-forms have always discovered, from God Almighty to Dr. Frankenstein, the creature immediately had ideas of its own. From its earliest days, the Internet was more than the sum of its parts. The first hugely successful unforeseen application was email, the ability to send messages instantly anywhere in the world, followed closely by message lists, or forums that linked in real time those with a shared interest, no matter where they were. Message lists or chat lines were created for disciplines serious and not so serious—the medieval game “Dungeons and Dragons” was a popular early topic. By the mid-1970s, at about the time microcomputers were first being marketed as build-it-yourself kits (attracting the attention of Harvard undergrad Bill Gates and his high school friend Paul Allen), the ARPANET had created something new and unforeseen: in the words of Hafner and Lyon, “a community of equals, many of whom had never met each other yet who carried on as if they had known each other all of their lives . . . perhaps the first virtual community.”
This precursor web relied on telephone lines to carry information, but in short order computers were being linked by radio (the ALOHANET in Hawaii connected computers on four islands in this way) and increasingly by satellite (the quickest way to connect computers on different continents). Pulling together this rapidly growing variety of networks meant going back to the idea of the IMP: creating a new subnet to facilitate linkage—call it a sub-subnet, or a network of networks. Computer scientists Vint Cerf of Stanford and Bob Kahn of ARPA presented a paper in 1974 outlining a new method for moving data between these disparate systems, called Transmission Control Protocol, or TCP. It was another eureka moment. It enabled any computer network established anywhere in the world to plug into the growing international system, no matter how it transmitted data.
All of this was happening years before most people had ever seen an actual computer. For its first twenty years, the Internet remained the exclusive preserve of computer scientists and experts at military and intelligence centers, but it was becoming increasingly clear to them that the tool had broader application. Today it serves more than two billion users around the world, and has increasingly become the technological backbone of modern life.
Its growth has been bottom-up, in that beyond ad hoc efforts to shape its technical undergirding, no central authority has dictated its structure or imposed rules or guidelines for its use. This has generated a great deal of excitement among social theorists. The assignment of domain names and IP addresses was handed off by SRI in 1998 to the closest thing the Internet has to a governing body, the Internet Corporation for Assigned Names and Numbers (ICANN). Headquartered in Marina del Rey, California, ICANN is strictly nonprofit and serves little more than a clerical role, but, as we shall see, is capable of exerting important moral authority in a crisis. Domain names are the names (sometimes just numbers) that a user selects to represent his presence on the Internet—yahoo.com, nytimes.com, etc. Many domains also establish a website, a “page” or visible representation of the domain’s owner, be it an individual, a corporation, an agency, an institution, or whatever. Not all domains establish websites. The physical architecture of the Internet rests on thirteen root servers, labeled A, B, C . . . through M. Ten of these are in the United States, and one each is in Great Britain, Japan, and Sweden.* The root-server operators maintain very large computers to direct the constant flow of data worldwide. The root servers also maintain comprehensive and dynamic lists of domain-name servers, which keep the flow moving in the right direction, from nanosecond to nanosecond.
* This is a simplification, and is not exactly true, in the sense of there being physically thirteen servers at those locations acting as central switchboards for the Internet. Like all things in cyberspace . . . it’s complicated. Here’s how Paul Vixie attempted to explain it to me: “There are thirteen root name servers on which all traffic on the Internet depends, but what we’re talking about are root name server identities, not actual machines. Each one has a name, like mine, which is f.root-servers.net. A few of them are actual servers. Most of them are virtual servers, mirrored or replicated in dozens of places. Each root server is vital, sort of, to every, sort of, message, sort of. They are vital (but not necessarily involved) in every TCP/IP [Transmission Control Protocol/Internet Protocol] connection, since every TCP/IP connection depends on DNS [Domain Name System], and DNS depends on the root name servers. But the root name servers are not in the data path itself. They do not carry other people’s traffic, they just answer questions. The most frequent question we hear is, ‘What is the TCP/IP address for www.google.com?’ and the most frequent answer we give is ‘I dunno but I will tell you where the .COM servers are and you can ask them.’ Once a TCP/IP connection is set up, DNS is no longer involved. If a browser or email system makes a second or subsequent connection to the same place in a short time, it’ll have the TCP/IP address saved in a cache, and DNS won’t be involved. A root name server is an Internet resource having a particular name and address. But it’s possible to offer the same resource at the same name and address from multiple locations. f.root-servers.net, which is my root name server, is located in fifty or so cities around the globe, each independent of the others but all sharing an identity.” Got that?
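The question-and-answer exchange Vixie describes can be watched from any connected machine. Here is a minimal sketch using Python’s standard library; the delegation-walking and caching he mentions happen inside the operating system’s resolver and the name servers it queries, not in this code:

```python
# One DNS question: "What is the TCP/IP address for www.google.com?"
# The OS's stub resolver hands it to a recursive name server, which
# walks the delegation chain (root -> .com -> google.com) on our
# behalf and caches each answer it collects along the way.
import socket

print(socket.gethostbyname("www.google.com"))

# A second lookup moments later typically never troubles the root
# servers at all: the recursive resolver answers from its cache.
```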
The system works more like an organism than any traditional notion of a machine. The best effort at a visual illustration was created by researchers at Bar-Ilan University in Israel, who produced a gorgeous image that resembles nothing so much as a single cell. It shows a dense glowing orange nucleus of eighty or so central nodes surrounded by a diffuse protoplasmic periphery of widely scattered yellow-and-green specks representing isolated smaller nodes, encircled by a dense blue-and-purple outer wall or membrane of directly linked, peer-to-peer networks. The bright hot colors indicate high-traffic links, like root servers or large academic, government, or corporate networks; the cooler blues and purples of the outer membrane suggest the low-traffic networks of local Internet Service Providers (ISPs) or companies. There is something deeply suggestive in this map, reminiscent of what Douglas Hofstadter called a “strange loop” in his classic work, Gödel, Escher, Bach, the notion that a complex system tends toward self-reference, and inevitably rises toward consciousness. It is possible, gazing at this remarkable picture of the working Internet, to imagine it growing, multiplying, diversifying, and some day, in some epochal instant, blinking and turning and looking up, becoming alive. Planetary consciousness. The global I.
The Internet is not about to wink at us just yet, but it helps explain some of the reverence felt by those engaged in conceptualizing, building, and maintaining the thing. It represents something entirely new in human history, and is the single most remarkable technological achievement of our age. Scientists discovered the great advantage of sharing lab results and ideas instantaneously with others in their field all over the world, and grew excited about the possibilities of tying large networks together to perform unprecedented research. Social theorists awoke to the thing’s potential, and a new vision of a techno utopia was born. All human knowledge at everyone’s fingertips! Ideas shared, critiqued, tested, and improved! Events in the most remote corners of the world experienced everywhere simultaneously! The web would be a repository for all human knowledge, a global marketplace for products and ideas, a forum for anything that required interaction, from delicate international diplomacy to working out complex differential equations to buying office supplies—and it would be entirely free of regulation and control! Governments would be powerless to censor information. Journalism and publishing and research would no longer be in the hands of a wealthy few. Secrets would be impossible to keep! The Internet promised a truly global egalitarian age. That was the idea, anyway. The international and unstructured nature of the thing was vital to these early Internet idealists. If knowledge is power, then power at long last would reside where it belonged, with the people, all people! Tyrants and oligarchs would tremble! Bureaucracy would be streamlined! Barriers between nation-states and cultures would crumble! Humankind would at last be . . . !
. . . you get the picture.
Some of this was undeniable. Few innovations have taken root so fast internationally, and few have evolved in such an unfettered, democratic way. The Internet has made everyone, in a virtual sense, a citizen of the world, a development that has already had profound consequences for millions, and is sure to have more. But in their early excitement, the architects of the Internet may have overvalued its anarchic essence. When the civilian Internet began taking shape, mostly connecting university labs to one another, the only users were people who understood computers and computer languages. Techno-utopia! Everyone can play! Information for free! Complete transparency! No one wrote rules for the net; instead, people floated “Requests for Comment.” Ideas for keeping the thing working smoothly were kicked around by everyone until a consensus arose, and given the extreme flexibility of software, anything adopted could readily be changed. Nobody was actually in charge. This openness and lack of any centralized control is both a strength and a weakness. If no one is ultimately responsible for the Internet, then how do you police and defend it? Unless everyone using the thing is well-intentioned, it is vulnerable to attack, and can be used as easily for harm as for good.
Even though it has become a part of daily life, the Internet itself remains a cloudy idea to most people. It’s nebulous in a deeper way than previous leaps in home technology. Take the radio. Nobody knew how that worked, but you could picture invisible waves of electromagnetic particles arriving from the distance like the surf, distant voices carried forth on waves from the edges of the earth and amplified for your ears. If you lived in a valley or the shadow of a big building, the mountains or the walls got in the way of the waves; if you lived too far from the source of the signal, then the waves just petered out. You got static, or no sound. A fellow could understand that much. Or TV . . . well, nobody understood that, except that it was like the damn radio only the waves, the invisible waves, were more complex, see, and hence delivered pictures, too, and the sorting mechanism in the box, the transistors or vacuum tubes or some such, projected those pictures inside the tube. In either case you needed antennae to pick up the waves and vibrate just so. There was something going on there you could picture, even if falsely. But the Internet is just there. It is all around us, like the old idea of luminiferous ether. No antenna. No waves—at least, none of the kind readily understood. And it contains not just a voice or picture, but . . . the whole world and everything in it: pictures, sounds, text, movies, maps, art, propaganda, music, news, games, mail, whole national libraries, books, magazines, newspapers, sex (in varieties from enticing to ghastly), along with close-up pictures of Mars and Jupiter, your long-forgotten great-aunt Margaret, the menu at your local Thai restaurant, everything you ever heard of and plenty you had not ever dreamed about, all of it just waiting to be plucked out of thin air.
Behind his array of three monitors in Menlo Park, Phil Porras occupies a desk in the very birthplace of this marvel, and sees it not in some vague sense, but as something very real, comprehensible, and alarmingly fragile. By design, a portion of the virtual ranch he surveys is left unfenced and undefended. It is thus an inviting target for every free-roaming strain of malware trolling cyberspace. This is his petri dish, or honeynet. Inside the very large computer he gets to play with, Porras creates a network of “virtual computers.” These are not physical machines, just individual operating systems within the large computer that mimic the functions of distinct, small ones. Each has its own IP address. So Phil can set up the equivalent of a computer network that exists entirely within the confines of his digital ranch. These days if you leave any computer linked to the Internet unprotected, you can just sit back and watch it “get popped” or “get pwned,” in the parlance. (The unpronounceable coinage “pwned” was an example of puckish hacker humor: geeks are notoriously bad spellers, and someone early on in the malware wars had typed “p” instead of “o” in the word “owned.” It stuck.) If you own an Internet space as wide as SRI’s, you can watch your virtual computers get pwned every few minutes.
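SRI’s honeynet is vastly more elaborate, but the core idea, an undefended listener that records whoever comes knocking, can be sketched in a few lines. This toy version only logs connection attempts; the port choice and log format are illustrative, and binding a low-numbered port requires administrator privileges:

```python
# A toy one-port honeypot: accept anything that connects, log it, hang up.
# Real honeynets (like SRI's) emulate whole vulnerable services across
# thousands of addresses; this does nothing but record who knocks.
import socket
from datetime import datetime, timezone

PORT = 445  # the Windows service port the new worm was probing

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", PORT))
listener.listen(5)

while True:
    conn, (src_ip, src_port) = listener.accept()
    stamp = datetime.now(timezone.utc).strftime("%H:%M:%S")
    print(f"{stamp} . . . connection attempt . . . {src_ip}:{src_port}")
    conn.close()  # log the contact; don't speak the protocol
```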
Like just about everything in this field, the nomenclature for computer infections is confusing, because normal folk tend to use the terms “virus” and “worm” interchangeably, while the Tribe defines them differently. To make matters worse, the various species in the growing taxonomy sometimes cross-pollinate. The overarching term “malware” refers to any program that infects a computer and operates without the user’s consent. For the purposes of this story, the difference between a “virus” and a “worm” is in the way each spreads. To invade a computer, a virus relies on human help such as clicking unadvisedly on an unsolicited email attachment, or inserting an infected floppy disk or thumb drive into a vulnerable computer. A worm, on the other hand, is state of the art. It can spread all by itself.
The new arrival in Phil’s honeynet was clearly a worm, and it began to attract the Tribe’s attention immediately. After that first infection at 5:52 p.m. Thursday there came a few classic bits of malware, and then the newcomer again. And then again. And again. The infection rate kept accelerating. By Friday morning, Phil’s colleague Vinod Yegneswaran had notified him that their honeynet was under significant attack. By then, very little else was showing on the Infections Log. The worm was spreading exponentially, crowding in so fast that it shouldered aside all the ordinary daily fare. If the typical inflow of infection was like a steady drip from a faucet, this new strain seemed shot out of a fire hose.
Its most obvious characteristics were familiar at a glance. The worm was targeting—Phil could see this on his Log—Port 445 of the Windows Operating System, the most commonly used operating software in the world, causing a buffer at that port to overflow, then corrupting its execution in order to burrow into the host computer’s memory. Whatever this strain was, it was the most contagious he had ever seen. It turned each new machine it infected into a propagation demon, rapidly scanning for new targets, reaching out voraciously. Soon he began to hear from others in the Tribe, who were seeing the same thing. They were watching it flood in from Germany, Japan, Colombia, Argentina, and various points around the United States. It was a pandemic.
Months later, when the battle over this worm was fully joined, Phil would check with his friends at the University of California, San Diego (UCSD), who operate a supercomputer that owns a “darknet,” or a “black hole,” a continent-size portion of cyberspace. Theirs is a “slash eight,” which amounts to one 256th of the entire Internet. Any random scanning worm like this new one would land in UCSD’s black hole once every 256 times it launched from a new source. When they went looking, they found that the first Conficker scan attempt had hit them three minutes before the worm first hit Phil’s honeynet. The source for their infection would turn out to be the same—the IP address in Buenos Aires. The address itself didn’t mean much. Most Internet service providers assign a new IP address each time a machine connects to the network. But behind that number on that day had been the original worm, possibly its author but more likely a drone computer under his control.
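The one-in-256 figure is straightforward address arithmetic: a “slash eight” fixes the first of the four octets, leaving 2^24 of the 2^32 possible IPv4 addresses. A quick check, modeling the worm’s scanning as uniformly random (a simplifying assumption):

```python
# Why a /8 darknet catches one in every 256 uniformly random scans:
# fixing the first octet leaves 2**24 of the 2**32 IPv4 addresses.
import random

DARKNET = 2 ** 24    # addresses in a "slash eight"
INTERNET = 2 ** 32   # the whole IPv4 space

print(DARKNET / INTERNET)  # 0.00390625, i.e. exactly 1/256

# Simulate a random-scanning worm: count scans landing in a darknet
# occupying (for convenience) the first 2**24 addresses.
hits = sum(random.randrange(INTERNET) < DARKNET for _ in range(1_000_000))
print(hits)  # on the order of 1_000_000 / 256, about 3,900
```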
The honeynets at SRI and at UCSD were designed to snare malware in order to study it. But the worm wasn’t just cascading into their networks. This was a worldwide digital blitzkrieg. Existing firewalls and antiviral software didn’t recognize it, so they weren’t slowing it down. The next questions were: Why? What was it up to? What was the worm’s purpose?
The most likely initial guess was that it was building a botnet. Not all worms assemble botnets, but worms are very well suited to the task, and botnet-building would explain the extraordinary propagation rate. The term “bot” is short for “robot.” Various kinds of malware turn computers into slaves controlled by an illicit, outside operator. Programmers, who as a class share a weakness for sci-fi and horror films, also call them zombies. In the case of this new worm, the robot analogy is more apt.
Imagine your computer as a big spaceship, like the starship Enterprise on Star Trek. The ship is so complex and sophisticated that even an experienced commander like Captain James T. Kirk has only a general sense of how every facet of it works. From his wide swivel chair on the bridge, he can order it to fly, maneuver, and fight, but he cannot fully control or even comprehend all its inner workings. The ship contains many complex, interrelated systems, each with its own function and history—systems for, say, guidance, maneuvers, power, air and water, communications, temperature control, weapons, defensive measures, etc. Each system has its own operator, performing routine maintenance, exchanging information, making fine adjustments, keeping it running or ready. When idling or cruising, the ship essentially runs itself without a word from Captain Kirk. It obeys when he issues a command, and then returns to its latent mode, busily doing its own thing until the next time it is needed.
Now imagine a clever invader, an enemy infiltrator, who does understand the inner workings of the ship. He knows it well enough to find a portal with a broken lock overlooked by the ship’s otherwise vigilant defenses—like, say, a flaw in Microsoft’s operating platform. So no one notices when he slips in. He trips no alarm, and then, to prevent another clever invader from exploiting the same weakness, he repairs the broken lock and seals the portal shut behind him. He improves the ship’s defenses. Ensconced securely inside, he silently sets himself up as the ship’s alternate commander. The Enterprise is now a “bot.” The invader enlists the various operating functions of the ship to do his bidding, careful to avoid tripping any alarms. Captain Kirk is still up on the bridge in his swivel chair with the magnificent instrument arrays, unaware that he now has a rival in the depths of his ship. The Enterprise continues to perform as it always did. Meanwhile, the invader begins surreptitiously communicating with his own distant commander, letting him know that he is in position and ready, waiting for instructions.
And now imagine a vast fleet, in which the Enterprise is only one ship among millions, all of them infiltrated in exactly the same way, each ship with its hidden pilot, ever alert to an outside command. In the real world, this infiltrated fleet is called a “botnet,” a network of infected, “robot” computers. The first job of a botnet-assembling worm is to infect and link together as many computers as possible. Thousands of botnets exist, most of them relatively small—a few tens of thousands or a few hundreds of thousands of infected computers. More than a billion computers are in use around the world, and by some estimates, a fourth of them have been joined to a botnet.
Most of us still think of the threat posed by malware in terms of what it might do to our personal computer. When the subject comes up, the questions are: How do I know if I’m infected? How do I get rid of the infection? But modern malware is aimed less at exploiting individual computers than exploiting the Internet. A botnet-creating worm doesn’t want to harm your computer; it wants to use it.
Botnets are exceedingly valuable tools for criminal enterprise. Among other things, they can be used to efficiently distribute malware, to steal private information from otherwise secure websites or computers, to assist in fraudulent schemes, or to launch Distributed Denial of Service (DDoS) attacks—overwhelming a targeted server with a flood of requests for response. If you control even a minor botnet, one with, say, twenty thousand computers, you own enough computing power to shut down most business networks. The creator of an effective botnet, one with a wide range and the staying power to defeat security measures, can use it himself for one of the above scams, or he can sell or lease it to people who will. Botnets are traded in underground markets online. Customers shop for specific things, like, say, fifty computers that belong to the FBI, or a thousand computers that are owned by Google, or Bank of America, or the U.S. or British military. The cumulative power of a botnet has been used to extort protection money from large business networks, which will sometimes pay to avoid a crippling DDoS attack. Botnets can also be used to launder money. Opportunity for larceny and sabotage is limited only by the imagination and skill of the botmaster.
If the right orders were given, and all bots in a large net worked together in one concerted effort, they could crack most codes, break into and plunder just about any protected database in the world, and potentially hobble or even destroy almost any computer network, including networks that make up a country’s vital modern infrastructure: systems that control banking, telephones, energy flow, air traffic, health-care information—even the Internet itself. Because the idea of the Internet is so nebulous, it is hard for most people, even in positions of public responsibility, to imagine it under attack, or destroyed. Those who specialize in cybersecurity face a wall of incomprehension and disbelief when they sound an alarm. It is as if this dangerous weapon pointed at the vitals of the digital world is something only they can see. And in recent years they have faced a new problem . . . amusement. The alarm has been sounded falsely too often—take the widespread fear of an international computer meltdown at the turn of the millennium, the Y2K phenomenon, which did not happen. This has conditioned the popular press to regard warnings from the Tribe in the same way it regards periodic predictions of the apocalypse from wacky televangelists. The news tends to be reported with a knowing wink, as if to say: And here’s your latest prediction of divine wrath and global destruction from the guys who wear those funny plastic protectors in their shirt pockets. Take it as seriously as you wish. Oddly, as the actual threat posed by malware grew, it became harder and harder to get people, even people in responsible positions, to take it seriously.
If yours is one of the infected machines, you are like Captain Kirk, seemingly in full command of your ship, unaware that you have a hidden rival, or that your computer is part of this vast robot fleet. The worm inside your machine is not idle. It is stealthily running, scanning for other computers to infect, issuing small maintenance commands, working to protect itself from being discovered and removed, biding its time, and periodically checking in with its command center. The threat posed by a botnet is less to individual computer owners than to society at large. The Internet today is in its Wild West stage. You link to it at your own risk.
Phil had no way to stop the spread of this new worm. He could only study it. And he could tell little about it at first. He knew roughly where his first sample had come from, and that it was something unrecognized. He knew that it was a genius of a propagator. It had one other curious feature that he had never seen. It had a geographic look-up capability: this worm wanted to know where the machine it had just infected was located in the real world.
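How a geographic look-up works in principle is simple enough: map the infected machine’s IP address into a table of address ranges and countries. The table and function below are hypothetical stand-ins for illustration only; real geolocation databases hold millions of ranges:

```python
# Hypothetical sketch of an IP-to-country lookup. The table entries
# here are invented for illustration; a real database maps millions
# of address ranges to locations.
import ipaddress

GEO_TABLE = {
    ipaddress.ip_network("201.212.0.0/16"): "AR",  # made-up range
    ipaddress.ip_network("203.0.113.0/24"): "JP",  # made-up range
}

def country_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE.items():
        if addr in network:
            return country
    return "unknown"

print(country_of("201.212.167.29"))  # -> "AR"
```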
The first step in dealing with any new malware is to “unpack” it, to break it open and look inside. Most malware comes in a protective shell of code, complex enough to keep amateurs from taking a close look, but Phil’s Menlo Park wizards were pros. They had invented an unpacking program they called Eureka that readily cracked open 95 percent of what they saw.
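Eureka itself is SRI’s own tool, but one widely used heuristic in this line of work is to measure a file’s byte-level entropy: a packed or encrypted binary looks nearly random. The sketch below illustrates that general technique only; it is not Eureka’s method, and the filename is hypothetical:

```python
# A common heuristic for spotting a packed binary: Shannon entropy of
# its bytes. Compressed or encrypted code looks nearly random and
# approaches the maximum of 8 bits per byte; plain code and text sit
# well below that.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

with open("sample.bin", "rb") as f:  # hypothetical malware sample
    print(f"{byte_entropy(f.read()):.2f} bits per byte")
```

High entropy only tells you that a protective shell is there; an unpacker like Eureka then has to strip it off to expose the logic underneath.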
When they tried it on the new worm, it failed.
Sometimes when Phil was stymied like this, he would just wait for one of the AV vendors to meet the challenge. But this worm was flooding in so fast that waiting was not an option. His Infections Log showed the same thing over and over again as the worm poured in from everywhere.
As he would later explain, “There was literally nothing else for us to do.”