Palo Alto, p. 66


  In their 1998 coauthored academic paper reflecting on Google’s first years, Brin and Page acknowledge that web crawlers are hectic programs by nature. “Because of the immense variation in web pages and servers, it is virtually impossible to test a crawler without running it on [a] large part of the internet,” they write. “Invariably, there are hundreds of obscure problems which may only occur on one page out of the whole web and cause the crawler to crash, or worse, cause unpredictable or incorrect behavior.”21 Like automata from a cautionary fable, crawlers are much easier to create than they are to manage, and this early document has an ominous Sorcerer’s Apprentice energy. But if young Victor Frankenstein had begun his reanimation trials in Silicon Valley, he probably could have picked up $25 million from Sequoia and Kleiner Perkins, just as Larry and Sergey did the next year. If that kind of wild efficiency is dangerous, then venture capitalists didn’t want to be saved. The two computer scientists warned in their paper that “we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm,” but that wasn’t going to be Google. The partners left Stanford and, by way of a Menlo Park garage, landed in their Palo Alto office.
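Brin and Page's warning is easy to reproduce in miniature. The sketch below is not Google's code; the toy page graph and `fetch` function are invented. It shows the defensive posture any crawler needs: treat every fetch as a potential failure, log it, and keep going, because some fraction of pages will always break in ways no test suite predicted.

```python
from collections import deque
from urllib.parse import urljoin

def crawl(seed, fetch, max_pages=100):
    """Breadth-first crawl starting from `seed`.

    `fetch(url)` should return a list of outgoing links; any exception it
    raises is caught and recorded, so one malformed page out of millions
    cannot take down the whole run.
    """
    seen, frontier = {seed}, deque([seed])
    visited, failed = [], []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        try:
            links = fetch(url)
        except Exception:
            failed.append(url)        # note the failure and move on
            continue
        visited.append(url)
        for link in links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return visited, failed

# Toy "web": one page raises, mimicking the obscure failures at scale.
WEB = {
    "http://a/": ["b", "c"],
    "http://a/b": ["c"],
    "http://a/c": None,               # stands in for an unparseable page
}

def toy_fetch(url):
    links = WEB[url]
    if links is None:
        raise ValueError("unparseable page")
    return links

visited, failed = crawl("http://a/", toy_fetch)
```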

  Google’s big VC investment came at a fortunate time, and thanks to their low-cost automated system the guys weathered the dot-com crash in fine financial shape. But like many in their cohort, they subsisted on investor cash, cash they were frittering away on a daily basis. They had left their Stanford program, but Larry and Sergey were still scientists on some level, and blending ranked results with advertisements was unacceptable to them. An automated system allowed buyers to place (clearly labeled) ads on results pages without costly management on Google’s side, but customers weren’t sprinting in. The firm’s fortunes improved when it copied a competitor’s model and began selling ad space in tiny auctions on a per-click basis.22 Paying for clicks rather than space was very appealing to advertisers, and the firm’s revenues took off. The new version of AdWords was so successful that the firm created AdSense, a variant that allowed third-party sites to post Google-run ads, too. Despite the host of exotic commercial fields Google has entered since, as well as the company’s reorganization under the Alphabet holding entity, Google advertising still, almost 20 years later, provides more than 80 percent of the conglomerate’s revenue.23
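The per-click auction logic is simple to sketch. Google's real system also weighted bids by predicted click-through rate; this toy version, with invented advertiser names and numbers, shows only the second-price idea: the winner pays just enough to beat the runner-up rather than its full bid.

```python
def run_auction(bids):
    """Simplified per-click, second-price auction.

    `bids` maps advertiser -> bid per click (in cents). The highest bidder
    wins the slot but pays one cent more than the runner-up's bid, so no
    one pays more per click than the competition forced them to.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] + 1 if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = run_auction({"acme": 120, "globex": 90, "initech": 40})
# acme wins, but pays 91 cents per click instead of its 120-cent bid
```

The second-price design is what made the auctions self-sustaining: advertisers could bid their true willingness to pay without fear of overpaying.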

  Accumulating information was the key to Google’s advantage. The PageRank search model—a useful scavenger for the ecosystem—was based on crawling and scraping the internet’s organic map of hyperlinks. As it scaled up, Google continued to make use of this efficient tool and the orientation behind it. After surviving the quick crash, it snapped up the online diary provider Blogger, which hadn’t been so lucky. It was expand or die, and the new CEO, Eric Schmidt, was all about growth. In 2004, Google took a very public shot at the web portal players Yahoo! and Microsoft. Microsoft’s Hotmail and Yahoo’s RocketMail were the dominant American web mail providers, so Google had to offer something new and improved if it wanted to compete. Gmail not only had the cachet of an invitation-only service and Google’s signature clean white interface, it also had 200 times as much storage space as Microsoft’s Hotmail—one gigabyte compared to five megabytes. Google could afford it because it scraped users’ emails and ran personalized ads in the margins that were tailored to the results. The better the personalization, the more likely users were to click, and the more likely Google was to get paid. When confronted by a Playboy interviewer about the privacy implications of the Gmail model, Larry and Sergey pitched it as a win-win. “Our ads aren’t distracting; they’re helpful,” Brin said.24 Page conceded that seeing ads related to the content of your mail was “a little spooky at first,” but all that free space was too good to turn down and users did get used to it. An IPO later that year gave Google a market cap of more than $20 billion.
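PageRank itself reduces to a short computation over the scraped link graph: a page's score is, roughly, the probability that a surfer clicking random links ends up there. This is a minimal power-iteration sketch over a toy graph, with a simplified treatment of pages that link nowhere, not the production algorithm.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict: page -> list of outbound links.

    Each page splits its score evenly among the pages it links to; the
    damping factor models a surfer who occasionally jumps to a random page.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                       # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Toy graph: everyone links to "hub", so "hub" accumulates the most rank.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
scores = pagerank(graph)
```

The scores sum to 1, and heavily linked-to pages rise to the top, which is the whole trick: the link graph the crawler scrapes is itself the ranking signal.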

  Google held on to and made profitable use of its search engine monopoly, and it expanded into other key positions, from browser to office software suite to operating system and all the way to forays into hardware, challenging the biggest enduring incumbents like Microsoft and Apple. The search engine held its own well enough to join that vaunted level, passing also-rans like Yahoo! along the way. The firm’s origins inhered in its DNA, and as Google’s capacities increased, so did its scraping ambitions. Soon after the IPO, the firm acquired the In-Q-Tel-backed digital mapping firm Keyhole, which caught its big break during CNN’s breathless coverage of the fraudulent Iraq invasion. Keyhole scraped the world’s surface with satellites to get Google Maps, which dominated the online directions sector. A few years later, it took the logic to an absurd level, deploying cars stacked with cameras to scrape a ground-level picture of the whole world for Google Street View. If you can take a picture of a house, who’s to say you can’t take pictures of every house?

  At the end of its big IPO year, 2004, Google announced its plan to scrape every page of every book for what became Google Books, much to the publishing industry’s consternation. Though computer programs didn’t crawl these real-life surfaces by themselves, Google could afford to contract low-wage workers to drive cameras around and to turn pages. Mostly these workers disappeared behind user interfaces, but there were predictable glitches, like the reflection of a Street View worker captured in a shiny window. Artist Benjamin Shaykin’s project Google Hands features problem pages from Google Books scans, including accidentally scanned worker fingers. The fingers periodically get caught, a consistent malfunction in the scraper’s cyborg apparatus. Andrew Norman Wilson’s 2011 short film, Workers Leaving the Googleplex, focuses on the same ScanOps contractors. In the grand NorCal tradition of labor-market segregation, these laborers carried unique yellow badges, though that was hardly necessary to mark them, Wilson writes: “It was the same group of workers, mostly black and Latino, on a campus of mostly white and Asian employees, walking out of the exit like a factory bell had just gone off.”25 They entered and exited at their own special scheduled times—4:00 a.m. and 2:15 p.m.—so as to spare the white- (employee), green- (intern), and red- (contractor) badged Googlers an awkward confrontation with that particular internal hierarchy. It’s a plan that backfired with Wilson’s movie, which shows the yellow-badge exodus as Wilson tells the audience how he lost his red-badge job editing film for Google’s on-campus contractor Transvideo after being reported for speaking with ScanOps workers and recording the scene.

  As Google grew, it combined the monopolistic business strategy of Microsoft with the disrupting scraper speed of Napster. It’s a potent combination, and it left Google strong enough to defend its book scanning from the Authors Guild all the way to the top courts. Not even Bill Gates himself could have conceived of a business plan in which his company extracted value from every word accessed or typed on a Windows machine. Google belonged to a different era. In the closing decades of the twentieth century, as output growth slowed and capital hunted for low-commitment bets, global advertising increased dramatically. In the second half of the 1980s, TV ad spending doubled, from $25 to $50 billion, then doubled again in the ’90s, then doubled again in the first decade of the new millennium—and for the first couple of decades, newspapers and magazines matched that growth.26 Advertising was a good way to compete without getting into the risky business of price competition or product innovation. The fact that ads didn’t actually add anything to the economy was good, since the world was increasingly oversupplied with cheap stuff anyway.

  The more competition among capitalists became zero-sum, the more they relied on advertising. Google had a number of advantages over broadcast and print ads, the biggest being that the company could target audience members individually. It extended that advantage by spending over $3 billion in cash to outbid Microsoft for the advertising company DoubleClick, which specialized in following browsers around the web, keeping track of who they are and what they want.27 Google’s self-imposed barriers between user profiles on its different services, meant to reassure anyone nervous about their online privacy, eroded. In 2009 the firm used DoubleClick to target Google ads based on recorded browsing history. In 2016, reversing a previous policy, Google combined user information from its services—including Gmail and Google Search—with its advertising data to create “super profiles,” single pools of information that made Larry Ellison’s national biometric ID look modest by comparison.28 The difference was that Google users invited the surveillance; no one said you had to use the internet. John Ashcroft’s light hand left web companies and users to work out data privacy as two fully grown market participants, which almost always ended with the impatiently human parties pushing a button marked Accept, binding themselves to conditions left unread.

  Though it collected data at an unimagined level, Google was far from the first company to assemble vast storehouses of information about individual customers; recall the Bank of Italy’s card catalog of creditworthiness. Direct-mail advertising firms were crucial to the growth of the New Right, which contracted with them and adopted their tactics. By the mid-1960s, for-profit and nonprofit organizations were spending $400 million a year to buy information about Americans from data brokers.29 With rapid improvements in storage, relational databases, and computation, hardly anyone benefited as much as these information sellers. They partnered with the world’s biggest companies and accumulated fantastic troves of lists. Unlike consumer-facing brands Google, Yahoo!, and Microsoft, these firms stay in the shadows as far as most people are concerned—funny, considering how much they know about all of us. For the Arkansas-based list leader Acxiom, that adds up to around 1,500 data points per person on 500 million active consumers worldwide, including the majority of adults in the United States.30

  Acxiom has been around in one form or another since the 1960s, and by the turn of the century the firm topped the industry. Growing (like others) through acquisition in the ’90s, Acxiom partnered with Oracle and continued to amass information and improve its ability to process and refine that data. On 9/11, a pretty good national identity database existed, but it was private, not public, and Acxiom’s clients used it to target suckers for catalogs and telemarketing calls, not to predict terrorist activity. That idea doesn’t seem to have occurred to anyone involved until after the Twin Towers were down. Once they were, Acxiom searched its files and found it had a bunch of information on the hijackers, including so many inconsistencies that in theory the authorities would have been able to tell in advance that the men were up to something, had anyone been looking. As Robert O’Harrow reports in his book on the growth of twenty-first-century surveillance, No Place to Hide: Behind the Scenes of Our Emerging Surveillance Society, that’s when one of Acxiom’s executives called a childhood friend who happened to be the world’s most influential Arkansan: William Jefferson Clinton. Though the Democrats were out of the White House, the end of 2001 was a bipartisan time, and Attorney General Ashcroft was, despite his distaste for the man’s well-publicized un-Christian proclivities, happy enough to take the deposed president’s call. Ashcroft liked what he heard and he passed Acxiom—represented in Washington by one of its board members, soon-to-be Democratic presidential candidate and former NATO Supreme Allied Commander Europe, Wesley Clark—to a new project at DARPA called Total Information Awareness.

  Total Information Awareness (TIA) was the child of John Poindexter, Oliver North’s boss at the Reagan National Security Council (which he headed) and a convicted Iran-Contra criminal. After Poindexter was excused on appeal, George W. Bush brought him in to apply some out-of-the-box thinking to antiterrorism at a time when the state’s investigative mandate seemed virtually unlimited. Like North, Poindexter was a technophile, and his Information Awareness Office brought private scraper tech under the state umbrella. In the name of efficient data sharing, TIA government fusion centers planned to draw together not just the various silos of public intelligence but also private records from commercial brokers, including and especially Acxiom. Ashcroft and the bipartisan deregulation consensus opened up a private backdoor that allowed the government to gather whatever information it wanted, as long as it paid for it, just as the catalog companies did. In the TIA fusion model, “the combined resources of essentially unregulated industry data collecting, the close surveillance capacities of local law enforcement, and the massive power of the federal government are at each other’s disposal,” writes law professor Frank Pasquale, “and largely free from their own proper constraints.”31 Through the techno-capitalist marketing industry, the state shook off the twentieth century’s privacy restrictions in the name of homeland security.

  How did the system work? Poindexter’s plan was secret even from Congress, but a report of one arrangement leaked. To prevent another plane attack, Poindexter’s office focused on using data to screen passengers before they boarded. An early test involved just what Acxiom first suggested to Ashcroft through Clinton: juxtaposing passenger ticketing records with the host of commercial information available from data brokers. For the government to demand customer information from JetBlue was a federal overreach, but the contractors offered layers of protection. To reassure the airline that everything was aboveboard, the Department of Defense asked the Transportation Security Administration to ask JetBlue to let its data-parsing partner, Acxiom, provide passenger information to the contractor Torch Concepts, which was subcontracted to DARPA through SRS Technologies. The convoluted chain of information custody is what allowed DHS investigators to declare all parties innocent after the news broke in the fall of 2003.32 Despite public outrage and the boarding-gate humiliation of one senator, a no-fly list became policy. The absurdly Orwellian TIA was shut down soon after, and Poindexter got the can when he blurred the line between out of the box and too close to the sun with a “terrorist futures market,” which would have allowed investors to speculate on coming attacks. But the government’s scraping project didn’t end; the administration transferred it to the National Security Agency.

  The advertising technology (ad tech) industry and the NSA were looking for the same thing: information—specifically, all of it. Google’s official mission was “to organize the world’s information and make it universally accessible and useful,” and the more information Google steered onto its own platforms (such as Gmail and the Chrome browser), and the more practice it got at organizing it, the better the firm did; and the better it could target advertising, the more money it made. And Google made a lot of money, leaping into the top tier of global corporations. This was no Netscape; now the internet made bank. With the help of ad tech, the internet turned attention into cold hard cash, much more efficiently than any magazine’s sales department could. But considering that, Google’s core function held a contradiction: By trying to get users to the perfect site, Google was throwing them off its own pages. Search remains the world’s biggest product monopoly, and yet there was another layer of the web no one had been able to hold. It took one more scraper to figure it out.

  Born in 1984, Mark Zuckerberg grew up in Westchester County, New York, the socially awkward son of a technologically inclined dentist whose practice operated out of the family home. If his father was part of the personal revolution—a small businessman working on a PC—then Mark was an internetworking kid, coding a messenger program to communicate between the dental office and house computers. A hard-core elitist, the younger Zuckerberg transferred to renowned Phillips Exeter Academy to finish high school, then followed his older sister to Harvard as planned. There, with access to a lot of computing power and a bunch of potential attention, Zuck embarked on a series of scraping projects. He and a classmate made a media player that scraped users’ songs and generated playlists. Another program scraped the Harvard course listings and let users see who else was in their sections. For a classmate who wanted to start a grocery delivery business, Zuckerberg coded a scraper that copied supermarket prices. To study for a final in a class he skipped, he invited the rest of the students to contribute to a centralized digital study guide, effectively scraping his classmates’ notes. Though the music program was a minor hit, and he did pass the skipped art history class, it was Mark’s controversial next scrape that suggested a career path.

  Harvard dorms had begun to post yearbook-type headshot indexes online to better facilitate, one presumes, the student networking that makes Harvard Harvard. At the beginning of his sophomore year, Zuck scraped all these “face book” directories and plugged the pictures into his new project. The program reflected his historical milieu—early twenty-first-century suburban private-school tech individualist—and he made its vulgar, elitist mentality explicit. Facemash pitted the scraped dorm headshots against each other one-on-one, inviting the user to pick the hotter of the two. The site went viral at Harvard, instantly attracting hundreds of leering student users and producing such an outcry that he was forced to pull it down and face the Administrative Board, charged with a variety of computer security sins. Zuck didn’t get kicked out, but Facemash was the beginning of the end for him at school. The god’s-eye view of his fellow students’ private and public behavior, watching them condemn the site while flocking to it in droves, seems to have only spurred his contempt for the hypocrite masses and their bureaucrat leaders. So how did Mark Zuckerberg become the world’s crown prince of friendship?
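Contemporary accounts describe Facemash ranking faces with chess-style Elo ratings; whatever the actual code did, the idea fits in a few lines. In this sketch the k-factor and starting ratings are conventional chess values, not anything from Facemash itself: each head-to-head vote nudges the two ratings, with upsets moving them more than expected results.

```python
def elo_update(winner_rating, loser_rating, k=32):
    """One Elo update after a head-to-head vote.

    The expected score is a logistic function of the rating gap; both
    ratings move by the same amount, larger when the result is a surprise.
    """
    expected_win = 1 / (1 + 10 ** ((loser_rating - winner_rating) / 400))
    delta = k * (1 - expected_win)
    return winner_rating + delta, loser_rating - delta

# Two previously unrated faces: the vote winner gains k/2 = 16 points.
a, b = elo_update(1400.0, 1400.0)
```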

 
