Enshittification, p.15


  Of course, lots of executives lack the (ahem) executive function to act in their own best interest when that means giving in to their workers. When those hotheads yield to their enshittificatory impulses, workers can still avail themselves of counter-apps (even as they ask the National Labor Relations Board, which may or may not exist by the time you read this, the state attorney general, a union rep, or a class-action lawyer to step in).

  The Google Walkouts, Tech Solidarity, and Tech Unions

  It’s hard to overstate how magic Google was in its early days. Real magic, not the cheap trick of Uber, which figured out how to get a stranger to pick you up in minutes and drive you anywhere for a fraction of the cost of a cab ride. (The secret was losing a ton of money on every ride.) Google did something no one else had managed to do: make sense of the whole web.

  The search engines we all used before Google came along were locked in a losing race against spammers, who figured out how to game the primitive ranking algorithms engines like Lycos relied on. Crude tricks like adding a hundred synonyms for cat in invisible white-on-white type to your web page could make it the top-ranked page for searches for cat.

  To make things worse, these early search engines’ method for chasing revenue was purely enshittificatory: they sold search results to the highest bidder. So a search for cat might yield several paid ads from companies hoping for your clicks, followed by several more spam results from keyword stuffers and other improbably successful spammers (or, as they would be called today, search engine optimizers).

  Between spammers and payola, the search engines were eating themselves from the inside even as parasites consumed them from without. Google had an answer for both pathologies.

  In “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” better known as the PageRank paper, two Stanford grad students named Larry Page and Sergey Brin set out a method for sorting good web pages from bad ones: citation analysis.

  This is an idea from academia, where publishing in scholarly journals is a key source of validation and career advancement (hence “publish or perish”). An academic’s importance to their field and their institution can be roughly approximated by looking closely at their peer-reviewed publications.

  But academics aren’t just counting up the number of publications. Some publications matter more than others—the more important the journal you’re published in, the more your publication matters. In addition, there’s a (seemingly) objective way to measure the importance of a given journal: count how many times the articles published in its pages are cited by other journals. A journal like Nature isn’t prestigious merely because everyone knows its name—it’s prestigious because other researchers value the work published in its pages so highly that they pay special attention to the articles it publishes, and are more likely to cite those articles than they are to cite articles in rival journals. That means that publication in Nature counts more than publication in smaller journals, a measure that academics call impact factor. A journal’s impact factor is a measure of the likelihood that a publication in its pages will be cited by other publications.
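The impact-factor arithmetic the paragraph describes is simple enough to sketch. This is a minimal illustration with hypothetical numbers, using the classic two-year definition (citations this year to a journal's articles from the prior two years, divided by the number of articles it published in those two years):

```python
def impact_factor(citations_this_year, articles_prior_two_years):
    """Two-year impact factor: citations received this year to a
    journal's articles from the previous two years, divided by the
    number of articles it published in those two years."""
    return citations_this_year / articles_prior_two_years

# Hypothetical journal: 200 articles over two years, cited 8,400 times.
print(impact_factor(8400, 200))  # 42.0
```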

  This isn’t perfect. Like any other “reputation economy,” it’s a rich-get-richer system in which the most-cited journals are the first port of call for every major paper, meaning they are home to the most blockbuster findings, which reinforces their desirability for academics seeking to place their own papers.

  Nevertheless, the mere fact that Nature chose to publish a given paper is a fairly reliable signal that the paper’s findings are significant and noteworthy.

  Until the PageRank paper, this whole esoteric business was the exclusive province of academics, who made something of a game of it. For example, mathematicians like to calculate their “Erdős numbers,” a measure of their proximity to Paul Erdős, a legendary and fantastically prolific mathematician. Erdős was an itinerant, driven, brilliant weirdo who would show up on his colleagues’ doorsteps, install himself in their guest rooms, and then collaborate on field-defining papers about areas of shared interest. (Erdős’s interests were very broad indeed.)

  If you collaborated with Erdős on a publication, you have an Erdős number of 1 (Erdős’s own Erdős number is 0, of course). If you collaborated with one of Erdős’s coauthors, you have an Erdős number of 2. Those academics’ collaborators have an Erdős number of 3, and so on.
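The Erdős number is just the shortest chain of coauthorships connecting you to Erdős, which a breadth-first search over a coauthorship graph computes directly. A minimal sketch, using invented names for the collaborators:

```python
from collections import deque

# Toy coauthorship graph (hypothetical names); an edge means
# "these two people wrote a paper together."
coauthors = {
    "Erdos": ["Alice", "Bob"],
    "Alice": ["Erdos", "Carol"],
    "Bob": ["Erdos"],
    "Carol": ["Alice"],
}

def erdos_number(person, graph):
    """Breadth-first search outward from Erdos; someone's Erdos
    number is the length of their shortest collaboration chain."""
    dist = {"Erdos": 0}
    queue = deque(["Erdos"])
    while queue:
        current = queue.popleft()
        for peer in graph.get(current, []):
            if peer not in dist:
                dist[peer] = dist[current] + 1
                queue.append(peer)
    return dist.get(person)  # None if no chain to Erdos exists

print(erdos_number("Carol", coauthors))  # 2: Carol -> Alice -> Erdos
```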

  The PageRank paper proceeded from the insight that making links between two websites was a clunky, manual affair and the main reason anyone would link to anyone else was because they thought the link-ee had something notable to say.

  By counting the links between websites, the PageRank algorithm was able to identify the sites that were most likely to be linked from other sites, these being the web equivalent to Nature or other journals with high impact factors. If these sites had pages that seemed to match the query, they would appear high on the list of results as an authoritative resource. But the authority of these highly linked-to sites didn’t end with the pages they published: these sites were also considered authoritative sources of authority itself. If your site was linked to by a site that lots of other people linked to, some of that Google-juice would be transmitted to your site, giving it more prominence in Google search results for queries that matched its pages.

  So a site like the BBC’s (bbc.co.uk) would be very authoritative, because lots of sites link to it. Meanwhile, if your site was linked to from the BBC, it would be considered highly authoritative, too, because the BBC’s editors thought it was important enough to link to, and since everyone else on the web was so fond of the Beeb, that site was also likely to be a good one. In other words, the BBC had a BBC number of 0. If the Beeb linked to you, you had a BBC number of 1.[1]

  And if the New York Times’s site (nytimes.com) linked to you, you’d have a New York Times number of 1, which would go into the ranking system, blended with that BBC number to produce an authoritativeness estimation. There’s an infinitude of variations on the Erdős-Bacon number (a ranking of how close you are not only to Erdős but also to the actor Kevin Bacon), and Google kept score for all of them.

  This system worked fantastically well—so well, it was almost spooky. The key insight in the PageRank paper was that all the links on the web as it existed constituted a latent map of authoritativeness produced by people who didn’t set out to create this map and thus had no reason to attempt to distort or falsify it.
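The link-counting described above is commonly implemented as power iteration: repeatedly redistribute each page's score along its outbound links until the scores settle. This is a toy sketch with a hypothetical four-site mini-web, not Google's production algorithm:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank: a page's score is the chance a
    random surfer lands on it, following a link with probability
    `damping` and jumping to a random page otherwise."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its score over every page.
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Hypothetical mini-web: every other site links to "bbc".
web = {
    "bbc": ["news", "blog"],
    "news": ["bbc"],
    "forum": ["bbc", "news"],
    "blog": ["bbc"],
}
ranks = pagerank(web)
# "bbc", with the most inbound links, ends up ranked highest.
```

Note how the structure mirrors the BBC example in the text: "bbc" scores highest because everyone links to it, and "news" outranks "blog" partly because the highly ranked "bbc" and "forum" both link to it.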

  But remember Goodhart’s law (from page 103): “When a measure becomes a target, it ceases to be a good measure.” Thanks to its excellent technique for making sense of the web, Google quickly became the authoritative site for search. (In other words, Google has a Google number of 0.) That meant that everyone wanted to be ranked highly in Google’s search results.

  Citation analysis—the academic impact-factor ranking technique that became PageRank—was very robust before it became important, but after it became the primary way to garner traffic to one’s website, a key weakness in PageRank emerged: it’s just not that hard to make links between websites. Soon, “search engine optimizers”[2] were creating “link farms” of sites full of links designed to replicate the formal signifiers of authority that Google’s algorithm looked for. Google fought back by deploying ever more sophisticated and complex analyses that sought to disqualify pages whose authority seemed to come from inauthentic sources.

  This is a surprisingly subtle process. The mere fact that a link farm seems to promote a page should not automatically disqualify that page from being trusted by the algorithm. If Google took this blunt approach, then pranksters, extortionists, and other bad actors could create link farms and direct all their energy to innocent sites (like bbc.co.uk or nytimes.com) and get them kicked out of Google’s index.

  As Google’s ranking algorithm ramified into a dense forest of tests and evaluations, the way Google talked about its algorithm changed subtly, with profound implications.

  In PageRank’s early, startlingly effective years, Googlers talked about search ranking as though they had discovered a kind of empirical truth about human knowledge. (Recall that Google’s mission is to “organize the world’s information and make it universally accessible and useful.”)

  When members of the public complained to Googlers about how their carefully crafted websites were dispreferred by Google’s algorithm, the company line was simple: To get a higher ranking in Google search results, make a better page. The only way to increase your ranking was to improve the informative value of the thing you wanted ranked. It was as though Google believed that it had used a kind of obscure academic mathematical ranking to trace the location of Plato’s cave, and that it had installed a backward-facing camera at its firing line, staring directly at the true forms that cast the shadows on the cave wall.

  But for Google, the idea that its rankings were math and not judgment was a double-edged sword. Governments of the world are far more likely to defer to the free-speech rights of someone expressing a judgment than to those of someone merely solving an equation.

  That meant that governments grew very interested in telling Google what it had to rank highly—and what it should downrank or exclude altogether. Just because the empirically correct result for Where can I find illegal content? is a bunch of websites full of illegal content, it doesn’t follow that Google should be permitted to link to these illegal websites. As Google became the canonical index to the web, it found itself besieged by legal demands to alter its rankings in the name of preventing copyright infringement, child sexual exploitation, terrorist recruiting, the dissemination of information about procuring or producing weapons and drugs, blasphemy, libel, and a host of other forms of speech that someone, somewhere didn’t want the rest of us to see.

  In 2012, Google changed its tune. It commissioned Eugene Volokh, a world-renowned legal scholar and First Amendment expert, to write a law-review article about the expressive nature of what Google was doing when it combined all the “signals” that it used to produce its rankings.

  In the paper, Volokh argues—convincingly, to my mind—that Google’s core activities are editorial in nature, not empirical. A Googler trying to find a way to keep a spammer out of the top-ranked results for a search index rarely writes a crude rule like “If the website is spammer.com, then don’t put it in the results.” Instead, the programmer seeks to identify qualitative aspects of the spammy pages that make them a poor result, and then looks for quantitative correlates of those qualities that can be measured and weighed in every page on the internet, as a means of increasing the quotient of materials with subjectively positive traits that are likely to appear in the top results of a Google search.

  Even when the programmer uses some quantitative test—for example, making a tweak to the ranking system, then waiting a few minutes for a million people to run queries whose results reflect the new system and seeing how many of those searchers run a second query to refine their search because they were dissatisfied with the results the first time around—the decision to use that criterion is, itself, qualitative.
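The quantitative test described here, counting how often searchers immediately re-query, boils down to a simple rate over session logs. A minimal sketch with hypothetical data (this is an illustration of the metric, not Google's actual instrumentation):

```python
# Hypothetical query logs: each session is the sequence of queries
# one user ran. A second query right after the first counts as a
# refinement, i.e. the first results page left the searcher
# dissatisfied.
sessions = [
    ["cat"],                   # satisfied on the first try
    ["cat", "cat breeds"],     # refined once
    ["jaguar", "jaguar car"],  # refined once
    ["weather"],
]

def refinement_rate(sessions):
    """Share of sessions in which the user had to re-query."""
    refined = sum(1 for queries in sessions if len(queries) > 1)
    return refined / len(sessions)

print(refinement_rate(sessions))  # 0.5: two of four sessions refined
```

Comparing this rate before and after a ranking tweak is one way to decide, quantitatively, whether the tweak helped, even though choosing this criterion in the first place is, as the text says, a qualitative judgment.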

  In other words, the programmer who tweaks the ranking algorithm, runs some tests, and tweaks it again is doing something analogous to the newspaper editor in chief who rearranges the articles above the fold on the front page, reorganizing them until they feel right for the editorial stance of the paper.

  Google’s narrative switched from claiming to have discovered the mathematical roots of universal truth, to having figured out how to harness mathematics to express its judgment about what a good search results screen looked like. This was an important change, both because it was true and because it established a basis for internal contention about what qualities a good search page should have.

  When “What is a good search page?” became a question up for grabs at Google, it set the stage for later enshittification.

  The Gomes/Raghavan affair, discussed on page 75, is a stark example of the way that Google’s culture changed as it outgrew the discipline of competition. For decades, Google had insisted that “competition was just a click away,” but as Google bought out the search box on every platform and integrated its surveillance, cloud, and ad-tech services into the majority of the internet’s websites and apps, removing Google from your life became increasingly difficult.

  Now, even if you switch your default search engine to Bing or DuckDuckGo, even if you switch from Android to iOS, even if you switch your cloud storage to a self-hosted ownCloud instance, even if you switch your email to Proton Mail, even if you navigate with OpenStreetMap instead of Google Maps, you are still a Google user from the moment you log on until the moment you go to bed—and even as you sleep.

  Most of the websites you visit embed one or more Google assets—a tracking beacon used for Google analytics, a “free” font served from Google’s servers, or an ad placed by Google after being sold in a Google marketplace on behalf of an advertiser represented by Google’s demand-side platform. Whenever your browser interacts with a Google server, the transaction is logged by Google’s servers and added to your profile. That profile is augmented with vast troves of information about you bought from the largely unregulated data-broker sector, from the purchases you make to the locations where your devices’ unique Wi-Fi and Bluetooth identifiers have been logged. If you have an Android device, it sends a constant stream of telemetry to Google about your activities, even when you’re not using it.

  So while you can potentially use a competing product—swap Google Photos for Flickr, say—your decision to do so has little impact on Google’s bottom line. The subtext of “Competition is just a click away” is that Google will be disciplined by the fear of your defection to a rival service and will govern itself accordingly, resisting the urge to enshittify out of a rational fear of the consequences for doing so.

  By decoupling “competition” from “consequences,” Google, you’ll recall, inherited the attitude of Ernestine the AT&T operator: “We don’t care. We don’t have to. We’re Google.” Competition for its own sake is an empty fetish, but competition for the sake of making companies fear the consequences of prioritizing profits over quality? That’s vital.

  From the outset, technologists prized jobs at Google. The company’s hiring gauntlet became notorious for the strain it put on applicants’ ingenuity and knowledge, but for those who survived the challenge, the company offered something akin to tenure at an elite university, at a salary that was orders of magnitude higher than even the best-paid academic could dream of. Indeed, Googlers were often encouraged to retain their professorships at top universities while drawing a fat salary and socking away generous stock grants at the Big G.

  For years at Google, top technical talent literally ran the show. Managers charged with assembling a team to work on a new product had to convince technologists to accept a transfer to the assignment. Engineers maintained a nearly unquestionable veto over these requests. Google management could launch new products or change existing ones only if it could locate enough engineers who agreed with the approach.

  That wasn’t all: Googlers were also given “20 percent time”—one day in five to chase passion projects within the company. Most of these projects went nowhere (most non–20 percent Google projects also went nowhere), but the 20 percent program is responsible for one of Google’s few post-Search, in-house successes: Gmail, invented by Paul Buchheit as a 20 percent project in 2004.

  Google’s founders came out of academia. Larry Page and Sergey Brin launched the company while completing their grad studies at Stanford. Clearly, some of Google’s cultural deference to technologists reflected the founders’ academic sensibilities—after all, those were the same sensibilities that led to the creation of the PageRank algorithm, which operationalized the academic practice of citation analysis.

  But from the very beginning, the Google Boys had adult oversight: consummate corporate types like Eric Schmidt, whose presence reassured the company’s investors about its commitment to profit. The result was a kind of détente between profit and technical excellence, and it made everyone involved with the project very, very rich.

  Investors’ tolerance for Google’s “indulgent” deference to its technical staff was justified. Google’s reputation as a great (and profitable) place to work attracted top technical talent to the company, including people with incredibly scarce experience in scaling up the business to keep pace with its meteoric growth. Time and again, the engineers charged with maintaining Google’s stellar reliability pulled off never-before-seen feats. The unshakable reliability of Google drove its growth, as businesses and individuals came to treat the company and its services as a given. Why bother keeping detailed notes about the things you saw online when you could “just google it” the next time you needed to call up some half-remembered piece of information?

  And so Google grew. Even after it captured the vast majority of the world’s search business, it continued to grow, by convincing so many of us to treat it as a kind of neural prosthesis. Rather than knowing things, we came to know which keywords we could use to invoke Google’s retrieval of those things. Cell phone address books made memorizing phone numbers obsolete, and as-you-type spelling correction rendered memorizing tricky spellings obsolete, but ultra-reliable Google everywhere (especially in your pocket) made memorizing everything obsolete.

  But eventually that growth petered out. Search could grow by convincing more people to use Google Search, and it could grow by convincing Google Search users to use Search in more ways. But once all of us were using Google Search in all the ways that Search could be used, growth from Search flatlined.

  Google’s shareholders weren’t going to take that situation lying down. After all, even if Google couldn’t find more people to search, or more ways to use search, they could certainly find new ways to charge for search. Google hadn’t run out of worlds to conquer; indeed, its conquest of market share and technical excellence meant that it could turn its attention to conquering its margins.
