Enshittification, p.23

Or maybe you think Instagram turned your teenager anorexic.

  Or that TikTok brainwashed a bunch of millennials into quoting Osama bin Laden.

  Or maybe you’re worried about Black Lives Matter protesters whose identities were swept up by Google’s constant location-based surveillance and who were then reported to law enforcement after a “reverse warrant.”

  Or maybe that doesn’t worry you, but you are pissed about the same thing happening to the January 6 rioters.

  Or you’re angry that people of color are being discriminated against by algorithms that determine hiring, lending, and housing based on surveillance data.

  Or that you or someone you love is being targeted by an online scammer, an identity thief, a ransomware creep, or some other criminal who got into your or your loved one’s bank account with the help of surveillance data that was either sold or leaked.

  Or that someone is using AI to make deepfake porn of you.

  I think you’d be hard-pressed to find anyone who agrees that all of these issues are problems; I personally think that some of them are actually imaginary. But that doesn’t have to matter when it comes to coalition-building. I don’t have to agree with you that TikTok is brainwashing millennials to agree with you that a muscular privacy law would be a good thing.

  Which is why we are seeing a procession of ever-improving privacy bills being introduced in Congress, each with more support than the last.

  Surprisingly, Big Tech and other commercial surveillance firms sometimes get dragged into endorsing these privacy bills (usually when they’re in the midst of some genuinely ghastly scandal). They can support privacy bills in a pinch, because they’ve figured out Two Weird Tricks for sabotaging consumer privacy law.

  The first Weird Trick is making sure that a new privacy law doesn’t get enforced. The way to do this is to limit enforcement to government prosecutors: district attorneys, attorneys general, and federal regulators. Historically, tech companies have found it easy to intimidate, buy off, or otherwise placate these officials.

  That’s why privacy advocates want a privacy law with a private right of action. That means that private persons—you and me—will be able to bring privacy claims, even if our supposed defenders in government don’t seem to think we deserve to have our privacy defended.

  The business lobby hates private rights of action wherever they appear. For corporate America, the ideal situation is one in which everyone who might sue them either signs away the right to do so (through a binding arbitration waiver) or has that right taken away (through a law without a private right of action).

  For decades, American businesses have fought to be above the law, pushing disinformation. Remember the woman who spilled her McDonald’s coffee on her lap and was awarded millions of dollars by a jury? The story was often held up as an example of a frivolous lawsuit. In reality, the woman received third-degree burns and had to undergo debridement and skin grafts. McDonald’s had been repeatedly ordered to fix the temperature of its coffee, due to other burn cases, and nearly all the money the woman was awarded was clawed back by the court. Every time you hear about an “ambulance chaser” or a greedy “no win/no fee lawyer,” you’re being propagandized as part of a massive, long-running campaign to make corporations literally above the law.

  No surprise, then, that even when privacy laws are introduced with private rights of action, these clauses are made so hugely controversial by lobbyists that they stall out in the legislature.

  Now, Capitol Hill isn’t the only place where Americans can ask lawmakers to protect them. While Congress has slept on privacy law, state legislatures have taken up some of the slack. That’s where the second Weird Trick comes in.

  Laws like Illinois’s Biometric Information Privacy Act and the California Consumer Privacy Act have dragged privacy law into the twenty-first century, at least for people in Illinois and California. Californians have a broad right not to have their online and offline activities tracked, and Illinoisans’ biometric data can’t be captured or used without their meaningful opt-in consent.

  Obviously, this is a problem for the commercial surveillance industry, but what kind of problem is it? I think the problem is that it forces these companies to stop spying on those of us lucky enough to live in states that have privacy laws. They think (or claim to think) the problem is that there’s a “patchwork” of laws that are too hard to comply with.

  Surveillance industry lobbyists use this pretense to lobby for something called preemption in proposed federal privacy laws. If they get their way, any new federal privacy law will preempt (annul) all the state privacy laws. Federal privacy law will represent the most privacy we’re allowed to have, rather than the baseline of privacy we’re all guaranteed.

  Naturally, privacy advocates aren’t having any of this. Again, preemption has become a deal-breaking controversy when privacy bills are debated in the House and Senate.

  Private rights of action and preemption are the trip wires that keep tangling up new privacy laws, but each law gets a little closer to passing with a private right of action intact and without a preemption clause. A new privacy law isn’t a foregone conclusion, but it’s closer than it’s been since Die Hard was still in theaters.

  The EU’s Digital Markets Act and Digital Services Act

  Earlier in my career, I served for some years as the European director for the Electronic Frontier Foundation, splitting my time among London, Brussels, and Geneva (along with a lot of side quests, totaling thirty-one countries in three years). Based on my experience then and since, I can confidently state that the European Parliament is no less corruptible than the US government.

  But just because the two are both susceptible to corruption, it doesn’t follow that the European Union will experience the same type of corruption as the United States. In particular, the EU is far more willing to take on Big Tech than the United States is.

  There’s an obvious reason for this: the United States views the multinational tech firms that were founded in Silicon Valley and Seattle as American companies, and they are, in several important ways. (In other ways, Big Tech is pretty dang borderless, shifting its profits, manufacturing facilities, and jobs all around the world to chase favorable regulatory regimes that allow it to launder money, cheat on taxes, exploit workers, and pollute the environment.)

  In the EU, Big Tech is (correctly) viewed as an American phenomenon. This means that, compared to the United States, Europe has (at the moment) much more space for bold, muscular regulation—of the sort that will weaken the tech giants’ market power and drain their cash reserves with punishing fines.

  That’s how the EU came to pass the 2016 General Data Protection Regulation (GDPR), a broad, sweeping consumer privacy law. I was still living in the EU when the GDPR was being debated, and I even did a bit of lobbying on it in Brussels. I got a firsthand view of the army of lobbyists that descended on the European Parliament in the run-up to the GDPR’s passage. One EU commissioner told me that it was the largest-scale, most intense lobbying they’d ever experienced in their long career.

  The GDPR is far from perfect. You may know it as the origin of all those tedious “click here to accept cookies” banners. Tech companies maintain the pretense that this satisfies the GDPR’s main edict, which is that companies can collect and process your data only with your consent.1 Under the GDPR, the fact that you don’t give your permission to collect or use your info can’t be used as the basis for a company to deny you access to its service, or to charge you more to use it. A company does have to ask your permission before spying on you, but if you ignore the question, it has to let you just get on with using the service.

  Obviously, that’s not how companies behave. Instead, they bombard you with “cookie consent” dialogues that flagrantly violate the GDPR, and, what’s worse, they get away with it.

  Why is this? It’s down to the intrinsic weaknesses of federalism—the system of government whereby autonomous regions form a federation and cede some of their power to its government. If that sounds abstract, perhaps it’ll be easier if I tell you what that overarching body is called: a federal government.

  The United States is a federation, too (hence The Federalist Papers, the pamphlets that Alexander Hamilton, James Madison, and John Jay wrote to promote the ratification of the Constitution), albeit one that is much older than the EU, with far more powers claimed by the federal authorities. (When I studied for my US citizenship exam, I was required to answer a skill-testing question about this, explaining the meaning of the tortuous syntax of the Tenth Amendment: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.”)

  The twenty-seven member states that make up the EU have a lot of autonomy, for good and bad. Part of that autonomy is broad leeway in how their tax policies are structured. Several EU nations have spent decades locked in a race to the bottom in the hopes of becoming the EU’s top tax haven.

  EU law generally lets a company pay taxes in its home country, no matter how low those taxes might be. So, for example, Amazon EU pretends that it is based in the infinitesimal and wildly corrupt Grand Duchy of Luxembourg, a flyspeck nation claiming 672,000 residents and 41,000 companies. Some of those companies are genuinely Luxembourgian, of course, but tens of thousands of Luxembourg companies’ “headquarters” are staffed by a few lawyers and their assistants, who shuffle paper around and help them avoid taxes. That’s why Amazon is a Luxembourg company, with 4,500 paper pushers who help it duck its tax obligations to the other twenty-six EU member states. (Meanwhile, Amazon employs 220,000 people in France and 25,000 people in Spain.)

  But—Amazon notwithstanding—Luxembourg is far behind in the EU tax-haven sweepstakes. The clear leader here is Ireland, where most of the largest American tech companies pretend their EU operations are based. Having an Irish address allows those companies to insist that their profits are floating in a state of untaxable grace, somewhere over the Irish Sea.

  The thing is, a tax haven always turns into a crime haven. Any company willing to do the paperwork to pretend to be Irish this week could pretend to be Luxembourgian next week (or Cypriot, or Maltese, or Dutch—even Holland has much to offer by way of tax avoidance for footloose tech companies). To keep companies from defecting to rival tax havens, countries like Ireland have to promise lax law enforcement across the board, not just when it comes to taxes.

  Which explains how the GDPR failed. The way the GDPR is written, Europeans whose privacy has been invaded have to first seek justice from the regulators in the company’s home country, though they can eventually appeal a decision up to the European Court of Justice, the EU’s federal court.

  The largest American commercial surveillance companies pretend to be Irish, and in exchange Ireland has the worst privacy regulator in Europe, taking far longer than its non-Irish counterparts to produce a ruling, upholding facially absurd excuses for surveillance, and (eventually) getting overruled by Europe’s federal appeals court at a rate that outstrips all the other EU privacy regulators.

  To give you an idea of how bad the Irish privacy regulator is, consider Facebook’s absolutely laughable excuse for spying on Europeans. Remember, the GDPR allows a company to collect, store, and use your data only if you give your explicit consent, or if it has a “legitimate purpose” for doing so.

  Now, on to Facebook’s laughable excuse. In 2023, Facebook told the Irish privacy regulator that spying on its users to target ads to them was a “legitimate purpose,” because those users had a contract with Facebook wherein Facebook promised to spy on them and flood them with targeted ads. That “contract” was Facebook’s novella-length terms-of-service document, which (to a first approximation) no Facebook user has ever read. Facebook’s argument boiled down to this: “Our users want us to spy on them and bombard them with ads, and we know this is true because they wouldn’t have clicked ‘I agree’ on our terms of service otherwise. Imagine how disappointed those users would be if we didn’t spy on them! We don’t need to get their consent for that surveillance. We have a contract to spy on them, and spying on them to fulfill that contract is definitely a legitimate purpose.”

  The long, expensive road to holding these mock-Irish American tech companies to account for flouting European privacy law bought them about a decade of noncompliance with the GDPR, though some cases are finally making their way to the federal courts.

  The problems of enforcing EU corporate regulations are well understood by EU policymakers, who are plenty steamed about the GDPR’s failure, over its first decade, to notably curb commercial spying in Europe by the biggest tech companies in the world. That’s why new EU tech regulations like the Digital Markets Act (DMA) and Digital Services Act (DSA), both of which took effect in 2024, shift enforcement from EU national courts to the European Court of Justice. This is an extremely promising approach! Though, of course, it’s not without its risks: as the EU consolidates power in its federal institutions, it will face resistance from the member states. (Americans will be familiar with this phenomenon, of course.) Also, a federalized enforcement system for Big Tech may not be corruptible in the same way that the decentralized, national enforcement system is today, but that doesn’t mean it’s incorruptible.

  Meanwhile, the DMA and the DSA represent very big swings. At their best, both acts strike at the root of Big Tech’s power. Both laws have interoperability requirements—rules forcing tech companies to open their app stores, to allow third parties to connect to their services, and to drop the various gambits they use to fight against third-party payment processing.

  These laws also have structural separation provisions—these are rules that force companies to spin off or shut down divisions that compete with their business customers. The idea of structural separation is venerable, simple, and effective. Early US antitrust laws forced banks to spin out their investment arms, on the grounds that banks that owned companies that competed with the businesses that depended on them for loans would have an unstoppable temptation to cheat. If you own a pizzeria and the bank that loaned you the money to start your business also owns the pizzeria across the street, the bank can put you out of business anytime it wants—it can “loan” its own pizzeria enough money to sell pizzas below cost until you go out of business (at which point the bank’s pizzeria can jack up its prices). It can increase your interest rates when your loan rolls over. It can loan itself money to get through an economic downturn and deny the same loan to you.

  It’s very hard to figure out whether a bank has loaned its own business money but denied the same loan to a competitor for fair reasons (the competing business is badly run, say) or for unfair ones (to drive a superior rival out of business). Unless the bank manager puts a confession in writing—sends a memo to a colleague admitting to their motivations—you will never be sure about the reasoning behind a loan or a denial.

  In the wake of the Great Depression, the United States imposed structural separation on its banks: banks could either make investments (“investment banks”) or take in deposits and make loans (“retail banks,” or just “banks”). This worked extremely well, and when the United States ended this practice—when Bill Clinton signed the Gramm-Leach-Bliley Act, under the guise of strengthening Americans’ privacy rights—the eventual result was the Great Financial Crisis of 2008, which brought the world economy to its knees.

  Structural separation is a bedrock of democratic political and legal systems. Lawyers object strenuously to judges who have a conflict of interest, as when the judge is related to one of the parties, or has an investment in their adversary’s client’s business. By and large, judges recuse themselves from these cases, even if they’re sure they can be fair. (The obvious exception is the US Supreme Court, where, shockingly, there appear to be no limits to conflicts of interest.)

  With its 2024 regulations, the EU is bringing structural separation to Big Tech. Platform owners are broadly prohibited from competing with platform users—meaning, for example, that Amazon won’t be able to spy on its sellers’ orders and then clone their best products, and that Apple and Google will have to decide whether to operate app stores for your phone or make apps that compete with the ones in their app stores.

  As with judicial recusal, forcing Apple, Google, Amazon, and other platforms to recuse themselves from competing with their own customers resolves their otherwise unresolvable conflict of interest by eliminating it altogether.

  This is a marked departure from decades of tech policymaking, especially in the EU. For most of its history, the EU has devoted its energy to making tech companies better rather than making them weaker and so making their errors less consequential.

  For example, in the decade before the passage of the new DMA and DSA, many EU countries enacted some form of “harmful content” rule, which makes the platforms responsible for their users’ harassment, hate speech, and other odious conduct.

  On the one hand, this sounds reasonable: the platforms harbor toxic users who force women, LGBTQ people, racial minorities, and other marginalized groups to choose between facing a nonstop shower of the most ghastly abuse and being able to fully participate in public life, with access to all the communities and services that organize on the platform. No one should have to make that choice.

  On the other hand, if platforms are to control their worst users’ conduct, they must be able to conduct fine-grained, continuous surveillance (to spot the statistical correlates of coordinated harassment, such as a small group of users all communicating intensely with one another and then bombarding a separate user with multiple, similar messages), and they must be able to control those users’ conduct. A platform that is responsible for policing coordinated harassment is going to want to do things like control the number of participants in a conversation, limit the actions of new accounts, and take other steps to counter bad activity.
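  The detection signal described above—several distinct senders directing similar messages at one user in a short span of time—can be sketched as a simple heuristic. This is a minimal illustration, not anything any platform is known to run; the message format, the time window, the sender threshold, and the use of Jaccard word-overlap as a similarity measure are all assumptions made for the example.

```python
from collections import defaultdict

def jaccard(a: set, b: set) -> float:
    """Word-set overlap between two messages, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated_targets(messages, window=3600, min_senders=3, sim_threshold=0.5):
    """Flag recipients who get similar messages from several distinct
    senders within a short time window -- a crude proxy for the
    coordinated-harassment pattern described in the text.

    messages: iterable of (sender, target, text, timestamp) tuples.
    Returns the set of flagged target usernames.
    """
    by_target = defaultdict(list)
    for sender, target, text, ts in messages:
        by_target[target].append((sender, set(text.lower().split()), ts))

    flagged = set()
    for target, msgs in by_target.items():
        msgs.sort(key=lambda m: m[2])  # order by timestamp
        for i, (sender_i, words_i, ts_i) in enumerate(msgs):
            # collect distinct senders of similar messages in the window
            senders = {sender_i}
            for sender_j, words_j, ts_j in msgs[i + 1:]:
                if ts_j - ts_i > window:
                    break
                if jaccard(words_i, words_j) >= sim_threshold:
                    senders.add(sender_j)
            if len(senders) >= min_senders:
                flagged.add(target)
                break
    return flagged
```

  Note that even this toy version needs every message's sender, recipient, text, and timestamp—which is exactly the point: a platform can't run detection like this without fine-grained, continuous surveillance of its users' communications.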

 
