From this example, we can see a number of problems in the way content gets moderated on Facebook. First, the company’s quest to produce a universal—and English-language-based—set of standards runs into problems when applied in practice. Here those problems consist of cultural variations in what is considered “nudity” and in which contexts that nudity becomes “obscene.” While it is easy to see how a company steeped in the practicality of an entrenched hacker ethic would lean on the visibility of female-presenting nipples as the sole determinant of whether an image is obscene, doing so hardcodes heteronormativity into the platform. It reinforces the sexualization of female-presenting bodies and produces an unfair double standard that disadvantages female-presenting people.173 Further, even the proposed policy changes would not extend toplessness to female-presenting people, and thus could not achieve the hoped-for desexualization of female-presenting bodies, among populations that have historically suppressed the practice. As the company noted in its Product Policy Forum, “We are trying to focus on historic, social and cultural norms that exist across different groups of people, whether that’s religious groups, racial groups, social groups.”174 This move is also premised on a homogeneity of historic, social, and cultural opinion about female-presenting toplessness that does not exist. In the United States, for example, thirty-seven states have no official policy on female-presenting toplessness, and the question is more often legislated at the local level, producing a wide variety of local ordinances governing the exposure of female-presenting breasts.175
The most heteronormative aspect of this particular community standard, however, is the assumption that “female nipples” correlate with genitalia, or that, where genitalia are obscured, anatomical sex can be inferred from the shape and size of the breast tissue to which visible nipples are attached. Take, for example, Courtney Demone’s #DoIHaveBoobsNow? project, in which Demone, a transgender woman, posted photos of her exposed chest as she underwent hormone replacement therapy, challenging Facebook and Instagram to answer her very simple question: “At what point in my breast development do I need to start covering my nipples?”176 Platforms cannot answer this question based on their cisnormative community standards. In other words, their policies turn out to be profoundly impractical for regulating the visibility of gender-fluid, nonbinary, and trans bodies on the platform. Given the current accuracy rates of computer vision-based automated content filtering, coupled with the “human algorithms” Facebook employs to sort out the content that slips through its automated filters, the company can identify nearly all images with exposed nipples. So why not produce an overlaid filter that blurs an image until people confirm that they are willing to be exposed to it, as Facebook already does with violent and medical images posted to the site? Why not simply produce a nipples/no nipples toggle in a user’s Facebook settings? Such a solution would be easy to implement, and since the company already verifies user identities and ages, it would be just as easy to gatekeep children from “unwanted exposure.” Heteronormativity on a platform like Facebook is essentially this: seeing biased filtering as the default, most practical solution to the problem of content moderation rather than recognizing the ease with which less normative filtering could be achieved across the platform.
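To make the claim of ease concrete, here is a minimal sketch, in Python, of what such an opt-in filter might look like. Everything in it is hypothetical: the class names, fields, and the eighteen-plus threshold are illustrative assumptions, not Facebook’s actual data model or API.

```python
# A minimal sketch of the user-controlled filtering proposed above.
# All names and structures are hypothetical illustrations; they do not
# reflect Facebook's actual internal systems.

from dataclasses import dataclass

@dataclass
class Viewer:
    age_verified: bool         # the platform already verifies identity and age
    age: int
    show_nudity: bool = False  # the proposed "nipples/no nipples" toggle

@dataclass
class Image:
    url: str
    flagged_nudity: bool       # label from existing automated/human review

def render(image: Image, viewer: Viewer) -> dict:
    """Serve flagged images blurred behind a click-through overlay
    (as is already done for violent and medical images) unless the
    viewer is an age-verified adult who has opted in."""
    if not image.flagged_nudity:
        return {"url": image.url, "blurred": False, "overlay": None}
    opted_in = viewer.age_verified and viewer.age >= 18 and viewer.show_nudity
    return {
        "url": image.url,
        "blurred": not opted_in,
        "overlay": None if opted_in else "Tap to view: may contain nudity",
    }
```

The point of the sketch is simply that, given the classifiers and age verification the platform already operates, replacing deletion with an opt-in overlay is a matter of a few branches rather than a new technical capability.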
Another recent point of contention in Facebook’s Product Policy Forum has been how to moderate sexual activity in art. In late 2018 and early 2019, the company convened two working group meetings, consulted with fourteen external stakeholders, and held a presentation at its Product Policy Forum to reaffirm the status quo and formalize better operational guidelines to ensure more consistent enforcement by its “human algorithms.” As the company notes, “Our policies distinguish between real world and digital art in the context of adult nudity and sexual activity because we have historically found that digital images are hypersexualized.”177 In essence, the company maintains a medium-specific set of standards. If your image is a photograph of an oil painting or a sculpture, Facebook assumes it is less likely to be hypersexualized than a digitally produced image. The company is also more likely to remove performance art and mass-produced, video-based content. Thus, content moderators are trained to look at whether an image is of a “real object” (paper, wood, canvas, wall) or composed in a “traditional medium” (watercolors, pencil, marker, charcoal, spray, marble, bronze) and to use Google Image Search to check whether the art is “real or digital.” They also look for signs that an image has been altered by photo-editing software, such as cut-and-paste artifacts, and for traces of paint programs, like vector shapes or 8-bit lines. This essentially reproduces a traditional classist distinction between “high” and “low” art in which Tom of Finland erotica is permissible because it’s been made with oil paints on canvas but anonymously created digital nude portraits are not. While it might not be readily apparent how this connects to heteronormativity, one need only remember the financially precarious existence of much of the LGBTQIA+ community, particularly a large section of the trans community. This precarity makes it more difficult for the community to publicize its arts and culture, to contest and litigate censorship, and to take in the revenue essential to maintaining and producing new artworks. As smartphones increasingly become a necessity for everyday life—essential for gaining employment, managing finances, obtaining housing, and so on—they are increasingly the windows through which arts and culture are accessed, disseminated in the public sphere, and marketed to generate revenue. Digital communications mediated by these smartphones and the social media platforms that dominate their internet usage can easily lead to a future in which artistic production is less viable for the LGBTQIA+ community. By producing a medium-specific set of community standards surrounding artistic nudity, Facebook is potentially leaning into a future in which LGBTQIA+ art bears an undue burden of censorship and risks being rendered invisible to the broader public.
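The moderator guidelines described above amount to a rule-based triage, and restating them as code makes the medium-specific bias explicit. The sketch below is again hypothetical: the parameter names, default-to-removal behavior, and rule ordering are assumptions drawn from the training criteria reported above, not Facebook’s actual implementation.

```python
# A hypothetical restatement of the "real vs. digital" triage that
# moderators are trained to perform; not Facebook's actual procedure.

TRADITIONAL_MEDIA = {"watercolor", "pencil", "marker", "charcoal",
                     "spray", "marble", "bronze", "oil"}
REAL_OBJECTS = {"paper", "wood", "canvas", "wall"}

def triage_artwork(medium: str, substrate: str,
                   found_via_image_search: bool,
                   has_editing_traces: bool) -> str:
    # Cut-and-paste artifacts or traces of paint programs (vector
    # shapes, 8-bit lines) push the image into the "digital" bucket,
    # which is presumed hypersexualized.
    if has_editing_traces:
        return "remove"
    # A "real object" rendered in a "traditional medium" is presumed
    # to be legitimate art.
    if substrate in REAL_OBJECTS and medium in TRADITIONAL_MEDIA:
        return "allow"
    # Google Image Search is used to check whether the work exists
    # offline as a physical object.
    if found_via_image_search:
        return "allow"
    return "remove"
```

Written out this way, the classist default is visible in the final line: digitally produced work that leaves no offline trace is removed, whatever its content.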
Taken collectively, despite Facebook’s responses to public pressure to improve its content moderation policies and practices, the operation of its “human algorithms” leaves much to be desired. The employees responsible for moderating content of central importance to LGBTQIA+ communities in the United States are largely the least experienced, operate from other cultural contexts, and are underpaid and overburdened, leading to uninformed, exceedingly fast decisions about censorship. As I’ve shown, these decisions are particularly impactful when it comes to content surrounding sex work, breastfeeding, nipple exposure more broadly, cultural nudity, and artistic expression. This creates an undue burden of censorship on LGBTQIA+ communities and disallows a large portion of content that many people would not find explicitly obscene or pornographic, ranging from community building, social activism and organizing, and sex education to explorations of gendered embodiment and artistic expression. As we’ll see in chapter 3, these actions by Facebook are not the exception but the rule on the internet. LGBTQIA+ content faces undue censorship, account banning, and demonetization across the web.
3
Overblocking
Cyberporn and the End of Regulation
The internet as we know it and many of the companies that dominate it were forged in the wake of a nationwide sex panic about children’s access to pornography. This sex panic might be best exemplified by the July 3, 1995, Time magazine issue on cyberporn (see figure 3.1). In the issue’s cover story, Philip Elmer-DeWitt reported on the findings of a later-debunked study claiming that 83.5 percent of the images stored on Usenet newsgroups were pornographic.1 Twenty years later, Elmer-DeWitt would describe this as his worst story “by far” and note that one Time researcher assigned to the story later recalled it as “one of the more shameful, fear-mongering and unscientific efforts that we ever gave attention to.”2 This sex panic surrounding “cyberporn” culminated in Congress passing the Children’s Internet Protection Act (CIPA) in 2000. CIPA required public schools and libraries to install internet filters on all of their computers to block obscene content, child sexual abuse images, and content deemed harmful to minors in order to continue receiving federal funding. Like earlier moral panics surrounding the dissemination of pornography in the United States, CIPA also embodied a class-based anxiety over who had access to online pornography, as evidenced by the original extension of the ban to adult library patrons.3
Media scholar Henry Jenkins responded to the Time story in a 1997 article published in Radical Teacher. In a passage worth quoting at length, he wrote,
The myth of “childhood innocence” “empties” children of any thoughts of their own, stripping them of their own political agency and social agendas so that they may become vehicles for adult needs, desires, and politics. . . . The “innocent” child is an increasingly dangerous abstraction when it starts to substitute in our thinking for actual children or when it helps justify efforts to restrict real children’s minds and to regulate their bodies. The myth of “childhood innocence,” which sees children only as potential victims of the adult world or as beneficiaries of paternalistic protection, opposes pedagogies that empower children as active agents in the educational process. We cannot teach children how to engage in critical thought by denying them access to challenging information or provocative images.4
Figure 3.1
Time magazine cover, July 3, 1995.
As we’ll see in this chapter, so much of our lives as children and adults is lived at the interstice between the sexual and the platonic, the prurient and the pure. Critical thought requires us to learn how to navigate these gray areas, and developing that capacity takes many years of practice, online or off. Overbroad filters stunt this development, blocking access to everything from legitimate nonsexual speech to hard-core pornography and all the gray areas in between. As we’ll see, this problem is particularly acute when it comes to LGBTQIA+ discourse. Yet what little resistance there was to CIPA maintained these black-and-white distinctions between legitimate and illegitimate speech, focusing only on how nonsexual speech was blocked by overbroad filters.
Prior to CIPA, the American Civil Liberties Union (ACLU) was already publishing detailed white papers arguing against internet censorship based on a broad interpretation of First Amendment rights.5 In 2002 the Kaiser Family Foundation published a study indicating that at moderate levels, internet filters did not significantly impede access to online health information, but at their more restrictive levels, the filters would “block access to a substantial amount of health information, with only a minimal increase in blocked pornographic content.”6 CIPA was challenged in court by the American Library Association, also on First Amendment grounds, and appealed to the Supreme Court by 2003. It is worth noting that none of these free speech arguments against the implementation of porn filters claimed that pornography ought not be filtered. Obscene pornography has never constituted protected speech in the United States, and pornography more broadly has been especially vulnerable to censorship since Miller v. California.7 The central premise of all these arguments was that porn filters are unreliable. They always overblock—they filter some portion of nonpornographic sites for one reason or another—and they always still let some porn through. The First Amendment claims against these filters rested on the fact that they would necessarily block some portion of nonpornographic content, which, precisely because it was not pornography, would qualify for free speech protections.
All nine Supreme Court justices agreed that restricting children’s access to pornography posed no constitutional problem. They also agreed that all available filters were blunt instruments that inevitably block some portion of nonpornographic material.8 The constitutional question was thus whether this overblocking constituted a violation of First Amendment rights. The Supreme Court ultimately decided in favor of CIPA by a margin of six to three. In the aftermath of this decision, and with the cyberporn sex panic displaced from center stage by 9/11 and the escalating wars in Afghanistan and Iraq, free speech concerns about overblocking largely faded into the background. As Deborah Caldwell-Stone noted in her 2013 American Libraries article, “Debate over filtering became muted. . . . While researchers counted the number of libraries and schools using filters, little inquiry was made into how institutions were implementing CIPA or how filtering was affecting library users.”9
While the critical discourse seeking to combat overblocking by internet filters has yet to fully resurface, the moral panic over access to pornography through public internet outlets, particularly in schools, remains alive and well. For instance, in both 2017 and 2018, NCOSE (see chapter 1) added EBSCO Information Services to its annual “Dirty Dozen” list of smut peddlers. Although CIPA remains in force and most American public schools filter internet pornography, the EBSCO databases that many students use to access educational materials are not subject to these same filters. Even after EBSCO worked to scrub its elementary, middle, and high school databases of pornographic and sexually explicit materials, NCOSE found a number of materials in these databases that it objected to. NCOSE researchers found “sexually graphic written content on high school databases, including sexually graphic written descriptions and instructions for oral sex and other sexual acts” that they considered “salacious and not academic.”10 On the high school EBSCO database, they also objected to academic articles about gay porn, articles about pornography more broadly, and articles from magazines like Cosmopolitan and Redbook that provide sex advice. EBSCO’s middle school database (Middle Search Plus) and elementary school database (Primary Search) contained articles on adult entertainer Bettie Page; teen activists working to make public nudity acceptable by posting nude protest images to Instagram; sex advice articles with information on oral sex, anal sex, and BDSM; and other sex education materials that NCOSE considered guilty of normalizing deviant sex and encouraging the use of pornography by children.11
EBSCO spokeswoman Kathleen McEvoy noted that schools are primarily responsible for setting up their own EBSCO filters to block objectionable content and that the NCOSE researchers were likely accessing these materials through home computers not subject to those school filters. Nevertheless, EBSCO initiated an investigation after NCOSE released its findings. While the company was unable to reproduce those findings, McEvoy said that EBSCO took them very seriously and had taken steps to ramp up its content filters for public school databases. She also concurred with NCOSE that magazine articles about sex practices like BDSM did not count as “sex education” and thus ought to be censored.12 Despite EBSCO’s response to these problems, at least one school district in Colorado discontinued its subscription to EBSCO Information Services for the foreseeable future.13
We can thus expect that overblocking will continue to be part of the daily lives of public school students in the United States for the foreseeable future as well. This inordinately impacts the most underprivileged students, who might not have ready access to the internet via broadband or mobile devices outside of the school’s filters. As we’ve repeatedly seen, moral panics over sex and pornography always have class dimensions, and those dimensions recur both here and in another instance: after joining forces with Enough Is Enough, NCOSE got both Starbucks and McDonald’s to agree to filter sexually explicit content on their free Wi-Fi in locations nationwide.14 The overblocking that results inordinately impacts poor and unhoused adults in the same way that it did when public libraries installed filters after CIPA.
This chapter will make the case that overblocking is a phenomenon common across the internet writ large, not one confined to public schools or free Wi-Fi hotspots. Nearly every major internet platform today engages in systematic overblocking of sexual expression, which by default reinforces heteronormativity. The primary focus will be on analyzing the impact of the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, collectively known as FOSTA-SESTA and hereafter referred to as FOSTA, which the US Congress passed in 2018. FOSTA was the first substantial change in US legislation and regulatory policy surrounding adult content since CIPA and has had the largest impact on the internet since the Communications Decency Act (CDA) was passed in 1996. We can essentially divide content moderation practices into pre-FOSTA and post-FOSTA eras. In the former, ISPs and content hosts, like social media platforms, were not liable for user-generated content disseminated by or hosted on their networks, which led to a lighter, but still quite repressive, censorship regime, as we’ll see below. FOSTA, claimed as a marquee policy victory by NCOSE, has led to extreme crackdowns on sexual speech on the internet. It has adversely impacted many LGBTQIA+ communities and has been exploited by the manosphere (see chapter 1) to punish adult entertainers and sex workers online. In the aftermath of FOSTA, many adult entertainers and sex workers have faced serious consequences as both their livelihoods and their bodies were put at risk by the act. Additionally, the act has ramped up the overblocking of sex education materials, which is likely to have an inordinate impact on the adolescents coming of age during its reign over internet content. As I’ll show in chapter 4, this victory of anti-porn crusaders has also led to a détente wherein pornography is allowed to continue proliferating online provided it is produced by multinational corporations and adheres to heteronormative genre conventions.
Overblocking in the Run-up to FOSTA-SESTA
Since 2016, LGBTQIA+ digital content and its creators have been increasingly under attack on a global scale. As Freedom House’s annual “Freedom on the Net” report for 2016 states, “Posts related to the LGBTI community resulted in blocking, takedowns, or arrests for the first time in many settings. Authorities also demonstrated an increasing wariness of the power of images on today’s internet.”15 The organization found attempts to block LGBTQIA+ content in eighteen countries, up from fourteen in 2015, ranging from South Korean regulators asking the Naver web portal to reconsider linking to gay dramas to the Turkish government blocking all of the country’s popular LGBTQIA+ websites for a period during 2015.16 Turkey regularly invokes legal provisions about protecting families, censoring obscenity, and preventing prostitution to block LGBTQIA+ websites and apps like Hadigayri.com, Transsick-o, and Grindr.17 By 2017, Freedom House estimated that 47 percent of the global population lived in countries where LGBTQIA+ content was suppressed and sometimes punishable by law.18
