  It is worth noting that information on the overblocking of LGBTQIA+ content online was restricted to only a single page of both Freedom House’s 2016 and 2017 reports and was entirely absent from their 2018 report. Reporting on the overblocking of LGBTQIA+ content is largely absent from contemporary discourse on content moderation. A large part of this is due to the pressing issues of social media spreading alt-right political propaganda and conspiracy theories, which leads to an inevitable focus on content moderation in terms of political speech. However, it may also be due to the false Pandora’s box of porn narrative leading people to believe that LGBTQIA+ content flows freely across the global internet. My hope in this chapter is to convince you that this is a false assumption and that LGBTQIA+ content is regularly overblocked in the United States. This US-based overblocking has global implications. As many of the most prominent internet platforms are headquartered in the United States, its legislation has an inordinate impact on global internet traffic for two primary reasons. First, internet platforms rarely maintain separate content moderation standards for different national or cultural audiences. If a state has the power to influence these standards, that impact is frequently felt globally. Second, the proprietors of these internet platforms and many of their employees live in and are influenced by the same US norms that make legislation like FOSTA possible. The global impact of FOSTA is thus to doubly reinforce heteronormativity, first by subjecting LGBTQIA+ content to stricter scrutiny than heteronormative content and second because silencing sexual expression effectively preserves the status quo.

  The majority of anti-porn discourse argues that content filters are the only way to protect children from unwanted exposure to pornography online, thus justifying overblocking. The filters, however, do not appear to deliver that protection. After analyzing two separate datasets, researchers found that the use of internet filters “had inconsistent and practically insignificant links” with adolescents encountering sexually explicit content online.19 The overblocking that results from internet filters thus does not achieve its stated purpose. Mainstream heteroporn, with wide distribution networks, advanced search engine optimization (SEO) techniques, and the capacity for mass-producing content, still makes it through the filter, as we’ll see in chapter 4. What is lost is always some combination of art, sex education, LGBTQIA+ community resources, and LGBTQIA+ pornography.

  Bad Blocks/Weak Adjudication

  It is difficult to retroactively construct a full catalogue of unduly censored content prior to FOSTA because few researchers were focused on content moderation and no centralized agencies were collecting archival examples of overblocking. For example, a paper from the Berkman Klein Center for Internet and Society at Harvard University examined Google SafeSearch in 2003 and found strong evidence that Google routinely blocked newspapers, government sites, educational resources, and even sites about controversial concepts and images.20 However, since 2003, there have been no academic studies of SafeSearch censorship, and thus there is no real catalogue of what has been getting censored or how the adjudication mechanisms play out for those who believe their content was censored in error and are thus seeking to get it unblocked.21 To stick with the case of Google, this censorship has dire consequences for content producers and website managers, as even a temporary block can do irreparable damage to their position in Google search rankings and thus can cause an unexpected and potentially prolonged cessation of revenue as web traffic slows to a halt. As we will see below, this is not unique to Google. Across the internet, content creators and website administrators, particularly those with less access to capital and representing niche and/or marginalized communities, are confronting undue censorship and loss of revenue. The adjudication channels provided to them are opaque, alienating, and often unsuccessful if they do not have national visibility or expensive legal counsel.

  In lieu of a robust archive of unduly censored content pre-FOSTA, I will work to stitch together what has been documented with some experimental explorations of contemporary content moderation practices, both my own and those conducted by artists using their own convolutional neural networks, particularly what are termed “generative adversarial networks,” which reverse engineer the operations of computer vision algorithms. What we’ll find is that automated content moderation performed by computer vision and image recognition algorithms is not very good at parsing the context of nudity, which constitutes a significant problem when it comes to the censorship of art. While some of this lack of contextual knowledge can be compensated for by referring moderation decisions to human moderators, they, too, will often err on the side of overblocking artistic nudity. And while human moderators may recognize and override blocks to canonical Western artistic nudity—the types of oil paintings hung in world-class museums—this same consideration is rarely extended to non-Western, noncanonized, or everyday artistic productions.

  It is no wonder, then, that one of the most frequent victims of overblocking is the artistic representation of nudity. As we saw in chapter 2, even canonical works of art like the Venus de Milo are potentially subject to censorship by Google SafeSearch because automated content filters have trouble with higher-level differentiations like that between pornography and nude art. Several famous works of art have been subjected to censorship on platforms like Facebook. In 2018, Facebook flagged images of the Venus of Willendorf as pornography and censored them on its platform, which led to an online petition against art censorship.22 Facebook also automatically flagged an image of Gustave Courbet’s painting The Origin of the World, and the user who posted it had his account deactivated as a result.23 Facebook has also banned images of Gerhard Richter’s 1992 painting Ema, a misted view of a nude woman descending a staircase; Evelyne Axell’s 1964 painting Ice Cream, a pop art painting of a woman’s head as she licks an ice-cream cone; and Edvard Eriksen’s 1913 public sculpture The Little Mermaid.24

  This trend is even more impactful when it comes to photography. Take, for example, Michael Stokes’s work, which often includes photographs of men in various stages of undress, including wounded, amputee veterans. Since 2013, Stokes’s photographs have been repeatedly flagged on Facebook as violating its community standards, and he has been subjected to multiple bans from the platform (not to mention hate messages and threats from other users). Stokes contrasts this with the treatment of nude photographs of women, such as Helmut Newton’s nudes or the photograph of Venus Williams in the nude for ESPN’s 2014 Body Issue, which Facebook has allowed to circulate without challenge. Stokes writes, “Nude subjects have traditionally been reserved exclusively for the male gaze, so when a man poses nude, to some this implies that the image is homoerotic.”25 Thus, Stokes has found that images of women can be further undressed than those of men without triggering content filters (either automatically or by people reporting the images). Stokes argues that this trend has only accelerated in the past few years. In 2015, he posted a photo of two male police officers, fully dressed, kissing, with a caption about censorship; the post was, ironically, quickly censored. He further notes that he recognized a strong shift in Instagram’s content moderation after it was purchased by Facebook: he encountered few problems with the platform before its sale and afterward was regularly subject to warnings and takedown notices. More recently, after Tumblr announced that it would no longer host sexually explicit content on its platform, nearly 70 percent of his 900 photographs there were flagged as violating the new community standards.

  Photorealism does seem to be a key marker of the likelihood that an image will be automatically flagged as sexually explicit, at least via Google SafeSearch. For example, I ran the first one hundred images that resulted from Google Image searches for “nude sculpture” and “nude painting” through Google’s Cloud Vision API and found evidence that photorealism was a key indicator for an image being flagged as “adult” or “racy.”26 Of the sculptures, only one was flagged as likely or very likely to be adult, and thirty were flagged as likely or very likely to be racy. Of the paintings, twelve were flagged as adult and sixty-seven as racy. The sculptures that were flagged tended to be realistic and to have a sheen reminiscent of the sweat and oil often found on models’ skin during filming, and paintings were much more likely to be flagged the less abstract they were. This bears out upon further testing. I ran the first one hundred images from The Vulva Gallery, an online site and printed book containing close-up illustrations of vulvas in the style of watercolor paintings. None of them were flagged as adult, and only fifteen were flagged as racy. Similarly, I ran two sets of hentai fanart from the site DeviantArt.com through Cloud Vision: fifty color illustrations and fifty line art illustrations. Of the color illustrations, thirty-four were flagged as adult and forty-eight as racy, while only one of the line art illustrations was flagged as adult and only thirty-three as racy. Lastly, I took the first forty-four images of Real Dolls, lifelike silicone sex dolls, from a Google Image Search, ran them through Cloud Vision, and found that all forty-four were flagged as both adult and racy. These findings are borne out by the computer science literature, which demonstrates that color and texture properties are key features in the detection of nudity by computer vision algorithms, as seen in chapter 2.
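  For readers curious about the mechanics of these tests, the sketch below shows how images can be run through Google’s Cloud Vision API and tallied by its SafeSearch “adult” and “racy” ratings. It is an illustrative reconstruction rather than the exact script behind the numbers above; the folder name and the decision to count “likely” and “very likely” ratings as flags are assumptions made for the example.

```python
# Illustrative sketch (not the exact script used for the figures above):
# run a folder of downloaded images through Cloud Vision's SafeSearch
# detector and count how many are rated "likely" or "very likely"
# adult or racy. Assumes the google-cloud-vision client library and
# valid application credentials.
from pathlib import Path
from google.cloud import vision

client = vision.ImageAnnotatorClient()
FLAGGED = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}

adult_count = racy_count = 0
image_dir = Path("nude_sculpture_results")  # hypothetical folder of saved search results
for path in sorted(image_dir.glob("*.jpg")):
    image = vision.Image(content=path.read_bytes())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    adult_count += annotation.adult in FLAGGED
    racy_count += annotation.racy in FLAGGED

print(f"adult: {adult_count}, racy: {racy_count}")
```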

  Yet what a computer “sees” as indicative color and textural features of nudity is not the same as what we would expect based on our own visual experience. This has been demonstrated by several artists who have been using machine learning to probe the limits of computer vision, image recognition, and adult content moderation as it relates to the arts. Take, for example, the work of Tom White, an artist and senior lecturer in media design at Victoria University of Wellington. White uses a generative adversarial network (GAN) to produce what the tech industry calls “adversarial examples” based on ImageNet classifiers. In essence, a GAN mirrors the CNN that powers an image recognition algorithm (see chapter 2 for a lengthy overview of CNNs). It feeds abstract shapes, patterns, or amalgamations of images into the CNN, observes which classifiers the image triggers, and then adjusts the image iteratively until it arrives at one that will trigger a classifier despite looking nothing like what a human would recognize as an example of that particular classification. As White puts it, he uses abstract forms to “highlight the representations that are shared across neural network architectures—and perhaps across humans as well.”27
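  The feedback loop White describes can be sketched in a few lines of code. The example below is not White’s pipeline (his work renders abstract shapes and targets several commercial systems); it only illustrates the core idea of iteratively adjusting an image until a pretrained ImageNet classifier assigns high confidence to a chosen class. The starting noise image, the target class index, and the step count are arbitrary choices for the sketch.

```python
# Hedged sketch: gradient-based search for an image that triggers a chosen
# ImageNet class on a pretrained classifier. Input normalization is omitted
# for brevity, so confidences will differ from a production pipeline.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_class = 1  # arbitrary ImageNet class index chosen for illustration

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logits = model(image.clamp(0, 1))
    loss = -logits[0, target_class]  # raise the target class score
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(image.clamp(0, 1)), dim=1)[0, target_class]
print(f"target-class confidence after optimization: {confidence.item():.3f}")
```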

  In two exhibitions, Synthetic Abstractions and Perception Engines, White has generated shocking images that will trigger certain classifiers on Amazon, Google, and Yahoo’s image recognition systems but to a human look nothing like an object that ought to trigger that classification.28 Take, for example, figure 3.2, which depicts a series of black-and-white abstract shapes and lines on an orange and yellow background. Google SafeSearch recognizes this abstract image as “very likely” to be adult content, and both Amazon Web Services and Yahoo Open NSFW make similar determinations. White has a series of similar adversarial examples that to humans present as abstract shapes and colors but to image recognition systems look like concrete, identifiable objects. Images like these challenge the efficacy of image recognition systems, probing their boundaries to demonstrate the different ways in which they perceive the world. They also constitute a more practical problem, as White’s work would likely be censored on most major platforms today, and he would be required to individually appeal each automatic flag applied to images on his accounts despite their (to human eyes) obviously “safe for work” status.

  Figure 3.2

  Mustard Dream by Tom White being run through Google’s Cloud Vision API.

  For another example, we can look to Mario Klingemann’s eroGANous project, which stitches together elements from actual images into adversarial examples that will trigger image recognition systems.29 These images are much more photorealistic than White’s; thus, while White’s images may survive human review after the system has automatically flagged his content, the eroGANous images are more likely to be censored in the six- to eight-second window that human reviewers generally have to make censorship determinations on potentially sexually explicit content (see figure 3.3). As Klingemann notes, “When it comes to freedom, my choice will always be ‘freedom to’ and not ‘freedom from,’ and as such I strongly oppose any kind of censorship. Unfortunately in these times, the ‘freedom from’ proponents are gaining more and more influence in making this world a sterile, ‘morally clean’ place in which happy consumers will not be offended by anything anymore. What a boring future to look forward to.”30 As a side note, for those interested in escaping the boredom of this sterile visual regime, I’d recommend taking a look at Jake Elwes’s attempt at producing “machine learning porn,” a two-minute video of computer vision pornography unrecognizable—yet uncannily evocative—to human vision.31

  Figure 3.3

  eroGANous image being run through Google’s Cloud Vision API.

  A similar example can be found in Robbie Barrat’s work. Barrat trained a GAN on ten thousand images of nude portraits and used it to iteratively generate new “nude” images. As Barrat notes,

  So what happened with the Nudes is the generator figured out a way to fool the discriminator without actually getting good at generating nude portraits. The discriminator is stupid enough that if I feed it these blobs, it can’t figure out the difference between that and people. So the generator can just do that instead of generating realistic portraits, which is a harder job. It can fall into this local-minima where it isn’t the ideal solution, but it works for the generator, and discriminator doesn’t know any better so it gets stuck there. And that is what is happening in the nude portraits.32

  Thus, as Barrat’s project demonstrates acutely, computer vision has a very peculiar and least-common-denominator approach to detecting nudity that totally collapses the context within which that nudity occurs. For many people, none of the images above would be considered obscene, and even if they were, they are most certainly contained within the realm of artistic nudity rather than pornography. Despite this, these images are routinely censored by all major computer vision algorithms.
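  The dynamic Barrat describes follows directly from how a GAN is trained: the generator is never scored on realism itself, only on whether the discriminator labels its output as real. A minimal training loop, sketched below, makes that asymmetry visible. This is a generic GAN skeleton, not Barrat’s code; the network sizes and the flattened-image representation are simplifications for illustration.

```python
# Minimal GAN training step (a sketch, not Barrat's code). The generator's
# only objective is to fool the discriminator, so if the discriminator is
# weak it can settle on "blobs" that pass as nude portraits.
import torch
from torch import nn

latent_dim, image_dim = 64, 64 * 64 * 3  # illustrative sizes

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(real_images):
    batch = real_images.size(0)

    # Discriminator: learn to separate real portraits from generated ones.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fakes), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce whatever the discriminator will call "real."
    # Realism is never measured directly, only the discriminator's verdict,
    # which is how the "blobs" local minimum Barrat mentions can arise.
    fakes = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fakes), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# real_images would be flattened portrait tensors scaled to [-1, 1].
```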

  These experiments with computer vision challenge the reliability of image recognition and pose an implicit challenge to content moderation. They also demonstrate how the ethic of anti-porn crusaders, for whom overbroad censorship is always preferable to even one pornographic image slipping through, guides the design of these systems. This prioritization of anti-porn morality is, as I’ve shown, explicitly at odds with the needs, desires, and rights of the LGBTQIA+ community. Further, the artists above allow us to imagine a future in which new BigGAN-style production practices can obfuscate pornography from content moderation algorithms. As Klingemann notes,

  Luckily, the current automated censorship engines are more and more employing AI techniques to filter content. It is lucky because the same classifiers that are used to detect certain types of material can also be used to obfuscate that material in an adversarial way so that whilst humans will not see anything different, the image will not trigger those features anymore that the machine is looking for. This will of course start an arms race where the censors will have to retrain their models and harden them against these attacks and the freedom of expression forces will have to improve their obfuscation methods in return.33

  What goes unnoted here is that these techniques will likely only be available to the most tech-savvy content producers or, in lieu of doing it themselves, those with either the access to capital to hire others to perform this labor or audiences large enough to crowdsource it for free. A likely unintended effect is that, in this arms race between porn obfuscators and content moderators, the only people unable to keep up will be amateur and low-budget artists and pornographers, of whom LGBTQIA+ content creators are likely to form a substantial portion. In short, if what computers view as “porn” really can be likened to spam, it seems inevitable that certain types of “porn” will mutate to exploit the weaknesses in image recognition systems. It also seems likely that the content producers who achieve this will be the well-resourced corporations peddling mainstream, heteronormative content.
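  The obfuscation Klingemann anticipates is, in technical terms, the mirror image of White’s experiments: instead of making an abstract image trigger a classifier, one perturbs an explicit image just enough that the classifier no longer fires while human viewers notice no difference. The sketch below illustrates that logic against a hypothetical local, differentiable stand-in classifier; commercial moderation systems expose no gradients, so real-world evasion is considerably harder than this suggests.

```python
# Hedged sketch of adversarial "obfuscation": minimize a nudity classifier's
# score while keeping the per-pixel change imperceptibly small. `nsfw_model`
# is a hypothetical differentiable model returning an explicitness score.
import torch

def obfuscate(image, nsfw_model, epsilon=0.03, steps=50, lr=0.005):
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nsfw_model((image + delta).clamp(0, 1)).sum()  # score to lower
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the perturbation tiny
    return (image + delta.detach()).clamp(0, 1)
```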

  Iffy Blocks/Bad Consequences

  As we saw earlier in this chapter, the discourse of moral panic that leverages the idea of unwanted and traumatic exposure of children to hard-core pornography to legitimate regimes of censorship, sexual discipline, and heteronormativity necessarily makes children and adolescents the most likely to have their internet traffic filtered: at school, at college, at the library, and at home. This filtration is likely to be under the direction of the people with the most authority in each of these locations, and thus the patterns of regulation of internet traffic are likely to draw upon the preexisting material relations of inequality at these locations, which are often strongly heteronormative in the household.34 By pandering to these moral panics and providing overbroad filters to ensure the smallest possibility of “unwanted exposure,” filters like SafeSearch place themselves at odds with some of the more liberatory potentialities of the internet. Additionally, in the United States, some evidence suggests that adolescents who use online pornography are more likely to be African American and to come from less educated households with lower socioeconomic status.35 There are thus always class and racial tensions that cut through these sex panics.36

 
