Tumblr. Crystal Abidin.
early Content Policy and Guideline documents contained a single sentence stating that those who regularly hosted and uploaded sexual videos would be suspended. The 2012 update to the Community Guidelines elaborated by setting two rules: users who “regularly post sexual or adult-oriented content” were asked to flag their blogs as “Not Suitable For Work (‘NSFW’),”7 and users were welcome to embed links to sexually explicit video but should avoid uploading it, because tumblr is “not in the business of profiting from adult-oriented videos and hosting this stuff is fucking expensive.” The call to self-label ushered in the first version of the so-called Safe Mode, in which content from blogs self-tagged as NSFW was filtered out of the dashboards and search results of users who selected that option. In 2012, Karp went on record saying he was not “into moderating” NSFW content and that tumblr was “an excellent platform for porn,” which he did not “personally have any moral opposition to” (Cheshire 2012). After the sale to Yahoo! in 2013, tumblr started tinkering with the visibility of sexual content in what Gillespie (2018: 173) has described as an attempt on Yahoo!’s part both to let “tumblr be tumblr” and to sell ads. When invited to comment on the matter by talk show host Stephen Colbert, Karp maintained that tumblr had taken a pretty hard line on freedom of speech, arguing that he did not want to “go in there to draw the line between” art and behind-the-scenes photos of “Lady Gaga and like, her nip” (Dickey 2013). The Community Guideline clauses regarding NSFW content remained the same throughout updates in 2015, 2016, and 2017, although a link to “report unflagged NSFW content” was added in the 2016 update (tumblr 2016). In 2017, a stricter Safe Mode was introduced.
The new system was quite complex, filtering blogs that were self-, moderator-, or automatically labeled as NSFW from the external and internal search results of all non-logged-on users and all logged-on users who were under the age of 18 (see Chapter 6).
In late 2018, to the great shock of tumblr users and scholars, Tumblr Inc. announced that it was banning all “photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content … that depicts sex acts” to “keep the community safe” (staff 2018). The source of this sudden and radical change was twofold: the US Congress passed the twin bills of FOSTA/SESTA (Fight Online Sex Trafficking Act and Stop Enabling Sex Traffickers Act), amending CDA 230 to allow internet intermediaries to be held responsible for “promoting or facilitating prostitution” or “knowingly assisting, facilitating or supporting sex trafficking,”8 and tumblr’s mobile app was briefly banned from Apple’s App Store on the basis of claims that child pornography had been found on the site.9 LGBTIQA+, fandom, sex worker, artist, and academic circles pointed out that the ban would destroy a unique, safe, and empowering space that many often-marginalized individuals and groups used for exploration of self and sexuality (Ashley 2019; Liao 2018).10 Despite experts’ and users’ suggestions that there were better ways to deal with presumed child pornography and the proliferating porn bots,11 or that perhaps the growing subsection of racist hate speech warranted attention (Tiidenberg 2019a), Tumblr Inc. went ahead with the ban as planned.
Although many users hoped that the NSFW ruling would be reversed under Automattic, CEO Mullenweg dispelled that hope by citing the app stores’ intolerance of NSFW content as the reason for the ban (Patel 2019). Sexually explicit content is still present on the platform, though its make-up and volume have changed. Based on our experiences, original visual content created by tumblr users themselves, often of themselves (see Chapter 6), is nearly gone. What remains is pornographic content: GIFs, videos, and still images from porn, which are far more explicit than selfies with female-presenting nipples ever were.
Algorithms
Algorithms are increasingly used to police user compliance with platform rules, but more broadly they shape flows of information, assign meaningfulness to content, and mold our participation in public life (Gillespie 2012; Langlois 2012). However, algorithms are usually invisible. They become noticeable to everyday users when there are shifts in how they organize information, show or hide content, increase or reduce the visibility of the user’s own content, recommend accounts or posts, or insert moneyed speech into one’s line of sight. As algorithms themselves tend to be proprietary, researchers study their implications via users’ algorithmic imaginaries (Bucher 2017) or algorithmic lore (Bishop 2020). This is what we will do to describe users’ perceptions of and experiences with tumblr algorithms over the past decade.
tumblr’s algorithms were experienced as comparatively unobtrusive until 2017. We link this to Tumblr Inc.’s particular approach to advertising and classifying users – up until 2015 they almost performatively refused targeted advertising (see Chapter 3). But it can also be linked to tumblr’s vision, its responses to user criticism, and its historical “spam” problems (Perez 2011). There have almost always been spaces within the tumblr interface for recommended content, but users’ reactions to them have been ambivalent. The now-defunct “Spotlight” was introduced in 2011 and was clearly articulated by tumblr, and experienced by users, as editorial rather than algorithmic (staff 2011). Being featured on Spotlight was generally considered a good thing by users. In 2010 and 2011 even NSFW blogs could get recommended, if they were popular and original enough. “Radar” has been around since at least 2010 and “Recommended blogs” since 2011 (we were unable to precisely date these features). Users have imagined both to combine editorial and algorithmic techniques. There were many posts on and off tumblr articulating either how to increase one’s chances of being featured in those spaces (e.g., tagging content with the #RadarPlz hashtag) or how to use a variety of browser add-ons to suppress them from one’s Dashboard experience (see Chapter 3). Until the introduction of “Best Stuff First” in 2017, tumblr did not (noticeably) reorganize what users saw on their dashboards (staff 2017). Since 2020, tumblr has made recommendations across sponsored posts, blogs, searches, and tags, all demarcated as “sponsored.” Within the mobile app there are additional categories of “recommended group chats,” “recommended for you,” and “watch on tumblr.”
An early case of an algorithmic imaginary (Bucher 2017) emerged after tumblr’s 2012 policy against self-harm blogs, when we noticed vernacular techniques for backup hashtags and other ways of circumventing the algorithms circulating among some thinspo blogs (Kanai et al. 2020; see also Chapter 7). Users’ imaginaries of tumblr algorithms shifted more drastically with the 2017 Safe Mode, when algorithms were obviously and intrusively employed to filter content (see Chapter 6). Certain keywords that returned results in the browser returned nothing in the mobile apps because of app store restrictions. This included “#gay,” because the data the filtering algorithm was trained on indicated that the hashtag often accompanied pornographic content, but the LGBTIQA+ community rightfully interpreted this as an outright attack. tumblr managed to placate users by reversing some of the changes, promising to work on more intelligent solutions for battling porn bots and filtering content, and, primarily, by demonstrating that they were listening. Their resolution of this particular governance conflict showed that they understood that moderation involves a politics of visibility (Gillespie 2018), which in the case of sexual self-expression often follows the fault lines of systematic marginalization (e.g., disenfranchising the LGBTIQA+ community).
This understanding seemed to have evaporated by the time of the NSFW ban in 2018. tumblr’s Help page claimed that the new ban was enforced through a “mix of machine-learning classification and human moderation from our team of trained experts,” wherein appeals regarding misflagged posts would be reviewed by humans (tumblr Help Center 2018). tumblr’s classification algorithms (usually referred to as flagging algorithms or flagging bots in vernacular discourse) were shockingly bad and the public backlash against them spanned platforms (Tiidenberg 2019a). While differentiating permitted nudity (mastectomy or gender-confirming scars, breastfeeding) from prohibited nudity (“female presenting nipples,” any depictions