The New Social Media “Super Users”
Understanding how journalists and academics fit into the online moderation ecosystem
Having listened to many journalists who cover social media, from the “look at this cool thing I found” beat to the disinfo beat, I find that every line of discussion eventually ends up at the same point: “Social media companies have abdicated their responsibility to moderate their platforms by relying on us journalists to tell them when we find something bad.” This, in my mind, is a grave misunderstanding of what has actually transpired in the ecosystem.
Let us break down what is actually contained in this statement. The types of accounts flagged by reporters generally fall into three buckets: first, persistent rule-breaking topics that are not automatically detected despite using obviously prohibited keywords; second, persistent rule-breaking users that are not manually detected; and third, social-norm-violating posts by newsworthy accounts that have not been outwardly punished. My focus, along with the majority of discussion on the topic, is on the third bucket.
One key misunderstanding that has persisted within the discussion I have read, watched, and listened to online has to do with the “passing the buck” described previously. In reality, Facebook and Twitter (notably the platforms where journalists are both active and popular) have given journalists and academics a sort of “Trusted User” status, granting them certain moderation authorities that the general public does not have access to, while at the same time not actually absorbing Trusted Users into the admin-side moderation stack.
This distinction is key. By not being agents of the platforms themselves, Trusted Users are able to determine what content violates the spirit of uncodified cultural norms that exist on the platform, without creating actual written policy and rules. Instead, an implicit (sometimes explicit) threat is made: the academic or journalist will reveal that Facebook or Twitter are themselves violating the unwritten cultural norms of their own platforms if they fail to remove the content that has been flagged for them. This, in my mind, makes journalists, social media researchers, and activists more akin to Reddit’s subreddit moderators. Beyond the ability to shape cultural norms (you need some influence to get the platform’s ear), Super Users on non-Reddit platforms change how site administrators focus their moderation efforts on communities the Super Users are not part of, an underreported but key power that subreddit moderators have.
Social media platforms also benefit from this arrangement in another way: they no longer carry the burden of having to actually define the “rules of the road” for using their platforms, and as such avoid having to answer for enforcement blind spots and inequities. “We were aware of none of it and took action on what was presented to us” is a much more palatable argument than “We were aware of this and didn’t take action until it was flagged through a backchannel,” and platforms will of course opt into policies and actions that let them make the former argument rather than the latter. As long as they don’t have people on their payroll specifically tasked with hunting out what is currently flagged by third parties, they can hide behind the veil of “we didn’t see this until now”; otherwise the argument they are making is that their own staff failed, a statement it is difficult to imagine platforms being happy to give reporters. This framing also has the bonus of keeping the fight over moderation purely in the “detection” sphere rather than the “enforcement” sphere, which in my mind is beneficial for everyone involved.
Try to imagine what battles over the “enforcement” sphere would truly look like. Does retweeting a single misleading and/or harmful tweet constitute grounds for account removal? Does retweeting a large quantity of them? If mis/disinfo amplification makes up only a small percentage of an account’s content, does that warrant additional action?
Or what about harassment? Does an account with one hundred thousand followers retweeting a fifty-follower account and saying “Only a woman would believe this” count as harassment? What if you do it to hundreds of women? Does mentioning a restaurant in a Facebook Group and calling the staff “beyond incompetent” warrant punishment? What if you mention the waiter by name but don’t tag them?
Don’t forget the classic: ban evasion. Numerous popular but unverified accounts are run by individuals who have had at least one account banned, or who created a temporary account during a suspension, both of which are explicitly prohibited under every platform’s TOS. If we’re arguing for stricter enforcement of the rules, how do you argue against all of those accounts being Thanos Snapped?
All of these fights would be happening in real time over newly created content. And all would ultimately be subjective calls that could and would lead to popular and ‘legitimate’ social media accounts and users getting punished, along with many more smaller, yet still ‘legitimate,’ accounts. Rather than reflecting on the downstream consequences of these changes, we instead see a push for social media companies to clone the jobs of existing Super Users.
What is being proposed by journalists and academics who complain about the current system is something entirely untried in the social media space. If an individual stood up and said, “Facebook and Twitter should have teams of people actively hunting for non-rule-breaking but otherwise offensive and norm-breaking content to proactively remove before it is flagged by users,” it would be considered an insane idea. This, however, is the crux of the argument made by many commentators in the space.
It is not difficult to imagine how this system would get corrupted, either. How do you write codified rules that still allow for disagreement and hyperbole? For example, a tweet saying “Amy Klobuchar on Rachel Maddow just now said Trump insisting on $2,000 stimulus checks ‘is an attack on every American’” is a misleading yet factually correct quote that accurately captures the feeling of a large swath of the politically active online community. If a misinfo or disinfo rule were actively being enforced, would this tweet be removed? Would all of the numerous large accounts boosting it be punished? Would smaller accounts boosting it be punished? Who makes the call?
Ultimately, the call is for paid staff to be hired to fight problems of society-scale consequence. Rooted in this belief is the idea that if Facebook had better infiltration of large WhatsApp groups, and if Twitter were better at policing smaller accounts, people would stop being willing to share content that wasn’t posted by a mainstream source. Despite the increasingly successful attempts to create “right wing” social media platforms, advocates still argue that the existing social media platforms are so large and so immovable that there is no punishment or ruleset they could create that would push users to a more ‘free’ platform.
If half of all Republicans who are active online went to a new platform, and half of all Progressives who are active online went to a new platform, would the online mis/disinfo problem get better or worse? While we can’t truly say until it happens, I have a hard time believing anyone asking for the changes I argue against above would want to find out.