Moderation, I can tell you firsthand, is one of the most important and challenging aspects of social media. With the COVID-19 pandemic, social networks have been under greater pressure than ever to police their platforms to prevent the spread of misinformation.
As a result, Facebook, YouTube, Twitter and the other social-media giants have cranked their censorship into overdrive, but they are ignoring the structural problems that allow misinformation to be boosted on their platforms in the first place.
These companies are increasingly dictating what their users should and should not see and believe. They are kicking out good users and taking down countless harmless posts, pages and groups simply for asking questions about COVID-19 or presenting opinions that differ from those of the companies’ executives and authorities. This widespread censorship of ideas would make George Orwell dizzy and runs counter to the whole purpose of social networking.
On May 11, Twitter announced it would add labels to tweets with false or disputed information about COVID-19, and Facebook already started adding similar labels. This is a Band-Aid that does not solve the fundamental issue and can actually create even more problems.
According to MIT research, when people see that some posts on social media have warning labels, they’re far more likely to assume, incorrectly, that all the posts without these warning labels have been verified by fact-checkers. This misperception is exacerbated by the fact that only a fraction of posts with false or unverified information are checked and marked as such.
The basic problem is the micro-targeting business/revenue model of Facebook, YouTube, Twitter and all current mainstream social networks. These companies allow marketers and purveyors of misinformation to pay to boost content to targeted audiences most susceptible to believing it.
Pizzagate is a well-known example of this, and two weeks ago, after Facebook had vowed to prevent misinformation, it allowed marketers to target ads to 78 million users interested in “pseudoscience.” Putting warning labels on certain posts with unverified information is a superficial solution that fails to address this structural flaw.
In contrast, my company, MeWe, has no targeting, no newsfeed manipulation and no way for advertisers, marketers or political operatives to boost anything to users. This prevents any information or opinion — true or false — from being broadly promoted. Users need to deliberately seek out information for themselves and cannot be targeted by others who wish to reach and manipulate their thoughts.
This is not to say that social-media companies should relinquish moderation altogether and institute an “anything goes” policy. If a user, group, page or post is breaking the law or inciting violence, threats, hate, bullying, harassment, etc., companies have a duty to investigate and remove that user or content in order to keep their other users safe.
This is how MeWe operates. If users or groups want to have conversations following these basic rules of conduct and share different viewpoints about politics, religion, sexuality, medical treatments, diets, exercise regimens, supplements, lifestyles, etc., it is not the role of social-media companies to censor such discussions.
“Rightness” about a position, whether it is related to politics, medicine, health, fitness, science or anything else, has often been reversed or changed over time. In our current period of COVID-19, what information is “true” is changing at an unprecedented pace. In January, the World Health Organization falsely tweeted that COVID-19 could not spread from person to person; in March, it recommended against wearing a mask if you are not sick.
In April, President Trump falsely suggested during a White House briefing that COVID-19 might be treated by injecting disinfectant into the body. Should social networks censor the WHO or the president? Of course not.
Censorship is often counterproductive and amplifies what was meant to be muzzled. It’s human nature to want to see what’s forbidden, which is why books sometimes become more popular after they’re banned. When it comes to conspiracy theories, this is even more true. Banning content proves to the conspiracy-minded that it must be valuable because the authorities don’t want them to see it.
Social media was intended to be a place where good people of all stripes are free to express themselves and share opinions authentically. The widespread censorship we’re currently witnessing runs contrary to this purpose.
Mark Weinstein is CEO of the social-network company MeWe.