Meta and content moderation


Today we’re exploring Meta and content moderation. Choosing what people get to see carries huge influence on today’s society.


Foreword: we are not arguing that content should go unmoderated. The goal is to highlight the impact moderation has.
Facebook and Instagram have (unfortunately) a lot of kids spending a lot of time on them. Content must be strictly controlled, otherwise you’ll end up with porn… See our article here.


Moderation is power


Following our last article, someone told us about a class-action lawsuit alleging that OnlyFans was bribing Meta’s moderators.

They were (allegedly) paying to have Meta add competing creators to a “terrorism watch list”.

Those competitors noticed their content was often flagged as “terrorism”, while OnlyFans’ own creators were left alone. The lawsuit claims more than 21,000 names were wrongfully added to the terrorist watch list.


That watch list is maintained by the GIFCT and shared across platforms such as Instagram, Twitter and YouTube.


It might remind you of another recent story: an OnlyFans creator, Kitty Lixo, said she avoided getting her account banned by having sex with Meta employees.

That’s so typical of big corporations: they work in silos and information isn’t shared.
This creator thought she had to pay to keep her content safe, while her company was (allegedly) already bribing Meta. What a shame.


The moral?


In the examples above, moderating content directly impacts a few people’s revenue.
But in other circumstances, moderating content, or failing to, has much broader consequences.


We all remember the Cambridge Analytica scandal, when a company used Facebook users’ data to influence the Brexit vote.


There’s also Trump’s election, which is believed to have been pushed by social media and the lack of moderation of fake news.


Another example: the role Facebook played in Myanmar. The platform let users post content inciting violence against the Rohingya minority, which fuelled the genocide.


A final example, reported by The Intercept: Meta was banning images coming from Palestine, while images from Ukraine were considered “newsworthy” and thus left up.

We won’t go as far as accusing Meta of supporting or opposing particular groups. But by choosing what to show and what to ban, they enable the creation of a narrative.



These examples are a reminder that everything you see on the internet is there by design. We tend to believe in some kind of holy truth coming from the internet, but that’s not the case.
Algorithms are mostly designed to prop up their services, to keep you online and interacting.

The goal of tech companies is to make money, not to share a message or educate.

User behaviour shows them what to build.
People like watching TikTok videos? No worries, voilà: Reels are created.

People like BeReal? Okay, here’s a feature on Instagram.
People like news that comforts them, reinforces their ideas, leads them to conspiracy theories? Have as much as you want. Would you like to try QAnon? You’ll love it.





Some will argue that Google’s algorithm is fair. But although Google was built to point you to the best websites, people now spend more time on Google itself than ever.

Google adds content from websites directly into the search results. You no longer use Google to reach the best websites; Google takes content from those websites and hands it to you.


Try searching for “dolphin eating”.
You’ll first get pictures of dolphins eating, then a box telling you what dolphins eat (fish, so you don’t have to search…), then sponsored links, and finally the actual search results, which usually include YouTube videos.


The issue?

They are in a monopoly position and push their own content, limiting competition and innovation.

And since Google lifts content from websites, people don’t click through, so those websites lose traffic and revenue.