Meta exposes its users to a "tidal wave" of misinformation, an NGO warns

Meta's announced abandonment of its fact-checking program and hate speech detection systems could eliminate 97% of its current moderation work.
The new moderation policy of Meta, the parent company of Facebook and Instagram, risks increasing "disinformation and dangerous content" on both networks, an NGO fighting online disinformation warned on Monday. According to a study by the Center for Countering Digital Hate (CCDH), Meta's announced abandonment of its fact-checking program and its hate speech detection systems could end 97% of its current moderation work, resulting in a "tidal wave" of content harmful to users.
The NGO made this estimate by analyzing the main changes Meta announced on January 7, including the replacement of fact-checking with community ratings and the abandonment of its rules on "immigration, gender identity and gender." "Meta must explain to its users why it is abandoning an approach that it presented as effective against disinformation and the polarization" of opinion, the CCDH notes in its report.
On January 7, a few days before Donald Trump's inauguration, Meta CEO Mark Zuckerberg announced that he would "get rid of fact-checkers and replace them with community ratings," arguing that the elections had marked a "cultural turning point" giving "priority to freedom of expression." The Californian group added that it wanted to "simplify" its rules and "abolish a number of limits on topics such as immigration and gender, which are no longer in the mainstream discourse." This reversal was recently followed by an announced policy change at Elon Musk's social network X.
Musk, a close adviser to Donald Trump, promised last Thursday to "fix" a feature of X that allows users to refute or qualify potentially false posts, accusing "governments and traditional media" of having taken it over, against a backdrop of tensions with Ukraine. For the head of the CCDH, Imran Ahmed, while community ratings remain "a welcome addition to the security measures of the platforms," this model based on user participation "cannot and will never be able to completely replace dedicated moderation teams and AI detection."
AFP participates, in more than 26 languages, in a fact-checking program developed by Facebook, which pays more than 80 media outlets around the world to use their fact-checks on its platform, as well as on WhatsApp and Instagram.