Jan. 8 (Portaltic/EP) –
Meta has announced a change in its moderation system, by which it will stop using external fact-checkers in favor of a community notes program; two different approaches to addressing the problem of disinformation and the spread of false information.
Since 2016, viral content distributed on Facebook, Instagram and WhatsApp has been monitored by independent fact-checkers, "who review stories and rate them for accuracy through original reporting," as Meta explains on the program's dedicated page.
This program currently has more than 80 organizations working in more than 60 languages, all of them certified by the International Fact-Checking Network (IFCN), which brings together journalists who are dedicated to verifying facts at an international level. In Spain, it was launched in 2019 with three external organizations: AFP, Newtral and Maldita.es.
The professionals who take part in reviewing content on Meta's social networks do not delete posts, accounts or pages; instead, they evaluate their accuracy, rating them, where appropriate, as 'False', 'Altered', 'Partially false' or 'Missing context'.
The main consequence, if a piece of content is classified as 'False' or 'Altered', is that its author will see its distribution reduced and lose the ability to monetize it. It is, however, a decision that the affected person can appeal.
The content is also accompanied by labels and warning notifications that offer contextual information so that users can decide "what content they want to read and share, and what information to trust."
This approach no longer seems suitable to Meta, which this Monday announced the adoption of the model already used by the social network X, based on community notes, that is, the notes that contributors add to posts to provide context.
COMMUNITY NOTES
Community notes first appeared in January 2021 as a Twitter pilot project known as Birdwatch, a community-based approach to monitoring content. The idea was that people could identify information in tweets they believed was misleading and write notes providing context so that others could better evaluate the content of the post.
This initiative was rolled out globally on Twitter in December 2022, under the name community notes, preserving the essence of the pilot: keeping users better informed by letting them collaboratively add context to potentially misleading tweets.
However, not just any user can add a note; only those who have registered in the program and been validated by the company. They then become contributors and can submit their corrections, as notes or links to other publications. Additionally, notes are only displayed publicly if enough contributors with different points of view rate them as useful.
After the acquisition of Twitter by magnate Elon Musk and its rebranding as X, the social network has made various modifications to this collaborative program. In October 2023 it began requiring that verified sources be added, and a year later it announced the so-called 'Lightning Notes', with which it sought to ensure that this contextual information would appear less than 20 minutes after the content had been uploaded to the social network. Over time, notes have also been extended to images and videos.
FROM THE EXPERTS TO THE COMMUNITY OF USERS
Meta has explained that when it launched the independent fact-checker program in 2016, it thought the "best and most reasonable [decision] at that time" was to "delegate that responsibility to independent fact-checking organizations".
However, it has argued that their contribution in recent years has not met the objective, which was to "offer people more information about the things they see online, particularly viral hoaxes, so that they could judge for themselves what they saw and read".
On the contrary, it asserts that experts also "have their own biases and perspectives", and that this "became evident in the decisions some made about what to check and how", leading to a system that imposed "intrusive labels and reduced distribution."
"A program intended to inform too often became a tool to censor," it adds. It has therefore decided to adopt community notes after seeing that "this approach works on X, where they empower their community to decide when posts are potentially misleading and need more context, and people from a wide range of perspectives decide what kind of context is useful for other users to see."
Community notes will be rolled out in the coming months in the United States, on Facebook, Instagram and Threads.