Facebook says it wants to help correct misinformation plaguing the internet — a problem it may have helped create in the first place.
Facebook parent Meta announced a new AI-powered tool on Monday called Sphere, intended to help detect and address disinformation, or “fake news,” on the internet. Meta claims it is “the first [AI] model capable of automatically scanning hundreds of thousands of citations at once to check whether they really support the corresponding claims.”
The announcement comes after years of criticism of Facebook’s own role in allowing online misinformation to thrive and spread rapidly across the world. Sphere’s dataset includes 134 million public web pages, according to Meta’s research team. It draws on this collective knowledge of the Internet to quickly analyze hundreds of thousands of web citations, looking for factual errors.
So maybe it’s appropriate for Meta to train the AI model using entries on Wikipedia. According to Meta’s announcement, Sphere is already crawling the crowdsourced internet encyclopedia’s pages to test its ability to flag sources that do not actually support an entry’s claims.
Meta also indicates that when Sphere spots a questionable source, it may recommend a stronger one – or a correction – to help improve the accuracy of the entry.
“Wikipedia is the default first stop when looking for research information, background material, or an answer to that nagging pop culture question,” Meta said in a statement, noting that Wikipedia hosts more than 6.5 million entries in the English language alone and adds around 17,000 new entries to its pages every month.
The company also posted a video showing how Sphere works:
A Wikipedia spokesperson told CNBC Make It that the internet encyclopedia is not officially partnering with Meta on the development of Sphere and that none of Wikipedia’s entries are being automatically updated. Meta also told TechCrunch earlier this month that there is no financial compensation flowing in either direction.
Existing automated systems were already able to identify pieces of information that lacked citations. But Meta researchers say the harder task – isolating individual claims with dubious sources and determining whether those sources actually support the claims in question – “requires the depth of understanding and analysis of an AI system.”
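To make that distinction concrete, here is a toy sketch of the two checks described above. This is not Meta’s method – Sphere relies on learned AI models trained on 134 million web pages – and the function names and the simple word-overlap score are illustrative assumptions standing in for that deeper analysis.

```python
# Toy illustration (not Meta's actual system) of two different checks:
# (1) flagging a claim that has no citation at all, which simple automated
#     systems could already do, and
# (2) scoring whether a cited passage actually supports the claim, the harder
#     task Sphere is aimed at. A word-overlap score stands in for the
#     "depth of understanding" a real AI model would provide.

def tokenize(text):
    """Split text into a set of lowercase words, stripping punctuation."""
    return {word.strip(".,\"'").lower() for word in text.split()}

def support_score(claim, source_passage):
    """Fraction of the claim's words that also appear in the cited passage."""
    claim_words = tokenize(claim)
    if not claim_words:
        return 0.0
    return len(claim_words & tokenize(source_passage)) / len(claim_words)

def check_citation(claim, source_passage, threshold=0.5):
    """Return 'missing-citation', 'supported', or 'dubious' for a claim."""
    if source_passage is None:
        return "missing-citation"
    if support_score(claim, source_passage) >= threshold:
        return "supported"
    return "dubious"

claim = "Wikipedia hosts more than 6.5 million English entries."
good_source = "The English Wikipedia hosts more than 6.5 million entries."
bad_source = "The Eiffel Tower was completed in 1889 in Paris."

print(check_citation(claim, None))         # missing-citation
print(check_citation(claim, good_source))  # supported
print(check_citation(claim, bad_source))   # dubious
```

The overlap heuristic would obviously fail on paraphrases and negations – which is exactly why Meta argues the verification step needs an AI model rather than simple string matching.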
In a statement, Shani Evenstein Sigalov, a researcher at Tel Aviv University and vice chair of the Wikimedia Foundation board, called Sphere’s training on Wikipedia “a powerful example of machine learning tools that can help scale the work of volunteers.”
“Improving these processes will allow us to attract new editors to Wikipedia and provide better, more reliable information to billions of people around the world,” Sigalov said.
Sphere marks Meta’s latest effort to combat misinformation online – while potentially deflecting criticism of the company’s role in allowing such misinformation to persist.
Meta has faced consistently harsh criticism over the past few years from users and regulators over the spread of false information on the company’s social media platforms, including Facebook, Instagram and WhatsApp. Former employees and leaked internal documents fueled claims that the company prioritized profits over fighting misinformation, and Meta CEO Mark Zuckerberg was called before Congress to discuss the issue.
Last summer, President Joe Biden accused the social media giant of “killing people” by allowing misinformation about the Covid-19 vaccine to spread on its platforms. The company pushed back, saying Facebook and Instagram provide “authoritative information about COVID-19 and vaccines” to billions of users.
Correction: This story has been corrected to reflect that Meta is currently using Wikipedia as a training tool for Sphere.