Mark Zuckerberg’s shock announcement on Monday that Meta would scrap its fact-checking programme has sparked renewed fears over the dangers of social media.
Zuckerberg’s move will also see the removal of curbs on discussions on Facebook and Instagram around topics such as immigration and transgender issues.
It could lead to a host of negative effects and real-world harm in the UK, including fuelling extremist views and body dysmorphia, Dr Kaitlyn Regehr of University College London told Yahoo News.
The change – which will affect Facebook, Instagram and Threads – will retire a fact-checking programme launched in 2016 and is seen as the biggest change in Meta’s approach to political content since then.
Meta is rolling out the change in the US first, with no timetable for when it will go worldwide.
Dr Regehr, a disinformation specialist whose work helped shape Britain’s Online Safety Act, told Yahoo News: “Zuckerberg announced that Meta will follow X’s model and dispense with their independent fact-checkers. What that practically means is that Meta will no longer go to efforts to combat misinformation.”
The problem with removing fact-checking is that algorithms – not people – decide what is seen on platforms such as Facebook and Instagram, and these can prioritise lies over facts.
“What my research has looked at is the way in which algorithms often prioritise misinformation on feeds, because misinformation tends to be more attention-grabbing than truth,” Dr Regehr said.
“We know that misinformation and harm is prioritised. So the fact that there is no longer going to be the same checks and balances around that is a real concern for all of us.”
What topics will appear?
Meta will stop proactively scanning for hate speech, and will now only respond to user reports, Zuckerberg announced.
Automated scanning will instead focus on “high severity” problems like terrorism, scams and child exploitation, rather than racist content, for example.
This could mean that not only will extremist politics and misinformation be allowed to thrive, but also misinformation around topics such as climate, health and vaccines.
“We don’t know what this is actually going to look like in practice,” Dr Regehr warned. “But what Zuckerberg proposed in his announcement is that it will be individuals now that determine what truth is.
“Individuals can then write and say that this is disinformation or misinformation. That really moves away from the way we’ve conceived of the truth before, that we have trusted experts – like doctors, like scientists – who tell us things like vaccines will be good for us, that we should be concerned about the climate and our environment.
“If we are moving away from a scientific-based model to what Zuckerberg calls a free speech model, it’s going to completely shift our truth.”
What could this lead to?
Some critics have suggested that the changes could lead to Facebook, Instagram and Threads becoming a ‘free for all’ where disinformation can thrive, similar to Elon Musk’s X platform.
In the wake of Musk’s takeover, antisemitic content on X has doubled, and engagement with pro-Kremlin accounts has increased by 36%, according to studies cited by the New York Times.
On Facebook, Threads and Instagram, Zuckerberg plans to implement a system of “community notes” similar to those used by X, where site users add responses to problematic content.
The problem is not that occasional offensive comments will slip through, Dr Regehr explained – it’s that individuals could be bombarded with similar content until their views are changed.
This could have effects that spill over into the real world, she said.
Dr Regehr said: “What we are concerned about as researchers is high doses of problematic content that begins to entrench beliefs in people, because we live now in these information silos where you’re not getting spontaneous or curated content, you are getting a very specific, algorithmically driven set of content.
“If you end up in a silo that is pushing a specific belief, that will change the way you think.”
How could this affect vulnerable groups?
Meta has removed restrictions around topics such as immigration and gender, saying they are “out of touch with mainstream discourse”.
Other changes mean that users are now permitted to call gay and transgender people mentally ill on the platform.
Meta’s new Community Standards say: “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
The document also says that Meta “allows room” for insulting language around topics such as “discussing transgender rights, immigration or homosexuality”.
The policy says: “Our policies are designed to allow room for these types of speech.”
Dr Regehr told Yahoo News that for diverse individuals, the changes could be “very scary”.
Who could this impact in particular?
Young people are likely to be impacted in particular by misinformation online, Dr Regehr said – both because they are targeted by it (by other users and algorithms), and also because they are more susceptible.
Research by the UK’s Safer Internet Centre in 2021 found that 48% of young people already reported seeing misleading content every day online.
For young people, the effects can be devastating, Dr Regehr said, leading to changed viewpoints and even self-harm.
“We know that young people are often more likely to be targeted by this type of harm, and then can be more susceptible to believing it. It can shape the way that they think. It’s through that normalization process of misinformation and harm that we can see this type of ideology move off screens and onto streets.
“Particularly for young women, we need to be worried about body dysmorphia. We need to be worried about issues of body image. And for young men, we need to be worried about them being pushed to angry, harmful content that may impact their offline behaviours.”
Why are filters and fact-checking necessary?
Filters and fact-checking are needed in today’s social media world because algorithms have gone far beyond simply serving up what people’s friends are doing, Dr Regehr explained.
Social media sites such as Facebook and Instagram use algorithms to serve up content in order to keep users hooked. Without human intervention, algorithms tend to serve up extremely personalised and manipulative content, designed to keep people’s attention at all costs.
Dr Regehr said: “Unfortunately, that often is something that feeds into our deepest insecurities, our fears, things that make us feel angry, things that elicit high levels of emotional response, because that keeps us hooked, and that’s what they can sell to advertisers.
“It’s going to be incredibly personal and individualistic to you. It might hook into things that no one else in your life knows about, hooking into our kind of most vulnerable self, our deepest fears. It is a really problematic thing.”
Dr Regehr believes that Britain’s 2023 Online Safety Act does not go far enough, and people need more protection.
She said: “We do not have the same type of consumer protections that we expect in almost everything else that we consume. I believe that we should be pressuring our governments to do more.
“In the meantime, until that happens, we need to teach screen smarts, in the way we’ve always taught street smarts.”