Facebook will “downgrade” users who often share misinformation and fake news

Have you ever scrolled through your Facebook feed and come across an article full of misinformation from a source you've never heard of, shared by a schoolmate you haven't talked to in years? Because I certainly have. Well, Facebook has announced that it will be making it harder for users to share misinformation.

“We’ve taken stronger action against Pages, Groups, Instagram accounts and domains sharing misinformation and now, we’re expanding some of these efforts to include penalties for individual Facebook accounts too,” wrote Facebook.

While Facebook says it already reduces a single post's reach in the News Feed once the post has been debunked, it aims to up the ante. Starting today, the social media platform will start to "reduce distribution of all posts in news feed" from a user's account if they have repeatedly shared misinformation.

Users will also receive notifications informing them that they have shared false information. The notification includes the fact-checker's article debunking the claim, as well as a prompt to share that article with their followers.

The notice also explains to users that people who repeatedly share false information "might have their posts moved lower in news feed so other people are less likely to see them". In addition, users will be able to delete the post and learn more about the fact-check under a "what you can do" section.

But it's not just individual users who will need to be more careful about sharing suspicious articles, as Facebook Pages are also responsible for the potential spread of misinformation. If a Page has repeatedly shared content that fact-checkers have rated as false, users will see a pop-up to help them make an informed decision about whether or not they want to follow that Page.

While it's commendable that Facebook seems to be taking some initiative in tackling the spread of misinformation, there are still plenty of questions. How fast can their fact-checkers do their job if sites peddling misinformation keep churning out content? And how many posts would it take to trigger the reduction in the News Feed?

Spreading dangerous false claims about COVID-19 and vaccines is also a huge problem, so why can't Facebook just block them outright instead of merely limiting their spread? Twitter seems to have no problem banning spreaders of fake news like Donald Trump, although it has also been deleting content from Palestinian residents due to "technical errors".

Facebook has also been giving people "more context" about the Pages they see on the platform by labelling them. The labels include 'public official', 'fan page' and 'satirical page', which should give users a better understanding of where their information is coming from.

