In a blog post update yesterday, Meta announced that a little-known Instagram setting is coming to Threads, letting users control how much fact-checked content they see in their feed. Meta says its fact-checking process is meant to tackle misinformation, so the setting effectively lets users decide how much of that controversial content they want to see on the platform.
The controls have three levels: Don’t Reduce, Reduce, and Reduce More. Although none of the options can completely hide content, they will affect the ranking of posts that are “found to contain false or partially false information, modified content, or missing context.”
To access the setting from Threads, tap the two-line menu icon in the top-right corner of the Profile tab, then go to Account > Other Account Settings (which takes you to Instagram) > Content Preferences > Demoted by Fact Check.
On the face of it, the concept seems compelling. It’s essentially a “drama” filter, and who wouldn’t want that in some aspect of their life? In a statement to NBC News, Meta said the options are intended to give users “greater control over the algorithm that ranks posts in their feed,” adding that they respond to users’ demands for “greater ability to decide what they see on our apps.”
NBC News pointed to a post with thousands of likes claiming that the change is meant to censor content related to the war between Israel and Hamas. Whether or not that’s true, there’s clearly plenty of room for censorship with a tool that invites users to be complicit.
Meta uses third-party fact-checkers to rate content on Instagram and Facebook as factual or not, and their determinations now apply indirectly to Threads. The company says that while fact-checkers can’t directly rate Threads content, it will carry over ratings from Instagram and Facebook to “near-identical content on Threads.”
Meta says Instagram has had these fact-check ranking options for years, though it never seems to have publicized them much. According to the Economic Times, Meta added the feature to Facebook in May, with a Meta spokesperson saying it was intended to “make user controls on Facebook more consistent with those already in place on Instagram.”
Moderation hasn’t scaled well with the rapid expansion of online communication beyond the small pockets of web forums that once existed. No large social network has found a silver bullet for the problem, and in some cases their efforts have only fueled anger and doubts about their motives, or raised questions about the federal government’s involvement.
But Meta has reason to tune its approach, and not only because EU laws require it or because of ongoing regulatory efforts in the United States. Advertisers are a big part of the equation, and the company has a perfect example of how ditching moderation can affect a platform in X (formerly Twitter), where revenue has reportedly declined as increasingly charged and unmoderated speech contributed to an ongoing advertiser exodus.