Social Dilemma
Wikipedia Founder: Can AI replace free will? Jan 3, 2022
Larry Sanger has a vibrant discussion based on Rajiv Malhotra’s recent book on AI. Are knowledge and the notion of truth being gamified? Is deep learning by algorithms turning these systems into deities that we must bow down to? The manufacturing of fake emotions and experiences is proliferating, leading to a battle for human agency.
Up to now, the "social dilemma" framing projected the polarisation of content as "collateral damage" of the platforms' quest for eyeballs. Whistleblower says Facebook put profit before reining in hate speech https://indianexpress.com/article/technology/tech-news-technology/whistleblower-says-facebook-put-profit-before-reining-in-hate-speech-7550582/ But the following reports make it clear that political positions are being taken. Frances Haugen added that Facebook was used to help organize the Capitol riot on January 6, after the company turned off safety systems following the U.S. presidential election. While she believed no one at Facebook was "malevolent", she said the company had misaligned incentives.
Pre and post 2019 polls: Facebook memos flag anti-minority post surge by Aashish Aryan, Karunjit Singh, Pranav Mukul November 11, 2021 https://indianexpress.com/article/india/facebook-india-hate-speech-7617437/
A July 2020 report specifically noted a marked rise in such posts over the preceding 18 months, and that the sentiment was “likely to feature” in the coming Assembly elections, including in West Bengal.
The increase in hate speech and inflammatory content was mostly centred around “themes” of threats of violence, Covid-related misinformation involving minority groups, and “false” reports of Muslims engaging in communal violence. In one internal report in 2021 before the Assembly elections in Assam, Himanta Biswa Sarma, now Assam Chief Minister, was flagged as being party to trafficking in inflammatory rumours about “Muslims pursuing biological attacks against Assamese people by using chemical fertilizers to produce liver, kidney and heart disease in Assamese.”
Despite all these red flags, another group of staffers at the social media firm suggested only a “stronger time-bound demotion” of such content.
Asked if the social media platform took any measures to implement these recommendations, a spokesperson for Meta Platforms Inc — Facebook was rebranded as Meta on October 28 — told The Indian Express: “Our teams were closely tracking the many possible risks associated with the elections in Assam this year, and we proactively put in place a number of emergency measures to reduce the virality of inflammatory comments, particularly videos. Videos featuring inflammatory content were identified as high risk during the election, and we implemented a measure to help prevent these videos from automatically playing in someone’s video feed”.
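None of these reports spell out how a "time-bound demotion" or a virality-reduction measure works mechanically. A minimal sketch, assuming a simple multiplicative penalty on ranking score that expires after a fixed window, might look like this; every name, weight, and window below is a hypothetical illustration, not Facebook's implementation:

```python
import time
from typing import Optional

# Hypothetical sketch of a "time-bound demotion": a post flagged as
# inflammatory keeps only a fraction of its ranking score until a fixed
# window expires. Factor and window are invented for illustration.

DEMOTION_WINDOW_SECS = 7 * 24 * 3600  # e.g. demote for one week around an election
DEMOTION_FACTOR = 0.2                 # flagged posts keep 20% of their score

def ranking_score(base_score: float, flagged_at: Optional[float], now: float) -> float:
    """Apply a time-bound demotion to a post's base ranking score."""
    if flagged_at is not None and now - flagged_at < DEMOTION_WINDOW_SECS:
        return base_score * DEMOTION_FACTOR
    return base_score  # never flagged, or the demotion window has passed

now = time.time()
print(ranking_score(100.0, flagged_at=now - 3600, now=now))            # 20.0 (demoted)
print(ranking_score(100.0, flagged_at=now - 10 * 24 * 3600, now=now))  # 100.0 (expired)
```

The point of the time bound is visible in the example: once the election-period window lapses, the same content scores at full strength again, which is why staffers who wanted stronger action saw this as a limited remedy.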
3 staff memos flagged ‘polarising’ content, hate speech in India but Facebook said not a problem https://indianexpress.com/article/india/3-staff-memos-flagged-polarising-content-hate-speech-in-india-but-facebook-said-not-a-problem-7615684/ A spokesperson for Meta Platforms Inc said: “We had hate speech classifiers in Hindi and Bengali from 2018. Classifiers for violence and incitement in Hindi and Bengali first came online in early 2021”.
From a “constant barrage of polarising nationalistic content” to “fake or inauthentic” messaging, from “misinformation” to content “denigrating” minority communities, several red flags concerning its operations in India were raised internally at Facebook between 2018 and 2020.
However, despite these explicit alerts by staff mandated to undertake oversight functions, an internal review meeting in 2019 with Chris Cox, then Vice President, Facebook, found “comparatively low prevalence of problem content (hate speech, etc)” on the platform.
India: Facebook struggles in its battle against fake news, by Soutik Biswas, India correspondent https://www.bbc.com/news/world-asia-india-59006615
The Facebook Papers show the social media giant struggling to tame the avalanche of fake news, hate speech, and inflammatory content ("celebrations of violence", among other things) coming out of India, the network's biggest market.
Fact-checking is only one part of Facebook's efforts at countering misinformation. The problem in India is much bigger: hate speech is rife, bots and fake accounts linked to India's political parties and leaders abound, and user pages and large groups brim with inflammatory material targeting Muslims and other minorities. Disinformation is an organised and carefully mined operation here. Elections and "events" like natural calamities and the coronavirus pandemic usually trigger fake news outbreaks. Also, the fact that Facebook does not fact-check opinion and speech posted by politicians, on grounds of "free expression and respect for the democratic process", is not always helpful. "A large part of the misinformation on social media in India is generated by politicians of the ruling party. They have the largest clout, but Facebook doesn't fact-check them," says Pratik Sinha, co-founder of Alt News, an independent fact-checking site.
The overwhelming bulk of hate speech and misinformation on the social network is expected to be caught by its internal AI engines and content moderators around the world. Facebook claims to have spent more than $13bn on teams and technology, and to have hired more than 40,000 people worldwide to work on safety and security, since 2016.
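To make concrete what such an "AI engine" or "classifier" is (the Hindi and Bengali hate-speech classifiers cited above are examples), here is a minimal sketch in scikit-learn with invented toy data. Facebook's production systems are vastly larger and language-specific, but the basic shape, labelled examples in and a text-scoring model out, is the same:

```python
# Toy sketch of a text classifier for hate speech; all examples are
# invented for illustration and the model is far too small to be useful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "have a great day everyone",
    "the festival celebrations were wonderful",
    "they should all be driven out of the country",
    "those people deserve the violence coming to them",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = hate speech

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post: the probability of class 1 is a confidence that
# moderators or ranking systems can act on (e.g. demote above a threshold).
print(model.predict_proba(["drive them all out now"])[0][1])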
Alan Rusbridger, a journalist and member of Facebook's oversight board, has said the board will have to "get to grips" with the perception of people who believe that "the algorithms reward emotional content that polarises communities because that makes it more addictive". In other words, the network's algorithms allow "fringe content to reach the mainstream", as Roddy Lindsay, a former data scientist at Facebook, says.
"This ensures that these feeds will continue promoting the most titillating, inflammatory content, and it creates an impossible task for content moderators, who struggle to police problematic viral content in hundreds of languages, countries and political contexts," notes Mr Lindsay.
In the end, as Frances Haugen, the Facebook product-manager-turned-whistleblower, says: "We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from."
Series by Cyril Sam and Paranjoy Guha Thakurta
Facebook Papers | How Algorithms promoted Polarization and Hatred | Dhruv Rathee Oct 28, 2021
Whistleblower Frances Haugen, who used to work at Facebook, has released a large trove of the company's internal documents. This reveal has been dubbed the Facebook Papers. The papers show how Facebook directly and indirectly ignored, and even encouraged, hate and violence on its platform; how its algorithms chose profits over people; and how the company bowed to authoritarian regimes across the world. Mark Zuckerberg has come under pressure after this exposé.
Twitter says its algorithm amplifies right wing political content By Binayak Dasgupta, New Delhi https://www.hindustantimes.com/india-news/twitter-says-its-algorithm-amplifies-right-wing-political-content-101634926182240.html
In 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the US media landscape revealed that algorithmic amplification favours right-leaning news sources.
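The study's notion of "amplification" can be pictured as a simple ratio: the per-user reach of a group's tweets among users on the personalised timeline, divided by their reach in a control group kept on the reverse-chronological timeline. The sketch below is a simplification of that idea with invented numbers, not Twitter's exact methodology:

```python
# Simplified sketch of measuring algorithmic amplification: compare the
# per-user reach of a group's tweets for users on the personalised
# (algorithmic) timeline against a control group on the reverse-
# chronological timeline. All numbers are invented for illustration.

def amplification_ratio(algo_impressions: float, algo_users: int,
                        chrono_impressions: float, chrono_users: int) -> float:
    """Reach per user under the algorithm, relative to the chronological baseline."""
    return (algo_impressions / algo_users) / (chrono_impressions / chrono_users)

# A ratio above 1.0 means the algorithm amplifies that content.
left = amplification_ratio(1_200_000, 10_000, 1_000_000, 10_000)   # 1.2
right = amplification_ratio(1_800_000, 10_000, 1_000_000, 10_000)  # 1.8
print(f"left-leaning sources: {left:.2f}, right-leaning sources: {right:.2f}")
```

As Chowdhury's tweets note, a metric like this can only establish *that* amplification differs between groups; explaining *why* requires the root-cause analysis described below.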
In a series of tweets detailing the findings, Rumman Chowdhury, Twitter’s director of software engineering, added that “...establishing *why* these observed patterns occur is a significantly more difficult question”.
“Twitter is a sociotechnical system -- our algos are responsive to what’s happening. What’s next is a root cause analysis — is this unintended model bias? Or is this a function of what & how people tweet, and things that are happening in the world? Or both?” the Twitter staffer added in another tweet.
Experts have recently called on companies such as Twitter and Facebook to share access with academic researchers to get to the bottom of harms their technologies may be causing.
Twitter’s ML Ethics, Transparency and Accountability (META) team will now carry out a root-cause analysis and determine “what, if any, changes are required to reduce adverse impacts by our Home timeline algorithm,” the two staffers said in the blog post.