Social Dilemma
Tek Fog: An App With BJP Footprints for Cyber Troops to Automate Hate, Manipulate Trends https://www.youtube.com/watch?v=MVmxwY678mE
The Wire investigates claims behind the use of ‘Tek Fog’, a highly sophisticated app used by online operatives to hijack major social media and encrypted messaging platforms and amplify right-wing propaganda to a domestic audience.
The four features of Tek Fog: (1) hijacking Twitter and Facebook trends, (2) phishing and capturing inactive WhatsApp accounts, (3) creating and deploying a highly granular database of citizens for targeted harassment, and (4) the possibility that its use is part of a political-corporate nexus linking the BJP to large tech players and platforms.
Part 2, https://thewire.in/tekfog/en/2.html The Wire explains the technology by which Tek Fog is used to hijack a WhatsApp account, and the ease with which the processes involved may be scaled.
Operators could remotely access 'inactive' WhatsApp accounts added to the app's contact list and use the hijacked phone number to send targeted messages to that account's 'frequently contacted' or 'all contacts' lists. For active accounts, once the initial phishing file has been delivered, the activity status of the targeted account can be monitored via the Tek Fog app. When the account becomes 'inactive', it appears to operatives in an auto-complete search within the app.
The source also demonstrated another advanced feature of Tek Fog: the ability to modify existing news articles to create fake news by changing keywords ('BJP' to 'Congress' or 'Left-wing' to 'Right-wing', for example) and to generate political junk news with a fictitious narrative by modifying the link of a published article.
Ayushman Kaul & Devesh Kumar
Wikipedia Founder: Can AI replace free will? Jan 3, 2022
Larry Sanger has a vibrant discussion based on Rajiv Malhotra's recent book on AI. Are knowledge and the notion of truth being gamified? Is deep learning by algorithms turning these systems into deities that we must bow down to? The manufacturing of fake emotions and experiences is proliferating and leading to a battle for human agency.
Up to now, the "social dilemma" framing presented the polarisation of content as "collateral damage" of the platforms' quest for eyeballs. Whistleblower says Facebook put profit before reining in hate speech https://indianexpress.com/article/technology/tech-news-technology/whistleblower-says-facebook-put-profit-before-reining-in-hate-speech-7550582/ But the following reports make it clear that political positions are being taken. Frances Haugen added that Facebook was used to help organize the Capitol riot on January 6, after the company turned off safety systems following the U.S. presidential elections. While she believed no one at Facebook was "malevolent," she said the company had misaligned incentives.
Pre and post 2019 polls: Facebook memos flag anti-minority post surge by Aashish Aryan , Karunjit Singh , Pranav Mukul November 11, 2021 https://indianexpress.com/article/india/facebook-india-hate-speech-7617437/
A July 2020 report specifically noted there was a marked rise in such posts in the preceding 18 months, and that the sentiment was “likely to feature” in the coming Assembly elections, including West Bengal.
The increase in hate speech and inflammatory content was mostly centred around "themes" of threats of violence, Covid-related misinformation involving minority groups, and "false" reports of Muslims engaging in communal violence. One internal report in 2021, before the Assembly elections in Assam, flagged Himanta Biswa Sarma, now Assam Chief Minister, as being party to trafficking in inflammatory rumours about "Muslims pursuing biological attacks against Assamese people by using chemical fertilizers to produce liver, kidney and heart disease in Assamese."
Despite all these red flags, another group of staffers at the social media firm suggested only a “stronger time-bound demotion” of such content.
Asked if the social media platform took any measures to implement these recommendations, a spokesperson for Meta Platforms Inc — Facebook was rebranded as Meta on October 28 — told The Indian Express: “Our teams were closely tracking the many possible risks associated with the elections in Assam this year, and we proactively put in place a number of emergency measures to reduce the virality of inflammatory comments, particularly videos. Videos featuring inflammatory content were identified as high risk during the election, and we implemented a measure to help prevent these videos from automatically playing in someone’s video feed”.
3 staff memos flagged ‘polarising’ content, hate speech in India but Facebook said not a problem https://indianexpress.com/article/india/3-staff-memos-flagged-polarising-content-hate-speech-in-india-but-facebook-said-not-a-problem-7615684/ A spokesperson for Meta Platforms Inc said: “We had hate speech classifiers in Hindi and Bengali from 2018. Classifiers for violence and incitement in Hindi and Bengali first came online in early 2021”.
FROM a “constant barrage of polarising nationalistic content”, to “fake or inauthentic” messaging, from “misinformation” to content “denigrating” minority communities, several red flags concerning its operations in India were raised internally in Facebook between 2018 and 2020.
However, despite these explicit alerts by staff mandated to undertake oversight functions, an internal review meeting in 2019 with Chris Cox, then Vice President, Facebook, found “comparatively low prevalence of problem content (hate speech, etc)” on the platform.
India: Facebook struggles in its battle against fake news Soutik Biswas, India correspondent https://www.bbc.com/news/world-asia-india-59006615
The Facebook Papers show the social media giant struggling to tame an avalanche of fake news, hate speech, and inflammatory content - "celebrations of violence", among other things - coming out of India, the network's biggest market.
Fact-checking is only one part of Facebook's efforts at countering misinformation. The problem in India is much bigger: hate speech is rife, bots and fake accounts linked to India's political parties and leaders abound, and user pages and large groups brim with inflammatory material targeting Muslims and other minorities. Disinformation is an organised and carefully mined operation here. Elections and "events" like natural calamities and the coronavirus pandemic usually trigger fake news outbreaks.
Also, the fact that Facebook does not fact-check opinion and speech posted by politicians, on grounds of "free expression and respect for the democratic process", is not always helpful. "A large part of the misinformation on social media in India is generated by politicians of the ruling party. They have the largest clout, but Facebook doesn't fact-check them," says Pratik Sinha, co-founder of Alt News, an independent fact-checking site.
The overwhelming bulk of hate speech and misinformation on the social network is expected to be captured by its internal AI engines and content moderators all over the world. Facebook claims to have spent more than $13bn and hired more than 40,000 people in teams and technology around the world on safety and security issues since 2016.
Alan Rusbridger, a journalist and member of Facebook's oversight board, has said the board will have to "get to grips" with the perception of people who believe that "the algorithms reward emotional content that polarises communities because that makes it more addictive". In other words, the network's algorithms allow "fringe content to reach the mainstream", as Roddy Lindsay, a former data scientist at Facebook, says.
"This ensures that these feeds will continue promoting the most titillating, inflammatory content, and it creates an impossible task for content moderators, who struggle to police problematic viral content in hundreds of languages, countries and political contexts," notes Mr Lindsay.
In the end, as Frances Haugen, the Facebook product-manager-turned-whistleblower, says: "We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from."
Series by Cyril Sam, Paranjoy Guha Thakurta
Facebook Papers | How Algorithms promoted Polarization and Hatred | Dhruv Rathee Oct 28, 2021
Whistleblower Frances Haugen, who used to work at Facebook, has released a large trove of the company's internal documents. This big reveal has been titled the Facebook Papers. These papers reveal how Facebook has directly and indirectly ignored and encouraged hate and violence on its platform, how Facebook's algorithms have chosen profits over people, and how Facebook has bowed down to authoritarian regimes across the world. Mark Zuckerberg has come under pressure after this exposé.