NEW DELHI: Six months ago, pilot Hana Khan saw her picture on an app that appeared to be auctioning scores of Muslim women in India. The app was quickly taken down, no one was charged, and the issue shelved — until a similar app popped up on New Year’s Day.
Khan was not on the new app called Bulli Bai — a slur for Muslim women — that was hawking activists, journalists, an actor, politicians and Nobel Laureate Malala Yousafzai as maids.
Amid growing outrage, the app was taken down, and four suspects, including a 20-year-old man suspected of masterminding it, were arrested last week.
The fake auctions that were shared widely on social media are just the latest examples of how technology is being used — often with ease, speed and little expense — to put women at risk through online abuse, theft of privacy or sexual exploitation.
For Muslim women in India who are often abused online, it is an everyday risk, even as they use social media to call out hatred and discrimination against their minority community.
“When I saw my picture on the app, my world shook. I was upset and angry that someone could do this to me, and I became angrier as I realised this nameless person was getting away with it,” said Khan, who filed a police complaint against the first app, Sulli Deals, another pejorative term for Muslim women.
“This time, I felt so much dread and despair that it was happening again to my friends, to Muslim women like me. I don’t know how to make it stop,” Khan, a commercial pilot in her 30s, told the Thomson Reuters Foundation.
Mumbai police said they were investigating whether the Bulli Bai app was “part of a larger conspiracy”.
A spokesperson for GitHub, which hosted both apps, said it had “long-standing policies against content and conduct involving harassment, discrimination, and inciting violence.
“We suspended a user account following the investigation of reports of such activity, all of which violate our policies.”
Advances in technology have heightened risks for women across the world, be it trolling or doxxing with their personal details revealed, surveillance cameras, location tracking, or deepfake pornographic videos featuring doctored images.
Deepfakes, synthetic media generated with artificial intelligence, are used to create porn, with apps that let users strip clothes off women or swap their faces into explicit videos.
Digital abuse of women is pervasive because “everybody has a device and a digital presence,” said Adam Dodge, chief executive of EndTAB, a US-based nonprofit tackling tech-enabled abuse.
“The violence has become easier to perpetrate, as you can get at somebody anywhere in the world. The order of magnitude of harm is also greater because you can upload something and show it to the world in a matter of seconds,” he said.
“And there is a permanency to it because that photo or video exists forever online,” he added.
The emotional and psychological impact of such abuse is “just as excruciating” as physical abuse, with the effects compounded by the virality, public nature, and permanence of the content online, said Noelle Martin, an Australian activist.
At 17, Martin discovered her image had been photoshopped into pornographic images and distributed. Her campaign against image-based abuse helped change the law in Australia.
But victims struggle to be heard, she said.
“There is a dangerous misconception that the harms of technology-facilitated abuse are not as real, serious, or potentially lethal as abuse with a physical element,” she said.
“For victims, this misconception makes speaking out, seeking support, and accessing justice much more difficult.”
Tracking lone creators and rogue coders is hard, and technology platforms tend to shield anonymous users who can easily create a fake email or social media profile.
Even lawmakers are not spared: in November, the US House of Representatives censured Republican Paul Gosar over a photoshopped anime video that showed him killing Democrat Alexandria Ocasio-Cortez. He then retweeted the video.
“With any new technology we should immediately be thinking about how and when it will be misused and weaponised to harm girls and women online,” said Dodge.
“Technology platforms have created a very imbalanced atmosphere for victims of online abuse, and the traditional ways of seeking help when we are harmed in the physical world are not as available when the abuse occurs online,” he said.
Some technology firms are taking action.
Following reports that its AirTags — locator devices that can be attached to keys and wallets — were being used to track women, Apple launched an app to help users shield their privacy.
In India, the women on the auction apps are still shaken.
Ismat Ara, a journalist showcased on Bulli Bai, called it “nothing short of online harassment.”
It was “violent, threatening and intending to create a feeling of fear and shame in my mind, as well as in the minds of women in general and the Muslim community,” Ara said in a police complaint that she posted on social media.
Arfa Khanum Sherwani, also featured for sale, wrote on Twitter on January 4: "We are the stories we are reporting on. Communal targeting & humiliation is part of the experience of being a Muslim woman journalist in India. I am one of the women who were targeted in this violent attack on vocal Muslim women. The auction may be fake but the persecution is real."