
AI Tools & Social Norms Are Making the Internet Less Safe for Women

As generative AI usage rises in India, image, video, and audio morphing are becoming easily accessible. Without sufficient guardrails and laws in place, this accessibility poses a grave threat to the digital safety of women and other gender minorities.

(Trigger warning: The story contains descriptions of digital sexual abuse and violence. Reader discretion is advised.)

Twenty-nine-year-old Dimple Adiwal received an unexpected email on February 4. At first, she ignored it, but when she scrolled down to check its content, she was aghast to see a nude version of her LinkedIn profile picture shared on an open Telegram channel with graphic sexual content.

As someone who has been talking about patriarchal structures and women’s rights on her public Instagram page, she has become used to threatening messages and violent comments. But this was more than that. 

“When I scrolled down, this person had taken particular efforts to send me the screenshot of [the] morphed picture and the fact that he had circulated this on Telegram,” said Adiwal, who is originally from the western state of Maharashtra and is currently pursuing her PhD in Rennes, France.

Screenshot of the Telegram group on which Dimple Adiwal’s images were circulated. We have refrained from adding the screenshots of the abusive emails due to the explicit nature of the content. Screenshot: Telegram via Dimple Adiwal

Adiwal filed a complaint with the National Cyber Crime Reporting Portal in India, Telegram, and Proton Mail (the perpetrator had used a Proton email address). The email ID was eventually taken down and Telegram shut down the channel, but she says that she received nothing more than a tracking ID from the cyber crime reporting portal.

Screenshots of the complaints sent to NCCRP and Proton Mail. Screenshot: via Dimple Adiwal

This is yet another example of how technology has been extensively exploited to facilitate gender-based violence, largely targeting women and gender minorities. It is not a recent phenomenon; such abuse has existed since the inception of the internet. But with easy access to free online tools that create realistic imagery, the trends around tech-facilitated gender-based violence have been changing.

“Using text to manipulate has been very common, like adding curse words under an image. Then you have the next level, which is using Photoshop or an image editing app, like cropping the body of a porn star or someone in a bikini and then putting the girl’s face on top of it in a crude manner. This needs a slightly higher order of digital ease. But in about 2024, we started getting the first cases where generative AI was being used,” said Siddharth Pillai, co-founder of RATI Foundation, a Mumbai-based NGO working to empower women and children.

Anecdotally, easy access to AI tools has led to an increase in digital abuse, but there is not enough research to cement that argument. What is undeniable, however, is that AI has made image, audio, and video morphing extremely accessible. What previously required working knowledge, a subscription to photo editing tools, and access to a computer now requires a few minutes and a single prompt to a chatbot.


Shalin Maria Lawrence, a social and political activist from the southern Indian state of Tamil Nadu, shared a story similar to Adiwal’s with us.

The 42-year-old Dalit activist said that in 2024 her husband received 23 emails with graphic threats of physical and sexual violence against her and their then two-year-old child. The emails were accompanied by morphed images of both.

She says that, as an activist and a political commentator, she is used to receiving vile and vicious comments, but with AI the harassment has reached new heights.

“They used AI to insert my picture into a soft porn Malayalam movie. They have also said [that] I was in a room with some BJP (Bharatiya Janata Party) leader somewhere. They will give you death and rape [threats] in a grotesque manner and they will also describe how they want to rape you,” Lawrence shared with us. 

TechSakhi, a digital safety helpline run by the Mumbai-based non-profit Point of View, has received over 1,100 cases related to tech-facilitated gender-based violence since its launch in 2022. When the helpline analysed its data in 2024, about 40 percent of those cases had some component of image-based abuse.

“Many of these images are publicly shared images taken from social media or as part of profile pictures on messaging apps, then repurposed and disseminated without consent by the perpetrator,” said Shohini B, knowledge specialist at Point of View.

For journalist Shivani Kava, this kind of targeted harassment has unfortunately become routine. But the intensity and frequency shot up last year when she was reporting on the Dharmasthala mass burials. The case was based on the complaint of a sanitation worker in Karnataka’s temple town of Dharmasthala, who alleged that he buried several bodies of children and women on the orders of his supervisors between 1995 and 2015.

“One person actually took my Twitter photo and made a caricature of it, which gave me enlarged breasts and put pimples on my face. And that photo was widely shared,” said Kava, a senior reporter with the digital news portal The News Minute.

After receiving threats of physical and sexual violence, Kava said she stopped sharing her personal life online. But her work as a digital journalist requires her to be on camera and her videos to be publicly available, leaving her vulnerable.

One of the most common misuses of AI is to create non-consensual intimate imagery (NCII) of women. Beginning in December 2025, Elon Musk’s Grok was used to generate millions of nude images of women and minors.

The prompt “put her in a bikini” was widely used to generate non-consensual sexual imagery of women. Source: Eliot Higgins/LinkedIn

RATI Foundation runs a national helpline called Meri Trustline (translation: my trustline), where people can seek assistance with digital abuse and cyberbullying. Ritu Jain, a counsellor at the helpline, spoke of a recent case where they found that Instagram’s AI Studio was revealing phone numbers of users.  

Both TechSakhi and Meri Trustline have also noted sexual harassment threats and cases connected to fake “instant loan” apps. Distorted audio is also becoming common, with popular songs being modified and then misused over imagery of women.

Continued Harm and Irreversible Impact

While AI’s role in the conversation about digital violence is still being examined, what hasn’t changed is the impact of gender-based violence on women.

A recent report on tech-facilitated gender-based violence in India, published by Equality Now and Breakthrough, shows that the psychosocial impact of such acts is profound. The report included interviews with nine survivors and eleven experts across India and noted that young women didn’t want to speak up for fear of retribution.

Manjusha Madhu, co-author of the report, said that despite the harm they experienced, almost none of the young girls wanted to involve their families, because that would mean immediate restrictions on their mobility.

Within India’s sociocultural context, such a reaction is not unusual.

“It’s the enormous role that culture constructs around purity and the idea that women’s bodies carry community, family, honour, pride,” said Madhu, adding, “women’s bodies are perceived as vessels of these great cultures, that they are symbolic. And any sort of overstepping is considered an unforgivable transgression.”

Once an image, morphed or otherwise, is circulated, there is no way for the survivor to stop its spread or to prevent it from reaching their family and friends.

For Adiwal, the first thought on receiving her morphed image was to worry about her mother.

“A lot of people already blame my mother. [They say], ‘You did not teach your daughter good manners so that’s why she is speaking nonsense on the internet.’ So, I was not worried that my nude picture was going to go around, I was more worried that these people would target her more,” said Adiwal.

A common tactic deployed by the perpetrators of digital violence is the use of multiple accounts and social media platforms. Helplines such as Meri Trustline report these accounts to the relevant platforms and conduct reverse image searches to report all versions of the content. 

But taking down the content can often take time and the impact on the survivor is irreversible.

Lawrence said that while she has developed resistance, other women she has worked with often withdraw from public life. “After one incident, most women stop voicing [their opinions]. They say, ‘We can’t do it. Our families are very strict.’ The humiliation and trolling doesn’t settle with women. They fear this kind of troll[ing] or abuse will affect their marital, academic, and work life.”

The report published by Breakthrough and Equality Now, too, underscores that women often disengage from online and public spaces, citing safety concerns: “Survivors speak of mental fatigue, public shaming, career disruption, and the sense that justice systems are not designed to protect them.”

Meanwhile, automated redressal mechanisms have several gaps, and a glaring one is the difference between perceived and experiential severity. 

“For some, only a nude image or partially clothed image might be sexually explicit but for others, depicting someone in a saree and blouse with a deep neckline might be sexually explicit,” said Shohini. 

But the parameters for image-based harm and its subsequent moderation are very narrow, with little scope for cultural context or appeals. And in the time spent filling out forms, waiting for action, following up, and facing rejections, the image could already be in a hundred different places, and the survivor’s right to be forgotten is forgotten.

Societal Norms — An Aggravator  

For the last few decades, the conversation around women’s rights in India has focused on awareness. But it is not only the lack of formal education that drives this kind of violence and harassment; deeply entrenched conservative social norms play an equal part.

“We are increasingly recognising that the same gender norms that manifest in our physical world are essentially replicated in the online space,” said Madhu.

A survey of 186 young people in Delhi, conducted by Men Against Violence & Abuse (MAVA), a Mumbai-based NGO, and Project Unlearn, a Delhi-based grassroots organisation working with at-risk youth, found that while respondents strongly rejected explicit harm (such as rape and child abuse), male respondents continued to tie women’s worth to marriage and conservative ideals.

Meher Suri, a public health practitioner who co-led the study, said one of the sharpest gender splits appeared in a situational assessment about digital consent. It asked young boys and girls whether they agreed with the following statement: “If a girl’s private photos get leaked, it is her fault.”

“The girls in the age group 14-16 had near total disagreement, but boys 12-14 had high agreement, up to 60 percent,” said Suri. “Clearly, with respect to digital consent and accountability, girls completely agree that they’re not at fault, whereas boys feel the onus falls completely on young women.”

The study posits that awareness is not the constraint. Renegotiating everyday power, especially where ideas of family honour and reverence are concerned, should be the primary focus area.

“We have observed that wherever there is a need for intervention, girls more often than not come forward and there is aggregation amongst them or there is this need to intervene or interfere or come together. There is some amount of consensus that is already built amongst them. But with boys there is this fear that if we interfere, our existing social dynamics will get messed up or the relationships that we have with our fellow male peers will fall apart and will invite social rebuke or will make us social outcasts,” said Suri.

When These Biases Creep into AI Systems 

Generative AI’s adoption of pre-existing societal biases and norms is a well-recorded phenomenon. However, the conversation is now moving towards a narrative of inevitability: what can developers do when AI is expected to imitate human behaviour and human behaviour is flawed?

Generative AI is non-deterministic, i.e. it can produce varying outputs for the same input. This fundamental facet has contributed to AI being viewed as a black box, whose outputs cannot be regulated. 

“When you’re giving inputs over a long period of time, that’s when the model can potentially go out of alignment. So it can become unsafe,” said Karandeep Anand, CEO of Character.ai, a chatbot platform where users can converse with bots that emulate fictional characters.

“How do you solve for that? You solve it by making very clear product decisions on top of it, saying okay, the AI is a non-deterministic system, but I will not allow you to have more than 30 turns of conversations. Or I will not let you have conversations of this type of context,” Anand told Asian Dispatch on the sidelines of Synapse Conclave, an AI-centric event organised by former journalist Shoma Chaudhary.

But people like Shohini, who see how big decisions at tech companies are made, believe that these questions about user safety need to be asked far earlier.

“Mainstream solutions often advocate for more data sets, rather than asking the more fundamental question of what knowledge is being visibilised, for whom, and what happens when that is not rooted in the needs of its communities?” said Shohini.

The question is also one of intent. Do platforms consider guardrails that ensure the protection of women, minors, gender minorities, and other marginalised and vulnerable groups?

“This is not particular to AI imagery, but on Telegram and Twitter [now X] etc., you will find more explicit content because their community guidelines make space for that. So there will be chances of more because you can nudify a person using the functionality of the app itself, whereas that is not possible on Instagram,” said Pillai.

According to an analysis of Grok by the Centre for Information Resilience, basic safeguards can also help limit the scale and severity of abuse. The analysis suggests moderating key phrases that are widely used to create non-consensual imagery (such keywords and phrases are often copy-pasted across platforms), holding offenders to account, ensuring safeguards exist across entry points, and recognising AI-generated sexual imagery as abuse.

Laws, Policy Gaps, and Regulation

In India, tech-facilitated gender-based violence and cyberbullying are covered under sections dealing with other crimes, such as voyeurism, stalking, and identity theft. Earlier this year, the government notified its framework on AI-assisted abuse, making it illegal to generate non-consensual intimate images. The effectiveness of this framework is yet to be tested.

Vivek Sood, a criminal lawyer in the Supreme Court of India, believes that deepfakes and AI-generated content should be a separate crime that attracts harsher punishment. 

“It should not be a light punishment because it almost amounts to a social death of the victim. The impact is so grave on the victim’s profile, social relations, and family. So this crime needs to be deterred with harsh punishments,” said Sood. “Also, it’s an organised crime. These are crimes that are done in a planned way, where you use technology deliberately, you spend time on it.”

Sood also says that the removal of harmful content should be handled at the national level.

“Immediate action should be taken to take down a deepfake. Why should a victim have to knock on the doors of the court and seek some kind of a take down order? It should be investigated expeditiously, rather than a hapless victim having to go through the rigmaroles of the local police station,” said Sood.

According to a 2025 report from RATI Foundation and Tattle, cases involving AI-based sexual abuse appeared to be part of larger online sexual or financial rackets.

In another report analysing year-on-year trends of digital harm, the organisations noted that a majority of cases handled by their helpline evolved into coordinated, network-driven abuse, fuelled by the speed and scale of online platforms.

“Abuse was no longer about isolated acts; it became about systems of coordination, where majority perpetrators operated as part of larger networks,” read the report. “Addressing such harm demanded more than removing individual pieces of content—it required dismantling the cross-platform ecosystems that allowed the abuse to flourish.”

The barriers to regulation are manifold, made more difficult by the distribution of power between tech platforms and each country’s laws.

A lack of standardisation means that reporting and grievance redressal mechanisms differ on every platform, shifting the burden onto survivors. Content can originate in one country (with its own laws about consent and violations) while the harm is perpetrated somewhere else. Digital literacy varies by region, age, gender, accessibility, and more, and national reporting mechanisms don’t account for these gaps.

But what is often glaringly missing in discussions of regulation are those impacted by such violence and their experiences.

“There is just no concept of ‘care’ built into either the government system or the [tech platform] system. These response-seeking mechanisms need to be victim-centric, and there can be a multi-stakeholder agreement on what victim centricity looks like,” said Pillai. “Currently what is being discussed is compliance. So, it becomes like two lawyers talking about features and finally it’s a policy. But the fact that there are lived experiences, that victims have to go through this, I think is absolutely discounted.”

Survivors of sexual violence in India have long found reporting to local police to be a difficult experience at best and a humiliating one at worst. Lawrence, too, says she has reported online threats to the cyber cell and the local police many times, but she has found them to be unhelpful.

“They don’t take any action,” said Lawrence, adding that she gave up on filing complaints about two years ago.

Experts that Asian Dispatch spoke to argue that redressal mechanisms need to be built keeping in mind the social, economic, and cultural realities of survivors. Platforms need to ensure their policies protect vulnerable communities, and law enforcement agencies need to support, not shame, survivors. These public interest interventions, from both an industry and a government standpoint, become imperative as technology gets increasingly embedded into our lives.

Credits

Anoushka Dalmia, Preeksha Malhotra

Editor: Kritika Goel 

Editor’s Note: Asian Dispatch has reached out to the National Cyber Crime Reporting Portal (NCCRP) for a comment. This story will be updated if and when we receive a response.

This is the first of a two-part series tracing the impact of AI on women across Asia. The next part will focus on Indonesia. This story has been published in collaboration with Project Multatuli.

This story was originally published by Asian Dispatch on April 6, 2026. It has been republished by the Center for Investigative Reporting with permission.
