Could AI silence women in Sri Lankan politics?
Speed read:
- AI-driven deepfakes and disinformation are emerging as a decisive factor in Sri Lanka’s future elections, with the potential to influence not just campaigns, but outcomes.
- Gendered attacks, especially sexualised and reputation-based ones, are more damaging to women candidates, exploiting entrenched societal biases around morality and “respectability”.
- Institutions like the Election Commission lack the tools, rapid-response capacity and digital forensics needed to counter fast-moving AI-generated misinformation.
- Legal reforms, digital literacy, stronger platform accountability and long-term shifts in education and social attitudes are critical to protecting women in politics.
COLOMBO – Sri Lanka has a long history of women in politics, yet meaningful representation remains elusive. As traditional barriers persist, a new and more complex challenge is taking shape. The next major threat to women’s political representation in Sri Lanka may not come from party structures or voter bias alone, but from algorithms.
AI-generated narratives, manipulated images, and synthetic audio are emerging as tools to discredit, intimidate, and silence women in public life. These attacks are not random. They often exploit deeply rooted social norms around morality, respectability, and gender roles, making them particularly effective in shaping public perception.
While women in politics have long demonstrated resilience in balancing social and personal pressures, the digital sphere presents a different kind of vulnerability. AI-driven misinformation spreads at a speed that traditional responses cannot match. In politics, reputations can be damaged in seconds, while corrections take time. By the time the truth emerges, the damage may already be done.
This is not simply a question of resilience. The scale and velocity of AI-generated content are creating a new kind of political battlefield, one where misinformation can decisively influence voter attitudes before it can be challenged.
The pace of this transformation is striking. Cybersecurity firm DeepStrike tracked a dramatic surge in high-quality deepfake content between 2023 and 2025, with identified cases rising from 500,000 to over one million. Beyond the numbers, the technology itself has advanced rapidly, with what experts describe as generational leaps in realism.
As these tools become more accessible and more convincing, their potential impact on women’s political representation cannot be ignored.
Debunking does not undo the damage. Voters may not fully believe what they see, but doubt alone can prove decisive in a close race.
This is where gendered hate and electoral disinformation converge most dangerously: hostility supplies the emotional charge, political actors or opportunists inject the falsehoods, and digital platforms deliver them at speed.
Digital rights advocate Sanjana Hattotuwa cautions that, if this trajectory continues, by 2029 it may become increasingly difficult for ordinary voters to distinguish between authentic and synthetic political communication.
Sri Lanka’s current electoral oversight framework was not designed for this reality.
The Election Commission of Sri Lanka has focused on regulating campaign finance, nominations and the conduct of traditional media. Election observers also concentrate primarily on physical polling stations, ballot handling and visible violations. Yet AI-driven disinformation campaigns operate across global digital platforms, often in multiple languages and through coordinated networks.
According to Hattotuwa, Sri Lanka still lacks robust rapid-response digital forensic teams, clear emergency protocols for removing manipulated content during election periods, and strong institutional capacity to detect deepfakes at scale or quickly debunk coordinated campaigns.
The delay between a false claim appearing online and an authoritative correction can be decisive.
And for women in politics, the consequences may be particularly severe.

Gendered attacks already widespread
Activist and co-founder of Women’s Action Network (WAN) Shreen Saroor says women politicians are already exposed to a hostile digital environment that includes harassment, trolling, sexualised insults and manipulated images.
“Women in politics already experience online harassment, trolling, sexualised insults, doctored or photoshopped images and coordinated smear campaigns on social media,” she explains.
Such attacks are rarely about policy disagreements. Instead, Saroor notes, they are designed to undermine credibility and silence women’s voices rather than engage with their political ideas.
“These attacks are often carried out by anonymous actors, leaving victims with little control over the spread of harmful content,” she says. “When posts go viral, social media platforms are often slow to respond.”
The problem is compounded by a broader cultural tendency to dismiss gender-based online abuse. Saroor points out that sexualised attacks against women public figures are often not taken seriously, reflecting misogyny embedded in digital spaces and in technologies shaped by patriarchal attitudes.
Artificial intelligence could intensify this dynamic dramatically.
“AI could become another way of keeping women out of politics by amplifying disinformation and reinforcing existing inequalities,” Saroor warns.
The challenge is particularly acute for women in conservative communities, who face greater personal and social consequences than men do. Women may fear reputational damage not only to themselves but also to their families and communities.
This, she says, risks undoing years of progress achieved by women’s rights activists in Sri Lanka, including efforts to expand women’s representation in local government.
Saroor says that addressing the problem requires stronger legal and institutional responses.
“The government should strengthen existing laws to counter technology-facilitated gender-based violence,” she says, while urging the Election Commission to develop clear guidelines and rapid-response mechanisms to address election disinformation, particularly gender-based attacks and fabricated content.
Social media platforms, she adds, must also improve detection systems and provide faster complaint mechanisms for women facing online abuse.

AI amplifying patriarchal norms
Attorney-at-law and activist Swasthika Arulingam believes Sri Lanka’s deeply patriarchal social environment makes women especially vulnerable to AI-driven disinformation.
“Society is still very slow to accept women in active politics, particularly in electoral politics,” she says.
Artificial intelligence, she argues, could become a powerful tool for those seeking to discredit women in public life.
“One of the most common methods will be the sexualisation of women, alongside the spread of disinformation and misinformation about women and their bodies,” Arulingam explains.
In a society where ideas of “respectability” shape political perceptions, even fabricated images can be weaponised to question a woman’s suitability for leadership.
“For instance, if a photograph of a woman in a bikini were circulated, that alone should not be a reason for people to refuse to vote for her,” she says. “A woman should have the freedom to wear what she chooses. But in a highly patriarchal society, such images can easily be weaponised.”
AI tools now allow users to fabricate highly convincing images, videos or audio recordings showing individuals saying or doing things they never did.
For women politicians, the consequences may be even more damaging because attacks often target their morality rather than their policies.
Arulingam argues that legal protections alone will not solve the problem.
“The internet cannot be fully controlled, and legal systems often move too slowly to regulate rapidly evolving digital technologies,” she says.
Instead, she stresses the importance of long-term social transformation through education.
“Women must be taught not to internalise shame,” Arulingam says. “And men must be taught to respect women.”
If society begins to view attacks on women’s privacy and dignity as violations rather than sources of shame, she believes the power of such smear campaigns will diminish significantly.

A new technological battlefield
Meanwhile, election monitoring experts warn that the speed and scale of AI-generated misinformation create an entirely new political battlefield.
Manjula Gajanayake, Executive Director of the Institute for Democratic Reforms and Electoral Studies (IRES), says reputations can now be damaged within seconds online.
“In politics, reputations can be damaged within seconds, while responding and correcting the record takes time,” he explains.
During that delay, misinformation may already have reached thousands or even millions of voters.
For women politicians, who already face heightened scrutiny, this gap can be especially damaging.
“When false content spreads online, they cannot simply wait until the truth emerges,” Gajanayake says. “By the time the issue is clarified, the damage to their public image may already be significant.”
He stresses that AI technologies are not inherently harmful. In fact, they could help politicians communicate more effectively and engage voters more directly.
But the current trend appears to be the opposite.
“With minimal cost, it is now possible to clone voices, fabricate images or create realistic-looking videos that can easily damage a woman politician’s reputation,” he warns.

Strengthening resilience
Gajanayake believes several practical steps are needed to reduce the risks.
First, Sri Lanka needs a nationwide public awareness campaign on the dangers of technology-driven misinformation.
“Political parties should take the lead in educating supporters and the public about manipulated digital content,” he says.
Second, women politicians themselves should be encouraged and trained to build strong, credible digital presences.
“Producing authentic and positive digital content can help counter misinformation,” he notes.
Third, the Election Commission must strengthen its capacity as a trusted source of election information by recruiting technical specialists and investing in digital monitoring systems.
However, he acknowledges that government salary structures often make it difficult to attract skilled IT professionals.
Another concern is the digital literacy gap among many politicians themselves.
“Many elected representatives in Sri Lanka, including women members of parliament and local government, are still not using digital tools effectively to communicate with voters,” Gajanayake observes.
Political parties therefore have a responsibility to invest in training programmes, research initiatives and support networks that strengthen women’s digital literacy.
Without such support, Gajanayake warns, AI-driven misinformation and disinformation could discourage a new generation of women from entering politics.

A test for democracy
If deepfake technology continues to evolve as rapidly as experts predict, elections will increasingly depend on institutions’ ability to detect and respond to digital manipulation in real time.
But as activists and analysts warn, the challenge is not purely technological.
AI tools will ultimately amplify the biases and inequalities already present in society.
Whether they become instruments of democratic participation, or tools for silencing women in public life, may depend on how quickly institutions, political parties and society adapt to this new reality.
Reporting and Editing: Gagani Weerakoon
This story was produced under the CIR–FACTUM Media Fellowship Program.