The Saree Portrait Trend That Revealed AI’s Creepy Potential
What began as an innocent viral trend across South Asia – women uploading photos to AI apps to generate digital saree portraits – took a disturbing turn for one Pakistani woman. Google’s Gemini AI generated an image showing a mole on her arm that was not visible in her original photo, raising alarming questions about AI’s ability to infer private details. The incident exposes how unregulated technology can become an instrument of fear, particularly for women in patriarchal societies.
How AI Systems Learn and Predict Without Consent
Generative AI platforms like Gemini and Midjourney analyze massive datasets of online content, learning to identify and replicate patterns. They can predict or recreate details not explicitly shown in source images based on their training data. While concerning everywhere, this capability becomes particularly dangerous in Pakistan where data protection laws remain unimplemented. The draft Personal Data Protection Bill, modeled after Europe’s GDPR, has languished for years, leaving women vulnerable in a legal vacuum.
The Rising Epidemic of Digital Violence Against Women
Across South Asia, AI-powered harassment is escalating at terrifying speed. Deepfake pornography has moved from fringe threat to widespread weapon, with victims discovering fabricated nude images circulating on Telegram or being used for blackmail. Earlier this year, a Pakistani content creator faced public shaming when her Instagram photos were altered into explicit images. In India, women journalists and activists found themselves digitally “auctioned” with their faces placed on pornographic content. The consequences extend far beyond digital spaces, threatening real-world safety and reputations in conservative societies.
The Legal and Societal Challenges of AI Abuse
While Pakistan’s Prevention of Electronic Crimes Act (PECA) can address some AI-related harms like deepfakes, digital rights advocate Sadaf Khan notes significant gaps in protection. Holding international platforms accountable remains difficult, and outdated laws fail to address modern threats. The damage compounds in societies where women’s honor is tied to reputation – victims often face blame rather than support, leading many to withdraw from digital spaces entirely. This silencing effect carries profound personal and political consequences, erasing women’s voices from public discourse.
Building Defenses Against AI-Powered Threats
Protection requires multi-layered solutions:
- Education must evolve beyond basic digital literacy to help women understand AI’s capabilities and risks
- Legal reform is urgently needed to pass Pakistan’s data protection bill and define AI-specific offenses
- Regional cooperation could establish protocols to combat cross-border deepfake distribution
- Individual precautions like avoiding high-resolution uploads and documenting harassment are critical first steps
