One image. One edit. One deepfake is all it takes to destroy someone’s dignity at work.

Imagine discovering that a colleague has circulated a morphed, sexually explicit image of you in a company WhatsApp group. Unfortunately, this isn’t science fiction: it happened at a Noida firm in April 2025. A 20-year-old woman found that coworkers had digitally altered her photos and circulated them with lewd comments; when she complained, she was asked to “compromise” and was later fired. The aftermath? Police arrests and a government inquiry into the company’s PoSH compliance. This incident underscores a sobering reality: AI-driven harassment, such as deepfakes and morphed images, is a pressing compliance risk that HR and compliance teams in India must confront head-on.

AI tools can easily manipulate images, turning ordinary photos into harmful “deepfakes.” Such technology-enabled harassment is on the rise, blurring the line between cyberbullying and workplace misconduct.

AI has made it disturbingly easy to create hyper-realistic fake content — and women are paying the price. A staggering 96% of deepfakes are pornographic, with most targeting women without consent. Supreme Court Justice Hima Kohli recently noted how social media has “reshaped the contours of harassment,” with deepfakes posing serious threats to privacy and dignity.

In seconds, harmful content can go viral, leaving victims powerless. For HR leaders and PoSH consultants, this isn’t just a tech issue; it’s a workplace safety issue. It’s time to rethink policies, build digital awareness, and strengthen safeguards to ensure respect and accountability in today’s AI-influenced world.

📋 Policy Upgrade: Make Your Code of Conduct Digital-Ready

It’s time to bring your PoSH policy and Code of Conduct into the AI age. Most were built for the physical workplace — not for deepfakes, cyberstalking, or AI-generated abuse.

Be crystal clear: digital harassment is harassment. Your policies should explicitly ban:

  • Creating or sharing sexually explicit deepfakes of colleagues
  • Circulating morphed images (like face-swapping onto obscene content)
  • Cyberstalking or doxxing coworkers online
  • Using AI tools to generate lewd or threatening content

Adding real-world examples helps employees understand that online misconduct — even outside work hours — won’t be tolerated. Many forward-thinking organizations are already revising policies to address synthetic media and image-based abuse. Don’t wait for a crisis. A strong, updated policy is your first line of defense.

🎯 Employee Training: Move Beyond the Basics

Updating your policy is just step one. Teams need to understand the new risks. Traditional PoSH sessions won’t cut it anymore. Today, employees must be trained on things like:

  • Spotting doctored images and videos (see the metadata-check sketch below)
  • The ethics of consent in digital media
  • Why “harmless” AI edits can deeply harm
  • Legal consequences of creating or sharing such content

The goal? Build a culture where people respect digital boundaries as much as physical ones. Real stories and high-profile cases drive the point home: digital harassment is no joke.
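
To make that training concrete, here is a minimal Python sketch of the kind of first-pass metadata check a trainer might demo. It assumes the Pillow library is installed and uses a hypothetical file name; it is a teaching aid, not a detector, since deepfake tools routinely strip metadata and a clean result proves nothing.

```python
# Minimal sketch: a first-pass check for editing traces in image metadata.
# Assumes Pillow is installed (pip install Pillow). Metadata is trivial to
# strip, so this only flags obvious cases -- never treat it as proof.
from PIL import Image
from PIL.ExifTags import TAGS

def editing_hints(path: str) -> list[str]:
    """Return human-readable hints that an image may have been edited."""
    hints = []
    with Image.open(path) as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, str(tag_id))
            # The 'Software' tag often names the tool that last saved the file.
            if name in ("Software", "ProcessingSoftware"):
                hints.append(f"{name} tag present: {value}")
    if not hints:
        hints.append("No editing hints in metadata (not proof of authenticity).")
    return hints

print(editing_hints("suspect_photo.jpg"))  # hypothetical file name
```

Pairing a quick demo like this with real case studies helps employees grasp both the limits of detection and the importance of reporting early.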

🛡 ICC Readiness: Equip Committees for the Online Era

Your Internal Complaints Committee (ICC) is central to PoSH compliance, but is it ready to handle screenshots, deepfakes, or doxxing?

Here’s how to prepare:

  • Train ICC members on handling digital evidence (see the hashing sketch after this section)
  • Involve IT and legal teams for tech-heavy cases
  • Partner with law enforcement when the harasser is anonymous
  • Budget for cyber forensics in complex cases

An ICC that can confidently deal with digital abuse inspires employee trust — and that trust is non-negotiable.
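
To make “handling digital evidence” concrete, here is a minimal sketch, using only Python’s standard library and hypothetical file names, of one foundational habit: hashing and timestamping collected files so the committee can later show the material was not altered after collection.

```python
# Minimal sketch: fingerprint evidence files so the ICC can demonstrate
# the material was not modified after it was collected. Standard library
# only; the file paths below are hypothetical placeholders.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(path: str) -> dict:
    """Record a SHA-256 hash and a UTC collection timestamp for one file."""
    data = Path(path).read_bytes()
    return {
        "file": path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

records = [log_evidence(p) for p in ["chat_export.zip", "morphed_image.jpg"]]
Path("evidence_log.json").write_text(json.dumps(records, indent=2))
```

Re-hashing a file later and comparing it against the log is a simple, defensible way to show the evidence trail is intact.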

🚨 Incident Response: Treat Deepfake Harassment Like a Crisis

When a deepfake or AI-generated image leaks, every second counts. Have a plan ready:

  • Act fast: Take the content down internally and report it externally
  • Preserve evidence: Save chats, links, and the offending material (the hashing sketch above applies here too)
  • Support the victim: Stand by the complainant and engage law enforcement when needed
  • Limit damage: Temporarily restrict the accused’s access, if necessary

A swift, coordinated response shows zero tolerance and protects your people.

⚖ Legal Risk: Know Your Duty of Care

AI harassment isn’t just unethical; it’s legally risky. Under India’s PoSH Act, failing to prevent or act on digital sexual harassment can cost employers up to ₹50,000, with fines doubling and business licences at risk for repeat violations.

Globally, laws are evolving too. Inaction could soon be seen as negligence. Bottom line: HR and compliance teams must treat digital harassment with the same urgency as offline misconduct.

🔒 Privacy Protocols: Don’t Feed the AI

Prevention begins with smarter data habits.

  • Ban the use of employee images in AI apps without consent
  • Set boundaries around sharing team photos online (see the metadata-stripping sketch below)
  • Store ID card and org chart images securely
  • Discourage casual, unnecessary photo-taking

Digital dignity starts with small everyday actions.
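
As one small, concrete habit, here is a minimal Python sketch (assuming the Pillow library; file names are hypothetical) that strips EXIF metadata, such as device details and GPS location, from a photo before it is shared.

```python
# Minimal sketch: drop EXIF and other hidden tags from a photo before
# sharing it, reducing what a bad actor can harvest. Assumes Pillow is
# installed; the file names are hypothetical.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-save only the pixel data; the new image carries no EXIF tags."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

strip_metadata("team_photo.jpg", "team_photo_clean.jpg")
```

Re-saving only the pixel data is a blunt but dependable way to shed hidden tags; dedicated tools can do the same at scale.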

📜 Regulatory Outlook: Stay Ahead of the Curve

AI laws are still catching up, but change is coming. A proposed amendment to India’s PoSH Act may extend reporting timelines and reduce reliance on mediation. Other countries are already including AI-specific harassment clauses.

Get ahead:

  • Audit your policies today
  • Start tracking AI risks
  • Build awareness before regulation forces your hand

In compliance, proactivity always wins.

The spirit of the PoSH Act (dignity, safety, equality) must now expand to the digital space. Deepfakes and cyberstalking are modern tools of harm, and they demand modern safeguards.

HR professionals and PoSH consultants: this is your moment. Update policies. Train minds. Empower committees. Respond swiftly. Because silence isn’t neutral; it’s complicity.

💬 How is your organization preparing for AI-driven harassment risks? Let’s learn from each other. Share your thoughts in the comments.

#AI #posh #HR #jyotidadlani #cerebrovocationalplanet
