
The Draft IT Rules, 2025: India’s Attempt to Regulate AI-Generated Content

  • Writer: Kiratraj Sadana
  • Oct 27
  • 5 min read

Introduction


In October 2025, the Ministry of Electronics and Information Technology (MeitY) released a draft notification proposing amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.


The proposed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 mark a significant turning point in India’s digital regulatory framework, introducing the term “synthetically generated information” into law for the very first time.


While the move aims to curb AI-driven misinformation and deepfakes, it also raises serious questions about feasibility, enforcement, and the broader implications for free expression and innovation.

 


Key Provisions

  1. Mandatory Labelling of AI-Generated Content: Platforms offering AI tools (e.g., text, image, or video generation) must ensure that all synthetically generated content carries a visible or audible label, or an embedded unique metadata identifier, covering at least 10% of the visual surface or duration (a minimal sketch of a visible label follows this list).


  2. No Alteration of Labels or Metadata: The intermediary must prevent users from modifying or removing these identifiers.


  3. User Declaration and Platform Verification:

Significant Social Media Intermediaries (SSMIs) must:

  • Require users to declare whether the content being uploaded is synthetically generated.

  • Deploy automated tools or other suitable mechanisms to verify these declarations.

  • Clearly label AI-generated content before publication.


  4. Accountability for Non-Compliance: If a platform knowingly permits or ignores unlabelled synthetic content, it will be deemed to have failed due diligence under Rule 4(1A) and will lose its “safe harbour” protection under Section 79 of the IT Act.
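To make the ten-percent requirement concrete, here is a minimal sketch of what a compliant visible label might look like for a static image, written in Python with the Pillow library. The band placement, colours, and label text are illustrative assumptions; the draft Rules prescribe only the coverage threshold, not any particular mechanism.

```python
# Hypothetical sketch: stamp a visible "AI-GENERATED" label onto an image so
# that the label band covers at least 10% of the visual surface, as the draft
# Rule contemplates. Pillow is assumed; nothing here is mandated by the Rules.
from PIL import Image, ImageDraw

def stamp_ai_label(src_path: str, dst_path: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    band_h = max(1, h // 10)  # full-width band, 10% of height = 10% of the area
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - band_h), (w, h)], fill=(0, 0, 0))  # opaque band
    # Default bitmap font; a production system would scale the font to the band.
    draw.text((10, h - band_h + band_h // 4), text, fill=(255, 255, 255))
    img.save(dst_path)

stamp_ai_label("generated.png", "generated_labelled.png")
```

Even this trivial version shows the tension the mandate creates: the band permanently obscures a tenth of every image, the aesthetic cost discussed later in this piece.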

 


The Rationale: Combating Deepfakes and Disinformation


Deepfake technology has evolved from parody videos to a weapon of disinformation, political propaganda, and identity manipulation.


The intent behind the draft Rules is clear: to make synthetic media traceable and accountable. MeitY’s approach echoes the international trend, from the EU’s AI Act to US deepfake disclosure laws, where governments are seeking transparency in AI-generated content.


At a conceptual level, the draft 2025 Rules seek to restore trust in digital communication, ensuring users can distinguish between real and artificially generated content.

 

 

The Critical Perspective: Where the Draft Rules Fall Short


While the intent is commendable, the execution raises more questions than answers.

 

  1. Who Exactly Is Being Regulated?

 

The most fundamental question is whether AI developers even qualify as “intermediaries” under Section 2(1)(w) of the IT Act. The current law defines intermediaries as entities that “receive, store, or transmit” electronic records on behalf of another. In the case of generative AI models, however, the output is not received or stored; it is created from scratch.

 

The Delhi High Court’s decision in Google LLC v. DRS Logistics clarified that algorithmic curation or autonomous generation can deny an entity the classification of an intermediary and the protection that flows from it.

 

If AI developers are not intermediaries, the insertion of Rule 3(3) effectively regulates nobody. Unless the definition of “intermediary” in the IT Act is itself amended to explicitly include AI developers or content generation tools, these obligations lack legal teeth.


 

  2. The Broad Definition of “Synthetically Generated Information”

 

The definition captures any “artificially or algorithmically created” information, a category so broad that, alongside deepfakes (the supposed target), it also sweeps in photoshopped or touched-up images, parody, snippets of favourite films shared on social media, electronically rendered advertising campaigns, and more.

 

This conflation treats harmless or creative outputs the same as harmful deepfakes.


 

  3. Impractical Enforcement Across Platforms

 

Requiring every intermediary, from generative AI startups to meme-sharing apps, to embed metadata and prevent its removal may be technically unrealistic. Many AI models and hosting systems operate on open APIs or cross-platform integrations where persistent metadata cannot be guaranteed.
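A short sketch illustrates the point, again assuming Python and Pillow. A PNG is saved with a hypothetical provenance tag, then re-encoded the way thumbnailers and transcoders routinely do; the tag silently disappears.

```python
# Demonstration of metadata fragility: an ordinary re-encode drops PNG text
# chunks. The "ai_provenance" key is a hypothetical tag, not any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

meta = PngInfo()
meta.add_text("ai_provenance", "synthetically-generated")
Image.new("RGB", (64, 64)).save("tagged.png", pnginfo=meta)

img = Image.open("tagged.png")
print(img.text)   # {'ai_provenance': 'synthetically-generated'}

img.save("reencoded.png")                # plain save, no pnginfo carried over
print(Image.open("reencoded.png").text)  # {} -- the identifier is gone
```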

 

  4. The Burden of Compliance on SSMIs

 

The new Rule 4(1A) effectively transforms SSMIs into regulatory enforcement bodies. They must not only verify user declarations but also deploy automated tools capable of distinguishing between authentic and synthetic content, a task that even AI researchers have not perfected.

 

This “proactive verification” requirement risks over-blocking, as platforms might err on the side of caution to avoid liability, leading to chilling effects on legitimate content such as satire, creative art, or political commentary.
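As a rough illustration of the dilemma, consider the skeleton of a declaration-verification pipeline. The detector below is a stub returning a made-up score, and the threshold is an arbitrary assumption; a platform fearing liability has every incentive to lower that threshold, which is exactly the over-blocking dynamic described above.

```python
# Hedged sketch of the "declaration + automated verification" duty under the
# proposed Rule 4(1A). Names, scores, and thresholds are all hypothetical.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # the user's self-declaration at upload time

def detector_score(content_id: str) -> float:
    """Hypothetical classifier: probability that the content is synthetic."""
    return 0.5  # placeholder; no reliable general-purpose detector exists

def verify(upload: Upload, threshold: float = 0.8) -> str:
    score = detector_score(upload.content_id)
    if upload.declared_synthetic:
        return "label-and-publish"  # declaration honoured, label applied
    if score >= threshold:
        return "hold-for-review"    # undeclared but flagged as likely synthetic
    return "publish"                # no declaration, no detector signal

print(verify(Upload("vid-001", declared_synthetic=False)))
```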

 


  5. Ambiguity Around “Reasonable and Appropriate Technical Measures”

 

The draft leaves undefined what constitutes a reasonable and appropriate verification mechanism. Without standardisation, this creates interpretive uncertainty and inconsistent enforcement, exactly the kind of ambiguity that complicates compliance audits.

 


  6. Contradictions Between Safe Harbour and Content Moderation

 

The amendment introduces a proviso to Rule 3(1)(b), stating that intermediaries that make “reasonable efforts” to remove flagged content will not lose safe harbour under Section 79(2).

 

But this raises a logical contradiction:

 

a. The entire premise of safe harbour is that platforms cannot pre-judge or edit content, since doing so makes them editors, not intermediaries.


b. If platforms start making judgments to determine what constitutes synthetically generated or unlawful content, they risk losing safe harbour under Section 79(3)(b), which removes protection once an intermediary has “actual knowledge” of unlawful content.

 


  7. The Metadata Problem

 

Embedding permanent identifiers “covering at least ten percent of the surface area” of an image or video may have aesthetic and usability implications, especially for creators.

 

Moreover, bad actors can easily circumvent such labels using re-encoding, cropping, or AI-based content modification, making the enforcement of such mandates somewhat symbolic.
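For instance, if the mandated label occupies the bottom tenth of the frame, as in the earlier Pillow sketch, a one-line crop removes it without leaving any trace:

```python
# Trivial circumvention of a band-style label: crop away the bottom 10%.
from PIL import Image

img = Image.open("generated_labelled.png")  # labelled image from the earlier sketch
w, h = img.size
img.crop((0, 0, w, h - h // 10)).save("label_removed.png")
```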

 


  8. Chilling Effect on Innovation

 

Startups and small intermediaries offering generative AI tools may face disproportionate compliance costs, stifling innovation and limiting India’s ability to compete globally in the AI ecosystem.

 

 

 

The Broader Implications


The Draft IT Rules, 2025 showcase India’s ambition to lead AI regulation, but they risk turning intermediaries into digital law enforcement agencies without the technical capacity to execute these mandates.


While the intent to curb deepfakes and misinformation is sound, the current framework risks:

  • Over-regulation of benign AI applications,

  • Legal uncertainty for AI startups, and

  • Excessive platform liability, potentially stagnating innovation.

 


A Missed Opportunity for a Balanced Framework


Instead of placing a blanket obligation on intermediaries, India could have adopted a tiered risk-based approach, similar to the EU AI Act, distinguishing between:

  • High-risk synthetic content (e.g., political deepfakes, impersonations, election material); and

  • Low-risk creative or entertainment-based content.


Such a model would have preserved creative freedom and innovation, while ensuring accountability where harm is demonstrable.

 


The Way Forward


The Draft IT Rules, 2025 represent an important policy milestone, India’s first formal attempt to legally define and regulate AI-generated content.


However, to achieve meaningful implementation, the government must:


  1. Engage with AI developers, intermediaries, and legal experts during consultations.

  2. Develop standardised technical protocols for watermarking or labelling AI-generated media.

  3. Introduce graded liability based on scale and intent, rather than a one-size-fits-all approach.

  4. Ensure that enforcement mechanisms do not infringe on user rights or artistic expression.

 


Conclusion


The Draft IT Rules, 2025 embody India’s growing resolve to address the ethical and legal challenges of artificial intelligence. Yet, regulation in this domain must walk a fine line between control and creativity, between responsibility and restraint.


While synthetically generated information poses real threats, an overly broad and compliance-heavy regime could end up chilling digital innovation more than curbing misinformation.


India’s next challenge will be to craft a framework that encourages technological evolution while maintaining public trust without turning intermediaries into unwilling regulators.

 
 
 
