
The Draft IT Rules, 2025: India’s Attempt to Regulate AI-Generated Content

  • Writer: Kiratraj Sadana
  • Oct 27, 2025
  • 5 min read

Updated: Jan 1

Navigating the Draft IT Rules, 2025: A New Era for AI Regulation in India


Understanding the Draft IT Rules, 2025


The Ministry of Electronics and Information Technology (MeitY) has released a draft notification proposing amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. This proposal marks a significant turning point in India’s digital regulatory framework. For the first time, it introduces the term “synthetically generated information” into law.


While this move aims to curb AI-driven misinformation and deepfakes, it raises serious questions about feasibility, enforcement, and the broader implications for free expression and innovation.


Key Provisions of the Draft Rules


  1. Mandatory Labelling of AI-Generated Content:

    Platforms offering AI tools (for text, image, or video generation) must ensure that all synthetically generated content carries a visible or audible label. The label should cover at least 10% of the visual surface area or of the audio duration, or the content should carry an embedded unique metadata identifier.


  2. No Alteration of Labels or Metadata:

    Intermediaries must prevent users from modifying or removing these identifiers.


  3. User Declaration and Platform Verification:

    Significant Social Media Intermediaries (SSMIs) must:

      • Require users to declare whether the content being uploaded is synthetically generated.

      • Deploy automated tools or other suitable mechanisms to verify these declarations.

      • Clearly label AI-generated content before publication.


  4. Accountability for Non-Compliance:

    If a platform knowingly permits or ignores unlabelled synthetic content, it will be deemed to have failed due diligence under Rule 4(1A). This could result in losing its “safe harbour” protection under Section 79 of the IT Act.
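The “at least ten percent” threshold in the labelling provision invites a simple arithmetic reading. The sketch below is purely illustrative and is not taken from the draft Rules, which do not prescribe a measurement method; the function names and the banner-style interpretation are assumptions.

```python
# Illustrative sketch only: one possible reading of the draft's "at least
# ten percent" labelling threshold. The draft Rules do not prescribe how
# the area or duration is to be measured.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area in pixels: 10% of the visual surface."""
    return (width_px * height_px) // 10

def min_label_duration(total_seconds: float) -> float:
    """Minimum audible/visible disclosure time: 10% of the total duration."""
    return total_seconds * 0.10

# Example: a 1920x1080 image would need a label of at least 207,360 px^2,
# e.g. a full-width banner 108 pixels tall.
area = min_label_area(1920, 1080)
banner_height = area // 1920
```

Even on this charitable reading, a banner occupying a tenth of every generated image is a substantial visual intrusion, which is the usability concern raised later in this piece.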


The Rationale: Combating Deepfakes and Disinformation


Deepfake technology has evolved from parody videos to a weapon of disinformation, political propaganda, and identity manipulation. The intent behind the draft Rules is clear: to make synthetic media traceable and accountable. MeitY’s approach echoes the international trend, from the EU’s AI Act to US deepfake disclosure laws, where governments are seeking transparency in AI-generated content.


At a conceptual level, the draft 2025 Rules seek to restore trust in digital communication. They aim to ensure users can distinguish between real and artificially generated content.


The Critical Perspective: Where the Draft Rules Fall Short


While the intent is commendable, the execution raises more questions than answers.


Who Exactly Is Being Regulated?


The most fundamental question is whether AI developers even qualify as “intermediaries” under Section 2(1)(w) of the IT Act. The current law defines intermediaries as entities that “receive, store, or transmit” electronic records on behalf of another. However, in the case of Generative AI models, the output is not received or stored; it is created from scratch.


The Delhi High Court’s decision in Google LLC v. DRS Logistics indicated that algorithmic curation or autonomous generation of content can deprive an entity of intermediary classification and the protection that accompanies it. If AI developers are not intermediaries, the newly inserted Rule 3(3) effectively regulates nobody. Unless the definition of “intermediary” in the IT Act is amended to explicitly cover AI developers or content-generation tools, these obligations lack legal teeth.


The Broad Definition of “Synthetically Generated Information”


The definition captures any “artificially or algorithmically created” information. This category is so broad that it will include not only deepfakes but also photoshopped images, parodies, snippets of movies for social media, and electronically rendered advertising campaigns. This conflation treats harmless or creative outputs the same as harmful deepfakes.


Impractical Enforcement Across Platforms


Requiring every intermediary, from generative AI startups to meme-sharing apps, to embed metadata and prevent its removal may be technically unrealistic. Many AI models and hosting systems operate on open APIs or cross-platform integrations where persistent metadata cannot be guaranteed.


The Burden of Compliance on SSMIs


The new Rule 4(1A) effectively transforms SSMIs into regulatory enforcement bodies. They must not only verify user declarations but also deploy automated tools capable of distinguishing between authentic and synthetic content—a task that even AI researchers have not perfected. This “proactive verification” requirement risks over-blocking. Platforms might err on the side of caution to avoid liability, leading to chilling effects on legitimate content such as satire, creative art, or political commentary.
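The verification duty described above can be reduced to a simple decision flow. The sketch below is a hypothetical model of the “declare, verify, label” pipeline Rule 4(1A) appears to contemplate; the classifier score, threshold, and outcome labels are all assumptions, since the draft prescribes no specific mechanism. It also shows where the over-blocking incentive enters: a cautious platform labels whenever the detector is unsure.

```python
# Hypothetical model of the "declare -> verify -> label" flow suggested by
# Rule 4(1A). The detector, threshold, and outcomes are illustrative
# assumptions, not anything prescribed by the draft Rules.

from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool
    detector_score: float  # 0.0 (likely authentic) .. 1.0 (likely synthetic)

def moderate(upload: Upload, threshold: float = 0.5) -> str:
    """Return the action a cautious SSMI might take before publication."""
    if upload.user_declared_synthetic:
        return "publish_with_label"
    if upload.detector_score >= threshold:
        # Declaration contradicted by the automated check: to avoid losing
        # safe harbour, a risk-averse platform labels (or blocks) anyway --
        # the over-blocking risk discussed above.
        return "publish_with_label"
    return "publish"
```

Because detection models are imperfect, lowering the threshold to reduce liability sweeps in authentic content, which is precisely the chilling effect on satire and commentary described above.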


Ambiguity Around “Reasonable and Appropriate Technical Measures”


The draft leaves undefined what constitutes a reasonable and appropriate verification mechanism. Without standardisation, this creates interpretive uncertainty and inconsistent enforcement, complicating compliance audits.


Contradictions Between Safe Harbour and Content Moderation


The amended Rules introduce a proviso to Rule 3(1)(b), stating that intermediaries that make “reasonable efforts” to remove flagged content will not lose safe harbour under Section 79(2). But this creates a logical contradiction:


  • The entire premise of safe harbour is that platforms cannot pre-judge or edit content. Doing so makes them editors, not intermediaries.

  • If platforms start making judgments about what constitutes synthetically generated or unlawful content, they risk losing safe harbour under Section 79(3)(b), which withdraws protection once an intermediary has “actual knowledge” of unlawful content and fails to act on it.


The Metadata Problem


Embedding permanent identifiers “covering at least ten percent of the surface area” of an image or video may have aesthetic and usability implications, especially for creators. Moreover, bad actors can easily circumvent such labels using re-encoding, cropping, or AI-based content modification, making the enforcement of such mandates somewhat symbolic.
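As a toy illustration of why byte-bound identifiers are fragile (this example is mine, not from the draft): any identifier tied to the exact bytes of a file stops matching after even a trivial edit such as cropping or re-encoding, so verification against the original identifier simply fails.

```python
# Toy illustration: an identifier bound to a file's exact bytes breaks
# after any modification, however small.

import hashlib

original = bytes(range(256)) * 100   # stand-in for an image's raw bytes
cropped = original[100:]             # a trivial modification ("cropping")

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(cropped).hexdigest()
assert h1 != h2  # the identifier no longer matches the modified content
```

Robust watermarking schemes exist that survive some transformations, but none are tamper-proof, which is why the mandate risks being symbolic against determined bad actors.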


Chilling Effect on Innovation


Startups and small intermediaries offering generative AI tools may face disproportionate compliance costs. This could stifle innovation and limit India’s ability to compete globally in the AI ecosystem.


The Broader Implications


The Draft IT Rules, 2025 showcase India’s ambition to lead AI regulation. However, they risk turning intermediaries into digital law enforcement agencies without the technical capacity to execute these mandates. While the intent to curb deepfakes and misinformation is sound, the current framework risks:


  • Over-regulation of benign AI applications,

  • Legal uncertainty for AI startups, and

  • Excessive platform liability, potentially stagnating innovation.


A Missed Opportunity for a Balanced Framework


Instead of placing a blanket obligation on intermediaries, India could have adopted a tiered risk-based approach, similar to the EU AI Act. This approach would distinguish between:


  • High-risk synthetic content (e.g., political deepfakes, impersonations, election material); and

  • Low-risk creative or entertainment-based content.


Such a model would preserve creative freedom and innovation while ensuring accountability where harm is demonstrable.


The Way Forward


The Draft IT Rules, 2025 represent an important policy milestone—India’s first formal attempt to legally define and regulate AI-generated content. However, to achieve meaningful implementation, the government must:


  1. Engage with AI developers, intermediaries, and legal experts during consultations.

  2. Develop standardised technical protocols for watermarking or labelling AI-generated media.

  3. Introduce graded liability based on scale and intent, rather than a one-size-fits-all approach.

  4. Ensure that enforcement mechanisms do not infringe on user rights or artistic expression.


Conclusion


The Draft IT Rules, 2025 embody India’s growing resolve to address the ethical and legal challenges of artificial intelligence. Yet, regulation in this domain must walk a fine line between control and creativity, between responsibility and restraint.


While synthetically generated information poses real threats, an overly broad and compliance-heavy regime could end up chilling digital innovation more than curbing misinformation. India’s next challenge will be to craft a framework that encourages technological evolution while maintaining public trust without turning intermediaries into unwilling regulators.
