Unworthy Artificial Intelligence: India’s Urgent Need For a Regulatory Framework

Artificial Intelligence (AI) began as a symbol of human progress: a tool to enhance efficiency and innovation. Yet, in the absence of regulation, it now threatens the very dignity and order it once promised to uphold. India has witnessed a surge in AI-generated deepfakes: hyper-realistic images, videos, and audio that impersonate real people. From public figures to private citizens, none are spared. The lack of a legal framework has let this technology run unchecked, blurring the line between truth and fabrication.

An unregulated frontier

The technology itself is not new; what differentiates the present crisis is the accessibility of AI tools. What was once confined to research laboratories is now available on every smartphone. The ability to clone a face, fabricate a voice, or recreate an event can be exercised by anyone with an internet connection. In early 2025, AI systems capable of producing ultra-realistic visual and audio deepfakes entered the Indian digital sphere. Within months, they were misused to circulate defamatory, deceptive, and socially inflammatory content.

The consequences have been devastating: reputations destroyed, families humiliated, and social unrest fueled by false digital narratives. In an age where digital communication is intrinsic to life and liberty, the absence of statutory oversight over such technology creates a constitutional vacuum in which fundamental rights remain unprotected.

Constitutional rights in peril

At the heart of the issue lies the violation of fundamental rights guaranteed under Articles 14, 19, and 21 of the Constitution. The right to privacy and dignity, affirmed in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), extends beyond physical spaces to the digital sphere. Deepfakes that replicate a person’s likeness or voice without consent intrude upon bodily and informational privacy, amounting to a constitutional injury.

The right to reputation, recognized as an integral facet of the right to life, is also under grave threat. As the Delhi High Court observed in ICC Development (International) Ltd. v. Arvee Enterprises (2003), control over one’s own image and personality is a protected legal right. Yet, in the absence of AI-specific laws, such rights are left vulnerable to algorithmic impersonation and misuse by unknown actors operating behind screens.

The State, too, bears a constitutional duty under Article 14 to ensure equality before the law. Administrative inaction in regulating AI tools has created a class of powerful, unaccountable technological entities that operate above scrutiny, while citizens remain without remedy.

Social and psychological consequences

Deepfake technologies are not mere tools of mischief; they are instruments capable of destroying lives. A fabricated video released days before a wedding or election can irrevocably alter reputations, cause mental distress, or incite violence. Emotional trauma, public humiliation, and loss of social standing follow swiftly, while justice remains slow and uncertain.

In extreme cases, deepfakes have triggered depression, social ostracization, and even suicidal tendencies among victims. In the absence of statutory regulation, the digital space has reached a stage where the moment such manipulated content surfaces online, the ensuing harm becomes irreversible, often wreaking havoc upon individual dignity and public trust alike. Reputations built over decades can be undone in minutes — and the law, as it stands, offers no immediate recourse.

Vacuum in existing legal framework

The Information Technology Act, 2000 — India’s primary digital statute — was never designed to deal with autonomous or generative systems. Section 79, which governs intermediary liability, has often been misused by global platforms to evade accountability. Despite receiving actual notice of AI-generated impersonations or deepfake material, these platforms often fail to remove the content promptly. Instead, they merely disable access for the complainant, leaving the defamatory material visible to millions of others. Such token compliance undermines the due diligence obligations imposed by law and contravenes the spirit of the Shreya Singhal v. Union of India (2015) judgment.

Social media giants, while benefiting from India’s vast digital market, have not instituted transparent grievance-redressal systems or effective AI content detection mechanisms. Their “community guidelines” remain non-binding and arbitrary, permitting synthetic and defamatory media to proliferate without consequence. The result is an ecosystem where technology advances faster than the law, and accountability trails far behind.

Additionally, the absence of a comprehensive AI regulatory framework has compelled citizens to approach the courts individually for protection of their privacy, dignity, and reputation against deepfake abuse. This growing influx of petitions before different High Courts reflects a fragmented response to a national problem. The burden on the judiciary underscores a deeper policy failure: the lack of uniform licensing, certification, and monitoring of AI systems, which only legislation, not litigation, can effectively address.

Global precedents India cannot ignore

While India remains without any AI regulatory framework, other jurisdictions have moved decisively. The European Union’s AI Act (2024) adopts a risk-based classification model, prohibiting manipulative AI systems and imposing penalties of up to 7% of global annual turnover. The United States’ Executive Order on Safe, Secure, and Trustworthy AI (2023) mandates content watermarking and government oversight. China and Singapore have likewise enacted detailed frameworks for algorithmic accountability, content labelling, and licensing of high-risk AI systems.

These global precedents demonstrate a simple truth: unregulated AI is incompatible with constitutional democracies. Regulation does not hinder innovation — it legitimizes it. For India, which is home to one of the world’s largest digital populations, failing to act swiftly could mean institutionalizing harm before recognizing it.

Recent legal development

Recently, the Central Government, through the Ministry of Electronics and Information Technology (MeitY), proposed amendments to the IT Rules introducing the term “synthetically generated information” for AI-created or modified content. The draft mandates labelling, metadata tagging, and grievance mechanisms, an important step in recognizing deepfake harm. Yet it retains a structural flaw: intermediaries are required to act within 36 hours of receiving actual knowledge, which arises only through a court order or written government direction. This excludes direct user complaints and delays timely takedown. The rule, therefore, remains reactive rather than preventive. Unless the law empowers citizens to trigger mandatory removal within 24 to 30 hours of verified reporting and imposes strict sanctions for non-compliance, the reform risks becoming symbolic: acknowledging the problem without ensuring protection.

The path ahead

India urgently requires a comprehensive Artificial Intelligence Regulatory and Licensing Framework that not only governs innovation but safeguards constitutional rights and public trust. To ensure accountability and deterrence, the following measures are imperative:

  1. Mandate ethical licensing of AI tools before public release — ensuring that only verified and responsibly designed technologies are made accessible to the public.
  2. Establish a National Artificial Intelligence Authority vested with investigative and enforcement powers, empowered to monitor compliance, audit AI systems, and impose sanctions for violations.
  3. Ensure transparent and time-bound grievance redressal by digital intermediaries, making it mandatory for platforms such as Meta and Google to remove or disable access to reported and proven manipulative or defamatory content within 24 to 30 hours of receiving a verified complaint. Failure to comply must attract stringent statutory liability.
  4. Prescribe deterrent criminal and financial penalties for the creation, training, or circulation of synthetic media that impersonates real individuals, violates consent, or threatens public order.

The framework must function with precision, speed, and accountability; anything less renders justice reactive, not preventive. Every AI system operating in India should adhere to strict data provenance and consent-based training, ensuring no individual’s likeness or information is used without explicit authorization.

Technology must serve humanity—not dominate it. When innovation erodes autonomy, dignity, and truth, regulation becomes a constitutional imperative. Artificial Intelligence, if left unchecked, will not merely invade privacy; it will redefine identity. India stands at a defining juncture: the law must evolve before the next deepfake destroys another life, reputation, or institution.


Disclaimer

This website is for informational purposes only and is not intended to advertise or solicit work as per the Bar Council of India rules. By accessing www.foresightlawoffices.com, you acknowledge that you are seeking information about Foresight Law voluntarily. Nothing on this site constitutes legal advice or creates a lawyer-client relationship. Foresight Law is not responsible for any actions taken based on the content here. External links do not imply endorsement. Please do not share confidential information via this website. For details, review our Privacy Policy and Terms of Use.
