An Indian High Court has taken a strong stance against the misuse of artificial intelligence (AI), ordering the removal of an AI-generated video used in a fabricated political message. The video featured India’s Prime Minister Narendra Modi’s deceased mother, Heeraben Modi. The decision of the Patna High Court in the Indian state of Bihar reflects growing international concern about deepfake technology, misinformation, and inadequate regulation of artificial intelligence.
The Case: A Deepfake Enters the Indian Political Fray
The case centers on a viral video posted on the verified X (formerly Twitter) account of the Bihar unit of the opposition Indian National Congress, also known as the Congress Party. The video appeared to show Heeraben Modi criticizing her son’s political decisions. The ruling Bharatiya Janata Party (BJP), which leads India’s government, immediately filed a complaint against the allegedly defamatory video, arguing that it violated the dignity of women.
Justice P.B. Bajantri, Acting Chief Justice of the Patna High Court, ordered that the video be taken down immediately from all social media platforms. An FIR (First Information Report), the initial step in India’s criminal justice process, was also filed against the Congress Party.
The Congress Party contended that the video was simply political satire, though it said it is now investigating the matter internally.
India’s Challenge with Deepfakes
This isn’t the first instance of Indian courts confronting deepfake material. Recent months have seen several such cases:
- The Delhi High Court gave relief to Indian entrepreneur Ankur Warikoo after AI videos exploited his likeness for phony investment scams.
- Bollywood actors such as Aishwarya Rai Bachchan and Abhishek Bachchan obtained court orders directing platforms to remove deepfake content that violated their rights over their identities.
- Regular citizens reported being subjects of AI impersonation, from fraud calls to doctored videos used for extortion.
Yet India still does not have a standalone law on deepfakes. Indian courts continue to rely on the Information Technology Act of 2000, provisions of the Penal Code, and civil defamation law. These laws were written long before AI impersonation emerged as a known threat.
What is at Stake: Reputation, Democracy, Trust
The use of AI to produce fraudulent narratives, especially in the political arena, undermines democratic norms. Deepfake videos can confuse citizens, distort public opinion, and damage the reputations not only of public figures but of ordinary citizens too. As these manipulations become easier to create, governments and social media platforms are under pressure to enforce higher standards of verification, transparency, and accountability.
Global Significance of the Case
Deepfakes, videos created using artificial intelligence that convincingly alter faces and voices, are quickly becoming a global vehicle for disinformation.
India’s Concern: Several courts have taken action in the past year on deepfake cases, ranging from fake influencer endorsements of investment scams to doctored celebrity videos. But India still has no AI-specific laws addressing this abuse, leaving courts no option but to rely on older statutes.
Notable AI Legislation Across the World: The United States, for example, has passed state legislation aimed at AI-manipulated political ads and non-consensual pornography. Likewise, the European Union’s AI Act attempts to regulate high-risk use of AI, including deepfakes, but will be complicated to implement.
Risk to Democracy: With over 960 million registered voters, India’s democracy could be significantly affected by AI manipulation of election messaging, and this specter of AI manipulation is also a live concern in Europe, North America, and elsewhere.
The Bigger Picture: The Critical Need for Regulation of AI
Many specialists see this case as illustrating the lag between the rapid adoption of AI and slow-moving legislation. If governments do not enact regulatory changes and establish clear international norms, the use of artificial intelligence to create deepfakes will continue to undermine trust in the media, harm reputations, and weaken democracies.
Regulators and courts globally have called for:
- Mandatory labeling of AI-generated content
- Faster response systems for manipulated content
- Accountability for sites hosting manipulated content
- International cooperation, as deepfake content does not respect borders
From Patna in the eastern Indian state of Bihar to Washington, Brussels, and beyond, courts and legislatures are wrestling with the same challenge: how to constrain the illegitimate use of AI without impeding free expression. The Patna High Court’s order may have been local in nature, but it speaks to a global concern: the world is at an urgent crossroads and must develop strong, aligned mechanisms to regulate AI and prevent the weaponization of deepfakes.
For India, the ruling highlights a crucial moment: can the world’s largest democracy protect its electoral integrity from AI-driven manipulation? For the global community, the message is equally urgent: no country is immune to the disruptive potential of synthetic media.
This is bigger than one video, one leader, or one court ruling. It is about setting the tone for how humanity will navigate truth and deception in the digital era.