Legal Implications Of AI Driven Arrest Scams: Proposing Amendments To IT Act 2021

December 26, 2025 | Posted by kaviraj |

INTRODUCTION 

In today’s fast-evolving digital landscape, Artificial Intelligence has emerged as a transformative force reshaping communication, governance and innovation. Yet behind the promise of this growth lies a troubling reality: India has seen a recent surge in AI-based digital arrest scams. These scams are so sophisticated that they use AI-created deepfake content to build believable replicas of law enforcement authorities or government officials. These deceptive practices exploit and undermine public trust, tricking victims into believing they are the targets of fraudulent inquiries and coercing them into paying money or disclosing personal details. The frauds are both economically and psychologically crippling, eroding citizens’ trust in digital communication platforms and public institutions. The anonymity and cross-border nature of these fraudulent actions further complicate enforcement and expose the gaps in India’s existing legal frameworks, such as the IT Rules, 2021.

The present study sheds light on this disturbing surge of digital scams, attempting to understand how deepfake technology exploits loopholes in the legal framework. Through an analysis of the existing laws and the offences they cover, the study seeks to contribute to the debate on the limitations of current legislation and to propose potential lines of policy or legislative reform[1].

EXPLOITATIVE NATURE OF AI-GENERATED VOICES, WITH SPECIAL EMPHASIS ON DEEPFAKE (AI TOOLS) FOR IMPERSONATION

In an era of technological advancement in which artificial intelligence (AI) tools are supposed to make life easier and more convenient, AI has grown into one of the most exploited technologies. The many forms this exploitation takes around the world, and the damage it causes to digital citizens, are alarming; the rise of Artificial Intelligence is thus both an advantage and a disadvantage for the modern world. Tools such as ChatGPT and Google Gemini are now mainstream and growing more popular by the day because of their convenience of use and their ability to deliver the output people desire. But over time they are increasingly being exploited as a means of deception and fraud.

The advent of AI-powered cyber scams has caused a great deal of anxiety across the country’s legal, technological and administrative fields.

Digital arrest frauds have become one of the most prevalent forms of this phenomenon: scammers combine impersonation with voice-cloning AI tools and target individuals across socio-economic groups. They mimic official channels of communication, making threats and coercing compliance until the victims eventually give in. Scams in the past consisted mainly of generic messages or email phishing; they have now taken a far more believable form, using personalised video calls, voice synthesis and even forged visual images to masquerade as senior officials. The way AI technology can render an individual’s image and voice has heightened the deception and lent it a seemingly credible appearance. One of the most widely used AI tools for this kind of impersonation is the deepfake.

DEEPFAKE TECHNOLOGY

As the name denotes, a deepfake is a form of artificial intelligence used to create extremely convincing fake images, videos and audio. The digital arrest scam is itself built on this deepfake technology. Equipped with it, manipulators craft fraudulent communications that are more believable and harder to trace. To understand how scammers use the technology to commit digital arrest scams, it is important to understand the nature of deepfakes, the method by which they are produced, and how advances in the technology make them ever more realistic and attainable[2]. A digital arrest scam in India usually starts with impersonation, with fraudsters posing as law enforcement officials or officers of regulatory bodies such as the CBI, the NCB or the Income Tax Department. The impersonation is achieved through deepfake technology: the scammer appears to be a bona fide official, complete with forged badges, fake court orders or arrest warrants. Such impersonation undermines individual autonomy and erodes public faith in legitimate communication from the government[3].

Turning to real-life examples of how AI-generated voices have been used to exploit people: during the COVID-19 pandemic everything went digital, and that is when this trend of scams rose sharply. Official estimates put the amount lost to such scams at ₹120.3 crore in just the first quarter of 2024, indicating an exponential increase in both the number of incidents and the sums involved. Seven cases were recently reported in Indore in which individuals were duped of lakhs of rupees by cybercriminals using cloned voices of their family members[4]. There have been international cases as well. In Italy, scammers cloned the voice of the Italian defence minister to defraud international businesses; the targets included fashion tycoons such as Giorgio Armani and Patrizio Bertelli of Prada. The scammers falsely claimed that Italian journalists had been kidnapped and needed emergency ransom payments, and as a result these businessmen almost transferred one million euros to a fraudulent Hong Kong account. Similarly, in Kerala, a 73-year-old man lost around ₹40,000 after receiving a video call from what appeared to be a deepfake of a colleague asking for urgent financial help; the deepfake was so convincing that nothing alerted the victim[5]. In another incident in Bangalore in 2024, a group of residents lost ₹95 lakh after being shown deepfake videos of renowned businessmen such as Narayana Murthy and Mukesh Ambani fraudulently endorsing an investment scheme. The victims found it hard to convince the police that it really was a scam, while law enforcement agencies appeared to lack digital forensic support and jurisdictional clarity. These real-life stories clearly illustrate the pressing need for concrete AI-related legislation and an informed public to combat this new catastrophe[6].

ADVANCEMENT OF AI: A BANE FOR TECHNOLOGICAL INNOVATION

Artificial Intelligence (AI), once hailed as a transformative force for innovation, is increasingly revealing its darker side, particularly in the context of AI-generated voice scams. These scams have emerged as a potent cybercrime tool: criminals use AI to clone voices with startling precision, mimicking officials, family members or colleagues to deceive victims into transferring money or revealing sensitive data. The very technology that enabled smart assistants and voice-based authentication has become a weapon against unsuspecting individuals. One of the greatest threats is the anonymity it provides—perpetrators can generate deepfake voices remotely, host them on foreign servers, and operate across jurisdictions, making it extremely difficult to trace or prosecute them under Indian law. This undermines not only law enforcement’s capacity but also the foundational trust upon which digital innovation depends. Moreover, a significant lack of technical awareness among police and cybercrime agencies further aggravates the problem. Without adequate forensic tools and AI training, investigations are delayed, evidence is misinterpreted, and enforcement remains toothless. While India’s constitutional safeguards under Article 19(1)(a) protect freedom of speech, regulating AI misuse becomes a tightrope walk[7]. Overbroad or vague laws targeting deepfakes and voice manipulation could stifle legitimate expression, creative uses of AI, and journalistic freedoms. The judiciary, too, faces a unique challenge: courts must now grapple with the authenticity of AI-generated audio evidence, raising doubts about its admissibility and credibility unless supported by forensic AI verification. This not only complicates legal proceedings but can also delay justice. On the international front, the cross-border nature of these crimes makes enforcement a nightmare, particularly because of inadequate treaties and extradition mechanisms concerning cyber and AI crimes.
Compounding this is the failure of tech platforms to implement strong detection and takedown protocols. Although India’s Intermediary Guidelines and Digital Media Ethics Code, 2021, require swift action on flagged content, their implementation is inconsistent. As AI-generated voices continue to disrupt digital trust, India must consider adopting EU-style content moderation rules, invest in AI forensics, and promote blockchain solutions to track voice manipulations. Unless swift, targeted, and globally coordinated legal reforms are undertaken, AI will continue to transform from technological boon into a regulatory bane, undermining the very systems it was designed to enhance[8].
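
The blockchain-style tracking of voice manipulations mentioned above can be illustrated with a minimal append-only hash chain. This is a conceptual sketch only (the `ProvenanceChain` class and its fields are invented for illustration, not taken from any deployed system): each log entry commits to the hash of the previous entry, so a platform or forensic examiner can later detect whether a media file’s provenance record has been tampered with.

```python
import hashlib
import json
import time

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of a media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceChain:
    """Minimal append-only hash chain: each entry commits to the previous
    entry's hash, so any later alteration of the log is detectable by
    re-walking the chain."""

    def __init__(self):
        self.entries = []

    def record(self, media_bytes: bytes, note: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "media_fingerprint": fingerprint(media_bytes),
            "note": note,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the entry body deterministically and store the result with it.
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Re-check every link; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            stored = {k: v for k, v in e.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(stored, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```

The point of the design is that editing any earlier entry (say, the note attached to an upload) breaks every subsequent link, so tampering cannot be hidden without rewriting the whole chain.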

COMPARISON BETWEEN INDIA IT RULES 2021 AND EU AI ACT

The fundamental purpose of any innovation is to foster development in a way that lets people reap its benefits. Set against that objective, what we often grapple with are the risks and concerns associated with it. We see this with AI, one of the most powerful tools of this generation, used day to day in a range of productive activities. That usage has now tilted towards a largely unregulated form, bringing with it the problem of scams and frauds. An important aspect of AI voice scams is the lack of a comprehensive regulatory framework at the global level. Some countries have made serious efforts to understand the core ideas and issues that must be dealt with; others remain dazzled by AI innovation and place little emphasis on regulation. This contrast is visible in the regulatory approaches of India and the EU.

A major problem at the heart of regulating AI voice scams is the determination of liability. AI tools now exist in large numbers, and their endless domains of access make the source of a scam very difficult to trace; thus, in an AI scam, hardly any party escapes scrutiny. Social media platforms, however, enjoy limited immunity: under Section 79 of the Information Technology Act, 2000, intermediaries, including social media and messaging platforms, receive limited immunity if they comply with their due diligence obligations. This idea of limited immunity serves the interests of an individual scam case, but across the broader spectrum of these scams it is less effective in ensuring smooth justice. Turning to punishment, the Information Technology Act contains Section 66D, which punishes cheating by personation using a computer resource; this is highly applicable to voice scams of any kind[9]. Section 66C, which covers identity theft including biometric impersonation, and Section 72, which covers breach of confidentiality and privacy, are also strongly relevant to regulating voice scams. The Act, with the best of intentions, has managed to provide strong regulation through these provisions, but another major problem remains: determining the level of technology used to perpetrate the scams. Most of these scams in India operate across domains so large and extensive that they are hard to trace. Recently, several cases have been reported in India of people receiving calls from AI-cloned voices intended to defraud them through online money transfers on the pretext of an emergency.
The biggest task is to determine which level of software has been used to alter these voices and what the true identity behind the voice is, which is a matter of great concern. This circles back to the idea of limited immunity for social media and other intermediary platforms, which remain in a constant state of confusion despite warning their users about the risk of being defrauded. Most of the tools used for deception are encrypted, and even the intermediaries cannot identify them. Rule 3(1)(b) of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 requires intermediaries such as social media platforms to inform users not to impersonate others; for the reasons of difficulty stated above, the rule is hard for intermediaries to enforce and risks looking hollow in the context of AI development in India[10]. Thus there is a need to break the chain of encryption with honest intentions, without threatening genuine users, so that scammers are brought to justice and AI innovation can take its next leap under stronger regulation and protection, as envisaged in the much-awaited Digital India Act. In matters of voice recognition, stricter compliance must be ensured in obtaining consent from the source, and intermediaries would do well to upgrade their platforms with more up-to-date technology. This is crucial because we live in an era of deepfakes, and the Digital Personal Data Protection Act, 2023 offers no protection in this respect.

Comparing all these aspects and issues with the newer, more in-depth and comprehensive EU AI Act (first proposed in 2021), we see a codification of law that delves into the technologies themselves by categorising them. The EU’s commitment to regulating AI is rooted in a holistic perspective that transcends purely technological considerations. By categorising AI systems by risk level, the EU seeks to balance the potential benefits of AI, such as better healthcare and more efficient manufacturing, against the need to protect individuals and society from potential harms. The emphasis on fundamental rights and environmental sustainability underscores a comprehensive approach that goes beyond traditional regulatory frameworks. Importantly, the Act manages this without hindering innovation, by balancing these competing interests. Compared with this product of an advanced Western economy, we should not deny that Indian law has moved a considerable step forward within a short span, given the country’s technological limits, which will be rectified as it remains one of the fastest-growing economies. The new Digital India Act, expected soon, gives us further hope for the way forward.

AMENDMENTS

In order to address the rising threat of AI-generated voice and digital scams in India, an efficient and comprehensive legal framework is needed. A key proposal for addressing these threats is to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to include a risk-based classification of AI tools. Technologies used for voice cloning and deepfakes should be categorised as “high risk”.

Secondly, there should be a strong consent-based legal framework governing the collection of voice data and policing its misuse, so that the culprits behind internet scams can be identified. Since the intermediaries’ responsibility towards users under the IT Rules is limited, explicit consent should also be recorded before any voice or biometric data is processed or mimicked by AI tools, aligning the Rules more closely with the DPDP Act, 2023.
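
The explicit-consent requirement proposed above can be sketched as a simple intermediary-side registry. The `ConsentRegistry` class and its field names are hypothetical illustrations, not provisions of the IT Rules or the DPDP Act; the point is only that processing of voice or biometric data is refused unless a matching, unrevoked consent record exists for the stated purpose.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str      # person whose voice/biometric data is involved
    purpose: str         # e.g. "voice synthesis for accessibility app"
    granted_at: datetime
    revoked: bool = False

class ConsentRegistry:
    """Illustrative registry: voice-data processing is permitted only when
    a matching, unrevoked consent record exists for the stated purpose."""

    def __init__(self):
        self._records: list[ConsentRecord] = []

    def grant(self, subject_id: str, purpose: str) -> None:
        self._records.append(
            ConsentRecord(subject_id, purpose, datetime.now(timezone.utc))
        )

    def revoke(self, subject_id: str, purpose: str) -> None:
        for r in self._records:
            if r.subject_id == subject_id and r.purpose == purpose:
                r.revoked = True

    def may_process(self, subject_id: str, purpose: str) -> bool:
        return any(
            r.subject_id == subject_id and r.purpose == purpose and not r.revoked
            for r in self._records
        )
```

Because consent is bound to a specific purpose and is revocable, a platform using such a check could not lawfully repurpose a voice sample collected for one feature to train or drive a cloning tool for another.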

Thirdly, a dedicated regulatory authority for AI oversight should be established, modelled on the EU’s coordinated approach, to audit the AI systems deployed on digital platforms and enforce compliance. This authority could also facilitate cooperation between law enforcement and forensic experts to improve investigative efficiency.

Finally, the safe harbour under Section 79 of the IT Act should be narrowed so that intermediaries cannot claim immunity if they fail to detect and remove deepfake content, particularly when advanced detection tools are available. Automated AI forensic tools should also be made mandatory for platforms hosting user-generated audio or video content.
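
As a rough illustration of the automated detection-and-removal obligation proposed above, a platform-side blocklist keyed on content hashes might look like the following. The `TakedownRegistry` class is a hypothetical sketch: exact-hash matching catches only byte-identical re-uploads, so a real deployment would need perceptual hashing or AI forensic models on top of it.

```python
import hashlib

class TakedownRegistry:
    """Illustrative platform-side blocklist: once a file is confirmed as
    deepfake content, its SHA-256 digest is registered and byte-identical
    re-uploads are refused automatically."""

    def __init__(self):
        self._blocked: set[str] = set()

    @staticmethod
    def digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes) -> None:
        # Called after human or forensic review confirms the content is fake.
        self._blocked.add(self.digest(content))

    def allow_upload(self, content: bytes) -> bool:
        return self.digest(content) not in self._blocked
```

Even this crude mechanism illustrates the legal point: once a deepfake has been flagged, an intermediary that lets the identical file back onto its platform can no longer plausibly claim it lacked the means to detect it.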

CONCLUSION 

The surge of AI-generated digital scams sheds light on glaring gaps in India’s law enforcement apparatus. The technologies powering these scams present forms of trickery and extremely complicated challenges that the earlier laws cannot efficiently deal with. Although offences such as defamation, theft and obscenity are covered by the current cyber laws, those provisions may be insufficient to address the risks posed by AI. To combat deepfakes, the Digital India Act and the forthcoming AI legislation must include requirements that strike a balance between national security, free expression and privacy. India must therefore adopt a proactive legislative strategy to reduce the threats posed by AI and related technologies, and digital citizens should also exercise caution before trusting and disseminating such information.

[1] Online Bureau, “Rising threat of ‘Digital Arrest’ exploiting deepfake sparks concern among law enforcement”, available at: https://legal.economictimes.indiatimes.com/news/law-policy/rising-threat-of-digital-arrest-exploiting-deepfake-sparks-concern-among-law-enforcement/114274182 (visited on March 7, 2025).

[2] Shinu Vig, “Regulating Deepfakes: An Indian Perspective”, 17 JSS 70 (2024).

[3] M.P. Sandoval, M. de Almeida Vau, et al., “Threat of deepfakes to the criminal justice system: a systematic review”, 13 CS 41 (2024).

[4] Karishma Kotwal, “AI Voice Clone: New Tool of Cybercrime”, Times of India (Mar. 7, 2024), https://timesofindia.indiatimes.com/city/indore/ai-voice-clone-new-tool-of-cybercrime/articleshow/108280568.cms.

[5] Rishabh Chaudhary & Kanika Aggarwal, “AI-Driven Digital Arrest Scams: Legal Gaps in Regulating Deepfake Impersonation”, 6 Int’l J. Res. Pub. & Revs. 13124 (2025).

[6] Kuldeep Singh Panwar, Nilutpal Deb Roy, et al., “Rising Menace of Deepfakes with the Help of AI: Legal Implications in India”, 4 IJIRL 94 (2023).

[7] The Constitution of India, 1950, art 19(1)(a).

[8] Jui Purwat, “Legal Challenges of Deepfake Technology and AI-Generated Content in India”, Jus Corpus (Apr. 21, 2024), https://www.juscorpus.com/legal-challenges-of-deepfake-technology-and-ai-generated-content-in-india/.

[9] The Information Technology Act, 2000, s 79.

[10] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, r 3(1)(b).