IndiaAI launches five pioneering research and innovation projects to safeguard against deepfakes and cyber threats

SUMMARY
To strengthen India's digital trust ecosystem and combat the growing menace of deepfakes, IndiaAI has approved five innovative research and innovation projects. The projects fall under its Safe & Trusted AI initiative, a significant step in the national effort to build a safe, ethical, and responsible Artificial Intelligence ecosystem. The overarching aim is to strengthen India's ability to detect, audit, and defend against harmful AI-generated content. IndiaAI, an independent division under the Ministry of Electronics and Information Technology (MeitY), serves as the official nodal implementation body of the IndiaAI Mission.
Focus on developing advanced deepfake defense mechanisms
The projects were selected through the second round of IndiaAI's Expression of Interest (EoI), announced on December 10, 2024. The round drew an enthusiastic response, attracting more than 400 proposals from entities across the vibrant Indian AI ecosystem, including Indian Institutes of Technology (IITs), startups, research laboratories, and forensic institutes.
Five projects were ultimately chosen after a rigorous, multi-stakeholder selection process based on technical novelty, scalability, and broad societal applicability. The prevalence of deepfake detection among the selected themes reflects mounting concern worldwide over the misuse of AI to spread misinformation and distort digital reality.
Saakshya is a joint project between IIT Madras and IIT Jodhpur that leverages multi-agent systems and Retrieval-Augmented Generation (RAG) technology. Its primary role is to accurately identify manipulated images, videos, and audio in real time and deliver critical, actionable intelligence to law enforcement and media verification agencies.
AI Vishleshak is a collaborative project between IIT Mandi and the Directorate of Forensic Services, Himachal Pradesh. It aims to develop explainable AI (XAI) frameworks for detecting deepfakes and forged handwritten signatures. A defining aspect of the effort is its focus on adversarial robustness, which is essential for keeping forensic AI tools reliable over the long term as threats evolve.
Voice Deepfake Detection System is a standalone project by IIT Kharagpur that specifically targets AI-generated voice impersonation. Voice deepfakes are an emerging and serious vector for digital fraud and the spread of misinformation. The resulting technology is expected to significantly bolster India's forensic capabilities and cybersecurity defenses by improving real-time voice authentication.
Enhancing security and fairness of the AI ecosystem
The IndiaAI Mission's fundamental mandate is to foster technological self-reliance, spread the benefits of AI widely, and ensure that the national AI ecosystem can effectively address synthetic media, algorithmic bias, and adversarial manipulation of generative models.
Beyond direct technical defenses against deepfakes, two other forward-looking projects under the IndiaAI initiative aim to improve the safety and integrity of the AI ecosystem.
Digital Futures Lab and Karya are collaborating on a project to investigate and address gender bias in agricultural AI systems. It aims to ensure that AI models deployed across India's vast farming sector are inclusive and equitable, thereby facilitating responsible deployment.
Anvil, the fifth project, is being developed by Globals ITES Pvt Ltd and IIIT Dharwad. It is a penetration testing framework tailored to generative AI models, intended to harden them against adversarial exploits such as data poisoning that could otherwise undermine their integrity and operation.
A senior official associated with the IndiaAI Mission said, “Deepfakes pose not just a technological challenge but a societal one. Through these initiatives, India is building the guardrails for an AI future that’s both innovative and responsible.”
Quotation Source: TICE
Conclusion
Together, the five projects reflect IndiaAI's vision of turning ethical AI principles into operational frameworks that incorporate bias evaluation, forensic analysis, and system resilience. The tools and technologies emerging from the program are expected to be directly applicable to key domains such as law enforcement, cybersecurity, media verification, and digital governance.