Source: This post on India’s AI-powered surveillance and its impact on privacy rights is based on the article “The legal gaps in India’s unregulated AI surveillance” published in “The Hindu” on 18th December 2024.
UPSC Syllabus Topic: GS Paper 3 – Science and Technology – Developments and their applications and effects in everyday life.
Context: The article discusses India’s increasing use of AI-powered surveillance and its impact on privacy rights. It highlights legal gaps, issues with the Digital Personal Data Protection Act (DPDPA), and the lack of safeguards, and suggests adopting transparent, risk-based regulations to protect citizens’ constitutional rights.
How has India expanded its surveillance infrastructure?
- In 2019, India announced plans to create the world’s largest facial recognition system for policing.
- Over the next five years, AI-powered surveillance systems were deployed at railway stations, and Delhi Police integrated AI for crime patrols.
- Plans include launching 50 AI-powered satellites, further expanding surveillance infrastructure.
What concerns does AI-powered surveillance raise?
- Privacy Violation: AI systems like facial recognition collect data indiscriminately, as seen in the Telangana Police data breach, in which databases linked to schemes such as “Samagra Vedika” were accessed.
- Lack of Regulation: India has deployed surveillance without risk assessments or guidelines, unlike the EU, whose Artificial Intelligence Act bans real-time biometric surveillance in most cases.
- Legal Gaps: The DPDPA, 2023 provides broad exemptions for government data collection, such as under Section 7(g) (epidemics) and Section 7(i) (employment data). Citizens, however, face stricter rules under Section 15(c), which penalizes errors such as providing outdated personal data.
- Proportionality Issues: India’s surveillance lacks safeguards, challenging the principles of the K.S. Puttaswamy judgment, which recognized privacy as a fundamental right.
How does India’s approach differ from global practices?
- The EU’s Artificial Intelligence Act categorizes AI by risk levels, banning high-risk activities like real-time biometric surveillance except in emergencies.
- India uses AI-powered facial recognition in cities like Delhi and Hyderabad without risk assessments or public guidelines.
- India’s Digital Personal Data Protection Act (DPDPA) grants broad exemptions, unlike the EU’s stricter regulations.
- While the EU ensures accountability, India lacks a regulatory framework; promised laws like the Digital India Act remain pending.
What should be done?
- Adopt transparent data collection practices, including disclosure of what data is collected, its purpose, and storage duration.
- Ensure independent judicial oversight for data processing exemptions.
- Follow a risk-based approach like the EU to regulate high-risk AI applications.
- Embed privacy measures and consent mechanisms into AI systems before deployment, since retroactive fixes for privacy issues are costly and inefficient.
- Transparent rules, consent mechanisms, and accountability can prevent misuse.
- Addressing gaps in the DPDPA and enacting the Digital India Act are urgent for safeguarding privacy and civil liberties.
Question for practice:
Discuss how India’s expanding use of AI-powered surveillance raises concerns about privacy rights and how these challenges can be addressed effectively.