National Automated Facial Recognition System (NAFRS) – Explained, pointwise

Introduction

In March 2020, the Home Ministry approved the National Automated Facial Recognition System (NAFRS), to be implemented by the National Crime Records Bureau (NCRB).

Once implemented, NAFRS will function as a national-level search platform that uses facial recognition technology to facilitate the investigation of crime and the identification of persons of interest (e.g., criminals), regardless of face masks, makeup, plastic surgery, etc.

NAFRS is based on the relatively new technology of facial recognition, so there is an ongoing debate on finding the right balance between regulating and promoting it.

Let’s take a deep dive into various issues involved.

Rationale/Need

NAFRS will play a vital role in improving outcomes in criminal identification and verification by facilitating easy recording, analysis, retrieval, and sharing of information between different organizations.

Moreover, facial recognition in India is currently done manually; fingerprints and iris scans provide far more accurate matching results. Automated facial recognition offers an easier solution, especially for identification among crowds.

Benefits/Advantages

For India, a severely under-policed nation, NAFRS offers many benefits:

  • Control of crime through enhanced detection abilities.
  • Better border controls and counter-terrorism.
  • Protection of victims of human and child trafficking.
  • Identification of unidentified dead bodies.
Issues/Concerns

Various issues and concerns have been expressed against the proposed system. Some of those are:

  • Intrusive nature of the technology: The technology is deeply intrusive: computer algorithms map unique facial landmarks (biometric data) such as the shape of the cheekbones, the contours of the lips, and the distance from forehead to chin, and convert these into a numerical code, termed a faceprint. For the purposes of ‘verification’ or ‘identification’, the system then compares the generated faceprint against a large existing database of faceprints (typically available to law enforcement agencies through driver’s licence records or police mugshots).
  • Results are not accurate: The real problem is that facial recognition does not return a definitive result; it ‘identifies’ or ‘verifies’ only in probabilities (e.g., a 70% likelihood that the person in an image is the same person on a watch list). This creates the possibility of ‘false positives’, where the algorithm reports a match even when there is none, resulting in wrongful arrest.
  • Possibility of bias: Facial recognition software is based on pre-trained models. If certain types of faces (such as women, children, or ethnic minorities) are under-represented in the training datasets, this bias will degrade the system’s performance. Combined with such error and bias, facial recognition can result in the profiling of groups already over-represented in the criminal justice system (such as Dalits and minorities).
  • Impact on Right to privacy: As NAFRS will collect, process, and store sensitive private information i.e. facial biometrics for long periods, it will impact the right to privacy.
  • Discourages civil society activism: Further, as anonymity is key to the functioning of a liberal democracy, unregulated use of facial recognition technology will disincentivize independent journalism, the right to assemble peaceably without arms, and other forms of civil society activism. Due to its adverse impact on civil liberties, some countries have been cautious with the use of facial recognition technology.
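The probabilistic matching described above can be illustrated with a short, hypothetical sketch (the names, vectors, and the 0.70 threshold below are illustrative only, and are not NAFRS’s actual algorithm): a faceprint is simply a numerical vector, and ‘identification’ means finding database entries whose similarity to a probe image crosses a chosen threshold.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.70):
    """Return every database entry whose similarity to the probe exceeds
    the threshold -- a probabilistic match, not a definitive answer."""
    matches = []
    for name, vec in database.items():
        score = cosine_similarity(probe, vec)
        if score >= threshold:
            matches.append((name, round(score, 2)))
    return matches

# Hypothetical 3-dimensional faceprints (real systems use far larger embeddings)
database = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.2, 0.8, 0.5],
}
probe = [0.85, 0.2, 0.35]   # faceprint extracted from, say, a CCTV frame

print(identify(probe, database))  # → [('person_A', 0.99)]
```

Because the result is a similarity score rather than a yes/no answer, a lookalike’s faceprint can also cross the threshold, which is exactly how a ‘false positive’ arises.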

Facial recognition is already being used in various states of India.

Instances of usage in India
  • The government used facial recognition technology to track down protestors who were present at the Red Fort on January 26, 2021.
  • The Uttar Pradesh Police uses an AI-based facial recognition system called Trinetra. The police used this software to run surveillance on anti-CAA protestors, following which more than 1,100 arrests were made.
  • The Central Board of Secondary Education (CBSE) used facial recognition to match admit card photos on record with students logging in to take their board exams.
  • The Internet Freedom Foundation (IFF) estimates that there are currently 42 ongoing facial recognition projects in India, from the Automated Multimodal Biometric Identification System (AMBIS) in Maharashtra to FaceTagr in Tamil Nadu. Of these, at least 19 are being developed and deployed by state-level police departments and the NCRB for the specific purpose of security and surveillance.
Global examples
  • USA: The Federal Bureau of Investigation in the United States uses facial recognition technology for potential investigative leads. However, in 2020, the Facial Recognition and Biometric Technology Moratorium Act of 2020 was introduced in the Senate to prohibit biometric surveillance without statutory authorization.
  • England: Police forces in England use facial recognition to tackle serious violence. However, in one instance, the Court of Appeal in the United Kingdom ruled the use of facial recognition technology by the South Wales Police unlawful in the absence of clear guidelines.
  • China: In other cases, countries such as China use facial recognition for racial profiling and mass surveillance — to track Uighur Muslims.
  • Europe: Privacy watchdogs in the European Union have called for a ban on facial recognition.
  • Various multinational companies: IBM has closed its facial recognition technology division. Amazon has put a moratorium on the technology for a year. Microsoft has announced it will not sell its facial recognition technology to the police in places without federal regulation.
Implications

The biggest implication is the likely impact on the right to privacy. In Justice K.S. Puttaswamy vs Union of India (2017), the Supreme Court recognized the right to privacy as a precious fundamental right and laid down a three-fold requirement. Accordingly, any encroachment on the right to privacy requires:

  1. The existence of ‘law’ (to satisfy legality of action)
  2. There must exist a ‘need’, in terms of a ‘legitimate state interest’
  3. The measure adopted must be ‘proportionate’ (there should be a rational nexus between the means adopted and the objective pursued) and ‘least intrusive.’

Unfortunately, NAFRS fails each one of these tests.

  • NAFRS lacks ‘legitimacy’: It does not stem from any statutory enactment (such as the DNA Technology (Use and Application) Regulation Bill, 2018, proposed to identify offenders) or an executive order of the Central Government. Rather, it was merely approved by the Cabinet Committee on Economic Affairs in 2009.
  • Disproportionate measure: Even if we assume that there exists a need for NAFRS to tackle modern day crimes, this measure is grossly disproportionate. This is because to satisfy the test of ‘proportionality’, benefits for the deployment of this technology have to be sufficiently great, and must outweigh the harm.
    • For NAFRS to achieve the objective of ‘crime prevention’ or ‘identification’ will require the system to track people on a mass-scale, resulting in everyone becoming a subject of surveillance: a disproportionate measure.
Suggestions/Measures
  • Adequate safeguards: Both the Information Technology Act, 2000 and the Personal Data Protection Bill, 2019 give the central government unchecked powers of surveillance. We need adequate safeguards, such as penalties, so that police personnel are not able to misuse facial recognition technology.
  • Algorithmic Impact Assessment: Agencies that want to deploy these technologies should be required to carry out a formal algorithmic impact assessment (AIA). Modelled after impact-assessment frameworks for human rights, environmental protection and data protection, AIAs help governments to evaluate artificial-intelligence systems and guarantee public input.
  • Rigorous review: Legislation should be enacted that requires that public agencies rigorously review any facial recognition technologies for bias, privacy and civil-rights concerns.
Way forward

Without accountability and oversight, facial recognition technology has strong potential for misuse and abuse. In the interest of civil liberties, and to keep democracy from turning authoritarian, it is important to impose a moratorium on the use of facial recognition technology until we have meaningful checks and balances, in addition to statutory authorization of NAFRS and guidelines for its deployment.


Source: The Hindu, Business Insider, Down to Earth, Indian Express
