Source: The post is based on the article “Grappling with AI: How govt’s plan to deal with revolutionary tools like ChatGPT and Google’s Bard” published in Indian Express on 3rd May 2023
What is the News?
The Group of Seven (G7) countries have said that "risk-based" regulation of AI could be a potential first step towards creating a template to regulate emerging tools such as OpenAI's ChatGPT and Google's Bard.
What is a risk-based approach to regulating AI chatbots?
The G7's "risk-based" approach could involve graded regulation: a lighter compliance burden on developers or users of AI tools deployed in low-risk areas such as word processing or music generation, and stricter regulatory supervision of, say, a tool aiding doctors in medical diagnosis or one linked to a face-recognition device that matches people's identities.
How have different governments responded to generative AI tools like ChatGPT?
EU: The EU has taken a predictably tough stance, with the proposed AI Act segregating artificial intelligence by use-case scenarios based broadly on the degree of invasiveness and risk. Italy has become the first major Western country to ban ChatGPT over privacy concerns.
UK: It has taken a 'light-touch' approach that aims to foster, not stifle, innovation in this nascent field.
Japan: It too has taken an accommodative approach to AI developers.
China: It has been developing its own regulatory regime. It has also put out a 20-point draft to regulate generative AI services, including mandates to ensure accuracy and privacy, prevent discrimination and guarantee protection of intellectual property rights.
India: It has said that it is not considering any law to regulate the artificial intelligence sector.
US: It has put out a Blueprint for an AI Bill of Rights, which proposes a non-binding roadmap for the responsible use of AI.
– The Blueprint spells out five core principles to guide the effective development of AI systems, with special attention to guarding against unintended consequences such as civil and human rights abuses.