Artificial Intelligence (AI) has emerged as a transformative force, influencing sectors ranging from healthcare to finance. As mentioned in my last article, the European Union (EU), recognising the profound implications of AI, has pioneered a comprehensive regulatory framework – the AI Act – to ensure the ethical development and deployment of AI technologies. This article examines the EU’s AI Act and assesses its applicability to India, considering the country’s unique socio-economic and technological landscape. It also explores the mechanisms a few other nations have laid out for AI regulation.

In July 2024, the EU enacted the AI Act, the first major legal framework for AI regulation globally. The Act categorises AI applications into four risk levels, described below (a short illustrative sketch follows this overview):

  • Unacceptable Risk: AI systems that pose significant threats, such as social scoring by governments, are prohibited.
  • High Risk: Applications in critical sectors like healthcare and transportation are subject to stringent requirements, including transparency, accountability, and human oversight.
  • Limited Risk: Certain AI applications, such as chatbots, must adhere to transparency obligations, ensuring users are aware they are interacting with AI.
  • Minimal Risk: Most AI systems, including video games and spam filters, face no additional requirements under the Act.

The Act also establishes a European Artificial Intelligence Board to oversee implementation and ensure consistent application across member states (European Commission, 2024).
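
To make the four-tier structure concrete, here is a minimal Python sketch that encodes the tiers and a few example use cases as a simple lookup table. The tier descriptions paraphrase the obligations summarised above; the use-case assignments and every name in the code are illustrative assumptions, not legal classifications under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, paraphrased."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent requirements: transparency, accountability, human oversight"
    LIMITED = "transparency obligations (users must know they are interacting with AI)"
    MINIMAL = "no additional obligations under the Act"

# Illustrative mapping of example use cases to tiers; an actual
# classification is a legal determination under the Act's annexes.
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def describe(use_case: str) -> str:
    """Summarise the (simplified) regulatory consequence for a known use case."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"no example classification for {use_case!r}")
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_TIERS:
    print(describe(case))
```

Running the sketch prints one line per use case, mapping it to its tier and the corresponding obligation – a compressed restatement of the list above rather than a compliance tool.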

India’s engagement with AI regulation has been evolving. In 2018, NITI Aayog, the government’s policy think tank, released the National Strategy for Artificial Intelligence, focusing on sectors like healthcare, agriculture, education, smart cities, and smart mobility. Subsequently, in 2021, NITI Aayog introduced the Principles for Responsible AI, addressing ethical considerations for AI deployment in India.

In 2023, the Indian government enacted the Digital Personal Data Protection Act, which addresses some privacy concerns relevant to AI platforms. The Ministry of Electronics and Information Technology (MeitY) has issued advisories requiring platforms to obtain explicit government permission before deploying under-tested or unreliable AI models and to label AI-generated content, especially content susceptible to misuse through deepfakes.

Despite these initiatives, India lacks a comprehensive, AI-specific regulatory framework akin to the EU’s AI Act. The absence of a dedicated AI regulatory body and reliance on existing agencies for AI-related policies indicate a need for a more structured approach.

While the EU’s AI Act presents a commendably robust regulatory paradigm, its wholesale transplantation into the Indian context confronts formidable challenges. Foremost among these is India’s vast and heterogeneous populace, which demands a regulatory framework nuanced enough to accommodate regional disparities and the nation’s kaleidoscopic diversity. Moreover, the breakneck pace of AI innovation necessitates a governance structure with the agility to adapt as the technology evolves.

Equally significant is India’s imperative to cultivate innovation and accelerate economic growth, which may necessitate a more malleable regulatory regime, lest overly rigid rules stifle the burgeoning ecosystem of AI startups. Furthermore, while the Digital Personal Data Protection Act mitigates some privacy concerns, the absence of an overarching and coherent data governance architecture leaves gaps in ensuring responsible AI deployment. Finally, the successful enforcement of AI regulations hinges upon India’s ability to bolster its institutional expertise and infrastructure – an endeavour requiring sustained investment and capacity building.

The EU’s AI Act represents a pioneering effort in AI regulation, emphasising ethical considerations and public trust. However, India’s unique socio-economic context, technological landscape, and economic priorities necessitate a tailored approach to AI regulation. While adopting the EU model wholesale may not be feasible, India can draw valuable insights from the Act to develop a framework that balances innovation with ethical considerations, ensuring that AI technologies benefit society at large.

We must, albeit briefly, also cast our gaze upon the AI regulatory paradigms established across other parts of the globe. It is seldom a misstep to assimilate commendable elements from international frameworks into India’s prospective AI regulatory edifice, whenever it emerges from the labyrinth of deliberation and sees the light of day.

The United Kingdom, though yet to unveil a comprehensive AI regulatory framework akin to its continental neighbour, the EU, has made significant strides in this domain. It has instituted the Office for AI, a dedicated unit within the AI Policy Directorate of the Department for Science, Innovation and Technology. This body champions a context-sensitive and balanced strategy, leveraging extant sector-specific laws to guide AI governance. In March 2023, the UK government unveiled a white paper elucidating its vision for a pro-innovation domestic AI regulatory approach. With over £2.5 billion invested in AI since 2014, Britain stresses the pitfalls of an overzealous and inflexible regulatory posture that risks stifling innovation and impeding AI adoption. Instead, it prioritises regulating AI applications based on their contextual deployment, adopting a judicious calculus of benefits versus risks.

To this end, the UK has delineated essential characteristics for its AI regulatory architecture: enabling responsible innovation, maintaining proportionality to avoid undue burdens, fostering trust through risk mitigation, ensuring adaptability to technological evolution, providing clarity for stakeholders, and encouraging collaborative efforts among government, regulators, and industry. These principles aim to regulate AI usage rather than the technology itself, reflecting a nuanced approach. A phased implementation strategy, commencing with regulator discretion and evolving toward statutory obligations, further brings to the fore the adaptability of this iterative approach.

Indian policymakers can derive two pivotal insights from the British model. First, a context-sensitive, pro-innovation framework can catalyse responsible AI proliferation without hampering innovation, focusing on applications rather than the technology’s architecture. Second, the articulation of cross-sectoral principles, coupled with the flexibility accorded to sector-specific regulators, ensures coherence and dynamism in addressing the evolving AI landscape.

Switzerland, in its 2021 position paper, laid out the merits of a technology-neutral approach, advocating the judicious adaptation of existing data protection standards. Meanwhile, the United States, despite lacking a unified regulatory framework, boasts over 80 federal guidelines governing AI, reflecting a pragmatic, case-by-case governance ethos that eschews excessive precaution. Canada’s Artificial Intelligence and Data Act (AIDA) places a premium on safeguarding human rights, curtailing high-risk AI applications, and promoting responsible innovation – a perspective Indian policymakers would do well to emulate.

Intriguingly, Japan’s laissez-faire approach offers yet another paradigm: it relies on existing sector-specific laws covering data protection, antimonopoly, and copyright, supplemented by guidelines, while entrusting AI management largely to the private sector. This strategy highlights the potential for self-regulation within a robust legal framework.

By synthesising these global paradigms with India’s distinct socio-economic realities, policymakers can architect a regulatory framework that is agile, inclusive, and forward-looking, ensuring AI emerges as a vanguard of progress rather than a catalyst for inequity.