The reviewed paper, Regulating AI: Applying Insights from Behavioural Economics and Psychology to the Application of Article 5 of the EU AI Act, presents a well-founded legal critique of the EU Artificial Intelligence Act, particularly Article 5, which outlines prohibited AI practices. The authors adopt an interpretive approach rooted in behavioural economics and cognitive psychology, offering a compelling argument for expanding the scope of regulatory interpretation. They address key shortcomings in the current legal text, especially its reliance on subjective intent (mens rea) and the vague use of terms like “subliminal,” “manipulative,” and “deceptive.”

I. Legal Certainty and the Vagueness Doctrine: Clarifying Prohibited Practices

Legal enforceability is contingent upon legal certainty (lex certa), a cornerstone of EU law under Article 52(1) of the EU Charter of Fundamental Rights. Article 5’s ambiguous language undermines both enforceability and legal predictability. The authors therefore propose precise definitions for three terms that currently suffer from normative ambiguity, rendering compliance assessments speculative and enforcement discretionary:

  • Subliminal techniques (stimuli below conscious awareness);
  • Manipulative techniques (interference with decision-making processes);
  • Deceptive techniques (misrepresentation of information).

II. Behavioural Science in Legal Interpretation: Evidence-Based Regulation

A key contribution of the paper lies in its interdisciplinary method, integrating behavioural science into legal analysis. This aligns with contemporary regulatory trends that factor in psychological vulnerabilities and cognitive biases. The authors explain that mental shortcuts such as focusing on recent information (availability bias), over-relying on the first piece of information encountered (anchoring), or following what others do (social conformity) can be exploited by AI systems, through techniques such as hidden messages or subtle cues, to influence people’s choices without their awareness.

A particularly relevant example is the representativeness heuristic, where people judge probabilities based on how much something resembles their mental stereotypes rather than on statistical reasoning. For instance, someone might assume that an introverted person who likes to read is more likely to be a librarian than a salesperson, even though salespeople vastly outnumber librarians. This cognitive shortcut is not inherently harmful but can become problematic when exploited systematically by AI.
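
To make the base-rate point concrete, here is a minimal sketch of how Bayes’ rule corrects the representativeness intuition; all probabilities are made-up, illustrative values, not empirical estimates:

```python
# Base-rate neglect, illustrated with made-up numbers: even a strong
# "introverted reader" signal cannot overcome the occupational base rates.

p_librarian = 0.002        # prior: share of workers who are librarians (illustrative)
p_salesperson = 0.10       # prior: share of workers who are salespeople (illustrative)

p_profile_given_librarian = 0.80    # P(introverted reader | librarian), assumed
p_profile_given_salesperson = 0.10  # P(introverted reader | salesperson), assumed

# Unnormalised posteriors via Bayes' rule, comparing the two occupations only
librarian_score = p_librarian * p_profile_given_librarian
salesperson_score = p_salesperson * p_profile_given_salesperson

total = librarian_score + salesperson_score
print(f"P(librarian | profile)   = {librarian_score / total:.2f}")   # ~0.14
print(f"P(salesperson | profile) = {salesperson_score / total:.2f}") # ~0.86
```

Even with the stereotype strongly favouring the librarian, the base rates dominate: the salesperson remains far more probable, which is exactly the statistical reasoning the heuristic bypasses.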

In digital environments such as social media and recommendation platforms, this heuristic contributes to the creation of echo chambers, where individuals are only exposed to information that aligns with their existing beliefs. When AI systems use personalised algorithms to recommend content, they often rely on these cognitive patterns to maximise engagement. Although the goal may be to reduce information overload and tailor experiences, the unintended consequence is the reinforcement of existing biases, beliefs, and stereotypes.
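
A toy simulation of this feedback loop, entirely hypothetical and modelling no real platform, shows how an engagement-maximising recommender can collapse a user’s exposure onto a narrow set of topics:

```python
import random

random.seed(0)

# Toy feedback loop: an engagement-maximising recommender narrows exposure.
# All topics, weights, and update rules are illustrative assumptions.
topics = ["politics_a", "politics_b", "sports", "science"]
affinity = {t: 1.0 for t in topics}  # user starts with uniform interest

def recommend():
    # The recommender favours topics with the highest estimated engagement.
    weights = [affinity[t] for t in topics]
    return random.choices(topics, weights=weights)[0]

for step in range(500):
    topic = recommend()
    # Clicks are more likely on already-preferred topics, and each click
    # further raises that topic's weight: a self-reinforcing loop.
    if random.random() < affinity[topic] / sum(affinity.values()):
        affinity[topic] += 0.5

share = {t: round(affinity[t] / sum(affinity.values()), 2) for t in topics}
print(share)  # exposure typically collapses onto one or two topics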

Empirical studies have shown how this dynamic can lead to polarisation and even radicalisation. For example, if an AI system detects mildly racist views in a user’s behaviour, it may begin to recommend increasingly extreme content, leading the user to believe such views are socially accepted and widespread. Haroon et al. provided large-scale evidence of such ideological bias in YouTube’s recommendation algorithm, and of the radicalisation patterns it can reinforce.

Similarly, the availability heuristic suggests that individuals assess risk or importance based on how easily examples come to mind. AI systems can exploit this by amplifying emotionally charged or recent information. During the COVID-19 pandemic, for instance, the widespread sharing of images depicting overwhelmed hospitals heightened public risk perception and encouraged precautionary behaviours such as mask-wearing and social distancing. While beneficial in public health contexts, the same logic is exploited in commercial settings: research published in 2022 revealed that 12 food and beverage companies were using AI to increase exposure to unhealthy products in stores and online. Frequent visibility makes these items cognitively salient, influencing purchasing decisions and reducing the appeal of healthier alternatives.
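
A minimal sketch of how repeated exposure can tilt a simple choice model: choice probability here follows a softmax over salience scores that grow with each additional impression. The products, appeal values, and the log-bonus functional form are all illustrative assumptions, not findings from the cited research:

```python
import math

# Illustrative only: choice probabilities from a softmax over salience,
# where salience grows with the number of recent impressions.
def choice_probs(impressions, base_appeal):
    # Salience = intrinsic appeal + a log bonus for repeated exposure,
    # mimicking "ease of recall" under the availability heuristic.
    salience = {k: base_appeal[k] + math.log1p(impressions[k]) for k in impressions}
    z = sum(math.exp(s) for s in salience.values())
    return {k: math.exp(s) / z for k, s in salience.items()}

appeal = {"snack_bar": 0.0, "fruit": 0.5}  # fruit slightly preferred a priori

print(choice_probs({"snack_bar": 0, "fruit": 0}, appeal))   # fruit favoured
print(choice_probs({"snack_bar": 20, "fruit": 0}, appeal))  # exposure flips the choice
```

With zero impressions the intrinsically preferred item wins; twenty impressions of the snack bar are enough to reverse the choice probabilities, despite no change in its actual appeal.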

These behavioural insights provide a compelling justification for shifting the legal focus away from intent and toward measurable effects, especially for AI systems that shape beliefs, perceptions, or behaviours at scale.

III. Moving from Intent to Impact: Toward an Effects-Based Liability Framework

Perhaps the paper’s most significant argument is for replacing intent-based liability with an objective, harm-focused model. Current formulations of Article 5 hinge on the developer’s intentions, failing to account for the complex, emergent behaviours of AI systems.

By proposing a strict liability or effects-based approach, mirroring frameworks in product liability, the GDPR (Arts. 5 and 25), and consumer law, the authors advocate for a regulatory model focused on foreseeable harm rather than intent. This would:

  • Ensure AI systems that produce subliminal or manipulative effects fall under Article 5, regardless of intent;
  • Require developers to conduct proactive risk assessments and apply due diligence, consistent with the precautionary principle (a minimal sketch of such an effects-based check follows this list).
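
What an effects-based check might look like in practice is sketched below, assuming a hypothetical pre-deployment experiment that compares user behaviour with and without the candidate system. The metric, cohort data, and threshold are placeholders, not anything prescribed by the Act or the paper:

```python
from statistics import mean

# Hypothetical effects-based pre-deployment check: flag a system whose
# measured behavioural effect exceeds a threshold, regardless of intent.
# Metric and threshold are illustrative placeholders, not legal standards.

def behavioural_effect(treatment, control):
    """Difference in mean outcome (e.g., purchase rate) between users
    exposed to the AI feature and a control group."""
    return mean(treatment) - mean(control)

def article5_flag(treatment, control, threshold=0.05):
    # Effects-based logic: the developer's intent never appears here;
    # only the measured impact on behaviour matters.
    return abs(behavioural_effect(treatment, control)) > threshold

exposed = [0.31, 0.28, 0.35, 0.30]   # illustrative per-cohort purchase rates
baseline = [0.20, 0.22, 0.19, 0.21]
print(article5_flag(exposed, baseline))  # True -> needs review / mitigation
```

The point of the sketch is structural: intent is simply absent from the decision function, which is precisely what the authors’ proposed liability model demands.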

A real-world example is the increasing use of chatbots in mental health applications. These tools aim to offer psychological support, and many companies are now integrating large language models (LLMs) such as ChatGPT into them. However, if not carefully designed, such systems may unintentionally harm users with mental health conditions.

For instance, one of the most crucial principles in psychological therapy is consistency, encompassing both the regularity of sessions and the stability of therapeutic methods. This is particularly vital for individuals with Borderline Personality Disorder (BPD), who are especially sensitive to inconsistent communication. Large language models often generate responses that vary in tone, style, or emotional depth because they draw on broad, heterogeneous training data. Such inconsistency may confuse or distress users, especially BPD patients, potentially worsening their symptoms.
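
To illustrate how such inconsistency could be surfaced in testing, here is a minimal sketch that scores tone drift across repeated model responses to the same prompt. The responses are hard-coded stand-ins for real model outputs, and the lexical similarity metric is a deliberately crude placeholder for more robust semantic measures:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Crude consistency audit for a support chatbot: compare repeated answers
# to the same prompt and flag large stylistic drift. The responses below
# are hard-coded stand-ins for real model outputs.
responses = [
    "I hear you. That sounds really difficult. Let's take it one step at a time.",
    "I hear you. That sounds really hard. Let's take it slowly, together.",
    "Per your query, multiple coping strategies exist; see the options below.",
]

def worst_pairwise_similarity(texts):
    scores = [SequenceMatcher(None, a, b).ratio() for a, b in combinations(texts, 2)]
    return min(scores)  # the worst-case pair drives the consistency verdict

MIN_SIMILARITY = 0.5  # illustrative threshold; would need clinical calibration
if worst_pairwise_similarity(responses) < MIN_SIMILARITY:
    print("Warning: inconsistent tone across responses; review before deployment.")
```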

This example illustrates the broader point made in the paper: the impact of AI tools must be evaluated not only based on intent but also on the foreseeable psychological harm they may cause. This reframes regulatory concerns away from developer intent and toward the cognitive impact on users, supporting a harm-based regulatory rationale.

IV. Regulatory Pragmatism: Balancing Prohibition with Innovation

Importantly, the authors do not succumb to precautionary maximalism. They temper their critique with a strong commitment to regulatory proportionality, a cardinal principle of EU administrative law. They acknowledge the innovation-preserving mandate embedded in Recitals 1 and 5 of the EU AI Act and caution against regulatory chilling effects.

Rather than imposing blanket bans, they advocate for a risk-based approach: AI techniques with high potential to manipulate cognition should face stricter oversight, but not be automatically prohibited without proper context.

V. Compliance and Enforcement: Towards a Functionalist Jurisprudence

The illustrative catalogue of cognitive techniques and heuristics developed by the authors functions as a compliance toolkit, enabling stakeholders, including:

  • AI developers,
  • Data protection officers (DPOs),
  • Independent auditors,
  • Regulatory authorities,

to conduct algorithmic audits, human rights impact assessments (HRIAs), and data protection impact assessments (DPIAs) under a unified interpretative lens.
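
One hypothetical way to operationalise such a toolkit in an audit pipeline is sketched below, with the technique catalogue reduced to a simple lookup. The category labels, fields, and decision rule are illustrative assumptions, not structures drawn verbatim from the paper:

```python
from dataclasses import dataclass

# Hypothetical audit record mapping observed system behaviours to the
# Article 5 technique categories discussed above. Wording is illustrative.
TECHNIQUE_CATALOGUE = {
    "subliminal": "stimuli presented below conscious awareness",
    "manipulative": "interference with decision-making processes",
    "deceptive": "misrepresentation of information",
}

@dataclass
class AuditFinding:
    system: str
    technique: str          # key into TECHNIQUE_CATALOGUE
    evidence: str           # e.g., A/B test result, UX review note
    foreseeable_harm: bool  # the effects-based question, independent of intent

    def requires_article5_review(self) -> bool:
        return self.technique in TECHNIQUE_CATALOGUE and self.foreseeable_harm

finding = AuditFinding(
    system="recommendation-engine-v2",
    technique="manipulative",
    evidence="engagement loop narrows content diversity (see audit log)",
    foreseeable_harm=True,
)
print(finding.requires_article5_review())  # True
```

A shared record format of this kind is what would let developers, DPOs, auditors, and regulators apply the same interpretative lens across algorithmic audits, HRIAs, and DPIAs.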

In doing so, the paper operationalises a functionalist approach to law, transforming Article 5 from a principled aspiration into a justiciable, enforceable standard. This is particularly significant for emerging litigation on AI-induced harm, where legal standards must evolve to encompass algorithmic agency, non-human causation, and autonomous system outputs.

VI. Prospective Legal Reforms and Global Implications

Finally, the authors’ proposal to amend Article 5 to include objectively harmful consequences, even in the absence of intent, holds significant relevance beyond the EU. As jurisdictions such as India, Brazil, and the ASEAN bloc move toward AI-specific legal regimes, this paper may serve as a model legislative commentary.

The proposed revision is in harmony with the UNESCO Recommendation on the Ethics of AI and the OECD AI Principles, which emphasise human agency, transparency, and accountability as non-negotiable regulatory imperatives.

The EU AI Act, and by extension this paper’s recommendations, have multi-layered economic implications for India, including:

1. Trade and Market Access

Indian tech firms offering AI services or products in Europe will need to ensure ex-ante conformity with Article 5. Those failing to do so may face legal action, reputational damage, or barriers to market access. This could necessitate a realignment of product design, documentation, and testing procedures, especially among Indian MSMEs and startups targeting EU clients.

2. Compliance-Driven Innovation

This shift will create demand for AI compliance professionals, auditors, legal consultants, and ethicists, opening a niche but critical employment sector. Indian law firms, too, will increasingly need to expand their AI law practice groups to support this transition.

3. Norm Diffusion and Regulatory Imitation

India, like many jurisdictions, is likely to be influenced by the Brussels Effect, whereby EU regulations set de facto global standards owing to the size of the EU market. The principles clarified in this paper may indirectly shape India’s forthcoming AI policy, including bans or restrictions on dark patterns, manipulative UX design, and algorithmic nudging of the kind examined in the paper.

4. Consumer Trust and Brand Differentiation

Companies that proactively align with the higher standards of the EU AI Act could enjoy reputational benefits and consumer trust, both within India and internationally. Ethical AI practices, once perceived as regulatory burdens, may become competitive differentiators in global B2B and B2C markets.

A Normatively Grounded and Legally Coherent Intervention

In sum, this paper performs a critical jurisprudential function: it exposes and rectifies the doctrinal under-specification of Article 5 by leveraging empirical behavioural insights, functional legal analysis, and comparative doctrinal reasoning. It rightly reorients the debate toward a consequence-sensitive and ethically grounded framework for AI regulation.

For India, this is not merely an academic exercise. As a major AI developer nation with global ambitions, India must internalise the compliance implications of this framework while leveraging its principles to shape its own sovereign AI policy. The intersection of legal clarity and economic strategy as articulated in this paper offers India a roadmap for becoming a globally respected AI power that is both competitive and ethical.

If integrated into India’s regulatory imagination, the insights from this paper could catalyse a new era of AI governance that protects citizens, empowers innovators, and anchors India’s position in the global AI economy.

The authors’ interpretive and normative stance, if codified, would fortify the EU AI Act against emergent algorithmic risks while maintaining its commitment to fostering innovation and legal certainty. This is a model of principled pragmatism and deserves to inform both legislative amendments and interpretive guidance across AI-regulating jurisdictions worldwide.

References

  • Zhong, H., O’Neill, E., & Hoffmann, J. A. (2024, March 24). Regulating AI: Applying insights from behavioural economics and psychology to the application of Article 5 of the EU AI Act. AAAI-24 Technical Tracks, 38(18).
  • Haroon, M., & Chhabra, A. (2022, March). YouTube, the great radicalizer? Auditing and mitigating ideological biases in YouTube recommendations.
  • Hamdoun, S., Monteleone, R., Bookman, T., & Michael, K. (2023, March). AI-based and digital mental health apps: Balancing need and risk. IEEE Technology and Society Magazine, 42(1).