Unpacking the EU AI Act: An ASEAN Perspective

By Nigel Hee

The European Union (EU) recently unveiled the Artificial Intelligence Act, a novel piece of legislation that aims to regulate the development, deployment and use of artificial intelligence (AI) systems within the EU. The Act is predicated on a risk-based approach, classifying AI systems into different risk categories and imposing corresponding obligations and restrictions while prohibiting some specific uses.

One of the most significant aspects of the AI Act is its potential extraterritorial reach. Given the size and influence of the European market, companies targeting markets within the EU may need to comply with the Act’s requirements, even if they are based outside the EU. Additionally, the Act is accompanied by strict liability rules, holding companies accountable for any harm caused by their AI systems, regardless of fault.

The Risk-Based Approach: Not For Everyone

The degree of alignment with the EU AI Act varies across the region. As the EU takes the lead in establishing a comprehensive AI governance framework, other jurisdictions are watching closely as they shape and refine their own regulatory approaches.

As a bloc, ASEAN released the ASEAN Guide on AI Governance and Ethics in February 2024. Nonetheless, many ASEAN countries, such as Vietnam, Thailand, Indonesia, and the Philippines, have adopted a wait-and-see approach. These countries are still in the early stages of developing comprehensive AI governance frameworks, though they have expressed interest in aligning with international standards.

At the same time, other ASEAN countries have adopted a principles-based stance, issuing guidelines and frameworks rather than binding legislation. Singapore is notable for its efforts under this approach. Emphasizing ethical AI development and deployment, it has published multiple guidelines, frameworks, and toolkits, such as the Model AI Governance Framework for Generative AI, the AI Verify toolkit, and, most recently, the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems.

Australia is one jurisdiction that has adopted a risk-based approach closely aligned with the EU AI Act. While not legally binding, its framework sets out ethical principles and guidelines for the development and use of AI systems.

Japan has taken a distinct approach to AI regulation, focusing on “agile governance”. Rather than imposing strict regulations, Japan has opted for a more collaborative approach, encouraging industry self-regulation and public-private partnerships. The government has also implemented initiatives to foster public trust in AI, such as guidelines for AI development that set out the AI Utilization Principles and the Social Principles of Human-Centric AI. This approach has thus far encouraged innovation while addressing ethical and social concerns.

Revising Liability Laws for AI

A unique component of the EU AI Act is the introduction of strict liability rules for AI systems. Under the Revised Product Liability Directive, companies can be held liable for any harm caused by their AI systems, even in the absence of fault or negligence. This approach represents a significant shift from traditional product liability laws, which typically require the claimant to prove fault or defect.

The AI Liability Directive, a complementary piece of legislation, aims to harmonize liability rules across the EU for AI-related damages. It introduces a “presumption of causality” that eases the burden on victims of showing that harm was caused by a specific fault or omission. The Directive also gives victims more tools to seek legal reparation, including a right of access to evidence held by companies and suppliers.

This strong focus on liability has prompted discussions in other jurisdictions, including ASEAN and Australia, about the need to revise their existing liability frameworks to address the unique challenges posed by AI systems.

ASEAN’s Liability Landscape

Within ASEAN, the liability landscape for AI-related harms is diverse and in flux. Countries like Singapore and Malaysia are aware of the gaps in their existing liability regimes, recognizing the need for clarity and accountability in the AI era.

Other ASEAN countries, such as Indonesia and Vietnam, are still in the early stages of discussing potential liability reforms related to AI. However, as these nations continue to develop their AI governance frameworks, liability is likely to be a key consideration.

Australia’s Potential Liability Reforms 

In Australia, meanwhile, there have been calls to update the country’s product liability laws to better accommodate AI systems. The current laws, based on the principle of strict liability for defective products, may not be sufficient to address the complexities of AI-related harms. This is perhaps best reflected in the public’s general distrust of the sufficiency of current regulation: 77% of Australians believe that the government should require businesses to assess AI risks before their products are released to the public, while 69% think that the government should have an independent third party assess the risks of businesses’ AI products or services before release.

Currently, under the Australian Consumer Law, manufacturers are strictly liable for personal injuries or property damage caused by a ‘safety defect’ in their products, but it is not clear how these rules apply to AI systems. Some proposals suggest introducing a new category of “digital products” or “digital services” within the liability framework, specifically tailored to AI systems. With the passing of the EU AI Act, Australia may take another page out of the EU playbook and establish product liability for AI systems, requiring AI developers and providers to demonstrate that their systems are safe and free from defects.

Balancing Innovation and Accountability

While revising liability laws is crucial for ensuring accountability and protecting consumers, policymakers must strike a balance between promoting innovation and imposing overly burdensome regulations. Excessive liability concerns could potentially stifle the development and adoption of beneficial AI technologies.

As such, any liability reforms should be carefully crafted to provide clear guidance and legal certainty, while still encouraging responsible and ethical AI development and deployment.

Alternative Approaches and Global Harmonization 

While the EU AI Act represents a significant step towards establishing a global standard for AI governance, it is important to recognize that alternative approaches exist. Some jurisdictions may prioritize innovation and industry self-regulation, while others may adopt a more prescriptive regulatory framework.

As AI systems increasingly operate across borders, there is a growing need for international cooperation and harmonization of AI governance frameworks. Efforts are underway to develop global standards and best practices, facilitated by organizations like the Organisation for Economic Co-operation and Development (OECD) and the International Organization for Standardization (ISO).

The EU AI Act and the accompanying AI Liability Directive have set a precedent for addressing the complex issue of liability in the AI era. As ASEAN countries, Australia, and other jurisdictions navigate their own AI governance frameworks, revising liability laws to cover AI-related harms and damages is likely to be a priority. By learning from the EU’s approach and engaging in international collaboration, these regions can contribute to the development of a harmonized and effective global AI liability regime.
