The EU AI Liability Directive was originally intended to modernise existing liability frameworks to address the unique challenges posed by AI systems. Notably, one of its stated goals was to ease the burden of proving fault, while also providing much-needed clarity for developers and deployers of AI technologies in the EU. However, during the AI Action Summit in Paris (10-11 February 2025), the European Commission withdrew the AI Liability Directive from the list of legislative acts it was considering for 2025.
Though the EU AI Liability Directive is currently in limbo, the question remains: how should liability for AI systems be handled? Traditional liability frameworks, such as product liability and tort, struggle to account for harms arising from AI systems. Current frameworks therefore need to be adapted so that when an AI system causes harm, claimants do not bear an undue burden of proving fault.
Diversity brings fragmentation
This question presents a unique challenge for Asia. Some countries – like China and Korea – have binding rules (China’s Interim Measures for the Management of Generative AI Services and Korea’s AI Basic Act) that specify accountability standards for AI-related harm. For example, Korea’s AI Basic Act requires clear labelling of generative AI output and spells out both the obligations on developers and the enforcement mechanisms behind them.
Conversely, countries such as Singapore, Japan, and Australia favour soft, voluntary frameworks. For instance, Singapore’s Model AI Governance Framework offers best practices and ethical guidelines without immediate legal enforceability. This diversity reflects the region’s balancing act between maintaining oversight of AI and fostering innovation to support rapid economic growth. However, these diverse regulatory approaches have a direct impact on the concepts and bearers of liability, potentially leading to fragmentation and cross-border challenges.
Attribution and transparency challenges
Another major problem is the difficulty of attributing harm to an AI system. At this point, most AI systems are “black boxes”: it is next to impossible to point to a specific component of the system and definitively identify it as the source of harm. For certain types of AI, such as image and video generation tools, it is relatively simple to identify when harmful content has been generated. In automated decision-making systems, however, pinpointing the exact source of harm is far more complicated. Existing liability principles – strict and fault-based liability in particular – must therefore be updated to assign responsibility.
Some jurisdictions, like Japan and Singapore, have issued non-binding guidelines that encourage companies to institute internal governance protocols and conduct regular risk assessments to manage these challenges. China has taken a more direct approach, requiring AI developers to explicitly label generated content and embed metadata in it, making it easier to trace content origins and assign responsibility. Nonetheless, the technical challenges remain a significant obstacle to conclusively assigning responsibility for harm caused by AI.
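To make the labelling-and-metadata idea concrete, below is a minimal sketch in Python of how a developer might attach a provenance record to generated output. The schema and field names (generated_by_ai, model, sha256) are illustrative assumptions, not terms drawn from China’s rules or any particular standard; they simply show the kind of traceability information a developer could record.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    """Wrap AI-generated text in a simple, illustrative provenance record."""
    return {
        "content": text,
        "provenance": {
            "generated_by_ai": True,  # explicit label marking the content as AI-generated
            "model": model_name,      # which system produced the content
            "created_at": datetime.now(timezone.utc).isoformat(),
            # A content hash lets downstream parties check that the record
            # still matches the content it describes.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

if __name__ == "__main__":
    record = label_generated_content("Example model output.", "example-model-v1")
    print(json.dumps(record, indent=2))
```

A record like this, stored alongside or embedded in the output, is one way to support the tracing of content origins that regulators increasingly expect; real deployments would use an established provenance standard rather than an ad hoc schema.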
Balancing innovation with accountability
There is a clear regional tension: on one hand, a desire to protect public safety and uphold ethical standards – evident in China’s comprehensive AI laws and Vietnam’s draft Digital Technology Industry Law – and on the other, a need to preserve the region’s competitive edge in innovation. Policymakers are wary of imposing overly burdensome requirements that could stifle tech development, especially in markets like India and Singapore where AI is seen as a critical driver of economic growth. Regional efforts such as the ASEAN Guide on AI Governance aim to harmonize these disparate approaches, seeking a common baseline for accountability without sacrificing the flexibility needed to nurture rapid technological advancement.
Moving forward
AI technologies inherently transcend national borders. Without a common set of standards, companies operating across different Asian markets face a patchwork of regulations – each with its own definitions, risk classifications, and enforcement mechanisms.
Harmonization minimizes regulatory arbitrage, reduces compliance costs, and ensures that AI systems are developed under consistent safety and ethical criteria. For instance, initiatives like the ASEAN Guide on AI Governance are paving the way for a regional baseline that could eventually align with international standards such as the EU AI Act and the GDPR.
Furthermore, uniform standards allow companies to innovate confidently, knowing that a product meeting one set of rules can be more easily adapted for neighbouring markets. This consistency is particularly important in Asia because it helps create an integrated market for AI technologies.
Most importantly, harmonization brings consistency for consumers and stakeholders, who ultimately benefit from clearer expectations and legal thresholds for AI safety, transparency, and accountability. A standardized approach can help prevent scenarios where inconsistent regulations erode public trust in AI. Cross‑border cooperation on AI liability also enables regional authorities to share best practices, conduct joint audits, and address emerging risks collectively, strengthening the overall governance framework for AI in Asia.