As ChatGPT Hits Asia’s Shores, Is There Impetus for Asia’s Policymakers to Legislate AI?

By Desarack Teso, JD/MBA, CIPP-A, CIPP-E | Senior Advisor, Asia-Pacific

Having lived in Europe from February 2020 until earlier this month, I witnessed a whole host of new digital legislation adopted under the EU’s digital strategy. These laws have grown increasingly sophisticated as the uptake of established, emerging, and even novel technologies accelerated during the pandemic. They include the EU Data Governance Act, the Digital Services Act, the Digital Markets Act, the Digital Operational Resilience Act (aka DORA), and Directive 2022/2555 on Network and Information Security (aka the NIS2 Directive), to name a few. On deck are the AI Act Proposal and the AI Liability Directive Proposal.

I relocated back to Asia after a three-year stint in Europe, just in time to witness the reactions of the region’s policymakers and regulators as ChatGPT became available. The question on my mind is whether Asia will follow suit with the EU AI Act Proposal.

Just last week, one major Asian jurisdiction publicly released a consolidated AI bill and called on its lawmakers to urgently deliberate and approve the country’s first-ever AI legislative framework (and the first in Asia, for that matter). Key policy goals include realizing the potential value of AI to enhance the country’s national competitiveness (not unlike the EU’s digital strategy) and managing the potential risks of AI.

Could the EU AI Act Proposal serve as a benchmark for policymakers in this diverse region to legislate AI? After the EU’s GDPR took effect in 2018, I witnessed first-hand several countries in Asia ramping up their data protection laws, including China, India, Indonesia, Japan, Malaysia, Singapore, South Korea, Thailand and Vietnam (I may have missed a few!). Kudos to Japan and South Korea for receiving adequacy decisions from the EU, which demonstrate that their data protection regimes are on par with the EU GDPR.

Based on my experience, most policymakers in Asia appreciate that their respective jurisdictions have distinctive legal systems, characters, and historical backgrounds. As such, they contend that it is impossible to model their legislation on another jurisdiction’s, even on a policy issue as novel and complex as AI.

I agree with this assessment but contend that the EU AI Act Proposal has strengths that Asian policymakers should consider studying further. One such strength is the proposal’s risk-based approach: setting regulatory burdens proportionate to an AI system’s risks.

In short, the EU AI Act Proposal sets out a four-tier risk approach: (1) AI systems posing “unacceptable risk,” which are flat-out prohibited; (2) “high-risk” AI systems, which must undergo a pre-market conformity assessment and meet other regulatory requirements; (3) AI systems posing “transparency risks,” which are subject to transparency obligations; and (4) AI systems posing “minimal or no risk,” which are permitted without restrictions.

“High-risk” AI systems bear the heaviest regulatory burdens, including, for example, record-keeping and reporting requirements, notification and transparency requirements, cybersecurity measures, and human oversight.

Another key strength worth noting is that the AI system provider, not a government regulator, is responsible for assessing the risks of the AI system the provider places on the market. This “self-regulatory” model, based on a set of clear principles and criteria (to be issued later), has the potential to be the most effective regulatory response for balancing the speed of innovation (e.g., ChatGPT) against managing the risks of AI, many of which become known only after an innovation reaches the market.

Is it still too early to tell what influence the EU AI Act will have on potential AI legislation in Asia (and around the world)? The lesson from the GDPR experience is that it is not a question of “what” but “when” this new wave of AI legislation will come. As with the GDPR, policymakers in Asia are likely, at the very least, to aim for a certain degree of consistency with the EU AI Act, hopefully around the latter’s risk-based approach and self-regulatory model. For organizations, it is never too early to start planning how to address this new wave of digital legislation.
