New Law on Artificial Intelligence Takes Effect on 1 March 2026

Introduction

On 10 December 2025, the National Assembly of Vietnam passed the Law on Artificial Intelligence No. 134/2025/QH15 (“Law on AI”). The Law on AI takes effect on 1 March 2026 and will replace the existing legal provisions related to artificial intelligence (“AI”) under the recent Law on Digital Technology Industry. The Law on AI marks Vietnam’s first comprehensive, standalone law governing AI. It draws considerable influence from the European Union’s Artificial Intelligence Act, from which many of its concepts are adapted.

This Update provides a high-level overview of the key regulatory reforms introduced by the Law on AI. Note that many of the provisions under the Law on AI remain subject to future regulatory guidance from the Government, which is likely to take the form of decrees.

Key Features

Key Definitions

The Law on AI defines artificial intelligence as the electronic implementation of human intellectual capabilities, including learning, reasoning, perception, judgment, and natural language understanding. An artificial intelligence system is a machine-based system designed to perform AI capabilities with varying degrees of autonomy, capable of self-adaptation after deployment, and able to infer from input data to generate outputs such as predictions, content, recommendations, or decisions that may affect physical or virtual environments.

These definitions are couched in broad terms, with the objective of covering many existing and foreseeable AI technologies, including machine learning models, generative AI systems, and automated decision-making tools.

Risk-based Classification of AI systems

The Law on AI adopts a risk-based regulatory approach, under which AI systems are classified based on (i) their impact on human rights, safety and security; (ii) the field of use of the system, particularly essential sectors or sectors directly related to the public interest; and (iii) the scope of users and the scale of the system’s impact. The Law on AI classifies AI systems into three categories:

  1. High-risk AI systems: Systems that may pose significant risks to human life, health, legitimate rights, and interests of individuals or organisations, public interests, or national security.
  2. Medium-risk AI systems: Systems that may mislead, influence, or manipulate users where the AI-generated nature of the interaction or output is not readily recognisable.
  3. Low-risk AI systems: Systems that do not fall within the high-risk or medium-risk categories.

Responsibility of Providers of AI Systems

Providers of AI systems are responsible for classifying their own systems, and must maintain sufficient documentation to support the classification. Those that provide medium‑risk or high‑risk AI systems must notify the Ministry of Science and Technology through the National AI Portal before the systems can be deployed. Providers that are uncertain as to the risk classification must seek guidance from the authorities.

The notification procedure is not mandatory for providers of low‑risk AI systems. However, they are encouraged to publish the basic information of their systems to enhance transparency.

Responsibility of Deployers of AI Systems

Deployers of AI systems can rely on (or “inherit”) the providers’ classifications but must carry out a reclassification of the AI systems if modifications to these systems introduce or create new or higher risks.

For high-risk AI systems, regulators will conduct periodic inspections, as well as targeted checks when violations are suspected. Medium-risk AI systems will be monitored through reports, sample checks or independent assessments.

Transparency Requirements

The Law on AI introduces broad transparency requirements, especially for systems that interact with humans and generate content.

For all AI systems that interact with humans, providers must ensure that users can identify when they are interacting with an AI system rather than a human (unless other laws specify otherwise).

For AI‑generated content, providers must mark AI‑generated audio, images and videos in a machine‑readable format in accordance with Government regulations. Deployers must clearly notify the public when text, audio, images or videos have been created or edited by AI, if the content could cause confusion about the authenticity of events or persons. In addition, they must apply visible labels to AI‑generated content that simulates real people’s appearance or voice or recreates real events.

For artistic works such as films or other creative content, labelling must be implemented in a way that does not obstruct display. These transparency measures must be maintained throughout the period during which the system and its content are provided to users.

Conformity Assessment for High-risk AI Systems

High‑risk AI systems are subject to mandatory conformity assessment before deployment and whenever significant changes are made to the systems. The law provides two pathways for assessment:

  1. Mandatory certification before use: The Prime Minister will issue a list of systems that need to be assessed by registered or recognised assessment organisations.
  2. Self-assessment: Other high‑risk AI systems may either be self‑assessed by the provider or assessed by registered or recognised assessment organisations.

The assessment verifies compliance with the requirements of the Law on AI, specifically risk management, data governance, technical documentation, human oversight, transparency, and incident management.

Obligations for High-risk AI Systems

Providers of high-risk AI systems must implement continuous risk management, maintain documentation and logs, ensure transparency and human oversight, cooperate with authorities, and support incident remediation.

Deployers must operate systems within approved purposes, ensure safety and transparency, provide information to authorities and users, and cooperate in inspections and incident handling.

Obligations for Medium-risk and Low-risk AI Systems

Providers of medium‑risk AI systems must ensure transparency in line with the law’s requirements and, when requested by authorities during inspections or where risks or incidents are suspected, explain the system’s purpose, operating principles, main input data and safety measures, without disclosing source code, detailed algorithms, parameters or trade secrets. Deployers must likewise be able to explain system operations, risk control processes and incident handling when requested. Users must follow the notification and labelling rules relevant to their systems.

Providers and deployers of low‑risk systems are required to provide accountability information only when violations are suspected or where system use affects legitimate rights and interests.

Users are generally free to exploit and use low‑risk systems for lawful purposes but must assume legal responsibility for their own use.

Support Mechanisms for Businesses

The Law on AI formally sets out mechanisms through which the State will support AI development, which includes incentives. Central to this framework is the National AI Development Fund, a non-profit, off-budget state fund designed to finance AI infrastructure, research and development of core technology, enterprise development and AI talent.

The law also grants enterprises preferential access to national AI infrastructure, including computing resources, shared datasets and controlled testing environments. Startups, small and medium-sized enterprises, and research-oriented entities are identified as priority beneficiaries.

Finally, the Law on AI establishes a controlled sandbox mechanism, allowing eligible projects to be tested under regulatory supervision. Sandbox results may serve as a basis for conformity assessment recognition or for exemption, reduction or adjustment of certain compliance obligations. 

Transition Period

Providers and deployers of AI systems already in operation before the Law on AI takes effect are granted transition periods to comply with the new requirements:

  1. Medical, educational and financial sectors: 18 months
  2. All other sectors: 12 months

During these transition periods, competent state authorities retain the power to order the suspension or termination of any AI systems found to pose a serious risk of harm. 

Further Information

Please feel free to reach out to our contact partners should you have queries on the above development.


Disclaimer

Rajah & Tann Asia is a network of member firms with local legal practices in Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Vietnam. Our Asian network also includes our regional office in China as well as regional desks focused on Brunei, Japan and South Asia. Member firms are independently constituted and regulated in accordance with relevant local requirements.

The contents of this publication are owned by Rajah & Tann Asia together with each of its member firms and are subject to all relevant protection (including but not limited to copyright protection) under the laws of each of the countries where the member firm operates and, through international treaties, other countries. No part of this publication may be reproduced, licensed, sold, published, transmitted, modified, adapted, publicly displayed, broadcast (including storage in any medium by electronic means whether or not transiently for any purpose save as permitted herein) without the prior written permission of Rajah & Tann Asia or its respective member firms.

Please note also that whilst the information in this publication is correct to the best of our knowledge and belief at the time of writing, it is only intended to provide a general guide to the subject matter and should not be treated as legal advice or a substitute for specific professional advice for any particular course of action as such information may not suit your specific business and operational requirements. You should seek legal advice for your specific situation. In addition, the information in this publication does not create any relationship, whether legally binding or otherwise. Rajah & Tann Asia and its member firms do not accept, and fully disclaim, responsibility for any loss or damage which may result from accessing or relying on the information in this publication.

