Introduction
The Infocomm Media Development Authority of Singapore (“IMDA”) has launched a new Model Artificial Intelligence (“AI”) Governance Framework for Agentic AI (“MGF for Agentic AI”). This framework for reliable and safe agentic AI deployment is the first of its kind, giving organisations looking to deploy agentic AI a structured overview of the relevant risks and emerging best practices in managing these risks.
The MGF for Agentic AI recognises that agentic AI holds transformative potential for users and businesses, but also carries with it new challenges for effective accountability. It is thus crucial to understand these risks and implement the necessary governance measures to harness agentic AI responsibly.
The MGF for Agentic AI provides organisations with guidance on technical and non-technical measures they need to implement to deploy agents responsibly, across four dimensions:
- Bounding of risks: Assessing and bounding the risks upfront by selecting appropriate agentic use cases and placing limits on agents’ powers, such as autonomy and access to tools and data;
- Human accountability: Making humans meaningfully accountable for agents by defining significant checkpoints at which human approval is required;
- Technical controls: Implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and controlling access to whitelisted services; and
- End-user responsibility: Enabling end-user responsibility through transparency and education/training.
This Update highlights the key features of the MGF for Agentic AI and what organisations looking to deploy agentic AI should be aware of to ensure safety and reliability.
Risks of Agentic AI
Agentic AI systems are systems that use AI agents to plan across multiple steps to achieve specified objectives. Unlike traditional AI and generative AI, AI agents can reason and take actions to complete tasks on behalf of users, allowing organisations to automate repetitive tasks, such as those related to customer service and enterprise productivity.
However, because agents have access to sensitive data, the ability to make changes to their environment, and increased capability and autonomy, agentic AI introduces new risks:
- Erroneous actions: Incorrect actions such as an agent fixing appointments on the wrong date or producing flawed code.
- Unauthorised actions: Actions taken by the agent outside its permitted scope or authority, such as taking an action without escalating it for human approval as required by company policy or a standard operating procedure.
- Biased or unfair actions: Actions that lead to unfair outcomes, especially when dealing with groups of different profiles and demographics, such as vendor selection or hiring decisions.
- Data breaches: Actions that lead to the exposure or manipulation of sensitive data, including personal data or confidential information (e.g. customer details, trade secrets, or internal communications).
- Disruption to connected systems: As agents interact with other systems, a compromised or malfunctioning agent can disrupt connected systems, e.g. by deleting a production codebase or overwhelming external systems with requests.
Model Governance Framework
The MGF for Agentic AI provides the following guidance on the measures that should be implemented for the responsible deployment of agentic AI.
- Assess and Bound the Risks Upfront
When planning to use agentic AI, organisations should consider the following:
- Determining suitable use cases for agent deployment by considering agent-specific factors that can affect the likelihood and impact of the risk; and
- Making design choices to bound the risks upfront, by applying limits on agents’ access to tools and systems and defining a robust identity and permissions framework (see the illustrative sketch below).
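By way of illustration, the sketch below shows one way an organisation might bound an agent’s powers in code: a deny-by-default allowlist of tools and actions, tied to a per-agent identity. This is a minimal sketch only; the class, tool and scope names are hypothetical and are not prescribed by the MGF for Agentic AI.

```python
# Minimal sketch of bounding an agent's powers upfront.
# All names (AgentPermissions, TOOL_ALLOWLIST, etc.) are illustrative;
# the MGF for Agentic AI does not prescribe a specific implementation.

from dataclasses import dataclass, field

# Explicit allowlist of tools the agent may invoke, with per-tool scopes.
TOOL_ALLOWLIST: dict[str, set[str]] = {
    "calendar": {"read", "create_draft"},   # cannot confirm bookings itself
    "crm": {"read"},                        # read-only access to customer data
}

@dataclass
class AgentPermissions:
    agent_id: str
    allowed_tools: dict[str, set[str]] = field(default_factory=dict)

    def check(self, tool: str, action: str) -> None:
        """Deny by default: raise unless the tool/action pair is allowlisted."""
        if action not in self.allowed_tools.get(tool, set()):
            raise PermissionError(
                f"Agent {self.agent_id} is not permitted to '{action}' on '{tool}'"
            )

# Each agent gets its own identity and a least-privilege permission set.
scheduler = AgentPermissions("scheduler-01", TOOL_ALLOWLIST)
scheduler.check("calendar", "create_draft")  # passes
try:
    scheduler.check("crm", "delete")         # not allowlisted
except PermissionError as e:
    print(e)
```

The key design choice is that permissions are enforced outside the agent’s own reasoning, so a misbehaving or manipulated agent cannot talk its way past them.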
- Make Humans Meaningfully Accountable
When deploying agentic AI, the organisations and humans responsible remain accountable for the agents’ behaviours and actions. However, accountability becomes harder to trace when agent actions emerge dynamically and adaptively from interactions rather than from fixed logic.
To address these challenges to human accountability, organisations should consider:
- Clear allocation of responsibilities within and outside the organisation, by establishing chains of accountability across the agent value chain and lifecycle, and emphasising adaptive governance so that the organisation can quickly understand new developments and update its approach as the technology evolves; and
- Measures to enable meaningful human oversight of agents, such as requiring human approval at significant checkpoints, auditing the effectiveness of human approvals, and complementing these measures with automated monitoring (see the illustrative sketch below).
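As a simple illustration of a human-approval checkpoint, the sketch below gates “significant” actions behind an explicit approval step and records every decision for later audit. The is_significant and request_human_approval functions are hypothetical placeholders for an organisation’s own policies and escalation tooling, not part of the MGF for Agentic AI.

```python
# Minimal sketch of a human-approval checkpoint. The policy check and
# escalation hook below are hypothetical stand-ins for an organisation's
# own approval workflow.

def is_significant(action: dict) -> bool:
    """Org-defined policy: which actions require a human in the loop."""
    return action["type"] in {"payment", "contract_signature", "data_deletion"}

def request_human_approval(action: dict) -> bool:
    """Escalation hook: block until a named approver accepts or rejects."""
    print(f"Approval requested for: {action}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def execute_with_checkpoint(action: dict, audit_log: list[dict]) -> None:
    approved = (not is_significant(action)) or request_human_approval(action)
    # Record every decision so the effectiveness of approvals can be audited.
    audit_log.append({"action": action, "approved": approved})
    if not approved:
        raise RuntimeError(f"Action blocked pending human approval: {action}")
    # ... proceed with the action ...
```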
- Implement Technical Controls and Processes
Agentic AI requires additional controls during the key stages of the implementation lifecycle. Organisations should consider:
- Design and development stage: The new components and capabilities of agents necessitate new and tailored controls. Implement controls such as tool guardrails and plan reflections, and limit the agent’s impact on the external environment by enforcing least-privilege access to tools and data (see the illustrative sketch after this list).
- Pre-deployment stage: Before deployment, it is important to test for new dimensions such as overall task execution, policy adherence and tool-use accuracy, and to test at different levels and across varied datasets to capture the full spectrum of agent behaviour.
- Deployment stage: Roll out agents gradually and monitor them continuously in production, with real-time monitoring post-deployment to ensure that agents function safely.
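To make the design-stage and pre-deployment measures concrete, the sketch below shows a tool guardrail that validates a hypothetical send_refund tool’s arguments before any external system is touched, together with a simple pre-deployment test of policy adherence. The tool name and threshold are illustrative assumptions only.

```python
# Minimal sketch of a design-stage tool guardrail. The send_refund tool
# and the refund limit are hypothetical examples, not requirements of
# the MGF for Agentic AI.

MAX_REFUND_SGD = 200  # hard limit enforced outside the model's reasoning

def send_refund(customer_id: str, amount_sgd: float) -> str:
    """Tool guardrail: validate arguments before the action touches
    external systems, regardless of what the agent 'intended'."""
    if amount_sgd <= 0 or amount_sgd > MAX_REFUND_SGD:
        raise ValueError(f"Refund of {amount_sgd} SGD outside permitted bounds")
    # ... call the payments API with least-privilege credentials ...
    return f"Refunded {amount_sgd} SGD to {customer_id}"

# Pre-deployment, the same guardrail can be exercised as part of testing
# for policy adherence and tool-use accuracy:
def test_refund_guardrail() -> None:
    assert send_refund("cust-42", 50.0).startswith("Refunded")
    try:
        send_refund("cust-42", 5000.0)
    except ValueError:
        pass  # over-limit refund correctly blocked
    else:
        raise AssertionError("Guardrail failed to block over-limit refund")

test_refund_guardrail()
```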
- Enable End-User Responsibility
Human accountability extends to the end users who use and rely on agents. Organisations should provide sufficient information to end users to promote trust and enable responsible use.
Organisations should consider:
- Transparency: Users should be informed of the agents’ capabilities (e.g. the scope of the agent’s access to the user’s data and the actions the agent can take) and the contact points to whom users can escalate issues if the agent malfunctions.
- Education: Users should be educated on the proper use and oversight of agents (e.g. training on an agent’s range of actions, common failure modes such as hallucinations, and usage policies for data). Organisations should also guard against the potential loss of tradecraft: basic operational knowledge could be eroded as agents take over more functions, so sufficient training should be provided to ensure that humans retain core skills.
Concluding Words
Agentic AI carries enormous potential for organisations looking to enhance their processes, particularly for business transformation. Among other things, it promises to automate complex processes, optimise supply chains and enhance customer service.
However, the autonomous nature of agentic AI amplifies the inherent risks of traditional and generative AI and introduces new operational hazards. Organisations looking to deploy agentic AI to keep pace with advancing technology should thus be aware of the risks set out in the MGF for Agentic AI, and should consider implementing the recommended measures for responsible and safe deployment.
Existing AI governance frameworks do provide practical clarity and risk management techniques. However, open questions remain from a legal risk management standpoint, such as how legal liability for the actions of autonomous agents should be apportioned amongst stakeholders in agent development and deployment. While the MGF for Agentic AI addresses accountability issues, legal questions remain on the extent to which such agents can bind their principals through actions taken autonomously. On this front, we note clients’ concerns as to what would, in law, be considered a reasonable level of autonomy for principals to grant to agents, and the degree of human intervention and oversight required over such agents in specific use cases. We expect developments in legal jurisprudence in the coming year as AI agents grow in prevalence and impact.
In the meantime, we remain focused on assisting our clients in drafting and deploying internal governance frameworks, policies and guidelines in line with the MGF for Agentic AI, and in drafting specific contractual clauses and negotiation strategies to manage and mitigate the legal risks of developing and deploying agentic AI solutions amongst various stakeholders.
For further queries, please feel free to contact our team set out on this page.
Disclaimer
Rajah & Tann Asia is a network of member firms with local legal practices in Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Vietnam. Our Asian network also includes our regional office in China as well as regional desks focused on Brunei, Japan and South Asia. Member firms are independently constituted and regulated in accordance with relevant local requirements.
The contents of this publication are owned by Rajah & Tann Asia together with each of its member firms and are subject to all relevant protection (including but not limited to copyright protection) under the laws of each of the countries where the member firm operates and, through international treaties, other countries. No part of this publication may be reproduced, licensed, sold, published, transmitted, modified, adapted, publicly displayed, broadcast (including storage in any medium by electronic means whether or not transiently for any purpose save as permitted herein) without the prior written permission of Rajah & Tann Asia or its respective member firms.
Please note also that whilst the information in this publication is correct to the best of our knowledge and belief at the time of writing, it is only intended to provide a general guide to the subject matter and should not be treated as legal advice or a substitute for specific professional advice for any particular course of action as such information may not suit your specific business and operational requirements. You should seek legal advice for your specific situation. In addition, the information in this publication does not create any relationship, whether legally binding or otherwise. Rajah & Tann Asia and its member firms do not accept, and fully disclaim, responsibility for any loss or damage which may result from accessing or relying on the information in this publication.