Agentic AI Red Teaming in Financial Services: Now A Pre-Deployment Requirement

How to Design a Risk-Based Agentic AI Adoption Strategy

A risk-based agentic AI adoption strategy classifies AI agents into risk tiers and applies stronger controls where the risk is highest. By comparison, blunter approaches have inherent flaws: permissive access with post-event monitoring sacrifices control, and full pre-approval constrains scalability. Instead, the risk-based model assigns low-risk agents a register-and-attest process, medium-risk agents a proportionate review, and high-risk agents full governance, making it productive, enforceable, and defensible at scale. To implement this, add four deliverables to your agentic roadmap: tier criteria, technical enforcement, shadow agent detection, and risk manager dashboards. If you are at the start of an agentic transformation, the Agentic AI Readiness Assessment evaluates your firm’s readiness across all the prerequisites for agentic AI – including those needed to implement this model – in 90 minutes.
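
To make the tiering concrete, here is a minimal sketch of how tier criteria and their required controls might be expressed in code. The criteria, tier names, and control labels are illustrative assumptions, not the article’s definitive framework.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # register-and-attest
    MEDIUM = "medium"  # proportionate review
    HIGH = "high"      # full governance


@dataclass
class AgentProfile:
    # Hypothetical tier criteria; a real framework would use the firm's own.
    handles_customer_data: bool
    can_commit_the_firm_financially: bool
    acts_without_human_approval: bool


def classify(agent: AgentProfile) -> RiskTier:
    """Assign a risk tier from simple, auditable criteria (illustrative only)."""
    if agent.can_commit_the_firm_financially:
        return RiskTier.HIGH
    if agent.handles_customer_data or agent.acts_without_human_approval:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Each tier maps to the approval path described above.
REQUIRED_CONTROLS = {
    RiskTier.LOW: ["register agent", "owner attestation"],
    RiskTier.MEDIUM: ["register agent", "owner attestation", "proportionate risk review"],
    RiskTier.HIGH: ["register agent", "owner attestation", "full governance sign-off",
                    "continuous monitoring"],
}
```

Keeping the criteria this explicit is what makes the model technically enforceable: an onboarding pipeline can run the same classification and block deployment until the tier’s controls are evidenced.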

New Academic Research Finds Behavioural Drift To Be An Agentic AI Compliance Matter

An April 2026 academic paper has confirmed what we advised IRM delegates in January: a low-risk use case does not mean a low-risk agent. The paper establishes that agentic AI compliance under EU law is not limited to the AI Act – it spans GDPR, DORA, NIS2, and sector-specific regulation simultaneously. Critically, behavioural drift is now a live legal obligation, not just a governance preference: firms must trace it, record it, and treat threshold changes as regulatory events. Three standards gaps remain unresolved. Our frameworks already address all three.

Anthropic Just Called It Too: Agentic AI Risk Is A New Category

A frontier lab has just told the US government what Agentic Risks has been saying since July 2025: agentic AI risk is a distinct category of harm that existing frameworks do not describe. For risk managers, compliance officers and CROs at regulated firms, Anthropic’s 9 March 2026 submission to NIST’s CAISI validates a discipline that standards bodies have yet to catch up with. This post walks through six ways the submission strengthens the approach Agentic Risks has taken, where we believe more is needed, and why we publish our IP and methodologies freely.

Agentic AI Governance for Regulated Firms

Agentic AI is already in scope of the EU AI Act despite not being named in it – a foundational challenge for agentic AI governance – and firms building agents in-house for EU operations will be treated as both provider and deployer, with high-risk systems due for compliance by 2 August 2026. Meeting those obligations is necessary but insufficient because agentic systems break four of the Act’s core assumptions, so an effective governance framework must extend beyond compliance to cover operational realities like agent identity, pre-execution boundaries, reasoning chain integrity, and liability across the value chain. For organisations governing agents already in production, our 32 agentic AI risk flags provide a fast, defensible way to surface agents operating at a higher risk level than may have been appreciated – on the principle that if you cannot disprove a flag, you have a risk.

Agentic AI Governance Framework: What It Is and What You Need

The Agentic AI Governance Framework is a structured guide to governing autonomous AI systems – what to keep from traditional AI governance, and what new controls you need. It’s essential reading for risk, compliance, and technology leaders whose organisations are deploying, or planning to deploy, agentic AI. It tells you exactly which foundations still hold, which new components you need to add, and how to navigate the areas where the debates remain inconclusive. With it, you can build a governance model that is defensible to regulators, auditors, and boards.

Agentic AI Readiness Assessment

Firms whose adoption strategies succeed are those whose roadmaps are achievable from their current state of readiness. The Agentic AI Readiness Assessment ensures your transformation is evidence-based, achievable, and customised to your situation. It does this by establishing whether each prerequisite is in place (strategic, technical and operational, and organisational), its maturity level, and the extent of work needed to support your target risk tier. The assessment is a triage-style, 90-minute session, and the output is a complete and systematic view – strengths, weaknesses, and prioritised next steps – delivered within 48 hours, ready for you to discuss with your colleagues.

Agentic Workflow Risk Assessment: How To Map Risks And Controls

Unstructured agent-building is a costly and risky choice, increasing the chance of overlooked risks, security incidents, and scrambled remediation when external stakeholders ask questions. This is because agentic workflows create new risks that you will need to control and monitor in novel ways. In response to this situation, I summarise the specific ways risk management needs to evolve for the agentic workflow risk assessment: the novel aspects of the agentic workflow design process, the pre-deployment agentic risk assessment, and how to ensure effective agentic KRIs. Adopt these techniques to give structure to your agentic transformation, prevent risk, and ensure trustworthy monitoring.

Operationalising Your Agentic KRIs for Agentic Workflow Monitoring

Across most enterprise platforms, an agentic workflow comprises three layers – model, orchestration, and application – and each layer can see risk signals that the others cannot. For your agentic workflow monitoring to be viable, a KRI should reside where its risk signal lives; KRIs placed elsewhere invite noise, blind spots, or a false sense of security. We mapped a sample of KRIs across the three layers and found that half reside primarily at the application layer, while the model layer was the primary source for only one. Monitoring built on functional layers is therefore less likely to have coverage gaps than a framework tied to a single platform. We conclude by advising firms to understand their risk requirements before selecting a platform – and embed their risk controls into their workflow designs.
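
As a minimal sketch of the layer-mapping idea, the snippet below registers each KRI against the layer where its signal originates, so a layer with few or no KRIs stands out as a potential blind spot. The KRI names are invented for illustration, not taken from the article’s sample.

```python
from collections import defaultdict

# Hypothetical KRIs mapped to the layer where their risk signal lives.
KRI_LAYER = {
    "hallucinated tool-call rate": "model",
    "task retry loops": "orchestration",
    "handoff failures between agents": "orchestration",
    "unapproved system access attempts": "application",
    "spend per workflow vs limit": "application",
}


def coverage_by_layer(kri_layer: dict) -> dict:
    """Group KRIs by layer so thin or empty layers are visible at a glance."""
    layers = defaultdict(list)
    for kri, layer in kri_layer.items():
        layers[layer].append(kri)
    for expected in ("model", "orchestration", "application"):
        layers.setdefault(expected, [])
    return dict(layers)


print(coverage_by_layer(KRI_LAYER))
```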

Agentic Key Risk Indicators

Agentic key risk indicators are the metrics organisations use to monitor the behaviour, risk profile, and operational performance of autonomous AI agents. Agentic systems do not operate in a steady state. As they learn and adapt, small behavioural changes can accumulate and alter their risk profile. Effective governance therefore requires organisations to monitor the direction and speed of this AI behavioural drift before it moves beyond risk appetite. Agentic key risk indicators, sometimes described more generally as AI key risk indicators, provide the operational evidence needed to monitor AI agent risk and ensure regulatory compliance. This article sets out 12 practical principles for designing effective agentic KRIs and illustrates them through 20 worked examples across 5 common agentic risks.
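
One way to picture monitoring the direction and speed of drift is a rolling change calculation compared against an appetite threshold, sketched below. The metric, window, and thresholds are illustrative assumptions, not worked examples from the article.

```python
from statistics import mean

# Illustrative weekly values of a behavioural metric, e.g. the share of an
# agent's actions that fall outside its originally approved scope.
out_of_scope_share = [0.010, 0.011, 0.013, 0.016, 0.021, 0.028]

APPETITE_LIMIT = 0.02      # assumed appetite for the metric level itself
MAX_WEEKLY_DRIFT = 0.004   # assumed tolerance for how fast it may move


def drift_kri(series: list) -> dict:
    """Report the level, the speed of drift, and whether either breaches appetite."""
    weekly_changes = [b - a for a, b in zip(series, series[1:])]
    speed = mean(weekly_changes[-3:])  # average change over the last three weeks
    return {
        "current_level": series[-1],
        "drift_speed": speed,
        "level_breach": series[-1] > APPETITE_LIMIT,
        "speed_breach": speed > MAX_WEEKLY_DRIFT,
    }


print(drift_kri(out_of_scope_share))
```

The point of the speed check is that an agent can breach the drift tolerance while still inside the level limit, giving the risk team time to intervene before appetite is exceeded.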

How Agentic Workflows Differ From Traditional Workflows

Agentic AI marks a shift from tools that produce outputs to systems that achieve outcomes. As a result, agentic workflows differ from traditional ones. In brief, they can plan, decide, and execute multi-step actions autonomously to achieve your goal, remembering and learning from their experiences. Because of this, their behaviour can evolve, creating both opportunities and risks. Therefore, understanding how agentic workflows differ becomes vital to effective risk identification. So, this article examines 10 dimensions that vary between agentic and traditional workflows – from decision authority and system access to failure modes and accountability – and how risk managers should respond.

The Pre-Deployment Agentic Risk Assessment

Evidence is growing that a pre-deployment agentic risk assessment reduces incidents, rework, and build cycles by embedding risk controls into workflow design – creating a stronger organisation that is ready for future deployments. This article outlines the 7-step process from agentic risk identification to an approved risk report that integrates into your enterprise framework. To finish, we note that, at the time of writing, we are incorporating this functionality into Gerido© – Agentic Risks’ in-house risk management tool. Subscribe to our newsletter if you would like to receive further updates on this topic.

Agentic Risk Appetite and Adoption Strategy

This article explores the nature of autonomy and the sources of its risks before outlining a five-step process to integrating agentic risk appetite and adoption strategy into your roadmap. It includes the support materials for the webinar we provide on this topic. Drawing on established risk management disciplines and centuries of delegating autonomy to non-humans, the process will ensure you prioritise your use cases, confirm your adoption strategy, identify your prerequisites, assess readiness, and construct an achievable implementation roadmap. Autonomous AI agents fundamentally reshape risk management by shifting human control from execution to the design and oversight phases of a workflow. From the board to the operational workflow, firms that adopt a risk-based agentic AI adoption strategy will find it easier to scale the benefits of autonomous AI, with fewer surprises and stronger regulatory defensibility.

Agentic AI Risk Appetite

Autonomous AI introduces a new class of risk – agentic risk – because it delegates execution to non-humans that interpret objectives, constraints, and context differently from people.

In agentic workflows, ambiguity itself becomes a risk: unless risk appetite is expressed in behavioural and quantitative terms, agents cannot reliably understand or enforce organisational expectations.

Risk managers must therefore integrate agentic risk into existing risk appetite processes by defining an agentic AI risk appetite statement, applying it consistently at the workflow level, and instituting key risk indicators for agentic risks.

Organisations that do this will enable safe, scalable agentic adoption; those that do not will face fragmented, compounding, and unmanaged autonomous behaviour risks.

The Fundamentals of Agentic AI Risk Management

AI agents represent a paradigm shift in automation – digital problem-solvers that independently plan, execute, and learn from multi-step workflows at machine speed. While autonomous AI offers unprecedented productivity and scalability, its independent nature introduces unfamiliar AI agent risks that traditional risk management cannot adequately address. This article summarises our presentation to 180 members of the Institute of Risk Management. Key points include why controls calibrated to autonomy levels should be non-negotiable, why agentic workflows break a key assumption of traditional risk management, and how the Enterprise-Wide Agentic AI Risk Control Framework can help firms navigate their evolution to the era of agentic AI risk management safely.

Risk Flags for an Agentic AI Risk Assessment

Agentic workflows are not “just another AI model” – they are operational systems that can act, spend, and escalate at speed. That means risk is shaped as much by design choices (ownership, boundaries, monitoring, and stop authority) as by the task the agent performs. If you, as a risk manager, are asked to perform an agentic AI risk assessment, you will need a fast, defensible way to determine whether the project is sufficiently controllable to go live. This article introduces you to 32 systematic, verifiable agentic AI risk flags you can test with evidence, so you can quickly translate findings into clear, proportionate risk treatment plans.
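
The principle that an undisproved flag counts as a risk translates naturally into an evidence-driven checklist, sketched below. The flags shown are invented stand-ins, not the article’s 32 flags.

```python
# Hypothetical risk flags. The value records whether evidence disproves the flag;
# None means no evidence either way, which still counts as an open risk.
flag_disproved = {
    "agent lacks a named owner": True,                     # disproved: owner documented
    "no pre-execution spend limit": None,                  # no evidence yet
    "stop authority untested in the last quarter": False,  # evidence confirms the gap
    "actions are not logged immutably": True,
}


def open_risks(flags: dict) -> list:
    """A flag is an open risk unless evidence positively disproves it."""
    return [flag for flag, disproved in flags.items() if disproved is not True]


print(open_risks(flag_disproved))
# -> ['no pre-execution spend limit', 'stop authority untested in the last quarter']
```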

A 3-Phase Agentic Workflow Design Process

An agentic workflow is a system where AI agents autonomously plan, decide, and act across interconnected tasks, with explicit controls and human oversight embedded at each stage. The success of these initiatives rests not just on technology, but on organisations building the capability to govern autonomy before they introduce it. This article sets out a practical, audit-defensible agentic workflow design process that helps firms decide whether an AI agent should exist, what authority it may safely hold, and how that authority is constrained, tested, and monitored over time. The process is structured into three phases – foundation, build, and deployment – to move firms beyond demos to a repeatable, governable, and scalable agentic workflow. The outcome is a step-by-step guide on how to design an agentic workflow using a risk-led, auditable agentic workflow design process.

From Traditional to Agentic Risk Management

This article is for experienced risk professionals and explains why agentic risk management is now essential for retaining control of agentic AI workflows. Traditional risk management assumes humans design, execute, and oversee systems end-to-end, but agentic AI breaks that assumption by delegating execution to autonomous systems. This development fundamentally changes how risk emerges, how controls fail, and where accountability must sit. To mitigate risk in an agentic workflow, therefore, risk management must extend its role beyond post-hoc monitoring into the design phase, monitor continuously rather than periodically, and be ready to address new risks arising from emergent behaviours.

AI Agent Autonomy Policy and Adoption Strategy

An AI agent autonomy policy is essential because autonomy is not binary and must be calibrated for each use case. Organisations adopting agentic workflows should decide their risk appetite for different autonomy levels early, as delaying this decision leaves agents with either too much or too little autonomy, undermining your adoption of agentic AI. The article highlights key risks, trade-offs, and oversight needs, especially in regulated sectors. It outlines what a strong policy must include, from objectives and agentic risk appetite to governance, approval, and autonomy monitoring. It also provides a free agentic risk appetite statement and adoption strategy template.
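
To illustrate how calibrated, non-binary autonomy might be enforced per use case, here is a minimal sketch; the level names and ceilings are assumptions, not the policy template’s wording.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    SUGGEST_ONLY = 1       # agent drafts, a human executes
    ACT_WITH_APPROVAL = 2  # agent executes after human sign-off
    ACT_AND_REPORT = 3     # agent executes, a human reviews afterwards
    FULLY_AUTONOMOUS = 4   # no routine human involvement


# Assumed risk-appetite ceilings, agreed per use case before deployment.
AUTONOMY_CEILING = {
    "draft customer emails": AutonomyLevel.ACT_AND_REPORT,
    "approve payments": AutonomyLevel.ACT_WITH_APPROVAL,
}


def permitted(use_case: str, requested: AutonomyLevel) -> bool:
    """Reject any request above the pre-agreed autonomy ceiling for the use case."""
    ceiling = AUTONOMY_CEILING.get(use_case, AutonomyLevel.SUGGEST_ONLY)
    return requested <= ceiling


assert not permitted("approve payments", AutonomyLevel.FULLY_AUTONOMOUS)
```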

AI Agent Ethics: How to Delegate Autonomy 

AI agent ethics cannot rely on human-style moral tests because AI agents feel no shame, consequence, or responsibility, so ethical protection must come from external controls. Society already delegates autonomy to non-humans such as working dogs, but only with strict training, clear boundaries, accountability, and controlled contexts – showing that autonomy should be earned, limited, and supervised. To deploy AI agents ethically, organisations should grant autonomy gradually and with robust controls that define purpose, restrict risk, ensure supervision, and maintain clear accountability.
