AI in AML: why you need a Jeeves, not an autopilot

AI | January 29, 2026 | Tom Flowerdew
Summary: AI-generated AML rules sound like the perfect shortcut: fewer tickets, faster tuning, a neat “AI-powered compliance” story. But autogeneration can create a responsibility gap – analysts approve rules they can’t truly explain or debug. AML doesn’t need an autopilot. It needs a Jeeves – AI that prepares, nudges, and attributes, strengthening human judgement and audit confidence.

Across fintechs and fast-growing banks, AI-generated AML rules have become an increasingly popular idea. On paper, they solve everything at once: analysts spend less time in rule engines, data teams get fewer tickets, and leadership gets a compelling “AI-powered compliance” slide.

The seduction (and problem) of autogenerated rules

To see why this matters, start with a simple paradox facing AML teams:

  • Analysts must own the rules – regulators will ask them to defend those rules line by line.
  • Most can’t safely write complex behavioural logic from scratch – they know what laundering looks like, but not how to turn that into query plans and thresholds.

So, the tempting idea is: let AI write the rules.

The analyst types a description – “high-risk structuring around payday”, “activity inconsistent with occupation” – and a model returns a proposed rule. The analyst tweaks a few words, clicks “approve”, and everyone moves on.

That is where the trouble starts.

You’ve created a responsibility gap:

  • The analyst “owns” the rule but doesn’t really understand its mechanics.
  • They can’t debug under-performance without going back to the model.
  • Every change becomes another prompt, not another learning moment.

The person with their name on the line becomes a reviewer of black-box output. They know it – and a regulator will see it very quickly. The issue isn’t how much AI helps – it’s where it sits in the workflow.

Enter Jeeves: how AI should orchestrate, not execute

Ethan Mollick has popularised two modes of working with AI:

  • Centaur – you do some tasks, AI does other tasks, clearly separated.
  • Cyborg – human and AI are tightly intertwined on the same tasks.

In regulated environments, full Cyborg mode can be risky: it blurs where AI ends and human judgement begins. Pure Centaur mode, on the other hand, often makes the AI too visible – the system looks like it’s doing the “real” work, and the human appears to be there to nod along.

What AML needs is closer to the Jeeves Principle.

If you’ve read P. G. Wodehouse, you’ll know Jeeves as the hyper-competent valet who arranges everything so his employer, Bertie Wooster, can move through the day smoothly. Jeeves lays out the clothes, lines up the conversations, anticipates what’s needed next.

Your AI should behave like Jeeves – doing the quiet, heavy lifting so the analyst can confidently do their job.

Principle 1 – Invisible preparation

Jeeves doesn’t explain how he ironed the suit – he simply has it ready.

For AML:

  • The AI continuously runs exploratory data analysis in the background.
  • It maps distributions, typologies, and historical SAR patterns ahead of time.
  • When an analyst says “I need a structuring rule around cash deposits”, the system already knows which features, time windows, and behaviours matter.

What the analyst sees is not a wall of model output, but something like:

“Here are three common structuring patterns we see in your data.
Which one aligns best with your risk policy?”

The work is there – it’s just surfaced in a way that feels explainable, not opaque.
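
To make that concrete, here is a minimal sketch of the kind of background preparation involved, using Python and pandas. The column names (txn_type, segment, amount) are illustrative assumptions, not a description of Fortify's product:

```python
import pandas as pd

def precompute_deposit_profiles(txns: pd.DataFrame) -> pd.DataFrame:
    """Summarise cash-deposit behaviour per customer segment ahead of time,
    so the numbers already exist when an analyst asks for a structuring rule.
    Assumes hypothetical columns: txn_type, segment, amount."""
    deposits = txns[txns["txn_type"] == "cash_deposit"]
    return (
        deposits.groupby("segment")["amount"]
        .agg(
            deposit_count="count",
            median_amount="median",
            p95_amount=lambda s: s.quantile(0.95),
        )
        .reset_index()
    )
```

A scheduled job like this, run quietly overnight, is all it takes for the system to already "know" which features and thresholds matter when the analyst starts typing.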

Principle 2 – Gentle redirection

Jeeves doesn’t overrule. He nudges.

In AML tools, that looks like this:

  • The analyst sets a threshold at £100 because they’re thinking about sensitivity.
  • The AI quietly runs the numbers and surfaces impact:
    • 10,000 alerts a day
    • 0 of 47 historical SARs captured
  • The UI then says:
    “At this threshold, you’ll generate ~10,000 alerts/day and capture 0 of 47 historical SARs. Proceed anyway?”

The analyst is still in control. They can still deploy the rule, but they do so with a clear view of the impact.

Over time, this sort of feedback teaches them how different parameters affect precision, recall, and workload – without taking agency away.
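
Under the hood, that nudge is essentially a backtest of the proposed threshold against historical activity and known SAR cases. A minimal sketch, assuming hypothetical transaction and SAR tables keyed by customer_id:

```python
import pandas as pd

def threshold_impact(txns: pd.DataFrame, sars: pd.DataFrame, threshold: float) -> dict:
    """Estimate the operational impact of a proposed amount threshold:
    projected alerts per day, and how many known SAR cases it would have caught.
    Assumes txns has hypothetical columns customer_id, amount, date;
    sars lists the customer_id of historical SAR subjects."""
    flagged = txns[txns["amount"] >= threshold]
    days_observed = max(txns["date"].nunique(), 1)
    caught = sars["customer_id"].isin(flagged["customer_id"]).sum()
    return {
        "alerts_per_day": round(len(flagged) / days_observed),
        "sars_caught": int(caught),
        "sars_total": len(sars),
    }
```

Figures like these are exactly what drives the "~10,000 alerts/day and 0 of 47 historical SARs" prompt above.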

Principle 3 – User attribution

Jeeves never takes the credit. When an aunt congratulates him, he says: “I’m sure Master Bertie handled it admirably.”

Your AI needs the same humility:

  • Dashboards say “Your rules caught 12 SARs this month”, not “AI-generated rule caught 12 SARs.”
  • Audit logs show: “Rule authored by [Analyst Name]” with clear, human-readable rationale.
  • The phrase “AI decided” does not appear anywhere near a regulator-facing screen.

Ownership is always, deliberately, pinned to the human decision-maker.
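
One way to bake that in is to make the human author a required part of the audit record itself. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RuleAuditRecord:
    """Audit entry that pins ownership to a named analyst. AI output is
    recorded only as supporting analysis, never as the rule's author."""
    rule_id: str
    authored_by: str                  # always a named human analyst
    rationale: str                    # the analyst's own, human-readable reasoning
    supporting_analysis: list[str] = field(default_factory=list)  # refs to background EDA
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

If the record cannot be created without a named author and a rationale in plain language, "AI decided" simply has nowhere to live.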

Growing capability instead of dependency

This isn’t just about keeping regulators happy and dashboards tidy. It’s about economics.

AI autogeneration looks cheap now and expensive later – analysts stay dependent on the model for every tweak, and every new typology becomes another prompt-engineering exercise. AI scaffolding looks expensive now and cheap later – analysts build real intuition. After ten scaffolded rules, they can design the eleventh with minimal assistance. After fifty, they’re mostly using the AI for analysis, not authorship.

In other words: you’re investing in capability, not just velocity.

How a scaffolded rule actually works

Take a classic challenge: activity inconsistent with stated occupation.

In an autogenerated world, the analyst writes a description, the AI returns a proposed rule, and the analyst reviews it briefly before clicking “approve”.

In a scaffolded world, the flow looks different.

  1. The AI does its homework
    • It analyses transaction patterns for customers grouped by occupation.
    • It builds salary and spend distributions by segment.
    • It identifies relevant fields and time windows.
  2. The analyst chooses the frame
    They don’t get a finished rule. They get options:
    “Most effective approaches in your data:
    – Compare annual outgoing to upper salary quartile
    – Compare rolling 90-day flow to median for this occupation
    – Flag sudden jumps in income relative to historical baseline
    Which matches your risk model?”
  3. The analyst sets thresholds with real numbers in front of them
    The system then shows the trade-offs (sketched in code after this walkthrough):
    • At 2× the upper quartile: 47 alerts/month, 89% of historical SARs captured
    • At 3×: 23 alerts/month, 71% of historical SARs
    • At 5×: 8 alerts/month, 42% of historical SARs
    The analyst picks 3× because it fits their team's capacity and risk appetite.
  4. Ownership is crystal clear
    The log reads:
    “Rule created by [Analyst Name]. Approach: compare annual outgoing to salary quartiles by occupation. Threshold: 3× upper quartile.”
    In an audit, the analyst doesn’t say “the AI suggested it”. They say: “We analysed transaction behaviour by occupation and set a 3× salary threshold to balance coverage against alert volume. Here’s the historical performance.”
    That’s a very different conversation.

What this means for AI and product leaders

If you’re responsible for AI in a regulated fintech or bank, the question isn’t: “How impressive is that AI?”

It’s: “Can my analysts still explain – in their own words – why this rule exists and why this threshold was chosen?”

Ask yourself and your vendors:

  • Does this workflow make analysts more or less dependent on the tool over time?
  • In a serious audit, who is actually able to defend the decision – a human, or a slide deck about your AI strategy?
  • Are you optimising for impressive demos or for durable expertise in the team?

At Fortify, this is the standard we hold ourselves to: AI that behaves like Jeeves – quietly brilliant, fiercely loyal, and always making the human look like they know exactly what they’re doing.

Because in AML and financial crime, that’s not just good UX. It’s the difference between “we shipped a cool feature” and “we can sit in front of a regulator with confidence.”
