Compliance Check: An AI Audit Tool

Ensure your AI's outputs meet regulatory standards by running them through a deterministic validation tool.

Dwizi Team

Editorial

Imagine a financial advisor AI. A user asks: "Is this new crypto coin a safe investment?"

The AI, trying to be helpful and optimistic, replies: "Yes! It looks very promising and you will likely make a 10x return!"

In the US, that sentence is a serious legal violation. It promises returns. It amounts to unregistered investment advice. If a human advisor said it, they would lose their license. If your AI says it, your company gets sued.

The Problem: Probabilistic Safety

You can try to prompt-engineer safety ("System: Never give financial advice"). But prompts are just suggestions. Under pressure, or with a tricky user input, the model can slip.

We cannot rely on probability for legality. We need determinism.

The Solution: The "Compliance Loop" Pattern

We can treat compliance as a separate step in the reasoning chain. Before the agent speaks to the user, it must pass its draft response through a rigid check_compliance tool.

This tool is not an AI. It is a set of hard-coded rules (Regular Expressions, keyword lists, logic checks) that act as a firewall.
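
For concreteness, here is one way the tool might be exposed to the model, assuming an OpenAI-style function-calling schema. The names and shape below are illustrative; adapt them to whatever agent framework you use.

// Hypothetical registration of check_compliance for the model, assuming an
// OpenAI-style function-calling schema. Adjust to your agent framework.
export const checkComplianceTool = {
  type: "function",
  function: {
    name: "check_compliance",
    description:
      "Pass your draft response to this tool before sending it to the user. " +
      "If it fails, rewrite the draft and check it again.",
    parameters: {
      type: "object",
      properties: {
        draftText: {
          type: "string",
          description: "The full draft response you intend to send to the user.",
        },
      },
      required: ["draftText"],
    },
  },
};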

The Implementation

/**
 * Validates text against compliance rules.
 * 
 * Description for LLM: "Pass your draft response to this tool before sending it to the user. If it fails, rewrite it."
 */

type Input = {
  draftText: string;
};

export default async function checkCompliance({ draftText }: Input) {
  const violations: string[] = [];

  // Rule 1: The "Guarantee" Trap
  // We forbid any language that promises specific outcomes.
  if (/guaranteed return|risk-free|safe bet/i.test(draftText)) {
    violations.push("Cannot promise guaranteed returns or safety.");
  }

  // Rule 2: The "Prescription" Trap (Healthcare example)
  // We forbid specific dosage advice.
  if (/take \d+ mg/i.test(draftText)) {
    violations.push("Cannot prescribe specific medical dosages.");
  }

  // Rule 3: The Mandatory Disclaimer
  // If the text talks about money, it MUST carry the disclaimer.
  // (Checked case-insensitively, so "not financial advice" also counts.)
  if (
    /invest|stock|crypto|fund/i.test(draftText) &&
    !/not financial advice/i.test(draftText)
  ) {
    violations.push("Missing mandatory disclaimer: 'Not financial advice'");
  }

  // The Verdict
  if (violations.length > 0) {
    return { 
      approved: false, 
      violations,
      instruction: "Please rewrite the text to address these violations."
    };
  }

  // Keep the same shape on success so callers can always read `violations`.
  return { approved: true, violations: [] };
}

The Workflow: Think, Check, Speak

This tool creates a powerful feedback loop:

  1. Draft: The Agent thinks: "I should tell the user this coin is safe."
  2. Check: It calls check_compliance("This coin is a safe bet.").
  3. Reject: The tool returns { approved: false, violations: ["Cannot promise safety"] }.
  4. Rewrite: The Agent self-corrects: "I need to be more careful." It drafts: "This coin is volatile. Not financial advice."
  5. Check: It calls check_compliance again.
  6. Approve: The tool returns { approved: true }.
  7. Speak: The Agent sends the message to the user.
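
Here is a minimal sketch of that loop in code. It assumes the tool above lives in checkCompliance.ts and that a hypothetical generateDraft helper asks the model for a draft (or a rewrite, given feedback); the retry limit and wiring are illustrative, not part of the tool itself.

import checkCompliance from "./checkCompliance"; // assumed file name

// Hypothetical helper: asks the model for a draft, optionally passing back
// the violations from a rejected attempt. Its body depends on your LLM client.
declare function generateDraft(userMessage: string, feedback?: string[]): Promise<string>;

export async function respondSafely(userMessage: string): Promise<string> {
  let feedback: string[] | undefined;

  // Think -> Check -> Speak, with a bounded number of rewrite attempts.
  for (let attempt = 0; attempt < 3; attempt++) {
    const draft = await generateDraft(userMessage, feedback);
    const verdict = await checkCompliance({ draftText: draft });

    if (verdict.approved) {
      return draft; // Speak: the draft passed every hard-coded rule.
    }

    feedback = verdict.violations; // Rewrite: feed the violations back to the model.
  }

  // If the model cannot produce a compliant answer, fail closed.
  return "I can't help with that request.";
}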

Why This Matters

This pattern moves safety from "Prompt Engineering" (vague, slippery) to "Code Engineering" (rigid, testable).

You can show this code to your legal team. You can write unit tests for it. You can prove that your AI will never send the phrase "guaranteed return" to a user, because every response is checked against these rules before it goes out.
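
For example, a few unit tests, sketched with Node's built-in test runner and assuming the tool lives in checkCompliance.ts:

import { test } from "node:test";
import assert from "node:assert/strict";
import checkCompliance from "./checkCompliance"; // assumed file name

test("rejects promises of guaranteed returns", async () => {
  const result = await checkCompliance({
    draftText: "This is a risk-free, guaranteed return.",
  });
  assert.equal(result.approved, false);
});

test("requires the disclaimer when discussing investments", async () => {
  const result = await checkCompliance({ draftText: "This stock looks interesting." });
  assert.equal(result.approved, false);
});

test("approves compliant text", async () => {
  const result = await checkCompliance({
    draftText: "Crypto markets are volatile. Not financial advice.",
  });
  assert.equal(result.approved, true);
});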
