Automation, manual review triggers, and testing

Explain how automated supervisory controls should be designed, where human review still matters, and when testing or redesign is required because automated outputs are no longer reliable.

Automation, manual review triggers, and testing appears in the official CIRO Supervisor Exam syllabus as part of Supervisory structure: Investment Dealer responsibilities. Questions here usually test whether automation is being used as a disciplined supervisory tool or as an excuse to avoid human judgment.

Automation Is A Tool, Not A Safe Harbour

The exam often describes a rule engine, dashboard, surveillance alert, or automated workflow and then asks whether supervision was adequate. The right answer is rarely “yes, because the system was automated.” The real question is whether the automation is built around the right trigger logic, whether exceptions are reviewed by the right humans, and whether the firm tests the tool when real-world activity changes.

What Automation Can And Cannot Do Well

| Automation use | Usually appropriate | Human judgment still needed |
|---|---|---|
| threshold or attribute screening | yes | someone must decide whether the threshold still reflects real risk |
| exception generation | yes | someone must determine whether the exception is real, urgent, or stale |
| account-feature checks | yes | someone must assess context when multiple risk factors interact |
| suitability, disclosure, or conduct judgment | partially | the final judgment often depends on context, contradictions, and behavioural signals the tool may not interpret well |
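To make the division of labour concrete, here is a minimal sketch of the first two rows: automated attribute screening that generates exceptions but leaves the judgment call to a reviewer. All field names and the threshold value are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: automated screening generates exceptions;
# a human reviewer still decides what each exception means.
LEVERAGE_THRESHOLD = 2.0  # illustrative value; someone must confirm it still reflects real risk

def screen_account(account: dict) -> list[str]:
    """Return a list of exception reasons; an empty list means no exception raised."""
    reasons = []
    if account.get("leverage_ratio", 0) > LEVERAGE_THRESHOLD:
        reasons.append("leverage above threshold")
    if account.get("uses_derivatives") and not account.get("derivatives_approved"):
        reasons.append("derivatives activity without approval")
    return reasons

account = {"leverage_ratio": 2.5, "uses_derivatives": True, "derivatives_approved": False}
print(screen_account(account))  # both reasons fire; a reviewer decides whether they matter
```

The tool only flags; whether the exception is real, urgent, or stale remains a supervisory judgment.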

Manual Review Triggers Matter

    flowchart TD
        A["Automated review or surveillance rule runs"] --> B{"Exception raised?"}
        B -- No --> C["Retain evidence and continue monitoring"]
        B -- Yes --> D["Route to manual review"]
        D --> E{"Clear, low-risk exception with known cause?"}
        E -- Yes --> F["Document resolution and close"]
        E -- No --> G["Escalate, investigate, or redesign the rule"]
        G --> H["Retest the control and confirm the output is reliable"]

The stronger answer usually identifies that some fact patterns should force manual attention even when automation exists. Typical triggers include:

  • non-individual or authority-complex accounts
  • insider or employee relationships
  • derivatives or leveraged features
  • PEP or heightened AML risk
  • stale or contradictory account information
  • recurring false positives or recurring missed exceptions
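One way to picture these triggers is as hard overrides: certain account attributes force manual review even when the automated rule is silent. The flag names below are hypothetical stand-ins for the triggers listed above.

```python
# Hypothetical sketch: hard triggers that force manual review
# regardless of whether the automated rule raised an exception.
MANUAL_REVIEW_TRIGGERS = {
    "is_non_individual",               # non-individual or authority-complex account
    "has_insider_relationship",        # insider or employee relationship
    "has_derivatives_or_leverage",     # derivatives or leveraged features
    "is_pep_or_high_aml_risk",         # PEP or heightened AML risk
    "has_stale_or_contradictory_info", # stale or contradictory account information
}

def needs_manual_review(account_flags: set[str], automated_exception: bool) -> bool:
    """Route to a human if automation flagged the file OR any hard trigger is present."""
    return automated_exception or bool(account_flags & MANUAL_REVIEW_TRIGGERS)

# A PEP account goes to manual review even though the automated rule was silent.
print(needs_manual_review({"is_pep_or_high_aml_risk"}, automated_exception=False))  # True
```

The key design choice is the OR: the triggers do not depend on the rule engine agreeing that something looks wrong.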

Testing And Redesign Are Part Of The Control

A control is not finished once it is coded. Supervisors should be able to explain:

  • why the trigger logic was chosen
  • how the firm knows the rule still catches the right risks
  • what happened when business lines, products, or client patterns changed
  • whether false positives or false negatives led to redesign
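The last two points can be sketched as a back-test: run the rule over a labelled sample of past cases and flag it for redesign when error rates exceed tolerance. The tolerance values and data shape here are hypothetical, not prescribed by CIRO.

```python
# Hypothetical sketch: back-test a rule against labelled historical cases
# and flag it for redesign when false-positive or false-negative rates
# exceed illustrative tolerances.
def evaluate_rule(rule, labeled_cases, max_fp_rate=0.3, max_fn_rate=0.05):
    """labeled_cases: list of (case, truly_risky) pairs from past reviews."""
    fp = fn = flagged = risky = 0
    for case, truly_risky in labeled_cases:
        hit = bool(rule(case))
        if hit:
            flagged += 1
            if not truly_risky:
                fp += 1          # noise: flagged but not actually risky
        if truly_risky:
            risky += 1
            if not hit:
                fn += 1          # missed real risk
    fp_rate = fp / flagged if flagged else 0.0
    fn_rate = fn / risky if risky else 0.0
    return {"fp_rate": fp_rate, "fn_rate": fn_rate,
            "needs_redesign": fp_rate > max_fp_rate or fn_rate > max_fn_rate}

sample = [({"leverage": 3}, True), ({"leverage": 3}, False), ({"leverage": 1}, True)]
result = evaluate_rule(lambda case: case["leverage"] > 2, sample)
print(result["needs_redesign"])  # True: half the flags are noise and half the real risk is missed
```

Note the asymmetric tolerances: a missed high-risk case (false negative) is treated as far less acceptable than reviewer noise (false positive), which matches the supervisory emphasis in the sample question below.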

The exam often rewards the answer that demands recalibration or redesign when the system is no longer reliable, not the answer that keeps trusting the output because “the system approved it.”

Learning Objectives

  • Explain the role of automation in supervision and the Supervisor’s responsibility for automated processes.
  • Distinguish tasks and activities that can be automated from those requiring human supervisory judgment.
  • Recognize manual-review triggers in automated processes, including age, non-individual status, trading authorization, insider status, derivative experience, or PEP status.
  • Determine when automation testing, auditing, or redesign is needed because the process is not functioning as intended.

Exam Angle

The stronger answer usually asks whether the automated control is still producing decision-useful output. If the tool is missing important cases or overwhelming reviewers with noise, the correct supervisory response is often to investigate and redesign the process rather than to keep relying on it.

Sample Exam Question

An automated review tool clears almost every file with no manual follow-up, but later testing shows that high-risk exceptions involving complex authority structures were not being routed for review. What is the strongest conclusion?

The better answer is that automation design and testing were inadequate. The issue is not just one missed file; it is that the trigger logic failed to reflect real supervisory risk and should be reviewed and recalibrated.

Key Takeaways

  • Automation helps supervision, but it does not replace supervisory judgment.
  • Manual-review triggers define where human intervention still matters.
  • Testing, exception analysis, and redesign are part of the control, not an optional extra.
Revised on Thursday, April 23, 2026