Integrating AI into Patient Safety Reporting Systems
Patient safety reporting systems weren’t designed with AI in mind. Most systems capture incidents involving medications, procedures, falls, and clinical errors—but lack clear pathways for reporting AI-related safety concerns.
As AI deployment expands, this gap needs addressing. Here’s how I think about adapting patient safety reporting for the AI era.
Why AI-Specific Reporting Matters
AI creates new failure modes that traditional safety reporting might miss:
Algorithm errors. AI makes incorrect recommendations that influence clinical decisions. These aren’t medication errors or procedural errors in the traditional sense.
System interaction failures. AI works correctly, but integrates poorly with clinical workflows, leading to missed or delayed action on AI recommendations.
User trust miscalibration. Clinicians over-trust AI (acting on incorrect recommendations without adequate scrutiny) or under-trust AI (ignoring correct recommendations).
Drift and degradation. AI performance deteriorates over time, creating gradual safety risk that doesn’t appear as discrete incidents.
If your reporting system can’t capture these failure modes, you can’t learn from them.
Adapting Existing Systems
You don’t necessarily need a new reporting system. Most established systems (Riskman, VHIMS, etc.) can be adapted:
Add AI as an incident category. Create incident types specific to AI, allowing categorisation and analysis of AI-related events.
Include AI in contributing factors. When analysing any incident, prompt reporters to consider whether AI was a contributing factor—even if AI wasn’t the primary cause.
Create AI-specific fields. Capture information relevant to AI incidents (a rough sketch of how these fields might be structured follows this list):
- Which AI system was involved?
- What was the AI recommendation?
- Was the AI recommendation followed?
- Was the AI performing as expected?
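As one way of picturing this, the extra fields could sit as a small structured record attached to the existing incident form. The sketch below is illustrative only: the field names, the enumerated values, and the class itself are my assumptions, not a standard schema, and would need to be mapped onto whatever your reporting system (Riskman, VHIMS, etc.) actually supports.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class RecommendationFollowed(Enum):
        YES = "yes"
        NO = "no"
        PARTIALLY = "partially"
        UNKNOWN = "unknown"

    @dataclass
    class AIIncidentDetails:
        """AI-specific fields attached to a standard incident report (hypothetical schema)."""
        ai_system_name: str                              # which AI system was involved
        ai_recommendation: str                           # what the AI recommended, as the reporter understood it
        recommendation_followed: RecommendationFollowed  # was the recommendation acted on?
        performing_as_expected: Optional[bool] = None    # None if the reporter is unsure
        notes: str = ""                                  # free text for anything else

Keeping the reporter-facing fields short and allowing "unknown" or blank answers matters more than the exact structure: the goal is to lower the effort of reporting, not to add a second form.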
Train staff on AI incident recognition. Staff need to recognise when AI contributes to incidents. This isn’t always obvious, especially when AI operates in the background.
What Should Be Reported
Define clear guidance on AI-related reporting:
Definite AI incidents. Events where AI directly contributed to patient harm or near-miss:
- AI missed a significant finding that led to delayed diagnosis
- AI recommendation was followed but was incorrect
- AI system failure during critical clinical activity
Possible AI incidents. Events where AI might have contributed:
- Unclear whether AI recommendation influenced clinical decision
- AI was available, but it is unclear whether it was used
- AI performed correctly but clinical workflow around AI failed
AI system issues without immediate harm. Problems that didn’t cause harm but could:
- Significant AI performance degradation
- Repeated user reports of AI inaccuracy
- Integration failures affecting AI availability
- Security incidents involving AI systems
Near-misses. Events where AI error or failure was caught before reaching the patient. These are valuable learning opportunities.
Encourage broad reporting—uncertainty shouldn’t prevent reporting. You can investigate and classify later.
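One way to make "report now, classify later" concrete is to treat classification as a triage step rather than something the reporter has to get right. The sketch below assumes a simple four-way classification mirroring the categories above; the names and the default-to-"possible" rule are illustrative choices, not features of any particular reporting product.

    from enum import Enum
    from typing import Optional

    class AIIncidentClass(Enum):
        DEFINITE = "definite_ai_incident"      # AI directly contributed to harm or a near-miss
        POSSIBLE = "possible_ai_incident"      # AI may have contributed; causation unclear
        SYSTEM_ISSUE = "ai_system_issue"       # degradation, integration or security problem, no harm yet
        NEAR_MISS = "ai_near_miss"             # error caught before it reached the patient

    def provisional_class(reporter_view: Optional[str]) -> AIIncidentClass:
        """Assign a provisional class at intake; investigators can reclassify later."""
        mapping = {
            "definite": AIIncidentClass.DEFINITE,
            "system_issue": AIIncidentClass.SYSTEM_ISSUE,
            "near_miss": AIIncidentClass.NEAR_MISS,
        }
        # Uncertainty should never block a report: anything unclear starts as POSSIBLE.
        return mapping.get((reporter_view or "").strip().lower(), AIIncidentClass.POSSIBLE)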
Challenges in AI Incident Investigation
Investigating AI incidents presents challenges:
Establishing causation. Did AI actually contribute to the incident, or was it coincidental? AI made a recommendation—but would the outcome have been different if it hadn’t?
Accessing AI decision data. Understanding why AI made a particular recommendation often requires accessing logs and historical data. This may require vendor cooperation.
Technical expertise. Investigating AI incidents requires understanding of both clinical and technical factors. Your incident investigators may need support.
Confidentiality and blame. AI incidents might implicate vendors, clinicians, or system design. Navigating these sensitivities while supporting learning is challenging.
For significant incidents, consider investigation teams that include clinical informatics expertise, not just traditional quality staff.
Learning from AI Incidents
The purpose of reporting is learning. For AI incidents, learning questions include:
System-level questions:
- Is this a known AI limitation that should be addressed?
- Should processes for AI use change based on this incident?
- Are similar incidents occurring elsewhere?
- Does the AI vendor need to be notified?
Human factors questions:
- Was the clinician appropriately trained on AI use?
- Were clinical workflows designed to support appropriate AI interaction?
- Was workload or time pressure a factor?
Governance questions:
- Were monitoring processes adequate?
- Did governance structures function as intended?
- Are policies and standards adequate?
Share learnings appropriately—within the organisation, with the AI vendor, and (for significant incidents) with the TGA and broader healthcare community.
Regulatory Reporting Obligations
Some AI incidents trigger regulatory reporting obligations:
TGA reporting. Serious adverse events or near-misses involving TGA-registered medical devices (including AI devices) may require manufacturer and TGA notification under post-market surveillance requirements.
Coronial reporting. Deaths where AI may have been a contributing factor should be considered for coronial notification, like any other potential clinical contributing factor.
Accreditation requirements. Health service accreditation standards require effective incident management. This includes AI-related incidents.
Professional reporting. If AI incidents raise concerns about individual clinical practice, standard AHPRA reporting obligations apply.
Ensure your reporting pathways include AI-aware triage so that appropriate external reporting occurs.
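To show what "AI-aware triage" could look like in practice, here is a minimal sketch that flags incidents which may need escalation beyond the local system. The input fields and the rules are deliberately simplified assumptions: actual TGA and coronial reporting obligations depend on the specific device, jurisdiction, and circumstances, and need to be assessed by your quality, regulatory, and legal teams.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TriageInput:
        tga_registered_device: bool        # is the AI system a registered medical device?
        serious_harm_or_death: bool        # serious adverse event or death occurred
        near_miss: bool                    # error caught before reaching the patient
        death_possibly_ai_related: bool    # AI may have contributed to a death

    def external_pathways_to_consider(incident: TriageInput) -> List[str]:
        """Return external reporting pathways to consider (illustrative rules only)."""
        pathways = []
        if incident.tga_registered_device and (incident.serious_harm_or_death or incident.near_miss):
            pathways.append("Notify the manufacturer and consider TGA adverse event reporting")
        if incident.death_possibly_ai_related:
            pathways.append("Consider coronial notification")
        return pathways

The value of a rule set like this is not automation for its own sake; it is making sure the question "does this need to go further?" gets asked for every AI-related incident, not just the obvious ones.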
Building Reporting Culture
Technical systems matter less than culture. Staff need to:
Feel safe reporting. Non-punitive culture where reporting AI concerns doesn’t create personal risk.
Know how to recognise AI issues. Training on what AI-related incidents look like and when to report.
See that reporting leads to action. Reports that disappear without feedback or change undermine future reporting.
Understand AI limitations. If staff don’t know AI can fail, they won’t consider AI when incidents occur.
Visible leadership commitment to AI safety, and demonstrated responses to reported concerns, build the culture needed for effective reporting.
Practical Steps
If you’re implementing AI and want to strengthen safety reporting:
- Review your current reporting system. Can it capture AI-related incidents? What modifications are needed?
- Develop AI-specific guidance. What should be reported? How? By whom?
- Train staff. Include AI in patient safety training. Ensure staff understand AI involvement in their clinical areas.
- Establish investigation capability. Ensure investigation teams can handle AI incidents or have access to expertise.
- Create feedback loops. Use AI incident data to inform AI governance, vendor relationships, and system improvements.
- Monitor reporting rates. Low AI incident reporting may reflect gaps in the reporting system rather than evidence that the AI is safe (a simple monitoring sketch follows this list).
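For the last step, a simple way to keep an eye on reporting rates is to normalise AI-related reports against some measure of AI usage volume and flag months that fall well below a locally agreed baseline. The sketch below is a minimal illustration; the data structure, the usage proxy, and the thresholds are all assumptions you would replace with whatever your organisation can actually measure.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class MonthlyFigures:
        month: str                    # e.g. "2025-03"
        ai_incident_reports: int      # AI-related reports lodged that month
        ai_assisted_episodes: int     # assumed proxy for AI usage volume

    def months_with_low_reporting(figures: List[MonthlyFigures],
                                  expected_per_1000: float = 1.0) -> List[str]:
        """Flag months where reports per 1,000 AI-assisted episodes fall well below a local baseline."""
        flagged = []
        for f in figures:
            if f.ai_assisted_episodes == 0:
                continue
            rate = 1000 * f.ai_incident_reports / f.ai_assisted_episodes
            # A low rate prompts a question about reporting culture and system gaps,
            # not a conclusion that the AI is safe.
            if rate < 0.5 * expected_per_1000:
                flagged.append(f.month)
        return flagged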
As AI becomes more prevalent in clinical care, patient safety systems must evolve to match. The principles are the same—report, investigate, learn, improve. The application needs updating for AI-specific challenges.
Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.