A Clinical AI Governance Framework That Actually Works
Clinical AI governance is one of those topics that generates a lot of documents but not much clarity. I’ve seen organisations produce elaborate governance frameworks that look impressive on paper but don’t actually work in practice.
Let me share a framework that does work—based on what I’ve observed across successful implementations.
The Problem with Most AI Governance
Many AI governance frameworks fail because they’re:
Too abstract. High-level principles without operational guidance. “AI should be ethical” doesn’t help someone decide whether to deploy a specific system.
Too bureaucratic. Multiple committees, lengthy approval processes, and sign-off requirements that slow everything without adding value.
Disconnected from clinical governance. AI governance as a separate silo, rather than integrated with existing clinical quality structures.
Static. Frameworks that approve AI once, then assume it works forever without ongoing oversight.
Good governance is none of these things.
The Framework Structure
I recommend a three-layer approach:
Layer 1: Strategic Governance (Board/Executive)
At the highest level, governance addresses:
AI strategy alignment. Does proposed AI align with organisational strategy and clinical priorities? Not every AI opportunity should be pursued.
Risk appetite. What level of AI risk is acceptable? High-stakes diagnostic AI versus administrative AI represent different risk profiles.
Resource allocation. What investment is appropriate for AI initiatives? How does AI compete with other priorities?
External accountability. How do you report to external stakeholders (regulators, accreditors, the public) on AI use?
This level involves the board (or a board committee) and executive leadership. Decisions are infrequent but significant.
Layer 2: Operational Governance (Clinical AI Committee)
This is where most governance work happens. A Clinical AI Committee (or equivalent) handles:
Pre-implementation review. Evaluating proposed AI systems before deployment:
- Clinical evidence assessment
- Technical fit evaluation
- Risk analysis
- Ethical considerations
- Implementation planning
Performance monitoring. Ongoing oversight of deployed AI:
- Performance metrics review
- Incident review
- Drift detection
- User feedback analysis
Change management. Decisions about AI modifications:
- Algorithm updates
- Scope changes
- Vendor changes
- Discontinuation decisions
Policy and standards. Developing and maintaining:
- AI implementation standards
- Monitoring requirements
- Incident response protocols
- Training requirements
The committee should meet monthly and include:
- Clinical informatics leadership (chair)
- Medical and nursing representation
- Quality and safety representation
- IT leadership
- Legal/risk representation
- Ethics expertise (could be ad hoc)
Keep membership small enough to be functional—8-10 members maximum.
Layer 3: Local Clinical Governance (Department/Unit Level)
AI governance isn’t just a central function. Clinical departments using AI have local governance responsibilities:
Clinical supervision. Ensuring clinicians using AI do so appropriately:
- Proper training completion
- Appropriate clinical oversight of AI recommendations
- Escalation of concerns
Performance feedback. Providing input to the central committee:
- Clinical experience with AI systems
- Workflow issues
- Safety concerns
- Improvement suggestions
Local incident management. Initial response to AI-related incidents:
- Recognition and reporting
- Immediate patient safety actions
- Contributing to investigation
This happens through existing clinical governance structures—mortality and morbidity review, departmental meetings, quality committees. It doesn’t require new structures, just expanded scope.
Essential Governance Processes
Within this structure, several processes are essential:
Pre-Implementation Assessment
Before any clinical AI deployment, assess:
Clinical evidence. What’s the evidence base? How applicable is it to your context? Is the evidence independent of the vendor?
Regulatory status. TGA registration (if required)? Privacy compliance? Professional standards alignment?
Technical assessment. Integration requirements? Infrastructure needs? Vendor stability?
Risk analysis. What could go wrong? What’s the potential impact? What mitigations are available?
Implementation plan. Training approach? Change management? Go-live process? Support arrangements?
Document this assessment. The record matters for future reference and accountability.
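If it helps to keep these records consistent across systems, a minimal sketch of one possible assessment record is below. This is purely illustrative: the `PreImplementationAssessment` class, its field names, and the example system are assumptions, not a mandated standard or template.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: field names are assumptions, not a required standard.
@dataclass
class PreImplementationAssessment:
    system_name: str
    assessed_on: date
    clinical_evidence: str      # evidence base and its applicability to local context
    regulatory_status: str      # e.g. TGA registration, privacy compliance notes
    technical_assessment: str   # integration, infrastructure, vendor stability
    risk_analysis: str          # failure modes, potential impact, mitigations
    implementation_plan: str    # training, change management, go-live, support
    approved: bool = False
    conditions: list[str] = field(default_factory=list)  # conditions attached to approval

# Hypothetical example record
assessment = PreImplementationAssessment(
    system_name="Example chest X-ray triage tool",
    assessed_on=date(2024, 7, 1),
    clinical_evidence="Two vendor-independent validation studies; local case mix comparable.",
    regulatory_status="TGA-registered; privacy impact assessment complete.",
    technical_assessment="Integrates with existing PACS; vendor considered stable.",
    risk_analysis="False negatives in subtle findings; mitigated by clinician over-read.",
    implementation_plan="Mandatory training module; phased go-live; defined support contact.",
    approved=True,
    conditions=["Quarterly performance report to the Clinical AI Committee"],
)
```

A structured record like this makes later governance steps (periodic review, incident investigation) easier, because the original assumptions are captured in one place.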
Performance Monitoring
Ongoing monitoring should include:
Quantitative metrics. Sensitivity, specificity, accuracy, or whatever metrics are relevant to the AI application. Track these over time and compare to baseline.
Operational metrics. Usage rates, workflow integration, user satisfaction. Is the AI being used as intended?
Incident tracking. Any AI-related incidents (or near-misses), regardless of severity. Look for patterns.
User feedback. Regular (at least quarterly) collection of user perspectives on AI performance and workflow fit.
Set thresholds that trigger investigation or escalation. Don’t just collect data—act on it.
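To make "set thresholds that trigger escalation" concrete, here is a minimal sketch that compares a monitoring period's sensitivity and specificity against the approved baseline and flags any breach. The threshold value, function names, and counts are illustrative assumptions, not values drawn from any standard.

```python
# Minimal monitoring sketch: thresholds and names are illustrative assumptions.

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn) if (tp + fn) else float("nan")

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp) if (tn + fp) else float("nan")

def check_drift(current: dict, baseline: dict, max_drop: float = 0.05) -> list[str]:
    """Return escalation flags when a metric falls more than max_drop below baseline."""
    flags = []
    for metric in ("sensitivity", "specificity"):
        if baseline[metric] - current[metric] > max_drop:
            flags.append(f"{metric} dropped from {baseline[metric]:.2f} to {current[metric]:.2f}")
    return flags

# Quarterly review example: counts would come from your own audit data.
baseline = {"sensitivity": 0.92, "specificity": 0.88}
current = {
    "sensitivity": sensitivity(tp=165, fn=35),   # 0.825
    "specificity": specificity(tn=840, fp=110),  # ~0.88
}
for flag in check_drift(current, baseline):
    print("Escalate to Clinical AI Committee:", flag)
```

The point is not the specific numbers but the discipline: a defined baseline, a defined tolerance, and a defined action when the tolerance is breached.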
Incident Management
When AI contributes to a clinical incident:
Immediate response. Ensure patient safety. Consider whether AI should be suspended pending investigation.
Investigation. Understand what happened. Was it AI error, user error, or system interaction? Was it predictable?
Reporting. Internal incident reporting. External reporting if required (TGA, coroner, etc.).
Remediation. Address root causes. Update AI use if needed. Communicate learning.
Treat AI incidents with the same rigour as other clinical incidents—they’re not special, but they’re not exempt either.
Periodic Review
At least annually, conduct a comprehensive review of each AI system:
- Performance against objectives
- Ongoing value demonstration
- Governance compliance
- User experience
- Vendor relationship
- Future direction
This provides structured opportunity to decide whether to continue, modify, or discontinue AI use.
Making It Practical
Some tips for making governance practical rather than bureaucratic:
Right-size requirements to risk. High-stakes diagnostic AI needs more governance than administrative AI. Don’t apply the same process to everything.
Integrate, don’t duplicate. Use existing clinical governance structures wherever possible. Add AI scope rather than creating parallel systems.
Focus on decisions, not documentation. Documentation matters, but governance is about making good decisions and ensuring accountability. Keep paperwork proportionate.
Build in feedback. Governance should improve over time. Regularly ask whether processes are working and adjust.
Support implementation. Governance shouldn’t just be gatekeeping. Help teams implement AI well, don’t just say no.
Getting Started
If you don’t have AI governance:
- Establish a Clinical AI Committee. Small, empowered, meeting regularly.
- Adopt basic standards. Pre-implementation assessment requirements. Performance monitoring expectations. Incident reporting processes.
- Inventory existing AI. What AI is already deployed? Bring it under governance.
- Build incrementally. You don't need everything on day one. Develop governance capability over time.
AI governance done well protects patients, supports clinicians, and enables responsible innovation. Done poorly, it’s bureaucratic overhead that doesn’t achieve any of those things.
The framework matters less than whether it actually works in practice.
Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.