TGA's New AI Medical Device Framework: What Healthcare Leaders Need to Know
The Therapeutic Goods Administration dropped something significant last month. Their updated framework for AI-enabled medical devices isn’t just bureaucratic shuffling—it’s the clearest signal yet about how Australian regulators plan to handle clinical AI.
I’ve spent the past two weeks going through the documentation, talking to colleagues at major health networks, and comparing notes with regulatory affairs specialists. Here’s what actually matters for healthcare executives and CIOs.
The Headline Changes
First, the TGA has moved away from treating AI systems as static products. This matters because most machine learning models aren’t static—they’re updated, retrained, and improved over time. The old framework treated every update like a new product submission. That was unworkable.
The new approach creates a tiered system:
- Locked algorithms that don’t change post-deployment go through standard pathways
- Adaptive algorithms that learn from new data face additional ongoing monitoring requirements
- Continuous learning systems require a “predetermined change control plan” before approval
That last category is where things get interesting. If you’re implementing diagnostic AI that improves over time, you’ll need to map out how changes happen before you even start.
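To make the tiers concrete, here’s how I’d sketch the classification in an internal AI asset register. The tier labels and the obligation mapping are my shorthand reading of the framework, not TGA terminology:

```python
from enum import Enum

class AlgorithmType(Enum):
    """Tiers under the new framework (labels are my shorthand)."""
    LOCKED = "locked"                    # frozen post-deployment
    ADAPTIVE = "adaptive"                # retrained on new data, releases gated
    CONTINUOUS_LEARNING = "continuous"   # updates itself in production

def compliance_obligations(algo_type: AlgorithmType) -> list[str]:
    """Rough tier-to-obligation mapping, per my reading of the framework."""
    obligations = ["standard pre-market pathway"]
    if algo_type is AlgorithmType.LOCKED:
        return obligations
    obligations += ["ongoing performance monitoring", "documentation per update"]
    if algo_type is AlgorithmType.CONTINUOUS_LEARNING:
        obligations.append("predetermined change control plan (before approval)")
    return obligations

print(compliance_obligations(AlgorithmType.CONTINUOUS_LEARNING))
```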
Why This Matters for Implementation Planning
I was talking to a CCIO at a Melbourne private hospital network last week. They’d been evaluating a radiology AI system for chest X-ray interpretation. The vendor promised the system would “continuously improve” based on their imaging data.
Sounds great. But under the new framework, that continuous improvement creates regulatory obligations. Every algorithm update needs documentation. Performance monitoring becomes mandatory, not optional. And if the system drifts outside its approved performance envelope, you can’t just keep using it while you figure things out.
This doesn’t mean you shouldn’t implement adaptive AI. It means you need to factor regulatory overhead into your total cost of ownership calculations. A system that looks cheaper upfront might cost more over five years when you account for compliance requirements.
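What does “drifting outside its approved performance envelope” look like operationally? A minimal sketch, assuming you track rolling sensitivity and specificity against the bounds documented in your approval (the threshold values here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PerformanceEnvelope:
    """Approved performance bounds from the regulatory submission.
    The values used below are illustrative, not from any real approval."""
    min_sensitivity: float
    min_specificity: float

def within_envelope(sensitivity: float, specificity: float,
                    envelope: PerformanceEnvelope) -> bool:
    """True if the rolling metrics sit inside the approved envelope."""
    return (sensitivity >= envelope.min_sensitivity
            and specificity >= envelope.min_specificity)

# Hypothetical monthly check: if this fails, the framework expects escalation,
# not quietly continuing to use the system while you figure things out.
envelope = PerformanceEnvelope(min_sensitivity=0.92, min_specificity=0.88)
if not within_envelope(sensitivity=0.89, specificity=0.90, envelope=envelope):
    print("ALERT: outside approved performance envelope; escalate to clinical governance")
```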
Integration with My Health Record
Here’s something that didn’t get enough attention in the initial coverage: the framework explicitly addresses AI systems that interact with My Health Record data.
The TGA and ADHA have apparently been coordinating on this. If your AI system pulls information from My Health Record—even for decision support rather than direct diagnosis—there’s a new set of data governance requirements.
Specifically:
- Audit trails for any MHR data used in AI inference (a sketch of what such a record might capture follows this list)
- Patient consent tracking that goes beyond standard MHR consent
- Restrictions on using MHR data for algorithm training without specific approval
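On that audit-trail point, here’s a minimal sketch of the kind of record I’d expect a system to write each time MHR data feeds an inference. The field names are my guesses at what “appropriate” might look like; the guidance doesn’t specify a schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class MHRInferenceAuditRecord:
    """One entry per AI inference that touched My Health Record data.
    Field names are illustrative, not from the TGA/ADHA guidance."""
    timestamp_utc: str
    patient_identifier: str           # handle per your privacy policy
    mhr_documents_accessed: list[str]
    model_id: str
    model_version: str
    consent_reference: str            # pointer to the consent record relied on
    purpose: str                      # e.g. "decision support"; never "training"
                                      # without the specific approval noted above

record = MHRInferenceAuditRecord(
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    patient_identifier="<redacted>",
    mhr_documents_accessed=["shared-health-summary"],
    model_id="cxr-triage",
    model_version="2.3.1",
    consent_reference="consent/2025/0042",
    purpose="decision support",
)
print(json.dumps(asdict(record), indent=2))  # ship to an append-only audit store
```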
I’ll be honest—I’m not entirely sure how this will work in practice. The guidance says “appropriate consent mechanisms” without defining what that means. We’ll need to wait for implementation guidance, or just ask directly during early consultations.
What About Existing Systems?
If you’ve already deployed clinical AI, don’t panic. The TGA has built in a transition period.
Systems approved under the old framework have 24 months to demonstrate compliance with new monitoring requirements. That’s enough time to implement proper governance, but not so much that you can ignore it for a year and a half.
The practical steps:
- Audit your current AI deployments - Do you know what’s actually running in your clinical environment? I’ve worked with health services that couldn’t answer this question quickly.
- Classify by algorithm type - Is it locked, adaptive, or continuous learning? This determines your compliance pathway.
- Document performance baselines - You’ll need to show the system performs as expected. If you don’t have baseline metrics, start collecting them now (see the sketch after this list).
- Review vendor contracts - Who’s responsible for regulatory compliance? This should be crystal clear.
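On the baselines point specifically: even a simple confusion-matrix summary, captured and dated now, puts you ahead of the transition deadline. A minimal sketch, assuming you can export binary model outputs alongside a clinical ground truth:

```python
def baseline_metrics(predictions: list[int], ground_truth: list[int]) -> dict[str, float]:
    """Sensitivity/specificity baseline from binary model outputs vs. clinical
    ground truth. A starting point, not a complete monitoring regime."""
    tp = sum(p == 1 and t == 1 for p, t in zip(predictions, ground_truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(predictions, ground_truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(predictions, ground_truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(predictions, ground_truth))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "n_cases": float(len(predictions)),
    }

# Illustrative numbers only; substitute a real validation export.
print(baseline_metrics(predictions=[1, 0, 1, 1, 0], ground_truth=[1, 0, 0, 1, 0]))
```

For a real deployment you’d want confidence intervals and subgroup breakdowns, but captured-and-dated beats perfect-and-absent.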
The Bigger Picture for Healthcare AI Strategy
Beyond the specific regulatory changes, I think this signals something important about the direction of healthcare AI in Australia.
The TGA is taking a “regulation proportionate to risk” approach. Low-risk applications face lighter requirements. High-risk diagnostic tools face heavier scrutiny. That’s sensible, and it aligns with how other major regulators (FDA, EU) are moving.
For healthcare leaders, this means:
Start with lower-risk applications. Administrative AI, scheduling optimisation, supply chain management—these face fewer regulatory hurdles. Build your organisational capability with these before tackling high-stakes clinical applications.
Invest in clinical informatics capability. The new framework assumes you have people who understand both clinical workflows and AI system behaviour. If you don’t have health informaticists on staff, you’ll need them, or you’ll need partners who do. Melbourne-based AI consultancies and similar firms are increasingly working with health services on exactly this kind of capability building.
Plan for the long term. A five-year AI roadmap isn’t optional anymore. The regulatory framework rewards organisations that can articulate how their systems will evolve over time.
Questions I’m Still Asking
I don’t want to pretend I have all the answers here. Some things remain unclear:
- How will the TGA handle AI systems developed in-house by health services? The framework seems oriented toward commercial products.
- What happens when an AI system is approved for one clinical context but clinicians want to use it in adjacent situations? This “off-label” AI use is already happening.
- How will state health departments align their clinical governance frameworks with these national requirements?
If you’re grappling with these questions too, I’d genuinely like to hear about it. The more we share implementation experiences, the faster we all figure this out.
Next Steps
The TGA is running consultation sessions through March. If your organisation is implementing or considering clinical AI, I’d strongly recommend having someone attend. The framework is set, but implementation details are still being shaped.
For now, make sure your executive team understands that “buying AI” is no longer (and really never was) a simple procurement decision. It’s a regulatory, clinical governance, and organisational capability decision bundled together.
The organisations that get this right will have a significant advantage. The ones that don’t will find themselves either stuck in compliance limbo or, worse, dealing with safety incidents that could have been prevented.
Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.