AI in Indigenous Health: Opportunities, Risks, and Essential Principles


Aboriginal and Torres Strait Islander peoples experience significant health disparities compared to non-Indigenous Australians. Life expectancy gaps, higher rates of chronic disease, and reduced access to healthcare services persist despite decades of policy attention.

Could AI help address these disparities? Potentially. But AI also carries risks of entrenching or worsening inequities if implemented without appropriate care.

This is sensitive terrain. I want to share my perspective while acknowledging that Indigenous voices should lead this conversation.

Why AI Could Help

AI has potential to address some drivers of health inequity:

Access barriers. Remote Indigenous communities often lack access to specialist healthcare. AI-assisted telehealth, diagnostic support, and decision tools could partially bridge these gaps—extending specialist capabilities to primary care settings with limited staffing.

Workforce shortages. Communities struggle to recruit and retain healthcare workers. AI that supports generalist clinicians in managing complex conditions could improve care where specialists aren’t available.

Pattern recognition. AI analysis of health data could identify community health patterns and intervention opportunities that individual clinicians might miss.

Cultural adaptation. AI could support culturally appropriate care delivery by prompting clinicians with culturally relevant questions, helping ensure appropriate communication, and assisting Indigenous health workers in their roles.

The Significant Risks

The risks are substantial:

Data bias. AI trained predominantly on non-Indigenous populations may perform poorly for Indigenous peoples. Different disease presentations, genetic factors, and social determinants mean AI validated elsewhere can’t be assumed to work in Indigenous contexts.

Historical harms. Aboriginal and Torres Strait Islander peoples have experienced profound harms from data collection and research conducted without appropriate consent or benefit-sharing. AI that continues extractive data practices perpetuates these harms.

Surveillance concerns. AI-powered monitoring and prediction could be experienced as surveillance in communities with justified distrust of government and mainstream health services.

Inappropriate automation. Healthcare for Indigenous peoples often requires relationship-building, cultural understanding, and holistic approaches. AI that fragments or depersonalises care may do harm.

Uneven access amplifying inequality. If AI improves healthcare primarily in well-resourced urban settings while rural and remote Indigenous communities lack the infrastructure to benefit, it could widen rather than narrow disparities.

Essential Principles

Drawing from Indigenous health ethics and emerging AI ethics frameworks, several principles should guide AI in Indigenous health:

Indigenous data sovereignty. Indigenous peoples should control collection, ownership, and use of data about their communities. This principle is well-established in research ethics; it should apply equally to AI development and deployment.

Self-determination in technology. Indigenous communities should lead decisions about whether and how AI is used in their healthcare. Top-down imposition of technology solutions has failed repeatedly.

Benefit sharing. If AI generates value from Indigenous health data, benefits should flow to Indigenous communities—not just to technology companies or mainstream health services.

Cultural safety. AI systems used in Indigenous healthcare should be culturally safe and support (not undermine) culturally appropriate care models.

Validation in context. AI systems should be validated specifically in Indigenous contexts before deployment. Good performance in other populations isn’t sufficient.

Transparency and consent. Indigenous patients should understand when AI is involved in their care and have meaningful ability to decline.

What Good Practice Looks Like

Some examples of better approaches:

Aboriginal Community Controlled Health Organisations (ACCHOs) are leading some AI initiatives, ensuring Indigenous control over technology deployment. ACCHO leadership means Indigenous perspectives shape design from the start.

Co-design processes that genuinely involve Indigenous health workers, community members, and governance structures—not just consultation at the end of development.

Specific validation studies in Indigenous populations before deployment, identifying and addressing performance gaps (a minimal sketch of what such a check can look like follows this list).

Indigenous data governance frameworks that implement data sovereignty principles in AI contexts—controlling who accesses data, how AI is trained, and where benefits flow.
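
To make the validation-in-context principle concrete, here is a minimal sketch of the kind of stratified performance check a service might run on a locally held validation cohort before any deployment decision. It is illustrative only: the column names (y_true, y_score, indigenous_status), the metrics, and the threshold are assumptions rather than a prescribed standard, and small subgroups need statistical care beyond what is shown.

```python
# Minimal sketch: compare a risk model's performance across population groups
# in a locally held validation cohort before any deployment decision.
# Column names (y_true, y_score, indigenous_status) are illustrative assumptions.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score


def stratified_performance(df: pd.DataFrame,
                           group_col: str = "indigenous_status",
                           threshold: float = 0.5) -> pd.DataFrame:
    """Report discrimination and sensitivity separately for each group."""
    rows = []
    for group, sub in df.groupby(group_col):
        y_true = sub["y_true"]
        y_pred = (sub["y_score"] >= threshold).astype(int)
        rows.append({
            "group": group,
            "n": len(sub),
            # AUC is undefined when a subgroup contains only one outcome class;
            # small subgroups also need confidence intervals, not shown here.
            "auc": roc_auc_score(y_true, sub["y_score"]) if y_true.nunique() > 1 else float("nan"),
            "sensitivity": recall_score(y_true, y_pred, zero_division=0),
            "prevalence": y_true.mean(),
        })
    return pd.DataFrame(rows)


# Usage with a local validation cohort:
#   report = stratified_performance(validation_df)
#   print(report)
# A material gap in AUC or sensitivity between groups is a reason to pause,
# not a footnote in the deployment report.
```

The point is not these particular metrics; it is that performance is reported for the community the system will actually serve, with Indigenous governance deciding what counts as acceptable.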

What Bad Practice Looks Like

Warning signs of problematic approaches:

  • AI deployed without specific Indigenous validation
  • No Indigenous involvement in design, implementation, or governance
  • Data extracted from Indigenous communities without clear consent and benefit-sharing
  • Technology imposed without community consultation
  • AI that replaces rather than supports human relationship-building
  • Focus on surveillance rather than empowerment

Practical Recommendations

For healthcare organisations considering AI with Indigenous populations:

Engage early and meaningfully. If your organisation serves Indigenous patients, Indigenous community members and ACCHOs should be involved in AI decisions from the start.

Assess data representation. Understand whether proposed AI systems were trained and validated with Indigenous populations. If not, proceed with extreme caution or not at all (a representation check sketch follows these recommendations).

Evaluate cultural safety. How does the AI interact with cultural factors in care delivery? Could it undermine culturally safe practices?

Consider governance. Who controls the AI? Who benefits from it? Are Indigenous governance structures involved?

Start with Indigenous-led initiatives. Support Indigenous-led AI initiatives rather than imposing mainstream solutions.

Be willing to step back. If Indigenous communities don’t want AI in their healthcare, respect that decision.
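
On the data-representation recommendation above, a useful first-pass check is whether the cohort used to train or validate a system even contains enough Indigenous-identified records for subgroup validation to be meaningful. This is a sketch under stated assumptions: the boolean indigenous_status column, the reference share, and the minimum count are placeholders to be set with local data and community governance input.

```python
# Minimal sketch: first-pass representation check on a training or validation
# cohort. The boolean indigenous_status column, the reference share, and the
# minimum count are illustrative assumptions, not standards.
import pandas as pd


def representation_check(df: pd.DataFrame,
                         reference_share: float,
                         group_col: str = "indigenous_status",
                         min_group_n: int = 200) -> dict:
    """Compare the cohort's Indigenous-identified share against a locally chosen
    reference share (e.g. the service's catchment population) and a minimum
    record count needed for subgroup validation."""
    group_n = int(df[group_col].fillna(False).astype(bool).sum())
    share = group_n / len(df) if len(df) else 0.0
    return {
        "group_n": group_n,
        "share": round(share, 4),
        "meets_minimum_n": group_n >= min_group_n,
        "under_represented": share < reference_share,
    }


# Usage:
#   print(representation_check(training_df, reference_share=catchment_share))
# Failing either check means general-population performance figures say little
# about how the system will behave for Indigenous patients.
```

Counting records is only a starting point; representation says nothing about consent, data quality, or governance, which the principles above address.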

The Broader Context

AI in Indigenous health sits within broader questions about Indigenous health sovereignty, self-determination, and systemic racism in healthcare. Technology can’t solve these issues—and poorly implemented technology can make them worse.

The potential for AI to contribute positively to Indigenous health depends entirely on how it’s developed and deployed. Indigenous leadership, genuine partnership, and commitment to addressing power imbalances are essential.

Without these, AI is more likely to entrench health inequity than address it.


Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She acknowledges the traditional custodians of Country throughout Australia and pays respect to Elders past, present, and emerging.