The Question the Ageing Australia AI Panel Finally Answered

16/03/2026

The aged care sector has been circling a particular question for some time without quite landing on it. The question is not whether AI has value in aged care, because that conversation has been settled, at least in principle. The question is what a provider actually needs to have in place before AI governance is possible, and whether most providers currently have it. 

The Ageing Australia Deep Dive into AI in Aged Care: Readiness, Risk and Governance panel did something the sector-level AI conversation has struggled to do. It named the infrastructure prerequisites plainly, without softening them into a general call for digital maturity. You can watch the full discussion at the Ageing Australia webcast page. 

David Richter, Customer Strategy Manager at Acredia, framed the readiness question on the panel as a diagnostic sequence that every provider should run before any AI strategy conversation begins. The sequence starts with data quality, moves to system connectivity, and arrives at a question most providers have not formally tested: can your systems produce, from a single connected interface, the structured and contemporaneous data that a governed AI tool requires?

His point was not that AI adoption should wait. It was that providers who skip the infrastructure diagnostic and move directly to AI pilots are not getting ahead of a problem. They are feeding an AI engine with inputs it cannot reliably interpret, and the errors embedded in that data are not corrected by the AI layer. They are magnified by it.

 

Three Points the Panel Made That the Programme Has Already Demonstrated

The Acredia Insights programme has spent the past two weeks examining infrastructure gaps in specific clinical domains. Our previous Insight, Care Plan Compliance and the Delivery Gap, established that the gap between a care plan and its delivery is an evidence problem: providers cannot demonstrate point-of-delivery care without infrastructure that produces system-generated timestamps at the moment of care, not at the moment of documentation. Our Executive Summary, Clinical Observation and the Condition Change Gap, established that observation documentation entered retrospectively cannot produce contemporaneous timestamps, and that the gap between observation and entry is precisely the window where clinical deterioration can be missed.

Both findings connect directly to the three contributions David Richter made on the panel. 

First, data quality must come before AI ambition. The delivery evidence gap and the observation gap are both data quality problems expressed in specific clinical domains. A governed AI tool asked to monitor either domain would be interpreting data that fails the quality standard the diagnostic above tests. 

Second, system connectivity determines whether data can be assembled into structured form. A provider with three clinical systems that do not communicate cannot produce a connected resident record from a single interface. That is not a workflow problem. It is a connectivity architecture problem, and no amount of staff coordination resolves it. 
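The connectivity point can be made concrete with a small sketch. Everything here is illustrative: the three system exports, the resident identifier, and the field names are hypothetical, not Acredia's actual data model. The point is simply what "a connected resident record from a single interface" must produce, which disconnected systems cannot.

```python
# Hypothetical sketch: assembling a connected resident record from three
# disconnected system exports. System names, fields, and the resident ID
# are illustrative assumptions only.

care_planning = {"R-104": {"care_plan": "Falls prevention, revised 02/03"}}
medication = {"R-104": {"last_administration": "2026-03-15T08:05:00"}}
observations = {"R-104": {"last_obs": "BP 118/76 at 2026-03-15T07:40:00"}}

def connected_record(resident_id: str) -> dict:
    """Join the three exports on a shared resident identifier.

    In a connected architecture this join happens inside the platform;
    here it is simulated to show the output a single interface must yield.
    """
    record = {"resident_id": resident_id}
    for source in (care_planning, medication, observations):
        record.update(source.get(resident_id, {}))
    return record

print(connected_record("R-104"))
```

When the systems share no common identifier or export format, this join requires manual assembly by staff, which is exactly the connectivity architecture problem the panel described.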

Third, boards need to be asking specific infrastructure questions, not abstract AI strategy questions. The question “Are we ready for AI?” is unanswerable without first answering “Can our systems produce structured, connected, timestamped, attributable data across the clinical domains an AI tool would need to monitor?” Most boards have been having the second conversation without the first. 

 

What the Infrastructure Diagnostic Reveals That a Process Review Cannot

The diagnostic instrument produced this week tests five infrastructure prerequisites across five clinical domains. The prerequisites are:  

  • Structured capture at the point of care 
  • System-generated timestamps distinct from documentation timestamps 
  • Cross-system data connectivity 
  • Documented escalation pathways with verifiable timeframes 
  • Structured output capability that satisfies both regulatory review and AI governance requirements 
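The second prerequisite, system-generated timestamps distinct from documentation timestamps, can be sketched as a simple check. The field names, the sample records, and the 30-minute tolerance window are illustrative assumptions, not a published standard; the sketch only shows why one timestamp cannot do the work of two.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of prerequisite 2: a system-generated capture timestamp
# recorded separately from the staff documentation timestamp. Field names and
# the 30-minute window are illustrative assumptions.

records = [
    {"resident": "R-104", "captured_at": datetime(2026, 3, 15, 7, 40),
     "documented_at": datetime(2026, 3, 15, 7, 42)},   # contemporaneous entry
    {"resident": "R-211", "captured_at": datetime(2026, 3, 15, 6, 10),
     "documented_at": datetime(2026, 3, 15, 9, 55)},   # end-of-shift entry
]

def retrospective(record: dict, window: timedelta = timedelta(minutes=30)) -> bool:
    """A record fails the prerequisite when documentation trails capture
    by more than the allowed window."""
    return record["documented_at"] - record["captured_at"] > window

flagged = [r["resident"] for r in records if retrospective(r)]
print(flagged)  # the end-of-shift entry is the one a governed tool cannot trust
```

A system that records only the documentation timestamp cannot run this check at all, because the gap the check measures is invisible to it.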

A process review of a provider’s clinical documentation practices might find strong compliance. Notes are being made. Handovers are conducted. Care plans are updated. None of that tells you whether the data those practices produce meets the five prerequisites. A team can observe, document, escalate, and update care plans with complete diligence and still produce data that fails every column of the infrastructure grid. The two standards are not the same standard. Only the infrastructure standard determines whether governed AI is possible. 

The diagnostic is designed so that the gaps become visible in the pattern of responses before any prose explanation is needed. Providers who find widespread gaps across the grid are not discovering a future problem. They are identifying a current governance infrastructure position that clinical review is already testing and that AI adoption will test more severely. 

 

The Convergence

What the Ageing Australia panel confirmed, and what the Acredia Insights programme has been demonstrating in specific clinical domains over the past fortnight, is that the AI governance readiness question and the clinical governance defensibility question are not sequential priorities. They are the same question examined from different positions. 

The infrastructure that a provider needs to close the care delivery evidence gap is the same infrastructure that a governed AI monitoring tool requires to detect delivery failures. The infrastructure that a provider needs to close the observation documentation gap is the same infrastructure that a governed AI deterioration monitor requires to detect clinical changes at the moment they occur. The data foundation question is not new. It has been the governance infrastructure question all along. AI readiness has given it a new name and a sharper deadline. 

Providers who complete the diagnostic this week and find their infrastructure position defensible have confirmed something worth knowing. Providers who find widespread gaps have identified the decision that precedes every other decision on the AI governance agenda, and on the clinical governance defensibility agenda. 

The full diagnostic instrument is available to providers who want to work through their own infrastructure position. Access the AI Governance Readiness Infrastructure Grid here. 

 

A few questions answered

Why does the infrastructure diagnostic matter if AI adoption is still some time away for our organisation?
The five infrastructure prerequisites the diagnostic tests are not specific to AI adoption. They are the same structural standards that clinical governance review, incident investigation, and regulatory audit apply to clinical documentation. Providers who find gaps in the diagnostic are carrying a governance infrastructure deficit that exists independently of any AI timeline. The AI readiness framing makes the diagnostic timely; the clinical governance defensibility argument makes it immediate.
 

We have strong documentation practices. Why might we still fail the diagnostic?
Documentation practice quality and infrastructure capability are not the same standard. A provider can maintain thorough, complete, conscientious clinical documentation and still produce data that fails the infrastructure grid, because the data is entered retrospectively, stored in disconnected systems, and cannot be exported in structured form without manual assembly. The diagnostic tests what your systems produce, not how diligent your team is.
 

The panel mentioned AI already being used in aged care. Does that change the infrastructure argument?
It sharpens it. Providers already using AI tools for documentation support, pain assessment, or clinical monitoring, but who have not tested the data infrastructure those tools depend on, are operating on an unverified assumption about the quality of the AI’s inputs. The infrastructure diagnostic applies retroactively to existing AI deployments as much as it does to planned ones.
 

What does a board conversation about AI readiness actually need to include?
It needs to include the output of a structured infrastructure assessment, not a vendor briefing. The board needs to understand, for each clinical domain, whether the data the organisation’s systems produce meets the five prerequisites. A board that has discussed AI strategy without receiving an infrastructure assessment has been given an incomplete picture.
 

Is this a problem that additional staff training can address?
Process quality and staff diligence cannot close an architecture gap. System-generated timestamps cannot be produced by better documentation habits. Cross-system connectivity cannot be created by staff following a more careful handover protocol. The gaps the diagnostic identifies are structural. The resolutions are structural.
 

 

David Richter is the Sales and Marketing Director at Acredia and a panellist on Ageing Australia’s “Deep Dive into AI in Aged Care: Readiness, Risk and Governance.” 

 

Copyright 2025. Acredia. All rights reserved.
