Privacy · AI Compliance · OAIC · Health Information · Privacy Act · General Practice · Practice Management

AI Privacy Compliance for Australian Healthcare Practices: What the OAIC Says You Must Do

ClinicComply Team
17 min read

Key Takeaways

  • The OAIC published specific guidance in October 2024 warning against entering personal health information into commercially available AI tools that are not configured to protect that information. This includes general-purpose tools like ChatGPT, Gemini, and Copilot used outside enterprise configurations.
  • Every AI tool that processes patient health information is subject to the Australian Privacy Act 1988, regardless of where the AI vendor is located. Cross-border disclosure rules under APP 8 mean your practice is accountable for what an overseas AI vendor does with your patient data.
  • Australia's Privacy Act reform - the first tranche of which passed in late 2024 - will require healthcare practices to update their privacy policies by December 2026 to disclose whether they use any automated decision-making that significantly affects individuals. Billing algorithms, clinical triage tools, and referral systems may all qualify.
  • AI clinical documentation tools (AI scribes, transcription software) are the highest-risk category for most GP practices because they process identifiable, sensitive health information in real time, often without patients being informed.
  • The OAIC has indicated that health information processed through AI is subject to the same APP 6 secondary purpose restrictions as health information collected any other way - the AI cannot use patient data to train its models unless the practice has a legal basis for that disclosure.

Australian healthcare practices are adopting artificial intelligence tools at a pace that compliance frameworks have not matched. AI clinical scribes are transcribing consultations, AI-assisted diagnostic tools are analysing pathology results, patient-facing chatbots are triaging symptoms, and general-purpose tools like ChatGPT are being used by reception and administration staff to draft correspondence, summarise notes, and generate letters. Each of these use cases involves health information. Each is governed by the Privacy Act 1988. And the OAIC has made clear that healthcare providers remain fully accountable for what happens to patient data once it reaches an AI system - including AI systems that are not operated in Australia.

This guide explains what the current legal obligations actually require, where the compliance gaps are, and what your practice needs to put in place.

What the OAIC's AI Guidance Actually Says

In October 2024, the Office of the Australian Information Commissioner published two pieces of guidance specifically addressing AI and privacy: guidance on the use of commercially available AI products, and guidance on developing and training generative AI models. Both are directly relevant to healthcare.

The core message for practices using commercially available AI tools is straightforward: you must not enter personal health information into any AI system unless you have determined that the system is configured to protect that information in a way that is compliant with your Privacy Act obligations. The OAIC's guidance specifically identifies public AI tools - those where user inputs may be used to train future model versions, stored by the vendor, or accessible to vendor staff - as high-risk for sensitive information. Health information is among the most sensitive categories the Privacy Act recognises.

This is not guidance that says AI is prohibited in healthcare. It is guidance that says practices must actively assess the tools they use, understand how those tools handle health data, and have documented that assessment.

The second piece of guidance, on developing and training AI models, is relevant where AI vendors use the data they receive from customers to improve their models. If your practice's patient data - even in de-identified form - is being used by a vendor to train a model that will be deployed commercially, that is a secondary use of health information that may not have been consented to when the information was collected. APP 6 requires that health information be used only for the primary purpose of collection, or with the patient's consent for a secondary purpose, or within a narrow set of permitted exceptions. Vendor model training is not among those exceptions.

The Privacy Act Obligations That Apply to AI

Several of the Australian Privacy Principles have direct application to AI use cases in healthcare.

APP 1: Transparency About AI in Your Privacy Policy

APP 1 requires organisations to have a clearly expressed and up-to-date privacy policy. The OAIC's guidance indicates that practices should disclose when they use AI tools that process personal information, what types of AI they use, and what information is processed. A privacy policy drafted before your practice started using an AI scribe or patient chatbot is already out of date. Many practice privacy policies were written around 2018, when the Notifiable Data Breaches scheme commenced, and have not been meaningfully updated since.

Updating your privacy policy to address AI does not need to be complex. It needs to explain what AI tools you use, what health information they process, where that information is stored and by whom, and what rights patients have. For most GP practices, this means adding a section to the existing policy covering clinical documentation AI, patient-facing tools, and any administrative AI.

APP 6: Health Information Can Only Be Used for Its Primary Purpose

When you collect health information from a patient - through a consultation, a registration form, or a referral - you collect it for the purpose of providing healthcare. APP 6 restricts the use or disclosure of that information to the primary purpose of collection, unless one of the exceptions applies. The main exceptions are: the patient has consented to the secondary use, the secondary use is directly related to the primary purpose and the patient would reasonably expect it, or a law or court order requires it.

AI vendor model training is a secondary purpose. If your AI scribe vendor is using the consultation transcripts your practice generates to improve its language model, that is a disclosure of health information for a purpose other than patient care. Whether that use has a legal basis depends on: what the vendor's terms of service say, whether patients were informed of this use, and whether the data has been genuinely de-identified before use.

Genuine de-identification under Australian law requires that the data cannot reasonably be used to re-identify the individual, either directly or in combination with other data. Removing a patient's name from a consultation transcript does not de-identify it if the transcript contains enough clinical detail to identify the person in context.
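To make that concrete, here is a minimal Python sketch of direct-identifier redaction. The patterns are illustrative placeholders we have chosen for the example, not a legal standard, and the point of the sketch is its limitation: even after every pattern matches, the remaining clinical narrative can still identify the patient.

```python
import re

# Illustrative patterns for direct identifiers only. These are
# assumptions for this sketch, not an OAIC-endorsed standard.
PATTERNS = {
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MEDICARE": re.compile(r"\b\d{10}\b"),  # Medicare card numbers are 10 digits
}

def redact_direct_identifiers(text: str) -> str:
    """Mask direct identifiers in free text.

    This alone is NOT de-identification under Australian law: a
    transcript with names removed can still identify the patient
    if the clinical detail is specific enough in context.
    """
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_direct_identifiers(
    "Mrs Chen, DOB 03/07/1958, Medicare 1234567890, presented with chest pain."
))
# [NAME], DOB [DOB], Medicare [MEDICARE], presented with chest pain.
```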

APP 8: Cross-Border Disclosure Rules Apply to Overseas AI Vendors

Most AI tools used in Australian healthcare are operated by vendors based in the United States or Europe. When your practice sends patient data to an overseas AI system, that is a cross-border disclosure of health information subject to APP 8. The APP 8 rule is that before disclosing personal information overseas, you must take reasonable steps to ensure the overseas recipient handles it in compliance with the Australian Privacy Act - or you must obtain the patient's express consent to the transfer.

"Reasonable steps" under APP 8 typically means: reviewing the vendor's data processing agreement to confirm it includes privacy protections equivalent to Australian law, confirming where data is stored and processed, and ensuring there are contractual obligations on the vendor to notify you of a breach. Simply using a foreign AI tool without reviewing its data terms does not satisfy APP 8.

APP 11: Reasonable Steps to Protect Health Information

APP 11 requires taking reasonable steps to protect health information from misuse, interference, loss, unauthorised access, modification, or disclosure. The Federal Court's October 2025 judgment in the Australian Clinical Labs case established that this is an objective standard assessed holistically across your systems, policies, and procedures.

For AI tools, APP 11 means: the tool should have appropriate access controls, data in transit and at rest should be encrypted, there should be a data breach notification clause in your vendor contract, and the vendor should have a documented security framework. Using a free-tier or consumer AI product to process patient records almost certainly fails this standard.

The Highest-Risk AI Use Cases in Healthcare Practices

AI Clinical Scribes and Transcription Tools

This is the category of highest immediate risk for most GP practices. AI scribes - tools like Heidi, Nabla, Lyrebird, and similar products - listen to or record consultations and generate clinical notes, summaries, or referral letters. They process some of the most sensitive health information your practice holds: the verbatim content of patient consultations.

The compliance questions for AI scribe tools are:

  • Does the patient know their consultation is being recorded and processed by AI?
  • Is there a disclosure in your practice's privacy policy and on the patient registration or consent form?
  • Does the vendor store transcripts, and for how long?
  • Is data stored in Australia or overseas?
  • Has the vendor provided a data processing agreement that addresses APP 8?
  • Is model training opt-out available on your plan?

The patient disclosure question is the most immediate. Most practices that have adopted AI scribes have done so without updating their patient privacy consent process to reflect it. The OAIC's position is that patients should be informed when their health information is processed by AI systems in ways they would not expect.

Administrative AI: ChatGPT, Copilot, and General-Purpose Tools

The use of consumer AI tools by reception and administrative staff is a significant compliance gap in many practices. When a staff member pastes a patient's name, date of birth, and clinical history into ChatGPT to generate a referral letter, that is a disclosure of identifiable health information to a third party - the AI vendor - under whatever terms the vendor applies to free or standard accounts.

OpenAI's standard consumer terms, for example, permit the use of inputs to improve their models unless users opt out or subscribe to a business plan with different terms. The enterprise or business tier of most AI tools includes terms that prevent model training on customer data and typically include data processing agreements. The free and standard consumer tiers generally do not.

This is not a prohibition on using AI for administrative tasks. It is a requirement to use the appropriate tier or configuration of those tools, and to have a documented decision about which tools are permitted and under what conditions.
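One way to operationalise that documented decision is a simple pre-send screen that staff tooling can apply before any text reaches an external AI service. The sketch below is a rough illustration under assumed patterns - a prompt for staff, not a compliance control on its own.

```python
import re

# Assumed patterns for this sketch; a real screen would be broader
# and would sit alongside training and an approved-tools register.
BLOCK_PATTERNS = [
    (re.compile(r"\b\d{10}\b"), "possible Medicare number"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "possible date of birth"),
]

def screen_before_send(text: str, tool_approved: bool) -> list[str]:
    """Return reasons the text should not be sent to an AI tool (empty if clear)."""
    reasons = []
    if not tool_approved:
        reasons.append("tool is not on the practice's approved list")
    for pattern, label in BLOCK_PATTERNS:
        if pattern.search(text):
            reasons.append(f"text contains a {label}")
    return reasons

issues = screen_before_send("Pt DOB 12/04/1961, needs cardiology referral", tool_approved=False)
print("Do not send:", "; ".join(issues) if issues else "clear")
```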

Patient-Facing Chatbots and Symptom Checkers

Practices using patient-facing chatbots for appointment booking, symptom triage, or pre-consultation questionnaires are collecting health information through those tools. The chatbot vendor may be storing that information, transmitting it overseas, or using it to improve their tool. All of the same APP obligations apply.

Patient-facing AI tools also raise informed consent questions: does the patient know they are interacting with AI, and do they understand what happens to the information they provide? A disclaimer buried in terms of service is unlikely to satisfy the OAIC's transparency expectations for health information.

Diagnostic and Clinical Decision Support AI

AI tools that process diagnostic results, flag at-risk patients, or generate referral recommendations interact directly with clinical decision-making. These tools are subject to the same Privacy Act requirements as all other AI systems, and additionally raise questions about the practice's regulatory obligations under the Therapeutic Goods Act if the tool meets the definition of a medical device. The TGA has been progressively issuing guidance on AI-as-a-medical-device since 2023.

For compliance purposes, practices using clinical decision support AI should confirm whether the tool is listed on the Australian Register of Therapeutic Goods, if applicable, and ensure their vendor agreement addresses data privacy, model bias, and liability for clinical recommendations.

What to Look for in an AI Vendor Contract

When a vendor supplies an AI tool that processes patient health information, the vendor contract and data processing agreement are the primary mechanism for meeting your obligations under APP 8 and APP 11. A compliant AI vendor contract for healthcare should include at minimum:

Data processing and use restrictions: The vendor must only process patient health information for the purposes of providing the contracted service. No secondary use, including model training, without your explicit opt-in and a documented legal basis.

Data residency: Where is data stored and processed? For sensitive health information, Australian data residency is preferable. If data is processed overseas, the contract should confirm the jurisdiction and the vendor's compliance with equivalent privacy protections.

Breach notification: The vendor must notify you of any actual or suspected data breach within a timeframe that allows you to meet your NDB scheme obligations (notification to OAIC as soon as practicable after confirming an eligible breach).

Sub-processors: Who else does the vendor share data with? Most AI vendors use third-party cloud infrastructure, speech recognition APIs, or other sub-processors. These should be disclosed and subject to the same contractual protections.

Deletion and return: What happens to patient data when you end the contract? The vendor should be able to provide confirmation of deletion.

Security certifications: What is the vendor's security framework? ISO 27001, SOC 2 Type II, and similar certifications are reasonable evidence of baseline security controls.

If a vendor cannot provide a data processing agreement that addresses these points, or declines to negotiate on data use terms, that is a signal that the tool is not designed for healthcare use.
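A practical way to evidence this review is a structured assessment record per tool. The Python sketch below uses field names we have chosen to mirror the checklist above; it is one possible format, not a prescribed one, and the vendor named is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIVendorAssessment:
    """One record per AI tool, kept as evidence of APP 8 / APP 11 review."""
    vendor: str
    tool: str
    dpa_reviewed: bool               # data processing agreement reviewed
    data_residency: str              # e.g. "Australia" or "US"
    trains_on_inputs: bool           # vendor uses your data for model training
    breach_notification_clause: bool
    sub_processors_disclosed: bool
    deletion_on_termination: bool
    certifications: list[str] = field(default_factory=list)
    reviewed_on: date = field(default_factory=date.today)

    def gaps(self) -> list[str]:
        """Contract points from the checklist that are still unmet."""
        issues = []
        if not self.dpa_reviewed:
            issues.append("no reviewed data processing agreement")
        if self.trains_on_inputs:
            issues.append("vendor trains models on practice inputs")
        if not self.breach_notification_clause:
            issues.append("no breach notification clause")
        if not self.sub_processors_disclosed:
            issues.append("sub-processors not disclosed")
        if not self.deletion_on_termination:
            issues.append("no deletion/return commitment")
        return issues

# Hypothetical vendor for illustration.
record = AIVendorAssessment(
    vendor="ExampleScribe Pty Ltd", tool="AI scribe",
    dpa_reviewed=True, data_residency="Australia", trains_on_inputs=False,
    breach_notification_clause=True, sub_processors_disclosed=True,
    deletion_on_termination=True, certifications=["ISO 27001"],
)
print(record.gaps() or "no gaps identified")
```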

Writing an AI Acceptable Use Policy for Your Practice

An AI acceptable use policy does not need to be long, but it needs to exist and be communicated to staff. At minimum, it should address:

Approved tools: Which AI tools may be used for which purposes. Include the specific tier or configuration (for example, Copilot for Microsoft 365 on a business plan, not the consumer version); a sketch of such a register follows this list.

Prohibited use cases: No patient identifiable information to be entered into unapproved AI tools. This should be explicit and specific: names, dates of birth, Medicare numbers, clinical details, referral content, and consultation notes.

Patient disclosure: How and when patients are informed that their information may be processed by AI. This should reference the privacy policy and any relevant consent forms.

De-identification before use: If staff use general-purpose AI tools for tasks like generating template letters or drafting non-patient-specific content, the policy should clarify what de-identification means in practice.

Review process: Who is responsible for assessing new AI tools before adoption, and what the assessment process covers (vendor contract review, data residency, privacy impact assessment).

Training: All staff who use AI tools should complete a short training session covering the policy and the reasons for the restrictions. The completion of that training should be documented.
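The approved-tools register mentioned above can be kept in machine-readable form so the same list drives both the written policy and any tooling. Below is a minimal Python sketch; the tool names and tiers are placeholders, not recommendations.

```python
# Placeholder tool names and tiers; substitute your practice's
# actual approved tools and plans.
APPROVED_TOOLS = {
    "copilot-m365-business": {
        "purposes": {"drafting correspondence", "summarising non-clinical documents"},
        "patient_identifiable_input": False,
        "tier": "business plan, not consumer",
    },
    "scribe-enterprise": {
        "purposes": {"clinical documentation"},
        "patient_identifiable_input": True,  # permitted only because the DPA covers it
        "tier": "enterprise plan with signed DPA",
    },
}

def is_permitted(tool_id: str, purpose: str, contains_patient_info: bool) -> bool:
    """Check a proposed use against the register; unapproved tools are always denied."""
    tool = APPROVED_TOOLS.get(tool_id)
    if tool is None:
        return False
    if contains_patient_info and not tool["patient_identifiable_input"]:
        return False
    return purpose in tool["purposes"]

print(is_permitted("chatgpt-free", "drafting correspondence", contains_patient_info=True))  # False
```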

The December 2026 Automated Decision-Making Deadline

The Privacy and Other Legislation Amendment Act 2024 introduced new requirements for automated decision-making that are likely to affect healthcare practices more than most realise. From 10 December 2026, privacy policies must disclose whether the organisation uses automated systems to make decisions that could significantly affect individuals.

In a healthcare context, automated decision-making systems include: billing software that automatically applies or declines Medicare item eligibility, AI-assisted triage tools that route patients to different appointment types, clinical decision support tools that recommend (or flag against) specific clinical actions, and patient risk stratification systems that determine follow-up frequency.

The disclosure obligation applies to the existence and general operation of these systems - not to the underlying algorithm or model itself. But practices that currently have no mention of AI or automated systems in their privacy policy need to add it before the December 2026 deadline.

A detailed guide to the automated decision-making obligations and what they mean in practice is forthcoming on the ClinicComply blog. For now, the immediate step is to audit which AI or algorithmic systems in your practice make or significantly influence decisions about patients, and document them in preparation for the privacy policy update.
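As a starting point for that audit, a plain inventory is enough. The sketch below lists hypothetical systems and generates the kind of plain-language line that will eventually sit in the privacy policy.

```python
# Hypothetical systems; replace with the audit results for your practice.
ADM_SYSTEMS = [
    {"system": "billing software", "decision": "automatically applies or declines Medicare item eligibility"},
    {"system": "AI triage tool", "decision": "routes patients to different appointment types"},
    {"system": "risk stratification tool", "decision": "determines follow-up frequency"},
]

for s in ADM_SYSTEMS:
    print(f"We use an automated system ({s['system']}) that {s['decision']}.")
```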

How ClinicComply Helps

ClinicComply tracks your practice's privacy compliance obligations as a built-in framework alongside RACGP accreditation, cybersecurity, Medicare, and other regulatory requirements. As the OAIC's AI guidance develops and the Privacy Act reform requirements take effect, those obligations can be tracked, assigned to the right team member, and documented with evidence in one place.

For practices that are mid-way through adopting AI tools and have not completed the vendor assessment and policy work that those tools require, ClinicComply provides a structured way to capture what has been done, what remains, and when things need to be reviewed.

The December 2026 privacy policy update deadline is eight months away. If your practice has not started the audit of AI systems and automated decisions that deadline requires, now is the right time to begin. Start your free trial at cliniccomply.com.au.


Frequently Asked Questions

Can Australian healthcare practices use AI tools like ChatGPT for patient information?

Not on standard consumer accounts. The OAIC's October 2024 guidance warns against entering personal health information into AI tools that are not configured to prevent that information from being used for model training or accessed by the vendor. Consumer or free-tier accounts of general-purpose AI tools like ChatGPT typically include terms that permit input data to be used for model improvement. Healthcare practices using these tools must use enterprise or business plans with a data processing agreement that prevents secondary use of patient data. Using unapproved AI tools with patient information is a likely breach of APP 6 (secondary use), APP 8 (cross-border disclosure), and APP 11 (reasonable security steps).

Does Australia's Privacy Act apply to overseas AI vendors used by healthcare practices?

Yes. When an Australian healthcare practice sends patient health information to an overseas AI vendor, that is a cross-border disclosure subject to APP 8 of the Privacy Act 1988. The practice must take reasonable steps to ensure the overseas recipient handles the information in compliance with the Australian Privacy Principles, or obtain the patient's express consent to the transfer. Reasonable steps typically mean reviewing the vendor's data processing agreement, confirming data residency, and ensuring contractual breach notification obligations are in place.

What is the automated decision-making privacy disclosure deadline for healthcare practices?

From 10 December 2026, healthcare practices must update their privacy policies to disclose whether they use automated systems to make decisions that could significantly affect individuals. This covers billing software that automatically assesses Medicare eligibility, AI triage tools, clinical decision support systems, and patient risk stratification tools. The disclosure obligation requires practices to identify which automated or AI-assisted decision systems they use and document them in an updated privacy policy before the deadline.

What are the privacy risks of AI clinical scribes in Australian healthcare?

AI clinical scribes process sensitive health information - verbatim consultation content - in real time. Privacy risks include: patients not being informed that their consultation is being recorded and processed by AI (transparency breach under APP 1), consultation transcripts being stored by the vendor in overseas data centres without APP 8 safeguards, vendor use of transcripts to train AI models without a legal basis under APP 6, and inadequate security controls failing the APP 11 standard. Practices should confirm patient consent processes, review vendor data processing agreements, check data residency, and ensure model training is opt-out before deploying AI scribe tools.

What should be in an AI acceptable use policy for a healthcare practice?

An AI acceptable use policy for a healthcare practice should include: a list of approved AI tools and the specific account tier permitted, explicit prohibition on entering identifiable patient information into unapproved tools, patient disclosure requirements (how and when patients are told their information is processed by AI), de-identification standards for using general-purpose AI tools, the process for assessing new AI tools before adoption (vendor contract review, data residency check, privacy impact assessment), and training requirements for staff using AI tools. The policy should be communicated to all staff and completion documented.

Is a patient's consent required before using AI to process their health information?

The Privacy Act 1988 requires health information to be used only for the primary purpose of collection - providing healthcare - unless an exception applies. The main exception is patient consent. For AI tools where health information will be processed in ways patients would not reasonably expect (such as transcription by an overseas AI system, or use in model training), informed consent from the patient is the safest legal basis. The OAIC's guidance suggests patients should be informed when their information is processed by AI in unexpected ways, even where consent is not strictly required. Updating your patient registration form and privacy policy to address AI use is the practical mechanism.

Ready to get started?

Your next accreditation visit starts today.

Join Australian GP clinics and medical practices that have replaced spreadsheets and email threads with a single healthcare compliance platform. Your free trial starts the moment you sign up.

No credit card required
Australian data residency
Cancel anytime