Artificial Intelligence
13 Dec 2024
Patient safety and quality of care
Healthcare providers must be satisfied that AI systems meet safety standards, minimise harm, and deliver a quality of care that is at least equivalent to human-led interventions. According to recent guidance published by the Australian Health Practitioner Regulation Agency (Ahpra),1 AI should support the clinician’s judgement, not replace it, thereby maintaining a human-centred approach to healthcare.
The Therapeutic Goods Administration (TGA) plays a critical role in this validation process, particularly for AI systems classified as medical devices. The TGA ensures these systems meet safety and performance standards before they are deployed in clinical practice, in the interests of patient safety. Developers and manufacturers must adhere to stringent regulatory requirements, including post-market surveillance and reporting of adverse events.
According to the TGA, software will be considered a medical device where its intended medical purpose includes one or more of the following:
- diagnosis, prevention, monitoring, prediction, prognosis or treatment of a disease, injury or disability
- investigation, replacement or modification of the anatomy, or of a physiological process or state
- control or support of conception.
Apps that track a person’s health information to diagnose diabetes, or software that analyses skin images to screen for melanoma, are deemed to be medical devices2 – whereas generative AI tools used in clinical practice (such as AI scribing) are not regulated by the TGA.
The AMA issued a position statement in 2023 on the application of AI in healthcare,3 including automated decision making (ADM) and large language models (LLMs).
Informed consent and transparency
Informed consent is a cornerstone of medical ethics, and it extends to the use of AI in healthcare. Patients should be informed about the involvement of AI in their care, and have the right to understand how AI tools might affect their diagnosis or treatment. Transparency is essential for maintaining trust – patients must be aware of the role AI plays in their treatment, and the potential risks and limitations associated with it.
Accountability and responsibility
One of the significant challenges in using AI is determining accountability when errors occur. Healthcare professionals are ultimately responsible for the decisions made using AI tools. However, the lack of clear guidelines regarding the division of responsibility between the AI system developers, healthcare institutions, and the professionals using these tools presents an ongoing challenge. Ahpra stresses the need for healthcare providers to remain accountable for the outcomes of their clinical decisions, even when AI is used as a supportive tool.
Data privacy and security
AI systems in healthcare rely on vast amounts of patient data to train algorithms and improve accuracy. Healthcare professionals must protect sensitive health information and ensure that AI tools comply with data protection requirements pursuant to the Australian Privacy Principles and the Privacy Act.4 Ahpra also highlights the importance of safeguarding patient privacy, ensuring that data collected for AI-driven processes is securely managed and protected from breaches.
REGULATION
The regulation of AI in healthcare is still in its early stages, with regulatory bodies working to establish frameworks that ensure the safe, ethical and effective use of AI. Several key regulatory measures are being developed and enforced globally and within Australia. However, Australia’s current regulatory framework is not yet fit for purpose to respond to the risks AI poses.
Ahpra’s role in AI regulation
Ahpra has taken a proactive role in providing guidance and regulatory oversight for the safe and ethical use of AI in healthcare in Australia. Ahpra recognises the transformative potential of AI technologies, while emphasising the need for healthcare professionals to understand its risks and limitations.
TGA regulation of AI as medical devices
The TGA plays a crucial regulatory role in ensuring that AI systems used in healthcare meet the necessary safety and efficacy standards. AI technologies that are classified as medical devices – such as those used in diagnostic imaging or treatment recommendations – must undergo a comprehensive evaluation by the TGA before they can be approved for use in Australia.5 The TGA also monitors the ongoing performance of AI-based medical devices through post-market surveillance to ensure they continue to meet safety and performance requirements.
Voluntary safety standards
In August 2024, the Department of Industry, Science and Resources (DISR) issued a voluntary AI standard providing guidance around responsible AI implementation while regulation is being developed. The Voluntary AI Safety Standard6 sets out 10 ‘guardrails’ designed to provide practical guidance to AI developers and AI deployers on the safe and responsible development and deployment of AI systems in Australia.
In September 2024, the DISR issued a proposals paper on introducing mandatory guardrails for AI in high-risk settings7 – which include healthcare.
Ethical guidelines
The DISR published Australia’s eight Artificial Intelligence (AI) Ethics Principles in 2019. The principles support the importance of professional oversight, ensuring AI systems complement rather than replace clinical decision-making, and the need for clinicians to remain informed about the tools they use.
References
1. https://www.ahpra.gov.au/Resources/Artificial-Intelligence-in-healthcare.aspx
2. https://www6.austlii.edu.au/cgi-bin/viewdoc/au/legis/cth/consol_act/tga1989191/s41bd.html
3. https://www.ama.com.au/index.php/articles/artificial-intelligence-healthcare
4. https://www8.austlii.edu.au/cgi-bin/viewdb/au/legis/cth/consol_act/pa1988108/
5. https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf