Australia's peak medical association recently warned doctors in Perth hospitals against using ChatGPT for note-taking, and is calling for stringent regulation of, and transparency about, AI use within healthcare.
The Australian Medical Association (AMA) recently provided input in response to the federal government's discussion paper on safe and responsible artificial intelligence, emphasizing the need for comprehensive regulations that protect patients and healthcare professionals while instilling confidence in AI systems across the healthcare industry.
The AMA argues that Australia's regulation of AI lags behind that of other countries, putting patient safety and privacy at risk.
Implementing artificial intelligence in healthcare poses unique difficulties. At five hospitals in Perth's South Metropolitan Health Service, staff were discovered using ChatGPT to write medical records, despite concerns about patient confidentiality and the potential risks of its adoption.
Paul Forden, CEO of the South Metropolitan Health Service, expressed concern about patient confidentiality when AI bot technology such as ChatGPT is used. Patient data protection must remain top of mind at all times, and any uncertainty in this regard calls for extra caution.
AI holds immense promise to transform healthcare for the better. However, it must complement rather than replace clinical expertise: clinicians must remain the ultimate decision-makers in matters of patient care.
Patients also have a role in this process, giving informed consent before receiving treatments or diagnostic procedures that utilize AI technology.
The AMA emphasizes the need for ethical oversight of AI systems so that they do not exacerbate health disparities, contribute to unequal access, or perpetuate bias; safeguards are therefore paramount to promote fairness and equity when applying AI in healthcare.
As part of examining regulatory frameworks, the Australian Medical Association advises looking towards established models like the EU Artificial Intelligence Act.
That act categorizes AI technologies by the risks they pose and creates an oversight board to ensure responsible implementation. Canadian regulations that mandate human intervention points could also offer valuable lessons for Australian regulation.
“Future regulation should ensure that clinical decisions that are influenced by AI are made with specified human intervention points during the decision-making process,” the submission states.
“The final decision must always be made by a human, and this decision must be a meaningful decision, not merely a tick box exercise.”
“The regulation should make clear that the ultimate decision on patient care should always be made by a human, usually a medical practitioner.”
Google's Chief Health Officer, Dr. Karen DeSalvo, believes AI will improve health outcomes; however, an appropriate balance must be struck, and AI models must be properly constrained to guarantee accuracy, consistency, and ethical adherence.
A recent Google research study published in Nature demonstrated the impressive abilities of the company's large medical language model, Med-PaLM 2.
The model provided accurate answers to commonly searched medical queries 92.9% of the time, illustrating AI's promise for healthcare while underlining the need for cautious implementation.