The Digital Health Cooperative Research Centre (DHCRC) has unveiled a new initiative with the Department of Health and Aged Care and two specialist AI teams within the University of Technology Sydney (UTS Rapido and the UTS Human Technology Institute) that will create an online tool to help classify the different types of Artificial Intelligence (AI) solutions used in Australian healthcare.
This complements efforts by governments, peak organisations, and clinical professional and safety bodies to ensure AI is deployed safely into health care settings.
The online tool will adapt the Organisation for Economic Co-operation and Development (OECD) AI Classification Framework to the Australian context, taking into account recent government policies, including the proposed mandatory guardrails for AI across the Australian economy.
The OECD AI Network of Experts developed the AI Classification Framework as a tool for policy-makers, regulators, legislators, and those procuring AI solutions to assess the opportunities and risks that different types of AI systems present. This project will see the tool being adapted for healthcare organisations in Australia, supporting the safe adoption of AI.
The framework’s dimensions cover users and stakeholders; the economic context in which the AI will be used; data collection, format, scale and appropriateness; AI models; and the tasks of the AI system.
Having been endorsed by 46 countries, including Australia, the framework provides an internationally recognised baseline for classifying AI systems and in turn for assessing the effectiveness of national AI strategies.
The project will road test the localised framework with developers, deployers and end users of AI solutions in healthcare to ensure consistency in the classification of AI systems and alignment with existing software-as-a-medical-device regulation and the emerging regulatory guardrails in Australia.
The dynamic, interactive tool will be a first-of-its-kind initiative, highlighting specific risks associated with bias, explainability and robustness of AI within healthcare.
This project will deliver a self-serve advisory and benchmarking tool for AI developers, users, and policy makers, specifically tailored for the Australian healthcare sector. The project team is aiming to have a basic web tool ready for testing by mid-2025.
Participant quotes
Annette Schmiede, Digital Health CRC CEO, said:
“To complement the work of government and industry to define AI ethics principles, develop AI risk assessments, and provide guardrails for the safe and responsible use of AI, there needs to be a standardised approach to classifying the varied types of AI systems in use. The availability and adoption of AI is without doubt moving at a rapid pace across all sectors, including healthcare”, said Ms Schmiede. “The challenge is building clear and consistent guidance and tools, ensuring these are effective for the diverse range of audiences and AI solutions across healthcare including developers, health care providers and consumers.”
Sam Peascod, Assistant Secretary, Digital and Service Design, Department of Health and Aged Care, said:
“As Government looks to build community trust and promote AI adoption, we need to provide guidance on how to use AI safely and responsibly. Having a tool that can assist in classifying and performing a risk assessment of AI technologies will support the adoption of AI solutions by health care organisations and health care providers, ultimately leading to better health outcomes for consumers.”
Professor Adam Berry, Deputy Director of the UTS Human Technology Institute, said:
“For AI to realise its tremendous promise for all, it depends upon responsible practice. A critical first step to realising that practice is to be consistent in documenting how individual AI systems are used, how they function, and how they deliver impact across diverse stakeholders. That consistency helps us build common approaches for assessing and addressing risk and enables everyone to talk clearly about the use of AI. By preserving the integrity of the tool as developed by the OECD, there will be potential to extend this consistency, enabling states, organisations, and people from across the 46 OECD countries to compare the effectiveness of policies, procedures and controls as the field of AI expands.”
Hervé Harvard, Executive Director, UTS Rapido, said:
“Partnering with universities like UTS and its innovation hub UTS Rapido enables industry to leverage cutting-edge technology to enhance safety and responsibly navigate the transformative impact of AI in healthcare.”
Raj Calisa, Principal Delivery Manager, UTS Rapido, said:
“The exciting aspect of this project is to build and test an interactive tool that provides great user experience and where the ‘smarts’ behind the scenes can be dynamically refined with subsequent iterations.”