The World Health Organization (WHO) on Thursday called for better regulation of the use and potential misuse of artificial intelligence (AI) in the healthcare industry.
Its new publication emphasises the importance of establishing safe and effective AI systems, and of fostering dialogue among developers, regulators, manufacturers, health workers, and patients about using AI as a positive tool.
With the increasing availability of healthcare data and rapid progress in analytic techniques, WHO recognizes the potential of AI to enhance health outcomes by strengthening clinical trials, improving medical diagnosis, and supplementing healthcare professionals’ knowledge and competencies.
When using health data, however, AI systems could potentially access sensitive personal information, necessitating robust legal and regulatory frameworks for safeguarding privacy, security, and integrity.
“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Tedros Adhanom Ghebreyesus, WHO Director-General.
In response to the growing need to responsibly manage the rapid rise of AI health technologies, WHO is stressing the importance of transparency and documentation, risk management, and external validation of data.
“This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks,” said Mr. Ghebreyesus.
The challenges posed by important, complex regulations – such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States – are addressed with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection.
AI systems are complex and depend not only on the code they are built with but also on the data they are trained on, said WHO. Better regulation can help manage the risks of AI amplifying biases in training data.
It can be difficult for AI models to accurately represent the diversity of the populations they serve, leading to biases, inaccuracies, or even outright failure.
To help mitigate these risks, regulations can be used to ensure that attributes such as gender, race and ethnicity are reported, and that datasets are intentionally made representative.
A commitment to quality data is vital to ensuring systems do not amplify biases and errors, the report stressed.