WHO: Unpacking artificial intelligence in sexual and reproductive health and rights

26 March 2024

New brief considers risks and opportunities

22 March 2024 | Departmental news

https://www.who.int/news/item/22-03-2024-unpacking-artifical-intelligenc...

A new technical brief by the World Health Organization (WHO) and the UN Special Programme on Human Reproduction (HRP) explores the application of artificial intelligence (AI) in sexual and reproductive health and rights (SRHR) and evaluates both opportunities and risks of this rapidly advancing technology. This brief is informational and complements WHO’s recent guidance on artificial intelligence, including on regulatory considerations and on ethics and governance.

“AI is already transforming technology for sexual and reproductive health. If we’re aware of the potential dangers, cautious about implementation, and recognize AI as a tool and not a solution, we have a great opportunity to make sexual and reproductive services and information more accessible to all,” said Dr Pascale Allotey, Director of HRP and WHO’s Department of Sexual and Reproductive Health.

Researchers developed the technical brief through expert consultation and a scoping review to assess trends and potential challenges ahead. They found that current applications of AI within sexual and reproductive health often focus on screening for and predicting health concerns, in areas such as infertility and pregnancy, as well as on access to information through conversational agents, or chatbots.

However, the use of AI also carries potential harms and risks. These include data-sharing practices and breaches that expose sensitive information – for instance, around fertility or sexual health; bias in training data sets that leads to inaccurate results; access problems due to unequal global digital access and connectivity; misinformation; misuse; and more. Another challenge is the potential lack of transparency in how the systems are developed or applied, while some tools may lack accuracy for underrepresented populations. Although these are common challenges across AI systems and tools generally, the field of SRHR often grapples with underlying issues of access and equity, which AI can greatly amplify and exacerbate.

"The way that the AI ecosystem is developed, governed and regulated has an enormous impact on its effects in the real world," said Dr Jeremy Farrar, Chief Scientist at WHO. "It's especially crucial to be thoughtful and deliberative at every stage of the process to ensure ethical and equitable distribution."

How can policymakers ensure that AI is developed and applied in the most equitable way?

In accordance with existing guidance, the brief outlines priority actions and considerations to mitigate the specific risks of AI in SRHR, including the need to revisit data protection regulations to protect privacy, redress data breaches, and ensure duty of care and limitations in how SRHR data is shared for third-party use. To ensure inclusivity, training data should be as diverse and representative as possible. The AI industry should work to recruit diverse developers to build these systems with attention to the needs of marginalized groups, and to establish standards for AI-driven procedures for SRHR. On a macro level, the brief reiterates calls for an increased focus on addressing misinformation and targeted disinformation among potential users.

All of these actions will need to be prioritized by local and international regulatory bodies, working together to ensure that AI is developed in accordance with ethical guidelines.

“It is vital that developers and regulators understand that equity is paramount when it comes to artificial intelligence,” said Dr Alain Labrique, Director of WHO’s Department of Digital Health and Innovation. “Systems must be designed with all stakeholders engaged, and no one should be left behind. Digital technologies can uplift everyone, if we are intentionally inclusive, address people’s diverse needs and safeguard their rights.”

The brief is available here: https://www.who.int/publications/i/item/9789240090705

HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org