AI Ethics in delivering Healthcare Analytics

By Dr. Anthony J. Rhem, CEO/Principal Consultant, A.J. Rhem & Associates

AI provides the mechanisms that enable machines to learn. Incorporating AI in the delivery of healthcare knowledge can facilitate fast, efficient, and accurate healthcare decision making, and it offers capabilities to expand, use, and create knowledge in ways we have not yet imagined. AI systems that use machine learning can detect patterns in enormous volumes of data and model complex, interdependent systems to generate outcomes that improve the efficiency of healthcare decision making. The knowledge such systems deliver depends on the data used to train the machine learning algorithms. Because AI has been positioned as an important element in delivering health-related knowledge, it is imperative that when AI is used to assist patients and physicians in making healthcare decisions, the knowledge is free of bias and the decisions made with it are ethical.

Healthcare Analytics

The healthcare sector is a knowledge-intensive industry that depends on data and analytics to improve the delivery of healthcare (treatments, practices, procedures). There has been tremendous growth in the range of information collected, including clinical, genetic, behavioral, and environmental data, with healthcare professionals, biomedical researchers, and patients producing vast amounts of data from an array of devices (for example, genome sequencing machines, high-resolution medical imaging, and smartphone applications). How this data is collected, used, and protected raises challenges that every country will have to address under its own legal standards. In the United States, organizations such as the Food and Drug Administration (FDA) and policies such as the Health Insurance Portability and Accountability Act (HIPAA) of 1996 are in place to ensure that standards, guidelines, data security, and privacy are adhered to and enforced. However, when it comes to AI applications in healthcare, there must also be standards in place, in particular ethical standards, to protect the healthcare consumer when AI applications are used.

Ethical Issues of AI Delivery of Healthcare Knowledge

AI applications in healthcare have produced many benefits by delivering knowledge to detect health conditions early, deliver preventative services, optimize clinical decision making, discover new treatments and medications, and deliver personalized healthcare, while providing powerful self-monitoring tools, applications, and trackers. Although AI in healthcare offers many benefits, it also raises policy questions and concerns, including access to (health) data and privacy, which encompasses personal data protection, as well as ethical issues caused by bias in AI algorithms.

Bias in AI algorithms for healthcare can have catastrophic consequences by propagating deeply rooted societal biases. This can result in misdiagnosing certain patient groups, such as gender and ethnic minorities that have a history of being underrepresented in existing datasets, further amplifying inequalities. Cognitive biases, systematic errors in thinking inherited through cultural and personal experiences, also occur and lead to distorted perceptions when making decisions. Although data might seem objective, it is collected and analyzed by humans, and thus can be biased.
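The kind of dataset-driven disparity described above can often be surfaced with a simple audit: comparing a model's error rates across patient groups. The sketch below is illustrative only; the function name and the toy predictions are hypothetical, not drawn from any real clinical system.

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute the misclassification rate of a diagnostic model,
    broken down by patient group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical outputs from a screening model on eight patients:
y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # actual condition present (1) or not (0)
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]   # model's prediction
groups = ["A", "A", "B", "B", "A", "B", "B", "A"]

rates = error_rates_by_group(y_true, y_pred, groups)
# Group B is misclassified far more often than group A here,
# exactly the kind of disparity an audit should surface.
```

Disaggregating a single accuracy number this way is a minimal first step; a fuller audit would also compare false-negative and false-positive rates separately, since a missed diagnosis and a false alarm carry very different clinical costs.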

Open science practices can assist in moving toward fairness in AI for health care. These include (1) participant-centered development of AI algorithms and participatory science; (2) responsible data sharing and inclusive data standards to support interoperability; and (3) code sharing, including sharing of AI algorithms that can synthesize underrepresented data to address bias. Future research needs to focus on developing standards for AI in health care that enable transparency and data sharing, while at the same time preserving patients’ privacy.
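One simple (if blunt) way to reduce the impact of underrepresentation in a training set, short of synthesizing new data, is to rebalance it before training, for example by oversampling the smaller groups. The sketch below assumes records are dictionaries tagged with a group field; the function name and field name are hypothetical.

```python
import random

def oversample_minority(records, group_key):
    """Duplicate records from underrepresented groups (sampled with
    replacement) until every group matches the largest group's size."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Hypothetical dataset: group "A" outnumbers group "B" three to one.
records = [{"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "B"}]
balanced = oversample_minority(records, "group")
# The balanced set now contains equal numbers of "A" and "B" records.
```

Oversampling only duplicates what is already there, so it cannot add missing clinical diversity; it is a stopgap next to the more fundamental fixes of inclusive data collection and data sharing described above.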

In their 2019 paper in the Journal of Global Health, “Artificial intelligence and algorithmic bias: implications for health systems,” Panch, Mattie, and Atun define algorithmic bias as the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems [1].

The Need for a People-Centered AI Approach

People-centered AI focuses on improving the human condition. The central theme of the OECD AI standard is to support, guide, and influence policies that enable AI applications to be people-centered. Under this standard, AI applications should support the inclusivity and well-being of the people they serve; respect human-centered values and fairness; be designed, developed, and implemented with transparency; be robust and safe; and provide accountability for the results and decisions they produce or influence (OECD, 2019) [2].

AI development must ensure inclusivity and well-being. Artificial intelligence plays an increasingly influential role, and as the technology diffuses, the potential impacts of its predictions, recommendations, and decisions on people's lives increase as well. The technical, business, and policy communities are actively exploring how best to make AI people-centered and trustworthy, maximize benefits, minimize risks, and promote social acceptance.


AI has the potential to exacerbate existing inequalities and divides in AI resources, technology, talent, data, and computing power, which in turn can lead to AI perpetuating biases and harming vulnerable and underrepresented populations. At the same time, in many cases AI can reduce humans' subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy on the training data. Some evidence also shows that algorithms can improve decision making and make it fairer in the process. It is therefore important to apply ethical AI practices to remove bias from our AI solutions and from the knowledge these solutions provide to drive decision making.

Algorithms in health care technology don’t simply reflect back social inequities but may ultimately exacerbate them. What does this mean in practice, how does it manifest, and how can it be counteracted? These are the questions that must be answered going forward so that the ethical challenges for AI applied in healthcare can be met.

[1] Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health 2019; 9: 020318.

[2] OECD (2019). OECD Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. Retrieved April 20, 2022, from http://legalinstruments.oecd.org
