Leadership Survey: Data Management in Healthcare – Do We Need Ethical Leadership?



Authors: Rachael Spooner (USA, Co-chair), Heikki Yli-Ollila (Finland), Márcio Reis (Portugal). Reviewed by Tavy Alford (IHF, Intern)


The digitization of health information provides an enormous opportunity to improve patient care by providing and predicting accurate diagnoses, optimizing and customizing treatment plans, improving patient flow, preventing accidents, supporting pandemic responses, and accelerating advancements in medical research. Tools such as Electronic Health Records (EHRs), mobile health apps, and Artificial Intelligence algorithms that can learn from data are the present and future of healthcare. Acknowledging all the benefits that big data and digital disease detection can provide, there are three areas of ethical issues that must be analyzed and addressed:1,2

1 – Context Sensitivity:

    • Differentiating between commercial versus public health uses of data – Is consent required and can it be revoked?
    • User agreements, terms of service, participatory epidemiology – are users protected in all contexts (irrespective of privacy laws that differ according to jurisdiction)?
    • Global health issues – is privately collected data open to global public health use?


2 – Nexus of Ethics and Methodology:

    • Robust methodology, algorithm validation, recalibration, noise filtering, and feedback – have false identification or inaccurate predictions been avoided? Who is responsible?
    • Data Provenance – awareness of public health uses of personal data – does the public know and understand when and how their information is used?


3 – Legitimacy Requirements

    • Lack of common standards – who is responsible for monitoring data use? Which standards should be upheld? What are the consequences of improper use or inaccurate results?
    • Communication to the public – how do we manage expectations?


While there are many ethical considerations in healthcare, the Covid-19 pandemic has accelerated digital transformation, bringing it to the forefront of discussions. In 2021, the WHO published “Ethics and governance of artificial intelligence for health: WHO guidance”. This report endorses six key ethical principles: (1) Protect autonomy; (2) Promote human well-being, human safety, and the public interest; (3) Ensure transparency, explainability, and intelligibility; (4) Foster responsibility and accountability; (5) Ensure inclusiveness and equity; (6) Promote AI that is responsive and sustainable.3 In the past several years, there have also been several ransomware attacks, questions around biases in AI algorithms, and evolving definitions of data ownership and use.4

The ethical challenges for healthcare organizations are clear, and in this work we were able to take the pulse of healthcare leaders on these matters.


During the summer of 2021, we conducted an international survey of healthcare leaders and professionals on ethical health data management. Participation in the survey was voluntary and anonymous, and no identifiable information was collected.

In the survey, we requested information regarding the functions, seniority, age bracket, and country of the respondents, as well as their opinions about:

    • The level of ethical maturity in their organization concerning health data management
    • Their top three concerns about health digitalization
    • Which WHO key ethical principles for the use of AI for health are the most challenging to address
    • Their first priority for improving health data governance across their organization



We received 30 replies from Canada, Finland, Portugal, the UK, the USA, and Spain. Twenty-eight of the respondents were leaders and managers (53% Director level) and two were individual contributors. 27% work in clinical care, 17% were Chief Operating Officers, and another 17% were Data Protection Officers. A large portion of the respondents had decades of professional experience, with 40% falling in the 55-to-64 age bracket.

Respondents unanimously agreed that there is an ethical obligation in the governance of health data.

87% of respondents rated the level of ethical maturity at their organization as regular or high.

A regular level indicates that the organization has a data protection officer and some policies or procedures implemented. A high level requires that ethical concerns be embedded in the organization's policies and that all staff be aware of them (through training programs).

The perception of progress towards high levels of data ethics maturity is encouraging to see; however, one respondent, a director-level clinical care provider, reported “low”. This is concerning and should be kept in mind as senior leaders enact policies and communication strategies. A component of ethical data management is communicating the efforts and expectations to all levels and areas of staff in a way that is understandable and actionable in their roles.


The top three concerns reported about the rapid digitalization of healthcare were cyberattacks, safeguarding the anonymization of patient data, and AI's automatic data collection and decision-making.

In the US, in July 2021 alone, 52 hacking/IT incidents were reported in which the protected health information of 5,393,331 individuals was potentially compromised.5

It is not possible to say whether the survey is validated by, or simply reflective of, the impact news headlines have on top-of-mind issues for healthcare leaders. However, as ethical leaders we need to be forward-looking, identifying and mitigating future problems while improving protections against current issues.

At the same time, an investigation from UC Berkeley, published in JAMA Network Open in 2018, suggests that despite data aggregation and the removal of protected health information, de-identified physical activity data collected from wearable devices can be re-identified using machine-learning algorithms. This means that current practices for the de-identification of data are insufficient to ensure individuals' privacy.6 Working together with device companies and technical specialists will be key to building effective, safe, and trusted predictive models in the future.
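To make the linkage risk concrete, the following is a minimal, hypothetical sketch (not the method used in the study cited above): even after names are stripped, a person's activity pattern can act as a behavioral fingerprint, and an adversary holding identified auxiliary data can match released records to identities by simple nearest-neighbor distance. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: each person's week of step counts is a
# behavioral fingerprint (50 people, 7 days of data each).
n_people, n_days = 50, 7
profiles = rng.normal(8000, 2500, size=(n_people, n_days))

# "De-identified" release: identifiers removed, rows shuffled, and a
# little measurement noise added before publication.
release_order = rng.permutation(n_people)
released = profiles[release_order] + rng.normal(0, 200, size=(n_people, n_days))

# Linkage attack: an adversary with identified auxiliary data (profiles)
# matches each released row to its nearest neighbor by Euclidean distance.
dists = np.linalg.norm(released[:, None, :] - profiles[None, :, :], axis=2)
guessed_identity = dists.argmin(axis=1)

reidentified = (guessed_identity == release_order).mean()
print(f"Re-identified: {reidentified:.0%} of records")
```

With only seven days of data and modest noise, nearest-neighbor matching re-identifies essentially every record in this toy setting, which is why removing identifiers alone is a weak privacy guarantee.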

Regarding the WHO ethical principles for the use of AI for health, survey respondents highlighted the three principles most challenging to address (in this order):

1 – Ensuring transparency, explainability and intelligibility (AI must be intelligible and/or understandable to everyone, from developers to patients)

This was clearly the principle evaluated to be the most difficult to address (43% of respondents) and was also considered a keystone principle: if transparency, explainability, and intelligibility are not ensured, it is hard to achieve the rest of the principles. The main concerns were citizens' lack of digital skills and the fact that some AI methods are virtually impossible to explain to an average citizen.

Even for healthcare professionals who believe in the value of using AI to analyse patterns in medical images and to make predictions about the likely presence or absence of disease, it is imperative that AI systems be able to explain the reasoning behind a decision.7

2 – Protecting human autonomy (the risk that machine decision-making overrides human control of healthcare systems and medical decisions)

20% of respondents found this principle the most challenging to address. In an AI-driven environment, human autonomy risks becoming increasingly irrelevant; nevertheless, the principle of autonomy is perceived as important to respect. At the same time, respondents agreed that situations in which automatically generated AI-based classifiers override human control are becoming more common.

This fear, or risk, must be addressed alongside a legal data protection framework that ensures humans remain in full control of medical decisions, protects privacy and confidentiality, and ensures patients provide informed and valid consent.3

3 – Promoting AI that is responsive and sustainable (AI systems must match the expectations and requirements promoted for them, while being consistent with efforts to reduce human impact on the environment. Anticipated disruptions in the workplace cannot be ignored.)

Promoting AI that is responsive and sustainable was named as a top challenge by 17% of respondents. Ensuring that the intentions of AI programs align with the outcomes and impact they drive will become increasingly difficult as underlying algorithms grow more complex. Effective controls should be routinely reviewed to assess actual versus expected outcomes and to identify new or updated requirements. Efforts to reduce human impact on the environment, including biohazardous waste and fossil fuel use, should be a component in evaluating AI's impact on patient outcomes, organizational intentions, and healthcare sustainability.

Nevertheless, in 2018, the McKinsey Global Institute reported 160 AI initiatives with the potential to do social good, including examples from the healthcare field (such as AI systems with higher accuracy at detecting skin cancer than dermatologists8) and environmental protection (such as AI-capable robots that can sort recyclable material from waste).9

When we asked, “What would be your first decision to improve health data governance across the organization?”, the answers could be clustered into the categories of Diagnosis, Education, Monitoring and Control, and Investment.


The following recommendations are not comprehensive, but we believe they can serve as a starting point for healthcare leaders preparing their organizations to safely manage their patients' health data.

1 – Know and mitigate risks in your health organization regarding the use of health data

It is mandatory to carry out a risk assessment through an independent entity. It is crucial to have dedicated staff who truly understand not only the latest technologies for handling healthcare data safely and correctly, but also the ethical challenges. Our small sample showed that over 10% of the hospitals surveyed did not have a data protection officer.

2 – Knowledge is essential to cybersecurity, not just firewalls

To ensure responsible use of health data, it is mandatory to educate your staff about cybersecurity and the correct handling of patient data. Written policies on the use and management of health data by healthcare professionals must be in place, along with routine training. Regular audits should be performed to ensure compliance with the policies and procedures.

3 – Prioritize safety, transparency, and patient autonomy. 

When managing health data, do not take it as given that AI is functioning properly or without biases. Employ staff or consultants who can explain the underlying algorithms to the organization: if providers and staff cannot explain the benefit AI provides to the patient, patients are unlikely to participate or permit their data to be used. Have routine reviews in place to validate the appropriateness of recommendations, and make updates or adjustments when necessary.
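As one illustration of what such a routine review could look like, the hypothetical sketch below audits a classifier's false-negative rate per demographic group and flags the model when the gap between the best- and worst-served groups exceeds a tolerance. The function names, toy data, and 5% threshold are our own illustrative assumptions, not a standard prescribed by the survey or the WHO guidance.

```python
import numpy as np

def subgroup_fnr(y_true, y_pred, groups):
    """False-negative rate of a classifier, computed per demographic group."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)          # true cases in group g
        rates[g] = float((y_pred[positives] == 0).mean())  # share of cases missed
    return rates

def flag_disparity(rates, tolerance=0.05):
    """Flag when the gap between best- and worst-served groups exceeds tolerance."""
    gap = max(rates.values()) - min(rates.values())
    return gap > tolerance, gap

# Hypothetical audit snapshot: true labels, model predictions, group attribute.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A"] * 6 + ["B"] * 6)

rates = subgroup_fnr(y_true, y_pred, groups)
flagged, gap = flag_disparity(rates)
for g in sorted(rates):
    print(f"group {g}: false-negative rate {rates[g]:.2f}")
print("disparity flagged:", flagged)
```

In this toy snapshot the model misses far more true cases in group B than in group A, so the review flags it for adjustment; in practice the same per-group comparison would be run on held-out clinical data at each scheduled review.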


    1. Vayena E, Salathé M, Madoff LC, Brownstein JS. Ethical Challenges of Big Data in Public Health. PLoS Comput Biol. 2015;11(2):1-7. doi:10.1371/journal.pcbi.1003904
    2. Hall MA, Schulman KA. Ownership of medical information. JAMA – J Am Med Assoc. 2009;301(12):1282-1284. doi:10.1001/jama.2009.389
    3. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization; 2021.
    4. Leslie D, Mazumder A, Peppin A, Wolters MK, Hagerty A. Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ. 2021;372:1-5. doi:10.1136/bmj.n304
    5. Alder S. July 2021 Healthcare Data Breach Report. HIPAA Journal. Published online 2021.
    6. Na L, Yang C, Lo C-C, Zhao F, Fukuoka Y, Aswani A. Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning. JAMA Netw Open. 2018;1(8):e186040. doi:10.1001/jamanetworkopen.2018.6040
    7. The Royal Society. Explainable AI: The Basics.; 2019. https://royalsociety.org/topics-policy/projects/explainable-ai/
    8. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. doi:10.1038/nature21056
    9. Chui M, Manyika J, et al. Notes from the AI Frontier: Applying AI for Social Good. McKinsey Global Institute. Published online 2018. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/applying%20artificial%20intelligence%20for%20social%20good/mgi-applying-ai-for-social-good-discussion-paper-dec-2018.ashx
    10. The Data Futures Partnership. A Path to Social Licence: Guidelines for Trusted Data Use. Published online 2017. https://www.aisp.upenn.edu/wp-content/uploads/2019/08/Trusted-Data-Use_2017.pdf