ICMR has created “Ethical Guidelines for AI in Healthcare and Biomedical Research” to guide the safe introduction of AI-based technologies into healthcare and biomedical research. This move reflects the growing acceptance of AI technology in the industry.
ICMR has pointed out several applications of AI in healthcare, such as diagnosis and screening, therapeutics, preventive treatments, clinical decision-making, public health surveillance, complex data analysis, and predicting disease outcomes. Beyond these, AI can also support behavioral and mental healthcare and the management of health systems.
Constructing an ethical policy framework to manage the advancement and application of AI technologies within healthcare is paramount. Such a framework should serve as guidance so that responsibility for AI-driven decisions cannot be evaded.
The ICMR guiding document highlights the importance of establishing processes to assign accountability for errors as AI technologies are increasingly implemented in clinical decision-making. Safeguards against such mistakes are equally imperative, and both are necessary for the efficient use of Artificial Intelligence in this domain.
All stakeholders in the health sector have been provided with 10 key patient-centered ethical principles to aid them when applying AI. These standards prioritize the safety, autonomy, and interests of patients impacted by AI technology.
Accountability and liability; autonomy; data privacy; collaboration; risk minimization and safety; accessibility and equity; data quality optimization; non-discrimination and fairness; validity; and trustworthiness are the principles that must be considered when applying AI in healthcare.
The principle of autonomy demands human oversight of AI systems and their performance. Furthermore, obtaining the patient’s informed consent is essential before initiating any process, which includes acquainting them with the potential physical, psychological, and social risks involved.
Risk minimization seeks to prevent both intentional and unintentional misuse, for instance by using anonymized data from which identifying linkages have been removed. An ethics committee must complete a favorable benefit-risk assessment before this safety principle can be implemented successfully.
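To make the anonymization idea concrete, here is a minimal illustrative sketch (not taken from the ICMR guidelines; field names and the coarsening scheme are hypothetical) of de-identifying a patient record before it is used for AI development: direct identifiers are dropped or replaced with a salted one-way hash, and quasi-identifiers such as age are coarsened to reduce re-identification risk.

```python
import hashlib
import secrets

# Salt kept separate from the released data, so the hash cannot be
# reversed by re-hashing known patient IDs without it.
SALT = secrets.token_hex(16)

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a truncated salted SHA-256 hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:12]

def anonymize(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers."""
    decade = (record["age"] // 10) * 10
    return {
        "pid": pseudonymize(record["patient_id"]),
        # Ten-year age bands instead of exact age.
        "age_band": f"{decade}-{decade + 9}",
        "diagnosis": record["diagnosis"],
    }

raw = {"patient_id": "MRN-0042", "name": "A. Patel", "age": 57, "diagnosis": "T2DM"}
clean = anonymize(raw)
# The name and raw patient ID never appear in the released record.
assert "name" not in clean and "patient_id" not in clean
```

Real de-identification pipelines go much further (dates, locations, free text), but the pattern is the same: remove or transform anything that could link a record back to an individual.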
To uphold the accountability and liability principle, regular internal and external audits must be conducted to ensure the optimum functioning of AI systems, and their findings must be made available to the public.
The deployment of AI technology should consider the principle of accessibility, equity, and inclusiveness to bridge the digital divide. This means that appropriate infrastructure must be widely available to make AI technology accessible to all.
Developing AI tools for the health sector is a multi-step process involving stakeholders such as researchers, clinicians, hospitals, public health systems, patients, ethics committees, government regulators, and industry. In this regard, the guidelines outline a brief for each of these participants to ensure patient trust and safety in AI initiatives.
All stakeholders should focus on adhering to standard protocols for developing AI-based solutions to guarantee technical accuracy, ethical propriety, and equitable application. Doing so will make the technology more user-friendly and more widely accepted.
The Ethics Committee must review the ethical considerations of AI in health, such as data source and quality, anonymity, privacy, selection biases, participant protection, compensation, and any potential for stigmatization. To ensure suitability, it must assess all these factors against the recommended guidelines.
The body must assess the scientific and ethical components of all health research, guaranteeing that it is scientifically sound and that the risks and benefits for the study population are appropriately weighed. This entity is responsible for ensuring that each proposal upholds a rigorous standard.
Governance of AI tools used in the health sector is still in its initial stages, in India as well as in highly developed countries. Per the guidelines, acquiring informed consent and regulating these AI tools are significant concerns. To address this, several Indian frameworks have been formed that merge healthcare with advancing technology.
The National Health Policy (2017) outlines a Digital Health Authority for leveraging digital health technologies; the Medical Device Rules, 2017 specify regulations for medical devices; and the Digital Information Security in Healthcare Act (DISHA), 2018 provides for the security of digital information in healthcare.
The ICMR guidelines on AI provide a valuable resource for healthcare professionals and organizations looking to safely and ethically integrate AI into their workflows. By following these guidelines, healthcare providers can ensure that they use AI technologies in ways that are safe, effective, and beneficial for patients, while maintaining the highest standards of ethical conduct and data privacy.