Artificial Intelligence (AI) has the potential to bring about positive transformations in science and society, including in healthcare. As a powerful enabler of progress and innovation, AI can increase individual well-being and contribute to the common good. However, while offering tremendous opportunities, this relatively new technology also raises concerns about potential misuse and inadequate accountability, as well as systemic risks inherent in algorithmic bias and discrimination.
The member companies of IFPMA are committed to the responsible development and use of AI to discover, develop, manufacture, and deliver medicines and healthcare solutions. This is grounded in IFPMA’s commitment to improve healthcare for patients and society. In developing and deploying AI algorithms or applications (“AI systems”), IFPMA’s member companies should act with integrity to maintain the trust of patients, healthcare professionals, payors, public authorities, and other stakeholders.
These principles are intended to help IFPMA’s member companies use AI systems responsibly and sustainably in alignment with the IFPMA Ethos of care, fairness, respect, and honesty. The principles strive to promote values-based decision-making and the creation of pragmatic, appropriate, and risk-based AI frameworks and controls. They aim to provide a set of guardrails that member companies should consider, adapt, and operationalize within their organizations.
The IFPMA AI Ethics Principles complement and align with the broader IFPMA Data Ethics Principles, with specific emphasis on considerations relevant to the design and use of AI. Moreover, the IFPMA AI Ethics Principles are meant to operate in the context of other existing AI principles, laws, and regulations, both of general application and specific to healthcare. The principles below embed an “ethics by design” approach and apply to AI whether developed in-house or sourced from third parties.