Development and clinical implementation of artificial intelligence require thorough ethical and legal consideration. For more information on the legal framework applicable to AI, see our Legal Framework section.
To guide the activities conducted at SPKI, the Strategy for AI lays down a set of overarching legal and ethical principles, inspired by the ethical guidelines from the EU's High-Level Expert Group on AI and the Norwegian government's national AI strategy:
Responsibility and accountability
Development and use of AI, as well as AI research at SPKI, shall be organized so as to ensure a clear allocation of responsibility and liability. SPKI shall facilitate internal control measures as well as external audits, as may be required, at all stages of AI development. During all stages of design and development, SPKI shall provide auditors with access to algorithms, datasets and research or development protocols. Furthermore, responsibility and accountability entail that all projects shall be founded on risk and impact assessments, which shall include (but not be limited to) considerations of patient safety, ethical aspects, information security and privacy.
Deployment and use of AI in the course of medical care shall only be conducted if and to the extent that it is compatible with the duty of care. Documentation thereof shall be drawn up whenever AI is implemented in a work or decision-making process. Entities that deploy AI are responsible for ensuring that their personnel receive the training necessary for responsible use of AI systems. Those entities shall establish written procedures for responsible use, monitoring and maintenance of AI systems in their respective institutions. Moreover, health personnel using AI systems must always adhere to their personal duty of care and shall exercise careful judgment when relying on outputs from AI systems, particularly during early stages of implementation.
Robustness and safety
Robustness refers to the resilience, reliability and operating safety of AI systems. AI systems developed and used at SPKI shall adhere to high standards of accuracy and stability over time. Workflows where AI is implemented should include backup plans to minimize the consequences of any system downtime or error. We believe that these features are crucial to prevent errors from causing disruption and inconvenience for patients or health personnel.
Transparency and interpretability
Transparency entails that we shall (in accordance with applicable privacy and data protection legislation) be open about the content, composition and origins of training data, as well as the logic employed by AI systems. When AI is used in medical practice, health personnel and patients shall be made aware that they are engaging with an AI system and for what purposes the AI system is used.
Researchers and developers shall at all stages of development strive for interpretable (or explainable) AI systems, as far as the state of the art at any given time allows. This means that AI systems should, to the extent possible, produce an explanation for their conclusions and recommendations. Explanations shall, as a minimum, be interpretable to the health personnel or other personnel who are the intended users of the system.
Autonomy and human oversight
The deployment of AI in medical practice shall not negatively affect the right of patients to exercise autonomy and participate in health-related decision making, nor shall it negatively affect the professional autonomy of health personnel.
Workflows involving the use of AI systems must facilitate human oversight. Information or recommendations generated by AI systems shall be subject to a genuine assessment by qualified personnel before being relied on in a decision-making process. For this reason, AI systems must have user interfaces that enable the exercise of human control during operation. When deemed necessary, users must be able to override the AI system.
Diversity and non-discrimination
Development and use of AI at SPKI shall aim at counteracting the marginalization of vulnerable groups, including minority populations. The risk that the use of AI systems may lead to disparities in access to and quality of care shall be taken into account and mitigated through proactive measures. Research and training activities shall strive for diversity and representation across vulnerable groups. Particular consideration shall be given to the composition of the population in the geographical area where an AI system will be used. Any disparate impact on vulnerable groups in the relevant population should be documented and emphasized in the system's instructions for use and accompanying documentation.
SPKI recognizes its particular responsibility towards the Sami people. Where relevant, development and use of AI at SPKI shall therefore take into account any special considerations concerning the Sami people.
Environmental sustainability
Big data analysis and training of machine learning algorithms entail significant energy consumption, which could impact the climate and the welfare of future generations. Decisions at SPKI, for example regarding the prioritization of AI projects, shall take the potential environmental impact into account. Climate-neutral solutions should be preferred where available.