Development and clinical implementation of artificial intelligence require profound ethical and legal consideration. For more information on the legal framework, see our Legal Framework section.
The North Norwegian Health Authority’s strategy for AI guides the activities conducted at SPKI. The strategy lays down a set of overarching legal and ethical principles, inspired by ethical guidelines from the EU’s High-Level Expert Group on AI and the Norwegian Government’s national AI strategy.
Responsibility and accountability
Development and use of AI, as well as AI research at SPKI, shall be organized in a way that ensures clear allocation of responsibility. SPKI shall facilitate internal control measures as well as external audits at all stages of AI development. Throughout design and development, auditors should have access to algorithms, datasets and research or development protocols. Furthermore, responsibility and accountability entail that all projects shall be founded on risk and impact assessments. These assessments must consider patient safety, ethics, information security and privacy.
Deployment and use of AI in medical care must be compatible with the duty of care. Necessary documentation to demonstrate the safety and performance of AI systems shall be drawn up whenever AI is implemented in a workflow or decision-making process. Entities that deploy AI are responsible for ensuring that their personnel receive the training necessary for responsible use of AI systems. Written procedures should be established to ensure responsible use, monitoring and maintenance of AI systems. Health personnel using AI systems must always adhere to their personal duty of care. They shall exercise careful judgment when relying on outputs from AI systems, particularly during early stages of implementation.
Robustness
Robustness refers to the resilience, reliability and operational safety of AI systems. AI systems developed and used at SPKI shall adhere to high standards of accuracy and stability over time. Workflows where AI is implemented should include backup plans that minimize the consequences of system downtime or errors. Downtime and errors should cause minimal disturbance and inconvenience for patients and health personnel.
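Monitoring accuracy and stability over time could be operationalized along these lines. This is a minimal illustrative sketch, not an SPKI procedure: the function name, baseline and tolerance values are assumptions chosen for the example.

```python
from statistics import mean

def check_performance(recent_accuracies, baseline=0.90, tolerance=0.05):
    """Flag a deployed model whose rolling accuracy drifts below baseline.

    `baseline` and `tolerance` are illustrative thresholds, not policy values;
    in practice they would come from the system's validation documentation.
    """
    rolling = mean(recent_accuracies)
    needs_review = rolling < baseline - tolerance
    return {"rolling_accuracy": rolling, "needs_review": needs_review}

# A stable model stays within tolerance; a degraded one triggers review.
print(check_performance([0.91, 0.89, 0.92])["needs_review"])  # False
print(check_performance([0.80, 0.78, 0.82])["needs_review"])  # True
```

In a real deployment the flagged result would feed the written monitoring procedures described above, so that degradation is acted on before it affects patients.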
Transparency and interpretability
Transparency entails that we shall be open about the content, composition and origins of training data. As far as possible, we shall also explain the logic employed by AI systems. When AI is used in medical practice, health personnel and patients shall be made aware that they are engaging with an AI system and for what purposes the AI system is used.
Researchers and developers shall, at all stages of development, strive for interpretable and explainable AI systems, to the extent the state of the art allows. This means that AI systems should, as far as possible, produce an explanation for their conclusions and recommendations. Explanations shall, as a minimum, be interpretable to the health personnel or other personnel who are the intended users of the system.
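For a simple model class, an explanation of the kind described above can be as direct as reporting each input's contribution to the output. The sketch below assumes a linear risk score with hypothetical features and weights; it is an illustration of interpretable output, not a validated clinical model.

```python
def explain_prediction(features, weights, bias=0.0):
    """Return a score plus per-feature contributions as a simple explanation.

    Feature names and weights here are hypothetical examples.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    # Rank so the intended user sees the most influential factors first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

weights = {"age": 0.02, "systolic_bp": 0.01, "smoker": 0.5}
score, ranked = explain_prediction({"age": 70, "systolic_bp": 140, "smoker": 1}, weights)
print(round(score, 2))                 # 3.3
print([name for name, _ in ranked])    # ['age', 'systolic_bp', 'smoker']
```

More complex model classes would need dedicated explanation techniques, but the requirement is the same: the output must be interpretable to the system's intended users.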
Autonomy and human oversight
The deployment of AI in medical practice shall not negatively affect the right of patients to exercise autonomy and participate in health-related decision making, nor shall it negatively affect the professional autonomy of health personnel.
Workflows involving the use of AI systems must facilitate human oversight. Information or recommendations generated by AI systems shall be subject to a genuine assessment by qualified personnel before being relied on in a decision-making process. For this reason, AI systems need to have user interfaces which provide the means required to exercise human control during operation of the systems. When necessary, users must be able to intervene in and override the AI system.
Diversity and non-discrimination
Development and use of AI at SPKI shall aim to counteract the marginalization of vulnerable groups, including minority populations. The risk of disparity in access to and quality of care should be taken into account and mitigated through proactive measures. Research and training activities shall strive for diversity and representation across vulnerable groups. Particular consideration shall be given to the composition of the population in the geographical area where an AI system will be used. However, a representation which corresponds to the relevant patient population is not always sufficient to achieve acceptable performance across different patient groups. Minorities in the population will usually be minorities in datasets. Measures should be considered for the purpose of ensuring the quality of services provided to minority groups. Any disparate impact on vulnerable groups in the relevant population should be documented and accounted for in the system’s accompanying documentation, including the instructions for use.
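Documenting disparate impact presupposes measuring performance per patient group, not just overall. The sketch below shows one simple way to do that; the group labels, record format and the idea of flagging the largest accuracy gap are illustrative assumptions, not a prescribed SPKI method.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy per patient group and the largest gap between groups.

    `records` are (group, correct) pairs; group names are illustrative.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    accuracy = {g: hits[g] / totals[g] for g in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# A model can look adequate overall while underperforming for a minority group.
records = ([("majority", True)] * 9 + [("majority", False)]
           + [("minority", True)] * 6 + [("minority", False)] * 4)
accuracy, gap = accuracy_by_group(records)
print(accuracy)          # {'majority': 0.9, 'minority': 0.6}
print(round(gap, 2))     # 0.3
```

A gap like this is exactly the kind of disparate impact the paragraph above requires to be documented in the system's accompanying documentation and instructions for use.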
SPKI recognizes its particular responsibility towards the Sami people. Where relevant, development and use of AI at SPKI shall therefore take into account any special considerations concerning the Sami people.
Environmental impact
Big data analysis and training of machine learning algorithms entail significant energy consumption, which can impact the climate and the welfare of future generations. Decisions at SPKI, for example regarding the priority of AI projects, shall take the potential environmental impact into account. Climate-neutral solutions should be preferred whenever they are available.