Researchers from the Delft Design for Values Institute, in collaboration with eLaw – Center for the Law and Digital Technologies of Leiden University, finalized a research consultancy project on “Artificial Intelligence and Ethics at the Dutch Police” by delivering a whitepaper that sets out requirements for the responsible use of AI at the Police, together with a long-term research strategy.
Virginia Dignum (Associate Professor) and Jordi Bieger (Researcher) from the Design for Values Institute (TU Delft), and Francien Dechesne (Assistant Professor) and Lexo Zardiashvili (Researcher) from eLaw – Center for the Law and Digital Technologies (Leiden University), have worked on this project, commissioned by the Dutch Police, since November 1, 2018. The research highlights that AI has many potentially beneficial applications in law enforcement, including:
- predictive policing,
- automated monitoring,
- (pre-) processing large amounts of data (e.g., image recognition from confiscated digital devices, police reports or digitized cold cases),
- finding case-relevant information to aid investigation and prosecution,
- providing more user-friendly services for civilians (e.g., interactive forms or chatbots),
- and generally enhancing productivity and paperless workflows.
The research found that AI can be used to promote the core societal values central to police operations (human dignity, freedom, equality, solidarity, democracy, and the rule of law), but that its use may also challenge values carefully guarded in existing processes and procedures. It is impossible to anticipate all the effects of the use of AI in society, and in the law enforcement domain specifically. The research therefore concluded that it is essential for the Dutch Police to continuously evaluate the adoption and use of any AI application, to ensure that policing practices remain in line with the values acknowledged by the Dutch state and the European Union.
In the whitepaper, delivered in March 2019, the researchers identified six morally salient requirements for the Dutch Police to ensure the responsible use of Artificial Intelligence. These requirements include:
- Privacy & data protection;
- Fairness & inclusivity;
- Human autonomy & agency; and
- (Socio-technical) robustness and safety.
Moreover, in June 2019, the researchers delivered a “Long-Term Research Strategy for Ethics and Artificial Intelligence at the Police” as the second output of the project. In this document, they identified the research areas that must be further explored for the Police to reach their goals of increasing efficacy and efficiency on the one hand, and trust and trustworthiness on the other. The document lists the following areas for further research:
- Impacts on human dignity;
- Impacts on public trust;
- Ethics guidelines and oversight;
- Impacts on police personnel;
- Explainable AI;
- Justifiable/verifiable AI.
The police organization in the Netherlands is committed to protecting fundamental human rights and ensuring respect for the rule of law. The police should therefore incorporate ethical considerations (stemming from the ethical principles and values that statutory law aims to uphold) through practical measures that ensure the responsible use of AI and enhance (rather than limit) the legitimacy of and trust in the police.