
PROBabLE Futures Publications and News Coverage

The research of PROBabLE Futures focuses on the use of Probabilistic AI in Law Enforcement and the delivery of an ‘operational-ready’ blueprint for an evidence-based, contextually informed and future-oriented framework for responsible AI in law enforcement.

PROBabLE Futures splits its research into seven work packages; the primary research questions of each work package can be read here.

PROBabLE Futures has finalised an EDI Policy Document and an RRI Policy & Action Plan. Our PROBabLE Futures 2024 Achievement Report is also available to read.

Policy Engagement to Date and Related Publications

Responsible AI in Policing Checklist and Case Studies: developed in collaboration with the NPCC AI Portfolio and circulated to police forces.

Two Consultation Responses Submitted by PROBabLE Futures, both available to read below: 

Ethical review to support Responsible Artificial Intelligence (AI) in policing: Research report supported by BRAID, the findings of which will be built upon in PROBabLE Futures.

PROBabLE Futures Recent Publications and Press Releases


PROBabLE Futures Response to HMICFRS Consultation on Proposed policing inspection programme and framework 2025–29



College of Policing Data Ethics and Data-Driven Technologies APP Consultation:

Response from Marion Oswald, Muffy Calder, Kyriakos N. Kotsoglou, Michael Maher, Carole McCartney, Kyle Montague, Claire Paterson-Young and Elizabeth Tiarks on behalf of ‘PROBabLE Futures’.


Ethical review to support Responsible Artificial Intelligence (AI) in policing:

A preliminary study of West Midlands Police's specialist data ethics review committee (executive summary available to read).

Ethical AI in Policing News

19 Sep 2024

Police use of AI more responsible with an independent data ethics advi...


PROBabLE Futures News

07 May 2024

Major funding for flagship research project on AI in Law Enforcement



A six-month scoping project, funded through the AHRC’s Bridging Responsible AI Divides (BRAID) programme, brought together a team of researchers in Law, Computer Science, Social Innovation and Policing with extensive experience of the theory and practice of real-world ethical approaches to data analytics and AI in sensitive contexts. The research team’s partnership with the West Midlands Police and Crime Commissioner and West Midlands Police presented a unique opportunity to analyse the impact of advice from the force’s data ethics committee on the operationalisation of AI tools in policing, with a specific focus on the engagement of vulnerable groups. The findings and recommendations show how a data ethics committee with diverse independent voices can contribute positively to the validity and responsibility of policing AI, thus supporting operational policing, but only if it is fully incorporated into the system of oversight and scrutiny. The full report is available to read.
