The Case of Predictive Policing


Summary:

Artificial intelligence is increasingly used in law enforcement, driven by austerity, a shift toward preventive policing, and the rapid growth of available data. Predictive policing, an outgrowth of intelligence-led policing, applies algorithms to forecast where crime is likely to occur or who is likely to offend, yet evidence of its effectiveness remains weak and inconsistent. Legal frameworks such as the EU AI Act and the Law Enforcement Directive impose safeguards but leave loopholes, particularly exemptions for law enforcement applications. Ethical and social concerns persist, including bias, surveillance harms, and loss of public trust, alongside organizational challenges such as automation bias and reduced officer discretion. Ultimately, the technology's benefits remain speculative while its risks to rights and legitimacy are real, prompting calls to redirect focus toward community-based prevention and accountable AI governance.

Reference:

Van Brakel R. Legal, Ethical, and Social Issues of AI and Law Enforcement in Europe: The Case of Predictive Policing. In: Smuha NA, ed. The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence. Cambridge Law Handbooks. Cambridge University Press; 2025:367-382.