AI Law and Regulation

Law and regulation can send clear messages about acceptable behaviour and how the interests of citizens should be upheld. However, laws and regulation regarding AI are in a state of flux. The EU is going in one direction with its AI Act, working alongside EU digital markets/services and data governance legislation; the current UK government prefers a principles-based approach applied by existing regulators in their particular domains, with legally enforced red lines adopted only as a last resort. The ‘Bletchley Declaration’ issued following the UK-hosted AI Safety Summit focuses on tools such as ‘increased transparency by private actors…appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.’

The adoption of AI in both public and private sectors is continuing at pace in ways that will affect the lives, rights and welfare of citizens. High-stakes contexts include criminal justice, health, social benefits and retail. There is an urgent need to consider ways to regulate and evaluate how AI models are used, by whom and for what purposes, not only by some 'bad actor' but also by governments, public sector bodies and commercial companies, to ensure that AI is being used for the public good. Relevant considerations include: consumer protection, competition law, corporate social responsibility, AI standards and assurance, ethical oversight, transparency and explainability methods, data governance, human rights, public law frameworks and specific AI laws and regulation.

Expert

  • Professor Marion Oswald

Related Projects at Northumbria

  • AHRC-funded BRAID project ‘Ethical review to support Responsible AI in Policing – A Preliminary Study of West Midlands Police's specialist data ethics review committee’
  • Centre for Digital Citizens

Suggested Literature

*If you are struggling to access any of the suggested literature, please contact ccai.cdt@northumbria.ac.uk