Written by: Helen Hanley, PNLD Legal Adviser

Not reviewed after the date of publication - 25 September 2025

AI is rapidly evolving, with some embracing the advantages it brings whilst others are concerned about its consequences. In this article, PNLD Legal Adviser Helen Hanley provides an overview of what constitutes AI and the law that currently applies to its use and development, specifically in relation to law enforcement and criminal justice.

The term ‘Artificial Intelligence’ (AI) was first coined in 1956 by John McCarthy during a research project exploring the ways in which machines could simulate aspects of human intelligence, at which an initial AI system, ‘Logic Theorist’, was presented. From this point onwards, interest in the field of AI continued (albeit with so-called ‘AI winters’) and today, AI is used in many aspects of everyday life. However, what constitutes AI now is far more involved than it was in the 1950s, yet the same term stands, alongside an ever-expanding language of words and phrases. This article explores some of these concepts and the law that applies.

What is AI?


A starting point is to define AI. Currently, there is no universal definition; however, in the ‘Artificial Intelligence Playbook for the UK Government’, published in February 2025, and also in the Law Commission’s ‘AI and the Law - a discussion paper’, published in July 2025, the following definition was reiterated -

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.

How does a computer program become AI?


In simple terms, most AI today is built through machine learning, which involves a process of ‘training’. A computer program is given a large amount of data and a set of instructions, and from that data and the clues within it the program learns and becomes an AI model. There are two main ways in which a computer program can train / learn: ‘supervised’ and ‘unsupervised’.

  • ‘Supervised training / learning’ is when humans label the sets of data given to the computer program as training material, and the AI is asked to figure out the patterns in the labelled data. The AI is then asked to apply those patterns to new data, and feedback is given on its accuracy.
  • ‘Unsupervised training / learning’ is when the computer program is given unlabelled data and uses complex algorithms (i.e., sets of rules or instructions that guide the machine on how to perform a specific task) to find patterns in that data. It accesses huge datasets, learning without human guidance. A simple sketch of both approaches follows below.
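
By way of illustration, the sketch below shows both approaches in Python using the scikit-learn library. The article does not name any particular tool; the data, labels and library choice here are assumptions made purely for illustration.

```python
# A minimal sketch of the two training approaches described above, using
# the scikit-learn library. The data, labels and library choice are
# illustrative assumptions - the article does not name any particular tool.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised learning: humans supply the labels (here, 0 or 1) and the
# model figures out the pattern linking the data to those labels.
X_train = np.array([[1.0, 2.0], [1.2, 1.8], [8.0, 9.0], [7.5, 8.8]])
y_train = np.array([0, 0, 1, 1])  # human-supplied labels
model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[7.9, 9.1]]))  # applies the learned pattern to new data

# Unsupervised learning: no labels are given; the algorithm must find
# structure (here, two clusters) in the data entirely by itself.
X_unlabelled = np.array([[1.0, 2.1], [1.1, 1.9], [8.1, 9.2], [7.8, 8.9]])
groups = KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabelled)
print(groups)  # e.g. [0 0 1 1] - two groups discovered without guidance
```

In the supervised example the human-supplied labels guide the model; in the unsupervised example the algorithm must discover the two groups entirely by itself.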

There are many AI models being used in all areas of life. Just a few examples are: natural language models, e.g., Siri and Alexa (less autonomous models); AI algorithms, e.g., facial recognition; and Large Language Models (LLMs), e.g., ChatGPT (more autonomous models, known as generative AI). LLMs consider individual words and whole sentences, compare the use of words and phrases in a passage to other examples across all their training data, and can then read a question and generate an answer. As an example of the amount of data needed to train an AI model: if a person were to read the datasets used to train leading LLMs word by word, 24 hours a day, 7 days a week, it would take thousands of years just to finish reading.
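
To make the idea of learning word patterns concrete, the toy sketch below ‘trains’ on a single sentence by counting which word most often follows each word, then ‘generates’ a prediction. This is a deliberate over-simplification - real LLMs use neural networks trained on vast datasets - offered only to illustrate the principle.

```python
# A toy illustration of next-word prediction, the task at the heart of an
# LLM. Real LLMs use neural networks trained on vast datasets; this simple
# word-pair counter only sketches the idea of learning patterns from text.
from collections import Counter, defaultdict

training_text = "the court heard the case and the court ruled on the case"
words = training_text.split()

# 'Training': count which word tends to follow each word.
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

# 'Generation': given a prompt, predict the most likely next word.
prompt = "the"
prediction = following[prompt].most_common(1)[0][0]
print(f"After '{prompt}', the model predicts: '{prediction}'")
```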

What is the legal framework for AI?


The UK has a different legal framework to that of the EU.

The EU is implementing a statutory framework for AI which applies to all sectors alike. Regulation (EU) 2024/1689 of the European Parliament and of the Council (as amended), known as ‘the EU AI Act’, lays down harmonised rules on AI. This Regulation is yet to be fully implemented – see Implementation Timeline.

Once fully implemented, its purpose is to provide a uniform legal framework for the development and use of AI systems, with an AI regulator having overall authority across all sectors and with AI regulatory sandboxes used for testing (controlled environments where AI systems can be developed, tested and validated before being released on the market).

Although the EU AI Act is an EU regulation, UK businesses and police forces may still fall within its scope if they provide AI systems that are used or hosted within the EU, or that produce effects there.

Of interest regarding law enforcement, the EU AI Act contains rules on the protection of individuals with regard to the processing of personal data, namely restrictions on –

  • the use of AI systems for remote biometric identification for the purpose of law enforcement;
  • the use of AI systems for risk assessments of natural persons for the purpose of law enforcement; and
  • the use of AI systems for biometric categorisation for the purpose of law enforcement.

See the Summary for further information on the EU AI Act.

The advantage of the EU framework is that it provides clarity on how AI can and cannot be used, classifying AI systems according to risk. However, it could also be argued that there is potential for this framework to stifle the further evolution of AI. This is the argument of the UK and so, unlike the EU framework, the UK government has so far adopted a more flexible, principles-based framework using existing laws such as the Data Protection Act 2018, the Consumer Protection Act 1987 and the Online Safety Act 2023.

Existing regulators, such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA) and the communications regulator, Ofcom, are providing guidance to regulate the use and development of AI, with direction from the Department for Science, Innovation and Technology’s ‘AI Opportunities Action Plan’, published in January 2025. See each regulator’s published guidance for further information.


How will the legal framework develop?

The ‘AI Opportunities Action Plan’ aims to ramp up AI adoption across the UK to boost economic growth, provide jobs for the future and improve people’s everyday lives. The plan includes -

  • building data centres to house the large and complex computers used to train AI models and to run ‘inference’ (where AI is used to complete tasks and answer queries);
  • unlocking data assets (a data asset being a system, application output file, document, database or web page that companies use to generate revenues) in the public and private sector - to develop AI applications, developers need access to high-quality data (believed to be the lifeblood of modern AI);
  • creating a National Data Library (a government initiative aiming to make public data more accessible for research and development); and
  • ensuring British people are prepared for jobs in the AI industries of tomorrow.

The Data (Use and Access) Act 2025, which received Royal Assent on 19th June 2025, is part of that plan to unlock data assets, in that it contains provisions to –

  • reform parts of the UK’s data protection and privacy framework (the Data Protection Act 2018 and UK GDPR) to maintain high standards of protection, aiming to create clarity so that there can be safe development and deployment of new technologies. See Part 5 of the Act and paragraphs 35 to 38 of the explanatory notes.
  • facilitate the flow and use of personal data for law enforcement and national security purposes. See Part 5 of the Act and paragraphs 39 to 43 of the explanatory notes for changes made by this Act to Part 3 of the Data Protection Act 2018 in relation to law enforcement processing.
  • extend data sharing powers under section 35 of the Digital Economy Act 2017 to include businesses, with a view to better enabling targeted public services to support business growth and to deliver joined-up public services and reduce legal barriers to data sharing.
  • reform the regulator, the Information Commissioner, including its governance structure, duties, enforcement powers, reporting requirements and its development of statutory codes of practice. These reforms give the regulator new, stronger powers and a more modern structure – while maintaining its independence – see Part 6 of the Act.

For information on the commencement of these provisions, see section 142 of the Act (commencement) and the ‘Data (Use and Access) Act 2025: plans for commencement’ guidance, which provides a summary of the government’s plans for bringing into force the provisions of the Data (Use and Access) Act 2025.

The aim of the UK’s approach is to enable the continued evolution of AI.

However, concerns have been expressed on two fronts: first, that the approach is unstructured and spread across different pieces of legislation and guidance, meaning it may be difficult for developers of AI to locate the relevant rules and comply; and secondly, that this flexible approach does not sufficiently address the risks AI can pose, such as bias and discrimination (the systematic and unfair favouritism or discrimination that arises in AI systems due to the data they are trained on, the design of the algorithms, or the way they are deployed). AI may potentially amplify existing biases relating to race, gender or socioeconomic status, which could lead to unfair decisions, for example in the context of policing.



How is law enforcement using AI to date and what are the benefits? 

Police forces across the UK are already implementing AI. Some examples include: automated triage of cases, early identification of exploitation and routes to criminality, back office / business support functions, risk management of warrants, redaction, forensic analysis of data, resource allocation, facial recognition, ANPR and use of the Child Abuse Image Database (CAID).

  • CAID uses AI to identify victims and perpetrators of child sexual abuse. Its quick and effective identification of victims and perpetrators in digital abuse images allows action to be taken to remove victims from harm and to ensure abusers are held to account. The use of AI in this way increases the scale and speed of analysis while protecting staff welfare by reducing their exposure to distressing content.
  • Automated facial recognition (AFR) helps investigators to compare images of an unknown person, such as someone suspected of committing a crime, or a mugshot of an arrestee, against a reference database. This reference database is typically a curated and securely stored collection, such as custody images or images collected during criminal proceedings.
  • Live facial recognition (LFR) is also being introduced by some forces. LFR performs a real-time reading of all people passing a camera, regardless of whether they are of interest, and compares them against a pre-determined closed watch list of persons of interest (a simplified sketch of this comparison step follows below). In some scenarios, the system will immediately discard images that trigger no match, to avoid undue infringement of applicable data protection laws. Retailers are planning to use this technology too – see Sainsbury's to trial facial recognition to catch shoplifters - BBC News.
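
As a purely hypothetical sketch of the watch-list comparison step mentioned above, the code below compares a numerical ‘embedding’ of a detected face against stored reference embeddings and discards non-matches. All names, values and the threshold are invented for illustration; real systems derive embeddings from images using trained neural networks.

```python
# A hypothetical sketch of the LFR watch-list comparison step. Real systems
# derive numerical 'embeddings' from face images using neural networks; the
# embeddings, names and threshold below are invented purely for illustration.
import numpy as np

WATCH_LIST = {  # person of interest -> stored face embedding (invented)
    "subject_a": np.array([0.12, 0.88, 0.45]),
    "subject_b": np.array([0.91, 0.10, 0.33]),
}
MATCH_THRESHOLD = 0.95  # similarity required before an alert is raised

def check_against_watch_list(face_embedding):
    """Compare one detected face against every watch-list entry."""
    for name, reference in WATCH_LIST.items():
        # Cosine similarity: 1.0 means identical, values near 0 unrelated.
        similarity = float(np.dot(face_embedding, reference) /
                           (np.linalg.norm(face_embedding) *
                            np.linalg.norm(reference)))
        if similarity >= MATCH_THRESHOLD:
            return name  # potential match - flag for human review
    return None  # no match - the image can be discarded immediately

print(check_against_watch_list(np.array([0.13, 0.86, 0.44])))  # subject_a
print(check_against_watch_list(np.array([0.50, 0.50, 0.50])))  # None
```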

The benefits of AI are many: increased efficiency and automation, enhanced decision-making, 24/7 availability, cost savings, innovation and creativity, accuracy and accessibility. Disadvantages will be model specific. For example, the use of LFR applications can pose significant challenges from both a technical and a human perspective (system load, human capacity and biases, etc.).

The development, use and implementation of AI is ongoing. Further information and guidance for police forces can be found in –

General advice to services offering AI is to be thorough in the ongoing testing of the software, ensuring that any representations made regarding the AI system are accurate and that the system is operated as required.

What effect is AI going to have on the Criminal Justice System?


The Criminal Justice System is believed to be in crisis, as the number of outstanding cases in the Crown Court has reached a record high and trials are being listed as far ahead as 2029. One contributing cause is the increasing complexity of criminal law, in both its procedures and new forms of evidence (mobile phone evidence, computer evidence or DNA analysis).

As a result, on 12th December 2024, the Lord Chancellor announced an Independent Review of the Criminal Courts, to be carried out by the Rt Hon Sir Brian Leveson. The Review’s purpose is to produce options and recommendations for a) how the criminal courts could be reformed to ensure cases are dealt with proportionately, considering the current pressures on the Crown Court; and b) how they could operate as efficiently as possible. The Review is in two parts: Part 1, the Policy Review, and Part 2, the Efficiency Review. Part 1 has been published and can be read here: Independent Review of the Criminal Courts - Part 1. Of particular interest, Part 2 (yet to be published) intends to consider improvements to end-to-end case progression; incentivising more effective inter-agency collaboration and local leadership to improve performance outcomes; developing an experienced workforce; using the court estate more effectively; and, in relation to AI, encouraging the integration of new technologies, including artificial intelligence.

In part 1 of the report, the Rt Hon Sir Brian Leveson states - 

“AI will be approached as the starting point for a long-term vision for criminal justice beyond the immediate crisis. The pace of change in technology is such that, within ten years, the landscape within which any criminal justice system will operate is beyond our ability to visualise. The Times Justice Commission described the opportunities available to the criminal justice system through AI and other developing technologies (A Report into the state of the criminal justice system, Times Crime and Justice Commission, 2025). Similarly, Professor Richard Susskind CBE, KC (Hon) has described the need for a vision for criminal justice in 2035, recognising the transformational potential of technology (with thanks to Professor Richard Susskind for his submission to this Review). I agree with this and have no doubt that AI can improve aspects of the system and timeliness when I turn to it in the Efficiency Review.”

It will be interesting to see what recommendations are proposed in relation to AI once Part 2 of the report is published.

Also of interest is the Ministry of Justice policy paper ‘AI action plan for justice’, published 31st July 2025, which proposes to incorporate AI productivity tools, such as translation tools (amongst other suggestions).

What are the perceived risks of AI?

As with other technological developments, the use of AI brings benefits but also comes with risks. These include –

1. It can be, and is being, used for unlawful purposes such as perpetrating fraud, causing harassment and assisting in cyber-attacks – posing a threat to national security.

2. It may be used to spread disinformation, which could then potentially harm the democratic process. On 11th July 2025, the Science, Innovation and Technology Committee published a report on its inquiry into the spread of misinformation and the sufficiency of the online safety regime. It concluded that the current online safety regulation does not go far enough and urged the government to regulate further on generative AI (GenAI).

3. It can be, and is, used to create ‘deepfakes’. There are already offences making it illegal to share photographs or films, without the subject’s consent, which show or appear to show another person in an intimate state (sections 66A, 66B, 66C and 66D of the Sexual Offences Act 2003), but there are ongoing proposals for new offences in the Crime and Policing Bill. For example, it will become illegal to adapt, possess, supply or offer to supply a child sexual abuse material (CSAM) image generator, punishable by a maximum of five years’ imprisonment. There will be defences for Ofcom and the intelligence agencies, and a delegated power for the Secretary of State to permit relevant organisations to possess CSAM image generators for an appropriate purpose, for example testing to determine the capabilities of the models to prevent future crime.

4. There are social concerns that AI could cause harm by way of social upheaval, for example by replacing workforces, at scale, in a wide range of industries, from manual to highly skilled.

5. It will have an environmental impact, given the large quantities of energy and water the technology uses.

6. It poses risks to mental health. An example is the use of chatbots fuelling dependence and blurring boundaries.

7. ‘Black box’ systems and opacity in AI algorithms - machine learning systems whose workings are not understandable to the user, with a lack of transparency as to how the algorithm arrives at its conclusions.

8. Data protection - people should be mindful of entering private or sensitive data into publicly available AI systems such as ChatGPT, as that data can potentially be accessed by staff or sub-contractors working on training the model. Such sharing could potentially breach a duty, where one exists, to keep the information confidential.

9. Generative AI can ‘hallucinate’. This refers to instances where AI generates false or misleading information, often presenting it as fact. Lawyers are increasingly using AI systems to build legal arguments, and two cases this year were blighted by made-up case-law citations that were either confirmed or suspected to have been generated by AI. See Ayinde -v- London Borough of Haringey and Al-Haroun -v- Qatar National Bank.

A final word 

Whilst this article is titled ‘the trial - human v machine’, AI is not so much on trial but, as illustrated, is already embedded in many aspects of our lives and is indeed being used in law enforcement. It is here to stay. The use of AI models may benefit law enforcement and the criminal justice system, but there are risks, especially with the use of generative AI, with its opacity (lack of transparency in decision making) and its potential to hallucinate (provide false facts). The use of AI in society should therefore not be seen as human versus machine, but as humans working with machines. As AI models can ‘learn / train’ and infer, we too must learn about AI and its ways of working. We must develop the technical skills needed to communicate with and understand AI development, whilst ensuring that we do not lose sight of the importance of human qualities - trust, understanding, empathy, respect, compassion and collaboration - qualities which are crucial in any decision-making process, whether carried out by human or AI. Human oversight and testing before deployment are paramount. The UK’s flexible approach to the legal framework for AI does allow innovation, but whether this approach goes far enough to prevent the risks posed by AI is a question that can only be answered with time. Likewise, although the EU AI Act does appear to provide a much clearer framework for preventing the risks posed by AI, whether the EU will fall behind the UK in its innovative use of AI is again a matter for time to answer.
