Artificial Intelligence: Legal Issues in the UK

16 Jul 2020

The United Kingdom is one of Europe’s leaders in artificial intelligence, particularly in the healthcare sector. According to the McKinsey Global Institute, AI can boost the UK’s economy by 22% within 10 years. 

If you’re developing an AI-based startup or any kind of AI-powered product in the UK, you’re in a better position than in most of Europe. The country is particularly open to innovation, and the government keenly supports many initiatives. For instance, the Alan Turing Institute serves as the national institute for data science and artificial intelligence, while the House of Lords also has its own AI-focused committee.

This data-driven culture makes the UK a significant artificial intelligence hub. A report by Coadec suggests that every week, a new AI startup is founded in the country. Before you get going, though, there’s something you should consider first: the legal requirements and the most common AI-related legal issues.

As AI is a new and developing field, it has slightly different requirements than other, more traditional industries. If you’re looking for legal advice, we’re here to get you started.

Here are some of the most important AI legal issues to consider:

The legal definition of AI 

First of all, how do you even define artificial intelligence? 

Here’s where the issues start. Some lawmakers rely on a simple heuristic and define AI as a combination of software and data. That sounds straightforward, yet it’s worth pointing out that we’re dealing with more sophisticated software and larger data volumes than ever before.

Other legal professionals, such as Jonas Schuett of Goethe University, suggest it’s better to avoid the term artificial intelligence altogether. He argues that no definition of AI meets the requirements for a legal definition. Instead, he suggests focusing on:

  • certain designs
  • use cases
  • capabilities with possible risks in mind

These suggestions are mainly addressed to policymakers, yet they can also serve as a guideline for you. To get it right, it’s best to focus on your specific use case of AI and the risks that come with it.

Definition of AI in the United Kingdom

When it comes to the legal definition of artificial intelligence, here’s how the UK government describes it:

[…] technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.

The British parliament has recently added another aspect to this definition, noting that AI systems have the capacity to learn or adapt to new experiences or stimuli.

The key legal issues of AI

Processing large amounts of data

To work properly, artificial intelligence algorithms need a lot of data. And here comes another AI legal issue: who owns the data and who takes care of the security? It gets even more complicated with sensitive information in sectors like banking or healthcare. 

There are two main pieces of data protection legislation currently in effect in the UK:

Data Protection Act and GDPR (General Data Protection Regulation)

In 2018, the Data Protection Act replaced its 1998 predecessor. Together with the GDPR, it shapes the processing of personal data in the UK.

As you probably already know, the GDPR has completely changed the way personal data is handled in the European Union. Despite the changes that come with Brexit, UK businesses still need to comply with the GDPR, as they often process the data of European clients.

Some of the AI-related implications that come with GDPR include:

  • Fairness principle – a business must process a data subject’s data fairly and in line with their interests. Biased input data is a huge issue in AI – we’ll cover it later in more detail, along with hands-on examples.
  • Purpose limitation – users must be told why you are collecting their data. As AI requires large volumes of information, you need to let your audience know what you’re going to do with it.
  • Transparency and access to information – your customers have the right to access their data and to request its erasure – the so-called right to be forgotten (see the sketch below).
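
To make the right to be forgotten more concrete, below is a minimal sketch in Python of how an erasure request could be handled. It is only an illustration under our own assumptions: UserStore and erase_user are hypothetical names standing in for your real data layer, not part of any library or of the GDPR itself.

```python
# A minimal sketch of handling a GDPR erasure ("right to be forgotten") request.
# UserStore and erase_user are hypothetical names standing in for your real data layer.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class UserStore:
    """Toy in-memory store standing in for a production database."""
    records: Dict[str, dict] = field(default_factory=dict)
    audit_log: List[dict] = field(default_factory=list)

    def erase_user(self, user_id: str) -> bool:
        """Delete all personal data held for a user and log that the request was handled."""
        existed = self.records.pop(user_id, None) is not None
        # Keep only a non-identifying note that an erasure took place, so compliance
        # can be demonstrated without retaining any personal data.
        self.audit_log.append({"event": "erasure_request", "fulfilled": existed})
        return existed


store = UserStore(records={"42": {"email": "jane@example.com", "consent": "newsletter"}})
print(store.erase_user("42"))  # True – the record is gone
print(store.records)           # {} – no personal data left
```

The key design choice here is retaining only a non-identifying audit entry, so you can show the request was fulfilled without holding on to the personal data itself.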

The story of the Royal Free NHS Foundation Trust and DeepMind, Google’s AI unit, is an interesting example here. The collaboration between the two parties was found to have violated UK data protection laws. The ICO, the UK’s data protection authority, found that patients were not informed that their data would be used in the development of an AI solution.

Data anonymisation

To use and share large data volumes without breaking the law, you need to anonymise them first. Anonymised data is information that can no longer be linked to a living individual. Once data has been anonymised, the UK Data Protection Act no longer applies.

The process of anonymisation requires getting rid of:

  • direct identifiers, such as the name, email, or phone number
  • indirect identifiers that could reveal the individual by cross-referencing, such as the workplace and location
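
As a rough illustration of these two steps, here is a minimal anonymisation sketch using pandas. The column names and the record are invented for the example, and in practice dropping and coarsening columns is only a starting point – you would also need a re-identification risk assessment (for example, k-anonymity checks) before treating the data as truly anonymous.

```python
# A minimal anonymisation sketch, assuming user records in a pandas DataFrame.
# Column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame([
    {"name": "Jane Doe", "email": "jane@example.com", "phone": "07700 900123",
     "workplace": "Acme Ltd", "postcode": "SW1A 1AA", "age": 34, "diagnosis": "asthma"},
])

# 1. Drop direct identifiers outright.
df = df.drop(columns=["name", "email", "phone"])

# 2. Generalise indirect identifiers that could reveal someone when cross-referenced:
#    coarsen the location and bucket the exact age.
df["postcode"] = df["postcode"].str.split().str[0]  # keep the outward code only
df["age_band"] = pd.cut(df["age"], bins=[0, 18, 30, 45, 65, 120],
                        labels=["<18", "18-29", "30-44", "45-64", "65+"])
df = df.drop(columns=["age", "workplace"])

print(df)  # only the coarsened fields and the non-identifying attribute remain
```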

This practice helps protect users’ privacy, yet removing the identifiers is just the beginning of the journey.

Ethical issues and biases

Although the name artificial intelligence may suggest otherwise, this technology is not immune to human-like biases. In her book Technically Wrong, Sara Wachter-Boettcher describes a range of cases where AI goes, well, terribly wrong. 

The author shows that while artificial intelligence can be quite autonomous, it is still based on some kind of input, which is not free from our biases and initial assumptions. For instance, she describes the case of the Google Photos algorithm. Its goal was to detect what’s in a picture, yet it came with one serious limitation – it treated white-skinned people as the default. Because of that, it was liable to automatically tag black people as… gorillas. One user found that the algorithm had labelled them as apes across their Google Photos albums.

In this case, the problem lies in the input data. The neural network was trained mainly on images of white people, which is why it failed to handle darker skin tones correctly. Even though the algorithm was not explicitly racist, it still displayed an apparent racial bias.
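
One practical way to catch this kind of problem before release is to audit model accuracy per demographic group. Here is a minimal, purely illustrative sketch in Python; the groups and results are invented for the example and would in practice come from your own labelled evaluation set and real model predictions.

```python
# A minimal per-group accuracy audit for an image classifier.
# The groups and results below are invented for illustration.
from collections import defaultdict

# (group label, whether the model's prediction was correct)
evaluation_results = [
    ("lighter-skinned", True),
    ("lighter-skinned", True),
    ("lighter-skinned", True),
    ("darker-skinned", True),
    ("darker-skinned", False),
    ("darker-skinned", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, is_correct in evaluation_results:
    total[group] += 1
    correct[group] += int(is_correct)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy "
          f"({correct[group]}/{total[group]})")

# A large accuracy gap between groups is a strong signal that the training data
# under-represents one group and needs rebalancing before release.
```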

Cases like these prove that, in layman’s terms, AI is what we make it: we feed it with information affected by our own biases and limitations. Wachter-Boettcher named one of the sections of her book Biased Input, Even More Biased Output. This short phrase captures how AI can amplify existing ethical issues.

Legal challenges

As we’ve mentioned, AI is autonomous, but the question is: who’s responsible for the damage it may cause? 

When it comes to UK legal regulations, an automated system (such as an AI algorithm) is not a legal person in the eyes of the law. Responsibility lies in the hands of its creators, such as the stakeholders, operators, designers, or testers of the system.

The question of responsibility for damages caused by AI is a hot topic when it comes to introducing new products, such as autonomous cars. The European Parliament has issued a draft report with recommendations regarding a civil liability regime and its suitability for AI. 

The Parliament has underlined that it is not always possible to trace specific actions of an AI system back to specific human input or design. Because of that, it suggests that liability should be based on risk and that deployers of AI systems should consider holding liability insurance.

In the coming years, we’re going to see how different jurisdictions respond to AI products in order to ensure proper compensation for any harm they cause.

Legal issues of artificial intelligence: final thoughts

We hope that this summary has helped you learn more about the legal status and the most common AI legal issues in the United Kingdom.

At Miquido, we have years of experience in creating AI solutions for the UK market. If you’d like to discuss an AI-based solution that will suit your business needs, or simply ask a question, don’t hesitate to contact us!

Special thanks to WKB – Wierciński, Kwieciński, Baehr for their legal tips and for helping us write this article!
