Microsoft recently announced a significant investment in OpenAI – the company behind ChatGPT. The advanced AI chatbot is all over the headlines, but the jury is still out. Machines (or AI) continually challenge human intelligence and are advancing rapidly, enabled by developments in deep learning algorithms, cloud computing and more. Jackie Barlow, Data Protection Senior Consultant at Xcina Consulting, explains why its use can present firms with potential data protection compliance risks.
This month sees the introduction of a new international privacy standard, an upgrade from ISO 27001 adding to a growing total of certifications from the ISO. The new standard is said to bring a consumer-focused approach. How will it impact your business?
Learn more about the details behind changes in the pipeline.
‘ChatGPT’ (Chat Generative Pre-trained Transformer) and data protection concerns
‘ChatGPT’ is a chatbot tool powered by Artificial Intelligence (AI) that has been trained to respond to user enquiries in an interactive chat interface.
The tool has surged in popularity since its release, especially among students and professionals, because it performs efficiently at assisting everyday tasks. It can answer questions and help compose essays, job applications or letters. It can also be used to converse with a virtual assistant and to generate text in response to a prompt.
However, users of ChatGPT have no control over the large data sets of text from the internet used to train it or the algorithms it runs on. These large datasets might include the personal data of individuals and there is a risk that personal data might be inadvertently processed, in violation of data protection laws.
- Users have no means to ensure the personal data was collected legitimately.
- It is not clear whether the right of erasure under Article 17 GDPR, or other data subjects’ rights, can be complied with if an individual seeks to exercise them.
Organisations will need to fully understand what data the tool uses to create its responses in order to comply with data protection laws, including individuals’ rights.
Why it matters
The use of ‘ChatGPT’ (and other ‘generative’ AI models) raises a number of ethical and legal concerns in terms of data privacy. Organisations that use ‘ChatGPT’ need to be aware of the difficulty in understanding and controlling the large data sets used.
Investigations will be needed to identify and enforce the legislation governing the use of AI models, ensuring individuals’ privacy rights are protected. Organisations using the tool will require a thorough understanding of how the AI systems interpret prompts and generate responses.
So, at present, it is uncertain whether ‘ChatGPT’ or other AI models can adhere to data protection laws including individuals’ rights (particularly the right to erasure required by Article 17 GDPR).
The Information Commissioner’s Office has produced guidance on AI and data protection, which can be found at Guidance on AI and data protection | ICO.
‘Privacy by Design’ is to become a new ISO standard – ISO 31700
The International Organisation for Standardisation (ISO) has recently set out a new standard for ‘Privacy by Design for consumer goods and services’. It is being described as ‘a major milestone in privacy’. The concept of Privacy by Design is already recognised; Article 25 of UK GDPR sets out the requirement for ‘Data Protection by Design and by Default’.
The ISO 31700 standard contains 30 requirements, including:
- General guidance on how to design capabilities to enable individuals to enforce their privacy rights,
- How to assign relevant roles and authorities,
- How to provide privacy information to consumers,
- How to conduct data protection impact assessments,
- How to establish and document requirements for privacy controls, and
- How to prepare for and manage data breaches.
Organisations must also implement appropriate technical and organisational measures to adhere to UK GDPR and also to protect the rights of individuals.
ISO 31700 was published at the end of January 2023. A spokesperson from the ISO/PC 317 communications group said:
“[ISO 31700] embraces win-win privacy solutions for organisations and consumers working together, towards the next generation of privacy-respectful consumer products ranging from social media to banking and even clothing.”
Why it matters
The ISO 31700 document establishes high level requirements for privacy by design. Most organisations are already familiar with the concept of ‘Data Protection by Design and Default’ and are already completing data protection impact assessments (DPIAs) when they embark on new projects, new technologies or new third party relationships.
UK GDPR already requires organisations to put in place appropriate technical and organisational measures to implement data protection principles effectively and safeguard individuals’ rights. The new ISO standard will provide a greater incentive for organisations to take this best practice approach.
It is a voluntary certification, but it might give certified organisations a competitive advantage over their uncertified competitors, as it signifies a commitment to data privacy.
Building consumer trust and providing assurance that individuals’ privacy rights are met effectively are defining concerns for the digital economy, so being certified might evidence an organisation’s commitment to quality.
Artificial Intelligence (AI) and GDPR/Employment Law
Artificial Intelligence continues to expand into everyday life as organisations look to minimise overheads and increase automation. AI is increasingly being used to make decisions about individuals, and it is likely to influence employment in many ways: most obviously by replacing people in lower-skilled roles, but also in less obvious ways, such as in compliance with equality legislation.
This disruptive technology is already present in most businesses, for example in facial recognition used to secure company devices or in spam filters on email. Some companies are considering using AI to screen candidates in recruitment; however, there is a concern that some AI programmes carry out this screening without adherence to employment law or GDPR protections.
AI-led employment decisions are potentially unlawful, and it is important to remember that individuals have the right not to be subject to automated decision-making under GDPR. Protection against automated decision-making is an existing concept that legislation will need to expand on going forwards.
The UK’s 10 year ‘AI Action Plan’ is still in its early stages. The Alan Turing Institute (the UK’s national institute for data science and artificial intelligence) is working together with the ICO on the action plan and a framework will be developed for explaining processes, services and decisions delivered by AI to improve transparency and accountability.
Why it matters
Where AI involves personal data, vast amounts of it are often used to train and test machine learning and deep learning models. Very often, personal data is collected and fed through a model to make decisions about individuals, and those decisions, even if only predictions or inferences, are themselves personal data.
There are potential dangers where AI is used in employment related activities, particularly where roles that were previously carried out by human intelligence are simulated and replaced with a machine.
New ‘AI projects’ will require the completion of data protection impact assessments (DPIAs) to assess the risks involved.
Whilst the details of the new AI framework are yet to be confirmed, organisations can prepare for wider use of AI in the workplace by reviewing their processes and being clear on which roles may be at risk of being replaced by this technology. This will help firms plan for adopting AI technologies and ensure adequate protection from potential challenges.
For further information, the Information Commissioner’s Office (ICO) has produced guidance on AI and data protection at Guidance on AI and data protection | ICO.