In this issue of In Perspective, Jackie Barlow, Data Protection Senior Consultant at Xcina Consulting, discusses the questions every business should be asking.
This month we review why the Equality and Human Rights Commission is warning businesses to take care when using AI to recruit talent and why you should take care when using Excel to process sensitive data, and we also consider new regulation (NIS2) around cybersecurity risk management in Europe.
We take a look at why this is important and the implications for both businesses and individuals.
Find out more below.
The Equality and Human Rights Commission (EHRC) – a warning to take care when using AI in recruitment
What happened
- The EHRC recently updated its guidance on using AI tools to draft job adverts. The risk is that AI-created output may include biases that affect the diversity of job applicants
- AI is very useful for generating job descriptions, creating messages to applicants, summarising candidates’ skills, producing suitable interview questions and creating follow-up emails
- Using AI can save much of the time previously taken in doing these tasks manually
- However, bias in AI systems occurs in a number of ways. It is often due to the data that is used to train the model (including societal and cultural factors contained in the training data) but also due to the algorithms and techniques that the AI uses
- A number of factors can influence who applies for a job, such as (i) the job title used, (ii) the skills required and (iii) the language in the job description, including what is known as ‘gendered language’
- Gendered language includes typically feminine wording, for example terms that are nurturing or collaborative, and typically masculine wording, which is more dominant, competitive or ambitious
- Employers must have an AI policy in place which sets out clearly how AI can be used. A good policy will include (i) how to avoid confidentiality and data breaches (ii) how to avoid using inaccurate information created by AI, whether it is discriminatory or a potential hallucination (iii) when to make it clear that a generative AI tool has been used and (iv) how to make sure use of AI tools is responsible and ethical.
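As an illustration of the kind of automated check an employer might run before publishing an advert, the sketch below scans a job description against small word lists of masculine- and feminine-coded terms. The word lists are illustrative examples drawn from the wording above, not from any EHRC or ICO guidance; real screening tools use researched lexicons.

```python
import re

# Illustrative word lists only -- a hypothetical example, not an official
# lexicon. Production tools use researched gender-coded word lists.
MASCULINE_CODED = {"dominant", "competitive", "ambitious", "driven", "assertive"}
FEMININE_CODED = {"nurturing", "collaborative", "supportive", "caring"}

def flag_gendered_language(advert: str) -> dict:
    """Return any gender-coded terms found in a job advert."""
    words = set(re.findall(r"[a-z]+", advert.lower()))
    return {
        "masculine": sorted(MASCULINE_CODED & words),
        "feminine": sorted(FEMININE_CODED & words),
    }
```

A reviewer could run each draft advert through a check like this and rephrase any flagged terms in neutral language before publication.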
Why it matters
- Generative AI models are being used increasingly in the recruitment sector to streamline the hiring process
- There is a risk that using AI in recruitment can cause biases and discrimination
- It is crucial that employers review their job advertisements and other recruitment literature to make sure they are inclusive and not discriminatory
- Neutral language must be used and assumptions must not be made about which types of people will be suitable for a role
- Recent EHRC guidance also shows that gendered job titles such as ‘postman’, ‘policeman’ or ‘sales girl’ are likely to be discriminatory under the Equality Act 2010
- Organisations must have an AI policy which sets out clearly how AI can be used and for which purposes
Next steps
On 6.11.24 the ICO published a report on the use of AI in recruitment, making almost 300 recommendations.
Guidance on using AI in recruitment will be issued in early 2025. Please see Our plans for new and updated guidance | ICO
Processing sensitive data? - take care when using Excel
What happened
- The ICO has fined the Police Service of Northern Ireland (PSNI) £750,000
- It is the biggest fine on a public body and follows the unauthorised disclosure of an Excel spreadsheet containing the personal data of 9,483 police officers and staff
- The PSNI had failed to notice that the Excel file contained a hidden tab. Only the visible tab was checked.
- The hidden tab included the personal data of all officers and staff. Surnames and initials, job roles, rank, staff numbers, location, contract type and gender were disclosed
- The spreadsheet was uploaded to the What Do They Know (WDTK) website in response to a Freedom of Information request
- The PSNI was alerted to the breach by its officers the same day and the file was hidden from public view by WDTK and then deleted from the website soon after
- The PSNI reported the breach to the ICO
- The ICO’s policy is to issue reprimands and enforcement notices, instead of fines where possible, to act as a deterrent
- However, this was a serious case, which warranted the issue of a fine.
- This sends a stark warning to private sector organisations. The PSNI’s fine was reduced because it is a public sector organisation; a private organisation in the same position could have faced a fine of £17.5m
Why it matters
- The personal data held by the PSNI is especially sensitive, as officers often conceal their occupation even from friends and family
- Furthermore, the risk to PSNI members is high. A senior police officer was shot in Northern Ireland in February 2023, and the risks to officers in covert roles are especially high
- The PSNI breached UK GDPR by: (i) failing to ensure appropriate security, (ii) failing to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk of processing the data and (iii) failing to appropriately assess the level of security required
- Accounts provided by individuals affected by the breach highlighted the risks involved and these were pertinent in the ICO’s evaluation of the breach
Further information
Further details of this fine can be found at What price privacy? Poor PSNI procedures culminate in £750k fine | ICO
Takeaways
The PSNI’s failure to implement appropriate technical and organisational measures was an important factor in determining the large penalty imposed in this case.
It is important for all organisations to train staff who process personal data to recognise the risks associated with hidden tabs in spreadsheets.
In particular, documents that are to be uploaded to public systems must be thoroughly checked.
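Hidden worksheets can also be surfaced programmatically before a file is released. The sketch below is a minimal illustration in Python, using only the standard library: an .xlsx file is a ZIP archive, and its sheet list (including each sheet's visibility state) lives in xl/workbook.xml. The file and sheet names used are hypothetical examples.

```python
import zipfile
import xml.etree.ElementTree as ET

# Namespace used by the SpreadsheetML workbook part of an .xlsx file.
NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

def hidden_sheets(xlsx):
    """Return names of hidden or 'very hidden' worksheets in an .xlsx file.

    An .xlsx file is a ZIP archive; the sheet list lives in xl/workbook.xml,
    where a 'state' attribute of 'hidden' or 'veryHidden' marks a sheet
    that Excel does not display.
    """
    with zipfile.ZipFile(xlsx) as zf:
        root = ET.fromstring(zf.read("xl/workbook.xml"))
    return [
        sheet.get("name")
        for sheet in root.iter(f"{NS}sheet")
        if sheet.get("state", "visible") != "visible"
    ]
```

A release checklist could require a check like `hidden_sheets("foi_response.xlsx")` (a hypothetical file name) to return an empty list; any non-empty result means the file contains data the reviewer has not seen.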
New NIS2 overhauls cybersecurity risk management in Europe
What happened
- The Network and Information Systems (NIS) Directive was the first piece of EU legislation on cybersecurity and it aimed to achieve a high level of cybersecurity across the EU
- It increased member states’ cybersecurity capabilities but it proved to be difficult to implement. This resulted in fragmentation at different levels across the internal market
- The new NIS2 came into effect on 18.10.24
- NIS2 is a big change for data centre providers and those in their supply chain, because of new incident-reporting obligations, expanded audit and oversight measures, and increased enforcement powers
- Cloud-based architecture is now seen as the industry norm; data centres have become custodians of most digital records and are a core part of any infrastructure or service
- Incidents like ransomware attacks and DDoS attacks on data centres have increased in recent years with many experiencing these on a daily basis
- The fines are substantial: non-compliance with NIS2 can attract fines of up to EUR 10m or 2% of worldwide annual turnover, whichever is higher
- NIS2 has also introduced enhanced audit and inspection measures, with each EU state regulator conducting regular inspections and audits of organisations’ information security management frameworks and cybersecurity posture
Why it matters
- NIS2 is a proactive approach to cybersecurity and operational resiliency
- NIS2 has been implemented to strengthen security requirements, address the security of supply chains, streamline reporting obligations and to introduce more stringent supervisory measures and stricter enforcement requirements
- NIS2 requires regulated entities to make sure their information systems are hardened against cybersecurity threats, vulnerabilities and outages.
- NIS2 imposes obligations on a broader range of entities. The obligations depend on whether they are identified as being ‘essential’ or ‘important’
- Data centres and related providers have an essential role in the European economy, so are categorised under NIS2 as ‘essential’ and, therefore, subject to NIS2’s most stringent security measures
Further information
Information from the European Parliament can be found at The NIS2 Directive