In this issue of In Perspective, Jackie Barlow, Data Protection Senior Consultant at Xcina Consulting, discusses the questions every business should be asking.
This month we examine why cyber risks continue to challenge organisations, review the proposed changes in the new Data (Use and Access) Bill, and consider whether the UK will change its stance on the regulation of AI.
We take a look at why this is important and the implications for both businesses and individuals.
Find out more below.
Cyber Risks Continue to Challenge Organisations
What happened
- With external risk evolving and internal challenges increasing, organisations face tough times in the cyber space
- Ransomware and business email compromise (BEC) attacks continue to provide criminals with financial rewards
- The use of AI in cyberattacks is on the rise – AI is being used to create more sophisticated phishing attacks, deepfake scams and automated hacking attempts
- With more businesses moving to the cloud, vulnerabilities in cloud security are becoming more apparent
- Human error is still responsible for a large percentage of successful cyberattacks, e.g. falling for phishing scams or failing to follow security protocols. Remote working trends have exacerbated this
- Supply chain attacks – cybercriminals are exploiting the interconnected nature of modern businesses, increasing their returns by compromising a single supplier in order to reach its many customers
- The increase in Internet of Things (IoT) devices has opened up new avenues for cyberattacks. These devices often lack robust security measures, making them easy targets
- Many attacks are difficult to identify because attackers are becoming more sophisticated and even specialising in particular areas
Why it matters
- In addition to the short-term concerns of incident response, remediation and legal costs, organisations in tightly regulated sectors (e.g. those within the scope of NIS2 in the EU) may need to notify regulators under both the GDPR and NIS regimes
- Litigation risk is significant. Class actions have become more prevalent, with potentially greater costs to organisations
- Reputational risk is at stake, particularly where organisations respond inadequately to incidents
- Attackers have become more aggressive, more sophisticated and less predictable. For example, ready-to-deploy malware kits and breached access points into victim networks are now available on the dark web under a variety of payment models
- Board engagement is seen by regulators as critical, and they will evaluate how well organisations understand cyber risk during inspections
What next?
In the next three years, the regulatory burden in the cyber security landscape is expected to grow
The supply chain will remain a key area and there will be a need for increasingly complex contracts and security requirements in supply chains
Ransomware attacks are expected to evolve and become faster and more targeted
Cyber criminals will increasingly integrate AI and automation into their ransomware operations, which means early detection will be critical
Further information
In 2024 the ICO published the following article on the growing threat of cyber risk: Organisations must do more to combat the growing threat of cyber attacks | ICO
The new Data (Use and Access) Bill
What happened
- The new Data (Use and Access) Bill had its first reading in the House of Lords on 23.10.24
- It includes many proposals from the previous Bill, but there are some key changes:
- Data controllers must provide a complaint form that can be completed electronically, acknowledge complaints within 30 days and respond without undue delay
- For DSARs (data subject access requests), controllers that process large amounts of personal data can ask for further information to clarify what is actually required. Only a reasonable and proportionate search needs to be made
- In terms of transparency, if the controller is processing for research, archiving or statistical purposes, it does not have to provide information in its privacy notice about this if doing so involves disproportionate effort
- Automated decision-making based on special category data must rely on explicit consent, contractual necessity or substantial public interest
- For international transfers, the test for assessing whether a third country’s data protection is adequate is now ‘not materially lower’ instead of ‘essentially equivalent’
- Some categories of ‘recognised legitimate interests’ are created where no legitimate interests assessment is needed
- A number of changes to PECR are also included: (i) consent is not required for low-risk cookies, (ii) fines for PECR breaches will be brought into line with the GDPR and (iii) officers can be fined if a data breach occurs with their consent or connivance or is due to neglect on their part
- PECR breaches (like GDPR breaches) must be reported within 72 hours
Why it matters
- In order for the UK to retain adequacy, it is important that the EU views the new Bill as maintaining a high standard of data protection
- Some provisions included in the previous Bill have been dropped:
- The change from Data Protection Officer (DPO) to Senior Responsible Individual (SRI);
- Relaxation of the requirements for data protection impact assessments (DPIAs) and Record of Processing Activities documents (RoPAs); and
- A change in the test for when controllers could refuse DSARs from “manifestly unfounded” to “vexatious or excessive”.
- Additionally, the changes proposed to allow banks to provide the Government with bank details of benefits recipients are not in the new Bill. These will be included in the Fraud, Error and Debt Bill, coming in the autumn of 2025.
Next steps
The new Bill passed its second reading in the House of Lords on 19.11.24 and is going through the Committee Stage. It is expected to come into force later in 2025.
The ICO has set out its response to the new Bill at Information Commissioner’s response to the Data (Use and Access) (DUA) Bill | ICO
Will the UK change its stance on the regulation of AI?
What happened
- So far, the UK does not plan to adopt specific AI legislation
- The previous Conservative government’s intention was to ‘leverage the expertise of our world class regulators who understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’
- It was thought that the new Labour government might move to an EU style of regulation. This has not happened and it is not known how the most powerful AI models in the UK will be regulated
- This has sparked debate. The government’s stance is pro-innovation and it wants to balance fostering AI development with managing its risks
- Will the UK assess an AI model’s power in terms of risk (as the EU does) or in terms of computational power (as in the US)?
- Some changes are proposed in the Product Safety and Metrology Bill which might enhance consumer protection in terms of new technologies such as AI
- Also, the Digital Information and Smart Data Bill promises reforms to areas of data law where a lack of clarity might prevent the safe development of new technologies that might impact AI systems
- Trade Unions might push for legislation to enhance employees’ rights in response to risks caused by AI in the workplace
- The current regulators are the Competition and Markets Authority (CMA); the Information Commissioner’s Office (ICO); the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA)
Why it matters
- The new Labour government has not yet put forward significant changes, so the cross-sectoral, regulator-led approach remains the UK’s default position on AI regulation
- The five cross-sectoral principles currently included are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress
- This sectoral approach will be implemented in line with existing legislation. Regulators have been invited to add additional criteria suitable for their specific industries
- Critics argue that this method might be too lenient, possibly allowing harmful AI applications to slip through the cracks. Without stringent, AI-specific rules, will the UK struggle to address the unique challenges that AI presents, such as bias, privacy concerns and accountability?
Takeaways
There is a fear that the UK’s approach is too light touch. In the absence of AI-specific regulation, any problems raised by the development and use of AI are only subject to the relevant parts of current legislation.
The success of this approach will depend on how well it can adapt to the fast evolving AI landscape whilst at the same time, ensuring public trust and safety. This space needs to be watched!
Details of the UK regulators’ strategic approaches to AI can be found at Regulators’ strategic approaches to AI – GOV.UK
The government’s white paper on its pro-innovation approach can be found at AI regulation: a pro-innovation approach – GOV.UK