Fined £7.5m: Facial recognition company Clearview AI Inc
ICO fines facial recognition database company Clearview AI Inc more than £7.5m and orders UK data to be deleted
- Clearview AI Inc has been fined more than £7.5m by the ICO for collecting photographs of people in the UK from the web and social media to build a worldwide online database that could be used for facial recognition.
- Clearview’s database is likely to contain a significant amount of information about UK individuals that was collected without their knowledge.
- The ICO held that Clearview lacked a lawful basis for processing the information; breached its obligation to use the information in a fair and transparent way; failed to have a process in place to stop the data being retained indefinitely; failed to meet the higher data protection standards required for biometric data; and potentially impeded the rights of individuals who wished to object to their data being processed.
- The ICO has issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data from its systems.
Why it matters
- This is a clear reminder of the ICO’s strong stance on the use of AI-based technology and how, without proper consideration and planning, its use can fall foul of UK data protection law.
- The ICO recently launched an AI and data protection risk toolkit to help organisations comply with data protection regulations and spread best practice in the use of artificial intelligence. The toolkit provides a methodology to audit AI applications and ensure they process personal data fairly.
- Organisations should use the AI toolkit to assess the privacy risks associated with their use of AI, in addition to carrying out a Data Protection Impact Assessment.
Google introduces a ‘reject all’ button for cookies following regulatory action
- Google has announced the introduction of a ‘reject all’ button on its cookie banners after its existing approach was found to be non-compliant and the French supervisory authority (the CNIL) issued a fine.
- Google’s previous cookie banner allowed users to accept all tracking cookies with a single click but forced users to click through various menus to reject them. The process was found to be deceptive, making it significantly harder to refuse or withdraw consent than to give it.
- Web browsers such as Apple’s Safari and Mozilla’s Firefox have already implemented similar measures.
Why it matters
- The ICO praised the decision, describing it as a long-awaited improvement to both user interaction and compliance.
- The ICO has commented that it expects the online advertising industry to follow Google’s lead and provide clearer choices for consumers.
- According to the European Centre for Digital Rights, which advocates for fair cookie mechanisms, 90% of users accept all cookies, but only 3% actually want them. Changes like those made by Google may contribute to tipping this balance.
- Organisations should review their cookie banners to ensure that users have a privacy-friendly browsing experience and can reject cookies with ease.
Twitter agrees to pay $150 million in fines after allegedly violating its privacy pledges
- After the US government sued Twitter, alleging that it misled consumers about how it secures their personal data, the firm agreed to pay $150 million in fines.
- According to the federal lawsuit, Twitter told its users that it was collecting their contact information for account-security purposes, yet had failed to disclose that it also would use that information to help companies send targeted advertisements to consumers.
- During the time period covered by the complaint, more than 140 million users gave their email addresses or phone numbers to Twitter for security purposes.
- As part of the settlement, Twitter agreed to put in place significant additional compliance measures to protect users in the future, including offering multi-factor authentication options that do not require a phone number and conducting a privacy review before launching any new product or service that collects users’ personal data.
Why it matters
- This is one of the largest penalties issued for a data privacy violation and follows several investigations into Twitter’s privacy and security practices.
- The case serves as an important reminder that organisations must be open and transparent with individuals from the outset about their purpose for collecting personal data and what they intend to do with it.
- The GDPR purpose limitation principle (which is closely linked with the principle of lawfulness, fairness and transparency) requires that personal data be collected only for specified, explicit and legitimate purposes. Data cannot be further processed in a manner that is incompatible with those purposes.
- Organisations failing to comply will be held accountable, facing regulatory action and reputational damage.