AirTags might be useful, but are they a threat to privacy?
 
 

In this issue of In Perspective, Jackie Barlow, Data Protection Senior Consultant at Xcina Consulting, discusses how new technology may have implications for UK businesses and individuals.


AirTags might be useful, but are they a threat to privacy?

What happened

  • Apple’s AirTags were introduced in April 2021 as an easy way to track items such as keys, wallets or lost luggage, but there have recently been reports of these tiny devices being used by thieves and stalkers to track people.
  • In one example, AirTags were placed on luxury vehicles so that the cars could later be tracked and stolen. Many individuals have found the devices hidden in their belongings.
  • There are notable differences between the technology behind AirTags and older tracking devices. AirTags have no built-in GPS; instead, they emit a continuous Bluetooth signal and ‘piggyback’ off the location data of nearby Apple devices. The resulting location is then viewable by the tag’s owner.
  • For iPhone users at least, there is some good news: they will receive a notification if an unknown AirTag that has been separated from its owner is travelling with them. However, this only works if their phone is running iOS 14.5 or later and the relevant settings are enabled.
  • Android users do not receive these notifications, but Apple has released an app called Tracker Detect that scans for unknown AirTags nearby. Tracker Detect only works while the app is open.

If someone is alerted to the presence of an unknown AirTag, they can trigger an audible chime to help locate the device. Removing the battery deactivates the AirTag.

Why it matters

  • The use of AirTags shows that as technology becomes more sophisticated and advanced, it also becomes much easier to misuse and abuse.
  • AirTags were designed to help people track their personal belongings, but they have been misused for stalking and other crimes, posing a danger to personal privacy and safety.

Current smartphone technology cannot provide a strong enough defence against this type of tracking. Going forward, Apple intends to work with law enforcement and to roll out further software updates to help iPhone users become aware of, and locate, any unknown AirTags in their possession.

Takeaways

Apple published an ‘AirTag Safety Guide’ earlier this year in response to concerns raised about the device. The BBC reported on this in: Apple unveils AirTag safety guide amid stalker fears – BBC News

The Apple Safety User Guide can be found at Personal Safety User Guide – Apple Support (UK)

If you are being maliciously tracked, the current advice is to contact the police. Each AirTag has a unique identifier which can be traced to identify the perpetrator.

 

 

ChatGPT suffers its first data breach

What happened

  • ChatGPT’s creator, OpenAI, recently released details of a data breach which occurred during an outage on 20 March 2023.
  • The issue came to light when OpenAI took ChatGPT offline after finding a bug in an open-source library which allowed some users to see titles from other active users’ chat history.
  • The bug was patched, but when technical details of the incident were examined, it was noticed that the same bug might have caused a personal data breach.
  • The breach exposed payment-related and other personal data relating to 1.2% of ChatGPT subscribers who were active during a specific nine-hour window.
  • Until ChatGPT was taken offline, it was possible for some users to see other users’ first and last names, email addresses, payment addresses, the last four digits of credit card numbers and the cards’ expiry dates.
  • Additionally, portions of users’ conversations were also exposed for around nine hours – and these might have contained any type of personal data, including more sensitive data.
  • Italy’s national data protection agency has decided to block access to ChatGPT immediately and will open an investigation into its creator, OpenAI.*
  • The Italian regulator claimed there was no legal basis for using individuals’ data, i.e. no justification for the mass collection and storage of personal data used to train the algorithms behind ChatGPT.
  • The regulator also claimed that the data was processed inaccurately in training the chatbot.

Why it matters

  • The data breach has validated previous warnings from industry watchers that entering information into a chatbot can be risky.
  • ChatGPT is powered by an artificial intelligence system trained on a vast amount of information gathered from the internet. Many see this as just another avenue through which individuals disclose personal data.
  • An important point raised by the breach is that ChatGPT has no age-verification mechanism, so it might expose minors to answers that are entirely unsuitable for their level of development or awareness.

Takeaways

There is much activity in this space. In particular, two recent developments are:

(i) The ICO has produced updated guidance (in line with its ICO25 strategy) following requests for clarification on fairness requirements when using AI. The guidance has been restructured, with a number of sections added or expanded. There is content on what to consider in terms of AI when completing a data protection impact assessment, and there are standalone chapters on transparency and lawfulness. Details can be found at: ICO issues updated guidance on AI and data protection

 

(ii) The UK’s long-awaited white paper on AI was published on 29 March 2023. It indicates that the government will provide a non-statutory definition of AI for regulatory purposes, along with ‘a set of high level, overarching principles’. The paper, called ‘A pro-innovation approach to AI regulation’, sets out an outcomes-based approach that differs markedly from the EU’s AI Act. The consultation on the white paper closes on 21 June 2023. The government’s press release can be found at: UK unveils world leading approach to innovation in first artificial intelligence white paper to turbocharge growth – GOV.UK (www.gov.uk)

 

*Reuters’ report on the ban by Italy’s data protection agency can be found at

Italy’s ChatGPT ban attracts EU privacy regulators | Reuters

 

 

We’d love to hear from you

Jackie has over 14 years’ experience in providing advice and training on data protection, records management and electronic marketing, gained from working in a number of different types of organisation. Prior to joining Xcina, she managed the data protection functions at an investment management firm, at a pensions provider and within the not-for-profit sector, including a university and a charity. She is experienced in identifying and overcoming complex information governance and data protection challenges.

To discuss how the areas highlighted in this post, or any other aspect of risk management, information governance or compliance, impact your business, speak with our team. Tell us what matters to you and find out how we can help you navigate complex issues and deliver long-term value.

If you have any questions or comments, or if there’s anything you would like to see covered, please get in touch by emailing Xcina Consulting at info@xcinaconsulting.com. We’d love to hear from you.

Jackie Barlow

Data Protection Senior Consultant and Group Privacy Officer

Speak to me directly by Email, or
Telephone: +44 (0)20 3745 7843

Subscribe to Updates

Receive regular updates from our expert consultants as they provide clarification and guidance on issues impacting your organisation.

Subscribe >>