Berry Picks in IT Law #47

This week’s featured image introduced it better than I – or 10-year-old me attempting an essay – ever could. Nice to see you again.

🤖 Artificial Intelligence

AI companies are being sued for suicides allegedly linked to their chatbots. The cases against OpenAI, Google, and Character.ai treat these systems as products rather than neutral services. That shift brings the disputes into the realm of product liability and negligence. The core question is whether design choices, such as memory, personalisation, simulated empathy, and weak safeguards, create foreseeable risks, especially for children. The argument is not that harm simply occurred, but that the products may have been defectively designed or insufficiently protected. Once framed this way, the legal analysis becomes rather familiar. If a company releases a product that can foster dependency and fails to mitigate obvious dangers, it may be asked to answer for the consequences.

German regulators concluded a pilot project simulating an AI regulatory sandbox, offering an early look at how the AI Act may operate in practice. The initiative tested real use cases and produced a roadmap on how the AI Act interacts with other regimes, such as medical device law. The key takeaway is fairly pragmatic: effective sandboxes require close inter-agency coordination, clear communication, and a structured approach to assessing AI systems at different stages of development. They are also expected to help companies navigate compliance before market entry. With the AI Act requiring Member States to establish such sandboxes by August 2026, the project positions Germany as moving early to translate regulatory ambition into operational guidance. A rare example of AI regulation trying to be useful before becoming bureaucratic?

Encyclopaedia Britannica and Merriam-Webster have reportedly joined the steadily growing queue of rightsholders suing AI companies, filing a copyright and trademark action against OpenAI over the alleged use of their reference materials to train ChatGPT. The claim is fairly pointed: OpenAI is said to have copied nearly 100,000 articles and entries, reproduced Britannica content in near-verbatim form, diverted traffic through AI-generated summaries, and even attached Britannica’s name to hallucinated material it never authored. OpenAI, unsurprisingly, leans on the now-familiar fair use defence and says its models are trained on publicly available data. So the legal fight is once again about whether AI training is transformative innovation or industrial-scale appropriation dressed up in probabilistic prose. So when does machine learning stop learning from a work and start replacing it? Dare you to say that 10x faster.

🔏 Data Protection & Privacy

The CJEU ruled that even a first access request under Article 15 GDPR can, in exceptional circumstances, be treated as “excessive” under Article 12(5), where the controller can show that it was made abusively rather than for the genuine purpose of checking how personal data were being processed. At the same time, the Court made clear that this does not weaken the substance of the right of access itself. A data subject may still claim compensation under Article 82 for damage caused by a refusal to comply with that right, even if the harm does not stem directly from the underlying processing operation. The Court also accepted that non-material damage may include loss of control over personal data or uncertainty about whether those data have been processed, provided that such harm is actually proven and was not essentially brought about by the data subject’s own conduct.

The Spanish Data Protection Authority fined a company €950,000 for its use of facial analysis technology in digital identity and age verification services. The decision is a clear reminder that biometric data rules under the GDPR depend on how a system works in practice, not just on what the company says it does. The AEPD found that, although the tool was presented as an age estimation system, it created and stored facial templates that were later used to verify users. That was enough to bring the processing within Article 9 GDPR. The case also takes a strict view of consent. Default settings and opt-out mechanisms were not enough, especially given the use of sensitive data and the possible involvement of minors.

📄 Recommended Readings

Here’s a couple of recent publications and op-eds – in no particular order – that piqued my interest this week. Remember to grab a cuppa and settle in for some riveting reading.

Mind the gap: Securing algorithmic explainability for credit decisions beyond the UK GDPR by Holli Sargeant

The EU GDPR and secondary use of health and genetic data for research support purposes by Regina Becker & Edward S Dove

Disclaimer: I am in no way affiliated with the authors or publishers in sharing these, and do not necessarily agree with the views contained within. I try to include mostly open access publications because of, well, you know, accessibility of knowledge and science.

If you have any thoughts or suggestions on how to make this digest more enjoyable, feel free to drop a line. Your feedback is always welcome!

Featured image generated using DALL·E 3.

Sena Kontoğlu Taştan

IT law enthusiast and researcher.
