Berry Picks in IT Law #50

It’s been a minute, but 3 years and countless life experiences later, we’re at 50. Welcome to the golden jubilee of the Berry Picks in IT Law!

🤖 Artificial Intelligence

The UK is reportedly considering giving ministers new powers to amend large parts of the Online Safety Act through Henry VIII clauses, effectively allowing significant changes with only limited parliamentary scrutiny. The immediate driver is AI, particularly gaps exposed by cases involving chatbots and deepfake content, where Ofcom has acknowledged it lacks sufficient powers. While the policy direction itself is not especially surprising, the method is what is interesting. Instead of revisiting the Act through the usual legislative process, the government is attempting to retrofit it via unrelated bills, enabling faster but less scrutinised rulemaking. The result is a familiar tension between speed and legitimacy. There is a clear push to future-proof the regime against emerging harms, but at the cost of diluting parliamentary oversight. Collingridge, unsurprisingly, remains relevant.

🪁 Children’s Rights in Cyberspace

Google, Meta, Microsoft and Snap have issued a joint statement urging the EU to move quickly after the expiry of the ePrivacy derogation that had provided legal cover for certain voluntary CSAM detection measures. The companies argue that the lapse creates legal uncertainty around the continued use of tools such as hash-matching, despite longstanding expectations that platforms should act to detect, remove and report child sexual abuse material. The statement is carefully framed, unsurprisingly, around both child safety and privacy, with the signatories insisting they remain committed to voluntary action while pressing EU institutions to agree on an interim fix and a longer-term regulatory framework. The immediate point is not especially subtle. The platforms are saying that if the law expects them to intervene, it also needs to provide a stable legal basis for doing so.

🔏 Data Protection & Privacy

The EDPB published its 2025 Annual Report. The report emphasises a fairly clear institutional shift towards clarity, support, and dialogue in a data protection landscape that is becoming increasingly dense. Much of the focus is on making compliance more workable in practice, whether through the Helsinki Statement, public consultations on useful templates, or closer engagement with stakeholders on upcoming guidance. It also reflects the Board’s growing involvement in the wider EU digital rulebook, particularly through its work on the GDPR’s interaction with the DMA, DSA, and AI Act. None of this is exactly revolutionary, but it does suggest an EDPB that is trying to present itself less as a distant interpreter of doctrine and more as a body concerned with whether the system can actually function. Which, to be fair, is not a bad ambition at this stage.

🛒 E-Commerce & Digital Consumer

Germany’s Supreme Court clarified liability in online advertising in a Google Ads case. The Court held that a trader cannot avoid responsibility simply because the unlawful ads were generated and placed by Google. Once the business provides product information and entrusts Google with promoting its products, Google may be treated as an “agent” under § 8(2) of the German Act against Unfair Competition (“UWG”). What matters, crucially, is not the level of control the trader actually kept, but the level of control it could and should have secured over the relevant risk area. The judgment therefore makes clear that delegating advertising functions to a platform does not remove the trader from the chain of liability. Outsourcing visibility, it seems, still leaves responsibility at home.

The European Commission is reportedly contemplating whether OpenAI’s ChatGPT should be designated a very large online search engine under the Digital Services Act, after OpenAI disclosed user numbers above the relevant 45 million threshold. According to the Commission, the assessment is still ongoing and large language models may fall within the DSA on a case-by-case basis. OpenAI says the figure relates specifically to ChatGPT Search, which it reported at around 120.4 million average monthly active EU users over the six months to the end of September 2025. If the designation goes through, the significance would be obvious enough. It would mark a serious step in pulling generative AI services into the DSA’s systemic risk framework, and not just the AI Act.

📄 Recommended Readings

Here are a couple of recent publications, in no particular order, that piqued my interest this week. Remember to grab a cuppa and settle in for some riveting reading.

Webs within the web: the role of epistemic injustice in creating barriers to public legal information about rights in a digital age by Linda Mulcahy & Joseph Patrick Mcaulay

AI as legal persons: past, patterns, and prospects by Claudio Novelli, Luciano Floridi, Giovanni Sartor & Gunther Teubner

*Disclaimer: I am in no way affiliated with the authors or publishers in sharing these, and do not necessarily agree with the views contained within.

If you’d like to see more recommended readings, visit the collection here.

So there you have it, folks, another week in the fascinating realm of IT Law. Remember to pop back next week for your latest dose of legal updates.

If you have any thoughts or suggestions on how to make this digest more enjoyable, feel free to drop a line. Your feedback is always welcome!

Featured image generated using DALL·E 3.

Sena Kontoğlu Taştan

IT law enthusiast and researcher.
