Berry Picks in IT Law #44

Well, it’s been a minute. I could list a plethora of reasons (most of which would probably not interest anyone), but the most important one is that I’ve finally crossed the finish line of my PhD journey. Call me Doc, I know. One might expect changes to the newsletter’s discussion topics given the time lapse, but it seems IT law isn’t as fast-paced as we once thought. The emphasis is on the “as”: it’s still a lot more exciting than Roman law. May Collingridge rest in peace; he’s still good.

🪁 Protection of Children in Cyberspace

The UK’s ICO issued Reddit a £14.47m fine for children’s privacy failures. The fine is a warning that privacy compliance now turns heavily on age assurance. The regulator found that Reddit lacked robust age checks, had no lawful basis for processing the data of children under 13, and failed to carry out an adequate DPIA on risks to children. The broader message is, to be honest, quite clear: if children are likely to be on your platform, self-declaration will not be enough. It turns out that “please answer truthfully” is not quite the gold standard of child protection.

Across the pond, the FTC issued a policy statement on the COPPA Rule (Children’s Online Privacy Protection Rule) in an attempt to remove one of the biggest legal obstacles in the use of age verification technologies online. The Commission says it will not bring a COPPA enforcement action against certain operators of general audience and mixed audience sites that collect, use, or disclose personal information solely to determine a user’s age, even without prior verifiable parental consent, provided they comply with a set of conditions. These conditions include purpose limitation, prompt deletion, restricted third-party disclosure, clear notice, reasonable security safeguards, and a requirement that the chosen age verification method be reasonably accurate. The message is fairly clear: where online child safety is of concern, privacy objections may begin to carry less regulatory weight. Seems about right.

🔏 Data Protection & Privacy

Some 61 privacy authorities, including the EDPB and regulators from around the globe, signed a joint statement through the Global Privacy Assembly’s International Enforcement Cooperation Working Group on AI systems that generate realistic images and videos. The statement responds to rising concerns over tools that can depict real, identifiable people without their knowledge or consent, focusing particularly on harms such as non-consensual intimate images, defamatory content, and risks to children and other vulnerable groups. Its core message is that organisations developing or using these systems must put in place meaningful safeguards, transparency measures, and effective mechanisms for removing harmful content. The statement is obviously a step in the right direction, especially because it’s a coordinated international stance against borderless systems. But we’ll see in due time whether it stays a declaratory intervention or leads to concrete action. Fingers crossed.

Delfi and IT law seem destined for each other. This time, AKI (Andmekaitse Inspektsioon, the Estonian Data Protection Authority) ordered Delfi to remove a complainant’s name and photograph from a series of old articles, while allowing the articles themselves to remain online in pseudonymised form. The order was based on the finding that there was no longer sufficient public interest in identifying this person by name and image more than twenty years later. Delfi argued that de-indexing was enough and that anonymisation would interfere with archive integrity and press freedom. AKI disagreed, stating that de-indexing merely makes material harder to find, while the individual remains identifiable in a freely accessible digital archive. Perhaps it’s time to revisit our right-to-be-forgotten notes.

In DSG Retail Ltd v Information Commissioner, the UK’s Court of Appeal held that a data controller’s security duty extends to protecting data against unauthorised or unlawful third-party processing even where the attacker could not itself identify the individuals concerned. In practical terms, the judgment reinforces that data security obligations are assessed from the perspective of the controller’s relationship to the data, not simply by asking whether the wrongdoer could identify the data subjects. For controllers, the message is not subtle: if the data are personal data in your hands, security obligations may bite even where the intruder cannot fully identify the individuals concerned.

Some bad news for the plastic surgeons of Instagram: medical consent forms do not magically cure bad anonymisation. The Garante per la protezione dei dati personali, aka the Italian DPA, fined a doctor €5,000 for posting photographs from a patient’s rhinoseptoplasty procedure on Instagram, even though he argued that the patient had signed a consent form and that the images had been anonymised. The authority held that, despite partial obscuring, the images were still identifiable, that this involved the unlawful disclosure of health data, and that the consent was invalid because it had been given on the assumption that only anonymous images would be used. In other words, “anonymous enough” is not a category the GDPR recognises, especially when health data and social media are involved.

🛒 E-Commerce & Digital Consumer

The UK’s Media Act updated the regulatory framework, broadening Ofcom’s reach to on-demand services, connected TVs, streamers, and smart speakers. In practical terms, this means Ofcom will play a central role in regulating certain video-on-demand services: not by treating them exactly like old-style broadcasters, but by developing a tailored VoD Code and related accessibility and prominence rules for in-scope services. Ofcom has already begun implementing the Media Act by reviewing the VoD market, consulting on accessibility and prominence, updating rules for public service broadcasters, and publishing draft recommendations on which connected TVs and voice-activated platforms should fall within scope. The next step is for Ofcom to consult on a new VoD Code and Accessibility Code, make further scope and designation decisions, and issue codes of practice and guidance for the new prominence and radio selection regimes. So yes, Ofcom has a rather full plate. In any event, major VoD platforms can no longer assume they sit comfortably beyond the broadcast-style regulatory imagination.

🐆 AI in the Wild

AI politics have now turned theatrical. The Trump administration has reportedly ordered federal agencies to stop using Anthropic’s technology, and the Pentagon has labelled the company a “supply chain risk”, after a public dispute over whether Anthropic would permit military uses inconsistent with its safeguards on domestic mass surveillance and fully autonomous weapons. Anthropic says it will challenge the move in court. OpenAI, by contrast, announced its own Pentagon agreement, accepting that its models would be used by the military, but on terms it says preserve its core safeguards. Whether this is meaningfully different in principle, or simply a more government-compatible version of the same compromise, is debatable. Apparently, AI safety standards are stringent for the everyday user, but negotiable when governments come calling.

📄 Recommended Readings

Here are a couple of recent publications that piqued my interest. Remember to grab a cuppa and settle in for some riveting reading.

If it ain’t broke, don’t fix it? Ten improvements for the upcoming tenth anniversary of the General Data Protection Regulation by Dariusz Kloza (ed.), Laura Drechsler (ed.), Elora Fernandes (ed.), Arian Birth, Julien Rossi, Pierre Dewitte, Jarosław Greser, Lisette Mustert, Gianclaudio Malgieri and Heidi Beate Bentzen

Forget Me Not? Machine Unlearning’s Implication for Privacy Law by Jevan Hutson, Cedric Whitney and Jay T. Conrad

Disclaimer: I am in no way affiliated with the authors or publishers in sharing these, and do not necessarily agree with the views contained within. I try to include mostly open-access publications due to, well, you know, accessibility of knowledge and science.

If you’d like to see more recommended readings, visit the collection here.

That seems to be it for now; hope you enjoyed it. Do pop back next week for more. Cheerio!

If you have any thoughts or suggestions on how to make this digest better, feel free to drop a line. Your feedback is always welcome! Contact info here.

*Featured image generated using DALL·E 3

Sena Kontoğlu Taştan

IT law enthusiast and researcher.
