The first three weeks of March lasted all of 10 seconds, and the last week about 2 years. The sheer volume of updates this week had us gripping our seats. I did exercise my editorial privileges and pick out the most interesting news, though. Enjoy.

🤖 Artificial Intelligence
The European Parliament’s March II 2026 plenary session covered a wide range of geopolitical and regulatory issues, from EU-US trade and banking reform to corruption and AI governance. We’ll cherry-pick the AI governance aspect. Members adopted Parliament’s position on measures intended to simplify the application of the AI Act. The proposal supports fixed deadlines that delay certain obligations for high-risk AI systems, in order to give regulators and market actors more time to establish the necessary governance structures. At the same time, it maintains key safeguards, including a targeted ban on AI systems generating non-consensual sexual or intimate content and revised rules on processing sensitive data for bias detection. The position also shortens timelines for labelling AI-generated content and places new emphasis on AI literacy, signalling that simplification is not just about easing compliance, but about making the system workable in practice.
🪁 Children’s Rights in Cyberspace
Austria joined the latest line-up of countries planning a ban on social media use for children. The country reportedly plans to introduce the ban for children under 14, as part of a broader push to address risks linked to addictive platform design and harmful content. The government has agreed on the principle of the ban but has not yet finalised how it will be implemented or when it will take effect, with draft legislation expected by June. Notably, the approach does not target specific platforms by name, but instead focuses on criteria such as algorithmic addictiveness and exposure to content like sexualised violence. The proposal reflects a wider international trend, with countries such as Australia and France pursuing similar restrictions. The difficult question, of course, is not whether to ban, but how to make such a ban actually work in practice.
🔏 Data Protection & Privacy
The EDPB published a new case digest on “legitimate interest” (spotted via Dr. TJ McIntyre). To be truthful, it reads a bit like a catalogue of failure. The digest offers a practice-based view of how Article 6(1)(f) is actually being applied, and where controllers are getting it wrong. The pattern is fairly consistent. Controllers tend to rely on legitimate interest in broad, almost formulaic terms, but struggle to meet its cumulative requirements in practice. Interests are vaguely defined, necessity is asserted rather than demonstrated, and the balancing test is often underdeveloped. DPAs, however, are increasingly strict: less intrusive alternatives must be genuinely excluded, and generic or ex post legitimate interest assessments will not suffice. The digest also highlights two structural issues. First, national differences still shape outcomes despite the one-stop-shop mechanism. Second, the overlap with the ePrivacy regime continues to complicate enforcement, particularly in cookie-related cases. Overall, the takeaway is simple: legitimate interest is not a flexible fallback. It is a demanding legal basis, and one that is being enforced as such.
🛒 E-Commerce & Digital Consumer
The European Commission has opened formal proceedings against Snapchat under the Digital Services Act, signalling a fairly serious escalation in its child safety enforcement agenda. The investigation focuses on whether Snapchat provides a sufficiently high level of protection for minors, with particular scrutiny on age assurance (which is still largely based on self-declaration), risks of grooming and criminal recruitment, weak default privacy settings, ineffective content moderation around illegal and age-restricted goods, and potentially opaque reporting mechanisms. In essence, the Commission is testing whether Snapchat’s system design meets the DSA’s “high level of protection” standard, not just in theory but in practice. If the concerns are upheld, the case could become a textbook example of how the DSA moves beyond content removal towards regulating platform architecture itself. Apparently, “we ask users their age” is no longer going to carry much regulatory weight.
The European Commission has preliminarily found Pornhub, Stripchat, XNXX and XVideos in breach of the Digital Services Act for failing to protect minors from accessing pornographic content. The Commission considers that the platforms did not adequately assess the risks to minors and, where they did, failed to use sufficiently objective and thorough methodologies, at times focusing on business concerns rather than societal risks. It also found that existing safeguards are ineffective. In particular, reliance on self-declaration and measures such as warnings, labels and page blurring do not prevent minors from accessing content. The Commission takes the view that more effective, privacy-preserving age verification measures are required. The platforms now have the opportunity to respond, and if the findings are confirmed, the Commission may adopt a non-compliance decision and impose fines.

In the US, a Los Angeles jury reportedly found Meta and Google negligent in the design of their social media platforms, awarding $6 million in damages in a case expected to serve as a bellwether for thousands of similar claims. The key legal move here is the shift away from content and towards design. The plaintiff argued that features such as infinite scroll created addictive patterns, and that the companies failed to warn users of the risks. This matters because US law generally shields platforms from liability for user content, but not necessarily for how their products are built. Meta and Google plan to appeal, but the verdict signals that courts may be increasingly willing to scrutinise platform architecture rather than just what appears on it. Finally, tech regulation acceptance.
In the Netherlands, an Amsterdam court reportedly ordered X.AI and X (i.e. Grok) to stop generating and distributing non-consensual “undressing” images and child sexual abuse material, at least insofar as the conduct concerns people living in the Netherlands or the making and dissemination of such material in the Netherlands. Legally, the ruling is striking for two reasons. First, the court was willing to frame non-consensual undressing images as a data protection problem, on the basis that generating such images involves the unlawful processing of personal data and infringes the privacy rights of the person depicted. Second, it treated the facilitation of AI-generated child sexual abuse material as an unlawful act under Dutch tort law, even where the images may not clearly relate to identifiable real persons. The court was not persuaded by the defendants’ claim that users, rather than the companies, were the real actors. Because X.AI and X control the relevant functionality, they could be ordered to prevent the generation and spread of unlawful content. The result is a fairly robust interim injunction, backed by substantial daily penalties, and a rather blunt reminder that “the user did it” is not much of a shield when the system is built to make it possible.
The European Commission has reportedly used the Digital Services Act as an ordinary user to secure the removal of a leaked call from YouTube, invoking Article 16’s notice-and-action mechanism rather than any institutional privilege. The platform complied promptly following a privacy complaint, underscoring how the DSA operationalises takedowns through standardised reporting channels rather than state-centric enforcement. What looks minor is actually quite revealing: the Commission is positioning itself not as a regulator above the system, but as a participant within it. In practice, this reinforces the DSA’s procedural logic: illegal content is mediated through structured notices, not unilateral state orders. It also signals how public authorities may strategically rely on platform governance frameworks to protect reputational and privacy interests. No drama, no overreach, just compliance.

👩🏼‍🎨 Intellectual Property
A bit of fresh air for IT & IP as we take a break from AI. Advocate General Emiliou delivered a significant Opinion in Case C-579/24 on the relationship between platform copies and platform licensing. The Opinion offers a fairly pragmatic answer to a technical copyright question with real licensing consequences. The AG accepts that the server copies made by online content-sharing platforms are “reproductions” under Article 2 of the InfoSoc Directive, but concludes that they do not require a separate licence. Instead, the authorisation platforms must obtain under Article 17(1) DSM for communication or making available to the public also necessarily covers the technically required copies that make those acts possible. The AG takes the same approach to users under Article 17(2), reasoning that non-commercial users should not need to secure an additional reproduction authorisation simply because uploading a file inevitably generates server copies. The Opinion therefore pushes against any attempt to split platform uses into separately licensable fragments. In effect, the AG treats platform uploading as one integrated copyright event, not a licensing buffet. The reproduction predicament is slowly fading, is it not?
The US Supreme Court has reportedly significantly narrowed contributory copyright infringement, holding in Cox v Sony that liability does not arise merely because a service provider knows its users are infringing. According to the Court, contributory liability now requires something more pointed: either actual inducement of infringement or the provision of a service tailored to it. On the facts, Cox’s internet service was plainly capable of substantial non-infringing uses, and the Court rejected the idea that failing to terminate allegedly infringing users was enough to establish the requisite intent. The judgment therefore pushes back against efforts to turn general-purpose intermediaries into copyright guarantors, and makes one point rather firmly: knowledge alone is not intent, and internet access is not infringement by association.

🐆 AI in the Wild
The Anthropic vs. Pentagon series continues. A US federal judge has reportedly temporarily blocked the Pentagon’s decision to designate Anthropic a national security supply-chain risk, a move that would have effectively excluded it from key government contracts. Judge Rita Lin signalled that the designation may not have been genuinely grounded in security concerns, but could instead amount to retaliation for Anthropic’s public stance against surveillance and autonomous weapons, raising both First Amendment and due process issues. The ruling is not final, but it underscores a broader point: “national security” may not be a free pass, and executive action in AI procurement will still be tested against constitutional limits.
📄 Recommended Readings
Here are a couple of recent publications, in no particular order, that piqued my interest this week. Remember to grab a cuppa and settle in for some riveting reading.
Preserving Balance in the EU Digital Single Market: How Like Company Could Reframe Copyright and Innovation in the Generative AI Era by Enrico Bonadio, Giancarlo Frosio, Christophe Geiger, Andrés Guadamuz, Stavroula Karapapa & Irini A. Stamatoudi
Legal Regulation, Technological Management and the Future of Human Agency by William Lucy
Disclaimer: I am in no way affiliated with the authors or publishers in sharing these, and do not necessarily agree with the views contained within. I try to include mostly open access publications due to, well, you know, accessibility of knowledge and science.
If you’d like to see more recommended readings, visit the collection here.
So there you have it, folks, another week in the fascinating realm of IT Law. Remember to pop back next week.

If you have any thoughts or suggestions on how to make this digest even more enjoyable, feel free to drop a line. Your feedback is always welcome!
Featured image generated using DALL·E 3.