Sometimes I feel like data protection news is just the same story one after another, so I skip it. This week is one of those times. Welcome to a short digest of the relatively more interesting(?) news of the week!

🤖 Artificial Intelligence
The US Supreme Court declined to hear Thaler v Perlmutter, leaving intact the position that a work said to be generated autonomously by AI, with no human author, cannot be registered under US copyright law. The judgment itself is far less dramatic than some of the headlines suggest, and is best understood as a fairly conventional exercise in statutory interpretation. The more interesting question is whether authorship must necessarily remain human in a system that often uses copyright to allocate economic value as much as to recognise creative contribution. For the moment, however, US copyright law continues to treat human authorship as the baseline.
🪁 Children’s Rights in Cyberspace
UK regulators Ofcom and the ICO have each issued an ultimatum to major platforms. Ofcom set an April 30th deadline for the usual suspects, including Meta, TikTok, Roblox, and YouTube, to explain exactly how they intend to protect younger users under the Online Safety Act. The regulator’s four core demands focus on enforcing stated age limits with “highly effective” age assurance, implementing strict anti-grooming controls, addressing algorithmically driven harms in feeds, and ensuring new AI tools are subjected to statutory risk assessments before minors are allowed to interact with them. Concurrently, the ICO published an open letter bluntly stating that self-declaration is no longer a valid mechanism for establishing a lawful basis to process children’s data. The privacy watchdog is demanding the deployment of genuine, privacy-friendly age assurance technologies, such as digital IDs or facial age estimation, within the next two months. It seems the era of relying on a simple drop-down menu for a child to truthfully declare their birth year is drawing to a rather abrupt regulatory close. If spring does not bring satisfactory operational updates, formal enforcement from both fronts seems practically guaranteed. We shall see what May brings. Watch this space.
🛒 E-Commerce & Digital Consumer
Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft, the original cohort of DMA gatekeepers, submitted their updated compliance and independently audited consumer profiling reports to the European Commission. A year on from their initial designation, the tech majors are required to detail exactly how they have been adapting their ecosystems to the new ex ante rules. The Commission now faces the undoubtedly thrilling task of digesting these lengthy self-assessments, cross-referencing them against the inevitably less rosy feedback from third-party developers and against its ongoing regulatory probes. Public, non-confidential summaries are being made available online for those interested. I wonder how creatively the gatekeepers have defined “compliance” this time around.

X has now met the Commission’s deadline for submitting a compliance plan on its blue check system, after being fined €120 million in December under the DSA for deceptive design. The issue, of course, is that the old verification badge once signalled authenticity, whereas in the Musk era it became something much closer to a paid accessory with identity implications attached. The Commission will now review the proposed remedies, while X must also pay the fine by 16 March and still faces a further deadline in April on transparency failures linked to researcher data access and its ad repository. X is appealing, naturally. Still, the point is fairly simple: if a design element signals trust or authenticity, regulators tend to expect it to mean exactly that. Shocking.
🐆 AI in the Wild
You may remember that the US Department of War had formally designated Anthropic a supply chain risk after the company refused to relax safeguards on mass domestic surveillance and fully autonomous weapons. Well, Anthropic has reportedly responded, with two federal lawsuits no less. The designation apparently rests on procurement-security powers under 10 U.S.C. § 3252 and FASCSA, with immediate consequences for defence contractors using Anthropic in covered government work. The broader message is not especially subtle: AI safety principles seem to remain intact only until the state asks for something else.
That seems to be it for now; hope you enjoyed it. Do pop back next week for more. Cheerio!

If you have any thoughts or suggestions on how to make this digest better, feel free to drop a line. Your feedback is always welcome! Contact info here.
Featured image generated using DALL·E 3.