Hello and welcome back. It’s officially summer, and we have no summer-related content. That’s the digital realm for you. Still a thrill of a read inside, though. Welcome back!

🤖 Artificial Intelligence
The AI Office has been established by the European Commission. The office will be the EU’s centre of AI expertise, implementing the AI Act and promoting trustworthy AI. It consists of five units: Regulation and Compliance, AI Safety, Excellence in AI and Robotics, AI Innovation and Policy Coordination, and AI for Societal Good, plus two advisors. With over 140 staff, the Office will support Member States, enforce AI rules, and collaborate with stakeholders. It will develop evaluation tools, create codes of practice, investigate rule infringements, and prepare guidelines, aiming to foster a trustworthy AI ecosystem and enhance EU competitiveness. Internationally, the Office will promote the EU’s AI approach and foster global cooperation. It will work closely with the European Artificial Intelligence Board and the Scientific Panel, ensuring strong ties with the scientific community. In conclusion, the European AI Office will guide the EU in balancing AI innovation with safety and public trust, ensuring AI benefits society while upholding fundamental values. That, at least, is the headline version.
The Royal Society published a report titled “Science in the Age of AI”. The report highlighted the rapid growth of AI technologies and their impact on scientific research. Platforms like ChatGPT and Midjourney sparked public interest and raised concerns among policymakers about AI’s societal integration and potential risks. Key findings from the report included that AI, long used in research, recently experienced a “deep learning revolution” driven by significant advancements and investment. AI now identifies complex patterns in large datasets, enhancing scientific discovery and aiding in simulations and synthetic data creation to address societal and environmental challenges. However, the report identified several challenges with increased AI use, such as reproducibility issues, high energy consumption, and the need for interdisciplinary collaboration. AI’s black-box nature complicates transparency and reproducibility, risking scientific integrity. The report also highlighted AI’s applications in climate science, material science, and rare disease diagnosis. It emphasised the importance of open science principles to tackle challenges like data bias. The Royal Society’s recommendations aim to ensure AI’s positive impact on science while maintaining integrity and public trust. A little food for thought there, around 108 pages of it.
Reports galore as Singapore published the report “Model AI Governance Framework for Generative AI”. Traditional AI models, developed over years, had led to Singapore’s 2019 Model AI Governance Framework, updated in 2020. The advent of generative AI introduced new risks, such as hallucinations and copyright infringement, as highlighted in a June 2023 Discussion Paper by the Infocomm Media Development Authority of Singapore, Aicadium, and AI Verify Foundation. The latest report argues that governance frameworks need to be revised to balance user protection and innovation, and that international discussions on accountability, copyright, and misinformation are crucial. The Model AI Governance Framework for Generative AI outlines a systematic approach involving all stakeholders, addressing nine key dimensions:
- Accountability: Ensuring responsibility across the AI development chain, with parallels to cloud and software development.
- Data: Emphasizing data quality and pragmatic handling of contentious data, like personal and copyrighted material.
- Trusted Development and Deployment: Advocating best practices and transparency in AI development and deployment.
- Incident Reporting: Establishing processes for monitoring and reporting incidents for continuous improvement.
- Testing and Assurance: Promoting third-party testing and assurance for independent verification and standard development.
- Security: Adapting existing information security frameworks and developing new tools for generative AI-specific threats.
- Content Provenance: Enhancing transparency about AI-generated content to combat misinformation, using technologies like digital watermarking.
- Safety and Alignment R&D: Accelerating global cooperation in R&D to improve AI model alignment with human values and intentions.
- AI for Public Good: Ensuring responsible AI promotes public benefit through democratized access, public sector adoption, and sustainable development.

🔏 Data Protection & Privacy
noyb filed complaints in 11 European countries against Meta’s new privacy policy. Meta informed millions of Europeans that it planned to use personal data, including private posts and images, for unspecified AI technology and share it with third parties, without asking for user consent. Instead, Meta claimed a “legitimate interest” that supposedly outweighed user privacy rights, violating GDPR rules. Max Schrems criticised Meta’s actions, pointing out that the European Court of Justice had previously ruled against similar claims by Meta. Meta’s policy changes would impact around 4 billion users, offering no option to delete their data later. The opt-out process was made unnecessarily difficult, putting the burden on users to protect their own privacy. The Irish Data Protection Commission (DPC) faced criticism for previously allowing Meta to bypass the GDPR, an approach that has already led to substantial fines for the company. noyb requested an urgency procedure under Article 66 GDPR to stop Meta’s policy before implementation. This procedure allows quick action and potential EU-wide measures via the European Data Protection Board (EDPB). Now, Data Protection Authorities (DPAs) must decide whether to start an urgency procedure or handle the complaints through regular processes. If Meta continues with its plans, further legal challenges, including injunctions and class actions, might follow, adding to Meta’s legal troubles in the EU.
🛒 E-Commerce & Digital Consumer
The CJEU ruled that the order button, or a similar function, in online orders must clearly indicate that, by clicking on it, the consumer assumes an obligation to pay. So traders must clearly inform consumers that placing an online order creates a payment obligation, even if this obligation depends on a subsequent condition. This decision stemmed from a German case where a tenant used a debt recovery service to seek rent overpayment refunds. The service’s website order button did not include the required phrase “order with obligation to pay” or similar wording. The landlords challenged the service’s authority, leading the German court to refer the question to the CJEU. The Court emphasized that the directive mandates clear communication of payment obligations to consumers before they place an order. Non-compliance by traders means the consumer is not bound by the order, though they can still confirm it voluntarily. This ruling underscores the importance of transparency in online transactions, ensuring consumers understand their financial commitments before finalizing purchases.

Some good(?) news for online service providers at last? The CJEU ruled that a Member State cannot impose additional obligations on an online service provider established in another Member State. This ruling came from a case in Italy, where national provisions required online intermediaries like Airbnb and Google to register with an administrative authority, provide detailed economic information, and pay a financial contribution. The companies challenged these obligations, arguing they contravened EU law and increased administrative burdens. The CJEU confirmed that under the Directive on electronic commerce, only the home Member State of an online service provider regulates the provision of those services. Member States of destination must respect the principle of mutual recognition and generally cannot restrict the freedom to provide services. The Court held that Italy’s additional obligations on foreign-established providers do not meet the exceptions allowed by the Directive. These obligations were of general application and not necessary to protect any objectives of general interest specified in the directive. Thus, Italy cannot enforce these additional requirements on providers established in other Member States. This decision highlights the importance of the principle of mutual recognition within the EU, ensuring that online service providers are primarily regulated by the laws of their home Member State, fostering a more uniform and less burdensome regulatory environment for cross-border e-commerce. Too many cooks spoil the broth, and the CJEU seems to agree.
📄 Recommended Readings
Here’s a list, in no particular order, of recent publications that piqued my interest this week. Grab a cuppa (optional) and settle in for some riveting reading.
Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions by Dawen Zhang, Pamela Finckenberg-Broman, Thong Hoang, Shidong Pan, Zhenchang Xing, Mark Staples & Xiwei Xu
Reporting cybersecurity to stakeholders: A review of CSRD and the EU cyber legal framework by Clara Boggini
The unfair side of Privacy Enhancing Technologies: addressing the trade-offs between PETs and fairness by Alessandra Calvi, Gianclaudio Malgieri & Dimitris Kotzinos
Disclaimer: I am in no way affiliated with the authors or publishers in sharing these, and do not necessarily agree with the views contained within. I try to include mostly open access publications due to, well, you know, the accessibility of knowledge and science.
Do remember to pop back soon for your latest dose of legal updates. Cheerio!

If you have any thoughts or suggestions on how to make this digest more enjoyable, feel free to drop a line. Your feedback is always welcome!
Featured image generated using DALL·E 3.