Berry Picks in IT Law #34

Off with a bang and it’s the news we have all been waiting for. IT’S ALIVE! (The “it” being the AI Act.) We also have a lot to catch up on, so let’s skip the pleasantries, shall we?

🤖 Artificial Intelligence

The AI Act has finally been passed. It’s been as if it were July 20th, 1969 and we were all sitting waiting for the Eagle to land. At last, it has. We can all rejoice, before getting into the nitty-gritty. For an overview of the AI Act, see my in-a-nutshell summary here. For an even shorter explanation, though, I asked the AI Act itself (well, a ChatGPT plugin): “Please introduce the AI Act to the esteemed readers of my blog, The IT Berry, who have legal backgrounds and would like to get to know you in 250 words or less.” Here’s what it had to say:

“The EU AI Act is a pioneering piece of legislation by the European Union designed to regulate the use of artificial intelligence (AI) across its member states. It marks a significant effort to establish legal standards that ensure AI technologies are developed and used in a way that is safe, respects privacy, data protection, and is aligned with fundamental human rights and freedoms. The Act classifies AI systems according to the risk they pose to society, from minimal to unacceptable risk, with the aim of fostering innovation while preventing potential harm.
High-risk AI applications, which include technologies that could significantly affect people’s lives or the safety of EU citizens, are subject to strict compliance requirements. These requirements range from transparency obligations, ensuring data governance, to detailed documentation, making it possible to assess their algorithms’ impact. On the other end of the spectrum, AI systems considered a clear threat to people’s safety, livelihoods, or rights face an outright ban.
This legislation is not just about curtailing the risks; it is also about unlocking the potential of AI to contribute positively to society. By setting these standards, the EU aims to build public trust in AI technologies, ensuring that they are used to enhance human decision-making rather than replace it indiscriminately.
As this legislation moves towards implementation, it’s crucial for businesses, legal practitioners, and policymakers to understand its implications thoroughly. The EU AI Act sets the stage for a global conversation on the ethical use of AI, potentially influencing future regulatory frameworks worldwide.”

Back in the field, AI disappoints in Canada. In a recent case, Moffatt v. Air Canada, a chatbot’s incompetence was ruled to amount to negligent misrepresentation. The core issue revolved around Jake Moffatt seeking a refund from Air Canada, arguing that a chatbot had misled him into believing he could apply for a bereavement fare retroactively after booking flights following his grandmother’s death. Air Canada contested the claim, stating that Moffatt had not followed the correct procedure and that the airline was not liable for the chatbot’s information. The Civil Resolution Tribunal found that Air Canada committed negligent misrepresentation by failing to ensure the accuracy of the information provided by its chatbot. This finding rested on the principle that businesses must exercise reasonable care to avoid misleading consumers. Moffatt’s reliance on the chatbot’s advice, deemed reasonable under the circumstances, led him to book flights under the assumption that he would be eligible for a bereavement discount, which resulted in financial damages when the discount was not applied. The takeaway: organisations remain accountable for the actions or inactions of their computer systems, as well as for any inaccuracies communicated to the public, regardless of whether these are conveyed by a human representative or an automated chatbot. We’re not suddenly in a fairy tale complete with autonomous AI floating around, good to know.

The World Health Organization (“WHO”) published its report titled “Ethics and governance of artificial intelligence for health: guidance on large multi-modal models”. The report focuses on the complex domain of large multi-modal models (LMMs) for use in health care. The WHO underscores six ethical principles, ranging from protecting autonomy to ensuring sustainability and inclusiveness, aimed at guiding stakeholders through the ethical development and deployment of AI. Moreover, the document highlights the double-edged nature of LMMs, capable of transforming health care delivery while also raising concerns about privacy, bias, environmental impact, and the exacerbation of inequalities. Through its comprehensive framework, the WHO calls for national and international governance mechanisms to navigate these challenges, ensuring that AI’s integration into health care benefits all segments of society without compromising ethical standards or human rights.

The World Intellectual Property Organization (“WIPO”) also jumped on the bandwagon, though for IP. It published a factsheet titled “Generative AI: Navigating Intellectual Property” on the IP-related risks of generative AI and potential safeguards. The factsheet touches upon the challenges genAI presents to IP: unresolved questions about copyright ownership, the use of copyrighted materials in AI training, and the potential for IP infringement. To navigate this terrain, the factsheet recommends adopting robust policies, conducting thorough assessments of AI tools, and ensuring staff are well-versed in the legal aspects of AI utilisation. As the regulatory framework around generative AI continues to evolve, staying informed and proactive in documenting human contributions and securing clear agreements on IP ownership is crucial for organisations aiming to leverage AI technologies while protecting their IP rights. In other words: keep an eye on everything, it’s a minefield.

🔏 Data Protection & Privacy

The European Data Protection Board (“EDPB”) published a one-stop-shop case digest, titled “Security of Processing and Data Protection” and penned by Eleni Kosta. The comprehensive report analyses decisions made by Supervisory Authorities (SAs) regarding the security of personal data processing and data breach notifications under the GDPR, specifically Articles 32, 33, and 34. The data was compiled from final decisions between January 2019 and June 2023. The report concludes by emphasizing the importance of a case-by-case assessment of security measures, the proactive notification of breaches, and the adoption of robust password policies. It also anticipates future guidance from the CJEU on interpreting the adequacy of security measures under the GDPR. Gripping read, no sarcasm.

The European Data Protection Supervisor (“EDPS”) found the European Commission’s use of Microsoft 365 to be in violation of data protection law (Regulation 2018/1725), specifically in the areas of data transfers outside the EU/European Economic Area and the specification of data collection purposes in its contract with Microsoft. These infringements concern the transfer and processing of personal data without adequate safeguards and clarity. As a corrective measure, the EDPS has mandated the Commission to suspend all data flows to Microsoft and related entities outside the EU/EEA by 9 December 2024, unless these flows comply with EU data protection standards. The decision, aimed at ensuring the protection of personal data processed by EU institutions, underscores the importance of adhering to data protection safeguards, especially in the context of cloud-based services.

The CJEU ruled that the Belgian Official Journal, responsible for publishing official documents and acts, qualifies as a “data controller” under data protection law. The classification is especially significant despite the Journal’s lack of legal personality or direct control over content, highlighting the role national law plays in determining an entity’s responsibilities for personal data processing. The Court’s decision stems from the understanding that Belgian law, at least implicitly, sets the purposes and means of processing personal data within the context of the Journal’s publication duties. The ruling underscores the broad interpretation of what constitutes a data controller within the EU’s data protection framework, focusing on the entity’s designated functions rather than its direct control over data or legal status.

The “consent or pay” model is under scrutiny by the EDPB. The data protection authorities of Norway, the Netherlands and Hamburg have asked the EDPB to issue an opinion on the model, which sees large online services demand payment if users do not consent to their data being processed for behavioral advertising.

📄 Recommended Readings

Here are a couple of recent publications (in no particular order) that piqued my interest this week. Grab a cuppa (optional) and dig in.

The Emergence of EU Cybersecurity Law: A Tale of Lemons, Angst, Turf, Surf and Grey Boxes by Lee A. Bygrave

EU Digital Legislation as a “Wimmelpicture”? What Children’s Books Can Teach Us About Laws by Heiko Richter

Substantive fairness in the GDPR: Fairness Elements for Article 5.1a GDPR by Andreas Häuselmann & Bart Custers

Disclaimer: I am in no way affiliated with the authors or publishers in sharing these, and do not necessarily agree with the views contained within. I try to include mostly open access publications due to, well you know, accessibility of knowledge and science.

So there you have it, folks – another week in the fascinating realm of IT Law. Remember to pop back for more IT Law updates. Toodle-oo!

If you have any thoughts or suggestions on how to make this digest even more enjoyable, feel free to drop a line. Your feedback is always welcome!

Featured image generated using DALL·E 3.

Sena Kontoğlu Taştan

IT law enthusiast and researcher.
