Liane Rose Colonna
Assistant Professor, Docent
About me
I am currently employed as an assistant professor in law and information technology at the Department of Law, Stockholm University (SU), where I conduct research within the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS). In particular, I am interested in the ethical and legal challenges raised by AI-driven practices in higher education. I am also interested in methodologically oriented research in the field of AI and Law.
I am the director of the Swedish Law and Informatics Research Institute (IRI) and I have been a member of the New York Bar since 2008.
Teaching
I am the head of the course “Legal Aspects of Information Security” at SU’s Department of Computer Science, as well as the head of the doctoral course “Global Legal Research and Information Management” (GRIM). I am also an active contributor to the undergraduate course in legal informatics (“rättsinformatik”).
Research
I am a co-PI of the WASP-HS research cluster “The Rule of AI — AI, Regulation, and Society”. I am also a member of the Digital Futures Faculty, a cross-disciplinary research center jointly established by KTH Royal Institute of Technology, Stockholm University, and RISE Research Institutes of Sweden that explores and develops digital technologies.
From 2020 to 2024, I was a co-PI of the Marie Skłodowska-Curie Actions Innovative Training Network “Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living” (visuAAL). During this time, I was also the Action Vice Chair of the COST Action “Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living”.
Prior to obtaining my position as an assistant professor within the WASP-HS program, I was employed as a post-doctoral researcher in the PAAL Project, a European Union Horizon 2020 project funded by JPI More Years, Better Lives and the Swedish Research Council for Health, Working Life and Welfare (FORTE). The project focused on building privacy-aware lifelogging tools for older and frail individuals in order to support their health, wellness, and independence. In addition to the PAAL Project, I have also worked on other EU Horizon 2020 projects, such as e-Skills Match and Skills Match.
Publications
A selection from the Stockholm University publication database
Artificial Intelligence in the Internet of Health Things: Is the Solution to AI Privacy More AI?
2021. Liane Colonna. Boston University Journal of Science and Technology Law 27 (2), 312-343
Article
A Methodological Approach to Privacy by Design within the Context of Lifelogging Technologies
2020. Alex Mihailidis, Liane Colonna. Rutgers Computer & Technology Law Journal 46 (1), 1-52
Article
Lifelogging technologies promise to manage many of the concerns raised by population aging. The technology can be used to predict and prevent disease, provide personalized healthcare, and give support to formal and informal caregivers. Although lifelogging technologies offer major opportunities to improve efficiency and care in the healthcare setting, there are many aspects of these devices that raise serious privacy concerns that can undercut their use and further development. One way to manage privacy concerns raised by lifelogging technologies is through the application of Privacy by Design, an approach that involves embedding legal rules into information systems at the outset of their development. Many current approaches to Privacy by Design, however, lack methodological rigor, leaving stakeholders perplexed about how to achieve the objectives underlying the concept in practice.
This paper explores ways to develop a Privacy by Design methodology within the context of Ambient Assisted Living (AAL) technologies like lifelogging. It sets forth a concrete, methodological approach to incorporating privacy into all stages of a lifelogging system's development. The methodology begins with a contextual understanding of privacy, relying on theoretical and empirical studies conducted by experts in human-computer relations. It then involves an analysis of the relevant black-letter law. A systematic approach to incorporating the requisite legal rules into lifelogging devices is then presented, taking into account the specific design elements of these kinds of systems.
Privacy, Risk, Anonymization and Data Sharing in the Internet of Health Things
2020. Liane Colonna. Pittsburgh Journal of Technology Law & Policy 20 (1), 148-175
Article
This paper explores a specific risk-mitigation strategy to reduce privacy concerns in the Internet of Health Things (IoHT): data anonymization. It contributes to the current academic debate surrounding the role of anonymization in the IoHT by evaluating how data controllers can balance privacy risks against the quality of output data and select the appropriate privacy model that achieves the aims underlying the concept of Privacy by Design. It sets forth several approaches for identifying the risk of re-identification in the IoHT and explores the potential for synthetic data generation to be used as an alternative to anonymization for data sharing.
Legal and regulatory challenges to utilizing lifelogging technologies for the frail and sick
2019. Liane Colonna. International Journal of Law and Information Technology 27 (1), 50-74
Article
Lifelogging technologies have the capacity to transform the health and social care landscape in a way that few could have imagined. Indeed, the emergence of lifelogging technologies within the context of healthcare presents incredible opportunities to diagnose illnesses, engage in preventative medicine, manage healthcare costs and allow the elderly to live on their own for longer periods. These technologies, however, require coherent legal regulation in order to ensure, among other things, the safety of the device and privacy of the individual. When producing lifelogging technologies, it is important that developers understand the legal framework in order to create a legally compliant device. The current regulation of lifelogging is highly fragmented, consisting of a complex patchwork of laws. There are also a number of different regulatory agencies involved. Laws and regulations vary, depending on jurisdiction, making development of these technologies more challenging, particularly given the fact that many lifelogging tools have an international dimension.
The end of open source? Regulating open source under the Cyber Resilience Act and the new Product Liability Directive
2025. Liane Colonna. The Computer Law and Security Review 56
Article
Rooted in idealism, the open-source model leverages collaborative intelligence to drive innovation, leading to major benefits for both industry and society. As open-source software (OSS) plays an increasingly central role in driving the digitalization of society, policymakers are examining the interactions between upstream open-source communities and downstream manufacturers. They aim to leverage the benefits of OSS, such as performance enhancements and adaptability across diverse domains, while ensuring software security and accountability. The regulatory landscape is on the brink of a major transformation with the recent adoption of both the Cyber Resilience Act (CRA) and the Product Liability Directive (PLD), raising concerns that these laws could threaten the future of OSS.
This paper investigates how the CRA and the PLD regulate OSS, specifically exploring the scope of the exemptions found in the laws. It further explores how OSS practices might adapt to the evolving regulatory landscape, focusing on the importance of documentation practices to support compliance obligations, thereby ensuring OSS's continued relevance and viability. It concludes that due diligence requirements mandate a thorough assessment of OSS components to ensure their safety for integration into commercial products and services. Documentation practices like security attestations, Software Bills of Materials (SBOMs), data cards and model cards will play an increasingly important role in the software supply chain to ensure that downstream entities can meet their obligations under these new legal frameworks.
Teachers in the loop? An analysis of automatic assessment systems under Article 22 GDPR
2023. Liane Colonna. International Data Privacy Law 14 (1), 3-18
Article
Key Points
- This article argues that while there is great promise in the everyday automation of higher education to create benefits for students, efficiencies for instructors, and cost savings for institutions, it is important to critically consider how AI-based assessment will transform the role of teachers and the relationship between teachers and students.
- The focus of the work is on exploring whether and to what extent the requirements set forth in Article 22 of the General Data Protection Regulation (GDPR) apply within the context of AI-based automatic assessment systems, in particular the legal obligation to ensure that a teacher remains in the loop, for example being capable of overseeing and overriding decisions when necessary.
- Educational judgments involving automatic assessments frequently occur in a complicated decision-making environment that is framed by institutional processes which are multi-step, hierarchical, and bureaucratic. This complexity makes it challenging to determine whether the output of an AI-based automatic assessment system represents an ‘individual decision’ about a data subject within the meaning of Article 22.
- It is also unclear whether AI-based assessments involve decisions based ‘solely’ on automatic processing or whether teachers provide decisional support, excluding the application of Article 22. According to recent enforcement decisions, human oversight is entangled with institutional procedures and safeguards as well as system design.
The AI Act’s Research Exemption: A Mechanism for Regulatory Arbitrage?
2024. Liane Colonna. The Yearbook of Socio-Economic Constitutions, 51-93
Chapter
This paper argues that, by failing to acknowledge the complexity of modern research practices, which are shifting from a single discipline to multiple disciplines involving many entities, some public and some private, the proposed AI Act creates mechanisms for regulatory arbitrage. The article begins with a semantic analysis of the concept of research from a legal perspective. It then explains how the proposed AI Act addresses the concept of research by examining the research exemption set out in the forthcoming law as it currently stands. After providing an overview of the proposed law, the paper explores the research exemption to highlight whether there are any gaps, ambiguities, or contradictions in the law that may be exploited by either public or private actors seeking to use the exemption as a shield to avoid compliance with duties imposed under the law.
To address whether the research exemption reflects a coherent legal rule, it is considered from five different perspectives. The paper begins by examining the extent to which the exemption applies to private or commercial entities that may not pursue research in a benevolent manner to solve societal problems, but nevertheless contribute to innovation and economic growth within the EU. Next, the paper explores how the exemption applies to research that takes place within academia but is on the path to commercialization. Third, the paper considers the situation where academic researchers invoke the exemption and then provide the AI they develop to their employing institutions or other public bodies at no cost. Fourth, the paper inspects how the exemption functions when researchers build high-risk or prohibited AI, publish their findings, or share them via an open-source platform, and other actors copy the AI. Finally, the paper considers how the exemption applies to research that takes place “in the wild” or in regulatory sandboxes.
Addressing the Responsibility Gap in Data Protection by Design: Towards a More Future-oriented, Relational, and Distributed Approach
2022. Liane Colonna. Tilburg Law Review 27 (1), 1-21
Article
This paper explores the extent to which technology providers are responsible to end users for embedding data protection rules in the AI systems they design and develop, so as to safeguard the fundamental rights to privacy and data protection. The main argument set forth is that a relational rationale, requiring a broader range of actors in the supply chain to share legal responsibility for Data Protection by Design (DPbD), is better suited to address infringements of these fundamental rights than the current model, which assigns responsibility mainly to the data controller or data processor. Reconceptualizing the law in a more future-oriented, relational, and distributed way would make it possible to adapt legal rules – including those within the GDPR and the continuously evolving EU acquis – to the complex reality of technology development, at least partly addressing the responsibility gap in DPbD.
A future-oriented conception of responsibility would require technology providers to adopt more proactive approaches to DPbD, even where they are unlikely to qualify as a controller. A relational approach to DPbD would require technology providers to bear greater responsibilities to those individuals or groups that are affected by their design choices. A distributed approach to DPbD would allow for downstream actors in the supply chain to bear part of the legal responsibility for DPbD by relying on legal requirements that are applicable to various actors in the supply chain supporting DPbD such as those found in contract law, liability law, and the emerging EU acquis governing AI, data, and information security.