When care gets smart: Legal balancing acts
Artificial intelligence is reshaping elderly and vulnerable care. But how do we safeguard human rights and dignity? In his dissertation, Maksymilian Kuźmicz explores how law can help balance the competing interests involved in AI-driven care.

When we think of care for elderly people or those with special needs, we might picture nurses, family support, or medical professionals. But increasingly, care is being provided by machines. Known as Active Assisted Living (AAL) technologies, these systems include everything from smartwatches to AI-powered video surveillance installed in private homes. Their purpose? To help people live independently for longer—ideally without compromising their safety.
“AAL is still a nascent technology,” says doctoral candidate Maksymilian Kuźmicz, “but it’s already being used, mainly in care homes. The more advanced systems can remind people to take medications or suggest physical activities.”
Privacy vs. safety: the core dilemma
While the benefits of AAL technologies are clear, they also raise difficult legal and ethical questions. Who gets to see the data collected? What happens when privacy and safety are in conflict?
“One of the most typical conflicts of interest concerns privacy,” Kuźmicz explains. “While an assisted person may want their data to be used only for support, the provider might want to use it to improve their product. And families often want updates to know their loved ones are safe – but that can come at the cost of privacy.”
These dilemmas become even more pronounced when AAL is used in private homes rather than institutional settings.
“Home is the most intimate, private place, which makes these conflicts more vivid and the legal and ethical questions even more pressing. That’s why I chose to focus on this setting.”
How the law can help – by balancing values

The main task for the law is to ensure that we can implement innovations while providing a high level of protection for individuals.
Kuźmicz’s research focuses on how legal frameworks can help manage these conflicting interests. A key concept is balancing – a method of weighing competing rights and interests to find a legally and ethically acceptable outcome.
“While balance and balancing are often used in the legal context, they are not precisely defined. They are usually used when we want to emphasise a clash between two or more values or norms, and we recognise the importance of both of them. Balancing suggests some process of seeking solutions that would respect all involved values and remain effective. In that way, it’s a promising approach to new technologies which present possibilities but also challenges for individuals and society.”
Kuźmicz identifies two main approaches to balancing used in European case law: proportionality and compromising. His work proposes a way of integrating both into one method that could serve as a decision-making model in regulatory or practical contexts.

From abstract theory to real-world impact
The dissertation is not just about theory – it offers practical insights into how to regulate AI technologies like AAL. In particular, it responds to the newly adopted AI Act in the EU, which sets legal requirements for high-risk AI systems.
“My work proposes a model of how to conduct the balancing process, which could actually serve as a Risk Management System under the AI Act,” says Kuźmicz. “It’s a contribution to the implementation of the law, but also to broader discussions about how we regulate innovation.”
He also suggests a new way of categorising stakeholders involved in AAL systems, a move he believes could lead to more inclusive and widely accepted technologies. Not least, this is important from a business perspective, as it would lead to the development of technologies that people are ready to use – and buy.
The bigger picture – and a personal motivation
“Already a few years ago, I felt that technology would shape our lives – individually and socially – and wondered how we could ensure that this development actually benefits people. Law seemed a viable tool,” says Kuźmicz.
Kuźmicz has long been interested in the intersection of law and technology. Along the way, he’s been surprised by how technology has evolved and how unevenly it is implemented – even within the same city.
“During my research journey, the biggest surprise has been how technology is implemented. On one hand, it’s hardly predictable. For example, in 2019, it seemed that self-driving cars were almost there. Yet, it’s 2025, and we don’t see many of them on the streets.”
“The other surprise is that while some companies and institutions use the newest solutions, others are a decade behind. And it’s not always about money. Often, it’s a matter of mindset.”
The future of care – and the role of law
Looking ahead, Kuźmicz sees AI as one of the few viable options to address the growing care needs of ageing populations.
“The only ethical alternatives I see are multigenerational families, mass migration, or bioengineering,” he says. “Some are socially opposed, and others not yet technically available. That leaves us with technology, or a lack of support.”
“The challenge for the law is to allow innovation while protecting individual rights and dignity. In that task, the balancing model proposed in my dissertation may come in handy.”
Maksymilian Kuźmicz will publicly defend his doctoral thesis on May 6, 2025, 10.00 – 13.00 at Stockholm University.
Opponent: Professor Tobias Mahler, Oslo University, Norway.
Source: Department of Law