Navigating the legislative dilemma
The rapid advancement of artificial intelligence (AI) presents profound regulatory challenges, as emerging technologies often outpace traditional legislative frameworks. This article critically examines the European Union’s AI Act as a case study in AI regulation, highlighting the inherent tension between the stability of law and the flexibility needed to govern innovation. The AI Act adopts a risk-based approach, categorising AI systems according to their potential societal risks and introducing a multifaceted regulatory governance structure that incorporates statutory, administrative, and outsourced legislative policy models. The study identifies key challenges arising from this fragmented approach, including legal uncertainty, inconsistency, and weakened political accountability. Through a comparative analysis of four legislative policy models (statutory, administrative, judicial, and outsourced), the article argues for an administrative model centred on a dedicated EU AI Agency. This proposed model aims to balance regulatory adaptability with legal certainty by consolidating expertise, ensuring procedural clarity, and strengthening accountability. In outlining this refined governance structure, the article offers a preliminary blueprint for a more coherent regulatory framework for emerging technologies, particularly AI.




