Wednesdays@DEI: Talks, 28-02-2024

Author and Affiliation: Paulo de Sousa Mendes, Centro de Investigação de Direito Privado (CIDP), Faculdade de Direito da Universidade de Lisboa (FDUL)

Bio: Paulo Manuel Mello de Sousa Mendes graduated in Law from the Faculdade de Direito da Universidade de Lisboa (FDUL) in 1981, where he also obtained an LLM degree in 1987. He earned his PhD in Legal Sciences in 2006 and the title of Associate Professor (Agregado) in 2019 from the Universidade de Lisboa (UL). This title (in Latin: venia legendi) is the highest qualification issued through the university examination process and remains a central concept in academic careers in many countries of the Roman-Germanic tradition, without which no one can become a full professor. Since 2020, he has been a Full Professor (Catedrático) at FDUL, where he teaches Criminal Law, Criminal Procedure, Regulatory Law, Law of Evidence, Comparative Criminal Law, International Criminal Law, and Artificial Intelligence & Law. He has held various positions at FDUL, including Vice-President of the Institute of Legal Cooperation (2007-2009), Vice-President of the Erasmus Office and International Relations (2007-2009), member of the Board of Directors, and member of the Scientific Council (2015-2024), including its Standing Committee (2015-2016). He was Coordinator of the Scientific Postgraduate Commission from 2018 to 2022. He is Scientific Coordinator, together with Professor José Azevedo Pereira, of the Master's in Law & Management, jointly created by FDUL and the Instituto Superior de Economia e Gestão da Universidade de Lisboa (ISEG), whose third edition began in September 2023. He is also Scientific Coordinator, together with Professor João Marques Martins, of the LLM in Artificial Intelligence in Legal Practice and its Regulation at FDUL, in partnership with the Champalimaud Foundation, Oracle, and OutSystems, among others, with applications open for 2024-2025.

Title: Artificial Intelligence Regulation
Abstract: Artificial Intelligence (AI) is the technology of our time, even though it was created in the mid-20th century. In the last decade, AI has gained extraordinary traction and become ubiquitous, and countries around the world are investing in AI development, each seeking to take the lead. In 2016, the Federal Government of the United States of America (USA) published the first National Strategic Plan for Artificial Intelligence Research and Development (updated in 2023), recognizing the tremendous promise of AI and the need for continuous advancement. The Biden-Harris Administration has committed to promoting ethical, reliable, and safe AI systems that serve the public good, as stated in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework. Over the past decade, China has also emerged as a significant driver and user of AI. Chinese policymakers have devised various regulations to ensure what they consider the appropriate use of AI, even though their perspective on ethical AI does not align with Western countries' vision of human rights. In its National AI Strategy, the Government of the United Kingdom (UK) committed to developing a pro-innovation national position on AI governance and regulation, outlining its stance in a 2022 policy paper; the subsequent white paper sets out the Government's proposals for implementing a balanced, future-proof, and innovation-friendly framework to regulate AI. The European Union's (EU) AI Regulation represents the world's first comprehensive law on AI, addressing AI risks and positioning Europe to play a leading role globally. The proposed EU Regulation on AI introduces rules to increase transparency and minimize risks to security and fundamental rights before AI systems can be used in the EU. To avoid overregulation, it focuses on so-called high-risk AI use cases. High-risk systems include all AI systems intended for use in judicial decision-making. All remote biometric identification systems are also considered high-risk and are subject to stringent requirements; their live use in publicly accessible spaces for the purposes of crime prevention and repression is prohibited in principle. High-risk AI systems must be assessed for compliance with these requirements before being placed on the market or put into service. The presentation aims to compare the AI regulation models currently under discussion.

Tags: