Interview with Ugo Pagallo, by Paola Cantarini

This interview was originally conducted in English on December 18, 2023.

The initial translation, prepared by Paola Cantarini, was revised by Guershom David, Master's student in Political and Economic Law and Director of the MackriAtIvity Project, linked to the Incubator of Universidade Presbiteriana Mackenzie.

Ugo Pagallo is a former lawyer and Professor of Jurisprudence at the Faculty of Law of the University of Turin (Italy). He has taken part in many international projects and research initiatives, collaborating with institutions such as the European Commission, the World Health Organization, the Japanese government, and the IEEE Global Initiative on Ethical Considerations in AI and Autonomous Systems. He teaches in the Joint International Doctoral Programme in Law, Science and Technology (EU Erasmus Mundus). He is the author of thirteen monographs and around one hundred essays in journals and book chapters. His main interests include Artificial Intelligence and law, computer networks viewed through legal theory, and information technology law, especially data protection and copyright.

Original version

Paola Cantarini: What's your area of expertise? Would you start by telling us about your AI and data protection work?

Ugo Pagallo: Well, my background is in philosophy and law. I did practice law decades ago, although, for the past 25 years, I've been deeply involved in information technology, law, and artificial intelligence. Wow, 25 years, at least. There are also good reasons for that, because my family has a three-generation legacy of AI scholars, beginning with my brother-in-law, who attended John McCarthy's first pioneering artificial intelligence course at Stanford University. Victor, now an emeritus in mathematics at Stanford, laid the groundwork. Following suit, my sister earned her Ph.D. in machine learning in the early '90s and has spent decades contributing to Apple's innovations in Cupertino. Some of the devices you use might have been crafted by my sister's team. Finally, my nephews are also immersed in the field, focusing on artificial intelligence and legal applications, adding another dimension to my interest, driven by family ties. It's incredible to have witnessed the transformation of this landscape over such a long time while being a part of it. Roughly 25 years ago, only a handful of scholars were exploring this field. However, in the last decade, the challenges posed by artificial intelligence have captivated scholars' attention. My focus has often been at the institutional level. I was privileged to be part of the High-Level Expert Group on liability for uses of AI, established by the European Commission in 2018. During the pandemic, I collaborated with the World Health Organization, addressing data privacy concerns linked to AI usage in the health sector, which extended to broader discussions on AI governance in healthcare. I also took part in a group of experts established by the Japanese government at the recent G7 meeting in Japan in April 2023. So, in addition to my role as a scholar, I've been actively engaged at the institutional level, dedicating significant effort to our focus area – today's topic.

Paola Cantarini: Is there a need, and therefore the possibility, for a worldwide law to regulate AI globally, at least setting minimum standards?

Ugo Pagallo: Let's cross our fingers for an international framework, even though it sounds like a challenging accomplishment. Now, we need to be clear about the differences between civilian and military uses of artificial intelligence. From the military perspective, for instance, there are significant attempts to govern the use of AI, or what some slickly call 'lethal autonomous weapons systems', LAWS for short. A group of governmental experts has worked on this since 2017 under UN auspices, within the CCW, i.e., the Convention on Certain Conventional Weapons. Six years later, there are no solid results, although the General Assembly pushed through a resolution a few weeks back. We'll have to wait and see what unfolds over the next year or so. On the other hand, there are global endeavors to at least hash out some general principles for AI in the civilian sector. The European Council has been doing its bit. Even the G7, which I tossed into the mix earlier, hammered out a resolution on AI, focusing on large language models and generative AI. But here's the kicker – depending on how you look at it, in the short run, national lawmakers, who've been hustling hard for the past two or three years, will be the ones calling the shots for the legal playground of AI uses. Just look at Canada, passing its AI Act in 2022. The White House chimed in with its executive order on AI in October 2023. Europe is still figuring things out, but they spilled the beans on what could be the final compromise last week. China has been very active too, with several strict top-down regulations, as should have been expected. So, in 2024, we might have some further national regulations, each with its own particularities, for example, the US versus the EU. An international framework isn't a lost cause, but in the meantime, national lawmakers are the real MVPs.

Paola Cantarini: How would the so-called "tradeoff" between innovation and regulation work? Or would regulation by itself prevent or compromise innovation and international competition? According to Daniel Solove in his book "Nothing to Hide: The False Tradeoff Between Privacy and Security" (Yale University Press, 2011), this would be a mistaken concept. Could you comment on this point?

Ugo Pagallo: There's a risk here. Strictly regulating technology, especially in the rapidly advancing field of A.I., comes with the potential downside of stifling technological innovation. We've seen instances where legislative attempts to govern technology failed because the risk of rendering legislation obsolete in a short timeframe is quite real. One noteworthy example is the E-Money Directive, passed by Brussels lawmakers two decades ago. Shortly after this European directive, the then-young guy from South Africa, Elon Musk, brought us PayPal. Not only did PayPal revolutionize electronic payments, but it also rendered the E-Money Directive obsolete, prompting E.U. lawmakers to make amendments. The risk is evident, and there are various strategies to mitigate it: the risk of either continually amending legislation, due to the fast pace of technological innovation, or having legislation that remains static, obstructing technological progress. So how can we navigate these challenges? On the one hand, an example from Europe involves the principle of technological neutrality. The GDPR, for instance, embraces this principle, ensuring it applies to any processing of personal data, irrespective of the technology involved. This approach helps prevent legislation from becoming obsolete. On the other hand, there's a technique I call legal experimentation. Over the past 15 years, the Japanese government has established several special zones in Japanese cities, aiming to balance technological innovation and regulation. This experimental approach recognizes the importance of understanding how technology works before regulating it. In many cases, there's a lack of data to comprehend a technology's nuances. Since 2011, the Japanese government has thus set up open labs to test technology, inspiring similar initiatives in various jurisdictions. This trend is apparent in Silicon Valley, where self-driving cars, although with a human on board, roam the streets. Even Europe's Artificial Intelligence Act plans to establish what they call 'sandboxes.' So, to address your initial question, it's not a zero-sum game. The challenge is finding a balance where regulation supports technological innovation. Let's observe how principles like technological neutrality and legal experimentation play out in practice, for example, with the AI Act in Europe. There are multiple avenues to prevent the pitfalls of ill-fitted regulation and risky technological advancements.

Paola Cantarini: Taking as a paradigmatic example in the area of data protection the LIA – the legitimate interest assessment, provided for in the Brazilian LGPD and in the European Union's GDPR as a mandatory compliance document when legitimate interest is used as the legal basis for processing personal data (measured by a proportionality analysis/test) – would it be possible to create a "framework" aimed at protecting fundamental rights, embedded in a specific document, the AIIA – Algorithmic Impact Assessment? And thus to establish, after a weighted analysis, risk mitigation measures that are adequate, necessary, and strictly proportional with respect to such rights?

Ugo Pagallo: Once again, the short answer is yes, and it's been quite a challenging issue. For instance, in the early stages of formulating the Artificial Intelligence Act here in Europe, a specific impact assessment dedicated to this technology was proposed. However, under regulations like the GDPR, there's already a mandate for a proactive evaluation of new technologies to grasp their implications for the processing of personal data. Therefore, a separate regulation might not be necessary solely for evaluating the impact of A.I. systems on personal data. Your question broadens the scope, encompassing personal data and fundamental rights, non-discrimination, dignity, and more. As of December 18th, I still don't have the details of the new and final text of the Artificial Intelligence Act in Europe. We'll have to wait and see how things unfold. A couple of months ago, along with several colleagues, I signed a letter advocating for the inclusion of a specific impact assessment regarding the use of A.I. and its impact on fundamental rights. Let's see if the final text of the A.I. Act incorporates this impact assessment. I emphasize that when discussing the protection of fundamental rights, one of these rights is related to environmental protection. This is not just because it's a human right, but more so because it's a prerequisite for safeguarding such rights. In other words, the environmental impact assessment of A.I. systems and other technologies, such as blockchain, is crucial. Energy consumption is significant and increasing, with specific studies predicting that ICTs and A.I. systems will constitute around 30% of global energy consumption in the next decade, which is staggering. We contend that we need environmental protection not only when A.I. directly impacts human rights but also when considering the direct impact of A.I. on the environment itself. Here, I'm not overly optimistic about finding a specific environmental impact assessment in the final text of Europe's Artificial Intelligence Act, which might present a problem because, philosophically speaking, there is a divergence or even a clash between the human-centric approach to A.I., on the one hand, and the ecocentric approach to environmental protection, on the other. We should protect nature not just because it's instrumental to safeguarding fundamental rights, but because it's our inherent responsibility towards nature and future generations. Once again, as in the previous question, our goal should be to protect both humans and nature. It's not a zero-sum game, choosing between humans and nature. Rather, understanding fundamental rights in a broader sense involves considering the protection of both individual and collective rights, which includes the environment as such.

Paola Cantarini: What do you mean by AI governance? What relationship do you see between innovation, technology, and law?

Ugo Pagallo: There's an obvious distinction for me between regulation and governance. When we talk about A.I. governance, I'm thinking about how we oversee this technology, and it can involve various types of regulation, whether top-down regulation, co-regulation, self-regulation, or even a mix of these approaches. I apologize if most of my examples focus on Europe, but the distinction becomes evident in the initial draft of the European Artificial Intelligence Act. It's pretty revealing that this act encompasses all three forms of regulation, providing an overall framework for governing A.I. To be more specific, the European approach categorizes uses of AI into high-risk and low-risk categories, with some falling in between. Someone might then ask, what happens with high-risk uses? Certain uses of AI should be outright prohibited, for instance, those exploiting vulnerabilities in older adults and children, which is unacceptable. Then, when we talk about legitimate but high-risk uses, there's a strict top-down approach due to the associated risks, such as the use of facial recognition software. On the flip side, when dealing with low-risk uses - like everyday people interacting with Alexa, Siri, Google Assistant, and so on - most of these interactions, at least in Europe, are left to self-regulation. In the middle, let's say with chatbots, developers have specific responsibilities regarding transparency and ensuring users know whether they're interacting with a human or a robot. Coming back to your question, in a nutshell, the critical difference between regulation and governance lies in the fact that governance can encompass all three regulatory models, even within a single normative act. The A.I. Act in Europe is a prime example of this governance complexity. These represent the various regulatory approaches in play.

Paola Cantarini: At this year's Venice Architecture Biennale (2023), the theme of the Brazilian stand is "Earth and ancestry", that is to say, decolonization ("De-colonizing the canon", Brazil's "Earth" pavilion at the Venice Biennale). Would it be possible to escape such colonialist logic, which is also present in the AI/data areas?

Ugo Pagallo: I am aware that many scholars are sounding the alarm on what they call digital colonialism, and it's pretty intriguing since most of these critics are from the North, mainly Americans. Exploitation cases are indeed prevalent in our world, where a few billionaires possess as much wealth as 90% of the global population, a situation that's frankly unacceptable. There is a predicament, though I doubt that labeling it as colonialism truly captures the essence of what's unfolding. To illustrate the claim, consider the current global projects aiming to establish new celestial colonies on the Moon and Mars, such as the US-led Artemis project and China's International Lunar Research Station. It's captivating because, on one side, we still use the term "colony," but in a neutral sense, showcasing the lingering neutral connotation of colonization. On the flip side, various issues arise in establishing celestial colonies, involving territorial occupation and security, problems we're likely to encounter again when venturing to the Moon and Mars. Yet, here's my take on it. Beyond the broader issue of economic inequality, which extends beyond digital technologies, and the exploitation of rare materials, with the dominance of the digital economy by a handful of colossal corporations, there are subtler issues at play. When we develop digital technologies and, more specifically, A.I. systems for vital aspects of our lives - like A.I. for the industrial goods sector - we face challenges related to data. Given AI's reliance on data, we often end up with subpar outcomes when that data is lacking. While it may not be a form of colonialism, it undeniably poses a genuine problem. An interesting example underscoring this issue comes from my work with the WHO. The WHO published a report highlighting the lack of data for crucial sectors, such as health data for multiple African populations. This scarcity makes it challenging to develop specific AI systems. So, if we want to use the term "colonization," it's not just about imposing values directly or indirectly through power dynamics. Instead, it's crucial to recognize cases where the problem lies in our failure to harness the technology's benefits, due to neglecting data gathering for certain population segments or countries. When we discuss colonialism, there's a risk of viewing the new through old lenses or applying a label that may overlook specific challenges. As I previously mentioned, we're contending with issues like leadership, supremacy, and geopolitics. Simultaneously, we must acknowledge equally significant problems that might be sidelined in this discourse.

Paola Cantarini: What are the main challenges today with the advancement of AI, particularly after the controversy over ChatGPT and the "moratorium" requested in a letter/manifesto by Elon Musk and other leading figures?

Ugo Pagallo: To wrap it up, the last question captures everything we've been discussing. When assessing challenges, it's essential to distinguish between misuses and underuses of technology. Scholars often focus on the misuses - criminals exploiting A.I. or new forms of colonialism - not just because they're serious problems, but because they grab the public's attention. Yet, we shouldn't overlook the elephant in the room: underuses of technology. That's when we have the tech but fail to leverage it, for the wrong reasons. Last week, I published a paper on this topic with colleagues in the open-access journal Health and Technology. The underuse of AI in the health sector, as discussed in my Italian book from last year, is a significant challenge. In my estimation, the underuse of A.I. systems in Italy alone costs citizens roughly 2 or 3% of the G.D.P. annually, which is staggering. Of course, misuse is also a problem. Scholars have explored the next generation of robotic crimes since the early 2000s. Over the past decade, as I've mentioned, the risks of A.I. on the battlefield, particularly lethal autonomous weapon systems, have been in focus. Therefore, misuse and criminal use of A.I. are crucial concerns. Still, we mustn't forget the flip side: there are commendable uses of A.I. that we should foster without hampering their development. And now, shifting to ChatGPT, your example. Around 13 months ago, it became a trending topic, especially after Silicon Valley brought it into the spotlight. Last April, at the G7 meeting in Japan, most of the ministers mentioned ChatGPT. Later, the European Parliament addressed it by amending the proposal for the Artificial Intelligence Act. While we shouldn't underestimate the potential misuse and overuse of the technology, it's crucial to recognize that ChatGPT is just one vibrant facet of A.I. innovation, not the only one. Over the past year, the pressure has centered on trendy and risky A.I. sectors like generative A.I., which is fine for discussions among academia, scholars, and the public at large. However, the real problem arises when lawmakers focus excessively on a specific sector, albeit a crucial one, as is happening with ChatGPT. There's a risk of missing the bigger picture or, conversely, of struggling to fit new, vibrant sectors of A.I. into that bigger picture. A more balanced approach to regulation and governance is essential. We should avoid legislation that hinders sound technological innovation or that requires overly frequent revisions to keep up with technological advancements.