Interview with Ann Cavoukian, by Paola Cantarini

This interview was originally conducted in English in September 2023.

Ann Cavoukian is a former Information and Privacy Commissioner of the Canadian province of Ontario. Her concept of Privacy by Design, which takes privacy into account throughout the entire systems engineering process, was developed as part of a joint Canadian-Dutch team, both before and during her tenure as Ontario's commissioner (1997 to 2014). After her three terms as IPC ended, she joined Ryerson University (now Toronto Metropolitan University) as a Distinguished Visiting Professor. In 2014, Cavoukian was appointed Executive Director of Ryerson's Privacy and Big Data Institute. Since 2017, she has been the Distinguished Expert-in-Residence at the university's Privacy by Design Centre of Excellence.

Paola Cantarini: What's your area of expertise? Would you start telling us about your AI and data protection work?

Ann Cavoukian: I started, as you said, as commissioner in Ontario, Canada. And the interesting thing was, you see, my background: I'm not a lawyer; I'm a psychologist. My Ph.D. is in psychology and law. So when I started at the Commissioner's Office, there were brilliant lawyers who wanted to apply the law and make sure it worked and all that. But I wanted more than that. I wanted to be proactive, meaning, let's prevent the privacy harms from arising, not just offer solutions after they've been perpetrated. So literally at my kitchen table, over three nights, I created Privacy by Design, which is all about trying to prevent the privacy harms from arising, addressing privacy and security hand in hand, not one versus the other. I developed it around seven foundational principles. Then I took it into the office and sold it to the lawyers, if you will. And they were great. They understood I was all for the law; I was just hoping we wouldn't have to invoke the laws, because we could avoid a lot of the privacy concerns and the data breaches, etc. So that's how it all started. I was Commissioner for three terms, a long time, 17 years, and we had great success with it. It was unanimously passed as an international standard in 2010 by the International Assembly of Privacy Commissioners and Data Protection Authorities at one of our conferences. And then, as you know, in 2018 it was included in European law, the GDPR, the General Data Protection Regulation, which is huge. They had been working on that for five years, and they included Privacy by Design and also the second of the seven foundational principles, privacy as the default setting, which is so important. It means you don't ask your customers, your citizens, to go find ways to protect their privacy. No, you say to them: we give it to you automatically; it's the default setting; you don't have to worry. People love that. It builds trust like nothing else, at a time when there's so little trust.
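To make "privacy as the default setting" concrete in engineering terms, here is a minimal sketch, assuming a hypothetical Python settings object (the names below are illustrative, not from any real product), in which every data-sharing option starts at its most protective value and changes only on an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Hypothetical account settings where privacy is the default.

    Every data-sharing option starts disabled (the most protective
    value); the user must explicitly opt in rather than hunt for
    switches to opt out.
    """
    share_usage_analytics: bool = False  # off until the user opts in
    personalized_ads: bool = False       # off until the user opts in
    location_tracking: bool = False      # off until the user opts in

    def opt_in(self, option: str) -> None:
        """Record an explicit, affirmative choice by the user."""
        if option not in self.__dataclass_fields__:
            raise ValueError(f"unknown setting: {option}")
        setattr(self, option, True)

# A new account is private by default; no action is required of the user.
settings = PrivacySettings()
assert not settings.share_usage_analytics

# Data sharing happens only after an explicit opt-in.
settings.opt_in("share_usage_analytics")
```

The point of the sketch is the defaults themselves: protection holds even if the user never opens a settings page.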

Paola Cantarini: Do you think there is a need and, therefore, also the possibility for a worldwide law to regulate AI globally, at least fixing minimal standards?

Ann Cavoukian: Well, I think a global approach would be admirable, because you could focus on one standard. The EU just produced one, the AI Act; it'll be finalized later in the year, but they just produced it. The United States is working on one, and they're talking about working on it together. If the US and EU converge on a standard, I think it should be global, because otherwise you get a bunch of different ones, like in Canada, where the federal commissioner is working on one with two provinces while the Ontario commissioner is working on another one entirely by himself. You can end up with all these different things that you don't want. You want one strong standard, like the GDPR, that says you do not encroach upon people's personal information. And that goes a long way, because AI does so much so quickly relative to our ability to address those issues and detect them. I mean, forget it; it's going to take an enormous amount of time and effort. And you have to understand, the people who create these amazing tools, AI and ChatGPT, etc., are brilliant, of course, but there are also very brilliant hackers and phishers going after all of this to obtain lots of information, and ChatGPT, for example, does not prevent personally identifiable data from being accessed. So where are the privacy-protective measures? There's nothing there right now. And I know its creator, Sam Altman, is working with governments to get something going, but we have to move on this very quickly.

Paola Cantarini: How would the so-called "trade-off" between innovation and regulation work? Or would regulation by itself prevent or compromise innovation and international competition? According to Daniel Solove, in his book "Nothing to Hide: The False Tradeoff Between Privacy and Security" (Yale University Press, 2011), this would be a mistaken concept. Could you comment on this point?

Ann Cavoukian: I don't agree with that at all. It's not privacy versus innovation or privacy versus data utility; you have to have both. I always refer people to Steve Jobs, the brilliant creator of Apple. He said, in effect: 'Look, with privacy, I can do crazy blue-sky thinking, come up with wild ideas and then throw them out if they're ridiculous, but I can also end up with the brilliant formulation that was Apple back then.' He believed very strongly in privacy and innovation, of course. So you can and must have both. We have to get rid of the 'zero-sum game,' where either you win or you lose. That's so yesterday. Forget about that. You do both. If you're smart and can innovate in a brilliant way, you can also innovate in a way that builds in privacy and data protection.

Paola Cantarini: Let's take as a paradigmatic example in the area of data protection the LIA, the legitimate interest assessment, provided for in the Brazilian LGPD and in the EU's GDPR as a mandatory compliance document when legitimate interest is used as the legal basis for processing personal data (measured by a proportionality analysis/test). Would it be possible to create a "framework" aimed at protecting fundamental rights, embedded in a specific document, the AIIA (Algorithmic Impact Assessment), and thus, after a weighted analysis, establish adequate, necessary, and strictly proportional measures to mitigate risks to such rights?

Ann Cavoukian: I'll wait and see. I don't want to say no outright, but I also want to say that's again the either-or, one or the other; I want you to do both, and companies as well. I'm often invited to speak to boards of directors, and when I walk in, the CEO and his team have their heads down. They don't want to hear what I have to say. And I always say to them: give me 10 minutes, let me show you how privacy can enhance your operations and the delivery of your products and services to your customers, and then, if you're not interested, I'll leave. So all of a sudden they wake up and go, 'Oh, okay, go ahead.' And I talk to them about how it has to be positive-sum, meaning hand in hand: privacy and innovation, privacy and data utility. And then they're all for it. They say, 'I didn't know we could do both.' Of course you can do both. It's not that hard to ensure that the personal identifiers are removed, or to use synthetic data: you recreate the data so that it's not personally identifiable but can still be used widely for data-utility purposes, for innovation. There are so many ways to do this. We just have to give up the old-world thinking of one or the other and move ahead. There's so much going on.
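As a rough illustration of the de-identification she describes, consider this minimal Python sketch; the field names and the salted-hash pseudonymization are assumptions for the example, not a production-grade anonymization scheme (real de-identification must also account for quasi-identifiers and re-identification risk).

```python
import hashlib
import os

# Direct identifiers to strip; illustrative field names only.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
SALT = os.urandom(16)  # kept secret; blocks trivial dictionary reversal

def pseudonym(value: str) -> str:
    """Replace an identifier with a salted, one-way token."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def deidentify(record: dict) -> dict:
    """Drop direct identifiers but keep the fields useful for analysis."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["user_token"] = pseudonym(record["email"])  # stable join key
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com",
          "phone": "555-0100", "age_band": "30-39", "purchases": 7}
print(deidentify(record))
# e.g. {'age_band': '30-39', 'purchases': 7, 'user_token': '1f3a9c...'}
```

The salted token lets analysts join records belonging to the same person without exposing who that person is; synthetic-data generation, which she also mentions, goes further by producing entirely artificial records with similar statistical properties.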

Paola Cantarini: What do you mean by AI governance, and what relationship do you see between innovation, technology, and law?

Ann Cavoukian: Well, AI governance will be tough, there's no question, because for AI governance you need the regulators to understand how the AI works, as complicated as it is. I've been trying to work on neural nets and increase my own learning, and it takes a lot of time, so the likelihood of these guys doing it is slim. What I'm telling the regulators is: make sure you have experts, staff in this area, who can advise you, because audits will be taking place, and when there's an audit of your operations, if you're using AI, you'd better know how to answer the questions. So it can't be one versus the other. When I explain that, of course, they agree to have some tech staff, or to get one of their tech staff really focused on AI and data protection: on how to protect personally identifiable data in a way that doesn't diminish your AI but enhances it, because then it frees you from having to protect the data in any way. You can then go wild and use it for various purposes you probably hadn't contemplated. So I wouldn't give up on this at all.

Paola Cantarini: In this year's Venice Architecture Biennale (2023), the theme of the Brazilian pavilion is "Earth and ancestry," that is to say, decolonization ("De-colonizing the canon," Brazil's "Earth" pavilion at the Venice Biennale). Would it be possible to escape such colonialist logic, which is also present in the AI/data areas?

Ann Cavoukian: I never want to say no to things, so I'm always open. I'd like to see what they develop and have them convince me that personal information is, in fact, strongly protected while ensuring the fluidity and use of AI. I'm not opposed to AI at all; it enables you to do amazing things. If you go to ChatGPT, especially the fourth or the fifth version, it's amazing. The answers you get are remarkable. But I've also heard that if it doesn't find an answer, it hallucinates, so it can make things up. My God, that's the last thing we want. Now I want them to prove to me that that's not going to happen.

Paola Cantarini: And what are the main challenges today with the development of AI, especially after the controversy over the moratorium requested in a letter-manifesto by Elon Musk and other leading figures?

Ann Cavoukian: It will take time, because there is a lot of controversy associated with it. You know, there's a group of very brilliant tech individuals saying, 'Put the brakes on this right now.' Not Sam Altman, the leader who created ChatGPT, but he is participating with various governments in terms of how we address the privacy issues. We have to address this now, and whether you put the brakes on it completely or work with those who are creating and using it, I'm not sure which direction that will go in. But this has to be addressed. If it's not, companies using AI and ChatGPT, etc., will end up in the courts. There will be lawsuits, there will be class-action lawsuits, because people's personal information has been used in ways that were not consented to or that wreaked havoc on individuals' lives. That's what we're trying to avoid by saying you have to look under the hood now: trust but verify. Actually, in this case, don't trust; just verify. Look under the hood and do audits.