The Digital Panopticon: Surveillance Technologies, Big Data, and the Crisis of Democratic Societies

Paola Cantarini


INTRODUCTION

This article offers critical reflections whose theoretical framework is based on the works of David Lyon and on the “Surveillance Technology and Society” course (1) he taught in February 2024 at USP Ribeirão Preto, under CEADIN (Advanced Center for Studies in Innovation and Law at the University of São Paulo Law School, Ribeirão Preto campus), which I coordinate together with Professor Nuno Coelho.

David Lyon is the principal investigator of the Big Data Surveillance Project, emeritus professor of sociology and law at Queen’s University, former director of the Surveillance Studies Centre, and one of the foremost specialists on the topic.

The objective of this critical note is to present reflections on some of his main works, in dialogue with other authors who study the subject, covering stages prior to the digital surveillance society: especially Foucault’s studies of the societies of normalization, discipline, and regulation, and their evolution in the works of Deleuze and Byung-Chul Han, with the perspective of the control society and the digital panopticon. Concrete paradigmatic cases are analyzed in order to combine theory with practice, in the sense of “phronesis,” the Greeks’ practical knowledge.

From Disciplinary Societies to the Digital Panopticon

Surveillance is a key dimension of the modern world and is currently closely tied to big data (2), particularly through AI applications such as facial recognition and predictive policing. This has led to a form of surveillance now characterized as mass surveillance, under the slogan “collect everything,” enabled by access to and analysis of vast volumes of personal data. In this scenario, a general vulnerability arises from the ubiquity of information and the asymmetry inherent in such relationships.

Moreover, the increasing deployment of AI applications that enable real-time prediction, automation of outcomes, and modulation of human behaviors, intentions, and emotions (as seen in neuromarketing, captology, data brokers, and affective computing) introduces new and specific vulnerabilities, raising pressing issues that go far beyond the protection of individual fundamental rights such as privacy and data protection and challenging core democratic principles and the limits of surveillance within a Democratic Rule of Law.

The lack of transparency effectively nullifies the possibility of oversight, accountability, or responsibility in cases of abuse or error. These dynamics must be critically examined in the broader context of emerging forms of colonialism (data colonialism, carbon colonialism, and biocolonialism), as nations with histories of systemic discrimination remain especially vulnerable, as highlighted by a recent study from the Security Observatories Network (3).

The main characteristic of current security intelligence is extensive collaboration with technology companies, which store, process, and use our digital footprints, relying on big data; this expands the earlier focus on collaboration with telecommunications companies such as AT&T.

AT&T collaborated with the US government and was the subject of a lawsuit filed by the Electronic Frontier Foundation (EFF), eventually dismissed after Congress passed the controversial FISA Amendments Act of 2008, which amended the Foreign Intelligence Surveillance Act (FISA) of 1978. The amendments granted retroactive immunity to AT&T and allowed the Attorney General, from 2008 onwards, to request dismissal of such cases if the government secretly certified to the court that the surveillance did not occur, was legal, or was authorized by the president, whether lawful or not.

By extending retroactive immunity to conduct involving criminal liability, the amendments nullified the possibility of prosecution under the law prohibiting warrantless wiretaps; in effect, the law was replaced by a presidential order, lawful or not, undermining the foundations of the separation of powers and the rule of law.

Such immunity is becoming the rule, increasingly invoked by governments to enable their mass surveillance activities. Retroactive immunity reveals the illegal origins of mass surveillance, which operates in an anti-law zone, a kind of “gray area” that blurs the line between legal and illegal surveillance.

One example of the growth of surveillance technologies and the hegemonization of this big-data business model is the growing offer of informational services and software to public education institutions “for free” by the world’s largest data technology companies, known by the acronym GAFAM (Google, Apple, Facebook, Amazon, Microsoft). The counterpart of “for free” is full access to the personal data of thousands of users, affecting what can be understood as state sovereignty, since Big Techs are mostly headquartered in the USA and, increasingly, in China. The relationship remains obscure: the details of these operations are not disclosed by the companies or the institutions, and no information is provided that would allow them to be verified.

This scenario produces an asymmetry of power and knowledge, given the evident disparity between what companies operating under surveillance capitalism know about us and what we know about what they do with our personal data, a disparity that deepens the North-South gap.

As research points out, inequalities and potential affronts to human and fundamental rights in the field of AI are more problematic in Global South countries, having a greater impact where rights are systematically denied to oppressed communities (4). The agreements between companies and Brazilian universities, especially regarding “Google Suite for Education” and “Microsoft Office 365 for Schools & Students,” reveal how opaque such relationships are: veritable black boxes, lacking transparency, the fundamental requirement of trustworthy AI, and impacting especially those whose personal data is being used, as shown by the Electronic Frontier Foundation report (5).

In this sense, David Lyon, in the course held by CEADIN (Advanced Center for Studies in Innovation and Law at the University of São Paulo Law School, Ribeirão Preto campus), points out that, in the 1990s, surveillance was originally defined as systematic and routine attention to personal details with the intention of conditioning, managing, protecting, or directing individuals, involving targeted observation for various purposes, including influence over social media, labor relations, and organizational behavior.

Although generally associated with entities like the police, security agencies, border controls, and similar institutions, surveillance can also influence life choices, purchasing decisions, or work; its concept was later expanded to include both the operation and the experience of surveillance, involving the collection, analysis, and use of personal data to shape choices or manage groups and populations. In the 21st century, surveillance is characterized by its ubiquity, involving a “surveillance culture”: a new dimension that relies on our voluntary participation as a fundamental factor and has personal data as its main ingredient. Smartphones, for example, have become the predominant surveillance devices due to their widespread adoption and data-analysis capability, used by large companies, public and private entities, and government agencies to monitor individuals, even without any indication of suspicion.

Big Data and Surveillance Capitalism

Among David Lyon’s various works, “Liquid Surveillance,” co-authored with Zygmunt Bauman (6), stands out, the result of successive exchanges of messages, dialogues, and joint activities, such as participation in the 2008 biennial conference of the Surveillance Studies Network. The authors point to a new phase of liquid, mobile, and flexible surveillance that infiltrates and spreads across ever more areas of our lives, becoming an increasingly present aspect, assuming ever-changing forms, and departing from the old panoptic model studied by Foucault and Deleuze.

According to Foucault, in his studies of disciplinary, regulatory, and normalization societies, the panopticon is one of the main instruments of disciplinary power, a surveillance mechanism that allows seeing without being seen, producing the effect of a constant state of visibility. The architecture is designed so that light passes through. Everything must be illuminated; everything must be visible. In the transparency society, nothing should be left out. For Deleuze, in his “Postscript on Control Societies”, control societies are characterized by informatics and computers, as a mutation of capitalism. In them, the essential is no longer a signature or a number, but a code: the code is a password. Individuals have become “dividuals”, divisible, and masses have become samples, data, markets, or “banks”.

The characteristic of the digital panopticon, according to Byung-Chul Han when speaking of the “transparency society,” is the globalized reach of digital winds that transform the world into a single panopticon: “there is no outside of the panopticon; it becomes total, with no wall separating the inside from the outside.” Network giants like Google and Facebook present themselves as spaces of freedom, but they can also be instruments for the adoption of panoptic forms.

Consider, for example, the revelations made by Edward Snowden in 2013 about the PRISM program, which allowed the United States National Security Agency (NSA) to obtain practically any information it wanted from internet companies. A fundamental feature of the digital panopticon is the total protocolization of life, following a logic of efficiency that replaces trust with control. Instead of Big Brother, there is big data. We live the illusion of freedom, based on self-exposure and self-exploitation; here, everyone observes and surveils everyone else.

The Crisis of Democratic Oversight

The surveillance market in the democratic state has a dangerous proximity to the digital surveillance state. Instead of biopower, there is psychopower, which can intervene in psychological processes. It is more efficient than biopower because it surveils, controls, and influences the human being not from the outside but from within: it is the era of digital psychopolitics. Large volumes of data are thus a decisive factor of change. From the ubiquitous barcode, which identifies products by type or factory, we have moved to radio-frequency identification (RFID) chips, carrying individual identifiers for each item, and to quick response (QR) codes, sets of symbols placed on products and scanned by smartphones to access certain websites. These codes reveal different uses and types of monitoring applications, some providing customer convenience, such as reducing queues in supermarkets.

Big Data can thus be understood as data marked by ubiquity: volume and velocity are its main characteristics, but what matters most are the new applications it enables, such as predictive policing and neuromarketing, as pointed out in the Big Data Surveillance Project. Lyon highlights that big data practices are mainly characterized by the combination of databases from various sources, often merged into a single one. Big Data is therefore both complex and complicated: it captures details of our lives in amounts so vast they are almost impossible to compute.
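
To make this merging logic concrete, the following minimal sketch, with entirely invented data and column names, shows how two unrelated data sources become a single, richer profile once they share an identifier:

```python
# Illustrative sketch only: records from separate sources are merged into
# a single profile via a shared identifier. All data here are invented.
import pandas as pd

# Hypothetical telecom metadata.
calls = pd.DataFrame({
    "person_id": [1, 2],
    "cell_tower": ["T-104", "T-221"],
    "calls_per_day": [14, 3],
})

# Hypothetical purchase records from a loyalty-card program.
purchases = pd.DataFrame({
    "person_id": [1, 2],
    "store": ["Pharmacy A", "Supermarket B"],
    "monthly_spend": [230.50, 812.00],
})

# A single join already yields a profile richer than either source alone.
profile = calls.merge(purchases, on="person_id")
print(profile)
```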

This phenomenon of surveillance capitalism, a new “non-violent” economic and social order, was denounced in Shoshana Zuboff’s 2019 work “The Age of Surveillance Capitalism.” She reveals an economic system based on the commodification of personal data with the primary purpose of profit, an emergent logic of accumulation with unprecedented power that extracts and commodifies personal data in order to predict and modify human behavior through big data analytics. The consequences pose significant problems for democratic societies, pointing to the influence of new information technologies on our understanding and experience of freedom, power, democracy, and control, in both individual and social terms.

Digital surveillance is one of the fundamental dimensions of our contemporary society, involving new forms of vulnerabilities and new models of organization, as well as fundamentally modifying democracy itself. These relationships must be critically analyzed so that we are not limited to digital sociotechnical black boxes.

Regarding the new mass surveillance system, Snowden states in his book “Permanent Record” that we have moved from targeted surveillance of individuals to mass surveillance of entire populations, with national identity cards as a central factor. These cards combine high-precision technology with embedded biometrics and RFID chips, justified by arguments of better accuracy, efficiency, and speed, as well as immigration control, anti-terrorism measures, and e-government. Despite these alleged benefits, there are numerous potential dangers, including unforeseen financial costs, increased security threats, and an unacceptable imposition on citizens, making independent and continuous risk assessment and regular review of management practices essential (7). There may be a veritable “card cartel” involving the state, companies, and technical standards, which has generated significant controversy in countries such as Australia, the United Kingdom, Japan, and France.

“I am seen, therefore I exist.” This phrase reflects the desire to be seen on social networks, leading to the voluntary and even enthusiastic sharing of personal data, which the market uses to personalize ads with high potential for manipulating choices (through seduction, not coercion), thus commodifying our lives and personas. At the same time, there is consumer surveillance: in a positive sense, directed at the consumer market, and in a negative sense, directed at those who do not conform to expectations, resulting in “rational discrimination” and creating a negative spiral in which the poor become poorer and wealth concentration increases (8).

Related to surveillance in the big data era are issues of inference and profiling drawn from the enormous amount of personal data available, amplified by the questionable role of data brokers who sell this information, often unethically or illegally, since no genuine consent is obtained (consent that is informed, granular, and renewed for each new purpose and each new company benefiting from the data). These data feed deep-learning analyses that quantitatively optimize behavioral and emotional manipulation: personalized ads are crafted to maximize the probability of a purchase or the time spent on a social network, a decisive factor in creating previously nonexistent desires.
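
As a hedged illustration of this optimization logic (not any company’s actual system; the features, data, and ad variants below are synthetic assumptions), a simple model can be fitted to predict purchase probability, with the displayed ad chosen as the one with the highest predicted score:

```python
# Sketch of personalization as quantitative optimization: fit a model on
# behavioral data, then serve the ad variant that maximizes the predicted
# purchase probability. Everything here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600

# Synthetic user behavior (e.g. minutes on site, past purchases)
# plus which of three ad variants (0, 1, 2) was shown.
user_feats = rng.normal(size=(n, 2))
ad_variant = rng.integers(0, 3, size=n)
X = np.column_stack([user_feats, ad_variant])
# Synthetic outcome: 1 = the user purchased after exposure.
y = (user_feats[:, 0] + 0.7 * user_feats[:, 1] + 0.4 * (ad_variant == 2)
     + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Serving time: score each candidate ad for one user, show the argmax.
user = [0.9, 1.4]
probs = {v: model.predict_proba([user + [v]])[0, 1] for v in (0, 1, 2)}
best = max(probs, key=probs.get)
print(f"chosen ad variant: {best}, predicted purchase prob: {probs[best]:.2f}")
```

The same argmax-over-predictions pattern generalizes to maximizing time on site or click-through rather than purchases.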

As Morozov points out (9), in 2012 Facebook entered into an agreement with Datalogix that allowed it to associate what we buy at the supermarket with the ads displayed on Facebook. Similarly, Google has an application that analyzes nearby stores and restaurants in order to recommend offers.

In turn, several interesting cases are cited by Kai-Fu Lee in his book “2041: How Artificial Intelligence Will Transform Your World” (10); although it is a book of fictional stories, it brings information, examples, and scenarios that already occur in reality. There are, for example, AI-based fintech companies such as Lemonade in the United States and Waterdrop in China, which sell insurance through apps or grant loans with instant approval. In the chapter “Quantum Genocide,” Kai-Fu Lee states that technology is inherently neutral, a position aligned with what José van Dijck calls “dataism” (the belief in the “objectivity of quantification”) and with what is termed “solutionism,” the notion that the solution to all social problems lies in data and in the analysis of outcomes rather than causes. He argues that “disruptive technologies can become our Promethean fire or Pandora’s box, depending on how they are used.”

He cites the example of Ganesh Insurance, whose algorithm’s objective function is to reduce the insurance cost as much as possible: with each behavior of the insured, the premium rises or falls. The insurer is also linked to several applications that share user data, encompassing e-commerce, recommendations and coupons, investments, ShareChat (a popular Indian social network), and the fictional divination app FateLeaf. One alternative the author mentions for balancing an objective function aimed solely at maximizing corporate profit would be to teach AI complex objective functions, such as lowering insurance costs while maintaining justice. He believes, however, that such a requirement would only come about through regulation, since it runs against commercial interests, which prevents voluntary action. He also mentions the important role of corporate responsibility, such as ESG (Environmental, Social, and Corporate Governance) practices.
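
A toy sketch can make the idea of a “complex objective function” concrete. Below, a hypothetical pricing loss combines a business term with a fairness penalty weighted by a coefficient lam; the groups, numbers, and parity metric are illustrative assumptions of mine, not Lee’s formulation:

```python
# Toy composite objective: price premiums accurately, but penalize
# systematic premium gaps between groups. All figures are invented.
import numpy as np

def composite_loss(premiums, risk_costs, group, lam=1.0):
    # Business term: squared gap between charged premiums and actual risk.
    accuracy_term = np.mean((premiums - risk_costs) ** 2)
    # Fairness term: gap between the groups' average premiums.
    parity_gap = abs(premiums[group == 0].mean() - premiums[group == 1].mean())
    return accuracy_term + lam * parity_gap

rng = np.random.default_rng(1)
risk = rng.gamma(2.0, 50.0, size=200)   # hypothetical expected claim costs
group = rng.integers(0, 2, size=200)    # a protected attribute
surcharged = risk + 30 * (group == 1)   # pricing that penalizes group 1
risk_based = risk.copy()                # pricing that tracks risk only

print("surcharged pricing loss:", composite_loss(surcharged, risk, group))
print("risk-based pricing loss:", composite_loss(risk_based, risk, group))
```

With lam set to zero the fairness term vanishes and the objective collapses back into pure profit logic, which is precisely why Lee expects regulation rather than voluntary adoption.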

In the book “Big Data Surveillance and Security Intelligence: The Canadian Case” by David Lyon and David Murakami Wood (11), the change in surveillance practices through big data and new data-analysis methods for assessing potential risks to national security is emphasized. The “Five Eyes” partnership among Australia, Canada, New Zealand, the United Kingdom, and the United States stands out, with the interconnection of “security intelligence” and “surveillance” now including internet monitoring and, especially, social networks, linked to personal data analysis. The notion of security expands to encompass a series of new domains, even allowing extraordinary measures such as rendition and interrogation under torture, as happened to the Canadian citizen Maher Arar, deemed a suspect after the September 11, 2001 attacks.

The connection of national security activities with big data and surveillance, now in terms of “mass surveillance,” is corroborated by the revelations of American security agents such as William Binney, Thomas Drake, Mark Klein, and Edward Snowden. It includes the use of metadata: a study of more than 500 documents disclosed by Snowden showed how metadata can be used to build detailed profiles of the lives of those under surveillance.

On the other hand, there are several initiatives to legalize state mass surveillance, as in Canada, with bill proposals in 2009 (12) and, notably, Bill C-51 of 2015, which gave intelligence authorities more powers domestically and internationally, together with immunity from liability for their use. This was followed by Bill C-59 (National Security Act, 2017), part of a global wave of legalization that also includes the French “Big Brother laws” (anti-terrorism measures enacted after 2015) and Japan’s surveillance laws: the Secrecy Law and the 2016 Wiretapping Law, which expanded the categories of crimes subject to wiretap investigation, legitimized wiretapping as a means of criminal investigation, and, in sum, authorized the police to potentially eavesdrop on everyone’s conversations.

However, despite these legal foundations, transparency measures are lacking: for example, demonstrating that security safeguards have been adopted for the personal data used, so as not to violate the Canadian Charter of Rights and Freedoms and international human rights treaties, and proving that the so-called “four-part constitutional test” has been respected by showing that secrecy and other security measures are minimal, proportional, necessary, and effective. There is no information about what content is intercepted, what types of metadata are stored, where the data is kept and for how long, how it is disposed of, which organizational entities have access to it and for what purposes, whether it is anonymized, or whether “minimization” and security procedures have been adopted (13)(13A).

The proportionality of such measures is questioned in light of their potential infringement of privacy and freedom of expression, as measures of exception are indeed becoming the norm. This was foreseen by many classic authors and more recently explored by Giorgio Agamben (“Homo Sacer: Sovereign Power and Bare Life,” 1995; “State of Exception,” University of Chicago Press, 2005), and to some extent by Shoshana Zuboff with the theme of surveillance capitalism, who speaks of a “state of exception by Google,” in line with what Morozov (14) also asserts when pointing to algorithmic governance, evidenced by Facebook’s numerous social experiments, a true real-life laboratory, and by the defense of “information sovereignty” by Russia, China, and Iran.

In this context, the 2001 Council of Europe Convention on Cybercrime endorsed surveillance legislation during the War on Terror and was signed by forty-three countries, including non-member states such as Canada, Japan, South Africa, and the United States. The convention requires participating nations to enact legislation facilitating the investigation and prosecution of crimes committed over the internet and provides for broad legal access to traffic data by law enforcement authorities.

Toward Ethical and Democratic AI Governance

AI tools used for big-data surveillance carry the potential for “bias” in the sense of a feedback loop of prejudices and biased data, encompassing content shaped by structural prejudice and reproduced via algorithms. Facial recognition technology, in particular, may duplicate or amplify the institutional and structural racism that exists in society, resulting in coded inequity that fosters unjust infrastructures: it perpetuates injustice and other forms of discrimination through various instances of “bias” that are not systematically addressed by an appropriate algorithmic governance framework. For example, studies by Big Brother Watch indicate that 98% of the matches generated by cameras that alert UK police incorrectly identified innocent people as fugitives (15).
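
The kind of audit behind such figures reduces to disaggregated counting: of all alerts raised, how many were wrong, overall and per group? The counts in the sketch below are invented solely to show the calculation and why per-group breakdowns matter:

```python
# Invented alert outcomes: (demographic group, alert was correct?).
alerts = (
    [("group_a", False)] * 58 + [("group_a", True)] * 2
    + [("group_b", False)] * 40
)

# Overall false-alert rate, as reported in aggregate statistics.
false_alerts = sum(1 for _, correct in alerts if not correct)
print(f"overall: {false_alerts / len(alerts):.0%} of alerts were false")

# Disaggregating reveals disparities the aggregate figure hides.
for g in ("group_a", "group_b"):
    outcomes = [correct for grp, correct in alerts if grp == g]
    print(f"{g}: {outcomes.count(False) / len(outcomes):.0%} false alerts")
```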

Other issues relate to the absence of mechanisms through which citizens can assert their rights, and to the lack of preventive and mitigating measures regarding damage and information security. There is also a lack of assessment of the proportionality between negative impacts and positive externalities; the latter are generally equated with greater effectiveness, although this is questionable, as a 2021 LAPIN report points out. The report states that transparency is lacking because there is no systematized, consolidated, or publicized statistical data on the processing of data by facial recognition technologies in public administration; there is therefore no evidence of greater efficiency in public sector activities. In other words, according to the disclosed data, “the narrative of the technology’s efficiency does not seem to be statistically confirmed” (16).

Since other means could achieve the same purpose, and since the errors and other issues raised cast doubt on the technology’s efficiency, questioning the proportionality of the above-mentioned measures seems valid, given the potential harm to the fundamental rights of millions of people who, without being suspects, are subjected to state mass surveillance and have their personal data collected. A paradigmatic example is Salvador’s 2020 carnival, where 80 facial recognition cameras led to the arrest of 42 fugitives but captured the biometric data of 11.7 million people, adults and children alike (17, 18).
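
The proportionality concern can be stated numerically with the figures from the Salvador example:

```python
# Figures from the Salvador 2020 carnival example above.
scanned = 11_700_000   # people whose biometric data was captured
arrests = 42           # fugitives arrested

print(f"hit rate: {arrests / scanned:.6%}")                  # about 0.000359%
print(f"people scanned per arrest: {scanned // arrests:,}")  # about 278,571
```

Roughly 278,000 non-suspects had their biometric data captured for each arrest, which is precisely the disproportion the cited reports question.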

In order to reduce the mythology surrounding the neutrality and objectivity of algorithms and their predictions, it is important to emphasize that data is only a sample and never speaks for itself. Correlations can be spurious and may generate incorrect information where contextual and domain-specific knowledge is lacking. It is therefore essential that technical teams, usually drawn from the hard sciences, be expanded to include qualified personnel with expertise in law, philosophy (ethics), and sociology, providing an interdisciplinary and holistic analysis.

The application of such technology, given its potential for errors and infringements of fundamental rights, and its classification as high-risk by various international documents, should be preceded by an algorithmic impact assessment, ensuring that measures are taken to mitigate negative impacts and to strike a better balance between the intended benefits and the damage to fundamental rights. Finally, it is essential that Algorithmic Impact Assessments (AIA) be prepared independently by a multidisciplinary team, in order to ensure their legitimacy and impartiality.
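
As a minimal sketch of how such an assessment could be represented as a structured, auditable record (the fields and gating rule below are illustrative assumptions, not any jurisdiction’s official template):

```python
# Hypothetical structure for an Algorithmic Impact Assessment record;
# deployment is gated on mitigation and independent multidisciplinary review.
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    risk_level: str                 # e.g. "high" for facial recognition
    purpose: str
    affected_rights: list[str]
    less_intrusive_alternatives: list[str]
    mitigation_measures: list[str]
    review_team: list[str]          # should span law, ethics, CS, sociology
    independent_review: bool = False

    def approved_for_deployment(self) -> bool:
        # Block deployment unless mitigation exists and review was done
        # by an independent, multidisciplinary team.
        return (self.independent_review
                and len(self.mitigation_measures) > 0
                and len(self.review_team) >= 3)

aia = AlgorithmicImpactAssessment(
    system_name="city-fr-pilot",
    risk_level="high",
    purpose="locating fugitives at public events",
    affected_rights=["privacy", "data protection", "non-discrimination"],
    less_intrusive_alternatives=["targeted warrants"],
    mitigation_measures=["per-group error audits", "retention limits"],
    review_team=["lawyer", "ethicist", "ml engineer"],
    independent_review=True,
)
print(aia.approved_for_deployment())
```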

Equally essential is breaking the mythology of objectivity. Algorithmic outputs are products of social, cultural, and political contexts. Critical AI literacy, digital citizenship, and democratic oversight must be expanded—especially in Global South contexts.

Conclusion

We live in an era of hyperconnected crises—ecological, democratic, epistemological. The rise of the Digital Panopticon exposes deep flaws in current legal, political, and philosophical paradigms. As surveillance technologies evolve, so too must our frameworks for justice, accountability, and human dignity.

A truly democratic epistemology is required – one that recognizes pluralism, respects socio-cultural particularities, and promotes inclusive and sustainable governance of technology. This means moving from mimesis (representation) to poiesis (creation), from centralized control to technodiversity, and from reactive policies to proactive, ethically grounded design.

In the words of Wittgenstein, every language game is embedded in a form of life. If we are to remain human in the age of algorithms, we must reimagine both our technologies and ourselves accordingly.

Notes

(1) YouTube playlist – https://www.youtube.com/playlist?list=PLGA5ByQlQm0BpnPkTdjv0sp6A957L9mF-

(2) For a better understanding of this text, we provide the following essential concepts. Digital Panopticon: A socio-technological system where surveillance is continuous, borderless, and algorithmically driven, integrating voluntary self-disclosure with data extraction to enable behavioral control. Big Data Surveillance: A mode of monitoring that relies on massive, automated analysis of personal data to predict, categorize, and influence behavior. Surveillance Society: A societal model where observation and data collection are ubiquitous, involving both state and corporate actors.

(3) Big Data Surveillance Project, Surveillance Studies Centre, Canada – https://www.surveillance-studies.ca

(4) Rede de Observatórios de Segurança (Security Observatories Network) – First Report – https://www.ucamcesec.com.br/wp-content/uploads/2019/11/Rede-de-Observatorios_primeiro-relatorio_20_11_19.pdf

(5) Spying on Students: School-Issued Devices and Student Privacy – https://www.eff.org/de/node/95598

(6) LYON, David; BAUMAN, Zygmunt. Vigilância Líquida [Liquid Surveillance]. 2014.

(7) LYON, David; BENNETT, Colin J. Playing the Identity Card: Surveillance, Security and Identification in Global Perspective. 2008.

(8) LYON, David. The Electronic Eye: The Rise of Surveillance Society. 2005.

(9) MOROZOV, Evgeny. Big Tech: A Ascensão dos Dados e a Morte da Política. 2018, p. 33 et seq.

(10) LEE, Kai-Fu; CHEN, Qiufan. AI 2041: Ten Visions for Our Future. 2021.

(11) LYON, David; MURAKAMI WOOD, David (eds.). Big Data Surveillance and Security Intelligence: The Canadian Case. 2020.

(12) Bill C-46, Investigative Powers for the 21st Century Act, and Bill C-47, Technical Assistance for Law Enforcement in the 21st Century Act.

(13, 13A) Report: Necessary and Proportionate: International Principles on the Application of Human Rights to Communications Surveillance – Article 19, Asociación Civil por la Igualdad y la Justicia, Asociación por los Derechos Civiles, Association for Progressive Communications, Bits of Freedom, Center for Internet & Society India, Comissão Colombiana de Juristas, Electronic Frontier Foundation, European Digital Rights, Reporters Without Borders, Fundación Karisma, Open Net Korea, Open Rights Group, Privacy International, and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic, 2014. Link: https://necessaryandproportionate.org/files/2016/03/04/en_principles_2014.pdf

(14) R v Oakes, [1986] 1 SCR 103 – http://www.canlii.org/en/ca/scc/doc/1986/1986canlii46/1986canlii46.html

(15) Biometric Britain: The Rise of Facial Recognition and Biometric Surveillance in the UK – https://bigbrotherwatch.org.uk/wp-content/uploads/2023/05/Biometric-Britain.pdf

(16) Report: Uso de Tecnologias de Reconhecimento Facial e Câmeras de Vigilância pela Administração Pública no Brasil – https://lapin.org.br/2021/07/07/vigilancia-automatizada-uso-de-reconhecimento-facial-pela-administracao-publica-no-brasil/

(17) News report: Reconhecimento facial leva foliões para a cadeia em Salvador – https://www.terra.com.br/amp/story/byte/reconhecimento-facial-leva-folioes-para-a-cadeia-em-salvador,d97c9f3ab20747c829c0d0cb331db06dxy9c1jt3.html

(18) News report: Prensados pelo Carnaval: prisões com uso de reconhecimento facial reacendem o debate sobre erros e racismo tecnológico – https://revistaafirmativa.com.br/reconhecimento-facial-prisoes-no-carnaval-reacendem-o-debate-de-uma-tecnologia-com-altas-taxas-de-erros/
