Inverse Surveillance AI Expert Interviews


In collaboration with the podcast Future Based, we conducted four expert interviews on the topic of Inverse Surveillance.

Episode 1 – Aidan Lyon

“In this first podcast episode we go into conversation with Aidan Lyon. Aidan completed his PhD in 2010 at the Australian National University on the philosophical foundations of probability. His research focuses on psychedelics, meditation, uncertainty, wisdom, and collective decision making. Aidan is also an entrepreneur: he is CEO and co-founder of DelphiCloud, and often works as a freelance consultant on projects relating to his research areas. His new book, Psychedelic Experience, is a philosophical analysis of psychedelic experience, with the central thesis that psychedelic experiences are mind-revealing experiences and can also occur via meditation.” source: Future Based Podcast Page


Episode 2 – Steve Mann

“Steve Mann, an inventor and professor widely hailed as “the father of wearable computing”, expanded on the concept at the MIT Media Lab. He is a professor studying priveillance, i.e. the interplay between privacy and the veillances, such as surveillance (oversight) and sousveillance (undersight), as well as metaveillance (sensing sensors and sensing their capacity to sense). Steve has been described by the media as “the world’s first cyborg” for his invention of Mediated Reality (a predecessor of Augmented Reality), and he also invented HDR imaging and panoramics, now implemented in most cameras, including the Apple iPhone. He is considered by many to be the inventor of the WearComp (wearable computer) and WearCam (EyeTap and reality mediator). Furthermore, Steve joined Blueberry as Co-Founder and CTO in 2020. He is currently the acting director of the EyeTap Personal Imaging (ePI) Lab at the University of Toronto. He is also the Chief Technical Advisor of VisionerTech.

Steve has written more than 200 publications, books, and patents, and his work and inventions have been shown at the Smithsonian Institution’s National Museum of American History, The Science Museum (Wellcome Wing, opened with Her Majesty The Queen, June 2000), MoMA (New York), the Stedelijk Museum (Amsterdam), the Triennale di Milano, the Austin Museum of Art, and the San Francisco Art Institute.” source: Future Based Podcast Page


Episode 3 – Nadia Benaissa

“Nadia Benaissa is a human rights advocate and has worked as a data protection officer at a municipality. She is a humanitarian, writer, and policy advisor at Bits of Freedom. Bits of Freedom is an organization that stands up for two fundamental rights in your digital communication that are indispensable for your freedom: privacy and freedom of communication. These rights have been built up over centuries in the offline world, and because they are incredibly important for your individual freedom, for a just society, and for a healthy functioning democracy, it is important to reflect on how online rights are being guaranteed. But how exactly are democracy, freedom, and privacy being ensured online? In this episode, we talk with Nadia about the commonalities between AI and law and about learning from historical data to improve the future.” source: Future Based Podcast Page


Episode 4 – Rudy van Belkom

“Rudy van Belkom is a futures researcher at the Netherlands Study Centre for Technology Trends (STT). He recently published his book about ethics in the design process (‘AI no longer has a plug’), which offers developers, policymakers, philosophers, and basically anyone with an interest in AI tools for integrating ethics into the AI design process. The main question of his research is always: what future do we want? We need to ask ourselves what purpose we want to use technology for, rather than seeing it as a purpose in itself. How can we use technology to create a better world? And what exactly is a better world? Currently, Rudy is focusing on the impact of technology on the future of democracy. In addition, he developed an ethical design game for AI, inspired by the scrum process, that can be used to translate ethical issues into practice. The essence of the game is based on the position paper that he wrote together with the HU research group on AI, which was accepted for ECAI 2020: ‘An Agile Framework for Trustworthy AI’. Van Belkom also investigated the role of AI in the future of his own field.” source: Future Based Podcast Page


Inverse Surveillance AI Hackathon 2021


This hackathon is part of the Inverse Surveillance AI research project.

Hackathon Challenge: With your help, we can demonstrate the potential of Inverse Surveillance AI: using AI to surveil governments and bigger organizations in order to identify and predict wrongful behavior or systematic flaws, and by doing so empower citizens.
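To give a loose sense of the kind of system the challenge points toward, here is a minimal sketch (not part of the official briefing) that flags unusually large payments in a hypothetical CSV export of public spending records. The file name, column names, and threshold are assumptions made for the example; a real entry could use any public dataset and any model.

```python
import csv
import statistics
from collections import defaultdict

# Hypothetical export of public spending records with columns: supplier, amount.
SPENDING_CSV = "public_spending.csv"  # placeholder path, not a real dataset


def flag_outliers(path, z_threshold=3.0):
    """Flag payments that deviate strongly from their supplier's typical amount."""
    amounts = defaultdict(list)
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["amount"])
            amounts[row["supplier"]].append(amount)
            rows.append((row["supplier"], amount))

    flagged = []
    for supplier, amount in rows:
        history = amounts[supplier]
        if len(history) < 5:  # not enough history to judge this supplier
            continue
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and (amount - mean) / stdev > z_threshold:
            flagged.append((supplier, amount))
    return flagged


if __name__ == "__main__":
    for supplier, amount in flag_outliers(SPENDING_CSV):
        print(f"Review: payment of {amount:.2f} to {supplier} is unusually large")
```

A flagged payment is of course only a prompt for human review, in line with the auditor role described elsewhere in this project.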

Everyone is welcome to join. (Individuals & Teams)
This includes students, researchers, professionals, etc.

What to Expect

The hackathon consists of two parts:
1. One month of preparation time (starting October 15, 2021)
2. Hackathon Weekend (19-20-21 November, 2021)

Online, via Discord, English, CET (UTC+1h)

Those with other obligations are not required to join all hackathon events, as long as they submit their code before the deadline.

Deliverables:
1. Concept for Inverse Surveillance AI
2. Proof of Concept of Inverse Surveillance AI
3. A (video) pitch explaining your Proof of Concept

You can download the full Hackathon briefing via the link below. There you can find a full description, the challenge expectations, guiding questions, prizes, an elaborate timeline and schedule, etc.

Timeline & Schedule

  • Preparation Month – Friday 15 Oct. – Friday 19 Nov.
    You are allowed to prepare your concept and write code
  • Pre-Hackathon Week – Friday 12 Nov. – Friday 19 Nov.
    • Q&A Session: Friday 12 Nov., 18:00-19:00 CET (UTC+1h)
  • Hackathon Day 1 – Friday 19 Nov. (18:30 – 20:30)
  • Hackathon Day 2 – Saturday 20 Nov. (09:00-18:00)
  • Hackathon Day 3 – Sunday 21 Nov. (09:00-18:30)
    • 15:00 CET (UTC+1h) Submit code and (video) pitch

Join and make a difference!

Your Proof of Concept, in combination with the theoretical research and expert interview podcasts, will serve as a launchpad for future research and work on the topic of Inverse Surveillance AI.

Inverse Surveillance offers a new perspective on the dynamic between citizens and bigger organizations and governments. AI makes this dynamic feasible. Inverse Surveillance AI can empower citizens and turn them into auditors keeping bigger organizations and governments in check, and by doing so democratize technology.

Your proof of concept has the power to demonstrate the potential of Inverse Surveillance AI and get this idea rolling.

Sign-Up & Questions

For sign-ups, you can e-mail Juliette van der Laarse at juliette@asimovinstitute.org or contact her through LinkedIn.

Two Examples for Inverse Surveillance


Authors: J.P.R. van der Laarse & N.L. Neuman
Publication Date: September 24, 2021

Here we provide some metaphors as examples to better illustrate Inverse Surveillance. These metaphors represent how we see inverse surveillance in comparison to other forms of surveillance and sousveillance at this moment in time. Throughout this project, we aim to continue refining this concept and to describe more clearly the differences between the different forms of veillance.

Defining Surveillance

We use the terms surveillance and sousveillance as stand-alone concepts in these metaphors, based on the consensus within academic research. But surveillance could also be seen as an umbrella term for all veillance activities, and the same is true for the term sousveillance with respect to all surveillance activities carried out by citizens, including inverse surveillance.

The definitions used in these metaphors are based on our framework for inverse surveillance research. Prof. Steve Mann, who coined the term sousveillance, uses a broad veillance framework that encompasses surveillance, sousveillance, inverse surveillance, and other veillance concepts. He made the case for using veillance as the umbrella term instead of surveillance, which carries different connotations.

1) Police Officer vs. Auditor

Inverse surveillance is by definition not anti-government in a dystopian sense, but pro-government from a utopian stance. Inverse surveillance provides citizens with leverage for holding a government accountable, which ought to be considered a positive effect in a functioning democratic society. For the panopticon effect to work, there needs to be some level of threat. However, citizens will not take on the role of a police officer, who issues fines based on criminal behaviour and exercises power. Rather, citizens using inverse surveillance AI will essentially fulfill the role of an auditor. Auditors are also within their right to assess, correct, and sometimes enforce norms under the threat of specific consequences. However, an auditor differs from a police officer in that auditors report their findings while also offering organizations an opportunity for improvement. An auditor can be seen as an additional means of control to check that everything within an organization is running as it should according to some normative framework. Despite the strict monitoring role of auditors, in which they directly hold organizations accountable for their behavior, independent auditors are frequently hired by organizations themselves to monitor their business and operations and to ensure that everything is in order when a formal audit occurs. This dynamic of organizations reaching out to auditors for help in auditing their systems and contributing ideas for improvement is exactly the kind of relationship our Inverse Surveillance project aims to stimulate between citizens and governments or other large organizations.

2) School Examination

This metaphor relates to the different forms of veillance, and aims to illustrate the differences.

Surveillance: A teacher walks around during an exam to check if students are cheating. This is a form of power from above.

Counter-Surveillance: A student sits behind a pillar during an exam in protest, or sets their table up so that the teacher cannot perceive them properly. Whether the student cheats or not is irrelevant. The focus is on evading surveillance by the teacher. 

Sousveillance: The teacher walks past the tables and a student addresses their behaviour. For example: “Sir/Madam, I keep seeing you walking past the tables of students of colour. This is a form of discrimination.” The teacher’s surveillance is being observed and reported by a student.

Inverse Surveillance: The teacher walks past the students taking their exam, without the students paying attention to it. Surveillance is part of this process and the students are not necessarily concerned about it. However, the students have set up a student council to evaluate the teachers and the school system. Are they working fairly? What exactly is being surveilled? Have any processes crept in that lead to, for example, occurrences of racism? Or are there patterns that can be identified that indicate corruption?

Panopticon for the Masses


Authors: J.P.R. van der Laarse & N.L. Neuman
Publication Date: May 7, 2021

With security cameras in public places, police making their regular rounds in neighborhoods, proctors watching students during exams, and government organizations monitoring suspicious behavior online, surveillance is a part of our daily lives. Not only does such surveillance help spot and punish criminal behavior, it also has a psychological effect, and it is this effect that makes surveillance so effective. This is known as the panopticon effect, named after the panopticon that Jeremy Bentham conceived in the 18th century.

What is the panopticon effect? 
In short, it means that when you know you can be watched, you will behave better. In a public place you are less likely to show bad behavior because you are aware that you can be watched. Thus, you correct your own behavior without the police or other enforcement agents having to intervene. It is this psychological self-policing mechanism (Foucault, 1977) that makes surveillance such a powerful tool.

Bentham first looked at the panopticon model in the context of prisons, and he articulated the dynamic, and the requirements, needed to make the panopticon model work. Within this structure, the panopticon takes place inside an annular building of cell blocks, with a watchtower positioned at the center of the building. Each person within a cell block (the subject) is sectioned off from the other ‘prisoners’ inside their cell block, leading to an individualization of the subjects. The officials within the tower (the observers) are invisible to the subjects; however, they have total visibility of the subjects themselves, leading to an asymmetrical power relation. The end result of this surveillance structure is that the subjects develop a self-regulating mechanism in response to the possibility of being watched, and thus adhere to the institutional categories of evaluation and behave as is expected of them. As Foucault explained, “the major effect of the panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power” (Foucault, 1977; Jezierski, 2006).

According to Foucault, the panopticon model is as fascinating as it is frightening, and it illustrates Foucault’s views on the unequal power dynamic between citizens and government in general in the best possible way.

A Panopticon for the masses
To achieve Bentham’s form of a panopticon model, architectural change is required. The well-known dome prisons were architecturally designed specifically for this purpose. Security cameras achieve the same effect: the subjects can be viewed undisturbed by the observers without the subjects being able to engage in dialogue with them. To make the panopticon a reality, either a lot of money is needed for architectural redesign, or enough money is needed to install means of large-scale observation, such as security cameras. Permission to build and install, as well as the financial resources, are often in the hands of the government and larger organizations.

The democratization of AI, however, can be a game-changer for this dynamic. A simple algorithm can be developed at relatively little cost and function as thousands of observers. Not only is this useful for governments in the analysis of big data; this same tool can now be used by citizens to create a panopticon effect. AI thus makes surveillance by citizens, Inverse Surveillance, possible.
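To make the "thousands of observers" point concrete, here is a minimal sketch of a citizen-run watcher. The folder name, file format, and watch patterns are all placeholders invented for the example; the point is only that a few lines of code can read every published document, every time.

```python
import re
from pathlib import Path

# Hypothetical folder of published council meeting minutes as plain-text files.
MINUTES_DIR = Path("council_minutes")  # placeholder path

# Phrases a citizen "observer" might want to be notified about (illustrative only).
WATCH_PATTERNS = {
    "no-bid contract": re.compile(r"without (a )?public tender", re.IGNORECASE),
    "closed session": re.compile(r"behind closed doors", re.IGNORECASE),
}


def watch(directory):
    """Return (file name, pattern name) pairs for every document that matches a watch pattern."""
    hits = []
    for doc in sorted(directory.glob("*.txt")):
        text = doc.read_text(errors="ignore")
        for name, pattern in WATCH_PATTERNS.items():
            if pattern.search(text):
                hits.append((doc.name, name))
    return hits


if __name__ == "__main__":
    for filename, pattern_name in watch(MINUTES_DIR):
        print(f"{filename}: mentions '{pattern_name}'")
```

Even this trivial script provides the kind of tireless attention that previously required a large, well-funded staff of observers.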

We do not see inverse surveillance as a counter-action to surveillance by governments and other large organizations. We merely acknowledge that citizens can now, through AI, create a panopticon effect of their own and thereby take part in the activity of surveillance. This presents opportunities in democratizing surveillance AI that we think are worth exploring. Within this research, we recognize that the panopticon effect works and citizens too can successfully use it as a tool. 

References:

  1. Foucault, M. (1977). Discipline and punish: The birth of the prison. Translated by Sheridan, A. New York: Pantheon Books.
  2. Jezierski, W. (2006). Monasterium panopticum. Frühmittelalterliche Studien, 40(1), 167-182.

On Utopian Thinking


Authors: J.P.R. van der Laarse & N.L. Neuman
Publication Date: April 23, 2021

Surveillance AI is not exactly considered to be a positive development in this day and age, with controversial stories like China’s mass surveillance headlining many news platforms (Andersen, 2020; BBC, 2021). These news items evoke a negative perception of AI and remind us of movies like I, Robot, Terminator, 2001: A Space Odyssey, and Minority Report. This technophobic and dystopian view of Surveillance AI is part of the reason why ethical AI is a growing academic field. The focus of these studies lies primarily in preventing and countering this dystopian application of technology. However, although these studies from a dystopian perspective are very much needed, they mainly focus on limiting or governing these developments and work from within existing structures and systems. This leaves little room for positive, innovative developments.

Utopian Thinking
In order to get us to a future that opens up new possibilities with regard to Surveillance AI, instead of limiting them, we need a different approach to complement the dystopian work. Theory U teaches us that we need to be critical of our frame of mind, and preferably break out of our institutional bubble. This would enable innovation and allow the emerging future to take shape more quickly (Scharmer & Senge, 2016). We thus need a more out-of-the-box approach that is not limited by existing institutional structures. This approach stands at the center of Thomas More’s ‘Utopia’ (1516): imagining a perfect world in comparison to the world we are living in. Regardless of its attainability, we focus on the thinking method itself.


Utopian Thinking has been at the foundation of many great technological innovations, for example the World Wide Web and smartphones, not to mention groundbreaking ideas such as the theory of relativity and the abolition of apartheid (Hök, 2019). According to Brown (2015), it also facilitates collective thinking, which is essential for tackling complex problems “in these times of transformational change” (p. 1). Bell and Pahl (2018) add that co-production – using a think tank, for example – is a Utopian Thinking method. According to them, Utopian Thinking methods are essential for reshaping the world as we know it for the better. In addition, it encourages the public to become involved in the process (Fernando et al., 2018), which is precisely the type of citizen involvement we deem important for the design, development, and implementation of Inverse Surveillance AI.

It is for these reasons that we approach our research from a utopian perspective, and therefore encourage imaginative, original, out-of-the-box thinking, following the example of the great thinkers who stood at the basis of monumental innovations and ideas (Hök, 2019). We need to look past our current way of thinking within existing structures and build a new vision of what is socially acceptable in order to drive the growth and implementation of Surveillance AI (Harari, 2018). As Albert Einstein emphasized: “we cannot solve our problems with the same thinking we used when we created them” (Kataria, 2019).

Bibliography:

  1. Andersen, R. (2020). The Panopticon Is Already Here. The Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
  2. BBC. (2021). Uighur-identifying patent is ‘deeply disturbing’. BBC News. Retrieved from https://www.bbc.com/news/av/technology-55651932
  3. Bell, D.M., & Pahl, K. (2018). Co-production: Towards a utopian approach. International Journal of Social Research Methodology, 21(1), 105-117.
  4. Brown, V.A. (2015). Utopian thinking and the collective mind: Beyond transdisciplinarity. Futures: The Journal of Policy, Planning and Futures Studies, 65, 209-216.
  5. Fernando, J. W., Burden, N., Ferguson, A., O’Brien, L. V., Judge, M., & Kashima, Y. (2018). Functions of Utopia: How Utopian Thinking Motivates Societal Engagement. Personality and social psychology bulletin, 44(5), 779-792. https://doi.org/10.1177/0146167217748604
  6. Harari, Y. N. (2018). 21 lessons for the 21st century (First ed.). Random House USA.
  7. Hök, B.W. (2019). Are great innovations driven by utopian ideas? Journal of Innovation Management, 6(4), 98-116.
  8. Kataria, V. (2019). 3 Lessons from Albert Einstein on Problem Solving. Medium, The Startup. Retrieved from https://medium.com/swlh/3-lessons-from-albert-einstein-on-problem-solving-c5438b2ac2b9
  9. More, T. (1516). Utopia. Retrieved from Planet Ebook: https://www.planetebook.com/utopia/
  10. Scharmer, C., & Senge, P. (2016). Theory U: Leading from the future as it emerges: The social technology of presencing (Second ed.).

Conceptualizing Inverse Surveillance


Authors: J.P.R. van der Laarse & N.L. Neuman
Publication Date: April 23, 2021

In our new project, we focus on unwrapping the concept of inverse surveillance and how it can be used to empower citizens with AI technology. Since we wanted to place surveillance in the hands of citizens, the first name that came to mind to label this utopian vision of surveillance was ‘Inverse Surveillance’. After a quick Google search, we found out that this term has actually been used before, so we did a deep dive into the literature. We soon learned that inverse surveillance is often used as a synonym (or translation) for sousveillance (Mann, 2004), and is also mentioned in relation to counter-surveillance. However, neither of these concepts fully captures what we were going for. We decided to flesh out the concept a bit more and write down what we think are the main distinctions between the different types of surveillance.


For those interested, we will publish how we came to these distinctions and our definition of inverse surveillance based on the literature in another post, but in this post, we will focus on the table below, and our conclusions.

|               | Surveillance | Counter-surveillance | Sousveillance | Inverse Surveillance |
|---------------|--------------|----------------------|---------------|----------------------|
| Agent         | Top | Bottom | Bottom | Bottom |
| Subject       | Bottom | Top | Top & Bottom | Top |
| Action        | Surveillance | Evading & undermining | Surveillance & gaining more insight and involvement | Surveillance |
| Goal          | Controlling and influencing subject | Counter-reaction against surveillance of citizens | Counter-reaction against surveillance of citizens | Controlling and influencing subject |
| Power Dynamic | Centralization of power | Challenging institutional power asymmetries | Reversing the balance of power (hierarchical sousveillance); levelling the balance of power (personal sousveillance) | Democratisation of power |

Surveillance

Although ‘surveillance’ is also an umbrella term for the other concepts, in its colloquial use surveillance refers to the systematic monitoring of citizens (bottom) by governments or bigger organizations (top), in order to influence and control them (goal) and thus exercise power (power dynamic) (Ball et al., 2012; Hier & Greenberg, 2014; Lyon, 2007).

Counter-surveillance

In the case of counter-surveillance, citizens (bottom) actively evade and undermine surveillance by governments and bigger organizations (top) as a counter-reaction to the surveillance of citizens (goal), and by doing so challenge institutional power asymmetries (power dynamic) (Monahan, 2006).

Sousveillance

Sousveillance happens when citizens (bottom) surveil governments and bigger organizations (top) with the goal of gaining more insight into and involvement in surveillance, as a counter-reaction against the surveillance of citizens (goal), and by doing so reverse or level the power balance (power dynamic) (Mann, 2004; Mann et al., 2002).

Conceptualizing a fourth surveillance type

The exact definition of sousveillance is quite broad. Some articles focus on sousveillance as a means of gaining insight into surveillance done by governments and bigger organizations by surveilling the agent itself. In most articles, sousveillance takes a ‘stance against’ surveillance. In other articles, all surveillance activities in which citizens partake are included in the sousveillance concept.

The latter is closer to what we aim to focus on. According to existing terminology, our project would thus fall under sousveillance. We, however, wanted to draw a clear distinction from the ‘anti’ movement also present within sousveillance, and thus decided to separate the term inverse surveillance from sousveillance and give it a bit more depth. Whether our definition of inverse surveillance can be viewed as part of the umbrella term sousveillance or not is up for debate, but that is not what we are focusing on.

Our definition
Inverse Surveillance

In the case of inverse surveillance, citizens (bottom) surveil governments and bigger organizations (top) in order to control and influence them (goal) and thus promote transparency and equality, and by doing so democratize power (power dynamic).

This definition is not final yet, and it might change during the research, but we wanted to offer a clear starting point for fleshing out a new surveillance concept.

What we want to emphasize with this distinction is that our perspective on surveillance as a method is closer to surveillance than it is to sousveillance. In our case, the focus is not on surveillance itself; surveillance is seen as a mere tool that we deem helpful for exercising power and control and for influencing the subject. The difference with surveillance, however, and what puts us in line with sousveillance, is that in our case surveillance is done from the bottom to the top.

Facilitating Inverse Surveillance through Artificial Intelligence

Our suggestion to deepen the definition of Inverse Surveillance is the product of technological advancements through which ideas like these are becoming realistic for the first time in history. In Foucault’s (1977) book, surveillance can only be used by those in power, due to the extensive resources needed to conduct large-scale surveillance (for example, a police force that can patrol). With the rise of AI, we no longer need hundreds of eyes to watch data, videos, or images. This makes AI a realistic tool not only for organizations monitoring individuals but also for individuals monitoring organizations, without the extensive resources organizations have. For this reason, our project focuses on employing AI to facilitate Inverse Surveillance.
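As a minimal illustration of replacing "hundreds of eyes" with one model, the sketch below, assuming scikit-learn and NumPy are available, shows a single off-the-shelf anomaly detector triaging a volume of records that no individual citizen could read. The data here is synthetic and the feature meanings are invented for the example; in practice the features would come from whatever public dataset (spending, permits, decisions) is being monitored.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for a public dataset: each row is a record described by
# a few numeric features (e.g. amount, processing time, number of approvals).
rng = np.random.default_rng(0)
normal_records = rng.normal(loc=0.0, scale=1.0, size=(100_000, 3))
unusual_records = rng.normal(loc=6.0, scale=1.0, size=(50, 3))  # injected anomalies
records = np.vstack([normal_records, unusual_records])

# One model plays the role of "hundreds of eyes": it scores every record
# and surfaces only the most unusual ones for human review.
detector = IsolationForest(contamination=0.001, random_state=0)
labels = detector.fit_predict(records)   # -1 marks suspected anomalies
suspicious = np.flatnonzero(labels == -1)

print(f"Reviewed {len(records)} records, flagged {len(suspicious)} for human follow-up")
```

Flagged records are only candidates for the auditor-style review described earlier in this project; the model replaces the watching, not the judgment.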

Utopian Vision on Inverse Surveillance AI 

In this project, we focus on a utopian way of thinking. We realize that there are also many side effects to AI such as ethical complications, and these studies from a dystopian perspective are therefore also much needed. However, within this project, we are mainly looking for solutions, and innovative ideas to get this concept off the ground. Thus, from a utopian perspective, we focus not only on the possibilities of inverse surveillance but also on the broader role AI can play in society in this regard.

Throughout this project, our definition of inverse surveillance as elaborated upon here will serve as a starting point for our research. Building on this, we will focus on the utopian vision and the practical application of AI in the context of Inverse Surveillance. 

Bibliography:

  1. Ball, K., Haggerty, K., & Lyon, D. (2012). Routledge handbook of surveillance studies (Routledge international handbooks). Abingdon, Oxon; New York: Routledge.
  2. Foucault, M. (1977). Discipline and punish: The birth of the prison. New York: Pantheon Books.
  3. Hier, S., & Greenberg, J. (2014). Surveillance power, problems, and politics. Vancouver: UBC Press.
  4. Lyon, D. (2007). Surveillance studies: An overview. Cambridge, UK; Malden, MA: Polity.
  5. Mann, S. (2004). Sousveillance: inverse surveillance in multimedia imaging. Proceedings of the 12th ACM International Conference on Multimedia, New York, NY, USA, October 10-16, 2004. 620-627. DOI: 10.1145/1027527.1027673.
  6. Mann, S., Nolan, J., & Wellman, B. (2002). Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments. Surveillance & Society, 1(3), 331-355.
  7. Monahan, T. (2006). Counter-surveillance as Political Intervention? Social Semiotics, 16(4), 515-534.