Surveillance, Security, and Privacy, Part 1: The Ethics of Surveillance

In an age of open access, widely available public information, and instantaneous search results, the ethics of surveillance and privacy are becoming a pressing topic with which researchers must grapple.

My name is Jennifer Ross, and I am in my final year (fingers crossed!) of dissertation writing in the American Studies program. My research explores the intersection of post-9/11 counterterrorism, anti-Black and anti-Muslim racisms, and the preparation for and response to Hurricane Katrina. As part of my dissertation, I have designed a digital humanities component that visualizes the nature and location of counterterror policy-in-practice in post-Katrina New Orleans. Where do we see the influence of counterterrorism, and what form is it taking? More theoretically, my project attempts to reveal the messy interplay between anti-Black and anti-Muslim racisms, mass incarceration, “law and order” policies, and counterterror strategies such as private security contractors and preventive detention. How are two racialized systems of social hierarchy informing and reinforcing each other at this particular moment in history? Using Python, I overlaid flood depth, city demographics, and points of interest to demonstrate how counterterror rhetoric, policy, and practice designed for use in overseas warfare were also used to quell what commentators at the time described as an “insurgent” black population in New Orleans.
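A core step in an overlay like this is testing which points of interest fall inside a flooded zone. The sketch below uses a standard ray-casting point-in-polygon test; the flood-zone rectangle and coordinates are invented for illustration (real zones would come from NOAA data layers), and `point_in_polygon` is my own helper, not a function from any mapping library.

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: does (lon, lat) fall inside `polygon`,
    given as a list of (lon, lat) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Invented flood zone (illustrative rectangle, not real NOAA data).
flood_zone = [(-90.10, 29.93), (-90.02, 29.93),
              (-90.02, 29.99), (-90.10, 29.99)]

# The Superdome sits near (-90.081, 29.951); coordinates approximate.
print(point_in_polygon(-90.081, 29.951, flood_zone))  # → True
```

With a helper like this, each point of interest can be tagged with the flood zone (and depth band) it falls inside before rendering the map.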

Flood depth map and points of interest in metropolitan New Orleans

NOAA flood depth map overlaid on satellite imagery. Green points indicate the locations of Abdulrahman Zeitoun, an Arab American accused of terrorism. Red points indicate locations of major interest, including the Superdome and Convention Center. Blue indicates sites of levee failure. For the flood depth itself, blue and green indicate higher water levels (6-10 feet, with local areas up to 20 feet), while red and orange indicate lower levels (0-4 feet).
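For anyone reproducing a color ramp like the one in this caption, the legend can be encoded as a simple lookup. The exact thresholds below are my own reading of the caption (which leaves the 4-6 foot band unnamed), so treat them as illustrative rather than NOAA's own scheme:

```python
def depth_color(depth_ft):
    """Map flood depth (feet) to the display colors described in the
    figure caption. Thresholds are an illustrative reading of the legend."""
    if depth_ft <= 2:
        return "red"
    elif depth_ft <= 4:
        return "orange"
    elif depth_ft < 6:
        return "yellow"  # the caption names no 4-6 ft color; placeholder
    elif depth_ft <= 10:
        return "green"
    else:
        return "blue"    # local maxima up to roughly 20 ft
```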

Given the violences enacted through extreme surveillance and militarized response in largely black portions of New Orleans, I have continually interrogated the ethics of my digital research. Will this project cause more harm to survivors and/or black and brown communities in New Orleans? Does the mapping put marginalized individuals or populations under greater scrutiny or risk? How can I mitigate the possible exposure of survivors and their communities?

With funding from the Charles Center, the Dean of the Faculty for Arts and Sciences, and William & Mary Libraries, I enrolled in the “Surveillance and Critical Digital Humanities” course at the Digital Humanities Summer Institute (DHSI) in Victoria, British Columbia. The course adopted an anti-colonial methodology to examine how physical and digital surveillance methods are differently applied to racialized, criminalized, and gendered populations. For instance, we explored how corporations and government engage in mass surveillance by gathering internet user activity, interests, contacts, and customization preferences in what Julia Angwin terms the “dragnet nation.” We talked about the use of biometrics—fingerprints, facial recognition, DNA—as a means to track people through physical spaces like airports and train stations, or digital ones such as social media. To process all this data, programmers can write “algorithms of oppression” (Safiya Noble) imbued with human social, cultural, and racial biases to do everything from selecting job candidates and providing search results to constructing predictive analytics for crime prevention or the allocation of resources.

Image of bus station converted to outdoor prison.

Camp Greyhound, an outdoor prison facility reminiscent of Guantanamo Bay. Built at Union Passenger Terminal in downtown New Orleans.

I was familiar with many of the theoretical arguments going into the class (all those hours working to understand Michel Foucault and Wendy Brown and Jasbir Puar finally paid off!), but I knew less about what to do about it, digitally speaking. Even after this class, I by no means have all the answers, but I do have some tactics to work with going forward.

First: consent. Sometimes it seems like much of our job as researchers focuses on the data. Find the data. Analyze the data. Present the data. We do have oversight like IRB to check that human research participants are informed of their rights when giving and gifting records like oral histories. But data acquisition gets messier when we talk about social media and other publicly accessible information. You’d be surprised what’s out there on you, getting traded around by third-party companies or collected by sites like Spokeo (spoiler: following this link could be terrifying). Just because it’s technically public doesn’t mean it’s ethical to use. Perhaps a person doesn’t want their data used for research purposes or in a particular mode of research, like mapping. Perhaps they don’t even know their data is out in the world in the first place. Much of my mapping data centers on government and military entities, but I want to take extra care to acquire consent before posting data that could potentially de-anonymize individual survivors, even if that data is publicly available.

Second: compartmentalization. The increasing number of coordinated online abuse and harassment campaigns requires us to take further protective measures, particularly if our research takes up highly charged topics (more on this in the next post!). I plan on creating a Scalar version of my digital work, with still images of and links to the map I have been creating. Unfortunately, this will also give internet trolls another front on which to launch their attacks. In light of this, I have to take care to set up as many physical, digital, and emotional buffers as I can between trolls and survivors: no identifying data on either front end or back end, no cloud storage of research data (even for back-ups), mindful design involving the use of plug-ins and widgets, and stricter parameters for comments and participation.
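One concrete way to keep identifying data off both the front end and the back end is to generalize point locations before they ever leave the local machine, for example by snapping coordinates to a coarse grid so that neighborhood-level patterns survive while precise addresses do not. This is a minimal sketch of my own; the `snap_to_grid` name and the 0.01-degree cell size (roughly one kilometer) are arbitrary choices for illustration:

```python
def snap_to_grid(lon, lat, cell=0.01):
    """Snap a coordinate to the center of its grid cell, discarding the
    precise location while preserving neighborhood-level patterns.
    The default 0.01-degree cell (~1 km) is an illustrative choice."""
    def snap(v):
        return (v // cell) * cell + cell / 2
    return (round(snap(lon), 6), round(snap(lat), 6))

# Two nearby (invented) points collapse into the same published cell,
# so neither exact location is recoverable from the map data.
print(snap_to_grid(-90.0812, 29.9511))
print(snap_to_grid(-90.0809, 29.9514))
```

Generalizing coordinates at ingest, rather than at display time, means the precise values never sit in a back-up or database that could later be exposed.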

Finally: sousveillance. This tactic is a little more nebulous and future-oriented than the previous two in that it anticipates raising awareness and making social, cultural, and political changes to our level of acceptance of counterterror state governance. Sousveillance typically refers to the act of “watching the watchers.” At our current moment, cell phone footage of police brutality or shootings stands as the most well-known type of sousveillance. My project retroactively surveils state actors for their implementation of war-time counterterror policies on US citizens. Like others engaged in sousveillance, I hope that my act will not only bring about social awareness, but social change as well. I recognize that this sounds a little naïve, particularly given how entrenched national security concerns and militarized domestic policing have become since the Cold War. However, I still have hope. After all, the NSA has finally recommended ending its sprawling phone surveillance program and our national grand strategy is transitioning from counterterrorism back toward superpower blocs. And, since the election of P45, people seem quite ready to take up social justice issues. If nothing else, the project intervenes in a tendency to completely separate 9/11 and Hurricane Katrina. It is my hope that this sousveillance project will ultimately transform our historical narrative of the post-9/11 period and help create momentum for dismantling some of the most egregious excesses of the counterterror state.