The Effect of Surveillance Capitalism on Social Change Efforts, with Allen Gunn


author: Karissa McKelvey
date: 2019-12-11
slug:
title: The effect of surveillance capitalism on social change efforts, with Allen Gunn
wordpress_id:
categories: - blog
image: https://images.digital-democracy.org/assets/aspiration_team-1600@2x.jpg

This week, I sat down with Allen Gunn (“Gunner”), the Executive Director of Aspiration in San Francisco, USA. He works to help NGOs, activists, foundations and software developers make more effective use of technology for social change.

The common thread that connects all facets of Gunner’s work is a focus on open approaches to capacity building and knowledge sharing in social change efforts. We wanted to speak with him as part of our new Technology Solidarity series, where we’re taking a moment to explore what we’ve learned in conversation with others working at the intersection of technology and human rights.

You have a 10,000-foot view of this space. What do you think are the most pressing issues regarding technology for social change?

I’m trying to birth a paper about controlling our digital destiny – about having alternative digital infrastructure that we actually control. So much of what we’re trying to do at Aspiration is move the conversation away from technology and towards data. A particular pathology of tech thinking and tech decision-making in the non-profit sector is that software and hardware get line items within budgets, but data is not part of that conversation, because the cost and value of organizational data don’t show up in standard budget templates. In other words, nonprofit decision makers worry more about the tech than the data. We are looking to flip that perspective, so the conversation starts with data and only then turns to tech.

From a game-theoretic standpoint, we’ve already lost. Too many of our civil society adversaries have too much of our data, and we have so little of theirs. I try to explain it to folks we work with using simple real-world analogies. Imagine you’re in a room with a newly ambulatory toddler who has just locked their sights onto something; you can see their eyes, and you know their future and where they are heading in the minutes ahead. If it’s an inanimate object, some cute crawling lies ahead and you’ve got to go get the camera. If it’s an electrical socket they spotted, you see that they’re going to put their finger in it – you intervene. If it’s a live pet, you do a little threat modeling: is the cat cute and cuddly, or will it scratch the kid’s face off? By putting our data in the corporate cloud, we are that kid. We are giving corporations and governments a way to see where we are looking, so they can predict our future and decide whether and how to intervene or subvert.

The best options for cloud software are not open source, and the cost of using state-of-the-art tools is giving up control of all our data. We are addicted to phones that track us 24/7 and know everything we do. Idiots – and you can quote me on that – continue to buy home surveillance-ware and hardware that listens in on everything they say and do, even going so far as to send doorbell camera feeds to police departments and facial recognition databases. We are trending in all the wrong directions. We should be fighting hard to keep our data out of the control of governments and creepy, digitally-extractive multinational corporations.

If you want to talk about impact stories for non-profit and civil society, you need to understand what is at risk and stand in solidarity with those who are practicing data minimalism and data solidarity.

Where are the biggest gaps in practicing data solidarity?

A complete understanding of the social graph we carry with us is a huge one that people still don’t appreciate or internalize. Data solidarity means considering how my social graph impacts and potentially toxifies other people’s social graphs, and potentially puts them at risk. For example, we proudly work with Freedom of the Press Foundation, where Edward Snowden chairs the board; we’ve organized Tor developer meetings and Tor trainings; and we directly support scores of great organizations that oppose governments and corporations around the world. In short, we work with lots of folks that the US and other governments and multinational corporations worry about. If I make a clear social graph connection with someone from a marginalized community or someone without US citizenship, am I putting them at risk? If suddenly the government wants to trace my social graph and implicate anyone I have collaborated with, it would be really easy for them. This could affect their visa status or job prospects, get them on watch lists, or worse. The unknown future impacts of collected data cannot be calculated.

Everyone should concern themselves with having good data stewardship and retention policies. For example, do you save participant lists from your events forever? That’s something we wrestle with managing responsibly. We are experimenting with ways of talking about data practices by analogy with carbon minimization and sequestration. In the same way that carbon in the atmosphere is going to kill us, data in the digital sphere has profound potential to harm us, in both our digital and physical environments, as we move into the future. You already see this when folks are denied health care or financial services based on data secretly collected about them.
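To make that concrete: a retention policy can be as simple as a scheduled script that purges event data past an agreed-upon window. This is a minimal sketch, not anything Gunner or Aspiration prescribes; the directory layout, filename convention, and one-year window are all assumptions for illustration.

```python
from datetime import datetime
from pathlib import Path

EVENTS_DIR = Path("events")   # assumed location of per-event participant CSVs
RETENTION_DAYS = 365          # assumed policy: keep lists one year, then purge

def purge_old_participant_lists() -> None:
    """Delete participant lists older than the retention window."""
    now = datetime.now()
    for csv_file in EVENTS_DIR.glob("*.csv"):
        # Assumes filenames begin with the event date, e.g. "2019-06-14-summit.csv"
        event_date = datetime.strptime(csv_file.name[:10], "%Y-%m-%d")
        if (now - event_date).days > RETENTION_DAYS:
            csv_file.unlink()  # data you no longer hold cannot be seized or leaked

if __name__ == "__main__":
    purge_old_participant_lists()
```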

The US Department of Justice tried to subpoena DreamHost for all the IP addresses of people who visited the DisruptJ20 website used for organizing protests around the Trump inauguration. Luckily, DreamHost fought back and prevented the seizure of that information, but what would the government have done with that data on US citizens and non-citizens, and how would we ever know?

The amazing Daniel Kahn Gillmor (aka “dkg”) says another way to think about this is to equate data with pesticides. As soon as they’re released, we don’t know the future harm they could cause. The more data we collect or allow to be collected, the more future harm we must imagine possible – so we should collectively try to generate and store as little data as possible, and think about the future risks associated with the data we steward.

Google Drive accounts are honey pots, meaning they contain valuable data that can be seized in one go. A cellphone with Signal on it but without disappearing messages enabled is a potential honey pot. All off-the-shelf computing devices we use are ready-made honey pots. Increasing our resilience against data seizure and surveillance of all kinds means reducing the number of honey pots into which we stick stuff. Although not yet an impact story at scale, Nextcloud is part of that future: it encrypts data in smart ways and has features that dramatically increase users’ agency over their data. There are also emerging applications designed with these threat models at their core. Signalboost uses the Signal protocol to do group notifications for group actions like protests and mass mobilizations. It removes and/or hides as much metadata as possible, so the phone numbers in a group are not visible to the recipients. The owner of the Twilio account behind each group of users still knows all the phone numbers, but it’s a step in the right direction.

As a leader in the non-profit technology scene, you must have seen a broad range of projects over the years. Which projects that are still active today have had the most impact?

Signal was and still is an exemplar of how to do technology for civil society. It was a design-first project, meaning the user experience came first and was critical to get right from the beginning. The goal was to make it so that the user need know nothing about the security that protects them. They combined this with a hugely talented staff and were able to ship a very secure, wonderfully usable messaging app. There are very few stories like this in civil society.

WordPress, the open source content management system, is another example where we have seen technology deliver a thriving resource for the sector. WordPress is a paradigm for what an open source ecosystem should look like. It’s free and open, but still sustainable, and it gives users agency over where they store the data associated with their sites, both the content and the traffic data. You have options: you can self-host and control the data yourself, you can pay any third-party provider you trust to host your content, or you can pay the WordPress company directly and enjoy extremely high uptime and availability. WordPress supports full data exportability, which means I can export my content and settings at any time and switch to a different one of the above scenarios, and the export is an open format that other platforms can easily import. Even though there’s a for-profit company behind it, it’s a case study at scale of how open source should be designed, developed and deployed, with users having a variety of options for how they control their own data. Compare that to trendy web platforms like Squarespace, which spy on all your visitors and site activity in very non-transparent ways. It hurts to think about how they monetize and share that data. And come to think of it, I hope WordPress doesn’t do shady things with user data at www.wordpress.com. But that’s why we host elsewhere.
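That exportability is easy to see in practice. Beyond the built-in export tool, any stock WordPress site since version 4.7 exposes a REST API you can pull your own content from. Here is a minimal sketch of that, with a placeholder site URL; it is one illustration of data portability, not Gunner’s recommended workflow.

```python
import json
import urllib.error
import urllib.request

SITE = "https://example.org"  # placeholder: a WordPress site you control

def export_posts(per_page: int = 100) -> list:
    """Fetch every published post via the core REST API, page by page."""
    posts, page = [], 1
    while True:
        url = f"{SITE}/wp-json/wp/v2/posts?per_page={per_page}&page={page}"
        try:
            with urllib.request.urlopen(url) as resp:
                batch = json.load(resp)
        except urllib.error.HTTPError:
            break  # the API answers 400 once we page past the last post
        if not batch:
            break
        posts.extend(batch)
        page += 1
    return posts

if __name__ == "__main__":
    # Keep your own copy, ready to re-import into another host or platform.
    with open("posts-export.json", "w") as f:
        json.dump(export_posts(), f, indent=2)
```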

For those reading this who are involved in non-profit technology, whether supporting an existing project or making a new one, do you have some advice?

Listen to your users first, second, and third, and prioritize your accountability to them over your desire to “do tech”. Technology implementation should come last; implementing and coding early is a sign of immaturity. “Measure thrice, cut once.” Don’t just do user research in one go and then code for a year and disappear. Test your thinking and designs with your users, treat them like core partners in your technology processes, make it easy for them to give tough love to your plans and projects, and let the findings drive where you go.

Be free and open source in all that you produce. And please, please, please: if you start an open-licensed project, model for success and have project governance conversations early. People are hopefully going to want to contribute code, time, money, and more. You need to talk about how project decisions and priorities get made, how those contributions get honored, and how you federate equity in the project as it scales. For example, see Mozilla’s module governance. Ownership of the code and activities supporting Firefox is federated, there’s a governance mailing list to discuss how things are working, and you can pass on ownership of a module. There’s always a fallback “peer” if you get hit by a bus. This is a solid example of a governance model that has sustained over time.

I’m sure there are many people reading this who want to do something to help the movement against surveillance capitalism. What would you say to those who want to enter this space?

My first answer is: learn how to be a data steward or data ally. Help organizations proactively think about what data they collect and how it is governed after it’s collected. Help organizations get their collective head around all the data they possess: how they curate it, how they back it up, and how they minimize it over time.
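A practical first step for an aspiring data steward is simply an inventory: what data does the organization actually hold, and how much of it is there? A minimal sketch along those lines, where the path to scan is a hypothetical file store you would point it at:

```python
from collections import Counter
from pathlib import Path

def inventory(root: str) -> None:
    """Summarize an organization's files by type: a starting point for minimization."""
    counts, sizes = Counter(), Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            ext = path.suffix.lower() or "(no extension)"
            counts[ext] += 1
            sizes[ext] += path.stat().st_size
    for ext, size in sizes.most_common():
        print(f"{ext:>16}  {counts[ext]:6} files  {size / 1e6:10.1f} MB")

if __name__ == "__main__":
    inventory("/srv/shared-drive")  # hypothetical path to the org's file store
```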

There’s a French proverb that basically says: “The words you have not spoken, you are their owner; the words you have spoken, they own you.” The data that organizations and individuals have committed to digital memory stands to ultimately control them. How can you make that dynamic as healthy as it can be, by practicing data minimalism and actively considering your data decisions?

Don’t go to code academy; go to design academy. Be an advocate for the user and the consumer. It’s not about learning how to code; it’s about translating real-world needs into technological specifications in just ways that give end users agency and equity in design, development and delivery. Be a champion of user-centric design. Learn how to steward data and offer your help.

Finally, work on your information security literacy and teach all that forward. Learn what a threat model is, and explore how to model and mitigate present and future threats. After all, it’s not about the tech, it’s about the data, and we need more people thinking, talking and teaching about emerging data threats.

Remember that tech decisions are political decisions, you are what you use, and ultimately your integrity lies in how you manage your data and the data of those with whom you work and live.

This article was cross-posted on Medium.com