We can make surveillance work for us


Data mining is a threat to our autonomy, but it also offers opportunities to build a more humane world. We need a new social compact to manage it.

By Iyad Rahwan and William Powers

Updated June 26, 2020



Sundar Pichai, chief executive of Google's parent company, Alphabet, is among the tech executives who have called for new government regulations. (Geert Vanden Wijngaert/Bloomberg)


Two horrible events, the COVID-19 pandemic and the killing of George Floyd, have shaken the world, raising consciousness about social, economic, and political problems that had been festering for generations. They could also inspire us to solve a relatively new problem with huge implications for democracy’s future: the growing role of surveillance in our lives.

Surveillance used to be associated mainly with government spies and eavesdropping satellites. But about 20 years ago, the tech industry discovered there was money to be made in tracking people’s personal relationships, political leanings, physical movements, and countless other aspects of their lives. The resulting behavioral data is sold to advertisers and other businesses for commercial use.

It’s called surveillance capitalism — the subject of an influential 2019 book by Harvard Business School professor Shoshana Zuboff — and it’s how Google, Facebook, and many other companies became profit machines. Any digital device or app is a potential tool of the tech-surveillance complex. We’re surveilled not just by our phones but also by our Internet-connected cars, our smart speakers and vacuum cleaners, our refrigerators and thermostats, even the toys our children play with.

Both kinds of surveillance, government and industry, threaten our privacy, agency, and autonomy. Public concerns about Big Brother long ago prompted modern democracies to place limits on government surveillance of citizens. But as surveillance capitalism emerged from Silicon Valley and spread all over the world, it drew relatively little public scrutiny or pushback.

This is partly because the new surveillance isn’t conducted by the government, but by private companies that we allow to surveil us every time we approve one of those “user agreements” we haven’t read. In other words, we bought into surveillance capitalism. Why? Because the same technologies do many other useful things for us and genuinely enrich our lives. In effect, we’ve cut a deal with the tech industry, giving up a lot of our privacy in exchange for substantial utility and pleasure.


But awareness has risen over the past few years of how closely the tech industry watches us when we use its tools, and of the harmful behavioral and social effects that watching can have. Two prominent examples are the rise of fake news, which can be targeted to fit readers’ preconceived beliefs, and the corrosive impact social media platforms have had on political discourse. Manifestos like Zuboff’s and popular dystopian movies and TV series, such as “Westworld” and “Black Mirror,” have also stoked fears, as have numerous studies revealing how algorithms often manipulate our emotional states and decisions and reinforce stereotypes, inequities, and biases. One recent study found that a widely used hospital algorithm has been causing Black people to get less medical care than white people with exactly the same illnesses. Even industry leaders such as Apple’s Tim Cook and Google’s Sundar Pichai are conceding that regulation — new government rules and boundaries for these technologies — is needed.

How to reconcile the extraordinary potential of these technologies with their significant human downsides? Can privacy and autonomy be protected in a world increasingly dominated by surveillance? Can we steer clear of the emerging authoritarian model, a hybrid of government and capitalist surveillance with little regard for freedom and self-determination? Now could be the moment to resolve these questions. In both the global pandemic and the death of George Floyd, we can see the ameliorative possibilities of the technologies that undergird surveillance capitalism.

One of the lessons of COVID-19 is how effectively surveillance technologies can serve purposes beyond capitalism — namely, the public good. These include rapid public education, via social media and other platforms, about how the virus spreads, and innovative measures for keeping the death rate down. In China, South Korea, and other countries, for example, contact tracing based on surveillance data has been used effectively to flatten the curve of coronavirus infections. And for all the current weariness with Zoom calls, moving our social and work lives to the screen allowed many of us to forge onward with a semblance of normalcy.

The other major crisis, the death of George Floyd at the hands of Minneapolis police, began with a single video shot by teenager Darnella Frazier on her phone. Citizens have natural suspicions about government’s dark side, but few have the chance to turn the camera on it when the worst abuses are underway. Frazier was surveilling the police, and by capturing the awful 8 minutes and 46 seconds it took Floyd to die, she changed the conversation about policing and race in a way that people who have been working on this issue for years hadn’t been able to do.

The shared lesson of these two stories: surveillance is simultaneously a threat to our autonomy and an opportunity to build a better, more humane world. Why not devise a way to use the positive power of surveillance to make society healthier, happier, and more prosperous, while keeping the downsides sufficiently in check to protect the public?

We need a social compact for the surveillance age, one that codifies the values guiding our use of these technologies, so we can make the tradeoffs wisely. It might include the following:

1. Individual citizens own their personal data. This could involve what MIT professor Alex Pentland calls a “New Deal on Data,” which would give people the ability to see what’s being collected about them and to choose to opt in or out. Government could mandate that the surveillance industry guarantee people access to their data and give them final say over its uses.

2. Government has the right to use our data in anonymized, aggregate form to promote the public good. For example, the government could use anonymized data to help public health officials meet a new challenge. Just as voting is mandatory in some countries, such as Australia, sharing certain kinds of anonymized personal data with the government for the public good — say, to prevent a pandemic — could also be part of our duty as citizens.

3. Explicit limits on the government’s use of our data and on the use of algorithmic decision-making in sensitive areas, such as granting parole or allocating medical care. An independent agency could be tasked with oversight and enforcement of these limits. This agency might allow the government to access citizen data under very specific circumstances, such as criminal investigations or matters of national security. Existing privacy laws could also be updated to prevent the executive branch from using these technologies to track our individual movements or exploit our data for political purposes.

4. Citizen surveillance of the government. Citizens would be encouraged to track government operations to assess their effectiveness and fairness. Imagine a future in which individuals could surveil both government officials and algorithms used for public programs, and call out bad ones the way Frazier called out the cops.

If we’ve gained any wisdom about technology in the last few months, it’s that the surveillance puzzle is more complicated and nuanced than it often seems. These tools can be a force for ill or for good. We need to sit down as a society and sort out the difference, so we can steer this fractured world to a better place.


Iyad Rahwan, an associate professor at MIT for the past five years, is the director of the new Center for Humans and Machines at the Max Planck Institute in Berlin. William Powers, a former MIT research scientist, is a visiting scholar at the Center for Humans and Machines. He is the author of “Hamlet’s BlackBerry: Building a Good Life in the Digital Age.”



