April 2020: We are compiling summaries of state-of-the-art research in ethics at the frontier of technology, following the theme of our 2019 Susilo Symposium. Today, we review insights on privacy and algorithms from Nathanael Fast and Arthur Jago (University of Southern California).
On the one hand, services based on artificial intelligence offer a variety of benefits to consumers. On the other hand, these services require a large amount of personal data. Hence, the growing number of algorithmic services poses new threats to privacy.
While people have historically cared about the right to privacy, the diffusion of algorithm-based services may erode our capacity and motivation to protect it. Specifically, Fast and Jago suggest that our tendency to rationalize privacy-reducing algorithms rests on four factors:
1. Awareness of the benefits and conveniences of algorithms: consumers recognize that algorithms help them identify relevant content (e.g., music, movies), improve decision making (e.g., recommendation systems), and even increase productivity (e.g., digital assistants).
2. Underestimation of the likelihood of harm: companies make it difficult to know exactly how they collect and use personal data. In turn, this opacity may lead consumers to believe that harm is relatively unlikely to occur.
3. Exposure to negative consequences only after usage has begun: people typically experience the negative outcomes of reduced privacy well after they have started using a service. Even when the costs become salient, consumers may remain loyal to algorithm-based services because they are unwilling to take drastic action to protect their privacy. Facebook usage, for example, increased following the Cambridge Analytica scandal.
4. Certainty that losing privacy is a foregone conclusion: the last factor promoting the rationalization of algorithm use is the perception that losing privacy is inevitable, a necessity in today's technological world.
In summary, Fast and Jago propose that four rationalization factors explain people's willingness to accept algorithm-based services that reduce privacy. The authors suggest that future empirical research should consider these factors, as well as others, to better understand consumer preferences for privacy.
The published academic paper can be found here:
Fast, Nathanael J., and Arthur S. Jago (2020), "Privacy matters… Or does it? Algorithms, rationalization, and the erosion of concern for privacy," Current Opinion in Psychology, 31, 44–48.