Academic Writing
Preprints
2025
- Anthropic Economic Index report: Uneven geographic and enterprise AI adoption. Ruth E. Appel, Peter McCrory, Alex Tamkin, Miles McCain, Tyler Neylon, and 1 more author. 2025
In this report, we document patterns of Claude usage over time, in 150+ countries, across US states, and among businesses deploying Claude through the API. Based on a privacy-preserving analysis of 1 million conversations on Claude.ai and 1 million API transcripts, we report four key findings: (1) Users increasingly entrust Claude with more autonomy, with directive task delegation rising from 27% to 39% in the past eight months; (2) Claude usage is geographically concentrated, with high-income countries overrepresented in global usage relative to their working-age population; (3) Local economic considerations shape patterns of use, both in terms of topic and mode of collaboration with Claude; (4) API customers use Claude to automate tasks, with greater specialization among use cases most amenable to programmatic access. To enable researchers and policymakers to further study the impact of AI on the economy, we additionally open-source the underlying data for this report.
Journal Articles
2025
- Psychological inoculation improves resilience to and reduces willingness to share vaccine misinformation. Ruth E. Appel, Jon Roozenbeek, Rebecca Rayburn-Reeves, Jonathan Corbin, Melisa Basol, and 2 more authors. Scientific Reports, 2025
Vaccine misinformation endangers public health by contributing to reduced vaccine uptake. We developed a short online game to reduce people’s susceptibility to vaccine misinformation. Building on inoculation theory, the Bad Vaxx game exposes people to weakened doses of manipulation techniques commonly used in vaccine misinformation and to strategies for identifying these techniques. Across three preregistered randomized controlled trials (N=2,326), we find that the game significantly improves participants’ ability to discern vaccine misinformation from non-misinformation, their confidence in their ability to do so, and the quality of their sharing decisions. Further, taking the perspective of a character fighting misinformation, as opposed to one spreading it, is more effective on some outcome measures. In line with the learning goals of the intervention, we show that participants improve their ability to correctly identify the use of specific misinformation techniques. This insight is important because teaching manipulation technique recognition is not only effective in helping people evaluate information about vaccines, but also more viable than trying to debunk the myriad constantly evolving myths. Our findings suggest that a short, low-cost, gamified intervention can increase resilience to vaccine misinformation.
- Bridging the empathy perception gap fosters social connection. Rui Pei, Samantha J. Grayson, Ruth E. Appel, Serena Soh, Sydney Garcia, and 4 more authors. Nature Human Behaviour, 2025
Young adults face a rising tide of mental illness and loneliness. We propose that an overlooked barrier for social connection is how people perceive each other’s empathy. Here, our longitudinal study of an undergraduate student community (N = 5,192) reveals that undergraduates who perceive their peers as empathic report better current and future well-being. Yet we document an ‘empathy perception gap’: people systematically see others as less empathic than others see themselves. Students who perceived their peers as less empathic were less willing to take social risks and grew more isolated over time. To disrupt this cycle, we conducted two field experiments that presented students with data on their peers’ self-reported empathy and behavioural nudges to encourage social risk taking. These interventions reduced the empathy perception gap, increased social behaviours and expanded social networks months later. This work offers a promising, scalable strategy to cultivate social well-being, simply by presenting people with data about each other.
2023
- Partisan conflict over content moderation is more than disagreement about facts. Ruth E. Appel, Jennifer Pan, and Margaret E. Roberts. Science Advances, 2023
Social media companies have come under increasing pressure to remove misinformation from their platforms, but partisan disagreements over what should be removed have stymied efforts to deal with misinformation in the United States. Current explanations for these disagreements center on the “fact gap”—differences in perceptions about what is misinformation. We argue that partisan differences could also be due to “party promotion”—a desire to leave misinformation online that promotes one’s own party—or a “preference gap”—differences in internalized preferences about whether misinformation should be removed. Through an experiment where respondents are shown false headlines aligned with their own or the opposing party, we find some evidence of party promotion among Democrats and strong evidence of a preference gap between Democrats and Republicans. Even when Republicans agree that content is false, they are half as likely as Democrats to say that the content should be removed and more than twice as likely to consider removal as censorship.
2020
- Privacy in the age of psychological targeting. Sandra C. Matz, Ruth E. Appel, and Michal Kosinski. Current Opinion in Psychology, 2020
Psychological targeting describes the practice of extracting people’s psychological profiles from their digital footprints (e.g., their Facebook Likes, Tweets or credit card records) in order to influence their attitudes, emotions or behaviors through psychologically-informed interventions at scale. We discuss how the increasingly blurred lines between public and private information, and the continuation of the outdated practices of notice and consent, challenge traditional conceptualizations of privacy in the context of psychological targeting. Drawing on the theory of contextual integrity, we argue that it is time to rethink privacy and move beyond the questions of who collects what data to how the data are being used. Finally, we suggest that regulations of psychological targeting should be accompanied by a mindset that fosters (1) privacy by design to make it easy for individuals to act in line with their privacy goals, as well as (2) disclosure by choice in which individuals can freely decide whether and when they might be willing to forsake their privacy for better service.
Conference Papers
2025
- Generative AI Regulation Can Learn from Social Media Regulation. Ruth E. Appel. In Proceedings of the 42nd International Conference on Machine Learning, 2025. Position Track (oral)
There is strong agreement that generative AI should be regulated, but strong disagreement on how to approach regulation. While some argue that AI regulation should mostly rely on extensions of existing laws, others argue that entirely new laws and regulations are needed to ensure that generative AI benefits society. In this paper, I argue that the debates on generative AI regulation can be informed by the debates and evidence on social media regulation. For example, AI companies have faced allegations of political bias regarding the images and text their models produce, similar to the allegations social media companies have faced regarding content ranking on their platforms. First, I compare and contrast the affordances of generative AI and social media to highlight their similarities and differences. Then, I discuss specific policy recommendations based on the evolution of social media and their regulation. These recommendations include investments in: efforts to counter bias and perceptions thereof (e.g., via transparency, researcher access, oversight boards, democratic input, research studies), specific areas of regulatory concern (e.g., youth wellbeing, election integrity) and trust and safety, computational social science research, and a more global perspective. Applying lessons learnt from social media regulation to generative AI regulation can save effort and time, and prevent avoidable mistakes.
- Political Neutrality in AI is Impossible — But Here Is How to Approximate It. Jillian Fisher, Ruth E. Appel, Chan Young Park, Yujin Potter, Liwei Jiang, and 7 more authors. In Proceedings of the 42nd International Conference on Machine Learning, 2025. Position Track (oral)
AI systems often exhibit political bias, influencing users’ opinions and decision-making. While political neutrality—defined as the absence of bias—is often seen as an ideal solution for fairness and safety, this position paper argues that true political neutrality is neither feasible nor universally desirable due to its subjective nature and the biases inherent in AI training data, algorithms, and user interactions. However, inspired by Joseph Raz’s philosophical insight that "neutrality [...] can be a matter of degree" (Raz, 1986), we argue that striving for some neutrality remains essential for promoting balanced AI interactions and mitigating user manipulation. Therefore, we use the term "approximation" of political neutrality to shift the focus from unattainable absolutes to achievable, practical proxies. We propose eight techniques for approximating neutrality across three levels of conceptualizing AI, examining their trade-offs and implementation strategies. In addition, we explore two concrete applications of these approximations to illustrate their practicality. Finally, we assess our framework on current large language models (LLMs) at the output level, providing a demonstration of how it can be evaluated. This work seeks to advance nuanced discussions of political neutrality in AI and promote the development of responsible, aligned language models.
- In-House Evaluation Is Not Enough. Towards Robust Third-Party Evaluation and Flaw Disclosure for General-Purpose AI. Shayne Longpre, Kevin Klyman, Ruth E. Appel, Sayash Kapoor, Rishi Bommasani, and 29 more authors. In Proceedings of the 42nd International Conference on Machine Learning, 2025. Position Track (spotlight)
The widespread deployment of general-purpose AI (GPAI) systems introduces significant new risks. Yet the infrastructure, practices, and norms for reporting flaws in GPAI systems remain seriously underdeveloped, lagging far behind more established fields like software security. Based on a collaboration between experts from the fields of software security, machine learning, law, social science, and policy, we identify key gaps in the evaluation and reporting of flaws in GPAI systems. We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers in order to ease the process of submitting, reproducing, and triaging flaws in GPAI systems. Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs, borrowing from bug bounties, with legal safe harbors to protect researchers. Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports across the many stakeholders who may be impacted. These interventions are increasingly urgent, as evidenced by the prevalence of jailbreaks and other flaws that can transfer across different providers’ GPAI systems. By promoting robust reporting and coordination in the AI ecosystem, these proposals could significantly improve the safety, security, and accountability of GPAI systems.
Book Chapters
2022
- Privacy and ethics in the age of Big Data. Sandra C. Matz, Ruth E. Appel, and Brian Croll. In The psychology of technology: Social science research in the age of Big Data, 2022
This chapter discusses the new ethical challenges introduced by the age of Big Data. Although there are many other ethical challenges related to technology and data (e.g., addiction, inequality), it reviews the topic of privacy as one of the major challenges associated with Big Data. The chapter introduces the concept of privacy, briefly discussing its history, universality, and the core assumptions that lie at the heart of privacy protections. It then moves on to the questions of how Big Data threatens our privacy in unprecedented ways and challenges current approaches to privacy protection. Next, the chapter discusses how placing the burden of privacy protection on users alone is misguided and provides a number of potential systemic solutions related to regulation, collaboration, design principles, and technological tools. It concludes with concrete practical guidelines for researchers and practitioners on how to design studies, products, and services that protect individuals’ privacy.
2021
- Psychological targeting in the age of Big Data. Ruth E. Appel and Sandra C. Matz. In Measuring and Modeling Persons and Situations, 2021
Advances in the collection, storage, and processing of large amounts of user data have given rise to psychological targeting, which we define as the process of extracting individuals’ psychological characteristics from their digital footprints in order to target them with psychologically-informed interventions at scale. In this chapter, we introduce a two-stage framework of psychological targeting consisting of (1) psychological profiling and (2) psychologically-informed interventions. We summarize the most important research findings in relation to the two stages and discuss important methodological opportunities and pitfalls. To help researchers make the most of the opportunities, we also provide practical advice on how to deal with some of the potential pitfalls. Finally, we highlight ethical opportunities and challenges and offer some suggestions for addressing these challenges. If done right, psychological targeting has the potential to advance our scientific understanding of human nature and to enhance the well-being of individuals and society at large.
Under Review
2025
- How deceptive online networks reached millions in the US 2020 elections. Ruth E. Appel, Young Mie Kim, Jennifer Pan, Yiqing Xu, Daniel R. Thomas, and 27 more authors. 2025
Deceptive online networks are coordinated efforts that use identity deception to pursue strategic political or financial goals. During the US 2020 elections, these networks reached at least 37 million Facebook and 3 million Instagram users, representing 15% and 2% of the platforms’ active US adult users, respectively. Only 3 of the 49 networks (1 with explicitly political aims and 2 that appeared to use politics as a lure for profit) were responsible for over 70% of users reached. Notably, accounts unaffiliated with the networks played an important role in facilitating this reach by resharing content the three networks produced. Deceptive networks, regardless of whether their goals were political or financial, reached users who were older, more conservative, more frequently exposed to content from untrustworthy sources, and who spent more time on Facebook.
Work in Progress
2025
- How partisanship affects preferences for content moderation in large language models. Ruth E. Appel, Jennifer Pan, and Margaret E. Roberts. 2025
- Combating misinformation on social media. Ruth E. Appel, Susan Athey, Dean Karlan, Kristine Koutout, Michael Luca, and 3 more authors. 2025