“Do Billboard Advertisements Increase Voter Turnout? A Large-Scale Field Experiment” (with Donald Green, Lionel Ong, and Aaron Schein). Quarterly Journal of Political Science 19, no. 3.
This paper reports results from a nationwide experiment conducted during the 2020 general election in the United States. A total of 298 billboards were randomly assigned to treatment or control in 155 geographic clusters. We estimate the impact of billboards on voter turnout, measuring exposure both by geographic distance from treated billboards and by modeled exposure based on cell phone data. Across this variety of estimation approaches, we obtain point estimates that are close to zero, with hints of stronger effects among those who reside near treated billboards. On the whole, it appears that signage does little to raise turnout in high-salience elections.
What: A survey experiment with 4,000+ participants that evaluates whether traditional search (Google) and AI assistants (Perplexity) help individuals distinguish true from false/misleading news.
Why: Individuals increasingly rely on AI chatbots for answers to questions, yet we know little about their utility, especially when reliable information is scarce.
Findings: Both traditional and AI search tools tend to increase belief in misinformation, but AI does so at less than half the magnitude of Google's effect. Search strategies, not unreliable links, are what matter.
What's cool: Our custom browser extension allows us to see everything that participants search for during the experiment.
With Zeve Sanderson, Jonathan Nagler, and Josh Tucker
What: A descriptive analysis of transcripts for all 2024 episodes from the top 300 podcast channels in the US, using LLMs to identify topics and speech patterns.
Why: Podcasts have become an increasingly prominent medium, with political content and guests appearing across genres. Which groups are incidentally exposed to ideological content? Are masculine (feminine) interests correlated with conservative (liberal) ideology?
Findings: Ideology is correlated with gendered interests and characteristics, with the most masculine podcasts featuring a high rate of conservative content. Very little feminine-conservative content exists within podcasts.
What's cool: We have an ongoing collection and transcription of podcast content, with over 30,000 episodes transcribed and counting.
With Melina Much, Josh Tucker, and Jonathan Nagler
What: A 5-month field experiment with over 5,000 participants who are treated with one of 6 experimental ranking algorithms, implemented across Facebook, X, and Reddit.
Why: Engagement-based algorithms dictate what we see on social media and have been shown to be imperfect and distortionary. Several alternative systems have been proposed, but not tested. We set out to conduct that test.
Findings: Some of the suggested "prosocial" alternatives to engagement-based ranking can improve political and wellbeing outcomes, but potentially at the cost of on-platform engagement.
What's cool: We are able to seamlessly adjust each individual's on-platform ranking through our browser extension, increasing the internal validity of our treatment.
With a LOT of amazing people
What: A literature review of research on social media, using LLMs to carefully examine which platform affordances are actually being studied, and which remain largely unexamined.
Why: For almost every research question related to social media, there are top papers which present contradictory results. But are we even measuring the same things? And what outcomes are we missing?
Findings: Many of the activities that are often suspected to be most impactful are also the easiest to observe. Data access, not societal importance, typically drives research design as well as platform and case selection.
What's cool: We construct a typology that distinguishes the three key actors in the social media system (producers, intermediaries/platforms, and consumers) and the on-platform actions each can take.
With Tamar Mitts
What: A vast, ongoing collection of TikTok data, focused on understanding how political content is produced and consumed in the US.
Why: TikTok has become one of the most popular platforms globally, and its recommendation-based algorithm has reshaped virtually all other social media platforms.
Findings: Political content production is highly concentrated relative to other topics. It represents about 3-5% of content produced on the platform. We find little evidence of algorithmic manipulation.
What's cool: The data collection is immense and has enabled several other research projects related to TikTok.
With Ben Guinaudeau, Jonathan Nagler, Sol Messing, and Josh Tucker
What: A 4-month field experiment, implemented across Facebook, X, and Reddit, which experimentally removes "toxic" content from social media feeds.
Why: Online toxicity has often been cited as a reason for many of the adverse impacts of social media. But can we actually disentangle the impact of specific content from social media use in general?
Findings: Removing toxic content actually increases indicators of anxiety and depression, and decreases platform use. Several political outcomes are virtually unaffected.
What's cool: Our browser extension closely monitors platform engagement, allowing us to directly observe our treatment's impact on engagement and time online.
With George Beknazar-Yusbachev, Mateusz Stalinski, Jonathan Stray, Julia Kamin, and Ceren Budak
The social media ecosystem centers on a three-way interaction among content creators, content consumers, and complex platform systems. The majority of prior research has focused on platform decisions or conditions and their effects on consumers. I argue that a more holistic framework is necessary to correctly understand the content environment and accurately diagnose its impacts.
Methodologically, I employ a variety of experiments, informed by novel formal modeling and simulations. I aim primarily to establish the importance of content production dynamics: nothing happens on social media until someone posts, yet we know little about why the small subset of users who create content choose to spend the effort, and what steers their decision-making. To complement this, I also show how a platform's relationship with producers parallels and impacts its consumer-facing relationship.
Beyond its substantive contributions, my dissertation introduces new ideas for lightweight, low-cost experimentation on social media. Affordable and ecologically valid approaches to studying social media and its effects are vital as we move into an era of decreased data access across platforms.
Content creators on social media compete against each other for algorithmic recommendations. Whether they produce for profit or leisure, increased exposure and engagement positively affect their utility. Despite the abundance of both content and consumers, the number of slots on each individual's feed is ultimately finite, and ranking algorithms use sophisticated engagement predictions to fill them. In this paper, I present an analytical framework for understanding how consumer interests and algorithmic sorting influence the types of content produced on social media platforms. Building on a Downsian framework, I model two producers who adjust the content they create in order to maximize their reach, given the production point of their competitor. Unlike in typical Downsian models, social media engagement can arise both when preferences are very close to content and when they are very far from it, what I term concordant and discordant engagement, respectively. I show that polarization of content production can occur given a sufficient prevalence of discordant engagement, even without polarization in the population or in producer preferences. I support this finding with observational data and interviews with content creators, including media staffers for Members of Congress.
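To make the mechanism concrete, here is a minimal best-response simulation in the spirit of the model described above. The functional forms, the uniform audience, and the discordant-engagement weight `w` are illustrative assumptions of mine, not the paper's actual specification: consumers engage concordantly (more when content is near their ideal point) and discordantly (more when it is far), the feed shows each consumer whichever of the two producers yields higher predicted engagement, and producers iteratively best-respond to each other.

```python
# Illustrative sketch (a toy parameterization, not the paper's model):
# two producers best-respond on a position grid; consumer engagement mixes
# concordant (closeness-based) and discordant (distance-based) components.
import numpy as np

consumers = np.linspace(-1, 1, 401)   # unimodal, non-polarized audience
positions = np.linspace(-1, 1, 81)    # candidate production points

def engagement(x, p, w):
    """Engagement of consumer(s) x with content at position p."""
    d = np.abs(x - p)
    return (1 - w) * np.exp(-d) + w * d   # w = weight on discordant engagement

def payoff(p_self, p_rival, w):
    """Total engagement captured when each feed slot goes to the producer
    with the higher predicted engagement (ties split evenly)."""
    e_self = engagement(consumers, p_self, w)
    e_rival = engagement(consumers, p_rival, w)
    share = np.where(e_self > e_rival, 1.0, np.where(e_self == e_rival, 0.5, 0.0))
    return float(np.sum(e_self * share))

def best_response_dynamics(w, iters=50):
    p1 = p2 = 0.0
    for _ in range(iters):
        p1 = positions[np.argmax([payoff(p, p2, w) for p in positions])]
        p2 = positions[np.argmax([payoff(p, p1, w) for p in positions])]
    return float(p1), float(p2)

for w in (0.1, 0.7):
    print(f"w = {w}: producer positions {best_response_dynamics(w)}")
```

With mostly concordant engagement (low `w`), the producers converge toward the median consumer, as in the classic Downsian result; with enough discordant engagement (high `w`), best responses push them to opposite extremes even though the audience itself is unimodal.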
The content on social media platforms is created almost entirely by users. While much research has focused on how social media content affects the users who consume it, it is equally important to understand how platforms, and experiences on them, affect content producers. Prior research suggests that engagement and platform monetization impact the frequency and type of content that users create. However, this research has typically avoided political content and forums. In this paper, I investigate how a costly engagement signal, Reddit Awards, affects the production of both comments and original posts, in both political and non-political subreddits. I find that Reddit Awards appear to incentivize increased commenting among new users but do little to move veteran Redditors. I find weak evidence for a relationship in the opposite direction among individuals who post rather than comment. These results suggest that engagement can affect certain types of original content production, including political content. However, posting and commenting are different behaviors that appear to have distinct relationships with engagement and user tenure.
In the early phases of social media, content was a reverse-chronological stream of posts made by selected online connections. Now, all major social media platforms are moving increasingly toward algorithmically curated content, drawing from a wider pool and sorting to maximize user engagement and retention. While the selected tweets or videos may seem like they fell out of a coconut tree, each item is strategically selected based on a user's activity and that of all users who came before them. In this paper, I present results from two experiments on TikTok. In the first, I conduct an algorithmic audit to show how engagement signals can alter initial recommendations. I find that effect sizes are conditional on the topic of interest, noting that engagement with political content appears to trigger a relatively high rate of related recommendations. In the second experiment, I examine how initial engagement signals persist over time. In the lab, I observe algorithmic behavior over 40 minutes of browsing by treatment-blind users. I find that political recommendations persist for treated accounts, even after significant browsing time. I also present preliminary results for algorithmic effects on user attitudes and experiences.
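As a rough illustration of how persistence of this kind can be quantified, the sketch below computes the share of political recommendations within successive time windows of a browsing session and compares treated and control accounts. The log format (`condition`, `minutes_elapsed`, `is_political`) and the file name are a hypothetical schema of my own, not the study's actual data structure.

```python
# Hypothetical analysis sketch: share of political recommendations per
# 10-minute window of a 40-minute session, by treatment condition.
# The column names and CSV file are illustrative assumptions, not real data.
import pandas as pd

logs = pd.read_csv("session_logs.csv")  # one row per recommended video

# Bin each recommendation into a 10-minute window of the session.
logs["window"] = (logs["minutes_elapsed"] // 10).astype(int)

persistence = (
    logs.groupby(["condition", "window"])["is_political"]
        .mean()                     # share of political videos in the window
        .unstack("condition")       # columns: control vs. treated
)
print(persistence)

# A persistent treatment effect would show the treated column remaining
# elevated relative to control even in the later windows.
```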