
SwatTalk: “Social Media and the 2024 Elections: The Threats to Democracy, Free Speech, and Online Safety”

with Yoel Roth '11

Recorded on Wednesday, Jan. 24, 2024

 

TRANSCRIPT

Daniela Kucz ’14 All right. Welcome everyone. A few administrative matters before we begin. Please note that this talk is being recorded and will be posted online on the SwatTalks page in two to three weeks. If you have questions for our speaker during the conversation, please use the Q&A feature to submit them, sharing your name and class year as well. We'll do our best to get through the questions, but please be patient. And that's it on the administrative side. Welcome. So my name is Daniela Kucz, I'm class of 2014 and a member of the Alumni Council, which organizes the SwatTalks initiative to spotlight alumni excelling in their fields. Tonight I'm super excited to welcome Yoel Roth, class of 2011, who's here tonight to share insights from his research and work around the state of trust in social media and other emerging technologies. Yoel has deep expertise in this topic from both the academic and corporate perspectives. He's currently a Knight Visiting Scholar at the University of Pennsylvania, a Technology Policy Fellow at UC Berkeley, and a non-resident scholar at the Carnegie Endowment for International Peace. His research and writing focus on social media, AI, and other emerging technologies. Previously, Yoel was the head of trust and safety at Twitter for more than seven years. He led the teams responsible for Twitter's content moderation, integrity, and platform security efforts. Before that, Yoel received his PhD from the Annenberg School for Communication at the University of Pennsylvania. Prior to his PhD, Yoel graduated from Swarthmore in 2011, where he participated in the honors program and studied political science and film and media studies. In particular, Yoel singles out Professor Carol Nackenoff's Constitutional Law class as especially influential on his thought and career, as I'm sure is true for many of you who've had the pleasure of taking that class as well. Thank you for joining us tonight, Yoel.

Yoel Roth ’11 Thank you so much for hosting.

Daniela Kucz ’14 So I want to start with your time at Twitter, now X. Some people may be familiar with a few of the bigger decisions you were involved with. For example, the decision to ban former President Trump, but a lot of us haven't heard about trust and safety jobs before. So tell us more about that. What actually was your job at Twitter?

Yoel Roth ’11 Yeah, so in a sentence, trust and safety is the job dedicated to managing the terrible, awful, horrible things that people do on the internet. How big of a job is it? So let's talk for a minute about scale. A lot of times tech companies like to say that harmful content is an infinitesimally small fraction of the stuff that's online, and that most of what's out there on Twitter or Facebook or Instagram is good, and that's true, but let's do the math. If you imagine hypothetically that harmful content is 1/1000 of a percent of the content that's out there, 0.001%, then at the scale Twitter was operating at, where we would see half a billion tweets every day, that works out to 5,000 harmful tweets per day, or 150,000 harmful tweets per month, which is a non-trivial slice of potentially dangerous stuff you have to deal with. And that's the job: even if the vast, overwhelming majority of content on the internet is totally fine and is never going to harm anyone, there's still a pretty significant amount of it that you need to think strategically about how you're going to manage. And that's the job of trust and safety.

So in my eight years at Twitter, I led a team of folks who were responsible for writing and maintaining Twitter's policies, so setting the rules of the service and then maintaining those rules over time as things change. I also ran teams that were responsible for preparing for global elections, for finding things like government-backed troll farms, and then also for thinking about how we could build Twitter's product in a safer and more resilient way, in ways that would be less prone to being misused by bad actors.

But I guess since this is a talk for Swarthmore folks, I feel like I have to bring a historical perspective to it and say trust and safety is a new name for something that has been around for really as long as people have been using computers to communicate with each other, which is governance and norm setting within communities. And I'll say a lot of the earliest internet technologies largely adopted a pretty libertarian perspective on these issues. They said, let's create an unencumbered space for free speech, and that will be that. And almost inevitably, and it seems like it should have been obvious from the start, bad actors showed up and you run headfirst into exactly those challenges of needing to manage harmful content. And so now in 2024, this all seems really obvious and everyday: we've been through cycles of Russian interference in American elections, we're familiar with or have experienced online abuse or harassment, we know about the ways that social media can impact wellbeing and mental health. But there was a time when companies hadn't yet really developed the capabilities to do this type of work, and in some cases didn't even think that it was going to be necessary. And so the field of trust and safety is really rooted in that recognition, in a recognition that even if you create a space where the vast majority of the content is benign and positive and legitimate, there is still a job to be done to manage the rest of it.

The last thing I'll say on this is a lot of times when people ask me about my work, the word censorship comes up, and it's become kind of an inescapable part of public discourse about content moderation these days.
But you know, in my own experience getting into this field, going back to Professor Nackenoff's Con Law class, I got into it precisely because I see immense value in free expression on the internet, but there are two sides to it. On one hand, there is creating a space where people can express themselves, the ability to speak freely. But then on the other side of it, there's the management of chilling effects: this idea that there can be a heckler's veto, where people can be intimidated or scared away from being comfortable expressing themselves. And so when we think about free speech and censorship in the context of social media, a lot of it is about finding that balance of creating enough space for people to participate and also making it so that as many people as possible feel comfortable participating in the space that you've created. And that push-pull, that tension, is really the day-to-day job of trust and safety work at a social media platform.
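As an editorial aside, the back-of-the-envelope arithmetic from the answer above can be written out as a short Python sketch. The half-billion-tweets-per-day volume and the 0.001% harmful-content rate are the illustrative figures Roth uses in the talk, not measured values.

```python
# Back-of-the-envelope: even a tiny rate of harmful content is a large
# absolute number at platform scale. Figures are the illustrative ones
# from the talk, not measured values.

tweets_per_day = 500_000_000   # "half a billion tweets every day"
harmful_rate = 0.001 / 100     # "1/1000 of a percent," i.e. 0.001%

harmful_per_day = tweets_per_day * harmful_rate
harmful_per_month = harmful_per_day * 30

print(f"Harmful tweets per day:   {harmful_per_day:,.0f}")    # 5,000
print(f"Harmful tweets per month: {harmful_per_month:,.0f}")  # 150,000
```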

Daniela Kucz ’14 Sounds like it would be a lot of work, knowing what I know about the internet generally, but obviously a lot has changed at Twitter since you left your job there in November 2022. For people who haven't followed every twist and turn of the journey of Twitter, what is your summary of what's happened there and why does it matter?

Yoel Roth ’11 Yeah, you know, a year later, when I reflect back on the story, the thing that really stands out to me is that it's a tale about the accumulation of capital. Twitter used to be a publicly traded company on the New York Stock Exchange that was worth somewhere in the ballpark of $40 billion. And that number went up and down, and there were shareholders and quarterly earnings reports, and sometimes they were good and sometimes they were bad, but Twitter really operated like a company. And like in any market, there was somebody on the other end of the market who had $40 billion sitting around and thought, "I'd like to buy this thing," the way that you or I might buy a pair of sneakers. And then they did. And so Elon Musk took an enormous accumulation of personal wealth and used it to purchase a thing, and the thing happened to be Twitter. But the weirdness of all of this, and part of what makes it so tricky, is that Twitter wasn't just a pair of sneakers and it wasn't just a tech company; it was this platform that we had imbued with a lot of political and social significance. We'll talk, I'm sure, about all of the ways that, you know, even though most people in the United States and globally didn't actually use Twitter, Twitter occupied this enormous mind share for people around the world and played an especially prominent role in political and cultural discourse. And so all of a sudden you have one person spending a massive amount of money to buy that thing. And so he did.

And then what happened? There were a couple of big changes. The biggest one was laying off a significant chunk of the company, upwards of 80% of the company's former staff by current counts. So that was a radical transformation. Also, in service of a vision of free speech, the company then rolled back a number of their policies, specifically the work that trust and safety had been doing. And I'll say that, you know, I have no illusions that our work was perfect or that content moderation on Twitter was beyond reproach. Certainly we made mistakes and there were things that I would do differently and that I would've changed, but it was really like setting fire to the thing rather than changing it. Core basic policies around things like targeted harassment and hate speech were summarily rolled back, and these things have pretty profound impacts.

Then there were a few other changes, right? So one of them, the one that has really gotten a lot of attention, is that anybody can now spend $8 to buy a coveted Twitter blue check mark. This was what's known as the verification badge, and it used to be given out to people who were in some way noteworthy, so maybe politicians or celebrities or commentators or experts, and it became a way of knowing that the Britney Spears you're interacting with on Twitter is actually that Britney Spears. And now all of a sudden if you swipe a credit card, you too could be that Britney Spears. And so it really fundamentally altered the dynamics of identity on Twitter as they had existed for the last 15 or so years. But then another thing happened. Around this period the company transitioned to being called X, and X introduced the ability for you to get monetized when your content gets attention. What that means is, if you post something on Twitter and Twitter runs ads against it, and you've paid your $8 dues, Twitter at least theoretically will give you a chunk of the ad revenue that they received. And this really fundamentally transformed Twitter.
This created a direct, transactional attention economy on Twitter. And so it created these massive new incentives for people to produce attention-getting content. It didn't have to be good, didn't have to be true, it just had to get as much attention as possible, because there's a direct dollar value connected to that attention. And then finally, the last big change that happened was that Twitter turned off the ability of anybody to really understand what was going on. Twitter used to be one of the most observable social media platforms for researchers. Data about what's happening on Twitter, what millions of people are talking about, was available in real time for free to anyone. And almost immediately upon buying the company, Elon Musk turned that off, and there are a number of reasons why they claim they did it, but the result is that it's now much harder to know how much hate speech there is on Twitter, how many Russian bots are on Twitter. The truth is we don't know. And all of that makes it harder to rationally and realistically assess the consequences of this really broad transition. What we can know is that the value of Twitter has gone down from something like $40 billion to now something like $12 billion. And there's a lesson in there as a trust and safety professional. For me, the lesson is: if you don't want the value of your company to go down by $30 billion, don't fire your trust and safety team. But we're seeing that this immensely culturally relevant thing that is Twitter has all of a sudden receded in cultural and political and certainly financial relevance. And that's a very significant change. We're also starting to see the very beginnings of an exodus from Twitter, and it hasn't necessarily happened at scale. Lots of people still use Twitter; journalists can't seem to leave Twitter, they can't quit it. But we have seen people start to migrate to alternative platforms. We'll talk about that a little bit later on, but we're starting to see services like Threads and Mastodon and Bluesky emerge to challenge Twitter. And I suspect over the course of the next couple of years, we're going to continue to see some of those dynamics evolve and play out over time.
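To illustrate why paying for attention changes incentives, here is a purely hypothetical payout calculation in Python. The ad rate and revenue-share fraction below are invented placeholders, not X's actual terms; the only point is that the payout scales directly with impressions, independent of whether a post is true or constructive.

```python
# Hypothetical illustration of an attention-to-dollars pipeline.
# None of these rates are X's real numbers; they only show that payout
# scales with impressions, regardless of whether a post is accurate.

def estimated_payout(ad_impressions: int,
                     rpm_usd: float = 2.00,      # hypothetical ad revenue per 1,000 impressions
                     creator_share: float = 0.5  # hypothetical share passed to the poster
                     ) -> float:
    """Return an estimated creator payout in USD for a given impression count."""
    return (ad_impressions / 1000) * rpm_usd * creator_share

# A post seen 10 million times earns 100x what a post seen 100,000 times earns:
# attention itself is monetized, which is the incentive described above.
print(estimated_payout(100_000))     # 100.0
print(estimated_payout(10_000_000))  # 10000.0
```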

Daniela Kucz ’14 Thank you. Yeah, I actually would love to dive more into alternatives to Twitter. David left a great question about what frameworks you advise for a future credible social conversation environment, and mentions Threads and Post, I think, which I'm less familiar with. But I would love to hear your take on the landscape and state of social media today given the decline you alluded to. Are there different demographic groups that are engaging with these different platforms? Who do we know we can trust? All these big questions, now that there's, I would say, more of a fractured environment in front of us in social media.

Yoel Roth ’11 Yeah, so the moment that we're in right now I think has two big trends, and they're pulling in somewhat opposing directions. The first one is one that I'm really optimistic about. We're seeing, for the first time in really 15 years, new developments in the social media space. We're seeing new entrants come into the marketplace to try to challenge the role formerly occupied by Twitter. And so we've seen Mastodon, we've seen Bluesky, we've seen Post, we've seen Threads, and there are a number of others. And whether they succeed or fail in some ways is beside the point. For a while, nobody even tried, nobody challenged Twitter. And so I think intrinsically that competition is a good thing. I think it's gonna prompt certain types of innovation and exploration about what good platform design is, what effective governance is. And I think we're going to see transformation in the social media space really in a way that we haven't seen it since Twitter first rose to prominence almost a decade ago. And so that's one direction. The interesting thing about a lot of these services that are emerging is that they're built in a very different way than how social media has traditionally worked. Traditionally, a social media site was a centralized entity. You would sign up for Instagram, you post your content on Instagram, Instagram moderates your content, they manage it, it's an end-to-end experience. That all happens within the walled garden of a single product run by a single company. What we're starting to see now are decentralized services. Bluesky is one example of a service that's built to not just be one company and one platform, but actually a lot of smaller interoperable platforms that can connect with each other. So imagine this as being the difference between every person living in a big city versus people living in smaller towns that are connected by a series of highways. And that unlocks a lot of cool and interesting new ideas. You know, Facebook's mission statement for many years has been to connect every person on the planet. And that sounds really nice as a marketing slogan, but it's not obvious to me that humans can exist in a single society that contains, you know, three-odd billion people the way Facebook does. And governance as a result is really hard, because those 3 billion people all have different standards and expectations. They disagree with each other fundamentally. And so what if we didn't have to fit 3 billion people into the same community? What if there could be lots of different communities and they're all connected to each other? And so that's tremendously exciting to me. I think that's a real opportunity for structural change in how the internet works in a way that we haven't seen in a very, very long time.

But the other trend is a pullback towards centralization, right? And this is where Threads comes into the picture. Threads is a Twitter-like product built by Instagram, which is owned by the parent company Meta, which also owns Facebook, and it looks and feels a lot like Twitter. And what it's built on top of is all of the infrastructure, including all of the moderation capabilities, of Instagram. And when you sign up for Threads, if you've used Instagram, which billions of people do, you can just drop right in and all of your Instagram contacts are already there waiting for you, and there's a giant network of people already on the service ready and waiting to go.
And that creates powerful network effects that pull people into a centralized platform like Threads. Now there are some complications there that we can get into in the Q&A, but in a way this is a very different vision for the future of the internet. On one hand we have this proliferation of new competitors trying new ideas. On the other hand we have Threads, which is an extension of Facebook. It's the dominant version of the internet, the internet that most people have lived on for the last 15 years. And here it is now, and it looks like Twitter. You can probably tell where my biases on this are, but those are two very different competing visions about where social media is going to go. So how is that impacting who uses what? What have we seen so far? The answer is it kind of depends who you are and what you care about. Research has consistently shown that Twitter was never used by most people, right? Research from the Pew Research Center has kind of consistently demonstrated that it was really only a subset of highly political and news-engaged Americans who were using Twitter obsessively. And so Twitter's impact was never about everyone being on it. It was about the influence that it had on journalists, politicians, and celebrities. That was Twitter's impact. And those folks have not been able to leave Twitter, right? If you look at who is still sustaining conversations on Twitter, it tends to be a lot of the commentators, a lot of the political and news elites who were the addicts of the service to begin with. They might move elsewhere. We've seen some shift to Bluesky, some to Threads, but Twitter's dominance here still exists despite all of the changes I was talking about. But for everyone else, we're seeing that, you know, either they don't use a text-based social network at all, and they kind of don't care. Instagram is big, TikTok is big, and in some ways this whole Twitter drama has not really mattered to a lot of people. But other communities are migrating elsewhere. We've seen in particular that LGBTQ folks, who under new policies at Twitter are particularly subjected to harm and harassment, have left and have found refuge on platforms like Mastodon and Bluesky. And so the answer of where everyone has gone depends a lot on your personal vantage point.
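To make the centralized-versus-federated contrast above concrete, here is a minimal, deliberately simplified Python sketch of federation in the spirit of Mastodon or Bluesky: each server is a smaller community with its own moderation rules, yet posts can still flow between servers that choose to connect. The class, field, and rule names are invented for illustration and do not correspond to any real protocol.

```python
# Minimal sketch of federated social media: many small servers, each with its
# own rules, exchanging posts with the peers they choose to connect to.
# Entirely illustrative; not a real protocol implementation.

from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    banned_terms: set[str]                 # each community sets its own rules
    peers: list["Server"] = field(default_factory=list)
    timeline: list[str] = field(default_factory=list)

    def allows(self, post: str) -> bool:
        return not any(term in post.lower() for term in self.banned_terms)

    def publish(self, post: str) -> None:
        # Local moderation first, then federate to connected peers,
        # each of which applies its own policy before accepting.
        if self.allows(post):
            self.timeline.append(post)
            for peer in self.peers:
                if peer.allows(post):
                    peer.timeline.append(post)


# Two towns connected by a highway, instead of one giant city.
art_town = Server("art.town", banned_terms={"spam"})
news_town = Server("news.town", banned_terms={"spam", "gore"})
art_town.peers.append(news_town)

art_town.publish("New mural downtown!")  # accepted locally and by news.town
art_town.publish("gore compilation")     # accepted by art.town, rejected by news.town
print(news_town.timeline)                # ['New mural downtown!']
```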

Daniela Kucz ’14 Thank you. Super fascinating. One thing I would like to dive into, and this will probably continue to evolve as the landscape changes, is whether there's a risk that this splintering of platforms will cause further divides among communities that use social media. I'm sure it's already happening, but I think, you know, with the upcoming election probably on a lot of people's minds with the primaries beginning, there seem to be a lot of pockets of communities that don't interact with one another on social media already. Will this continue getting worse if we have multiple different platforms to engage on?

Yoel Roth ’11 Yeah, you know, there's a lot in this that in some ways predates and has nothing to do with the changes at Twitter. People tend to self-segregate. We've known this, I learned this in sociology classes at Swarthmore, like we've known this about the way people behave in cities and in towns, and it's true on the internet as well. People tend to gravitate to spaces that align with their identity and that map to what their peer group is. And that's been a dynamic on social media since the days that MySpace was the dominant platform, and then Facebook came out, and as a platform targeted at generally wealthy college students, all of a sudden Facebook was the prestige platform and MySpace was the one for the weird artsy kids, and there was a migration and we all know which platform won. And so these kinds of self-segregation dynamics have always been an element of it. But we're starting to see those dynamics accelerate. And I would note that the divisions now are less about things like socioeconomic status or where you went to school, and tend to be more correlated with being a member of an extremist group. We've seen that, as mainstream social media platforms adopt pretty common-sense moderation strategies, like you can't be a terrorist and white supremacy isn't okay, people who advance those views have migrated to fringe platforms. Those dynamics started ahead of the 2020 election, but we saw that, for example, the January 6th attacks were not organized mostly on Facebook or on Twitter or on Instagram or on Reddit. They were organized on fringe platforms. And some of those dynamics have continued to take place. And I worry about that. I worry about it first because increasingly we as a society and as a democracy are not having a single conversation. We're in divergent realities without sort of shared facts. And I think that's sort of an existential risk to the ability to exist in a functional democracy. But as somebody who works in trust and safety, I also really worry about it from a content moderation standpoint, right? If you believe that violent extremism is a problem, you want the internet to be well moderated, and as we've seen people migrate to fringe platforms, they're not moderating. And so there's now this sort of growing ability of people to exist outside the sphere of sensible content moderation. And unfortunately, I think that that can lead to violence. I'd also note, you know, to temper my optimism about some of these federated social media platforms a little bit, that some of these new entrants, Bluesky, Mastodon, Post, you name it, are struggling to build out the trust and safety capabilities that have existed at centralized platforms for years. Facebook isn't perfect, I criticize them a lot, but most of the time they're pretty okay when it comes to things like removing child sexual abuse media or taking down terrorist accounts. If you're a new social media platform and you're building everything from the ground up, you inherently are going to lack some of those capabilities. And we've seen that these upstart platforms have struggled. They've struggled even with basic things like preventing child sexual abuse. These are going to be the stakes for an increasingly splintered internet. And unfortunately I see it getting worse before it gets better.

Daniela Kucz ’14 Thank you. Yeah, it seems like there's a lot of built-up knowledge that the newer platforms probably don't have access to. And on that note, there have been several questions about the intersections of novel technologies such as AI and LLMs and how they can be used to help ensure trust and safety. One really interesting question from Lisa asks specifically about balancing the benefits of preventing human reviewer burnout against the costs of incorrect decisions from automation. My understanding from some people I know who have worked in the space is that it can be a lot of really harrowing content. For example, you just mentioned child sexual abuse. So what is your take on applying those novel technologies, and have you done it in the past? Where do you think that is going in the future?

Yoel Roth ’11 Yeah, so there's a lot of pieces to this, and I'd also note I saw a couple of questions come in about the Biden deepfake robocall, and so some of the malign uses of AI. And so I think there's a lot to pick apart here, and there's also a lot that we don't know. So I'll start with what we do and don't know about AI. On the positive side, the potential of AI to make content moderation better, we don't really know if it's gonna be effective. We have some initial anecdotes that in some situations AI can moderate as well as a human moderator, but we don't have a ton of data about how it performs outside of English or in cultural contexts other than the United States. We also don't yet have a ton of data about how language models and generative AI are able to wrestle with cultural nuance, the difference between how I talk to my friends and the way that I would talk to a perfect stranger. And that's exactly the nuance that makes content moderation so difficult to do, but you have to do it. And it's not yet clear that AI implementations in their current state are quite fit for purpose around that. And so I think there are a lot of opportunities here for research to understand it better, but we don't yet have a clear perspective on how all of this is going to play out. Then there are some of the consequences of AI on public conversations, and this is an area where I think there are some reasons to be really concerned and some reasons maybe not to be that concerned. On the reasons to be really concerned: I think the Biden deepfake is troubling, right? The ability of AI to credibly impersonate the voice of somebody else is one of the scarier developments in AI. And I think we've seen even now, early on, some of the beginnings of what this technology will look like. It's noteworthy that these things are still getting detected quickly, debunked quickly, responded to quickly, but there's the possibility that they'll become more and more convincing. But the even scarier thing is what happens when we actually can't trust anything? This is a concept known as the Liar's Dividend: the idea that when an information environment erodes so profoundly that what you're left with is questioning everything, there's no ability to know whether something is the truth or a lie. And when I think about AI's impact on the 2024 elections in the United States, that's the one that keeps me up at night. Even if nobody ever makes another deepfake ever again, we'll be wondering whether things are deepfakes. I think back to 2016 and the Access Hollywood tape and wonder, what would it be like if that happened now? Would it just be dismissed as a hoax and a forgery? And if so, would the coverage and the response and public discourse, including how voters react, be the same degree of outrage that happened in 2016? Or would it be this kind of apathy, this feeling that, well, we don't know and we can't tell and it's all kind of hard, and so you sort of disengage from it. And I worry a lot about that apathy being the future of what our information environment looks like.

Daniela Kucz ’14 Thank you. I would like to move on to some potential solutions to this. Will asked a great question about regulation and whether you think that could be a potential solution. And I'd be curious, beyond that, what are social media companies doing today to help guard against things like deepfakes, et cetera? Is there anything you can do? And beyond what they're already doing, what are some other solutions they could take in the future to help us feel more trust and less apathy?

Yoel Roth ’11 Yeah, you know, I hate to lead with bad news. Like I really hope this can be inspiring, but the bad news is that platforms are less prepared for elections this year than they have been for any election since 2016. And there are a few reasons that happened. The first is layoffs, right? We talked about Twitter, but they're not the only ones, and I'm not here to pick on my former employer. Every major tech company has laid off staff, and has laid off trust and safety employees. YouTube did it, Facebook did it, TikTok did it. And so we're in a space where there are fewer people doing this work. We're also in a space where the work the companies are doing is dialed back. And that's not just about the amount they're doing, it's also about the effectiveness of what they're doing. Really since 2020, we've seen a sustained pressure campaign on platforms to moderate less, the emergence of censorship discourse and whether or not it's true that social media companies engage in unjustifiable censorship. And we can have that conversation. The accusations of censorship and pressure from Congress and from pundits have created an incentive structure where moderating is not really in platforms' interest anymore. And so in place of some of their aggressive interventions against foreign troll farms and misinformation, we're now seeing platforms offer sort of watered-down interventions. I'll give an example from Facebook. Facebook has said to regulators and to the public, we label misinformation, but they don't fact-check politicians. And that's a strategic decision, that's a way to avoid certain types of pressure. In 2020, they still didn't fact-check politicians, but people were yelling at them about labeling misinformation. And so they did, they put labels on misinformation, but what did the labels say? The labels said, visit the Facebook Election Information Center to learn more about the 2020 election. They weren't debunking misinformation, they weren't even calling it out as misinformation at all. They were generic election-related labels. And so you have a tech company that can say, we have an election strategy, we're intervening against misinformation, while not really doing anything at all. And what worries me about that is that when we look at regulations, especially in Europe, with the Digital Services Act emerging as a way to try to keep companies in check, we're seeing companies turn this into a box-checking exercise. It's like, yep, we have some election labels, guess we did that, we're done now. And what we're not really getting are testable, provable, effective interventions that mitigate harm. And a lot of that comes down to platforms lacking the incentive to do so. So what can we do? The bottom line is not a whole lot, but we do have some tools at our disposal. And the biggest one is advocacy. Platforms are extremely sensitive to public outrage. This has been a property of the major tech platforms for years, and it continues to be a property of them. They really don't like it when people yell at them. And I think that gives us as users and as members of the public the ability to demand change and accountability. The most effective way to do this is by enlisting advertisers. Most of the big tech platforms are largely reliant on revenue coming from Fortune 100, 500, and 1000 companies. And those companies in turn work with big ad agencies.
And if you as a consumer were to object to an ad for a Procter & Gamble product running next to misinformation or hate speech, then maybe Procter & Gamble would tell Facebook to change their policies, or they'll pull their ads. That's a somewhat attenuated system of advocacy, but it's worked. In 2018 and '19, we saw campaigns like Stop Hate for Profit meaningfully influence the policies and practices that social media companies implemented. I was able to get additional funding for my team because there was this external pressure for Twitter to do more. And so, you know, in a way, this is a lesson in using the tools of capitalism to try to drive the outcomes we want. We should understand social media firms as businesses, and as businesses, they have to be responsive to their fiduciary obligations. And that means that we can use those financial interests as a way to drive changes in their behavior, if that's what we wanna see.

Daniela Kucz ’14 Thank you. Very interesting. The next question I would like to move to, which perhaps you alluded to a little bit in your analysis of social media companies as corporations, is about the decision-making in your own path. And this is a little bit of a different topic, but after receiving your PhD, and coming from Swarthmore as well, I'm sure you were very embedded in a more academic approach to social media and thinking about this, and I'm sure that you were torn between staying in that landscape and joining the private sector. What inspired you to go to Twitter over academia, and how is that time at Twitter influencing your experiences back in academia now?

Yoel Roth ’11 Yeah, I mean, I hope this isn't too cheesy to say, but I really love the internet. I say sometimes, and people always call me on it, that I think the internet is potentially humanity's greatest achievement. Lots of people then say, no, what about penicillin? And it's like, yeah, penicillin's great too, but I really am a fan of the internet, and that love of the internet really ties into the experiences I've had over the course of my life. As a teenager figuring out my identity and my sexuality, I was able to find community online in a way that I wasn't able to at home. And that helped me learn about myself, about the histories of what it means to be a gay man in the world, and really to connect with folks in a way that fundamentally shaped what I've gone on to do with the rest of my life. And so what I've always wanted to do, the theory of change I have, is, you know, how can I use my time on this planet to help as many other people have those same transformational experiences with the internet that I did? Right now it feels like a privilege to have that; how do we make it as accessible to everyone as we conceivably can? And I've gone through a few different ideas about how to do that, right? When I was at Swarthmore, it felt like the default path was to get a PhD. And so I did, and my idea was that by doing research and writing articles and publishing them in peer-reviewed journals, I would be able to shape the social media industry by criticizing it, that I would be able to push these companies to do better. And so as a graduate student, I was looking at sort of the first wave of online dating apps back in the early days, and I got connected to a number of their software developers and their executives, and they were just bozos. And so I was like, you know, maybe I can make this better. Like I'll write an article and I'll be like, okay, here's how to be a little bit less bad at your job. And they never read my articles. They didn't care. And that really triggered a bit of disillusionment and kind of a crisis for me, 'cause I was doing all of the academic stuff the right way. I was writing and publishing and all this, and I was writing very applied stuff that was telling a specific app, go do this thing and it will be better. They didn't care. And so one summer, I was disillusioned in Philadelphia and I thought, what if I apply for an internship in Silicon Valley? And I was an avid Twitter user and I went on their website and they had an internship listing that said trust and safety intern, graduate student. And I was like, I'm a graduate student and I like Twitter, and trust and safety sounds cool. And so I applied for it and got the job and went to Twitter. And what I experienced there was the opportunity to kind of roll up your sleeves and get your hands dirty, actually working on content governance. My first week on the job, they didn't really know what to do with me. I was this weird PhD student studying content moderation. And they were like, work on whatever you want, like you're a PhD. And I was like, no, I don't have the PhD yet. I'm a student. And they were like, figure it out. So I did, and one of the first things that they gave me to do was moderating videos posted on Vine, which was sort of a predecessor to TikTok where you could post short videos. And the very first video that came up in the queue of videos to moderate was a dog being set on fire.
And the videos loop, so it's six seconds over and over and over, and there was a button under it and the button said ban, and the video looped a few times and then I clicked ban, and then the next video came up, and you repeat this process over and over and over and over again. And that was a moment where it really started to sink in that the work of governance and of shaping the internet and of making these decisions is at once incredibly consequential, right? I wanted to make sure nobody else had to see that video of a dog being set on fire. But on the other hand it felt incredibly random. Nobody taught me what to do. I looked at it and I was like, yeah, this dog video definitely should not be on the internet, but who was I? I was a random graduate student who showed up in San Francisco and I got to make that decision and a bunch of others. And that seemed to me like an opportunity. And so I worked at Twitter the rest of the summer. I helped drive some research-oriented changes in the company's work and roadmap, and that convinced me that there was an opportunity to transform some of these institutions from within. And that's what led me back to the private sector. But I would say, I think something that I have always loved about Swarthmore and that drew me to the college in the first place is how impact-oriented people are. People truly want to make the world a better place. And I think sometimes we can be pulled away from the corporate world by that impulse of saying, we don't wanna be tainted by the thing that is corporate America by being in it. And I would say, yes, and there are also possibilities to transform institutions from within. And what I saw at Twitter was that it was imperfect. It was a complicated, messy company with lots of flaws and lots of issues, but I got to spend eight years incrementally making them better. And in the process, did I protect everyone on the internet? No, but I protected more people than would've been protected otherwise. And that progress felt really meaningful to me.
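The review workflow described above, a queue of flagged items, one decision per item, then on to the next, is essentially the core loop of moderation tooling. Below is a heavily simplified, hypothetical Python sketch of that loop; the function names and labels are invented and are not Twitter's or Vine's actual systems.

```python
# Heavily simplified sketch of a human-review moderation queue:
# pull the next flagged item, record a decision, move on.
# Invented names; not any platform's actual tooling.

from collections import deque

def review_queue(flagged_items: deque, decide) -> list[tuple[str, str]]:
    """Run each flagged item past a decision function and log the outcomes."""
    decisions = []
    while flagged_items:
        item = flagged_items.popleft()
        action = decide(item)          # e.g. "ban" or "allow"
        decisions.append((item, action))
    return decisions

# A stand-in for the human (or policy) judgment call on each item.
def decide(item: str) -> str:
    return "ban" if "animal cruelty" in item else "allow"

queue = deque(["video: animal cruelty", "video: skateboarding dog"])
print(review_queue(queue, decide))
# [('video: animal cruelty', 'ban'), ('video: skateboarding dog', 'allow')]
```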

Daniela Kucz ’14 Thank you for sharing that. I know that one of the topics you're interested in is the intersection of academia and social media, which is unsurprising given your passion for both. But I know that there have been a lot of attacks on academics and the liberal arts on social media in particular. Tell us more about the kinds of things you're seeing and why these are a concern.

Yoel Roth ’11 Yeah, so for many years, there has been kind of a dynamic community of people who are studying social media and really helping drive accountability for large platforms. It's not a big group of people, but it's an influential and an important group who have focused on things like finding foreign governments meddling in elections and understanding the spread of misinformation online. And those researchers increasingly are under sustained attack. And so before I talk about what that attack looks like, let me outline the theory of the case, right? People who are engaging in sort of stochastic violence against random researchers don't think they're evil, or at least I don't think they think they're evil. They think they're doing something good, and so here's what they think they're doing. Basically the idea is that the federal government, namely the Biden administration, but you know, whatever, wants social media companies to censor things, right? That's some latent viewpoint or desire that the executive branch has, but the First Amendment precludes them from doing that. I'll pause here and note there was a question I saw come in about Section 230 and whether I would support repealing that. I actually think the robust protection of the independence of tech companies is a good thing. It's a complicated thing and it has limits, but I think some of the independence from government that is afforded by Section 230 is an incredibly important way for companies to defend free speech and organizing, especially against the government. The government can sometimes be the threat actor, and we don't see this as much in the United States, but it is true globally that oftentimes social media is the one outlet that can really be used to criticize and organize against the government. And so dispositionally, I worry about eroding that First Amendment independence of social media companies. So let's go back to the case, right? The idea here is the First Amendment says the executive branch cannot do this. And so they were looking for a workaround. And so parts of the federal government, like the Department of Homeland Security, turned to these academics, they turned to folks at Stanford and the University of Washington, and they said, you have these relationships with tech platforms, what if we tell you stuff that we don't like? And then you'll tell them and then they'll take it down. And the idea is that during the 2020 election, that's what happened. The Department of Homeland Security would decide to censor people and they would tap Stanford researchers on the shoulder and say, hey, this isn't okay. And Stanford would tell my team at Twitter and we'd be like, of course, we will delete this at once. And the problem with that case is it didn't actually happen, right? This is the narrative advanced in things like the Twitter Files, and it just so happens to not be true. And if you look back at all of the available evidence about these interactions, what you see is that platforms frequently disagreed with the government and pushed back on it. And I was frequently the one doing that at Twitter. And also most of these recommendations and questions and comments were coming into independent evaluation processes at tech companies. It was not a direct hotline where Twitter was immediately deleting things at Stanford's behest, but the truth shouldn't get in the way of a good story.
And so now we have this pervasive viewpoint that there is a censorship industrial complex, a pipeline running from the White House and the executive branch to academia, to social media platforms. And even though there's not really much evidence that that's how it worked, there have been tremendous attacks on academic researchers working in this space. And they've taken a few forms. The most overt form is intimidation by legislators and regulators. This includes being subpoenaed to testify in front of Congress, which I can say from personal experience is not a super fun thing to do. And we've also then seen it turn into things like lawsuits and frivolous Freedom of Information Act requests, things that can be done under the law, but that can become your full-time job. If you're a professor running a research project and you get a FOIA request for every email that contains the word election, you can't just turn over those emails, because it would potentially compromise the privacy of your students. You have to redact them, you have to look at every single one, and all of a sudden you're not doing research, you're not teaching, you're just dealing with FOIA requests all day. It's like a denial-of-service attack targeting academic research. And then finally there are the threats. A lot of researchers have been bombarded with death threats, personal invective, and vitriol that's meant to intimidate them and scare them and threaten their families. And I would note it's not just academics who are facing this; it's people who volunteer to work as poll workers, it's secretaries of state, it's people who are engaged in the machinery of a working democracy. And by going after them in this very personal, very violent way, it's creating chilling effects. It's creating an environment where people are too scared to participate. And especially for academic researchers, it's making them wonder, is this worth doing at all? And we can say, looking at American elections and democracy, that yes, this research is incredibly important, but these are people too. And so if you have the choice of doing research about elections and facing death threats and subpoenas, or you can go and study something else or, you know, work at a hedge fund, why wouldn't you do the hedge fund thing? That actually just seems much better. Choosing the alternative path requires immense courage, and more than that, it requires that of institutions as well as people. And we're seeing that American universities are not always really up to the challenge of withstanding this pressure. And again, that's one of the dynamics that really concerns me, not just for 2024, but as an ongoing concern that the researchers we really need supporting these pro-democracy efforts increasingly are going to go and do something else. And that pipeline drying up seems incredibly worrying to me.

Daniela Kucz ’14 Thank you. We've gotten a few questions that I wanna address before we have to wrap up, around what actually happened day to day with moderation. I think you talked a little bit about your own experience, and based on what you said, it seems that there are probably different approaches to it now based on the platform. But what happens when you're moderating content and, say, you come across something that warrants, for example, a police alert? What are the processes there, beyond just keeping the community safe, if there is a tangible risk of harm to another person? What is the responsibility of a platform? And yeah, I would love to hear more about the actual day-to-day, please.

Yoel Roth ’11 Yeah, these are some of the hardest decisions you have to make, because as a platform you have very, very little context about what's going on. All you can look at, as my boss used to say, is within the four walls of the tweet, and that's all you get, and you have to make these really enormously consequential decisions just based on that limited amount of information in front of you. And sometimes that's a question of, do we call the police so that somebody can go and intervene, and when is that appropriate? And there are risks, there are trade-offs on every side of this issue. It's not obvious, and there's lots of research that supports this, that the right thing to do if a teenager talks about being depressed is to have the cops show up at their home. Teenagers frequently post incredibly outrageous, outlandish things on the internet, and certainly a focus on youth mental health is an essential part of what we need to be doing online. But sometimes there's a call that, you know, platforms should be emailing their parents, platforms should be telling schools so they can intervene. And maybe that's the right answer in some situations, but sometimes it's not. Sometimes the internet is the only outlet that a kid has because their parents don't accept them or don't understand them, or their parents would perhaps be violent if they found out that they're gay. And you have no knowledge of any of those dynamics. There are also incredibly risky privacy implications to this, right? If I as an employee at Twitter look at a piece of content and say, it looks like this person might do something violent, the decision to turn over their personal information, which I, as Twitter, am a custodian of, to the police, is enormously consequential. It's not happening because a judge reviewed it and made this decision in accordance with a set of laws. It's me, a guy at a desk, deciding it. And so a lot of bigger tech companies make policies and internal guidelines about how to manage these decisions. They have boards of people who review decisions before they turn over user data, but it's a trade-off, right? Do you risk revealing too much and being overly interventionist, or do you risk not intervening in a moment where you could have prevented harm and protected somebody? There's never a right answer. It's moment to moment, case by case, trying to balance these competing factors and land, hopefully, in the right spot.

Daniela Kucz ’14 Very tough. And yeah, thank you, thank you for doing that work on behalf of everyone who uses social media, because I think you were definitely making it safer for us. I know we only have a few more minutes, so I would love to hear any closing thoughts you might have about the upcoming election. I know we've talked about that a little bit already, but would love to know if you have anything additional to say on that note, or any other trends we should watch out for in 2024 around social media.

Yoel Roth ’11 Yeah, you know, I have been asked a lot, especially after October 7th, what are strategies for staying sane online? There is more than ever, more than at any point in time, an enormous amount of content that we can engage with. And it can be informative and it can be constructive, and it can also be traumatizing and scary and depressing, and it can be misleading. And it's a deeply personal decision how to engage with it. What do you want to see, what don't you want to see? What is constructive for you to see? And I think sometimes we can push ourselves to feel like being engaged with social media, being informed all the time, being on the bleeding edge of this information, is our responsibility as participants in a democracy. And I'm less convinced that that's true. I think each of us should be making these decisions based on not just a feeling of obligation or responsibility, but a feeling of care of the self, of saying, what do I need to do for myself and my wellbeing and my community right now? And is this engagement constructive? A lot of social media platforms are engineered and architected to maximize engagement. They're products that are built to get you to engage, to post, to reply, to get in an argument. And sometimes we can't help but engage with social media in those ways. And I'll say this: I still, despite its flaws, love social media. I think it's net-net a good thing for humanity. But what we can do to push back on those tendencies is resist the impulse to engage, to resist the impulse to reshare, to retweet, to use the angry emoji on Facebook, to get in an argument with somebody. If it feels like the right thing, great, but if it doesn't and you feel a sense of obligation, responsibility, or that you're just doing it automatically, pause for a minute and question, is this constructive? Am I changing the world in the positive way that I want to by doing this? Or am I just clicking the angry emoji on Facebook because I feel like I need to do it to dunk on the other side of the argument? And with that heuristic, I think we can all, at the ground level as users of social media, start to move into a slightly more constructive space.

Daniela Kucz ’14 Fantastic. This has been such an interesting conversation. I really appreciate you Yoel, for taking the time to speak with us this evening. And I know everyone else, as I'm seeing in the Q&A, in comments has appreciated your time as well. So thank you.

Yoel Roth ’11 Thank you for the really thoughtful questions and moderation and you know, thanks to the Alumni Council for the invitation.

Daniela Kucz ’14 Awesome. All right, have a great night everyone.

Yoel Roth ’11 See you later.
