
EmTech Stage: Twitter’s CTO on misinformation


In the second of two exclusive interviews, Technology Review’s Editor-in-Chief Gideon Lichfield sat down with Parag Agrawal, Twitter’s Chief Technology Officer, to discuss the rise of misinformation on the social media platform. Agrawal discusses some of the measures the company has taken to fight back, while admitting Twitter is trying to thread a needle: mitigating the harm caused by false content without becoming an arbiter of truth. This conversation is from the EmTech MIT virtual conference and has been edited for clarity.

For more coverage on this topic, check out this week’s episode of Deep Tech and our tech policy coverage.

Credits:

This episode from EmTech MIT was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.

Transcript:

Strong: Hey everybody, it’s Jennifer Strong, back with part two of our conversation about misinformation and social media. If Facebook is a meeting place where you go to find your community, and YouTube a concert hall or the backstage for something you’re a fan of, then Twitter is a bit like the public square where you go to find out what’s being said about something. But what responsibility do these platforms have as these conversations unfold? Twitter has said one of its “responsibilities is to ensure the public conversation is healthy.” What does that mean, and how do you measure it?

It’s a question we put to Twitter’s Chief Technology Officer Parag Agrawal. Here he is, in conversation with Tech Review’s editor-in-chief Gideon Lichfield. It was taped at our EmTech conference and has been edited for length and clarity.

Lichfield: A couple of years ago, you started talking about a project to develop metrics that would measure what a healthy public conversation is. I haven’t seen very much about it since then. So what’s going on with that? How do you measure this?

Agrawal: Two years ago, working with some folks at the MIT Media Lab and inspired by their thinking, we set out on a project to work with academics outside of the company, to see if we could define a few simple metrics or measurements to indicate the health of the public conversation. What we realized in working with experts from many places is that it’s very, very challenging to boil down the nuances and intricacies of what we consider a healthy public conversation into a few simple-to-understand, easy-to-measure metrics that you can put your faith in. And this conversation has informed a change in our approach.

What’s changed is whether or not we are prescriptive in trying to boil things down to a few numbers. What’s remained is the realization that we need to work with academic researchers outside of Twitter and share more of our data in an open-ended setting, where they’re able to use it to do research that advances various fields. There are a bunch of API-related products that we’ll be shipping in the coming months. One thing that directly led to that conversation: in April, as we saw COVID emerge, we created an endpoint for COVID-related conversation that academic researchers could have access to. We’ve seen researchers across 20 countries access it.

So in some sense, I’m glad that we set out on that journey. And I still hold out hope that this open-ended approach, and our collaboration with academics, will ultimately lead us to understand public conversation, and healthy public conversation, well enough to boil the measurement down to a few metrics. But I’m also excited about all the other avenues of research this approach opens up for us.

Lichfield: Do you have a sense of what an example of such a metric would look like?

Agrawal: So when we set out to talk about this, we hypothesized a few metrics: do people share a sense of reality? Do people have diverse perspectives, and can they be exposed to diverse perspectives? Is the conversation civil? Conceptually, these are all properties we desire in a healthy public conversation. The challenge lies in being able to measure them in a way that can evolve as the conversation evolves, in a way that is reliable and can stand the test of time, because the conversation two years ago was very different from the conversation today, and the challenges two years ago, as we understood them, are very different from today’s. That’s where some of the difficulty lies; our understanding of what a healthy public conversation means is still too emergent to boil down into these simple metrics.

Lichfield: Let’s talk a little bit about some of the things you’ve done over the last couple of years. There’s been a lot of attention, obviously, on the decisions to flag some of Donald Trump’s tweets, but I’m thinking of the more systematic work you’ve been doing against misinformation. Can you summarize the main points of what you’ve been doing?

Agrawal: Our approach isn’t to try to identify or flag all potential misinformation. It’s rooted in trying to avoid specific harm that misleading information can cause. We’ve focused on harm that can be done with misinformation around COVID-19, which has to do with public health, where a few people being misinformed can have implications for everyone. Similarly, we’ve focused on misinformation around what we call civic integrity, which is about people having the ability to know how to participate in elections.

An example, just to make this clear, around civic integrity: we care about and take action on content that might misinform people, say, a claim that you should vote on November 5th, when election day is November 3rd. And we do not try to determine what’s true or false when someone takes a policy position, or when someone says the sky is purple or blue, or red for that matter. Our approach to misinformation is also not one focused on taking content down as the only measure, which is the regime we all operated in for many years. It’s an increasingly nuanced approach with a range of interventions, where we think about whether or not certain content should be amplified without context, or whether it’s our responsibility to provide some context so that people can see a bunch of information, but also have the ability and ease to discover all the conversation and context around it, to inform themselves about what they choose to believe in.

Lichfield: How do you evaluate whether something is harmful without also trying to figure out whether it’s true? With COVID specifically, for example.

Agrawal: That’s a great question, and I think in some cases you rely on credible sources to provide that context. So you don’t always have to determine if something is true or false. Where there’s potential for harm, we choose not to flag something as true or false but to add a link to credible sources, or to additional conversation around that topic, to give people context around the piece of content so they can be better informed, even as the data, understanding, and knowledge are still evolving. And public conversation is critical to that evolution. We saw people learn through Twitter, because of the way they got informed. And experts have conversations through Twitter to advance the state of our understanding of this disease as well.

Lichfield: People have been warning about QAnon for years. You started taking down QAnon accounts in July. What took you so long? Why did you… what changed in your thinking?

Agrawal: The way we think about QAnon, or thought about QAnon, runs through a coordinated manipulation policy that we’ve had for a while. The way it works is that we work with civil society and human rights groups across the globe to understand which groups, which organizations, or what kind of activity rises to a level of harm that requires action from us. In hindsight, I wish we’d acted sooner, but once we understood the threat well, by working with these groups, we took action. Our actions involved decreasing amplification of this content and flagging it in a way that led to a very rapid decrease, of over 50%, in the reach QAnon and related content got on the platform. And since then, we’ve seen sustained decreases as a result of this move.

Lichfield: I’m getting quite a few questions from the audience, which are all asking the same thing. I’ll read them. Who gets to decide what is misinformation? Can you give a clear, clinical definition of misinformation? Does something have to have malicious intent to be misinformation? How do you know if your credible sources are truthful? What’s measuring the credibility of those sources? And someone is even saying, “I’ve seen misinformation in the so-called credible sources.” So how do you define that phrase?

Agrawal: I think that’s the existential question of our times. Defining misinformation is really, really hard. As we learn through time, our understanding of truth also evolves. We attempt not to adjudicate truth; we focus on potential for harm. And when we say we lean on credible sources, we also lean on all the conversation on the platform that talks about those credible sources and points out potential gaps, as a result of which the credible sources also evolve their thinking or what they talk about.

So, we focus way less on what’s true and what’s false. We focus way more on the potential for harm as a result of certain content being amplified on the platform without appropriate context. And context is oftentimes just additional conversation that provides a different point of view on a topic, so that people can see the breadth of the conversation on our platform and outside, and make their own determinations in a world where we’re all learning together.

Lichfield: Do you apply a different standard to things that come from world leaders? 

Agrawal: We do have a policy around content in the public interest; it’s in our policy framework. So yes, we do apply different standards. And this is based on the understanding that there’s certain content from elected officials that is important for the public to see and hear, and that content on Twitter is not only on Twitter: it is in newsrooms, it is in press conferences, but oftentimes the source content is on Twitter. The public interest policy exists to make sure that the source content is accessible. We do, however, flag very clearly for everyone when such content violates any of our policies. We take the bold move to flag it and label it, so that people have the appropriate context that this is indeed an example of a violation and can look at that content in light of that understanding.

Lichfield: If you take President Trump: a Cornell study measured that 38% of COVID misinformation mentions him, and called him the single largest driver of misinformation around COVID. You flagged some of his tweets, but there’s a lot he puts out that doesn’t quite rise to the strict definition of misinformation, and yet misleads people about the nature of the pandemic. So doesn’t this exception for public officials undermine the whole strategy?

Agrawal: Every public official has access to multiple ways of reaching people. Twitter is one of them; we exist in a large ecosystem. Our approach of labeling content actually allows us to flag content at the source when it might potentially harm people, and also to provide people additional context and additional conversation around it. A lot of these studies (and I’m not familiar with the one you cited) are actually broader than Twitter. And if they are about Twitter, they talk about reach and impressions without talking about people also being exposed to other bits of information around the topic. Now, we don’t get to decide what people choose to believe, but we do get to showcase content and a diversity of points of view on any topic, so that people can make their own determinations.

Lichfield: That sounds a little bit like you’re trying to say, well, it’s not just our fault. It’s everybody’s fault. And therefore there’s not much we can do about it.

Agrawal: I don’t believe I’m saying that. What I’m saying is that misinformation has always existed in society. We are now a critical part of the fabric of public conversation, and that’s our role in the world. These are not topics we get to extricate ourselves from. These are topics that will remain relevant today and in five years. I don’t live in the illusion that we can do something that magically makes the misleading-information problem go away. We don’t have that kind of power or control, and honestly, I would not want that power or control. But we do have the privilege of listening to people, of having a diverse set of people on our platform expressing a diverse set of points of view on the things that really matter to everyone, and of being able to showcase them with the right context, so that society can learn from each other and move forward.

Lichfield: When you talk about letting people see content and draw their own conclusions or come to their own opinions, that’s the kind of language associated with the way social media platforms traditionally presented themselves: ‘We’re just a neutral space, people come and use us, we don’t try to adjudicate.’ And it seems a little bit at odds with what you were saying earlier about wanting to promote a healthy public conversation, which clearly involves a lot of value judgments about what is healthy. So how are you reconciling those two?

Agrawal: Oh, I’m not saying that we are a neutral party in this whole conversation. As I said, we’re a critical part of the fabric of public conversation. And you wouldn’t want us adjudicating what’s true or what’s false in the world; honestly, we cannot do that globally, in all the countries we work in, across all the cultures and all the nuances that exist. We do, however, have the privilege of having everyone on the platform, and of being able to change things to give people more control and to steer the conversation in a way that is more receptive, allows more voices to be heard, and leaves all of us better informed.

Lichfield: One thing some observers say you could do that would make a big difference is abolish the trending topics feature, because that is where a lot of misinformation ends up getting surfaced: things like the QAnon hashtag “Save the Children,” or a conspiracy theory about Hillary Clinton staffers rigging the Iowa caucus. Sometimes things like that make their way into trending topics, and then they have a big influence. What do you think about that?

Agrawal: I don’t know if you saw it, but just this week we made a change to how trends and trending topics work on the platform. One of the things we did was to show context on everything that trends, so that people are better informed as they see what people are talking about.

Strong: We’re going to take a short break, but first I want to suggest another show I think you’ll like. Brave New Planet weighs the pros and cons of a wide range of powerful innovations in science and tech. Dr. Eric Lander, who directs the Broad Institute of MIT and Harvard, explores hard questions like:

Lander: Should we alter the Earth’s atmosphere to prevent climate change? And, can truth and democracy survive the impact of deepfakes? 

Strong: Brave New Planet is from Pushkin Industries. You can find it wherever you get your podcasts. We’ll be back right after this.

[Advertisement]

Strong: Welcome back to a special episode of In Machines We Trust. This is a conversation between Twitter’s Chief Technology Officer Parag Agrawal and Tech Review’s editor-in-chief Gideon Lichfield. If you want more on this topic, including our analysis, please check out the show notes or visit us at Technology Review dot com.

Lichfield: The election obviously is very close. And I think a lot of people are asking what is going to happen, particularly on election day, as reports start to come in from the polls. There’s worry that some politicians are going to be spreading rumors of violence or vote rigging or other problems, which in turn could spark demonstrations and violence. That’s something all of the social platforms are going to need to react to very quickly, in real time. What will you be doing?

Agrawal: We’ve worked through elections in many countries over the last four years, in India, Brazil, and other large democracies, and we’ve learned through each of them and done work over the years to be better prepared for what’s to come. Last year we made a policy change to ban all political advertising on Twitter, in anticipation of its potential to do harm. We wanted our attention to be focused not on advertising but on the public conversation that’s happening organically, to be able to protect it and improve it, especially as it relates to conversations around the elections.

We did a bunch of work on technology to get better at detecting and understanding state bad actors and their attempts to manipulate elections, and we’ve been very transparent about this. We’ve made public releases of hundreds of such operations from over 10 nations, with tens of thousands of accounts each and terabytes of data, which allow people outside the company to analyze them and understand the patterns of manipulation at play. And we’ve gone ahead with product changes to build more consideration and thoughtfulness into how people share and amplify content.

So, we’ve done a bunch of this work in preparation and through learnings along the way. To get to an answer about election night: we’ve strengthened our civic integrity policies so that no one, no candidate in any race, can claim an election when a winner has not been declared. We also have strict measures in place to avoid incitements of violence. And we have a team ready, working 24/7, to keep us in an agile state.

That being said, we’ve done a bunch of work to anticipate what could happen, but one thing we know for sure is that what actually happens won’t be something we’ve exactly anticipated. So what’s going to be important for us on that night and beyond, and even leading up to that time, is to be prepared and agile, to respond to the feedback we’re getting on the platform and to the conversation we’re seeing on and off platform, and to try to do our best to serve the public conversation in this important time in this country.

Lichfield: Someone in the audience asked something that I don’t think you would agree to. They asked: should Facebook and Twitter be shut down for three days before the election? But maybe a more modest version of that would be: is there some kind of content that you think should be shut down right before an election?

Agrawal: Just this week, one of the prominent changes worth talking about in some detail is that we made people apply more consideration, more thought, when they retweet. Instead of being able to easily retweet content without additional commentary, we now default people into adding a comment when they retweet. This is for two reasons: one, to add consideration when you retweet and amplify certain content; and two, to have content shared with more context about what you think of it, so that people understand why you’re sharing it and what the conversation around it is. We also made the trends change I described earlier. These are changes meant to make the conversation on Twitter more thoughtful.

That being said, Twitter is going to be a very, very powerful tool during the election for people to understand what’s happening and to get really important information. We have labels on all candidates. We have information on the platform about how people can vote. We have real-time feedback coming from people all over the country, telling people what’s happening on the ground. All of this is important information for everyone in this country to be aware of at that time. It’s a moment where each of us is looking for information, and our platform serves a particularly important role on that day.

Lichfield: You’re caught in a bit of a hard place, as somebody in the audience is also pointing out: you’re trying to combat misinformation, but you also want to protect free speech as a core value and, in the U.S., as the First Amendment. How do you balance those two?

Agrawal: Our role is not to be bound by the First Amendment; our role is to serve a healthy public conversation, and our moves reflect things that we believe lead to a healthier public conversation. We focus less on thinking about free speech and more on how the times have changed. One change we see today is that speech is easy on the internet: most people can speak. Where our role is particularly emphasized is in who can be heard. The scarce commodity today is attention. There’s a lot of content out there, a lot of tweets; not all of it gets attention, some subset of it does. So increasingly our role is moving toward how we recommend content, and that is a struggle we’re working through: making sure that the recommendation systems we’re building, and the way we direct people’s attention, lead to a healthy public conversation that is most participatory.

Lichfield: Well, we are out of time, but thank you for really interesting insight into how you think about these very complicated issues.

Agrawal: Thank you Gideon for having me.

[Music]

Strong: If you’d like to hear our newsroom’s analysis of this topic and the election… I’ve dropped a link in our show notes. I hope you’ll check it out. This episode from EmTech was produced by me and by Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. As always, thanks for listening. I’m Jennifer Strong.

[TR ID]



What about $30 billion under 30


Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast (now on Twitter!), where we unpack the numbers behind the headlines.

We’re not back with an Equity Shot or a Dive like Monday’s; this is just the regular show! So, we got back to our roots by looking at a huge number of early-stage rounds, and a few other things that we were just too excited about not to mention.

So from Chris and Danny and Natasha and me, here’s the rundown:

That was a lot, but how could we leave any of it out? We’re back Monday with more!

Equity drops every Monday at 7:00 a.m. PDT and Thursday afternoon as fast as we can get it out, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.


Henry picks up cash to be a Lambda School for Latin America


Latin America’s startup scene has attracted troves of venture investment, lifting highly-valued companies such as Rappi and NuBank into behemoth businesses. Now that the spotlight has arrived, those same startups need more talent than ever before to meet demand.

That’s where one seed-stage Buenos Aires startup wants to help. Henry has created an online computer science school that trains software developers from low-income backgrounds in technical skills and helps them get hired. The company was founded by the brother-sister duo Luz and Martin Borchardt, along with Manuel Barna Ferrés, Antonio Tralice and Leonardo Maglia.

The Henry team.

The company claims that there are an estimated 1 million software engineering job openings in Latin America, but fewer than 100,000 professionals with training suitable for those roles.

“Higher education is only for 13% of the population in Latin America,” says Martin Borchardt, CEO and co-founder of Henry. “It’s very exclusive, very expensive, and has very low impact skills. So we’re giving these people an opportunity.”

With 90% of graduates coming from no formal higher-education background, Henry seeks to help bring more junior back-end and full-stack developers into startups. Henry offers a five-month course that runs Monday to Friday, 9 a.m. to 6 p.m., focused on software development skills. Beyond technical training, Henry gives participants job coaching, resume workshops and up-skilling opportunities post-graduation.

To make the school more affordable, Henry takes the same approach used by Lambda School, a Y Combinator graduate that has raised over $122 million in known funding: income-share agreements. The setup allows boot camp participants to join the program at zero upfront cost and pay only once they get hired into a job.

Lambda School’s ISA terms ask students to pay 17% of their monthly salary for 24 months once they earn $4,167 a month, up to a maximum of $30,000. Henry takes a much smaller slice of the pie, partly because salaries are lower in Latin America than in the United States: it asks students to pay 15% of their monthly salary for 24 months once they earn $500 a month.

If a Henry student doesn’t get employed in a job that allows them to make $500 a month within five years after the program completes, they are off the hook for paying back the boot camp.
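To make the arithmetic concrete, here’s a minimal sketch of how such an income-share agreement plays out, assuming a flat post-hire salary; the function and the sample salaries are illustrative, not either school’s actual billing logic.

def isa_total(monthly_salary, share, months=24, floor=0.0, cap=None):
    """Total repaid under a simple ISA: `share` of each month's salary
    for `months` months, owed only while the salary meets the income
    floor, and never more than `cap` (if one is set)."""
    if monthly_salary < floor:
        return 0.0  # below the income floor, no payments are owed
    total = monthly_salary * share * months
    return total if cap is None else min(total, cap)

# Lambda School's published terms, with a hypothetical $5,000/month salary:
print(isa_total(5000, 0.17, floor=4167, cap=30000))  # 20400.0
# Henry's terms, with a hypothetical $800/month salary:
print(isa_total(800, 0.15, floor=500))               # 2880.0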

Henry is also focused on helping more women get into the field of software development. Internally, Henry’s remote team is 20% women, 64% men. The current students reflect the same breakdown.

One issue with coding boot camps is that while they might help a student go from unemployed to employed, the lack of a credential or degree might limit career mobility past that first job. For that reason, Henry has created a database of alumni resources, including up-skilling and re-skilling opportunities in the latest skills, which will be free of charge for graduates.

Henry needs to execute on job placement to be successful in its field. Currently, more than 80% of students in Henry’s first cohort have found jobs, but it’s too soon in the startup’s trajectory to get a stronger metric on that front. About four Henry graduates have been employed by the startup itself.

The need for more talent in emerging countries has not gone unnoticed. Microverse, also funded by Y Combinator, is similarly using income-sharing agreements to bring education to the masses in developing countries, including spaces in Latin America. Henry thinks the competitor is approaching the dynamic too broadly.

“They’re focusing on all emerging markets and don’t teach to Spanish speakers,” Borchardt said. Henry, alternatively, focuses on Spanish speakers, who make up over 60% of its market in Latin America.

What if Lambda School, the source of Henry’s inspiration, were to break into Latin America? The founder added that the richly funded company has tried, and failed, to expand into international geographies, including China and Europe, due to fragmentation.

Currently, Henry has graduated 200 students and is working with 600 students across Colombia, Chile, Uruguay and Argentina. It plans to expand into Mexico and to bring on Portuguese instruction.

Now, VCs are giving Henry some cash to do so. After going through Y Combinator’s Summer batch, Henry announced today that it has raised $1.5 million in seed funding in a round led by Accion Venture Lab, Emles Venture Partners and Noveus VC. A number of edtech angel investors from Latin America also participated in the round.

“I love the human interaction within instructors and our staff and students,” Borchardt said. “That is something very powerful of Henry compared to a MOOC. The biggest challenge is how do you scale maintaining those assets that bring you that?”


Fantasy startup Esports One raises $4M more


Esports One, a startup bringing the fantasy approach to esports, is announcing that it has raised an additional $4 million in funding.

When I first wrote about Esports One in April, co-founder and COO Sharon Winter described it as the first “all-in-one fantasy platform” in the esports world, allowing you to research players, create fantasy teams and watch games, with an initial focus on the North American and European divisions of League of Legends.

According to the Esports One team, creating this platform required building out a set of data and analytics products, as well as using computer vision technology that can track game activity (and update player stats) without relying on a publisher’s API.

The startup says its user base has been growing by more than 25% month over month. It may also have benefited from the pause in professional sports earlier this year. CEO and co-founder Matt Gunnin told me recently that he also sees fantasy as a way to make video games accessible to a broader audience; he recalled one Esports One user who introduced his sister to League of Legends using the fantasy platform.

“I use the example of growing up and sitting there with my dad, watching a baseball game, he’s telling me everything that’s happening,” Gunnin said. “Now it’s the opposite — parents are sitting and watching their kids.”

Many parents, he suggested, are “never going to pick up a mouse and keyboard and play League of Legends,” but they might play the fantasy version: “That’s an entry point … if we can make it easily accessible to individuals both that are hardcore gamers playing video games and watching League of Legends their entire life, as well as someone who has no idea what’s going on.”

The new funding was led by XSeed Capital, Eniac Ventures and Chestnut Street Ventures, bringing Esports One to a total of $7.3 million raised. The company also recently signed a partnership deal with lifestyle company ESL Gaming.

Gunnin said the money will allow the company to grow its Bytes virtual currency, which players use to enter contests and buy customizations; starting next year, players will be able to spend real money to purchase Bytes. In addition, it’s working on native iOS and Android apps (Esports One is currently accessible via desktop and mobile web).

Gunnin and his team also plan to develop fantasy competitions for Rainbow Six: Siege, Rocket League, Valorant and Fortnite.

“As a fairly new player in the esports world, we’ve seen immense determination and grit from Matt, Sharon, and the whole Esports One team to grow into a household name,” said XSeed’s Damon Cronkey in a statement. “I’m excited to be partnering with a company that will deliver new perspectives and features to an evolving industry. We’re eager to see how Esports One grows in 2021.”
