EmTech Stage: Facebook’s CTO on misinformation

Misinformation and social media have become inseparable from one another; as platforms like Twitter and Facebook have grown to globe-spanning size, so too has the threat posed by the spread of false content. In the midst of a volatile election season in the US and a raging global pandemic, the power of information to alter opinions and save lives (or endanger them) is on full display. In the first of two exclusive interviews with two of the tech world’s most powerful people, Technology Review’s Editor-in-Chief Gideon Lichfield sits down with Facebook CTO Mike Schroepfer to talk about the challenges of combating false and harmful content on an online platform used by billions around the world. This conversation is from the EmTech MIT virtual conference and has been edited for length and clarity.

For more coverage of this topic, check out this week’s episode of Deep Tech and our tech policy coverage.

Credits:

This episode from EmTech was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.

Transcript:

Strong: Hey everybody, it’s Jennifer Strong. Last week I promised to pick out something to play for you from EmTech, our newsroom’s big annual conference. So here it is. With the US election just days away, we’re going to dive straight into one of the most contentious topics in the world of tech and beyond – misinformation.

Now a lot of this starts on conspiracy websites, but it’s on social media that it gets amplified and spread. These companies are taking increasingly bold measures to ban certain kinds of fake news and extremist groups, and they’re using technology to filter out misinformation before humans can see it. They claim to be getting better and better at that, and one day they say they’ll be able to make the internet safe again for everyone. But, can they really do that? 

In the next two episodes we’re going to meet the chief technology officers of Facebook and Twitter. They’ve both taken VERY different approaches when it comes to misinformation, in part because a lot of what happens on Facebook is in private groups, which makes it a harder problem to tackle. Whereas on Twitter, most everything happens in public. So, first up – Facebook. Here’s Gideon Lichfield, the editor in chief of Tech Review. He’s on the virtual mainstage of EmTech for a session that asks, ‘Can AI clean up the internet?’ This conversation’s been edited for length and clarity.

Lichfield: I am going to turn to our first speaker, who is Mike Schroepfer, known generally to all his colleagues as Schrep. He is the CTO of Facebook. He’s worked at Facebook since 2008, when it was a lot smaller, and he became CTO in 2013. Last year The New York Times wrote a big profile of him, which is a very interesting read. It was titled ‘Facebook’s AI whiz is now facing the task of cleaning it up. Sometimes that leads him to tears.’ Schrep, welcome. Thank you for joining us at EmTech.

Schroepfer: Hey Gideon, thanks. Happy to be here.

Lichfield: Facebook has made some pretty aggressive moves, particularly in just the last few months. You’ve taken action against QAnon, you’ve banned Holocaust denial and anti-vaccination ads. But people have been warning about QAnon for years, people have been warning about anti-vaccination misinformation for years. So, why did it take you so long? What changed in your thinking to make you take this action?

Schroepfer: Yeah, I mean, the world is changing all the time. There’s a lot of recent data, you know, on the rise of antisemitic beliefs or lack of understanding about the Holocaust. QAnon, you know, has moved into more of a threat of violence in recent years. And the idea that there would be threats of violence around a US election is a new thing. And so, particularly around critical events for society, like an election, we’re doing everything we can to make sure that people feel safe and secure and informed to make the decision they get to make to elect who is in government. And so we’re taking more aggressive measures.

Lichfield: You said something just now, you said there was a lot of data. And that resonates with something Alex Stamos, the former chief security officer of Facebook, said in a podcast recently: that at Facebook decisions are really taken on the basis of data. So is it that you needed to have overwhelming data, evidence that, you know, Holocaust denial is causing harm or QAnon is causing harm, before you take action against it?

Schroepfer: What I’d say is this. We operate a service that’s used by billions of people around the world, and so a mistake I don’t wanna make is to assume that I understand what other people need, what other people want, or what’s happening. And so, a way to avoid that is to rely on expertise where we have it. So, you know, for example, for dangerous organizations, we have many people with backgrounds in counterterrorism, people who went to West Point, we have many people with law enforcement backgrounds. When you talk about voting interference, we have experts with backgrounds in voting and rights.

And so you listen to experts, uh, and you look at data, and you try to understand that topic rather than, you know, you don’t want me making these decisions. You want sort of the experts and you want the data to do it. And because it’s not just, you know, this issue here, it’s issues of privacy, it’s issues in different locales, and so I would say that we try to be rigorous in using sort of expertise and data where we can, so we’re not making assumptions about what’s happening in the world or what we think people need.

Lichfield: Well, let’s talk a bit more about QAnon specifically, because the approach that you take, obviously, to handling this information is that you try to train your AIs to recognize stuff that is harmful. And the difficulty with this approach is that the nature of misinformation keeps changing; it’s context specific, right? Take misinformation about Muslims in Myanmar, which sparked riots there. You don’t know that that is misinformation until it starts appearing. The issue, it seems to me, with QAnon is that it’s not like ISIS or something; its beliefs keep changing, the accounts keep changing. So, how do you tackle something that is as ill defined as a threat like that?

Schroepfer: Well, you know, I will talk about this in terms of what, from a technical perspective, is one of the hardest challenges I’ve been very focused on in the last few years, because it has similar problems in terms of subtlety, coded language and adversarial behavior, which is hate speech. There’s overt hate speech, which is very obvious, and you can use sort of phrases you’ve banked, or keywords. But people adapt and they use coded language, and they do it, you know, on a daily, weekly basis. And you can even do this with memes, where you have a picture and then you overlay some words on top of it, and it completely changes the meaning. ‘You smell great today’ over a picture of a skunk is a very different thing than over, you know, a flower, and you have to put it all together.

And so, um, similarly, as you say, with QAnon there can be subtlety and things like that. This is why I’ve been so focused on, you know, a couple of key AI technologies. One is we’ve dramatically increased the power of these classifiers to understand and deal with nuanced information. You know, five or ten years ago, sort of keywords were probably the best we could do. Now we’re at the point where our classifiers are catching errors in the labeling data, or catching errors that human reviewers sometimes make, because they are powerful enough to catch subtlety in topics like: is this a post that’s inciting violence against a voter, or are they just expressing displeasure with voting or this population? Those are two very… unfortunately, it’s a fine line when you look at how careful people try to be about coding the language to sort of get around it.

And so you see similar things with QAnon and others. And so we’ve got classifiers now that, you know, are state-of-the-art, work in multiple languages and are really impressive in what they’ve done, through techniques that we can go into, like self-supervision, um, to look at, you know, billions of pieces of data to train. And then the other thing we’ve got is we use a similar technique that allows us to do, you know, the best way to describe it is sort of fuzzy matching. Which is: a human reviewer spends the time and says, you know what, I think these are pieces of misinformation, or this is a QAnon group, even though it’s coded in different languages. What we can then do is sort of fan out and find things that are semantically similar: not the exact words, not keywords, not regexes, um, but things that are very close in an embedding space, that are semantically similar. And then we can take action on them.

And this allows what I call quick reaction. So, even if I had no idea what this thing was yesterday, today, if a bunch of human reviewers find it, we can then go amplify their work sort of across the network and apply that proactively anytime new pieces of information appear. Just to put this in context, you know, in Q2 we took down 7 million pieces of COVID misinformation. Obviously, in Q4 of last year there was no such thing as COVID misinformation. So we had to sort of build new classifier techniques to do this. And the thing I’ve challenged the team on is getting our classifier build time down from what used to be many, many months to, you know, sometimes weeks, to days, to minutes. First time I see an example, or first time I read a new policy, I want to be able to build a classifier that’s functional at, you know, billion-user scale. And, you know, we’re not there yet, but we’re making rapid progress.
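[Editor’s note: To make the “fuzzy matching” idea concrete, here is a minimal, hypothetical sketch of embedding-based similarity matching: embed posts that human reviewers have already flagged, embed new posts into the same vector space, and flag anything whose nearest flagged neighbor exceeds a similarity threshold. The open-source model, the threshold, and the example posts are illustrative assumptions, not Facebook’s actual system or data.]

```python
# Illustrative sketch of embedding-based "fuzzy matching": flag new posts that
# are semantically close to examples a human reviewer has already labeled.
# The model name, threshold, and example texts are arbitrary choices for this
# sketch, not Facebook's production setup.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

# Posts a human reviewer has already identified as violating policy.
reviewed_violations = [
    "Example of coded language a reviewer flagged",
    "Another reviewed post that broke the rules",
]

# New, unseen posts flowing through the system.
incoming_posts = [
    "A reworded version of the flagged coded language",
    "An unrelated post about gardening",
]

# Embed both sets into the same vector space.
violation_vecs = model.encode(reviewed_violations)
incoming_vecs = model.encode(incoming_posts)

# For each incoming post, score it against every reviewed violation.
similarity = cosine_similarity(incoming_vecs, violation_vecs)
THRESHOLD = 0.8  # arbitrary cutoff for this sketch

for post, scores in zip(incoming_posts, similarity):
    if scores.max() >= THRESHOLD:
        print(f"FLAG for review (score {scores.max():.2f}): {post}")
    else:
        print(f"OK: {post}")
```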

Lichfield: Well, so I think the question is, how rapid is the progress, right? That 7 million pieces of misinformation statistic: I saw that quoted by a Facebook spokesperson in response to a study that came out from Avaaz in August. It looked at COVID misinformation and found that the top 10 websites spreading misinformation had four times as many estimated views on Facebook as equivalent content from the websites of 10 leading health institutions, like the WHO. It also found that only 16% of all the health misinformation it analyzed had a warning label from Facebook. So in other words, you’re obviously doing a lot, you’re doing a lot more than you were, and you’re still, by that count, way behind the curve. And this is a crisis that is killing people. So how long is it going to take you to get there, do you think?

Schroepfer: Yeah, I mean, I think that, you know, this is where I’d like us to be publishing more data on this. Because really what you need, to compare apples to apples, is the overall reach of this information, and sort of what is the information exposure diet of the average Facebook user. And I think there’s a couple of pieces that people don’t get. The first is most people’s newsfeed is filled with content from their friends. News links are sort of a minority of the views, all in, in people’s news feed on Facebook. I mean, the point of Facebook is to connect with your friends, and you’ve probably experienced this yourself. It’s, you know, posts and pictures and things like that.

Secondly, on things like COVID misinformation, what you really have to compare that with is, for example, views of our COVID information center, which we literally shoved to the very top of the newsfeed so that everyone could get information on that. We’re doing similar things, um, for voting. We’ve helped to register almost two and a half million voters in the US. Similar information, you know, for issues of racial justice, given all the horrible events that have happened this year. So what I don’t have is the comprehensive study of, you know, how many times did someone view the COVID information hub versus these other things. Um, you know, but my guess is that they’re getting a lot more of that good information from us.

But look, you know, anytime any of this stuff escapes, I’m not done yet. This is why I’m still here doing my job: we want to get this better. And yes, I wish it was 0%. I wish our classifiers were 99.999% accurate. They’re not. You know, my job is to get them there as fast as humanly possible. And when we get off this call, that’s what I’m going to go work on. What I can do is just look at recent history and project progress forward. Because I can’t fix the past, but I can fix today and tomorrow. When I look at things like, you know, hate speech, where in 2017 only about a quarter of the pieces of hate speech were found by our systems first. Almost three quarters of it was found by someone on Facebook first. Which is awful, which means they were exposed to it and had to report it to us. And now the number’s up to 94.5%. Even between Q2 of this year and the same time last year, we 5Xed the amount of content we’re taking down for hate speech. And I can trace all of that. Now, that number should be 99.99%, and we shouldn’t even be having this conversation, because you should say, I’ve never seen any of this stuff, and I never hear about it, ‘cause it’s gone.

That is my goal, but I can’t get there yet. But if you just look at the recent past: anytime I say something 5Xs in a year, or it goes from 24% to 94% in two years, and I say I’m not out of ideas, we’re still deploying state-of-the-art stuff this week, next week, like we did last week, then that’s why I’m optimistic overall that we’re going to move this problem into a place where it’s not the first thing you want to talk to me about. But I’m not there yet.

Lichfield: It’s a tech problem. It’s also obviously a workforce problem. You’re obviously going to be familiar with the memo that Sophie Zhang, a former Facebook data scientist, wrote when she departed. She wrote about how she was working on one of the teams (you have multiple teams that work on trying to identify harmful information around the world). And her main complaint, it seems, was that she felt those teams were understaffed, and she was having to prioritize decisions about whether to treat, say, misinformation around an election in a given country as dangerous. And when those decisions weren’t prioritized, sometimes it could take months for a problem to be dealt with, and that could have real consequences. Um, you have, I think, what, 15,000 human moderators right now. Do you think you have enough people?

Schroepfer: I never think we have enough people on anything. I’ve yet to be on a project where we were looking for things to work on, and I mean that really seriously. And we have, you know, 35,000 people working on this, on the review and content and safety and security side. The other thing that I think we don’t talk a lot about is, if you go talk to the heads of my AI team and ask them what Schrep has been asking them to do for the last three years, it’s integrity, it’s content moderation. It’s not cool, whizzy new things. It’s, how do we fight this problem? And it’s been years we’ve been working on it.

So I’ve taken sort of the best and the brightest we have in the company and said, we’ve got this huge problem, we can help, let’s go get this done. And it’s not like I have to order them to do it, because they want to work on it. Are we done yet? No. Am I impatient? Absolutely. Do I wish we had more people working on it? All the time. You know, we have to make our trade-offs on these things. But my job, and what we can do with technology, is sort of remove some of those trade-offs. You know, every time we deploy a new, more powerful classifier, that removes a ton of work from our human moderators, who can then go work on higher-level problems. Instead of, you know, really easy decisions, they move on to misinformation and really vague things and evaluating dangerous groups, and that sort of moving people up the difficulty curve is also improving things. And that’s what we’re trying to do.

Strong: We’re going to take a short break – but first, I want to suggest another show I think you’ll like. Brave New Planet weighs the pros and cons of a wide range of powerful innovations in science and tech. Dr. Eric Lander, who directs the Broad Institute of MIT and Harvard, explores hard questions like:

Lander: Should we alter the Earth’s atmosphere to prevent climate change? And can truth and democracy survive the impact of deepfakes?

Strong: Brave New Planet is from Pushkin Industries. You can find it wherever you get your podcasts. We’ll be back right after this.

[Advertisement]

Strong: Welcome back to a special episode of In Machines We Trust. This is a conversation between Facebook’s Mike Schroepfer and Tech Review’s Editor-In-Chief Gideon Lichfield. It happened live on the virtual stage of our EmTech Conference, and it’s been edited for length and clarity. If you want more on this topic, including our analysis, please check out the show notes or visit us at Technology Review dot com.

Lichfield: A couple of questions that I’m going to throw in from the audience. One is, how does misinformation affect Facebook’s revenue stream? And another is about how it affects trust in Facebook, since there seems to be an underlying lack of trust in Facebook, and how do you measure trust? And the gloss that we want to put on these questions is: clearly you care about misinformation, clearly a lot of the people who work at Facebook care about it or are worried by it, but there is, I think, an underlying question that people have, which is, does Facebook as a company care about it? Is it impacted by it negatively enough to really tackle the problem seriously?

Schroepfer: Yeah. I mean, look, I’m a person in society too. I care a lot about democracy and the future and advancing people’s lives in a positive way. And I challenge you to find, you know, someone who feels differently inside our offices. So yes, we work at Facebook, but we’re people in the world, and I care a lot about the future for my children. And, well, you’re asking, do we care? And the answer is yes. Um, you know, do we have the incentives? Like, what did we spend a lot of our time talking about today? We talked about misinformation and other things. You know, honestly, what would I rather talk about? I’d rather talk about VR and positive uses of AR and all the awesome new technology we’re building, because, you know, that’s normally what a CTO would be talking about.

So it is obviously something that is challenging trust in the company, trust in our products, and that is a huge problem for us, um, from a self-interest standpoint. So even if you think I’m full of it, just from a practical, self-interested standpoint: as a brand, as a consumer product that people voluntarily use every single day, when I try to sell a new product like Portal, which is a camera for your home, people have to trust the company that’s behind this product and think we have, you know, their best intentions at heart. If they don’t, it’s going to be a huge challenge for absolutely everything I do. So I think the interests here are pretty aligned. I don’t think there are a lot of good examples of consumer products that are free that survive if people don’t like them, don’t like the companies, or think they’re bad. So this is, from a self-interested standpoint, a critical issue for us.

[Credits]

Strong: This conversation with Facebook’s CTO is the first of two episodes on misinformation and social media. In the next part we chat with the CTO of Twitter. If you’d like to hear our newsroom’s analysis of this topic and the election, I’ve dropped a link in our show notes. I hope you’ll check it out. This episode from EmTech was produced by me and by Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. As always, thanks for listening.  I’m Jennifer Strong. 

[TR ID]

Pinterest tests online events with dedicated ‘class communities’

Pinterest is getting into online events. The company has been spotted testing a new feature that allows users to sign up for Zoom classes through Pinterest, while creators use Pinterest’s class boards to organize class materials, notes and other resources, or even connect with attendees through a group chat option. The company confirmed the test of online classes is an experiment now in development, but wouldn’t offer further details about its plans.

The feature itself was discovered on Tuesday by reverse engineer Jane Manchun Wong, who found details about the online classes by looking into the app’s code.

Currently, you can visit some of these “demo” profiles directly — like “@pinsmeditation” or “@pinzoom123,” for example — and view their listed Class Communities. However, these communities are empty when you click through. That’s because the feature is still unreleased, Wong says.

When and if the feature is later launched to the public, the communities would include dedicated sections where creators will be able to organize their class materials — like lists of what to bring to class, notes, photos and more. They could also use these communities to offer a class overview and description, connect users to a related shop, group chat feature and more.

Creators are also able to use the communities — which are basically enhanced Pinterest boards — to respond to questions from attendees, share photos from the class and otherwise interact with the participants.

When a user wants to join a class, they can click a “book” button to sign up, and are then emailed a confirmation with the meeting details. Other buttons direct attendees to download Zoom or copy the link to join the class.

It’s not surprising that Pinterest would expand into the online events space, given its platform has become a popular tool for organizing remote learning resources during the coronavirus pandemic. Teachers have turned to Pinterest to keep track of lesson plans, get inspiration, share educational activities and more. In the early days of the pandemic, Pinterest reported record usage when the company saw more searches and saves globally in a single March weekend than ever before in its history, as a result of its usefulness as an online organizational tool.

This growth has continued throughout the year. In October, Pinterest’s stock jumped on strong earnings after the company beat on revenue and user growth metrics. The company brought in $443 million in revenue, versus $383.5 million expected, and grew its monthly active users to 442 million, versus the 436.4 million expected. Outside of the coronavirus impacts, much of this growth was due to strong international adoption, increased ad spend from advertisers boycotting Facebook and a surge of interest from users looking for iOS 14 home screen personalization ideas.

Given that the U.S. has failed to get the COVID-19 pandemic under control, many classes, events and other activities will remain virtual even as we head into 2021. The online events market may continue to grow in the years that follow, too, thanks to the kickstart the pandemic provided the industry as a whole.

“We are experimenting with ways to help creators interact more closely with their audience,” a Pinterest spokesperson said, when asked for more information.

Pinterest wouldn’t confirm additional details about its plans for online events, but did say the feature was in development and the test would help to inform the product’s direction.

Pinterest often tries out new features before launching them to a wider audience. Earlier this summer, TechCrunch reported on a Story Pins feature the company had in the works. Pinterest then launched the feature in September. If the same time frame holds up for online events, we could potentially see the feature become more widely available sometime early next year.

SpaceX targeting next week for Starship’s first high-altitude test flight

SpaceX looks ready to proceed to the next crucial phase of its Starship spacecraft development program: A 15km (50,000 feet) test flight. This would far exceed the max height that any prior Starship prototype has achieved so far, since the current record-setting hop test maxed out at around 500 feet. Elon Musk says that SpaceX will look to make its first high-altitude attempt sometime next week.

This tentative date (these are always subject to change) follows a successful static test fire of the current SN8 generation prototype — essentially just firing the test spacecraft’s Raptor engines while it remains stationary on the pad. That’s a crucial step that paves the way for any actual flight, since it proves that the spacecraft can essentially hold together and withstand the pressures of active engines before it leaves the ground.

SpaceX’s SN8 prototype is different from prior versions in a number of ways, most obviously because it has an actual nosecone, along with nose fins. The prototypes that did the short test hops, including SN6, had what’s known as a mass simulator up top, which weighs as much as an actual Starship nose section but looks very different.

Musk added that the chances of an SN8 high-altitude flight going to plan aren’t great, estimating that there’s “maybe a 1/3 chance” given how many things have to work correctly. He then noted that that’s the reason SpaceX has SN9 and SN10 ready to follow fast, which is a theme of Starship’s development program to date: building successive generations of prototypes rapidly in parallel in order to test and iterate quickly.

We’ll likely get a better idea of when the launch will take place due to alerts filed with local regulators, so watch this space next week as we await this major leap forward in SpaceX’s Starship program.

Police case filed against Netflix executives in India over ‘A Suitable Boy’ kissing scene

Netflix, which has invested more than $500 million to gain a foothold in India in recent years, is slowly finding out just what can upset some people in the world’s second-largest internet market: apparently everything.

A police case has been filed this week against two top executives of the American streaming service in India after a leader of the governing party objected to some scenes in a TV series.

The show, “A Suitable Boy,” is an adaptation of the award-winning novel by Indian author Vikram Seth that follows the life of a young girl. It has a scene in which the protagonist is seen kissing a Muslim boy at a Hindu temple.

Narottam Mishra, the interior minister of the central state of Madhya Pradesh, said a First Information Report (an official police complaint) had been filed against Monika Shergill, VP of content at Netflix, and Ambika Khurana, director of public policies for the firm, over objectionable scenes in the show that hurt the religious sentiments of Hindus.

“I had asked officials to examine the series ‘A Suitable Boy’ being streamed on Netflix to check if kissing scenes in it were filmed in a temple and if it hurt religious sentiments. The examination prima facie found that these scenes are hurting the sentiments of a particular religion,” he said.

Gaurav Tiwari, a BJP youth leader who filed the complaint, demanded an apology from Netflix and the makers of the series (directed by award-winning filmmaker Mira Nair), and said the show promoted “love jihad,” an Islamophobic conspiracy theory that alleges that Muslim men entice Hindu women into converting their religion under the pretext of marriage.

Netflix declined to comment.

In recent days, a number of people have expressed on social media their anger at Netflix over these “objectionable” scenes, though it is unclear whether all of them, if any, are Netflix subscribers.

The incident comes weeks after an ad from the luxury jewelry brand Tanishq, part of the 152-year-old salt-to-steel Tata conglomerate, that celebrated interfaith marriage received intense backlash in the country.

For Netflix, the timing of this backlash isn’t great. The new incident comes days after the Indian government announced new rules for digital media, under which the nation’s Ministry of Information and Broadcasting will be regulating online streaming services. Prior to this new rule, India’s IT ministry oversaw streaming services, and according to a top streaming service executive, online services enjoyed a great degree of freedom.
