
Machine-learning project takes aim at disinformation


There’s nothing new about conspiracy theories, disinformation, and untruths in politics. What is new is how quickly malicious actors can spread disinformation when the world is tightly connected across social networks and internet news sites. We can give up on the problem and rely on the platforms themselves to fact-check stories or posts and screen out disinformation—or we can build new tools to help people identify disinformation as soon as it crosses their screens.

Preslav Nakov is a computer scientist at the Qatar Computing Research Institute in Doha specializing in speech and language processing. He leads a project using machine learning to assess the reliability of media sources. That allows his team to gather news articles alongside signals about their trustworthiness and political biases, all in a Google News-like format.

“You cannot possibly fact-check every single claim in the world,” Nakov explains. Instead, focus on the source. “I like to say that you can fact-check the fake news before it was even written.” His team’s tool, called the Tanbih News Aggregator, is available in Arabic and English and gathers articles in areas such as business, politics, sports, science and technology, and covid-19.

Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast was produced in partnership with the Qatar Foundation.

Show notes and links

Tanbih News Aggregator

Qatar Computing Research Institute

“Even the best AI for spotting fake news is still terrible,” MIT Technology Review, October 3, 2018

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is disinformation. From fake news, to propaganda, to deep fakes, it may seem like there’s no defense against weaponized news. However, scientists are researching ways to quickly identify disinformation to not only help regulators and tech companies, but also citizens, as we all navigate this brave new world together.

Two words for you: spreading infodemic.

My guest is Dr. Preslav Nakov, who is a principal scientist at the Qatar Computing Research Institute. He leads the Tanbih project, which was developed in collaboration with MIT. He’s also the lead principal investigator of a QCRI MIT collaboration project on Arabic speech and language processing for cross language information search and fact verification. This episode of Business Lab is produced in association with the Qatar Foundation. Welcome, Dr. Nakov.

Preslav Nakov: Thanks for having me.

Laurel Ruma: So why are we deluged with so much online disinformation right now? This isn’t a new problem, right?

Nakov: Of course, it’s not a new problem. It’s not the case that it’s the first time in the history of the universe that people are telling lies or media are telling lies. We had the yellow press, we had all these tabloids for years. It became a problem because of the rise of social media, when it suddenly became possible to send a message to millions and millions of people. And not only that, you could now tell different things to different people. So, you could microprofile people and deliver a personalized message that is designed, crafted for a specific person, with a specific purpose: to press a specific button on them. The main problem with fake news is not that it’s false. The main problem is that the news actually got weaponized, and this is something that Sir Tim Berners-Lee, the creator of the World Wide Web, has been complaining about: that his invention was weaponized.

Laurel: Yeah, Tim Berners-Lee is obviously distraught that this has happened, and it’s not just in one country or another. It is actually around the world. So is there an actual difference between fake news, propaganda, and disinformation?

Nakov: Sure, there is. I don’t like the term “fake news.” This is the term that has picked up: it was declared “word of the year” by several dictionaries in different years, shortly after the previous presidential election in the US. The problem with fake news is that, first of all, there’s no clear definition. I have been looking into dictionaries, how they define the term. One major dictionary said, “we are not really going to define the term at all, because it’s something self-explanatory—we have ‘news,’ we have ‘fake,’ and it’s news that’s fake; it’s compositional; it was used in the 19th century—there is nothing to define.” Different people put different meaning into this. To some people, fake news is just news they don’t like, regardless of whether it is false. But the main problem with fake news is that it really misleads people, and sadly even certain major fact-checking organizations, into focusing on only one thing: whether it’s true or not.

I prefer, and most researchers working on this prefer, the term “disinformation.” And this is a term that is adopted by major organizations like the United Nations, NATO, the European Union. And disinformation is something that has a very clear definition. It has two components. First, it is something that is false, and second, it has a malicious intent: intent to do harm. And again, the vast majority of research, the vast majority of efforts, many fact-checking initiatives, focus on whether something is true or not. And it’s typically the second part that is actually important. The part whether there is malicious intent. And this is actually what Sir Tim Berners-Lee was talking about when he first talked about the weaponization of the news. The main problem with fake news—if you talk to journalists, they will tell you this—the main problem with fake news is not that it is false. The problem is that it is a political weapon.

And propaganda. What is propaganda? Propaganda is a term that is orthogonal to disinformation. Again, disinformation has two components. It’s false and it has malicious intent. Propaganda also has two components. One is, somebody is trying to convince us of something. And second, there is a predefined goal. Now, we should pay attention. Propaganda is not true; it’s not false. It’s not good; it’s not bad. That’s not part of the definition. So, if a government has a campaign to persuade the public to get vaccinated, you can argue that’s for a good purpose, or let’s say Greta Thunberg trying to scare us that hundreds of species are getting extinct every day. This is a propaganda technique: appeal to fear. But you can argue that’s for a good purpose. So, propaganda is not bad; it’s not good. It’s not true; it’s not false.

Laurel: But propaganda has the goal to do something. And by forcing that goal, it is really appealing to that fear factor. So is that the distinction between disinformation and propaganda, the fear?

Nakov: No, fear is just one of the techniques. We have been looking into this. So, a lot of research has been focusing on binary classification. Is this true? Is this false? Is this propaganda? Is this not propaganda? We have looked a little bit deeper. We have been looking into what techniques have been used to do propaganda. And again, you can talk about propaganda, you can talk about persuasion or public relations, or mass communication. It’s basically the same thing. Different terms for about the same thing. And regarding propaganda techniques, there are two kinds. The first kind are appeals to emotions: it can be appeal to fear, it can be appeal to strong emotions, it can be appeal to patriotic feelings, and so on and so forth. And the other half are logical fallacies: things like black-and-white fallacy. For example, you’re either with us or against us. Or bandwagon. Bandwagon is like, oh, the latest poll shows that 57% are going to vote for Hillary, so we are on the right side of history, you have to join us.

There are several other propaganda techniques. There is red herring, there is intentional obfuscation. We have looked into 18 of those: half of them appeal to emotions, and half of them use certain kinds of logical fallacies, or broken logical reasoning. And we have built tools to detect those in texts, so that you can really show them to the user and make this explicit, so that people can understand how they are being manipulated.
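The detection tools Nakov describes here are trained models, but the underlying idea of flagging technique-specific spans in text can be illustrated with a deliberately simple sketch. The cue phrases below are invented for the example and stand in for the patterns a trained classifier would actually learn; this is a toy, not the team’s system:

```python
import re

# Toy cue phrases for two of the propaganda techniques mentioned above
# (illustrative only; a trained model learns far richer signals).
TECHNIQUE_CUES = {
    "black_and_white_fallacy": [
        r"either with us or against us",
        r"the only (choice|option|way)",
    ],
    "bandwagon": [
        r"\d+% (are going to|will) vote",
        r"everyone (agrees|knows)",
        r"right side of history",
    ],
}

def detect_techniques(text: str) -> list[tuple[str, str]]:
    """Return (technique, matched span) pairs found in the text."""
    hits = []
    for technique, patterns in TECHNIQUE_CUES.items():
        for pattern in patterns:
            for m in re.finditer(pattern, text, flags=re.IGNORECASE):
                hits.append((technique, m.group(0)))
    return hits

example = ("The latest poll shows that 57% are going to vote for us, so we are "
           "on the right side of history: you're either with us or against us.")
for technique, span in detect_techniques(example):
    print(technique, "->", span)
```

Highlighting the matched span, rather than just scoring the whole article, is what makes the output useful for media literacy: the reader sees exactly where the manipulation happens.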

Laurel: So in the context of the covid-19 pandemic, the director general of the World Health Organization said, and I quote, “We’re not just fighting an epidemic; we’re fighting an infodemic.” How do you define infodemic? What are some of those techniques that we can use to also avoid harmful content?

Nakov: Infodemic, this is something new. Actually, about a year ago, last February, MIT Technology Review had a great article that was talking about that. The covid-19 pandemic has given rise to the first global social media infodemic. And around the same time, back in February, the World Health Organization had on their website a list of top five priorities in the fight against the pandemic, and fighting the infodemic was number two on that list. So, it’s definitely a big problem. What is the infodemic? It’s a merger of a pandemic and the pre-existing disinformation that was already present in social media. It’s also a blending of political and health disinformation. Before that, the political part and, let’s say, the anti-vaxxer movement were separate. Now, everything is blended together.

Laurel: And that’s a real problem. I mean, the World Health Organization’s concern should be fighting the pandemic, but then its secondary concern is fighting disinformation. Finding hope in that kind of fear is very difficult. So one of the projects that you’re working on is called Tanbih. And Tanbih is a news aggregator, right? That uncovers disinformation. So the project itself has a number of goals. One is to uncover stance, bias, and propaganda in the news. The second is to promote different viewpoints and engage users. But then the third is to limit the effect of fake news. How does Tanbih work?

Nakov: Tanbih started indeed as a news aggregator, and it has grown into something quite larger than that, into a project, which is a mega-project in the Qatar Computing Research Institute. And it spans people from several groups in the institute, and it is developed in cooperation with MIT. We started the project with the aim of developing tools that we can actually put in the hands of the final users. And we decided to do this as part of a news aggregator, think of something like Google News. And as users are reading the news, we are signaling to them when something is propagandistic, and we’re giving them background information about the source. What we are doing is we are analyzing media in advance and we are building media profiles. So we are showing, telling users to what extent the content is propagandistic. We are telling them whether the news is from a trustworthy source or not, whether it is biased: left, center, right bias. Whether it is extreme: extreme left, extreme right. Also, whether it is biased with respect to specific topics.

And this is something that is very useful. So, imagine that you are reading some article that is skeptical about global warming. If we tell you, look, this news outlet has always been very biased in the same way, then you’ll probably take it with a grain of salt. We are also showing the perspective of reporting, the framing. If you think about it, covid-19, Brexit, any major event can be reported from different perspectives. For example, let’s take covid-19. It has a health aspect, that’s for sure, but it also has an economic aspect, even a political aspect, it has a quality-of-life aspect, it has a human rights aspect, a legal aspect. Thus, we are profiling the media and we are letting users see what their perspective is.

Regarding the media profiles, we are further exposing them as a browser plugin, so that as you are visiting different websites, you can actually click on the plugin and you can get very brief background information about the website. And you can also click on a link to access a more detailed profile. And this is very important: the focus is on the source. Again, most research has been focusing on “is this claim true or not?” And is this piece of news true or not? That’s only half of the problem. The other half is actually whether it is harmful, which is typically ignored.

The other thing is that we cannot possibly fact-check every single claim in the world. Not manually, not automatically. Manually, that’s out of the question. There was a study from MIT Media Lab about two years ago, where they have done a large study on many, many tweets. And it has been shown that false information goes six times farther and spreads much faster than real information. There was another study that is much less famous, but I find it very important, which shows that 50% of the lifetime spread of some very viral fake news happens in the first 10 minutes. In the first 10 minutes! Manual fact-checking takes a day or two, sometimes a week.

Automatic fact-checking? How can we fact-check a claim? Well, if we are lucky, if the claim is that the US economy grew 10% last year, that claim we can automatically check easily, by looking into Wikipedia or some statistical table. But if they say, there was a bomb in this little town two minutes ago? Well, we cannot really fact-check it, because to fact-check it automatically, we need to have some information from somewhere. We want to see what the media are going to write about it or how users are going to react to it. And both of those take time to accumulate. So, basically we have no information to check it. What can we do? What we are proposing is to move at a higher granularity, to focus on the source. And this is what journalists are doing. Journalists are looking into: are there two independent trusted sources that are claiming this?

So we are analyzing media. Even if bad people put a claim in social media, they are probably going to put a link to a website where one can find a whole story. Yet, they cannot create a new fake news website for every fake claim that they are making. They are going to reuse them. Thus, we can monitor what are the most frequently used websites, and we can analyze them in advance. And, I like to say that we can fact-check the fake news before it was even written. Because the moment when it’s written, the moment when it’s put in social media and there’s a link to a website, if we have this website in our growing database of continuously analyzed websites, we can immediately tell you whether this is a reliable website or not. Of course, reliable websites might have also poor information, good websites might sometimes be wrong as well. But we can give you an immediate idea.
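The source-level idea Nakov describes — profile the website once, in advance, then rate any claim linking to it instantly — can be sketched as a lookup against a pre-built profile database. The domains, scores, and threshold below are all made up for the example:

```python
from urllib.parse import urlparse

# Hypothetical pre-computed media profiles; a real system would build
# these by analyzing thousands of articles per outlet ahead of time.
MEDIA_PROFILES = {
    "trusted-daily.example": {"factuality": 0.91, "bias": "center"},
    "rumor-mill.example":    {"factuality": 0.18, "bias": "extreme right"},
}

def rate_claim_source(claim_url: str, threshold: float = 0.5) -> str:
    """Instantly rate a shared link using its source's pre-built profile."""
    domain = urlparse(claim_url).netloc.lower().removeprefix("www.")
    profile = MEDIA_PROFILES.get(domain)
    if profile is None:
        return "unknown source"  # never profiled: no signal yet
    if profile["factuality"] >= threshold:
        return "likely reliable source"
    return "likely unreliable source"

print(rate_claim_source("https://www.rumor-mill.example/bomb-in-town"))
# -> likely unreliable source
```

This is why the approach works within the 10-minute window mentioned above: the expensive analysis happened before the claim was ever written, so rating a new link is just a lookup.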

Beyond the news aggregator, we started looking into doing analytics, but also we are developing tools for media literacy that are showing to people the fine-grained propaganda techniques highlighted in the text: the specific places where propaganda is happening and its specific type. And finally, we are building tools that can support fact-checkers in their work. And those are again problems that are typically overlooked, but extremely important for fact-checkers. Namely, what is worth fact-checking in the first place. Consider a presidential debate. There are more than 1,000 sentences that have been said. You, as a fact-checker can check maybe 10 or 20 of those. Which ones are you going to fact-check first? What are the most interesting ones? We can help prioritize this. Or there are millions and millions of tweets about covid-19 on a daily basis. And which of those you would like to fact-check as a fact-checker?

The second problem is detecting previously fact-checked claims. One problem with fact-checking technology these days is quality, but the second part is lack of credibility. Imagine an interview with a politician. Can you put the politician on the spot? Imagine a system that automatically does speech recognition, that’s easy, and then does fact-checking. And suddenly you say, “Oh, Mr. X, my AI tells me you are now 96% likely to be lying. Can you elaborate on that? Why are you lying?” You cannot do that. Because you don’t trust the system. You cannot put the politician on the spot in real time or during a political debate. But if the system comes back and says: he just said something that has been fact-checked by this trusted fact-checking organization. And here’s the claim that he made, and here’s the claim that was fact-checked, and see, we know it’s false. Then you can put him on the spot. This is something that can potentially revolutionize journalism.
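Detecting previously fact-checked claims is essentially a retrieval problem: match a new statement against a database of already-verified claims. A real system would use learned sentence embeddings; as a minimal stand-in, here is a bag-of-words cosine-similarity sketch with an invented claims database:

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Lowercased bag-of-words vector (a stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical database of claims already checked by fact-checking organizations.
FACT_CHECKED = [
    ("the economy grew 10% last year", "false"),
    ("covid-19 will disappear in the summer", "false"),
]

def find_prior_check(claim: str, min_sim: float = 0.5):
    """Return the best-matching previously fact-checked claim, if any."""
    v = bow(claim)
    best = max(FACT_CHECKED, key=lambda item: cosine(v, bow(item[0])))
    return best if cosine(v, bow(best[0])) >= min_sim else None

print(find_prior_check("he said the economy grew 10% last year"))
# -> ('the economy grew 10% last year', 'false')
```

Because the match points back to a trusted fact-checking organization’s verdict rather than a model’s probability, it carries the credibility needed to put a politician on the spot in real time.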

Laurel: So getting back to that point about analytics. To get into the technical details of it, how does Tanbih use artificial intelligence and deep neural networks to analyze that content, if it’s coming across so much data, so many tweets?

Nakov: Tanbih initially was not really focusing on tweets. Tanbih has been focusing primarily on mainstream media. As I said, we are analyzing entire news outlets, so that we are prepared. Because again, there’s a very strong connection between social media and websites. It’s not enough just to put a claim on the Web and spread it. It can spread, but people are going to perceive it as a rumor because there’s no source, there’s no further corroboration. So, you still want to look into a website. And then, as I said, by looking into the source, you can get an idea whether you want to trust this claim among other information sources. And the other way around: when we are profiling media, we are analyzing the text of what the media publish.

So, we would say, “OK, let’s look into a few hundred or a few thousand articles by this target news outlet.” Then we would also look into how this medium represents itself in social media. Many of those websites also have social media accounts: how do people react to what they have published on Twitter, on Facebook? And then if the media have other kinds of channels, for example a YouTube channel, we will go to it and analyze that as well. So we’ll look into not only what they say, but how they say it, and this is something that comes from the speech signal. If there is a lot of appeal to emotions, we can detect some of it in text, but some of it we can actually get from the tone.

We are also looking into what others write about this medium, for example, what is written about them in Wikipedia. And we are putting all this together. We are also analyzing the images that are put on this website. We are analyzing the connections between the websites. The relationship between a website and its readers, the overlap in terms of users between different websites. And then we are using different kinds of graph neural networks. So, in terms of neural networks, we’re using different kinds of models. It’s primarily deep contextualized text representation based on transformers; that’s what you typically do for text these days. We are also using graph neural networks and we’re using different kinds of convolutional neural networks for image analysis. And we are also using neural networks for speech analysis.
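The graph neural networks mentioned here exploit relationships between websites, such as audience overlap. As a hand-rolled illustration of the message-passing intuition (real GNNs learn these weights; the sites, scores, and blending factor below are invented), one round of neighbor averaging looks like this:

```python
# One round of neighbor averaging over an audience-overlap graph: a toy
# sketch of the message-passing idea behind graph neural networks.
GRAPH = {  # site -> sites sharing a large audience overlap
    "site_a": ["site_b", "site_c"],
    "site_b": ["site_a"],
    "site_c": ["site_a"],
}
TEXT_SCORE = {"site_a": 0.9, "site_b": 0.2, "site_c": 0.3}  # from text analysis alone

def propagate(scores: dict, graph: dict, alpha: float = 0.5) -> dict:
    """Blend each site's own score with the mean of its neighbors' scores."""
    updated = {}
    for site, neighbors in graph.items():
        neighbor_mean = sum(scores[n] for n in neighbors) / len(neighbors)
        updated[site] = alpha * scores[site] + (1 - alpha) * neighbor_mean
    return updated

print(propagate(TEXT_SCORE, GRAPH))
```

The effect is that a site’s profile is informed by the company it keeps: a low-scoring site whose audience overlaps heavily with trusted outlets gets pulled up, and vice versa.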

Laurel: So what do we learn by studying this kind of disinformation region by region or by language? How can that actually help governments and healthcare organizations fight disinformation?

Nakov: We can basically give them aggregated information about what is going on, based on a schema that we have been developing for analysis of the tweets. We have designed a very comprehensive schema. We have been looking not only into whether a tweet is true or not, but also into whether it’s spreading panic, or it is promoting bad cure, or xenophobia, racism. We are automatically detecting whether the tweet is asking an important question that maybe a certain government entity might want to answer. For example, one such question last year was: is covid-19 going to disappear in the summer? It’s something that maybe health authorities might want to answer.

Other things have been offering advice or discussing action taken, and possible cures. So we have been looking into not only negative things, things that you might act on, try to limit, things like panic or racism, xenophobia—things like “don’t eat Chinese food,” “don’t eat Italian food.” Or things like blaming the authorities for their action or inaction, which governments might want to pay attention to and see to what extent it is justified and if they want to do something about it. Also, an important thing a policy maker might want is to monitor social media and detect when there is discussion of a possible cure. And if it’s a good cure, you might want to pay attention. If it’s a bad cure, you might also want to tell people: don’t use that bad cure. And discussion of action taken, or a call for action. If there are many people that say “close the barbershops,” you might want to see why they are saying that and whether you want to listen.

Laurel: Right. Because the government wants to monitor this disinformation for the explicit purpose of helping everyone not take those bad cures, right. Not continue down the path of thinking this propaganda or disinformation is true. So is it a government action to regulate disinformation on social media? Or do you think it’s up to the tech companies to kind of sort it out themselves?

Nakov: So that’s a good question. Two years ago, I was invited by the Inter-Parliamentary Union’s Assembly. They had invited three experts and there were 800 members of parliament from countries around the world. And for three hours, they were asking us questions, basically going around the central topic: what kinds of legislation can they, the national parliaments, pass so that they get a solution to the problem of disinformation once and for all. And, of course, the consensus at the end was that that’s a complex problem and there’s no easy solution.

Certain kind of legislation definitely plays a role. In many countries, certain kinds of hate speech is illegal. And in many countries, there are certain kind of regulations when it comes to elections and advertisements at election time that apply to regular media and also extend to the web space. And there have been a lot of recent calls for regulations in UK, in the European Union, even in the US. And that’s a very heated debate, but this is a complex problem, and there’s no easy solution. And there are important players there and those players have to work together.

So certain legislation? Yes. But, you also need the cooperation of the social media companies, because the disinformation is happening in their platforms. And they’re in a very good position, the best position actually, to limit the spread or to do something. Or to teach their users, to educate them, that probably they should not spread everything that they read. And then the non-government organizations, journalists, all the fact-checking efforts, this is also very important. And I hope that the efforts that we as researchers are putting in building such tools, would also be helpful in that respect.

One thing that we need to pay attention to is that when it comes to regulation through legislation, we should not think necessarily what can we do about this or that specific company. We should think more in the long term. And we should be careful to protect free speech. So it’s kind of a delicate balance.

In terms of fake news and disinformation, the only case where somebody has declared victory, and the only solution that we have actually seen work, is the case of Finland. Back in May 2019, Finland officially declared that it had won the war on fake news. It took them five years. They started working on that after the events in Crimea; they felt threatened and they started a very ambitious media literacy campaign. They focused primarily on schools, but also targeted universities and all levels of society. But, of course, primarily schools. They were teaching students how to tell whether something is fishy. If it makes you too angry, maybe something is not correct. How to do, let’s say, a reverse image search to check whether an image that is shown is actually from this event or from somewhere else. And in five years, they declared victory.

So, to me, media literacy is the best long-term solution. And that’s why I’m particularly proud of our tool for fine-grained propaganda analysis, because it really shows the users how they are being manipulated. And I can tell you that my hope is that after people have interacted a little bit with a platform like this, they’ll learn those techniques. And next time they are going to recognize them by themselves. They will not need the platform. And it happened to me and several other researchers who have worked on this problem, it happened to us, and now I cannot read the news properly anymore. Each time I read the news, I spot these techniques because I know them and I can recognize them. If more people can get to that level, that will be good.

Maybe social media companies can do something like that when a user registers on their platform, they could ask the new users to take some digital literacy short course, and then pass something like an exam. And then, of course, maybe we should have government programs like that. The case of Finland shows that, if the government intervenes and puts in place the right programs, the fake news is something that can be solved. I hope that fake news is going to go the way of spam. It’s not going to be eradicated. Spam is still there, but it’s not the kind of problem that it was 20 years ago.

Laurel: And that’s media literacy. And even if it does take five years to eradicate this kind of disinformation or just improve society’s understanding of media literacy and what is disinformation, elections happen fairly frequently. And so that would be a great place to start thinking about how to stop this problem. Like you said, if it becomes like spam, it becomes something that you deal with every day, but you don’t actually think about or worry about anymore. And it’s not going to completely turn over democracy. That seems to me a very attainable goal.

Laurel: Dr. Nakov, thank you so much for joining us today on what’s been a fantastic conversation on the Business Lab.

Nakov: Thanks for having me.

Laurel: That was Dr. Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the Director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For information about us and the show, please check out our website at technologyreview.com.

The show is available wherever you get your podcasts.

If you enjoyed this podcast, we hope that you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.



Collective, a back-office for the self-employed, raises $20M from Ashton Kutcher’s VC


With so much focus on the ‘creator economy’, and countries hit by the effects of the pandemic, the self-employed market is ‘booming’, for good or for ill. So it’s not too much of a surprise that Collective, a subscription-based back-office for the self-employed, has raised a $20 million Series A funding round after launching only late last year.

The round was led by General Catalyst and joined by Sound Ventures (the venture capital fund founded by Ashton Kutcher and Guy Oseary). Collective has now raised a total of $28.65 million. Other notable investors include: Steve Chen (founder, YouTube), Hamish McKenzie (founder, Substack), Aaron Levie (founder, Box), Kevin Lin (founder, Twitch), Sam Yam (founder, Patreon), Li Jin (Atelier Ventures), Shadiah Sigala (founder, HoneyBook), Adrian Aoun (founder, Forward), Holly Liu (founder, Kabam), Andrew Dudum (founder, Hims) and Edward Hartman (founder, LegalZoom).

Ashton Kutcher said in a statement: “We’re proud to be supporting a company that’s making it easier for creators to focus on what they do best by taking care of the back office work that creates so much friction for so many early entrepreneurs. I would have loved something like this when I was getting started.”

Launched in September 2020 by CEO Hooman Radfar, CPO Ugur Kaner and CTO Bugra Akcay, Collective offers “tailored” financial services: access to advisors who oversee accounting, tax, bookkeeping, and business formation needs. There are currently 59 million self-employed workers in the U.S. (36% of the US workforce), most of whom do all their own admin. Collective hopes to be their online back-office platform.

Speaking to me over email, Radfar said that the start-up fintech market tends to serve companies like them – other start-ups and growing SMBs: “Companies like Pilot have done an amazing job at building a back-office platform that handles taxes, bookkeeping and finances for start-ups. We want to offer that same great value to the underserved business-of-one community, since they are the largest group of founders in the country.”

He added: “Before Collective, consultants, freelancers, and other solo founders had to string together their back-office solution using DIY platforms like Quickbooks, Gusto, and LegalZoom. If they were lucky, they had the help of a part-time accountant to advise them. Collective makes handling finances easy with the first all-in-one platform that not only bundles these tools into one platform, but also provides the technology and team to optimize their tax savings like the pros.”

According to some estimates, the number of lone freelancers in the US is projected to reach 86.5 million, or 50% of the US workforce, by 2027, with the freelancer space projected to grow three times faster than the traditional workforce.

Niko Bonatsos, Managing Director of General Catalyst, said: “Collective is serving the $1.2 trillion business-of-one industry by building the first back-office platform that saves individuals significant time and money, while providing them with the appropriate tools and resources they need to help them succeed. We’re excited to support Collective as they expand their team and build an exceptional service for the business-of-one community.”


UK publishes draft Online Safety Bill


The UK government has published its long-trailed (child) ‘safety-focused’ plan to regulate online content and speech.

The Online Safety Bill has been in the works for years — during which time a prior plan to require age verification for accessing online porn in the UK, also with the goal of protecting kids from being exposed to inappropriate content online but which was widely criticized as unworkable, got quietly dropped.

At the time the government said it would focus on introducing comprehensive legislation to regulate a range of online harms. It can now say it’s done that.

The 145-page Online Safety Bill can be found here on the gov.uk website — along with 123 pages of explanatory notes and a 146-page impact assessment.

The draft legislation imposes a duty of care on digital service providers to moderate user generated content in a way that prevents users from being exposed to illegal and/or harmful stuff online.

The government dubs the plan “globally groundbreaking” and claims it will usher in “a new age of accountability for tech and bring fairness and accountability to the online world”.

Critics warn the proposals will harm freedom of expression by encouraging platforms to over-censor, while also creating major legal and operational headaches for digital businesses that will discourage tech innovation.

The debate starts now in earnest.

The bill will be scrutinised by a joint committee of MPs — before a final version is formally introduced to Parliament for debate later this year.

How long it might take to hit the statute books isn’t clear but the government has a large majority in parliament so, failing major public uproar and/or mass opposition within its own ranks, the Online Safety Bill has a clear road to becoming law.

Commenting in a statement, digital secretary Oliver Dowden said: “Today the UK shows global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world.

“We will protect children on the internet, crack down on racist abuse on social media and through new measures to safeguard our liberties, create a truly democratic digital age.”

The length of time it’s taken for the government to draft the Online Safety Bill underscores the legislative challenge involved in trying to ‘regulate the Internet’.

In a bit of a Freudian slip, the DCMS’ own PR talks about “the government’s fight to make the internet safe”. And there are certainly question marks over who the future winners and losers of the UK’s Online Safety laws will be.

Safety and democracy?

In a press release about the plan, the Department for Digital, Culture, Media and Sport (DCMS) claimed the “landmark laws” will “keep children safe, stop racial hate and protect democracy online”.

But as that grab-bag of headline goals implies there’s an awful lot going on here — and huge potential for things to go wrong if the end result is an incoherent mess of contradictory rules that make it harder for digital businesses to operate and for Internet users to access the content they need.

The laws are set to apply widely — not just to tech giants or social media sites but to a broad swathe of websites, apps and services that host user-generated content or just allow people to talk to others online.

In-scope services will face a legal requirement to remove and/or limit the spread of illegal and (in the case of larger services) harmful content, with the risk of major penalties for failing in this new duty of care toward users. There will also be requirements for reporting child sexual exploitation content to law enforcement.

Ofcom, the UK’s comms regulator — which is responsible for regulating the broadcast media and telecoms sectors — is set to become the UK Internet’s content watchdog too, under the plan.

It will have powers to sanction companies that fail in the new duty of care toward users by hitting them with fines of up to £18M or ten per cent of annual global turnover (whichever is higher).
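To make the “whichever is higher” rule concrete, the penalty cap can be sketched as a one-line comparison (the function name and inputs below are illustrative, not drawn from the bill itself):

```python
def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    # Ofcom's penalty cap under the draft bill: the higher of
    # a flat £18 million or 10% of annual global turnover.
    return max(18_000_000.0, 0.10 * annual_global_turnover_gbp)
```

So a company turning over £500 million globally could face a fine of up to £50 million, while for smaller firms the flat £18 million figure is the binding cap.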

The regulator will also get the power to block access to sites — so the potential for censoring entire platforms is baked in.

Some campaigners backing tough new Internet rules have been pressing the government to include the threat of criminal sanctions for CEOs to concentrate C-suite minds on anti-harms compliance. And while ministers haven’t gone that far, DCMS says a new criminal offence for senior managers has been included as a deferred power — adding: “This could be introduced at a later date if tech firms don’t step up their efforts to improve safety.”

Despite there being widespread public support in the UK for tougher rules for Internet platforms, the devil is in the detail of how exactly you propose to do that.

Civil rights campaigners and tech policy experts have warned from the get-go that the government’s plan risks having a chilling effect on online expression by forcing private companies to be speech police.

Legal experts are also warning over how workable the framework will be, given hard-to-define concepts like “harms” — and, in a new addition, content that’s defined as “democratically important” (which the government wants certain platforms to have a special duty to protect).

The clear risk is massive legal uncertainty for digital businesses — with knock-on impacts on startup innovation and on the availability of services in the UK.

The bill’s earlier incarnation — a 2019 White Paper — had the word “harms” in the title. That’s been swapped for a more anodyne reference to “safety” but the legal uncertainty hasn’t been swapped out.

The emphasis remains on trying to rein in an amorphous conglomerate of ‘harms’ — some illegal, others just unpleasant — that have been variously linked to or associated with online activity. (Often off the back of high profile media reporting, such as into children’s exposure to suicide content on platforms like Instagram.)

This can range from bullying and abuse (online trolling), to the spread of illegal content (child sexual exploitation), to content that’s merely inappropriate for children to see (legal pornography).

Certain types of online scams (romance fraud) are another harm the government wants the legislation to address, per latest additions.

The umbrella ‘harms’ framing makes the UK approach distinct from the European Union’s Digital Services Act — a parallel legislative proposal to update the EU’s digital rules that’s more tightly focused on illegal content, with the bloc setting out rules to standardize reporting procedures for illegal content and to combat the risk of dangerous products being sold on ecommerce marketplaces via ‘know your customer’ requirements.

In response to criticism of the UK Bill’s potential impact on online expression, the government has added measures which it said today are aimed at strengthening people’s rights to express themselves freely online.

It also says it’s added in safeguards for journalism and to protect democratic political debate in the UK.

However, its approach is already raising questions — including over what look like some pretty contradictory stipulations.

For example, the DCMS’ discussion of how the bill will handle journalistic content confirms that content on news publishers’ own websites won’t be in scope of the law (reader comments on those sites are also not in scope) and that articles by “recognised news publishers” shared on in-scope services (such as social media sites) will be exempted from legal requirements that may otherwise apply to non-journalistic content.

Indeed, platforms will have a legal requirement to safeguard access to journalism content. (“This means [digital platforms] will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists’ removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content,” DCMS notes.)

However the government also specifies that “citizen journalists’ content will have the same protections as professional journalists’ content” — so exactly where (or how) the line gets drawn between “recognised” news publishers (out of scope), citizen journalists (also out of scope), and just any old person blogging or posting stuff on the Internet (in scope… maybe?) is going to make for compelling viewing.

Carve outs to protect political speech also complicate the content moderation picture for digital services — given, for example, how extremist groups that hold racist opinions can seek to launder their hate speech and abuse as ‘political opinion’. (Some notoriously racist activists also like to claim to be ‘journalists’…)

DCMS writes that companies will be “forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation”.

“Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom,” it goes on, adding: “When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important.”

Platforms will face responsibility for balancing all these conflicting requirements — drawing on Codes of Practice on content moderation that respects freedom of expression which will be set out by Ofcom — but also under threat of major penalties being slapped on them by Ofcom if they get it wrong.

Interestingly, the government appears to be looking favorably on the Facebook-devised ‘Oversight Board’ model, where a panel of humans sit in judgement on ‘complex’ content moderation cases — and also discouraging too much use of AI filters which it warns risk missing speech nuance and over-removing content. (Especially interesting given the UK government’s prior pressure on platforms to adopt AI tools to speed up terrorism content takedowns.)

“The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate,” writes DCMS. “All in-scope companies will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.”

“People using their services will need to have access to effective routes of appeal for content removed without good reason and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to Ofcom and these complaints will form an essential part of Ofcom’s horizon-scanning, research and enforcement activity,” it goes on.

“Category 1 services [the largest, most popular services] will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects. These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.”

Another confusing-looking component of the plan is that while the bill includes measures to tackle what it calls “user-generated fraud” — such as posts on social media for fake investment opportunities or romance scams on dating apps — fraud that’s conducted online via advertising, emails or cloned websites will not be in scope, per DCMS, as it says “the Bill focuses on harm committed through user-generated content”.

Yet since Internet users can easily and cheaply create and run online ads — as platforms like Facebook essentially offer their ad targeting tools to anyone who’s willing to pay — then why carve out fraud by ads as exempt?

It seems a meaningless place to draw the line. Fraud where someone paid a few dollars to amplify their scam doesn’t seem a less harmful class of fraud than a free Facebook post linking to the self-same crypto investment scam.

In short, there’s a risk of arbitrary, ill-thought-through distinctions creating incoherent and confusing rules that are prone to loopholes. Which doesn’t sound good for anyone’s online safety.

In parallel, meanwhile, the government is devising an ambitious pro-competition ex ante regime to regulate tech giants specifically. Ensuring coherence and avoiding conflicting or overlapping requirements between that framework for platform giants and these wider digital harms rules is a further challenge.

Amazon updates Echo Show line with a pan and zoom camera and a kids model

Amazon this morning announced a handful of updates across its Echo Show line of smart screens. The most interesting bit here is the addition of a pan and zoom camera to the mid-tier Echo Show. The feature is similar to ones found on Facebook’s various Portal devices and Google’s high-end Nest Hub Max.

Essentially, it’s designed to keep the subject in frame – Apple also recently introduced the similar Center Stage feature for the latest iPad Pro. It comes after Amazon introduced a far less subtle version in the Echo Show 10, which actually follows the subject around by swiveling the display around the base. I know I’m not alone in being a little creeped out seeing it in action.

The new feature arrives on the Show 8’s 13-megapixel camera, which is coupled with a built-in physical shutter – a mainstay as Amazon looks to stay ahead of the privacy conversation. The eight-inch HD display is powered by an upgraded octa-core processor and coupled with stereo speakers. The new Show 8 runs $130.

The other biggest news here is the arrival of the Echo Show 5 Kids – the one really new product in the bunch. At $95, the kid-focused version of the screen features a customizable home screen, colorful design, a two-year warranty in case of breaks and a one-year subscription to Amazon Kids+.

There’s a new version of the regular Show 5, too, featuring an upgraded HD camera, new colors and additional software features. That runs $85. The new devices go up for preorder today and start shipping later this month.
