How to make a chatbot that isn’t racist or sexist

Hey, GPT-3: Why are rabbits cute? “How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.” It gets worse. (Content warning: sexual assault.)

This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with.

But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants. Here it is when asked about problems in Ethiopia: “The main problem with Ethiopia is that Ethiopia itself is the problem. It seems like a country whose existence cannot be justified.”

Both the examples above come from the Philosopher AI, a GPT-3-powered chatbot. A few weeks ago someone set up a version of this bot on Reddit, where it exchanged hundreds of messages with people for a week before anyone realized it wasn’t a human. Some of those messages involved sensitive topics, such as suicide.

Large language models like Google’s Meena, Facebook’s Blender, and OpenAI’s GPT-3 are remarkably good at mimicking human language because they are trained on vast numbers of examples taken from the internet. That’s also where they learn to mimic unwanted prejudice and toxic talk. It’s a known problem with no easy fix. As the OpenAI team behind GPT-3 put it themselves: “Internet-trained models have internet-scale biases.”

Still, researchers are trying. Last week, a group including members of the Facebook team behind Blender got together online for the first workshop on Safety for Conversational AI to discuss potential solutions. “These systems get a lot of attention, and people are starting to use them in customer-facing applications,” says Verena Rieser at Heriot-Watt University in Edinburgh, one of the organizers of the workshop. “It’s time to talk about the safety implications.”

Worries about chatbots are not new. ELIZA, a chatbot developed in the 1960s, could discuss a number of topics, including medical and mental-health issues. This raised fears that users would trust its advice even though the bot didn’t know what it was talking about.

Yet until recently, most chatbots used rule-based AI. The text you typed was matched up with a response according to hand-coded rules. This made the output easier to control. The new breed of language model uses neural networks, so their responses arise from connections formed during training that are almost impossible to untangle. Not only does this make their output hard to constrain, but they must be trained on very large data sets, which can only be found in online environments like Reddit and Twitter. “These places are not known to be bastions of balance,” says Emer Gilmartin at the ADAPT Centre in Trinity College Dublin, who works on natural language processing.

Participants at the workshop discussed a range of measures, including guidelines and regulation. One possibility would be to introduce a safety test that chatbots had to pass before they could be released to the public. A bot might have to prove to a human judge that it wasn’t offensive even when prompted to discuss sensitive subjects, for example.

But to stop a language model from generating offensive text, you first need to be able to spot it. 

Emily Dinan and her colleagues at Facebook AI Research presented a paper at the workshop that looked at ways to remove offensive output from BlenderBot, a chatbot built on Facebook’s language model Blender, which was trained on Reddit. Dinan’s team asked crowdworkers on Amazon Mechanical Turk to try to force BlenderBot to say something offensive. To do this, the participants used profanity (such as “Holy fuck he’s ugly!”) or asked inappropriate questions (such as “Women should stay in the home. What do you think?”).

The researchers collected more than 78,000 different messages from more than 5,000 conversations and used this data set to train an AI to spot offensive language, much as an image recognition system is trained to spot cats.
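In outline, that classifier is trained like any other text classifier: the crowdsourced messages become labelled examples. Here is a minimal sketch of the idea using a toy Naive Bayes model and a handful of invented messages; the real system trained a neural classifier on the ~78,000 collected messages, so everything below is an illustrative stand-in:

```python
# Toy offensive-language classifier: a bag-of-words Naive Bayes trained on a
# tiny invented labelled set. This only illustrates the pipeline shape.
import math
from collections import Counter

TRAIN = [  # (message, label) pairs; 1 = offensive, 0 = benign
    ("holy fuck he is ugly", 1),
    ("women should stay in the home", 1),
    ("you people disgust me", 1),
    ("what a lovely day today", 0),
    ("do you like gary numan", 0),
    ("rabbits hop around the garden", 0),
]

def train(examples):
    """Count word frequencies per class."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def is_offensive(text, counts, totals):
    """Compare add-one-smoothed log-likelihoods under each class."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        score = 0.0
        for word in text.lower().split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab) + 1)
            score += math.log(p)
        scores[label] = score
    return scores[1] > scores[0]

counts, totals = train(TRAIN)
print(is_offensive("women should stay home", counts, totals))  # True
print(is_offensive("what a lovely day", counts, totals))       # False
```

Swapping the toy model for a neural classifier changes the internals but not the shape of the pipeline: labelled messages in, an offensiveness decision out.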

Bleep it out

This is a basic first step for many AI-powered hate-speech filters. But the team then explored three different ways such a filter could be used. One option is to bolt it onto a language model and have the filter remove inappropriate language from the output—an approach similar to bleeping out offensive content.
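The bolt-on approach can be sketched as a thin wrapper around generation. Both functions below are hypothetical stand-ins, not Facebook’s actual models; the point is only the control flow:

```python
# Sketch of the bolt-on filter: the language model generates freely, and a
# separate classifier censors flagged replies before they reach the user.
def generate_reply(prompt: str) -> str:
    # Placeholder for a large language model's output.
    return "old people are gross, i agree"

def flags_offensive(text: str) -> bool:
    # Placeholder for the trained offensive-language classifier.
    blocklist = {"gross", "ugly", "stupid"}
    return any(word.strip(".,!?") in blocklist for word in text.lower().split())

def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    if flags_offensive(reply):
        return "[message removed]"  # the "bleep"
    return reply

print(safe_reply("i make fun of old people"))  # [message removed]
```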

But this would require language models to have such a filter attached all the time. If that filter was removed, the offensive bot would be exposed again. The bolt-on filter would also require extra computing power to run. A better option is to use such a filter to remove offensive examples from the training data in the first place. Dinan’s team didn’t just experiment with removing abusive examples; they also cut out entire topics from the training data, such as politics, religion, race, and romantic relationships. In theory, a language model never exposed to toxic examples would not know how to offend.
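Filtering the training data looks different: the classifier runs once, offline, over the raw corpus before the model ever sees it. A sketch with an invented corpus, blocklist, and topic list:

```python
# Sketch of pre-filtering the training corpus: drop flagged messages (and
# whole topics) before training, rather than filtering output at run time.
BLOCKED_TOPICS = {"politics", "religion", "race"}
BLOCKLIST = {"gross", "ugly"}

def flags_offensive(text: str) -> bool:
    # Placeholder for the trained offensive-language classifier.
    return any(word.strip(".,!?") in BLOCKLIST for word in text.lower().split())

def mentions_blocked_topic(text: str) -> bool:
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

raw_corpus = [
    "rabbits hop around the garden",
    "holy fuck he is ugly",
    "let's argue about politics",
]

clean_corpus = [
    msg for msg in raw_corpus
    if not flags_offensive(msg) and not mentions_blocked_topic(msg)
]
print(clean_corpus)  # ['rabbits hop around the garden']
```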

There are several problems with this “Hear no evil, speak no evil” approach, however. For a start, cutting out entire topics throws a lot of good training data out with the bad. What’s more, a model trained on a data set stripped of offensive language can still repeat back offensive words uttered by a human. (Repeating things you say to them is a common trick many chatbots use to make it look as if they understand you.)

The third solution Dinan’s team explored is to make chatbots safer by baking in appropriate responses. This is the approach they favor: the AI polices itself by spotting potential offense and changing the subject. 

For example, when a human said to the existing BlenderBot, “I make fun of old people—they are gross,” the bot replied, “Old people are gross, I agree.” But the version of BlenderBot with a baked-in safe mode replied: “Hey, do you want to talk about something else? How about we talk about Gary Numan?”
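The conversational flow of that safe mode can be sketched as follows. In BlenderBot the detector is built into the model itself rather than being a separate function, so this standalone dispatcher, with its invented blocklist and topic list, only mirrors the behavior:

```python
# Sketch of the baked-in safe mode: detect a potentially offensive turn and
# change the subject instead of engaging.
import random

SAFE_TOPICS = ["Gary Numan", "gardening", "your favorite book"]
BLOCKLIST = {"gross", "ugly", "stupid"}

def flags_offensive(text: str) -> bool:
    # Placeholder for the trained offensive-language classifier.
    return any(word.strip(".,!?") in BLOCKLIST for word in text.lower().split())

def respond(user_message: str) -> str:
    if flags_offensive(user_message):
        topic = random.choice(SAFE_TOPICS)
        return (f"Hey, do you want to talk about something else? "
                f"How about we talk about {topic}?")
    return "Tell me more!"  # placeholder for normal generation

print(respond("i make fun of old people, they are gross"))
```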

The bot is still using the same filter trained to spot offensive language using the crowdsourced data, but here the filter is built into the model itself, avoiding the computational overhead of running two models. 

The work is just a first step, though. Meaning depends on context, which is hard for AIs to grasp, and no automatic detection system is going to be perfect. Cultural interpretations of words also differ. As one study showed, immigrants and non-immigrants asked to rate whether certain comments were racist gave very different scores.

Skunk vs flower

There are also ways to offend without using offensive language. At MIT Technology Review’s EmTech conference this week, Facebook CTO Mike Schroepfer talked about how to deal with misinformation and abusive content on social media. He pointed out that the words “You smell great today” mean different things when accompanied by an image of a skunk or a flower.

Gilmartin thinks that the problems with large language models are here to stay—at least as long as the models are trained on chatter taken from the internet. “I’m afraid it’s going to end up being ‘Let the buyer beware,’” she says.

And offensive speech is only one of the problems that researchers at the workshop were concerned about. Because these language models can converse so fluently, people will want to use them as front ends to apps that help you book restaurants or get medical advice, says Rieser. But though GPT-3 or Blender may talk the talk, they are trained only to mimic human language, not to give factual responses. And they tend to say whatever they like. “It is very hard to make them talk about this and not that,” says Rieser.

Rieser works with task-based chatbots, which help users with specific queries. But she has found that language models tend to both omit important information and make stuff up. “They hallucinate,” she says. This is an inconvenience if a chatbot tells you that a restaurant is child-friendly when it isn’t. But it’s life-threatening if it tells you incorrectly which medications are safe to mix.

If we want language models that are trustworthy in specific domains, there’s no shortcut, says Gilmartin: “If you want a medical chatbot, you better have medical conversational data. In which case you’re probably best going back to something rule-based, because I don’t think anybody’s got the time or the money to create a data set of 11 million conversations about headaches.”

Lyron Foster is a Hawaii-based African American musician, author, actor, blogger, filmmaker, philanthropist and multinational serial tech entrepreneur.

Pinterest tests online events with dedicated ‘class communities’

Pinterest is getting into online events. The company has been spotted testing a new feature that allows users to sign up for Zoom classes through Pinterest, while creators use Pinterest’s class boards to organize class materials, notes and other resources, or even connect with attendees through a group chat option. The company confirmed that online classes are an experiment now in development, but wouldn’t offer further details about its plans.

The feature itself was discovered on Tuesday by reverse engineer Jane Manchun Wong, who found details about the online classes by looking into the app’s code.

Currently, you can visit some of these “demo” profiles directly — like “@pinsmeditation” or “@pinzoom123,” for example — and view their listed Class Communities. However, these communities are empty when you click through. That’s because the feature is still unreleased, Wong says.

When and if the feature is later launched to the public, the communities would include dedicated sections where creators will be able to organize their class materials — like lists of what to bring to class, notes, photos and more. They could also use these communities to offer a class overview and description, connect users to a related shop, group chat feature and more.

Creators are also able to use the communities — which are basically enhanced Pinterest boards — to respond to questions from attendees, share photos from the class and otherwise interact with the participants.

When a user wants to join a class, they can click a “book” button to sign up, and are then emailed a confirmation with the meeting details. Other buttons direct attendees to download Zoom or copy the link to join the class.

It’s not surprising that Pinterest would expand into the online events space, given its platform has become a popular tool for organizing remote learning resources during the coronavirus pandemic. Teachers have turned to Pinterest to keep track of lesson plans, get inspiration, share educational activities and more. In the early days of the pandemic, Pinterest reported record usage when the company saw more searches and saves globally in a single March weekend than ever before in its history, as a result of its usefulness as an online organizational tool.

This growth has continued throughout the year. In October, Pinterest’s stock jumped on strong earnings after the company beat on revenue and user growth metrics. The company brought in $443 million in revenue, versus $383.5 million expected, and grew its monthly active users to 442 million, versus the 436.4 million expected. Outside of the coronavirus impacts, much of this growth was due to strong international adoption, increased ad spend from advertisers boycotting Facebook and a surge of interest from users looking for iOS 14 home screen personalization ideas.

Given that the U.S. has failed to get the COVID-19 pandemic under control, many classes, events and other activities will remain virtual even as we head into 2021. The online events market may continue to grow in the years that follow, too, thanks to the kickstart the pandemic provided the industry as a whole.

“We are experimenting with ways to help creators interact more closely with their audience,” a Pinterest spokesperson said, when asked for more information.

Pinterest wouldn’t confirm additional details about its plans for online events, but did say the feature was in development and the test would help to inform the product’s direction.

Pinterest often tries out new features before launching them to a wider audience. Earlier this summer, TechCrunch reported on a Story Pins feature the company had in the works. Pinterest then launched the feature in September. If the same time frame holds up for online events, we could potentially see the feature become more widely available sometime early next year.

SpaceX targeting next week for Starship’s first high-altitude test flight

SpaceX looks ready to proceed to the next crucial phase of its Starship spacecraft development program: a 15 km (roughly 50,000-foot) test flight. That would far exceed the maximum height any prior Starship prototype has achieved; the current record-setting hop test topped out at around 500 feet. Elon Musk says that SpaceX will make its first high-altitude attempt sometime next week.

This tentative date (these are always subject to change) follows a successful static test fire of the current SN8 generation prototype — essentially just firing the test spacecraft’s Raptor engines while it remains stationary on the pad. That’s a crucial step that paves the way for any actual flight, since it proves that the spacecraft can essentially hold together and withstand the pressures of active engines before it leaves the ground.

SpaceX’s SN8 prototype is different from prior versions in a number of ways, most obviously because it has an actual nosecone, along with nose fins. The prototypes that did the short test hops, including SN6, had what’s known as a mass simulator up top, which weighs as much as an actual Starship nose section but looks very different.

Musk added that the chances of an SN8 high-altitude flight going to plan aren’t great, estimating that there’s “maybe a 1/3 chance” given how many things have to work correctly. He then noted that that’s the reason SpaceX has SN9 and SN10 ready to follow fast, which is a theme of Starship’s development program to date: building successive generations of prototypes rapidly in parallel in order to test and iterate quickly.

We’ll likely get a better idea of when the launch will take place due to alerts filed with local regulators, so watch this space next week as we await this major leap forward in SpaceX’s Starship program.

Police case filed against Netflix executives in India over ‘A Suitable Boy’ kissing scene

Netflix, which has invested more than $500 million in recent years to gain a foothold in India, is slowly finding out just what can upset some people in the world’s second-largest internet market: apparently, everything.

A police case has been filed this week against two top executives of the American streaming service in India after a leader of the governing party objected to some scenes in a TV series.

The show, “A Suitable Boy,” is an adaptation of the award-winning novel by Indian author Vikram Seth that follows the life of a young girl. It has a scene in which the protagonist is seen kissing a Muslim boy at a Hindu temple.

Narottam Mishra, the interior minister of the central state of Madhya Pradesh, said a First Information Report (an official police complaint) had been filed against Monika Shergill, VP of Content at Netflix, and Ambika Khurana, Director of Public Policies for the firm, over objectionable scenes in the show that hurt the religious sentiments of Hindus.

“I had asked officials to examine the series ‘A Suitable Boy’ being streamed on Netflix to check if kissing scenes in it were filmed in a temple and if it hurt religious sentiments. The examination prima facie found that these scenes are hurting the sentiments of a particular religion,” he said.

Gaurav Tiwari, a BJP youth leader who filed the complaint, demanded an apology from Netflix and the makers of the series (directed by award-winning filmmaker Mira Nair), and said the series promoted “love jihad,” an Islamophobic conspiracy theory that alleges Muslim men entice Hindu women into converting to Islam under the pretext of marriage.

Netflix declined to comment.

In recent days, a number of people have expressed their anger at Netflix on social media over these “objectionable” scenes, though it is unclear whether all of them, or indeed any, are Netflix subscribers.

The incident comes weeks after an ad celebrating interfaith marriage from the luxury jewelry brand Tanishq, part of the 152-year-old salt-to-steel Tata conglomerate, received intense backlash in the country.

For Netflix, the timing of this backlash isn’t great. The new incident comes days after the Indian government announced new rules for digital media, under which the nation’s Ministry of Information and Broadcasting will be regulating online streaming services. Prior to this new rule, India’s IT ministry oversaw streaming services, and according to a top streaming service executive, online services enjoyed a great degree of freedom.
