The internet is excluding Asian-Americans who don’t speak English

Jennifer Xiong spent her summer helping Hmong people in California register to vote in the US presidential election. The Hmong are an ethnic group that comes from the mountains of China, Vietnam, Laos, and Thailand but doesn’t have a country of its own, and Xiong was a volunteer organizer at Hmong Innovating Politics, or HIP, in Fresno. There are around 300,000 Hmong people in the US, and she spent hours phone-banking and working on ads to run on Hmong radio and TV channels. It was inspiring work. “This was an entirely new thing for me to see,” she says. “Young, progressive, primarily women doing this work in our community was just so rare, and I knew it was going to be a huge feat.” And by all accounts it was. Asian-American turnout in the 2020 election in general was extraordinary, and observers say turnout among Hmong citizens was the highest they can remember. 

But Xiong says it was also incredibly disheartening. 

While Hmong people have long ties to the US—many were encouraged to migrate across the Pacific after being recruited to support the United States during the Vietnam War—they are often left out of mainstream political discourse. One example? On the website of Fresno’s county clerk, the government landing page for voter registration has an option to translate the entire page into Hmong—but, Xiong says, much of the information is mistranslated. 

And it starts right at the beginning. Instead of the Hmong word for “hello” or “welcome,” she says, the page greets visitors with “something else that said, like, ‘your honor’ or ‘the queen’ or ‘the king’ instead.” 

Seeing something so simple done incorrectly was frustrating and off-putting. “Not only was it just probably churned through Google Translate, it wasn’t even peer edited and reviewed to ensure that there was fluency and coherence,” she says.

Xiong says this kind of carelessness is common online—and it’s one reason she and others in the Hmong community can feel excluded from politics.

They aren’t the only ones with the sense that the digital world wasn’t built for them. The web itself, invented in America, is built on an English-first architecture, and most of the big social media platforms that host public discourse in the United States put English first too. 

And as technologies become proxies for civic spaces in the United States, the primacy of English has been magnified. For Asian-Americans, the move to digital means that access to democratic institutions—everything from voting registration to local news—is impeded by linguistic barriers. 

It’s an issue in health care as well. During the pandemic, when Black, Hispanic, and Native patients have been two to three times more likely to be hospitalized or die than white patients, these barriers add another burden: Brigham and Women’s Hospital in Boston found that non-English-speaking patients were 35% more likely to die of covid than those who spoke English. Translation problems are not the only issue. Xiong says that when Hmong users were trying to make vaccine appointments, they were asked for their zodiac sign as a security question—despite the fact that many in this community are unfamiliar with Western astrology.

In normal times, overcoming these challenges would be complicated enough, since Asian-Americans are the most linguistically diverse ethnic group in America. But after a year that has seen a dramatic increase in real-world and online attacks on Asian-Americans, the situation has become urgent in a different way.

“They don’t catch misinformation”

Christine Chen, executive director of APIAVote, a nonprofit that promotes civic engagement among Asian people and Pacific Islanders, says that political life has always been “exclusionary” for Asian people in the US, but “with digital spaces, it’s even more challenging. It’s so much easier to be siloed.” 

Big platforms like Facebook, Twitter, and YouTube are popular among Asian-Americans, as are messaging apps like WeChat, WhatsApp, and Line. Which communication channels people use often depends on their ethnicity. During the election campaign, Chen focused on building a volunteer network that could move in and out of those siloes to achieve maximum impact. At the time, disinformation targeting Asian-Americans ran rampant in WeChat groups and on Facebook and Twitter, where content moderation is less effective in non-English languages. 

APIAVote volunteers would join different groups on the various platforms to monitor for disinformation while encouraging members to vote. Volunteers found that Vietnamese-Americans, for example, were being targeted with claims that Joe Biden was a socialist, preying on their fears of communism—messaging similar to that pushed at Cuban-Americans.

Chen says that while content moderation policies from Facebook, Twitter, and others succeeded in filtering out some of the most obvious English-language disinformation, the system often misses such content when it’s in other languages. That work instead had to be done by volunteers like her team, who looked for disinformation and were trained to defuse it and minimize its spread. “Those mechanisms meant to catch certain words and stuff don’t necessarily catch that dis- and misinformation when it’s in a different language,” she says.

Google’s translation services and technologies such as Translatotron and real-time translation headphones use artificial intelligence to convert between languages. But Xiong finds these tools inadequate for Hmong, a deeply complex language where context is incredibly important. “I think we’ve become really complacent and dependent on advanced systems like Google,” she says. “They claim to be ‘language accessible,’ and then I read it and it says something totally different.” 

(A Google spokesperson admitted that smaller languages “pose a more difficult translation task” but said that the company has “invested in research that particularly benefits low-resource language translations,” using machine learning and community feedback.)

All the way down

The challenges of language online go beyond the US—and down, quite literally, to the underlying code. Yudhanjaya Wijeratne is a researcher and data scientist at the Sri Lankan think tank LIRNEasia. In 2018, he started tracking bot networks whose activity on social media encouraged violence against Muslims: in February and March of that year, a string of riots by Sinhalese Buddhists targeted Muslims and mosques in the cities of Ampara and Kandy. His team documented “the hunting logic” of the bots, catalogued hundreds of thousands of Sinhalese social media posts, and took the findings to Twitter and Facebook. “They’d say all sorts of nice and well-meaning things—basically canned statements,” he says. (In a statement, Twitter says it uses human review and automated systems to “apply our rules impartially for all people in the service, regardless of background, ideology, or placement on the political spectrum.”)

When contacted by MIT Technology Review, a Facebook spokesperson said the company commissioned an independent human rights assessment of the platform’s role in the violence in Sri Lanka, which was published in May 2020, and made changes in the wake of the attacks, including hiring dozens of Sinhala- and Tamil-speaking content moderators. “We deployed proactive hate speech detection technology in Sinhala to help us more quickly and effectively identify potentially violating content,” they said.

When the bot behavior continued, Wijeratne grew skeptical of the platitudes. He decided to look at the code libraries and software tools the companies were using, and found that the mechanisms to monitor hate speech in most non-English languages had not yet been built. 

“Much of the research, in fact, for a lot of languages like ours has simply not been done yet,” Wijeratne says. “What I can do with three lines of code in Python in English literally took me two years of looking at 28 million words of Sinhala to build the core corpuses, to build the core tools, and then get things up to that level where I could potentially do that level of text analysis.”
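Wijeratne’s point can be made concrete with a toy sketch in Python (hypothetical, not his actual code): for English, published lexicons and tools reduce a crude hate-speech check to a few lines, while for a low-resource language the prerequisite corpus and word lists must first be built from scratch.

```python
from collections import Counter

# For English, off-the-shelf resources make a crude check a few lines of work.
# This tiny set stands in for a published hate-speech lexicon.
ENGLISH_LEXICON = {"vermin", "invaders", "traitors"}

def flag_post(post: str, lexicon: set[str]) -> bool:
    """Flag a post if it contains any lexicon term."""
    return any(token in lexicon for token in post.lower().split())

# For a low-resource language, step one is years of groundwork: collecting
# raw text and building frequency tables before any labeling can begin.
def build_frequencies(documents: list[str]) -> Counter:
    counts: Counter = Counter()
    for doc in documents:
        counts.update(doc.split())
    return counts

posts = ["They are vermin", "see you at the market"]
print([flag_post(p, ENGLISH_LEXICON) for p in posts])  # [True, False]
```

The English path is trivial only because someone already did the corpus work; for Sinhala, `build_frequencies` over millions of collected words is where the two years go.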

After suicide bombers targeted churches in Colombo, the Sri Lankan capital, in April 2019, Wijeratne built a tool to analyze hate speech and misinformation in Sinhala and Tamil. The system, called Watchdog, is a free mobile application that aggregates news and attaches warnings to false stories. The warnings come from volunteers who are trained in fact-checking. 

Wijeratne stresses that this work goes far beyond translation. 

“Many of the algorithms that we take for granted that are often cited in research, in particular in natural-language processing, show excellent results for English,” he says. “And yet many identical algorithms, even used on languages that are only a few degrees of difference apart—whether they’re West Germanic or from the Romance tree of languages—may return completely different results.” 

Natural-language processing is the basis of automated content moderation systems. Wijeratne published a paper in 2019 that examined the discrepancies between their accuracy in different languages. He argues that the more computational resources that exist for a language, like data sets and web pages, the better the algorithms can work. Languages from poorer countries or communities are disadvantaged.
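One reason identical algorithms diverge is that their most basic assumptions are language-specific. A toy illustration (not drawn from Wijeratne’s paper): the whitespace tokenization that underpins many English pipelines fails outright on scripts such as Lao, which is written without spaces between words.

```python
# The whitespace tokenizer behind many English NLP pipelines silently
# breaks on scripts written without spaces between words.
def naive_tokenize(text: str) -> list[str]:
    return text.split()

english = "machine learning needs data"
# An illustrative Lao phrase; Lao script does not separate words with spaces.
lao = "ການຮຽນຮູ້ຂອງເຄື່ອງຈັກຕ້ອງການຂໍ້ມູນ"

print(len(naive_tokenize(english)))  # 4 usable tokens
print(len(naive_tokenize(lao)))      # 1 "token": the entire sentence
```

Every downstream step—counting words, matching lexicons, training classifiers—inherits this failure, which is why tooling for such languages has to start at segmentation itself.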

“If you’re building, say, the Empire State Building for English, you have the blueprints. You have the materials,” he says. “You have everything on hand and all you have to do is put this stuff together. For every other language, you don’t have the blueprints.

“You have no idea where the concrete is going to come from. You don’t have steel and you don’t have the workers, either. So you’re going to be sitting there tapping away one brick at a time and hoping that maybe your grandson or your granddaughter might complete the project.”

Deep-seated issues

The movement to provide those blueprints is known as language justice, and it is not new. The American Bar Association describes language justice as a “framework” that preserves people’s rights “to communicate, understand, and be understood in the language in which they prefer and feel most articulate and powerful.” 

The path to language justice is tenuous. Technology companies and government service providers would have to make it a much higher priority and invest many more resources into its realization. And, Wijeratne points out, racism, hate speech, and exclusion targeting Asian people, especially in the United States, existed long before the internet. Even if language justice could be achieved, it’s not going to fix these deep-seated issues.

But for Xiong, language justice is an important goal that she believes is crucial for the Hmong community. 

After the election, Xiong took on a new role with her organization, seeking to connect California’s Hmong community with public services such as the Census Bureau, the county clerk, and vaccine registration. Her main objective is to “meet the community where they are,” whether that’s on Hmong radio or in English via Facebook live, and then amplify the perspective of Hmong people to the broader public. But every day she has to face the imbalances in technology that shut people out of the conversation—and block them from access to resources. 

Equality would mean “operating in a world where interpretation and translation is just the norm,” she says. “We don’t ask whether there’s enough budgeting for it, we don’t question if it’s important or it’s valuable, because we prioritize it when it comes to the legislative table and public spaces.”




If you don’t want robotic dogs patrolling the streets, consider CCOPS legislation


Boston Dynamics’ robot “dogs,” or similar versions thereof, are already being employed by police departments in Hawaii, Massachusetts, and New York. Partly under the veil of experimentation, these police forces have offered few answers about the benefits and costs of using these powerful surveillance devices.

The American Civil Liberties Union, in a position paper on CCOPS (community control over police surveillance), proposes an act to promote transparency and protect civil rights and liberties with respect to surveillance technology. To date, 19 U.S. cities have passed CCOPS laws, which means, in practical terms, that virtually all other communities have no requirement that police be transparent about their use of surveillance technologies.

For many, this ability to use new, unproven technologies in a broad range of ways presents a real danger. Stuart Watt, a world-renowned expert in artificial intelligence and the CTO of Turalt, is not amused.

Even seemingly fun and harmless “toys” have all the necessary functions and features to be weaponized.

“I am appalled both by the principle of the dogbots and by them in practice. It’s a big waste of money and a distraction from actual police work,” he said. “Definitely communities need to be engaged with. I am honestly not even sure what the police forces think the whole point is. Is it to discourage through a physical surveillance system, or is it to actually prepare people for some kind of enforcement down the line?

“Chunks of law enforcement have forgotten the whole ‘protect and serve’ thing, and do neither,” Watt added. “If they could use artificial intelligence to actually protect and actually serve vulnerable people, the homeless, folks addicted to drugs, sex workers, those in poverty and maligned minorities, it’d be tons better. If they have to spend the money on AI, spend it to help people.”

The ACLU is advocating exactly what Watt suggests. In proposed language to city councils across the nation, the ACLU makes it clear that:

The City Council shall only approve a request to fund, acquire, or use a surveillance technology if it determines the benefits of the surveillance technology outweigh its costs, that the proposal will safeguard civil liberties and civil rights, and that the uses and deployment of the surveillance technology will not be based upon discriminatory or viewpoint-based factors or have a disparate impact on any community or group.

From a legal perspective, Anthony Gualano, a lawyer and special counsel at Team Law, believes that CCOPS legislation makes sense on many levels.

“As police increase their use of surveillance technologies in communities around the nation, and the technologies they use become more powerful and effective to protect people, legislation requiring transparency becomes necessary to check what technologies are being used and how they are being used.”

For those worried not only about this Boston Dynamics dog but about all future incarnations of this supertech canine, the current legal climate is problematic because it essentially allows our communities to be testing grounds for Big Tech and Big Government to find new ways to engage.

Just last month, public pressure forced the New York Police Department to suspend use of a robotic dog, quite unassumingly named Digidog. The NYPD had deployed the tech hound at a public housing building in March; that went over about as well as you could expect, and the ensuing pushback led to discussions as to the immediate fate of this technology in New York.

The New York Times phrased it perfectly, observing that “the NYPD will return the device earlier than planned after critics seized on it as a dystopian example of overly aggressive policing.”

While these bionic dogs are powerful enough to take a bite out of crime, the police forces seeking to use them have a lot of public relations work to do first. A great place to begin would be for the police to actively and positively participate in CCOPS discussions, explaining what the technology involves, and how it (and these robots) will be used tomorrow, next month and potentially years from now.


Bird Rides to go public via SPAC, at an implied value of $2.3B


Bird Rides, the shared electric scooter startup that operates in more than 100 cities across three continents, said Wednesday it is going public by merging with special purpose acquisition company Switchback II at an implied valuation of $2.3 billion. The announcement confirms earlier reports, including one this week from dot.la, that Bird intended to go public via a SPAC.

Bird said it raised $106 million in a private investment in public equity, or PIPE, from institutional investor Fidelity Management & Research Company LLC and others. Apollo Investment Corp. and MidCap Financial Trust provided an additional $40 million in asset financing.

The transaction will enable the combined entity to retain net proceeds of up to $428 million of cash, according to Switchback, which brings $316 million cash-in-trust to the table. The announcement also provided new information about a previously undisclosed $208 million, which Bird raised privately as part of an April 2021 Senior Preferred Convertible equity offering led by Bracket Capital, Sequoia Capital and Valor Equity Partners.

When and how Bird would go public had been a subject of speculation since Bloomberg reported last November that the company had received “inbound interest” from SPACs.

Bird’s ride has been bumpy at times. In 2020, revenue dropped 37% from the previous year, to $95 million. That year the company also laid off around 30% of its workforce, 406 people, for cost-saving reasons. The company may use this new access to cash to expand its European operations and pay off debt.

Most importantly, the new injection of cash may help the company finally achieve profitability. That would make it a rarity among scooter startups, which face notoriously high overhead.

Special purpose acquisition companies, or SPACs, have become a popular route for going public among transportation startups. Already this year, scooter company Helbiz, which is based in Europe and the U.S., went public via SPAC in a merger with GreenVision Acquisition Corp. SPAC shell corporations allow companies to list on the NASDAQ without doing a traditional initial public offering.


Only three days left to buy $99 passes to TC Disrupt 2021


The countdown clock keeps on ticking, and you have just three days to secure your $99 pass to TechCrunch Disrupt 2021. You read that right — $99 is all that you’ll pay, $99 is all (everybody sing)!

Silly Minions aside, you’ll snag serious savings if you buy your Disrupt 2021 pass before the deadline expires on May 14 at 11:59 pm (PT).

TechCrunch Disrupt is a massive gathering of the tech startup world’s top leaders, innovators, makers, investors, founders and ground breakers. The all-virtual platform means more global participation and exposure. It’s all designed to help early-stage founders — and the people who invest in them — build a thriving business.

The Disrupt stage features in-depth interviews and panel discussions with a who’s-who of tech talent. The Extra Crunch stage is where you’ll find a deep bench of subject-matter experts sharing practical how-to content. You’ll take away actionable insights you can put into practice now — when you need it most. Check out our roster of speakers — we’re adding more every week.

Granted, we might be a tad biased about Disrupt — of course we think it’s awesome. But your contemporaries recognize its value, too. Here’s what a few of them told us about their experience at Disrupt 2020.

There was always something interesting going on in one of the breakout rooms, and I was impressed by the quality of the people participating. Partners in well-known VC firms spoke, they were accessible, and they shared smart, insightful nuggets. You will not find this level of people accessible and in one place anywhere else. — Michael McCarthy, CEO, Repositax.

I loved the variety of topics and learning about recent technology trends as they’re happening. Disrupt gave me a whole new perspective on the ways innovation happens in big companies. — Anirudh Murali, co-founder and CEO, Economize.

Watching the Startup Battlefield was fantastic. You could see the ingenuity and innovation happening in different technology spaces. Just looking at the sheer number of other pitch decks and hearing the judges tear them down and give feedback was very helpful. — Jessica McLean, Director of Marketing and Communications, Infinite-Compute.

If watching Startup Battlefield is thrilling (and it is), imagine what it would feel like to compete — or to win. We’re still accepting applications but not for long. Want to take a shot at winning $100,000? Apply to compete in Startup Battlefield before May 13 at 11:59 pm (PT).

There’s so much more opportunity waiting for you at Disrupt 2021. Explore Startup Alley, our expo area. Better yet, exhibit there yourself and, in addition to a bunch of other perks, you might be one of only 50 exhibiting startups chosen to participate in the Startup Alley+ VIP experience. Read more about Startup Alley+ here. TechCrunch will notify selected startups at the end of June.

Time is running out, and $99 is all that you’ll pay — if you buy your Disrupt 2021 pass before Friday, May 14 at 11:59 pm (PT).

Is your company interested in sponsoring or exhibiting at Disrupt 2021? Contact our sponsorship sales team by filling out this form.

