

How to avoid sharing bad information about the election



Election Day misinformation will have two big goals, with the emphasis shifting over the course of the day. First it will attempt to keep people away from the polls, and then it will undermine the integrity of the election results. As experts and reporters track, verify, and debunk the onslaught of online rumors about voting, it’s likely that you, the well-meaning feed refresher, will encounter quick-moving and questionable videos, claims, and dispatches before the work can be done to figure out their truthfulness and context. 

Although major, persistent conspiracy theories like QAnon have attached themselves strongly to the US right wing, anyone can be vulnerable to sharing misinformation. Moments of urgency or crisis can give misinformation more fuel, even among people who should know better. 

Misinformation is already overwhelming local election officials. President Trump has repeatedly questioned the legitimacy of mail-in voting, boosting narratives that feed the conspiracy-laden claims of a coup staged by Democrats. A candidate could claim victory before mail-in ballots are fully counted in states where those votes could change the result. The potential for extended uncertainty feeds concerns ranging from the prospect of voters’ being discouraged from going to their polling places to the possibility of violence.  

So how do you avoid the trap of sharing bad information when everything feels terrible and urgent? Here’s some election-specific advice, building on our existing guide to protecting yourself from misinformation. 

Your attention matters: “People often think that because they’re not influencers, they’re not politicians, they’re not journalists, that what they do [online] doesn’t matter,” Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University, told me. But it does. Sharing dubious online trash with even a small circle of friends and family can help something catch. Things can trend on Twitter when regular users join in and amplify something that is being engineered to gain attention. So, you know, give yourself some credit. 

So does your engagement: During an urgent and developing news story, well-meaning people may quote, tweet, share, or otherwise engage with a post on social media in order to challenge and condemn it. Companies like Twitter and Facebook have introduced a lot of new rules, moderation tactics, and fact-checking provisions to try to combat popular misinformation. But interacting with misinformation on social media at all risks amplifying the content you’re trying to minimize, by signaling to the platform that the thing you’re interacting with is interesting. 

Many of the things that experts expect to see spreading on Election Day are going to be interesting but false. Misinformation’s goal is to “evoke an emotion,” says Shireen Mitchell, the founder of Stop Online Violence Against Women, who has long studied how misinformation targets and harms Black communities online. “The minute it evokes an emotion, you have to hit pause.”

Identify (and share!) some reliable sources and context beforehand: Lyric Jain, the founder of the UK-based verification app Logically, said he’s found it can be helpful to know “where your north star is” in the information landscape: figure out “which organizations and which publishers you can more or less trust most of the time.” 

You can help others, too.

“Right now, you can start spreading good information on what to expect. Right now. We know the sorts of things that are likely to happen,” Mike Caulfield, a digital literacy expert, told me. We know, for instance, that vote counting procedures vary state by state, which will factor into the uncertainty around the final results. On Election Day, “when people try to spin these events, the people in your network say ‘Oh, I remember Jill sharing something about that.’” 

Do your research, but carefully: One of the tricky things about telling people to research the things they see online for themselves is that there are a ton of traps out there waiting for would-be truth seekers. It’s not just about doing your research; it’s about doing it well, learning how to put isolated bits of information into context, and not trusting unreliable sources with good Google optimization. 

Caulfield’s method for addressing misinformation has the helpful acronym SIFT: “Stop. Investigate the source. Find better coverage. Trace claims, quotes, and media to the original context.” When I asked him how SIFT might apply to election week, he offered a useful tweak: 

“There’s gonna be some stuff that you’re just not going to be able to check in the moment,” he said. Realistically speaking, most people aren’t going to—and shouldn’t—turn themselves into fact-checking operations to address all these claims as they filter through their feeds. 

“I really would encourage people to think about their role in social media as not running around sharing things they can’t verify, or arguing back and forth with a bunch of people who don’t want to hear what they say,” Caulfield said. Instead, you can share things that provide context and clarity: resources on your state’s voting rules, information on how the swell of mail-in votes is being handled in key states, or reporting on when, based on those rules, experts expect final tallies. Although many things are uncertain about this week, experts have also flagged some long-running misinformation threads designed to undermine the integrity of the elections. Some of those have already been debunked. 

Don’t become a bot hunter: Although the threat of misinformation on Election Day is serious, “I don’t want people to spend their day online thinking that everybody is a Russian troll,” says Camille Francois, the chief innovation officer at the social-network analysis company Graphika. And it’s not just because there are people like her who are literally experts in tracking and understanding how cyber conflict works. Foreign actors, she notes, often depend on people believing their false or exaggerated claims of interference and impact, so “if [you] see a foreign actor come and say they successfully hacked the election, [you] should take it with a grain of salt.” 

“Stay away from becoming a bot hunter or fact checker,” says Angelo Carusone, the president of Media Matters for America, a left-leaning media watchdog group that has been tracking far-right misinformation campaigns. 

People can do two things instead, he says. “They can slow the speed of lies, but they can also slow the speed at which tensions escalate.” 

Beware of your expectations: Multiple experts I spoke with expressed variations on a similar concern: that well-intended efforts to highlight election-related violence could lead people to believe violent unrest is significantly more widespread than it actually is. This leaves some organizations with a quandary: how to inform the public without overhyping isolated incidents or ignoring moments that deserve wider concern and attention. 

“I don’t think we’re going to get this one right. I think we’re going to err on the side of over-discussing violence,” Carusone says. “I think that it’s almost impossible to talk about the threat of misinformation, disinformation, or the integrity of the election without the specter of violence.” 

But you should be careful about any claim in this area. It’s really uncertain right now what, exactly, will happen in the days ahead, and you shouldn’t really trust anyone who is claiming to have the gift of prophecy here. That impulse, to settle on certainty, is one you should also keep an eye on within your own mind. 

Consider logging off: When every day feels like a year and every week feels like a century, it’s easy to get frayed and overloaded and burned out on the river of online content. While it’s important to pay attention to things like the future of democracy, it’s also a good idea to stop doomscrolling. 




YC-backed LemonBox raises $2.5M bringing vitamins to Chinese millennials



Like many overseas Chinese, Derek Weng gets shopping requests from his family and friends whenever he returns to China. Some of the most wanted imported products are maternity items, cosmetics, and vitamin supplements. Many in China still uphold the belief that “imported products are better.”

The demand gave Weng a business idea. In 2018, he founded LemonBox to sell American health supplements to Chinese millennials like himself via online channels. The company soon attracted seed funding from Y Combinator, and just this week it announced the completion of a pre-A round of $2.5 million led by Panda Capital and followed by Y Combinator.

LemonBox tries to differentiate itself from other import businesses on two levels — affordability and personalization. Weng, who previously worked at Walmart where he was involved in the retail giant’s China import business, told TechCrunch that he’s acquainted with a lot of American supplement manufacturers and is thus able to cut middleman costs.

“In China, most supplements are sold at a big markup through pharmacies or multi-level marketing companies like Amway,” Weng said. “But vitamins aren’t that expensive to produce. Amway and the likes spend a lot on marketing and sales.”

Inside LemonBox’s fulfillment center

LemonBox designed a WeChat-based lite app where users receive product recommendations after filling out a questionnaire about their health. Instead of selling by the bottle, the company tailors daily packs of assorted supplements to each user’s needs.

“If you are a vegetarian and travel a lot, and the other person smokes a lot, [your demands] are going to be very different. I wanted to customize user prescriptions using big data,” explained Weng, who studied artificial intelligence in business school.

A monthly basket of 30 B-complex tablets, for instance, costs 35 yuan ($5) on LemonBox. Amway’s counterpart product, a bottle of 120 tablets, sells for 229 yuan, or about 57 yuan ($9) per 30 tablets.
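As a rough check, the article’s own figures can be compared directly. A minimal sketch; the variable names are illustrative, and the prices are the ones quoted above:

```python
# Back-of-the-envelope check of the price comparison in the article.
amway_bottle_yuan = 229   # Amway bottle of 120 B-complex tablets
amway_tablets = 120
lemonbox_30_yuan = 35     # LemonBox monthly basket of 30 tablets

# Normalize Amway's price to a 30-tablet supply for an apples-to-apples view.
amway_per_30 = amway_bottle_yuan / amway_tablets * 30

print(round(amway_per_30))                        # ~57 yuan per 30 tablets
print(round(amway_per_30 / lemonbox_30_yuan, 1))  # Amway costs ~1.6x as much
```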

Selling cheaper vitamins is just a means for LemonBox to attract consumers and gather health insights into Chinese millennials, with which the company hopes to widen its product range. Weng declined to disclose the company’s customer size, but claimed that its user conversion rate is “higher than most e-commerce sites.”

With the new proceeds, LemonBox is opening a second fulfillment center in the Shenzhen free trade zone after its Silicon Valley-based one. That’s to provide more stability to its supply chain as the COVID-19 pandemic disrupts international flights and cross-border trade. Moreover, the startup will spend the money on securing health-related certificates and adding Japan to its sourcing regions.

Returnees adapt

Screenshot of Lemonbox’s WeChat-based store

In the decade or so when Weng was living in the U.S., the Chinese internet saw drastic changes and gave rise to an industry largely in the grip of Alibaba and Tencent. Weng realized he couldn’t simply replicate America’s direct-to-customer playbook in China.

“In the U.S., you might build a website and maybe an app. You will embed your service into Google, Facebook, or Instagram to market your products. Every continent is connected with one another,” said Weng.

“In China, it’s pretty significantly different. First off, not a lot of people use web browsers, but everyone is on mobile phones. Baidu is not as popular as Google, but everybody is using WeChat, and WeChat is isolated from other major traffic platforms.”

As such, LemonBox is looking to diversify beyond its WeChat store by launching a web version as well as a store through Alibaba’s Tmall marketplace.

“There’s a lot of learning to be done. It’s a very humbling experience,” said Weng.



Health tech venture firm OTV closes new $170 million fund and expands into Asia



OTV (formerly known as Olive Tree Ventures), an Israeli venture capital firm that focuses on digital health tech, announced it has closed a new fund totaling $170 million. The firm also opened a new office in Shanghai, China, to spearhead its growth in the Asia Pacific region.

OTV currently has a total of 11 companies in its portfolio. This year, it led rounds in telehealth platforms TytoCare and Lemonaid Health, and its other investments include genomic machine learning platform Emedgene; microscopy imaging startup Scopio; and at-home cardiac and pulmonary monitor Donisi Health. OTV has begun investing in more B and C rounds, with the goal of helping companies that already have validated products deal with regulations and other issues as they grow.

OTV focuses on digital health products that have the potential to work in different countries, make healthcare more affordable, and fill gaps in overwhelmed healthcare systems.

Jose Antonio Urrutia Rivas will serve as OTV’s Head of Asia Pacific, managing its Shanghai office and helping its portfolio companies expand in China and other Asian countries. This brings OTV’s offices to a total of four, with other locations in New York, Tel Aviv and Montreal. Before joining OTV, Urrutia Rivas worked at financial firm LarrainVial as its Asian market director.

OTV was founded in 2015 by general partners Mayer Gniwisch, Amir Lahat and Alejandro Weinstein. OTV partner Manor Zemer, who has worked in Asian markets for over 15 years and spent the last five living in Beijing, told TechCrunch that the firm decided it was the right time to expand into Asia because “digital health is already highly well-developed in many Asia-Pacific countries, where digital health products complement in-person healthcare providers, making that region a natural fit for a venture capital firm specializing in the field.”

He added that OTV “wanted to capitalize on how the COVID-19 pandemic has thrust the internationalized and interconnected nature of the world’s healthcare infrastructures into the limelight, even though digital health was a growth area long before the pandemic.”



WH’s AI EO is BS



An executive order was just issued from the White House regarding “the Use of Trustworthy Artificial Intelligence in Government.” Leaving aside the meritless presumption of the government’s own trustworthiness, and the implication that it is the software that has trust issues, the order is almost entirely hot air.

The EO is like others in that it is limited to what a president can peremptorily force federal agencies to do — and that really isn’t very much, practically speaking. This one “directs Federal agencies to be guided” by nine principles, which gives away the level of impact right there. Please, agencies — be guided!

And then, of course, all military and national security activities are excepted, which is where AI systems are at their most dangerous and oversight is most important. No one is worried about what NOAA is doing with AI — but they are very concerned with what three-letter agencies and the Pentagon are getting up to. (They have their own, self-imposed rules.)

The principles are something of a wish list. AI used by the feds must be:

lawful;
purposeful and performance-driven;
accurate, reliable, and effective;
safe, secure, and resilient;
understandable;
responsible and traceable;
regularly monitored;
transparent; and
accountable.

I would challenge anyone to find any significant deployment of AI, anywhere in the world, that is all of these things. Any agency claim that an AI or machine learning system it uses adheres to all these principles as they are detailed in the EO should be treated with extreme skepticism.

It’s not that the principles themselves are bad or pointless; it’s certainly important that an agency be able to quantify the risks before using AI for something, and that there be a process in place for monitoring its effects. But an executive order doesn’t accomplish this. Strong laws, likely starting at the city and state level, have already shown what it means to demand AI accountability, and though a federal law is unlikely to appear any time soon, a lightweight executive order is no replacement for a comprehensive bill. The order is just too hand-wavy on just about everything. Besides, many agencies adopted “principles” like these years ago.

The one thing the EO does in fact do is compel each agency to produce a list of all the uses to which it is putting AI, however it may be defined. Of course, it’ll be more than a year before we see that.

Within 60 days of the order, agencies must choose a format for this AI inventory; 180 days after that, the inventory must be completed; 120 days after that, it must be reviewed for consistency with the principles, and agencies must then “strive” to bring noncompliant systems in line within a further 180 days. Meanwhile, within 60 days of completing their inventories, agencies must share them with one another; within 120 days of completion, the inventories must be shared with the public (minus anything sensitive for law enforcement, national security, etc.).
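Summing those staggered deadlines shows why even the theoretical best case for a public inventory is roughly a year out. A quick sketch using only the day counts stated in the order; the variable names are illustrative:

```python
# Cumulative deadlines in the EO, counted in days from the order (day 0).
format_chosen      = 60                     # day 60: inventory format chosen
inventory_done     = format_chosen + 180    # day 240: inventory completed
shared_with_gov    = inventory_done + 60    # day 300: shared across agencies
shared_with_public = inventory_done + 120   # day 360: public release

print(shared_with_public)  # 360 days: roughly a year, and that's the best case
```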

In theory we might see those inventories in about a year, but in practice we’re looking at more like a year and a half, at which point we’ll have a snapshot of AI tools from the previous administration, with all the juicy bits taken out at the agencies’ discretion. Still, it might make for interesting reading, depending on what exactly goes into it.

This executive order is, like others of its ilk, an attempt by this White House to appear to be an active leader on something that is almost entirely out of its hands. AI should certainly be developed and deployed according to common principles, but even if those principles could be established in a top-down fashion, this loose, lightly binding gesture that kind-of, sort-of makes some agencies pinky-swear to think real hard about them isn’t the way to do it.
