Of course you could have seen this coming

Maybe you saw this coming nearly a decade ago, when #YourSlipIsShowing laid bare how racist Twitter users were impersonating Black women on the internet. Maybe, for you, it was during Gamergate, the online abuse campaign targeting women in the industry. Or maybe it was the mass shooting in Christchurch, when a gunman steeped in the culture of 8chan livestreamed himself murdering dozens of people. 

Maybe it was when you, or your friend, or your community, became the target of an extremist online mob, and you saw online anger become real world danger and harm. 

Or maybe what happened on Wednesday, when a rabble of internet-fuelled Trump supporters invaded the Capitol, came as a surprise.

For weeks they had been planning their action in plain sight on the internet—but they have been showing you who they are for years. The level of shock you feel right now about the power and danger of online extremism depends on whether you were paying attention. 

The consequences of inaction

The mob that tried to block Congress from confirming Joe Biden’s presidential victory showed how the stupidity and danger of the far-right internet could come into the real world again, but this time it struck at the center of the US government. Neo-Nazi streamers weren’t just inside the Capitol, they were putting on a show for audiences of tens of thousands of people who egged them on in the chats. The mob was having fun doing memes in the halls of American democracy as a woman—a Trump supporter whose social media history shows her devotion to QAnon—was killed trying to break into Congressional offices.

The past year, especially since the pandemic, has been one giant demonstration of the consequences of inaction; the consequences of ignoring the many, many people who have been begging social media companies to take the meme-making extremists and conspiracy theorists that have thrived on their platforms seriously. 

Facebook and Twitter acted to slow the rise of QAnon over the summer, but only after the pro-Trump conspiracy theory was able to grow relatively unrestricted there for three years. Account bans and algorithm tweaks have long been too little, too late to deal with racists, extremists and conspiracy theorists, and they have rarely addressed the fact that these powerful systems were working exactly as intended.    

For a story in October, I spoke with a small handful of the people who could have told you this was coming. Researchers, technologists, and activists told me that major social media companies have, for the entirety of their history, chosen to do nothing, or to act only after their platforms cause abuse and harm.

Ariel Waldman tried to get Twitter to meaningfully address abuse there in 2008. Researchers like Shafiqah Hudson, I’Nasah Crockett, and Shireen Mitchell have spent years tracking exactly how harassment works and finds an audience on these platforms. Whitney Phillips talked about how she’s haunted by laughter—not just from other people, but also her own—from the earliest days of her research into online culture and trolling, when overwhelmingly white researchers and personalities treated the extremists among them as edgy curiosities.

Ellen Pao, who briefly served as CEO of Reddit in 2014 and stepped down after introducing the platform’s first anti-harassment policy, was astonished that Reddit had only banned r/The_Donald in June 2020, after evidence had built for years to show that the popular pro-Trump message board served as an organizing space for extremists and a channel for mob abuse. Of course, by the time it was banned, many of its users had already migrated away from Reddit to TheDonald.win, an independent forum created by the same people who ran the previous version. Its pages were filled with dozens of calls for violence ahead of Wednesday’s rally-turned-attempted-coup.

Banning Trump doesn’t solve the issue

Facebook, Twitter, and YouTube didn’t create conspiracy thinking, or extremist ideologies, of course. Nor did they invent the idea of dangerous personality cults. But these platforms have—by design—handed those groups the mechanisms to reach much larger audiences much faster, and to recruit and radicalize new converts, even at the expense of the people and communities those ideologies target for abuse. And crucially, even when it was clear what was happening, they chose the minimal amount of change—or decided not to intervene at all. 

In the wake of the attempted coup on the Capitol building, people are again looking at the major social media companies to see how they respond. The focus is on Trump’s personal accounts, which he used to encourage supporters to descend on DC and then praised them when they did. Will he be banned from Twitter? There are compelling arguments for why he should. 

But as heavy and consequential as that would be, it’s also, in other ways… not. Abuse, harassment, conspiracy thinking, and racism will still be able to benefit from social media companies that remain interested in only acting when it’s too late, even without Trump retweeting them and egging them on. 

Facebook has banned Trump indefinitely, and has also increased its moderation of groups, where a lot of conspiracy-fueled activity lives. These changes are good, but again, not new: people have told Facebook about this for years; Facebook employees have told Facebook about this for years. Groups were instrumental in organizing Stop the Steal protests in the days after the election, and before that, in anti-mask protests, and before that in spreading fake news, and before that as a central space for anti-vaccine misinformation. None of this is new.

There are only so many ways to say that more people should have listened. If you’re paying attention now, maybe you’ll finally start hearing what they say. 

Elon Musk says Tesla Semi is ready for production, but limited by battery cell output

Tesla CEO Elon Musk said on the company’s 2020 Q4 earnings call that all engineering work is now complete on the Tesla Semi, the freight-hauling semi truck the company is building with an all-electric powertrain. Tesla expects to begin deliveries of the Semi this year, according to its Q4 earnings release, and Musk said the only thing limiting production now is the availability of battery cells.

“The main reason we have not accelerated new products – like for example Tesla Semi – is that we simply don’t have enough cells for it,” Musk said. “If we were to make the Semi right now, and we could easily go into production with the Semi right now, but we would not have enough cells for it.”

Musk added that the company does expect to have sufficient cell volume to meet its needs once it goes into production on its 4680 battery cell, a new custom cell design it created with a so-called ‘tabless’ architecture that allows for greater energy density and therefore range.

“A Semi would use typically five times the number of cells that a car would use, but it would not sell for five times what a car would sell for, so it kind of would not make sense for us to do the Semi right now,” Musk said. “But it will absolutely make sense for us to do it as soon as we can address the cell production constraint.”
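
To make the arithmetic in that quote concrete, here is a minimal sketch of the per-cell revenue argument. Only the five-to-one cell ratio comes from Musk’s comment; the prices are placeholder assumptions, not Tesla figures.

```python
# Hypothetical illustration of the per-cell argument: if a Semi needs ~5x the
# cells of a car but doesn't sell for ~5x the price, every cell allocated to a
# Semi earns less revenue while cells are the binding constraint.
# All prices below are made-up placeholders, not Tesla figures.

car_cells = 1.0        # normalize: one "unit" of cells per car
semi_cells = 5.0       # Musk: a Semi uses roughly five times a car's cells
car_price = 50_000     # hypothetical car price (USD)
semi_price = 180_000   # hypothetical Semi price (USD)

print(f"Car:  ${car_price / car_cells:,.0f} revenue per cell unit")
print(f"Semi: ${semi_price / semi_cells:,.0f} revenue per cell unit")
# With these placeholder numbers, each scarce cell earns less in a Semi than in
# a car, which is the economic reason to wait until cell supply catches up.
```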

The same constraint applies to the possibility of Tesla developing a van, Musk added, and once it lifts, the company will likewise be able to pursue that category of vehicle.

Tesla has big plans for “exponentially” ramping cell production, with a goal of having production capacity infrastructure in place for a total of 200 gigawatt-hours per year by 2022, and a target of actually producing around 40% of that in that year (with future process improvements gradually adding more cell capacity thereafter).
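
For readers keeping score, the output implied by those figures is simple to work out; a minimal sketch using only the numbers stated above:

```python
# Tesla's stated targets from the paragraph above:
capacity_gwh_2022 = 200   # production capacity infrastructure by 2022 (GWh/year)
utilization = 0.40        # share Tesla targets actually producing that year

implied_output_gwh = capacity_gwh_2022 * utilization
print(f"Implied 2022 cell output: ~{implied_output_gwh:.0f} GWh")  # ~80 GWh
```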

Pro-Trump Twitter figure arrested for spreading vote-by-text disinformation in 2016

The man behind a once-influential pro-Trump account is facing charges of election interference for allegedly disseminating voting disinformation on Twitter in 2016.

Federal prosecutors allege that Douglass Mackey, who used the name “Ricky Vaughn” on Twitter, encouraged people to cast their ballot via text or on social media, effectively tricking others into throwing away those votes.

According to the Justice Department, 4,900 unique phone numbers texted a phone number Mackey promoted in order to “vote by text.” BuzzFeed reported the vote-by-text scam at the time, noting that many of the images were photoshopped to look like official graphics from Hillary Clinton’s presidential campaign.

Some of those images appeared to specifically target Black and Spanish-speaking Clinton supporters, a motive consistent with the account’s history of white supremacist and anti-Semitic content. The account was suspended in November 2016.

At the time, the mysterious account quickly gained traction in the political disinformation ecosystem. Two years later, HuffPost revealed that the account was run by Mackey, the son of a lobbyist.

“… His talent for blending far-right propaganda with conservative messages on Twitter made him a key disseminator of extremist views to Republican voters and a central figure in the ‘alt-right’ white supremacist movement that attached itself to Trump’s coattails,” HuffPost’s Luke O’Brien reported.

Mackey, a West Palm Beach resident, was taken into custody Wednesday in Florida.

“There is no place in public discourse for lies and misinformation to defraud citizens of their right to vote,” Acting U.S. Attorney for the Eastern District of New York Seth D. DuCharme said.

“With Mackey’s arrest, we serve notice that those who would subvert the democratic process in this manner cannot rely on the cloak of Internet anonymity to evade responsibility for their crimes.”

Tesla is willing to license Autopilot and has already had “preliminary discussions” about it with other automakers

Tesla is open to licensing its software, including Autopilot, its highly automated driving technology, and the neural network training it has built to improve it. Tesla CEO Elon Musk revealed those considerations on the company’s Q4 earnings call on Wednesday, adding that the company has in fact already “had some preliminary discussions about licensing Autopilot to other OEMs.”

The company began rolling out its beta version of the so-called ‘full self-driving’ or FSD version of Autopilot late last year. The standard Autopilot features available in general release provide advanced driver assistance (ADAS), essentially advanced cruise control capabilities designed primarily for use in highway commutes. Musk said on the call that he expects the company will seek to prove out its FSD capabilities before entering into any licensing agreements, if it does end up pursuing that path.

Musk noted that Tesla’s “philosophy is definitely not to create walled gardens” overall, and pointed out that the company is planning to allow other automakers to use its Supercharger networks, as well as its autonomy software. He characterized Tesla as “more than happy to license” those autonomous technologies to “other car companies,” in fact.

One key technical hurdle on the way to demonstrating reliability far surpassing that of a standard human driver is transitioning the neural networks that operate in the cars and power their perception engines to video. That’s a full-stack transition across the system, away from basing it on neural nets trained on single cameras and single frames.
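
To give a rough sense of what that shift involves, here is a minimal, hypothetical sketch of the change in data shapes; it is not Tesla’s code or architecture, just an illustration that moving from per-frame inputs to clips adds a time dimension every stage of the pipeline has to handle.

```python
import numpy as np

# Hypothetical illustration only, not Tesla's stack: single-frame perception
# consumes batches of independent images, while video-based perception
# consumes short clips with an extra time axis.

batch, height, width, channels = 4, 480, 640, 3   # made-up camera resolution
clip_len = 16                                     # made-up frames per clip

frame_batch = np.zeros((batch, height, width, channels), dtype=np.float32)
video_batch = np.zeros((batch, clip_len, height, width, channels), dtype=np.float32)

print("single-frame batch:", frame_batch.shape)   # (4, 480, 640, 3)
print("video clip batch:  ", video_batch.shape)   # (4, 16, 480, 640, 3)
# Labeling tools, training jobs, and the in-car networks all have to be
# reworked to handle that extra axis, which is why it is a full-stack change.
```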

To this end, the company has developed video labeling software that has had “a huge effect on the efficiency of labeling,” with the ultimate aim of enabling automatic labeling. Musk (who isn’t known for modesty around his company’s achievements, it should be said) noted that Tesla believes “it may be the best neural net training computer in the world by possibly an order of magnitude,” adding that it’s also “something we can offer potentially as a service.”

Training on huge quantities of video data will help Tesla push the reliability of its software from 100% that of a human driver, to 200%, and eventually to “2,000% better than the average human,” Musk said, while again suggesting that it won’t be a technological achievement the company is interested in keeping to itself.
