That dreadful VPN might finally be dead thanks to Twingate, a new startup built by Dropbox alums

VPNs, or virtual private networks, are a mainstay of corporate network security (and a favorite of consumers trying to stream Netflix while pretending to be in other countries). A VPN creates an encrypted channel between your device (a laptop or a smartphone) and a company’s servers. All of your internet traffic gets routed through the company’s IT infrastructure, almost as if you were physically located inside your company’s offices.

Despite their ubiquity, though, VPNs have significant architectural flaws. Corporate networks and VPNs were designed on the assumption that most workers would be physically located in an office most of the time, with only the exceptional device connecting over VPN. As the pandemic has made abundantly clear, fewer and fewer people work in a physical office with a desktop computer attached to Ethernet. That means the vast majority of devices are now outside the corporate perimeter.

Worse, VPNs can have massive performance problems. By routing all traffic through one destination, VPNs not only add latency to your internet experience, they also transmit all of your non-work traffic through your corporate servers. From a security perspective, VPNs also assume that once a device joins the network, it is reasonably safe and secure. They don’t actively check network requests to make sure that every device is only accessing the resources that it should.

Twingate is fighting directly to defeat VPN in the workplace with an entirely new architecture that assumes zero trust, works as a mesh, and can segregate work and non-work internet traffic to protect both companies and employees. In short, it may dramatically improve the way hundreds of millions of people work globally.

It’s a bold vision from an ambitious trio of founders. CEO Tony Huie spent five years at Dropbox, heading up international and new market expansion in his final role at the file-sharing juggernaut, and was most recently a partner at venture capital firm SignalFire. Chief Product Officer Alex Marshall was a product manager at Dropbox before leading product at lab management platform Quartzy. Finally, CTO Lior Rozner was most recently at Rakuten, and before that at Microsoft.

Twingate founders Alex Marshall, Tony Huie, and Lior Rozner. Photo via Twingate.

The startup was founded in 2019, and is announcing today the public launch of its product as well as its Series A funding of $17 million from WndrCo, 8VC, SignalFire and Green Bay Ventures. Dropbox’s two founders, Drew Houston and Arash Ferdowsi, also invested.

The idea for Twingate came from Huie’s experience at Dropbox, where he watched its adoption in the enterprise and saw first-hand how collaboration was changing with the rise of the cloud. “While I was there, I was still just fascinated by this notion of the changing nature of work and how organizations are going to get effectively re-architected for this new reality,” Huie said. He iterated on a variety of projects at SignalFire, eventually settling on improving corporate networks.

So what does Twingate ultimately do? For corporate IT professionals, it connects an employee’s device to the corporate network much more flexibly than a VPN does. For instance, individual services or applications on a device can be set up to securely connect with different servers or data centers. Your Slack application can connect directly to Slack, and your JIRA site can connect directly to JIRA’s servers, all without the typical round trip to a central hub that a VPN requires.

That flexibility offers two main benefits. First, internet performance should be faster, since traffic is going directly where it needs to rather than bouncing through several relays between an end-user device and the server. Twingate also says that it offers “congestion” technology that can adapt its routing to changing internet conditions to actively increase performance.

More importantly, Twingate allows corporate IT staff to carefully calibrate security policies at the network layer to ensure that individual network requests make sense in context. For instance, if you are a salesperson in the field and suddenly start trying to access your company’s code server, Twingate can identify that request as highly unusual and outright block it.
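As a rough illustration, a context-aware policy check of this kind can be sketched in a few lines. This is a hypothetical toy, not Twingate's actual implementation; the role and resource names are invented:

```python
# Illustrative sketch of a context-aware, zero-trust access decision.
# Roles, resource names, and policy shape are all hypothetical.

ALLOWED_RESOURCES = {
    "sales": {"crm", "email", "slack"},
    "engineering": {"code-server", "ci", "slack"},
}

def evaluate_request(role: str, resource: str) -> str:
    """Allow a request only if this role normally accesses the resource."""
    allowed = ALLOWED_RESOURCES.get(role, set())
    return "allow" if resource in allowed else "block"

print(evaluate_request("sales", "crm"))          # allow
print(evaluate_request("sales", "code-server"))  # block: unusual for this role
```

The point is that the decision happens per request, at the network layer, rather than once at connection time the way a traditional VPN handshake works.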

“It takes this notion of edge computing and distributed computing [and] we’ve basically taken those concepts and we’ve built that into the software we run on our users’ devices,” Huie explained.

All of that customization and flexibility should be a huge win for IT staff, who get more granular controls to increase performance and safety, while also making the experience better for employees, particularly in a remote world where people in, say, Montana might be very far from an East Coast VPN server.

According to Huie, Twingate is designed to make onboarding new customers easy, although that almost certainly depends on the diversity of end users within the corporate network and the number of services each user has access to. Twingate integrates with popular single sign-on providers.

“Our fundamental thesis is that you have to balance usability, both for end users and admins, with bulletproof technology and security,” Huie said. With $17 million in the bank and a newly debuted product, the future is bright (and not for VPNs).

This is the most precise 3D map of the Milky Way ever made

Data collected by the European Space Agency’s Gaia observatory has been used to create the most detailed 3D map of the galaxy ever made. The new data set could help scientists unravel many mysteries about the universe’s expansion and the solar system’s future.

What is Gaia? Launched in 2013, the Gaia observatory is intended to observe as many of the galaxy’s stars as possible. It is designed to measure stellar positions, distances, motions, and brightness with more precision than any instrument before it, with the goal of cataloguing approximately 1 billion objects. Gaia observes each object about 70 times to track its motion and velocity over time, with an accuracy equivalent to measuring the width of a human hair from 2,000 kilometers away.

The new map: The latest data pinpoints the location and movements of just under 2 billion stars, with highly accurate measurements of about 300,000 stars within 326 light-years of the solar system. The new map shows that the solar system’s orbit around the Milky Way is accelerating: its velocity changes by about seven millimeters per second each year as its path curves toward the center of the Milky Way.

What could we learn? The point of the mission isn’t simply to get a glimpse of the galaxy’s stars in motion. The data could help astronomers answer a number of scientific questions, including how the Milky Way formed over time, where the solar system and other star systems are headed, what the expansion of the universe looks like, and how regular and dark matter are distributed throughout the galaxy. Previous Gaia data sets have been used to ascertain the mass of the Milky Way and how many sun-like stars might be orbited by Earth-like planets.

What’s next: Gaia will be operational until about 2022, but it’s holding up better than expected and could see its mission extend to 2024 or beyond. The final data release should catalogue more than 2 billion objects in the galaxy.

YouTube introduces new features to address toxic comments

YouTube today announced it’s launching a new feature that will push commenters to reconsider their hateful and offensive remarks before posting. It will also begin testing a filter that lets creators avoid reading some of the hurtful comments on their channel that have been automatically held for review. The new features are meant to address long-standing issues with the quality of comments on YouTube’s platform, a problem creators have complained about for years.

The company also said it will soon run a survey of creators whose data can help it better understand how some creators are disproportionately impacted by online hate and harassment.

The new commenting feature, rolling out today, is a significant change for YouTube.

The feature appears when users are about to post something offensive in a video’s comments section, warning them to “Keep comments respectful.” The message also points users to the site’s Community Guidelines if they’re unsure whether a comment is appropriate.

The pop-up then nudges users to click the “Edit” button and revise their comment by making “Edit” the more prominent choice on the screen that appears.

The feature will not actually prevent a user from posting their comment, however. If they want to proceed, they can click the “Post Anyway” option instead.

Image Credits: YouTube

Putting up roadblocks that give users time to pause and reconsider their words and actions is something several social media platforms are now trying.

For instance, Instagram last year launched a feature that would flag offensive comments before they were posted. It later expanded that to include offensive captions. Without providing data, the company claimed that these “nudges” were helping to reduce online bullying. Meanwhile, Twitter this year began to push users to read the article linked in tweets they were about to share before tweeting their reaction, and it stopped users from being able to retweet with just one click.

These intentional pauses built into the social platforms are designed to stop people from reacting to content with heightened emotion and anger, and instead push users to be more thoughtful in what they say and do. User interface changes like this leverage basic human psychology to work, and may even prove effective in some percentage of cases. But platforms have been hesitant to roll out such tweaks as they can stifle user engagement.

In YouTube’s case, the company tells TechCrunch its systems will learn what’s considered offensive based on what content gets repeatedly flagged by users. Over time, the system should be able to improve as the technology gets better at detection and the system itself is further developed.
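YouTube hasn't published details of how that learning works, but a toy flag-count heuristic illustrates the general idea. Everything below, including the threshold and the word-level scoring, is invented for illustration and is not YouTube's actual system:

```python
from collections import Counter

# Toy sketch: phrases that users repeatedly flag accumulate a score,
# and comments containing high-scoring phrases trigger a warning.
flag_counts = Counter()

def record_flag(comment: str) -> None:
    """A user flagging a comment raises the score of each word in it."""
    for word in comment.lower().split():
        flag_counts[word] += 1

def should_warn(comment: str, threshold: int = 3) -> bool:
    """Warn before posting if the comment contains frequently flagged words."""
    return any(flag_counts[w] >= threshold for w in comment.lower().split())

# Three users flag the same insult; similar comments now trigger the prompt.
for _ in range(3):
    record_flag("you are a clown")

print(should_warn("what a clown"))   # True
print(should_warn("great video"))    # False
```

A real system would use far more robust signals than per-word counts, but the feedback loop is the same: user flags become training data, and detection improves as more flags come in.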

English-language users on Android will see the new prompts first, starting today, Google says. The rollout will be completed over the next couple of days. The company did not offer a timeframe for bringing the feature to other platforms and languages, or even a firm commitment that such support would arrive in the future.

In addition, YouTube said it will also now begin testing a feature for creators who use YouTube Studio to manage their channel.

Creators will be able to try out a new filter that will hide the offensive and hurtful comments that have automatically been held for review.

Today, YouTube Studio users can choose to auto-moderate potentially inappropriate comments, which they can then manually review and choose to approve, hide or report. While it’s helpful to have these held, it’s still often difficult for creators to have to deal with these comments at all, as online trolls can be unbelievably cruel. With the filter, creators can avoid these potentially offensive comments entirely.

YouTube says it will also streamline its moderation tools to make the review process easier going forward.

The changes follow a year during which YouTube has been heavily criticized for not doing enough to combat hate speech and misinformation on its platform. The video platform’s “strikes” system for rule violations means that videos may be individually removed, but a channel itself stays online unless it collects enough strikes to be taken down. In practice, that means a YouTube creator could go as far as calling for government officials to be beheaded and still continue to use YouTube. (By comparison, that same threat led to an account ban on Twitter.)

YouTube claims it has increased the number of daily hate speech comment removals by 46x since early 2019. And in the last quarter, of the more than 1.8 million channels it terminated for violating its policies, more than 54,000 terminations were for hate speech. Those numbers point to a persistent problem with online discourse that likely influenced these new measures. Some would argue the platforms have a responsibility to do even more, but it’s a difficult balance.

In a separate move, YouTube said it’s soon introducing a new survey that will ask creators to voluntarily share with YouTube information about their gender, sexual orientation, race and ethnicity. Using the data collected, YouTube claims it will be able to better examine how content from different communities is treated in search, discovery and monetization systems.

It will also look for possible patterns of hate, harassment, and discrimination that could affect some communities more than others, the company explains. The survey will also give creators the option to participate in other initiatives that YouTube hosts, like #YouTubeBlack creator gatherings or FanFest.

The survey will begin in 2021 and was designed with input from creators and civil and human rights experts. YouTube says the collected data will not be used for advertising purposes, and creators will be able to opt out and delete their information entirely at any time.

Google’s co-lead of Ethical AI team says she was fired for sending an email

Timnit Gebru, a leading researcher and voice in the field of ethics and artificial intelligence, says Google fired her over an email she sent to her direct reports, one the company said reflected “behavior that is inconsistent with the expectations of a Google manager.”

Gebru, the co-lead of Google’s Ethical Artificial Intelligence team, took to Twitter last night, shortly after the National Labor Relations Board filed a complaint against Google alleging surveillance of employees and unlawful firings.

Gebru says no one explicitly told her she was fired, but that she received an email from one of her boss’s reports, saying:

“Thanks for making your conditions clear. We cannot agree to #1 and #2 as you are requesting. We respect your decision to leave Google as a result, and we are accepting your resignation.”

That email, according to Gebru, went on to say that “certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.”

It’s not clear what exactly was contained in the email. We’ve reached out to both Gebru and Google for comment.

As Bloomberg reported, Gebru has been outspoken about the lack of diversity in tech as well as the injustices Black people in tech face. According to Bloomberg, Gebru believes Google let her go to signal to other workers that it’s not ok to speak up.

In 2018, Gebru collaborated with Joy Buolamwini, founder of the Algorithmic Justice League, to study biases in facial recognition systems. They found large disparities in error rates between lighter-skinned males and darker-skinned females, leading to the conclusion that those systems didn’t work well for people with darker skin.

Since Gebru’s announcement, she’s received an outpouring of support from those in the tech community.

Developing…
