Leveraging collective intelligence and AI to benefit society

A solar-powered autonomous drone scans for forest fires. A surgeon first operates on a digital heart before she picks up a scalpel. A global community bands together to print personal protection equipment to fight a pandemic.

“The future is now,” says Frédéric Vacher, head of innovation at Dassault Systèmes. And all of this is possible with cloud computing, artificial intelligence (AI), and a virtual 3D design shop, or as Dassault calls it, the 3DEXPERIENCE innovation lab. This open innovation laboratory embraces the concept of the social enterprise and merges collective intelligence with a cross-collaborative approach by building what Vacher calls “communities of people—passionate and willing to work together to accomplish a common objective.”

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff. 

“It’s not only software, it’s not only cloud, but it’s also a community of people’s skills and services available for the marketplace,” Vacher says. 

“Now, because technologies are more accessible, newcomers can also disrupt, and this is where we want to focus with the lab.” 

And for Dassault Systèmes, there are unlimited real-world opportunities with the power of collective intelligence, especially when you are bringing together industry experts, health-care professionals, makers, and scientists to tackle covid-19. Vacher explains, "We created an open community, 'Open Covid-19,' to welcome any volunteer makers, engineers, and designers to help, because we saw at that time that many people were trying to do things but on their own, in their lab, in their country." This wasted time and resources during a global crisis. And, Vacher continues, the urgency of working together to share information became obvious: "They were all facing the same issues, and by working together, we thought it could be an interesting way to accelerate, to transfer the know-how, and to avoid any mistakes."

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next. 

This episode of Business Lab is produced in association with Dassault Systèmes. 

Show notes and links 

“How Effective is a Facemask? Here’s a Simulation of Your Unfettered Sneeze,” by Josh Mings, SolidSmack, April 2, 2020 

“Open COVID-19 Community Lets Makers Contribute to Pandemic Relief,” by Clare Scott, The SIMULIA Blog, Dassault, July 15, 2020

Dassault 3DEXPERIENCE platform

“Collective intelligence and collaboration around 3D printing: rising to the challenge of Covid-19,” by Frédéric Vacher, STAT, August 10, 2020

Full Transcript 

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Our topic today is accelerating disruptive innovations to benefit society by building and running massive simulations. The world has big problems, and it’s going to take all of us to help solve them. Two words for you: collective intelligence.  

My guest is Frédéric Vacher, who is the head of innovation at Dassault Systèmes. He is a mechanical engineer who has had a long career at Dassault, first leading the partnership program and then launching the 3DEXPERIENCE Lab. This episode of Business Lab is produced in association with Dassault Systèmes. Frédéric, welcome to Business Lab. 

Frédéric Vacher: Good morning, Laurel. Good morning, everyone. 

Laurel: Could you start by first telling us a bit about Dassault Systèmes? I don't want listeners to confuse it with the aviation company, because we're talking about a 3D modeling and simulation enterprise that was founded almost 40 years ago and has more than 20,000 employees around the globe. 

Frédéric: Yeah, that is true. We are Dassault Systèmes, the 3DEXPERIENCE company. We have been digital since day one. Dassault Aviation is one of our clients—like all the aerospace companies—but our customers are also car, shipbuilding, consumer goods, and consumer packaged goods companies, and so on. We are a worldwide leader in providing digital solutions, from design and simulation to production, and we cover 11 industries. Our purpose is to harmonize product, nature, and life. 

For the past two years, we have helped our clients across industries to innovate by digitalizing and engineering their products, from very complex products to simpler ones. For the past 10 years, we have invested very strongly in two directions: nature and life, moving from things to life. 

Laurel: That is a complicated kind of process to sort of imagine. But with the 3DEXPERIENCE Lab, scientists and engineers can go in and build these cloud-based simulations for 3D modeling, digital twins, and products in a way that is really collaborative, taking advantage of that human system. Could you talk to us a bit more about why Dassault felt it was important to create this 3DEXPERIENCE Lab in a way that was so collaborative? 

Frédéric: We started the 3DEXPERIENCE Lab initiative five years ago to accelerate newcomers, very small actors, startups, and makers, as we believe that innovation is everywhere. For 40 years, we innovated with the aerospace and defense industry. For example, we established a partnership with Boeing on the 777, the first airplane that was fully digitized [made into a digital twin]. And not only the product, but all the processes and the factories. Now, because technologies are more accessible, newcomers can also disrupt, and this is where we want to focus with the lab. This lab targets open innovation, with startup accelerators empowering communities online: communities of people passionate and willing to work together to accomplish a common objective. 

Laurel: And because it is an open lab, anyone can participate. But you have created a specific program for startups. Could you tell us more about that program? 

Frédéric: Since the beginning, we have been identifying and sourcing startups whose disruptive products or projects can make a strong, real impact on society. This program provides those startups access to our software and the professional solutions that industry uses in its day-to-day activities. It also provides funds, access to this cloud platform for communities, and access to mentors. Mentors help them accelerate their development, providing know-how and knowledge. 

Laurel: That kind of access for startups is rather difficult to get, right? Because this kind of software is professional grade, it is expensive. They may not be able to afford it or understand that they even have access to it. But interestingly, it's not just the software companies and startups that would have access to it; it's also the people who work at Dassault, correct? 

Frédéric: Yeah, that is correct. Thanks to this 3DEXPERIENCE platform in the cloud, as you mentioned, we have 20,000 people worldwide in 140 countries. Those people are knowledgeable, as they support businesses in many industries in terms of technology and science. And those people, on a volunteering basis, can join a lab project as a coach or mentor to a startup. Thanks to these cloud platforms, they are not only discussing and providing some insight or information or guidance; they can really co-design with those guys. Like a Google document, where many people can work on the same document while being in different locations, this program enables us to work the same way, but on a digital mock-up. 

Laurel: People can kind of really visualize what you have in mind. The 3DEXPERIENCE Lab does two things. One, it creates a way for an enterprise to build an entire product as a 3D vision, incorporating feedback from the research lab, the factory floor, and the customer. So, all of the stakeholders can work in a single environment. Could you give us an example of that and how that works? 

Frédéric: In a single environment in the cloud, they can start by using some apps, maybe from CATIA or SolidWorks. They can do the engineering part of the job on the same data model they would use to perform their own simulations: any type of digital simulation that will help those guys enhance the engineering and the design of their product. Through that, they will optimize the design and then go to the manufacturing aspect, delivering all the processes needed, up to programming the machines. But that is, I would say, the standard way to operate.  

Now, on this platform, you also have access to marketplace services. This is particularly interesting for early-stage startups, as they struggle to find the right partner or the right supplier to manufacture something. Here, at the click of a button, they can source from millions of components that are available through qualified suppliers online, and just drag and drop a component into the project. 

They can access thousands of factories worldwide, where they can have their parts produced, managing all the business online between the two parties. And then, they may also have access to engineering services: if you want to do something but you don't have the skills to do it, you contract the job to a service bureau or qualified partners that can deliver it. So, it's not only software, it's not only cloud, but it's also a community of people's skills and services available through the marketplace. 

Laurel: And it really is a platform, right? To directly offer services and innovation from one company to another, in a way that’s very visual and hands-on, so you could actually almost demo the product before you buy it because you are in this 3DEXPERIENCE environment. How does that work? With an example from a company? Am I thinking of that in the correct way? 

Frédéric: You're correct. The complete digital project is done on the platform before the real product is produced. If you want to develop a new car or a new table or a new chair or a new lamp, you design everything in 3D. You simulate to make it robust, and then you do the engineering to make sure that the manufacturing will be fine, based on your manufacturing capacities or partners. And you can go one step further: you can produce the marketing operations, the advertising, the high-quality pictures you need for your flyers, even the video experience to record the commercials. So, the digital assets that are created at the beginning of the project, to engineer a new product, are now used not only for production, but also for communication, marketing, training, and so on. That means that those people in your marketing department can do the job in parallel and deliver all their deliverables, even if the physical product is not there yet. 

Laurel: How do companies feel about sharing some of this intellectual property ahead of time before the product is even developed? You must have to have very special philosophies and outlooks to want to do this, right? 

Frédéric: Yeah. The IP is very important for us and obviously for our clients. We deliver to each client a dedicated platform so that they are in a 100% secure network environment. This is true for the big guys like Boeing, Airbus in the aerospace industry or BMW, Tesla in the auto industry. But it's also true for smaller startups like we are talking about with this innovation lab. 

Laurel: The system really does bring together an enormous amount of complicated issues, including cybersecurity, as well as processing power, data science, artificial intelligence, but also that human intelligence. How does Dassault define collective intelligence? Why is that so important as a philosophy? 

Frédéric: It's key. Behind any project, behind any company, you have people, right? This is why, on this platform, you have all those baseline services to enable people to collaborate: not only to manage their project with sequences, milestones, task management, and so on, like any corporation would do, but now in a very agile way, for communities. To connect people, to help people work better together, to match skills and needs. This approach is obviously new for professionals, but these services were brought to the general public by social networks many years ago; we have applied them to innovation and engineering processes within a company. 

Laurel: You mentioned skills, and I think an interesting place to kind of look at it for a bit, is how do people transfer knowledge? And is this environment conducive to training and helping perhaps one group teach the other group how to perform basic tasks or understand a product better? Are you seeing that when companies work with the platform, they actually bring in everyone, including marketing? So, everyone can have a much better understanding of the entire product? 

Frédéric: Definitely. First, we share a common referential, so there is no loss in email exchange, in data exchange, and so on. Everyone works around a digital twin of the project that is accurate and up-to-date. Second, this platform enables you to capitalize knowledge and know-how, and that is very important, especially when seniors are retiring, to transfer the knowledge to new generations. We have seen in the past, especially in the aerospace industry, many fellows who have left their company have to come back because their knowledge is seen as critical to the process. Such a platform now allows companies to keep the knowledge inside and to transfer it from one generation to another. 

Laurel: So that idea of collective intelligence really does spread throughout an entire enterprise. The lab does take on a number of themes, including healthcare. Could you talk about a few of those ideas? 

Frédéric: Yeah. With the lab, as I said, we have main criteria to select a project: a strong, positive impact on society and a disruptive project that calls for collective intelligence. We are very selective, as we really want to think big. We want to accelerate about 10 projects a year globally. We heavily use data intelligence and our tools to scan and scroll through all the news on the web, new VC activity, the founding of new startups, in order to pick up the weak signals and the new trends and be able to identify those new innovators. We use the same platform to orchestrate this ideation process: having a small idea, nurturing and qualifying it, up to validating the idea coming from the startups with the lab community, which is able to challenge the project, give their insights and suggestions, and then vote. 

Every quarter, a new batch of startups presents their projects, pitching using the platform. All of the discussions on a project are kept as a record, with several experienced mentors giving their opinions, and then the committee votes; it includes our CEO himself, with a few members of the board validating the project based on all these discussions. So it's a very flexible process, and a very rapid one, considering we are a big company. In less than a few months we can orchestrate a completely new project.  

It's the complete reverse of building a PowerPoint document to validate a project. It's a very cool, inclusive methodology where every volunteer, every person who wants to contribute, is welcome to. And obviously, when validated, the startups get free access to our software and to the mentors that are recruited. Like on dating apps, we are matching mentors that have the expertise and skills with the needs of those startups and projects. 

Laurel: That's quite a benefit for a startup, for people to be matched with mentors and other innovators in their particular field. But to have Dassault's CEO so intimately involved in these processes? That is really quite astounding. 

Frédéric: It's huge. Even if a startup is not selected, we are working on the project, we are challenging the project with experts, and our CEO himself is challenging it. That is already important information for those guys and a huge value. To answer your question about the themes, we have three main themes that drive our sourcing: life, city infrastructure, and lifestyle and well-being. As I said, what we want is to positively impact society. We believe that the only progress is human. So those themes, as you understand, are directed toward a better world. 

Laurel: What's an example of one of these startups that have come to you? What are they working on? 

Frédéric: We have a huge variety of projects. For instance, we have amazing projects that are performing 3D printing of organs with patient-specific geometry reconstruction, in order to create a virtual twin of a patient. The goal is a simulator that surgeons can use to train before the real surgery in the operating room. It was one of the first startups we accelerated at the beginning of the lab: BioModex, a French startup. They started in Paris with two people and are now at 50, and they have also settled in Boston to connect with the life science community. And it is huge if you look at it, especially for neurosurgery: in some complex cases, the surgeon can train on your own digital twin before the real surgery. So, it reduces risk with higher efficiency. 

Another example is about mobility and drones. We are helping a young startup that is working on a solar autonomous drone. You remember, there is the story of Solar Impulse with Bertrand Piccard, a pioneer who did a world tour with a plane powered by the sun. The limit of that project was the pilot, because you cannot stay up too long without drinking or eating. A drone completely disrupts the concept. This solar autonomous drone is meant to perform missions like forest fire detection: a drone that can stay aloft and spot fires early in the process could also help with monitoring borders, coasts, or pipelines. We have been working on it for the past three years. Last summer, they did their first test flight: 12 hours powered by the sun, covering 600 kilometers. So, the first flight was a success, and there is a lot of potential in this project. It's called a drone, but it's more like a plane with two wings.  

The third one is a US-based company, SparkCharge. They are creating portable, ultra-fast charging units for electric vehicles. Two weeks ago, they were on Shark Tank on ABC and they won: they got funded by Mark Cuban. It's a huge success. 

Laurel: We should take a minute to define digital twin. A digital twin is a copy of a system that can be manipulated to experiment with different outcomes. Sort of like making a photocopy to preserve the original, but to be able to write on or make changes to the copy. In this case, having a digital twin for a medical procedure helps the surgeon walk through what she is going to do before she does it on a live patient.  

And the second idea of a solar autonomous drone, really a plane, because it's not the small drone we think of, it's a very large one with solar panels on it. Being able to autonomously fly for hours on end to survey forest fires or even oil pipelines, any kind of long-flight ability, really does sound like the future to me. Do you ever just pinch yourself and say, "I can't believe these are some of the amazing projects people are coming to us with?"

Frédéric: Yeah, the future is now. This 3D printed organ is in production and it is already being used. The solar autonomous drone made its first flight, and we expect several flights next year. Things are accelerating for the good. 

Laurel: And speaking of 2020, one of the most important things that we are dealing with is the covid-19 pandemic. Dassault Systèmes had a direct response, as many companies are working very closely on solutions to the virus. So, what is the Open Covid-19 project and how is Dassault helping? 

Frédéric: As I said earlier, the 3DEXPERIENCE Lab has had two kinds of projects: a very collective and collaborative project around a startup, or a complete community project around a special need. We did that, for instance, to reconstruct Leonardo da Vinci's machines in 3D. We created an online community and shared the collection, all those manuscripts with Leonardo's drawings from that time, with engineers, who used our software, or any 3D software, to design and engineer those machines, and it worked pretty well. It started eight years ago, and it is still going. Many machines have been reconstructed, and now they form a playground of many machines. Some of them worked and some of them did not. At that time, he invented so many things, but obviously not everything was going to work. We did the same for the covid-19 situation. 

When the pandemic started, it was in China, and our colleagues were reporting the issues to us. We saw the pandemic coming into Europe, first in Italy and then in France. So, we decided to first work with our data intelligence to understand the needs, by developing dashboards to scan what people were saying. And very quickly we identified two main needs: ventilators and protection. Those were the focus. 

So we created an open community, Open Covid-19, to welcome any volunteer makers, engineers, and designers to help, because we saw at that time that many people were trying to do things, but on their own, in their lab, in their country. They were all facing the same issues, and by working together, we thought it could be an interesting way to accelerate, to transfer the know-how, and to avoid repeating mistakes already made. 

Through this community, we accelerated more than 150 projects globally, around 25 of them ventilators. In India, a startup called Inali did complete engineering, simulation, and prototyping of a new ventilator in eight days, once again thanks to the cloud and the mentoring.

There were also collaborative projects with industry, as was the case in Brazil and Mexico. For these projects, you have makers in fab labs trying to do some frugal innovation with what they have. Some of those projects have been certified, for instance, when we worked with the Fab Foundation, which came out of MIT's Center for Bits and Atoms (CBA). This foundation gathers fab labs around the world to connect local production. It was mainly the case for protection, for PPE and face shields, so that they could 3D print those face shields. And we were able to do some data work and GPS localization of those fab labs and hospitals. Urgency dictates connecting them locally so that you can rely on local production: a fab lab could design and fabricate PPE for the health-care workers close by. 

Laurel: And one of those projects that obviously got a lot of interest is the way that sneeze particles spread. And with covid-19, everyone is very interested in understanding how aerosol particles move through the air. 

Frédéric: Yeah, that is true. We developed a sneeze simulation model, modeling the virtual particles emitted from the front of a person, to scientifically simulate a human sneeze and evaluate how pathogens such as covid would spread. And we did this simulation model with MIT's CBA and Neil Gershenfeld, first to enhance the design of the PPE, the personal protective equipment, specifically the face shield design. We put two virtual persons in front of each other, at one or two meters' distance, with one sneezing, to see how those particles would spread from one person to the other and to optimize the design. We very quickly understood, for instance, that those face shields need a top cover, since the particles drop down and infect the other person. 
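
For readers who want to experiment with the idea, below is a minimal sketch of a droplet-dispersion calculation in Python. It is not the SIMULIA fluid simulation described above: it treats each droplet as an independent ballistic particle with linear air drag, ignores the exhaled air jet and the face-shield geometry entirely, and every speed, angle, and coefficient in it is an illustrative assumption.

```python
# Minimal sketch of a droplet-dispersion experiment, loosely inspired by the
# sneeze simulation described above. NOT the SIMULIA CFD model: each droplet
# is an independent ballistic particle with linear air drag, and all numbers
# below are illustrative assumptions.
import math
import random

G = 9.81     # gravity, m/s^2
DRAG = 8.0   # assumed linear drag rate, 1/s (crude stand-in for air resistance)
DT = 0.001   # integration time step, s

def landing_distance(speed, angle_deg, mouth_height=1.6):
    """Horizontal distance (m) a droplet travels before reaching the floor."""
    x, y = 0.0, mouth_height
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    while y > 0.0:
        vx -= DRAG * vx * DT          # drag slows the droplet horizontally
        vy -= (G + DRAG * vy) * DT    # gravity plus drag act vertically
        x += vx * DT
        y += vy * DT
    return x

def fraction_beyond(distance, n=2000):
    """Fraction of simulated droplets still airborne past `distance` meters,
    i.e. droplets that could reach a person standing that far away."""
    count = 0
    for _ in range(n):
        speed = random.uniform(5.0, 20.0)    # assumed sneeze exit speeds, m/s
        angle = random.uniform(-10.0, 30.0)  # assumed emission angles, degrees
        if landing_distance(speed, angle) > distance:
            count += 1
    return count / n

if __name__ == "__main__":
    for d in (1.0, 2.0):
        print(f"droplets carried beyond {d:.0f} m: {fraction_beyond(d):.1%}")
```

The real model adds the airflow field and the shield geometry around the two virtual people, which is how the team found that face shields need a top cover.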

Laurel: So how do you see artificial intelligence augmenting human intelligence? 

Frédéric: For many people, AI is deep learning. It is machine learning, computer vision, or data science; everybody is doing it. For us, artificial intelligence also means generative design, for instance. The algorithm creates a shape that meets your design intent, your constraints. The designer is no longer sketching the shape he wants; he provides the constraints and requirements, and the algorithm proposes a design shape that meets those intents. It completely reverses the way designers work, thanks to artificial intelligence. 
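
To make the idea concrete, here is a minimal sketch of constraint-driven design in Python. It is a toy stand-in, not Dassault's generative design tooling: real generative design optimizes free-form topology, while this sketch only sizes a rectangular cantilever cross-section from the constraints a designer states, and every load, limit, and material value in it is an illustrative assumption.

```python
# Minimal sketch of the constraint-driven idea behind generative design:
# the designer states loads and limits; the algorithm proposes the geometry.
# Textbook cantilever-beam formulas; all numeric values are assumptions.

def design_cantilever(load_n=500.0, length_m=1.0,
                      max_deflection_m=0.005, max_stress_pa=250e6,
                      youngs_modulus_pa=200e9, density_kg_m3=7850.0):
    """Search candidate widths and heights (in mm steps) and return the
    lightest rectangular section that satisfies the deflection and stress
    constraints, or None if nothing in the search range qualifies."""
    best = None
    for w_mm in range(5, 101):          # candidate widths, 5-100 mm
        for h_mm in range(5, 201):      # candidate heights, 5-200 mm
            w, h = w_mm / 1000.0, h_mm / 1000.0
            inertia = w * h ** 3 / 12.0                      # second moment of area
            deflection = load_n * length_m ** 3 / (3 * youngs_modulus_pa * inertia)
            stress = load_n * length_m * (h / 2) / inertia   # bending stress at the root
            if deflection > max_deflection_m or stress > max_stress_pa:
                continue                                     # violates a constraint
            mass = density_kg_m3 * w * h * length_m
            if best is None or mass < best["mass"]:
                best = {"width_m": w, "height_m": h, "mass_kg": mass,
                        "deflection_m": deflection, "stress_pa": stress,
                        "mass": mass}
    return best

if __name__ == "__main__":
    print("proposed section:", design_cantilever())
```

The point of the sketch is the division of labor: the designer states what the part must withstand, and the algorithm, not the designer, proposes the shape.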

We spoke about humans, about augmenting humans by leveraging the virtual twin: a virtual me, in a way, of your body, of your organs. We have this collaborative project called Living Heart, driven by our American colleagues, to revolutionize cardiovascular science through realistic simulation. This research project delivered a heart model to explore novel digital therapies. And from this model, we accelerated a new startup, a Belgian company called FEops, that now offers the first and only patient-specific simulation model for structural heart interventions with AI, which predicts the best TAVI [those valve implants] the surgeon would need to correctly match the patient's anatomy. 

Laurel: So, the simulation really does come out of the cloud, and out of the computer to real life. And, in a rapid way that helps people on a day-to-day basis, which is really fantastic. It’s not something that just lingers around for approval. You can make changes, see the effect, and then move on to see what else you can do to improve situations.  

The face shield project is also one of those that is so critical. Bringing in the makers, as you said, so many folks wanted to get involved, and still are, from around the world, helping out in their own way. So, this idea of bringing in amateur makers, as well as startups, as well as these professionals, as well as enterprises, all working together to really combat a global pandemic is really quite something else. This shows me that Dassault really does have an innovator's mindset when it comes to science, when it comes to helping humanity. How else are you seeing the successes of the 3DEXPERIENCE Lab sort of ripple throughout Dassault? 

Frédéric: At Dassault Systèmes, yes, we are all innovators in a way. That's why, when I established this 3D lab initiative five years ago, I decided not to create a new organization with a boss that would perform innovation. I wanted to have an inclusive management system. We decided to allow any of our 20,000 employees to take up to 10% of their time to volunteer on innovation accelerated by the lab, and to bring their hard skills, their know-how, and their knowledge.  

And again, this is possible thanks to this platform. So we invented, in a way, a new management organization with communities, completely across silos, across divisions, so that anyone could join a project for a few hours, a few days, or a few weeks in order to work on it. It was really a new governance for open innovation, with new management methodologies that impacted not only the people, the employees, but also our own platform and solutions. We work closely with our R&D to enhance existing applications or develop new ones, to sustain new methodologies and processes. 

Laurel: And do other companies come to Dassault to ask, "How did you do this?" You're a large corporation, with global offices, and you've been around for a long time. You probably have very specific ways of thinking. How did you manage in five years to become this innovative company? They must want to learn from you. 

Frédéric: That's true. I don't know if they want to learn from us, but at least they get inspiration from us. What we do is always try to be ahead of our time, thinking of new ways of working at the lab. We experimented with new usage, thanks to the cloud, and we succeeded: it now really works, with 20,000 people in operation, with deliverables and KPIs. Our point is really to inspire them and to show them what is possible and what we can do to transform ourselves. It's also a digital transformation for Dassault Systèmes and its employees, in order for them to think about how it could also impact them, how they can also transform their management systems and their companies. 

Laurel: That’s excellent. What a perfect way to end today’s interview. Thank you so much for joining us. 

Frédéric: Thank you, Laurel. 

Laurel: That was Frédéric Vacher, the Head of Innovation at Dassault Systèmes, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review overlooking the Charles River. 

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at dozens of events each year around the world and online. 

For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening. 


Mike Cagney is testing the boundaries of the banking system for himself — and others

Founder Mike Cagney is always pushing the envelope, and investors love him for it. Not long after sexual harassment allegations prompted him to leave SoFi, the personal finance company that he cofounded in 2011, he raised $50 million for a new lending startup called Figure that has since raised at least $225 million from investors and was valued a year ago at $1.2 billion.

Now, Cagney is trying to do something unprecedented with Figure, which says it uses a blockchain to more quickly facilitate home equity, mortgage refinance, and student and personal loan approvals. The company has applied for a national bank charter in the U.S., wherein it would not take FDIC-insured deposits but it could take uninsured deposits of over $250,000 from accredited investors.

Why does it matter? The approach, as American Banker explains it, would bring regulatory benefits. As it reported earlier this week, “Because Figure Bank would not hold insured deposits, it would not be subject to the FDIC’s oversight. Similarly, the absence of insured deposits would prevent oversight by the Fed under the Bank Holding Company Act. That law imposes restrictions on non-banking activities and is widely thought to be a deal-breaker for tech companies where banking would be a sidelight.”

Indeed, if approved, Figure could pave the way for a lot of fintech startups — and other retail companies that want to wheel and deal lucrative financial products without the oversight of the Federal Reserve Board or the FDIC — to nab non-traditional bank charters.

As Michelle Alt, whose year-old financial advisory firm helped Figure with its application, tells AB: “This model, if it’s approved, wouldn’t be for everyone. A lot of would-be banks want to be banks specifically to have more resilient funding sources.” But if it’s successful, she adds, “a lot of people will be interested.”

One can only guess at what the ripple effects would be, though the Bank of Amazon wouldn’t surprise anyone who follows the company.

In the meantime, the strategy would seemingly be a high-stakes, high-reward development for a smaller outfit like Figure, which could operate far more freely than banks traditionally do, but also without a safety net for itself or its customers. The most glaring danger would be a bank run, wherein those accredited individuals who are today willing to lend money to the platform at high interest rates begin demanding their money back at the same time. (It happens.)

Either way, Cagney might find a receptive audience right now with Brian Brooks, a longtime Fannie Mae executive who served as Coinbase’s chief legal officer for two years before jumping this spring to the Office of the Comptroller of the Currency (OCC), an agency that ensures that national banks and federal savings associations operate in a safe and sound manner.

Brooks was made acting head of the agency in May and green-lit one of the first national charters to go to a fintech, Varo Money, this past summer. In late October, the OCC also granted SoFi preliminary, conditional approval over its own application for a national bank charter.

While Brooks isn’t commenting on speculation around Figure’s application, in July, during a Brookings Institution event, he reportedly commented about trade groups’ concerns over his efforts to grant fintechs and payments companies charters, saying: “I think the misunderstanding that some of these trade groups are operating under is that somehow this is going to trigger a lighter-touch charter with fewer obligations, and it’s going to make the playing field un-level . . . I think it’s just the opposite.”

Christopher Cole, executive vice president at the trade group Independent Community Bankers of America, doesn’t seem persuaded. Earlier this week, he expressed concern about Figure’s bank charter application to AB, saying he suspects that Brooks “wants to approve this quickly before he leaves office.”

Brooks's days are surely numbered. Last month, he was nominated by President Donald Trump to a full five-year term leading the federal bank regulator and is currently awaiting Senate confirmation. The move — designed to slow down the incoming Biden administration — could be undone by President-elect Joe Biden, who can fire the comptroller of the currency at will and appoint an acting replacement to serve until his nominee is confirmed by the Senate.

Still, Cole's suggestion is that Brooks has enough time to figure out a path forward for Figure — and, if its novel charter application is approved and stands up to legal challenges, for a lot of other companies, too.

We read the paper that forced Timnit Gebru out of Google. Here’s what it says

On the evening of Wednesday, December 2, Timnit Gebru, the co-lead of Google’s ethical AI team, announced via Twitter that the company had forced her out. 

Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them. She also cofounded the Black in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of the most diverse in AI, and includes many leading experts in their own right. Peers in the field envied it for producing critical work that often challenged mainstream AI practices.

A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she co-authored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.

Online, many other leaders in the field of AI ethics are arguing that the company pushed her out because of the inconvenient truths that she was uncovering about a core line of its research—and perhaps its bottom line. More than 1,400 Google staff and 1,900 other supporters have also signed a letter of protest.

Many details of the exact sequence of events that led up to Gebru’s departure are not yet clear; both she and Google have declined to comment beyond their posts on social media. But MIT Technology Review obtained a copy of the research paper from  one of the co-authors, Emily M. Bender, a professor of computational linguistics at the University of Washington. Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online, it gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.

Titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” the paper lays out the risks of large language models—AIs trained on staggering amounts of text data. These have grown increasingly popular—and increasingly large—in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”

The paper

The paper, which builds off the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here. 

Environmental and financial costs

Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

Strubell’s study found that one language model with a particular type of “neural architecture search” (NAS) method would have produced the equivalent of 626,155 pounds (284 metric tons) of carbon dioxide—about the lifetime output of five average American cars. A version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate—nearly the same as a roundtrip flight between New York City and San Francisco.
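
As a rough sanity check on those figures, the conversions work out as follows; the per-car and per-flight benchmarks are the ones reported in Strubell's paper and should be treated as approximate.

```python
# Quick arithmetic check of the figures quoted above. The per-car and
# per-flight benchmarks are the approximate values reported by Strubell
# and her collaborators (2019).
LB_TO_KG = 0.453592

nas_lb = 626_155           # NAS experiment, pounds of CO2 equivalent
bert_lb = 1_438            # BERT training run, pounds of CO2 equivalent
car_lifetime_lb = 126_000  # avg. American car lifetime incl. fuel
ny_sf_flight_lb = 1_984    # one passenger, round trip New York-San Francisco

print(f"NAS run: {nas_lb * LB_TO_KG / 1000:.0f} metric tons CO2e")      # ~284 t
print(f"         ~{nas_lb / car_lifetime_lb:.1f} car lifetimes")        # ~5.0
print(f"BERT:    ~{bert_lb / ny_sf_flight_lb:.2f} NYC-SF round trips")  # ~0.72
```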

Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models means they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” they write.

Massive data, inscrutable models

Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there’s a risk that racist, sexist, and otherwise abusive language ends up in the training data.

An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

Moreover, because the training datasets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, […] undocumented training data perpetuates harm without recourse.”

Research opportunity costs

The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).

Illusions of meaning

The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.

Why it matters

Gebru and Bender’s paper has six co-authors, four of whom are Google researchers. Bender asked to avoid disclosing their names for fear of repercussions. (Bender, by contrast, is a tenured professor: “I think this is underscoring the value of academic freedom,” she says.)

The paper’s goal, Bender says, was to take stock of the landscape of current research in natural-language processing. “We are working at a scale where the people building the things can’t actually get their arms around the data,” she said. “And because the upsides are so obvious, it’s particularly important to step back and ask ourselves, what are the possible downsides? … How do we get the benefits of this while mitigating the risk?”

In his internal email, Dean, the Google AI head, said one reason the paper “didn’t meet our bar” was that it “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias. 

However, the six collaborators drew on a wide breadth of scholarship. The paper’s citation list, with 128 references, is notably long. “It’s the sort of work that no individual or even pair of authors can pull off,” Bender said. “It really required this collaboration.” 

The version of the paper we saw does also nod to several research efforts on reducing the size and computational costs of large language models, and on measuring the embedded bias of models. It argues, however, that these efforts have not been enough. “I’m very open to seeing what other references we ought to be including,” Bender said.

Nicolas Le Roux, a Google AI researcher in the Montreal office, later noted on Twitter that the reasoning in Dean’s email was unusual. “My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review,” he said.

Dean’s email also says that Gebru and her colleagues gave Google AI only a day for an internal review of the paper before they submitted it to a conference for publication. He wrote that “our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.”

Bender noted that even so, the conference would still put the paper through a substantial review process: “Scholarship is always a conversation and always a work in progress,” she said. 

Others, including William Fitzgerald, a former Google PR manager, have further cast doubt on Dean's claim. 

Google pioneered much of the foundational research that has since led to the recent explosion in large language models. Google AI invented the Transformer architecture in 2017, which serves as the basis for the company's later model BERT, as well as OpenAI's GPT-2 and GPT-3. BERT, as noted above, now also powers Google search, the company's cash cow.

Bender worries that Google’s actions could create “a chilling effect” on future AI ethics research. Many of the top experts in AI ethics work at large tech companies because that is where the money is. “That has been beneficial in many ways,” she says. “But we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.”

Daily Crunch: Slack and Salesforce execs explain their big acquisition

We learn more about Slack’s future, Revolut adds new payment features and DoorDash pushes its IPO range upward. This is your Daily Crunch for December 4, 2020.

The big story: Slack and Salesforce execs explain their big acquisition

After Salesforce announced this week that it’s acquiring Slack for $27.7 billion, Ron Miller spoke to Slack CEO Stewart Butterfield and Salesforce President and COO Bret Taylor to learn more about the deal.

Butterfield claimed that Slack will remain relatively independent within Salesforce, allowing the team to “do more of what we were already doing.” He also insisted that all the talk about competing with Microsoft Teams is “overblown.”

“The challenge for us was the narrative,” Butterfield said. “They’re just good [at] PR or something that I couldn’t figure out.”

Startups, funding and venture capital

Revolut lets businesses accept online payments — With this move, the company is competing directly with Stripe, Adyen, Braintree and Checkout.com.

Health tech venture firm OTV closes new $170M fund and expands into Asia — This year, the firm led rounds in telehealth platforms TytoCare and Lemonaid Health.

Zephr raises $8M to help news publishers grow subscription revenue — The startup’s customers already include publishers like McClatchy, News Corp Australia, Dennis Publishing and PEI Media.

Advice and analysis from Extra Crunch

DoorDash amps its IPO range ahead of blockbuster IPO — The food delivery unicorn now expects to debut at $90 to $95 per share, up from a previous range of $75 to $85.

Enter new markets and embrace a distributed workforce to grow during a pandemic — Is this the right time to expand overseas?

Three ways the pandemic is transforming tech spending — All companies are digital product companies now.

(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

WH’s AI EO is BS — Devin Coldewey is not impressed by the White House’s new executive order on artificial intelligence.

China’s internet regulator takes aim at forced data collection — China is a step closer to cracking down on unscrupulous data collection by app developers.

Gift Guide: Games on every platform to get you through the long, COVID winter — It’s a great time to be a gamer.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
