

Podcast: Can you teach a machine common sense?



Artificial intelligence has become such a big part of our lives, you’d be forgiven for losing count of the algorithms you interact with. But the AI powering your weather forecast, Instagram filter, or favorite Spotify playlist is a far cry from the hyper-intelligent thinking machines industry pioneers have been musing about for decades. 

Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can learn only one at a time. And because most AI models train their skillset on thousands or millions of existing examples, they end up replicating patterns within historical data—including the many bad decisions people have made, like marginalizing people of color and women.

Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence—machines that can multitask, think, and reason for themselves. 

The idea is divisive. Beyond the answer to how we might develop technologies capable of common sense or self-improvement lies yet another question: who really benefits from the replication of human intelligence in an artificial mind? 

“Most of the value that’s being generated by AI today is returning back to the billion dollar companies that already have a fantastical amount of resources at their disposal,” says Karen Hao, MIT Technology Review’s senior AI reporter and the writer of The Algorithm. “And we haven’t really figured out how to convert that value or distribute that value to other people.”

In this episode of Deep Tech, Hao and Will Douglas Heaven, our senior editor for AI, join our editor-in-chief, Gideon Lichfield, to discuss the different schools of thought around whether an artificial general intelligence is even possible, and what it would take to get there.

Check out more episodes of Deep Tech here.

Show notes and links:

Full episode transcript:

Gideon Lichfield: Artificial intelligence is now so ubiquitous, you probably don’t even think about the fact that you’re using it. Your web searches. Google Translate. Voice assistants like Alexa and Siri. Those cutesy little filters on Snapchat and Instagram. What you see—and don’t see—on social media. Fraud alerts from your credit-card company. Amazon recommendations. Spotify playlists. Traffic directions. The weather forecast. It’s all AI, all the time.

And it’s all what we might call “dumb AI”. Not real intelligence. Really just copying machines: algorithms that have learned to do really specific things by being trained on thousands or millions of correct examples. On some of those things, like face and speech recognition, they’re already even more accurate than humans.

All this progress has reinvigorated an old debate in the field: can we create actual intelligence, machines that can independently think for themselves? Well, with me today are MIT Technology Review’s AI team: Will Heaven, our senior editor for AI, and Karen Hao, our senior AI reporter and the writer of The Algorithm, our AI newsletter. They’ve both been following the progress in AI and the different schools of thought around whether an artificial general intelligence is even possible and what it would take to get there.

I’m Gideon Lichfield, editor in chief of MIT Technology Review, and this is Deep Tech.

Will, you just wrote a 4,000 word story on the question of whether we can create an artificial general intelligence. So you must’ve had some reason for doing that to yourself. Why is this question interesting right now?

Will Douglas Heaven: So in one sense, it’s always been interesting. Building a machine that can think and do things that people can do has been the goal of AI since the very beginning, but it’s been a long, long struggle. And past hype has led to failure. So this idea of artificial general intelligence has become, you know, very controversial and very divisive—but it’s having a comeback. That’s largely thanks to the success of deep learning over the last decade. And in particular systems like AlphaZero, which was made by DeepMind and can play Go, shogi (a kind of Japanese chess), and chess. The same algorithm can play all three games. And GPT-3, the large language model from OpenAI, which can uncannily mimic the way that humans write. That has prompted people, especially over the last year, to jump in and ask these questions again. Are we on the cusp of building artificial general intelligence? Machines that can think and do things like humans can.

Gideon Lichfield: Karen, let’s talk a bit more about GPT-3, which Will just mentioned. It’s this algorithm that, you know, you give it a few words and it will spit out paragraphs and paragraphs of what looks convincingly like Shakespeare or whatever else you tell it to do. But what is so remarkable about it from an AI perspective? What does it do that couldn’t be done before? 

Karen Hao: What’s interesting is I think the breakthroughs that led to GPT-3 actually happened quite a number of years earlier. In 2017, the main breakthrough that triggered a wave of advancement in natural language processing occurred with the publishing of the paper that introduced the idea of transformers. And the way a transformer algorithm deals with language is it looks at millions or even billions of examples—of sentences, of paragraph structure, maybe even of code structure. And it can extract the patterns and begin to predict, to a very impressive degree, which words make the most sense together, which sentences make the most sense together, and then therefore construct these really long paragraphs and essays. What I think GPT-3 has done differently is that there are just orders of magnitude more data now being used to train this transformer technique. So with GPT-3, OpenAI is not just training it on more examples of words from corpora like Wikipedia, or from articles like the New York Times, or Reddit forums; it’s also training it on sentence patterns and paragraph patterns, looking at what makes sense as an intro paragraph versus a conclusion paragraph. So it’s just getting way more information and really starting to mimic very closely how humans write, or how music scores are composed, or how code is written.

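The core idea Hao describes—learning which words tend to follow which from raw examples—can be illustrated with a toy next-word predictor. This is only a sketch: a real transformer learns vastly richer patterns using attention over billions of parameters, and the tiny corpus below is made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A made-up toy corpus; GPT-3 trains on hundreds of billions of words.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat", the most common follower of "the"
```

Scaling this idea up—with context windows of thousands of words instead of one, and learned representations instead of raw counts—is roughly what separates transformer models from simple statistical predictors like this one.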

Gideon Lichfield: And before transformers, which can extract patterns from all of these different kinds of structures, what was AI doing? 

Karen Hao: Before, natural language processing was actually much more basic. So transformers are kind of a self-supervised technique, where the algorithm is not being told exactly what to look for in the language. It’s just looking for patterns by itself, for what it thinks are the repeating features of language composition. But before that, there were actually a lot more supervised approaches to language, and much more hard-coded approaches to language, where people were teaching machines like “these are nouns, these are adjectives, this is how you construct these things together.” And unfortunately that is a very laborious way to try and curate language, where every word kind of has to have a label, and the machine has to be manually taught how to construct these things. And so it limited the amount of data that these techniques could feed off of. And that’s why language systems really weren’t very good.

Gideon Lichfield: So let’s come back to that distinction between supervised and self-supervised learning, because I think we’re going to see it’s a fairly important part of the advances towards something that might become a general intelligence. Will, as you wrote in your piece, there’s a lot of ambiguity about what we even mean when we say artificial general intelligence. Can you talk a bit about what the options are there? 

Will Douglas Heaven: There’s a sort of spectrum. I mean, on one end, you’ve got systems which, you know, can do many of the things that narrow AI—or dumb AI, if you like—can do today, but sort of all at once. And AlphaZero is perhaps the first glimpse of that: this one algorithm can train itself to do three different things. But an important caveat there: it can’t do those three things at once. So it’s not like a single brain that can switch between tasks. As Shane Legg, one of the co-founders of DeepMind, put it, it’s as if, when you or I wanted to play chess, we had to swap out our brain and put in our chess brain.

That’s clearly not very general, but we’re on the cusp of that kind of thing—your kind of multi-tool AI, where one AI can do several different things that narrow AI can already do. And then moving up the spectrum, what probably more people mean when they talk about AGI is, you know, thinking machines, machines that are “human-like” in scare quotes, that can multitask in the way that a person can. You know, we’re extremely adaptable. We can switch between, you know, frying an egg to writing a blog post to singing, whatever. Still, there are also folk, going right to the other end of the spectrum, who would rope machine consciousness into the definition of AGI too. You know, that we’re not going to have true general intelligence or human-like intelligence until we have a machine that can not only do things that we can do, but knows that it can do things that we can do—that has some kind of self-reflection in there. I think all those definitions have been around since the beginning, but it’s one of the things that makes AGI difficult to talk about and quite controversial, because there’s no clear definition.

Gideon Lichfield: When we talk about artificial general intelligence, there’s this sort of implicit assumption that human intelligence itself is also absolutely general. It’s universal. We can fry an egg or we can write a blog post or we can dance or sing. And that all of these are skills that any general intelligence should have. But is that really the case or are there going to be different kinds of general intelligence? 

Will Douglas Heaven: I think, and I think many in the AI community would also agree, that there are many different intelligences. We’re sort of stuck on this idea of human-like intelligence, largely I think because humans have for a long time been the best example of general intelligence that we’ve had, so it’s obvious why they’re a role model—you know, we want to build machines in our own image. But you just look around the animal kingdom and there are many, many different ways of being intelligent. From the sort of social intelligence that ants have, where they can collectively do really remarkable things, to octopuses, which we’re only just beginning to understand the ways that they’re intelligent—they’re intelligent in a very alien way compared to ourselves. And even our closest cousins, like chimps, have intelligences which are different from yours and mine; they have different skill sets than humans do.

So I think the idea that machines, if they become generally intelligent, need to be like us is, you know, nonsense—it’s going out the window. The very mission of building an AGI that is human-like is perhaps pointless, because we have human intelligences, right? We have ourselves. So why do we need to make machines that do those things? It’d be much, much better to build intelligences that can do things that we can’t do—that are intelligent in different ways, to complement our abilities.

Gideon Lichfield: Karen, people obviously love to talk about the threat of a super-intelligent AI taking over the world, but what are the things that we should really be worried about? 

Karen Hao: One of the really big ones in recent years has been algorithmic discrimination. This is a phenomenon we started noticing where, when we train algorithms, small or large, to make decisions based on historical data, they end up replicating patterns within that historical data that we might not necessarily want them to replicate—such as the marginalization of people of color or the marginalization of women.

Things in our history that we would rather do without as we move forward and progress as a society. But because algorithms are not very smart, and they extract these patterns and replicate them mindlessly, they end up making decisions that discriminate against people of color, against women, against cultures that are not Western-centric.

And if you observe the conversations that are happening among people who talk about some of the ways that we need to think about mitigating threats around superintelligence or around AGI, whatever you want to call it, they will talk about this challenge of value alignment. Value alignment being defined as: how do we get this super-intelligent AI to understand our values and align with them? If it doesn’t align with our values, it might go do something crazy. And that’s how it sort of starts to harm people. 

Gideon Lichfield: How do we create an AI, a super intelligent AI, that isn’t evil?

Karen Hao: Exactly. Exactly. So instead of talking about trying to figure out value alignment a hundred years from now, we should be talking right now about how we have failed to align values even with very basic AIs today, and actually solve the algorithmic discrimination problem.

Another huge challenge is the concentration of power that AI naturally creates. You need an incredible amount of computational power today to create advanced AI systems and break the state of the art. And the only players that really have that amount of computational power now are the large tech companies and maybe the top-tier research universities. And even the top-tier research universities can barely compete with the large tech companies anymore.

So the Googles, Facebooks, and Apples of the world. Another concern that people have for a hundred years from now is: once super-intelligent AI is unleashed, is it actually going to benefit people evenly? Well, we haven’t figured that out today either. Most of the value that’s being generated by AI today is returning back to the billion-dollar companies that already have a fantastical amount of resources at their disposal. And we haven’t really figured out how to convert that value or distribute that value to other people.

Gideon Lichfield: Ok well let’s get back then to that idea of a general intelligence and how we would build it if we could. Will mentioned deep learning earlier. Which is the foundational technique of most of the AI that we use today. And it’s only about eight years old. Karen, you talked to essentially the father of deep learning Geoffrey Hinton at our EmTech conference recently. And he thinks that deep learning, the technique that we’re using for things like translation services or face recognition, is also going to be the basis of a general intelligence when we eventually get there. 

Geoffrey Hinton [From EmTech 2020]: I do believe deep learning is going to be able to do everything. But I do think there’s going to have to be quite a few conceptual breakthroughs that we haven’t had yet. // Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reasoning. But we also need a massive increase in scale. // The human brain has about a hundred trillion parameters—that is, synapses. A hundred trillion. What are now called really big models, like GPT-3, have 175 billion. It’s thousands of times smaller than the brain.

Gideon Lichfield: Can you maybe start by explaining what deep learning is?

Karen Hao: Deep learning is a category of techniques that is founded on this idea that the way to create artificial intelligence is to create artificial neural networks that are based off of the neural networks in our brain. Human brains are the smartest form of intelligence that we have today.

Obviously Will has already talked about some challenges to this theory, but assuming that human intelligence is sort of the epitome of intelligence that we have today, we want to try and recreate artificial brains in the image of a human brain. And deep learning is that: a technique that tries to use artificial neural networks as a way to achieve artificial intelligence.

What you were referring to sort of is there are largely two different camps within the field around how we might go about approaching building artificial general intelligence. The first camp being that we already have all the techniques that we need, we just need to scale them massively with more data and larger neural networks.

The other camp says deep learning is not enough: we need something else that we haven’t yet figured out to supplement deep learning, in order to achieve things like common sense or reasoning that have so far been elusive to the AI field.

Gideon Lichfield: So Will, as Karen alluded to just now, the people who think we can build a general intelligence off of deep learning think that we need to add some things to it. What are some of those things? 

Will Douglas Heaven: Among those who think deep learning is the way to go—I mean, as well as loads more data, like Karen said—there are a bunch of techniques that people are using to push deep learning forward.

You’ve got unsupervised learning. Traditionally, many deep learning successes, like image recognition—to use the clichéd example, recognizing cats—happen because the AI has been trained on millions of images that have been labeled by humans with “cat.” You know, this is what a cat looks like, learn it. Unsupervised learning is when the machine goes and looks at data that hasn’t been labeled in that way and tries to spot patterns itself. 

Gideon Lichfield: So in other words, you would give it like a bunch of cats, a bunch of dogs, a bunch of pecan pies, and it would sort them into groups? 

Will Douglas Heaven: Yeah. It essentially has to first learn what the distinguishing features between those categories are, rather than being prompted. And that ability to identify for itself what those distinguishing features are is a step towards a better way of learning. And it’s practically useful, because of course the task of labeling all this data is enormous.

And we can’t continue along this path, especially if we want the system to train on more and more data. We can’t continue on the path of having it manually labeled. And even more interestingly, I think an unsupervised learning system has the potential of spotting categories that humans haven’t. So we might actually learn something from the machine.
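The grouping behavior Heaven describes can be sketched with the classic k-means algorithm, one simple form of unsupervised learning. The hand-made 2-D points below are a stand-in for image features; a real system would cluster learned representations of the cat, dog, and pecan-pie photos rather than toy coordinates.

```python
def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iterations=10):
    """Group unlabeled points into k clusters around moving centroids."""
    # Naive deterministic init: spread initial centroids across the data.
    centroids = points[:: max(1, len(points) // k)][:k]
    clusters = []
    for _ in range(iterations):
        # Assign each point to its nearest centroid...
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # ...then move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids, clusters

# Two obvious groups, but the algorithm is never told any labels.
points = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(points, k=2)
```

The algorithm recovers the two groups purely from the structure of the data—the same principle, at toy scale, as sorting unlabeled photos of cats, dogs, and pecan pies into piles.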

And then you’ve got things like transfer learning, and this is crucial for general intelligence. This is where you’ve got a model that has been trained on a set of data in one way or another. And what it’s learned in that training, you want to be able to then transfer that to a new task so that you don’t have to start from scratch each time.

So there are various ways you’d approach transfer learning, but for example, you could take some of the values from one trained network and sort of preload another one, so that when you ask it to recognize an image of a different animal, it already has some sense of, you know, what animals have—legs and heads and tails, what have you.

So you just want to be able to transfer some of the things it’s learned from one task to another. And then there are things like few-shot learning, which is where the system learns, as the name implies, from very few training examples. And that’s also going to be crucial, because we don’t always have lots and lots of data to throw at these systems to teach them.

I mean, they’re extremely inefficient when you think about it, compared to humans. You know, we can learn a lesson from one example, two examples. You show a kid a picture of a giraffe and it knows what a giraffe is. We can even learn what something is without seeing any example. 

Karen Hao: Yeah. If you think about it, kids… if you show them a picture of a horse and then a picture of a rhino, and you say, you know, a unicorn is something in between a horse and a rhino, then when they first see a unicorn in a picture book they may actually be able to know that that’s a unicorn. And so that’s how you start learning more categories than the examples you’ve seen. And this is the inspiration for yet another frontier of deep learning called low-shot learning, or less-than-one-shot learning. And again, it’s the same principle as few-shot learning: if we are able to get these systems to learn from very, very tiny samples of data, the same way that humans do, then that can really supercharge the learning process.
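A minimal version of few-shot classification can be sketched as nearest-prototype matching: average the handful of labeled examples per class, then assign a new example to the closest average. The feature function here is a hypothetical identity stand-in; in real systems (e.g. prototypical networks) it would be a neural network pretrained on other data, which is where transfer learning comes in.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def features(x):
    """Stand-in for a pretrained feature extractor. Here it is just the
    identity; in practice, a network trained on other tasks would map raw
    inputs (pixels, audio) to useful feature vectors."""
    return x

def few_shot_classify(support, query):
    """Label `query` by the nearest class prototype, where a prototype is
    the mean feature vector of the few labeled examples for that class."""
    prototypes = {}
    for label, examples in support.items():
        feats = [features(e) for e in examples]
        prototypes[label] = tuple(sum(c) / len(feats) for c in zip(*feats))
    q = features(query)
    return min(prototypes, key=lambda lbl: dist2(q, prototypes[lbl]))

# Hypothetical (neck length, height) measurements, two examples per class.
support = {
    "giraffe": [(2.0, 5.0), (2.2, 5.5)],
    "horse": [(0.5, 1.6), (0.6, 1.5)],
}
print(few_shot_classify(support, (1.9, 4.8)))  # a new, unseen animal
```

With good pretrained features, two labeled examples per class can be enough—which is the sense in which few-shot learning leans on transfer learning rather than on mountains of task-specific data.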

Gideon Lichfield: For me, this raises an even more general question, which is: what makes people in the field of AGI so sure that you can produce intelligence in a machine that represents information digitally, in ones and zeros, when we still know so little about how the human brain represents information? Isn’t it a very big assumption that we can just recreate human intelligence in a digital machine?

Will Douglas Heaven: Yeah, I agree. In spite of the massive complexity of some of the neural networks we’re seeing today, in terms of their size and their connections, we are orders of magnitude away from anything that matches the scale of a brain—even a rather basic animal brain. So yeah, there’s an enormous gulf between where we are and that idea that we are going to be able to do it, especially with the present deep learning technology.

And of course, even though, as Karen described earlier, neural networks are inspired by the neurons in our brain, that’s only one way of looking at the brain. I mean, brains aren’t just lumps of neurons. They have discrete sections that are dedicated to different tasks.

So again, this idea that just one very large neural network is going to achieve general intelligence is a bit of a leap of faith, because maybe general intelligence will require some breakthrough in how dedicated structures communicate. So there’s another divide in, you know, those chasing this goal.

You know, some think that you can just scale up neural networks. Other people think we need to step back from the specifics of any individual deep learning algorithm and look at the bigger picture. Actually, you know, maybe neural networks aren’t the best model of the brain, and we can build better ones that look at how different parts of the brain communicate, so that the whole is greater than the sum of its parts.

Gideon Lichfield: I want to end with a philosophical question. We said earlier that even the proponents of AGI don’t think it will be conscious. Could we even say whether it will have thoughts? Will it understand its own existence in the sense that we do? 

Will Douglas Heaven: In Alan Turing’s 1950 paper “Computing Machinery and Intelligence,” which asks “Can machines think?”—and that was when AI was still just this theoretical idea; it hadn’t even been approached as an engineering possibility—he raised this question: how do we tell if a machine can think? And in that paper, he addresses, you know, this idea of consciousness. Maybe some people will come along and say machines can never think, because we won’t ever be able to tell that machines can think, because we won’t be able to tell they’re conscious. And he sort of dismisses that by saying: well, if you push that argument too far, then you have to say the same thing about, well, the fellow humans that you meet every day. There’s no ultimate way that I can say that any of you aren’t conscious. You know, the only way that I would know that is if I experienced being you. And you get to the point where communication breaks down, and it’s sort of a place where we can’t go. So that’s one way of dismissing that question. I mean, I think the consciousness question will be around forever. One day I think we will have machines which act as if they could think, and, you know, could mimic humans so well that we might as well treat them as if they’re conscious. But as to whether they actually are, I don’t think we’ll ever know. 

Gideon Lichfield: Karen, what do you think about conscious machines?

Karen Hao: I mean, building off of what Will said: like, do we even know what consciousness is? And I guess I would draw on the work of a professor at Tufts, actually. He approaches artificial intelligence from the perspective of artificial life. Like, how do you replicate all of the different things?

Not just the brain, but also, like, the electrical pulses or signals that we use within the body to communicate—and that has intelligence too. If we are fundamentally able to recreate every little thing, every little process in our bodies, or eventually in an animal’s body, then why wouldn’t those beings have the same consciousness that we do?

Will Douglas Heaven: You know, there’s a wonderful debate going on right now about brain organoids, which are little clumps of stem cells that are made to grow into neurons, and they can even develop connections, and you see in some of them this electrical activity. And there are various labs around the world studying these little blobs of brain to understand human brain diseases better. But there’s a really interesting ethical debate going on about at what point this electrical activity raises the possibility that these little blobs in Petri dishes are conscious. And that shows that we have no good definition of consciousness, even for our own brains, let alone machine ones.

Karen Hao: And I want to add: we also don’t really have a good definition of “artificial.” So that just adds to it, if we talk about artificial general intelligence.

We don’t have a good definition of any of the three words that compose that term. So, going to the point that Will made about these organoids growing in Petri dishes: is that considered artificial? If not, why? Do we define artificial as things that are just not made out of organic material? There’s just a lot of ambiguity in the definitions of all the things we’re talking about, which makes the consciousness question very complicated.

Will Douglas Heaven: It also makes them fun things to talk about. 

Gideon Lichfield: That’s it for this episode of Deep Tech. And it’s also the last episode we’re doing for now. We’re working on some other audio projects that we’re hoping to launch in the coming months. So please keep an eye out for them. And if you haven’t already, you should check out our AI podcast called In Machines We Trust, which comes out every two weeks. You can find it wherever you normally listen to podcasts. 

Deep Tech is written and produced by Anthony Green and edited by Jennifer Strong and Michael Reilly. I’m Gideon Lichfield. Thanks for listening.




SoftBank takes a $690M stake in cloud-based Swedish CRM company Sinch



On the heels of Facebook taking a big step into customer service with the acquisition of Kustomer for $1 billion, another big move is afoot in the world of CRM. Sinch, a Swedish company that provides cloud-based “omnichannel” voice, video and messaging services to help enterprises communicate with customers, has announced that SoftBank is taking a $690 million stake in the company. Sinch said that it plans to use the proceeds of the share sale for M&A of its own.

“We see clearly how our cloud-based platform helps businesses leverage mobile technology to reinvent their customer experience,” said Oscar Werner, Sinch CEO, to TechCrunch. “Whereas people throughout the world have embraced mobile messaging to interact with friends and family, most businesses have yet to seize this opportunity. We are establishing Sinch as a leader in a global growth market that is still very fragmented, and we’re excited that SoftBank is now helping us realize that vision.”

Specifically, Sinch has issued and sold 3,187,736 shares worth SEK 3.3 billion, and large shareholders have sold a further 5,200,000 shares — with SoftBank the sole buyer.

The move underscores the growing opportunity that those in the world of CRM — which include not just Sinch and Kustomer but Salesforce and many others — are seeing to double down on their services at the moment. With people working and doing everything else remotely, and with the general upheaval we’ve had in the global economy due to Covid-19, there has been an increased demand and strain put on the digital channels that people use to communicate with organizations when they have questions or problems.

The catch is that customer relations has grown to be more than just 1-800 numbers and being on hold for endless hours: it includes social media, email, websites with interactive chats, chatbots, messaging apps, and yes those phone calls.

Organizations like Sinch and Kustomer, which build platforms to help businesses manage all of those fragmented options in what are described as omnichannel offerings, have been capitalizing on the demand and are now investing and looking for the next steps in their strategies to grow.

For Kustomer that has been leaping into the arms of Facebook, which itself has spotted an opportunity to build out a CRM business to complement its other services for businesses. Recall that it’s also been experimenting and working on its latest Nextdoor competitor to promote local businesses; and it has added a ton of business tools to its messaging apps too.

It will be interesting to see what Salesforce does next. While acquiring Slack gives the company an obvious channel into workplace communications, don’t forget that Slack is also a very popular tool for engaging with people outside of your employee network, too. It will be worth watching how and if Salesforce looks to develop that aspect of the business, too.

For Sinch, its strategy has been around making acquisitions of its own, including paying $250 million to pick up a business unit of SAP, Digital Interconnect, which has 1,500 enterprise customers, mostly in the US, using it to run “omnichannel” CRM. Now the plan will be to do more, since there are still huge swathes of the market that have yet to upgrade and update their CRM approaches.

Sinch, notably, is traded publicly on Sweden’s stock exchange and currently has a market cap of SEK 70 billion ($8.2 billion at current rates). It is profitable and generating cash, so it has “no need to raise funding for our ongoing business,” Thomas Heath, Sinch’s chief strategy officer and head of investor relations, told TechCrunch.

For SoftBank, the investment marks another step in the company taking sizable stakes in fast-growing public or semi-public tech companies in Europe.

In October, it put $215 million into Kahoot, the online education platform aimed both at students and enterprises, built around the concept of users themselves creating “learning games” that can then be shared with others. Kahoot trades a proportion of its shares publicly on the stock exchange in Norway and like Sinch, the plan is to use a good part of the money for acquisitions.

Not all of SoftBank’s investments in scaled-up European businesses have panned out. It put around $1 billion into German payments company Wirecard, which turned out to be one of the biggest scandals in the history of European fintech, collapsing into insolvency earlier this year amid an accounting scandal.

Sinch, as a profitable and steady business with predictable lines of recurring revenue, looks like a safer bet for now. Even with Salesforce, Facebook and others raising their game, as Sinch's CEO says, there is enough of an untapped market that playing well might be enough to do well.



Despite everything, Oyo still has $1 billion in cash



India's Oyo has been one of the startups hit hardest by the coronavirus, but it has enough cash to steer through the pandemic and then look at funding further scale, a top executive says.

In a townhall with employees last week, Oyo founder Ritesh Agarwal said the budget lodging firm “continued to hold on to close to a billion dollars of cash” across its group companies and joint venture firms and has “tracked to runway very closely.”

“At the same time, we’ve been very disciplined in making sure that we can respond to the crisis in a good way to try and ensure that we can come out of it at the right time,” he said in a fireside chat with Rohit Kapoor, chief executive of Oyo India and SA, and Troy Alstead, a board member who previously served as the chief operating officer of Starbucks.

The revelation will reassure employees of Oyo, which eliminated or furloughed over 5,000 jobs earlier this year and reported in April that the pandemic had cut its revenue and demand by more than 50%.

Oyo also reported a loss of $335 million on $951 million revenue globally for the financial year ending March 31, 2019, and earlier this year pledged to cut down on its spending.

Agarwal said the startup is recovering from the pandemic as nations relax their lockdowns, and with recent progress with vaccine trials, he is hopeful that the travel and hospitality industries will bounce back strongly.

“Together globally, we were able to get to around 85% of the gross margin dollars of our pre COVID levels. This I can tell you was extremely hard. But in my view was probably only possible because of the efforts of our teams in each one of the geographies,” he said, adding that Oyo Vacation has proven critical to the business in recent months delivering “packed” hotels and holiday homes.

During the conversation, a transcript of which was shared with employees and obtained by TechCrunch, Agarwal talked about making Oyo — which was privately valued at $10 billion last year as it raised $1.5 billion — ready for an IPO. He did not, however, share a timeline for when the SoftBank-backed startup plans to go public, and hinted that it’s perhaps not in the immediate future.

“And last but not the least, for me, it is very critical. I want the groups to know that I, our board and our broader management are fully committed to making sure that long-term wealth creation for our OYOpreneurs — beyond that of just the compensation, but the wealth creation by means of your stocks can be substantially grown.”

“At the end of the day, what is the right time to go out is frankly a decision of the board to make and from the management side, we’ll be ready to make sure that we build a company that is ready to go public. And we will look at various things like that of the market situation, opportunities outside and so on, that the board will consider and then potentially help advise on the timeline,” he said.

Alstead echoed Agarwal’s optimism, adding, “I think that OYO is made up of a combination of assets, its hotels, its homes, its vacation homes. That’s unique, I think in the industry in the category, I think it makes it probably a little more challenging sometimes for people externally to measure and compare and benchmark a unique portfolio company like this. But I’d also tell you, I think that makes OYO resilient. It makes OYO balanced for the future. It gives OYO several sorts of vertical opportunities to address both customer needs at any time, whether it be a hotel or a small hotel or a vacation home.”

“And it also gives opportunities and expands that interaction in a good healthy way with the property owners, with the partners, who have an opportunity depending on what asset type they have partnered with OYO in different ways, and also to have the access to a technology platform and a continued investment in that innovative platform for customers. So all those things, I think a balanced portfolio, a technology platform, a heavy focus on putting the customer first, putting the business partner first — all those things, in my view, are what positioned OYO for the future.”



GoSite snags $40M to help SMBs bring their businesses online



There are 12 million small and medium businesses in the US, yet they have continued to be one of the most underserved segments of the B2B universe: that volume underscores a lot of fragmentation, and alongside other issues like budget constraints, there are a number of barriers to building for them at scale. Today, however, a startup helping SMBs get online is announcing some significant funding — a sign of how things are changing at a moment when many businesses have realized that being online is no longer an option, but a necessity.

GoSite, a San Diego-based startup that helps small and medium enterprises build websites, and, with a minimum amount of technical know-how, run other functions of their businesses online — like payments, online marketing, appointment booking and accounting — has picked up $40 million in funding.

GoSite offers a one-stop shop for users to build and manage everything online, with the ability to feed in up to 80 different third-party services within that. “We want to help our customers be found everywhere,” said Alex Goode, the founder and CEO of GoSite. “We integrate with Facebook and other consumer platforms like Siri, Apple Maps, and search engines like Google, Yahoo and Bing and more.” It also builds certain features like payments from the ground up.

The Series B comes on the back of a strong year for the company. Driven by Covid-19 circumstances, businesses have increasingly turned to the internet to interact with customers, and GoSite — which has “thousands” of SMB customers — said it doubled its customer base in 2020.

This latest round is being led by Left Lane Capital out of New York, with Longley Capital, Cove Fund, Stage 2, Ankona Capital and Serra Ventures also participating. GoSite is clearly striking while the iron is hot: Longley, also based out of San Diego, led the company’s previous round, which was only in August of this year. It has now raised $60 million to date.

GoSite is, in a sense, a play for more inclusivity in tech: its customers are not companies that it’s “winning” away from providers of website building, hosting and other services typically used by SMBs, such as Squarespace, Wix, GoDaddy or Shopify.

Rather, they are companies that may have never used any of these: local garages, local landscapers, local hair salons, local accountancy firms, local dentists and so on. Barring the accountancy firms, these are not businesses that will ever go fully online, as a retailer might, not least because of the physical aspect of each of those professions. But they will need an online presence, and the levers it gives them to communicate, in order to survive, especially in times when their old models are being put under strain.

Goode started GoSite after graduating from college in Michigan with a degree in computer science, having grown up around and worked in small businesses — his parents, grandparents and others in his Michigan town all ran their own stores. (He moved to San Diego “for the weather,” he joked.)

His belief is that while there are and always will be alternatives like Facebook or Yelp to plant a flag, there is nothing that can replace the value and longer term security and control of building something of your own — a sentiment small business owners would surely grasp.

That is perhaps the most interesting aspect of GoSite as it exists today: it precisely doesn’t see any of what already exists out there as “the competition.” Instead, Goode sees his purpose as building a dashboard that helps business owners manage all of that — with up to 80 different services currently available — and more, from a single place, with minimal need for technical skills or time spent learning the ropes.

“There is definitely huge demand from small businesses for help and something like GoSite can do that,” Goode said. “The space is very fragmented and noisy and they don’t even know where to start.”

This, combined with GoSite’s growth and relevance to the current market, is partly what attracted investors.

“The opportunity we are betting on here is the all-in-one solution,” said Vinny Pujji, partner at Left Lane. “If you are a carpet cleaner or house painter, you don’t have the capacity to understand or work with five or six different pieces of software. We spoke with thousands of SMBs when looking at this, and this was the answer we heard.” He said the other important thing is that GoSite has a customer service team and for SMBs that use it, they like that when they call, “GoSite picks up the phone.”
