
Building a better data economy


It’s “time to wake up and do a better job,” says publisher Tim O’Reilly—from getting serious about climate change to building a better data economy. And the way to build a better data economy, he argues, is through data commons—data as a common resource—not the way the giant tech companies act now, keeping data to themselves while profiting from it and causing us harm in the process.

“When companies are using the data they collect for our benefit, it’s a great deal,” says O’Reilly, founder and CEO of O’Reilly Media. “When companies are using it to manipulate us, or to direct us in a way that hurts us, or that enhances their market power at the expense of competitors who might provide us better value, then they’re harming us with our data.” And that’s the next big thing he’s researching: a specific type of harm that happens when tech companies use data against us to shape what we see, hear, and believe.

It’s what O’Reilly calls “algorithmic rents”: the use of data, algorithms, and user interface design to control who gets what information and why. Unfortunately, one only has to look at the news to see the rapid spread of misinformation on the internet tied to unrest in countries across the world. Cui bono? We can ask who profits, but perhaps the better question is “who suffers?” According to O’Reilly, “If you build an economy where you’re taking more out of the system than you’re putting back or that you’re creating, then guess what, you’re not long for this world.” That matters because users of this technology need to start thinking about the worth of individual data and what it means when very few companies control that data, even when it’s more valuable in the open. After all, there are “consequences of not creating enough value for others.”

We’re now approaching a different idea: what if it’s actually time to start rethinking capitalism as a whole? “It’s a really great time for us to be talking about how do we want to change capitalism, because we change it every 30, 40 years,” O’Reilly says. He clarifies that this is not about abolishing capitalism, but that what we have isn’t good enough anymore. “We actually have to do better, and we can do better. And to me better is defined by increasing prosperity for everyone.”

In this episode of Business Lab, O’Reilly discusses the evolution of how tech giants like Facebook and Google create value for themselves and harm for others in increasingly walled gardens. He also discusses how crises like covid-19 and climate change are the necessary catalysts that fuel a “collective decision” to “overcome the massive problems of the data economy.”

Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Omidyar Network.

Show notes and links

“We need more than innovation to build a world that’s prosperous for all,” by Tim O’Reilly, Radar, June 17, 2019

“Why we invested in building an equitable data economy,” by Sushant Kumar, Omidyar Network, August 14, 2020

“Tim O’Reilly – ‘Covid-19 is an opportunity to break the current economic paradigm,’” by Derek du Preez, Diginomica, July 3, 2020

“Fair value? Fixing the data economy,” MIT Technology Review Insights, December 3, 2020

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is the data economy. More specifically—democratizing data, making data more open, accessible, controllable, by users. And not just tech companies and their customers, but also citizens and even government itself. But what does a fair data economy look like when a few companies control your data?

Two words for you: algorithmic rent.

My guest is Tim O’Reilly, the founder, CEO, and chairman of O’Reilly Media. He’s a partner in the early-stage venture firm O’Reilly AlphaTech Ventures. He’s also on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. He recently wrote the book WTF?: What’s the Future and Why It’s Up to Us. If you’re in tech, you’ll recognize the iconic O’Reilly brand: pen and ink drawings of animals on technology book covers—and you likely picked up one of those books to help build your career, whether as a designer, software engineer, or CTO.

This episode of Business Lab is produced in association with Omidyar Network.

Welcome, Tim.

Tim O’Reilly: Glad to be with you, Laurel.

Laurel: Well, so let’s just first mention to our listeners that in my previous career, I was fortunate enough to work with you and for O’Reilly Media. And this is a great time to have this conversation, because you saw all of these trends coming down the pike way before anyone else—open source, web 2.0, government as a platform, the maker movement. We can frame this conversation with a topic that you’ve been talking about for a while—the value of data and open access to data. So in 2021, how are you thinking about the value of data?

Tim: Well, there are a couple of ways I’m thinking about it. And the first is, the conversation about value is pretty misguided in a lot of ways. People say, ‘Well, why don’t I get a share of the value of my data?’ And of course, the answer is you do get a share of the value of your data. When you trade your data to Google for email and search and maps, you’re getting quite a lot of value. I actually did some back-of-the-napkin math recently, starting from the average revenue per user. Facebook’s annual revenue per user worldwide is about $30. That’s $30 a year. Now, the profit margin is about 25%. So that means they’re making about $7.50 per user per year. So should you get a share of that? Do you think that the $1 or $2 that you might, at the most extreme, be able to claim as your share of that value is what Facebook is worth to you?

And I think in a similar way, you look at Google, it’s a slightly bigger number. Their average profit per user is about $60. So, OK, let’s just say you got a quarter of that: $15 a year. That’s $1.25 a month. You pay 10 times that for your Spotify account. So effectively, you’re getting a pretty good deal. So the question of value is the wrong question. The question is, is the data being used for you or against you? And I think that’s really the question. When companies are using the data for our benefit, it’s a great deal. When companies are using it to manipulate us or to direct us in a way that hurts us or that enhances their market power at the expense of competitors who might provide us better value, then they’re harming us with our data.
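To make the back-of-the-napkin math concrete, here is a minimal sketch in Python using the rough figures quoted above. The function and the 25% “share” are purely illustrative assumptions for the thought experiment, not a real payout mechanism.

```python
# Back-of-the-napkin math from the conversation above.
# All figures are rough, illustrative numbers from the transcript.

def monthly_share(annual_profit_per_user: float, share: float) -> float:
    """Hypothetical monthly payout if a user could claim `share`
    of the annual profit made from their data."""
    return annual_profit_per_user * share / 12

# Facebook: ~$30 revenue per user per year at a ~25% margin -> ~$7.50 profit.
facebook_profit = 30 * 0.25
# Google: ~$60 profit per user per year, per the transcript.
google_profit = 60.0

# Even an extreme 25% "data dividend" is small next to a ~$12.50/month
# Spotify subscription (10x the $1.25 Google figure above).
for name, profit in [("Facebook", facebook_profit), ("Google", google_profit)]:
    print(f"{name}: ${monthly_share(profit, 0.25):.2f}/month at a 25% share")
```

Running it prints roughly $0.16 a month for Facebook and $1.25 a month for Google—the comparison O’Reilly draws against a Spotify subscription.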

And that’s where I’d like to move the conversation. And in particular, I’m focused on a particular class of harm that I started calling algorithmic rents. And that is, when you think about the data economy, it’s used to shape what we see and hear and believe. This became very obvious to people in the last U.S. election. Misinformation in general, advertising in general, is increasingly guided by data-enabled algorithmic systems. And the question that I think is fairly profound is, are those systems working for us or against us? And if they have turned extractive, where they’re basically working to make money for the company rather than to give benefit to the users, then we’re getting screwed. And so, what I’ve been trying to do is to start to document and track and establish this concept of the ability to control the algorithm as a way of controlling who gets what and why.

And I’ve been focused less on the user end of it and more on the supplier end of it. Let’s take Google. Google is this intermediary between us and literally millions or hundreds of millions of sources of information. And they decide which ones get the attention. For the first decade and a half of Google’s existence, and still in many areas that are noncommercial—which is probably about 95% of all searches—they are using the tools of what I have called collective intelligence. So everything from ‘What do people actually click on?’ ‘What do the links tell us?’ ‘What is the value of links and page rank?’ All these things give us the result that they really think is the best thing that we’re looking for. So back when Google IPO’ed in 2004, they attached an interview with Larry Page in which he said, ‘Our goal is to help you find what you want and go away.’

And Google really operated that way. Even their advertising model was designed to satisfy user needs. Pay-per-click was: we’ll only charge the advertiser if you actually click on the ad, meaning that you were interested in it. They had a very positive model, but I think in the last decade, they really decided that they need to allocate more of the value to themselves. And so if you look at a Google search result in a commercially valuable area, you can contrast it with Google of 10 years ago or you can contrast it with a non-commercial search today. You will see that if it’s commercially valuable, most of the page is given up to one of two things: Google’s own properties or advertisements. And what we used to call “organic search results”—on the phone, they’re often on the second or third screen. Even on a laptop, they might be a little one that you see down in the corner. The user-generated, user-valuable content has been superseded by content that Google or advertisers want us to see. That is, they’re using their algorithm to put in front of us not the data they think is best for us, but the data they think is best for them. Now, I think there’s another thing. Back when Google was first founded, in the original Google search paper that Larry and Sergey wrote while they were still at Stanford, they had an appendix on advertising and mixed motives, and they didn’t think an advertising-funded search engine could be fair. And they spent a lot of time trying to figure out how to counter that when they adopted advertising as their model, but, I think, eventually they lost.

So too Amazon. Amazon used to take hundreds of different signals into account to show you what they really thought were the best products for you, the best deal. And it’s hard to believe that’s still the case when you do a search on Amazon and almost all of the results are sponsored—advertisers saying, no, us, take our product. Effectively, Amazon is using their algorithm to extract what economists call rents from the people who want to sell products on their site. And it’s very interesting; the concept of rents has really entered my vocabulary only in the last couple of years. And there are really two kinds of rents, and both of them have to do with a certain kind of power asymmetry.

And the first is a rent that you get because you control something valuable. Think of the ferryman in the Middle Ages, who basically said, yeah, you’ve got to pay me if you want to cross the river here—or pay a bridge toll. That’s what people would call rents. It was also the fact that the local warlord was able to tell all the people who were working on “his lands” that you have to give me a share of your crops. And that kind of rent, which comes as a result of a power asymmetry, is I think what we’re seeing here.

There’s another kind of rent that I think is also really worth thinking about, which is when something grows in value independent of your own investment. And I haven’t quite come to grips with how this applies in the digital economy, but I’m convinced that it does, because the digital economy is not so different from other human economies. Think about land rents. When you build a house, you’ve put in capital and labor, you’ve made an improvement, and there’s an increase in value. But let’s say that 1,000, or in the case of a city, millions of other people also build houses; the value of your house goes up because of this collective activity. And that value you didn’t create—or you co-created with everyone else. When government collects taxes and builds roads and schools and infrastructure, again, the value of your property goes up.

And that kind of interesting question of the value that is created communally being allocated instead to a private company, instead of to everybody, is I think another piece of this question of rents. I don’t think the right question is, how do we get our $1 or $2 or $5 share of Google’s profit? The right question is, is Google creating enough of a common value for all of us or are they keeping that increase that we create collectively for themselves?

Laurel: So no, it’s not just monetary value, is it? We were just speaking with Parminder Singh from IT for Change about the value of data commons. Data commons have always been part of the idea of the good part of the internet, right? When people come together and share what they have as a collective, you can go off and find new learnings from that data and build new products. This collective thinking, this collective intelligence, really spurred the entire building of the internet. So what is starting to destroy the data commons—increasingly intelligent algorithms, or perhaps more of a human behavior, a societal change, or both?

Tim: Well, both, in a certain way. I think one of my big ideas, one that I think I’m going to be pushing for the next decade or two (unless I succeed, as I haven’t with some past campaigns), is to get people to understand that our economy is also an algorithmic system. We have this moment now where we’re so focused on big tech and the role of algorithms at Google and Amazon and Facebook and app stores and everything else, but we don’t take the opportunity to ask ourselves: how does our economy work like that also? And I think there are some really powerful analogies between, say, the incentives that drive Facebook and the incentives that drive every company—the way those incentives are expressed. Just like we could say, why does Facebook show us misinformation?

What’s in it for them? Is it just a mistake or are there reasons? And you say, “Well actually, yeah, it’s highly engaging, highly valuable content.” Right. And you say, “Well, is that the same reason why Purdue Pharma gave us misinformation about the addictiveness of OxyContin?” And you say, “Oh yeah, it is.” Why would companies do that? Why would they be so antisocial? And then you go, oh, actually, because there’s a master algorithm in our economy, which is expressed through our financial system.

Our financial system is now primarily about stock price. And you go, OK, companies have been told for the last 40 years that their prime directive, going back to Milton Friedman, is that the only responsibility of a business is to increase value for its shareholders. And then that got embodied in executive compensation and in corporate governance. We literally say humans don’t matter, society doesn’t matter. The only thing that matters is to return value to your shareholders. And the way you do that is by increasing your stock price.

So we have built an algorithm into our economy which is clearly wrong, just like Facebook’s focus on ‘let’s show people things that are more engaging’ turned out to be wrong. The people who came up with both of these ideas thought they were going to have good outcomes, but when Facebook has a bad outcome, we say, you guys need to fix that. When our tax policy, when our incentives, when our corporate governance comes out wrong, we go, “Oh well, that’s just the market.” It’s like the law of gravity. You can’t change it. No. And that’s really the reason why my book was subtitled What’s the Future and Why It’s Up to Us. We have made choices as a society that are giving us the outcomes we are getting. We baked them into the system—into the rules, the fundamental underlying economic algorithms. And those algorithms are just as changeable as the algorithms used by a Facebook or a Google or an Amazon, and they’re just as much under the control of human choice.

And I think there’s an opportunity, instead of demonizing tech, to use it as a mirror and say, “Oh, we need to actually do better.” And I think we see this in small ways. We’re starting to realize, oh, when we build an algorithm for criminal justice and sentencing, it’s biased because we fed it biased data. We’re using AI and algorithmic systems as a mirror to see more deeply what’s wrong in our society. Like, wow, our judges have been biased all along. Our courts have been biased all along. And when we built the algorithmic system and trained it on that data, it replicated those biases—and we go, really, that’s what we’ve been doing all along. And I think in a similar way, there’s a challenge for us to look at the results of our economy as the results of a biased algorithm.

Laurel: And that really is just sort of the exclamation point on other societal issues, right? If racism is baked into society and it’s part of what we’ve known as a country in America for generations, how is that surprising? We can see with this mirror, right, so many things coming down our way. And I think 2020 was one of those seminal years that just proved to everyone that the mirror was absolutely reflecting what was happening in society. We just had to look in it. So when we think about building algorithms, building a better society, changing that economic structure, where do we start?

Tim: Well, I mean, obviously the first step in any change is a new mental model of how things work. If you think about the progress of science, it comes when we actually have, in some instances, a better understanding of the way the world works. And I think we are at a point where we have an opportunity. There’s this wonderful line from a guy named Paul Cohen. He’s a professor of computer science now at the University of Pittsburgh, but he used to be the program manager for AI at DARPA. We were at one of these AI governance events at the American Association for the Advancement of Science and he said something that I just wrote down and I’ve been quoting ever since. He said, “The opportunity of AI is to help humans model and manage complex interacting systems.” And I think there’s an amazing opportunity before us in this AI moment to build better systems.

And that’s why I’m particularly sad about this point of algorithmic rents, and, for example, the apparent turn of Google and Amazon toward cheating in the system that they used to run as a fair broker. Because they have shown us that it was possible to use more and more data, better and better signals, to manage a market. There’s this idea in traditional economics that in some sense, money is the coordinating function of what Adam Smith called the “invisible hand.” As people pursue their self-interest in a world of perfect information, everybody figures out what their self-interest is. Of course, it’s not actually true, but in the theoretical world, let’s just say it is true that people will say, “Oh yeah, that’s what that’s worth to me, that’s what I’ll pay.”

And this whole question of “marginal utility” is all about money. And the thing that’s so fascinating to me about Google organic search is that it’s the first large-scale example I think we have—and when I say large scale, I mean global scale, as opposed to, say, a barter marketplace. It’s a marketplace with billions of users that was entirely coordinated without money. And you say, “How can you say that?” Because of course, Google was making scads of money—but they were running two marketplaces in parallel. And in one of them, the marketplace of organic search—you remember the 10 blue links, which is still what Google does on a non-commercial search—you have hundreds of signals: page rank, full-text search, now done with machine learning.

You have things like the long click and the short click. If somebody clicks on the first result and they come right back and click on the second link, and then they come right back and they click on the third link, and then [Google] goes away and thinks, “Oh, it looks like the third link was the one that worked for them.” That’s collective intelligence. Harnessing all that user intelligence to coordinate a market so that you literally have for billions of unique searches—the best result. And all of this is coordinated without money. And then off to the side, [Google] had, well, if this is commercially valuable, then maybe some advertising search. And now they’ve kind of preempted that organic search whenever money is involved. But the point is, if we’re really looking to say, how do we model and manage complex interacting systems, we have a great use case. We have a great demonstration that it’s possible.
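As an illustration of the “long click/short click” signal described above, here is a minimal, hypothetical sketch in Python. The data shape, the threshold, and the scoring are invented for illustration only; Google’s actual ranking systems are vastly more sophisticated.

```python
from collections import defaultdict

# Hypothetical click log: (query, result_url, seconds_before_returning).
# A "short click" (quick bounce back to the results page) counts against
# a result; a "long click" (the user stayed) counts in its favor.
LONG_CLICK_SECONDS = 30  # invented threshold, for illustration

def score_results(click_log):
    """Aggregate a naive long-click/short-click score per (query, result)."""
    scores = defaultdict(float)
    for query, url, dwell in click_log:
        scores[(query, url)] += 1.0 if dwell >= LONG_CLICK_SECONDS else -0.5
    return scores

log = [
    ("data economy", "a.example", 3),    # bounced right back: short click
    ("data economy", "b.example", 5),    # bounced right back: short click
    ("data economy", "c.example", 400),  # stayed: the result that "worked"
]
for (query, url), score in sorted(score_results(log).items(),
                                  key=lambda kv: -kv[1]):
    print(query, url, score)
```

The point of the sketch is only that user behavior itself, not money, coordinates which result rises to the top.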

And now I start saying, ‘Well, what other kinds of problems can we solve that way?’ And you look at a group like Carla Gomes’ Institute for Computational Sustainability out of Cornell University. They’re basically saying, well, let’s look at various kinds of ecological factors and take lots and lots of different signals into account. So for example, they did a project with a Brazilian power company to help them decide not just ‘Where should we site our dams based on what will generate the most power?’ but ‘What will disrupt the fewest communities?’ and ‘What will affect endangered species the least?’ And they were able to come up with better outcomes than just the normal ones. [The Institute] did this amazing project with California rice growers, where they basically realized that if the farmers could adjust the timing of when they released the water into the rice paddies to match up with the migration of birds, the birds actually acted as natural pest control in the paddies. Just amazing stuff that we could start to do.

And I think there’s an enormous opportunity. And this is kind of part of what I mean by the data commons, because many of these things are going to be enabled by a kind of interoperability. I think one of the things that’s so different between the early web and today is the presence of walled gardens: Facebook is a walled garden. Google is increasingly a walled garden. More than half of all Google searches begin and end on Google properties. The searches don’t go out anywhere on the web. The web was this triumph of interoperability. It was the building of a global commons. And that commons has been walled off by every company trying to say, ‘Well, we’re going to try to lock you in.’ So the question is, how do we focus on interoperability and lack of lock-in, and move this conversation away from ‘Oh, pay me some money for my data when I’m already getting services’? No—services that actually give back to the community, so that community value gets created, are far more interesting to me.

Laurel: Yeah. So breaking down those walled gardens—or maybe I should say creating doors—so that data that should belong to the public can leave them. How do we actually start rethinking data extraction and governance as a society?

Tim: Yeah. I mean, I think there are several ways that that happens and they’re not exclusive, they kind of come all together. People will look at, for example, the role of government in dealing with market failures. And you could certainly argue that what’s happening in terms of the concentration of power by the platforms is a market failure, and that perhaps anti-trust might be appropriate. You can certainly say that the work that the European Union has been leading on with privacy legislation is an attempt by government to regulate some of these misuses. But I think we’re in the very early stages of figuring out what a government response ought to look like. And I think it’s really important for individuals to continue to push the boundaries of deciding what do we want out of the companies that we work with.

Laurel: When we think about the choices we need to make as individuals, and then as part of a society—for example, Omidyar Network is focusing on how we reimagine capitalism. And when we take on a large topic like that, you and Professor Mariana Mazzucato at University College London are researching that very kind of challenge, right? So when we are extracting value out of data, how do we think about reapplying that value in a form of capitalism that everyone can still connect to and understand? Is there actually a fair balance, where everyone gets a little bit of the pie?

Tim: I think there is. And I think this has sort of been my approach throughout my career, which is to assume that, for the most part, people are good, and not to demonize companies, not to demonize executives, and not to demonize industries. But to ask ourselves, first of all, what are the incentives we’re giving them? What are the directions that they’re getting from society? But also, to have companies ask themselves, do they understand what they’re doing?

So if you look back at my advocacy 22 years ago, or whenever it was, 23 years ago, about open source software, it was really focused on… You could look at the free software movement as it was defined at the time as kind of analogous to a lot of the current privacy efforts or the regulatory efforts. It was like, we’re going to use a legal solution. We’re going to come up with a license to keep these bad people from doing this bad thing. I and other early open source advocates realized that, no, actually we just have to tell people why sharing is better, why it works better. And we started telling a story about the value that was being created by releasing source code for free, having it be modifiable by people. And once people understood that, open source took over the world, right? Because we were like, ‘Oh, this is actually better.’ And I think in a similar way, I think there’s a kind of ecological thinking, ecosystem thinking, that we need to have. And I don’t just mean in the narrow sense of ecology. I mean, literally business ecosystems, economy as ecosystem. The fact that for Google, the health of the web should matter more than their own profits.

At O’Reilly, we’ve always had this slogan, “create more value than you capture.” And it’s a real problem for companies. For me, one of my missions is to convince companies, no, if you’re creating more value for yourself, for your company, than you’re creating for the ecosystem as a whole, you’re doomed. And of course, that’s true in the physical ecology when humans are basically using up more resources than we’re putting back. Where we’re passing off all these externalities to our descendants. That’s obviously not sustainable. And I think the same thing is true in business. If you build an economy where you’re taking more out of the system than you’re putting back or that you’re creating, then guess what, you’re not long for this world. Whether that’s because you’re going to enable competitors or because your customers are going to turn on you or just because you’ll lose your creative edge.

These are all consequences. And I think we can teach companies that these are the consequences of not creating enough value for others. And not only that, but who you have to create value for, because I think Silicon Valley has been focused on thinking, ‘Well, as long as we’re creating value for users, nothing else matters.’ And I don’t believe that. If you don’t create value for your suppliers, for example, they’re going to stop being able to innovate. If Google is the only company that is able to profit from web content, or takes too big a share, hey, guess what, people will just stop creating websites. Oh, guess what, they went over to Facebook. Take Google: their best weapon against Facebook was not to build something like Google+, which was trying to build a rival walled garden. It was to make the web more vibrant, and they didn’t do that. So Facebook’s walled garden outcompeted the open web partly because, guess what, Google was sucking out a lot of the economic value.

Laurel: Speaking of economic value, and when data is the product: Omidyar Network defines data as something whose value does not diminish. It can be used to make judgments about third parties that weren’t involved in the original collection of the data. Data can be more valuable when combined with other datasets, which we know. And data should have value to all parties involved. Data doesn’t go bad, right? We can keep using this unlimited product. And I say we, but the algorithms can sort of make decisions about the economy for a very long time. So if we don’t actually step in and start thinking about data in a different way, we’re sowing the seeds for how it will be used in the future as well.

Tim: I think that’s absolutely true. I will say that I don’t think that it’s true that data doesn’t go stale. It obviously does go stale. In fact, there’s this great quote from Gregory Bateson that I’ve remembered probably for most of my life now, which is, “Information is a difference that makes a difference.” And when something is known by everyone, it’s no longer valuable, right? So it’s literally that ability to make a difference that makes data valuable. So I guess what I would say is, no, data does go stale and it has to keep being collected, it has to keep being cultivated. But then the second part of your point, which was that the decisions we make now are going to have ramifications far in the future, I completely agree. I mean, everything you look at in history, we have to think forward in time and not just backward in time because the consequences of the choices we make will be with us long after we’ve reaped the benefits and gone home.

I guess I’d just say, I believe that humans are fundamentally social animals. I’ve recently gotten very interested in the work of David Sloan Wilson, who’s an evolutionary biologist. One of his great sayings is, “Selfish individuals outcompete altruistic individuals, but altruistic groups outcompete selfish groups.” And in some ways, the history of human society is a history of advances in cooperation among larger and larger groups. And the thing that I guess I would say about where we were with the internet—those of us who were around in the early optimistic period were saying, ‘Oh my God, this was this amazing advance in distributed group cooperation’—and it still is. You look at things like global open source projects. You look at things like the universal information sharing of the worldwide web. You look at the progress of open science. There are so many areas where that is still happening, but there is this counterforce that we need to wake people up to, which is making walled gardens, trying to basically lock people in, trying to impede the free flow of information, the free flow of attention. These are basically counter-evolutionary acts.

Laurel: So speaking about this moment in time right now, you recently said that covid-19 is a big reset of the Overton window and the economy. So what is so different right now this year that we can take advantage of?

Tim: Well, the concept of the Overton window is this notion that what seems possible is framed as sort of like a window on the set of possibilities. And then somebody can change that. For example, if you look at former President Trump, he changed the Overton window about what kind of behavior was acceptable in politics, in a bad way, in my opinion. And I think in a similar way, when companies display this monopolistic user hostile behavior, they move the Overton window in a bad way. When we come to accept, for example, this massive inequality. We’re moving the Overton window to say some small number of people having huge amounts of money and other people getting less and less of the pie is OK.

But all of a sudden, we have this pandemic, and we think, ‘Oh my God, the whole economy is going to fall down.’ We’ve got to rescue people or there’ll be consequences. And so we suddenly say, ‘Well, actually yeah, we actually need to spend the money.’ We need to actually do things like develop vaccines in a big hurry. We have to shut down the economy, even though it’s going to hurt businesses. We were worried it was going to hurt the stock market, it turned out it didn’t. But we did it anyway. And I think we’re entering a period of time in which the kinds of things that covid makes us do—which is reevaluate what we can do and, ‘Oh, no, you couldn’t possibly do that’—it’s going to change. I think climate change is doing that. It’s making us go, holy cow, we’ve got to do something. And I do think that there’s a real opportunity when circumstances tell us that the way things have been need to change. And if you look at big economic systems, they typically change around some devastating event.

Basically, the period of the Great Depression and then World War II led to the revolution that gave us the post-war prosperity, because everybody was like, ‘Whoa, we don’t want to go back there.’ So with the Marshall Plan, we’re going to actually build the economies of the people we defeated, because, of course, after World War I, the victors had crushed Germany down, which led to the rise of populism. And so, they realized that they actually had to do something different, and we had 40 years of prosperity as a result. There’s a kind of algorithmic rot that happens not just at Facebook and Google, but a kind of algorithmic rot that happens in economic planning, which is that the systems they had built, which created an enormous shared prosperity, had a side effect called inflation. And inflation was really, really high, and interest rates were really, really high, in the 1970s. And they went, ‘Oh my God, this system is broken.’ And they came back with a new system, which focused on crushing inflation and increasing corporate profits. And we kind of ran with that and we had some go-go years, and now we’re hitting the crisis where the consequences of the economy that we built over the last 40 years are failing pretty provocatively.

And that’s why I think it’s a really great time for us to be talking about how do we want to change capitalism, because we change it every 30, 40 years. It’s a pretty big change-up in how it works. And I think we’re due for another one and it shouldn’t be seen as “abolish capitalism because capitalism has been this incredible engine of productivity,” but boy, if anybody thinks we’re done with it and we think that we have perfected it, they’re crazy. We actually have to do better and we can do better. And to me better is defined by increasing prosperity for everyone.

Laurel: Because capitalism is not a static thing or an idea. So in general, Tim, what are you optimistic about? What are you thinking about that gives you hope? How are you going to man this army to change the way that we are thinking about the data economy?

Tim: Well, what gives me hope is that people fundamentally care about each other. What gives me hope is the fact that people have the ability to change their minds and to come up with new beliefs about what’s fair and about what works. There’s a lot of talk about, ‘Well, we’ll overcome problems like climate change because of our ability to innovate.’ And yeah, that’s also true, but more importantly, I think that we’ll overcome the massive problems of the data economy because we have come to a collective decision that we should. Because, of course, innovation happens not as a first-order effect, but as a second-order effect: what are people focused on? We’ve been focused for quite a while on the wrong things. And I think one of the things that actually, in an odd way, gives me optimism is the rise of crises like pandemics and climate change, which are going to force us to wake up and do a better job.

Laurel: Thank you for joining us today, Tim, on the Business Lab.

Tim: You’re very welcome.

Laurel: That was Tim O’Reilly, the founder, CEO, and chairman of O’Reilly Media, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab; I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.



Collective, a back-office for the self-employed, raises $20M from Ashton Kutcher’s VC


With so much focus on the ‘creator economy’, and countries hit by the effects of the pandemic, the self-employed market is ‘booming’, for good or for ill. So it’s not too much of a surprise that Collective, a subscription-based back office for the self-employed, has raised $20 million in Series A funding after launching only late last year.

The round was led by General Catalyst and joined by Sound Ventures (the venture capital fund founded by Ashton Kutcher and Guy Oseary). Collective has now raised a total of $28.65 million. Other notable investors include Steve Chen (founder, YouTube), Hamish McKenzie (founder, Substack), Aaron Levie (founder, Box), Kevin Lin (founder, Twitch), Sam Yam (founder, Patreon), Li Jin (Atelier Ventures), Shadiah Sigala (founder, HoneyBook), Adrian Aoun (founder, Forward), Holly Liu (founder, Kabam), Andrew Dudum (founder, Hims) and Edward Hartman (founder, LegalZoom).

Ashton Kutcher said in a statement: “We’re proud to be supporting a company that’s making it easier for creators to focus on what they do best by taking care of the back office work that creates so much friction for so many early entrepreneurs. I would have loved something like this when I was getting started.”

Launched in September 2020 by CEO Hooman Radfar, CPO Ugur Kaner and CTO Bugra Akcay, Collective offers “tailored” financial services and access to advisors who oversee accounting, tax, bookkeeping, and business-formation needs. There are currently 59 million self-employed workers in the U.S. (36% of the US workforce), most of whom do all their own admin. Collective hopes to be their online back-office platform.

Speaking to me over email, Radfar said that the start-up fintech market tends to serve companies like them – other start-ups and growing SMBs: “Companies like Pilot have done an amazing job at building a back-office platform that handles taxes, bookkeeping and finances for start-ups. We want to offer that same great value to the underserved business-of-one community, since they are the largest group of founders in the country.”

He added: “Before Collective, consultants, freelancers, and other solo founders had to string together their back-office solution using DIY platforms like Quickbooks, Gusto, and LegalZoom. If they were lucky, they had the help of a part-time accountant to advise them. Collective makes handling finances easy with the first all-in-one platform that not only bundles these tools into one platform, but also provides the technology and team to optimize their tax savings like the pros.”

According to some estimates, the number of lone freelancers in the US is projected to reach 86.5 million—50% of the US workforce—by 2027, with the freelancer space projected to grow three times faster than the traditional workforce.

Niko Bonatsos, managing director at General Catalyst, said: “Collective is serving the $1.2 trillion business-of-one industry by building the first back-office platform that saves individuals significant time and money, while providing them with the appropriate tools and resources they need to help them succeed. We’re excited to support Collective as they expand their team and build an exceptional service for the business-of-one community.”


UK publishes draft Online Safety Bill


The UK government has published its long-trailed (child) ‘safety-focused’ plan to regulate online content and speech.

The Online Safety Bill has been in the works for years — during which time a prior plan to require age verification for accessing online porn in the UK, also with the goal of protecting kids from being exposed to inappropriate content online but which was widely criticized as unworkable, got quietly dropped.

At the time the government said it would focus on introducing comprehensive legislation to regulate a range of online harms. It can now say it’s done that.

The 145-page Online Safety Bill can be found here on the gov.uk website—along with 123 pages of explanatory notes and a 146-page impact assessment.

The draft legislation imposes a duty of care on digital service providers to moderate user-generated content in a way that prevents users from being exposed to illegal and/or harmful stuff online.

The government dubs the plan globally “groundbreaking” and claims it will usher in “a new age of accountability for tech and bring fairness and accountability to the online world”.

Critics warn the proposals will harm freedom of expression by encouraging platforms to over-censor, while also creating major legal and operational headaches for digital businesses that will discourage tech innovation.

The debate starts now in earnest.

The bill will be scrutinised by a joint committee of MPs — before a final version is formally introduced to Parliament for debate later this year.

How long it might take to hit the statute books isn’t clear but the government has a large majority in parliament so, failing major public uproar and/or mass opposition within its own ranks, the Online Safety Bill has a clear road to becoming law.

Commenting in a statement, digital secretary Oliver Dowden said: “Today the UK shows global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world.

“We will protect children on the internet, crack down on racist abuse on social media and through new measures to safeguard our liberties, create a truly democratic digital age.”

The length of time it’s taken for the government to draft the Online Safety Bill underscores the legislative challenge involved in trying to ‘regulate the Internet’.

In a bit of a Freudian slip, the DCMS’ own PR talks about “the government’s fight to make the internet safe”. And there are certainly question-marks over who the future winners and losers of the UK’s Online Safety laws will be.

Safety and democracy?

In a press release about the plan, the Department for Digital, Culture, Media and Sport (DCMS) claimed the “landmark laws” will “keep children safe, stop racial hate and protect democracy online”.

But as that grab-bag of headline goals implies there’s an awful lot going on here — and huge potential for things to go wrong if the end result is an incoherent mess of contradictory rules that make it harder for digital businesses to operate and for Internet users to access the content they need.

The laws are set to apply widely — not just to tech giants or social media sites but to a broad swathe of websites, apps and services that host user-generated content or just allow people to talk to others online.

In-scope services will face a legal requirement to remove and/or limit the spread of illegal and (in the case of larger services) harmful content, with the risk of major penalties for failing in this new duty of care toward users. There will also be requirements for reporting child sexual exploitation content to law enforcement.

Ofcom, the UK’s comms regulator — which is responsible for regulating the broadcast media and telecoms sectors — is set to become the UK Internet’s content watchdog too, under the plan.

It will have powers to sanction companies that fail in the new duty of care toward users by hitting them with fines of up to £18M or ten per cent of annual global turnover (whichever is higher).
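For illustration, the “whichever is higher” cap works out as a simple maximum. This is a minimal sketch; the £18M floor and the 10% figure are those reported for the bill, and the example turnover figures are invented.

```python
def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    """Cap on Ofcom fines under the draft bill: £18M or 10% of
    annual global turnover, whichever is higher."""
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)

print(max_fine_gbp(50_000_000))     # small firm: the £18M floor applies
print(max_fine_gbp(2_000_000_000))  # large platform: 10% of turnover = £200M
```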

The regulator will also get the power to block access to sites — so the potential for censoring entire platforms is baked in.

Some campaigners backing tough new Internet rules have been pressing the government to include the threat of criminal sanctions for CEOs to concentrate C-suite minds on anti-harms compliance. And while ministers haven’t gone that far, DCMS says a new criminal offence for senior managers has been included as a deferred power — adding: “This could be introduced at a later date if tech firms don’t step up their efforts to improve safety.”

Despite there being widespread public support in the UK for tougher rules for Internet platforms, the devil is in the detail of how exactly you do that.

Civil rights campaigners and tech policy experts have warned from the get-go that the government’s plan risks having a chilling effect on online expression by forcing private companies to be speech police.

Legal experts are also warning over how workable the framework will be, given hard-to-define concepts like “harms”—and, in a new addition, content that’s defined as “democratically important” (which the government wants certain platforms to have a special duty to protect).

The clear risk is massive legal uncertainty wrapping digital businesses — with knock-on impacts on startup innovation and availability of services in the UK.

The bill’s earlier incarnation — a 2019 White Paper — had the word “harms” in the title. That’s been swapped for a more anodyne reference to “safety” but the legal uncertainty hasn’t been swapped out.

The emphasis remains on trying to rein in an amorphous conglomerate of ‘harms’ — some illegal, others just unpleasant — that have been variously linked to or associated with online activity. (Often off the back of high profile media reporting, such as into children’s exposure to suicide content on platforms like Instagram.)

This can range from bullying and abuse (online trolling), to the spread of illegal content (child sexual exploitation), to content that’s merely inappropriate for children to see (legal pornography).

Certain types of online scams (romance fraud) are another harm the government wants the legislation to address, per latest additions.

The umbrella ‘harms’ framing makes the UK approach distinct from the European Union’s Digital Services Act—a parallel legislative proposal to update the EU’s digital rules that’s more tightly focused on things that are illegal, with the bloc setting out rules to standardize reporting procedures for illegal content and combating the risk of dangerous products being sold on ecommerce marketplaces with ‘know your customer’ requirements.

In response to criticism of the UK Bill’s potential impact on online expression, the government has added measures which it said today are aimed at strengthening people’s rights to express themselves freely online.

It also says it’s added in safeguards for journalism and to protect democratic political debate in the UK.

However, its approach is already raising questions—including over what look like some pretty contradictory stipulations.

For example, the DCMS’ discussion of how the bill will handle journalistic content confirms that content on news publishers’ own websites won’t be in scope of the law (reader comments on those sites are also not in scope) and that articles by “recognised news publishers” shared on in-scope services (such as social media sites) will be exempted from legal requirements that may otherwise apply to non journalistic content.

Indeed, platforms will have a legal requirement to safeguard access to journalism content. (“This means [digital platforms] will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists’ removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content,” DCMS notes.)

However the government also specifies that “citizen journalists’ content will have the same protections as professional journalists’ content” — so exactly where (or how) the line gets drawn between “recognized” news publishers (out of scope), citizen journalists (also out of scope), and just any old person blogging or posting stuff on the Internet (in scope… maybe?) is going to make for compelling viewing.

Carve outs to protect political speech also complicate the content moderation picture for digital services — given, for example, how extremist groups that hold racist opinions can seek to launder their hate speech and abuse as ‘political opinion’. (Some notoriously racist activists also like to claim to be ‘journalists’…)

DCMS writes that companies will be “forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation”.

“Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom,” it goes on, adding: “When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important.”

Platforms will face responsibility for balancing all these conflicting requirements — drawing on Codes of Practice on content moderation that respects freedom of expression which will be set out by Ofcom — but also under threat of major penalties being slapped on them by Ofcom if they get it wrong.

Interestingly, the government appears to be looking favorably on the Facebook-devised ‘Oversight Board’ model, where a panel of humans sit in judgement on ‘complex’ content moderation cases — and also discouraging too much use of AI filters which it warns risk missing speech nuance and over-removing content. (Especially interesting given the UK government’s prior pressure on platforms to adopt AI tools to speed up terrorism content takedowns.)

“The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate,” writes DCMS. “All in-scope companies will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.”

“People using their services will need to have access to effective routes of appeal for content removed without good reason and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to Ofcom and these complaints will form an essential part of Ofcom’s horizon-scanning, research and enforcement activity,” it goes on.

“Category 1 services [the largest, most popular services] will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects. These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.”

Another confusing-looking component of the plan is that while the bill includes measures to tackle what it calls “user-generated fraud” — such as posts on social media for fake investment opportunities or romance scams on dating apps — fraud that’s conducted online via advertising, emails or cloned websites will not be in scope, per DCMS, as it says “the Bill focuses on harm committed through user-generated content”.

Yet since Internet users can easily and cheaply create and run online ads — as platforms like Facebook essentially offer their ad targeting tools to anyone who’s willing to pay — then why carve out fraud by ads as exempt?

It seems a meaningless place to draw the line. Fraud where someone paid a few dollars to amplify their scam doesn’t seem a less harmful class of fraud than a free Facebook post linking to the self-same crypto investment scam.

In short, there’s a risk of arbitrary, ill-thought-through distinctions creating incoherent and confusing rules that are prone to loopholes. Which doesn’t sound good for anyone’s online safety.

In parallel, meanwhile, the government is devising an ambitious pro-competition ex ante regime to regulate tech giants specifically. Ensuring coherence and avoiding conflicting or overlapping requirements between that framework for platform giants and these wider digital harms rules is a further challenge.


Amazon updates Echo Show line with a pan and zoom camera and a kids model


Amazon this morning announced a handful of updates across its Echo Show line of smart screens. The most interesting top-level bit here is the addition of a pan-and-zoom camera to the mid-tier Echo Show. The feature is similar to ones found on Facebook’s various Portal devices and Google’s high-end Nest Hub Max.

Essentially, it’s designed to keep the subject in frame—Apple also recently introduced the similar Center Stage feature for the latest iPad Pro. It comes after Amazon introduced a far less subtle version in the Echo Show 10, which actually follows the subject around by swiveling the display around the base. I know I’m not alone in being a little creeped out seeing it in action.

The new feature arrives on the Show 8’s 13-megapixel camera, which is coupled with a built-in physical shutter—a mainstay as Amazon looks to stay ahead of the privacy conversation. The eight-inch HD display is powered by an upgraded octa-core processor and coupled with stereo speakers. The new Show 8 runs $130.

The other big news here is the arrival of the Echo Show 5 Kids—the one really new product in the bunch. At $95, the kid-focused version of the screen features a customizable home screen, colorful design, a two-year warranty in case of breaks and a one-year subscription to Amazon Kids+.

There’s a new version of the regular Show 5, too, featuring an upgraded HD camera, new colors and additional software features. That runs $85. The new devices go up for preorder today and start shipping later this month.
