If the technology industry can boast a true renaissance man, it is Jaron Lanier. A polymathic computer scientist, composer, visual artist and author with a mass of dreadlocks tumbling down his back, the 53-year-old Lanier first found fame by popularizing the term “virtual reality” (VR) in the early 1980s and founding VPL Research to develop VR products. He led the team that developed the first multi-person virtual worlds using head-mounted displays, the first VR avatars and the first software platform for immersive VR applications. The company’s patent portfolio was eventually sold to Sun Microsystems.
Today, Lanier is best known as the author of two influential books on the future of the digital world in which we live: 2010’s best-selling You Are Not a Gadget, and the recently released Who Owns the Future?
Lanier’s concern, at its simplest, is that we are building a digital future in which a small number of companies and individuals garner great wealth, while the economy as a whole starts to shrink, taking with it jobs, the middle classes and what he terms “economic dignity.” Lanier believes we can avoid that fate, but only if we start remunerating people for the digital assets (from personal data to designs for use by 3D printers) that we currently give away for nothing. Information may want to be free, but Lanier believes the world would be better served if it were affordable instead.
Lanier is far more than a scientist. He owns (and plays) one of the world’s largest collections of rare and ancient musical instruments, and often demonstrates them during his many speeches.
You laid the groundwork for Who Owns the Future? in your previous book, You Are Not a Gadget. What persuaded you of the need for a second book?
What persuaded me of the need for a second book is the discordance between what I find empirically in the world and the ideas about policy that everyone seems to return to as if there’s no alternative. We’re locked into a continued belief that investing in a particular kind of information-technology venture is good for society, when actually these often seem to be pulling society apart and creating ever more extreme income inequalities. We seem unable to connect the dots between the continued dysfunction of the financial sector, even as it expands profitability, and the rise of information technology.
What I had written about this topic before was a bit more impressionistic, and even the new book is still a relatively early approach to a more complete understanding of these topics, but I felt I needed to at least take another step toward trying to interpret how particular digital trends are impacting our economy, society and politics. But as with the previous book, I think I raised yet more questions that will require yet more work. I don’t think it’s complete.
Why is it that we all appear so willing to give away the hardest currency of the information economy — our personal data — either for free or in exchange for digital trivia?
People are relatively willing to accept suggestions so long as what they are asked to do is relatively easy and pleasant. I think technologists could just as well have proposed a different model of the digital economy, and it would have been accepted just as well. I don’t think there’s anything particularly significant about people being willing to give something a try.
The challenge is that people still don’t understand the value of their data. They have been infused with the idea that the ubiquitous fashionable arrangement, wherein you obtain free services or so-called bargains in exchange for personal data, is a fair trade. But it isn’t, because you’re not a first-class participant in the transaction. By first-class participant, I mean a party to a negotiation where everyone has roughly the same ability to bargain, so that when they do bargain, the result is a fair transaction in an open market economy. But if you’re in a structurally subordinate position, from which you have to accept whatever is offered, then you give much greater latitude and power to whoever has your data than you get in exchange.
I’m currently planning a research project to better understand the value of data. There are many ways to do this, but one is to look at loyalty cards, frequent flier memberships, and the like. There can be an argument about the difference between gathering intelligence about customers versus locking them in, but I argue the two benefits are deeply similar. The differential between using a loyalty card and not using the card for a given person in the course of a year is a measure of one small portion of the value of that person’s information. I strongly suspect that once we measure the cumulative value of personal data using techniques like this, we’ll see that it’s getting more valuable each year. I don’t know for sure, but that’s what I hypothesize.
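The loyalty-card differential Lanier describes can be sketched in a few lines. This is a minimal illustration with invented numbers, not his research methodology: it simply sums the discounts a person receives over a year of purchases as one lower bound on the value of that person’s data.

```python
# Hypothetical illustration of the measurement Lanier proposes: the annual
# spending differential between using and not using a loyalty card, taken
# as a lower bound on the value of one slice of a person's data.
# All figures below are illustrative assumptions.

def data_value_lower_bound(purchases, discount_rate):
    """Sum the loyalty-card discounts received across a year of purchases.

    purchases: list of pre-discount prices, in dollars
    discount_rate: fraction of each price discounted when the card is used
    """
    return sum(price * discount_rate for price in purchases)

# Example: 50 weekly grocery trips at $80 each, with a 5% loyalty discount.
year_of_groceries = [80.0] * 50
print(round(data_value_lower_bound(year_of_groceries, 0.05), 2))  # 200.0
```

As Lanier notes, this captures only one small portion of the data’s value; the point is that the quantity is measurable at all.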
If I’m right, then one interesting question is, will the value of information from a typical person ever transcend the poverty line? I think we’re headed towards that point. And if that does happen, then we have the potential for a new kind of society that escapes the bounds of the old debates between “Left” and “Right.” Instead, there could be an entirely new sort of more complete market that actually creates stable social security in an organic way. The possibility fascinates me. It is not irrational to imagine this future.
How much blame for data inequities do you ascribe to social media?
I don’t think social media per se is to blame — rather it’s the use of the consumer-facing Internet to achieve a kind of extreme income concentration in a way that’s similar to what has happened in finance during the last 20-25 years. I don’t think there was any evil scheme in Silicon Valley to make this happen. For example, Google didn’t have any roadmap from the start that said, wow, if we collect everybody’s personal information we can gradually be in a position to tax the world for access to transactions, or have an ability to manipulate outcomes using the power of statistical calculations on big data.
But in a sense, the basic scheme was foretold at the start of computing, when terms such as cybernetics were used more often, and people like Norbert Wiener were discussing the potential of information systems to let people manipulate each other, and how this could create power imbalances. Silicon Valley rediscovered these old thoughts, but in a way that was too immediately profitable to allow for self-reflection. You see a very similar pattern emerging in Silicon Valley to what happened in finance: the use of large-scale computation to gather enough data to gain an information advantage over whoever has a lesser computer.
I don’t know if you’ve read the early e-mails of (Facebook CEO) Mark Zuckerberg from when he was a student starting out, where he says he just can’t believe that people are giving him all their information. He seems utterly astonished that people are doing it. The reason his fellow students did it was simply that on a first pass most people are pretty trusting and good-natured and pretty game to try things. But what happens with social media is that they quickly become subject to a vaguely “blackmail-like” cycle that keeps them engaged and locked in, because if you don’t play the game intensely on Facebook, your reputation is at stake. I’m astonished at the energy people put into basically addressing this fear that the way other people perceive them, their being in the world, how they might be remembered, will be undermined unless they put all this labor into interacting.
The early years of economic transition are often accompanied by structural discontinuities, but in the case of data-driven economies, they seem particularly profound. Why is that?
Because the network-effect aspect overwhelms everything else, and network effects can build very quickly. The tulip mania of the 1600s happened fast and crashed fast, but in a world of digital networking, things happen even faster, yet the data doesn’t wilt away like a tulip. Network lock-in is persistent. When markets are disrupted by modern digital entrepreneurs, they don’t rebalance themselves remotely as quickly or easily as they did in earlier times.
Another problem is that you can’t automate common sense — but if you have a lot of data on a network and you use statistical algorithms to process it, you will create an illusion of having done just that. It’s an easy illusion to create.
We don’t currently have a complete enough scientific understanding of how the brain works or how common sense works. But we talk as if we do. We’re always talking about how we’ve implemented artificial intelligence or how we have created a so-called smart algorithm. But really we’re kidding ourselves. This is very hard for people who run big computers to admit. They always treasure the illusion that they are working with completed science — which they aren’t — and that they have already attained cosmic mastery of all possible cognition.
The mistake that happens again and again in these systems is that somebody believes they have the ultimate crystal ball because their statistical correlation algorithms are predictive, which they are, to some extent, for the very simple reason that the world isn’t entirely chaotic and random. Just as a matter of definition, big data algorithms will be predictive, and the bigger your computer and the more fine-tuned your feedback system, the more predictive they will be.
But, also by definition, you will inevitably hit a stage where statistics fail to predict change, because they don’t represent causal structure. That’s the point at which companies such as Long-Term Capital Management fail, mortgage schemes fail, high-frequency trading fails — and that’s where companies such as Google and Facebook would fail if we gave them a chance. However, if you construct an entire society around supporting the illusion that the falsehood of ultimate cognition is actually true, then you can sustain the illusion even longer. But eventually it will collapse.
The core problem is this idea of automating our own thought, our own responsibility, so we don’t have to take it anymore. That’s the core problem of this way of using computing. It doesn’t mean there’s anything wrong with networks or computing. Fundamentally, it just means that this is a particularly bad way of using them that’s highly seductive in the short term, and it’s a pattern we fall into again and again.
If the “siren servers” you describe dash many of our jobs against the rocks, what future do you see for data-driven economies, say, 20 years from now?
I think a data-driven economy could be a wonderful thing. The primary missing ingredient that would make it wonderful is to have more complete and honest accounting. That single change could create a data-driven economy that would be both sustainable and creative.
Right now we don’t have that. For example, when you translate a document automatically using, say, Google or Microsoft, it seems like this magic thing. Somebody can translate my document between languages and it’s free, so isn’t that great? But the truth is it’s never like that. There are no nonhuman sources of value to plug into digital networks. There are no angels or aliens showing up to fill the network with the bits that make it function.
What actually happens is that the companies that do automatic translation have scraped the Internet for preexisting translations. The result is just a statistical pastiche of preexisting translations done by real people. There are all these real people behind the curtain, and they aren’t being paid for their work.
Now, you might say that if you sent each of those people a few pennies for their contribution to the corpus of data, it would just be a negligible amount, and that would be true for any given one. But if you count up all the different auto-translators and all the different times that they happen, there would be many, many thousands of small transactions. And that would add up to something significant that would reflect the actual value human translators had contributed. In a sense, we would initiate a universal royalty scheme....
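The royalty scheme Lanier sketches can be made concrete with a toy calculation. The per-event fee, the proportional-split rule, and the contributor names here are all illustrative assumptions, not anything from the book:

```python
# Hypothetical sketch of the universal-royalty idea: a tiny fee on each
# machine translation, split among the human translators whose prior work
# was drawn upon. Fee, weights, and volumes are illustrative assumptions.

def split_royalty(fee, weights):
    """Divide one translation fee among contributors in proportion to how
    heavily each person's prior translations were statistically used."""
    total = sum(weights.values())
    return {person: fee * w / total for person, w in weights.items()}

# One 3-cent translation event drawing on three people's earlier work:
payout = split_royalty(0.03, {"ana": 5, "bo": 3, "chen": 2})

# Each payout is a fraction of a cent -- negligible alone, but millions of
# translation events per day accumulate into meaningful annual royalties.
annual = {person: amount * 10_000_000 for person, amount in payout.items()}
```

The individual amounts are negligible, exactly as Lanier concedes; the aggregate is not.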
Do you think part of the problem is that many of us have already given away the crown jewels, at least in terms of our personal data?
The transition issue is by far the hardest one. But, you know, there have been transitions in the past, and I think we can achieve transitions in the future.
I’ll give you one example from American history. Originally the American west was viewed as a place where land was free, although that free land was often sort of scammy in the sense that you would only have access to it through a monopolized railroad system. But it bears some similarity to the situation today, and there was a similar romance about it. Everyone ultimately became content that we moved away from free land and monopolized railroads to more of a real economy where more people function as first-class citizens in transactions with each other. So the maturing of the American west is a useful model....
Equally, there are many transitions that have gone badly, and the model I really dislike is that of outright revolution, because the problem with it is that it’s kind of random what will come next. You know that you will break a lot of stuff, but you don’t know whether you will end up with a system that’s better or worse.
Among technical people, there’s a lot of talk of revolution — everybody wants to be god-like. Everybody wants to be the one who reforms the world, to be the one who oversees the singularity. It reminds me a lot of the people who once said they would oversee the Marxist revolution, and that everything would turn out alright. The problem is that you imagine you will be in control but you won’t. Probably all that will happen is you will hurt a lot of people. This idea that we are barreling toward a singularity and that we’re going to change everything is inherently foolish, immature and cruel. The better idea is trying to construct, however imperfectly, incremental paths that make things better.
Even in a world that fairly compensates us for our data, how do we avoid self-commoditization? How can we differentiate our contributions when we will have so little control over them?
What I propose in the book is a brutally mathematical system that I’m sure would under-represent reality, as such systems always do, but nonetheless would be pretty straightforward. It’s based on “what-if” calculations. The question is, if my bits had never existed, what would be the value differential for a particular cloud scheme? If I hadn’t provided a translated book from which a lot of examples were taken, what difference would it make to the market value of new cloud-based translations?
If an entrepreneur can’t approximate that kind of what-if calculation, then the output of her cloud algorithm is kind of random or chaotic anyway, and she shouldn’t be making money from it. The digital economy should be set up in such a way that if you can’t calculate the value of corpus contributors, then you will not make money from your own scheme. To the degree that you can attribute value from contributors, that’s the measure of what you yourself should profit on, because the rest of your calculation is random and chaotic. In other words, the rest of the income you can generate is some sort of network-effect lock-in rent, but not real value creation. Digital entrepreneurs should earn percentages of the wealth they help generate for others.
You envision a “middle-class-oriented” information economy, one in which information isn’t free, but is at least affordable. Where does the working class fit into such an economy?
The key thing for me is whether the outcome of people interacting in a digital system is a power-law distribution or a bell curve distribution. That keeps it simple. What I mean when I talk about a middle class is having outcomes that more often look like bell curves than power laws. A power law is a high tower plus a long tail connected by an emaciated neck. So you have a few winners and everybody else is a wannabe. Instagram and American Idol are both like that. A bell curve is the result of a measurement of a population instead of a sorting.
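The contrast Lanier draws can be made visible with two synthetic samples. The distributions and parameters here are illustrative assumptions chosen only to show the shapes he describes, not a model of any real economy:

```python
# Illustrative contrast between the two outcome shapes Lanier describes:
# a power law concentrates almost everything in a few winners, while a
# bell curve has a strong middle. All parameters are assumptions.

import random

random.seed(0)

# Power-law outcomes: a heavy-tailed Pareto draw ("tower plus long tail").
power_law = sorted(
    (random.paretovariate(1.2) for _ in range(10_000)), reverse=True
)

# Bell-curve outcomes: a normal draw around a middle-class income.
bell = sorted((random.gauss(50_000, 12_000) for _ in range(10_000)), reverse=True)

# Share of total income held by the top 1% (top 100 of 10,000) in each.
power_share = sum(power_law[:100]) / sum(power_law)
bell_share = sum(bell[:100]) / sum(bell)

print(f"top 1% share, power law:  {power_share:.0%}")
print(f"top 1% share, bell curve: {bell_share:.0%}")
```

Under the power law, the top slice takes a large share of the total; under the bell curve, the top 1% hold only slightly more than 1%, and the middle holds the bulk.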
There’s a fundamental dilemma that is intrinsic to the math of economies, which is to the degree that you give individuals in an economy self-determination, and you have different individual outcomes and people invent their own lives and so on, you’ll engender a spread of outcomes, where by any particular measure some people will be left behind. Different measures might find different subpopulations left behind. But you’ll have some kind of a distribution in which some people are at the bottom of the distribution.
What I want to point out is that every single way of thinking about a good society or a better society that I’m aware of depends similarly on a strong middle hump in a bell curve, whether we want to call that a middle class or not. If you are an Ayn Rand fan, you have to admit that you can’t have markets without customers. And the customers have to come from the middle or the market won’t sustain itself. Equally, if you’re a government person, you should also want a strong middle. Otherwise, income concentration will corrupt your democratic process, which I think is an issue in the U.S. right now. If you’re a society person, you have to have that strong middle or the society will break into castes, which has happened repeatedly in societies all over the world.
Everybody should want the same curve. As I point out in the book, different kinds of digital network designs will give you bell curves or power laws. If we choose network designs that give us bell curves, then we can have a sustainable digital society. That’s the core idea of the book.
But you were asking who gets left behind. In my view, if we have a true data society and an honest accounting of it, the amazing thing is that just by living, you’re contributing bits to the network. Even someone who is trying to be as unproductive and uninteresting as possible might actually be contributing at least some value to modern network schemes. Sometimes it seems that those people are more active online than other people who are busy doing real work. So it’s very possible that even people at the low end might still find at least a baseline of reasonable livelihood simply from being in a world that needs a lot of bits from people to calculate all kinds of cloud things.
A full-on digital economy might actually start to capture a little bit more value from people, so that even at the bad end of the bell curve you would see a baseline that’s not quite as horrible as abject poverty. Still, a distribution means a distribution, so there will still be people at the bottom. I don’t think you can have a society that has freedom without some mechanism to help whoever ends up at the bottom of the curve.
At the heart of your thinking is the concept of “economic dignity.” Why is that so important to a healthy economy, and how does it differ from the goals of previous economic revolutions? After all, dignity was what the unions were fighting for a century ago.
Dignity means that there are many participants in the economy functioning as first-class citizens, able to be both buyers and sellers, with enough mobility to actually have a choice. If you don’t achieve that, then you don’t really have a market economy.
I argue in the book that prior to the rise of digital networks, up until the late 20th century, the way the middle block happened — the way we got something resembling a bell curve in the distribution of outcomes in a society — was mostly through special ratcheting mechanisms, which I call levees in the book. These mechanisms all had a bit of an artificial feeling to them, like a taxi medallion or union membership or something similar. There was always some kind of a hump you had to get over that put you into a protected class, so that you joined a collective-bargaining position, whether that was explicitly so or not. College education and middle-class mortgages functioned as leaky levees, to a degree.
What digital networks have been doing is walloping all those positions, crashing them down, and that’s been the primary mechanism by which the Internet has been harming the middle classes.
Take the taxi medallion system, which in itself is far from perfect and in some cases completely corrupt, but nonetheless has been the route to the middle class, especially for generations of immigrants. There are now Internet companies like Uber that connect passengers directly with drivers of vehicles for hire. Anyone can spontaneously compete with taxi drivers. What this does is create a race to the bottom, because these companies are choosing a kind of efficiency that creates a power law where whoever runs the Uber computer does very well, but everybody else pushes down each other’s incomes. So although such attacks on levees create efficiency from a certain very short-term perspective, they do so by creating power-law outcomes that undermine the very customer base that can make the market possible. This idea eventually eats itself and self-destructs.
If we could have an organic way of building a bell-curve outcome instead of traditional ad hoc levee systems, we might get a similar result that’s broader, more honest, fairer, and also more durable and less corrupt. I could be wrong, but just looking at the math it does make sense. There’s still more to work out, but fundamentally there’s some potential there.