
What’d I miss?

Tim O'Reilly
From the WTF? Economy to the Next Economy

There’s a scene in Lin-Manuel Miranda’s Hamilton in which Thomas Jefferson, who has been away as ambassador to France after the American Revolution, comes home and sings, “What’d I miss?”

We all have “What’d I miss?” moments, and authors of books most of all. Unlike the real-time publishing platforms of the web, where the act of writing and the act of publishing are nearly contemporaneous, months or even years can pass between the time a book is written and the time it is published. Stuff happens in the world, you keep learning, and you keep thinking about what you’ve written, what was wrong, and what was left out.

Because I finished writing my new book, WTF? What’s the Future and Why It’s Up to Us, in February of 2017, my reflections on what I missed, and on which stories continued to develop as I’d predicted, form a nice framework for thinking about the events of the past year.

Our first cyberwar

“We just fought our first cyberwar. And we lost,” I wrote in the book, quoting an anonymous US government official to whom I’d spoken in the waning months of the Obama administration. I should have given that notion far more than a passing mention.

In the year since, the scope of that cyberwar has become apparent, and so has how thoroughly the scenarios we used to prepare for it misled us. Cyberwar, we thought, would involve hacking into systems, denial-of-service attacks, manipulating data, or perhaps taking down the power grid, telecommunications, or banking systems. We missed that it would be a war waged directly on human minds. It is we, not our machines, who were hacked. The machines were simply the vector by which it was done. (The Guardian gave an excellent account of the evolution of Russian cyberwar strategy, as demonstrated against Estonia, Ukraine, and the US.)

Social media algorithms were not modified by Russian hackers. Instead, the Russian hackers created bots that masqueraded as humans and then left it to us to share the false and hyperpartisan stories they had planted. The algorithms did exactly what their creators had told them to do: show us more of what we liked, shared, and commented on.

In my book, I compare the current state of algorithmic big data systems and AI to the djinni (genies) of Arabian mythology, to whom their owners so often give a poorly framed wish that goes badly awry. In my talks since, I’ve also used the homelier image of Mickey Mouse, the sorcerer’s apprentice of Walt Disney’s Fantasia, who uses his master’s spell book to compel a broomstick to help him with his chore of fetching buckets of water. But the broomsticks multiply: one becomes two, two become four, four become eight, eight become sixteen, and soon Mickey is frantically turning the pages of his master’s book to find the spell to undo what he has so unwisely wished for. That image perfectly encapsulates the state of those who are now trying to come to grips with the monsters that social media has unleashed.

This image also perfectly captures what we should be afraid of about AI — not that it will get a mind of its own, but that it won’t. Its relentless pursuit of our ill-considered wishes, whose consequences we don’t understand, is what we must fear.

We must also consider the abuse of AI by those in power. I didn’t spend enough time thinking and writing about this.

Zeynep Tufekci, a professor at the University of North Carolina and author of Twitter and Tear Gas, perfectly summed up the situation in a tweet from September: “Let me say: too many worry about what AI — as if some independent entity — will do to us. Too few people worry what *power* will do *with* AI.” That’s a quote that would have had pride of place in the book had it not already been in production. (If that quote resonates, watch Zeynep’s TED Talk.)

And we also have to think about the fragility of our institutions. After decades of trash talk directed at government, the media, and expertise itself, those institutions were ripe for a takeover. This is the trenchant insight that Cory Doctorow laid out in a recent Twitter thread.

The runaway objective function

In April of 2017, Elon Musk gave an interview to Vanity Fair in which he used a memorable variation on Nick Bostrom’s image of an AI whose optimization function goes awry. Bostrom had used the thought experiment of a self-improving AI whose job was to run a paper-clip factory; Elon instead used a strawberry-picking robot, which allowed him to suggest that the robot gets better and better at picking strawberries until it decides that human beings are in the way of “strawberry fields forever.”

In the book, I make the case that we don’t need to look to a far future of AI to see a runaway objective function. Facebook’s News Feed algorithms fit that description pretty well. They were exquisitely designed to show us more of what we liked, commented on, and shared. Facebook thought that showing us more of what we asked for would bring us closer to our friends. The folks who designed that platform didn’t mean to increase hyperpartisanship and filter bubbles; they didn’t mean to create an opening for spammers peddling fake news for profit and Russian bots peddling it to influence the US presidential election. But they did.
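To make the runaway objective function concrete, here is a minimal sketch in Python of the kind of engagement-maximizing ranking described above. The post fields and scoring weights are hypothetical illustrations, not Facebook’s actual system; the point is that the code does exactly what it is told to do (maximize predicted likes, comments, and shares) and optimizes for nothing else, truthfulness included.

```python
# A minimal, hypothetical sketch of an engagement-optimized feed ranker.
# This is not Facebook's actual algorithm; it illustrates how faithfully
# optimizing a proxy objective (predicted engagement) ignores everything
# the objective does not mention, such as accuracy or hyperpartisanship.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float      # hypothetical model estimates for one user
    predicted_comments: float
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # The objective function: show us more of what we like, share,
    # and comment on. The weights are invented for illustration.
    return (1.0 * post.predicted_likes
            + 2.0 * post.predicted_comments
            + 3.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The algorithm does exactly what its creators told it to do.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("measured policy analysis", 5.0, 1.0, 0.5),
        Post("hyperpartisan outrage bait", 8.0, 6.0, 9.0),
    ])
    for post in feed:
        print(f"{engagement_score(post):6.1f}  {post.text}")
```

Nothing in this objective penalizes a false or divisive story, so the ranker reliably promotes one whenever it engages us more; no hacker needs to modify a line of the code.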

So too, the economists and business theorists who made the case that “the social responsibility of business is to increase its profits” and that CEOs should be paid primarily in stock, so that their incentives would be aligned with the interests of stockholders, thought that they would make the economy more prosperous for all. They didn’t mean to gut the economy, increase inequality, and create an opioid epidemic. But they did.

We expect the social media platforms to come to grips with the unintended consequences of their algorithms. But we have yet to hold accountable those who manage the master algorithm of our society, the one that says to optimize shareholder value above all else. In my book, I describe today’s capital markets as the first rogue AI, hostile to humanity. It’s an extravagant claim, and I hope you dig into the book’s argument, which shows that it isn’t so far-fetched after all.

We need a new theory of platform regulation

In my book, I also wrote that future economic historians will “look back wryly at this period when we worshipped the divine right of capital while looking down on our ancestors who believed in the divine right of kings.” As a result of that quote, a reader asked me if I’d ever read Marjorie Kelly’s book The Divine Right of Capital. I hadn’t. But now I have, and so should you.

How our financial statements set our expectations about who gets what and why

Marjorie’s book, written in 2001, anticipates mine in many ways. She talks about the way that the maps we use to interpret the world around us can lead us astray (that is the major theme of part one of my book) and homes in on one particular map: the profit and loss statement used by every company, which shows “the bottom line” as the return to capital and treats human labor merely as a cost to be minimized or eliminated in order to increase that return. This is a profound insight.

Since reading Marjorie’s book, I’ve been thinking a lot about how we might create alternate financial statements for companies. In particular, I’ve been thinking about how we might create new accounting statements for platforms like Google, Facebook, and Amazon that show all of the flows of value within their economies. I’ve been toying with using Sankey diagrams in the same way that Saul Griffith has used them to show the sources and uses of energy in the US economy. How much value flows from users to the platforms, and how much from platforms to the users? How much value is flowing into the companies from customers, and how much from capital markets? How much value is flowing out to customers, and how much to capital markets?
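As a sketch of what such a statement might look like, here is a minimal example using plotly’s Sankey diagram support, the same kind of diagram Saul Griffith uses for energy flows. Every label and dollar figure below is an invented placeholder, not real platform financials; the point is the form of the account, with value flowing in from users, advertisers, and capital markets and back out to the ecosystem and to shareholders.

```python
# A hypothetical Sankey diagram of value flows in a platform economy.
# All labels and figures are invented placeholders, not financial data.

import plotly.graph_objects as go

labels = ["Users", "Advertisers", "Capital markets",
          "Platform", "Ecosystem partners", "Shareholders"]

# Each link: (source node index, target node index, hypothetical value in $B).
links = [
    (0, 3, 40),  # attention and data flowing from users to the platform
    (1, 3, 80),  # ad spending from advertisers
    (2, 3, 20),  # investment from capital markets
    (3, 4, 25),  # revenue shared with third-party sites and sellers
    (3, 5, 60),  # buybacks and dividends returned to shareholders
    (3, 0, 30),  # free services returned to users
]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[s for s, t, v in links],
        target=[t for s, t, v in links],
        value=[v for s, t, v in links],
    ),
))
fig.update_layout(title_text="Hypothetical platform value flows ($B)")
fig.show()
```

Filling in real figures from public filings would turn the questions above into something you could read off a single picture.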

This research is particularly important in an era of platform capitalism, where the platforms are reshaping the wider economy. There are calls for the platforms to be broken up or to be regulated as monopolies. My call is to understand their economics and use them as a laboratory for understanding the balance between large and small businesses in the broader economy. This idea came up in my debate with Reid Hoffman about his idea of “blitzscaling.” If the race to scale defines the modern economy, what happens to those who don’t win the race? Are they simply out of luck, or do the winners have an obligation to the rest of us to use the platform scale they’ve won to create a thriving ecosystem for smaller companies?

This is something that we can measure. What is the size of the economy that a platform supports, and is it growing or shrinking? In my book, I describe the pattern that I have observed numerous times, in which technology platforms tend to eat their ecosystem as they grow more dominant. Back in the 1990s, venture capitalists worried that there were no exits; Microsoft was taking most of the value from the PC ecosystem. The same chatter has resurfaced today, where the only exit is to be acquired by one of the big platforms — if they don’t decide to kill you first.

Google and others publish economic impact reports that quantify the benefit they provide to their customers, but they also have to consider the benefit to the entrepreneurial ecosystem that gave them their opportunity. The signs are not good. When I looked at Google’s financial statements from 2011 to 2016, I noted that the share of its ad revenue from third-party sites had declined from nearly 30% to about 18%. There may be many reasons for this, but it certainly calls for some research. Amazon deserves similar scrutiny: fifteen of the top 20 Kindle best sellers were published by Amazon itself.

There are other ways that these platforms have created lots of value for others that they haven’t directly captured for themselves (e.g., Google open-sourcing Android and TensorFlow, and Amazon’s creation of Amazon Web Services, which became an enabler for thousands of other companies). Still, how do we balance the accounts of value extracted and value created?

We need a new theory of antitrust and platform regulation that focuses not just on whether competition between giants results in lower prices for consumers, but also on the extent to which the giant platforms compete unfairly with the smaller companies that depend on them.

Augment people, don’t replace them

Enough of the news that expands on the darker themes of my book!

The best news I read in the ten months since I finished writing the book was the research by Michael Mandel of the Progressive Policy Institute that shows that ecommerce is creating more and better jobs than those it is destroying in traditional retail. “To be honest, this was a surprise to me — I did not expect this. I’m just looking at the numbers,” Mandel told Andrew Ross Sorkin of the New York Times. Here is Mandel’s paper.

This report nicely complemented the news that from 2014 to mid-2016, a period in which Amazon added 45,000 robots to its warehouses, it also added nearly 250,000 human workers. This news supports one of the key contentions of my book: that simply using technology to remove costs — doing the same thing more cheaply — is a dead end. This is the master design pattern for applying technology: Do more. Do things that were previously unimaginable.

Those who talk about AI and robots eliminating human workers are missing the point, and their businesses will suffer for it in the long run. There’s plenty of work to be done. What we have to do is to reject the failed economic theories that keep us from doing it.

That’s my call to all of you thinking, like me, about what we’ve learned in the past year, and what we must resolve to do going forward. Give up on fatalism — the idea that technology is going to make our economy and our world a worse place to be, that the future we hand on to our children and grandchildren will be worse than the one we were born into.

Let’s get busy making a better world. I am optimistic not because the road ahead is easy but because it is hard. As I wrote in the book, “This is my faith in humanity: that we can rise to great challenges. Moral choice, not intelligence or creativity, is our greatest asset. Things may get much worse before they get better. But we can choose instead to lift each other up, to build an economy where people matter, not just profit. We can dream big dreams and solve big problems. Instead of using technology to replace people, we can use it to augment them so they can do things that were previously impossible.”

Let’s get to work.

This article originally appeared as the New Year’s 2018 issue of the O’Reilly Next:Economy Newsletter. Subscribe to get news each week about technology and its impact on the economy.
