
Highlights

  • Technologies change, but economics doesn’t. (Page 9)
  • It began to dawn on us that we have entered a unique moment in history—The Between Times—after witnessing the power of this technology and before its widespread adoption. (Page 14)
  • Whereas other implementations require redesigning the product or service, as well as the organization that delivers it, in order to fully realize the benefits of AI and justify the cost of adoption. In the latter case, companies and governments are racing to find a profitable pathway for doing so. (Page 15)
  • We began to connect the dots and assemble an economic framework that distinguishes between point solutions and system solutions that would not only solve the Verafin puzzle but also provide a forecast for the next wave of AI adoption. (Page 15)
  • By focusing on system solutions rather than point solutions, we could explain how this technology will eventually sweep across industries, entrenching some incumbents and disrupting others. It was time to write another book. This is that book. (Page 15)
  • The ubiquity of electricity makes it difficult to imagine that, at the turn of the twentieth century, two decades after Thomas Edison invented the light bulb, it was pretty much nowhere. (Page 17)
  • There was plenty of enthusiasm for electricity but not much to show for it. We tend to forget this when new radical technologies emerge today. When the light goes on, rather than everything changing, little does. AI’s light is on. But we need to do more. We are now in The Between Times for AI—between the demonstration of the technology’s capability and the realization of its promise reflected in widespread adoption. (Page 17)
  • Adoption of electricity in the United States (Page 18)
  • For AI, that future is uncertain. But we have seen the pattern for electricity. Thus, to understand the challenges facing the commercialization of AI, put yourself in the minds of entrepreneurs circa the 1880s. (Page 18)
  • By all accounts, steam was the miracle driving the biggest economic revolution since agriculture. So, an entrepreneur wanting to sell electricity would have to encourage would-be customers to take a closer look at steam and identify its warts. (Page 19)
  • The value the point solution entrepreneurs promised was lower cost and other benefits specific to certain factory types. That it was plug and play made it clear what they were selling. But in many cases, it was still a hard sell. Only so much of a power bill can be sliced off by changing a power source. (Page 20)
  • What a point solution did not offer was a reason to use more power. (Page 20)
  • As economic historian Nathan Rosenberg observed, it brought about an era of “fractionalized power” where “it now became possible to provide power in very small, less costly units and also in a form that did not require the generation of excess amounts in order to provide small or intermittent ‘doses’ of power.” (Page 21)
  • The entrepreneurial insight here regarding the value of electricity was that it required less power or, more accurately, only power when needed. While this insight began to inform some changes in factory design, such as having separate power sources for different machine types, some engineers began to imagine electric motors available at each machine. But even for groups of machines, there was great value in only paying for power when the machines were being used. (Page 21)
  • Throughout the entire Industrial Revolution, factories were designed to leverage steam. As we have seen, a single source of power into the factory was distributed to individual machines through a central shaft upon which belts and pulleys were hung. To modern eyes, this was one big machine with individual people inside as mere cogs. (Page 22)
  • More entrepreneurial managers realized that the true value of electricity would come from providing a system solution—specifically, a system that could take advantage of all electricity had to offer. By system, we mean a set of procedures that together ensure something is done. (Page 22)
  • Just think of the economics of space within a factory. With steam and its central shaft, space near the shaft was more valuable than space elsewhere. So work was done near the shaft, and anything else was stored and moved away. That meant real stuff was moved back and forth according to the demands of power. (Page 22)
  • Electricity equalized the economic value of space, providing flexibility. Now it was worth organizing production on, say, a line, so that real stuff moved from one process to the next instead of back and forth. (Page 23)
  • Henry Ford could not have invented the production line for the Model T car with steam power. Only electricity, decades after its commercial promise was shown, could achieve that. Yes, Ford was a car entrepreneur. But he was largely a system solution entrepreneur. (Page 23)
  • First, the path to large productivity increases lies in understanding what a new technology offers. (Page 23)
  • The same pattern is what we expect to see with AI. As we already noted, the initial entrepreneurial opportunities involved point solutions such as those of Verafin that swapped out one way of predicting for another that is better, faster, and cheaper. (Page 23)
  • We also see application solutions that require a redesign of devices or products around AI. All those robots powered by AI are applications, and so is much of the way AI has been implemented to enhance software on your devices. (Page 23)
  • Second, once we understand that, we need to ask a fairly straightforward but potentially hard-to-answer question. Given what we now know about AI, how would we design our products or services or factories if we were starting from scratch? (Page 24)
  • We see echoes of this in the early adoption of AI-centered system designs in the new and digitized industries of today: search, e-commerce, streaming content, and social networks. (Page 24)
  • For AI, we can ask these same two questions: (1) What is AI really giving us? (2) If we are designing our business from scratch, how would we build our processes and business models? If electricity was not “lower cost of energy” but rather “enable vastly more productive factory design,” so too, perhaps, is AI not “lower cost of prediction” but rather “enable vastly more productive products, services, and organizational design.” (Page 24)
  • Whereas the primary benefit of electricity was that it decoupled energy use from its source, which facilitated innovation in factory design, the primary benefit of AI is that it decouples prediction from the rest of the decision-making process, which facilitates innovation in organizational design via reimagining how decisions interrelate with one another. (Page 24)
  • We argue that by decoupling prediction from the other aspects of a decision and transferring prediction from humans to machines, AI enables system-level innovation. Decisions are the key building block for such systems, and AI enhances decision-making. (Page 25)
  • The third and final lesson: different solution types provide different opportunities to obtain power in markets. Entrepreneurs profit when they both create and capture value. With point solutions, the issue is often that there is relatively little value created in the first place. (Page 25)
  • As we move to applications and then to systems, the value entrepreneurs create becomes more defensible. New devices can be differentiated from the competition and guarded with patents and other forms of intellectual property protection. For new systems, however, the potential is even greater. (Page 25)
  • While a factory layout may be easy to see, the procedures, capabilities, and training underlying the new system may be less visible and hard to replicate. What is more, new systems can enable scale. (Page 25)
  • Electricity took decades to do what we call “disrupt.” During its first two decades, it was used as a point solution in some factories and applications, and for lighting in others. But it only changed the economy when new systems developed. That change was profound and shifted power to those who controlled electricity generation and grids and to those who could use electricity at scale in mass production. (Page 26)
  • New systems are hard to develop and also, as we will explore, difficult to copy because they are often complex. That creates opportunities for those who can innovate on systems. But there is still considerable uncertainty. For AI, who might accumulate power from these new technologies is very much an open question. It will depend on what those new systems look like. Our task here is to light your way to anticipate who may gain and who may lose power as AI systems develop and are adopted. (Page 26)
  • The parable of the three entrepreneurs, set over a hundred years ago and focused on the market for energy, illustrates how different entrepreneurs exploiting the same technology shift, from steam to electricity, can exploit different value propositions: point solutions (lower cost of power and less loss due to friction—no design change to the factory system); application solutions (individual electric drives on each machine—modular machines, so the stoppage of one does not impact the others; no design change to the factory system); and system solutions (redesigned factories—lightweight construction, single story, workflows optimized in terms of spatial layout and flow of workers and materials). (Page 26)
  • Some value propositions are more attractive than others. In the case of electricity, point solutions and application solutions predicated on directly replacing steam with electricity without modifying the system offered limited value, which was reflected in industries’ slow initial adoption. Over time, some entrepreneurs saw the opportunity to deliver system-level solutions by exploiting the ability of electricity to decouple the machine from the power source in a manner that was impossible or too expensive with steam. In many cases, the value proposition of system-level solutions far exceeded the value from point solutions. (Page 27)
  • Just as electricity enabled decoupling the machine from the power source and thus facilitated shifting the value proposition from “lower fuel costs” to “vastly more productive factory design,” AI enables decoupling prediction from the other aspects of a decision and thus facilitates shifting the value proposition from “lower cost of prediction” to “vastly more productive systems.” (Page 27)
  • In 1987, MIT’s Robert Solow famously quipped that “[w]e see the computer age everywhere but in the productivity statistics.” (Page 29)
  • General purpose technologies include the steam engine and electricity, and the semiconductor and internet as more recent instantiations. To participants at our conference, AI looked like a plausible candidate to add to the list. What should we expect? Yes, historically, such technologies eventually transformed economies, businesses, and work, but what happened during the decades while all that was happening? What happened in The Between Times? (Page 30)
  • AI has the transformation potential of electricity, but if history is a guide, that transformation is going to be a long and bumpy ride. (Page 30)
  • We should expect optimism about the future to coexist with disappointment about where we stand today. (Page 31)
  • In the first wave of electricity, light bulbs replaced candles and electric motors replaced steam engines. These were point solutions, with no restructuring required. The economy did not transform. AI is in the same situation. It is applied as a new tool for predictive analytics. (Page 31)
  • They already did prediction, and AI is making their predictions better, faster, and cheaper. The lowest- hanging fruit for AI are point solutions, and that fruit is being picked. (Page 31)
  • Just as electricity’s true potential was only unleashed when the broader benefits of distributed power generation were understood and exploited, AI will only reach its true potential when its benefits in providing prediction can be fully leveraged. (Page 31)
  • We sit in The Between Times, after the demonstration of AI’s clear promise and before its transformational impact. (Page 31)
  • There needs to be a way to use machine predictions to do things better. That means using predictions to make better decisions. AI’s impact will be all about the things humans can do because they can make better decisions. It is not only about the technical challenge of collecting data, building models, and generating predictions, but also about the organizational challenge of enabling the right humans to make the right decisions at the right time. And it is about the strategic challenge of identifying what can be done differently once better information is available. (Page 32)
  • Looking at existing workflows and identifying where AIs can replace humans can deliver meaningful, albeit incremental, benefits. It isn’t where the biggest opportunities lie. (Page 32)
  • A point solution improves an existing procedure and can be adopted independently, without changing the system in which it is embedded. (Page 33)
  • An application solution enables a new procedure that can be adopted independently, without changing the system in which it is embedded. (Page 33)
  • A system solution improves existing procedures or enables new procedures by changing dependent procedures. (Page 33)
  • AI POINT SOLUTION: A prediction is valuable as a point solution if it improves an existing decision and that decision can be made independently. (Page 34)
  • AI APPLICATION SOLUTION: A prediction is valuable as an application solution if it enables a new decision or changes how a decision is made and that decision can be made independently. (Page 34)
  • AI SYSTEM SOLUTION: A prediction is valuable as a system solution if it improves existing decisions or enables new decisions, but only if changes to how other decisions are made are implemented. (Page 34)
  • The biggest increase in the adoption of AI is, if history is any guide, going to come from changes in systems. But such change will also be disruptive. By disruptive, we mean that it changes the roles of many people and companies within industries and, alongside those changes, causes shifts in power. (Page 35)
  • When decisions interact with one another, moving away from a rule to a decision driven by prediction actually adds a measure of unreliability to the system. Overcoming this often requires systemwide change. The problem is that rules glue the existing system together, often in subtle and nonobvious ways. Thus, it can be easier to build a new system from scratch than to change an existing system. (Page 38)
  • New entrants and startups often outperform established businesses when a total system redesign is required for optimization. Thus, system-level change is a path to disruption of incumbent firms. (Page 38)
  • We explain that when you understand that AI is all about prediction and is an input into decision-making, power comes not from machines—even though they might look powerful—but from those behind the machines, guiding how they react to predictions, what we call judgment. (Page 39)
  • The Between Times: after witnessing the power of AI and before its widespread adoption. Although point solutions and application solutions can be designed and implemented reasonably quickly, system solutions that will unlock AI’s vast potential take much more time. (Page 41)
  • The key concept in the definitions of the three types of AI solutions—point solutions, application solutions, and system solutions—is independence. (Page 41)
  • If an AI prediction creates value by enhancing the focal decision and that value creation is independent of any other changes to the system, then a point solution (enhanced existing decision) or application solution (new decision) is feasible. However, if the value of the enhanced decision is not independent but rather requires other substantive changes to the system in order to create value, then a system solution is required. (Page 41)
  • System solutions are typically harder to implement than point solutions or application solutions because the AI-enhanced decision impacts other decisions in the system. (Page 41)
  • We took all the potential complexity and hype regarding AI and reduced it to a single factor: prediction. Reducing an exciting new thing to its less sensational essence is a key tool in an economist’s playbook. (Page 42)
  • What we have is an advance in statistical techniques rather than something that thinks. But the advance in statistical techniques is very significant. As that advance reaches its potential, it will dramatically reduce the cost of prediction. And prediction is something we do everywhere. (Page 42)
  • In 2012, a team from the University of Toronto, led by Geoffrey Hinton, used deep learning to dramatically improve the ability of machines to identify what was going on in images. Using a data set of millions of images called ImageNet, teams had, for the better part of a decade, tried to devise algorithms that would accurately identify what an image was showing. (Page 43)
  • The deep learning approach conceived of the task—identifying the subjects in images—as a prediction problem. The goal was to be able to predict, when given a new image, what a human would say was in the image. (Page 43)
  • Predictions are not the only input into decision-making. To understand how prediction matters, it is necessary to understand two other key inputs into decisions: judgment and data. (Page 44)
  • Judgment—the process of determining the reward to a particular action in a particular environment. (Page 44)
  • Data provides the information that enables a prediction. As AIs acquire more high- quality data, the predictions improve. By quality, we mean that you have data about the context in which you are trying to predict. Statisticians call this the need to predict something on the “support” of your data. Extrapolate too much from the data you have, and the prediction may be inaccurate. (Page 45)
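The “support” idea above can be made concrete with a toy sketch (ours, not the book’s): a straight line fit to quadratic data looks reasonable near the training range but fails badly when extrapolated far beyond it. All data and numbers here are illustrative.

```python
# Illustrative only: a linear model fit on x in [0, 3] (the "support" of the
# data) predicts poorly at x = 10, far outside that support.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = [0, 1, 2, 3]
ys = [x * x for x in xs]   # the true relationship is quadratic
a, b = fit_line(xs, ys)    # fitted line: y = -1 + 3x

in_sample = a + b * 2      # 5.0, close to the truth (4) inside the support
off_support = a + b * 10   # 29.0, far from the truth (100) outside it
print(in_sample, off_support)
```

The in-sample error is small, but the extrapolated prediction is off by more than a factor of three, which is the sense in which predicting off the support of your data can be inaccurate.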
  • Predictions will work when there isn’t a competitor with incentives to undermine your predictions or a customer with incentives to find a way around them. If a customer could do better by reverse engineering the key aspects of your AI and feeding it false information, then the AI will only serve your goals for as long as customers don’t discover how it works. (Page 48)
  • Rather than AI engaged in a crime-fighting crusade against fraud, what AI is actually doing is improving banks’ ability to sort legitimate from fraudulent transactions at a much lower cost—that is, prediction. AI these days is a prediction machine, and that is all it is. For Verafin, that turns out to be exactly what it wanted. To make the modern payments system work requires a high degree of automation. You want to have high confidence in those approvals. That is where AI slots in. (Page 49)
  • Amazon might want to give those couple of days back to you by predicting what you want, shipping it to you, and inviting you to accept it or not at your door. In other words, Amazon ships to you based on its predictions and then you shop from the boxes delivered to your doorstep. We called this a move from shop-then-ship to ship-then-shop. (Page 52)
  • Amazon already struggles with returns so much that it never resells many returned items but sends them directly to trash. With its existing system, it is cheaper for Amazon to throw away returns than put those products back into its own logistics systems. The lesson here is that ship-then-shop, while it might appear to be an application solution, is one that requires changes elsewhere in the system to be made economic. (Page 53)
  • Prediction is an input to decision-making. When the cost of an input falls, we use more of it. So, as prediction becomes cheaper, we will use more AI. As the cost of prediction falls, the value of substitutes for machine prediction (e.g., human prediction) will fall. (Page 55)
  • At the same time, the value of complements to machine prediction will rise. Two of the main complements to machine prediction are data and judgment. We use data to train AI models. We use judgment along with predictions to make decisions. While prediction is an expression of likelihood, judgment is an expression of desire—what we want. So, when we make a decision, we contemplate the likelihood of each possible outcome that could arise from that decision (prediction) and how much we value each outcome (judgment). (Page 56)
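The prediction-plus-judgment framing above can be sketched as a small expected-value calculation: prediction supplies the outcome probabilities, judgment supplies the payoffs, and the decision is whichever action maximizes expected payoff. The umbrella scenario and all numbers below are our own illustration, not the book’s.

```python
# Minimal sketch of decision = prediction (probabilities) + judgment (payoffs).
# All probabilities and payoff values are illustrative assumptions.

def best_action(predictions, judgment):
    """predictions: {action: {outcome: probability}}
    judgment: {outcome: payoff, i.e. how much we value each outcome}
    Returns the action with the highest expected payoff."""
    def expected_value(action):
        return sum(p * judgment[outcome]
                   for outcome, p in predictions[action].items())
    return max(predictions, key=expected_value)

# Carry an umbrella given a 30% chance of rain?
predictions = {
    "carry": {"wet_with_umbrella": 0.3, "dry_carrying": 0.7},
    "leave": {"soaked": 0.3, "dry_unburdened": 0.7},
}
judgment = {
    "wet_with_umbrella": 8, "dry_carrying": 6,
    "soaked": 0, "dry_unburdened": 10,
}
print(best_action(predictions, judgment))  # "leave": 7.0 beats "carry": 6.6
```

Note that a better prediction (a sharper rain probability) changes the expected values, while different judgment (how much you mind getting soaked) can flip the decision even with the same prediction.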
  • It is often said that “economists believe that everyone is rational.” They don’t. It would be profoundly irrational to believe that. Still, treating people as if they are calculating, consistent, and acting according to a set of interests is useful for understanding the behavior of thousands or millions of people. (Page 59)
  • When people form habits or keep to rules, they are acknowledging that the costs of trying to optimize are too high. So they, in effect, decide not to decide. This is happening all over the place. Think about yourself for a bit, and you realize that most of what you decide are not actual decisions but latent ones, things you could choose but choose not to. (Page 60)
  • You want reliability. Rules are how reliability is baked into systems. However, if AI prediction is going to break rules and turn them into decisions, then one consequence will be a lack of reliability for existing systems. That consequence may render it not worthwhile to use AI unless you can redesign the system to accommodate the decisions AI is enabling. That’s why we are going to start with the decisions we have decided not to make. (Page 61)
  • It is easier not to have to make a decision than to make one. That is, it is easier to avoid gathering information, processing it, weighing all the options, and then reaching a decision. (Page 61)
  • What “making do” looked like, a term of art Simon cleverly called “satisficing,” was to not make the perfect the enemy of the good. Rather than look for solutions they knew might be better, they would take actions that were good enough. Rather than deal with a complex environment, people would narrow the range of options considered. Rather than continually updating their choices based on new information received, they would adopt rules, routines, and habits that would be impervious to new information and, hence, allow them to ignore information entirely. (Page 62)
  • Two broad considerations drive decisions: high versus low consequences and cheap versus expensive information. (Page 62)
  • The second driver of whether you choose to actively decide is whether you have information or, specifically, the cost of the information you need to make a decision. Costly information can mean that a decision looks precisely like a decision with low consequences and so drives you to adopt default rules rather than deliberate. Should you carry an umbrella today? (Page 64)
  • In lieu of gathering information to make an optimal choice, when doing that is costly, we pick up habits or rules to obviate the need to consider information at all. We just do the same thing each time without having to think about it. (Page 67)
  • The point is that when you are following a rule, you may be unaware of the value of gathering information and making a decision. These examples provide evidence that there are latent and untapped benefits to decision- making. As such, we can anticipate that some forms of AI prediction may similarly unlock those possibilities. (Page 69)
  • If you are in the business of developing AI whose value is enabling decisions that are not being made, you will face an uphill battle in gaining adoption. (Page 70)
  • Despite maintaining reputations for the opposite, most organizations are not-deciding machines. At the heart of this are standard operating procedures (or SOPs). These are detailed documents describing procedures for doing things all over an organization. (Page 70)
  • While an SOP might economize on the need to revisit the wheel in terms of making decisions and, thus, play the role of an investment in reducing cognitive load similar to the personal choices we have described thus far, they bring with them another benefit: reliability. When people in an organization are following rules, they are doing things that make it easier for other people to do their things without having to engage in costly communication such as meetings. (Page 70)
  • Rules are decisions that we make preemptively. Making a decision, unlike following a rule, allows us to take into account information available at the time and place of the decision. (Page 72)
  • Actions resulting from decisions are often better than those resulting from rules because they can respond to the situation. So, why would we ever use rules rather than make decisions? Decisions incur a higher cognitive cost. When is the cost worth it? When the consequences are significant and when the cost of information is small. Introducing AI does not change the consequences, but it lowers the cost of information. (Page 72)
  • The trade-off between rules and decision-making is critical in the context of AI systems because the primary benefit of AI is to enhance decision-making. AIs provide little value for rules. AIs generate predictions, and predictions are a key information input to decision-making. So, as AIs become more powerful, they lower the cost of information (prediction) and increase the relative returns to decision-making compared to using rules. Thus, advances in AI will liberate some decision-making from rule-following. (Page 72)
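The rule-versus-decision trade-off above can be sketched numerically: a rule always takes the same action, while a decision pays an information cost to observe the state and respond. When cheaper prediction lowers that cost, deciding starts to beat the rule. The payoff matrix and costs below are our own illustrative assumptions.

```python
# Illustrative sketch (our numbers, not the book's) of when a decision is
# worth its information cost relative to a fixed rule.

def rule_payoff(payoffs, state_probs, fixed_action):
    """Expected payoff of always taking fixed_action, ignoring the state."""
    return sum(p * payoffs[fixed_action][s] for s, p in state_probs.items())

def decision_payoff(payoffs, state_probs, info_cost):
    """Expected payoff of observing the state and choosing the best action,
    net of the cost of obtaining the information (the prediction)."""
    ev = sum(p * max(payoffs[a][s] for a in payoffs)
             for s, p in state_probs.items())
    return ev - info_cost

payoffs = {                      # payoffs[action][state]
    "umbrella": {"rain": 8, "sun": 6},
    "no_umbrella": {"rain": 0, "sun": 10},
}
state_probs = {"rain": 0.3, "sun": 0.7}

rule = rule_payoff(payoffs, state_probs, "no_umbrella")           # 7.0
costly_info = decision_payoff(payoffs, state_probs, info_cost=3)  # 9.4 - 3
cheap_info = decision_payoff(payoffs, state_probs, info_cost=0.5) # 9.4 - 0.5
print(rule, costly_info, cheap_info)
```

With expensive information the rule wins (7.0 vs 6.4), but once AI lowers the cost of the prediction, the decision wins (8.9 vs 7.0), which is the sense in which cheaper prediction raises the relative returns to decision-making.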
  • However, rules not only incur lower cognitive costs but also enable higher reliability. One decision often impacts others. In the context of a system with interdependent decisions, reliability can be very important. (Page 73)
  • Rules arise because it is costly to embrace uncertainty, but they create their own set of problems. (Page 80)
  • The so-called Shirky Principle, put forth by technology writer Clay Shirky, states that “institutions will try to preserve the problem to which they are the solution.” The same can be said of businesses. (Page 80)
  • If you want to find opportunities by creating new AI-enabled decisions, you need to look beyond the guardrails that protect rules from the consequences of uncertainty and target activities that make bearing those costs easier or that reduce the likelihood of bad outcomes the rules would otherwise have to tolerate. (Page 80)
  • The checklist exists because of uncertainty. As there are many interrelated parts to a complex system and many people doing tasks within them to make it all work, checklists are not simply indicators that something has been done. Instead, they are the manifestation of rules and the need to follow them. They are there to ensure reliability and reduce error. (Page 86)
  • When marketers treat everyone the same, it is because they lack information. If they had information, they would provide personalized products and personalized services. Marketers could move from rules that treat everyone the same to decisions that allow them to provide the right products to the right people at the right time. (Page 87)
  • As Cosmo Kramer put it on Seinfeld, “A rule is a rule and let’s face it, without rules, there’s chaos.” (Page 89)
  • The worry that education creates uniformity has a long history. In 1859, John Stuart Mill wrote in On Liberty that a “general State education is a mere contrivance for molding people to be exactly like one another.” (Page 89)
  • When rules have existed for a long time, it can be hard to see the system that they are embedded in. Because rules are reliable, a myriad of rules and procedures can stick together. If something moves, they have to all move at once. (Page 91)
  • “Students are educated in batches, according to age, as if the most important thing they have in common is their date of manufacture.” (Page 93)
  • Imagine, instead, a system where students progress through school as a class (their physical and social development is paced by biology), but many different tutors and teachers come and go to support different students, depending on their individual learning needs. The tutors and teachers that students work with are independent of the students’ ages but are instead determined by the nature of their questions and ability in a subject area. (Page 94)
  • Introducing an AI that enables a rule to be transformed into a decision might seem attractive at first glance, but its impact may be limited because the rule it’s replacing is tightly coupled with other elements of the system. Dropping an AI that predicts the next best content into the existing school system would have limited impact because the age-based curriculum rule with a single teacher per class is a cornerstone of the current educational system, especially in elementary school. In contrast, embedding exactly the same AI in a new system designed to leverage its personalized content and pacing—coupling it with personalized discussion, group projects, and teacher support, which would require much more flexible tutor and teacher allocations and modified educator training—would likely have a much bigger impact on education and personal growth and development. (Page 94)
  • The age-based curriculum rule is the glue that holds together much of the modern education system, and so an AI that personalizes learning content can only provide limited benefit in that system. The primary challenge for unleashing the potential of a personalized education AI is not building the prediction model but unsticking education from the age-based curriculum rule that currently glues the system together. (Page 95)
  • Like SOPs, checklists are the manifestation of rules and the need to follow them. They are there to ensure reliability and reduce error. The alternative is that people make decisions based on their own observations. While switching from a rule to a decision may improve the quality of that particular action, it may also create problems and uncertainty for other people. (Page 95)
  • Rules glue together in a system. That’s why it’s hard to replace a single rule with an AI-enabled decision. Thus, it’s often the case that a very powerful AI only adds marginal value because it is introduced into a system where many parts were designed to accommodate the rule and resist change. They are interdependent—glued together. (Page 95)
  • An example is a personalized education AI that predicts the next best content to present to a learner. Dropping this AI into a system designed around the age-based curriculum rule would stifle the benefit. In contrast, embedding the very same AI into a new system that leverages personalized (not age-based) discussion, group projects, and teacher support would likely result in a much bigger impact on overall education and personal growth and development. The primary challenge for unleashing the potential of a personalized education AI is not building the prediction model but rather unsticking education from the age-based curriculum rule that currently glues the system together. (Page 96)
  • That AI didn’t save us from Covid-19 does not mean that AI wasn’t ready but that we weren’t ready for it. (Page 98)
  • The message from this chapter is that in order to take advantage of prediction machines, we want to turn rules into decisions. However, the system—the set of procedures according to which something is done—has to be able to accommodate that change. (Page 106)
  • Rules are our primary target when looking for new opportunities for decision-making that AI prediction might unlock. (Page 107)
  • When decisions interact, moving from rules to decisions requires a well-oiled system of coordination. Decision-makers need to know what others are doing, align their goals, and enable change. However, a new system may be so disruptive that you may need to start using it in a new organization, where it can grow organically, rather than trying to adapt to it in existing organizations. More broadly, uncovering uncertainty provides a first step to opening up new decisions through prediction. (Page 107)
  • For those who were infected, Covid-19 was indeed a health problem. However, for the vast majority who were not infected, Covid-19 was not a health problem—it was an information problem. That’s because without the information on who was infected, we had to follow the rule and treat everyone as if they could be infected. That led to shutting down the economy. If, instead, we could have made a reasonably accurate prediction, then we could have solved the information problem and only quarantined people who had a high likelihood of being infectious. Rules are our primary target when looking for new opportunities for decision-making that AI prediction might unlock. (Page 108)
  • In order to take advantage of prediction machines, we must often turn rules into decisions. However, the system has to be able to accommodate that change. If one rule is glued to another in order for the system to be reliable, putting a decision within that system may be fruitless. (Page 108)
  • The metrics are chosen in order to evaluate performance based on pure efficiency and to invite substitution based on cost. If a machine can do that task and is cheaper, replacement surely will follow. The horses may still race, but they don’t move people around anymore. (Page 111)
  • Just as machines replaced people in physical tasks, maybe they will do the same for cognition. (Page 111)
  • A decade into the current AI wave, machines have replaced humans in very few tasks. Chatbots are playing a bigger role in customer service, and machine translation is gaining an increased share of that activity. But technological unemployment is not on the horizon quite yet, and there are lots of jobs for people to do. While there are AIs that can outperform people, in many instances, those people— warts and all— are still cheaper than their machine replacements. (Page 112)
  • Stanford professor Tim Bresnahan has argued that the whole exercise of deconstructing the potential for AI into the tasks AI might perform ignores what has driven radical adoption of new technologies in the past: systemwide change. (Page 113)
  • Task-level substitution plays no role in these applications of AI technology. These very valuable early applications are not ones in which labor was undertaking a task and was replaced by capital. Observers focus on task-level substitution, not because it occurs, but because the definition of general AI includes “tasks usually done by humans.” Until general AI is commercialized, which is not likely in the foreseeable future, analysis should focus on the capabilities and applications of actual AI technologies. (Page 114)
  • The AI at the leading technology companies is not a demonstration project. It includes full-scale production systems that generate billions of dollars of revenue. It wasn’t built task by task, with AI involved in some of them. Instead, the large tech firms built completely new systems. The successful adoption of AI represents what we will term here the system mindset. (Page 114)
  • It stands in contrast to a task mindset in that it sees the bigger potential of AI and recognizes that to generate real value, systems of decisions, including both machine prediction and humans, will need to be reconstituted and built. This is already happening in some places, but history tells us that it is easier for those new to an industry than for established businesses to implement systemwide change to take advantage of new general purpose technologies like AI prediction. (Page 114)
  • If you go to a business and tell it you can save it $50,000 per year in labor costs if it eliminates this one job, then your AI product better eliminate that entire job. (Page 115)
  • The better pitches were ones that were not focused on replacement but on value. These pitches demonstrated how an AI product could allow businesses to generate more profits by, say, supplying higher quality products to their own customers. This had the benefit of not having to demonstrate that their AI could perform a particular task at a lower cost than a person. (Page 115)
  • For electricity, which we discussed in chapter 1, replacement of steam in manufacturing was slow and took decades. It was only worthwhile for existing factories to adopt electricity if it cost less than steam. That was a hard sell to factories that were already designed to run on steam. By contrast, once manufacturers realized that electricity afforded them the opportunity to redesign factories into large flat installations outside of expensive city rents, there was much greater interest in investing in new factories that promised significantly higher productivity due to their new design. (Page 116)
  • Critically, adopting a new system requires replacing an existing one. A pure cost calculus will rarely drive such replacement. There are transitional costs in building new systems, and if the best you are going to do is save a fraction of the costs of the existing system, it is unlikely to prove worthwhile. Instead, if the new system does something new— that is, leads to new value creation opportunities— then that is what will drive adoption. (Page 116)
  • Eric Topol’s book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again explains how AI could improve diagnosis, freeing up doctors to spend time with their patients and understand their needs. (Page 117)
  • Perhaps the reason Deep Medicine is such an influential book is because Topol understands the health-care system (he’s a cardiologist and a professor of molecular medicine at Scripps Research), he understands AI (he invested significantly in learning the capabilities and limitations of this technology as it relates to health care), and he is a master communicator and translator of complicated things (he’s the founder and director of the Scripps Research Translational Institute). (Page 117)
  • There’s only one problem. He’s not an economist. So, he doesn’t write about human behavior in terms of incentives. Or perhaps he believes that doctors are above such primal instincts. (Page 117)
  • Our concern is that if we simply drop new AI technologies into the existing health-care system, doctors may not have the incentive to use them, depending on whether they will increase or decrease their compensation, which is driven by fee-for-service or volume-based reimbursement. (Page 117)
  • Topol believes that if AIs save doctors time, then doctors will spend that extra time talking and connecting with their patients. The evidence is not at all clear that past productivity-enhancing tools for doctors have increased the time they spend connecting with their patients. It might be the opposite. If AIs increase the productivity of doctors, they may be able to spend less time with each individual patient without diminishing their income. In order to achieve the worthy goals that Topol aspires to, we need more than new AI technologies. We need a new system, including new incentives, training, methodologies, and culture for doctors to utilize their technological tools in the manner aspired to in Topol’s book. (Page 117)
  • AI point solutions in health care too often provide predictions that nobody can use (e.g., because treatment options aren’t available). AI application solutions too often enable actions that nobody can take (e.g., because liability rules make adoption difficult) or wants to take (e.g., because they are misaligned with the compensation system). The challenge isn’t so much that the predictions aren’t good enough or that the actions are useless; it is that getting all the moving parts working together isn’t easy. (Page 118)
  • If AI provides diagnosis, the rules about who is allowed to do what in health care should change. With machines doing diagnosis, the primary role of the physician might be in the human side of health care. This would require all sorts of other changes. Medical school would no longer require memorization of facts and would no longer select students based on their ability to understand enough biology to score well on tests. These skills might not improve much with a decade of postsecondary schooling, so instead patient-facing doctors might only need something like an undergraduate degree. That, in turn, would require major regulatory changes to who is allowed to provide which health-care services. Perhaps patient care becomes the primary role of the pharmacist. Perhaps social workers move into what used to be the domain of the doctor. (Page 119)
  • Task-level thinking is currently the dominant approach to planning for the introduction of AI into all sectors of the economy. The main idea is to identify specific tasks in an occupation that rely on predictions that AI, rather than a human, can generate more accurately, faster, or cheaper. Corporate leaders, management consultants, and academics have largely all converged on this approach. (Page 122)
  • The dominance of task-level thinking is surprising because the most dramatic implementations of AI to date are not task-level replacements of human labor, but rather new system-level designs that are only possible because of the prediction capabilities now afforded by AI (e.g., Amazon, Google, Netflix, Meta, Apple). Task-level thinking leads to point solutions that are often motivated by cost savings based on labor replacement. In contrast, system-level thinking leads to system solutions that are usually motivated by value creation, not cost savings. (Page 122)
  • AlphaFold predicts protein structures. Proteins are the building blocks of life, responsible for most of what happens inside cells. How a protein works and what it does are determined by its three-dimensional shape. In molecular biology, “structure is function.” (Page 124)
  • FIGURE 9-1 Innovation process (Page 126)
  • Large companies rarely find it worthwhile to transform the way their industry operates, especially if their industry is currently profitable. The risk of getting it wrong is too high. This is why technological change can lead to disruption. The technology unleashes new opportunities to build businesses and serve customers, but it isn’t clear exactly how. (Page 133)
  • When startups and smaller firms have incentives to innovate and larger firms don’t, then innovation incubates in small markets served by small companies until their products mature into viable alternatives for large markets, and ultimately incumbents collapse and new ways of doing business arise from these surprising places. (Page 133)
  • Innovations in the innovation system itself can have cascading effects downstream on many other systems. (Page 133)
  • Automated hypothesis generation may enhance innovation productivity significantly. However, to fully benefit from this technology, we must reconsider the entire innovation system, not just the single step of hypothesis generation. For example, faster hypothesis generation will have little impact if the next step in the process, hypothesis testing, doesn’t change and simply creates a bottleneck downstream. (Page 134)
  • However, when considering the adoption of AI in the context of systemwide change, it is apt for three reasons. First, as we have already seen, the opportunities for the application of AI can be hidden from view, and thus existing industries are vulnerable to blind spots. Second, the challenges and trade-offs in taking down existing systems and building new ones are part and parcel of the process of creative destruction that accompanies transformational technological change. Finally, as old systems are displaced with new, there is necessarily a shift in power—specifically, economic power—that makes the accumulation of power the reward for system innovation, and potential disruption something to fear and resist. (Page 137)
  • We often see this with disruption. An industry where traditional providers have economic power suddenly becomes subject to competition and their power is diminished. But power doesn’t just disappear; it shifts. (Page 140)
  • The term disruption emerged from the work of Clayton Christensen. Christensen noted that incumbent firms can find themselves “asking the wrong questions” regarding new technologies and their value to customers. Thus, they shy away from certain technologies that offer few advantages to their own customers. By contrast, those very same technologies appeal to customers who are either not served or underserved by existing market leaders. (Page 141)
  • The really challenging disruption arises when radical technological change does not improve performance along traditional metrics but, in some cases, can improve performance on metrics that are not the focus of the existing industry. This can create blind spots for incumbents. (Page 141)
  • In his 1997 book, “The Innovator’s Dilemma,” [Christensen] argued that, very often, it isn’t because their executives made bad decisions but because they made good decisions, the same kind of good decisions that had made those companies successful for decades. (Page 141)
  • Not surprisingly, a quick path to being disrupted is to miss that a technology requires organizational change. (Page 142)
  • When Clay Christensen was developing his disruption theories at Harvard in 1990, down the hall were Rebecca Henderson and Kim Clark looking at the same phenomenon. Rather than focus on the demand side (i.e., missing customer values) as Christensen had, Henderson and Clark looked at the supply side (i.e., the lack of organizational fit). They identified many situations where technological change was architectural: it changed the priorities of organizations, and because organizations are hard to change, it gave an opportunity to greenfield organizations that could start from scratch. (Page 143)
  • Herein lies what is challenging about dealing with architectural or, as we have termed it here, systemwide change. First, to implement it, you need products that do not initially look competitive because they have to make choices that sacrifice performance on what customers appear to care about. Second, as a result, existing organizations that are created to focus on that performance are not equipped to quickly understand all the trade-offs that the new technology is making. In other words, they miss the forest for the trees. (Page 144)
  • When an AI-driven decision is part of a system, adopting AI can necessitate an organizational redesign with a new system. As we just discussed, one difficulty existing organizations face in creating new systems is that they have been optimized to garner high performance from existing technologies, whereas adopting AI can necessitate a change in focus. (Page 144)
  • In some cases, AI drives the organization to become more modular, while in others, it can drive it to have greater coordination among the parts. The challenge is to recognize that the current focus is the problem, and widespread change is needed. (Page 144)
  • When top management understands that a new organizational design is needed in order to adopt and integrate an AI prediction into one or more key decision areas, a further challenge arises. This is because organizational design invariably involves a change in the value and, hence, power of the suppliers of different resources within the organization. (Page 145)
  • Those who expect to lose in the resulting reallocation of power will resist change. Organizations rarely operate as a textbook dictatorship where what the CEO says goes and change just happens. Instead, those expecting to have their power diminished resist change. In the process, they can undertake actions that at best reduce the ease by which change can be implemented. At worst, the anticipation of those actions may cause an organizational redesign to be curtailed completely or reversed. (Page 145)
  • So, while Blockbuster corporate may have benefited from following the Netflix model, the franchises were disadvantaged by it. There was resistance, especially as the new model proved to be more successful. (Page 146)
  • The Blockbuster case is, of course, a dramatic example of both the failure to change in the face of a new technology and also how internal forces prevented that change before it was too late. (Page 146)
  • AI can generate organizational change that has the effect of decentralizing power, or coordination that centralizes it. Either way, who loses from those changes can be quite clear, and precisely because they hold power based on the current organizational system, they will have a vested interest in maintaining it. (Page 147)
  • Incumbents can often adopt point solutions quite easily because they enable improvements in a specific decision or task without requiring changes to other related decisions or tasks. However, incumbents often struggle to adopt system-level solutions because those require changes to other related tasks and the organization has invested in optimizing those other tasks; furthermore, the system solution may be inferior in some of those tasks, particularly in the short run. That sets the stage for disruption. (Page 147)
  • We define power as economic power. You have power if what you own or control is scarce, relative to demand. Scarcity, which underlies economic power, is something that can be ameliorated by competition, which is why economists sometimes treat economic power and monopoly power as equivalent. When something that was previously scarce is subject to competition, power shifts. (Page 148)
  • Sometimes, a system-level solution is required to fully benefit from AI. The redesign of a system may lead to a shift in power at the industry level (e.g., data-rich industries become more powerful as AI becomes more prevalent), the company level (e.g., discussed in chapter 12), or the job level (e.g., Blockbuster franchises lost power in the shift to online movie rentals and mail delivery DVDs). Those that stand to lose power will resist change. Those resistant to change often currently hold power (that’s why they resist) and therefore may be quite effective at preventing system-level change. That creates the context for disruption. (Page 148)
  • Robots and machines, in general, do not decide anything and, hence, do not have power. A human or group of humans is making the calls underlying the decisions. To be sure, it is possible to automate things and make it look like a machine is doing the dirty work. But that is an illusion. At our current level of AI, someone makes the real decisions. (Page 150)
  • While AI cannot hand a decision to a machine, it can change which human is making the decision. Machines don’t have power, but when deployed, they can change who does. When machines change who makes decisions, the underlying system must change. The engineers who build the machines need to understand the consequences of the judgment they embed into their products. The people who used to decide in the moment may no longer be needed. (Page 150)
  • Ada warned readers that the computer could do nothing about it if the user entered “untrue” information. Today we call this concept “garbage in, garbage out.” Here is the way she said it: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” (Page 151)
  • Another fear that arises with respect to machines having power is that prediction machines are now often responsible for the information we see, shaping our understanding of the world and the decisions we make, from shopping to whom to vote for. (Page 154)
  • With the advent of voice-assisted search, people are asking more fully formed queries for which Google has a clearer and likely more confident answer. (Page 157)
  • To review, AI predictions are imperfect. To mitigate the risk of being wrong, we embark on two lines of attack. First, before the fact, we work through contingencies and arrive at a conclusion as to what the machine should choose for each of those contingencies. Second, after the fact, we acknowledge that not all contingencies will be covered, so we will rely on humans to step in and make the call. (Page 158)
  • As AI prediction improves, we will need to allocate more human resources to both of these judgment functions. In other words, the exceptions require a system design that includes human-machine collaboration. (Page 158)
  • nobody ever lost a job to a robot. They lost their job because of the way someone decided to program a robot. The story of how we arrived at a time when we can so easily blame machines for what are ultimately the actions of humans is an interesting one. (Page 159)
  • In reality, it is the humans who apply judgment as coded in machines that have that power. Those humans are responsible, and there is a need for the legal and regulatory systems to understand that. (Page 159)
  • Machines cannot make decisions. However, AI can fool people into thinking that machines make decisions. Machines can appear to decide when we are able to codify judgment. The AI generates a prediction, and then the machine draws upon codified human judgment in order to execute an action (decision). (Page 160)
  • AI predictions are imperfect. To mitigate the risk of being wrong, we embark on two lines of attack. First, before deploying AI, we work through contingencies and arrive at a conclusion as to what action the machine should take for each contingency. Second, after deploying the AI, we rely on humans to step in when the AI is unable to predict with high enough confidence or when the AI predicts a scenario for which we have not codified the judgment (human in the loop). (Page 160)
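The two lines of attack described in that highlight can be sketched as a simple dispatch rule: act on codified judgment when the prediction is confident and the contingency was anticipated, otherwise hand off to a human. The sketch below is a hypothetical illustration, not from the book; the fraud-screening labels, function names, and confidence threshold are all invented for the example.

```python
# Hypothetical sketch of the two-stage judgment system the highlight describes:
# judgment codified before deployment handles anticipated contingencies, and a
# human steps in when the prediction is low-confidence or uncodified.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting the machine

# Judgment codified in advance: contingency -> action
CODIFIED_JUDGMENT = {
    "likely_fraud": "hold_transaction",
    "likely_legitimate": "approve_transaction",
}

def decide(prediction: str, confidence: float) -> str:
    """Return an action, deferring to a human when the judgment for a
    contingency was never codified or the prediction lacks confidence."""
    if confidence >= CONFIDENCE_THRESHOLD and prediction in CODIFIED_JUDGMENT:
        # The machine appears to decide, but it is executing human judgment
        # that was codified before deployment.
        return CODIFIED_JUDGMENT[prediction]
    # Uncovered contingency or low confidence: human in the loop.
    return "escalate_to_human"

print(decide("likely_fraud", 0.97))    # codified contingency, confident
print(decide("likely_fraud", 0.55))    # low confidence -> human
print(decide("novel_pattern", 0.99))   # uncodified contingency -> human
```

Note that as prediction improves, fewer cases hit the escalation branch, but the humans who wrote `CODIFIED_JUDGMENT` and those handling escalations remain the decision-makers in the sense the book describes.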
  • Although machines do not have power, they can create power through scale, and they can reallocate power by shifting whose judgment is used where and when for decision- making. Systems predicated on AI can decouple the judgment from the decision such that it can be provided at a different time and place. If judgment shifts from individually deployed by people for each decision to instead codified into software, then this can lead to (1) scaling and consequently a shift in power due to a shift in market share, and (2) a change in who makes the decision and consequently a shift in power from whoever used to apply judgment to whoever provides it for codification or owns the system in which it is embedded. (Page 160)