Originally published as a featured LinkedIn Weekend Essay in February 2020.
“As I have discussed with you in other contexts, and as you have acknowledged, the algorithms which power [your] services are not designed to distinguish quality information from misinformation or misleading information, and the consequences of that are particularly troubling for public health issues…”
— Excerpts from letters sent by Rep. Adam Schiff (D-CA), Chairman of the House Permanent Select Committee on Intelligence, to Sundar Pichai of Google and Mark Zuckerberg of Facebook in February 2019. Jeff Bezos of Amazon received a similar letter from Schiff.
What prompted Rep. Adam Schiff to send these letters was the spread of anti-vaccination propaganda on Amazon, YouTube, Facebook, and Instagram. Schiff’s concern is not an idle one: by April 2019 the incidence of measles in the United States was at its second highest since the disease was thought to have been eliminated in 2000. The misinformation targeting users of these platforms was putting those communities in harm’s way.
Although it’s generally accepted that business leaders should always take into account the organization’s responsibilities to its customers, employees, shareholders, partners, and the communities in which it operates, the potential for digitally enabled businesses to harm these stakeholders has grown dramatically. The damage is amplified by the enormous reach and personalization made possible by digital networks, which raises issues that test the limits of traditional ethical frameworks and guidelines.
I wrote about these kinds of challenges before, in “The Keystone Advantage,” published almost twenty years ago. Drawing from both biology and economics, I proposed the idea of a keystone strategy, which gives a firm an incentive to align the health of the networks connected to it with its own. Ultimately, if a firm competes in a network economy, it draws its strength from the networks that surround it. If those networks work well, the firm benefits; if they are harmed, so is the firm that occupies the network’s hubs.
In “Competing in the Age of AI,” Karim Lakhani and I argue that the keystone idea now applies even more broadly than it did back in 2003. Organizations as diverse as Tencent and Target, Facebook and Equifax, Baidu and Google can sustain performance only if the many customers and communities they depend on are healthy, strong, and trust the platform they rely on for news, information, credit, and much else. If these communities are harmed, the firm itself will fail, as participants abandon the platform, competitors attack the now defenseless organization, partners flee the damage, and government institutions step in to further limit the firm’s behavior. If, on the other hand, firms take the keystone idea seriously, they will invest in understanding, measuring, tracking, and improving the health and trust of the communities they depend on, and will react to counter any threats.
These challenges are now even more important than before. As businesses are increasingly powered by data and AI, they can scale to previously unheard-of impact, which has not only increased business opportunities but also multiplied ethical challenges. If Facebook did not have the capacity to serve two billion users, Adam Schiff would not be as concerned. In new and old firms alike, leaders should be aware of how their newly deployed digital capabilities can be used and misused, even in ways they never intended.
Specifically, Rep. Schiff’s letters take aim at the algorithms used to optimize views, purchases, ad clicks, and personal engagement. Even a simple learning algorithm rewarded on clicks and revenue can quickly become dangerous: it serves content that reinforces bias and other kinds of flawed thinking, efficiently finds the users most likely to be influenced by that content, and entrenches their mistaken views. The vast scale, scope, and learning potential of these digital operating models means that harmful messages can be tailored and targeted, literally, to hundreds of millions of people.
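The feedback loop described above can be made concrete with a toy simulation. The sketch below is an illustrative assumption, not any platform’s actual system: it models a recommender as an epsilon-greedy multi-armed bandit whose only reward signal is a click. Given two content categories with made-up click-through rates, the algorithm learns to serve the higher-engagement category almost exclusively, with no notion of whether that content is accurate.

```python
import random

def click_maximizing_feed(click_rates, rounds=10_000, eps=0.1, seed=42):
    """Epsilon-greedy bandit: each 'arm' is a content category, and the
    only reward is a click. Returns how often each category was served."""
    rng = random.Random(seed)
    shown = [0] * len(click_rates)   # times each category was served
    clicks = [0] * len(click_rates)  # clicks each category received
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(len(click_rates))   # explore at random
        else:                                       # exploit best observed CTR
            arm = max(range(len(click_rates)),
                      key=lambda i: clicks[i] / shown[i] if shown[i] else 0.0)
        shown[arm] += 1
        if rng.random() < click_rates[arm]:         # simulated user click
            clicks[arm] += 1
    return shown

# Hypothetical click-through rates: sensational misinformation (0.30)
# out-pulls careful reporting (0.10).
served = click_maximizing_feed([0.10, 0.30])
```

With these assumed rates, the bandit ends up serving the high-engagement category the overwhelming majority of the time. Nothing in the reward distinguishes quality information from misinformation, which is precisely the gap Schiff’s letters point to.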
The grassroots anti-vaccination movement relies on the efforts of a community of individuals who believe that certain kinds of inoculations cause severe illness. The movement dates back as far as the eighteenth century, but its impact has been vastly amplified in recent years by social networks, video streaming sites, and ad-targeting technology. A 2017 study of 2.6 million Facebook users over seven and a half years found that consumption of anti-vaccine content was boosted by echo chamber effects: users looked only at posts that affirmed their beliefs, ignored dissenting information, and joined groups reinforcing their biases.
The scale of the impact is striking. In Texas alone, at least 57,000 schoolchildren were exempted from vaccination for nonmedical reasons in 2018, a twentyfold increase since 2003. Health officials in Europe and the United States blame the “anti-vax” movement for outbreaks of dangerous diseases like measles and pertussis over the past ten years.
The anti-vax movement is by no means isolated. The same methods and mechanisms that have made it potent are being used to systematically create echo chambers of all sorts, especially political, social, and religious ones. In some ways, these echo chambers resemble those that have long characterized cable TV and radio. But traditional media does not easily reach the same scale as digital networks. Nor does it allow a message to be tuned and personalized in real time: the algorithm serving a Google search result or a Facebook social ad can automatically personalize the information each user sees to maximize her engagement. That is where the AI comes in. Finally, traditional media does not enable the kind of active user engagement that lets like-minded individuals share content with one another at zero marginal cost.
Digital scale, scope, and learning can amplify the impact of any bias, even without systemic intent to do harm or sway views. Our colleagues Mike Luca, Ben Edelman, and Dan Svirsky were among the first scholars to find examples of this: their work on Airbnb shows that people with names that sound distinctively African American were 16 percent less likely than those with European-sounding names to be accepted as guests by Airbnb hosts. Subsequent research by other scholars has found that Airbnb hosts similarly discriminate against people with Islamic-sounding names, people with disabilities, and members of the LGBTQ community.
The same sort of bias afflicts financial services. Even micro-lending platforms like Kiva, which are explicitly designed to provide financial opportunity to disadvantaged communities, have been found to exacerbate bias.
There was no organized effort to promote discrimination on Airbnb or Kiva. The digital systems simply amplified the implicit, or subconscious, bias of hosts and even progressive lenders. Even if the percentage of truly bad actors is small or nonexistent, the amplification potential of digital operating models means that many people may be adversely affected.
The amplification of human bias, discord, and misinformation is not, unfortunately, the only new ethical challenge. We must also examine the bias intrinsic to the digital algorithms themselves.
The leaders of modern firms cannot afford to ignore this new generation of ethical challenges. A variety of practical, implementable technical and business solutions is needed. Clearly, we are not alone in thinking this way. Google and Microsoft are investing heavily in research on algorithmic bias, and Facebook is devoting massive resources to tackling the problems of fake news and harmful posts. And even the leadership of traditional organizations like Equifax and the Democratic National Committee—having been stung by hackers—is investing in remedies. Navigating the ethics of digital scale, scope, and learning has become a universal management imperative.
The greatest responsibility lies with the organizations that wield the most power and occupy the central network positions in our economy and society. The keystone idea comes from an analogy with biological ecosystems. Like the modern economy, biological ecosystems are highly connected networks that collectively depend on the behavior of their most critical agents. In an ecosystem, so-called keystone species are especially critical to the sustainability of the whole. From providing nesting areas to channeling rainwater, these species perform essential functions, maintaining ecosystem health through specific, evolved behaviors whose effects extend well beyond their own species to the entire ecosystem. Removing a keystone species critically harms the sustainability of the whole.
In a similar fashion, companies like Facebook and Equifax effectively regulate the health of their business networks and user communities. Their activities propagate to every network node and member, whether those members post video content, apply for loans, sell advertisements, or share messages. Because these central firms occupy richly connected network positions and provide the foundation for network-wide value creation, they have become essential to the economy and the social system. In each case, they provide services and technologies on which many of us depend. Their removal, or even their malfunction, can have potentially catastrophic consequences.
But as leaders in many firms already understand, the role of a network hub comes with responsibilities. This is where the keystone concept becomes really critical. A keystone strategy aligns the objectives of a hub firm with those of its networks. By improving the health of its network (or business ecosystem), a keystone strategy also benefits the long-term performance of the firm.
The central feature of this strategy is its focus on aligning internal and external needs to shape and sustain the health of the networks a firm depends on. When Google invests in technologies that remove bias from its algorithms, it’s deploying a keystone strategy. When Facebook removes harmful videos from its networks, it’s doing the same thing. The point here is that sustaining a business network is not only an ethical responsibility but also the only way to preserve a networked business for the long term.
The keystone concept is related to the idea of information fiduciary proposed recently by Jack Balkin and Jonathan Zittrain:
In the law, a fiduciary is a person or business with an obligation to act in a trustworthy manner in the interest of another. For example, financial managers and planners are entrusted to handle their clients’ money. Doctors, lawyers, and accountants are examples of information fiduciaries—that is, a person or business that deals not in money, but in information. Doctors and lawyers are obligated to keep our secrets and they cannot use the information they collect about us against our interests.
Controlling hubs in important economic networks, firms like Google and Facebook acquire extensive consumer information. As information fiduciaries, they have important responsibilities not to harm the communities they collect information from.
We have already seen how digital networks and AI are prompting the development of new operating capabilities, strategic principles, and ethical dilemmas. But beyond these immediate changes, we must also think through the broader long-term patterns and gather the wisdom required to deal with our newfound challenges.
Marco Iansiti and Karim R. Lakhani are the authors of Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, from which this article is adapted.