Why Do Social Media Platforms Put Profits Before People? Because Delaware Makes Them 

By Abby Lemert | February 14, 2023

2022 was a bad year for social media. Twitter had its embarrassingly public meltdown under Elon Musk; Facebook had its worst fiscal year on record. And a growing chorus of U.S. policymakers called for a ban on TikTok. But should we “embrace their ruination,” as one recent Atlantic op-ed argued? “We cannot make social media good because it is fundamentally bad, deep in its very structure,” the author wrote. I agree. But the fault, I argue, is not in the platforms. It is in our corporate law and in ourselves.  

There are two contributing human flaws, and each feels deeply familiar. First, “[n]egativity captures our attention better than positivity or neutrality.” The reality is slightly more complex; research shows that not just negativity, but any “sensationalist and provocative content,” grabs our attention most effectively.  

Second, the pool of human time and attention, the underlying raw material for the digital advertising industry, is finite. Every ad-based business, a category that includes social media platforms, consumes human time and attention at the expense of every other. Attention, in economic terms, is a “rivalrous resource,” and rivalrous resources held in common are subject to the tragedy of the commons: overuse and rapid depletion.

The fault in our corporate law is equally banal. For the most part, social media platforms like Facebook (Meta) and Twitter (pre-privatization) are Delaware corporations. This makes them subject to Delaware corporate law, including the fiduciary duties to shareholders Delaware imposes on corporate directors. According to Delaware jurists, corporate directors have a mandatory duty to “make stockholder welfare their sole end,” a doctrine known as shareholder primacy. Though legal academics debate the scope and enforceability of Delaware’s shareholder primacy doctrine, the near-universal view among judges and practitioners is that Delaware law imposes a duty on directors to “manage [a company] for the benefit of shareholders, and not for any other constituency.” This means that “[d]irectors of for-profit entities who pursue a social or environmental mission or otherwise fail to maximize profits . . . may be liable to their shareholders for breach of fiduciary duty.” 

Digital advertising-based business models set these three basic facts, two human and one legal, on a destructive collision course. For ad-funded social media platforms, the duty to maximize profit for shareholders translates roughly into a duty to maximize user engagement. Because Facebook, for example, operates on a fully ad-based business model, its profitability correlates with the total amount of time and attention users spend on its platforms, a metric the company calls “user engagement.” In general, Facebook’s corporate directors should be expected to make decisions that maximize user engagement in the short and long term.

Thus, social media platforms have legal and financial incentives to design their platforms and curate content on them in ways that maximize the amount of time and attention users spend online engaging with their platforms. This leads logically to the attention “arms race” we have observed among social media platforms.  

But this general mandate to maximize user engagement does not equate to a mandate to design digital platforms or invest in content moderation in ways that are optimal for society. The reason is simple: the types of content that drive the most user engagement are not beneficial for society. As Mark Zuckerberg explained as early as 2018, “[o]ne of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content.” Facebook’s research “suggests that no matter where [they] draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average—even when they tell [Facebook] afterwards they don’t like the content.”

This makes user engagement a vastly different metric from a user’s subjective enjoyment or objective health and well-being. But Facebook’s business model requires maximizing user engagement, not user enjoyment or well-being. Consequently, director decisions that lead to foreseeable and meaningful decreases in user engagement in the short term must be justified as necessary to increase user engagement, or otherwise ensure profitability, in the long term.

Unfortunately, many of the most serious and prominent ideas for reform demand exactly the sort of unprofitable changes that Delaware law forbids. Leading calls for reform include: (1) hiring large numbers of human content moderators; (2) creating more institutional separation between business and safety teams; (3) reducing the addictiveness of platforms, particularly for younger users; and (4) cutting back on sensationalist content. But each of these reforms would significantly raise costs or decrease engagement, rendering them unprofitable in both the short and long terms and thus impossible for social media corporate directors to undertake voluntarily and in good faith.  

Moreover, platforms cannot escape the strictures of shareholder primacy through self-regulation. A second doctrine of Delaware corporate law limits the extent to which directors may delegate their decision-making powers to others before such delegations become an unlawful abdication of their fiduciary duties. Under this standard, Facebook’s elaborate and high-profile attempt to self-regulate through its “independent Oversight Board,” an initiative that many see as a salutary effort to mitigate Facebook’s digital harms, may ultimately be invalid under Delaware law as an unlawful delegation of director duties.

The problem, once laid bare, is obvious. Shareholder primacy mandates that social media directors maximize user engagement and externalize their platforms’ harms, exploiting users for as long as Pigouvian mechanisms (mechanisms that reimpose an activity’s externalized social costs on the actor) remain ineffective. Platforms can reap windfall profits from this exploitation. Meanwhile, the director delegation doctrine prevents them from effectively tying their own hands to forgo it. The ad-based business model of social media is indeed “fundamentally bad, deep in its very structure.”

But if the corporate law roots of the problem are so obvious, why haven’t tech accountability advocates, journalists, or legal scholars already surfaced them? After Frances Haugen released the Facebook Files, why was congressional and public outrage directed at the company’s choice of “profits over people” when Delaware corporate law requires that very prioritization? In the words of Leo Strine, former Chief Justice of the Delaware Supreme Court and, before that, Chancellor of the Delaware Court of Chancery, “lecturing others to do the right thing without acknowledging the actual rules that apply to their behavior, and the actual power dynamics to which they are subject, is not a responsible path to social progress.”

It is not an accident that the regulatory conversation has failed to consider social media’s corporate-law constraints. Instead, social media companies have successfully pulled off an impressive feat of “deep capture.” Legal scholars Jon Hanson and David Yosifon theorized that the classic theory of regulatory capture required expansion in light of human susceptibility to environmental and situational influences. Their core insight was that “[b]eneath the surface of behavior, the interior situation of relevant actors is also subject to capture.” In other words, powerful actors will attempt to capture not only “the way that people think” but “the way that they think they think,” a phenomenon that Hanson and Yosifon label “deep capture.”

For sellers of typical goods and services, opportunities to manipulate a buyer’s situational variables are relatively constrained since the buyer is alert to the fact that they are engaging in a transaction. Buyers who are aware that they are engaged in a transaction are more likely to be on guard against a seller’s attempts to manipulate the situation to influence the buyer’s preferences or obfuscate the terms of the bargain. But consumers cannot be on guard against merchant manipulation when they lack the prerequisite awareness that they are engaged in a transaction in the first place. 

Social media platforms engage in several behaviors that perfectly illustrate Hanson and Yosifon’s deep capture hypothesis. Facebook, for example, intentionally perpetuates this deep capture through two fictions: first, that users receive access to its platforms for free; second, that its platforms are primarily user-directed social media and communication tools.

Because Facebook users pay a cash price of zero to access the company’s platforms, a large proportion of them are unaware of the very fact, let alone the terms and nature, of their ongoing transfer of value to Facebook. Facebook’s decision to set a cash price of zero for its services is likely strategic. “Consumers . . . react to ‘free’ prices in ways that may be irrational, generally overvaluing free goods and undervaluing their costs.” Thus, even the zero-dollar price of access manipulates a psychological vulnerability in users, encouraging them to spend additional time on the platform.

Setting the cash price of access at zero also allows the company to rhetorically represent its services as beneficent social media and communications tools intended merely to “connect the world.” In a perfect illustration of Hanson and Yosifon’s deep capture hypothesis, Facebook attempts to capture not only users’ time and attention, but even what users think they are doing when they trade the company their time and attention. Users believe they engage with content from groups, creators, news outlets, friends, family, and colleagues. In reality, users’ economically relevant activity, and all that matters to Facebook’s shareholders, is that they are viewing advertisements. 

Facebook’s incredible success at deep capture can be seen in the very term most often used to describe the company: it is called a social media platform rather than a digital advertising company. Facebook has intentionally perpetuated this fiction among regulators and law enforcement officials, arguing in court filings that its users are not “consumers.” This form of deep capture also structures the language of regulatory debates among lawmakers. Is Facebook truly a neutral “platform,” or is it more akin to a “publisher” with editorial control? Even that binary is a misdirection. The meaningful answer is, fundamentally, neither: Facebook is an “attention merchant,” a seller of advertising opportunities. The regulatory conversation has centered not on Facebook’s manipulative collection of user time and attention but on the company’s failure to moderate user-generated content adequately, even though content moderation is only incidental to the company’s underlying business model.

By deeply capturing users’, lawmakers’, and the public’s understanding of what the company is, Facebook has succeeded in causing the time- and attention-based transactions at the core of its business model to fade into the background of regulatory conversations. At the same time, the company continues to speak to shareholders in the language of maximizing user engagement, just as corporate law mandates and incentivizes. 

The rivalrousness of attention also increases ad-based businesses’ motive to manipulate. Social media companies compete relentlessly for human time and attention, both with one another and with more traditional publishing outlets. For a generation, global population growth, expanded internet access, and the invention of increasingly portable, mobile, and wearable devices steadily increased the total amount of human attention available to be sold. But eventually, due to slowing population growth, market saturation by social media, and other factors, the size of the “whole pie” of global user time and attention will stop growing. Facebook’s major stock tumbles in 2022 may indicate that the company is already starting to feel these constraints on its “raw materials.” The company lost nearly $200 billion in market value after releasing poor financial results, driven largely by competition for attention from TikTok and other ad-based platforms.

As competition for attention becomes ever fiercer, it will drive companies toward increasing manipulation of the psychological vulnerabilities in human reasoning. Companies unwilling or unable to exploit consumers’ situational susceptibility will fall behind and eventually shut down.  

Through this lens, the tech industry’s recent pivot to the “Metaverse” signals an awareness among not just Facebook but many digital advertising companies of the long-term stagnation risks posed by global saturation of the attention market. If Facebook and other ad-based companies are approaching maximal monetization of their global user base’s attention on existing devices and platforms, then ensuring continued long-term growth requires luring users into ever more immersive platforms and encouraging them to spend more time in all-encompassing digital worlds where ads can be displayed. Facebook’s corporate-law paradox is not likely to fade in the “Metaverse” era. If anything, companies with ad-based business models will face growing pressure to manipulate their users and viewers amid increasingly stiff competition for attention.

We arrive back at the premise that social media is “fundamentally bad, deep in its very structure.” The last few disastrous months for social media platforms seem to have raised public awareness of the harmful side effects of platforms’ deep capture tactics. In a recent New York Times op-ed, Ezra Klein wrote:  

The competition to create and own the digital square may be good business, but it has led to terrible politics . . . . There are those who believe the social web is reaching its terminal point. I hope they’re right. Platform after platform was designed to make it easier and more addictive for us to share content with one another so the corporations behind them could sell ever more of our attention and data. 

But recognizing the deep human and corporate law roots of this unscrupulousness is essential to carving a different path forward. It is a failure of imagination to think that our only choice is between the social media platforms we have now and nothing.

My recent article outlines how corporate law might be modified to correct this destructive alignment of incentives. One idea is to look to different corporate forms for platform governance, such as not-for-profits or B-Corps. Wikipedia, for example, is run by a 501(c)(3) nonprofit, the Wikimedia Foundation. “It thrives, quietly and gently, as a reminder that a very different internet, governed in a very different way, intended for a very different purpose, is possible.”

If corporate structures cannot be modified, I identify other Pigouvian mechanisms that could require platforms to internalize their harmful externalities. But such mechanisms require us to develop new analogies for social media, reframing it as a societal health hazard on par with smoking and climate change. That conversation, it seems, has already begun. “It’s seemingly as hard to give up on social media as it was to give up smoking en masse as Americans did in the 20th century. Quitting that habit took decades of regulatory intervention, public-relations campaigning, social shaming, and aesthetic shifts.”  

In my article, I also suggest taxing Facebook for the “social cost of connection,” akin to the “social cost of carbon” metric used in environmental regulatory cost-benefit analysis. Facebook’s corporate law paradox is analogous to the incentive structure facing major corporate greenhouse gas emitters, which profit from carbon emissions and experience environmental harms as negative externalities. And much as it took decades to identify the negative externalities of carbon and persuade the public of them, digital platforms profited from an early golden age in which more connection was seen as an unadulterated social good.
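To make the carbon analogy concrete, the logic of such a levy can be sketched in textbook welfare-economics terms (a stylized illustration with notation of my own choosing, not a formula from the article, and one that sets aside user surplus for simplicity):

\[
\begin{aligned}
\text{Platform's problem:} \quad & \max_{q}\; \pi(q) - t\,q \\
\text{Society's problem:} \quad & \max_{q}\; \pi(q) - D(q) \\
\text{Pigouvian tax:} \quad & t^{*} = D'(q^{*})
\end{aligned}
\]

Here q is aggregate engagement (say, user-hours), π(q) is the platform’s profit from selling that attention, D(q) is the social damage that engagement causes, and q* solves society’s problem. Setting the per-unit tax t* equal to marginal damage at the social optimum makes the platform’s first-order condition, π′(q) = t*, coincide with society’s, π′(q) = D′(q), so directors can dutifully maximize shareholder welfare and still arrive at the socially optimal level of engagement. The hard empirical work, as with the social cost of carbon, lies in estimating D′.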

But two decades on, it is clear that the byproducts of increased connection include massive societal harms. Social media platforms experience the harms they create as negative externalities, and Delaware corporate law mandates that they exploit those externalities for windfall profits. In the cases of smoking and environmental degradation, corporate law created a destructive alignment of power and incentives. Other legal doctrines were necessary to correct it. “That process must now begin in earnest for social media.” 

 

Abby Lemert is a J.D. Candidate at Yale Law School, Class of 2023.  

This post was adapted from her paper, “Facebook’s Corporate Law Paradox,” available on SSRN and published in the Virginia Law & Business Review. 
