When Machines Call the Shots: Legal Considerations for the AI-Powered Board of Directors 

By Floris Mertens | April 3, 2023

Back in 2014, the Hong Kong-based venture capital group Deep Knowledge Ventures announced that it had appointed an algorithm named “VITAL” to its board of directors. This artificial intelligence (AI) system was purportedly given the right to vote on whether the firm should invest in a specific company, just like the corporation’s other, human, directors. VITAL has consequently been widely acknowledged as the world’s first robo-director. Following the successes attributed to VITAL’s decisions, other companies such as Tietoevry and Salesforce have de facto appointed AI systems to their boards of directors.

While only a handful of companies have chosen the untrodden path of robo-directors, many have assigned AI a supportive role in their corporate decision-making processes. In fact, assisting algorithms already feature in the management models proposed by McKinsey, Bain and BCG as strategic advisors for investments. Relatedly, one of the most popular applications of AI in corporate governance today is its support for the discovery and due diligence process in mergers and acquisitions. In addition, directors deploy AI systems to profile investors, audit annual reports, review the risk of financial instruments and determine optimal market supply and demand. In general, the use of AI in the corporate realm is said to offer several benefits, such as an overall rationalization of board decision-making. Assistance from AI could also reduce groupthink within the board and strengthen the independence of corporate directors.

A recent study by Ernst & Young, commissioned by the European Commission, reports that 13% of the surveyed EU companies already use AI in their boardrooms, and that an additional 26% plan to do so in the future. On top of that, the World Economic Forum claimed in a survey report that by 2026 corporate governance will have undergone large-scale robotization, so that human directors sharing their decision-making powers with artificial directors will become the new normal. It is therefore reasonable to assume that growing computational power, breakthroughs in AI technology and advancing digitalization will lead to more widespread support of corporate directors by AI, if not their full replacement by autonomous systems.

The factual rise of AI in corporate governance stands in contrast to static company law, which has not kept pace with governance-relevant advances on the technological front. VITAL may have been widely acknowledged as the world’s first robo-director, but from a legal point of view, Hong Kong corporate law did not recognize the AI system as such. To work around the law, VITAL was treated as a member of the board with “observer status.” More generally, corporate frameworks around the world are not equipped for the implementation of AI, since they are essentially rooted in human decision-making and ignore the role of technology in corporate governance. The absence of AI-specific corporate rules creates legal uncertainty about whether it is lawful at all to support the business judgement of directors with AI, and about how liability should be attributed when a decision based on AI output causes harm to the company’s business partners or to third parties. Corporate law will therefore have to cope with novel legal questions once the use of AI as a support tool or as a replacement for human directors becomes more common.

In the corporate realm, implemented AI systems can be classified by their level of autonomy, which determines the allocation of decision rights between the AI system and the board of directors. This classification builds on the general AI taxonomy of Anand Rao, which distinguishes assisted, augmented and autonomous intelligence. With assisted intelligence, human directors selectively rely on AI for administrative tasks. This autonomy level does not intrude on the basic principles of board practice; its legal permissibility is therefore undisputed, even though liability questions may arise when such systems cause harm. With augmented intelligence, human directors use AI output to enhance the informative basis of their decisions. Here, AI contributes to the core decision-making or judgement work of the board, but enjoys no standalone decision rights. At the final, autonomous stage, AI is vested with independent decision rights through a delegation of core governance powers or through its appointment as director. Greater legal uncertainty arises for these two upper-tier autonomy levels, where AI output largely shapes the decision-making process; autonomous intelligence is the extreme case, as it may remove humans from the loop entirely for some decision types. The question then arises whether directors have the legal right to rely on AI output or to delegate governance powers to AI, and ultimately, whether a human director may be fully replaced by an AI system.

Company directors already use AI output to improve the informative basis of their decisions, often where pure data analysis lies at the heart of the decision. In the asset management industry, for instance, investment firms have to a large extent handed share and bond trading over to algorithms. At a higher autonomy level, AI could be entrusted with certain tasks, decision rights or core powers, such as monitoring the management and the overall performance of the company. However, many corporate frameworks provide no clarity about the legality of such delegations. For example, the UK Model Articles allow directors to delegate their powers to a person or committee if the articles of incorporation so provide, but it is doubtful whether AI can be considered either. Even if the delegation were legally permitted, restrictions on the directors’ authority to delegate would still need to be taken into account. To illustrate, Delaware courts insist that the “heart of the management” remains with the board of directors.1 In fact, most corporate laws do not allow the delegation of core management decisions, although it is usually unclear what those decisions include. After a power delegation has taken place, the human director should at least generally oversee the operations of the AI system, which requires the director to have a basic understanding of how these systems are designed.

It should be noted that directors might even come to have a duty to rely on the analytical capabilities of AI for some decisions. Most corporate laws expect the board to make governance decisions on a well-informed basis. Considering that the analytical capabilities of AI may be superior to those of humans for a number of specific tasks, this ubiquitous expectation may very well evolve into a duty to rely on AI output. Delaware case law already paves the way for such a potential duty, as the reasonable use of formal monitoring systems in corporate governance has been interpreted to follow from a director’s duty of loyalty.2 As of now, however, the costs of data governance and of operating AI systems do not justify establishing any obligation to use AI.

A more distant prospect is the full replacement of human directors by AI-powered robo-directors. Human directors may come to share the boardroom with one or more computers (a hybrid board), a single algorithm may replace all human directors (a fused board), or the board may be composed of multiple robo-directors, potentially originating from different manufacturers (an artificial board). AI can only replace a human director if two conditions are met. First, it must be technologically conceivable for an AI system to conduct both the administrative work and the judgement work of directors. In this respect, management literature acknowledges that administrative work could be placed in the hands of AI; judgement work, by contrast, requires creative, analytical and strategic skills, and it is debated whether AI will ever achieve them. It is also uncertain whether AI can balance stakeholder interests. Second, AI must fulfill the eligibility requirements that corporate law imposes on directors. Most corporate systems presuppose that only natural and legal persons may be appointed as directors, and AI is neither. Despite the apparent impossibility of appointing AI as a director, prominent scholars contend that algorithmic entities (i.e., shareholderless entities governed by an autonomous system) can be created in countries with flexible regulatory standards.

Existing corporate frameworks are unfit for the implementation of autonomous systems in the boardroom, should the legislator decide to allow the appointment of an AI system as director. Current corporate governance best practices are predominantly designed around human agency conflicts, which will not necessarily occur when the goals of the AI system are set in favor of the shareholders. Moreover, robo-directors neither earn money nor work towards that objective, so pay-for-performance regimes will be of no use in making AI pursue the corporate interest. Finally, fiduciary duties such as the duties of loyalty and care are hardly intelligible to algorithms, while the business judgement rule seems impossible to apply to AI: AI reasons linearly in pursuit of its set goals, which excludes any margin of discretion. Given this incompatibility of corporate systems with autonomous intelligence, the introduction of robo-directors would prompt fundamental, if not existential, challenges for corporate law.

The existing ex post remedies of corporate law, such as the control of directorial behavior through fiduciary duties and directors’ liability, must therefore be reimagined for the scenario of autonomous systems entering the boardroom. An AI system cannot be held liable and has no interests of its own, although the inherent biases of its controllers may be reflected in its behavior, as AI is only as good as its inputs and programming. The system can be programmed to pursue the interests of its principals, yet there is no guarantee that it will follow all applicable legal rules or show a reasonable aversion to risks and losses. Rule-compliant behavior will therefore need to be embedded in the algorithm’s code beforehand. This calls for cutting-edge ex ante regulatory strategies, such as abstract coding requirements for appointed robo-directors and the regulation of corporate objectives, which will entail far-reaching changes to the anatomy of corporate law.

While it is clear that the steady emergence of AI in the management of traditional corporations will create great legal uncertainty in the absence of regulatory action, new phenomena such as entities without leaders (decentralized autonomous organizations) and entities without members (algorithmic entities) will challenge corporate law systems worldwide even more. Further research on the shifting anatomy of corporate law is therefore needed to ensure that novel corporate rules are not dictated by quickly evolving AI technology but instead rest on calm reasoning.

 

Floris Mertens is a PhD Researcher at the Financial Law Institute of Ghent University, Belgium. 

 

This post is adapted from his paper, “The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis,” available on SSRN. 
