Robo-Advisers and the Fiduciary Duty

August 1, 2017

This post is inspired by and includes excerpts from “The Rise of Robo-Advisers: Can an Algorithm Be a Fiduciary?,” 67 Duke L.J. (2017) (forthcoming).

Imagine that all your retirement savings are managed by an online company (a robo-adviser) that uses an extremely sophisticated machine learning algorithm to construct a continuously adjusting portfolio designed to maximize risk-adjusted returns subject to your current risk appetite. This algorithm has access to your credit card statements and your online bank accounts – all in real time – and uses this information to construct your wealth profile, your preferences, and your ability to take investment risks.

The algorithm also has access to countless external data sources, which it continuously digests to identify previously unknown or changing asset correlations, adjusting its investment advice accordingly. Through its monitoring of social media, the algorithm begins to notice Twitter users discussing a certain technology company in negative terms. Predicting that this will lead to a drop in the company’s stock price, the algorithm decreases your portfolio’s allocation to that company, unbeknownst to you.

Several weeks later, the New York Times reports that the technology company’s competitors had used Twitter bots to generate the false impression that the company was facing serious allegations of sexual harassment. Their goal was to hurt the company’s public image and, hopefully, its sales. The price of the company’s stock subsequently rebounds to its pre-scandal level, and the company announces that its quarterly sales will in fact exceed analysts’ expectations. The stock price jumps. The algorithm reinvests. Your portfolio suffered a minor loss, and you missed the initial gains resulting from the sales news. To add insult to injury, you also paid transaction fees on both the sale and the repurchase of the security. Upset, you call up the robo-adviser and demand to know what happened. But because the algorithm is so complex and is continuously learning from immeasurable inputs, no one at the company can answer you. After investigating the situation, the company becomes aware of the Twitter bot scam and informs you that it is likely the culprit. You want to sue the company, so you call your lawyer friend and ask, “Can they be held liable for not knowing what the algorithm was doing?”

Liability

While the current capabilities of robo-advisers fall short of the above scenario, given the pace of innovation in artificial intelligence and machine learning,[1] the concept of an autonomous investment adviser, unknowable even to its own designers, is a very real possibility in the not-too-distant future. But before that future becomes a reality, courts and lawmakers would be wise to consider the current liability framework for investment advisers and ask whether robo-advisers – both present and future – are capable of meeting the federal fiduciary standard.

I believe that, as presently constituted, robo-advisers do meet the fiduciary standard and that the fiduciary framework provides an adequate liability scheme for most of a robo-adviser’s activities. But as robo-advisers develop and provide additional services, alternate liability schemes may be necessary to cover gaps created by an increasingly autonomous decision maker operating both within and around the fiduciary framework.

Fiduciary Duty

The fiduciary duty was enshrined in the Investment Advisers Act of 1940.[2] According to the SEC, an investment adviser provides individualized, “competent, unbiased, and continuous advice regarding the sound management of [a client’s] investments.”[3] Providing this advice requires that an adviser know the client’s financial situation, and it often results in a range of investment advice. Facts about a client’s life, like how many children the client has, can all play into an adviser’s recommendation.

Registration under the Advisers Act subjects the adviser to the federal fiduciary standard. The general requirement imposed by the duty is that the adviser act in the “best interest” of his client. To satisfy this “best interest” standard, three important conditions must be met: the adviser must disclose any conflicts of interest that may prejudice his advice,[4] seek the lowest-cost execution of securities trades,[5] and provide “suitable” recommendations[6] that have a reasonable basis in the client’s specific financial situation.[7] To determine whether a robo-adviser meets the fiduciary standard, we must assess whether it is capable of meeting these three conditions.

  1. Do robo-advisers provide personalized investment advice?

Investment advisers have only as much information as they ask for and clients provide. Some argue that robo-advisers are more limited than their human counterparts because robo-advisers rely on clients to provide the necessary information. But human advisers face the same issue – and, like a robo-adviser, may rely on questionnaires as well as in-person interviews to elicit customer information. So long as the adviser – robot or human – asks the right questions and clarifies conflicting information, it can meet this fiduciary requirement (a toy illustration of such an intake process appears below). Further, a human adviser faces the same difficulty as a robo-adviser in staying updated on the client’s financial position: clients are not in constant contact with their human advisers, which is likely the case for most passive investment strategies. Procedures to update customer preferences give robo-advisory clients the same opportunity to update their adviser on changes to their financial goals and situation.
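
To make the intake process concrete, here is a minimal sketch in Python. It is purely illustrative – the questions, weights, and conflict rule are hypothetical inventions for this post, not any firm’s actual model:

```python
# Illustrative only: a toy risk-profiling questionnaire.
# Questions, weights, and the conflict check are hypothetical.

def risk_score(answers: dict) -> float:
    """Combine questionnaire answers into a single risk-capacity score in [0, 1]."""
    horizon = min(answers["horizon_years"], 30) / 30      # longer horizon -> more capacity
    tolerance = (answers["loss_tolerance"] - 1) / 4       # self-reported appetite (1-5 scale)
    stability = (answers["income_stability"] - 1) / 4     # ability to absorb losses (1-5 scale)
    return 0.4 * horizon + 0.3 * tolerance + 0.3 * stability

def needs_clarification(answers: dict) -> bool:
    """Flag internally inconsistent answers for follow-up, as a human adviser would."""
    # Example conflict: a very short horizon paired with maximum loss tolerance.
    return answers["horizon_years"] < 3 and answers["loss_tolerance"] == 5

answers = {"horizon_years": 25, "loss_tolerance": 4, "income_stability": 3}
print(risk_score(answers))           # ~0.71
print(needs_clarification(answers))  # False
```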

  2. Do robo-advisers sufficiently disclose any conflicts of interest?

Robo-advisory firms have the same potential for certain conflicts of interest as human advisers. The SEC’s guidance on robo-advisers stresses that although robo-advisory firms do not have to “make investment advisory personnel available to clients to highlight and explain important concepts,” the disclosures must be such that users see and understand them.[8] Some suggestions include using “interactive text” or “pop-up boxes.”[9] Again, both robo-advisory firms and human advisers face the same potential conflicts of interest, but both can fulfill their respective duties through sufficient disclosure.

  3. Can a robo-adviser fulfill the requirements of best execution?

Best execution requires that an adviser identify the brokerage service with the lowest total cost to the client under the circumstances.[10] This is an ongoing duty, meaning the adviser should periodically review its policies to ensure it is getting the best deal for its clients.[11] This does not mean that an adviser cannot use an affiliated or specific broker – but the conflict of interest must be disclosed.[12] Like a human adviser, the robo-adviser should periodically review its methods for executing client transactions (a simplified sketch of such a total-cost comparison appears below). Like the two concerns detailed above, this concern is neither unique to robo-advisers nor insurmountable.
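
A minimal sketch of the idea, assuming hypothetical broker names and fee figures – the point is simply that the comparison runs on total cost, not on commission alone:

```python
# Illustrative only: choosing the lowest total-cost venue for a client trade.
# Broker names and fee figures are hypothetical.

from dataclasses import dataclass

@dataclass
class BrokerQuote:
    name: str
    commission: float         # flat fee charged per trade
    spread_cost: float        # expected cost from the bid-ask spread
    expected_slippage: float  # expected price impact of the order

    def total_cost(self) -> float:
        return self.commission + self.spread_cost + self.expected_slippage

def best_execution(quotes: list) -> BrokerQuote:
    """Pick the venue with the lowest total cost to the client."""
    return min(quotes, key=BrokerQuote.total_cost)

quotes = [
    BrokerQuote("Broker A", commission=0.00, spread_cost=4.10, expected_slippage=2.50),
    BrokerQuote("Broker B", commission=4.95, spread_cost=1.20, expected_slippage=0.40),
]
print(best_execution(quotes).name)  # "Broker B" - zero commission is not always cheapest
```

A periodic review of the kind the SEC expects would simply rerun this comparison as fee schedules and market conditions change.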

Who is responsible for the robo-adviser?

The previous analysis relies on the fact that today’s algorithms exist solely within the scope of investment advising subject to the fiduciary duty. It is worth examining the liability framework that follows from the fiduciary duty before analyzing hypothetical high-tech scenarios that may require alternate liability theories better suited to artificial intelligence more generally.

The federal fiduciary duty attaches to the firm, which is registered as the investment adviser.[13] Per the SEC’s guidance, poor design – like failing to clearly disclose conflicts – or inaccurate selection algorithms may give rise to a violation of the fiduciary duty by the robo-adviser. This analysis centers on what the firm has done in creating and using the algorithms – treating the algorithm as a tool used by the firm. Thus, investors have a means to recover for injuries caused by the algorithms from the registered investment adviser (the firm).

But robo-advisers of the future may face additional hurdles in meeting the fiduciary standard. Innovations like the development of artificial neural networks[14] and the massive data collection spurred by the adoption of technology in all areas of life[15] allow the field of machine learning to grow at an exponential rate. As these technologies work their way into investment algorithms, the reasoning behind an algorithm’s selections may become impossible for the firm to explain. That failure would likely violate the adviser’s duty to have a “reasonable basis” for its recommendations – and thus the firm’s fiduciary duty – since the firm cannot actually explain why the algorithm did what it did.[16] Designers should therefore ensure that future neural network architectures are built so that their reasoning can be identified, investigated, and explained – for instance, by logging each decision together with the inputs that produced it, as sketched below.
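
One way to preserve explainability is to treat every trade as an auditable event. The sketch below is a bare-bones illustration, not an actual compliance tool – the field names and attribution values are hypothetical:

```python
# Illustrative only: an append-only audit log pairing each portfolio decision
# with the model inputs and per-feature attributions that produced it.
# Field names and values are hypothetical.

import json
import time

def log_decision(ticker: str, action: str, inputs: dict, attributions: dict,
                 path: str = "decision_audit.jsonl") -> None:
    """Append one audit record so the firm can later reconstruct a
    'reasonable basis' for the recommendation."""
    record = {
        "timestamp": time.time(),
        "ticker": ticker,
        "action": action,              # e.g., "reduce_allocation"
        "model_inputs": inputs,        # snapshot of what the model saw
        "attributions": attributions,  # how much each input drove the decision
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# The Twitter-bot hypothetical above, recorded: sentiment dominated the decision.
log_decision(
    ticker="TECHCO",
    action="reduce_allocation",
    inputs={"sentiment_score": -0.8, "momentum_30d": 0.02},
    attributions={"sentiment_score": 0.91, "momentum_30d": 0.09},
)
```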

What if the robo-adviser provides other services not covered by the fiduciary standard?

While current robo-advisory architecture is not as sophisticated as some of the more complex artificial neural networks used for tasks like image recognition, it is not unrealistic to expect that in the near future robo-advisers could be designed to work in tandem with other data collection services. For instance, suppose the following:

To give the robo-adviser a better idea of a consumer’s financial picture, the algorithm collects data from that user’s online bank accounts or financial-aggregator services like Mint. Based on spending habits, it could then be prompted to ask more direct or probing questions to get a fuller view of the consumer’s financial health. For instance, if the robo-adviser notices higher levels of entertainment spending, it may ask whether the consumer has come into more money. Or if the user is spending more frequently, and in larger amounts, at home improvement stores, the algorithm may ask whether a renovation or new home purchase is upcoming and whether there has been a change in the user’s debt level. A toy version of this rule-based prompting appears below.
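
This sketch uses hypothetical categories, thresholds, and question text – it illustrates the pattern, not any actual product:

```python
# Illustrative only: spending-pattern rules that trigger probing questions.
# Categories, thresholds, and question wording are hypothetical.

def follow_up_questions(monthly_spend: dict, baseline: dict) -> list:
    """Compare this month's category spending to a baseline and return
    the follow-up questions the robo-adviser should ask."""
    questions = []
    if monthly_spend.get("entertainment", 0) > 1.5 * baseline.get("entertainment", 0):
        questions.append("Your entertainment spending is up - have you come into more money?")
    if monthly_spend.get("home_improvement", 0) > 2.0 * baseline.get("home_improvement", 0):
        questions.append("Are you planning a renovation or home purchase? Has your debt level changed?")
    return questions

baseline = {"entertainment": 200, "home_improvement": 100}
this_month = {"entertainment": 450, "home_improvement": 350}
for q in follow_up_questions(this_month, baseline):
    print(q)
```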

Basically, as the algorithm is developed for better cross-platform integration, the architecture will likely have to change and become more complex. If that happens, developers must ensure that the adviser component can still easily explain to a client why the algorithm chose to make a trade. Another issue that could affect an adviser’s ability to explain the algorithm’s actions is a market shock that dramatically, and quickly, changes the algorithm’s weighting scheme. To complicate matters further, imagine the robo-adviser has better cross-platform integration:

The robo-adviser also monitors my spending patterns and flags suspicious transactions for my bank. Imagine Duke Energy (a power company) was recently the unwitting victim of a hack, and the hackers used the collected account information to charge client accounts. The algorithm sees that transactions to Duke Energy (the fraudulent ones) were constantly cancelled by users and banks. The algorithm then reports for cancellation several Duke Energy charges on my account – one of which was legitimate. I then receive a notice alerting me that my payment is late and am charged a late fee.
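
The failure mode in this hypothetical comes from an over-broad rule. A deliberately naive sketch (merchant names, rates, and the threshold are invented for this example) shows how a legitimate charge gets swept in:

```python
# Illustrative only: a naive fraud rule of the kind hypothesized above.
# Merchant names, cancellation rates, and the threshold are invented.

def flag_merchant_charges(my_charges: list, cancel_rate: dict,
                          threshold: float = 0.8) -> list:
    """Flag every charge from any merchant whose network-wide cancellation
    rate exceeds the threshold. The rule is over-broad: it cannot tell my
    legitimate utility bill apart from the fraudulent charges."""
    return [c for c in my_charges if cancel_rate.get(c["merchant"], 0) > threshold]

cancel_rate = {"Duke Energy": 0.92}  # spiked by the hypothetical hack
my_charges = [
    {"merchant": "Duke Energy", "amount": 120.00, "legitimate": True},
    {"merchant": "Duke Energy", "amount": 89.99, "legitimate": False},
]
print(flag_merchant_charges(my_charges, cancel_rate))  # both charges flagged
```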

Of course, the late fee isn’t high enough to warrant my filing suit against the robo-adviser. But should the robo-advisory firm be liable for these damages? This activity falls outside the fiduciary liability scheme of a financial adviser, even though the algorithm may be using my spending patterns to better inform its investment advice. Assuming a more sophisticated robo-adviser that is integrated into multiple aspects of a person’s finances, let us now turn to evaluating multiple theories of liability.

  1. Artificial Intelligence and Quasi-Personhood

If the algorithm is effectively taking the place of a human employee, why should the law treat the two differently for purposes of the firm’s liability? While adopting the legal fiction that artificial intelligence is a quasi-person for legal purposes may seem far-fetched, the law has treated other artificial entities as person-like before – for instance, corporations.[17] And unlike a corporation, which can act only through human agents, artificial intelligence can make decisions and act independently by means of technology.[18]

While clear defects in a program’s development might lend themselves to a strict liability or product liability analysis,[19] more complex issues will arise when the developer has done everything right but a sophisticated neural net makes the autonomous decision to change its reasoning. For instance, a sophisticated robo-adviser may reprogram itself in response to market shocks, leading the program to tweak its allocation, its selection criteria, or, more drastically, its overall investing strategy.[20] If the program is taking larger steps to redesign itself, should the robo-advisory firm still be liable despite the program’s true autonomy?

The European Parliament has suggested that the European Commission create, in the long run, a separate legal status for robots and AI for situations where artificial intelligence interacts with third parties independently.[21] Like the EU, the United States should investigate how electronic personhood would work with its current liability schemes, with the goal of creating a legislative scheme for quasi-personhood.[22]

General agency law holds an employer liable for employees acting within the scope of their employment.[23] The test would be whether the autonomous decision maker acted, at least in part, to serve the firm that utilizes the machine learning algorithm. It is hard to imagine a situation where the artificial intelligence was not at least in part serving the employer, since the machine learning algorithm constantly works to achieve a programmed objective – and, after all, the employer dictates what that objective is.

This theory treats the machine learning algorithm as a stand-alone entity before imputing liability, but it places the liability on the firm, which can most likely bear the loss. At the same time, liability is not automatically imputed to the firm – unlike under strict liability – because liability hinges on the actions of the algorithm. If a neural network is hacked and does harm, it arguably is no longer serving the employer.

  2. Strict Liability

There are compelling arguments for adopting a strict liability framework, especially since developers and firms are in a better position to cover losses because they profit from the technology’s use. Employers benefit from artificial intelligence by saving on labor costs and the applicable taxes on labor. Because of these cost savings, employers are better able to shoulder the costs of injuries caused by the implementation of artificial intelligence.[24]

The European Parliament acknowledges that as robotics and artificial intelligence evolve, strict liability may no longer be appropriate.[25] In the United States, current strict liability regimes generally bar claims for purely economic damage.[26] In claims against robo-advisers, plaintiffs would have to convince a court to recognize a loss in a portfolio’s value as property damage. In other contexts, such as divorce, stock portfolios are often treated as property,[27] but it is unclear how courts would react to this argument, especially since recognizing investment portfolios as property for tort suits would appear to open state courts to securities litigation on a much broader scale. The situation is even bleaker for plaintiffs in a scenario where the adviser causes something like a late fee (as mentioned above).

Regardless, strict liability has the capacity to cripple innovation. There are alternate liability schemes that can compensate injured consumers, encourage firms’ oversight of their algorithms, and avoid disincentivizing firms with massive liability.

  3. Mandatory Insurance and Compensation Funds

Other mechanisms to ensure payment for damages – such as mandating insurance for employers and owners of artificial intelligence, or requiring payments into compensation funds – could keep strict liability rules from crippling innovation by providing some kind of limited liability for the developer, much as the European Parliament has suggested. These could operate similarly to workers’ compensation funds,[28] where employers using artificial intelligence would pay a percentage of their cost savings from utilizing the programs to float the funds, in return for shielding themselves from general tort liability.

This approach seems the most feasible. The use of artificial intelligence provides certain cost savings, and while those cost savings are generally passed on to the consumer, a portion should be put toward either an insurance premium or a compensation fund. In return, the firm would receive limited liability in cases where the artificial intelligence has developed and engaged in an action that (a) the firm could not reasonably foresee, and (b) was not the result of any fault in design. The injured could recover all, or a percentage, of his or her actual and provable damages; in return, the firm would not be responsible for any incidental or consequential damages. Further, total liability could be capped at a certain amount per claim (the mechanics are sketched below). This properly puts the burden of overseeing the artificial intelligence on the firm, but it also encourages experimenting with complex neural nets by limiting total liability when the artificial intelligence acts in a truly autonomous and unforeseeable manner.
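
To see how the numbers might work, here is a minimal sketch of the fund mechanics. The contribution rate, recovery percentage, and per-claim cap are hypothetical placeholders, not proposed figures:

```python
# Illustrative only: mechanics of the proposed compensation fund.
# The 5% contribution rate and $50,000 cap are hypothetical placeholders.

def fund_contribution(annual_cost_savings: float, rate: float = 0.05) -> float:
    """The firm pays a fixed share of its AI-driven cost savings into the fund."""
    return annual_cost_savings * rate

def claim_payout(provable_damages: float, recovery_pct: float = 1.0,
                 per_claim_cap: float = 50_000.0) -> float:
    """The injured party recovers actual, provable damages (no incidental or
    consequential damages), subject to a per-claim cap."""
    return min(provable_damages * recovery_pct, per_claim_cap)

print(fund_contribution(2_000_000))  # 100000.0 paid into the fund this year
print(claim_payout(12_500))          # 12500.0  - fully compensated
print(claim_payout(400_000))         # 50000.0  - capped per claim
```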

Notice that this compensatory scheme should be layered on top of the alternate schemes discussed above (but not strict liability). The firm is liable for any breaches of fiduciary duty under the federal fiduciary standard (as well as any applicable state law fiduciary standards). For actions outside the scope of the fiduciary duty, the firm is responsible for actions attributable to it under existing theories of agency, made possible by legal recognition of autonomous algorithms and machines. Finally, when an algorithm acts outside the scope of agency law, the compensation fund grants the injured party some relief. The firm is incentivized to restructure the algorithm but isn’t driven to bankruptcy, and innovation can continue.

Final Thoughts

As machine learning algorithms become more advanced, consumers should expect to see them employed in ever more innovative ways. Thus, while current robo-advisory firms can design their programs to meet the fiduciary standard – which provides an adequate liability scheme – the more sophisticated robo-advisers of the future will likely operate both within and around the fiduciary framework, necessitating the adoption of additional liability schemes.

In preparation, the United States should follow Europe’s lead and begin designing a comprehensive legal regime for autonomous machines and algorithms. While that regime is being developed, alternate liability mechanisms, like a compensation fund, could ensure that victims of autonomous machines are compensated. These mechanisms could also protect manufacturers and developers by providing limited liability in return for payments into the fund, avoiding the threat of uncapped liability that could cripple innovation.


[1] Machine learning refers to algorithms that adjust their decision-making based on previous iterations of processing data through a model. An artificial neural network, for example, adjusts depending on whether it correctly identified a pattern.

[2] Investment Advisers Act of 1940, 15 U.S.C. §§ 80b-1 to 80b-21 (2012).

[3] Investment Trusts and Investment Companies, Report of the SEC, Pursuant to § 30 of the Public Utility Holding Company Act of 1935, on Investment Counsel, Investment Management, Investment Supervisory, and Investment Advisory Services, H.R. Doc. No. 477, 76th Cong., 2d Sess. 1.

[4] Belmont v. MB Inv. Partners, Inc., 708 F.3d 470, 503 (3d Cir. 2013) (“[T]he federal fiduciary standard thus focuses on the avoidance or disclosure of conflicts of interest between the investment adviser and the advisory client.”); see also https://www.sec.gov/divisions/investment/advoverview.htm (stating an adviser “[has] a fundamental obligation to act in the best interests of [her] clients and to provide investment advice in [her] clients’ best interests”).

[5] 17 C.F.R. § 275.206(3)-2(c) (2017).

[6] In the Matter of George E. Brooks & Associates, Inc., Investment Advisers Act Release No. 1746, 1998 WL 479756, at *4 (Aug. 17, 1998).

[7] In the Matter of Alfred C. Rizzo, Investment Advisers Act Release No. 897, 1984 WL 470013, at *3 (Jan. 11, 1984).

[8] SEC Division of Investment Management, Guidance Update: Robo-Advisers, No. 2017-02, at 5–6 (Feb. 2017). The disclosures should be “in plain English.” Id. at 3 n.14. Cognizant that robo-advisers will primarily communicate with clientele online or through email, the SEC suggests taking advantage of this platform to make disclosures more apparent. See id. at 5–6.

[9] Id. at 5–6.

[10] See SEC, Study on Investment Advisers and Broker-Dealers 28 (January 2011), https://www.sec.gov/news/studies/2011/913studyfinal.pdf [hereinafter SEC Study].

[11] Id. at 29.

[12] Id.

[13] Firms like Wealthfront and Betterment are registered investment advisers. An investment adviser must be a “person,” defined as either a natural person or a company. 15 U.S.C. §§ 80b-2(a)(11), (16) (2012). By definition, the algorithm itself could not be the registered “investment adviser.”

[14] An artificial neural network is a kind of statistical model that can identify non-linear trends in data.

[15] Bringing Big Data to the Enterprise, IBM, https://www-01.ibm.com/software/data/bigdata/what-is-big-data.html (“90% of the data in the world today has been created in the last two years alone.”).

[16] See SEC Study, supra note 10, at 24–25. FINRA has stated that broker-dealers utilizing algorithms “cannot rely on the tool as a substitute for the requisite knowledge about the securities or customer necessary to make a suitable recommendation.” FINRA, Report on Digital Investment Advice 5 (2016).

[17] See David Millon, Theories of the Corporation, 1990 Duke L.J. 201, 206 (1990); see also Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231, 1238–1240 (1992) (providing historical examples where different societies conferred legal rights on inanimate things); Nina Totenberg, When Did Companies Become People? Excavating The Legal Evolution, NPR (July 28, 2014, 4:57 AM), http://www.npr.org/2014/07/28/335288388/when-did-companies-become-people-excavating-the-legal-evolution (providing a brief summary of the legal evolution of corporations possessing rights previously reserved for natural persons). In the case of corporations, the Dictionary Act was amended to clarify that a person “include[s] corporations.” Dictionary Act, 1 U.S.C. § 1 (2012).

[18] This distinction becomes even more important as the “internet of things” continues to develop. With increased communication between machines come more opportunities for artificial intelligence to act independently.

[19] Some autonomous machines, like self-driving cars, may lend themselves better to assigning fault based on failures of human design – a kind of product liability or enterprise liability analysis. For an application of product liability theory to autonomous vehicles, see generally David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117 (2014). Another example where semi-autonomous tools were considered under a product liability lens is the Da Vinci surgical robot. O’Brien v. Intuitive Surgical, Inc., 2011 WL 3040479, at *1–3 (N.D. Ill. July 25, 2011).

[20] Current robo-advisers likely lack the ability to fundamentally change their coded investment strategies. However, a robo-adviser continuously tweaks its allocation and selection criteria, and how it would respond to market shocks is unknown – likely not to be fully understood until a shock happens. This hypothetical poses an interesting issue for robo-advisers. As artificial neural networks become more sophisticated, the ability to explain why a network reached a certain decision decreases, depending on its architecture. Thus, shifts in selection criteria that cannot be explained to the client likely violate the firm’s fiduciary duty, since the firm could not be sure that the selection was in the “best interest” of the investor. As a result, robo-advisers should ensure that changing market conditions do not leave the firm unable to explain the algorithm’s actions.

[21] Civil Law Rules on Robotics, at 59(f) (Feb. 16, 2017), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2017-0051+0+DOC+PDF+V0//EN [hereinafter European Parliament Report].

[22] This may be as simple as amending the Dictionary Act to include categories of autonomous machine learning algorithms in its definition of “person.” U.S. courts would then decide whether the specific law at issue should be read in conjunction with this definition. See Burwell v. Hobby Lobby Stores, Inc., 134 S. Ct. 2751, 2768 (2014) (“[U]nless there is something about the [act’s] context that ‘indicates otherwise,’ the Dictionary Act provides a quick, clear and affirmative answer . . . .”).

[23] See Restatement (Second) of Agency § 228 (1958).

[24] Of course, firms built around the use of artificial intelligence do not necessarily operate at higher profit margins than their counterparts, since they often offer lower-cost alternatives than their competitors. However, for the sake of argument, we will assume an established company implementing artificial intelligence in a cost-cutting manner.

[25] European Parliament Report, supra note 21, at AI.

[26] This is known as the “economic loss doctrine.” See, e.g., Grund v. Delaware Charter Guarantee & Trust Co., 788 F. Supp. 2d 226, 246 (S.D.N.Y. 2011). Not all states subscribe to the economic loss doctrine, but those that do pose a potential hurdle for plaintiffs.

[27] See, e.g., Kapler v. Kapler, 755 A.2d 502, 505 (Me. 2000) (describing how the court “distributed the couple’s marital property” including a stock portfolio).

[28] Since workers’ compensation is created under state law, different states may treat the programs differently. For an overview of the fifty states’ workers’ compensation regimes, see Workers’ Compensation Law Compendium, ALFA International, http://www.alfainternational.com/workers-compensation-law-compendium.
