Regulating Choice Architecture with Behavioral Audits

By Stuart Mills | January 16, 2024

Regulation of behavioral science is coming. The Biden Administration has committed to tackling “junk fees” which emerge from deceptive online behavioral design practices, as has the Federal Trade Commission. In the UK, the Competition and Markets Authority (CMA) is increasingly concerned with harms arising from ‘behavioral design’ and ‘choice architecture’. So too is the European Union (EU), whose recent Digital Services Act takes aim at deceptive choice architecture and dark patterns. With mounting scrutiny, applied behavioral science is entering a new era.

Behavioral insights can help citizens and consumers. For instance, they have been used to design better product labels. Other behavioral ‘nudges’ have been used to tackle the manipulative effects of online dis- and misinformation, supporting online communities without threatening online freedoms. There is clear evidence, globally, of behavioral insights supporting consumers and citizens, not harming them.

Worries about behavioral insights have always existed. However, current concerns are driven by the growing use of online choice environments. Entertainment, finance, government services, and much more are increasingly accessed via smartphone applications and websites.

Manipulative design practices in online choice environments have been a point of discussion and debate within the user interface and user experience (UI and UX, respectively) domains for several years. Coined in 2011, dark patterns describe UI design features that lead users to outcomes which they do not want, but which benefit whatever service has introduced the design. Several studies now document the features and pervasiveness of dark patterns and manipulative behavioral techniques.

Some dark patterns are not based on behavioral science. For instance, forcing a person to accept a service’s terms and conditions is not about influencing choices; there is no choice. But other dark patterns, such as time scarcity, default options, and social priming, certainly share much overlap with the behavioral science literature. For this reason, regulatory concerns about dark patterns often come to include discussions of behavioral insights.

Regulatory responses are already emerging. The European Commission’s Digital Services Act (DSA) bans practices such as making services harder for individuals to leave than they are to join. The UK’s Financial Conduct Authority (FCA) has recently introduced a new ‘Consumer Duty’ standard requiring financial services firms to prioritize safeguarding consumer interests. This extends to online choice architecture. Despite these efforts, however, regulatory challenges remain.

The first is the subjective nature of choice architecture. What one person considers a sensible, useful arrangement of choice architecture, another may find wholly manipulative and unacceptable. People are different, have different preferences, and may experience choice architecture in different ways. As a result, it will be extremely difficult to determine ‘objectively’ unacceptable choice architecture.

The second is preserving the positive use of behavioral insights. The motivation for much of the modern, applied behavioral science movement, particularly in the US and the UK, is that choice architecture is unavoidable. For a decision to be made, a choice must be presented. Therefore, why not design the choice to promote positive, rather than harmful, social outcomes? The risk of regulating away the positive use of behavioral insights should be taken seriously.

I argue that regulators, working in conjunction with the behavioral science community, should develop behavioral audits as a tool for supporting effective behavioral science regulation. In doing so, I contend these challenges can be met.

Behavioral audits should be used to ensure organizations comply with the spirit of regulation, accepting that objectively manipulative designs may be difficult to determine. But they should also offer organizations insights for improving their choice architecture, to promote positive outcomes for individuals.

Challenges

Choice architecture captures the context in which decisions are taken, including the ordering of options, use of color, and many other elements of a decision. Changing choice architecture can lead to statistically significant shifts in a population’s behavior. However, individual responses to choice architecture will vary—some might be nudged only marginally, some a great deal, and some not at all (or even negatively).

Choice architecture which benefits one individual, or indeed many, is still likely to be regarded as detrimental to some within the population. For instance, a 2018 study on automatic credit repayments undertaken by the FCA found that changing the choice architecture surrounding repayment options encouraged significantly more people to choose to repay more than a minimum repayment default. In the long run, this behavior would save consumers money, and should be regarded as positive. However, the removal of the minimum payment default also increased the number of people choosing to pay nothing toward their repayments. Further studies have found similar unintended consequences of choice architecture. Alternatively, the reverse might also be true—choice architecture intended to exploit consumers may actually benefit some consumers, given the specific characteristics of said consumers.

It is very difficult to determine an example of choice architecture that is objectively deceptive, insofar as it always causes economic, material harm to consumers. Behavioral spillovers and changing preferences further confound this challenge. Choice architecture might induce a choice which only later causes harm. Not only does this complicate the assigning of blame to choice architecture; it extends the necessary foresight of a regulator (and well-meaning choice architect).

Regulators recognize this challenge of subjective experiences. The CMA, for instance, acknowledges that, “it is difficult to see the combined effect of smaller non-price ‘nudges’ [on consumer welfare],” and notes the absence of research which readily distinguishes ‘harmful’ choice architecture from ‘non-harmful.’ Likewise, the FTC argues that, in addition to the challenge of noticing and reporting deceptive practices, deceptive designs will be experienced differently depending on the medium through which one accesses a service (e.g., smartphone versus desktop) and the socioeconomic circumstances of the individual being manipulated. For instance, someone who cannot afford a junk fee is likely to experience the deceptive design which leads to the fee differently than someone who can readily afford it. Emerging research into the role of inequality and administrative burdens offers further support to this perspective.

The second challenge is safeguarding the positive uses of choice architecture and behavioral science, while reducing harm caused by deceptive applications. Choice architecture can be used for well-meaning purposes. Behavioral science can be a countermeasure against harmful dis- and misinformation online, and it can protect consumers within the gambling industry. Smart disclosures can help consumers understand critical information more easily, and choice architecture can support citizens in accessing government provisions, such as in education. Regulators acknowledge these positive uses. Nevertheless, a two-fold risk emerges from regulating choice architecture.

Firstly, that of over-regulation, or at least, naïve standard-setting. For instance, the European Commission associates “exploitative design choices” with “presenting choices in a non-neutral manner.” A longstanding argument within the behavioral policy community has been the impossibility of neutral choice environments: for a choice to be presented, some presentation of said choice must be made, and this presentation may influence the decision-maker. Efforts to design ‘neutral’ choice architectures are likely only to encourage a retreat from any purposeful use of behavioral insights, for good or for ill.

Secondly, that regulation stokes fears of accidentally causing harm. This is a substantial concern for the CMA, which acknowledges the often-unforeseeable effects of choice architecture, particularly when designed in conjunction with complex, algorithmic systems. Yet, despite this acknowledgement, neither the CMA nor others offer much reassurance to business leaders and policymakers who may fear using choice architecture when penalties hang over them. A regulatory landscape which discourages the use of welfare-enhancing choice architecture would be deleterious to the goals of the regulators.

Behavioral Audits

Behavioral audits can resolve these challenges. Behavioral audits should serve to ensure compliance with regulatory principles, allowing experts to evaluate choice architecture against the spirit of regulation. But they should also offer organizations insights into how to improve outcomes for themselves and their clients. This function is particularly useful. As acknowledged by regulators, many poor interactions between individuals and services arise because of careless, or accidental, designs. Consumer welfare is ‘left on the table’ if behavioral audits are used only for compliance, and not to improve individual experiences and organizational processes.

A behavioral audit program must draw from a small but growing body of sludge audits—audits of services for unnecessarily burdensome choice architecture. Approaches to auditing in this literature vary. Some ‘auditor-led’ approaches allow auditors discretion in their analyses, which can lead to more ‘realistic’ experiences, but also substantial variation between auditors. Other ‘checklist-led’ approaches use pre-made checklists to reduce variability and standardize outcomes, but may fail to capture nuanced experiences.
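To make the checklist-led approach concrete, below is a minimal, hypothetical sketch in Python of how such an audit might be scored. The checklist items, the pass/fail responses, and the scoring rule are illustrative assumptions for exposition only; they are not drawn from any regulator's actual criteria or from the sludge audit literature.

```python
# Hypothetical sketch of a checklist-led audit: each item is a yes/no check
# an auditor applies to a service, yielding a standardized (if coarse) score.
CHECKLIST = [
    "Cancellation takes no more steps than sign-up",
    "All fees are shown before the final payment screen",
    "Pre-ticked boxes are not used for optional add-ons",
    "Countdown timers reflect a genuine deadline",
]

def checklist_score(responses):
    """responses: dict mapping each checklist item to True (passes) or False.
    Returns the share of items passed. Variation between auditors is reduced,
    but nuance in individual user experiences is lost."""
    return sum(responses[item] for item in CHECKLIST) / len(CHECKLIST)

# Example audit of a single (imaginary) service
print(checklist_score({
    "Cancellation takes no more steps than sign-up": False,
    "All fees are shown before the final payment screen": True,
    "Pre-ticked boxes are not used for optional add-ons": True,
    "Countdown timers reflect a genuine deadline": False,
}))  # 0.5
```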

The skill and experience level of the auditor is a central question highlighted by regulators and the literature. The DSA, for instance, assumes an “average” user within its discussion. The CMA suggests one approach to auditing may be to “enlist consumers to act as digital ‘mystery shoppers,’” as these experiences are likely to be representative of typical user experiences. Sludge audits undertaken by the New South Wales Behavioral Insights Unit draw on customer experience surveys. Such proposals may be promising but re-emphasize the methodological question of auditor-led versus checklist-led approaches.

One alternative may be to simulate user experiences using generative AI. Websites—even extremely complicated ones—can be understood as decision trees with various branching paths. Such trees can be ‘solved’ using pathfinding algorithms. A pathfinding solution represents an ‘optimal baseline’ against which all other simulated experiences can be compared. Agents simulated through generative AI have been found to demonstrate behavior closer to real human behavior than alternative simulation approaches. Agents could be given an identity, interests, goals, motivations, and behavioral traits, and asked to navigate a website as a decision tree. Comparisons with the optimal baseline would then allow auditors to estimate the relative difficulty of navigating such choice environments for different simulated groups. These tools have yet to be developed, and such research yet to be undertaken, but behavioral auditing via simulation may be one means of resolving the ‘meta’ problem of auditors themselves influencing the results of the audit.
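As the article notes, such tools do not yet exist; the following Python sketch only illustrates the ‘optimal baseline’ idea under simple assumptions. A website is modeled as a toy graph of pages, a breadth-first search finds the fewest clicks to a goal, and a ratio compares a simulated journey to that baseline. The site structure, node names, and the simulated click count are invented for illustration; any LLM-driven agent simulation would sit on top of a structure like this and is not shown.

```python
from collections import deque

# Hypothetical website modeled as a graph: nodes are pages or dialog states,
# edges are the clicks available from each page.
SITE_GRAPH = {
    "home": ["products", "account", "promo_popup"],
    "promo_popup": ["home"],            # a detour the user must dismiss
    "products": ["item", "account"],
    "item": ["checkout", "upsell"],
    "upsell": ["checkout", "item"],
    "account": ["cancel_subscription", "home"],
    "checkout": [],
    "cancel_subscription": [],
}

def optimal_path_length(graph, start, goal):
    """Breadth-first search: the fewest clicks needed to reach the goal.
    This serves as the 'optimal baseline' against which simulated
    user journeys can be compared."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == goal:
            return depth
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None  # goal unreachable

def relative_difficulty(simulated_clicks, baseline_clicks):
    """Ratio of a simulated agent's journey length to the optimal baseline.
    Values well above 1 suggest burdensome choice architecture for that
    simulated user profile."""
    return simulated_clicks / baseline_clicks

# Example: cancelling a subscription takes 2 clicks at best...
baseline = optimal_path_length(SITE_GRAPH, "home", "cancel_subscription")
# ...but a simulated persona distracted by the promo pop-up and upsell
# screens might report taking 7 clicks (an assumed figure).
print(relative_difficulty(simulated_clicks=7, baseline_clicks=baseline))  # 3.5
```

Comparing such ratios across differently specified personas is what would let an auditor estimate how unevenly a given choice environment burdens different groups.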

Best practice for behavioral auditing remains an open question, but research and experimentation are already occurring within the literature, and in practice. Another open question is the degree of detail that behavioral audits should go into. Colleagues and I have developed a set of auditing tools for undertaking a ‘high-level’ audit, designed to help regulators identify problem areas to target with limited resources. Yet, ‘high-level’ approaches are limited in several ways. Firstly, the insights gleaned from a high-level approach may be insufficient to validate compliance, or to penalize non-compliance. High-level approaches may best serve as initial audits pending a more detailed investigation. Secondly, high-level audits are limited in the ‘behaviors’ which can be investigated. Thirdly, high-level audits may lack the granular detail needed to offer positive recommendations to vendors.

The Behavioral Insights Team investigates various user behaviors and offers a more granular dissection of elements of a user ‘journey’ beyond high-level perspectives. Others have undertaken similar approaches. More detailed auditing approaches can likely overcome the drawbacks of only taking a high-level view. Nevertheless, they come with challenges of their own. Such audits are much larger undertakings than high-level analyses of ‘basic’ processes, requiring more time and resources. Depending on the regulatory environment, these requirements may exceed regulatory resources.

To an extent, the degree of detail required from a behavioral audit may be driven by both regulatory demands and client expectations. Thus, an important question for behavioral auditing is: what should the outputs be?

Given the subjective nature of choice architecture, a behavioral audit should at least investigate the motivations of organizations implementing potentially deceptive designs. This should include scrutinizing the claims made by organizations that said designs help, rather than hinder, consumers. Such scrutiny is likely to be paramount in informing regulatory responses to repeat offenders.

Yet, behavioral audits should not merely be seen as bureaucracy undertaken to ensure compliance. They should also strive to offer valuable insights to organizations which benefit relevant stakeholders. There are various justifications for this position, not least because consumer and citizen welfare can be enhanced not merely through the introduction of protections, but through proactive efforts to support individuals. One may also speculate that audits undertaken with the dual objectives of ensuring compliance and informing positive organizational practices incentivize cooperation between auditors and regulators, on the one hand, and audited parties, on the other (e.g., providing insights on organizational culture). Such cooperation may be vital when dealing with extremely technical and complex choice environments. Finally, many ‘deceptive’ design practices are not intentionally deceptive, and might emerge accidentally, often as the result of legacy decisions. Behavioral audits focused solely on ensuring regulatory compliance may penalize vendors for ‘deceptive’ design practices which were merely oversights, and be unhelpful for vendors if substantial violations are found but no recommendations given.

Thus, the program presented here advocates for behavioral audits that also provide behavioral insights to organizations, as well as ensuring compliance. This, however, introduces a substantial risk of conflict of interest, in particular when audits are undertaken by independent behavioral auditing firms, rather than regulators. These firms are likely to already be professional behavioral consultancies, owing to the dispersion of skills within applied behavioral science at present. Such firms are thus incentivized to find ‘problems’ in an organization’s choice architecture, as this creates the opportunity for further business. Overcoming this conflict of interest must be a central part of a research program.

Stuart Mills is a Lecturer in Economics at the Leeds University Business School, and a Visiting Fellow of Behavioural Science at the London School of Economics and Political Science.

This post was adapted from their paper, “Deceptive Choice Architecture and Behavioral Audits,” available on SSRN.
