Measuring Regulatory Complexity

February 26, 2020

Courtesy of Jean-Edouard Colliard and Co-Pierre Georg

Since the global financial crisis of 2007–08, regulators around the world have been busy overhauling financial regulation. The culmination of these efforts, the Basel III accords on capital regulation and their national implementations, has the financial services industry up in arms. The industry argues that regulation has become more complex, but not better.

Industry participants are not the only ones concerned about the complexity of financial regulation; regulators are as well. The Bank of England’s Chief Economist Andy Haldane, for instance, voices fears that bank capital regulation has become so complex that it could be counterproductive and lead to regulatory arbitrage. The Basel Committee on Banking Supervision itself is aware of the issue and considers simplicity a desirable objective, one that can be traded off against the precision of regulation. In the United States, similar concerns have led to a proposal to exempt small banks from some rules provided that they appear sufficiently capitalized (as Calomiris discusses here).

However, despite a heated debate on the perceived increase in the complexity of financial regulation, we still have no measure of regulatory complexity other than the mere length of regulatory documents. For instance, in their seminal “The Dog and the Frisbee” article, Haldane and Madouros use the number of pages of the different Basel Accords (from 30 pages for Basel I in 1988 to more than 600 pages for Basel III in 2014) as a measure of regulatory complexity. While informative, such a measure is quite crude and difficult to interpret. For instance, should one control for the fact that Basel III deals with a significantly higher number of issues than Basel I (see here for a similar point)? Is a longer but more self-contained regulation more or less complex? We lack a framework to guide us through such questions: what does complexity mean in this context, and how can it be measured?

To fill this gap, we propose to apply simple measures from the computer science literature by treating regulations like algorithms—fixed sets of rules that determine how an input (e.g., a bank balance sheet) leads to an output (a regulatory decision). In a new research paper, we apply our measures to actual regulatory texts, including Basel I and the Dodd-Frank Act. Our measures capture dimensions of complexity beyond the mere length of a regulation.

Regulation as an Algorithm

We begin with simple measures proposed in the computer science literature and apply them in a variety of contexts. We start by “translating” an actual regulation—the Basel I capital requirements—into a functioning algorithm and computing measures of the complexity of this algorithm. We also compute these measures based on the regulatory text itself, both for the Basel I capital requirements and for the Dodd-Frank Act.

Our framework allows us to formally define measures of regulatory complexity in a way that captures different dimensions of complexity. In particular, we make a distinction between: (i) “problem complexity”, a regulation that is complex because it aims at imposing many different rules on the regulated entities; (ii) “psychological complexity”, a regulation that is complex because it is difficult for a human reader to understand; and (iii) “computational complexity”, a regulation that is complex because it is long and costly to implement. Our measures rely on the analysis of the text describing a regulation, and so our analysis focuses on problem complexity and psychological complexity.

Among the many measures of algorithmic complexity that have been studied in the computer science literature, we focus on the measures pioneered by Maurice Halstead in the 1970s. These measures rely on a count of the number of “operators” (e.g., +, -, logical connectors) and “operands” (e.g., variables, parameters) in an algorithm, and they aim at capturing the number of operations and the number of operands used in those operations. In the context of regulation, these measures can help capture the number of different rules (“operations”) in a regulation, whether these rules are repetitive or different, whether they apply to different economic entities or to the same ones, and so on.

Measuring the complexity of an algorithm—and consequently the complexity of a regulatory text—boils down to counting the number of operators and operands. If we wanted to use the length (or volume) of an algorithm as a measure of complexity, we could simply use the total number of operators plus the total number of operands. This is a simple measure of psychological complexity, as it is more difficult for humans to understand longer pieces of text. But it is far from clear that volume is a good measure of problem complexity, i.e., the complexity of the algorithm without reference to how exactly it is implemented. The shortest possible algorithm implementing any given problem would at least contain the inputs, the outputs, and the most high-level function call that computes the outputs from the inputs. So the potential volume of an algorithm equals two plus the number of unique inputs and outputs.
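To make the counting concrete, here is a toy sketch of our own (not code from the paper): the tokens of a tiny rule are split into operators and operands, and their total count gives the simple length-based volume described above.

OPERATORS = {"if", "and", "or", "else", "==", ">", "<=", "!="}

def operator_operand_counts(tokens):
    # Split a token stream into operator and operand occurrences.
    operators = [t for t in tokens if t in OPERATORS]
    operands = [t for t in tokens if t not in OPERATORS]
    return operators, operands

# Tokenised toy rule: if ISSUER_COUNTRY == oecd and GUARANTOR == bank: risk_weight = 0.2
tokens = ["if", "ISSUER_COUNTRY", "==", "oecd", "and",
          "GUARANTOR", "==", "bank", "risk_weight", "0.2"]

operators, operands = operator_operand_counts(tokens)
volume = len(operators) + len(operands)       # total operators + total operands
print(len(operators), len(operands), volume)  # -> 4 6 10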

The potential volume is independent of how the function used to compute the output is implemented, and it is therefore a measure of problem complexity. With this nomenclature we can now determine how close a given algorithm is to the shortest possible one. We call the ratio of potential volume to actual volume the level of the algorithm. If the level is high, the regulation has a very specific vocabulary, a technical jargon opaque to outsiders. Conversely, a low level means that the regulation starts from elementary concepts and operations. In particular, a low level means that the number of unique operators is greater than 2, so that the representation of the regulation defines auxiliary functions (operators) in terms of more elementary ones. Under this interpretation, there is a very intuitive trade-off between volume and level. One can make a regulation shorter by using a more specialized vocabulary, but this increases the level and makes the regulation more opaque. Conversely, one can make a regulation more accessible or self-contained by defining the specialized words in terms of more elementary ones, but the cost is a greater length.
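Keeping to the simplified definitions used in this post (a sketch of the idea; the paper's formal definitions may differ), the trade-off can be written down in a few lines:

# Simplified definitions used in this post (the paper's formulas may differ):
# actual volume    = total operators + total operands
# potential volume = 2 + number of unique inputs and outputs
# level            = potential volume / actual volume
def level(total_operators, total_operands, unique_inputs_and_outputs):
    actual_volume = total_operators + total_operands
    potential_volume = 2 + unique_inputs_and_outputs
    return potential_volume / actual_volume

# A terse, jargon-heavy rule (small actual volume) has a level close to 1;
# a verbose, self-contained rewrite of the same rule has a much lower level.
print(level(total_operators=4, total_operands=6, unique_inputs_and_outputs=5))    # 0.7
print(level(total_operators=40, total_operands=60, unique_inputs_and_outputs=5))  # 0.07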

Our choice of these “Halstead measures” is motivated by two factors. First, these measures are simple and transparent, and thus well-designed for a “proof of concept” study showing that applying measures of algorithmic complexity to financial regulation is potentially fruitful. Second, due to their simplicity, the computation of these measures can to some extent be automated and generalized to many regulatory texts, so that our approach can easily be replicated and used by other researchers.

The Complexity of Basel I

We show how to measure the complexity of capital regulation in practice by considering the design of risk weights in the Basel I Accords. This is a nice testing ground because this part of the regulation is very close to being an actual algorithm. We compare two different methods: (i) We write computer code corresponding to the instructions of Basel I and measure the algorithmic complexity of this code, that is, we use the measures of algorithmic complexity literally; and (ii) We analyze the text of the regulation and classify words according to whether they correspond to what in an algorithm would be an operand or an operator, and compute the same measures, this time trying to adapt them from the realm of computer science to an actual text.

So, for example, the fact that the regulatory text “Claims on banks incorporated in the OECD and loans guaranteed by OECD incorporated banks” falls in the 20% risk-weight category translates into computer code along the following lines (a sketch with illustrative identifiers):
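# Hedged sketch, not the authors' original snippet: ISSUER, loans and
# GUARANTOR_COUNTRY are illustrative names added so the fragment is
# self-contained; the other operands and operators follow the text below.
claims, loans, bank, oecd = "claims", "loans", "bank", "oecd"
ASSET_CLASS, ISSUER, ISSUER_COUNTRY = claims, bank, oecd   # an example exposure
GUARANTOR, GUARANTOR_COUNTRY = bank, oecd

if (ASSET_CLASS == claims and ISSUER == bank and ISSUER_COUNTRY == oecd) \
        or (ASSET_CLASS == loans and GUARANTOR == bank and GUARANTOR_COUNTRY == oecd):
    risk_weight = 0.2
# further elif/else branches would assign the remaining risk-weight categories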

We can easily identify the operands and operators in such a piece of code and compute our measures of complexity. The operands in the code corresponding to Basel I are the different asset classes (e.g., ASSET_CLASS, claims), attributes (e.g., ISSUER_COUNTRY, GUARANTOR), values of those attributes (e.g., oecd, bank), and risk weights (e.g., risk_weight, 0.2). The operators are if, and, or, else, ==, >, <=, and !=.

Given our algorithmic representation of Basel I, we find that it contains 172 operators and 184 operands, of which 8 operators and 45 operands are unique. To give a sense of what these numbers mean, we can go further and compute how much the regulation of a given asset class contributes to the total level of the regulation (see Figure 1).
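Plugging these counts into the simplified length measure discussed above gives a quick back-of-the-envelope illustration (ours, not a result reported in the paper):

total_operators, total_operands = 172, 184
unique_operators, unique_operands = 8, 45

volume = total_operators + total_operands               # 356 symbols in the translated code
unique_symbols = unique_operators + unique_operands     # 53 distinct symbols in total
print(volume, unique_symbols)                           # -> 356 53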

Figure 1: Marginal complexity of different parts of Basel I regulation.

The measures we obtain using the two approaches are highly correlated. We conclude that our measures can be proxied by studying the text directly, without actually “translating” a regulatory text into computer code, which is of course a time-consuming task.

The Complexity of the Dodd-Frank Act

Given the encouraging results obtained with the Basel I Accords, we then turn to the question of how the Halstead measures can be computed at a much larger scale, applying our text analysis approach to the different titles of the 2010 Dodd-Frank Act. Because the Dodd-Frank Act covers many different aspects of financial regulation, this analysis required us to create a large dictionary of operands and operators in financial regulation, as well as specialized software that helped us manually classify a large body of text.

Our paper provides informative descriptive results on which titles are more complex according to different dimensions (see Figure 2). In particular, we note that some titles have approximately the same length and yet differ very significantly along other measures, which shows that our measures capture something different from the mere length of a text.

 

Figure 2: Volume vs. Level of different sections of the Dodd-Frank Act.

The dictionary and the code to generate the classification software can be found in our GitHub repository. Applying the same approach as before to the 16 titles of the Dodd-Frank Act plus its introduction, we end up with a dictionary containing 667 unique operators (374 logical connectors and 293 regulatory operators), 16,474 unique operands (12,910 economic operands, 560 attributes, and 3,004 legal references), as well as 711 function words and 291 other, unclassified words. In other words, we classify 98.4% of the 18,143 unique words used in the Dodd-Frank Act.
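To show how such a dictionary can be used in practice, here is a minimal sketch of a dictionary-based classification (illustrative dictionaries and a toy sentence, not the authors' pipeline; their real dictionary and code are in the repository, and their statistics are over unique words rather than token occurrences):

import re
from collections import Counter

# Illustrative dictionaries, far smaller than the real ones in the repository.
OPERATORS = {"if", "and", "or", "shall", "must", "except", "unless"}
OPERANDS = {"bank", "capital", "ratio", "asset", "exposure", "risk"}
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "at", "least"}

def classify(text):
    # Look every word up in the dictionaries; whatever is left stays unclassified.
    counts = Counter()
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in OPERATORS:
            counts["operator"] += 1
        elif word in OPERANDS:
            counts["operand"] += 1
        elif word in FUNCTION_WORDS:
            counts["function word"] += 1
        else:
            counts["unclassified"] += 1
    return counts

print(classify("A bank shall maintain a capital ratio of at least 8 percent."))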

The Remaining Question: Is This How Humans Perceive Complexity?

The major open question is whether our measures reflect how humans actually perceive complexity. At the end of the day, this is what matters: do our measures capture what those who have to deal with regulations perceive as “complex”? To address this question, our paper describes an experimental protocol.

Experimental subjects are given a regulation consisting of (randomly generated) Basel-I-type rules and the balance sheet of a bank. They have to compute the bank’s capital ratio and say whether the bank satisfies the regulatory threshold. The power of a measure of regulatory complexity is given by its ability to forecast whether a subject returns a wrong value of the capital ratio, as well as the time taken to answer. Moreover, we can test whether the relation between the measure of regulatory complexity and the outcome depends on the subject’s background and training, etc. Importantly, our protocol can be used to validate any measure of regulatory complexity based on the text of a regulation, not only ours, and thus opens the path to comparing the performance of different measures. Ultimately, the objective would be to establish a standard method to measure the power of a measure of complexity. This has been done in computer science, where a literature tests whether different measures of algorithmic complexity correlate with the mistakes programmers make or the time they need to write a program.
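To make the task concrete, here is a minimal sketch of what a single trial could look like (illustrative asset classes, risk weights, balance sheet, and an assumed 8% threshold; the actual experimental materials are described in the paper):

import random

ASSET_CLASSES = ["sovereign", "interbank", "mortgage", "corporate"]

def random_rules(rng):
    # Randomly assign each asset class one of the Basel-I-style risk weights.
    return {a: rng.choice([0.0, 0.2, 0.5, 1.0]) for a in ASSET_CLASSES}

def capital_ratio(capital, exposures, rules):
    # Capital divided by risk-weighted assets.
    rwa = sum(rules[a] * amount for a, amount in exposures.items())
    return capital / rwa if rwa > 0 else float("inf")

rng = random.Random(42)
rules = random_rules(rng)
exposures = {"sovereign": 100, "interbank": 50, "mortgage": 80, "corporate": 70}
ratio = capital_ratio(capital=20, exposures=exposures, rules=rules)
print(rules)
print(round(ratio, 3), ratio >= 0.08)  # the "correct" answer a subject should reproduce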

Conclusion

Our paper is only a first step in applying this new approach to the study of regulatory complexity, and is meant as a “proof of concept”. We show how some of the simplest measures of regulatory complexity can be applied to financial regulation in different contexts: (i) an algorithmic “translation” of the Basel I Accords; (ii) the original text of the Basel I Accords; (iii) the original text of the Dodd-Frank Act; and (iv) experiments using artificial “Basel-I like” regulatory instructions.

While the results we present are preliminary, we believe they are encouraging and highlight several promising avenues for future research. First, the dictionary that we created will allow other interested researchers to compute various complexity measures for other regulatory texts and compare them to those we produced for Basel I and the Dodd-Frank Act. Moreover, the dictionary can be enriched in a collaborative way. Such a process would make the measures more robust over time and allow us to compare the complexity of different regulatory topics, different updates of the same regulation, different national implementations, etc. This can also serve as a useful benchmarking tool for policymakers drafting new regulations. Second, the conceptual framework and the experiments we propose to separate three dimensions of complexity (problem, psychological, computational) can be applied to other measures that have been proposed in the literature, so as to better understand what each one is capturing. Finally, our measures could be used in empirical studies aimed at testing the impact of regulatory complexity, and in particular testing some of the mechanisms that have been proposed in the theoretical literature.

Views expressed are not necessarily the views of Deutsche Bundesbank or the Eurosystem.
