Monthly Archives: November 2018

The Logic of Profiling: Fairness vs. Efficiency

ABSTRACT There are several strategies available to police for “stopping” suspects. The most efficient is to stop only members of the group with the highest a priori probability of guilt; the least efficient is indiscriminate stopping. The best profiling strategy is one that biases stops of different groups so that the absolute number of innocents stopped is equal for all groups. This strategy is close to maximally efficient, allows some sampling of low-crime sub-groups, and seems fair by almost any criterion.

Profiling is selecting or discriminating for or against individuals based on easily measured characteristics that are not directly linked to the behavior of interest. For example, age, sex or racial appearance are used as partial proxies for criminal behavior because crime rates differ among these groups. Old people, women and whites are less likely than young people, men and blacks to be guilty of certain types of crime. Hence, preferentially ‘stopping’ young black males is likely to catch more criminals than stopping the same number of people at random. Just how many more, and at what cost in terms of ‘fairness’, is the topic of this note.

The term “profiling” is usually associated with stop-and-search procedures (see, for example, Callahan & Anderson, 2001; Glaser, 2006), but a similar process occurs in other contexts also. Almost any kind of selective treatment that is based on a proxy variable is a form of profiling. In life, health and car insurance, for example, people of different ages and sexes are usually treated differently. Often controversial, profiling nevertheless goes unquestioned in some surprising places. Take speeding by motorists, for example. Exceeding a posted speed limit is an offence and few question laws against it. But most speeding causes no direct harm to anyone. The legitimacy of a law against speeding rests on the accuracy with which speeding predicts the probability and severity of an accident: the statistically expected cost of speeding is the product of accident probability times damage caused. While it is obvious that an accident at high speed will usually cause more damage than one at lower speed, the relation between speed and accident probability is more contingent. If drivers go fast only when it is safe to do so, there may be no, or only a weak or even negative, correlation between speed and the likelihood of an accident. Hence, a car’s speed may not be a reliable proxy for accident risk, in which case penalizing – profiling – speeders would be unfair. The same is true of alcohol and driving. If drunks drive more cautiously (as some do), their proven sensory-motor deficiencies may become irrelevant to accident risk. In both these cases the fairness of profiling rests on its accuracy. If drinking and speeding really are correlated with higher accident risk, sanctions against them may be warranted.

There is also the issue of personal responsibility.  Speed is under the motorist’s control,  just like smoking – which is used in life-insurance profiling.  Fewer objections are raised to profiling that is based on proxies that are under the individual’s control and for which he can therefore be held responsible.  Race is of course not something over which the individual has any control, which is one reason racial profiling is subject to criticism.   On the other hand, age and sex are also involuntary, yet fewer objections are raised against profiling on these grounds.  The reasons for these policy differences and the problems of measuring the statistics on which they are based are larger topics for another time.

The utility and legitimacy of profiling depend on two related characteristics: accuracy and fairness.  How well do the measured characteristic or characteristics predict the variable of interest?  And how fair is it to pick on people so identified?

Fairness is not the same as accuracy.  In health insurance, for example, the whole idea of “insurance” arose partly because people cannot predict when they will get sick.  But as biological science advances and it becomes possible to predict debilitating genetic conditions with high accuracy, insurance companies may become reluctant to insure high-risk applicants, who may therefore be denied insurance.  How fair is this?  In general, the greater the ability of an insurer to predict health risk, the more questionable health profiling becomes, because the concept of insurance – spreading risk – is vitiated.  But this is not a problem for profiling to catch criminals.  Few would object to profiling that allowed airport screeners to identify potential terrorists with 99% probability.  The better law-enforcement authorities are able to profile, the fewer innocent people will be stopped and the more acceptable the practice will become.

The political and ethical problems raised by profiling and associated practices, and some of the utilitarian aspects of stop-and-search profiling, have been extensively reviewed (see for example, Dominitz, 2003; Glaser, 2006; Persico, 2002; Risse & Zeckhauser, 2004).  But no matter what the political and moral issues involved, it is essential to be clear about the quantitative implications of any profiling strategy.  With this in mind, this note is devoted to a simple quantitative exploration of the accuracy and ‘fairness’ of profiling in “stop-and-search” situations such as driver stops or airport screening.  The quantitative analysis in fact allows us to identify a possible profiling strategy that is both efficient and fair.

Fair Profiling

Age and sex profiling are essentially universal: police in most countries rarely stop women or old men; young males are favored. The reason is simple. Statistics in all countries show that a young man is much more likely to have engaged in criminal acts, particularly violent acts, than a woman or an older man. The same argument is sometimes advanced for racial profiling – stopping African-American drivers, or airline passengers of Arab appearance, more frequently than whites or Asians, for example.

I look at the very simplest case: a population with two sub-populations, A and B, that differ in the proportion of criminals they contain.  To do the math we need to define the following:

population size = N

proportion of A in population = x

proportion of B in population = 1-x

target probability A = r

target probability B = v, where r < v and 0 < r, v < 1

(r and v define the relative criminality of As and Bs: if r = .2, for example, that means that 20% of A stops find a criminal. If r = v, profiling offers no benefits because the probability that a given A has engaged in crime is the same as for a given B. If v > r, the case I will consider here, a B is more likely to be a criminal than an A, and so Bs should be favored by profilers.)

The probability that a given A or B will be stopped depends on two parameters, p and q. p is the overall probability of a stop, i.e., the fraction of the total population that will be stopped; q is the profiling parameter:

A-weight = q

B-weight = 1- q

(q is the profiling weight for A, i.e., q/(1-q) is the bias in favor of A. If q = .5 there is no profiling; if q < .5, a B is more likely to be stopped than an A.)

For a sample population of size N, the probability of sampling (stopping) an A is pz, and the probability of sampling a B is p(1-z), where z is defined below.


With these terms defined, and given values for population size N, stop probability p, and target probabilities r and v, which define the relative criminality of the A and B populations, it is possible to write down expressions that give the total number of criminals detected and the number of innocents stopped in each sub-population.

It is easiest to see what is going on if we consider a specific case: a population of, say, N = 10,000, and limit the number of stops to one in ten – 1000 people (p = 0.1). Profiling is only worthwhile if the proportion of criminals in the A and B sub-populations differs substantially. In the example I will look at, the probability that a given A is criminal is r = 0.1, and for a B, v = 0.6 (i.e., a B is six times more likely to be a criminal than an A). I also assume the Bs are in the minority: 1000 Bs and 9000 As (x = 0.9) in our 10,000-person population.

The degree of profiling is represented in this analysis by the parameter q, which can vary from 0 to 1.  When q = 0, only Bs are stopped; when q = 1, only As are stopped.  The aim of the analysis is to see what proportion of our 1000 (pN) stops are guilty vs. innocent as a function of profiling ranging from q = 0 (only Bs stopped) to q = 0.5 (no profiling, As and Bs stopped with equal probability).  The math is as follows:

I first define a term z that represents the proportion of As in a fixed-size sample of, say, 1000 ‘stops’:

z = qx/(qx + (1-q)(1-x))

and 1-z is the proportion of Bs; z allows q, the bias – profiling – parameter, to vary from 0 (only Bs stopped) to 1 (only As stopped) for a fixed sample size.
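With z = qx/(qx + (1-q)(1-x)), the weighting can be sketched in a one-line Python function (an illustrative sketch; the function name is mine, not part of the original note):

```python
def proportion_a_stopped(q: float, x: float) -> float:
    """Proportion z of As in a fixed-size sample of stops.

    q is the profiling weight for group A (q = 0.5 means no profiling);
    x is the proportion of As in the whole population.
    """
    return (q * x) / (q * x + (1 - q) * (1 - x))
```

The limiting cases match the text: q = 0 stops only Bs (z = 0), q = 1 stops only As (z = 1), and q = 0.5 stops each group in proportion to its population share (z = x).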

The number of As sampled (stopped) is pzN and the number of Bs sampled is p(1-z)N. Multiplying these quantities by criminality parameters r and v gives the number of guilty As and Bs caught: the number of guilty As caught is pzNr and the number of guilty Bs caught is p(1-z)Nv. We can then look at how these numbers change as the degree of profiling goes from q = 0 (all Bs) to q = .5 (A and B stopped in proportion to their numbers, i.e., no profiling).

This sounds complicated, but as the curves show, the outcome is pretty simple. The results, for N = 10,000, p = 0.1, r = 0.1, v = 0.6, x = 0.9, are in Figure 1, which shows the number of criminals caught (green squares) as a function of the degree of profiling, q. The number of innocents stopped, which is just 1000 minus the number of guilty since there are only 1000 stops, is also shown (red triangles). As you might expect, the most efficient strategy is to stop only Bs (q = 0). This yields the most guilty caught and the fewest stops of innocent people: 600 guilty and 400 innocent out of 1000 stops. The number of guilty detected falls off rapidly as the degree of profiling is reduced, from a maximum of 600, when only Bs are stopped, to a minimum of 150 when As and Bs are stopped with the same probability. So the cost of not profiling is substantial.
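The numbers behind Figure 1 are easy to reproduce. Here is a minimal Python sketch of the bookkeeping, using the text’s parameter values (function and variable names are mine):

```python
def stop_outcomes(q, N=10_000, p=0.1, x=0.9, r=0.1, v=0.6):
    """Return (guilty, innocent) counts among the p*N stops for profiling weight q."""
    z = (q * x) / (q * x + (1 - q) * (1 - x))  # proportion of As among the stops
    stops_a = p * z * N          # number of As stopped
    stops_b = p * (1 - z) * N    # number of Bs stopped
    guilty = r * stops_a + v * stops_b
    innocent = p * N - guilty
    return guilty, innocent

stop_outcomes(0.0)   # pure profiling: about 600 guilty, 400 innocent
stop_outcomes(0.5)   # no profiling: about 150 guilty, 850 innocent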

But the pure profiling strategy is obviously flawed in one important way.  Because no As are stopped, the profiler has no way to measure the incidence of criminality in the A population, hence no way to update his profiling strategy so as to maintain an accurate measurement of parameter r.   Another objection to pure profiling is political.  Members of group B and their representatives may well object to the fact that innocent Bs are more likely to be stopped than innocent As, even though this can be justified by statistics.  What to do, since there is considerable cost, in terms of guilty people missed, to backing off from the pure profiling strategy?

Profiling entails a higher stop probability for the higher-crime group. Innocent Bs are more likely to be stopped than innocent As. Nothing can be done about that. But something can be done to minimize the difference in the numbers of innocent As and Bs stopped. The ratio of innocent As to innocent Bs stopped is shown by the line with blue diamonds in Figure 1. As you can see, with As and Bs in a ratio of nine to one and rates of criminality in a relation of one to six, the ratio of innocent stops A/B increases rapidly as the degree of profiling is reduced. With no profiling at all, twenty times as many innocent As as innocent Bs are stopped. But this same curve shows that it is possible to equalize the number of innocent As and Bs that are stopped. When the profiling parameter q = .047, the numbers of innocent As and Bs stopped are equal (red arrow, A/B = 1). At this point enough As are in fact stopped, 277 out of 1000 total stops, to provide a valid estimate of the A-criminality parameter, r, and the drop in efficiency is not too great: 446 guilty captured versus the theoretical maximum of 600. Thus, for most values of r, v and x, it is possible to profile in a way that stops an equal number of innocent people in both groups. This is probably as fair a way of profiling as is possible.
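The ‘fair’ value of q need not be read off the graph; it can be computed directly. Setting innocent A stops equal to innocent B stops, pzN(1-r) = p(1-z)N(1-v), the p and N cancel, giving z* = (1-v)/[(1-r)+(1-v)]; inverting z = qx/(qx + (1-q)(1-x)) then yields q. A short Python sketch (the function name is mine):

```python
def fair_q(x, r, v):
    """Profiling weight q at which equal numbers of innocent As and Bs are stopped.

    Solves z(1-r) = (1-z)(1-v) for z, then inverts z = qx/(qx + (1-q)(1-x)).
    Note that the answer is independent of the stop rate p and population size N.
    """
    z = (1 - v) / ((1 - r) + (1 - v))
    return z * (1 - x) / (x * (1 - z) + z * (1 - x))

fair_q(x=0.9, r=0.1, v=0.6)   # about 0.047, the value marked by the red arrow
```

Because p and N cancel, the same weight equalizes innocent stops whether police make a hundred stops or a million.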

Doing it this way of course sets the cost of stopping innocent As lower than the cost of stopping innocent Bs. In the most efficient strategy, 400 innocent Bs are stopped and zero innocent As, but in the ‘fair’ strategy 277 of each are stopped, so the reduction of 400 - 277 = 123 innocent B stops is more than matched by an increase from zero to 277 in the number of innocent A stops. Some may feel that this is just as unfair as the pure profiling strategy. But, given the need to sample some As to get accurate risk data on both sub-populations, the ‘fair’ strategy looks like the best compromise.


When base criminality rates differ between groups, profiling – allocating a limited number of stops so that members of one group are more likely to be stopped than members of another – captures more criminals than an indiscriminate strategy.  The efficiency difference between the two strategies increases substantially as the base-rate difference in criminality increases, which can lead to a perception of unfairness by innocent members of the high-risk group.

Profiling entails unequal stop probabilities between the two groups. Nevertheless, because no one seeks to minimize the stops of guilty people, it seems more important to focus on the treatment of innocent people rather than on the population as a whole. And because we live in a democracy, numbers weigh more than probabilities. These two considerations suggest a solution to the fairness problem. A strategy that is both efficient and fair is to profile in such a way that equal numbers of innocent people are stopped in both the high-crime and low-crime groups. This may not be possible if the high-crime population is too small in relation to the disparity in criminality base rates. But it is perfectly feasible given current US statistics on racial differences in population proportions and crime rates.




Callahan, G. & Anderson, W. (2001) The roots of racial profiling: Why are police targeting minorities for traffic stops? Reason, August–September.

Dominitz, J. (2003) How do the laws of probability constrain legislative and judicial efforts to stop racial profiling? American Law and Economics Review, 5(2), 412–432.

Glaser, J. (2006) The efficacy and effect of racial profiling: A mathematical simulation approach. Journal of Policy Analysis and Management, March 2.

Persico, N. (2002) Racial profiling, fairness, and effectiveness of policing. The American Economic Review, 92(5), 1472–1497.

Risse, M. & Zeckhauser, R. (2004) Racial profiling. Philosophy and Public Affairs, 32(2), 131–170.



Glenn Beck: Why do they hate him so?

In January 2011 Vanity Fair published Tea’d Off, an article by Christopher Hitchens attacking the Tea Party movement and its chief icon, broadcaster Glenn Beck. I have long admired Mr. Hitchens for his prose, his erudition, his independence, and, not least, his courage now in the face of a dreadful disease. Mr. Hitchens is also one of our most brilliant debaters and polemicists. In short, I’m a fan; but I’m very disappointed by his caricature account of Glenn Beck.

I have watched Beck’s TV program many times, but, apart from the ‘tear-stained’ jibe (Beck does tear up from time to time), I do not recognize Beck in Mr. Hitchens’ picture of him. Hitchens’ most egregious charge is that Beck peddles ideas that are “viciously anti-democratic and ahistorical.” Beck is sarcastic and funny and, yes, a bit paranoid, but in my experience not in any way vicious. He spends a lot of time on his show urging people to check his facts and respond peaceably no matter how upset they may be. He said the same thing at his huge, peaceful, tidy (!) and largely apolitical 8/28/2010 event in Washington. Maybe this is all crafty double-talk; if so, it fooled me.

I have never heard Beck criticize democracy; one of his themes is “We the people.” ‘Anti-elite’ would be a more accurate charge. Mr. Hitchens should at least give us a quote and a context or two to back up his ‘anti-democratic’ charge.

As for ‘ahistorical,’ Beck’s TV shows and books have far more historical material—from Edward Gibbon and the Founding Fathers through C. E. M. Joad and F. A. Hayek to Niall Ferguson—than any other comparable show. Mr. Hitchens may disagree with Beck’s interpretations—I’d like to hear how and in what ways—but ‘ahistorical’ Beck is not. Instead of providing something substantive, Hitchens goes off on a rant about some unnamed ‘paranoid right’ radio host who was obsessed with the supposed murder of Vince Foster. Hitchens is smart enough to know that insults are not argument.

The core of Hitchens’ disdain seems to be that Beck has said good things about The Five Thousand Year Leap, a millennial book by one Cleon Skousen, a Mormon one-time FBI operative and polemical conservative active in the McCarthy era and after. Well, Skousen was in many ways an unappetizing character, but my reading does not confirm Hitchens’ charge that he “justified slavery.” His best-known quote on the topic seems to be “… the emancipation of human beings from slavery is an ongoing struggle. Slavery is not a racial problem. It is a human problem.” Hitchens is right that Skousen did use the word ‘pickaninny’ to refer to black children. Like Hitchens, I am British born. I first heard the word as a child many years ago in England. My memory is that it was affectionate, maybe a bit patronizing, but not derogatory—rather like the golliwog on jars of Robertson’s marmalade. It was not a diminutive of the N-word. The golliwogs are gone now, and perhaps things were different in the US. Certainly, things are different in 2010. But to treat then like now is, well, ahistorical.

Skousen was religious of course, which Hitchens is not. Few would go along with the rather strange Mormon mythology that Skousen offers as the basis for his beliefs about America. But we can look at the beliefs themselves: just how offensive are they? Skousen lists 28 of them. A few affirm the necessity of religion and ‘natural law’ to good government. No consensus there. But others advocate respect for property rights and the rights of the individual, equality of rights, the right of the people to replace a tyrannical government, the need for virtue in a free republic, the importance of checks and balances. Some are more controversial: America’s ‘manifest destiny’ to be an example to the world (a little dippy to some, but hardly fascistic), the evils of national debt, allegiance to the ‘free market’ with a minimum of regulations. Simplistic, a little extreme for some tastes—but I’m not sure that the list deserves the level of excoriation that Hitchens directs at it.

Hitchens also accuses Beck, “a tear-stained semi-literate shock-jock” of claiming that “The president is a Kenyan. The president is a secret Muslim…” I’ve heard Beck criticize ‘birthers,’ not support them; but I must have missed the ‘Obama is a (secret) Muslim’ show.

And Beck is semi-literate compared to whom, exactly? Hitchens also seems to be living in a bit of a cultural bubble when he writes “…does anybody believe that unemployment would have gone down if the hated bailout had not occurred and GM had been permitted to go bankrupt?” Well, actually, yes, quite a few non-stupid people do believe that we would be out of the recession by now if fiscal policy had been more responsible. Check out anything by the Austrian school of economists, for example (Tom Woods’ Meltdown is a good start). Hitchens goes on to sneer at “caricature English peer” climate-change critic Lord Monckton. Monckton is not a scientist, and certainly not a member of the climate-change establishment, but he is smart enough to have won an Oxford Union debate on the topic.

Finally and most gratuitously, Hitchens sees the current malaise as a reflection of white people’s fear that they “will no longer be the majority in this country…” Well, some—probably not a majority—of Americans, white and black, do have a fear that traditional American culture may be supplanted by something alien. But I don’t see any real evidence that whites are worried about the numbers of non-whites as non-whites. Oprah would not dominate TV, nor could Barack Obama have been elected, if race-consciousness were a serious problem in America.

I wish Christopher Hitchens well; I look forward to reading his future writings; I just hope that his visceral dislike for religion and the religious and for certain kinds of conservative populist—a dislike shared by most of his intellectual set—does not continue to distort and enfeeble his writing as it did in this article.

TEAM PLAYER: Robert Shiller and Finance as Panacea

This is a review of a relatively old book by a famous economist. The book is a surprising contrast to Shiller’s prescient Irrational Exuberance (2000, now in its 3rd edition). It was reviewed amiably by the New York Times and critically by the free-market Austrian economics journal. The book is an apologia for some of the cleverest — and most destructive — inventions of the finance industry, so another review is probably justified.

Shiller, Robert J. (2012-03-21). Finance and the Good Society. Princeton University Press. Kindle Edition.

Yale professor Robert Shiller is one of the most influential economists in the world. Co-inventor of the oft-cited Case-Shiller index, a measure of trends in house prices, he is author or co-author of several influential books about financial crises – including Irrational Exuberance (2000) and (with George Akerlof) Animal Spirits (2009). He shared the 2013 Economics Nobel with Eugene Fama and Lars Peter Hansen.

In 2012 Professor Shiller published a full-throttle apologia for plutocracy: Finance and the Good Society.  FATGS is a reaction to the hostility to finance provoked by the 2007+ crisis.

Shiller sees the solution to our still-unfolding problems not as less financial invention, but more: “Ironically, better financial instruments, not less activity in finance, is what we need to reduce the probability of financial crises in the future.”  He adds “There is a high level of public anger about the perceived unfairness of the amounts of money people in finance have been earning [no kidding!], and this anger inhibits innovation: anything new is viewed with suspicion. The political climate may well stifle innovation and prevent financial capitalism from progressing in ways that could benefit all citizens.”

Is he right?  Is financial innovation always good?  Have the American people turned into fin-Luddites, eager to crush quant creativity and settle into a life of simplistic poverty, uncorrupted by the obscure and self-serving creations of financial engineering?

Yes and yes, says Professor Shiller, who applauds what others deplore, the rise of ‘financial capitalism’: “a system in which finance, once the handmaiden of industry, has taken the lead as the engine driving capitalism.”

Finance capitalism, a new name but an old idea, has been unpopular for years. In the 1930s, especially, right after the Great Depression, the big finance houses, like J. P. Morgan, were seen as conspirators against the public interest. Goldman Sachs, the ‘great vampire squid’ of Rolling Stone’s Matt Taibbi, plays the same role these days.

How does Shiller defend the financiers?  What is so good about financial capitalism?  What improvements may we expect in the future?

Some of Shiller’s defense is simply puzzling because it is pretty obvious nonsense.  This is what he has to say about securitization – the bundling of hundreds of mortgages into layered bonds that have been sold all over the world:

Securitized mortgages are, in the abstract, a way of solving an information asymmetry problem—more particularly the problem of “lemons.” This problem, first given a theoretical explanation by George Akerlof, refers to the aversion many people have to buying anything on the used market, like a used car. (p. 54)

The claim that securitization solves the information problem is paradoxical to say the least.  How can removing a mortgage from the initial lender improve the buyer’s knowledge of the borrower?  Surely the guy who actually originates the loan is in the best position to evaluate the creditworthiness of the borrower?

Ah, the answer is apparently the rating agencies:  “Bundling mortgages into securities that are evaluated by independent rating agencies, and dividing up a company’s securities into tranches that allow specialized evaluators to do their job, efficiently lowers the risk to investors of getting stuck with lemons.”

Really?  Not everyone agrees.  Here’s another comment about rating agencies.  It’s from Michael Burry, who was one of the few to spot the eroding quality of sub-prime mortgages in the years leading up to the 2007 crash (this is a bit long, but bear with me):

So you take something like NovaStar, which was an originate and sell subprime mortgage lender, an archetype at the time. The names [of the bonds] would be NHEL 2004-1, NHEL 2004-2, NHEL 2004-3, NHEL 2005-1, etc. NHEL 2004-1 would for instance contain loans from the first few months of 2004 and the last few months of 2003, and 2004-2 would have loans from the middle part, and 2004-3 would get the latter part of 2004. You could pull these prospectuses, and just quickly check the pulse of what was happening in the subprime mortgage portion of the originate-and-sell industry. And you’d see that 2/28 interest-only ARM mortgages were only 5.85% of the pool in early 2004, but by late 2004 they were 17.48% of the pool, and by late summer 2005 25.34% of the pool. Yet average FICO [consumer credit] scores for the pool, percent of no-doc [“Liar”] loan-to-value measures and other indicators were pretty static…. The point is that these measures could stay roughly static, but the overall pool of mortgages being issued, packaged and sold off was worsening in quality, because for the same average FICO scores or the same average loan to value, you were getting a higher percentage of interest only mortgages[1].

In other words, the proportion of crap increased over the years, but the credit scores remained the same!  So much for the credit-rating agencies which were, in effect, captives of (and paid by!) the bond issuers.  Just how critical will a rating agency be of a bond when it is paid by the issuer of the bond? Moral hazard, anyone?

Shiller concedes that securitization “turns out not to have worked superbly well in practice,” but he blames optimism about house prices, not the built-in opacity and erosion of responsibility of securitization itself. But optimism is much more an effect than a cause; it should not be invoked whenever economists fail to explain something.

Securitization can only justify its name if several underlying assumptions are true. One key assumption is that mortgage default rates differ from place to place – that they are uncorrelated. Things may go bad in Nevada, say, but that will have no effect on default rates in New York. The risks associated with individual mortgages, scattered across the country, might have been more or less uncorrelated before securitization. But afterwards, “[r]ather than spreading risk, securitization concentrated it among a group of electronically linked investors subject to herd-like behavior”[2]. Mortgages now rose and fell in synch: bubble followed by bust. The attempt to reduce individual risk led (after some delay) to increased systemic risk – what I have called the malign hand. Securitization rested on an assumption that was as false as it was convenient. Securitization was anything but…

So what’s good about financial capitalism? Well, FC is democratic, says Shiller: “there is nothing in financial theory that specifies that control of capital should be confined to a few ‘fat cats.’ Think of the broadly democratic proliferation of insurance, mortgages, and pensions—all basic financial innovations—in underwriting the prosperity of millions of people in the past century.” There are a couple of problems with this. First, by seeking universal security, finance has instead arrived at collective instability – as the pension and credit crises of recent years have proved. All too often, illusory individual security has been achieved only at the cost of systemic breakdown.

The second problem is Shiller’s assumption that democracy, vaguely defined, is always good.  Well, there are many forms of democracy; some work well and others badly.  Some preserve the rights of minorities; others degenerate into tyranny of the majority.  The ‘financial contagion’ involved in bubbles looks more like the latter than the former.  The fact that many people are involved in something is no proof of its virtue.

Shiller also seems to think that the ‘democratization of finance’ will lead to a more equal world – after taxes, at least.  He would probably agree with Washington Post columnist Robert Samuelson that despite all those K Street lobbyists, the rich pay most of the taxes and the middle class get most of the benefits: “In 2009, $2.1 trillion (60 percent) of federal spending went for ‘payments for individuals.’  This included 52.5 million people receiving Social Security; 46.6 million on Medicare (many of the same people); 32.9 million on food stamps; 47.5 million on Medicaid; 3.9 million with veterans’ benefits. Almost all these benefits go to the poor and middle class. Meanwhile, the richest 5 percent of Americans pay 44 percent of federal taxes.  Does this look like government for the rich?”

But these statistics are a bit misleading. The rich do indeed pay the lion’s share of taxes, but they also make more than a lion’s share of the income. In 2011, for example, the top 1% made 21% of the income and paid…21.6% of the taxes! That’s essentially the same fraction of their $1.37M average income as it is of the $67 thousand average income of the fourth 20%. When you get into the middle class and below, income taxes are not in fact very progressive. And the Gini index, a measure of inequality, rose and fell in almost perfect synchrony with the rise and fall of the financial sector in the US economy from 1967 to 2005. More finance has gone along with more inequality, not less, as Shiller implies.

The tax issue is horribly complex, of course.  These simple figures ignore income forfeited by tax-efficient investment via low-interest municipal and other tax-exempt bonds, double taxation of investment income, etc.  But overall, the tax system is less progressive than it looks.

It’s also hard to ignore the eye-watering compensation awarded to Wall Street’s ‘masters of the universe’ in recent years. Finance doesn’t look very democratic to me.

Most people think the financial sector is too big, admits Shiller, who disagrees.  Well, just how big is it – and how big should it be?

Financial activities consume an enormous amount of time and resources, increasingly so over the years. The gross value added by financial corporate business was 9.1% of U.S. GDP in 2010…By comparison it was only 2.3% of GDP in 1948. These figures exclude many more finance-related jobs, such as insurance.  Information technology certainly hasn’t diminished the number or scope of jobs in finance.  [p. 12, emphasis added]

But why hasn’t IT reduced the size of the financial industry – made it cheaper – in relation to the rest of the economy, just as mechanization reduced the number of people involved in farming? The financial industry has grown mightily. But if any sector should benefit from pure computational power, it is surely finance. Many clerks and human computers should have been made redundant as digital-computer power has increased and its cost has decreased. But no: computation has not been used to increase the real efficiency of the financial industry. It has been used to create money – in the form of credit (leverage) – through ‘products’ that have become increasingly hard to understand. Many trace the recent instability of financial markets in part to derivatives and other complex products made possible by the growth of financial IT. But Professor Shiller sees these things as creative innovation and contributors to general prosperity. Creative they may be, but the evidence is that expansion of finance is associated with slowed growth of the economy as a whole.

So, how big should finance be?  Professor Shiller makes a comparison to the restaurant industry.

To some critics, the current percentage of financial activity in the economy as a whole seems too high, and the upward trend is cause for concern. But how are we to know whether it really is too high or whether the trend is in fact warranted by our advancing economy? …People in the United States spend 40% as much (3.7% of GDP) eating out at restaurants as the corporate financial sector consumes. Is eating out a wasteful activity when people could just as well stay home and eat?

Is finance comparable to eating out?  Hardly.  Eating out is end-use, of value in itself.  Shiller seems to think that finance can create wealth directly, like the auto industry or farming.  But finance exists only to allocate (which includes creating, via credit) resources efficiently.  A bond or a swap has no value in and of itself.  Its value is its contribution to building ‘real’ industry.  Yet now finance seems to consume more than the resources it allocates.  In 2002 it comprised a staggering 45% of US domestic corporate profits, for example, a huge increase from an average of less than 16% from 1973 to 1985.

Shiller is right that no one knows, or can know, exactly how big the financial industry should be.  But when it makes almost half of all profits, even its fans may suspect that it has grown too great.

So what is the promise of finance?  What benefits may we expect in the future?  Shiller devotes a whole chapter to “Insurers”, tracing the expansion, which he terms “democratization,” of insurance to areas most of us would never have thought “insurable” at all.

Livelihood insurance is one possibility.  This would be a long-term insurance policy that an individual could purchase on a career, an education, or a particular investment in human capital.  One could choose to specialize far more narrowly than is commonly done today—say, on a particularly interesting career direction—developing the expertise for such a career without fear of the consequences if the initiative turned out badly.

Other examples that may surprise are futures markets in career outcomes by occupation, long-term catastrophe insurance (e.g., against the possibility that hurricanes will increase in frequency over the next fifty years), and home-equity insurance (insurance against a loss in value of your home).  Shiller concludes: “Pushing the concept of insurance to new horizons can be inspiring work.”

Really?  To me Shiller’s enthusiasm for insuring everything in a quest for a riskless society borders on the delusional.  Risk is, after all, the main source of financial discipline.  Render debt riskless and there is little to prevent it rising without limit.

And there are practical problems.  Unless you think insurance a panacea, the first question about it, surely, should be: how do you compute the odds?  The answer, for most of these novel insurables, is “guess,” because there is no principled way to compute odds.  For life insurance there are mortality tables and a reasonable expectation that the pattern from the past will hold in the future, or at least for a generation or so.  Much the same is true for property insurance – the insurer knows historic fire, burglary and theft rates, and so on.   But ‘equity insurance’?  Who could have computed the odds on the recent property bubble collapse?  Insuring against such an event is itself gambling on a planetary scale.
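The contrast can be made concrete.  Conventional insurance pricing rests on an expected-loss calculation; the sketch below (a simplified illustration, with the probability, payout and loading figures invented for the example) shows why it works for fire insurance and breaks down for novel insurables:

```python
# A minimal sketch of 'actuarially fair' premium pricing.
# For conventional insurance, loss_probability comes from historical
# data (mortality tables, fire rates); for novel insurables like
# home-equity insurance, there is no such frequency to plug in.

def fair_premium(loss_probability: float, payout: float, loading: float = 0.2) -> float:
    """Expected loss plus a loading for expenses and profit."""
    return loss_probability * payout * (1 + loading)

# Property insurance: say a 0.3% annual chance of a $200,000 fire loss.
print(round(fair_premium(0.003, 200_000), 2))  # 720.0

# 'Equity insurance' against a housing crash: loss_probability is a
# guess, so any premium computed this way is a guess too.
```

The formula is trivial; the whole difficulty lies in the first argument, which for Shiller’s new insurables no one can supply.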

So what?  Shiller might respond, “no one can compute the exact odds on a horse race either.”  Presumably many of the novel insurances he proposes would have to compute odds based purely on the market – how many people want X amount of insurance on Y events? – just as the TOTE does on a racetrack.
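Parimutuel (TOTE) pricing needs no model of the horses at all; the odds fall out of the bets themselves.  A minimal sketch (the bet amounts and 15% takeout are invented for illustration):

```python
# Parimutuel (TOTE) pricing: odds come entirely from the distribution
# of bets placed, not from any assessment of the horses.

def parimutuel(bets: dict[str, float], takeout: float = 0.15) -> dict[str, float]:
    """Payout per $1 staked on each horse, if that horse wins."""
    pool = sum(bets.values()) * (1 - takeout)  # pool net of the track's cut
    return {horse: pool / stake for horse, stake in bets.items()}

bets = {"A": 600.0, "B": 300.0, "C": 100.0}
payouts = parimutuel(bets)
# The market-implied win probability is just each horse's share of the pool.
implied = {h: s / sum(bets.values()) for h, s in bets.items()}
print(payouts["A"], implied["A"])
```

Note what this mechanism guarantees and what it does not: the pool always covers the payout, but nothing ensures the implied probabilities bear any relation to reality – which is precisely the worry about market-priced insurance.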

What’s wrong with that, you might ask?  Well, in a horse race, you at least know that there will be only one winner, so the bookie’s exposure is bounded.  But in a bet on a bunch of mortgages, the number of losers is uncertain.  And betting on a horse race affects only the punters who, usually, are betting with their own money.

But betting on house equity, with borrowed money, has implications for the whole economy.  Would equity insurance encourage house purchase?  Yes.  Would it make buyers less anxious about a possible decline in house prices?  Sure.  Would insurance have helped the housing bubble inflate?  Almost certainly.  So would insurance have added to the crash?  And would it have been able to cope with the resulting losses?  Yes – and no, it would not have been able to pay out to everyone.  So how great an idea is equity insurance after all!

And finally there is the problem of feedback: it is dangerous to insure someone against a hazard over which they have control.  If I take out insurance against failing in a career, for example, my incentive for working hard at it will surely be somewhat reduced and my tendency to give up correspondingly increased.  Risk is a great motivator; eliminating risk must therefore impair motivation.

Similar misgivings arise for most of the creative financial products puffed by Prof. Shiller.  They may benefit individuals, but at the cost of increasing systemic risk, which is borne by others – the malign hand again.

Shiller’s vigorous defense of a controversial industry made me uneasy, but it took some time to discover just what it is in his philosophy that is so disturbing.  Here is a paragraph which makes the point quite clearly: “At its broadest level, finance is the science of goal architecture—of the structuring of the economic arrangements necessary to achieve a set of goals and of the stewardship of the assets needed for that achievement… In this sense, finance is analogous to engineering.”

What’s wrong with that, you may say?  People have goals and surely the purpose of our social arrangements is to help achieve them?  Well perhaps, but contrast Shiller’s comment with this from Apple’s Steve Jobs: “people don’t know what they want until you show it to them.”[3]  Who has what goal in Jobs’ world?  Not the consumer, who doesn’t know what he wants until he sees it, and not even Apple, which works on each new product until it just seems right.  Much of the creativity of capitalism is bottom-up – the goal emerges from the process.  It’s not imposed from above.  But for Professor Shiller, the goal always comes first.  His ideal is command from above, not the kind of “spontaneous order” that Friedrich Hayek and other free-market pioneers have identified as the secret of capitalism’s success.

Another problem is that Prof. Shiller thinks that financial engineering is, well, engineering.  Engineering is the application of valid scientific principles to achieve a well-defined and attainable goal.  Financial ‘engineering’ employs valid mathematics, but rests on shaky assumptions.  Its predictive powers are minimal.  The desired objective may or may not be possible – no one can prove that Prof. Shiller’s equity insurance is not destabilizing, for example.  It is fanciful to compare financial engineering to real engineering: the proper comparison is more like astrology vs. astronomy.

Finally, there is the problem of risk itself.  The finance industry accepts without question that shedding – sharing, distributing – risk is always a good thing.  And risk itself is treated as a thing, like a load – of bricks, say – that can usefully be split up and shared.  Like a load of bricks, its total amount doesn’t change when it is split up.  Nor do individual bricks get heavier or lighter with each change of carrier.

But risk is not a thing.  It is a property of an economic arrangement.  As the arrangement changes, so does the risk.  The bricks do change as they pass from one carrier to another.   Both the total amount of risk (if that even means anything) and, more importantly, who exactly is at risk, change as the arrangement changes – as we move from individual mortgages to mortgage-backed securities, for example.  The idea of ‘sharing risk’ is a very dangerous metaphor.

But Robert Shiller’s impassioned defense of the evolution of finance accurately reflects the belief system of an industry that has lost a firm connection to reality.  Risk is treated as a thing instead of a property.   Financial ingenuity can and should reduce risk whenever possible.  Insurance is always good.  It should be possible to insure against any eventuality, if we are just clever enough.  The more people are involved in finance, the more ‘democratic’ it becomes.  Ingenious packaging like securitization shares risk without increasing it.   All these beliefs are more or less false.  Yet they are at the heart of modern finance.

[1] Quoted by Michael Lewis in The Big Short, location 545 (Kindle edition).

[2] The Death of Capital, Michael E. Lewitt (John Wiley, 2010)