

The Logic of Profiling: Fairness vs. Efficiency

ABSTRACT  There are several strategies available to police for “stopping” suspects.  The most efficient is to stop only members of the group with the highest a priori probability of guilt; the least efficient is indiscriminate stopping.  The best profiling strategy is one that biases stops of different groups so that the absolute number of innocents stopped is equal for all groups.  This strategy is close to maximally efficient, allows some sampling of low-crime sub-groups, and seems fair by almost any criterion.

Profiling is selecting, or discriminating for or against, individuals based on easily measured characteristics that are not directly linked to the behavior of interest.  For example, age, sex, or racial appearance are used as partial proxies for criminal behavior because crime rates differ among these groups.  Old people, women, and whites are less likely than young people, men, and blacks to be guilty of certain types of crime.  Hence, preferentially ‘stopping’ young black males is likely to catch more criminals than stopping the same number of people at random.  Just how many more, and at what cost in terms of ‘fairness’, is the topic of this note.

The term “profiling” is usually associated with stop-and-search procedures (see, for example, Callahan & Anderson, 2001; Glaser, 2006), but a similar process occurs in other contexts also.  Almost any kind of selective treatment that is based on a proxy variable is a form of profiling.  In life, health and car insurance, for example, people of different ages and sexes are usually treated differently.  Often controversial, profiling nevertheless goes unquestioned in some surprising places.  Take speeding by motorists, for example.  Exceeding a posted speed limit is an offence and few question laws against it.  But most speeding causes no direct harm to anyone.  The legitimacy of a law against speeding rests on the accuracy with which speeding predicts the probability and severity of an accident: the statistically expected cost of speeding is the product of accident probability times damage caused.  While it is obvious that an accident at high speed will usually cause more damage than one at lower speed, the relation between speed and accident probability is more contingent.  If drivers go fast only when it is safe to do so, there may be no correlation, or only a weak or even negative one, between speed and the likelihood of an accident.  Hence, a car’s speed may not be a reliable proxy for accident risk, in which case penalizing – profiling – speeders would be unfair.  The same is true of alcohol and driving.  If drunks drive more cautiously (as some do), their proven sensory-motor deficiencies may become irrelevant to accident risk.  In both these cases the fairness of profiling rests on its accuracy.  If drinking and speeding really are correlated with higher accident risk, sanctions against them may be warranted.

There is also the issue of personal responsibility.  Speed is under the motorist’s control,  just like smoking – which is used in life-insurance profiling.  Fewer objections are raised to profiling that is based on proxies that are under the individual’s control and for which he can therefore be held responsible.  Race is of course not something over which the individual has any control, which is one reason racial profiling is subject to criticism.   On the other hand, age and sex are also involuntary, yet fewer objections are raised against profiling on these grounds.  The reasons for these policy differences and the problems of measuring the statistics on which they are based are larger topics for another time.

The utility and legitimacy of profiling depend on two related characteristics: accuracy and fairness.  How well do the measured characteristic or characteristics predict the variable of interest?  And how fair is it to pick on people so identified?

Fairness is not the same as accuracy.  In health insurance, for example, the whole idea of “insurance” arose partly because people cannot predict when they will get sick.  But as biological science advances and it becomes possible to predict debilitating genetic conditions with high accuracy, insurance companies may become reluctant to insure high-risk applicants, who may therefore be denied insurance.  How fair is this?  In general, the greater the ability of an insurer to predict health risk, the more questionable health profiling becomes, because the concept of insurance – spreading risk – is vitiated.  But this is not a problem for profiling to catch criminals.  Few would object to profiling that allowed airport screeners to identify potential terrorists with 99% probability.  The better law-enforcement authorities are able to profile, the fewer innocent people will be stopped and the more acceptable the practice will become.

The political and ethical problems raised by profiling and associated practices, and some of the utilitarian aspects of stop-and-search profiling, have been extensively reviewed (see for example, Dominitz, 2003; Glaser, 2006; Persico, 2002; Risse & Zeckhauser, 2004).  But no matter what the political and moral issues involved, it is essential to be clear about the quantitative implications of any profiling strategy.  With this in mind, this note is devoted to a simple quantitative exploration of the accuracy and ‘fairness’ of profiling in “stop-and-search” situations such as driver stops or airport screening.  The quantitative analysis in fact allows us to identify a possible profiling strategy that is both efficient and fair.

Fair Profiling

Age and sex profiling are essentially universal: police in most countries rarely stop women or old men; young males are preferentially stopped.  The reason is simple.  Statistics in all countries show that a young man is much more likely to have engaged in criminal acts, particularly violent acts, than a woman or an older man.  The same argument is sometimes advanced for racial profiling: stopping African-American drivers, or airline passengers of Arab appearance, more frequently than whites or Asians, for example.

I look at the very simplest case: a population with two sub-populations, A and B, that differ in the proportion of criminals they contain.  To do the math we need to define the following:

population size = N

proportion of A in population = x

proportion of B in population = 1-x

target probability A = r

target probability B = v, where r < v and 0 < r, v < 1

(r and v define the relative criminality of As and Bs: if r = .2, for example, that means that 20% of A stops find a criminal.  If r = v, profiling offers no benefit, because the probability that a given A has engaged in crime is the same as for a given B.  If v > r, the case I will consider here, a B is more likely to be a criminal than an A, and so Bs should be favored – targeted – by profilers.)

The probability that a given A or B will be stopped depends on two parameters, p and q.  p is the overall probability of a stop, i.e., the fraction of the total population that will be stopped; q is the profiling parameter:

A-weight = q

B-weight = 1- q

(q is the profiling weight for A, i.e., q/(1-q) is the bias in favor of A. If q = .5 there is no profiling; if q < .5, a B is more likely to be stopped than an A.)

For a population of size N, the expected number of As sampled (stopped) is pzN, and the expected number of Bs is p(1-z)N, where z is defined below.


With these terms defined, and given values for population size N, stop probability p, and target probabilities r and v, which define the relative criminality of the A and B populations, it is possible to write down expressions that give the total number of criminals detected and the number of innocents stopped in each sub-population.

It is easiest to see what is going on if we consider a specific case: a population of, say, N = 10,000, with the number of stops limited to one in ten – 1000 people (p = 0.1).  Profiling is only worthwhile if the proportion of criminals in the A and B sub-populations differs substantially.  In the example I will look at, the probability that a given A is criminal is r = 0.1 and for a given B, v = 0.6 (i.e., a B is six times more likely to be a criminal than an A).  I also assume the Bs are in the minority: 1000 Bs and 9000 As (x = 0.9) in our 10,000-person population.

The degree of profiling is represented in this analysis by the parameter q, which can vary from 0 to 1.  When q = 0, only Bs are stopped; when q = 1, only As are stopped.  The aim of the analysis is to see what proportion of our 1000 (pN) stops are guilty vs. innocent as a function of profiling ranging from q = 0 (only Bs stopped) to q = 0.5 (no profiling, As and Bs stopped with equal probability).  The math is as follows:

I first define a term z that represents the proportion of As in a fixed-size sample of, say, 1000 ‘stops’:

z = qx/(qx + (1-q)(1-x)),

and 1 – z is the proportion of Bs; z allows q, the bias – profiling – parameter, to vary from 0 (only Bs stopped) to 1 (only As stopped) for a fixed sample size.
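This sampling-share term can be written as a small helper function (a sketch; the function name is mine, and the closed form z = qx/(qx + (1-q)(1-x)) is inferred from the boundary conditions and the worked numbers in this note):

```python
def stop_share_A(q, x):
    """Fraction z of stops that fall on As, given A-weight q
    (B-weight 1 - q) and population proportion x of As.
    q = 0.5 gives z = x (no profiling); q = 0 gives z = 0
    (only Bs stopped); q = 1 gives z = 1 (only As stopped)."""
    return q * x / (q * x + (1 - q) * (1 - x))
```

With x = 0.9, stop_share_A(0.5, 0.9) returns 0.9: under no profiling, As are stopped in proportion to their share of the population.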

The number of As sampled (stopped) is pzN and the number of Bs sampled is p(1-z)N.  Multiplying these quantities by the criminality parameters r and v gives the number of guilty As and Bs caught: the number of guilty As caught is pzNr and the number of guilty Bs caught is p(1-z)Nv.  We can then look at how these numbers change as the degree of profiling goes from q = 0 (all Bs) to q = .5 (As and Bs stopped in proportion to their numbers, i.e., no profiling).

This sounds complicated but, as the curves show, the outcome is pretty simple.  The results, for N = 10,000, p = 0.1, r = 0.1, v = 0.6, and x = 0.9, are in Figure 1, which shows the number of criminals caught (green squares) as a function of the degree of profiling, q.  The number of innocents stopped, which is just 1000 minus the number of guilty since there are only 1000 stops, is also shown (red triangles).  As you might expect, the most efficient strategy is to stop only Bs (q = 0).  This yields the most guilty caught and the fewest stops of innocent people: 600 guilty and 400 innocent out of 1000 stops.  The number of guilty detected falls off rapidly as the degree of profiling is reduced, from a maximum of 600, when only Bs are stopped, to a minimum of 150 when As and Bs are stopped with the same probability.  So the cost of not profiling is substantial.
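The endpoints of the Figure 1 curves can be checked with a few lines of Python (a sketch of the calculation; variable names are mine):

```python
# Parameters from the worked example in the text.
N, p, r, v, x = 10_000, 0.1, 0.1, 0.6, 0.9

def outcomes(q):
    """Guilty caught and innocent As/Bs stopped for profiling weight q."""
    z = q * x / (q * x + (1 - q) * (1 - x))   # share of stops that are As
    stops_A, stops_B = p * z * N, p * (1 - z) * N
    guilty = stops_A * r + stops_B * v
    innocent_A, innocent_B = stops_A * (1 - r), stops_B * (1 - v)
    return guilty, innocent_A, innocent_B

g0, iA0, iB0 = outcomes(0.0)   # pure profiling: only Bs stopped
g5, iA5, iB5 = outcomes(0.5)   # no profiling: stops proportional to numbers
# g0 = 600 guilty, iB0 = 400 innocent Bs, iA0 = 0 innocent As;
# g5 = 150 guilty, and iA5/iB5 is about 20 innocent As per innocent B.
```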

But the pure profiling strategy is obviously flawed in one important way.  Because no As are stopped, the profiler has no way to measure the incidence of criminality in the A population, hence no way to update his profiling strategy so as to maintain an accurate measurement of parameter r.   Another objection to pure profiling is political.  Members of group B and their representatives may well object to the fact that innocent Bs are more likely to be stopped than innocent As, even though this can be justified by statistics.  What to do, since there is considerable cost, in terms of guilty people missed, to backing off from the pure profiling strategy?

Profiling entails a higher stop probability for the higher-crime group.  Innocent Bs are more likely to be stopped than innocent As.  Nothing can be done about that.  But something can be done to minimize the difference in the numbers of innocent As and Bs stopped.  The ratio of innocent As to innocent Bs stopped is shown by the line with blue diamonds in Figure 1.  As you can see, with As and Bs in a ratio of nine to one and rates of criminality in a relation of one to six, the ratio of innocent stops A/B increases rapidly as the degree of profiling is reduced.  With no profiling at all, twenty times as many innocent As as innocent Bs are stopped.  But this same curve shows that it is possible to equalize the number of innocent As and Bs that are stopped.  When the profiling parameter q = .047, the numbers of innocent As and Bs stopped are equal (red arrow, A/B = 1).  At this point, enough As are in fact stopped, 277 out of 1000 total stops, to provide a valid estimate of the A-criminality parameter, r, and the drop in efficiency is not too great: 446 captured versus the theoretical maximum of 600.  Thus, for most values of r, v and x, it is possible to profile in a way that stops an equal number of innocent people in both groups.  This is probably as fair a way of profiling as is possible.
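The ‘fair’ value of q can in fact be obtained in closed form.  Equal numbers of innocent stops requires z(1-r) = (1-z)(1-v), so z* = (1-v)/(2-r-v); inverting z(q) then gives q*.  A Python sketch (my derivation, checked against the worked example above):

```python
# Parameters from the worked example in the text.
N, p, r, v, x = 10_000, 0.1, 0.1, 0.6, 0.9

# Equal innocent stops: z*(1 - r) = (1 - z*)*(1 - v)  =>  z* = (1-v)/(2-r-v)
z_star = (1 - v) / (2 - r - v)

# Invert z(q) = qx / (qx + (1-q)(1-x)) to recover the profiling weight q*.
q_star = z_star * (1 - x) / (x * (1 - z_star) + z_star * (1 - x))

innocent_each = p * z_star * N * (1 - r)           # innocent As == innocent Bs
guilty = p * N * (z_star * r + (1 - z_star) * v)   # criminals caught at q*
# q_star is about 0.047, innocent_each about 277, guilty about 446,
# matching the numbers quoted in the text.
```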

Doing it this way of course sets the cost of stopping innocent As lower than the cost of stopping innocent Bs.  In the most efficient strategy, 400 innocent Bs are stopped and zero innocent As, but in the ‘fair’ strategy 277 of each are stopped, so the reduction of 400-277  = 123 innocent B stops is more than matched by an increase from zero to 277 in the number of innocent A stops.   Some may feel that this is just as unfair as the pure profiling strategy.  But, given the need to sample some As to get accurate risk data on both sub-populations, the ‘fair’ strategy looks like the best compromise.


When base criminality rates differ between groups, profiling – allocating a limited number of stops so that members of one group are more likely to be stopped than members of another – captures more criminals than an indiscriminate strategy.  The efficiency difference between the two strategies increases substantially as the base-rate difference in criminality increases, which can lead to a perception of unfairness by innocent members of the high-risk group.

Profiling entails unequal stop probabilities between the two groups.  Nevertheless, because no one seeks to minimize the stops of guilty people, it seems more important to focus on the treatment of innocent people rather than on the population as a whole.  And because we live in a democracy, numbers weigh more than probabilities.  These two considerations suggest a solution to the fairness problem.  A strategy that is both efficient and fair is to profile in such a way that equal numbers of innocent people are stopped in both the high-crime and low-crime groups.  This may not be possible if the high-crime population is too small in relation to the disparity in criminality base rates.  But it is perfectly feasible given current US statistics on racial differences in population proportions and crime rates.




References

Callahan, G. & Anderson, W. (2001) The roots of racial profiling: Why are police targeting minorities for traffic stops? Reason, August-September. http://reason.com/0108/fe.gc.the.shtml

Dominitz, J. (2003) How do the laws of probability constrain legislative and judicial efforts to stop racial profiling? American Law and Economics Review, 5(2), 412-432.

Glaser, J. (2006) The efficacy and effect of racial profiling: A mathematical simulation approach. Journal of Policy Analysis and Management, March 2.

Persico, N. (2002) Racial profiling, fairness, and effectiveness of policing. The American Economic Review, 92(5), 1472-1497.

Risse, M. & Zeckhauser, R. (2004) Racial profiling. Philosophy and Public Affairs, 32(2), 131-170.


