Monthly Archives: May 2018

Lemuel Pepys Esq., time traveler

This satirical piece is based on the leaked transcript of an editorial staff meeting at The Atlantic magazine in April 2018. The two principals are black writer Ta-Nehisi Coates and the editor, Jeffrey Goldberg.

In which a Gentleman of the Eighteenth Century is Miraculously transported to a Twenty-First Century Editorial Meeting

Through a wormhole, unknown in the 18th century but now routinely available on Twitter, Mr. Pepys, a distant cousin of the well-known diarist Mr. Samuel Pepys, has been time-traveled, in the guise of a young staffer, to a meeting at a major publication of the US East Coast intellectual elite. Much is unfamiliar. He perceives that he has been taken to the remote future. He recognizes the jobs of the scribblers he is witnessing. He is puzzled at the presence of ‘negroes’ and young women. He is baffled by their discussion. The following are extracts from his diary.

Friday April 6, 2018

Two men, one black, one white, address the meeting. The white one seems troubled. The tall black one appears to be the master. They all labour for a periodical called The Atlantic (we seem to be in the colonies).

“Of no party or clique” is their motto. (Is it true that this publication supports no particular faction? Rare!)

The black man complains about his previous employment at an organ called the New Republic: “No black people worked there. I’ve actually verified this. No black people worked there at all. And to my mind — other people will probably feel quite differently about this — but as far as I was concerned, it was basically a racist publication.” We learn later that The Atlantic suffers from the same distemper: “basically white dudes”.  (And what is this “dude”?)

What is racism?

None dissented from the black man’s claim: absence of black people is racism, which is a sin, it seems. But what meaneth ‘racist’? No Negroes are at my own place of employment, the Royal Navy Sick and Hurt Board. Are the Navy yards therefore ‘racist’? But of course, few blacks are available – more in the colonies, I believe, though most in bondage.

Apparently, there are many free black people in this new time. Are they excluded from all literary employment? Can they not write? Unlikely, since my black man writes much. Perhaps he is possessed of a Royal Prerogative? Are other black men in some way not fit for employment by The New Republic?

The black man is aggrieved. He missed black people at the NR because, he said, “there was no me to learn from.” I am puzzled. As a child, my teachers were actually women. Though myself a boy, I yet learned quite well from them. I learned a little music from Signor Ottocelli, an Italian gentleman, a very foreign person. Are black people somehow different? Can they learn only from their ilk?

The black man is sad: “I don’t know how to put this without sounding like an a–hole.” But after debating the matter within himself, he decided that it was after all good to learn, even from people he believed to be “f—-ing racist” – that word racist again.

The black man has difficulty learning from others if they differ from him either by color or opinion.  He is concerned that his teachers did not see him “completely as a human being.” What does this mean? It is natural to see negroes as different, of course; they look different from Englishmen. Those who have arrived in our island since 1600 were savages, mostly, naked and illiterate. But many free black people are now in the colonies and, as I later learn, look and behave more or less as others do. I cannot comprehend his difficulty.

The black man apparently has one white colleague with whom he differs, but to whom he can speak: “You can go into The Atlantic archives right now, and you can see me arguing with Andrew Sullivan about whether black people are genetically disposed to be dumber than white people. I actually had to take this seriously, you understand?” But Mr. Sullivan is evidently an exception: The black man can talk to Mr. Sullivan, but not to any others of his party (except Kevin, apparently). And what is “genetically”?

Are black people (in general, I suppose, there must be exceptions) in fact stupider than white people?  Apparently the proposition is too silly to debate, according to the black man (but he would say that, wouldn’t he?).

The trouble with Kevin

There is a discussion about a former colleague. A man called Kevin was recently ejected from the group after a very short stay. Evidently, Kevin is one of those white folk who fail to see the black man and others like him “as fully realized human beings.” What does this mean? That Kevin doesn’t like them? That they don’t like Kevin? That he thinks black people are but hairless apes (tho’ he doth deny it!)? Apparently Kevin has views that are “batsh-t crazy” — not explained. But it is clear that “batsh-t crazy” opinions are anathema, like Popery or disbelief in the Trinity.

The willful disposing of unborn infants is a contentious issue. The practice is a crime in my time. Kevin apparently is of the same view. But — O tempora, O mores! — apparently, abortion is permitted now in some parts of the colonies and embraced by the present company.

The black man refers to the execution of criminals (I discovered later that criminals are now executed in a barbarous and ignoble fashion, by a medical procedure. Surely, hanging, which would at least preserve the honor and dignity of the condemned man, is to be preferred?)  The black man seems to believe it is wrong to execute anyone, no matter how heinous his crime.

After a brief jocosity with the white man, the black man speaks again: “you know, I was an admirer of Kevin’s work, and I think I can say this, you know, Jeff [the white man] talked to me about this. And I was not like, don’t hire that dude. To the contrary, I thought, OK, well he can come in and represent the position, and then we can fight it out…I feel like I failed the writers of color here in that advice.” Why “failed”? Are black people approving of abortion, as Kevin apparently is not? Can they not bear a contrary view? Do they not enjoy vigorous debate, as we do? Later discussion suggests that white people at the publication also fear debate. And what is a “dude”?

The black man at last explains the difficulty: “This publication is diversifying…What is debatable comes up for question because you bring different people in, and those people are not just brown-skinned or dark-skinned or women who would normally — you know, who are just the same as any other. Their identity — and I know this is bad in certain quarters, but I don’t think it is — that identity cannot be neatly separated from the job.”  By “diverse” he seems to mean adding women and colored people to the group.

Diversity impedes debate?

It is clear at last:  This “diversity” is the problem. So long as the scribblers were all white men, they could converse and debate freely. But now colored people and women are in the room (yes, young women are present! Although they wear trousers and shirts, like men – only exposing more chest). Since the paper has become ‘diverse’, free debate is no longer possible: “So maybe the job changes a little bit” says the black man.

Now I think I begin to understand the dilemma at the New Republic: to have a vigorous and open group of writers, they needed to be all men, or at least not diverse. (Would all women, or even all black people, work as well? Or are such groups considered to be ‘diverse’, hence incapable of robust debate?)  “Like, those two things [diversity and a ‘broad range of debate’] actually, as you said, they’re part of each other. And I guess what I’m suggesting is they actually might also be in conflict with each other”, as the black man points out later. Though awkwardly expressed, the black man sees the problem: with women and blacks in the room, debate is stifled. Best go back to the old way, men only, as in my time? I well understand that many things may not be discussed in the presence of women.

The white man speaks. He has failed to grasp the black man’s point: “trying so hard to diversify gender, race ethnicity, orientation, whatever, part of it is to make sure that we’re of no party or clique.” So, he wishes to be ‘diverse’ but cannot understand that it conflicts with their motto.  The black man perceives that free debate is not possible in a ‘diverse’ group. The white man admits that certain issues cannot be discussed. He wishes debate “without touching the third rails of gender and abortion and race.” So, gender, abortion and race cannot be discussed? Which is a puzzlement, since they seem to be at the top of everyone’s minds. (And what is a “third rail”?)

The black man speaks again: perhaps I have mistook him: “I think the deal is that in the ’90s, when this room would not have looked like this room does [i.e., no women or blacks?], there were things that were considered out of bounds. I don’t think we would have published ‘The Case for Reparations’ then.”

Much is made of this important “Reparations” production, which appeared in The Atlantic some years earlier. The black man refers to it frequently, making no mention of criticism that has appeared elsewhere. “And I think the problem is, some of those things — this is the huge, huge problem — some of those things that I would argue should be out of bounds, actually a large number of Americans actually believe.” He doth not say what those things are — perhaps a suggestion that there may be differences between black and white people? (But if blacks and whites truly are the same, why keep treating them separately? Why complain, as the black man frequently does, that “I was the only writer of color”?) Or is it just anathema to discuss things believed by the common people?

We cannot know whether “The case for reparations” would have been published in The Atlantic in years past. But if not, the reason might have been that its thesis seems unjust. Should living white people pay living blacks for injuries inflicted by dead whites on dead blacks?  Especially as some blacks believe themselves better off than if their ancestors had remained in Africa.  Or, as some have suggested, because the argument made is feeble.  Or that the style of writing is too enthusiastic for a scholarly publication.  We cannot know.

The white man speaks: “Do you think The Atlantic would be diminished if we narrowed the bounds of acceptability in ideological discourse, even as we grow in diversity?” He begins to see the black man’s argument. He begins to discern, as through a glass, darkly, the conflict between diversity of race and diversity of thought. A young woman later asks a similar question. She had heard “a certain amount of nostalgia for that time, which was the ability to just get out there and punch each other and people debating and actually having genuinely different ideas and having that spirit of really wanting to engage. And we just don’t have that anywhere on our website.” (What is this “website”?)

In the end, ‘diversity’ seems to win over open debate at The Atlantic.

Towards the end of the meeting, it becomes clear that the white man is supposed to be in charge. He is the Editor of The Atlantic, ‘tho he always defers to the black man. Indeed, he says at one point: “I mean he’s one of the dearest people in my life. I’d die for him.”

The black man seems to object, and the white man responds ruefully: “Can’t I just express my love for you? What’s so bad? What’s so wrong?” To which the black man responds: “Can I just say — and I would only say this sitting in this room — but that was a very white response.” This seems to be a condemnation. Is love a bad thing? Is love from a white man bad? Do white men always express love for black men?

Or is the black man’s response in fact (that word again) racist?

by ‘POSSUM’

Open Letter to Tom Wolfe

Author and journalist Tom Wolfe died on May 14 of pneumonia, at the age of 88. He was a wonderful writer, of fiction as well as non-fiction, and a penetrating popper of rhetorical bubbles. I corresponded with him a few times about various neuroscience topics.

In 2016, in his book on language, The Kingdom of Speech, Mr. Wolfe wrote about two eminent scientific figures, one from the nineteenth century and one from the twentieth. His take on Noam Chomsky – that the great linguist is somewhat arrogant and immune to empirical evidence – is very plausible. But his view of Charles Darwin, as class-privileged and ambitious for personal fame, does not at all fit with my knowledge of him. I wrote to Mr. Wolfe to make my argument, but he did not respond. Here is the letter:

Dear Tom Wolfe:

You are much too tough on poor old Charlie Darwin who, from everything I have read by him and about him, was a very decent man.

In 1992 or so I made notes for a review of a tendentious and inflated book on Darwin by Desmond and Moore, a book much admired by the propagandist Stephen Jay Gould (I could give you much chapter and verse on Gould’s mendacious treatment of the Bell Curve book and the IQ/heritability controversy in general, for example).  D & M interpreted much of Darwin’s science in social/political terms.  Like you, they think he cheated Wallace.  D & M also favor the reader with many magical intrusions into Darwin’s private thoughts.  I never wrote the review, but most of my notes could do as well for your class-conscious attack on Darwin.

I have read much Darwin and never saw any evidence of snobbery. (And I can claim first-hand knowledge of British snobbery, having a left-school-at-14 cockney father and an Anglo-Indian mother, and being a grammar-school boy myself! And Darwin married into “trade” – his cousin Emma Wedgwood.) Yes, Darwin was hooked in to the establishment, but it was an intellectual establishment, not one based on wealth or class. Darwin and Wallace met and corresponded amiably. As far as I can tell, they got along just fine. Wallace was deferential, but Darwin was the older man and better established.

D & M’s main thesis, like yours, is that Darwin cheated Wallace.  But that is not correct because they, and you, make an implicit assumption that is completely wrong. The wrong assumption is that being first to publish an idea is, and should be, the only basis for assigning scientific credit.  Not true.  The weight of evidence behind a theory – which takes time to collect – is just as important as the theory itself.  Darwin hesitated to publish for some 20 years because he was building his case.  Unlike many modern scientists he did not look for the LPU – “least publishable unit” – as a way to puff up his CV.  He did the right thing by holding back from publication until he had an overwhelming case.  He should not be punished for acting responsibly. And he did think of natural selection first!

That is why Lyell and his other friends wanted him to share credit – not because they were of the same social class.  They knew he had been working for years to find evidence in support of his theory.  Or contrary to it: Darwin was very good about considering contradictory evidence – just read the Origin.

What is more, Wallace agreed he had been treated fairly. He never held anything against Darwin, even calling one of his own books Darwinism, as you point out. So what right have we, knowing less and living in a different time, what right have we to blame Darwin if Wallace did not? (And do you really want to appear to parrot Desmond and Moore?)

Finally, natural selection and language: I agree with you and others that the evolution of language, and human intelligence generally, is still a problem.  But I think Darwin was also well aware of the difficulties.  Unlike Noam C, he was a cautious and thoughtful scientist.  Darwin did make a mistake, though.  He believed that variation – the raw material on which selection must act – is always, or almost always, random and small in extent (he did know about large variants called “sports”, though: he just thought them too rare to have much evolutionary effect). He was wrong on both counts: variation is sometimes large and not random.  He also believed in some Lamarckian effects, inheritance of acquired characters, for which he has been much criticized.  But of course recent work on epigenetics shows he was to some extent right about that.

Incidentally, Darwin also well knew about what he called “correlated variation”: the fact that selection for one characteristic often brings other, irrelevant ones along with it – tameness and floppy ears (dogs, Russian foxes), large beak and large feet (pigeons), large hands and large…(Donald Trump) and so on. Sickle-cell anemia is the classic example: if you have one sickle gene you have limited immunity to malaria; if you have two, you are sick.

I think you and others are correct in doubting that the evolution of language and human intelligence depends much on natural or even sexual selection. It seems obvious to me that it depends much, much more on the very neglected topic of variation: what are the kinds of changes in cognitive repertoire offered up from generation to generation by genetic and epigenetic variation? More generally, is variation small from one generation to the next (as Darwin implies) or is it sometimes large? Is it directional – does it tend to move in a preferred direction (recurrent mutations are one case where there is clearly a built-in trend)? And so forth.

With that sole correction – that humans’ apparent leap in language and cognitive development depends much more on the (largely unknown) properties of genotypic and phenotypic variation than on natural selection – human beings and their evolution may be safely reunited with the rest of the animal kingdom. Darwin was wrong about variation, but not wrong about natural selection. His problem is that natural selection may indeed be almost irrelevant to the evolution of whatever it is that makes people smarter than chimps.

And finally, are language and culture simply a manifestation of human cognitive abilities in general – nothing special to see here, move on? That simply re-labels the problem. Neither a chimp nor even a border collie can spontaneously construct tools or sentences in the way that a human child can. What does the kid have that the ape does not? That is still a problem, whether you call it the evolution of language, the evolution of intelligence, or the evolution of culture.

Sincerely,

John Staddon

On Responsibility and Punishment

Published as: Staddon, J. (1995) On responsibility and punishment.  The Atlantic Monthly, Feb., 88-94.

The litany of social dysfunction is now familiar. The rates of violent crime are higher than they have ever been: Americans kill and maim one another at per-capita rates an order of magnitude higher than those of other industrialized nations. The rate of marriage has been generally declining and the rate of illegitimacy hits new highs each year. Tens of thousands of children have no fathers and no family member or close acquaintance who has a regular job. This pattern is now repeating into a second and third generation. Illiteracy is becoming a problem and schools have so lost authority that the accepted response to armed pupils is to install metal detectors. Senator Moynihan, in a celebrated article, recently pointed out how we cope with social disintegration by redefining deviancy, so that crimes become “normal” behavior.

How did we arrive at this condition?  There’s no short answer, but I have come increasingly to believe that my own profession — psychology — bears a large part of the blame.  The story began many years ago, when psychology defined itself as a science. By thus anointing itself, psychology gained great prestige.  People accepted with little demur prescriptions that would earlier have been condemned on moral grounds.  Don’t spank your child.  Don’t attempt to deter sexual exploration by young people — deterrence is probably bad and will certainly fail.  Punishment is ineffective and should be replaced by positive reinforcement.  Self-esteem is good, social stigma bad.  It is not clear that this advice was all wrong.  What is clear, and what I will show in this article, is that it was not based on science.

Some questions about behavior can be answered — either now or in the future — through the methods of science.  How does visual perception work?  What are the effects of different reward schedules?  How accurate is memory for words and faces?  What lighting conditions are best for different kinds of task?  Which people are likely to succeed in which professions?  Other questions, including apparently simple ones such as the value of some teaching techniques or the legitimacy of corporal punishment, cannot be answered.  They cannot be answered by science because they have consequences that go beyond the individual or far into the future.  Corporal punishment and teaching methods affect not just the child but, eventually, the nature of society.  Society cannot be the subject of experiments, and even if it could, effects of social changes usually take decades or even centuries to play out.  Hence we cannot expect to get hard scientific answers to many social questions.

Obviously, we need to separate those questions that belong in the domain of science from those that do not; to separate questions which can be answered definitively from those which cannot.  Unfortunately, psychology as a profession tends to assume that all questions about human action fall within its domain and all can eventually be answered with the authority of science — and this imperialism has gone largely unquestioned.

Psychologists and behavioral psychiatrists seem like a diverse crew. At one end we have “touchy-feelies” who say things like “any of us who were raised in the traditional patriarchal system have trouble relating because we’ve been ‘mystified’ to some degree by an upbringing that compels obedience and rules by fear, a raising that can be survived only by a denial of the authentic self” (John Bradshaw). At the other we have the behaviorists, who say things like “In the scientific view. . . a person’s behavior is determined by a genetic endowment traceable to the evolutionary history of the species and by the environmental circumstances to which as an individual he has been exposed” (B. F. Skinner).

Skinner and Bradshaw seem to agree on little. Skinner had no time for “authentic selves” or “feelings”; Bradshaw undoubtedly feels little kinship with Skinnerian “rat psychology.” It may come as a surprise, therefore, to learn that psychological pundits from Bradshaw to Skinner agree on several important things. Almost all have a perspective that is entirely individual. All reject what John Bradshaw calls “fear,” what Fred Skinner called “aversive control,” and what the rest of us call punishment. Nearly all psychologists believe that behavior is completely determined by heredity and environment. A substantial majority agree with Skinner that determinism rules out the concept of personal responsibility. This opposition between determinism and responsibility is now widely accepted, not just by behaviorists but by every category of mental-health professional, by journalists, by much of the public — and by many in the legal profession.

Behaviorism is the most self-consciously “scientific” of the many strands that make up psychology.  Although recently somewhat overshadowed by other movements such as cognitive psychology, the influence of behaviorism during most of the short history of psychology has been overwhelming.  Consequently, when behaviorists have produced “hard” evidence in favor of beliefs already shared by other psychologists, the combined effect has always been decisive.  I will describe just such a confluence in this article.

About moral positions, argument is possible. But about scientific “facts” there can be no argument. Skinner, and the behaviorist movement of which he was the head, delegitimized both individual responsibility and punishment. Responsibility was dismissed by philosophical argument. Punishment was ruled out not by moral opposition but by supposedly scientific laboratory fact. Less “scientific” psychologists and psychiatrists have also agreed that punishment is bad, but the reasons for their consensus are more complex and to do with the social function of psychotherapy. Nevertheless, for the majority of psychologists and psychiatrists, the “facts” established by the behaviorists have always constituted an unanswerable argument — especially if they support preexisting beliefs. This “scientific” consensus has had a devastating effect on the moral basis of American society.

I will argue just two things in this article: first, that there is no opposition between behavioral determinism and the notion of individual responsibility. And second, that the supposedly scientific basis for blanket opposition to punishment as a legitimate social instrument – in the family, school and workplace, and the judicial system – is nonexistent. My focus is Skinnerian behaviorism, because it is the area of psychology that has been most concerned with large social issues. But the key ideas have been carried forward by a much larger number of psychologists and psychiatrists who have never thought of themselves as behaviorists.

B. F. Skinner’s 1971 best-seller Beyond Freedom and Dignity contains his most concerted, and successful, attack on traditional methods of social control. Most psychotherapists, behaviorist and nonbehaviorist alike, have come to agree with the substance of Skinner’s message: that punishment is bad and that the idea of individual responsibility is a myth. Skinner’s argument is simply wrong. It will be a task for future sociologists to understand why such a bad argument received such ready assent.

Skinner contrasts the “prescientific” view that “a person’s behavior is at least to some extent his own achievement” with the “scientific” view that behavior is completely determined by heredity and environment.  The conventional view, says Skinner, is that “[A] person is free.  He is autonomous in the sense that his behavior is uncaused.  He can therefore be held responsible for what he does and justly punished if he offends.  That view, together with its associated practices, must be re-examined when a scientific analysis reveals unsuspected controlling relations between behavior and environment.”  What’s wrong with these apparently reasonable claims?

FREEDOM

Is man free?  Well, as the professor used to say, it depends on what you mean by “freedom.”  The bottom line is that you’re free if you feel free.  Skinner’s definition is simpler: to him, freedom is simply the absence of punishment (“aversive contingencies”).  But we are all “punished” by gravity if we don’t obey its rules.  The punishment can sometimes be quite severe, as beginning cyclists and skaters can attest.  Yet we don’t feel unfree when we learn to skate or cycle.  Punishment doesn’t always abolish freedom — and freedom is not just absence of punishment.

Skinner has another definition for freedom: absence of causation (“autonomous man”). This is an odd notion indeed. How can one ever prove absence of causation? In science, a conjecture like this is called “proving the null hypothesis” and everyone accepts its impossibility. We might, however, try to prove the converse: that people feel unfree when their behavior is determined, that is to say, when it can be predicted. For example, suppose a rich and generous aunt offers her young niece a choice between a small sum of money and a large sum. In the absence of any contrary factors, the niece will doubtless pick the larger over the smaller (classical economics rests on the assumption that this will always be the free choice). Can we predict the niece’s behavior? Certainly. Is her behavior determined? Yes, by all the usual criteria. Is she unfree? She certainly doesn’t feel unfree. People generally feel free when they follow their preferences, no matter how predictable those preferences may be. Behavior can be predicted in other contexts as well. Mathematicians predictably follow the laws of arithmetic, architects the laws of geometry and baseball players the laws of physics. The behavior of all is determined; yet all feel free. Ergo, predictability — determinism — doesn’t equal absence of freedom as Skinner proposes.

So, even if we could predict all human behavior with the precision of these examples, this wonderful new science would have no bearing at all on the idea of freedom.

PUNISHMENT

There’s another strand in Skinner’s assault on traditional practices, his attack on punishment.  He rejects punishment not because it’s morally wrong, but because it doesn’t work.  (W. H. Auden had no such doubts about punishment when he remarked “Give me a no-nonsense, down-to-earth behaviorist, a few drugs, and simple electrical appliances, and in six months I will have him reciting the Athanasian creed in public.”)  Since everyone knows that some punishments work, sometimes, you’ll naturally be curious to know how Skinner defended this position.  His argument boils down to three points: punishment is ineffective because when you stop punishing, the punished behavior returns; punishment provokes “counterattack”; positive reinforcement is better.  Let’s look at each of these.

Punishment is ineffective.  Well, no, it isn’t.  Common sense aside, laboratory studies with pigeons and rats (the data base for Skinner’s argument) show that punishment (usually a brief electric shock) works very well to suppress behavior, so long as it is of the right magnitude and follows promptly on the behavior that is to be suppressed.  If the rat gets a moderate shock when he presses the bar, he stops pressing more or less at once.  If the shock is too great, the rat stops doing anything; if it’s too weak, he may still press the bar once in a while; if it’s just right, he quits pressing, but otherwise behaves normally.  Does the punished behavior return when the punishment is withdrawn?  It depends on the training procedure.  A rat well-trained on an avoidance procedure called shock postponement, in which he gets no shock so long as he presses the lever every now and then, may keep pressing indefinitely even after the shock generator is disconnected.  In this case, punishment has very persistent effects indeed.
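
Why such persistence? A toy sketch in Python shows the logic of a shock-postponement schedule. Everything here is invented for illustration (the 20-second postponement interval, the 15-second response rate, the session length); it is not Skinner’s procedure in detail. The point is that a well-trained animal that responds faster than the postponement interval never contacts the shock, so disconnecting the generator changes nothing it can detect.

```python
# Toy sketch of a shock-postponement (avoidance) schedule. Parameters
# are invented: a shock is due whenever 20 s pass without a lever press;
# a well-trained rat is assumed to press about every 15 s.

def shocks_experienced(press_every, postpone_s, session_s, generator_on):
    shocks, last_press, last_reset = 0, 0, 0
    for t in range(1, session_s + 1):
        if t - last_press >= press_every:
            last_press = last_reset = t        # press postpones the shock
        elif generator_on and t - last_reset >= postpone_s:
            shocks += 1                        # no press in time: shock
            last_reset = t
    return shocks

print(shocks_experienced(15, 20, 3600, True))   # trained, generator on: 0
print(shocks_experienced(15, 20, 3600, False))  # generator off: also 0
# The trained rat's experience is identical either way, so there is no
# signal to stop pressing: the behavior can persist indefinitely.
```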

Punishment provokes counterattack.  Sure; if a food-producing lever also produces shock, the rat will try to get the food without getting the shock.  A famous picture in introductory psychology texts is called “breakfast in bed.”  It shows a rat in a shock-food experiment that learned to press the lever while lying on its back, insulated by its fur from the metal floor grid.  Skinner was right that rats, and people, try to beat a punishment schedule.

Positive reinforcement is more effective.  Not true.  The effects of positive reinforcement also dissipate when the reinforcement is withdrawn, and there is no positive-reinforcement procedure that produces such persistent behavior as a shock-postponement schedule.  Positive reinforcement also provokes “counterattack.”  Every student who cheats, every gambler who rigs the odds, every robber and thief, shows the “counterattack” provoked by positive reinforcement schedules.

There are other arguments on both sides, but the net conclusion must be that the scientific evidence is pretty neutral in deciding between reward and punishment.  They both have their advantages and disadvantages: punishment is better for suppressing behavior, positive reinforcement better for generating behavior; avoidance (punishment) schedules tend to produce more persistent behavior than reward schedules, and so on.  If we wish to favor reward over punishment, we must make a moral, not a scientific, case.

JUSTICE AND DETERMINISM

All this might be academic, but for its impact on legal thinking. The opposition between determinism and responsibility, and the doubts cast on punishment, do seem to raise issues of justice. If “the Devil (or, at least, ‘my environment’) made me do it!” surely the rigors of just punishment (of dubious effectiveness in any case, according to psychologists) should be spared? In the era of Lorena Bobbitt, the Reginald Denny attackers and the Menendez brothers, this argument evidently strikes a receptive chord in the hearts of American juries.

Too bad, because the argument is false.  I’ve already argued that behavior can be both determined (in the sense of predictable) and free.  I’ll argue now that the legal concept of personal responsibility is founded on this kind of predictability.  Personal responsibility demands that behavior be predictable, not the opposite, as Skinner contended.

What is the purpose of judicial punishment?  Legal scholars normally identify two purposes, retribution and deterrence.  Retribution is a moral concept, which need not concern us here.  But deterrence is a practical matter.  Arguments about deterrence are clouded by ideology and the impossibility of deciding the issue by the methods of science.  Nevertheless, there is an approach to deterrence that is straightforward and acceptable to most people which much simplifies a jury’s task.  The idea is that the purpose of legal punishment is to minimize the total amount of suffering in society, the suffering caused by crime as well as the suffering caused by punishment.  The concept is simple: if thievery is punished by amputation, the level of thievery will be low, but the suffering of thieves will be very high, higher perhaps than warranted by the reduction in theft.  On the other hand, if murderers go free, the level of murder will be high and the ease of the killers will not be balanced by the suffering of the rest.  We may argue about how to measure suffering and how to assess the effect of a given level of legal punishment for a given crime, but the principle, which I call the social view of punishment, seems reasonable enough.  It is consistent with the fundamental principle that government exists for the welfare of society as a whole, not for the good of any particular individual.  Once they understand the argument, most people seem to agree that the social view of punishment is acceptable, although not, perhaps, the whole story.  What people do not seem to realize is that this perfectly reasonable view is not opposed to determinism: it requires determinism.
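
The social view can be written down as a minimization. The sketch below is my own toy formalization, not the article’s: both curves (a deterrence curve for crime-caused suffering, a linear cost for punishment-caused suffering) are invented purely for illustration.

```python
# Toy formalization of the 'social view of punishment': choose a penalty
# severity minimizing total suffering = suffering from crime (reduced by
# deterrence) + suffering inflicted by punishment. Curves are invented.

def crime_suffering(severity):
    return 100.0 / (1.0 + severity)      # more deterrence, less crime

def punishment_suffering(severity):
    return 5.0 * severity                # harsher penalties hurt more

def total_suffering(severity):
    return crime_suffering(severity) + punishment_suffering(severity)

severities = [s / 10.0 for s in range(0, 101)]
best = min(severities, key=total_suffering)
print(best, total_suffering(best))   # an interior optimum: neither
                                     # amputation for theft nor impunity
```

With these invented curves the optimum lies strictly between the extremes, which is exactly the thief-amputation versus murderers-go-free point made above.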

From an objective point of view — the only legitimate point of view for science — “holding a man responsible” for his actions means nothing more than making him subject to punishment if he breaks the law. The social view of punishment assumes that people are sensitive to reward and punishment, that is, that behavior is predictably subject to causal influences. If criminal behavior is predictably deterred by punishment, the justly punished criminal is less likely to disobey the law again, and serves as an example to other potential lawbreakers. This is the only objective justification for punishment. But if behavior were unpredictable and unaffected by “reinforcement contingencies” — if it were uncaused, in Skinner’s caricature of “freedom” — there would be absolutely no point to punishment or any other form of behavioral control, because it would have no predictable effect. In short, legal responsibility requires behavioral determinism, not the reverse.

It is interesting to reflect that the objective case for personal responsibility rests entirely on the beneficial collective effects (on the sum total of human suffering) of just punishment.  It does not rest on philosophical notions of individual autonomy, or personal intent, or anything else at the level of the individual — other than normal susceptibility to reward and punishment.  The idea that the law is somehow concerned with the mental state of the accused, rather than with the consequences of judicial action, has taken root because Skinner, like most other psychologists, focused so exclusively on the individual.

If a person’s “behavior is at least to some extent his own achievement” then, says Skinner, he can be blamed for failure and praised for success. Since personal responsibility is a myth (he concludes) praise and blame are irrelevant. But if personal responsibility is defined as I have defined it, praise and blame need not – should not – be abandoned. In the social view, the use of praise and blame has nothing to do with the ontology of personal responsibility, the epistemology of intention or whatnot. It has everything to do with reward and punishment (in other contexts, Skinner admits as much, at least with respect to praise). We praise good behavior because we wish to see more of it; we blame the criminal because we wish less crime. Praise and blame are perhaps the strongest incentives available to society. By giving them up, Skinner gave up our best tools for social order.

It is extraordinary that Skinner seems to have missed the connection between determinism and the sanctions imposed by the legal system.  He spent his life studying how the behavior of animals is determined by the conditions of reward and punishment.  He and his students discovered dozens of subtle and previously unsuspected regularities in the actions of reward and punishment.  Yet he failed to see that the system of rewards and punishments imposed by society works in much the same way as his reinforcement schedules.

Remarkably, law and science seem to agree on the social view of punishment.  Only when punishment is likely to be completely ineffective as a deterrent does the law limit its use.  If the criminal is insane, or if injury was the unintended result of actions whose harmful outcome was unforeseeable, no guilt is attached to the perpetrator and no punishment is given — presumably because punishment can play no role in preventing the recurrence of such acts.  There is surprising congruence between the legal concept of responsibility and the function of punishment as a deterrent.   “Guilt” is established not so much by the act, as by the potential of punishment to deter the act.

THE “VICTIM” DEFENSE: WHAT SHOULD THE JURY DO?

These arguments greatly simplify a jury’s task. Jurors have no need to puzzle through philosophical questions about “intent” or knowledge of right and wrong. Nor do they need to ask whether criminal behavior was determined by the defendant’s past history. (The scientific answer will almost always be “yes,” because almost all behavior is determined.) History is not the point. The point is: Did the defendant know that his actions would have an illegal outcome? And, if the accused had known, in advance of the act, that sure punishment would follow, would he still have acted as he did? If the criminal would have been deterred by the prospect of punishment then, says the social view, he should be punished. Did the Menendez brothers know that their actions would result in the death of their parents? Presumably, yes. If they had known that these acts would result in severe punishment (life in prison, death), would they have acted nevertheless? Probably not. Verdict: guilty. On the other hand, if the jury has reason to believe that the defendants’ past history was so horrific that they would have murdered even in the face of certain punishment, then some other verdict (which might still involve removing these damaged men from society) would be appropriate.
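
Reduced to a bare decision rule – my own distillation of the two questions above, for illustration only, not a legal standard – the jury’s task looks like this:

```python
# The jury's two questions from the paragraph above, as a bare decision
# rule. A distillation for illustration only, not a legal standard.

def social_view_verdict(knew_outcome_illegal, deterrable_by_sure_punishment):
    if not knew_outcome_illegal:
        return "no guilt"       # punishment could not have deterred the act
    if deterrable_by_sure_punishment:
        return "guilty"         # deterrence can work, so punish
    return "other verdict"      # undeterrable: perhaps confinement, not blame

print(social_view_verdict(True, True))    # the Menendez example: guilty
print(social_view_verdict(True, False))   # horrific history: other verdict
```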

THE PROPER ROLE OF PSYCHOLOGY

The social view of punishment is as far as psychology can go towards prescribing social policy.  Given a certain set of values, psychology may help us decide what system of rewards and punishments will be helpful in promoting them.  But the social view of reward and punishment does not by itself prescribe social policy.  Our value system, our morality, plays a legitimate role in measuring “suffering,” in evaluating known outcomes and in judging the rightness or wrongness of particular rewards and punishments.  We’re less moved by the plight of the disappointed thief who breaks open an empty safe than by the suffering of a mugging victim, for example.  Psychology can tell us a little (only a little, since we don’t do such experiments on human beings) about the individual effects of corporal punishment vs. the effects of a jail term; it cannot tell us whether corporal punishment is cruel or not.  Social science can tell us that more people will be killed by guns if guns are freely available than if they are not.  It cannot tell us whether the freedom to bear arms is an inalienable right.  Psychology can tell us something about the extent of homosexuality in different cultures; it cannot tell us whether homosexuality is good, bad or a matter of indifference.  Psychology can also tell us that social opprobrium — Hester Prynne’s “A”, blame, or the big red “D” some have proposed for drunk drivers — is often an effective deterrent.  It cannot tell us whether such punishments are “right” or not.  Scientific psychology, like all science, is amoral: it tells us what is, or what might be — not what should be.  Psychologists who offer more, promoters of “authentic selves” or punishment-free societies, are peddling not science but faith.

Was Darwin Wrong?

Or have critics – and some fans – missed the point?

Christopher Booker is a contrarian English journalist who writes extensively on science-related issues.  He has produced possibly the best available critical review of the anthropogenic global warming hypothesis. He has cast justifiable doubt on the alleged ill effects of low-level pollutants like airborne asbestos and second-hand tobacco smoke.

Booker has also lobbed a few hand-grenades at Darwin’s theory of evolution.  He identifies a real problem, but his criticism misses a point which is also missed even by some Darwin fans.

Is anti-Darwin ‘politically incorrect’?

In a 2010 article, Booker was reacting to a seminar of Darwin skeptics, many very distinguished in their own fields. These folk had faced hostility from the scientific establishment, which seemed to Booker excessive, or at least unfair. Their discussion provided all the ingredients for a conspiracy novel:

[T]hey had come up against a wall of hostility from the scientific establishment. Even to raise such questions was just not permissible. One had been fired as editor of a major scientific journal because he dared publish a paper sceptical of Darwin’s theory. Another, the leading expert on his subject, had only come lately to his dissenting view and had not yet worked out how to admit this to his fellow academics for fear that he too might lose his post.

The problem was raised at an earlier conference:

[A] number of expert scientists came together in America to share their conviction that, in light of the astonishing intricacies of construction revealed by molecular biology, Darwin’s gradualism could not possibly account for them. So organizationally complex, for instance, are the structures of DNA and cell reproduction that they could not conceivably have evolved just through minute, random variations. Some other unknown factor must have been responsible for the appearance of these ‘irreducibly complex’ micromechanisms, to which they gave the name ‘intelligent design’. [my emphasis]

I am a big fan of Darwin. I also have respect for Booker’s skepticism.  The contradiction can be resolved if we look more carefully at what we know now – and at what Darwin actually said.

The logic of evolution

There are three parts to the theory of evolution:

  1. The fact of evolution itself. The fact that the human species shares common ancestors with the great apes.  The fact that there is a phylogenetic “tree of life” which connects all species, beginning with one or a few ancestors who successively subdivided or became extinct in favor of a growing variety of descendants.  Small divergences became large ones as one species gave rise to two and so on.
  2. Variation: the fact that individual organisms vary – have different phenotypes, different physical bodies and behaviors – and that some of these individual differences are caused by different genotypes, so are passed on to descendants.
  3. Selection: the fact that individual variants in a population will also vary in the number of viable offspring to which they give rise. If number of offspring is correlated with some heritable characteristic – if particular genes are carried by a fitter phenotype – then the next generation may differ phenotypically from the preceding one.
    Notice that in order for selection to work, at every stage the new variant must be more successful than the old.

An example: Rosemary and Peter Grant looked at birds on the Galapagos Islands. They studied populations of finches, and noticed surprisingly rapid increases in beak size from year to year. The cause was a change in weather, which for a few years shifted the available food from easy- to hard-to-crack nuts. Birds with larger beaks were more successful in getting food and in leaving descendants. Natural selection operated amazingly quickly, leading to larger average beak size within just a few years. Bernard Kettlewell observed a similar change, over a slightly longer term, in the color of the peppered moth in England. As tree bark changed from light to dark to light again as industrial pollution waxed and waned over the years, so did the color of the moths. There are several other “natural experiments” that make this same point.

None of the serious critics of Darwinian evolution seems to question evolution itself, the fact that organisms are all related and that the living world has developed over many millions of years.  The idea of evolution preceded Darwin. His contribution was to suggest a mechanism, a process – natural selection – by which evolution comes about.  It is the supposed inadequacy of this process that exercises Booker and other critics.

Looked at from one point of view, Darwin’s theory is almost a tautology, like a theorem in mathematics:

  1. Organisms vary (have different phenotypes).
  2. Some of this variation is heritable, passed from one generation to the next (different genotypes).
  3. Some heritable variations (phenotypes) are fitter (produce more offspring) than others because they are better adapted to their environment.
  4. Ergo, each generation will be better adapted than the preceding one. Organisms will evolve.

Expressed in this way, Darwin’s idea seems self-evidently true.  But the simplicity is only apparent.
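
The four steps can in fact be written as a few lines of simulation. This is only a sketch: a single heritable number stands in for the phenotype, and the population size, mutation noise and “fitter half reproduce” rule are all invented for illustration.

```python
import random

# Minimal sketch of steps 1-4: one heritable number stands in for the
# phenotype; the fitter half become parents. All values are invented.

random.seed(1)
POP, GENS, NOISE = 200, 30, 0.05

pop = [random.gauss(0.0, 1.0) for _ in range(POP)]        # 1. variation
for _ in range(GENS):
    parents = sorted(pop)[POP // 2:]                      # 3. fitter leave more offspring
    pop = [random.choice(parents) + random.gauss(0.0, NOISE)
           for _ in range(POP)]                           # 2. heritable, with small noise
print(sum(pop) / POP)   # 4. the mean rises well above the starting value of 0
```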

The direction of evolution

Darwinian evolution depends on not one but two forces: selection, the gradual improvement from generation to generation as better-adapted phenotypes are selected; and variation: the set of heritable characteristics that are offered up for selection in each generation.  This joint process can be progressive or stabilizing, depending on the pattern of variation.  Selection/variation does not necessarily produce progressive change.  This should have been obvious, for a reason I describe in a moment.

The usual assumption is that among the heritable variants in each generation will be some that fare better than average. If these are selected, then the average must improve, and the species will change – adapt better – from one generation to the next.

But what if variation only offers up individuals that fare worse than the modal individual? These will all be selected against and there will be no shift in the average; adaptation will remain as before. This is called stabilizing selection and is perhaps the usual pattern. Stabilizing selection is why many species in the geological record have remained unchanged for many hundreds of thousands, even millions, of years. Indeed, a forerunner of Darwin, the ‘father of geology’, the Scot James Hutton (1726-1797), came up with the idea of natural selection as an explanation for the constancy of species. The difference – progress or stasis – depends not just on selection but on the range and type of variation.
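
The same sketch illustrates stabilizing selection. Assume – purely for illustration – that variation only offers up offspring no better than their parents; selection then prunes the losers, but the population stops improving.

```python
import random

# Variant of the sketch above: variation is one-sided, offering only
# offspring no better than the parent. Selection then merely stabilizes.

random.seed(1)
POP, GENS = 200, 30

pop = [random.gauss(0.0, 1.0) for _ in range(POP)]
ceiling = max(pop)                                  # best founding phenotype
for _ in range(GENS):
    parents = sorted(pop)[POP // 2:]                # selection, as before
    pop = [random.choice(parents) - abs(random.gauss(0.0, 0.05))
           for _ in range(POP)]                     # only 'worse' variants
print(max(pop) <= ceiling)   # True: no progress, just stasis near the ceiling
```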

The structure of variation

Darwin’s process has two parts: variation is just as important as selection. Indeed, without variation, there is nothing to select. But like many others, Richard Dawkins, a Darwinian fundamentalist, puts all the weight on selection: “natural selection is the force that drives evolution on,” says Dawkins in one of his many TV shows. Variation represents “random mistakes” and the effect of selection is like “modelling clay”. Like Christopher Booker, he seems to believe that natural selection operates on small, random variations.

Critics of evolution simply find it hard to believe that the complexity of the living world can all be explained by selection from small, random variations.  Darwin was very well aware of the problem: “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” [Origin]  But he was being either naïve or disingenuous here.  He should surely have known that outside the realm of logic, proving a negative, proving that you can’t do something, is next to impossible.  Poverty of imagination is not disproof!

Darwin was concerned about the evolution of the vertebrate eye: focusing lens, sensitive retina and so on.  How could the bits of an eye evolve and be useful before the whole perfect structure has evolved?  He justified his argument by pointing to the wide variety of primitive eyes in a range of species that lack many of the elements of the fully-formed vertebrate eye but are nevertheless better than the structures that preceded them.

There is general agreement that the focusing eye could have evolved in just the way that Darwin proposed.  But there is some skepticism about many other extravagances of evolution: all that useless patterning and behavior associated with sexual reproduction in bower birds and birds of paradise, the unnecessary ornamentation of the male peacock and many other examples of apparently maladaptive behavior associated with reproduction, even human super-intelligence – we seem to be much smarter than we needed to be as hunter-gatherers.  The theory of sexual selection was developed to deal with cases like these, but it must be admitted that many details are still missing.

The fundamental error in Booker’s criticism of Darwin, as well as in Dawkins’ celebration of him, is the claim that evolution always occurred “just through [selection of] minute, random variations”. Selection, natural or otherwise, is just a filter. It creates nothing. Variation proposes; selection just disposes. All the creation is supplied by the processes of variation. If variation is not totally random or always small in extent, if it is creating complex structures, not just tiny variations in existing structures, then it is doing the work, not selection.

Non-random variation

In Darwin’s day, nothing was known about genetics. He saw no easy pattern in variation, but was impressed by the power of selection, which was demonstrated in artificial selection of animals and crops. It was therefore reasonable and parsimonious for him to assume as little structure in variation as possible. But he also discussed many cases where variation is neither small nor random. So-called “sporting” plants are examples of quite large changes from one generation to the next, “that is, of plants which have suddenly produced a single bud with a new and sometimes widely different character from that of the other buds on the same plant.” What Darwin called correlated variation is an example of linked, hence non-random, characteristics. He quotes another distinguished naturalist writing that “Breeders believe that long limbs are almost always accompanied by an elongated head” and “Colour and constitutional peculiarities go together, of which many remarkable cases could be given among animals and plants.” Darwin’s observation about correlated variation has been strikingly confirmed by a long-term Russian experiment with silver foxes selectively bred for their friendliness to humans. After several generations, the now-friendly animals began to show many of the features of domestic dogs, like floppy ears and wagging tails.

“Monster” fetuses and infants with characters much different from normal have been known for centuries. Most are mutants and they show large effects. But again, they are not random. It is well known that some inherited deformities, like extra fingers and limbs or two heads, are relatively common, but others – a partial finger or half a head – are rare to non-existent.

Most monsters die before or soon after birth. But once in a very long while such a non-random variant may turn out to succeed better than the normal organism, perhaps lighting the fuse to a huge jump in evolution like the Cambrian explosion. Stephen Jay Gould publicized George Gaylord Simpson’s “tempo and mode in evolution” as punctuated equilibrium, to describe the sometimes sudden shift from stasis to change in the history of species evolution. Sometimes these jumps may result from a change in selection pressures. But some may be triggered by an occasional large monster-like change in phenotype with no change in the selection environment.

The kinds of phenotypic (observed form) variation that can occur depend on the way the genetic instructions in the fertilized egg are translated into the growing organism. Genetic errors (mutations) may be random, but the phenotypes to which they give rise are most certainly not. It is the phenotypes that are selected, not the genes themselves. So selection operates on a pool of (phenotypic) variation that is not always “small and random”.

Even mutations themselves do not in fact occur at random.  Recurrent mutations occur more frequently than others, so would resist any attempt to select them out.  There are sometimes links between mutations so that mutation A is more likely to be accompanied by mutation B (“hitchhiking”) and so on.
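
That resistance is the textbook “mutation-selection balance”: selection removes a deleterious allele at a rate set by its selection coefficient s, recurrent mutation re-creates it at rate μ, and the frequency settles near μ/s instead of falling to zero. A numerical sketch, with invented rates and the standard one-locus approximation:

```python
# Mutation-selection balance, sketched numerically. Selection removes a
# deleterious allele (coefficient s); recurrent mutation re-creates it
# (rate mu). Rates are invented; this is the standard one-locus
# approximation for an allele whose cost is expressed directly.

mu, s = 1e-5, 0.01
q = 0.0                              # allele frequency, starting absent
for _ in range(10_000):              # generations
    q = q * (1 - s) + mu * (1 - q)   # selection removes, mutation reinjects
print(q, mu / s)   # q settles near mu/s = 0.001, however long selection runs
```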

Is there structure to variation?

An underlying mystery remains: just how is the information in the genes translated during development into the adult organism? How might one or two modest mutations sometimes result in large structured changes in the phenotype? Is there any directionality to such changes? Is there a pattern? Some recent studies of the evolution of African lake fish suggest that there may be a pre-determined pattern. Genetically different cichlid fish in different lakes have evolved to look almost identical. “In other words, the ‘tape’ of cichlid evolution has been run twice. And both times, the outcome has been much the same.” There is room, in other words, for the hypothesis that natural selection is not the sole “driving force” in evolution. Some of the process, at least, may be pre-determined.

The laws of development (ontogenesis), if laws there be, still elude discovery. But the origin of species (phylogenesis) surely depends as much on them as on selection.  Perhaps these largely unknown laws are what Darwin’s critics mean by ‘intelligent design’?  But if so, the term is deeply unfortunate because it implies that evolution is guided by intention, by an inscrutable agent, not by impersonal laws.  As a hypothesis it is untestable.  Darwin’s critics are right to see a problem with “small, random variation” Darwinism.  But they are wrong to insert an intelligent agent as a solution and still claim they are doing science. Appealing to intelligent design just begs the question of how development actually works. It is not science, but faith.

Darwin’s theory is not wrong. As he knew, but many of his fans do not, it is incomplete. Instead of paying attention to the gaps, and seeking to fill them, these enthusiasts have provided a straw man for opponents to attack. Emboldened by its imperfections, the critics have proposed as an alternative ‘intelligent design’: an untestable non-solution that blocks further advance. Darwin was closer to the truth than his critics – and closer than some simple-minded supporters.

––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––

John Staddon is James B. Duke Professor of Psychology and Professor of Biology, Emeritus, at Duke University. Recent books are Adaptive Behavior and Learning (2nd edition, Cambridge University Press, 2016) and Scientific Method: How Science Works, Fails to Work or Pretends to Work (Routledge, 2017).

A study in perception: Feelings cause…feelings

Statistical correlations and thousands of subjects are not enough

The #MeToo movement has taken off, and so has concern about the bad effects attributed to everything from mildly disagreeable or misperceived ‘microaggressions’ to physical assault. Naturally, there is a desire among socially concerned scientists to study the issue. Unfortunately, it is tough to study the effects of a bad social environment. You can’t do experiments – vary the environment and look at the effect – and feelings are not the same thing as verifiable data. But the pressure to demonstrate scientifically what many ‘know’ to be true is irresistible. The result is a plethora of supposedly scientific studies, using methods that pretend to prove what they in fact cannot. Here is a recent example.

“Recent social movements such as the Women’s March, #MeToo, [etc.] draw attention to the broad spectrum [of] gender-related violence that is pervasive in the United States and around the world,” the authors claim in a May 5 op-ed in the Raleigh News and Observer. The title of their study is: “Discrimination, Harassment, and Gendered Health Inequalities: Do Perceptions of Workplace Mistreatment Contribute to the Gender Gap in Self-reported Health?”  It captures in one place some of the worst errors that have crept into social science in recent decades: correlations treated as causes, and subjective judgment treated as objective data.  This study even manages to combine the two: subjective judgments are treated as causes of…subjective judgments.

The article, in the Journal of Health and Social Behavior, is based on reports from 5,579 respondents collected in three surveys, in 2006, 2010 and 2014. The report applies a battery of statistical tests (whose assumptions are never discussed) to people’s answers to questions about how they feel about their mental and physical health; about gender, age and racial discrimination; and about sexual and other harassment.  The large number of subjects just about guarantees that some ‘statistically significant’ correlations will be found.
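To see why, here is a minimal sketch in Python. The number of survey items, the random seed, and the use of plain Pearson correlations are illustrative assumptions, not the study’s actual methods; the point is only that many tests on a large sample yield ‘significant’ correlations even from pure noise.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)   # arbitrary seed, for reproducibility
n_respondents = 5579             # the study's sample size
n_items = 20                     # assumed number of survey items

# Pure noise: no true relationships among any of the "survey" variables
data = rng.normal(size=(n_respondents, n_items))

significant, tests = 0, 0
for i in range(n_items):
    for j in range(i + 1, n_items):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        significant += p < 0.05

print(f"{significant} of {tests} pure-noise correlations are 'significant' at p < .05")
# Expect roughly 5% of the 190 tests – about 9 or 10 'findings' from nothing.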

The study looks at two sets of subjective variables – self-reports – and associates them in a way that will look like cause and effect to most readers.  But the link between the two sets is not causal – no experiment was done or could be done – it is a statistical correlation.

Did the authors check to see if self-reports (by “economically active respondents” healthy enough to answer a survey) are reliable predictors of actual, physical health? No, they did not. Their claim that self-reports give an accurate picture of health is inconsistent even with data they do report: “In general, studies show that men report better self-rated health than women…[self-report] is nonetheless an important dimension of individuals’ well-being and is strongly correlated with more ‘objective’ indicators of health, including mortality.” Er, really – given that women live longer than men but (according to the authors) report more ill-health? And why the ‘scare’ quotes around ‘objective’?

The authors’ long, statistics-stuffed report is full of statements like “Taken together, these studies suggest that perceptions of gender discrimination, sexual harassment, and other forms [of] workplace mistreatment adversely affect multiple dimensions of women’s health [my emphasis].” So now perceptions (of gender discrimination) affect [i.e., cause] not mere perceptions but “multiple dimensions” of women’s health.  Unfortunately, these “multiple dimensions” include no actual, objective measures of health.  In other words, this study has found nothing – because finding a causal relation between one ‘perception’ and another is essentially impossible, and because a health study should be about reality, not perceived reality.

The main problem with this and countless similar studies is that although they usually avoid saying so directly, the authors treat a correlation between A and B as the same as A causes B.  Many, perhaps most, readers of the report will conclude that women’s bad experiences are a cause of their bad mental and physical health.  That may well be true, but not because of this study. We have absolutely no reason to believe either that people’s self-reports are accurate reflections of reality or, more importantly, that a correlation is guaranteed to be a cause. Even if these self-reports are accurate, it is impossible to conclude that one causes the other: either that feeling harassed causes sickness, or that feeling sick makes you feel harassed.

Studies like this are nothing but “noise” tuned to prevailing opinion. They overwhelm the reader with impressive-sounding statistics whose assumptions are never discussed. They mislead and muddle.

The periodical The Week has a column called “Health Scare of the Week”; that is where items like this belong, not on the editorial pages – or in a scientific journal.

Is this why so many NHST studies fail to replicate?

Most ‘significant’ results occur on the first try

Leif Nelson has a fascinating blog on the NHST method, statistical significance and the chance of a false positive.  The question can be posed in the following way: suppose 100 labs begin the same bad study, i.e., a study involving variables that in fact have no effect. Once a lab gets a “hit,” it stops trying. If the chosen significance level is p (commonly p = 0.05), then approximately 5 of the 100 labs will, by chance, get a “hit” – a significant result – on the first try.  If the remaining 95 labs attempt to replicate, again about 5 percent of them (between four and five labs) will “hit” – and so on.  So the number of ‘hits’ is a declining (exponential) function of the number of trials – even though the chance of a hit is constant, trial by trial.

The reason for the trial-by-trial decline, of course, is that every lab has an opportunity for a hit on trial 1, but only a fraction of labs, 1-p = 0.95, goes on to a second trial, and so on. The probability of a hit per opportunity remains constant at p.  The average number of trials per hit is 1/p = 20 in this case, but the modal number is just one, because the number of opportunities is greatest on the first trial.
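Here is a minimal simulation of the thought experiment in Python (the lab count and seed are arbitrary assumptions). The trial on which a lab first “hits” follows a geometric distribution: mean 1/p = 20, mode 1.

import numpy as np

rng = np.random.default_rng(1)   # arbitrary seed, for reproducibility
p = 0.05                         # significance level = false-positive rate per trial
n_labs = 100_000                 # many simulated labs, so the averages are stable

# Trial number of each lab's first false positive: geometric with parameter p
first_hit = rng.geometric(p, size=n_labs)

print("mean trials to first hit:", first_hit.mean())                 # about 20 = 1/p
print("modal trial:", np.bincount(first_hit).argmax())               # 1
print("share of labs hitting on trial 1:", (first_hit == 1).mean())  # about 0.05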

On the other hand, the more trials are carried out, the more likely it is that there will be a ‘hit’ – this even though the maximum number (but not probability) of hits is on the first trial.  To see this, imagine running the hundred experiments for, say, 10 repeats each. The probability of non-significance on any single trial is 1-p = 0.95; the probability of no ‘hit’ on trials 1 and 2 is (1-p)^2, on trials 1 through 3, (1-p)^3, and so on.  The trials are independent, so the probability of failure – no ‘hit’ anywhere in trials 1 through N – is (1-p)^N. The probability of success, a ‘hit’ somewhere from trial 1 to trial N, is the complement of that:

P(‘hit’ by trial N) = 1 - (1-p)^N,

which is an increasing, not a decreasing, function of N. In other words, even though most false positives occur on the first trial (because opportunities are then at a maximum), it is also true that the more trials are run, the more likely one of them will be a false positive.
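A quick numerical check of the formula at p = 0.05, in the same Python sketch style:

p = 0.05
for N in (1, 5, 10, 20, 50):
    print(N, round(1 - (1 - p)**N, 3))   # 0.05, 0.226, 0.401, 0.642, 0.923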

But Leif Nelson is undoubtedly correct that it is those 5% that turned up ‘heads’ on the very first try that are so persuasive, both to the researcher who gets the result and the reviewer who judges it.


Response to Vicky: Is racism everywhere, really?

This is a response to a thoughtful comment from Vicky to my blog critical of the supposed ubiquity of racism.  This response turned out to be too long for a comment; hence this new blog. (It also made Psychology Today uncomfortable).

Apropos race differences in IQ and SAT: They do exist, both in the US and in comparisons between white Europeans and Africans.  What they mean is much less clear.  Since IQ and SAT predict college performance, we can expect that blacks will on average do worse in college than whites and Asians, and they do.  Consequently, the pernicious “disparate impact” need not (although it may) reflect racial discrimination.

If a phenomenon has more than one possible cause, you cannot just favor one – as British TV person Cathy Newman did repeatedly in her notorious interview with Canadian psychologist Jordan Peterson.  She kept pulling out “gender discrimination” as the cause for wage disparities and Peterson kept having to repeat his list of possible causes – of which discrimination was only one.  Since there are at least two possible causes for average black-white differences in college performance, it is simply wrong to blame one – racism – exclusively.

I believe you agree, since you refer to “hundreds of variables that could each play a role in explaining why someone of very low SES might fail academically.”  Even Herrnstein and Murray say as much in their much-maligned The Bell Curve.  Nevertheless, the late Stephen Jay Gould falsely accused them of just this crime, writing that “Herrnstein and Murray violate fairness by converting a complex case that can yield only agnosticism into a biased brief for permanent and heritable difference.”  Herrnstein died in 1994, just as the book was published. But the accusation dogs Murray to this day, despite the fact that what they actually said was: “It seems highly likely to us that both genes and environment have something to do with racial differences.  What might the mix be?  We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate. (my emphases)” Gould’s mendacious influence lives on as their critics continue to misrepresent Herrnstein and Murray’s position.

The genetic component might well be less than they suspected. African immigrants to the US presumably have a smaller admixture of “white” genes than African Americans, descended from slaves – and their masters.  If “white” genes make you smarter than “black” genes, American-born blacks should do better than immigrants. Yet immigrants seem to do better socioeconomically than American-born blacks. There are many possible reasons for this, of course. But it serves to remind us that statistical differences between groups need not reflect genetic effects.

A more worrying issue is the assumption that racism is everywhere.  At one time, a religious nation accepted as axiomatic that “we are all sinners!”  The idea of sin has fallen out of favor in a secular age, but racism has taken its place.   We are all racist, whether we know it or not.  Vicky writes: “we are all implicitly biased against people of color”.

Are we, really? There are at least two problems with the concept of implicit bias, which appears to be a “scientifically proven” version of sin.  The first problem: it isn’t scientifically proven at all.  The clever ‘scientific test’ for implicit bias – especially racial bias – has not been, and perhaps cannot be, scientifically validated.  The test is the ‘scientific’ equivalent of reading entrails or tea leaves.  (The problem is that you can validate a test for an unconscious process only by showing that it predicts some actual behavior. In other words, to validate implicit bias, you must show that it predicts explicit, overt bias. If there is in fact explicit bias, the test is validated – but then you don’t need it, since you have the actual overt bias. Otherwise, no matter what the test says, you can conclude nothing.)

The second problem is that the implicit bias test inverts the standard for criminal prosecution: guilty until proven innocent, which makes the task of race-baiters so much easier.

We have had a black president for two terms; there are more than a hundred black members of Congress and many more state and local black elected officials.  Many beloved icons of sports and entertainment are black. The rate of interracial marriage continues to increase.  The racial situation in the US is infinitely better than it was 40 or 50 years ago.  It is time to stop imagining, or at least exaggerating, racial bias when little exists. Let’s pay some attention to more critical problems, like the development of character and citizenship in the young, the roles of men and women, the place of marriage in a civilized society, and a dozen others more important than a tiny racial divide, which agitation about an imaginary implicit bias serves only to widen.