
A study in perception: Feelings cause…feelings

Statistical correlations and thousands of subjects are not enough

The #MeToo movement has taken off, and so have claims of bad effects attributed to everything from mildly disagreeable or misperceived ‘microaggressions’ to physical assault.  Naturally, there is a desire among socially concerned scientists to study the issue. Unfortunately, it is tough to study the effects of a bad social environment. You can’t do experiments – vary the environment and look at the effect – and feelings are not the same thing as verifiable data. But the pressure to demonstrate scientifically what many ‘know’ to be true is irresistible. The result is a plethora of supposedly scientific studies using methods that pretend to prove what they in fact cannot. Here is a recent example.

“Recent social movements such as the Women’s March, #MeToo, [etc.] draw attention to the broad spectrum [of] gender-related violence that is pervasive in the United States and around the world”, the authors claim in a May 5 op-ed in the Raleigh News and Observer. The title of their study is: “Discrimination, Harassment, and Gendered Health Inequalities: Do Perceptions of Workplace Mistreatment Contribute to the Gender Gap in Self-reported Health?”  It captures in one place some of the worst errors that have crept into social science in recent decades: correlations treated as causes, and subjective judgments treated as objective data.  This study even manages to combine the two: subjective judgments are treated as causes of…subjective judgments.

The article, in the Journal of Health and Social Behavior, is based on reports from 5,579 respondents collected in three surveys, in 2006, 2010 and 2014. The report applies a battery of statistical tests (whose assumptions are never discussed) to people’s answers to questions about how they feel about their mental and physical health, about gender, age and racial discrimination, and about sexual and other harassment.  The large number of subjects just about guarantees that some ‘statistically significant’ correlations will be found.
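Why does a big sample just about guarantee ‘significant’ correlations? Partly because a battery of tests produces false positives at the chosen rate, and partly because with thousands of subjects even trivially small true correlations cross the threshold. The first point is easy to sketch in a few lines – a minimal simulation of my own, not the study’s data or methods; the 20 ‘measures’ and the large-n significance cutoff are illustrative assumptions:

```python
# Sketch: pure noise for 5,579 'respondents' on 20 unrelated 'measures'.
# Count how many of the 190 pairwise correlations come out 'significant'
# at p < .05; under the null, |r| > 1.96/sqrt(n) is the large-n cutoff.
import math
import random

random.seed(1)
n, k = 5579, 20
data = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

cutoff = 1.96 / math.sqrt(n)
hits = sum(1 for i in range(k) for j in range(i + 1, k)
           if abs(pearson(data[i], data[j])) > cutoff)
print(f"{hits} of {k * (k - 1) // 2} pure-noise correlations are 'significant'")
```

With 190 pairwise tests, roughly 5% – nine or ten – will cross the threshold by chance alone, even though every variable is pure noise.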

The study looks at two sets of subjective variables – self-reports – and associates them in a way that will look like cause and effect to most readers.  But the link between these two sets is not causal – no experiment was done or could be done – it is a statistical correlation.

Did the authors check to see whether self-reports (by “economically active respondents” healthy enough to answer a survey) are reliable predictors of actual, physical health? No, they did not. Their claim that self-reports give an accurate picture of health is inconsistent even with data they do report: “In general, studies show that men report better self-rated health than women…[self-report] is nonetheless an important dimension of individuals’ well-being and is strongly correlated with more ‘objective’ indicators of health, including mortality.” Er, really – given that women live longer than men but (according to the authors) report more ill-health? And why the scare quotes around ‘objective’?

The authors’ long, statistics-stuffed report is full of statements like “Taken together, these studies suggest that perceptions of gender discrimination, sexual harassment, and other forms [of] workplace mistreatment adversely affect multiple dimensions of women’s health” [my emphasis]. So now perceptions (of gender discrimination) affect [i.e., cause] not mere perceptions but “multiple dimensions” of women’s health.  Unfortunately, these “multiple dimensions” include no actual, objective measures of health.  In other words, this study has found nothing – because finding a causal relation between one ‘perception’ and another is essentially impossible, and because a health study should be about reality, not perceived reality.

The main problem with this and countless similar studies is that, although they usually avoid saying so directly, the authors treat a correlation between A and B as if it showed that A causes B.  Many, perhaps most, readers of the report will conclude that women’s bad experiences are a cause of their bad mental and physical health.  That may well be true, but not because of this study. We have absolutely no reason to believe either that people’s self-reports are accurate reflections of reality or, more importantly, that a correlation implies a cause. Even if these self-reports are accurate, it is impossible to say which way causation runs: whether feeling harassed causes sickness, or feeling sick makes you feel harassed.

Studies like this are nothing but “noise” tuned to prevailing opinion. They overwhelm the reader with impressive-sounding statistics whose assumptions are never examined. They mislead and muddle.

The periodical The Week has a column called “Health Scare of the Week”; that is where items like this belong, not on the editorial pages – or in a scientific journal.

Is this why so many NHST studies fail to replicate?

Most ‘significant’ results occur on the first try

Leif Nelson has a fascinating blog post on null-hypothesis significance testing (NHST), statistical significance, and the chance of a false positive.  The question can be posed in the following way: suppose 100 labs begin the same bad study, i.e., a study of variables that in fact have no effect. Once a lab gets a “hit”, it stops trying. If the chosen significance level is p (commonly p = 0.05), then approximately 5 of the 100 labs will, by chance, get a “hit” – a significant result – on the first try.  If the remaining 95 labs try again, about 5% of them – between 4 and 5 labs – will “hit”, and so on.  So the number of ‘hits’ is a declining (exponential) function of the number of trials, even though the chance of a hit is constant, trial by trial.

The reason for the trial-by-trial decline, of course, is that every lab has an opportunity for a hit on trial 1, but only a fraction 1-p = 0.95 of labs go on to a second trial, and so on. The probability of a hit per opportunity remains constant at p.  The average number of trials per hit is 1/p = 20 in this case, but the modal number is just one, because the number of opportunities is greatest on the first trial.
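The distribution of trials-to-first-hit is geometric, and a few lines of simulation confirm the mean and the mode. This is a sketch of the scenario just described, not Nelson’s code; the number of simulated labs is an arbitrary choice made only to stabilize the proportions:

```python
import random
from collections import Counter

p = 0.05          # per-trial chance of a false positive ('hit')
n_labs = 100_000  # many simulated labs, so the proportions are stable

# Each lab repeats the (true-null) experiment until its first spurious hit.
trials_to_hit = []
for _ in range(n_labs):
    t = 1
    while random.random() >= p:  # no hit this trial: run another
        t += 1
    trials_to_hit.append(t)

counts = Counter(trials_to_hit)
modal, _ = counts.most_common(1)[0]
print(f"mean trials to first hit ≈ {sum(trials_to_hit) / n_labs:.1f} (theory: 1/p = {1 / p:.0f})")
print(f"modal trials to first hit = {modal} (theory: 1)")
print(f"fraction hitting on trial 1 ≈ {counts[1] / n_labs:.3f} (theory: p = {p})")
```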

On the other hand, the more trials are carried out, the more likely it is that there will be a ‘hit’ – even though the maximum number (but not probability) of hits occurs on the first trial.  To see this, imagine running the hundred experiments for, say, 10 repeats each. The probability of non-significance on any single trial is 1-p = 0.95. The trials are independent, so the probability of no ‘hit’ in the first two trials is (1-p)^2, in the first three (1-p)^3, and, in general, the probability of no ‘hit’ from trial 1 through trial N is (1-p)^N. The probability of at least one ‘hit’ somewhere from trial 1 to trial N is the complement:

P(‘hit’ | N) = 1 - (1-p)^N,

which is an increasing, not a decreasing, function of N: with p = 0.05 and N = 10, for example, P = 1 - 0.95^10 ≈ 0.40. In other words, even though most false positives occur on the first trial (because opportunities are then at a maximum), it is also true that the more trials are run, the more likely it is that one of them will yield a false positive.
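A minimal check of the formula (the values of N are arbitrary):

```python
# P('hit' | N) = 1 - (1 - p)^N: chance of at least one false positive
# somewhere in N trials of a true-null experiment.
p = 0.05
for N in (1, 2, 5, 10, 20, 60):
    print(f"N = {N:2d}: P(at least one 'hit') = {1 - (1 - p)**N:.3f}")
```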

But Leif Nelson is undoubtedly correct that it is those 5% who turned up ‘heads’ on the very first try that are so persuasive, both to the researcher who gets the result and to the reviewer who judges it.


Response to Vicky: Is racism everywhere, really?

This is a response to a thoughtful comment from Vicky on my post critical of the supposed ubiquity of racism.  The response turned out to be too long for a comment; hence this new post. (It also made Psychology Today uncomfortable.)

Apropos race differences in IQ and SAT scores: they do exist, both in the US and in comparisons between white Europeans and Africans.  What they mean is much less clear.  Since IQ and SAT scores predict college performance, we can expect that blacks will on average do worse in college than whites and Asians, and they do.  Consequently, the pernicious “disparate impact” need not (although it may) reflect racial discrimination.

If a phenomenon has more than one possible cause, you cannot just favor one – as British TV person Cathy Newman did repeatedly in her notorious interview with Canadian psychologist Jordan Peterson.  She kept pulling out “gender discrimination” as the cause for wage disparities and Peterson kept having to repeat his list of possible causes – of which discrimination was only one.  Since there are at least two possible causes for average black-white differences in college performance, it is simply wrong to blame one – racism – exclusively.

I believe you agree, since you refer to “hundreds of variables that could each play a role in explaining why someone of very low SES might fail academically.”  Even Herrnstein and Murray say as much in their much-maligned The Bell Curve.  Nevertheless, the late Stephen Jay Gould falsely accused them of just this crime, writing that “Herrnstein and Murray violate fairness by converting a complex case that can yield only agnosticism into a biased brief for permanent and heritable difference.”  Herrnstein died in 1994, just as the book was published. But the accusation dogs Murray to this day, despite the fact that what they actually said was: “It seems highly likely to us that both genes and environment have something to do with racial differences.  What might the mix be?  We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate. (my emphases)” Gould’s baleful influence lives on as their critics continue to misrepresent Herrnstein and Murray’s position.

The genetic component might well be less than they suspected. African immigrants to the US presumably have a smaller admixture of “white” genes than African Americans, descended from slaves – and their masters.  If “white” genes make you smarter than “black” genes, American-born blacks should do better than immigrants. Yet immigrants seem to do better socioeconomically than American-born blacks. There are many possible reasons for this, of course. But it serves to remind us that statistical differences between groups need not reflect genetic effects.

A more worrying issue is the assumption that racism is everywhere.  At one time, a religious nation accepted as axiomatic that “we are all sinners!”  The idea of sin has fallen out of favor in a secular age, but racism has taken its place.   We are all racist, whether we know it or not.  Vicky writes: “we are all implicitly biased against people of color”.

Are we, really? There is a problem with the concept of implicit bias.  It appears to be a “scientifically proven” version of sin.  The problem is: it isn’t scientifically proven at all.  The clever ‘scientific test’ for implicit bias – especially racial bias – has not been, and perhaps cannot be, scientifically validated.  The test is the ‘scientific’ equivalent of reading entrails or tea leaves.  (The problem is that you can validate a test for an unconscious process only by showing that it predicts some actual behavior. In other words, to validate implicit bias, you must show that it predicts explicit, overt bias. If there is in fact explicit bias, the test is validated – but then you don’t need it, since you have the actual overt bias. Otherwise, no matter what the test says, you can conclude nothing.)

We have had a black president for two terms; there are more than a hundred black members of Congress and many more state and local black elected officials.  Many beloved icons of sports and entertainment are black. The rate of interracial marriage continues to increase.  The racial situation in the US is infinitely better than it was 40 or 50 years ago.  It is time to stop imagining, or at least exaggerating, racial bias when little exists. Let’s pay some attention to more critical problems, like the development of character and citizenship in the young, the roles of men and women, the place of marriage in a civilized society, and a dozen others more important than a tiny racial divide which agitation about an imaginary implicit bias serves only to widen.

Adaptive Behavior and Learning

This site is about behaviorism, a philosophical movement critical of the idea that the contents of consciousness are the causes of behavior.  The vast, inaccessible ‘dark matter’ of the unconscious is responsible for recollection, creativity and that ‘secret planner’ whose hidden motives sometimes overshadow conscious will.   But early behaviorism went too far in its attempts to simplify.  ‘Thought’ is not just covert speech.  B. F. Skinner’s claim that “Theories of learning are [not] necessary” is absurd.   The new behaviorism proposes simple, testable processes that can summarize the learned and instinctive adaptive behavior of animals and human beings.

Sourcebooks:

The New Behaviorism

Adaptive Behavior and Learning

Where operant conditioning went wrong

Operant conditioning is B. F. Skinner’s name for instrumental learning: learning by consequences.  Not a new idea, of course – humanity has always known how to teach children and animals by means of reward and punishment.  What gave Skinner’s label the edge was his invention of a brilliant method of studying this kind of learning in individual organisms.  The Skinner box and the cumulative recorder were an unbeatable duo.

Three  things have prevented the study of operant conditioning from developing as it might have: a limitation of the method, over-valuing order and distrust of theory.

The method.  The cumulative record was a fantastic breakthrough in one respect: it allowed the behavior of a single animal to be studied in real time.  Until Skinner, the data of animal psychology consisted largely of group averages – how many animals in group X or Y turned left vs. right in a maze, for example.  And not only were individual animals lost in the group, so were the actual times – how long did the rat in the maze take to decide, how fast did it run?  What did it explore before deciding?

But the Skinner-box setup is also limited – to a single response and to changes in its rate of occurrence.  Operant conditioning involves selection from a repertoire of activities: the trial bit of trial-and-error.  The Skinner-box method encourages the study of just one or two already-learned responses.  Of the repertoire, that set of possible responses emitted for “other reasons” – of all those possible modes of behavior lurking below threshold but available to be selected – of those covert responses, so essential to instrumental learning, there is no mention.

Too much order? The second problem is an unexamined respect for what might be called “order at any price”.  Fred Skinner frequently quoted Pavlov: “control your conditions and you will see order.”   But he never said just why “order” in and of itself is desirable.

The easiest way to get order, to reduce variation, is of course to take an average.  Skinnerian experiments involve single animals, so the method discourages averaging across animals.  But why not average all those pecks?  Averaging responses was further encouraged by Skinner’s emphasis on probability of response as the proper dependent variable for psychology.  So the most widely used datum in operant psychology is response rate, the number of responses that occur over a time period of minutes or hours.

Another way to reduce variability is negative feedback.  A thermostatically controlled HVAC system reduces the variation in house temperature; any kind of negative feedback will reduce variation in the controlled variable.  Operant conditioning, almost by definition, involves feedback: the more the organism responds, the more reward it gets – subject to the constraints of whatever reinforcement schedule is in effect.  This is positive feedback.  But the most-studied operant choice procedure – concurrent variable-interval schedules – also involves negative feedback.  When the choice is between two variable-interval schedules, the more time is spent on one choice, the higher the payoff probability for switching to the other.  So, no matter the difference in payoff rates for the choices, the organism will never just fixate on one.
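The negative feedback is easy to see with a little arithmetic. Treating the VI timer as a Poisson process that arms a reward at a random moment and then holds it until the next response – a standard idealization of VI schedules, not anything specific to Skinner’s apparatus – the chance that a neglected key has a reward waiting grows with the time spent away from it:

```python
import math

# On an (idealized, Poisson) VI schedule a reward 'sets up' at a random
# time and waits until collected. The probability that a key neglected
# for t seconds now has a reward waiting is 1 - exp(-t/T), where T is
# the schedule's mean interval.
def p_reward_waiting(t_away_s, mean_interval_s):
    return 1 - math.exp(-t_away_s / mean_interval_s)

for t in (1, 5, 15, 30, 60):
    print(f"{t:2d} s away from a VI 30-s key: "
          f"P(reward waiting) = {p_reward_waiting(t, 30):.2f}")
```

The longer the animal stays on one key, the more switching to the other pays – which is exactly why it never fixates.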

As technology advanced, these two things converged: the desire for order, enabled by averaging and negative feedback, and Skinner’s idea that response probability is an appropriate – the appropriate – dependent variable.  Variable-interval schedules, either singly or in two-choice situations, became a kind of measuring device.  Response rate on VI is steady – no waits, pauses or sudden spikes – so it seemed to offer a simple and direct way to measure response probability.  From response rate as response probability to the theoretical idea of rate as somehow equivalent to response strength was but a short step.

Theory.  Response strength is a theoretical construct.  It goes well beyond response rate or indeed any other directly measurable quantity.  Unfortunately, most people think they know what they mean by “strength”.  The Skinnerian tradition made it difficult to see that more is needed.

A landmark 1961 study by George Reynolds illustrates the problem (although George never saw it in this way).   Here is a simplified version: imagine two experimental conditions and two identical pigeons.  Each condition runs for several daily sessions.  In Condition A, pigeon A pecks a red key for food reward delivered on a VI 30-s schedule.  In Condition B, pigeon B pecks a green key for food reward delivered on a VI 15-s schedule.  Because both food rates are relatively high, after lengthy exposure to the procedure the pigeons will be pecking at a high rate in both cases: response rates – hence ‘strengths’ – will be roughly the same.  Now change the procedure for both pigeons.  Instead of a single schedule, two schedules alternate, for a minute or so each, across a one-hour experimental session.  The added, second schedule is the same for both pigeons: VI 15 s, signaled by a yellow key (alternating two signaled schedules in this way is called a multiple schedule).  Thus, pigeon A is on a mult VI 30 VI 15 (red and yellow stimuli) and pigeon B on a mult VI 15 VI 15 (green and yellow stimuli).  In summary, the two experimental conditions are (stimulus colors in parentheses):

Experiment A:  VI 30 (Red), mult VI 30 (Red) VI 15 (Yellow)

Experiment B:   VI 15 (Green), mult VI 15 (Green) VI 15 (Yellow)

Now look at the second condition for each pigeon.  Unsurprisingly, B’s response rate in green will not change.  All that has changed for him is the key color – from green all the time to green and yellow alternating, both with the same payoff.  But A’s response rate in red, the VI 30 stimulus, will be much depressed, and A’s response rate in yellow will be considerably higher than B’s, even though the VI 15-s schedule is the same for both.  The increase in response rate when a given schedule is alternated with a leaner one – what happens to pigeon A’s responding in yellow – is called positive behavioral contrast; the rate decrease in the leaner schedule for pigeon A is negative contrast.
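The pattern is easy to reproduce with a toy model. The sketch below assumes that response rate in each component is proportional to that component’s reinforcement rate relative to the session average – a simple relative-reinforcement rule offered purely as an illustration of contrast, not as Reynolds’s analysis; the scale constant K is arbitrary:

```python
K = 60.0  # arbitrary scale: pecks/min in a component matching the session average

def component_rates(reinf_per_min):
    """Toy contrast rule: rate in each component is proportional to its
    reinforcement rate relative to the session-wide average."""
    avg = sum(reinf_per_min) / len(reinf_per_min)
    return [round(K * r / avg) for r in reinf_per_min]

# Reinforcement rates: VI 30 s -> 2/min, VI 15 s -> 4/min
print("A, single VI 30 (Red):        ", component_rates([2]))     # [60]
print("A, mult VI 30 VI 15 (R/Y):    ", component_rates([2, 4]))  # [40, 80]
print("B, single VI 15 (Green):      ", component_rates([4]))     # [60]
print("B, mult VI 15 VI 15 (G/Y):    ", component_rates([4, 4]))  # [60, 60]
```

The toy numbers show the pattern: A’s red rate drops when the richer yellow component is added (negative contrast), A’s yellow rate exceeds B’s despite identical VI 15-s schedules (positive contrast), and B’s green rate is unchanged – just as in the experiment.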

The obvious conclusion is that response rate alone is inadequate as a description of the ‘strength’ of an operant response.  The steady rate maintained by VI schedules is misleading: it looks like a simple measure of strength.  Because of Skinner’s emphasis on order, because the averaged-response and feedback-rich variable-interval schedule seemed to provide it, and because it was easy to equate response probability with response rate, the idea took root.  Yet even in the 1950s it was well known that response rate can itself be manipulated – by so-called differential-reinforcement-of-low-rate (DRL) schedules, for example.

Conclusion: response rate does not equal response strength; hence our emphasis on rate may be a mistake.  If the strength idea is to survive the demise of rate as its best measure, something more is needed: a theory about the factors that control an operant response.  But because Skinner had successfully proclaimed that theories of learning are not necessary, real theory was not forthcoming for many years.