Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.4, n.1 (1996)

Joan B. Garfield
Department of Educational Psychology
University of Minnesota
332 Burton Hall
Minneapolis, MN 55455
612-625-0337
jbg@maroon.tc.umn.edu

J. Laurie Snell
Department of Mathematics and Computing
Dartmouth College
Hanover, NH 03755-1890
603-646-2951
jlsnell@dartmouth.edu

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Joan abstracts information from the literature on teaching and learning statistics, while Laurie summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. We realize that, given the limits on the literature we have access to and the time we have to review it, we may overlook some potential articles for this column, and we therefore encourage you to send us your reviews and suggestions for abstracts.


From the Literature on Teaching and Learning Statistics


"Projects in Introductory Statistics Courses"

by Johannes Ledolter (1995). The American Statistician, 49(4), 364-367.

Many statistics instructors are interested in using student projects in their classes as a way to give students experience in solving real-world problems by gathering and analyzing data. This article describes the experience of one professor who uses group projects in his large lecture classes. Ledolter encourages students to formulate their own problems, which are investigated by teams of about five students. In one class there are 50 to 60 teams, each producing an extensive written report at the completion of its project. The article not only includes reasons for using projects and issues related to using group projects in a large lecture class, but also describes interesting topics studied by students in Ledolter's class (e.g., predicting future student enrollment at the University of Iowa, a telephone survey of students' radio station preferences, and an analysis of transportation issues and road safety).


"The Craft of Teaching"

by David Moore (1995). FOCUS, 15 (April), 5-8.

This article is based on a talk given by Moore at the 1995 MAA meeting where he received the 1994 Award for Distinguished College or University Teaching of Mathematics. Moore describes teaching as a craft, "a collection of learned skills accompanied by experienced judgment." By thinking of teaching as a craft, he deduces that good teaching is based on the teacher's learning -- not only learning our subject, but also learning about teaching. He feels much can be learned from those who study teaching and learning. A discussion of TQM is included to remind us that teaching is a system that we should manage. Moore's conclusions include a recommendation to adjust incentives so that good teaching is rewarded.


"The Journal of Statistics Education Information Service and Other Internet Resources for Statistics Teachers"

by J. Laurie Snell (1995). The American Statistician, 49(4), 372-375.

Laurie Snell provides a great service to statistics educators, not only by describing appropriate Internet resources but by including links to these web sites on the Chance Database he maintains. In this article Snell describes the JSE Information Service as well as some important articles from early issues of JSE. Other Internet resources described in the article include the Census Bureau, statistics departments at several universities, and his own Chance Web.


The following abstracts appeared in the January 1996 edition of the Newsletter of the International Study Group for Research on Learning Probability and Statistics, edited by Carmen Batanero, University of Granada (e-mail: batanero@goliat.ugr.es)

"Intuitive Strategies and Preconceptions About Association in Contingency Tables"

by Carmen Batanero, Antonio Estepa, Juan D. Godino, and David R. Green (1996). Journal for Research in Mathematics Education, 27(2), 2-20.

The aim of this research was to identify students' preconceptions concerning statistical association in contingency tables. An experimental study was carried out with 213 pre-university students, and it was based on students' responses to a written questionnaire including 2x2, 2x3, and 3x3 contingency tables. In this article, the students' judgments of association and solution strategies are compared with findings of previous psychological research on 2x2 contingency tables. We also present an original classification of students' strategies from a mathematical point of view. Correspondence analysis is used to show the effect of item task variables on students' strategies. Finally, we include a qualitative analysis of the strategies of 51 students, which has served to characterize three misconceptions concerning statistical association.

"Conceptions d'eleves sur la notion de probabilite conditionnelle revelees par une methode d'analyse des donnees: implication - similarite - correlation (Students' conceptions on conditional probability revealed by a data analysis method: implication - similarity - correlation)"

by Regis Gras and Andre Totohasina (1995). Educational Studies in Mathematics, 28, 337-363.

The introduction of notions of probability, above all conditional probability, poses a thorny problem in didactics because of students' preconceptions (Fischbein et al., 1991), which stem from the concrete references indispensable for this introduction. These preconceptions may be reinforced by conceptions that become epistemological and, especially, didactical obstacles. Using a new method of data analysis -- statistical implication -- and a method of post-correlative treatment, we reveal these conceptions in students' work and make explicit the problem-solving procedures they use, which reflect these conceptions.

"Training in the Law of Large Numbers and Everyday Inductive Reasoning: A Replication, With Implications for Statistics Course Design"

by K. Beattie (1995). International Journal of Mathematics Education in Science and Technology, 26(6), 795-808.

This study investigated the effect of a previously successful lesson in the law of large numbers on the everyday inductive reasoning of 64 Australian higher education students with humanities and science backgrounds. It also tested one aspect of Reigeluth's Elaboration Theory of Instruction. Word problems were used to test reasoning. The results closely replicated several of Fong, Krantz and Nisbett's findings with American subjects and supported Reigeluth's theory, although they raise a question about the effectiveness of formal statistics course training in the local population. A possible salience or demand effect and the interaction between moral development and the willingness to generalize statistical heuristics are discussed. Several new approaches to the teaching of introductory statistics are suggested.

"Children's Concepts of Average and Representativeness"

by Jan Mokros and Susan Jo Russell (1995). Journal for Research in Mathematics Education, 26(1), 20-39.

Whenever the need arises to describe a set of data in a succinct way, the issue of representativeness arises. The goal of this research is to understand the characteristics of fourth through eighth graders' constructions of "average" as a representative number summarizing a data set. Twenty-one students were interviewed, using a series of open-ended problems that called on children to construct their own notion of representativeness. Five basic constructions of representativeness are identified and analyzed. These approaches illustrate the ways in which students are (or are not) developing useful, general definitions for the statistical concept of average.

"Conceptions probabilistes d'eleves marocains du secondaire"

by Omar Rouan and Richard Pallascio (1994). Recherches en Didactique des Mathematiques, 14(3), 393-428.

This article presents research in mathematics education on the probabilistic conceptions of Moroccan secondary school students aged 18 to 19. It analyzes and interprets part of the data collected during the experiment carried out for this research. In the experiment, we used a questionnaire and several interviews prepared from a set of earlier works in the field and from social phenomena such as sports and games. The results led us to formulate a group of hypotheses concerning the conceptions that individuals may have about probability notions. The conclusion suggests several avenues of research based on the hypotheses brought out by the analysis and interpretation of the experimental results.

"Using Simulation to Study Estimation"

by M. Perry and G. Kader (1995). Mathematics and Computer Education, 29(1), 53-64.

Estimation of a population parameter based on random samples is a fundamental statistical problem considered in undergraduate statistical education. In an introductory course, estimation is frequently presented as methodology with, at best, cursory attention given to the underlying concepts. On the other hand, a more advanced treatment may approach the subject with formal mathematics; the student masters mathematical derivations and proofs but again without developing an intuition for the concepts modeled by the mathematics. Some instructors have found that simulation provides a powerful tool to enhance conceptual understanding as well as a tool for finding answers. This paper extends these pedagogical ideas to show how computer simulation models may be used to study the "quality" of an estimation procedure and, concurrently, the subtle concepts of randomness and convergence. Special emphasis is given to the use of graphical representations. These ideas have been used by the authors in a mathematical statistics class as well as in introductory and intermediate courses.
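
As a rough illustration of the kind of simulation the authors describe (this Python sketch is ours, not code from the paper, and the exponential population and sample sizes are arbitrary choices), one can draw repeated samples from a known population and summarize the center and spread of an estimator's sampling distribution:

    import random
    import statistics

    def estimator_quality(estimator, population_draw, n, trials=2000):
        """Simulate the sampling distribution of an estimator for samples of size n."""
        estimates = [estimator([population_draw() for _ in range(n)]) for _ in range(trials)]
        return statistics.mean(estimates), statistics.stdev(estimates)

    # Illustrative target: estimate the mean of an exponential population (true mean = 1).
    random.seed(0)
    draw = lambda: random.expovariate(1.0)
    for n in (5, 20, 80):
        center, spread = estimator_quality(statistics.mean, draw, n)
        print(f"n = {n:3d}: mean of estimates = {center:.3f}, sd of estimates = {spread:.3f}")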

"Learning Probability Through Building Computational Models"

by U. Wilensky (1995). Proceedings of the 19th Psychology of Mathematics Education Conference, Vol. 3, eds. L. Meira and D. Carraher, Universidade Federal de Pernambuco, pp. 152-159.

While important efforts have been undertaken to advance understanding of probability using technology, the research reported is distinct in its focus on model building by learners. The work draws on theories of Constructionism and Connected Mathematics. The research builds from the conjecture that both the learners' own sense making and the cognitive researchers' investigations of this sense-making are best advanced by having the learners build computational models of probabilistic phenomena. Through building these models learners come to make sense of core concepts in probability. Through studying the model building process and what learners do with their models, researchers can better understand the development of probability learning. This report briefly describes two case studies of learners engaged in building computational models of probabilistic phenomena.


Teaching Statistics


A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, p.holmes@sheffield.ac.uk, Center for Statistical Education, University of Sheffield, Sheffield S3 7RH, UK.


Teaching Statistics, Spring 1996
Volume 18, Number 1

"How Many Fish are in the Pond?" by Roger Johnson

This article describes the capture-recapture method of estimating the size of an animal population and illustrates it with a hands-on classroom activity. Properties of the estimate, such as its variability, may be explored with the Minitab macro provided.
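
Readers without Minitab can reproduce the idea in a few lines. The following Python sketch is ours, not the macro from the article, and the pond size, number tagged, and recapture size are illustrative choices. It repeatedly simulates tagging, recapturing, and forming the usual capture-recapture estimate (tagged x recaptured / tagged-and-recaptured), so students can see directly how variable the estimate is:

    import random

    def capture_recapture_trial(pond_size, n_marked, n_recaptured):
        """One trial: tag n_marked fish, draw n_recaptured, count the tagged ones seen again."""
        fish = list(range(pond_size))
        marked = set(random.sample(fish, n_marked))      # first capture: tag and release
        second = random.sample(fish, n_recaptured)       # second capture
        m = sum(1 for f in second if f in marked)        # tagged fish recaptured
        if m == 0:
            return None                                  # estimate undefined this trial
        return n_marked * n_recaptured / m               # capture-recapture estimate

    # Illustrative numbers: 400 fish, tag 50, recapture 50, 1000 trials.
    random.seed(1)
    estimates = [capture_recapture_trial(400, 50, 50) for _ in range(1000)]
    estimates = [e for e in estimates if e is not None]
    print("mean estimate:", sum(estimates) / len(estimates))
    print("smallest and largest estimates:", min(estimates), max(estimates))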

"Odds That Don't Add Up, Revisited" by Robert J. Quinn

This article investigates two interesting applications of probability by considering futures bets on the Super Bowl and the odds of the favored team winning in a championship series.

"A Classic Probability Puzzle" by Ruma Falk

A classic probability puzzle, presenting an absurd result, is analyzed. Clearing the confusion caused by this problem sheds light on basic concepts of probability theory.

"Testing the Significance of Correlations" by Eric Nordmoe

A simple problem of the sort often faced by elementary statistics students forms the basis for a discussion of several important hypothesis testing issues. These issues are often overlooked when moving from simple tests of means to bivariate inference problems.

"Efficiency in Minitab" by Peter Martin and Lyn Roberts

This simple Minitab simulation exercise illustrates the concept of efficiency of estimators. Students compare the sampling distributions of the sample mean, median, and mid-range from a normal distribution and from a uniform distribution. The striking differences observed give rise to much discussion.
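
A rough Python analogue of the exercise (our sketch, not the authors' Minitab code; the sample size and number of repetitions are arbitrary) compares the spread of the three estimators over repeated samples from a normal and a uniform population:

    import random
    import statistics

    def midrange(xs):
        return (min(xs) + max(xs)) / 2

    def sampling_sd(stat, draw, n=20, trials=2000):
        """Standard deviation of a statistic over repeated samples of size n."""
        return statistics.stdev(stat([draw() for _ in range(n)]) for _ in range(trials))

    random.seed(2)
    populations = {
        "normal(0,1)": lambda: random.gauss(0, 1),
        "uniform(0,1)": lambda: random.uniform(0, 1),
    }
    for name, draw in populations.items():
        print(name)
        for label, stat in (("mean", statistics.mean),
                            ("median", statistics.median),
                            ("mid-range", midrange)):
            print(f"  {label:10s} sd = {sampling_sd(stat, draw):.4f}")

For the normal population the sample mean has the smallest spread, while for the uniform population the mid-range does markedly better, which is the "striking difference" the exercise is designed to provoke.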

In addition to these articles, this issue includes the columns Statistics At Work, Classroom Note, Computing Corner, Practical Activities, Problem Page, Software Review, and Book Reviews. This issue also includes a review of the 1995 ISI Conference held in Beijing and the Spring issue of IASE Matters.


Topics for Discussion from Current Newspapers and Journals


"Intelligence: Knowns and Unknowns"

by Ulric Neisser et al. (1996). American Psychologist, 51(2), 77-101.

In the fall of 1994, the book The Bell Curve by Herrnstein and Murray discussed the concept of intelligence, its measurement, and its relation to human behavior and political decisions. This book led to heated discussions in the press. When science and politics are mixed, scientific studies tend to be evaluated in relation to their political implications rather than their scientific merit. To help in the ongoing debate of issues raised by The Bell Curve, the American Psychological Association appointed a task force to provide an authoritative report on the present state of knowledge on intelligence.

While the report was inspired by The Bell Curve, the authors make no attempt to analyze the arguments in this book. Rather, they follow their charge: "to prepare a dispassionate survey of the state of the art: to make clear what has been scientifically established, what is presently in dispute, and what is still unknown." While it is less lively reading than The Bell Curve and its critics, the report is well written with a minimum of jargon. It provides an admirably balanced view of the present state of knowledge on the following issues:

What are the significant conceptualizations of intelligence at this time?
What do intelligence test scores mean, what do they predict, and how well do they predict it?
Why do individuals differ in intelligence, and especially in their scores on intelligence tests? In particular, what are the roles of genetic and environmental factors?
Do various ethnic groups display different patterns of performance on intelligence tests, and, if so, what might explain these differences?
What significant scientific issues are presently unresolved?

An interesting analysis of arguments in The Bell Curve by four statisticians can be found in "Galton Redux: Eugenics, Intelligence, Race, and Society: A Review of The Bell Curve: Intelligence and Class Structure in American Life" by Devlin, Fienberg, Resnick, and Roeder (1995), Journal of the American Statistical Association, 90(432), 1483-1488, and a related article by the same four authors, "Wringing The Bell Curve: A Cautionary Tale About the Relationships Among Race, Genes, and IQ" (1995), Chance 8(3), 27-36.


"In a First, 2000 Census Is to Use Sampling"

by Steven A. Holmes. The New York Times, 29 February 1996, A16.

For the year 2000 census, the Census Bureau will not attempt the traditional complete enumeration of the population but instead will incorporate some sampling into the enumeration. Specifically, the plan is to obtain information from 90% of the households from interviews and questionnaires that can be returned by mail, telephone, and possibly even the Web. A 10% sample will be used to estimate the remaining population. It is stated that this will decrease the projected cost of the 2000 census from 4.8 billion dollars to 3.9 billion dollars and could result in a more accurate count.

The author remarks that some are concerned that this violates the constitution's requirement for an "actual enumeration." Here is what the constitution says:

Representatives and direct taxes shall be apportioned among the several states which may be included within this union, according to their respective numbers, which shall be determined by adding to the whole number of free persons, including those bound to service for a term of years, and excluding Indians not taxed, three fifths of all other Persons. The actual Enumeration shall be made within three years after the first meeting of the Congress of the United States, and within every subsequent term of ten years, in such manner as they shall by law direct.

It is well known that attempts at complete enumeration lead to an undercount of the population. This undercount is more severe for certain groups, including minorities. This is an important issue, since the census count of the population can change representation in Congress, and the amount of federal funds available to a state depends upon the latest estimate of the state's population. After the last official census was presented, arguments were made to adjust the count by further statistical analysis. After much debate, the government decided not to do this. This decision was challenged in the courts by states adversely affected by it. The government's decision has just recently been supported by a ruling of the Supreme Court.

To avoid this problem in the year 2000 census, the Bureau is planning to do all of the statistical analysis before reporting an integrated final census count by the deadline needed for forming new districts. The Census Bureau is currently experimenting with the most efficient method to evaluate the undercount.

A working paper by Tommy Wright in the Census Bureau suggests the following kind of simulation to compare two methods being considered.

It is desired to find the number of people who live in a six-block area A. The census carries out an initial enumeration in this area such that each person is counted independently with probability .75. To estimate the true number in this area, a second independent enumeration is made of an area B consisting of two blocks chosen at random. More effort is put into this enumeration so that each person is counted with probability .95.

In the first method for estimating the true population of A, the number counted in A by the first enumeration is multiplied by the factor

(number counted in B by the second enumeration)/
(number counted in B by both enumerations).

This is the well known "capture-recapture method" used to estimate the number of fish in a lake or the number of crackers in a package of goldfish crackers.

In the second method, called CensusPlus, the number counted in A by the first enumeration is multiplied by the factor

(number found in B by either the first
or second enumeration)/
(number found in B by the first enumeration).

If we know the true number in each of the blocks, we can carry out simulations to see which method results in the better estimate of the true population. Our simulations suggest that the two methods perform similarly, so the decision about which to use can be based on practical considerations related to carrying out the second enumeration.
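
A simple version of such a simulation can be written in a few lines. The Python sketch below is ours, not Wright's, and the true block populations are made-up numbers. It counts each person in area A with probability .75, counts each person in the two randomly chosen blocks of area B again with probability .95, and applies the two correction factors described above:

    import random

    # Made-up true populations for the six blocks (not from Wright's paper).
    BLOCKS = [120, 80, 150, 200, 95, 110]
    TRUE_TOTAL = sum(BLOCKS)

    def bernoulli(p):
        return random.random() < p

    def one_census(p1=0.75, p2=0.95):
        """One simulated census of area A with a follow-up enumeration of area B."""
        b_blocks = random.sample(range(6), 2)            # B = two blocks chosen at random
        count_A1 = 0                                     # first-enumeration count, all of A
        n1B = n2B = both_B = either_B = 0                # tallies restricted to B
        for block, size in enumerate(BLOCKS):
            for _ in range(size):
                in1 = bernoulli(p1)                      # caught in the first enumeration?
                count_A1 += in1
                if block in b_blocks:
                    in2 = bernoulli(p2)                  # caught in the second enumeration?
                    n1B += in1
                    n2B += in2
                    both_B += in1 and in2
                    either_B += in1 or in2
        dse = count_A1 * n2B / both_B                    # capture-recapture factor
        census_plus = count_A1 * either_B / n1B          # CensusPlus factor
        return dse, census_plus

    random.seed(3)
    results = [one_census() for _ in range(2000)]
    for name, vals in (("capture-recapture", [r[0] for r in results]),
                       ("CensusPlus", [r[1] for r in results])):
        mean = sum(vals) / len(vals)
        print(f"{name:18s} mean estimate = {mean:7.1f}  (true total = {TRUE_TOTAL})")

With these probabilities the two estimates come out close to each other and to the true total, in line with the comparison described above; readers can vary the counting probabilities to see how the comparison changes.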


"Low Taxes, High Death Rate for Smokers"

by Andrea Neal. Indianapolis Star, 1 February 1996, A8.

"Smoker Statistics"

by James M. Hall. Indianapolis Star, Letter to the Editor, 8 February 1996, A9.

Neal reports that data released by the Centers for Disease Control showed that Indiana ranks seventh in the number of deaths linked to smoking. At the same time, Indiana has the eighth lowest excise tax on cigarettes. David Richards, the chairman of the public affairs committee of the American Heart Association's Indiana chapter, said:

There appears to be a definite correlation between the high prevalence of smoking and ensuing deaths and those states with low excise taxes. If you look at the eight states with the lowest taxes on cigarettes, you will see that all of them are grouped in the higher rates of deaths related to smoking. Lower prices simply stimulate the purchase and consumption of cigarettes.

Commenting on this article, James Hall writes:

I took the time to plot the data, released recently by the Centers for Disease Control and Prevention, for all states on a graph that shows the percent who smoke vs. the per pack tax. The points are randomly scattered and there appears to be little if any correlation between the state tax now imposed on cigarettes and the percentage who smoke.

Hall goes on to note that "Nevada has the highest percent who smoke and a relatively high tax and Utah the lowest percentage who smoke of all the states and a tax lower than many states." He suggests that this might be because gamblers need to smoke, and Mormons are trained at home not to smoke.

We also thought it would be interesting to look at the data. You can find it at

http://www.geom.umn.edu/docs/education/chance/course/Princeton96/Class11.html

The correlation between cigarette tax and smoking rate is -0.34, and the correlation between tax and death rate from smoking-related illnesses is -0.18. The correlation between smoking rate and death rate from smoking-related illnesses is 0.68.


"Research Links Writing Style to the Risk of Alzheimer's"

by Gina Kolata. The New York Times, 21 February 1996, A12.

Researchers from the University of Kentucky designed a study to see if education and an active mind protect against Alzheimer's disease. They studied 93 nuns born before 1917 who lived together in the same environment for 60 years. Now in their 80's, nearly a third have developed Alzheimer's disease, a rate consistent with the general population. Fourteen have died, and autopsies were carried out on their brains to look for marks of Alzheimer's disease.

The researchers were surprised to find that education appeared to offer no protection against Alzheimer's disease. However, they found something even more surprising.

Four years after the nuns entered the convent, and before they took their vows to permanently join the convent, they were asked to write brief autobiographies. The researchers examined these and found that the nuns whose sentences were grammatically complex and full of ideas had sharp minds right up to their 80's, while most of those whose sentences were simple and without complex grammatical constructions were demented by their 80's. They found that the writing characteristics observed when the nuns were in their 20's continued throughout their lives.

The researchers' finding, if confirmed, would suggest that Alzheimer's disease is a lifelong disease that progresses very slowly. This is consistent with a German study that concluded that pathological changes in the brain, characteristic of Alzheimer's disease, could be traced back about 50 years, maybe even into adolescence.

Experts in Alzheimer's disease described the study as elegant and called for further research to see if these results could be replicated and explained.


Samprit Chatterjee suggested the next article and provided the abstract and a discussion of how he and his colleagues have used this example in their classes.

"Time Trouble for Geyser: It's No Longer Old Faithful"

by James Brooke. The New York Times, 5 February 1996, D1.

Rick Hutchison, Yellowstone National Park's research geologist, reported that Old Faithful, the park's leading tourist attraction, has been slowing down. In 1950, the average time interval between eruptions was 62 minutes, in 1970 it was 66 minutes, and today it is 77 minutes. It is also apparently becoming more difficult to predict the time until the next eruption, with forecasts now being accurate to within plus or minus ten minutes.

The changes of recent years seem to be produced by seismic activity. Scientists theorize that earthquakes can have two effects on geysers, either speeding up or slowing down the rate at which water is supplied. Quakes can shake loose debris that clogs the rock channels feeding water to a geyser, resulting in more water and steam, or they can crack open new underground channels, redirecting water to other geysers or hot springs. It is speculated that the latter process is affecting Old Faithful.

Comments from Samprit Chatterjee, Mark Handcock and Jeffrey Simonoff:

The Old Faithful geyser is a wonderful national icon, and is also a wonderful source of interesting data, due to its non-faithful appearance. What do we mean by "non-faithful"? It is well-known that the time interval between eruptions of the geyser is not faithful at all (in the sense of being around one value consistently), as it has a bimodal distribution. About one-third of the time, the time between eruptions is roughly 55 minutes, while about two-thirds of the time it is roughly 80 minutes. Unfortunately, the description in the article of average time intervals in different years gives the mistaken impression of a unimodal distribution.

We have used Old Faithful eruption data (circa 1978, 1979, and 1985) in our introductory classes for many years. We made a case based on these data the lead case in our book A Casebook for a First Course in Statistics and Data Analysis (1995, Wiley), because students are often very surprised to find out just how faithful (or non-faithful) Old Faithful is. The specific characterization of the bimodal distribution mentioned in the previous paragraph comes from those data. The case also points out that a simple way to predict the time interval until the next eruption is to check whether the duration of the previous eruption was short (less than three minutes) or long (more than three minutes), and predict accordingly (55 minutes until the next eruption, or 80 minutes, respectively). This rule, derived using the 1978/1979 data, correctly predicts the 1985 values to within plus or minus 10 minutes about 90% of the time, the accuracy that the article states.

The bimodality has a direct effect on the question of lengthening average time intervals between eruptions. There are two obvious ways that the average time interval could increase, without changing the basic underlying pattern of eruptions: a general shift upwards of the entire distribution, or a change in the probabilities of a short time interval versus a long time interval (with long intervals becoming more probable). Presumably these two different possibilities would have different implications from a geothermal point of view, so it would be interesting to see if either possibility (or what other possibility) describes the observed lengthening of the average time interval.
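
For students who would like to try the short-versus-long rule on data of their own, here is a minimal Python sketch (ours, not code from the casebook); the duration/interval pairs below are made-up placeholders, and the three-minute cutoff and 55/80-minute predictions are the ones described above:

    def predict_interval(previous_duration_min, cutoff=3.0, short=55, long=80):
        """Predict minutes until the next eruption from the previous eruption's duration."""
        return short if previous_duration_min < cutoff else long

    # Made-up (duration, observed interval) pairs standing in for real eruption data.
    sample_data = [(1.8, 54), (4.3, 82), (4.1, 78), (2.0, 51), (4.5, 93)]

    hits = sum(abs(predict_interval(d) - interval) <= 10 for d, interval in sample_data)
    print(f"within 10 minutes on {hits} of {len(sample_data)} eruptions")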


"How Fair is Monopoly"

by Ian Stewart. Scientific American, April 1996, 104-105.

Stewart suggests that the popular game of Monopoly is a fair game based on a Markov Chain analysis. He looks at a single player's movement around the board as a random walk on a circle with forty points. The length of a step is determined by the outcome of the roll of a pair of dice. Then standard Markov Chain theory shows that the probability that the player is at any particular square on the board after a large number of plays is 1/40. He concludes from this that Monopoly can be considered a fair game.
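
Stewart's simplified claim is easy to check numerically. The following Python sketch is our own; like the simplified analysis, it ignores jail, Chance and Community Chest cards, and doubles. It iterates the distribution of a player's position under two-dice moves around the 40 squares and shows it settling down to 1/40 = 0.025 everywhere:

    # Two-dice step distribution: P(step = s) for s = 2..12.
    STEP_PROB = {s: (6 - abs(s - 7)) / 36 for s in range(2, 13)}
    N_SQUARES = 40

    def one_move(dist):
        """Apply one roll of the dice to a distribution over the 40 squares."""
        new = [0.0] * N_SQUARES
        for square, p in enumerate(dist):
            for step, q in STEP_PROB.items():
                new[(square + step) % N_SQUARES] += p * q
        return new

    dist = [0.0] * N_SQUARES
    dist[0] = 1.0                      # everyone starts on Go
    for _ in range(200):               # 200 moves is plenty for convergence
        dist = one_move(dist)

    print(f"after 200 moves: min = {min(dist):.6f}, max = {max(dist):.6f}")  # both about 0.025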

Stewart comments briefly on the fact that the first player is more apt to get properties like Oriental Avenue or Vermont Avenue, but he suggests that things will get evened out in the long run. The author does not say much about the possible effects of the many simplifying assumptions he makes, such as neglecting stays in jail and Chance cards.

A much more realistic calculation of the limiting distribution can be found in "Monopoly as a Markov Process" by Robert B. Ash and Richard L. Bishop (1972), Mathematics Magazine, 45, 26-29. These authors make very few simplifying assumptions. Their more careful calculations show that the limiting probabilities that a player is on any particular square after a large number of plays are not equal. They observe that much of the variation in the limiting probabilities is due to the effect of going to jail. These authors also find the expected income from holding a group of properties with hotels. Green should be your favorite color.

I assume there is an advantage to going first, and it would be interesting to know how large it is.


"Hawking Fires a Brief Tirade Against the Lottery"

by Robert Uhlig. The Daily Telegraph, 14 February 1996, 7.

Hawking writes in Radio Times Magazine that he thinks that gambling profits are a sleazy way to raise money even for good causes. However, the most interesting thing in this article is the comment that "statisticians have determined that if you buy a National Lottery ticket on a Monday you are 2,500 times more likely to die before the Saturday draw than land the jackpot."
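
A back-of-the-envelope check of the quoted figure (our own arithmetic, not from the article): the National Lottery is a choose-6-from-49 game, so the jackpot probability is 1 in C(49,6) = 13,983,816, and the quoted ratio implies a Monday-to-Saturday death probability of about 2,500 times that:

    from math import comb

    jackpot_prob = 1 / comb(49, 6)                 # 6-from-49 game: 1 in 13,983,816
    implied_death_prob = 2500 * jackpot_prob       # chance of dying Monday-Saturday, per the claim
    implied_annual_rate = implied_death_prob * 365 / 5

    print(f"jackpot probability      : {jackpot_prob:.3e}")
    print(f"implied 5-day death prob : {implied_death_prob:.3e}")
    print(f"implied annual death rate: {implied_annual_rate:.3f}")

The implied annual death rate of a bit over one percent is roughly the order of the overall death rate of the population as a whole, so the statisticians' figure is at least plausible for an average ticket buyer.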


"In Long Running Wolf-Moose Drama, Wolves Recover From Disaster"

by Les Line. The New York Times, 19 March 1996, C1.

Isle Royale is an island about 35 miles long and seven miles wide in the middle of Lake Superior. (For the last 60 years I have enjoyed visiting our family cabin on this island.) Moose have been on the island since at least the early 1900's. In 1949, nine years after Isle Royale became a National Park, the lake between Isle Royale and Canada was completely frozen, and wolves came across the ice to the island. Since that time biologists have had a wonderful place to study a predator-prey relation.

Biologist Rolf Peterson has been studying this relationship since 1970. The park is completely closed during the winter except for his annual trip to observe the state of the moose and wolf populations.

Peterson's studies provide a wealth of data on this predator-prey relation. You can find a graph of the population of wolves and moose from 1960 to 1994 in Peterson's new book The Wolves of Isle Royale: A Broken Balance (1996, Willow Creek Press, Wisconsin). Peterson states that "for the period from 1959 to 1980 the wolf and moose population appeared to cycle in tandem, with wolves peaking about a decade after moose." In 1980 there were 50 wolves and about 1000 moose. This was followed by a dramatic drop in the number of wolves, believed to be caused by a disease brought to the island by visitors to the park. Recently the number of wolves dropped to as low as ten, and this led to speculation that the wolves would die out. In addition to the low numbers, concern about the future of the wolves comes from DNA studies that have verified that all the wolves on the island have a single recent ancestor.

This winter, when Peterson made his annual seven-week study, he found that the wolves had increased in number to 22; the moose population was reduced to approximately 1200, about half the number from the previous year. Like the rest of us, the moose suffered a severe winter this year. In addition, they suffered from a heavy winter tick infestation that caused hair loss and made them more vulnerable to the cold.

Despite the continued concern about inbreeding, the possibility that this predator-prey relationship will continue has dramatically improved.


"Silent Sperm"

by Lawrence Wright. New Yorker, 15 January 1996, 42-55.

"Sperm Counts: Some Experts See a Fall, Others Poor Data"

by Gina Kolata. The New York Times, 19 March 1996, C10.

The New Yorker article is a typically long and very thorough discussion of the apparent decrease in the quality and amount of sperm produced by men. A number of studies in the last decade claim to show a dramatic drop in men's sperm counts. The results of a meta-study that reviewed 61 papers published between 1938 and 1991 were published by Carlsen, Giwercman, Keiding, and Skakkebaek in the British Medical Journal (1992, 305(6854), 609-613). The authors reported that the average sperm count had declined from 113 million per milliliter to 66 million. The randomness of the movement of sperm and the number of things that have to go right for successful fertilization suggest that such a drop could have a serious effect on the fertility rate if sperm counts continue to fall.

The New Yorker article discusses the many theories that have been put forth to explain this drop in sperm count. Some of the more interesting theories try to explain why the sperm counts for Finnish men have not decreased, while those for neighboring Danish men have. The theory that modern industrialization is the culprit receives support from the fact that Finland was industrialized much later than Denmark. A related theory that the damage is done by synthetic chemicals in the environment from industries is popularized in a new book called Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival? -- A Scientific Detective Story by Colborn, Dumanoski, and Myers (1996, Dutton, New York).

The impression one gets from reading the New Yorker article and some other recent articles is that the evidence for a dramatic and potentially disastrous worldwide decrease in sperm counts is very convincing. In her article, Gina Kolata gives us some hope. She states that a significant number of experts have challenged the methodology of many of the studies showing a decrease in sperm counts. One of the principal concerns about earlier studies comes from recent studies showing that there is considerable variation in average sperm counts not only between countries but even within the same country. For example, the average sperm count in New York is much higher than in Los Angeles. One influential previous study compared data from one country at one time with data from other countries at a later time. We obviously have not heard the last word on this fascinating controversy.


"Evaluation of the Military's Twenty-Year Program on Psychic Spying"

by Ray Hyman. Skeptical Inquirer, March/April 1996, 20(2), 21-23.

"The Evidence for Psychic Functioning: Claims vs. Reality"

by Ray Hyman. Skeptical Inquirer, March/April 1996, 20(2), 28-33.

"An Assessment of the Evidence for Psychic Functioning"

by Jessica Utts. Unpublished paper available at http://www-stat.ucdavis.edu/users/utts/

In the early 1970's, the Central Intelligence Agency (CIA) supported a program to see if extrasensory perception (ESP) could help in intelligence gathering. Laboratory studies were done at the Stanford Research Institute. In addition to this research, psychics were employed to provide information on targets of interest to the CIA. The CIA abandoned the program in the late 1970's; it was then taken over by the Defense Intelligence Agency (DIA), which ran it until it was suspended in 1995. The DIA studied foreign use of ESP, employed psychics, and continued laboratory research at Stanford and later at the Science Applications International Corporation (SAIC) in Palo Alto, California.

The program was declassified in 1995 to allow an outside evaluation. The evaluation was carried out by Ray Hyman and Jessica Utts. Hyman is a psychologist known for his skepticism of psychic behavior, and Utts is a statistician known to support the reality of psychic behavior. Utts and Hyman concentrated their efforts on experiments carried out after 1986 that were designed to meet the objections aimed at previous studies by the National Research Council and other critics. Hyman describes the typical experiment in the following way:

A remote viewer would be isolated with an experimenter in a secure location. At another location, a sender would look at a target that had been randomly chosen from a pool of targets. The targets were usually pictures taken from the National Geographic. During the sending period the viewer would describe and draw whatever impressions came to mind. After the session, the viewer's description and a set of five pictures (one of them being the actual target picture) would be given to a judge. The judge would then decide which picture was closest to the viewer's description. If the actual target was the picture judged closest to the description, this was scored as a "hit."

Hyman and Utts agreed that these experiments seem to have eliminated obvious defects of previous experiments. They also agreed that the results of the ten best experiments could not reasonably be accounted for by chance. These articles by Utts and Hyman explain why, despite this agreement, Hyman remains a skeptic and Utts a believer.
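
The agreement that chance is not a reasonable explanation rests, at heart, on a binomial calculation: with five pictures per trial, a judge guessing blindly scores a hit with probability 1/5. The Python sketch below is ours, and the trial and hit counts in it are hypothetical placeholders rather than figures from the reports; it computes the probability of doing at least that well by pure guessing:

    from math import comb

    def binomial_tail(n_trials, n_hits, p=0.2):
        """P(at least n_hits hits in n_trials trials) under chance guessing with prob p."""
        return sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
                   for k in range(n_hits, n_trials + 1))

    # Hypothetical numbers purely for illustration: 100 trials, 32 hits.
    print(f"P(32 or more hits in 100 trials by chance) = {binomial_tail(100, 32):.4f}")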

