Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.6, n.1 (1998)

Robert C. delMas
General College
University of Minnesota
333 Appleby Hall
Minneapolis, MN 55455
612-625-2076

delma001@maroon.tc.umn.edu

William P. Peterson
Department of Mathematics and Computer Science
Middlebury College
Middlebury, VT 05753-6145
802-443-5417

wpeterson@middlebury.edu

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Bob abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. Because of limits on the literature we have access to and the time we have to review it, we may overlook some potential articles for this column; we therefore encourage you to send us your reviews and suggestions for abstracts.


From the Literature on Teaching and Learning Statistics


"Counting Penguins"

by Mike Perry and Gary Kader (1998). The Mathematics Teacher, 91(2), 110-116.

Perry and Kader present an interesting activity that demonstrates sampling procedures and the central limit theorem to students. Students are asked to see what happens when random samples of different sizes are drawn from three different "populations" of penguins from a region of Antarctica. The Antarctic region is simulated by a 10 x 10 matrix with different counts of penguins in each of the 100 cells. Three different populations of penguin counts are used. The three populations have the same mean but differ in the values of the population standard deviation. Students draw random samples of two different sizes (n = 10 and n = 20), and the class creates a sampling distribution for each sample size and population. The activity helps students explore the shape of the sampling distributions and the relationship of the sampling distribution means and standard deviations to the population parameters. Questions for classroom discussion are suggested.
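
Instructors who want to preview how the activity behaves can simulate it. The sketch below is only an illustration: the penguin counts are randomly invented placeholders, not the populations given in the article.

    import random

    # Hypothetical 10 x 10 region: a penguin count for each of the 100 cells.
    population = [random.randint(0, 40) for _ in range(100)]
    mu = sum(population) / len(population)

    def sample_means(n, reps=1000):
        """Means of reps random samples of size n, drawn without replacement."""
        return [sum(random.sample(population, n)) / n for _ in range(reps)]

    for n in (10, 20):
        means = sample_means(n)
        center = sum(means) / len(means)
        spread = (sum((m - center) ** 2 for m in means) / len(means)) ** 0.5
        print(f"n={n}: mean of sample means={center:.2f} "
              f"(population mean={mu:.2f}), SD of sample means={spread:.2f}")

Running this shows the mean of the sample means staying near the population mean while the spread shrinks as n grows, which is the behavior the activity is designed to reveal.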


"Push-Penny: What is Your Expected Score?"

by Gary Kader and Mike Perry (1998). Mathematics Teaching in the Middle School, 3(5), 370-377.

The authors describe an activity to help students develop an intuitive feeling for the consequences of randomness through data handling and the construction of graphs and tables. Students push a coin on a board trying to make the coin land on one of five lines. The lines are drawn perpendicular to the push direction, with lines spaced exactly two coin widths apart. The activity allows students to create time-series charts and probability distributions to explore concepts such as runs, randomness, central tendency, and the law of large numbers.
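
Under an idealized model in which the coin's center lands uniformly at random (an assumption of this sketch, not part of the authors' activity), a coin of diameter d touches a line whenever its center falls within d/2 of it, so with lines 2d apart the chance of scoring on a push is 1/2. A quick simulation:

    import random

    D = 1.0           # coin diameter, arbitrary units
    SPACING = 2 * D   # lines are two coin-widths apart

    def push():
        """One idealized push: does the coin touch the nearest line?"""
        center = random.uniform(0, SPACING)        # position between two lines
        return min(center, SPACING - center) <= D / 2

    trials = 100_000
    hits = sum(push() for _ in range(trials))
    print(f"hit rate: {hits / trials:.3f} (the idealized model predicts 0.5)")

Comparing class data to this idealized rate is itself a useful discussion of model versus reality, since real pushes are anything but uniform.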


The American Statistician: Teacher's Corner


"An Inequality for a Measure of Deviation in Linear Models"

by Thomas Mathew and Kenneth Nordstrom (1997). The American Statistician, 51(4), 344-349.

A matrix inequality is established that provides an upper bound for a quadratic form that involves the difference between two linear unbiased estimators of the same linear parametric function in a general linear model. Various special cases of the inequality are discussed. Certain inequalities that arise in the problem of outlier detection and prediction of observations come out as special cases. In addition, some extensions of Samuelson's inequality are also obtained.

"Major League Baseball Player Salaries: Bringing Realism into Introductory Statistics Courses"

by Frederick Wiseman and Sangit Chatterjee (1997). The American Statistician, 51(4), 350-352.

A dataset consisting of salaries of major league baseball players is published at the start of each season in USA Today, and is also made available on the Internet. It is argued that such an easily available dataset and those similar to it can be successfully used by students in a first statistics course for an interesting introduction to data analysis through summary measures and graphical displays. Such an approach is most natural for many students because of a strong interest in sports and economics. Other statistical ideas can be explored as a natural consequence of the discussions that ensue from such an analysis.
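
Even a minimal first analysis makes the point. The sketch below uses invented salary figures (in millions of dollars), not the USA Today data:

    import statistics

    # Hypothetical player salaries, in millions of dollars.
    salaries = [0.2, 0.3, 0.3, 0.5, 0.8, 1.1, 1.5, 2.4, 3.0, 6.0]

    print("mean  :", statistics.mean(salaries))    # pulled up by the stars
    print("median:", statistics.median(salaries))  # the typical player
    # Salary distributions are strongly right-skewed, so the gap between
    # mean and median is itself a good first lesson in summary measures.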

"Risk -- A Motivating Theme for an Introductory Statistics Course"

by G. R. Dargahi-Noubary and Jo Anne S. Growney (1998). The American Statistician, 52(1), 44-48.

This article describes an idea for motivating students in an introductory probability and statistics course. The motivating theme is risk, and the process begins with a first-day-of-class questionnaire that samples students' attitudes toward risk and involves them in the analysis of events and decisions from their daily lives. Questionnaire responses serve as a context for the instructor to develop the technical concepts of probability and statistics. Moreover, the questionnaire provides a way to substantially increase student motivation and involvement in the course.

"The Mississippi Problem"

by Gunnar Blom, Jan-Eric Englund, and Dennis Sandell (1998). The American Statistician, 52(1), 49-50.

We present a tricky combinatorial problem, primarily intended for entertainment. Two more problems are given as a challenge to the reader at the end of the article.
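
The article saves its problem as a surprise, so it is not reproduced here; but the word itself suggests the classic warm-up count of the distinguishable arrangements of MISSISSIPPI, which makes a natural companion exercise. A sketch of that computation (my example, not necessarily the authors' problem):

    from math import factorial
    from collections import Counter

    word = "MISSISSIPPI"
    counts = Counter(word)          # I: 4, S: 4, P: 2, M: 1

    # Multinomial coefficient: 11! / (4! * 4! * 2! * 1!)
    arrangements = factorial(len(word))
    for c in counts.values():
        arrangements //= factorial(c)
    print(arrangements)             # 34650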

"The Effects of Alternative HOME-AWAY Sequences in a Best-of-Seven Playoff Series"

by G. W. Bassett and W. J. Hurley (1998). The American Statistician, 52(1), 51-53.

In the NBA and NHL, the usual playoff format is a best-of-seven series in which the stronger team (based on regular season performance) is given the benefit of four games scheduled in its home building. Typical HOME-AWAY schedules are HHAAAHH (the 2-3-2 format) for the NBA and HHAAHAH (the 2-2-1-1-1 format) for the NHL. Assuming that games are independent Bernoulli trials, we show that each team's probability of winning the series is unaffected by HOME-AWAY sequencing, but that the average length of a series is affected by it. For instance, if one team is stronger than the other in both buildings, the 2-3-2 format has a higher expected number of games than does the 2-2-1-1-1 format. The results follow from simple probability calculations. The sporting context makes this an interesting exercise for students of statistics.
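
The paper's claims are easy to verify by brute-force enumeration of all 2^7 game outcomes. In the sketch below the home and road win probabilities (0.60 and 0.55) are invented for illustration; the article's own "simple probability calculations" are not reproduced here.

    from itertools import product

    def series_stats(schedule, p_home, p_away):
        """Exact series win probability and expected length for the team
        whose venue string (e.g. 'HHAAAHH') is given, assuming independent
        games won with probability p_home at home and p_away on the road."""
        win_prob = exp_len = 0.0
        for outcome in product((0, 1), repeat=7):    # 1 = this team wins
            prob = 1.0
            for venue, res in zip(schedule, outcome):
                p = p_home if venue == 'H' else p_away
                prob *= p if res else 1 - p
            wins = losses = 0
            for game, res in enumerate(outcome, start=1):
                wins += res
                losses += 1 - res
                if wins == 4 or losses == 4:         # series actually ends here
                    break
            win_prob += prob * (wins == 4)
            exp_len += prob * game
        return win_prob, exp_len

    for fmt in ("HHAAAHH", "HHAAHAH"):
        w, length = series_stats(fmt, p_home=0.60, p_away=0.55)
        print(f"{fmt}: P(series win) = {w:.4f}, expected games = {length:.3f}")

Both formats print the same win probability, while the expected number of games differs, exactly as the abstract asserts.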

"Algorithms for Sums of Squares and Covariance Matrices Using Kronecker Products"

by Barry Kurt Moser and Julia K. Sawyer (1998). The American Statistician, 52(1), 54-57.

This article presents Kronecker product algorithms for constructing sums of squares and covariance matrices in complete, balanced designs. The algorithms can be applied to fixed, random, or mixed models with any number of factors. The covariance matrices are constructed under the usual infinite and finite model assumptions. The algorithms are then extended for use with incomplete designs or designs with missing data.
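
To convey the flavor of such constructions, here is a small illustration for a balanced one-way layout (my own example of the general idea, not the authors' algorithms): the projections onto the grand mean and onto the group means are Kronecker products, and the between-groups sum of squares is a quadratic form in their difference.

    import numpy as np

    a, n = 3, 4                                  # 3 groups, 4 observations each
    rng = np.random.default_rng(0)
    y = rng.normal(size=a * n)                   # data stacked group by group

    J = lambda k: np.ones((k, k))
    P_mean = np.kron(J(a), J(n)) / (a * n)       # projection onto the grand mean
    P_group = np.kron(np.eye(a), J(n)) / n       # projection onto group means

    ss_between = y @ (P_group - P_mean) @ y

    # Agrees with the familiar formula n * sum_i (ybar_i - ybar)^2:
    ybar_i = y.reshape(a, n).mean(axis=1)
    print(ss_between, n * ((ybar_i - y.mean()) ** 2).sum())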


Journal of Educational and Behavioral Statistics: Teacher's Corner


"Using Exams for Teaching Concepts in Probability and Statistics"

by Andrew Gelman (1997). Journal of Educational and Behavioral Statistics, 22(2), 237-243.

We present several classroom demonstrations that have sparked student involvement in our introductory undergraduate courses in probability and statistics. The demonstrations involve both experimentation using exams and statistical analysis and adjustment of exam scores.


Teaching Statistics


A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, ph@maths.nott.ac.uk, RSS Centre for Statistical Education, University of Nottingham, Nottingham NG7 2RD, England. Teaching Statistics has a website at http://www.maths.nott.ac.uk/rsscse/TS/.


Teaching Statistics, Spring 1998
Volume 20, Number 1

"Countering Indifference using Counterintuitive Examples" by Larry Lesser

The author demonstrates how counterintuitive examples in statistics can be used to motivate rather than demoralize students. In a survey, the author found a correlation of .67 between students' ratings of how interesting and how surprising they found each of 20 true but counterintuitive statistical results. Counterintuitive examples with high surprise ratings can be used to motivate discussion.

"How Long is a Piece of String?" by Ralph Riddiough and John H. McColl

The authors describe an in-class experiment that can motivate discussion of estimation, experimental design, and graphical representation of data. Each student is asked to cut a piece of string of a specified length without using a measuring device, and each piece is removed from sight once it is cut. The order in which the pieces were cut is recorded, and the lengths are measured after all 10 pieces have been produced. Half of the students receive feedback on the length of the string after each cut, while the other half do not. The authors illustrate how discussion of measurement and graphical display can ensue from this activity as students design ways to compare the two groups. Statistical concepts such as central tendency, sources of variation, and independence can be motivated with this activity.

"Coincidences: The Truth Is Out There" by Robert Matthews and Fiona Stones

The authors provide a simple way of testing predictions against actual observed data. The Birthday Paradox predicts that in a random gathering of 23 people, there is a slightly better than even chance (about 51%) that at least two people will have the same birthday. The authors have students test this prediction by looking at the birth dates of players in the starting line-ups of football (soccer) matches. There are 11 players on each team, so inclusion of the referee's birth date provides the required 23 people per match. Observed frequencies of coincident birthdays can be compared to those predicted by the Birthday Paradox to illustrate how probability theory can accurately predict unexpected results in the real world.
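
The prediction itself is one line of arithmetic: the chance that 23 birthdays are all distinct is (365/365)(364/365)...(343/365), so the chance of at least one coincidence is about 0.507. A sketch under the usual assumptions (365 equally likely birthdays, no leap days):

    def p_shared(k, days=365):
        """Probability that at least two of k people share a birthday."""
        p_distinct = 1.0
        for i in range(k):
            p_distinct *= (days - i) / days
        return 1 - p_distinct

    print(f"{p_shared(23):.3f}")   # 0.507 -- just over even odds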


Topics for Discussion from Current Newspapers and Journals


"GOP Just Wants to Check Whether Anybody Likes the IRS"

by Richard W. Stevenson. The New York Times, 7 November 1997, A28.

The United States Congress has recently been debating an overhaul of the Internal Revenue Service (IRS). Last fall, the House approved a taxpayer "bill of rights," with discussion next to be taken up in the Senate. This article discusses a proposal, made by House Speaker Newt Gingrich, to mail a 14-question voluntary response survey about the IRS to every taxpayer during 1998. While the total cost of the plan -- estimated at $30-35 million -- was criticized as excessive, Gingrich noted that it amounts to less than 50 cents a return, a small price to pay to give citizens the chance to tell the government how the IRS is doing.

Democrats countered that Congress had already commissioned a professional poll on attitudes towards the IRS, at a cost of only $20,000. In that poll, 48% of respondents rated customer service by the IRS as "excellent" or "good," compared to 44% who found it "not so good" or "poor." While 58% said tax forms were difficult to complete because of complexities in the tax code, only 10% attributed their difficulties to IRS inefficiency.

In a letter to the House Appropriations Committee, Treasury official Linda Robinson criticized the Gingrich plan, arguing that it "would ill serve the American taxpayer to spend an inordinately large amount of money on an unscientific survey whose results could provide misleading guidance on how to improve the tax system."


"Current Issues: Student Ratings of Professors"

edited by Anthony G. Greenwald (1997). American Psychologist, 52(11), pp. 1182-1225.

This issue presents a special collection of papers, edited by Anthony G. Greenwald of the University of Washington, that discuss student evaluation of teaching. In his introductory piece, "Validity Concerns and Usefulness of Student Ratings of Instruction" (pp. 1182-1186), Greenwald relates an experience he had during the period 1989-90. In 1989, he received the highest student ratings of his career; a year later, teaching the same course with only slight modifications to the syllabus, he received his lowest. The two sets of scores were 2.5 standard deviations apart, representing an 8-decile separation according to the university's norms.

The experience spurred him to explore the literature on student evaluations of teaching (SETs) and to collect more data of his own. He begins here by summarizing historical trends in research on ratings, from the early 1970s to the present. Electronic searches of the PsycINFO and ERIC databases indicate that activity in this area peaked at 71 publications in the 5-year period 1976-80, shrinking to a low of 8 publications in 1991-95. The 1976-80 period saw the largest proportion of publications critical of the validity of SETs.

During the 1970s, a major source of concern was the possibility that grading practices were biasing the evaluations. Some experimental tests were done to show that manipulating grades upwards or downwards did indeed influence ratings. Although critics later questioned the methodology of these studies, Greenwald maintains that the conclusions have never been empirically refuted.

Publications in the 1980s focused on the "convergent validity" of student ratings -- that is, the extent to which the ratings are correlated with other measures of teaching effectiveness. Among other things, this research pointed out that positive correlation between grades and ratings did not necessarily represent contamination of the ratings by grading practices, but might be attributable to other variables such as student motivation.

One might conclude from the publication record that earlier concerns about validity had been settled. This is not the case, and a variety of issues are described in the four articles that follow Greenwald's:

"Making Students' Evaluations of Teaching Effectiveness Effective: The Critical Issues of Validity, Bias and Utility"

by Herbert W. Marsh and Lawrence A. Roche, University of Western Sydney, pp. 1187-1197.

"Navigating Student Ratings of Instruction"

by Sylvia d'Apollonia and Philip C. Abrami, Concordia University, pp. 1198-1208.

"Grading Leniency is a Removable Contaminant of Student Ratings"

by Anthony G. Greenwald and Gerald M. Gillmore, University of Washington, pp. 1209-1217.

"Student Ratings: The Validity of their Use"

by Wilbert J. McKeachie, University of Michigan, pp. 1218-1225.


"The Truth is Staring Us in the Back"

by Robert Matthews. Sunday Telegraph, 2 November 1997, p. 6.

Biologist Rupert Sheldrake has designed experiments to show that it is possible for people to tell when someone is staring at them from behind. Like other popularly reported phenomena of extrasensory perception, this is something that many people believe they have experienced, but which skeptics will certainly challenge. Matthews reports that Sheldrake has carried out a simple experiment with a large number of pairs of children. In a single experiment, say with Jim and Mary, Jim is blindfolded with his back to Mary. Mary then decides, by referring to a list of random zeros and ones, whether or not to stare at Jim. Jim then records whether or not he thinks he is being stared at. This is repeated for a sequence of, say, 20 trials. Matthews reports that, in a total of more than 18,000 trials carried out worldwide, the children reported they were being stared at in 60% of the trials when they actually were being stared at, and in 50% of the trials when they were not being stared at. To avoid claims of communication between participants, Sheldrake carried out similar experiments with a sound-proof window between the two children.

On his homepage (http://www.sheldrake.net), Sheldrake describes his experiment in detail, and suggests that you try it out with children or students. It is simple enough to make a feasible classroom activity. If it is as successful as Sheldrake predicts, it should provoke a lot of discussion.
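
A class that runs the experiment will want a formal test. A natural choice is a one-sided binomial test of the "being stared at" trials against chance guessing (p = 0.5); the sketch below uses a normal approximation, and the 400-trial figure is an invented classroom-scale example, not Sheldrake's data.

    from math import erf, sqrt

    def upper_p_value(successes, n, p0=0.5):
        """Approximate one-sided P-value for H0: p = p0 against p > p0,
        via the normal approximation with continuity correction."""
        mean, sd = n * p0, sqrt(n * p0 * (1 - p0))
        z = (successes - 0.5 - mean) / sd
        return 0.5 * (1 - erf(z / sqrt(2)))

    # Hypothetical: 240 correct reports in 400 staring trials (a 60% hit
    # rate, matching the figure Matthews reports for the pooled data).
    print(f"P-value: {upper_p_value(240, 400):.5f}")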


"Colleges Look for Answers to Racial Gaps in Testing"

by Ethan Bronner. New York Times, 8 November 1997, A1.

Bronner writes: "Universities around the country are facing an agonizing dilemma: If they retain their affirmative-action admission policies, they face growing legal and political challenges, but, if they move to greater reliance on standardized tests, the results will be a return to virtual racial segregation."

The Journal of Blacks in Higher Education states that, if admission to the nation's top colleges and universities were based primarily on test scores, black enrollments at these institutions would drop by at least one half, and, in many cases, by as much as 80%. A study published by the New York University Law Review in April came to the same conclusion for law schools.

Organizations such as FairTest have long argued that SAT scores play too prominent a role in admission decisions. SAT scores are not very good predictors of even the first-year grade point average, and their predictive ability worsens for each succeeding year. Researchers at ETS have found that these problems can be blamed, to some extent, on grade inflation and on differences in grading between departments; controlling for these factors gives the scores higher validity.

The reasons for the differences in test scores between blacks and whites continue to be debated. According to the article, the current view is that the causes are a complex blend of psychology and culture. But some studies have shown that students admitted with lower test scores and grades under affirmative action programs end up as successful as their peers, as measured by income and professional achievement.

Michele A. Hernandez, a former admissions official at Dartmouth College, has recently written a book about the admissions process, entitled A is for Admission (Warner Books, 1997). She describes the admission procedure typically used by Ivy League schools and provides quite detailed information about how students are rated, especially in the Ivy League and in particular at Dartmouth. Many of these schools use SAT scores as part of a so-called Academic Index, which is the average of three values: (1) the average of the student's highest SAT I math and verbal scores, using the first two digits only; (2) the average of the student's three highest SAT II subject tests (again using only the first two digits); and (3) the student's high school rank-in-class, converted to take into account differences in the schools, with a maximum score of 80.
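
As described, the index is a short computation. The sketch below follows the three-component recipe above; the conversion of class rank to its scale varies by school and is taken here as a given input, and the sample scores are invented.

    def academic_index(sat1_math, sat1_verbal, sat2_scores, converted_rank):
        """Average of the three components described above; SAT scores
        contribute only their first two digits (so 800 counts as 80)."""
        c1 = (sat1_math // 10 + sat1_verbal // 10) / 2
        top3 = sorted(sat2_scores, reverse=True)[:3]
        c2 = sum(s // 10 for s in top3) / 3
        return (c1 + c2 + converted_rank) / 3

    # Invented example: SAT I 710/680, four SAT II tests, rank score 72.
    print(academic_index(710, 680, [700, 650, 720, 600], 72))  # about 70.2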


"Ruling Out a Meteorite"

by Matthew L. Wald. The New York Times, 12 December 1997, B6.

Dr. William A. Cassidy, a professor of geology and planetary science at the University of Pittsburgh, was asked by the National Transportation Safety Board (NTSB) to estimate the probability that the crash of TWA's Flight 800 was caused by a meteorite. He began by looking at incidents in this century of houses being hit by meteorites and found 14 cases. He proceeded to estimate the total roof area, for use as a denominator in an estimate of how many meteorites hit a given area in a given year. Then, from data provided by the NTSB, Cassidy calculated the target area represented by all planes in the air, taking into account the average number of hours that each plane is in flight.

His conclusion: "The expected frequency of meteorites hitting planes in flight is one such event every 59,000 to 77,000 years."
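
The structure of the estimate is a simple rate calculation: observed house strikes divided by roof area and years gives a meteorite flux, which is then applied to the aggregate target area of airborne planes. In the sketch below, only the 14 house strikes come from the article; every other input is an invented placeholder, so the output is merely an order-of-magnitude illustration of the method, not Cassidy's calculation.

    # Placeholder inputs -- illustration of the method only.
    house_hits = 14               # meteorite-struck houses (from the article)
    years = 90                    # roughly "this century" of records (assumed)
    total_roof_km2 = 50_000.0     # total roof area surveyed, km^2 (assumed)

    flux = house_hits / (years * total_roof_km2)   # strikes per km^2 per year

    plane_area_km2 = 0.0005       # target area of one airliner, km^2 (assumed)
    planes_aloft = 4_000          # average number in the air (assumed)

    strikes_per_year = flux * plane_area_km2 * planes_aloft
    print(f"about one strike every {1 / strikes_per_year:,.0f} years")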


"Earth's Past Offers Chilling Global Warming Scenarios"

by David L. Chandler. The Boston Globe, 2 December 1997, A1.

This article reports that a number of recent climatological studies have used the earth's geologic record to predict the effects of future global warming. Emerging evidence indicates that at the end of the last ice age, about 11,600 years ago, worldwide temperatures changed at a much faster rate than predicted by today's computer forecasts for global warming. In some locations, the average temperatures changed by as much as 18 degrees Fahrenheit over a period of several decades. This contrasts sharply with computer models that show global warming from greenhouse gases unfolding gradually over the next century. The record from the past shows that some sort of threshold was crossed, beyond which changes of devastating proportions took place very rapidly.

In a recent article in Science, climatologist Wallace Broecker of Columbia University argues that abrupt changes in ocean currents are the only possible explanation for such dramatic changes. This point of view is consistent with an MIT study published a few weeks ago in Nature, which analyzed sediment cores drilled from the ocean floor off Bermuda. It concluded that climatic shifts are related to "deep water reorganizations" and that these can occur in a few hundred years or less. And three months ago, an article in Nature reported computer simulations showing that greenhouse gas buildup in the range projected for the next century could be sufficient to shut down an ocean circulation pattern known as the Atlantic Conveyor.

What would be the effects of such a shutdown? So far, no models have been developed that include such complexities -- in fact, none can reproduce the dramatic changes now apparent in the historical record. But Ronald Prinn of MIT says that such a shutdown would mean that "all bets are off" as to the trajectory of global climate change. Noting that some ice age models start with warming, he adds that it is even conceivable that the shutdown could trigger a new ice age. Wallace Broecker concludes: "There's no way to predict whether this is going to happen. We can get some indication that we're approaching the edge of the cliff, but there's no way to say if it's 1 chance in 2 ... or 1 in 200. It's Russian roulette where you don't know how many chambers are in the gun."


"Sampling is Not Enumerating"

by William Safire. The New York Times, 7 December 1997, A23.

William Safire is not happy about the United States Census Bureau's upcoming plans for using sampling as part of the year 2000 census. He begins with some general observations about polls, noting that "as elections demonstrate, a poll is an educated guess and not a hard count," and that "often pollsters are mistaken." He then describes how "polling warps politics," and for evidence cites the results of several polls taken prior to the 1996 presidential election that put Clinton anywhere from seven points ahead of Dole (Zogby for Reuters), to 12 points ahead (ABC, NBC, and the Wall Street Journal), on up to 18 points ahead (CBS and the New York Times). Safire contends that early polls such as the latter two kept Republicans from voting. "On Election Day," he writes, "the actual enumeration showed Zogby alone to be within one point of accuracy."

Turning attention to the 2000 Census, Safire claims that "liberals want to replace, or `augment,' laborious counting with the educated guesswork of sampling." It is clear that he believes that the decision to use sampling techniques is politically motivated, and that the goal is to "pick up a dozen House seats and increase spending on the poor." It is not clear, however, if he has a complete understanding of how this will be done, or indeed of how sampling will be used at all. He merely states that after doing a sloppy head count, Democrats will somehow "redo slums with a vengeance" and then "extrapolate those redone samples to skew -- or `weight' -- the earlier count." Meanwhile, he gives several suggestions for improving an enumerative count, such as improving mailing lists and training census takers to be better at finding the homeless.

The full text of the article is reproduced in the February 1998 issue of Amstat News (pp. 2-3), where ASA President David Moore deals squarely with Safire's misstatements about the nature of statistical sampling.


"Study Suggests Light to Back of the Knees Alters Master Biological Clock"

by Sandra Blakeslee. New York Times, 16 January 1998, A20.

This article reports on a study entitled "Extraocular Circadian Phototransduction in Humans," which appeared in the journal Science (16 January 1998, Vol. 279, pp. 396-397). Circadian rhythms are oscillations in various bodily functions, and are observed in many animal species. The periods of these rhythms are usually close to, but not exactly, 24 hours and thus require a daily adjustment to synchronize the functions with the external environment. Fish, birds, amphibians, and reptiles have various photoreceptor systems that allow this synchronization to occur. For mammals, one might expect the eyes to play the key role in this function. However, blind people still suffer jet lag. Also, experiments done with "retinally degenerate" mice have shown that such mice can synchronize their bodies to a light-dark cycle.

The study investigated other receptor mechanisms in humans. It involved 15 individuals, in a total of 33 phase-shifting trials. The participants were assigned to a control group or an active group at random. In the active group, the individuals were subjected to a 3-hour pulse of light which was shone on the back of the knee. Great care was taken to make sure that the participants could not distinguish, by the presence of heat or light, whether or not they were in the active group. It was found that a significant shift in the phase of the circadian rhythm could be achieved by the light pulse. If the light was presented before the time of minimum body temperature, a delay in the phase of the circadian rhythm was produced, while light presented after the minimum produced an advance. The average delay was 1.43 hours, and the average advance was 0.58 hours.

The New York Times article quotes experts as saying that the results are quite surprising, but that they can see no flaw in what appears to be very good research. It is hoped that the findings may lead to new treatments for seasonal depression, sleep disorders, and jet lag.


"Study Finds Conflicts in Medical Reports"

by Richard A. Knox. Boston Globe, 8 January 1998, A12.

In 1995, a case-control study suggested that the use of calcium-channel blockers to treat hypertension led to an increased risk of heart disease. This led to an intense debate, both in technical journals and in the press. Researchers writing in the New England Journal of Medicine ("Conflict of Interest in the Debate over Calcium Channel Antagonists," 8 January 1998, p. 101) looked at the 70 reports that appeared during 1996-1997, classifying each as favorable, neutral, or critical toward the drugs. In all, 30 of the reports were classified as favorable, 17 as neutral, and 23 as critical. The researchers then contacted the authors of the reports and questioned them about financial ties to drug companies.

It turned out that 96% of the authors of the favorable reports had received money from manufacturers of calcium-channel blockers. Only 60% of the writers of neutral reports had ties to such companies, and 37% of those who had written critical reports had received money from the drugs' manufacturers. The researchers further investigated whether authors of critical reports were more likely to have financial ties with competing companies. Here the answer seemed to be no: 88% of the authors of favorable reports, 53% of those neutral, and 37% of those critical had financial ties to companies producing competing products.

The researchers cited widespread failure of medical journals to disclose authors' financial ties.
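
The association between tone and ties invites a chi-square test of independence. The article reports percentages of authors rather than raw counts, so the table below is a rough reconstruction purely for illustration: it pretends one author per report and rounds 96% of 30, 60% of 17, and 37% of 23 to whole numbers.

    # Approximate counts of authors (with ties, without ties) by report tone.
    table = {"favorable": (29, 1), "neutral": (10, 7), "critical": (9, 14)}

    row_tot = {k: sum(v) for k, v in table.items()}
    col_tot = [sum(v[j] for v in table.values()) for j in (0, 1)]
    n = sum(row_tot.values())

    chi2 = sum((obs - row_tot[k] * col_tot[j] / n) ** 2
               / (row_tot[k] * col_tot[j] / n)
               for k, v in table.items() for j, obs in enumerate(v))
    print(f"chi-square = {chi2:.1f} on 2 df")   # about 21; the 5% cutoff is 5.99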


"The Role of Numeracy in Understanding the Benefit of Screening Mammography"

by Lisa M. Schwartz, Steven Woloshin, William C. Black, and H. Gilbert Welch. Annals of Internal Medicine, 1 December 1997, 127, pp. 955-972.

The appropriate role of mammogram screening for women in the age group 40-49 continues to be vigorously debated. Some breast cancer experts have now suggested that rather than issuing a blanket recommendation, doctors might instead give women in this age group the relevant information, and allow them to decide for themselves. This article investigates whether such an option is realistic in light of the general public's numeracy. A full description of the study is available on the web at http://www.acponline.org/journals/annals/01dec97/numeracy.htm

The authors sent questionnaires to test numeracy and understanding of risk reduction to 500 women from a registry maintained by the Department of Veterans Affairs Medical Center in White River Junction, Vermont. Sixty-one percent of these women returned questionnaires that could be used in the study.

In the first part of the questionnaire the women were asked to answer the following three questions:

  1. Imagine that we flip a coin 1,000 times. What is your best guess about how many times the coin would come up heads in 1,000 flips? ________times out of 1,000.

  2. In the BIG BUCKS LOTTERY, the chance of winning a $10 prize is 1%. What is your best guess about how many people would win a $10 prize if 1000 people each buy a single ticket to BIG BUCKS? ________person(s) out of 1,000.

  3. In ACME PUBLISHING SWEEPSTAKES, the chance of winning a car is 1 in 1,000. What percent of tickets to ACME PUBLISHING SWEEPSTAKES win a car? ________%

Each subject was then given one of the following four scenarios, chosen at random.

Scenario 1:
12 in 1000 women will die from breast cancer in 10 years without mammography. Mammography will reduce breast cancer deaths by 33%.

Scenario 2:
Mammography will reduce breast cancer deaths by 33%.

Scenario 3:
12 in 1000 women will die from breast cancer without mammography. Mammography will reduce breast cancer deaths by 4 in 1000.

Scenario 4:
Mammography will reduce breast cancer deaths by 4 in 1000.
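
Note that the four scenarios describe the same benefit in different frames: a 33% relative reduction applied to a baseline of 12 deaths per 1000 is an absolute reduction of about 4 per 1000. A one-line check of the equivalence:

    baseline = 12 / 1000    # 10-year deaths without mammography (scenarios 1, 3)
    relative = 0.33         # relative risk reduction (scenarios 1, 2)
    print(f"{baseline * relative * 1000:.1f} fewer deaths per 1000")  # about 4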

The women were then asked to answer the following two questions:

Imagine 1,000 women exactly like you. Of these women, what is your best guess about how many will die from breast cancer during the next 10 years if...

they are not screened every year for breast cancer by mammogram. ________out of 1000

they are screened every year for breast cancer by mammogram. ________out of 1000

The article reports the percentages of correct answers for the three preliminary numeracy questions and for the final two questions. For the two scenarios in which no baseline figure was given, an answer was considered correct if the difference between the number of deaths with mammography and the number without was estimated correctly.

The authors consider these results quite discouraging. However, they were encouraged by the fact that those who did better on the three-question "numeracy" test also did better on the two real-life questions.


After the skiing deaths of Sonny Bono and Michael Kennedy, several newspapers evaluated safety on ski slopes. Here are some examples.

"Thinking twice on the slopes"

by Tony Chamberlain. Boston Globe, 7 January 1998, A1.

Steven Over, head of the National Ski Patrol in Denver, is quoted as saying:

Records kept by his organization and the National Safety Council show that, since 1984, skiers have made 52.25 million visits to the slopes annually, and an average of 34 of them have died each year.

The article states that those figures are confirmed by the National Ski Areas Association, a trade group representing 330 resorts that account for 90% of America's skier visits. Over goes on to say:

In an ironic way, this kind of publicity sheds light on the figures, and the figures show that, compared to other outdoor sports, skiing and boarding are safer than many other activities -- like boating or biking, for instance.

"Bono, Kennedy Fit Ski Accident Profile"

by Brigid Schulte. News and Observer (Raleigh, NC), 7 January 1998, A10.

This article gives specific figures that appear to support the claim that skiing is less dangerous than some other activities:

With the publicity of these two high-profile skiing deaths turning a spotlight on ski injuries, the ski industry has been quick to point out that in 1995, 716 people died in boating accidents, 800 bicyclists died and 89 people were hit by lightning.

"Ski Safety Becomes Hot Topic"

by Penny Parker. Denver Post, 7 January 1998, A20.

Here are some more numbers for comparison. This article quotes Rose Abello, spokeswoman for Colorado Ski Country USA:

Nationally, skiing fatalities are less than one per one million skier days according to the National Ski Areas Association. A skier day is one lift ticket sold or given to one skier for all or part of a day.

The article goes on to say:

In 1995, there were 17 drowning deaths per 1 million water-sport participants, and 7.1 deaths per million bicyclists, according to the National Safety Council.
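
The skiing figures quoted in these articles are easy to check against one another, though the denominators are not directly comparable (skier days versus annual participants), which is itself a worthwhile discussion point. A quick check of the Boston Globe numbers against the "less than one per million skier days" claim:

    deaths_per_year = 34           # average annual skiing deaths (quoted above)
    visits_per_year = 52.25e6      # annual skier visits (quoted above)

    rate = deaths_per_year / visits_per_year * 1e6
    print(f"{rate:.2f} deaths per million skier days")   # about 0.65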

"Two Deaths Put Focus on Skiing Safety"

by Jim Shea. Hartford Courant, 7 January 1998, A1.

This article suggests that men are the real problem! Jasper Shealy, the country's leading expert on skiing injuries and an engineering professor at the Rochester Institute of Technology, stated:

Death on the slopes is not random. There is one group of people who almost never die, and another that is at great risk. If you want to eliminate deaths in skiing, eliminate male participants. Males account for 85 percent of all skiing deaths.

In fact, the article cites figures from the National Ski Areas Association indicating that of the 36 skiers who died in 1996, 32 were male.

"Sonny Bono Dies in Ski Accident"

by David Ferrell. Los Angeles Times, 7 January 1998, A1.

It seems that no discussion of risks is complete without some mention of lightning strikes! Rick Kahl, editor in chief of Skiing magazine, quotes figures from the National Severe Storms Laboratory:

Fewer people die skiing than get killed by lightning every year. Lightning takes the lives of 89 people per year in the United States. It's an incredible fluke that anyone famous gets killed skiing. It's a fluke beyond flukes that two famous people get killed within a week of each other.


"Cheerios Lower Cholesterol a Little in Study"

by Gordon Slovut. Minneapolis Star Tribune, 14 January 1998, 3E.

General Mills, the maker of Cheerios, has mounted a new advertising campaign, based on the results of a study conducted at the University of Minnesota's Heart Disease Prevention Clinic. The following is from the back panel of a new Cheerios box:

... participants who added two 1-1/2 ounce servings (3 cups daily) of Cheerios to a diet low in saturated fat and cholesterol dropped their cholesterol levels more than those who followed the low saturated fat and cholesterol diet alone. Researchers concluded that soluble fiber from whole-grain oats in Cheerios may give an additional cholesterol lowering boost when added to a heart healthy diet!

The present article complains that General Mills initially overstated its case by suggesting that the reduction may be up to 18%. Only one of the 62 people in the study who ate Cheerios experienced that decrease. At the beginning of the six-week study period, the average cholesterol level of the 62 men and women assigned to eat Cheerios was 245.5 milligrams per deciliter. At the end of the study, their average was 239.96, which represents a 2.4% decrease. During the same period, the cholesterol levels for 62 subjects assigned to eat corn flakes increased from an average of 243.7 to an average of 247.3, an increase of 1.4%. Both groups adhered to an overall low-fat, low-cholesterol diet for the six weeks. In addition, they ate 1.5 ounces of their assigned cereal in the morning and evening.
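
Recomputing the percentage changes from the reported group means is a worthwhile exercise; the results land close to, though not exactly on, the 2.4% and 1.4% figures quoted, and the article does not say how its percentages were computed.

    before_cheerios, after_cheerios = 245.5, 239.96
    before_flakes, after_flakes = 243.7, 247.3

    drop = (before_cheerios - after_cheerios) / before_cheerios * 100
    rise = (after_flakes - before_flakes) / before_flakes * 100
    print(f"Cheerios: {drop:.1f}% decrease; corn flakes: {rise:.1f}% increase")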

Dietitian Helenbeth Reiss Reynolds, a co-author on the study who is also a paid consultant to General Mills, expressed regret about the emphasis on the 18% figure. But she added that five people in the Cheerios group saw their cholesterol drop by more than 10%, while only one person in the corn flakes group experienced that level of improvement.

The University of Minnesota researchers noted in their research report that a number of studies have now shown that foods that are high in soluble fiber (such as the whole grain oats found in Cheerios) can reduce cholesterol beyond what would be achieved by a low-fat, low-cholesterol diet alone.

