Teaching Bits: A Resource for Teachers of Statistics

Topics for Discussion from Current Newspapers and Journals

William P. Peterson
Middlebury College

Journal of Statistics Education Volume 11, Number 1 (2003), jse.amstat.org/v11n1/peterson.html

Copyright © 2003 by the American Statistical Association, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent.


"Even Elders Reflect Broad Bias Against the Old, Study Finds"

by Carey Goldberg, Boston Globe, October 28, 2002, p. A1.

A recent study suggests that societal prejudice against the elderly is especially deep-seated, even among elders themselves. According to the article:

Over the last several years, using a novel test that employs a rapid succession of faces or names to study attitudes, psychologists found that unconscious prejudice against the elderly among all age groups is even more widespread than unconscious racism. Among tens of thousands of people tested, more than 80 percent associated old faces with negative words such as "failure" or "agony," while similar bias against African-Americans showed up in only about 70 percent of white and Asian test-takers.

Most surprisingly, the researchers found that the old people among the research subjects - more than 10,000 of them - were every bit as biased against other old people as young people were.

The test referred to here is known as the Implicit Association Test (IAT). It was developed by Anthony G. Greenwald, a University of Washington psychologist. The test involves sorting two series of words and pictures. The words represent good or bad qualities, and the pictures show young or old faces. In the first series, you must classify each item as being either "old or bad" or "young or good." In the second series, you must classify items as "young or bad" or "old or good." People generally take longer on the second series. The difference is construed as an indirect measure of prejudice against the elderly.
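The IAT score is, at heart, just a difference in mean response times between the two sorting conditions. A minimal sketch with hypothetical timings (the numbers below are invented for illustration, not taken from the study):

```python
# Hypothetical trial timings, in milliseconds, for the two sorting conditions.
# The IAT effect is the mean slowdown on the counter-stereotypical pairing.
incongruent = [820, 905, 760, 1010, 880]  # "young or bad" / "old or good" trials
congruent = [640, 700, 615, 730, 690]     # "old or bad" / "young or good" trials

def mean(xs):
    return sum(xs) / len(xs)

iat_effect = mean(incongruent) - mean(congruent)
print(f"Mean slowdown: {iat_effect:.0f} ms")  # prints 200 ms for these made-up data
```

A positive difference is interpreted as evidence of implicit bias; the actual scoring algorithm also standardizes by the variability of each respondent's times.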

The easiest way to understand this process is to try the online demonstration of the IAT for yourself. In fact, the study itself was based on a voluntary response sample of over 130,000 Internet users (10,000 of whom were elderly). While acknowledging that such a sample is not representative, the researchers noted that it would be difficult for respondents to fake their attitudes in this format. For more discussion of this approach, you can download a pdf version of the paper "Harvesting Implicit Group Attitudes and Beliefs from a Demonstration Web Site."

The Globe article concludes with a wry quote from Wallace Madsen, the author of Old is not a Dirty Word, who says "Growing old in this country is a crime. When you grow old, these idiot sociologists just keep sticking you in buses and taking you to see the fall colors."


"Threats and Responses: Missile Shield; U.S. Ignores Failure Data At Outset Of Flights"

by William J. Broad, New York Times, December 18, 2002, p. A16.

"Debunking the Missile Defense Agency's 'Endgame Success' Argument"

by George N. Lewis and Lisbeth Gronlund, Arms Control Today, December, 2002.

How is the US military progressing with its proposed missile interceptor technology? It seems to depend on how you keep score. Lt. Gen. Ronald T. Kadish of the United States Air Force is the head of the Pentagon's Missile Defense Agency. In a report to Congress last June, he cited an 88% success rate for the prototype system, based on 25 tests that included both long-range and short-range interceptors. The Times article quotes him as saying, "This high rate of endgame success shows the feasibility of missile defense. The availability of technologies to protect the nation should not be in question."

The online article from Arms Control Today argues that this is an overly optimistic assessment. The "endgame" refers to what happens after a missile has successfully launched and has begun seeking its target. Missiles that fail earlier in their flight are not being counted in the Pentagon figures. The authors write:

Endgame success rate is irrelevant: There is no reason to consider the endgame success rate rather than the overall success rate because quality control errors can and have occurred in all phases of the tests. Taking into account failures that occur both prior to and during the endgame, the overall success rate for midcourse systems drops to only 41 percent (11 of 27).
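The arithmetic behind the two competing figures is easy to reproduce. A quick check, assuming the endgame count was 22 successes in 25 tests (inferred from the reported 88%) and using the 11-of-27 overall count cited by Lewis and Gronlund:

```python
# Endgame success rate counts only tests that reached the endgame phase;
# the overall rate also counts missiles that failed earlier in flight.
endgame_successes, endgame_tests = 22, 25   # 22/25 matches the reported 88%
overall_successes, overall_tests = 11, 27   # figures from Lewis and Gronlund

endgame_rate = endgame_successes / endgame_tests
overall_rate = overall_successes / overall_tests

print(f"Endgame success rate: {endgame_rate:.0%}")   # 88%
print(f"Overall success rate: {overall_rate:.0%}")   # 41%
```

The choice of denominator, not the data, accounts for the entire gap between the two claims.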


"Traffic Citations Reveal Disparity in Police Searches"

by Bill Dedman and Francie LaTour, Boston Globe, January 6, 2003, p. A1.

"Totals, Hometowns Key to Computations"

by Bill Dedman and Francie LaTour, Boston Globe, January 6, 2003, p. A7.

The problem of racial profiling by police in stopping and searching vehicles has been a topic of national debate for several years now. Massachusetts decided to conduct its own study, which looked at all 764,065 traffic tickets issued from April 2001 to November 2002 by 367 police departments in the state. The Globe obtained the data under the state's open-information law, and maintains a Web page devoted to the investigation. Along with archived stories from the paper, the site presents numerous charts and data graphics.

The data show that minorities are ticketed and searched at disproportionate rates. Blacks make up 4.6% of the state's population, but got 10.0% of the traffic tickets; Hispanics make up 5.6% of the population, but got 9.6% of the tickets. Overall, one stop in 60 results in a vehicle search. However, for blacks and Hispanics, the rate is one in 40. Because a primary goal of the searches is to find illegal drugs, it is interesting to look at the "hit rate"; that is, the percentage of searches that actually lead to drug charges. It turns out that the hit rate is higher for whites than for minorities, which makes the difference in search rates even harder to explain.
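The disparities quoted above can be expressed as simple ratios. A brief sketch using the figures reported by the Globe:

```python
# Ratio of each group's share of tickets to its share of the population;
# a ratio of 1.0 would mean ticketing in proportion to population.
groups = {"black": (10.0, 4.6), "Hispanic": (9.6, 5.6)}  # (ticket %, population %)
for name, (ticket_pct, pop_pct) in groups.items():
    print(f"{name}: ticketed at {ticket_pct / pop_pct:.1f} times population share")

# Search rates: one stop in 60 overall versus one in 40 for blacks and Hispanics
overall_search, minority_search = 1 / 60, 1 / 40
ratio = minority_search / overall_search
print(f"Minorities searched {ratio:.1f} times as often per stop")  # 1.5 times
```

Note that these ratios by themselves cannot settle the profiling question; the second Globe article's restriction to residents ticketed in their own towns is one attempt to control for the obvious confounding.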

The second article observes that because "you can get a ticket when you are out of town ... the race of people ticketed can't be compared directly with a town's racial mix." Therefore, one part of the Globe analysis focused on tickets written by town police to their own residents. The results are presented in the first article, which notes that in 45 communities the black residents' share of the tickets is at least four times their proportion of the population.


"Death Penalty Found More Likely When Victim is White"

by Mark Memmott, New York Times, January 8, 2003, p. A12.

In 2000, Governor Parris Glendening of Maryland launched an investigation of whether the death penalty was being fairly imposed in his state. The newly issued report from that study considers 6000 capital cases from the past two decades. Overall, the findings were consistent with previous studies in other states, which have found that the race of the victim is the crucial factor in determining whether prosecutors seek the death penalty. Specifically, the Maryland study found that the death penalty was sought more often in cases of blacks killing whites than in cases when blacks killed other blacks or when the killer was white. Race of the defendant did not appear to be a factor.

University of Maryland Professor Raymond Paternoster, who directed the study, cited the following dramatic illustration of the problem: "Baltimore County and Baltimore City are contiguous. But defendants in Baltimore County are 26 times more likely to get the death penalty. In social science, you don't find many things that huge." Baltimore County had one of the highest rates of death sentencing in the state; it also had one of the highest proportions of cases involving white victims and black defendants.

Maryland currently has 13 prisoners on death row, eight of whom are black. All 13 were convicted of killing whites. However, since the 1978 reinstatement of the death penalty, 55 percent of the cases where prosecutors could have sought the death penalty involved non-white victims.


"Study Shows Teenagers Prefer Their Toothbrush"

by David Arnold, Boston Globe, January 21, 2003, p. C2.

When a recent poll asked 400 teenagers to pick the invention that they could not live without, the toothbrush came out on top. This sounds surprising until you read further and learn that the poll only provided five choices. Here they are, with the percentage of respondents rating them tops given in parentheses: the toothbrush (34%), the automobile (31%), the personal computer (16%), the cell phone (10%), and the microwave (7%).

The poll was conducted by the Lemelson-MIT Foundation, whose mission is to encourage inventors. The foundation has conducted annual polls for the last eight years on various aspects of inventing. You can read more on their press release Web page. For example, this year's poll was also given to a sample of 1,000 adults. Among the adults, the top two responses were the toothbrush (42%) and the automobile (37%).

Last year's poll asked respondents to rank the top inventions of the 20th century, again from a list of five. The teens chose the personal computer (32%), the pacemaker (26%), wireless communications (18%), water purification (14%), and television (10%). The adults chose the pacemaker (34%), the personal computer (26%), television (15%), wireless communications (14%), and water purification (11%). The accompanying press release on the Web site quotes program director Merton Flemings as saying "The generational differences are quite striking. Teen preference for mobile devices over television - the opposite of their parents - is an interesting indicator of lifestyle changes ahead."


"Future World: Privacy, Terrorists, and Science Fiction"

by John Allen Paulos, ABCNEWS.com, January 5, 2003.

"Private Clicks: Mathematical Balance Between Public Statistics, Personal Information"

by John Allen Paulos, ABCNEWS.com, February 2, 2003.

These are two installments from Paulos' online "Who's Counting" column, both of which concern privacy in the information age.

The "Future World" article describes a Pentagon surveillance program called Total Information Awareness (TIA), which is headed by retired admiral John Poindexter. (As Paulos reminds us, Poindexter is infamous for his role in the Iran-contra affair during the Reagan presidency.) The program is designed to use computer information gathering and data mining techniques to identify and thwart potential terrorists. Paulos describes the goals as "detect, classify, ID, track, understand, pre-empt," and notes the eerie parallels with the recent movie Minority Report. In the movie, however, the information about future crimes was provided by psychics. The TIA program would presumably be invading citizens' privacy by examining credit card records, telephone calls and Internet transactions.

The fact is that most people are not terrorists. Paulos points out that even if the system had a 99% success rate at both identifying true terrorists and clearing honest citizens, there would still be an enormous problem with false positives. To illustrate, he assumes a population of 300 million people, of whom 1,000 are future terrorists. The system will then correctly identify 990 future terrorists, which is pretty good. However, it will also flag 1% of the remaining 299,999,000 people, or 2,999,990 non-terrorists, leading "the system to swoop down on 2,999,990 innocent people as well as on the 990 guilty ones, apprehending them all."
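Paulos' base-rate calculation can be worked through directly, using the figures he assumes in the column:

```python
# Even a 99%-accurate screen produces far more false positives than true
# positives when the condition being screened for is rare.
population = 300_000_000
terrorists = 1_000
sensitivity = 0.99   # chance of flagging a true terrorist
specificity = 0.99   # chance of clearing an innocent person

true_positives = sensitivity * terrorists                        # 990
false_positives = (1 - specificity) * (population - terrorists)  # 2,999,990

# Positive predictive value: the chance a flagged person is actually a terrorist
ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"P(terrorist | flagged) = {ppv:.4%}")  # about 0.03%
```

This is the same structure as the classic rare-disease screening problem, and it makes a natural classroom exercise on Bayes' rule.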

The "Private Clicks" article describes a new strategy for online customers who object to marketing surveys that request their age, income, or other personal information. The traditional approach, he says, has been simply to lie. Now two IBM researchers have developed a program that would let customers enter their true data. The computer would then add random noise before reporting the information to businesses. For example, a random number between 0 and 20 could be added to or subtracted from the age field before sending it on. Bayesian statistical techniques would still allow the businesses to make the inferences they need. Paulos compares this with the randomized response approach to asking sensitive questions.
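The intuition is easy to demonstrate: each individual's reported value is heavily distorted, yet aggregate quantities survive. A minimal simulation of the noise-addition idea (illustrative only; the IBM scheme goes further, using Bayesian methods to reconstruct the full distribution rather than just the mean):

```python
# Each customer's true age is perturbed by a random offset in [-20, 20]
# before being reported. No individual report can be trusted, but the
# average over many customers stays close to the true average.
import random

random.seed(1)
true_ages = [random.randint(18, 70) for _ in range(10_000)]
reported = [age + random.randint(-20, 20) for age in true_ages]

true_mean = sum(true_ages) / len(true_ages)
noisy_mean = sum(reported) / len(reported)
print(f"True mean age:     {true_mean:.1f}")
print(f"Reported mean age: {noisy_mean:.1f}")  # close, since the noise averages to zero
```

Students may recognize the same privacy-through-noise logic in Warner's randomized response technique, which Paulos mentions.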


"What are the Chances?"

by Seth Schiesel, New York Times, February 6, 2003, p. G1.

"Dangers of Gauging Space Safety; Effectiveness of NASA's Risk Assessments is Questioned"

by Guy Gugliotta and Rick Weiss, Washington Post, February 17, 2003, p. A14.

The Times story describes the methods of probabilistic risk assessment in applications that include hurricane forecasting, nuclear safety, and system analyses for the space shuttle. In the mid-1990s, NASA estimated the risk of catastrophic failure for the shuttle as 1 in 145 missions. At that time, the shuttle's main engines were identified as the principal source of concern. The article reports that prior to the 1986 Challenger disaster, NASA had not shown much interest in probabilistic risk assessment.
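A 1-in-145 per-mission risk compounds quickly over a flight program. A back-of-the-envelope calculation, assuming missions are independent with identical failure probability (a simplification the risk assessors would surely qualify):

```python
# Probability of at least one catastrophic failure in n independent missions,
# given a per-mission failure probability of 1/145 (NASA's mid-1990s estimate).
p_fail = 1 / 145
for n in (1, 50, 100):
    p_at_least_one = 1 - (1 - p_fail) ** n
    print(f"{n:3d} missions: {p_at_least_one:.1%} chance of at least one failure")
```

Under these assumptions the chance of at least one catastrophe approaches 50% by the hundredth mission, which helps explain why such estimates drew scrutiny.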

The Post article echoes this view of NASA. It notes that three years after the 1-in-145 estimate, a number of safety improvements had been made, and the risk estimate had improved to 1 in 245. But the article is still critical of NASA's response to safety concerns, noting that many proposed improvements have gone unfunded. It quotes Theofanis G. Theofanous, of the Center for Risk Studies and Safety at the University of California at Santa Barbara, as saying "Once you have done your risk assessment, money becomes a factor, so you decide what is an acceptable risk. There is at least a nominal understanding between the people who perpetrate the risk [NASA] and the people on the receiving end [the astronauts]. You take your chance."

The Times article discusses a number of other risk assessment scenarios. For example, it devotes considerable attention to estimation of hurricane damage. Continuing advances in computing power have made it possible to perform thousands of simulation runs using historical data to predict future patterns of storms in terms of frequency, intensity, and path. Detlef Steiner, a mathematician at the Clarendon Insurance Group in New York, says that the most likely scenario for any given year would represent $50 million in liability for his company. He adds that "Every 100 years we might have $600 million. A thousand-year event might cost us a billion. But remember, a thousand-year event hasn't happened. A thousand-year event tells you Florida is gone."
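Steiner's "100-year" and "thousand-year" language refers to return periods, which translate directly into annual probabilities. A short sketch pairing his quoted liability figures with the corresponding exceedance probabilities:

```python
# A "100-year event" has roughly a 1-in-100 chance of occurring in any
# given year; liability figures are those quoted by Steiner in the article.
scenarios = {
    "most likely year":  (1,    50e6),   # (return period in years, liability in $)
    "100-year event":    (100,  600e6),
    "1000-year event":   (1000, 1e9),
}
for label, (return_period, loss) in scenarios.items():
    annual_prob = 1 / return_period
    print(f"{label}: annual probability {annual_prob:.1%}, liability ${loss / 1e6:,.0f}M")
```

This framing makes clear why insurers care about rare events: the thousand-year scenario is twenty times costlier than the typical year, even though it is very unlikely to occur in any one year.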

The article explains that estimating hurricane damage is considerably easier than modeling the reliability of a complicated system like the space shuttle or a nuclear plant. Insurers can focus on final outcomes, while engineers must try to identify - and hopefully eliminate - the potential causes of failures. Assessing the threat of terrorism is seen as being more complicated still, because of the human factor. Hemant Shah, of the risk modeling firm RMS, explains that "hurricanes do not try to strike your weak points. In the case of terrorism ... you're modeling an adversary in the context of conflict."


"More May Not Be Better, Medicare Study Says; Researchers Find Room for Savings of 30% on Care, But Say Cuts Must be Made Cautiously"

by Carla Hall, Los Angeles Times, February 18, 2003, Section 1, p. 1.

Given the large sums of money spent on health care in the US, we would naturally like to believe that additional resources translate into better care. But a large study of Medicare patients, recently published in the Annals of Internal Medicine, indicates that this is not always the case.

The study included almost one million patients in 306 cities and counties (identified as "hospital referring regions") across the country. Average Medicare spending during the last six months of life was used to compare regions, after adjusting for demographic variables and variations in Medicare pricing. Los Angeles ranked third with an average spending of $15,479 during the last six months of life. Manhattan was second at $16,333; Miami was first, at $17,564.

In regions where spending was high, patients with serious conditions did indeed get more treatment. In the aggregate, however, these patients were not found to have a longer life expectancy, nor did they experience better quality of life. The researchers estimate that Medicare outlays could be reduced by 30% without negative effects on health, but cautioned that any such reductions would need to be made judiciously rather than through across-the-board cuts.

The story was featured on National Public Radio's "All Things Considered." NPR has a Web page where you can listen to the audio file from the broadcast, and also follow links to related references. Of particular interest is an editorial from the Annals of Internal Medicine that explains how the study used "small area variation" as the best available alternative to a true randomized experiment.


William P. Peterson
Department of Mathematics and Computer Science
Middlebury College
Middlebury, VT 05753-6145
USA
wpeterson@middlebury.edu

