Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.6, n.3 (1998)

Robert C. delMas
General College
University of Minnesota
333 Appleby Hall
Minneapolis, MN 55455
612-625-2076

delma001@maroon.tc.umn.edu

William P. Peterson
Department of Mathematics and Computer Science
Middlebury College
Middlebury, VT 05753-6145
802-443-5417

wpeterson@middlebury.edu

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Bob abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. We realize that due to limitations in the literature we have access to and time to review, we may overlook some potential articles for this column, and therefore encourage you to send us your reviews and suggestions for abstracts.


From the Literature on Teaching and Learning Statistics


Statistical Education -- Expanding the Network

eds. Lionel Pereira-Mendoza (Chief Editor), Lua Seu Kea, Tang Wee Kee, and Wing-Keung Wong (1998). Proceedings of the Fifth International Conference on Teaching of Statistics, Singapore, June 21-26, 1998.

This three-volume set contains over 200 papers presented by statistics educators from around the world. Each paper falls into one of the following broad categories:

VOLUME 1
Statistical Education at the School Level
Statistical Education at the Post-Secondary Level
Statistical Education for People in the Workplace
Statistical Education and the Wider Society

VOLUME 2
An International Perspective of Statistical Education
Research in Teaching Statistics
The Role of Technology in the Teaching of Statistics

VOLUME 3
Other Determinants and Developments in Statistical Education
Contributed Papers
Posters

The three-volume set can be purchased from:

CTMA Ltd., 425 Race Course Road, Singapore 218671
Telephone: (65) 299 8992
FAX: (65) 299 8983

ctmapl@singnet.com.sg

The cost is:
IASE/ISI Member $65 plus shipping/handling
Non-member $80 plus shipping/handling


"Dice and Disease in the Classroom"

by Marilyn Stor and William L. Briggs (1998). The Mathematics Teacher, 91(6), 464-468.

This article presents an interesting mathematical modeling project. The goal of the activity is to model the exponential growth of communicable diseases by adding a risk factor. The required equipment is fairly simple: each student needs a die and a data sheet that is illustrated in the article. To simulate disease transmission, a student walks around the classroom and meets another student. The two students then roll their dice and sum the two outcomes. If the sum is below some predetermined cutoff, the encounter is designated as "risky," meaning that if either student was a carrier, the disease has been passed on to the other student. At the end of the activity, one student is randomly chosen to be the initial carrier of the disease, and the spread of the disease through the classroom is tracked. The article describes how the data are collected, graphed, and analyzed, and also presents suggestions for discussion and variations on the activity.
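
For readers who want to preview the dynamics before running the activity, here is a rough Python sketch (ours, not the authors'; the class size, number of encounter rounds, and the "risky" cutoff are assumed values). It records the risky encounters first and then traces the spread from a randomly chosen initial carrier, mirroring the order of steps in the activity.

    import random

    def simulate_outbreak(n_students=30, n_rounds=5, cutoff=5):
        # Phase 1: students pair off each round; an encounter is "risky" when the
        # sum of the pair's two dice falls below the cutoff.
        risky_by_round = []
        for _ in range(n_rounds):
            order = random.sample(range(n_students), n_students)
            pairs = zip(order[::2], order[1::2])
            risky_by_round.append([(a, b) for a, b in pairs
                                   if random.randint(1, 6) + random.randint(1, 6) < cutoff])
        # Phase 2: choose the initial carrier at random and replay the encounters.
        infected = {random.randrange(n_students)}
        counts = [len(infected)]
        for round_pairs in risky_by_round:
            for a, b in round_pairs:
                if a in infected or b in infected:
                    infected.update((a, b))
            counts.append(len(infected))
        return counts

    print(simulate_outbreak())   # cumulative number infected after each round (varies run to run)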


"Roll the Dice: An Introduction to Probability"

by Andrew Freda (1998). Mathematics Teaching in the Middle School, 4(2), 8-12.

The author describes a simple dice game that helps students understand why it is important to collect data to test beliefs. The game can also be used to help students see why large samples are more informative than small ones. To play the game, two students each roll a die, and the absolute difference of the two outcomes is computed. Player A wins if the difference is 0, 1, or 2; Player B wins if the difference is 3, 4, or 5. Most students' first impression is that this is a fair game. Over several rounds of the game the students come to realize that the stated outcomes are not equiprobable. The activity promotes skills in data collection and hypothesis testing, as well as mathematical modeling. The author presents examples of a program written in BASIC and a TI-82 calculator simulation, both of which can be used to help students explore the effects of sample size in modeling the dice game.
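
The author's BASIC and TI-82 programs are not reproduced here, but a few lines of Python translating the rules as described above show how the empirical proportion of wins for Player A settles toward the theoretical value of 24/36 = 2/3 as the number of games grows.

    import random

    def prop_a_wins(n_games):
        # Player A wins when the absolute difference of the two dice is 0, 1, or 2.
        wins = sum(abs(random.randint(1, 6) - random.randint(1, 6)) <= 2
                   for _ in range(n_games))
        return wins / n_games

    for n in (10, 100, 1000, 10000):
        print(n, round(prop_a_wins(n), 3))   # estimates settle near 0.667 as n increases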


The American Statistician: Teacher's Corner


"A One-Semester, Laboratory-Based, Quality-Oriented Statistics Curriculum for Engineering Students" (with discussion)

by Russell R. Barton and Craig A. Nowack (1998). The American Statistician, 52(3), 233-243.

This article describes a new laboratory-based undergraduate engineering statistics course being offered by the Department of Industrial and Manufacturing Engineering at Penn State. The course is intended as a service course for engineering students outside of industrial engineering. We describe the topics covered in each of the eight modules of the course and how the laboratories are linked to the lecture material. We describe how the course is implemented, including facilities used, the course text, grading, student enrollment, and the implementation of continuous quality improvement in the classroom. We conclude with some remarks on the benefits of the laboratories, the effects of CQI in the classroom, and dissemination of course materials.

"The Blind Paper Cutter: Teaching About Variation, Bias, Stability, and Process Control"

by Richard A. Stone (1998). The American Statistician, 52(3), 244-247.

The intention of this article is to provide teachers with a student activity to help reinforce learning about variation, bias, stability, and other statistical quality control concepts. Blind paper cutting is an effective way of generating tangible sequences of a product, which students can use to address many levels of questions. No special apparatus is required.

"Expect the Unexpected from Conditional Expectation"

by Michael A. Proschan and Brett Presnell (1998). The American Statistician, 52(3), 248-252.

Conditioning arguments are often used in statistics. Unfortunately, many of them are incorrect. We show that seemingly logical reasoning can lead to erroneous conclusions because of lack of rigor in dealing with conditional distributions and expectations.

"Some Uses for Distribution-Fitting Software in Teaching Statistics"

by Alan Madgett (1998). The American Statistician, 52(3), 253-256.

Statistics courses now make extensive use of menu-driven, interactive computer software. This article presents some insight as to how a new class of PC-based statistical software, called "distribution fitting" software, can be used in teaching various courses in statistics.


Teaching Statistics


A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, ph@maths.nott.ac.uk, RSS Centre for Statistical Education, University of Nottingham, Nottingham NG7 2RD, England. Teaching Statistics has a website at http://www.maths.nott.ac.uk/rsscse/TS/.


Teaching Statistics, Autumn 1998
Volume 20, Number 3

"Lawn Toss: Producing Data On-the-Fly" by Eric Nordmoe, 66-67.

The author describes an activity based on common lawn tossing games such as horseshoes, flying disc golf, and lawn darts. The activity involves helping students to design and carry out a two-factor experiment. The two factors are Distance to the Target (short or long) and Hand Used for Throwing (left or right). An example of a tabulation sheet for collecting the data is provided, and a simple paired t-test is described for testing whether people tend to have a dominant hand. Suggestions are also provided for how to use the data to illustrate the effects of outliers, conduct a two-sample t-test, explore issues of experimental design, and conduct nonparametric tests.
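
As a rough illustration of the paired analysis (the numbers below are invented and scipy is assumed to be available; the article supplies its own tabulation sheet), each thrower contributes one measurement per hand, and the t-test is applied to the within-thrower differences.

    from scipy import stats

    # Hypothetical distance from the target (in cm) for each thrower, one throw per hand.
    right_hand = [40, 42, 45, 50, 49, 47]
    left_hand  = [55, 60, 48, 72, 66, 58]
    print(stats.ttest_rel(right_hand, left_hand))   # paired t-test for a hand effect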

"Why Stratify?" by Ted Hodgson and John Borkowski, 68-71.

The article describes an activity to help students understand the benefits of stratified random sampling. The materials consist of equal numbers of red cards and black cards, each with a number written on one side. The values on the red cards are consistently smaller than those on the black cards, creating two strata in the population of cards that differ with respect to the measure of interest. Students draw simple random samples and stratified random samples of the same size from the population of cards and create distributions of the sample means. Students can see that the average sample mean from each type of sample provides a good estimate of the population mean, while a comparison of the two distributions shows that the sample means from stratified random samples vary much less.
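
A short simulation along the same lines (the card values below are made up, not those in the article) makes the comparison concrete: both sampling schemes are unbiased, but the stratified sample means have a much smaller standard deviation.

    import random
    import statistics

    red = [1, 2, 3, 4, 5] * 4        # lower-valued stratum
    black = [11, 12, 13, 14, 15] * 4  # higher-valued stratum
    population = red + black          # population mean is 8

    def srs_mean(n=8):
        return statistics.mean(random.sample(population, n))

    def stratified_mean(n=8):
        half = n // 2                 # proportional allocation: half from each stratum
        return statistics.mean(random.sample(red, half) + random.sample(black, half))

    srs = [srs_mean() for _ in range(5000)]
    strat = [stratified_mean() for _ in range(5000)]
    print(round(statistics.mean(srs), 2), round(statistics.stdev(srs), 2))
    print(round(statistics.mean(strat), 2), round(statistics.stdev(strat), 2))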

"Introducing Dot Plots and Stem-and-Leaf Diagrams" by Chris du Feu, 71-73.

The author describes an activity that introduces dot plots and stem-and-leaf diagrams to elementary school students by capitalizing on a well-rehearsed basic skill: measuring the length of things with a ruler. The list of equipment includes a worksheet, a ruler, a protractor, and a piece of string. An example of the worksheet, which contains lines and angles that can be measured, is provided for photocopying. One line curves in a complicated way, so pupils lay the string along it and then measure the string with the ruler to estimate the line's length. The measurements produced by the children for a particular line typically show some variation. A dot plot or stem-and-leaf diagram of the measurements can visually demonstrate that there is variability, yet one measurement occurs more often than the others. The author has found that this activity spontaneously generates discussion of many statistical ideas.
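
A few lines of Python (with invented measurements, here in millimetres) mimic the display the pupils build by hand and show how one value stands out while the others scatter around it.

    measurements = [62, 63, 63, 64, 64, 64, 65, 65, 67, 71]   # hypothetical lengths in mm
    stems = {}
    for m in sorted(measurements):
        stems.setdefault(m // 10, []).append(m % 10)   # stem = tens digit, leaf = units digit
    for stem, leaves in sorted(stems.items()):
        print(stem, "|", " ".join(str(leaf) for leaf in leaves))
    # 6 | 2 3 3 4 4 4 5 5 7
    # 7 | 1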

"A Constructivist Approach to Teaching Permutations and Combinations" by Robert J. Quinn and Lynda R. Wiest, 75-77.

The authors present an approach to teaching permutations and combinations in which students explore these concepts within the context of a problem situation. Students are not instructed in the formal mathematics of permutations and combinations. Instead, they are asked to solve a problem about how many different ways someone can wallpaper three rooms when there are four different patterns to choose from. Students work in groups to come up with answers to the question. Each group reports back to the class with its answer and provides arguments for why that answer is reasonable. The instructor then identifies which groups have produced answers based on permutations and which have produced answers based on combinations. The formal terminology, symbolic notation, and methods are then introduced, and students are shown how each group's approach is modeled by the formal approach. The authors find that this approach empowers students by reinforcing the validity of their own intuitions.
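
For instructors who want the numerical answers at hand, here is one reading of the problem (our interpretation, since the article leaves it open): with four patterns and three distinct rooms, a permutation-based count treats the rooms as ordered, a combination-based count only asks which patterns are used, and allowing a pattern to repeat gives yet another answer.

    import math   # math.perm and math.comb require Python 3.8 or later

    print(math.perm(4, 3))   # 24 ordered assignments of distinct patterns to the three rooms
    print(math.comb(4, 3))   # 4 unordered selections of three of the four patterns
    print(4 ** 3)            # 64 if the same pattern may be reused in more than one room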

"BUSTLE -- A Bus Simulation" by John Appleby, 77-80.

The author describes a computer simulation that demonstrates to students how statistical modeling can account for the perception that buses always arrive in bunches. The program assumes that passenger arrivals at each bus stop follow a Poisson process and that the delay caused as passengers board can lead to buses eventually bunching up along a route. The program allows various parameters to be changed, such as the initial delay between bus departures from the terminal, the number of buses, and the number of stops along the route. The author provides a web address from which the program can be downloaded and used for educational purposes.
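
The BUSTLE program itself is available from the address given in the article; the sketch below is only a stripped-down analogue (all parameter values are assumptions, not the program's) showing the feedback that can produce bunching: a bus that falls behind meets longer queues, dwells longer, and falls further behind, while the bus following it catches up.

    import numpy as np

    def simulate(n_buses=5, n_stops=30, headway=5.0, travel=2.0,
                 rate=0.8, board_time=0.15, seed=1):
        rng = np.random.default_rng(seed)
        # Pretend a phantom bus served every stop exactly one headway before bus 0's
        # undelayed schedule, so the first real bus sees a typical queue.
        prev_depart = np.array([(s + 1) * travel - headway for s in range(n_stops)])
        prev_arrivals, gaps = None, []
        for b in range(n_buses):
            t = b * headway                      # departure time from the terminal
            arrivals = np.empty(n_stops)
            for s in range(n_stops):
                t += travel                      # drive to the next stop
                arrivals[s] = t
                waiting = rng.poisson(rate * max(t - prev_depart[s], 0.0))
                t += board_time * waiting        # dwell while the queue boards
                prev_depart[s] = t
            if prev_arrivals is not None:
                gaps.append(arrivals - prev_arrivals)   # headway behind the previous bus at each stop
            prev_arrivals = arrivals
        return np.array(gaps)

    # Bunching shows up as the smallest gap behind a following bus shrinking toward zero.
    print(simulate().min(axis=1).round(2))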

"Testing for Differences Between Two Brands of Cookies" by Rhonda C. Magel, 81-83.

The author describes two activities that involve measurements of two different brands of chocolate chip cookies. In the first activity, students count the number of chips in cookies from samples of each brand. The students use the data to conduct a two-sample t-test, first testing for assumptions of normality. In the second activity, each student provides a rating for the taste of each brand. This provides an opportunity for students to conduct a matched-pairs t-test, as well as an opportunity to see the distinction between the two-sample and matched-pairs situations. The author has found these to be very motivating activities that provide students with a concrete and personal understanding of hypothesis testing.
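
A hedged sketch of the two analyses (the chip counts and ratings below are invented, and scipy is assumed to be available) makes the distinction between the designs visible in the code itself.

    from scipy import stats

    # Chip counts from independent samples of the two brands.
    brand_a = [12, 15, 11, 14, 16, 13, 12, 15]
    brand_b = [10, 11, 13, 9, 12, 10, 11, 12]
    print(stats.ttest_ind(brand_a, brand_b))    # two-sample t-test

    # Taste ratings, where every student rates both brands.
    rating_a = [7, 8, 6, 9, 7, 8]
    rating_b = [6, 7, 6, 7, 5, 8]
    print(stats.ttest_rel(rating_a, rating_b))  # matched-pairs t-test on the same students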

"A Note on Illustrating the Central Limit Theorem" by Thomas W. Woolley, 89-90.

The article describes the use of the phone book to generate data for illustrating the Central Limit Theorem. In class, students discuss the expected shape of the distribution of the last four digits of a telephone number. It can be argued that if the digits are produced randomly, they should follow a uniform distribution on the digits 0 through 9, with an expected value of 4.5 and a standard deviation of approximately 2.872. Outside of class, students randomly select a page of the phone book, randomly select a telephone number from the page, record the last four digits of the telephone number, and compute the average of the four digits. Each student repeats this process until 20 sample means are generated. The sample means from all students are collected in class and entered into statistical software to produce a distribution and obtain summary statistics. The distribution is typically unimodal and approximately normal in shape, centered near 4.5, with a standard deviation of approximately 1.436. This concrete example allows students to test the Central Limit Theorem empirically.
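
A quick way to rehearse the numbers before class is to replace the phone book with a random digit generator; the short simulation below (ours, not part of the article) shows means of four uniform digits concentrating near 4.5 with a standard deviation near 2.872 / sqrt(4) = 1.436.

    import random
    import statistics

    means = [statistics.mean(random.choices(range(10), k=4)) for _ in range(2000)]
    print(round(statistics.mean(means), 2))    # close to 4.5
    print(round(statistics.stdev(means), 2))   # close to 1.436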


Topics for Discussion from Current Newspapers and Journals


"Following Benford's Law, or Looking Out for No. 1"

by Malcolm W. Browne. The New York Times, 4 August 1998, F4.

This article is based on "The First Digit Phenomenon" by Theodore P. Hill (American Scientist, July-August 1998, Vol. 86, pp. 358-363). It describes how Hill and other investigators have successfully applied a statistical phenomenon known as Benford's Law to detect problems ranging from fraud in accounting to bugs in computer output. The law is named for Dr. Frank Benford, a physicist at General Electric who identified it using a variety of datasets some sixty years ago. But in fact, as Hill's original article notes, the law had already been discovered in 1881 by the astronomer Simon Newcomb. Newcomb observed that tables of logarithms showed greater wear on the pages for lower leading digits. Most people intuitively expect leading digits to be distributed uniformly, but the log table evidence suggests that everyday calculations involve relatively more numbers with lower leading digits. Newcomb theorized that the chance of leading digit $d$ is $\log_{10}(1 + 1/d)$ for $d = 1, 2, \ldots, 9$. Thus the chance of a leading 1 is not one in nine, but rather $\log_{10}(2) \approx 0.301$, nearly one in three!
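
The full set of Benford probabilities is easy to compute; the lines below (a quick check of the formula, not material from either article) print $\log_{10}(1 + 1/d)$ for each leading digit.

    import math

    for d in range(1, 10):
        print(d, round(math.log10(1 + 1 / d), 3))
    # Digit 1 gets probability 0.301 and digit 9 only 0.046; the nine probabilities sum to 1.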

Benford's Law has been observed to hold in a wide range of datasets, including the numbers on the front page of The New York Times, tables of molecular weights of compounds, and random samples from a day's stock quotations. Dr. Mark Nigrini, an accounting consultant now at Southern Methodist University, wrote his Ph.D. dissertation on using Benford's Law to detect tax fraud. He recommended auditing those returns for which the distribution of digits failed to conform to Benford's distribution. In a test on data from Brooklyn, his method correctly flagged all cases in which fraud had previously been admitted. However, Nigrini points out that Benford's Law is not universally applicable. For example, analyses of corporate accounting data often turn up too many 24's, apparently because business travelers have to produce receipts for expenses of $25 or more. He also notes that the law won't help you pick lottery numbers. For example, in a "Pick Four" numbers game, lottery officials take great pains to ensure that the four digits are independently selected and are uniformly distributed on the digits 0 through 9.

The Times article describes a classroom experiment conducted by Dr. Hill in his classes at Georgia Tech. For homework, Hill asks those students whose mother's maiden name begins with A through L to flip a coin 200 times and record the results, and the rest of the students to imagine the outcomes of 200 flips and write them down. As the article points out, the odds are overwhelming that there will be a run of at least six consecutive heads or tails somewhere in a sequence of 200 real tosses. In his class experiment, Hill reports a high rate of success at detecting the imaginary sequences by simply flagging those that fail to contain a run of length six or greater. While this is not an example of Benford's Law, it is another situation in which people are surprised by the true probability distribution. Readers of JSE may be familiar with this coin tossing experiment from the activity "Streaky Behavior: Runs in Binomial Trials" in Activity-Based Statistics by Scheaffer, Gnanadesikan, Watkins, and Witmer (1996, Springer).
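
A short simulation (ours, assuming a fair coin) backs up the claim about long runs: the estimated chance that a sequence of 200 tosses contains a run of six or more identical outcomes comes out well above nine in ten.

    import random

    def has_run(n=200, length=6):
        run, prev = 0, None
        for _ in range(n):
            flip = random.random() < 0.5
            run = run + 1 if flip == prev else 1
            if run >= length:
                return True
            prev = flip
        return False

    trials = 10000
    print(sum(has_run() for _ in range(trials)) / trials)   # typically around 0.96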


"Fate ... or Blind Chance"

by Bruce Martin. The Washington Post, 9 September 1998, H1.

This article on coincidences is excerpted from Martin's article in the September-October issue of The Skeptical Inquirer. The article starts with the famous (to statisticians!) birthday problem, illustrated with birthdays and death days of US presidents. The problem is extended to treat the chance that at least two people in a random sample will have birthdays within one day (i.e., on the same day or on two adjacent days). In this formulation, only 14 people are required to have a better than even chance. The article then mentions some popularly reported examples of coincidences, such as the similarities between the Lincoln and Kennedy assassinations.
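
The "within one day" claim is easy to check by simulation; the sketch below (assuming 365 equally likely birthdays and ignoring leap years) estimates the chance of such a near-match among 14 people.

    import random

    def near_match_prob(n=14, trials=20000):
        hits = 0
        for _ in range(trials):
            days = [random.randrange(365) for _ in range(n)]
            # A near-match is two birthdays at circular distance 0 or 1 on the calendar.
            if any(min((a - b) % 365, (b - a) % 365) <= 1
                   for i, a in enumerate(days) for b in days[:i]):
                hits += 1
        return hits / trials

    print(near_match_prob())   # a little over one half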

Martin observes that, as far as we know, the decimal digits of the number $\pi$ are "random." It follows that we can model coin-tossing by using the expansion for $\pi$ as a random digit generator, interpreting an even digit as "heads" and an odd digit as "tails." Examining the results for the first 100 digits reveals a streak of eight heads in a row, from the 62nd to 69th digits. As noted in the previous article on Benford's Law, this does not show that the digits of $\pi$ are non-random; rather, it is nothing more than the behavior one should expect in a random sequence. Martin's point is that such "coincidences" occur every day in people's lives. It is only our search for patterns that leads us to select some of these events after the fact and treat them as amazing.

In the original Skeptical Inquirer article, Martin discusses the above examples, and also investigates the randomness of prices in the stock market. Again using the digits from the expansion of $\pi$, he simulates stock prices, and proceeds to find in these random data the kinds of behavior that market analysts consider peculiar to the stock market. Martin's position on all of these examples is that the principle of Occam's Razor should apply: among competing explanations for a phenomenon, the simplest should be preferred. So if chance alone would produce the patterns we perceive, why do we search for more complicated causes?


"Deadly Disparities; Americans' Widening Gap in Incomes May be Narrowing our Lifespans"

by James Lardner. The Washington Post, 16 August 1998, C1.

Since the 1970s, virtually all income gains in the US have gone to households in the top 20% of the income distribution -- the greatest inequality observed in any of the world's wealthy nations. Beyond the fairness issues, a growing body of research indicates that countries with more pronounced differences in incomes tend to experience shorter life expectancies and greater risks of chronic illness in all income groups. Moreover, the magnitude of these risks appears to be larger than the more widely publicized health risks associated with cigarettes or high-fat foods.

Richard Wilkinson, an economic historian at Sussex University, found that, among nations with gross domestic products of at least $5000 per capita, one nation could have twice the per capita income of another, yet still have a lower life expectancy. On the other hand, income equality emerged as a reliable predictor of health. This finding ties together a variety of international comparisons. For example, the greatest gains in British civilian life expectancy came during World War I and World War II, periods characterized by compression of incomes. In contrast, over the last ten years in Eastern Europe and the former Soviet Union, small segments of the population have had tremendous income gains, while living conditions for most people have deteriorated. These countries have actually experienced decreases in life expectancy. Among developed nations, the US and Britain today have the largest income disparities and the lowest life expectancies. Japan has a 3.6 year edge over the US in life expectancy (79.8 years vs. 76.2 years) even though it has a lower rate of spending on health care. The difference is roughly equal to the gain the US would experience if heart disease were eliminated as a cause of death!

The July 1998 issue of the American Journal of Public Health presents analogous data comparing US states, cities, and counties. Research directed by John Lynch and George Kaplan of the University of Michigan found that mortality rates are more closely associated with measures of relative, rather than absolute, income. Thus the cities of Biloxi, Mississippi; Las Cruces, New Mexico; and Steubenville, Ohio have both high inequality and high mortality. By contrast, Allentown, Pennsylvania; Pittsfield, Massachusetts; and Milwaukee, Wisconsin share low inequality and low mortality.


"Driving While Black; A Statistician Proves that Prejudice Still Rules the Road"

by John Lamberth. The Washington Post, 16 August 1998, C1.

Lamberth is a member of the psychology department of Temple University. In 1993, he was contacted by attorneys whose African-American clients had been arrested on the New Jersey Turnpike for possession of drugs. It turned out that 25 blacks had been arrested over a three-year period on the same portion of the turnpike, but not a single white. The attorneys wanted a statistician's opinion of the trend. Lamberth was a good choice. Over 25 years his research on decision-making had led him to consider issues including jury selection and composition, and application of the death penalty. He was aware that blacks were underrepresented on juries and sentenced to death at greater rates than whites.

In this article, Lamberth describes the process of designing a study to investigate the highway arrest issue. He focused on four sites between Exits 1 and 3 of the Turnpike, covering one of the busiest segments of highway in the country. His first challenge was to define the "population" of the highway, so he could determine how many people traveling the turnpike in a given time period were black. He devised two surveys, one stationary and one "rolling." For the first, observers were located on the side of the road. Their job was to count the number of cars and the race of their occupants during randomly selected three-hour blocks of time over a two-week period. From June 11 to June 24, 1993, his team carried out over 20 recording sessions, counting some 43,000 cars, 13.5% of which had one or more black occupants. For the "rolling survey," a public defender drove at a constant 60 miles per hour (5 miles per hour over the speed limit), counting cars that passed him as violators and cars that he passed as non-violators, noting the race of the drivers. In all, 2096 cars were counted, 98% of which were speeding and therefore subject to being stopped by police. Black drivers made up 15% of these violators.

Lamberth then obtained data from the New Jersey State Police and learned that 35% of drivers stopped on this part of the turnpike were black. He says, "In stark numbers, blacks were 4.85 times as likely to be stopped as were others." He did not obtain data on the race of drivers searched after being stopped. However, over a three-year period, 73.2% of those arrested along the turnpike by troopers from the area's Moorestown barracks were black, "making them 16.5 times more likely to be arrested than others." These findings led to a March 1996 decision by New Jersey Superior Court Judge Robert E. Francis, who ruled that state police were effectively targeting blacks, violating their constitutional rights. Francis suppressed the use of any evidence gathered in the stops.

Lamberth speculates that department drug policy explains police behavior in these situations. Testimony in the Superior Court case revealed that troopers' performance is considered deficient if they do not make enough arrests. Since police training targets minorities as likely drug dealers, the officers had an incentive to stop black drivers. But when Lamberth obtained data from Maryland (similar data have not been available from other states), he found that about 28% of drivers searched in that state had contraband, regardless of their race. Why, then, is there a continued perception that blacks are more likely to carry drugs? It turns out that, of 1000 searches in Maryland, 200 blacks were arrested, compared to only 80 non-blacks. But the sample was biased: 713 of those searched were black and only 287 were non-black, so the arrest rates were essentially equal, about 28% (200/713) for blacks and about 28% (80/287) for non-blacks.


"Excerpts from Ruling on Planned Use of Statistical Sampling in 2000 Census."

The New York Times, 25 August 1998, A13.

Ruling on a lawsuit filed by the House of Representatives against the Commerce Department, a three-judge Federal panel says that plans to use sampling in the 2000 Census violate the Census Act. Since the Constitution requires an "actual enumeration," opponents of sampling have long argued that no statistical adjustment can be allowed. Significantly, the court did not rule on these constitutional issues. It more narrowly addressed whether sampling for the purpose of apportioning Representatives is allowed under Congress's 1976 amendments to sections 141(a) and 195 of the Census Act.

The amended version of section 141(a) states:

The Secretary shall, in the year 1980 and every 10 years thereafter, take a decennial census of population ... in such form and content as he may determine, including the use of sampling procedures and special surveys.

The 1976 amendment to section 195 more directly addresses the apportionment issue:

Except for the determination of population for purposes of apportionment of Representatives in Congress among the several States, the Secretary shall, if he considers it feasible, authorize the use of the statistical method known as sampling in carrying out the provisions of this title.

The court ruled that these amendments must be considered together. Therefore, the case hinges on whether the exception stated in the amendment to section 195 meant "you cannot use sampling methods for purposes of apportionment" or "you do not have to use sampling methods." The court provided the following two examples of the use of the word except:

Except for Mary, all children at the party shall be served cake.

Except for my grandmother's wedding dress, you shall take the contents of my closet to the cleaners.

The court argues that the interpretation of "except" must be made in the context of the situation. In the first example, one could argue it would be all right if Mary were also served cake. But in the second example, the intention is more clearly that grandmother's delicate wedding dress should not be taken to the cleaners. The judges stated that "the apportionment of Congressional representatives among the states is the wedding dress in the closet..." The Clinton administration appealed this ruling to the Supreme Court, which is scheduled to begin hearings on the matter on 30 November. A decision could come by March. This should be an interesting story to follow.

After the Federal court ruling, an excellent discussion of the issues surrounding the Census was presented on "Talk of the Nation" (National Public Radio, August 28, 1998, http://www.npr.org/ramfiles/980828.totn.01.ram).

The first hour of the program is entitled "Sampling and the 2000 Census." Guests are Harvey Choldin, sociologist and author of Looking for the Last Percent: The Controversy over Census Undercounts; statistician Stephen Fienberg, who has written a series of articles on the census for Chance; and Stephen Holmes, a New York Times correspondent. At the end of the hour, the group specifically addresses the need for statisticians to explain the issues surrounding sampling in a way that the public and Congress can understand. Fienberg describes the proposed Census adjustment in the context of the capture-recapture technique for estimating the number of fish in a lake. (The Activity-Based Statistics text mentioned previously devotes a chapter to a classroom experiment designed to illustrate the capture-recapture method.)
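
For readers unfamiliar with the analogy, the basic capture-recapture estimate is a one-liner; the numbers below are invented for illustration only.

    # Mark M fish, draw a second sample of size C, and count R recaptured marked fish;
    # the Lincoln-Petersen estimate of the population size is M * C / R.
    M, C, R = 200, 150, 30
    print(M * C / R)   # estimated population size: 1000.0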

In a previous "Teaching Bits" (Vol. 6, No. 1), we described ASA President David Moore's response to a William Safire editorial on adjusting the Census. Safire gave a laundry list of concerns about public opinion polling, and Moore properly took him to task for failing to address sampling issues in the specific context of the Census. However, some professional statisticians still worry about whether the current sampling proposals can provide sufficiently accurate estimates to improve the Census. For a good summary of these concerns, see "Sampling and Census 2000" by Morris Eaton, David A. Freedman, et al. (SIAM News, November 1998, 31(9), p. 1).


"Prescription for War"

by Richard Saltus. The Boston Globe, 21 September 1998, C1.

At a Boston conference session on "Biology and War," psychologists Christian G. Mesquida and Neil I. Wiener of York University in Toronto presented a new theory about what triggers war: a society that is "bottom-heavy with young, unmarried and violence-prone males." This theory is based on an analysis of the relationship of population demographics to the occurrence of wars and rebellions over the last decade. Societies in which wars and rebellions had occurred tended to have a large population of unmarried males between the ages of 15 and 29.

From the standpoint of evolutionary biology, it makes sense to ask whether war-like behavior confers an evolutionary advantage. The researchers explain that war is "a form of intrasexual male competition among groups, occasionally to obtain mates but more often to acquire the resources necessary to attract and retain mates." They point out that the argument makes sense as an explanation for offensive, but not defensive, wars. For example, the United States was reluctantly drawn into World War II, so in that case the theory applies to the young Nazis in Germany rather than to the Americans. Similarly, it applies to the Europeans who conquered the native populations of America, but not to the native peoples.

In nearly half of the countries in Africa, young, unmarried males comprise more than 49% of the overall population. In the last 10 years, there have been at least 17 major civil wars in countries in Africa, along with several conflicts that crossed national borders. In contrast, Europe has few countries where the young, unmarried male population makes up even 35% of the total. In the last 10 years there has been only one major civil war, and that was in Yugoslavia, which has more than 42% young, unmarried males.


"Ask Marilyn: Good News for Poor Spellers"

by Marilyn vos Savant. Parade Magazine, 27 September 1998, p. 4.

A letter from Judith Alexander of Chicago reads as follows: "A reader asked if you believe that spelling ability is a measure of education, intelligence or desire. I was fascinated by the survey you published in response. The implication of the questions is that you believe spelling ability may be related to personality. What were the results? I'm dying to know."

The "biggest news," according to Marilyn, is that poor spelling has no relationship to general intelligence. On the other hand, she is sure that education, intelligence, or desire logically must have something to do with achieving excellence in spelling. But her theory is that, even if one has "the basics," personality traits can interfere with success at spelling.

She bases her conclusions on a write-in poll, to which 42,603 of her readers responded (20,188 by postal mail and 22,415 by e-mail). Participants were first asked to provide a self-assessment of their spelling skills, on a scale from 1 to 100. They then ranked other personal traits on the same scale. For each respondent, Marilyn identified the quality or qualities rated closest to spelling, and she treats these as the traits most closely related to spelling ability. She calls her analytical process "self-normalization," explaining that matching up the ratings within each individual respondent overcomes the problem that respondents differ in how accurately they can assess their own abilities.
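
As described, the "self-normalization" step amounts to finding, for each respondent, the trait whose self-rating lies closest to the spelling self-rating; a rough sketch (with hypothetical respondents and trait names, not Marilyn's data) is:

    respondents = [   # hypothetical ratings on the 1-100 scale
        {"spelling": 90, "follow_instructions": 88, "solve_problems": 75,
         "organized": 60, "leadership": 40},
        {"spelling": 35, "follow_instructions": 70, "solve_problems": 65,
         "organized": 30, "leadership": 80},
    ]

    for r in respondents:
        traits = {k: v for k, v in r.items() if k != "spelling"}
        closest = min(traits, key=lambda k: abs(traits[k] - r["spelling"]))
        print(closest)   # the trait "linked" to spelling for this respondent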

The trait that she found most frequently linked to spelling in this analysis was "ability to follow instructions." Next was "ability to solve problems," followed by "rank as an organized person." The first two were related very strongly for strong spellers, but hardly related at all for weak spellers. Marilyn reports that only 6% of the weak spellers ranked their ability to follow instructions closest to their spelling ability, and only 5% ranked their ability to solve problems closest to their spelling ability. On the other hand, the relationship with organizational ability showed up at all spelling levels, with top spellers being the most organized, and weak spellers being the least organized.

Marilyn says she asked for a ranking of leadership abilities in order to validate her methods. She did not believe this trait was related to spelling, and, indeed, leadership was linked least often to spelling in the data. Similarly, she reports that creativity appeared to be unrelated to spelling ability.


"When Scientific Predictions Are So Good They're Bad"

by William K. Stevens. The New York Times, 29 September 1998, F1.

This article discusses various problems that can occur when the public is presented with predictions. A key problem is the tendency to rely on point estimates without taking into account the margin of error. For example, in spring 1997, when the Red River of the North was flooding in North Dakota, the National Weather Service forecast that the river would crest at 49 feet. Unfortunately, the river eventually crested at 54 feet. Those residents of Grand Forks who relied on the estimate of 49 feet later faced evacuation on short notice. In this case, the article argues, it would have been far better for the weather service to report a margin of error along with its point estimate.

The discussion over global warming illustrates our difficulties in dealing with predictions. Models used to predict the increase in the Earth's average temperature over the next century are necessarily based on many assumptions, each of which entails substantial uncertainty. Popular reports of the predictions place relatively little emphasis on the sizes of the possible errors, and the public may be misled by the implied precision. On the other hand, governments are seen as taking any uncertainty as a reason to avoid action on the issue.

With other types of forecasting, people have learned from experience that the predictions are fallible, and have adjusted their behavior accordingly. Weather forecasts are a prime example. Similarly, the public has learned not to expect accurate prediction of the exact timing and magnitude of earthquakes, and seismologists focus instead on long-range forecasts.


"Placebos Prove So Powerful Even Experts Are Surprised"

by Sandra Blakeslee. The New York Times, 13 October 1998, D1.

A recent study of a baldness remedy found that 86% of the men taking the treatment either maintained or increased the amount of hair on their heads -- but so did 42% of the placebo group. Dr. Irving Kirsch, a University of Connecticut psychologist, reports that placebos are 55-60% as effective as medications like aspirin and codeine for treating pain. But if some patients really do respond to placebos, might there not be a biological mechanism underlying the effect? Using new techniques of brain imagery, scientists are now discovering that patients' beliefs can indeed produce biological changes in cells and tissues. According to the article, some of the results "border on the miraculous."

One explanation explored here is that the body's responses may be driven more by what the brain expects to happen, given past experience, than by the information currently flowing to the brain. Thus a patient who expects a drug to make him better would have a positive response to a placebo. A related idea is that, by reducing stress, a placebo may allow the body to regain a natural state of health. In fact, a recent study showed that animals experiencing stress produce a valium-like substance in their brains, provided they have some control over the stress.

