Turning the Tables: A t-Table for Today

Robert J. MacG. Dawson
Saint Mary's University

Journal of Statistics Education v.5, n.2 (1997)

Copyright (c) 1997 by Robert J. MacG. Dawson, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.


Key Words: p-values; Student's t distribution; Tabulated distribution.

Abstract

Despite advances in computer technology, quantiles of Student's t (among other distributions) are still obtained from printed tables in most classroom situations. Unfortunately, the structure of the tables found in textbooks (and even in books of tables) is usually better suited to fixed-level hypothesis testing than to the p-value approach that many modern statisticians favor. This article presents a novel arrangement of the table that allows p-values to be determined quite precisely from a table of manageable size.


1 It is widely (though not universally) accepted that students should be taught to report p-values in most circumstances, rather than to perform fixed-level hypothesis tests. Statistical packages such as Minitab routinely give p-values when performing hypothesis tests. Students who are fortunate enough to have access to a computer under all circumstances (including their examinations) can easily find p-values, and there are a few calculators that will determine p-values for Student's t statistic. However, most students still rely on a set of tables, usually (in North America) from the back of their textbooks.

2 The tables of critical values of Student's t that are found in textbooks have a strong family resemblance. A typical table might have seven columns, corresponding to one-sided significance levels of 10%, 5%, 2.5%, 1%, 0.5%, 0.25%, and 0.1%, and 34 rows, corresponding to 1, 2, ... , 30, 40, 60, 120, and infinite degrees of freedom (df). Such tables are often taken from the Biometrika tables, or those compiled by Fisher and Yates, despite the fact that modern computer technology makes it very easy to create a table from scratch. Table 1 summarizes the t-tables in a non-random convenience sample of books within easy reach of my desk.


Table 1. Dimensions of t-tables (number of tabulated df values and probability levels)

                                         df values  Probabilities
     Elementary textbooks
          Siegel and Morgan                  35           7 (a)
          Moore and McCabe                   37          12
          Sincich                            34           7
          Milton, McTeer, and Corbet        101           9
          Freedman et al.                    25           6
          Triola                             30           6
     More advanced textbooks
          Walpole and Myers                  34          14
          Devore                             34           7
          Hogg and Craig                     30           5
          Mendenhall and Sincich             34           7
          Hastings                           30           6
     Books of tables
          Selby (CRC Math Tables)            34           8
          Abramowitz and Stegun              34          13
          Cheng and Fu                       51           7
          Lindley and Scott                  39          12 (b)

     Notes:
     (a)  Separate table for each significance level
     (b)  Another table has a quite nontraditional layout, with
          25 columns for df values, a variable number of rows for
          t values, and probabilities in the body.


3 Some instructors do favor the use of fixed-level testing under all circumstances and consider p-values to be an unnecessary complication. There may also be some (though, I would imagine, very few at a first-year level) who would go so far in the other direction as to simply tell their classes that hypothesis testing is a waste of time, with or without p-values. Let us imagine an instructor who belongs to neither of these radical groups, who has explained what a p-value is, and who is about to demonstrate with an example (or let the class do an example themselves).

4 Suppose that the class is using a standard textbook such as Devore (1995) or Sincich (1993). The example involves a set of 25 data points with a sample mean of 10.45 and a sample standard deviation of 1. The students are asked to find the p-value for testing the hypothesis that the population mean is 10.00, against a two-sided alternative. This yields a test statistic T = 2.25, which under the null hypothesis has the t distribution with 24 degrees of freedom. Looking in the textbook's t-table, they find that T = 2.064 corresponds to a two-sided p-value of 0.05, while T = 2.492 would have yielded 0.02. But where is the value they calculated? The instructor has several options.
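
For readers with software at hand, the computation is of course immediate; the following sketch (in Python with SciPy, not part of the textbook example) reproduces the test statistic and the exact two-sided p-value.

     import math
     from scipy import stats

     # Worked example: n = 25, sample mean 10.45, sample sd 1, null mean 10.00.
     n, xbar, s, mu0 = 25, 10.45, 1.0, 10.00
     t_stat = (xbar - mu0) / (s / math.sqrt(n))   # T = 2.25
     df = n - 1                                   # 24 degrees of freedom
     p_value = 2 * stats.t.sf(abs(t_stat), df)    # two-sided p-value, about 0.034

     print(f"T = {t_stat:.3f}, two-sided p = {p_value:.3f}")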

5 First, the example could have been rigged so that the p-value would magically come out to be one of the seven in the t-table in the textbook. After seeing a few of these, the more intelligent students will ask "But what if it doesn't?", which essentially reduces the problem to one of the next two cases.

6 If the value of T is between those for (say) a 5% and 2% significance level for a two-sided alternative, the instructor can say "We can't find the exact value in the table, so we'll say the p-value is `less than 5%'." After seeing this a few times, the more intelligent students may start to wonder why this is any better than saying "we reject at the 5% level."

7 A fortunate instructor will have access to a computer in the classroom, and the students will have access to computer labs for their assignments. Most computer packages produce p-values by default when doing hypothesis tests, and for a while everybody will be happy. Unfortunately, few campuses have the computer facilities to allow a class of 250 first-year students to write their exams with a computer. So, eventually, all of the students will start to wonder, "How are we going to do this on the exam?" If the answer is "using the tables in your book," they may start to wonder why the instructor is wasting their time on something that won't help them pass the exam, or begin to demand computer access for their exam. (All of the student reactions in the last few paragraphs are based on my actual classroom experience.)

8 It is thus apparent that, for modern practice, the traditional t-table has too few probability values. On the other hand, there is a ubiquitous tendency to give superfluous values for degrees of freedom; the most extreme example of this that I have seen is Milton, McTeer, and Corbet (1997), whose table gives all degrees of freedom from 1 to 100. Between rows 99 and 100, for instance, only two entries differ -- in each case, by 0.001. In the same table, t-values in adjacent columns differ by around 0.3 -- three hundred times as much!

9 For most practical purposes, it is sufficient to have around 20 tabulated degrees of freedom. If these are carefully chosen, the entire range from 1 to infinity can be covered with the t-values in adjacent columns differing by only about 0.1 over most of the table. Thus, if the closest value is chosen, the resulting error will be only about 0.05. One possible set of values is (1, 2, ... , 10, 12, 15, 20, 25, 30, 40, 50, 100, 1000).
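
This claim is easy to check numerically. The sketch below (Python with SciPy; the grid is the one proposed above, with 1000 standing in for infinity) prints the two-sided 5% critical values along the grid and the largest gap between adjacent values from 5 degrees of freedom onward.

     from scipy import stats

     # Proposed grid of tabulated degrees of freedom (1000 approximates infinity).
     df_grid = list(range(1, 11)) + [12, 15, 20, 25, 30, 40, 50, 100, 1000]

     # Two-sided 5% critical values along the grid, and gaps between neighbours.
     crit = [stats.t.ppf(0.975, df) for df in df_grid]
     gaps = [a - b for a, b in zip(crit, crit[1:])]

     for df, c in zip(df_grid, crit):
         print(f"df = {df:4d}   t(.025) = {c:.3f}")
     print("largest gap from df = 5 onward:", round(max(gaps[4:]), 3))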

10 Conversely, it is useful to have 30 to 40 significance levels in a table. This allows for steps of 0.5% in the commonly-used range from 1% to 10% (two-sided), and for ample values below and above that range. All `standard' values used in fixed-level testing and the construction of confidence intervals should, of course, be included.

11 Such a layout can be made to fit a regular page if it is printed in landscape mode, with degrees of freedom indexing the columns and probabilities indexing the rows. It took me only a few hours to prepare such a table. Values were generated in Minitab, edited with a generic file editor, and converted into a LaTeX file. The LaTeX file is available at this site; note that your printer must be able to print a DVI file in landscape mode (or print on paper at least 11 inches wide). The file could be modified to fit the smaller pages found in most textbooks, or to form a two-page spread.
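
With current software the whole table can be produced in a single step. The following sketch (Python with SciPy writing LaTeX directly; the probability grid shown is illustrative, not the exact set of levels used in the published table) generates a table with degrees of freedom indexing the columns and two-sided tail probabilities indexing the rows.

     from scipy import stats

     # Illustrative grids: about 20 df values and (in the real table) 30-40 levels.
     df_grid = list(range(1, 11)) + [12, 15, 20, 25, 30, 40, 50, 100, 1000]
     two_sided = [0.50, 0.40, 0.30, 0.20, 0.10, 0.05, 0.04, 0.034, 0.03,
                  0.024, 0.02, 0.01, 0.005, 0.002, 0.001]

     with open("t_table.tex", "w") as f:
         f.write("\\begin{tabular}{r" + "r" * len(df_grid) + "}\n")
         f.write("p (2-sided) & " + " & ".join(str(d) for d in df_grid) + " \\\\\n")
         for p in two_sided:
             # Upper-tail quantile for tail area p/2 in each df column.
             row = [f"{stats.t.ppf(1 - p / 2, d):.3f}" for d in df_grid]
             f.write(f"{p} & " + " & ".join(row) + " \\\\\n")
         f.write("\\end{tabular}\n")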

12 Working our sample problem again using this table, we note that we do not have an entry for 24 degrees of freedom. (The entire table is too large for an 80-column screen, but an excerpt is given below as Table 2.) Rounding to the nearest entry, we use the column for 25 degrees of freedom. We find the t-value 2.243 in that column, in the row corresponding to a two-sided p-value of 0.034. Note that even if we had rounded in the wrong direction, to 20 degrees of freedom, we would have obtained the same answer; over most of the table, the entries change very little between adjacent degree-of-freedom columns. Adjacent entries in the 25-df column give p = 0.04 (for T = 2.167) and p = 0.03 (for T = 2.301). Thus, p-values can be determined in this region to a precision better than 1% -- adequate for most purposes.


Table 2. An Excerpt from a Modified t-table

      2-sided tail               Degrees of Freedom
      probability   ...  15     20     25     30     40     ...
      .                  .      .      .      .      .
      .                  .      .      .      .      .
      .                  .      .      .      .      .
      0.05          ...  2.131  2.086  2.060  2.042  2.021  ...
      0.04          ...  2.249  2.197  2.167  2.147  2.123  ...
      0.034         ...  2.333  2.276  2.243  2.222  2.195  ...
      0.03          ...  2.397  2.336  2.301  2.278  2.250  ...
      0.024         ...  2.511  2.442  2.403  2.378  2.346  ...


13 To ensure LaTeX portability, the usual diagram at the top of the table showing the area under the curve represented by the numbers in the table has been omitted from the LaTeX file. The instructor can sketch in such a diagram before distributing the table to students. The table includes columns of tail probabilities for both one-tailed and two-tailed tests, and columns of confidence levels for both one-sided and two-sided intervals. A second version of the table is provided for those instructors who prefer to include only one-tailed probabilities and two-sided confidence levels.

14 Should other statistical tables also be rethought? The standard normal tables in most textbooks have ample precision. While a far smaller table would usually suffice, there is little gain from cutting it down below the traditional one or two pages. Moreover, as the normal distribution is often used as a population model, the table is used in several ways, and sometimes the precision is useful.

15 The chi-squared and F tables are more difficult to amend. F tables are inherently "three-dimensional" due to the two parameters that must be represented along with probability; including twenty or thirty probability levels would produce a table the size of a small book, no matter what economies were practiced elsewhere.

16 The chi-squared table is more tractable, but does present one problem. The t distribution for 40 degrees of freedom is very similar to that for 100 degrees of freedom; the corresponding chi-squared distributions are very different. This makes interpolation or extrapolation difficult. A solution would be to tabulate percentage points divided by degrees of freedom, rather than the percentage points of the chi-squared distribution themselves. This would result in a table, like that for Student's t, in which one could skip from (say) 50 to 70 degrees of freedom without much change in the corresponding rows (or columns).
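
A minimal numerical illustration of this point (Python with SciPy; not from the original article): dividing the upper 5% point of the chi-squared distribution by its degrees of freedom makes nearby table entries nearly equal, whereas the raw percentage points are far apart.

     from scipy import stats

     # Raw upper 5% points of chi-squared versus the same points divided by df.
     for df in (40, 50, 70, 100):
         q = stats.chi2.ppf(0.95, df)
         print(f"df = {df:3d}   chi2(.95) = {q:7.2f}   chi2(.95)/df = {q / df:.3f}")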

Acknowledgments

I would like to thank the anonymous referees for helpful suggestions.


References

Abramowitz, M., and Stegun, I. A. (1965), Handbook of Mathematical Functions, New York: Dover.

Cheng, S. W., and Fu, J. C. (1995), Statistical Tables for Students, Toronto, ON: Nelson.

Devore, J. L. (1995), Probability and Statistics for Engineering and the Sciences (4th ed.), Belmont, CA: Duxbury.

Freedman, D., Pisani, R., Purves, R., and Adhikari, A. (1991), Statistics (2nd ed.), New York: Norton.

Hastings, K. J. (1997), Probability and Statistics, Reading, MA: Addison Wesley Longman.

Hogg, R. V., and Craig, A. T. (1995), Introduction to Mathematical Statistics (5th ed.), Englewood Cliffs, NJ: Prentice-Hall.

Lindley, D. V., and Scott, W. F. (1984), New Cambridge Statistical Tables (2nd ed.), Cambridge: Cambridge University Press.

Mendenhall, W., and Sincich, T. (1996), A Second Course in Statistics: Regression Analysis (5th ed.), Upper Saddle River, NJ: Prentice-Hall.

Milton, J. S., McTeer, P. M., and Corbet, J. J. (1997), Introduction to Statistics, New York: McGraw-Hill.

Moore, D. S., and McCabe, G. P. (1993), Introduction to the Practice of Statistics (2nd ed.), New York: Freeman.

Selby, S. M. (ed.) (1972), CRC Standard Mathematical Tables (20th ed.), Cleveland, OH: Chemical Rubber Co.

Siegel, A. F., and Morgan, C. J. (1996), Statistics and Data Analysis: An Introduction, New York: Wiley.

Sincich, T. (1993), Statistics by Example (5th ed.), New York: Macmillan.

Triola, M. F. (1995), Elementary Statistics (6th ed.), Reading, MA: Addison-Wesley.

Walpole, R. E., and Myers, R. H. (1993), Probability and Statistics for Engineers and Scientists (5th ed.), New York: Macmillan.


Robert J. MacG. Dawson
Department of Mathematics and Computing Science
Saint Mary's University
Halifax, Nova Scotia B3H 3C3
CANADA

rdawson@husky1.stmarys.ca


The following files are available:

LaTeX file for the modified t-table: table1.tex
Postscript version of the modified t-table: table1.ps
Tables that include only one-tailed probabilities and two-sided confidence levels: table2.tex and table2.ps

