Basic Math Skills and Performance in an Introductory Statistics Course

Marianne Johnson and Eric Kuennen
University of Wisconsin - Oshkosh

Journal of Statistics Education Volume 14, Number 2 (2006), jse.amstat.org/v14n2/johnson.html

Copyright © 2006 by Marianne Johnson and Eric Kuennen all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.


Key Words: Determinants of student performance; Introductory collegiate statistics; Mathematical skills.

Abstract

We identify the student characteristics most associated with success in an introductory business statistics class, placing special focus on the relationship between student math skills and course performance, as measured by student grade in the course. To determine which math skills are important for student success, we examine (1) whether the student has taken calculus or business calculus, (2) whether the student has been required to take remedial mathematics, (3) the student's score on a test of very basic mathematical concepts, (4) the student's score on the mathematics portion of the ACT exam, and (5) the student's score on the science/reasoning portion of the ACT exam. The score on the science portion of the ACT exam and the math-quiz score are significantly related to performance in an introductory statistics course, as are student GPA and gender. This result is robust across course formats and instructors. These results have implications for curriculum development, course content, and course prerequisites.

1. Introduction

We seek to identify the key skills and characteristics that help students become successful in introductory statistics, placing a particular emphasis on mathematical skills. Most previous research in this area has focused on other factors, including using writing in statistics classes (Garfield, Hogg, Schau, and Whittinghill 2002; Magel 1996; Stromberg and Ramanathan 1996; Utts, Sommer, Acredolo, Jaher, and Matthews 2003), participation (Magel 1996), active problem solving (Hillmer 1996), on-line learning (Ward 2004), and exam structure (Krieg and Uyar 2001). Studies that examine the characteristics most associated with student performance in statistics courses often overlook mathematics skills as a determinant variable. For example, Krieg and Uyar (1997) identify a number of student characteristics associated with performance in introductory statistics, but do not include any measure of mathematical skills. Cohn (1972) considers the number of mathematics credits as an explanatory variable for predicting performance in an advanced statistics class, but he does not examine the type of mathematical skills. Further, his results for an optional advanced class are unlikely to apply to students required to take introductory statistics.

The importance of mathematical skills to student performance in other quantitative disciplines is widely recognized, however. Studies have found that high Scholastic Aptitude Test (SAT) or American College Test (ACT) mathematics scores or having taken calculus have a significant and beneficial effect on student grades in economics courses (Anderson, Benjamin, and Fuss 1994; Durden and Ellis 1995; Ely and Hittle 1990; Johnson and Kuennen 2004). Further, Ballard and Johnson (2004) find that mastery of very basic mathematics concepts (of the kind covered in remedial or developmental mathematics courses) is positively and statistically significantly related to student success in introductory economics. Grillo, Latif, and Stolte (2001) find mathematics skills are important for pharmacology students, and Ely and Hittle (1990) document the importance of math skills for finance majors.

We realize that while advanced statistics is very much a mathematical discipline, introductory statistics is generally not considered a mathematics course, and the amount of mathematics used in the course can vary widely among instructors. Some instructors require students to compute certain statistical measures by hand, maintaining that doing the calculations themselves strengthens students' understanding of the meaning of the statistic; others rely on calculators or software packages to do the calculations, stressing instead the student's ability to correctly interpret the meaning of the calculation.

Naturally, an instructor who requires a large amount of technical computation in statistics will find mathematical skills to be an important factor in students' success, but the importance of mathematics skills may go beyond the ability to do the calculations, influencing as well the ability to analyze data, reason quantitatively, and interpret the results of numerical computations. For example, while only arithmetic is needed to compute a standard deviation, other basic mathematics skills, such as understanding ratios, may matter for understanding what the standard deviation measures, when to apply it, and how to interpret the result of the computation, whether done by hand or produced by a calculator or computer. Hence, regardless of the level of computational rigor required by the instructor, basic mathematics skills may be an important determinant of student success in introductory statistics.

The purpose of this study is to identify the types of mathematics skills most associated with student success in an introductory business statistics course. We include a range of measures of mathematics skills, including the math courses students have taken and student scores on the mathematics and science portions of the ACT exam. We also measure math skills directly via student scores on a test of very basic mathematical skills, such as the ability to calculate the slope of a line or the area of a triangle, or to divide by a fraction. (see End Note 1)

2. Course Description and Objectives

“Economics and Business Statistics” is a 200-level (sophomore) university course in introductory statistical analysis taught by the Economics Department. Three different economics professors participated in this study. The course is designed primarily for business majors; however, students of other majors also enroll, most notably journalism majors. There is a mathematics prerequisite of precalculus-level skills, demonstrated either by completing precalculus (college algebra) with a grade of C or better or by scoring sufficiently high on the university's math placement exam taken by entering freshmen. Class sizes average roughly 35 students.

The catalog description states only that the course will cover “descriptive methods, probability and inference, regression and correlation, index numbers, and time series.” As is typical of many universities, instructors have considerable freedom in designing course content, choosing texts, and developing their own evaluative procedures to meet the course objectives. A comparison of the course syllabi indicates that the three professors covered the same topics, and all three had course objectives that included the ability to apply appropriate statistical techniques to analyze data, and the ability to interpret the results of statistical calculations.

Professor 1's graded material included weekly multiple-choice quizzes and three multiple-choice exams. Professors 2 and 3 required bi-weekly homework assignments involving pure technical computations, analysis and explanation, and some computer exercises. In addition, each section had different textbooks and relied more or less heavily on computer software (Minitab) as part of the course. All three professors agree with the categorization that Professor 1 is “high math,” Professor 2 is “low math” and Professor 3 is “medium to medium-low math.” These categorizations refer to the complexity of the mathematics used in lectures (e.g., Professor 1 would frequently use calculus in lectures) and the amount and complexity of numerical calculation on exams and homework. Professor 2 did not require students to compute any statistics by hand, and Professors 2 and 3 placed greater emphasis on verbal description and explanation of statistical techniques and results than Professor 1.

While it naturally complicates our study to have three different professors, each with different teaching styles, course structures, and levels of mathematical content, we view this as a strength of our study. Since we are interested in whether good mathematics skills are important determinants of student performance in statistics, we seek to know if the results are robust across teaching methodologies and course structures. If so, our results would be more widely applicable to other instructors and at other universities.

3. Design and Methodology

3.1 The Data

The data for this study were collected from six sections of Introductory Economics and Business Statistics in the Spring Semester of 2004 and four additional sections in the Fall Semester of 2004. Each section initially enrolled roughly 35 students. Professor 1 and Professor 3 each taught four sections; Professor 2 taught the remaining two.

The data for the independent variables were gathered from a 26-question survey given on the first day of class. Thus, the sample population of this study consists of the 292 individuals who participated in the survey and completed the course; these students are a subset of the 388 students enrolled in the participating sections. Since some students did not participate in the survey, there is a possibility of bias in our estimates if those who filled out the survey were systematically different from those who did not (Chan, Shum, and Wright 1997; Douglas and Sulock 1995). We return to this point below. The survey consisted of questions dealing with demographics, motivation, and previous math experience. We attempt to control for motivation, attendance, and ability by using variables generated from the survey or provided by the university. These variables include students' official university GPAs, official ACT scores broken down by subject field, and student-reported hours spent studying, working, and in other activities. (see End Note 2) The summary statistics for the variables are reported in Table 1. As is evident from Table 1, the students in this sample are predominantly white, male, and sophomores. Nearly all students were taking statistics because it is required for their major.


Table 1. Summary Statistics for Demographic Variables


Variable                                                              Percentage    Mean    Standard Deviation

Gender
     Female                                                                39.42
     Male                                                                  60.58
Class
     Freshman                                                               2.12
     Sophomore                                                             49.74
     Junior                                                                41.27
     Senior                                                                 5.82
     Other                                                                  1.06
Ethnicity
     Minority                                                              23.22
     Not                                                                   76.78
Class Required for Major
     Yes                                                                   94.71
     No                                                                     5.29
English is Native Language
     Yes                                                                   97.35
     No                                                                     2.65
Weekly Hours Work for Pay                                                            14.42   12.18
     Work more than 0 but less than or equal to 20 hours/week              77.26
     Work more than 20 but less than or equal to 30 hours/week             16.16
     Work more than 30 hours/week                                           6.58
Weekly Hours in an Extracurricular Activity                                           4.49    6.00
     Participate more than 0 but less than or equal to 10 hours/week       65.54
     Participate more than 10 but less than or equal to 20 hours/week      12.16
     Participate more than 20 hours/week                                    2.34
Weekly Hours Study for All Courses                                                   11.54    7.36
Official University GPA                                                               2.80    0.52
ACT English Score                                                                    22.05    3.32
ACT Reading Score                                                                    21.56    3.53
Grade in Course                                                                       2.81    1.03
     4.0                                                                   20.44
     3.5                                                                   15.53
     3.0                                                                   26.16
     2.5                                                                   12.53
     2.0                                                                   14.17
     1.5                                                                    3.81
     1.0                                                                    1.36
     0.5                                                                    0.00
     0.0                                                                    5.99

*We collected additional information that turned out not to be significantly related to course performance. Variables included parental education levels, the grade students expected to earn in the course, sleeping habits, and self-reported attendance. In addition, we define “minority” = 1 if the student reported being nonwhite and minority = 0 if the student reported his or her race as white.


In this part of the country, students routinely take the ACT exam instead of the SAT exam. The ACT Exam is a multiple-choice test taken by many high-school students seeking entrance to college. It is largely equivalent to the SAT exam, but is divided into four areas, rather than two. The ACT exam contains sections on English, reading comprehension, mathematics, and science. We are particularly interested in student scores on the mathematics and science parts of the exam, as the tested skills and knowledge seem related to those most used in introductory statistics. The mathematics portion of the exam contains questions on pre-algebra and elementary algebra, intermediate algebra, geometry, and trigonometry. The science portion of the exam asks students to interpret data, read and analyze graphs, tables, and scatter plots, to comment on experimental design and the interpretation of experimental results, and to compare and evaluate conflicting viewpoints and hypotheses.

The background information gathered on students in this study is consistent with previous work in statistics and other related fields. Utts et al. (2003) identify grade point average (GPA), class standing, and gender as important confounding variables. They also include an expectations inventory and a computer-literacy inventory identifying the level of students' technology skills as control factors in their covariance analysis. Krieg and Uyar (1997, 2001) include similar control variables.

3.2 The Dependent Variable

Measuring student learning and course performance is fraught with difficulty because of the subjective nature of assessment. Instructors may differ as to the components that make up the final course grade and the relative weights of those components. Instructors may also place more or less emphasis on calculating correctly in a given circumstance as compared to the interpretation of that calculation or description of the statistical process. Further, this subjectivity cannot be easily eliminated in education assessment studies. One option is to use a standardized multiple-choice exam across all sample students. However, while multiple-choice exams remove the opportunity for subjectivity in grading, they introduce other issues. Women have been shown to fare significantly worse than men on multiple-choice exams (Williams, Waldauer, and Duggal 1992). Minority students also tend to do more poorly, as do non-native English speakers and students with some particular types of learning disabilities (Bresnock, Graves, and White 1989).

Three different professors participated in the study. Professors gave identical exams in each of their own sections, but exams differed across professors and across semesters. Thus, while not ideal, we choose to use “Grade in Course” as our dependent variable, rather than more instructor-dependent measures such as point total, and control for instructor-specific effects with dummy variables. We argue “Grade in Course” is additionally relevant since it is the variable of interest to the students, and has been used extensively in other studies of statistical learning and assessment (see Krieg and Uyar 1997; Krieg and Uyar 2001).

The summary statistics for the dependent variable, “Grade in Course,” are also reported in Table 1. At this university, course grades are given as 4.0 (equivalent to an A), 3.5 (equivalent to a B+), 3.0 (equivalent to a B), and so on. Roughly 20% of students received an A across all 10 sections. Students most commonly earned a B, though over 10% of students earned a grade lower than a C.

3.3 Measuring Math Skills

In addition to student scores on the mathematics and science portions of the ACT exam, we examine several other measures of student math ability, including the ability of our students to answer 15 simple, multiple-choice mathematics questions. The math portion of the survey was designed to supplement traditional sources of information regarding student math skills. The math quiz provides additional information, including: (1) student mathematical knowledge on a given day, without preparation or studying, and (2) student knowledge of extremely basic material, not extensively covered by collegiate entrance exams. (see End Note 3) See Table 2 for the math quiz, as well as for the percentage of the survey respondents that answered each question incorrectly.


Table 2. Mathematics Quiz

Answer the following mathematics questions to the best of your ability.  Please do not use a calculator.

1.  Solve the following system of equations for x:
	x = y - 6
	y = 10

(a)  -60   (b)  10/6   (c)  3    (d) 4    (e) -4

3.01% of students answered this question incorrectly.

2.  Solve the following system of equations for x:
	y = 2x + 3
	y = 3x

(a)  0	 (b)  3	 (c)  3/5    (d) -3/2     (e)  none of the above

15.11% of students answered this question incorrectly.

3.  Suppose that x = a/b.  Then if a = 6 and b = 2, solve for x.

(a)  12    (b)  8    (c)  3    (d)  4    (e)  1/3

2.47% of students answered this question incorrectly.

4.  Suppose that x = a/b.  Then if x = 4 and b = 2, solve for a.

(a)  1/2    (b)  2    (c)  4    (d)  8   (e)  16

7.65 % of students answered this question incorrectly.

5.  Suppose that x = a/b.  Then if x = 4 and a = 8, solve for b.

(a)  1    (b)  2   (c)  32   (d)  4    (e)  1/2

22.13 % of students answered this question incorrectly.

6.  Perform the following division:  (1/2) ÷ (2/3)

(a)  3    (b)  3/2   (c) 3/4   (d) 4/3  (e)  1/3 

38.40% of students answered this question incorrectly.

7.  Find the area of the right triangle described below.  (Figure omitted.)

The length of side a = 3, the length of side b = 4, and the length of side c = 5.  The area of the triangle is:

(a)  3    (b)  4   (c)  6    (d)  12    (e)  25

31.68% of students answered this question incorrectly.

8.  (Figure omitted: a line through points A and B.)

The coordinates of point A are (1,2) and the coordinates of point B are (2,4).  Find the slope of the line.

(a) 1/2    (b)  1    (c)  -1  (d)  2    (e)  -2

23.90% of students answered this question incorrectly.
        
9.  (Figure omitted: a line through points C and D.)

The coordinates of point C are (1,4) and the coordinates of point D are (5,2).  Find the slope of the line.

(a)  1/2    (b)  -1/2   (c)  2    (d)  -2    (e)  5/4

30.58% of students answered this question incorrectly.

10.  Suppose you want to carpet a rectangular room that is 6 feet by 12 feet.  Carpet costs $10 per square yard.  
     Note that 1 yard = 3 feet.  How much does it cost to carpet the room?

(a)  $720   (b)  $2160   (c)  $240   (d)  $80   (e)  $8

57.14% of students answered this question incorrectly.

11.  The fraction 13/38 is approximately 

(a) 0.15	     (b)  0.25     (c) 0.35     (d)  0.45    (e)  0.55

27.90% of students answered this question incorrectly.

12.  The square root of 100,000 is about

(a)  30    (b)  100    (c)  300   (d)  1,000   (e)  3,000

77.07% of students answered this question incorrectly.

13.  In a group of 900 voters, two-thirds said they would vote for the incumbent in the race for Governor.  How 
     many of the 900 voters said they would vote for the incumbent?  

(a)  200   (b)  300  (c)  330   (d)  600   (e)  660

9.89% of students answered this question incorrectly.

14.  In 1997, a total of 3,000 students were enrolled at Moo University.  In 1998, the corresponding figure was 3,300.
     What is the percent increase in the number of students from 1997 to 1998?

(a)  1%   (b)  3%   (c) 10%  (d)  30%  (e)  33%

33.70% of students answered this question incorrectly.

15.  What is 80% of 60?

(a)  24   (b)  36   (c)  40   (d)  48   (e)  50

17.46% of students answered this question incorrectly.


The mean score on the math quiz is 11.1 out of 15. As indicated in Table 2, some 22% of the students could not solve for b, given that x = 4 and a = 8. Further, 38% of the students could not divide 1/2 by 2/3; roughly 32% of the students could not find the area of a right triangle; and between 24% and 31% of the students could not find the slope of a line, depending on whether the line slopes upward or downward.
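The intended answers to the most-missed items can be checked with exact arithmetic. A quick sketch (the 1/2 ÷ 2/3 division is stated above; the remaining values come directly from the question statements in Table 2):

```python
from fractions import Fraction
import math

# Question 6: divide 1/2 by 2/3, i.e., multiply by the reciprocal
q6 = Fraction(1, 2) / Fraction(2, 3)
print(q6)  # 3/4, choice (c)

# Question 7: area of a right triangle with legs 3 and 4
q7 = Fraction(1, 2) * 3 * 4
print(q7)  # 6, choice (c)

# Question 9: slope through C(1, 4) and D(5, 2), rise over run
q9 = Fraction(2 - 4, 5 - 1)
print(q9)  # -1/2, choice (b)

# Question 10: a 6 ft x 12 ft room is 2 yd x 4 yd = 8 sq yd, at $10/sq yd
q10 = (6 // 3) * (12 // 3) * 10
print(q10)  # 80, choice (d)

# Question 12: sqrt(100,000) is about 316, so 300 is the closest choice
q12 = math.sqrt(100_000)
print(round(q12))  # 316
```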

While these exact mathematical skills may not be used directly in statistical calculations, these results suggest that a significant number of students would likely have difficulty in not only performing statistical calculations, but also understanding or interpreting statistical calculations. For example, a student that cannot compute areas will likely struggle with manipulating standard normal probabilities, a student that does not understand fractions or division may have difficulty in understanding means or standard deviations, and a student that cannot find the slope of a line will likely be unable to correctly interpret the slope in a linear regression.

We also include measures of the mathematics courses students have taken. At our university, students with sufficiently low scores on a university math-placement exam are required to take a remedial-math course. (In our sample, 16.8% of the students report facing this requirement. The university reports that, on average, roughly 21% of students are required to take remedial mathematics.) We also asked whether students took a calculus or business calculus course. In our regressions, we use a dummy variable for whether the student was required to take remedial math or if they had taken some form of calculus. The variables are summarized in Table 3.

Thus, we have several distinct measures of quantitative ability: (1) student scores on the Math ACT, (2) student scores on the Science ACT, (3) the score on the math quiz administered early in the semester, (4) whether the student has taken calculus or business calculus, and (5) whether the student had been required to take remedial math.


Table 3. Summary Statistics of the Mathematics Variables


Variable                                                   Percentage    Mean    Standard Deviation

Calculus or Business Calculus
     Yes                                                        68.25
     No                                                         31.75
Remedial Mathematics
     Yes                                                        16.76
     No                                                         83.24
ACT Mathematics Score                                                    21.46    2.98
ACT Science Score                                                        22.43    2.74
Most Recent Math Course
     Taking a math course this semester                         47.09
     Took a math course last semester                           42.59
     Took a math course within the last year                     6.61
     Took a math course two or more years ago                    3.70
Math Quiz Score                                                          11.10    2.31
     15 correct                                                  3.97
     14                                                          8.58
     13                                                         10.04
     12                                                         23.22
     11                                                         25.94
     10                                                         11.09
     9                                                           6.69
     8                                                           5.44
     7                                                           2.72
     6                                                           1.46
     5 or fewer correct                                          0.84


3.4 Issues in the Data

Since some students did not participate in the survey, we also have to address the problem of selectivity bias in our survey sample. The 292 students for whom we have complete survey and performance information were a subset of 388 individuals who were enrolled in the ten sections of introductory statistics. Our concern is that students who do not attend class regularly may be more likely to have missed taking the survey. Twenty-one students who were initially enrolled dropped the course; 11 of these students completed the survey before doing so. An additional 3 students have outstanding incompletes in the course. Further, 71 students missed filling out the survey. Thus, we are missing information on 21% of students who remained enrolled.

For students enrolled in the course who did not complete the survey, we have university-provided information about their GPAs and ACT scores. Comparing these students to the survey sample, we find that the missing students have GPAs that are on average 0.10 lower than survey students (p-value < 0.05). The missing students also scored statistically significantly lower on their Math ACT than did survey students (p-value < 0.01). However, there were no statistically significant differences for English, Reading, or Science ACT scores. In addition, the missing students earned lower grades in introductory statistics compared to students who completed the survey. Thus, since poorer students were more likely to miss class on the day the survey was administered, and since poorer students are also more likely to have problems with basic mathematics skills, as indicated by their Math ACT scores, we argue that if these students were included in the sample, our results would actually be strengthened. (see End Note 4)

An additional issue with our data is that for some students, we do not have values for their ACT scores. We are missing ACT scores for 94 students because transfer students and some special scholarship students are not required to provide an ACT score for admission to this university. We replace the missing ACT scores with predicted ACT scores from a regression of ACT score on a vector of student demographic and academic explanatory variables.
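The imputation step can be sketched as an ordinary least-squares fit on the students with observed scores, followed by prediction for the rest. A minimal illustration on simulated stand-in data (the predictor set and all numeric values here are hypothetical, not the authors' actual regression):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Stand-in data: GPA and a gender dummy predict the ACT math score
gpa = rng.normal(2.8, 0.5, n)
female = rng.integers(0, 2, n).astype(float)
act = 10 + 4.0 * gpa + 0.5 * female + rng.normal(0, 2, n)

# Pretend roughly a quarter of the scores are missing (transfer students, etc.)
missing = rng.random(n) < 0.25
act_obs = np.where(missing, np.nan, act)

# Fit OLS on students with observed scores, then predict for the rest
X = np.column_stack([np.ones(n), gpa, female])
beta, *_ = np.linalg.lstsq(X[~missing], act_obs[~missing], rcond=None)
act_imputed = np.where(missing, X @ beta, act_obs)

print(np.isnan(act_imputed).sum())  # 0: every gap is filled
```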

While one might expect the different measures of mathematics ability to be highly correlated, the actual correlation coefficients are surprisingly modest. (see End Note 5) As expected, there are positive correlations among the math-quiz score, GPA, and scores on the math and science portions of the ACT exam, though the correlation coefficients never exceed 0.30. Student GPA and math-quiz score are most highly correlated with the grade earned in introductory statistics (r = 0.50 and r = 0.22, respectively), though scores on the math and science portions of the ACT exam are also positively related to course grade (r = 0.14 and r = 0.16, respectively). These results indicate that mathematics skills are complex and not easily represented by a single measure.
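Pairwise correlations of this kind are mechanical to compute from the raw columns. A brief sketch on simulated stand-in data, chosen only to mimic the reported pattern of modest correlations (the generating coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 292  # sample size from the study

# Stand-in columns loosely mimicking the reported summary statistics
gpa = rng.normal(2.80, 0.52, n)
quiz = np.clip(rng.normal(11.1, 2.31, n), 0, 15)
grade = 0.9 * (gpa - 2.8) + 0.08 * (quiz - 11.1) + rng.normal(0, 0.8, n)

# Pearson correlation matrix among grade, GPA, and quiz score
r = np.corrcoef([grade, gpa, quiz])
print(np.round(r, 2))
```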

4. Data Analysis

4.1 The Estimation Model

We use an ordered probit regression model with the dependent variable of “Grade in Course” and a range of explanatory variables, as described in the previous sections. Ordered probit is a regression technique for ordinal (categorical and ordered) dependent variables, such as our “Grade in Course” variable; it maintains the ordering of the categories and yields estimated response probabilities for each category. Following Wooldridge (2002, pp. 504-509), we begin with a latent variable model. Assume that our latent variable y*, which represents student performance in the statistics course, is determined by

	y* = xβ + e,

where the vector x contains k explanatory variables, β is a k × 1 vector of parameters, and the error e | x follows a standard normal distribution. Let α1 < α2 < ... < α7 be a set of seven undetermined threshold parameters. We define our dependent variable y (Grade in Course) by

	y = g1 if y* ≤ α1,
	y = gj if α(j-1) < y* ≤ αj, for j = 2, ..., 7,
	y = g8 if y* > α7,

where g1 < g2 < ... < g8 denote the possible course grades. Given this, we can derive the series of response probabilities determining y given the explanatory variables x:

	P(y = g1 | x) = Φ(α1 − xβ),
	P(y = gj | x) = Φ(αj − xβ) − Φ(α(j-1) − xβ), for j = 2, ..., 7,
	P(y = g8 | x) = 1 − Φ(α7 − xβ),

where Φ is the cumulative normal distribution function.

The parameters of this model can be estimated by maximum likelihood. The procedure is straightforward in statistical packages such as Stata, which include an ordered probit estimation command. See also Greene (2000, pp. 875-879).
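The mechanics can also be sketched outside a packaged command. A minimal ordered probit fit by direct maximum likelihood in Python, using simulated data with a single regressor and three outcome categories rather than the course's grade levels (all numeric values are illustrative assumptions, not estimates from the study's data):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 2000
true_beta = 1.0
true_cuts = np.array([-0.5, 0.5])

# Latent-variable model: y* = x*beta + e with e ~ N(0, 1);
# the observed y is y* cut into ordered categories at the thresholds
x = rng.normal(size=n)
ystar = true_beta * x + rng.normal(size=n)
y = np.digitize(ystar, true_cuts)  # categories 0, 1, 2

def neg_loglik(params):
    beta, a1, log_gap = params
    cuts = np.array([a1, a1 + np.exp(log_gap)])  # enforces a1 < a2
    # Response probabilities: P(y=j|x) = Phi(a_j - x*beta) - Phi(a_{j-1} - x*beta)
    upper = np.append(cuts, np.inf)[y] - beta * x
    lower = np.append(-np.inf, cuts)[y] - beta * x
    p = norm.cdf(upper) - norm.cdf(lower)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

fit = minimize(neg_loglik, x0=[0.0, -1.0, 0.0], method="Nelder-Mead")
beta_hat = fit.x[0]
print(round(beta_hat, 1))  # recovers a value near the true beta of 1.0
```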

4.2 Analysis of the Estimation Model

In Table 4, we report the results from ordered probit regressions on the dependent variable of “Grade in Course” on a range of explanatory variables. The results are for our preferred specification, in which we exclude some variables that do not pass an F-test. In addition, the results in Table 4 do not include some of the more subjective measures collected, such as self-reported attendance and expected grade in the course, which also were not statistically significant. The results for the quantitative-skills variables are fairly consistent across a wide variety of specifications. (see End Note 6)


Table 4. Estimated Coefficients and Significance (Dependent Variable = Course Grade)


Explanatory Variable Estimated Coefficient Standard Error z-value

Female 0.38 0.14 2.73***
Sophomore 0.20 0.14 1.49
Minority -0.21 0.43 -0.48
Hours Work per Week 0.00 0.01 0.20
Hours Study per Week 0.02 0.01 1.65*
GPA 1.45 0.16 9.10***
Professor
     Professor 1 0.31 0.17 1.78*
     Professor 2 -0.59 0.19 -3.20***
Math ACT Score -0.01 0.02 -0.53
Science ACT Score 0.05 0.03 1.90*
Reading ACT Score -0.03 0.02 -1.36
English ACT Score 0.00 0.02 0.20
Calculus 0.01 0.15 0.06
Remedial Mathematics -0.23 0.15 -1.56
Math Quiz Score 0.09 0.03 2.73***
 
N, R-squared (see End Note 7): 292, 0.16

* Ordered probit estimation. Significance levels are indicated as: * = 10%, ** = 5%, and *** = 1%. Professor 3 serves as the comparison category. “Sophomore” = 1 if the student reported being a sophomore, and = 0 if the student reported any other university class status. “Minority” = 1 if the student reported a racial/ethnic category other than white, and = 0 if the student reported being white or Caucasian.


We find that the most important determinants of student performance are GPA, score on the science portion of the ACT exam, score on the math quiz, gender, and professor. All of these measures are significant at the ten-percent level or better. We also find that the coefficients on whether a student is a sophomore, reported hours studying per week, and score on the reading portion of the ACT exam are marginally significant (p-value < 0.18). Professor 1 gave significantly higher and Professor 2 significantly lower grades than Professor 3. Since no difference was found across semesters in professor grading, the semester dummy variable was dropped from the final regression specification.

Holding all other values constant at their means, we can evaluate how values for a particular explanatory variable influence the probability that a student earns a higher course grade. Women are likely to earn significantly higher grades than men. While the estimated coefficient on minority indicates that these students performed more poorly in the course, this result is not statistically significant. Students taking the course as a sophomore are expected to do better than students taking the course out of sequence (p-value < 0.16). The number of weekly hours that students report working at paid jobs has a negative impact on course grade, though this is not statistically significant. (see End Note 8) On the other hand, the reported number of hours spent studying had a strong and positive impact on performance in the class (p-value < 0.10). A student who studies 20 hours a week increases their probability of a higher grade by 0.17 compared to a student who only studies 10 hours per week. And, as expected, GPA is positively and highly significantly correlated with statistics course grade, suggesting that the best predictor of student academic performance is past academic performance.
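The mechanics behind such probability statements can be illustrated with the cumulative normal: increasing a covariate shifts the latent index xβ, which shifts the probability of clearing any grade threshold. A small sketch using Table 4's study-hours coefficient of 0.02, but a hypothetical index position and threshold, so the resulting shift is illustrative rather than the 0.17 figure reported above:

```python
from scipy.stats import norm

# Hypothetical values for illustration: only the 0.02 coefficient on weekly
# study hours is taken from Table 4; the index and threshold are made up
beta_study = 0.02
index_at_10_hours = 0.0   # latent index xb for a student studying 10 hrs/week
threshold = 0.2           # an arbitrary grade cutoff alpha_j

# P(y* > alpha_j) = 1 - Phi(alpha_j - xb); 10 extra hours shift xb by 0.2
p10 = 1 - norm.cdf(threshold - index_at_10_hours)
p20 = 1 - norm.cdf(threshold - (index_at_10_hours + beta_study * 10))
print(round(p20 - p10, 3))  # about 0.079 with these hypothetical values
```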

The estimated coefficient on the mathematics portion of the ACT exam is negative, though highly insignificant. The coefficient on the reading portion of the ACT exam is also negative, and marginally significant (p-value < 0.17). While the estimated coefficient on the English portion of the exam is positive, it is statistically insignificant. However, the coefficient on the science portion of the ACT exam is positive and statistically significant (p-value < 0.06). Thus, of all the skills and knowledge tested by the ACT exam, those tested in the science portion are most closely related to those used in introductory statistics.

We find it surprising that having taken calculus or business calculus is not related to course performance and that having been required to take remedial mathematics is only marginally significant. We hypothesize that this is because all students who wish to enter the College of Business at our university must take both this introductory statistics course and business calculus. The homogeneity in the student-reported values thus obscures any positive effect that calculus skills might have for students taking introductory statistics. Further, because this is a non-calculus-based introductory statistics course, calculus skills may be less beneficial than solid basic math skills.

The math-quiz score is positively and significantly related to student performance (p-value < 0.01). This suggests that very basic math skills may be more important than previously recognized. Our coefficients indicate that, all else equal, a student who answers all 15 math questions correctly is likely to earn a half to full letter grade higher in the course. While we might expect this to be the case for professors who rely on heavily mathematical presentation or technical homework exercises, the result holds even when controlling for which professor the student had, as “Professor” is included as an explanatory variable in the regression. Thus, in each instructor's course, holding constant all the other explanatory variables, there is a positive relationship between math skills and course performance. The extent of this positive relationship varies across the professors, but it is consistently significant.

4.3 The Effect of the Professor

While the results of the regression discussed above indicated that, even when controlling for professor, the math quiz score is positively and significantly related to student performance in the course, we wish to analyze further the relationship between professors, mathematical skills, and course performance. To this end, we determine which variables have significant interactions with the professor variable by conducting Chow tests. The Chow tests are performed under the constraint that all of the professors' classes have the same estimated coefficient values for any given explanatory variable. We then consider a new specification of the model that includes interaction terms by professor.
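As an illustration of the mechanics, a Chow test compares the residual sum of squares from one pooled regression against the sum from separate per-group regressions; a large F-statistic indicates that coefficients differ across groups. The following is a minimal sketch, not the study's actual code, and the data below are synthetic:

```python
import numpy as np

def chow_test(X, y, groups):
    """F-statistic for equality of OLS coefficients across groups.

    Compares pooled residual sum of squares (RSS) to the sum of
    per-group RSS; a large value suggests the coefficients differ.
    """
    def rss(Xg, yg):
        beta, *_ = np.linalg.lstsq(Xg, yg, rcond=None)
        resid = yg - Xg @ beta
        return resid @ resid

    n, k = X.shape
    labels = np.unique(groups)
    g = len(labels)
    rss_pooled = rss(X, y)
    rss_split = sum(rss(X[groups == lab], y[groups == lab]) for lab in labels)
    # (g - 1) * k restrictions; n - g * k residual degrees of freedom
    num = (rss_pooled - rss_split) / (k * (g - 1))
    den = rss_split / (n - g * k)
    return num / den

# Synthetic example: both groups share the same true coefficients,
# so the F-statistic should be small.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=200)
groups = np.repeat([0, 1], 100)
F = chow_test(X, y, groups)
```

Because the split regressions can only fit better than the pooled one, the statistic is nonnegative; it is compared against an F distribution with ((g-1)k, n-gk) degrees of freedom.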

The variables that pass the Chow test are gender, sophomore, work, GPA, remedial mathematics, and English ACT score, so we add interaction terms that measure the effect of these variables for each professor. For example, to measure the effect of being female in the various professors' classes, we add the interaction terms female*professor 1 and female*professor 2. For a discussion of the use of interaction terms with dummy variables, refer to Suits (1957, pp. 548-551).
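Constructing such interaction terms amounts to multiplying the dummy variables elementwise. A hypothetical sketch (the small arrays below are invented for illustration, not the study's data):

```python
import numpy as np

# One row per student: an indicator for female, and an instructor id.
female = np.array([1, 0, 1, 1, 0])
professor = np.array([1, 1, 2, 3, 2])  # Professor 3 is the base category

# Professor dummies for the two non-base instructors.
prof1 = (professor == 1).astype(int)
prof2 = (professor == 2).astype(int)

# Interaction columns: nonzero only for female students of that professor.
female_x_prof1 = female * prof1
female_x_prof2 = female * prof2
```

These two columns, together with the main-effect dummies, would then enter the design matrix of the ordered probit regression.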

The remaining variables in our estimation - minority, hours study per week, whether a student has taken calculus, math ACT score, science ACT score, reading ACT score, and mathematics quiz score - do not pass the Chow test, so the estimated coefficients for these variables do not differ significantly across professors. Hence, for these variables we do not add interaction terms. The results of the regression are shown in Table 5.


Table 5. Estimated Coefficients with Interaction Terms (Dependent Variable is Course Grade)


Explanatory Variable Estimated Coefficient Standard Error z-value

 
Interaction Terms
Female*Professor1 -0.77 0.34 -2.24**
Female*Professor2 0.50 0.35 1.43
Sophomore*Professor1 -0.48 0.36 -1.33
Sophomore*Professor2 0.58 0.37 1.57
Work*Professor1 0.03 0.02 1.61
Work*Professor2 0.00 0.02 0.19
GPA*Professor1 -0.77 0.38 -2.03**
GPA*Professor2 0.36 0.39 0.92
Remedial*Professor1 0.13 0.53 0.25
Remedial*Professor2 0.02 0.53 0.03
English ACT*Professor1 0.10 0.05 1.91*
English ACT*Professor2 -0.05 0.05 -1.00
 
Non-Interaction Terms
Female 0.53 0.27 1.97**
Sophomore 0.08 0.28 0.28
Work -0.01 0.01 -0.71
Minority -0.34 0.47 -0.72
GPA 1.82 0.31 5.82***
Study 0.02 0.01 1.69*
Calculus 0.14 0.15 0.92
Remedial 0.00 0.48 0.00
Professor 1 0.58 1.57 0.37
Professor 2 -0.83 1.64 -0.50
English ACT Score -0.01 0.04 -0.23
Math ACT Score -0.01 0.03 -0.55
Science ACT Score 0.05 0.03 1.86*
Reading ACT Score -0.02 0.02 -0.83
Math Quiz Score 0.10 0.03 2.84***
 
N, R-squared (End Note 7) 292, 0.22

* Ordered probit estimation. Significance levels are indicated as: * = 10%, ** = 5%, and *** = 1%.


The interaction terms created between the professor dummy variables and gender, sophomore, work, GPA, remedial mathematics, and English ACT score show the relative importance of these variables for each of the three professors. Consider the example of the gender (female) variable. Recall that in the original regression (Table 4), women did significantly better overall. In Table 5, we can see the extent to which being female is associated with course grade in each professor's course. Since Professor 3 serves as the comparison category in the regression, the coefficient on “female” of 0.53 indicates that in Professor 3's classes being female contributes positively and significantly to the course grade. In Professor 2's class being female has an even stronger effect, with a total coefficient of 0.53 + 0.50 = 1.03. However, in Professor 1's course, being female contributed negatively to course grade (0.53 - 0.77 = -0.24).
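The per-professor totals quoted above are simple sums of the main-effect coefficient and the relevant interaction coefficient from Table 5, which can be checked directly:

```python
# Total effect of being female in each professor's course (Table 5).
# Professor 3 is the omitted comparison category, so its interaction is 0.
base_female = 0.53  # main-effect coefficient on "Female"
interactions = {"Professor 1": -0.77, "Professor 2": 0.50, "Professor 3": 0.0}

totals = {prof: round(base_female + delta, 2)
          for prof, delta in interactions.items()}
# totals -> {"Professor 1": -0.24, "Professor 2": 1.03, "Professor 3": 0.53}
```

The same main-effect-plus-interaction arithmetic applies to the other interacted variables (sophomore, work, GPA, remedial, English ACT).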

The results from the interaction-term specification of the model show that some characteristics or skills are valued more highly in some professors' courses than in others. More important, however, are the skills that are consistently valued across the three professors to approximately the same degree: science skills, as measured by the science portion of the ACT exam, and the math skills assessed by our basic mathematics quiz. This result has important implications for statistical teaching. If certain skills are consistently useful for students, regardless of the course format or teaching style of the professor, then we should pay more attention to developing these skills before or while students take the course.

4.4 Analysis of the Basic Math Quiz

We find that students who scored 7 or less on the math quiz were significantly more likely to get a course grade of less than a 2.0 than those who scored 10 or more, after controlling for the other explanatory variables. The results of the previous section indicate that basic mathematics skills, as tested in our math quiz score, are consistently valued across all three professors in the study. This means that regardless of the level of mathematics presented in the statistics course, or the relative emphasis on technical computation versus statistical interpretation, basic mathematics skills are an important determinant of student success in elementary statistics.

To analyze these basic mathematics skills further, we estimate another specification of the original model that, in addition to the total math-quiz score, includes each math question individually. That is, this regression includes a set of fifteen dummy variables, one for each question on the math quiz. Questions 2, 4, 6, 10, and 12 are independently significant at the ten-percent level or better. These questions deal with very basic concepts in arithmetic, algebra, and geometry, including manipulating simple systems of equations, manipulating ratios, dividing fractions, a two-step word problem to find the area of a rectangle, and estimating square roots. Further, questions 5, 9, and 15 are marginally significant (p-value < 0.19). Thus, our regression results indicate that a variety of measures of quantitative skill have important effects on student achievement, including measures of some extremely basic skills. The results also indicate the particular types of skills that are important for introductory statistics.

No grade was attached to the math quiz, so some questions arise as to student motivation on the quiz. We examine the reliability of student scores on the math quiz by comparing them to other measures of student performance, such as their grade in the course, their GPA, and their ACT scores. Using Cronbach's Alpha as a test of reliability, we find a scale reliability coefficient of 0.669 for test items including math quiz score, GPA, mathematics ACT score, science ACT score, and course grade. This suggests that the math quiz score is largely consistent with other measures of student academic performance, and can be taken as a reliable measure of student mathematics ability (see End Note 9).
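Cronbach's alpha itself is straightforward to compute from a subjects-by-items score matrix. A minimal sketch follows; the toy data are invented for illustration (the paper's 0.669 value comes from its own items: math quiz score, GPA, ACT scores, and course grade):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of subject totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Degenerate toy example: two perfectly consistent items give alpha = 1.0.
scores = [[1, 1],
          [2, 2],
          [3, 3]]
alpha = cronbach_alpha(scores)
```

Alpha near 1 indicates high internal consistency among the items; as noted in End Note 9, values of 0.70 or higher are conventionally considered acceptable.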

We also perform sensitivity testing, following subjective cleaning of the data, removing the students for whom we have reason to believe that their math quiz score may be unreliable. We examine several regressions similar to that in Table 4, but dropping students with seemingly inconsistent values. We drop (1) students who scored a 7 or less on the math quiz, but earned a 3.0 or better in the course, (2) students who scored 7 or less on the math quiz, but earned the mean or higher on the composite ACT exam, and (3) students who scored 7 or less on the math quiz, but were not identified as needing remedial math work when they entered the university. This meant dropping roughly 16 students in each of the three situations examined. In all three cases, our results are stronger for the selected subsample. The coefficients on Science ACT and math quiz score rise and become more significant.
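The subsample cleaning in case (1) is a simple filter on two fields. A hypothetical sketch (the records below are invented for illustration, not the study's data):

```python
# Drop students whose math-quiz score looks inconsistent with their
# course grade: scored 7 or less on the quiz but earned 3.0 or better.
students = [
    {"quiz": 5,  "grade": 3.5},  # inconsistent -> dropped
    {"quiz": 12, "grade": 3.5},  # kept
    {"quiz": 6,  "grade": 2.0},  # low quiz but low grade -> kept
]

kept = [s for s in students if not (s["quiz"] <= 7 and s["grade"] >= 3.0)]
```

Cases (2) and (3) would replace the grade condition with the ACT-composite and remedial-placement conditions, respectively, before re-running the regression on the retained records.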

5. Discussion and Recommendations

5.1 Discussion

We study the determinants of success in an introductory economics and business statistics course, using data from a sample of 293 students who took the course in the Spring and Fall semesters of 2004, including a wide range of explanatory variables. We find that the most important determinants of student performance include student gender, GPA, ACT science score, and score on a quiz of basic math skills.

That student score on the ACT science exam is positively and significantly related to course grade in introductory statistics is perhaps not surprising, since the ACT science exam tests skills such as interpreting graphs, tables, and scatterplots, and has questions on experimental design, interpreting experimental results, and comparing alternative viewpoints and hypotheses. These same abilities are a large part of what we require of students in introductory statistics.

It is also not controversial that quantitative skills are important to success in introductory statistics. However, it is especially informative that very basic mathematics skills are among the most important indicators of student success in a course where many of the skills directly assessed (such as analyzing data with descriptive statistics, hypothesis testing, or linear regression) are not necessarily of a basic-skills nature. In contrast, we find that neither taking calculus nor ACT Math score (measuring higher mathematics skills such as algebra, geometry, and trigonometry) has a significant effect on course performance.

Moreover, we find that the importance of the basic math quiz score, as well as the science ACT score, is consistent across the three professors in the study to approximately the same degree, even though the three professors had differing teaching styles and course emphases. This means that basic mathematics skills are an important determinant of student success in elementary statistics regardless of the level of mathematics presented, or the relative emphasis on computation versus interpretation by the instructor. That we find the basic math quiz score an important factor even for the “low math” professor, who de-emphasized calculations and stressed interpretation of results, indicates that the importance of mathematics skills may go beyond merely the ability of students to “do the math”; these skills may also help students analyze and reason quantitatively, and understand and interpret statistical measures. The fact that our results are robust across three different teaching methodologies and course structures indicates that this study may be widely applicable to other instructors and at other universities.

5.2 Recommendations

The basic-math-quiz variable and ACT science variable can help us to identify the students most in need of help in statistics. Further, the basic mathematics and graph-reading skills that these variables measure may be the ones that can most easily be addressed in an introductory statistics course. We make the following recommendations:

  1. Instructors should pay special attention to the mathematics skills needed to master the statistics concepts they are teaching. One possibility is to include reviews of math concepts prior to introducing the related statistics concept. While adding in-class content to the course is costly, other options can be considered, such as making online mathematics reviews a required part of the course, to be completed by the students outside of class.

  2. Introductory statistics is commonly a prerequisite course for students in many programs of study. Thus, universities and colleges looking to improve student performance and retention should take a careful look at what the necessary prerequisites for introductory statistics should be, and how course prerequisites are enforced. Students scoring sufficiently poorly on a mathematics placement exam, or on the mathematics portion of a college entrance exam, should perhaps be required to complete a developmental level mathematics course prior to taking introductory statistics.

  3. This study also helps to identify the particular mathematics skills that are important to emphasize in the prerequisites for introductory statistics. Universities should be sure that mathematics placement exams that are used for statistics course prerequisites assess basic math skills, including understanding simple equations, ratios and area.

  4. We find that ACT Science score is a significant indicator of student success in statistics. Students with low ACT Science scores may lack important skills that statistics instructors rely upon, such as the ability to read graphs or evaluate hypotheses. This readily-available measure should be used by universities and instructors to identify students at risk in introductory statistics.

  5. This study finds no significant relationship between student score on the mathematics portion of the ACT exam and performance in statistics. This stands in contrast to the importance of the basic math quiz score. This finding may have implications for universities that rely solely on ACT scores for mathematics placement. However, whether the ACT mathematics exam fails to accurately assess the basic mathematics skills identified in this study merits further analysis. This study also finds no relationship between the language and reading skills measured on the ACT exam and performance in statistics, suggesting that the ACT exam may not provide a good assessment of the language skills important for statistics students; perhaps future research could explore this relationship as well.


Acknowledgements

We would like to thank Ryan Haley and Denise Robson for their participation and assistance, and Kevin McGee for his insightful comments. We would also like to thank the University of Wisconsin Oshkosh for support through a Scholarship of Teaching and Learning Grant.


End Notes

  1. Formal IRB approval was granted for this study. Appropriate records are on file with the university and can be provided upon request. In addition, students were provided with information sheets and consent forms to indicate their agreement to participate in the study, as well as to our retrieval of their official GPA and ACT scores from the university. Only three of the students who completed the course refused to give consent.

  2. Consistent with Maxwell and Lopus (1994), we find that students tend to overstate their GPA and ACT scores. Students overstate their GPA, compared to official university records, by an average of 0.05. Similarly, students overstate their composite ACT score by 5 (out of 36). Both results are statistically significant.

  3. The math quiz was originally developed for introductory economics, based on years of teaching. The particular math concepts covered by the quiz are similar to those reviewed in introductory statistics textbooks, as well as questions on our university's math placement exam for incoming freshmen. Tests of reliability indicate that none of the questions should be eliminated.

  4. In terms of our estimated regression results, we argue that selection bias is not a problem. Consider an equation determining attendance for student i with an error u_i:

     attendance_i = α_0 + Σ_j α_j x_ij + u_i,

     where α_0 is the constant and the α_j are the coefficients on the exogenous variables x_ij for all observations i, including the math variables such as math quiz score. We argue that the error u is positively correlated with the error e in the grade equation, which specifies that student i's grade depends on a vector of explanatory variables z_ij:

     grade_i = β_0 + Σ_j β_j z_ij + e_i,

     where β_0 is the constant and the β_j are the coefficients on the exogenous variables z_ij for all observations i. This type of relationship between the error terms would suggest that students who are more likely to attend class are also more likely to get higher grades and to have correspondingly better math skills. Because the sample conditions on attendance, the expected value of e among included students falls as the math variables rise; this expected value of the error acts like an omitted variable in the grade-determination equation. When the regression is run, this negative correlation between the math variables and the expected e would cause the coefficients on the math variables to be underestimated (see Ballard and Johnson 2004 for further details).

  5. We also check for multicollinearity between the various math and academic variables and between the math variables and race and gender. We find little evidence that multicollinearity is a problem examining correlation coefficients. In addition, we compare regressions, dropping one of the math variables and then looking at the effect on the t-statistics of the remaining variables; we find no significant changes.

  6. We include a math-squared term in these regressions, and the evidence from the regressions suggests that the relationship between student performance in the class and the math quiz is not expressed better as a quadratic.

  7. Ordered probit analysis does not allow the computation of a straightforward R-squared value. The adjusted R-squared reported here is a simulated value, and the usefulness of such simulated values is the subject of much discussion. Perhaps a clearer proximate measure is the R-squared value from a standard OLS regression with grade as the dependent variable and the same explanatory variables; doing so, we find an R-squared of 0.4277. In either case, a small R-squared value only implies that the variance of the error is large relative to the variance of the dependent variable. Thus it may be difficult to precisely estimate the individual coefficients; however, larger sample sizes allow us to estimate the partial effects precisely, even with many unobservable factors unconsidered. Therefore, while we may be explaining only a fraction of the total variance in statistics grades with these explanatory variables, we can still be confident that our estimated coefficients on the explanatory variables are accurate.

  8. We attempted to enter the “hours worked per week” variable in a number of different ways because the variable has skewed distribution. However, none of the categorical variables (work versus do not work; work less than 10 hours a week, work more; work less than 20 hours a week, work more) we tried generated significant results. We also considered that “work” may enter in a non-linear form, and thus included both “work” and “work-squared,” but with no impact on the general results of the model.

  9. Cronbach's Alpha is calculated as the square of the correlation between the measured scale and the underlying factor; generally, alpha values of 0.70 or higher are considered acceptable. However, our scale is an imperfect yardstick for judging the reliability of the math quiz score because it includes items, such as GPA, that reflect student intelligence or effort but are not necessarily related to math skills; we therefore argue that a value of 0.669 is sufficiently indicative of the math quiz score's reliability. Another way to assess this is to examine how alpha changes when an item is removed: for well-fitting items, alpha decreases when the item is dropped from the scale. Removing the math quiz score from the list of items reduces Cronbach's alpha from 0.6686 to 0.6161.


References

Anderson, G., Benjamin, D., and Fuss, M. (1994), “The Determinants of Success in University Introductory Economics Courses,” Journal of Economic Education, 25, 99-119.

Ballard, C. L. and Johnson, M. (2004), “Basic Math Skills and Performance in Introductory Microeconomics,” Journal of Economic Education, 35, 3-23.

Becker, W. (1987), “Teaching Statistical Methods to Undergraduate Economic Students,” American Economic Review, 77, 18-23.

Bresnock, A. E., Graves, P. E., and White, N. (1989), “Multiple-Choice Testing: Question and Response Position,” Journal of Economic Education, 20, 239-245.

Chan, K., Shum, C., and Wright, D. (1997), “Class Attendance and Student Performance in Principles of Finance,” Financial Practice and Education, 7, 58-65.

Cohn, E. (1972), “Students' Characteristics and Performance in Economic Statistics,” Journal of Economic Education, 3, 106-111.

Douglas, S. and Sulock, J. (1995), “Estimating Educational Production Functions with Correction for Drops,” Journal of Economic Education, 26, 101-112.

Ely, D. and Hittle, L. (1990), “The Impact of Math Background on Performance in Managerial Economics and Basic Finance Courses,” Journal of Financial Education, 16, 59-61.

Garfield, J., Hogg, B., Schau, C., and Whittinghill, D. (2002), “First Courses in Statistical Science: The Status of Educational Reform Efforts,” Journal of Statistics Education [Online], 10(2).
jse.amstat.org/v10n2/garfield.html

Greene, W. (2000), Econometric Analysis, 4th Ed., Upper-Saddle River, NJ: Prentice Hall.

Grillo, J. A., Latif, D. A., and Stolte, S. K. (2001), “The Relationship Between Preadmission Indicators and Basic Mathematics Skills at a New School of Pharmacy,” Annals of Pharmacotherapy, 35, 167-172.

Hillmer, S. (1996), “A Problem-Solving Approach to Teaching Business Statistics,” The American Statistician, 50, 249-256.

Johnson, M. and Kuennen, E. (2004), “Delaying Developmental Mathematics: The Characteristics and Costs,” Journal of Developmental Education, 28, 24-30.

Krieg, R. and Uyar, B. (1997), “Correlates of Student Performance in Business and Economics Statistics,” Journal of Economics and Finance, 21, 65-74.

Krieg, R. and Uyar, B. (2001), “Student Performance in Business and Economics Statistics: Does Exam Structure Matter?” Journal of Economics and Finance, 25, 229-241.

Magel, R. (1996), “Increasing Student Performance in Large Introductory Statistics Classes,” The American Statistician, 50, 51-56.

Maxwell, N., and Lopus, J. (1994), “The Lake Wobegon Effect in Student Self-Reported Data,” American Economic Review, 84, 201-205.

Park, K. and Kerr, P. (1990), “Determinants of Academic Performance: A Multinomial Logit Approach,” Journal of Economic Education, 21, 101-111.

Stromberg, A. and Ramanathan, S. (1996), “Easy Implementation of Writing in Introductory Statistics Courses,” The American Statistician, 50, 159-163.

Suits, D. B. (1957), “Use of Dummy Variables in Regression Equations,” Journal of the American Statistical Association, 52, 548-551.

Utts, J., Sommer, B., Acredolo, C., Maher, M., and Matthews, H. (2003), “A Study Comparing Traditional and Hybrid Internet-Based Instruction in Introductory Statistics Classes,” Journal of Statistics Education [Online], 11(3).
jse.amstat.org/v11n3/utts.html

Ward, B. (2004), “The Best of Both Worlds: A Hybrid Statistics Course,” Journal of Statistics Education [Online], 12(3).
jse.amstat.org/v12n3/ward.html

Williams, M. L., Waldauer, C., and Duggal, V. G. (1992), “Gender Differences in Economics Knowledge: An Extension of the Analysis,” Journal of Economic Education, 23, 219-231.

Wooldridge, J. M. (2002), Econometric Analysis of Cross Section and Panel Data, Cambridge, MA: MIT Press.


Marianne Johnson
Department of Economics
University of Wisconsin Oshkosh
Oshkosh, WI 54901
U.S.A.
johnsonm@uwosh.edu

Eric Kuennen
Department of Mathematics
University of Wisconsin Oshkosh
Oshkosh, WI 54901
U.S.A.
kuennene@uwosh.edu

