Journal of Statistics Education Volume 12, Number 1 (2004), ww2.amstat.org/publications/jse/v12n1/jordan.html
Copyright © 2004 by Joy Jordan, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.
Key Words: Assessment; Feedback; Grading; Learning styles; Technology.
While written comments are a popular and potentially effective method of student exam feedback, these comments are often overshadowed by students’ focus on their grades. In this paper I discuss the additional use of orally recorded exam feedback in introductory statistics classes of 40 or fewer students. While grading and writing comments on a student’s exam solution, I create a personalized sound file of detailed oral feedback for each question. The student can then securely access this file. The oral feedback, in combination with written comments, is more understandable and motivating for students, and accommodates a broader range of student learning styles. In support of this new feedback method, I provide and discuss classroom data collected from my students. Furthermore, I make suggestions for using orally recorded feedback when time and resources are scarce.
Assessment in statistics education is a central issue. The definition of assessment has become more than simply assigning points to student answers (Garfield 1994). It is now important for statistics instructors to view assessment more broadly, and to find creative and effective ways to communicate with students. That said, traditional in-class exams are still a popular method of assessing student knowledge. If exams continue to be used, then it is important to create new techniques for providing exam feedback. The feedback needs to be “timely, constructive, and regenerative” (Chance 1997, p. 6). Written comments are typically intended to communicate sources of student error and provide suggestions for correcting mistakes. However, these comments are often overshadowed by students’ focus on their grades. Orally recorded exam feedback can provide personalized support and more thorough explanations, thereby engaging students in the learning process.
Another significant issue in higher education lies in understanding and accommodating student learning styles (see, for example, Grasha 1996; Sarasin 1999). Sarasin (1999) classifies students as auditory, visual, or tactile learners. Furthermore, “... to successfully address the learning needs of their auditory learners, instructors should emphasize the oral mode as much as possible” (Sarasin 1999, p. 49). Orally recorded exam feedback is ideal for auditory learners, while the accompanying written feedback appeals to visual learners. Therefore, providing students with both written and oral feedback supports a broader range of student learning styles.
Each term I teach an Elementary Statistics course to at least 35 students. Among other class requirements, the students have two in-class exams and a final exam. My tests consist of open-ended questions, and I always strive to carefully analyze student answers, give partial credit, and provide detailed written feedback. I am frequently frustrated, though, by the limitations of written comments. I never feel that I write as much detail as I know would be helpful, and most of my students only get corrections and no real positive feedback. Furthermore, I never know if students actually closely read and understand my comments, or if they simply look at their grades. The use of supplemental oral feedback alleviates many of these concerns.
Oral comments can be clearer, more detailed, and thus more understandable to students than written comments. Written comments are often cryptic and, depending on the handwriting, difficult to read. Key points can be clearly explained and elaborated on more quickly via speaking than writing. Because the students get more detail, they are more likely to ask further questions, rather than simply forgetting about the exam. The oral exam comments are saved on the college’s network, thus the students can access them easily and listen to them repeatedly, both immediately after the exam and throughout the rest of the course.
This supplemental method of communicating with the students motivates them to learn from their exam performance. The motivation stems from the personal nature of the voice comments and the novelty of the method, which encourages them to give serious consideration to the feedback.
Recording voice comments is more personal for the students, which is especially helpful in the introductory statistics classroom, where the subject intimidates many students. The tone of my voice combined with the content of my comments conveys much more than written comments alone. With written feedback I often only correct or penalize student answers, and I rarely have time to write positive comments. The voice recording allows me to give positive feedback on a correct answer, even if it is the only question a student gets right. This method also permits me to summarize a student’s performance and to suggest a few manageable remediation goals to the student.
Because the oral feedback is individualized and fun for the students, they are more likely to listen to my comments. In the past I have always wondered if my students actually read the comments that I wrote, or if they simply looked at their grades. With the voice recording, I can know whether students are listening to my feedback. During one term of using voice comments, in each of the sound files I gave an individual password that was the answer to a future quiz question. This small step ensured that the students actually listened to the file. While I cannot then conclude that each student thoroughly processed my feedback, at least I know that they took a step toward understanding.
My implementation of this new feedback method requires shared network space, voice recording software, and a computer lab with headphones.
At Lawrence University the computer services office provides a shared network drive where instructors can create class folders. Computer services and the individual instructor have permission to manipulate the class folder (e.g., save files, change files, etc.), but all other users only have the ability to view the information in the folder. After discussing my project with a computer services administrator, I was shown how to create individual student folders within my class folder and how to set the permissions for these folders. Thus my students can securely access their exam sound files (i.e., each folder can be accessed only by the individual student and no one else).
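The folder setup described above can be sketched in a few lines of code. The sketch below is a minimal illustration only: it assumes a Unix-style shared drive where owner-only file permissions stand in for the per-user access control that a campus computer services office would actually configure, and the class folder name and student usernames are hypothetical.

```python
import os
import stat

def create_student_folders(class_folder, students):
    """Create one private subfolder per student inside the class folder.

    On a real shared network drive, per-user access control would restrict
    each folder to its owner; here, owner-only permissions (0o700) serve
    as a simple stand-in so classmates cannot list or read the folder.
    """
    created = []
    for student in students:
        folder = os.path.join(class_folder, student)
        os.makedirs(folder, exist_ok=True)
        # Owner-only: read, write, and list for the owner; nothing for others.
        os.chmod(folder, stat.S_IRWXU)
        created.append(folder)
    return created

# Hypothetical roster for an elementary statistics class.
for path in create_student_folders("elementary_stats", ["student_a", "student_b"]):
    print(path)
```

In practice the important step is the permission setting, which is what keeps one student from browsing into a classmate’s feedback.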
When grading and commenting on student exam answers, I work question by question to ensure consistency. While assessing each student answer, I write a few short comments or draw a picture to which I refer in my voice recording. For example, when discussing a normal distribution problem it is much easier to refer to a picture of the normal curve than it is to orally describe the curve. Then I create a sound file for that student answer (i.e., a student receives a separate sound file for each question).
Simple sound recording software is now standard on personal computers, and more complex recording software can be downloaded from the Internet. When recording my comments, I use a set of headphones that has an attached microphone. Although I create the sound files at my computer, it is also possible to use a portable voice recorder, which allows for the flexibility of grading in this manner even when a computer is not available.
After producing the sound file, I save it to the respective student folder on the shared network drive. I am very careful when saving the file, as a slight oversight may lead to the violation of a student’s privacy (i.e., copying the file to the wrong student’s folder will allow that student to hear a classmate’s personalized comments).
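One way to guard against this privacy mistake is to derive the destination folder from the sound file’s name rather than choosing it by hand, so a mismatch fails loudly instead of silently exposing a classmate’s comments. The sketch below assumes a hypothetical naming convention (`<username>_q<question>.wav`) and a hypothetical class folder; neither is the actual setup from the article.

```python
import os
import shutil

def save_sound_file(sound_file, class_folder):
    """Copy a graded-answer sound file into its owner's private folder.

    Assumes files are named '<username>_q<question>.wav' (a hypothetical
    convention), so the destination folder is derived from the filename
    instead of being picked manually. If no matching student folder
    exists, the copy is refused rather than risking a wrong placement.
    """
    filename = os.path.basename(sound_file)
    username = filename.split("_")[0]
    dest_folder = os.path.join(class_folder, username)
    if not os.path.isdir(dest_folder):
        raise ValueError(f"No folder for student {username!r}; refusing to copy.")
    destination = os.path.join(dest_folder, filename)
    shutil.copy(sound_file, destination)
    return destination

# Demo with a hypothetical roster: one student folder and one stand-in
# recording file.
os.makedirs(os.path.join("stats101", "alice"), exist_ok=True)
open("alice_q1.wav", "wb").close()  # stand-in for a real sound file
print(save_sound_file("alice_q1.wav", "stats101"))
```

The check is deliberately strict: it is better for a save to fail and be redone than for one student to hear another’s personalized feedback.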
As most instructors know, student exam answers sometimes contain common mistakes. When this occurs in my class, I create voice templates for a few questions that were frequently missed by students (a quick glance through the exams can typically illuminate frequently missed questions). Thus, for most of the questions I use personalized voice comments, but if a student makes a common mistake, then I simply direct the student to the voice template (which I copy to the student folder). This allows me to keep the process personalized, yet also saves me time.
Once the sound files are created, I construct a handout that describes the steps in accessing the sound files, and I distribute it to the students along with their graded exams. Making the listening process as easy as possible, including providing headphones, encourages students to listen to the voice comments. As previously mentioned, I sometimes ensure that students listen to the feedback by including a password in each sound file and then asking for that password on a future quiz (specifically, I assign each student a unique color and then, as a quiz question, I ask for that color).
To assess the helpfulness and effectiveness of orally recorded feedback, I conducted classroom research in three of my classes during two different terms.
In the fall term of 2001, I used the oral feedback in both my classes (Elementary Statistics and Introduction to Probability and Statistics - a total of 71 students). Sixty-eight of the 71 students correctly provided their password (i.e., color) on the quiz, indicating that they had listened to their sound files.
An anonymous questionnaire was distributed near the end of the course. Of the 66 students who filled out the questionnaire, 55 students said they liked the oral feedback better than the written feedback, 7 students said they liked the two methods the same, and only 4 students said they liked the oral feedback less than the written feedback.
According to the students who favored the oral feedback, the voice comments were more personal, more understandable, and more helpful than the written comments. A sample of specific student comments is included below.
“I liked it [the oral feedback]. It felt more personal and it seemed like you understood my way of thinking.”
“I liked the oral comments better, since it just had a more personal feeling to it. And you could go into more depth about mistakes and everything.”
“I found it [the oral feedback] much more helpful. You can usually say more than you can write, plus there’s no issue with legibility.”
“I thought the oral comments were fantastic - it was like having a personal session, everything I did wrong was very clear and helpful. The file was easy to access - it was personable - a great idea - keep doing it!”
Of the 4 students who preferred written comments, 1 student thought the voice comments went too fast and the other 3 students wanted a written record of the feedback. I acknowledge the feelings of these students, and I think these issues can be addressed. Even if the comments do move quickly, a student has the option of listening repeatedly. Furthermore, a future improvement to the process would be for my oral comments to be transcribed. (Although it may be a more useful exercise for the students to take notes on my comments and then discuss the feedback with me.)
While the positive student reaction from the 2001 fall term supports the use of orally recorded feedback, investigating actual student improvement is also essential. In the winter term of 2003, I randomly divided my 36 Elementary Statistics students into two groups. The treatment group (n = 18) received both written and oral feedback on the two exams, while the control group (n = 18) received only written feedback. Both groups received a written answer key to each exam. The students were fully aware of my classroom research and they all signed letters of consent.
The week following each exam, the students were given a quiz that retested the exam material (either verbatim questions or slight changes to questions). I hoped this second assessment would encourage the students to learn from their exams and the feedback I gave them. The group that received sound files was encouraged, but not required, to listen to my comments, since I could not make the same requirement of the group receiving only written feedback. I compared improvement in performance from the exam to the retest between the two groups (improvement = retest% - exam%). For the first exam, the average improvement of the treatment group (3.9%) was higher than that of the control group (1.1%); likewise, for the second exam, the average improvement of the treatment group (10.6%) was higher than that of the control group (7.5%). Neither difference was statistically significant, though, as there was high variability in the data. That said, it is arguable whether focus should be placed on the statistical significance of the additional orally recorded feedback. As mentioned by a reviewer, it is the practical significance of oral feedback that should be considered and assessed. I found the student improvements practically significant, even in light of the extra time I spent on the feedback process, but it is also important to assess whether the students found the orally recorded feedback practically significant.
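The improvement measure used in this comparison is simple to compute per student and then average within each group. The sketch below uses small, hypothetical (exam%, retest%) pairs, not the actual class data, purely to make the calculation concrete.

```python
# Hypothetical (exam%, retest%) pairs for two small groups; the actual
# class data are not reproduced here.
treatment = [(78, 84), (65, 66), (90, 95), (55, 62)]
control   = [(80, 80), (70, 73), (60, 61), (85, 86)]

def mean_improvement(scores):
    """Average improvement over a group, where improvement = retest% - exam%."""
    return sum(retest - exam for exam, retest in scores) / len(scores)

print(mean_improvement(treatment))  # → 4.75
print(mean_improvement(control))    # → 1.25
```

With real data, the two group means would then be compared with a two-sample procedure, keeping in mind that small groups and high variability limit the power of any formal test.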
In the last week of class, I collected information from the students via an anonymous questionnaire. Of the 18 students who received sound files, 7 students said they liked the oral feedback better than the written feedback, 5 students said they liked the two methods the same, and 5 students said they liked the oral feedback less than the written feedback (1 student never listened to the sound files and therefore could not compare the two methods). Furthermore, 12 students listened to both sound files, 5 listened to only one, and 1 listened to neither. There was an interesting relationship between the number of sound files a student listened to and the feedback method a student preferred. The students who listened to both voice recordings preferred the oral feedback more strongly (58% preferred oral while 17% preferred written) than those who listened to only one recording (0% preferred oral while 60% preferred written). This result may be because the students who initially (and finally) favored written comments were simply not interested in listening to the sound files, or because the more the students listened (i.e., listening to 2, rather than just 1 file), the more they liked the oral comments.
A student who preferred the oral comments wrote, “Oral was more helpful as it allowed more information to be given than what could be written - just like one-on-one, so a better explanation.” On the other hand, a student who liked the written comments better wrote, “I am a visual learner - so it helps to be able to see what I did wrong.” These opposing views illustrate a difference in student learning styles and indicate that exam feedback may be best when students have options on how they get the feedback.
According to Robert Sternberg, “...teachers must accommodate an array of thinking and learning styles, systematically varying teaching and assessment methods to reach every student” (Sternberg 1994, p. 38). Giving written and oral exam feedback to students supports both auditory and visual learners. Furthermore, the communication with students is more individual. Such contact between teacher and student is sometimes missing in the introductory statistics classroom.
Obviously, creating sound files for each student can be time consuming, especially if the class enrollment is large. Furthermore, not all colleges have audio-equipped computer lab facilities (although most campuses have at least a few computers with sound cards). In situations when time or computer lab space is an issue, the oral feedback can simply be an option provided to students. Those students who are aware they are auditory learners can then request a feedback method in line with their learning style, and students who are visual learners can elect to have only written comments (if too many students ask for oral feedback, then perhaps a thorough conversation about learning styles is in order).
As one of my students wrote, “The more senses that I use, the better my understanding.” Accommodating different learning styles in classroom assessment allows for more student involvement and understanding. Supplementing written comments with orally recorded exam feedback is an interesting way to reach more students.
Chance, B. (1997), “Experiences with Authentic Assessment Techniques in an Introductory Statistics Course,” Journal of Statistics Education [Online], 5(3). ( ww2.amstat.org/publications/jse/v5n3/chance.html )
Garfield, J. (1994), “Beyond Testing and Grading: Using Assessment to Improve Student Learning,” Journal of Statistics Education [Online], 2(1). ( ww2.amstat.org/publications/jse/v2n1/garfield.html )
Grasha, A. F., and Richlin, L. (1996), Teaching with Style: A Practical Guide to Enhancing Learning by Understanding Teaching and Learning Styles, Pittsburgh, PA: Alliance Publishers.
Sarasin, C. S. (1999), Learning Style Perspectives: Impact in the Classroom, Madison, WI: Atwood Publishing.
Sternberg, R.J. (1994), “Allowing for Thinking Styles,” Educational Leadership, 52(3), 36-40.
Joy Jordan
Department of Mathematics
Lawrence University
P. O. Box 599
Appleton, WI 54912-0599