ERIC Number: EJ773580
Record Type: Journal
Publication Date: 2003
Pages: 57
Abstractor: Author
ISBN: N/A
ISSN: 0735-6331
EISSN: N/A
Evaluation of an Automated Reading Tutor that Listens: Comparison to Human Tutoring and Classroom Instruction
Mostow, Jack; Aist, Greg; Burkhead, Paul; Corbett, Albert; Cuneo, Andrew; Eitelman, Susan; Huang, Cathy; Junker, Brian; Sklar, Mary Beth; Tobin, Brian
Journal of Educational Computing Research, v29 n1 p61-117 2003
A year-long study of 131 second and third graders in 12 classrooms compared three daily 20-minute treatments:

a) Fifty-eight students in six classrooms used the 1999-2000 version of Project LISTEN's Reading Tutor, a computer program that uses automated speech recognition to listen to a child read aloud and gives spoken and graphical assistance. Students took daily turns using one shared Reading Tutor in their classroom while the rest of the class received regular instruction.

b) Thirty-four students in the other six classrooms were pulled out daily for one-on-one tutoring by certified teachers. To control for materials, the human tutors used the same set of stories as the Reading Tutor.

c) Thirty-nine students served as in-classroom controls, receiving regular instruction without tutoring.

We compared students' pre- to post-test gains on the Word Identification, Word Attack, Word Comprehension, and Passage Comprehension subtests of the Woodcock Reading Mastery Test, and in oral reading fluency. Surprisingly, the human-tutored group significantly outgained the Reading Tutor group only in Word Attack (main effects p < 0.02, effect size 0.55). Third graders in both the computer- and human-tutored conditions outgained the control group significantly in Word Comprehension (p < 0.02, respective effect sizes 0.56 and 0.72) and suggestively in Passage Comprehension (p = 0.14, respective effect sizes 0.48 and 0.34). No differences between groups in Word Identification or fluency gains were significant. These results are consistent with an earlier study in which students who used the 1998 version of the Reading Tutor outgained their matched classmates in Passage Comprehension (p = 0.11, effect size 0.60), but not in Word Attack, Word Identification, or fluency.

To shed light on outcome differences between tutoring conditions and between individual human tutors, we compared process variables. Analysis of logs from all 6,080 human and computer tutoring sessions showed that human tutors included less rereading and more frequent writing than the Reading Tutor. Micro-analysis of 40 videotaped sessions showed that students who used the Reading Tutor spent considerable time waiting for it to respond, requested help more frequently, and picked easier stories when it was their turn. Human tutors corrected more errors, focused more on individual letters, and provided assistance more interactively, for example by getting students to sound out words themselves rather than sounding words out for them, as the Reading Tutor did. (Contains 10 tables.)
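Note on the statistics above: the record does not state how the effect sizes were computed. A minimal sketch, assuming they are Cohen's d on pre- to post-test gain scores with a pooled standard deviation (the paper may define them differently, e.g., against the control group's SD alone; the gain arrays below are illustrative placeholders, not data from the study):

    import numpy as np

    def cohens_d(gains_a: np.ndarray, gains_b: np.ndarray) -> float:
        """Standardized mean difference of gain scores between two groups.

        Assumption: Cohen's d with a pooled SD weighted by each group's
        degrees of freedom; this is one common convention, not necessarily
        the one used in the study.
        """
        n_a, n_b = len(gains_a), len(gains_b)
        pooled_sd = np.sqrt(
            ((n_a - 1) * gains_a.var(ddof=1) + (n_b - 1) * gains_b.var(ddof=1))
            / (n_a + n_b - 2)
        )
        return (gains_a.mean() - gains_b.mean()) / pooled_sd

    # Hypothetical Word Comprehension gains (tutored vs. control),
    # purely for illustration of the calculation:
    tutored = np.array([0.9, 1.2, 0.7, 1.1, 0.8])
    control = np.array([0.5, 0.6, 0.4, 0.8, 0.3])
    print(f"effect size: {cohens_d(tutored, control):.2f}")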
Descriptors: Computer Assisted Instruction, Identification, Computer Software, Tutors, Tutoring, Reading Fluency, Mastery Tests, Effect Size, Control Groups, Reading Instruction, Word Recognition, Comparative Analysis, Urban Schools, Outcomes of Education, Instructional Effectiveness, Formative Evaluation
Baywood Publishing Company, Inc. 26 Austin Avenue, P.O. Box 337, Amityville, NY 11701. Tel: 800-638-7819; Tel: 631-691-1270; Fax: 631-691-1770; e-mail: info@baywood.com; Web site: http://baywood.com
Publication Type: Journal Articles; Reports - Evaluative
Education Level: Elementary Education; Grade 2; Grade 3
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Assessments and Surveys: Woodcock Reading Mastery Test
Grant or Contract Numbers: N/A