Showing all 13 results
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Peer reviewed
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
Li, Haiying; Cai, Zhiqiang; Graesser, Arthur – Grantee Submission, 2018
In this study we developed and evaluated a crowdsourcing-based latent semantic analysis (LSA) approach to computerized summary scoring (CSS). LSA is a frequently used mathematical component in CSS, where LSA similarity represents the extent to which the to-be-graded target summary is similar to a model summary or a set of exemplar summaries.…
Descriptors: Computer Assisted Testing, Scoring, Semantics, Evaluation Methods
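As a toy illustration of the LSA-similarity idea this abstract describes (not the authors' crowdsourcing pipeline), the sketch below builds a latent semantic space from a small term-document matrix via truncated SVD and scores summaries by cosine similarity to a model summary. The word counts and the choice of k = 2 dimensions are invented for illustration; real LSA spaces are trained on large corpora.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# Column 0 is the "model summary"; columns 1 and 2 are student summaries.
X = np.array([
    [2.0, 2.0, 0.0],
    [1.0, 0.0, 3.0],
    [0.0, 1.0, 1.0],
    [3.0, 2.0, 0.0],
])

# Truncated SVD yields the latent semantic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                     # latent dimensions kept (illustrative)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # each row: one document in LSA space

def cosine(a, b):
    """Cosine similarity between two LSA document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# LSA similarity of each student summary to the model summary.
for i in (1, 2):
    print(f"summary {i} vs model: {cosine(doc_vecs[i], doc_vecs[0]):.3f}")
```

In a CSS setting, these similarity values would then be mapped onto a score scale, e.g. against a set of exemplar summaries rather than a single model summary.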
Peer reviewed
Zhongdi Wu; Eric Larson; Makoto Sano; Doris Baker; Nathan Gage; Akihito Kamata – Grantee Submission, 2023
In this investigation we propose new machine learning methods for automated scoring models that predict the vocabulary acquisition in science and social studies of second grade English language learners, based upon free-form spoken responses. We evaluate performance on an existing dataset and use transfer learning from a large pre-trained language…
Descriptors: Prediction, Vocabulary Development, English (Second Language), Second Language Learning
McLaughlin, Tara W.; Snyder, Patricia A.; Algina, James – Grantee Submission, 2017
The Learning Target Rating Scale (LTRS) is a measure designed to evaluate the quality of teacher-developed learning targets for embedded instruction for early learning. In the present study, we examined the measurement dependability of LTRS scores by conducting a generalizability study (G-study). We used a partially nested, three-facet model to…
Descriptors: Generalizability Theory, Scores, Rating Scales, Evaluation Methods
Michelle M. Neumann; Jason L. Anthony; Noé A. Erazo; David L. Neumann – Grantee Submission, 2019
The framework and tools used for classroom assessment can have significant impacts on teacher practices and student achievement. Getting assessment right is an important component in creating positive learning experiences and academic success. Recent government reports (e.g., United States, Australia) call for the development of systems that use…
Descriptors: Early Childhood Education, Futures (of Society), Educational Assessment, Evaluation Methods
Peer reviewed
Falk, Carl F.; Cai, Li – Grantee Submission, 2015
In this paper, we present a flexible full-information approach to modeling multiple user-defined response styles across multiple constructs of interest. The model is based on a novel parameterization of the multidimensional nominal response model that separates estimation of overall item slopes from the scoring functions (indicating the order of…
Descriptors: Response Style (Tests), Item Response Theory, Outcome Measures, Models
Peer reviewed
Nese, Joseph F. T.; Kamata, Akihito; Alonzo, Julie – Grantee Submission, 2015
Assessing oral reading fluency (ORF) is critical because it functions as an indicator of comprehension and overall reading achievement. Although theory and research demonstrate the importance of ORF proficiency, traditional ORF assessment practices are lacking as sensitive measures of progress for educators to make instructional decisions. The purpose of…
Descriptors: Oral Reading, Reading Fluency, Accuracy, Reading Rate
Gordon, Rachel A.; Peng, Fang – Grantee Submission, 2020
The standard scoring of the CLASS PreK produces three domain scores that are widely used in research, practice and policy. Despite these domains being based on developmental theory and research, limited empirical evidence exists for the three-domain structure as operationalized in the CLASS PreK. Using the 2009 and 2014 Head Start Family and Child…
Descriptors: Preschool Education, Low Income Groups, Federal Programs, Factor Structure
Peer reviewed
Allen, Laura K.; Jacovina, Matthew E.; McNamara, Danielle S. – Grantee Submission, 2016
The development of strong writing skills is a critical (and somewhat obvious) goal within the classroom. Individuals across the world are now expected to reach a high level of writing proficiency to achieve success in both academic settings and the workplace (Geiser & Studley, 2001; Powell, 2009; Sharp, 2007). Unfortunately, strong writing…
Descriptors: Writing Skills, Writing Instruction, Writing Strategies, Teaching Methods
Peer reviewed
Gorin, Joanna S.; O'Reilly, Tenaha; Sabatini, John; Song, Yi; Deane, Paul – Grantee Submission, 2014
Recent advances in cognitive science and psychometrics have expanded the possibilities for the next generation of literacy assessment as an integrated domain (Bennett, 2011a; Deane, Sabatini, & O'Reilly, 2011; Leighton & Gierl, 2011; Sabatini, Albro, & O'Reilly, 2012). In this paper, we discuss four key areas supporting innovations in…
Descriptors: Literacy Education, Evaluation Methods, Measurement Techniques, Student Evaluation
Peer reviewed
Xiong, Wenting; Litman, Diane – Grantee Submission, 2014
We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no…
Descriptors: User Satisfaction (Information), Electronic Publishing, Documentation, Metadata
Peer reviewed
Crossley, Scott; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates a new approach to automatically assessing essay quality that combines traditional approaches based on assessing textual features with new approaches that measure student attributes such as demographic information, standardized test scores, and survey results. The results demonstrate that combining both text features and…
Descriptors: Automation, Scoring, Essays, Evaluation Methods
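A minimal sketch of the combined-feature idea this last abstract describes: fit one scoring model on text features alone and one on text features plus student attributes, and compare fit. All data, feature names, and coefficients below are synthetic illustrations, not the paper's actual features or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: two text-derived measures (e.g. length, cohesion)
# and two student attributes (e.g. a test score, a survey response).
text_feats = rng.normal(size=(n, 2))
student_feats = rng.normal(size=(n, 2))

# Synthetic essay scores that depend on one feature of each kind, plus noise.
essay_score = (1.5 * text_feats[:, 0] + 0.8 * student_feats[:, 0]
               + rng.normal(scale=0.5, size=n))

def fit_r2(X, y):
    """Least-squares fit with intercept; return R^2 on the training data."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_text = fit_r2(text_feats, essay_score)
r2_comb = fit_r2(np.column_stack([text_feats, student_feats]), essay_score)
print("text features only:", round(r2_text, 3))
print("text + student features:", round(r2_comb, 3))
```

Because the combined model nests the text-only model, its training R² cannot be lower; the interesting empirical question the study addresses is whether the gain holds up on held-out essays.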