Showing all 11 results
Peer reviewed
Chen, Chia-Wen; Andersson, Björn; Zhu, Jinxin – Journal of Educational Measurement, 2023
The certainty of response index (CRI) measures respondents' confidence level when answering an item. In conjunction with the answers to the items, previous studies have used descriptive statistics and arbitrary thresholds to identify student knowledge profiles with the CRIs. However, this approach overlooked the measurement error of the observed…
Descriptors: Item Response Theory, Factor Analysis, Psychometrics, Test Items
Peer reviewed
Moss, Pamela A. – Journal of Educational Measurement, 2013
Studies of data use illuminate ways in which education professionals have used test scores and other evidence relevant to students' learning--in action in their own contexts of work--to make decisions about their practice. These studies raise instructive challenges for a validity theory that focuses on intended interpretations and uses of test…
Descriptors: Validity, Test Use, Test Interpretation, Scores
Peer reviewed
Randall, Jennifer; Engelhard, George, Jr. – Journal of Educational Measurement, 2009
In this study, we present an approach to questionnaire design within educational research based on Guttman's mapping sentences and Many-Facet Rasch Measurement Theory. We designed a 54-item questionnaire using Guttman's mapping sentences to examine the grading practices of teachers. Each item in the questionnaire represented a unique student…
Descriptors: Student Evaluation, Educational Research, Grades (Scholastic), Public School Teachers
Peer reviewed
Sheehan, Kathleen M. – Journal of Educational Measurement, 1997
A new procedure is proposed for generating instructionally relevant diagnostic feedback. The approach involves constructing a strong model of student proficiency and then testing whether individual students' observed item-response vectors are consistent with that model. The approach is applied to the Scholastic Assessment Test's verbal reasoning…
Descriptors: Academic Achievement, Educational Assessment, Educational Diagnosis, Feedback
Peer reviewed
Hamilton, Lawrence C. – Journal of Educational Measurement, 1981
Errors in self-reports of three academic performance measures are analyzed. Empirical errors are shown to depart radically from both no-error and random-error assumptions. Self-reports by females depart farther from the no-error and random-error models for all three performance measures. (Author/BW)
Descriptors: Academic Achievement, Error Patterns, Grade Point Average, Models
Peer reviewed
Lunz, Mary E.; Bergstrom, Betty A. – Journal of Educational Measurement, 1994
The impact of computerized adaptive test (CAT) administration formats on student performance was studied with 645 medical technology students who also took a paper-and-pencil test. Analysis of covariance indicates no significant interactions among test administration formats and provides evidence for adjusting the CAT to more familiar modalities.…
Descriptors: Academic Achievement, Adaptive Testing, Analysis of Covariance, Computer Assisted Testing
Peer reviewed
Tan, E. S.; And Others – Journal of Educational Measurement, 1994
The relationship between first-year results and subsequent achievement during medical school was studied for 115 Dutch medical students using an item response theory model for the longitudinal measurement of change with stochastic parameters (developed by Albers et al., 1989). Results indicate that a low rate of growth in the first year persists. (SLD)
Descriptors: Academic Achievement, Change, Comparative Analysis, Foreign Countries
Peer reviewed
Young, John W. – Journal of Educational Measurement, 1990
A new measure of academic performance was developed through a new application of item response theory (IRT). This new criterion, an IRT-based grade point average (GPA), was used to determine the predictive validity of certain preadmissions measures for 1,564 students admitted to Stanford University in 1982. (SLD)
Descriptors: Academic Achievement, Admission Criteria, College Entrance Examinations, College Students
Peer reviewed
Shavelson, Richard J.; And Others – Journal of Educational Measurement, 1993
Evidence is presented on the generalizability and convergent validity of performance assessments using data from six studies of student achievement that sampled a wide range of measurement facets and methods. Results at individual and school levels indicate that task-sampling variability is the major source of measurement error. (SLD)
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Generalizability Theory
Peer reviewed
Young, John W. – Journal of Educational Measurement, 1991
Item response theory (IRT) is used to develop a form of adjusted cumulative grade point average (GPA) for use in predicting college academic performance appropriately for males and females. For 1,564 students at Stanford University (California), the IRT-based GPA was more predictable from preadmission measures than the cumulative GPA. (SLD)
Descriptors: Academic Achievement, College Students, Grade Point Average, Higher Education
Peer reviewed
Johnson, Sandra; Bell, John F. – Journal of Educational Measurement, 1985
The assessment framework underlying a science performance monitoring program is process-oriented and appeals to generalizability theory for a suitable estimation paradigm. Preliminary applications are described. Results suggest that computerized question-banking, domain-sampling of questions, and generalizability theory together provide…
Descriptors: Academic Achievement, Computer Assisted Testing, Educational Assessment, Foreign Countries