Showing all 8 results
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
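The position effect Albano describes can be made concrete with a small simulation. The following is a minimal sketch, not drawn from the article: a Rasch-type model with an assumed linear fatigue term gamma shifts response probabilities when the same item appears later in a form, so a difficulty estimate that ignores position drifts upward. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examinees = 5000
theta = rng.normal(0, 1, n_examinees)  # person abilities
b = 0.0                                # true item difficulty
gamma = 0.03                           # assumed per-position drop in log-odds (fatigue)

def p_correct(theta, b, position):
    """Rasch probability with a linear item-position effect."""
    return 1 / (1 + np.exp(-(theta - b - gamma * position)))

for position in (0, 40):  # same item administered 1st vs. 41st
    responses = rng.random(n_examinees) < p_correct(theta, b, position)
    p_obs = responses.mean()
    # Crude moment-style difficulty estimate that ignores position entirely.
    b_hat = -np.log(p_obs / (1 - p_obs))
    print(f"position {position:2d}: observed p = {p_obs:.3f}, naive b_hat = {b_hat:+.3f}")
```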
Emmerich, Walter; And Others – 1991
The aim of this research was to identify, develop, and evaluate empirically new reasoning item types that might be used to broaden the analytical measure of the Graduate Record Examinations (GRE) General Test and to strengthen its construct validity. Six item types were selected for empirical evaluation, including the two currently used in the GRE…
Descriptors: Construct Validity, Correlation, Evaluation Methods, Sex Differences
Enright, Mary K.; And Others – 1995
A previous study of new item types for the analytical measure of the Graduate Record Examinations (GRE) General Test found that the new item types loaded on factors labeled verbal reasoning, informal reasoning, formal-deductive reasoning, and quantitative reasoning. The present study examined how processing differed for these item types in the context…
Descriptors: College Entrance Examinations, College Students, Deduction, Evaluation Methods
Peer reviewed
Stricker, Lawrence J. – Educational and Psychological Measurement, 1984
The stability of three indexes was evaluated for assessing race and sex differences in performance on verbal items from the Graduate Record Examination Aptitude Test: a partial correlation index, comparisons of item characteristic curves, and comparisons of item difficulties. All three indexes exhibited consistency in identifying the same items in different…
Descriptors: College Entrance Examinations, Comparative Analysis, Correlation, Difficulty Level
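A partial-correlation DIF index of the kind Stricker evaluates can be sketched as follows: correlate the item score with group membership after partialling the total test score out of both. This is a minimal illustration on simulated data, not the article's analysis; the injected DIF effect on item 0 and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 2000, 30
theta = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)                 # 0/1 group membership
difficulties = rng.normal(0, 1, k)
p = 1 / (1 + np.exp(-(theta[:, None] - difficulties[None, :])))
p[:, 0] += 0.10 * group                       # inject DIF into item 0 (assumption)
x = (rng.random((n, k)) < np.clip(p, 0, 1)).astype(float)
total = x.sum(axis=1)

def partial_corr(a, b, c):
    """Correlation of a and b after partialling c out of both."""
    res_a = a - np.polyval(np.polyfit(c, a, 1), c)
    res_b = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(res_a, res_b)[0, 1]

for i in (0, 1):  # item 0 has injected DIF; item 1 does not
    r = partial_corr(x[:, i], group.astype(float), total)
    print(f"item {i}: partial r(item, group | total) = {r:+.3f}")
```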
Schnipke, Deborah L. – 1995
Time limits on tests often prevent some examinees from finishing all of the items on the test; the extent of this effect has been called the "speededness" of the test. Traditional speededness indices focus on the number of unreached items, implicitly assuming that examinees who run out of time leave the remaining items blank. Other examinees in the same situation rapidly fill in answers in the hope of getting some of the…
Descriptors: Computer Assisted Testing, Educational Assessment, Evaluation Methods, Guessing (Tests)
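A traditional unreached-items speededness check of the kind Schnipke critiques might look like the sketch below. It applies a commonly cited rule of thumb that a test is essentially unspeeded when virtually all examinees reach 75% of the items and at least 80% reach the last item; the response data are simulated and the thresholds are conventions, not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1000, 60
# Number of items each examinee reaches before time expires (illustrative only).
reached = np.minimum(k, rng.poisson(57, n))

pct_reaching_75 = np.mean(reached >= 0.75 * k)
pct_reaching_last = np.mean(reached >= k)
print(f"reached 75% of items: {pct_reaching_75:.1%}")
print(f"reached the last item: {pct_reaching_last:.1%}")
```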
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE General Test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
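A standard way to quantify medium-related DIF of the kind Gu, Drake, and Wolfe study is the Mantel-Haenszel procedure, stratifying on total score. The sketch below is illustrative only: the data are simulated, the injected medium effect is an assumption, and the article itself may use different methods.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
medium = rng.integers(0, 2, n)        # 0 = paper (reference), 1 = computer (focal)
theta = rng.normal(0, 1, n)
total = np.clip(np.round(30 + 8 * theta + rng.normal(0, 2, n)), 0, 60).astype(int)
# Item response with an assumed medium effect: computer delivery lowers
# the log-odds of success on this item by 0.4.
p = 1 / (1 + np.exp(-(theta - 0.4 * medium)))
x = (rng.random(n) < p).astype(int)

# Mantel-Haenszel common odds ratio accumulated across total-score strata.
num = den = 0.0
for s in np.unique(total):
    m = total == s
    a = np.sum(x[m] * (medium[m] == 0))        # paper, correct
    b = np.sum((1 - x[m]) * (medium[m] == 0))  # paper, incorrect
    c = np.sum(x[m] * (medium[m] == 1))        # computer, correct
    d = np.sum((1 - x[m]) * (medium[m] == 1))  # computer, incorrect
    t = a + b + c + d
    num += a * d / t
    den += b * c / t
alpha_mh = num / den
print(f"MH odds ratio = {alpha_mh:.2f} (ETS delta = {-2.35 * np.log(alpha_mh):+.2f})")
```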
Terwilliger, James S.; And Others – 1989
The Graduate Record Examination (GRE) in Physics was analyzed as a specific indicator of learning outcomes in that undergraduate major. Textbooks were selected as a measure of curriculum content. A representative sample of colleges and universities (n=59) responded to questions about physics courses and textbooks and the existence of a graduate…
Descriptors: College Entrance Examinations, College Students, Content Validity, Course Content
Peer reviewed
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
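Algorithmic item generation as Gorin and Embretson describe it instantiates items from a template by crossing levels of structural features hypothesized to drive processing difficulty. The sketch below is a toy illustration with hypothetical arithmetic features, not the authors' generator.

```python
import itertools
import random

random.seed(4)

# Hypothetical "radical" (structural) features assumed to drive difficulty.
operations = ["+", "*"]                  # arithmetic operation
term_counts = [2, 3]                     # number of operands
contexts = ["abstract", "word problem"]  # presentation context

def generate_item(op, n_terms, context):
    nums = [random.randint(2, 9) for _ in range(n_terms)]
    expr = f" {op} ".join(map(str, nums))
    answer = eval(expr)  # safe here: expr contains only digits and +/* built above
    if context == "word problem":
        stem = f"A student computes {expr}. What is the result?"
    else:
        stem = f"{expr} = ?"
    return {"stem": stem, "answer": answer, "features": (op, n_terms, context)}

# Cross all feature levels to instantiate a structured item family.
for op, n_terms, ctx in itertools.product(operations, term_counts, contexts):
    item = generate_item(op, n_terms, ctx)
    print(item["features"], "->", item["stem"], "answer:", item["answer"])
```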