Showing 1 to 15 of 29 results
Peer reviewed
Direct link
Anna Keune – International Journal of Computer-Supported Collaborative Learning, 2024
A key commitment of computer-supported collaborative learning research is to study how people learn in collaborative settings to guide development of methods for capture and design for learning. Computer-supported collaborative learning research has a tradition of studying how the physical world plays a part in collaborative learning. Within the…
Descriptors: Design Crafts, Visual Arts, Algorithms, Cooperation
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
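The linear-logit structure described in the two EIRM entries above can be sketched as follows. This is a minimal LLTM-style illustration under assumed names (`eirm_logit_prob`, `betas`), not the authors' implementation, which also covers person covariates and interactions:

```python
import math

def eirm_logit_prob(theta, item_covariates, betas):
    """Probability of a correct response under a linear explanatory
    item response model (LLTM-style): item difficulty is decomposed
    into item covariates, and the logit is linear in both the person
    ability theta and the covariate effects.

    logit P(correct) = theta - sum_k betas[k] * item_covariates[k]
    """
    difficulty = sum(b * x for b, x in zip(betas, item_covariates))
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))
```

With theta equal to the composite difficulty the logit is zero and the probability is 0.5; the abstracts' point is that this assumed linearity of the covariate-logit relationship is what the papers interrogate.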
Peer reviewed
PDF on ERIC
Stella Eteng-Uket – Numeracy, 2023
This paper describes a study that focused on developing, validating and standardizing a dyscalculia test, henceforth called the Dyscalculia Test. Out of the 4,758,800 students in Nigeria's upper primary and junior secondary schools, I randomly drew a sample of 2340 students, using a multistage sampling procedure that applied various sampling…
Descriptors: Test Construction, Learning Disabilities, Elementary School Students, Junior High School Students
Peer reviewed
PDF on ERIC
Ilhan, Mustafa – International Journal of Assessment Tools in Education, 2019
This study investigated the effectiveness of statistical adjustments applied to rater bias in many-facet Rasch analysis. Changes were first made to a dataset that did not include "rater × examinee" bias in order to introduce such bias. Bias adjustment was then applied to the rater bias included in the data file,…
Descriptors: Statistical Analysis, Item Response Theory, Evaluators, Bias
Madeline Tate Hinckle – ProQuest LLC, 2023
As science becomes increasingly computationally intensive, the need for computational thinking (CT) and computer science (CS) practices in K-12 science education is becoming paramount. Incorporation of CT/CS practices in K-12 education can be seen in national standards and a variety of allied initiatives. One way to build capacity around an…
Descriptors: Middle School Students, Science Instruction, Computation, Thinking Skills
Peer reviewed
PDF on ERIC
Boulden, Danielle Cadieux; Wiebe, Eric; Akram, Bita; Aksit, Osman; Buffum, Philip Sheridan; Mott, Bradford; Boyer, Kristy Elizabeth; Lester, James – Middle Grades Review, 2018
This paper reports findings from the efforts of a university-based research team as they worked with middle school educators within formal school structures to infuse computer science principles and computational thinking practices. Despite the need to integrate these skills within regular classroom practices to allow all students the opportunity…
Descriptors: Computation, Thinking Skills, Middle School Students, Science Instruction
Peer reviewed
Direct link
Murphy, Daniel L.; Beretvas, S. Natasha – Applied Measurement in Education, 2015
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using CTT versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Descriptors: Teacher Effectiveness, Comparative Analysis, Hierarchical Linear Modeling, Test Theory
Peer reviewed
Direct link
Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2014
Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
Descriptors: Item Response Theory, Models, Educational Assessment, Computation
Peer reviewed
PDF on ERIC
Lockwood, J. R.; Castellano, Katherine E. – Grantee Submission, 2015
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
Descriptors: Statistical Analysis, Achievement Gains, Academic Achievement, Computation
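The "percentile ranks of current test scores conditional on past test scores" idea in the entry above can be illustrated with a toy empirical version. The paper proposes modeling conditional cumulative distribution functions; this naive matched-peers sketch is only an assumption-laden illustration, with all names hypothetical:

```python
def conditional_percentile(current, prior, peers):
    """Empirical percentile rank of `current` among students whose
    prior score exactly matches `prior`. `peers` is a list of
    (prior_score, current_score) pairs; ties count half (midrank)."""
    matched = [c for p, c in peers if p == prior]
    if not matched:
        raise ValueError("no peers with a matching prior score")
    below = sum(1 for c in matched if c < current)
    ties = sum(1 for c in matched if c == current)
    return 100.0 * (below + 0.5 * ties) / len(matched)
```

For example, a current score of 60 among matched peers scoring 50, 60, and 70 sits at the 50th percentile. Real SGP estimation smooths over prior scores (e.g., via quantile regression or conditional-CDF models) rather than requiring exact matches.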
Peer reviewed
Direct link
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas – Educational and Psychological Measurement, 2015
The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…
Descriptors: Measurement, Computation, Test Format, Test Items
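The "mutual occurrence of pairs of items in the same booklet" property mentioned in the entry above can be tallied with a short sketch (function name and data representation are illustrative, not from the paper):

```python
from collections import Counter
from itertools import combinations

def pair_balance(booklets):
    """Count how often each unordered pair of items appears together
    in the same booklet. A pairwise-balanced booklet design keeps
    these counts as equal as possible across all item pairs."""
    counts = Counter()
    for booklet in booklets:
        for pair in combinations(sorted(booklet), 2):
            counts[pair] += 1
    return counts
```

For the three two-item booklets [1, 2], [2, 3], and [1, 3], every item pair co-occurs exactly once, i.e., the design is perfectly pair-balanced. Position balance (where in the booklet each item appears) would be checked analogously over item positions.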
Peer reviewed
Direct link
Mouza, Chrystalla; Yang, Hui; Pan, Yi-Cheng; Ozden, Sule Yilmaz; Pollock, Lori – Australasian Journal of Educational Technology, 2017
This study presents the design of an educational technology course for pre-service teachers specific to incorporating computational thinking in K-8 classroom settings. Subsequently, it examines how participation in the course influences pre-service teachers' dispositions and knowledge of computational thinking concepts and the ways in which such…
Descriptors: Educational Technology, Preservice Teachers, Pedagogical Content Knowledge, Technological Literacy
Peer reviewed
Direct link
Humphry, Stephen; Heldsinger, Sandra; Andrich, David – Applied Measurement in Education, 2014
One of the best-known methods for setting a benchmark standard on a test is that of Angoff and its modifications. When scored dichotomously, judges estimate the probability that a benchmark student has of answering each item correctly. As in most methods of standard setting, it is assumed implicitly that the unit of the latent scale of the…
Descriptors: Foreign Countries, Standard Setting (Scoring), Judges, Item Response Theory
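The Angoff procedure described in the entry above, where judges estimate the probability that a minimally competent benchmark examinee answers each dichotomously scored item correctly, aggregates those estimates into a cut score. A minimal sketch of the standard aggregation (names are illustrative):

```python
def angoff_cut_score(judge_ratings):
    """judge_ratings[j][i] is judge j's estimated probability that a
    benchmark examinee answers item i correctly. The Angoff cut score
    is the sum over items of the mean rating across judges."""
    n_judges = len(judge_ratings)
    n_items = len(judge_ratings[0])
    return sum(
        sum(judge_ratings[j][i] for j in range(n_judges)) / n_judges
        for i in range(n_items)
    )
```

With two judges rating two items as [0.8, 0.6] and [0.6, 0.4], the item means are 0.7 and 0.5 and the cut score is 1.2 raw-score points. The entry's concern is the implicit assumption that the latent scale's unit is common across items, which this simple summation takes for granted.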
Peer reviewed
Direct link
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
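The "reverse recoding of negatively worded items" mentioned in the entry above is conventionally the reflection `scale_min + scale_max - response`. A minimal sketch (the entry's point is that this alone may not make such items function like positively worded ones, which is why the authors turn to bi-factor IRT models with a wording factor):

```python
def reverse_code(response, scale_min=1, scale_max=5):
    """Reverse-score a negatively worded Likert item so that higher
    values indicate more of the measured trait (e.g., on a 1-5 scale,
    1 <-> 5 and 2 <-> 4, with the midpoint 3 unchanged)."""
    return scale_min + scale_max - response
```

After recoding, all items point in the same direction, but any method variance specific to negative wording remains in the responses; that residual wording effect is what the bi-factor approach models explicitly.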
Peer reviewed
Direct link
Michaelides, Michalis P.; Haertel, Edward H. – Applied Measurement in Education, 2014
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Descriptors: Equated Scores, Test Items, Sampling, Statistical Inference