Publication Date
In 2025 | 109 |
Author
Alina Kadluba | 2 |
Andreas Obersteiner | 2 |
Christopher DeLuca | 2 |
Guher Gorgun | 2 |
Okan Bulut | 2 |
Stefanie A. Wind | 2 |
Xiaoping Fan | 2 |
A-Ceng Li | 1 |
Abdul Halim Abdullah | 1 |
Abi Roper | 1 |
Abigail Goben | 1 |
Publication Type
Journal Articles | 109 |
Reports - Research | 87 |
Information Analyses | 13 |
Reports - Evaluative | 7 |
Reports - Descriptive | 6 |
Tests/Questionnaires | 2 |
Audience
Researchers | 2 |
Counselors | 1 |
Practitioners | 1 |
Location
Canada | 5 |
Australia | 3 |
Indonesia | 3 |
Iran | 3 |
United States | 3 |
Ireland | 2 |
Israel | 2 |
Jordan | 2 |
New Zealand | 2 |
Spain | 2 |
Turkey | 2 |
Laws, Policies, & Programs
Head Start | 1 |
Individuals with Disabilities… | 1 |
Assessments and Surveys
Ages and Stages Questionnaires | 1 |
Bayley Scales of Infant and… | 1 |
MacArthur Bates Communicative… | 1 |
Mullen Scales of Early… | 1 |
Teaching and Learning… | 1 |
Test of Gross Motor… | 1 |
Vineland Adaptive Behavior… | 1 |
Stefanie A. Wind; Benjamin Lugu; Yurou Wang – International Journal of Testing, 2025
Mokken Scale Analysis (MSA) is a nonparametric approach that offers exploratory tools for understanding the nature of item responses while emphasizing invariance requirements. MSA is often discussed as it relates to Rasch measurement theory, which also emphasizes invariance, but uses parametric models. Researchers who have compared and combined…
Descriptors: Item Response Theory, Scaling, Surveys, Evaluation Methods
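Not from the article itself: a minimal Python sketch of Loevinger's scalability coefficient H, the core statistic behind Mokken scaling for dichotomous items, assuming a simple 0/1 person-by-item response matrix. The data here are simulated purely for illustration.

```python
import numpy as np

def scalability_H(X):
    """Loevinger's scalability coefficient H for dichotomous items.

    X: (n_persons, n_items) array of 0/1 responses.
    H is the ratio of the summed inter-item covariances to the summed
    maximum covariances attainable given the item popularities.
    """
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0)                      # item popularities
    n_items = X.shape[1]
    cov = np.cov(X, rowvar=False, bias=True)
    num, den = 0.0, 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            num += cov[i, j]
            # Frechet upper bound on the joint proportion gives the max covariance
            den += min(p[i], p[j]) - p[i] * p[j]
    return num / den

# Tiny illustration with simulated unidimensional data
rng = np.random.default_rng(1)
theta = rng.normal(size=500)                          # person trait
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # item locations
probs = 1 / (1 + np.exp(-(theta[:, None] - difficulties)))
X = (rng.random(probs.shape) < probs).astype(int)
print(round(scalability_H(X), 2))   # values above ~0.3 are conventionally "scalable"
```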
Yan Xia; Xinchang Zhou – Educational and Psychological Measurement, 2025
Parallel analysis has been considered one of the most accurate methods for determining the number of factors in factor analysis. One major advantage of parallel analysis over traditional factor retention methods (e.g., Kaiser's rule) is that it addresses the sampling variability of eigenvalues obtained from the identity matrix, representing the…
Descriptors: Factor Analysis, Statistical Analysis, Evaluation Methods, Sampling
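As a rough illustration of the idea described above (not the authors' code), the sketch below runs Horn's parallel analysis: observed eigenvalues of the correlation matrix are retained only if they exceed a chosen percentile of the eigenvalues obtained from uncorrelated random data of the same dimensions.

```python
import numpy as np

def parallel_analysis(X, n_sims=200, percentile=95, seed=0):
    """Horn's parallel analysis on a correlation matrix.

    Retains components whose observed eigenvalues exceed the chosen
    percentile of eigenvalues from uncorrelated random data of the
    same dimensions.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sim_eig = np.empty((n_sims, p))
    for s in range(n_sims):
        R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        sim_eig[s] = np.linalg.eigvalsh(R)[::-1]
    threshold = np.percentile(sim_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Example: two correlated blocks of items should suggest two factors
rng = np.random.default_rng(42)
f = rng.normal(size=(300, 2))
X = np.hstack([f[:, [0]] + 0.5 * rng.normal(size=(300, 3)),
               f[:, [1]] + 0.5 * rng.normal(size=(300, 3))])
n_factors, _, _ = parallel_analysis(X)
print(n_factors)  # typically 2
```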
Lee Nelson; Nic James; Scott Nicholls; Nimai Parmar; Ryan Groom – Sport, Education and Society, 2025
The discipline of performance analysis is founded upon the collection and analysis of objective and reliable data to support the coaching process. While research has begun to identify the potential importance of trust in applied sporting environments, there remains a paucity of inquiry that seeks to explicitly investigate trustworthiness in the…
Descriptors: Trust (Psychology), Work Environment, Athletics, Performance
Jeff Coon; Paulina N. Silva; Alexander Etz; Barbara W. Sarnecka – Journal of Cognition and Development, 2025
Bayesian methods offer many advantages when applied to psychological research, yet they may seem esoteric to researchers who are accustomed to traditional methods. This paper aims to lower the barrier of entry for developmental psychologists who are interested in using Bayesian methods. We provide worked examples of how to analyze common study…
Descriptors: Developmental Psychology, Bayesian Statistics, Research Methodology, Psychological Studies
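To make the appeal of Bayesian methods concrete (a generic illustration, not one of the paper's worked examples), the sketch below estimates a success probability from hypothetical trial data with a conjugate Beta prior and reports a posterior mean, a credible interval, and the probability of above-chance performance.

```python
from scipy import stats

# Hypothetical data: a child succeeds on 14 of 20 trials of a two-choice task.
successes, trials = 14, 20

# Beta(1, 1) is a flat prior on the success probability; the Beta prior is
# conjugate to the binomial likelihood, so the posterior is also a Beta.
prior_a, prior_b = 1, 1
post = stats.beta(prior_a + successes, prior_b + trials - successes)

print(f"posterior mean: {post.mean():.2f}")
print(f"95% credible interval: {post.ppf(0.025):.2f} to {post.ppf(0.975):.2f}")
# Probability that performance exceeds chance (0.5) on a two-choice task:
print(f"P(theta > 0.5 | data): {post.sf(0.5):.2f}")
```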
Aline Godfroid; Brittany Finch; Joanne Koh – Language Learning, 2025
Eye tracking has taken hold in second language acquisition (SLA) and bilingualism as a valuable technique for researching cognitive processes, yet a comprehensive picture of reporting practices is still lacking. Our systematic review addressed this gap. We synthesized 145 empirical eye-tracking studies, coding for 58 reporting features and…
Descriptors: Eye Movements, Second Language Learning, Bilingualism, Cognitive Processes
Christopher DeLuca; Michael Holden; Nathan Rickey – British Educational Research Journal, 2025
We are at a critical moment for assessment in schools. Teachers are called to navigate advances in classroom assessment research, top-down assessment policies, and lingering effects of the COVID-19 pandemic on teaching and learning. Embedded in this context are also systemic challenges to teachers' assessment practice. This paper analyses these…
Descriptors: Evaluation Methods, Educational Innovation, Foreign Countries, Psychological Patterns
Yangmeng Xu; Stefanie A. Wind – Educational Measurement: Issues and Practice, 2025
Double-scoring constructed-response items is a common but costly practice in mixed-format assessments. This study explored the impacts of Targeted Double-Scoring (TDS) and random double-scoring procedures on the quality of psychometric outcomes, including student achievement estimates, person fit, and student classifications under various…
Descriptors: Academic Achievement, Psychometrics, Scoring, Evaluation Methods
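The excerpt does not spell out the targeting rule used in TDS; purely as an illustration, the sketch below contrasts random selection of responses for second scoring with one plausible targeting rule that prioritizes examinees whose provisional scores fall near a hypothetical cut score, where a second rating is most likely to change the classification.

```python
import numpy as np

rng = np.random.default_rng(7)
n_examinees = 1000
provisional = rng.normal(loc=0.0, scale=1.0, size=n_examinees)  # first-rater scores
cut_score = 0.5          # hypothetical pass/fail cut
budget = 200             # number of responses that can be double-scored

# Random double-scoring: any response is equally likely to get a second rating.
random_ids = rng.choice(n_examinees, size=budget, replace=False)

# Targeted double-scoring (one plausible rule): prioritize examinees whose
# provisional scores sit closest to the cut.
targeted_ids = np.argsort(np.abs(provisional - cut_score))[:budget]

print("mean distance from cut (random):  ",
      round(np.abs(provisional[random_ids] - cut_score).mean(), 2))
print("mean distance from cut (targeted):",
      round(np.abs(provisional[targeted_ids] - cut_score).mean(), 2))
```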
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
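The aggregation methods studied in the article are not given in this excerpt; as a generic baseline, the sketch below contrasts a simple mean of peer ratings with a reliability-weighted mean in which raters who deviate more from the provisional consensus receive less weight. All ratings shown are hypothetical.

```python
import numpy as np

# Hypothetical ratings: rows = student raters, columns = pieces of
# student-generated content; NaN marks unreviewed items.
ratings = np.array([
    [4.0, 3.0, np.nan, 5.0],
    [5.0, 3.0, 4.0, np.nan],
    [2.0, 1.0, 2.0, 2.0],     # a harsh / unreliable rater
    [4.0, 4.0, 4.0, 5.0],
])

# Baseline: unweighted mean over the raters who reviewed each item.
simple = np.nanmean(ratings, axis=0)

# One simple reliability weighting: down-weight raters whose scores deviate
# most, on average, from the provisional consensus.
deviation = np.nanmean(np.abs(ratings - simple), axis=1)
weights = 1.0 / (deviation + 1e-6)

mask = ~np.isnan(ratings)
weighted = np.nansum(np.where(mask, ratings, 0.0) * weights[:, None], axis=0) \
    / (mask * weights[:, None]).sum(axis=0)

print("simple mean:  ", np.round(simple, 2))
print("weighted mean:", np.round(weighted, 2))
```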
Yasuhiro Yamamoto; Yasuo Miyazaki – Journal of Experimental Education, 2025
Bayesian methods have been said to solve small-sample problems in frequentist methods by reflecting prior knowledge in the prior distribution. However, there are dangers in reflecting prior knowledge too strongly, and in some situations little prior knowledge is available to use. To address this issue, in this article we considered applying two Bayesian…
Descriptors: Sample Size, Hierarchical Linear Modeling, Bayesian Statistics, Prior Learning
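The two Bayesian approaches the authors apply are not named in this excerpt; the generic sketch below only illustrates the underlying tension, showing how the strength of a normal prior pulls the posterior mean away from a small sample's mean in a conjugate normal-normal update.

```python
import numpy as np

# Small hypothetical sample (n = 8) with known observation SD, to show how
# the prior's strength drives the posterior mean when data are sparse.
rng = np.random.default_rng(3)
y = rng.normal(loc=0.8, scale=1.0, size=8)
n, sigma = len(y), 1.0
ybar = y.mean()

prior_mean = 0.0
for prior_sd in (0.1, 1.0, 10.0):   # strongly informative -> weakly informative
    # Conjugate normal-normal update for the mean with known sigma
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / sigma**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * ybar)
    print(f"prior sd {prior_sd:>4}: posterior mean {post_mean:.2f} "
          f"(sample mean {ybar:.2f})")
```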
Timothy R. Konold; Elizabeth A. Sanders; Kelvin Afolabi – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Measurement invariance (MI) is an essential part of validity evidence concerned with ensuring that tests function similarly across groups, contexts, and time. Most evaluations of MI involve multigroup confirmatory factor analyses (MGCFA) that assume simple structure. However, recent research has shown that constraining non-target indicators to…
Descriptors: Evaluation Methods, Error of Measurement, Validity, Monte Carlo Methods
Christian Mazimpaka; Rashmi Paudel; Beverly Heinze-Lacey; Patricia A. Elliott – Journal of School Nursing, 2025
This scoping review explores leadership training opportunities for school nurses. The review was conducted to inform the development of a new leadership training program for school nurses in Massachusetts. A search conducted across four databases (PubMed, CINAHL, ERIC, and Web of Science) yielded four articles meeting the search criteria published…
Descriptors: School Nurses, Leadership Training, Program Evaluation, Competence
Sun Kyung Kim; Youngho Lee; Hye Ri Hwang; Oe Nam Kim – Journal of Computer Assisted Learning, 2025
Background: Comprehensive assessment of skills and performance is necessary to improve the quality of care in nursing education. Various factors pose challenges to accurate assessment, including high student-teacher ratios and observer bias. Objectives: To establish an assessment system based on first-person video from smart glasses and validate…
Descriptors: Handheld Devices, Technology, Video Technology, Evaluation Methods
Emily C. Shepard; Mollie Ruben; Lisa L. Weyandt – Journal of Attention Disorders, 2025
Objective: The aim of the present systematic review was to consolidate findings related to emotion recognition accuracy among individuals with attention deficit hyperactivity disorder (ADHD). The review also examined emotion recognition accuracy assessment methods as well as the contribution of gender to emotional recognition accuracy. Method: A…
Descriptors: Attention Deficit Hyperactivity Disorder, Psychological Patterns, Gender Differences, Recognition (Psychology)
Yinying Wang; Joonkil Ahn – Educational Management Administration & Leadership, 2025
School leadership research literature has a large number of widely used constructs. Could fewer constructs bring more clarity? This study evaluates construct content validity, defined as the extent to which a measure's items reflect a theoretical content domain, in school leadership literature. To do so, we reviewed 29 articles that used Teaching…
Descriptors: Network Analysis, Construct Validity, Content Validity, Instructional Leadership
Rachael Ruegg; Jennifer Yphantides – Higher Education Quarterly, 2025
Although an increasing amount of research has focussed on the relationship between student language proficiency and English-medium instruction (EMI) programme outcomes, there has been little focus on the broader assessment of progress and learning within EMI programmes, especially in Asia. The purpose of this study was to determine the kinds of…
Descriptors: Foreign Countries, Language of Instruction, English (Second Language), Program Evaluation