Showing 1 to 15 of 783 results
Peer reviewed
Direct link
Wendy Chan – Asia Pacific Education Review, 2024
As evidence from evaluation and experimental studies continues to influence decision making and policymaking, applied researchers and practitioners require tools to derive valid and credible inferences. Over the past several decades, research in causal inference has progressed with the development and application of propensity scores. Since their…
Descriptors: Probability, Scores, Causal Models, Statistical Inference
Peer reviewed
Direct link
Jingwen Wang; Xiaohong Yang; Dujuan Liu – International Journal of Web-Based Learning and Teaching Technologies, 2024
The large scale expansion of online courses has led to the crisis of course quality issues. In this study, we first established an evaluation index system for online courses using factor analysis, encompassing three key constructs: course resource construction, course implementation, and teaching effectiveness. Subsequently, we employed factor…
Descriptors: Educational Quality, Online Courses, Course Evaluation, Models
Peer reviewed
PDF on ERIC Download full text
Deschênes, Marie-France; Dionne, Éric; Dorion, Michelle; Grondin, Julie – Practical Assessment, Research & Evaluation, 2023
The use of the aggregate scoring method for scoring concordance tests requires the weighting of test items to be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of experts constituting the reference panel remains a critical issue in the use of these tests.…
Descriptors: Scoring, Tests, Evaluation Methods, Test Items
Peer reviewed
Direct link
Mosquera, Jose Miguel Llanos; Suarez, Carlos Giovanny Hidalgo; Guerrero, Victor Andres Bucheli – Education and Information Technologies, 2023
This paper proposes to evaluate learning efficiency by implementing the flipped classroom and automatic source code evaluation, based on the Kirkpatrick evaluation model, with students in a CS1 programming course. The experiment was conducted with 82 students from two CS1 courses: an experimental group (EG = 56) and a control group (CG = 26). Each…
Descriptors: Flipped Classroom, Coding, Programming, Evaluation Methods
Peer reviewed
Direct link
Bouwer, Renske; Koster, Monica; van den Bergh, Huub – Assessment in Education: Principles, Policy & Practice, 2023
Assessing students' writing performance is essential to adequately monitor and promote individual writing development, but it is also a challenge. The present research investigates a benchmark rating procedure for assessing texts written by upper-elementary students. In two studies we examined whether a benchmark rating procedure (1) leads to…
Descriptors: Benchmarking, Writing Evaluation, Evaluation Methods, Elementary School Students
Peer reviewed
Direct link
Baumgartner, Michael; Ambühl, Mathias – Sociological Methods & Research, 2023
Consistency and coverage are two core parameters of model fit used by configurational comparative methods (CCMs) of causal inference. Among causal models that perform equally well in other respects (e.g., robustness or compliance with background theories), those with higher consistency and coverage are typically considered preferable. Finding the…
Descriptors: Causal Models, Evaluation Methods, Goodness of Fit, Scores
Peer reviewed
PDF on ERIC Download full text
Demir, Suleyman – International Journal of Assessment Tools in Education, 2022
This study aims to compare normality tests across different sample sizes in normally distributed data under different kurtosis and skewness coefficients obtained through simulation. To this end, simulated data were first produced using the MATLAB program for different skewness/kurtosis coefficients and different sample sizes. The normality analysis…
Descriptors: Sample Size, Comparative Analysis, Computer Software, Evaluation Methods
Peer reviewed
Direct link
Corinne Huggins-Manley; Anthony W. Raborn; Peggy K. Jones; Ted Myers – Journal of Educational Measurement, 2024
The purpose of this study is to develop a nonparametric DIF method that (a) compares focal groups directly to the composite group that will be used to develop the reported test score scale, and (b) allows practitioners to explore for DIF related to focal groups stemming from multicategorical variables that constitute a small proportion of the…
Descriptors: Nonparametric Statistics, Test Bias, Scores, Statistical Significance
Peer reviewed
Direct link
Jing Chen; Bei Fang; Hao Zhang; Xia Xue – Interactive Learning Environments, 2024
High dropout rates exist universally in massive open online courses (MOOCs) due to the separation of teachers and learners in space and time. Dropout prediction using machine learning methods is an essential prerequisite for identifying potential at-risk learners and improving learning. It has attracted much attention, and there have emerged…
Descriptors: MOOCs, Potential Dropouts, Prediction, Artificial Intelligence
Peer reviewed
Direct link
David Bezeau; Audrey-Anne De Guise – Journal of Teaching in Physical Education, 2024
Purpose: To gain a better understanding of the assessment practices currently implemented by Quebec physical education teachers regarding reporting and grading. Method: Exploratory mixed-methods study using semistructured interviews (n = 13), interviews to the double (n = 12), and a questionnaire (n = 164) with elementary and high school physical…
Descriptors: Physical Education Teachers, Foreign Countries, Evaluation Methods, Grading
Peer reviewed
Direct link
Margarita Pivovarova; Audrey Amrein-Beardsley – Educational Assessment, Evaluation and Accountability, 2024
In this study, we estimated the relationship between two popular measures of teacher effectiveness: teachers' value-added model (VAM) estimates, represented in this study via median growth percentiles (MGPs), and teachers' observational scores, derived from the TAP System for Teacher and Student Advancement. We examined the relationship between…
Descriptors: Teacher Evaluation, Evaluation Methods, Value Added Models, Correlation
Peer reviewed
Direct link
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Peer reviewed
Direct link
Rubén Abbas; Andrés Sebastián; Jesús Casanova – Education and Information Technologies, 2025
Classroom response systems (CRS) represent an innovative educational technology that can be used to promote active learning and student engagement. This study explores the effectiveness of CRS in enhancing student learning and performance across various engineering courses related to heat engines. Over five academic years, CRS have been used…
Descriptors: Engineering Education, Audience Response Systems, Classroom Techniques, Educational Technology
Peer reviewed
Direct link
Freire, Carla; Barbosa, Iris – Education & Training, 2023
Purpose: The purpose of this article is to compare graduates' score rates in two multiple mini-interview (MMI) stations designed to assess graduates from several academic areas: confidant vs stress interview, and synchronous vs asynchronous. This relates to three transversal competences (TCs) (learning to learn [LL], positive professional attitude…
Descriptors: College Graduates, Competence, Scores, Semi Structured Interviews
Peer reviewed
Direct link
Van Meenen, Florence; Coertjens, Liesje; Van Nes, Marie-Claire; Verschuren, Franck – Advances in Health Sciences Education, 2022
The present study explores two rating methods for peer assessment (analytical rating using criteria and comparative judgement) in light of concurrent validity, reliability and insufficient diagnosticity (i.e. the degree to which substandard work is recognised by the peer raters). During a second-year undergraduate course, students wrote a one-page…
Descriptors: Evaluation Methods, Peer Evaluation, Accuracy, Evaluation Criteria