Ferrara, Steve; Qunbar, Saed – Journal of Educational Measurement, 2022
In this article, we argue that automated scoring engines should be transparent and construct relevant--that is, as much as is currently feasible. Many current automated scoring engines cannot achieve high degrees of scoring accuracy without allowing in some features that may not be easily explained and understood and may not be obviously and…
Descriptors: Artificial Intelligence, Scoring, Essays, Automation
Aloisi, Cesare – European Journal of Education, 2023
This article considers the challenges of using artificial intelligence (AI) and machine learning (ML) to assist high-stakes standardised assessment. It focuses on the detrimental effect that even state-of-the-art AI and ML systems could have on the validity of national exams of secondary education, and how lower validity would negatively affect…
Descriptors: Standardized Tests, Test Validity, Credibility, Algorithms
Ormerod, Christopher; Lottridge, Susan; Harris, Amy E.; Patel, Milan; van Wamelen, Paul; Kodeswaran, Balaji; Woolf, Sharon; Young, Mackenzie – International Journal of Artificial Intelligence in Education, 2023
We introduce a short answer scoring engine made up of an ensemble of deep neural networks and a Latent Semantic Analysis-based model to score short constructed responses for a large suite of questions from a national assessment program. We evaluate the performance of the engine and show that the engine achieves above-human-level performance on a…
Descriptors: Computer Assisted Testing, Scoring, Artificial Intelligence, Semantics
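To illustrate the ensemble idea mentioned in the abstract above (not the authors' actual engine), the following Python sketch combines an LSA-style similarity score, built with TF-IDF and truncated SVD from scikit-learn, with a stubbed-out neural score. The reference answers, the equal weighting, and the neural component are illustrative assumptions.

# A minimal sketch of ensembling an LSA-based similarity score with a
# neural-style score for short-answer grading. The "neural" component is
# a hypothetical stand-in, since the original networks are not described
# in detail in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_answers = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to make glucose and oxygen.",
]
student_response = "Plants turn sunlight into sugar using water and CO2."

# LSA component: TF-IDF followed by truncated SVD, then cosine similarity
# between the student response and the closest reference answer.
corpus = reference_answers + [student_response]
tfidf = TfidfVectorizer().fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
lsa_score = float(cosine_similarity(lsa[-1:], lsa[:-1]).max())

def neural_score(text: str) -> float:
    """Hypothetical stand-in for a trained neural scorer (0-1 scale)."""
    return 0.8  # a real engine would return a model prediction here

# Simple ensemble: average the two component scores.
final_score = 0.5 * lsa_score + 0.5 * neural_score(student_response)
print(f"LSA similarity: {lsa_score:.2f}, ensemble score: {final_score:.2f}")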
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities enlarging provision and more students enrolling. Automated essay scoring (AES) therefore holds strong appeal for universities seeking to manage this growing learning interest and to reduce the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Dorsey, David W.; Michaels, Hillary R. – Journal of Educational Measurement, 2022
We have dramatically advanced our ability to create rich, complex, and effective assessments across a range of uses through technology advancement. Artificial Intelligence (AI) enabled assessments represent one such area of advancement--one that has captured our collective interest and imagination. Scientists and practitioners within the domains…
Descriptors: Validity, Ethics, Artificial Intelligence, Evaluation Methods
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
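As a hedged illustration of transformer-based essay scoring of the kind surveyed above (not the study's own system), the sketch below runs an essay through a Hugging Face sequence-classification model with a single regression output. The checkpoint name is a placeholder and the regression head is untrained, so the printed value is only meaningful after fine-tuning on rated essays.

# A minimal sketch: scoring an essay with a transformer encoder via a
# regression head, using the Hugging Face transformers library.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # placeholder checkpoint, untrained head
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1  # single regression output = holistic score
)

essay = "Essays on this topic should weigh both sides of the argument..."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

model.eval()
with torch.no_grad():
    score = model(**inputs).logits.item()  # untrained here, so illustrative only

print(f"Predicted holistic score (after fine-tuning on rated essays): {score:.2f}")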
Brandon J. Yik; David G. Schreurs; Jeffrey R. Raker – Journal of Chemical Education, 2023
Acid-base chemistry, and in particular the Lewis acid-base model, is foundational to understanding mechanistic ideas. This is due to the similarity in language chemists use to describe Lewis acid-base reactions and nucleophile-electrophile interactions. The development of artificial intelligence and machine learning technologies has led to the…
Descriptors: Educational Technology, Formative Evaluation, Molecular Structure, Models
Gardner, John; O'Leary, Michael; Yuan, Li – Journal of Computer Assisted Learning, 2021
Artificial Intelligence is at the heart of modern society with computers now capable of making process decisions in many spheres of human activity. In education, there has been intensive growth in systems that make formal and informal learning an anytime, anywhere activity for billions of people through online open educational resources and…
Descriptors: Artificial Intelligence, Educational Assessment, Formative Evaluation, Summative Evaluation
Richardson, Mary; Clesham, Rose – London Review of Education, 2021
Our world has been transformed by technologies incorporating artificial intelligence (AI) within mass communication, employment, entertainment and many other aspects of our daily lives. However, within the domain of education, it seems that our ways of working and, particularly, assessing have hardly changed at all. We continue to prize…
Descriptors: Artificial Intelligence, High Stakes Tests, Computer Assisted Testing, Educational Change
Beigman Klebanov, Beata; Burstein, Jill; Harackiewicz, Judith M.; Priniski, Stacy J.; Mulholland, Matthew – International Journal of Artificial Intelligence in Education, 2017
The integration of subject matter learning with reading and writing skills takes place in multiple ways. Students learn to read, interpret, and write texts in the discipline-relevant genres. However, writing can be used not only for the purposes of practice in professional communication, but also as an opportunity to reflect on the learned…
Descriptors: STEM Education, Content Area Writing, Writing Instruction, Intervention
Grimes, Douglas; Warschauer, Mark – Journal of Technology, Learning, and Assessment, 2010
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access!® was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
Descriptors: Automation, Writing Evaluation, Essays, Artificial Intelligence
Wang, Hui-Yu; Chen, Shyi-Ming – Educational Technology & Society, 2007
In this paper, we present two new methods for evaluating students' answerscripts based on the similarity measure between vague sets. The vague marks awarded to the answers in the students' answerscripts are represented by vague sets, where each element u_i in the universe of discourse U belonging to a vague set is represented by a…
Descriptors: Artificial Intelligence, Student Evaluation, Evaluation Methods, Educational Technology
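The following sketch illustrates the general idea of comparing answerscripts through similarities between vague values, i.e., pairs of truth- and false-membership degrees (t, f) with t + f <= 1. The specific similarity formula, the marks, and the simple averaging are illustrative assumptions, not the measures defined by Wang and Chen.

# A minimal sketch, not the paper's exact method: each awarded mark is a
# vague value [t, 1 - f], and two answerscripts are compared by averaging
# a simple similarity between corresponding vague values.

def vague_similarity(tx, fx, ty, fy):
    """Similarity between two vague values, in [0, 1] (illustrative formula)."""
    return 1.0 - abs((tx - fx) - (ty - fy)) / 2.0

# Marks awarded by an evaluator vs. a reference standard, per question.
student_marks  = [(0.8, 0.1), (0.6, 0.3), (0.9, 0.0)]   # (t, f) pairs
standard_marks = [(0.7, 0.2), (0.5, 0.4), (0.9, 0.1)]

similarities = [
    vague_similarity(t1, f1, t2, f2)
    for (t1, f1), (t2, f2) in zip(student_marks, standard_marks)
]
overall = sum(similarities) / len(similarities)
print(f"Per-question similarities: {similarities}")
print(f"Overall answerscript similarity: {overall:.3f}")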
Bennett, Randy Elliot – 1990
A new assessment conception is described that integrates constructed-response testing, artificial intelligence, and model-based measurement. The conception incorporates complex constructed-response items for their potential to increase the validity, instructional utility, and credibility of standardized tests. Artificial intelligence methods are…
Descriptors: Artificial Intelligence, Constructed Response, Educational Assessment, Measurement Techniques