Stanojevic, Miloš; Brennan, Jonathan R.; Dunagan, Donald; Steedman, Mark; Hale, John T. – Cognitive Science, 2023
To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad-coverage tools from natural-language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context-free grammars (CFGs), yet such formalisms are not…
Descriptors: Correlation, Language Processing, Brain Hemisphere Functions, Natural Language Processing
Q. Feltgen; G. Cislaru – Discourse Processes: A Multidisciplinary Journal, 2025
The broader aim of this study is the corpus-based investigation of the written language production process. To this end, temporal markers have been keylog-recorded alongside the writing process, exploiting pauses to segment the written product into linear units of performance. However, identifying these pauses requires selecting the relevant…
Descriptors: Writing Processes, Writing Skills, Written Language, Intervals
Liunian Li – ProQuest LLC, 2024
To build an Artificial Intelligence system that can assist us in our daily lives, the ability to understand the world around us through visual input is essential. Prior studies train visual perception models by defining concept vocabularies and annotating data against the fixed vocabulary. It is hard to define a comprehensive set of everything, and thus…
Descriptors: Artificial Intelligence, Visual Stimuli, Visual Perception, Models
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
Todd Cherner; Teresa S. Foulger; Margaret Donnelly – TechTrends: Linking Research and Practice to Improve Learning, 2025
The ethics surrounding the development and deployment of generative artificial intelligence (genAI) is an important topic as institutions of higher education adopt the technology for educational purposes. Concurrently, stakeholders from various organizations have reviewed the literature about the ethics of genAI and proposed frameworks about it.…
Descriptors: Artificial Intelligence, Natural Language Processing, Decision Making, Models
Teo Susnjak – International Journal of Artificial Intelligence in Education, 2024
A significant body of recent research in the field of Learning Analytics has focused on leveraging machine learning approaches for predicting at-risk students in order to initiate timely interventions and thereby elevate retention and completion rates. The overarching feature of the majority of these research studies has been on the science of…
Descriptors: Prediction, Learning Analytics, Artificial Intelligence, At Risk Students
Albornoz-De Luise, Romina Soledad; Arevalillo-Herraez, Miguel; Arnau, David – IEEE Transactions on Learning Technologies, 2023
In this article, we analyze the potential of conversational frameworks to support the adaptation of existing tutoring systems to a natural language form of interaction. We have based our research on a pilot study, in which the open-source machine learning framework Rasa has been used to build a conversational agent that interacts with an existing…
Descriptors: Intelligent Tutoring Systems, Natural Language Processing, Artificial Intelligence, Models
Gani, Mohammed Osman; Ayyasamy, Ramesh Kumar; Sangodiah, Anbuselvan; Fui, Yong Tien – Education and Information Technologies, 2023
The automated classification of examination questions based on Bloom's Taxonomy (BT) aims to assist the question setters so that high-quality question papers are produced. Most studies to automate this process adopted the machine learning approach, and only a few utilised the deep learning approach. The pre-trained contextual and non-contextual…
Descriptors: Models, Artificial Intelligence, Natural Language Processing, Writing (Composition)
Gerald Gartlehner; Leila Kahwati; Rainer Hilscher; Ian Thomas; Shannon Kugley; Karen Crotty; Meera Viswanathan; Barbara Nussbaumer-Streit; Graham Booth; Nathaniel Erskine; Amanda Konet; Robert Chew – Research Synthesis Methods, 2024
Data extraction is a crucial, yet labor-intensive and error-prone part of evidence synthesis. To date, efforts to harness machine learning for enhancing efficiency of the data extraction process have fallen short of achieving sufficient accuracy and usability. With the release of large language models (LLMs), new possibilities have emerged to…
Descriptors: Data Collection, Evidence, Synthesis, Language Processing
A Method for Generating Course Test Questions Based on Natural Language Processing and Deep Learning
Hei-Chia Wang; Yu-Hung Chiang; I-Fan Chen – Education and Information Technologies, 2024
Assessment is viewed as an important means to understand learners' performance in the learning process. A good assessment method is based on high-quality examination questions. However, generating high-quality examination questions manually by teachers is a time-consuming task, and it is not easy for students to obtain question banks. To solve…
Descriptors: Natural Language Processing, Test Construction, Test Items, Models
Reese Butterfuss; Harold Doran – Educational Measurement: Issues and Practice, 2025
Large language models are increasingly used in educational and psychological measurement activities. Their rapidly evolving sophistication and ability to detect language semantics make them viable tools to supplement subject matter experts and their reviews of large amounts of text statements, such as educational content standards. This paper…
Descriptors: Alignment (Education), Academic Standards, Content Analysis, Concept Mapping
John Hollander; Andrew Olney – Cognitive Science, 2024
Recent investigations on how people derive meaning from language have focused on task-dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems…
Descriptors: Verbs, Symbolic Language, Language Processing, Semantics
Mishra, Swaroop – ProQuest LLC, 2023
Humans have the remarkable ability to solve different tasks by simply reading textual instructions that define the tasks and looking at a few examples. Natural Language Processing (NLP) models built with the conventional machine learning paradigm, however, often struggle to generalize across tasks (e.g., a question-answering system cannot solve…
Descriptors: Natural Language Processing, Models, Readability, Mathematical Logic
Samah AlKhuzaey; Floriana Grasso; Terry R. Payne; Valentina Tamma – International Journal of Artificial Intelligence in Education, 2024
Designing and constructing pedagogical tests that contain items (i.e., questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent if student evaluations are to be objective and effective.…
Descriptors: Test Items, Test Construction, Difficulty Level, Prediction
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for an automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing