Showing 1 to 15 of 543 results
Peer reviewed
Stanojevic, Miloš; Brennan, Jonathan R.; Dunagan, Donald; Steedman, Mark; Hale, John T. – Cognitive Science, 2023
To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad-coverage tools from natural-language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context-free grammars (CFGs), yet such formalisms are not…
Descriptors: Correlation, Language Processing, Brain Hemisphere Functions, Natural Language Processing
Peer reviewed
Q. Feltgen; G. Cislaru – Discourse Processes: A Multidisciplinary Journal, 2025
The broader aim of this study is the corpus-based investigation of the written language production process. To this end, temporal markers were recorded via keylogging alongside the writing process, using pauses to segment the written product into linear units of performance. However, identifying these pauses requires selecting the relevant…
Descriptors: Writing Processes, Writing Skills, Written Language, Intervals
Peer reviewed
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
Peer reviewed
Teo Susnjak – International Journal of Artificial Intelligence in Education, 2024
A significant body of recent research in the field of Learning Analytics has focused on leveraging machine learning approaches for predicting at-risk students in order to initiate timely interventions and thereby elevate retention and completion rates. The overarching focus of the majority of these research studies has been on the science of…
Descriptors: Prediction, Learning Analytics, Artificial Intelligence, At Risk Students
Peer reviewed
Albornoz-De Luise, Romina Soledad; Arevalillo-Herraez, Miguel; Arnau, David – IEEE Transactions on Learning Technologies, 2023
In this article, we analyze the potential of conversational frameworks to support the adaptation of existing tutoring systems to a natural language form of interaction. We have based our research on a pilot study, in which the open-source machine learning framework Rasa has been used to build a conversational agent that interacts with an existing…
Descriptors: Intelligent Tutoring Systems, Natural Language Processing, Artificial Intelligence, Models
Peer reviewed
Gani, Mohammed Osman; Ayyasamy, Ramesh Kumar; Sangodiah, Anbuselvan; Fui, Yong Tien – Education and Information Technologies, 2023
The automated classification of examination questions based on Bloom's Taxonomy (BT) aims to assist the question setters so that high-quality question papers are produced. Most studies to automate this process adopted the machine learning approach, and only a few utilised the deep learning approach. The pre-trained contextual and non-contextual…
Descriptors: Models, Artificial Intelligence, Natural Language Processing, Writing (Composition)
Peer reviewed
Gerald Gartlehner; Leila Kahwati; Rainer Hilscher; Ian Thomas; Shannon Kugley; Karen Crotty; Meera Viswanathan; Barbara Nussbaumer-Streit; Graham Booth; Nathaniel Erskine; Amanda Konet; Robert Chew – Research Synthesis Methods, 2024
Data extraction is a crucial, yet labor-intensive and error-prone part of evidence synthesis. To date, efforts to harness machine learning for enhancing efficiency of the data extraction process have fallen short of achieving sufficient accuracy and usability. With the release of large language models (LLMs), new possibilities have emerged to…
Descriptors: Data Collection, Evidence, Synthesis, Language Processing
Peer reviewed
Hei-Chia Wang; Yu-Hung Chiang; I-Fan Chen – Education and Information Technologies, 2024
Assessment is viewed as an important means to understand learners' performance in the learning process. A good assessment method is based on high-quality examination questions. However, manually generating high-quality examination questions is a time-consuming task for teachers, and it is not easy for students to obtain question banks. To solve…
Descriptors: Natural Language Processing, Test Construction, Test Items, Models
Peer reviewed
Reese Butterfuss; Harold Doran – Educational Measurement: Issues and Practice, 2025
Large language models are increasingly used in educational and psychological measurement activities. Their rapidly evolving sophistication and ability to detect language semantics make them viable tools to supplement subject matter experts and their reviews of large amounts of text statements, such as educational content standards. This paper…
Descriptors: Alignment (Education), Academic Standards, Content Analysis, Concept Mapping
Peer reviewed
John Hollander; Andrew Olney – Cognitive Science, 2024
Recent investigations on how people derive meaning from language have focused on task-dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems…
Descriptors: Verbs, Symbolic Language, Language Processing, Semantics
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed
Ramotowska, Sonia; Steinert-Threlkeld, Shane; Maanen, Leendert; Szymanik, Jakub – Cognitive Science, 2023
According to logical theories of meaning, the meaning of an expression can be formalized and encoded in truth conditions. Vagueness of language and individual differences between people are a challenge to incorporate into meaning representations. In this paper, we propose a new approach to study truth-conditional representations of vague…
Descriptors: Computation, Models, Semantics, Decision Making
Peer reviewed
Abu-Zhaya, Rana; Arnon, Inbal; Borovsky, Arielle – Cognitive Science, 2022
Meaning in language emerges from multiple words, and children are sensitive to multi-word frequency from infancy. While children successfully use cues from single words to generate linguistic predictions, it is less clear whether and how they use multi-word sequences to guide real-time language processing and whether they form predictions on the…
Descriptors: Sentences, Language Processing, Semantics, Prediction
Peer reviewed
Behzad Mirzababaei; Viktoria Pammer-Schindler – IEEE Transactions on Learning Technologies, 2024
In this article, we investigate a systematic workflow that supports the learning engineering process of formulating the starting question for a conversational module based on existing learning materials, specifying the input that transformer-based language models need to function as classifiers, and specifying the adaptive dialogue structure,…
Descriptors: Learning Processes, Electronic Learning, Artificial Intelligence, Natural Language Processing
Peer reviewed
Stephen J. Lupker; Giacomo Spinelli – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Rastle et al. (2004) reported that true (e.g., walker) and pseudo (e.g., corner) multi-morphemic words prime their stem words more than form controls do (e.g., brothel priming BROTH) in a masked priming lexical decision task. This data pattern has led a number of models to propose that both of the former word types are "decomposed" into…
Descriptors: Models, Morphemes, Priming, Vocabulary