Showing 31 to 45 of 934 results
Huteng Dai – ProQuest LLC, 2024
In this dissertation, I establish a research program that uses computational modeling as a testbed for theories of phonological learning. This dissertation focuses on a fundamental question: how do children acquire sound patterns from noisy, real-world data, especially in the presence of lexical exceptions that defy regular patterns? For instance,…
Descriptors: Phonology, Language Acquisition, Computational Linguistics, Linguistic Theory
Peer reviewed
Direct link
Thornton, Chris – Cognitive Science, 2021
Semantic composition in language must be closely related to semantic composition in thought. But the way the two processes are explained differs considerably. Focusing primarily on propositional content, language theorists generally take semantic composition to be a truth-conditional process. Focusing more on extensional content, cognitive…
Descriptors: Semantics, Cognitive Processes, Linguistic Theory, Language Usage
Peer reviewed
Direct link
Haffenden, Chris; Fano, Elena; Malmsten, Martin; Börjeson, Love – College & Research Libraries, 2023
How can novel AI techniques be developed and put to use in the library? Combining methods from data science and library science, this article focuses on Natural Language Processing technologies, especially in national libraries. It explains how the National Library of Sweden's collections enabled the development of a new BERT language model for Swedish. It…
Descriptors: Foreign Countries, Artificial Intelligence, Models, Languages
Peer reviewed
Direct link
Jeon, Jaeho; Lee, Seongyong – Education and Information Technologies, 2023
Artificial Intelligence (AI) is developing in a manner that blurs the boundaries between specific areas of application and expands its capability to be used in a wide range of applications. The public release of ChatGPT, a generative AI chatbot powered by a large language model (LLM), represents a significant step forward in this direction.…
Descriptors: Artificial Intelligence, Man Machine Systems, Natural Language Processing, Models
Peer reviewed
PDF on ERIC Download full text
Rashid, M. Parvez; Xiao, Yunkai; Gehringer, Edward F. – International Educational Data Mining Society, 2022
Peer assessment can be a more effective pedagogical method when reviewers provide quality feedback. But what makes feedback helpful to reviewees? Other studies have identified quality feedback as focusing on detecting problems, providing suggestions, or pointing out where changes need to be made. However, it is important to seek students'…
Descriptors: Peer Evaluation, Feedback (Response), Natural Language Processing, Artificial Intelligence
Peer reviewed
Direct link
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Peer reviewed
Direct link
Hao Zhou; Wenge Rong; Jianfei Zhang; Qing Sun; Yuanxin Ouyang; Zhang Xiong – IEEE Transactions on Learning Technologies, 2025
Knowledge tracing (KT) aims to predict students' future performances based on their former exercises and additional information in educational settings. KT has received significant attention since it facilitates personalized experiences in educational situations. Simultaneously, the autoregressive (AR) modeling on the sequence of former exercises…
Descriptors: Learning Experience, Academic Achievement, Data, Artificial Intelligence
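The knowledge-tracing setup described in the abstract above can be illustrated with a minimal, purely hypothetical sketch: predict a student's next response from the autoregressive history of earlier attempts. This toy count-based predictor is not the authors' model; the skill names and attempt history below are invented for illustration.

def predict_next(history, skill, prior=0.5, prior_weight=2.0):
    # Estimate P(correct) on `skill` from earlier attempts on that skill,
    # smoothed toward `prior` so early predictions are never exactly 0 or 1.
    outcomes = [correct for s, correct in history if s == skill]
    return (prior * prior_weight + sum(outcomes)) / (prior_weight + len(outcomes))

# Hypothetical autoregressive history: (skill, 1 = correct, 0 = incorrect).
history = [("fractions", 0), ("fractions", 1), ("decimals", 1), ("fractions", 1)]

print(predict_next(history, "fractions"))  # 0.60: two of three attempts correct
print(predict_next(history, "decimals"))   # ~0.67: one of one attempt correct

Real knowledge-tracing models replace this running estimate with a learned sequence model over the full exercise history, but the prediction target is the same.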
Ryan Daniel Budnick – ProQuest LLC, 2023
The past thirty years have seen a rise in models of language acquisition in which the state of the learner is characterized as a probability distribution over a set of non-stochastic grammars. In recent years, increasingly powerful models have been constructed as earlier models have failed to generalize well to increasingly complex and realistic…
Descriptors: Grammar, Feedback (Response), Algorithms, Computational Linguistics
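The learner state described in the abstract above can be sketched, under simplifying assumptions, as a probability distribution over a small hypothesis space of non-stochastic grammars, renormalized after each observed utterance. The grammar names, utterances, and noise parameter below are invented for illustration; this is not Budnick's model.

def normalize(dist):
    total = sum(dist.values())
    return {g: p / total for g, p in dist.items()}

# Each hypothetical grammar either generates a string or it does not.
grammars = {
    "G_verb_final": lambda utt: utt.endswith("V"),
    "G_verb_medial": lambda utt: "V" in utt and not utt.endswith("V"),
}

# Learner state: a uniform prior over the grammar hypotheses.
state = {name: 1.0 / len(grammars) for name in grammars}

noise = 0.05  # small probability reserved for strings a grammar cannot generate

for utterance in ["SOV", "SOV", "SVO", "SOV"]:
    # Likelihood is (1 - noise) if the grammar generates the string, noise otherwise.
    state = normalize({
        name: p * ((1.0 - noise) if grammars[name](utterance) else noise)
        for name, p in state.items()
    })

print(state)  # posterior mass favors G_verb_final given mostly SOV input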
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Peer reviewed
Direct link
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Direct link
Brehm, Laurel; Cho, Pyeong Whan; Smolensky, Paul; Goldrick, Matthew A. – Cognitive Science, 2022
Subject-verb agreement errors are common in sentence production. Many studies have used experimental paradigms targeting the production of subject-verb agreement from a sentence preamble ("The key to the cabinets") and eliciting verb errors (… "*were shiny"). Through reanalysis of previous data (50 experiments; 102,369…
Descriptors: Sentences, Sentence Structure, Grammar, Verbs
Peer reviewed
PDF on ERIC Download full text
Matsuda, Noboru; Wood, Jesse; Shrivastava, Raj; Shimmei, Machi; Bier, Norman – Journal of Educational Data Mining, 2022
A model that maps the requisite skills, or knowledge components, to the contents of an online course is necessary to implement many adaptive learning technologies. However, developing a skill model and tagging courseware contents with individual skills can be expensive and error prone. We propose a technology to automatically identify latent…
Descriptors: Skills, Models, Identification, Courseware
Peer reviewed
Direct link
Mahowald, Kyle; Kachergis, George; Frank, Michael C. – First Language, 2020
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec -- which have been extremely successful in modern natural language processing (NLP) applications -- count? Although these models often have ample parametric complexity to store exemplars from their training data, they also…
Descriptors: Models, Language Processing, Computational Linguistics, Language Acquisition
Peer reviewed
PDF on ERIC Download full text
Condor, Aubrey; Litster, Max; Pardos, Zachary – International Educational Data Mining Society, 2021
We explore how different components of an Automatic Short Answer Grading (ASAG) model affect the model's ability to generalize to questions outside of those used for training. For supervised automatic grading models, human ratings are primarily used as ground truth labels. Producing such ratings can be resource-heavy, as subject matter experts…
Descriptors: Automation, Grading, Test Items, Generalization
Peer reviewed
Direct link
Unger, Layla; Yim, Hyungwook; Savic, Olivera; Dennis, Simon; Sloutsky, Vladimir M. – Developmental Science, 2023
Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: It is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of…
Descriptors: Natural Language Processing, Language Usage, Vocabulary Development, Linguistic Input
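The decades-old distributional idea summarized in the abstract above can be sketched with simple co-occurrence counts: words used in similar contexts end up with similar vectors. The tiny corpus below is invented; models such as word2vec learn richer representations from vastly more text.

from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog ate the bone",
    "the cat ate the fish",
]

# Build a co-occurrence vector for each word from words in the same sentence.
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda x: sqrt(sum(val * val for val in x.values()))
    return dot / (norm(u) * norm(v))

# "cat" and "dog" share more contexts than "cat" and "bone",
# so the first similarity printed is higher than the second.
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["bone"]))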