Publication Date
In 2025: 0
Since 2024: 3
Since 2021 (last 5 years): 13
Since 2016 (last 10 years): 17
Since 2006 (last 20 years): 19
Source
Grantee Submission: 19
Author
McNamara, Danielle S.: 5
Dascalu, Mihai: 4
Danielle S. McNamara: 2
Jeevan Chapagain: 2
Mihai Dascalu: 2
Priti Oli: 2
Rabin Banjade: 2
Vasile Rus: 2
Adar, Eytan: 1
Akihito Kamata: 1
Allen, Laura: 1
Publication Type
Reports - Research: 15
Speeches/Meeting Papers: 11
Journal Articles: 4
Reports - Descriptive: 2
Reports - Evaluative: 2
Tests/Questionnaires: 1
Education Level
Elementary Education: 4
Early Childhood Education: 2
Middle Schools: 2
Primary Education: 2
Adult Education: 1
Grade 2: 1
Grade 5: 1
Grade 6: 1
Grade 7: 1
Grade 8: 1
Intermediate Grades: 1
Location
Illinois: 1
Louisiana: 1
Pennsylvania: 1
Assessments and Surveys
Flesch Reading Ease Formula: 1
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension (AMoC), version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Maria Goldshtein; Jaclyn Ocumpaugh; Andrew Potter; Rod D. Roscoe – Grantee Submission, 2024
As language technologies have become more sophisticated and prevalent, there have been increasing concerns about bias in natural language processing (NLP). Such work often focuses on the effects of bias instead of sources. In contrast, this paper discusses how normative language assumptions and ideologies influence a range of automated language…
Descriptors: Language Attitudes, Computational Linguistics, Computer Software, Natural Language Processing

Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2023
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the…
Descriptors: Computational Linguistics, Programming, Computer Science Education, Programming Languages
Bogdan Nicula; Mihai Dascalu; Tracy Arner; Renu Balyan; Danielle S. McNamara – Grantee Submission, 2023
Text comprehension is an essential skill in today's information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while…
Descriptors: Reading Comprehension, Language Processing, Models, STEM Education

Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
Cioaca, Valentin Sergiu; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Numerous approaches have been introduced to automate the process of text summarization, but only a few can be easily adapted to multiple languages. This paper introduces a multilingual text processing pipeline integrated into the open-source "ReaderBench" framework, which can be retrofitted to cover more than 50 languages. While considering the…
Descriptors: Documentation, Computer Software, Open Source Technology, Algorithms
Subramonyam, Hariharan; Seifert, Colleen; Shah, Priti; Adar, Eytan – Grantee Submission, 2020
Learning from text is a "constructive" activity in which sentence-level information is combined by the reader to build coherent mental models. With increasingly complex texts, forming a mental model becomes challenging due to a lack of background knowledge, and limits in working memory and attention. To address this, we are taught…
Descriptors: Visual Aids, Natural Language Processing, Reading Strategies, Educational Technology
Corlatescu, Dragos-Georgian; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previous information. While reading, a mental representation is constructed in the reader's mind. The mental model comprises the words in the text, the relations between the words, and inferences linking to concepts in prior knowledge. The automated model of…
Descriptors: Reading Comprehension, Memory, Inferences, Syntax
Hazelton, Lynette; Nastal, Jessica; Elliot, Norbert; Burstein, Jill; McCaffrey, Daniel F. – Grantee Submission, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Crossley, Scott; Wan, Qian; Allen, Laura; McNamara, Danielle – Grantee Submission, 2021
Synthesis writing is widely taught across domains and serves as an important means of assessing writing ability, text comprehension, and content learning. Synthesis writing differs from other types of writing in terms of both cognitive and task demands because it requires writers to integrate information across source materials. However, little is…
Descriptors: Writing Skills, Cognitive Processes, Essays, Cues
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes
Zhongdi Wu; Eric Larson; Makoto Sano; Doris Baker; Nathan Gage; Akihito Kamata – Grantee Submission, 2023
In this investigation we propose new machine learning methods for automated scoring models that predict the vocabulary acquisition in science and social studies of second grade English language learners, based upon free-form spoken responses. We evaluate performance on an existing dataset and use transfer learning from a large pre-trained language…
Descriptors: Prediction, Vocabulary Development, English (Second Language), Second Language Learning
Cai, Zhiqiang; Siebert-Evenstone, Amanda; Eagan, Brendan; Shaffer, David Williamson; Hu, Xiangen; Graesser, Arthur C. – Grantee Submission, 2019
Coding is a process of assigning meaning to a given piece of evidence. Evidence may be found in a variety of data types, including documents, research interviews, posts from social media, conversations from learning platforms, or any source of data that may provide insights for the questions under qualitative study. In this study, we focus on text…
Descriptors: Semantics, Computational Linguistics, Evidence, Coding
Olney, Andrew M. – Grantee Submission, 2021
This paper explores a general approach to paraphrase generation using a pre-trained seq2seq model fine-tuned using a back-translated anatomy and physiology textbook. Human ratings indicate that the paraphrase model generally preserved meaning and grammaticality/fluency: 70% of meaning ratings were above 75, and 40% of paraphrases were considered…
Descriptors: Translation, Language Processing, Error Analysis (Language), Grammar