Source: Grantee Submission (161)
Showing 16 to 30 of 161 results
Peer reviewed
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully-controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Kylie L. Anglin; Vivian C. Wong; Arielle Boguslav – Grantee Submission, 2021
Though there is widespread recognition of the importance of implementation research, evaluators often face intense logistical, budgetary, and methodological challenges in their efforts to assess intervention implementation in the field. This article proposes a set of natural language processing techniques called semantic similarity as an…
Descriptors: Natural Language Processing, Program Implementation, Measurement Techniques, Intervention
Renu Balyan; Danielle S. McNamara; Scott A. Crossley; William Brown; Andrew J. Karter; Dean Schillinger – Grantee Submission, 2022
Online patient portals that facilitate communication between patient and provider can improve patients' medication adherence and health outcomes. The effectiveness of such web-based communication measures can be influenced by the health literacy (HL) of a patient. In the context of diabetes, low HL is associated with severe hypoglycemia and high…
Descriptors: Computational Linguistics, Patients, Physicians, Information Security
Dascalu, Marina-Dorinela; Ruseti, Stefan; Dascalu, Mihai; McNamara, Danielle; Trausan-Matu, Stefan – Grantee Submission, 2020
Reading comprehension requires readers to connect ideas within and across texts to produce a coherent mental representation. One important factor in that complex process is the cohesion of the document(s). Here, we tackle the challenge of providing researchers and practitioners with a tool to visualize text cohesion both within (intra) and…
Descriptors: Network Analysis, Graphs, Connected Discourse, Reading Comprehension
Bogdan Nicula; Mihai Dascalu; Tracy Arner; Renu Balyan; Danielle S. McNamara – Grantee Submission, 2023
Text comprehension is an essential skill in today's information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while…
Descriptors: Reading Comprehension, Language Processing, Models, STEM Education
Peer reviewed
Ha Tien Nguyen; Conrad Borchers; Meng Xia; Vincent Aleven – Grantee Submission, 2024
Intelligent tutoring systems (ITS) can help students learn successfully, yet little work has explored the role of caregivers in shaping that success. Past interventions to help caregivers support their child's homework have been largely disjoint from educational technology. This paper presents prototyping design research with nine middle…
Descriptors: Middle School Mathematics, Intelligent Tutoring Systems, Caregivers, Caregiver Attitudes
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
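The semantic-distance approach described above can be sketched briefly. This is a minimal illustration only, assuming a simple bag-of-words vector space; production AUT scorers use trained word embeddings rather than raw word counts, and the function and texts below are hypothetical:

```python
import math
from collections import Counter

def cosine_distance(text_a: str, text_b: str) -> float:
    """Semantic distance = 1 - cosine similarity over bag-of-words counts.
    (Stand-in for embedding-based distance used in real scoring systems.)"""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

# A more original use of the prompt object sits farther from it semantically.
prompt = "brick"
common_use = "build a brick wall"
novel_use = "grind it into pigment for paint"
print(cosine_distance(prompt, common_use) < cosine_distance(prompt, novel_use))
```

Under this framing, a larger distance between prompt and response is taken as evidence of a more original idea, which is what makes automated divergent-thinking scoring tractable at scale.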
Bogdan Nicula; Marilena Panaite; Tracy Arner; Renu Balyan; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Self-explanation practice is an effective method to support students in better understanding complex texts. This study focuses on automatically assessing the comprehension strategies employed by readers while understanding STEM texts. Data from 3 datasets (N = 11,833) with self-explanations annotated on different comprehension strategies (i.e.,…
Descriptors: Reading Strategies, Reading Comprehension, Metacognition, STEM Education
Peer reviewed
Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Watanabe, Micah; McNamara, Danielle S.; Crossley, Scott Andrew – Grantee Submission, 2021
The ability to objectively quantify the complexity of a text can be a useful indicator of how likely learners at a given level are to comprehend it. Before creating more complex models for assessing text difficulty, we note that the basic building blocks of a text are its words; inherently, a text's overall difficulty is greatly influenced by the complexity of…
Descriptors: Multilingualism, Language Acquisition, Age, Models
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Patience Stevens; David C. Plaut – Grantee Submission, 2022
The morphological structure of complex words impacts how they are processed during visual word recognition. This impact varies over the course of reading acquisition and for different languages and writing systems. Many theories of morphological processing rely on a decomposition mechanism, in which words are decomposed into explicit…
Descriptors: Written Language, Morphology (Languages), Word Recognition, Reading Processes
Anglin, Kylie; Boguslav, Arielle; Hall, Todd – Grantee Submission, 2020
Text classification has allowed researchers to analyze natural language data at a previously impossible scale. However, a text classifier is only as valid as the annotations on which it was trained. Further, the cost of training a classifier depends on annotators' ability to quickly and accurately apply the coding scheme to each text. Thus,…
Descriptors: Documentation, Natural Language Processing, Classification, Research Design
Peer reviewed
Allen, Laura Kristen; Magliano, Joseph P.; McCarthy, Kathryn S.; Sonia, Allison N.; Creer, Sarah D.; McNamara, Danielle S. – Grantee Submission, 2021
The current study examined the extent to which the cohesion detected in readers' constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-document reading tasks wherein they were prompted to think-aloud, self-explain, or evaluate the sources while reading a set of…
Descriptors: Reading Comprehension, Connected Discourse, Reader Response, Natural Language Processing