Peer reviewed
ERIC Number: ED659404
Record Type: Non-Journal
Publication Date: 2023-Sep-29
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Assessing Child Development in Challenging Circumstances: A Multi-Sample Assessment of the Performance of a Remote Adaptation of the IDELA
Kate Schwartz; Lina Torossian; Duja Michael; Jamile Youssef; Hiro Yoshikawa; Somaia Razzak; Katie Murphy
Society for Research on Educational Effectiveness
Background/Context: The COVID-19 pandemic challenged the way we conduct research. For some modes of data collection, such as interviews, there was a ready (if not perfect) analog: face-to-face interviews became phone-based; paper-and-pen surveys moved online. Others, such as direct assessments of child development, proved more challenging. Despite the challenges, there is a benefit to being able to conduct child assessments entirely remotely. Even in the absence of a pandemic, some areas are difficult to access due to conflict, natural disasters, or other crises; others are prohibitively expensive to access because of their remoteness. Given reliable tools, monitoring and evaluating programs in such circumstances may best be done remotely.

Study Objective: This paper presents a case study of how we adapted, piloted, used, and then further assessed a remote, video-call-based version of the International Development and Early Learning Assessment (IDELA), and it offers a psychometric assessment of how the remote IDELA performed and what changes might increase its reliability and validity. Findings are critical for future early childhood development programming and research in contexts where in-person assessments are not feasible and remote assessments present their own (often daunting) challenges.

Intervention and Setting: The IDELA is a tool for assessing 3- to 6-year-olds' numeracy, literacy, motor, and social-emotional development that has been used in over 75 countries and has undergone multiple cross-country validations (e.g., Halpin et al., 2019; Pisani, Borisova, & Dowd, 2018). It has traditionally been conducted one-on-one and in person. During the COVID-19 pandemic, when in-person programming and data collection were no longer possible, we adapted the IDELA for caregiver-enabled, video-call data collection. Items requiring resources families might not have on hand were modified (e.g., substituting an e-book for a hard-copy book) or dropped (e.g., the jigsaw puzzle item). The resulting tool was piloted with 3- to 6-year-olds in the Akkar, Bekaa, and Tripoli regions of Lebanon using WhatsApp video calls in Fall 2021. It was then used in an impact evaluation of a remote early learning program in January-March 2022 (baseline) and June-July 2022 (endline). Once in-person services and assessment were back in place, we also tested the remote IDELA directly against the in-person IDELA (with children randomized into one or the other) in January-February 2023.

Population/Participants/Subjects: Our pilot sample (Fall 2021) included 495 3- to 6-year-olds in their first (35%), second (47%), or third (18%) year of early childhood education (ECE; what Lebanon calls KG1, KG2, and KG3, but what in some other contexts would be called early preschool, preschool, and kindergarten). Our impact evaluation sample (January-July 2022) included 1,606 five-year-old (or just-turned-six) children (96% Syrian refugees) from hard-to-access areas of Lebanon with little to no prior access to ECE. Our follow-up study (January-February 2023) included 647 3- to 6-year-olds currently in KG1 (32%), KG2 (34%), and KG3 (34%); half were randomly assigned to receive the remote IDELA and half the in-person IDELA. All three samples were drawn from Akkar, Baalbek, Bekaa, and Tripoli.

Research Design: Our pilot sample children were all in remote ECE programming and were all assessed once, using the remote IDELA. Our randomized controlled trial impact evaluation included wait-list control children as well as treatment children; all were assessed before ECE services began and again after the treatment group received the 11-week program (just before it was offered to the control group). Our follow-up sample was randomly assigned to either the remote or the in-person IDELA and assessed once.
Data Collection and Analysis: We conduct factor analyses and item response theory (IRT) analyses on all three samples to assess the structure and reliability of the remote tool and compare it to past work with the in-person IDELA, both in Lebanon and in other contexts. For the follow-up study, we also compare performance on the remote IDELA directly to performance when assessed in person, and we examine whether either modality works better for younger versus older children, children with different temperaments, or children with different levels of past experience with video calls. For the follow-up sample, we additionally examine enumerator reports of caregiver behavior and attempted helping during the remote assessment, as well as enumerator reports of child attention and focus during remote versus in-person assessment.

Findings/Results: Analyses are currently ongoing. We have thus far found that the remote tool performs similarly to the in-person tool in terms of factor loadings and reliability, but it does introduce a new dimension of caregiver attempted helping, encouragement, and commentary. This helping does not appear to correlate with child ability, current ECE status, or expected in-person performance, though we are currently collecting (and will present) qualitative data from enumerators who administered both versions of the tool in the follow-up study in order to better understand it.

Conclusions: Even in the absence of a pandemic, some areas (especially those facing multiple crises, as Lebanon currently is) are difficult or too costly to access for in-person data collection. And yet, direct assessments remain the best way to monitor and support child development and improve early childhood programming. As such, it is critical that we better understand how to conduct direct assessments under a wide array of conditions, including fully remote programming, and that we examine what new variation or noise such changes in assessment modality introduce into our data collection tools. We already know that even the best in-person tools, while reliable on average, perform better or worse for different types of children (e.g., children who do not warm up quickly to strangers likely underperform in in-person assessments). We need to understand what new noise, and possible biases, remote versions introduce; how to account for these when further adapting and using such tools; and how to compare them to in-person assessments. In this presentation, we will focus on what we have learned from the three samples with which we have used the remote IDELA, as well as recommendations for future use of and research on this tool.
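As one illustration of the kind of reliability check described above, the internal consistency of a set of assessment items is commonly summarized with Cronbach's alpha. The sketch below is a generic, minimal implementation, not the authors' actual analysis code; the function name and the demo score matrix are hypothetical, standing in for an (n children x n items) block of IDELA item scores.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_children x n_items) score matrix."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical demo: 6 children x 4 binary (pass/fail) items, not real IDELA data
demo = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
])
alpha = cronbach_alpha(demo)  # higher alpha = items covary more strongly
```

In practice, comparing alpha (and factor loadings) for the remote versus in-person samples is one simple way to check whether a change in assessment modality degrades the tool's internal consistency.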
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: Early Childhood Education; Preschool Education; Elementary Education; Kindergarten; Primary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Identifiers - Location: Lebanon
Grant or Contract Numbers: N/A