Showing Is Knowing: The Potential and Challenges of Using Neurocognitive Measures of Implicit Learning in the Classroom
ABSTRACT
The value of neurocognitive measures to study memory, attention, cognition, and learning is well established. However, the vast majority of work using these tools is performed in tightly controlled lab experiments using simple lab stimuli. This article looks at the viability of using multimodal neurocognitive instruments to measure implicit knowledge in real-world learning contexts. We focus on some of the most promising neurocognitive tools for this purpose, including eye-tracking, electroencephalography (EEG), and functional near infra-red spectroscopy (fNIRS). The specific challenges and potential of each tool are considered for use within learning contexts. These tools may be of particular importance to student populations that typically underperform on traditional learning assessments, including students with disabilities, English language learners, and students from low socioeconomic status backgrounds, among others. This review concludes with recommendations to the field for further work required to bring objective measures of implicit knowledge to real world learning contexts.
Implicit knowledge—knowledge not yet articulated by the learner—is argued to be foundational to the development of explicit knowledge (Polanyi, 1966) and key to understanding how we support and measure learning and cognition (Brown, Roediger, & McDaniel, 2014; Busch, 2008; Collins, 2010; Kahneman, 2011; Reber, 1993; Underwood, 1996). Implicit knowledge, by definition, is unexpressed and thus the development of implicit knowledge is hard to measure. This article builds upon literature about the measurement of implicit knowledge and considers the possibility for the use of neurocognitive methods to reveal implicit learning that has remained invisible to traditional assessment methods and can be replicated in a reliable and scalable manner.
The advent, and rapid reduction in price, of neurocognitive devices makes the idea of doing classroom research with these instruments attractive and feasible, and may be key to making advances in the measurement of implicit learning. Here, we define neurocognitive devices as any tool used by neurocognitive researchers to make inferences about neural or cognitive events, including electroencephalography (EEG), functional near infra-red spectroscopy (fNIRS), and eye-tracking. Moving neurocognitive research that has previously been conducted primarily in laboratory settings into traditional learning settings presents a number of new challenges that must be considered. Addressing these challenges, however, offers the potential for transformative tools to assess the knowledge of cognitively diverse students, who are often the most disadvantaged by today's typical educational assessments.
While educational research has a long and well-established tradition, it is fraught with challenges, prompting Berliner (2002) to call it the hardest science of all. These challenges include difficulty with uncontrolled variables, fidelity of implementation, generalization, and more (Kline, 2008). Reliable outcome measures may be the biggest challenge facing researchers in this area. To date, educational research has relied primarily on self-reported measures or standardized tests to assess educational outcomes. Unfortunately, these measures are primarily summative in nature and open to bias (Popham, 2001). Furthermore, these measures typically rely on an array of skills including language processing, motivation, executive functions, and test-taking skills, which often vary across student populations (Haladyna & Downing, 2004). The availability of more portable, affordable, and usable neurocognitive tools provides researchers working in applied learning settings with potentially more objective measures that can shed insight on the learning process as a whole and not merely learning outcomes.
IMPLICIT LEARNING
There is ample research showing that traditional academic tests do not measure all the cognitive abilities required by many everyday activities (Fletcher et al., 1998; Ginsburg, Lee, & Boyd, 2008; Sternberg, 1996). This may be particularly true for traditionally underperforming student populations including students with disabilities, students from low socioeconomic status backgrounds, English language learners (ELLs), and veterans (Sackett, Borneman, & Connelly, 2008; Tierney, 2013). These students often have a wealth of implicit knowledge that is not well articulated on standardized assessments. The implicit knowledge and skills of learners often go unrecognized when assessments contain construct-irrelevant factors such as an unrelated storyline or distracting graphics. Baroody, Bajwa, and Eiland (2009) argue that current assessments are often unable to distinguish learners who have actual mathematics misunderstandings from those who have had inadequate preschooling or have nonmathematical cognitive challenges such as slow processing speed (Hopkins & Lawson, 2006) or limited working memory and phonological skills (Chong & Siegel, 2008). When an assessment cannot distinguish between poor performance due to language processing difficulties and that due to a lack of conceptual understanding, or between executive function challenges and a lack of procedural knowledge, educators are unable to provide students with the teaching interventions needed to improve their outcomes (Brendefur et al., 2015).
To study the development of implicit knowledge, learners must be observed while participating in an activity in which they are motivated to improve. Implicit knowledge developed through everyday practical activities has been demonstrated in the mathematical abilities of assembly line workers (Scribner, 1985, 1986), comparison shoppers (Lave, Murtaugh, & de la Rocha, 1984), and racetrack handicappers (Ceci & Liker, 1986, 1988). However, few standardized approaches have been developed for assessing this implicit mathematical knowledge with proven validity, reliability, and objectivity (Rittle-Johnson & Schneider, 2014). Traditional qualitative research studies of implicit learning in a classroom require intensive individual observation and cannot typically be scaled in a replicable and reliable way.
One area of promise for the measurement of implicit learning in recent research is in game-based learning and game-based assessment of learning. Well-crafted learning games can compel players to persist in complex problem solving (Asbell-Clarke et al., 2012; Qian & Clark, 2016; Shute, Ventura, & Ke, 2015; Steinkuehler & Duncan, 2008), and engage in deep science, technology, engineering, and math (STEM) learning (Clark et al., 2011). Game-like digital activities provide a natural and engaging environment that allows actions to inform the assessment of learning (Gee & Shaffer, 2010; Shute & Kim, 2014). The design of stealth assessments (Shute, 2011) within games has helped move researchers away from a traditional assessment design of formal pretests/posttests to measure learning. Game-based learning assessments use data generated automatically through gameplay. Many game-based assessment researchers use an evidence-centered game design (ECD) model (Mislevy & Riconscente, 2006; Shute, Ventura, & Kim, 2013) where explicit learning outcomes and measures are designed and developed as part of the game design process. However, implicit knowledge is often demonstrated progressively through patterns of activity and decisions that are as difficult for researchers to articulate as for the students themselves. Educational data mining (EDM) allows researchers to automate the detection of these behaviors as they unfold in the process of problem solving, and to identify behaviors that humans can recognize but cannot always explain (Rowe et al., 2017; Rowe, Asbell-Clarke, & Baker, 2015; Sao Pedro, Baker, & Gobert, 2012). With today's sensors and wearable devices, we are not far from a time when measurements of cognitive functions can be performed in class with individual students, and digital tools and educators will be able to provide real-time interventions based upon those measurements.
To prepare for this eventuality, this article examines what multimodal analytics using neurocognitive tools can reveal about learning, whether these measures may be used to build comprehensive models of implicit learning, and what are the challenges of using these tools in natural learning settings.
NEUROCOGNITIVE TOOLS TO MEASURE IMPLICIT LEARNING
A variety of tools are now available at reasonable cost and mobility that can measure neurological data that may be valuable in revealing implicit learning. Many tools that measure cortical activity (e.g., EEG and fNIRS) are providing scalable means of obtaining evidence associated with working memory and executive function. In addition, tools such as eye-tracking headsets can examine attention and processing difficulty. Several other sensors are being used to study physiological and facial expression data to assess emotional states (Azevedo et al., 2013; Bosch et al., 2015; D'mello & Kory, 2015; Harley, Bouchet, Hussain, Azevedo, & Calvo, 2015; Picard, Fedor, & Ayzenberg, 2016). Together, this body of work holds great promise for digging more deeply into the nature of implicit learning and unleashing the potential of cognitively diverse learners. Researchers, however, should be well aware of the risks of overinterpreting sensor signals. Although research has established relationships between sensor signals and cognitive processes, these devices are far from being able to directly measure anything as complex as thought or knowledge. Below we focus on the measurement of cortical and visual data for the study of attention, memory, and cognition.
The value of neurocognitive measures to study memory, attention, cognition, and learning is well established (Gazzaniga, 2009). However, the vast majority of work using these tools is performed in tightly controlled lab experiments using simple lab stimuli. By comparison, work on collecting and analyzing data of this type in unconstrained complex learning settings is only now emerging (Ansari, Coch, & De Smedt, 2011). Thus far, commonly used neurocognitive tools including functional magnetic resonance imaging (fMRI), EEG, fNIRS, eye-trackers, and others have been expensive, difficult to set up, use, and analyze, and difficult to transport. More recently, however, advances in software and hardware have made portable, robust, and relatively affordable research-grade tools available (Chi et al., 2013; Liston, Simpson, Wong, Rich, & Stone, 2016; Si et al., 2015), opening up the potential for collecting neurocognitive data in larger-scale applied research settings.
Measures of attention provide feedback on whether a learner is engaging with the content (Wills, Lavric, Croft, & Hodgson, 2007). Measures of short-term memory use and cognitive load provide information on whether content is being processed and whether the learner's cognitive processing capacity is being overwhelmed or underutilized (Antonenko, Paas, Grabner, & van Gog, 2010). Measures of long-term memory retrieval and markers influenced by prior knowledge provide indirect evidence of learning (Khader & Rösler, 2011). These measures will help not only identify whether an intervention is working but also how to improve and optimize it. Note, however, that these tools are a long way from being able to directly measure learning or knowledge and at best can allow us to make inferences about these complex processes by studying attention and learning during carefully structured tasks.
Bringing these tools into the field remains challenging. While cost and portability are less of a hindrance today than they once were, many technical and practical challenges remain. Setup and calibration are challenging to conduct at scale. Excessive head movement, lighting conditions, individual differences, and many other factors can introduce significant measurement error. Experimental protocols must evolve beyond well controlled trials in a lab setting to capture the dynamics of real world learning interactions. Data collection and synchronization algorithms are needed to integrate multiple data streams at a low latency. More sophisticated data analysis techniques are needed to reconcile information coming from multiple sources. Consistent methods are needed to allow replication across settings. These challenges are not insurmountable, and in the coming sections we will outline the potential and challenges for the use of each of the tools in authentic learning settings. Note that here we will only focus on neurocognitive tools that show the most promise in seeing wide use across a broad range of authentic learning settings.
EYE-TRACKING
Eye movements have long been seen as a potential window into the underlying cognitive processing of visual information (Rayner, 1998), and as the best near-term indicator of where visual attention appears to be directed (D'Mello, 2016). Eye-tracking research is often found in studies of reading as well as scene perception, attention allocation, visual search, mental rotation, and countless other cognitive processing tasks (Rayner, 2009). While much of this work has focused on highly controlled lab tasks, the availability of low cost portable solutions and work toward smartphone-based eye tracking makes this tool one of the most promising to consider (Emrich, 2017; Metz, 2016).
Attention
While the mechanisms for attention control and eye movement control are distinct, they share much of the same neural anatomy, and shifts in attention and eye movements frequently co-occur (Corbetta et al., 1998). Visual acuity and color perception are strongest at the center of a fixation and drop off rapidly at the periphery of the visual field. Eye movements closely follow shifts in attention for both voluntary and involuntary eye movements (Peterson, Kramer, & Irwin, 2004). Exceptions to this coupling do exist on occasion, but primarily in highly automatized tasks with items that are closely packed (e.g., simple words skipped during reading). Even when this coupling is disrupted, as is occasionally the case in reading, close examination of the eye movement record allows for the development of robust models of attention and eye movement control (Reichle, Pollatsek, & Rayner, 2012), and for modeling the use of distinct cognitive strategies in solving mental processing tasks (Dahlstrom-Hakki, Pollatsek, Fisher, Miller, & Rayner, 2008).
Eye-tracking offers researchers the best means of determining what visual information is being processed and when. At its most basic level, eye-tracking can provide researchers with a coarse view of attention allocation during a learning activity. Many applied researchers have opted for a simple form of analysis that looks at the rough percentage of visual attention allocation across regions of interest in the visual field (Jacob & Karn, 2003). This type of analysis is easier to perform with low sampling rate eye-trackers and easier to interpret. While this analysis provides a valuable view into student learning, more sophisticated eye-tracking data collection and analysis methods have the potential to provide far greater insight. Portable eye-trackers capable of high spatial and temporal accuracy are now readily available. The current challenge, however, is bringing tight data synchronization and sophisticated analysis techniques to the far less constrained learning environments outside the lab.
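To make the region-of-interest approach concrete, the percentage analysis described above can be sketched in a few lines. This is an illustrative sketch, not any vendor's API: the fixation tuples and AOI names below are hypothetical, and a real pipeline would also need to handle blinks, off-screen samples, and calibration drift.

```python
from collections import defaultdict

def aoi_dwell_percentages(fixations, aois):
    """Percentage of total fixation time spent in each area of interest (AOI).

    fixations: list of (x, y, duration_ms) tuples from an eye-tracker.
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max) screen region.
    Fixations that land outside every AOI are tallied under 'other'.
    """
    dwell = defaultdict(float)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dur
                break
        else:
            dwell["other"] += dur
    total = sum(dwell.values()) or 1.0
    return {name: 100.0 * t / total for name, t in dwell.items()}
```

For example, with AOIs for a diagram and an accompanying text passage, the output directly answers the coarse question of where the learner's visual attention was allocated during the activity.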
Working Memory and Cognitive Load
Cognitive-processing difficulty, or cognitive load, has long been a focus of eye-tracking research. It has primarily been measured through subtle changes in fixation or gaze durations during the performance of time-sensitive tasks. For example, in reading, fixations on low-frequency words are longer as a result of the increase in processing difficulty (Inhoff & Rayner, 1986), and word length and predictability impact fixation durations on both the word itself and the subsequent word (Calvo & Meseguer, 2002). Similarly, fixation or gaze durations tend to increase during the performance of visual processing activities such as search tasks (Hooge & Erkelens, 1996; Meghanathan, van Leeuwen, & Nikolaev, 2015). The main challenge with using these measures is that they tend to be task-specific and need to be carefully vetted on a case-by-case basis. It is often hard to ascertain a priori exactly how eye movements will be impacted by the processing difficulty of a novel task. One metric showing some promise as a fairly domain-independent measure of cognitive processing load is pupil size (Klingner, Kumar, & Hanrahan, 2008). More work is needed to address issues with this approach due to other factors that influence pupil size, such as ambient light, and its lack of sensitivity to low levels of processing load (Meghanathan et al., 2015; Steinhauer, Siegle, Condray, & Pless, 2004).
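One common mitigation, drawn from standard pupillometry practice, is to express each trial's pupil trace as change from its own pre-stimulus baseline, so that slow drift and differences in ambient light largely cancel out across trials. A minimal sketch, with illustrative sample values:

```python
def baseline_corrected(trace, n_baseline):
    """Subtractive baseline correction of a pupil-diameter trace.

    trace: pupil diameters (e.g., in mm) sampled over one trial, with the
    first n_baseline samples recorded before stimulus onset.
    Returns the trace as change-from-baseline, so comparisons across
    trials reflect task-evoked dilation rather than slow drift or
    lighting differences.
    """
    baseline = sum(trace[:n_baseline]) / n_baseline
    return [sample - baseline for sample in trace]
```

Baseline correction does not address pupil-size changes caused by in-trial luminance shifts, so stimulus luminance still needs to be controlled or modeled separately.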
While direct evidence of working memory encoding is not available in the eye movement record, careful examination of that record can provide reasonable evidence of what visual information is likely to make it into working memory. Visual information is far more likely to enter working memory if it has been fixated, and research shows that the length of a fixation on an object correlates with the likelihood of detecting a subsequent change in that object (Dahlstrom-Hakki & Pollatsek, 2006; Hollingworth & Henderson, 2002). Therefore, visual information that is not fixated or is fixated only briefly is highly unlikely to make it into working memory, whereas information that receives longer and more frequent fixations is more likely to be processed.
Long-Term Memory
General measures of long-term memory formation and retrieval are not readily available in the eye-movement record. However, domain-specific measures can be developed and can provide fairly robust objective evidence of the acquisition of some forms of knowledge. For example, the impact of word frequency on fixation durations is well established (Inhoff & Rayner, 1986) and is indicative of one's familiarity with that word. Novice drivers exhibit starkly different scanning patterns than expert drivers (Pradhan et al., 2005). Indeed, differences in eye movement allocation and anticipatory eye movement patterns based on prior knowledge have been observed across a wide range of domains including chess, sports, medicine, and many others (Gegenfurtner, Lehtinen, & Säljö, 2011). In most learning domains, researchers can look for anticipatory patterns that are unlikely to occur unless the learner has acquired and is able to access prior knowledge. By looking for these patterns, researchers can gather evidence of the successful retrieval of that prior knowledge, evidence that in most cases is less prone to bias than traditional summative educational assessments.
ELECTROENCEPHALOGRAPHY
EEG is widely used in neurocognitive research, particularly in the last couple of decades, because it is a relatively affordable means of gaining insight into cognitive processes with a high level of temporal accuracy (Beres, 2017; da Silva, 2013; Jackson & Bolger, 2014; Luck, 2014). EEG research typically focuses on either band power analysis or event-related potentials (ERPs). Band power analysis compares the power within different brainwave frequency bands over time, and research has associated activity in various frequency bands with particular mental states (Anderson, Devulapalli, & Stolz, 1995). The ERP paradigm focuses instead on looking for a relatively consistent evoked response following the introduction of a stimulus (Luck, 2014). This paradigm relies on repeated presentation of the stimulus to allow identification of the evoked response signal despite the presence of significant noise.
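As a rough illustration of the band power approach, the relative power within a frequency band (e.g., Alpha, 8–12 Hz) can be estimated from a simple FFT periodogram of one channel. This sketch omits the filtering, artifact rejection, and windowing that a real EEG pipeline would require:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Fraction of an EEG channel's power falling within [low, high] Hz.

    signal: 1-D array of samples from one channel.
    fs: sampling rate in Hz.
    Computes a simple FFT periodogram and returns band power as a
    fraction of total power above DC.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    total = psd[1:].sum()  # ignore the DC component
    band = psd[(freqs >= low) & (freqs <= high)].sum()
    return float(band / total)
```

Tracking this ratio over successive time windows yields the kind of band power time course that the studies cited below relate to mental states.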
Attention
Attention has been extensively studied using both band power and ERP paradigms. ERP research has revealed the N2pc component, the second negative deflection following stimulus onset found in the posterior region contralateral to the location of attention allocation (Luck & Hillyard, 1994). This component has been well studied and provides a robust indication of the onset and location of selective attention allocation across modalities (Mazza, Turatto, & Caramazza, 2009). Indeed, ERP work is able to provide a time course of attention allocation during visual search tasks starting with the P1 component, the first major positive peak following stimulus onset, believed to be associated with distractor suppression. This is followed by the N1 component, the first major negative peak following stimulus onset, believed to be associated with target enhancement, followed shortly thereafter by the N2pc component indicating the onset and location of focused attention allocation (Luck, 2006).
Several studies have implicated Alpha band power in the allocation of focused attention, particularly at the 10 Hz frequency (Rana & Vaina, 2014). Participants allocating attention across a variety of modalities show increased Alpha band power associated with the suppression of task-irrelevant information (Foxe & Snyder, 2011). The location of the increase in Alpha band power is indicative of the modality that is suppressed and can provide a general idea of what is being suppressed. For example, differential increases in the occipital lobe have been associated with suppression of visual distractors and the relative location of the power increase is indicative of the location of the distractor (Thut, Nietzel, Brandt, & Pascual-Leone, 2006). Both band power and ERP paradigms can provide researchers with powerful tools to measure students' attention during learning tasks. Again, however, the challenge here is collecting and analyzing the data outside of tightly controlled lab settings.
Working Memory and Cognitive Load
Both band power and ERP studies have been used to study working memory use and capacity across a range of modalities. Visual working memory in particular is fairly well studied, with robust ERP signals in the occipital and posterior parietal regions found to correspond to the working memory load of the task and to plateau once working memory capacity has been reached (Vogel & Machizawa, 2004). In addition, several studies have linked theta band power in the frontal regions of the brain to working memory load (Jensen & Tesche, 2002; Klimesch, Schack, & Sauseng, 2005). Direct measures of cognitive-processing load based on increases in frontal lobe theta band power provide a potentially powerful means of assessing the processing difficulty of a learning task (Antonenko et al., 2010). This type of analysis may be particularly useful for struggling student populations, such as ELLs or students with disabilities, where the format of the learning activity may significantly impact the student's cognitive-processing load independent of the activity's specific learning goals.
Long-Term Memory
EEG signals can be useful both in characterizing the memory formation process and in providing evidence for the presence of prior knowledge. Neuronal synchronization is thought to play a key role in memory formation, and researchers have associated Gamma band synchronization with the formation of new memories (Axmacher, Mormann, Fernández, Elger, & Fell, 2006). A strong Gamma band response has been associated with memory retrieval as well (Gruber, Tsivilis, Montaldi, & Müller, 2004). In addition, several ERPs have been identified that are associated with memory formation and access. Depending on the task, a subset of these could help researchers assess student knowledge more accurately than traditional assessment instruments.
The N400, a negative deflection that occurs about 400 ms following exposure to a memory retrieval target, has been associated with the access of meaning. The N400 has been found across a wide range of stimuli including both written and spoken words, images, representations, and movements (Kutas & Federmeier, 2011). This makes it a potentially powerful means of assessing the presence of a semantic memory. The P300, on the other hand, has been observed during tasks involving the detection of an oddball stimulus, including semantic oddballs. This signal is believed to have two subcomponents, the P3a associated with attention and the P3b associated with memory (Polich, 2007). Paradigms using the P300 can be used to probe a learner's perceptions of category membership. Two additional well-researched ERPs can be helpful in assessing the presence of information in long-term memory, the error-related negativity (ERN) and the error-related positivity (Pe; Gehring, Liu, Orr, & Carp, 2012). The ERN is a negative deflection occurring 50–150 ms following the occurrence of an erroneous response and may be present even when the error is not consciously registered by the individual. The Pe is a positive deflection occurring 200–400 ms following the occurrence of an error and is thought to be related to processing aimed at reducing the recurrence of the error, although this interpretation is not well established (Hajcak, McDonald, & Simons, 2003; Overbeek, Nieuwenhuis, & Ridderinkhof, 2005). These signals can potentially be used during learning activities to determine when errors are due to a lack of knowledge.
FUNCTIONAL NEAR INFRA-RED SPECTROSCOPY
Of the portable neurocognitive tools discussed in this article, fNIRS is the newest and has the least well established literature to draw on. However, neurocognitive researchers have been using this tool for a quarter century and it offers a number of advantages over eye-tracking and EEG technologies (Boas, Elwell, Ferrari, & Taga, 2014). An fNIRS device measures changes in the concentration of oxygenated and deoxygenated hemoglobin using sensors that can detect small changes in light used to illuminate the head (Ferrari & Quaresima, 2012). fNIRS provides better temporal resolution than fMRI and better spatial resolution than EEG. It is also fairly portable and is less prone to electromagnetic noise than EEG devices, making it a potentially good choice for certain learning settings.
Because of the limited literature available in support of fNIRS measures, we will focus here on the most well established signals. Foremost among them are measures of working memory or cognitive-processing load. Several studies have shown strong evidence for the measurement of processing difficulty based primarily on changes in oxygen levels in the prefrontal cortex (Herff et al., 2014; Hirshfield et al., 2009; Sassaroli et al., 2008; Shimizu et al., 2009). More recent work by Ayaz et al. (2012) and Ayaz et al. (2013) has localized this activity primarily in the left inferior frontal gyrus in the dorsolateral prefrontal cortex, close to AF7 in the International 10–20 System. More limited evidence has associated some fNIRS measures with attention. Greater activation in the right frontotemporal cortex and the left ventrolateral prefrontal cortex has been associated with a state of high alertness (Herrmann, Woidich, Schreppel, Pauli, & Fallgatter, 2008), whereas increased activity in the default mode network (DMN), a network of brain regions known to exhibit high levels of activity when participants are off-task, has been associated with mind wandering (Durantin, Dehais, & Delorme, 2015). In addition, some have found an increase in blood oxygenation in the visual cortex in anticipation of a visual target that is predictive of the N2pc ERP signal previously discussed and of the anticipated location of the target (Huang et al., 2015). Further research is needed to determine whether these measures are robust and whether they provide significant value over existing EEG measures.
BRAIN SYNCHRONY AND CLASSROOM DYNAMICS
In recent years, a growing number of researchers have explored brain-to-brain synchrony across two or more individuals as a means of studying elements of classroom and learning dynamics (Babiloni & Astolfi, 2014; Bhattacharya, 2017). Although this area of research is not as well established as many of the measures discussed above, it does provide a potentially powerful approach. EEG data recordings from multiple participants watching the same video provide a potential means of measuring engagement by correlating signals between individuals (Cohen, Henin, & Parra, 2017; Poulsen, Kamronn, Dmochowski, Parra, & Hansen, 2017). Interperson synchrony of student EEG measures collected across an entire semester was found to significantly predict self-reported measures of social dynamics and class engagement (Dikker et al., 2017). Similar work using fNIRS has found significant synchrony in interperson hemodynamics between individuals engaged with a common narrative (Liu et al., 2017). These early findings point to a powerful and promising approach for the use of neurocognitive measures in classrooms.
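A simple stand-in for such synchrony indices is the mean pairwise Pearson correlation between participants' time series. Published studies use more elaborate measures (e.g., total interdependence or correlated component analysis), so this sketch is only a first approximation:

```python
import numpy as np

def mean_pairwise_synchrony(signals):
    """Mean pairwise Pearson correlation across participants' time series.

    signals: 2-D array with one row per participant and one column per
    time point (e.g., a band power envelope per student). Returns the
    average correlation over all participant pairs, a crude index of
    group-level synchrony.
    """
    r = np.corrcoef(signals)
    n = r.shape[0]
    upper = np.triu_indices(n, k=1)  # each pair counted once
    return float(r[upper].mean())
```

In practice such an index would be computed over sliding windows and compared against behavioral measures of engagement, as in the semester-long study cited above.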
CHALLENGES USING NEUROCOGNITIVE MEASURES IN AUTHENTIC LEARNING SETTINGS
So far, we have examined the potential for neurocognitive tools to provide measures that go beyond what is typically available using traditional educational assessments. However, significant challenges face researchers looking to bring these tools into authentic learning settings. First and foremost, these tools have primarily been used in highly constrained lab settings and will be exposed to far more noise and variability in the field. In addition, lab tasks have mainly focused on discrete units, for example, individual words or single objects. By contrast, work looking at more complex learning tasks involving multifaceted abstract concepts or unconstrained learning tasks is far less common and may not elicit the same signals. Therefore, researchers must tread carefully when interpreting data from these tools outside the lab.
Another challenge facing researchers interested in bringing these methods into open-ended learning settings is that many of them rely on averaging data across a large number of similar trials. These measures tend to be noisy, and averaging across repeated measures is necessary to identify the signal in the noise; data from a single trial is rarely sufficient to provide useful information about cortical activity. This is a particular problem for event-related paradigms, which require very precise time-locking across multiple trials. We therefore have to look for innovative research designs that allow us to bring these measures out of the lab.
One potential approach is the use of what may be termed pseudotrials. As part of most learning activities, learners will often be repeatedly exposed to the same underlying learning task to help them develop fluency. While the timing and nature of these events are variable, they will tend to place learners in similar cognitive states that can potentially be averaged across pseudotrials. For example, researchers have found that they can reliably detect several ERP signals including the P300 in an open-ended video game by averaging signals time-locked to commonly occurring game events (Cavanagh & Castellanos, 2016). The same approach can potentially be used in game-based learning settings or other online problem-solving or learning environments where learner behavior can be closely monitored.
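The pseudotrial idea can be sketched as follows: cut fixed windows around recurring events, baseline-correct each window against its pre-event interval, and average, so that a consistent evoked response survives while uncorrelated noise cancels toward zero. Function and parameter names here are illustrative:

```python
import numpy as np

def average_pseudotrials(eeg, fs, event_times, pre_s, post_s):
    """Average EEG epochs time-locked to recurring (pseudo)trial events.

    eeg: 1-D array of samples from one channel.
    fs: sampling rate in Hz.
    event_times: event onsets in seconds (e.g., recurring game events).
    pre_s/post_s: window to cut before and after each event, in seconds.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for t in event_times:
        onset = int(t * fs)
        if onset - pre < 0 or onset + post > len(eeg):
            continue  # skip events too close to the recording edges
        epoch = eeg[onset - pre : onset + post].astype(float)
        epochs.append(epoch - epoch[:pre].mean())  # pre-event baseline
    return np.mean(epochs, axis=0)
```

The critical requirement, discussed below, is that `event_times` be expressed in the same clock as the EEG samples; millisecond-level misalignment smears the average and destroys the evoked response.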
Another major challenge encountered by researchers in this area is a technical one. Effective use of these tools in a learning environment requires the collection, synchronization, and analysis of large amounts of data from multiple data streams. While modern technology makes data collection and storage fairly simple, synchronization of the data at the millisecond level is quite challenging due to the background processing latencies associated with modern operating systems, game development platforms, and network protocols. As part of a National Science Foundation-funded collaborative project (DRL-1417456 and DRL-1417967), the authors are synchronizing eye-tracking, mouse, and gamelog data from a physics particle simulator game. In prior work, the authors developed automated detectors of implicit understandings of Newtonian mechanics from gamelog data (Rowe et al., 2015, 2017). The goal of this current project is to study (1) how eye behaviors correlate with detectors of implicit physics understandings, and (2) what eye behaviors may reveal about implicit physics understandings that game data alone misses. Synchronizing these data streams relies on Lab Streaming Layer (LSL), an open-source framework for collecting and time-stamping multimodal data, and raises several timing challenges:
- Different devices have unsynchronized clocks, sometimes by significant amounts of time.
- LSL provides a local clock function that returns a relative timestamp on the machine running the LSL software, often the device driver. There can be a delay between the device capturing a sample and the LSL software obtaining that sample's data.
- Less commonly, the local clock on a computer or device can be changed while a game or experiment is running.
LSL provides a basic receiving application, Lab Recorder, but it does not handle these conversions. Custom code was created inside the game to list and connect to the available outlets. At game startup, the game calculates the difference between the time as recorded by LSL when it synchronizes itself over the local area network and the time that the local computer sees. The game stores this offset. When converting from relative to absolute times, the game prioritizes game time over LSL time because LSL is an optional, add-on feature. This offset value is added to all samples received from LSL in order to transform them into the absolute time space of the game. While the authors have done this for eye-tracking and mouse data, it can easily be replicated for other multimodal streams.
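The offset logic described above can be sketched as follows. The class and parameter names are illustrative rather than taken from the project's code, and a production version would sample the two clocks as close together as possible and periodically re-estimate the offset to guard against clock drift or resets:

```python
import time

class LslToGameClock:
    """Convert LSL-relative timestamps into the game's absolute time space.

    Mirrors the approach described above: at startup, record the offset
    between the local LSL clock and the game's clock, treating game time
    as canonical, then add that offset to every incoming sample.
    """

    def __init__(self, lsl_local_clock, game_clock=time.time):
        # Measured once at startup; both clocks should be read as close
        # together as possible to minimize offset error.
        self.offset = game_clock() - lsl_local_clock()

    def to_game_time(self, lsl_timestamp):
        """Map one LSL sample timestamp onto the game's absolute clock."""
        return lsl_timestamp + self.offset
```

The same converter instance can then be applied to every stream (eye-tracking, mouse, or other modalities) whose samples arrive with LSL-relative timestamps.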
WHERE WE ARE AND WHERE MORE WORK IS NEEDED
The field is not yet in a place where we can easily apply the aforementioned neurocognitive measures to most learning settings; however, there are two main ways that they can advance research in education today. Measures of a learner's general mental state can be continuously collected over the duration of an intervention and can be of use in virtually any learning environment. Band power analyses, pupillometry, and average regional oxygen use measures can all be applied to assess gross differences across interventions presented over a discrete period of time (Bhattacharya, 2017; Dikker et al., 2017; Ko, Komarov, Hairston, Jung, & Lin, 2017; Poulsen et al., 2017).
Although this type of analysis is of great value and goes beyond what can be done with traditional assessment instruments, it provides limited information about the underlying cognitive processes involved. Event-related paradigms are key to revealing information about cognitive-level events but are much harder to implement in unconstrained learning settings. However, in digital environments where learner behavior can be accurately tracked and synchronized with neurocognitive measures, we can now begin to explore the usefulness of event-related signals. With evidence showing that ERPs can be identified in unconstrained environments (Cavanagh & Castellanos, 2016) and can be reliably detected from as few as six trials (Olvet & Hajcak, 2009), the potential has been established. What is needed now is more work on ways to define pseudotrials across learning tasks, what signals are evident across tasks of varying complexity, and additional software, hardware, and research tools and methods to move the field forward. Addressing these challenges will help enable researchers and educators to move beyond traditional assessments and toward inclusive measures of implicit learning.