A System's View of E-Learning Success Model
ABSTRACT
The past several decades of e-learning empirical research have advanced our understanding of the effective management of critical success factors (CSFs) of e-learning. Meanwhile, measures of dependent and independent variables have proliferated. We argue that a significant reduction in dependent and independent variables and their measures is necessary for building an e-learning success model, and that such a model should incorporate the interdependent (not independent) process nature of e-learning success. We applied structural equation modeling to empirically validate a comprehensive model of e-learning success at the university level. Our research advances the existing literature on CSFs of e-learning and provides a basis for comparing existing research results as well as guiding future empirical research to build robust e-learning theories. A total of 372 valid, unduplicated responses from students who had completed at least one online course at a university in the Midwestern United States were used to examine the structural model. Findings indicated that the e-learning success model satisfactorily explains and predicts the interdependency of six CSFs of e-learning systems (course design quality, instructor, motivation, student-student dialog, student-instructor dialog, and self-regulated learning) and perceived learning outcomes.
INTRODUCTION
From its unpretentious origins approximately 40 years ago (Eom & Arbaugh, 2011a; Hiltz & Turoff, 1978), distance education has undoubtedly become mainstream and entered a golden age. The 13th survey tracking online education in the United States by the Babson Survey Research Group provides evidence to support e-learning as a mainstream delivery medium and documents three important facts about the extent of distance education. First, distance education enrollments have increased continually at a greater rate (3.9%) than those of overall higher education. Second, a large proportion (63.3%) of chief academic leaders believe that e-learning is a critical component of their long-term growth strategies. Third, a substantial proportion (71.4%) of chief academic leaders rated the learning outcomes of e-learning as comparable or superior to those of face-to-face instruction (Allen, Seaman, Poulin, & Straut, 2016; Eom & Arbaugh, 2011b).
A major theme of e-learning empirical research addresses student performance relative to face-to-face courses. Some meta-analytical studies (Means, Toyama, Murphy, Bakia, & Jones, 2009; Sitzmann, Kraiger, Stewart, & Wisher, 2006) suggest that e-learning outcomes are equal to or in some cases better than those of face-to-face learning. Meanwhile, many researchers also expressed concerns regarding the effectiveness of e-learning systems (Kellogg & Smith, 2009; Morgan & Adams, 2009). Some studies found no significant difference in examination scores across delivery mediums or disciplines (Friday, Friday-Stroud, Green, & Hill, 2006). Consequently, a large number of studies during the past decades attempted to identify e-learning critical success factors (CSFs) that must be managed to increase the effectiveness of e-learning systems.
Broadly, there are two distinctive groups of e-learning empirical research on CSFs. The first group deals with the direct relationships between each success factor and learning outcomes and/or students’ satisfaction. However, these studies largely ignore the synergistic effects of success factors interacting together (Arbaugh, 2005; Barbera, Clara, & Linder-Vanberschot, 2013; Eom & Ashill, 2016; Eom, Ashill, & Wen, 2006; Johnson, Hornik, & Salas, 2008; Kim, Kwon, & Cho, 2011; Mashaw, 2012; Peltier, Drago, & Schibrowsky, 2003; Sun, Tsai, Finger, Chen, & Yeh, 2008).
The second group of research deals with modeling several CSFs while considering the interdependence of the CSFs that affect e-learning outcomes (LaPointe & Gunawardena, 2004; Peltier, Schibrowsky, & Drago, 2007; Young, 2005; Wan, 2010; Wan, Wang, & Haggerty, 2008; Wilson, 2007). However, these studies examined the relationships between a subset of CSFs and learning outcomes and/or satisfaction, using only some of the key predictors of e-learning outcomes.
All these studies have advanced our understanding of the effective management of CSFs of e-learning. Nevertheless, we believe that two issues hamper progress toward building an e-learning success model: (1) the proliferation of measures of dependent and independent variables and (2) the need for a holistic success model with multiple dimensions (constructs). On the first issue, measures of dependent and independent variables have proliferated. Using 120 articles on e-learning and blended learning published between 2000 and 2009, Arbaugh, Desai, Rau, and Sridhar (2010) reviewed and classified 158 independent variables and 107 dependent variables into six categories (course performance, psychological, course delivery, student skill, demographic, and others). A myriad of independent and dependent variables in empirical research in any field hampers progress toward a cumulative research tradition (DeLone & McLean, 1992; Keen, 1980). Thus, a significant reduction in the number of dependent and independent variable measures is a guiding principle for making progress toward a cumulative tradition in e-learning empirical research. The second issue is the need to build a holistic e-learning success model with multiple dimensions of CSFs. We view e-learning as an open system of human entities (students and instructor) and nonhuman entities (learning management systems and information systems) working to maximize e-learning outcomes and student satisfaction (Figure 1). An e-learning system, as a purposeful system, is synergistic: there exists a dynamic relationship among student motivation, course design quality, the instructor's facilitating roles, and students’ academic engagement. The total effect of synergistic, interdependent entities working together is more than the sum of their individual effects.

The purpose of this article is to present a learning theory-based, integrative, and holistic e-learning success model at the university level, together with empirical testing of the validity of the model. The model we present depicts the important relationships among a set of interdependent pivotal factors of e-learning systems working together. The current study extends the study of Eom and Ashill (2016), which provided a theoretically grounded conceptualization and incorporated more fully developed e-learning success measures to revisit the question of the key predictors of perceived learning outcomes and learner satisfaction, derived from three constructivist models (constructivism, collaborativism, and the cognitive information processing model) (Eom, Ashill, & Arbaugh, 2016). However, their study did not consider the dynamic nature of e-learning success factors. The term dynamic model is often used as a synonym for systemic model. A dynamic model can be defined as an interactive system with a defined sequence of inputs, processes, and outputs over time, together with feedback loops. Our systemic model of e-learning (Figure 1) describes an e-learning system's behavior as a set of states that occur in a defined sequence of inputs, processes, and outputs over time (e.g., a semester), without necessarily sampling from multiple periods. The students’ perceived learning outcomes and satisfaction are the results of this systemic process of e-learning over time.
The next section presents a system's view of e-learning. This is followed by the description of the research model and hypotheses development; the research methodology, including the development of a survey instrument to collect data and the structural equation modeling (SEM) methodology; and the results of a partial least squares (PLS) analysis of the research model. We then discuss the study findings and outline implications for future university e-learning. The final section describes limitations and directions for future research.
A SYSTEM'S VIEW OF E-LEARNING
A system is a whole that cannot be taken apart without loss of its essential characteristics, and hence it must be studied as a whole. In systems thinking, instead of explaining a whole in terms of its parts, the parts are explained in terms of the whole. Things to be explained are therefore viewed as parts of larger wholes rather than as wholes to be taken apart.
The systems approach to e-learning helps us view and analyze e-learning systems as a dynamic set of interdependent subentities interacting together; e-learning systems are not explainable from the characteristics of isolated subentities. The components of a systemic model consist of inputs, processes, and outputs (Figure 1).
Inputs: The theoretical foundation of our research model (see Figure 2) is based on the constructivist learning theories as discussed in Eom and Ashill (2016). This model is in part derived from the virtual learning environment (VLE) effectiveness model of Piccoli, Ahmad, and Ives (2001). The VLE model postulates that two antecedents (human dimension and design dimension) determine the effectiveness of e-learning systems. The human dimension is concerned with two human entities (students and instructor) and their various attributes; and the design dimension includes learning management systems (LMS), self-regulated learning (SRL) and learner control, course design quality, and interaction among human entities.

Processes: There are three distinct types of processes to produce learning outcomes.
The students’ learning/cognitive process: The cognitive process is composed of a series of phases (perception, attention, cognitive load, coding, retrieval/transfer, and metacognition) supported by the different types of memories (sensory memory, working memory, and long-term memory) (Alonso, López, Manrique, & Viñes, 2005).
The students’ SRL process: According to Zimmerman, self-regulated students are the ones who are “metacognitively, motivationally, and behaviorally active participants in their own learning process” (Zimmerman, 1986); and they are characterized by three inseparable features: their use of SRL strategies, their responsiveness to self-oriented feedback about learning effectiveness, and their interdependent motivational processes (Zimmerman, 1990).
Self-regulated learners are the ones who motivate themselves and put forth strenuous effort, even studying materials that are uninteresting. They also self-manage the learning process of planning, monitoring, organizing, and controlling: planning includes setting goals and selecting appropriate learning strategies (time management, metacognition, effort regulation, and organization), while controlling involves evaluating their own progress and dynamically responding to it.
Dialog: One thing that sets e-learning apart from traditional face-to-face learning is the psychological and communication space (transactional distance) between the instructor and students (Moore, 1993). The transactional distance in e-learning can be reduced by many types of interactions: learner-content, learner-instructor, learner-learner, and learner-technology interaction (Hillman, Willis, & Gunawardena, 1994; Moore, 1989). Learner-technology (learner-interface) interaction permits a learner to interact with content, the instructor, and other learners.
Of these four types of interactions, the constructivist model of learning views the interaction and dialog between students and between the instructor and students as being critical ingredients to the success of e-learning. Therefore, our model includes only interaction between human entities. Unlike many empirical studies that measured the effects of all types of interaction (negative, neutral, and positive) on learning outcomes and satisfaction, our model incorporates only purposeful, constructive, meaningful interaction valued by each party (dialog). Dialog promotes learning through active participation and enables deep cognitive engagement for developing higher order knowledge (Moore, 1993; Muirhead & Juwah, 2004).
At the bottom of Figure 1, there are four different types of contextual variables that are not controlled by any of the entities in e-learning systems but affect the learning process and consequently the outputs. The learning process and outcomes are affected by multiple dimensions of learners’ characteristics including biological characteristics/senses (physiological dimension); personality characteristics such as attention, emotion, motivation, and curiosity (affective dimension); information processing styles such as logical analysis or “gut” feelings (cognitive dimension); and psychological/individual differences (psychological dimension) (Dunn, Beaudry, & Klavas, 1989).
Outputs: Learning outcomes used in e-learning empirical research are based on the taxonomy of educational objectives in the domains of cognitive behaviors (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956), affective behaviors (Krathwohl, Bloom, & Masia, 1964), and psychomotor behaviors (Simpson, 1966). The cognitive domain learning outcomes measure intellectual learning in terms of theories, comprehension, and application of course materials to problem solving. The affective domain includes appreciation, feeling, satisfaction, and attitude changes. The psychomotor domain refers to movements of the body associated with mental activities. The majority of e-learning empirical studies include learning outcomes in the cognitive and affective domains.
Feedback: Feedback refers to the return of information about the result of the learning process to the input so that a system is regulated to improve the process or outputs. This feedback is used to improve the inputs of students’ engagement and efforts, the instructors’ communication behaviors, and the information quality of learning management systems. Feedback is an essential element of a system to improve the process or the outcomes (Karoly, 1993).
RESEARCH MODEL AND HYPOTHESES DEVELOPMENT
Without well-defined e-learning success measures (the dependent variables), it is difficult to measure independent variables, and vice versa. We examined three constructivist models in terms of goals, assumptions, and their implications to extract six predictor constructs (see Table 1 of Eom & Ashill, 2016). The left side of the model contains more “exogenous” or “preexisting” latent variables (also known as predictor [independent] latent variables), while the right side includes more “outcome” or “endogenous” latent variables (also known as criterion [dependent] latent variables). The predictor constructs are further classified into the inputs and processes, as shown in Figure 2.
| Construct / Item | Loading | t-Value^a | AVE | Internal Consistency^b |
|---|---|---|---|---|
| Course Design Quality | | | .67 | .91 |
| Design1 | .78 | 22.90 | | |
| Design2 | .81 | 29.03 | | |
| Design3 | .82 | 35.96 | | |
| Design4 | .83 | 39.64 | | |
| Design5 | .83 | 37.46 | | |
| Instructor Involvement | | | .75 | .94 |
| Ins1 | .90 | 73.16 | | |
| Ins2 | .84 | 43.40 | | |
| Ins3 | .79 | 27.42 | | |
| Ins4 | .91 | 92.15 | | |
| Ins5 | .88 | 62.47 | | |
| Student-Student Dialog | | | .87 | .97 |
| Diastu1 | .91 | 89.71 | | |
| Diastu2 | .96 | 195.43 | | |
| Diastu3 | .95 | 150.05 | | |
| Diastu4 | .91 | 68.81 | | |
| Student-Instructor Dialog | | | .80 | .94 |
| Diaist1 | .89 | 66.81 | | |
| Diaist2 | .91 | 89.93 | | |
| Diaist3 | .86 | 59.76 | | |
| Diaist4 | .91 | 81.02 | | |
| Student Motivation | | | .53 | .82 |
| Extm1 | .81 | 28.09 | | |
| Extm2 | .61 | 27.88 | | |
| Extm3 | .65 | 11.60 | | |
| Intm3 | .82 | 15.21 | | |
| Self-Regulatory Learning Strategies | | | .57 | .84 |
| Sreg1 | .71 | 16.62 | | |
| Sreg2 | .80 | 29.20 | | |
| Sreg3 | .81 | 29.65 | | |
| Sreg4 | .70 | 14.30 | | |
| Learning Outcomes | | | .77 | .93 |
| Out1 | .89 | 84.75 | | |
| Out2 | .90 | 83.29 | | |
| Out3 | .84 | 37.98 | | |
| Out4 | .87 | 59.01 | | |
| User Satisfaction | | | .75 | .92 |
| Sat1 | .85 | 48.26 | | |
| Sat2 | .92 | 81.89 | | |
| Sat3 | .74 | 25.09 | | |
| Sat4 | .94 | 132.36 | | |
- Note: AVE = average variance extracted.
- ^a Bootstrapping results (N = 5,000).
- ^b PLS uses composite reliability, an alternative to Cronbach's alpha, as the measure of internal consistency.
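As a sanity check on the reliability statistics reported above, the following minimal sketch computes AVE and composite reliability from standardized loadings. This is an illustration, not the study's actual PLS software output; the `design` values are the Design1-Design5 loadings from the table, and small rounding gaps are expected because the published loadings are rounded to two decimals.

```python
# Illustrative reliability statistics from standardized item loadings.
# AVE is the mean of squared loadings; composite reliability (rho_c) is
# the internal-consistency measure PLS reports instead of Cronbach's alpha.

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """rho_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Course Design Quality loadings from the table (rounded to two decimals)
design = [0.78, 0.81, 0.82, 0.83, 0.83]
print(round(ave(design), 2))                    # close to the reported .67
print(round(composite_reliability(design), 2))  # close to the reported .91
```

Both values fall within rounding error of the table's .67 and .91, which is the usual convention for judging convergent validity (AVE above .50) and internal consistency (composite reliability above .70).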
Course Design Quality and Dialog
Course design quality and course structure can influence the learning process and learning outcomes (Swan, Matthews, Bogle, Boles, & Days, 2011). This is because “online classes are more successful in supporting deep learning when they are characterized by a community of inquiry” (Rubin & Fernandes, 2013, p. 125). Online course design needs a more active, cooperative learning mode, which requires more group discussions and student-to-student (SS) interactions/dialog and student-to-instructor (SI) interactions/dialog.
The university under study implemented a policy that all courses to be taught online must meet the Quality Matters (QM) standards, and therefore they must pass QM internal review. The QM rubric mandates that “learning activities provide opportunities for interaction that support active learning” and that “the instructor's plan for classroom response time and feedback on assignments is clearly stated”; both standards carry 3 points, the maximum possible, indicating that meeting them is mandatory. Peltier et al. (2007) proposed and tested a structural model of online education success factors including mentoring, roles of students and the instructor, course content, course structure, SI interaction, and SS interaction. The findings of their study indicate that course structure is positively related to the perceived quality of SI interaction, but they failed to show a positive relationship with the perceived quality of SS interaction.
- H1: Students who perceive course design in online courses more favorably will report a higher level of dialog with other students.
- H2: Students who perceive course design in online courses more favorably will report a higher level of dialog between students and the instructor.
Instructor's Involvement and Dialogs
In response to the concern that the role of the instructor has been a neglected area of e-learning research (Arbaugh, 2010), a recent study (Eom & Ashill, 2016) presented an instructor-centric view of e-learning systems in which the e-learning CSFs consist of either instructor-centric constructs (instructor and course design quality) or constructs that can be enhanced by the instructor's input (SI dialog, SS dialog, SRL, and student motivation). Under the constructivist learning models, the primary role of the instructor becomes the “guide on the side” supporting learner-centered active learning, instead of the “sage on the stage” (Collison, Elbaum, Haavind, & Tinker, 2000; Heuer & King, 2004). E-learners typically learn through the shared understanding of a group of learners. Therefore, instruction becomes communication-oriented, and the instructor becomes a discussion leader.
A survey of students' perceptions of instructors' roles in blended and e-learning environments identified five constructs representing the most important instructor roles: course designer and organizer, discussion facilitator, social supporter, technology facilitator, and assessment designer (Hung & Chou, 2015). The instructor facilitates SI dialogs by initiating group discussion and providing individual feedback of various forms (cognitive, diagnostic, prescriptive, and metacognitive), and thereby creates learning communities that induce students to become active learners.
The formal instructor activities include the act of facilitating discourse and direct instruction of cognitive and social processes to produce meaningful and educationally worthwhile learning outcomes (Anderson, Rourke, Garrison, & Archer, 2001). The informal instructor activities, or immediacy behaviors, refer to communication behaviors that reduce the social and psychological distance between students and the instructor (Arbaugh, 2010). Consequently, immediacy behaviors lead to a higher level of SI dialog. For example, Peltier et al. (2007) found a positive relationship between the perceived quality of SS interaction and instructor mentoring as well as a positive relationship between the perceived quality of instructor-to-student interaction and instructor mentoring.
Eom and Ashill (2016) reported a significant positive relationship between instructor activities and learning outcomes and between dialogs (SS and SI) and learning outcomes in online undergraduate and graduate courses. The specific instructor activities include active involvement in facilitating the online class by the following: providing timely, helpful feedback on exams, assignments, and projects; stimulating students to intellectual effort beyond that required by face-to-face classes; caring about each individual student's learning through careful monitoring of his/her progress; and being responsive to student concerns. Building on previous research about these direct effects between instructor roles and learning outcomes and between dialog and learning outcomes, the current study focuses on the roles of SS and SI dialog as mediating constructs between instructor roles and learning outcomes.
- H3: A higher level of instructor activities will lead to a higher level of dialog among students.
- H4: A higher level of instructor activities will lead to a higher level of dialog between students and the instructor.
Instructor's Involvement and Self-Regulated Learning
As discussed earlier, self-regulated students are “metacognitively, motivationally, and behaviorally active participants in their own learning process” (Zimmerman, 1986): they use SRL strategies, respond to self-oriented feedback about learning effectiveness, and sustain interdependent motivational processes (Zimmerman, 1990). They motivate themselves, put forth strenuous effort even on uninteresting materials, and self-manage the learning process of planning, monitoring, organizing, and controlling.
- H5: A higher level of instructor involvement will lead to a higher level of using SRL strategies.
Instructor's Roles and Student Motivation
Research in educational psychology reports that students’ motivation (both intrinsic and extrinsic) is affected by a range of social and environmental factors that facilitate or undermine motivation (Ryan & Deci, 2000a). The instructor is one of the critical factors that influence the level of students’ motivation. Several ways are suggested to enhance intrinsic and extrinsic motivation (Dev, 1997), including providing positive responses to students’ questions and praising students, thereby helping them to develop a feeling of competence. Moreover, the instructor sets learning goals and creates assignments and class materials that challenge students. In addition, the instructor is a motivator who stimulates students to intellectual effort with a variety of stimulating and challenging assignments.
- H6: A higher level of instructor activities will lead to a higher level of student motivation.
Motivation and Self-Regulated Learning Strategies
Motivation is defined as the self-generated energy that gives behavioral direction toward a particular goal (Zimmerman, 1985). Motivation can be either intrinsic or extrinsic. Intrinsic motivation is the psychological feature that makes an individual do an activity for its inherent satisfaction. Extrinsic motivation, on the other hand, makes an individual take an action toward a goal to attain some separable outcome such as rewards and recognition (Ryan & Deci, 2000a).
Motivation can be better and more fully understood through the broad framework of self-determination theory (Ryan & Deci, 2000b). Self-determination theory (SDT) is a macrotheory that comprises six subtheories. One of them is the organismic integration theory (OIT), which describes different attributes of extrinsic motivation: its different forms, contextual factors, and determinants. OIT further divides extrinsic motivation into four types in terms of regulatory styles: external, introjected, identified, and integrated regulation. Moreover, SDT postulates that humans have innate psychological needs that are the basis for self-motivation: autonomy, competence, and relatedness. Autonomy is the state in which one's actions and behavior are autonomous; humans then perceive that their behavior emanates from the self (Ryan & Deci, 2000b). The roles of autonomy in intrinsic motivation are well explained by two subtheories of SDT: causality orientations theory (Deci & Ryan, 2008) and basic psychological needs theory (Su & Reeve, 2011). Relatedness is another psychological need that motivates people. To understand and explain this human need, relationship motivation theory was developed as a subtheory of SDT (Deci & Ryan, 2008), and many studies support the hypothesis that the need to belong is a fundamental and extremely pervasive motivation (Baumeister & Leary, 1995). A recent empirical study also investigated the relationship between the level of peer relatedness and graduate students' self-determined motivation in synchronous hybrid learning environments (Butz & Stupnisky, 2016).
Education psychologists such as Pintrich, Smith, Garcia, and McKeachie (1991) believe that motivation and SRL strategies are inseparable constructs. Furthermore, student motivation triggers the SRL process (Zimmerman, 2008), which is composed of (1) selecting and applying SRL strategies to achieve desired learning outcomes and (2) continuously monitoring the learning process (Zimmerman, 1990). The learning process may include the internal mental processes of information processing, planning, organizing, monitoring, evaluating, and controlling learning efforts and activities, as well as interaction among students and between students and the instructor. A repertoire of SRL strategies includes rehearsal, elaboration, organization, critical thinking, time/study environmental management, effort regulation, peer learning, help-seeking, and metacognitive self-regulation (Pintrich, Smith, Garcia, & McKeachie, 1993).
Another aspect of SRL is the regulation of motivation (Wolters, 2003). Unlike Pintrich et al. (1991) and Zimmerman (2008), who treated motivation as a constant, Wolters viewed it as a variable. Wolters found that students’ regulation of their motivation levels was positively associated with motivational constructs and could therefore improve learning outcomes. It is further suggested that more attention should be paid to the regulation of motivation and that more research is needed to investigate particular strategies for regulating learners’ motivation for academic tasks (Wolters, 2003).
Using path analysis, the study by Young (2005) specifically aimed to answer the question: Do intrinsically motivated students employ different learning strategies than extrinsically motivated students? The study found that extrinsic motivation had a positive relationship only to superficial learning strategy (rehearsal), whereas intrinsic motivation had a positive relationship to each of the SRL strategies (from time management to organization), as shown in Figure 1.
A recent study, using the same data as the current study, specifically addressed the effects of online students' intrinsic and extrinsic motivation on SRL strategies and on students’ perceived e-learning outcomes and satisfaction (Eom, 2015). The results indicated that both intrinsic and extrinsic student motivation had a significant positive association with SRL strategies and learning outcomes. However, in keeping with our goal of building a parsimonious model, the research model (Figure 2) does not separate the motivation construct into two. Student motivation was measured with six items: three measuring extrinsic motivation and three measuring intrinsic motivation. In the measurement model phase of data analysis, however, two of the items measuring intrinsic motivation were dropped due to poor measurement properties, despite drawing upon well-established measures for both types of motivation. We were therefore not able to parcel out differences between intrinsic and extrinsic motivation in the current study.
While previous studies examined the single directional effects of motivation on the students' use of self-regulatory strategies, it is also important to understand the reciprocal interplay between motivation and SRL constructs. A cross-lagged structural equation model identified significant reciprocal effects whereby the students' SRL strategies can also be used to predict the students’ subsequent motivation (Ning & Downing, 2010).
Broadbent and Poon (2015), using relevant databases of studies published between 2004 and 2014, investigated the effect of SRL strategies on academic achievement in online higher education settings. The findings of this study indicated that each SRL strategy had a differential effect level on learning outcomes. Specifically, the strategies of time management, metacognition, effort regulation, and critical thinking were positively correlated with learning outcomes; whereas rehearsal, elaboration, and organization had the least empirical support.
- H7: Student motivation will be positively related to the level of SRL.
SI Dialog and Self-Regulated Learning Strategies
Empirical investigation of relationships between SS and SI interaction and learning outcomes over the past decade has produced conflicting results due to many inconsistent research methodologies and different measures of research constructs (Arbaugh & Rau, 2007; Eom et al., 2006; Kuo, Walker, Schroder, & Belland, 2014; Wilson, 2007). Eom and Ashill (2016) assert that a possible factor that has contributed to the inconsistent results is the mix of the use of interaction and dialog (positive and meaningful interaction), and they established a significant positive relationship between SI dialog and learning outcomes. This finding underscores SI dialog as the predictor of e-learning outcomes, in accordance with prior research (Hirumi, 2002; Moore, 1993; Vrasidas & McIsaac, 1999; Woo & Reeves, 2007).
- H8: A higher level of perceived dialog between students and instructor in online courses will be positively related to the level of SRL.
Dialog and Learning Outcomes
Numerous studies have investigated the direct effects of SI and SS dialog on learning outcomes (see the review by Eom & Ashill, 2016). Eom and Ashill (2016) report a significant positive relationship between SI dialog and learning outcomes and between SS dialog and learning outcomes. Our proposed mediation model treats SS dialog, SI dialog, and the selection of SRL strategies as mediator constructs that link the three constructs (course design quality, instructor, and student motivation) and learning outcomes, as shown in Figure 2.
The proposed model is employed to better understand known relationships between a set of constructs (course design quality, instructor, and motivation) and perceived learning outcomes by exploring the underlying process by which constructs on the left-hand side influence perceived learning outcomes through mediator variables (SI dialog, SS dialog, and SRL). The mediation model proposed helps us better understand the dynamic relationships among CSFs of e-learning.
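The mediation logic described above can be sketched numerically. The following is a minimal illustration, not the study's actual PLS-SEM analysis: the data are synthetic, the variable names (X for a predictor such as instructor involvement, M for a mediator such as SI dialog, Y for perceived learning outcomes) are assumptions, and the indirect effect is estimated as the product of the X-to-M slope and the M-to-Y partial slope, with a percentile bootstrap confidence interval in the spirit of the study's bootstrapping approach.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 372  # matches the study's sample size; the data here are synthetic

# Synthetic data with a built-in indirect effect: X -> M -> Y
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)            # a-path around 0.5
Y = 0.4 * M + 0.2 * X + rng.normal(size=n)  # b-path around 0.4, direct effect 0.2

def indirect_effect(X, M, Y):
    """a*b: slope of M on X times the partial slope of Y on M (controlling for X)."""
    a = np.polyfit(X, M, 1)[0]
    Z = np.column_stack([M, X, np.ones_like(X)])   # regress Y on M and X
    b = np.linalg.lstsq(Z, Y, rcond=None)[0][0]
    return a * b

# Percentile bootstrap CI for the indirect effect (resampling cases with replacement)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(X, M, Y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval that excludes zero is the usual evidence that the mediator carries part of the predictor's effect; in a full PLS-SEM analysis the same logic applies simultaneously to all paths of Figure 2.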
The extant literature (Hirumi, 2002; Moore, 1993; Vrasidas & McIsaac, 1999; Woo & Reeves, 2007) suggests that meaningful and positive interactions (dialog) between the instructor and students and among students influence the learning outcomes positively. A higher level of perceived dialog is measured by frequency (survey questions 17, 18, 21, and 22) and quality of dialog that improves the quality of the learning outcomes (survey questions 19, 20, 23, and 24).
- H9: A higher level of perceived dialog between students and students in online courses will lead to a higher level of perceived learning outcomes.
- H10: A higher level of perceived dialog between students and instructor in online courses will lead to a higher level of perceived learning outcomes.
Self-Regulated Learning Strategies and Perceived Learning Outcomes
Pintrich et al. (1991) discuss nine SRL strategies that are correlated with learning outcomes. They fall into two broad categories: cognitive and metacognitive strategies (rehearsal, elaboration, organization, critical thinking, and metacognitive self-regulation) and resource management strategies (time management and study environment, effort regulation, peer learning, and help seeking).
Cognitive and metacognitive strategies refer to a set of strategies that promote awareness and control of thought. Rehearsal is a strategy that improves learning outcomes through repetition, such as rereading course material. Elaboration refers to strategies that help students integrate prior knowledge with new material so that the new knowledge can be stored in long-term memory. Organization is a strategy that helps students structure their thoughts and course materials by outlining the material and creating visual aids (such as charts, tables, and diagrams). Critical thinking refers to applying course material to developing new ideas and/or extending and questioning a theory, interpretation, or conclusion presented in the class. Metacognition refers to planning, monitoring, and regulating one's own learning processes; metacognitive activities include setting learning goals as well as adapting to the course requirements, the course design, and the instructor's teaching style.
Time management and study environment, the first of the resource management strategies, concern learners' ability to plan their study time and to manage the settings where they do their class activities. Effort regulation is the management of effort in learning activities when faced with difficult or uninteresting subjects. Peer learning refers to collaborative learning with other students. The last strategy, help seeking, refers to learners seeking help from both peers and instructors when necessary.
The extant literature has shown that students' use of SRL strategies (metacognition, time management, and effort regulation) in a traditional face-to-face learning environment is strongly associated with higher learning outcomes (Richardson, Abraham, & Bond, 2012). In e-learning, the SRL strategies of time management, metacognition, effort regulation, and critical thinking were positively correlated with academic outcomes (Asarta & Schmidt, 2013; Baugher, Varanelli, & Weisbord, 2003), whereas rehearsal, elaboration, and organization had the least empirical support (Broadbent & Poon, 2015). Eom and Ashill (2016) investigated the direct relationship between perceived learning outcomes and SRL strategies and between intrinsic/extrinsic motivation and perceived learning outcomes. They failed to establish a significant relationship between motivation and perceived learning outcomes or between SRL strategies and perceived learning outcomes. This, we argue, is because they ignored the fact that motivation and SRL strategies are inseparable and must operate in tandem to produce learning outcomes. Any attempt to investigate the effects of either motivation or SRL independently on learning outcomes may produce insignificant or invalid results. Three well-known SRL assessment instruments are the Learning and Study Strategies Inventory (LASSI), the Motivated Strategies for Learning Questionnaire (MSLQ), and the SRL Interview Scales (SRLIS). All three inventories measure the essential elements of SRL (motivation, metacognition, and behavior) as inseparable elements (Zimmerman, 2008).
- H11: Students with a higher level of SRL in online courses will report higher perceived learning outcomes.
Perceived Learning Outcomes and Satisfaction
E-learners’ learning outcomes and satisfaction have been two major dependent constructs in e-learning empirical studies (Eom & Ashill, 2016; Eom et al., 2006; Marks, Sibley, & Arbaugh, 2005). The learning outcomes and satisfaction in this study, as shown in Figures 1 and 2, originate from the taxonomy of educational objectives in the domains of cognitive behaviors (Bloom et al., 1956) and affective behaviors (Krathwohl et al., 1964). The cognitive domain learning outcomes measure each category of the cognitive learning process: knowledge, comprehension, application, analysis, synthesis, and evaluation of course materials. The affective domain learning includes appreciation, feeling, satisfaction, and attitude changes.
The majority of e-learning empirical studies include learning outcomes in the cognitive and affective domains. However, major issues arise from the different ways of measuring dependent constructs: overall perceived effectiveness (a mix of cognitive and affective learning outcomes) (Benbunan-Fich & Arbaugh, 2006; LaPointe & Gunawardena, 2004; Peltier et al., 2003), satisfaction only (Arbaugh, 2000; Sun et al., 2008), perceived learning only (Marks et al., 2005), and satisfaction together with perceived learning outcomes (Barbera et al., 2013; Eom & Ashill, 2016; Eom et al., 2006). Since learning outcomes and satisfaction measure the two distinct domains of cognitive and affective behaviors, it is logical to measure them separately.
A goal of this study is to investigate the relationship between students’ perceived learning outcomes and satisfaction. The community of inquiry (CoI) framework is a viable theoretical framework to explain the relationship. The CoI model has become a prominent model of online learning that can be used to achieve higher levels of learning outcomes and students’ satisfaction (Akyol & Garrison, 2011; Garrison, 2009). A CoI refers to a group of learners who interact together to learn and construct knowledge collaboratively. The CoI model depicts the process of constructing a purposeful discourse and reflection via the interaction of three interdependent subsystems: social presence, cognitive presence, and teaching presence. Together, the three elements create a learning system that produces the higher level of learner satisfaction and learning outcomes.
The measurement of satisfaction as the ultimate barometer of e-learning success is due to its potential impacts on student dropout (Rovai, 2002) and on recruitment and retention efforts (Schreiner, 2009). Rovai provides evidence that e-learners who have a higher level of learning outcomes feel less isolated and are more satisfied with their e-learning systems. Consequently, Rovai concludes that e-learners who have a stronger sense of community and a higher level of satisfaction may be less likely to drop out. Schreiner's empirical study concludes that satisfaction is a significant predictor of students' intention to reenroll as well as of their actual enrollment the subsequent year (Schreiner, 2009).
- H12: Students with a higher level of perceived learning outcomes in online courses will report a higher level of perceived user satisfaction.
METHODOLOGY
Survey Instrument Development and Measurement
A substantial portion of the survey questionnaire (see Appendix A) was selected from previous work (Eom et al., 2006), which is, in part, adapted from the commonly administered Individual Development & Educational Assessment student rating system developed by Kansas State University. In addition, the four questions on motivation (6, 7, 9, and 10) were drawn from the MSLQ (Pintrich et al., 1993), and the two questions on motivation (8 and 11) were taken from the AIM inventory (Shia, 1998). Questions on SRL (30, 31, and 33) were adapted from the MSLQ (Pintrich et al., 1993). Question 32 was drawn from the college student inventory (Stratil, 1988). Questions related to course design quality construct were created based on categories 1–4 of the QM standards.
All of the multi-item constructs were measured using five-point Likert scales. All model constructs were measured with reflective indicators since they measure the same underlying phenomenon; with reflective measurement, all indicators are interchangeable, a key principle of reflective measures (Chin, 1998). Appendix A presents a summary of constructs and measures. Various control variables, including age, gender, and study year, were also examined to provide a rigorous test of the hypothesized theoretical associations.
Data Collection
We collected the e-mail addresses of 3,285 students from the student data files archived with every online course delivered through the online program of a university in the Midwestern United States. The Institutional Review Board (IRB) determined that the proposed research presents minimal risk and falls into Category 2 of research that is eligible for exemption from IRB approval. The university where the sample was collected uses the following definitions for online and blended courses: online courses have no face-to-face class meetings, while blended courses have 25–75% of class time replaced with online activities and are categorized as lightly, moderately, or heavily blended. The 41 survey questions were created using SurveyMonkey©. The survey URL and instructions were emailed to the 3,285 students taking online courses with no face-to-face meetings. We collected 382 valid, unduplicated responses (an 11.63% response rate), of which 10 incomplete responses with missing values were deleted. Appendix B summarizes the characteristics of the final sample of 372 students.
Analytical Techniques
We used PLS for the data analysis. In the current study, PLS was considered more appropriate relative to covariance-based SEM for the following reasons. First, unlike covariance structural analysis such as LISREL, the objective of PLS is to explain variance in the endogenous variables in a model that has managerial relevance (such as user satisfaction and learning outcomes). PLS is particularly well suited to operationalizing research models in an applied setting (Edvardsson, Johnson, Gustafsson, & Strandvik, 2000) and has been widely used in studies of online education (Eom & Ashill, 2016; Eom et al., 2006). Second, PLS works efficiently when used to estimate path models comprising many constructs (typically more than five), as is the case in the current research (Sarstedt et al., 2014). Third, in terms of data characteristics, PLS makes practically no assumptions about data distributions, making it particularly useful for handling data collected for social science disciplines that often fail to follow a multivariate normal distribution (Hair, Hult, Ringle, & Sarstedt, 2017). The data in the current study did not meet the strict distributional assumptions of covariance-based SEM approaches such as LISREL.
The test of the measurement model included the estimation of internal consistency, plus convergent and discriminant validity (Hair et al., 2017). To evaluate the structural model, the R2 values for the endogenous constructs and the size, t-statistics, and significance level of the structural path coefficients were computed using the bootstrap resampling procedure (5,000 bootstrap samples) (Efron & Tibshirani, 1993).
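To make the bootstrap procedure concrete, the sketch below resamples a dataset with replacement and forms a t-statistic as the original estimate divided by the bootstrap standard error. This is a minimal pure-Python sketch for a one-predictor path coefficient, not the PLS algorithm itself; the function names and data are illustrative assumptions (the study used 5,000 resamples within PLS software).

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope of y on x: the path coefficient in a one-predictor model."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_t(xs, ys, n_boot=5000, seed=0):
    """t-statistic = original estimate / bootstrap standard error."""
    rng = random.Random(seed)
    n = len(xs)
    estimate = slope(xs, ys)
    resampled = []
    for _ in range(n_boot):
        # Draw n observations with replacement and re-estimate the slope.
        idx = [rng.randrange(n) for _ in range(n)]
        resampled.append(slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return estimate / statistics.stdev(resampled)
```

The t-statistic is then compared against the one-tailed critical values reported with Table 3.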
To assess the extent of method bias in our study, the Harman one-factor test was performed following the approach described by Podsakoff, Mackenzie, Lee, and Podsakoff (2003). All the items measuring the model constructs were entered into a common factor analysis with oblimin rotation. The results revealed an eight-factor structure with no single factor accounting for more than 50% of the variance. Therefore, method bias, per se, cannot explain our study results.
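The logic of the Harman check, that no single factor should dominate, can be roughly approximated by computing the share of total variance captured by the first principal component of the item correlation matrix. The sketch below uses plain-Python power iteration; it is a PCA-based proxy under stated assumptions, not the common factor analysis with oblimin rotation used in the study, and the matrices in the test are hypothetical.

```python
def largest_eigen_share(R, iters=500):
    """Share of total variance captured by the first principal component
    of an item correlation matrix R (the trace of R equals the item count).
    Simple power iteration; assumes the all-ones start vector is not
    orthogonal to the dominant eigenvector."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        # Multiply by R and renormalize to converge on the top eigenvector.
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the dominant eigenvalue.
    top = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
    return top / n  # a share above 0.50 would hint at a dominant factor
```

For uncorrelated items (an identity matrix) the share is 1/n; for two items correlated at .5 it is .75, well below the 50% threshold only when more items are involved.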
RESULTS
Measurement Model Assessment
The measurement model was assessed by examining individual item reliability, internal consistency, and discriminant validity. Two items measuring student motivation exhibited low loadings and were subsequently dropped from further analysis, leaving four items to measure this construct. Table 1 shows the item loadings for the revised measurement model. All but two of the loadings (item reliability) exceeded the stringent threshold of .707 (Barclay et al., 1995) and ranged from .61 to .96. Two items measuring student motivation exhibited loadings less than .707 but greater than .60. Loadings of .60 are considered acceptable if there are additional indicators in the block for comparative purposes (Chin, 1998). The two items were retained because they were theoretically grounded and there were other measures in the block for comparison purposes (Hair et al., 2017). AVE scores for all constructs were above .50 (Fornell & Larcker, 1981).
As shown in Table 2, all constructs in the estimated model fulfilled the condition of discriminant validity. Since none of the off-diagonal elements exceeded the respective diagonal element, discriminant validity was achieved (Fornell & Larcker 1981). Examination of cross-loadings also confirmed that no indicator was incorrectly assigned to a wrong factor.
Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|
1. Course design quality | .81 | |||||||
2. Instructor activities | .74 | .87 | ||||||
3. Student-student dialog | .69 | .66 | .93 | |||||
4. Student-Instructor dialog | .50 | .52 | .57 | .90 | ||||
5. Student motivation | .16 | .12 | .12 | .15 | .73 | |||
6. Self-regulatory learning strategies | .20 | .24 | .11 | .24 | .60 | .76 | ||
7. Learning outcomes | .69 | .70 | .71 | .62 | .18 | .22 | .88 | |
8. User satisfaction | .75 | .77 | .76 | .54 | .15 | .16 | .80 | .87 |
Mean | 3.84 | 3.61 | 3.01 | 3.33 | 3.25 | 4.15 | 3.00 | 3.71 |
SD | .86 | 1.04 | 1.01 | 1.09 | .78 | .57 | 1.09 | 1.10 |
- Notes: SD = Standard deviation. The bold numbers on the diagonal are the square root of the average variance extracted. Off-diagonal elements are correlations among constructs. All correlations are significant at the .05 level.
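The Fornell-Larcker check underlying Table 2 can be sketched as a simple comparison: the square root of each construct's AVE (the bold diagonal) must exceed every correlation involving that construct. This is a minimal sketch; the construct labels and the values in the test are taken loosely from Table 2 for illustration only.

```python
import math

def fornell_larcker_violations(ave, corr):
    """Return construct pairs violating the Fornell-Larcker criterion.
    ave  -- dict: construct label -> average variance extracted
    corr -- dict: frozenset({a, b}) -> inter-construct correlation
    A pair violates the criterion when its correlation is not below
    the square root of each construct's AVE."""
    violations = []
    for pair, r in corr.items():
        a, b = tuple(pair)
        if abs(r) >= min(math.sqrt(ave[a]), math.sqrt(ave[b])):
            violations.append((a, b, r))
    return violations  # an empty list means discriminant validity holds
```

For example, course design quality and instructor activities (diagonal values .81 and .87, correlation .74) pass, since .74 < .81.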
Recently, Henseler, Ringle, and Sarstedt (2015) have suggested that the Fornell and Larcker criterion and cross-loadings are not sufficiently sensitive to detect discriminant validity problems. To address this issue, we used the heterotrait-monotrait ratio of correlations (HTMT), a newer criterion for discriminant validity (Henseler et al., 2015; Voorhees, Brady, Calantone, & Ramirez, 2016). Specifically, we computed the HTMT criterion for each pair of constructs on the basis of the item correlations. Using a conservative cutoff of .85 (Kline, 2015), our findings indicated discriminant validity.
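For a pair of constructs, the HTMT ratio can be computed directly from item correlations: the mean heterotrait correlation divided by the geometric mean of the two monotrait means. The sketch below follows the Henseler et al. (2015) formula; the item labels and correlation values in the test are hypothetical, not the study's data.

```python
from itertools import combinations
from math import sqrt

def htmt(r, items_a, items_b):
    """Heterotrait-monotrait ratio (Henseler et al., 2015) for two constructs.
    r        -- dict: (item_x, item_y) -> correlation (either key order)
    items_a  -- item labels of construct A (at least two)
    items_b  -- item labels of construct B (at least two)"""
    def c(x, y):
        return r[(x, y)] if (x, y) in r else r[(y, x)]
    # Heterotrait: correlations between items of different constructs.
    hetero = [abs(c(x, y)) for x in items_a for y in items_b]
    # Monotrait: correlations among items of the same construct.
    mono_a = [abs(c(x, y)) for x, y in combinations(items_a, 2)]
    mono_b = [abs(c(x, y)) for x, y in combinations(items_b, 2)]
    mean = lambda vals: sum(vals) / len(vals)
    return mean(hetero) / sqrt(mean(mono_a) * mean(mono_b))
```

A value below the conservative .85 cutoff (Kline, 2015) supports discriminant validity for that pair.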
Structural Model Results
Structural models in PLS are evaluated on the basis of the R2 values for the dependent constructs, the size, t-statistics, and significance level of the structural path coefficients (based on 5,000 bootstrapping runs), the f2 effect size, and the Stone-Geisser Q-square test (Geisser, 1975; Stone, 1974) for predictive relevance (Hair et al., 2017).
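Of the statistics listed above, the f2 effect size has a simple closed form: the change in R2 when a predictor is omitted, scaled by the variance left unexplained with the predictor included. A one-line sketch; the R2 values in the test are illustrative, not reported study values.

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f-squared effect size for a single predictor:
    (R2 with the predictor - R2 without it) / (1 - R2 with it)."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)
```

By Cohen's conventions, values of roughly .02, .15, and .35 indicate small, medium, and large effects.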
The structural model results are shown in Table 3. Falk and Miller (1992) suggest that the variance explained (R2) for endogenous variables should be greater than .10. The model explained 2% of the variance in student motivation, 30% of the variance in student-instructor dialog, 74% in student-student dialog, 39% in self-regulatory learning strategies, 58% in learning outcomes, and 65% in user satisfaction. These results indicate that with the exception of student motivation, Falk and Miller's (1992) rule of .10 has been met for the model's endogenous variables.
Hypothesized Relationshipsa | Standardized Coefficient | t-Valueb | Test Result |
---|---|---|---|
H1. CD → SS Dialog | .13*** | 2.94 | Supported |
H2. CD → SI Dialog | .26**** | 4.11 | Supported |
H3. IA → SS Dialog | .76**** | 19.23 | Supported |
H4. IA → SI Dialog | .32**** | 5.29 | Supported |
H5. IA → SRL | −.09** | 1.90 | Not supported |
H6. IA → Mot | .12** | 2.17 | Supported |
H7. Mot → SRL | .58**** | 14.56 | Supported |
H8. SI Dialog → SRL | .20**** | 4.29 | Supported |
H9. SS Dialog → Out | .54**** | 13.37 | Supported |
H10. SI Dialog → Out | .29**** | 6.76 | Supported |
H11. SRL → Out | .09*** | 2.54 | Supported |
H12. Out → Sat | .81**** | 51.07 | Supported |
Main Effects Model Evaluation Statistics: | |||
R2 for: Mot = .02, SI Dialog = .30, SS Dialog = .74, SRL = .39, Out = .58 and Sat = .65 | |||
Communality Q-square values all above zero | |||
The indirect effects of course design quality and instructor on learning outcomes are all significant beyond the .05 level. The indirect effect of motivation on learning outcomes is nonsignificant at the .05 level. |
- Notes: aCD = course design quality; IA = instructor activities; Mot = student motivation; SS Dialog = student-student dialog; SI Dialog = student-instructor dialog; SRL = self-regulatory learning strategies; Out = learning outcomes; Sat = user satisfaction.
- bt-values corresponding to one-tail tests at: t >1.28, p < .10; t >1.65, p < .05; t > 2.33, p < .01; t > 3.09, p < .001. Significance levels: ****p < .001, ***p < .01, **p < .05, *p < .10, ns not significant.
Regarding the overall quality of the research model, Tenenhaus, Vinzi, Chatelin, and Lauro (2005) have developed an overall goodness-of-fit (GoF) measure for PLS based on taking the square root of the product of the variance extracted with all constructs with multiple indicators and the average R2 value of the endogenous constructs. However, Henseler and Sarstedt (2013) have recently challenged the usefulness of the GoF both conceptually and empirically. Following the recommendations of Hair et al. (2017) and Henseler and Sarstedt (2013), we did not apply this measure in the current study.
We next tested specific hypotheses. Hypothesis 1 examined the relationship between course design quality and student-student dialog. The relationship was positive and significant (β = .13, t = 2.94). The relationship between course design quality and student-instructor dialog was also positive and significant (β = .26, t = 4.11), thus supporting H2. The effect of instructor activities on both student-student dialog (β = .76, t = 19.23) and student-instructor dialog (β = .32, t = 5.29) was also significant, thus supporting H3 and H4. The relationship between instructor activities and student self-regulatory learning strategies was significant but in the opposite direction to what was hypothesized (β = −.09, t = 1.90). H5 was therefore rejected. The effect of instructor activities on student motivation was positive and significant (β = .12, t = 2.17), thus supporting H6. The relationship between student motivation and student self-regulatory learning strategies was also positive and significant (β = .58, t = 14.56), thus supporting H7. Our results also confirm a positive and significant relationship between student-instructor dialog and student self-regulatory learning strategies (β = .20, t = 4.29), thus supporting H8.
Student-student dialog (β = .54, t = 13.37), student-instructor dialog (β = .29, t = 6.76), and student self-regulatory learning strategies (β = .09, t = 2.54) demonstrated positive and significant effects on learning outcomes, thus supporting H9, H10, and H11. These findings indicate that student-student dialog is the strongest predictor of learning outcomes. Finally, learning outcomes exerted a positive and significant effect on user satisfaction (β = .81, t = 51.07), thus supporting H12.
To fully examine the mediating role of process variables in our model (SS dialog, SI dialog, and SRL), indirect effects were tested. We first examined direct paths from course design quality, instructor activities, and motivation to learning outcomes, in addition to the indirect or mediated paths as shown in our conceptual model. The direct link between course design quality and learning outcomes was significant (β = .26, t = 4.10). Using the Sobel (1982) test, the indirect effect of course design quality on learning outcomes through the mediator of SS dialog was significant (β = .07, t = 2.95). The indirect effect of course design quality on learning outcomes through the mediator of SI dialog was also significant (β = .08, t = 3.49). These findings indicate that SS dialog and SI dialog partially mediate the effects of course design quality on learning outcomes.
The direct relationship between instructor activities and learning outcomes was also significant (β = .17, t = 2.31). Using the Sobel (1982) test, the indirect effect of instructor activities on learning outcomes through the mediator of SS dialog was significant (β = .41, t = 11.15). The indirect effect of instructor activities on learning outcomes through the mediator of SI dialog was also significant (β = .09, t = 4.82). The indirect effect of instructor activities on learning outcomes through SRL was not significant (β = −.01, t = −1.55). These findings indicate that SS dialog and SI dialog partially mediate the effects of instructor activities on learning outcomes. The indirect effect of instructor activities on SRL was also significant (β = .016, t = 3.47). Finally, the direct relationship between motivation and learning outcomes was not significant (β = .01, t = .24). However, the indirect effect of motivation on learning outcomes through SRL was significant (β = .05, t = 2.21), suggesting that SRL fully mediates the effect of motivation on learning outcomes.
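The Sobel test used in the mediation analysis above has a closed form: z = ab / sqrt(b^2 s_a^2 + a^2 s_b^2), where a and b are the two constituent path coefficients and s_a, s_b their standard errors. The sketch below implements that formula; the coefficient values in the test are illustrative, not the study's estimates.

```python
from math import sqrt, erf

def sobel_z(a, se_a, b, se_b):
    """Sobel (1982) z-statistic for the indirect effect a * b."""
    return (a * b) / sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

def two_tailed_p(z):
    """Two-tailed p-value of z under the standard normal distribution:
    2 * (1 - Phi(|z|)), which simplifies to 1 - erf(|z| / sqrt(2))."""
    return 1.0 - erf(abs(z) / sqrt(2.0))
```

For example, paths a = .5 (SE .1) and b = .4 (SE .1) give z above 3, so the indirect effect of .2 would be judged significant well beyond the .05 level.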
In summary, the above findings demonstrate the importance of SS dialog, SI dialog, and SRL as mediating variables in our research model. All three variables play a partial or full mediating role in relationships between e-learning inputs (course design quality, instructor involvement, and student motivation) and learning outcomes.
The model in Figure 2 was also tested with and without the control variables, and findings showed that the direction and strength of the hypothesized relationships remained the same. The Stone-Geisser test of predictive relevance was also performed to further assess model fit in PLS analysis (Geisser, 1975; Stone, 1974). The blindfolding estimates are shown in Table 4. Using omission distances of 10 and 25 produced similar results, indicating that the estimates are stable. The communality Q-square was greater than 0 for all constructs, indicating that the model has predictive relevance.
Construct | R2 | Omission Distance = 10 Communality Q-Square | Omission Distance = 25 Communality Q-Square |
---|---|---|---|
Course design quality | n/a | .49 | .49 |
Instructor activities | n/a | .61 | .61 |
Student motivation | .02 | .15 | .16 |
Student-student dialog | .74 | .65 | .64 |
Student-instructor dialog | .30 | .76 | .75 |
Self-regulatory learning strategies | .39 | .29 | .29 |
Learning outcomes | .58 | .60 | .59 |
User satisfaction | .65 | .58 | .58 |
- Note: n/a is not applicable.
DISCUSSION
Our study presents an integrated view of e-learning success, making a number of significant contributions to the literature on the effectiveness of e-learning systems. First, we present a learning theory-based, integrated, and comprehensive e-learning success model of a dynamic, interdependent set of CSFs interacting together. The model thus depicts the important relationships among a set of interdependent pivotal factors of e-learning systems working together. An e-learning system, as a purposeful system, is synergistic: the total effect of interdependent entities is greater than the sum of their individual effects. Specifically, there exists a dynamic relationship among student motivation, self-regulatory learning strategies, the instructor's facilitating roles, and students' dialog with the instructor. Two previous studies (Eom & Ashill, 2016; Eom et al., 2006) found no direct significant relationships between students' SRL behavior and perceived learning outcomes. The current study examined interdependent relationships (H4, H5, H6, H7, H8, and H11) among instructor, student motivation, students' SRL behavior, and learning outcomes, as shown in Figure 2.
Second, the findings of the current study reveal that instructor activities have a limited role in motivating students. Moreover, student motivation and self-regulatory learning behavior are interdependent. These two constructs must therefore be treated in such a way that motivation is a component of the self-regulation phases, which consist of a continuing cycle of forethought, performance, and self-reflection. In the forethought phase, learners set goals, decide on the intended learning outcomes, and select learning strategies and problem-solving skills (Zimmerman & Campillo, 2003). Motivation is the trigger that activates the next cycle of forethought, performance, and self-reflection.
Third, our study failed to establish a positive direct link between instructor activities and student self-regulation behavior (H5). However, the indirect findings suggest that students' self-regulation behaviors are enhanced indirectly via student-instructor dialog rather than directly by instructor activities. Contrary to our hypothesis that instructor activities would be positively associated with self-regulatory learning strategies, our results showed a negative relationship between the two constructs. Although we acknowledge that the correlation is weak, it is statistically significant. This finding suggests that when instructors are more actively involved in facilitating the online class, by providing helpful feedback on exams and being more responsive to student concerns, students are less likely to self-manage the learning process by setting their own goals and selecting their own learning strategies. As noted earlier, self-regulation involves three general aspects of learning: self-regulation of behavior, self-regulation of motivation, and self-regulation of cognition (Zimmerman, 1989). Thus, a self-regulated learner is empowered and able to make sense of the learning task, to create goals and strategies, and to implement actions to meet his or her goals within a learning context. It may be that the more actively the instructor facilitates the online class, the less online students feel they need to self-regulate their learning processes.
Fourth, our research presented an inclusive view of how instructor activities, SI dialog, and student motivation jointly contribute to self-regulation behavior affecting learning outcomes. Specifically, our research makes a useful contribution to the understanding and explanation of the dynamic roles of instructor, SI dialog, and student motivation in a recursive learning process as joint inputs that contribute to learners’ self-regulation behavior. Our research provides empirical evidence that supports a model of SRL that views instructor feedback delivered through SI dialog as a prime determinant of the SRL process (Butler & Winne, 1995). Further, our model enriches our understanding of how other constructs in addition to feedback jointly affect the self-regulatory learning process, a pivot that leads to student learning outcomes and satisfaction.
Finally, our study provides empirical evidence that learning outcomes have a direct association with the level of student satisfaction, which, in turn, is a significant predictor of retention and dropout rates. Due to the changing educational marketplace and shifts in government funding, administrators at many universities are finding it increasingly important to prioritize student retention efforts and to seek strategies that yield a higher financial benefit while serving students more effectively. A vital contributing factor to student retention is student satisfaction (Roberts & Styron, 2010; Schreiner, 2009). Student satisfaction influences student registration decisions and the level of retention, both of which may be of concern to faculty and administration.
PRACTICAL AND THEORETICAL IMPLICATIONS
SRL research has investigated either the direct relationship between SRL and learning outcomes (Broadbent & Poon, 2015; Santhanam, Sasidharan, & Webster, 2008) or relationships involving a limited set of constructs (Butler & Winne, 1995). A broadly framed research model of SRL like the one presented in the current study provides a better understanding of the dynamic relationships among CSFs of e-learning. Our empirically tested, holistic model of e-learning success demonstrates that learning outcomes critically depend on two pivots—dialog and self-regulatory behaviors—and these processes facilitate higher student learning outcomes. Our model thus expands the traditional view of SRL research that investigates the relationship between feedback and SRL (Butler & Winne, 1995).
The results of the current study have significant implications for distance educators and administrators. According to the 13th (and final) 2016 annual report that tracks online education in the United States (Allen et al., 2016), 71.4% of chief academic officers rated the learning outcomes for online education “as good or better than” those for face-to-face education, while 28.6% of them believe that the learning outcomes for online education are inferior to those for face-to-face education. Our research provides convincing evidence to support the majority of chief academic officers’ view on the quality of outcomes of online education.
The systems approach demands that all relationships among the entities that affect e-learning outcomes be examined simultaneously. E-learning systems can be improved only when all the entities (chief academic officers, instructors, students, and course design quality) work together.
Chief academic officers: The success or failure of online education is contingent in part on the chief academic officers' commitment to building and managing course technology and its infrastructure and to supporting the two entities (the instructor and course design quality) that propel an e-learning system to success. Their primary responsibility is to create an environment in which online instructors are given incentives to perform their formal and informal roles and to continuously improve their skills and knowledge. In addition, the QM standards require that course technologies be current; current technology includes, for example, synchronous conferencing tools, mobile apps, and Web-based voice tools, among others.
Instructors: This study, along with a previous study (Eom & Ashill, 2016), reaffirms the role of the instructor as a cornerstone of e-learning systems. We present an instructor-centric view of e-learning systems in which all other CSFs are interdependent. The e-learning success model comprises three interdependent pivotal factors: the instructor, students, and course design quality. Course design is an instructor-centric parameter, and all other factors can be enhanced by the input of the instructor. The instructor is responsible for the majority of the categories and standards needed to maintain quality and be certifiable. This study specifically shows that the vital roles of the instructor are multidimensional (course facilitator, course monitor who gives prompt feedback, intellectual stimulator, and social supporter with a caring attitude).
Course design quality: Effective e-learning course design/redesign is needed to ensure the success of e-learning outcomes. It is a campus-wide project requiring concerted effort by the instructors, instructional designers, multimedia designers, quality assurance evaluators, etc. Course design in e-learning is a labor-intensive, process-oriented activity, which uses a student-centered active learning approach (Lavoie & Rosman, 2007). Course design/redesign processes consist of several core activities: identifying learning objectives; developing an operational definition by translating the learning objectives into session outcomes; creating learning objectives-driven activities including assignments and exams that are interesting, stimulating, and challenging; identifying and creating learning resources such as reading materials and multimedia resources; and organizing the course modules in a logical way.
LIMITATIONS AND DIRECTIONS FOR FUTURE RESEARCH
Although this study expands our knowledge of the determinants of students’ satisfaction and their perceived learning outcomes in the context of university online courses, it has several limitations, and viable prospects for further research remain.
First, our study was undertaken among online students at a single university in the Midwestern United States, which may limit generalizability. While the use of a single organizational setting controls for confounding effects originating from interorganizational differences, future studies among students of online programs at other universities are needed before conclusive generalizations can be drawn.
Second, the cross-sectional nature of the present study does not allow causal inferences, and we cannot entirely rule out common method bias, although its potential was reduced by administering the survey in two stages, and the Harman one-factor test indicates that such a bias was not operational. Our empirical results demonstrate the importance of SRL, motivation, the instructor, dialog, and course design quality in fostering high user satisfaction and positive learning outcomes; however, a stronger test of our hypotheses would require a longitudinal design, which future studies should adopt.
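The Harman one-factor test mentioned above loads all survey items onto a single unrotated factor; if that factor accounts for the majority (conventionally more than 50%) of the total variance, common method bias is suspected. A minimal sketch of this check, using the first eigenvalue of the item correlation matrix as the single-factor variance share (the simulated response data below are purely illustrative, not the study's data):

```python
import numpy as np

def harman_single_factor_share(items: np.ndarray) -> float:
    """Proportion of total variance captured by the first unrotated
    component of the item correlation matrix.

    items: (n_respondents, n_items) matrix of survey responses.
    A share above ~0.50 is commonly read as a sign of common method
    bias; below it, the bias is considered not operational.
    """
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()

# Illustrative data: 372 "respondents" answering two weakly related
# clusters of items, so no single factor should dominate.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(372, 1))
f2 = rng.normal(size=(372, 1))
items = np.hstack([f1 + rng.normal(size=(372, 4)),
                   f2 + rng.normal(size=(372, 4))])
share = harman_single_factor_share(items)
print(f"First-factor variance share: {share:.2f}")
```

In practice the test is often run with unrotated exploratory factor analysis rather than principal components; the eigenvalue-based version here is a common, simpler approximation.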
Third, our research presents a gray box model in which some internal processing activities (students’ SRL and SI and SS dialogs) are known, while other internal cognitive processes are not. A future research agenda should examine the relationships between internal cognitive processes and actual learning outcomes, as opposed to perceived knowledge gained and/or grades.
Finally, a fruitful direction for future research is the investigation of mobile technology's role in the relationships between dialogs, SRL, learning outcomes, and student satisfaction. Mobile devices such as cell phones and tablets enable and facilitate the use of social networking software (Web 2.0 applications). According to an empirical study (Ractham, Kaewkitipong, & Firpo, 2012), the use of Facebook in a course helped build and nurture personal relationships between instructors and students and promoted invigorating, meaningful interaction among students and between students and the instructor. Mobile technologies have expanded the realm of e-learning, making it richer and more fruitful. Our future research intends to critically examine the roles of IT and mobile technology from a holistic view in which they are considered entities that increase dialog and facilitate self-regulatory learning processes, thereby enhancing e-learning outcomes and students’ satisfaction.
APPENDIX A
Survey Questions
- 1. What is your age?
- 2. What is your gender?
- 3. What is your year in school?
- 4. What is your area of study?
- 5. Are/were you enrolled in an online course at this university? If so, please give feedback on just one online class in which you are/were enrolled. Write the course number and title of that course.
Student Motivation
- 6. In an online class like this, I prefer class material that really challenges me, so I can learn new things (Intm1).
- 7. When I have the opportunity in this online class to choose class assignments, I choose the assignments that I can learn from even if they do not guarantee a good grade (Intm2).
- 8. I do all that I can do to make my assignments turn out perfectly (Intm3).
- 9. I work hard to get a good grade even when I do not like a class (Extm1).
- 10. I want to do well in this online class because it is important to show my ability to my family, parents, or others (Extm2).
- 11. I like to be one of the most recognized students in a class (Extm3).
Instructor Activities
- 12. The instructor was actively involved in facilitating (teaching) this online class (Ins1).
- 13. The instructor in this online class provided timely helpful feedback on assignments, exams, or projects (Ins2).
- 14. The instructor in this online class stimulated students to intellectual effort beyond that required by face-to-face classes (Ins3).
- 15. The instructor cared about my individual learning in this class (Ins4).
- 16. The instructor in this online class was responsive to student concerns (Ins5).
Dialog with Students
- 17. I had positive and constructive interactions with other students frequently in this online class (Diastu1).
- 18. In this online class, the level of positive and constructive interactions between students was high (Diastu2).
- 19. In this online class, I learned more from my fellow students than in other classes at this university (Diastu3).
- 20. The positive and constructive interactions between students in this online class helped me improve the quality of the learning outcomes (Diastu4).
Dialog with the Instructors
- 21. I had positive and constructive interactions with the instructor frequently in this online class (Diaist1).
- 22. The level of positive and constructive interactions between the instructor and students was high in this online class (Diaist2).
- 23. The positive and constructive interactions between the instructor and students in this online class helped me improve the quality of the learning outcomes (Diaist3).
- 24. Positive and constructive interactions between students and the instructor were an important learning component (Diaist4).
Course Design Quality/Structure
- 25. The course objectives and procedures of this online class were clearly communicated (Design1).
- 26. The structure of the modules of this online class was well organized into logical and understandable components (Design2).
- 27. The course materials of this online class were interesting and stimulated my desire to learn (Design3).
- 28. The course materials of this online class supplied me with an effective range of challenges (Design4).
- 29. Student grading components such as assignments, projects, and exams were related to learning objectives of the class (Design5).
Self-Regulated Learning
- 30. In the beginning, I set my goals and plan according to what I need to do to make desired learning outcomes (Sreg1).
- 31. Even when study materials are dull and uninteresting, I keep working until I finish (Sreg2).
- 32. I keep up with my grades in each course, and if one seems to be sliding, I will stress that class more in my studying (Sreg3).
- 33. When I study for a test, I try to put together the information from class notes and from the book (Sreg4).
Learning Outcomes
- 34. The academic quality of this online class is on par with face-to-face classes I have taken (Out1).
- 35. I have learned as much from this online class as I might have from a face-to-face version of the course (Out2).
- 36. I learn more in online classes than in face-to-face classes (Out3).
- 37. The quality of the learning experience in online classes is better than in face-to-face classes (Out4).
User Satisfaction
- 38. I would recommend this instructor to other students (Sat1).
- 39. I would recommend this online class to other students (Sat2).
- 40. I would take an online class at this university again in the future (Sat3).
- 41. I was very satisfied with this online class (Sat4).
Note: Each question includes item names used in Table 1.
APPENDIX B
STUDENT CHARACTERISTICS
Characteristic | Sample | Proportion (%) | Population |
---|---|---|---|
Age | |||
<20 | 81 | 21.77 | not available (n.a.) |
20–30 | 201 | 54.03 | n.a. |
31–40 | 59 | 15.85 | n.a. |
41–50 | 21 | 5.65 | n.a. |
51–60 | 9 | 2.42 | n.a. |
>60 | 1 | 0.27 | n.a. |
Total | 372 | 100.00 | |
Gender | |||
Male | 104 | 27.96 | 4,960 |
Female | 268 | 72.04 | 6,534 |
Total | 372 | 100.00 | 11,494 |
Year in school | |||
Freshman | 10 | 2.69 | 2,165 |
Sophomore | 51 | 13.71 | 2,314 |
Junior | 89 | 23.92 | 2,428 |
Senior | 155 | 41.67 | 2,834 |
Graduate | 67 | 18.01 | 1,753 |
Total | 372 | 100.00 | 11,494 |
Area of study by colleges | |||
Education | 85 | 22.85 | 793 |
Business | 121 | 32.53 | 1,577 |
Health and Human Services | 61 | 16.40 | 2,233 |
Science and Math | 38 | 10.22 | 2,545 |
Speech Communication | 18 | 4.84 | 178 |
University Studies | 23 | 6.18 | 2,362 |
Polytechnic Studies | 11 | 2.96 | 441 |
Others | 15 | 4.03 | 1,365 |
Total | 372 | 100.00 | 11,494 |
- a. Population includes all students taking online classes and face-to-face classes.
- b. The university's academic programs are organized into multiple academic units—business, education, health and human services, liberal arts, science and math, polytechnic studies, visual and performing arts, graduate studies, university studies, and extended learning. The university offers a diverse range of undergraduate programs and a select number of graduate programs, leading to associate, baccalaureate, masters, doctorate, and specialist degrees in over 200 different areas of study within 42 academic departments.
Biographies
Sean B. Eom is a Professor of Management Information Systems (MIS) at the Harrison College of Business of Southeast Missouri State University. He received his PhD in Management Science from the University of Nebraska - Lincoln. He also received an MS in international business from the University of South Carolina at Columbia. His research areas include decision support systems, bibliometrics, and E-learning systems. He is the author/editor of 12 books and has published more than 70 refereed journal articles and more than 120 articles in encyclopedias, book chapters, and conference proceedings.
Nicholas J. Ashill is a Full Professor in the Department of Marketing and Information Systems, School of Business Administration, American University of Sharjah, UAE. He also holds the position of Chalhoub Group Professor of Luxury Brand Management. His research interests include topics related to online education, services marketing, customer satisfaction, brand management, job performance, and human resource management practices. He is highly research active and has published articles in many of the world's leading journals in marketing and management, including Journal of Management, Journal of Retailing, Industrial Marketing Management, and Journal of Business Research, among others. He is currently an Associate Editor for two leading marketing journals—the European Journal of Marketing and the Journal of Services Marketing.