Volume 50, Issue 6 p. 722-747
Research Article

How do students in an innovative principle-based mechanics course understand energy concepts?

Lin Ding (Corresponding Author)

Department of Teaching and Learning, The Ohio State University, 1945 N. High St., Columbus, Ohio 43210

Correspondence to: Lin Ding; E-mail: [email protected]
Ruth Chabay

Department of Physics, North Carolina State University, Raleigh, North Carolina
Bruce Sherwood

Department of Physics, North Carolina State University, Raleigh, North Carolina
First published: 10 July 2013

Abstract

We investigated students' conceptual learning of energy topics in an innovative college-level introductory mechanics course, entitled Matter & Interactions (M&I) Modern Mechanics. This course differs from traditional curricula in that it emphasizes application of a small number of fundamental principles across various scales, involving microscopic atoms, macroscopic deformable objects, and large-scale planetary systems. To best match the unique features of this course, a multiple-choice energy assessment was developed. We followed an established test development framework and explicitly delineated test purpose, scope, and specifications to guide the design, implementation, and evaluation of the energy assessment. Also, particular attention was given to: (1) categorizing content and cognition levels and (2) determining reasoning steps of each test item—aspects that often were not explicitly addressed in designing prior concept assessments. We implemented the energy assessment as a written test before and after course instruction with M&I students at two research universities. Interviews were also conducted to explore students' reasoning in applying energy concepts. Results showed that positive change in student conceptual understanding after course instruction was significant on the entire assessment, on individual items, and on individual test objectives. Subsequent interviews further revealed that after instruction students could properly apply the Energy Principle and perform qualitative analysis without using formula sheets. However, students still showed difficulty in dealing with systems involving deformable objects. This study exemplifies practical means of establishing and evaluating key assessment features, provides evidence for the effectiveness of a principle-focused approach to physics learning, and offers useful implications for teaching energy topics. © 2013 Wiley Periodicals, Inc. J Res Sci Teach 50: 722–747, 2013

Previous studies on teaching and learning of energy topics in introductory physics courses have revealed that students have serious difficulties with basic energy concepts. Students often confuse energy with force and power (Goldring & Osborne, 1994; Viennot, 1979), mistake work for force (Singh & Rosengrant, 2003), and use work, heat, and internal energy interchangeably (Loverude, Kautz, & Heron, 2002; Meltzer, 2004). Also, students have trouble determining the work done on an object, particularly the sign of the work. Beyond all these difficulties, students often fail to use the work-energy theorem in answering relevant questions (Lawson & McDermott, 1987; Pride, Vokos, & McDermott, 1998).

Some scholarly discussions suggest that students' difficulties with energy concepts stem from at least two underlying causes. First, energy concepts at the introductory physics level are complex and difficult even for physics teachers to understand clearly (Kemp, 1984; Warren, 1972). Beyond the fact that energy, work, and heat are related yet distinct abstract concepts, the "work-energy theorem" alone has given rise to much confusion, particularly in cases of deformable and rotatable systems. Second, energy concepts, albeit fundamental, often are introduced late in traditional physics courses and consequently are viewed by students as secondary or tertiary topics. Students tend to spend less time studying work and energy than memorizing kinematics formulas such as s = v₀t + (1/2)at². Worse still, the traditional treatments of energy concepts in physics courses often are oversimplified or even erroneous. As a result, physicists such as Arons (1989, 1999) and Bauman (1992a, 1992b) earnestly called for a more accurate treatment of energy concepts for beginning college students.

Matter & Interactions (M&I) Modern Mechanics (Chabay & Sherwood, 2011) is an innovative physics course that affords students a scientific and precise view of energy concepts. In this course, much emphasis is placed on a small number of fundamental principles, one of which, the Energy Principle, describes the thermodynamically valid relations among energy, work, and heat. Students learn to specify systems of interest, cope with various energy forms, become familiar with the pair-wise nature of potential energy, and perform accurate analyses of work in different situations. In so doing, students apply the Energy Principle to both particle-like systems and deformable systems to explain and predict a wide range of real-world phenomena (Chabay & Sherwood, 1999, 2004).

Motivated by an increasing need for gauging student learning in this principle-focused course, we sought to probe students' conceptual understanding of central topics. Given that only limited prior research targeted student learning of energy concepts at the college level, and that confusions are pervasive even among physics teachers, we selected this topic as our focus. Moreover, since the M&I approaches differ significantly from traditional mechanics courses in content and emphasis, the existing assessments on energy concepts are either inappropriate or misleading. For these reasons, we initiated a research effort to develop a new multiple-choice energy assessment suitable for the M&I mechanics course or courses with similar goals and content. This energy assessment is a 33-item multiple-choice test that covers basic energy topics. Most items in the assessment are qualitative questions, and only a few require semi-quantitative calculations, such as calculating the difference between discrete energy levels. We used the energy assessment in both written tests and interviews to unpack the meaning of conceptual understanding as well as to measure student learning outcomes.

Since M&I is primarily a university-level physics course (although it has also been taught as an advanced course in several high schools), our study is most directly of interest to researchers who are concerned with postsecondary education and promising practices in discipline-based undergraduate science education (NRC, 2011b, 2012a). However, given the fact that energy is a crosscutting concept emphasized in the new K-12 science standards (NRC, 2011a, 2012b), our study of student conceptual understanding of this important concept may offer a useful model of how energy topics can be addressed and assessed at the K-12 level as well. From a broad perspective, the principled approach to energy topics highlighted in our assessment aligns well with what is emphasized in the new science standards regarding student “ability to examine, characterize, and model the transfers and cycles of matter and energy” (NRC, 2011a, p. 83). To this end, our study is relevant to K-12 science education.

Energy in Matter & Interactions Modern Mechanics

Because energy is a crosscutting topic and plays a fundamental role in nearly all scientific domains, teaching and learning of this topic is of interest to many researchers and educators. A wealth of prior research has been conducted at the K-12 level, focusing primarily on student understanding of energy forms, transfer, transformation, conservation, and degradation, as well as on the learning progression of student conceptual development (Lee & Liu, 2010; Liu & McKeough, 2005; Liu & Ruiz, 2007; Papadouris, Constantinou, & Kyratsi, 2008; Zacharia, Olympiou, & Papaveripidou, 2008). Relatively fewer studies have targeted college-level students (Lawson & McDermott, 1987; Liu, Ebenezer, & Fraser, 2002; Meltzer, 2004; Pride et al., 1998).

Energy in college sciences is a challenging topic, much as it is in primary and secondary classes. In college-level introductory physics, common confusions about energy are prevalent among both students and instructors. These confusions were not explicated until about three decades ago, when several physicists published a series of scholarly discussions regarding energy conservation in the case of extended, deformable and rotatable objects. Physicists then began to re-examine some previously overlooked issues through a different lens and continued their conversations into recent years (Jewett, 2008a, 2008b, 2008c, 2008d, 2008e; Leff & Mallinckrodt, 1993; Mallinckrodt & Leff, 1992, 2002; Mungan, 2005, 2007a, 2007b; Penchina, 1978; Sherwood, 1983; Sherwood & Bernard, 1984). Nonetheless, energy topics are still often not properly treated in introductory physics textbooks. For instance, consider a system where an extended object is pushed over a certain distance on a rough table; the traditional approach to work and energy using a simplified particle-like system fails to explain why the object and the table become warmer.

Differing from traditional physics curricula, the M&I Modern Mechanics course is designed to present students with a cohesive view of energy topics under a unifying fundamental principle—the Energy Principle. In what follows, we briefly introduce the M&I mechanics course and its unique approaches to energy topics.

M&I Modern Mechanics

M&I Modern Mechanics is the first semester of a two-semester curriculum, designed for calculus-based introductory physics at the college level. In this course, emphasis is placed on a small number of fundamental principles, namely the Momentum Principle, the Energy Principle, and the Angular Momentum Principle.

A goal of this course is to cultivate a principle-based view central to the modern physics enterprise; that is, a wide range of phenomena can be described, explained and predicted by using a small number of fundamental principles. It is well established that students in traditional courses typically tend to view physics as a large set of unrelated facts and formulas rather than a hierarchically organized body of knowledge (Hammer, 1994). In the M&I course, however, students learn to apply fundamental principles across diverse contexts and experience the benefits of using a top-down, principled approach to solve various problems. A number of empirical studies support this principle-focused approach to physics learning. For example, Chi, Feltovich, and Glaser (1981) and Hardiman, Dufresne, and Mestre (1989) conducted problem sorting studies and reported that beginning students tended to group physics problems according to surface features, whereas experts categorized problems based on deep structure. In light of these findings, Leonard, Dufresne, and Mestre (1996) used writing tasks to draw students' attention to key principles in solving problems and found improved student performance. Similarly, Reif and Scott (1999), Dufresne, Gerace, Hardiman, and Mestre (1992), and Mestre, Dufresne, Gerace, Hardiman, and Touger (1993) developed computer tutoring systems to cue student consideration of basic physics principles. They too reported that this approach benefited student learning of physics.

Another goal of the M&I course is to familiarize students with application of the fundamental principles at different scales, involving microscopic atoms, macroscopic extended objects, and large-scale planetary systems. Through comparisons between different scales, students learn to appreciate the atomic structure of matter, astrophysical models, limitations of classical theories, and the unified nature of the modern physics enterprise. In applying the fundamental principles across different scales, students also encounter modeling of messy complex systems. Learning to make approximations, idealizations, and estimations is another important goal of this course.

Approaches to Energy Topics in M&I Modern Mechanics

Energy topics in the M&I mechanics course are centered on one of the fundamental principles—the Energy Principle: ΔEsystem = Wexternal + Q. Throughout this course, no special-purpose formulas are introduced; instead, students are explicitly and consistently directed to tackle any energy-related problem by starting from this fundamental principle. Some unique approaches to energy topics are described below.
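In expanded form, writing the system's energy in the forms discussed in this section (rest, kinetic, potential, and thermal energy), the principle can be sketched as follows; the grouping is ours and is not a quotation from the course materials:

$$\Delta E_{\text{system}} \;=\; \Delta(mc^2) + \Delta K + \Delta U + \Delta E_{\text{thermal}} \;=\; W_{\text{external}} + Q.$$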

Conventional discussion of the Energy Principle often takes place in the context of single particles without rotation, vibration, or temperature change, and hence requires no consideration of internal energy. In the M&I modern mechanics course, however, the energy of a system can take on different forms and is expanded to include thermal energy in terms of atomic-molecular rotations and vibrations. One connecting thread that is introduced early and runs through the entire course is the discussion of the ball-and-spring atomic model of a solid. A number of activities and problems in this course lend themselves to this model. For instance, students conduct hands-on experiments with balls and springs to study mechanical energy in macroscopic situations and use computer simulations to model the rotational and vibrational energy of atomic-molecular particles at the microscopic level. By doing these activities, students are sensitized to the connections of classical mechanics with thermal physics and start to form a unified picture of the modern physics enterprise.
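As an illustration of the kind of energy bookkeeping such a computational model involves, here is a minimal, self-contained Python sketch of a one-dimensional ball-and-spring oscillator (arbitrary parameter values, not taken from the course materials) that checks that kinetic plus spring potential energy stays nearly constant under Euler-Cromer integration.

# Minimal 1D ball-and-spring model with energy bookkeeping (Euler-Cromer integration).
m, k = 0.1, 4.0          # mass (kg) and spring stiffness (N/m), arbitrary values
x, v = 0.05, 0.0         # initial stretch (m) and initial velocity (m/s)
dt = 1e-4                # time step (s)

for step in range(200_000):
    F = -k * x           # spring force on the ball
    v += (F / m) * dt    # update velocity first (Euler-Cromer)
    x += v * dt          # then update position with the new velocity
    if step % 50_000 == 0:
        K = 0.5 * m * v**2          # kinetic energy
        U = 0.5 * k * x**2          # spring potential energy
        print(f"t = {step*dt:6.2f} s   K = {K:.5f} J   U = {U:.5f} J   K+U = {K+U:.5f} J")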

Moreover, students in the M&I mechanics course are introduced to the concept of particle energy in a relativistic form: E = γmc², where γ = 1/√(1 − v²/c²). Particle energy is further separated into rest energy plus kinetic energy (K), thereby setting the stage for discussion of the low-speed approximation to kinetic energy and its range of validity. Also, the introduction of rest energy facilitates the introduction of particle identities and their changes. In this course, students are frequently given energy problems that involve a change of particle properties, such as fission and fusion reactions. Even in cases where there is no change in particle identity, students are still encouraged to write down Δ(mc²) = 0 to indicate their awareness that in the microscopic world mass is not constant.
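For reference, the low-speed approximation alluded to here follows from the relativistic expression (a standard result, written in our own notation):

$$K \;=\; E - mc^2 \;=\; (\gamma - 1)\,mc^2 \;\approx\; \tfrac{1}{2}mv^2 \qquad \text{for } v \ll c.$$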

In dealing with the Energy Principle, students also learn to carefully define systems of interest. Oftentimes, students have trouble separating systems from surroundings. This is particularly true when the two are in direct contact, and when students are asked to separate the work done on the system by the surroundings from the work done on the surroundings by the system (Loverude et al., 2002). Also, without careful training of system specification, beginning students often fail to account for potential energy as associated with pairs of objects, but instead attribute it to single objects (Sherwood, 1983). For example, in dealing with a system of a falling stone, students would consider both the gravitational potential energy of the stone and the work done on it by the earth. As such, the same term would be mistakenly counted twice.

In M&I Modern Mechanics, these issues are carefully addressed through explicit instruction on system specification. In the above example, students in the M&I mechanics course are trained to analyze the same situation using two different systems: the stone alone, and the stone plus the Earth. In the former system, students come to realize that there is no gravitational potential energy but that the work done by the Earth is counted. In the latter system, involving the stone plus the Earth, gravitational potential energy emerges, and no external work on the stone-Earth system is considered. By analyzing both systems, students become increasingly aware of the consequences of system specification for energy accounting.
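As a worked illustration of this contrast (standard bookkeeping in our own notation, with h the distance fallen and air resistance neglected):

$$\text{System = stone:}\quad \Delta K = W_{\text{ext}} = +mgh; \qquad \text{System = stone + Earth:}\quad \Delta K + \Delta U = W_{\text{ext}} = 0,\ \ \Delta U = -mgh.$$

Both choices yield ΔK = +mgh, but only the two-body system contains a gravitational potential energy term.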

In applying the Energy Principle to discuss various means of changing a system's energy, attention is given not only to the concept of work but also to other processes, such as heat. By doing so, the connection of classical mechanics with thermal physics is reinforced. Also, the inclusion of the heat concept in the Energy Principle makes it easier to emphasize the ontological differences between thermal energy and heat: the former is a state or a condition of a system, whereas the latter is a dynamic process that causes a change in the energy of a system. This difference is also extended to the entire Energy Principle: ΔEsystem = Wexternal + Q. Instead of considering the equal sign in the Energy Principle as an identity, students are explicitly directed to conceptualize the underlying causal relationship between the terms on the left-hand side (representing changes in the energy of a system) and those on the right-hand side (representing causes of change). As such, students come to appreciate the deep causal meaning contained in this fundamental principle.

In the M&I mechanics course, students are also introduced to discrete energy levels at the atomic scale. By applying the Energy Principle in this context, students learn to explain and predict photon absorption or emission in terms of atomic energy transition. This discussion opens up a new connection with quantum physics, allowing students to experience the idea that classical mechanics is not a universal theory but has limits with approximations valid for macroscopic objects.
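The semi-quantitative exercises mentioned earlier (computing differences between discrete energy levels) rest on the standard requirement, written here in our own notation, that a photon's energy match the gap between two levels:

$$E_{\text{photon}} \;=\; hf \;=\; \left| E_{\text{final}} - E_{\text{initial}} \right|,$$

with absorption raising the atom to a higher level and emission accompanying a drop to a lower one.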

Framework for the Energy Concept Assessment

Because of the aforementioned unique approaches to energy topics in the M&I mechanics course, we needed to design a valid and reliable energy assessment that specifically matches the goals of the course. Only by doing so could we measure student learning in a proper and effective manner. As Shepard (2006) pointed out, assessment tasks must represent the full range and depth of what students are expected to understand and be able to do. She also proposed the term "embodiment" as a better description of the alignment between assessment and learning goals. Similarly, the Standards for Educational and Psychological Testing set forth a list of guidelines for test design, among which delineation of test purpose and scope is a leading consideration and requires careful examination to ensure a complete and substantive embodiment (AERA, APA, & NCME, 1999). We followed this framework to guide the design of our energy assessment and attended specifically to the following considerations: (a) test purpose and scope; (b) test specifications; and (c) test development and evaluation.

The purpose of the energy assessment in the present study is to evaluate student understanding of energy topics addressed in the M&I mechanics course. To best embody the goals of the course, we define “understanding” of energy topics as application of the Energy Principle and its associated concepts in different contexts and across different scales (microscopic, macroscopic, and large scales) with proper approximations and estimations.

The scope of the assessment is anchored in the theoretical framework of the Energy Principle, which consists of five major components in a hierarchical structure (see Figure 1). These five components are derived directly from, and represent the entire scope of, the energy topics in M&I Modern Mechanics, including those that are unique to the course. Specifically, they are (1) application of the Energy Principle, (2) identification/determination of different energy forms, (3) specification of systems, (4) determination of/differentiation between work and heat, and (5) interpretation/prediction of atomic absorption/emission spectra. Among them, application of the Energy Principle plays a central role, therefore taking the top position in the hierarchical structure. Some components further contain lower-level sub-components. For instance, under “identification and determination of different forms of energy” there are three sub-components, each of which pertains to one of the three basic energy forms—rest energy, kinetic energy (K), and potential energy (U).

Figure 1. The hierarchical structure of the test objectives.

Test specifications for our energy assessment focused primarily on the format, Bloom's content and cognition levels, and psychometric properties of the items. In the study, the energy assessment was designed to be a concept test in the multiple-choice format. Prior research showed that although students could solve algorithmic problems using formula manipulations, they performed poorly on multiple-choice concept questions (McDermott, 2001). Since the purpose of the assessment is to evaluate students' flexible application of energy concepts, it is sensible to use multiple-choice concept questions to achieve this purpose. To ensure that the questions actually measure what they are intended to measure, which is student application of the Energy Principle and related concepts (not just recall of definitions or algorithmic use of special-purpose formulas), we used Bloom's taxonomy to check whether their content and cognition levels matched the purpose of the assessment (Anderson, Krathwohl, & Bloom, 2001; Krathwohl, 2002). In addition, we designed a set of coding schemes to analyze the number of reasoning steps in each item to further examine whether these items are suitable for the multiple-choice format. Moreover, the psychometric properties of the assessment were evaluated to examine its reliability and discrimination power. Details are reported in the next section.

Guided by the above framework, we carried out the development and evaluation of the energy assessment to answer the following research questions.
  1. How can the energy assessment embody the goals of what we expect our students to be able to do with basic energy topics?
  2. What information about test content, cognition levels, and psychometric characteristics is available to establish validity and reliability evidence for the energy assessment?
  3. How do M&I students, who have received principle-focused physics instruction, perform on the energy assessment (in terms of their strengths as well as weaknesses)?
  4. What implications does this study offer for teaching and learning of energy topics at the college level?

Methods

The development and evaluation of the energy assessment followed an iterative design process, including creation of test items, analysis of Bloom's levels, determination of reasoning steps, and evaluation of psychometric properties (see below for details). Since our ultimate goal is to be able to use this assessment to draw valid and reliable inferences about student understanding of energy topics, much attention was paid to the establishment of its content and construct related evidence. Content related evidence, also known as content validity, was established through expert consultation during the test creation and revision stage to ensure that all questions were relevant and represented adequate coverage. Construct related evidence, which allowed us to confirm the assessment indeed measures what it is purported to measure, was established through evaluating the Bloom's levels, reasoning steps and psychometric properties of the questions.

Since student responses to the questions can further provide in-depth information regarding the construct of the assessment, we administered both large-scale written tests and small-scale interviews to examine how students understood energy topics, particularly how they applied the Energy Principle and relevant concepts to answer questions of various contexts and scales. This effort not only allowed us to empirically observe student performance in relation to the intended construct of the assessment, but also gave us an opportunity to further unpack the meaning of conceptual understanding in terms of flexible application of fundamental principles. From an even broader perspective, examination of student performance can better help establish a close alignment between assessment and learning goals, hence giving rise to an instructionally sensitive assessment that can feed back to curriculum and teaching (Ruiz-Primo et al., 2012).

Samples and Settings

We implemented the energy assessment with M&I students as both a pre- and a post-test. The pretest was given as an online password-protected homework assignment in the 2nd week of the course, and 319 students completed it. To simulate an in-class test environment and to encourage students to answer the questions seriously, we took the following measures. First, before taking the assessment, students were explicitly advised not to refer to any notes or textbooks, or to discuss the questions with others. Students were also assured that they would not be penalized for incorrect answers. Second, we established a time limit of 60 minutes for the assignment and permitted only one access. Students were explicitly instructed not to open the online test until they were ready to dedicate up to an hour to the questions. Third, only one submission was allowed for the entire assignment. Students were asked to work continuously on the test without distractions from partial submissions. After submission, students did not receive any feedback, check marks, or scores. Furthermore, the assignment was scheduled to become invisible immediately after it was due, so the questions were secured for posttest use. Since the pretest was a low-stakes test and all students who completed it received the same amount of credit for participation regardless of their performance, fraudulent activities were unlikely. Moreover, students had few normative conceptions about energy before instruction, so the online administration of the pretest would have minimal effect on the outcomes.

The posttest was given as an in-class paper-and-pencil test in the 2nd-to-last week of the course and 308 M&I students participated. Among them, 262 also took the pretest. Students were given 50 minutes to complete the posttest and were urged to try their best in answering the questions. Most students completed the questions within 30 minutes.

Interviews were also conducted to further explore student thought processes. Recruited from the M&I mechanics classes, nine paid student volunteers participated in hour-long one-on-one private interviews. All of them took the posttest prior to the interviews, and their posttest scores were at diverse levels. During the interviews, students were instructed to talk aloud while answering the questions and justifying their answers. A textbook and a formula sheet were available for students to use. They were also allowed to refer to their own notes if needed.

Methodological Limitations

While our approaches to assessing student conceptual learning were intended to maximize valid and reliable inferences, certain aspects of validity evidence were not investigated due to the uniqueness of the course of interest in this study. Specifically, we have not quantitatively compared our energy assessment with other physics tests through correlation analyses to establish convergent or divergent evidence. This is because none of the extant assessments is suitable for the M&I course, which is precisely what motivated us to carry out this study. For the same reason, we only used the energy assessment with M&I students and have not yet expanded to other college-level introductory physics courses to establish comparative evidence on the effectiveness of the M&I mechanics course. Moreover, although we conducted both written tests and interviews to analyze student performance, the scale of our qualitative interviews was relatively small compared to that of the written tests (see below for details). That said, the rich information from a limited number of interviews offered detailed case examples of student reasoning, thereby shedding useful light on some of the underlying thinking that guided student responses.

In what follows, we detail the development and evaluation of the energy assessment, followed by findings from both large-scale written tests and small-scale interviews. Implications of the study are then discussed.

Development and Evaluation of the Energy Assessment

Creation of Test Items

We followed the framework depicted in Figure 1 as test objectives to design test items. We first created a pool of over 50 questions to cover all the objectives and sub-objectives. These questions were initially designed in an open-ended format and were used with students in the M&I mechanics course. Common errors were then identified from student responses as well as from our observations in classroom teaching. Using these common errors, we created alternative choices and converted the open-ended questions into multiple-choice items. For example, one of the questions (Q23) in the energy assessment asks students to determine the total work done on a spring when it is stretched on both sides by a force of magnitude F over a distance d—a situation that requires students to apply the concept of work (see Figure 2). Student free responses to an open-ended version of this question revealed four major types of errors. One was that students thought the net force on the spring was zero and therefore no net work was done. Another was that students attended to only one side of the spring and overlooked the total effect (Fd). A third common error reflected confusion between the work done on the spring by the hands and the work done on the hands by the spring (−2Fd). A fourth type of error was a combination of the above (e.g., −Fd). Given these findings, we generated a list of alternative choices to address these common errors. In cases where a finite number of possible answers exist, we exhausted all the possibilities as alternative choices. For instance, one item in the assessment requires students to determine the sign of gravitational potential energy. Since the possible answer is either positive, zero, negative, or not enough information to determine, we included all of these possibilities in the choices. It is worth noting that the questions in our energy assessment contained varying numbers of choices, and in some cases we purposely included "none of the above" (NOTA) as an alternative to allow for student responses other than those listed. According to Haladyna and Downing (1989, 2002), "NOTA should remain an option in the item writer's toolbox" for use "in items in which the possible incorrect responses are relatively few". They also noted that "the key in distracter development is not the number of distracters but the quality of distracters" (Haladyna & Downing, 1989, 2002). In addition, as shown below, our analysis of the energy assessment provided no indication that questions with varying numbers of choices or with NOTA were problematic.

Figure 2. A sample energy assessment question. The alternative choices were derived from student common errors in their responses to an open-ended version of this question.
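Reading the item as each hand exerting a force of magnitude F through a displacement d at its end of the spring (our reading, consistent with the distracters described above), the intended accounting is

$$W_{\text{on spring}} \;=\; Fd + Fd \;=\; 2Fd,$$

which is why 0, Fd, −Fd, and −2Fd serve as diagnostically useful distracters.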

All test items were first inspected by a panel of four physicists to ensure they matched the test objectives, were technically correct, and contained unambiguous statements, proper representations, adequate content coverage and sensible distracters. Then ten professors from three different research universities, who had taught the M&I mechanics course, were invited as external reviewers to provide written critiques on these items independently. Questions deemed as problematic by two or more external reviewers were eliminated from the pool. Those that underwent revisions were further examined by two senior physicists to ascertain that the revisions resolved the concerns raised by the external reviewers. After this process, 33 items were retained (see Supporting Information C), and they covered the entire hierarchical structure in Figure 1 with each test objective or sub-objective being addressed by a minimum of two questions.

Bloom's Levels of Test Items

Since we were more interested in gauging students' application of the Energy Principle and its related concepts than in testing students' retention of factual knowledge, we examined the content and cognition levels of each item to eschew questions that test low-level thinking such as rote memorization. We employed the revised Bloom's taxonomy to classify items along two dimensions—content and cognition (Anderson et al., 2001; Haladyna, 2004; Krathwohl, 2002). The content dimension describes “what” (noun), whereas the cognition dimension addresses “how” (verb; see Figure 3).

Figure 3. Two-dimensional revised Bloom's taxonomy.

Within the content dimension, classification focused on the type of knowledge involved in each item, which included, from the lowest to the highest level, facts, concepts, principles, and procedures (Anderson et al., 2001; Haladyna, 2004). In this study, no items were intentionally designed to test procedural knowledge, so the first three types of knowledge were used to categorize item content levels. Within the cognition dimension, classification focused on the cognitive processes required in each item. These processes, from lower to higher level, include recall, comprehend, apply, and even higher-level processes such as synthesize, evaluate, and create (Anderson et al., 2001; Haladyna, 2004). Because there has been disagreement among researchers on the names and order of the levels higher than "apply" (Anderson et al., 2001; Anderson, Sosniak, & Bloom, 1994; Haladyna, 1997; Miller, Linn, & Gronlund, 2009), and also because no items in the energy assessment were deliberately designed to invoke processes like evaluation or creation, we used the first three levels to categorize each item. Although one could argue that proper application of the Energy Principle inevitably requires students to perform analysis or even synthesis and evaluation of multiple pieces of information, in the present study we did not intend to distinguish these higher levels and therefore grouped them together; any item requiring application or a higher-level process was categorized as "application."

Two researchers independently categorized both content and cognition levels for each item. For content, the agreement between the two researchers was 94%; for cognition, the agreement was 97%. Discussions between the two researchers ensued, and the divergences were eventually resolved. Table 1 shows the distribution of test items in each content and cognition category (also see Supporting Information A). As shown, all items in the energy assessment targeted either concepts or principles, and most items required application of them. Only four items fell into the "comprehension of concepts" category, and all four required interpretation of graphical representations of energy. In short, this energy assessment generally tested higher-level thinking and was aligned closely with its prescribed purpose of measuring student application of the Energy Principle and its related concepts across different contexts.

Table 1. Number of items in each combination of content and cognition category

Content \ Cognition    Recall    Comprehend    Apply
Fact                   0         0             0
Concept                0         4 (12%)       20 (61%)
Principle              0         0             9 (27%)

Reasoning Steps of Test Items

The number of reasoning steps required in a multiple-choice question can often determine how difficult or easy it is to interpret test results. If a question requires long-chain reasoning and a student fails to answer it correctly, it is difficult to pinpoint at which particular step the student fails. Conversely, if a question requires only a short reasoning process, the interpretation of student performance becomes relatively easier and more specific. Since the energy assessment is a multiple-choice test, it is beneficial to inspect the reasoning steps involved in each question to ensure that the items are suitable for the multiple-choice format. To do so, we designed the following schemes.

If an item provides information on a specific quantity/variable X and asks for information on another quantity/variable Y, then the number of reasoning steps for this item is determined by the number of extra quantities/variables needed to relate X and Y. For example, suppose the given X cannot directly generate information on Y, but can directly generate information on quantity/variable Z that is not given in the question. This quantity/variable Z, however, can directly generate information on Y, the final answer. Then the number of reasoning steps from X to Y is one. Simply put, if X → Z → Y is the relation between X, Y, and Z, then the number of steps needed to obtain the answer Y from X is one—only one extra quantity/variable of Z is needed. Similarly, if the given X can directly generate information on the unknown quantity/variable Y in terms of a single relation (for instance, a cause–effect relation) without other extra quantities/variables, then the relation between X and Y is X → Y and hence the number of reasoning steps is zero.

For simplicity, the number of steps that require the following actions is regarded as zero: identifying the kinds of energy or the state of a system, indicating the interactions between two bodies, naming objects that do work on a system, and choosing a system. In addition, interpretation or determination of the relevancy of given information is not counted as a step. Also, a repetition of the same operation is not counted as a reasoning step. Furthermore, any pure mathematical calculation without physics is not counted as a step. A final caveat is that if there exists more than one approach to answering a question, only the approach with the least number of reasoning steps is considered.

It is worth noting that our schemes do not, and by no means intend to, capture every cognitive process needed to answer a question correctly. Some processes are not considered as a “step” solely for simplicity. So a zero-step reasoning process does not mean no reasoning at all; rather it indicates a short reasoning process with no extra quantities needed to bridge the given and the unknown. Since we seek functionally reliable schemes to see if any items in the energy assessment require relatively more steps than others, parsimony is desired.
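To make the counting rule concrete, the scheme can be read as a shortest-path computation over "directly generates" relations among quantities. The minimal Python sketch below is our own formalization with hypothetical quantity names, not the coding instrument used in the study; it simply counts the bridging quantities on the shortest chain from the given to the target.

from collections import deque

def reasoning_steps(relations, given, target):
    """Count intermediate quantities on the shortest chain from `given` to `target`.

    `relations` maps a quantity to the quantities it can directly generate.
    A direct relation (given -> target) counts as zero steps; each extra
    bridging quantity adds one step. Returns None if no chain exists.
    """
    frontier = deque([(given, 0)])
    visited = {given}
    while frontier:
        quantity, hops = frontier.popleft()
        if quantity == target:
            return max(hops - 1, 0)  # steps = bridging quantities, not hops
        for nxt in relations.get(quantity, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, hops + 1))
    return None

# Hypothetical encoding of Q32: spring compression -> spring potential energy -> kinetic energy
relations = {
    "spring compression": ["spring potential energy"],
    "spring potential energy": ["kinetic energy"],
}
print(reasoning_steps(relations, "spring compression", "kinetic energy"))  # prints 1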

Two researchers independently applied the schemes to determine the number of reasoning steps for each item and agreed on 91% of all cases. All disagreements, which turned out to differ only by one step, were resolved through discussion. Using the schemes, we found that 20 items in the energy assessment require zero-step reasoning, 12 items involve one-step reasoning, and only one item requires two-step reasoning (see Supporting Information B). Clearly, the majority of the items require a short reasoning process, suitable for the multiple-choice test format. The reasoning processes for items involving one or two steps often are integrations of the shorter reasoning processes required in zero-step questions. For instance, one question (Q32) asks students to compare the kinetic energy of two pucks of different masses, given that they are launched by two equally compressed identical springs (see Figure 4). This one-step question (spring compression → spring potential energy → kinetic energy) requires students to specify a system, apply the Energy Principle, and determine spring potential energy and kinetic energy, all of which are addressed separately in other zero-step questions.

Figure 4. A one-step question that requires students to identify a system, apply the Energy Principle, and determine spring potential energy and kinetic energy.
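The intended chain of reasoning for Q32 can be summarized as follows (our own notation, with s the common compression and k the common spring stiffness, assuming essentially all of the spring's potential energy is transferred to the puck):

$$K_{\text{puck}} \;=\; U_{\text{spring}} \;=\; \tfrac{1}{2}ks^2 \quad \text{for each puck},$$

so the two pucks acquire equal kinetic energies even though their speeds differ.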

Psychometric Analysis of the Energy Assessment

We used classical test theory (CTT) to analyze five psychometric features of the energy assessment: item difficulty index, item discrimination index, item point bi-serial coefficient, KR-20 reliability, and Ferguson's delta (Ding, Chabay, Sherwood, & Beichner, 2006). Item difficulty, which indicates the easiness of a question, is simply the proportion of students who provide a correct answer. To reach a decent level of measurement accuracy, difficulty values between 0.3 and 0.9 are preferred. As evident from Table 2, most items on the energy assessment fall into this range, with an average of 0.53, suggesting that this is a test of medium difficulty. The item discrimination index measures the power of test items in distinguishing strong students from weak students. It compares the correct percentage of an upper group (upper 50% in total score) with that of a lower group (lower 50% in total score) for each question. Typically, a value of 0.3 or above is desirable (Doran, 1980). The average discrimination index for the energy assessment is 0.40, and a majority of the items maintain a value close to or above 0.3, which is considered acceptable. The point bi-serial coefficient is another measure in CTT; it is the correlation between item scores and total scores and reflects how consistent the individual questions are in relation to the entire test. The desired value for this coefficient is ≥0.2 (Kline, 1986). In our study, all items exhibit a point bi-serial coefficient close to or above 0.2, with an average of 0.33, and thus are considered satisfactory. The KR-20 reliability index (equivalent to Cronbach's alpha for dichotomously scored items) is yet another measure in CTT; it is used to examine the internal consistency of an entire test (Ding et al., 2006). If this index reaches 0.7 or above, the test can be used reliably for group measurement (Doran, 1980). The KR-20 index for our energy assessment is 0.74, so the assessment is considered reliable. Ferguson's delta is also a measure of an entire test. It examines the discriminatory power of the test by comparing the broadness of the total score distribution against the possible range. An acceptable Ferguson's delta is ≥0.9 (Kline, 1986). Our energy assessment yields a value of 0.98, suggesting that it is a discriminatory test. Taking all of the above into account, it is evident that the energy assessment is a valid and reliable test with a satisfactory level of discrimination, and it can be used to measure how students apply the Energy Principle and its related concepts to answer questions in different contexts and at different scales.

Table 2. Difficulty, discrimination and point bi-serial coefficient of the energy assessment items
Item Difficulty Discrimination Point Bi-Serial
Q1 0.24 0.40 0.39
Q2 0.65 0.57 0.46
Q3 0.39 0.64 0.52
Q4 0.88 0.26 0.24
Q5 0.76 0.33 0.25
Q6 0.79 0.28 0.18
Q7 0.31 0.57 0.50
Q8 0.26 0.41 0.37
Q9 0.82 0.29 0.26
Q10 0.52 0.45 0.37
Q11 0.83 0.27 0.26
Q12 0.72 0.50 0.39
Q13 0.65 0.59 0.42
Q14 0.79 0.42 0.32
Q15 0.43 0.62 0.47
Q16 0.32 0.47 0.38
Q17 0.73 0.39 0.23
Q18 0.34 0.39 0.35
Q19 0.65 0.30 0.30
Q20 0.66 0.41 0.25
Q21 0.33 0.30 0.26
Q22 0.37 0.39 0.27
Q23 0.43 0.41 0.31
Q24 0.13 0.20 0.15
Q25 0.80 0.37 0.30
Q26 0.59 0.44 0.16
Q27 0.39 0.56 0.47
Q28 0.36 0.20 0.18
Q29 0.67 0.41 0.34
Q30 0.67 0.45 0.36
Q31 0.33 0.32 0.28
Q32 0.29 0.54 0.43
Q33 0.32 0.33 0.29
Average 0.53 0.40 0.33
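For readers who wish to reproduce these classical test theory indices from a dichotomous (students × items) score matrix, the Python sketch below follows the definitions given above: item difficulty, upper-minus-lower discrimination, point bi-serial coefficients, KR-20, and Ferguson's delta. It is illustrative only (the score matrix in the example is made up), not the analysis code used in the study.

import numpy as np

def ctt_indices(scores):
    """Classical test theory indices for a binary (students x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_students, n_items = scores.shape
    total = scores.sum(axis=1)

    difficulty = scores.mean(axis=0)  # proportion correct per item

    # Discrimination: upper 50% minus lower 50% (by total score) proportion correct
    order = np.argsort(total)
    half = n_students // 2
    discrimination = scores[order[-half:]].mean(axis=0) - scores[order[:half]].mean(axis=0)

    # Point bi-serial: correlation of each item score with the total score
    point_biserial = np.array(
        [np.corrcoef(scores[:, i], total)[0, 1] for i in range(n_items)]
    )

    # KR-20 reliability (Cronbach's alpha for dichotomously scored items)
    p = difficulty
    kr20 = (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / total.var())

    # Ferguson's delta: broadness of the total-score distribution over its possible range
    freqs = np.bincount(total.astype(int), minlength=n_items + 1)
    delta = ((n_students**2 - (freqs**2).sum()) * (n_items + 1)) / (n_students**2 * n_items)

    return {
        "difficulty": difficulty,
        "discrimination": discrimination,
        "point_biserial": point_biserial,
        "kr20": kr20,
        "ferguson_delta": delta,
    }

# Example with a hypothetical 4-student, 3-item score matrix
print(ctt_indices([[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 1, 0]]))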

M&I Students' Performance on the Energy Assessment

Pretest Results

The average student pretest score was 27.9% (SD ± 8.3%). A Monte Carlo simulation was conducted to examine whether student pretest responses were mere random guesses. Results showed that random guessing would yield an average of 20.6% (SD ± 7.0%); so it is unlikely that students' pretest responses were based on mere guessing [t(318) = 15.73, p < 0.0001]. Students seemed to have some prior knowledge about a few commonly encountered energy forms, such as kinetic energy at the macroscopic level.
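As a rough illustration of this kind of simulation, the Python sketch below assigns random answers to every item and examines the distribution of the resulting individual scores. The per-item option counts are hypothetical (the actual items have varying numbers of choices), so the numbers only approximate the reported guessing baseline.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers of answer choices per item (the actual 33 items vary).
n_choices = np.full(33, 5)

# Simulate many students each guessing randomly on every item,
# then look at the distribution of their individual scores (in %).
n_students = 100_000
correct = rng.random((n_students, n_choices.size)) < (1.0 / n_choices)
scores = correct.mean(axis=1) * 100

print(f"mean = {scores.mean():.1f}%, SD = {scores.std():.1f}%")
# With 5 choices per item this gives roughly 20% with an SD of about 7%,
# comparable to the reported guessing baseline of 20.6% (SD 7.0%).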

However, student pretest performance on the individual questions was generally poor. In particular, the percentages of correct responses were markedly low for Q1, Q22, and Q24 (all ≤10%). It is worth noting that all three questions target unique energy topics that are not covered in traditional high school physics courses. Specifically, Q1 addresses the validity of applying the low-speed approximation to the relativistic form of kinetic energy at the microscopic level; Q22 and Q24 test application of the work concept in a macroscopic multi-particle system—a person jumping off a rigid floor in Q22 and a car crashing into a concrete wall in Q24. Additionally, students performed poorly on Q7, Q13, and Q16 (correct percentages between 12% and 15%). These questions also address unique topics not covered in traditional high school physics, including the gravitational potential energy of a two-asteroid system (Q7), graphical representation of energy (Q13), and the bound/unbound state of a planetary system (Q16).

We also examined student pretest performance on the individual test objectives. All pretest objective scores were fairly low (see Table 3), and the one for “interpretation/prediction of absorption/emission spectrum” was the lowest, indicating that students had little prior knowledge on discrete energy levels for atomic spectra. Another two objectives with noticeably low scores pertained to “work and heat” and “the Energy Principle.”

Table 3. Questions for individual test objectives and pre–post objective comparisons
Test objective Questions Pretest% (±SE) Posttest% (±SE) Paired Comparison (df = 261)
Application of the Energy Principle 8, 21, 27, 28, 29, 30, 31, 32, 33 25.3 (±0.9) 34.9 (±0.9) t = 8.07, p < 0.0001
Determination of energy & interpretation of energy graphs 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16 31.4 (±0.7) 56.8 (±1.0) t = 24.24, p < 0.0001
Specification of systems 17, 18 28.2 (±1.8) 54.9 (±2.2) t = 9.22, p < 0.0001
Determination of & differentiation between work and heat 19, 20, 22, 23, 24, 29 24.6 (±1.2) 47.3 (±1.3) t = 14.92, p < 0.0001
Calculation of absorption/emission spectra 25, 26 16.8 (±1.7) 73.5 (±2.1) t = 20.84, p < 0.0001

Posttest Results

The posttest average was 49.7% (SD ± 12.1%), and over forty percent of the students scored between 42% and 52%. Evidently, the assessment leaves ample room to measure further improvement. In the posttest, students performed better on nearly all the questions; however, Q1 and Q24 displayed the lowest correct percentages. For Q1, the student choice distribution was noticeably different in the posttest than in the pretest. In the pretest, 47% of the students answered that the kinetic energy of a fast-moving electron would double if its speed were doubled [choice (b)]. In the posttest, the most popular answer shifted to (c), kinetic energy would be quadrupled because kinetic energy is proportional to speed squared. The correct answer is (d) none of the above, because the final speed is close to the speed of light; nonetheless, the shift toward (c) suggests that after course instruction most students knew the relation between speed and kinetic energy in the low-speed approximation. That these students chose (c) over (d) may reflect carelessness about the electron's final speed. Indeed, as our subsequent student interviews revealed, almost all the interviewees knew that they should consider the relativistic form of kinetic energy for an electron moving at a speed close to the speed of light, but they overlooked the fact that the electron's final speed is close to the speed of light. As for Q24, which requires application of the Energy Principle in a macroscopic multi-particle system, the correct answer is (e) zero work, because the contact point at which the force is applied to the car does not move. However, nearly two-thirds of the students chose either (a) or (c). It seemed that students did not focus on the contact point; rather, they were distracted by the configuration change of the car.

We further examined posttest objective scores and found that they were all near or above 35% (see Table 3). Specifically, the "interpretation/prediction of emission/absorption spectrum" objective displayed the highest score, indicating that most students after course instruction were able to apply the concept of discrete energy levels to analyze atomic emission/absorption spectra across the different situations presented in the questions. However, "application of the Energy Principle" had the lowest score, reflecting the comprehensive and challenging nature of this objective. This result is consistent with the design, in which "application of the Energy Principle" was placed at the top of the hierarchical structure of the test objectives shown in Figure 1.

Pretest and Posttest Comparisons

To better study how M&I instruction affected student understanding of the energy topics, we conducted pair-wise comparisons using the 262 pre- and post-matched data to gauge student normalized gains (Hake, 1998):
normalized gain g = (posttest% − pretest%) / (100% − pretest%)
Figure 5 displays the distribution of student normalized gains (Avg. = 30.7%, SD = 17.0%). A t-test further shows that the gains are statistically significant [t(261) = 29.17, p < 0.0001]; in other words, after receiving the principle-based instruction in M&I Modern Mechanics, student application of the Energy Principle and related concepts, as measured by the energy assessment, improved noticeably.
Figure 5. Distribution of student normalized gains for the 262 matched data points.
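The matched-sample computation amounts to a per-student normalized gain followed by a one-sample t-test against zero; the Python sketch below illustrates this with made-up matched percentages rather than the study's data.

import numpy as np
from scipy import stats

# Hypothetical matched pre/post percentages, one entry per student.
pre = np.array([27.0, 30.0, 24.0, 33.0, 21.0])
post = np.array([48.0, 55.0, 42.0, 61.0, 40.0])

gain = (post - pre) / (100.0 - pre)           # Hake's normalized gain per student
t_stat, p_value = stats.ttest_1samp(gain, 0)  # test whether the mean gain differs from zero

print(f"mean gain = {gain.mean():.3f}, SD = {gain.std(ddof=1):.3f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4g}")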

We also conducted McNemar pair-wise comparisons for each item to see whether students achieved a gain on the individual questions after instruction. The McNemar test allows us to compare dichotomous data (0 and 1) for repeated measurements (pre and post) of one sample (Agresti & Finlay, 2009). Results show that there is a significant positive gain for all the questions except Q8 (S = 0.346, p = 0.889), Q21 (S = 0.505, p = 0.477), Q28 (S = 0.527, p = 0.468), Q31 (S = 0.346, p = 0.683), and Q33 (S = 0.251, p = 0.112). A close inspection reveals that these questions all require application of the Energy Principle. For example, Q33 asks students to compare the work done on an object by a constant force during two equal time intervals, given a speed increase from 0 to v in the first time interval and from v to 2v in the second time interval (see Figure 6). In the pretest, student responses displayed a random-guessing pattern, with a fairly even distribution among the first three choices. In the posttest, nearly half of the students answered that "the work done during the two time intervals is the same." Subsequent interviews revealed that students often focused on the equal change in speed and mistakenly invoked the Momentum Principle in lieu of the Energy Principle.

Figure 6. An energy assessment question on which student posttest performance did not show a significant improvement after instruction.
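For a single item, the McNemar comparison reduces to the two discordant cells of a 2 × 2 pre/post table. The Python sketch below implements the familiar continuity-corrected chi-square form with made-up responses; it is illustrative only and need not reproduce the exact statistics reported above.

import numpy as np
from scipy.stats import chi2

def mcnemar_item(pre_correct, post_correct):
    """Continuity-corrected McNemar test for one dichotomously scored item.

    `pre_correct` and `post_correct` are matched 0/1 arrays for the same students.
    Only the discordant pairs (right -> wrong and wrong -> right) drive the test.
    """
    pre_correct = np.asarray(pre_correct)
    post_correct = np.asarray(post_correct)
    b = int(np.sum((pre_correct == 1) & (post_correct == 0)))  # lost the item
    c = int(np.sum((pre_correct == 0) & (post_correct == 1)))  # gained the item
    stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

# Made-up matched responses for one item (0 = incorrect, 1 = correct)
pre = [0, 0, 1, 0, 1, 0, 0, 1]
post = [1, 1, 1, 0, 1, 1, 0, 1]
print(mcnemar_item(pre, post))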

We further analyzed the matched objective scores, and a positive gain was detected for all five objectives. A pair-wise t-test further shows that all gains are significant (see Table 3). It is clear that after the M&I mechanics instruction students were more capable of applying the Energy Principle and related concepts to answer the various questions in the energy assessment.

Student Interviews

Student interviews further revealed interesting findings that were not uncovered in the above analyses. Since the time constraint did not allow students to talk aloud through the entire assessment during each interview session, fifteen items covering the entire spectrum of the objectives were selected (Q3, Q6–Q11, Q16, Q18–Q19, Q21, Q24, Q26, Q29, and Q32). If time permitted, interviewees answered additional questions (Q1, Q15, Q23, Q27–Q28, and Q31). During the interviews, the interviewer minimized interventions. In case of confusion, the interviewer would ask the student to repeat or elaborate. Occasionally, the interviewer would intervene with general follow-up questions to clarify the student's explanations, such as "Can you tell me the difference between a real system and a point-particle system?" All interviews were videotaped and transcribed for analysis.

In this study, we used thematic analysis (Boyatzis, 1998) to inductively explore emergent patterns in student verbal responses to the energy questions. Specifically, with the aim of explaining the ways students applied the Energy Principle and relevant concepts, we chose to start with analysis of observables, such as students' behaviors, choice of approaches, and expressed justifications thereof. We examined each interview session and made comparisons across all interviewees to inductively find recurring patterns, instead of using a priori schemes to confine our analysis. These patterns were first identified by the first author and then independently examined by another researcher to ensure they accurately reflected what was captured in the interviews and were relevant to the research questions we sought to answer. In what follows, we report three aspects that were present in at least seven of the nine student interviewees. These three aspects pertained to student application of the Energy Principle, qualitative analysis of questions, and a major difficulty in dealing with deformable systems.

Students Were Able to Use the Energy Principle in Answering Relevant Questions as Long as They Chose to Start From This Fundamental Principle

However, if a student did not invoke the Energy Principle as a starting point, the student almost always failed to provide a correct answer. A typical example is Q27 (see Figure 7a), in which a person leans against a wall with arms stretched and applies a pushing force to stop a box of mass m moving on a low-friction surface. Students were asked to find the work done on the box by the person, given the initial speed of the box v and the final speed zero. Students who provided a correct answer all started from the Energy Principle; some were very careful and provided detailed explanations, as in the following excerpt (though this student confused heat with internal thermal energy):

So when I think of this question, I think of Energy Principle. So I'll think of change in energy in a system equals work external plus Q, no thermal energy so that equals zero. Umm, it has rest energy, but no mass is changing, so I won't write that down. There's nothing else: negligible friction, no change in E internal. So, I'll just say change in kinetic energy equals work external [writing down "ΔK = Wext"]. Um, I guess I'll write everything out: one-half m v final squared minus one-half m v initial squared equals work external [writing down "(1/2)mvf² − (1/2)mvi² = Wext"]. Finally the person brings the box to a full stop. So, to me, that will mean the final velocity is zero. So that would cancel that term out [crossing out "(1/2)mvf²"]. So you're left with initial kinetic energy [writing down "Wext = −(1/2)mvi²"], which is why I put (d).

Figure 7. Two energy assessment questions that require students to apply the Energy Principle in different contexts.

Some students struggled a bit before thinking of the Energy Principle. But once they started considering the Energy Principle, their subsequent explanations flowed smoothly:

I am not sure how to take the kinetic energy into account, like these answers have one-half m v squared. I am not sure what that means, since … [Pause] … Oh! I guess … delta E is the work external, which is delta K in this case [writing down "ΔE = Wext = ΔK"], because there's no temperature change, and there's no change in rest energy. So, since the final is, the final kinetic energy is zero, I guess that will be negative one-half mv squared is delta K [writing down "ΔK = −(1/2)mv²"]. Yeah, I think it's (d).

Contrary to the above cases, some students did not invoke the Energy Principle but solely focused on the definition of work. Consequently, these students failed to determine the value of the work done on the box. The following excerpt is a typical case.

… You need more information. You'll need to know how, over what distance the person stops the box. You need to know from the initial contact to when it stops. And you could figure out the force the person had to apply over that distance to stop it in that zone …

A common error in answering this question was to confuse K with ΔK. Students who made this error chose "(a) the amount of work done on the box by the person was +(1/2)mv2." This was the most popular wrong answer on the posttest, accounting for nearly a quarter of all responses.
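
For reference, a minimal worked sketch of the intended reasoning for Q27 (our notation, not any particular interviewee's), taking the box as the system with negligible friction and no internal energy change:

\[
W_{ext} = \Delta K = \tfrac{1}{2}mv_f^{2} - \tfrac{1}{2}mv_i^{2} = 0 - \tfrac{1}{2}mv^{2} = -\tfrac{1}{2}mv^{2}.
\]

The popular distractor +(1/2)mv2 corresponds to quoting the initial kinetic energy K rather than the change ΔK.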

Another example is Q28 (see Figure 7b), in which an asteroid is initially observed moving in one direction at a constant speed and is later observed moving in the opposite direction at the same speed. Students were asked to determine whether the total external work done on the asteroid between the two observations was positive, negative, or zero, or whether there was not enough information to decide. Students who started from the Energy Principle managed to answer the question correctly. The following is an example.

Well, I think work external will be delta K in this case [writing down “Wext = ΔK”]. And speed is the same both times, so it has the same kinetic energy both times, so there's no change in kinetic energy. So work external must be zero.

One student struggled over the change in motion direction before thinking of the Energy Principle, but as soon as she started from it, she quickly arrived at the correct answer.

Okay, so, I am thinking… I am trying to visualize this situation in my head… Um… It could be positive or negative. I am not sure… Because again I was thinking of the same equation: change in kinetic equals work external [writing down "ΔE = Wext"]. So… [writing down "(1/2)mvf2 − (1/2)mvi2 = Wext"] I think it is the same thing really… My original thought was—I was thinking if it was moving towards the right, this would be positive [pointing at "vf"]. But it's squared, so the direction doesn't matter. So, zero, zero—yeah, zero, it's my final answer.

Students who did not invoke the Energy Principle dwelled on the motion directions of the asteroid in the initial and final states. Because the question does not specify either direction, some of these students picked an initial direction themselves and worked from it [consequently choosing either (a) positive work or (b) negative work as the final answer], while others simply answered "not enough information to determine." In no case did we find students answering (a) positive work simply because they thought work must always be positive.
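
A corresponding sketch for Q28, assuming only what the item states (the same speed v, and hence the same kinetic energy, at both observation times):

\[
W_{ext} = \Delta K = \tfrac{1}{2}mv^{2} - \tfrac{1}{2}mv^{2} = 0.
\]

Direction enters the kinetic energy only through v2, which is why the reversal of the asteroid's motion is irrelevant, as the interviewee quoted above eventually recognized.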

Students Were Able to Qualitatively Analyze Different Energy Forms Without Referring to a Formula Sheet

Specifically, some students could not remember exact formulas but still managed to answer the questions correctly through sensible reasoning. For example, Q7 (see Figure 8a) asks for the sign of gravitational potential energy in the context of two passing asteroids. One student explicitly commented that she could not remember the exact formula; instead, she drew on the shape of the gravitational potential energy graph and correctly answered the question:

I remember it being, it's related to the distance between them and their masses. I can't remember the exact formula, but… The curve always looks like, U versus r always looks like something like that [drawing the gravitational U curve below the horizontal axis of an xy coordinate system]. I remember something like that. And this is zero [pointing at the x-axis]. That's why.

Figure 8. Three energy assessment questions that require students to apply related energy concepts across different contexts and scales. Students were able to qualitatively answer these questions during the interviews.

Ironically, students who single-mindedly attempted to recall the gravitational potential energy formula eventually provided a wrong answer. These students mistakenly recalled expressions for gravitational potential energy that yield positive values (for example, forms lacking the negative sign) and consequently answered that the gravitational potential energy was greater than zero.
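
For reference, the gravitational potential energy of the two-asteroid system is negative at every finite separation r:

\[
U_{grav} = -\frac{Gm_1 m_2}{r} < 0 \quad (r > 0),
\]

which is consistent with the U-versus-r curve that the successful interviewee sketched below the horizontal axis; dropping the negative sign yields the positive-valued expressions that led other students astray.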

Another example is Q9 (see Figure 8b); this question involves Rutherford scattering in which an alpha particle is scattered by a gold nucleus along a trajectory. Given several marked points on the trajectory, students were asked to decide at which point the system of the alpha particle and the gold nucleus has the greatest electric potential energy. Almost all the student interviewees provided a correct answer with appropriate reasoning. Although students frequently commented that they could not remember the exact formula and thus were not confident about their answers, they knew qualitatively how electric potential energy should be related to charge and distance (which is sufficient to answer the question). The following excerpt illustrates this situation:

Um, they are both positively charged, so they should be repelling each other. So, I think the potential will be greatest when they are closer together… So I think it should be (c). That's also something, I always forget these equations…
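
This qualitative reasoning matches the functional form of the electric potential energy of two positive charges, which grows as the separation r shrinks:

\[
U_{elec} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r} > 0,
\]

so the system's electric potential energy is greatest at the point of closest approach, with no numerical constants needed to answer the item.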

Yet another example is Q11 (see Figure 8c). It asks students to compare the spring potential energy of two identical springs: one being compressed and the other being stretched by the same amount. One student candidly admitted being unable to recall the exact formula for the spring potential energy. Nonetheless, this student correctly related spring potential energy to stretch and managed to provide a correct answer:

I think that's the same. I can't remember what the formula is. But I don't think it has something to do with the direction. I think it has something to do with the absolute value of the stretch or something. So, I think it will be the same.
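
This reasoning matches the quadratic form of the spring potential energy (writing k_s for the spring stiffness and s for the stretch, with s negative for compression):

\[
U_{spring} = \tfrac{1}{2}k_s s^{2},
\]

which depends only on the magnitude of s, so equal compression and stretch of identical springs store equal potential energy.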

Students Had Noticeable Difficulties in Determining the Work Done on Deformable Systems

When asked to find the work done on an extended object by a force, students often failed to attend to the contact point at which the force is applied and instead were easily distracted by the change in the object's configuration. For example, Q24 asks students to determine the work done on a car by a wall during a collision, given that the average force applied to the car by the wall has magnitude F and that the car ends up ΔL shorter (see Figure 9). During the interviews, only two students realized that they ought to focus on the contact point rather than on the change in the car's length. All other interviewees chose either (a) FΔL or (d) −FΔL, and their explanations were similar to the following excerpt:

When you are dealing with work done to a real system as opposed to a point particle system, you pay attention to the actual length of the entire car and any actual forces that might be in this situation.

Figure 9. An energy assessment question that requires students to apply the work concept in a deformable macro system.

This observation is consistent with written posttest results, where over 70% of the students chose either (a) FΔL or (d) −FΔL.
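
A brief sketch of the distinction at issue, under the conditions stated in Q24 (average force of magnitude F from the essentially stationary wall, car shortened by ΔL); this is our own illustration, not the assessment's printed solution. For the real, deformable car, the work done by the wall is the force acting through the displacement of the contact point, and the material point of the car in contact with the wall barely moves:

\[
W_{wall} = \vec{F}\cdot\Delta\vec{r}_{contact} \approx 0,
\]

whereas the point-particle (center-of-mass) relation involves the displacement of the center of mass, which is generally not equal to ΔL:

\[
F_{net}\,\Delta x_{cm} = \Delta K_{trans}.
\]

On this analysis, neither +FΔL nor −FΔL gives the work done on the car by the wall; attending to the nearly zero displacement of the contact point is the consideration the two successful interviewees focused on.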

Discussion and Implications for Course Instruction

Increasing student conceptual understanding has been well recognized as an important learning goal in science education at all levels. However, although researchers tend to assume that the definition of "conceptual understanding" is shared among all scholars (diSessa & Sherin, 1998), the exact meaning of the term has typically not been clarified. In our study we define this construct as the flexible application of fundamental principles across different contexts and scales to explain various phenomena. This viewpoint not only matches the desired epistemological perspective of learning that we strive to cultivate among our students (Lindsey, Hsu, Sadaghiani, Taylor, & Cummings, 2012; Milner-Bolotin, Antimirova, Noack, & Petrov, 2011; Zhang & Ding, 2013), but also allows us to operationally observe and monitor learners' performance on this important aspect, thereby better informing teaching and curriculum in the long run.

Using this position as an entry point, our research effort in assessing student conceptual understanding of energy topics is grounded in the development of a valid and reliable assessment that maximally embodies the target construct of applying the Energy Principle and its related concepts. This guiding framework led us to pay particular attention to the cognition levels and reasoning steps of individual questions—features that were often not explicitly addressed in prior studies of test design. Our approaches to the energy assessment, such as categorization of Bloom's levels and determination of reasoning steps, offer a practical means of delineating these important features and hence can facilitate close alignment among test purpose, scope, and specifications.

Our analysis of student performance on the energy assessment further explicated the meaning of conceptual understanding in terms of principle application. As detailed above, students who chose to start from the Energy Principle were generally able to use it correctly in answering relevant questions. Conversely, those who did not start from the fundamental principle almost always failed to find correct solutions. Simply put, starting from the fundamental principle increased students' success. This is indeed a focal point of the conceptual understanding we attempted to target in the study. Also important in light of this construct is students' application of key energy-related concepts to analyze a phenomenon. In our study, students demonstrated sufficient capability to engage in qualitative reasoning and answer questions correctly without referring to formulas. A body of literature has emphasized the important role of qualitative analysis in solving physics problems (Ding, Reay, Lee, & Bao, 2011; Dufresne et al., 1992; Mestre et al., 1993). This emergent pattern, identified from our interview observations, provides useful insight into the intimate relationship between conceptual understanding and qualitative reasoning.

Findings from our study also suggest that, in general, the principle-based approach in the M&I mechanics course succeeded in enhancing student understanding of energy topics. As mentioned before, students in the M&I mechanics course are not introduced to special-purpose formulas but instead are consistently instructed to tackle problems by starting from the fundamental principles. With the Energy Principle as the central hub for energy topics, the M&I mechanics course systematically incorporates discussions of various energy forms (at both macro and micro scales), system specification, different means of changing the energy of a system, and discrete energy levels of atomic spectra. These principle-centered discussions have noticeably improved students' application of energy concepts across diverse contexts. As evident from the pre- and post-instruction comparisons, students made significant improvement in their performance on the entire energy assessment, on individual questions, and on individual test objectives. That said, many students still exhibited noticeable difficulties in dealing with the work done on deformable systems. Specifically, they were easily distracted by configuration or shape changes in objects and did not focus on the contact point of the force exerted on the deformable system.

From a pedagogical perspective, our study provides useful implications for effective instruction on energy, a scientifically important topic. Indeed, energy is a crosscutting concept that appears in every domain of science (NRC, 2011a, 2012b). Teaching and learning of this concept pertain not only to college-level introductory physics but also to science, technology, and engineering education at all grade levels. Findings from our study suggest that starting from fundamental principles can positively influence student performance in applying energy concepts to solve related problems. It therefore follows that anchoring instruction in fundamental principles can be an effective approach to teaching energy topics in particular and other scientific concepts in general. In fact, besides the present study, research on curriculum development in biology and secondary physical sciences also suggests similar implications (Pankratius, 1990; Wilson et al., 2006).

To encourage students to start from fundamental principles, instructors may need to increase students' confidence in these principles by deliberately bringing to the fore the causal meanings behind them. For example, in conveying the essence of the Energy Principle, instructors can follow the aforementioned M&I approach by explicitly directing students' attention to the causal relationship between energy transfer processes and the energy change of a system (see the "Approaches to Energy Topics in M&I Modern Mechanics" section). Continued practice in applying the Energy Principle across various contexts and scales, as is frequently implemented in M&I Modern Mechanics, also helps guide students toward more flexible application of fundamental principles.

In fostering student conceptual understanding of energy concepts, it is important that students learn to analyze physical situations qualitatively without using formulas. However, as we found in this study, students often feel uncomfortable solving problems without a formula sheet at hand. This is a typical situation in learning physics (or any other science). A major reason is that students typically lack a proper understanding of the nature of science. Instead of viewing physics as a unified enterprise centered on a small number of fundamental principles, many students tend to see physics (or science in general) as a large collection of isolated facts or formulas, and hence equate learning with memorizing unrelated information. In light of this, instructors need to explicitly address, beyond teaching content knowledge, what science learning is and how it can be made more effective. As is consistently stressed in the M&I mechanics course, students need to hear explicitly that blind use of special-case formulas is not a healthy way of practicing science and that qualitative analysis can often be an effective means of tackling problems.

Finally, it is worth noting that although this study focused on assessing students' conceptual understanding of energy topics in a college-level introductory physics course, the aforementioned implications are equally useful for lower-level science education and for education in other subject domains. In a broader sense, besides its benefits for teaching the crosscutting concept of energy (NRC, 2011a, 2012b), the principle-based approach highlighted in this study can serve as a productive framework for teaching and assessing various crosscutting concepts or disciplinary core ideas (NRC, 2011a, 2012b). By underscoring the importance of applying fundamental concepts in science education, we can better unpack and operationalize intended learning goals, which in turn facilitates assessment development and the measurement of learning outcomes. Ultimately, this chain effect can feed back into teaching and curriculum to sustain a healthy cycle.
