CAL-laborate Volume 5 October 2000

Is There a Right Way to Teach Physics?

Ian Johnston
and
Rosemary Millar
School of Physics, The University of Sydney, NSW 2006, Australia

Background

An important development in university physics teaching in the last two decades has been the emergence of a world-wide Physics Education Research community. In physics departments at a relatively large number of institutions throughout the world, particularly in the USA and Europe, academic physicists are doing research into the difficulties associated with teaching their subject [1]. Among the many directions this research has taken is the identification of "misconceptions" (sometimes referred to as "alternative conceptions"). These are ideas or concepts which students have constructed for themselves, based on their own experience of the natural world, and which are often in conflict with the agreed view of practising scientists. Research has shown that these "misconceptions" are very widely shared, very often in conflict with other concepts the same student holds, and very difficult to change.

Following on from this research, a great deal of work has been done to develop special diagnostic tests which uncover which, if any, of these misconceptions particular students hold. These tests normally consist of a series of multiple-choice questions in which the "right" answer is hidden among very tempting distractors, each one targeting one or more common misconceptions. Among the best known of these tests, in the subject area of kinematics and dynamics, are the Force Concept Inventory (FCI) [2] and the Force and Motion Conceptual Evaluation (FMCE) [3].

This research has, in turn, prompted the development of teaching strategies which target specific classes of misconceptions, in the (understandable) belief that, if students can get the fundamental concepts "right", they have a better chance of understanding the rest of the subject. The results of these strategies are reported in the literature, and a consensus is emerging within the physics education community that, for example, traditional teaching (chalk and talk, lectures plus laboratories) is relatively ineffective in changing misconceptions. On the other hand, one recent survey of over 6000 students in the USA has shown that teaching which employs interactive methods can result in significant increases in understanding (as measured by these diagnostic tests) [4].

It would seem important therefore that teachers everywhere should take these findings seriously, and, where possible, test whether the same gain in understanding can be achieved in other teaching contexts.

Interactive Lecture Demonstrations

Many of the new techniques just mentioned involve quite elaborate teaching materials and preparation time on the part of the teacher. In today's university climate, increasing workloads and student numbers often mean that time is just what university teachers do not have, and many of these new techniques are therefore destined to be little used. However, one technique which originated at Tufts University, near Boston, the Interactive Lecture Demonstration (ILD) [5], is designed to be used in a traditional teaching context, that is, in an ordinary lecture. ILDs consist of a number of simple experiments in which a microcomputer logs data from a motion sensor and displays it in graphical form on a data projector while the instructor performs the demonstration. Students are told what is about to happen, and write their predictions of what the graphs will look like on specially prepared sheets. Only when they have done this, and resolved any disagreements by discussion among themselves, are they shown the actual experiment and the data the computer has collected and graphed. After this, class discussion is devoted to where any incorrect predictions went wrong.
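For readers who have not seen such a display, the following minimal sketch suggests the kind of distance-time graph that gets projected. It is written in Python with simulated sensor readings; the sampling rate, noise level and motion parameters are illustrative assumptions only, and the sketch makes no claim to reproduce the actual Tufts apparatus or software.

    # Illustrative only: simulated motion-sensor data for a cart rolling
    # up and back down an incline, plotted as the distance-time graph an
    # ILD would project. All numbers are assumptions, not measured values.
    import numpy as np
    import matplotlib.pyplot as plt

    t = np.arange(0.0, 3.0, 0.05)               # assumed 20 Hz sampling (s)
    a = -0.8                                    # assumed acceleration (m/s^2)
    x = 0.2 + 1.2*t + 0.5*a*t**2                # ideal constant-acceleration motion
    x_meas = x + np.random.normal(0.0, 0.005, t.size)  # assumed sensor noise

    plt.plot(t, x_meas, ".", label="motion sensor data")
    plt.xlabel("time (s)")
    plt.ylabel("distance from sensor (m)")
    plt.title("Cart on an incline: distance-time graph")
    plt.legend()
    plt.show()

Students commit to a predicted shape for such a curve before the trace appears; the discussion then centres on why, for instance, the graph is a single smooth curve rather than the shape they predicted.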

Clearly such a technique means that the instructor must follow a fairly rigidly imposed scenario. Although the demonstrations are done in an ordinary lecture setting, there is little scope for the instructor to do what he or she wants to do, and questions of "covering the syllabus" and "giving good sets of notes" have to take second place. Fortunately, only four one-hour sessions are specified, and the instructor has the rest of the allotted periods to do what is normally considered necessary in a lecture course (and which, it will be remembered, research shows to be not very useful).

Results from this teaching technique have been reported in the literature over the last five or six years. Typical are those reported from the University of Oregon in 1996, shown in Figure 1 [6]. The diagnostic instrument used was the FMCE, and student responses are reported for four groups of questions concerning Newton's Laws, though the precise material the questions covered is not particularly relevant here.

Figure 1. Showing the percentage of correct responses to questions in four groupings, as published by Thornton and Sokoloff. Results are (1) responses from students in all classes before instruction, (2) responses after instruction from classes taught by traditional methods, and (3) responses after instruction from classes taught using ILDs. The figure is adapted from reference [6].

Several points will be noticed from this figure. Firstly, student "understanding" (or whatever is being measured by these tests) is very low on entry. It must be noted that the physics course in question was calculus-based, and the students would have been planning on a physics major. Some of the more prestigious universities in the USA have students who attain higher scores on entry, but nevertheless scores similar to the above are not untypical of students just out of high school in the USA.

Secondly, it will be noted that there is no very great improvement after a semester of traditional instruction. Such results are also typical of the universities and colleges reported in the literature, and are part of the accepted body of evidence which suggests that traditional teaching is relatively ineffective in generating this kind of understanding.

Lastly, there is the very impressive improvement in "understanding" demonstrated by those students who were exposed to four one-hour sessions using the ILDs and the stipulated interactive teaching. The results reported here are not the only ones to show such improvements. This particular teaching technique therefore seems able to claim, prima facie, that other teachers using it can expect similar improvements. It would obviously be important to test this expectation in another context - for example, with a class of Australian students.

Evaluating the effectiveness of ILDs

In March 1999, such a test was conducted with introductory physics students at The University of Sydney. The roughly 450 students were divided into four calculus-based classes, one at the "Advanced" level and three at "Regular" level. Of the latter, one group was taught using ILDs, and the other two, taught by a different lecturer, were regarded as a control. The structure of the course is similar to that in most physics departments in the country: the areas of kinematics, force and motion, work and energy, collisions and rotational dynamics are taught over five weeks, usually in 15 one-hour lectures with a weekly tutorial and regular homework assignments. For the trial being reported, the experimental class had 11 one-hour lectures and four one-hour ILD sessions, but everything else was the same. All classes shared the same assignments and end-of-semester examination.

All 450 students were tested during the first lecture period, using the FMCE diagnostic test, and two weeks after the end of the module, in the seventh week of semester, all were asked to take exactly the same test again.
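Although the original mark sheets are not reproduced here, the analysis behind Figures 2 to 5 amounts to computing, for each group of related questions, the percentage of correct responses before and after instruction. A minimal sketch, under an assumed (hypothetical) data layout of one answer-dictionary per student plus an answer key, might look like this:

    # Hypothetical scoring sketch: percent correct over a named group of
    # questions, computed separately for the pre-test and the post-test.
    from typing import Dict, List

    def percent_correct(responses: List[Dict[int, str]],
                        key: Dict[int, str],
                        group: List[int]) -> float:
        """Percentage of correct answers over one group of questions."""
        attempts = len(responses) * len(group)
        right = sum(1 for student in responses
                      for q in group
                      if student.get(q) == key[q])
        return 100.0 * right / attempts

    # e.g. (question numbers invented for illustration):
    # groups = {"velocity": [1, 2, 3], "acceleration": [4, 5, 6]}
    # pre  = percent_correct(pre_responses,  key, groups["velocity"])
    # post = percent_correct(post_responses, key, groups["velocity"])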

Results

Results of the experiment are shown in Figure 2, in which student responses are reported for ten groups of questions on that test, including the four groups singled out in Figure 1.

Figure 2. Showing the percentage of correct responses to questions in ten groupings (including the four groups represented in Figure 1), as found in the 1999 experiment. Results are (1) responses from students in the experimental class before instruction, and (2) responses from the same class after instruction using ILDs.

The first point to be noted is that Australian students are clearly very well prepared when they enter university. The on-entry scores are comparable with, or better than, those at the very best US institutions. In these times when high school teachers are being criticized, this finding deserves to be better known.

The second point, however, is less palatable. It is immediately obvious that the gains in understanding reported in the literature did not occur here. There was some gain, but the absolute values for the fraction of students answering the questions correctly fell far short of those in Figure 1. And the relative gain - the proportion of those students who were unable to answer the questions before instruction who were able to answer them after it - was even worse, particularly considering that the Australian students had so much higher scores on entry.
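The relative gain defined above matches the normalized gain popularized by Hake (reference 4): the raw improvement divided by the improvement that was still available, g = (post - pre)/(100 - pre). A short sketch, with invented scores, shows why the distinction matters for classes with high entry scores:

    def relative_gain(pre: float, post: float) -> float:
        """Normalized ('relative') gain as a percentage: the fraction of
        the room left for improvement that was actually achieved."""
        return 100.0 * (post - pre) / (100.0 - pre)

    # Invented example scores: a class going from 75% to 85% achieves a
    # relative gain of 40%, while one going from 30% to 65% achieves 50%.
    print(relative_gain(75.0, 85.0))   # 40.0
    print(relative_gain(30.0, 65.0))   # 50.0

A high entry score leaves little headroom, which is why raw and relative gains must be kept distinct when comparing the Sydney classes with those of Figure 1.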

The teaching effectiveness of the ILD method, compared with the control classes, is shown in Figure 3, in which the relative gain for both groups of students is shown.

Figure 3. Showing the relative gain, as determined by pre- and post-testing (results expressed as a percentage), as a result of instruction in 1999 for (1) students in the control classes, taught by traditional methods, and (2) students in the experimental class, taught using ILDs.

On the basis of these data, a case can be made that the new teaching technique is more effective than traditional methods, at least so far as student understanding (as measured by the FMCE) is concerned. The conclusion would seem to be that this new method of teaching, while effective in itself, does not necessarily yield the very impressive results claimed for it. There are of course many possible explanations: the teacher (IJ) may not have done things properly; the students may have been atypical; and/or the testing protocols may not have been careful enough. To answer some of these, the experiment was repeated in 2000, exactly as in the previous year.

Results from the second attempt

Again there was one experimental class (of some 120 students) and two control classes, totalling about 300 students. All other logistical details were unchanged. However, more attention was given to controlling what actually happened in the classroom. The teacher's performance had been videotaped in 1999, and inspection of those tapes suggested that, while he had done everything that was prescribed, the balance between the different parts of the lecture was not ideal. To put it bluntly, he talked too much. For the second trial it was decided that he should stick to saying only what was necessary, and spend more of the time getting the students to interact and to write. It should be remarked that most lecturers rather like to hear themselves talk, and this constraint can be somewhat burdensome.

The results of the second trial are presented in Figures 4 and 5, which should be compared with Figures 2 and 3. (Note that responses to the last group of questions on the previous graphs were not included in the 2000 results, for reasons that are not important here.)

Figure 4. Showing the percentage of correct responses to questions in ten groupings (including the four groups represented in Figure 1), as found in the 2000 experiment. Results are (1) responses from students in the experimental class before instruction, and (2) responses from the same class after instruction using ILDs.

Figure 5. Showing the relative gain, as determined by pre- and post-testing (results expressed as a percentage), as a result of instruction in 2000 for (1) students in the control classes, taught by traditional methods, and (2) students in the experimental class, taught using ILDs.

It is immediately clear that the performance of the students in the experimental class improved very markedly, while that of the students in the control classes was similar to the previous year's. Although the percentage gains are not as great as those in Figure 1, it is clear from the "raw" scores in Figure 4 that a very large fraction of the class seemed to understand the material - in the sense that, of all the students who could get to first base, that is, who could answer the early questions on elementary kinematics, most could answer most of the rest of the questions.

That raises the very interesting question of those who could not answer the kinematics questions. Inspection of the original scripts shows that these students got essentially none of the questions right (or at least only about the number they could have got by pure guessing). Yet all of the students in the class had passed physics at high school. It is almost as though this group had reached some kind of personal limit in their ability to understand the subject. This group needs to be studied very carefully in subsequent research.
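To make "pure guessing" concrete: on a multiple-choice test, the expected score from random answering is simply one over the number of choices per question. The choice counts below are illustrative assumptions only (FMCE questions vary in their number of options):

    # Chance-level baseline for random guessing on multiple-choice items.
    # Choice counts are illustrative assumptions, not FMCE specifics.
    def chance_level(num_choices: int) -> float:
        """Expected percent correct from uniform random guessing."""
        return 100.0 / num_choices

    for n in (5, 7, 9):
        print(f"{n} choices -> about {chance_level(n):.0f}% by guessing")
    # 5 choices -> about 20% by guessing
    # 7 choices -> about 14% by guessing
    # 9 choices -> about 11% by guessing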

Conclusions

Clearly this experiment needs to be repeated, with different teachers and with different classes of students. Nevertheless, it seems possible to conclude that this teaching method can yield results similar to those claimed for it. A note of caution must be sounded, however. The unstated hope driving the experiment in the first place was that the ILD method might be a teaching technique that could in some sense guarantee student learning, given only reasonable teachers and teaching administration. The previously published results seemed to suggest that this might be the case. The fact that only modest gains were recorded on the first attempt reported here, when the method was used by a very experienced teacher, suggests that there is no real guarantee.

Nevertheless, if this method is indeed a "right" way to teach physics (there may be others, of course), very thorny questions suggest themselves about the freedom of teachers to teach as they believe best. This paper would not dare address such questions.

References

  1. A comprehensive bibliography of this field can be found in: McDermott, L. C. and Redish, E. F. (1999) Resource Letter: PER-1: Physics Education Research, Am. J. Phys., 67(9), 755-767.
  2. Hestenes, D., Wells, M. and Swackhamer, G. (1992) Force Concept Inventory, The Physics Teacher, 30, 141-158.
  3. Thornton, R. K. and Sokoloff, D. R. (1998) Assessing student learning of Newton's laws: The Force and Motion Conceptual Evaluation and the Evaluation of Active Learning Laboratory and Lecture Curricula, Am. J. Phys., 66(4), 338-352.
  4. Hake, R. R. (1998) Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses, Am. J. Phys., 66(1), 64-74.
  5. Thornton, R. K. (1997) Learning Physics Concepts in the Introductory Course: Microcomputer-based Labs and Interactive Lecture Demonstrations, in Proceedings of the Conference on the Introductory Physics Course, Wiley, New York, 69-85.
  6. Thornton, R. K. and Sokoloff, D. R. (1996) Using Interactive Lecture Demonstrations to Create an Active Learning Environment, in The Changing Role of Physics Departments in Modern Universities, eds Redish, E. F. and Rigden, J. S., American Institute of Physics, 1061-1074.

Ian Johnston
School of Physics
The University of Sydney
NSW 2006
Australia
idj@physics.usyd.edu.au

and

Rosemary Millar
School of Physics
The University of Sydney
NSW 2006
Australia
millar@physics.usyd.edu.au

