UniServe Science News Volume 15 March 2000


Development and Evaluation of a Model-building Tutorial - How to decide what worked and what didn't!

Debbi A. Weaver, Peter J. Harris and Lea M. D. Delbridge
Department of Physiology, The University of Melbourne


Students in the biological sciences often have problems understanding the concepts involved in the operation of control systems. We have found that they are very good at rote learning the diagrams supplied in lectures, but have a great deal of trouble relating these to real-world systems. Additionally, teaching difficult concepts in a manner that engages students has always been a challenge. This project is a direct response to student and staff difficulties with the teaching of biological feedback control.

Understanding control of blood pressure

The incentive for developing this program came from a desire to replace a hands-on practical class in which students investigated the control of blood pressure using anaesthetized animals. This class was very successful in teaching experimental techniques and ways of investigating acute blood pressure interventions, but was less successful in helping students understand the overall operation of the blood pressure control system. In line with the worldwide move to reduce animal experimentation, we initially replaced the practical class with a video session, but this proved less than helpful in teaching the concepts involved, largely because of the passive nature of the experience for students.

It was then decided to develop an interactive computer tutorial, to be used in conjunction with lectures and other classes. The major aim in designing this tutorial was to concentrate on the underlying concepts of reflex control of blood pressure, not to provide a data-rich replacement for lectures or textbooks.

Principles used in software design

The pedagogical principles and interface design issues considered in development of our interactive programs have been described previously (Weaver et al., 1996; Kemm et al., 1997). Many of these principles and issues are relevant to the present project involved with control system investigation. The program has been designed to be used in a collaborative learning environment, where students work in small groups of two to three, with a tutor present. Our experience has shown this to be very effective at provoking discussion of the learning issues.

Targeting the curriculum need

The initial stage in designing the tutorial involved identifying specific needs. We analyzed students' answers to an examination question on the control of blood pressure and identified their major misconceptions. This allowed us to target the areas of greatest difficulty and to design the tutorial accordingly. We found that students had difficulty understanding the concepts of feedback control in general, and the signalling sequence of reflex control in particular (i.e. detection of an imbalance must occur before reflex correction can take place).

Model construction

The major feature of this program is a model-building exercise, where students use the tools provided to construct their own model of a reflex system. They are able to test their model at any stage, and receive substantive and specific feedback on their choices. This approach allows students to test different scenarios, and so gain a deeper understanding of the concepts involved.

The model represents a neural circuit which will allow a person to maintain the blood supply to the brain when they change posture. Students are given a palette of components of the nervous system, which they can position onto a background diagram of a human body. These components consist of receptors (signal detectors), input (afferent) neurones, processors (interneurones), and output (efferent) neurones (see Figure 1). These components are fully manoeuvrable, and the neurones can be stretched or rotated as required to make appropriate connections.


Figure 1. Model-building components
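The palette of manoeuvrable components described above can be pictured as a small data structure. The following Python sketch is purely illustrative (the class and attribute names are assumptions, not the tutorial's actual code); it shows one way draggable components with a functional role, position, rotation, and stretchable length might be represented:

```python
# Illustrative sketch (assumed names) of the component palette in Figure 1:
# each component has a functional role, a position on the body diagram,
# a rotation, and (for neurones) a length that can be stretched.
from dataclasses import dataclass

@dataclass
class Component:
    kind: str              # 'receptor', 'afferent', 'interneurone', or 'efferent'
    x: float = 0.0         # position on the background diagram
    y: float = 0.0
    rotation: float = 0.0  # degrees
    length: float = 1.0    # neurones can be stretched to make connections

    def move_to(self, x: float, y: float) -> None:
        """Reposition the component on the diagram."""
        self.x, self.y = x, y

# The palette offers one of each component type; students place and adjust copies.
palette = [Component("receptor"), Component("afferent"),
           Component("interneurone"), Component("efferent")]
palette[1].move_to(120.0, 45.0)  # drag the afferent neurone into place
```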

Students can test their model for completeness and correctness at any stage. They receive feedback in the form of a simple animation of an electrical impulse travelling around the system, releasing chemical neurotransmitter where appropriate, followed by specific textual feedback. The textual feedback always takes the format of a positive statement (what is correct so far), followed by a statement about what is not yet correct, and a hint about what to consider next.
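The three-part feedback format described above (positive statement, then what is not yet correct, then a hint) can be sketched as a simple message-composition function. This is a hypothetical illustration, with invented function names and example wording, not the tutorial's implementation:

```python
# Hypothetical sketch of the three-part feedback format: a positive statement,
# then what is not yet correct, then a hint about what to consider next.
# The function name and the example strings are illustrative assumptions.

def build_feedback(correct_so_far: str, not_yet_correct: str, hint: str) -> str:
    """Compose a feedback panel message in the positive -> corrective -> hint order."""
    return (f"Well done: {correct_so_far} "
            f"However, {not_yet_correct} "
            f"Hint: {hint}")

message = build_feedback(
    "the receptor is correctly placed.",
    "the afferent neurone is not yet connected to the processor.",
    "consider where the receptor's signal must travel next.",
)
print(message)
```

Keeping the three parts as separate arguments makes it easy to guarantee that every panel leads with encouragement before the correction, as the design requires.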

The completed model is complex, although it is still a simplification of the real-life system (see Figure 2), and we anticipated that students would find this task challenging. Some attributes of the model were deliberately limited (e.g. the choice of neurotransmitters), and questions were set at the end of the tutorial to prompt consideration of these simplifications. At all stages, our aim was to concentrate on the concepts involved and to introduce detail later, to avoid overloading the students at any one stage.


Figure 2. Completed model on screen, showing complexity of parallel output

Implementation and evaluation

The first draft of this program was tested on 200 Medical students at The University of Melbourne in 1998. Students worked in groups of two or three, in scheduled classes, with at least one of the developers present as an observer in every class. Written questionnaire responses were collected from every student. At the same time, electronic audit trails of student progress through the model construction were collected. These audit trails were then matched to the questionnaire responses.

The audit trails collected selective data only, to keep the volume of information to be evaluated manageable and interpretable. Rather than recording every mouse click, we tracked students' viewings of feedback panels, so that we could map exactly which path each group took to reach the complete model. This also allowed us to identify the areas of greatest difficulty, as revealed by repeated viewings of feedback panels, or repeated attempts to get past a particular stage.
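The selective logging described above can be sketched as a minimal audit-trail recorder. All names here are assumptions for illustration; the point is that only feedback-panel viewings are recorded, and the ordered sequence of viewings reconstructs the group's path through the exercise:

```python
# A minimal sketch (assumed names) of selective audit logging: record only
# feedback-panel viewings, with a timestamp and the group's identifier,
# rather than every mouse click.
import time

class AuditTrail:
    def __init__(self, group_id: str):
        self.group_id = group_id
        self.events = []  # list of (timestamp, panel_id) tuples

    def log_panel_view(self, panel_id: str) -> None:
        """Record that this group viewed a feedback panel."""
        self.events.append((time.time(), panel_id))

    def path(self) -> list:
        """The ordered sequence of panels viewed: the path taken to completion."""
        return [panel for _, panel in self.events]

trail = AuditTrail(group_id="group-07")
trail.log_panel_view("input-pathway-3")
trail.log_panel_view("input-pathway-3")  # a repeat viewing flags a sticking point
trail.log_panel_view("processor-1")
print(trail.path())  # → ['input-pathway-3', 'input-pathway-3', 'processor-1']
```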

Analysis of evaluation data

Observations of the classes by the developers confirmed our expectation that students would find the model-building task challenging. Students generally took some time to familiarize themselves with the program, but were able to complete the given tasks in the time provided. Some students experienced difficulty in getting started, quickly became frustrated, and then became less likely to read instructions or textual feedback. The difficulty of the model, combined with unfamiliarity with the symbols used, left these students overwhelmed by too much new information at once.

Questionnaire analysis

The questionnaire responses were generally consistent with the developers' observations: students enjoyed the interactive task of constructing their own model, and reported a sense of achievement on completing it, although most experienced some difficulty, especially in getting started. We had not anticipated that students would be unfamiliar with the symbolism commonly used to represent components of the nervous system, and would therefore have trouble at first distinguishing the functions of the different components of the neural circuit.

Students enjoyed the animated sequences, with many commenting that they appreciated the visual representation of the electrical impulse travelling around the circuit, although many found the textual feedback less clear. Typically, they asked for more directive hints ("Tell me what to do!"), which is something we were consciously trying to avoid. Importantly, most students reported that they used the textual feedback in constructing the model, even if most still experienced difficulties along the way.

The most commonly reported difficulty was in getting started. We interpreted this as early cognitive overload: students were faced with learning the symbolism used, how to use the tools provided, and how to interpret the feedback, all before they could begin to construct their model. This has been addressed in subsequent versions of the program (see below).

Students were also asked for their suggestions for improving the tutorial, and many took this opportunity to provide some excellent feedback. Several of these proposals have since been incorporated into the tutorial design.

Audit trail analysis

Collection and analysis of the electronic audit trails allowed us to track the paths taken by students through the model-building exercise. We recorded the number of student groups or pairs that viewed any particular feedback screen three or more times, taking this as an indicator of difficulty in moving past a particular model-building stage (see Figure 3).

It was possible to identify sites of common confusion in the model construction process, where repeated viewings of feedback panels rose significantly above background levels. By matching the audit trails with the written questionnaires, we could identify the causes of the spikes in this diagram, which indicate that 10 or more groups of students revisited the same feedback panels three or more times. Generally, the peaks in the early stages of model construction (the input pathway stage) correspond with the difficulties students reported in getting started. High numbers of repeat viewings in the central processing stage correspond with the high complexity of the model at this stage, and with the complicated anatomical terms used. By the time students reached the output stages of the model, they had learnt how to use the tools and to interpret the feedback, so peaks of repeat viewings in this stage generally indicated particular feedback panels that were confusing or ambiguous. At many stages (47 out of 97), no group needed to view any particular feedback panel three or more times (signified by empty columns).
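The frequency analysis behind Figure 3 amounts to counting, for each feedback panel, how many groups viewed it more than twice. The following sketch shows one way this tally might be computed; the data and the function name are invented for illustration:

```python
# Sketch of the repeat-viewing analysis: for each feedback panel, count how
# many groups viewed it more than `threshold` times. Data and names are
# illustrative assumptions, not the project's actual analysis code.
from collections import Counter

def groups_with_repeat_viewings(trails: dict, threshold: int = 2) -> Counter:
    """trails maps a group id to the ordered list of panel ids it viewed.
    Returns, per panel, the number of groups that viewed it > threshold times."""
    result = Counter()
    for views in trails.values():
        per_panel = Counter(views)            # viewings of each panel by this group
        for panel, n in per_panel.items():
            if n > threshold:
                result[panel] += 1            # this group struggled at this panel
    return result

trails = {
    "group-01": ["input-1", "input-1", "input-1", "processor-2"],
    "group-02": ["input-1", "processor-2", "processor-2", "processor-2"],
    "group-03": ["input-1", "input-1", "input-1", "input-1"],
}
counts = groups_with_repeat_viewings(trails)
print(counts)  # 'input-1' troubled two groups; 'processor-2' troubled one
```

Plotting these per-panel counts, segmented by model-building stage, yields a frequency chart of the kind shown in Figure 3.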


Figure 3. Frequency plot of number of student groups returning for repeated (>2) viewings of feedback panels during the model construction, segmented into 4 stages. (Total number of student groups = 93, total number of feedback panels = 88)

Program redesign in response to evaluation data

In response to student comments following the prototype testing, several modifications were made to the program. The major problem identified was the large cognitive load experienced by students in the early stages of model-building, which was found to originate from three separate sources.

The first area of difficulty was unfamiliarity with the symbols representing components of the nervous system. We added an introductory screen with more information on these symbols, with links to definitions and simple animations of electrical impulses moving along neurones. This provided the necessary familiarization, not only with the symbols themselves, but also with the concepts of connecting components and of transmitting and modifying electrical signals.

Secondly, many students found the anatomically correct terms and the fine structural detail of the model too complex for this introductory stage of their course; these details distracted them from learning the underlying concepts of reflex control. In response, we simplified several stages of the model, and used terms for regions of the brain that better represented their physiological function. More detailed information was retained as links, so that students can still access it once they have consolidated their basic knowledge.

Finally, students reported difficulty in learning how to use the tools provided while simultaneously attempting to construct the model. To reduce this cognitive load, we introduced a 'playground' screen, immediately prior to the model-building screen, where students can rehearse using the tools representing components of the nervous system. Importantly, students are presented with the task of constructing a simple feedback circuit on this screen, to give the practice exercise direction. This provides experience not only with selecting and modifying the tools, but also with using them to construct a circuit and with interpreting the animation and textual feedback provided.

Once these major difficulties were dealt with, we then revisited the textual feedback panels, using the results of the audit trail analysis to indicate problem stages. Any peaks of repeat viewing of feedback panels which could not be related to the problems described above were interpreted to represent feedback which was either ambiguous, confusing or misleading. These feedback panels were identified and rewritten to provide more helpful diagnosis and prompting.


This process of implementation and evaluation has proved invaluable in learning where students are experiencing difficulty with using this program. In particular, the matching of electronic audit trails with written questionnaire responses has enabled us to identify precisely where these difficulties arise, and allowed us to accurately address these problems.

Evaluation of this program is ongoing. We have tested the modified version on second year Science students in 1999, with encouraging results. We will be repeating the evaluation on Medical students in 2000, concentrating on measurement of learning outcomes.


Funding for the development and initial evaluation of this project was provided by CUTSD (Committee for University Teaching and Staff Development), with funding for ongoing evaluation being provided by the Department of Physiology, The University of Melbourne.

We also wish to thank the students of the Faculty of Medicine, Dentistry and Health Sciences at The University of Melbourne, for their ongoing and cheerful assistance in evaluating this program.


Weaver, D. A., Petrovic, T., Harris, P. J., Dodds, A., Delbridge, L. M. and Kemm, R. E. (1996) Interactive tutorials designed to encourage deeper learning practices, Making new connections ASCILITE '96, eds. A. Christie, P. James and B. Vaughan, 501-515. Adelaide: University of South Australia.

Kemm, R. E., Weaver, D. A., Dodds, A., Evans, G., Gartland, D., Petrovic, T., Delbridge, L. and Harris, P. J. (1997) Designing and evaluating an interactive hypothesis testing tool to aid student understanding - gastric acid secretion and its regulation, ASCILITE '97 What Works and Why, eds. R. Kevill, R. Oliver and R. Phillips, 324-330. Perth: Curtin University of Technology.



