UniServe Science News Volume 15 March 2000

Towards a Method for Evaluating CBL in Chemistry: Interaction Maps and Profiles

Tony Wright, Slavica Pavlinic, Paul Buckley and Judy Edwards
Chemistry, Institute of Fundamental Science, Massey University, NZ

Software for the different areas of science education is appearing at a very rapid rate. We need to be able to evaluate it, either to judge its suitability for our courses or, if we are authoring it, to guide revision and development.

The primary thing we want to know when evaluating a piece of software is how useful it is, in terms of learning, for the students who will be using it. The strongest data will therefore come from students using the software. Appropriate analysis of that data then shows how well the software matches the needs of different groups of students, supports judgements about whether it is worth implementing in the course, and gives the authors information about revisions that might be necessary.

The important components of the evaluation are therefore the data collected and the method of analysis subsequently used. In this paper we describe the preliminary findings of a study based on a data collection technique that has received only scant attention for software evaluation.1 The main emphasis of the paper, however, lies in the analytical method being explored. We believe this analysis promises a robust and convenient way of analysing qualitative data.

We rejected a traditional quantitative study methodology because such studies rarely achieve the aim of a properly constructed and controlled test.2 Also, and more importantly, such studies usually fail to pin down the successes and deficiencies of the CBL tasks.

The obvious way to address this latter issue is to use a qualitative data collection technique that looks carefully at the student doing the task. Stimulated recall interviewing,3 in which the student is videotaped while engaged with the task and the tape is then used as the basis of a reflective interview immediately on completion, is a useful technique for such studies. It combines the attributes of observational methods with those of interview methods.

We have used the technique previously to probe a range of commercial CBL software which we had introduced to our courses.4 One of the major difficulties with interview techniques lies in the method for analysing the data. In our first study we used a method of analysis based on judgements about student understanding in relation to the learning objectives of the tasks. While the conclusions were convincing, the analysis was difficult and time consuming.

In this study, the analysis involves classifying the individual interactive events that occur, according to the performance of students. We show that this approach can be developed into a useful evaluation method that looks promising for a wide range of types of CBL.

The method allows judgements about:

  • the relative value of different pieces of software;
  • parts of the software where formative revision is necessary; and
  • which groups of students will benefit most from using the software.

The study

    Following the preliminary study described above,4 we decided to author tutorials for the major areas of difficulty in our first year classes. The first such tutorial covers organic stereochemistry.

    The aim of the tutorial is to develop and refine students' concepts of stereochemistry following their introduction in the lecture course. It contains a series of questions giving students practice in applying the concepts to progressively more complicated organic molecules. In parallel with the series of exercises is an extensive glossary and a range of simpler ancillary exercises. Students can either consciously navigate their way through the tutorial, or allow the computer to do the navigation based on their responses.
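
As a rough illustration only, response-driven navigation of this kind can be sketched in a few lines of Python. The question identifiers, expected answers and branching rules below are hypothetical; the paper does not describe the tutorial's internal logic.

# A minimal sketch of response-driven navigation, assuming a simple rule:
# an expected answer advances to the next exercise, an unexpected one
# branches to feedback or an ancillary exercise. All identifiers here
# are hypothetical.

QUESTIONS = {
    "q1": {"expected": "R", "next": "q2", "remedial": "glossary_chirality"},
    "q2": {"expected": "S", "next": "q3", "remedial": "ancillary_priorities"},
}

def next_step(question_id, response):
    """Return the node the tutorial moves to after a response."""
    q = QUESTIONS[question_id]
    if response == q["expected"]:
        return q["next"]       # expected response: continue the main series
    return q["remedial"]       # unexpected response: branch to help material

print(next_step("q1", "R"))    # -> q2
print(next_step("q1", "S"))    # -> glossary_chirality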

In this study, the tutorial was evaluated during its first delivery by performing stimulated recall interviews with a group of 11 volunteers, some working individually and some in pairs. The participants were selected from the volunteers to cover a variety of abilities and interests. The primary aim of the evaluation was to gather data for revising the tutorial for subsequent years.

To help with the interviews, an observation log of the student's performance was kept. This log was then used to identify important points in the task performance for discussion in the subsequent interview, which aimed to uncover the student's thoughts during the performance.

    Analysis

    In the analysis described here we have chosen to focus on a particular group of student-computer interactions. These are the ones in which the interaction results from the student using his or her knowledge about stereochemistry. This group of interactive decisions is easy to identify and is distinct from the navigational and other interactions involved in the performance.

    The next step in the analysis was to map the interactions for each student (or pair of students). These interaction maps (Figure 1) are useful because they give a quick guide to how the student has used the tutorial.

Figure 1. Map of a student's progress through the stereochemistry tutorial. The dotted lines show where the student gave unexpected responses and moved to and fro between the exercise and the feedback.

The most noticeable feature of these maps was that each task performance gave a different map, emphasising how varied the usage must be to match the needs of individual students.

    The maps also produced useful information for formative evaluation of the software, focussing attention on the specific items that challenged students and points of possible revision.
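
To make the idea concrete, a task performance can be thought of as a logged sequence of interaction events from which a map is drawn. The following minimal Python sketch uses hypothetical event names; the published maps (Figure 1) were drawn from observation logs and videotapes, not from automatic logging.

# A sketch of a logged task performance and a text rendering of its map.
# Node names and outcomes are hypothetical.

performance = [
    ("exercise_1", "expected"),
    ("exercise_2", "unexpected"),   # dotted line in Figure 1: out to feedback...
    ("feedback_2", "viewed"),
    ("exercise_2", "expected"),     # ...and back to the exercise
    ("exercise_3", "expected"),
]

def print_map(events):
    """Print one line per interaction, flagging unexpected responses."""
    for node, outcome in events:
        flag = "  <-- unexpected" if outcome == "unexpected" else ""
        print(f"{node}: {outcome}{flag}")

print_map(performance)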

    Profiles

    Interactivity as it relates to computer based learning is receiving wide-ranging attention.5 We have chosen to analyse a small subgroup of the interactions involved in the task performances we have examined, those that involve students making a decision that relates to the chemical learning objectives of the task. Careful study of these interviews revealed that the interactions fall into six categories (see Table 1), three of which involve the student making the expected decision, and three giving the unexpected decision.

Reflection on the categories led to a further classification based on whether or not the decision leads to a useful learning opportunity, since the provision of useful learning opportunities is an obvious goal of CBL software.

    Thus, for example, if the student gave the unexpected response and the subsequent interview showed that they got adequate feedback to correct their ideas, then this was classified as a useful learning opportunity.

It was easy to further subdivide this category according to whether the student was struggling with the concept itself or with its application. This is a useful distinction because the type of feedback required is different for the two possibilities.

Type of interaction | Usefulness | Comments
1. UNEXPECTED response - adequate feedback - flawed concept | USEFUL, powerful learning | Student is able to change or develop their concept on the topic. Can have a negative affective outcome if it occurs too frequently.
2. UNEXPECTED response - adequate feedback - flawed application | USEFUL, powerful learning; students learn to apply their knowledge | Student knows the concept and learns how to apply it.
3. EXPECTED response - considerable thought required - adequate feedback | USEFUL, positive learning outcome | Student refines his/her concept or applies ideas to a new application.
4. EXPECTED response - quick, confident | NOT USEFUL for learning | Student already confident on the topic. May enhance student confidence.
5. EXPECTED response (guess) - inadequate feedback | NOT USEFUL for learning | Student uncertain about response and does not receive adequate help.
6. UNEXPECTED response - inadequate feedback | NOT USEFUL; a sign of a serious mismatch of task and student need |

    Table 1. Types of interaction and the learning opportunities they provide

In terms of learning opportunities, the expected response is not necessarily useful. If the student gives the response confidently, without careful thought, no new learning occurs. A number of such events during a task is probably a good thing, because they support the student's confidence; too many, however, and the task wastes the student's time.

An expected response which requires careful thought, on the other hand, is probably a useful learning opportunity, because the student has to either modify their concept or discover a new application; either possibility can be readily confirmed in the interview. The two cases are usually distinguished by the time the student takes to respond: the quick, confident response matches the way an "expert" would answer the question.

The two categories which caused the most ill-feeling amongst the students were those in which they either gave the unexpected response and did not get sufficient feedback to develop their ideas on the topic, or gave the expected response without confidence and did not receive enough feedback to learn from the experience. The latter category represents a particularly important missed learning opportunity: the exercise has focused the student's attention on an area of uncertainty and then failed to help them learn from it.
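
The six categories of Table 1, and their division into useful and not useful, lend themselves to a simple coding scheme. The Python sketch below assumes the hard, qualitative step, assigning each interaction to a category from the interview data, has already been done; the student data shown is hypothetical.

from collections import Counter
from enum import IntEnum

# Table 1 as code. Category numbers match the table; the comments give
# the usefulness classification.

class Interaction(IntEnum):
    UNEXPECTED_FLAWED_CONCEPT = 1      # useful: powerful learning
    UNEXPECTED_FLAWED_APPLICATION = 2  # useful: learns to apply knowledge
    EXPECTED_WITH_THOUGHT = 3          # useful: refines concept
    EXPECTED_QUICK_CONFIDENT = 4       # not useful: already knows it
    EXPECTED_GUESS_POOR_FEEDBACK = 5   # not useful: no help given
    UNEXPECTED_POOR_FEEDBACK = 6       # not useful: task/student mismatch

USEFUL = {1, 2, 3}   # categories 1-3 of Table 1

def profile(categories):
    """Counts per category: the raw data behind a profile plot."""
    return Counter(categories)

def fraction_useful(categories):
    return sum(1 for c in categories if c in USEFUL) / len(categories)

helen = [4, 5, 6, 3, 5, 1, 4, 6]   # hypothetical categorised interactions
print(profile(helen))
print(fraction_useful(helen))      # -> 0.25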

Figure 2. Profiles of two task performances on the stereochemistry task (fictitious names have been used). Categories 1-3 represent USEFUL interactions, 4-6 represent NOT USEFUL interactions (see Table 1).

    Having classified the interactions, they can be plotted to give a profile of the student's performance on the task (Figure 2).
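
A profile of this kind is essentially a bar chart of the category counts. As a minimal sketch, using matplotlib and hypothetical counts rather than the data of Figure 2:

import matplotlib.pyplot as plt

def plot_profile(name, counts):
    """counts: number of interactions in categories 1-6, in order."""
    colours = ["tab:green"] * 3 + ["tab:red"] * 3   # useful vs not useful
    plt.bar(range(1, 7), counts, color=colours)
    plt.xlabel("Interaction category (Table 1)")
    plt.ylabel("Number of interactions")
    plt.title(f"Interaction profile: {name}")
    plt.show()

plot_profile("Helen", [1, 0, 1, 2, 2, 2])   # hypothetical counts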

The pair of students, Jane and Rosemary, were capable students who understood most of the concepts. For them, the tutorial was most useful for refining their understanding and application of concepts. In contrast, Helen was struggling with some of the concepts and, as might be expected, the number of non-useful interactions was higher.

    The method of analysis has also been applied retrospectively to similar data collected for some commercial pieces of software used in our course (Figure 3).

Figure 3. Interaction profiles for task performances on a laboratory simulation and a molecular shape tutorial.

Cathy and Judith, both capable students, were using a simulated experiment that was part of the laboratory course, and Robin and Simon, also capable, were using a tutorial on molecular shape. The major conclusion of the earlier study was that the software in most cases did not match the learning needs of the students, and this can be seen in the profiles. In both cases the non-useful interactions outweighed the useful interactions, and there were very few of the most useful interactions in categories 1 and 2.

    These examples are important because they establish that a range of profiles can be expected when different pieces of software are examined using the method. However the sample analysed so far is too small for confident conclusions or comparisons of pieces of software.

    Our analysis indicates that the profiles will provide useful information:

  • for judging the appropriateness of software for particular courses;
  • for deciding how widely useful the software will be for different groups of students; and
  • to authors for formative evaluation of software.

The interaction maps and profiles give complementary information about student performance on a CBL task. We are currently exploring the use of these tools for a range of different types of software to establish their wider utility.

    References

    1. Harvey, J. (1998) Evaluation Cookbook, Learning Technologies Dissemination Initiative, Heriot-Watt University, http://www.icbl.hw.ac.uk/ltdi/
    2. Reeves, T. (1993) Pseudoscience in Computer Based Learning, Journal of Computer-Based Instruction, 20, 39-46.
    3. O'Brien, J. (1993) Action Research through Stimulated Recall Interviews, Research in Science Education, 23, 214-221.
    4. Pavlinic, S., Buckley, P. D. and Wright, A. H. (2000) Students using Chemistry Courseware - Insights from a Qualitative Study, Journal of Chemical Education, in press (February). http://www.massey.ac.nz/~wwifs/staff/wright.htm
    5. Sims, R. (1999) The Interactive Conundrum I: Interactive Constructs and Learning Theory, Proceedings of ASCILITE '99, 307-314.

    Tony H. Wright
    Chemistry
    Institute of Fundamental Science
    Massey University
    Private Bag 11222
    New Zealand
    awright@massey.ac.nz

