From the Director
Ian Johnson is the Director of UniServe*Science
Assessing computer assessment
On page five of this newsletter are preliminary findings from some surveys we have conducted about how academics in science departments are using Information Technology in their teaching. Perhaps the most important finding is that, on the whole, we make very little use of Computer Managed Learning. Even though we are happy enough to use the computer as an aid in our teaching, we are reluctant to hand over control of our students' learning to computers and computer software writers. It seems a bit perverse that, at a time when our teaching loads are increasing and the cost of teaching must come more and more into question, we are embracing uses of IT in teaching that add costs to traditional methods, while paying little attention to employing computers in ways that might actually save money and time.
A good case in point is how we assess students' progress. Currently the only kind of formative assessment most science departments use with their large first year classes is to set homework exercises. These are (sometimes) marked and returned with comments (ideally soon after they are handed in; in practice, often much later). The same exercises may contribute a small fraction to summative assessment, but this is more usually done by means of end-of-semester, unseen, written examinations. The marking of these homework exercises and examination scripts is a chore which is expensive, time-consuming and (all too often) depressing. When budgetary pressures force us to cut back somewhere, it is often the homework marking that goes first, thus depriving students of virtually the only feedback we ever give them. Surely if any part of our teaching should be (at least partly) automated, it is assignment and examination marking.
Nevertheless, we academics remain unconvinced. All too often we have a naive view of what computers are capable of doing in this area nowadays. Many of us remember, cringingly, the PLATO packages of a decade or so ago, which marked you wrong if you said that the velocity of a body at rest was 0 m/s instead of 0.00 m/s. Many of us still think that multiple choice questions are crude instruments more suited to the kindergarten, to which any half-intelligent student can guess the right answer by grammatical clues alone. And how often have I heard it argued that no computer can ever judge as well as a real person whether a student understands something, even though in practice the real person might have to mark hundreds of three-hour scripts in a few days?
Nowadays, computer based assessment (CBA) can be very sophisticated indeed. It can encompass forms such as: true/false selection, matching items on a list, multiple choice, multiple completion, assertion/reason choice, best answer, word/phrase/number matching and image selection. For a good review of what is possible, have a look at Newsletter 7(1) of the CTI Centre for Biology (www.liv.ac.uk/ctibiol.html). There really is no reason to believe that CBA cannot make a valuable contribution to the assessment of our students. It may not be able to take over completely, but it can certainly help.
In Australia a few university science departments are using CBA in their mainstream teaching. We at UniServe*Science believe it would be useful to bring representatives from those departments together to share their experiences with anyone else who is interested. Therefore we are planning to hold a second workshop, and devote it to this topic. We had initially planned to do this in October 1995, but the response from departments was that it was too near the end of the year and everyone was too busy. So we will hold it on 14 February 1997. Put it in your calendar of musts for next year. See below and check the web site for further details.