UniServe Science News Volume 18 October 2001


Flexible Assessment for Flexible Delivery: Online Examinations that Beat the Cheats

Jeremy B. Williams
Queensland University of Technology
This paper was presented at 'Moving Online I: A conference to explore the challenges for workplace, colleges and universities', August 2000 and published in the proceedings of that conference. It is reprinted with permission from the author and Southern Cross University.


This paper argues that a commitment to flexible delivery necessarily requires a commitment to flexible assessment. It describes how an MBA program at one Queensland university has embraced flexible delivery out of pedagogical and strategic necessity and how, among other things, this has led to online teaching (OLT) sites being extended to incorporate online assessment. A popularly held view is that only formative assessment can be set online because of the problem of effective invigilation. This view is challenged by the author, who produces evidence to suggest that the concern about cheating is exaggerated, and that there are significant benefits to be gained by both students and academics from setting online (open book) 'take-home' examinations.


The increasing market orientation of higher education has brought sweeping changes within universities throughout Australia and elsewhere in the world. Among other things, changes to government funding have forced universities to become more innovative in their resourcing arrangements. Partnerships with professional bodies and the private sector are becoming more widespread, universities have expanded geographically to tap into non-traditional markets, and for course developments to proceed, tangible evidence of student demand and cost efficiency is of the utmost importance. The product of these changes (and others) is that universities have developed a much stronger 'customer focus'. In short, the hard reality of life in the higher education sector as we move into the new millennium is that to fail to recognise students as 'clients' is to run the risk of anonymity in the marketplace (or, worse still, notoriety), which can only lead to reduced funding, cuts to courses and staffing levels, and even closure.

One way in which some institutions have sought to enhance their international competitiveness is through the flexible delivery of their programs. Flexible delivery is, by definition, a client-oriented approach because it is a commitment, on the part of the education provider, to tailor courses to meet the various individual needs of its students. Furthermore, it is tacit recognition of the 'massification' of higher education (Carrier 1990), whereby the student profile has changed quite dramatically (socially, culturally and economically) and there is, pedagogically, a need to cater for this increasingly diverse student body.

As this paper will highlight, flexible delivery provides students with a number of different options for study. It is not prescriptive in the sense that one approach to study is identified as being superior to another. A student can chart a route through a degree that is most compatible with their budget, their social, family and working lives, and their preferred learning style. In short, flexible delivery is non-discriminatory, catering equally for an international student, a single parent working part-time, a business executive travelling regularly overseas and interstate, or a school leaver.

The main aim of this paper will be to focus on one important and frequently overlooked aspect of flexible delivery, the assessment component. It will demonstrate, by reference to a case of a flexibly delivered unit within the MBA program at the Brisbane Graduate School of Business (BGSB) at Queensland University of Technology (QUT),1 that flexible assessment systems are effective, easy to administer, and very popular with the student body.

The discussion will concentrate, first of all, on the defining characteristics of a flexibly delivered program. This is followed by a brief discussion on the flexible assessment model (FAM) that is being used in a number of MBA units, and how greater flexibility in the format of the final examination is a natural extension of this model. The results of some preliminary research into student views regarding an online 'take-home' examination are then evaluated. The final section draws together the strands of the discussion and puts forward some tentative conclusions.

What is Flexible Delivery?

There has been a commitment to the principle of flexible delivery at QUT since the mid-1990s when it became a central feature of the University's Teaching and Learning strategic plan. Care has been taken not to define flexible delivery too rigidly so as not to erect any obstacles to innovative ways of delivering programs. However, to assist staff in developing some notion of flexible delivery, the following internal QUT memorandum was circulated in April 1999:

A flexibly delivered unit is one in which the options for delivery include alternatives to the traditional ways of on-campus in-classroom lectures, seminars, tutorials and practical classes. Such a unit will be designed with the aim of meeting students' diverse learning needs by incorporating one or more aspects of flexibility in time, place and/or technology, such as:
  • delivery in the workplace or remote from the campuses of the university
  • delivery in block mode, other intensive mode, or other non-standard delivery time format either on or off campus
  • delivery with non-standard beginning and completion dates for the units (insofar as the Student Information System can cope with this kind of flexibility)
  • the use of technology and resources for learning support to provide options for any student to access and use materials at his or her own time and place (e.g. Web-based teaching materials and exercises), or to be assessed without having to attend examinations at a specific place and time.
These aspects of flexibility will form the majority of the delivery mechanisms used in any flexibly delivered unit.

Within a relatively short period of time, flexible delivery has become a fairly prominent feature of courses offered by the Faculty of Business, adopted, to varying degrees, in individual units, specialisations (groupings of units) and whole courses.

The BGSB, in particular, has embraced the notion of flexible delivery with some alacrity. It offers a range of customised certificate courses in management for the defence forces, government departments, professional institutes and individual companies in the private sector. Integral to this strategy has been the production of study guides for all units, thus facilitating intensive block mode, on or off campus, to meet the special needs of these groups of students.

A further initiative has been the development of online teaching (OLT) sites.2 There are currently 457 OLT sites University-wide, with the Faculty of Business housing the largest number. Commencing with two sites in 1997, there are currently 157 OLT sites in Business, 43 of which are located in the BGSB. The framework for OLT sites varies from unit to unit but, typically, there is a download facility where students can access PowerPoint lecture slides, tutorial solutions, past examination papers and the like, discussion forums (electronic bulletin boards), chat space, (Internet relay chat or IRC), and discipline-relevant links to the Web. Until recently, however, little attention has been devoted to assessment, and how this might be integrated with the OLT system.

The Case for Flexible Assessment

Continuous assessment has, in the past, been perceived as being 'flexible' in that a student, by undertaking it, is not forced to rely wholly on their ability to perform well in final examinations. More innovative continuous assessment regimes have gone so far as to permit student choice with respect to the assignment topic and perhaps the submission date. However, while this does provide students with an element of flexibility, it does not extend so far that a person could elect not to undertake a piece of continuous assessment if it was incompatible with their preferred learning style or their work patterns. The fact of the matter is that, while there may be some latitude in terms of the essay title or handing-in date, continuous assessment is typically summative in nature and, for this reason, it is not a truly flexible assessment system.

Consider the example of the conscientious student who submits an assignment early in the semester but scores below average marks. It would be very rare for permission to be granted for this person to be given a 'second bite at the cherry' by resubmitting or attempting an alternative assignment with a later handing-in date.3 In short, when a term-time essay is submitted there is no going back. For this reason, standard assessment systems are unforgiving because a student is not permitted to learn from their mistakes.

To summarise, if there is a publicly stated commitment to flexible delivery, it is fatuous to persist with the kind of assessment systems that have come to characterise the university sector. In the BGSB, steps were taken during 1999 to address this problem with the introduction of the flexible assessment model (FAM) into a number of MBA units.4 In essence, this system gives students the choice of completing all, or some combination of a series of optional assessment items, or, indeed, no optional assessment at all, such that it is possible to have anywhere between 50 and 100 per cent riding on the final examination.5 Importantly, a computer spreadsheet identifies which combination of assessment items maximises a student's mark. That is, it is not up to students to nominate their preferred assessment combination in advance. The spreadsheet performs this task for them. Any assessment item that yields a lower final mark is simply disregarded, and a summative assessment item is treated as formative assessment. In this way, a student can take advantage of the feedback they receive and avoid making the same mistake in the final examination. By electing not to complete the continuous assessment, a student foregoes this opportunity - but this is their choice based on their preferred learning style, their work patterns, or family commitments.6
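The mark-maximising logic described above can be sketched in a few lines. The sketch below uses Python rather than Excel purely for illustration, and the weights and marks in the example are hypothetical; the only constraints taken from the paper are that the final examination is compulsory and carries between 50 and 100 per cent of the total mark:

```python
from itertools import combinations

def best_final_mark(exam_mark, optional_items):
    """Return the highest final mark over all permissible combinations
    of optional assessment items.

    exam_mark      -- percentage scored on the compulsory final examination
    optional_items -- list of (weight, mark) pairs; weights are fractions of
                      the total mark, and the examination absorbs whatever
                      weight is left over, so it always carries between
                      50 and 100 per cent.
    """
    best = exam_mark  # baseline: no optional items, exam counts for 100%
    for r in range(1, len(optional_items) + 1):
        for subset in combinations(optional_items, r):
            opt_weight = sum(w for w, _ in subset)
            if opt_weight > 0.5:  # exam must carry at least 50 per cent
                continue
            total = sum(w * m for w, m in subset) + (1 - opt_weight) * exam_mark
            best = max(best, total)
    return best

# Hypothetical student: 60% in the exam, 80% and 50% in two optional
# items weighted at 25% each. The weaker item is simply disregarded,
# so it functions as formative rather than summative assessment.
print(best_final_mark(60, [(0.25, 80), (0.25, 50)]))  # 65.0
```

Because every combination is evaluated after the marks are known, the student never has to nominate a weighting in advance, which is the essence of the FAM.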

Student endorsement of the FAM within the MBA program has been quite resounding,7 a reflection of the fact that a high proportion of the student body study part-time, and they often struggle to complete items of continuous assessment given the large number of competing interests on their time. It is important to note, however, that the FAM does not discriminate in favour of one type of student over another. The numerous assessment options it provides accommodate student diversity and cater for different learning styles. Indeed, the full-time MBA students are equally as supportive of the system as their part-time counterparts. All students find it comforting knowing that, unlike conventional assessment systems, they can complete a continuous assessment item under the FAM and not worry that a poor performance will have an irreversible effect on their final grade.

The commendations received by the FAM notwithstanding, much work remains to be done. At this point, only the assessment system has been determined. Within this framework, attention must be given, not just to outcomes, but also to the student experiences that lead to these outcomes (Boud 1991). To this end, academics are being encouraged to focus on these experiences with a view to improving the whole of student learning. This has to be a collaborative activity, involving people from across the University community, including students themselves, and from outside the University, specifically employers. This has been a high priority in the Faculty of Business at QUT following the release of the Faculty Education Committee's report Review of Assessment Policy and Practices in August 1999. Subsequently, there has, in the BGSB, been a determination to investigate the effectiveness of all the assessment regimes currently in operation, and the usefulness of the assessment instruments within them, as part of its five-year rolling review of units.

Online Assessment and the Authentication Problem

The FAM, together with the unit study guides and the OLT sites, genuinely allows for the possibility of self-paced learning. This is something that will be further facilitated from September 2000, when plans are implemented in a number of units for multiple-choice tests to be put online. These tests were previously conducted in class as part of formal assessment, and invigilated in the normal way.

The tests, to be accessed via the OLT system, will be marked by computer instantaneously, and are primarily designed to give students continuous feedback on their progress. Importantly, a student will receive a mark for participation (5 per cent for the completion of 5 tests) rather than a mark for performance. The decision to go down this path arose for a variety of reasons, including the view that, while multiple-choice is a useful mechanism for testing content knowledge and the understanding of simple concepts, it does not lend itself to assessing the kind of higher order skills required of MBA graduates (critical thinking skills, for example). Another related reason is that by getting multiple-choice tests out of the classroom, more time is available to concentrate efforts on the development of these higher order generic skills. Of all the reasons, however, perhaps the most compelling was the fact that students have the opportunity to cheat if the tests are unsupervised.
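The split between instant feedback and participation credit can be sketched as follows (Python used for illustration; the per-test credit of 1 per cent follows from the paper's '5 per cent for the completion of 5 tests', while everything else, including the answer strings, is an assumption):

```python
def auto_mark(answers, key):
    """Instant computer marking: the fraction of questions answered correctly."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def participation_mark(tests_completed, per_test=1.0, cap=5.0):
    """Credit for participation, not performance: 1 per cent per
    completed test, capped at 5 per cent."""
    return min(tests_completed * per_test, cap)

# A student completes a test and sees their result immediately,
# but only the participation credit counts toward the grade.
score = auto_mark("ABDCA", "ABCCA")   # feedback only: 0.8
credit = participation_mark(5)        # counts toward grade: 5.0
print(score, credit)
```

Decoupling the displayed score from the recorded credit is what removes the incentive to cheat on an unsupervised test: there is nothing to gain beyond the feedback itself.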

With online teaching becoming increasingly common, not to mention the burgeoning number of colleges and universities around the world offering whole courses over the Internet, the problem of 'cybercheats' has come to occupy many minds. Indeed, a whole industry has grown up around devising ways of offering educators sufficient security that they can feel comfortable about offering online examinations. A recent study by Fröhlich (2000) offers a comprehensive evaluation of the various technologies that could facilitate what he calls 'Authenticated Secure Online Computer Assisted Assessment (ASOCAA)'. These include fingerprint recognition, smart cards, hand geometry, retinal scans, iris recognition, facial recognition, voice recognition and remote invigilation. Aside from the prohibitive cost of some of these devices, Fröhlich acknowledges (through personal communication) that while their reliability is increasing as time goes by, none are 100 per cent cybercheat-proof.

This obviously presents something of a quandary. If there is no reliable mechanism for preventing students from cheating in examinations and tests, has the considerable amount of time, effort and money invested in the development of online teaching resources been misdirected? What is the point of an institution committing itself to the flexible delivery of a course if, come examination time, it has to resort to the highly inflexible practice of dragging students into classrooms and examination halls to be watched over by a team of invigilators? Clearly, this is not what these institutions have in mind. To quote from the official QUT memorandum detailed earlier in this paper, flexible delivery is, among other things, about 'the use of technology and resources for learning support to provide options for any student to access and use materials at his or her own time and place ... or to be assessed without having to attend examinations at a specific place and time [emphasis added]'.

A simple solution would be to abandon the idea of examinations and opt for assessment by assignment work only. Many institutions do this already, of course, on the basis that timed, closed-book examinations constitute an ineffective means of assessing a person's ability. It might test the ability to cram material into one's head the night before an examination, and reproduce it the following day in a reasonably coherent form, but it does not necessarily assess whether any deep learning has taken place. This said, courses assessed by assignment are not without their problems either. Not least of these is the problem of authentication. How can we be sure that the work submitted is the work of the student concerned? This is an age-old question, and the fact there is no readily available answer is one reason why the BGSB at QUT has a policy of compulsory examinations in all units (with the exception of research units).

Experience with Online 'Take-Home' Examinations

In the case of multiple-choice type examinations, if they are to be used at all online, then it would appear as though there is little alternative but to use them purely for the purposes of formative assessment. The fact the responses to questions require the student to enter only letters or numbers means that the risk of collusion is high. But can the same be said of essay-type questions?

In first semester 2000, this was the question that faced the coordinators of one unit in the BGSB MBA program when it became apparent that it would be virtually impossible to set an examination that could be taken by all students at the same time, in the same place. The majority of the 125 students enrolled were campus based, but around 40 were Australian Defence Force (ADF) personnel, some of whom were in submarines or en route to East Timor. The solution was to revise the assessment for the unit so that a compulsory, open-book, 'take-home' examination would be set in place of the compulsory, closed-book, in-class examination. Apart from this change, the flexible assessment system remained in place, although the ADF students were in a position to complete only one of the optional assessment items as two other items were class-based.

After some careful deliberation, the unit coordinators sent an email to all students explaining that the take-home examination, while similar in essence to an essay-type assignment, would differ in two ways. First, the time constraint for completion of the take-home examination would be much tighter than that for an assignment. A single examination question would be uploaded to the OLT site at midday on a Friday, with student responses (of no more than 2000 words) to be uploaded by midday the following Monday. Second, whereas assignment questions usually pertain to a subset of the content of a unit, the take-home examination would be very unit-specific, testing a much broader knowledge base.

It was also pointed out that, if a student were to work steadily over the semester, the final examination ought not take very long to complete. Students were being given three days to complete the final examination, not because they needed this long to work through it, but to build some flexibility into the examination process. Some students may have work or family commitments on one or more of the days of the examination period, and the time allowed would cater for this contingency.

At the conclusion of the unit, a focus group of 20 individuals was surveyed.8 Of these, 14 were on-campus students and 6 were ADF students, constituting a broadly representative sample in terms of the numbers enrolled from each group. The data generated by this survey produced some interesting results. Five questions were asked requiring students to choose from five alternatives and, in each case, the respondents were invited to elaborate as to why they nominated a particular alternative. The tables below summarise these results.

Question 1: How do online (open book) take-home examinations compare to regular (closed book) in-class examinations in terms of the stress and anxiety they cause?
  A. Take-home examinations are far less stressful. (Score: 5; 25%)
  B. Take-home examinations are a little less stressful. (Score: 9; 45%)
  C. Take-home examinations generate the same level of stress as in-class examinations. (Score: 2; 10%)
  D. Take-home examinations are a little more stressful. (Score: 4; 20%)
  E. Take-home examinations are far more stressful. (Score: 0; 0%)

Table 1. Type of examination and stress levels

Question 2: If an examination question (requiring a 2000 word answer) is set at 12pm on a Friday, and due in at 12pm the following Monday, is this an appropriate amount of time to complete the task?
  A. This is way too much time. (Score: 0; 0%)
  B. This is a little more time than is needed. (Score: 2; 10%)
  C. This is just about the right amount of time. (Score: 15; 75%)
  D. A little more time would have been better. (Score: 3; 15%)
  E. A lot more time is needed. (Score: 0; 0%)

Table 2. Time allowed for take-home examinations

Question 3: How do online (open book) take-home examinations compare to regular (closed book) in-class examinations in terms of the opportunity they provide for mature reflection and deeper learning?
  A. Take-home examinations provide far greater opportunity. (Score: 13; 65%)
  B. Take-home examinations provide a little more opportunity. (Score: 5; 25%)
  C. Take-home examinations provide the same opportunity as in-class examinations. (Score: 1; 5%)
  D. Take-home examinations provide a little less opportunity. (Score: 1; 5%)
  E. Take-home examinations provide far less opportunity. (Score: 0; 0%)

Table 3. Type of examination and deeper learning

Question 4: In your opinion, how do the opportunities for cheating in an online (open book) take-home examination compare to a regular (closed book) in-class examination?
  A. Take-home examinations provide many more opportunities. (Score: 5; 25%)
  B. Take-home examinations provide a few more opportunities. (Score: 4; 20%)
  C. Take-home examinations provide the same opportunities as in-class examinations. (Score: 8; 40%)
  D. Take-home examinations provide fewer opportunities. (Score: 2; 10%)
  E. Take-home examinations provide very few opportunities. (Score: 1; 5%)

Table 4. Type of examination and cheating

Question 5: Do you think that, after considering their advantages and disadvantages, an online (open book) take-home examination is a superior alternative to a regular (closed book) in-class examination?
  A. A take-home examination is the far superior alternative. (Score: 6; 30%)
  B. A take-home examination is a slightly better alternative. (Score: 7; 35%)
  C. A take-home examination is no better or worse than an in-class examination. (Score: 5; 25%)
  D. A take-home examination is a slightly worse alternative. (Score: 1; 5%)
  E. A take-home examination is a very inferior alternative. (Score: 1; 5%)

Table 5. Preferred type of examination overall

These raw figures provide some interesting insights: 70 per cent of respondents were of the view that take-home examinations were less stressful than the conventional in-class examination; 75 per cent thought that the three-day period for the examination was just about right; and 90 per cent were of the view that take-home examinations were better than, or no worse than, in-class examinations. The questions that produced particularly interesting results, however, were those relating to deeper learning and cheating.

A resounding 90 per cent of respondents were of the opinion that take-home examinations provided a better opportunity for deeper learning than in-class examinations. The comments that accompanied these responses are particularly instructive:

A take-home examination allows students time to reflect and develop their thoughts. In an in-class examination students are usually pressed for time and so just regurgitate what they have memorised. Much prefer take-home examinations at the post-graduate level. (serial no. 239)

In class examinations tend to be rote learnt, and in my experience a lot of valuable info is lost as a 'data dump' occurs after the examination. Take home allows additional reading and consideration, which results in information staying in the brain longer! (serial no. 247)

They allow the student to really show their understanding of the subject through drawing on appropriate references. They permit a fairly complete answer to be made without the rush of trying to get it all down in an examination room, perhaps at the expense of structure and logic. (serial no. 294)

Take-home examination provides far greater opportunity, especially to overseas students, this way gives them plenty of time to think and organise the words and ideas, and to explain it clearly and correctly. (serial no. 326)

These comments are among many in response to Question 3 that provide considerable evidence to suggest that open book, take-home examinations are pedagogically superior to the closed book, in-class alternative. This is extremely encouraging for those educators committed to improving the quality of assessment.

The raw data generated in response to Question 4 is less resounding. 40 per cent of respondents felt that the opportunity for cheating was no more or less for a take-home examination than it was for an in-class examination, and 45 per cent felt there were greater opportunities for cheating. Interestingly, the written comments present a somewhat different picture. A selection of students answering A or B made the following comments:

This is purely dependent on the way the 'examination' is structured. In the work environment you have access to reference material thus at the post grad level the 'test' should be aimed at the students' understanding of concepts not data recall ability. (serial no. 223)

The chances obviously increase because there is no supervision. But, you still need to be prepared, if you want to do well, and copying from a text should be fairly easy to spot. The only way to really successfully cheat would be to get somebody who is an expert in the field, or who has previously completed the course, to do the entire examination for you. (serial no. 294)

Never thought about cheating, I am here to learn something, not just to pass the examination. But I won't say everyone has the same goal. (serial no. 326)

There are 2 sides to this. One is that there is the opportunity for some students to free-ride off other students' knowledge. It will be up to the lecturer to spot this in the question response. The positive is that students are able to discuss the core issues behind the question and this encourages some students to go into further research who otherwise would not in an examination situation. (serial no. 393)

Looking at these responses (from students who believed that take-home examinations presented greater opportunities for cheating), it is clear that quite significant qualifications are made regarding the scope for cheating. Indeed, viewing all the responses to Questions 3 and 4 alongside one another, an interesting picture emerges. Take-home examinations certainly present the opportunity for deeper learning, but the students who recognise this suspect that not all of their peers will take advantage of it. The key question is whether this latent doubt over the integrity of their peers would diminish if students were sufficiently convinced that mechanisms were in place to prevent cheats from prospering.

Summary and Conclusions

While much could be made of the fact that online examinations are ineffective because of the authentication problem, it is worth bearing in mind that if a student is of a mind to cheat, this is precisely what they will do. In short, it is irrational to shy away from online take-home examinations on the grounds that it is not possible for an invigilator to check the back of calculators for crib notes. The point to observe is that neither system is perfect. A student's sister with a PhD from Harvard can just as easily sit an examination for them in an examination hall as they can by sitting at the computer in their home office. Moreover, they are equally as likely to get caught.

There are, however, a number of bonuses to arise from setting online take-home examinations that do not apply to the regular in-class examinations. First, and most importantly, the quality of the student learning experience is superior, as indicated by the data in this study. Second, they are more authentic in that, as one student pointed out above (serial no. 223), in the work environment you have access to reference material, so why create the unrealistic scenario of the invigilated examination hall?9 Third, examiners get to read typed printouts rather than handwritten (sometimes illegible) scripts. Fourth, suspect examination responses are far more likely to be detected if they are in an electronic form because of the technology that allows examiners to search for word strings. Last, and by no means least, cybercheats who cut and paste from web sites are unlikely to prosper because increasingly, it is critical analysis, not content knowledge, that is the key to success.
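The fourth point, detecting suspect responses by searching electronic scripts for shared word strings, can be illustrated with a crude sketch (Python used for illustration; real detection tools are considerably more sophisticated, and the string length of eight words is an arbitrary assumption):

```python
def shared_word_strings(text_a, text_b, n=8):
    """Return the n-word sequences that appear in both submissions.

    A long run of identical wording across two scripts (or between a
    script and a web source) is exactly the kind of signal an examiner
    can search for once responses are in electronic form.
    """
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(text_a) & ngrams(text_b)

# Two answers sharing a nine-word run yield two overlapping
# eight-word matches.
a = "the flexible assessment model gives students genuine choice over their studies"
b = "as noted the flexible assessment model gives students genuine choice over timing"
print(len(shared_word_strings(a, b)))  # 2
```

Handwritten scripts offer no such affordance, which is why electronic submission can make collusion easier to detect, not harder.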

To summarise, this paper has argued that the flexible delivery of a course is contingent on the existence of mechanisms that permit flexible assessment. Assessment systems that are believed to be flexible are, in many cases, not terribly flexible at all in that they do not cater for the increasingly diverse student profile. If an assessment system is to be truly flexible, then students must be presented with an element of choice, and they must be able to be assessed without having to attend examinations at a specific place and time. It is suggested here that online, open book, take-home examinations could be the answer, so long as three conditions are satisfied:

  1. Examinations are as 'unit specific' as possible (to prevent outsiders from sitting the examination);
  2. The time period for the examination is sufficiently tight (to prevent classmates from submitting two sufficiently different answers); and
  3. The examiner makes it clear (as a stated objective of the course) that they are looking to reward evidence of depth of learning and sound critical analysis rather than recall of content knowledge.

If these conditions are met, and the student body is sufficiently convinced that cheats will not prosper, moving online, for postgraduate and undergraduate programs alike, will become a whole lot easier.


Boud, D. (1991). Three principles for good assessment practices. The New Academic, Autumn, 4-5.

Carrier, D. (1990). Legislation as a stimulus to innovation. Higher Education Management, 2(1), 88-98.

Faculty Education Committee (1999). Review of Assessment Policy and Practices. Faculty of Business, Queensland University of Technology, August.

Fröhlich, R. (2000). Keeping the Wolves from the Doors ... Wolves in Sheep's Clothing, That Is, Proceedings of the Fourth International Computer Assisted Assessment Conference, Loughborough University, England, 39-46.

Nelson, G. E. (1998). On-line Evaluation: Multiple Choice, Discussion Questions, Essay, and Authentic Projects. Paper contributed to the Teaching in the Community Colleges Online Conference, 'Online Instruction: Trends and Issues II' (3rd, Kapiolani Community College, April).

Williams, J. B. (1998). Flexible Assessment as an Integral Part of Flexible Delivery. Paper presented at the Fifth Educational Innovation in Economics and Business (EDINEB) Conference, Cleveland, USA, September.

1 The BGSB is one of six schools in the Faculty of Business at QUT, and currently has around 1000 students in the MBA and associated programs. It was recently ranked 3rd in Asia by Asiaweek magazine for the quality of the distance delivery of its programs. Ironically, the BGSB does not consider itself to be a bona fide distance education provider, but it received recognition as such because of the flexible delivery of its programs off campus for corporate clients.
2 See
3 Increasingly large enrolments tend to preclude this option.
4 This was developed by Jon Stanford and adapted by Allan Layton and Jeremy Williams for use in first year economics.
5 The final examination is the only compulsory assessment item.
6 For a more detailed description and analysis see Williams (1998). The author is also happy to supply an example Excel spreadsheet on request, which demonstrates how the system works.
7 The results from surveys conducted using QUT's Web On-Line Feedback (WOLF) facility, consistently show (over a three semester period) that more than 80 per cent of students believe the FAM is either 'the best' they have encountered, or 'compares very well with other systems' they have encountered.
8 A more comprehensive survey is currently underway that will attempt to collect responses from all the students who completed the unit.
9 For more discussion on this, see Nelson (1998).


Jeremy B Williams © 2000. The author assigns to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author also grants a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and in printed form with the conference papers.

Jeremy B Williams
Brisbane Graduate School of Business
Queensland University of Technology
