Developing an evaluation plan for an e-learning module to develop higher-order thinking skills and writing competences
Laura Macháčková Henderson
This article presents the experience of developing an evaluation plan for an e-learning module designed to foster higher-order thinking skills and writing competences. The plan is still under construction, so rather than offering an example of good practice, the article aims to share the experience gained so far in developing it.
According to the literature, e-learning can provide opportunities for the effective development of higher-order thinking skills. Based on an interest in exploring educational opportunities for staff and students in teaching and learning for these skills, CZP UK is developing a one-semester e-learning module, “Critical approaches to globalisation: developing writing competences”. The module is being developed in the context of the European Commission-funded project Virtual Campus for a Sustainable Europe (VCSE). The goal of VCSE is to share experience in developing e-learning modules in the field of sustainable development and to open the modules to students from VCSE partner universities.
The students participating in the course will be at an advanced stage of their bachelor’s degree or an early stage of their master’s degree, will be based at universities across Europe, and will come from a variety of disciplinary backgrounds. The module aims to foster skills in critical reflection, question formulation, and multi-disciplinary and multi-cultural collaboration in developing and co-authoring texts. The module is currently under development and will be available in the spring semester of 2008.
Purpose of evaluation
The goals of the module evaluation are to
- ascertain whether the module enables students to fulfil the learning goals of the course
- obtain feedback for further development of the module
Thus, the evaluation is conducted primarily for those developing the course and to provide evidence of good “value for money” to the funding body. The students, however, will be the main beneficiaries, since the course developers will be able to adapt the course in light of the information the evaluation yields.
Two phases of evaluation have been identified:
- formative evaluation: conducted to obtain data from a pilot run of the course
- summative evaluation: conducted to measure the effect of developments resulting from the formative evaluation and to ascertain whether the module enables students to fulfil the learning goals of the course
Time and other constraints will unfortunately not allow the pilot run of the module to take place under the same conditions as the first real-life run. Where appropriate, however, we plan to use the same methods and tools in both runs to allow at least a limited basis for comparison.
Evaluation will be based on indicators addressing the following points:
- Technical context: user-friendliness and appropriateness of the e-learning platform
- Course content: appropriateness of the materials and resources
- Interactions: effectiveness of collaboration among students and with the course tutor
- Learning progress: development of skills according to learning goals
In order to avoid feedback fatigue among respondents, we hope to vary the methods of evaluation rather than relying solely on online questionnaires, and to ensure that the methods we use are targeted and concise. We aim to engage students in the process by informing them of the purpose of the evaluation activities and emphasising the potential benefit to students. There are five methods which we find promising and manageable within our resources:
- online questionnaires – we consider questionnaires useful for collecting mainly quantitative, but also a small amount of qualitative, information from a larger number of students. We are developing the questions with input from colleagues in the field of sociological research. We propose questions with Likert-scale answers plus space for free comments, covering all areas of interest for the evaluation.
- interviews – we might use semi-structured interviews to follow up on unclear responses and to expand the amount of qualitative data in any of the areas of interest for the evaluation.
- confidence logs – these very brief questionnaires ask students to rate their confidence in newly acquired knowledge or skills on completion of a particular task or topic. This tool could be very useful for alerting us to the problems of struggling students. Where students report low confidence, the reasons (technological barriers, lack of time, lack of clarity in the materials, etc.) can then be explored through brief interviews.
- textual data – analysis of discussion forum contributions to establish who is contributing, how often and in what depth they contribute, and how contributions and interactions affect students’ thinking as revealed in subsequent posts.
- online quizzes and assessment of written work – we propose to use these tools primarily to gauge students’ learning progress, providing a more objective measure to balance the subjective self-assessment questionnaires on learning progress completed by students.
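As an illustration of how data from two of these methods might be processed, the following sketch summarises Likert-scale questionnaire responses and flags low-confidence students from confidence logs. The data format, item names, and the confidence threshold are illustrative assumptions, not part of the module design.

```python
# Sketch: summarising Likert-scale questionnaire items and flagging
# low-confidence students from confidence logs.
# All data shown here is hypothetical.
from statistics import mean, median

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree),
# keyed by questionnaire item.
likert_responses = {
    "platform_user_friendly": [4, 5, 3, 4, 2],
    "materials_appropriate": [3, 4, 4, 5, 4],
}

def summarise_item(scores):
    """Return simple descriptive statistics for one Likert item."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "median": median(scores),
    }

# Hypothetical confidence log: self-reported confidence (1-5) per student
# after completing a task or topic.
confidence_log = {"student_a": 2, "student_b": 4, "student_c": 1}

def flag_low_confidence(log, threshold=2):
    """List students at or below the threshold, as candidates for a
    brief follow-up interview on the reasons for their low confidence."""
    return sorted(s for s, level in log.items() if level <= threshold)

summary = {item: summarise_item(scores) for item, scores in likert_responses.items()}
flagged = flag_low_confidence(confidence_log)
```

A real analysis would of course draw on the full cohort’s responses and would complement, not replace, the qualitative follow-up interviews described above.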
The connections between the proposed methods and the four areas of interest can be summarised as follows: online questionnaires and follow-up interviews can address all four areas; confidence logs, online quizzes and assessment of written work focus chiefly on learning progress; and analysis of textual data focuses on interactions.
Who is conducting the evaluation and what is their relationship to the participants?
This is an area of concern to address when planning the evaluation. There may be some evaluation methods which should be conducted by an external evaluator, i.e. not the course tutor (e.g. interviews), while there will be other methods which might be best conducted by the course tutor (e.g. textual data, quizzes and assessment of written work, confidence logs).
In thinking through the issues outlined above, the following resources were found to be helpful:
- Evaluating e-learning, Centre for Academic Practice, University of Warwick, January 2004
- Evaluating learning technology resources, Learning Technology Support Service, University of Bristol
- Project Evaluation Toolkit, University of Tasmania, http://www.utas.edu.au/pet/sections/introducing.html
- Evaluation Cookbook, Learning Technology Dissemination Initiative, http://www.icbl.hw.ac.uk/ltdi/cookbook/contents.html
- Flashlight Evaluation Handbook
- Handbook for Learning-centred Evaluation of Computer-facilitated Learning Projects in Higher Education