Wednesday, July 13, 2011

Project planning: Evaluation plan


JISC requirement You must develop an evaluation plan to explain how you will evaluate the success of the project and its outcomes. You must also participate in evaluation activities at programme level.
Evaluation is measuring success in a systematic and objective way. For the project, evaluation focuses on whether the project was effective, achieved its objectives, and had the intended impact. For project outputs, evaluation might focus on whether the outputs are useful, meet user needs, and perform well.
Experts on evaluation often make a distinction between formative and summative evaluation. They are similar, and the main difference is timing:
  • Formative evaluation Performed during the project/programme to improve the work in progress and the likelihood that it will be successful
  • Summative evaluation Performed near the end of the project/programme to provide evidence of achievements and success
The evaluation undertaken by JISC projects will vary depending on the objectives set, the outputs created, and the outcomes envisaged.  However, all projects must undertake evaluation to improve their own work and that of the programme (formative) and to measure their success (summative).

Programme evaluation

JISC evaluates its programmes to ensure knowledge and results are shared with the wider community and to improve the programme itself. The programme manager will plan an overall evaluation strategy for the programme. This will provide a framework for the evaluation work to be performed, the questions that will be addressed, and the criteria by which the success of the programme will be judged.
For most programmes, there will be formal formative and summative evaluations. These studies may be undertaken by an external consultant or group of consultants, a designated project within the programme, the programme’s advisory board, or a mixture of these approaches.
A formative evaluation is done during the programme to improve it. 
Aims of the formative evaluation might be to:
  • Assess progress towards meeting the programme’s aims and objectives
  • Assess how effectively projects are contributing to meeting the programme’s aims
  • Gather and disseminate best practice
  • Identify gaps and issues
  • Raise awareness of the programme and stimulate discussion within the community
  • Ensure programme outputs are meeting stakeholder needs
  • Ensure the programme can respond flexibly to changes in the technical and political environment and that it isn’t overtaken by events
A summative evaluation is done at the end of the programme to assess outcomes, impact on the community, and overall success.
Aims of the summative evaluation might be to:
  • Assess whether the programme achieved its aims and objectives
  • Assess the impacts, benefits, and value of the programme in the broader context
  • Identify achievements and stimulate discussion with the community
  • Synthesise knowledge from the programme and lessons learned
  • Identify areas for future development work
Projects are required to participate in any evaluation studies at programme level. The programme manager will inform you about plans for programme evaluations and let you know how to participate. Some projects become nervous about this prospect and feel that formative evaluations ‘check up on them’ and summative evaluations judge their success. This isn’t the case. Programme evaluations focus on what the programme is achieving. What projects are achieving is obviously relevant, but individual projects are not evaluated and their success is not judged.

Project evaluation plan

Each project must develop an evaluation plan that includes formative as well as summative evaluation. The evaluation plan will explain how you intend to evaluate the success of the project. It should be developed in consultation with the programme manager and approved by any programme advisory board. You will report on evaluation activities in progress reports and in the final report.
The project plan template has a table to help you develop your evaluation plans.
Factors to evaluate
The factors to evaluate will depend on the project. In most cases they will focus on how successful the project is at achieving what it set out to do. 
This might include:
  • Achievements against aims and objectives
  • Stakeholder engagement
  • Outcomes and impacts
  • Benefits
  • Learning
  • Effectiveness of the project
Each project should decide on the factors to evaluate in consultation with the programme manager. Focus on important factors that can be evaluated within project resources.
Questions to address
List the specific questions the evaluation will answer. Focus on questions that really need to be answered to demonstrate success. Think about what stakeholders want to know. Make sure that the questions can be answered unambiguously.  Avoid questions where the answer is likely to be ‘maybe’. Typical questions you might consider in evaluating project outputs and the project itself are:
Formative questions
  • Have milestones been met on schedule?
  • What is holding up progress?
  • What should we do to correct this?
  • Is project management effective?
  • Are stakeholders on board?
  • Do they agree with interim findings?
  • Is our dissemination effective?
  • What lessons have we learned?
  • Do we need to change the plan?
Summative questions
  • Have objectives been met?
  • Have outcomes been achieved?
  • What are the key findings?
  • What impact did the project have?
  • What benefits are there for stakeholders?
  • Was our approach effective?
  • What lessons have we learned?
  • What would we do differently?
Evaluation methods
Evaluation methods are well-documented, so even if you haven’t conducted an evaluation before you should find sufficient information to choose appropriate methods and use them successfully.
Quantitative methods include:
  • Questionnaires  Questionnaires are used to gather opinions from a particular group in a systematic way using closed and open-ended questions. They are a common, versatile, and relatively cheap way of collecting data. They can be distributed by email, posted on the web, or even sent by snail mail. Care needs to be taken in selecting the sample, phrasing the questions, and analysing the results in order to draw valid conclusions.
  • SERVQUAL  This measures the quality of a service in terms of five dimensions: reliability, responsiveness, assurance, empathy, and tangibles. It’s a survey instrument that measures the gap between users’ expectations for excellence and their perception of the actual service delivered (a short gap-score sketch follows this list).
  • Usage logs  Usage logs record what each user does during a session, and these can be analysed using various tools and techniques. They allow you to measure what content is used, how often, using what methods (e.g. searching), and sometimes by whom (e.g. by department). Analysis can allow you to identify trends and patterns (e.g. in searching or navigation).
  • Web server logs  These can tell you a bit about how your website is used (e.g. the most used pages, whether usage is increasing, and times of peak use). They don’t tell you who’s using the site, why, or whether they like it. But they can identify problems to look into (e.g. navigation, if important pages aren’t being used). Many software tools are available to analyse server logs, and a minimal do-it-yourself parsing sketch also follows this list.
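To make the gap idea concrete, here is a minimal Python sketch of a SERVQUAL-style gap calculation. It assumes paired expectation and perception ratings on a 1-7 scale for each dimension; the function name and the data are illustrative, not part of any standard instrument or tool.

    from statistics import mean

    # SERVQUAL's five dimensions; the gap score for each is the mean
    # perception rating minus the mean expectation rating (negative
    # values mean the service falls short of expectations).
    DIMENSIONS = ["reliability", "responsiveness", "assurance", "empathy", "tangibles"]

    def gap_scores(expectations, perceptions):
        # expectations/perceptions: dict mapping dimension -> list of 1-7 ratings
        return {d: mean(perceptions[d]) - mean(expectations[d]) for d in DIMENSIONS}

    # Illustrative ratings for one respondent group
    expectations = {d: [7, 6, 7] for d in DIMENSIONS}
    perceptions = {d: [5, 6, 6] for d in DIMENSIONS}

    for dimension, gap in gap_scores(expectations, perceptions).items():
        print(f"{dimension:15s} gap = {gap:+.2f}")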
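Similarly, rolling your own log analysis is sometimes quicker than adopting a tool. The sketch below counts successful page requests in a server log; it assumes the widely used Common Log Format, and the file name is a placeholder for your own access log.

    import re
    from collections import Counter

    # Matches the requested page and status code in a Common Log Format line, e.g.
    # 127.0.0.1 - - [13/Jul/2011:10:00:00 +0100] "GET /about HTTP/1.1" 200 512
    LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" (\d{3})')

    def page_hits(log_path):
        hits = Counter()
        with open(log_path) as f:
            for line in f:
                m = LOG_LINE.search(line)
                if m and m.group(2) == "200":  # count successful requests only
                    hits[m.group(1)] += 1
        return hits

    # "access.log" is illustrative; substitute your server's log file.
    for page, count in page_hits("access.log").most_common(10):
        print(f"{count:6d}  {page}")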
Qualitative methods include:
  • Interviews  These are conversations, typically with one person. They may be structured, semi-structured, or unstructured, and conducted in person or by phone. They are useful for exploring opinions and issues in depth on a one-to-one basis.
  • Focus groups  These are interviews conducted with a small group of people (e.g. 8-10). They allow you to get a range of views on an issue (not a consensus) and explore how strongly views are held or change as the issue is discussed. They are often used after a survey to help explain the results or clarify issues. However, they are time-consuming to set up and some skill is needed to guide and moderate the discussion.
  • Observation  Observation is just that, observing what people do. It’s a technique often used by developers of commercial software to find out how users use their product.  If results aren’t what they envisaged, they may change the design. Observation can be applied to other areas as well (e.g. how a process or content is used).
  • Peer review  In some areas, an expert opinion is needed. A pedagogical expert might evaluate learning objects and say if they meet learning objectives. An expert in a discipline might evaluate the quality or relevance of a collection of content in that area.
Whatever methods are used, it’s important to involve stakeholders, as this will increase their commitment to the project, their confidence in the results, and the likelihood that they will act on the findings. Involving users will also increase the prospects that they will use the outputs. The project may choose to involve an expert on evaluation (e.g. to help plan the studies or advise on analysing results). When evaluating the project itself, it’s important to get independent views.
Projects are likely to collect personal data during evaluations (data associated with named persons), so you should ensure that your institution’s data protection policies are followed.
Measuring success
Think about how you will measure success and what evaluation criteria or performance indicators you will use. For project outputs, performance indicators may relate to user demand, user satisfaction, efficiency, effectiveness, take-up, etc. For the project itself, they will relate to achieving your objectives. By setting SMART objectives (specific, measurable, achievable, realistic, timed), you can demonstrate that they have been achieved. Discuss how you will measure success with stakeholders to understand success from their point of view.
Think about the level of success you hope to achieve (e.g. the level of user satisfaction or take-up). This may be difficult to assess at the start of the project, but setting targets will give you something to aim for. It’s important to quantify success in some way, to ensure that your evaluation results are objective, valid, reliable, and repeatable. For example (a short sketch after this list shows how such targets might be checked):
  • 1,000 users per day will visit the website
  • Usage of the portal will increase by 200% from year 2 to year 3
  • 80% of users questioned will express satisfaction with the service
  • Student examination marks will improve by 10% in two years
  • 90% of users questioned will say the process/method saved them time
  • 4 out of 5 institutions approached say they will adopt the guidelines
  • The portal will achieve a benchmark score of X in usability studies.
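As a sketch of how such targets might be checked once results come in, the fragment below compares measured values against targets. All of the indicator names and figures are illustrative, not prescribed by JISC.

    # Compare measured results against the success targets set in the
    # evaluation plan. Targets and measurements here are illustrative.
    targets = {
        "daily website visits": 1000,
        "users expressing satisfaction (%)": 80,
        "users reporting time saved (%)": 90,
    }
    measured = {
        "daily website visits": 1150,
        "users expressing satisfaction (%)": 76,
        "users reporting time saved (%)": 92,
    }

    for indicator, target in targets.items():
        actual = measured[indicator]
        status = "met" if actual >= target else "NOT met"
        print(f"{indicator}: target {target}, actual {actual} -> {status}")

    # Note the arithmetic behind one target above: a "200% increase"
    # means year-3 usage is three times year-2 usage.
    year2, year3 = 4000, 12500   # illustrative session counts
    print(f"Usage increase: {(year3 - year2) / year2 * 100:.0f}% (target: 200%)")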
Using evaluation results
Formative evaluation will improve the project and its outputs. It lets you reflect on what you’ve done so far, what’s going well (or not so well), and what you could do to change or improve things.  The ‘Review as you Go’ sections in these project management guidelines are about formative evaluation and how to build it into the fabric of the project. Formative evaluation is also a method of improving the programme and future JISC programmes. Tell the programme manager, personally or in progress reports, what could be improved. Other projects may be saying similar things, and the programme manager can decide what action to take at programme level.
Evaluation will demonstrate that you’ve achieved your aims and objectives, the work was useful, and there are benefits for the community. The project has received funding, and achieving results is part of accountability. But it’s also in your own interests. Demonstrating that the work was useful and has benefits for the community relates to sustainability. If you plan to carry the work forward, include evaluation results in your business plan.
Success has been mentioned frequently, and you may wonder what happens if you fail. JISC projects seldom fail, but some projects don’t achieve all their objectives. There may be circumstances beyond your control that affect what the project can achieve. JISC is likely to be sympathetic rather than judgemental, as much can be learned from ‘failure’ as from success. JISC asks you to do the best you can and learn from the experience.
Hints and tips
  • Focus on a few important factors
  • Set realistic goals that can be achieved within project resources
  • List the specific questions you want to answer
  • Make sure they can be answered unambiguously (yes/no, not maybe)
  • Select appropriate methods that will answer the questions
  • Decide how you will measure success
  • Involve stakeholders
  • Use formative evaluation to improve the project and the programme
  • Build formative evaluation into the fabric of the project
  • Use the results
  • Make sure that the evaluation work is reflected in the workpackages
Review as you Go
Evaluation will demonstrate that the project was successful. By developing the evaluation plan early in the project, you can think about what you need to evaluate, when, using what methods, and how you will measure success. Build some evaluation into each phase, rather than leaving it to the end. Early feedback from users may help you understand what they do and don’t like and improve the design. Reflecting on what’s going well (or not so well) within the project will help to identify issues that should be dealt with before they become problems or risks. Change the plan as you gain experience and get feedback, and use evaluation to improve the project (and the programme).
Further resources
  • FAIR and X4L programmes The EFX project provided evaluation support to these programmes during 2002/03 and was a joint initiative between CERLIM at Manchester Metropolitan University and the Centre for Studies in Advanced Learning Technologies (CSALT) at Lancaster University. They developed an excellent evaluation toolkit that explains evaluation, shows how to develop an evaluation plan, and lists many useful resources. Other JISC projects may find their approach helpful.
  • Guidelines for project evaluation Tavistock Institute, developed for the JISC Electronic Libraries programme. Much of the information is still useful and can be applied in different ways for larger or smaller projects.
  • Guidelines for good practice in evaluation UK Evaluation Society (library section) – The society explains its principles for conducting evaluations. There are also good links to evaluation resources.
  • Taking stock: a practical guide to evaluating your own programs Sally Bond et al, Horizon Research, 1997. This is a practical guide to evaluating projects or programmes.
  • JISC InfoNet’s EvalKit is a directory of ICT evaluation tools and toolkits for use by the education sector. You can search the database or browse by topic, and there are links to additional resources. EvalKit was a JISC-funded project.
