
Wednesday, July 13, 2011

Project planning: Evaluation plan


JISC requirement: You must develop an evaluation plan to explain how you will evaluate the success of the project and its outcomes. You must also participate in evaluation activities at programme level.
Evaluation is measuring success in a systematic and objective way. For the project, evaluation focuses on whether the project was effective, achieved its objectives, and whether the outcomes had an impact. For project outputs, evaluation might focus on whether the outputs are useful, meet user needs, and perform well.
Experts on evaluation often make a distinction between formative and summative evaluation. The two are similar in approach; the main difference is timing:
  • Formative evaluation Performed during the project/programme to improve the work in progress and the likelihood that it will be successful
  • Summative evaluation Performed near the end of the project/programme to provide evidence of achievements and success
The evaluation undertaken by JISC projects will vary depending on the objectives set, the outputs created, and the outcomes envisaged.  However, all projects must undertake evaluation to improve their own work and that of the programme (formative) and to measure their success (summative).

Programme evaluation

JISC evaluates its programmes to ensure knowledge and results are shared with the wider community and to improve the programme itself. The programme manager will plan an overall evaluation strategy for the programme. This will provide a framework for the evaluation work to be performed, the questions that will be addressed, and the criteria by which the success of the programme will be judged.
For most programmes, there will be formal formative and summative evaluations. These studies may be undertaken by an external consultant or group of consultants, a designated project within the programme, the programme’s advisory board, or a mixture of these approaches.
A formative evaluation is done during the programme to improve it. 
Aims of the formative evaluation might be to:
  • Assess progress towards meeting the programme’s aims and objectives
  • Assess how effectively projects are contributing to meeting the programme’s aims
  • Gather and disseminate best practice
  • Identify gaps and issues
  • Raise awareness of the programme and stimulate discussion within the community
  • Ensure programme outputs are meeting stakeholder needs
  • Ensure the programme can respond flexibly to changes in the technical and political environment and that it isn’t overtaken by events
A summative evaluation would be done at the end of the programme to assess outcomes, impact on the community, and overall success. 
Aims of the summative evaluation might be to:
  • Assess whether the programme achieved its aims and objectives
  • Assess the impacts, benefits, and value of the programme in the broader context
  • Identify achievements and stimulate discussion with the community
  • Synthesise knowledge from the programme and lessons learned
  • Identify areas for future development work
Projects are required to participate in any evaluation studies at programme-level. The programme manager will inform you about plans for programme evaluations and let you know how to participate. Some projects become nervous about this prospect and feel that formative evaluations ‘check up on them’ and summative evaluations judge their success. This isn’t the case. Programme evaluations focus on what the programme is achieving.  What projects are achieving is obviously relevant, but individual projects are not evaluated and their success is not judged.

Project evaluation plan

Each project must develop an evaluation plan that includes formative as well as summative evaluation. The evaluation plan will explain how you plan to evaluate the success of the project. It should be planned in consultation with the programme manager and approved by any programme advisory board. You will report on evaluation activities in progress reports and in the final report.
The project plan template has a table to help you develop your evaluation plans.
Factors to evaluate
The factors to evaluate will depend on the project. In most cases they will focus on how successful the project is at achieving what it set out to do. 
This might include:
  • Achievements against aims and objectives
  • Stakeholder engagement
  • Outcomes and impacts
  • Benefits
  • Learning
  • Effectiveness of the project
Each project should decide on the factors to evaluate in consultation with the programme manager. Focus on important factors that can be evaluated within project resources.
Questions to address
List the specific questions the evaluation will answer. Focus on questions that really need to be answered to demonstrate success. Think about what stakeholders want to know. Make sure that the questions can be answered unambiguously.  Avoid questions where the answer is likely to be ‘maybe’. Typical questions you might consider in evaluating project outputs and the project itself are:
Formative questions
  • Have milestones been met on schedule?
  • What is holding up progress?
  • What should we do to correct this?
  • Is project management effective?
  • Are stakeholders on board?
  • Do they agree with interim findings?
  • Is our dissemination effective?
  • What lessons have we learned?
  • Do we need to change the plan?
Summative questions
  • Have objectives been met?
  • Have outcomes been achieved?
  • What are the key findings?
  • What impact did the project have?
  • What benefits are there for stakeholders?
  • Was our approach effective?
  • What lessons have we learned?
  • What would we do differently?
Evaluation methods
Evaluation methods are well-documented, so even if you haven’t conducted an evaluation before you should find sufficient information to choose appropriate methods and use them successfully.
Quantitative methods include:
  • Questionnaires  Questionnaires are used to gather opinions from a particular group in a systematic way using closed and open-ended questions. They are a common, versatile, and relatively cheap way of collecting data. They can be sent by email, posted on the web, or even sent by post. Care needs to be taken in selecting the sample, phrasing the questions, and analysing the results in order to draw valid conclusions.
  • SERVQUAL  This measures the quality of a service in terms of five parameters: reliability, responsiveness, assurance, empathy, and tangibles. It's a survey instrument that measures the gap between users' expectations for excellence and their perception of the actual service delivered.
  • Usage logs  Usage logs record what each user does during a session, and these can be analysed using various tools and techniques. They allow you to measure what content is used, how often, using what methods (e.g. searching), and sometimes by whom (e.g. by department). Analysis can allow you to identify trends and patterns (e.g. in searching or navigation).
  • Web server logs  These can tell you a bit about how your website is used (e.g. the most used pages, whether usage is increasing, and times of peak use). They don't tell you who's using the site, why, or whether they like it. But they can identify problems to look into (e.g. navigation, if important pages aren't being used). Many software tools are available to analyse server logs; a minimal parsing sketch follows this list.
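To make that last point concrete, here is a minimal sketch of counting the most-used pages from a web server log. It assumes the widely used Apache/nginx "combined" log format; the file name access.log and the top-10 cutoff are illustrative assumptions, not anything JISC prescribes.

```python
# Minimal sketch: find the most-used pages in a web server log.
# Assumes the Apache/nginx "combined" log format; "access.log" and
# the top-10 cutoff are hypothetical choices for illustration.
import re
from collections import Counter

# Matches the request and status inside a combined-format entry,
# e.g. ... "GET /about.html HTTP/1.1" 200 1234 ...
REQUEST = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" (\d{3})')

def most_used_pages(log_path, top_n=10):
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            match = REQUEST.search(line)
            if match and match.group(2).startswith("2"):  # successful responses only
                hits[match.group(1)] += 1
    return hits.most_common(top_n)

for path, count in most_used_pages("access.log"):
    print(f"{count:6d}  {path}")
```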
Qualitative methods include:
  • Interviews  These are conversations, typically with one person. They may be structured, semi-structured, or unstructured, and conducted in person or by phone. They are useful for exploring opinions and issues in depth on a one-to-one basis.
  • Focus groups  These are interviews conducted with a small group of people (e.g. 8-10). They allow you to get a range of views on an issue (not a consensus) and explore how strongly views are held or change as the issue is discussed. They are often used after a survey to help explain the results or clarify issues. However, they are time-consuming to set up and some skill is needed to guide and moderate the discussion.
  • Observation  Observation is just that, observing what people do. It’s a technique often used by developers of commercial software to find out how users use their product.  If results aren’t what they envisaged, they may change the design. Observation can be applied to other areas as well (e.g. how a process or content is used).
  • Peer review  In some areas, an expert opinion is needed. A pedagogical expert might evaluate learning objects and say if they meet learning objectives. An expert in a discipline might evaluate the quality or relevance of a collection of content in that area.
Whatever methods are used, it's important to involve stakeholders, as this will increase their commitment to the project, their confidence in the results, and the likelihood that they will act on the findings. Involving users will make it more likely that they use the outputs. The project may choose to involve an expert on evaluation (e.g. to help plan the studies or advise on analysing results). When planning evaluation of the project, it's important to get independent views.
Projects are likely to collect personal data during evaluations (data associated with named persons), so you should ensure that your institution's data protection policies are followed.
Measuring success
Think about how you will measure success and what evaluation criteria or performance indicators you will use. For project outputs, performance indicators may relate to user demand, user satisfaction, efficiency, effectiveness, take-up, etc. For the project, they will relate to achieving your objectives. By using SMART objectives (specific, measurable, achievable, realistic, timed), you can demonstrate that they have been achieved. Discuss how you will measure success with stakeholders to understand success from their point of view.
Think about the level of success you hope to achieve (e.g. the level of user satisfaction or take-up). This may be difficult to assess at the start of the project, but setting targets will give you something to aim for. It's important to quantify success in some way, to ensure that your evaluation results are objective, valid, reliable, and repeatable. For example (a small sketch for checking such targets against collected data follows this list):
  • 1,000 users per day will visit the website
  • Usage of the portal will increase by 200% from year 2 to year 3
  • 80% of users questioned will express satisfaction with the service
  • Student examination marks will improve by 10% in two years
  • 90% of users questioned will say the process/method saved them time
  • 4 out of 5 institutions approached say they will adopt the guidelines
  • The portal will achieve a benchmark score of X in usability studies.
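Here is that sketch: a minimal, hypothetical check of two of the targets above against collected data. The survey_responses list and all visit counts are invented for illustration.

```python
# Minimal sketch: compare evaluation data against quantified targets.
# All figures below are hypothetical illustrations.

# "80% of users questioned will express satisfaction with the service"
survey_responses = ["satisfied", "satisfied", "neutral", "satisfied", "dissatisfied"]
satisfaction_rate = survey_responses.count("satisfied") / len(survey_responses)
target = 0.80
status = "met" if satisfaction_rate >= target else "not met"
print(f"Satisfaction: {satisfaction_rate:.0%} (target {target:.0%}) -> {status}")

# "Usage of the portal will increase by 200% from year 2 to year 3"
year2_visits, year3_visits = 40_000, 130_000  # hypothetical usage-log totals
increase = (year3_visits - year2_visits) / year2_visits
status = "met" if increase >= 2.0 else "not met"
print(f"Usage increase: {increase:.0%} (target 200%) -> {status}")
```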
Using evaluation results
Formative evaluation will improve the project and its outputs. It lets you reflect on what you’ve done so far, what’s going well (or not so well), and what you could do to change or improve things.  The ‘Review as you Go’ sections in these project management guidelines are about formative evaluation and how to build it into the fabric of the project. Formative evaluation is also a method of improving the programme and future JISC programmes. Tell the programme manager, personally or in progress reports, what could be improved. Other projects may be saying similar things, and the programme manager can decide what action to take at programme level.
Evaluation will demonstrate that you've achieved your aims and objectives, that the work was useful, and that there are benefits for the community. The project has received funding, and achieving results is part of accountability. But it's also in your own interests: demonstrating that the work was useful and has benefits for the community relates to sustainability. If you plan to carry the work forward, include evaluation results in your business plan.
Success has been mentioned frequently, and you may wonder what happens if you fail. JISC projects seldom fail, but some projects don't achieve all their objectives. There may be circumstances beyond your control that affect what the project can achieve. JISC is likely to be sympathetic rather than judgemental, as much can be learned from 'failure' as from success. JISC asks you to do the best you can and learn from the experience.
Hints and tips
  • Focus on a few important factors
  • Set realistic goals that can be achieved within project resources
  • List the specific questions you want to answer
  • Make sure they can be answered, and unambiguously (yes/no not maybe)
  • Select appropriate methods that will answer the questions
  • Decide how you will measure success
  • Involve stakeholders
  • Use formative evaluation to improve the project and the programme
  • Build formative evaluation into the fabric of the project
  • Use the results
  • Make sure that the evaluation work is reflected in the workpackages
Review as you Go
Evaluation will demonstrate that the project was successful. By writing the evaluation plan early in the project, you can think about what you need to evaluate, when, using what methods, and how you will measure success. Build some evaluation into each phase, rather than leaving it to the end. Early feedback from users may help you understand what they do and don't like and improve the design. Reflecting on what's going well (or not so well) within the project will help to identify issues that should be dealt with before they become problems or risks. Change the plan as you gain experience and get feedback, and use evaluation to improve the project (and the programme).
Further resources
  • FAIR and X4L programmes The EFX project provided evaluation support to these programmes during 2002/03 and was a joint initiative between CERLIM at Manchester Metropolitan University and the Centre for Studies in Advanced Learning Technologies (CSALT) at Lancaster University. They developed an excellent evaluation toolkit that explains evaluation, shows how to develop an evaluation plan, and lists many useful resources. Other JISC projects may find their approach helpful.
  • Guidelines for project evaluation Developed by the Tavistock Institute for the JISC Electronic Libraries programme. Much of the information is still useful and can be applied in different ways for larger or smaller projects.
  • Guidelines for good practice in evaluation UK Evaluation Society (library section) – The society explains its principles for conducting evaluations. There are also good links to evaluation resources.
  • Taking stock: a practical guide to evaluating your own programs Sally Bond et al, Horizon Research, 1997. This is a practical guide to evaluating projects or programmes.
  • JISC InfoNet's EvalKit is a directory of ICT evaluation tools and toolkits for use by the education sector. You can search the database or browse by topic, and there are links to additional resources. EvalKit was a JISC-funded project.

Project Management - Evaluation



By hafeezrm



Evaluation and appraisal are sometimes used interchangeably. This is for two reasons: (i) both mean assessment and (ii) at times they overlap, especially when we talk of ex-ante evaluation.
In fact, both are materially different as shown below:
Appraisal is the process of examining a proposal, like the setting up of a fertiliser plant. It involves weighing up the costs and benefits, risks and uncertainties of putting up the plant before a decision is made.
Evaluation is a review of the actual operations of the fertiliser plant, which covers (i) how successful or otherwise it has been and (ii) what lessons we learn for future industrialisation. In short, it is a before-project versus after-project comparison.

What is covered in the evaluation

In evaluation, actual performance is compared with planned performance. The gap or difference, if any, is analysed and investigated. The analysis runs: (i) what was expected, (ii) what actually happened, (iii) what the position should have been under the circumstances, and (iv) what action, if any, is required, whether for penalties or rewards or for future guidance.
This is like variance analysis in cost accounting, where actual cost is compared with standard cost and the variance is broken into its components, such as price variance, quantity variance and, in overhead analysis, capacity variance. Were the variances favourable or unfavourable, avoidable or unavoidable? What action, if any, should be taken regarding the executives (purchase manager, production manager, and marketing manager)? A minimal sketch of the calculation follows.
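Here is that sketch, using the conventional direct-material variance formulas; all quantities and prices are hypothetical.

```python
# Minimal sketch: cost-accounting variance analysis as described above.
# The standard and actual figures are hypothetical illustrations.

standard_qty, standard_price = 1_000, 5.00  # planned: 1,000 units at 5.00 each
actual_qty, actual_price = 1_100, 5.40      # what actually happened

# Conventional direct-material variance formulas:
price_variance = (actual_price - standard_price) * actual_qty
quantity_variance = (actual_qty - standard_qty) * standard_price
total_variance = actual_qty * actual_price - standard_qty * standard_price

for name, value in [("Price variance", price_variance),
                    ("Quantity variance", quantity_variance),
                    ("Total variance", total_variance)]:
    # For costs, spending more than the standard is unfavourable.
    flag = "unfavourable" if value > 0 else "favourable"
    print(f"{name}: {value:+,.2f} ({flag})")
```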




Still wider coverage of evaluation

But evaluation goes well beyond the current operations. It looks at the objectives set at the time of appraisal and tries to find out whether they were met.
Here are some examples:
CHANGE IN CONCEPT
A company was granted loans for the replacement of 25,000 old and obsolete spindles. Instead, the company installed those spindles at another location and continued with the old and inefficient spinning plant. When this came to the notice of the bank, it changed its follow-up procedures to ward off such attempts in future.
As the financier, the bank holds all 'titles', including shipping documents. When machinery arrives at port, the company obtains the documents from the bank, gets the machinery released, and transports it to the site. The bank decided that henceforth the shipping documents would only be released when the site had been inspected and the visiting officers had confirmed that (i) the old and obsolete machinery had been dismantled and scrapped, (ii) foundations for the new machinery had been laid, and (iii) the necessary arrangements had been made within the premises for installation of the new machinery and equipment. (This was meticulously followed, and no further infringements were reported.)


OBJECTIVE NOT ACHIEVED
For poverty alleviation, the Asian Bank granted loans to small farmers in Laos for the plantation of eucalyptus trees. According to the bank's own report, the project "failed to improve the socioeconomic conditions of intended beneficiaries, as people were driven further into poverty by having to repay loans that financed failed plantations."
CHANGE IN PRODUCT
A bank financed a company to install a baby-food plant in collaboration with Cow & Gate. But the company failed to maintain quality standards and shifted to producing ordinary powdered milk, butter, and cheese. Though the company is profitable, it has defeated the bank's efforts to develop industries for better infant health.

APPROPRIATE TECHNOLOGY
A project was approved for making seamless pipe using locally available steel. When completed, it switched over to imported steel, nullifying the bank's efforts to contribute towards the development of downstream projects of Pakistan Steel Mills Ltd.

Financial & Socio-Economic Indicators

Project feasibility reports or appraisal reports are normally divided into three parts: (i) financial, (ii) economic, and (iii) social. A project may be financially sound but inappropriate for the economy or community. For each part, various ratios are calculated and form part of the appraisal report. On the basis of these ratios or indicators, a project is approved or rejected.
When the project is completed and put into operation, the same indicators are computed from real-life data. These are compared with the estimates, and differences, if any, are analysed as a normal routine of the evaluation process. A short list of such indicators is given below:

FINANCIAL
  • Internal Rate of Return
  • Debt Service Cover
  • Weighted Average Cost of Funds
  • Fixed Assets Cover to Debt

ECONOMIC
  • Internal Economic Rate of Return
  • Domestic Cost per $ earned or saved
  • Effective Rate of Protection
  • Value added in manufacture

SOCIAL
  • Jobs created
  • Size of Loan
  • % local content
  • Impact on income distribution
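As an illustration of recomputing indicators from real-life data, here is a minimal sketch that derives the Internal Rate of Return from actual cash flows (by bisection on the NPV function) and a Debt Service Cover ratio, then compares the IRR with the appraisal estimate. Every cash flow and figure is a hypothetical invention for the example.

```python
# Minimal sketch: recompute two appraisal indicators from actual data.
# All cash flows and figures are hypothetical illustrations.

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return via bisection on the NPV function.
    Assumes one sign change: an initial outlay followed by inflows."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the IRR lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

appraisal_irr = 0.18                               # estimated at appraisal
actual_flows = [-1_000, 150, 250, 300, 350, 400]   # year-0 outlay, then inflows
print(f"IRR: estimated {appraisal_irr:.1%}, actual {irr(actual_flows):.1%}")

# Debt Service Cover = cash available for debt service / debt service due
cash_available, debt_service = 320, 250
print(f"Debt Service Cover: {cash_available / debt_service:.2f}x")
```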

SUCCESSFUL PROJECTS

A sound feasibility report should result in sound and successful projects. Such projects meet all their obligations towards workers, suppliers, governments, and owners, and have sufficient funds to keep them in an adequate state of liquidity. These projects serve the purpose for which they were established, such as improvement in the balance of payments, regional development, and job creation. Their neighbourhood or spillover effects should be positive, making them a model for further industrialisation.

CHALLENGED PROJECTS

Unfortunately, there is a time lag between project appraisal and project evaluation. Suppose a project for pagers (sometimes called beepers or bleeps) was established in Pakistan and became instantly famous for sending short messages at very low rates. The sponsors left no aspect unexamined before launching the project. But, at that point in time, they could not envision the onslaught of mobile phones, which allow convenient, cheap, and instant communication. The pager company had to be liquidated.
Delays and cost overruns are the norm rather than the exception. In such cases, the evaluator conducts in-depth studies to find out whether the delays or overruns were avoidable; in effect, the evaluator tries to determine whether there was any dishonesty or inefficiency. A small sketch quantifying a delay and overrun follows.
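Here is that sketch: a minimal, hypothetical quantification of the delay and overrun the evaluator would then investigate.

```python
# Minimal sketch: quantify the delay and cost overrun an evaluator
# would investigate. All figures are hypothetical illustrations.

planned_cost, actual_cost = 500, 640      # e.g. in millions
planned_months, actual_months = 24, 33

cost_overrun = (actual_cost - planned_cost) / planned_cost
delay_months = actual_months - planned_months
print(f"Cost overrun: {cost_overrun:.0%}; schedule delay: {delay_months} months")
```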
The evaluation gives useful feedback to avoid pitfalls experienced earlier, and if its recommendations are followed, project appraisal standards will improve, adding more and more successful projects to the country.



For various reasons, some projects become terminally sick, and no amount of bailout can save them. This may be due to wrong location, fiscal anomalies, technological obsolescence, or disinterested management. Such projects should be abandoned, and the resources so released applied to other valuable purposes.