8.1 Anticipated and unanticipated project benefits
8.2 The PROSPER Impact Study: A consideration of sector-wide outcomes
8.3 Research and evaluation in DPEP: A review of current practices and future strategies in impact assessment
8.4 Concluding comments from the DFID Education Division
Mfanwenkosi Malaza
Mpumalanga Primary Schools Initiative
Mpumalanga, South Africa
In this paper, the author argues that the determination of a project's benefits is more complex than it appears at face value. He juxtaposes his discussion of project benefits against the background of the MPSI project. The paper begins with an elaboration of a variety of outcomes achieved by the MPSI. The author distinguishes between anticipated and unanticipated outcomes and argues that every project has a shade of both intended and unintended outcomes, whether they are positive or not. Very often, the impact of the unintended outcomes far outweighs the intended ones from the local people's point of view. Anticipated outcomes, he states, are conceptualised in the project planning stage, guided by the project goal and stated in the project log-frame. These, he suggests, are best gleaned by utilising quantitative methods. He proceeds to elaborate on the unanticipated outcomes which are not projected at the start of the intervention but nevertheless make a significant impact. He argues that these need also to be considered when evaluating project impact. He cautions that in identifying the unanticipated benefits, it is necessary to look at the wider context of a project's operational environment in order to guard against attributing effects to the project that are incidental to it and that may not necessarily result directly from it.
An outgoing deputy minister of education was once quoted as saying to his colleague:
Well, the hard work is done. We have the policy passed; now all you have to do is implement it (Fullan 1991: 65).

It may well be that hard work has, indeed, been done, but what the Honourable Deputy Minister conveniently ignored was that the processes beyond the adoption of educational change are more intricate and complicated than mere adoption, because warm-blooded people are involved and real change is at stake. Implementation consists of a process of putting into practice an idea, a programme or a set of activities and structures that are new to the people who are involved or who are expected to change. According to Fullan (1991), commitment to what should be changed often varies inversely with knowledge about how to work through a process of change. In fact, he argues that strong commitment to a particular change may be a barrier to setting up an effective process of change. It is significant, therefore, to try to understand both the dynamics of change and the process by which change occurs in a school or society in order to interpret the meaning of the evaluation data.
2 Methods of measuring impact
Carol A. Carrier (1990) makes the point that, traditionally, programme evaluators in developing countries have been more effective in assessing the quality of inputs than of outputs, simply because inputs are less controversial. It is easy to count the number of textbooks supplied and the number of lessons taught or workshops given. Project evaluation has traditionally been quantitative and characterised by the development of standardised tests and questionnaires, the production of data from large samples of schools and individuals, and the analysis of these data by various statistical methods.
While, in principle, there is nothing wrong with this traditional approach to evaluating projects, there is a case to be made for using illuminative research methods. There is a real danger in the exclusive use of quantitative methods where either a qualitative method or a combination of the two methods might have been more appropriate. It is hard to see how questionnaire surveys can penetrate the gap between word and deed in the evaluation of projects. Quantitative methods tend to concentrate only on what can be measured and only on the intended outcomes. Every project has a shade of both intended and unintended outcomes, whether positive or not. And, very often, the unintended outcomes far outweigh the intended ones from the local people's point of view.
Qualitative methods tend to be more illuminative and are primarily concerned with description and interpretation rather than measurement and prediction. Illuminative evaluation seeks to establish how a project operates and how it is influenced by a variety of school situations. It seeks to discern the critical project processes and the most significant features of project impact. Patton (1988) accordingly argues for a commitment to broadening the use of educational research strategies to include a full range of quantitative and qualitative methodologies.
A good example of a balanced approach to assessing a project's impact is illustrated in the case of the Mpumalanga Primary Schools Initiative (MPSI). MPSI implementers developed instruments that could be used in school visits to collect data through interviews and observations. The information gathered is not a product of a robust scientific investigation but is nevertheless of value and is fed back into the MPSI planning processes and clinics, which are held once a term. This feedback then informs (in a formative manner) the operations of the project.1
Mpumalanga is one of the nine provinces in South Africa. The MPSI is the first major DFID project with a provincial government after the 1994 general elections, which brought about democracy. The aim of the MPSI is to improve primary school learners' knowledge and skills in Mathematics, English Language and Science through providing integrated support for pre-service and in-service teachers' training. In order to achieve this aim, MPSI utilises the expertise of Technical Co-operation Officers (TCOs), Subject Advisers, College-based Teachers' Centre Implementers, NGOs, and local and external consultants from the link institution.
The Project is being implemented in 74 schools, which are clustered into groups of between five and seven. A total of 185 primary school teachers participate in the project's activities.
3 Project outcomes
As with all projects, the MPSI achieved a number of outcomes that were initially defined in the project planning stage. These were anticipated and were guided by the goal of the project. The project also evidenced a number of outcomes that were not anticipated - benefits which had a far-reaching impact on the sector.
3.1 Anticipated project outcomes
The anticipated outcomes were those which were conceptualised and envisaged prior to the start of the intervention. They were informed by the goal of the project. The following projected outcomes were incorporated in the MPSI logical framework:
· improved College of Education management
· enhanced knowledge, understanding and skills for primary teacher education on the part of the College of Education staff
· new professional training curricula and subject-specific syllabi for initial and continuing teacher education, conforming to the South African Committee of Teacher Education Policy and the National Qualifications Framework guidelines, and reflecting agreed provincial Department of Education policy on teacher education
· enhanced teaching skills by intermediate phase (Grades 4 - 6) teachers of Science, Mathematics and English within school clusters linked to functioning teachers' centres
· improved teacher support services in those school clusters linked to functioning teachers' centres
· teacher understanding and implementation of the areas of learning curriculum for the general education intermediate phase within the school clusters linked to functioning teachers' centres
3.2 Unanticipated project outcomes

A number of outcomes were not anticipated. These non-projected outcomes resulted from MPSI activities which may be described as stop-gap activities and which did not form part of the project's core activities. For this reason, they are not reflected in the log-frame. Some of the most important of these are:
· the development of principals of schools participating in the MPSI so that they are able to assume the role of instructional leadership
· the establishment of a network of teachers' centres
· the evolution of a comprehensive provincial INSET strategy with a five-year development plan
4 The intended benefits of the MPSI

As indicated above, the anticipated outcomes are meant to contribute to the intended project benefits. In order to gain a sense of the benefits, the MPSI developed a monitoring tool that takes into account both the qualitative and the quantitative progression of the project. While the instrument measures change in teaching and learning, it nevertheless allows the intended outcomes to be expressed in a quantifiable form in accordance with the verifiable indicators outlined in the project log-frame.
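By way of illustration only, the sketch below (in Python; the indicator names, the 1-4 rating scale and the targets are hypothetical and are not taken from the MPSI log-frame) shows how qualitative ratings recorded during school visits might be rolled up into a quantifiable measure against each verifiable indicator.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Indicator:
    """One verifiable indicator from a project log-frame (names here are illustrative only)."""
    name: str
    target: float                                 # target mean rating on a 1-4 scale
    ratings: list = field(default_factory=list)   # one rating per observed lesson

    def add_observation(self, rating: int) -> None:
        if not 1 <= rating <= 4:
            raise ValueError("ratings use a 1 (not evident) to 4 (well established) scale")
        self.ratings.append(rating)

    def progress(self) -> float:
        """Mean observed rating expressed as a fraction of the target."""
        return mean(self.ratings) / self.target if self.ratings else 0.0

# Hypothetical indicators for one school cluster
indicators = [
    Indicator("use of group work in Mathematics lessons", target=3.0),
    Indicator("use of learner-made materials in Science lessons", target=2.5),
]
indicators[0].add_observation(2)
indicators[0].add_observation(3)
indicators[1].add_observation(3)

for ind in indicators:
    print(f"{ind.name}: {ind.progress():.0%} of target")
```

The particular scale does not matter; the point is that judgments made during observation can be reported against the log-frame in figures, while the narrative observations remain available for formative feedback at the termly clinics.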
It must be mentioned that, at the time of writing, the MPSI is yet to undergo full-scale external evaluation and that, for this reason, any opinion expressed about the project's benefits may at best be described as preliminary. More robust scientific evidence still needs to be gathered to support these pronouncements. In the interim, pronouncements are based on the evidence gathered through the internal monitoring mechanisms referred to above and also through numerous interactions with the MPSI target groupings. What follows is a list of the benefits that may be attributed to the MPSI project.
4.1 Implementation benefits
There was a range of outcomes which pertained to the actual form of teaching and learning interaction. The most significant of these are:
· Individualised instruction

There is sufficient data to suggest a definite change in the teaching-learning process towards more individualised instruction and group work. One of the shortcomings of the learning environment, both at school and college of education level, was the exclusive use of a teacher-centred, whole-class teaching approach based predominantly on chalk-and-talk. Teachers who are participating in the MPSI activities are experimenting with a variety of teaching methods which are discussed at workshops and further developed at the cluster group meetings.
· Experiential learning
Teachers are increasingly resorting to hands-on experiential activities based on teaching learning materials developed from cheap recyclable materials. Special attention has been paid to imparting skills for developing such learning materials. Hitherto, commercially produced learning materials were left in the storerooms because it was feared that they might either be lost or broken.
· Gaining of insight
Learners are challenged to arrive at conclusions by logical and, wherever possible, practical means. The learning environment is becoming increasingly cooperative rather than competitive. Group work and assignments encourage learners to cooperate with one another. The rote learning of formulas and theorems is gradually giving way to gaining insight into concepts.
4.2 Impact on learners

There is sufficient evidence to suggest an improvement in learners' attitude towards schooling.
In schools where the learner-centred approach is gaining momentum, the incidence of learners dodging lessons is decreasing. The project seems to have encouraged regular school attendance either by what it does or by virtue of its presence at the selected schools. Regular attendance results in learners' improved scholastic performances. No attempt has yet been made to compare scholastic performance of project schools with that of non-project schools.
4.3 Impact on teachers
There is sufficient evidence to suggest some degree of improvement in the teachers' mastery of pedagogical skills - a change which has resulted in a change in their classroom behaviour. Project teachers are becoming more open to, and comfortable with, team teaching and peer tutoring. Teacher-to-teacher relations, teacher-to-management relations and teacher-to-learner relations show some improvement. These changes may be attributed to an improvement in teacher self-confidence and self-image, which may in turn be the result of external support from the project. The fact that teachers interact with TCOs and offshore consultants, who bring with them international perspectives and experiences, serves as a major motivating factor. If one may judge from the amount of work covered with learners and attendance at workshops and cluster meetings, there is a marked improvement in their commitment to teaching.
4.4 Impact on school
Some schools are looking into ways of improving their resource provisioning. The Department of Education has been approached with requests to have billboards for advertising erected on school premises. Advertisers will be charged a fee and the income will be used to provide or improve facilities. Although there is some indication that some schools are already replicating such initiatives, the way schools are organised is still a problem. There is a clear need for developing school principals so that they can manage schools in the manner that facilitates the new approach. Overcrowded classrooms and traditional time-tabling prove to be major constraints. These are issues that may require another kind of intervention.
5 MPSI unintended benefits
The following main unintended benefits of the MPSI intervention have been identified.
5.1 Improved ability to deal with change
Schools participating in the MPSI programme appear to be less threatened by the challenges of educational changes, which are spearheaded by the National Department of Education. The exposure to the innovative instructional approaches is strengthening the schools' ability to carry out further changes. Principals are increasingly assuming the role of instructional leaders. School management is becoming more supportive of the teachers and vice versa. Individual teachers are emerging as curriculum leaders at their schools and cluster meetings.
5.2 Reduced learner migration to more advantaged schools
Since 1994, schools have been open to all. Wherever it was possible, learners from disadvantaged schools have left for more advantaged schools, and parents who could afford the travelling costs tended to bus their children to these schools. The MPSI has had the effect of reversing this learner migration. While this reversal could be attributed to the impact of the MPSI programme, there are other contributory factors which could account for it, and it would be inaccurate to attribute everything to the project as such. In the current economic climate, not all parents can afford the cost of sending their children to the former model C schools, as they are popularly known. Besides, the means of transport is not always reliable and there have been some gruesome accidents involving vehicles carrying learners who travel to these schools. Whatever claim is made should be made against the background of these factors.
5.3 Willingness of schools to participate in the MPSI
Schools were keen to participate in the MPSI. Initially, schools were not selected for participation according to specific criteria; due to popular demand, however, all schools in a particular area had to be included, and schools were therefore taken onto the project incrementally rather than selectively. The popular demand has to be seen against the background of teacher development in the country: since the introduction, in principle, of performance-related pay, teacher development has become a bread-and-butter issue for teachers' unions, which expect the MPSI project to level the playing field. For some reason, however, schools that are already on board seem to interpret their participation in the project as an affirmation of some sort. The school governing bodies' support for, and commitment to, their schools' success has improved, and the governing bodies are ensuring greater participation in schools' activities by the parent communities.
6 Conclusion
In conclusion, it is necessary to reiterate the purpose of this paper, which is to look specifically at the positive effects (benefits) of projects by extrapolating some lessons from the MPSI. It needs to be said that projects have ripple effects within their operational environment, some of which are positive and others not; some are immediate while others take time to appear. It is our view that the real effects of the MPSI (and similar projects) will show long after the project has run its course. This is true of all quality interventions in the classroom. It is for that reason that we emphasise the preliminary nature of the findings with regard to the MPSI benefits.
Nonetheless we believe that the findings give a strong indication of what may be expected when a fully- fledged impact study is commissioned. It is necessary to examine the wider context of a project's operational environment if we hope to guard against the attribution of project effects that are merely incidental and not proven consequences. A classic example of just such a case may be found in the reversed learner-migration from the formerly disadvantaged schools discussed above. In all impact assessments, one needs to take the context of the intervention into account. By the same token, one needs to take into account the fact that schools associated with an external intervention of one kind or another tend to gain some political and social clout. This is apart from what a project may or may not do. Simply put, the determination of a project's benefits is a more complex process than prima facie it may appear to be. It is for this reason that we advocate a judicious utilisation of both qualitative and quantitative approaches in project impact studies.
Footnote
1. At the time of writing, the project is yet to be evaluated formally.
Mirela Bardi
The British Council
Bucharest
Roy Cross
The British Council
London
The paper elucidates the approach employed in the evaluation intended to gauge the impact made by PROSPER on the ESP teaching/learning process and on the various stakeholders participating in the project. The paper outlines the underlying methodology of the impact assessment and highlights findings pertaining to the differences made by PROSPER to participating teachers, students, former students, managers, employers, foreign language departments and participating educational institutions from both Romania and the UK. In addition to measuring the impact of the project, the paper makes specific reference to sectoral impact. It refers to the ways in which the ripple effects of PROSPER impacted broadly on the sector and even on institutions which were not participating in PROSPER. The first part of the paper draws attention to the methods which were used to identify areas of impact focus. It also examines the criteria underlying the development of research instruments and makes reference to the way in which the national evaluation was administered. The latter part outlines the findings of the investigation and the impact made, both intended and unintended, on the various stakeholders. Specific reference is also made to the significant ripple effect engendered by PROSPER in the broader ESP sector in both local and regional contexts.
PROSPER was set up in 1991 with the expressed aim of upgrading the teaching/learning of ESP in major tertiary educational institutions in Romania. The project was seen as being indispensable to improving the English proficiency of students who would one day be members of some of the key professions in the Romanian economy, such as engineering, economics and medicine. The design of PROSPER took account of prevailing conditions and the limits on resources. The project framework was developed in collaboration with a variety of stakeholders who contributed to the formulation of the project's purpose and goals as well as to the outputs necessary for their achievement. One major decision, taken at the outset, was that the project would deal with ESP on a national rather than on a regional or institutional level. It was felt that this going-to-scale would achieve a greater impact.
The project started by initially involving six major higher education institutions from across Romania - five Polytechnic universities and the Academy of Economic Studies in Bucharest. After 1991, the project gradually expanded to include the English departments in the faculties of economics and medicine of various universities in Romania. In total, 16 institutions participated in the programme and 124 teachers received various types of PROSPER training.
The project's aim was to be achieved by:
· providing UK and in-country training in communicative methodology for ESP teachers
· firstly, developing skills in ESP curriculum development, course design and materials development and, thereafter, providing on-going support for teachers in these areas
· establishing ESP resource centres at identified institutions
· encouraging networking among ESP practitioners in Romania through the medium of national conferences and regular meetings
· encouraging networking among ESP counterparts in other countries through the medium of international conferences and a newsletter
Although it was obvious, during the years of project implementation, that PROSPER was making significant achievements in a variety of areas that relate to teachers' professional expertise, it became evident that it was necessary to attempt a formal estimation of these achievements and to assess the participants' perceptions of these achievements and of their own practices. It was therefore decided to embark on a full-scale impact study which would include all PROSPER teachers (whether as respondents or researchers, or both). It is believed that an impact study of this magnitude and nature might be the first ESP evaluation of this kind in Europe. Local teachers, in consultation with Prof. Charles Alderson of the University of Lancaster, undertook to implement the investigation. Through this association, teachers were drawn into all stages of the impact study, from the actual project design stage through to the verification and final documentation of the findings.

2 Identifying areas of project impact
The collaborative group of teacher-researchers concurred that the impact study should be undertaken on a national scale and that it should review all the main areas of ESP. It was agreed that one of the main goals of PROSPER was the professionalisation1 of teachers. It was suggested that this aim would be further enhanced if teachers were to be engaged to participate in the impact investigation.
The collection of data which would reflect the impact of PROSPER on all categories of stakeholders and across all the relevant project areas (including ESP teaching methods, materials development, management, and so on) was considered necessary for the national investigation. Although the impact study was designed to identify changes that were anticipated in the project document, it was also designed to identify and document unpredicted and unexpected changes. This was to ensure that the investigation obtained evidence of impact from as many levels as possible; its findings would thus be even more comprehensive and significant. The focus of the impact project was therefore broadened to include not only the individual participants, project classrooms and project-based institutions: in addition, it was directed to examine possible impact on the ESP sector and on the profession in general. The investigation was therefore extended to examine any ripple effect the project might have had on other parts of society.
2.1 Defining the focus
An initial brainstorming exercise was conducted to identify the kind of impact PROSPER might have had and to establish which locations should be examined as sites of project impact. The brainstorming exercise was carried out with the members of the impact study team, and took account of their own perceptions as well as of the results of similar brainstorming exercises which they had carried out in their own departments or institutions.
PROSPER was expected to have made the following kinds of impact on the various sites or stakeholders:
FORMS OF IMPACT IDENTIFIED IN THE BRAINSTORM SESSIONS, BY SITE OF IMPACT

Classrooms
· Teaching methods should shift to being more learner-centred
· The roles of teachers and students should become more dialogical

Teachers
· Teachers should use more communicative teaching methods
· Teachers should develop a wide range of professional skills
· There should be increased co-operation among teachers

Students
· Student participation should increase
· Students' employability should be enhanced

Materials and resources

Tests
· Teachers on the project should be enabled to use a diversity of appropriate methods for assessing learners' competencies

ESP institutions
· The status of ESP teachers within institutions should be enhanced

Creation of new institutions
· There should be an increase in the number of language centres

Other projects in the region
· There should be broad dissemination of documentation pertaining to the project
· Project achievements should be publicised
· There should be increased interaction between local project members and their counterparts in British universities
· There should be broad dissemination of materials produced by the project
It is obvious from the above list of possible areas of impact and from the diverse nature of the stakeholders, that the impact was expected to be much broader than was initially anticipated (or documented) in the original aims of PROSPER.
As will be discussed in section 4, the impact on stakeholders was indeed found to be much broader than was initially anticipated. For example, if the original aim was to upgrade the teaching of English by training teachers in a communicative ESP methodology, the findings showed that the impact on teachers was much broader than that which had been suggested when the project aim had been formulated. Teachers not only improved their classroom teaching skills; they also developed a repertoire of skills which contributed to a higher level of professionalism. Teachers displayed increased accomplishments in material writing, lesson presentation, research and entrepreneurial skills. Apart from individual achievements, PROSPER created a sense of commitment and an awareness of a common cause among its participants. This collectiveness contributed to the development of a professional community of ESP teachers - a collectivity with its own identity, which was able to work towards the achievement of shared goals.
3 Research approach
When the impact assessment team designed the research approach, they found that it was necessary to make the above list of criteria operationalisable through categorising criteria and then using the resultant categories as the basis for items to be included in the various instruments. To give one example, the Classroom Observation Chart designed for this assessment was used to collect data about what actually happens in classes - thereby detecting trends in the teaching/learning process. This chart enabled researchers to identify those areas in which the project had made a significant impact as well as those areas in which improvement was still required. The following features of a good PROSPER classroom were identified (they were based on the perceptions of teachers who had been involved in the conceptualisation of the project and its accompanying philosophy):
· There is increased student involvement in classroom decisions.
· Teachers focus more on teaching skills than on language structures.
· A wide range of learning tasks and materials which focus on communication are used.
· Increased classroom interaction is evidenced by pair and group work.
· Teachers exhibit effective classroom management skills.
· Teachers use a diverse range of techniques for the correction of errors.
· Teachers maintain a collaborative classroom atmosphere and this encourages students to take the initiative.

These features were included in the observation instrument. In many cases, it was necessary to make the feature operationalisable by breaking down the characteristic into a checklist of types of activities which could be used to demonstrate the achievement of competence. Questions based on the observation sheet were included in the teachers' and students' questionnaires and were used as a means of triangulating the data.
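As a rough sketch of how one such feature might be operationalised and then triangulated (the checklist items, scores and tolerance below are hypothetical and are not the actual PROSPER observation chart or questionnaire):

```python
# Hypothetical checklist for one classroom feature ("increased classroom interaction
# is evidenced by pair and group work"); not the actual PROSPER instrument.
PAIR_AND_GROUP_WORK_CHECKLIST = {
    "students work in pairs on at least one task",
    "students work in small groups on at least one task",
    "group outcomes are reported back to the whole class",
}

def feature_score(observed_activities: set) -> float:
    """Fraction of checklist activities actually seen during the observed lesson."""
    return len(observed_activities & PAIR_AND_GROUP_WORK_CHECKLIST) / len(PAIR_AND_GROUP_WORK_CHECKLIST)

def triangulate(observation_score: float, questionnaire_score: float, tolerance: float = 0.25) -> str:
    """Compare the observation result with the matching questionnaire item (both on a 0-1 scale)."""
    return "consistent" if abs(observation_score - questionnaire_score) <= tolerance else "needs follow-up"

observed = {
    "students work in pairs on at least one task",
    "group outcomes are reported back to the whole class",
}
obs_score = feature_score(observed)        # 2 of 3 checklist activities seen
questionnaire_score = 0.5                  # e.g. rescaled mean of the matching questionnaire items
print(f"{obs_score:.2f} -> {triangulate(obs_score, questionnaire_score)}")
```

Breaking each feature into observable activities in this way is what allows the same criterion to appear in both the observation chart and the questionnaires, and hence what makes triangulation possible.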
In addition to the triangulation of data, the project team attempted to ensure that all findings could be compared with comparable data which was usually drawn from the baseline study or from non-project institutions.
The first set of instruments was administered across the board, both to project and non-project institutions and to respondents. These instruments included the:
· Student questionnaire
Since it was evident that the project had achieved many outcomes which were not previously anticipated, it was decided that the research design should make a specific effort to identify and measure those outcomes which had not been anticipated at the inception of the project.
Since the ripple effects were broad and varied, the project team conceptualised an approach which could be used to measure and validate the diversity of outcomes that were identified. It was decided that one instrument could not be used across the spectrum of outcomes. When the researchers identified an unintended outcome, they wrote a brief description of the outcome and the impact that it might have made. This description was given to those participants who were affected by the outcome, and they were required to complete, modify, confirm or disconfirm the description as they thought appropriate. The amended versions were then used as a measure of these outcomes.
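A minimal sketch of that confirm/modify/disconfirm cycle is given below; the outcome description, the responses and the retention rule are hypothetical and are not drawn from the PROSPER study itself.

```python
from dataclasses import dataclass, field

@dataclass
class UnintendedOutcome:
    """A researcher's brief description of an unintended outcome, circulated to affected participants."""
    description: str
    confirmations: int = 0
    disconfirmations: int = 0
    amendments: list = field(default_factory=list)

    def record(self, response: str, amendment: str = "") -> None:
        # Participants may confirm, disconfirm or modify the description.
        if response == "confirm":
            self.confirmations += 1
        elif response == "disconfirm":
            self.disconfirmations += 1
        elif response == "modify" and amendment:
            self.amendments.append(amendment)
            self.confirmations += 1  # a modified description still affirms that some impact occurred

    def retained(self) -> bool:
        """Illustrative rule only: keep the outcome if confirmations outweigh disconfirmations."""
        return self.confirmations > self.disconfirmations

outcome = UnintendedOutcome("Teachers at non-project institutions have begun to request PROSPER materials.")
outcome.record("confirm")
outcome.record("modify", "Requests come mainly from economics faculties.")
outcome.record("disconfirm")
print(outcome.retained(), outcome.amendments)
```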
It was found that this method of identifying and measuring the impact of unintended outcomes gave insight into the magnitude of the PROSPER project. The list was long and varied. Much of the information gained in this way was useful in documenting the impact and recommendations for future practice.
4 Sector-wide outcomes
PROSPER was responsible for impacting on the sector in a number of different ways. The most salient of these are:
· The impact of devolved project management

Many of the outcomes of the project's management impacted on the sector insofar as they had implications for other projects in the region and/or for the management structure of British Council projects in general. These outcomes were discerned in the process of interpreting the data collected through the various stages of the research process.
For example, the findings on the management of PROSPER appeared to be relevant to project institutions and to the British Council management who had been associated with PROSPER during its implementation. The findings thus have relevance for the management of similar projects elsewhere.
One of the notable features of management was that all project members were involved in all the stages of project design. This meant that the implementation was based on the joint decisions of project members. The PROSPER experience has shown that the incorporation of this local component into project management does much to build a sense of ownership. As an unintended consequence, devolved decision-making seemed to extend to other projects in the region (such as the Ukraine baseline studies) and to similar projects being implemented in Russia. Local managers promoted from within the project family maintain the sense of project ownership and increase local ownership of project responsibilities, budgetary as well as academic.
· The shift from outside control towards local ownership
The idea of local control was extended beyond the realms of the PROSPER project to other unrelated projects in the sector. In several cases, previously London-appointed positions were transferred to local teachers who had been empowered to fill these positions.
· Consultative mechanisms
One of the successful structures created by PROSPER for consulting its members is the annual heads of department meeting. This structure was replicated elsewhere, as, for example, by the Uni-schools project in Romania, and it has also inspired the adoption of focus groups and national consultation groups.
· Regional networking
The creation of a national team, which all PROSPER teachers perceive as the main achievement of the project, has strengthened the importance of teamwork for achieving and maintaining quality standards. A regional ESP network was created and has been sustained since 1994, with different countries taking turns in organising annual meetings. Even some western countries have recently adopted the idea of regional networks; the Anti-Conference in Switzerland is one such example. The value of these networks for disseminating information and planning joint events is immense and, as the feedback from participants who attend the regional meetings suggests, PROSPER has been a source of inspiration and an explicit model for new regional developments.
· Materials development
Material writing by national teams is one such development which has inspired other projects in the region. The advisers of the ESP project in Hungary have confirmed that, in addition to using the PROSPER materials as a basis for teacher development, the Romanian experience has raised awareness of the feasibility and desirability of adopting a team-based approach to material development.
· Increased professional skills
The variety of project events and the involvement of project members in decision-making have led to the development of a whole range of professional skills. Among these are an increase in teachers' self-confidence and the development of teachers' organisational and managerial skills. These merit special mention since they have implications for project sustainability.
· The establishment of other language centres
The Language Centres (LANGCEN) project, which was born out of PROSPER, has founded a group of five language centres which function as self-funding service units at different points in the country.
· International impact
The British institutions which have been associated with the project have also been affected by their need to respond to the requirements of PROSPER. The Institute for English Language Education at Lancaster University, which was involved in the design phase of the project, responded by making a number of changes to its courses. It now continually develops and adapts the courses offered to PROSPER teachers, taking into account the diverse and changing needs of the five groups who attended these courses over the project's lifespan.
Manchester University, responsible for the delivery of a series of distance-learning modules which lead to an M Ed degree, has constantly revised its distance delivery style and the content of modules which were designed for Romania.
· Code of project practice
Finally, it might be argued, on the basis of the outcomes claimed by the project, that PROSPER made a significant, and to some extent, a global impact on project practice in the Council. One of the outcomes of this impact was that a code of practice for grant-funded project management was created.
5 Conclusion

The findings of this impact study reflect the kinds of changes that have taken place in the ESP profession in Romania through the influence of PROSPER. Although the study reflects the complexity of ESP teaching and learning in a particular country, it may also attain a wider relevance by contributing to a better understanding of the project approach and to managing innovation in ELT and in education in general. The research process itself may be of relevance to teachers who are involved in educational projects and who may wish to study the effects of those projects in detail. The impact study, like many other PROSPER developments, calls for reflection on the nature of the teaching profession and on what seem to be false boundaries between teachers, academics, researchers, and course and materials designers. The teachers involved in educational projects and processes of innovation may (as the project shows) take on quite complex and unexpected roles.
Footnote
1. Professionalisation here refers to teachers' ESP teaching abilities, their abilities in research and in materials, course and curriculum design, and also to their own perceptions of themselves as professionals, as evidenced by their self-evaluations.
Roopa Joshi
District Primary Education Programme
Government of India
· Firstly, it was necessary to address the question of how the DPEP impact assessment model should be designed. The how, she suggests, refers to the design on both a conceptual and an operational level.
This paper attempts to provide a review of a critical area of project management in DPEP, namely that of the practices and strategies used in the assessment of project outcomes. It has the following three-fold focus:
Firstly, the paper begins by highlighting how the issue of assessing project outcomes is contextualised in terms of the goals of the project. The analysis therefore covers key elements of strategy that are built into project design and that operationalise both on-line and intervention-specific project impact studies that are undertaken within DPEP.

Secondly, the paper looks at current practices in the assessment of project impact prevalent across the entire area of DPEP's intervention in India, and it considers whether decentralised structures have internalised project management skills intrinsic to the spirit of DPEP. In other words, it considers whether project management skills have been disseminated to those project managers who are involved in the decentralised structures.
Finally, the paper looks at possible alternatives for strengthening initiatives for project impact assessment.
2 Monitoring of project impact in DPEP

An integral component of the DPEP project design is that of research and evaluation. From the outset of the programme, research findings have made an important contribution to guiding the strategies employed; this was evident right from the pre-implementation phase, in the form of baseline and social assessment studies for the project districts. Ongoing research and evaluation were also crucial during project implementation. The research and evaluation component enabled the project to:
· plan, implement and monitor initiatives for the promotion of research and evaluation at all levels (i.e. the national, district and sub-district levels) within the project as well as (perhaps more importantly) at school level, where teachers were involved in action research
· extend support to endeavours for capacity building in training programmes which aimed to enable practitioners to do evaluation and action research, and to grasp the rudiments of research methodology
· conduct/commission specific evaluations for the requirements of project implementers
· undertake a dissemination of findings and the outcomes of research exercises
· encourage networking between the larger research community in various institutions and universities and DPEP so as to encourage these institutions and to provide an opportunity for researchers to carry out research in elementary education.
It should, however, be kept in mind that the framework and areas for evaluation differed across the various levels in the DPEP structure according to whether the focus was on a national, state, district or sub-district level.

Accordingly, the impact assessment was integrated into the various DPEP project activities and was operationalised across states as well as on a national level.
2.1 Assessment at the national level
At the national level, examples of evaluation studies include the evaluation of:
· project management
· institutional development (various aspects of institutional capacity building)
· community participation
· access, enrolment and retention through periodic surveys
· teacher training
· classroom processes

2.1.1 Differentiating between different levels of impact
The expected outcomes of the evaluation studies differed according to the perceptions/requirements at different levels of project management. For instance, it is likely that an evaluation of the delivery of teacher training at a district level would focus on the planning, organisation and actual delivery of the training programme. It would also focus on transmission losses, teachers' perceptions, motivation, feedback and issues pertaining to the sustainability of the training programme.
An investigation into a similar project at state level would require the investigation to focus on the adequacy of preparation, the participation of targeted beneficiaries, the quality of the course content, the enhancement of trainees' skills, the competence of the master trainers, and so on.
At a national level, concerns would differ from those of evaluations conducted at state or district levels. For example, a national evaluation would be concerned with whether, or the extent to which, there had been an improvement in learners' competencies, or with the type of corrective measures (e.g. improvements in logistics or in the curriculum) that would be required if the delivery of training programmes at all levels were to improve.
What I have said above emphasises how important it is for effective project managers to be able to adapt the use of assessment instruments for the varying situational contexts in which assessments are conducted at state or district levels. The question as to whether processes to build capacity for impact assessment (other than with on-line monitoring) have been addressed within project structures, is an important question which will be considered in the final section of this paper.
3 Current practices of project impact in DPEP

Much effort went into developing evaluation plans for assessments that were to be undertaken at national and state levels. This necessitated that consideration be given to what was currently being done in DPEP and (thereafter) to what should actually be evaluated. A national workshop on evaluation was held in 1995. The workshop identified the following priority areas for evaluation in DPEP and indicated which aspects should be assessed.
3.1 Priority areas for assessment
PRIORITY AREAS FOR EVALUATION AND THE ASPECTS ASSESSED IN EACH

Training
· The quality of teacher and instructor training. This included assessment to determine the extent of the dilution of the training that may have resulted from the cascade model of training.

Management training
· The training that was offered to enskill managers as well as the training which was presented to members of village education committees

Decentralised and participatory management
· The functioning of district and state programme management units

Community mobilisation and participation
· The functioning of village education committees and an assessment of the flow of information, and the way that information is used at different levels. This includes a consideration of the efficacy of the management information system.

Institutional development
· Resource institutions such as district evaluation teams and other resource and administrative institutions

School functioning and effectiveness
· The pedagogical processes as well as the supply and utilisation of materials

Access and enrolment
· These are assessed by way of an analysis of data from education management information systems, through the use of case studies and also through an assessment of learners' achievements.
The following list of what was evaluated at a national level concurs with the above table.
What was evaluated was:
· managerial structure and processes under DPEP
· institutional development of State Councils of Educational Research and Training (SCERTs) and DIETs
· classroom processes
· a survey of learners' mid-term achievements
· learners' access and retention
· community participation in DPEP
· teacher grants and school grants
· interventions for improving the education of the girl child
· the external evaluation of civil works
· in-service teacher training
3.3 Sample monitoring

Along with various monitoring and evaluation techniques employed at the national and state level, a form of sample monitoring was also conducted in three DPEP states. Components of the monitoring and evaluation of the sample districts included:
· a review and analysis of the information that was gathered periodically from the sample districts through the DPEP management information system
· a review and analysis of the quantitative studies undertaken by the DPEP Bureau in the sample districts
· an intensive follow-up of the implementation of the Joint Supervision Missions' recommendations made in the sample districts
· designing a set of activities to monitor and evaluate
(1) the techniques, measures and processes adopted by the sample districts
(2) the process of change in classroom practices and improvements in school effectiveness
Different strands in the evaluation of AP-DPEP were developed with the assistance of DFID in 1996. These are the introduction of the annual Schools and Pupils Survey, a set of short- and long-term qualitative studies, and a set of process indicators of implementation, for use in planning and evaluation, that allow fast, large-scale qualitative monitoring which can be aggregated across districts.
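The following sketch (with hypothetical indicator names and ratings, not the AP-DPEP instruments themselves) illustrates what aggregating such process indicators across districts might look like.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical district returns: each district rates each process indicator on a 0-3 scale.
district_returns = {
    "District A": {"village education committee meets monthly": 3, "teacher grant spent on classroom materials": 2},
    "District B": {"village education committee meets monthly": 1, "teacher grant spent on classroom materials": 3},
    "District C": {"village education committee meets monthly": 2, "teacher grant spent on classroom materials": 2},
}

def aggregate(returns: dict) -> dict:
    """Mean rating per indicator across all reporting districts."""
    by_indicator = defaultdict(list)
    for ratings in returns.values():
        for indicator, value in ratings.items():
            by_indicator[indicator].append(value)
    return {indicator: mean(values) for indicator, values in by_indicator.items()}

for indicator, value in aggregate(district_returns).items():
    print(f"{indicator}: state-level mean {value:.1f} out of 3")
```

Because the same indicators are collected in every district, the returns can be rolled up quickly to state or national level, which is what makes this kind of monitoring fast and large-scale.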
3.4 How do states address evaluation issues?
Almost all of the 14 states involved in the DPEP intervention have undergone an assessment of various processes which were initiated in the first three years of the implementation. This assessment occurred at the national level as well as at the level of state specific initiatives.
Positive evidence arising out of this was a heightened awareness of the importance of evaluation and project impact assessment. This was obvious from the array of interventions that were proposed and from the efforts made by some states (albeit on a limited scale) to increase their internal capacity to do evaluation.
While states such as Andhra Pradesh, Kerala, Assam and Madhya Pradesh have undertaken initiatives which have increased the capacities of district and sub-district institutional structures, states such as Tamil Nadu, Maharashtra, Gujarat and Orissa have incorporated their state apex organisations (such as the State Councils of Educational Research and Training) into such efforts.
An increased capacity for doing action research and an improved understanding of research methods were among the outcomes of the research training provided, some of which was presented by national apex authorities such as the NCERT and the Research and Evaluation Studies Unit of EDCIL, New Delhi.
There is no doubt that DPEP has provided an opportunity for generating better research activities that will eventually contribute to better programme management and implementation.
4 Future strategies and issues for project impact
This paper has looked at project impact assessment and evaluation in the context of DPEP by focusing on coverage as well as on programmes designed to enhance skills. It must, however, be remembered that the DPEP ethos is anchored in initiating educational reform through process-led change, thereby providing a platform for generating positive spin-offs for sector-wide, institutional outcomes in the country. Evidence suggests that it is equally critical that the structures for project management and implementation be sufficiently flexible and decentralised to encourage process-specific outcomes. This would in turn provide an opportunity for networking between larger subsets of stakeholders in the programme, namely teachers, institutions, communities and programme implementers. It is contended that the involvement of these subsets of stakeholders would most certainly make a significant impact on the achievement of sector-specific outcomes.
Alternative approaches that might enhance the use of project impact assessment in DPEP could be considered. It is possible, for example, that an extension of the capacity for research and evaluation skills to as many states as possible would increase the quality and quantity of such assessments. Another approach might involve the development of a list of priority areas and a framework for various research designs which could be conducted on a systematic basis. This might be effected by developing the skills of participants deployed in states, districts and sub-district levels with the help of research organisations. A third possibility would be to provide criteria which would ensure that project impact assessment is sustainable. This would mean that the scale of operations, particularly at the district and sub-district level, would have to be of a sufficient magnitude to allow replicable cost-effective studies to be conducted.
Project impact assessment in the context of DPEP would therefore need to be strengthened at all levels of project management, namely at national, state, district and sub-district levels. For this purpose, external and independent assessments would be required, and there would also be a need to draw existing institutional resources into evaluations. Because such a process would ensure that institutional structures provide inputs for design and for capacity building, such structures would be drawn into programme implementation on a sustained basis.
5 Conclusion
The effectiveness of DPEP lies in its complementary use of strategies and holistic interventions. Impact assessment is only one of the tools that enable project implementers and stakeholders to obtain a measure of the project's progress towards achieving its goals. Consolidating the impact of a programme such as DPEP would enhance the impact that arises from the participation of all stakeholders, particularly at those decentralised levels where the goals of the programme will ultimately be realised. This is a crucial aspect of DPEP's mission and should not be forgotten.
Carew B W Treffgarne
Senior Education Adviser
DFID, London

For DFID this Forum provided a welcome sharing of insider and outsider perspectives on the key questions that can arise in the planning, design, management and implementation of impact studies. It drew attention to the hard choices that have to be addressed by the stakeholders involved in impact assessment. Issues concerning timing, time frame, availability of finance, duration, selection of impact evaluation researchers, capacity building strategy, report writing, dissemination and ownership may lead to compromises in the organisation, scope and scale of the exercise.
The Forum provided a constructive focus for Education Advisers in DFID by emphasising some of the key elements in the Post Jomtien learning agenda, in which participatory impact assessment features prominently as a formative approach to evaluating impact. The implications for project ownership, capacity building and sustainability emerged as an underlying theme throughout the Forum. Speaking from a formative (rather than a summative) standpoint, John Shotton reminded us that the objectives of a participatory impact assessment can be:
· to gauge the extent to which a programme has led to desired changes in the target audience and field
· to determine whether or not, and to what extent, a programme might have met its objectives
· to engage local ownership and leadership within a context of decentralisation of programme management and implementation
· to enable the different perceptions and interests of stakeholders in a project to be taken into account when planning any subsequent follow-up or a new phase
· to develop capacity building skills through facilitating local applied research, which, in turn, will enhance social discourse about relevant learning centre-based issues

Although the Forum demonstrated that DFID Education Advisers have been using participatory approaches in several projects in different parts of the world, Veronica McKay's participatory action research model provided us with an expanded vision of the many potential benefits for those associated with the project. The wide range of ways in which it can be formative and capacity building includes:
· enabling all participants to become co-researchers
· enabling all participants to define the criteria used for assessment
· involving the participants in interpreting and authenticating the findings
· engaging the participants in the cycle of reflection/action/reflection
· enabling the poor or marginalised to impact on policy
· enabling bureaucracies to become more participatory

This range of possibilities provides DFID with a convincing case for using this approach for empowering project stakeholders.
At this point it may be helpful to sound a cautionary note. In using evaluation of project impact as a formative tool, we may encounter problems when we try to generate the relevant skills and enthusiasm for the exercise. It was pointed out that some people may be reluctant to take part, particularly if they have not had any previous experience of this kind of approach. Involving people from poorer, grass-roots communities may be problematic if they feel inhibited about having to work with people with whom they would not normally have had any close contact. In spite of such difficulties, DFID needs to persevere in finding culturally sensitive ways of engaging such key stakeholders in the process.
Such risks must be considered against the potential benefits. A participatory action research approach provides an on-going assessment of project impact. It encourages teachers to develop the habit of continually reflecting on their effectiveness, and it allows project players, project monitors, evaluators and learners to come together to decide what constitutes best practice. A participatory action research approach may therefore strengthen the evaluation in a way that enhances the sustainability of project impact. The significance of participatory evaluation of programmes was reinforced by the examples Alan Peacock gave of using this approach as a means of teacher professional development in South Africa and Sri Lanka. The value of participants developing the impact criteria themselves was contrasted with the risks (such as inappropriate dependency) that can arise from recourse to external consultants for this purpose.
One problem that emerged from several contributions to the Forum concerns the time factor. This relates to both the time-tabling of the exercise (which may be dictated by budgetary considerations), and the actual time-schedule that is adopted for the conduct of the exercise (which may likewise be influenced by a financial imperative). The timing of any evaluation, particularly those using a participatory approach to impact assessment, may crucially affect the quality and validity of the outcome of the exercise. Given the tension between the availability of funding for an impact assessment, and the time needed for an adequate assessment to be undertaken, DFID is urged to take both aspects of the time factor into greater consideration in project planning and project design. The following conclusions became apparent:
· Unless the timing of the assessment allows an adequate period for the programme outcomes to be realised, the formative aspect of a participatory approach to the impact study may be undermined.
· Sufficient time needs to be allocated at the outset of an impact study in order to engage all the main stakeholders and enable them to participate. Time is needed to build up trust and confidence in the exercise. Time is also needed if potential language and cultural barriers that may prevent everyone from participating fully are to be overcome.
· Time needs to be set aside for training key project personnel in participatory action research methods.
· Reporting time at the end of the exercise needs to be factored in if the various perceptions, priorities and expectations of different audiences are to be accommodated.
· The time period allocated for the impact assessment may need to be adjusted once the scope and scale of what realistically can be undertaken becomes apparent. Insufficient time undermines the qualitative validity of the impact assessment and also allows no margin for any unforeseen external events that might impinge on the exercise to be dealt with.
The Forum was enriched by the direct experience which several participants had gained in baseline studies in very different project contexts in India, Nicaragua, South Africa, Sri Lanka, and Central and Eastern Europe. According to DFID procedures, baseline studies should be factored into the project design either before, or at the start of, a project so that subsequent progress and impact can be planned for and assessed. Carol Moloney justified her argument that 'A baseline assessment is a wondrous thing!' by listing the wide range of purposes that baseline studies fulfil. It is therefore constructive for Education Advisers to note that baseline studies can be used:
· to set the scene for involving all stakeholders at the outset by ensuring that there is shared understanding of programme objectives and context.
· to provide an initial assessment mechanism (or benchmark) against which subsequent evaluations can be measured.
· to serve as an in-depth needs analysis, fine-tuning basic objectives set in logframes in the light of unforeseen issues or developments.
· to foster greater ownership of the programme through necessitating a high degree of collaboration in the baseline assessment.
· to emphasise delivery 'at the chalkface' right from the start of the programme by focusing on the school or classroom in which baseline data needs to be collected.
· to serve as a reform tool in itself by giving department officials, college lecturers and teachers the opportunity to develop skills of assessing and supporting teachers in a shared learning environment.
The Forum concluded that sufficient time, finance and resources need to be made available for baseline studies so that a comprehensive range of initial perspectives and data from a variety of sources may be captured. It is essential to ensure that the baseline study provides an adequate benchmark for whatever evaluation may be undertaken in future (irrespective of whether this is formative or summative, or conducted by 'insiders' or 'outsiders' to the project).
It emerged from Forum discussion that, in participatory action research, it is more appropriate to refer to stakeholder evaluation, rather than to use the outmoded terminology of the pre-Jomtien era in which donors were juxtaposed with recipients or beneficiaries. The presentation by Dermot Murphy and Pauline Rea-Dickins defined stakeholders in terms of power differentials, such as knowledge, expertise, control, budget control, responsibility, benefits, loyalty, status and distance. The conclusion for DFID is that understanding such stakeholder perspectives will enable us to plan and organise impact studies more effectively, and will promote more and better use of their findings. It was evident that responsibility for different stages of the impact assessment needs to be placed at the appropriate level where decisions will be most effectively taken. Like any other evaluation exercise, impact assessment has to be carefully planned and managed so that it is not undermined by funding or time restrictions.
Stakeholder analysis raises the question of insider/outsider involvement in participatory evaluation. The distinction between insiders/outsiders to a project emerged from the workshop as more pertinent to impact assessment than the original distinction in our workshop programme between national researchers and external researchers. There was consensus among workshop participants that there is no place for fly-in/fly-out (FIFO) consultants in impact studies - given that the emphasis in participatory impact assessment is on training stakeholders in the necessary research skills to investigate project impact themselves.
The question of who should be involved in impact evaluation can be both politically and culturally sensitive. Not only should the stakeholders involved reflect a cross section of those with an interest in the project's outputs, but the selection of such researchers must ultimately depend on those inside the project. Given that the nomination of those involved (and ultimately those who should represent them at any presentation of the findings) is crucial to the success of the exercise, the Forum concluded that those inside a project are better placed to make such decisions.
Identifying the level and strength of project impact calls for qualitative as well as quantitative research methods. Participants at the Forum agreed that, in impact evaluation, the process is as important as the product of the exercise because of the enhanced role attributed to researchers inside the project. More emphasis needs to be placed by DFID on training trainers in participatory research methods if impact is to be evaluated effectively from an insider perspective. The long-term impact of a project can only be assessed after it has ended; empowering learning communities to undertake impact research themselves would therefore make it feasible to leave the assessment of project impact until some time after agency support has been withdrawn. For DFID, the practical conclusion is that different impacts may be experienced by different stakeholders at different points, either during or after the project cycle.
It was encouraging to note widespread acceptance of the significance of unanticipated as well as anticipated benefits. The DFID Glossary of Aid Terms points out that "only planned, positive impacts will be included in the Logical Framework". Although DFID has to work on the assumption that planned impacts will be positive rather than negative, Education Division's experience that unplanned impacts can add an invaluable qualitative dimension to the benefits anticipated in the project logframe was borne out by Mfanwenkosi Malaza's case study material from the Mpumalanga Primary Schools Initiative in South Africa. Mohammed Melouk provided another dimension by referring to the different attitudinal agendas and perceptions of those involved in a project as side effects, linked to predicted and unpredicted outcomes.
Another aspect that DFID needs to take into account when identifying the key stakeholders in a project is the question of dissemination strategy, which should be built into project or programme design. Impact studies inevitably raise the question of the audience for whom the findings of the evaluation are intended. The dissemination strategy has to take into consideration who will be involved in writing the report, who will read it, and to what extent it will be readily available to all stakeholders.
Clara Ines Rubiano and Dermot Murphy drew DFID's attention to the different stages at which reporting can be undertaken, as well as the multiple audiences who will require feedback from the impact study. N.V. Varghese thought that stakeholder workshops should be organised for such reporting, but reminded DFID of the importance of working out how the findings should be presented. The existence of multiple audiences raises the question of whether there should be one report or several. DFID's conclusion is that different types of reports may be necessary when there are aspects of the impact study that some audiences need to appreciate in greater depth or detail, in order to ensure that the outcomes can be followed up or made more sustainable. Some reporting may benefit from a comparative framework or from a DFID/non-DFID perspective. It could be constructive to share and compare patterns emerging from impact studies - such as the implications for institutional practices.
During the Forum it was reiterated that it would be to the advantage of all stakeholders if more of the lessons which have been learned could be shared across projects. Projects and programmes would benefit from a greater cross-fertilisation of information about similar experiences. Although it was recognised that this Forum provided a useful opportunity to discuss issues arising from impact studies in a variety of different contexts, DFID was asked to concentrate more effort on sharing expertise across projects by promoting south/south collaboration and experience in impact study research. It would contribute to the demystification of impact studies if they were more readily available in the public domain.
Although the Forum covered the majority of key issues in impact evaluation, it also exposed areas that could be researched in more depth. These include the advantages and disadvantages of using project logframes, the balance between personal, institutional and sector wide outcomes, and the inter-relationship between social, educational, institutional and economic criteria in impact assessment. Mirela Bardi pointed out that there is also scope for closer examination of the instruments used in impact research, and that this is a topic that could be explored in more depth in a future workshop.
The Forum highlighted the value of impact assessment as an empowering process for stakeholders in a project or programme, for whom it can be formative in a capacity-building way that helps to reinforce a sense of ownership. It was recognised that good communication channels between those involved in the impact assessment are essential, because information sharing and feedback foster greater transparency. Education Division's conviction about the value of a participatory approach to impact assessment was reinforced by the Forum. The discussion drew attention to the complexity of the process and emphasised the many benefits that it holds for funding agencies, primarily because of the way in which a formative approach to impact assessment clarifies project ownership for all parties concerned. It therefore has the potential added value of making project achievements more sustainable.