

1. WHAT IS AN IMPACT STUDY AND HOW SHOULD WE DO IT?


1.1 Participatory impact assessment
1.2 Participatory action research as an approach to impact assessment
1.3 Participatory approaches to impact studies
1.4 Evaluation vs impact studies

1.1 Participatory impact assessment

John Shotton
Centre for Overseas and Developing Education
Homerton College
University of Cambridge


In this paper, John Shotton considers the changes in the theory and practice that are evidenced in the field of project impact assessment in the post-Jomtien era. He indicates that subsequent to the Jomtien Conference in 1990, aid programmes were increasingly characterised by a shift away from being funder-driven towards being locally owned and locally driven. This paradigm shift has been possible through, inter alia, the development of local capacity. This shift, Shotton indicates, has radical epistemological implications for the assessment of project impact - an issue that this paper interrogates. The author presents a strong case for formative participatory impact assessments, which, he argues, contribute to the building of project capacity and local ownership. Participatory practice enables participants to learn on the job and is more likely to be responsive to local needs than are traditional approaches.

Finally, Shotton demonstrates the shift from traditional forms of assessments to participatory assessments by contrasting the assessment of projects that might be classified as traditional (pre-Jomtien) with those that demonstrate what he considers to be the essential ingredients of participatory practice.


1 Introduction

There are three important contexts to this consideration of the nature and operation of participatory impact assessment:

· The first is what King (1991) has called The Post-Jomtien Curriculum. This is the learning agenda for the international donor and lending agencies laid out by Third World Network at the World Education for All Conference (1990) at Jomtien. The agenda centres on issues of local ownership and control in basic education aid programmes and includes a substantial critique of donor- and lender-directed approaches to evaluation in the pre-Jomtien era.

· The second is the shift in approach of some of the international donor and lending agencies in some projects to the Post-Jomtien Curriculum.

· The third is a focus on a sample of basic education programme evaluations in an attempt to draw out the essential ingredients of participatory impact assessment. The evaluations considered are by no means all examples of participatory practice. On the contrary, I make comparisons of participatory and more conventional and traditional approaches.

2 What is impact assessment?

Before we consider participatory approaches to impact assessment, it is important to be clear about the nature of impact assessment itself. Impact assessment may be distinguished from other types of evaluation by the area of the programme on which it focuses. This logic follows the evolution of the programme as it unfolds and has been a generally useful paradigm in educational evaluation. Rossi and Freeman (1993), for example, distinguish between three programme phases which strike me as particularly useful:

· Conceptualisation and design
· Monitoring and implementation
· Assessment of effectiveness
Each of these phases is compatible with different evaluation strategies:

2.1 Conceptualisation and design

At the conceptualisation phase of the programme, a diagnostic evaluation procedure may be appropriate as research questions focus on programme features such as the programme's underlying assumptions, its logic, major stakeholders, the programme's objective, and the context in which implementation is to occur. Adequate understanding of these issues is critical before a programme is designed and started.

2.2 Monitoring and implementation

The second stage, monitoring and implementation, focuses on the programme's operations after the project has started. Here, several types of evaluation may be appropriate for a given objective. These are essentially formative evaluation approaches and are intended to improve the overall operations of the programme. Several different evaluation modes could be included in this group, including evaluability assessment, which attempts to answer the basic question of whether a programme can be evaluated at all. Perhaps the best known in this group, though, is process or implementation evaluation, which focuses on delivery and assesses the programme's conformity with its basic design. Performance monitoring, using implementation indicators, could also be included in this group. This type of evaluation periodically reviews the short-term outcomes of the programme, along with its quality, to assess the degree to which the programme's activities affect these outcomes.

2.3 Assessment of effectiveness

It is in the phase immediately after initial implementation that we find impact assessment. Impact assessment gauges the extent to which a programme has led to desired changes in the target field and audience. It implies a set of programme objectives that can be identified and used as a basis for measuring the programme's impact. Thus the overall goal of an impact assessment is to determine if, and the extent to which, a programme has met its objectives. In this phase of the programme, distinguishing impact from the programme's outputs and outcomes is often valuable. Outputs refer to the immediate consequences of the programme, whereas outcomes describe the longer-term results. Both outputs and outcomes may be intended or unintended, and need to be assessed for their logical relationship to final programme objectives.

2.4 Formative or summative assessments

It has often been argued (IDRC 1972) that impact assessment can only be summative. However, given the time frame of most basic education aid programmes, it is critical that they are formative. As Phile (1994) argues, impact assessment and evaluation in general must not simply serve the need for the international donor and lending agencies to satisfy their respective governments' treasury departments and banks. On the contrary, the priority should be to serve the needs of primary users, and it is here that a participatory paradigm becomes essential. Though Phile recognises the need for the agencies to benefit from evaluation, for him the priority remains advocacy on behalf of primary users.

That this is necessary is clear from the principles for the evaluation of development assistance set out by OECD (1992: 132):

The main purposes of evaluation are:

· to improve future aid policy, programmes and projects through feedback of lessons learned.
· to provide a basis for accountability, including the provision of information to the public.
To this should be added purposes that reflect the conclusions of the Jomtien Conference in relation to evaluation, namely that evaluation should assist capacity building at the local level, and local ownership and control, in a context of decentralised programme administration.

3 What is participatory impact assessment?

By participatory impact assessment I am referring to what has been described as applied social research that involves trained evaluation personnel and practice-based decision makers working in partnership (Cousins & Earle 1992). Usually decision-makers are donor or lending agency personnel and recipient country administrators with programme responsibility, or people with a vital interest in the programme. Participatory impact assessment is best suited to formative evaluation exercises that seek to understand innovations with the expressed intention of informing and improving their implementation. As I indicate later, two projects that fit this bill are two of the largest post-Jomtien Education for All (EFA) programmes in the world, namely the District Primary Education Programme (DPEP) in India and the Effective Schools Through Enhanced Educational Management (ESTEEM) programme in Bangladesh - both substantially funded by the Department for International Development (DFID).

In participatory impact assessment, a crucial part of the capacity building deemed necessary for evaluation by the Jomtien Conference is to train key personnel (project administrative staff) in the technical skills crucial to the successful completion of the research exercise. Thereafter, practitioners (resource centre staff, teachers and community members, including those on school committees, parents and possibly children and other learners) can learn on the job with mentoring and workshop input where necessary. When this happens, both parties participate crucially in the research process. Such learning is an indispensable part of the participatory model since the intention is that key administrative personnel develop sufficient technical knowledge and research skills to take on the coordinating role in continuing and new projects, and that they need rely on the initial trainer only for consultation about technical issues and tasks such as statistical analysis, instrument modification and technical reporting. Participatory impact assessment is likely to be responsive to local needs, while maintaining enough technical rigour to satisfy probable critics - thereby enhancing use within the local context.

4 How is participatory impact assessment different?

Participatory impact assessment is conceptually distinguishable from other types of named collaborative enquiry and evaluation on two important, although not independent, dimensions: goals and process.

4.1 The goals of participatory impact assessment

In relation to goals, the pre-Jomtien orientations designed by the northern-based academic community advocated the simultaneous improvement of local practice and the generation of valid social theory (Cochran-Smith & Lytle 1993) as in, for example, the so-called state of the art evaluation of the elementary education programme in the Philippines in the 1980s. Similarly, more contemporary practitioner-centred instances of collaborative evaluation have expressed as a goal the empowerment of individuals or groups, or the rectification of social inequities. Such a goal is expressed, for example, by the Swedish International Development Agency (SIDA) evaluation of the teacher training programmes for primary and secondary education in Mozambique and Guinea-Bissau in the 1980s and 1990s (Carr-Hill 1997). These interests are beyond the scope of participatory impact assessment since such interests belong firmly to programme goals and programme implementation. I would argue that it is fundamentally dishonest to believe that an evaluation process can achieve such ends. This would constitute only a reflection of tokenistic commitment to a social agenda by non-practitioners more interested in the formulation of grand social theories and rhetoric than in reality: it would be tantamount to a 'deodorant' that tries to sanitise the inadequacies of overall programme direction.

The approach that I would advocate is not ideologically bound, nor is it devoted to the generation of social theory. Rather, participatory impact assessment has, as its central interest, an intention to enhance the use of evaluation data for practical problem solving within the contemporary organisational context - an endeavour that will support the overall programme goals. Indeed this is the essence of Phile's argument in relation to the post-Jomtien scenario, namely that the driving force for a new agenda relies on overall programme definition and orientation, and that we need to make sure that individual programme components accord with that definition and orientation.

4.2 The process of participatory impact assessment

The second, process-based, dimension takes shape inside participatory impact assessment by having administrators and key organisational personnel work in partnership with members of the community of practice, in contrast to other models - such as the benefit monitoring model that served the Nepal Basic Education Programme and the Nepal Secondary Education Project through the 1990s - which exclude the latter. Administrators bring an important set of technical skills to the evaluation act, while practitioners bring a thorough knowledge of context and content; the partnership between them is critical for effective participatory impact assessment. The former work as coordinators or facilitators of the research project, but fully share control of, and involvement in, all phases of the research process with practitioners. This thrust is distinguishable both from pre-Jomtien forms of evaluation, where control of the research process is maintained by the expert evaluator or evaluators (Whyte 1991), and from so-called practitioner-centred approaches, where such control lies completely in the hands of key individuals in the practitioner group (Elliot 1991).

4.3 Some references to participatory assessments

Participatory impact assessment may thus be summarised against what I call the pre-Jomtien model, which has often masqueraded as a participatory entity:

· The pre-Jomtien model, the benefit monitoring model in Nepal being a classic example, attempts to engage many potentially interested recipient-country administrators in order to create support, but without yielding any power in the crucial areas of model focus and design. The participatory model, envisaged for ESTEEM in Bangladesh, will actively involve primary users at all stages of the impact assessment process, from focus and design through to dissemination of conclusions.

· The pre-Jomtien model involves programme participants in a consultative way to clarify domains and establish the questions for the evaluation project. SIDA's work in Mozambique and Guinea-Bissau epitomises this. The participatory model engages the primary users in the 'nuts and bolts' of focusing the assessment, formulating the design, deciding on the methodology and sample, developing the instruments for data collection, collecting the data, analysing and interpreting the data and reporting the results and making recommendations. Possibly the best example of this is the impact assessment mechanism that has been developed in Andhra Pradesh, India, as part of DPEP.

· In the pre-Jomtien model, the expert evaluator or evaluators are the principal investigators who translate the institutional requirements into a study and conduct that study, as in the case of the Philippines evaluation already referred to above. In the participatory model, as in the case of DPEP Andhra Pradesh, the external consultants help only to coordinate the exercise and are responsible for advising about technical support, training and quality control. Conducting the study is the responsibility of practitioners.

5 Why participatory impact assessment?

The underlying justification for a genuinely participatory approach is problem solving in professional work, which is closely tied to Schön's (1983) terms: reflection-in-action and reflection-on-action. Through participatory impact assessment, recipient country administrators and donor and lending agency members may be surprised by what they observe and may therefore be moved to rethink their practice. Unlike so-called emancipatory forms of action research, which use Participatory Rural Appraisal (PRA) for example, the rationale for participatory impact assessment resides not in its ability to ensure social justice or somehow to level the societal playing field, but in the utilisation of systematically and socially constructed knowledge.

5.1 A consideration of the utility of the findings of an evaluation

I here express my orientation towards evaluation utilisation, which suggests that under certain conditions, evaluation or applied research data will be used either for providing support for discrete decisions in programme constituencies (e.g. decisions about programme expansion) or for educating organisation members about programme operation and the consequences of programme practices. These uses of data are known to be dependent on two main categories of factors:

· features of the evaluation itself, including its timeliness, relevance, quality and intelligibility

· features of the context in which data are expected to be used, such as programme implementers' needs for information, the political climate, and receptiveness toward systematic enquiry as a mode of understanding (Cousins & Leithwood 1986).

This framework for understanding participatory impact assessment is inadequate in at least two respects.

Firstly, it links the use of data to an undifferentiated individual called the decision-maker. To assume that organisational decisions supported by data are the product of single individuals processing information and translating it into action is, at best, tenuous and probably not representative of decision making in most organisations. Rather, decisions made explicitly or implicitly are the product of some form of collective discourse, deliberation or exchange. As such, it is eminently preferable to envision the nature and consequences of participatory impact assessment in the context of organisational groups, units, subunits and the like.

Secondly, the evaluation framework may be described as inadequate since it fails to recognise the powerful influences of various forms of interaction between practice-based and research-based communities. Considerable evidence is accumulating to show the benefits of combining the unique sets of skills brought to projects and tasks by both researchers and members of the community of practice, regardless of whether or not the tasks are research-based.

Cousins and Earle (1992) have provided a thorough review of a variety of lines of research-based evidence in support of the participatory impact assessment process. Their findings underscore the importance of social interaction and exchange and the need to conceive of organisational processes in collective and social terms. They also support the integration of research and practice specialisations as a means to stimulating enduring organisational change. An appropriate theoretical framework in which to situate participatory impact assessment, then, will be one that adheres to such principles.

Participatory impact assessment, viewed from this perspective, is a strategy or intervention that produces adaptive knowledge, in so far as it monitors and provides an opportunity for interpreting programme outcomes, and generative knowledge, in so far as those interpretations lead to enlightenment or the development of new insights into programme operations, effects, and especially organisational processes and consequences.

6 Conclusion

Finally, the post-Jomtien changes in the theory and practice of project impact assessment have encouraged the shift to participatory assessment - an interventionist practice that contributes to many dimensions of the project. This is more so when participatory assessments are undertaken as formative activities. The evaluative assessment can then be regarded as a powerful learning system, designed ultimately to foster local applied research, and thereby enhance social discourse about relevant learning centre-based issues. When applied research tasks are carried out by school and district staff, their potential for enhancing organisational learning activity will be strengthened and the sustainability of the project enhanced.

1.2 Participatory action research as an approach to impact assessment

Veronica McKay
Institute for Adult Basic Education and Training
University of South Africa


In this paper Veronica McKay corroborates John Shotton's view of the post-Jomtien shift towards a participative process for researching project impact. In elaborating this point of view, she asserts that the participative approach to assessment presupposes an epistemological shift from more realist-orientated research approaches towards a non-realist approach to assessing impact. This view of knowledge, she argues, is diametrically opposed to the positivist belief in an objective reality and in knowledge that is universally true or false - the epistemological presupposition which inspired traditional pre-Jomtien approaches. She argues that a non-realist orientation opens the way for multi-vocal discourses, and that this is a prerequisite for participation.

One of the implications of the non-realist epistemology is that teachers (as active participants) are brought into our endeavours to assess project impact. It is only by doing this, she asserts, that we can ensure that the assessment of impact will be formative, relevant and educational for teachers at the chalk face. In this paper McKay discusses the advantages and problems associated with participatory action research (PAR) in general and then specifically examines how it may be applied to the assessment of impact. She illustrates her points by making reference to the application of PAR to the assessment of impact in the Molteno Early Literacy and Language Development (MELLD) project in Namibia.


1 Introduction

This paper is informed (in general) by my experiences of impact assessment of the various school-based projects with which I have been involved in South Africa, as well as by the many opportunities I have had as a sociologist to apply the PAR approach to varied development contexts. More specifically, I shall illustrate my contentions by referring to my role in the Namibian Molteno Early Literacy and Language Development project, which is part of a broad programme of ODA/DFID-financed assistance in the education sector in Namibia.

1.1 Project outputs

The primary goal of the MELLD project (as is the case with most of the projects referred to in this publication) is the enhancement of teachers' capacities. The MELLD project document (revised in 1995) outlines the various outcomes which the project was expected to achieve, namely, to:

· introduce a learner-centred methodology into literacy and language classrooms in the lower primary grades at pilot schools

· empower the Ministry of Basic Education and Culture with the capacity to provide and manage in-service training and monitoring for literacy and language teachers in primary schools

· establish (both within the Ministry and in the regions) a sustainable research and development cadreship who would be able to produce Namibian mother-tongue and English-language materials for lower primary grades

· increase the number of learners in basic education with appropriate mother-tongue and English oral, reading and writing skills in selected classes in selected areas of Namibia.

In order to achieve these outputs, a series of partnerships were formed with a number of interested groupings. (These are referred to in section 5.1)

2 The application of a PAR approach to project assessment

My previous experiences in assessing projects had required me to be involved for longer periods of time, and I had been brought into projects in much earlier stages of implementation. This earlier involvement had enabled me to assume an ongoing facilitator/evaluator function. Since the nature of the MELLD investigation resonates with other contributions in this publication, I shall here only describe the way in which I endeavoured to apply a PAR approach in the implementation of the MELLD project.

I use the word endeavoured deliberately since circumstances did not allow us fully to utilise a PAR approach in this particular case. The main reason for this was that the assessment exercise was undertaken within the constraints of my being a tacked-on outsider evaluator who was 'fifoed' in for a brief spell, three years into the implementation of the project. (I was an insider in the sense that I had had experience in using and training practitioners to use the Molteno programmes and methods.)

In spite of time and other constraints, we decided to evaluate the MELLD project by applying the principles of a PAR approach to the investigation as comprehensively as we could. Although we achieved what we had set out to achieve (the definition of our goals took into account the constraints of the overall situation), the exercise taught us a lot about how to incorporate a PAR component into educational development projects as a formative mode of assessment.

3 Towards a definition of PAR

There are many different definitions and applications of action research. In the educational arena, Kemmis and McTaggart suggest that, for them, action research means 'a form of collective self-reflective enquiry undertaken by participants in order to improve... their own social or educational practices' (Kemmis & McTaggart 1988: 5).

These two authors link the concepts of action and research because researchers acquire knowledge through the research process while simultaneously putting their research into practice (the action component of 'action research'). They draw attention to the participatory nature of such research by indicating that action of this kind is (by definition) collaborative since it takes place in the context of any group with a shared concern.

Selener (1997: 108), who suggests that collaboration brings teachers and university-based researchers or other facilitators together in the PAR exercise, corroborates this view. He indicates that the joint enterprise entails setting goals, planning the research design, collecting and analysing the results in a collaborative way. He points out that 'although teachers and researchers may play different roles based on their respective skills, members of both constituencies work as equals'. There are distinct differences between traditional approaches to assessing impact and PAR. In PAR the researcher is much more than an impartial and aloof observer: he or she is also a facilitator. In PAR participants are also thought of as researchers - rather than mere objects of research. The facilitator is an active agent in the inquiry process. He or she facilitates and provides the participants with skills and research know-how but does not give answers (Selener 1997, Udas 1998, McTaggart 1991). Understanding the role of the researcher is central to understanding the practical utility of the PAR approach.

3.1 The practical utility of PAR

While the PAR approach provides researchers (particularly if they are outsiders) with a useful route into the logic of other people's projects, it simultaneously allows them to support the project. PAR is an approach which has been applied in the professional development of teachers and in projects which are designed to improve schools. Classroom teachers, as researchers, have used PAR to improve their own practices. Selener (1997: 96) indicates that the main assumption underlying this approach is that the teacher and others working in the field of education become researchers and change agents in order to improve their situation. The main objective is thus to improve the day-to-day practice of teachers in their classes - one of the significant aims of all the projects referred to in this publication.

When applied to the assessment of impact, the PAR approach benefits project participants in numerous ways - and also substantially improves the prospects for a project's sustainability. Some of the most significant advantages of the PAR approach are that it:

· takes the hierarchy out of the evaluation stage by bringing in project implementers to work with the so-called experts

· enables all participants to become co-researchers

· enables all participants to define the criteria used for measuring

· involves the participants in interpreting and authenticating the findings

· engages participants in the cycle of reflection-action-reflection

· enables the (often) poor or marginalised to impact on policy

· enables bureaucracies to become more participatory

· creates a forum in which members can act as critical sounding boards

· acts as a forum for information exchange and as a resource for group/project players

· permits sharing of knowledge and resources and it promotes development expertise

4 Participatory action research and the reflective practitioner

The PAR approach is predicated on reflection. When reflection is introduced as part of the PAR methodology, it transforms classrooms into learning communities in which teachers become more inquiry-orientated, reflect on what they are doing, and decide on ways and means to improve what they are doing or what is happening. In PAR-inspired assessments, practitioners themselves engage in the process of developing criteria for evaluating. This enables them to identify the strengths and weaknesses in their own practice. This requires them to:

· notice what is happening in the classroom
· think about what is happening both during the lesson and afterwards
· work out ways of improving on what is happening
· test their improvements in practice
· find out how well the improvements might have worked, and then
· think again (i.e. begin the whole cycle again).
The following is suggested as a PAR plan for teachers:

INITIAL REFLECTION

What problem did Ms X have?
Whom did she ask to help her with the problem?

ACTION PLAN

What should she try out in order to improve the situation?

OBSERVATION

How did the plan work out?
What problems remained unsolved?

REFLECTION

What else could she try to do?
How is this new idea an improvement on her first idea?

ACTION PLAN

What plan has she devised to improve her situation?


Romm and McKay (1999: 8)

4.1 Reflection as the basis of change

The reflective component provides a scaffold for practice in that it allows project players, project monitors, evaluators and even learners, through reflection, to describe what constitutes best practice. This offers opportunities for ongoing monitoring and formative evaluation and confers the added benefit of ensuring sustainability. PAR usually involves groups of practitioners who come together at regular intervals to address particular problems or insights they might have encountered in their teaching situation. Practitioners are required to note anything that happens during a particular lesson that may be of interest to the other practitioners in the group.

Practitioners should also record, for example, how they dealt with tricky situations, or how a particular teaching method worked out. This is a form of situational analysis that encourages teachers (1) to think about what happens when they teach and (2) to try out different teaching ideas. This brings together the theory (through reflection) and the practice (or action) of teaching. What I have described above represents one way in which teachers may engage in situational analysis.

It is reflection and understanding - rather than random, spontaneous acts - that create change. The process requires a reflective spiral of planning, action, observation, reflection/replanning, action, and so on. Reflection uncovers successive layers of meaning. Reflection is a means for systematically collecting and analysing data, solving problems, and evaluating and implementing.

Those working in a school setting may be actively involved in all stages of the research and action process. This constitutes a radical departure from traditional education research, which was always conducted exclusively by those outside the implementation strategy. PAR is unique because practitioners themselves are involved in creating and applying knowledge rather than merely implementing directives and recommendations obtained from traditional 'outsider-driven' research and imposed from above. The special advantages of PAR increase the likelihood that research results will be useful to teachers in their own practice because, in PAR, theories have to be validated in practice.

4.2 Transforming teaching

Young (1983) recognises that the formulation of a curriculum, or the introduction of a teaching programme, is no less a social invention than the establishment of a political party or a new town. When referring to a social invention, Young suggests that development programmes - whether they be literacy programmes, teacher improvement programmes or new curricula - are human (and not scientific) constructs. In all human constructions, he suggests, we rely heavily on humans as the locus of decision making. PAR, as the name denotes, strives to ensure that the human emphasis of any intervention remains paramount.

4.3 PAR and its view of knowledge

The application of PAR to assessing project impact confirms the popular trend towards assessments that are participatory or collaborative. The new discourse assumed by the shift constitutes a radical break with positivist-inspired traditional approaches to impact assessment, which characterised the pre-Jomtien research agenda. Such approaches were based on what Romm (1986: 70) terms a 'comprehension-then-application' approach. By this she means that the researcher arrives at a comprehension of a situation through following the procedures of scientific protocol and thereafter proceeds to manipulate the situation in accordance with what the researcher has (unilaterally) postulated as the correct comprehension of the situation.

In contrast, PAR is squarely based on a non-realist epistemological paradigm.5 PAR also requires the incorporation of action at the precise point of conceiving knowledge. This location identifies PAR as being (generically speaking) a multivocal or discursive method for arriving at 'true' knowledge (McKay & Romm 1992: 90). It aims, as Udas (1998: 603) explains, to introduce humanness into human inquiry. For this reason, the voices of practitioners are essential to the construction of knowledge. Argyris and Schön (1991: 86) summarise this idea by stating that the purpose of action research is to generate insights by working with practitioners within particular, local practice contexts in exercises which are relevant to local contexts. This is because action research 'takes its cues - its questions, puzzles, and problems - from the perceptions of practitioners... [and it] bounds episodes of research according to the... local context'.

5 Application of PAR to the assessment of the MELLD project

As indicated above, every attempt was made in the execution of the MELLD assessment to apply the principles of PAR (to the extent that this was possible in the light of constraints on time, timeliness and resources).

5.1 The rationale underlying the identification of stakeholders and the selection of the 'sample'

Because this was a partnered project, there were a number of stakeholders with varied interests and concerns. It was necessary at the outset to determine the stakeholders and then to select a 'sample'. It was possible to gain sensitivity to what partners and what interests were involved by means of discussions with the project management and an analysis of documentation. It was possible to request the project managers (prior to my arrival in the country) to confer with partner organisations and decide which stakeholders should be involved. This exercise enabled us to solicit the names of significant participants or organisations who were central to the programme.

It transpired that there was a large degree of commonality in the partners' lists, and this made it possible to design an approach which in some way included all identified stakeholders. The list of stakeholders included:

· officials from the Ministry
· project managers
· the implementing agents
· teachers and teacher coordinators
· the British Council
· DFID (the funding agency)
· district supervisors
· other service providers

Since this investigation was not contingent on so-called scientific validation, the rigorous use of orthodox 'scientific' (realist) approaches was not considered pertinent to the selection of the 'sample'. A rational sample was selected, based on leads obtained by means of snowballing. Since PAR does not concern itself with generalisability, the emphasis in this assessment was on capturing the distinctive quality and substance of the voices of the various stakeholders. In the remainder of this paper, I will refer only to what I consider to be primary stakeholders, i.e. the trainers and the teachers themselves.

5.2 Constructing the instruments

It was necessary to engage stakeholders in the process of constructing the various instruments that were used. Initial interviews with core stakeholders were conducted - an exercise which was crucial in enabling me to become appropriately sensitised to the relevant issues. After I had conducted a second round of in-depth interviews with the trainers (attached to the implementing agents) and the project manager,6 I began to get a good idea of what should be observed and what criteria should be used. Initial drafts of the instruments were compiled and were circulated among other project players. They went through a series of manipulations and refinements as different players provided input (this was a process that continued well into the research process).

5.3 The methodological approach

While many researchers believe that only qualitative methods are appropriate for doing participatory research, this is not so. It is here contended that, as long as the researcher is aware of the contestable/discursive nature of knowledge, the methods used for obtaining data are secondary. This is because action research is distinguishable from other research methods to the extent to which it strives to induce practitioners to confront issues which they may find problematic. It is in this sense that the methods employed by PAR are different from the usual ways of administering surveys or conducting observations. The distinction depends on the fact that non-action research does not have as its main goal the need to open the way for new forms of action. Thus, any form of data gathering is appropriate in PAR provided that

· it does not exclude participants, and

· it retains as its goal the implementation of action which is responsive to the issues that people are concerned about and which they want to discuss with others (Romm & McKay 1999: 5).

Selener confirms this when he points out (1997: 111) that action research does not follow any specific research formula. He states that the prevailing conditions, together with the action researcher's preferences and criteria, will determine the appropriateness of the method to be used. Since this kind of open-endedness left us to choose from a whole gamut of possible research methods, it was necessary to formulate a research design according to which the MELLD investigation would proceed. The following four research methods were utilised:

1 Documentary study

This was necessary to address questions pertaining to the location, context, baseline measures and terms of reference of the project. It was necessary to undertake an examination of documents relevant to the areas under investigation. All players were able to suggest documents which were relevant to this stage of the research. The data obtained from the documents proved adequate to provide a background which was 'validated' in the second and subsequent phases of the investigation.

2 In-depth interviews/Focus group discussions

This method was useful both as a source of data gathering, as well as a means of 'validating' the context as defined by the documentary study. The in-depth interviews opened opportunities for engaging teachers in reflection. They were required to give their views about the impact of the new programme on their learners and on their practices. In the focus groups, teachers were required to reflect on problems which they encountered and to brainstorm ways of addressing these. The groupings also provided forums for initiating action.

3 Classroom observations

Observations were conducted at a number of project and non-project schools. These were coupled with interviews with groups of teachers who were asked to describe how they had experienced the process and to discuss how this had impacted on their teaching. In this situation it was necessary that the observation instrument be used as a 'negotiated' tool.

4 Self evaluation questionnaires

These were administered to all teachers involved in the intervention in order to obtain their perceptions with regard to the variety of interventions, their limitations, etc. The administration of these was facilitated by the Namibian regional coordinators. Teachers were required to indicate problems which they had identified and to propose suggestions for improving the situation. This method was designed to obtain data from teachers, to stimulate their own reflections about their practices, and to suggest action for addressing a number of issues.


6 The 'fit' between the approach and the principles of PAR

In spite of various constraints, it was nevertheless possible to comply with many of the requirements of the PAR approach.

· The process of self-evaluation

The self-evaluation questionnaire was administered to all teachers who were teaching on the MELLD programme. The survey was intended to induce reflection, tap into teachers' perceptions of project effectiveness and allow them an opportunity to identify possible problem issues. Since the self-evaluation component was conducted subsequent to the other processes, it was an additional invitation to induce reflection among teachers in their regional groups. Teachers were required to indicate:

· problems and suggested solutions
· changes in children's behaviour
· their perceptions of any changes in their confidence
· the ways in which their teaching had changed
· the kind of support they felt they needed
· their perceptions of the materials they were using and the fit between these and the national curriculum.

In compiling the questionnaire, we were sensitive to cautions by the trainers that the questionnaire should be user-friendly and that the language level should be such that teachers (who might not have a good command of English) could understand what was being asked. Indeed, some teachers had difficulty in writing. This of course impacted on their teaching and (of lesser importance) on their participation in the research enterprise.

· Focus groups

The group interview approach was intended to engage the teachers and coordinators (as well as other stakeholders) in a conversation in which the researcher encouraged them to relate, in their own terms, experiences and attitudes that were relevant to the project. This provided the opportunity to probe deeply and to explore various dimensions of the areas under investigation. The interviewer assumed the role of facilitator and ensured that the exchange gave individuals the opportunity to speak their minds and (also) to respond to the ideas of the other members of the group. In the course of a series of group interviews, respondents spoke about their perspectives and involvement, citing events and stages which they regarded as significant. The themes that were explored in the discussions were framed by the participants.7

The findings of the group interviews were of a collective/participative nature. While many researchers argue that this kind of group-think is one of the disadvantages of using group interviews, we regarded it as an advantage in this assessment since it offered opportunities for enriching the various nuances of the discussion. Group-think may be regarded as advantageous in the context of this assessment and in the context of the MELLD project because it concurred with the group-based nature of the programme and the group-think modus operandi. Interactions between the group members gave rise to ideas for action which may not have occurred to any single individual member reflecting alone.

The group-think function also enabled a degree of validation to occur. Respondents were encouraged to debate contentious issues and the researcher was able to request the group to validate the final outcome of these issues. Thus, for example, when groups were asked to identify reasons for the success or failure of various aspects of the programme, the debate enabled the group to solve many contentious problems in a 'controlled' environment and it also elicited new ideas for future project implementation.

· Reporting

It was clear that there was a need to speak to a number of different audiences through the report. Since we had a sense of the teachers' competence in English, we would have preferred to publish the report in English and in one or more of the local indigenous languages. But this was not possible. What was possible, however, was to circulate draft copies of the interim reports to the regional groups of teachers through their coordinators. Each group was requested to discuss the document and to comment on it. It was possible for these discussions to take place in any languages that the groups wished to use. The comments that arose out of the initial drafts were sent to me and I was surprised by the extent to which teacher groups had responded. In my writing up, I attempted to incorporate all comments and requests - even if it meant that I included conflicting opinions in footnotes.

Finally, I addressed issues pertaining to the accessibility of the document by incorporating large chunks of direct quotations - thereby letting teachers speak, as it were, for themselves. I also attempted to include case studies of typical teaching scenarios because these had elicited a substantial amount of commentary from the teachers. The following is an example of an authentic case study, which includes a problem about which teachers could reflect. This particular case study (taken from the report) also gave rise to a copious amount of commentary, especially on how to introduce a remedial teaching component.

CASE STUDY: MARY'S BREAKTHROUGH TO LITERACY (BTL) LESSON


Mrs Mary S had been teaching for 38 years and was nearing retirement. When we arrived at her school (one day early) she was initially reluctant to let us in to see her Grade 1 class. When the Principal directed us to the teacher next door, Mrs S pulled me in by the arm and requested me to visit her class.

We entered her sandy but happy classroom. There were clay models of buck and birds on the window sill and on the wall there were lots of pictures that the Grade Ones had drawn.

The children were in their four ability groups and were in the second stage of the BTL programme. The teaching group moved to the front of the room and sat on the grass mat. While Mary moved from group to group showing the learners what to do, there was a mini rumpus on the mat.

Two of the occupational groups were given sentences to write and the third group, the 'weakest' group in the class, was given a pile of words to copy. The lesson proceeded according to plan. The learners in the front of the room were deep in thought. They discussed the poster and read with great confidence. Eventually they returned to their desks to draw their pictures and write the new sentence they had learned.

Meanwhile, in the groups, a few rowdy boys and girls raced (also with confidence) through the writing of their sentences. They were trying to see who could copy the most sentences in their books. The race was on! They had already illustrated their lesson topic and were practising to write their sentences.

But, as with all the BTL classes we saw, not a lot was happening in the 'weakest' group. One or two learners had scribbled a few squiggles on the page but not much else happened.

Mary S moved around and checked on the other two groups. They were doing really well. But all was not well with the third group. They just sat and sat.

In a later discussion, Mary explained to us that the new approach had brought about a great improvement in her teaching. She had been using it for the past two years and wished that she had learned it earlier. But she said she did not know what to do with the 'weak' group.


7 Some difficulties encountered with the approach

The PAR approach to impact assessment is of course not without its own unique problems, which, in this case, were exacerbated by the constraints of time. These are some of the problems which I experienced.

· Collaborative efforts are by definition time consuming!

· It is often difficult to generate enthusiasm in collaborative situations.

· How does one stimulate people to participate in deciding criteria and outcomes if they are habituated to not participating?

· How do lay (local) people feel about participating in such evaluations when they are in the presence of 'experts'?

· Programmes of this kind often incorporate 'grass-roots' people who can neither read nor write. What is the best way to encourage them to participate on terms of equality with 'experts'?

7.1 Addressing the human question

In spite of attempts to encourage participation, I found it difficult to get teachers to participate (Moloney describes the same difficulty in her paper in this publication). Admittedly, a rushed evaluation is not conducive to engaging participation, and such difficulties are compounded by the teachers' lack of basic skills. This lack is in itself a source of disempowerment. Teachers who were trained in the previously undemocratic era also lacked the requisite skills for participation. I therefore argue that the inability of teachers to participate (because of the skills that they may lack) is a problem that needs to be addressed.

Since the methods of PAR depend on the development of human empowerment and the belief in one's ability to participate, there is a direct relationship between human agency (voluntarism), participation and development. For this reason, it is important that projects regard the development of human agency as being of equal importance to all other preconditions.8 Development has to be firmly based on human well-being, an improved quality of life and significantly enhanced self-esteem. It has to resonate with the aspirations and needs of people as they are defined by the people themselves. It has been recognised that post-Jomtien research stresses the growing paradigm of participatory educational research. But this is contingent on the will to act. Informed action, or 'praxis', is brought about by reflection informing action.

7.2 Developing agency

While all the papers in this collection address educational needs as part of one or other development programme, it is here argued that development programmes that are considered independently of developing human agency will fail to take the people with them. In this regard, Berger (1969) stresses the importance of what he terms a 'developmental consciousness', which, he argues, should underlie all attempts to address problems of underdevelopment. It is imperative, he argues, that we address the 'human question'. While the provision of schools and infrastructure and the enhancement of teachers' skills are fundamental to our primary goal of development, transformation has to recognise the importance of the development of human agency and awaken to the importance of this at the local level. It is this which PAR hopes to achieve.

8 Conclusion

The use of PAR as an approach is coming of age. The collaboration embodied in PAR implies that the evaluation is informative for all players and can consequently make an important contribution to project sustainability. This is especially so if the design of the evaluation model is introduced as early as possible in the project - as a formative tool rather than a summative one. If this were done, it would have implications for the monitoring process because then the monitoring (leading to the impact assessment) could direct the project towards the desired outcomes.

Footnotes

1. I have successfully used PAR several times in school-based and other development projects across a variety of sectors. UNISA's Institute for Adult Basic Education has a variety of education/development projects which cross a number of sectors. Our students are taught PAR and are expected to apply this in their practical projects. I have personally found the PAR approach to be as effective in gender and water projects as it is in education projects.

2. A mid-term evaluation was conducted in 1994, in which impact and progress levels of the objectives were assessed. A revised project memorandum for phase 2, based on the recommendations of the 1994 evaluation, was compiled.

3. This is an amusing and instructive concept which was coined by Rea-Dickins and Murphy to refer to consultants who fly in and fly out (fi-fo). Their paper in this publication elaborates on the concept.

4. Teachers and other educational practitioners are usually engaged in PAR as active participants. The process usually addresses a single case or a tricky issue, and, if these issues are reported, their findings may have wider benefits.

5. This is based on the research presupposition that we do not have access to 'objective truth' - but that 'truth' (if it exists at all) can only be encountered through intersubjective encounters with 'other truths'.

6. Fortunately the responsible person in the ministry was able to visit South Africa on a few occasions before the formal assessment began.

7. Of course this did not preclude the interviewer from introducing topics.

8. Agency refers to the empowerment or ability of people to determine needs, to reflect on possible outcomes, and to act on them.

1.3 Participatory approaches to impact studies

Sasidhara Rao
Andhra Pradesh
District Primary Education Programme


In this paper, Sasidhara Rao outlines some of the processes and instruments used to evaluate the Andhra Pradesh District Primary Education Programme (DPEP). The paper begins with a description of the aims of DPEP and then proceeds with a description of the various instruments used for the evaluation. The author provides a categorisation of the instruments used for the evaluation and locates them within the broad categories of quantitative and qualitative research approaches. This is coupled with an indication of the kinds of data that the particular instrument is intended to gather. The methods and the instruments used contribute in different ways to engaging participation at different levels and at different stages of the research enterprise.

The author stresses the importance of the evaluation process being guided by a participatory philosophy. He outlines the benefits of participatory research for participants, and shows how it ensures that quantitative data, such as the statistical descriptions obtained from the surveys, are contextualised, which in turn contributes to the interpretation of such data. The paper also argues that the participatory nature of the study, demonstrated by, for example, the various local studies conducted for the DPEP evaluation, conferred the advantage of enabling project participants to reflect on the project interventions in their own contexts. This, the author suggests, is both formative and necessary for making the recommendations relevant to unique local circumstances and, consequently, for enabling the development of capacity among practitioners at grass-roots level.


1 Introduction

Major efforts are being made to implement Article 45 of the Indian Constitution, which provides for universal free and compulsory primary education for all children until they are fourteen years old. DPEP was one such intervention put in place to enable this goal to be realised in selected districts of the country.

The DPEP initiative had the following specific objectives:

· to reduce differences attributable to gender and social class in enrolment, dropout and learning achievement figures to less than 5%

· to reduce overall dropout rates for all learners to less than 10%

· to raise average achievement levels by at least 25% over the measured baseline levels

· to provide access, according to national norms, for all children to primary education in classes I to V

When the DPEP framework was formulated, special attention was given to programme features which ensured the contextuality of the programme by involving local area planning and community participation.

2 Interventions made by the Andhra Pradesh DPEP

A number of changes were made to make provision for the achievement of increased enrolments and retention and to improve the quality of education. The following interventions were made by DPEP in Andhra Pradesh:

· the opening of schools and the provision of alternative school facilities in areas where there were no schools

· the construction of buildings, additional classrooms, toilets, and the provision of drinking water facilities

· the opening of ECE centres

· the organising of awareness campaigns

· the provision of teacher and schools grants

· the delivery of a teacher-training programme

· the implementation of bridging courses for children involved in child labour

· the provision of education for children with special needs

· the provision of support for school committees

· the appointment of education promoters for the girl-child.

3 The methodological design of the AP DPEP evaluation

In order to obtain information about the progress made by DPEP, a complex multi-layered research process was formulated.1 The aim of the evaluation was to increase the use of evaluation data so that feedback would constantly flow back to the people involved in the programme. The evaluation was not intended to assess what was done to people. Its purpose was rather to involve all members of the community in assessing the effectiveness of DPEP. The study is longitudinal in the sense that the school and pupil surveys which were performed will be used in subsequent years in order to pinpoint whatever changes may have occurred over the project's lifespan. One of the main aims of the survey was to provide essential reference data about the provision of education (DPEP nd: 1). The surveys were used to obtain information from head teachers in the schools, from Village Education Committees (VEC), and, using the household surveys, from the communities themselves. The other aims of the survey were:

· to study the impact made by DPEP on the educational achievement of children throughout their school lives

· to observe how particular schools were attracting and retaining pupils

· to investigate the extent to which girls are enrolled and retained

· to obtain an estimation of how many pupils drop out of the system

· to quantify the degree to which pupils successfully complete their schooling

Adapted from DPEP, Evaluation in Primary Education: A handbook for getting started (p. 111).

In order to operationalise the above intentions, a research process was conceptualised. The enterprise was designed to enable the gathering of information from different sources in different ways. The evaluation comprised the following components:

· a quantitative component – comprising a series of surveys

· a qualitative component – comprising a set of long-term and short-term studies

· a priority component – certain indicators of implementation which identified priorities

· a participatory component – using the methods of participatory rural appraisal so that information could be collected quickly at grass-roots level. (In this way, DPEP was able to involve members of the community in assessing the effectiveness of the programme.)

3.1 Quantitative component

The quantitative component comprised the schools' and pupils' survey (SPS), which was designed to formulate a picture of DPEP in action. For this purpose four tools were prepared:

· a school questionnaire
· a classroom observation schedule
· a Village Education Committee survey
· a household survey.

Before the school and pupil surveys were administered in the field, they were piloted and then amended. The instruments were then used for the following purposes:

The school questionnaire
This was used for gathering information from the head teacher or other teachers, from school records and from the evaluators' direct observations.

The classroom observation schedule
On three occasions during the year, observers visited each classroom to record which pupils were present at various times on a particular day. This exercise was necessary to obtain information about the regularity of attendance. The survey also gathered information about the gender and social groupings of the learners.

The VEC survey
This instrument was intended to give information on the potential school population. The survey was necessary to give accurate figures on the number of children aged between 6 and 11 who live in the village.

The household survey
The household survey was administered to 10% of homes in the village. This survey was intended to enable the project to obtain information from the people living in the school catchment areas about the number of children living in the area, their social backgrounds and the economic status of the community.

The instruments were required to address the following issues:

The efficacy of the VECs' functioning
· Interview schedule for Village Education Committee chairpersons, head teachers, additional project coordinators (APC), villagers and Village Education Committee members

The effectiveness of the mandal education offices' (MEO) supervision and inspection
· Questionnaires to headmasters, teachers and MEOs
· Documentary analysis: the perusal of books and monthly minutes

The utilisation of school and teachers' grants
· Interview schedule for the Village Education Committee chairperson and committee members
· Questionnaires for headmasters and teachers
· Observation schedule
· Matrix ranking

The utilisation of the Class I Telugu textbook developed as part of the programme
· Questionnaire for teachers
· Classroom observation
· Pupil interviews

The functioning of Teachers Centres (TCs)
· Observation schedules on planning and management, time utilisation and TC activities
· Questionnaire on the activities of the TC, administered to teachers and to participating MEOs, APCs, mandal resource person (MRP) secretaries, and assistant secretaries
· Matrix ranking
· Schedule of availability and use of equipment.


3.2 Qualitative component

The qualitative component included the impact studies and an investigation into the functioning of certain structures. The long-term qualitative studies included studies of the impact of DPEP on new schools, on ECE centres and on the teacher-training programme.

The short-term studies included investigations into the

· functioning of VECs
· effectiveness of MEO's supervision
· utilisation of Class I Telugu textbooks
· functioning of TCs
· utilisation of school and teachers' grants

Focus group discussions were held to determine the
· effectiveness of the functioning of VEC/school education committees (SEC)
· ranking of schools
· needs in various areas

3.3 The participatory nature of DPEP

The evaluation was guided by a participatory philosophy which endeavours to involve all the participants in the preparation, finalisation and implementation of the evaluation programme. The design stressed the involvement of all members of the community in assessing the effectiveness of DPEP. The instruments were designed to gather information from parents, teachers, children, VEC members and the local community about their impressions of both the DPEP project and its evaluation programme. The evaluation included a series of observations of different activities of the teachers, pupils, VEC members and the community at large. Interviews were also conducted with these participants to gather data, and the documents used in the project were carefully and critically analysed (DPEP n.d.: 9).

The research design was user-friendly, and made provision for those within the DPEP system (but external to the activity being assessed) to participate in the implementation of the evaluation. In this way, capacity was built across the system. The process relied, to a large extent, on primary rather than secondary data, in the sense that two of the main tools used were observation and interviews. The design also advocated the collection of data through the mandal resource personnel, who are strategically placed at the mandal level to support the teachers in their academic spheres. Recent legislation in Andhra Pradesh has meant that the VECs are to be replaced by SECs. Ultimately, the SEC, as a stakeholder, should monitor, guide, support and evaluate all the programmes relating to primary education at the grass-roots level. Moreover, to assess the children's learning progress, DPEP conducted learning achievement surveys which measured pupils' performance on both cognitive and noncognitive dimensions. The testing of learners on the noncognitive level covered factors such as team spirit, cooperation, accommodation, and peer group relations. This was done by developing testing instruments appropriate to the new methodology, the teacher-training component and the DPEP textbook, all of which were interventions introduced by the project.

3.3.1 A process directed at participation

Adapted from DPEP. Evaluation in Primary Education: A handbook for getting started (p151).

Because of the participatory emphasis of the DPEP evaluation, every attempt was made to ensure that:

· the needs and responses of the members were taken into account in determining the evaluation system

· local people were involved in the preparation of design

· local people received immediate feedback

· capacity building was emphasised at all levels

· people were prepared for self-evaluation

· the project involved primary users

· the instruments were user-friendly in design

· on-the-job training was provided for evaluators

· local evaluators were employed

· applied research methods were used

· progress was measured at the local level

· local people were enabled to identify problems and work out their own solutions

· information was collected from community members by way of participatory rural appraisal methods – using activities like school mapping, Venn diagrams and seasonal maps

· social mapping was used to identify those who were left out of the programme as well as the non-starters. This social mapping attempted to explore:

- reasons for non-enrolment and dropout
- ways of identifying working children
· teachers, pupils, parents and community were involved

· the evaluation was done by the members internal to the system but external to the activity

· priority was given to primary rather than secondary data

· district evaluation teams included DIET (District Institute of Education and Training) lecturers, MRPs, teachers, community members and NGOs

· the School Education Committee participated

· the MRC was used as an evaluation unit

· the tools developed for the evaluation were participatory in design

· teachers were involved in the pupils' learning achievement surveys (surveys based on natural learning experiences, teacher training and textbook development).

4 DPEP interventions in Andhra Pradesh

DPEP made a number of interventions which benefited the community in Andhra Pradesh. These included:

· the opening of schools and the provision of alternative school facilities in areas where there had been no schools

· the construction of buildings and additional classrooms

· the construction of toilets and the provision of drinking water facilities

· the opening of ECE centres

· the organisation of community mobilisation and awareness programmes

· the provision of teacher, school, and teacher centre grants

· the training of teachers

· the implementation of bridging courses for children involved in child labour

· the provision of education for children with special needs

· the establishment of MRCs with two MRPs, one mandal child development officer (MCDO) and one mandal literacy organiser (MLO) under the leadership of the MEO

· the appointment of education promoters for the girl-child

5 Conclusion

In this paper, an attempt was made to outline some of the processes and instruments used to evaluate the DPEP programme. The paper describes the various instruments and attempts to locate each as either a quantitative or a qualitative approach to research. In addition, the paper gives an indication of the kinds of data that each instrument was intended to gather. The paper stresses the importance of the process being guided by a participatory philosophy. In this way, the information gained by using other techniques, such as the statistical descriptions obtained from the surveys, is contextualised so as to enable the interpretation of the data.

The participatory nature of the DPEP evaluation was enhanced by local studies which, in addition to being sources of essential information, were useful in enabling people to reflect on their actions in their own contexts. This meant that the recommendations made were relevant to the unique circumstances of local communities and that, through this process, capacity among practitioners at the grass-roots level was built.

Footnote

1. This research design is described in detail in DPEP (n.d.) Evaluation in Primary Education: A handbook for getting started.

1.4 Evaluation vs impact studies

N V Varghese
National Institute of Educational Planning and Administration
New Delhi, India


In this paper Varghese considers the distinction between an evaluation and an impact study.

He argues that an understanding of the distinction is necessary since it has implications for who conducts the assessment, what the practical utility of the findings of an assessment might be and whose interests are likely to be served by each type of assessment. He concludes by pointing out that the distinction will also have implications for whether or not the assessment is seen as part of the actual project and, consequently, whether or not funding will be allocated for it.

The author succinctly illustrates the distinction by drawing on case studies which depict different assessment strategies.


1 Introduction

It is necessary to start this paper with an attempt at defining the concepts of impact studies and project evaluation.

· Impact studies are concerned with the overall changes brought about by a project or programme. They are generally carried out after the project period is completed.

· Evaluation, on the other hand, focuses on achievement of targets of a project and assesses the effectiveness of intervention strategies which are followed by the project. Evaluation studies can be initiated either during implementation of a project or immediately after the project period is completed, depending on the purpose. If the evaluation is undertaken during the implementation of a project, we refer to it as a formative evaluation, and if it takes place after the project is concluded, we refer to it as a summative evaluation.

2 Assessing the achievement of a project

The following table shows the distinction between an impact study and an evaluation in relation to

· the project objectives
· the short- and long-term goals of the project
· stakeholder interest in the assessment

The success or failure of a project is usually assessed on the basis of its stated objectives. Hence, neither evaluation studies nor impact studies can be independent of the project objectives.

· Evaluation: Evaluation studies usually confine themselves strictly to the boundaries stated in the project objectives and the implementation strategies.

· Impact assessment: Impact studies go beyond the narrowly stated objectives of the project.

The project matrix clearly indicates the immediate, intermediate and developmental objectives of a project.

· Evaluation: Evaluation studies generally focus on the immediate objectives of a project.

· Impact assessment: Impact studies usually attempt to assess the developmental objectives of the project.

The funding agencies and the recipient countries may be interested in carrying out both types of assessment.

· Evaluation: Project managers in a funding agency may be more interested in assessing the cost-effectiveness of intervention strategies and the efficiency of the project management structure. For this reason, funding agencies may be more interested in evaluation studies.

· Impact assessment: The participants in a project and the recipient country may be more interested in an impact study. They would be more interested in the impact that an intervention makes on the structures of the existing systems after the project period.

The different forms of assessment suggest different utilities for the findings.

· Evaluation: Evaluation studies provide an insight into the replicability of project intervention strategies and provide useful feedback for funding agencies if they wish to apply similar interventions in other countries or projects.

· Impact assessment: Impact assessment studies address themselves to systemic and long-term changes brought about by a project or programme. The impact may transcend the sectoral boundaries drawn by a specific departmental view of the problem. This is especially so in the case of projects in social sectors like education, since the object and subject of the project are human beings and their interactions.

2.1 Examples which illustrate the distinction between evaluation and impact studies

It may be interesting to base the distinction between evaluation and impact studies on certain examples. Let us take the case of an in-service teacher-training project.

An evaluation of the project may indicate the effectiveness of the organisational arrangements created to train teachers on a regular basis. It may also indicate whether the project succeeded in training the pre-specified number of teachers on schedule. On the whole, the evaluation will indicate the success of the project in terms of training the teachers. Policy makers, however, are generally not concerned only with the training of teachers. They would like to know whether such training has led to improved curriculum transaction processes in the classroom (and therefore, ultimately, to increased levels of learner achievement). If this has happened, in-service teacher training (INSET) may be adopted as a major systemic intervention in later periods. The impact study may focus on these aspects of the project rather than be confined to its immediate objectives, as is the case with evaluation studies.

Similarly, an evaluation of adult literacy programmes may indicate the total number of persons made literate by the programme. An impact study of the programme will focus on the social implications of the outcomes. It will attempt to discern, for example, whether the literacy programme led to the empowerment of illiterate adults and to their improved response to public provisions in sectors beyond education. It will also ask, for example, whether the reading habits of the community improved. These questions are more amenable to assessment by way of an impact study than by an evaluation study.

3 Methodology

The standard techniques used for measuring the impact of a programme are as follows:

3.1 The one group post-test design

The one group post-test design may be developed after the project period is over, and it may be conducted as an afterthought. However, such a design will not be able to indicate the rate or degree of change brought about by the project, since no initial measurements or pre-test results are available for comparison with the post-test results.

3.2 One group pre-test and post-test design

The one group pre-test and post-test design is useful for assessing the extent of the project's achievement among the beneficiaries. However, this design may not be able to indicate whether the changes brought about among the beneficiaries are due to the project intervention or to other factors outside the remit of the project, essentially because the design does not capture changes which have taken place in locations where the project has not been implemented. For example, we may notice an increase in enrolment in districts where the District Primary Education Programme (DPEP) is implemented in India. But this type of design would not make it apparent whether such an increase in enrolment is due entirely to the DPEP intervention or whether the Total Literacy Campaigns, which were also initiated in India, have also contributed to the increase.

3.3 Pre-test and post-test of treatment and control groups

The pre-test and post-test of treatment and control groups design may facilitate impact assessment based on:

· situations before the project implementation
· the progress made in project areas
· progress made during the corresponding period in the non-project areas

The actual contribution of the project is then equal to the total change brought about in the project areas minus the change that has taken place in the non-project areas. Baseline assessment studies are therefore necessary to provide benchmark data against which comparisons can be made at two or more points during the project implementation. A baseline study at the beginning will identify the indicators against which the progress and achievement of the project are to be assessed.
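The arithmetic described above (change in project areas minus change in non-project areas over the same period) is a simple difference-in-differences calculation. A minimal sketch follows; the function name and all enrolment figures are hypothetical, used only to illustrate the comparison the design makes possible.

```python
# Difference-in-differences sketch of the treatment/control design
# described above. All figures are hypothetical illustrations.

def project_contribution(project_pre, project_post, control_pre, control_post):
    """Net change attributable to the project: the change observed in
    project areas minus the change observed over the same period in
    comparable non-project (control) areas."""
    change_in_project_areas = project_post - project_pre
    change_in_control_areas = control_post - control_pre
    return change_in_project_areas - change_in_control_areas

# Hypothetical enrolment rates (%) before and after the project period.
contribution = project_contribution(
    project_pre=62.0, project_post=80.0,   # +18 points in project districts
    control_pre=60.0, control_post=67.0,   # +7 points in non-project districts
)
print(contribution)  # 11.0 percentage points attributable to the project
```

The control-area change (here a hypothetical 7 points) stands in for whatever would have happened without the project, which is why the baseline (pre-test) measurements in both groups are indispensable.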

4 Impact assessment of social sector projects

Various aspects need to be taken into consideration with regard to the assessment of impact in social sector projects. They are as follows:

· Human volition

Projects in social sectors like education deal directly with human beings and their unique behavioural patterns. Because of this human volition, the expected response pattern of beneficiaries is an assumption that is often taken for granted. Where the complexities of human behaviour are glossed over, the success of a project comes to depend on the extent to which the project design has reliably anticipated the response patterns of the actors involved in implementation, on the one hand, and of the beneficiaries, on the other. Accordingly, the achievement of the project objectives depends on how effectively the project design can accommodate varied and changing responses to project interventions. This means that any attempt to define a blueprint for project design (especially for another location) is destined to be problematic.

It is for this reason that project design and project implementation cannot be divorced from the contextual features of the location and the people where the project is to be implemented. Impact studies relying entirely on quantitative methodologies may have an inherent tendency to be narrow in perspective and insensitive to the developmental objectives of the project.

· Processes vs outputs

Most of the project interventions in education are process-oriented. For this reason, it is important to decide whether or not the project impact has to be assessed in terms of changes in the processes or in terms of outputs of the project.

For example, a project objective of improving learner achievement by, say, 25% over and above the present levels can be achieved either by focusing on a limited number of schools and selected students or by bringing about overall changes in school processes and classroom practices in all schools. Both types of intervention may indicate achievement of the quantitative target of the project. Impact studies need to be sensitive to these types of problems.

· Qualitative vs quantitative research approaches

As indicated earlier, the developmental objectives of a social sector project are less amenable to easy quantification. The methodology to be adopted for impact studies therefore needs to be discussed and finalised. However, a totally non-quantitative approach may not give a clear idea of the social outcomes of the project. In assessing impact, a trade-off must be made between quantitative and qualitative techniques. The question of which form of data collection to use needs to be discussed broadly with participants before the assessment design is finalised.

· Unintended outcomes

Any project intervention may produce unintended social outcomes. These can be either positive or negative. The implications of these consequences may not be confined to the sector in which the project has been initiated. For example, many primary education projects activate the local community and empower members to participate in development activities. Even when project targets are not fully achieved, such mobilisation may have a positive impact on the public intervention policies in other sectors. Evaluations which focus on narrowly defined project objectives and which use mainly quantitative techniques may not be in a position to make any assessment in this regard. For example, the DPEP interventions are pro-poor in nature. It would, however, be interesting to assess whether investment in primary education does indeed contribute to poverty reduction.

5 Who should do impact assessment studies?

'Who should do an impact assessment?' is a question that is often asked. The funding agencies, recipient countries or independent bodies may all do impact studies. However, as mentioned earlier, funding agencies may be more interested in evaluation studies and recipient countries in impact studies. It is possible that independent professional groups may be able to provide a more detached and objective view of the long-term implications of a project, and that the impact study may be facilitated by independent bodies with or without the support of local-level programme implementers.

This does not preclude the possibility of project players participating in an impact assessment. Since impact studies are conducted after the project has been implemented, they deal less with the details of project implementation and more with changes in the field. It is for this reason that even those players who participated in the actual implementation of the project will, in all probability, be able to remain objective.

6 Conclusion

This paper was intended to highlight the distinction between what we understand as an impact study and a project evaluation. The distinction is necessary since it has implications for who conducts the assessment, what the practical utility of the findings of an assessment might be, and whose interests are likely to be served by each type of assessment. Finally, the distinction will also have implications for whether or not the assessment is seen as part of the actual project and, consequently, whether or not funding will be allocated for it.


PREVIOUS PAGE TOP OF PAGE NEXT PAGE