

3. STAKEHOLDER PERSPECTIVES


3.1 Identifying stakeholders
3.2 Considering the audience - an important phase in project evaluations
3.3 Impact studies and their audiences

3.1 Identifying stakeholders

Dermot F. Murphy
Thames Valley University
Pauline Rea-Dickins
University of Warwick


This paper focuses on how important it is for evaluators to identify stakeholder groupings if they want to make effective use of participatory evaluations in educational development projects. It argues that it is necessary to pay detailed attention both to the identification of stakeholder groupings and to understanding their relationships to the projects in question.

The authors provide a detailed exposition of the variety of ways in which the concept stakeholder may be defined. Moreover, they argue that these definitions generally pertain either to individuals/groups who are involved in or who are affected by a project or its evaluation, or to the differing interests of these individuals/groups. The authors argue that stakeholder interests are not usually rigorously defined. They indicate that stakeholder interests do not seem to offer the kind of insights that might guide the planning and management of participatory or stakeholder evaluation. This lacuna, they argue, suggests the need to identify more robust parameters for exploring stakeholder perspectives on evaluation.

Accordingly, the paper begins with the authors' undertaking an interrogation of the multiplicity of definitions of stakeholder. This is followed by their examination of the inherent power relations and power differences which, they indicate, provide a framework for exploring the nature of the roles and relationships of stakeholders in an evaluation. The authors then suggest three propositions about stakeholder perspectives, which they support by using data they have obtained from their research into the stakeholder problem. Their paper concludes by suggesting some of the implications that their findings might have for practice.

1 Introduction

It is essential for those who wish to make effective use of participatory evaluation in educational development projects to identify stakeholder groups and to understand their relationships with one another and with the project. There are various ways of defining the concept stakeholder, and many of these refer to individuals or members of groups involved in or affected by a project or evaluation. These definitions usually centre on the differing interests which distinguish groups - an approach which, we argue, does not offer the information needed for guiding the planning and management of participatory or stakeholder evaluations.

Our experience of conducting evaluations, both as external evaluators and consultants in participatory evaluations, suggests the need to identify more robust parameters for exploring stakeholder perspectives on evaluation. In the next section we shall examine definitions of stakeholder before we look at power relations and before suggesting three propositions about stakeholder perspectives. In section 4 we shall explore stakeholder perspectives by looking at data from questionnaires, interviews and field notes and we shall try to establish the extent to which there is support for the proposed framework. In section 5 we will attempt to show the implications of our findings for practice.

2 Definitions of stakeholder

The notion of participatory evaluation in education is not a new one. Norris (1990: 131) indicates that in his original conception of evaluation, Tyler regarded evaluation as a tool to help the teacher in planning the curriculum and making instructional decisions. In the same place Norris adds that Tyler and Waples advocated the study of classroom problems by teachers and supervisors as early as 1930 - thus showing that both these authors believed in the usefulness of participatory evaluation nearly seventy years ago.

More recent discussions of stakeholder evaluation tend also to talk about the need to respond to the interests of real people and the irrelevance or even failure of other approaches to evaluation (Weiss 1986). The aim of a stakeholder evaluation is to make evaluations fairer and more useful, usually by getting primary stakeholders - the real people who may benefit from and/or implement the project - involved in conducting the evaluation of the project, in line with the proposal just cited.

These proposals are often criticised because some advocates of stakeholder evaluation blur distinctions between, for example, evaluation for accountability and evaluation for knowledge, and a few make the disputable claim that stakeholder evaluation is a sufficient approach to evaluation - with the implication that no other is needed (Chelimsky 1997: 22). It is unclear whether these proposals reduce privilege or simply pass it to different stakeholders in a participatory evaluation. In any case, this usage of the concept of stakeholder evaluation is misleading, since all evaluations are conducted by or for stakeholders. The question is rather: which particular groups of stakeholders commission, use or do evaluation? No one, it should be recognised, seems to propose that all possible categories of stakeholders should participate.

2.1 Ways of identifying stakeholders

Stakeholders are frequently identified by their working role within a programme, or by their contribution to the programme. In such cases, the term stakeholder may refer either to individuals or to groups. When stakeholders are defined by their working role within a programme, it is usually unclear whether the classification refers specifically to their place in the project or only to their association with the evaluation.

For example, Rossi and Freeman (1993: 408; see also Weiss 1986: 151) refer to the following in their list of stakeholders:

Policy-makers and decision-makers... Program sponsors... Evaluation sponsors... Target participants... Program management... Program staff... Evaluators... Program competitors... Contextual stakeholders... Evaluation community....

This list is not exhaustive and identifies groups which, while they may not always be involved in carrying out the evaluation, are potential audiences for the findings. A similar categorisation by Aspinwall et al (1992: 84-85) tries to simplify the matter of classifying stakeholders by proposing four broad groupings:

Clients or customers

Those who are intended to benefit from the project

Suppliers

Those who implement or provide resources for the project

Competitors or collaborators

Usually other organisations

Regulators

Any agency which directly or indirectly regulates the project


The categorisation of Aspinwall et al has the advantage of not being an open-ended list like the one previously referred to, to which one could easily add. We wish, however, to argue that the four categories of Aspinwall et al are not sufficiently distinct and are consequently of little use. For example, some participants, such as teachers in an educational development project, fall into the categories of both client and supplier. There is, moreover, little discussion of how the categorisation is arrived at, and it is not evident from such a list why and how each group will take a particular attitude or set of attitudes to an evaluation.

Rossi and Freeman (1993: 409) also focus on the multiplicity of stakeholder groupings. They point out that, as a consequence of this multiplicity, evaluators may be unsure whose perspective they should take in designing an evaluation. This dilemma is interpreted by Hopkins (1989) as pointing to different groupings within the group of evaluators. He draws attention to the divided loyalties of evaluators who have to take the concerns of multiple stakeholders into account. They may (variously) be loyal to the:

Profession

Rossi and Freeman's evaluation community

Sponsor

Rossi and Freeman's sponsors

Community

Rossi and Freeman's target participants (The evaluator acts as advocate and these stakeholders are not actively involved.)


If one looks closely at each of the above classifications, it is immediately apparent that one could go on subdividing each of the groupings - since stakeholders, too, may have divided loyalties.

The following classification, elucidated by Guba and Lincoln (1989: 40-41), takes the relationship between any stakeholder and the evaluation as the defining parameter. On this basis they identify three broad groupings:

Agents

those who conduct and use the evaluation

Beneficiaries

those who gain from use of the evaluation

Victims

those who are negatively affected by the evaluation


As with each of the aforementioned classifications, various subcategories within each of the three main classes may also be identified. This classification clearly locates each member's or group's stake as being part of the evaluation, whereas the other categorisations were potentially indeterminate between a stake in the project and a stake in the evaluation. Again, however, when we apply the categories to familiar cases, some members seem to fall into two categories. There is a further difficulty, which Guba and Lincoln (1989: 202) acknowledge: the difficulty of identifying victims. We suggest that it would also be difficult to predict which of the second and third categories stakeholders would fall into: our goal of a more comprehensive framework requires some predictive power.

It is common, then, to acknowledge that there are different categories of stakeholder, and that each category has its own interests and spheres of action. This notion, however, remains at the level of generality: it is a taxonomy. We suggest that, just as Linnaean taxonomies are revealed by plant genetics to misclassify species, a study of the underlying factors in groups may reveal more about their workings. At this juncture, we feel that the most useful pointer is to recognise that the defining interest is the stake in the evaluation. Categorisation as a defining procedure is more or less observable, but it has little or no explanatory value when one tries to account for different stakeholder perspectives. We suggest that a more effective explanatory procedure is still needed.

3 Defining stakeholder perspectives

As we have shown in the previous sections, stakeholders may be classified as belonging to different groups, a small, select number of which, in the field of evaluation, have traditionally been involved in conducting evaluations. In order to extend involvement in evaluation (and thereby incidentally to expand the kinds of evaluation that may be undertaken), it might be useful to expand stakeholders' understanding of the evaluation process. The following is a comment by a stakeholder (practitioner) who is not traditionally involved in project evaluation (except as a more or less willing subject)1. His perception is that 'evaluations are done for the funding body by ex-patriate visitors'.

This view resonates with what Rossi and Freeman (1993: 252) term a connoisseur evaluation, i.e. an evaluation done by an outsider who is a subject specialist not trained in evaluation - what they call 'among the shakiest of all impact assessment techniques'. In this case, a practitioner would be looking for power: the power to do and be involved in the evaluation of his/her project.

3.1 Power as a variable in defining stakeholders

Power as an element of evaluation and of the activity surrounding evaluation seems to have attracted curiously little attention, particularly if one accepts that evaluation and its utilisation are about the exercise of power: evaluations decide the continuation or curtailment of a project and its future direction. We draw on the following definition of power.

Power is the ability of individuals, or the members of a group, to achieve aims or further the interests they hold... How much power an individual or group is able to achieve governs how far they are able to put their wishes into practice at the expense of those of others (Giddens 1989: 729).
Following Giddens, power (a basic sociological concept) refers to relations within and between groups, and between individuals. We would add that, when applied to the process of evaluation, power is structurally created and allocated.

3.2 Knowledge as a variable in defining stakeholders

It is pertinent at this point to draw attention to the relationship between knowledge and power. Power is dependent on knowledge, and in some views the level of this dependence, for certain modalities of power, is currently greater than ever before (Fairclough 1989). Evaluation, for its part, is about generating knowledge, whether general or specific, and, as such, has its own power. This power is greatest where the findings or knowledge derived from evaluation offer clear guidance to specific stakeholders about future action, or where information for prediction is information for control - not forgetting, however, that some information/knowledge is not useful or is rejected by those in power (Patton 1997: 348-350). The relationship between knowledge and power suggests why those who hold power may tend to resist evaluation activities: change in the conduct of the evaluation may lead to the restructuring of power within or between organisations.

If the exercise of power is about furthering group or individual interests, and stakeholder groups can be defined by interest, then it appears to be worthwhile to explore interest and power relations as parameters if one wishes to understand stakeholder relations and perspectives. This definition of power makes it clear that power may be relative and may depend on the ability of stakeholders to control the actions of others - regardless of whether the ability to control is ascribed or achieved. It is this experience of power that may also underlie the sense of disempowerment evident in the rejection of outsider evaluation cited above.

3.3 Knowledge, power and interest as variables in defining stakeholders

The exploratory nature of this work will become apparent in the discussion that follows, particularly as it becomes increasingly evident that both interest and power are perhaps more complex than might prima facie appear to be the case. In an early stage of our empirical investigation into the notion that an understanding of power relations might illuminate stakeholder perspectives in evaluation, we considered a number of potential areas where relative power and different interests might come into play. These are:

Knowledge

About the project and about project evaluation

Expertise

Relevant to the project and to evaluation

Control

Power to initiate or stop action and participation

Budget control

Power to take decisions about spending

Responsibility

Recognition of the individual's/group's power and potential to affect others

Benefits

As symbols of individual power and as potential to advance (an increase in one's own knowledge and skills, for example)

Loyalty

Individuals may have more than one loyalty, but the direction of loyalty may change (as when, for example, one becomes integrated into a team). Loyalty in groups also has the potential to influence outcomes.

Status

Position within a hierarchy, or origin of a group or individual

Distance

Degree of acceptance of another's right to take decisions or benefit personally


We also arrived at three propositions about stakeholder perspectives which we proposed to test as we examined data:

1. Stakeholder perspectives defined by power relations offer more insights into evaluations than definitions based on job or position.

2. Stakeholder perspectives defined by power relations will have greater explanatory potential than considerations of cross-cultural differences when examining and understanding reactions to evaluation or an evaluation.

3. Understanding stakeholder perspectives will enable us to plan and organise evaluations more effectively, and to promote wider and better use of their findings.

The first proposition should be self-evident in the light of the preceding discussion. The second proposition is relevant in development, and derives from an earlier study suggesting that the existence of an evaluation culture reveals more about the utilisation of evaluation than attempts to explain utilisation through cross-cultural difference (Murphy 1997). The third proposition follows from the first two and would therefore be true for any approach.

4 Stakeholder perspectives

Our data come from an as yet small number of interviews and questionnaire responses from representatives of different stakeholder groups, including funding agencies, evaluators, project managers and teachers. Further data come from project reports and our own field notes, and these were used in deriving the above list of areas. For reasons of space we will not give any more details about the design of the survey. Also, the categories considered here do not include all those listed above, because we do not have adequate data on those which have been omitted.

4.1 Knowledge about the project or the evaluation is expressed in a number of ways.

You need workshops to get people involved and so they can understand.
This quote from an evaluation contractor identifies professional knowledge as a precondition for getting stakeholders involved, that is, being able to exercise power. It is an interest of the contractor to get this to happen and the contractor's belief is that it will promote ownership and favour project sustainability. The simultaneous passing on of control and responsibility is not perceived as a threat, a point which seems to support Guba and Lincoln's (1989: 267) idea that power is not to be shared out in a restructuring that aims to empower, but grown (new power is created).
The consultative nature of the partnership made acceptance of evaluation by local stakeholders easier.
This remark by a project stakeholder after an evaluation suggests that open communication about knowledge where the stakeholders are information users (cf. Patton 1997) means that the latter are more likely to use their power to utilise the evaluation. Their power, in addition, has been acknowledged. The following remark from the contractor supports this line of procedure, presumably because the expectation is that it responds to the interests of more stakeholders and encourages them to use their power:
I'd like all parties to understand the nature of evaluation, to have seen the TORs [terms of reference], to have had a hand in drawing them up, know who it's for, what's to be done, what the implications are. It should be an open relationship.
Such comments raise the further question of the nature of the consultation, involvement and partnership. To what extent is this realised through mere information exchange? To what extent are the participants in an evaluation actually enfranchised or empowered by the process? To what extent are they in a position to influence events at the various stages of an evaluation process? This in turn raises questions about the nature of expertise.

4.2 Expertise includes dimensions of learning and understanding, and these issues were raised by several respondents.

PE [Partnership Evaluation] is meant to be a positive experience for both sides, and a learning experience

Everyone involved should be learning, because there is shared ownership...

There is a trade-off between learning to evaluate and the quality of conclusions.

Each group learned from the other.

How about building in some kind of attachment that will allow the Fifo [fly-in fly-out evaluator] to work with/train personnel?

These observations from three stakeholder groups – contractor, evaluation manager, evaluation participant – reinforce the perception that learning to evaluate is important and that it empowers those who learn (Kiely et al 1995, Murphy 1996). The third comment – from an evaluator contractually engaged in carrying out an evaluation – introduces the inevitable tension between the learning process on the one hand and the dimension of accountability on the other.

At this point we may ask which, at the end of the day, is more important: the learning, or the integrity and quality of the evaluation findings and report? This is not an easy question to answer, and it clearly concerns different stakeholder interests. Nonetheless, if we find ourselves working in a climate of partnership evaluations, then greater clarity about our own accountability relationships (as evaluators) with a funding agency and/or with the project community is required. This clarity is crucial for all stakeholders involved, since different interests need to be identified and satisfied.

4.3 Consideration of issues of control

Consideration of control raises questions about the conditions that would need to be in place in order for some balance of control to operate amongst the participants in an evaluation.

The period of serious work by locals should be included in their annual work targets.

Time is a constraint. School time is strictly for teaching and little is spent on evaluation of projects.

The points raised here are expressed as concerns about time, and they link with issues about levels of responsibility, extent of involvement and, presumably, ownership. In our terms, the issue here is about power - the power to act - because, at present, someone else's power to oblige these people to do other things apparently precludes their involvement in evaluation.

4.4 Consideration of status

Status is defined here in terms of an individual's position in a hierarchy -project, ministry or institution. The comments we gathered were very much to do with evaluators' status and mode of operating, in other words, how they exercise their power:

At one end of the scale there were evaluators who were a bit dictatorial while at the other end there were those who were empathetic.
This, of course, suggests the need to consider the style in which power is exercised, because this respondent is referring to experience in one project with different evaluators.

With reference to experienced and more senior teachers, the following comments were made:

Lots can be improved, tapped from focused discussions.

The evaluator should, in fact, get these teachers to reflect on what they have been doing and to evaluate themselves.

There is a strain created so it becomes a one-way discussion thereafter.

I would recommend that evaluation findings be effected in a way that will be beneficial to the project...

What emerges from these data is that, unsurprisingly, there are differences of interest between the stakeholder groups and, again, a perception of those with higher status using their power in ways which are not accepted. These respondents, in other words, do not accept the implied power distance.

In terms of project management and promoting dialogue within an evaluation framework, there are indications here that insights can be gained from gathering information about the different prevailing interests and power relations in order to understand the stakeholders' various perspectives. Alongside the differences there are themes of concern to more than one target group. These conclusions are tentative, as much more work needs to be done to examine the three propositions critically. Interestingly, however, the majority of issues raised in our data so far do have implications for the ways in which evaluations, in particular partnership evaluations, are managed. We now conclude with some of these implications.

5 Implications for managing evaluation

The ideas we list here are not new, and have appeared before in discussions of the principles of educational management and of managing evaluation (e.g. Aspinwall et al 1992, Everard & Morris 1996). The only value we would claim in revisiting them is that they now come with new empirical support from participatory evaluations. We suggest that evaluators planning to do participatory research should:

· plan for open communication.
· define what partnership evaluation is to mean in the context.
· put power/responsibility at the level where decisions will be most effectively taken.
· resource time to learn to evaluate and to participate in evaluations.
To this list we propose tentatively to add that evaluators should:
· identify stakeholder interests.
· identify power relations between stakeholder groups.
Footnote
1. This perception of the situation appears to be limited since there is a lot of evidence to counter such a rosy interpretation of the scene – through talking to senior figures rather than practitioners (Mthembu 1996).

3.2 Considering the audience - an important phase in project evaluations

Dermot F Murphy
Thames Valley University
London
Clara Inés Rubiano
Universidad Distrital
Santafé de Bogotá
Colombia


This paper refers to an aspect of evaluations that often tends to be glossed over in the evaluation process – the audience of the evaluation. The paper emphasises how important it is for evaluators to give consideration to the audience or audiences for whom the evaluation is intended. The authors interrogate the complexities associated with notions of the audience and argue that differing interests and differing statuses as well as differing power relations are inherent to the conception of audience. For this reason, the paper argues that the identification of, and consideration for, the audience/s is central to notions of practical utility of an evaluation and to the compilation of evaluation reports.

The authors argue that the evaluation of any project involves people with differing roles and people who make different contributions. These differences imply differences in status, interests and in the power to act on or control what is done in the project, what is done in the evaluation, what is contained in the evaluation report and what recommendations are implemented.

By way of contextualising their argument, the authors draw on critical incidents pertaining to the audience/s which arose in the evaluation of the Colombian Framework for English (COFE) project, an INSET programme involving twenty-six universities and, with them, numerous potential audiences.

Finally, the paper concludes with suggestions on how to approach an evaluation by taking the reality of audience/s into account.


1 The complex nature of the audience/s

Project impact evaluations take place in a variety of specific settings such as an organisation or community. In an organisation, the evaluation may be as small as a single class of students or it may be significantly larger and involve a grouping of universities at a national level. Similarly, in communities, evaluations may measure the impact of a project on, for example, a small group of women. Or, on a more complex level, evaluations may involve an investigation which resembles a national census in all its complexity and with all its accompanying participants. What is common to evaluations is that all involve people who have different roles in the project or programme in question.

The broad-ranging differences among participants may raise questions like: For whom is the evaluation? and Who wants it? There is presumably someone who is commissioning or requesting the evaluation. This in turn raises the question: Whose interests are furthered by the evaluation?

· Funders will want to know if project goals have been achieved and to what extent the project represented value for money.

· Project planners may want to know how well their ideas translated into action, and what adjustments should be made.

· Teachers, who have developed teaching material, will want to know how appropriate the project materials really are.

· Students will want to know how well they performed prior to a project, and to what extent their performance improved as a result of the intervention.

What are accepted ways of identifying and serving these different groups?

In order to attempt to answer this, we shall briefly describe specific project evaluations which highlighted questions pertaining to the audience/s and their attendant methodologies. An attempt will be made to explore ways of dealing with different groups or audiences in evaluating projects, and then to indicate how the notion of audience was dealt with (or could have been dealt with) at different stages of the evaluation.

2 The context of this paper

The Colombian Framework for English (COFE) project ran for five years as a bilateral project which aimed to update the English language teaching programmes for teachers. The project focused on both initial and in-service training for teachers of English in Colombia, South America. It involved twenty-six universities, and built in a component of training in Britain for almost all members of staff from the participating institutions. In addition, COFE conducted several seminars and training workshops and arranged teacher exchanges within Colombia. Seventeen resource centres were set up across the country to support the project and the teaching of English in general.

The COFE project was subjected to three evaluations in the course of its lifespan. The first evaluation was undertaken by the Ministry of Education in 1994; the second, in 1995, by the then ODA; and the third by the Ministry of Education in 1996. Generally, the participants in the institutions tended to equate evaluations with supervision or inspection. Ministries, as was the case in the COFE project, tended to conceptualise evaluation narrowly as measurement of outputs against stated goals and tended not to give much consideration to the qualitative spin-offs of the project.

After an introductory training course which was designed to teach project players why and how to carry out an evaluation, this perception seemed to change. All the UK-trained lecturers received some basic training in carrying out evaluations, and one group, in 1994, initiated a small-scale evaluation in five universities (see Murphy 1994). Thereafter, in 1996, the final year of the project, an insider evaluation was undertaken by project participants. This evaluation was intended to assess the impact of the COFE on participating universities.

The group completed its work in 1997 and prepared various audience-specific reports for DFID, the Colombian Association of Universities (ASCUN), the Ministry of Education and also for the participating universities. This meant that, in its first, large-scale evaluation, the evaluation team had to deal with a variety of different audiences.

3 The notion of audience

When we consider how little attention the concept of audience receives, it would appear that the idea of audience in project evaluations is either taken for granted or (for most of the time) is relegated to the realm of the insignificant.

Audiences, it seems, are often identified with stakeholders (see Murphy & Rea-Dickins in this volume). In such cases, the range may include students in a class, their teachers, project planners, funders, university authorities, ministries (in two countries in an international project), employers, and even the taxpayers whose taxes pay for the project. One or more groupings from this list may be identified as the audience/s, and as the people who should receive the findings of the evaluation in the report prepared by the evaluators (Lynch 1996: 3). In spite of such an assumption, there do not appear to be any grounded categories for what constitutes an audience or who should be assigned the status of an audience grouping. In fact, as Rossi and Freeman (1993: 408) point out, very little is known about how evaluation audiences are formed, identified or activated.

A very general and fairly frequently used form of categorisation distinguishes between primary and secondary audiences (Sanders 1997: 398).

Although the primary audience includes teachers and other project staff, as well as students, Sanders (1997: 398) points out that there are few examples of students actually receiving evaluation reports.

The secondary audience includes administrative staff, other teachers who are not involved in the project, and, in Sanders's view, at least the sponsors or funding body. The sponsors or funding body are in fact the audience most likely to commission evaluation, while teachers may constitute the audience who are most likely to be affected by the project.

Once an audience has been identified, or has identified itself by commissioning the evaluation, consultation with the audience should determine the goals of the evaluation (Lynch 1996: 3). Obviously, different audiences are likely to have different goals and interests, and these will have consequences for how the evaluation is to be conducted. The number of audiences may make it impossible for all identified audiences to be considered as either recipients or shapers of the evaluation. This underlines the need to recognise that there are practical limitations on how far an evaluator may go in identifying and taking account of the range of interests of different audiences. Guba and Lincoln (1989: 202) suggest that a more precise way of identifying primary and secondary audiences may be to select audiences according to their relative stake in the project. The danger inherent in this view is that the audience may be perceived as comprising amorphous groups assembled in predetermined categories. Patton (1997: 43) cautions against this. He suggests rather that evaluators will need to build up trust and rapport with the individual members of an audience grouping as people - and not simply as an organisation or group.

Patton (1997: 43) proposes that audiences must be seen as potential users of the evaluation. This emphasises the need for the evaluator to understand the politics and values underlying the project. He/she should encourage potential users to speak for themselves, especially where the evaluator is an inappropriate advocate (Patton 1997: 365). If we accept Patton's suggestion that potential users of a report constitute the audience, it makes sense to distinguish between audiences identified as users of the evaluation and audiences to whom the evaluation report will merely be disseminated.

Since audiences identified as users of the evaluation are the primary audience, they may be considered primary users. As such, they should receive the primary attention of the evaluator. On the other hand, the audiences to whom the evaluation report will be disseminated, the recipients of the report, are unlikely to use the report and should, in terms of this dichotomy, receive less consideration from the evaluators.

It follows from this that a proper appreciation of the notion of audience in evaluations could reduce the high number of unused, dust-gathering reports that abound. There are too many examples of groups who receive reports but do not use the results or pay attention to the recommendations. The reasons for this may be attributed to the inappropriacy of the evaluation for its audience.

4 The audience/s in the COFE evaluation

Because the COFE1 project was so large, there were many different groups of stakeholders who expressed an interest in the evaluation. As was suggested earlier in this document, stakeholders' interest may have been attributable to their different interests in project outcomes, their different involvements, the different timing of their involvement, their varying commitment to the project or, even, to some hidden political agenda. Indeed, the different interest groupings mentioned here could not all be treated as one homogeneous audience. While each group represents a stakeholder grouping with an involvement in the project, it is possible that each group represents a different audience with different interests. This certainly creates difficulties for evaluators who, as indicated above, are cautioned against simply conflating stakeholders with audience. Patton's (1997) distinction between users and recipients of evaluation reports might usefully be applied here.

The following section will illustrate how, in the process of the COFE evaluation, an assortment of interest groupings were dealt with (or in some cases overlooked) by the group of insider evaluators. The discussion explores the extent to which the evaluators gave consideration to the notion of audience and it will examine the extent to which their sense of audience affected their decisions and actions.

5 Planning the COFE evaluation

Having made the decision to undertake a COFE evaluation with a difference, the insider evaluators encountered a number of difficulties which they retrospectively attributed to the following causes. Firstly, this was the first exercise in which the participants had been required to work together as an evaluation group, which meant that the evaluators had to learn to work together as an evaluation team (as opposed merely to being project participants). Secondly, the evaluators came from different cities and from different universities, which brought with it all the problems associated with lack of proximity. Thirdly, since several of the evaluators had had no previous experience of carrying out evaluations, they experienced certain difficulties with the process.

What follows is a discussion of a series of problems that the group encountered as they tried to identify the audience/s. The group's problems arose out of their failure to recognise the significance of the variety of audiences whose interests the evaluation was designed to address. Initially, the evaluation team considered the universities to be their only audience. The team had been unaware from the outset that it would also have to report to the Ministry, DFID and ASCUN. The For whom? question had not been considered in sufficient detail! When the sense of audience eventually began to impact on their decision-making, they began to believe that the report could be slanted in various ways to suit the needs of the different audiences.

Even before they began to focus on the audience, the evaluation team had omitted to consult the primary audiences to ascertain their expectations of the investigation. The group of evaluators considered that their task was simply to investigate and report on the impact of the project in the universities. They underestimated the complexity of the task and so were unready for the many problems with which they were faced.

Like all academics, they were restricted by the usual constraints of time. In addition to the evaluation, each member had his/her regular work to contend with - a matter compounded by the difficulties of attempting to communicate across continents and across the country. Even the local academics were scattered over a large geographical area. The problem of proximity was compounded by one of the criteria for selecting members of the evaluation team, namely that they had to be collectively representative of the different regions of the country. This, it was believed, would forestall the problem of having a report from the capital city imposed on the rest of the country. While this was an important consideration for ensuring that the report would be credible in the eyes of its (acknowledged) university audience, it nevertheless created additional difficulties. In addition, after ensuring representativeness, the group were under immense pressure to get on and report.

The evaluation began in the absence of adequate consideration of who the audience/s might be. In the midst of these difficulties, there was another problem: the evaluation did not allow or create the opportunity to clarify the expectations of the different audiences. Nor indeed did the evaluators take into account the kind of report which potential audiences might have expected. The group pointed out that they also had no clarity about any particular aspects of content that potential audiences might have wanted to see emphasised. One such example of this kind of limitation was the subsequent discovery (after the report had been disseminated in the Ministry) that the Ministry had actually wanted more information about the impact pertaining to INSET. By the time this request was received, it was already too late to gather the required information - information that would have had serious implications for the sustainability of the project2.

In the process of the investigation3, other audiences came to the fore. It was found, for example, that the various universities had identified themselves as constituting an important audience, and that they all wanted reports on their performance. When the universities' requests were considered, a wide variety of trends was discerned. Because their requests were so divergent, it was impossible to categorise the universities as one homogeneous audience. In fact, what the evaluators had initially perceived as a homogeneous group was revealed to be a category with a large number of different needs and concerns.

For example, one similarity among universities was that they all wanted to receive a report on their performance. On closer consideration, it was found that some wanted the information immediately. Others were genuinely interested in the impact the programme would make on all universities. Yet other universities wanted to know where they stood in relation to their fellow universities: they wanted a comparative league table. In such cases, these institutions saw themselves as competitors rather than as participants in the COFE project.

It was retrospectively felt that the evaluation team could not have gathered all the information needed, or reported on the wide range of information wanted by the diverse audiences, simply because the needs and expectations of the different audiences were so varied. This led the evaluation team to wonder whether it was indeed possible to work simultaneously for so many audiences.

6 Reporting the evaluation

The final stage of the process was the reporting stage. During this stage, the evaluation group was forcibly made conscious of the differences between its audiences. What was it to say to each of them? How would the group of evaluators say what they needed to say? How much could the group claim on the basis of its findings? It proved necessary and useful to have guidance from a consultant at this stage.

The dissemination once again meant that the evaluation team was confronted by a number of complexities pertaining to the different audiences. This problem may have been partly attributable to the fact that the evaluation group was made up of academics who were required to report to civil servants in the Ministry, and to foreign civil servants in DFID. The evaluators' backgrounds were very different from those of their audiences. This was an obstacle which the evaluation team could have overcome had they not ignored an initial suggestion to include someone from the Ministry in the evaluation team. The academics had initially felt that this would not be necessary - a view informed by the traditional rivalry between officials and academics (apart from being an attempt, on the part of the team members, to maintain the general equilibrium of the team by not bringing in outsiders).

It was at this stage of the project that the Ministry made it clear that, while it was gratified to hear that the project had effected various changes and improvements in pre-service education, this was not actually the outcome about which it wanted to hear. It was more interested in the impact of the project on INSET.

The evaluators found that the compilation of the reports and their dissemination had taken far more time and energy (and had been a far more complex process) than the group had initially envisaged. This complexity was caused by the multiplicity and variety of the audiences who were emerging - a problem aggravated by the difficulties associated with the lack of both proximity and time. It is contended that these problems might have been avoided to some extent had the team taken cognisance of its various audiences and had there also been sufficient time to review the drafts in order to ensure that they met the needs of their audiences.

Had the team, moreover, allowed for interim reporting to the identified audiences, the report could easily have addressed most of the interests which were not addressed in the final document.

7 Lessons learned: Some recommendations for taking audience into account

Looking back on the experience, hindsight makes it easy to offer a number of recommendations about how one might take the audience of an evaluation into account.

The following suggestions might help evaluators to do just that.

· Identify the audience or audiences during the conceptualisation stage of the evaluation.

· Limit the number of audiences for the evaluation to a number which can actually be managed. Do not attempt to focus on too many audiences. You cannot please all audiences at once.

· Identify primary audiences because they are potential end-users of the evaluation.

· Wherever possible or appropriate, include representatives of the audience in the evaluation team.

· Consult the audience at an early stage so as to gain an understanding of its expectations and requirements. Negotiate your intentions with them so as to broaden their concept of what an evaluation is.

· Get the audience to identify criteria for the evaluation (you may add to these or modify them with the audience). This may also help you determine more specific goals for the evaluation than you have or were given initially.

· A stronger sense of audience will help you to develop more appropriate instruments and questions.

· Disseminate interim reports to the audiences.

· Ask for comments on draft reports and use these to check the acceptability and usefulness of the report. Do not, however, be bullied into falsifying or toning down what you understand to be the truth.

· Remember that your audience may not see itself as one grouping. It is up to you to give it a sense of self.

8 Conclusion

When one examines the concept of audience, and the experience of a group of novice evaluators with regard to audience, it becomes evident that evaluators cannot afford to take audiences for granted. Consideration of who comprises the audience, and what these people want, is important for the utility of the evaluation on which you will expend a huge amount of energy.

Footnote:

1. As far as we know, there is no history of evaluation being conducted in the field of education in Colombia as is described in this article.

2. Sustainability was a concept which seemed to interest most members of the groups.

3. This was the stage during which the evaluation group met the subjects from one of its audiences while piloting its survey questions. It was at this stage that the evaluation team got its first sense of audience.

3.3 Impact studies and their audiences

Coco Brenes
Universidad CentroAmerica
Nicaragua
Tony Luxon
Institute for English Language Education
Lancaster University


This paper considers the variety of audiences that are implied by multi-partnered projects and explores the reasons why they are thought of as recipients of the project report. The paper considers various issues pertaining to the dissemination of the report, such as: Who writes such a report? Who reads it? In what language is it produced? How is it disseminated? Each of these questions is addressed in relation to the ODA ELT Project: Nicaragua, which was implemented between March 1993 and July 1996.

The project was collaborative and involved a wide range of potential audiences, including the ODA and three Nicaraguan institutions: the Ministry of Education, the National Autonomous University of Nicaragua and the University of Central America. The project was intended to upgrade the teaching of English in secondary schools and to assist with the development of INSET and PRESET in the universities and the Ministry of Education. The paper describes how contentious it may be to compile and disseminate impact reports.


1 Introduction

When impact studies and evaluation reports are produced, their reception gives rise to a number of problems. Who writes such reports? Who reads them? In what language are they produced? How are they disseminated?

The purpose of this paper is not to address all of these issues in detail, but merely to indicate the kind of complexities inherent in dissemination by indicating the varied audiences that are implicit when Impact Study reports are produced. The basis for the discussion will be the dissemination of the Impact Study report for the ODA ELT Project: Nicaragua. In this paper, we shall look at the audiences for the report, the way in which these audiences received the report, and the form in which they received its message.

2 Issues pertaining to reporting to a variety of audiences

There are a number of general issues which recur throughout this discussion. We do not intend to give the impression that we have any definitive answers to these issues - nor indeed that they might ever require answers of that kind. But we have realised just how important issues such as these may be. We also believe that precisely such issues need to be considered when reports on impact assessment are drawn up. The issues that we will consider in this paper are:

· the authorship of the report
· the language in which the report is produced
· access to the report
· the delivery of the report
· the form in which it is delivered.
We will begin by addressing the issues of the authorship of the report and the language in which it was produced (in this particular case).

2.1 The authorship of reports

It is usually assumed that the project coordinators (if they are British native speakers of English) will write the report, often merely because authorship is described in those terms in the project memorandum. This need not, however, be the case. In a recent Impact Study undertaken in Romania, for example, the report was written by the Romanian team in two languages - English and Romanian.

In the case of the Nicaraguan project, the outsiders were able to facilitate the writing of the report and the smooth running of the project. Several members of the project team felt that the introduction of people from outside the context had provided a catalyst for cooperative work among the main stakeholders.

For a number of years mutual mistrust had existed among the principal players, namely the universities and the Ministry of Education. At the same time, however, all the stakeholders realised that there would be mutual benefits if all concerned could work together, and so they had searched for ways to bring this about (despite a discouraging previous history of non-cooperation).

To achieve their aims, the stakeholders made it a deliberate strategy to bring in people from the outside who might have the skills to facilitate this process. The ODA ELT Project represented neutral ground, and the presence of Cheles (the Nicaraguan name for foreigners - literally meaning blondies) helped the process to get off the ground. At the beginning of the project, the foreigners often acted as go-betweens. This also meant that the Cheles had degrees of access, both for the general purposes of the project and for the collection of impact data, that local people sometimes did not have, and this facilitated the production of the Impact Study report.

2.2 The language of the reports

The language used to write such a report is an important issue, especially if no common language is shared by all participants, sponsors and stakeholders. Language affects both the production of the report and the way in which it reaches its audiences.

The ODA ELT Project: Nicaragua report was produced in English. Two British project coordinators wrote the report. Although they were able to speak Spanish, they could not write sufficiently well in Spanish to produce a report that was acceptable to first-language speakers of Spanish. This was a potential obstacle for some members of the audience, since Spanish was the first language of all the Nicaraguan participants, with the exception of one or two who had been born, or had lived at some stage, on the Atlantic coast (an English-speaking area). But even such people often possessed far greater proficiency in reading and writing Spanish than English.

The fact that the coordinators were English-speakers was one of the major reasons why the report was produced in English, but it was not the only one. The structure of the project was such that there were no full-time Nicaraguan members; trainers were incorporated at particular times (as, for example, for training workshops and for intensive courses). Thus, although some twenty people were involved in the assessment, they all worked full-time for their institution and consequently did not have the necessary time to dedicate to analysing data and writing up findings. Although some capacity for producing such a report had been developed (five team members were writing dissertations based on topics that were related to the project), none of these members could afford the time required to write the impact assessment report. As a result, the only full-time members of the team who were available and able to write the report were the project coordinators.

A second issue was that it was necessary to build up the capacity to produce research in English in the country. There was a dearth of ELT research in Nicaragua, and what existed had been produced in Spanish.

The small body of local research into ELT was enhanced by the baseline study of the project, which had been produced by the British project coordinators, and by a number of other shorter research papers. At last there existed an English-language body of research into ELT in Nicaragua.1

Thirdly, the use of English in English classes as the main language of instruction in secondary schools (rather than only Spanish) was one of the main goals of the project. Hence producing the report in English was seen as part of the promotion of English throughout the English teaching and learning community.

Fourthly, the main project sponsor was the ODA (as it was then known), and it was unclear whether the ODA would have accepted a report in Spanish. For the reasons outlined above, the report was produced in English.2

2.3 The audiences for the Impact Study

It was intended from the outset that the Impact Study report would not be an internal ODA document but rather that it would be public and open to all involved in the project and beyond. We shall now consider the main audiences for the Impact Study, and how they were reached.

2.3.1 The host institutions

Two major universities - the University of Central America (UCA) and the Autonomous National University of Nicaragua (UNAN) - and the Ministry of Education (MED) were the main Nicaraguan sponsors of the project. There were representatives of these institutions on the steering committee of the project, which had been established six months after its initiation. The committee was kept informed of the progress of the research, both collectively in regular steering committee meetings and individually, on an ad hoc basis. A sizeable number of the steering committee could speak and read English, although several others had little or no English. For this reason, steering committee meetings were always conducted in Spanish. Since the British counterparts were all fluent in Spanish, conducting all meetings in Spanish was not a problem. However, when it came to the stage of writing the reports, the intention to write in English gave rise to several problems.

As the main report took some time to write (especially since project activities were continuing), it was necessary to produce some form of interim report for committee members. It was neither logistically possible nor financially feasible to produce a report in Spanish, as translation in Nicaragua is very expensive and takes a great deal of time. The solution decided upon by the project coordinators was to produce a digest of the main findings.

A summary of the findings, comprising about 20 pages and mostly in graphic form, was produced. Some of the members could read English, but for those who could not - and indeed for those who had neither the time nor the desire to read the report in full - this seemed the best solution. Although this summary contained minimal text, it included numerous charts and graphs depicting the findings, and there were, in addition, glossaries with explanations in Spanish. The summary was then discussed in committee, and any questions concerning the results were addressed.

Each institution subsequently received a copy of the main report, but the short graphic report proved most effective for many people - mainly because it could be read quickly, was not heavily dependent on language, and could easily be discussed in a meeting.

This summary report (rather than the full report) was used as the basis for discussions about plans for sustainable activity after ODA support had ceased, although the main report remained available for reference.

2.3.2 Trainers

The direct beneficiaries of the project were intended to be, firstly, the teacher trainers at the two main universities and, secondly, the majority of teachers in secondary schools. A team of 20 Nicaraguan trainers was involved in the research process and was consulted constantly about the general progress of the research. These trainers all spoke and read English, so language did not pose a problem. Although the full report was made available to all of them, most of the trainers did not have the time to read it in its entirety and therefore also found the shorter document extremely convenient.

After the main Impact Study report had been finished, copies were distributed to the institutions and then discussed in meetings. Trainers read the report in both its abridged and full-length forms: those who intended to do further research into ELT read the full report, while most of the others found the abridged report far more convenient. Every trainer received a copy of the short report, and a number of copies of the main report were made available to the departments, so that anyone who wished to consult the full version could do so.

2.3.3 Teachers

In terms of ownership, it seems reasonable to suppose that the secondary school teachers, who were the intended beneficiaries of the project, should have received a copy of the final Impact Study. It would, however, simply have been too difficult and too costly for this to occur, and other means were used to communicate the findings to these teachers. It was not intended that the research team should take decisions about access on behalf of these primary stakeholders. Rather, the decision was based on making the report as accessible as possible to those people who might be involved in activities contributing to the sustainability of the project. Although in theory there was no bar on who might read the report (since it was a public document), real access was often precluded by circumstances. For example, it was unrealistic to expect that a teacher living in a small village in Nueva Segovia near the Honduran border, who could only be reached after quite a hazardous journey, would have easy or unlimited access to the report. It seemed reasonable, given the context, to find alternative ways to reach teachers.

Given that more than 600 teachers in all parts of the country had in some way been reached by the project, it was not feasible, from a logistical or financial point of view, to distribute copies of the reports to all of them. Even the Ministry of Education found it difficult to reach many of the teachers in outlying regions, and however much the ministry attempted to facilitate communications with teachers, this channel was never entirely satisfactory. Teachers who had received training through the project could, however, often be reached through the newsletter of the national association of English teachers (ANPI), and a summary of the main findings was therefore included in the relevant edition of the newsletter. This proved to be one of the most efficient channels of communication. There were also regular meetings with individual teachers to discuss the report.

2.3.4 ODA

ODA were the British sponsors of the project and, as such, the sponsors of the Impact Study. The main report was seen by the ODA both in its draft form and in its completed form. The results and observations it contained formed the basis for a review of the project - a participatory exercise involving the ODA education adviser and two members of the project team from two of the key institutions involved.

Issues of accountability, value for money, and sustainability were of particular importance to the ODA. There were also indications at various times of a hope that the Impact Study might contribute to developing a methodology for educational impact assessment in general. The education adviser used the Impact Study as a reference point for the project and subjected it to scrutiny by covering the same areas herself through discussions with stakeholders and target groups.

2.3.5 The British Embassy

The ODA project in Nicaragua was unusual for an ELT project in that the in-country management was conducted through the British Embassy (there being no British Council or similar organisation in Nicaragua). The British Ambassador was the in-country line manager of the British project coordinators. He had also been very active in the project as a member of the steering committee. Furthermore, in the year after the project ended, he took charge of the British aid budget for Nicaragua and so wanted to see what kind of investment he might need to make to ensure sustainability. His interest therefore came from a number of angles: he was sponsor, manager, and project participant. It may also be that he felt the need to 'fly the flag' (diplomatically speaking) by showing what Britain had been doing in Nicaragua.

As with the ODA, the ambassador was shown the report at the draft stage and made comments, where relevant. He also received both forms of the report.

2.3.6 Lancaster University

Lancaster University had provided consultancy for the project in its initial planning stages and for the Impact Study itself. They were also the principal overseas training providers. They therefore had professional concerns about their effectiveness in these roles and also felt that the Impact Study would contribute to the development of a methodology for assessing impact.

Professor Charles Alderson provided consultancy directly related to the procedures and production of the Impact Study, while John McGovern had provided some consultancy on the baseline study. Both consultants were kept informed while the Impact Study was being carried out, and both received copies of the draft and final reports, as well as of the short interim report, which they shared with the trainers and developers at the Institute for English Language Education (IELE).

2.3.7 The Overseas Service Bureau and the Agency for Personal Service Overseas

Two NGOs, the Overseas Service Bureau (OSB) of Australia and the Agency for Personal Service Overseas (APSO) of the Republic of Ireland, had contributed teachers and trainers to the project, two of whom participated in the Impact Study research. The research was discussed with the organisations' representatives and with the personnel who had participated, and a copy of the report was distributed to the representatives.

The European Union (EU) and the Inter-American Development Bank (IDB), as prospective donors, were also informed of the activities and results of the project in the hope that they might be persuaded to support an expansion of the project, which the team hoped would grow to cover all areas of the curriculum. For this reason, a summary of the results was distributed to them. This process eventually led to their funding a sector-wide education programme, recently initiated, whose coordinators are currently using the Impact Study to inform its structure.

2.3.8 ELT Professionals

Many research groups, in particular the Management of Innovation in Language Education (MILE) research group at Lancaster University, have helped bring research of this kind into the public domain. The British Council also used the Impact Study report in compiling a report on ELT in Central America.

Since there are many potential audiences for a study of this kind, it is important to consider how such audiences may be reached, the form in which the results of such research can be disseminated, and how the messages contained in such reports are received by their various audiences.

3 Conclusion

This paper has considered the variety of audiences implicit in the ODA Nicaragua ELT project. It has argued that issues pertaining to authorship, the language in which a report is written, and the accessibility of the report are more complex than they may initially seem. The aim has been to alert the reader to some of the pitfalls that may be encountered if the reporting stage of an assessment fails to take potential audiences into consideration.

Footnotes:

1. The Impact Study is also now being used as material for analysis in the Methodology courses in the new TEFL programme at UCA. In addition, about five master's dissertations pertaining to the project have been carried out.

2. The line manager of the project was the British Ambassador, who could read Spanish.

