
10 Years of Evaluation Practice in Media Assistance: Who, When, Why and How?



Introduction

Media assistance is an area of theory and practice with a long history, dating back to the post-Second World War period. Using Manyozo's (2012) overview of the media, communication and development field as a framework, I situate media assistance within a broader field of media, communication and development. (While Manyozo uses the term “media development”, I prefer “media assistance” in order to acknowledge the act of intervention, where the role of outsiders is to support local actors.) In this way, media assistance is related to media (or communication) for development and participatory communication, but has a distinct theoretical foundation and trajectory, including a focus on good governance.

The “third wave” of democratisation during the late 1980s and early 1990s sparked a revival of interest and funding from donors for media assistance in nations formerly under authoritarian rule. Several of the most well-known media assistance organisations (such as Internews, Panos, Article 19 and BBC Media Action) were established in this period. More than two decades on, however, little is known about the impact of such efforts, due, in part, to ineffective evaluation practices. (In this paper my usage of the term ‘evaluation’ follows the protocols set out by Lennie and Tacchi (2013), where ‘evaluation’ is used as shorthand to include all research, data collection and assessment activities that contribute to understanding the changes occurring in relation to the project, and possible ways to improve.) Several authors have pointed to a propensity for the missionary-like zeal of early media assistance efforts to override critiques (Sparks, 2005: 42), causing scant resources to be invested in evaluation (Mosher, 2011: 239–240). In the past decade, however, there have been several publications and events held on the topic by both industry and academia (e.g. Arsenault, Himelfarb, & Abbott, 2011; Banda, Berger, Panneerselvan, Nair, & Whitehouse, 2009; CAMECO, 2007, 2009; Lennie & Tacchi, 2013; Myers, Woods, & Odugbemi, 2005; Price, Abbott, & Morgan, 2011), indicating that media assistance evaluation, and more broadly, media, communication and development evaluation, is now firmly on the agenda.

Despite the growing interest, however, so far no detailed study of the actual evaluation practices of media assistance has been undertaken. Mosher's (2011) chapter provides some insights from consultants and media assistance organisations. Lennie and Tacchi have led several studies of communication for development (C4D) evaluation practices in UN agencies, which informed their framework for C4D evaluation (Lennie & Tacchi, 2013). Some critical analyses of problems associated with media assistance evaluation have also been offered (Abbott & Taylor, 2011; LaMay, 2011; Mosher, 2011; Waisbord, 2011).

Evaluation documents have been used as the basis of a study of evaluation practices before. Crawford and Kearton (2001) published a document survey of evaluation reports of the entire democracy and governance assistance field over the previous 10 years (1990–2000). Passey (2012) used USAID evaluation reports, though his study focused on the relationship between media assistance and democratization and was less concerned with questions of evaluation methodology. Similarly, Inagaki's (2007) review of C4D impacts makes only some references to evaluation methodology, focussing instead on lessons for improved C4D impact and effectiveness.

This paper contributes to the emerging scholarship on media assistance evaluation practice by providing a topography of media assistance evaluation practices over the past decade, through a document analysis of evaluation reports. It finds remarkable consistency in the evaluation approaches and methods used in ex-post evaluations, where most evaluation reports relied on little more than stakeholder interviews and a review of project documentation.

Research Design

This qualitative document analysis, exploring evaluation practices over a ten-year period, is part of a larger research project exploring effective media assistance evaluation. The sample of evaluation reports was primarily sourced from two industry databases: CAMECO and the Communication Initiative Network. To be included, a document had to be an evaluation report of a media assistance (mass media and community media) intervention (program or project) published between 2002 and 2012. The total number of evaluation reports included in this analysis is 47. The included documents are listed in Appendix 1.

In keeping with the known publication bias in the development sector (Inagaki, 2007: 39; Morris, 2003: 238–239), most published evaluation documents are positive appraisals of projects. This is an ongoing limitation to research of this type. Furthermore, as became apparent through subsequent research, very few evaluations undertaken by media assistance agencies are published online. To account for these limitations, the discussion section draws upon insights from media assistance evaluators. Ten evaluators were interviewed in 2013, including five consultants, three researchers with media assistance organisations, and two evaluators with approximately equal experience in both types of positions. The evaluators and researchers interviewed are listed in Appendix 2.

I used NVivo to code emerging themes from the sample. To guide the interpretation and analysis I used the concept of ‘accuracy’, which is one of the standards put forward by the Joint Committee on Standards for Educational Evaluation. Accuracy here depends on the justifiability of the conclusions, validity, reliability, detailed descriptions of contexts, systematic management of information, technically adequate evaluation designs, explicit reasoning, and guards against bias, distortions and errors (Joint Committee on Standards for Educational Evaluation, 1994). It is not possible to ascertain all these dimensions of ‘accuracy’ from the evaluation reports alone; however, in this paper I discuss some of these elements, including aspects of the evaluation designs and the techniques used to generate reasonable conclusions.

Findings
The ‘Who’, ‘When’ and ‘Why’: Purpose and Timing

The evaluation reports included in this sample range from mid-term evaluations to ex-post evaluations undertaken at the completion of the project, and from internally authored reports to consultant authored reports. These factors have important implications, since the motivations underpinning evaluations can influence the content of the reports.

In the sample, as far as could be established, 35 of the 47 (74%) evaluation reports were initiated or required by donors, while 12 (26% of the sample) were initiated by the implementing agency or project team (see Table 1).

Table 1: Authorship of the Sample of Media Assistance Evaluation Reports

Authored by | Total | Commissioned (/ required) by donor | Commissioned (/ required) by project
External consultant | 27 | 19 | 8
Donor | 5 | 5 | 0
Project | 5 | 2 | 3
Consultant + Donor | 2 | 2 | 0
Consultant + Project | 1 | 0 | 1
Donor + Project | 1 | 1 | 0
Unknown | 6 | 6 | 0
Total | 47 | 35 | 12

For the 27 reports in this sample that were undertaken by an external consultant (57% of the sample), the primary audience for the report was the donor who had commissioned it. This was evidenced by references to the Terms of Reference or the Scope of Work in the introductory sections of reports (such as executive summaries or introductions), which indicate that the report is a response to a donor's request. The primary audience of the reports authored by project teams was less consistent. For some, a self-consciousness of the donor as an audience was still evident in statements such as “USAID and DFID, the funders of Local Voices and Turnaround Time, require numbers to assess whether the programs have produced what they promised” (Cohen, Zivetz & Malan, 2008). Similarly, for the six reports of UNESCO-funded projects (for which the authorship is unknown), the evaluations were part of a routine, and very short, reporting cycle. In fact, only four reports in this sample (9% of the 47 reports) specified audiences in addition to, or other than, donors. These four reports listed the beneficiaries (participating journalists), local citizens or other media assistance NGOs (so that they could copy the project approach) as potential audiences of the evaluation.

One of the most common reasons stated for doing evaluations was to improve programs or to inform potential future phases. Even if this was not stated as an aim at the beginning, all but three reports (44 reports, 94% of the sample) had a substantial recommendations section, showing that guidance for future planning was indeed one of the primary outputs of most reports.

In relation to the timing of evaluation, distinct patterns were observable. The graph below (Figure 1) shows the distribution of evaluation reports by the number of years between the start of implementation and the evaluation. Many evaluations in this sample were undertaken after quite short periods of intervention. The most common points of evaluation in this sample were at three years, five years and two years of implementation, in that order of frequency; few evaluations were conducted after four years of implementation. Four reports were conducted after less than a year of programming: three of these were UNESCO/IPDC reports, which were not in-depth investigations of impact but rather were management-focused with some conjecture about possible impacts; the other was a mid-term report.

Figure 1: Timing of Evaluation (no. years following implementation)

The ‘How’: Evaluation Approaches and Tools

In the background of many evaluation reports were sets of indicators, Logical Frameworks, and occasionally data from baseline studies. The analysis of the use of these tools in the evaluation reports in this sample presents a mixed picture.

Fifteen documents in the sample (32% of the sample) made specific reference to indicators; some actively using indicators and some suggesting the use of indicators in future phases or projects. It is possible, however, that indicators are more common than this suggests, since these may not necessarily be discussed in reports.

Two evaluation reports, both of USAID-funded projects, used the Media Sustainability Index (MSI) as indicators. The evaluators of these reports, who were directed to use the MSI as indicators, repeatedly found that the indicators did not match their own observations, or that the wording was inappropriate for the local context. Examples of comments of this kind include:

The MSI is not a precise tool, but it can suggest basic trends. Our first impression was that these scores seem unreasonably low. The situation looks better to us than the Index indicates.

(McClear, 2004)

IREX met its targets for these indicators. However, as with many of the MIMP indicators, they do not adequately measure the results of this IR or reflect the scope of activities undertaken.

(ARD Inc., 2004)

In most cases, the indicators used were project-specific indicators (13, or 28% of the sample), which were either set by the donor or by the project organisation. Authors sometimes criticised these indicators for being too narrow, preventing a full exploration of the impacts. One report, co-authored by a consultant and staff from Internews, questioned the appropriateness of indicators for media assistance, saying, “we found it similarly challenging to mesh the indicators used by funders with the standards that journalists typically use themselves” (Cohen, Zivetz & Malan, 2008).

Several evaluators questioned the wording of indicators, commenting that indicators were not measurable, inappropriate, unclear or non-existent. Several evaluators and evaluation teams used the evaluation to change or devise new indicators. For example, one evaluation team expressed dissatisfaction with the original indicators and so focused much more on qualitative analysis of the project, saying:

The Monitoring and Evaluation plan submitted by RAMAK and approved by the CTO focused on five indicators to capture project success … They do not capture every aspect of the project; merely those that USAID felt were the most important.

(Creative Associates International, 2006)

While the usefulness and relevance of indicators was sometimes questionable, some evaluators of projects without indicators established at the outset also found this absence problematic and actively recommended a process of defining project indicators. This points to a paradox: where indicators were absent, evaluators (and project staff) were inclined to recommend strategies for increased clarity and structure, and indicators were seen as a solution to this. However, it was common for evaluators to be dissatisfied with existing indicators, which often failed to remain relevant throughout the life of the project. These perspectives suggest that indicators are perceived as potentially valuable in evaluation, but they are rarely designed at the beginning in a way that is useful. Their potential usefulness is stymied by being ill-suited, immeasurable, unclear or, indeed, absent. Evaluators who seemed satisfied with indicators were normally able to base their findings on qualitative data and in-depth analysis, and had some flexibility to adapt the indicators.

The Logical Framework, a common tool for organising objectives and indicators into tabular form, was included or referred to in less than a quarter of the evaluation reports (10, 21% of the sample). Once again, it is possible that a greater proportion of the projects in the sample had Logical Frameworks than specifically mentioned them in reports. Logical Frameworks were not a prominent feature in the main body of the evaluation reports, and where the Logical Framework itself was included in a report it appeared in the appendices. Authors referred to the Logical Frameworks primarily when discussing the purpose of evaluations; the frameworks were far less prevalent in discussions of impacts.

A similarly mixed message emerges in relation to the usefulness of the Logical Frameworks. Several evaluators involved in authoring the reports recommended greater effort and capacity building to improve Logical Frameworks, implying that the current use of Logical Frameworks is largely ineffective.

The collection of baseline data against the indicators for later use as a comparison to post-intervention data is commonly asserted to be best practice in the literature from this field (Mefalopulos, 2005: 255; Mosher, 2011: 247; Taylor, 2010: 2). However, baseline designs were not common in this sample: only four reports of the 47 had baseline data to draw upon (9% of the sample). Of these, one referred to the existence of a qualitative baseline study but rarely cited it in the actual evaluation report. Two reports struggled to effectively compare the data sets. In one of these cases comparison was made impossible by changes in methods, brought about by dissatisfaction with the original baseline study's methodology (this issue, in the Creative Associates International report of 2006, is discussed further in the next section). In the second case (Mytton, 2005) comparison was hampered by small sample sizes.

Only one report in this sample successfully used a baseline design together with a double-difference design that enabled effective comparisons both before and after the intervention, and between listeners and non-listeners (Raman & Bhanot 2008). However, this report is an exception in many ways. Although little information is given about how the study came to be conducted, the structure and style of this document is more in keeping with an academic journal article than the other project reports, which raises questions about the intentions, resources and evaluation capacity underpinning this case in comparison to others in the sample.
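To make the logic of the double-difference design concrete, the sketch below is a minimal illustration in Python, using entirely hypothetical figures rather than data from Raman and Bhanot's study. It shows how the change observed among listeners is compared with the change observed among non-listeners, so that shifts that would have occurred regardless of the programme are netted out of the estimate.

```python
# Minimal illustration of a double-difference (difference-in-differences) estimate.
# All numbers are hypothetical and are NOT drawn from Raman & Bhanot (2008).

# Mean outcome scores (e.g. knowledge of a governance issue, on a 0-100 scale)
listeners_before, listeners_after = 42.0, 61.0            # exposed group
non_listeners_before, non_listeners_after = 40.0, 49.0    # comparison group

# Simple before/after change within each group
change_listeners = listeners_after - listeners_before              # +19.0
change_non_listeners = non_listeners_after - non_listeners_before  # +9.0

# The double difference nets out change common to both groups, attributing
# only the remainder to the programme (under the usual comparability assumptions)
double_difference = change_listeners - change_non_listeners        # +10.0

print(f"Change among listeners:     {change_listeners:+.1f}")
print(f"Change among non-listeners: {change_non_listeners:+.1f}")
print(f"Double-difference estimate: {double_difference:+.1f}")
```

Without both a baseline and a comparison group, only one of these differences is available, which is part of why so few reports in this sample could make comparably strong claims about impact.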

The ‘How’: Methodologies and Methods

Most reports in this sample had a specific methods section, but ascertaining the approaches and methods used in reports was sometimes difficult. One report in this sample did not include any discussion of the methodology, while some others provided only very brief detail. At times it was necessary to judge the methodologies used based on the type of data presented.

Though it is difficult to segregate the methodologies into discrete categories due to overlaps, Figure 2 presents an indication of the crude split between qualitative, quantitative, mixes of qualitative and quantitative methods, and participatory approaches. It is important to note when reading this diagram that many in the ‘mixed methods’ category were highly skewed towards qualitative methods, with some minor inclusions of quantitative data, such as a small-scale, often not statistically-significant, survey.

Figure 2: Crude Split of Evaluation Reports by Methodology

While in the literature on evaluation of development projects there are concerns over the dominance of quantitative indicators and tools (Lennie & Tacchi, 2013: 2, 73), this description does not characterise the practices in media assistance evaluation in this sample of reports. Instead, most reports were based on qualitative approaches.

There was a remarkable consistency in the methods used in qualitative-based evaluation reports, and for this reason I refer to these as ‘the template’ for media assistance evaluations. As shown in Figure 2, almost two-thirds of the evaluation reports (29, 62% of the sample) relied solely on qualitative methods. The methodology sections of these became a familiar set of standard paragraphs, outlining the evaluators’ steps as involving a ‘desk review’ or a close reading of program documents and monitoring data (where available), followed by a visit to the field for around two weeks to undertake stakeholder interviews, focus groups or consultations, and to observe the running of the project. The types of stakeholders included in interviews (or other similar, qualitative methods) were the donors, the implementing agency staff, partner staff, and trainees or other participants.

In addition, this combination was the basis for most of the reports using mixed methods: more than half (8 of 14) of the reports categorised as ‘mixed methods’ in Figure 2 principally used desk review with stakeholder interviews, and simply added some minor quantitative study (or access to quantitative data). This means that in total 37 of the 47 documents are based on this general approach (79% of the sample).

In general, this ‘template’, or classic model of evaluation of media assistance, did not enable the provision of evidence of ongoing social or governance changes. But while there are serious limitations to this approach, it is not true to say that all evaluations of this kind failed to provide evidence and an analysis of impact. In particular, where reports added additional methods, such as content analysis, interviews with broader groups such as media experts and other media outlets not directly involved, interviews with government officials and community leaders, and ‘citizen panels’ (focus groups with the local community), the evidence and insights of concrete changes increased.

Exclusively quantitative methodologies were rare in this sample (2 of 47 reports). However, 16 reports used some kind of quantitative data (14 mixed methods, 2 quantitative; 34% of the documents in this sample). It is important to note that in many cases what was referred to in reports as ‘quantitative’ would not qualify as such in academic contexts. The samples or numbers of respondents were often very small and it was rare that the usual procedures were in place to ensure statistical significance. However, as these were labelled and treated as quantitative methods in reports (through the use of percentages, for example), I similarly categorised and compared the use of such methods on these terms. In this sample, quantitative data was in the form of quantified outputs, post-training surveys of journalists, content analysis or audience surveys. This discussion focuses on audience surveys, since this was the most common method of this kind, aside from basic quantified outputs data (such as the number of journalists trained, or the number and types of programs or articles published).
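As a rough indication of why small respondent numbers limit what can be claimed from such surveys, the following sketch uses hypothetical sample sizes and the standard normal-approximation confidence interval for a proportion to show how the margin of error around a reported percentage widens as the sample shrinks.

```python
# Illustrative 95% confidence intervals for a reported survey percentage,
# using the normal approximation for a proportion. Sample sizes are hypothetical.
import math

Z_95 = 1.96  # z-value for a 95% confidence level

def margin_of_error(p: float, n: int) -> float:
    """Half-width of the 95% confidence interval for proportion p with n respondents."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

observed_share = 0.60  # e.g. 60% of respondents report listening to the programme
for n in (50, 200, 1000):
    moe = margin_of_error(observed_share, n)
    low, high = (observed_share - moe) * 100, (observed_share + moe) * 100
    print(f"n={n:>4}: 60% +/- {moe * 100:.1f} points (95% CI {low:.0f}%-{high:.0f}%)")
```

With 50 respondents the interval spans roughly 46 to 74 per cent, which helps illustrate why percentages reported from very small samples convey less precision than their quantitative framing suggests.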

Quantitative audience surveys were used in five evaluations to answer questions related to impacts on audiences. Three reports used audience surveys to answer questions of reach and listenership, and, in some limited ways, opinions about the quality of the media outlet or program. Two reports used audience surveys to generate information about how listeners understood and used information, and how information affected their attitudes and behaviours.

Audience surveys were comparatively resource intensive, and compromises were often made in terms of the size and methods used. The evaluation of the SLGP program in Nigeria reduced the time and costs by using only a small sample (Mytton, 2005). Even when audience surveys were large enough to be statistically significant, there were additional problems with representativeness. One project (Creative Associates International, 2006) commissioned survey data for the baseline from a local branch of an international commercial company, Gallup. This, however, caused new problems, since such companies generally do not target rural and poor audiences. This situation is not unique. Colin Spurway, a Project Director with BBC Media Action in Cambodia, reported a similar lack of inclusion of rural and poor people in the data collected by the local audience research company, Indochina Research, since its core business is producing commercial ratings data for advertising agencies (2013 pers. comm. 19 June).

These experiences with audience research show that generating useful evaluation evidence using these methods is often more costly, and more complicated, than it may first appear. Ideally, audience research would include questions of the audiences’ use of the information and not merely the number of listeners, in order to engage with changes at a deeper level.

Some form of participatory approach was apparent in ten of the 47 evaluation reports in this sample (21% of the sample). However, only four specifically used the term ‘participatory’ to describe the approach. Of these four, three were authored or co-authored by Birgitte Jallov, who is known among the media assistance consultant and evaluator community for her use of these kinds of approaches. The six reports that described participatory methods without using the term ‘participation’ cited various motivations for these choices, and selected different stakeholder groups to involve in participation. Table 2 presents the rationale offered for using participation, and the points in the evaluation when participation was used, which I have separated here into: participation in decisions on evaluation priorities and methods, participation in data collection, and participation in data or findings analysis.

Table 2: Participatory Approaches in the Sample of Evaluation Reports

Report | Participatory decision-making in evaluation priorities and methods | Participatory/consultative data collection | Participatory data analysis | Stated purposes for participation
(Jallov & Lwanga-Ntale, 2006) | “Evaluation launch meetings” with all relevant stakeholders to “articulate their needs, interests and expectations”. | Not indicated. | “Debriefing meetings” to confront “all relevant stakeholders” with intermediary results to hear and include their reactions. | Encourage ownership, evaluation not as control but as interactive learning process.
(Shresta, 2007) | Project managers (does not use the term ‘participation’). | Workshop to collect ‘change stories’ (with reference to the MSC technique). | Vote on the most significant change stories. | Reason not stated.
(Thompson, 2006) | Not indicated. | Partner staff (radio/TV stations) (does not use the term ‘participation’). | Not indicated. | Reason not stated, implies efficient data collection method.
(Renneberg, Green, Kapera, & Manguy, 2010) | Not indicated. | Partner staff (radio stations) (does not use the term ‘participation’). | Not indicated. | Reasons not stated, implies efficient data collection method.
(Jallov & Lwanga-Ntale, 2007) | Not indicated. | Communities involved in collecting change stories. Evaluator consolidated and verified. | Partner staff (radio station) were involved in prioritisation. | Useful when no indicators. Implies that purpose is to reflect local perspectives.
(Jallov, 2006) | Not indicated. | Not indicated. | Confronted them with intermediary results to hear and include their reactions. | Encourage ownership, evaluation as interactive learning process.
(Taouti-Cherif, 2008) | Project managers (does not use the term ‘participation’). | Not indicated. | Not indicated. | Reason not stated.
(Cohen, Zivetz, & Malan, 2008) | Not indicated. | Not indicated. | Allowed staff to comment, no direct say over final report. | Incorporate staff input.
(Stiles, 2006) | Program managers. | Not indicated. | Not indicated. | For utilisation approach, participation to focus evaluation on improvement.
(Cornell, 2006) | Not indicated. | Not indicated. | Presented initial findings to donors, program staff. Comments were included. | Purpose not given.

In the cases where the approaches were specifically named as ‘participatory’, the design and implementation of participatory evaluation was limited compared with the guidelines written by proponents such as Lennie and Tacchi (2013), Chambers (2008), and Parks et al. (2005). For example, an Internews evaluation claimed that “the evaluation process was participatory” but, in practice, and drawing on Pretty's participation typology (Pretty, 1995), the actual participation appeared more akin to participation by consultation, as evident in their description of participation as “allowing some staff to comment on the findings and recommendations although they had no direct say over the content of the final report” (Cohen, Zivetz & Malan, 2008). This report is a clear example of the clash between the desire for independence and the desire for participation. In this case, and in all cases in this sample, independence and expertise were privileged over participatory approaches in which local project staff or communities would control and own the evaluation.

In keeping with existing literature on this point (see Chambers, 2008; Chouinard, 2013: 242; Parks et al., 2005; Plottu & Plottu, 2009: 343), participatory approaches, whether named as such or not, can therefore be motivated by either pragmatic purposes, such as to access local knowledge or to promote ownership of results, or by moral positions associated with people-centred development principles. Practical and instrumental uses of participatory approaches are not necessarily in conflict with people-centred and empowerment-based values; for example, a process of prioritisation by a group of stakeholders can add weight to the evidence by drawing on local knowledge, as well as provide opportunities for empowerment in the evaluation process. In general, however, it appears that access to local knowledge, and subsequent perceptions of increased accuracy, was a stronger motivating factor for involving stakeholders in the evaluation's design, data collection or analysis.

There are, however, barriers to implementing participatory evaluation in practice. Limited time, budgets and the structures of evaluation systems were barriers to these kinds of approaches. An example of this was Jallov and Lwanga-Ntale's evaluation of community radio in Tanzania (2007), which drew upon the MSC technique as a model. Rather than using the MSC technique as an ongoing monitoring tool throughout the life of the project (Dart & Davies, 2003; Davies & Dart, 2007), Jallov and Lwanga-Ntale needed to condense the process and strip away some of its participatory elements in order to meet time and budgetary constraints (Jallov, 2013 pers. comm. 6 March).

Discussion and Conclusion

This paper has outlined areas of diversity, but also aspects of current media assistance evaluation practice where there is some consistency. A series of trends in the timing, methodologies and implied epistemological perspectives were found. Although a wide range of methodologies are available in various toolkits, guides and evaluation methodology books, use of these in evaluation of media assistance was rare. Overwhelmingly, the dominant approach to evaluation in this sample was to review project documents and undertake stakeholder interviews. Evaluations were usually undertaken three or five years after project implementation had begun, and were usually authored by a consultant, who would visit the field for about one or two weeks. This general style therefore becomes the basic template for how evaluations are usually carried out. This format was familiar to the evaluators interviewed, who said it was “the known approach” (Susman-Peña, 2013 pers. comm. 24 July) and described it as the “classic model” (Renneberg, 2013 pers. comm. 26 February).

Several factors contribute to the repeated use of this template for evaluating media assistance. In particular, most evaluation practices are a direct response to bureaucratic systems and project cycles, where quality assurance processes dictate that evaluation funds are held until the final weeks of a project cycle, that a consultant with no prior knowledge of the project should be commissioned, and that the consultant is explicitly directed to check performance against the original plan. This system compels a default to the ‘template’, since the range of methods that can be used to evaluate a project at the completion stage, without existing monitoring and evaluation data, is limited.

There are clear deficiencies in this template approach. Evaluators referred to these kinds of evaluations as “quick and dirty”, involving little more than a collection of “success stories” (Abbott, 2013 pers. comm. 26 July). In general, this ‘template’ model of evaluation of media assistance did not enable the provision of evidence of ongoing social changes. As Abbott says, with a week in the field “you can write a report … but you can’t really give a good evaluation” (2013 pers. comm. 26 July).

The analysis supports observations by Abbott and Taylor (2011: 260) and LaMay (2011: 223–230) that the use of global indexes and indicators is problematic. From evaluators' perspectives, when global indicators were relied upon they often provided a distorted picture of both positive and negative changes. However, use of global indicators in evaluation reports was limited to USAID-funded, and usually IREX-implemented, projects.

Conspicuously absent in most evaluation reports was any reference to the “M” in M&E. Though access to existing data from monitoring was mentioned in 17 of the 47 documents in the sample (36% of the sample), with one exception (where Outcome Mapping was used), authors lamented that existing monitoring data was not of high quality, or had been generated using inappropriate methods leading to questionable results. This lack of existing monitoring and evaluation data frustrated many of the evaluators interviewed. For example, Warnock said,

You’ve got to have some sort of structure for gathering data as the project goes along. Otherwise you always end up in the position I’ve been in several times; that is, coming to evaluate a project where there's no data at all and you’ve got to actually spend your time, not evaluating, but trying to gather some data about it, and then do the evaluation. I don’t see that that's necessary. I think it's rather time wasting really.

(Warnock, 2013 pers. comm. 9 April)

This also has implications for learning from evaluations. Although the findings showed that most reports included substantial recommendations sections, recent research as part of the Media Map project has shown that evaluation reports are rarely used to inform future funding decisions in media assistance (Alcorn, Chen, Gardner, & Matsumoto, 2011). This may be because the timing of external evaluations in relation to project cycles often means that funding decisions for future phases are made well before summative evaluations at the project's completion are undertaken (Patton, 2011: 64–66, 72). More than any specific methodology or approach, therefore, more effective and useful media assistance evaluation will depend upon greater investment in evaluation design and planning, and an emphasis on monitoring and evaluation throughout the duration of the project.

That said, while early planning is essential, flexibility and adaptability in evaluation designs is also crucial. Evaluators noted that due to the realities on the ground or changes in the expertise and interests of the personnel, the project objectives and activities often change. A lack of adaptability was a particularly pronounced problem in baseline designs, where the baseline data collected by media assistance projects was rarely found to be relevant by the end of a project. Logical Frameworks, indicators and baselines are not intrinsically antithetical to flexible and adaptive evaluation, but these must be seen as working or living documents to be added to and amended throughout the life of the project. This perspective is similar to the idea of the “Moving Baseline” (Lennie & Tacchi, 2013: 79). The concept of living frameworks and ongoing collection of evaluative evidence is critical to balancing clarity and structure while also acknowledging and dealing with complex types of projects and situations.
