Participatory evaluation

Ronnie Vernooy

Interview with a family: Taking time to reflect on the research cycle, Bénin
P. Rudedjer/Bioversity

In the previous modules, you learned how to prepare, design, and conduct participatory research around resilient seed systems and climate change. You followed six steps to understand the local context and analyze how best to develop an adaptation strategy, together with farmers, focused on acquiring new crop varieties and evaluating them in the field. Now that you have almost completed the research cycle, it is time to ask what has been achieved and what has been learned. In keeping with the overall approach of the resource box, answers to these questions can be found using a participatory method in which researchers, farmers, and possibly other stakeholders work together to assess the achievements, reflect on problems and challenges encountered, and identify lessons learned about the research process itself.

In this module, you will learn how to conduct a participatory evaluation of the resilient seed systems research cycle. The key questions are: When should evaluation questions be defined? Who should be involved? What are the key aspects of the research that participatory evaluation questions can usefully address? What types of tools can be used for what purpose? Where can the tools be found? What are the core tasks or steps in conducting an effective participatory evaluation? What factors can have an influence on the process and the results?

At the end of the module you will be able to prepare a comprehensive participatory evaluation plan.

What is participatory evaluation?

Research evaluation is the analysis of the effectiveness and direction of a particular research activity or project and involves judging progress and outcomes. Participatory evaluation is a joint effort, or partnership, between researchers and other stakeholders, such as farmers, government officials, or extension workers, to evaluate the research carried out in a systematic way. By participatory, we mean various types and degrees of involvement in, control over, and decision-making about an activity or the whole research process.

Two important reasons for using a participatory process are to increase the relevance and effectiveness of the research to stakeholders and to contribute to empowerment and social transformation. Combining these two reasons, participation can be seen as both a means and an end to strengthen local people’s capacity to make decisions, shape their future adaptation choices, and enhance their ability to create an environment for change. Participatory evaluation can also increase the accountability of everyone involved in the research process because it is constructed as a collective effort to learn from successes and failures. Last but not least, participatory evaluation can help identify gaps in the research process and suggest how these gaps can be filled through possible follow-up activities.

Participatory evaluation has emerged because of a recognition of the limitations of conventional evaluation. Conventional evaluation mainly serves the needs of project implementers and donors and ignores the interests of other groups involved in research and development efforts, especially local people. Such evaluations are normally carried out by outside experts, with the result that a gap exists between the experts’ perception of the project and its results and that of the people who are directly involved.

In this module, we introduce the concept and practice of participatory evaluation. Using such an approach will allow you to do more than just write a report for the donor agency or the agency that supports your work technically; it will also allow you and others to put the results to good use.

What do you already know?

  • Do you have any previous experience with conducting research evaluations? With conducting participatory evaluations? What have you learned from these experiences?
  • How familiar are you with the theory that informs participatory evaluation?
  • Do you know which tools are available, which ones to select for what purpose, and how to use them effectively?
  • Are you able to identify some of the challenges and conduct a participatory evaluation in an efficient and effective way?
  • How have you used the results of previous (participatory) evaluations? How were they used by others? Was that a satisfactory experience?

This part of the module will help you answer the following questions: When should evaluation questions be defined? Who should be involved? What are the key aspects of the research that participatory evaluation questions can usefully address? 

Farmers' field day: Participatory evaluation emphasizes participation of the stakeholders, India
C. Bonham/Bioversity

Participatory evaluation emphasizes participation of the stakeholders in deciding how project progress should be measured and the results acted on. Broadening the involvement of the various stakeholders in identifying and analyzing change can create a clearer picture of what is really happening on the ground according to the perspectives of both women and men of various ages, classes, and ethnic groups in the community. It allows people involved in the research to share successes and learn from each other. At the same time, it is potentially empowering as it puts local people in charge, helps develop their skills, shows that their views count, and provides an opportunity for joint learning. Scientists can also use results from participatory evaluation to learn from and redesign their interventions (see Work Group for Community Health and Development 2014 listed in the Recommended reading for this module).

Well-designed participatory evaluation starts with evaluation questions that are relevant and of practical use to the research team and others involved in the research process. The questions must also be answerable within a reasonable time frame and with the resources available. To integrate participatory evaluation into the whole research cycle and plan adequately for its execution, one or more evaluation questions should be defined during the initial stages of the cycle. They may be based on the objectives of the research, which are established before research begins, then later refined or adjusted. In this way, the risk that participatory evaluation becomes a last-minute and, thus, largely futile activity can be avoided (Estrella et al. 2000; Vernooy 2005).

Farmers scoring varieties, Ethiopia
C. Fadda/Bioversity

Useful evaluation questions usually address three aspects of the research:

  • Design and planning — This can include aspects of the situational analysis carried out in step 1 of the resilient seed systems research cycle (Module 1. Situational analysis and planning) and the analyses carried out in steps 2 (Module 2. Selection of GIS tools and software) and 3 (Module 3. Climate change analysis and identification of appropriate germplasm). How useful was the situational analysis to the research team? Did the team use the results of the analysis to respond to farmers’ interests and needs and was this done in a participatory manner? How did farmers take part in the subsequent research design and planning steps? Which farmers? Did the research team spend enough time and did it have enough resources to carry out these steps? Were the research team and others trained and well prepared to take on the situational analysis and the use of the software?
  • Research implementation — How well was the research carried out? Did the research team actually do what they intended to do? Who was involved and how? Were some farmers left out? Why? How useful were the tools used? Did activities proceed as planned, given available time and resources? Did new ideas emerge? Why and how? How did farmers contribute?
  • The research outputs* and outcomes — What did the research team actually achieve? Were useful outputs produced and for whom? What were the main outcomes and for whom? Were there any unforeseen results, positive or negative, and for whom? To what degree can the outcomes be attributed to the research activities? Did other factors have an influence? What did the research team, farmers, and other stakeholders learn from the overall process? Do we know why the results turned out as they did? What could be adjusted in future research based on the lessons learned?

*Note: Evaluation of the immediate research outputs in the form of novel varieties takes place in step 5 of the research cycle (Module 5. Field experimentation).

The specific evaluation questions should be introduced by the rationale (why carry out a participatory evaluation?) and address what will be evaluated and for whom. In the subsequent steps of preparing the evaluation plan, the remaining questions of who will carry out the evaluation, how, and when can then be dealt with. A sound strategy requires paying attention to all six key questions (Why? For whom? What? Who? How? When?).

Recommended reading

Vernooy, R. 2005. In: Gonsalves, J., Becker, T., Braun, A., Campilan, D., de Chavez, H., Fajber, E., Kapiriri, M., Rivaca-Caminade, J., Vernooy, R. (editors), Participatory research and development for sustainable agriculture and natural resource management: a sourcebook. CIP-UPWARD, Los Baños, Philippines, and International Development Research Centre, Ottawa, Canada, pp. 104–112.

This chapter provides practical guidance for formulating a participatory monitoring and evaluation plan, including evaluation questions.

Work Group for Community Health and Development. 2014. The community toolbox, Chapter 36: Introduction to evaluation, Section 6: Participatory evaluation. University of Kansas, Lawrence, Kansas.

This is an excellent, concise overview of participatory evaluation covering rationale, theory, and practice.

More on the subject

Estrella, M., with Blauert, J., Campilan, D., Gaventa, J., Gonsalves, J., Guijt, I., Johnson, D., Ricafort, R. (editors). 2000. Learning from change: issues and experiences in participatory monitoring and evaluation. Intermediate Technology Press, London, UK, and International Development Research Centre, Ottawa, Canada.

One of the first books about the participatory approach, this book provides an overview of common themes and experiences in participatory monitoring and evaluation across different institutions and sectors as well as case studies. Chapter 1, Learning from change (pages 1–14), introduces key concepts and synthesizes four major steps for implementation.

This part of the module will help you answer the following questions: What types of tools can be used for what purpose? Where can these tools be found? 

Seed exchange: Using appropriate evaluation tools, Nepal
LI-BIRD/P. Shresta

To find answers to the evaluation questions, one or more tools must be selected. There is no blueprint for selecting tools. Usually, a mix of complementary tools will allow you to obtain the required information and provide some basis for triangulation or cross-checking of validity. Tools from conventional social science research, such as interviews and surveys, can be combined with tools from participatory research, such as participatory ranking and mapping.

In recent years, a large set of tools has been developed by researchers with various academic backgrounds, but a common interest in participatory evaluation. Among them are oral histories and testimonials, key informant interviews, brainstorming, focus groups, surveys, network analysis, various rating and ranking exercises, mapping, diagramming, community photography and video, theatre, and role playing (Gawler 2005).

The nature of the evaluation questions will be the best guide for identifying the most appropriate tool or tools (Community Sustainability Engagement Evaluation Toolbox 2010). For the three types of questions related to design and planning, implementation, and outcomes, it would be useful to combine tools that capture the individual perspectives of participants in the research as well as their collective perspective (e.g., how the community has benefited from the research in terms of improved crops and cropping practices; what the community has learned beyond the introduction of new crop diversity). It is important to keep the process doable while obtaining useful information. The readings below include lists of tools and how to use them. Numerous resources are available online.
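As a simple illustration of how individual scores can be brought together into a collective picture and also disaggregated by social group, the short sketch below (in Python) tallies the results of a hypothetical matrix-scoring exercise. All variety names, criteria, groups, and scores are invented for the example; a spreadsheet or any statistical package could do the same tally.

    # Minimal sketch: tallying scores from a hypothetical participatory
    # matrix-scoring exercise. All names and values are illustrative only.
    from collections import defaultdict

    # Each record: (participant group, variety, criterion, score from 1 to 5)
    scores = [
        ("women", "Variety A", "drought tolerance", 4),
        ("women", "Variety B", "drought tolerance", 2),
        ("men", "Variety A", "drought tolerance", 3),
        ("men", "Variety B", "drought tolerance", 4),
        ("women", "Variety A", "taste", 5),
        ("men", "Variety B", "taste", 4),
    ]

    by_variety = defaultdict(list)            # collective perspective
    by_group_and_variety = defaultdict(list)  # disaggregated perspective

    for group, variety, criterion, score in scores:
        by_variety[variety].append(score)
        by_group_and_variety[(group, variety)].append(score)

    print("Average score per variety (all participants):")
    for variety, values in sorted(by_variety.items()):
        print(f"  {variety}: {sum(values) / len(values):.1f}")

    print("Average score per variety, by participant group:")
    for (group, variety), values in sorted(by_group_and_variety.items()):
        print(f"  {group} - {variety}: {sum(values) / len(values):.1f}")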

Recommended reading

Community Sustainability Engagement Evaluation Toolbox. 2010. Tool selector.

This list of monitoring and evaluation tools offers useful advice on the selection of tools according to type of data to collect (quantitative or qualitative) and category of outcome to measure (efficiency, effectiveness, and outcomes).

Participatory Methods: Useful methods and ideas. Institute of Development Studies, Brighton, UK.

This online document has a useful section on participatory methods.

Gawler, M. 2005. Useful tools for engaging young people in participatory evaluation. UNICEF Regional Office for CEE/CIS and the Baltics, Geneva, Switzerland, 42 pp.

This document presents clear and concise instructions for the use of 15 tools that can be used by young and not so young people alike.

This part of the module will help you answer the following questions: What are the core tasks or steps in conducting an effective participatory evaluation? What factors can have an influence on the process and the results?

Careful planning of the evaluation process is as important as planning in the earlier stages of the research cycle, but, all too often, researchers only begin to think about evaluation once the research is nearly complete. At that point, energy and financial resources may have run out.

Community sitting: Defining evaluation questions, Africa
L. Snook/Bioversity

Participatory evaluation integrates the results of the previous steps, defining good questions and selecting appropriate tools, and provides answers to six key questions: why? for whom? what? who? how? and when? (Patton 2005).

If all goes well, answers to the evaluation questions will be found and put to good use. One way to do that is to synthesize them in the form of recommendations for further action to be undertaken by specific users. A sound recommendation includes not only the concrete action(s) to be undertaken, but also a feasible time frame and the actors who can take responsibility for the action(s). In Module 8, you will learn more about sharing the results of research.

Taro field trials: Carrying out the evaluation plan, PNG
E. Dulloo/Bioversity

The results generated by participatory research depend not only on asking sound questions, using good tools, and having a feasible plan, but also on the context in which the research takes place. This includes the socioeconomic and political situation; local culture; resource access and rights; social identities and relationships along the lines of gender, class, kinship, ethnicity, and age; and the attitudes, interests, and abilities of the various stakeholders, including the researchers. In other words, as with all science, we need to be aware that knowledge is socially constructed and that this implies a process of representation, discussion, and potential conflict and negotiation. For example, in countries with a strong government system from national to local levels, politics usually play a key role in the process of rural change. Participatory evaluation can lead to greater transparency and accountability, but it should be introduced and practised with prudence (Vernooy et al. 2003; Xu et al. 2006).

Recommended reading

Xu Jianchu, Sun Qiu, Vernooy, R. 2006. The power of participatory monitoring and evaluation. Development in Practice 16(5).

A synthesis article on the Chinese experience. Not freely downloadable.

Patton, M.Q. 2005. Utilization-focused evaluation (U-FE) checklist.

An easy-to-follow checklist of 12 core tasks (or steps) and challenges for the facilitator of an evaluation.

Patton, M.Q. 1997. Utilization-focused evaluation: the new century text. Sage Publications, Thousand Oaks, USA.

This classic book on utilization-focused evaluation is the source of the U-FE checklist.

Vernooy, R., Sun Qiu, Xu Jianchu (editors). 2003. Voices for change: participatory monitoring and evaluation in China. Yunnan Science and Technology Press, Kunming, China, and International Development Research Centre, Ottawa, Canada.

This book describes in detail how two Chinese research teams learned about and successfully integrated participatory monitoring and evaluation into their research projects in the field of natural resource management. The two case studies also explain the context of the research, the challenges faced, and how they were dealt with. Of special interest are chapters 3, 4, and 5 (pp. 55–147), which describe the field experiences of the two teams.

Here is a quiz that will help you test your newly acquired knowledge. Once you have covered the content sections and completed the assigned readings, please answer the Participatory evaluation quiz.

Continue to quiz

Applying your new knowledge

Please document this step of the research process by summarizing your participatory evaluation plan. Specify:

  • Your objectives
  • Your partners in the process
  • The tools you selected
  • Your work plan
  • Possible challenges that may occur
  • How you expect to put the results to good use

Moving to the next module

The next module in our research process is Knowledge sharing and communication. Let us begin!

All content on this website is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.