The Tamarack Institute calls evaluation "reality testing": a systematic process of testing ideas, hunches, and beliefs about the nature of a challenge and how it might be addressed through the use of data and rigorous 'sense-making'. Yet most evaluation findings are not used by their intended users for their intended use.
- Many factors affect the uptake of evaluation findings: the evaluation user's interest in the findings, the evaluability of the intervention, the skill of the evaluator, the political context, and so on.
- Research suggests that the "personal factor" is the most important: the interest of the evaluation user in the evaluation process and findings, the quality of the evaluator, and the trusting working relationship between the two.
- The probability that intended users will use evaluation findings can be improved by embracing a utilization-focused approach.
- A utilization-focused approach offers many helpful strategies for doing so.
WHAT DO WE MEAN BY “USE”?
- Instrumental use – evaluation findings are used to directly inform a decision, improve a program or policy, develop new directions, or contribute to solving a problem – the findings are linked to some subsequent, identifiable action.
- Conceptual use – when an evaluation influences how key people think about a program or policy and understand it better in some significant way – but no immediate action or decision flows from the findings.
- Process use – when participating in an evaluation encourages people to more fully embrace evaluative thinking, learning, and the use of data in making decisions – the benefit flows from the evaluation process itself rather than from the findings.
- Misuse – when evaluation users manipulate evaluation findings, data, or process for some political or self-interested purpose.
TYPES OF USES (PURPOSES)
- Judgment (Summative) – to help decision-makers decide whether to sustain, wind down, or expand an intervention.
- Learning (Formative) – to improve or refine an intervention.
- Accountability – to demonstrate that resources were well managed, the intervention plan was followed, and results were attained.
- Monitoring – to manage the intervention through routine reporting and early identification of problems.
- Development – to create or radically adapt an intervention under dynamic conditions.
- Knowledge generation – to enhance general understanding and identify generic principles about effectiveness.
THREE CRITICAL QUESTIONS
- Primary Intended Users: Who are the primary intended users of the evaluation and what are their major questions?
- Primary Intended Use: What is the intended user’s primary intended use of the evaluation findings?
- Tailoring Key Features: What key features should be kept in mind to improve the probability that primary intended users "use" the evaluation findings (e.g., their 'interpretive lens', their preferences for data and methods, the window-of-use)?