[Editor’s note: This post was written by Paul R. Kelly, PhD Researcher in Impact, Evaluation & Service Design, Lancaster University, UK. Email: email@example.com.]
Welcome attention is now being paid to how evaluations can better engage with and contribute to policy processes, beyond dissemination alone. One example of this trend is CIPPEC’s work on influencing public policy assessments (2015). To add to this kind of work, I would also like to pose some questions about the relationship between power and knowledge in the growing landscape of evaluation, and in our visions of bridging evaluation to policy and to other cultural sites.
The rise of evaluation knowledge
Evaluation knowledge has penetrated the diverse worlds of international development in recent decades. We now design evaluations for before, during and after new policies or programs. We use baseline studies, monitoring, mid and final evaluations. We have monitoring, monitoring and evaluation (M&E), monitoring, evaluation and learning (MEL), monitoring, evaluation, learning and accountability (MELA, or MEAL), value for money (VFM) and transparency and accountability (T&A). And we must not forget our long-standing evaluations of financial performance, organisational performance, personal performance, efficiency, and effectiveness.
We use quantitative and experimental methodologies such as control groups, randomized controlled trials (RCTs), before-and-after trials, and various statistical approaches. Qualitative approaches include participatory action research, appreciative enquiry, most significant change, and outcome mapping. There are over 100 specific technical approaches today; some say nearer to 1,000.
So, in a nutshell, with our diversity of functions, timings, methods, tools and approaches, one might presume that we are in good shape to evaluate our policies and our projects. But what if more knowledge is not the answer to social problems? What if knowledge is also part of our problems?
What if knowledge is part of our problem?
How can knowledge be part of our evaluation problem, part of our policy conundrum? Surely any new knowledge, if “robust” and relevant, is useful? But there are at least three difficulties with this simple view.
Firstly, any knowledge is culturally constrained. There is a long history of studying knowledge and power, and important texts open critical doors, including Kuhn’s seminal 1962 book “The Structure of Scientific Revolutions”, which explored how different scientific communities called upon different knowledge, evidence and “facts” to argue their cases.
Foucault, in books such as “Discipline and Punish” (1977), went much further, arguing that power is not just held by individuals such as chiefs, evaluators, politicians or policy directors, but is diffused across society, embedded in our facts, truths, or “regimes of truth”. Issues such as auditing, evaluation or gender identity, for example, are composed of these knowledge forms. This knowledge/power relationship involves whole institutions and permeates how we perform our everyday work. Any claim to knowledge, evidence, or data integrity involves criteria that legitimate some knowledge and erase others.
Secondly, contemporary researchers have argued that good policy might in fact be unimplementable (Mosse, 2004). The argument runs that the policy world and policy frameworks are not well suited to understanding complex practices. Indeed, social development may be as much about disjunctures and mess as about the power to order, manage or rationally plan (Lewis and Mosse, 2006). Our plans might not reflect practice “out there”. Others have suggested that particular kinds of knowledge lend themselves to ordering and controlling. We then use our evidence in “ontological politics” (Law, 2004) – in other words, in fights over truths, values and expertise. This raises questions about democratic voice and challenges the view that technical knowledge alone will solve our social problems.
Thirdly, critical studies of policy culture break down the idea that all policy environments are politically neutral and rational. Sumner (2006: 648) calls for more alternative voices around the complexities of policy and practice, noting “policy is shaped by political infrastructure”. Beeson and Islam (2005: 197) argue that policies “evolve independently of their intellectual merit and empirical credibility.” In environmental work, various authors have argued that we need to know how social, economic, scientific and political knowledges mix, merge and dominate each other in policy-making processes.
These three problems with knowledge are practical. Can marginalised groups access, read or edit evaluations? Or policies? How do evaluations and policies seek to order practices “out there”? Answering these questions requires close scrutiny of policy and evaluation cultures.
Critical paralyses and pausing for thought
Much of my concern about knowledge comes from a critical angle, borrowing from Foucauldian work, post-structural analysis (not the best friend of modernisers), and from the social studies of science, technology and accounting. Such work can be insightful, problematic, and progressive, but it can also lead to a kind of critical paralysis, a numbness that leaves us in a new world of indecision and inaction. How can we move past such a “knowledge” impasse? There is no single answer, but there are many avenues for exploration, experimentation, re-design and, perhaps, “reknowledging”.
Seeking alternative knowledges
In conclusion, we have glimpsed the paralysing effect that a critique of power and knowledge can have on well-intentioned evaluation and policy-making. But these are issues we must understand; we cannot ignore these problems if we seek enlightened progress. For now, let’s pause and reflect on three final questions.
- Is my knowledge partial?
- Is it primarily related to my field and institution, my silo?
- And importantly, does knowledge in my institution marginalise anyone else, particularly any vulnerable groups?
If this article is to have any positive effect, these questions should take us to an uncomfortable place. And this new place is a great vantage point from which to seek alternative evaluation and policy knowledge.