The challenges of integrating a Monitoring, Evaluation and Learning (MEL) system into policy research organizations are often greater than they first appear. In practice the idea makes sense to most organizations, and most already have some form of system in place for at least a few projects.
Yet as both facilitators and participants in a recent P&I online MEL training course designed to do just that came to understand, getting from good idea to practical implementation can take more effort than initially anticipated.
The reasons are partly predictable: organizations are busy and must juggle this learning process with other demands on their time. What was less clear to all of us is the way that conceptualizing such a system requires, for many, a critical look at larger organizational development questions. How organizations develop and update program strategies, how they ensure that staff management practices support the system, and how they finance it are just some of the areas that must be addressed to give a MEL system a base for implementation.
Organizations also face many choices when deciding how to build on existing practices, including whether to scale evaluation efforts up or down; for most, this means deciding whether it is more realistic to start MEL efforts at the project level or the program level.
Many try to design systems for the program level right away, and here the strength of the program strategy often dictates the quality of the evaluation system that can be developed. Some organizations have already clearly articulated their strategies and objectives, and for them identifying corresponding outputs and different types of measurement indicators can be relatively straightforward. Others still need to think through how long-term objectives can be focused into mid-term objectives and outputs. Only after getting this clear can they begin to identify the best ways to evaluate progress.
Regardless, what to measure and how to measure it are still areas where many feel uncomfortable. Quantitative measurement is the surest ground: counting trainings or interactions is something most have done. Working out more qualitative measurements that can capture attitudinal changes or real learning and application is less familiar terrain. How to carry out such measurements within normal staffing, resource, and time constraints is another challenge. Making this a natural learning process is perhaps the final one, as organizations consider how to build reflection and review time and tools into their normal project, program, and organizational strategy planning.
The take-away for both participants and us as facilitators has been that applying these efforts is itself a process, one that will inevitably be only partially implemented at first. With time, and in step with how the organization handles its own strategic development, MEL efforts can bit by bit be designed, digested, and incorporated into regular management and programming efforts.