Firms maximize profits by setting strategies with targets, tracking progress against them, and using incentives to drive achievement. Without the natural feedback of the market, how can we use the same approach to drive results for the poor?
One of us works for the International Finance Corporation (IFC), an old-school Bretton Woods institution with years of experience building systems to track results across countries. The other works for the Bill & Melinda Gates Foundation, an entrepreneurial organization that is much newer on the block, without some of the systems that come with fifty years of development work but also without the preconceptions. We come to the table with a desire to learn from each other's experience, and we hope this first post will draw input from colleagues around the world who are similarly passionate about the power of data to improve our business.
Our main concern is how to make relevant, credible, transparent and actionable measurement the powerful tool it needs to be in our organizations. We are convinced that private sector practices linking strategy, results, information and performance incentives hold promise, but we are also aware of significant challenges to using them successfully. In our two very different institutions, we grapple with three similar questions about how to resolve this tension.
Question #1: How do we ensure data about results are used to increase effectiveness?
There is no doubt that the last 5-10 years have witnessed enormous progress in the understanding and practice of evaluating development effectiveness. Although philosophical debate remains about how evaluation is best done, we believe the simple fact that it is supported at the highest levels of institutions such as DFID, USAID and the World Bank, debated among practitioners and academics, and funded and pursued passionately is itself a sign of progress.
The problem is that the current discourse about evaluation theory and design doesn’t touch on the more daunting challenge of how to integrate measurement into decision making. Simply put, the most elegant evaluation design is irrelevant if the findings are never used. In this way, the true “gold standard” is the use of data rather than a specific way to collect it. Moreover, in both of our organizations, impact evaluation produces a small portion of the type of regular information people need to track progress and make decisions. High level strategies to alleviate poverty through business growth, farmer productivity, access to financial services and other means are built on diverse types of investments, grants, organizations, partnerships and contexts and therefore diverse types of information. Technically, we know how hard it is to “sum up” results data of varying quality and to track them as indicators on management scorecards and dashboards. Still, the real institutional need is to find a way to do exactly this: provide decision makers responsible for allocating resources with digestible information that they use. We find very little in the current debates and policy discussions to help us to chart a course to do this.
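To make the aggregation problem concrete, here is a minimal sketch, in Python, of one way to roll results data of varying quality up into a single scorecard indicator: weight each reported figure by the strength of its evidence, and show decision makers the quality-adjusted total alongside the raw one. The weights, schema and portfolio below are illustrative assumptions of ours, not how either of our organizations actually aggregates results.

```python
# A minimal, hypothetical sketch: quality-weighted roll-up of results data.
# The weights, schema and figures are illustrative assumptions, not how
# IFC or the foundation actually aggregates results.
from dataclasses import dataclass

# Assumed confidence weights by evidence source; adjust to taste.
QUALITY_WEIGHTS = {
    "impact_evaluation": 1.0,  # experimental or quasi-experimental evidence
    "monitoring_data": 0.6,    # routine partner reporting
    "self_reported": 0.3,      # unverified grantee estimates
}

@dataclass
class Result:
    project: str
    people_reached: int  # the outcome being summed across the portfolio
    source: str          # key into QUALITY_WEIGHTS

def scorecard_indicator(results: list[Result]) -> dict:
    """Return the raw total, a quality-adjusted total, and the ratio
    between them, so a dashboard shows reach and how well-evidenced it is."""
    raw = sum(r.people_reached for r in results)
    weighted = sum(r.people_reached * QUALITY_WEIGHTS[r.source] for r in results)
    return {
        "raw_total": raw,
        "quality_adjusted_total": round(weighted),
        "evidence_ratio": round(weighted / raw, 2) if raw else None,
    }

# Hypothetical portfolio mixing evidence of different strength.
portfolio = [
    Result("farmer_productivity", 120_000, "impact_evaluation"),
    Result("financial_access", 300_000, "monitoring_data"),
    Result("sme_growth", 80_000, "self_reported"),
]
print(scorecard_indicator(portfolio))
# {'raw_total': 500000, 'quality_adjusted_total': 324000, 'evidence_ratio': 0.65}
```

The design choice worth debating is the "evidence ratio": surfacing how well-evidenced a headline number is, rather than burying weak data inside an impressive total.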
Question #2: How can we use incentives to drive results and use of data?
Results-based approaches have been in and out of development for a while. Some ten years back, bilateral donors turned to tougher contractual tools as a means to buy and ensure outcomes from implementing partners. Performance-based incentives (PBI), or pay-for-performance (P4P), are showing up today in discussions on development financing. DFID has adopted such an approach to allocate its aid budget. The Results-Based Financing for Health (RBF) effort of the World Bank, the Norwegian government and DFID seeks to use results-based financing to improve healthcare in developing countries.
Despite these and other efforts to incentivize changed behavior among NGOs, governments and others, we don't use performance-based incentives in our own organizations. Experience working in development reminds us how difficult it is to control for complexity. We understand, too, that measuring change is not the same as being able to attribute it to individual action, and that it is hard to be held accountable for something that is both complex and beyond our control. Nevertheless, we wonder if we've disregarded too quickly the lessons of professional accountability from the private sector. Private sector leaders are held accountable for increased revenue and sales despite their lack of direct control. If we are willing to hold our grantees, partners and national service providers to this test, can we also hold ourselves accountable for targeting results, measuring progress and using the data to become increasingly effective?
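To see both why this kind of accountability is seductive and why it is hard, it helps to look at the mechanics. Below is a minimal sketch, in Python, of the disbursement rule at the heart of results-based schemes like those above: pay per verified outcome, capped at a multiple of the target. The function, target, unit price and cap are our own illustrative assumptions, not the terms of any actual program.

```python
# A minimal, hypothetical sketch of results-based financing disbursement:
# payment follows independently verified outcomes, not inputs. The target,
# unit price and cap below are illustrative, not any program's real terms.

def rbf_disbursement(verified_outcomes: int,
                     target: int,
                     payment_per_unit: float,
                     cap_multiple: float = 1.0) -> float:
    """Pay for each verified unit of outcome (say, a facility-based
    delivery), up to a cap expressed as a multiple of the target."""
    payable_units = min(verified_outcomes, int(target * cap_multiple))
    return payable_units * payment_per_unit

# Example: a target of 1,000 verified facility deliveries at $20 each.
print(rbf_disbursement(850, target=1000, payment_per_unit=20.0))    # 17000.0
print(rbf_disbursement(1200, target=1000, payment_per_unit=20.0))   # 20000.0 (capped)
```

Everything difficult lives in the inputs to this simple function: deciding what counts as an outcome, verifying it independently, and attributing it to the payee.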
Question #3: How can we assure that the “client” is center stage in our efforts to plan for and measure results?
Aspirations to ensure that the people on the receiving end of development are central to its planning and measurement are not new. The arguments supporting participatory development date back to the 1970s, and even the most mainstream institutions adopted its tenets in the 1990s. And yet, by the beginning of the century, twenty years of practice were characterized as an emperor with no clothes: the feared "New Tyranny" of orthodoxy without enough proof of value added (Participation: The New Tyranny? by Bill Cooke and Uma Kothari, 2001). It's no surprise that the pendulum has swung to the opposite pole today, with rigorous positivist evaluation design the sole focus of our dialogue and of our efforts to solve the broader measurement challenges we still face.
Although we applaud the emphasis on scientific methods, we lament the faddism of evaluation practice and the fact that we aren't all focused on a more diverse set of problems to solve. People, communities, and grantee and partner organizations remain essential to all of our efforts. Esther Duflo and Abhijit Banerjee's recent book Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty pushes us to see the connection between individuals and our efforts to help them, but how do we apply this kind of thinking at the scale and scope of high-level strategies designed and implemented by large donor organizations? Foundations survey their grantees and IFC surveys its clients to assess the quality of their relationships. But how do we connect the dots between what we learn and how we do and improve our work, in a way that avoids the mistakes of the participatory and experimental fads?
We could go on listing the questions that have motivated our recent one-on-one conversations. Please join us; we'd love to broaden the conversation and hear your ideas.