“So What?” – Your Bi-Weekly Guide to Advocacy With Impact
Lovingly selected and lightly snarked by Team APEP: David Devlin-Foltz, Susanna Dilliplane, and Alex Gabriel
Questioning Evaluation Questions
The nice people at InnoNet recently shared a guide to formulating good evaluation questions, riffing off the Evaluation Question Checklist from Western Michigan University’s Lori Wingate and Daniela Schroeter. The checklist sensibly identifies six criteria for effective and appropriate evaluation questions: they should be evaluative, pertinent, reasonable, answerable, specific, and complete. We’d add a seventh adjective to aspire to: few. To whittle your questions down, try this simple flowchart from Meaningful Evidence.
Digital Citizen Engagement – at some length
We are fans (at least in principle) of using feedback effectively, especially feedback from development project participants in the global South. (More love for Feedback Labs.) But doing that at scale is costly. Enter fancy (or not-so-fancy) digital data collection platforms: asking participants to text responses via mobile phones, or using video to collect stories of most significant change. Check out this comprehensive overview and case studies from DEET (not the mosquito repellent) at the World Bank.
Stop Underfunding Overhead
Unless you are fabulously well-to-do, or survive on air and love, you probably deal with unwieldy or unreasonable rules regarding overhead (or “indirect costs”). Now comes reason (and data) about what it actually costs nonprofits to deliver services (including evaluation services). Hint: it ain’t 10%, 15%, or even 20% of direct costs. As the authors argue, unrealistic overhead limits force funders and grant- or contract-seekers alike to play dishonest accounting games – corrosive to all parties. Praise be to Bridgespan and the Stanford Social Innovation Review for sparking and informing this conversation.