We were thrilled to see a familiar name in the most recent issue of the Stanford Social Innovation Review: “CHANGE Takes Time” by Serra Sippel of the Center for Health and Gender Equity. Some of you may remember that back in April we hosted an Advocacy Evaluation Breakfast with Serra, the Institute of Medicine’s Kimberly Scott, and Shira Saperstein of the Moriah Fund to discuss the role of advocacy in the evolution of PEPFAR. APEP just joined them in taking the show on the road, presenting on this topic at the Funders Concerned About AIDS Conference. Yep, a case of successful science-based advocacy. Hooray!
A Grant-Seeking Dance
…to the tune of Jingle Bells? Maybe one of our rhythmically inclined readers can try choreographing one. Fundraising professionals, however, know it quite well. GuideStar’s Q&A with Martin Taitel, the former CEO of a small family foundation in Boston, demystifies some of these dance moves, from preparing a proposal to seeking feedback on why it was turned down. For us, a few questions were missing: How do foundations assess M&E plans? How do these assessments contribute to the overall proposal review process? And—this may be pushin’ it a little—what’s most important in an M&E plan (e.g., breadth, depth, cost-effectiveness, methodologies, or…all of the above)? No one person can answer for all foundations, of course. But these are questions we think about often, as we try to dance ourselves.
Thanks to this past Tuesday’s “snow day” (with no snow, mind you!), the Brookings Institution cancelled its panel discussion on “Measuring the Influence of Education Advocacy: The Case of Louisiana’s School Choice Legislation.” We were pretty bummed. But the magic of the internets never fails us: the research report and technical appendix are already up on the Brookings website, ready for our eager consumption. In this paper, Brookings’ Grover Whitehurst and researchers from Basis Policy Research describe a survey tool that gauges the influence of advocacy organizations on policymakers; they call it “Survey with Placebo,” or SwP for short. They find the tool useful for identifying perceived influence, though less so for distinguishing among the effects of specific advocacy strategies.