Quick and Dirty
There are some really valuable advocacy evaluation guides out there on the interwebs. Some of our favorites include Julia Coffman’s guide to advocacy evaluation planning and advocacy M&E toolkit, ORS’ guide to measuring advocacy and policy, and Innonet’s practical guide to advocacy evaluation. And while LFA’s recent advocacy evaluation mini-toolkit may not be as comprehensive as the others, it’s pithy, actionable, and asks some terrific questions of its users. Busy or not, organizations should take a look for themselves!
Michael Quinn Patton is at it again (and by “it” we mean providing folks with ridiculously useful insights into the evaluation field, where it currently sits, and where it’s going). In a piece for the Nonprofit Quarterly this week, Patton writes about ten major changes over the last decade in the way we conduct qualitative evaluation. He calls the emergence of new, purposeful sampling approaches the most significant development of all: “To be more strategically purposeful about sampling is to be more strategically purposeful about evaluation.” We couldn’t have said it better ourselves.
Evaluations, particularly external evaluations, can be viewed as intrusive and potentially damaging if they’re framed exclusively as an accountability exercise. To combat this perception, and to give the field new ways of talking about the intrinsic learning benefits of evaluation, we went looking for compelling evaluator testimonials. We certainly found one from Michigan State University Professor Robin Lin Miller, a proud psychologist and program evaluator. She writes passionately about the value of evaluation findings in addressing the critical needs of some very vulnerable populations. For you evaluation practitioners out there, what stories do you have?