Save the Date, Folks! Next Advocacy Evaluation Breakfast on July 9th
Earlier this year, we wrote our reaction to Redstone Strategy Group’s “Assessing Advocacy,” which appeared in the Spring 2013 issue of the Stanford Social Innovation Review. We’ll be continuing the conversation on July 9th when Ivan Barkhorn and his team stop by the Aspen Institute for another edition of our Advocacy Evaluation Breakfast Series. They will be presenting on the evaluation strategy introduced in their article, with the Center for Evaluation Innovation’s Julia Coffman offering a response. Make sure to RSVP here. Yummy pastries and a good discussion are subjectively guaranteed!
The Women Deliver conference last month featured some pretty darn important discussions about the role of advocacy in lifting up women and girls globally. Denise Raquel Dunning, Director of the Adolescent Girls’ Advocacy and Leadership Initiative (AGALI), wrote a piece for The Guardian explaining how equipping girls with the skills to advocate for themselves can lead to systemic social change. Although she doesn’t focus explicitly on evaluation, Dunning’s point about “gate-keepers” and community networks—among others—got us thinking about benchmarks for progress. What would tell you that the “gates” are indeed opening? And how can we measure that?
Doling out Grades
Sometimes when we talk to non-evaluators about our past evaluation work, they ask: so, do you give clients grades? At which point we reply: well, it ain’t that simple. Yesterday, the Fordham Institute released its evaluation of the Next Generation Science Standards for K-12 students: the short version is a colorful grade card by state; the long version is a sixty-seven-page report explaining definitions, methodology, results, and other wonky stuff. While a policymaker may want something short and spiffy, program staff may look for the detail behind the conclusions. The challenge often becomes how to make “short and spiffy” also robust and meaningful.