The biweekly ‘So What?’ guide highlights advice, events, and tips — mostly from the advocacy and evaluation worlds, selected by the Aspen Planning and Evaluation Program.
Evaluating Evaluation at the Hewlett Foundation
Many of you cool kids listened in last week as Amy Arbreton and Prithi Trivedi of the William and Flora Hewlett Foundation presented a webinar on what the Foundation is learning about, and from, almost 50 recent evaluations it had commissioned. Missed it? Thanks to our pals at the Center for Evaluation Innovation (who sponsored the webinar), you can catch up on the conversation right here. Amy and Prithi discussed key findings from the Foundation’s internal report, The Value of Our Evaluations: Assessing Spending and Quality. One finding: higher-quality evaluations cost more. Nice news for evaluators, but the Foundation noted that other factors contribute to quality too, including program staff’s investment of time at key points in an evaluation’s planning and execution. As happy collaborators with the Foundation, we say: so true.
The Summit of “What Works”
If you didn’t happen to be in Indonesia on April 16-20, you (along with us, sadly) missed the International Social and Behavior Change Communication Summit. A principal theme of the Summit was “what works” — including lessons about how we know what worked, which triggered our internal Happy Nerd Alert system. For example, an intriguing panel on the Improving Contraceptive Method Mix project in Indonesia, led by the Johns Hopkins Center for Communication Programs, discussed evidence connecting the dots among advocacy, district budget allocations for family planning, and behavior change in contraceptive use. Also debuting: a new resource hub to help Communication for Development practitioners choose appropriate research, monitoring, and evaluation tools and methods. Check out the participatory story behind the hub’s development.
Measurement 101: How to Evaluate Teachers
For those who experience the privilege and hardship of being a teacher (we APEP-pers have our own hard-earned collection of colorful classroom stories), the key motivating questions are: Are my students learning? How could I be doing better? But evaluating teacher effectiveness is neither straightforward nor uncontroversial. Our excellent colleagues with the Aspen Institute’s Education and Society Program have provided guidance on ways to improve teacher evaluation and make it more meaningful. And we were intrigued to learn of a multi-year process at the University of Oregon to overhaul its approach to teacher evaluation, incorporating new tools to capture student experience, teacher self-reflection, peer review best practices, and a framework for evaluating teaching excellence. We look forward to seeing what the UO’s experience can teach us.