What is Open Government and Is It Working?

November 8, 2013 • Panthea Lee, Guest Blogger

Panthea Lee is a principal at Reboot, a social enterprise working to improve governance and development worldwide.

This is the second post in the Aspen Institute Communications and Society Program’s six-part series on open government, which will appear on the Aspen Institute blog over the next five weeks. By sharing the conversations at the 2013 Forum on Communications and Society (FOCAS) and the ideas they inspired, we hope to advance a constructive dialogue around open government and a future of more equitable and accountable governance.

What is “open government”? The question is deceptively difficult to answer.

New York University’s Governance Lab recently listed 30 definitions of the term. Author Justin Longo explains: “Defining what open government means is complicated by the range of definitions, meanings and motivations that exist.”

And that’s precisely the problem: “open government” has become incredibly ambiguous.

The participants at FOCAS 2013 agreed. “Can we break down what open government actually means?” asked Phil Ashlock of Civic Agency. “Is open data the fundamental part of open government? That’s a technocentric view. Where does policy fit into this? […] We need standardization in our use of language so we understand what it is we are talking about.”

More to the point, if we lack consensus and clarity on what “open government” means, how do we know if it is working? The short answer: we don’t.

The Issue: Muddled Objectives

That the open government umbrella has come to include a range of initiatives is not itself a problem. The problem is that too many open government conversations proceed as though we are all working toward the same goals, when we are not.

Yes, a Congressperson seeking to enact legislation that enables citizens to request information from government, and a software engineer developing a tool that helps citizens understand when their streets will be swept are both, broadly, working toward greater transparency, accountability, and participation in government. But they are working toward fundamentally different goals. The former is focused on democratizing access to public records, while the latter is facilitating public access to government service information.

Too often, this level of specificity is lacking in open government conversations, muddling our understanding of what we are trying to achieve through different and distinct initiatives. At the end of the day, are we trying to make public agencies more efficient, hold elected officials accountable, tackle corruption, influence policy, or achieve any number of other objectives that fall under the open government umbrella? Let’s be clear about what exactly it is we are working toward.

Concepts that cover multiple definitions are tough to operationalize and their results even tougher to measure. Inasmuch as we are “working toward open government,” we need a coherent vision of the goals implicit in that statement. Once we are clear about what we want change to look like, we can then develop appropriate means to evaluate if and how we are making progress.

The Solution: Rethinking Evaluation Can Add Clarity

Rethinking how we evaluate open government initiatives could move us in the right direction.

In the United States, the Obama administration has both pledged to enable an “unprecedented level of openness in government” and heavily restricted the classification and release of government information. Across Africa, countries are opening up about how they plan to spend their budgets, but keeping mum about how they actually spent them. Public finance expert Matt Andrews has shown that across 28 African states, 63 percent of governments are more transparent in budget formulation than in budget execution.

Have these governments succeeded in achieving “open government”? And beyond evaluating their holistic records on transparency and accountability, how do we assess individual projects?

Our current frameworks for evaluation typically equate scale with success. In other words, the more people engaged in an open government initiative, the more “open” government has become. Scale alone, however, is a crude and often inaccurate measure of success.

There are more than one million government datasets online today. As of November 14, 2013, the US government alone has released more than 87,000; at one point, it was releasing four datasets a day. Impressive? Sure. But what does this tell us about how this data is affecting people’s lives or government policy? Studies that link the number of Twitter followers a government body has with its success in open government also miss the point.

“When assessing the success of consumer applications, you don’t just measure the number of users it has,” said FOCAS participant Michelle Lee of Textizen. “You measure other factors, such as the people returning within seven days, or 30 days, to understand what is happening.”

In short, the number of users downloading a civic tech app doesn’t tell us how that app is changing attitudes toward civic engagement or the culture of governing. To assess the impacts of open government, we must stop measuring outputs and start understanding experiences.
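
To make that concrete, here is a minimal sketch, in Python, of the kind of return-rate measure Lee describes. The event-log format and function name are assumptions for illustration, not part of any FOCAS tool or real app.

    from datetime import datetime, timedelta

    def return_rate(events, window_days=7):
        # Share of users who, after their first recorded visit, come back
        # within `window_days`. `events` is a list of (user_id, timestamp)
        # pairs from a hypothetical usage log.
        first_seen, returned = {}, set()
        for user, ts in sorted(events, key=lambda e: e[1]):
            if user not in first_seen:
                first_seen[user] = ts
            elif ts - first_seen[user] <= timedelta(days=window_days):
                returned.add(user)
        return len(returned) / len(first_seen) if first_seen else 0.0

    log = [
        ("ana", datetime(2013, 10, 1)), ("ana", datetime(2013, 10, 5)),
        ("ben", datetime(2013, 10, 2)), ("ben", datetime(2013, 10, 25)),
    ]
    print(return_rate(log, 7))   # 0.5: only ana returned within a week
    print(return_rate(log, 30))  # 1.0: both returned within a month

A seven- or 30-day return rate that lags far behind raw downloads is exactly the kind of signal that download counts alone obscure.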

Ideas in Practice: 100 Worst

At FOCAS, participants proposed a concept called 100 Worst to spur better public service delivery through competition. Citizens could rank government offices or services in distinct categories, and the desire not to be labeled one of the “100 worst” in each category could, in theory, motivate offices to improve their operations, particularly those notorious for inefficiency, such as the Department of Motor Vehicles (DMV).

Now, the concept of “Yelp for government” is hardly new, but what was interesting about the conversation at FOCAS was the keen focus on evaluation. Participants didn’t want to build 100 Worst just to build it; they wanted to use the data it generated to assess what effect the project would have on public service delivery (aka “impact evaluation”), and how they might increase the chances that government offices used the data to improve their offerings (aka “process evaluation”). By combining both types of evaluation, we can see what procedures, strategies and activities lead to desirable outcomes, and why. If, for example, evaluators found a correlation between close collaboration with government officials and improved service delivery, we could structure future implementations to improve the potential for impact.

The data collected could also support other interesting analyses. Mapping user or demographic data against user ratings, for example, may provide insights into how factors such as race, gender and average income affect service delivery.
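
As a sketch of what such a mapping might look like, the Python below groups ratings by a single demographic field. The record shape and field names are invented for illustration; 100 Worst was only a concept at FOCAS, so no real schema exists.

    from statistics import mean

    def avg_rating_by(records, field):
        # Average service rating per value of a demographic field.
        groups = {}
        for r in records:
            groups.setdefault(r[field], []).append(r["rating"])
        return {value: mean(scores) for value, scores in groups.items()}

    # Hypothetical records pairing a rating (1-5) with the median-income
    # bracket of the rater's neighborhood.
    ratings = [
        {"office": "DMV Midtown", "rating": 2, "income_bracket": "low"},
        {"office": "DMV Midtown", "rating": 4, "income_bracket": "high"},
        {"office": "Parks Dept", "rating": 3, "income_bracket": "low"},
        {"office": "Parks Dept", "rating": 5, "income_bracket": "high"},
    ]
    print(avg_rating_by(ratings, "income_bracket"))
    # {'low': 2.5, 'high': 4.5}

A persistent gap between brackets would be a cue for qualitative follow-up, not a verdict on its own.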

Stefaan Verhulst at FOCAS 2013 (Photo Credit: Daniel Bayer)

“We need to measure what works. And we need a shift towards evidence-based evaluations,” said Stefaan Verhulst of GovLab. “Otherwise, these will remain faith-based undertakings.”

Putting Evaluations in Context 

Of course, evaluations can mislead, as they often seek linear, cause-and-effect relationships for complex change processes. Governments may have legitimate reasons for poor performance; staff, for example, may lack sufficient technical training to use new systems. In these scenarios, citizen ratings can identify poor performance, while applied ethnography and other qualitative research methods can surface the factors contributing to it. By blending the two approaches, we can ensure government offices are not unfairly judged.

Negative evaluations should be used not just to point fingers at government, but to help it improve. Alissa Black of the New America Foundation noted that an office can leverage a low ranking to its advantage, as the New York Parks Department once did. The Parks Department used negative feedback from 311, the city’s information hotline, to demonstrate that the breadth of its mandate was unachievable given its resource allocation. The department was granted more funding.

And what of citizens? How might participating in such an initiative shift their perceptions of government accountability? If a 100 Worst user sees that their actions have an impact on government performance, that positive feedback may shape how they engage with their community. If they see no effect, the lack of feedback may lead them to disengage from 100 Worst and grow more skeptical of open government initiatives in the future.

The success of an open government initiative is not simply a question of who and how many showed up. Real success will come with shifts in citizens’ sense of agency over the processes of governance that affect their lives, and government’s willingness to work with citizens in revising and implementing these processes. By exploring both sides’ experiences with open government initiatives, we gain a rich understanding of who became engaged and why. We have insight into the specific pain points. And we have a better understanding of real-world impact, and how we can achieve it.

RELATED:

Communications and Society Series on Open Government, Blog I: Open Government and its Constraints