Why evaluate public innovation approaches (based on design)?

Posted on 5 June 2018 by Sylvine Bois-Choussy

It is a sign of a burgeoning, maturing and hybridizing field that more and more actors in public policy design are now asking themselves about their impact: communities and public actors, practitioners, designers and agencies, researchers, consulting firms specialized in evaluation, etc. To explore this issue, we spent several months conducting, in collaboration with Philippe Lefebvre (researcher at the École des Mines, with whom we worked on the ANR FIP project), a series of interviews with agents, designers, evaluators and others. What are the evaluation and impact measurement practices, existing or yet to be developed? What are they intended to measure? What might be the first steps for those who wish to better understand the progress, impacts and effects of such projects? To narrow the field, we looked at projects that have been the subject of public policy design work, alone or in combination with other approaches.

The following are some of the lessons and questions to come out of these exchanges. We will share them more widely at Superpublic, during a meeting to take place on June 18 from 2pm to 5:30pm. To register, click here.

Why evaluate public innovation approaches based on design? 


Let’s start by saying that the practices of impact measurement and evaluation of public policy design projects are today unformalized and rather intuitive… but among designers, agents and evaluation professionals, the debate is lively. When is a project considered successful? Can we better understand work processes? What would be the relevant indicators for assessing these complex and multifaceted processes? One of our interlocutors put it this way: “In the history of this field we have theory, and we have practice, but we lack objectivity regarding the criteria that produce success.”

This reflects a stage of reflection on the sector and its processes, a rise in quality, but perhaps also the transition from a ‘pioneer’ mode to ‘normalization’… It is a matter of better understanding public policy design projects: grasping the conditions of their fertility, identifying their effects and their externalities. What are the ingredients for genuine appropriation by users? The best conditions for scaling up? What groundwork is there to lay, e.g. work in the political sphere?

It is also a question of taking an interest in the practices of this still new field, which over the last ten years has been maturing, burgeoning and hybridizing… What are the contributions of the diverse disciplines involved? What are the conditions for identifying the ‘right’ problem? What does the success of the creativity phase depend on? And so on. Given the current rage for creative methods, design thinking and the like, it is also a matter of emphasizing different ways of approaching design, of clarifying quality issues, and of moving away from a sometimes fetishistic relationship to ‘methods’ towards an interest in the know-how, the gesture and the mark of the professional designer.

Moreover, while public policy design is one approach, many other methods populate the landscape of transformation and shepherding of public action: lean management, change management, intrapreneurship, etc. Better understanding the specific contribution of design relative to these other modes of intervention would help promote dialogue within the sector, but also help establish common ground with other fields, enable comparison, spur debate regarding each one’s place… and enlighten communities in their choices: can we adapt the means to the ends, and grasp the pertinence of methods according to the contexts and the stakes? Can we mobilize one approach or another depending on the political project?

Lastly, it is a question of helping public administrations become more aware of design, in particular by specifying more realistically its initial potential, while trying to better objectify the expectations and intentions of the stakeholders (labs or sponsoring services, other services, directors general of services, elected officials, etc.). This helps, for example, to start a dialogue about the community’s vision of innovation. On this subject, we published in Sonar a small quiz that can help initiate the debate… It is also a question of staking out a sort of middle ground, between a sometimes somewhat ideological relationship to public policy design… and, on the contrary, a call for decontextualized efficiency. “In the public sphere there is often the demand to do better with less. I can promise to do the best I can, with the means we have.”


What do we want to examine, understand, measure?


How have public services or policies that have been the object of public policy design work evolved? For example, in their governance (a greater number and diversity of stakeholders involved), in their fit with the initial challenge (efficiency, relevance, etc.), in savings of time, energy and costs, in their degree of innovation (are the solutions applied new?), and so on.

And more broadly, what did we contribute to? Can we explain the political vision, the values that informed the choices made from among all the possible ones (emancipation of users, quality of working life for agents, desirability of public action, etc.)? The changes that the process has enabled, its democratic, social and societal usefulness? For example, the Bloomberg Foundation uses a highly quantitative approach to assess the impact of the innovation teams deployed in 25 US cities: a 60% drop in the rate of recidivism among prisoners in the state of North Carolina, in a context of prison overcrowding with high costs to the community; a 700% jump in individual building permits issued in the context of a housing crisis in Los Angeles; a 50% reduction in vacant stores on the run-down shopping streets of Jersey City; etc.

What is the impact on the agents involved? Have we strengthened their capacity to act, fostered a new grasp of the problems, new capacities, a renewed sense of public action, etc.? “This growth in capacities is important to the success of the project: it allows the agents to take ownership; they will have the autonomy to tinker with the necessary elements in order to move forward, they will be more likely to follow up, etc.”

And regarding the culture of their administrations? Has the project succeeded in challenging norms and procedures in the workplace, in disseminating a culture of trial and error, etc.? Has it shown how to transform jobs, demonstrated the relevance of less siloed work, of more horizontal management, etc.? In Mulhouse, in the context of the Transfo, for example, a project to design fun cigarette-butt bins showed how the technical trades might evolve from a role of executor to one of prototyper.

Can we define the work processes and the relevance of the methods, in order to better use and share them? What does a given method produce in a given situation, with a given objective? What happens if we change the context, or if the prototyping phase comes before the immersion phase? Beyond the interest in sharing experience, however, we must be careful not to dissociate method from know-how, from the human factor, from the context, etc.

… and the creation of an ecosystem of public innovation: for example, has the project contributed to the dissemination of a design culture within the administration, to the ability to approach designers in a better informed, more structured manner? In the aftermath of a facility project in which the designers only intervened in the upstream phase, one agent gave this assessment: “Today, in this type of project, we award siloed contracts, whereas it would have been nice if the design had been integrated into the larger contract.” More broadly, has the project empowered designers, engendered fresh collaborations with other fields, the development of a local market, etc.?


A few points of contention…

Relevance vs. efficiency? Traditional approaches to the evaluation of public policies hardly fit the processes of public policy design. “Approaches like evidence-based policy are designed to demonstrate a short-term, circumscribed effect, whereas public innovation practices have potentially broad, even diffuse, long-term effects, and are more difficult to evaluate.” Likewise, the transformation of public policies unfolds over a long period of time, within a complex ecosystem, further complicated by electoral processes and the numerous shifts in political direction they bring.

Another difference from traditional evaluation practice: the starting point. “Where a traditional approach to evaluation often focuses on measuring the effectiveness of a solution in relation to an initial question, in the design world there is a tendency to say that we shouldn’t respond to a problem, but invent it.” Rather than embracing a problem in all its dimensions, design seeks to find the right window for intervention, the “actionable” problem, at the risk of disturbing the Cartesian mindset. “You often hear it said that when you’ve found the root of the problem you’ve half solved it; in practice, this is not always true, especially for complex subjects such as non-use, for example. It is more important to find limited solutions that only serve one person but that can then be applied more broadly, than solutions that serve everyone and therefore no one.”

Finally, evaluation requires points of comparison that are sometimes difficult to find. Between an old service and a new service, the latter will probably be better, but how do we measure the specific contribution of public policy design? This would require ‘scaled-up’ projects, which in many communities is not yet the case… However, more and more projects, like the media library in Lezoux, “grow out of the ground,” which allows a more in-depth comparison with other projects of the same nature. For example, the city of Nantes is now working on capitalizing on the lessons of its institutional projects.

Explain the process in order to make it more credible? In order to scale projects, there is a need to win the engagement of both initiators and implementers, as well as the political agents involved. Can a better understanding of the mechanics bring the skeptics on board? “What was missing between the immersion and the scenarios was an understanding of where the proposals came from. We would need to integrate into the process the objectification of the choices that are made…” several agents explain. In practice, however, engagement is often based more on being involved in the process than on understanding its mechanisms; focusing too much on clarifying the methods also entails the risk of deflecting the conversation away from the actual aim of the work.

Follow the lead of user input? The designer’s job is probably more to embrace a context, to bring in a different perspective, to imagine and test solutions, than to be a translator. “In a project, I am the only one who sees the whole thing. If users come to me, it’s because they’re expecting me to bring a certain touch to the project. Our job is to weigh constraints; of course that means taking the user into account, but not giving her complete power. That, too, is our job. Public action is not just the citizen; it’s the agent, co-existence, etc.”

Evaluate projects in terms of their scaling-up? Certainly that is an indicator of success; however, be careful that it isn’t the only one. “In industrial design, the designer’s role is to come up with new ideas, but the job of development is allotted to others – engineers, technicians, etc.” In practice, the scaling-up of public policy design projects obviously depends on the relevance of the proposals, but also on many other factors. “In the community, you need the intuition that public innovation makes sense, that going outside the box produces things, and so on. Successful projects are projects in which there are people who have made it happen, who see to it that the proposals are implemented. If from the outset the stakeholders think it’s useless, it isn’t our responsibility.”

Is the user happy, but also are we getting value for money? Evaluating a project solely with regard to the savings it generates is certainly reductive. However, the question of return on investment is real, especially if one wishes to focus on working outside metropolises, in rural areas, with small or medium-sized communities, for which such interventions represent a particularly significant cost.

Evaluation, that holy grail…? Beyond these points of debate, we need to pay attention to what evaluation produces: standardization, a focus on a form of mechanics, a tendency towards a “toolbox” logic, etc. Especially since no form of evaluation is truly scientific, neutral and devoid of ideology…

Moreover, there may be a gap between the traditional evaluation of large-scale projects, where funding for evaluation is built into the project, and much more frugal public policy design projects with scarce resources for evaluation.

Perhaps it’s more about raising visibility, demonstrating the value of the public policy design approach, than it is about measuring a somewhat abstract efficiency. There is also a need for simple tools, available to all, throughout the process, for engaging in dialogue, clarifying a vision, and making visible and following up on the multiple effects of public policy design projects. Shall we talk about it on June 18th?


Thanks to Francine Fenet and Amandine Babarit (Nantes Métropole), Guy Kauffmann and Florence Bannerman (Département du Val d’Oise), Hugues Archambeaud (Département de Loire Atlantique), Marion Luu (Département de l’Isère), Hélène Clot (Métropole de Grenoble), Carole Stromboni (Département de Seine Saint Denis), Jacky Foucher (Agence Grr), François Jégou (Strategic Design Scénarios), Kevin André (Kawaa), Romain Thévenet (DTA), Emmanuel Rivat (Agence Phare), Angela Hanson (OCDE), Thomas Delahais (Quadrant Conseil), Grégory Combes (Agence Indivisibles)… for the generous and stimulating exchanges!