Over the past year, Decoding Injustice has served as a catalyst for collaborative efforts around the world to build evidence for transforming the economic system. From learning exchanges with organizations in Tunisia and the Asia Pacific to collaborative research projects in the Philippines and Zambia, the program works at the intersection of discrimination, policy decisions, and economic trends. Here, we look back at Decoding Injustice's work over the past year and share the key lessons learned and approaches tested.
By Allison Corkery, written during her tenure as Director of Strategy and Learning at CESR.
Over the course of this year, we’ve had the opportunity to explore how Decoding Injustice can support efforts to build evidence to transform the economic system, undertaken by a wide range of activists in very different contexts. With funding from the Finnish Ministry of Foreign Affairs, we’re undertaking a number of projects looking at how to “connect the dots” between particular experiences of discrimination, specific policy decisions, and broader economic trends:
- We’ve held learning exchanges with the Tunisian Observatory of Economy (OTE) and the Asia Pacific Forum on Women, Law and Development (APWLD), to see where we might build synergies between Decoding Injustice and their approaches to economic policy research and feminist participatory action research, respectively.
- We’ve just completed our first Decoding Injustice Learning Lab, bringing together a cohort of almost 30 activists over a six-month period to do a deep dive into the three steps of Decoding Injustice.
- We have collaborative research projects underway in the Philippines, where we’re working with IBON International and the national peasants’ movement to critique a World Bank-funded agrarian reform project, and in Zambia, where we’re working with Women’s Life Wellness Foundation (WLWF) to examine the barriers to ensuring climate adaptation financing reaches women in rural areas.
In addition, we’re part of an advisory group supporting six of our fellow ESCR-Net members to design community-led research projects on the loss and damage they’ve experienced from climate change. The International Human Rights Clinic at Harvard recently published a report on how climate change will deepen water inequality in Delhi, India, which draws on OPERA (the analytical framework that underpins Decoding Injustice). We’ve also co-published, with Debt Justice, a new guide on how to use Decoding Injustice to understand the global debt crisis. This guide was supported by a grant from the Open Society Foundations.
Across all of these activities, a priority for us has been to find ways to make Decoding Injustice more relevant to and useful for research that supports community groups and local activists in their organizing, campaigning, and advocacy. We want it to serve efforts to shift power by combining technical and lived expertise to co-produce knowledge about how the design of our economies harms people's rights. Here are some of the key lessons we’ve learned.
Part of interrogating the problem is identifying how to solve it
The three steps of Decoding Injustice outline a typical research process, in chronological order: interrogate the problem, illuminate it with data, use our findings to inspire action. Our conversations with partners over the year highlighted that, for activists, how we design our research (in other words, the way we decide to collect, analyze, and present our data) depends on who we’re trying to influence to do what. These big strategic questions can’t wait until after research has been carried out. They need to be part of the more conceptual work that is done in the first step of a project.
So, we started to experiment with different ways of focusing on solutions as part of the interrogate step. After mapping out their problems, our Learning Lab participants reflected on prompt questions, such as: What do you want to change to address your problem? Who do you think you need to convince to bring about that change?
In our workshop with WLWF, participants did a force-field analysis. This gave us a new perspective on the problem we were trying to address and helped us identify entry points for changing it.
That also taught us how challenging it can be to pinpoint which solutions to focus on. As we’ve reflected before, human rights activists tend to be much better at calling out what we don’t want than spelling out what we do. For example, who has to do what to address the loss and damage caused by extreme weather events and the slow-onset impacts of climate change? This was a question we brainstormed with our fellow ESCR-Net members who are designing community-led research on the topic. Answering it in a way that is targeted enough to shape ‘SMART’ objectives for a specific project proved difficult.
Do metrics matter? It depends
Decoding Injustice builds on more than a decade of work by CESR and our partners to strengthen accountability for policy decisions that harm people’s economic, social, and cultural rights. In many contexts, “formal” accountability for these rights is weak. This is because it’s often unclear exactly who bears responsibility; to whom demands for action can be addressed; how judgments can be made about the reasonableness of tradeoffs between one policy choice and another; and what remedies are appropriate. In response, activists have looked for ways to advance “social” accountability. A common feature is to “mimic” the functions of formal accountability mechanisms (e.g. people’s hearings instead of parliamentary hearings, social audits instead of administrative audits).
Metrics (a category that includes indicators, composite indexes, and rankings) are one accountability tool that activists have sought to mimic. Research shows that metrics affect accountability directly, when they are used to make a decision, and indirectly, when they “shape modes of thinking” on which decisions are based. Different metrics have been developed for “measuring” economic, social, and cultural rights.
CESR has played a key role in advancing thinking on whether, how, and in what conditions this turn to quantification can help build civil-society power vis-à-vis the state and others. For example, we supported partners in developing the Egypt Social Progress Indicators, a groundbreaking social accountability tool that benchmarks progress (using a four-color scale) on more than 50 indicators. This gives a multidimensional picture of socioeconomic wellbeing and helps identify (and build momentum on) the areas where change is most needed. For this reason, indicators have been one of the key tools underpinning Decoding Injustice.
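To make the idea of color-coded benchmarking concrete, here is a minimal, purely illustrative sketch in Python. The indicator names, benchmark values, and thresholds below are hypothetical; this is not the methodology behind the Egypt Social Progress Indicators, only one way observed values might be mapped to a four-color scale.

```python
from dataclasses import dataclass

# Hypothetical four-color bands and the minimum value-to-benchmark ratio for each,
# ordered from best to worst. These thresholds are illustrative only.
BANDS = [("green", 0.95), ("yellow", 0.80), ("orange", 0.60), ("red", 0.0)]

@dataclass
class Indicator:
    name: str
    value: float      # observed value, e.g. an enrolment rate (%)
    benchmark: float  # reference value the indicator is judged against

def color_band(indicator: Indicator) -> str:
    """Assign a color based on how close the observed value is to its benchmark."""
    ratio = indicator.value / indicator.benchmark
    for color, threshold in BANDS:
        if ratio >= threshold:
            return color
    return "red"  # unreachable with the thresholds above, kept as a safe default

if __name__ == "__main__":
    sample = [
        Indicator("Net primary school enrolment (%)", 88.0, 100.0),
        Indicator("Households with piped water (%)", 55.0, 100.0),
    ]
    for ind in sample:
        print(f"{ind.name}: {color_band(ind)}")
```

In practice, of course, the value of a tool like this lies less in the scoring logic than in who decides the benchmarks and what the colors are used to demand.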
As we’ve shared our experience of using indicators, reactions have been really interesting. For example, among the Learning Lab participants, there was initially a strong sense that indicators are the “master’s tools,” to quote Audre Lorde. As we unpacked different examples of metrics developed by activists, there was more openness to their potential to be used “subversively”. In other conversations, our partners highlighted the value they saw in using indicators to compare dominant narratives to the realities facing their communities.
But these conversations also highlighted that perhaps including “identify indicators” as a formal step in the research process isn’t always necessary. If it’s a concept that feels too abstract or technical at the community level, it might be more appropriate to go straight from agreeing on research questions to data collection. Alternatively, it could be something to return to during the data analysis stage or when deciding how to inspire action.
There’s more to learn about integrating secondary data into participatory action research
A particular strength of Decoding Injustice is the way it weaves together different sources of secondary data, to give a more holistic picture of a government’s compliance with its human rights obligations. But for many of the activists we’ve worked with this year, primary data collection is a much bigger focus. This makes a lot of sense. Collecting primary data through a participatory process can facilitate community mobilizing and organizing. It also gives a very rich and nuanced picture of the day-to-day reality facing communities. But, as they’ve shared with us, primary data, on its own, is less useful in analyzing the underlying causes of this reality.
To make secondary data a more appealing tool for participatory action research, we need to expand the methods we use to build data literacy and facilitate participatory data analysis. In conversations with others, it’s been great to learn about tools like data walks and data sculptures. There’s a lot of scope to experiment with ways of incorporating these into Decoding Injustice.
Beyond specific tools, it’s important to keep the data as “close” as possible to the communities we’re working with. One way to help do this, we’ve found, is to overlay the participation ladder with the steps in the data pipeline. This recognizes that it may be appropriate to have different levels of community involvement in different parts of the research process. But it helps make sure we’re taking that decision intentionally and collectively.
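As an illustration of what such an overlay might look like in practice, here is a minimal sketch. The ladder rungs, pipeline steps, and the mapping between them are hypothetical examples of how a team might record, and check, the level of community involvement agreed for each stage of the research.

```python
# Illustrative only: overlaying a (simplified) participation ladder onto the
# steps of a data pipeline. The rungs, steps, and choices below are examples,
# not a prescribed model.

# Rungs of a participation ladder, from least to most community control.
PARTICIPATION_LADDER = ["inform", "consult", "collaborate", "community-led"]

# Steps of a data pipeline.
DATA_PIPELINE = ["design questions", "collect", "clean", "analyze", "present", "act"]

# An example of an intentional, collectively agreed plan: which level of
# community involvement applies at each step of the pipeline.
plan = {
    "design questions": "community-led",
    "collect": "collaborate",
    "clean": "consult",
    "analyze": "collaborate",
    "present": "community-led",
    "act": "community-led",
}

def check_plan(plan: dict) -> None:
    """Flag any pipeline step without an agreed participation level."""
    for step in DATA_PIPELINE:
        level = plan.get(step)
        if level in PARTICIPATION_LADDER:
            print(f"{step}: {level}")
        else:
            print(f"{step}: no participation level agreed yet")

if __name__ == "__main__":
    check_plan(plan)
```

The point of writing the plan down like this is not the code itself, but making the choice of participation level at each step explicit, so it can be discussed and agreed collectively rather than decided by default.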
What comes next?
In 2024, we’ll convene a second cohort of our Decoding Injustice Learning Lab, undertake a number of collaborative research projects, and look for more ways to support activists to use Decoding Injustice in their work. We see this very much as a collective endeavor. If you’re interested in joining us in this work, please don’t hesitate to get in touch.