By “data experience,” I mean the user experience (UX) of data, information, stories, insights, dashboards and other artifacts of evidence or subjective truth.
The UX of data involves anyone using data and information to make decisions. One inroad to understanding data experience is through measures of success. The way people experience data about impact or performance can influence their behavior in varied ways. A negative influence may occur when evaluating success focuses more on the metrics themselves than on the intended outcomes. This can happen when determining the performance and impact of a person, project, service or policy.
For example, consider a mandate requiring paperwork to be processed within 30 days. The measure of success can orient people toward meeting the 30-day timeframe rather than expediting the process. To improve such a process, exploring the role mandated metrics play in people’s behavior offers a more holistic understanding of the problem.
Policy and program metrics can also center on the number of people served without sufficient attention to the quality of that experience or whether people benefited from the service or product.
The business, philanthropic, educational and nonprofit sectors struggle with performance metrics similarly.
In light of this, ask yourself: “Are mandated metrics helping me design a better service or product? Are they working against me? Do they have any role at all? How would others on my team or project respond to these questions?”
Unpacking the power that measures of success have on human behavior is nothing new. People have been researching this for decades, and experiencing it even longer. I’ve shared this with many managers who have acknowledged that their teams may be incentivized towards the wrong thing or might be confused as to what indicator of success is most important. It’s complicated stuff. I don’t have all the answers, but I’ve found it’s uncommon to research how measures of success (and data, more broadly) are experienced within organizations or by external partners or the people ultimately being served.
Co-creating or co-evaluating the effectiveness of these measures, especially with frontline teams and “end users,” also seems to happen infrequently. Why is that?
Many of us work in environments where responding to urgent matters occupies much of our attention. It’s hard to prioritize other things when there are so many pressing needs. Our colleagues and bosses can feel engulfed by fires. Research into problem prevention, or even a discussion about the UX of success metrics, may seem out of touch within a fire-containment work culture.
Given this context, we need an ethical and effective way to frame our ask for this sort of conversation and research.
The diagnostic may help by answering three questions:
Why do measures of success shape people’s behaviors in my workplace (including my own)?
What may be missing from our current practice of understanding the performance and impact of our team or organization?
What could we do better with others’ help?
The diagnostic is an invitation for teams to ask: “To what degree…
- Are organizational values a part of a decision-tree when creating success metrics?
- Are bias, discrimination and trauma used as a lens to examine performance metrics, dashboards, and evaluative practices?
- Is there shared knowledge about data or its use in the organization?
- Is there recognition and support for the different ways frontline teams and managers may be experiencing the success metrics?
- Are the success metrics of the public or customers aligned or complementary with those of people working internally or on back-end systems?
- Is enough time spent on gathering and analyzing necessary data instead of available data?
- Is there genuine agreement on the intended results (and how to measure and recognize them) for a project, product, service or policy?
- Are people experiencing the ‘Spiral of Mistrust’: frontline teams feel metrics are meaningful only to managers or those with more formal power, so they treat the metrics as a checkbox in order to return to the real work. When this “checkboxing” is discovered, those with more power respond by creating more metrics to control the frontline teams, and the downward spiral continues.
- Is reflection encouraged to understand how historical practices are shaping today’s approach to success metrics?
- Is there uncertainty about how to have the above conversations, or shame about what they may reveal?
The “diagnostic” part of this tool involves identifying which of these obstacles you’re experiencing and to what degree, adding what’s missing, unpacking your assumptions, prioritizing the obstacles, exploring what could be done differently, and then prototyping, testing and learning how the changes are going.
OK, so how to do all of this?
Here is a template to get you started. The diagnostic includes a simple implementation plan.
Acknowledgments: I’m grateful for feedback from those I have trained in using the diagnostic across San Francisco government, the Rapid Research Evaluation and Appraisal Lab, the federal Service Design in Government group and the Innovation Skills Accelerator. (Last updated 1 June 2022)