
From availability to impact: measuring digital service success

05 January 2022 • 6 min read


"How do we know if our product is successful?" should be a simple question. However, the question will often uncover some of the complexities of product development, and the answers given can be revealing. They will almost always provide an indication of how well connected your team is to the goals of the product they are building,

We posed this question recently to a team working on an important public digital service. The responses were as you’d expect: the number of people using the service; the number of submissions it had received; the percentage of successful completions.

Those are all perfectly fine answers, but none truly capture any sense of the success of the product. So, we asked again: "But how do you know if it’s successful?"

This time, answers focused on how users were able to complete a task and the fact that administrators were now able to set things up correctly. Satisfactory, sure, but no one could point to actual data to support these points.

In fact, the key - and only - data point the team had to rate the success of their service was whether the number of support tickets submitted had increased or decreased. From the team’s perspective, the success of the service was purely rooted in whether users had any issues using it; they hadn’t considered what the wider purpose of the service might be. They had forgotten about the policy or strategic objective that the service was designed to support in the first place.

In other words, the team had lost the connection between the work they did on a daily basis and the overarching reason the service was built in the first place. This isn’t to say the team wasn’t made up of conscientious individuals who were all keen to improve things for users; rather, without goals and a way to measure progress against them, they weren’t able to determine whether their good intentions were actually having any impact.

 

Don’t lose sight of why a service exists

 

Product teams can become so consumed by delivering individual features, fixing problems, or reacting to requests from senior management that they lose sight of why the service exists. Once a team loses that connection, two things start to slip: the value the service has for its users, and the sense of purpose the team has in its work. Both are important, and both are connected.

Restoring that connection between the service and its goals - through valuable metrics that demonstrate progress and impact - is key to building high-quality services.

But where do you begin? Essentially, it’s about being clear on the wider context of a given project. This isn’t just about goals - important as they are - it’s also about understanding why the goals matter at all. So, yes, perhaps there’s an online learning service with goals around the number of participants. Of course that matters, but it’s also vital that the team working on the project understands the even broader goals - for example, improving access across a diverse range of groups, or reducing car journeys to cut carbon emissions in a given region.

 

The three tiers of success

 

These big, strategic goals are what we typically refer to as 'Impact Goals'. They sit at the top of a three-tier pyramid of measurement that we use for all services. Impact Goals describe why a service exists: they should help teams answer which objective a service or project is trying to deliver on, and determine the impact it’s having. Measuring progress against these goals is often tricky - we’ll come to that later.

Beneath Impact Goals in the pyramid is Usability. Usability is quite simply how easy it is for a user to do what they need to do with a service. This isn’t just about UX journeys - it’s also about wider issues of accessibility.

At the base of the pyramid is Availability. This is the most fundamental layer of a service’s success: is it actually available to be used? How often is it unavailable? Is it available to everyone - and what causes it to become unavailable?

Measuring progress against goals at the Usability and Availability levels is often easier: data on service uptime, for example, is typically readily available.
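Both of those measures reduce to simple ratios. Here’s a minimal sketch of how they might be computed, assuming you already capture downtime and journey analytics - the function names and figures below are illustrative, not from a specific service:

```python
from datetime import timedelta

def availability(period: timedelta, downtime: timedelta) -> float:
    """Percentage of the period during which the service was up."""
    return 100 * (1 - downtime / period)

def completion_rate(started: int, completed: int) -> float:
    """Percentage of user journeys that ended in a successful completion."""
    return 100 * completed / started if started else 0.0

# Illustrative figures: 90 minutes of downtime in a 30-day month,
# 4,120 journeys started and 3,390 completed.
print(f"Availability: {availability(timedelta(days=30), timedelta(minutes=90)):.2f}%")
print(f"Completion:   {completion_rate(4120, 3390):.1f}%")
```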

When a team considers their own service, this pyramid is a useful way to categorise their answers to the question of how they can tell whether the service they’re building is successful. Usually the answers fall into the bottom two tiers, which forces teams to start considering what sits at the top: why does their service actually exist?

An exercise we run with clients is to have them imagine they are running a community bus service and come up with the things that affect it. Availability and Usability usually fill up fast - everyone can think of factors that stop a bus from running (buses not working, no drivers, tax, terrible weather, etc.) and ways a bus service could be made easier to use (comfortable seats, frequency of service, disabled access, affordable ticket prices, etc.). Impact then forces people to consider why the bus service exists. What’s the ultimate goal?

 

Measuring product success and impact

 

With these ideas in place, we can then move to the next stage: measurement. Peter Drucker’s famous quote - "if you can’t measure it, you can’t improve it" - is key to this.

At each of the levels in our pyramid, there is something which can be improved. If our community bus service can't run because of a lack of bus drivers, then we need to know how many trained bus drivers we have, and how many we need to have to provide a consistent service. We can show progress towards this number by training people up (and maybe adding in new goals around reducing the time taken to train a new driver).
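As a minimal sketch of that staffing measure - the numbers and names below are illustrative, not from the exercise itself - progress towards the goal is simple arithmetic:

```python
def driver_coverage(trained_drivers: int, drivers_required: int) -> float:
    """Progress towards the staffing level needed for a consistent service."""
    return 100 * trained_drivers / drivers_required

# Illustrative figures: 18 qualified drivers against a target of 24.
print(f"Driver coverage: {driver_coverage(18, 24):.0f}%")  # -> Driver coverage: 75%
```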

 

Data

 

The final stage is data. If you want to measure something, you need to know where to go to get the data: where does it sit? Who owns it? How do you get access to it? In the bus driver example, the data required is straightforward - a simple count of everyone locally who holds the right qualifications.

As we move up the pyramid, it becomes harder to measure progress against goals as the data required becomes harder to obtain. An Impact goal of 'improving the environment' for our bus service is pretty hard to measure without some very sophisticated equipment.

 

This is where proxy measurements can be used - reducing the number of car journeys could be a proxy for environmental improvement, so being able to measure how many car journeys have been replaced by a single bus journey (and by extrapolation how much CO2 we may have saved) allows us to demonstrate progress towards an Impact goal.
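As a rough sketch of that extrapolation - the emission factors below are illustrative placeholders, not figures from this article, so substitute verified values before using this for any real reporting:

```python
def co2_saved_kg(car_journeys_replaced: int,
                 avg_journey_km: float,
                 route_km: float,
                 car_kg_per_km: float = 0.17,    # illustrative emission factors -
                 bus_kg_per_km: float = 1.30) -> float:  # replace with verified values
    """Net CO2 saved by one bus run: emissions avoided by passengers
    not driving, minus the emissions of the bus journey itself."""
    avoided = car_journeys_replaced * avg_journey_km * car_kg_per_km
    emitted = route_km * bus_kg_per_km
    return avoided - emitted

# Illustrative run: 25 passengers who would otherwise each drive 8 km,
# on a 10 km bus route.
print(f"Estimated net saving: {co2_saved_kg(25, 8.0, 10.0):.1f} kg CO2")  # 21.0 kg
```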

 


 


 


 

Putting it into practice

 

The bus service exercise can then be repeated for a team’s own service. It works something like this (a minimal sketch of the resulting goal map follows the list):

  • Categorise the service’s goals across the three levels: Availability, Usability and Impact.
  • For each goal, write down the measurement(s) you want to gather, then the source that could provide the data.
  • Investigate any data gaps - the data may well exist, but your team is unaware of it or doesn’t have access to it.
  • Speak to other teams, or bring in your organisation’s data specialists, to help identify these data sources.
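One way to capture the output of this exercise is a simple goal map that records tier, measurement and data source, with missing sources flagged as gaps. The structure below is a minimal sketch - the tiers come from the pyramid above, but the example goals and source names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tier(Enum):
    AVAILABILITY = "Availability"
    USABILITY = "Usability"
    IMPACT = "Impact"

@dataclass
class Goal:
    tier: Tier
    description: str
    measurement: str
    data_source: Optional[str] = None  # None marks a data gap to investigate

goals = [
    Goal(Tier.AVAILABILITY, "Service is reliably online",
         "Monthly uptime %", "monitoring dashboard"),
    Goal(Tier.USABILITY, "Users can complete a submission unaided",
         "Journey completion rate", "web analytics"),
    Goal(Tier.IMPACT, "Fewer car journeys in the region",
         "Car journeys replaced per bus run"),  # no known source yet - a gap
]

for goal in goals:
    if goal.data_source is None:
        print(f"Data gap to investigate: {goal.measurement} ({goal.tier.value})")
```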

This isn’t something that can be done just once: it requires continued, repeated attention. If you need to run the exercise over and over again, so be it: it will almost certainly add huge value to the work teams are delivering.


The difficulty of connecting products - and the teams behind them - to high-level impact goals is really a mark of the reality of day-to-day work: fast, frenetic, and often rooted in the detail required to make something actually work. However, taking some time to reflect on Impact Goals is vital. It will not only lead to a better product or service in the long run; it will likely also leave teams more engaged and fulfilled in their work.

 


 

Tim Hatton is Head of Data & Insight at AND Digital.

 


 

Talk to us about how to better measure product success. 

 


 
