Collected UX research posts from my government days

I’ve taken some time over the new year to repost some of the blog posts I wrote while I was working in government. These were originally posted on government blogs, first in the UK and then in Australia.

Can’t guarantee I still agree with everything I wrote back in 2013 but perhaps you’ll find something useful here.

How we do user research in agile teams (GDS 2013)
This was probably the first time I’d ever seen user research work really well in agile.

User research not just usability (GDS 2014)
It’s pretty typical of low-maturity organisations to think that usability testing is everything user researchers can do, right?

What user researchers do when they’re not researching (GDS 2014)
First time I wrote about the old 70:30 rule…

It’s user research, not user testing (GDS 2014)
Call me a pedant. I still think it matters.

5 ways to help user research work better in agile (GDS 2014)
Summing up things that I found myself saying over and over again.

Sample size and confidence – how to get your team to trust qualitative research (GDS 2014)
My response to the ‘oh, but it’s not statistically significant, how can we possibly trust it’ objection.

Anatomy of a good sticky note (GDS 2014)
The building block of good research analysis. Surprising how few people get it right.

So, you’re going to be a user researcher: top tips to get you going (GDS 2015)
What I recommend when people ask me how to get started with user research.

Doing user research in the discovery phase (GDS 2015)
No, discovery is not for validating the ideas you want to proceed with – it is, surprisingly, for discovering.

We need to talk about user needs (GDS 2015)
User needs are not user stories, they are so much more!

Improving internal services and tools to help make the end users’ experience better (GDS 2015)
A few notes on the particular challenges and rewards of doing internal facing user research.

How might we improve the voting experience? (GDS 2015)
Very occasionally I got to actually do a tiny bit of research rather than just fighting off bureaucracy so that others could do great work. This was one of those times. The other time was the Welsh project.

‘I want a pony!’ or the critical difference between user research and market research (DTA 2017)
The evergreen discussion of why user research methods are different to market research methods and how they’re both valid when they’re used for the right things in the right ways.

Why we say no to surveys and focus groups (DTA 2017)
Why many organisations’ research tools of first choice are generally my tools of last resort.

There were a few more blogs from DTA days but they were very project specific so I’m going to leave them where they lie for now.

And, of course, the summary of that government experience:
If I could tell you 3 things – notes from a brief career in the public service

And hopefully, if the new years resolutions hold, some more original content soon.

If I could tell you 3 things – notes from a brief career in the public service

Recently a colleague asked me what 3 things I would say if I ever had an audience of Secretaries (very senior public servants) that would help them do things to help make public services better for end users. This is (roughly) what I said:

  1. Your organisation will benefit more from you being user centred than the users ever will. 

    It is a common misconception that we do user-centred design because we want to deliver a delightful or engaging experience for our users. The truth is, in government, this is very rarely the case. Paying tax is not delightful, complying with regulation is not delightful, and discovering you have to repay a benefits debt is far from delightful. Let’s be realistic here – the job of user-centred design is to make things as painless and effortless as possible. It might not be delightful to discover that you’re not eligible for a benefit or a visa, but it is much better to find out as quickly and easily as possible, before investing a lot of effort in an application or making plans for the future.

    When we focus on making things usable rather than delightful or engaging we are focussed on making sure that:
    a) people know what you want them to do.
    b) they can do that thing as easily as possible and without accidentally making mistakes.

    I think it is fair to say that many government services still don’t meet that low bar. This is bad for users but it is also bad for government. Poor usability impacts government’s ability to achieve policy outcomes and it can lead to a decrease in compliance (because even the people who WANT to be compliant often can’t work out how to do so – or have to pay specialists to explain it to them). This failure also leads to more expensive service delivery because people don’t stay in the cheaper digital channels. Instead it takes multiple encounters across multiple channels to complete a task, leading to a higher cost to serve.

    Even if you don’t care about the quality of the experience for users (and, honestly, every secretary and most public servants I’ve met have cared a lot), you should care about it for the effectiveness of your department and for the sake of your career. Services that people can use help agencies achieve organisational goals.

  2. Orient everything you can in your organisation around real user journeys 

    Some of our biggest organisational blind spots are caused by focussing on our own organisational structures at the expense of supporting and understanding real user journeys and the part our work plays in supporting those journeys. As in any large organisation, multiple agencies are often involved in the service experience that users have at key points in their life – when they lose their job, have a baby, start an education, start a business, or when a loved one dies. Even in services that exist within a single agency, we create false barriers between ‘authenticated’ and ‘unauthenticated’ experiences – often the only person who has a view of the end-to-end experience is the end user, and every single touch point is managed by a different senior manager, sometimes in entirely separate parts of the organisation.

    There are small things that you can do immediately and cheaply to try to address this. Stop naming your services after the government need (eg. compliance) and start naming them after the thing that people need to do when they encounter the service (eg. tell government when your rental situation changes). Words and what we call things can be powerful catalysts for cultural transformation.

    Make the real user experience visible to people across the organisation by making journey maps from the trigger to the outcome and make sure all the people who own parts of that journey know each other and have seen each other’s work. Put someone in charge of being the expert on that journey and informing all the parts.

    Challenge concepts like ‘authenticated’ and ‘unauthenticated’ which are meaningless to end users and often reinforce silos that amplify user experience problems in services.

    Make sure that the analytics you are capturing help you understand, across all the channels (digital, phone and shopfront), what is happening and what is and isn’t working. Create success criteria that are genuinely based on improving outcomes for users.

  3. Seek the truth, even if it’s ugly

    Large organisations like the public service can be pretty hierarchical. Something that can happen in hierarchical organisations is that bad news doesn’t travel up the line – people don’t speak truth to power. It can be a career limiting move (CLM – an acronym I learned for the first time in the Australian Public Service). The reality is that if you’re a senior person in a large organisation, people are probably going out of their way to let you believe that everything is fine. Or as fine as it can be.

    As a leader you need to be aware of this and to do everything you can to break through this. The best thing to do is to see it for yourself.

    Research has shown that organisations where everyone, including management, sees real users using their services for just 2hrs every 6 weeks are more likely to deliver good services. In truly customer-centric organisations, the executive team routinely get ‘behind the counter’ and see for themselves both what it is like to be a customer and (equally importantly) what it is like to deliver service. Watch an episode or two of Undercover Boss where CEOs go, in disguise, to work in the grass-roots of service delivery in their organisation and discover that the reality is very different to what the reports say.

    Many customer-centric organisations require that everyone in the organisation spend time at the coalface of service delivery as a part of induction and leadership should be required to do this regularly. ServiceNSW CEO Rachna Gandhi is known for routinely working behind the counters of the ServiceNSW shopfronts – this not only demonstrates true executive commitment to high quality user experience but also means she has a direct view of the reality of what it is like to experience ServiceNSW services and to learn from the day-to-day experience of the people who work in service delivery.

    If this is a priority, you need to put time in your diary to make it happen. If you can’t escape endless meetings, then work with the user researchers in your delivery teams and ask them to show you video footage of people talking about their experiences. Ask what they are learning out in the field – the good and the bad.

    And don’t let your organisation become culturally afraid or disrespectful of your users. Don’t accept that if your users were less stupid or lazy or naughty everything would be better and there is nothing we can do.

    Don’t believe for a moment that your users are about to go running to their MP or the media the minute something goes wrong. Nothing could be further from the truth. The average person would have to be on the edge of desperation before contemplating approaching a politician or journalist. Rather, most people want to spend as little time as possible thinking about government services. In my experience they are only too happy to share their experiences and insights if they think their input will be used to make government services better for everyone. If you work in government, it’s your job to make sure they get heard.

Epilogue:

Yesterday was my last day at the Digital Transformation Agency (DTA) and in the Australian Public Service (APS), for now. This followed a few years working at the Government Digital Service (GDS) in the UK Civil Service (no acronym that I’m aware of).

It’s been a privilege to be a part of the movement towards better quality public services and in doing this I’ve been able to work with some of the best and most passionate technologists, designers, policy makers, administrators and more. Working in government is one of the most challenging yet rewarding working environments I’ve encountered.

Thanks for the opportunity. Stay in touch.

Related reading:

Research about management spending time seeing real users

Words and cultural change

Undercover Boss

Naming your service as a ‘doing thing’

How improving internal systems can improve customer experience.

Why we should stop banging on about users

Why we say no to surveys and focus groups

Originally published on the DTA Blog.

Surveys and focus groups aren’t used much in our user-centred design process. These are the reasons why.

You can’t get authentic, actionable insights in a few clicks

Think about the last time you filled in a survey.

As you were filling in that survey, did you feel as though you were really, genuinely able to express to that organisation how you felt about the thing they were asking you? About the actual experiences you’ve had?

If the answer is no, you’re in good company. I ask this question a lot and the answer is always the same.

This is important to remember whenever you’re looking at research reports full of statistically significant graphs. Always make sure you are critically evaluating the quality of the research data you are looking at – no matter how large the sample size or whether it has been peer reviewed.

Also, when you are looking at research outcomes you should think about whether they help you understand what to do next. Surveys and other analytics can be good at telling us what is happening, but less good at telling us why. Understanding the why is critical for service design.

Government services have to work for everyone

As researchers, we have a pretty diverse toolkit of research techniques and it is important that we choose the right tools for the job at hand.

Surveys and focus groups are research techniques widely used in market research where we want to understand the size of a market and how to reach and attract them. But most of the time, designing government services is not like marketing.

Randomised control trials are widely used in behavioural economics to understand how best to influence behaviour in a desired direction. Most of the time, designing government services is not like behavioural economics.

The job that multi-disciplinary teams have to do when designing government services is simple but difficult. We need to make sure that the service works for the widest possible audience. Everyone who wants to use that digital government service should be able to.

When we achieve this level of usability in a government service we are more likely to achieve:

  • desired policy outcomes
  • increased compliance
  • reduced error rates
  • a better user experience for end-users.

It’s not about preference

Government services work when people understand what government wants them to do. Success also means they’re able to use the service as quickly and easily as possible without making errors. These are the outcomes that the user researcher needs to prioritise.

To achieve this we use observational research techniques and iterative processes that predate both the internet and computers – having their foundations in ergonomics and later in human computer interaction.

There are 3 important things our user researchers and their multi-disciplinary teams keep in mind as they do their work to understand whether services are usable and how the team might make them more usable:

  • We care about what makes the service work better for more people, more than we care about what people (either users or stakeholders) tell us they prefer
  • We take an evidence-based approach to evaluating whether our design is working better to help people use the service
  • We know that the more opportunities we have to iterate (test and learn) the greater the chance we have of delivering a service that most people can understand and use.

Setting real-life tasks is more valuable than ‘tell us what you think’

We use task-based usability testing as one of the main research tools when we are evaluating the design of digital services and iterating to improve them in the Alpha, Beta and Live stages.

To do this we come up with examples of important tasks that people need to complete when using the service. For example, we might ask them to register for a service and complete a registration form as if they were doing it for real.

When we are testing content, we might provide a real-life scenario that represents a question that people should be able to quickly and easily answer. Using a real-life scenario makes it easier for us to be sure that users are getting the right answer. The worst case scenario is when users think they have the right answer but are actually incorrect.

A scenario might be something like this:

Samantha is 41. She is a single mother of a 14-year-old boy.

The building company she worked for has recently gone out of business and she’s now working part-time at the local supermarket while looking for work.

How much can she earn each fortnight before her payment stops?

We can do task-based testing in a moderated environment. This is where the user researcher is in the room (or on a video conference) with the participant, asking them how they are interpreting the design and information as they move through the task. This helps us understand what people are thinking and why they are making the decisions they do, and lets us understand how to improve the design to work better.

Task-based testing can also be done in an unmoderated environment. This is where the participant is left alone to do the tasks and we use software to measure how long it takes to complete. We also measure the pathways the user takes, whether they can accurately complete the task and their perception of the effort involved. This can help us to create a baseline for usability which we can try and improve upon.
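As a rough sketch of how those unmoderated measurements might be rolled into a comparable baseline (the record shape, numbers and helper below are hypothetical illustrations, not from the original post):

```python
# Hypothetical sketch: summarising unmoderated task-based testing
# sessions into baseline metrics - completion rate, median time on
# task, and average perceived effort (e.g. a 1-7 ease rating).
from statistics import mean, median

# Each record: (completed_task, seconds_taken, perceived_ease_1_to_7)
sessions = [
    (True, 180, 6),
    (True, 240, 5),
    (False, 420, 2),
    (True, 150, 7),
    (False, 390, 3),
]

def baseline(results):
    """Reduce raw session records to metrics a team can try to improve on."""
    return {
        "completion_rate": sum(1 for r in results if r[0]) / len(results),
        "median_seconds": median(r[1] for r in results),
        "mean_ease": mean(r[2] for r in results),
    }

print(baseline(sessions))
```

Re-running the same tasks after a design iteration and comparing against this baseline is one way to show whether a change actually improved usability.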

Both of these approaches give the team valuable insights into how well a service is performing. But critically we also learn what we can do to make the service work better for users.

Of course there are times to use surveys and randomised control trials – no research method is in itself inherently bad. But if you’re in the business of designing government services and making them work better for users (which means better outcomes for government too) then you need to make sure you’re not automatically defaulting to research tools that don’t let you dig as deep as our users deserve.

‘I want a pony!’ or the critical difference between user research and market research

Originally published on the DTA Blog.

Research is not a new phenomenon in government. When you start a new project it is very possible that there is a wheelbarrow-full of previous, relevant research for you to review. Most policy, for example, is evidence based. Similarly when it comes to service delivery, there is often no shortage of research – often in the form of market research.

Market research goes wide not deep

Market research, usually drawn from focus groups and surveys, is appealing to many large organisations including government. It lets an organisation gather opinions from a reasonably large, geographically and demographically diverse audience.

When we talk about Criteria 1 of the Digital Service Standard ‘Understand user needs, research to develop a deep knowledge of the users and their context for using the service’, we rarely recommend starting with large scale market research. Instead, we recommend that teams do user research (also known as design research).

What works is more important than what people prefer

When designing government services, we are not competing to win market share or even give people what they think they ‘want’ (ie ‘I want a pony’). Our main concern is to make sure that people know what they need to do and that they can do it as easily as possible. This is a win-win outcome. Increased digital uptake and reduced failure demand both mean less cost to deliver services, while better comprehension and fewer mistakes mean increased compliance and policy effectiveness. Better digital services are also more convenient and easy to use for the people who need to use them – a better user experience.

These priorities mean that usability (including accessibility) is our primary focus.

User research methods offer deeper insights

There is only one way to understand if a service is more or less usable and that is to observe someone attempting to use it – ideally to achieve a realistic outcome in a realistic context. For example, watching someone try to find out if they are eligible for a benefit or grant based on their own circumstances and using existing websites, rather than asking them how they’d like to do it in a focus group room.

There is a good deal of evidence showing that usability testing needs only a small sample size to identify usability issues. This is why we recommend doing a series of small studies instead of investing in one large-scale survey or a series of focus groups.
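One commonly cited way to reason about this is the Nielsen–Landauer problem-discovery model, which estimates the proportion of usability problems found by n participants as 1 − (1 − L)^n, where L is the average probability that a single participant uncovers a given problem (around 0.31 in their published studies). A minimal sketch, with the function name and default rate as illustrative assumptions:

```python
# Nielsen-Landauer problem-discovery model (illustrative sketch):
# the expected share of usability problems found by n participants
# is 1 - (1 - L)**n, where L is the per-participant discovery rate.

def proportion_found(n: int, L: float = 0.31) -> float:
    """Expected proportion of usability problems found by n participants."""
    return 1 - (1 - L) ** n

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:2d} participants -> {proportion_found(n):.0%} of problems")
```

With L = 0.31, five participants surface roughly 85% of problems, which is why several small rounds of testing, each followed by fixes, tend to beat one big study.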

After each session we are able to apply the insights we’ve gained with the constant goal of attempting to improve the usability of the service before testing it again. Because we work in agile teams we try to do usability testing and subsequent improvements in every sprint.

By working in this iterative way we can guarantee that the service we deliver will be more usable.

Once we have achieved usability for the widest possible audience (including usability for people who have particular access needs) we can start to consider questions of preference.

People prefer government services that work

In market research it is tempting to put pictures of websites in front of people and ask them which they prefer – which one feels more trustworthy or more secure or more modern? In real life, it is not the picture of the website that people have to interact with – it is the actual service.

While the initial perception may have an impact for a second or two, the real impression comes from whether people can actually find, understand and undertake the task they need to do easily and successfully. People don’t choose to pick up the phone because they don’t like the look of a digital service. They call because it doesn’t let them get the job done.

Choose the right research tool for the research question at hand

It is important to recognise that we have a wide range of research methods available to us and that we should seek to use the right one for the job at hand. For example, small-scale usability studies won’t let you measure the prevalence of a particular trait across the population. But they are super effective for finding and fixing big usability issues.

Large scale studies including surveys, focus groups and randomised control trials (popular with behavioural insights experts) can help provide certainty at scale and are an important part of the mix of government research, but they are not appropriate as the primary tools for either discovery research or research to improve the usability of a digital service.

Both qualitative and quantitative research are important and necessary, but in service design, we should always start with rich, qualitative insights.