Mastek Blog

Writing a user-centred survey

18-Aug-2023 08:36:54 / by Michael Watson


In part 4 of this series on best practices in survey design, I will focus on how to write a survey in a way that gives research participants the best possible experience and allows you to gather the most useful, reliable data.

This advice is based on a combination of guidance from the Government Digital Service (GDS), The Market Research Society (MRS) and the Royal Statistical Society (RSS), as well as a range of academic sources and personal experience.  


You can find the previous three parts of the series below:  

Part 1: Why should we use surveys in user research?  

Part 2: When should and shouldn’t we use surveys?  

Part 3: Planning and structuring a survey  

When writing a survey, you should always be mindful of whether your participants are likely to:  

  • understand the question and response options
  • be able to answer the questions
  • be willing to answer the questions.

Understanding the questions  

 

Any question that you need a participant to answer should be clear and understandable, with as little ambiguity as possible. Ideally, people should only be able to interpret a question in one way. 

Similarly, you should only ask one question at a time. Questions which contain multiple concepts are often confusing for participants and therefore produce unreliable answers. For example, if you want to know what people think about the layout and the content of a page, a bad way of finding this out would be to ask: ‘What do you think about the layout and the content of this page?’ A much more valuable way of finding the information you need would be to ask two questions:

1. What do you think about the layout of this page?  
2. What do you think about the content on this page?  

 

A good way of making sure your questions are clear is by phrasing them as a participant would think and talk. When writing a question, you should ask yourself: “Am I using the language people would normally use to talk about things?”   

Sometimes, there might be facts or pieces of information you need to communicate to the participant so that they can understand the question. In these cases, you should make sure this information is set out neutrally, so you don’t introduce any bias into their response. Wherever you can, you should frame your questions to be objective, so that you’re not ‘leading’ people in their answers.

Because the order in which you show the answer options can affect how likely they are to be selected, one way you can reduce bias is by randomising the order they’re displayed in. If you do this, you always need to think about whether it’s appropriate, or if the order of responses is helping people understand the question.   

For example, when your answer options are on a scale or in some way numerically related to one another, their order can help people understand how to answer. In these circumstances, randomising responses would confuse your participants, because they expect to see the most negative response at one end of the scale and the most positive at the other.  
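If your survey platform lets you control option order programmatically, this logic is simple to implement. Here’s a minimal sketch in Python; the questions and the ‘randomise’ flag are invented for illustration, not any particular survey tool’s API:

```python
import random

# Hypothetical questions for illustration. 'randomise' marks nominal
# option lists whose order carries no meaning; ordinal scales keep
# their natural order so the scale stays readable.
questions = [
    {
        "text": "Which of these services have you used?",
        "options": ["Passport renewal", "Tax return", "Driving licence"],
        "randomise": True,   # no natural order, so shuffle to reduce order bias
    },
    {
        "text": "How satisfied were you with this service?",
        "options": ["Very dissatisfied", "Dissatisfied", "Neutral",
                    "Satisfied", "Very satisfied"],
        "randomise": False,  # an ordered scale: shuffling would confuse people
    },
]

def options_for_display(question):
    """Return the answer options in the order a participant should see them."""
    opts = list(question["options"])  # copy, so the canonical order is untouched
    if question["randomise"]:
        random.shuffle(opts)
    return opts

for q in questions:
    print(q["text"])
    for opt in options_for_display(q):
        print("  -", opt)
```

Most survey platforms offer an equivalent per-question randomisation setting, so in practice you’ll usually toggle this rather than code it, but the principle is the same: randomise nominal options, never ordered scales.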

 

Being able to answer the questions  

 

It’s always important to make sure the questions you ask are within a participant’s “frame of reference”. In other words, do they have the required knowledge to answer that question? Is that question relevant to them? If not, they will be forced to guess, which means the data will have very little integrity or reliability.

Some questions and answers are, by their very nature, likely to be less accurate than others. For example, are you asking something which respondents are likely to have a very accurate response for (e.g., their age in years, as of their last birthday)?

Are you asking them to use their memory (e.g., how many times in the last month they’ve used a particular service)? If you are, then it’s important to accept that to some extent, participants might be guessing.  

Or are you asking them to select from a best-fit list of options? In this case, there’s a possibility that none of the available options exactly corresponds to their actual perspective.  

It’s fine to use any of these question types, as long as you are mindful of their limitations when it comes to analysing the data.

One way to make sure the data you get is as accurate as possible is to give people the option of saying they ‘don’t know’ the answer to a question. If none of the answers you have included accurately reflect the participant’s perspective, then you shouldn’t force them to choose one. If they really don’t know what they think, then that’s fine. A ‘Don’t know’ option should be included for every question, because it’s a perfectly valid answer!

Whenever you can, you should also include an ‘Other’ option, so that if your list of responses doesn’t apply to someone, they can write in their own response. This has two advantages: it means they’re not forced to choose an option which isn’t really accurate for them, and it might draw attention to a potential response which hadn’t occurred to you before they mentioned it.   
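If you assemble your questionnaires in code, one way to make these options routine is a small helper that appends them to every closed question. The sketch below is purely illustrative; the function name and the option wordings are my own, not any survey tool’s API:

```python
def with_escape_options(options, include_other=True, include_dont_know=True):
    """Return a closed question's options plus 'Other' and 'Don't know'."""
    opts = list(options)  # copy, so the caller's list isn't modified
    if include_other:
        opts.append("Other (please specify)")  # pair with a free-text box
    if include_dont_know:
        opts.append("Don't know")  # a valid answer, so don't omit it
    return opts

print(with_escape_options(["Email", "Phone", "Post"]))
# ['Email', 'Phone', 'Post', 'Other (please specify)', "Don't know"]
```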

Wherever you can, you should take care to make sure that your question wording and response options don’t exclude anybody or cause offence to any groups of people.  

A common example of this on forms is having a single-select option for ethnicity, when many people fall into multiple groups. There are more and less inclusive ways you can ask about race, gender, and any other variable which might form an important part of somebody’s identity, so it’s important you think about these whenever you’re writing a survey.

Being willing to answer the questions  

 

You should always be mindful of potentially sensitive subject areas. If you think any of your questions might cause somebody distress, then you should warn people at the very beginning of the survey.   

It’s a good idea to cover sensitive subjects towards the end of a questionnaire. One reason for this is that it helps if a participant is ‘warmed up’ and in the right frame of mind to answer questions honestly. Towards the end of a survey, people are also more likely to understand that they can opt out if they don’t want to answer any individual question.   

Giving your participants the opportunity to not answer any question is very important. This can be frustrating, because the questions we most want answered are sometimes the ones people are most reluctant to answer, but it’s unethical to force or coerce anybody into answering a question. It’s good practice to include a ‘Prefer not to say’ option for every question.

You should also provide space for your participants to record comments on topics which are not covered by the questionnaire. This shows people that you’re interested in their perspective. It might also prove beneficial to your study overall, because it can draw attention to topics which weren’t covered in the questionnaire and which might not have occurred to you before they mentioned them.

Finally, it’s a good idea to do some background research into whether there are any ‘standard’ questions, or questions which have been used in previous research, to find out about similar topics to the one you’re interested in. There are two main reasons this is generally sensible.

Firstly, using these questions can give you some useful comparability across studies, which makes the data more valuable. And secondly, somebody else might have already done a lot of work to determine the best way of asking a certain question, so if you draw upon their work, you can save yourself a lot of time and effort duplicating it. 

It’s important that you consider these questions critically, though, paying particular attention to their reliability. Just because a question has been used before doesn’t necessarily mean it’s a good question!

Up next: how to effectively pilot and test a survey   

Having looked at why and when surveys can be used most effectively in user research, as well as how to effectively plan, structure and write a survey, the fifth part of this series will consider best practices when testing and piloting a survey.  

 
To find out more about user research at Mastek, reach out to me at michael.watson@mastek.com or on LinkedIn. 

 

Topics: Digital Transformation, Digital Service Design, research


Written by Michael Watson

Hi, I’m Mike, a User Researcher in Mastek’s user-centred design (UCD) team. Our job is to understand users - their needs, priorities, and experiences - and design services which work for them. I have 9 years of experience in research and a PhD using advanced quantitative research methods. I have designed, conducted, and analysed surveys covering a wide range of subjects, from educational videos for children, to business support needs for SMEs, and trust in political institutions.
