Mastek Blog

Testing and piloting a survey

18-Aug-2023 10:45:33 / by Michael Watson


In the final part of this series, I will take an in-depth look at how you can test and pilot a survey, to make sure potential research participants can answer it, and that it’s likely to meet your study objectives.

You can find the previous four parts of the series below:   

Part 1: Why should we use surveys in user research?  

Part 2: When should and shouldn’t we use surveys?  

Part 3: Planning and structuring a survey  

Part 4: Writing a user-centred survey  


Why should you pilot your survey?  


Wherever possible, a questionnaire should be piloted with a smaller test group, before it’s administered to the wider, main sample of people you’re interested in researching.   

This is key because testing the survey allows you to see how feasible and appropriate your questions are. In other words, are your participants able to answer the questions, and are they willing to? If they’re not, this will severely limit how useful your survey is. Piloting the survey with a test group of respondents allows you to catch these issues early and make any alterations your survey needs.

Running a survey pilot also enables you to test your survey’s length. While you might assume that your survey can easily be completed in 5 minutes, it’s difficult to know this for sure until someone has actually tried answering it. Some questions require more thought than others and therefore might take longer to complete.   

By recording the average response time from your test group, you can get a good idea of how long the main sample will take to complete the survey too. Once you have a more accurate idea of your survey’s length, it’s a good idea to include this in the survey instructions so that the people completing it know how long they should expect it to take.   
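If your survey tool exports start and finish timestamps, working out the average is straightforward. Here is a minimal sketch, using made-up pilot timestamps purely for illustration:

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical start/finish timestamps exported from a survey tool
pilot_sessions = [
    ("2023-08-01 09:00:00", "2023-08-01 09:04:30"),
    ("2023-08-01 10:15:00", "2023-08-01 10:21:10"),
    ("2023-08-01 11:30:00", "2023-08-01 11:35:45"),
]

def duration_minutes(start, finish):
    """Time taken to complete the survey, in minutes."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(finish, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

durations = [duration_minutes(s, f) for s, f in pilot_sessions]

print(f"Average completion time: {mean(durations):.1f} minutes")
print(f"Median completion time: {median(durations):.1f} minutes")
```

The median is worth reporting alongside the average, because one respondent who leaves the survey open over lunch can drag the average well above what a typical participant should expect.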

A test group can also give you a good indication of your survey’s comprehensibility and overall coherence. When you’re the one who wrote the survey, it can be difficult to spot your own issues and errors, so a fresh pair of eyes can be just what your survey needs.

Piloting also enables you to judge the quality of the data your survey is likely to collect. If the answers provided by the test group are vague, ambiguous or otherwise unhelpful to you, this is likely to be reflected in the much larger, main sample of respondents too. If you discover issues with data quality when piloting, you can think about why this might be happening and try to address it in the survey design. Perhaps your questions weren’t clear enough, or your survey was too long, so participants lost concentration? If you discover this during piloting, you can still do something about it.

How can you pilot your survey?  


Where possible, surveys should be tested at three levels:   

Firstly, by members of the project team, who are familiar with the aims and objectives of the research. In addition to testing the survey’s feasibility, appropriateness and length, testers from the project team should be able to give you a good idea of whether your survey questions are likely to give you the information you need.   

At the second level of piloting, it is a good idea to involve colleagues who are not immediately involved in the project. Members of the project team are likely to be more familiar with the concepts your survey covers than your target audience, especially if they have helped contribute to the survey design. Colleagues who are unfamiliar with the survey before testing it are likely to have a more similar experience to your ‘real’ target audience.   

Finally, your survey should be tested with a small selection of the target audience. While this is the most time-consuming and costly type of piloting, it is also the most useful, as it provides you with a real picture of how your survey will work in the field. This level of piloting is typically conducted in one of two ways:  

1) Conducting qualitative testing of specific questions, through focus groups or interviews with members of the target audience.   

This is especially useful for questions where you are concerned about sensitivity or comprehensibility and will provide you with a detailed account of a respondent’s understanding and experience with the survey. Consulting your target audience at this stage allows you to design your survey in a user-centred way.   

2) ‘Soft launching’ a survey, by initially collecting a small number of responses from the real target audience - similar to a beta release in software development.

This is an effective way of rigorously testing the quality of your survey, by conducting a detailed analysis of the preliminary data. You should pay close attention to any particularly surprising or potentially problematic results.  


When is piloting especially important?  


Rigorous testing and piloting can be especially important when your survey deals with unfamiliar concepts or those for which no ‘standard’ questions have been established. In these cases, it can be difficult to discern the most appropriate way to phrase, structure or frame your questions. Observing people while they complete the pilot survey and giving them the opportunity to provide feedback can be very effective in this scenario.   

Piloting is particularly important when you are aware that your questionnaire may be complex, contentious, sensitive or simply too long. If you are concerned about any of these factors, piloting can shed some light on whether they are likely to be problematic for the research participants.   

If your survey includes open-ended questions, it can be helpful to create ‘code lists’, which categorise and group together similar responses, making it easier to analyse them. If you don’t have ready-made code lists, piloting your questionnaire with a small sample of the target audience can allow you to generate initial codes, which can save time when it comes to analysing the final results.  
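In practice, a code list is often just a mapping from each code to the kinds of phrases that signal it. The sketch below shows one simple way this could work, with a hypothetical code list and pilot responses invented for illustration (real qualitative coding is usually done by a researcher, with keyword matching only as a first pass):

```python
# Hypothetical code list drafted from pilot responses: each code maps
# to keywords that suggest it appears in an open-ended answer
code_list = {
    "navigation": ["menu", "find", "navigate", "lost"],
    "performance": ["slow", "loading", "crash", "freeze"],
    "content": ["unclear", "jargon", "confusing"],
}

def assign_codes(response):
    """Return the set of codes whose keywords appear in a response."""
    text = response.lower()
    return {code for code, keywords in code_list.items()
            if any(kw in text for kw in keywords)}

pilot_responses = [
    "The menu made it hard to find the right page",
    "Pages were slow and kept loading forever",
    "Too much jargon, and I got lost in the menus",
]

for response in pilot_responses:
    print(response, "->", assign_codes(response))
```

Even a rough first pass like this makes it easier to see which themes dominate the pilot data, and which responses need a new code adding to the list before the main launch.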

You may be planning to deliver your survey on a range of different platforms, devices, web browsers or operating systems. If this is the case, it is important to test your survey on as many of these as possible before you distribute the survey. If the survey platform is not optimised for mobile devices, for example, questions might not display properly, or may appear in a confusing way.   

Finding these issues before you open the survey up to a much larger audience has the potential to save a lot of money and time. 


Other reading: best practice in survey design 


While this fifth and final part of this series focused on testing and piloting surveys, you can find the previous four parts below:   

Part 1: Why should we use surveys in user research?  

Part 2: When should and shouldn’t we use surveys?  

Part 3: Planning and structuring a survey  

Part 4: Writing a user-centred survey

To find out more about user research at Mastek, reach out to me on LinkedIn.


Topics: Digital Transformation, Digital Service Design, research

Written by Michael Watson

Hi, I’m Mike, a User Researcher in Mastek’s user-centred design (UCD) team. Our job is to understand users - their needs, priorities, and experiences - and design services which work for them. I have 9 years of experience in research and a PhD using advanced quantitative research methods. I have designed, conducted, and analysed surveys covering a wide range of subjects, from educational videos for children, to business support needs for SMEs, and trust in political institutions.
