From Design to Analysis: A Guide to Online Surveys

Surveys are a popular research tool widely used in various business areas, from marketing and product development to studying employees from an HR perspective.

However, in my experience, this type of research receives far less emphasis in courses for product managers, which tend to give more weight to interviews and other qualitative methods. Moreover, surveys are often done on a whim rather than in accordance with any methodology.

This article provides an overview of the main stages and essential milestones in conducting a survey, aiming to systematize the process and share some thoughts and useful information. I hope it will be helpful to any professional who can benefit from this research method, not just product managers.

I will focus on online surveys, today's most popular version of this research method. While I do not intend to cover this large topic in exhaustive detail, I aim to discuss the key stages that must be considered to get started. If my article does not provide enough information, it should give you a direction for further study.

Define Objectives

  • Clarify Purpose: Determine what you want to learn from the survey. Is it to gauge customer satisfaction, understand user needs, or test a new product concept?
  • Set Specific Goals: Make sure your objectives are SMART (Specific, Measurable, Achievable, Relevant, Time-bound).

To start with, surveys are strategic tools for gaining insights, but their success hinges on clear objectives. Start by identifying the survey's purpose, whether it's to assess customer satisfaction, understand user needs, or gauge interest in a new product; each purpose calls for a different approach. After clarifying the purpose, set specific goals using, for example, the SMART criteria. This precise goal-setting ensures the survey produces actionable insights that inform data-driven decisions aligned with business strategy and customer needs, and, just as importantly, lets you apply those insights to decision-making on your own projects, whatever they are.

Define goals using SMART

  • Segmentation: Identify who your respondents should be. This could be based on demographics, user behavior, purchase history, or any other parameter important to your case.
  • Sample Size: Determine how large your sample needs to be for statistically significant results.

Segmentation

This phase demands meticulous segmentation, where you define who your respondents should be. The precision in this segmentation process ensures that you are tapping into the proper set of respondents whose insights will be most valuable to your survey’s objectives.

If you ask users of your service about their checkout experience, it is reasonable to limit the survey to those users who have actually reached checkout. Select the characteristics that are important to you and apply them as criteria for choosing your segment.

Sample Size

The next step, determining an appropriate sample size, is critical for ensuring statistically significant results and for enhancing the reliability of your survey.

Sample size refers to the number of observations, individuals, or units collected in a study or experiment. It is crucial for ensuring that the collected data is statistically reliable and accurately represents the broader population, or, in business terms, the groups being studied. Determining the appropriate sample size involves considering factors like the desired confidence level, margin of error, and the study's overall objectives.

Follow these steps to calculate your sample size accurately:

  1. Define the “Population” Size: Establish the total number of individuals in the group you are studying. Of course, the population size should reflect the specific group you want to understand, which in our case is probably your website or app audience. The population size would be the total number of unique visitors or users who interact with your website over a given period, such as a month or a year.
  2. Select Confidence Level: Typically, surveys use a 95% confidence level, meaning you can be 95% certain the results reflect the views of the population. You can adjust this level based on the importance of precision in your survey, but for simplicity, you can stick to this 95% standard.
  3. Choose Margin of Error: The margin of error indicates how much your survey results may deviate from the true views of the overall population. Commonly, a 5% margin of error is used, but it can be lowered for higher precision.
  4. Calculate the Sample Size: Using the above factors, apply the formula for sample size calculation or use a sample size calculator available online.

The most obvious way to calculate the sample size is to use one of the many online calculators; you can find more than one of these on the Internet. As an example, here are links to two popular calculators:

Remember, a smaller sample size might not accurately represent your population, leading to biased results. This fact is intuitive, right? However, a larger sample size may not significantly enhance insights, while increasing the survey's resource requirements. Precision in this calculation sets a robust foundation for a survey, ensuring it yields results that are accurate and reliable while remaining efficient in terms of the analysis time required.
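If you prefer to compute the number yourself rather than rely on a calculator, here is a minimal sketch in Python of the standard approach (Cochran's formula with a finite population correction). The population size, confidence level, and margin of error below are example values to replace with your own.

from math import ceil

# z-scores for common confidence levels
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(population, confidence=0.95, margin_of_error=0.05, p=0.5):
    # Cochran's formula; p = 0.5 is the most conservative assumption
    # about how varied the answers will be.
    z = Z_SCORES[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population estimate
    return ceil(n0 / (1 + (n0 - 1) / population))       # finite population correction

# Example: 10,000 monthly users, 95% confidence, 5% margin of error
print(sample_size(population=10_000))

With these example inputs the result is about 370 respondents, which matches what most online calculators report for the same numbers.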

Design the Survey

  • Questionnaire Type: You can always stick with the standard mix of open-ended and closed-ended questions. However, you can go further and choose one of the more interesting methods used by sociologists and marketers.
  • Clear Wording: Ensure questions are clear, unbiased, and straightforward.
  • Tool Selection: Choose a tool or platform for creating and distributing the survey (e.g., Google Forms, SurveyMonkey).
  • Pilot Test: Conduct a pilot survey to test the questions and make necessary adjustments.

Questionnaire Type

Crafting the survey involves several pivotal decisions to ensure it effectively captures the necessary data while being respondent-friendly. A crucial aspect is deciding on the types of questions to include. I will not dwell on options such as closed-ended and open-ended, matrix, ranking, rating, and dichotomous questions; I am sure we have all encountered them as general knowledge. But let me give you examples of some other ways to construct a questionnaire:

  • Best-Worst Scaling: Respondents are asked to select the best and worst options from a list. This approach provides more nuanced data about preferences compared to simple rating scales.
  • Conjoint Analysis: Presents respondents with different product profiles that vary systematically across attributes. Respondents’ choices among these profiles help deduce the importance of each attribute in their decision-making process.
  • Paired Comparison: Respondents evaluate two options at a time, and the process repeats with different pairs of options until all comparisons are made or until a clear preference hierarchy emerges.

Example of Conjoint Survey

These are examples of methods that go beyond simple direct questions. The main point is not that these three in particular matter, but that when you face the task of choosing a survey format, stay open-minded and spend time studying: look at how other researchers have approached the topic, and ask ChatGPT how such a survey could be designed better and what would make it more effective and rigorous. Surveys, like any other research method, are subject to errors, and many researchers have found ways to avoid them by inventing such original methods.
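As a small illustration of how responses from such a method can be analyzed, here is a minimal sketch in Python that applies a simple counting approach to hypothetical best-worst scaling data: each option is scored by how often it was chosen as best minus how often it was chosen as worst.

from collections import Counter

# Hypothetical best-worst scaling responses: each respondent picked
# the best and the worst attribute from the set they were shown.
responses = [
    {"best": "Price", "worst": "Brand"},
    {"best": "Quality", "worst": "Price"},
    {"best": "Price", "worst": "Delivery"},
    {"best": "Quality", "worst": "Brand"},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)

# Best-worst count: positive scores indicate preferred attributes.
options = set(best) | set(worst)
scores = {o: best[o] - worst[o] for o in options}
for option, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(option, score)

More rigorous analyses exist (for example, multinomial logit models), but even this simple count already produces an interpretable preference ranking.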

Clear Wording

Another element is clear wording. The questions should be formulated to be clear, unbiased, and straightforward, avoiding technical jargon or ambiguous phrases that could lead to misinterpretation. This clarity ensures that respondents from various backgrounds can understand and accurately respond to your questions. Also, try to be as concise in your wording as possible.

Tool Selection

The next step is to choose a tool or platform for creating and distributing the survey. Various digital tools offer unique capabilities: Google Forms for its simplicity and integration with other Google services, SurveyMonkey for its advanced survey design features, Qualtrics for comprehensive and sophisticated survey needs, or platforms like Typeform for engaging, design-forward surveys. The choice of platform should be informed by factors such as the complexity of your survey, the need for customization, user-friendliness, and, of course, your budget.

Example of survey using SurveyMonkey

Pilot Test

Finally, conducting a pilot test of your survey is a crucial step. This involves rolling out the survey to a small, representative segment of your target audience, or at the very least doing some hallway testing. The feedback and data gathered from this test are invaluable for identifying any issues with question clarity, survey length, or technical glitches. Based on this feedback, necessary adjustments can be made to ensure the final survey is fine-tuned for optimal response and accuracy. The importance of this step cannot be overstated; skipping it risks throwing away a great deal of work.

Distribute the Survey

  • Reach Out: Use your chosen method to distribute the survey to your target audience.
  • Incentives: Consider offering incentives to increase response rates.

Reach Out

Once your survey is meticulously designed, the next crucial phase is its distribution. Use the distribution method you have chosen, whether that is email lists, social media platforms, direct mail, or any other channel available to you. Ensure the survey link is easily accessible and that the survey works across various devices and browsers.

Incentives

Introducing incentives can significantly boost your survey’s response rate. These incentives could range from small financial rewards, discount coupons, entries into a prize draw, or access to exclusive content or services.

Here I will allow myself a small digression. There is still an opinion today that it is unnecessary, or even wrong, to pay or otherwise reward respondents for taking part in research. This view is fundamentally mistaken: small rewards create motivation and foster collaboration, and there is scientific evidence supporting the usefulness of incentives for improving the quality of research.

Collect Responses

  • Monitoring: Regularly monitor responses to ensure data quality and sufficient response rates.
  • Follow-up: Send reminders to those who haven’t responded yet, if appropriate.

Monitoring

The active phase of collecting responses is critical in the survey process, where vigilance and responsiveness are key. First, regular monitoring of the incoming responses is essential. This doesn’t just mean counting how many responses you’ve received. It involves assessing the quality of the data being collected. Check for any patterns that might indicate misunderstanding of questions or any technical issues that respondents might be facing. Monitoring also allows you to gauge whether the response rate aligns with your expectations and sample size requirements. If the response rate is lower than anticipated, this is the time to consider adjustments to your distribution strategy.
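As an illustration of what such monitoring can look like in practice, here is a minimal sketch in Python (pandas), assuming the responses are exported to a hypothetical CSV file with rating columns q1 to q5. It tracks the response and completion rates and flags "straight-lining", where a respondent gives the same rating to every question, which often signals that questions are being skimmed.

import pandas as pd

# Hypothetical export of the responses collected so far.
df = pd.read_csv("responses.csv")
rating_cols = ["q1", "q2", "q3", "q4", "q5"]

invited = 2000                                    # how many people received the survey
response_rate = len(df) / invited
completion_rate = df[rating_cols].notna().all(axis=1).mean()

# Respondents who gave an identical rating to every question.
straight_lining = (df[rating_cols].nunique(axis=1) == 1).mean()

print(f"Response rate:   {response_rate:.1%}")
print(f"Completion rate: {completion_rate:.1%}")
print(f"Straight-lining: {straight_lining:.1%}")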

Follow-up

Follow-up is another important aspect of this phase. For respondents who haven’t yet completed the survey, sending out reminders can be an effective strategy to boost response rates. However, it’s important to balance persistence with courtesy. Reminders should be friendly and encouraging, yet not overly intrusive. The timing of these reminders is also crucial — not too soon after the initial invitation, but not so late that the respondent loses interest or forgets about the survey.

Analyze the Data

  • Data Cleaning: Remove incomplete or outlier responses.
  • Statistical Analysis: Use appropriate statistical methods to analyze the data.
  • Insights Generation: Look for trends, patterns, and insights relevant to your objectives.

Data Cleaning

After successfully collecting responses, the next vital step is to analyze the data. This stage is where the responses are transformed into actionable insights. Initially, embark on data cleaning. This process involves scrutinizing the dataset to remove incomplete responses, as well as identifying and handling outliers. Outliers could be genuine but extreme cases, or they might indicate data entry errors. Careful assessment is needed to decide how to treat them. Also, check for any inconsistencies or illogical responses that might compromise the integrity of the data.
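For example, a minimal cleaning pass, again assuming a hypothetical CSV export with the answer columns and a completion-time column, might drop incomplete responses and flag completion-time outliers with the interquartile-range rule:

import pandas as pd

df = pd.read_csv("responses.csv")           # hypothetical export
required = ["q1", "q2", "q3", "q4", "q5"]

# Drop responses that did not answer every required question.
clean = df.dropna(subset=required)

# Flag suspiciously fast or slow completion times with the IQR rule.
q1, q3 = clean["completion_seconds"].quantile([0.25, 0.75])
iqr = q3 - q1
within = clean["completion_seconds"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
outliers = clean[~within]                   # review these manually before discarding
clean = clean[within]

print(f"Kept {len(clean)} of {len(df)} responses; {len(outliers)} flagged as outliers")

Whether the flagged rows are genuine extreme cases or low-quality data remains a judgment call; the code only surfaces them for review.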

Statistical Analysis

Once the dataset is cleaned, proceed to the structured process of statistical analysis, which can be broken down into several key steps:

  1. Determine the Type of Analysis: First, identify whether your data requires quantitative analysis, qualitative analysis, or a combination of both. Quantitative analysis is used for numerical data and involves statistical techniques to quantify relationships and patterns. Qualitative analysis is used for textual or non-numerical data and focuses on identifying themes and patterns in responses.
  2. Descriptive Statistics: Begin with descriptive statistics to summarize the data. This includes calculating means, medians, and modes for central tendency, and standard deviations or ranges for variability. This step provides an initial understanding of your data's general characteristics (see the short sketch after this list).
  3. Inferential Statistics: For quantitative data, employ inferential statistical methods to make predictions or inferences about the population based on your sample data. This might involve statistical tests, such as a chi-square test that yields a p-value.
  4. Interpretation of Results: Finally, the most critical step is the interpretation of your statistical findings. Correlate the results with your research questions and objectives. Look beyond the numbers to understand what they imply in the context of your survey goals, and consider the practical implications of these findings.
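For the descriptive step, a minimal sketch using Python's standard library and some made-up ratings is enough to get the central tendency and spread:

import statistics

# Hypothetical 1-5 satisfaction ratings collected in the survey.
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print("mean:  ", statistics.mean(ratings))
print("median:", statistics.median(ratings))
print("mode:  ", statistics.mode(ratings))
print("stdev: ", statistics.stdev(ratings))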

I'm sure the part about inferential statistics may seem the most obscure, so let's go into a little more detail and look at an example of the simplest statistical significance calculation, without diving into the depths of the theory. Imagine a survey to which 100 people responded, where one of the questions had three answer options: A, B, and C. The answers were distributed as follows: 38, 29, and 33. Option A scored the most. Does that mean it won, and should you draw conclusions accordingly? Well, of course not.

Let me start with a simple approach that could well become your go-to solution: the p-value. The p-value is a statistical measure that helps researchers decide whether a hypothesis should be rejected, in other words, whether an observed difference is a random fluctuation or a statistically significant result.

You can use existing online calculators. Personally, I like this one, which allows you to calculate the chi-square and p-value on the same page. Or, of course, you can try using ChatGPT for this, but it's better to double-check the results.

First of all, we need to calculate the chi-square, which is a statistical method used to determine if there is a significant difference between expected and observed frequencies in one or more categories.

To calculate this, we need to understand how the answers would be distributed in an ideal scenario where there is no difference between the answer options and the answers are distributed randomly. We have three answer options, so if there were no difference between them, each answer would receive 33.33% of the responses. This distribution is usually called the null hypothesis, or the expected distribution, commonly denoted E.

Comparing the real results and the null hypothesis, we can calculate chi-square. Using my example and the calculator above, here’s what we get:

Using chi-square, we can get a p-value, which in our case is 0.537299.

Next, we need to compare the p-value to the significance level, which is commonly 0.05. So, in our case:

P-value = 0.537299 > 0.05

What does this mean? If your p-value is larger than the significance level, the observed data does not provide strong evidence against the null hypothesis. In other words, the difference we observed may simply be due to chance, so we should not declare option A the winner based on this data.
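If you prefer to reproduce this check in code rather than with an online calculator, here is a minimal sketch using scipy. Note that scipy uses the exact expected count of 100/3 per option, whereas a calculator that rounds the expected counts to 33 reports a slightly different p-value (roughly 0.537 versus roughly 0.543); the conclusion is the same either way.

from scipy.stats import chisquare

observed = [38, 29, 33]          # answer counts for options A, B, and C

# With no expected frequencies given, chisquare assumes a uniform
# distribution (100 / 3 per option), which is our null hypothesis.
chi2, p_value = chisquare(observed)

print(f"chi-square = {chi2:.3f}, p-value = {p_value:.3f}")
if p_value > 0.05:
    print("No strong evidence against the null hypothesis: the difference may be chance.")
else:
    print("The difference between the options is statistically significant.")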

Of course, this is a very simple example. In real life, everything is usually more complicated. However, it demonstrates how statistical significance testing can and should be applied to quantitative research methods.

Insights Generation

The culmination of this process is insights generation. Delve into the cleaned and analyzed data to uncover trends, patterns, and correlations that directly address your survey objectives. This step is not just about summarizing data. It’s about interpreting it in the context of your research questions, drawing meaningful conclusions, and translating these into practical, actionable insights. These insights should not only answer the original questions posed by the survey but also offer a foundation for informed decision-making and strategic planning.

Report Findings

  • Clear Reporting: Create a report with key findings, graphs, and charts.
  • Actionable Insights: Highlight actionable insights and recommendations based on the data.
  • Iterative Improvement: Use feedback to improve future surveys.

Clear Reporting

The final and equally crucial phase of the survey process is reporting your findings. This step involves synthesizing all the data and insights into a coherent and impactful report. Begin with clear reporting: craft a document that articulately presents the key findings of your survey, and use visual aids like graphs, charts, and tables to make the data more digestible and engaging. Try a simple scheme: state the insight, then show how you arrived at that conclusion from the data.
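As a small example of such a visual aid, here is a minimal matplotlib sketch that turns the answer counts from the earlier A/B/C example into a bar chart you can drop into a report:

import matplotlib.pyplot as plt

options = ["A", "B", "C"]
counts = [38, 29, 33]            # answer counts from the earlier example

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(options, counts, color="#4c72b0")
ax.set_title("Which option do you prefer? (n = 100)")
ax.set_ylabel("Responses")
fig.tight_layout()
fig.savefig("survey_results.png", dpi=150)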

Actionable Insights

Emphasize actionable insights in your report. Beyond merely presenting data, your report should extract and highlight insights that can inform decision-making. Clearly articulate any recommendations or implications arising from the data, providing concrete, actionable steps that can be taken based on these insights.

Iterative Improvements

Finally, advocate for iterative improvement. Use the feedback and learnings from this survey to refine future survey strategies. This could involve improving question design, adjusting the survey methodology, or enhancing data analysis techniques. The objective is to continuously evolve the survey process, enhancing its effectiveness and efficiency with each iteration.

Ethical Considerations

To maximize the effectiveness and reliability of your survey, there are several additional considerations to keep in mind.

First, address ethical considerations. It is paramount to ensure the privacy and confidentiality of your respondents. This includes securing personal data, making participation voluntary, and ensuring anonymity if necessary. Be transparent about using the survey data and adhere to data protection regulations. This ethical approach not only safeguards your respondents but also enhances the credibility of your research.

Finally, if your survey spans different regions or groups, it’s important to be culturally sensitive. This involves being aware of and adapting to cultural nuances in question-wording, examples used, and the overall approach. Cultural sensitivity prevents misunderstandings and ensures that your survey is appropriate and respectful to all respondents, which is particularly important in a globalized context.

In Conclusion

Considering all of the above, it may seem that conducting surveys is a complex process that requires a lot of time and understanding of many aspects. On the one hand, this is true. However, do not forget that the experience of many people who have conducted similar studies can help you. By studying their experience, you can find insights on how to implement certain elements of your research, thereby obtaining a powerful and objective tool in your work.
