How do you know how you’re doing?
If you’re a senior executive or run your own successful business, chances are your instincts serve you well, but there are plenty of times when instinct isn’t enough. You may want evidence to back up your gut. You may positively need it to persuade others to follow your lead.
Doing good research is like putting on glasses while driving. You may be able to stay out of oncoming traffic without them, but you see much more – including the unexpected surprises that could get you into trouble – if you put them on.
There are many online survey options providing basic survey design, implementation, and summarization tools for free. SurveyMonkey probably has the greatest visibility in today’s marketplace.
Still, there are plenty of ways to get yourself into trouble, especially if you are surveying your own team or other key stakeholders. Keep the following guidelines in mind for best results:
Be Cautious of DIY. Unless you are a trained market researcher, there are many ways you can inadvertently muddle the results by doing it yourself, some of which I’ll address below. At least get input from someone with research experience.
Keep it short. Completion rates decline steadily once an online survey runs past 8-10 minutes. Your own team, employees, or customers are more motivated than the general public to give feedback, but let them know their time commitment up front. Even so, be cautious of requiring more than 20-30 minutes, as fatigue makes later responses less thoughtful. It helps to vary the type and format of the questions. Where appropriate, images or video clips will keep interest up.
Edit ruthlessly. Your first draft will almost always be too long, so cut until only the questions you truly need remain.
Beware of leading questions. Certain kinds of political polling ask leading questions to test the susceptibility of opinions to change. Unless you have a similar agenda, avoid putting your thumb on the scale. Use neutral wording. Ask for general feedback first in each area. You want impressions given without “priming the pump.” Next ask questions about strengths, then weaknesses and suggestions for improvement. If you ask about problems first in an area where there is a lot of negative sentiment, you squash any positive feedback you might otherwise have gotten. The reverse is less problematic.
Include an “Other” option. There is nothing more annoying than taking a survey where none of the response options fit your experience, or where some important issue has been left out. Always allow an “other” option with open-ended space to explain. Always ask at the end of the survey if there is any additional feedback the respondent would like to provide.
Pretest. Do a couple of trial runs before releasing the survey. You don’t want an incorrect scale or label creating problems for interpretation of the results. Verify that the flow of the questions makes sense and that nothing important has been left out.
Ensure anonymity. Anonymous participation is a best practice. If you suspect some feedback may be negative or controversial, anonymity is an absolute requirement. The perception of confidentiality is just as important. This is best ensured by turning over the administration of your survey to a disinterested third party.
Follow up with non-responders. For the highest response rate, follow up twice with non-responders. Online survey tools can assign a unique key to each potential respondent, allowing you to send a follow-up invitation to those who haven't completed the survey without linking any individual to their answers.
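The token mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not any particular survey platform's API: each invitee gets a random token, answers are stored by token only, and the token-to-email table is used solely to build the reminder list (ideally by the third party administering the survey).

```python
import secrets

# Hypothetical invitee list for illustration only.
invitees = ["ana@example.com", "ben@example.com", "chao@example.com"]

# One random token per invitee. Submitted answers would be stored keyed by
# token alone; this table exists only so reminders can be sent.
tokens = {secrets.token_urlsafe(8): email for email in invitees}

# Tokens that have completed the survey (filled in by the survey back end).
responded = set()

def reminder_list():
    """Emails that still need a follow-up invitation."""
    return [email for token, email in tokens.items() if token not in responded]
```

Because the response store never contains the email address, whoever sends the reminders can see *who* hasn't answered without seeing *what* anyone said.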
Beware of confusing directional vs. significant results. If you are gathering qualitative data (such as open-ended responses, interviews, or focus groups), your results may suggest a trend, but you rarely know with certainty that they are representative. The same is true even for numeric ratings if your group is small. That doesn't mean your data aren't useful. Just keep in mind that one person having a really unusual day can distort your outcomes.
For statistical significance testing to be powerful enough to be useful, a good rule of thumb is a minimum of 50 participants to compare the same group on two different ratings (e.g. strategic focus vs. accountability), and 100 participants to compare two groups on the same rating (e.g. senior management vs. line staff). You can do those comparisons in a smaller group, but know that noise in the data can hide some meaningful differences while suggesting others that aren’t real.
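The small-group noise problem above is easy to demonstrate with a quick simulation. This is a hypothetical sketch (made-up rating scale and spread, not real survey data): it compares two groups whose true average rating is *identical* and shows how large a purely random gap you should expect at different group sizes.

```python
import random
import statistics

random.seed(1)

def simulate_ratings(n, true_mean=3.5, sd=1.0):
    """Draw n ratings on a 1-5 scale around the same true mean (hypothetical data)."""
    return [min(5.0, max(1.0, random.gauss(true_mean, sd))) for _ in range(n)]

def average_false_gap(n, trials=500):
    """Typical observed gap between two groups whose true means are identical."""
    gaps = []
    for _ in range(trials):
        a = statistics.mean(simulate_ratings(n))
        b = statistics.mean(simulate_ratings(n))
        gaps.append(abs(a - b))
    return statistics.mean(gaps)

for n in (10, 50, 100):
    print(f"n={n:3d}: typical gap between two IDENTICAL groups = {average_false_gap(n):.2f}")
```

With ten respondents per group, a sizable gap routinely appears by chance alone; by a hundred, the phantom gap shrinks to a fraction of a rating point. That is the intuition behind the 50- and 100-participant rules of thumb: below those sizes, random noise is big enough to both manufacture differences and mask real ones.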
DO SOMETHING with the results! It’s a good practice to share some key findings with your participants unless they touch on confidential intellectual property. But note: Especially with key stakeholders such as employees or customers, you can do more harm than good by asking them what they think unless you take assertive and visible action based on the results.
If you are contemplating doing a survey and would like some guidance for creating the most powerful results, contact me to explore how I can help.
Ann Hollier provides strategic consulting and performance coaching to high achieving senior executives and management teams. She specializes in change management, strategic planning and implementation, leadership development, and building world-class collaborative teams. Learn more at http://thecogentexecutive.com/