
What to Know About Employee Engagement Surveys

It all starts with “why”. When I wrote my business plan for Culture Keen, I spent a lot of energy asking why I wanted to help companies build resilient and compelling workplace cultures. My answer: I am passionate about helping executive teams create a vibrant workplace environment, improve employee engagement, build a roadmap to retain/attract top talent, and inspire strong leadership. To deliver on my objectives, I knew I needed to create tools that would give leadership teams visibility into the hearts and minds of their most treasured asset – their people.

This revelation inspired me to design a customizable employee engagement survey, and I wanted to be very intentional in its development. Good survey design requires an awareness of analytical best practices. If you rely only on subjective or qualitative questioning, you simply cannot trust the results – the data will likely be biased, inaccurate, or incomplete. It’s the “garbage in, garbage out” philosophy – if you don’t ask the right questions, you cannot expect to get any meaningful answers.

Researcher and author Marcus Buckingham highlights the pitfalls of poor survey design in an article he wrote for Harvard Business Review. While the article focuses specifically on 360º reviews (which he considers a prime example of poorly designed surveys), his words apply to other types of surveys as well. Buckingham explains: “the data generated from a 360 survey is bad. It’s always bad. And since the data is bad, no matter how well-intended your coaching, how insightful your feedback, how coherent your leadership model, you are likely leading your leaders astray.”

So, how exactly does poor survey design cause leaders to make bad decisions? Buckingham explains that “the bottom line is that, when it comes to rating [other people’s] behavior, you are not objective. You are, in statistical parlance, unreliable. You give us bad data.” And it’s not just that a single individual is unreliable – every participant is unreliable in their own systematic way. There’s a false belief that a large enough sample size will “filter out” the bad data points and reveal a general trend, but averaging only cancels random noise; each rater’s personal bias is systematic, so it survives the averaging. Add up all the responses and you do not get an accurate, objective measure of your leadership behaviors. You get, in Buckingham’s words, “gossip, quantified” – and that does not get you anywhere close to the truth.
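To see why averaging does not rescue unreliable raters, consider a toy simulation (my own illustrative sketch, not Buckingham’s analysis; all numbers are made up):

```python
# Toy simulation: averaging many raters only cancels *random* noise.
# If raters carry a shared systematic bias, the average converges to
# truth-plus-bias, not to the truth.
import random

random.seed(42)
TRUE_SCORE = 3.0   # the "real" quality of the behavior being rated
NUM_RATERS = 500

# Each rater has a persistent personal bias (here, systematically
# generous on average) plus some moment-to-moment random noise.
biases = [random.gauss(0.8, 0.5) for _ in range(NUM_RATERS)]
ratings = [TRUE_SCORE + bias + random.gauss(0, 0.3) for bias in biases]

average = sum(ratings) / NUM_RATERS
print(f"average rating: {average:.2f}")  # lands near 3.8, not 3.0
```

No matter how many raters you add, the average settles on the biased value, because the bias lives in the raters themselves rather than in random scatter.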

So, then how do you create surveys that yield reliable data? Buckingham’s advice is to cut statements that ask the rater to evaluate someone else’s behavior and replace them with statements that ask raters to report their own feelings and experiences – for example, replacing “My manager communicates clearly” with “I know what is expected of me at work.” Participants can accurately assess their own experiences; they cannot reliably rate another person’s perceived behaviors.

The Gallup Organization researches employee engagement, and it has identified the types of questions that are proven by research to measure engagement accurately. The most reliable questions ask employees only about their own experiences and call for quantitative rather than qualitative answers. These questions require an answer that falls on some type of number line – a binary (1 = yes, 0 = no) or a rating scale (5 = strongly agree, 4 = somewhat agree, 3 = neutral, 2 = somewhat disagree, 1 = strongly disagree). This introduces an additional challenge in proper survey design – how to properly analyze these data – and the answer can be quite complex.
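As a minimal sketch of what that encoding looks like in practice (the scale labels and helper function here are my own illustrative choices, not a prescribed standard):

```python
# Map text answers onto numeric scales so responses can be aggregated.
LIKERT_SCALE = {
    "strongly agree": 5,
    "somewhat agree": 4,
    "neutral": 3,
    "somewhat disagree": 2,
    "strongly disagree": 1,
}
BINARY_SCALE = {"yes": 1, "no": 0}

def encode_response(answer, scale):
    """Normalize case and whitespace, then look up the numeric score."""
    return scale[answer.strip().lower()]

responses = ["Strongly agree", "Neutral", "somewhat disagree"]
scores = [encode_response(r, LIKERT_SCALE) for r in responses]
print(scores)                     # [5, 3, 2]
print(sum(scores) / len(scores))  # 3.33... is the item's average score
```

Once every answer is a number, averages, trends, and comparisons become possible – which is exactly where the analytical pitfalls begin.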

For starters, how can you be sure you are evaluating all data points fairly? It’s not as easy as it seems. Every data set contains “noisy” outliers that can skew your results, and you need to decide whether to keep them in your analysis. You also need to weight data points appropriately and adjust for differences in sample size. For example, say you are evaluating two teams – a software engineering team with 20 employees and a finance team with 3 employees. Is it fair to take the average score from each team and compare the two numbers directly? Certainly not – a single unhappy response moves the 3-person average far more than the 20-person one – so sample sizes must be factored into the final analysis, or at least noted alongside the results.
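Here is a minimal sketch of that comparison, with made-up scores, showing one common way to flag the difference – reporting an approximate uncertainty interval alongside each mean:

```python
import math
import statistics

# Illustrative 1-5 engagement scores; not real survey data.
teams = {
    "engineering": [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 3, 4, 4, 5, 2, 4, 4, 3],  # n = 20
    "finance": [5, 2, 4],                                                          # n = 3
}

for name, scores in teams.items():
    n = len(scores)
    mean = statistics.mean(scores)
    # The standard error shrinks as the sample grows: a 3-person mean is
    # far more sensitive to a single outlier than a 20-person mean.
    sem = statistics.stdev(scores) / math.sqrt(n)
    print(f"{name}: n={n}, mean={mean:.2f} +/- {1.96 * sem:.2f}")
```

The two means may look comparable, but the finance team’s interval comes out several times wider – a reminder that small-team results deserve a caveat, not a head-to-head ranking.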

I take all these obstacles, along with the management team’s desired outcomes, into consideration when I construct your customizable survey. Most importantly, I listen to your leadership objectives. A final point: encourage teams to save the response data from each survey to leverage in future engagements. Data is among a business’s most valuable assets. If a leadership team can say, “here are the recent results from our team, and here’s how they compare to previous surveys” … that’s powerful.
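As one last illustrative sketch (the question wording and scores here are placeholders, not real results), comparing one survey wave against the last takes only a few lines once the data has been saved:

```python
# Compare average scores per question across two saved survey waves.
# Both dictionaries are hypothetical stored results.
wave_prior = {
    "I know what is expected of me": 3.6,
    "I have the tools I need to do my job": 3.1,
}
wave_current = {
    "I know what is expected of me": 4.1,
    "I have the tools I need to do my job": 3.0,
}

for question, current in wave_current.items():
    prior = wave_prior.get(question)
    if prior is not None:
        print(f"{question}: {prior:.1f} -> {current:.1f} ({current - prior:+.1f})")
```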