Information about people’s priorities is key to many crucial decisions around strategy, be it designing products, refining service offerings, honing market positioning, or developing change-management programs.
Traditional Ranking Questions
Using traditional survey tools, the question type to use is “ranking”. There are a few variations of how a ranking question can be asked, but the common expectation is the same: to understand how a group of respondents would order a list of items by importance or priority.
In some contexts, the question type is also known as “forced ranking”, to emphasize the fact that respondents are not allowed to say “these two things matter equally”; they must pick an order. As you will see, there are some fundamental issues with that rule. In HR methodology there is an unrelated practice also called “forced ranking”, which is very controversial and tends to rile people up, so you don’t hear this question type called “forced ranking” very often. However, the “forced” element is crucial to recognize.
The traditional ranking question has a few well-known issues.
Three Known Problems of Ranking Questions
Reliability. A ranking question can only capture “explicit” or “literal” answers, and those answers are not as reliable as you would hope. People may have many real reasons, conscious or subconscious, not to tell you what they really think. For example, in order to come across as socially responsible, few people will rank “sustainability” as unimportant, yet not all of them will actually pay more for a more sustainable product. For another example, people may not know the priorities of the items you are asking about, because they have never thought about them in those literal terms. Is your individual productivity more important than your team’s productivity? There are also situations where people simply cannot express their priorities in the form you are asking for, especially when two items really are of comparable priority: one is in no way higher than the other, or one cannot even go without the other. If you ask a forced-ranking question anyway, the answer they give in the required form will be a wrong data point. This situation is common, because it is rare for one single thing to causally affect a result; it is often the combined effect of several things.
Precision. The precision of a forced-ranking question is much more limited than you would hope. Cognitive overload and limited attention spans create the first bottleneck. In addition, the fact that you asked people to rank 10 items does not mean you have reliable data that the 4th-ranked item is really ahead of the 5th, or that the 3rd item is really ahead of the 5th. To establish that, you would need to run pairwise t-tests on the data, and few people actually have the time to do those statistical tests. As a result, there may not be 10, or even 5, distinct slots in the ordering hierarchy you end up with. For that reason, the prevailing advice urges survey designers to narrow the item list down to just 4 or 5. Unfortunately, in doing so, the designer’s personal biases permeate the whole dataset. This problem is long-standing. Qualitative, in-person interviews can get past this limitation, but they impose severe constraints on scalability.
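To make the point concrete, here is a minimal sketch (Python standard library only, with entirely hypothetical respondent data) of the paired t-test you would need to run just to check whether the 4th- and 5th-ranked items are genuinely distinct:

```python
import math
import statistics

def paired_t(xs, ys):
    """Paired t-test statistic and degrees of freedom for two matched
    samples, e.g. the rank position each respondent gave item A vs.
    item B (smaller rank = higher priority)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample std dev of the differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical data: the ranks 10 respondents gave to the items that
# ended up 4th and 5th overall in a 10-item forced-ranking question.
item4 = [4, 5, 3, 4, 6, 4, 5, 4, 3, 5]
item5 = [5, 4, 6, 5, 4, 5, 4, 6, 5, 4]

t, df = paired_t(item4, item5)
print(round(t, 2), df)  # prints: -0.96 9
```

With |t| well below the usual critical values for 9 degrees of freedom, this (made-up) sample gives no evidence that the 4th and 5th slots are really different, which is exactly the problem described above.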
False sense of certainty. In the end, one respondent’s ordering is not what you need. You need the group ordering, which has to be calculated manually in a spreadsheet or some other package. The math is not complex, just a weighted average. While that is how things have always been done, the problem is just as far-reaching. In the age of big data, everyone is aware of the importance of understanding the uncertainty in the data. With a weighted-average formula, you are actually assuming that all ranking answers are equally certain. You are also assuming that people intentionally skipped no answers (because they did not want to skip, not because your question type forbade skipping). How true can that be?
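For illustration, the weighted-average calculation typically done in a spreadsheet can be sketched as follows. This is a minimal Python example with hypothetical items and responses; note how it bakes in exactly the equal-certainty assumption just described, since every response contributes the same weight:

```python
def group_order(rankings):
    """rankings: list of {item: rank} dicts, where rank 1 = top priority.
    Converts each rank to points (rank 1 of N earns N points, rank N
    earns 1), then sorts items by average points, highest first.
    Every respondent counts equally -- the hidden certainty assumption."""
    n_items = len(rankings[0])
    totals = {}
    for response in rankings:
        for item, rank in response.items():
            totals[item] = totals.get(item, 0) + (n_items - rank + 1)
    averages = {item: pts / len(rankings) for item, pts in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical responses: three people force-ranking three items.
responses = [
    {"price": 1, "quality": 2, "sustainability": 3},
    {"quality": 1, "price": 2, "sustainability": 3},
    {"price": 1, "sustainability": 2, "quality": 3},
]
print(group_order(responses))
```

The output is a single tidy ordering, with no hint of how uncertain each position actually is.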
Explicit vs. Implicit
In sum, requiring (and assuming you can get) the most honest, explicit, and complete answers from all respondents is the root of these problems. In fact, the field is realizing this, and there is a growing trend toward using the more dependable implicit responses. Admittedly, implicit responses are not as easily collected as explicit ones. Survature created AnswerCloud as that instrument. So far, AnswerCloud has been tested on 250,000 people with very reliable results. We have also written articles on this topic; here are a few for further reading.
- System 1 vs. System 2 - using data+psychology to unlock people’s true opinions
- Polling is getting bad information - the industry-shattering embarrassment of polling
- Ban on P value - people need good information in the first place
Since priority information is so crucial, AnswerCloud always captures that dimension of information for you, regardless of what question you ask with it, even when your question is a simple satisfaction question such as “How satisfied are you with your benefits program?” That question asks about “satisfaction” directly, and the resulting explicit answers are the satisfaction ratings. That is only the first dimension, however. The implicit data, i.e. the behavior, reveals the priority dimension. More information is available in the AnswerCloud help page.
Priority, Degree, or Importance Scales. If you are designing new products, services, etc., you may instead want to ask “How important are the following items?” in an AnswerCloud and leave the opinion boxes at the default 5-scale setting in the survey builder’s list (shown above). The analytics will tell you how the items stack up. In this case you are essentially asking the same question twice: the first time explicitly, and the second time implicitly (without the users having to think about it). The results will help you confirm whether “what people say” matches “how people say it”. You are not necessarily trying to catch people lying. :-) The exact interpretation and expectations vary case by case. If you are not sure, please feel free to consult us; all Survature users get data-science user support included in their subscription.
There are some rare, but totally valid, situations where you may ask “How do you rank the following items?” and have the scale boxes named “First”, “Second”, “Third”, “Fourth”, and “Fifth”. If you need fewer than 5 scales, we’ll have to set that up for you; just let us know which question. Configurations of 2, 3, and 4 boxes are all possible. The resulting data analytics have the same meaning as above. One key thing to note: we strongly recommend that you do not require people to put just one item into each scale box, because it is completely reasonable for someone to feel that “these two items are both the top priority”.