The priority detected by Survature comes from capturing respondents’ top-down attention. In some contexts, top-down attention is also called goal-directed attention or selective attention, and there is a large body of scientific literature on the topic. Stated simply, the theory of top-down attention says that the first things we attend to are those that are most relevant or important to us at the moment of attention. The AnswerCloud interface looks very simple and straightforward. However, even though we were guided by that mature science and combined expertise from psychology, design, and computer science, the design process was not straightforward at all. The following are things we recommend you keep in mind when using the AnswerCloud.
Group, Not Individual
The priority detected by AnswerCloud is on a group basis (i.e. “group saliency”), not on a per-individual basis. Especially in a scalable online setting, all of the known tools for getting at an individual’s psychology tend to have very low reliability. For example, the well-known Myers-Briggs personality type classification, based on an online questionnaire, is not reliable. As noted on its Wikipedia page, between 39% and 76% of respondents get different personality classifications if they retake the questionnaire just five weeks later.
In that respect, AnswerCloud’s promise is much more modest and more similar to most data science approaches, where the goal is to understand a group of people at a deeper level. Specifically, Survature needs a group size of 25-30 people to be statistically meaningful. Of course, using Survature’s fully automated analytics tools, you can slice and dice the data very flexibly and, in essence, (re)define the group any way you want. Nonetheless, this kind of flexibility and power does not lift the limitation set by minimum group size.
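As a rough sketch of what the minimum group-size rule means in practice when you slice your data, here is a hypothetical check (the data, field names, and helper function are all made up for illustration; the 25-person threshold is the lower bound mentioned above):

```python
# Minimal sketch (hypothetical data and field names): verify that a
# segment is large enough before interpreting its group saliency.
# The 25-person threshold is the lower bound mentioned above.
MIN_GROUP_SIZE = 25

# Hypothetical response records, each tagged with a segmenting field.
responses = [{"region": "East"} for _ in range(40)] + \
            [{"region": "West"} for _ in range(12)]

def segment_is_meaningful(responses, key, value, minimum=MIN_GROUP_SIZE):
    """True if the segment defined by key == value has enough respondents."""
    size = sum(1 for r in responses if r.get(key) == value)
    return size >= minimum

print(segment_is_meaningful(responses, "region", "East"))  # 40 respondents -> True
print(segment_is_meaningful(responses, "region", "West"))  # 12 respondents -> False
```

However you (re)define a segment, the same check applies: a slice with too few respondents should not be read as a reliable group saliency.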
Sometimes we get user-support questions like the following: “Can you tell us about this crucial customer’s priorities based on the data?” The answer is: no, we can’t tell much about any one person. If that person is so important, a one-on-one conversation is your best option.
“A Priority for Me?”
The word saliency in “group saliency” can be interpreted in many ways. Examples include confidence, certainty, familiarity, affinity (or the lack of it), emotional attachment, etc. In other words, the most relevant thing can be relevant to us for different reasons. The most common assumption a business makes is: if this is important to my customer, it must be a priority for me. That is usually true, but note that there are other interpretations.
The exact interpretation of “priority” requires knowing (1) whether respondents are assessing the past or anticipating the future, and (2) whom the respondents are assessing: someone else, or themselves and their own organization. Much of the following has to do with the two-dimensional priority space, where the vertical axis is the priority and the horizontal axis is the explicit rating. Note: there are 20+ different kinds of rating scales you can use in an AnswerCloud; for more detail, please see this help page.
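To make the two-dimensional priority space concrete, here is a hypothetical sketch that places items into quadrants by priority and explicit rating. All item names, numbers, and thresholds below are illustrative assumptions, not part of the product:

```python
# Hypothetical sketch of the two-dimensional priority space:
# vertical axis = priority (here a 0-1 percentile), horizontal
# axis = explicit rating (here a 5-point scale). All names,
# numbers, and thresholds are illustrative assumptions.
items = {
    # item: (priority_percentile, mean_rating)
    "newsletter": (0.95, 1.8),  # chosen early, rated low
    "front desk": (0.90, 4.5),  # chosen early, rated high
    "parking":    (0.20, 3.1),  # mostly ignored ("below the fold")
}

def quadrant(priority, rating, rating_midpoint=3.0):
    """Label an item's quadrant in the priority space."""
    half = "high-priority" if priority >= 0.5 else "low-priority"
    side = "high-rating" if rating >= rating_midpoint else "low-rating"
    return f"{half} / {side}"

for name, (p, r) in items.items():
    print(f"{name}: {quadrant(p, r)}")
```

What each quadrant *means* depends on the case distinctions that follow.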
Case 1. One of the most common examples is customer satisfaction. For example, you just conducted an internal audit of your risk management division. You want to ask about the operational aspect of your IA process, how the auditors did as a business partner and advisor, … Or, you manage a long-term supply chain partnership, many changes were implemented in Q4, and now you need to know what’s working and what’s not, … All of these settings are people assessing someone else about something that has happened. In these cases, the priorities discovered by AnswerCloud mean “this matters most to the customer.” Assuming that you really want to serve them, what matters to them matters to you. Their priority dimension is therefore also yours. This is what we call “A Priority for Me,” with “me” being the service provider.
Case 2. Say you run a geographically distributed B2B sales team, and you want the sales team to evaluate the ongoing marketing efforts, … In this setting, it’s actually a self-evaluation. The priorities revealed by the AnswerCloud are still the “group saliency” of the sales team. However, the priority dimension is unlikely to be about what the sales team wants. Usually, it’s certainty instead, i.e. the sales team has a very “sure” opinion about each item in your marketing efforts. If they rate your “newsletter” at a “Not Effective” score but as a number-one priority, it doesn’t mean that they want more of the “newsletter.” It means they are really sure that your “newsletter” is “not effective.” For things that rank below the fold (in the lower half) of the two-dimensional priority space, people are unsure whether those things have worked or not. By the same token, people probably didn’t have much of an impression of those things, which is also not good for a marketing effort. Regardless, in this case, the priority dimension does not mean “I want this.” It means “We are sure that this thing is [Not Effective / Effective],” etc.
Case 3. Now let’s look into the future, in the sense of designing something new, be it a new product, a new service offering, or a new building, … In these anticipatory cases, “human experience” is more useful information than “human satisfaction,” because dissatisfaction with the current offering doesn’t directly mean that people want a better version of it. In this case, also because it’s more about the future, it doesn’t really matter whether it’s a self-assessment or not. It’s more about “what matters most to the people,” i.e. the priority dimension. However, in this case, “priority” still does not exactly mean “a priority for me,” with “me” being the strategist or the designer. Consider the “priority” dimension of your target audience as showing you what buttons you can push. Whether you push those buttons is more of a strategy question.
In every case, the priority dimension is a deeper dimension of your respondents. How your respondents’ priority dimension translates into your priorities (i.e. action plans) depends on the scenario and the strategic questions at hand.
The Most Honest Answers vs. Skipping
People are likely to skip an item if they don’t care about it or don’t have a confident opinion about it. Exhibiting these kinds of behavior in an AnswerCloud is expected. In fact, detecting what’s being ignored, and the order in which things are chosen, is how Survature captures people’s top-down attention.
In this methodology, if the question is also “controversial,” then yes, people may skip more often. To be fair, getting the most honest answers to controversial questions is an age-old problem; we believe Survature has the best tool to address that challenge. Case 1 as listed above is the least likely to be controversial: your customers want what they want. Case 3 is similar, because when the questions are asked in the future tense, they are often hypothetical. Case 2 is the most likely to run into controversy. In that case, however, since the interpretation is “I am sure,” the controversy is easy to discover. For example, take a supervisor evaluation survey, one of the most controversial kinds, where the priority dimension means “I am sure.” If a manager prides himself on “communication,” and that item gets a rating of 4.7 on a 5-point scale yet records the bottom priority, what does that mean? It means the employees are telling their boss, “we are really not sure about your communication.” The information is clear, and it is AN HONEST ANSWER.
The best solution using Survature, and the best solution in general, is to ask the question in ways that are less controversial, yet design the survey so that the behavior exhibited while taking it gives us the true answers we need. Our existing inventory is equipped to do so. If you anticipate your survey might run into obstacles related to controversy, please talk to our user support during the survey design stage.
In traditional surveys, the variance of the recorded answers is sometimes used as a metric of data quality. For example, quite often, as people go through a survey, the variance of their answers diminishes later into the survey. In other words, data quality drops as your survey gets longer. Unfortunately, even though everyone knows this effect, we still see horrendously long surveys, such as those 30 to 40 minutes long.
On the Survature platform, we don’t normally see that pattern. We see a different effect of diminishing variance. Specifically, among AnswerCloud responses, high-priority items can have wider variance (a larger dynamic range) than low-priority items. It’s a result of people’s subjective impressions. For example, if we are already very happy with a service provider, we may not have a strong opinion about many of the small things they offer. But when pressed for an answer about those smaller things, we are likely just to give an answer that’s in line with our overall impression. Similar examples abound in many other scenarios.
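The pattern above can be sketched with a small, entirely hypothetical example: on a 5-point scale, a high-priority item gathers a wide spread of opinions, while a low-priority item’s answers cluster around the overall impression.

```python
# Minimal sketch (hypothetical ratings): compare the variance of a
# high-priority item against a low-priority item whose answers
# cluster around the respondents' overall impression.
from statistics import mean, pvariance

ratings = {
    "support":    [1, 5, 2, 5, 4, 1, 3],  # high priority: opinions differ widely
    "newsletter": [4, 4, 4, 5, 4, 4, 4],  # low priority: answers cluster
}

for item, scores in ratings.items():
    print(f"{item}: mean={mean(scores):.2f}, variance={pvariance(scores):.2f}")
```

In this toy data, the high-priority item’s variance is an order of magnitude larger, which is the opposite of the “variance narrows as quality drops” pattern in traditional surveys.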
Compelling a person to answer about things that are less important or unfamiliar to them won’t lead to good data. If someone doesn’t have a confident opinion about something, we’d rather they just not answer that item. Please note that items unimportant to one group of respondents may be the most important to other groups. So depending on which segments you are looking at, the averages and variance statistics may vary, sometimes widely.
If we have to find something in traditional surveys that is similar to AnswerCloud’s not requiring an answer, it’s the “N/A” option, which is known to narrow variance in a similar way.
Lastly, concerns regarding data variance and controversy are related. Interestingly, the use cases that raise those concerns are commonly of “Case 2.” In that regard, knowing that your employees or stakeholders “aren’t sure about your communication” is already useful. You don’t really need to hear that they are “sure that your communication is not good,” right?