After significant interest in our first stab at understanding data degradation, Quest zeroed in on those initial results and presented a second round of investigation details at last month’s IIEX Behavior event with Greenbook.
Watch the following recording of our session and continue down for a closer look at the results!
To recap our primary mission: establish an acceptable degradation factor that relates respondent engagement to the length of online interviews.
Without repeating the details of our initial steps, Quest began working toward that goal by asking the same set of measured questions (with different text), separated by “filler” sections, so we could compare engagement metrics at various survey lengths. The primary focus was how engaged respondents remained as they progressed through the survey – the time spent on the overall section and per question, as well as the number of words and the content of open ends.
We had some questions about the results – and what we believed were several clear answers. The bottom line: something was happening in the first 10 minutes of our survey, and not something good.
THE APPROACH TO ROUND 2:
We decided to do a second survey, focusing on measurement within that first 10 minutes, rather than further out, to see what more we could find.
What we did for the second experiment:
THE RESULTS
We saw a definite degradation of attention – less time spent on the same question type – as we moved further into the first 10 minutes of our survey.
Counterintuitive, given my assumptions about surveys over many years – I mean, don’t we all think shoppers will stay with our 10-minute survey, even 15, before they start getting tired or disinterested or whatever, right? Wrong – people were paying significantly less attention after 4 or 5 minutes into our very standard, everyday shopper kind of survey. ~SW
We came up with what we call an overall Data Degradation Factor, which was 73% at the seven-minute mark. Simply put, at that point the respondent is putting 73% of the effort into their responses that they did at the 1-2 minute mark.
We went back to our first survey – the one a few months back that got us started – to check this evaluation. There we see factors of 55% and 57% at 10 and 15 minutes respectively. We consider those numbers anecdotal because of changes made between the first and second surveys’ wording and section descriptions, but they are consistent with the second survey’s findings and its focus on the first 10 minutes.
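As a rough illustration (not Quest’s published methodology), a factor like this can be computed as the ratio of a respondent’s effort at a later point in the survey to their effort at the 1-2 minute baseline. The metric names and the equal weighting in the sketch below are assumptions for illustration only:

```python
# Hypothetical sketch (not Quest's actual method): compute a
# "degradation factor" as the average ratio of later-survey effort
# to baseline (minute 1-2) effort across a few engagement metrics.
# Metric names and equal weighting are illustrative assumptions.

def degradation_factor(baseline: dict, later: dict) -> float:
    """Average of later/baseline ratios over the shared effort metrics."""
    ratios = [later[metric] / baseline[metric] for metric in baseline]
    return sum(ratios) / len(ratios)

# Illustrative numbers: at the 7-minute mark, respondents answer
# faster and write shorter open ends than at the 1-2 minute baseline.
baseline = {"seconds_per_question": 12.0, "open_end_words": 15.0}
at_seven_minutes = {"seconds_per_question": 9.0, "open_end_words": 10.5}

factor = degradation_factor(baseline, at_seven_minutes)
print(f"Degradation factor: {factor:.1%}")
```

With these made-up inputs the respondent is exerting roughly three-quarters of their baseline effort; a real factor would of course be estimated across many respondents and calibrated metrics.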
What’s the One Big Thing we feel we found out?
Open ends continue to show the biggest impact of survey LOI (length of interview). We believe we’re seeing a clear implication that asking open ends past a certain LOI may be pointless. That may be the biggest finding of all.
Where does this leave us?
NEXT STEPS:
For example: Do media-rich questions mitigate data degradation? Are questions like Rank Sort, an engaging respondent exercise, immune from data degradation? Are certain questions, like open ends, far more susceptible to it?
And of course, from there we will start to build out our actual data degradation factoring. We look forward to sharing every step, fully transparently, as we continue down the road of understanding this highly charged and very complex mission.
Look for more from Quest as we dig deeper, gain better knowledge and insights, and share them with researchers everywhere.
~ ‘Data Degradation’ was first presented at the Quirk’s Virtual Conference on Feb 23rd, 2021. Round 2 was first presented at the IIEX Behavior Event by Greenbook. A downloadable copy of the presentation video (round 2) can be accessed by emailing Moneeza at mali@questmindshare.com. For further information, slides or any questions, please do contact any of the following:
Greg Matheson (Managing Partner) gmatheson@questmindshare.com
Scott Worthge (Senior Director, Research Solutions) sworthge@questmindshare.com
Moneeza Ali (Director, Marketing) mali@questmindshare.com