DATA DEGRADATION AT VARIOUS LOIs – ROUND 2: A DEEPER DIVE!

After significant interest in our first stab at understanding data degradation, Quest set out to zero in on those initial results and presented a second round of findings at last month’s IIEX Behavior event with Greenbook.

Watch the following recording of our session and continue down for a closer look at the results!

To recap our primary mission: establish acceptable factoring for data degradation as it relates to respondent engagement at various lengths of online interviews (LOIs).

Without repeating all the details of our initial steps: Quest set out to accomplish this goal by asking the same set of measurement questions (with different text), separated by “filler” sections, and comparing engagement metrics at various survey lengths. The focus was on how engaged respondents stayed as they progressed through the survey – time spent on the overall measurement section and per question, plus the number of words and the content of open-end responses. A sketch of those metrics appears below.
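For those who want the mechanics, here is a minimal sketch of those two engagement measures in pandas. The file and column names (survey_export.csv, question_id, seconds_on_question, open_end_text) are hypothetical placeholders, not our actual data layout.

```python
import pandas as pd

# One row per respondent-question pair, from a hypothetical survey export.
responses = pd.read_csv("survey_export.csv")

# Time engagement: mean seconds spent on each question.
time_per_question = responses.groupby("question_id")["seconds_on_question"].mean()

# Open-end engagement: mean word count per open-ended question.
open_ends = responses.dropna(subset=["open_end_text"]).copy()
open_ends["word_count"] = open_ends["open_end_text"].str.split().str.len()
words_per_open_end = open_ends.groupby("question_id")["word_count"].mean()

print(time_per_question)
print(words_per_open_end)
```

Comparing those per-question averages across placements – early versus late in the survey – is what the rest of this piece means by “degradation.”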

The first round left us with some open questions about the results, along with what we believed were several clear answers. The bottom line: something was happening in the first 10 minutes of our survey, and not something good.

THE APPROACH TO ROUND 2:

We decided to do a second survey, focusing on measurement within that first 10 minutes, rather than further out, to see what more we could find.

What we did for the second experiment:

  • Stayed with the same format – shopping survey, national sample, balanced to PGS profile.
  • Corrected a few aspects, such as the length of answer-choice descriptions, to make them more consistent from measurement to measurement.
    • The first survey had some data anomalies – patterns of degradation we couldn’t explain. After adjusting question labels and randomization, those weren’t seen this time.
  • Moved the measurement questions, previously placed at 2, 10 and 15 minutes, to within the first 10 minutes – same screening questions, same first baseline set of 4 questions about 2 minutes in.
  • The second set of 4 questions came 2, 4 or 6 minutes later. The time from absolute start to the first measurement was about two minutes; separate surveys then captured the second measurement at roughly 4, 6 and 8 total minutes in, with wrap-up questions after, like any survey.
  • And of course we cleaned the data to remove speeders and other low-quality completes – a sketch of a typical speeder filter follows this list.
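For readers new to data cleaning, here is a minimal sketch of one common speeder filter – dropping completes faster than one-third of the median LOI. The exact cutoff, file name, and total_seconds column are illustrative assumptions, not necessarily the precise rule we applied.

```python
import pandas as pd

# One row per complete, with a hypothetical total interview time in seconds.
completes = pd.read_csv("completes.csv")

# A common industry heuristic: flag anyone finishing in under
# one-third of the median interview length.
median_loi = completes["total_seconds"].median()
cutoff = median_loi / 3

cleaned = completes[completes["total_seconds"] >= cutoff]
print(f"Removed {len(completes) - len(cleaned)} speeders out of {len(completes)}")
```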

THE RESULTS

We saw a definite degradation of attention and of time spent on the same question types as we moved further into the first 10 minutes of our survey.

Counterintuitive, given my assumptions from many years of doing surveys – I mean, don’t we all think shoppers will stay with our 10-minute survey, even 15, before they start getting tired or disinterested, right? Wrong – people were paying significantly less attention 4 or 5 minutes into our very standard, everyday shopper kind of survey. ~SW

  • Average time on the same questions – the 4 we used consistently for measurement – showed the pattern clearly:

  • The rating question showed the least effect on time and attention; the multi-punch and rank sort showed more, and were about the same as each other in degradation.
  • Open ends – that’s where the big hit came.

We came up with what we call an overall Data Degradation Factor, which was 73% at the seven-minute mark. Simply put, respondents were putting in 73% of the effort they did at the 1-2 minute mark.
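The factor itself is simple arithmetic: an effort metric (time, word count) at a later checkpoint, divided by the same metric at the 1-2 minute baseline. A minimal sketch, with illustrative numbers rather than our actual measurements:

```python
def degradation_factor(baseline_effort: float, later_effort: float) -> float:
    """Effort at a later checkpoint as a share of baseline effort (1.0 = nothing lost)."""
    return later_effort / baseline_effort

# Illustrative only: if the measurement block drew 40 seconds of attention at
# the 1-2 minute baseline but 29.2 seconds at the seven-minute mark, the
# factor is 0.73 - the 73% reported above.
print(f"{degradation_factor(40.0, 29.2):.0%}")  # 73%
```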

We went back to our first survey – the one from a few months back that got us started – to check this evaluation. There we see factors of 55% and 57% at 10 and 15 minutes respectively. We consider those numbers anecdotal because of the changes in wording and section descriptions made between the first and second surveys, but they are consistent with the second survey’s findings and its focus on the first 10 minutes.

What’s the One Big Thing we feel we found out?
Open ends continue to show the biggest impact from survey LOI. We believe we’re seeing a clear implication: asking open ends past a certain LOI may be pointless. That may be the biggest finding of all.

Where does this leave us?

  • On more solid ground, we believe.
    • We fixed or changed a few aspects of the survey we thought might have thrown off the results the first time – that worked.
    • We kept the survey almost identical otherwise, simply moving the measurements to within the first 10 minutes rather than at 10 and 15, based on what we saw in that first survey.
    • We saw clear degradation within 10 minutes – losing 25%+ of a person’s attention, interest, engagement, at 7 minutes into a shopping survey? That means something.
    • Some questions suffered more than others – the harder questions showed the greater effect.
    • Open ends really took a hit. Mind you, this is only a couple of open ends within a simple 10-minute shopping survey. Imagine how much more pronounced this could be with a longer, more complex survey and later placement of an open end. ~SW

NEXT STEPS:

Our next round will dig into how question type interacts with degradation. For example: Do media-rich questions mitigate data degradation? Are questions like Rank Sort, an engaging respondent exercise, immune from it? Are certain questions, like open ends, far more susceptible?
And of course, from there, we’ll start to build out our actual data degradation factoring. We look forward to sharing every step, fully transparently, as we continue down the road of understanding this highly charged and very complex topic.

Look for more from Quest as we dig deeper, gain better knowledge and insights, and share them with researchers everywhere.

~ ‘Data Degradation’ was first presented at the Quirk’s Virtual Conference on Feb 23rd, 2021. Round 2 was first presented at the IIEX Behavior Event by Greenbook. A downloadable copy of the presentation video (round 2) can be accessed by emailing Moneeza at mali@questmindshare.com. For further information, slides or any questions, please do contact any of the following:

Greg Matheson (Managing Partner) gmatheson@questmindshare.com
Scott Worthge (Senior Director, Research Solutions) sworthge@questmindshare.com
Moneeza Ali (Director, Marketing) mali@questmindshare.com

