Keeping Tracksuit's survey data clean

We take data quality seriously here at Tracksuit. We have three layers of process to ensure our data is credible:

(1) Panel provider processes

Our panel provider, Dynata, has its own quality assurance standards. It is a world-leading provider that invests heavily in data quality.

  • Dynata gathers 100+ data points at every touchpoint with a respondent and uses that data to manage participant reputation within the survey.

  • Before the survey, they use device/IP anomaly and reputation checks, plus open-end engagement tests, to confirm identity and look for unlikely patterns.

  • Within the survey, they use digital fingerprinting, geolocation clues and a second round of checks to confirm identity and identify suspicious behaviour. They also include encrypted end links and a new quality management platform that evaluates performance and behaviour inside the survey (the sketch after this list illustrates, in simplified form, the kind of duplicate detection such fingerprinting enables).
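
The checks above are Dynata's own, so their internals aren't ours to document. As a rough illustration of the kind of duplicate-entry detection that device fingerprinting and IP checks make possible, here is a minimal sketch; the `Respondent` fields and the dedup logic are hypothetical and purely for illustration.

```python
# Illustrative sketch only (not Dynata's actual system): flag respondents whose
# device fingerprint or IP address has already been seen in the same survey.
from dataclasses import dataclass


@dataclass
class Respondent:
    respondent_id: str
    ip_address: str
    device_fingerprint: str  # e.g. a hash of browser/device attributes


def flag_duplicates(respondents: list[Respondent]) -> set[str]:
    """Return IDs of respondents who share a device fingerprint or IP address
    with an earlier respondent, a simple proxy for duplicate entries."""
    seen_ips: set[str] = set()
    seen_devices: set[str] = set()
    flagged: set[str] = set()
    for r in respondents:
        if r.ip_address in seen_ips or r.device_fingerprint in seen_devices:
            flagged.add(r.respondent_id)
        seen_ips.add(r.ip_address)
        seen_devices.add(r.device_fingerprint)
    return flagged
```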

(2) Additional third-party cleaning via Imperium

We use Imperium's fraud control technology to ensure the authenticity of respondents. This includes:

  • RelevantID, Imperium's ID validation product, which we use to verify that respondents are who they say they are.

    • RelevantID maps a survey respondent ID against dozens of data points including geo-location, time, language, and IP address, returning a fraud profile score that is then used to flag suspicious respondents.

    • RelevantID also identifies the device used each time a respondent completes a survey, and detects when multiple email accounts are being used from a single computer.

    • Finally, it evaluates suspicious respondents against pre-set criteria for acceptance, redirection or elimination.

  • RealAnswer, Imperium's response analysis product, which we use to analyse the responses we receive and confirm they are genuine.

    • RealAnswer is a fully automated process that evaluates the quality of open-ended responses against multiple factors.

    • It recognises nonsense words, profanity, cut-and-paste responses and offensive terms (the sketch after this list illustrates this style of check in simplified form).

    • It uses text classification for more precise results and deeper quality controls.
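
RealAnswer's internals belong to Imperium, so the sketch below is not its implementation; it is only a minimal illustration of the same category of open-end checks (gibberish, profanity and cut-and-paste duplicates), with hypothetical word lists and thresholds.

```python
# Illustrative sketch only (not Imperium's RealAnswer): simple heuristics for
# low-quality open-ended answers: gibberish, profanity and copy-paste responses.
import re
from collections import Counter

PROFANITY = {"damn", "crap"}  # placeholder word list


def looks_like_gibberish(text: str) -> bool:
    """Very rough check: a high share of longer words with no vowels."""
    words = re.findall(r"[a-zA-Z]+", text)
    if not words:
        return True
    vowelless = sum(
        1 for w in words if len(w) > 3 and not re.search(r"[aeiou]", w.lower())
    )
    return vowelless / len(words) > 0.5


def flag_open_ends(responses: dict[str, str]) -> set[str]:
    """Return respondent IDs whose open-end answers look like gibberish, contain
    profanity, or are verbatim duplicates of another answer (cut-and-paste)."""
    flagged: set[str] = set()
    counts = Counter(text.strip().lower() for text in responses.values())
    for rid, text in responses.items():
        normalised = text.strip().lower()
        if looks_like_gibberish(text):
            flagged.add(rid)
        elif any(word in PROFANITY for word in re.findall(r"[a-z]+", normalised)):
            flagged.add(rid)
        elif len(normalised) > 20 and counts[normalised] > 1:
            flagged.add(rid)
    return flagged
```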

(3) Tracksuit's own internal cleaning processes

We also employ our own methods to ensure the quality of our data. This includes:

  • Trap survey questions: We place a number of trap questions within our surveys, designed to catch poor-quality respondents; responses that fail them are automatically filtered out (the sketch at the end of this list shows the general idea).

  • Analysis of verbatims: We review verbatim responses to identify suspicious behaviour and carry out additional cleaning.

  • Rejection of responses: We typically reject 9–20% of our sample responses to achieve the highest-quality data for our brands.

  • Periodic checks: We run periodic spot-checks on our data for additional scrutiny.
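
As a rough sketch of how trap-question filtering and the rejection-rate figure above fit together (not Tracksuit's actual pipeline; the question ID, expected answer and data layout are hypothetical):

```python
# Illustrative sketch only: drop respondents who fail a trap question and
# report the resulting rejection rate.
TRAP_QUESTION_ID = "q_trap_1"            # hypothetical question ID
TRAP_EXPECTED_ANSWER = "strongly agree"  # hypothetical instructed answer


def clean_sample(responses: list[dict]) -> tuple[list[dict], float]:
    """Keep respondents who answered the trap question as instructed;
    return the cleaned sample and the share of responses rejected."""
    kept = [
        r for r in responses
        if r.get(TRAP_QUESTION_ID, "").strip().lower() == TRAP_EXPECTED_ANSWER
    ]
    rejection_rate = 1 - len(kept) / len(responses) if responses else 0.0
    return kept, rejection_rate


# Example: a cleaned sample whose rejection rate falls in the typical
# 9-20% band would come back as, say, (kept_responses, 0.13).
```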