Published: 10/17/2024
Tom Tindle
Vice President, Brand Research
Transitioning a tracking study from one sample provider to another is a common source of anxiety, but when properly managed, it can be done seamlessly and result in a better study.
The primary goal of a tracker transition is to keep legacy data trends stable. While this is a worthy aim, the tendency toward “tracker inertia” (replicating every aspect of the old tracker) passes up an ideal opportunity to critically evaluate the study and make it work harder for the brand.
Considerations for Evaluating a Tracker Prior to Transition
Stakeholder Buy-in: Getting key stakeholders on board is essential. When everyone understands the goals and reasons for the transition, the process flows much more smoothly.
Sample Composition: Consistency in sample composition is crucial to maintaining steady score levels and trends. However, it is vital not to preserve the old composition at the expense of better quality or a more representative sample. Ensure your new sample composition comes from reliable Dynata sources and represents your target audience.
Fielding Schedule and Cadence: Is your tracker operating on the right schedule? Consider whether continuous fielding, regular time-dips, or strategic timings around key events (such as advertising campaigns) are better suited to your needs.
Survey Appearance and Functionality: Long-standing surveys can become outdated, negatively affecting the respondent experience. Refresh your survey’s look and functionality, optimize it for mobile devices, and make it more accessible for diverse groups, such as younger mobile users or those with disabilities.
Questionnaire Design: This is a perfect time to revisit your questionnaire. Are the questions still relevant? Consider updating screener questions, brand lists, and answer sets to ensure they meet current and future research objectives. Check prior length of interview (LOI) and eliminate redundant, unused, and non-essential questions. Finally, align your questionnaire design with other research efforts to yield both broader and deeper insights.
Data Hygiene: Assess the need for enhanced security protocols and improved data-cleaning processes to ensure clean, reliable data.
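What counts as “clean” differs by study, but a minimal sketch of common hygiene checks, flagging speeders, straightliners, and duplicate IDs in a hypothetical respondent-level file, might look like this (file and column names are assumptions for illustration):

```python
import pandas as pd

# Hypothetical respondent-level export; columns assumed: resp_id, loi_seconds, grid_q1..grid_q10
df = pd.read_csv("wave_respondents.csv")

median_loi = df["loi_seconds"].median()

# Flag "speeders": completes finishing in under a third of the median interview length.
df["flag_speeder"] = df["loi_seconds"] < median_loi / 3

# Flag "straightliners": identical answers across every item of a key attribute grid.
grid_cols = [c for c in df.columns if c.startswith("grid_q")]
df["flag_straightliner"] = df[grid_cols].nunique(axis=1) == 1

# Flag duplicate respondent IDs (e.g., from panel overlap).
df["flag_duplicate"] = df.duplicated(subset="resp_id", keep="first")

clean = df[~(df["flag_speeder"] | df["flag_straightliner"] | df["flag_duplicate"])]
print(f"Removed {len(df) - len(clean)} of {len(df)} completes")
```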
Designing a Seamless Transition Program
A successful transition program identifies and addresses any differences between Dynata and your previous provider. Typically, this involves running the tracker in parallel across both providers to highlight differences in the two measurement approaches and calibrating past results as needed.
Numerous topics should be considered when designing a robust transition program:
Historical Data: If Dynata data already exists in your current tracker, it can reduce the need for parallel testing. However, running parallel tests is generally preferred when feasible.
Previous Design: Gather as much information as possible about your old tracker to avoid unforeseen differences caused by overlooked design variations.
Final Transfer Date: Determine the date when the old tracker will be turned off. This will help set the timeline for parallel testing.
Parallel Waves: The more parallel waves you run, the better the calibration between the two systems. The number of waves is often dictated by funding, complexity, seasonality, and the cooperation of former vendor(s), among other factors. Ideally, you should run at least two waves: one for calibration and one to confirm the results.
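One simple way to calibrate is to derive ratio adjustments for each key metric from the calibration wave and then verify them against the confirmation wave; other schemes (additive offsets, weighting) work similarly. A minimal sketch, with illustrative metric names and scores:

```python
# Minimal calibration sketch; metric names and scores are illustrative only.
old_provider = {"awareness": 0.62, "consideration": 0.41, "ad_recall": 0.27}
dynata       = {"awareness": 0.58, "consideration": 0.43, "ad_recall": 0.29}

# Ratio-style calibration factors derived from the calibration wave.
factors = {m: old_provider[m] / dynata[m] for m in old_provider}

def calibrate(new_wave_scores: dict, factors: dict) -> dict:
    """Rescale new-provider scores so they are comparable to the legacy trend line."""
    return {m: new_wave_scores[m] * factors[m] for m in new_wave_scores}

# Confirmation wave: calibrated Dynata scores should land close to the old provider's.
confirmation_dynata = {"awareness": 0.59, "consideration": 0.42, "ad_recall": 0.28}
print(calibrate(confirmation_dynata, factors))
```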
Maintaining Consistency During Parallel Testing
Sample Size: Ideally, collect the full sample for both the past and Dynata trackers throughout the parallel testing period. If sample reductions are necessary, apply them to the past tracker, preferably during the confirmation wave(s) rather than the calibration wave(s).
Questions to Align: While it’s ideal to parallel test the entire survey, testing a subset of key variables may be sufficient in some cases. If doing so, it’s best to truncate the survey rather than selectively deleting questions to maintain proper order, alignment, and timing, which helps minimize order bias and the effects of LOI.
Sample Groups to Analyze: Determine which target sample groups must be analyzed and aligned. Total sample is typically sufficient, but other groups may be critical for some brands.
Consistency in Parallel Tests: Maintaining as much consistency as possible between both providers’ surveys during parallel testing leads to better analysis and calibration. When changes are necessary, apply them to both systems. If that’s not feasible, still implement the changes with Dynata to account for their effects in the calibration process.
To the extent feasible, keep the following elements consistent between both surveys in the parallel test:
- Sample composition
- Pre-targeting of sample, interlocking of quotas, and sample weighting (see the weighting sketch after this list)
- Category, prior participation, or other exclusion criteria
- Fielding time period, cadence, and quota fill rates
- Fielding markets, ensuring all translations are consistent, where applicable
- Average length of interview
- Content of direct survey invitation
- General survey look, feel, and execution, including:
  - Scale answers, order, and direction
  - Ordering/randomization of answer lists
  - Presence and prominence of respondent instructions
  - Presence or absence of a back button or progress indicator
  - Graphics and background color, button and box design, and font enhancements (e.g., coloring, bolding, italicizing, and underlining)
  - Layout (e.g., questions per page, attributes per grid, amount of scrolling required)
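Sample weighting, noted in the checklist above, is one lever for holding composition steady across both providers. A minimal raking (iterative proportional fitting) sketch with hypothetical demographic targets follows; production weighting would normally rely on established tooling:

```python
import pandas as pd

def rake(df: pd.DataFrame, targets: dict, iterations: int = 20) -> pd.Series:
    """Iteratively adjust weights so weighted margins match target proportions.

    `targets` maps a column name to {category: target_share}; shares are assumed
    to sum to 1 within each column. Purely illustrative, not production code.
    """
    w = pd.Series(1.0, index=df.index)
    for _ in range(iterations):
        for col, shares in targets.items():
            current = w.groupby(df[col]).sum() / w.sum()      # weighted share per category
            adjust = {cat: shares[cat] / current[cat] for cat in shares}
            w = w * df[col].map(adjust)                        # push margins toward targets
    return w * len(df) / w.sum()                               # normalize to an average weight of 1

# Hypothetical demographic targets shared by both providers' parallel samples.
targets = {
    "age_band": {"18-34": 0.30, "35-54": 0.40, "55+": 0.30},
    "gender":   {"F": 0.51, "M": 0.49},
}
# df = pd.read_csv("parallel_wave.csv"); df["weight"] = rake(df, targets)
```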
Defining Success Criteria
Success will look different for each client and project, depending on the specific goals and risk tolerance involved. Define success criteria early, focusing on factors such as:
- Number of parallel waves
- Sample size for each supplier
- Questions to align (primary and secondary)
- Sample groups to analyze
- Mechanisms available for alignment (e.g., weighting, panel mix, calibration factors)
- Significance level for testing alignment (see the testing sketch below)
- Any other criteria identified as necessary for transitioning
By establishing these benchmarks, you’ll have a clear path to evaluate the success of your transition.
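The agreed significance level can then be applied metric by metric across the parallel waves. A minimal sketch of a pooled two-proportion z-test, using illustrative counts and an assumed alpha of 0.05:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p(hits1: int, n1: int, hits2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions (pooled z-test)."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative counts: 310/500 aware in the old-provider wave vs. 290/500 with Dynata.
ALPHA = 0.05  # significance level agreed in the success criteria
p_value = two_proportion_p(310, 500, 290, 500)
print("aligned" if p_value >= ALPHA else "misaligned", round(p_value, 3))
```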
Transitioning is a Marathon, Not a Sprint
Remember, transitioning a tracker isn’t a quick process. It involves collecting parallel waves, analyzing the data in multiple iterations, and making necessary adjustments. The process can take several weeks depending on the data outcomes, so patience and thoroughness are key.
By following these steps and leveraging Dynata’s extensive resources and expertise, you can seamlessly transition your tracker while opening the door to improving your overall study design. Rather than clinging to outdated practices, use this opportunity to strengthen your research, refine your processes, and ensure your tracker is future-ready.