Cheaters, Speeders, and Repeaters


When I launched Rally, over 100 people booked a call with me to find out more about synthetic market research. They had heard claims from vendors and scientific papers of 85% accuracy in using AI personas to replicate how real humans would respond. They told me that to adopt AI research tools, they needed to know how accurate they are.

RIP my calendar

The question is: accurate compared to what? We’ve seen 85% accuracy in replicating real-world studies in our testing, but that raises the question of whether the study we’re comparing against is itself accurate. Nick Boyce recently discovered that fewer than 10% of the respondents in a survey he ran actually watched the Facebook ad video creatives he had paid SurveyMonkey $1,500 to get feedback on.

Fraud and Failure in Market Research

The fundamental problem is that, short of complaining on social media, most people don’t care enough about brands to give them detailed feedback on their products. That’s why brands have to offer incentives: a $10 Amazon gift card for filling in a survey, $100 cash for sitting through an interview, or free samples in return for attending a focus group.

Despite the best efforts of researchers, whenever there is money on the line, people will game the system. Research vendors Op4G and SliceMR were recently caught by the FBI defrauding brands of millions of dollars: recruiting fake respondents, getting family members to complete surveys, and coaching participants on how to cheat. Brands like Google, Northwestern University, and Seattle Children’s Hospital were affected.

Even if you do pick a legitimate vendor that isn’t trying to scam you, it’s a game of cat and mouse between the professional survey takers who earn a living from cheating market researchers, and the vendors who try to minimize fraud. There are three major types of fraudulent research participants you have to look out for, known by veterans of the industry as Cheaters, Speeders, and Repeaters.

Cheaters

Participants who deliberately provide false information to qualify for surveys they wouldn't otherwise be eligible for (e.g., claiming to own products they don't, fabricating medical conditions, or inventing professional backgrounds).

Speeders

Participants who race through surveys at implausible speeds, clicking random answers without reading questions just to collect incentives quickly (e.g., completing a 20-minute survey in 3 minutes).

Repeaters

Participants who take the same survey multiple times, either by using different identities on the same platform or by participating across multiple research panels to collect multiple incentives for the same work.

Catching the Cheaters

Cheaters are participants who deliberately provide false information to qualify for surveys they wouldn't otherwise be eligible for. They'll claim to own products they've never purchased, fabricate medical conditions they don't have, or invent professional backgrounds to match screening criteria. Their motivation is simple: access to more paid research opportunities.

The sophistication of cheating has evolved dramatically. Online forums and Discord channels now share tips on how to bypass screening questions, with members alerting each other to lucrative studies and the exact answers needed to qualify. Some professional cheaters maintain detailed spreadsheets tracking their fictional personas across different research panels.

Market researchers combat this through various validation techniques. Red herring questions that ask about non-existent products can instantly flag dishonest respondents. Progressive profiling, where participant information is collected over time rather than all at once, helps identify inconsistencies in reported demographics. More advanced techniques include digital fingerprinting to track suspicious patterns across multiple surveys and cross-referencing responses with verified third-party data.
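The red-herring technique above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual screener: the product names, respondent IDs, and data shape are all invented for the example.

```python
# Red-herring screening sketch: a fictional product is planted in the
# screener's product list; anyone who claims to use it is flagged.
# All names and data below are hypothetical.

REAL_PRODUCTS = {"Acme Cola", "Acme Zero"}
RED_HERRINGS = {"Acme Fusion Max"}  # does not exist; planted deliberately

def flag_cheaters(responses):
    """Return IDs of respondents who claim to use a planted fake product."""
    flagged = []
    for respondent_id, claimed_products in responses.items():
        if RED_HERRINGS & set(claimed_products):
            flagged.append(respondent_id)
    return flagged

responses = {
    "r1": ["Acme Cola"],
    "r2": ["Acme Cola", "Acme Fusion Max"],  # claims the fake product
}
print(flag_cheaters(responses))  # ["r2"]
```

In practice this is one signal among many; a production screener would combine it with progressive-profiling inconsistency checks and fingerprinting rather than rejecting on a single flag.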

Slowing down the Speeders

Speeders race through surveys at implausible speeds, clicking random answers without reading questions, or using AI to automatically fill in their responses. These participants might complete a 20-minute survey in three minutes or answer 30 questions in under 60 seconds. The problem is particularly prevalent in mobile surveys, where participants might be multitasking or simply tapping through screens carelessly.

The damage speeders cause is often subtler than outright cheating but equally harmful. Their random or patterned responses (like selecting the same answer for every question) introduce statistical noise that can obscure genuine insights. And if they use AI to fill in responses, the insights from a thousand-dollar survey will converge on what you could have gotten from ChatGPT for $20/month.

Researchers have developed multiple approaches to combat speeding. The most straightforward is setting minimum time thresholds based on reading speed – if a survey section physically cannot be read in under two minutes, anyone completing it faster is automatically flagged. More sophisticated methods include attention check questions ("Please select 'somewhat agree' for this question") and trap questions that require specific answers to verify comprehension.
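Both checks described above can be combined in a simple flagging pass. This is a minimal sketch under assumed thresholds (the two-minute floor and the attention-check item are placeholders, not real survey parameters):

```python
# Speeder detection sketch: flag sessions that finish faster than a
# plausible reading time, or that fail an attention-check item.
# Threshold and question IDs are assumptions for illustration.

MIN_SECONDS = 120          # assumed minimum plausible reading time
ATTENTION_ITEM = "q7"      # "Please select 'somewhat agree' for this question"
REQUIRED_ANSWER = "somewhat agree"

def flag_speeders(sessions):
    """Return IDs of sessions that sped through or failed the attention check."""
    flagged = []
    for session_id, session in sessions.items():
        too_fast = session["seconds"] < MIN_SECONDS
        failed_check = session["answers"].get(ATTENTION_ITEM) != REQUIRED_ANSWER
        if too_fast or failed_check:
            flagged.append(session_id)
    return flagged

sessions = {
    "s1": {"seconds": 600, "answers": {"q7": "somewhat agree"}},
    "s2": {"seconds": 90,  "answers": {"q7": "somewhat agree"}},  # too fast
    "s3": {"seconds": 400, "answers": {"q7": "strongly agree"}},  # failed check
}
print(flag_speeders(sessions))  # ["s2", "s3"]
```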

Rejecting the Repeaters

Repeaters take the same survey multiple times, either by using different identities on the same platform or by participating across multiple research panels. The financial incentive is clear – why complete one survey for $5 when you can complete it five times for $25? This behavior is particularly common in high-paying studies with desirable incentives.

The technical sophistication of repeaters continues to evolve. Some use VPNs to mask their IP addresses, create multiple email accounts, or even employ different devices to avoid detection. Professional repeaters often work in groups, sharing survey opportunities and coordinating their responses to avoid detection patterns. This duplication severely compromises data quality, creating artificial consensus and overrepresenting certain viewpoints. 

Experienced researchers can spot patterns in open-ended responses that suggest the same person is writing under different identities, while automated systems flag statistical anomalies that might indicate clustered duplicate responses. Device fingerprinting technology can identify when the same device is used across multiple survey attempts, even with different accounts. Biometric verification, including voice recognition for phone interviews or webcam verification for online panels, is becoming more common for high-value studies.
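A toy version of the device-fingerprinting idea: hash a few device attributes together and group accounts that share a hash. Real fingerprinting uses far more signals (fonts, canvas rendering, hardware quirks); the three fields here are purely illustrative.

```python
# Device-fingerprint sketch: accounts whose devices hash to the same
# fingerprint are grouped as likely repeaters. Fields are illustrative;
# production systems combine dozens of signals.

import hashlib
from collections import defaultdict

def fingerprint(device):
    """Hash a simplified device profile into a stable identifier."""
    raw = "|".join([device["user_agent"], device["screen"], device["tz"]])
    return hashlib.sha256(raw.encode()).hexdigest()

def flag_repeaters(attempts):
    """Return groups of account IDs that share the same device fingerprint."""
    seen = defaultdict(list)
    for account_id, device in attempts.items():
        seen[fingerprint(device)].append(account_id)
    return [ids for ids in seen.values() if len(ids) > 1]

attempts = {
    "alice1": {"user_agent": "Mozilla/5.0", "screen": "1920x1080", "tz": "UTC-5"},
    "alice2": {"user_agent": "Mozilla/5.0", "screen": "1920x1080", "tz": "UTC-5"},
    "bob":    {"user_agent": "Safari/17.0", "screen": "1170x2532", "tz": "UTC+1"},
}
print(flag_repeaters(attempts))  # [["alice1", "alice2"]]
```

Note the trade-off: coarse fingerprints produce false positives (two genuine respondents on identical laptops), which is why vendors pair this with the statistical and open-ended-response checks mentioned above.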

Fighting Fire with Fire

The rise of AI makes all three categories of fraud easier and more tempting, even for formerly legitimate market research participants. It's just so easy to answer questions with ChatGPT, and many people are accustomed to using AI to complete their work all day anyway. Ironically, the solution to AI damaging market research insights might be more AI. The key is conducting smaller but higher-quality traditional research studies, with stringent fraud prevention, to build accurate simulations of consumer preferences using AI personas.

These validated AI personas can then be deployed at scale across a much broader set of questions, studies, and A/B tests without the limitations of traditional research. AI never gets tired, bored, or distracted; it doesn't try to cheat; and you don't need to offer it an incentive. This approach lets researchers explore hundreds of potential concepts, messaging variations, or product features in virtual simulations without the expense, time constraints, or fraud concerns of recruiting new human participants for each iteration.

The cycle completes when promising virtual findings are validated through targeted real-world studies, creating a feedback loop that continuously refines the AI simulation. Each validation study improves the model's accuracy, making future simulations increasingly reliable. This hybrid methodology offers the best of both worlds: the authenticity and grounding of human feedback combined with the scale, speed, and consistency of AI-powered research, effectively rendering the problems of cheaters, speeders, and repeaters increasingly irrelevant to the insights process.
