Exploring the concept of data quality and its implications for AI-augmented research

31 October 2023 | 3 min read | Written by Jennifer Reid

This is the first in a multi-part blog series where Jennifer explores how we define quality research, why it matters, and how it will impact the role of researchers on the event horizon of AI.

AI is here and everyone is all a-flutter 

AI is going to disrupt market research in ways nobody imagined. Overall, yes. I agree with that statement. AI is going to replace the researcher. No. I don’t think that is going to happen. Not for quite some time. Maybe not ever. And there are a couple of reasons for that.

For me the first, and perhaps most important, is that the concept of “quality” responses or insights is, for the time being, the sole domain of the survey participant and the researchers who collect that data and turn it into actionable findings. Yes, the prevailing notion is that, when it comes to AI, quantity has a quality all of its own, but that has never been true in market research, and I don’t believe the emergence of AI will suddenly make it so. Certainly not for first-party data collected from verified customers. Reams of junk is still junk.

The power of AI matched with high-quality insight is what will elevate our toolbox in new and powerful ways. The question I care about is how we define and own "quality insight." 

Moving beyond data hygiene  

Data quality is clear-cut in some contexts (e.g., fraud detection), and there are some incredible NLP tools that can be used to detect fraud, straight-liners, and speeders. However, the definition of quality becomes more ambiguous and discretionary when dealing with survey data and non-binary, open-ended responses.
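To make the mechanical end of that spectrum concrete, here is a minimal sketch of what straight-liner and speeder checks might look like on a small, made-up survey extract. The column names, thresholds, and data are illustrative assumptions, not any particular platform’s implementation:

```python
import pandas as pd

# Illustrative survey extract: answers to a rating grid plus completion time in seconds.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "q1": [4, 5, 3, 4],
    "q2": [4, 2, 3, 5],
    "q3": [4, 4, 3, 2],
    "duration_sec": [610, 540, 95, 480],
})

grid_cols = ["q1", "q2", "q3"]

# Straight-liners: identical answers across every item in the grid.
responses["straight_liner"] = responses[grid_cols].nunique(axis=1) == 1

# Speeders: completion time well below a plausible minimum (threshold is arbitrary here).
MIN_DURATION_SEC = 120
responses["speeder"] = responses["duration_sec"] < MIN_DURATION_SEC

print(responses[["respondent_id", "straight_liner", "speeder"]])
```

Rules like these are easy to automate precisely because they are binary; the harder questions start where this kind of hygiene ends.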

I’ve heard quality described as both a black-box term and a suitcase term. Black box in the sense that nobody has really taken the time to define quality data. Suitcase in the sense that there are many expressions that can be used to shape the definition: enough to fill a suitcase. Today our customers and clients place an enormous amount of trust and confidence in their research teams to identify quality research: to surface deep, rich, authentic insights that give them the confidence to make bold decisions.

The notion does provoke some intriguing questions. As a starting place, what are the parameters of good feedback? When working with survey data, it's not as simple as labeling data as "good" or "bad." Instead, it's about understanding the breadth and depth of feedback based on specific characteristics. What makes certain feedback superior to others? It's a question that often lacks a straightforward answer, but one worth exploring.

Quality vs. Thoughtfulness 

An interesting place to anchor this conversation is in an area that gives us a bit more freedom to experiment and free-associate. I like the notion of “thoughtfulness.” I like looking at feedback through the lens of engagement, interest, and consideration: something taken seriously.

"Thoughtfulness" introduces a deeper layer of consideration and care when dealing with data. It's not merely about having clean and accurate data; it's about starting to articulate the characteristics of good quality feedback. Or in the parlance of AI, explore the vectors of quality feedback. The features we can use to identify good feedback and deliver those inputs to the AI systems that we will soon be reliant on to do our jobs to the best of our ability.

Becoming the stewards of quality 

Why does this matter? Because to get the most out of AI, we must interrogate what it is that computers can do well and what it is that humans can do well. Humans can identify thoughtfulness instinctually. Researchers do it professionally.

"Thoughtfulness" introduces a deeper layer of consideration and care when dealing with data.

We already see this in action when we’re able to find the perfect selfie video that highlights the insight we are trying to elevate. The minute you see a real person giving a thoughtful response, you lean into that information. Researchers are good at highlighting thoughtful feedback. We’re good at articulating why that feedback is high quality, and we can rationalize why we have made the choices that we have. I'm interested in how AI can help us amplify this instinct.

So as an industry, I would like to put forward the notion that we have an obligation to become stewards of quality: that quantity will never be a proxy for quality, and that the definition of quality will remain the sole province of human researchers. We will define what is good, and the machines will go and find it in reams of unstructured feedback; that is, after all, what machines are good at.

Parting thoughts

If you are growing weary of everyone and their dog telling you what’s going to happen with AI and how you can “get the most out of AI,” I am here to tell you, you’re not alone. The hot air around AI in our industry at this moment could float a balloon. The irony is that, as researchers, we’ve been doing far too much talking and not nearly enough listening and asking thoughtful questions.

As I embark on this journey, I hope to ask more questions than I provide answers. I hope to start a dialogue that helps position researchers as irreplaceable in a world of AI, so that as an industry we remain curious, empathetic, and engaged. Put another way, uniquely human.

Written by Jennifer Reid

Co-CEO and Chief Methodologist at Rival Group
