There's a lot of talk about getting data "in order" before doing AI, usually via traditional Data Quality (DQ) initiatives. But from our experience delivering AI-driven Data Products, we believe a fundamental shift is needed in how we think about Data Quality in the AI world.
TL;DR: You don't need perfect data to build highly accurate AI models. In fact, we've built an effective AI model for a real client on 75K rows of which 30K were garbage.
AI and Data Quality: Flipping the Script
In traditional data management, DQ is about cleaning, structuring, and validating data before it’s used. This makes sense for BI, reporting, and systems of record.
But AI works differently. It doesn't need "perfect" data—it just needs enough data for the underlying patterns to be statistically detectable.
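To make that concrete, here is a minimal, purely illustrative sketch (not the client project mentioned above): we generate 75K synthetic rows, deliberately corrupt 30K of them with random labels, and show that a simple classifier can still learn the underlying pattern. All numbers, features, and the model choice are made up for the illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# 75,000 rows, 2 informative features; the true label follows a simple linear rule.
X = rng.normal(size=(75_000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Corrupt 30,000 randomly chosen rows with random labels -- the "garbage" portion.
garbage = rng.choice(len(y), size=30_000, replace=False)
y_noisy = y.copy()
y_noisy[garbage] = rng.integers(0, 2, size=30_000)

# Train on the noisy labels.
X_tr, X_te, y_tr, y_te = train_test_split(X, y_noisy, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Evaluate against the *clean* labels on held-out rows to measure how much
# real signal the model recovered despite the garbage.
y_te_clean = (X_te[:, 0] + X_te[:, 1] > 0).astype(int)
print(f"accuracy vs clean labels: {model.score(X_te, y_te_clean):.2f}")
```

Because the label noise is random rather than systematic, it largely cancels out in aggregate, and the model still recovers the true decision boundary—which is exactly why dataset-level signal matters more than per-row perfection.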
The key is:
✅ Rapidly identifying the right data (even if it’s messy)
✅ Iterating, testing, and refining in real-time
✅ Understanding when data needs augmentation, labeling, or restructuring
✅ Having access to raw data, not just "cleansed" data that’s lost critical context
Traditional DQ approaches slow AI innovation because they assume we know what data is needed upfront. But in AI, we often don’t know until we start working with it.
Shifting from “Fix It Upstream” to “Work With It Iteratively”
Instead of spending months cleaning data before testing an AI model, we should:
🔹 Work directly with the use case – Start from the business problem (Top-Down approach) and find the data that supports it.
🔹 Analyze the data statistically, not row-by-row – DQ isn’t about fixing individual bad records; it’s about understanding the dataset’s overall patterns.
🔹 Adapt in real-time – Change the model, get new data, synthesize missing data, or iterate based on what you discover.
🔹 Retain raw data access – AI models need the full picture, including the structure of messy data. Passing data through traditional pipelines often removes valuable context.
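The "analyze statistically, not row-by-row" step can be sketched with a dataset-level profile: rather than fixing individual records, compute aggregate quality signals per column and decide from those whether the data is fit for the use case. This is a minimal illustration using pandas on synthetic data; the column names, missing rates, and thresholds are all hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic "messy" dataset: ~10% of amounts missing, some regions unknown.
df = pd.DataFrame({
    "amount": np.where(rng.random(1_000) < 0.1, np.nan, rng.normal(100, 20, 1_000)),
    "region": rng.choice(["EMEA", "APAC", None], size=1_000, p=[0.5, 0.4, 0.1]),
})

# Aggregate profile: per-column patterns, not per-row fixes.
profile = pd.DataFrame({
    "missing_rate": df.isna().mean(),
    "n_unique": df.nunique(),
})
print(profile)
```

A profile like this tells you whether a column has enough signal for the model at hand, and whether you need augmentation or relabeling—decisions that no amount of row-by-row scrubbing would surface.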
This iterative AI-driven approach has massive implications not just for DQ but for how organizations structure their entire data strategy.
Data Quality Is Contextual
Data Quality isn’t about some universal gold standard—it’s relative to the job at hand.
What’s considered "bad" in one context (e.g., structured reports) might be useful signal in another (e.g., AI models extracting trends from raw text).
As Eddie Short put it:
“For AI, trying to craft high-quality training data is self-defeating—especially with unstructured data.”
Or as Andy Mott said:
“Quality is always defined by the consumer of the data. In this case, it’s the AI model.”
Final Thought: AI Requires a Data Mindset Shift
🚀 Data Quality for AI is about speed, iteration, and adaptability.
🚀 Perfect data isn’t required—only enough signal to train effective models.
🚀 AI and Data Products require a shift from traditional “clean it first” thinking to a more experimental, hypothesis-driven approach.
If organizations fail to embrace this shift, they’ll slow down AI innovation and fall behind those who iterate fast.
What’s Your Experience?
Are you still trying to "fix" data before using it for AI, or have you moved to a faster, more iterative approach? Get in touch with us at Dataception!
With Dataception's DOGs (Data Object Graphs), AI is just a walk in the park!