To see data quality as an attribute of the data itself is to mistake the map for the terrain.

What counts as “quality” depends on what you are trying to do with the data. A 98% postcode match might suffice for a marketing campaign, but not for directing ambulances. A timestamp accurate to the nearest day might fail spectacularly in fraud detection, yet perfectly suffice for historical trend analysis.
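One way to make this concrete is to express quality as a fitness-for-purpose check rather than a property of the dataset. The sketch below is a minimal illustration in Python; the thresholds, the `QualityProfile` class and the `fit_for_purpose` function are all hypothetical, chosen only to mirror the examples above.

```python
from dataclasses import dataclass

# Hypothetical, purpose-specific quality thresholds: the same dataset can
# pass one check and fail another, because "quality" is defined by the use.
@dataclass
class QualityProfile:
    min_postcode_match_rate: float   # fraction of records with a usable postcode
    max_timestamp_error_days: float  # acceptable timestamp imprecision, in days

PROFILES = {
    "marketing_campaign": QualityProfile(min_postcode_match_rate=0.95,  max_timestamp_error_days=1.0),
    "ambulance_dispatch": QualityProfile(min_postcode_match_rate=0.999, max_timestamp_error_days=0.0),
    "fraud_detection":    QualityProfile(min_postcode_match_rate=0.95,  max_timestamp_error_days=0.0007),  # ~1 minute
    "historical_trends":  QualityProfile(min_postcode_match_rate=0.80,  max_timestamp_error_days=1.0),
}

def fit_for_purpose(postcode_match_rate: float, timestamp_error_days: float, purpose: str) -> bool:
    """Return True if the measured properties of a dataset meet the needs of this purpose."""
    p = PROFILES[purpose]
    return (postcode_match_rate >= p.min_postcode_match_rate
            and timestamp_error_days <= p.max_timestamp_error_days)

# The same measurements: a 98% postcode match rate and day-level timestamps.
for purpose in PROFILES:
    print(purpose, fit_for_purpose(0.98, 1.0, purpose))
# -> marketing_campaign True, ambulance_dispatch False,
#    fraud_detection False, historical_trends True
```

The point of the sketch is that nothing about the dataset changes between calls; only the purpose does, and with it the verdict on its quality.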

Yet many companies continue to think about data quality as if data were a physical product: something to be refined, standardised and stockpiled for later use. Vast sums are spent cleaning and curating datasets towards some imagined state of perfection, in the hope that once the data is “clean” (or “productized”), the job is done.

But the economics of data are nothing like the economics of physical commodities. Its value is not intrinsic. It is shaped entirely by context: by the question being asked, and by how quickly the answer is needed.

This is where the idea of “high-quality data” begins to fall apart. Unlike a well-built bridge or a precision-engineered machine, a dataset doesn’t have enduring utility on its own. Its value fluctuates with time and demand. Most of the time, what organisations need is not perfect data, but data that’s good enough for now and quickly adaptable for the future.

The effort required to take a dataset from “usable” to “pristine” is often not worth the marginal benefit—particularly when the next use case will require a different definition of quality altogether. Worse, over-engineering data for one purpose can limit its usefulness for others, introducing a kind of asset specificity. In a fast-moving environment, that’s a liability.

A better approach is to treat data not as a finished product, but as a flexible intermediate. Its value comes from how easily it can be shaped and repurposed. That requires a different kind of investment—not in perfection, but in adaptability: infrastructure that allows teams to track provenance, add context, reshape structure, and make trade-offs transparently. Optionality, in this sense, is far more valuable than polish.
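In miniature, that infrastructure might look something like the sketch below: a toy `TrackedDataset` wrapper (illustrative only, not any real library) that leaves the data loosely structured but records where it came from and every reshaping step applied to it, so provenance and trade-offs stay visible while the shape remains easy to change.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

# Illustrative only: treat the data as a flexible intermediate, logging
# provenance and each transformation rather than freezing one "perfect" form.
@dataclass
class TrackedDataset:
    data: Any                                     # e.g. a list of dicts, a DataFrame
    source: str                                   # where the data came from
    lineage: list = field(default_factory=list)   # ordered log of applied transforms

    def transform(self, fn: Callable[[Any], Any], note: str) -> "TrackedDataset":
        """Apply a reshaping step and record it, so trade-offs stay transparent."""
        new_data = fn(self.data)
        entry = {
            "step": note,
            "function": fn.__name__,
            "applied_at": datetime.now(timezone.utc).isoformat(),
        }
        return TrackedDataset(new_data, self.source, self.lineage + [entry])

# Usage: the raw data stays cheap to repurpose; each consumer shapes it as needed.
def normalise_postcodes(rows):
    return [{**r, "postcode": r.get("postcode", "").replace(" ", "").upper()} for r in rows]

raw = TrackedDataset([{"postcode": "sw1a 1aa"}], source="crm_export_2024")
for_marketing = raw.transform(normalise_postcodes, "normalise postcodes for campaign matching")
print(for_marketing.data)
print(for_marketing.lineage)
```

The design choice worth noting is that each transform returns a new object rather than mutating the original, so the same raw dataset can be reshaped in different directions for different consumers without losing its history.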

There is, of course, a role for governance and hygiene. But those efforts should serve agility, not obstruct it. The most successful organisations are not those with the cleanest data, but those that move quickly with known imperfections, adjusting as they go. They understand that data quality is not a destination, but a moving target.

The idea of high-quality data is appealing because it promises something solid in a world of flux. But data doesn’t work that way. Its economics reward speed, flexibility and reusability, not polish for its own sake. The most valuable data is not that which is most refined, but that which is ready to quickly become useful—again and again, in different ways, for different ends.