
Data Quality: From Hidden Cost to Competitive Advantage

Written by Kirey Group | Apr 17, 2025 8:27:18 AM

The only real problem with data quality is that many companies still consider it secondary. Yet it is the foundation of every strategic decision, data-driven process, and innovative project based on artificial intelligence. The consequences are not always immediately apparent, but they are paid over time, and at a high price: errors in analyses, operational inefficiencies, and inconsistent customer experiences cost companies an average of 12.9 million dollars a year (Gartner). A hidden cost, but a very real one.

Data Quality: A Technical and Cultural Requirement

If data quality is central to business competitiveness in the data-driven era, why do so many organizations continue to (unwittingly) pay the price of errors and wrong decisions? The answer lies in a mix of technical and cultural factors: in 2025, awareness of the importance of data quality exists, but too often it is not matched by a genuine ability to address the issue in a structured, cross-functional, and continuous way. The causes are many; below we cover only the most common ones.

Silos Generate Inconsistencies

Data fragmentation across departments, teams, platforms, and geographical locations, each with its own specific needs and, above all, its own rules, creates overlaps, gaps, and inconsistencies. The solution is standardization, which cannot be achieved without launching a modern, systemic data management program.

Data Quality Is Not by Design

Data quality is addressed ex post, that is, after the data have been collected, processed, and used. In reality, data quality—with its checks, validations, and monitoring processes—should be an integral part of the design of IT systems from the very beginning. Only in this way is it possible to ensure that data are consistent, complete, and accurate at every stage of the lifecycle, avoiding costly interventions later on.
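
To make the "by design" idea concrete, here is a minimal sketch in Python of what an ingestion-time check could look like. The field names, rules, and records are purely illustrative assumptions, not a prescribed implementation: the point is that records are validated before they enter downstream systems, rather than being cleaned up after the fact.

```python
from datetime import date

# Hypothetical ingestion-time validation: records are checked at the point of
# entry, instead of being repaired after they have spread downstream.
REQUIRED_FIELDS = {"customer_id", "email", "signup_date"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is accepted."""
    errors = []

    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            errors.append(f"missing required field: {field}")

    # Accuracy/format: a very simple plausibility check on the email field.
    email = record.get("email", "")
    if email and "@" not in email:
        errors.append("email does not look valid")

    # Timeliness: the signup date must not lie in the future.
    signup = record.get("signup_date")
    if signup and date.fromisoformat(signup) > date.today():
        errors.append("signup_date is in the future")

    return errors

# Example: only records that pass validation are forwarded.
incoming = [
    {"customer_id": "C001", "email": "anna@example.com", "signup_date": "2024-11-03"},
    {"customer_id": "C002", "email": "not-an-email", "signup_date": "2031-01-01"},
]
accepted = [r for r in incoming if not validate_record(r)]
```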

No One Is Responsible

In addition to technical issues, there are organizational matters related to the broader realm of data governance. One of the most underestimated problems is the lack of clear ownership: who is responsible for data quality, or for the quality of a specific data type? IT? The business? The data team?

A Corporate Culture (Still) Not Very Data-Oriented

In many companies, data are still perceived as a tool for generating reports, rather than as a strategic asset to be nurtured and enhanced. The lack of a true data culture limits the effectiveness of initiatives, despite the development of complex architectures and the implementation of cutting-edge tools.

Data Quality 2.0: Reimagining Enterprise Data Management

A modern data quality program is a strategic, integrated, and continuous approach aimed at ensuring the reliability, consistency, and completeness of corporate data.

As companies increasingly embed AI in their processes, a data quality program is indispensable: one that supports individual projects but also the systemic evolution toward a data-driven company. Below, we outline some of the key elements such a program should include.

Defining the Scope of the Program

Companies "cannot and should not aim for data quality everywhere, as not all data are equally important." In this way, Gartner analysts introduce what should be the first step in creating a data quality program in 2025: defining the scope of the program, that is, determining how much and where to invest.

To do so, a modeling process is necessary in which, given a certain business case, data quality is correlated with the two fundamental dimensions of value and risk. The objective is to prioritize interventions on the data that can provide the greatest value to the organization, while mitigating the main risks, such as those related to compliance or erroneous decisions.
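
As a minimal illustration of such a value/risk model, the sketch below scores a few hypothetical data domains and ranks them for intervention. The domains, scales, and weights are assumptions made for the example; in practice they would come out of the business case analysis with the relevant stakeholders.

```python
# Hypothetical prioritization sketch: each data domain gets an estimated
# business value and risk exposure on a 1-5 scale; higher combined scores
# are addressed first. Domains, scores, and weights are illustrative only.
domains = [
    {"name": "customer master data",    "value": 5, "risk": 5},
    {"name": "marketing campaign logs", "value": 3, "risk": 2},
    {"name": "supplier invoices",       "value": 4, "risk": 4},
]

VALUE_WEIGHT, RISK_WEIGHT = 0.5, 0.5  # tune to the organization's priorities

for d in domains:
    d["priority"] = VALUE_WEIGHT * d["value"] + RISK_WEIGHT * d["risk"]

# The highest-priority domains define the initial scope of the program.
for d in sorted(domains, key=lambda d: d["priority"], reverse=True):
    print(f'{d["name"]}: priority {d["priority"]:.1f}')
```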

Another crucial aspect, according to analysts, concerns the distinction between data that must be centralized and data that can remain confined to more local domains. Centralized data, such as master data, are the most critical, and thus the responsibility for their quality should be shared and based on close collaboration among different corporate stakeholders.

Assessment of Current Data Quality

While it is true that 59% of companies do not measure the quality of their data (Gartner), those that reach this phase are already at an advantage. In broad, distributed contexts, however, assessing data quality is by no means trivial, not least because it must be measured along specific dimensions, including:

  • Completeness: Are the data complete, or do they lack essential information?
  • Accuracy: Do the data accurately reflect reality?
  • Consistency: Are the data consistent across all systems and departments?
  • Timeliness: Are the data up-to-date and available at the right time?
  • Compliance: Do the data meet applicable standards and regulations?

Not all dimensions are equally relevant in every context, and here too choices must be made so as not to weigh on timelines and budgets. It is therefore essential to involve the relevant stakeholders, identifying with them the most valuable use cases and establishing which dimensions of analysis (completeness, accuracy, etc.) should be prioritized. Once this is done, it becomes easier to define the KPIs with which to measure data quality effectively, through a process known as data profiling.
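
To illustrate what such profiling KPIs might look like, the sketch below computes two of the dimensions listed above, completeness and timeliness, over a small hypothetical customer table. Column names, thresholds, and data are assumptions for the example; real profiling tools produce the same kind of per-column metrics at scale.

```python
from datetime import date

# Hypothetical data profiling sketch: simple KPIs for two quality dimensions
# (completeness and timeliness) computed over a small customer table.
rows = [
    {"customer_id": "C001", "email": "anna@example.com", "last_updated": "2025-03-10"},
    {"customer_id": "C002", "email": None,               "last_updated": "2023-01-18"},
    {"customer_id": "C003", "email": "luca@example.com", "last_updated": "2025-04-01"},
]

def completeness(rows, column):
    """Share of rows where the column is populated."""
    return sum(1 for r in rows if r.get(column)) / len(rows)

def timeliness(rows, column, max_age_days=365):
    """Share of rows whose date column is more recent than max_age_days."""
    today = date.today()
    fresh = sum(
        1 for r in rows
        if r.get(column) and (today - date.fromisoformat(r[column])).days <= max_age_days
    )
    return fresh / len(rows)

print(f"email completeness:      {completeness(rows, 'email'):.0%}")
print(f"last_updated timeliness: {timeliness(rows, 'last_updated'):.0%}")
```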

Data Governance: Who's Responsible for Data Quality?

When data quality processes (profiling, cleaning, enrichment, etc.) are left to isolated initiatives without a shared vision, the risk of duplications, inefficiencies, and inconsistencies increases exponentially. Data governance defines who should take care of the data throughout its lifecycle, ensuring that it is accurate, complete, up-to-date, and compliant with corporate standards. Data quality cannot be achieved without precise and timely governance, the complexity of which depends on the organizational structure. Key figures in data governance must be appointed and positioned within the organization, such as:

  • Data steward: Responsible for data quality at the operational level;
  • Data owner: Manages data at the strategic level;
  • Data quality engineer: Designs and implements quality processes.

Processes, Architectures, and Advanced Tools to Support Data Quality

A modern data quality program is based on skills, tools, and processes. The latter are essential to ensure that every piece of data collected, processed, and used meets corporate quality standards. All the macro processes of data quality, including profiling, data cleaning, and enrichment, must be defined and managed in line with business objectives.

To implement these processes, companies of a certain size must equip themselves with modern data architectures that move beyond the traditional silo model and rely heavily on automation. They also need advanced data management tools that optimize every process and pipeline, from the data producer to the data consumer.

Today’s technology can support complex activities such as the automatic detection of data errors, advanced data profiling, and real-time validation. Integrating these tools with pre-existing corporate systems, such as ERP, CRM, and analytics platforms, enables smooth, uninterrupted data flows, significantly reducing the need for manual intervention. This integration also accelerates time to value, one of the key indicators of success for any data-related project and a measure of how far the company has progressed toward becoming a data-driven enterprise.
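
By way of example, the sketch below shows the kind of rule-based check that automated tooling runs continuously between data producer and data consumer. The rules, thresholds, and records are hypothetical, and dedicated platforms typically express such checks as configuration rather than hand-written code; the sketch only illustrates the idea of routing violating records away from downstream consumers.

```python
# Hypothetical sketch of an automated, rule-driven check running inside a
# pipeline. Rules and thresholds are illustrative only.
RULES = {
    "order_amount": lambda v: v is not None and 0 < v < 1_000_000,
    "currency":     lambda v: v in {"EUR", "USD", "GBP"},
    "country_code": lambda v: isinstance(v, str) and len(v) == 2,
}

def check(record: dict) -> dict:
    """Return the record annotated with the list of rules it violates."""
    violations = [name for name, rule in RULES.items() if not rule(record.get(name))]
    return {**record, "_violations": violations}

stream = [
    {"order_amount": 120.5, "currency": "EUR", "country_code": "IT"},
    {"order_amount": -3.0,  "currency": "XXX", "country_code": "Italy"},
]

# Clean records flow on; violating records are routed to a remediation queue.
for annotated in map(check, stream):
    target = "remediation queue" if annotated["_violations"] else "downstream"
    print(target, annotated)
```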