Why Data Governance is the Foundation of Effective Data Mesh

Many businesses, especially large enterprises, still operate within a centralised decision-making structure. Power remains concentrated at the top, with directives filtering down through layers of management. At times, this can seem reminiscent of the Victorian era, when organisations were run along rigid hierarchical lines and hampered by slow communication, with little opportunity for information sharing, collaboration, or feedback from employees. Nevertheless, the approach was effective in an age without modern technology, ensuring consistency and control even if it often restricted innovation and responsiveness.

Fast forward to today’s digital world, and this model feels increasingly out of date. With the advent of digital communication, data storage, and collaborative tools, businesses now have far more choice over how they operate. Digital transformation can distribute data and decision-making widely across an organisation, enabling agility, innovation, and employee empowerment.

While this sounds ideal, the reality is not nearly as straightforward. Many organisations are overflowing with data in a multitude of formats, collected over decades, and IT departments juggle a mushrooming array of tools to process and make sense of these ever-expanding volumes. Instead of freeing up data for analysis and decentralised decision-making, many businesses are sitting on growing silos of information, unable to unlock its potential by releasing it to the functions that could extract the most value.

The crux of the data issue

Take, for example, an HR department wanting a report that requires data from both finance and HR systems, along with archived historical information. Or a marketing team trying to pull together sales statistics and marry them with customer service feedback. With a centralised structure, all requests, no matter how large or small, must go via the corporate IT team. This creates bottlenecks, slowing down or even preventing projects. Constant delays or stalling can be perceived as a rejection of new ideas, demotivating employees and ultimately discouraging innovation.

If IT teams could enable a self-service approach to data enablement and retrieval, it would take a huge burden off their shoulders and organisations could quickly start generating value from their data silos. To make this happen, technology leaders have started to evaluate the relatively new concept of data mesh.

The data mesh concept

This approach facilitates a shift towards decentralisation of data by putting ownership and management back in the hands of the domain owners, protected by policies for internal governance and compliance. It is based on four key principles:

Domain Ownership: Departments or functions such as marketing, sales, and finance become the owners of the data they generate, as they understand it best and care about it most. Effectively, domain owners behave as independent entities, accountable for the stewardship of their data assets and responsible for ensuring their accuracy and quality.

Data-as-a-Product: Each department turns its raw data into clear, defined products that all users enterprise-wide can easily understand, access, and utilise. This standardised approach ensures everyone is accessing the same, up-to-date information to drive business initiatives and decision-making (a minimal sketch of such a data product follows this list).

Self-Service Data Platform: The IT team is responsible for providing a secure, central platform with easy-to-use tools and sufficient capacity for every department to manage its growing volume of data products. This self-service approach empowers users to access data without relying on IT. Users can quickly analyse and extract insights from the new ‘data-as-a-product’ format.

Federated Governance: To ensure quality and coherence in a decentralised environment, overarching data governance rules are established to shape the requirements each data product must satisfy. These standards must be followed by each department to keep data integrity consistent across the data ecosystem, while still allowing different areas the flexibility to manage their specific needs. The platform supports this practice by checking every data product for compliance before releasing it, preventing the deployment of non-compliant products.
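To make ‘data-as-a-product’ a little more concrete, here is a minimal sketch of how a data product might be described in code. The DataProduct class and every field in it are illustrative assumptions made for this article, not a standard schema; real data mesh platforms define far richer descriptors.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified data product descriptor -- for illustration only.
@dataclass
class DataProduct:
    name: str                     # e.g. "finance.monthly-payroll"
    domain: str                   # the owning department (domain ownership)
    owner_email: str              # the accountable data steward
    description: str              # plain-language purpose of the product
    output_format: str            # e.g. "parquet" or "csv"
    pii_fields: list = field(default_factory=list)       # columns containing personal data
    quality_checks: list = field(default_factory=list)   # declared data quality rules

# Example: the finance domain publishes payroll costs as a data product
# that other departments (e.g. HR) can discover and reuse.
payroll = DataProduct(
    name="finance.monthly-payroll",
    domain="finance",
    owner_email="payroll-team@example.com",
    description="Aggregated monthly payroll costs per cost centre",
    output_format="parquet",
    pii_fields=[],                # aggregates only, no personal data exposed
    quality_checks=["row_count > 0", "no_null_cost_centre"],
)
print(payroll.name, "owned by", payroll.domain)
```

Because every product carries the same metadata, consumers in other domains can understand what they are getting and who is accountable for it before they build anything on top of it.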

Decentralisation fears

Removing the dependency on centralised systems and IT teams could be transformational. Data practitioners typically spend nearly 50% of their time on non-revenue-generating activities, such as finding data and validating its integrity before initiating projects. By enabling faster access and autonomy, a data mesh could slash the time to market for new products and services.

However, introducing data mesh poses multiple challenges concerning data duplication, storage size, master data management, and compliance. With decentralised data management, it’s imperative that every team member follows strict rules when creating, storing, and protecting data, or when implementing new tools. Otherwise, chaos will ensue. If different team leaders implement their own maverick processes or purchase incompatible tools, the data produced is likely to spawn more silos as well as cause further data privacy, compliance, and security issues.

Trusting individuals to stick to data guidelines is too risky. Instead, adherence must be enforced in a way that ensures standards are always followed, without frustrating users or impeding agility. This may sound impractical, but a computational governance approach can impose the necessary restrictions while accelerating project delivery. Some teams may struggle to adjust to this new way of working, but with the right training to support the implementation, old habits can be reshaped and an entrepreneurial mindset can be built.

The need for governance

Residing above an organisation’s data management and enablement tools, a computational governance approach can be technology-agnostic. It provides the governance structure and capabilities to ensure that all projects follow pre-determined policies for data quality, architecture, compliance, and security. These customisable governance policies enforce all relevant standards at global and local levels, ensuring that projects can only go into production if they meet the necessary internal and external requirements. Rather than correcting errors after data products have been built, users simply cannot create new data that doesn’t meet the set criteria.
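As a rough sketch of how such a policy gate might behave (not any specific product’s implementation), the example below checks a data product descriptor against a few hypothetical global policies and refuses to deploy it if any are violated. The policy names, fields, and rules are all assumptions made for illustration; a real platform would evaluate declarative, versioned policies covering far more ground.

```python
# Minimal sketch of a computational governance gate. Each policy is a plain
# function that returns an error message (violation) or None (compliant).

def has_owner(product: dict):
    return None if product.get("owner_email") else "every data product needs an accountable owner"

def pii_is_declared(product: dict):
    # Assumption: descriptors must state which fields expose personal data, even if none do.
    return None if "pii_fields" in product else "PII exposure must be declared explicitly"

def has_quality_checks(product: dict):
    return None if product.get("quality_checks") else "at least one data quality check is required"

GLOBAL_POLICIES = [has_owner, pii_is_declared, has_quality_checks]

def validate(product: dict) -> list:
    """Return all policy violations; an empty list means the product may go to production."""
    violations = []
    for policy in GLOBAL_POLICIES:
        message = policy(product)
        if message is not None:
            violations.append(message)
    return violations

def deploy(product: dict) -> None:
    violations = validate(product)
    if violations:
        # Non-compliant products never reach production; problems are caught
        # up front rather than corrected after the fact.
        raise ValueError("blocked by governance policies: " + "; ".join(violations))
    print(f"Deploying {product['name']} ...")

deploy({
    "name": "marketing.campaign-performance",
    "owner_email": "marketing-data@example.com",
    "pii_fields": [],
    "quality_checks": ["freshness < 24h"],
})
```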

User-friendly, automated templates help data practitioners quickly set up and initiate projects across technologies and tools, facilitating data access and processing while ensuring compliance and security standards are always met. Rather than holding projects back, computational governance speeds up delivery while providing reassurance that data quality standards are consistent and reliable across every project.
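A small sketch of what such a template might look like, using the same hypothetical descriptor format as above: a scaffolding function pre-fills the fields that governance policies expect, so every new project starts from a compliant baseline. The defaults shown are assumptions for illustration only.

```python
# Hypothetical project template: new data products start with the fields that
# governance policies require already filled in with organisation-wide defaults.

def new_data_product(name: str, domain: str, owner_email: str) -> dict:
    return {
        "name": name,
        "domain": domain,
        "owner_email": owner_email,
        "output_format": "parquet",           # assumed organisation-wide default
        "pii_fields": [],                     # to be reviewed by the domain owner
        "quality_checks": ["row_count > 0"],  # baseline check every product carries
    }

# A marketing analyst gets a compliant starting point in one call,
# then only fills in the domain-specific details.
draft = new_data_product(
    name="marketing.customer-feedback-summary",
    domain="marketing",
    owner_email="insights@example.com",
)
print(draft)
```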

When implemented with computational governance, data mesh can become the foundation of an intelligence-driven culture that delivers better and faster business outcomes. It can revolutionise how organisations manage and utilise their data, thereby enabling teams to quickly leverage information wherever it resides, driving innovation and informed decision-making, and giving businesses a substantial competitive advantage.

Andrea Novara, Engineering Lead | Banking & Payments Business Unit Leader at Agile Lab
