Are data mesh and data fabric the latest and greatest initiatives, or new buzzwords aimed at selling solutions? It’s hard to say, but these emerging corporate initiatives share a common goal: dealing with disparate data. You can often extract more value from your data if your analytics can use disparate sources without copying data excessively and repeatedly. Data mesh and data fabric take different approaches to solving the disparate data problem.
Both data mesh and data fabric rely on metadata and a semantic layer to leverage multiple data sources for analytics. The major difference, however, seems to be context.
In layman’s terms, data mesh is about the ability to offer various data sources to an analytical engine. Data mesh assumes you know the structure of your source data files and that the context of the data is solid: who created the data, and when, where, why, and how. Data mesh might be the strategy you use, for example, if you want to analyze data from several data warehouses in your company. It’s a use case where the original metadata is fairly well defined.
Data fabric focuses on orchestration, metadata management, and adding additional context to the data. In the data fabric, managing the semantic layer is the focus. Use the semantic layer to represent critical corporate data and develop a common dialect for your data. A semantic layer in a data fabric project might map complex data into familiar business terms such as product, customer, or revenue to offer a unified, consolidated view of data across the organization. Pharmaceutical trials are a good example of where you might use data fabric, since the data from a trial comes from a combination of machines, reports, and other studies where the data has little accurate metadata to rely on. This data may be ‘sparse’ as well, meaning that a significant number of rows and columns are blank or null.
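The mapping described above can be sketched in a few lines. This is a minimal illustration of the semantic-layer idea, not a real product API; the source names, column names, and data are all hypothetical.

```python
# Minimal sketch of a semantic layer: map source-specific column names
# to a common business vocabulary so analytics use one dialect.
# All source and column names here are hypothetical examples.

SEMANTIC_LAYER = {
    "crm": {"cust_nm": "customer", "rev_amt": "revenue"},
    "erp": {"client_name": "customer", "net_sales": "revenue"},
}

def to_business_terms(source: str, row: dict) -> dict:
    """Translate one source row into the shared business vocabulary."""
    mapping = SEMANTIC_LAYER[source]
    return {mapping.get(col, col): value for col, value in row.items()}

crm_row = {"cust_nm": "Acme", "rev_amt": 1200}
erp_row = {"client_name": "Acme", "net_sales": 800}

unified = [to_business_terms("crm", crm_row), to_business_terms("erp", erp_row)]

# Both rows now share the keys "customer" and "revenue",
# so downstream analytics can aggregate them uniformly.
total_revenue = sum(r["revenue"] for r in unified)
print(total_revenue)  # 2000
```

Real semantic layers add type handling, governance, and lineage on top of this idea, but the core is the same: one consolidated vocabulary over many source schemas.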
There are really no data-mesh-in-a-box or data-fabric-in-a-box solutions; as of this writing, there is no one-stop shop for either. In other words, data mesh and data fabric aren’t software products. They are strategic initiatives that typically require multiple solutions.
Today, companies might use several technologies to create a data mesh or a data fabric. Here are a few examples:
Traditional databases
Modern databases can leverage external tables in data mesh style. Vertica, for example, lets you query PARQUET files and other file types seamlessly without loading them into the main repository. In addition, if you have semi-structured data in AVRO, JSON, or TEXT, schema-on-read features make it easy to use that data as well. This functionality is valuable for creating a data mesh when you have disparate sources and want to query them as you would data in a database.
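To make schema-on-read concrete, here is a generic sketch of the idea: the raw semi-structured records stay untouched, and a schema is applied only at query time. This is illustrative only; Vertica exposes this capability through external tables and SQL, not through Python, and the records and schema below are invented.

```python
import json

# Schema-on-read sketch: raw semi-structured data is stored as-is;
# a schema (with type casts) is applied only when the data is read.

raw_lines = [
    '{"id": "1", "amount": "19.99", "region": "EMEA"}',
    '{"id": "2", "amount": "5.00"}',  # a missing field is tolerated
]

SCHEMA = {"id": int, "amount": float, "region": str}

def read_with_schema(line: str) -> dict:
    """Parse one raw record and cast each declared field on read."""
    record = json.loads(line)
    # Fields absent from the raw record become None rather than errors.
    return {col: (cast(record[col]) if col in record else None)
            for col, cast in SCHEMA.items()}

rows = [read_with_schema(line) for line in raw_lines]
print(rows[0])  # {'id': 1, 'amount': 19.99, 'region': 'EMEA'}
```

The payoff for a data mesh is that differently shaped source files can all be read through one declared schema without an upfront load-and-transform step.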
Query engines
A whole generation of query engines (sometimes called query accelerators) also makes data mesh possible. Solutions like Dremio, Starburst, and Druid focus primarily on analyzing external tables. They sometimes lack ACID compliance and the ability to run analytics at high concurrency, but they are often helpful in the data mesh mission. More and more traditional databases have added query engines that allow seamless querying across the database and a data lake.
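Conceptually, a federated query engine presents one scan interface over many external repositories. The toy below illustrates that single-query-over-many-sources pattern; the source names and rows are hypothetical, and real engines add planners, pushdown, and format readers.

```python
# Toy federated query: one aggregation runs over two "external" sources,
# the way a query engine scans external tables without ingesting them.
# Source names and data are hypothetical.

warehouse_orders = [{"sku": "A", "qty": 3}, {"sku": "B", "qty": 1}]
lake_orders = [{"sku": "A", "qty": 2}]

def scan(*sources):
    """Stream rows from every registered source, as a query engine would."""
    for source in sources:
        yield from source

# A single aggregation spans both repositories.
totals = {}
for row in scan(warehouse_orders, lake_orders):
    totals[row["sku"]] = totals.get(row["sku"], 0) + row["qty"]

print(totals)  # {'A': 5, 'B': 1}
```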
Visualization tools
Some advanced visualization tools have a semantic layer system. MicroStrategy, for example, offers a layer of abstraction that provides a consistent way of interpreting data from multiple sources. In addition, it maps complex data into familiar business terms. This capability is not only a simplified data fabric but can also leverage your database’s external tables capabilities. Combined, it can be mighty powerful.
Graph databases
Graph databases are good at orchestration and context and are the engines behind many data fabric solutions. Implementing data fabric with a graph DB is a significant project, but you will get a true data fabric when complete.
Data virtualization
Data virtualization tools like those offered by AtScale and Denodo present a consistent view for BI and Data Science teams to consume data. Modern databases also have data virtualization capabilities.
Data catalog
A data catalog is an organized inventory of the data assets in an organization. Companies like Collibra provide data discovery and governance catalogs that collect, organize, expose, and enrich metadata.
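The essence of a catalog is a searchable inventory of assets plus enrichable metadata. Here is a minimal sketch of that idea; the class, asset names, and tags are hypothetical and bear no relation to any vendor's actual API.

```python
from dataclasses import dataclass, field

# Toy data catalog: register assets, enrich them with metadata tags,
# then discover assets by tag. All names here are hypothetical.

@dataclass
class Asset:
    name: str
    location: str
    owner: str
    tags: set = field(default_factory=set)

class Catalog:
    def __init__(self):
        self._assets = {}

    def register(self, asset: Asset):
        """Add a data asset to the inventory."""
        self._assets[asset.name] = asset

    def enrich(self, name: str, *tags: str):
        """Attach extra metadata tags to an existing asset."""
        self._assets[name].tags.update(tags)

    def discover(self, tag: str):
        """Find every asset carrying the given tag."""
        return [a.name for a in self._assets.values() if tag in a.tags]

catalog = Catalog()
catalog.register(Asset("orders", "s3://lake/orders", owner="sales"))
catalog.enrich("orders", "pii", "finance")
print(catalog.discover("pii"))  # ['orders']
```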
On-premises object store
It can be helpful to store all of your files in a central location. Object stores let you centrally manage databases, data repositories, and data lakes in one place with superb performance, security, and disaster recovery. For that reason, object stores such as those from Pure, Vast, Dell ECS, and many others can help with data mesh.
Data mesh is a way of accessing data that may be disparate, and it works particularly well when all the data sources are well structured and their context is well understood.
If data mesh has a weakness, it is context. If your analytics asks the question “according to whom?”, a data fabric can be more powerful for answering it. Data engineers often run into conflicting information when integrating sources. For example, a new system might report a customer’s age as 32, while legacy data reports the same customer as 30 years old. Data lineage is an added feature of data fabric that lets you decide which data sources to trust more when there are conflicts.
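The age conflict above can be resolved with a simple lineage rule: when sources disagree, keep the value from the most-trusted source. The sketch below shows that policy; the source names and trust ranking are hypothetical, and real lineage systems track full provenance rather than a single score.

```python
# Lineage-based conflict resolution sketch: when two sources report
# different values for the same field, prefer the more-trusted source.
# Source names and trust scores are hypothetical.

TRUST = {"new_crm": 2, "legacy_db": 1}  # higher = more trusted

observations = [
    {"source": "legacy_db", "field": "age", "value": 30},
    {"source": "new_crm", "field": "age", "value": 32},
]

def resolve(observations):
    """Keep, per field, the value from the highest-trust source seen."""
    best = {}
    for obs in observations:
        name = obs["field"]
        current = best.get(name)
        if current is None or TRUST[obs["source"]] > TRUST[current["source"]]:
            best[name] = obs
    return {f: o["value"] for f, o in best.items()}

print(resolve(observations))  # {'age': 32}
```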
Data fabric solutions tend to combine more tools to solve your disparate data problem. Those tools are both more elegant and usually more complex than a data mesh. They might include greater transformation capabilities, enhanced fine-grained security, and graphical interfaces for governance and lineage. If data fabric has a weakness, however, it is that you will probably have to spend significant effort creating and managing a semantic layer.
Vendors touting a data fabric strategy often promote the capabilities of a knowledge graph. A knowledge graph replaces the data mesh integration strategy with a semantic representation of both structured and unstructured data, one that often better supports multiple schemas and dimensions that change.
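At its simplest, a knowledge graph is a set of subject–predicate–object triples that can mix facts derived from tables with facts extracted from text. The sketch below shows that representation and a one-hop lookup; every entity, predicate, and fact is a made-up example.

```python
# Toy knowledge graph as subject-predicate-object triples, mixing facts
# from structured data (an order table) and unstructured data (a report).
# All entities and facts here are hypothetical.

triples = {
    ("Acme", "is_a", "customer"),
    ("Acme", "purchased", "WidgetX"),               # from a structured order table
    ("WidgetX", "mentioned_in", "trial_report_7"),  # extracted from a text report
}

def neighbors(graph, subject):
    """Return the (predicate, object) pairs reachable from a subject in one hop."""
    return {(p, o) for s, p, o in graph if s == subject}

print(neighbors(triples, "Acme"))
```

Because new triples can be added without altering a fixed schema, this representation absorbs new dimensions and evolving schemas more gracefully than a rigid star schema.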
More than ever, data is spread across databases and data lakes. Cloud databases vary greatly in how they access external data. Some solutions require data to be stored in specific formats in data warehouses and offer no support for data lakes; still others support data lakes but require multiple tools to do so. Look for a solution that can handle common formats (like ORC, PARQUET, AVRO, and JSON) and bring those sources into daily analysis with grace and speed. Also look for solutions that can reach into other databases in your organization (data virtualization) so that no data is difficult to access.