How contextual data will shed new light on your production process
Jeroen Coussement
Raw production data tells enticing stories. All you need to do is capture them. Find out how to obtain advanced insights by adding a layer of context to raw process data – without having to rely on external systems such as MES or ERP.
What is contextual data?
Contextual data, or context data, is background information about the set of circumstances that surrounds a collection of data points. Context data helps to make raw production data more useful, manageable and easier to analyse.
In process manufacturing, context is typically defined in terms of process events: batches, runs, shifts, or any other event with a start and an end time.
Why context data matters in process manufacturing
By adding context to raw data, process manufacturers can connect data points to discover patterns and correlations within process parameters, spot anomalies and trends, and detect minor issues before they become major threats. These contextual insights ultimately lead to smarter processes, higher product quality, and better decisions.
Process engineers can, for instance, use these insights to correct an anomaly quickly, and learn when the issue might occur again. The company’s CEO and CFO can integrate contextual data with their BI tools and make well-informed business decisions. Compliance and facility managers can leverage it to find new opportunities for lowering energy consumption or operational costs.
Companies that apply contextual data can:
- Streamline product quality by overlaying production curves
- Spot performance gaps before they threaten continuity
- Increase their Overall Equipment Effectiveness (OEE)
- Find the underlying reasons for downtime and errors
- Increase the bottom line through better cost accounting
So, how do you add context to raw process data?
The basic process of adding context to raw production data consists of two steps: process event detection and data aggregation.
Step 1: process event detection
Event detection is the process of detecting and identifying the occurrence of specific events, by recognizing patterns in streaming production data, for example when a program step changes in the PLC. Typical examples of process events include orders, runs, batches, cycles, employee shifts, unplanned downtime, emergency stops, and changeovers.
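As a minimal sketch of what step 1 might look like, assume the PLC exposes a program-step tag that is sampled into a stream of timestamped values (the function and tag semantics below are illustrative, not a real historian API). An event starts when the step enters a given value and ends when it leaves it:

```python
from datetime import datetime

def detect_events(samples, run_step=1):
    """Detect (start, end) event windows from a stream of
    (timestamp, program_step) samples. An event begins when the
    PLC step enters `run_step` and ends when it leaves it.
    Illustrative sketch only, not a real historian API."""
    events, start = [], None
    for ts, step in samples:
        if step == run_step and start is None:
            start = ts                      # event begins
        elif step != run_step and start is not None:
            events.append((start, ts))      # event ends
            start = None
    return events

samples = [
    (datetime(2023, 5, 1, 8, 0), 0),
    (datetime(2023, 5, 1, 8, 5), 1),   # batch starts
    (datetime(2023, 5, 1, 9, 30), 1),
    (datetime(2023, 5, 1, 9, 45), 0),  # batch ends
]
print(detect_events(samples))
```

In practice the same pattern applies to any start/end signal: order numbers, shift codes, or a downtime flag instead of a program step.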
Step 2: data aggregation
Event analysis typically entails the filtering and aggregation of raw process data within the context of the detected process events. The result of this aggregation is valuable information that provides an additional layer of context on top of the raw data, and enables complete traceability across multiple production processes, lines and sites.
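Continuing the sketch above (with made-up data, assuming event windows have already been detected), step 2 boils down to filtering the raw time-series readings to each event window and computing an aggregate such as the average:

```python
from datetime import datetime

def aggregate(readings, events):
    """Average a raw sensor signal within each event window.
    `readings` is a list of (timestamp, value) samples; `events` is a
    list of (start, end) tuples from event detection. Illustrative only."""
    out = []
    for start, end in events:
        vals = [v for ts, v in readings if start <= ts < end]
        out.append(sum(vals) / len(vals) if vals else None)
    return out

readings = [
    (datetime(2023, 5, 1, 8, 10), 71.0),
    (datetime(2023, 5, 1, 8, 40), 73.0),
    (datetime(2023, 5, 1, 9, 10), 72.0),
    (datetime(2023, 5, 1, 10, 0), 40.0),  # outside the batch window
]
batch = [(datetime(2023, 5, 1, 8, 5), datetime(2023, 5, 1, 9, 45))]
print(aggregate(readings, batch))  # average temperature for the batch
```

The same filter-then-aggregate pattern works for totals (energy consumed per batch), minima/maxima, or counts.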
However, in many cases, this is easier said than done.
Are you struggling with these common obstacles?
In practice, few industries generate as much data as manufacturing, yet many process plants still fail to uncover deeper contextual insights from the time-stamped data they collect in abundance. This can happen for various reasons:
Lack of the right production tools
Adding context to raw data is something you would typically expect from external systems, such as a Manufacturing Execution System (MES) or an Enterprise Resource Planning (ERP) system. Yet in many cases, an MES is not present, nor is it required.
And even if there is an MES, it is often outdated or it does not accurately reflect reality (e.g. planned versus actual starting time of a batch, or downtime events that have been entered manually rather than detected automatically) or has no access to the originally recorded raw data.
This leaves many companies caught between a rock and a hard place: they don’t need advanced production software, but they do need contextual insights.
Complex custom integrations
To address the lack of relationships among data points in a time-series database, complex workarounds are often put in place. However, band-aid solutions such as custom scripting or complex stream processing pipelines are mainly built for internet-scale applications, and require a lot of resources.
So, still not a great solution. Fortunately, there’s a better option.
The solution: add a relational database to your historian
You can’t get insights from contextualised data if you aren’t collecting it. So let’s double the fun by adding a relational database alongside your time-series database. Generating additional contextual data will obviously enlarge the flood of plant data, but it will also make it a lot more manageable by adding structure.
The difference between relational and time-series databases
As its name implies, a relational database is designed for grouping data records so users can easily establish relationships between data points. By contrast, a time-series database stores all data sequentially with time stamps, without establishing relationships among the data points.
Why should you combine both types of databases?
By capturing and combining critical events with time-stamped data, process engineers can surface new insights into how process events impact operational performance. On top of that, they save hours otherwise spent preparing data manually for reporting, or writing custom integrations for BI tools.
Apart from the core ability of SQL databases to organise data with defined relationships, another great benefit is that they can be easily queried.
In a process manufacturing context, event data stored in an SQL database enables you to answer complex operational questions such as:
- How much energy was consumed per specific product or asset?
- What was the average temperature for a certain batch?
- How does the performance of two lines compare to each other?
- How does the pH curve compare to previous batches?
- What volume was produced on a certain asset in one week?
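To make this concrete, here is a minimal illustration using SQLite with hypothetical table and column names (a real historian schema will differ): once event windows and their aggregates live in a relational database, a question like “what was the average temperature for a certain batch?” becomes a single query.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- event table written by the contextualisation step (names are made up)
    CREATE TABLE batch (id TEXT, start_ts TEXT, end_ts TEXT);
    -- aggregated values linked to each event
    CREATE TABLE batch_value (batch_id TEXT, metric TEXT, value REAL);
    INSERT INTO batch VALUES ('B-001', '2023-05-01 08:05', '2023-05-01 09:45');
    INSERT INTO batch_value VALUES
        ('B-001', 'avg_temperature', 72.0),
        ('B-001', 'energy_kwh', 148.5);
""")

# "What was the average temperature for batch B-001?"
row = con.execute("""
    SELECT b.id, v.value
    FROM batch b
    JOIN batch_value v ON v.batch_id = b.id
    WHERE v.metric = 'avg_temperature' AND b.id = 'B-001'
""").fetchone()
print(row)
```

Swapping the `metric` filter or grouping by asset instead of batch answers the other questions in the list with the same join.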
Ready to add context, save time and produce smarter?
Adding a relational database alongside a time-series database will save you time and effort by radically reducing the amount of pre-processing needed for advanced operational and business reporting. At the same time, it speeds up root cause analysis by helping process engineers connect the dots between cause and effect.
In our next blog article, we will explain how to add context to raw production data with Factry Historian’s generic event detection and contextualisation module.
Hang tight to find out how to:
- Auto-detect events through a generic, asset-based event module
- Aggregate raw process data automatically to get deeper insights
- Sample historical data on top of real-time production curves
- Speed up the reporting process by integrating with any BI tool