Technical people are often linear thinkers. Linear thinking is a thought process in which each step triggers the next: if a = b, and b = c, then a = c. It is how Henry Ford’s assembly line came into existence and how the Kaizen, Lean and Six Sigma methodologies matured.
To execute a step, you need information: for instance, a manual for assembling a car. As we learn, we adapt, and we scribble notes on that assembly manual. All is well if those scribbles apply only to you, but what happens when everybody adds personal notes to the manual over time? The result is inconsistency and variation on the original information.
Today, instead of assembly manuals, we have complex as-built models, used by all different silos within our organisations. Each silo “scribbles” their notes on the original as-built model, again creating variations on the same dataset, but this time with far greater consequences.
Virtually every phase of the Design, Build, Maintain, Operate and Demolish cycle requires us to transfer a duplicate of an asset dataset, which then evolves in isolation within that new discipline, department or even company. As a result, asset data is never up to date, the as-built model never reflects the true current state of the physical assets, the different silos are rarely up to speed with each other’s activities, and asset managers make decisions based on scattered, unvalidated and inconsistent information.
The fourth industrial revolution, the digital one, has amplified this situation by enabling data creation on a vast scale. The Internet of Things, machine learning, predictive maintenance, assets that communicate with each other, sensors that report assets’ operating parameters, 3D scans that can be transformed into 3D models with millimetre accuracy – each of these innovations, left unmanaged, simply creates yet another isolated dataset.
Download the full article to read how we put this data to use instead of just generating it.