The Industrial Revolution laid the foundation for the modern work ethic of continuous improvement and human productivity through the mechanization of production using steam-powered machines (Industry 1.0). When electricity and mass production entered the scene, great efficiencies and cost reductions were realized (Industry 2.0). This was further enhanced by computers, where entire production processes started to be automated (Industry 3.0). Today, we are applying information and communication technologies to industry, with the ambitious goal of machine-to-machine (M2M) interaction for business outcomes (Industry 4.0).
So, what exactly does Industry 4.0 bring to the table, and how will it achieve its objectives? Well, this revolution is about digitizing physical processes and leveraging the computing infrastructure to automate and control operations via sensors and cyber-physical systems. It enables a variety of applications, like predictive maintenance, digital twins, and diverse supply chain management optimizations.
Alongside this industrial evolution is the progress in our approach to maintenance, which has migrated from being reactive to being proactive, more commonly referred to as preventative maintenance. But as the Internet of Things grows with more and more connected devices, even this proactivity will not scale. Market researcher Gartner predicted that by 2020 there would be 26 times more connected things than people, and per Security Today, in 2019 the number of active IoT devices on the web was forecast to reach 27 billion, growing at a rate of about 127 new devices per second (source). A different approach is indeed needed; we are on the cusp of predictive maintenance.
Businesses initially maintained their assets reactively, repairing or replacing them only after they broke down. They soon learned that this passive approach to maintenance was problematic, as it halted production and led to great financial losses. If a major piece of equipment suddenly stopped working, for example, a factory could be shut down for days or even weeks while a replacement part awaited delivery. Based on research conducted by Nielsen Research, in a survey of over 100 manufacturing executives in the automotive vertical, a majority agreed that the cost of production disruption is incredibly high: an average of about $22,000 per minute (source).
To mitigate this, enterprises employed the preventative maintenance (PM) model, in which service checks and repairs are performed at regular intervals to prevent equipment failures. This proactive approach, based on an asset’s expected life, proved cost effective: it alleviated unexpected downtime and encouraged operational continuity. According to a report by BOMA (Building Owners and Managers Association), PM accounts for about 4.5 to 7.5% of annual operating costs. That may seem a relatively small figure at first, but the cost accumulates over time. However, the savings can also be substantial.
FleetNet America, for example, a comprehensive fleet repair and maintenance service provider, saved its clients over $775,000 in maintenance expenses through its PM program from 2012 to 2016. Based on some estimates, a commercial truck can cost up to $180,000 annually to operate, with about $15,000 of that attributed to maintenance and repairs, or about 8% of operating costs. So, if we estimate a PM cost of about $10,000 for a single truck, then a minimum fleet of about 15 vehicles (a typical number to qualify as a fleet) would incur about $150,000 in annual PM expense, or about $600,000 over four years.
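As a quick sanity check on the arithmetic above (all figures are the rough estimates quoted in the text, not measured data):

```python
# Back-of-the-envelope fleet PM cost estimate, using the figures from the text.
annual_operating_cost = 180_000   # est. annual cost to operate one commercial truck
annual_maintenance = 15_000       # est. annual maintenance and repair per truck

maintenance_share = annual_maintenance / annual_operating_cost
print(f"Maintenance share of operating cost: {maintenance_share:.0%}")  # ~8%

pm_cost_per_truck = 10_000        # assumed PM portion of maintenance spend
fleet_size = 15                   # minimum typical fleet size
annual_fleet_pm = pm_cost_per_truck * fleet_size
print(f"Annual fleet PM expense: ${annual_fleet_pm:,}")       # $150,000
print(f"PM expense over 4 years: ${annual_fleet_pm * 4:,}")   # $600,000
```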
Thus, we believe that although PM can result in significant savings, the cost of the program itself often offsets them, trending toward a net-zero effect. And as repair and maintenance costs continue to increase, PM will not keep pace. Although PM dominates enterprise maintenance standards, it is a sub-optimal approach, as it generally services or replaces assets before their actual end of life (EOL). We feel there is a better approach, in which enterprises use analytics about their assets to plan maintenance activities.
A shift to predictive maintenance using state-of-the-art technologies like digital twins, which have the potential to yield Operational Intelligence (OI), will leverage the computing infrastructure to improve asset utilization, prevent unscheduled downtime, and allow optimal planning of maintenance activities.
According to a report from McKinsey & Company on the Internet of Things, predictive maintenance, using real-time data to predict and prevent breakdowns, can reduce downtime by 50% and reduce equipment capital investment by 3 to 5% by prolonging the useful life of machinery. Based on this report, manufacturers could save over $600B per year by 2025. The same applies to other organizations that spend a portion of their capital budgets on equipment maintenance; e.g., per this report, hospitals around the world could save about $70B per year by 2025.
Thus, the benefits of predictive maintenance can include significant cost savings and increased revenue for enterprises. But it is not yet a widely accepted practice, because accessing detailed, timely data about the condition of the entities is still challenging. However, this is gradually changing due to the exponential growth of IoT alongside related digital technologies (e.g., artificial intelligence, big data, and digital twins).
Digital Twins to Optimize Industrial Maintenance and Workflows
As suggested above, Industry 4.0 facilitates the collection of massive volumes of data about the condition of machinery, which is needed to implement predictive maintenance. This data-gathering effort is empowered by a variety of IoT sensors deployed at the Edge: thermal cameras, smoke detectors, temperature gauges, and so on. The integration of sensing, computation, control, and networking into physical objects and infrastructure, connecting them to the internet and to each other, lays the foundation for M2M communication, and this is fueling the rise of digital twins.
What is a digital twin? Well, it’s an instance of a digital model that represents an entity, which can be an asset, a process, and/or a system for an application. A digital twin model typically describes the characteristics, attributes, and behavior of an entity type.
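As a rough sketch of this definition, a minimal twin might pair a model (describing the entity type) with an instance holding live state. All class and field names here are illustrative, not from any particular platform:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class DigitalTwinModel:
    """Describes the characteristics and attributes of an entity type."""
    entity_type: str            # e.g. "centrifugal_pump"
    attributes: Dict[str, Any]  # static characteristics (make, model, rated RPM)

@dataclass
class DigitalTwin:
    """An instance of a model, representing one physical entity."""
    twin_id: str
    model: DigitalTwinModel
    state: Dict[str, float] = field(default_factory=dict)  # latest sensor readings

    def update(self, readings: Dict[str, float]) -> None:
        """Ingest real-time sensor data to keep the twin in sync with its asset."""
        self.state.update(readings)

# Usage: a twin of a pump ingesting a bearing-temperature reading
pump_model = DigitalTwinModel("centrifugal_pump", {"rated_rpm": 3600})
pump = DigitalTwin("pump-001", pump_model)
pump.update({"bearing_temp_c": 71.5})
print(pump.state)  # {'bearing_temp_c': 71.5}
```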
Digital twins, as discussed above, can be implemented in any industry, but they can greatly help companies involved in asset-intensive production efforts, such as mining, oil and gas, and energy and utilities, by improving their situational awareness, process efficiency, and decision making. In such applications, where the twins are continuously updated with real-time data from sensors and equipment, including data from systems like weather services and maintenance, OI is constantly being generated.
If you plan it out properly, a digital twin can unveil opportunities for improvement. But, you have to design your twin with some goals in mind. Here are some thoughts on what to use a digital twin for:
- Event monitoring on a continuous data feed
- Decision support
- Decision automation
- Data interpretation and decision making using ML and AI
- Recommendation of subsequent steps
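As a minimal, hypothetical sketch of the first and last items, monitoring a continuous data feed and recommending a next step, a twin could flag readings that deviate sharply from the recent mean. The window size and z-score threshold here are arbitrary choices for illustration:

```python
from collections import deque
from statistics import mean, stdev
from typing import Optional

class AnomalyMonitor:
    """Watches a continuous feed and suggests an action on anomalous readings."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # rolling window of recent values
        self.z_threshold = z_threshold

    def observe(self, value: float) -> Optional[str]:
        """Return a suggested action if the value is anomalous, else None."""
        if len(self.readings) >= 3:  # need a few samples before judging
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                self.readings.append(value)
                return f"Inspect asset: reading {value} deviates from recent mean {mu:.1f}"
        self.readings.append(value)
        return None

# Usage: a steady temperature feed with a sudden spike at the end
monitor = AnomalyMonitor()
for temp in [70.1, 70.4, 69.9, 70.2, 70.0, 95.3]:
    action = monitor.observe(temp)
    if action:
        print(action)  # fires only for the 95.3 spike
```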
These are just a few things you can do with digital twins, and you can combine them as well. For instance, you can monitor events for anomalies and have the twin suggest what actions to take. Some companies have gone further and codified twin types; XMPro, a global provider of software and services for real-time OI, has created three digital twin types based on IoT use case patterns:
- Status Twin: employs visualization tools that display operating parameters; used for basic condition monitoring applications (e.g., dashboards, simple alerting systems, etc.)
- Operational Twin: links workflows where users can interact with the twin and adjust operating parameters for control as permitted; used in decision making and support
- Simulation Twin: leverages various simulation or AI capabilities to forecast and provide insight into future operational states; used for predictive maintenance
However, these types are interrelated: you need the status to make sense of the operational data, which, in turn, is used as input to a predictive model.
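To illustrate the simulation end of that chain, here is a hedged sketch (the function and figures are invented for this example, not any vendor's API) of forecasting when a monitored parameter will cross an operating limit via simple linear extrapolation:

```python
# Illustrative simulation-twin step: forecast when a trending parameter will
# cross an operating limit, using least-squares linear extrapolation.
def hours_until_limit(samples, limit):
    """samples: list of (hour, value) pairs. Returns estimated hours from the
    last sample until `limit` is reached, or None if the trend is flat/falling."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    slope = sum((t - mean_t) * (v - mean_v) for t, v in samples) / \
            sum((t - mean_t) ** 2 for t, _ in samples)
    if slope <= 0:
        return None  # not trending toward the limit
    _, last_v = samples[-1]
    return (limit - last_v) / slope

# Usage: vibration (mm/s) drifting upward over 4 hourly samples; limit at 7.0
readings = [(0, 4.0), (1, 4.5), (2, 5.0), (3, 5.5)]
print(hours_until_limit(readings, limit=7.0))  # 3.0 hours until the limit
```

A real simulation twin would of course use richer models (physics-based or learned), but the shape is the same: current state in, forecasted state out, maintenance scheduled before the limit is reached.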
Beyond this is the innovative use of the data generated by digital twins. As data from different entities is shared among the internetwork of machines, novel compositions of that data will reveal new possibilities and business opportunities. This is the promise of digital twins!
As digital twins gain momentum in making optimal use of connected entities through predictive maintenance, decision support, and monitoring, as well as in enabling innovative applications built on data generated through machine-to-machine communications, we can expect an exceptionally large volume of data throughput on the internet.
This popularity will drive the need for more accurate digital twins, whose models require a constant stream of data that, at scale, can’t be hauled to the Cloud. This will favor moving digital twins away from the Cloud to the Edge.