Predictive Maintenance Throwdown
By Isaac Brown
Predictive maintenance is a killer use case for IoT & predictive analytics technologies – how did a grimy old concept for equipment maintenance become such a hot digital trend? Let’s discuss. Predictive maintenance (abbreviated “PdM”) is the practice of monitoring the condition of machines to look for forward indicators of upcoming machine health issues. It is a more advanced approach to machine health than reactive maintenance (fixing things after they break) or preventative maintenance (servicing equipment at prescribed intervals). While people have been doing some form of forward-thinking maintenance for centuries, today PdM is on almost every industrial technology roadmap on the planet, because advances in data science are enabling higher-fidelity predictions and improved return on investment (ROI).
Everyone wants a piece, and it’s useful to consider the relevant stakeholders for this discussion – we can broadly bucket them into four groups: machine operators, machine vendors, machine servicers, and pure-play PdM technology vendors.
There is a spectrum of IoT-based maintenance approaches, though not all deployments deserve the “predictive” label. Most current maintenance-focused IoT projects are not really PdM, but more accurately “remote monitoring”. The key element is notifying an operator that a machine is broken, so they can reactively service it. If an IoT solution can quickly notify the operator and tell them precisely what is required for the fix, that is valuable – the goal is to decrease downtime and the associated service costs. This is especially true when an important machine operates in a remote location, where the site visit alone requires significant time and cost. While remote monitoring is certainly a value generator, it is not PdM – it is just finding out something is broken more quickly than you otherwise would have, and then being more precise about how to fix it.
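A remote-monitoring alert is valuable in proportion to how precisely it tells the operator what to fix. As a rough sketch of that pattern – the fault codes, parts catalog, and machine names below are invented for illustration – an alert is essentially a lookup from a reported fault code to the parts and skills the site visit will require:

```python
from dataclasses import dataclass

@dataclass
class FaultAlert:
    machine_id: str
    fault_code: str
    parts: list
    skills: list

# Hypothetical catalog mapping fault codes to what a repair requires.
CATALOG = {
    "E042": {"parts": ["drive belt", "tensioner"], "skills": ["mechanical"]},
    "E107": {"parts": ["control board"], "skills": ["electrical"]},
}

def build_alert(machine_id: str, fault_code: str) -> FaultAlert:
    """Turn a raw fault report into an actionable alert, so the
    technician arrives with the right parts on the first visit."""
    entry = CATALOG[fault_code]
    return FaultAlert(machine_id, fault_code, entry["parts"], entry["skills"])

alert = build_alert("press-12", "E042")
print(alert.parts)  # ['drive belt', 'tensioner']
```

The payoff is exactly the one described above: fewer wasted site visits to a remote machine, because the alert carries the diagnosis along with the notification.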
A step up from remote monitoring is anomaly detection, which is the crux of many IoT solutions, and something many operators are doing today. This approach leverages statistics, and it commonly extends into the realm of machine learning (in a real way). Here’s an example: a data model is built with a baseline for how much power an injection molding machine typically uses over the course of its production cycle. The system notices that the machine’s power consumption has risen beyond the normal threshold in the baseline model, so it notifies an operator of the deviation from typical behavior. In more advanced machine learning applications, the system might record the operator’s response and adapt the data model to better identify future anomalies and prescribe responses. Some people would call this PdM and some wouldn’t – regardless, it’s much more advanced than what most machine operators are doing today. In fact, many surveys indicate that most industrial operators globally are reactive or preventative in their maintenance programs, not predictive.
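The statistical core of that example fits in a few lines. A baseline of normal power draw is summarized by its mean and standard deviation, and new readings are flagged when they fall outside the band – a minimal sketch, with all the numbers invented for illustration:

```python
from statistics import mean, stdev

def detect_anomalies(baseline, readings, z_threshold=3.0):
    """Flag the indices of readings that deviate from the baseline
    by more than z_threshold standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [i for i, r in enumerate(readings)
            if abs(r - mu) > z_threshold * sigma]

# Baseline: typical power draw (kW) over a normal production cycle.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
# New cycle: one reading creeps well above the normal band.
readings = [10.1, 10.0, 12.5, 9.9]
print(detect_anomalies(baseline, readings))  # [2]
```

A production system would use rolling baselines per machine and per cycle phase, but the principle – compare against learned normal behavior, alert on deviation – is the same.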
Moving up along the curve of maintenance magic… let’s apply true PdM in an advanced example with the same injection molding machine: Suppose we witness a power consumption increase of 5% for a few hours. The next day, temperatures across a few machine components exceed the normal threshold by 10% for a few hours. The following day, the equipment vibrates at a slightly higher frequency for a few hours. Hey presto! The data model knows that there will be a bearing failure sometime between 5,000 and 7,000 cycles from now. The data model knows this because 50,000 of these machines around the world have been reporting condition data back to the mothership for five years, and we’ve all the while been getting smarter and smarter about what behavior is anomalous and what it signals. That’s the golden goose, and some people are doing that. The aerospace industry, for example, can model some of its critical assets with that kind of precision, but this is driven by the cost (and catastrophe) of equipment failure in that sector. PdM technology vendors like to pretend they have lots of customers doing stuff like this. Spoiler: they don’t have lots of customers doing stuff like this.
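A model with that fidelity comes from training on fleet-wide condition data, but the shape of the logic can be sketched. Everything below is hypothetical – the thresholds and the 5,000-7,000-cycle window simply mirror the narrative above, standing in for what a trained model would learn:

```python
def predict_bearing_failure(daily_readings):
    """Toy precursor-sequence matcher for the injection molding example.
    daily_readings: list of dicts of percentage deviations from baseline.
    Looks for the three precursors on consecutive days; all thresholds
    and the returned cycle window are hypothetical."""
    for i in range(len(daily_readings) - 2):
        d0, d1, d2 = daily_readings[i:i + 3]
        power_up = d0["power_pct"] >= 5    # day 1: power draw up ~5%
        temp_up = d1["temp_pct"] >= 10     # day 2: component temps up ~10%
        vib_up = d2["vibration_pct"] > 0   # day 3: vibration frequency elevated
        if power_up and temp_up and vib_up:
            return (5_000, 7_000)          # predicted failure window, in cycles
    return None

history = [
    {"power_pct": 5, "temp_pct": 0, "vibration_pct": 0},
    {"power_pct": 2, "temp_pct": 10, "vibration_pct": 0},
    {"power_pct": 1, "temp_pct": 4, "vibration_pct": 3},
]
print(predict_bearing_failure(history))  # (5000, 7000)
```

The real version replaces the hand-written rules with a model trained on those 50,000 machines’ histories – the point of the sketch is only that PdM maps a *sequence* of precursors to a failure window, rather than alerting on a single deviation.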
The companies selling PdM solutions come in all shapes and sizes, and they largely fall into the latter three stakeholder categories from above (machine vendors, machine servicers, and pure-play PdM vendors). The key evaluation metric is ROI – the amount of savings compared with the cost of the solution and the resources necessary to deploy/operate it. Second is payback period – the amount of time required to recoup the initial investment and then begin profiting from it. To compare the relative performance of PdM solutions, customers look at the accuracy of predictions and how far in advance they are triggered. Operators will begin to ignore a PdM system that spits out false positives (additionally, some operators feel threatened by PdM solutions, but that’s a story for its own post).
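The two financial metrics are simple arithmetic, which makes them easy to compare across vendors. With illustrative numbers only (a hypothetical $200k solution that avoids $500k of downtime per year):

```python
def roi(savings: float, total_cost: float) -> float:
    """ROI as net gain divided by cost, expressed as a fraction."""
    return (savings - total_cost) / total_cost

def payback_period_years(total_cost: float, annual_savings: float) -> float:
    """Years until cumulative savings recoup the initial investment."""
    return total_cost / annual_savings

print(roi(500_000, 200_000))                   # 1.5 -> 150% in year one
print(payback_period_years(200_000, 500_000))  # 0.4 -> roughly five months
```

The catch noted above is that the inputs are only as good as the predictions: false positives inflate the “savings” on paper while eroding the operators’ trust that makes any of it real.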
Beyond the financial value and solution performance metrics, there are other fundamental comparisons to be made across PdM solutions:
- Do they sell hardware and/or software? How much of “the stack” is provided by a single vendor? The ones that do both have lower barriers to entry into many industrial environments.
- Do they focus on specific assets or aim broadly at all industrial equipment? Asset-specific focus is compelling, but narrows the addressable market. It’s a delicate balance.
- Do they bring historical datasets with them? Can they train on the customer’s historical datasets? They should!
This is the beginning of a series on the topic of PdM. Future posts will expand on each of these bullets, and much more. As always, I’d welcome your input!