Articles

From Simple Metrics to IoT: Mining's Big (and difficult) Shift

It's no surprise that the majority of Australian Mining companies don't make full use of their data to manage their (expensive) assets better. Industry seems to think this is just an adoption issue, where "if it ain't broke, don't fix it" comes to mind. But the reality is that there are deeper underlying reasons why Mining is lagging behind other major industries such as Oil and Gas, Aviation and Power Generation, and yes... what we do with all our data plays a big part.

It's not complex: if you want to improve the reliability of your assets, you need to follow a simple process... Identify, Prioritise, Plan and Execute. You just need to do these four things well to push the performance of your assets to the next level. HOW you do these four steps is the difference between an organisation that is leading and one that is lagging.

In the Australian Mining industry, I've seen the planning and execution side usually done quite well, but the identification of issues (and how early they're caught) is something I've seen done poorly time and time again.

In a general sense, a typical Reliability Engineer at a mine will analyse historical failure data to identify trends and prioritise areas of downtime. They will analyse condition monitoring results such as oil samples, thermal images and basic vibration measurements to identify whether a problem may be present before a failure happens. So, does this improve asset reliability? Absolutely! ... but only to an extent.
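To make that concrete, here's a minimal sketch of the kind of manual prioritisation described above: a Pareto analysis of downtime by failure mode. The file and column names are hypothetical stand-ins for whatever your CMMS export actually contains.

```python
# Sketch: Pareto-style prioritisation of downtime from work-order history.
# Column names (failure_mode, downtime_hours) are hypothetical; substitute
# whatever your CMMS export actually uses.
import pandas as pd

history = pd.read_csv("work_orders.csv")  # hypothetical CMMS export

# Total downtime per failure mode, worst first
pareto = (history.groupby("failure_mode")["downtime_hours"]
                 .sum()
                 .sort_values(ascending=False))

# The classic 80/20 cut: which failure modes drive 80% of downtime?
cumulative = pareto.cumsum() / pareto.sum()
top_offenders = pareto[cumulative <= 0.80]
print(top_offenders)
```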

You see, this process is still largely "manual", and looking at the bigger picture, we've already had to fail a lot of components just to generate that historical data. So it's far from perfect.

We know that the next stage is to have computers and algorithms do the heavy lifting and automation. We are heading in a direction where the machine tells us its health status based on multiple sensor readings: a major step towards the 4th industrial revolution. We have tremendous amounts of data available at our fingertips that we fail to use properly!

So, what's really holding us back? I can condense this down to three sub-areas:

1. Knowledge

Visit a typical Coal mine in Australia and meet the Reliability Engineer. Many of these professionals have roots in the trades and have advanced through experience; it's less common to encounter someone in this role with a traditional university background. The essence of Reliability Engineering in heavy industries, however, is a balance between practical experience and theoretical understanding, and the problem is that an individual is usually only strong on one side of the equation. A desire to learn the other half is what's needed.

We can't expect tradespeople to understand machine learning, writing code and advanced statistics, and vice versa, we can't expect university engineers to know how to replace final drives or the intricacies of complex equipment.

So let's talk about upskilling and training. I've been lucky enough to attend a few very pricey $$$ Reliability Engineering courses during my career in the great state of Queensland, and soon realised that they teach you exactly what you're already doing on the job, or what is expected of a "reliability engineer" in the mining industry: the same as we've been doing for the last 15 years. (This still rubs me the wrong way and is a direct motivator for me to develop my own course.)

Do you see the flawed thinking there? We aren't teaching engineers what they NEED to know and what to STRIVE for. The people creating and teaching these courses have failed to keep up with the latest technologies and with how they will change the landscape and the required skills of the Reliability Engineer.

Having a mixed knowledge model (practical trades and university engineers) works better, but still doesn't quite deliver the performance we require. Although we now technically have a lot of in-house knowledge, we don't have a system architecture to process the copious amounts of data into meaningful insights, and that is another skill entirely, which brings us to our second sub-area...

2. IoT Infrastructure

Most mines already have some infrastructure in place: sensors that communicate with your SCADA or machine control system. This data has a purpose, however, and that is to control a system or process, and it usually lives in its own isolated infrastructure. Pushing this data to a server for processing through machine learning algorithms isn't hard, and neither is running the algorithm in the cloud through AWS or Google Cloud. What's difficult is designing the data system to be reliable and ensuring that false data is handled appropriately so the algorithms don't get thrown into disarray.

Another issue is how accessible and customisable you make this system for the end user. Give too much access and they can tamper with it to the point where the output is complete rubbish. Give too little access and they'll be calling you at 3 am to fix a false alarm that won't turn off. Building this user-friendly system requires specialised skills, and it is mostly left to IT consultancies and maths geniuses. Setting all this up requires a significant push from within the company to bring to fruition, usually by a tech champion.
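What might "handled appropriately" look like in practice? Below is a minimal sketch of a validation gate that quarantines implausible or flatlined readings before they ever reach a model. The sensor kinds, ranges and field names are illustrative assumptions, not a description of any particular SCADA setup.

```python
# Sketch: a basic validation gate that quarantines suspect sensor readings
# before they reach any ML model. Sensor kinds, thresholds and field names
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Reading:
    kind: str      # e.g. "bearing_temp_c" (hypothetical sensor kind)
    value: float

# Plausible physical ranges per sensor kind (illustrative values)
VALID_RANGE = {
    "bearing_temp_c": (-10.0, 150.0),
    "vibration_mm_s": (0.0, 50.0),
}

def gate(reading: Reading, recent_values: list[float]) -> bool:
    """Return True if the reading should be passed through to the model."""
    lo, hi = VALID_RANGE.get(reading.kind, (float("-inf"), float("inf")))
    if not lo <= reading.value <= hi:
        return False  # physically implausible reading: quarantine it
    if len(recent_values) >= 5 and all(v == reading.value for v in recent_values[-5:]):
        return False  # value has flatlined, sensor is likely stuck
    return True
```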

3. Data Quality and Linkage to Past Failures

Some will say that what's really holding back Mining's adoption of machine learning is that the data isn't properly linked to failures, so ML algorithms cannot predict a failure. This is only partially true, and a misleading statement. Unsupervised ML algorithms can detect anomalies perfectly well without a clear failure linkage. Of course it's desirable to have the failure linkages so that we can use supervised ML algorithms and get an estimate of time-to-failure, but baby steps... We must train the models on good quality data, and the system must be smart enough to filter out erratic and false sensor signals. This is easier said than done, and a lot of ML models suffer from drift over time and must be retrained, a task that requires specialised knowledge. Don't be fooled, this is a full-time job in itself for a big operation!
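As an illustration of that first baby step, here is a minimal sketch of unsupervised anomaly scoring using scikit-learn's IsolationForest. The feature table and column names are hypothetical; the point is only that no failure labels are required to start flagging unusual machine behaviour.

```python
# Sketch: unsupervised anomaly scoring on sensor features with scikit-learn.
# No failure labels are needed; the model flags readings that look unlike
# the bulk of historical behaviour. Feature names are illustrative.
import pandas as pd
from sklearn.ensemble import IsolationForest

features = pd.read_csv("sensor_features.csv")  # e.g. hourly aggregates per machine
X = features[["bearing_temp_c", "vibration_mm_s", "oil_pressure_kpa"]]

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(X)

# predict: -1 = anomaly, 1 = normal; lower score_samples = more anomalous
features["anomaly"] = model.predict(X)
features["score"] = model.score_samples(X)

# Drift: the fit step can't be one-off. Periodically refit on a recent
# window so the model's notion of "normal" tracks operational changes.
```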

A plan to move forward...

- To close the knowledge gap, we must have a reliability department with a good mix of practical trades experience and university-educated engineers who can code and are up to date with the latest tech. Training courses must focus on the future of reliability engineering, not on antiquated methods and techniques that will become obsolete in the near future.

- Let your IoT infrastructure be designed by an expert firm. Do it properly. Let them teach your workforce how it works through training packages and desk-to-desk learning; employee churn should not destroy the knowledge built into the system. This firm should become your trusted partner, to the point where, in a sense, you almost absorb some of their employees into your organisation. This relationship is how you prevent momentum from stalling during the transition and how new employees get up to speed faster.

- Design the system so it's easy for the user to link data to failures when they occur, and start planning a transition from Unsupervised learning to Supervised (see the sketch after this list).
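For what that transition could look like, here is a minimal sketch that joins a failure log onto a feature table to create "fails within the next 7 days" labels, then fits a simple classifier. The table layouts, column names and the 7-day horizon are all illustrative assumptions.

```python
# Sketch: turning logged failures into labels for supervised learning.
# Assumes a features table keyed by (machine_id, timestamp) and a manually
# maintained failure log; both layouts are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

features = pd.read_csv("sensor_features.csv", parse_dates=["timestamp"])
failures = pd.read_csv("failure_log.csv", parse_dates=["failed_at"])

HORIZON = pd.Timedelta(days=7)  # label = "fails within the next 7 days"

def label_row(row) -> int:
    # 1 if this machine has a logged failure within the horizon, else 0
    when = failures.loc[failures["machine_id"] == row["machine_id"], "failed_at"]
    return int(((when > row["timestamp"]) &
                (when <= row["timestamp"] + HORIZON)).any())

features["fails_soon"] = features.apply(label_row, axis=1)

X = features[["bearing_temp_c", "vibration_mm_s", "oil_pressure_kpa"]]
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X, features["fails_soon"])
```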

It goes without saying that, as with any business transition, none of the above happens without buy-in and support from the company's management. Management must first believe in the vision and the process in order for the above steps to have a real and lasting effect. The points I've written are by no means exhaustive, but I would love to hear your thoughts in the comments.

Miguel Pengel