If you’re experiencing high maintenance costs and unplanned downtime, you likely need to review your reliability strategy. Avoid these common pitfalls to ensure the review is a solid investment in performance rather than a drain on company resources.
Reviewing your reliability strategy and implementing Asset Strategy Management can increase the availability of your assets by up to 6x and reduce reactive maintenance by as much as 50%.
On the other hand, many pitfalls can derail a reliability strategy review and lead to ineffective outcomes. In this blog, we share some of the most common pitfalls and what you can do to avoid them.
Poor site engagement and use of resources
A lack of site engagement can significantly undermine the success of a reliability strategy review and result in low ownership or even poor acceptance of the revised strategies. To mitigate this, upfront education and communication are key. Clearly document the case for change and communicate the key points to stakeholders at every site.
As you progress with the review, it is also important to gather site feedback so that any revised strategies reflect actual assets and are aligned with each site's operating environment.
Be mindful, though, of how much feedback is actually required: over-reliance on site resources can introduce inconsistencies into your organization's reliability strategies, and it is inefficient without adding engagement.
Rather than asking sites to review the wording of failure modes or tasks that have been set globally or centrally, involve them in reviewing genuinely site-specific information such as materials and operating context.
Undefined outputs and scope
When starting a review project, most organizations are clear on their high-level objective, which is typically to optimize their reliability strategies against cost and risk. What is often left undefined is the next layer of detail and the ultimate scope of the review.
The danger is a disconnect between what is delivered and what is required or expected by sites or other stakeholders. For example, a project may culminate in the delivery of a basic task listing (a simple list of tasks to be done on each asset) when the site was expecting full task and work instructions complete with materials and corrective task lists, all loaded into the CMMS/EAM.
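To make that gap concrete, here is a minimal sketch (in Python, with hypothetical field names; no particular CMMS vendor or schema is implied) contrasting a basic task listing with the fuller record a site may expect to see loaded:

```python
from dataclasses import dataclass, field

@dataclass
class BasicTaskListing:
    """A basic task listing: just what to do on each asset, and how often."""
    asset_id: str
    task_description: str
    frequency_weeks: int

@dataclass
class FullTaskRecord:
    """A fuller deliverable, closer to what sites often expect in the CMMS/EAM."""
    asset_id: str
    task_description: str       # what to do
    frequency_weeks: int
    work_instructions: str      # how to do it, step by step
    materials: list[str] = field(default_factory=list)         # parts and consumables
    corrective_tasks: list[str] = field(default_factory=list)  # secondary actions if a defect is found
```

Every field in the second record that is absent from the first is a deliverable someone must agree to produce, or explicitly exclude, at project initiation.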
To avoid this pitfall, agree on outcomes at the initiation of the project and ensure that deliverables are clear and unambiguous. To help, consider these questions:
- What plans are already in place and what’s required to deliver an uplift in performance?
- Are task instruction documents (what to do) needed? Or work instruction documents (how to do it) as well?
- Do materials and secondary action tasks (corrective tasks) need to be included?
- Are full cost and resource forecasts required?
- Are load sheets required or will updates be conducted in the CMMS/EAM?
It can also be advantageous to complete a small section of the review all the way through to implementation. This lets every stakeholder see the outputs and deliverable formats, and it builds confidence in the wider project by testing, proving, and demonstrating the process the whole project will follow.
As for scope, this too should be defined at the outset of the project and may not be as straightforward as you think. For example, while it may seem logical to focus a review on a list of, say, critical assets, the grouping of tasks (such as route-based work that spans both critical and non-critical equipment) may cause issues when it comes time to implement the new strategies.
There are two ways to mitigate this and other risks associated with scope boundaries:
- Analyze all equipment items and all tasks within a specified system that covers all packages of work (including route-based). In other words, set the scope boundary based on a system or area of plant, not by criticality.
- If setting the scope based on selected items within an area or system, complete a very robust check prior to implementation to ensure no tasks are turned off unintentionally; a minimal sketch of such a check follows this list.
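As one illustration of that check, the current CMMS export can be diffed against the proposed load sheet before anything goes live. The sketch below is a minimal example, assuming simple CSV exports with hypothetical `task_id` and `status` columns; real export formats vary by CMMS/EAM:

```python
import csv

def tasks_turned_off(current_export: str, proposed_load: str) -> set[str]:
    """Return IDs of tasks that are active in the current CMMS export
    but absent from the proposed load sheet."""
    with open(current_export, newline="") as f:
        # Tasks currently active in the CMMS/EAM.
        active = {row["task_id"] for row in csv.DictReader(f)
                  if row.get("status", "").upper() == "ACTIVE"}
    with open(proposed_load, newline="") as f:
        # Tasks that will exist after the new strategies are loaded.
        proposed = {row["task_id"] for row in csv.DictReader(f)}
    return active - proposed

# Any ID returned needs an explicit, documented decision before go-live:
# either a deliberate retirement or a correction to the load sheet.
for task_id in sorted(tasks_turned_off("current_tasks.csv", "proposed_tasks.csv")):
    print(f"Review before implementation: {task_id}")
```

Even a simple diff like this turns "check nothing was missed" from a hope into a gate.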
Never-ending reviews
It is easy to get trapped in a cycle of continuous review, often through a misguided attempt to gain engagement or by trying too hard to achieve consensus. For instance, a data sheet or report distributed to multiple representatives may come back with conflicting feedback, leading to confusion and ongoing refinement.
The key here is to set the process and scope upfront: specify the review points and timing, and be explicit in educating and communicating with stakeholders about the process. You should also have a mechanism to capture any inputs or updates that fall outside the process but may be required in the future.
Set yourself up for success
Next time you embark on a reliability strategy review, make sure you have strategies in place to overcome these pitfalls and think through what resources will be required to implement the outcomes of the review. If done right, these reviews can deliver valuable savings and increases in performance, but like most important projects, they require careful planning and due diligence to achieve the right outcomes.