Poor Implementations Limit Utilization

Typical Problems

  1. The leadership role (executive sponsor) for asset management is missing; or, current leadership does not believe the CMMS will ever be anything more than a work-execution tracking system (where even failure coding is too much to expect). There is no core team, no reliability team, no business analyst, and no Long Range Plan.
  2. Planning and scheduling happen in name only; the process is tedious and yields minimal productivity gains.
  3. Poor MRO setup: non-standardized descriptions; no criticality values; no commodity codes; no stock-out tracking; no mobile solution for cycle counts, issuing, or receiving; and no automation features activated, such as auto-reorder or automatic spare-part add.
  4. There is not enough actionable equipment history: the information collected does not come from validated fields, is not deep enough (i.e., too generic or incomplete), and is of little use to the Reliability Engineer. Narrative text might exist, but no one wants to read through hundreds of work orders to find it.
  5. The CMMS PM-PdM-job plan library exists, but its validity is difficult to determine: there is no link to RCM analysis, no standardized PM work-order feedback, and no attempt to continuously refine maintenance strategies as defined by RCM analysis (or a similar process).
  6. Master data (foundation and transactional) does not support reliability analytics; no one ever linked the required inputs to the desired analytical outputs.
  7. It's hard to pull any meaningful analytics or KPIs out of the system (a minimal example of this kind of KPI is sketched just below).
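
As an illustration of what item 7 refers to, here is a minimal sketch, assuming a simple work-order extract, of a basic reliability KPI (mean time between failures per asset). It is written in Python/pandas purely for illustration; the file name and column names (asset_id, work_type, failure_date, downtime_hours) are hypothetical, not any particular CMMS schema.

```python
import pandas as pd

# Hypothetical CMMS export: one row per work order.
wo = pd.read_csv("work_order_history.csv", parse_dates=["failure_date"])

# Keep corrective work only and order events per asset.
corrective = wo[wo["work_type"] == "CORRECTIVE"].sort_values(["asset_id", "failure_date"])

# Days between consecutive failures on the same asset.
corrective["tbf_days"] = corrective.groupby("asset_id")["failure_date"].diff().dt.days

# Basic KPIs per asset: failure count, MTBF, total downtime.
kpi = corrective.groupby("asset_id").agg(
    failures=("failure_date", "count"),
    mtbf_days=("tbf_days", "mean"),
    downtime_hours=("downtime_hours", "sum"),
)
print(kpi.sort_values("failures", ascending=False).head(10))
```

If even this simple roll-up cannot be produced, it usually points back to items 1 through 6: the inputs were never set up to support the output.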

Expanding on #7 -- Meaningful Analytics

With the advent of Artificial Intelligence, many more advanced capabilities will soon be available. But right now, today, a properly designed analytical report from the CMMS can provide real value to the organization. When one talks about a "knowledge base", I would put analytical reporting at the top of the list for informing leadership on where to focus.

What exactly is an Analytical Report?

This type of output can be used for making more informed decisions.

  • Any output that provides information not readily available through a standard query.
  • Uses validated data (as opposed to free-text fields).
  • Evaluates multiple records.
  • Often uses aggregate functions (e.g., group by).
  • The output can be a two-step process: the user (1) selects an asset from the bad-actor list, then (2) chooses a pie-chart wedge and dynamically drills down into the failure mode to arrive at the cause (a minimal sketch of this pattern follows this list).
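
As a concrete, purely illustrative sketch of the bullets above, the following Python/pandas snippet performs the aggregate step and then the two-step drill-down against a hypothetical work-order extract. The column names (asset_id, failure_code, failure_cause, repair_cost) are assumptions, not a real CMMS schema, and the "drill-down" is shown as plain dataframe filtering rather than an interactive pie chart.

```python
import pandas as pd

# Hypothetical work-order extract with validated (coded) fields.
wo = pd.read_csv("work_order_history.csv")

# Aggregate ("group by") across many records to rank the bad actors.
bad_actors = (
    wo.groupby("asset_id")
      .agg(failures=("failure_code", "count"), repair_cost=("repair_cost", "sum"))
      .sort_values("failures", ascending=False)
)
print(bad_actors.head(10))

# Step 1: the user selects an asset from the bad-actor list (top one here).
asset = bad_actors.index[0]

# Step 2: drill down into that asset's failure modes, then into causes.
modes = wo.loc[wo["asset_id"] == asset, "failure_code"].value_counts()
top_mode = modes.index[0]
causes = wo.loc[
    (wo["asset_id"] == asset) & (wo["failure_code"] == top_mode), "failure_cause"
].value_counts()
print(modes.head(), causes.head(), sep="\n")
```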

Most Organizations Struggle to Leverage Analytics

Unfortunately, most (>90%) organizations do not have these reports in place. The reasons are numerous:

  1. Leadership assumed they came with the base product.
  2. Leadership is unsure what to design.
  3. The vendor provides "analytical report software", but this is not a report, per se.

The Intent of Analytical Reporting

The intent is to know where to focus. In a perfect world, you can (1) prevent failure modes from occurring, (2) get early notification of potential failures, and (3) discover problems with defect-elimination walk-downs. But if and when failures still occur, you should be able to look back at the failure history and discover patterns. This is the beauty of analytical reporting.
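
One way to "look back at the failure history and discover patterns", sketched here only as an illustration, is to flag repeat occurrences of the same coded failure mode on the same asset within a short window. The column names and the 90-day window are assumptions, not a prescribed standard.

```python
import pandas as pd

wo = pd.read_csv("work_order_history.csv", parse_dates=["failure_date"])
wo = wo.sort_values(["asset_id", "failure_code", "failure_date"])

# Days since the previous occurrence of the same coded failure on the same asset.
wo["days_since_last"] = (
    wo.groupby(["asset_id", "failure_code"])["failure_date"].diff().dt.days
)

# Treat a repeat of the same failure mode within 90 days as a chronic pattern.
chronic = wo[wo["days_since_last"] <= 90]
print(
    chronic.groupby(["asset_id", "failure_code"])
           .size()
           .sort_values(ascending=False)
           .head(10)
)
```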

What if You Don't Know How This Report Should Be Designed?

Most organizations have NOT spent any time thinking about this subject. But they should. Let's say you have linear or distributed assets. The Reliability Engineer might set up a weekly meeting to bring asset management specialists together, with a projected screen showing a GIS map with dots marking repair events. They look for concentrations and draw a circle around them. These events can then be linked back to the CMMS, where you pull up all related work orders. An analytical report can then be used to aggregate data based on recurring failures. Once the bad actors (assets) are identified, this same report should allow dynamic drill-down into the failure mode (and cause). At this point, corrective action can be determined.
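
A rough, assumption-laden sketch of that weekly review in code: group repair events into coarse geographic bins (a stand-in for circling concentrations on the GIS map), find the densest bins, and pull the related work orders for drill-down. The file name, column names (latitude, longitude, failure_code), and the 0.01-degree bin size are hypothetical.

```python
import pandas as pd

# Hypothetical extract of GIS-located repair events linked to work orders.
events = pd.read_csv("repair_events.csv")

# Coarse spatial binning (roughly 1 km at mid latitudes) stands in for
# circling concentrations of dots on the projected GIS map.
events["lat_bin"] = events["latitude"].round(2)
events["lon_bin"] = events["longitude"].round(2)

hotspots = events.groupby(["lat_bin", "lon_bin"]).size().sort_values(ascending=False)
print(hotspots.head(5))

# Pull the repair events behind the densest bin and drill into failure modes.
lat, lon = hotspots.index[0]
cluster = events[(events["lat_bin"] == lat) & (events["lon_bin"] == lon)]
print(cluster["failure_code"].value_counts())
```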

What if the Data is Bad?

It's hard to talk about advanced report designs when the data may not be supportive. But even if your data is "suspect", it is still a good first step to draw up the report. At least you will know where to focus (data-wise). By focus I mean (1) adding fields to CMMS entry screens if necessary, (2) fixing choice lists, (3) fixing procedures, (4) conducting training, and (5) starting to perform data/process audits.
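
Even before the data is trustworthy, a simple audit can show where to focus. The sketch below, with hypothetical column names and an example choice list, reports how many work orders are missing a failure code and how many carry values that are not on the approved list.

```python
import pandas as pd

wo = pd.read_csv("work_order_history.csv")

# Example choice list; in practice this would come from the CMMS configuration.
approved_codes = {"BRG-WEAR", "SEAL-LEAK", "MISALIGN", "ELEC-FAULT"}

total = len(wo)
missing = wo["failure_code"].isna().sum()
invalid = (~wo["failure_code"].dropna().isin(approved_codes)).sum()

print(f"work orders:              {total}")
print(f"missing failure code:     {missing} ({missing / total:.0%})")
print(f"code not on choice list:  {invalid} ({invalid / total:.0%})")
```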

What if we don't have a Reliability Engineer?

You don't have to be a reliability engineer to draw up this report. The markup might just be pen on paper. And of course, I have a book on this called Failure Modes to Failure Codes. Perhaps the best advice is to draw up the report and then worry about the data.

For a description of the process, see the Chronic Failure Analysis posting.

For a sample report, see Failure Analytic Report -- an example.

Did you know many universities now offer a degree in Business Analytics?

  • Seattle University
  • Cornell
  • Drexel
  • UNMC
  • Villanova

Osmel R. Maestre MBA, BSIE, LEED AP, PMP, RE Broker

Assistant Director, Internal Services Department - Countywide Services, Miami-Dade County

5y

I totally agree. Especially #6, as Root Cause Analysis is critical for continuous improvement and customer satisfaction.

Chris Smith

I turn supportability problems into supportability solutions

5y

John, would you not want to select your candidate items first? If you carry out failure analytics on all components, you increase your workload for little or no benefit. Selecting candidate items using specific criteria first (e.g., an item on a system, sub-system, equipment module, or sub-module that is repairable or requires maintenance support) reduces your workload and assists in identifying only those items that could cause lack of availability or downtime.
