What Does Failure Data Really Tell Us?
I read an interesting post from a noted data scientist recently. I have copied the link and quoted a few passages I felt resonated. The probing question is really this: can data science find its way to helping us with reliability?
Vincent Granville's post is worth a read, as it raises an interesting question. I encourage you to divert, give it a read, and then come back.
https://www.datasciencecentral.com/profiles/blogs/biased-vs-unbiased-debunking-statistical-myths
It seems there are more than just a few new ways to look at data, and within the haze of the umbrella called data science, there is rumored to be a method that works to somehow draw a conclusion from that data. But what if that blob of data were failure data? With all of the data science floating around, is Weibull analysis really that antiquated, or does it still have a place in reliability? While I am a bit old school, I am open to the possibility that we could employ one of the many Matlab routines available, or some R code, to forge a new horizon.
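Since I mentioned R, here is a minimal sketch of what a classic Weibull fit looks like there, using the MASS package and some made-up failure times (both choices are mine for illustration, not anything from Vincent's post):

```r
# Maximum-likelihood Weibull fit on complete (uncensored) failure data.
# The failure times below are invented purely for illustration.
library(MASS)

times <- c(120, 340, 410, 560, 700, 890, 1100)  # hours to failure

fit <- fitdistr(times, densfun = "weibull")
print(fit)  # ML estimates of shape and scale, with standard errors

# Rough reading of the shape parameter:
#   shape < 1 -> infant mortality, shape ~ 1 -> random failures,
#   shape > 1 -> wear-out
```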
We cannot ignore that we are creatures of habit, good or bad. Is reliability stuck in the mud? When reliability pros turn to statistics and probability, are we missing something fundamental and theoretically sound? There is certainly a "way of thinking" that has been taught over the years that makes it difficult to consider a different view. For example, Vincent writes:
"Anyone who attended statistical training at the college level has been taught the four rules that you should always abide by, when developing statistical models and predictions:
- You should only use unbiased estimates
- You should use estimates that have minimum variance
- In any optimization problem (for instance to compute an estimate from a maximum likelihood function, or to detect the best, most predictive subset of variables), you should always shoot for a global optimum, not a local one.
- And if you violate any of the above three rules, at least you need to make sure that your estimate, when the number of observations is large, satisfies them."
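To see why the first rule gets drilled in, here is a tiny R simulation of my own (not from Vincent's post): the maximum-likelihood variance estimator divides by n and comes in biased low, while dividing by n - 1 removes the bias.

```r
# Compare the biased (divide by n) and unbiased (divide by n - 1)
# variance estimators over many small samples from a known population.
set.seed(42)
n <- 10
sims <- replicate(10000, {
  x <- rnorm(n, mean = 0, sd = 2)          # true variance = 4
  c(mle      = sum((x - mean(x))^2) / n,   # biased ML estimator
    unbiased = var(x))                     # R's var() divides by n - 1
})
rowMeans(sims)  # mle averages near 3.6, unbiased near 4.0
```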
My impression is that while statistics seems to work well with large amounts of data, Reliability Engineers would be unemployed if they had large amounts of "failure data". The implication is that with a large number of failures, we are not very good at reliability. So it is a given that "good" Reliability Engineers must work with diminishing amounts of failure data and still find a way to see through the haze and make a good decision.
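That scarcity is where life-data methods still earn their keep: even with only a few failures and mostly survivors, a censored Weibull fit extracts information from the units that have not failed yet. Here is a sketch in R using the survival package, with invented numbers (my illustration, not a real dataset):

```r
# Weibull fit with right-censored data: the surviving units still
# carry information about how long things last.
library(survival)

time   <- c(105, 220, 380, 500, 500, 500)  # hours; last three still running
status <- c(1, 1, 1, 0, 0, 0)              # 1 = failed, 0 = censored

fit <- survreg(Surv(time, status) ~ 1, dist = "weibull")

# survreg uses a log-linear parameterization; convert back to the
# familiar Weibull shape (beta) and characteristic life (eta)
beta <- 1 / fit$scale
eta  <- unname(exp(coef(fit)))
cat(sprintf("shape (beta) = %.2f, scale (eta) = %.0f hours\n", beta, eta))
```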
I ask you - is there a future for Reliability with Data Science?
Chief Reliability Engineer
6 years ago: Hi Kirk - Yes, an old post, but LI is good at refreshing old topics. I find it interesting that we still forge ahead knowing that good data exists, but we forge without it. I would like to read your paper, but it did not come through attached. Can you resend it and make it available to my network?
Phillip, I realize this is an old post, but the answer is certainly yes. Statistics and models have not provided, and cannot provide, much benefit in producing reliable electronics systems. Modeling does have a role in long-term wear-out, such as in energy systems (wind, PV modules), but for the vast majority of electronics, technological obsolescence precedes the intrinsic wear-out modes; this is especially true for electronics systems with no mechanical components or battery. Fundamental-cause electronics reliability data and true root-cause analysis, as you probably well know, are the most sensitive data a commercial or military equipment producer has to protect from competitors and from litigants harmed by failures. They are never shared or published except to federal investigative agencies or by court order. I have observed over 25 years of reliability development experience that most equipment failures were due to an overlooked design weakness, an error in the many tiers of suppliers' manufacturing processes, or misapplication of the product or system, not consistent wear-out mechanisms that can be statistically modeled. Please see the attached US Army paper about reliability prediction.
Data Scientist for industrial solutions
8 years ago: I believe the answer is yes :) These are some of the challenges I tackle every day, and while nothing is perfect, we've done some great work leveraging technology and analytics that extract meaning from reliability data. I am posting a blog post that summarizes, at a high level, the approach we've been taking: https://www.meridium.com/blog/machine-learning-takes-apm-next-level
Senior Reliability Technologist - Multi-discipline
8 years ago: Without data analysis you are running blind. The focus should be on failure recurrences (two or more): get to the root cause and implement the corrective actions. Take care of your operations; don't expect different results if you've changed nothing.
Creating the future in maintenance, reliability, and your organization.
8 years ago: Of course :-), most of us have more data than we know what to do with… The real question is what data is relevant to the problems we are working on, and more importantly, what are the important problems to be working on? On the subject of failure data, we should rephrase it as life data. Regardless, you should not ignore good data if it can help you make a better decision. The use of probability and statistics is not new in reliability, even if the present blossoming of data science is.