
Reliability engineering – Wikipedia, the free encyclopedia


Reliability engineering is engineering that emphasizes dependability in the lifecycle management of a product. Dependability, or reliability, describes the ability of a system or component to function under stated conditions for a specified period of time.[1] Reliability engineering is a sub-discipline within systems engineering. Reliability is theoretically defined as the probability of success (Reliability = 1 - Probability of Failure), as the frequency of failures, or, in terms of availability, as a probability derived from reliability and maintainability. Maintainability and maintenance are often defined as part of "reliability engineering" in reliability programs. Reliability plays a key role in the cost-effectiveness of systems.
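These relationships can be illustrated with a short sketch. The following Python snippet is not part of the original article; it is a minimal illustration that assumes a constant failure rate (an exponential failure model) and uses the common textbook relations R(t) = exp(-λt), F(t) = 1 - R(t), and steady-state availability A = MTBF / (MTBF + MTTR). The function names and the numeric values are hypothetical and chosen only for demonstration.

```python
import math

def reliability(lam: float, t: float) -> float:
    """Probability that the item survives to time t: R(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

def failure_probability(lam: float, t: float) -> float:
    """F(t) = 1 - R(t), i.e. Reliability = 1 - Probability of Failure."""
    return 1.0 - reliability(lam, t)

def steady_state_availability(mtbf: float, mttr: float) -> float:
    """Availability derived from reliability (MTBF) and maintainability (MTTR)."""
    return mtbf / (mtbf + mttr)

if __name__ == "__main__":
    lam = 1.0e-4   # assumed constant failure rate, failures per hour
    t = 1000.0     # mission time in hours
    print(f"R(t) = {reliability(lam, t):.4f}")                # about 0.9048
    print(f"F(t) = {failure_probability(lam, t):.4f}")        # about 0.0952
    print(f"A    = {steady_state_availability(500, 5):.4f}")  # about 0.9901
```

Under these assumptions, a longer mission time or a higher failure rate lowers R(t), while shorter repair times (better maintainability) raise availability even if reliability is unchanged.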

Reliability engineering deals with the estimation and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, according to some expert authors on reliability engineering (e.g. P. O'Connor, J. Moubray[2] and A. Barnard[3]), reliability is not (solely) achieved by mathematics and statistics. "Nearly all teaching and literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement."[4]

Reliability engineering relates closely to safety engineering and to system safety, in that they use common methods for their analysis and may require input from each other. Reliability engineering focuses on the costs of failure caused by system downtime, the cost of spares, repair equipment and personnel, and the cost of warranty claims. Safety engineering normally emphasizes not cost but the preservation of life and nature, and therefore deals only with particular dangerous system-failure modes. High reliability (safety factor) levels also result from good engineering and from attention to detail, and almost never from only reactive failure management (reliability accounting / statistics).[5]

A former United States Secretary of Defense, economist James R. Schlesinger, once stated: "Reliability is, after all, engineering in its most practical form."[4]

The word reliability can be traced back to 1816, when it was used by the poet Samuel Taylor Coleridge.[7] Before World War II the term was linked mostly to repeatability: a test (in any type of science) was considered reliable if the same results would be obtained repeatedly. In the 1920s, product improvement through the use of statistical quality control was promoted by Dr. Walter A. Shewhart at Bell Labs.[8] Around this time Waloddi Weibull was working on statistical models for fatigue, and the development of reliability engineering proceeded on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s and has evolved to the present. It initially came to mean that a product would operate when expected (nowadays called "mission readiness") and for a specified period of time. Around World War II and afterwards, many reliability issues were due to the inherent unreliability of electronics and to fatigue. In 1945, M.A. Miner published the seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application of reliability engineering in the military was the vacuum tube as used in radar systems and other electronics, for which reliability proved very problematic and costly. The IEEE formed the Reliability Society in 1948. In 1950, on the military side, a group called the Advisory Group on the Reliability of Electronic Equipment (AGREE) was formed. This group recommended the following three main ways of working:

In the 1960s more emphasis was given to reliability testing at the component and system level. The famous military standard 781 was created at that time. Around this period the much-used (and much-debated) military handbook 217 was published by RCA (Radio Corporation of America) and was used for the prediction of failure rates of components. The emphasis on component reliability and empirical research (e.g. military handbook 217) alone slowly decreased, and more pragmatic approaches, as used in the consumer industries, were adopted.

The 1980s was a decade of great changes. Televisions had become all-semiconductor. Automobiles rapidly increased their use of semiconductors, with a variety of microcomputers under the hood and in the dash. Large air-conditioning systems developed electronic controllers, as had microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document, SAE 870050, for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity was not the only factor that determined failure rates for integrated circuits (ICs). Kam Wong published a paper questioning the bathtub curve[9] (see also reliability-centered maintenance). During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems.

By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law, doubling about every 18 months. Reliability engineering was now shifting towards understanding the physics of failure. Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking became more and more important. For software, the Capability Maturity Model (CMM) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliability information being available was now replaced by too much information of questionable value. Consumer reliability problems could now be documented and discussed online in real time. New technologies such as micro-electromechanical systems (MEMS), handheld GPS, and hand-held devices that combined cell phones and computers all represented challenges to maintaining reliability. Product development time continued to shorten through this decade, and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability became part of everyday life and consumer expectations.

The objectives of reliability engineering, in the order of priority, are:[10]

The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to have knowledge of the methods that can be used for analysing designs and data.

Reliability engineering for complex systems requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:

Effective reliability engineering requires an understanding of the basics of failure mechanisms, which calls for experience, broad engineering skills and good knowledge from many different special fields of engineering,[11] such as:
