Abstract

Ever-increasing quality goals are being embraced by more and more companies as a condition of doing business. Customers expect no application problems, no quality problems, and no test escapes [1]. Vendors and customers cooperate to identify problems early and prevent quality incidents, and timely, informative failure analysis (FA) is an important source of information for this process. However, some perceptions are that microprocessor failure analysis typically has a turnaround time significantly greater than 30 days with a low percentage of root-cause identification [2]. A supplier of critical components such as semiconductors is usually required to analyze product from customers' incoming inspection, line, or field returns. Analysis of any device perceived to be a failure therefore receives considerable attention from a business perspective. The intent of this system is to develop and share improvement plans, quickly identify and implement corrective actions, and prevent defective devices from reaching the customer's manufacturing line. However, streamlining of goals and measurements intended to build a traceable record of a supplier's performance has generally resulted in simplification of metrics to those that consider only FA report turnaround time. To use critical and expensive resources such as FA and analytical laboratories more effectively, the metrics should account for more than FA report turnaround time alone. They must also include measurements of how well FA reports are prepared, that is, their presentability. The usefulness of the information contained in the reports could also be measured. Customers who receive FA reports endorse the need for this action; however, monitoring performance against this more technical goal has been perceived as difficult. This paper discusses proposals for increasing sophistication in the measurement of FA and analytical laboratory performance, using metrics involving various attributes of the device, the nature of the failure, and the relative analysis effort, all of which are relatively easy to measure. For example, there is a well-documented correlation between analysis turnaround time and device complexity as measured by gate count, and a similarly well-documented correlation between turnaround time, the number of analysis steps, and the type and sophistication of the equipment required to obtain a satisfactory answer. The management issues that face a supplier and user when confronted with a failed device or system are also discussed. Any measurement system that tracks the effectiveness of analysis effort toward root-cause determination and the production of solved problems must operate within the auspices of larger management systems such as ISO 9000/QS-9000, 8D (Eight Disciplines), and Six Sigma, and must include methods to standardize and simplify the handling of user-submitted FA requests. This paper examines the failure analysis business with the intention of providing insight for users on how to: 1) supply sufficient information to simplify and expedite the FA process; 2) determine whether a failure should be submitted; 3) prioritize the request; and 4) set practical and achievable turnaround-time estimates. There is considerable controversy on these issues, as indicated by a number of papers appearing in the Proceedings of ISTFA 1994.
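As a minimal sketch of how such a complexity-adjusted metric might be computed (this is an illustration, not the paper's formulation: the attribute set, linear model, and coefficient values are all assumptions that would in practice be fit to historical lab data):

```python
from dataclasses import dataclass

@dataclass
class FARequest:
    gate_count: int          # device complexity, in gates
    analysis_steps: int      # number of analysis steps performed
    equipment_factor: float  # relative sophistication of equipment used (>= 1.0)
    actual_days: float       # observed turnaround time

def expected_days(req: FARequest,
                  base_days: float = 5.0,
                  days_per_megagate: float = 2.0,
                  days_per_step: float = 1.5) -> float:
    """Hypothetical complexity-normalized turnaround estimate.

    Coefficients are placeholders for illustration only; a real lab
    would calibrate them against its own completed-job history.
    """
    complexity = days_per_megagate * (req.gate_count / 1e6)
    effort = days_per_step * req.analysis_steps * req.equipment_factor
    return base_days + complexity + effort

def performance_index(req: FARequest) -> float:
    """Ratio < 1.0 means the job finished faster than expected for its difficulty."""
    return req.actual_days / expected_days(req)

# Example: a 3M-gate microprocessor, six analysis steps on advanced tools,
# completed in 28 days -- close to its complexity-adjusted expectation,
# even though a raw 30-day goal would flag it as marginal.
job = FARequest(gate_count=3_000_000, analysis_steps=6,
                equipment_factor=1.4, actual_days=28)
print(f"expected: {expected_days(job):.1f} days, index: {performance_index(job):.2f}")
```

The point of such a normalization is that a flat turnaround goal penalizes labs that take on complex devices; dividing observed time by a difficulty-adjusted expectation makes performance comparable across very different jobs.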
