This paper presents a machine learning approach that uses genetic algorithms to optimize test program timing sets against first-silicon measurements. The method accounts for test-hardware differences, silicon process variation, and I/O pin interdependencies. The general theory and implementation are covered in detail, and the method's capabilities for false-fail discovery, false-fail elimination, and failure debug are demonstrated on actual product test cases.
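The paper itself is not reproduced here, but the core idea of a genetic algorithm searching over per-pin timing-edge placements can be sketched. The following is a minimal illustration, not the authors' implementation: the pin count, edge-placement bounds, and the fitness function (which stands in for pass-rate feedback from the tester on first silicon) are all hypothetical assumptions.

```python
import random

random.seed(0)

# Hypothetical setup: a timing set is one edge placement (ns) per pin.
N_PINS = 4
BOUNDS = (0.0, 10.0)  # assumed allowable edge-placement window, ns

def fitness(timing_set):
    """Toy stand-in for tester feedback.

    In the real flow the score would come from measured pass rates on
    first silicon; here we reward closeness to an assumed ideal
    placement per pin (a hypothetical hidden optimum)."""
    ideal = [2.5, 5.0, 7.5, 5.0]
    return -sum((t - i) ** 2 for t, i in zip(timing_set, ideal))

def mutate(ts, rate=0.3, sigma=0.5):
    """Gaussian-perturb each edge with probability `rate`, clamped to bounds."""
    return [min(BOUNDS[1], max(BOUNDS[0], t + random.gauss(0, sigma)))
            if random.random() < rate else t
            for t in ts]

def crossover(a, b):
    """Uniform crossover: child inherits each pin's edge from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=30, generations=60):
    pop = [[random.uniform(*BOUNDS) for _ in range(N_PINS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]  # keep the best third unchanged
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print("best timing set:", [round(t, 2) for t in best])
```

In the published method, evaluating fitness on real hardware is what lets the search absorb tester-to-tester differences and pin interdependencies that an analytic model would miss; the toy fitness function above only mimics that feedback loop.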
