Most chip producers perform delay testing to screen out chips whose timing is adversely affected by defects. Several delay fault models have been introduced to guide delay test generation, but, as with static (i.e., slow-speed) testing, the question remains of which fault models are best for ensuring quality. MEasuring Test Effectiveness Regionally (METER) is an approach for evaluating fault model effectiveness. Compared to a conventional test experiment, METER is extremely inexpensive and provides a more thorough evaluation of the quality achievable with a particular fault model. In this work, we describe an extension of METER, called DELAY-METER, that allows the effectiveness of delay fault models to be precisely evaluated. Application of DELAY-METER to production fail data from an IBM ASIC demonstrates that new and existing delay fault models can be evaluated using conventional tester response data, i.e., data logs collected from production fails through the application of tests generated with conventional fault models.