It is well recognized that a credit evaluation system predicts the probability of future behavior based on an analysis of past behavior. As such, these systems are dynamic, inasmuch as the predictions can change as behavior changes after the model is implemented. It therefore behooves lenders to monitor these changes on a periodic basis and take appropriate action as the changes warrant.
Moreover, Regulation B, which implements the Equal Credit Opportunity Act, requires that such monitoring be done periodically. More recently, regulatory bodies have suggested that systems be validated at least every six months, so once a decision model is developed and underwriting begins, evaluation of the credit system must follow.
Underwriting evaluation systems are divided into three categories:
- Judgmental, implemented as decision-tree rules set by management experience and preference, or
- Generic, based upon borrowed scorecards developed on archived or industry data, or
- Custom, developed on the lender’s own past performance experience.
Judgmental systems do not usually come with performance predictions for the approval end nodes. Where they do not, it is imperative to establish them by analyzing a sample of the portfolio at each approval end node; this creates a baseline set of predictions against which future changes can be measured. Both generic and custom models usually come with a performance projection table that gives the odds and probability of an account going “bad” at each score. These tables are the basis of the monitoring and validation activity.
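Where such a table is missing, a baseline can be tallied directly from a portfolio sample. The sketch below shows the arithmetic, assuming counts of goods and bads per score band; the band labels and counts are hypothetical, purely for illustration.

```python
def projection_table(bands):
    """Build a baseline performance-projection table.

    bands: list of (score_range, goods, bads) counted from a
    portfolio sample at each score band (or approval end node).
    Returns good:bad odds and the probability of "bad" per band.
    """
    table = []
    for score_range, goods, bads in bands:
        odds = goods / bads if bads else float("inf")   # good:bad odds
        p_bad = bads / (goods + bads)                   # probability of "bad"
        table.append({"band": score_range,
                      "odds": round(odds, 1),
                      "p_bad": round(p_bad, 4)})
    return table

# Hypothetical sample counts per score band:
baseline = projection_table([
    ("600-619", 1900, 100),   # 19:1 odds, 5% bad rate
    ("620-639", 2850, 100),
    ("640-659", 3800, 80),
])
```

The resulting table plays the same role as the vendor-supplied projection table: it is the yardstick against which later vintages are compared.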
There are basically two types of monitoring to exercise: volume and performance. A well-designed reporting system needs to be in place, either through the LOS (loan origination system) or through a reporting mechanism built for this purpose. Volume monitoring establishes whether the percentage of applicants in each score range, and in total, has shifted beyond acceptable statistical tolerance. Performance monitoring establishes whether model performance has changed beyond statistical tolerance, both overall and within each score range. Tolerance limits can be calculated and provided for each of these monitoring exercises.
The outcomes of these tests are:
- Statistical tolerances have not been exceeded. No action is required.
- Statistical tolerances have not been exceeded and the model is still ranking risk correctly, but the scores no longer match the original scale (the original performance projection); realignment is required. This is a simple exercise.
- Statistical tolerances have been exceeded. One needs to examine the strength of each individual variable in the rule set or scorecard. If one or more are no longer predictive, they should be removed and swapped with substitute variables that are now predictive, and the model recalibrated with the swapped-in variables. We do not expect this to occur more than about once a year.
- Statistical tolerances have been exceeded, and on examining the strength of the variables currently in the scorecard, one finds that too many of them are no longer predictive. In such a case, the entire model needs to be redeveloped. We do not expect this to happen more than about once every two years unless new markets have been penetrated or new products have been introduced.
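The realignment outcome above can be sketched numerically. The example assumes a conventional points-to-double-odds (PDO) scale; the PDO of 20, the 20:1 base odds at score 200, and the least-squares fitting approach are all illustrative assumptions rather than the author's stated method.

```python
import math

PDO = 20                      # points needed to double the odds (assumed)
BASE_SCORE, BASE_ODDS = 200, 20.0   # 20:1 odds at score 200 (assumed)

def expected_log_odds(score):
    """Log-odds the original projection table assigns to a score."""
    return math.log(BASE_ODDS) + (score - BASE_SCORE) / PDO * math.log(2)

def realign(scores, observed_log_odds):
    """Fit observed log-odds vs. score by least squares, then return a
    function mapping a raw score to the score whose expected log-odds
    equal the fitted (observed) log-odds -- i.e., back onto the scale."""
    n = len(scores)
    mx = sum(scores) / n
    my = sum(observed_log_odds) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(scores, observed_log_odds))
         / sum((x - mx) ** 2 for x in scores))
    a = my - b * mx
    factor = math.log(2) / PDO
    return lambda s: BASE_SCORE + (a + b * s - math.log(BASE_ODDS)) / factor
```

If observed odds at every band are half the projected odds, the fitted adjustment simply shifts every score down by one PDO (20 points), restoring the original score-to-odds relationship without touching the scorecard variables.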
Considering the dynamic nature of performance and the changing profile of the applicant population, following the above procedures ensures that credit evaluation systems remain up to date.
Moreover, the lender will remain in compliance with regulatory requirements, and a solid reporting solution will be available for inspection. Although the monitoring reports and validation can be produced in house, an outside specialist brings fresh eyes to the credit evaluation process.