Financial institutions that hadn’t adopted the new current expected credit loss (CECL) accounting standard before 2022 face implementation this year, 2023, and many have already implemented as of January 1.
To simplify implementation, many financial institutions are using services and purchased applications to streamline calculations, but it’s not as simple as pressing an “easy” button. Our deeper dives into the models are identifying issues in three key areas — data integrity; assumptions and model integrity; and methodology decision-making — that companies should watch out for as they ensure adherence to the CECL standard.
Data integrity
Loan renewals and the concept of vintage
For those institutions where loan origination data is a key data point, we’ve identified data in the core loan systems that is inconsistent with the definition of a new loan within the relevant accounting standard. This is also relevant to any institution meeting the accounting standard’s definition of a public business entity, since such entities are required to disclose loan vintages in their financial statements.
Action to consider: Be sure to understand how vintage data is derived and assess compliance with the definition of a new loan in ASC 310-20-35, paragraphs 9 through 11.
Assumptions applied and model integrity
1. Common assumptions
We’ve identified several instances in which companies make various assumptions (e.g., prepayment speeds, defaults, etc.) using multiple institutional models and tools. In certain cases, assumptions used by individual tools have differed, raising questions about whether the CECL model is using management’s best estimate.
Action to consider: Maintain a listing of common assumptions across all models and tools and evaluate any differences.
2. Effective discount rate
For some institutions with significant loan origination costs, origination fees, and purchase premiums or discounts, we’ve noted that discounted cash flow assessments routinely discount future estimated cash flows at the loan’s stated interest rate as opposed to the loan’s calculated effective yield as required by the standard.
Action to consider: Evaluate the discount rate used and measure significance periodically if not using the loan’s calculated effective yield.
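As a simplified illustration of why the discount rate choice matters, the sketch below discounts the same contractual cash flows at a loan’s stated rate and at its effective yield. All figures are hypothetical, and a real CECL model would discount expected (not contractual) cash flows on the loan’s actual payment schedule.

```python
def present_value(cash_flows, annual_rate):
    """Discount a list of annual cash flows at a constant annual rate."""
    return sum(cf / (1 + annual_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical 3-year, $100,000 loan with a 6% stated rate; assume net
# deferred fees and costs push the calculated effective yield to 6.4%.
cash_flows = [6_000, 6_000, 106_000]   # interest, interest, interest + principal

pv_stated = present_value(cash_flows, 0.06)      # par: exactly 100,000
pv_effective = present_value(cash_flows, 0.064)  # lower PV at the higher yield

# The gap between the two present values is the measurement difference
# whose significance the institution should evaluate periodically.
print(round(pv_stated, 2), round(pv_effective, 2))
```

Even a 40-basis-point spread between the stated rate and effective yield moves the present value of this toy loan by roughly 1% — a difference that can be material across a large portfolio.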
Methodology decisions
1. Loss driver analysis: Poor correlations
In some instances, our clients have used linear regression analyses to demonstrate that loss drivers are correlated with defaults or losses. At some institutions, the correlation in certain loan pools is very poor, indicating that the loss driver isn’t a good basis for estimating changes in credit risk.
Action to consider: In these cases, a regression considering leading or lagging indicators should be performed. If the correlation continues to be poor, other loss drivers should be explored.
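The lead-lag check described above can be sketched in a few lines: correlate the candidate loss driver against defaults shifted by increasing lags and see whether the relationship strengthens. The quarterly series below are hypothetical, and a production model would use a full regression with significance testing rather than a raw correlation.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(driver, defaults, lag):
    """Correlate the driver against default rates `lag` quarters later."""
    if lag == 0:
        return pearson(driver, defaults)
    return pearson(driver[:-lag], defaults[lag:])

# Hypothetical quarterly unemployment rate (%) and default rate (%).
unemployment = [3.5, 3.6, 4.8, 6.1, 5.9, 5.2, 4.6, 4.1]
default_rate = [0.20, 0.21, 0.22, 0.35, 0.48, 0.46, 0.40, 0.33]

for lag in range(3):
    r = lagged_correlation(unemployment, default_rate, lag)
    print(f"lag={lag} quarters: r={r:.2f}")
```

In this toy data set, defaults trail unemployment by about a quarter, so the one-quarter lag produces a markedly stronger correlation than the contemporaneous fit — the pattern to look for before discarding a driver.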
2. Loss driver analysis: Estimating forecasted credit losses
Many models have loss drivers embedded into the estimate of forecasted credit losses where we observed the following:
- Statistical data smoothing — In instances where management was in search of the most statistically valid correlation, we’ve identified that management inappropriately smoothed data sets for seasonally sensitive loss drivers.
- Sensitivity — In instances where insufficient loss/default data exists at the institution, we’ve identified that management has failed to test the sensitivity of changes to the loss drivers. For example, doubling the unemployment rate forecast might increase the allowance for credit losses (ACL)-to-loans percentage by only 1 basis point — a sign the model is effectively insensitive to that driver.
- Segmentation — Loans are predominantly segmented by call report code, which creates better opportunity to use peer data in the event institution-specific data is unavailable or not indicative of the credit risk outstanding. In certain instances, we’ve observed clients defaulting to other types of loans rather than considering whether peer data would be a better indicator of credit risk within a loan portfolio.
Action to consider: Ensure you’ve documented the key methodology decisions within your model, and perform a sensitivity assessment to determine whether the results are as expected. Any deviations within the sensitivity analysis may indicate that peer data could be a better basis for the estimate.
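The sensitivity concern above can be made concrete with a toy calculation. The linear loss-rate model, balances, and coefficients below are all hypothetical; the point is simply to show how to express a stress move in basis points of the loan balance.

```python
def estimated_acl(loan_balance, base_loss_rate, driver_coeff, unemployment_forecast):
    """Toy ACL estimate: a base loss rate plus a linear unemployment effect."""
    loss_rate = base_loss_rate + driver_coeff * unemployment_forecast
    return loan_balance * loss_rate

BALANCE = 500_000_000    # hypothetical loan pool, dollars
BASE_RATE = 0.0080       # 80 bps baseline loss rate
COEFF = 0.000025         # loss-rate change per point of unemployment

base = estimated_acl(BALANCE, BASE_RATE, COEFF, unemployment_forecast=4.0)
stressed = estimated_acl(BALANCE, BASE_RATE, COEFF, unemployment_forecast=8.0)

# Express the movement in basis points of the loan balance; a tiny move
# under a doubled forecast suggests the driver has little real effect.
delta_bps = (stressed - base) / BALANCE * 10_000
print(f"ACL moves {delta_bps:.1f} bps when the unemployment forecast doubles")
```

Here doubling the unemployment forecast moves the ACL-to-loans ratio by just 1 basis point — exactly the kind of result that should prompt a second look at the loss driver, the segmentation, or the use of peer data.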
Concluding thoughts on CECL implementation issues
Many models selected by institutions are complex. With institutions using multiple applications across functions, it’s possible the inherent logic, assumptions, and approaches to calculations may not align.
The bottom line? Carefully document the methodology, inputs, and assumptions your institution uses in creating its CECL models and disclosures.