IRB model uncertainty and conformal inference

While regulatory requirements concerning the Margin of Conservatism (MoC) call for addressing the general estimation error of an IRB model, practitioners often incorporate only the variability of the long-run average calculation into their modeling framework. I rarely see the uncertainty stemming from the ranking model considered under this requirement. When practitioners do address it (as part of the MoC framework or as an additional analysis) within traditional model development methods, they typically rely on the bootstrap, focusing on uncertainty in the regression coefficients and in the risk factor encoding. What I have not encountered, however, is a discussion of the coverage of the resulting bootstrap predictor. In simple terms, coverage means that when we claim our predictor produces an 80% interval, we expect that, on average, 80% of actual observations will fall within the interval generated by the model.
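
To make the coverage question concrete, here is a minimal sketch of how one could measure the empirical coverage of a bootstrap interval; the synthetic default-rate history, the single rating grade, and the normal noise model are purely illustrative assumptions, not an IRB calibration recipe. The point of the check is that an interval reflecting only the estimation uncertainty of the long-run average need not cover individual realized outcomes at the nominal rate.

```python
# Minimal sketch: measuring the empirical coverage of a bootstrap interval.
# The synthetic portfolio, the single rating grade, and the normal noise on the
# yearly default rate are illustrative assumptions, not an IRB calibration recipe.
import numpy as np

rng = np.random.default_rng(42)

# Observed yearly default rates for one grade (estimation sample), plus
# future realisations used only to measure coverage out of sample.
true_mean, sigma = 0.03, 0.01
history = rng.normal(true_mean, sigma, size=15)   # 15 years used for estimation
future = rng.normal(true_mean, sigma, size=1000)  # unseen outcomes

# Bootstrap the historical years and record the long-run average each time.
n_boot = 5000
boot_means = np.array([
    rng.choice(history, size=history.size, replace=True).mean()
    for _ in range(n_boot)
])

# 80% bootstrap interval around the long-run average.
lo, hi = np.quantile(boot_means, [0.10, 0.90])

# Empirical coverage: how often future default rates actually fall inside.
coverage = np.mean((future >= lo) & (future <= hi))
print(f"nominal 80% interval: [{lo:.4f}, {hi:.4f}], "
      f"empirical coverage of new outcomes: {coverage:.2%}")
```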

Lately, conformal inference has gained popularity as a method for quantifying model uncertainty. What sets it apart is its ability to provide a probabilistic guarantee that the true outcome falls within the chosen interval. However, the method can offer much more than straightforward uncertainty quantification.
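
For readers who have not seen the mechanics, here is a minimal split-conformal sketch showing where the coverage guarantee comes from; the gradient-boosting model, the synthetic continuous loss-severity-style target, and the absolute-residual non-conformity score are all illustrative assumptions.

```python
# Minimal split-conformal sketch: distribution-free 80% prediction intervals.
# Model choice, data, and the absolute-residual score are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic data: a continuous target as a function of one risk driver.
n = 4000
x = rng.uniform(-3, 3, size=(n, 1))
y = 0.05 + 0.02 * x[:, 0] ** 2 + rng.normal(0, 0.01, n)

# Split: proper training set vs. calibration set vs. test set.
x_tr, y_tr = x[:2000], y[:2000]
x_cal, y_cal = x[2000:3000], y[2000:3000]
x_te, y_te = x[3000:], y[3000:]

model = GradientBoostingRegressor().fit(x_tr, y_tr)

# Non-conformity score on the calibration set: absolute residual.
scores = np.abs(y_cal - model.predict(x_cal))

# Conformal quantile for 80% coverage (finite-sample corrected).
alpha = 0.20
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# Intervals on new data and the resulting empirical coverage.
pred = model.predict(x_te)
covered = (y_te >= pred - q) & (y_te <= pred + q)
print(f"empirical coverage on test set: {covered.mean():.2%}")  # ~80% by construction
```

The guarantee does not depend on the underlying model being correct: the interval width comes from the calibration-set quantile of the non-conformity scores, and the marginal coverage only requires that calibration and new observations are exchangeable.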

Over the past few months, I have experimented with conformal inference for IRB models and would like to share three insights from this exercise:

1️⃣ The importance of IRB modeling principles and the distinction between risk differentiation and risk quantification – If I were to offer a single piece of advice to risk modelers, it would be to always revisit the fundamentals. This distinction typically guides the choice of statistical methods for model development. In the context of conformal inference specifically, it helped me define and test different non-conformity measures and see how the approach could extend to other modeling setups.

2️⃣ A wrong first impression about the width of the prediction intervals – Practitioners experimenting with conformal inference often find the intervals surprisingly wide at first. On closer examination, however, the width turns out to be appropriate: it is exactly what is needed to honestly deliver the stated coverage.

3️⃣ Conformal inference is more than a simple tool for producing prediction intervals – Even if you do not use conformal inference to address regulatory requirements, I strongly recommend running the analysis. It helps you understand and pinpoint the areas where your model performs worse, so that you can then improve it with other methods (a small sketch of this diagnostic use follows below). This practice can become an integral part of your initial and periodic model validation.
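
As an illustration of the diagnostic use in point 3️⃣, the sketch below calibrates a single conformal threshold and then reports empirical coverage per segment; segments whose coverage falls clearly below the nominal level are natural candidates for a closer look at risk differentiation. The segment variable, the synthetic portfolio, and the simple "1 minus the predicted probability of the observed class" score are illustrative assumptions, not a prescribed IRB procedure.

```python
# Minimal sketch: conformal coverage as a model diagnostic.
# The segment variable, the data, and the "1 - predicted probability of the
# observed class" score are illustrative assumptions, not an IRB recipe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic portfolio with a segment label (e.g. industry or rating grade).
n = 6000
segment = rng.integers(0, 3, n)
x = rng.normal(size=(n, 2))
# The true PD depends on the segment, which the fitted model below cannot see.
p_true = 1 / (1 + np.exp(-(-2.5 + 0.8 * x[:, 0] + 0.7 * segment)))
y = rng.binomial(1, p_true)

tr, cal = slice(0, 3000), slice(3000, 6000)
model = LogisticRegression().fit(x[tr], y[tr])

# Non-conformity score on the calibration split: 1 - predicted probability
# assigned to the class that actually occurred.
p_cal = model.predict_proba(x[cal])
scores = 1 - p_cal[np.arange(p_cal.shape[0]), y[cal]]

# Global conformal threshold for nominal 80% coverage.
alpha = 0.20
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# Coverage by segment: the observed class is "covered" when its score <= q.
# Segments with coverage well below 80% point at where the model is weakest.
covered = scores <= q
for s in range(3):
    mask = segment[cal] == s
    print(f"segment {s}: coverage {covered[mask].mean():.2%} (nominal {1 - alpha:.0%})")
```

If guaranteed coverage within each segment were the goal, the threshold could instead be calibrated per segment (a Mondrian-style split of the calibration set), at the cost of needing enough calibration observations in every segment.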


If you haven't explored conformal inference yet, try it out. It can significantly improve your understanding of your model, inspire you to look at it from another angle, and ultimately make you a better modeler.
