Establishing Model Risk Management
1. Identification and inventory should include all models, applications, systems, and
end-user computing (EUC) tools. The inventory process should include a risk
assessment and tiering for each
model. It should also identify and eliminate duplicate or superfluous models.
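To make the risk tiering concrete, the sketch below shows one way an inventory record might be structured. The field names, the three-tier scheme, and the duplicate-flagging convention are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Illustrative three-tier scheme; actual tiering criteria vary by organization.
    HIGH = 1    # e.g., models driving financial reporting or trading decisions
    MEDIUM = 2
    LOW = 3     # e.g., low-impact EUC tools

@dataclass
class InventoryRecord:
    """One entry in the model inventory (hypothetical schema)."""
    model_id: str
    name: str
    owner: str
    model_type: str  # e.g., "valuation", "forecast", "EUC spreadsheet"
    risk_tier: RiskTier
    duplicates: list[str] = field(default_factory=list)  # overlapping models flagged for elimination

# Example: recording a legacy spreadsheet as a duplicate of the primary model.
primary = InventoryRecord("VAR-001", "Portfolio VaR", "Market Risk", "valuation",
                          RiskTier.HIGH, duplicates=["VAR-002"])
```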
2. The model development process should function as a first control against model risk
and should require a competent practitioner to build the model. The developer should
reference a challenger model to provide comparative results, conduct pre-
implementation testing to ensure the model works properly, and clearly communicate
with the requester and users to ensure proper usage in the future. The developer should
share key assumptions, data inputs, methodologies, and outputs with stakeholders and
solicit their feedback prior to model completion.
3. The pre-implementation phase should include performance testing against challenger
models with explanations of differences, sensitivity analysis for all inputs and
assumptions, spurious input testing, system compatibility and crash sensitivity,
monotonicity and continuity analysis, and any other tests deemed relevant.
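A minimal sketch of how such pre-implementation checks might be scripted, assuming the champion and challenger are callable functions; the placeholder models, tolerances, and test inputs below are assumptions for illustration, not a required test suite.

```python
import math

def champion(rate: float) -> float:
    # Placeholder model under test: one-year discount factor.
    if rate is None or math.isnan(rate):
        raise ValueError("rate must be a finite number")
    return math.exp(-rate)

def challenger(rate: float) -> float:
    # Placeholder challenger: simple-compounding approximation for comparison.
    return 1.0 / (1.0 + rate)

def pre_implementation_tests(inputs, abs_tol=0.01):
    # Challenger comparison: differences beyond tolerance must be explained.
    for r in inputs:
        diff = champion(r) - challenger(r)
        assert abs(diff) < abs_tol, f"explain divergence at rate={r}: {diff:.4f}"

    # Monotonicity: discount factors should fall as rates rise.
    outputs = [champion(r) for r in sorted(inputs)]
    assert all(a >= b for a, b in zip(outputs, outputs[1:])), "monotonicity violated"

    # Sensitivity: a small input bump should produce a small, finite output change.
    base, bumped = champion(0.02), champion(0.02 + 1e-4)
    assert math.isfinite(bumped - base) and abs(bumped - base) < 1e-2

    # Spurious-input test: the model should reject nonsense rather than return a value.
    for bad in (float("nan"), None):
        try:
            champion(bad)
            raise AssertionError(f"spurious input {bad!r} silently accepted")
        except ValueError:
            pass  # explicit rejection is the desired behavior

pre_implementation_tests([0.00, 0.01, 0.02, 0.05])
```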
4. Model documentation should include clear statements on objectives, conceptual model
design, inputs, development methodology, alternatives, and the criteria used to justify
selections, assumptions and parameters, unaddressed risks, and known usage
limitations. Model documents should provide sufficient detail to allow a competent
third party to fully understand and replicate the model. Screenshots of code and user
functionality are also a valuable addition, as are links to code repositories and raw data.
5. Model validation should function as a second line of defense and should require a
competent practitioner, independent of the model’s requester or developer and their
reporting lines. The validation process should verify assumptions, choices, and findings
from the development process. Validators should use the effective challenge principle
as a guideline, document their work, and retain all validation input and output data.
Interim findings should be shared with model developers to allow for iterative
refinement before issuing an approval, a conditional approval, or a rejection.
Documented approvals should include an expiration date and future re-validation
requirements.
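One way the documented approval, its expiration date, and re-validation trigger might be captured in structured form; the status values, field names, and dates below are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ValidationStatus(Enum):
    APPROVED = "approved"
    CONDITIONAL = "conditional"  # approval contingent on listed remediations
    REJECTED = "rejected"

@dataclass
class ValidationApproval:
    """Documented outcome of an independent validation (hypothetical schema)."""
    model_id: str
    validator: str            # independent of the developer's reporting line
    status: ValidationStatus
    conditions: list[str]
    expires_on: date          # lapse triggers re-validation before continued use
    evidence_path: str        # where retained validation inputs/outputs live

    def is_current(self, today: date) -> bool:
        # A lapsed or rejected approval means the model may not be used as-is.
        return self.status is not ValidationStatus.REJECTED and today <= self.expires_on

approval = ValidationApproval("VAR-001", "Model Validation Group",
                              ValidationStatus.CONDITIONAL,
                              ["resolve sensitivity finding #3"],
                              date(2026, 6, 30), "validation/VAR-001/2025Q3/")
```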
6. Implementation processes should be subject to the organization’s normal change
management and IT implementation procedures, including code lockdown, version
release testing, and change control permissions and authorities. Implementation
documentation should clearly describe the implementation context, upstream data
dependency assumptions, known downstream dependencies, communication and
training plans, pre- and post-implementation tests to be conducted, and a rollback plan if
the implementation fails. Implementation plans should ensure that adequate data
lineage can be maintained across the model’s inputs, processing components, and
outputs. Finally, implementation documentation should identify the users, their
relevant roles, access permissions, and training requirements.
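As one illustration of the data-lineage requirement, the sketch below tags each model run with a fingerprint of its input snapshot and the released code version; the hashing approach and field names are assumptions for the example, not a mandated design.

```python
import hashlib
import json
from datetime import datetime, timezone

def run_with_lineage(model_fn, model_version: str, inputs: dict) -> dict:
    """Run a model and attach lineage metadata linking inputs, code version, and output."""
    # Fingerprint the input snapshot so the exact upstream data can be traced later.
    input_hash = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    output = model_fn(**inputs)
    return {
        "output": output,
        "lineage": {
            "model_version": model_version,  # ties the run to the locked-down release
            "input_sha256": input_hash,      # ties the run to the upstream data snapshot
            "run_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example with a trivial placeholder model.
record = run_with_lineage(lambda notional, rate: notional * rate,
                          "v1.4.2", {"notional": 1_000_000, "rate": 0.031})
```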
7. Models should be restricted to intended uses. Model users should be able to run the
model, access its outputs, and receive training when necessary. Model owners should
closely supervise access permissions and usage, ensure performance parameters are