Does LLM AI Have Enough "Algorithmic Consistency" to Transform Lending? Part Two
Defining Acceptability in a Probabilistic World
In part one, we explored the idea that AI departs from traditional computer coding because it does not achieve what we call "algorithmic consistency." Put simply, traditional coding guarantees that the same inputs yield the same outputs. AI does not necessarily guarantee this.
While this fact does not disqualify banks from using AI, it is a consideration with which they must reckon. For any given function a bank wants to replace with AI, the institution must determine what constitutes an acceptable level of algorithmic inconsistency for that specific task.
- Replacing Human Processes: If an AI model achieves high consistency with only minor variance from its average result while replacing a subjective human process, the variation is likely acceptable.
- Replacing Deterministic Processes: Conversely, if an AI model replaces a deterministic process but displays a wide range of variation leading to significantly increased risk, the bank is unlikely to accept it.
A Framework for "Acceptable" Inconsistency
To define the boundaries of AI behavior, banks should consider three primary variables, illustrated in the brief sketch after this list:
- Definition of "Same or Highly Similar": This sets the threshold for what counts as a consistent answer. For an AI appraisal, this might mean returning a value within 1% of the model's central response.
- Frequency of Similar Judgment: This measures how often the model hits the similarity threshold. For example, a policy might dictate that the model must achieve a "highly similar" answer at least 95% of the time.
- Breadth of Variation: This defines the "outer limits" of acceptable error. In our appraisal example, a bank might accept occasional variances of 10% from the central response but would deem the model unacceptable if it ever misjudges a property value by 50%.
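These three variables lend themselves to straightforward measurement. The sketch below, written against a hypothetical AI appraisal model, shows one way a bank might compute them from repeated model runs on the same property. The function name, the 1%/95%/10%/50% thresholds, and the sample figures are illustrative assumptions drawn from the examples above, not a prescribed methodology.

```python
# Illustrative sketch: measuring the three consistency variables for a
# hypothetical AI appraisal model. Thresholds mirror the examples above
# and would be set by each bank's own policy.
from statistics import median

def consistency_report(valuations, similarity_pct=0.01,
                       required_frequency=0.95,
                       outer_limit_pct=0.10,
                       disqualifying_pct=0.50):
    """Evaluate repeated valuations of the SAME property.

    valuations         -- values returned by repeated model runs
    similarity_pct     -- "same or highly similar" band around the central response (1%)
    required_frequency -- how often the model must land inside that band (95%)
    outer_limit_pct    -- occasional variance the bank will tolerate (10%)
    disqualifying_pct  -- variance that disqualifies the model outright (50%)
    """
    central = median(valuations)                       # central response
    deviations = [abs(v - central) / central for v in valuations]

    frequency_similar = sum(d <= similarity_pct for d in deviations) / len(deviations)
    worst_deviation = max(deviations)

    return {
        "central_response": central,
        "frequency_similar": frequency_similar,            # variable 2
        "meets_frequency": frequency_similar >= required_frequency,
        "worst_deviation": worst_deviation,                 # variable 3
        "within_outer_limit": worst_deviation <= outer_limit_pct,
        "disqualified": worst_deviation >= disqualifying_pct,
    }

# Example: ten repeated appraisals of one property (hypothetical numbers).
runs = [402_000, 399_500, 401_200, 400_800, 398_900,
        403_000, 400_100, 399_800, 401_500, 438_000]
print(consistency_report(runs))
```

In practice, a bank would run such repeated queries across a representative sample of properties and compare the aggregated results against the thresholds set in its policy.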
Key Questions for Risk Officers
Risk officers should consider several key questions when developing a measure of acceptable consistency:
- What is being replaced? Are you replacing a deterministic system or a process that relies on human judgment?
- What is the impact of variance? What are the financial, legal, or ethical consequences if the AI produces an outlier or otherwise incorrect result?
Unacceptable Sources and Kinds of Inconsistency
A policy regarding acceptable inconsistency should define clear boundaries for prohibited variations. A model should likely be disqualified if it displays any of the following (a simple monitoring sketch follows the list):
- Systematic Bias: Models whose outputs are biased with respect to gender, race, or other protected characteristics.
- Undesirable Risk Posture: Variances that consistently lean in a higher-risk direction rather than erring neutrally.
- Inscrutable or Unreasonable Variation: Variations that have no plausible justification or logical basis.
- Data Drift: Inconsistency caused by "stale" training. If a model was trained on 2024 economic data but is operating in a 2026 market, its outputs may become erratic and inconsistent with current reality.
- Adversarial Inputs: Variations triggered by "prompt injection" or manipulated data specifically designed to trick the AI into providing an incorrect or inconsistent result.
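For some of these categories, simple monitoring statistics can provide an early warning. The sketch below illustrates two of them: a one-sided "risk posture" lean (here, systematic overvaluation of collateral) and data drift measured with a basic population stability index. The function names, thresholds, and sample figures are hypothetical; fair-lending bias testing in particular relies on established regulatory methodologies that go well beyond a short sketch like this.

```python
# Illustrative monitoring checks for two of the disqualifying patterns above:
# a one-sided "risk posture" lean and data drift relative to the training era.
# Names, thresholds, and sample numbers are hypothetical placeholders.
import math
from statistics import mean

def risk_posture_lean(model_values, reference_values, tolerance=0.02):
    """Flag a model whose errors consistently lean in the higher-risk
    direction (here, overvaluing collateral) rather than erring neutrally."""
    signed_errors = [(m - r) / r for m, r in zip(model_values, reference_values)]
    avg_lean = mean(signed_errors)
    return {"average_signed_error": avg_lean,
            "leans_high_risk": avg_lean > tolerance}

def population_stability_index(training_shares, current_shares):
    """Simple PSI across matching bins of an input variable (e.g. rate
    buckets): a rough signal that the operating environment has drifted
    away from the data the model was trained on."""
    psi = 0.0
    for train, cur in zip(training_shares, current_shares):
        train, cur = max(train, 1e-6), max(cur, 1e-6)   # avoid log(0)
        psi += (cur - train) * math.log(cur / train)
    return psi   # rule of thumb: values above ~0.25 suggest significant drift

# Hypothetical usage: model valuations vs. independent appraisals.
model_vals     = [410_000, 305_000, 255_000, 520_000]
appraised_vals = [400_000, 300_000, 250_000, 500_000]
print(risk_posture_lean(model_vals, appraised_vals))

# Share of loans in each rate bucket at training time vs. today.
print(population_stability_index([0.40, 0.35, 0.25], [0.10, 0.30, 0.60]))
```

Checks like these do not replace full model validation, but they give a bank concrete, repeatable numbers to hold against the boundaries its policy defines.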
Conclusion
The reality of AI's algorithmic inconsistency does not prevent these tools from supplementing or replacing bank processes. However, it is both possible and essential that banks define their own specific levels of acceptable inconsistency. In part three, we will discuss methods for testing and auditing to ensure that AI models meet these defined levels.
LoanCraft has been applying cutting edge technology to lending processes for over twenty years. Learn more at loancraft.net.