Why Unrealistic AI Expectations Could Hold Some Lenders Back
We’re not suggesting that every AI tool needs to be adopted swiftly. Our team has taken a deliberate and careful approach to adding these new technologies to our industry-leading LOS, and the benefits are clear.
But those benefits do not include 100% accuracy out of the box.
Lenders who expect to see that will be disappointed and may wait to adopt these tools, putting themselves at a distinct disadvantage when the market turns.
It’s important that lenders understand how we measure the accuracy of our AI models and how that compares to the way we gauge the accuracy of our human staff members.
The problem with treating AI like superhumans
We tend to think of modern AI tools as more like people than machines. The big technology companies push us in this direction with their internet appliances, probably so we’ll like their technologies and become friends with them, increasing our usage and giving them more access to our daily lives.
But these technologies are not people and we cannot measure them using the same standards.
A lender may set a threshold standard for human staff that says an employee must place 75% or more of the data elements in the correct fields to be deemed effective. This is reasonable because the automated checks built into our technology platforms will catch most of those errors, and QC/QA personnel will catch the rest.
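To make that kind of standard concrete, here is a minimal sketch of how field-placement accuracy might be scored against a threshold. The field names, values, and the 75% cutoff are illustrative assumptions, not our product’s actual scoring logic.

```python
# Minimal sketch: scoring field-placement accuracy against a threshold.
# Field names, values, and the 75% threshold are illustrative only.

from typing import Mapping

def field_accuracy(extracted: Mapping[str, str], truth: Mapping[str, str]) -> float:
    """Share of ground-truth fields whose extracted value matches exactly."""
    if not truth:
        return 0.0
    correct = sum(1 for field, value in truth.items() if extracted.get(field) == value)
    return correct / len(truth)

# Hypothetical verified record vs. values an operator (or model) produced.
truth = {"borrower_name": "J. Smith", "loan_amount": "285000", "note_rate": "6.625"}
extracted = {"borrower_name": "J. Smith", "loan_amount": "285000", "note_rate": "6.265"}

THRESHOLD = 0.75  # the kind of human-staff standard described above
score = field_accuracy(extracted, truth)
print(f"accuracy={score:.0%}, effective={score >= THRESHOLD}")
# -> accuracy=67%, effective=False
```

The same measurement works for a person or a model; what differs, as discussed below, is the expectation attached to the number on day one.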
When the same lender judges AI, they imagine a human with all the power of artificial intelligence built in and assume it should operate flawlessly out of the box. That doesn’t happen. When they don’t see 100% accuracy, the lender wonders whether they have made a mistake.
Capitalizing on the superpowers of AI
Many assume AI should perform at or above human levels from day one across all situations. That’s not how it works.
Human staff have a lifetime of accumulated experiential knowledge that AI cannot match overnight. They understand nuanced context and exceptions that AI must learn through rigorous training over time.
Consider mortgage underwriting, for example. Human underwriters draw on years of judging risk across economic cycles and products. Intuition honed through different market conditions gives them flexible judgment.
In contrast, AI underwriting models start with a blank slate. The algorithms require extensive structured data on past loans of all types to recognize patterns, so some degradation in precision is inherently expected early on.
But consistent re-training will gradually push AI models to levels of accuracy that humans simply cannot achieve. And that’s when AI’s superpower kicks in: once these models learn, they never forget.
Nor do they have bad days, get distracted or act inconsistently.
Lenders must embrace AI as a different but complementary approach rather than as directly comparable to human underwriters. Just as new team members require onboarding and training, AI tools need proper implementation support.
In challenging business conditions, lenders understandably expect immediate and significant ROI from AI. Establishing pragmatic checkpoints along the accuracy curve, however, prevents frustration. Measure progress in regular increments on the road to optimized performance, as in the sketch below.
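One simple way to operationalize those checkpoints is to compare measured accuracy against a target at each re-training cycle. The cadence, target values, and measurements here are hypothetical, chosen only to show the pattern.

```python
# Minimal sketch: pragmatic checkpoints along an accuracy curve.
# Cycle cadence, targets, and measurements are illustrative assumptions.

checkpoints = [0.70, 0.78, 0.85, 0.90]  # expected accuracy per review cycle
measured = [0.72, 0.77, 0.86, 0.92]     # accuracy observed after each re-training

for cycle, (target, actual) in enumerate(zip(checkpoints, measured), start=1):
    status = "on track" if actual >= target else "investigate"
    print(f"cycle {cycle}: target {target:.0%}, measured {actual:.0%} -> {status}")
```

A cycle that falls short (like cycle 2 above) triggers a review rather than a verdict that the tool has failed; the trend across cycles is what matters.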
With the right expectations, AI delivers incredible benefits in speed, efficiency and scalability over time. But setting the implementation bar unrealistically high out of the gate slows progress.
Take a measured approach that tracks incremental gains until optimal accuracy and productivity are reached. Think of AI as a long-term investment that grows in value, not a plug-and-play instant solution. It won’t take long for the return on investment to exceed the lender’s expectations.