

Why Underwriting Throughput Hasn’t Scaled – Despite More Data, Tools and APIs

Date: Feb 23, 2026 @ 07:00 AM
Filed Under: Industry Insights

Over the past decade, equipment finance underwriting has become far more data rich. Bank transactions are pulled automatically, and credit bureau reports arrive instantly. Business verification, identity checks and document extraction are now routine parts of the underwriting process. 

On paper, this should have transformed throughput. Yet across banks, captives and independent finance companies, underwriting capacity has not scaled in proportion to loan volume. 

Decision times have improved at the margins, but headcount continues to rise alongside production. Senior reviewers remain constrained, and exception queues grow as portfolios expand. This disconnect between technological progress and operational outcomes is not the result of poor execution or weak underwriting teams. 

It reflects a deeper issue in how underwriting work is structured.

Most technology investments in underwriting have focused on accelerating access to information. There hasn’t been much attention paid to the work that happens after the data arrives, i.e., interpreting conflicting signals, validating context, preparing files for review and documenting the reasoning behind decisions. These tasks remain largely manual, sequential and dependent on human judgment. As a result, underwriting has become faster at collecting inputs but not materially faster at reaching decisions.

To understand why, it helps to separate the parts of underwriting that have changed from the parts that have not.

Getting the Data Is Now the Fast and Easy Part

Modern underwriting stacks have solved data retrieval through robust API integrations, but data availability doesn't translate into underwriting velocity. Each data source still requires cleaning, normalization and interpretation before it becomes decision ready. Take bank statements: an API can pull months of transactions in seconds, but extracting meaningful cash flow patterns (identifying revenue streams, recurring expenses and seasonal variations) remains labor intensive. The bottleneck has shifted from data access to data processing.

The core question isn’t “What are the transactions?” It’s “What do they mean?”

Every deposit must be classified. Is it operating revenue, or is it a transfer from another account? Is it customer income, or is it loan proceeds? Is it a genuine sale, or a one-time injection from the owner?
 
Those distinctions sound small, but they drive the numbers that underwriting depends on: average monthly revenue, volatility, concentration and, ultimately, the borrower’s ability to service debt.

This is where generic tools and APIs tend to fall short. A bank feed can show “Zelle payment,” “Stripe transfer,” or “ACH deposit,” but it cannot reliably determine what that inflow represents without context. The same transaction label can mean something completely different depending on the business model, the industry and the way the borrower operates.
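To see why labels alone fall short, consider a minimal sketch of rule-based deposit classification feeding the revenue metrics mentioned above. The keyword list, function names and sample transactions here are all illustrative assumptions, not any vendor's actual logic:

```python
from statistics import mean, pstdev

# Hypothetical keyword hints for non-operating inflows.
NON_OPERATING_HINTS = ("transfer", "loan proceeds", "owner injection", "mca")

def classify_deposit(description: str) -> str:
    """Naive rule: flag a deposit as non-operating if its label
    contains a known hint; otherwise treat it as operating revenue."""
    desc = description.lower()
    if any(hint in desc for hint in NON_OPERATING_HINTS):
        return "non_operating"
    return "operating_revenue"

def monthly_revenue_stats(deposits):
    """deposits: list of (month, description, amount) tuples.
    Returns average monthly operating revenue and its volatility
    (coefficient of variation across months)."""
    by_month = {}
    for month, description, amount in deposits:
        if classify_deposit(description) == "operating_revenue":
            by_month[month] = by_month.get(month, 0.0) + amount
    totals = list(by_month.values())
    avg = mean(totals) if totals else 0.0
    volatility = pstdev(totals) / avg if avg else 0.0
    return {"avg_monthly_revenue": avg, "volatility": volatility}

deposits = [
    ("2025-01", "Stripe transfer", 12000.0),
    ("2025-02", "ACH deposit - customer", 9000.0),
    ("2025-02", "Loan proceeds", 20000.0),
]
print(monthly_revenue_stats(deposits))
```

Note that the naive rule misclassifies "Stripe transfer" as non-operating because of the word "transfer," silently dropping real revenue from the average. That is precisely the label ambiguity the article describes: the same string can mean opposite things depending on the business model, which is why a share of transactions still requires human judgment.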

Even when a portion of transactions can be auto-labeled, a meaningful share still requires judgment, and often research, especially when underwriters need to identify non-operating inflows, merchant cash advance disbursements or patterns that artificially inflate revenue.

So while it may have become easier to collect bank data, underwriting teams still spend large amounts of time doing the work that actually matters: turning raw transactions into an accurate cash flow story. 

Why More Integrations Quietly Increase Underwriting Work

Each new integration in an underwriting stack is usually added with a clear purpose. One source verifies the business, while another pulls bank data, and the next one provides credit history. Another enriches the application with third-party signals. Individually, each tool answers a specific question more quickly than manual processes ever could.

The challenge emerges when those answers do not fully agree.

In real underwriting workflows, data sources rarely align perfectly. A business formation date may differ between a Secretary of State filing and what appears on an application. A bank account name may match closely, but not exactly, to the legal entity name returned by a verification tool. Revenue inferred from bank deposits may not line up cleanly with what appears on tax returns or financial statements.

None of these discrepancies is unusual. In fact, they are common enough that underwriters expect them. But each inconsistency introduces a decision point that no system resolves on its own.

For example, a KYB tool may confirm that a business is active and registered, while a bank feed reflects transactions tied to a slightly different entity name. Secretary of State registration status may show an established operating history, while a website or domain registration suggests the business is much newer. A cash flow tool may flag strong monthly deposits, while another source identifies recent merchant cash advance activity that changes how those deposits should be interpreted.

In these moments, underwriting does not slow down because data is missing. It slows down because there is more data, which must be reconciled.

Every additional integration effectively adds another perspective on the same borrower. When those perspectives conflict, someone must determine which signal is most reliable, whether the difference is material, and how it should be documented for review. That work cannot be automated away with more integrations because the act of reconciliation itself depends on context and judgment.
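One common instance of this reconciliation work is matching the entity name on a bank account against the legal name from a verification source. The sketch below uses Python's standard-library fuzzy matcher; the threshold and routing labels are assumptions for illustration, not a recommended policy:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity between two entity names (0.0 to 1.0),
    after normalizing case, punctuation and whitespace."""
    def norm(s):
        return " ".join(s.lower().replace(",", "").replace(".", "").split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def reconcile_entity(sos_name: str, bank_name: str, threshold: float = 0.85):
    """Auto-accept close matches; route ambiguous ones to a reviewer."""
    score = name_similarity(sos_name, bank_name)
    if score >= threshold:
        return {"match": True, "score": score, "action": "auto-accept"}
    return {"match": False, "score": score, "action": "route to reviewer"}

print(reconcile_entity("Acme Logistics, LLC", "ACME LOGISTICS LLC"))
```

Even a sketch like this only decides the easy cases. Whether a 0.80 score on a particular deal is a typo or a different legal entity is exactly the contextual judgment that no similarity threshold resolves, which is why the coordination burden lands on the reviewer.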

This is how work quietly shifts onto underwriters.

Instead of reviewing a single, consolidated view of the borrower, underwriters spend time comparing outputs across systems, tracing discrepancies back to their source and deciding how to interpret them in the context of the deal. The tools do their jobs well, but they do not coordinate with each other. The coordination burden falls on the human reviewing the file.

From a throughput perspective, this creates a ceiling. The faster data arrives, the sooner reconciliation begins. And reconciliation scales with experience, not with automation. Senior reviewers become bottlenecks not because they are slow, but because they are the only ones equipped to resolve ambiguity across systems with confidence.

This dynamic helps explain why underwriting capacity often plateaus even as technology investments continue. Integrations improve visibility, but they also expand the amount of interpretive work required to reach a decision. The result is not less underwriting effort, but a redistribution of effort from gathering information to making sense of it.

Understanding this tradeoff is essential to understanding why underwriting throughput has not scaled in proportion to data access. The constraint is no longer how quickly information can be retrieved. It is how efficiently conflicting signals can be reconciled into a coherent, defensible credit narrative.

Why Linear Underwriting Pipelines Break in the Real World

Most underwriting technology today follows a familiar structure. Data is extracted, enriched with third-party sources, scored against policy and routed to a decision. When inputs are clean and consistent, this approach works well. It brings order and repeatability to straightforward cases.

The difficulty is that real-world underwriting rarely unfolds in a straight line.

As applications move through the process, unexpected issues surface that require investigation before the deal can move forward.

Linear pipelines are not designed for this kind of interruption. When something falls outside predefined expectations, the workflow pauses and hands control back to a human reviewer. 

The system can flag the issue, but it cannot decide how to resolve it. The underwriter must determine what additional information is needed, where to find it and how the new context should affect the analysis.

Recognizing this has led to the emergence of a different execution model: systems designed to observe context, branch dynamically and adapt their workflow based on what each deal requires.

Rather than forcing every application through the same fixed sequence, these systems allow the analysis path to change when new information surfaces. If a discrepancy appears, the system can pursue clarification. If a signal raises questions, it can gather additional context before continuing. The workflow evolves alongside the deal, instead of breaking when reality deviates from the expected path.
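The difference between a fixed sequence and an adaptive one can be sketched as a work queue that inserts new steps when a finding warrants them. The step names, checks and application fields below are hypothetical, meant only to show the branching pattern, not any vendor's implementation:

```python
# Sketch of a branching (non-linear) underwriting flow: steps are
# processed from a queue, and findings can enqueue follow-up steps
# ahead of the remaining work instead of halting the pipeline.

def run_underwriting(application: dict) -> list:
    findings = []
    queue = ["verify_business", "pull_bank_data", "score"]
    while queue:
        step = queue.pop(0)
        if step == "verify_business":
            if application.get("sos_name") != application.get("bank_name"):
                # Discrepancy detected: branch into a clarification
                # step before continuing.
                queue.insert(0, "resolve_name_mismatch")
            findings.append("business verified")
        elif step == "resolve_name_mismatch":
            findings.append("name mismatch investigated")
        elif step == "pull_bank_data":
            if application.get("mca_activity"):
                queue.insert(0, "review_mca_exposure")
            findings.append("bank data pulled")
        elif step == "review_mca_exposure":
            findings.append("MCA exposure reviewed")
        elif step == "score":
            findings.append("scored against policy")
    return findings

app = {"sos_name": "Acme Logistics LLC",
       "bank_name": "Acme Logistic LLC",
       "mca_activity": True}
print(run_underwriting(app))
```

A clean application passes straight through the three base steps; the application above triggers two extra investigation steps along the way. The point of the pattern is that the path is decided by what each deal surfaces, not fixed in advance.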

Platforms like Kaaj are built around this agentic approach. Instead of treating underwriting as a static pipeline, they operate as a network of specialized agents that execute tasks, share context and adjust their execution path based on what they observe. 



Utsav Shah
Co-Founder | Kaaj
Utsav Shah is co-founder of Kaaj, an AI-native infrastructure company transforming equipment financing. He spent nearly a decade at Uber and Cruise, where he helped design and scale complex AI systems. With deep experience operating at the intersection of AI and real-world applications, he is now focused on bringing that innovation to equipment financing.