The agentic economy is coming. Banks can’t rent their way into it
By Derek White
You Can Rent the Tool. You Cannot Rent the Responsibility.
Last week Sequoia published an argument that has been circulating widely in Silicon Valley. The next trillion-dollar companies, they suggest, won’t sell software. They’ll sell the work. For every dollar businesses spend on software, they spend roughly six on services. As AI systems become capable of performing real work — not just generating information — the economic prize shifts from selling tools to delivering outcomes. We’re already seeing early versions of this model: AI-native companies running accounting workflows, AI systems managing insurance brokerage, AI platforms operating IT infrastructure. Instead of selling software licenses, they sell execution. It’s a compelling thesis. But something stood out when I read it. Regulated banking — one of the largest technology markets in the world — barely appears in that conversation, despite representing nearly a quarter of global technology spend. That isn’t an oversight. It’s a structural signal.
The reason is structural: the sell-the-work model that works across much of the economy runs directly into the accountability architecture of regulated financial institutions. For decades, growth in financial services has been tightly coupled to headcount. More loans meant more analysts. More customers meant more service representatives. More regulatory complexity meant more risk and compliance staff. The industry’s efficiency ratios reflect that structural reality. Agentic systems are the first credible mechanism to change that dynamic — expanding execution capacity without expanding the organization at the same rate. But in regulated industries, that capability cannot simply be outsourced. It has to be built inside the institution.
Banking operates under a fundamentally different structure. When an AI system processes a credit decision, files a Suspicious Activity Report, or determines whether a customer receives financial services, regulators don’t call the vendor. They call the bank. The OCC. The Federal Reserve. The CFPB. State banking authorities. European regulators under DORA and the EU AI Act. Their frameworks are explicit: outsourcing technology does not outsource responsibility. A bank can use a vendor’s model. But it cannot outsource accountability for what that model decides. Under SR 11-7 model risk guidance, even a third-party model must be independently validated by the institution itself. If the system cannot be explained, audited, and governed internally, regulators expect the bank to demand transparency — or find another solution. In other words: a bank can rent the technology. But it cannot rent the responsibility.
The Architecture That Actually Works
Once you accept that responsibility cannot leave the institution, the strategic question changes. It is not: which vendor should run our workflows? The real question becomes: how do we expand execution capacity inside the institution — under our governance and accountability? That is a very different problem. And it requires a very different architecture.
Take the workflow at the center of most regional banks’ revenue engines: commercial lending. A mid-market loan today typically involves a sequence of manual steps. Analysts gather financial data across multiple sources. Associates normalize financial statements. Credit teams draft investment memos from scratch. Risk teams review documentation assembled by hand. Compliance verifies policy adherence. Operations books the deal once everything else clears. Multiple handoffs. The same data re-entered in multiple systems. Days — sometimes weeks — of latency. Not because the people are slow, but because the process was designed around human throughput.
Could a vendor run that workflow externally? Not in a form regulators would accept. The moment the workflow leaves the bank’s governed environment, three things break simultaneously. The data leaves — customer financial information and proprietary credit models move outside the institution’s controlled perimeter. The decisions leave — every credit determination carries regulatory consequences the bank must defend under examination. The audit trail leaves, or worse, becomes dependent on infrastructure the bank does not control. For regulated institutions, examiner-ready traceability isn’t a reporting feature. It’s an operational requirement.
When agentic systems operate inside a governed institutional framework, the economics change dramatically. An extraction agent ingests structured and unstructured documents simultaneously. A financial analysis agent evaluates covenant ratios against policy thresholds. A compliance agent validates regulatory constraints in real time. A drafting agent produces a credit memo grounded in the underlying financial data. A governance layer traces every step — every data source, every decision path, every policy check. Humans remain central to the process. They review, challenge, approve, and build the relationships that close the deal. But the coordination and documentation burden collapses. Processing time drops. Consistency improves. Throughput expands because the constraint is no longer human coordination capacity. The institution doesn’t rent execution from a vendor. It owns its execution capacity — and that capability compounds.
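For readers who want to see the shape of that coordination, here is a minimal, purely illustrative sketch: stubbed agents, hypothetical figures and policy thresholds, no real models or systems of record. The point is how a governance layer records every step the workflow takes, not the implementation.

```python
# Illustrative sketch of a governed multi-agent lending workflow.
# All agent logic is stubbed; names, figures, and thresholds are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any


@dataclass
class AuditTrail:
    """Governance layer: records every agent action for later examination."""
    entries: list[dict[str, Any]] = field(default_factory=list)

    def record(self, agent: str, action: str, detail: dict[str, Any]) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "detail": detail,
        })


def extraction_agent(documents: list[str], audit: AuditTrail) -> dict[str, float]:
    # Stub: would parse structured and unstructured borrower documents.
    financials = {"ebitda": 12.5, "total_debt": 40.0}  # hypothetical figures ($MM)
    audit.record("extraction", "parsed_documents", {"sources": documents})
    return financials


def analysis_agent(financials: dict[str, float], audit: AuditTrail) -> dict[str, Any]:
    # Stub: evaluates a covenant ratio against an assumed policy limit.
    leverage = financials["total_debt"] / financials["ebitda"]
    within_policy = leverage <= 3.5  # illustrative threshold only
    audit.record("analysis", "leverage_check",
                 {"leverage": round(leverage, 2), "within_policy": within_policy})
    return {"leverage": leverage, "within_policy": within_policy}


def compliance_agent(analysis: dict[str, Any], audit: AuditTrail) -> bool:
    # Stub: would validate regulatory and internal policy constraints.
    passed = analysis["within_policy"]
    audit.record("compliance", "policy_validation", {"passed": passed})
    return passed


def drafting_agent(financials: dict[str, float], analysis: dict[str, Any],
                   audit: AuditTrail) -> str:
    memo = (f"Leverage {analysis['leverage']:.2f}x on EBITDA "
            f"${financials['ebitda']}MM; within policy: {analysis['within_policy']}.")
    audit.record("drafting", "memo_generated", {"length": len(memo)})
    return memo


def run_workflow(documents: list[str]) -> tuple[str, AuditTrail]:
    audit = AuditTrail()
    financials = extraction_agent(documents, audit)
    analysis = analysis_agent(financials, audit)
    if not compliance_agent(analysis, audit):
        audit.record("workflow", "escalated_to_human", {"reason": "policy exception"})
    memo = drafting_agent(financials, analysis, audit)
    # A human credit officer reviews and approves; the trail never leaves the bank.
    return memo, audit


if __name__ == "__main__":
    memo, audit = run_workflow(["financial_statements.pdf", "covenant_schedule.xlsx"])
    print(memo)
    print(f"{len(audit.entries)} auditable steps recorded")
```

In a real institution, each stub would call governed models and systems of record, and the audit trail would live in infrastructure the bank controls, so every data source, decision path, and policy check can be reproduced for an examiner.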
The Strategic Question That Matters
Many institutions are approaching AI as a technology deployment — which models to use, which vendors to adopt, which tools to integrate. But technology alone does not produce the economic gains everyone expects from AI. Those gains only materialize when the operating model changes as well. Agentic systems don’t simply automate tasks inside a human-only organization. They change how work flows through the institution. Instead of organizations structured primarily around functions, work begins to organize around outcomes: loan underwriting, fraud investigation, customer resolution, regulatory reporting. Each workflow becomes a coordinated system of humans and agents operating together within clear governance boundaries. When those workflows are redesigned properly, the economics of the institution begin to change. Revenue can grow without linear headcount expansion. Cycle times compress. Consistency improves. Operational risk becomes easier to monitor because every action is traceable.
I’ve spent three decades working at the intersection of banking and technology — from launching the world’s first internet bank to leading digital transformation at Barclays, BBVA, and US Bank. And I’ve watched this structural shift happen once already. When digital-native fintechs arrived, incumbent banks had every advantage: capital, customers, infrastructure, regulatory relationships. The fintechs had almost none of that. What they had was a different operating model — one designed from the beginning for a digital world. That structural difference allowed them to move faster, build better experiences, and grow without the cost structures that constrained traditional institutions. The banks that ultimately succeeded in the digital era didn’t simply adopt new technology. They redesigned how work flowed through the organization. The same dynamic is emerging again now. But it’s moving faster.
The institutions that win in the agentic era will not be the ones that rent outcomes from vendors. They will be the ones that build governed execution capacity inside their own walls. The strategic question for bank leadership is simple: does your institution own its execution capacity? Or are you renting outcomes from systems that carry none of your regulatory responsibility? In most industries, renting works. In regulated banking, ownership is the only model that compounds. The institutions that start building that capability now will look back on this moment as the point where their operating model changed. The rest will face the same reckoning many banks faced during the fintech wave. Only this time the cycle will move much faster. The next decade of banking will not be defined by who adopts AI first. It will be defined by who builds the operating model to use it safely.