Software development company for banking in 2026: 7 criteria for choosing the best partner

Karolina Stolarczyk
January 22, 2026 | Banking

Global banks are already spending $600bn annually on technology. It is a significant sum, but the data shows that banks have not lost their appetite for digital transformation at all. Quite the opposite – as many as 88% of them planned to increase their budgets by another 10% in 2025, pointing to cybersecurity as the priority for investment.

Yet this spending does not necessarily translate into market advantage. IT’s share of revenue has crossed the 10% mark and, according to BCG forecasts, will keep growing at 9% per year. Instead of funding innovation, however, this capital often disappears into the “efficiency trap”: it is eaten up by the cost of maintaining legacy systems. In practice, banks spend more and more but find it harder to extract real value from these investments.

The following report analyzes 7 filters for the decision-making process – from DORA and ESG requirements to Managed Outcomes models. In 2026, these will decide whether a Software House becomes a strategic partner for a bank or is removed from the supply chain at the pre-qualification stage.

Key findings

  • The efficiency trap: Currently, over 60% of banking IT budgets are consumed by maintenance (“Run the Bank”). Leaders for 2026 are setting a goal: reduce this figure below 45% through automation.
  • DORA as an effective selection tool: Regulation forces market consolidation. Providers without an auditable chain of subcontractors and ready exit plans (Exit Strategy) are rejected during pre-qualification.
  • The end of body leasing: The Time & Material billing model is being pushed out by Managed Outcomes. The provider’s margin is strictly tied to delivering business KPIs and taking over operational risk.
  • The necessity of ESG (Scope 3): In 2026, a lack of hard data on the carbon footprint of IT services eliminates a provider (knock-out) in Tier-1 bank tenders.
  • Mainframe crisis: The average age of COBOL experts is over 55. Banks require partners to have not just people, but ready-made AI tools to automate the analysis and refactoring of legacy code.
  • Composable architecture: Banks aim to avoid “Vendor Lock-in 2.0”. They require providers to be technologically neutral and to transfer intellectual property (IP) rights for unique components in the system.
  • AI governance: Deploying autonomous agents (Agentic AI) requires the Software House to apply safety barriers (Guardrails) compliant with the EU AI Act.

The operating model of 2026: Radical Resilience

The year 2026 marks a clear dividing line in the evolution of the financial services sector. After a decade of fighting for customer volume and low interest rates, banks have entered a new operating model: “Radical Resilience”.

What does this mean in practice? Facing a state of permacrisis – defined by cyber threats, increased regulation (DORA, FIDA), and ESG pressure – technology has stopped “supporting the business”. It is the business. A bank without efficient IT systems is not a bank that works slower. It is a bank that does not work at all.

For Chief Information Officers (CIOs) and Chief Procurement Officers (CPOs), this represents a major shift in how they choose IT partners. Success in 2026 is not measured only by financial indicators such as Return on Equity (ROE) or Cost to Income (C/I). The new currency is the ability to manage complexity and operational continuity. The question has stopped being “how much will we save” and has become “will we survive the next crisis”.

A Software House can no longer be just a provider of “hands for work” (body leasing). In 2026, the market divides providers into two groups: those who understand they have become part of the regulated banking value chain, and those the bank cuts off from Tier-1 contracts. There is no third option.

Decision-maker’s roadmap: 7 pillars of IT partner selection

Before looking at offers, verify each candidate partner against the seven criteria below. The order is not accidental – the first criteria are deal-breakers that disqualify a provider at the pre-qualification stage.

  1. Engineering Efficiency (Engineering First): The provider must prove – not just declare – how they will escape the “efficiency trap”. Automation and DORA metrics are key. Without evidence, these are empty promises.
  2. Regulatory Readiness (DORA-Ready): An auditable chain of subcontractors and contractual penalties for downtime are now the standard, not an option. These are knock-out criteria – missing any element removes the vendor from the game.
  3. ESG Transparency (Scope 3): Reporting the carbon footprint of IT services has moved from “nice to have” to a hard requirement. In 2026, a lack of data means compliance risk and the end of the conversation.
  4. Modernization Competence (Legacy & Talent): The demographic gap in mainframe technologies is growing. The partner must show a plan – not just people, but AI tools for technical debt refactoring. Otherwise, they will drown along with the bank in old code.
  5. Responsibility for Business Results (Managed Outcomes): Time & Material is becoming a thing of the past. What counts is settlement for “delivered results”. Either the company takes on operational risk and ties its margin to business KPIs, or it looks for clients elsewhere.
  6. Ability to Co-create IP (Composable Banking): The bank needs a partner who will integrate ready-made blocks (SaaS) without creating another vendor lock-in. Technological neutrality and transferring IP rights to unique components are prerequisites, not a subject for negotiation.
  7. AI Governance Maturity (Agentic AI): AI models must have built-in “Guardrails” mechanisms compliant with the EU AI Act. This is not the future – it is a regulatory requirement for today. Otherwise, autonomous agents remain a dream.

Criterion #1: Engineering efficiency and escaping the “efficiency trap”

Procurement Question: “Why, despite a growing IT budget, are we implementing changes more slowly?”

Banks are spending more on technology than ever before. Budgets are growing at a fast pace. Yet, institutions are operating more slowly, as if stuck in a technological swamp. Analysts at Boston Consulting Group have called this phenomenon the “efficiency trap”.

The data is clear. Tech spending in banking is growing globally at a rate of 9% per year (CAGR), significantly outpacing inflation. There is no shortage of capital. The problem: institutions cannot implement innovation – what the industry calls “change the bank”. They are standing still. All those additional budgets are consumed by the costs of maintaining existing systems – “run the bank”. Banks are paying more just to avoid falling behind.

Diagnosis: Structural paralysis

The structure of spending shows the scale of the problem. In the average bank, over 60% of the IT budget goes to operational activity: maintaining legacy systems, licence fees, patching security gaps, and ensuring regulatory compliance. This proportion becomes more problematic when we compare it with another trend.

The life cycle of technology is shortening. Estimates suggest that technological skills lose half their value in just four years. An investment in technology made today might turn out to be obsolete before we manage to use it fully. As a result, banks have found themselves in a paradoxical situation: they invest huge amounts to “stand still”, while competition from digitally native entities – neo-banks, BigTech – does not carry the baggage of technology from past decades.

The efficiency trap grows directly out of technical debt accumulated over years. Decisions made between 2015 and 2020 – when banks built interfaces for mobile devices and web applications on top of outdated core banking systems – mean that in 2026 complexity is rising sharply. Every change in the system requires spending on integration and regression tests. A seemingly simple update might require weeks of work because one must check whether it breaks dozens of connections with other modules.

Selection benchmark: Team structure and DORA metrics

When choosing a Software House in 2026, banks must reject marketing declarations about “agile” and “modern methodologies”. Instead, they must demand data on staff structure and team results.

Ratio of “doers” to “orchestrators”

Leading financial institutions are restructuring IT staff. The centre of gravity is shifting from management roles – Project Manager, Scrum Master, Coordinator – to engineering roles. After years of building up the coordination layer, banks have discovered the truth: too many people plan the work, too few do the work.

Market data shows an average: about 50% of staff are engineers. Transformation leaders for 2026 are setting themselves a goal: 75% of IT personnel are engineers (“doers”). A Software House that offers a team made up largely of non-technical middlemen and coordinators only deepens the efficiency trap. The bank pays for hours of meetings, reports, and synchronization – not for code and solutions. Banks are looking for flat technological structures where every person on the team builds something.

DORA Indicators (DevOps Research and Assessment)

The provider must show excellence in four DevOps Research and Assessment metrics. These are not vanity KPIs – they are direct measures of the ability to deliver value quickly and safely (a minimal calculation sketch follows the list):

  • Deployment Frequency: How often does the provider deploy changes to production – safely, not just “pushed to production”? Market leaders achieve dozens of deployments daily, so a critical fix hits production within an hour, not weeks. In banking, where every minute of payment system downtime means lost millions, this difference decides competitiveness.
  • Mean Time to Recovery (MTTR): How long it takes, on average, to restore services after a failure. In banking, every minute of payment system unavailability is not just lost revenue but, above all, reputational damage. The difference between an MTTR of 2 hours and one of 20 minutes is the difference between losing customer trust and an incident most people won’t even notice.
  • Change Failure Rate: What percentage of deployments require immediate fixes (hotfix). A high rate means that quality processes at the provider are not working. Every hotfix is cost, stress, and risk of further errors.
  • Lead Time for Changes: How much time passes from code approval to launch on production. The longer this takes, the less flexibly the provider reacts to changing business needs. Regulators introduce new requirements, competitors release new features – the bank must keep up.
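
To make these metrics concrete, here is a minimal sketch of how they can be computed from a provider’s deployment log. The record fields and sample values are illustrative assumptions, not a standard schema:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment log; field names are illustrative, not a standard.
deployments = [
    {"approved": datetime(2026, 1, 5, 9, 0), "deployed": datetime(2026, 1, 5, 11, 0),
     "failed": False, "recovered": None},
    {"approved": datetime(2026, 1, 6, 10, 0), "deployed": datetime(2026, 1, 6, 15, 0),
     "failed": True, "recovered": datetime(2026, 1, 6, 15, 20)},
    {"approved": datetime(2026, 1, 7, 8, 0), "deployed": datetime(2026, 1, 7, 9, 30),
     "failed": False, "recovered": None},
]
period_days = 30

# Deployment Frequency: deployments per day over the reporting period.
deployment_frequency = len(deployments) / period_days

# Lead Time for Changes: mean hours from code approval to production.
lead_time_h = mean((d["deployed"] - d["approved"]).total_seconds() / 3600
                   for d in deployments)

# Change Failure Rate: share of deployments needing an immediate fix.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# MTTR: mean minutes from failure to restored service.
mttr_min = mean((d["recovered"] - d["deployed"]).total_seconds() / 60
                for d in failures)

print(f"freq: {deployment_frequency:.2f}/day, lead time: {lead_time_h:.1f} h, "
      f"CFR: {change_failure_rate:.0%}, MTTR: {mttr_min:.0f} min")
```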

Providers are also required to automate their testing processes (CI/CD), with automated tests covering at least 80% of the code. Testing the whole system manually after every minor fix is simply unworkable in terms of time and cost. Automation minimizes regression costs with every change in the legacy system.

New economy: Risk-weighted TCO

The Procurement department and CFO now look at Software House costs differently. The day rate has stopped defining whether an offer is attractive. That perspective – focusing only on the rate card – was popular in the years 2010–2020, but led to poor choices. Advanced Risk-Weighted TCO models (Total Cost of Ownership adjusted for risk) are being introduced.

The problem lies in the consequences of purchasing decisions in the long term. A Software House that is “cheap” in terms of hourly rate, but generates high technical debt – poor code quality, lack of tests, superficial documentation – becomes the most expensive choice over a 3–5 year perspective. The bank saves 20% on developer rates today, only to spend three times that amount in two years when the code needs refactoring or the system needs rewriting.

Elements of risk-weighted TCO (a simple comparison sketch follows the list):

  • CAPEX vs OPEX: Banks prefer “pay-as-you-grow” models that do not freeze capital in the project phase. The bank wants to pay for usage, not for purchased licences sitting on a shelf.
  • Cost of operational risk: An hour of system unavailability caused by a provider error in Tier-1 banks costs millions of euros. One incident can wipe out all savings from low rates. The purchasing department must calculate not only how much a developer’s hour costs, but how much a potential failure caused by that developer might cost.
  • Exit Cost: How much a potential migration away from the provider will cost. Avoiding vendor lock-in becomes a strategic priority. The bank must have the ability to change providers without having to rewrite the entire application from scratch. The more the provider uses proprietary solutions and non-standard architectures, the higher the exit cost.
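
A minimal sketch of how such a comparison might look, using illustrative figures – the article names the cost components, but the exact weighting is each bank’s own:

```python
def risk_weighted_tco(annual_fee_eur: float,
                      expected_downtime_h_per_year: float,
                      cost_per_downtime_hour_eur: float,
                      exit_cost_eur: float,
                      years: int = 5) -> float:
    """Illustrative model: the article names the components (fees, cost
    of operational risk, exit cost) but no standard weighting."""
    operating = annual_fee_eur * years
    risk = expected_downtime_h_per_year * cost_per_downtime_hour_eur * years
    return operating + risk + exit_cost_eur

# A "cheap" provider with weak quality processes and proprietary lock-in...
cheap = risk_weighted_tco(800_000, expected_downtime_h_per_year=6,
                          cost_per_downtime_hour_eur=1_000_000,
                          exit_cost_eur=2_000_000)
# ...versus a pricier provider with strong automation and open standards.
solid = risk_weighted_tco(1_000_000, expected_downtime_h_per_year=0.5,
                          cost_per_downtime_hour_eur=1_000_000,
                          exit_cost_eur=300_000)
print(f"cheap on paper: €{cheap:,.0f} vs solid: €{solid:,.0f}")
```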

Table: Transformation of IT spending structure (2026)

| Spending Category | Current State (Market Average) | Leader’s Goal (2026) | Implication for Partner Selection |
|---|---|---|---|
| Run the bank (Maintenance) | > 60% | < 45% | A partner who only logs worked hours but does not automate how they maintain systems deepens the problem instead of solving it. |
| Change the bank (Innovation) | < 40% | > 55% | The partner must possess innovative competencies (Agentic AI, Composable Architecture) and prove them with references, not slides. |
| Staff Structure | 47% Doers | 75% Doers | Every additional project manager who does not directly add technological value increases cost without increasing value. |
| Technical Debt | Hidden | 15% of Budget (Open repayment) | The contract must obligate systematic refactoring, not just the delivery of new features. |

Source: Own compilation based on BCG data.

Procurement decision: Who gets rejected early?

Under Criterion #1, banks in 2026 reject providers based on clear, measurable criteria:

  • Lack of efficiency metrics: The provider cannot present historical data on how efficiently they deploy. Without DORA metrics, this means one thing – they do not measure work quality. And what we do not measure, we do not manage. The answer “we work agile, we don’t gather such data” disqualifies them immediately.
  • Low engineer ratio: The provider keeps a low ratio of engineers in teams (below 60%), trying to mask this with extensive project management. The bank sees clearly: it is paying for coordination, not execution. An offer containing 3 Scrum Masters for 5 programmers is a warning signal.
  • No technical debt control: The provider has no processes that automatically control technical debt. Lacking tools like SonarQube as a contract standard means code quality is not monitored systematically, only occasionally – when it is already too late. Debt accumulates in the shadows, only to explode in a year or two.

Criterion #2: Regulatory readiness (DORA-ready) and provider consolidation

Procurement Question: “Can the provider prove they are not a threat to the Bank’s system?”

When the Digital Operational Resilience Act (DORA) came into force in January 2025, with full implementation required in 2026, it fundamentally changed the relationship between banks and technology providers. It is no longer just another compliance initiative to be handed off to the legal department and forgotten – it has become a factor that shapes corporate architecture and bank purchasing strategies.

In 2026, “compliance” is not an add-on to the service. It is the service. The bank is no longer just buying technological skills – it is buying the provider’s ability to prove compliance, transparency, and operational resilience.

New vendor selection lifecycle (2026)

The process of choosing a provider has become longer and more formal. The traditional path of RFI (Request for Information) → RFP (Request for Proposal) is dead. It has been replaced by a “security funnel” that eliminates most candidates right at the start:

  1. Pre-qualification (DORA gateway): An elimination stage that happens even before the formal request for proposal. Without ready compliance reports, insurance policies covering cyber risks, and a documented chain of subcontractors – the provider does not even receive the RFP. This effectively cuts off those who do not meet the requirements.
  2. Risk Assessment: The Vendor Risk Management (VRM) department assesses concentration risk. The central question: is the bank already too dependent on this provider? Would a failure at the partner paralyze critical business processes? This is an analysis of the provider portfolio to diversify risk.
  3. Proof of Concept (PoC) / Technical Challenge: Verifying “hands-on” skills in a live environment. The bank no longer believes PowerPoint slides and references. They want to see working code, conduct security tests, and check how the team reacts to simulated incidents.
  4. Tiering decision: The provider is classified as “critical” (Tier-1) or “supporting” (Tier-2). Tier-1 means a full audit regime – regular checks, continuous monitoring, penetration tests. This is a commitment for the provider that involves extra costs and resources.

Regulatory requirements: The end of hiding subcontractors

DORA imposes duties on financial institutions regarding how they manage external risk (Third-Party Risk Management – TPRM). Banks must not only monitor direct providers but also have knowledge of the subcontractors who support critical or important business functions.

Every ICT contract must go into a centralized register containing data on what the services are, where data is processed, and what the chain of related entities looks like. For a Software House, this means the end of hiding subcontractors. The popular practice of hiring “B2B freelancers” without formal notification now directly violates the regulation. The bank must know who has access to data, where they are located, what their skills are, and whether they have passed security verification. A provider who answers “we have a flexible resource model, we add extra people if needed” without being able to name those people immediately does not meet the requirements.
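
As an illustration, a register entry could be modelled along these lines – a hypothetical schema, since DORA prescribes what the ICT register must contain, not how it is implemented:

```python
from dataclasses import dataclass, field

@dataclass
class SubcontractorEntry:
    """One link in the sub-outsourcing chain. The fields mirror what the
    article says the bank must know: identity, location, role, vetting.
    Illustrative schema only - DORA prescribes the register's content,
    not its implementation."""
    legal_name: str
    country_of_processing: str
    service_description: str
    supports_critical_function: bool
    security_vetting_passed: bool

@dataclass
class IctContractRecord:
    provider: str
    services: str
    data_processing_locations: list[str]
    subcontractors: list[SubcontractorEntry] = field(default_factory=list)

    def fully_disclosed(self) -> bool:
        # "We will check and send it in a week" fails this test: every
        # entity in the chain must be named and vetted up front.
        return all(s.security_vetting_passed for s in self.subcontractors)
```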

Mechanism of forced consolidation

The complexity and cost of maintaining DORA compliance mean it no longer makes economic sense to maintain a fragmented base of providers. Banks that previously used a “best-of-breed” strategy – integrating hundreds of niche solutions from dozens of different vendors – are mass-consolidating their provider portfolios in 2026. This is not a random trend. It is a strategy forced by the economics of compliance.

This happens for three reasons:

  1. Cost of supervision: Every extra provider means extra cost for audits, penetration tests (TLPT), and SLA monitoring. Reducing the number of providers by 20–30% brings real savings in the compliance area, because costs grow superlinearly, not linearly: managing 100 providers often costs four times as much as managing 50, not twice as much. Each vendor requires a risk manager, cyclical audits, tests, and verification of every change to their infrastructure.
  2. Board responsibility: DORA assigns direct, personal responsibility for ICT risk to members of the bank’s management board. This cannot be delegated to the CIO or CISO – it is a Board of Directors level responsibility. Facing administrative fines reaching millions of euros and reputational risk, decision-makers prefer working with large, stable entities. Large entities have the resources – legal departments, insurance policies, crisis management procedures – to guarantee safety. A small, dynamic startup might have the technology, but rarely has insurance for 50 million euros or a compliance department.
  3. Implementation cost estimates: The cost of full DORA implementation in large financial groups can reach as high as 100 million euros. This amount covers not just technological changes, but primarily building the whole apparatus of compliance, audit, and monitoring. Banks expect strategic partners to take on part of the burden related to adjustment. A provider who says “that’s the bank’s problem, not ours” – is out.

FIDA: Openness vs Security

Parallel to DORA, the legal framework of Financial Data Access (FIDA) forces banks to share customer data with a wider ecosystem. This creates tension that is hard to resolve without the right technology. The bank must be open (FIDA requires sharing data with external entities that have authorization) but at the same time hermetic (DORA requires maximum security and control).

Imagine a situation: a fintech wants to download a client’s transaction data – FIDA requires the bank to share the data after getting consent. At the same time, every API must be secured at the highest level, monitored in real-time, with a full audit trail – DORA requires this. Traditional systems are not designed for this duality. An API created for the bank’s internal needs must suddenly handle hundreds of external entities, each with a different trust level, different permissions, and a different risk profile.

In 2026, banks are looking for technological platforms that offer “compliance by design” – built-in mechanisms that manage consents and API security. Instead of sticking security layers on after the fact, the bank needs architecture where regulatory compliance is the foundation, not an addition. This favours providers who offer comprehensive platforms (end-to-end suites) at the expense of point solution providers. When we integrate dozens of independent tools, it is a recipe for a security gap – every joint between systems is a potential door for an attack.
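
A minimal sketch of what “compliance by design” can mean at the API layer: a consent check (FIDA openness) combined with an unconditional audit trail (DORA control). All names and the consent store are illustrative:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("api.audit")

# Hypothetical consent store: (customer_id, third_party_id) -> granted scopes.
CONSENTS = {("cust-42", "fintech-7"): {"transactions:read"}}

def share_transactions(customer_id: str, third_party_id: str, scope: str):
    """FIDA openness: serve the request if the customer consented.
    DORA control: every attempt, granted or denied, hits the audit trail."""
    granted = scope in CONSENTS.get((customer_id, third_party_id), set())
    audit_log.info(
        "decision=%s customer=%s third_party=%s scope=%s at=%s",
        "GRANT" if granted else "DENY", customer_id, third_party_id,
        scope, datetime.now(timezone.utc).isoformat(),
    )
    if not granted:
        raise PermissionError("no valid customer consent for this scope")
    return fetch_transactions(customer_id)

def fetch_transactions(customer_id: str):
    return [{"id": "tx-1", "amount": 120.0}]  # stub for the real data layer
```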

Procurement decision: Knock-out criteria (Who do we eliminate?)

In the context of Criterion #2, a Software House falls out of the game based on clearly defined exclusion criteria:

  • Lack of auditability: The provider does not agree to external security and development process audits on demand by the regulator – KNF, EBA – within 24–48 hours. If the provider needs weeks to prepare documentation or denies access to auditors, that is a red flag. The bank cannot afford a partner who blocks supervision. The regulator will not wait for the provider to “tidy up the paperwork”.
  • Unclear subcontractor chain: The provider cannot immediately report the full chain of sub-outsourcing to the bank’s ICT Information Register. The answer “we will check and send it in a week” is unacceptable. The bank must have real-time visibility on all supply chain participants. Every person with access to code, test environments, or production data must be known by name, location, and legal status.
  • No exit plan support: The provider does not want to or cannot actively support the creation of exit strategies for their services. This is a classic symptom of vendor lock-in at the contract stage. If the provider is not ready to help the bank with a potential migration to another partner, it means their business model relies on dependency, not on the value of the service offered. A good partner knows they earn money on quality, not on the fact that you cannot leave them.

Real-world failure scenario (Failure mode)

A company was “DORA-ready on paper”. It declared compliance in the offer, presented certificates and documents. The bank decided to cooperate. The first operational audit – for example, penetration tests according to the TIBER-EU methodology – ended in failure. It turned out that the documentation did not match reality, and safeguards existed only in procedures, not in the code. Default firewall configurations were left active, passwords for test environments were “admin123”, and backups worked “almost always”. The bank wasted months and hundreds of thousands of euros on a partner who did not survive the first verification. The cost of breaking the contract and finding a new provider exceeded the savings from the attractive offer many times over.

Criterion #3: ESG Transparency and Scope 3

Procurement Question: “Will working with this provider degrade our non-financial reporting and raise our cost of capital?”

The CSRD directive (Corporate Sustainability Reporting Directive) obliges banks to report not only their own emissions but primarily emissions from the value chain – known as Scope 3.

Scope 3 as a new financial risk

For the banking sector, Scope 3 dominates the emissions structure, covering financed emissions – the entire loan portfolio – and emissions from the supply chain, where purchased goods and IT services prevail. Estimates indicate that Scope 3 can account for up to 95% of a financial institution’s total carbon footprint. A bank might run green office buildings, but when it finances a coal power plant or works with a high-emission IT provider, its ESG indicators drop significantly. A lack of data precision in this area translates into compliance and reputational risk. Banks that do not meet decarbonisation goals (SBTi) find it harder to access cheap financing on international markets.

Institutional investors look at ESG indicators just as closely as profitability. In 2026, “Green Risk” is an integral part of provider assessment, standing alongside financial risk and cyber risk. A provider with a high carbon footprint – inefficient data centres, old hardware, no energy policy – exposes themselves to transformation risks: carbon taxes, rising energy prices, costs of adjusting to regulations. The bank feels these risks directly as an increase in costs.

Sustainable procurement: Selection mechanism

In response, banks are implementing rigorous “Sustainable Procurement” policies. ESG criteria are becoming a necessary condition – knock-out criteria – in tenders for IT services. The effect? A Software House without an ESG report falls out in pre-qualification, regardless of price or code quality. You can have the best team of programmers and the most competitive quote – without certified emissions data, the offer goes in the bin.

Banks use their purchasing power to force providers to reduce emissions. Cooperation with vendors unable to provide credible data on the carbon footprint of services – cloud energy intensity, carbon footprint of developers’ work – is gradually being phased out.

Green coding

A new dimension of competition is appearing on the market: Green Coding. Banks are starting to require code optimization in terms of energy consumption. Bad code is not just a slow application – it is an application using more electricity in the cloud, raising the bank’s bills and worsening ESG indicators. Every non-optimal loop, every unnecessary database query translates into kilowatt-hours. Providers offering energy optimization of software gain an advantage in tenders.
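
A small, hedged example of what green coding means in practice – the same business question answered two ways, where the second moves far less data and therefore burns less energy. The `db` handle and schema are hypothetical:

```python
# `db` stands for any hypothetical database handle with an execute() method;
# the schema is invented for the example.

def monthly_total_naive(db, account_id: str, month: str) -> float:
    """Energy-hungry pattern: pull the whole table, filter in memory."""
    rows = db.execute("SELECT * FROM transactions")  # full scan, bulk transfer
    return sum(r["amount"] for r in rows
               if r["account_id"] == account_id and r["month"] == month)

def monthly_total_green(db, account_id: str, month: str) -> float:
    """Greener pattern: let the database aggregate on indexed columns,
    moving a single row instead of millions - fewer CPU cycles, fewer
    kilowatt-hours, a smaller cloud bill."""
    rows = db.execute(
        "SELECT SUM(amount) AS total FROM transactions "
        "WHERE account_id = ? AND month = ?",
        (account_id, month))
    return rows[0]["total"] or 0.0
```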

Procurement decision: Who gets excluded?

Under Criterion #3, the selection process is binary:

  • No Data, No Deal: The provider does not report emissions in Scope 1, 2, and 3. A lack of certified data means the offer is rejected – the bank cannot write “zero” in the CSRD report or write “data unavailable” without risking an audit and potential penalties.
  • No decarbonisation plan: The provider does not have approved emission reduction targets, for example, under the Science Based Targets initiative (SBTi). Banks require specifics: approved goals, a schedule of actions, measurable milestones.
  • Greenwashing: The provider claims “climate neutrality” based solely on cheap offsetting of emissions – planting forests that will absorb CO2 only in 20–30 years, if they don’t burn down first. In 2026, banks recognize greenwashing immediately. Climate neutrality must be based on reducing energy consumption and optimizing infrastructure, not on buying offset certificates.

Criterion #4: Modernization Competence – Demographics and AI

Procurement Question: “Who will fix our critical systems when the last experts retire?”

Despite the move to the cloud, mainframes in 2026 remain the beating heart of transaction systems for many of the world’s largest banks, processing billions of operations daily. However, this technology faces a threat that is not technical, but demographic.

Demographic time bomb

The average age of experts in mainframe technology and COBOL is over 55. The skills gap widens every year – experienced engineers retire, and universities have not trained successors for years. Students do not learn COBOL – it is seen as “dinosaur” technology, unfit for a modern CV. Young programmers want to write in Python, React, Go – they want to build mobile apps and AI systems, not maintain code from the 1970s.

A paradox arises: systems processing hundreds of billions of dollars daily depend on a shrinking group of specialists whose services get more expensive every year. This is a classic operational risk under DORA. Market surveys speak clearly: 79% of business leaders consider acquiring the right resources and skills the biggest challenge related to maintaining mainframe platforms.

A Software House that ignores this problem and offers only “trendy” cloud-native technologies cannot step into the role of a strategic partner for a Tier-1 Bank. They remain a partner only for “pretty interfaces” – layers visible to the user. What happens deep down, in the core banking system, remains an unsolved problem.

Modernization strategies: Surgical precision vs AI

In 2026, banks require more from IT partners than just “rewriting the system”. They have learned painful lessons from modernization projects that ended in disaster – burned budgets, broken systems, court settlements. Today they require a strategy matched to the type of technical debt.

  • Tier-1 Legacy Bank: Possesses millions of lines of code in COBOL/PL/I – code written over decades, layer by layer, by different programmers, often without documentation. Here the bank looks for a partner for “surgical migration” or “safe hibernation”. You cannot turn off the mainframe and rewrite everything from scratch – that would be operational suicide.
  • Digital Challenger: Does not have a mainframe but struggles with “Legacy 2.0” – microservices written 5–6 years ago in technologies that have already left the mainstream: old versions of Angular, Node.js without LTS support, libraries that are not updated.

The role of AI in modernization

In 2026, banks are using AI on a massive scale as a tool to mitigate the risk of knowledge loss. Advanced models analyze old code, document it, and translate it into modern languages – Java, C#, Python. A young programmer who does not know COBOL can use AI to analyze code from 1987, generate documentation, and translate it into pseudocode understandable to a Java developer. This changes the economics of modernization.
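
As a sketch of that workflow – analyze, document, translate – assuming a generic `llm` client placeholder with a hypothetical `complete()` method rather than any specific model or vendor:

```python
def document_legacy_module(cobol_source: str, llm) -> dict:
    """Three AI-assisted steps: analyze, document, translate. The `llm`
    argument is a placeholder client; every output still needs review by
    a human expert (the 'black box' risk)."""
    analysis = llm.complete(
        "Identify the business rules implemented in this COBOL:\n" + cobol_source)
    documentation = llm.complete(
        "Write maintenance documentation for this COBOL module:\n" + cobol_source)
    pseudocode = llm.complete(
        "Rewrite this logic as language-neutral, object-oriented pseudocode; "
        "do not mimic the procedural COBOL structure:\n" + cobol_source)
    return {
        "analysis": analysis,
        "documentation": documentation,
        "pseudocode": pseudocode,
        "human_review_required": True,  # non-negotiable in a banking context
    }
```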

However, the Software House must show caution. Banks understand the “Black Box” risk – a situation where AI generates code that no one understands.

Failure mode (Modernization failure)

Example: A Software House boasted about using proprietary AI for automatic COBOL → Java translation. They promised a 70% cost reduction and halving the migration time. The result? Code was created in Java that structurally mimicked old COBOL procedures – so-called “Java-flavoured COBOL”. The code remained unintelligible to Java programmers because it used procedural patterns instead of object-oriented ones. No one could maintain it. The TCO (Total Cost of Ownership) of the system rose instead of falling, and the bank had to hire expensive consultants to “undo” the migration. In some modules, they reverted to the original COBOL. Lesson: automated migration must go hand in hand with a deep understanding of programming patterns and target architecture.

Citizen development: Parallel growth

At the same time, facing a shortage of engineers, banks are opening up to “Business Technologists” – domain experts using Low-Code/No-Code platforms. These are business analysts building simple applications without writing traditional code – dragging components in a graphical interface, connecting to data sources, creating forms and reports.

The CIO’s role is evolving towards “platform guardian”. The IT department’s job is not to build all applications – that is impossible with the current resource shortage. Instead, IT provides a secure environment (Governance) where the business builds simple apps without violating security standards.

Procurement decision: Requirements for the partner

Under Criterion #4, the Software House must prove:

  • AI tools for migration: The provider needs proprietary or proven AI tools that speed up the analysis and refactoring of legacy code. Manual rewriting takes too long and costs too much – banks hold millions of lines of code. Without case studies from successful migrations, there is no credibility.
  • Refactoring coverage: A guarantee that after migration, the code will be covered by unit tests to a degree exceeding 80%. Without tests, every change in the new code becomes a lottery – it might work, or it might break something in another module.
  • Talent academies: Training programmes (e.g., New to Z) guaranteeing a supply of staff to maintain legacy systems. Relying only on increasingly expensive freelancers, bidding against competitors for a shrinking group of specialists, is not a long-term model.

Criterion #5: Business responsibility (Managed Outcomes)

Procurement Question: “Why should we pay for your time when we are only interested in the result?”

The traditional IT outsourcing model, based on “body leasing” (Time & Material – T&M billing), is eroding in 2026. For years, banks rented hundreds of specialists, paying for working time – regardless of whether the project went according to plan or derailed. The bank took on the full risk of project management. Facing the “Efficiency Trap” and DORA requirements for a clear division of responsibility, boards have decided this model is unacceptable.

The twilight of the “Time & Material” model

For Software Houses, this means the end of the era of “selling CVs”. It is not enough to issue an invoice for 1000 man-hours of a senior developer. Banks are mass-migrating to Managed Services and Managed Outcomes models. The structure of margin and risk is changing:

  • T&M Model (Declining): Low provider margin, all risk on the bank’s side. In 2026, used only for simple tasks (staff augmentation) or in small cooperative banks without resources to manage providers in advanced models.
  • Managed Outcome Model (Growing): Higher margin for the provider – a premium for taking over risk. The bank pays only for the effect. This requires operational excellence from the provider. If they “burn through” the budget, they cover it from their own pocket.

In this operating model, the bank defines the goal – for example: “maintain availability of the instant payment system at 99.999%” or “deploy an AI fraud detection module in 3 months with a detection accuracy of at least 95%”. The provider prices the realization of this goal, taking on the risk of choosing methods, tools, and staff.

Economics of cooperation: SLA and malus clauses

In 2026, a contract with a Software House looks more like an insurance policy than a contract for specific work. From the Vendor Risk Management perspective, the deciding factors are:

  • SLA (Service Level Agreement): Guarantees of technical availability. Not “we will try our best” – but concrete numbers. 99.9% availability means a maximum of 8.76 hours of downtime per year. Exceeding this? Financial penalties.
  • Business KPIs: Linking provider remuneration to business results. If a bank launches a new online loan sales channel, the conversion rate from application to loan payout becomes the provider’s KPI. Conversion below 15%? Malus (penalty). Above 20%? Bonus.
  • Risk Sharing / Malus Clauses: Clauses where the provider financially participates in the costs of failure (under the DORA regime) or implementation delays. Example: if the payment system goes down for 2 hours during Black Friday, the bank’s reputational and operational costs can reach millions. The provider pays a defined percentage of these costs. A back-of-the-envelope sketch of the SLA and malus arithmetic follows this list.
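
The arithmetic behind these clauses is simple enough to sketch. The downtime budgets follow directly from the SLA percentages; the malus and bonus sizes in the conversion example are assumptions for illustration:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def allowed_downtime_hours(sla: float) -> float:
    """Annual downtime budget implied by an availability SLA."""
    return HOURS_PER_YEAR * (1 - sla)

print(allowed_downtime_hours(0.999))    # 8.76 h/year, as cited above
print(allowed_downtime_hours(0.99999))  # ~0.09 h/year, i.e. about 5 minutes

def loan_channel_fee(base_fee: float, conversion: float) -> float:
    """Malus/bonus bands from the lending example above; the 10% malus
    and 5% bonus sizes are assumed for illustration."""
    if conversion < 0.15:
        return base_fee * 0.90   # malus: conversion below 15%
    if conversion > 0.20:
        return base_fee * 1.05   # bonus: conversion above 20%
    return base_fee

print(loan_channel_fee(100_000, 0.12))  # 90000.0 - penalty applied
```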

Real-world failure scenario (Failure mode)

A case of failure: A bank chose a provider in the Managed Outcome (Fixed Price) model, guided by the lowest lump sum price – 30% cheaper than the competition. It sounded ideal – fixed price, predictable budget, all risks on the vendor’s side. The problem? To keep a margin at such a low price, the provider started cutting costs. They limited regression tests – instead of full coverage, they tested only the “happy path”. They limited code review – seniors checked every fifth change instead of every one. The result: despite “delivering” functionality on time, the system proved unstable. Technical debt hidden in code quality exploded after deployment – failures, rollbacks, crisis fixes. The TCO (Total Cost of Ownership) of the system turned out to be 200% higher than assumed. The bank formally saved on deployment but spent a fortune on maintenance.

The talent gap and the role of MSSP

The shift to Managed Outcomes is also an answer to the talent crisis. Banks are losing the fight for the best engineers against BigTech – Google, Meta, Amazon pay twice as much and offer benefits a bank will never match.

Managed Services providers, operating on a global scale, manage career paths better – they can rotate people between projects and offer technological variety. For the bank, this means stability – a programmer leaving becomes the provider’s problem, who must ensure service continuity according to the SLA. The bank does not wake up to the news “senior developer left for the competition, project is stalled”.

This becomes especially visible in cybersecurity. The shortage of SecOps experts means banks are handing over entire monitoring processes (SOC – Security Operations Center) to specialized MSSP (Managed Security Service Providers). Forecasts point to a growth in the share of MSSP in the banking security market by 2026.

Procurement decision: Who enters the game?

Under Criterion #5, banks in 2026 are looking for partners who:

  • Price flexibility: Offer models with a gain-sharing mechanism (sharing benefits from optimization), and not just rigid hourly rates. Example: the provider optimizes code and reduces cloud infrastructure costs by 20%. Instead of keeping the savings for themselves, they share them with the bank in a 50/50 ratio. This aligns interests.
  • Quality guarantee: Readiness to take contractual responsibility for bugs in the code (warranty & support) instead of pushing them onto bank maintenance teams. In the traditional T&M model, when a programmer wrote code with an error, the bank paid for the hours to fix it. In the new model, the provider fixes errors at their own cost during the warranty period (often 12–24 months).
  • Technical maturity benchmarks: The provider must show measurable indicators of operational excellence. For example, Innovation Cycle Time – how fast they implement improvements in the managed process. If the cycle from idea to deployment takes 3 months, while the competition does it in 3 weeks, that is a red flag. The bank is buying not just code, but the ability for the system to evolve quickly.

Criterion #6: Ability to co-create IP (Composable banking)

Procurement Dilemma: Are we buying flexibility or building a new dependency?

“Composable banking” has stopped being a technological novelty – it is the dominant standard for new deployments. The market for these applications is growing at a rate of 17.5% per year (CAGR), and forecasts indicate that by 2028 it will reach a value of 11.8 billion dollars.

These numbers carry concrete consequences. Banks are abandoning monolithic structures in favour of an approach based on PBC (Packaged Business Capabilities). Under this model, systems are built from independent, ready-made components – such as a credit engine, KYC module, or general ledger – communicating via API.

For software houses, this is the end of the era of simple body leasing. Banks are not looking for contractors for isolated custom software development. They need advanced integrators who can orchestrate ready-made elements. Financial institutions are moving away from risky “Big Bang” projects (replacing the whole system in one day). They are replacing them with a “Hollowing out the Core” strategy – gradual decomposition of the core. Functions are moved to modern platforms step by step until the heart of the old system stops beating.

Intellectual property dilemma: Commodity vs differentiator

The expansion of SaaS solutions forces Chief Information Officers (CIOs) to settle a strategic issue: where does competitive advantage lie in a world of ready-made blocks? When everyone uses the same process engines from a narrow group of global vendors (e.g., Mambu, Thought Machine), standing out in the market requires a strategic decision on where to locate unique IP.

In 2026, a division is crystallizing that determines the role of the software house. The strategic dichotomy “Build vs Buy” depends directly on the classification of the given business area:

| Area | Strategy | Role of Software House | Status of IP (Ownership) |
|---|---|---|---|
| Commodity (e.g., General Ledger, SEPA payments) | BUY (SaaS) | Integrator. Deploys a ready solution “out of the box”, minimizes costly customization. | IP belongs to the SaaS provider. The bank pays only for the licence. |
| Differentiator (e.g., Risk scoring, UI/UX, AI personalization) | BUILD (Custom) | Co-creator. Builds a unique solution from scratch or based on open source. | IP must belong to the bank. These are the “crown jewels”. |

Source: Own compilation based on market analyses.

The conclusion is clear: a software house trying to sell its own closed system to handle a “Differentiator” area does not understand the needs of a Tier-1 class bank. These institutions look for partners acting as an extension of internal R&D departments and agreeing to the transfer of copyright in a “work for hire” model.

Risk of “vendor lock-in” and skills matrix

The composable model, despite its advantages, generates new threats. A multiplicity of providers creates a risk of dependency (vendor lock-in) in a new form – not on a single monolith, but on a dense web of API connections. That is why in 2026, banks place emphasis on “vendor agnostic” architecture. The system must allow for relatively easy and safe swapping of modules. This is the foundation of the resilience strategy required by the EU DORA regulation – the bank must possess a real exit plan (exit strategy) for every critical component.

Skills matrix (2026)

To handle this model, a software house must demonstrate a new set of skills, verified during a technical audit (a minimal event-publishing sketch follows the list):

  • Technical (must-have): Proficiency in Event-Driven Architecture (e.g., Kafka) and containerization (Kubernetes). Excellent quality of API documentation (Swagger/OpenAPI) is also key.
  • Operational: FinOps, meaning the ability to optimize cloud costs (flexibility cannot ruin the budget) and Site Reliability Engineering (SRE) to ensure business continuity.
  • Integration: Deep knowledge of the PBC ecosystem (e.g., Salesforce, Temenos, Mambu) and the ability to tie them together securely without creating a tangle of dependencies (“spaghetti code”).
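
A minimal illustration of the event-driven style banks audit for, assuming the `kafka-python` package and a reachable local broker; the topic name and payload are invented for the example:

```python
import json
from kafka import KafkaProducer  # assumes kafka-python and a reachable broker

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

# A PBC (here, a KYC module) publishes a domain event instead of calling
# other modules directly; a consumer can be swapped without touching this code.
producer.send("customer.kyc.completed", {
    "event_type": "KycCompleted",
    "schema_version": "1.0",     # versioned contracts keep modules replaceable
    "customer_id": "cust-42",
    "risk_rating": "standard",
})
producer.flush()
```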

Procurement decision: Who fails?

Within the analysis of Criterion #6, banks ruthlessly reject providers who show:

  • Lack of technological neutrality: Attempts to deploy their own closed solutions where SaaS or open source is the standard.
  • Weak API culture: Treating API as a technical add-on, not a product (API-as-a-Product), which drastically hinders integration and raises maintenance costs.
  • Resistance to IP transfer: No consent to joint venture models or full transfer of source codes in the layer building competitive advantage (“Differentiator”).

Criterion #7: AI Governance Maturity (Agentic AI)

Procurement Question: “Who is criminally liable when AI refuses a loan – us or the provider?”

The year 2026 is a breakthrough moment. Artificial intelligence in banking is moving from the generative phase (content creation, simple chatbots) to the agentic phase (Agentic AI). These systems autonomously plan, make decisions, and execute complex actions on behalf of the bank or client – e.g., independently processing debt restructuring. Capgemini reports indicate that the adoption of Agentic AI is accelerating, and the average Return on Investment (ROI) is 1.7x. However, deploying systems with such high autonomy redefines legal and ethical responsibility.

EU AI Act: New rules of the game

In 2026, banks operate under the full regime of the EU AI Act. Creditworthiness assessment systems and other decision-making algorithms have gained the status of “high-risk” systems. This forces banks to implement rigorous supervision frameworks (a minimal guardrail sketch follows the list):

  • Human oversight: Decisions of high importance must be approved or supervised by a human. The machine suggests, but does not rule.
  • Explainability (XAI): The bank must explain the logic of every AI agent decision. Functioning as a “black box” is illegal.
  • Resilience: Systems must pass regular tests for attacks (e.g., prompt injection) and bias.
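
A deliberately simplified sketch of a guardrail layer: policy checks sit between the agent and any real-world action, so a manipulated prompt cannot bypass them. Policy names and thresholds are illustrative, not taken from the EU AI Act text:

```python
# Illustrative policy limits - not taken from the EU AI Act text.
CREDIT_POLICY = {"max_discount": 0.05, "human_approval_above_eur": 50_000}

def execute_agent_action(action: dict, human_approver=None) -> dict:
    """Guardrail layer between the AI agent and the real world."""
    # Hard barrier: the agent cannot exceed credit policy, no matter
    # what a manipulated prompt (prompt injection) made it propose.
    if action.get("discount", 0.0) > CREDIT_POLICY["max_discount"]:
        raise PermissionError("guardrail: discount above credit policy")
    # Human oversight: high-stakes decisions need explicit approval.
    if action.get("amount_eur", 0) > CREDIT_POLICY["human_approval_above_eur"]:
        if human_approver is None or not human_approver(action):
            raise PermissionError("guardrail: human approval required")
    log_decision(action)  # explainability: every decision leaves a trace
    return {"status": "executed", **action}

def log_decision(action: dict) -> None:
    print("AUDIT:", action)

# A manipulated agent proposing a 30% discount is stopped by code, not by
# policy PDFs:
# execute_agent_action({"type": "offer_product", "discount": 0.30})
# -> PermissionError
```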

Bank boards place “AI governance” on par with financial risk management. Dedicated AI ethics committees are forming, and algorithm audits are standard operating procedure.

Real-world failure scenario (Failure mode)

To understand the weight of the problem, let’s analyze a specific situation. A bank deploys an autonomous customer service agent from a software house that did not implement adequate safety barriers (guardrails). The agent, subjected to manipulation (prompt injection), begins offering products on terms grossly inconsistent with credit policy.

The result? Direct financial losses, the need to shut down the system, and a severe penalty from the regulator for lack of supervision. Worse still, it turns out the software house does not hold liability insurance covering algorithmic errors, leaving the bank with the full weight of responsibility.

Economics of AI: Hidden costs (AI FinOps)

Choosing an AI provider in 2026 is a question of operational economics. AI models generate huge variable costs (number of tokens, GPU power). Banks reject “AI compliant” solutions that are financially unsustainable. The provider must implement AI FinOps, demonstrating cost optimization. Example: where technically possible, instead of expensive LLM models, they use smaller, specialized models (SLM – Small Language Models) that perform tasks with the same precision but for a fraction of the price.
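
A sketch of the routing idea, with invented model prices and intent lists – the shape of the logic is the point, not the numbers:

```python
# Invented prices and intents; real figures depend on the chosen models.
MODELS = {
    "slm": {"cost_per_1k_tokens": 0.0002},  # small, specialized model
    "llm": {"cost_per_1k_tokens": 0.0100},  # large general model
}
ROUTINE_INTENTS = {"balance_query", "card_block", "statement_request"}

def route(intent: str) -> str:
    """Routine requests go to the cheap SLM; the LLM is reserved
    for complex cases such as debt restructuring."""
    return "slm" if intent in ROUTINE_INTENTS else "llm"

def monthly_bill(requests: dict[str, int], avg_tokens: int = 800) -> float:
    return sum(
        count * (avg_tokens / 1000) * MODELS[route(intent)]["cost_per_1k_tokens"]
        for intent, count in requests.items()
    )

traffic = {"balance_query": 95_000, "debt_restructuring": 5_000}
print(f"routed: ${monthly_bill(traffic):,.2f}")        # ~$55
print(f"llm-only: ${100_000 * 0.8 * 0.0100:,.2f}")     # $800 for the same traffic
```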

Procurement decision: AI security requirements

Under Criterion #7, a software house must meet boundary conditions. Missing any of these disqualifies the provider:

  • Model auditability: Full documentation of the training process and model operation (compliance with EU AI Act).
  • AI security barriers: Implementation of mechanisms physically preventing AI agents from acting against bank policy – regardless of the content entered by the user.
  • Risk-adjusted pricing: Taking responsibility for model “hallucinations” and factoring the risk of erroneous agent decisions into the billing model.

Summary: New Partner Archetype and CIO Checklist

The Great Market Segmentation: Who survives 2026?

An analysis of the intersection of Managed Outcomes, Composable Banking, and Agentic AI trends reveals a permanent stratification of the IT provider market. The concept of “one market” is dead. Three leagues have emerged, and promotion from a lower to a higher one is becoming increasingly difficult. Analyzing the seven criteria allows us to clearly identify the players:

  • Group 1: Classic body leasing (Dinosaurs). Companies offering specialist rental (Time & Material) without responsibility for the result. Their only asset – a low rate card – is losing meaning. DORA requirements and supply chain management mean the cost of supervising such entities exceeds the savings. They are pushed to simple maintenance work or the role of subcontractors.
  • Group 2: Niche point vendors (Endangered). Providers of single applications, threatened by the wave of consolidation. Banks are cutting compliance costs (fewer audits = cheaper), preferring “All-in-One” platforms or large integrators. Only those with a unique “Differentiator” solution or perfect integration in an API-first model will survive.
  • Group 3: Outcome-driven partners (The New Archetype). Partners who have modernized their business model: they accept Managed Outcomes (responsibility for KPIs), possess AI Governance frameworks, report Scope 3 emissions, and operate largely as hybrids (integrating SaaS + building custom IP). These are the “first choice” (Tier-1 Vendors). They do not compete on hourly price, but on the total cost of achieving the result (Risk-weighted TCO).

Partner Validation Algorithm (Model 2026)

Based on the trend analysis, we define the formula for a software house that wins tenders in leading institutions. It is no longer a technology company, but an entity managing risk:

$$\text{Software House 2026} = \text{Competence [X]} + \text{Responsibility [Y]} + \text{Model [Z]}$$

Where:

  • [X] Competence: “Automation-First” engineering (DORA metrics), AI Governance (EU AI Act compliance), FinOps & GreenOps.
  • [Y] Responsibility: Taking over operational risk (SLA), regulatory risk (DORA Compliance), and sharing business risk (Managed Outcomes).
  • [Z] Cooperation Model: Transparent ESG Scope 3 reporting, results-based billing, readiness for IP transfer (Exit Strategy Support).

Decision Checklist: Verifying the New Reality

For software houses, the conclusion is clear: a redefinition of partnership is necessary. The IT provider is becoming an integral part of the bank’s resilience system. The analysis below summarizes the areas verified by CIOs, CPOs, and risk officers.

I. Stability and Security (Boundary Conditions)

This is the selection sieve (knock-out criteria). Failure to meet these requirements ends conversations.

  • DORA Readiness: Banks demand evidence, not declarations: registers of subcontractors, 24h audit readiness, TLPT test scenarios.
  • Financial Stability: Cashflow analysis over a 3–5 year perspective, not just current revenue.
  • ESG and Cyber: Hard Scope 3 reporting and a cyber insurance policy for amounts adequate to the scale of cooperation.

II. Business Responsibility (Shortlist Condition)

Approach to risk determines entry onto the shortlist.

  • Managed Outcomes: Margin linked to achieving goals, not just delivering hours.
  • Risk Sharing: Malus clauses for critical downtime.
  • AI Governance and Team: Compliance with EU AI Act and an engineer/management structure of min. 70:30 (elimination of facade structures).

III. Architecture and Future (Strategic Fit)

Long-term vision decides victory.

  • Exit Strategy: The provider actively supports the exit plan and IP transfer (work for hire).
  • DORA Metrics: Reporting MTTR and Deployment Frequency in real-time.
  • Legacy Modernization: Ability to handle mainframe/Cobol using modern AI tools.

Final Conclusions: Architects of Resilience

Banking in 2026 has undergone a deep metamorphosis. Financial institutions today are “Architects of Resilience” – advanced organizations managing a complex web of dependencies: from mainframes to autonomous AI agents.

Four imperatives for leaders and their IT partners:

  1. Simplification is innovation: Funding transformation (“Change”) requires radical cutting of maintenance costs (“Run”) through automation.
  2. Consolidation for security: DORA forces a limitation on the number of partners in favour of tighter control.
  3. Responsible autonomy: AI drives efficiency only within strict legal frameworks.
  4. Sustainable value chain: ESG Scope 3 is a hard parameter of business risk and cost of capital.

Choosing a software house in 2026 is not a technical decision. It is a strategic decision about the organization’s resilience to coming shocks. Cooperation with an entity ready for this challenge is a condition for survival in the new reality.

FAQ

How do banking software development companies ensure operational efficiency?

Top banking software development companies boost operational efficiency by automating maintenance to escape the “efficiency trap.” They replace legacy systems with custom software solutions and use DORA metrics to measure deployment speed, ensuring banking operations run smoothly without consuming the entire budget.

What role does software development play in wealth management?

Modern software development enables financial institutions to build advanced wealth management platforms. By using fintech software development practices, retail and commercial banks can integrate AI tools that automate asset analysis and offer scalable digital solutions tailored to client needs.

Why choose a global software development company for digital banking?

A global software development company brings managed services and risk management expertise to digital banking. They handle the entire development lifecycle, ensuring regulatory compliance (like DORA) and delivering custom solutions that local providers might lack resources to support.

How does custom banking software improve customer engagement?

Custom banking software allows financial sector leaders to build unique digital banking portals and mobile banking apps. These custom solutions enhance customer engagement by offering personalized personal finance management tools that off-the-shelf banking software cannot provide.

What are the key features of core banking software in 2026?

Modern core banking software must support embedded finance and payment systems via composable architecture. Leading core banking software companies prioritize technological neutrality, allowing financial institutions to swap modules without vendor lock-in while maintaining a secure core banking platform.

How does mobile app development impact banking services?

Mobile app development is critical for delivering accessible banking services. A skilled development company uses an agile development process to create mobile banking applications that handle payments processing and fraud detection securely, ensuring a seamless user experience.

Why is financial software development shifting to Managed Outcomes?

Financial software development is moving to Managed Outcomes to align software development services with business results. Instead of paying for hours, financial software companies are paid for achieving KPIs, such as system availability or conversion rates in digital lending systems.

What defines the best banking software development services?

The best banking software development services combine financial software development expertise with ESG transparency. They offer full cycle software development that includes green coding to reduce cloud services emissions, meeting strict banking industry sustainability goals.

How to manage legacy banking software modernization?

To manage legacy banking software, utilize banking software development partners who employ AI tools for refactoring. This approach safely migrates core banking services to modern languages, resolving the demographic gap in mainframe talent without disrupting critical financial software solutions.

What software development company fits traditional financial institutions?

Traditional financial institutions need a software development company founded on engineering efficiency. The ideal partner offers custom banking software development with a high ratio of engineers (“doers”) to ensure fast digital transformation and reliable account management systems.

How do web development and mobile app strategies align?

Effective web development and mobile app strategies share a common technology partner to ensure consistency. By synchronizing online banking solutions with mobile app development, banks create a unified experience across all digital banking platforms and customer touchpoints.

This blog post was created by our team of experts specialising in AI Governance, Web Development, Mobile Development, Technical Consultancy, and Digital Product Design. Our goal is to provide educational value and insights without marketing intent. 

If you want to meet us in person, click here and we’ll get in touch!