Banks are progressively adopting artificial intelligence, automation, and algorithmic decision-making. What remains less discussed is the underlying structure of these decisions – synthetic data, proxy models, inferred behaviours, and simulated realities that are gradually standing in for real customers, real transactions, and real risk.
This transformation was not the result of a single major decision; it has occurred gradually, often imperceptibly, out of operational necessity. The need to fill data gaps, create realistic testing environments, respect privacy constraints, and manage time pressure has repeatedly tipped the balance towards approximation over completeness.
Consequently, many banking decisions today are not based solely on “real” customer behaviour. Instead, they are increasingly based on synthetic representations of reality. Yet, the governance structures are primarily rooted in an era when data was assumed to be observable, traceable, and historically grounded.
This is not a technology problem. It is a governance problem.
From automation to abstraction
Initial banking automation was transactional. Systems executed rules designed by humans against clearly defined inputs, with a linear accountability structure. The advent of algorithmic banking changed this structure. Decision-making became probabilistic, relying on patterns rather than rules, with learning systems adjusting behaviour over time.
Synthetic banking takes this a step further. Decisions are now influenced not only by observed reality but also by constructed reality. Synthetic datasets simulate customer profiles, synthetic transactions model potential flows, and synthetic stress scenarios project behaviour under hypothetical conditions.
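To make the shift concrete, consider what a synthetic layer can look like at its simplest. The sketch below is purely illustrative – the distributions, parameters, and field names are assumptions, and production pipelines typically rely on far more sophisticated generative models:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def generate_synthetic_customers(n: int) -> list[dict]:
    """Sample hypothetical customer profiles from assumed distributions:
    statistically plausible, but never actually observed."""
    profiles = []
    for _ in range(n):
        profiles.append({
            "income": round(rng.lognormal(mean=10.5, sigma=0.6), 2),  # assumed income shape
            "credit_utilisation": round(rng.beta(a=2, b=5), 3),       # assumed utilisation shape
            "tenure_months": int(rng.integers(1, 240)),               # assumed tenure range
        })
    return profiles

# Every decision made downstream of this call inherits the
# distributional assumptions hard-coded above.
customers = generate_synthetic_customers(1000)
```

Nothing in that dataset ever happened. Its usefulness depends entirely on whether the assumed distributions still resemble the world the bank actually operates in.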
But this advancement introduces a subtle shift: decisions are increasingly based on what could plausibly be true, rather than what has been demonstrably observed. This change significantly alters the risk landscape.
Where governance quietly falls behind
Most bank governance frameworks assume that data represents reality, models operate on observable inputs, and accountability can be traced through committees and approvals. Synthetic systems challenge all these assumptions.
Synthetic data is, by definition, an abstraction. It is designed to resemble reality without being reality: faithful statistically, but not factually. Yet governance structures often treat synthetic outputs with the same confidence as historical evidence.
Model risk frameworks typically focus on model performance, bias, and validation. They are less equipped to assess the ontological status of the data itself. The distinction between observed, inferred, simulated, or generated input data is often not visible at the committee level, blurring accountability lines.
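One way to surface that distinction is to attach explicit provenance to every input a model consumes, so that the mix becomes reportable at committee level. A minimal sketch, assuming a simple four-way taxonomy; the enum values and field names are illustrative, not an established standard:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    OBSERVED = "observed"      # recorded from an actual event
    INFERRED = "inferred"      # derived from other observed data
    SIMULATED = "simulated"    # produced by a scenario engine
    GENERATED = "generated"    # produced by a generative model

@dataclass(frozen=True)
class ModelInput:
    name: str
    value: float
    provenance: Provenance

def provenance_mix(inputs: list[ModelInput]) -> dict[str, float]:
    """Share of inputs by provenance: a single figure a committee
    could ask for on every model-driven decision."""
    total = len(inputs)
    return {p.value: sum(i.provenance is p for i in inputs) / total
            for p in Provenance}
```

A committee that sees "simulated: 0.60" at the top of a paper is asking a different question from one that assumes every number was observed.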
The risk does not sit within a model. It sits between governance layers.
Synthetic data and false comfort
Ironically, synthetic systems, often introduced to reduce risk, can create a different form of risk – false comfort. Dashboards remain green, back-testing passes, stress scenarios show resilience. But these assurances are only as strong as the assumptions embedded in the synthetic layer.
When real-world conditions diverge from simulated ones, institutions may not notice immediately. The signals are delayed. The confidence persists. By the time outcomes diverge meaningfully, the organisation is already committed to the decision path. This is not theoretical. It mirrors earlier failures where internal metrics masked operational fragility.
The difference now is that the masking happens before reality fully unfolds. Synthetic banking accelerates decision-making while weakening the feedback loop.
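Part of that feedback loop can be rebuilt mechanically, by routinely testing whether live outcomes still resemble the synthetic distributions that informed the decision. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the threshold, variable names, and example figures are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def divergence_alert(simulated: np.ndarray, observed: np.ndarray,
                     p_threshold: float = 0.01) -> bool:
    """Flag when observed behaviour no longer resembles the simulation.
    A low p-value suggests the two samples come from different
    distributions, meaning the simulated confidence may be stale."""
    _, p_value = ks_2samp(simulated, observed)
    return p_value < p_threshold

# Hypothetical example: live loss rates drift above the scenario.
rng = np.random.default_rng(0)
simulated_losses = rng.normal(loc=0.020, scale=0.005, size=5000)
observed_losses = rng.normal(loc=0.035, scale=0.008, size=500)

if divergence_alert(simulated_losses, observed_losses):
    print("Escalate: reality has diverged from the synthetic layer.")
```

The statistical test is trivial; the governance question is who receives the alert, and whether anyone holds the mandate to act on it.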
Committees were not designed for constructed reality
Bank committees evolved to manage human decisions, supported by data. They were not designed to interrogate simulated worlds. Few committees ask fundamental questions about the assumptions embedded in the synthetic layer, the behaviours that have been inferred rather than observed, the uncertainties smoothed away for model usability, or the points at which simulated confidence replaces empirical evidence.
These questions often fall outside formal mandates, creating a governance gap that is subtle, systemic, and difficult to detect – precisely the kind of risk that institutions historically struggle with most.
The human accountability problem
One of the most under-discussed aspects of synthetic banking is how it changes human accountability. When decisions are grounded in historical data, outcomes can be challenged against observable evidence. When they are grounded in synthetic constructs, challenge becomes harder, and the organisation ends up debating the model rather than the decision.
Accountability becomes procedural rather than substantive, not due to a failure of intent but a failure of institutional design.
Regulation will follow – but slowly
Regulators are beginning to engage with synthetic data, particularly around privacy and testing. However, regulatory frameworks tend to lag operational reality.
Banks should not wait for prescriptive rules. By the time formal regulation arrives, institutions will already have embedded synthetic systems deep into credit, fraud, pricing, and customer management. The more resilient approach is to treat synthetic banking as a governance design challenge, not a compliance exercise.
That means making the use of synthetic data visible at senior levels, distinguishing between observed and constructed inputs in decision artefacts, rethinking committee mandates to include epistemic risk, not just model risk, and accepting that not all confidence is equal – some confidence is simulated.
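In practice, the second of those points could be as simple as requiring every decision artefact to declare its input mix before it reaches a committee. A hypothetical sketch; the field names and one-line disclosure format are assumptions, not a regulatory requirement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedInput:
    name: str
    value: float
    constructed: bool  # True if simulated or generated rather than observed

def committee_header(decision: str, inputs: list[TaggedInput]) -> str:
    """A one-line disclosure a committee could require on every paper."""
    share = sum(i.constructed for i in inputs) / len(inputs)
    return f"Decision: {decision} | constructed inputs: {share:.0%}"

print(committee_header("Approve revised credit limits", [
    TaggedInput("projected_default_rate", 0.021, constructed=True),
    TaggedInput("average_balance", 4312.77, constructed=False),
]))
# Decision: Approve revised credit limits | constructed inputs: 50%
```

None of this requires new technology. It requires deciding that the distinction matters enough to be written down.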
A quiet design choice facing banks
Synthetic banking is not inherently dangerous. It can enable safer experimentation and better resilience. But it changes the nature of institutional decision-making. Banks now face a quiet design choice.
They can continue to layer synthetic systems onto governance structures designed for a different era, accepting growing opacity as the price of speed. Or they can redesign governance to recognise that decisions are increasingly made in constructed environments, and that accountability must evolve accordingly.
The choice will not appear in strategy decks but will surface later, in outcomes. The institutions that navigate this well will not be those with the most advanced algorithms. They will be those that understand what their systems are actually deciding on – and who remains accountable when reality diverges from the simulation.
Dr. Gulzar Singh, Chartered Fellow – Banking and Technology; Director, Phoenix Empire Ltd