There was a time when a television and a generator could change the direction of a life. For Khomotso Molabe, Group Chief Information Officer at Standard Bank Group, that memory is not just a story about growing up in a village in Limpopo without electricity. It is a reminder that technology has always arrived unevenly, first as wonder, then as disruption, and eventually as something that rewrites how people live, work and make decisions.
That is why AI in financial services cannot be treated as another digital trend to test quietly in innovation labs. In banking and insurance, artificial intelligence is entering the parts of the business where trust matters most: credit decisions, fraud detection, compliance, risk governance and customer confidence. The question is no longer whether banks can use AI. The harder question is whether they can use it responsibly enough, clearly enough and boldly enough to create real value without losing the trust that holds the financial system together.
At the Regenesys AI Summit, held on 9 April 2026 at the Regenesys Education campus in Sandton, Molabe delivered the Financial Services Masterclass, “AI Risk Radar: Smarter Credit, Fraud and Compliance Decisions”. What made the session powerful was that it did not feel like a technical presentation about models, dashboards and automation. It felt like a challenge to every organisation still confusing AI activity with AI progress.
His message was clear: the future of financial services will not belong to the institutions with the most AI experiments. It will belong to those that can turn AI into trusted, explainable and measurable value.
The Real AI Risk Is Not the Technology. It Is How Organisations Respond to It

One of the most striking ideas from Molabe’s masterclass was that organisations often mistake movement for progress. In the race to adopt AI, many businesses become obsessed with the visible signs of activity. They count how many employees have attended AI training. They celebrate how many use cases have been identified. They roll out new tools, run pilots, build innovation walls and proudly announce that the organisation is now “using AI”.
But activity is not the same as transformation.
This distinction matters deeply in financial services because the stakes are higher than in many other industries. A bank or insurer is not simply dealing with internal efficiency. It is dealing with people’s money, risk, access, financial identity, regulatory obligations and institutional trust. A poorly governed AI model can do more than produce a bad output. It can influence credit access, miss fraud signals, generate misleading reports or weaken confidence in a system that depends on trust.
Molabe’s point was not that experimentation is useless. Experimentation has its place. The problem is when experimentation becomes the destination instead of the starting point. AI must move beyond pilots and demonstrations into measurable business value. In financial services, that means better decisions, faster response times, stronger controls, improved customer experiences and clearer accountability.
A financial institution does not become AI-driven because it has impressive tools. It becomes AI-driven when those tools improve the quality, speed, fairness and explainability of the decisions it makes.
The Real Measure of AI Is Not Activity, but Impact
The phrase “AI theatre” may sound harsh, but it captures a real problem. Many organisations want to be seen as innovative before they have done the harder work of becoming operationally ready. They launch AI projects without changing the underlying systems, incentives, workflows or governance structures that determine whether AI can actually create value.
Molabe warned against focusing too heavily on inputs. The number of trained employees, the number of pilots and the number of licences purchased are not the true measures of AI success. They may show that something is happening, but they do not prove that value is being created.
The more important question is whether AI in financial services is helping the organisation solve real problems.
In banking and insurance, those problems are not theoretical. How can credit decisions become smarter without becoming unfair? How can fraud be detected earlier without creating unnecessary friction for legitimate customers? How can compliance reporting become faster without sacrificing accuracy? How can institutions use AI at scale while keeping models explainable, auditable and regulator-ready?
These are the questions that matter.
The organisations that win with AI will not be those with the longest list of use cases. They will be the ones that know which problems are worth solving, which risks must be controlled, and which outcomes must be measured.
Why AI Forces a New View of Professional Expertise

Molabe also raised an uncomfortable but necessary point about the future of professional work. For decades, many professionals have been valued for two abilities: the ability to reason through complex problems and the ability to produce high-quality outputs.
Lawyers produce documents. Engineers write code. Analysts produce reports. Bankers prepare assessments, recommendations and client communication. Compliance teams generate documentation. Risk teams interpret patterns and explain decisions. These are valuable forms of work, but they are also areas where AI is advancing quickly.
AI can summarise, draft, analyse, generate, compare, recommend and automate at a speed that human beings cannot match. That does not mean human expertise becomes irrelevant. It means the value of human expertise must move.
The future professional cannot rely only on producing the first draft, processing routine information or performing repetitive analysis. Those tasks will increasingly be supported or completed by AI. The human advantage will sit in judgement, ethical reasoning, client trust, contextual understanding, exception handling, leadership and the ability to ask better questions.
In financial services, this is a major shift. A credit analyst may spend less time assembling basic information and more time interpreting complex risk. A compliance professional may spend less time drafting routine documentation and more time challenging whether the system is producing reliable outputs. A banker may spend less time preparing standard communication and more time deepening client relationships.
The work does not simply become faster. The work becomes different.
That is why AI adoption cannot be treated as a technology implementation only. It is a people, process and leadership transformation.
Smarter Credit Decisions Need Better Data Discipline

Credit is one of the most important areas where AI can change financial services. Used properly, AI can help institutions assess risk more dynamically, identify early warning signs, analyse complex data patterns and support more consistent decision-making.
But this promise depends on the quality of the data beneath it.
Molabe made this point clearly: poor data does not become safe because AI is placed on top of it. In fact, AI can scale poor data at dangerous speed. If customer data is incomplete, inconsistent, outdated or poorly governed, AI can magnify the weakness. It can make flawed patterns appear more convincing. It can produce faster decisions that are not necessarily better decisions.
This is one of the biggest realities facing AI in financial services today. Many organisations want the benefits of AI, but they are still carrying years of data debt. Their systems may not speak to one another properly. Their records may be fragmented. Their master data governance may be immature. Their data architecture may not yet support the level of decision-making that AI makes possible.
The answer is not to wait until every data problem has been solved. That may take years, and the market will not wait. The smarter approach is to sequence AI adoption based on risk and readiness.
Lower-risk use cases can be tackled first where the data is good enough and the consequences of error are manageable. More sensitive areas, such as credit decisions, regulatory reporting and customer financial information, require stronger controls, better governance and deeper explainability.
This is where leadership matters. AI adoption is not an all-or-nothing decision. It is a sequencing decision. The institutions that understand this will move faster because they will know where to move carefully.
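To make the sequencing idea concrete, here is a minimal sketch, using entirely hypothetical use cases, scores and weighting, of how a team might rank candidate AI work by combining data readiness with the consequence of error, so that low-risk, high-readiness use cases come first.

```python
# Hypothetical sketch: sequencing AI use cases by readiness and risk.
# The use cases, scores and scoring rule are illustrative assumptions,
# not a prescribed methodology.

use_cases = [
    # (name, data_readiness 0-1, consequence_of_error 0-1)
    ("Internal meeting summaries",   0.9, 0.1),
    ("Marketing copy drafts",        0.8, 0.2),
    ("Fraud alert triage support",   0.6, 0.6),
    ("Credit decisioning",           0.5, 0.9),
    ("Regulatory report generation", 0.4, 0.9),
]

def sequencing_score(readiness: float, consequence: float) -> float:
    """Higher score = better place to start: good data, small blast radius."""
    return readiness * (1.0 - consequence)

ranked = sorted(use_cases, key=lambda uc: sequencing_score(uc[1], uc[2]), reverse=True)

for name, readiness, consequence in ranked:
    print(f"{name:30s} readiness={readiness:.1f} "
          f"consequence={consequence:.1f} "
          f"score={sequencing_score(readiness, consequence):.2f}")
```

The point is not the formula itself but the discipline it forces: sensitive areas such as credit decisioning and regulatory reporting stay lower in the queue until data maturity and governance catch up.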
Fraud Detection Requires Earlier Signals and Better Intelligence

Fraud has always been a battle of timing. The earlier a bank or insurer can detect suspicious behaviour, the better it can protect customers, reduce losses and prevent damage. Traditional fraud systems often rely on rules, thresholds and known patterns. These remain important, but they are no longer enough on their own.
Fraudsters adapt. They test systems. They learn from friction points. They exploit gaps between channels, departments and technologies. As financial behaviour becomes more digital, the volume and complexity of signals increase.
AI can help financial institutions identify unusual activity earlier, detect anomalies across large datasets and recognise patterns that may not be obvious to human teams. It can support faster investigation and help teams separate meaningful risk signals from ordinary customer behaviour.
But again, the issue is not simply whether AI can detect fraud. The issue is whether it can do so in a way that is explainable and fair.
If an AI system flags a transaction, claim or customer profile as suspicious, the institution must be able to understand why. A black-box explanation is not good enough when customer trust, regulatory scrutiny and financial access are involved. Fraud teams need intelligence they can act on. Risk teams need outputs they can challenge. Customers need to know that technology is not being used carelessly. In this sense, AI does not remove the need for governance. It raises the standard of governance.
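As a rough illustration of pairing detection with explanation, the sketch below uses synthetic data and hypothetical feature names, with IsolationForest chosen only as a common anomaly detector, to flag unusual transactions and attach simple reason codes showing which features deviated most. That is the kind of output a fraud or risk team could actually challenge.

```python
# Hedged sketch: anomaly detection with human-readable reason codes.
# Synthetic data and feature names are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["amount", "hour_of_day", "merchant_risk", "txn_velocity"]

# Mostly normal behaviour, with a few injected outliers.
normal = rng.normal(loc=[50, 13, 0.2, 2], scale=[20, 4, 0.1, 1], size=(1000, 4))
outliers = rng.normal(loc=[900, 3, 0.9, 15], scale=[100, 1, 0.05, 3], size=(5, 4))
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomalous, 1 = normal

mu, sigma = X.mean(axis=0), X.std(axis=0)
for idx in np.where(flags == -1)[0]:
    # Reason codes: features furthest (in z-score terms) from typical behaviour.
    z = (X[idx] - mu) / sigma
    top = np.argsort(-np.abs(z))[:2]
    reasons = ", ".join(f"{features[i]} (z={z[i]:+.1f})" for i in top)
    print(f"txn {idx}: flagged; main drivers: {reasons}")
```

A real fraud platform would be far richer than this, but the principle stands: every flag should arrive with a reason a human can interrogate.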
Compliance Reporting Must Become Faster Without Becoming Careless

Compliance is one of the areas where AI can create immediate value. Financial institutions operate in highly regulated environments where reporting, documentation, monitoring and audit readiness consume enormous time and resources. AI can help summarise information, draft reports, monitor obligations, identify gaps and support faster regulatory responses.
But compliance is also one of the areas where AI must be handled with extreme discipline.
A well-written AI-generated report is not automatically a reliable report. A confident answer is not automatically a correct answer. A summarised regulation is not automatically a compliant interpretation. In financial services, the cost of sounding right while being wrong can be severe.
Molabe’s reflections on hallucination and control were especially relevant here. Many organisations talk about keeping a “human in the loop”, but in large-scale environments, that phrase can become too simplistic. No human being can manually inspect every line of code, every model output, every generated artefact or every decision pathway at the speed and scale at which AI operates.
That is why the idea of specialist agents checking other agents becomes important. It suggests a future where AI governance is not only manual, but layered. Human beings set the intent, define the risk appetite, design the controls and remain accountable. AI systems then help monitor, test, validate and challenge other AI systems at scale.
This does not replace accountability. It changes how accountability is supported.
The future of compliance may therefore depend on a new architecture where humans and AI work together to create systems that are faster, more transparent and more resilient.
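One way to picture that layering, offered purely as a hypothetical sketch rather than any specific bank or vendor architecture, is a checker step that validates a generated compliance draft against the source figures and required sections before it ever reaches a human reviewer.

```python
# Hypothetical sketch of a "checker agent": validate a generated compliance
# summary against source data before human sign-off. Section names, rules
# and the draft content are illustrative assumptions.

REQUIRED_SECTIONS = ["scope", "findings", "exceptions"]

def check_report(report: dict, source_figures: dict) -> list[str]:
    """Return a list of issues; an empty list means the draft can go to review."""
    issues = []
    for section in REQUIRED_SECTIONS:
        if not report.get(section):
            issues.append(f"missing or empty section: {section}")
    for name, cited_value in report.get("figures", {}).items():
        actual = source_figures.get(name)
        if actual is None:
            issues.append(f"figure '{name}' not found in source data")
        elif cited_value != actual:
            issues.append(f"figure '{name}' cited as {cited_value}, source says {actual}")
    return issues

# Example: a draft with an empty section and one mis-cited figure.
draft = {
    "scope": "Q1 transaction monitoring",
    "findings": "3 alerts escalated",
    "exceptions": "",
    "figures": {"alerts_escalated": 3, "alerts_raised": 41},
}
source = {"alerts_escalated": 3, "alerts_raised": 44}

for issue in check_report(draft, source):
    print("BLOCKED:", issue)
```

Humans still define the rules and own the sign-off; the automated check simply ensures that what reaches them is worth their attention.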
AI in Banking Must Be Built Around Problems, Not Vendor Pressure

One of the most practical parts of the masterclass was Molabe’s view on technology vendors. Large organisations are constantly approached with new tools, platforms, licences and AI-enabled solutions. The temptation is to buy what looks impressive, especially when the market is moving quickly and no organisation wants to appear behind.
But Molabe’s response was simple: what problem does this solve? That question should sit at the centre of every AI discussion.
Financial institutions should not adopt AI because it is fashionable. They should adopt it because it solves a clearly defined problem, improves a measurable outcome or unlocks value that could not be achieved in the same way before.
Sometimes that value will come from established partner solutions. In a large banking environment, scale matters, and existing enterprise tools can unlock efficiency quickly. In other cases, especially where the organisation is exploring new possibilities, it may need to build its own intellectual property by combining internal data, cloud infrastructure, large language models and specialist expertise.
The key is not whether the solution is bought, built or blended. The key is whether it is tied to value.
A value-led approach protects organisations from vendor-driven decision-making. It keeps the focus on the customer, the business problem, the risk environment and the outcome. For banks and insurers, that discipline is essential.
Productivity Is Only the First Layer of AI Value

Productivity is often the first area organisations explore with AI, and for good reason. AI can reduce repetitive tasks, draft standard communication, summarise information and help employees work faster. In banking, this can free employees from routine administration and allow them to focus on higher-value client engagement.
Molabe shared how AI can assist bankers by taking over repetitive manual work and generating routine communication. This is useful, but it is only the beginning.
The second layer of value is speed. AI can help shorten the product development lifecycle, allowing teams to move from idea to prototype much faster. In a sector where customer expectations are changing rapidly, the ability to test, learn and adapt quickly can become a major competitive advantage.
The third layer is innovation. AI can change how customers experience financial information. Molabe referred to Standard Bank’s use of generative AI to transform customer spending data into personalised “money stories”, making financial insights more engaging and memorable than a simple transaction list.
That example matters because it shows AI not only as an efficiency tool, but as a customer experience tool. It can make financial information feel more human, more relevant and more understandable.
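As a loose illustration of the idea, and not Standard Bank’s actual implementation, the sketch below aggregates a handful of hypothetical transactions into a structured summary that could then be handed to a generative model as the basis for a personalised narrative.

```python
# Hypothetical sketch: turning raw transactions into a summary an LLM could
# narrate. Categories, amounts and the prompt wording are illustrative only.
from collections import defaultdict

transactions = [
    {"category": "groceries", "amount": 820.50},
    {"category": "transport", "amount": 310.00},
    {"category": "eating out", "amount": 450.75},
    {"category": "groceries", "amount": 640.20},
]

totals = defaultdict(float)
for txn in transactions:
    totals[txn["category"]] += txn["amount"]

lines = [f"- {cat}: R{amt:,.2f}" for cat, amt in sorted(totals.items(), key=lambda kv: -kv[1])]
prompt = (
    "Write a short, friendly 'money story' for this customer's month.\n"
    "Spending by category:\n" + "\n".join(lines)
)
print(prompt)  # This summary would then be passed to a generative model.
```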
Still, productivity, speed and innovation depend on the same foundations: strong data, clear governance, leadership alignment, capable teams and a willingness to redesign work.
Without those foundations, AI may create noise. With them, it can create value.
The AI-First Bank Is No Longer Science Fiction
Perhaps the most provocative idea from Molabe’s masterclass was the possibility of an AI-first bank. In this model, AI does not sit on the side as a helpful tool. It becomes central to how the bank operates, with human beings guiding, supervising and improving the system.
For many people, that idea may feel uncomfortable. Banking has always been built around trust, judgement and institutional credibility. The thought of AI sitting at the centre of that system raises serious questions.
But the direction is clear. AI will become more embedded in how financial institutions assess risk, serve customers, monitor fraud, develop products, manage compliance and allocate human attention. The future is unlikely to be a simple choice between human-led banking and machine-led banking. It will be a more complex model where AI handles more of the scale, while people handle more of the judgement.
An AI-first bank would need governance by design. It would need explainable models, auditable processes, regulator-ready controls, clear accountability and strong ethical boundaries. It would need leaders who understand that AI is not only a tool for efficiency, but a force that changes how decisions are made.
It would also need honest conversations with employees.
Molabe did not pretend that AI will have no impact on jobs. His message was direct: roles will change, some tasks will disappear and new capabilities will become more valuable. That honesty is important because organisations cannot build an AI-enabled culture while telling people that nothing will change.
People do not need false comfort. They need clarity, support and a credible path forward.
The Human Role Must Move Higher

If AI can reason, generate and automate, then the human role must evolve. This is not only a challenge for employees. It is a challenge for leaders.
Leaders must decide how work should be redesigned. They must determine which tasks should be automated, which decisions require human judgement and which skills need to be developed. They must align incentives so that teams do not treat AI as an optional side project. They must educate boards so that governance keeps pace with innovation.
Molabe made an important point about incentives. People tend to focus on what they are measured and rewarded for. If AI value creation is treated as a minor objective buried at the bottom of a performance plan, it will not drive serious change. If it is linked to leadership goals, business outcomes and organisational priorities, it becomes much harder to ignore.
This is where many AI strategies fail. They focus on tools before they focus on behaviour. They announce transformation before changing the system that makes transformation possible. For AI to create real value, it must be connected to how the organisation leads, measures, rewards and learns.
Adaptability Is the New Financial Services Advantage
One of Molabe’s most important reflections was that no one can truly claim to be a permanent expert in AI. The technology is changing too quickly. Models are improving. Capabilities are expanding. Regulation is evolving. Customer expectations are shifting. New risks are emerging.
This means the winners will not be the institutions that get every decision right from the beginning. They will be the ones that adapt fastest and learn most effectively.
Adaptability is becoming a strategic advantage.
In financial services, this requires more than innovation teams and technical capability. It requires boards that understand the implications of AI. It requires executives who can connect AI to value. It requires risk and compliance teams that are involved early. It requires data governance that is treated as a strategic priority, not an administrative burden. It requires employees who are given the tools and confidence to work differently.
Most importantly, it requires courage.
The courage to stop pretending that AI is just another technology trend. The courage to admit that some roles and processes will need to change. The courage to move beyond use-case theatre. The courage to build systems that are not only intelligent, but trustworthy.
The Real Risk Is Standing Still

Molabe’s masterclass at the Regenesys AI Summit offered a clear message for financial services leaders. AI is not something to observe from a safe distance. It is already changing the foundations of banking and insurance.
The opportunity is enormous. AI can help banks and insurers make smarter credit decisions, detect fraud earlier, streamline compliance, improve productivity, accelerate innovation and create more personalised customer experiences. But the risk is just as real. Without strong data, explainability, governance and leadership discipline, AI can scale confusion as quickly as it scales value.
The future of financial services will not be shaped by AI alone. It will be shaped by how well leaders use AI in financial services to make better decisions today. Those that treat AI as a serious operating model shift will build smarter, safer and more adaptive organisations. Those that treat it as a collection of experiments may discover that they were busy, but not ready. In a sector built on trust, not being ready may be the greatest risk of all.
Explore the Regenesys School of AI Leadership Series and start building the leadership capability needed for the future of work.
