Synthetic smarts: Why bigspark acquired Aizle – and what comes next
2025-04-30 | Jo Whalley | Director of Fraud and Fincrime Practice
From regulatory sandboxes to scale-ups, how Aizle’s synthetic data engine is reshaping innovation in financial services – and why it’s now part of bigspark’s mission
Innovation in financial services moves fast, but data access often lags behind. It’s a problem David Tracy, who recently joined bigspark as Head of Data Product, knows all too well. “As a data and product leader at a fintech, data availability and permission was always a challenge,” he says, reflecting on his previous roles at early-stage fintech companies. “We had great product ideas to help people use their money better, support vulnerable customers, and provide fairer access to financial services. But without data – without safe, shareable, high-quality data – we couldn’t move fast enough.”
That frustration became the spark behind Aizle, a synthetic data engine built for exactly this kind of problem, created by David and his team in 2020 and now acquired by bigspark from Smart Data Foundry. “We wanted to build a platform that removed the friction completely,” he explains. “A system where GDPR, customer privacy, and sharing agreement considerations could be solved – because no real customer data is ever used or exposed.”
What is synthetic data, really?
At its core, Aizle is an agent-based simulation platform. “We model behaviour – how people interact with financial products, how fraud happens, how money moves through systems,” David says. “We simulate those behaviours to generate synthetic datasets that are statistically realistic and scenario-rich without ever copying or training on real data.”
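Aizle’s internals aren’t spelled out here, but the toy Python sketch below illustrates the general agent-based idea: define agents with behavioural parameters, let them act over simulated days, and keep what their behaviour produces as the synthetic dataset. Every class, field, and parameter in it is hypothetical and purely for illustration; it is not Aizle’s implementation.

```python
# Toy illustration of agent-based synthetic data generation. Everything here
# (agent types, parameters, field names) is hypothetical: not Aizle's code,
# just the pattern of modelling behaviour and recording what it produces.
import csv
import random
from dataclasses import dataclass, field


@dataclass
class CustomerAgent:
    agent_id: int
    daily_spend_rate: float   # expected number of purchases per day
    typical_amount: float     # typical purchase size in GBP
    merchants: list = field(default_factory=lambda: ["grocer", "transport", "online_retail"])

    def act(self, day: int, rng: random.Random) -> list[dict]:
        """Emit zero or more synthetic transactions for one simulated day."""
        n_purchases = sum(rng.random() < self.daily_spend_rate / 4 for _ in range(4))
        return [
            {
                "customer_id": self.agent_id,
                "day": day,
                "merchant": rng.choice(self.merchants),
                "amount": round(max(1.0, rng.gauss(self.typical_amount, self.typical_amount * 0.4)), 2),
            }
            for _ in range(n_purchases)
        ]


def simulate(n_customers: int = 50, n_days: int = 30, seed: int = 7) -> list[dict]:
    """Run every agent through a simulated month and collect the synthetic rows."""
    rng = random.Random(seed)
    agents = [
        CustomerAgent(
            agent_id=i,
            daily_spend_rate=rng.uniform(0.5, 3.0),
            typical_amount=rng.uniform(5, 80),
        )
        for i in range(n_customers)
    ]
    rows = []
    for day in range(n_days):
        for agent in agents:
            rows.extend(agent.act(day, rng))
    return rows


if __name__ == "__main__":
    rows = simulate()
    with open("synthetic_transactions.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["customer_id", "day", "merchant", "amount"])
        writer.writeheader()
        writer.writerows(rows)
    print(f"generated {len(rows)} synthetic transactions")
```

Because each row comes from simulated behaviour rather than a copied record, there is nothing in the output that could be traced back to a real person, which is exactly the property David describes next.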
Unlike traditional anonymisation or data masking, Aizle’s synthetic data is born fake – but engineered to behave like the real thing. “It’s not just pseudonymised – it’s generated from scratch,” says David. “There’s zero re-identification risk. And that’s a game-changer for how teams can use it.”
As the data is completely synthetic, it’s inherently privacy-safe and fully shareable – making it ideal for cross-sector collaboration, innovation testing, and model development without risk to individuals or institutions.
From sandbox success to platform power
Aizle’s first big moment came when it was selected for the FCA’s Green FinTech Challenge. “We built synthetic ESG data to help organisations design climate risk tools without needing to access or expose live corporate data,” says David.
This led to further work in the FCA’s Innovation Sandbox and participation in the regulator’s APP Fraud TechSprint, which focused on tackling authorised push payment fraud. More recently, Aizle has been adopted by clients ranging from banks and fintechs to public sector institutions and universities. It also supports collaborative R&D initiatives like the Financial Regulation Innovation Lab in Scotland and is an active partner of FinTech Scotland.
The business case for synthetic data
One of Aizle’s most powerful use cases lies in fighting financial crime. “Fraud is systemic, but data sharing across institutions remains difficult,” says David. “With Aizle, we can create a level playing field – where detection models can be built, tested, and improved without compromising customer privacy.”
He points to the growing threat of synthetic identity fraud – a form of deception in which criminals combine real and fabricated data to create new identities that are incredibly hard to detect. “Because Aizle simulates behaviour, not just static profiles, we can model how new threats could manifest – how criminals might interact with different financial products and systems, and what kinds of anomalies would flag suspicious behaviour,” David adds.
It’s an approach that helps institutions future-proof their fraud defences. “You can simulate emerging threats before they materialise in the real world,” David explains. “That’s especially valuable when fraud techniques shift faster than regulation or internal controls can keep up.”
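To make that concrete, here is one purely illustrative way such a threat could be expressed as an agent: a fabricated identity that behaves impeccably for a “build-up” period and then drains the account in a short “bust-out”. The pattern, field names, and numbers are invented for the example and are not Aizle’s actual threat models.

```python
# Hypothetical sketch of simulating an emerging threat so that labelled
# examples exist before the pattern is common in real data. The behaviour
# modelled (a "bust-out" on a fabricated identity) and all names, fields and
# numbers are invented for illustration.
import random
from dataclasses import dataclass


@dataclass
class SyntheticIdentityAgent:
    account_id: int
    build_up_days: int = 60   # phase 1: small, well-behaved activity to grow trust
    bust_out_days: int = 5    # phase 2: rapid credit-draining activity, then silence

    def act(self, day: int, rng: random.Random) -> list[dict]:
        """Return the labelled synthetic events this identity produces on a given day."""
        if day < self.build_up_days:
            return [{"account_id": self.account_id, "day": day, "type": "card_purchase",
                     "amount": round(rng.uniform(5, 30), 2), "label": "fraud_build_up"}]
        if day < self.build_up_days + self.bust_out_days:
            return [{"account_id": self.account_id, "day": day,
                     "type": rng.choice(["cash_advance", "card_purchase"]),
                     "amount": round(rng.uniform(400, 2000), 2), "label": "fraud_bust_out"}
                    for _ in range(rng.randint(3, 8))]
        return []  # the identity is abandoned once the credit line is drained


rng = random.Random(1)
agent = SyntheticIdentityAgent(account_id=9001)
timeline = [event for day in range(70) for event in agent.act(day, rng)]
print(f"{len(timeline)} labelled events, "
      f"{sum(e['label'] == 'fraud_bust_out' for e in timeline)} in the bust-out phase")
```

Labelled behavioural timelines like this are the raw material a detection team can build and test against long before the pattern shows up in production data.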
Aizle also plays a role in improving AI governance and explainability. “You can’t always test the fairness or robustness of a decision engine using historical data,” David notes. “But if you can simulate edge cases and generate targeted synthetic datasets, you can interrogate your model much more thoroughly.”
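As a rough sketch of that idea, the snippet below generates targeted synthetic edge cases (boundary incomes, thin credit files, under-represented ages) and runs them through a stand-in decision function. The scoring rule, field names, and thresholds are all invented for the example.

```python
# Illustrative only: probe a stand-in decision function with targeted synthetic
# edge cases instead of waiting for them to appear in historical data. The
# scoring rule, field names and thresholds below are invented for the example.
from collections import Counter
from itertools import product


def decision_engine(applicant: dict) -> bool:
    """Stand-in for a real scoring model; approves on a crude income/history rule."""
    return applicant["income"] > 22_000 and applicant["credit_history_years"] >= 2


# Deliberately generate combinations that are rare in historical data:
# under-represented ages, incomes straddling the threshold, thin credit files.
edge_cases = [
    {"age": age, "income": income, "credit_history_years": history}
    for age, income, history in product(
        [18, 21, 68, 80],            # ages at the edges of the customer base
        [21_500, 22_001, 23_000],    # incomes just either side of the cut-off
        [0, 1, 2],                   # brand-new or thin credit files
    )
]

outcomes = [(case, decision_engine(case)) for case in edge_cases]
declines_by_history = Counter(
    case["credit_history_years"] for case, approved in outcomes if not approved
)
print("declines by length of credit history:", dict(declines_by_history))
```

Even this crude probe surfaces the kind of blind spot David is describing: under the toy rule, every applicant with fewer than two years of credit history is declined regardless of income, which is exactly the sort of finding you would want to interrogate before a model reaches production.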
That’s vital for high-stakes decisions – like credit approvals, transaction blocking, or risk scoring – where biased models or blind spots can have serious consequences.
Why bigspark? Why now?
bigspark’s acquisition of Aizle reflects a broader trend: the convergence of simulation, synthetic data, and AI operations. “We’d been watching Aizle’s progress for some time,” says Jo Whalley, Director at bigspark. “What stood out to us was the sophistication of its behavioural modelling and the versatility of the platform. It wasn’t just synthetic data – it was synthetic intelligence.”
For bigspark, which already works with financial institutions on data transformation, fraud analytics, and machine learning implementation, Aizle brings an additional layer of speed and experimentation. “With Aizle, we can offer clients a ‘safe mode’ for innovation,” Jo explains. “You can build new systems, stress-test them, and fine-tune policies – without waiting months for access to real data or worrying about privacy breaches.”
And the match wasn’t just strategic – it was cultural. “Both our teams care about building useful, ethical, transparent tools,” David says. “We’re not chasing AI hype. We’re solving real-world problems.”
What’s next for Aizle?
The roadmap ahead is ambitious. On the technical side, the Aizle team is expanding its library of agents, behaviours, and environments – supporting ever more granular simulations of financial ecosystems. “We’re building out synthetic personas that can evolve over time, make decisions based on context, and interact with multiple institutions,” David shares. “That means you can run lifecycle simulations – not just one-off snapshots.”
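A minimal sketch of what an evolving persona might look like is below, with invented fields and event probabilities; the point is simply that state changes month to month, so a simulation can cover a lifecycle rather than a single snapshot.

```python
# A toy "evolving persona" with invented fields and event probabilities, shown
# only to illustrate lifecycle simulation: state that changes month to month
# rather than a single static snapshot. Not Aizle's persona model.
import random
from dataclasses import dataclass


@dataclass
class Persona:
    persona_id: int
    monthly_income: float
    has_mortgage: bool = False

    def step(self, month: int, rng: random.Random) -> dict:
        """Advance the persona by one simulated month and return its new state."""
        self.monthly_income *= 1 + rng.gauss(0.002, 0.01)   # gradual income drift
        if not self.has_mortgage and rng.random() < 0.02:   # occasional life event
            self.has_mortgage = True
        return {"persona_id": self.persona_id, "month": month,
                "income": round(self.monthly_income, 2),
                "has_mortgage": self.has_mortgage}


rng = random.Random(3)
persona = Persona(persona_id=1, monthly_income=2400.0)
lifecycle = [persona.step(month, rng) for month in range(36)]   # three simulated years
print(lifecycle[0], lifecycle[-1], sep="\n")
```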
The team is also introducing governance features that enable users to generate audit trails, test policies for bias, and simulate regulatory impact scenarios. “Imagine testing the effects of a new fraud rule across a whole synthetic population – before it ever hits production,” David says. “That’s where we’re heading.”
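In code, that kind of pre-production check could look something like the sketch below: apply a candidate rule to a labelled synthetic population and measure what it catches and what it wrongly flags. The rule, the transaction fields, and the fraud mix are invented for the example.

```python
# Sketch of "test the rule on a synthetic population first". The candidate
# rule, the transaction fields and the fraud mix are all invented here; the
# point is the workflow of measuring a policy change before production.
import random

rng = random.Random(11)

# A labelled synthetic population: mostly legitimate payments, plus a slice of
# simulated authorised push payment fraud skewed towards larger amounts.
population = (
    [{"amount": rng.uniform(5, 1500), "new_payee": rng.random() < 0.1, "fraud": False}
     for _ in range(5000)]
    + [{"amount": rng.uniform(300, 5000), "new_payee": True, "fraud": True}
       for _ in range(50)]
)


def candidate_rule(txn: dict) -> bool:
    """Proposed rule: flag payments over 500 that go to a newly added payee."""
    return txn["amount"] > 500 and txn["new_payee"]


flags = [candidate_rule(txn) for txn in population]
n_fraud = sum(txn["fraud"] for txn in population)
caught = sum(flag for flag, txn in zip(flags, population) if txn["fraud"])
false_positives = sum(flag for flag, txn in zip(flags, population) if not txn["fraud"])

print(f"detection rate: {caught / n_fraud:.0%}")
print(f"false positives per 5,000 legitimate payments: {false_positives}")
```

Changing the threshold or the population mix shows immediately how the trade-off between detection and customer friction moves, without a single real transaction being touched.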
Use cases are also expanding. Aizle is being explored as a tool for stress testing financial inclusion models; creating synthetic customer service call transcripts for NLP training; running simulations for joint risk assessments between regulators and firms; and training AI copilots to assist compliance teams.
And with bigspark’s reach, scale is now firmly in sight. “Together, we want to help every organisation – big or small – unlock the benefits of safe experimentation,” says Jo. “Whether you’re a regulator, a bank, or a fintech start-up, you need a way to learn fast without putting people at risk. That’s what Aizle offers.”
David agrees. “This isn’t just about synthetic data,” he says. “It’s about building digital twin environments for financial systems – spaces where we can test the future before it arrives.”
To find out more about Aizle, click here.