
Capgemini, GSK & IBM

Case Studies

Enabling Quantum Chemistry for Drug Discovery with Haiqu

The pharmaceutical industry seeks drugs with high potency and precise selectivity to improve efficacy, reduce side effects, and lower late-stage failure rates. Targeted covalent drugs are especially promising because they form a specific, irreversible bond with a target protein via a reactive chemical group called a warhead. When properly tuned, this mechanism delivers exceptional potency and durability—aspirin being one of the earliest examples. The challenge is predicting warhead reactivity: higher reactivity generally improves potency, but excessive reactivity undermines selectivity. Accurately balancing this trade-off remains a major bottleneck in drug discovery.

With R&D costs exceeding $2B per drug, pharma increasingly combines machine learning and computational chemistry to accelerate discovery. A powerful approach uses first-principles calculations to generate “quantum fingerprints”—physically grounded features that improve reactivity prediction. However, classical simulation methods scale poorly: they rely on approximations that are either too inaccurate to capture critical many-body effects or too expensive for practical screening.

Problem: Quantum chemistry workloads exceed today’s hardware limits in circuit depth, noise, and cost.

Quantum computing offers a solution, but until now it has been constrained by hardware noise, which limits usable circuit depth to a few hundred two-qubit gates. Haiqu, working with Capgemini, IBM, and GSK, broke this barrier by demonstrating one of the largest electronic-structure Hamiltonian simulations for covalent drug warheads ever run on real quantum hardware. Using advanced circuit compression and middleware-managed execution, the team first reduced circuit depth by 15.5× and then enabled end-to-end execution by splitting the workload into sub-circuits of up to 371 gates.

Solution: Decomposed prohibitive quantum runs into hardware-friendly, separable blocks.

Collectively, these results establish a scalable, hardware-realistic path for running Hamiltonian simulations on larger active spaces, while maintaining sufficient accuracy for molecular reactivity prediction.

Haiqu decomposes quantum circuits into hardware-friendly blocks, enabling quantum chemistry workloads (left panel). Executions with Haiqu middleware (blue squares, lower right panel) retain coherent signals and closely track ideal trajectories (grey crosses), while runs using Qiskit’s built-in error mitigation (orange squares) collapse toward a noise-dominated baseline (black dashed line).

Impact: With Haiqu, chemists build expertise ahead of broader hardware advances

For decision makers, the implications are clear and immediate: Haiqu dramatically lowers the cost and increases the performance of quantum chemistry workloads, transforming quantum computing from a long-term theoretical research bet into a near-term commercial piloting program on real quantum hardware. By making deep Hamiltonian simulations feasible on today’s quantum hardware, Haiqu enables pharmaceutical teams to:

  1. Explore larger and more realistic molecular spaces
  2. Generate predictive quantum features unavailable to classical methods
  3. Integrate quantum simulations directly into machine-learning-driven discovery pipelines
  4. Accelerate early-stage drug discovery while reducing computational cost and increasing the success rate of pilot experiments

Crucially, this is not a promise for the next decade. Haiqu makes high-value quantum workloads commercially viable today, allowing enterprises to capture competitive advantage years ahead of hardware-only roadmaps.

 

Quantum for business. Run more with Haiqu.

 

Explore the full research paper.


HSBC


Loading Financial Data at a New Scale
 

Enabling Realistic Quantum Monte Carlo for Finance

Quantitative finance is fundamentally about allocating capital and redistributing risk in ways that support long-term economic health. By making uncertainty explicit and actionable, quantitative methods help firms allocate resources with confidence and contribute to more transparent, reliable markets.

A canonical example is the Black–Scholes equation, which provides a principled framework for pricing derivatives. By making the cost of risk explicit, derivative pricing improves market efficiency and enables firms to hedge uncertainty and invest over longer horizons. In practice, however, many of today’s most important financial problems extend beyond closed-form solutions and rely on computationally intensive techniques such as Monte Carlo simulation. As models become more realistic, higher-dimensional, and sensitive to tail risk, these methods become increasingly slow and expensive to simulate on classical hardware.
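As a concrete illustration (ours, not drawn from the case study), the sketch below prices a European call two ways: with the Black–Scholes closed form and with a Monte Carlo estimate under geometric Brownian motion. The estimator's standard error shrinks only as 1/sqrt(N), so halving it requires four times as many paths; once models leave the closed-form regime, this sampling cost is what dominates.

```python
import math
import numpy as np

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def monte_carlo_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo estimate: simulate terminal prices under geometric
    Brownian motion and average the discounted payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return math.exp(-r * t) * payoff.mean()

exact = black_scholes_call(100, 105, 0.05, 0.2, 1.0)        # about 8.02
estimate = monte_carlo_call(100, 105, 0.05, 0.2, 1.0, n_paths=200_000)
```

With 200,000 paths the estimate lands within a few cents of the closed form; realistic multi-asset, path-dependent models have no closed form at all and need far more simulation.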

Haiqu’s data loading scales differently. As the number of qubits increases (left panel), the required quantum resources stay within today’s hardware limits, enabling real, large-scale quantum experiments. Real hardware runs at 25 and 156 qubits show what’s possible now (right panels).

Quantum computing has emerged as a promising way to accelerate financial workloads by offering fundamentally different scaling behavior than classical approaches. Beyond derivative pricing, applications such as portfolio optimization, fraud detection, and machine learning all stand to benefit from quantum computation. These applications share a common and often overlooked requirement: realistic financial distributions must first be loaded into a quantum computer to represent the modelled instrument.

Problem: Distribution loading requires an exponential number of operations.

Distribution loading is extremely challenging. The number of required quantum operations in conventional algorithms can scale exponentially with the number of qubits, making it a significant bottleneck on today’s noisy, depth-limited hardware.
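The scaling is easy to quantify: an n-qubit state has 2^n amplitudes, and exact preparation of an arbitrary, unstructured state needs on the order of 2^n gates, roughly one degree of freedom per amplitude. The numpy toy below (our illustration, not Haiqu's method) builds the amplitude vector a loading circuit would have to realize for a heavy-tailed Student-t distribution; the target's size doubles with every added qubit.

```python
import numpy as np

def target_amplitudes(n_qubits, df=3.0):
    """Discretize a heavy-tailed Student-t density over 2**n_qubits grid
    points and normalize it into a valid quantum amplitude vector."""
    n = 2 ** n_qubits
    x = np.linspace(-10, 10, n)
    # Unnormalized Student-t density with df degrees of freedom: heavy tails.
    density = (1.0 + x**2 / df) ** (-(df + 1.0) / 2.0)
    return np.sqrt(density / density.sum())

for n_qubits in (4, 8, 12):
    amps = target_amplitudes(n_qubits)
    # Exact, unstructured state preparation needs O(2**n_qubits) gates.
    print(n_qubits, "qubits ->", len(amps), "amplitudes")
```

Exploiting the smoothness of such distributions is what lets structured loading schemes escape this exponential count.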

Solution: Compact loading circuits that fit into early QPUs.

Haiqu addresses this challenge by exploiting structure and smoothness in distributions to factor the loading process into compact quantum circuits with linear (rather than exponential) scaling. Using this approach, Haiqu demonstrated the largest-scale loading of realistic financial distributions on quantum hardware by successfully encoding heavy-tailed distributions on up to 64 qubits on IBM’s Torino processor and validating the results with standard statistical tests on up to 25 qubits. Following the initial project, Haiqu demonstrated the applicability of this method at a scale of up to 156 qubits. 

 

Scalability Advantage and Hardware Readiness of Haiqu

Conventional Approaches                          Haiqu’s Solution

× Exponential scaling of circuit depth           ✓ Linear circuit depth scaling
× Poor scalability in the number of qubits       ✓ Works with any number of qubits
× High error rates                               ✓ Minimal error accumulation
× Limited applicability to near-term hardware    ✓ Works on today’s quantum hardware
× Poor integration with practical applications   ✓ Enables real-world applications

 

Combined with Haiqu’s optimized execution tools, this capability enables, for the first time, the execution of Quantum Monte Carlo routines on realistic fat-tailed financial distributions directly on quantum hardware. It also unlocks practical exploration of quantum machine learning applications, such as fraud detection, by enabling high-dimensional feature encoding with only a few dozen qubits.

Impact: With Haiqu, financial teams build expertise ahead of broader hardware advances.

For business decision makers, the implications are immediate. Haiqu lowers the cost and increases the performance of financial quantum workloads, transforming quantum computing from a long-term research bet into a near-term commercial piloting opportunity. By enabling realistic distribution loading on today’s devices, Haiqu allows financial teams to test larger models, integrate quantum methods into existing workflows, and build expertise ahead of broader hardware advances.

Ultimately, this progress reinforces the core promise of quantitative finance: using better models and better computation to manage risk more effectively, allocate capital more wisely, and support a more stable and transparent financial system.

Quantum for business. Run more with Haiqu.


Airbus/BMW Challenge


Executing Record-scale Quantum Computational Fluid Dynamics on Today's Hardware

People want safe, affordable, and comfortable transportation. The economics of delivering quality to end customers are set early in the value chain: aircraft manufacturers make efficiency decisions that airlines convert into lower operating costs, which people then experience as lower fares and improved comfort. The introduction of Airbus A320 sharklets to reduce wingtip vortex drag and cut fuel burn is a good example. Airlines recovered retrofit costs in as little as two years per aircraft, achieved fleet-wide savings in the multi-billion-dollar range, and lowered operating cost per seat.

Producing innovations like the Airbus A320 sharklets is slow and expensive. Development spans years and depends on repeated wind-tunnel testing and flight campaigns to validate performance and safety. Each cycle consumes capital, engineering effort, and limited testing capacity. Errors discovered late in development are especially costly, often triggering redesigns and delays.

Computational Fluid Dynamics (CFD) shortens this cycle by predicting how fluid flow affects aerodynamic forces, moments, pressure distributions, and thermal loads before physical testing begins. Accurate simulations reduce reliance on experiments and avoid costly late-stage changes. However, in complex flow conditions—most notably turbulence—classical CFD struggles because flows span large swirling regions down to very small eddies. 


Quantum Computational Fluid Dynamics (Q-CFD) aims to model the full dynamics of turbulent flows by encoding large grids using only logarithmically many qubits—changing the resource scaling relative to classical solvers. Unfortunately, today’s quantum processors are noisy: current devices support only on the order of 20 high-quality qubits (as measured by Quantum Volume metrics) and reliably sustain algorithms with a depth of only ~300 two-qubit operations.
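The logarithmic encoding is simple arithmetic: if each grid point is mapped to one computational basis state, an nx-by-ny grid needs only ceil(log2(nx*ny)) qubits. A quick sketch (assuming this amplitude-style indexing; the QLBM specifics are in the linked work):

```python
import math

def qubits_for_grid(nx, ny):
    """Qubits needed to index an nx-by-ny grid when each grid point is
    mapped to one computational basis state: ceil(log2(nx * ny))."""
    return math.ceil(math.log2(nx * ny))

print(qubits_for_grid(64, 64))      # the case study's 64x64 grid needs 12 qubits
print(qubits_for_grid(4096, 4096))  # a 16-million-point grid still needs only 24
```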

Haiqu, in partnership with Quanscient, overcame this barrier as finalists in the BMW–Airbus Quantum Computing Challenge. Using Haiqu’s middleware, the team executed the largest quantum CFD simulation to date on real hardware, running a 64×64 computational grid over multiple time steps on IonQ’s Aria 1 processor. By compressing circuits, optimizing execution, and mitigating noise, Haiqu enabled deep simulations that were previously impractical on today’s devices.

Haiqu powers Quanscient’s Quantum Lattice Boltzmann Method (QLBM) run, combining circuit compression and lightweight error mitigation to enable deep execution on today's noisy quantum processors.

For decision makers, the implications are clear and immediate. Haiqu increases the performance of CFD workloads, transforming quantum computing from a long-term theoretical research bet into a near-term empirical piloting program on real quantum hardware. 

By making deep simulations feasible on today’s hardware, Haiqu enables aerospace design and modeling teams to:

 

  1. Explore intractable simulation regimes
  2. Evaluate quantum CFD workflows alongside classical pipelines

Crucially, this is not a promise for the next decade. Haiqu makes quantum workloads executable today, allowing enterprises to capture competitive advantage years earlier.

Quantum for business. Run more with Haiqu.

Read more about this result on the AWS Blog.


Life Sciences Giant


Folding mRNA on 120 qubits

Biology is governed by the relationship between molecular form and biological function. Cellular processes such as signaling, metabolism, and regulation depend on proteins that are precisely folded to perform specific tasks.

Fundamentally, protein folding is an optimization problem with exponential complexity, making accurate, physics-based simulations impractical at scale on classical computers. As proteins grow larger, computational costs rise rapidly, limiting the use of traditional methods in drug discovery and molecular design.
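A toy lattice model (our illustration; the project's actual energy model is not described here) makes the blow-up concrete. Counting self-avoiding walks on a 2D grid, a standard stand-in for chain-molecule conformations, shows the search space multiplying by roughly a factor of three per added monomer:

```python
def count_conformations(n_steps):
    """Count self-avoiding walks of n_steps steps on a 2D square lattice:
    a toy model of the conformations a chain molecule can adopt.
    The count grows exponentially, which is why exhaustive folding
    search is intractable for realistic chain lengths."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(path, visited):
        if len(path) == n_steps + 1:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if nxt not in visited:       # self-avoidance: no site revisits
                visited.add(nxt)
                path.append(nxt)
                total += extend(path, visited)
                path.pop()
                visited.remove(nxt)
        return total

    return extend([(0, 0)], {(0, 0)})

print([count_conformations(n) for n in range(1, 6)])  # 4, 12, 36, 100, 284
```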

Quantum computing offers a new approach by mapping protein folding to a problem that quantum algorithms are naturally suited to solve. Techniques such as variational quantum algorithms can directly search for low-energy configurations (the biologically relevant folded states) within vast and complex solution spaces.
 

Problem

Despite strong theoretical promise, most quantum folding algorithms fail to scale in practice because they are incompatible with the noise, connectivity, and depth constraints of today’s quantum hardware.

A common industry-wide bottleneck in applying quantum computing to real-world optimization problems is the mismatch between algorithm design and current hardware limits. Many promising algorithms assume ideal connectivity and require deep, noisy circuits, making them impractical on today’s quantum devices where errors accumulate before convergence. This keeps most demonstrations confined to small, non-industrial benchmarks.

Solution:

Haiqu redesigned quantum folding algorithms to run efficiently on real hardware by aligning algorithm structure with device constraints rather than idealized assumptions.

Haiqu addressed this challenge by redesigning the folding workload to run at scale on today’s quantum hardware. This included applying Haiqu’s topology-aware quantum circuits, lightweight error-mitigation techniques, and integrated classical pre- and post-processing to stabilize training and improve results.

Result:

Haiqu scaled quantum protein folding workloads to 120 qubits, cut circuit depth (by 89%) and two-qubit gates (by 73%), and identified optimal low-energy solutions using ~50 minutes of QPU time vs. ~12 hours classically.

By reengineering how the algorithm runs on real quantum processors, Haiqu enabled execution at 120 qubits (51 nucleotides) while cutting circuit depth from 177 to 20 and reducing two-qubit gates from 479 to 127. Using this approach, Haiqu successfully trained and executed the algorithm directly on a quantum processor, achieving the optimal folding solution in approximately 50 minutes of QPU time—compared to roughly 12 hours on a classical simulator. This work scaled prior efforts of a partner to 120 qubits and established a credible path to ~200-qubit problem sizes, aligned with next-generation quantum hardware roadmaps.

With Haiqu's hardware-efficient algorithm tailored to the QPU topology, we can solve the mRNA folding problem using all of the available qubits of the device (up to 159 on Heron). Running the iterations of the underlying optimisation problem takes 50 min of QPU time, whereas performing the same training on the tensor-network-based quantum simulator would require more than 12 hours.
Impact:

Haiqu transforms quantum protein and mRNA folding from experimental research into a practical, near-term capability for life-science organizations.

For decision makers, the implications are immediate. Haiqu reduces cost and increases the practical performance of quantum folding workloads, shifting quantum computing from a long-term research investment to a hardware-backed piloting opportunity. By making large-scale energy minimization and folding simulations feasible on today’s quantum processors, Haiqu enables life-science teams to:

 

  • Explore larger and more realistic folding landscapes than are accessible with classical physics-based methods
     
  • Identify low-energy folding configurations that are difficult to obtain with existing optimization techniques
     
  • Integrate quantum-derived folding results into existing computational biology and machine-learning workflows
     
  • Accelerate early-stage discovery and design decisions while controlling computational cost and hardware usage

Crucially, this capability is available now. Haiqu enables meaningful protein and mRNA folding workloads on current quantum hardware, allowing organizations to build expertise, validate value, and establish early competitive advantage ahead of future hardware advances.

Quantum for business. Run more with Haiqu.


HSBC & Oxford Ionics


Anomaly Detection with High-Dimensional Quantum Embeddings

Financial institutions are increasingly exploring quantum computing as a lever to improve risk modelling, simulation, and data-driven decision-making. Yet a fundamental obstacle remains largely underappreciated: before any quantum algorithm can deliver value, classical financial data must be transformed into quantum states, a process that is both resource-intensive and highly sensitive to hardware noise on today’s devices. Overcoming this data-loading and processing bottleneck is essential to move quantum finance from theoretical promise to executable pilots on real hardware.

Haiqu is partnering with Oxford Ionics, an IonQ company, and HSBC to explore how quantum computing can address real-world data-intensive challenges in financial services. The focus will be on detecting outliers and acceleration patterns in complex, high-volume trade and payment data. The collaboration brings together Haiqu’s quantum software expertise, Oxford Ionics’ advanced quantum hardware capabilities, and HSBC’s deep industry knowledge.

Additional details on the scope and outcomes of this collaboration will be shared soon. In the meantime, learn more about quantum machine learning for anomaly detection in our blog, or get in touch to discuss how these approaches can be applied in practice.

Quantum for business. Run more with Haiqu.
 


Haiqu raises $11M seed round led by Primary

News

Haiqu keeps the party going. 

Thank you to our new investors: Primary Venture Partners, Collaborative Fund, Alumni Ventures, Qudit Ventures, Silicon Roundabout Ventures, Harlow Capital, and Hyperion Capital. 

Thank you to our returning investors: MaC Venture Capital & Toyota Ventures.

Thank you to our champions across the industry.

Now Phase II begins: our Product phase.

Don't be a classicist. Try our beta. 

Read the full press release here.


Haiqu claims advance in fraud detection technology


Quantum software startup Haiqu this week announced results from a trial demonstrating that current quantum computers can detect subtle financial anomalies that may indicate fraud more efficiently than purely classical systems.

The research, which used a hybrid computing approach pairing quantum processing power with traditional machine learning models, revealed performance gains that suggest a near-term path toward achieving "quantum advantage" for large-scale, real-world problems.

Read full article here.


Haiqu and HSBC research team encodes ‘largest financial distributions to date’ on quantum computers


A team of Haiqu-led researchers have developed a new approach to encoding complex financial data into quantum circuits, pushing the limits of IBM’s quantum processors, according to a paper published on the pre-print server arXiv. The team reports that the method, which focuses on shallow and efficient circuit designs, could advance the use of quantum computing in finance and other industries.

In a recent LinkedIn post on the paper, the team writes: “Quantum computing cannot achieve wide utility in the near term until we can efficiently load classical data onto quantum hardware. Now, we can.”

Read full article here.


Haiqu recognized by Sifted as an emerging top quantum computing startup


Sifted asked investors in the field who they think is the next big thing in quantum computing.

European quantum startups are on the rise. Last year, while many VC-backed companies in the region were struggling to raise funds, Europe’s quantum startups actually saw investments grow by 3% to reach $781m — more than three times the amount raised in the sector in North America ($240m).

It also made Europe the only region to see funding for quantum startups increase, while investments in North America dropped by 80%, and by 17% in Asia-Pacific.

With strong support from governments — the UK has committed $4.3bn to quantum technologies, while Germany has pledged over $3.7bn — and burgeoning interest from VCs, Europe’s quantum scene is growing steadily. 

Read full article here.


Haiqu and Perimeter Institute forge new model for quantum computing research


The Perimeter Institute has established a new partnership with quantum software startup Haiqu (pronounced as ˈhaɪku) that will bring fundamental research and technological innovation closer together. Haiqu has established operations in the US, UK, and Ukraine. Through this partnership, the company will base its first Canadian hire, Dmitri Iouchtchenko, at the Perimeter Institute Quantum Intelligence Lab (PIQuIL).

Read full article here.


Haiqu presents at Creative Destruction Lab’s super session as top graduating venture


Graduating Ventures Took to the Main Stage at Super Session, Sharing Their CDL Journeys

Founders and speakers representing four graduating ventures and three CDL alumni companies got onstage in front of a thousand of their peers, mentors and investors to share their inspiring stories of success, and reflect on their takeaways from the CDL program.

The concluding main stage event at June’s Super Session was a showcase of some of the incredible talent that’s recently been nurtured within the 24 global streams that make up the Creative Destruction Lab (CDL) program...Another venture invited to the main stage was quantum computing company Haiqu. Unlike other streams, applicants being considered for fall admission into the Quantum program travel to Toronto for several weeks between July and August for a bootcamp. That’s where this graduating company’s co-founders, Richard Givhan and Mykola Maksymenko, met last year.

Read full article here.


Haiqu raises largest pre-seed in quantum software


Haiqu Raises $4 Million in Pre-Seed Funding to Boost Adoption of Near-term Quantum Computing

Haiqu, a startup building software to enhance the performance of quantum processors, today announced it has closed a $4 million financing round led by MaC Venture Capital, with participation from Toyota Ventures, SOMA capital, u.ventures, SID Venture Partners and Roosh Ventures. The round also included private contributions from Paul Holland, Alexi Kirilenko and Gordy Holterman.

“We are accelerating the timeline to practical quantum computing by developing novel software that can extract value out of clumsy near-term quantum hardware, enabling quantum applications that were previously impossible,” said Richard Givhan, co-founder and CEO at Haiqu. “We are proud to be backed by investors with remarkable deep-tech ecosystems and a track record of supporting the commercialization of breakthrough tech.”

Read full article here.


Quantum embeddings for anomaly detection

Blog + Whitepapers
Overview

  • Detecting rare or unusual patterns requires working with large, complex data.

  • Traditional machine learning loses accuracy when data has many features but few examples.

  • Quantum machine learning can uncover patterns classical methods miss, but low qubit counts restrict it to low-dimensional datasets.

  • Haiqu’s new quantum encoding solution packs hundreds of features into just a few qubits and scales efficiently, removing a major barrier to real-world data processing on a quantum computer.

Anomalies are the world’s early warning systems. They appear as subtle ripples before an earthquake, a sudden deviation in a patient’s vital signs, or a single fraudulent transaction hidden among millions of legitimate ones. Detecting such rare events is essential for safety, trust, and reliability in modern data-driven systems. Still, it remains one of the hardest problems in machine learning.

The reason lies in their rarity. By definition, anomalies are statistical outliers, making up only a fraction of a dataset. Classical models must learn to recognize these events despite being trained predominantly on normal examples. And in practice, the data itself compounds the difficulty. Hundreds of correlated features, nonlinear dependencies, and limited examples make it hard for algorithms to find meaningful structure.

Classical approaches like decision trees, logistic regression, and deep neural networks have achieved remarkable progress, but even the best of them eventually plateau. Particularly at high dimensionality and low sample size, they begin to lose their footing. This is where quantum computing, with its fundamentally different way of representing and processing information, may offer a new path forward.

“The ability to encode high dimensional data with hundreds and even thousands of features enables applications of a new scale, as what the team at Haiqu has experimentally shown on our hardware. Advances like this are what push the industry towards achieving a quantum advantage in the near term.”
 

Jay Gambetta, Director of IBM Research

Where Classical Representations Fall Short


Machine learning depends on how we represent data. The features describing the data determine what a model can perceive and, as a result, what it can learn. The goal of kernel methods and, more broadly, of representation learning is to construct new data features, often by embedding in larger spaces, which “expose” the inherent patterns in the data [1]. This allows downstream machine learning models to achieve better performance in e.g. classification tasks. In most real-world data, however, the relations between these features are nonlinear and strongly correlated, which poses a major challenge to classical methods.

Quantum systems, by contrast, naturally take advantage of superposition and entanglement to process unstructured feature vectors [2]. This makes it possible both to embed data in an exponentially larger space of multi-qubit states and to reveal its structure through quantum dynamics, enabling efficient classification. In other words, mapping classical data into this space effectively expands the representational canvas and allows the discovery of patterns that classical embeddings may flatten or miss entirely.

This concept is central to quantum machine learning. Through quantum embeddings, data is transformed into quantum states, and similarities between samples are evaluated via quantum kernels, comparable to kernel methods in classical machine learning but powered by the physics of interference. Even when run in simulation, these quantum representations reveal how computation in Hilbert space can enrich classical preprocessing [2].

"Understanding and implementing quantum embedding is an essential part of data analysis on quantum devices. Ultimately, they define the complexity of models and their performance. [...] Anomaly detection is a very suitable target, since even a smaller improvement in scores can lead to crucial detections or elimination of false positives."

Prof. Oleksandr Kyriienko
Professor and Chair in Quantum Technologies at the University of Sheffield.

Encoding Data of Previously Impossible Size


The first step to constructing a quantum embedding is to encode the original data in a quantum state, which is then transformed using e.g. quantum dynamics. While many existing studies rely on angular encodings, such methods quickly reach their limits as the number of qubits required grows linearly with the number of data features [3, 4]. Most near-term quantum devices have fewer than 150 qubits, yet anomaly-detection and other real-life datasets can easily contain hundreds or thousands of features. This creates a dimensionality gap that makes naïve quantum feature maps impractical.
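A minimal numpy sketch of such a naive angle encoding (our illustration) makes the gap tangible: every feature consumes one qubit, and even simulating the result takes 2^n amplitudes, so a 506-feature sample is out of reach for both ~150-qubit devices and classical simulators.

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(features):
    """Naive angle encoding: one qubit per feature, each prepared as
    Ry(x_i)|0>. The statevector has 2**len(features) amplitudes."""
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, ry(x) @ np.array([1.0, 0.0]))
    return state

state = angle_encode([0.1, 0.7, 1.3])  # 3 features -> 3 qubits -> 8 amplitudes
```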

The Problem

Near-term quantum devices have fewer than 150 qubits, making naïve feature encodings impractical for real-world datasets with hundreds or thousands of features.

To address this challenge, we developed a novel data encoding method, which allows us to encode polynomially more features with the same number of qubits. Moreover, our method affords control over the complexity of the resulting quantum state and of the quantum circuit creating it. This permits adjusting the embedding both to the number of features in the data and to the parameters of the QPU. Our dataset contained 506 classical features, which we mapped onto 128 qubits, but the same QPU could be used to encode an order of magnitude more features with our technique.

The Solution

We develop an encoding solution that allows an order of magnitude more features to be loaded on a QPU.

The resulting embeddings can be used as feature transformations in any machine-learning pipeline, feeding quantum-enriched data into classical models. This hybrid structure of classical models trained on quantum-preprocessed data forms the foundation of our experiment.

“Haiqu’s scalable embedding technology marks a turning point for quantum machine learning, making complex, high-dimensional data practical at scale. This innovation doesn’t just advance research; it accelerates the shift toward real-world impact in industries like finance, where precision and insight redefine what’s possible.”

Dr. Kristin Milchanowski, Chief AI and Data Officer, BMO

A Head-to-Head Comparison


To test the performance of our quantum embeddings against their classical counterparts, we designed an experiment with two parallel pipelines: a hybrid one (quantum + classical) and a fully classical one, as shown in Figure 1. The inputs to both are constructed from the Multivariate Time Series Anomaly benchmark used to evaluate the detection of rare events in temporal data. Our dataset consists of a smaller, balanced subset of 250 samples of coarse-grained time series, where time points within time windows of the original data have been merged to form high-dimensional vectors of 506 features.

This reduced sample size enables easy reproducibility of our experiments—including the costly QPU runs—while still capturing the complexity of real-world scenarios.

Fairness of comparison was central to our evaluation of the potential of quantum embeddings. In both pipelines, data is embedded into an enlarged space. In the quantum setting, we apply our novel data encoding, evolve the system under a parameterized quantum circuit implementing Heisenberg dynamics with randomly chosen parameters, and measure the resulting state to obtain new classical features. This is a form of projected quantum kernel (PQK) [3, 4]. In the classical pipeline, we tested two forms of embeddings with the same number of parameters as the quantum model: neural networks with random parameters, and random Fourier features. These embeddings produced new classical features of the same dimensionality as those generated by the quantum feature map.
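At toy scale, the quantum branch can be imitated exactly on a laptop. The sketch below (ours; a fixed Haar-random unitary stands in for the parameterized Heisenberg-dynamics circuit, and statevector simulation stands in for hardware) angle-encodes a sample on three qubits, evolves it, and reads out per-qubit ⟨Z⟩ expectations as the new classical features: the projected-quantum-kernel idea in miniature.

```python
import numpy as np

rng = np.random.default_rng(42)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(features):
    """Toy angle encoding of one sample on n = len(features) qubits."""
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, ry(x) @ np.array([1.0, 0.0]))
    return state

def haar_unitary(dim, rng):
    """Haar-random unitary as a stand-in for the randomized evolution circuit."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so q is Haar-distributed

def projected_features(features, u):
    """Evolve the encoded state and read out <Z_i> on each qubit:
    the measured expectations become the new classical features."""
    n = len(features)
    psi = u @ encode(features)
    probs = np.abs(psi) ** 2
    z_exp = []
    for i in range(n):
        # Bit (n-1-i) of the basis-state index decides the sign of Z on qubit i.
        signs = 1 - 2 * ((np.arange(2 ** n) >> (n - 1 - i)) & 1)
        z_exp.append(float(np.dot(signs, probs)))
    return np.array(z_exp)

u = haar_unitary(2 ** 3, rng)            # one shared circuit for all samples
feats = projected_features([0.1, 0.7, 1.3], u)
```

The real experiment measures 8256 observables on 128 qubits with 1024 shots; the toy keeps only the structure of the computation.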

Both the quantum-processed and classically-generated features were then input into the same families of classical classifiers, such as random forests and logistic regression. The hyperparameters of each classifier were independently optimized to ensure the best possible performance within each pipeline. We employed an 80/20 train–test split and evaluated performance using the F1 score, which balances precision and recall for imbalanced classification tasks.
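The classical pipeline can be sketched end to end with numpy alone (our toy; the dataset, embedding width, and classifier are simplified stand-ins): random Fourier features as the classical embedding, a least-squares linear classifier on top, an 80/20 split, and F1 as the metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the benchmark subset: 250 samples, 506 features.
n, d = 250, 506
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(int)

def random_fourier_features(X, n_components, gamma, rng):
    """Classical embedding baseline: random Fourier features,
    z(x) = sqrt(2/D) * cos(xW + b), approximating an RBF kernel."""
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_components))
    b = rng.uniform(0, 2 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

def f1(y_true, y_pred):
    """F1 score: harmonic mean of precision and recall."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Embed, then train a least-squares linear classifier on an 80/20 split.
Z = random_fourier_features(X, n_components=512, gamma=1.0 / d, rng=rng)
idx = rng.permutation(n)
tr, te = idx[:200], idx[200:]
coef, *_ = np.linalg.lstsq(Z[tr], 2.0 * y[tr] - 1.0, rcond=None)
score = f1(y[te], (Z[te] @ coef > 0).astype(int))
```

Swapping the embedding step for quantum-derived features while keeping the split, classifier, and metric fixed is what makes the comparison fair.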

 

 

Figure 1: Quantum versus classical embeddings experiment pipeline. Both process the same dataset, but use different feature construction methods before feeding it into classical classifiers. The input data has 506 features per data point (see the main text for details). The quantum embedding uses Haiqu's proprietary data loading to encode each point in a quantum state over 128 qubits, which is then transformed through an evolution circuit with random parameters, and, finally, read out with 1024 measurement shots. These measurements are used to compute the expectation values of 8256 observables. On the classical side, random neural networks and random Fourier Features are used to construct features, also of 8256 dimensions. Classical classifiers (see Fig.2) are then trained separately for the classical and quantum features (we selected the best quantum, and the best classical features) and their performance compared.

Results: Quantum-Enhanced Preprocessing Works

The results show a consistent trend: quantum-preprocessed features outperform classical ones across multiple models. Crucially, this is the case both in simulation and on real QPUs.

In ideal simulations, our projected quantum kernel (PQK) achieved an F1 score of 0.98, outperforming classical baselines of around 0.90–0.93. Even on real quantum hardware, in this case an IBM Quantum Heron processor, where noise and gate infidelity introduce imperfections, the model retained an impressive 0.96 F1, showing that the method is both effective and robust under realistic device conditions.

Figure 2. Top: The F1 score of different classical classifiers using either the classical (black) or the quantum (blue) features obtained on the IBM Quantum Heron processor. For most classifiers, the quantum features provide better performance. Bottom: A more detailed comparison for the logistic regression classifier. The classifier with the original input data achieves an F1 score of 0.91, and the random classical embeddings barely improve on it (0.92). In contrast, the random quantum embedding achieves a score of 0.98 when simulated without noise, and 0.96 when executed on the real noisy quantum processor.

The largest relative improvements appeared in tree-based models such as decision trees and random forests, which benefited most from the richer feature representations. Across the board, these results suggest that quantum embeddings can serve as a general-purpose preprocessing layer, improving performance without changing model complexity or parameter count. In other words, quantum systems need not replace classical models but can augment them.

A Glimpse of the Future

The importance of our findings is threefold:

— First, quantum-enhanced features can achieve superior performance over classical embeddings on complex data.

— Second, extremely high-dimensional data can be embedded with our methods on existing QPUs.

— Third, this superior performance appears both in idealized simulation, which validates the pipeline, and on real noisy quantum hardware.

Taken together, the results provide an empirical signal that quantum advantage in QML may be achievable in full-fledged industrial deployments on complex, high-dimensional problems, and that investigating efficient quantum data embeddings on real datasets is a promising path toward realizing that advantage.

Notably, these results were obtained without training the embedding weights; training them should further improve the features, and thus the final performance.

Importantly, while the scale of our experiment still allowed classical simulation to validate the results, its complexity was already such that sampling on the QPU was faster than running the classical simulations.

With increasing problem size (and thus qubit numbers and circuit depth), classical simulation will become completely infeasible; even at smaller scales, it may become advantageous to use the QPU at least for inference from a trained kernel.

Thus, a hybrid QML pipeline that achieves superior performance in a complete end-to-end industrial deployment will likely involve a classical pre-training step for the quantum embedding using large-scale simulation, followed by an optional fine-tuning step on a QPU, and finally an inference step performed at test time on a QPU.
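The three stages of such a hybrid workflow can be sketched end to end. The function bodies below are toy stand-ins (a tanh of a random projection in place of the real quantum embedding), and the names are illustrative placeholders, not Haiqu's API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def pretrain_embedding_classically(X, y):
    # Stage 1 (illustrative): in practice, optimize the embedding circuit's
    # parameters against a large-scale classical simulation.
    return rng.normal(size=(X.shape[1], 16))

def finetune_on_qpu(params, X, y):
    # Stage 2 (illustrative): optionally refine the parameters with a few
    # update steps executed on quantum hardware.
    return params

def qpu_embed(X, params):
    # Stage 3 (illustrative): at test time, run the embedding on a QPU and
    # estimate observables; here a nonlinear random projection stands in.
    return np.tanh(X @ params)

# Toy data and an end-to-end run of the three stages.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)
params = pretrain_embedding_classically(X, y)
params = finetune_on_qpu(params, X, y)
clf = LogisticRegression(max_iter=1000).fit(qpu_embed(X, params), y)
preds = clf.predict(qpu_embed(X, params))
```

The design point is that only the embedding ever touches the QPU; the downstream classifier remains entirely classical, so existing ML tooling is reused unchanged.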

This hybrid workflow could extend beyond anomaly detection into fields like cybersecurity, financial modeling, predictive maintenance, and health diagnostics, all domains where interpretability and early warning signals matter most.

A Call To Explore

Our anomaly-detection study provides early evidence that quantum-enhanced preprocessing is practically meaningful even on today's noisy hardware, and demonstrates how hybrid pipelines can achieve measurable gains without increasing model complexity, parameter count, or data requirements.

We are now preparing to share interactive notebooks that allow verified beta users to reproduce this experiment using Haiqu’s embedding technology. These tools will enable researchers and developers to explore quantum preprocessing within their own domains, from time-series analysis to computer vision, and benchmark their results against classical baselines.

We invite the community to join this exploration: replicate, challenge, and build upon our results. Each successful reproduction strengthens the case for a future where quantum and classical computation work hand in hand, and where models understand data more deeply. 

Footnotes

  1. Shawe-Taylor, John, and Nello Cristianini. Kernel methods for pattern analysis. Cambridge University Press, 2004.
  2. Schuld, Maria, and Francesco Petruccione. Machine learning with quantum computers. Vol. 676. Berlin: Springer, 2021.
  3. Schnabel, J., and M. Roth. "Quantum kernel methods under scrutiny: A benchmarking study." arXiv preprint arXiv:2409.04406 (2024).
  4. D'Amore, Francesco, Luca Mariani, Carlo Mastroianni, Francesco Plastina, Luca Salatino, Jacopo Settino, and Andrea Vinci. "Assessing Projected Quantum Kernels for the Classification of IoT Data." arXiv preprint arXiv:2505.14593 (2025).