Hello, I am Hannes LEIPOLD, a researcher at Fujitsu Research of America (FRA) working on quantum computing. Fujitsu has been developing quantum technologies as a global effort, involving its research laboratories in Japan, Europe, India, and North America.
In this article, I will discuss a recent preprint we have released, which was presented as an oral talk at the American Physical Society’s 2025 March Meeting in Anaheim, California, U.S. on March 17th as well as a poster presentation at Fujitsu’s Quantum Day held at the Uvance Kawasaki Tower in Kawasaki, Japan on March 28th.
Introduction
By exploiting entanglement and quantum superposition, quantum computers can offer computational advantages or other enhancements on quantum data representations compared to what classical computers achieve on the associated classical data representations. However, loading classical data into quantum states can be very challenging, and the cost of doing so can destroy the computational advantage the quantum processing itself would offer. In certain settings, finding the associated quantum representation is not so challenging, giving us an unambiguous quantum advantage; but to achieve quantum advantage on tasks like price forecasting, we must make progress on loading important, intricate classical distributions onto quantum hardware.
Forecasting Futures in Superposition
In forecasting, we are often interested in statistical quantities, such as the expected value or variance of a distribution. Quantum algorithms such as amplitude amplification can superpolynomially reduce the number of samples required from our distribution. In the context-based generation framework, Quantum Sequence Generation (QSG) generates a sequence one continuation at a time, shifting the context window as it goes. In this way we can generate intricate distributions over a number of paths that is exponential in the length of the sequence.
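As a purely classical sketch of the idea, the following code builds the amplitude vector over all 2^L binary paths by the chain rule, extending the sequence one step at a time from a sliding context window. The conditional model `next_bit_prob` is a made-up toy, not the trained network; it only illustrates how a step-by-step generator defines a state over exponentially many paths.

```python
import numpy as np

def next_bit_prob(context):
    # Toy conditional model (an assumption, not the trained network):
    # probability of emitting a 1 given the last k bits of context.
    return 0.3 + 0.4 * (sum(context) / max(len(context), 1))

def path_amplitudes(L, k):
    amps = {(): 1.0}  # amplitude of the empty sequence
    for _ in range(L):
        new = {}
        for path, a in amps.items():
            p1 = next_bit_prob(path[-k:])  # shift the context window
            # Chain rule: amplitudes multiply as the sequence extends.
            new[path + (0,)] = a * np.sqrt(1 - p1)
            new[path + (1,)] = a * np.sqrt(p1)
        amps = new
    return amps

amps = path_amplitudes(L=8, k=3)
print(len(amps))                          # 256 paths "in superposition"
print(sum(a**2 for a in amps.values()))   # ~1.0: a normalized state
```

A quantum device prepares this state with resources polynomial in L, whereas the classical dictionary above grows exponentially, which is the point of doing the generation in superposition.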
Once we have generated all paths in superposition, we can utilize known quantum algorithms to reduce the number of samples required - ranging from a superpolynomial to an exponential speedup, depending on which quantum techniques we can exploit for the particular statistical feature in question.
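To give a feel for what a reduction in sample count means in practice, here is a small numeric illustration (not a quantum simulation): classical Monte Carlo error on a mean shrinks like 1/sqrt(N), so reaching precision eps costs about 1/eps^2 samples, while amplitude-estimation-style queries reach the same precision with about 1/eps queries. The target precision and the true value `p` are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3            # true expectation we want to estimate (illustrative)
target_err = 1e-3  # desired precision eps

# Classical Monte Carlo: N ~ var / eps^2 samples for error eps.
mc_samples = int(p * (1 - p) / target_err**2)
# Amplitude-estimation-style scaling: N ~ 1 / eps queries.
ae_queries = int(1 / target_err)

print(mc_samples, ae_queries)  # 210000 1000

# Sanity check: the Monte Carlo estimate really lands near p at this N.
est = rng.binomial(mc_samples, p) / mc_samples
print(abs(est - p) < 0.01)
```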
Network Architecture
We developed an architecture for Contextual Quantum Neural Networks (CQNNs) to enable high-quality index or portfolio forecasting, based on the concept of Multi-Task Learning (MTL) for neural networks. Each asset or asset group can be associated with a label, and the input is then a context together with a label. We design the share-and-specify ansatz so that the share ansatz learns representations shared across all labels while the specify ansatz learns representations specific to each label, which enables Quantum Multi-Task Learning (QMTL). This improves the robustness of our QNN and lets us use an efficient input representation at inference time.
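The structure can be sketched on the smallest possible example: one label qubit and one data qubit, simulated with NumPy. The "share" rotation acts on the data qubit identically for every label, while the "specify" rotations act on the data qubit conditioned on the label value. This is an assumed minimal layout for illustration, not the paper's exact ansatz.

```python
import numpy as np

def ry(theta):
    # Single-qubit Y-rotation.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)

def share(theta):
    # Shared layer: same rotation on the data qubit for all labels.
    return np.kron(I2, ry(theta))

def specify(theta0, theta1):
    # Label-specific layer: rotation selected by the label qubit.
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return np.kron(P0, ry(theta0)) + np.kron(P1, ry(theta1))

# Uniform superposition over the two labels; data qubit in |0>.
state = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))
state = specify(0.2, 1.1) @ share(0.7) @ state

# Rows index the label, columns the data qubit outcome.
probs = state.reshape(2, 2) ** 2
print(probs.sum())  # 1.0: the circuit is unitary, the state stays normalized
```

Both label branches pass through the same shared parameters, but end up with different output statistics because of the specify layer, which is exactly the multi-task behaviour described above.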
In particular, we can use amplitude encoding to load a distribution over labels - such as a normalized market-capitalization-weighted index over stocks - with a number of qubits logarithmic in the total number of labels (such as asset names). We can then load the context associated with each label in superposition as well, and utilize the CQNN with the share-and-specify ansatz to load a quantum sequence over all paths and all asset labels in superposition. Because indices are natural distributions, their quantum representations are inherently highly efficient, just as they can be for the many paths of a sequence continuation.
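Amplitude encoding of an index is simple to state concretely: the state |psi> = sum_i sqrt(w_i) |i> stores the index weights w_i as squared amplitudes, so 8 assets need only log2(8) = 3 qubits. The market-cap figures below are made up for illustration.

```python
import numpy as np

# Hypothetical market caps for 8 assets (illustrative numbers only).
caps = np.array([2.9e12, 2.8e12, 1.7e12, 1.6e12,
                 1.2e12, 0.9e12, 0.6e12, 0.3e12])
weights = caps / caps.sum()    # normalized index weights, sum to 1
amplitudes = np.sqrt(weights)  # |psi> = sum_i sqrt(w_i) |i>

n_qubits = int(np.log2(len(weights)))
print(n_qubits)                                   # 3
print(np.isclose(amplitudes @ amplitudes, 1.0))   # True: a valid quantum state
# Measuring |psi> in the computational basis returns asset i
# with probability exactly w_i.
```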
Learning Conditional Distributions with Conditional Fidelity Loss
While QNNs can learn efficient and complex representations with small quantum resources, the learning procedure itself can be very challenging. Indeed, the representative power of QNNs is also the reason they are hard to train - we have only classical control over these quantum systems. One important protocol in quantum learning is the SWAP test, a protocol for determining the fidelity (or similarity) between two quantum states. To improve training, we introduce the Quantum Batch Gradient Update, in which we load an input distribution and a joint distribution over inputs and outputs, then train the QNN to approximate the conditional distribution through the SWAP test. This approach allows us to take global gradients over the distributions defined by the entire data set (or subsets of it), rather than loading each input/output pair separately for supervised learning.
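The statistic behind the SWAP test is easy to reproduce numerically: for states |a> and |b>, the test's ancilla measures 0 with probability (1 + |<a|b>|^2)/2, so repeated runs estimate the fidelity |<a|b>|^2 that a fidelity-based loss would use. The sketch below samples that outcome distribution directly rather than simulating the full circuit.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim):
    # Haar-ish random pure state for demonstration.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

a, b = random_state(4), random_state(4)
fidelity = abs(np.vdot(a, b)) ** 2
p0 = 0.5 * (1 + fidelity)  # SWAP test: P(ancilla = 0)

# Simulate repeated SWAP tests and invert the relation to estimate fidelity.
shots = 200_000
p0_hat = (rng.random(shots) < p0).mean()
estimate = 2 * p0_hat - 1

print(abs(estimate - fidelity) < 0.02)  # fidelity recovered from samples
```

In training, this measured fidelity (between the network's output state and the target conditional distribution) is what the gradient update pushes toward 1.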
Discussion
Quantum computers can natively store probabilistic representations, allowing for quantum advantage over classical paradigms. If we learn to represent intricate context-based distributions, we can enhance statistical queries at inference time through quantum paradigms like quantum risk analysis or quantum principal component analysis.
Summary
Generative Neural Networks are transforming business and society through high-quality contextual distribution loading. CQNNs can similarly enable high-quality statistical estimation through contextual distribution loading, enabling quantum enhancement at inference time through Quantum Sequence Generation (QSG). Given the high impact of estimating statistical quantities in financial forecasting, and the natural representation of indices as distributions over assets that we can load through QMTL, our work helps bring realistic quantum utility through quantum-native generative networks.