Optimizing ZX-diagrams with deep reinforcement learning

Nagele M, Marquardt F (2024)


Publication Type: Journal article

Publication year: 2024

Journal: Machine Learning: Science and Technology

Journal Volume: 5

Journal Issue: 3

DOI: 10.1088/2632-2153/ad76f7

Abstract

ZX-diagrams are a powerful graphical language for describing quantum processes, with applications in fundamental quantum mechanics, quantum circuit optimization, tensor network simulation, and more. The utility of ZX-diagrams relies on a set of local transformation rules that can be applied to them without changing the underlying quantum process they describe. These rules can be exploited to optimize the structure of ZX-diagrams for a range of applications. However, finding an optimal sequence of transformation rules is generally an open problem. In this work, we bring together ZX-diagrams and reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem, and show that a trained reinforcement learning agent can significantly outperform other optimization techniques such as a greedy strategy, simulated annealing, and state-of-the-art hand-crafted algorithms. The use of graph neural networks to encode the agent's policy enables generalization to diagrams much larger than those seen during training.
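
To illustrate the kind of local rewrites and baseline strategies the abstract refers to, the following minimal Python sketch represents a ZX-diagram as an attributed networkx graph and repeatedly applies the spider-fusion rule with a greedy strategy (one of the baselines mentioned above). The graph encoding, the function names, and the restriction to spider fusion are illustrative assumptions, not the authors' implementation.

import math
import networkx as nx

def fusable_pairs(g):
    # Spider fusion: two adjacent spiders of the same colour can be merged.
    # (Hadamard edges and parallel-edge rules are ignored in this toy sketch.)
    return [(u, v) for u, v in g.edges()
            if g.nodes[u]["color"] == g.nodes[v]["color"]]

def fuse(g, u, v):
    # Merge spider v into spider u; their phases add modulo 2*pi.
    phase = (g.nodes[u]["phase"] + g.nodes[v]["phase"]) % (2 * math.pi)
    h = nx.contracted_nodes(g, u, v, self_loops=False)
    h.nodes[u]["phase"] = phase
    return h

def greedy_simplify(g):
    # Greedy baseline: apply any applicable fusion until none remain.
    pairs = fusable_pairs(g)
    while pairs:
        g = fuse(g, *pairs[0])
        pairs = fusable_pairs(g)
    return g

if __name__ == "__main__":
    g = nx.Graph()
    g.add_node(0, color="Z", phase=0.25 * math.pi)
    g.add_node(1, color="Z", phase=0.50 * math.pi)
    g.add_node(2, color="X", phase=0.0)
    g.add_edges_from([(0, 1), (1, 2)])
    print(greedy_simplify(g).nodes(data=True))  # the two Z-spiders fuse into one

A full treatment would also handle Hadamard edges and the remaining ZX rewrite rules (for instance via a dedicated library such as PyZX), and the reinforcement learning approach of the paper replaces the fixed greedy choice of pairs[0] with a learned policy encoded by a graph neural network.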

How to cite

APA:

Nagele, M., & Marquardt, F. (2024). Optimizing ZX-diagrams with deep reinforcement learning. Machine Learning: Science and Technology, 5(3). https://doi.org/10.1088/2632-2153/ad76f7

MLA:

Nagele, Maximilian, and Florian Marquardt. "Optimizing ZX-diagrams with deep reinforcement learning." Machine Learning: Science and Technology 5.3 (2024).
