1 Introduction

Natural language processing (NLP) is a rapidly evolving area of artificial intelligence (AI) of both theoretical importance and practical interest (Jurafsky and Martin 2000; Blackburn and Bos 2005). Large language models, such as the relatively recent GPT-3 with its 175 billion parameters (Brown et al. 2020), show impressive results on general NLP tasks, and one might dare to claim that humanity is entering Turing-test territory (Turing 1950). Such models, almost always based on neural networks, work by learning to model conditional probability distributions of words in the context of other words. The textual data on which they are trained are mined from large text corpora, and the probability distribution to be learned captures the statistical patterns of cooccurrence of words in the data. For this reason, we shall refer to such models as distributional.

NLP technology is becoming increasingly entangled with everyday life as part of search engines, personal assistants, information extraction and data-mining algorithms, medical diagnoses, and even bioinformatics (Searls 2002; Zeng et al. 2015). Despite success in both language understanding and language generation, under the hood of mainstream NLP models one exclusively finds deep neural networks, which famously suffer the criticism of being uninterpretable black boxes (Buhrmester et al. 2019).

One way to bring transparency to said black boxes is to explicitly incorporate linguistic structure, such as grammar and syntax (Lambek 1958; Montague 2008; Chomsky 1957), into distributional language models. Note that in this work we use the terms grammar and syntax interchangeably, the essence of both being that they refer to structural information with which the textual data can be supplemented. A prominent approach attempting this merge is the Distributional Compositional Categorical model of natural language meaning (DisCoCat) (Coecke et al. 2010; Grefenstette and Sadrzadeh 2011; Kartsaklis and Sadrzadeh 2013), which pioneered the paradigm of combining explicit grammatical (or syntactic) structure with distributional (or statistical) methods for encoding and computing meaning (or semantics). There has also been related follow-up work on neural models in which syntax is incorporated into a recursive neural network, with the syntactic structure dictating the order of the recursive calls to the recurrent cell (Socher et al. 2013). The DisCoCat approach also provides the tools for modelling linguistic phenomena such as lexical entailment and ambiguity, as well as the transparent construction of syntactic structures such as relative and possessive pronouns (Sadrzadeh et al. 2013; 2014), conjunction, disjunction, and negation (Lewis 2020).

From a modern lens, DisCoCat, as presented in the literature, is a tensor network language model. Recently, the motivation for designing interpretable AI systems has caused a surge in the use of tensor networks in language modelling (Pestun and Vlassopoulos 2017; Gallego and Orus 2019; Bradley et al. 2019; Efthymiou et al. 2019). A tensor network is a graph whose vertices are endowed with tensors. Every vertex has an arity, i.e. a number of incident edges, which represent the tensor’s indices. Edges represent tensor contractions, i.e. identification of the indices joined by the edge and summation over their range (Einstein summation). Intuitively, a tensor network is a compressed representation of a multilinear map. Tensor networks have been used to capture probability distributions of complex many-body systems, both classical and quantum, and they also have a formal appeal as rigorous algebraic tools (Eisert 2013; Orús 2019).
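
For intuition, here is a minimal sketch (our own illustration, not taken from the cited works) of a three-vertex tensor network contracted with NumPy’s Einstein-summation routine; shared indices are summed over and the remaining open indices are those of the resulting multilinear map.

```python
import numpy as np

# Three vertices (tensors); shared indices j and k are the edges to contract.
A = np.random.rand(2, 3)      # arity-2 vertex with indices (i, j)
B = np.random.rand(3, 4, 2)   # arity-3 vertex with indices (j, k, l)
C = np.random.rand(4)         # arity-1 vertex with index (k)

# Einstein summation over the edges j and k; open indices i and l remain.
result = np.einsum('ij,jkl,k->il', A, B, C)
print(result.shape)           # (2, 2)
```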

Quantum computing (QC) is a field which, in parallel with NLP, is growing at an extremely fast pace. The prominence of QC is now well-established, especially after the experiments aiming to demonstrate quantum advantage for the specific task of sampling from random quantum circuits (Arute et al. 2019). QC has the potential to reach the whole range of human interests, from the foundations of physics and computer science, to applications in engineering, finance, chemistry, and optimisation problems (Bharti et al. 2021), and even procedural map generation (Wootton 2020).

In the last half-decade, the natural conceptual fusion of QC with AI, and especially with the subfield of AI known as machine learning (ML), has led to a plethora of novel and exciting advancements. The quantum machine learning (QML) literature has reached an immense size considering its young age, with the cross-fertilisation of ideas and methods between fields of research, as well as between academia and industry, being a dominant driving force. The landscape includes using quantum computers for subroutines in ML algorithms that execute linear algebra operations (Harrow et al. 2009), quantising classical machine learning algorithms based on neural networks (Beer et al. 2020), support vector machines, clustering (Kerenidis et al. 2019), or artificial agents that learn from interacting with their environment (Dunjko et al. 2016), and even quantum-inspired and dequantised classical algorithms which nevertheless retain a complexity-theoretic advantage (Chia et al. 2020). Small-scale classification experiments have also been implemented with quantum technology (Havlíček et al. 2019; Li et al. 2015).

From this collection of ingredients there organically emerges the interdisciplinary field of quantum natural language processing (QNLP) (Zeng and Coecke 2016; O’Riordan et al. 2020; Wiebe et al. 2019; Bausch et al. 2020; Chen 2002), a research area still in its infancy, which combines NLP with QC and seeks novel quantum language model designs and quantum algorithms for NLP tasks. Building on the recently established methodology of QML, one either imports QC algorithms to obtain theoretical speedups for specific NLP tasks or uses the quantum Hilbert space as a feature space in which NLP tasks are executed.

The first paper on QNLP, by Zeng and Coecke (Zeng and Coecke 2016) and using the DisCoCat framework, introduced an approach in which a standard NLP task is instantiated as a quantum computation. The task of sentence similarity was reduced to the closest-vector problem, for which there exists a quantum algorithm providing a quadratic speedup, albeit assuming a Quantum Random Access Memory (QRAM). The mapping of the NLP task to a quantum computation is attributed to the mathematical similarity of the structures underlying DisCoCat and quantum theory. This similarity becomes apparent when both are expressed in the graphical language of string diagrams of monoidal categories, or process theories (Coecke and Kissinger 2017). The categorical formulation of quantum theory is known as Categorical Quantum Mechanics (CQM) (Abramsky and Coecke 2004), and the string diagrams describing CQM are tensor networks endowed with a graphical language in the form of a rewrite-system (a.k.a. diagrammatic algebra with string diagrams). The language of string diagrams places syntactic structures and quantum processes on an equal footing, and thus allows the canonical instantiation of grammar-aware quantum models for NLP.

In this work, we bring DisCoCat to the current age of noisy intermediate-scale quantum (NISQ) devices by performing the first-ever proof-of-concept QNLP experiment on actual quantum processors. We employ the framework introduced in Meichanetzidis et al. (2020), adopting the paradigm of parameterised quantum circuits (PQCs) as quantum machine learning models (Schuld et al. 2020; Benedetti et al. 2019), which currently dominates near-term algorithms. PQCs can be used to parameterise quantum states and processes, as well as complex probability distributions, and so they can be used in NISQ machine learning pipelines. The framework we use allows for the execution of experiments involving non-trivial text corpora, which moreover involve complex grammatical structures. The specific task we showcase here is binary classification of sentences in a hybrid classical-quantum supervised-learning setup.

2 The model

The DisCoCat model relies on algebraic models of grammar which use types and reduction-rules to mathematically formalise syntactic structures of sentences. Historically, the first formulation of DisCoCat employed pregroup grammars (Appendix A), which were developed by Lambek (2008). However, any other typelogical grammar, such as Combinatory Categorial Grammar (CCG), can be used to instantiate DisCoCat models.

In this work we use pregroup grammars, to stay close to the existing DisCoCat literature. In a pregroup grammar, a sentence σ is a finite product of words, \(\sigma ={\prod }_{i} w_{i}\). A parser tags each word w ∈ σ with its part of speech. Accordingly, w is assigned a pregroup type \(t_{w}={\prod }_{i} b_{i}^{\kappa _{i}}\) comprising a product of basic (or atomic) types bi from the finite set B, where each basic type carries an adjoint order \(\kappa _{i}\in \mathbb {Z}\). Pregroup parsing is efficient; specifically, it runs in linear time under reasonable assumptions (Preller 2007). The type of a sentence is the product of the types of its words, and the sentence is deemed grammatical if its type reduces to the special sentence type \(s^{0}\in B\), i.e. \(t_{\sigma } = {\prod }_{w} t_{w} \to s^{0}\). Reductions are performed by iteratively applying pairwise annihilations of basic types with consecutive adjoint orders, \(b^{i} b^{i+1}\to 1\). As an example, consider the following grammatical reduction.
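
For a simple transitive sentence such as ‘Romeo loves Juliet’ (our own illustrative choice), with noun type n and transitive-verb type \(n^{r} s\, n^{l}\), the typing and reduction read

$$ \underbrace{n}_{\text{Romeo}} \cdot \underbrace{(n^{r}\, s\, n^{l})}_{\text{loves}} \cdot \underbrace{n}_{\text{Juliet}} \;\to\; s\, n^{l}\cdot n \;\to\; s, $$

where the two annihilations \(n \cdot n^{r} \to 1\) and \(n^{l} \cdot n \to 1\) are instances of \(b^{i} b^{i+1} \to 1\) under the common shorthand \(b^{r} = b^{+1}\) and \(b^{l} = b^{-1}\).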

At the core of DisCoCat is a process-theoretic model of natural language meaning. Process theories are alternatively known as symmetric monoidal (or tensor) categories (Baez and Stay 2009). Process networks such as those that manifest in DisCoCat can be represented graphically with string diagrams (Selinger 2010). String diagrams are not just convenient graphical notation, but they constitute a formal graphical language for reasoning about complex process networks (see Appendix B for an introduction to string diagrams and concepts relevant to this work). String diagrams are generated by boxes with input and output wires, with each wire carrying a type. Boxes can be composed to form process networks by wiring outputs to inputs and making sure the types are respected. Output-only processes are called states and input-only processes are called effects.

A grammatical reduction is viewed as a process and so it can be represented as a string diagram. In string diagrams representing pregroup-grammar reductions, words are represented as states and pairwise type-reductions are represented by a pattern of nested cup-effects (wires bent in a U-shape), and identities (straight wires). Wires in the string diagram carry the label of the basic type being reduced. As an example, in Fig. 1 we show the string diagram representing the pregroup reductions for ‘Romeo who loves Juliet dies’. Only the s-wire is left open, which is the witness of grammaticality.

Fig. 1 Diagram for ‘Romeo who loves Juliet dies’. The grammatical reduction is generated by the nested pattern of non-crossing cups, which connect words through wires of types n or s. Grammaticality is verified by only one s-wire being left open. The diagram represents the meaning of the whole sentence from a process-theoretic point of view. The relative pronoun ‘who’ is modelled by the Kronecker tensor. Interpreted in CQM, the diagram represents a quantum state

Given a string diagram resulting from the grammatical reduction of a sentence, we can instantiate a model for natural language processing by giving semantics to the string diagram. This two-step process of constructing a model, where syntax and semantics are treated separately, is the origin of the framework’s name: ‘Compositional’ refers to the string diagram describing structure, and ‘Distributional’ refers to the semantic spaces where meaning is encoded and processed. Any choice of semantics that respects the compositional structure is allowed, and is implemented by component-wise substitution of the boxes and wires in the diagrams. Such a structure-preserving mapping constitutes a functor, and ensures that the model instantiation is ‘canonical’. Valid choices of semantics range from neural networks, to tensor networks (the default choice in the majority of the DisCoCat literature), to quantum processes, and even to hybrid combinations of these, obtained by interpreting the string diagram correspondingly.

In this work, we focus on the last of these choices, quantum processes, and in particular we realise them using pure quantum circuits. As we have described in Meichanetzidis et al. (2020), the string diagram of the syntactic structure of a sentence σ can be canonically mapped to a PQC Cσ(𝜃σ) over the parameter set 𝜃σ. The key idea is that such circuits inherit their architecture, in terms of a particular connectivity of entangling gates, from the grammatical reduction of the sentence.

Quantum circuits, being part of pure quantum theory, also enjoy a graphical language in terms of string diagrams. The mapping from sentence diagram to quantum circuit begins simply by reinterpreting a sentence diagram, such as that of Fig. 1, as a diagram in categorical quantum mechanics (CQM). The word-state of word w in a sentence diagram is mapped to a pure quantum state prepared from a trivial reference product-state by a PQC as \(\vert w(\theta _{w})\rangle = C_{w}(\theta _{w}) \vert 0 \rangle ^{\otimes q_{w}}\), where \(q_{w}={\sum }_{b\in t_{w}} q_{b}\). The width of the circuit thus depends on the number of qubits \(q_{b}\) assigned to each basic pregroup type b ∈ B from which the word-types are composed. Cups are mapped to Bell effects.

Given a sentence σ, we instantiate its quantum circuit by first composing in parallel the word-circuits of the words as they appear in the sentence, which corresponds to a tensor product, \(C_{\sigma }(\theta _{\sigma }) = \bigotimes _{w} C_{w}(\theta _{w})\), and prepares the state |σ(𝜃σ)〉 from the all-zeros basis state. As such, a sentence is parameterised by the union of the parameters of its words, 𝜃σ = ∪w∈σ𝜃w. The parameters 𝜃w determine the word-embedding |w(𝜃w)〉. In other words, we use the Hilbert space as a feature space (Havlíček et al. 2019; Schuld and Killoran 2019; Lloyd et al. 2020) in which the word-embeddings are defined. Finally, we apply Bell effects as dictated by the cup pattern of the grammatical reduction, a function whose result we denote gσ(|σ(𝜃σ)〉). Note that in general this procedure prepares an unnormalised quantum state. In the special case where no qubits are assigned to the sentence type, i.e. qs = 0, the result is an amplitude, which we write as 〈gσ|σ(𝜃σ)〉. Formally, this mapping constitutes a parameterised functor from the pregroup grammar category to the category of quantum circuits; the parameterisation is defined via a function from the set of parameters to functors between the aforementioned source and target categories.

Our model has hyperparameters (Appendix E). The wires of the DisCoCat diagrams we consider carry types n or s, and the numbers of qubits we assign to these pregroup types are qn and qs, respectively. These determine the arity of each word, i.e. the width of the quantum circuit that prepares each word-state. We set qs = 0 throughout this work, which establishes that the sentence-circuits represent scalars. For a unary word w, i.e. a word-state on one qubit, we prepare the state using two rotations as \(\mathrm {R}_{z}({\theta ^{2}_{w}})R_{x}({\theta ^{1}_{w}}) \vert 0\rangle \). For a word w of arity k ≥ 2, we use a depth-d IQP-style parameterisation (Havlíček et al. 2019) consisting of d layers, each comprising Hadamard gates on all qubits followed by controlled-Z rotations \(\text {CR}_{z}({\theta ^{i}_{w}})\), with \(i\in \{1,2,\dots ,d(k-1)\}\). This choice is in part motivated by the conjecture that such circuits are classically hard to evaluate (Havlíček et al. 2019). The relative pronoun ‘who’ is mapped to the GHZ circuit, i.e. the circuit that prepares a GHZ state on the number of qubits determined by qn and qs. This is justified by prior work where relative pronouns and other functional words are modelled by a Kronecker tensor (a.k.a. ‘spider’), whose entries are all zeros except when all indices are equal, in which case the entries are ones (Sadrzadeh et al. 2013; 2014). It is also known as a ‘copy’ tensor, as it copies the computational basis.
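
As a concrete illustration of the above choices, the following is a minimal state-vector sketch (our own, with arbitrary parameter values) that evaluates the sentence amplitude for the toy sentence ‘Romeo loves Juliet’ with qn = 1, qs = 0 and word-circuit depth d = 2; the relative pronoun of Fig. 1 is omitted to keep the example short. It computes exactly the quantity a quantum device would estimate from measurement statistics.

```python
import numpy as np

def rx(t):
    """Single-qubit X rotation."""
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    """Single-qubit Z rotation."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def crz(t):
    """Controlled-Rz on two qubits."""
    m = np.eye(4, dtype=complex)
    m[2:, 2:] = rz(t)
    return m

def unary_word(t1, t2):
    """1-qubit word-state Rz(t2) Rx(t1) |0>."""
    return rz(t2) @ rx(t1) @ np.array([1, 0], dtype=complex)

def transitive_verb(params):
    """2-qubit IQP-style word-state; each layer is H x H followed by one CRz."""
    state = np.array([1, 0, 0, 0], dtype=complex)
    for t in params:                              # depth d = len(params)
        state = crz(t) @ np.kron(H, H) @ state
    return state

# <Bell| effect implementing a cup (CNOT, then Hadamard, then postselect <00|)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def sentence_amplitude(theta):
    """Amplitude <g_sigma|sigma(theta)> for 'Romeo loves Juliet'."""
    romeo = unary_word(*theta["Romeo"])           # qubit 0
    loves = transitive_verb(theta["loves"])       # qubits 1 and 2
    juliet = unary_word(*theta["Juliet"])         # qubit 3
    state = np.kron(romeo, np.kron(loves, juliet))
    # Cups of the pregroup reduction: Bell effects on qubit pairs (0,1) and (2,3).
    return np.kron(bell, bell) @ state

theta = {"Romeo": (0.3, 1.1), "loves": (0.7, -0.4), "Juliet": (1.9, 0.2)}
print(abs(sentence_amplitude(theta)) ** 2)        # squared amplitude, in [0, 1]
```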

In Fig. 2, we show an example of choices of word-circuits for specific numbers of qubits assigned to each basic pregroup type (Appendix Fig. 8). In Fig. 3, we show the circuit corresponding to ‘Romeo who loves Juliet dies’. In practice, we perform the mapping of sentence diagrams to quantum circuits using the Python library DisCoPy (de Felice et al. 2020), which provides a data structure for monoidal string diagrams and enables the instantiation of functors, including functors based on PQCs.

Fig. 2 Example instance of mapping from sentence diagrams to PQCs where qn = 1 and qs = 0. (a) The dashed square is the empty diagram. In this example, (b) unary word-states are prepared by parameterised Rx rotations followed by Rz rotations and (c) k-ary word-states are prepared by parameterised word-circuits of width k and depth d = 2. (d) The cup is mapped to a Bell effect, i.e. a CNOT followed by a Hadamard on the control and postselection on 〈00|. (e) The Kronecker tensor modelling the relative pronoun is mapped to a GHZ state

Fig. 3 The PQC to which ‘Romeo who loves Juliet dies’ of Fig. 1 is mapped, with the choices of hyperparameters of Fig. 2. As qs = 0, the circuit represents a scalar

Here, a motivating remark is in order. In ‘classical’ implementations of DisCoCat, where the semantics chosen to realise a model is in terms of tensor networks, a sentence diagram represents a vector resulting from a tensor contraction. In this case, meanings of words are encoded in the state-tensors in terms of cooccurrence frequencies or other vector-space word-embeddings (Mikolov et al. 2013). In general, tensor contractions are exponentially expensive to compute: the cost scales exponentially with the order of the largest tensors present in the tensor network, with the base of the scaling being the dimension of the vector spaces carried by the wires. However, tensor networks resulting from interpreting syntactic string diagrams over vector spaces and linear maps do not have a generic topology; rather, they are tree-like. This means that tensor networks whose connectivity is given by pregroup reductions can be contracted efficiently as a function of the dimension of the wires carrying the vector spaces that play the role of semantic spaces. Even in this case, however, the dimension of the wires can become prohibitively large (of the order of hundreds) in practice for NLP-relevant applications. In a fault-tolerant quantum computing setting, ideally, as is proposed in Zeng and Coecke (2016), one would have access to a QRAM and would be able to efficiently encode such tensor entries as quantum amplitudes, using only \(\lceil \log _{2}{d}\rceil \) qubits to encode a d-dimensional vector. However, building a QRAM currently remains challenging (Aaronson 2015). In the NISQ case, we still attempt to take advantage of the tensor-product structure defined by a collection of qubits, which provides a Hilbert space that is exponentially large in the number of qubits and can be used as a feature space in which the word-embeddings are trained. Consequently, we adopt the paradigm of QML in terms of PQCs to carry out near-term QNLP tasks. Any possible quantum advantage is to be identified heuristically on a case-by-case basis, depending on the task, the data, and the available quantum computing resources.
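
For comparison, a toy classical contraction (our own illustration; the dimensions are arbitrary) shows why the tree-like pregroup topology keeps the cost of classical DisCoCat polynomial in the wire dimension, even though that dimension itself may be large.

```python
import numpy as np

d, s_dim = 300, 1                     # noun- and sentence-space dimensions
romeo = np.random.rand(d)
juliet = np.random.rand(d)
loves = np.random.rand(d, s_dim, d)   # order-3 tensor for the verb type n^r s n^l

# One contraction per cup; this tree-like pattern costs O(d^2 * s_dim) operations.
sentence = np.einsum('i,isj,j->s', romeo, loves, juliet)
print(sentence.shape)                 # (1,) -- a vector in the sentence space
```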

3 Classification task

Now that we have established our construction of sentence circuits, we describe a simple QNLP task. The dataset or ‘labelled corpus’ K = {(Dσ,lσ)}σ is a finite set of sentence-diagrams {Dσ}σ constructed from a finite vocabulary of words V. Each sentence has a binary label lσ ∈{0,1}. In this work, the labels represent the sentences’ truth-values; 0 for False and 1 for True. Our setup trivially generalises to multi-class classification by assigning \(q_{s}=\log _{2}(\#\text {classes})\) to the s-type wire and measuring in the Z-basis.

We split K into a training set Δ containing the first ⌊p|{Dσ}σ|⌉ sentences, where p ∈ (0,1), and a test set E containing the rest.

We define the predicted label as

$$ l^{\text{pr}}_{\sigma}(\theta_{\sigma}) = \vert \langle g_{\sigma} \vert \sigma(\theta_{\sigma})\rangle \vert^{2} \in [0,1] $$
(1)

from which we can obtain the binary label by rounding to the nearest integer \(\lfloor l^{\text {pr}}_{\sigma } \rceil \in \{0,1\}\).

The parameters of the words need to be optimised (or trained) so that the predicted labels match the labels in the training set. The optimiser we invoke is SPSA (Spall 1998), a gradient-free optimiser which has shown adequate performance in noisy settings (Bonet-Monroig et al. 2021) (Appendix F). The cost function we define is

$$ L(\theta) = {\sum}_{\sigma\in {\Delta}} {(l^{\text{pr}}_{\sigma}(\theta_{\sigma})- l_{\sigma})^{2} }. $$
(2)

Minimising the cost function returns the optimal parameters \(\theta ^{*} = \arg \min _{\theta } L(\theta )\), from which the model predicts the labels \(l^{\text {pr}}_{\sigma }(\theta ^{*})\). Essentially, this constitutes learning a functor from the grammar category to the category of quantum circuits. We then quantify the performance by the training and test errors eΔ and eE, defined as the proportion of labels predicted incorrectly:

$$ e_{A} = \frac{1}{\vert A\vert} {\sum}_{\sigma\in A} \big\vert \lfloor l^{\text{pr}}_{\sigma}(\theta^{*}) \rceil - l_{\sigma} \big\vert~, \qquad A={\Delta}, E. $$

This supervised learning task of binary classification for sentences is a special case of question answering (QA) (de Felice et al. 2020; Chen et al. 2020; Zhao et al. 2020); questions are posed as statements and the truth labels are the binary answers. After training on Δ, the model predicts the answer to a previously unseen question from E, which comprises sentences containing words all of which have appeared in Δ. The optimisation is performed over the parameters of all the sentences in the training set, 𝜃 = ∪σ∈Δ𝜃σ. In our experiments, each word appears at least once in the training set and so 𝜃 = ∪w∈V𝜃w. Note that what is being learned are the inputs, i.e. the quantum word embeddings, to an entangling process corresponding to the grammar. Recall that a given sentence-circuit does not necessarily involve the parameters of every word. However, every word appears in at least one training sentence and words are shared among sentences, which introduces classical correlations between the sentences. This is what makes such a learning task possible.
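
As a rough sketch of the resulting training loop (our own minimal SPSA implementation with standard gain schedules; the hyperparameters actually used are those of Appendix F, and `predict` stands for any routine returning \(l^{\text {pr}}_{\sigma }\), e.g. exact circuit evaluation as in the earlier amplitude sketch or shot-based estimation on hardware):

```python
import numpy as np

def cost(theta, train_set, predict):
    """Sum of squared errors of Eq. 2 over the training set."""
    return sum((predict(theta, sentence) - label) ** 2
               for sentence, label in train_set)

def spsa_minimise(theta0, train_set, predict, iters=200,
                  a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Vanilla SPSA: two cost evaluations per step give a stochastic gradient."""
    theta = np.array(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak, ck = a / k ** alpha, c / k ** gamma
        delta = np.random.choice([-1.0, 1.0], size=theta.shape)
        grad = (cost(theta + ck * delta, train_set, predict)
                - cost(theta - ck * delta, train_set, predict)) / (2 * ck * delta)
        theta -= ak * grad
    return theta

def error(theta, data, predict):
    """Proportion of labels predicted incorrectly after rounding."""
    return np.mean([abs(round(predict(theta, s)) - l) for s, l in data])
```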

In this work, we use artificial data in the form of a very small-scale corpus of grammatical sentences. We randomly generate sentences using a simple context-free grammar (CFG) whose production rules we define. Each sentence is then accompanied by its syntax tree, by construction of our generation procedure. The syntax tree can be cast in string-diagram form, and each CFG-generated sentence-diagram can then be transformed into a DisCoCat diagram (see Appendix C for details). Even though the data is synthetic, we curate it by assigning labels by hand so that the truth values among the sentences are consistent with a story, rendering the classification task semantically non-trivial.

Were one to use real-world datasets of labelled sentences, one could use a parser to obtain the syntactic structures of the sentences; the rest of our pipeline would remain unchanged. Since the writing of this manuscript, the open-source Python package lambeq (Kartsaklis et al. 2021) has been made available, which couples to the end-to-end parser Bobcat. Given a sentence, the parser returns its syntax tree as a grammatical reduction in Combinatory Categorial Grammar (CCG). The package can also translate CCG reductions to pregroup reductions (Yeung and Kartsaklis 2021), and so it is effectively the first practical tool for large-scale parsing in pregroup grammar. The string diagrams in lambeq are encoded using DisCoPy, which enables the instantiation of large-scale DisCoCat models on real-world data.
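
As a hedged illustration of this pipeline (the names below follow the lambeq documentation at the time of writing and may differ between versions):

```python
from lambeq import BobcatParser

parser = BobcatParser()      # CCG parser; its output is converted to pregroup form
diagram = parser.sentence2diagram("Romeo who loves Juliet dies")
diagram.draw()               # a DisCoPy string diagram, ready to be mapped to a PQC
```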

3.1 Classical simulation

We first show results from classical simulations of the QA task. The sentence circuits are evaluated exactly on a classical computer to compute the predicted labels of Eq. 1. We consider the corpus K30 of 30 sentences sampled from a vocabulary of 7 words (Appendix D) and we set p = 0.5. In Fig. 4 we show the convergence of the cost function, for qn = 1 and qn = 2, for increasing word-circuit depth d. The insets of Fig. 4 clearly show the decrease in training and test errors as a function of d when invoking the global optimiser basinhopping (Appendix F).

Fig. 4 Convergence of mean cost function 〈L(𝜃)〉 vs number of SPSA iterations for corpus K30. A lower minimum is reached for larger d. (Top) qn = 1 and |𝜃| = 8 + 2d. Results are averaged over 20 realisations. (Bottom) qn = 2 and |𝜃| = 10d. Results are averaged over 5 realisations. (Insets) Mean training and test errors 〈etr〉, 〈ete〉 vs d. Using the global optimisation basinhopping with local optimisation Nelder-Mead (red), the errors decrease with d

3.2 Experiments on IBMQ

We now turn to readily available NISQ devices provided by IBMQ in order to estimate the predicted labels of Eq. 1.

Before each circuit can be run on a backend, in this case a superconducting quantum processor, it first needs to be compiled. A quantum compiler takes as input a circuit and a backend and outputs an equivalent circuit compatible with the backend’s topology. A quantum compiler also aims to minimise the most noisy operations; for IBMQ devices, the gates most prone to errors are the entangling CNOT gates. The compiler we use in this work is TKET (Sivarajah et al. 2020), and for each circuit run on a backend we use the maximum allowed number of shots (Appendix G).
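
A hedged sketch of this compilation step with pytket, TKET’s Python interface (method names follow the pytket documentation and may vary across versions; `circuit` is assumed to be the pytket Circuit obtained from a sentence diagram, and access to the device requires IBMQ credentials):

```python
from pytket.extensions.qiskit import IBMQBackend

backend = IBMQBackend("ibmq_montreal")
# Rewrite the circuit into the device's native gate set and qubit connectivity,
# optimising so as to reduce the number of noisy two-qubit (CNOT) gates.
compiled = backend.get_compiled_circuit(circuit, optimisation_level=2)
handle = backend.process_circuit(compiled, n_shots=8192)  # 8192: typical per-job shot limit at the time
counts = backend.get_result(handle).get_counts()
```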

We consider the corpus K16, built from a vocabulary of 6 words (Appendix D), and set p = 0.5. For every evaluation of the cost function under optimisation, the circuits were run on the IBMQ quantum computers ibmq_montreal and ibmq_toronto. In Fig. 5, we show the convergence of the cost function under SPSA optimisation and report the training and test errors for different choices of hyperparameters. This constitutes the first non-trivial QNLP experiment on a programmable quantum processor. According to Fig. 4, scaling up the word-circuits results in improved training and test errors, and remarkably, we observe this on the quantum computer as well. This is important for the scalability of our experiment when future hardware allows for greater circuit sizes, and thus richer quantum-enhanced feature spaces and grammatically more complex sentences.

Fig. 5 Convergence of the cost L(𝜃) evaluated on quantum computers vs SPSA iterations for corpus K16. For qn = 1, d = 2, for which |𝜃| = 10, on ibmq_montreal (blue) we obtain etr = 0.125 and ete = 0.5. For qn = 1, d = 3, where |𝜃| = 13, on ibmq_toronto (green) we get etr = 0.125 and a lower testing error ete = 0.375. On ibmq_montreal (red) we get both lower training and testing errors, etr = 0, ete = 0.375, than for d = 2. In all cases, the CNOT-depth of any sentence-circuit after TKET-compilation is at most 3. Classical simulations (dashed), averaged over 20 realisations, agree with the behaviour on IBMQ for both cases d = 2 (yellow) and d = 3 (purple)

4 Discussion and outlook

We have performed the first-ever quantum natural language processing experiment on actual quantum hardware, by means of classification of sentences annotated with binary labels, a special case of QA. We used a compositional-distributional model of meaning, DisCoCat, constructed by a structure-preserving mapping from grammatical reductions of sentences to PQCs. This proof-of-concept work serves as a demonstration that QNLP is possible on currently available quantum devices and that it is a line of research worth exploring.

A remark on postselection is in order. QML-based QNLP tasks such as the one implemented in this work rely on the optimisation of a scalar cost function. In general, estimating a scalar encoded in an amplitude on a quantum computer requires either postselection or coherent control over arbitrary circuits, so that a swap test or a Hadamard test can be performed (Appendix H). Notably, in special cases of interest to QML, the Hadamard test can be adapted to NISQ technologies (Mitarai and Fujii 2019; Benedetti et al. 2020). In its general form, however, the depth-cost resulting after compilation of controlled circuits becomes prohibitive on current quantum devices. Given the rapid improvement in quantum computing hardware, we envision that such operations will be within reach in the near term.
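
For completeness, the textbook Hadamard-test relation behind this remark (standard material, not specific to this work): preparing an ancilla with a Hadamard, applying the full sentence circuit C (including the cup circuitry) controlled on the ancilla, and applying a final Hadamard on the ancilla gives

$$ \Pr(\text{ancilla}=0) = \tfrac{1}{2}\left(1 + \mathrm{Re}\, \langle 0\cdots 0 \vert\, C \,\vert 0\cdots 0\rangle\right), $$

and inserting an \(S^{\dagger}\) gate on the ancilla before the final Hadamard yields the imaginary part, so that \(\vert\langle g_{\sigma}\vert\sigma(\theta_{\sigma})\rangle\vert^{2}\) can be estimated without postselecting on all qubits.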

Future work includes experimentation with other grammars, such as CCG, which returns tree-like diagrams, and using them to construct PQC-based functors, analogously to what is done with neural networks in Socher et al. (2013). This would, for example, enable the design of PQC-based functors that do not require postselection, in contrast to the pregroup-based models, where for each Bell effect to take place one needs to postselect on measurements involving the qubits on which one wishes to realise the Bell effect.

We also look toward more complex QNLP tasks such as sentence similarity and work with real-world large-scale data using a pregroup parser, as made possible with lambeq (Kartsaklis et al. 2021). In that context, regularisation techniques during training will become important, which is an increasingly relevant topic for QML that in general deserves more attention (Benedetti et al. 2019).

In addition, our DisCoCat-based QNLP framework naturally generalises to accommodate mapping sentences to quantum circuits involving mixed states and quantum channels. This is useful as mixed states allow for modelling lexical entailment and ambiguity (Piedeleu et al. 2015; Bankova et al. 2016). As stated above, it is also possible to define functors in terms of hybrid models involving both neural networks and PQCs, in which case one heuristically aims to quantify the possible advantage of such models over strictly classical ones.

Furthermore, note that the word-embeddings in this work are learned in-task. However, training PQCs to prepare quantum states that serve as word-embeddings can also be achieved using the usual NLP objectives (Mikolov et al. 2013). It would be interesting to verify whether such pretrained word-embeddings are useful in downstream tasks, such as the simple classification task presented in this work.

Finally, looking beyond the DisCoCat model, it is well-motivated to adopt the recently introduced DisCoCirc model (Coecke 2020) of meaning and its mapping to PQCs (Coecke et al. 2020), which allows for QNLP experiments on text-scale real-world data in a fully compositional framework. In this model, nouns are treated as the first-class ‘entities’ of a text, and sentence composition is made explicit. Entities go through gates which act as modifiers on them, modelling for example the application of adjectives or verbs. The model also considers higher-order modifiers, such as adverbs modifying verbs. This interaction structure, viewed as a process network, can again be used to instantiate models in terms of neural networks, tensor networks, or quantum circuits. In the latter case, entities are modelled as density matrices carried by wires and their modifiers as quantum channels.