1 Introduction

Nanoscale circuit design using quantum dot cellular automata (QCA) currently attracts considerable research attention because of its low energy consumption and high processing speed [1,2,3,4,5,6,7]. The fundamental QCA logic primitives are the 3-input majority gate (MV), the QCA wire and the QCA inverter [8,9,10,11,12,13,14,15]; QCA architectures are built from these three cellular automata structures. Lent et al. [1] proposed a new paradigm for computation with cellular automata and showed the construction and interconnection of basic digital logic gates. An adiabatic switching technique that permits clocked control of arrays of quantum dot cells performing useful computations was subsequently developed [6, 16,17,18,19]. Experimental demonstrations of a binary wire and of logic gates using QCA have already been reported [2,3,4,5, 8, 20, 21], and many general-purpose combinational and sequential circuits have been designed in QCA [22,23,24,25,26,27,28,29,30]. Overviews of logic redundancy schemes and classical fault-tolerant approaches for circuit reliability have also been discussed [7, 8, 14, 15, 31,32,33,34]. The computational efficacy of such circuits is, however, influenced by environmental noise such as temperature variation. The efficacy of nanocomputing devices in the presence of noise can be estimated with Shannon's information-theoretic measures. In nanocomputing, two questions are central: (1) how effectively noisy channels implement complex computational operations, and (2) how this efficacy varies statistically across ensembles of channels with different physical structures. These questions motivate information-theoretic measures that statistically bound the computational efficacy of noisy computation channels assembled by processes that introduce randomness into the physical channel structure; such measures are therefore tailor-made for characterizing realistic nanocomputing channels. The resulting bounds apply to any artificial or natural nanocomputing channel, realized through a structured or random nanonetwork, that can be modeled as a discrete computation channel. Recently, Anderson et al. [35] quantified, from the information-theoretic point of view, the impact of structural randomness and temperature fluctuations on the performance of a full-adder circuit.

The main aim of this work is to explore the robustness and trustworthiness of QCA digital circuits by calculating their computational fidelity under thermal randomness and in the presence of noise. Accordingly, this work computes the channel fidelity of QCA routing channels when they are implemented as noiseless and as noisy nanocomputing channels. The contributions of this work are as follows.

1. Computation of channel fidelity in QCA channel routing. The computation is performed for both noiseless and noisy QCA channels.

2. To perform this computation, QCA-based 4-bit binary-to-Gray code and binary-to-excess-3 code converters are considered as the routing channels.

3. The computational fidelity is estimated by applying Shannon's information-theoretic measures to confirm the robustness of the QCA channels.

4. The computational fidelity of the QCA channels is observed to vary with temperature. A range of temperatures over which the QCA channels yield reliable computation is therefore proposed.

This article is organized into five sections. The QCA-based design of both converters is outlined in Sect. 2. Estimation of channel fidelity in QCA channel routing for noiseless and noisy QCA channels is given in Sect. 3. A comparative analysis is given in Sect. 4. Finally, Sect. 5 concludes the article.

2 QCA-based binary-to-Gray and excess-3 code converters

For an n-bit converter, the output bits (Yi,n) of the Gray code are related to the input bits (Xi,n) of the binary code as

$$Y_{i,n} = X_{i,n} , \, Y_{i,n - 1} = X_{i,n} \oplus X_{i,n - 1} , \ldots ,Y_{i,1} = X_{i,2} \oplus X_{i,1}$$

The truth table for the conversion of a 4-bit binary code into Gray code is given in Table 1 [12]. It is seen from the truth table that the most significant bit (MSB) of the Gray code is the same as that of the input binary code. The logic expressions of the 4-bit binary code-to-Gray code converter circuit are \(W = A,\quad \, X = A \oplus B,\quad \, Y = B \oplus C,\quad \, Z = C \oplus D\). The corresponding QCA majority-function representations are \(X = M(M(\overline{A} ,B,0),M(A,\overline{B} ,0),1)\), \(Y = M(M(\overline{B} ,C,0),M(B,\overline{C} ,0),1)\) and \(Z = M(M(\overline{C} ,D,0),M(C,\overline{D} ,0),1)\).

Table 1 Truth table for the conversion of 4-bit binary code into Gray code
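As an illustration, the logic expressions above can be checked in software against Table 1. The short Python sketch below is illustrative only (the function name is chosen here, not part of the QCA design flow); it applies W = A, X = A ⊕ B, Y = B ⊕ C, Z = C ⊕ D to all 16 input combinations.

```python
def binary_to_gray(a, b, c, d):
    """W = A, X = A xor B, Y = B xor C, Z = C xor D (all bits are 0 or 1)."""
    return a, a ^ b, b ^ c, c ^ d

# Enumerate all 16 input combinations, as listed in Table 1.
for n in range(16):
    a, b, c, d = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    w, x, y, z = binary_to_gray(a, b, c, d)
    print(f"{a}{b}{c}{d} -> {w}{x}{y}{z}")
```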

The QCA layout of this code converter circuit is outlined in Fig. 1. The circuit is developed using QCA Designer tool. The design is executed as follows.

Fig. 1 QCA layout of 4-bit binary code-to-Gray code converter

1. Each XOR layout consists of 2 inverters and 3 MVs.

2. The inputs are placed in the first clock zone.

3. The AND and OR operations required for the XOR operation are performed in the second and third clock zones, respectively.

4. The circuit layout consists of a total of 194 QCA cells over an area of 920 nm × 240 nm, and three clock zones are required.

5. The simulation is performed with the coherence vector method and requires 5 iterations to converge to the initial steady-state polarization.

6. Since three clock zones are required, the correct output appears after ¾ of a clock cycle, as displayed in Fig. 2.

Fig. 2 Simulation result for 4-bit binary code-to-Gray code converter

The computational efficacy of both code converter circuits designed in this article can be analyzed by treating them as computation channels, in which a given output can arise from several different input combinations, unlike a noiseless communication channel, which has a one-to-one mapping between inputs and outputs, as shown in Fig. 3.

Fig. 3 a Noiseless communication channel, b computation channel

The truth table for the conversion of a 4-bit binary code into excess-3 code is shown in Table 2. Here, the circuit is designed using four serially connected full adders, as presented in Fig. 4; a minimal software model of this conversion is sketched after Fig. 4. The input bit sequences range from “0000” to “1111” (i.e., 0–15). Since every output of the excess-3 code converter equals the binary input increased by three, four bits cannot represent the correct output for the input combinations “1101,” “1110” and “1111” (i.e., 13–15). The most significant bit (E) denotes this error (overflow) in the output of the excess-3 code converter, as shown in the truth table for the last three combinations of the binary input.

Table 2 Truth table for the binary-to-excess-3 code converter
Fig. 4 Design of binary-to-excess-3 code converter
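The behaviour summarized in Table 2 can be modeled in a few lines of Python. The sketch below is illustrative only, not the QCA full-adder implementation, and the bit ordering (W taken as the most significant of the four sum bits) is an assumption: the converter adds three to the 4-bit input and raises E when the result overflows four bits (inputs 13–15).

```python
def binary_to_excess3(n):
    """Return (E, W, X, Y, Z) for a 4-bit input n (cf. Table 2).

    The excess-3 value is n + 3; for n = 13, 14, 15 the result does not fit
    in four bits, which is flagged by E = 1.
    """
    assert 0 <= n <= 15
    value = n + 3
    e = (value >> 4) & 1                     # overflow / error bit E
    w, x, y, z = (value >> 3) & 1, (value >> 2) & 1, (value >> 1) & 1, value & 1
    return e, w, x, y, z

for n in range(16):
    print(format(n, "04b"), "->", binary_to_excess3(n))
```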

The circuit layout of the binary-to-excess-3 code converter shown in Fig. 5 is designed using QCA Designer simulation tool. The design is carried out as follows.

Fig. 5 QCA layout of binary code-to-excess-3 code converter

1. The inputs are placed in the first clock zone.

2. Each of the full adders requires four clock zones to execute.

3. The layout consists of 686 cells, including four inputs and five outputs.

4. The simulation is performed with the coherence vector method and requires 10 iterations to converge to the initial steady-state polarization; the output is shown in Fig. 6.

Fig. 6 Simulation result for binary-to-excess-3 code converter

The simulation results of the proposed converters shown in Figs. 2 and 6 are compared with their theoretical values, and the agreement confirms the functional correctness of the circuits. The parameters used for the coherence vector simulation of the proposed QCA layouts are listed in Fig. 7. The QCA cells used to realize the circuits are 18 nm × 18 nm.

Fig. 7 Coherence vector simulation parameter list

3 Computational fidelity in QCA channel routing

The logical 4-bit binary code-to-Gray code and binary-to-excess-3 code converters have no information loss, because the number of inputs equals the number of corresponding outputs and each input line and output line is distinct. Thus, these computation channels behave almost like communication channels. In particular, when both converters, having equal numbers of input and output bits, perform the ideal logical transformation, they behave as ideal noiseless communication channels.

The 4-bit binary code-to-Gray code converter has a 4-bit output for each 4-bit input, whereas the binary code-to-excess-3 code converter has a 5-bit output for each 4-bit input. Five output bits allow 32 (\(2^5 = 32\)) distinct output combinations, but only 16 of them are logically valid: since there are only 16 input combinations, the number of reachable output combinations must be ≤ 16. Thus, both the binary code-to-Gray code and the binary-to-excess-3 code converters are 16-input, 16-output channels.

Although both converters have equal numbers of inputs and outputs, the QCA circuit of the binary-to-excess-3 code converter shown in Fig. 5 is more complex than the QCA circuit of the binary code-to-Gray code converter shown in Fig. 1; it requires more QCA cells, MVs and clocking zones. It is therefore instructive to study how the computational fidelity varies across QCA circuits that have the same number of input and output combinations. This motivates the selection of the proposed converters as QCA routing channels.

The mean or average information per message (considering equal probabilities {pi} of the input bits of a message m defined over an M-ary input alphabet {xi} and an N-ary output alphabet {yj} with probabilities {qj}) transmitted by a discrete memoryless source is given by the entropy [35, 36],

$$H(m) = \sum\limits_{i = 0}^{M - 1} {p_{i} I_{i} }$$
(1)

where \(I_{i} = \log_{2} \left( {\frac{1}{{p_{i} }}} \right)\) bits represent the information content in the message m. Thus, Eq. (1) can be written as

$$\begin{aligned} H(m) & = \sum\limits_{i = 0}^{M - 1} {p_{i} \log_{2} \left( {\frac{1}{{p_{i} }}} \right)} \\ & = - \sum\limits_{i = 0}^{M - 1} {p_{i} \log_{2} p_{i} } {\text{bits}} \\ \end{aligned}$$
(2)

Since the nanodots arranged in the QCA cells form both the channel that carries the signal and the circuit that performs the computation, Shannon's information-theoretic measures can be applied to evaluate the amount of uncertainty in obtaining the correct output, as specified by the logical transformation of the inputs, in the presence of noise. The 4-bit binary input to both the binary code-to-Gray code and binary-to-excess-3 code converters has 16 combinations (xi), and the output (yj) of each circuit likewise has 16 combinations, so pi = qj = 1/16. Using Eq. (2), Shannon's entropy for the input of both converters is given by,

$$\begin{aligned} H(X_{abcd} ) & = - \sum\limits_{i = 0}^{M - 1} {p_{i} \log_{2} p_{i} } \\ & = - \sum\limits_{i = 0}^{15} {\left( {\frac{1}{16}} \right)\log_{2} \left( {\frac{1}{16}} \right)} \\ & = - 16\left[ {\left( {\frac{1}{16}} \right)\log_{2} \left( {\frac{1}{16}} \right)} \right] \\ & = 4 \\ \end{aligned}$$
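This value can be checked numerically; a minimal Python sketch of Eq. (2) for 16 equiprobable inputs is:

```python
import math

p = [1.0 / 16] * 16                        # equiprobable 4-bit input combinations
H = -sum(pi * math.log2(pi) for pi in p)   # Shannon entropy, Eq. (2)
print(H)                                   # 4.0 bits
```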

The average uncertainty about the input bits (xi) given the received output bits (yj) is calculated by the conditional entropy as follows [35, 36]

$$H(X|Y) = \sum\limits_{j = 0}^{N - 1} {q_{j} H(X|y_{j} )} = \sum\limits_{j = 0}^{N - 1} {q_{j} \left( { - \sum\limits_{i = 0}^{M - 1} {p_{i|j} \log_{2} p_{i|j} } } \right)}$$
(3)

where \(H(X|y_{j} ) = - \sum\limits_{i = 0}^{M - 1} {p_{i|j} \log_{2} p_{i|j} }\) is the entropy of the input given the output bit yj, and \(p_{i|j} = q_{j|i} p_{i} /q_{j}\) is the conditional probability of input xi given output yj [35]. Here \(q_{j|i}\) is the transition probability, which is 1 if \(i \in \left\{ i \right\}_{j}\) and zero otherwise.

The mutual information received at the output of the circuits, i.e., the information present in the output Y about the input X, is given by

$$I(X;Y) = H(X) - H(X|Y)$$
(4)
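For a noiseless one-to-one mapping, such as either converter operating ideally, H(X|Y) = 0 and hence I(X;Y) = H(X) = 4 bits. The sketch below (helper names are ours; the transition probabilities q_{j|i} are taken as an identity mapping) evaluates Eqs. (3) and (4) through the Bayes relation for p_{i|j}.

```python
import math

M = N = 16
p = [1.0 / M] * M                                   # input probabilities p_i
# Transition probabilities q_{j|i}: identity mapping for an ideal converter.
q_ji = [[1.0 if j == i else 0.0 for i in range(M)] for j in range(N)]

# Output probabilities q_j.
q = [sum(q_ji[j][i] * p[i] for i in range(M)) for j in range(N)]

# Eq. (3): H(X|Y) = sum_j q_j * (-sum_i p_{i|j} log2 p_{i|j}), p_{i|j} = q_{j|i} p_i / q_j.
H_X_given_Y = 0.0
for j in range(N):
    for i in range(M):
        p_ij = q_ji[j][i] * p[i] / q[j]
        if p_ij > 0:
            H_X_given_Y -= q[j] * p_ij * math.log2(p_ij)

H_X = -sum(pi * math.log2(pi) for pi in p)          # Eq. (2)
print(H_X, H_X_given_Y, H_X - H_X_given_Y)          # 4.0 0.0 4.0, Eq. (4)
```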

In practical cases, an array of QCA cells behaves as a noisy channel in the presence of thermal randomness, because the maximum output polarization (MOP) of the cells changes with temperature. Consequently, the logical transformation actually performed by the circuits may deviate from the predicted output.

The computational fidelity (FL) measure quantifies how closely the noisy channel output resembles the correct output expected from the logical transformation of the same input distribution, i.e., {xi} with probability mass function {pi}, performed by a noiseless channel [35]. The computational fidelity measure therefore gives a broader description of computational efficacy. It is an information-theoretic measure that captures the statistical correlations between the inputs and outputs of a noisy computing channel, yielding quantitative information about the computational abilities of the channel that go beyond a specific grouping of the K physical channel outputs ({zk}) into N abstract (logical) outputs yj and the assignment of N digit sequences to these yj. Moreover, because of its information-theoretic nature, the computational fidelity measure is versatile, and its entropic formulation connects computational efficacy to the physical costs of noisy computation. More generally, the computational fidelity measure statistically quantifies the distinguishability of outputs that result from inputs belonging to different logical outputs. Note that a closely related fidelity measure is used in quantum computation to quantify how close physical outputs are to the desired states.

We therefore model the binary-to-Gray and binary-to-excess-3 code converters as M-input ({xi}), K-output ({zk}) noisy discrete channels, each characterized by a channel matrix ({πk|i}) that gives the conditional probability of the occurrence of a particular output zk for a particular input xi.

For both circuits, the extent to which the noisy channel generates the correct logical output, analogous to that of the ideal channel, is quantified by the mutual information between the noise-free and noisy channel outputs [35], as shown in Eq. (5).

$$I(Z;Y) = H(Z) - H(Z|Y)$$
(5)

where

$$H(Z) = - \sum\limits_{k = 0}^{K - 1} {\pi_{k} \log_{2} \pi_{k} }$$
(6)

and

$$\pi_{k} = \sum\limits_{i} {p_{i} \pi_{k|i} }$$
(7)

In Eq. (5), Y stands for the output of the ideal channel and Z denotes the output of the noisy channel. \(H(Z)\) is the entropy of the noisy channel output Z, and \(H(Z|Y)\) is the conditional entropy of the noisy channel output given the ideal output.

The conditional entropy of the noisy channel output, i.e., the average entropy for both converters, can be calculated from the expression [35] shown in Eq. (8).

$$H(Z|Y) = \sum\limits_{j = 0}^{N - 1} {q_{j} H(Z|y_{j} )} = \sum\limits_{j = 0}^{N - 1} {q_{j} \left( { - \sum\limits_{k = 0}^{K - 1} {\pi_{k}^{(j)} \log_{2} \pi_{k}^{(j)} } } \right)}$$
(8)

where

$$\pi_{k}^{(j)} = \frac{1}{{q_{j} }}\sum\limits_{{i \in \left\{ i \right\}_{j} }} {p_{i} \pi_{k|i} }$$
(9)

In Eq. (8), \(H(Z|y_{j} )\) is the entropy given that \(Y = y_{j}\) and \(\pi_{k}^{(j)}\) is the probability for \(Z = z_{k}\) when \(Y = y_{j}\). For a noisy computing channel \(N = \left\{ {\left\{ {x_{i} } \right\},\left\{ {z_{k} } \right\},\left\{ { \, \pi_{k|i} } \right\}} \right\}\) with input probability mass function{pi}, the computational fidelity can be written as

$$F_{\text{L}} = \frac{I(Z;Y)}{{H_{L} (Y)}}$$
(10)

where

$$H_{\text{L}} (Y) = - \sum\limits_{j} {q_{j} \log_{2} q_{j} }$$
(11)

In Eq. (10), HL(Y) is the output entropy associated with the logical transformation of the ideal channel, i.e., the output entropy for logical transformation L with input probability mass function {pi} [35]. The computational fidelity of the proposed circuits can be calculated using Eq. (10).
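As a compact summary of Eqs. (5)–(11), the following Python sketch computes FL from a channel matrix, an input probability mass function and the logical input-to-output grouping {i}_j. The function and argument names are ours, chosen for illustration; for an ideal 16-input/16-output channel it returns FL = 1, in agreement with Sect. 3.1.

```python
import math

def fidelity(pi_ki, p, logical_map):
    """Computational fidelity F_L = I(Z;Y) / H_L(Y), Eqs. (5)-(11).

    pi_ki[k][i]   : channel matrix, probability of physical output z_k given input x_i
    p[i]          : input probability mass function
    logical_map[i]: index j of the logical (ideal) output y_j for input x_i
    """
    M, K = len(p), len(pi_ki)
    N = max(logical_map) + 1

    # Output probabilities q_j of the ideal (logical) channel.
    q = [0.0] * N
    for i in range(M):
        q[logical_map[i]] += p[i]

    # pi_k (Eq. (7)) and H(Z) (Eq. (6)).
    pi_k = [sum(pi_ki[k][i] * p[i] for i in range(M)) for k in range(K)]
    H_Z = -sum(pk * math.log2(pk) for pk in pi_k if pk > 0)

    # H(Z|Y) via pi_k^(j) (Eqs. (8) and (9)).
    H_Z_given_Y = 0.0
    for j in range(N):
        for k in range(K):
            pkj = sum(pi_ki[k][i] * p[i] for i in range(M) if logical_map[i] == j) / q[j]
            if pkj > 0:
                H_Z_given_Y -= q[j] * pkj * math.log2(pkj)

    I_ZY = H_Z - H_Z_given_Y                                  # Eq. (5)
    H_L_Y = -sum(qj * math.log2(qj) for qj in q if qj > 0)    # Eq. (11)
    return I_ZY / H_L_Y                                       # Eq. (10)

# Ideal 16-input/16-output channel: the physical output always equals the logical one.
ideal = [[1.0 if k == i else 0.0 for i in range(16)] for k in range(16)]
print(fidelity(ideal, [1.0 / 16] * 16, list(range(16))))      # 1.0
```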

3.1 Computational fidelity in noiseless QCA channel routing

In this section, the computational fidelity of the 4-bit binary code-to-Gray code converter and the binary code-to-excess-3 code converter is evaluated when they behave as noiseless computing channels.

The conditional probabilities comprising the channel matrix ({πk|i}) for the binary-to-Gray code converter (for the binary inputs 0–15) are presented in Eq. (12).

$$\pi_{0|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12a)
$$\pi_{1|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12b)
$$\pi_{2|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12c)
$$\pi_{3|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12d)
$$\pi_{4|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12e)
$$\pi_{5|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12f)
$$\pi_{6|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12g)
$$\pi_{7|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12h)
$$\pi_{8|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12i)
$$\pi_{9|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12j)
$$\pi_{10|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12k)
$$\pi_{11|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12l)
$$\pi_{12|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12m)
$$\pi_{13|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12n)
$$\pi_{14|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{Z} } \right)$$
(12o)
$$\pi_{15|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{Z} } \right)$$
(12p)

where \(P_{i}^{W}\), \(P_{i}^{X}\), \(P_{i}^{Y}\) and \(P_{i}^{Z}\) denote the polarizations of output cells W, X, Y and Z, respectively, for the ith input xi. If it is assumed that the proposed channels produce ideal logical outputs, i.e., every logically true output is a perfect 1 and every logically false output is a perfect 0, then the channel matrix ({πk|i}) for the binary-to-Gray code converter can be estimated (Table 3) using Eqs. (12a)–(12p).

Table 3 Channel matrix for the binary code-to-Gray code converter
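Each entry of Eqs. (12a)–(12p) is 1/16 times a product of four factors of the form (1 ± P), where the sign for each output cell is fixed by the output bit pattern associated with the physical output zk: a bit value of 1 takes the (1 + P) factor and a bit value of 0 takes the (1 − P) factor. A small Python sketch of this construction is given below (the helper name is ours, and the bit-pattern convention is inferred from Eqs. (12a)–(12p)).

```python
def pi_k_given_i(output_bits, polarizations):
    """One entry pi_{k|i} of the channel matrix, in the form of Eqs. (12a)-(12p).

    output_bits   : bit pattern (w, x, y, z) of the physical output z_k
    polarizations : (P_W, P_X, P_Y, P_Z) of the output cells under input x_i

    Each factor is (1 + P) where the corresponding output bit is 1 and
    (1 - P) where it is 0, scaled by 1/16.
    """
    value = 1.0 / 16
    for bit, P in zip(output_bits, polarizations):
        value *= (1 + P) if bit else (1 - P)
    return value

# With fully polarized outputs (P = 1), only the matching pattern survives.
print(pi_k_given_i((1, 1, 1, 1), (1.0, 1.0, 1.0, 1.0)))   # 1.0
print(pi_k_given_i((0, 1, 1, 1), (1.0, 1.0, 1.0, 1.0)))   # 0.0
```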

The conditional probabilities comprising the channel matrix ({πk|i}) for the binary-to-excess-3 code converter (for the binary inputs 0–15) are presented in Eq. (13).

$$\pi_{0|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13a)
$$\pi_{1|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13b)
$$\pi_{2|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13c)
$$\pi_{3|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13d)
$$\pi_{4|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13e)
$$\pi_{5|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13f)
$$\pi_{6|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13g)
$$\pi_{7|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13h)
$$\pi_{8|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13i)
$$\pi_{9|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13j)
$$\pi_{10|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13k)
$$\pi_{11|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 - P_{i}^{E} } \right)$$
(13l)
$$\pi_{12|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 + P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 + P_{i}^{E} } \right)$$
(13m)
$$\pi_{13|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 + P_{i}^{E} } \right)$$
(13n)
$$\pi_{14|i} = \frac{1}{16}\left( {1 - P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 - P_{i}^{X} } \right)\left( {1 + P_{i}^{Z} } \right)\left( {1 + P_{i}^{E} } \right)$$
(13o)
$$\pi_{15|i} = \frac{1}{16}\left( {1 + P_{i}^{W} } \right)\left( {1 - P_{i}^{Y} } \right)\left( {1 + P_{i}^{X} } \right)\left( {1 - P_{i}^{Z} } \right)\left( {1 + P_{i}^{E} } \right)$$
(13p)

where \(P_{i}^{W}\), \(P_{i}^{X}\), \(P_{i}^{Y}\), \(P_{i}^{Z}\) and \(P_{i}^{E}\) denote the polarizations of output cells W, X, Y, Z and E, respectively, for the ith input xi. If it is assumed that the proposed channels produce ideal logical outputs, i.e., every logically true output is a perfect 1 and every logically false output is a perfect 0, then the channel matrix ({πk|i}) for the binary-to-excess-3 code converter can be estimated (Table 4) using Eqs. (13a)–(13p).

Table 4 Channel matrix for the binary-to-excess-3 code converter

Now, in the case of the binary-to-Gray code converter operating as a noiseless channel, \(P_{i}^{W} = P_{i}^{X} = P_{i}^{Y} = P_{i}^{Z} = 1\), and then from Eq. (12) we can write

$$\begin{aligned} \pi_{k|i} & = \frac{1}{16}(1 + 1)(1 + 1)(1 + 1)(1 + 1) \\ & = \left( {\frac{1}{16} \times 16} \right) \\ & = 1 \\ \end{aligned}$$

Using Eq. (7), for every value of k (k = 0, 1, …, 15), \(\pi_{k}\) can be calculated as

$$\begin{aligned} \pi_{k} & = \sum\limits_{i} {p_{i} \pi_{k|i} } \\ & = \left( {\frac{1}{16} \times 1} \right)\left( {{\text{For }}4{\text{ - bit binary code-to-Gray code converter}}\;p_{i} = \frac{1}{16}} \right) \\ & = \frac{1}{16} \\ \end{aligned}$$
$$\begin{aligned} {\text{Therefore}},\;H(Z) & = - \sum\limits_{k = 0}^{K - 1} {\pi_{k} \log_{2} \pi_{k} } \\ & = - \sum\limits_{k = 0}^{15} {\pi_{k} \log_{2} \pi_{k} } \\ & = - 16\left( {\frac{1}{16}\log_{2} \frac{1}{16}} \right) \\ & = - \log_{2} \frac{1}{16} \\ & = 4 \\ \end{aligned}$$

Again from Eq. (9), \(\pi_{k}^{(j)}\) can be calculated as

$$\begin{aligned} \pi_{k}^{(j)} & = \frac{1}{{q_{j} }}\sum\limits_{{i \in \left\{ i \right\}_{j} }} {p_{i} \pi_{k|i} } \;\left( {{\text{For the }}4{\text{-bit binary-to-Gray code converter }}q_{j} = \frac{1}{16}} \right) \\ & = 16 \times \frac{1}{16} \\ & = 1 \\ \end{aligned}$$
$$\begin{aligned} {\text{Thus}},\quad H(Z|y_{j} ) & = - \sum\limits_{k = 0}^{K - 1} {\pi_{k}^{(j)} \log_{2} \pi_{k}^{(j)} } \\ & = - \sum\limits_{k = 0}^{15} {\pi_{k}^{(j)} \log_{2} \pi_{k}^{(j)} } \\ & = - (1\log_{2} 1) \\ & = 0 \\ \end{aligned}$$
$$\begin{aligned} H(Z|Y) & = \sum\limits_{j = 0}^{15} {q_{j} H(Z|y_{j} )} \\ & = \frac{1}{16} \times 0 \\ & = 0 \\ \end{aligned}$$
$$\begin{aligned} I(Z;Y) & = H(Z) - H(Z|Y) \\ & = 4 - 0 \\ & = 4 \\ \end{aligned}$$
$$\begin{aligned} H_{L} (Y) & = - \sum\limits_{j} {q_{j} \log_{2} q_{j} } \\ & = - 16\left( {\frac{1}{16}\log_{2} \frac{1}{16}} \right) \\ & = 4 \\ \end{aligned}$$

Placing the value of \(I(Z;Y)\) and \(H_{L} (Y)\) in Eq. (10), the computational fidelity can be calculated as

$$\begin{aligned} F_{\text{L}} & = \frac{I(Z;Y)}{{H_{L} (Y)}} \\ & = \frac{4}{4} \\ & = 1 \\ \end{aligned}$$

So, fidelity (FL) will be 1 when the circuit behaves like a noiseless computing channel.

Similarly, in case of binary-to-excess-3 converters, it can be shown that the fidelity (FL) will be 1 when it behaves like a noiseless computing channel.

It can be noted that the computational fidelity lies in the range 0 ≤ FL ≤ 1. The lower bound is attained when the noisy channel output Z carries no information about the logical output Y. The upper bound is attained when Y can be inferred from Z without ambiguity, i.e., when I(Z;Y) reaches its maximum value HL(Y) [35]. Thus, if the output of a circuit corresponds exactly to the logical output, the circuit achieves the maximum fidelity FL = 1, which indicates no induced noise in the outputs; a fidelity less than one means that noise is present in the outputs.

3.2 Computational fidelity in noisy QCA channel routing

The fidelity varies with induced noise, which may be present due to dissipated power, structural randomness and thermal randomness. In this section, the fidelity of both circuits is estimated considering thermal randomness.

To estimate the fidelity, the proposed binary code-to-Gray code converter circuit shown in Fig. 1 has been simulated with the QCA Designer tool at different temperatures (1 K, 2 K, and so on). From the simulation outcome, the polarization of each output cell at a specific temperature is observed and used in the fidelity calculation. For example, if the circuit is simulated at a temperature of 1 K, then each output cell has the following maximum output polarization (MOP) for a logical true.

$$P_{i}^{W} = P_{i}^{X} = P_{i}^{Y} = P_{i}^{Z} = 0.954$$

From Eq. (12), we can write

$$\begin{aligned} \pi_{k|i} & = \frac{1}{16}(1 + 0.954)(1 + 0.954)(1 + 0.954)(1 + 0.954) \\ & = \frac{14.578}{16} \\ & = 0.9111 \\ \end{aligned}$$

Using Eq. (7), for every value of k (k = 0, 1, …, 15), \(\pi_{k}\) can be calculated as

$$\begin{aligned} \pi_{k} & = \sum\limits_{i} {p_{i} \pi_{k|i} } \\ & = \left( {\frac{1}{16} \times 0.9111} \right)\left( {{\text{For }}4{\text{ - bit binary code-to-Gray code converter }}p_{i} = \frac{1}{16}} \right) \\ & = \frac{0.9111}{16} \\ & = 0.05694 \\ \end{aligned}$$
$$\begin{aligned} {\text{Therefore}},\;H(Z) & = - \sum\limits_{k = 0}^{K - 1} {\pi_{k} \log_{2} \pi_{k} } \\ & = - 16\left[ {\left( {\frac{0.9111}{16}} \right)\log_{2} \left( {\frac{0.9111}{16}} \right)} \right] \\ & = - (0.9111)\log_{2} \left( {\frac{0.9111}{16}} \right) \\ & = 3.7670 \\ \end{aligned}$$

Again from Eq. (9), \(\pi_{k}^{(j)}\) can be estimated as

$$\begin{aligned} \pi_{k}^{(j)} & = \frac{1}{{q_{j} }}\sum\limits_{{i \in \left\{ i \right\}_{j} }} {p_{i} \pi_{k|i} } \\ & = 16 \times \frac{1}{16} \times (0.9111) \\ & = 0.9111 \\ \end{aligned}$$
$$\begin{aligned} {\text{Thus}},\;H(Z|y_{j} ) & = - \sum\limits_{k = 0}^{K - 1} {\pi_{k}^{(j)} \log_{2} \pi_{k}^{(j)} } \\ & = - 16[(0.9111)\log_{2} (0.9111)] \\ & = 1.9575 \\ \end{aligned}$$
$$\begin{aligned} H(Z|Y) & = \sum\limits_{j = 0}^{15} {q_{j} H(Z|y_{j} )} \\ & = \frac{1}{16}(16 \times 1.9575) \\ & = 1.9575 \\ \end{aligned}$$
$$\begin{aligned} I(Z;Y) & = H(Z) - H(Z|Y) \\ & = 3.7670 - 1.9575 \\ & = 1.8094 \\ \end{aligned}$$
$$\begin{aligned} H_{\text{L}} (Y) & = - \sum\limits_{j} {q_{j} \log_{2} q_{j} } \\ & = - 16\left( {\frac{1}{16}\log_{2} \frac{1}{16}} \right) \\ & = 4 \\ \end{aligned}$$

Placing the value of \(I(Z;Y)\) and \(H_{\text{L}} (Y)\) in Eq. (10), the computational fidelity at 1 K temperature can be calculated as

$$\begin{aligned} F_{{{\text{L}}({\text{For}}1\,{\text{K}})}} & = \frac{I(Z;Y)}{{H_{\text{L}} (Y)}} \\ & = \frac{1.8094}{4} \\ & = 0.4567 \\ \end{aligned}$$
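The chain of calculations above uses a single MOP value for every output cell. It can be reproduced with the short sketch below (the helper names are ours; the same per-output simplification is applied, so small differences from the tabulated fidelities may arise from rounding and from the exact polarization values read from the simulator). The second call anticipates the excess-3 calculation of the next subsection.

```python
import math

def fidelity_from_mop(mop, n_bits=4, n_io=16):
    """Simplified fidelity estimate of Sect. 3.2 for a single MOP value.

    pi = (1 + MOP)**n_bits / n_io reproduces the pi_{k|i} value used above;
    H(Z), H(Z|Y), I(Z;Y) and H_L(Y) then follow Eqs. (5)-(11) with the same
    per-output simplification adopted in the worked example.
    """
    pi = (1 + mop) ** n_bits / n_io
    H_Z = -n_io * (pi / n_io) * math.log2(pi / n_io)
    H_Z_given_Y = -n_io * pi * math.log2(pi)
    I_ZY = H_Z - H_Z_given_Y
    H_L_Y = math.log2(n_io)
    return I_ZY / H_L_Y

print(fidelity_from_mop(0.954))   # MOP of the binary-to-Gray converter at 1 K (about 0.45)
print(fidelity_from_mop(0.988))   # MOP of the binary-to-excess-3 converter at 1 K (about 0.85)
```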

Similarly, at different temperatures, the computational fidelity of the binary code-to-Gray code converter is estimated and tabulated in Table 5.

Table 5 Fidelity calculation of binary code-to-Gray code converter

In a similar way, the computational fidelity of the proposed binary-to-excess-3 code converter has been estimated at different temperatures such as 1 K and 2 K. For example, if the circuit is simulated at a temperature of 1 K, then each output cell has the following maximum output polarization (MOP) for a logical true.

$$P_{i}^{W} = P_{i}^{X} = P_{i}^{Y} = P_{i}^{Z} = 0.988$$

From Eq. (13), we can write

$$\begin{aligned} \pi_{k|i} & = \frac{1}{16}(1 + 0.988)(1 + 0.988)(1 + 0.988)(1 + 0.988) \\ & = \frac{15.6194}{16} \\ & = 0.9762 \\ \end{aligned}$$

Using Eq. (7), for every value of k (k = 0, 1, …, 15), \(\pi_{k}\) can be calculated as

$$\begin{aligned} \pi_{k} & = \sum\limits_{i} {p_{i} \pi_{k|i} } \\ & = \left( {\frac{1}{16} \times 0.9762} \right)\left( {{\text{For the binary-to-excess-3 code converter }}p_{i} = \frac{1}{16}} \right) \\ & = \frac{0.9762}{16} \\ & = 0.06101 \\ \end{aligned}$$
$$\begin{aligned} {\text{Therefore}},\quad H(Z) & = - \sum\limits_{k = 0}^{K - 1} {\pi_{k} \log_{2} \pi_{k} } \\ & = - 16\left[ {\left( {\frac{0.9762}{16}} \right)\log_{2} \left( {\frac{0.9762}{16}} \right)} \right] \\ & = 3.9388 \\ \end{aligned}$$

Again from Eq. (9), \(\pi_{k}^{(j)}\) can be calculated as

$$\begin{aligned} \pi_{k}^{(j)} & = \frac{1}{{q_{j} }}\sum\limits_{{i \in \left\{ i \right\}_{j} }} {p_{i} \pi_{k|i} } \\ & = 16 \times \frac{1}{16} \times (0.9762) \\ & = 0.9762 \\ \end{aligned}$$
$$\begin{aligned} {\text{Thus}},\quad H(Z|y_{j} ) & = - \sum\limits_{k = 0}^{K - 1} {\pi_{k}^{(j)} \log_{2} \pi_{k}^{(j)} } \\ & = - 16[(0.9762)\log_{2} (0.9762)] \\ & = 0.5425 \\ \end{aligned}$$
$$\begin{aligned} H(Z|Y) & = \sum\limits_{j = 0}^{15} {q_{j} H(Z|y_{j} )} \\ & = \frac{1}{16}(16 \times 0.5425) \\ & = 0.5425 \\ \end{aligned}$$
$$\begin{aligned} I(Z;Y) & = H(Z) - H(Z|Y) \\ & = 3.9388 - 0.5425 \\ & = 3.3963 \\ \end{aligned}$$
$$\begin{aligned} H_{L} (Y) & = - \sum\limits_{j} {q_{j} \log_{2} q_{j} } \\ & = - 16\left( {\frac{1}{16}\log_{2} \frac{1}{16}} \right) \\ & = 4 \\ \end{aligned}$$

Placing the values of \(I(Z;Y)\) and \(H_{\text{L}} (Y)\) in Eq. (10), the computational fidelity at a temperature of 1 K can be calculated as

$$\begin{aligned} F_{{L({\text{For}}1K)}} & = \frac{I(Z;Y)}{{H_{L} (Y)}} \\ & = \frac{3.3988}{4} \\ & = 0.8497 \\ \end{aligned}$$

Similarly, at different temperatures, the computational fidelity of the binary-to-excess-3 code converter is estimated and tabulated in Table 6.

Table 6 Fidelity calculation of binary-to-excess-3 code converter

4 Results and discussions

4.1 Fidelity versus temperature analysis

The variation of the computational fidelity of the binary-to-Gray and binary-to-excess-3 code converters with temperature is shown in Figs. 8 and 9. The observations from the graphs are as follows.

Fig. 8 Computational fidelity versus temperature characteristics of the 4-bit binary code-to-Gray code converter

Fig. 9 Computational fidelity versus temperature characteristics of the 4-bit binary-to-excess-3 code converter

1. The MOP decreases with rising temperature. Thus, the computational fidelity decreases with increasing temperature for both code converter circuits.

2. Both circuits perform reliable computation only over the low-temperature range, i.e., the stability of the code converter circuits decreases under thermal randomness.

4.2 Comparative analysis

The comparative study of the computational fidelity of the two code converter circuits is shown in Fig. 10, from which the following observations can be made.

Fig. 10 Comparison of the computational fidelity versus temperature characteristics of the 4-bit binary-to-Gray (“+” symbol) and binary-to-excess-3 (“*” symbol) code converters

1. The fidelity of the binary code-to-Gray code converter is much lower than that of the binary-to-excess-3 code converter, even in the low-temperature regime.

2. Further, the fidelity of the binary-to-excess-3 code converter starts to degrade at a relatively higher temperature than that of the other converter.

3. The binary-to-Gray and binary-to-excess-3 code converters can perform the logical transformation or computation efficiently over the temperature ranges 1–4 K and 1–11 K, respectively.

The fidelity of the binary code-to-Gray code converter is much lower than that of the binary-to-excess-3 code converter, even in the low-temperature regime, because of the MOP. A higher MOP means a higher fidelity, since in that case I(Z;Y) approaches its maximum value HL(Y). Section 3.2 shows that both converters have the same value of HL(Y), namely 4, so the variation in fidelity depends on \(I(Z;Y)\): if \(I(Z;Y)\) increases, the fidelity increases, and if \(I(Z;Y)\) decreases, the fidelity decreases. The value of \(I(Z;Y)\) in turn depends on the MOP, rising and falling with it. For example, as described in Sect. 3.2, at T = 1 K the MOP of the binary code-to-Gray code converter is 0.954, which gives \(I(Z;Y)\) = 1.8094 and hence a fidelity FL = 0.4567, whereas the MOP of the binary-to-excess-3 code converter at T = 1 K is 0.988, which gives \(I(Z;Y)\) = 3.3963 and hence FL = 0.8497. Therefore, owing to its lower MOP, the fidelity of the binary code-to-Gray code converter is much lower than that of the binary-to-excess-3 code converter, even in the low-temperature regime.

4.3 Computational faithfulness

During the estimation process, the degree of computational faithfulness of the proposed converters was observed as a function of temperature, and the result is summarized in Table 7. The result is analyzed as follows.

Table 7 Computational faithfulness under thermal randomness
1. The computational fidelity of the QCA binary-to-Gray and binary-to-excess-3 code converters is good over the temperature ranges \(0 \le T < 4\,{\text{K}}\) and \(0 \le T < 11\,{\text{K}}\), respectively. Thus, both code converters perform reliable computation over these temperature ranges.

2. The fidelity is adequate over the ranges \(5\,{\text{K}} \le T < 7\,{\text{K}}\) and \(11\,{\text{K}} \le T < 18\,{\text{K}}\), respectively. Thus, the outputs of both code converters can still be considered valid over these temperature ranges.

3. The fidelity is poor over the ranges \(7\,{\text{K}} \le T < 9\,{\text{K}}\) and \(18\,{\text{K}} \le T < 20\,{\text{K}}\), respectively; in these ranges, both code converters produce faulty outputs.

4.4 Data statistics of the proposed code converter circuits

The data statistics for both code converter circuits are tabulated in Table 8. Table 8 shows that the standard deviation of the computational fidelity of the binary-to-excess-3 code converter is slightly larger than that of the binary-to-Gray code converter. This small difference, of the order of 0.07, is manifested in the small difference between the slopes of the two curves (Fig. 10). According to these statistics, the binary-to-excess-3 code converter performs more reliably than the binary-to-Gray code converter under thermal randomness.

Table 8 Data statistics of the code converters

5 Conclusion

This article presents the computation of channel fidelity in QCA channel routing for noiseless and noisy QCA channels. Shannon's information-theoretic measure of computational fidelity confirms the robustness of the proposed binary-to-Gray and binary-to-excess-3 code converter-based QCA routing channels. The proposed routing channels yield reliable computation over a certain range of temperatures, and the computational fidelity is found to deteriorate with increasing temperature for both routing channels. The routing channels exhibit considerable fidelity when operated in the temperature regimes of 1–4 K and 1–11 K, respectively; hence, both channels yield appreciable computational efficacy in the low-temperature regime. Moreover, the extent of the variation of the computational fidelity of the circuits with thermal fluctuations reflects the fuzzy, multivalued character of the performance of the QCA-based routing channels. The simulation results are verified against theoretical values, which confirms the design accuracy of the proposed channels.