1 Introduction

Ever since the invention of computers, and especially since their ever-increasing acceptance by businesses, almost all organizations have computerized their business operations. Business data are collected and stored using computers. Managing increasing volumes of digital data was a challenge from the very beginning. On-Line Transaction Processing (OLTP) systems were developed to collect transaction data [1]. These systems were designed with complex structures in order to minimize data redundancy while supporting very fast write operations. The informational value of this stored transactional data was recognized from the very beginning, as many applications based on it, for report generation, trend analysis, etc., were developed for decision support. However, OLTP systems were not designed for fast retrieval of huge volumes of data for analysis; hence they performed very limited analysis, and that too inefficiently [1, 2]. With the advent of computer networks having multiple communication protocols, computers with varied hardware configurations, distinct operating systems and disparate database management systems, the stand-alone transactional data sources of an organization became incompatible and obsolete. As a result, the credibility of an organization's data asset for analysis diminished, and data analysis using incompatible data sources became an exorbitantly costly process, causing information bottlenecks in the organization [3]. The data warehouse (DW) was designed as an alternative to address this crisis, with the aim of meeting the informational needs of the organization's decision support system (DSS) [4]. A DW is a centralized repository of historical, integrated, subject-oriented, time-variant and non-volatile data, created and maintained to support complex data analysis and to provide information for strategic and tactical decision making [1,2,3]. 
In the DW, data from various disparate, remote, heterogeneous and incompatible transactional data sources are extracted, transformed, consolidated and integrated to obtain correct, unambiguous, consistent and complete data. In transactional data sources, data are organized and stored around specific operational applications; however, in a DW, data are organized and stored not by their applications, but by business subjects crucial for the organization [2, 3, 5]. Generally, operational systems store the current value of data by overwriting its previous value; they do not maintain a record of all its preceding values. As a result, historical analysis cannot be performed using such operational data. In contrast, a DW, in addition to the current value of data, stores all values ever assigned to the data since its creation. The data that enter a DW are never deleted, but stored permanently throughout the life of the organization. Such time-variant and non-volatile data facilitates historical and predictive analysis. A DW stores atomic data, as well as summarized data, to support detailed and fast analytical queries.

The sequence of interactive and multi-perspective data analyses performed on the DW using a data analytical tool is called On-Line Analytical Processing (OLAP); supporting OLAP is the primary goal of building a DW [2, 6,7,8]. OLAP queries access very large volumes of historical and aggregated data at different levels of granularity, using the roll-up and drill-down OLAP operators along certain combinations of dimensions (sets of attributes). They use the slice and dice operators to select and project a desired portion of the data, and the pivot operator to reorient the data to another view. These complex OLAP operations are made possible by the multidimensional data model [6,7,8], which provides a multidimensional view of the data comprising measures and dimensions [9]. A measure is the numerical quantification of an event of interest, and a dimension is a set of attributes forming the context of a measure [9]. Product, Customer and Time can be dimensions for a Sales measure, as shown in figure 1. The multidimensional data model is implemented using a star schema in relational database management systems and using multidimensional arrays in a multidimensional database [9]. The star schema with Sales as the fact table and Customer, Product and Time as the dimension tables is shown in figure 2.
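To make the roll-up operation concrete, the following Python sketch aggregates a toy fact table over a chosen subset of dimensions. The table contents and sales figures are hypothetical, not those of figures 1 and 2.

```python
from collections import defaultdict

# Toy fact table: (Customer, Product, Time) dimensions with a Sales measure.
# Rows are illustrative, not taken from the paper's figures.
fact_table = [
    ("Alice", "Laptop", "2023-Q1", 1200),
    ("Alice", "Phone",  "2023-Q2",  800),
    ("Bob",   "Laptop", "2023-Q1",  950),
    ("Bob",   "Phone",  "2023-Q1",  700),
]

def roll_up(rows, dims):
    """Aggregate the Sales measure over the chosen subset of dimensions.

    dims is a tuple of indices into (Customer, Product, Time); rolling up
    drops the remaining dimensions, yielding a coarser-granularity view.
    """
    view = defaultdict(int)
    for row in rows:
        key = tuple(row[i] for i in dims)
        view[key] += row[3]
    return dict(view)

# Roll up to the Product view (dimension index 1): total sales per product.
by_product = roll_up(fact_table, (1,))
print(by_product)   # {('Laptop',): 2150, ('Phone',): 1500}
```

Passing the dimension indices (0, 2) instead would roll the same facts up to the Customer–Time view, illustrating how each combination of dimensions yields a distinct aggregate view.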

Figure 1

Data cube.

Figure 2

Star schema.

One of the major challenges during an OLAP session is the speed with which the DW and OLAP components of a DSS are able to respond to the OLAP queries posed by analysts. Although the available OLAP operators have been successfully assisting such queries, if the DSS takes hours or days to answer them, the analysis would be very limited and less productive, as response delays would break the chain of ad-hoc analytical thoughts of an analyst, and the information obtained at the end of such a long OLAP session would mostly be obsolete and irrelevant. Not only are the depth and width of the information essential, but the speed at which such information is delivered is all the more important to stay competitive in a dynamic market environment with many competitors [3]. OLAP queries are usually long, complex, exploratory and ad hoc in nature. When posed against a voluminous DW, these queries consume a lot of time for their processing, thereby increasing the query response time. This problem worsens with the continuously growing data in a DW, as these queries may take hours or days to process, where the desired response time is only a few seconds or minutes. Although several query optimization techniques and indexing strategies [10,11,12,13,14] exist, they do not scale with the ever-increasing volume of data in the DW. Scalability can be addressed by pre-computing relevant and frequently accessed data in the DW and storing it separately as materialized views [15]. This pre-computed information, which is significantly small when compared with the voluminous DW, can be used to process OLAP queries efficiently.

Materialized views, unlike virtual views, store data along with their query definition. Their primary purpose is to reduce the response time of OLAP queries. Amongst the issues associated with materialized views, such as view maintenance, view synchronization, view adaptation, answering queries using views and view selection [12], view selection is the focus of this paper and is discussed next.

1.1 View selection

View selection is concerned with the selection of an appropriate subset of views, from amongst all possible views for a given query workload, that can improve query performance while conforming to resource constraints in terms of storage space, maintenance cost, etc. [16,17,18,19,20]. It has been studied in the context of query optimization, warehouse design, distributed databases, semantic web databases, etc. [17, 21,22,23]. According to [20], view selection is concerned with the selection of views, for a given query workload, from a database such that they can be stored within the available storage space. Since the number of views is exponential in the number of dimensions, all possible views cannot be materialized due to the storage space constraint. Thus, there is a need to select, from amongst all possible views, a subset of views that conforms to the storage space constraint. Further, optimal selection of a subset of views is an NP-Complete problem [13]. The objective then is to select a subset of good quality views, if not optimal ones, for materialization. This subset cannot be selected arbitrarily, as that may result in selecting views that are irrelevant and incapable of answering the OLAP queries. Several approaches have attempted to select this subset of views; these are broadly classified as empirically based or heuristically based [24]. The empirically based view selection approaches [24,25,26,27,28,29,30,31,32,33] consider data accessed by past queries as indicators of data that are likely to be accessed by future queries. These approaches monitor and assess the queries on parameters like query frequency, data accessed, etc., and use this information to select subsets of views for materialization. On the other hand, heuristically based view selection approaches use heuristics to prune the search space of all possible views in order to select views that can reduce the response time of OLAP queries. 
Most of these are based on greedy heuristics [12, 13, 34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50]. Among them, most focus on the issues and problems associated with the greedy view selection algorithm given in [13], also referred to as HRUA [36, 37, 43,44,45,46,47,48,49]. This paper likewise proposes a view selection algorithm that attempts to address the issues and problems associated with the greedy view selection algorithm HRUA given in [13].

In each iteration, HRUA computes the benefit of each view of a multidimensional lattice, using its size, and selects from amongst them the one having the maximum benefit. For selecting the Top-K views, this process continues for K iterations. HRUA aims to select the Top-K views that minimize the total cost of evaluating all the views (TVEC). However, HRUA has certain limitations. Since the number of views is exponential in the number of dimensions, an increase in the number of dimensions leads to a deterioration, with respect to TVEC, in the quality of the views selected using HRUA. This is because the number of possible views from which to select the Top-K views increases with the number of dimensions; HRUA, being a greedy algorithm, thus selects views of sub-optimal quality. Another limitation of HRUA is that it is not scalable to higher dimensions, i.e., it becomes computationally infeasible to select views for higher dimensional data sets. These limitations need to be addressed in order to select good quality views for higher dimensional data sets. One way to address this problem is by selecting views using evolutionary algorithms. Darwin's theory of evolution by means of natural selection has been the heuristic underlying evolutionary algorithms [51]. According to it, genetically and behaviourally well adapted progeny survive and replace their parents to reproduce, prolonging the existence of their species. Many aspects of natural evolution have been viewed as computational processes and have been adapted, with stochastic elements, to solve complex computational problems [51]. Such algorithms have also been used for view selection. Evolutionary view selection is discussed next.
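The greedy selection described above can be sketched in Python as follows. The lattice sizes and the benefit formulation are illustrative assumptions in the spirit of HRUA [13], not the exact data or code of that work.

```python
# Hypothetical view sizes for a three-dimensional (C, P, T) lattice;
# the numbers below are illustrative and are not taken from [13].
sizes = {
    frozenset("CPT"): 100, frozenset("CP"): 60, frozenset("CT"): 75,
    frozenset("PT"): 50, frozenset("C"): 30, frozenset("P"): 20,
    frozenset("T"): 10, frozenset(""): 1,    # frozenset("") is the NONE view
}

def descendants(v):
    """Views computable from v: every view whose dimension set is a subset."""
    return [w for w in sizes if w <= v]

def hrua(k):
    """Greedy Top-K selection in the spirit of HRUA [13]: each iteration
    materializes the view whose materialization most reduces the total
    cost of evaluating all the views."""
    materialized = {frozenset("CPT")}        # the root is always available

    def cost(w):
        """Size of the smallest materialized ancestor of w."""
        return min(sizes[a] for a in materialized if w <= a)

    for _ in range(k):
        best, best_benefit = None, 0
        for v in sizes:
            if v in materialized:
                continue
            benefit = sum(max(cost(w) - sizes[v], 0) for w in descendants(v))
            if benefit > best_benefit:
                best, best_benefit = v, benefit
        if best is None:                     # no view yields positive benefit
            break
        materialized.add(best)
    return materialized - {frozenset("CPT")}
```

With these sizes, the first iteration picks view PT (it saves 50 rows of evaluation cost for each of its four descendants), illustrating why a greedy choice that looks best locally need not be globally optimal.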

1.2 Evolutionary view selection

Evolutionary view selection selects views by exploring and exploiting the search space of all possible views using evolutionary operators like selection, crossover and mutation, with the aim of selecting good sets of views for materialization. Several evolution-based view selection algorithms exist [52,53,54,55,56,57,58,59,60,61,62,63]. Reference [62] used a genetic algorithm (GA) to select an optimal set of views, for a given set of multiple query processing plans, to minimize the total processing cost. The proposed GA performed better than the heuristics of [62] and [18]; it was also observed that a hybrid of the GA with the heuristics of [62] and [18] performed better than either the GA or the heuristics alone. Reference [52] used a Genetic Local Search (GLS) algorithm over an AND–OR view graph to address the view selection problem. GLS searches in two steps: in the first step it uses a local search to improve the population of chromosomes, whereupon the GA is employed to diversify the search towards unexplored solution space. It was applied to a real database to determine its efficiency. A GA, in conjunction with an OR view graph, was used for the maintenance-cost view selection problem [54]. The results obtained were consistently within 10 percent of the optimal solution; further, the algorithm exhibited a linear execution time. Reference [61] proposed a two-level structure for materialized view selection and used evolutionary algorithms with the Multiple View Processing Plan (MVPP) framework to select views. It was observed that the evolutionary algorithms had impractically long computation times and that the quality of their solutions was no better than that of the hybrid of the evolutionary algorithms and the heuristics of [61] and [18]. Reference [60] proposed a stochastic ranking evolutionary algorithm (SEA) for the maintenance-cost view selection problem. 
It was observed that SEA was able to obtain near-optimal results, very close to those obtained by the A*-heuristic, and that SEA performed much better than the algorithm proposed in [54]. A* and SEA performed better than the algorithm of [54] and the inverted-tree greedy for smaller instances of the problem. As A* and the inverted-tree greedy were not able to handle lattices with 256 nodes, SEA and the algorithm of [54] were compared on higher dimensional data sets; the solutions obtained by SEA were much better than those obtained by the algorithm of [54], though SEA took longer. Reference [55] proposed a genetic greedy method to select a set of materialized cubes from the data cube; it included a greedy repair procedure to handle infeasible solutions generated by the simple GA. Its performance was compared with that of the greedy algorithm of [13] and was observed to be better. The Niched Pareto Genetic Algorithm (NPGA) and the Multi-Objective Genetic Algorithm (MOGA) were applied to the maintenance-cost view selection problem in [53], and the solutions were compared with those produced by HRUA, using real and synthetic data sets. The performances of NPGA and MOGA were better than that of HRUA for all problems, but the performance gap decreased for larger instances of the problem. NPGA and MOGA performed almost similarly on most problems, but MOGA performed better than NPGA on skewed problems. Reference [56] proposed the M-OLAP Genetic and M-OLAP co-Genetic algorithms to select distributed OLAP cubes for materialization; they were shown to perform better than the greedy algorithm. Reference [57] applied a GA to the view selection problem with dual constraints, i.e., space and maintenance cost constraints. 
The benefit of the solutions obtained by the GA was at least 64 percent of the benefit of the optimal solution. The results of the GA were compared with those of exhaustive search, and the results of the GA were observed to be better than the latter. Reference [58] used an evolutionary algorithm to address the weighted materialized view selection problem, which considers both the volume and the importance of the retrieved data. Simulation results showed that it performed substantially better than the brute-force method and the heuristic approach in terms of both solution quality and execution time. In [64], a GA-based view selection algorithm, GVSA, was proposed to select the Top-K views from a multidimensional lattice; GVSA was shown to perform better than HRUA. Further, in [65], a memetic view selection algorithm, MVSA, was proposed, which incorporated iterative improvement into the evolutionary nature of the algorithm. MVSA produced better quality Top-K views than GVSA [66]. Later, in [66], the algorithm DEVSA, which selects the Top-K views from a multidimensional lattice using differential evolution, was proposed. DEVSA was able to select comparatively better quality views than GVSA and MVSA.

The effectiveness of evolutionary algorithms depends upon their ability to achieve an appropriate balance between exploration and exploitation of the search space while arriving at solutions to a given problem. Though traditional evolutionary algorithms are characterized by individual representation, a fitness function, selection, crossover and mutation, they are not able to achieve adequate population diversity and are thus unable to explore and exploit the search space adequately [67]. The quantum-inspired evolutionary algorithm (QIEA) has been used as an alternative that achieves such population diversity using principles of quantum computing such as the quantum bit and the superposition of states [67, 68]. In QIEA, an individual is probabilistically represented using a string of Q-bits, where each Q-bit is the smallest unit of information. Since a Q-bit individual can represent binary solutions probabilistically, it can ensure better population diversity, which in turn enables better exploration and exploitation of the search space [67]. Further, the Q-gate operation in QIEA maintains an appropriate balance between exploration and exploitation of the search space and thereby drives the individuals towards better solutions [68, 69]. QIEA has been shown to perform better than traditional evolutionary algorithms [67, 70,71,72,73,74]. In this paper, an attempt has been made to use QIEA to address the view selection problem in the context of a multidimensional lattice framework. Accordingly, a quantum-inspired-evolution-based view selection algorithm (QIEVSA), which selects the Top-K views from a multidimensional lattice, has been proposed. Further, QIEVSA is compared with the existing view selection algorithms GVSA, MVSA and DEVSA. These comparisons show that QIEVSA performs better than GVSA, MVSA and DEVSA.

1.3 Organization of the paper

This paper is organized as follows: view selection using QIEA is discussed in section 2 followed by an illustrative example in section 3. Experimental results are given in section 4. Section 5 presents the conclusion.

2 View selection using QIEA

Since the proposed view selection algorithm aims to address the limitations of HRUA, it considers a multidimensional lattice for view selection. In a multidimensional lattice framework, the root node represents the fact table, while the remaining nodes represent all other possible combinations of the dimension tables (views). The multidimensional lattice is discussed next.

2.1 Multidimensional lattice

An OLAP cube, shown in figure 1, can be represented as the multidimensional lattice shown in figure 3. This lattice comprises three dimensions, viz., Customer (C), Product (P) and Time (T), and therefore the total number of possible views is 8 (2³). The possible views are represented by the nodes of the lattice, with the view index in parenthesis and the view size alongside. The dependencies between the views are represented by edges, and each view is either directly or indirectly dependent on the root view, which represents the base fact table CPT. View NONE has no dependent view. View CP is directly dependent on view CPT, whereas view C is indirectly dependent on view CPT.

Figure 3

Three-dimensional lattice of views along with the size of each view.

Next, the QIEA that has been discretized and adapted for view selection is discussed.

2.2 QIEA

Quantum computation is based on the physical principles of quantum mechanics, like Q-bits, superposition, Q-gates and quantum measurement, for solving problems in the context of classical computing paradigms [68]. In quantum computer systems, a single bit of information, i.e., a quantum bit (Q-bit), may be in the “0” state or in the “1” state or in any superposition of the “0” state and “1” state [67]. Its state can be given as [67]

$$ |\varPsi\rangle = \alpha|0\rangle + \beta|1\rangle $$

where |0> and |1> represent the values of classical bits 0 and 1, respectively, and α and β are complex numbers satisfying:

$$ |\alpha|^{2} + |\beta|^{2} = 1 $$

\( |\alpha|^{2} \) and \( |\beta|^{2} \) are the probabilities of the Q-bit being in the “0” state and the “1” state, respectively.

The state of a Q-bit can be changed using a quantum gate (Q-gate), a reversible gate represented by a unitary operator U satisfying \( U^{\dagger}U = UU^{\dagger} = I \) [67]. Several Q-gates exist, like the NOT gate, the controlled-NOT gate, the rotation gate, etc. [67, 75]. A system with m Q-bits has \( 2^{m} \) states. Observing a quantum state collapses it to a single state [67].
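As an illustration of these principles, the following sketch holds a Q-bit as an (α, β) amplitude pair and applies a rotation Q-gate to it; the rotation angle used here is arbitrary.

```python
import math

def rotate(qbit, theta):
    """Apply the rotation Q-gate G(theta) to a Q-bit (alpha, beta).

    The gate is unitary, so the normalization |alpha|^2 + |beta|^2 = 1
    is preserved by the update.
    """
    a, b = qbit
    return (math.cos(theta) * a - math.sin(theta) * b,
            math.sin(theta) * a + math.cos(theta) * b)

q = (1 / math.sqrt(2), 1 / math.sqrt(2))   # equal superposition of |0> and |1>
q = rotate(q, 0.05 * math.pi)              # nudge probability mass towards |1>
assert abs(q[0] ** 2 + q[1] ** 2 - 1) < 1e-12
```

Observing this Q-bit would yield “1” with probability β², which the rotation above has increased slightly beyond one half.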

Quantum computers were formalized in the late 1980s and active research was carried out in the 1990s, during which research related to merging of quantum computing and evolutionary computing started. Accordingly, several algorithms exist that are broadly classified [68] as evolution-designed quantum algorithms [76,77,78,79,80], quantum evolutionary algorithms [81,82,83,84,85,86] and QIEAs [67, 87,88,89,90,91,92,93]. This paper focuses on the use of the QIEA to address the view selection problem.

QIEA uses the probabilities associated with each state to describe the behaviour of the system. QIEA uses Q-bits, Q-gates and an observation process: Q-bits represent individuals, Q-gates operate on Q-bits to generate offspring, and the observation process causes the quantum particle to take state |0> or state |1>. The superposition state represented by the Q-bits collapses to a single state during the observation process. QIEA uses the Q-bit representation, which is probabilistically a linear superposition of states, to describe the individuals of the population, and uses the Q-gate to generate individuals with better solutions for the next generation. It can explore and exploit the search space for a global solution even with a population comprising a single individual. Several QIEA algorithms exist and are broadly classified into three types, namely binary observation QIEA (bQIEA) [67, 89, 90], real observation QIEA (rQIEA) [54, 93] and QIEA-like algorithms (iQIEA) [87, 88, 92]. In this paper, bQIEA [67] has been used to address the view selection problem.

An individual in bQIEA is represented as a string of Q-bits as follows:

$$ \left[ \begin{array}{c|c|c|c|c} \alpha_{1} & \alpha_{2} & \alpha_{3} & \cdots & \alpha_{m} \\ \beta_{1} & \beta_{2} & \beta_{3} & \cdots & \beta_{m} \end{array} \right] $$

where \( |\alpha_{i}|^{2} + |\beta_{i}|^{2} = 1 \) for i = 1, 2, …, m, and m is the number of Q-bits representing an individual.

The algorithm bQIEA, as proposed in [67, 68], is given in figure 4.

Figure 4

Algorithm bQIEA [67, 68].

In Step-1, the generation number g is initialized to 1, and the α and β values in the quantum state Q(g) are initialized such that \( |\alpha|^{2} + |\beta|^{2} = 1 \). As a result, each Q-bit individual is a linear superposition of all possible states. Next, in Step-2, a binary solution P(g) is obtained by observing the quantum state Q(g); the resultant binary string of length m is formed by selecting either 0 or 1 for each bit based on the probabilities \( |\alpha|^{2} \) and \( |\beta|^{2} \). In Step-3, each binary solution in P(g) is evaluated using the problem-specific fitness function. In Step-4, the initial best solutions in P(g) are selected and stored in B(g), and the best solution of B(g) is stored in b. In Step-5, g is incremented by 1. In Step-6, binary solutions in P(g) are obtained by observing Q(g–1), as in Step-2. In Step-7, the solutions in P(g) are evaluated using the problem-specific fitness function, as in Step-3. Thereafter, in Step-8, the Q-bit individuals in Q(g) are updated by applying the rotation Q-gate as follows [67]:

$$ G^{g-1}(\theta) = \begin{bmatrix} \cos\theta^{g-1} & -\sin\theta^{g-1} \\ \sin\theta^{g-1} & \cos\theta^{g-1} \end{bmatrix} $$

where \( \theta^{g-1} \) is an adjustable Q-gate rotation angle.

The Q-bit [\( \alpha^{g} \), \( \beta^{g} \)] is updated as follows [67]:

$$ \begin{bmatrix} \alpha^{g} \\ \beta^{g} \end{bmatrix} = G^{g-1}(\theta) \begin{bmatrix} \alpha^{g-1} \\ \beta^{g-1} \end{bmatrix} $$

where \( \theta^{g - 1} = s\left( \alpha^{g - 1}, \beta^{g - 1} \right) \Delta\theta^{g - 1} \); \( s(\alpha^{g - 1}, \beta^{g - 1}) \) and \( \Delta\theta^{g - 1} \) are the sign and the magnitude of the rotation angle, respectively. These values are taken from the look-up table [68] given in figure 5, where x and y are bits of P(g) and b, respectively.

Figure 5

Look-up table [68].

In Step-9, the best solutions, from among B(g–1) and P(g), are selected and stored in B(g) and b is updated with the best solution from amongst solutions in B(g) and b. In Step-10, the migration condition is checked. If it is found to be satisfied, the best solution b is migrated to B(g) or some of the best solutions in B(g) are migrated to other solutions in B(g). Next, the termination condition is checked in Step-11. If the termination condition is not satisfied, then Step-5 to Step-11 are repeated until the termination condition is satisfied.
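The main loop of bQIEA outlined in Step-1 to Step-11 can be sketched as follows. The fitness function (a toy OneMax) and the fixed rotation angle are stand-in assumptions for illustration; bQIEA [67] uses a problem-specific fitness and takes the rotation angles from the look-up table of figure 5.

```python
import math
import random

M, POP, GENS = 8, 10, 50          # string length, population size, generations
DTHETA = 0.05 * math.pi           # illustrative fixed rotation magnitude

def observe(q):
    """Collapse a string of Q-bits to a binary solution (Step-2/Step-6)."""
    return [0 if random.random() < a * a else 1 for a, _ in q]

def fitness(x):
    """Problem-specific fitness; a toy OneMax stand-in here."""
    return sum(x)

random.seed(1)
# Step-1: every Q-bit starts at (1/sqrt 2, 1/sqrt 2), superposing all states.
Q = [[(1 / math.sqrt(2), 1 / math.sqrt(2))] * M for _ in range(POP)]
best = max((observe(q) for q in Q), key=fitness)       # Steps 2-4
for g in range(GENS):                                  # Steps 5-11
    for i, q in enumerate(Q):
        x = observe(q)                                 # Step-6
        if fitness(x) > fitness(best):                 # Steps 7 and 9
            best = x
        # Step-8: rotate each Q-bit towards the corresponding bit of b.
        updated = []
        for (a, b_amp), xi, bi in zip(q, x, best):
            theta = DTHETA if xi != bi else 0.0
            if bi == 0:
                theta = -theta    # push probability mass towards |0>
            updated.append((math.cos(theta) * a - math.sin(theta) * b_amp,
                            math.sin(theta) * a + math.cos(theta) * b_amp))
        Q[i] = updated
```

Over the generations, the rotations concentrate each Q-bit's probability mass on the bits of the best solution found so far, which is the exploration-to-exploitation drift that the Q-gate provides.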

The afore-mentioned algorithm bQIEA has been adapted to solve the view selection problem. Accordingly, QIEVSA, which selects the Top-K views from a multidimensional lattice using bQIEA, has been proposed. QIEVSA is discussed next.

2.3 QIEVSA

QIEVSA [94] selects the Top-K views from a multidimensional lattice using bQIEA. QIEVSA takes as input the d-dimensional lattice of views, the number of views to be selected K, and the pre-specified number of generations G for which it runs, and produces the Top-K views TKV as output. QIEVSA comprises the following steps.

  • Step-1: Initialization

Initialize the generation g = 0. Generate a random population QTKV(g) of N Top-K views with quantum chromosome length Kd, where each quantum gene is represented by d quantum bits (Q-bits) and d is the number of dimensions of the lattice.

$$ Q_{TKV} \left( g \right) = \, \left\{ {qtkv_{1}^{g} ,qtkv_{2}^{g} , \, \ldots ,qtkv_{N}^{g} } \right\} $$
$$ qtkv_{i}^{g} = \left( \begin{array}{cccc|cccc|c|cccc} \alpha v_{i11}^{g} & \alpha v_{i12}^{g} & \cdots & \alpha v_{i1d}^{g} & \alpha v_{i21}^{g} & \alpha v_{i22}^{g} & \cdots & \alpha v_{i2d}^{g} & \cdots & \alpha v_{iK1}^{g} & \alpha v_{iK2}^{g} & \cdots & \alpha v_{iKd}^{g} \\ \beta v_{i11}^{g} & \beta v_{i12}^{g} & \cdots & \beta v_{i1d}^{g} & \beta v_{i21}^{g} & \beta v_{i22}^{g} & \cdots & \beta v_{i2d}^{g} & \cdots & \beta v_{iK1}^{g} & \beta v_{iK2}^{g} & \cdots & \beta v_{iKd}^{g} \end{array} \right) $$

where

$$ \left| {\alpha v_{ijl}^{g} } \right|^{2} + \left| {\beta v_{ijl}^{g} } \right|^{2} = 1\quad \forall i = \, 1,2, \ldots ,N;j = 1,2, \ldots ,K;\;l = \, 1,2, \ldots ,d. $$

Initially, at g = 0, all possible states are superimposed with the same probability, i.e.

$$ \alpha v_{ijl}^{0} = \frac{1}{\sqrt 2 }\;{\text{and}}\;\beta v_{ijl}^{0} = \frac{1}{\sqrt 2 }\quad \forall i = \, 1,2, \ldots ,N;\,j = \, 1, \, 2, \, \ldots ,K;\,l = \, 1,2, \ldots ,d. $$

In this step of initialization of QTKV(g), each Q-bit (\( \alpha v_{ijl}^{g} \), \( \beta v_{ijl}^{g} \)) is initialized to (1/√2, 1/√2). Each Q-bit Top-K view \( qtkv_{i}^{g} \) thus represents a linear superposition of all possible solutions with the same probability [67].
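Steps 1 and 2 can be sketched as below. The population parameters are arbitrary illustrative choices, and the decoding of each d-bit gene to a decimal view index anticipates the decimal representation of Step-3.

```python
import math
import random

N, K, d = 4, 2, 3     # population size, views per solution, lattice dimensions

def init_population():
    """Step-1: every Q-bit is initialized to (1/sqrt 2, 1/sqrt 2), so each
    quantum gene of d Q-bits superposes all 2**d views with equal probability."""
    q = (1 / math.sqrt(2), 1 / math.sqrt(2))
    return [[[q] * d for _ in range(K)] for _ in range(N)]

def observe(chromosome):
    """Step-2: collapse each gene of d Q-bits to d bits, then decode the
    bits to a decimal view index."""
    top_k = []
    for gene in chromosome:
        bits = [0 if random.random() < a * a else 1 for a, _ in gene]
        top_k.append(int("".join(map(str, bits)), 2))
    return top_k

random.seed(7)
Q_tkv = init_population()               # QTKV(0)
P_tkv = [observe(c) for c in Q_tkv]     # PTKV(0): N candidate Top-K view sets
```

Each observed chromosome is a candidate set of K view indices drawn from the 2^d views of the lattice, which Step-3 then scores with the TVEC fitness function.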

  • Step-2: Make PTKV(g) by observing QTKV(g)

Observe each Q-bit of QTKV(g) to arrive at PTKV(g) where

$$ P_{TKV} \left( g \right) = \, \left\{ {ptkv_{1}^{g} ,ptkv_{2}^{g} , \, \ldots ,ptkv_{N}^{g} } \right\} $$
$$ ptkv_{i}^{g} = \left( \underbrace{pv_{i11}^{g}\,pv_{i12}^{g} \cdots pv_{i1d}^{g}}_{pv_{i1}^{g}} \;\Big|\; \underbrace{pv_{i21}^{g}\,pv_{i22}^{g} \cdots pv_{i2d}^{g}}_{pv_{i2}^{g}} \;\Big|\; \cdots \;\Big|\; \underbrace{pv_{iK1}^{g}\,pv_{iK2}^{g} \cdots pv_{iKd}^{g}}_{pv_{iK}^{g}} \right) \quad \forall i = 1,2,\ldots,N. $$
$$ pv_{ijl}^{g} = \begin{cases} 0, & \text{random}[0,1) < \left| \alpha v_{ijl}^{g} \right|^{2} \\ 1, & \text{otherwise.} \end{cases} $$

Each binary solution \( ptkv_{i}^{g} \) is a binary string of length Kd, formed by selecting either 0 or 1 for each bit, with 0 selected with probability \( \left| \alpha v_{ijl}^{g} \right|^{2} \).

  • Step-3: Evaluate PTKV(g)

Each binary solution in PTKV(g) is evaluated using the fitness function TVEC. Each Top-K view \( ptkv_{i}^{g} \) in PTKV(g) is transformed into its equivalent decimal representation \( dptkv_{i}^{g} \) to arrive at DPTKV(g), where

$$ \begin{aligned} DP_{TKV}(g) &= \left\{ dptkv_{1}^{g}, dptkv_{2}^{g}, \ldots, dptkv_{N}^{g} \right\} \quad \text{and} \\ dptkv_{i}^{g} &= \left\{ dpv_{i1}^{g}, dpv_{i2}^{g}, \ldots, dpv_{iK}^{g} \right\}, \quad i = 1,2,\ldots,N. \end{aligned} $$

Compute the TVEC of each Top-K view \( dptkv_{i}^{g} \) in DPTKV(g) as follows [64,65,66, 95,96,97]:

$$ TVEC\left( dptkv_{i}^{g} \right) = \sum\limits_{j = 1 \wedge SM_{V_{j}} = 1}^{n} Size\left( V_{j} \right) + \sum\limits_{j = 1 \wedge SM_{V_{j}} = 0}^{n} SizeSMA\left( V_{j} \right) $$

where n is the total number of views in the lattice, \( SM_{V_{j}} \) is the materialization status of view \( V_{j} \) (\( SM_{V_{j}} = 1 \) if materialized, \( SM_{V_{j}} = 0 \) if not materialized), \( Size(V_{j}) \) is the size of view \( V_{j} \) and \( SizeSMA(V_{j}) \) is the size of the smallest materialized ancestor of view \( V_{j} \).
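A minimal sketch of the TVEC computation follows, assuming hypothetical view sizes and ancestor wiring for an 8-view lattice (the indexing and sizes are illustrative, not those of figure 3).

```python
# Illustrative sizes for the 8 views of a three-dimensional lattice,
# indexed 0..7; view 0 is taken as the root fact table.
size = [100, 60, 75, 50, 30, 20, 10, 1]
# ancestors[v] lists the views from which v can be computed
# (hypothetical wiring, for illustration only).
ancestors = {0: [0], 1: [0, 1], 2: [0, 2], 3: [0, 3],
             4: [0, 1, 2, 4], 5: [0, 1, 3, 5], 6: [0, 2, 3, 6],
             7: [0, 1, 2, 3, 4, 5, 6, 7]}

def tvec(materialized):
    """Total cost of evaluating all views: a materialized view costs its
    own size; any other view costs the size of its smallest materialized
    ancestor (the root is always materialized)."""
    mat = set(materialized) | {0}
    total = 0
    for v in range(len(size)):
        if v in mat:
            total += size[v]
        else:
            total += min(size[a] for a in ancestors[v] if a in mat)
    return total

# With nothing materialized, every view is evaluated from the root:
assert tvec(set()) == 8 * size[0]
# Materializing view 3 reduces the total evaluation cost:
assert tvec({3}) < tvec(set())
```

A Top-K solution with a lower TVEC is fitter, so QIEVSA's evolution minimizes this quantity.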

  • Step-4: Create BTKV(g)/DTKV(g) and BTKV/DTKV

In this step, the initial best Top-K views are selected, from among the binary Top-K views in PTKV(g), and stored into BTKV. Also, the best Top-K views among the equivalent decimal representation DPTKV(g) of PTKV(g) are selected and stored into DTKV.

Initially, at g = 0, store all N Top-K views in PTKV(g) and in corresponding DPTKV(g) into BTKV(g) and DTKV(g), respectively, where

$$ B_{TKV} \left( g \right) = \left\{ {btkv_{1}^{g} ,btkv_{2}^{g} , \ldots ,btkv_{N}^{g} } \right\} $$
$$ D_{TKV} \left( g \right) = \left\{ {dtkv_{1}^{g} ,dtkv_{2}^{g} , \ldots ,dtkv_{N}^{g} } \right\} $$
$$ \begin{aligned} & btkv_{i}^{g} = \left\{ bv_{i1}^{g}, bv_{i2}^{g}, \ldots, bv_{iK}^{g} \right\}, \quad dtkv_{i}^{g} = \left\{ dv_{i1}^{g}, dv_{i2}^{g}, \ldots, dv_{iK}^{g} \right\} \\ & \text{with} \quad btkv_{i}^{g} = ptkv_{i}^{g}, \; dtkv_{i}^{g} = dptkv_{i}^{g} \quad \forall i = 1,2,\ldots,N. \end{aligned} $$

The best Top-K view in BTKV(g) and in the corresponding DTKV(g) are stored into BTKV and DTKV, respectively. Increment g by 1.

  • Step-5: Obtain PTKV(g) by observing QTKV(g–1)

Binary solutions in PTKV(g) are formed by observing the states of QTKV(g-1), as carried out in Step-2.

  • Step-6: Evaluate PTKV(g)

Each binary solution in PTKV(g) is evaluated using the fitness function TVEC, as in Step-3.

  • Step-7: Update QTKV(g–1) using Q-gates

Q-bit individuals in QTKV(g) are updated by applying rotation Q-gate operator of QIEA [67] with the updated Q-bit satisfying the normalization condition \( \left| {\alpha v_{ijl}^{g} } \right|^{2} + \left| {\beta v_{ijl}^{g} } \right|^{2} = 1 \), where \( \alpha v_{ijl}^{g} \) and \( \beta v_{ijl}^{g} \) are the values of the updated Q-bits.

QTKV(g) is updated using the quantum rotation gate as the Q-gate. The lth Q-bit of the jth view of the ith Top-K view \( qtkv_{i}^{g} \), i = 1, 2, …, N; j = 1, 2, …, K; l = 1, 2, …, d, is updated by applying the Q-gate \( G_{ijl}^{g-1}(\theta) \):

$$ G_{ijl}^{g-1}(\theta) = \begin{bmatrix} \cos\theta_{ijl}^{g-1} & -\sin\theta_{ijl}^{g-1} \\ \sin\theta_{ijl}^{g-1} & \cos\theta_{ijl}^{g-1} \end{bmatrix} $$

where \( \theta_{ijl}^{g-1} \) is an adjustable Q-gate rotation angle.

The Q-bit [\( \alpha v_{ijl}^{g} \), \( \beta v_{ijl}^{g} \)] is updated as follows:

$$ \left[ {\begin{array}{*{20}c} {\alpha v_{ijl}^{g} } \\ {\beta v_{ijl}^{g} } \\ \end{array} } \right] = G_{ijl}^{g - 1} (\theta )\left[ {\begin{array}{*{20}c} {\alpha v_{ijl}^{g - 1} } \\ {\beta v_{ijl}^{g - 1} } \\ \end{array} } \right] $$

where \( \theta_{ijl}^{g - 1} \; = \;s\left( {\alpha v_{ijl}^{g - 1} ,\beta v_{ijl}^{g - 1} } \right)\;\Delta \theta_{ijl}^{g - 1} \) and \( s(\alpha v_{ijl}^{g - 1} ,\beta v_{ijl}^{g - 1} ) \) and \( \Delta \theta_{ijl}^{g - 1} \) are the sign and the magnitude of the rotation gate, respectively. The values of \( s(\alpha v_{ijl}^{g - 1} ,\beta v_{ijl}^{g - 1} ) \) and \( \Delta \theta_{ijl}^{g - 1} \) are taken from the look-up table [68] given in figure 6; x and b are bits of \( ptkv_{i}^{g} \) and BTKV, respectively.

Figure 6: Look-up table [68].

  • Step-8: Update BTKV(g)/DTKV(g) and BTKV/DTKV

In this step, BTKV(g) and DTKV(g) are generated and BTKV and DTKV are updated.

The best N Top-K views among PTKV(g) and BTKV(g–1) are stored into BTKV(g).

The best N Top-K views among DPTKV(g) and DTKV(g–1) are stored into DTKV(g).

The best Top-K views in BTKV(g) and in the corresponding DTKV(g) are stored into BTKV and DTKV, respectively.

Increment g by 1.

  • Step-9: Migration condition

In this step, the global migration condition is checked. If it is satisfied, the best solution BTKV is migrated to BTKV(g) globally. Otherwise, the best Top-K views in each local group in BTKV(g) are migrated to the others in the same local group. Migration induces variation in the probabilities of the Q-bit individuals. The global migration and the local migration are carried out after a pre-specified number of generations GGM and GLM, respectively.

  • IF global migration THEN

    • migrate BTKV to BTKV(g) globally

      • replace all the Top-K views in BTKV(g) by BTKV

  • ELSE

    • migrate BTKV to BTKV(g) locally

      • replace the Top-K views in each of the pre-specified local groups in BTKV(g) by the best Top-K views in the respective group.
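The migration branch above can be sketched as follows, assuming a lower TVEC is better. The list-of-(solution, tvec) encoding and the function names are illustrative only.

```python
import copy

def migrate(b_g, b_best, global_migration, group_size=2):
    """Migration step of the assumed QIEVSA scheme.

    Global migration: every individual in b_g is replaced by the global best.
    Local migration : within each local group, all members are replaced by the
    best individual of that group (lower TVEC = better).

    b_g: list of (solution, tvec) pairs; b_best: (solution, tvec).
    """
    if global_migration:
        return [copy.deepcopy(b_best) for _ in b_g]
    out = []
    for start in range(0, len(b_g), group_size):
        group = b_g[start:start + group_size]
        best = min(group, key=lambda st: st[1])       # best of the local group
        out.extend(copy.deepcopy(best) for _ in group)
    return out
```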

  • Step-10: Termination condition

QIEVSA runs for a pre-specified number of generations G, whereafter it produces the Top-K views DTKV as output.

  • IF (g < G) THEN

    • GO TO Step-5

  • ELSE

    • Return DTKV as the Top-K views.
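The generational loop of Steps 5–10 can be summarized as a control-flow sketch. The operator functions are injected as parameters (all names are illustrative), Step-8 is simplified to retaining a single best solution rather than the best-N sets BTKV(g)/DTKV(g), and the migrated population is not fed back into the Q-bit update; only the loop structure and the GLM/GGM migration schedule are shown.

```python
def qievsa_loop(init_pop, observe, evaluate, update, migrate, G, G_LM, G_GM):
    """Sketch of the QIEVSA generational loop (Steps 5-10), simplified."""
    q = init_pop
    best = None
    for g in range(1, G + 1):
        p = observe(q)                                  # Step-5: collapse Q-bits
        scored = [(s, evaluate(s)) for s in p]          # Step-6: fitness (TVEC)
        q = update(q, scored, best)                     # Step-7: rotation Q-gates
        gen_best = min(scored, key=lambda st: st[1])    # Step-8: track the best
        if best is None or gen_best[1] < best[1]:
            best = gen_best
        if g % G_GM == 0:                               # Step-9: global migration
            scored = migrate(scored, best, True)
        elif g % G_LM == 0:                             # Step-9: local migration
            scored = migrate(scored, best, False)
    return best                                         # Step-10: after G generations
```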

Next, an example illustrating the use of QIEVSA to select Top-K views from a multidimensional lattice is discussed.

3 An example

Consider the selection of Top-5 views from a three-dimensional lattice shown in figure 3 using QIEVSA.

  • Step-1: Initialization

Generation, g = 0; quantum population size, N = 10; number of Q-bits in a quantum gene, d = 3. Since Top-5 views need to be selected, the quantum chromosome length K = 5. The quantum population of Top-K views at g = 0, i.e., QT5V(0) = \( \left\{ {qt5v_{1}^{0} ,qt5v_{2}^{0} , \ldots ,qt5v_{10}^{0} } \right\} \), is presented in figure 7. Initially, \( \alpha v_{ikl}^{0} \) and \( \beta v_{ikl}^{0} \) are taken as 1/√2 (0.7071).

Figure 7: QT5V(0).

  • Step-2: Make PT5V(0) by observing QT5V(0)

The quantum observed state PT5V(0) = \( \left\{ {pt5v_{1}^{0} ,pt5v_{2}^{0} , \ldots ,pt5v_{10}^{0} } \right\} \) in binary form, i.e., the observation of the Q-bit representation of the initial population QT5V(0), is generated. The observation of the first Top-5 views \( qt5v_{1}^{0} \) in QT5V(0) to generate \( pt5v_{1}^{0} \) is shown in figure 8.

Figure 8: \( pt5v_{1}^{0} \) of \( qt5v_{1}^{0} \) in QT5V(0).

In a similar manner, the quantum observed states for the other Top-5 views in QT5V(0) are computed to generate PT5V(0), which is given in figure 9.

Figure 9: PT5V(0).

  • Step-3: Evaluate PT5V(0)

Each Top-5 views \( pt5v_{i}^{0} \) (i = 1, 2, …, 10) in PT5V(0) is transformed into its equivalent decimal representation \( dpt5v_{i}^{0} \) to generate DPT5V(0) = \( \left\{ {dpt5v_{1}^{0} ,dpt5v_{2}^{0} , \ldots ,dpt5v_{10}^{0} } \right\} \), where \( dpt5v_{i}^{0} = \left\{ {dpv_{i1}^{0} ,dpv_{i2}^{0} , \ldots ,dpv_{i5}^{0} } \right\} \). DPT5V(0) is shown in figure 10. TVEC of each \( dpt5v_{i}^{0} \) in DPT5V(0) is computed. The TVEC computation of \( dpt5v_{1}^{0} \), comprising views CT, CP, P, T and None, is shown in figure 11. In a similar manner, TVEC of the other \( dpt5v_{i}^{0} \) in DPT5V(0) are computed and are shown in figure 12.
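The binary-to-decimal transformation is a straightforward positional conversion of each view's d = 3 bits. A minimal sketch (the function name is illustrative; the mapping of decimal ids to lattice views such as CT or CP comes from the figures and is not assumed here):

```python
def to_decimal(binary_topk):
    """Convert a binary Top-K individual (K views, each a list of d bits,
    most significant bit first) into a list of decimal view identifiers."""
    return [int(''.join(map(str, bits)), 2) for bits in binary_topk]

# Three views encoded with d = 3 bits each: 001 -> 1, 110 -> 6, 111 -> 7.
ids = to_decimal([[0, 0, 1], [1, 1, 0], [1, 1, 1]])
```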

Figure 10: DPT5V(0).

Figure 11: TVEC computation when views CT, CP, P, T and None are selected for materialization.

Figure 12: TVEC of \( dpt5v_{i}^{0} \) in DPT5V(0).

  • Step-4: Create BT5V(0) and BT5V

PT5V(0) and DPT5V(0) are stored into BT5V(0) and DT5V(0), respectively. BT5V(0) and DT5V(0) are shown in figures 13 and 14, respectively.

Figure 13: BT5V(0).

Figure 14: DT5V(0).

The best Top-5 views in BT5V(0) and DT5V(0) are stored into BT5V and DT5V, respectively. They are shown in figure 15. Next, g is incremented by 1, i.e., g = 0 + 1 = 1.

Figure 15: BT5V and DT5V.

  • Step-5: Make PT5V(1) by observing QT5V(0)

The quantum observed state PT5V(1) = \( \left\{ {pt5v_{1}^{1} ,pt5v_{2}^{1} , \ldots ,pt5v_{10}^{1} } \right\} \) in binary form is generated. The observation of the first Top-5 views \( qt5v_{1}^{0} \) in QT5V(0) to generate \( pt5v_{1}^{1} \) is shown in figure 16.

Figure 16: \( pt5v_{1}^{1} \) of \( qt5v_{1}^{0} \) in QT5V(0).

In a similar manner, the quantum observed states for the other Top-5 views in QT5V(0) are computed to generate PT5V(1), which is given in figure 17.

Figure 17: PT5V(1).

  • Step-6: Evaluate PT5V(1)

Each Top-5 views \( pt5v_{i}^{1} \) (i = 1, 2, …, 10) in PT5V(1) is transformed into its equivalent decimal representation \( dpt5v_{i}^{1} \) to generate DPT5V(1) = \( \left\{ {dpt5v_{1}^{1} ,dpt5v_{2}^{1} , \ldots ,dpt5v_{10}^{1} } \right\} \), where \( dpt5v_{i}^{1} = \left\{ {dpv_{i1}^{1} ,dpv_{i2}^{1} , \ldots ,dpv_{i5}^{1} } \right\} \). DPT5V(1) is shown in figure 18. TVEC of each \( dpt5v_{i}^{1} \) in DPT5V(1) is computed; the values are shown in figure 19.

Figure 18: DPT5V(1).

Figure 19: TVEC of \( dpt5v_{i}^{1} \) in DPT5V(1).

  • Step-7: Update QT5V(0) using Q-gates

The Q-bits (\( \alpha v_{ikl}^{0} \) and \( \beta v_{ikl}^{0} \)) of each Top-5 views in QT5V(0) are updated using the corresponding Top-5 views in PT5V(1) and the best Top-5 views BT5V. The update of the Q-bits (\( \alpha v_{1kl}^{0} \) and \( \beta v_{1kl}^{0} \)) of the first individual \( qt5v_{1}^{0} \) of QT5V(0) using \( pt5v_{1}^{1} \) in PT5V(1) and BT5V, shown in figure 20, is given below:

Figure 20: \( pt5v_{1}^{1} \) and BT5V.

Computation of \( \varvec{\theta}_{\bf{111}}^{\bf{0}} \) for the first bit

Bit x = 1, bit b = 1, TVEC(\( pt5v_{1}^{1} \)) = 187 and TVEC(BT5V) = 181. Since TVEC(\( pt5v_{1}^{1} \)) ≤ TVEC(BT5V) is false, using the look-up table, \( s(\alpha v_{111}^{0} ,\beta v_{111}^{0} ) \) = 1 and \( \Delta \theta_{111}^{0} \) = 0.

$$ \theta_{111}^{0} = s(\alpha v_{111}^{0} ,\beta v_{111}^{0} )\Delta \theta_{111}^{0} = \, 1 \, \times \, 0 \, = \, 0. $$

Computation of \(\varvec{\theta}_{\bf{112}}^{\bf{0}} \) for the second bit

Bit x = 1, bit b = 1, TVEC(\( pt5v_{1}^{1} \)) = 187 and TVEC(BT5V) = 181. Since TVEC(\( pt5v_{1}^{1} \)) ≤ TVEC(BT5V) is false, using the look-up table, \( s(\alpha v_{112}^{0} ,\beta v_{112}^{0} ) \) = 1 and \( \Delta \theta_{112}^{0} \) = 0.

$$ \theta_{112}^{0} = s(\alpha v_{112}^{0} ,\beta v_{112}^{0} )\Delta \theta_{112}^{0} = \, 1 \, \times \, 0 \, = \, 0. $$

Computation of \( \varvec{\theta}_{\bf{113}}^{\bf{0}} \) for the third bit

Bit x = 0, bit b = 1, TVEC(\( pt5v_{1}^{1} \)) = 187 and TVEC(BT5V) = 181. Since TVEC(\( pt5v_{1}^{1} \)) ≤ TVEC(BT5V) is false, using the look-up table, \( s(\alpha v_{113}^{0} ,\beta v_{113}^{0} ) \) = 1 and \( \Delta \theta_{113}^{0} \) = 0.0314.

$$ \theta_{113}^{0} = s(\alpha v_{113}^{0} ,\beta v_{113}^{0} )\Delta \theta_{113}^{0} = \, 1 \, \times \, 0.0314 \, = \, 0.0314. $$

In a similar manner, \( \theta_{1kl}^{0} \) for the other quantum bits of the first individual \( qt5v_{1}^{0} \) of QT5V(0) are computed and are shown in figure 21.

Figure 21: \( \theta_{1kl}^{0} \) of quantum bits of \( qt5v_{1}^{0} \) in QT5V(0).

Updation of \( \alpha v_{111}^{0} \) and \( \beta v_{111}^{0} \) of the first Q-bit

$$ \left[ {\begin{array}{*{20}c} {\alpha v_{111}^{1} } \\ {\beta v_{111}^{1} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\cos 0} & { - \sin 0} \\ {\sin 0} & {\cos 0} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right]. $$

Updation of \( \alpha v_{112}^{0} \) and \( \beta v_{112}^{0} \) of the second Q-bit

$$ \left[ {\begin{array}{*{20}c} {\alpha v_{112}^{1} } \\ {\beta v_{112}^{1} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\cos 0} & { - \sin 0} \\ {\sin 0} & {\cos 0} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right]. $$

Updation of \( \alpha v_{113}^{0} \) and \( \beta v_{113}^{0} \) of the third Q-bit

$$ \left[ {\begin{array}{*{20}c} {\alpha v_{113}^{1} } \\ {\beta v_{113}^{1} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\cos 0.0314} & { - \sin 0.0314} \\ {\sin 0.0314} & {\cos 0.0314} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {0.9995} & { - 0.0314} \\ {0.0314} & {0.9995} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.70710677} \\ {0.70710677} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {0.6846} \\ {0.7290} \\ \end{array} } \right]. $$

In a similar manner, the Q-bits of each \( qt5v_{i}^{0} \) of QT5V(0) are updated to generate QT5V(1), which is shown in figure 22.

Figure 22: QT5V(1).

  • Step-8: Update BT5V(1)/DT5V(1) and BT5V/DT5V

The best 10 Top-5 views, from amongst PT5V(1) and BT5V(0), are stored into BT5V(1). The best 10 Top-5 views, from amongst DPT5V(1) and DT5V(0), are stored into DT5V(1). BT5V(1) and DT5V(1) are shown in figures 23 and 24, respectively. The best Top-5 views in BT5V(1), and in the corresponding DT5V(1), are stored into BT5V and DT5V, respectively. The updated BT5V and DT5V are shown in figure 25. This is followed by incrementing g by 1.

Figure 23: BT5V(1).

Figure 24: DT5V(1).

Figure 25: BT5V and DT5V.

  • Step-9: Perform local/global migration

Assuming that local migration is performed at this step, the resultant DT5V(1), after performing local migration with a local group size of 2, is given in figure 26.

Figure 26: DT5V(1) after local migration.

Step-5 to Step-9 are repeated for a pre-specified number of generations G, whereafter the Top-5 views are produced as output.

Next, an experiment-based comparison of QIEVSA with the evolutionary view selection algorithms DEVSA, MVSA, GVSA and HRUA is discussed.

4 Experimental results

Algorithms QIEVSA, DEVSA, GVSA, MVSA and HRUA were implemented using JDK 1.7 in a Windows-7 environment on an Intel-based 2.13 GHz PC having 4 GB RAM. First, the appropriate values of GLM and GGM, the numbers of generations after which local and global migration are performed, for which QIEVSA selects views having a lower TVEC, were determined. The mean TVEC of the Top-10 views over four simulation runs after each generation, up to 500 generations, for various combinations of GLM and GGM was used for plotting graphs [98]. These graphs, showing the TVEC of the Top-10 views for (GLM, GGM) = {(1, 50), (1, 100), (2, 50), (2, 100), (4, 50), (4, 100)} for dimensions 5, 6, 7, 8, 9 and 10, are shown in figures 27, 28, 29, 30, 31 and 32, respectively. From each of these graphs, it can be inferred that the Top-10 views selected by QIEVSA have a lower TVEC for GLM = 2 and GGM = 50. Further, minTVEC (minimum TVEC), maxTVEC (maximum TVEC), meanTVEC (mean TVEC) and stdTVEC (standard deviation of TVEC) over four simulation runs for selecting Top-10 views after 500 generations were computed for dimensions 5–10. They are given in a table in figure 33. From the table also, it can be observed that QIEVSA performs best for GLM = 2 and GGM = 50. These values of GLM and GGM were used in the further comparisons of QIEVSA with DEVSA, GVSA, MVSA and HRUA.

Figure 27: QIEVSA-TVEC vs. generations – Top-10 views – 5 dimensions.

Figure 28: QIEVSA-TVEC vs. generations – Top-10 views – 6 dimensions.

Figure 29: QIEVSA-TVEC vs. generations – Top-10 views – 7 dimensions.

Figure 30: QIEVSA-TVEC vs. generations – Top-10 views – 8 dimensions.

Figure 31: QIEVSA-TVEC vs. generations – Top-10 views – 9 dimensions.

Figure 32: QIEVSA-TVEC vs. generations – Top-10 views – 10 dimensions.

Figure 33: QIEVSA – minTVEC, maxTVEC, meanTVEC, stdTVEC – Top-10 views.

Next, the TVECs of the Top-K (K = 5, 6, 7, 8, 9, 10) views selected using HRUA, GVSA, MVSA, DEVSA and QIEVSA for dimensions 5, 6, 7, 8, 9 and 10 are plotted in figures 34, 35, 36, 37, 38 and 39, respectively. The comparisons were based on the parameter values GVSA (crossover probability Pc = 0.6, mutation probability Pm = 0.05) observed in [64], MVSA (crossover probability Pc = 0.8, mutation probability Pm = 0.05) observed in [65], DEVSA (crossover rate CR = 0.6, scaling factor F = 0.1) observed in [66] and QIEVSA (GLM = 2, GGM = 50) observed from figure 33. The mean of the TVEC values for each of the five algorithms over four simulation runs was taken for plotting the graphs. It can be inferred from each of these graphs that the Top-K views selected using QIEVSA have a lower TVEC than those selected using DEVSA, MVSA, GVSA and HRUA. Further, it can be observed from figure 40 that the difference in TVEC increases with the number of dimensions and the value of K. Furthermore, DEVSA performs next best, followed by MVSA and GVSA. Views selected using HRUA have the highest TVEC.

Figure 34: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA-TVEC vs. Top-K views – 5 dimensions.

Figure 35: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA-TVEC vs. Top-K views – 6 dimensions.

Figure 36: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA-TVEC vs. Top-K views – 7 dimensions.

Figure 37: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA-TVEC vs. Top-K views – 8 dimensions.

Figure 38: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA-TVEC vs. Top-K views – 9 dimensions.

Figure 39: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA-TVEC vs. Top-K views – 10 dimensions.

Figure 40: QIEVSA vs. DEVSA vs. MVSA vs. GVSA vs. HRUA (TVEC of Top-K views for 5–10 dimensions).

5 Conclusions

In this paper, a QIEA has been suitably adapted and discretized to address the view selection problem in a multidimensional lattice framework. Accordingly, the view selection algorithm QIEVSA, which selects the Top-K views from a multidimensional lattice, has been proposed. The Q-bits, Q-gates and the observation process of QIEA have been suitably adapted and discretized in QIEVSA to generate a population of Top-K views for each subsequent generation. QIEVSA first randomly selects a population of Q-bit Top-K views. Binary sets of Top-K views are generated by observing the quantum states of the Top-K views in the population. The TVEC of these views is then computed, whereafter the best sets of Top-K views are updated. Thereafter, the Q-bit Top-K views are updated by applying the rotation Q-gate operator. QIEVSA terminates after running for a pre-specified number of generations, whereupon the Top-K views having minimum TVEC are produced as output. Further, experimental comparison of QIEVSA with other evolutionary view selection algorithms based on the multidimensional lattice framework, namely DEVSA, MVSA, GVSA and HRUA, shows that QIEVSA is able to select Top-K views at a comparatively lower TVEC for the observed values of GLM and GGM. This performance improves for higher dimensions. Further, QIEVSA is able to select views for higher-dimensional data sets, as its population of Top-K views is randomly generated and their TVEC is computed from the lattice. This is unlike HRUA, which needs to compute the Top-K views from an exponentially large search space of possible views; for higher-dimensional data sets, selecting views for materialization using HRUA becomes almost infeasible. Thus, it can be reasonably inferred that QIEVSA is able to select reasonably good-quality views for higher-dimensional data sets, capable of reducing the response time of analytical queries and thereby leading to more efficient decision making.
As future work, QIEVSA would be compared to existing swarm-based view selection algorithms [99,100,101,102,103,104,105,106,107].