1 Introduction

A visual cryptographic scheme (VCS) for a set \({\mathcal {P}}=\{1,2, \ldots ,n\}\) of \(n\) participants is a method that encodes a secret image \(SI\) into \(n\) shares, distributed among the \(n\) participants in the form of transparencies on which the shares are printed. The shares have the property that only “qualified” subsets of participants can visually recover the secret image, by carefully stacking their transparencies. The novelty of visual cryptography lies in the fact that the encrypted message is decrypted directly by the human visual system; no complex computation or computer is required. This is precisely why VCS has attracted a great deal of attention since its inception, and as a result extensive research has been done in the area. Researchers have explored many different aspects of VCS, and definitions have been modified or adapted to the needs of different applications.

The most general type of VCS deals with the situation where the “qualified” subsets are arbitrary handpicked subsets of the set of participants \({\mathcal {P}}\); this type of scheme is known as VCS for general access structures. One particular case, the \((k,n)\)-threshold VCS, covers the scenario where the qualified sets are exactly those subsets \(X \subseteq {\mathcal {P}}\) whose cardinality is at least \(k\). In this case, any qualified subset of \(k\) or more participants can visually recover the secret image, whereas forbidden sets of \(k-1\) or fewer participants have no information about the secret image.

Originally developed by Naor and Shamir [20], the concept was extended in [1, 4, 7, 8] to general access structures. In 1996 Droste [17] gave an elegant algorithm, based on a linear program, to construct the basis matrices of any \((k,n)\)-threshold VCS. In search of optimal contrast, Blundo et al. [12] also resorted to the linear programming problem (LPP); they were the first to define a canonical form for a \((k,n)\)-threshold VCS, and they analyzed the contrast of such schemes.

The idea of optimizing the pixel expansion of a VCS with the help of integer linear programming (ILP) was put forward by Blundo and De Santis [9]; however, they did not implement it to obtain computational results. In 2011 Shyu and Chen [23] modeled the minimization of the pixel expansion of a \((k,n)\)-threshold VCS as an ILP to obtain the optimal solution.

Adhikari et al. [4] proposed a technique for constructing any \((k,n)\)-threshold VCS using linear algebra. The technique remains practical even when \(k\) is large. However, the question of the optimality of the pixel expansion was left open.

For other interesting and further studies in this area one may refer to [2, 3, 5, 10, 11, 13, 15, 16, 18, 21, 22, 24, 25].

Recently, a new type of VCS, called a \((k,n)^{*}\)-VCS, has been introduced [6]. It addresses the scenario where one participant is “essential”. Loosely speaking, a black and white secret image is shared among \(n\) participants in such a way that the following two conditions are satisfied when reconstructing the secret image from the shares:

  • One participant is essential in the sense that his presence is necessary to retrieve the secret image. In other words the secret image cannot be retrieved in his absence.

  • It is required to have the shares of at least \(k\) participants including the essential participant to recover the secret image.

Guo et al. [14] extended the \((k,n)^{*}\)-VCS of [6] by considering a \((k,n)\)-VCS with \(t\) essential participants, which we denote by \(t\)-\((k,n)^*\)-VCS. Note that in [14] the authors write \((k,n,t)\)-VCS; to keep parity with the original paper [6], we adopt the notation \(t\)-\((k,n)^*\)-VCS for \(0 \le t \le k \le n\). Thus, for \(0 \le t \le k \le n\) and \({\mathcal {P}} = \{1,2,3,\ldots ,n\}\), the collection of all minimal qualified sets of the \(t\)-\((k,n)^*\)-VCS, denoted by \(\Gamma _{0}\), is given by \(\Gamma _{0}=\{S\subseteq {\mathcal {P}}:1,2,\ldots ,t\in S\wedge |S|=k\}\).

Note that the case \(t=0\) is the original scenario of a \((k,n)\)-VCS where no participant is essential, the case \(t=1\) is the usual \((k,n)^{*}\)-VCS, while \(t=n\) leads to the \((n,n)\)-VCS.

The scheme proposed in [14] for the \(t\)-\((k,n)^*\)-VCS generalizes the \(1\)-\((k,n)^*\)-VCS proposed in [6]. However, both schemes are built on top of a \((k,n)\)-VCS. In this paper we put forward a direct and efficient construction, based on a linear algebraic technique, of a \(t\)-\((k,n)^*\)-VCS for monochrome images. The scheme is efficient in the sense that it only requires solving systems of linear equations to construct the initial basis matrices. To make the scheme even more efficient, we apply the technique of deleting common columns from the initial basis matrices to obtain reduced basis matrices. However, as mentioned in [1], finding the exact number of common columns in the initial basis matrices is a challenging problem, and in this paper we deal with this open issue. We first provide a construction and analysis of the \(t\)-\((k,n)^*\)-VCS. We then completely characterize the case of the \(t\)-\((n-1,n)^*\)-VCS, \(0 \le t \le n-1\), by finding a closed form for the exact number of common columns in the initial basis matrices; deleting these common columns yields the exact pixel expansion and relative contrast of this simple and efficient scheme. As a particular case of the \(t\)-\((n-1,n)^*\)-VCS, we derive a closed form for the reduced pixel expansion of the \((n-1,n)\)-VCS. We show that for odd \(n \ge 3\), our scheme has exactly half the pixel expansion of, yet achieves the same relative contrast as, the canonical scheme based on the LPP posed in [12]; for even \(n\), our scheme has the same relative contrast and pixel expansion as the scheme in [12]. Furthermore, our closed form for the reduced pixel expansion, as the numerical evidence in [23] shows, indeed matches the optimal pixel expansion of the \((n-1,n)\)-VCS.
We further deal with the \((n-2,n)\)-VCS and resolve an open issue posed in [1] by providing an efficient algorithm for grouping the systems of linear equations, thereby showing that our proposed algorithm performs better, in terms of pixel expansion, than the existing scheme based on the linear algebraic technique. Finally, numerical evidence shows that our scheme provides an almost optimal pixel expansion for the \((n-2,n)\)-VCS.

1.1 Organization of the paper

The rest of the paper is organized as follows. In Sect. 2 we discuss a few essential terms and concepts. In Sect. 3 we describe a very efficient construction, based on linear algebraic tools, of a \(t\)-\((k,n)^{*}\)-VCS in which \(t\) participants are essential. In Sect. 4 we deal with the particular case where the threshold number of participants is \(n-1\), and in the process we derive a closed form for the reduced pixel expansion; we present the analysis of the reduced pixel expansion of any \((n-1,n)\)-VCS and give a closed form for it. In Sect. 5 we deal with the \((n-2,n)\)-VCS and provide an algorithm that groups minimal qualified sets so that every group contains at least three sets; we also compare our scheme with existing schemes in terms of pixel expansion and prove some bounds. Lastly, we conclude with some open issues in Sect. 6.

2 The model and the prerequisites for monochrome VCS

The model that we describe here is similar to the one described in Blundo, De Santis, and Stinson [10]. Let \({\mathcal {P}}\) = \(\{1,\ldots ,n\}\) be a set of elements called participants, and let \(2^{{\mathcal {P}}}\) denote the set of all subsets of \({\mathcal {P}}\). Let \(\Gamma _{Qual}\) and \(\Gamma _{Forb}\) be subsets of \(2^{{\mathcal {P}}}\) with \(\Gamma _{Qual} \cap \Gamma _{Forb}\) = \(\emptyset \). We refer to the members of \(\Gamma _{Qual}\) as qualified sets and to the members of \(\Gamma _{Forb}\) as forbidden sets. The pair (\(\Gamma _{Qual}, \Gamma _{Forb}\)) is called the access structure of the scheme. In general, \(\Gamma _{Qual} \cup \Gamma _{Forb}\) need not be \(2^{{\mathcal {P}}}\).

Let \( \Gamma _0=\{A~|~ A \in \Gamma _{Qual} \wedge \forall A' \subset A, A' \notin \Gamma _{Qual} \} \) be the collection of all minimal qualified sets. A participant \(P \in {\mathcal {P}}\) is an essential participant if there exists a set \(X \subseteq {\mathcal {P}}\) such that \(X \cup \{ P \} \in \Gamma _{Qual}\) but \(X \notin \Gamma _{Qual}\). A participant that is not essential is called a non-essential participant.

Let \(\Gamma \subseteq 2^{{\mathcal {P}}} \setminus \{ \emptyset \}\) (\(\Gamma \subseteq 2^{{\mathcal {P}}}\)). If \(A \in \Gamma \) and \(A \subseteq A' \subseteq {\mathcal {P}}\) (\(A' \subseteq A \subseteq {\mathcal {P}}\)) imply \(A' \in \Gamma \), then \(\Gamma \) is said to be monotone increasing (decreasing) on \({\mathcal {P}}\). If \(\Gamma _{Qual}\) is monotone increasing, \(\Gamma _{Forb}\) is monotone decreasing, and \(\Gamma _{Qual} \cup \Gamma _{Forb}= 2^{{\mathcal {P}}}\), then the access structure is called strong and \(\Gamma _{0}\) is called a basis. A visual cryptographic scheme with a strong access structure is termed a strong visual cryptographic scheme. In this paper we deal only with strong access structures; throughout, we presume that \(\Gamma _{Qual} \cup \Gamma _{Forb}= 2^{{\mathcal {P}}}\), so any \(X \subseteq {\mathcal {P}}\) is either a qualified set or a forbidden set of participants.

We further assume that the secret image consists of a collection of black and white pixels, each pixel being shared separately. To understand the sharing process, consider the case where the secret image consists of a single black or white pixel. On sharing, this pixel appears in the \(n\) shares distributed to the participants; however, in each share the pixel is subdivided into \(m\) subpixels. This \(m\) is called the pixel expansion: the number of subpixels, on the transparencies corresponding to the shares, needed to represent one pixel of the original image. The shares are printed on transparencies, so a “white” subpixel is actually an area where nothing is printed and which is left transparent. We assume that the subpixels are sufficiently small and close together that the eye averages them to some shade of grey.

In order that the recovered image is clearly discernible, it is important that the grey level of a black pixel be darker than that of a white pixel. Informally, the difference in the grey levels of the two pixel types is called contrast. We want the contrast to be as large as possible. Three variables control the perception of black and white regions in the recovered image: a threshold value \((t)\), a relative contrast \((\alpha (m))\), and the pixel expansion \((m)\). The threshold value is a numeric value that represents a grey level that is perceived by the human eye as the color black. The value \(\alpha (m) \cdot m\) is the contrast, which we want to be as large as possible. We require that \(\alpha (m) \cdot m \ge 1\) to ensure that black and white areas will be distinguishable.

Notations

Consider an \(n \times m\) Boolean matrix \(M\) and let \(X \subseteq \{1,2,\ldots ,n \}\). By \(M[X]\) we will denote the \(|X| \times m\) submatrix obtained from \(M\) by retaining only the rows indexed by the elements of \(X\). \(M_X\) will denote the Boolean “or” of the rows of \(M[X]\). The Hamming weight \(w(V)\) is the number of 1’s in a Boolean vector \(V\).
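The notation above is easy to make concrete. The following Python sketch (ours, not from the paper; matrices are stored as lists of 0/1 rows, and the helper names are our own) implements \(M[X]\), \(M_X\) and \(w(V)\):

```python
def restrict(M, X):
    """M[X]: the submatrix of M keeping only the rows indexed by X (1-based)."""
    return [M[i - 1] for i in sorted(X)]

def or_rows(M, X):
    """M_X: the Boolean 'or' of the rows of M[X]."""
    rows = restrict(M, X)
    return [max(col) for col in zip(*rows)]

def weight(V):
    """w(V): the Hamming weight, i.e. the number of 1's in the Boolean vector V."""
    return sum(V)
```

For example, with \(M\) a \(3 \times 3\) matrix and \(X = \{1,3\}\), `or_rows(M, X)` returns the componentwise "or" of rows 1 and 3, whose weight is then `weight(or_rows(M, X))`.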

2.1 Basis matrices

To construct a visual cryptographic scheme, it is sufficient to construct the basis matrices corresponding to the black and white pixel. In the following, we formally define what is meant by basis matrices.

Definition 2.1

(adapted from [10]) Let \((\Gamma _{ Qual},\Gamma _{ Forb})\) be an access structure on a set \({\mathcal P}\) of \(n\) participants. A \((\Gamma _{ Qual},\Gamma _{ Forb},m)\)-VCS with relative difference \(\alpha (m)\) and a set of thresholds \(\{t_{X}\}_{X \in \Gamma _{ Qual}}\) is realized using the \(n \times m\) basis matrices \(S^{0}\) and \(S^{1}\) if the following two conditions hold:

  1.

    If \(X = \{i_{1},i_{2}, \ldots , i_{p}\} \in \Gamma _{ Qual}\), then \(S^0_X\), the \(``or''\) of the rows \(i_{1},i_{2},\ldots ,i_{p}\) of \(S^{0}\), satisfies \( w(S^0_X) \le t_{X} -\alpha (m) \cdot m;\) whereas, for \(S^{1}\) it results in \(w(S^1_X) \ge t_{X}\).

  2.

    If \(X = \{i_{1},i_{2}, \ldots , i_{p}\} \in \Gamma _{ Forb}\), the two \(p \times m\) matrices obtained by restricting \(S^{0}\) and \(S^{1}\) to rows \(i_{1},i_{2},\ldots ,i_{p}\) are equal up to a column permutation.
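The two conditions of Definition 2.1 can be checked mechanically. Below is a small Python sketch of ours (the function names and the simplified contrast test \(w(S^1_X) > w(S^0_X)\), which suffices when \(\alpha (m)\cdot m \ge 1\), are our own choices, not from the paper), illustrated on the classical \((2,2)\)-VCS basis matrices:

```python
from itertools import combinations

def or_rows(M, X):
    # Boolean "or" of the rows of M indexed by X (1-based)
    return [max(c) for c in zip(*(M[i - 1] for i in X))]

def same_up_to_column_perm(A, B):
    # two matrices are equal up to a column permutation iff their
    # multisets of columns coincide
    return sorted(zip(*A)) == sorted(zip(*B))

def check_vcs(S0, S1, qualified, n):
    ok = True
    for X in qualified:
        # contrast condition: stacking a qualified set must be darker for S1
        ok &= sum(or_rows(S1, X)) > sum(or_rows(S0, X))
    forbidden = [Y for r in range(1, n + 1)
                 for Y in combinations(range(1, n + 1), r)
                 if not any(set(Q) <= set(Y) for Q in qualified)]
    for Y in forbidden:
        A = [S0[i - 1] for i in Y]
        B = [S1[i - 1] for i in Y]
        ok &= same_up_to_column_perm(A, B)  # security condition
    return ok

# classical (2,2)-VCS: identical shares for white, complementary for black
S0 = [[0, 1], [0, 1]]
S1 = [[0, 1], [1, 0]]
assert check_vcs(S0, S1, [(1, 2)], 2)
```

Here a single share (a forbidden set) of \(S^0\) and \(S^1\) is the same row \([0,1]\) up to permutation, while stacking both shares gives weights 1 and 2 respectively.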

3 Construction and analysis of some efficient \(t\)-\((k,n)^*\)-VCS

Let \({\mathcal {P}}=\{1,2,3,\ldots ,n\}\) be the set of participants, and let \(0 \le t \le k \le n\) be integers, where \(t\) denotes the number of essential participants and \(k\) denotes the threshold number of participants required to recover the secret image. Without loss of generality, let us assume that the first \(t\) participants, namely \(1,2,\ldots ,t\), are the essential participants.

We have not put any restriction on the parameters \(t, k\) and \(n\) other than \(0 \le t \le k \le n\). It is worth mentioning that we are interested only in those triplets \((t,k,n)\) of parameters which admit a meaningful visual cryptographic scheme. For example, if \(t = k\) then the only meaningful value that \(n\) may assume is \(k\): if \(n>k\) then the remaining \(n-k\) participants are non-essential and we can ignore them while sharing the secret. Again, if \(k = n\) or \(t = n\), then all the participants are essential and the resulting scheme is an \((n,n)\)-VCS. The case \(t = 0\) with \(n\ge k >1 \) is the original \((k,n)\)-threshold VCS in which no participant is essential and any \(k\) or more of them can recover the secret. Henceforth, whenever we consider a triplet \((t,k,n)\), it is a meaningful triplet. Moreover, it should be noted that if \((t-1,k-1,n-1)\) is a meaningful triplet then so is \((t,k,n)\).

We now describe a method for constructing the basis matrices realizing a \(t\)-\((k,n)^*\)-VCS. The method is straightforward and efficient in the sense that it only requires solving systems of linear equations to construct the basis matrices. It is an extension of the linear algebraic method introduced by Adhikari et al. in [4] and developed further in [1].

3.1 Construction of a \(t\)-\((k,n)^{*}\)-VCS: linear algebraic technique

We associate to each participant \(i\) a variable \(x_{i}\), for \(i=1,2,\ldots ,n\). If \(X\) is a minimal qualified set for a \(t\)-\((k,n)^*\)-VCS, then we must have \(1,2,\ldots ,t\in X\) and \(|X|=k\). Thus \({\mathcal {P}}\) has altogether \(\binom{n-t}{k-t}\) such subsets. Based on the lexicographic ordering we arrange the subsets as \(B_{1}, B_{2},\ldots ,B_{r}\), where \(r=\binom{n-t}{k-t}\). For example, if \({\mathcal {P}}=\{1,2,3,\ldots ,6\}, t=2\) and \(k=4\) then \(B_{1}=\{1,2,3,4\}, B_{2}=\{1,2,3,5\}, B_{3}=\{1,2,3,6\}, B_{4}=\{1,2,4,5\}, B_{5}=\{1,2,4,6\}\) and \(B_{6}=\{1,2,5,6\}\).
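The ordered list \(B_1,\ldots ,B_r\) can be generated directly; a Python sketch of ours (note that `itertools.combinations` already yields the \((k-t)\)-element subsets in lexicographic order):

```python
from itertools import combinations

def minimal_qualified_sets(n, t, k):
    """All minimal qualified sets B_1,...,B_r of a t-(k,n)*-VCS: the
    k-subsets of {1,...,n} that contain the essential participants
    1,...,t, listed in lexicographic order."""
    essential = tuple(range(1, t + 1))
    return [essential + rest
            for rest in combinations(range(t + 1, n + 1), k - t)]
```

For \(n=6\), \(t=2\), \(k=4\) this reproduces the six sets \(B_1,\ldots ,B_6\) listed above, and in general it returns \(\binom{n-t}{k-t}\) sets.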

Let us pair the consecutive subsets to form \(\lfloor \frac{r}{2}\rfloor \) groups; if \(r\) is odd, the last subset \(B_{r}\) forms a group by itself. Thus, for any \(r\), we have \(\lceil \frac{r}{2}\rceil \) groups in total.

The groups for 2-\((4,6)^*\)-VCS are as follows:

\(\text{ Group } \text{1: }~(B_{1},B_{2})\); \(~\text{ Group } \text{2: }~(B_{3},B_{4})\); and \(\text{ Group } \text{3: }~(B_{5},B_{6})\).

In general, the \(i\)-th group can be described as follows:

$$\begin{aligned} i\text{ th } \text{ Group }= \left\{ \begin{array}{ll} (B_{2i-1},B_{2i}), &{} \quad \text{ for }~ 1 \le i \le \left\lceil \frac{r-2}{2} \right\rceil , \text{ and } \text{ any } ~r>2;\\ (B_{r-1},B_{r}), &{} \quad \text{ for } \text{ even } ~r> 2 \text{ and } ~i=\frac{r}{2};\\ (B_r), &{} \quad \text{ for } \text{ odd } ~r>2 \text{ and } ~i=\left\lceil \frac{r}{2} \right\rceil . \end{array} \right. \end{aligned}$$
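The grouping rule amounts to the following few lines (a sketch of ours; groups are returned as tuples of indices into \(B_1,\ldots ,B_r\)):

```python
def make_groups(r):
    """Pair consecutive minimal qualified sets (B_1,B_2), (B_3,B_4), ...;
    if r is odd the last group holds B_r alone. Total: ceil(r/2) groups."""
    groups = [(i, i + 1) for i in range(1, r, 2) if i + 1 <= r]
    if r % 2 == 1:
        groups.append((r,))
    return groups
```

For \(r=6\) (the 2-\((4,6)^*\)-VCS) this gives the three groups shown above; for odd \(r\) the last group is the singleton \((B_r)\).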

Let us denote by \(f_{B_{j}}=0\) the linear equation \(\displaystyle \sum \nolimits _{\ell \in B_{j}}x_{\ell }=0\), and by \(f_{B_{j}}=1\) the linear equation \(\displaystyle \sum \nolimits _{\ell \in B_{j}}x_{\ell }=1\) (we use \(\ell \) as the summation index since \(k\) already denotes the threshold).

We consider the following systems of linear equations over the binary field \(\mathbb {Z}_2\):

For \(1\le i \le \lceil \frac{r-2}{2} \rceil \) and for any \(r \ge 3\),

$$\begin{aligned} \left. \begin{array}{cl} f_{B_{2i-1}}&{}=0\\ f_{B_{2i}}&{}=0\\ \end{array} \right\} \cdots (i) \text{ and } \left. \begin{array}{cl} f_{B_{2i-1}}&{}=1\\ f_{B_{2i}}&{}=1\\ \end{array} \right\} \cdots (i') \end{aligned}$$

For \(i= \frac{r}{2}\) and for even \(r > 3\),

$$\begin{aligned} \left. \begin{array}{cl} f_{B_{r-1}}&{}=0\\ f_{B_{r}}&{}=0\\ \end{array} \right\} \cdots \left( \frac{r}{2}\right) \text{ and } \left. \begin{array}{cl} f_{B_{r-1}}&{}=1\\ f_{B_{r}}&{}=1\\ \end{array} \right\} \cdots \left( \frac{r}{2}\right) ' \end{aligned}$$

For \(i= \lceil \frac{r}{2} \rceil \) and for odd \(r \ge 3\),

$$\begin{aligned} \left. \begin{array}{cl} f_{B_{r}}&{}=0\\ \end{array} \right\} \cdots \left( \left\lceil \frac{r}{2} \right\rceil \right) \text{ and } \left. \begin{array}{cl} f_{B_{r}}&{}=1\\ \end{array} \right\} \cdots \left( \left\lceil \frac{r}{2}\right\rceil \right) ' \end{aligned}$$

For any \(r \ge 3\) and \(1\le i \le \lceil \frac{r-2}{2} \rceil \), let \(S^0_i\) and \(S^1_i\) denote the Boolean matrices whose columns are all possible solutions of the systems \((i)\) and \((i')\) respectively. Similarly, for even \(r\), let \(S^0_{\frac{r}{2}}\) and \(S^1_{\frac{r}{2}}\) denote the Boolean matrices corresponding to the solutions of the systems \((\frac{r}{2})\) and \((\frac{r}{2})'\), and for odd \(r\), let \(S^0_{\lceil \frac{r}{2}\rceil }\) and \(S^1_{\lceil \frac{r}{2}\rceil }\) denote those corresponding to \((\lceil \frac{r}{2}\rceil )\) and \((\lceil \frac{r}{2}\rceil )'\) respectively.

Let \((S^{0}_{in},S^{1}_{in})\) denote the pair of Boolean matrices obtained by the concatenations: \(S^{0}_{in}=S^{0}_{1}||S^{0}_{2}||\cdots ||S^{0}_{\lceil \frac{r}{2}\rceil }\) and \(S^{1}_{in}=S^{1}_{1} ||S^{1}_{2}||\cdots ||S^{1}_{\lceil \frac{r}{2}\rceil }.\)
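The whole construction can be carried out by brute-force enumeration over \(\mathbb {Z}_2\). The following Python sketch (ours, practical only for small parameters) builds the initial basis matrices as lists of columns; as in the proof of Theorem 3.1 below, the variables that do not occur in a group's equations are set to zero:

```python
from itertools import combinations, product

def basis_matrices(n, t, k):
    """Initial basis matrices (S0_in, S1_in) of a t-(k,n)*-VCS, stored as
    lists of columns over Z_2. Brute force: for each group, enumerate the
    variables occurring in its equations (all others are fixed to 0)."""
    essential = tuple(range(1, t + 1))
    B = [essential + rest
         for rest in combinations(range(t + 1, n + 1), k - t)]
    groups = [B[i:i + 2] for i in range(0, len(B), 2)]  # pairs; last may be single
    S0, S1 = [], []
    for grp in groups:
        support = sorted(set().union(*grp))             # variables in this group
        for bits in product((0, 1), repeat=len(support)):
            x = dict(zip(support, bits))
            col = [x.get(i, 0) for i in range(1, n + 1)]
            vals = {sum(x[i] for i in Bj) % 2 for Bj in grp}
            if vals == {0}:      # solves the homogeneous system (i)
                S0.append(col)
            elif vals == {1}:    # solves the non-homogeneous system (i')
                S1.append(col)
    return S0, S1
```

For the 2-\((4,6)^*\)-VCS this yields \(8+16+8=32\) columns in each matrix, in agreement with Example 3.1 below.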

The pair of matrices \(S^{0}_{in}\) and \(S^{1}_{in}\), thus obtained, constitutes the basis matrices of the \(t\)-\((k,n)^*\)-VCS. This follows from a more general result, viz. Theorem \(2.1\) in [1]; for the sake of completeness we present a shorter and easier proof. Towards proving the result we need the following facts.

Fact 1: It is a well-known fact from linear algebra that if we consider two systems of linear equations \(Ax = 0\) and \(Ax =b\) where \(b \ne 0\), then all possible solutions of the second system can be obtained by adding (i.e., adding solution vectors) one particular solution of the second system to each solution of the first system.

Result 1: From the way we have constructed \(S^0\) and \(S^1\), it is easy to see that each block \(S^1_i\) can be obtained from \(S^0_i\) by adding a particular solution of the system \((i')\) to each column of \(S^0_i\).
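Fact 1 is easy to verify in miniature; the sketch below (an example of ours, not from the paper) checks it over \(\mathbb {Z}_2\) for the single equation \(x_1+x_2+x_3=1\):

```python
from itertools import product

# all solutions of x1 + x2 + x3 = 0 and of x1 + x2 + x3 = 1 over Z_2
homogeneous = [v for v in product((0, 1), repeat=3) if sum(v) % 2 == 0]
nonhomogeneous = [v for v in product((0, 1), repeat=3) if sum(v) % 2 == 1]
particular = (1, 0, 0)  # one particular solution of the "= 1" system

# adding the particular solution to every homogeneous solution recovers
# exactly the solution set of the non-homogeneous system (Fact 1)
shifted = sorted(tuple((a + b) % 2 for a, b in zip(particular, v))
                 for v in homogeneous)
assert shifted == sorted(nonhomogeneous)
```

This is precisely the mechanism behind Result 1: each column of \(S^1_i\) is a column of \(S^0_i\) shifted by one fixed particular solution of \((i')\).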

Theorem 3.1

The pair of matrices \((S^{0},S^{1})\) obtained by the above algorithm, constitute basis matrices of the \(t\)-\((k,n)^*\)-VCS.

Proof

Let \(m\) denote the pixel expansion, i.e., the number of columns occurring in \(S^0\) (equivalently, in \(S^1\)). We need to prove the following:

  1.

    If \(X\) is a qualified set of participants then \( w(S^1_X) - w(S^0_X) \ge \alpha _{X}\cdot m\).

  2.

    If \(Y \subset {\mathcal {P}}\) is a forbidden set of participants then \(S^0[Y]\) and \(S^1[Y]\) are identical up to column permutation.

Let us denote \(\lceil \frac{r}{2}\rceil \) by \(p\).

First we prove the second condition, viz. the security condition. Let \(Y = \{i_1, i_2, \ldots , i_s\}\) be a forbidden set. We want to show that \(S^0[Y]\) and \(S^1[Y]\) are identical up to a column permutation. Since the matrices are built blockwise, it is sufficient to prove that

$$\begin{aligned} \left\{ \begin{array}{ll} S_1^0[Y] \text { and } S_1^1[Y] \text { are identical up to a column permutation}\\ S_2^0[Y] \text { and } S_2^1[Y] \text { are identical up to a column permutation}\\ \cdots \\ S_p^0[Y] \text { and } S_p^1[Y] \text { are identical up to a column permutation}\\ \end{array} \right. \end{aligned}$$

We prove this only for \(S^0_1[Y]\) and \(S^1_1[Y]\); the rest follow in the same manner. Recall that each column of \(S^0_1\) and \(S^1_1\) is a solution of the system \((1)\) and \((1')\) respectively, with the variables that are not present in any of the equations set equal to zero. Therefore, if we can show that there exists a particular solution \(\mathbf {c} = (c_1, c_2, \ldots ,c_n)\) of the system \((1')\) such that \(c_j = 0\) for \(j = i_1,i_2,\ldots ,i_s\), then by \( {Result ~ 1}\) the restricted matrices \(S^0_1[Y]\) and \(S^1_1[Y]\) are identical up to a column permutation. Now, since \(Y\) is a forbidden set of participants, \(B_i \not \subseteq Y\) for \(i = 1,2\); otherwise \(Y\) would contain a minimal qualified set and would itself be a qualified set. Let \(\mu \) and \(\sigma \) be indices (that is, participants) such that \(\mu \in B_1\) and \(\sigma \in B_2\) but \(\mu ,\sigma \notin Y\). If \(\mu = \sigma \), then \(c_{\mu } = 1\) and \(c_i = 0\) for all \(i \ne \mu \) gives a particular solution of \((1')\). If \(\mu \ne \sigma \), then \(c_{\mu } = c_{\sigma } = 1\) and \(c_i = 0\) for all \(i \ne \mu , \sigma \) gives a particular solution of \((1')\). In both cases \(c_j = 0\) for \(j = i_1,i_2,\ldots ,i_s\), and hence the proof follows.

In the same manner we can prove that \(S^0_i[Y]\) and \(S^1_i[Y]\) are identical up to a column permutation for all \(i = 1,2,\ldots ,p\). Hence the matrices \(S^0[Y]\) and \(S^1[Y]\) are identical up to a column permutation. This proves the security condition.

To prove the first condition, that is, the contrast condition, let us consider a minimal qualified set \(X\). Then \(X = B_j\) for some \(1 \le j \le r\), where the symbols have their usual meanings. We break \(S^0[X]\) up as \( S^0_{1}[X] \Vert S^0_{2}[X] \Vert \cdots \Vert S^0_{p}[X]\) and \(S^1[X]\) as \( S^1_{1}[X] \Vert S^1_{2}[X] \Vert \cdots \Vert S^1_{p}[X]\).

Without loss of generality, let \(X = B_1 = \{1,2,3,\ldots ,t,t+1,\ldots ,k\}\). Let us now consider \(S^0[X]\) and \(S^1[X]\), that is, we restrict ourselves to the first \(k\) rows of the matrices. Each restricted column \((c_1,c_2,\ldots ,c_k)^t\) of \(S^0_{1}[X]\) is a solution of the homogeneous system \((1)\), so the all-zero column occurs in \(S^0_1[X]\); on the other hand, the non-homogeneous system \((1')\) admits no all-zero solution, so every column of \(S^1_1[X]\) contains a \(1\). Hence \(w((S^1_{1})_X) - w((S^0_{1})_X) \ge 1\). Now, since \(X \ne B_j\) for \(j\ne 1\), arguing as for the \( {security ~ condition} \) we get that \(S^0_{j}[X]\) and \(S^1_{j}[X]\) are identical up to a column permutation, and hence \(w((S^1_{j})_X) - w((S^0_{j})_X) = 0\) for all \(j \ne 1\). Taking \(\alpha _{X}=\frac{1}{m}\), it follows that \( w(S^1_X) - w(S^0_X) \ge \alpha _{X} \cdot m = 1\). The same argument works for any minimal qualified set \(X\). This completes the proof of the theorem. \(\square \)

Remark

Notice that the construction is direct, that is, the basis matrices can be directly constructed given any meaningful triplet of parameters \((t,k,n)\).

Example 3.1

If \({\mathcal {P}}=\{1,2,3,4,5,6\}, t=2\) and \(k=4\) then the minimal qualified subsets of participants are \(B_{1}=\{1,2,3,4\}, B_{2}=\{1,2,3,5\}, B_{3}=\{1,2,3,6\}, B_{4}=\{1,2,4,5\}, B_{5}=\{1,2,4,6\}, B_{6}=\{1,2,5,6\}\).

The basis matrices for the \(2\)-\((4,6)^*\)-VCS following the above construction rule are given by

$$\begin{aligned} S^0_{in}= & {} \left[ \begin{array}{ccc} \leftarrow \text{ Col } \text{1 } \text{ to } \text{8 }\rightarrow &{} \quad \leftarrow \text{ Col } \text{9 } \text{ to } \text{24 }\rightarrow &{} \quad \leftarrow \text{ Col } \text{25 } \text{ to } \text{32 }\rightarrow \\ 00001111 &{} \quad 0000000011111111 &{} \quad 00001111\\ 00110011 &{} \quad 0000111100001111 &{} \quad 00110011\\ 01010101 &{} \quad 0011001100110011 &{} \quad 00000000\\ 01101001 &{} \quad 0101010101010101 &{} \quad 01101001\\ 01101001 &{} \quad 0101101010100101 &{} \quad 01101001\\ 00000000 &{} \quad 0011110011000011 &{} \quad 01010101 \end{array}\right] \, and \\ S^1_{in}= & {} \left[ \begin{array}{ccc} \leftarrow \text{ Col } \text{1 } \text{ to } \text{8 }\rightarrow &{} \quad \leftarrow \text{ Col } \text{9 } \text{ to } \text{24 }\rightarrow &{} \quad \leftarrow \text{ Col } \text{25 } \text{ to } \text{32 }\rightarrow \\ 11110000 &{} \quad 1111111100000000 &{} \quad 11110000\\ 00110011 &{} \quad 0000111100001111 &{} \quad 00110011\\ 01010101 &{} \quad 0011001100110011 &{} \quad 00000000\\ 01101001 &{} \quad 0101010101010101 &{} \quad 01101001\\ 01101001 &{} \quad 0101101010100101 &{} \quad 01101001\\ 00000000 &{} \quad 0011110011000011 &{} \quad 01010101 \end{array}\right] \end{aligned}$$

Remark

The basis matrices \(S^{0}_{in}\) and \(S^{1}_{in}\) thus constructed may have common columns. We call these the initial basis matrices and the corresponding pixel expansion the initial pixel expansion. In [1] the author pointed out that the common columns appearing in the initial basis matrices \(S^{0}_{in}\) and \(S^{1}_{in}\) may be deleted to obtain reduced basis matrices, denoted by \(S^0_{red}\) and \(S^1_{red}\). This reduces the pixel expansion (the reduced pixel expansion) and thereby increases the relative contrast significantly. In the next section, we adopt this technique in analyzing the construction of any \(t\)-\((k,n)^*\)-VCS to obtain the reduced pixel expansion.
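Deleting common columns is a simple multiset operation; a minimal Python sketch of ours (the function name is our own), which removes the same multiset of columns from both matrices so that the security condition is left untouched:

```python
from collections import Counter

def delete_common_columns(S0, S1):
    """Reduced basis matrices: drop every column that appears (with
    multiplicity) in both S0 and S1; matrices are given as lists of columns."""
    common = Counter(map(tuple, S0)) & Counter(map(tuple, S1))

    def strip(cols):
        left = Counter(common)          # per-matrix budget of columns to drop
        kept = []
        for c in map(tuple, cols):
            if left[c]:
                left[c] -= 1
            else:
                kept.append(list(c))
        return kept

    return strip(S0), strip(S1)
```

For example, if \(S^0_{in}\) has columns \(\{(0,0),(0,1),(1,1)\}\) and \(S^1_{in}\) has \(\{(0,1),(1,1),(1,0)\}\), the common columns \((0,1)\) and \((1,1)\) are removed, leaving reduced matrices with a single column each.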

3.2 Analysis of \(t\)-\((k,n)^{*}\)-VCS

The following discussion establishes the relationship between the pixel expansions and relative contrasts of a higher order VCS and a lower order VCS constructed by the linear algebraic method. A similar discussion is given in [14], but only when the difference between \(k\) and \(t\) is strictly greater than one; our construction method covers all the possibilities. In the following lemma we first establish the relationship between the initial pixel expansions of a higher order VCS and a lower order VCS.

Lemma 1

Suppose \((t-1,k-1,n-1)\) represents a meaningful triplet. Let \(m_{in}(t-1,k-1,n-1)\) denote the initial pixel expansion of a \((t-1)\)-\((k-1,n-1)^{*}\)-VCS constructed by the linear algebraic method described above. Then the \(t\)-\((k,n)^{*}\)-VCS constructed using linear algebra has initial pixel expansion \(2m_{in}(t-1,k-1,n-1)\).

Proof

Suppose the set of participants for the \((t-1)\)-\((k-1,n-1)^{*}\)-VCS is \({\mathcal {P}}=\{2,3,\ldots , n\}\), where participants \(2,3,\ldots ,t\) are the essential participants. To construct a \(t\)-\((k,n)^{*}\)-VCS, we add another essential participant \(1\) to the set of participants. The new set of participants becomes \(\bar{{\mathcal {P}}}=\{1,2,3,\ldots , n\}\), and the basis matrices for the \(t\)-\((k,n)^{*}\)-VCS are constructed by solving the same set of linear equations as for the \((t-1)\)-\((k-1,n-1)^{*}\)-VCS, with the modification that the new variable \(x_{1}\) is added to the left-hand side of each of the linear equations. Since \(x_{1}\) appears in each of the equations and can take the two distinct values \(0\) and \(1\), the initial pixel expansion of the \(t\)-\((k,n)^{*}\)-VCS is exactly twice that of the \((t-1)\)-\((k-1,n-1)^{*}\)-VCS. \(\square \)

In the following lemma we show a relationship between the number of common columns appearing in the initial basis matrices of a lower order VCS and a higher order VCS. Deleting those common columns will reduce the pixel expansion of the schemes and we establish a relation between the reduced pixel expansions.

Lemma 2

Let \((t-1,k-1,n-1)\) represent a meaningful triplet. Let \(m_{red}(t-1,k-1,n-1)\) denote the reduced pixel expansion of a \((t-1)\)-\((k-1,n-1)^{*}\)-VCS constructed by the linear algebraic method. Then the \(t\)-\((k,n)^{*}\)-VCS constructed using linear algebra has reduced pixel expansion \(2m_{red}(t-1,k-1,n-1)\).

Proof

It suffices to show that the number of common columns occurring in the initial basis matrices of the \(t\)-\((k,n)^{*}\)-VCS is exactly twice that of the \((t-1)\)-\((k-1,n-1)^{*}\)-VCS. Suppose \(c\) is a common column occurring in the initial basis matrices of the \((t-1)\)-\((k-1,n-1)^{*}\)-VCS. Corresponding to this \(c\), by Lemma \(1\), two columns appear in each of the basis matrices of the \(t\)-\((k,n)^{*}\)-VCS, namely the columns \((0,c)\) and \((1,c)\). Since \(c\) is a common column of the basis matrices of the \((t-1)\)-\((k-1,n-1)^{*}\)-VCS, \(c\) must satisfy both of the following systems, for some \(i\) and \(j\) with \(i \ne j\):

$$\begin{aligned} \left. \begin{array}{ll} f_{B_{2i-1}}&{}=0\\ f_{B_{2i}}&{}=0\\ \end{array} \right\} \text{ and } \left. \begin{array}{ll} f_{B_{2j-1}}&{}=1\\ f_{B_{2j}}&{}=1\\ \end{array} \right\} \end{aligned}$$

Then it is easy to see that \((0,c)\) is a solution to both of the systems

$$\begin{aligned} \left. \begin{array}{ll} x_{1}+f_{B_{2i-1}}&{}=0\\ x_{1}+f_{B_{2i}}&{}=0\\ \end{array} \right\} \text{ and } \left. \begin{array}{ll} x_{1}+f_{B_{2j-1}}&{}=1\\ x_{1}+f_{B_{2j}}&{}=1\\ \end{array} \right\} \end{aligned}$$

corresponding to the access structure of \(t\)-\((k,n)^{*}\)-VCS. Similarly \((1,c)\) is also a common column. Now the proof follows. \(\square \)

The following theorem is immediate.

Theorem 3.2

Let \((t,k,n)\) be a meaningful triplet such that \((0,k-t,n-t)\) is also meaningful. Let the pixel expansion and relative contrast of the \(0\)-\((k-t,n-t)^{*}\)-VCS be \(m\) and \(\alpha \) respectively. Then the pixel expansion and relative contrast of the \(t\)-\((k,n)^{*}\)-VCS will be \(2^{t}m\) and \(\frac{\alpha }{2^{t}}\) respectively.

Corollary

It is immediate that if \(m^{*}\) is the optimal pixel expansion and \(\alpha ^{*}\) the optimal relative contrast of a \(t\)-\((k,n)^{*}\)-VCS, then \(m^{*} \le 2^{t}m\) and \(\alpha ^{*} \ge \frac{\alpha }{2^{t}}\), where \(m\) and \(\alpha \) are the pixel expansion and relative contrast of the \(0\)-\((k-t,n-t)^{*}\)-VCS.

Remark

A version of Theorem 4 in [14] can be derived from Theorem 3.2. So in particular, a version of Theorem 2.3 in [6] may also be derived by setting \(t = 1\).

The following section deals with the scenario where \(k-t = 1\). In the process we are able to address the following issues. The method of construction adopted in [14] builds the basis matrices of a higher order \(t\)-\((k,n)^{*}\)-VCS out of the basis matrices of a \((k-t,n-t)\)-VCS and the basis matrices of an optimal \((t,t)\)-VCS. Hence, as the authors pointed out, Corollaries \(1\) and \(2\) to Theorem \(4\) in [14] hold only if \(k-t\ge 2\). Our construction method is direct, and this enables us to circumvent this obstacle when the difference between \(k\) and \(t\) is equal to \(1\). On the other hand, we generalize the \((2,n)^{*}\)-VCS of [6] to the case where there are \(t\) essential parties. Moreover, in [1] the author raised the interesting question of whether or not collecting three or more equations at a time gives a better result in terms of pixel expansion when constructing a VCS for a general access structure. We resolve this issue for the \((k-1)\)-\((k,n)^{*}\)-VCS by considering three or more equations at a time, obtaining better results in terms of pixel expansion and relative contrast.

3.2.1 On construction of \((k-1)\)-\((k,n)^{*}\)-VCS

Let us consider a restricted \((k,n)\)-VCS having \(k-1\) essential participants. In this case the minimal qualified sets are \(\{1,\ldots ,k-1,k\}, \{1,\ldots ,k-1,k+1\}, \{1,\ldots ,k-1,k+2\}, \ldots , \{1,\ldots ,k-1,n-1\}, \{1,\ldots ,k-1,n\}\). Observe that taking any number of equations at a time affects neither the security nor the contrast conditions of a visual cryptography scheme. Thus, instead of taking two equations at a time to form groups of linear equations, we may take all the equations together, forming only one group; the values of the variables \(x_{1}, x_{2}, \ldots ,x_{k-1}\) then determine the values of the rest of the variables. In this scenario the pixel expansion becomes \(2^{k-1}\). We start with an example for better understanding.

Example 3.2

Consider the \(3\)-\((4,7)^{*}\)-VCS. Here, \(\Gamma _0=\{\{1,2,3,4 \}, \{1,2,3,5 \}, \{1,2,3,6 \}, \{1,2,3,7 \} \}\). To construct the \(3\)-\((4,7)^*\)-VCS, we consider the following systems of linear equations over the binary field \(\mathbb {Z}_2\). The first is a homogeneous system of equations and the second a non-homogeneous system of equations.

$$\begin{aligned}&{\left\{ \begin{array}{ll} x_{1}+x_{2}+x_3+x_4=0\\ x_{1}+x_{2}+x_3+x_5=0\\ x_{1}+x_{2}+x_3+x_6=0\\ x_{1}+x_{2}+x_3+x_7=0 \end{array}\right. } \end{aligned}$$
(1)
$$\begin{aligned}&{\left\{ \begin{array}{ll} x_{1}+x_{2}+x_3+x_4=1\\ x_{1}+x_{2}+x_3+x_5=1\\ x_{1}+x_{2}+x_3+x_6=1\\ x_{1}+x_{2}+x_3+x_7=1 \end{array}\right. } \end{aligned}$$
(2)

The solution of the above systems admits the following basis matrices:

$$\begin{aligned} S^{0}=\begin{bmatrix} 00001111 \\ 00110011 \\ 01010101 \\ 01101001 \\ 01101001 \\ 01101001 \\ 01101001 \end{bmatrix}\quad and\quad S^{1}=\begin{bmatrix} 11110000 \\ 00110011 \\ 01010101 \\ 01101001 \\ 01101001 \\ 01101001 \\ 01101001 \end{bmatrix}. \end{aligned}$$

The pixel expansion is 8 and the relative contrast is \(\frac{1}{8}\).
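These claims are easy to check mechanically. The following script is our own verification aid (not part of the original construction): it transcribes the printed matrices and compares the weights of the stacked rows for each minimal qualified set.

```python
# Basis matrices of Example 3.2, one string per participant's row
S0 = ["00001111", "00110011", "01010101", "01101001",
      "01101001", "01101001", "01101001"]
S1 = ["11110000", "00110011", "01010101", "01101001",
      "01101001", "01101001", "01101001"]

def stacked_weight(S, X):
    """Hamming weight of the OR (stacking) of the rows of the participants in X (1-based)."""
    return sum(max(int(S[i - 1][j]) for i in X) for j in range(len(S[0])))

m = len(S0[0])           # pixel expansion
for j in (4, 5, 6, 7):   # minimal qualified sets {1,2,3,j}
    X = [1, 2, 3, j]
    print(X, stacked_weight(S1, X) - stacked_weight(S0, X))  # difference is 1
```

The relative contrast is the weight difference divided by \(m\), i.e. \(\frac{1}{8}\); moreover the maximal forbidden set \(\{1,2,3\}\) sees identical column multisets in both matrices.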

Looking at the above example we now prove the following theorem.

Theorem 3.3

For a meaningful triplet \((t,k,n)\) with \(t=k-1\), there exists a \((k-1)\)-\((k,n)^*\)-VCS having pixel expansion \(2^{k-1}\) and relative contrast \(\tfrac{1}{2^{k-1}}\).

Proof

Let \(m\) denote the number of columns of the matrices \(S^{0}\) and \(S^{1}\) (i.e., \(m\) denotes the pixel expansion). Note that in this theorem \(m = 2^{k-1}\). We need to prove the following:

  1. If \(X\) is a qualified set then \( w(S^1_X) - w(S^0_X) \ge \frac{1}{2^{k-1}} \cdot m\).

  2. If \(Y \subset {\mathcal {P}}\) is a forbidden set of participants then \(S^0[Y]\) and \(S^1[Y]\) are identical up to a column permutation.

First we prove the second condition, viz., the security condition. Let \(Y = \{i_1, i_2, \ldots , i_s\}\) be a forbidden set. We want to show that \(S^0[Y]\) and \(S^1[Y]\) are identical up to a column permutation. Recall that each column of \(S^0\) is a solution of the homogeneous system of equations and each column of \(S^1\) is a solution of the non-homogeneous system. If we can show that there exists a particular solution of the non-homogeneous system, say \(\mathbf {c} = (c_1, c_2, \ldots ,c_n)\), such that \(c_j = 0\) for \(j = i_1,i_2,\ldots ,i_s\), then by Result 1 the restricted matrices \(S^0[Y]\) and \(S^1[Y]\) will be identical up to a column permutation. Since \(Y\) is a forbidden set of participants, \(Y\) does not contain any \(B_i\), because otherwise \(Y\) would contain a minimal qualified set and would itself be qualified. In a \((k-1)\)-\((k,n)^*\)-VCS there are two types of maximal forbidden sets:

  1. Type 1: \(\{1,2,3,\ldots ,k-2,k-1\}\), that is, all the essential parties are present but no regular party is available.

  2. Type 2: \(\{2,3,\ldots ,k-2,k-1,k,k+1,\ldots ,n-1,n\}\), that is, all regular parties along with \(k-2\) essential parties are present but one essential party, namely \(1\), is missing.

If we can prove that neither type \(1\) nor type \(2\) subsets can have any information about the secret then we are done. Let us first consider type \(1\), i.e., \(Y=\{1,2,3,\ldots ,k-2,k-1\}\). In this case we see that \(c_1 = c_2 = c_3 = \cdots = c_{k-1} = 0\) and \(c_k = c_{k+1} = \cdots = c_{n} = 1\) is a particular solution to the non-homogeneous system. Again, if \(Y=\{2,3,\ldots ,k-2,k-1,k,k+1,\ldots ,n\}\) is of type \(2\) then \(c_1 = 1\) and \(c_2 = c_3 = \cdots = c_n = 0\) is a particular solution to the non-homogeneous system. Hence the proof follows.

For the proof of the contrast condition let \(X\) be a minimal qualified set of participants. Corresponding to this \(X\), the all-zero column occurs in \(S^0[X]\) but there is no such column in \(S^1[X]\). Hence \( w(S^1_X) - w(S^0_X)\ge 1\), which proves the contrast condition. \(\square \)

Remark

Theorem 3.3 reveals that for the \((k-1)\)-\((k,n)^*\)-VCS constructed above, the pixel expansion and hence the relative contrast depend only on \(k\) and, surprisingly, not on \(n\).
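The construction in the proof of Theorem 3.3 can be sketched in a few lines: the \(k-1\) essential variables are free, and every regular variable is forced by its equation. The helper below is ours (an illustration under the paper's setup, not the authors' code); for \(k=4, n=7\) it reproduces the matrices of Example 3.2 up to a column permutation.

```python
from itertools import product

def star_basis_matrices(k, n):
    """Columns of S^e are all GF(2) solutions of
    x_1 + ... + x_{k-1} + x_j = e for every regular party j = k, ..., n:
    the k-1 essential variables are free, and each x_j is forced to
    (e + x_1 + ... + x_{k-1}) mod 2."""
    cols = {0: [], 1: []}
    for e in (0, 1):
        for head in product((0, 1), repeat=k - 1):
            forced = (e + sum(head)) % 2
            cols[e].append(list(head) + [forced] * (n - k + 1))
    # transpose so that row i is the share of participant i+1
    S0 = [[c[i] for c in cols[0]] for i in range(n)]
    S1 = [[c[i] for c in cols[1]] for i in range(n)]
    return S0, S1

S0, S1 = star_basis_matrices(4, 7)
print(len(S0[0]))  # pixel expansion 2^{k-1} = 8, independent of n
```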

4 On construction of \(t\)-\((n-1,n)^*\)-VCS

In this section we deal with the particular case of the restricted access structure \(t\)-\((k,n)^*\)-VCS with \(k=n-1\). For the construction and analysis of the \(t\)-\((n-1,n)^*\)-VCS, the \((n-1,n)\)-VCS plays an important role. We first revisit the technique given in Adhikari [1] to obtain an \((n-1,n)\)-threshold VCS. In that paper [1], the author pointed out that by deleting common columns from the initial basis matrices \(S^0_{in}\) and \(S^1_{in}\), one may construct a better VCS in terms of smaller pixel expansion and better relative contrast. However, the author did not provide the exact number of common columns in \(S^0_{in}\) and \(S^1_{in}\) for an arbitrary \((n-1,n)\)-VCS. In what follows, we provide the exact count of the common columns and thereby a closed form formula for the exact pixel expansion of an efficient \((n-1,n)\)-VCS.

4.1 On \((n-1,n)\)-VCS

The construction of an \((n-1,n)\)-VCS essentially follows from the construction of a general \(t\)-\((k,n)^{*}\)-VCS (as described in Sect. 3) by considering \(t=0\). For the sake of completeness we sketch the construction method.

Consider a set of participants \({\mathcal {P}}=\{1,2,\ldots ,n\}\) and let \(X \subseteq {\mathcal {P}}\) be a subset having \(n-1\) elements. Then \({\mathcal {P}}\) has altogether \({\displaystyle \genfrac(){0.0pt}0{n}{n-1}}=n\) such subsets. We arrange them in increasing lexicographic order as \(B_{1}, B_{2}, \ldots , B_{n}\), and collect the subsets to form \(\lceil \frac{n}{2} \rceil \) groups. In general, the \(i\)th group \(G_i\) can be described as follows:

$$\begin{aligned} i\text{ th } \text{ Group }(G_i)= \left\{ \begin{array}{ll} (B_{2i-1},B_{2i}), &{} \text{ for }~ 1 \le i \le \left\lceil \frac{n-2}{2} \right\rceil , \text{ and } \text{ any } ~n>2;\\ (B_{n-1},B_{n}), &{} \text{ for } \text{ even } ~n > 2 \text{ and } ~i=\frac{n}{2};\\ (B_n), &{} \text{ for } \text{ odd } ~n>2 \text{ and } ~i=\left\lceil \frac{n}{2} \right\rceil , \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} B_j= \left\{ \begin{array}{cl} \underbrace{\{1,2, \ldots ,n-2i,n-2i+1,n-2i+3,n-2i+4, \ldots \}}_{n-1 \text{ many } \text{ elements }}, &{} j\!=\!2i-1~\text{ and } ~ i \le \left\lceil \frac{n\!-\!2}{2} \right\rceil ;\\ \underbrace{\{1,2,\ldots ,n-2i,n-2i+2,n-2i+3, n-2i+4, \ldots \}}_{n-1 \text{ many } \text{ elements }}, &{} j=2i ~ \text{ and } ~ i \le \left\lceil \frac{n-2}{2} \right\rceil ;\\ \{ 1,3,4,5,\ldots ,n \}, &{} \text{ for }~j=n-1; \\ \{ 2,3,4,5,\ldots ,n\}, &{} \text{ for }~j=n. \end{array} \right. \end{aligned}$$
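Concretely, \(B_j\) is the \((n-1)\)-subset of \({\mathcal {P}}\) that omits participant \(n+1-j\), and the groups simply pair consecutive subsets. A small sketch (our own helper, assuming this indexing) generates the grouping:

```python
def minimal_qualified_groups(n):
    """B_1, ..., B_n in increasing lexicographic order (B_j omits n+1-j),
    collected into ceil(n/2) groups: pairs, plus a lone last group when n is odd."""
    B = [sorted(set(range(1, n + 1)) - {n + 1 - j}) for j in range(1, n + 1)]
    return [B[i:i + 2] for i in range(0, n, 2)]

for g in minimal_qualified_groups(5):
    print(g)
```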

Now we consider the following systems of linear equations:

For \(1\le j \le \lceil \frac{n-2}{2} \rceil \) and for any \(n>2\),

$$\begin{aligned} \left. \begin{array}{ll} f_{B_{2j-1}}&{}=0\\ f_{B_{2j}}&{}=0\\ \end{array} \right\} \cdots (j) \text{ and } \left. \begin{array}{ll} f_{B_{2j-1}}&{}=1\\ f_{B_{2j}}&{}=1\\ \end{array} \right\} \cdots (j') \end{aligned}$$

For \(i= \frac{n}{2}\) and for even \(n>2\),

$$\begin{aligned} \left. \begin{array}{ll} f_{B_{n-1}}&{}=0\\ f_{B_{n}}&{}=0\\ \end{array} \right\} \cdots \left( \frac{n}{2}\right) \text{ and } \left. \begin{array}{ll} f_{B_{n-1}}&{}=1\\ f_{B_{n}}&{}=1\\ \end{array} \right\} \cdots \left( \frac{n'}{2}\right) \end{aligned}$$

For \(i= \lceil \frac{n}{2} \rceil \) and for odd \(n>2\),

$$\begin{aligned} \left. \begin{array}{ll} f_{B_{n}}&{}=0\\ x_{1}&{}=0\\ \end{array} \right\} \cdots \left( \left\lceil \frac{n}{2}\right\rceil \right) \text{ and } \left. \begin{array}{ll} f_{B_{n}}&{}=1\\ x_{1}&{}=0\\ \end{array} \right\} \cdots \left( \left\lceil \frac{n'}{2}\right\rceil \right) \end{aligned}$$

For any \(n>2\) and \(1\le j \le \lceil \frac{n-2}{2} \rceil \), let \(S^0_j\) and \(S^1_j\) denote the Boolean matrices whose columns are all possible solutions of the systems \((j)\) and \((j')\) respectively over the binary field. Similarly, for even (resp. odd) \(n>2\), let \(S^0_{\frac{n}{2}}\) and \(S^1_{\frac{n}{2}}\) (resp. \(S^0_{\lceil \frac{n}{2}\rceil }\) and \(S^1_{\lceil \frac{n}{2}\rceil }\)) denote the Boolean matrices corresponding to the solutions of the systems \((\frac{n}{2})\) and \((\frac{n'}{2})\) (resp. \((\lceil \frac{n}{2}\rceil )\) and \((\lceil \frac{n'}{2}\rceil )\)).

Then the matrices \(S^{0}_{in}=S^{0}_{1}||S^{0}_{2}||\cdots ||S^{0}_{\lceil \frac{n}{2}\rceil }\) and \(S^{1}_{in}=S^{1}_{1}||S^{1}_{2}||\cdots ||S^{1}_{\lceil \frac{n}{2}\rceil }\), formed by concatenation, constitute the initial basis matrices of an \((n-1,n)\)-VCS. The proof of this fact can be found in [1]. For the rest of our discussion we refer to the matrices \(S^0_j\) and \(S^1_j\) as the \(j\)th blocks of the initial basis matrices.

The following lemma is immediate from the construction.

Lemma 3

The \((n-1,n)\)-VCS, constructed using the linear algebraic technique as described above, has initial pixel expansion \(m_{in} =\lceil \frac{n}{2}\rceil 2^{n-2}\).
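Lemma 3 can be confirmed by brute force for small \(n\): enumerate all \(2^n\) binary vectors and keep, block by block, those satisfying the corresponding pair of equations. The sketch below is our own check (exponential in \(n\), so only for small cases).

```python
from itertools import product
from math import ceil

def initial_columns(n, e):
    """Columns of S^e_in of the (n-1,n)-VCS: for every group, all GF(2)
    solutions of f_B = e for each B in the group (with the extra
    constraint x_1 = 0 for the lone last group when n is odd)."""
    B = [set(range(1, n + 1)) - {n + 1 - j} for j in range(1, n + 1)]
    groups = [B[i:i + 2] for i in range(0, n, 2)]
    cols = []
    for grp in groups:
        for x in product((0, 1), repeat=n):
            if len(grp) == 1 and x[0] != 0:
                continue
            if all(sum(x[i - 1] for i in Bj) % 2 == e for Bj in grp):
                cols.append(x)
    return cols

for n in (4, 5, 6, 7):
    print(n, len(initial_columns(n, 0)), ceil(n / 2) * 2 ** (n - 2))
```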

Example 4.1

For the \((4,5)\)-VCS, the five subsets can be grouped as follows: \(G_{1}={\left\{ \left\{ 1,2,3,4\right\} ,\left\{ 1,2,3,5\right\} \right\} , } G_{2}={\left\{ \left\{ 1,2,4,5\right\} ,\left\{ 1,3,4,5\right\} \right\} , } G_{3}={\left\{ \left\{ 2,3,4,5\right\} \right\} }.\)

The initial basis matrices obtained by solving the set of linear equations corresponding to the above groups, are given by

$$\begin{aligned} S^0_{in}= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c} \leftarrow \text{ Col } \text{1 } \text{ to } \text{8 }\rightarrow &{}\quad \leftarrow \text{ Col } \text{9 } \text{ to } \text{16 }\rightarrow &{}\quad \leftarrow \text{ Col } \text{17 } \text{ to } \text{24 }\rightarrow \\ 01010101 &{} \quad 01010101 &{} \quad 00000000\\ 00110011 &{} \quad 01101001 &{} \quad 01010101\\ 00001111 &{} \quad 01101001 &{} \quad 00110011\\ 01101001 &{} \quad 00110011 &{} \quad 00001111\\ 01101001 &{} \quad 00001111 &{} \quad 01101001 \end{array}\right] \, and \\ S^1_{in}= & {} \left[ \begin{array}{c@{\quad }c@{\quad }c} \leftarrow \text{ Col } \text{1 } \text{ to } \text{8 }\rightarrow &{} \quad \leftarrow \text{ Col } \text{9 } \text{ to } \text{16 }\rightarrow &{} \quad \leftarrow \text{ Col } \text{17 } \text{ to } \text{24 }\rightarrow \\ 01010101 &{} \quad 01010101 &{} \quad 00000000\\ 00110011 &{} \quad 10010110 &{} \quad 01010101\\ 00001111 &{} \quad 10010110 &{} \quad 00110011\\ 10010110 &{} \quad 00110011 &{} \quad 00001111\\ 10010110 &{} \quad 00001111 &{} \quad 10010110 \end{array}\right] . \end{aligned}$$

The initial pixel expansion is \(24\).

Now observe that the following columns are common to both \(S^0_{in}\) and \(S^1_{in}\); the \(i\)th listed column of \(S^0_{in}\) equals the \(i\)th listed column of \(S^1_{in}\):

$$\begin{aligned} \begin{array}{c|ccccccccc} S^{0}_{in} &{} 2 &{} 3 &{} 5 &{} 7 &{} 10 &{} 11 &{} 13 &{} 15 &{} 24\\ \hline S^{1}_{in} &{} 16 &{} 22 &{} 23 &{} 9 &{} 8 &{} 24 &{} 20 &{} 1 &{} 7 \end{array} \end{aligned}$$

As described in [1], these common columns can be deleted to form the reduced matrices \(S^{0}_{red}\) and \(S^{1}_{red}\), which still remain basis matrices of the \((4,5)\)-VCS. Moreover, the pixel expansion of the reduced scheme drops to \(15\).
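Deleting common columns is a multiset intersection: a column is removed once from each matrix for every time it occurs in both. Enumerating the initial columns by brute force (our own sketch, feasible only for small \(n\)) confirms the counts for the \((4,5)\)-VCS:

```python
from collections import Counter
from itertools import product

def initial_columns(n, e):
    """Columns of S^e_in of the (n-1,n)-VCS (linear algebraic construction)."""
    B = [set(range(1, n + 1)) - {n + 1 - j} for j in range(1, n + 1)]
    groups = [B[i:i + 2] for i in range(0, n, 2)]
    cols = []
    for grp in groups:
        for x in product((0, 1), repeat=n):
            if len(grp) == 1 and x[0] != 0:   # lone last group (odd n): x_1 = 0
                continue
            if all(sum(x[i - 1] for i in Bj) % 2 == e for Bj in grp):
                cols.append(x)
    return cols

c0 = Counter(initial_columns(5, 0))
c1 = Counter(initial_columns(5, 1))
common = c0 & c1                        # multiset of columns common to both
m_red = sum(c0.values()) - sum(common.values())
print(sum(common.values()), m_red)      # 9 common columns, reduced expansion 15
```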

In general, the above technique can be applied to the initial basis matrices of any \((n-1,n)\)-VCS to obtain reduced basis matrices with smaller pixel expansion. We now determine, case by case, the exact number of common columns occurring in the initial basis matrices \(S^{0}_{in}\) and \(S^{1}_{in}\) of an \((n-1,n)\)-VCS constructed by the linear algebraic method, and find the exact value of the reduced pixel expansion of the scheme. Towards these results the following lemmas play an important role.

Lemma 4

Let \(S^{0}_{in}\) and \(S^{1}_{in}\) be the basis matrices of an \((n-1,n)\)-VCS, \(n\ge 3\), constructed using linear algebraic technique. Then,

  • For even \(n\), the columns with Hamming weights \(0, 1, (n-1)\) and \(n\) cannot be deleted from \(S^{0}_{in}\) and \(S^{1}_{in}\).

  • For odd \(n\), the columns with Hamming weights \(0, 1\) and \(n\) cannot be deleted from \(S^{0}_{in}\) and \(S^{1}_{in}\).

Proof

Let us first consider the case when \(n\) is even. Then for any group \(G_j\), \(1 \le j \le \frac{n}{2}\), there are \(n-2\) independent variables common to both the systems \((j)\) and \((j')\); the values of these variables determine the values of the remaining two variables. Moreover, for every fixed \((n-2)\)-tuple of values of the independent variables, the values of the two dependent variables are equal. Now a column \(c\) with Hamming weight \(w(c)=0\) can appear only as a solution of the system \((j)\) and not of the system \((j')\). So, if \(w(c)=0\) then \(c \in S^{0}_{in}\) only. Again, if \(w(c)=1\) then \(c \in S^{1}_{in}\) only. If \(w(c)=n-1\) then the only possibility is that \((n-3)\) of the \((n-2)\) independent variables assume the value \(1\) and the two dependent variables are both \(1\). So, if \(w(c)=n-1\) then \(c \in S^{0}_{in}\) only. Lastly, when \(w(c)=n\), all the variables assume the value \(1\) and the \(n\)-tuple \((1,1,\ldots ,1)\) can only be a solution of the system \((j')\). Hence, if \(w(c)=n\) then \(c \in S^{1}_{in}\) only.

Using the same arguments we can see that for odd \(n\),

  • if \(w(c)=0\) then \(c \in S^{0}_{in}\) only,

  • if \(w(c)=1\) then \(c\in S^{1}_{in}\) only and

  • if \(w(c)=n\) then \(c\in S^{0}_{in}\) only.

This completes the proof.\(\square \)

Remark

For an \((n-1,n)\)-VCS and \(n\ge 3\)

  • For even \(n\), the possible common columns occurring in \(S^{0}_{in}\) and \(S^{1}_{in}\) are the columns with hamming weights \(2,3,\ldots , n-4,n-3,n-2\).

  • For odd \(n\), the possible common columns for \(S^{0}_{in}\) and \(S^{1}_{in}\) are the columns with hamming weights \(2,3,\ldots , n-3,n-2,n-1\).

Now that we understand the distribution of possible common columns of an \((n-1,n)\)-VCS constructed by the linear algebraic method, we proceed to calculate the exact number of common columns. In what follows we denote by \(d_{j}\) the number of common columns with Hamming weight \(j\). For example, by Lemma 4, \(d_{0} = d_{1} = d_{n}=0\).

Towards finding the exact number of common columns for any \((n-1,n)\)-VCS we divide the analysis into two parts: Sect. 4.2 deals with the case when \(n\) is even, while Sect. 4.3 deals with the case when \(n\) is odd.

4.2 Number of common columns when \(n\) is even

Throughout this subsection, \(n\) is assumed to be even. To find the closed form of the number of common columns, the following lemmas play an important role.

The following lemma enables us to look only at the weights that lie between \(1\) and \(\frac{n}{2}\) while calculating the exact number of common columns occurring in the initial basis matrices.

Lemma 5

For even \(n\ge 4\), the \((n-1,n)\)-VCS constructed using linear algebraic method has the property that \(d_{j} = d_{n-j}\) for all \(j = 1,2,\ldots ,\frac{n}{2}\).

Proof

Recall that \(S^{0}_{in}\) is constructed by concatenating the blocks \(S^{0}_{1}, S^{0}_{2}\),..., \(S^{0}_{\frac{n}{2}}\), which are obtained respectively by solving the systems of equations \((1), (2), \ldots , (\frac{n}{2})\). Now \(S^{1}_{1}, S^{1}_{2}\),..., \(S^{1}_{\frac{n}{2}}\) can be obtained by adding (modulo 2) a particular solution to each of the columns of \(S^{0}_{1}, S^{0}_{2}\),..., \(S^{0}_{\frac{n}{2}}\) respectively. In this case, observe that \((1,1,1,\ldots ,1)\) is a particular solution to each of the systems of equations \((1'), (2'), \ldots ,(\frac{n'}{2})\). Thus if a column \(c\) occurs in \(S^{0}_{in}\) then its complement \(\bar{c}\) occurs in \(S^{1}_{in}\), and vice versa. Therefore if \(c\) is a column common to both \(S^{0}_{in}\) and \(S^{1}_{in}\) then so is \(\bar{c}\). Lastly, if the weight of \(c\) is \(j\) then the weight of \(\bar{c}\) is \((n-j)\). Hence, \(d_{j}=d_{n-j}\) for all \(j=1,2,3,\ldots ,\frac{n}{2}\).\(\square \)

Lemmas 6 and 7 give a closed form formula for the number of common columns of weight \(j, 2 \le j \le \frac{n}{2}\) when \(j\) is even and odd respectively.

Lemma 6

For even \(n\ge 4\), the \((n-1,n)\)-VCS obtained using linear algebraic technique has the property that if \(j\) is even and \(2 \le j \le \frac{n}{2}\) then \(d_{j} = {\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{n-j}}\).

Proof

Let us first consider the first block \(S^{0}_{1}\) of \(S^{0}_{in}\). Suppose \(c\) is a column of weight \(j\), where \(j\) is even, occurring in \(S^{0}_{1}\). Therefore \(c\) is a solution to the system

$$\begin{aligned} \begin{array}{ll} x_{1}+x_{2}+\cdots + x_{n-2}+x_{n-1}&{} =0\\ x_{1}+x_{2}+\cdots + x_{n-2}+x_{n} &{}=0 \end{array} \end{aligned}$$
(3)

It is not hard to see that since \(j\) is even, \(x_{n-1}=x_{n}=0\); that is, the last two entries of the column \(c\), corresponding to \(x_{n-1}\) and \(x_{n}\), are both \(0\). Since \(w(c)=j\), among the variables \(x_{1}, x_{2}, \ldots , x_{n-2}\) exactly \(j\) variables assume the value \(1\) and the rest are all \(0\). Thus there are in total \(\genfrac(){0.0pt}0{n-2}{j}\) possible choices, which implies that there are \(\genfrac(){0.0pt}0{n-2}{j}\) columns of weight \(j\) occurring in \(S^0_{1}\). By a similar argument, each of the remaining \((\frac{n}{2}-1)\) blocks has the same number of weight-\(j\) columns. Therefore, there are altogether \({\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{j}}\) columns of weight \(j\) occurring in \(S^{0}_{in}\).

Now let us consider the first block \(S^{1}_{1}\) of \(S^{1}_{in}\). Suppose \(c\) is a column of weight \(j\), where \(j\) is even, occurring in \(S^{1}_{1}\). Therefore \(c\) is a solution to the system

$$\begin{aligned} \begin{array}{ll} x_{1}+x_{2}+\cdots +x_{n-2}+x_{n-1} &{} =1\\ x_{1}+x_{2}+\cdots +x_{n-2}+x_{n} &{} =1 \end{array} \end{aligned}$$
(4)

Again, it is not hard to see that since \(j\) is even, \(x_{n-1}=x_{n}=1\), that is, the last two entries of column \(c\), corresponding to \(x_{n-1}\) and \(x_{n}\), are both \(1\). Following a similar argument as above we see that there are \(\genfrac(){0.0pt}0{n-2}{j-2}\) many columns of weight \(j\) occurring in \(S^{1}_{1}\). Hence there are \({\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{j-2}}\) many columns of weight \(j\) occurring in \(S^{1}_{in}\).

Observe that, \({\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{j-2}} \le {\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{j}}\) for \(2 \le j \le \frac{n}{2}\).

Now we shall show that if there exists a column \(c\) of weight \(j\), where \(j\) is even, in \(S^{1}_{in}\) then that \(c\) must also occur in \(S^{0}_{in}\).

Towards this let \(c\) be any column of weight \(j\) such that \(c\in S^{1}_{in}\). Then \(c\) must belong to some block, say the block constructed by solving the system of equations,

$$\begin{aligned} \begin{array}{lcl} x_{1}+x_{2}+\cdots +x_{2k-2}+ &{} x_{2k-1} &{} + x_{2k+1}+\cdots +x_{n}=1\\ x_{1}+x_{2}+\cdots +x_{2k-2}+ &{} x_{2k} &{} +x_{2k+1}+\cdots +x_{n}=1 \end{array} \end{aligned}$$
(5)

It is easy to see that \(x_{2k-1}=x_{2k}=1\) and that among the \((n-2)\) remaining variables exactly \((j-2)\) assume the value \(1\), the rest being \(0\). Let us write \(c=(c_1,c_2,\ldots ,c_{2k-1},c_{2k},\ldots ,c_{n})\). Since \(n\) is even, there are exactly \(\frac{n}{2}\) pairs of the form \((c_{2i-1},c_{2i})\), namely \((c_{1},c_{2}), (c_{3},c_{4}), \ldots , (c_{n-1},c_{n})\). Among these pairs we have already seen that \((c_{2k-1},c_{2k})=(1,1)\). Since \(j-2 \le \frac{n}{2}-2\), by the pigeonhole principle there exists at least one pair, say \((c_{2r-1},c_{2r})=(0,0)\). It is now easy to observe that \(c\) is also a solution to the system

$$\begin{aligned} \begin{array}{lcl} x_{1}+x_{2}+\cdots +x_{2r-2}+ &{} x_{2r-1} &{} +x_{2r+1}+\cdots +x_{n}=0\\ x_{1}+x_{2}+\cdots +x_{2r-2}+ &{} x_{2r} &{} +x_{2r+1}+\cdots +x_{n}=0 \end{array} \end{aligned}$$
(6)

Thus \(c\in S^{0}_{in}\). Again it is not hard to see that if a column \(c\) of weight \(j\), where \(j\) is even, occurs multiple times in \(S^{1}_{in}\) then \(c\) also occurs at least that many times in \(S^{0}_{in}\). Hence the proof follows.\(\square \)

Lemma 7

For even \(n\ge 4\), the \((n-1,n)\)-VCS constructed using the linear algebraic method has the property that if \(j\) is odd and \(3 \le j \le \frac{n}{2}\) then \(d_{j}\)= \({\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{j-2}}\).

Proof

The proof is similar to that of Lemma 6.\(\square \)
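For small even \(n\), the counts of Lemmas 5, 6 and 7 can be checked exhaustively. The script below (our own verification aid) recomputes the common-column multiset for \(n=6\) and tallies it by Hamming weight; the result agrees with the formulas of Lemmas 6 and 7 and with the symmetry of Lemma 5.

```python
from collections import Counter
from itertools import product
from math import comb

def initial_columns(n, e):
    """Columns of S^e_in of the (n-1,n)-VCS (linear algebraic construction)."""
    B = [set(range(1, n + 1)) - {n + 1 - j} for j in range(1, n + 1)]
    groups = [B[i:i + 2] for i in range(0, n, 2)]
    cols = []
    for grp in groups:
        for x in product((0, 1), repeat=n):
            if len(grp) == 1 and x[0] != 0:
                continue
            if all(sum(x[i - 1] for i in Bj) % 2 == e for Bj in grp):
                cols.append(x)
    return cols

n = 6
common = Counter(initial_columns(n, 0)) & Counter(initial_columns(n, 1))
d = Counter()
for col, mult in common.items():
    d[sum(col)] += mult                 # group common columns by Hamming weight
print(dict(sorted(d.items())))          # {2: 3, 3: 12, 4: 3}
```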

Now we are in a position to calculate the exact value of reduced pixel expansion \(m_{red}\). The following theorem provides the exact value of pixel expansion of an efficient \((n-1,n)\)-VCS with basis matrices \(S^{0}_{red}\) and \(S^{1}_{red}\) having no common columns.

Theorem 4.1

Let \(n\ge 4\) be even. Then there exists an \((n-1,n)\)-VCS obtained by linear algebraic technique having pixel expansion \(m_{red}={\displaystyle \frac{n}{4}\genfrac(){0.0pt}0{n}{\frac{n}{2}}}\) and relative contrast \(\frac{1}{m_{red}}\).

Proof

Let \(\mathcal {K}=\{k:2<k\le \frac{n}{2}~\text{ and }~k~\text{ is } \text{ odd }\}\) and \(\mathcal {L}=\{k:2\le k\le \frac{n}{2}~\text{ and }~k~\text{ is } \text{ even }\}\). Then the reduced pixel expansion is given by

$$\begin{aligned} \begin{array}{ll} m_{red}&{} = m_{in}-\displaystyle \sum _{2\le k\le n-2} d_{k} \\ &{} =m_{in}-\left[ \displaystyle \sum _{2\le k\le \frac{n}{2}} d_{k}+\displaystyle \sum _{\frac{n}{2}<k\le n-2} d_{k}\right] \\ &{} =m_{in}-\left[ \displaystyle \sum _{2\le k\le \frac{n}{2}} d_{k}+\displaystyle \sum _{2\le k< \frac{n}{2}} d_{n-k}\right] . \end{array} \end{aligned}$$

Therefore by Lemma 5,

$$\begin{aligned} m_{red}= & {} m_{in}- 2\displaystyle \sum _{2\le k\le \frac{n}{2}} d_{k} + d_{\frac{n}{2}} \\= & {} m_{in}-2\left[ \displaystyle \sum _{k\in \mathcal {K}} d_{k}+\displaystyle \sum _{k\in \mathcal {L}} d_{k}\right] + d_{\frac{n}{2}} \\= & {} m_{in}-n\left[ \displaystyle \sum _{k\in \mathcal {K}} \genfrac(){0.0pt}0{n-2}{k-2}+\displaystyle \sum _{k\in \mathcal {L}} \genfrac(){0.0pt}0{n-2}{k-2}\right] +d_{\frac{n}{2}} \\= & {} m_{in}-\frac{n}{2}\displaystyle \sum _{0\le k\le n-2} \genfrac(){0.0pt}0{n-2}{k}+ \frac{n}{2}\genfrac(){0.0pt}0{n-2}{\frac{n}{2}-1}+d_{\frac{n}{2}} \\= & {} m_{in}-\frac{n}{2}\, 2^{n-2}+ \frac{n}{2}\genfrac(){0.0pt}0{n-2}{\frac{n}{2}-1}+d_{\frac{n}{2}}. \end{aligned}$$

By Lemma 3, \(m_{in}={{\displaystyle \frac{n}{2}}}2^{n-2}\), and by Lemmas 6 and 7, \(d_{\frac{n}{2}}={\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{\frac{n}{2}-2}}\). Therefore,

$$\begin{aligned} m_{red}={{ \displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-2}{\frac{n}{2}-1}}}+ {{\displaystyle \frac{n}{2} \genfrac(){0.0pt}0{n-2}{\frac{n}{2}-2}}}. \end{aligned}$$

Using the identities on binomial coefficients the above expression reduces to \(m_{red}={\displaystyle \frac{n}{4}\genfrac(){0.0pt}0{n}{\frac{n}{2}}}\). This completes the proof of the theorem. \(\square \)
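Theorem 4.1 can likewise be cross-checked against brute force for small even \(n\): delete the multiset of common columns from the enumerated initial matrices and compare with the closed form. The following is our own check, not part of the proof.

```python
from collections import Counter
from itertools import product
from math import comb

def initial_columns(n, e):
    """Columns of S^e_in of the (n-1,n)-VCS (linear algebraic construction)."""
    B = [set(range(1, n + 1)) - {n + 1 - j} for j in range(1, n + 1)]
    groups = [B[i:i + 2] for i in range(0, n, 2)]
    cols = []
    for grp in groups:
        for x in product((0, 1), repeat=n):
            if len(grp) == 1 and x[0] != 0:
                continue
            if all(sum(x[i - 1] for i in Bj) % 2 == e for Bj in grp):
                cols.append(x)
    return cols

def reduced_expansion(n):
    """m_red after deleting the multiset of columns common to S^0_in and S^1_in."""
    c0, c1 = Counter(initial_columns(n, 0)), Counter(initial_columns(n, 1))
    return sum(c0.values()) - sum((c0 & c1).values())

for n in (4, 6):
    print(n, reduced_expansion(n), n * comb(n, n // 2) // 4)  # the two values agree
```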

We have found the exact value of the pixel expansion of an \((n-1,n)\)-VCS constructed using the linear algebraic technique when \(n\) is even. In the next subsection we find the same for odd \(n\ge 3\). For this we take an approach similar to that described in Lemma 1.

4.3 Number of common columns when \(n\) is odd

Throughout this subsection we assume that \(n\) is odd. To start with, we consider the set of participants to be the set \({\mathcal {P}}=\{2,3, \ldots ,n+1\}\) consisting of \(n\) many participants where \(n\ge 3\) is odd. The minimal qualified sets arranged in the lexicographic order are given by \({B}_{1}=\{2,3, \ldots ,n\}, {B}_{2}=\{2,3, \ldots ,n-1,n+1\}, \ldots , {B}_{n}=\{3,4, \ldots ,n+1\}\). As described in Sect. 4.1, the above collection of minimal qualified sets can be grouped as follows: \(G_{1}=({B}_{1},{B}_{2}), G_{2}=({B}_{3},{B}_{4}), \ldots , G_{\lceil \frac{n}{2}\rceil }=({B}_{n})\). The initial basis matrices can be found by solving the appropriate systems of linear equations corresponding to the above grouping.

We now transform the access structure of the \((n-1,n)\)-VCS to the access structure of an \((n,n+1)\)-VCS in the following manner:

We incorporate another participant \(1\) into the set of participants and define \({\mathcal {P}}^{*}=\{1\}\cup {\mathcal {P}}=\{1,2,3, \ldots ,n+1\}\), \({\mathcal {B}}^{*}_{1}=\{1\}\cup {B}_{1}\), \({\mathcal {B}}^{*}_{2}=\{1\}\cup {B}_{2}, \ldots , {\mathcal {B}}^{*}_{n}=\{1\}\cup {B}_{n}\), and introduce \({\mathcal {B}}^{*}_{n+1}=\{2\}\cup {B}_{n}\). These \((n+1)\) subsets \({\mathcal {B}}^{*}_{1}, {\mathcal {B}}^{*}_{2}, \ldots , {\mathcal {B}}^{*}_{n+1}\) are precisely the minimal qualified sets of an \((n,n+1)\)-VCS on the set of participants \(\mathcal {P^{*}}\), where \(n+1\) is even. As already described, these sets can be grouped into \(\frac{n+1}{2}\) groups, namely \(\mathcal {G}^{*}_{1}, \mathcal {G}^{*}_{2}, \ldots , \mathcal {G}^{*}_{\frac{n+1}{2}}\), and by solving the corresponding linear equations the basis matrices for the \((n,n+1)\)-VCS are obtained. By Theorem 4.1, the reduced pixel expansion of the transformed scheme is \(\frac{n+1}{4}\genfrac(){0.0pt}0{n+1}{\frac{n+1}{2}}\). Using this we calculate the exact value of the reduced pixel expansion for the \((n-1,n)\)-VCS when \(n\ge 3\) is odd.
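The transformation only relabels complements: \({\mathcal {B}}^{*}_{i}={\mathcal {P}}^{*}\setminus \{n+2-i\}\) for \(i\le n\) and \({\mathcal {B}}^{*}_{n+1}={\mathcal {P}}^{*}\setminus \{1\}\), so together the \({\mathcal {B}}^{*}_{i}\) are exactly the \(n+1\) subsets of \({\mathcal {P}}^{*}\) of size \(n\). The sketch below (our own helper) makes the transformation concrete.

```python
from itertools import combinations

def lift_access_structure(n):
    """Lift the (n-1,n)-VCS on P = {2,...,n+1} (n odd) to the (n,n+1)-VCS
    on P* = {1,...,n+1}: B*_i = {1} | B_i for i <= n and B*_{n+1} = {2} | B_n."""
    P = set(range(2, n + 2))
    B = [P - {n + 2 - j} for j in range(1, n + 1)]   # lexicographic B_1, ..., B_n
    return [frozenset({1} | b) for b in B] + [frozenset({2} | B[-1])]

Bstar = lift_access_structure(5)
all_5_subsets = {frozenset(c) for c in combinations(range(1, 7), 5)}
print(set(Bstar) == all_5_subsets)   # True: minimal qualified sets of a (5,6)-VCS
```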

Note: We will use the following notations and symbols for the rest of this section. All the constructions of basis matrices are done using the linear algebraic technique. We will denote by \(S^{0}_{in}(k,r)\) and \(S^{1}_{in}(k,r)\) the initial basis matrices of the \((k,r)\)-VCS. \(S^{0}_{red}(k,r)\) and \(S^{1}_{red}(k,r)\) will denote the reduced basis matrices of the \((k,r)\)-VCS. To distinguish between the linear equations corresponding to the \((n-1,n)\)-VCS and \((n,n+1)\)-VCS we will use the variables \(x_{2},x_{3},\ldots ,x_{n+1}\) corresponding to the participants in \({\mathcal {P}}\) and the variables \(y_{1},y_{2},\ldots ,y_{n+1}\) corresponding to the participants in \(\mathcal {P^{*}}\).

Now we proceed to find the exact pixel expansion of the reduced basis matrices when \(n\) is odd. We first give the overview of the steps. First, in Lemma 8 we find the relationship between the initial pixel expansions of \((n-1,n)\)-VCS and \((n,n+1)\)-VCS. From Lemmas 9 to 15 we calculate the number of common columns that can be deleted from the initial basis matrices for both cases and find the relationship between the number of common columns. Lastly, Theorem 4.2 gives the closed form of the reduced pixel expansion.

Lemma 8

Let \(m_{in}\) be the initial pixel expansion of the \((n-1,n)\)-VCS obtained by the linear algebraic technique, where \(n\ge 3\) is odd. If \(m^{*}_{in}\) denotes the initial pixel expansion of the \((n,n+1)\)-VCS then \(m^{*}_{in}\)= \(2m_{in}\).

Proof

The proof is immediate: by Lemma 3, \(m^{*}_{in}=\frac{n+1}{2}2^{n-1}=2\cdot \frac{n+1}{2}2^{n-2}=2m_{in}\).\(\square \)

In the following two lemmas we show that for every column \(c\) in the initial basis matrices of the \((n-1,n)\)-VCS, where \(n\ge 3\) is odd, that does not appear in the last block \(S^{0}_{\lceil \frac{n}{2}\rceil }\) of \(S^{0}_{in}(n-1,n)\) or \(S^{1}_{\lceil \frac{n}{2}\rceil }\) of \(S^{1}_{in}(n-1,n)\), exactly two columns, viz. \((0,c)\) and \((1,c)\), occur in the initial basis matrices of the \((n,n+1)\)-VCS.

Lemma 9

Let \(c\) be a column such that \(c\in S^{0}_{in}(n-1,n)\), where \(n\ge 3\) is odd. Further assume that \(c\) appears as a solution to any system of equations (as described earlier) except the following system of equations

$$\begin{aligned} \left\{ \begin{array}{rl} x_{3}+x_{4}+\cdots +x_{n+1}&{} =0\\ x_{2}&{}=0 \end{array} \right. \end{aligned}$$
(7)

Then the columns \((0,c)\) and \((1,c)\) occur respectively in the initial basis matrices \(S^{0}_{in}(n,n+1)\) and \(S^{1}_{in}(n,n+1)\) of the \((n,n+1)\)-VCS.

Proof

Let \(c=(b_2,b_3,\ldots ,b_{n+1})\), where each \(b_{i}\in \{0,1\}\). By the given conditions \(c\) is a solution to some system, say

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{2}+\cdots +x_{2k-2}+x_{2k-1}+x_{2k+1}+\cdots +x_{n+1}=0\\ x_{2}+\cdots +x_{2k-2}+~~x_{2k}~+x_{2k+1}+\cdots +x_{n+1}=0 \end{array}\right. } \end{aligned}$$
(8)

Then it is easy to see that \((0,b_2,b_3,\ldots ,b_{n+1})\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{2}+\cdots +y_{2k-2}+y_{2k-1}+y_{2k+1}+\cdots +y_{n+1}=0\\ y_{1}+y_{2}+\cdots +y_{2k-2}+~~y_{2k}~+y_{2k+1}+\cdots +y_{n+1}=0 \end{array}\right. } \end{aligned}$$
(9)

corresponding to the \((n,n+1)\)-VCS and \((1,b_2,b_3,\ldots ,b_{n+1})\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{2}+\cdots +y_{2k-2}+y_{2k-1}+y_{2k+1}+\cdots +y_{n+1}=1\\ y_{1}+y_{2}+\cdots +y_{2k-2}+~~y_{2k}~+y_{2k+1}+\cdots +y_{n+1}=1 \end{array}\right. } \end{aligned}$$
(10)

corresponding to the \((n,n+1)\)-VCS. Hence the proof.\(\square \)

Lemma 10

Let \(c\) be a column such that \(c\in S^{1}_{in}(n-1,n)\), where \(n\ge 3\) is odd. Further assume that \(c\) appears as a solution to any system of equations except the following system of equations

$$\begin{aligned} \left\{ \begin{array}{rl} x_{3}+x_{4}+\cdots +x_{n+1} &{} =1\\ x_{2}&{}=0 \end{array} \right. \end{aligned}$$
(11)

Then the columns \((0,c)\) and \((1,c)\) occur respectively in the initial basis matrices \(S^{1}_{in}(n,n+1)\) and \(S^{0}_{in}(n,n+1)\) of the \((n,n+1)\)-VCS.

Proof

Same as the proof of the Lemma 9.\(\square \)

At this point it is not clear what happens if a column occurs in the block \(S^{0}_{\lceil \frac{n}{2}\rceil }\) of \(S^{0}_{in}(n-1,n)\) or occurs in the block \(S^{1}_{\lceil \frac{n}{2}\rceil }\) of \(S^{1}_{in}(n-1,n)\).

Let us recall that the system of linear equations corresponding to the last block \(S^{0}_{\lceil \frac{n}{2}\rceil }\) of \(S^{0}_{in}(n-1,n)\) is

$$\begin{aligned} \left\{ \begin{array}{rl} x_{3}+x_{4}+\cdots +x_{n+1}&{}=0\\ x_{2}&{}=0 \end{array} \right. \end{aligned}$$
(12)

and the system corresponding to \(S^{1}_{\lceil \frac{n}{2}\rceil }\) of \(S^{1}_{in}(n-1,n)\) is

$$\begin{aligned} \left\{ \begin{array}{rl} x_{3}+x_{4}+\cdots +x_{n+1}&{}=1\\ x_{2}&{}=0 \end{array} \right. \end{aligned}$$
(13)

A typical solution of these particular systems can be written as \((0,b_{3},\ldots ,b_{n+1})\), where each \(b_{i}\in \{0,1\}\).

Let us consider the system of linear equations given by (12). Observe that, by our construction method, this system of linear equations corresponding to the \((n-1,n)\)-VCS is transformed into the system of linear equations

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=0\\ y_{2}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=0 \end{array}\right. } \end{aligned}$$
(14)

corresponding to the \((n,n+1)\)-VCS. Moreover the set of all solutions to the system (14) constitutes the columns of the \(\frac{n+1}{2}\)-th block of \(S^{0}_{in}(n,n+1)\). In (14) the variables \(y_{3}, y_{4}, \ldots , y_{n+1}\) are independent and for each of the \(2^{n-1}\) many possible tuples of values of them the dependent variables \(y_{1}\) and \(y_{2}\) assume the same value. A similar transformation also holds for the system of linear equations given by (13). The corresponding system of linear equations becomes

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=1\\ y_{2}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=1 \end{array}\right. } \end{aligned}$$
(15)

Now we have the following lemmas.

Lemma 11

Suppose \(n\ge 3\) is odd. Let the column \((0,b_{3},\ldots ,b_{n+1})\), where each \(b_{i}\in \{0,1\}\), belong to the block \(S^{0}_{\lceil \frac{n}{2}\rceil }\) corresponding to the \((n-1,n)\)-VCS. Then the column \((0,0,b_{3},\ldots ,b_{n+1})\) occurs in the \(\frac{n+1}{2}\)-th block of \(S^{0}_{in}(n,n+1)\) and \((1,1,b_{3},\ldots ,b_{n+1})\) occurs in the \(\frac{n+1}{2}\)-th block of \(S^{1}_{in}(n,n+1)\).

Proof

By hypothesis, \((0,b_{3},\ldots ,b_{n+1})\) is a solution to the system

$$\begin{aligned} \left\{ \begin{array}{rl} x_{3}+x_{4}+\cdots +x_{n+1}&{}=0\\ x_{2}&{}=0 \end{array} \right. \end{aligned}$$
(16)

It immediately follows that \((0,0,b_{3},\ldots ,b_{n+1})\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=0\\ y_{2}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=0 \end{array}\right. } \end{aligned}$$
(17)

Again it is not hard to see that \((1,1,b_{3},\ldots ,b_{n+1})\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=1\\ y_{2}+y_{3}+\cdots +y_{n-1}+y_{n}+y_{n+1}=1 \end{array}\right. } \end{aligned}$$
(18)

Hence the proof follows.\(\square \)

Lemma 12

Suppose \(n\ge 3\) is odd. Let the column \((0,b_{3},\ldots ,b_{n+1})\), where each \(b_{i}\in \{0,1\}\), belong to the block \(S^{1}_{\lceil \frac{n}{2}\rceil }\) corresponding to the \((n-1,n)\)-VCS. Then the column \((0,0,b_{3},\ldots ,b_{n+1})\) occurs in the \(\frac{n+1}{2}\)-th block of \(S^{1}_{in}(n,n+1)\) and \((1,1,b_{3},\ldots ,b_{n+1})\) occurs in the \(\frac{n+1}{2}\)-th block of \(S^{0}_{in}(n,n+1)\).

Proof

Same as the proof of Lemma 11.\(\square \)

Now that the redistribution of the columns is well understood, we proceed to find the relationship between the number of deleted columns in both cases.

Lemma 13

Suppose \(n\ge 3\) is odd. Let us, for the time being, restrict our view to the first \(\frac{n-1}{2}\) many blocks of the initial basis matrices \(S^{0}_{in}(n-1,n)\) and \(S^{1}_{in}(n-1,n)\). Further assume that \(c\) is a common column occurring in these restricted portions of the basis matrices. Then both the columns \((0,c)\) and \((1,c)\) are common columns occurring in the basis matrices \(S^{0}_{in}(n,n+1)\) and \(S^{1}_{in}(n,n+1)\) of the \((n,n+1)\)-VCS.

Proof

Let \(c\) occur as solution to the systems

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{2}+\cdots +x_{2k-2}+x_{2k-1}+x_{2k+1}+\cdots +x_{n+1}=0\\ x_{2}+\cdots +x_{2k-2}+~~x_{2k}~+x_{2k+1}+\cdots +x_{n+1}=0 \end{array}\right. } \end{aligned}$$
(19)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{2}+\cdots +x_{2r-2}+x_{2r-1}+x_{2r+1}+\cdots +x_{n+1}=1\\ x_{2}+\cdots +x_{2r-2}+~~x_{2r}~+x_{2r+1}+\cdots +x_{n+1}=1 \end{array}\right. } \end{aligned}$$
(20)

corresponding to the \((n-1,n)\)-VCS, where \(k\ne r\). That is, \(c\) occurs in \(S^{0}_{in}(n-1,n)\) and also in \(S^{1}_{in}(n-1,n)\). Since \(c\) is a solution to (19), by Lemma 9, \((0,c)\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{2}+\cdots +y_{2k-2}+y_{2k-1}+y_{2k+1}+\cdots +y_{n+1}=0\\ y_{1}+y_{2}+\cdots +y_{2k-2}+~~y_{2k}~+y_{2k+1}+\cdots +y_{n+1}=0 \end{array}\right. } \end{aligned}$$
(21)

and \((1,c)\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{2}+\cdots +y_{2r-2}+y_{2r-1}+y_{2r+1}+\cdots +y_{n+1}=1\\ y_{1}+y_{2}+\cdots +y_{2r-2}+~~y_{2r}~+y_{2r+1}+\cdots +y_{n+1}=1 \end{array}\right. } \end{aligned}$$
(22)

corresponding to the \((n,n+1)\)-VCS. In other words, \((0,c)\in S^{0}_{in}(n,n+1)\) and \((1,c)\in S^{1}_{in}(n,n+1)\). Similarly by Lemma 10, \((0,c)\) and \((1,c)\) occur as columns in \(S^{1}_{in}(n,n+1)\) and \(S^{0}_{in}(n,n+1)\) respectively. The proof follows.\(\square \)

Therefore if \(d\) is the number of columns deleted from the initial basis matrices of the \((n-1,n)\)-VCS where the columns satisfy the conditions of Lemma 13, then \(2d\) many corresponding columns will be deleted from the initial basis matrices of the \((n,n+1)\)-VCS. After deleting those common columns we are left with \((m_{in}-d)\) many columns in the partly reduced basis matrices of the \((n-1,n)\)-VCS and \((m^{*}_{in}-2d)\) many columns in the partly reduced basis matrices of the \((n,n+1)\)-VCS. At this stage, if \(c\) is any column common to both the partly reduced basis matrices of the \((n-1,n)\)-VCS, then either \(c\) occurs in the block \(S^{0}_{\lceil \frac{n}{2}\rceil }\) of \(S^{0}_{in}(n-1,n)\) or in the block \(S^{1}_{\lceil \frac{n}{2}\rceil }\) of \(S^{1}_{in}(n-1,n)\), but not both.

Lemma 14

Let \(n\ge 3\) be odd. Further let \(c\) be a common column occurring in the partly reduced basis matrices of the \((n-1,n)\)-VCS such that \(c\in S^{0}_{\lceil \frac{n}{2}\rceil }\). Then corresponding to this \(c\) there are two columns namely, \((0,c)\) and \(\overline{(0,c)}\), the complement of \((0,c)\), that are common columns occurring in the partly reduced basis matrices of the \((n,n+1)\)-VCS.

Proof

As \(c\in S^{0}_{\lceil \frac{n}{2}\rceil }\) and \(c\) is a common column appearing in the partly reduced basis matrix \(S^{0}_{in}(n-1,n)\), \(c\) must belong to some \(i\)-th block of the partly reduced \(S^{1}_{in}(n-1,n)\), where \(i\ne \lceil \frac{n}{2}\rceil \). By Lemma 10, \((0,c)\) appears as a column in the \(i\)-th block of the partly reduced \(S^{1}_{in}(n,n+1)\) corresponding to the \((n,n+1)\)-VCS. Again since \(c\in S^{0}_{\lceil \frac{n}{2}\rceil }\) corresponding to the \((n-1,n)\)-VCS, by Lemma 11, \((0,c)\) appears as a column in the \(\frac{n+1}{2}\)-th block of \(S^{0}_{in}(n,n+1)\) corresponding to the \((n,n+1)\)-VCS. Thus \((0,c)\) is a common column in the partly reduced basis matrices of the \((n,n+1)\)-VCS.

Now as \((0,c)\) appears as a column in the \(\frac{n+1}{2}\)-th block of \(S^{0}_{in}(n,n+1)\) therefore it must be a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{3}+\cdots +y_{n+1}=0\\ y_{2}+y_{3}+\cdots +y_{n+1}=0 \end{array}\right. } \end{aligned}$$
(23)

Hence its complement \(\overline{(0,c)}\) is a solution to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{1}+y_{3}+\cdots +y_{n+1}=1\\ y_{2}+y_{3}+\cdots +y_{n+1}=1 \end{array}\right. } \end{aligned}$$
(24)

This follows from the fact that \(y_{1}=1,y_{2}=1,\ldots ,y_{n+1}=1\) is a particular solution to the system (24), as \(n\) is odd. Similarly, as \((0,c)\) occurs in the \(i\)-th block of the partly reduced matrix \(S^{1}_{in}(n,n+1)\), its complement \(\overline{(0,c)}\) occurs in the \(i\)-th block of \(S^{0}_{in}(n,n+1)\) corresponding to the \((n,n+1)\)-VCS.

The proof follows.\(\square \)
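The complementation step above rests on the fact that, for odd \(n\), each equation of (23) involves an odd number of variables, so complementing a solution flips both right-hand sides. A brute-force sanity check of this step for a hypothetical small case (\(n=5\)) can be sketched as:

```python
from itertools import product

n = 5  # odd; each equation of (23) then involves n variables (an odd number)
eqA = [1] + list(range(3, n + 2))   # support of y1 + y3 + ... + y_{n+1}
eqB = [2] + list(range(3, n + 2))   # support of y2 + y3 + ... + y_{n+1}

for y in product([0, 1], repeat=n + 1):
    v = dict(zip(range(1, n + 2), y))
    if all(sum(v[i] for i in eq) % 2 == 0 for eq in (eqA, eqB)):
        w = {i: 1 - v[i] for i in v}          # the complemented column
        # flipping an odd number of summands flips the parity of each sum
        assert all(sum(w[i] for i in eq) % 2 == 1 for eq in (eqA, eqB))
```

The loop confirms that every solution of the homogeneous system (23) complements to a solution of the inhomogeneous system (24).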

By essentially the same line of argument we can prove the following lemma.

Lemma 15

Let \(n\ge 3\) be odd. Further let \(c\) be a common column occurring in the partly reduced basis matrices of the \((n-1,n)\)-VCS such that \(c\in S^{1}_{\lceil \frac{n}{2}\rceil }\). Then corresponding to this \(c\) there are two columns namely, \((0,c)\) and \(\overline{(0,c)}\), the complement of \((0,c)\), that are common columns occurring in the partly reduced basis matrices of the \((n,n+1)\)-VCS.

If \(d'\) is the number of such common columns occurring in the partly reduced basis matrices of the \((n-1,n)\)-VCS, then there are \(2d'\) many common columns occurring in the partly reduced basis matrices of the \((n,n+1)\)-VCS. Thus we have shown that to each common column appearing in the initial basis matrices of the \((n-1,n)\)-VCS there correspond two columns common to the initial basis matrices of the \((n,n+1)\)-VCS. Moreover, it is easy to see that the common columns occurring in the basis matrices of the \((n,n+1)\)-VCS arise only in this manner; there are no further common columns. Hence the exact pixel expansions (after reduction) of an \((n-1,n)\)-VCS and the \((n,n+1)\)-VCS are related via

$$\begin{aligned} \begin{array}{ll} m^{*}_{red}&{} ={\displaystyle {m^{*}_{in} - 2(d+d')}} \\ &{} ={\displaystyle {2m_{in} - 2(d+d')}} \\ &{}={\displaystyle {2(m_{in} - d-d')}} \\ &{} =2m_{red}. \end{array} \end{aligned}$$

Thus we have the following theorem.

Theorem 4.2

Let \(m_{red}\) be the reduced pixel expansion of an \((n-1,n)\)-VCS where \(n\ge 3\) is odd. Then

$$\begin{aligned} m_{red}={\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-1}{\frac{n-1}{2}}}. \end{aligned}$$

Proof

Since \(n+1\) is even, the reduced pixel expansion of the \((n,n+1)\)-VCS is \(m^{*}_{red}=\frac{n+1}{4}\genfrac(){0.0pt}0{n+1}{\frac{n+1}{2}}\); combining this with the relation \(m^{*}_{red}=2m_{red}\) derived above, the reduced pixel expansion \(m_{red}\) is given by

$$\begin{aligned} m_{red}={\displaystyle \frac{1}{2}\frac{n+1}{4}\genfrac(){0.0pt}0{n+1}{\frac{n+1}{2}}} ={\displaystyle \frac{n}{2}\genfrac(){0.0pt}0{n-1}{\frac{n-1}{2}}}. \end{aligned}$$

\(\square \)
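The last step of the proof uses a standard binomial simplification; as a sanity check, the two closed forms can be compared numerically for small odd \(n\) (a sketch, assuming nothing beyond the stated formulas):

```python
from fractions import Fraction
from math import comb

# Verify (1/2)*((n+1)/4)*C(n+1,(n+1)/2) == (n/2)*C(n-1,(n-1)/2) for odd n,
# the simplification used in the proof of Theorem 4.2.
for n in range(3, 101, 2):
    lhs = Fraction(1, 2) * Fraction(n + 1, 4) * comb(n + 1, (n + 1) // 2)
    rhs = Fraction(n, 2) * comb(n - 1, (n - 1) // 2)
    assert lhs == rhs
```

For instance, \(n=3\) gives \(m_{red}=3\) and \(n=5\) gives \(m_{red}=15\).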

Table 1 Comparison of pixel expansions for different \((n-1,n)\)-VCSs of various values of \(n\)

Remark

Thus we have given a closed form formula for the pixel expansion and relative contrast of an \((n-1,n)\)-VCS constructed using the linear algebraic technique. It is to be noted that for odd \(n \ge 3\), our closed form formula for the pixel expansion is exactly half of that given in Lemma 4.4 of [12] and in \((ii)\) of Sect. 3.2 of [19]. For even \(n\), however, the expression we have found for \(m_{red}\) coincides with the formulas that appear in [12, 19]. Further, Table 1 shows that our proposed closed form for the reduced pixel expansion of an \((n-1,n)\)-VCS matches, for every possible value of \(n\), the numerical values of the optimal pixel expansions as shown in [23].

We are now in a position to calculate the exact pixel expansion of any \(t\)-\((n-1,n)^{*}\)-VCS for a meaningful triplet \((t,n-1,n)\) such that \((0,n-1-t,n-t)\) is also a meaningful one. Using Theorem 3.2, we have the following theorem.

Theorem 4.3

For any meaningful triplet \((t,n-1,n)\) such that \((0,n-1-t,n-t)\) is also meaningful, there exists a \(t\)-\((n-1,n)^*\)-VCS having reduced pixel expansion

$$\begin{aligned} m_{red} = \left\{ \begin{array}{rcl} {{2^t}~\frac{n-t}{4}~\genfrac(){0.0pt}0{n-t}{\frac{n-t}{2}}}, &{} \text{ if } &{} n-t \text{ is even} \\ {{2^t}~\frac{n-t}{2}~\genfrac(){0.0pt}0{n-t-1}{\frac{n-t-1}{2}}}, &{} \text{ if } &{} n-t \text{ is odd} \\ \end{array}\right. \end{aligned}$$

and relative contrast \(\frac{1}{m_{red}}\).
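The closed form can be written down directly. A sketch follows; for odd \(n-t\) the binomial is taken over \(n-t-1\), the form consistent with Theorem 4.2 at \(t=0\):

```python
from math import comb

def m_red_star(t, n):
    """Reduced pixel expansion of a t-(n-1,n)*-VCS (Theorem 4.3 sketch).
    For odd n-t the binomial is over n-t-1, agreeing with Theorem 4.2
    when t = 0. Assumes the triplets involved are meaningful."""
    s = n - t
    if s % 2 == 0:
        return 2 ** t * s * comb(s, s // 2) // 4
    return 2 ** t * s * comb(s - 1, (s - 1) // 2) // 2

# t = 0 recovers the (n-1,n)-VCS values: m_red(2,3) = 3, m_red(4,5) = 15
assert m_red_star(0, 3) == 3 and m_red_star(0, 5) == 15
```

Note that the value scales by \(2^t\): each essential participant doubles the expansion relative to the underlying \((n-1-t,n-t)\)-VCS.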

4.4 Some features of the basis matrices of \((n-1,n)\)-VCS with even \(n\)

We analyze the structures of the basis matrices \(S^{0}\) and \(S^{1}\) for an \((n-1,n)\)-VCS, with even \(n\), obtained by the process described so far. In this case \(n-1\) is odd, and we have already observed in Lemma 4 that the \(n\)-tuple \((1,1,1,\ldots ,1)\) is a particular solution to each of the systems of equations (as in Sect. 4.1)

$$\begin{aligned} \left. \begin{array}{ll} f_{B_{2j-1}}&{}=1\\ f_{B_{2j}}&{}=1\\ \end{array} \right\} \end{aligned}$$

for all \(j\) such that \(1\le j\le \frac{n}{2}\).

Now let us consider the initial basis matrix \(S^{0}_{in}\). We observe that once the first block \(S^{0}_{1}\) is constructed the rest of the blocks \(S^{0}_{2}, S^{0}_{3}\),..., \(S^{0}_{n/2}\) can be constructed by suitable permutations of rows. More explicitly, recall that the block \(S^{0}_{1}\) contains, as its columns, the solution vectors of the system

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{1}+x_{2}+\cdots +x_{n-2}+x_{n-1}=0\\ x_{1}+x_{2}+\cdots +x_{n-2}+~~~x_{n}=0 \end{array}\right. } \end{aligned}$$
(25)

In the above system of equations, observe that the variables \(x_{n-1}\) and \(x_{n}\) are dependent on the rest of the variables. Thus all possible \(2^{n-2}\) binary column vectors occur as columns of the \((n-2)\times 2^{n-2}\) submatrix formed by the first \(n-2\) rows, and the last two rows (which hold the values of \(x_{n-1}\) and \(x_{n}\)) are determined from them. For constructing the second block \(S^0_{2}\) we solve the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{1}+x_{2}+\cdots +x_{n-4}+x_{n-3}+x_{n-1}+x_{n}=0\\ x_{1}+x_{2}+\cdots +x_{n-4}+x_{n-2}+x_{n-1}+x_{n}=0 \end{array}\right. } \end{aligned}$$
(26)

In this system \(x_{n-3}\) and \(x_{n-2}\) are dependent on the rest of the variables. Thus \(S^{0}_{2}\) can be obtained from \(S^{0}_{1}\) by interchanging row \(n-1\) with row \(n-3\) and row \(n\) with row \(n-2\), keeping all other rows fixed. In a similar manner we can construct \(S^{0}_{3}, S^{0}_{4}, \ldots , S^{0}_{n/2}\).

Following the same steps we can construct the matrix \(S^1_{in}\) from its first block \(S^1_{1}\). As noted, the \(n\)-tuple \((1,1,\ldots ,1)\) is a particular solution of the system

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{1}+x_{2}+\cdots +x_{n-2}+x_{n-1}=1\\ x_{1}+x_{2}+\cdots +x_{n-2}+~~~x_{n}=1 \end{array}\right. } \end{aligned}$$
(27)

therefore the solutions of this system can be obtained by adding (modulo 2) the particular solution \((1,1,\ldots ,1)\) to each of the solutions of the system \((25)\). Hence, \(\overline{S^{0}_{i}} = S^{1}_{i}\) for all \(i\) such that \(1\le i\le \frac{n}{2}\). Thus \(\overline{S^{0}_{in}} = S^{1}_{in}\). It is immediate that if \(c\) is a column appearing in \(S^{0}_{in}\) then it must belong to \(S^{0}_{i}\), for some \(i\) and thus \(S^{1}_{i}\) contains \(\bar{c}\) as one of its columns. The following lemma summarizes the above discussion.
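The first block and the complement relation \(\overline{S^{0}_{1}}=S^{1}_{1}\) can be checked by brute force. A minimal sketch for a hypothetical small even case (\(n=4\)), enumerating the solutions of systems (25) and (27):

```python
from itertools import product

n = 4  # a small even case

def block(rhs):
    """Columns of the first block: solutions over GF(2) of
    x1+...+x_{n-2}+x_{n-1} = rhs and x1+...+x_{n-2}+x_n = rhs."""
    cols = []
    for x in product([0, 1], repeat=n):
        head = sum(x[:n - 2]) % 2
        if (head + x[n - 2]) % 2 == rhs and (head + x[n - 1]) % 2 == rhs:
            cols.append(x)
    return cols

S0_1, S1_1 = block(0), block(1)
assert len(S0_1) == 2 ** (n - 2)
# complementing every entry of S^0_1 yields exactly the columns of S^1_1
assert {tuple(1 - b for b in c) for c in S0_1} == set(S1_1)
```

The same enumeration with permuted variable roles produces the remaining blocks, mirroring the row-interchange argument above.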

Lemma 16

Suppose \(S^{0}_{in}\) and \(S^{1}_{in}\) denote the basis matrices of an \((n-1,n)\)-VCS constructed by the linear algebraic method where \(n\ge 4\) is even. Then the frequency with which a column \(c\) appears in \(S^{b}_{in}\) equals the frequency with which \(\bar{c}\) occurs in \(S^{1-b}_{in}\) for \(b\in \{0,1\}\).

Recall that we delete the common columns from \(S^{0}_{in}\) and \(S^{1}_{in}\) to construct the reduced basis matrices \(S^{0}_{red}\) and \(S^{1}_{red}\). In this process, suppose that a column \(v\) gets deleted. Thus \(v\) occurs as a column in both \(S^{0}_{in}\) and \(S^{1}_{in}\). Therefore \(v\) occurs in some block \(S^{0}_{r}\) for some \(r\) and also occurs in some block \(S^{1}_{t}\) for some \(t\) where \(r\ne t\). Now

  • \(v\in S^{0}_{r} \Rightarrow \bar{v}\in S^{1}_{r}\) and

  • \(v\in S^{1}_{t} \Rightarrow \bar{v}\in S^{0}_{t}\)

Hence \(\bar{v}\) also is a common column and hence gets deleted. We now have the following theorem.

Theorem 4.4

Let \(n\ge 4\) be even. Then for an \((n-1,n)\)-VCS constructed using the linear algebraic method, the number of times a column \(c\) appears in \(S^{b}_{red}\) is equal to the number of times the column \(\bar{c}\) occurs in \(S^{1-b}_{red}\), for \(b\in \{0,1\}\).

5 On \((n-2,n)\)-VCS

In this section we focus our attention on the construction of an \((n-2,n)\)-VCS. The motivation is two-fold. First, we resolve an open issue posed in [1], where the author pointed out [Remark 4, Sect. 3] that for some specific access structures one may take three equations at a time to reduce the pixel expansion. We first show, through concrete examples, that taking three equations at a time works better than taking two equations at a time. We then provide an algorithm for an \((n-2,n)\)-VCS that takes three equations at a time to construct the systems of linear equations, and show that this simple algorithm always achieves a smaller initial pixel expansion than taking two equations at a time. The second motivation is to provide a closed form formula for the pixel expansion of any \((n-2,n)\)-VCS, and thereby for any \(t\)-\((n-2,n)^*\)-VCS with meaningful triplet \((t,n-2,n)\). The closed form of the pixel expansion of any \((n-2,n)\)-VCS provided in Theorem 5.3 yields a much smaller pixel expansion than the closed forms of the schemes posed in [1, 7]. We also show that the pixel expansion may be reduced further by deleting the common columns of the basis matrices. More precisely, in Sect. 5.1 we first discuss the difficulties of taking three equations at a time and then give algorithms overcoming them, separately for odd and even \(n\), to collect three minimal qualified subsets at a time to form groups. In Sect. 5.2 we provide recursive upper bounds for the initial and reduced pixel expansions of any \((n-2,n)\)-VCS constructed by the algorithm of Sect. 5.1. Lastly, in Sect. 5.3 we give the exact form of the initial pixel expansion of any \((n-2,n)\)-VCS and prove that taking three equations together always works better than taking two equations at a time.
Numerical evidence shows that our algorithm provides almost optimal pixel expansion, and that for the \((n-2,n)\)-VCS the reduced pixel expansion obtained by our proposed algorithm is much smaller than that obtained by taking two equations at a time.

Before starting the formal discussion, let us first fix up certain notations that will be used throughout this section.

Notation

Let \(m^{(2)}_{in}(k,n)\) (respectively \(m^{(3)}_{in}(k,n)\)) and \(m^{(2)}_{red}(k,n)\) (respectively \(m^{(3)}_{red}(k,n)\)) denote the initial and the reduced pixel expansion of the \((k,n)\)-VCS obtained by considering two (respectively three) equations at a time.

Remark

From Theorem 2.5 of [1], it follows that

$$\begin{aligned} m^{(2)}_{in}(n-2,n)=\left\{ \begin{array}{ll} n(n-1)2^{n-5}, &{} \text{ if } n \equiv 0,1 \pmod {4} \\ (n(n-1)+2)2^{n-5}, &{} \text{ if } n \equiv 2,3 \pmod {4} \end{array} \right. \end{aligned}$$
(28)

Hence we can now find the pixel expansion of \(t\)-\((n-2,n)^*\)-VCS using Theorem 3.2.
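The closed form (28) translates directly into a one-line computation (a sketch, valid for \(n\ge 5\)):

```python
def m2_in(n):
    """Initial pixel expansion m^(2)_in(n-2,n) per Eq. (28), for n >= 5."""
    if n % 4 in (0, 1):
        return n * (n - 1) * 2 ** (n - 5)
    return (n * (n - 1) + 2) * 2 ** (n - 5)

# Example 5.1 below reports m^(2)_in(3,5) = 20 for the (3,5)-VCS
assert m2_in(5) == 20
```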

To see why taking three equations at a time may yield a smaller pixel expansion than taking two at a time, let us first consider the following example, in which two equations are taken at a time.

Example 5.1

Consider a \((3,5)\)-VCS where the groups are \(G_{1}=\left\{ \{1,2,3\},\{1,2,4\}\right\} \), \(G_{2}=\left\{ \{1,2,5\},\{1,4,5\}\right\} \), \(G_{3}=\left\{ \{1,3,4\},\{1,3,5\}\right\} \), \(G_{4}=\left\{ \{2,3,4\},\{2,3,5\}\right\} \), \(G_{5}=\left\{ \{2,4,5\},\{3,4,5\}\right\} \). Then the initial basis matrices are given by

$$\begin{aligned} S^{0}_{in}= & {} \begin{bmatrix}0011&\quad 0011&\quad 0011&\quad 0000&\quad 0000\\ 0101&\quad 0110&\quad 0000&\quad 0011&\quad 0110\\ 0110&\quad 0000&\quad 0101&\quad 0101&\quad 0110\\ 0110&\quad 0110&\quad 0110&\quad 0110&\quad 0011\\ 0000&\quad 0101&\quad 0110&\quad 0110&\quad 0101 \end{bmatrix}\quad and\\ S^{1}_{in}= & {} \begin{bmatrix}0011&\quad 0011&\quad 0011&\quad 0000&\quad 0000\\ 0101&\quad 1001&\quad 0000&\quad 0011&\quad 1001\\ 1001&\quad 0000&\quad 0101&\quad 0101&\quad 1001\\ 1001&\quad 1001&\quad 1001&\quad 1001&\quad 0011\\ 0000&\quad 0101&\quad 1001&\quad 1001&\quad 0101 \end{bmatrix}. \end{aligned}$$

Here, \(m_{in}^{(2)}(3,5)=20\). However, we may further reduce the pixel expansion by deleting the following common columns:

  • 16th column of \(S^{0}_{in}\) and 17th column of \(S^{1}_{in}\).

  • 20th column of \(S^{0}_{in}\) and 13th column of \(S^{1}_{in}\).

Thus \(m_{red}^{(2)}(3,5)=20-2=18\).
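The deletion step treats the matrices as multisets of columns and removes each shared column once per common occurrence. A minimal sketch (with toy columns, not those of Example 5.1):

```python
from collections import Counter

def delete_common_columns(S0, S1):
    """Remove the multiset intersection of columns from both matrices,
    mirroring the reduction from initial to reduced basis matrices."""
    c0, c1 = Counter(S0), Counter(S1)
    common = c0 & c1                      # multiset of shared columns
    return list((c0 - common).elements()), list((c1 - common).elements())

# toy illustration with 2-entry columns
S0 = [(0, 0), (0, 1), (1, 1)]
S1 = [(0, 1), (1, 0), (1, 1)]
r0, r1 = delete_common_columns(S0, S1)
assert sorted(r0) == [(0, 0)] and sorted(r1) == [(1, 0)]
```

Applying this to the \(20\)-column matrices of Example 5.1 removes the two shared columns listed above, giving \(18\) columns each.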

5.1 Reducing the pixel expansion: taking three equations at a time

By considering the Example 5.1, we shall illustrate further that if we can choose the equations suitably by taking three equations at a time, then the pixel expansion may be reduced significantly.

Example 5.2

Let us regroup the minimal qualified sets for the \((3,5)\)-VCS as follows: \(G_{1}=\left\{ \{1,2,3\},\{1,2,4\},\{2,4,5\},\{2,3,5\}\right\} \), \(G_{2}=\left\{ \{1,2,5\},\{1,3,5\},\{1,4,5\}\right\} \), \(G_{3}=\left\{ \{1,3,4\},\{2,3,4\},\{3,4,5\}\right\} \).

Note that for \(G_{1}\), the first three equations, corresponding to the first three minimal qualified subsets of \(G_1\), give rise (addition modulo 2) to the last equation corresponding to the last minimal qualified subset \(\{2,3,5 \}\), i.e.,

$$\begin{aligned} \begin{array}{c} x_1+x_2+x_3=b \\ x_1+x_2+x_4=b \\ x_2+x_4+x_5=b \\ \hline x_2+x_3+x_5=b \end{array} \end{aligned}$$
(29)

where \(b \in \{0,1 \}\).

Thus considering the three groups we have the following initial basis matrices:

$$\begin{aligned} S^{0}_{in}=\begin{bmatrix} 0011&\quad 0011&\quad 0110\\ 0101&\quad 0110&\quad 0110\\ 0110&\quad 0110&\quad 0011\\ 0110&\quad 0110&\quad 0101\\ 0011&\quad 0101&\quad 0110 \end{bmatrix}\quad and\quad S^{1}_{in}=\begin{bmatrix} 0011&\quad 0011&\quad 1001\\ 0101&\quad 1001&\quad 1001\\ 1001&\quad 1001&\quad 0011\\ 1001&\quad 1001&\quad 0101\\ 0011&\quad 0101&\quad 1001 \end{bmatrix}. \end{aligned}$$

After deleting the common columns, we have the reduced basis matrices as follows:

$$\begin{aligned} S^{0}_{red}=\begin{bmatrix}00&\quad 001&\quad 011\\ 01&\quad 011&\quad 011\\ 01&\quad 011&\quad 001\\ 01&\quad 011&\quad 010\\ 01&\quad 010&\quad 011 \end{bmatrix}\quad and\quad S^{1}_{red}=\begin{bmatrix}01&\quad 011&\quad 001\\ 11&\quad 001&\quad 001\\ 01&\quad 001&\quad 011\\ 01&\quad 001&\quad 101\\ 01&\quad 101&\quad 001 \end{bmatrix}. \end{aligned}$$

Thus by choosing suitably three equations at a time, we have \(m_{red}^{(3)}(3,5)=8\), a significant reduction in pixel expansion from 18 to 8.

Example 5.2 shows that if we carefully collect three equations at a time then the pixel expansion is reduced drastically. We now describe an algorithm for constructing the groups such that each group contains at least three equations. In other words, we collect three minimal qualified subsets of participants at a time until all minimal qualified subsets are exhausted, and by solving the corresponding simultaneous linear equations we construct the basis matrices of any \((n-2,n)\)-VCS. While developing the algorithm we must be careful: collecting three minimal qualified sets (three equations) in a group implicitly admits a fourth subset of participants, and if that subset is forbidden it would thereby become qualified. For example, while constructing a \((3,5)\)-VCS, if we take the three equations corresponding to the minimal qualified sets \(\{1,2,3 \}, \{1,2,4 \}\) and \(\{2,3,4 \}\):

$$\begin{aligned} \begin{array}{cc} x_1+x_2+x_3&{}=b \\ x_1+x_2+x_4&{}=b \\ x_2+x_3+x_4&{}=b \\ \hline x_2&{}=b \end{array} \end{aligned}$$

where \(b \in \{ 0,1\}\), then the forbidden set \(\{ 2\}\) becomes qualified, which we must avoid. So the fourth subset of participants, obtained as above, must be a qualified set.
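Over GF(2), adding equations corresponds to taking the symmetric difference of their supports, so the admissibility of a triple can be checked directly. A sketch using the two triples discussed above:

```python
# XOR (mod-2 sum) of equations = symmetric difference of their supports
e1, e2, e3 = {1, 2, 3}, {1, 2, 4}, {2, 4, 5}
assert e1 ^ e2 ^ e3 == {2, 3, 5}   # system (29): a minimal qualified set

f1, f2, f3 = {1, 2, 3}, {1, 2, 4}, {2, 3, 4}
assert f1 ^ f2 ^ f3 == {2}         # inadmissible: the forbidden set {2}
```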

The algorithm we are now going to discuss makes sure that the fourth subset is a qualified one. Two things may happen in this process. Firstly, if three equations are added modulo 2 then the resulting equation may contain all the \(n\) variables (which obviously corresponds to a qualified set \(\{1,2, \ldots ,n \}\)). Secondly, the resulting equation may contain exactly \((n-2)\) variables which corresponds to a minimal qualified set of \((n-2,n)\)-VCS. To describe the algorithm, we define these two different types of groups.

Definition 5.1

When three minimal qualified sets are collected to form a group and the resulting fourth qualified set is the full set of participants, the group is called an \(X\)-type group.

Definition 5.2

When three minimal qualified sets are collected to form a group and the resulting fourth qualified set is another distinct minimal qualified set then the group is called a \(Y\)-type group.

Example 5.3

If we consider the \((3,5)\)-VCS as described in Example 5.2, the group \(G_1\) is a \(Y\)-type group while \(G_2\) and \(G_3\) are \(X\)-type groups.

We are now in a position to describe an algorithm for forming the groups of minimal qualified subsets of the \((n-2,n)\)-VCS, taking three equations at a time. We divide the algorithm into two cases, namely \(n\) odd and \(n\) even. First let us describe the algorithm for odd \(n \ge 5\).

Grouping algorithm when \(n (\ge 5)\) is odd: In this case, \(n-2\) is also odd and there are \(\genfrac(){0.0pt}0{n}{n-2}\) many minimal qualified sets, \({\mathcal {B}}_1, {\mathcal {B}}_2, \ldots , {\mathcal {B}}_{\left( ^{~n}_{n-2}\right) }\), arranged in lexicographic order. We form \(\lceil \frac{n-2}{2}\rceil \), that is, \(\frac{n-1}{2}\), many \(X\)-type groups, say \(X_{1},~X_{2},\ldots ,~X_{\frac{n-1}{2}}\). Corresponding to each \(X_{m}\), except the last one, we form \(m\) many \(Y\)-type groups, say \(Y_{m,1},~Y_{m,2},\ldots , ~Y_{m,m}\). Thus there are in all \(\frac{(n-1)(n-3)}{8}\) many \(Y\)-type groups.

The grouping algorithm is as follows:

$$\begin{aligned} \begin{array}{l} \mathbf{~~for~~ } m=1 \mathbf{~~~to~~~ } \tfrac{n-1}{2} \\ ~~~~~~ X_{m} \leftarrow \{{\mathcal {B}}_{m(2m-1)},~{\mathcal {B}}_{(m+1)(2m-1)},~{\mathcal {B}}_{m(2m+1)}\} \\ \mathbf{~~for~~ } m=1 \mathbf{~~~to~~~ } \tfrac{n-1}{2} -1 \\ ~~~~~ \mathbf{~~for~~ } j=1 \mathbf{~~~to~~~ } m \\ ~~~~~~ ~~~~ Y_{m,j} \leftarrow \{{\mathcal {B}}_{m(2m+1)+2j-1},~ {\mathcal {B}}_{m(2m+1)+2j},~{\mathcal {B}}_{m(2m+1)+2m+2j},~{\mathcal {B}}_{ m(2m+1)+2m+2j+1}\} \end{array} \end{aligned}$$

Remark

Observe that the linear equation corresponding to the last element of \(Y_{m,j}\) arises from the first three. Now, as usual we construct the linear equations for each \(X_{m}\) and each \(Y_{m,j}\) and solving them we get the initial basis matrices of the \((n-2,n)\)-VCS.
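The grouping rule can be written out as a short program; the index pattern below is the one that matches Example 5.4 (in particular, the second element of \(Y_{m,j}\) is taken to be \({\mathcal {B}}_{m(2m+1)+2j}\)):

```python
from math import comb

def groups_odd(n):
    """X- and Y-type groups of 1-based indices into the lexicographic
    list of minimal qualified sets of an (n-2,n)-VCS, for odd n >= 5."""
    X, Y = {}, {}
    for m in range(1, (n - 1) // 2 + 1):
        X[m] = [m * (2 * m - 1), (m + 1) * (2 * m - 1), m * (2 * m + 1)]
    for m in range(1, (n - 1) // 2):
        base = m * (2 * m + 1)
        for j in range(1, m + 1):
            Y[(m, j)] = [base + 2 * j - 1, base + 2 * j,
                         base + 2 * m + 2 * j, base + 2 * m + 2 * j + 1]
    return X, Y

X, Y = groups_odd(9)
assert X[1] == [1, 2, 3] and Y[(1, 1)] == [4, 5, 7, 8]
# together the groups exhaust the C(9,7) = 36 minimal qualified sets
used = sorted(i for g in list(X.values()) + list(Y.values()) for i in g)
assert used == list(range(1, comb(9, 7) + 1))
```

The partition check at the end confirms that every minimal qualified set is used exactly once.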

Let us now illustrate the above algorithm through the following example.

Example 5.4

Let us consider a \((7,9)\)-VCS. Let \(\Gamma _0=\{{\mathcal {B}}_{i}:i=1,2,\ldots , 36\}\) be the set of all minimal qualified sets arranged in the lexicographic order. Then

$$\begin{aligned} X_{1}= & {} \{{\mathcal {B}}_{1}, {\mathcal {B}}_{2}, {\mathcal {B}}_{3}\},\\ Y_{1,1}= & {} \{{\mathcal {B}}_{4}, {\mathcal {B}}_{5}, {\mathcal {B}}_{7}, {\mathcal {B}}_{8}\},\\ X_{2}= & {} \{{\mathcal {B}}_{6}, {\mathcal {B}}_{9}, {\mathcal {B}}_{10}\},\\ Y_{2,1}= & {} \{{\mathcal {B}}_{11}, {\mathcal {B}}_{12}, {\mathcal {B}}_{16}, {\mathcal {B}}_{17}\},\\ Y_{2,2}= & {} \{{\mathcal {B}}_{13}, {\mathcal {B}}_{14}, {\mathcal {B}}_{18}, {\mathcal {B}}_{19}\},\\ X_{3}= & {} \{{\mathcal {B}}_{15}, {\mathcal {B}}_{20}, {\mathcal {B}}_{21}\},\\ Y_{3,1}= & {} \{{\mathcal {B}}_{22}, {\mathcal {B}}_{23}, {\mathcal {B}}_{29}, {\mathcal {B}}_{30}\},\\ Y_{3,2}= & {} \{{\mathcal {B}}_{24}, {\mathcal {B}}_{25}, {\mathcal {B}}_{31}, {\mathcal {B}}_{32}\},\\ Y_{3,3}= & {} \{{\mathcal {B}}_{26}, {\mathcal {B}}_{27}, {\mathcal {B}}_{33}, {\mathcal {B}}_{34}\},\\ X_{4}= & {} \{{\mathcal {B}}_{28}, {\mathcal {B}}_{35}, {\mathcal {B}}_{36}\}. \end{aligned}$$

Solving the corresponding simultaneous linear equations we get the basis matrices of the \((7,9)\)-VCS. In this case, \(m^{(3)}_{in}(7,9)=640\) and \(m^{(3)}_{red}(7,9)=256\).

Grouping algorithm for the case when \(n(>4)\) is even: In this case the list \(L\) of all minimal qualified sets, arranged in lexicographic order, is divided into two sublists, \(L_{1}\) and \(L_{2}\), where:

$$\begin{aligned} L_{1}= & {} \{{\mathcal {B}}\in L:1\in {\mathcal {B}}\}\hbox { and}\\ L_{2}= & {} \{{\mathcal {B}}\in L:{\mathcal {B}}\notin L_{1}\} \end{aligned}$$

It is easy to see that \(|L_{1}|=\genfrac(){0.0pt}0{n-1}{n-3}\) and \(|L_{2}|=\genfrac(){0.0pt}0{n-1}{n-2}\).

Now observe that if we ignore the participant \(1\) in the list \(L_{1}\), then \(L_{1}\) is essentially the list of all minimal qualified sets of an \((n-3,n-1)\)-VCS on the participant set \({\mathcal {P}}= \{ 2,3,\ldots ,n\}\). Since \(n-1\) is odd, we may use the grouping algorithm for odd \(n\) (as described above) to exhaust the sublist \(L_1\). The sublist \(L_{2}\) is precisely the list of all minimal qualified sets of an \((n-2,n-1)\)-VCS on the participant set \({\mathcal {P}}= \{ 2,3,\ldots ,n\}\), so we can collect two minimal qualified sets at a time, as described in Sect. 4.1, to form its groups.
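The sizes of the two sublists can be verified by enumeration; a sketch for a hypothetical small even case (\(n=10\), the setting of Example 5.5 below):

```python
from itertools import combinations
from math import comb

n = 10  # even
L = [set(s) for s in combinations(range(1, n + 1), n - 2)]
L1 = [B for B in L if 1 in B]       # minimal qualified sets containing 1
L2 = [B for B in L if 1 not in B]   # the rest

assert len(L1) == comb(n - 1, n - 3)   # = C(9,7) = 36
assert len(L2) == comb(n - 1, n - 2)   # = C(9,8) = 9
```

The counts \(36+9=45\) match the index ranges used in Example 5.5.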

Let us illustrate the above algorithm through an example.

Example 5.5

Let us now consider the \((8,10)\)-VCS. Let \(\{{\mathcal {C}}_{i}:i=1,2,\ldots , 45\}\) be the set of all minimal qualified sets arranged in lexicographic order. Then the first \(36\) subsets \({\mathcal {C}}_{1}\) through \({\mathcal {C}}_{36}\) are grouped as in the \((7,9)\)-VCS, that is, we simply replace \({\mathcal {B}}_{i}\) by \({\mathcal {C}}_{i}\) for all \(i=1,2,\ldots , 36\). The remaining 9 subsets, \({\mathcal {C}}_{37}\) to \({\mathcal {C}}_{45}\), are grouped as follows:

\(G_{1}\)= \(\{ {\mathcal {C}}_{37},{\mathcal {C}}_{38}\}, G_{2}\)= \(\{ {\mathcal {C}}_{39},{\mathcal {C}}_{40}\}, G_{3}\)= \(\{ {\mathcal {C}}_{41},{\mathcal {C}}_{42}\}, G_{4}\)= \(\{ {\mathcal {C}}_{43},{\mathcal {C}}_{44}\}\) and \(G_{5}\)= \(\{ {\mathcal {C}}_{45}\}\).

Remark

The above algorithms hold when \(n\ge 5\). For the \((2,4)\)-VCS we directly construct the basis matrices by considering the following groups: \(G_{1} = \{\{1,2\}, \{1,3\}, \{1,4\}\}, G_{2} = \{\{2,3\}, \{2,4\}\}\) and \(G_{3} = \{\{3,4\}\}\) and then solving the corresponding systems of linear equations.

5.2 Bounds and numerical evidence of betterment of pixel expansion

It is immediate from Example 5.5 that if \(m_{red}^{(l)}(k,n)\) denotes the reduced pixel expansion of a \((k,n)\)-VCS constructed by considering \(l\) equations at a time, then

$$\begin{aligned} m_{red}^{(3)}(8,10)\le {\displaystyle 2m_{red}^{(3)}(7,9)+m_{red}^{(2)}(8,9)}. \end{aligned}$$

This can be very easily generalized to the following theorem.

Theorem 5.1

Let \(m_{red}^{(l)}(k,n)\) denote the reduced pixel expansion of a \((k,n)\)-VCS after deleting the common columns. Then for all even \(n>4\)

$$\begin{aligned} m_{red}^{(3)}(n-2,n)\le {\displaystyle 2m_{red}^{(3)}(n-3,n-1)+ m_{red}^{(2)}(n-2,n-1)}. \end{aligned}$$

In a similar way we can give (in a recursive manner) an upper bound for the case when \(n\ge 5\) is odd. For this we break the list of all minimal qualified sets of an \((n-2,n)\)-VCS into two parts namely, \(L_{1}\) which contains all those subsets containing \(1\) and \(L_{2}\) which contains all other minimal qualified subsets that are not in \(L_{1}\). Now it is easy to see that \(L_{1}\) can be identified as the minimal qualified sets for a \(1\)-\((n-2,n)^{*}\)-VCS with \(1\) as the essential participant. Again, \(L_{2}\) can be identified as the minimal qualified sets of an \((n-2,n-1)\)-VCS on the participant set \(\{2,3,\ldots ,n-1\}\). Hence we have the following theorem.

Theorem 5.2

Let \(m_{red}^{(l)}(k,n)\) denote the reduced pixel expansion of a \((k,n)\)-VCS after deleting the common columns. Then for all odd \(n\ge 5\)

$$\begin{aligned} m_{red}^{(3)}(n-2,n)\le {\displaystyle m_{red}^{(2)}(1-(n-2,n)^*)+ m_{red}^{(2)}(n-2,n-1)} \end{aligned}$$

that is,

$$\begin{aligned} m_{red}^{(3)}(n-2,n)\le {\displaystyle 2m_{red}^{(2)}(n-3,n-1)+ m_{red}^{(2)}(n-2,n-1)}. \end{aligned}$$

Actually, the above upper bound on the pixel expansion is far from tight; the exact pixel expansion turns out to be much smaller. It is an interesting problem to find the exact reduced pixel expansion of an \((n-2,n)\)-VCS. Numerical evidence, presented in Table 2, shows that our algorithm achieves almost optimal pixel expansion when compared with the optimal pixel expansion obtained in [23].

Table 2 Comparison of pixel expansions for \((n-2,n)\)-VCSs for various values of \(n\)

5.3 Comparison of pixel expansions while taking two and three equations at a time

We further deal with the \((n-2,n)\)-VCS and resolve an open issue posed in [1]. There the author pointed out [Remark 4, Sect. 3] that for some specific access structures one may take three equations at a time to reduce the pixel expansion. In this section we settle the issue by showing that our grouping algorithm, taking three equations at a time, always yields an initial pixel expansion no larger than that obtained by taking two equations at a time, and strictly smaller for all \(n>4\).

We have already observed in Eq. (28) the exact value of the initial pixel expansion of an \((n-2,n)\)-VCS constructed using the linear algebraic technique when two equations are taken at a time. Our bound when taking three equations at a time is recursive, viz. Theorems 5.1 and 5.2. We now find the exact value of the initial pixel expansion when the grouping of the minimal qualified sets is done using the grouping algorithm described in the last section.

Let us first consider the case when \(n\ge 5\) is odd. Our algorithm for constructing the groups, taking at least three minimal qualified sets at a time, guarantees the following: there are \(\frac{n-1}{2}\) \(X\)-type groups and \(\frac{1}{8}(n-1)(n-3)\) \(Y\)-type groups. The system of linear equations corresponding to an \(X\)-type group has \(n-3\) independent variables; therefore the total pixel expansion contributed by all \(X\)-type groups is \(\frac{n-1}{2}\cdot 2^{n-3}\), that is, \((n-1)2^{n-4}\). Similarly, the system corresponding to a \(Y\)-type group also has \(n-3\) independent variables, and hence the total pixel expansion contributed by all \(Y\)-type groups is \((n-1)(n-3)2^{n-6}\).

Therefore the initial pixel expansion is

$$\begin{aligned} m^{(3)}_{in}(n-2,n) = {\displaystyle (n^2-1)2^{n-6}~ \forall ~ odd~ n\ge 5}. \end{aligned}$$

Now it is not hard to see that \(m^{(2)}_{in}(n-2,n) > m^{(3)}_{in}(n-2,n)\) for all odd \(n\ge 5\).
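The counting above is easy to check mechanically. The following small script (a sanity check, not part of the paper) sums the \(X\)-type and \(Y\)-type contributions and compares them with the closed form \((n^2-1)2^{n-6}\) for odd \(n\):

```python
def m_in_3_odd(n):
    """Initial pixel expansion of the (n-2, n)-VCS for odd n >= 5,
    summed over group types as counted in the text."""
    x_part = (n - 1) // 2 * 2 ** (n - 3)            # (n-1)/2 groups, 2^{n-3} columns each
    y_part = (n - 1) * (n - 3) // 8 * 2 ** (n - 3)  # (n-1)(n-3)/8 groups, 2^{n-3} columns each
    return x_part + y_part

# Compare with (n^2 - 1) * 2^{n-6}, written in integer arithmetic.
for n in range(5, 26, 2):
    assert m_in_3_odd(n) == (n * n - 1) * 2 ** n // 64
```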

If \(n\) is even then as per our algorithm the initial pixel expansion becomes

$$\begin{aligned} m_{in}^{(3)}(n-2,n) = {\displaystyle 2m_{in}^{(3)}(n-3,n-1)+ m_{in}^{(2)}(n-2,n-1)~ \forall ~ even ~ n > 4}. \end{aligned}$$

Observe that \(n-1\) is odd, and therefore, after some algebra, the right-hand side becomes \(2^{n-6}(n^2+2n)\). Now for even \(n>4\) it is not hard to see that \(m^{(2)}_{in}(n-2,n) > m^{(3)}_{in}(n-2,n)\). Thus we have calculated the initial pixel expansion \(m_{in}^{(3)}(n-2,n)\) for even \(n>4\) in terms of the pixel expansions of the \((n-3,n-1)\)-VCS and the \((n-2,n-1)\)-VCS. However, for the \((2,4)\)-VCS, constructed directly using the linear algebraic technique [see the Remark following Example 5.5], the initial pixel expansion equals \(2^{n-6}(n^2+2n)\) with \(n=4\), and in this case \(m^{(2)}_{in}(2,4) = m^{(3)}_{in}(2,4)\). Thus we have the following theorem.
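One way to make the "some algebra" explicit (a sketch using only the odd-case formula \((n^2-1)2^{n-6}\) derived above): since \(n-1\) is odd,

$$\begin{aligned} 2\,m_{in}^{(3)}(n-3,n-1) = 2\,\bigl((n-1)^2-1\bigr)2^{(n-1)-6} = (n^2-2n)2^{n-6}, \end{aligned}$$

so the stated total \((n^2+2n)2^{n-6}\) corresponds to a remaining term of \((n^2+2n)2^{n-6} - (n^2-2n)2^{n-6} = 4n\cdot 2^{n-6} = n\,2^{n-4}\), which must therefore be the value of \(m_{in}^{(2)}(n-2,n-1)\) supplied by Eq. (28).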

Theorem 5.3

Let \(m_{in}^{(l)}(k,n)\) denote the initial pixel expansion of a \((k,n)\)-VCS taking \(l\) equations at a time. Then \(m^{(2)}_{in}(n-2,n) \ge m^{(3)}_{in}(n-2,n)\), where

$$\begin{aligned} m^{(3)}_{in}(n-2,n)=\left\{ \begin{array}{ll} (n^2-1)2^{n-6}, &{} \text{ for } \text{ all } \text{ odd } n\ge 5, \\ 2^{n-6}(n^2+2n), &{} \text{ for } \text{ all } \text{ even } n\ge 4. \end{array} \right. \end{aligned}$$

Moreover the equality \(m^{(2)}_{in}(n-2,n) = m^{(3)}_{in}(n-2,n)\) holds if and only if \(n = 4\).
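The closed form of Theorem 5.3 is straightforward to implement; the following hypothetical helper (not from the paper) is convenient for checking small cases, written in integer arithmetic:

```python
def m_in_3(n):
    """Initial pixel expansion m_in^(3)(n-2, n) from Theorem 5.3."""
    if n % 2:                                # odd n >= 5
        return (n * n - 1) * 2 ** n // 64    # (n^2 - 1) * 2^{n-6}
    return (n * n + 2 * n) * 2 ** n // 64    # even n >= 4: (n^2 + 2n) * 2^{n-6}

# Small cases: the (2,4)-VCS gives 6 and the (3,5)-VCS gives 12.
assert m_in_3(4) == 6
assert m_in_3(5) == 12
```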

Further numerical evidence, presented in Table 3, shows that for the \((n-2,n)\)-VCS the reduced pixel expansion obtained using our proposed algorithm is much smaller than that obtained by taking two equations at a time.

Table 3 Numerical evidence of significantly smaller pixel expansion for \((n-2,n)\)-VCSs for different values of \(n\)

6 Conclusion and open issues

In this paper we have put forward a construction and analysis, based on a linear algebraic technique, of a \(t\)-\((k,n)^*\)-VCS for monochrome images in which \(t\) participants are essential in a \((k,n)\)-VCS. We grouped the minimal qualified sets, taken two at a time, to form systems of linear equations whose solutions yielded the initial basis matrices. We then applied the technique of deleting the common columns occurring in the initial basis matrices to reduce the pixel expansion. We completely characterized the \(t\)-\((n-1,n)^*\)-VCS by deriving a closed form for the reduced pixel expansion, and hence a closed form for the relative contrast as well. We further investigated the \((n-2,n)\)-VCS and provided an algorithm that collects at least three minimal qualified sets at a time for grouping, in such a way that the resulting basis matrices, in their reduced forms, have almost optimal pixel expansions. An interesting open issue would be to compute the exact values of the reduced pixel expansions of any \(t\)-\((n-2,n)^*\)-VCS, and more generally of any \(t\)-\((k,n)^*\)-VCS, and to derive a closed form for them.