1 Introduction

Secret sharing is an important cryptographic primitive by means of which a secret value is distributed into shares among a number of participants in such a way that only the qualified sets of participants can recover the secret value from their shares. A scheme is perfect if the unqualified subsets obtain no information about the secret. The first proposed secret sharing schemes [8, 31] realized threshold access structures, in which the qualified subsets are those having at least a given number of participants. In addition, these schemes are ideal and linear. A scheme is ideal if the share of every participant has the same length as the secret, and it is linear if a linear combination of the shares of different secrets yields shares for the same linear combination of the secret values. Even though there exists a linear secret sharing scheme for every access structure [6, 24], the known general constructions are impractical because the length of the shares grows exponentially with the number of participants. In fact, the optimization of secret sharing schemes for general access structures has turned out to be an extremely difficult problem and not much is known about it. Nevertheless, secret sharing schemes have found numerous applications in cryptography and distributed computing, such as threshold cryptography [16], secure multiparty computation [5, 11, 14, 15], and oblivious transfer [32, 36]. In many of these applications, we wish to use practical schemes, namely, linear schemes in which the size of the share of each participant is polynomial in the size of the secret. In particular, we want to use ideal schemes, since they are the most space-efficient.

Due to the difficulty of constructing an ideal linear scheme for every given access structure, it is worthwhile to find families of access structures that admit ideal linear schemes and have useful properties for the applications of secret sharing. Several such families are formed by multipartite access structures, in which the set of participants is divided into different parts and all participants in the same part play an equivalent role. Weighted threshold access structures [4, 31], hierarchical access structures [18, 34, 35], and compartmented access structures [9, 22, 37] are typical examples of such multipartite access structures. Readers can refer to [19] for a comprehensive survey on multipartite access structures. A great deal of the ongoing research in this area is devoted to the properties of multipartite access structures and to secret sharing schemes (especially ideal and linear ones) that realize them.

The first class of multipartite access structures, the weighted threshold access structures, appeared in the seminal paper by Shamir [31]. Weighted threshold access structures do not admit an ideal secret sharing scheme in general. Ideal multipartite secret sharing and the corresponding access structures were initially studied by Kothari [25] and by Simmons [34]. Kothari [25] presented some ideas to construct ideal linear schemes with hierarchical properties. Simmons [34] introduced the multilevel access structures (also called disjunctive hierarchical threshold access structures (DHTASs) in [35]) and compartmented access structures, and constructed ideal linear schemes for some of them by the geometric method of [8], but this method is inefficient. An efficient method to construct ideal linear schemes for DHTASs was presented by Brickell [9], based on primitive polynomials over finite fields. He also presented a more general family, the so-called compartmented access structures with lower bounds (LCASs), as a generalization of Simmons’ compartmented access structures, and offered a method to construct ideal linear schemes realizing LCASs as well. This method is efficient for Simmons’ compartmented access structures but is inefficient for LCASs in general, because it requires checking (possibly exponentially) many matrices for non-singularity. Tassa [35] presented conjunctive hierarchical threshold access structures (CHTASs) and offered a method to construct ideal linear schemes realizing them based on Birkhoff interpolation. In the case of random allocation of participant identities, this method is probabilistic; a method is probabilistic if it produces a scheme for the given access structure with high probability. In this probabilistic method, it is still necessary to check many matrices for non-singularity, whereas in general we hope to construct schemes by efficient methods. By allocating participant identities in a monotone way, Tassa gave an efficient method to construct ideal linear schemes for CHTASs over a sufficiently large prime field based on Birkhoff interpolation. Tassa and Dyn [37] presented compartmented access structures with upper bounds (UCASs) and offered probabilistic methods to construct ideal linear schemes for UCASs, LCASs and CHTASs based on bivariate interpolation.

Another related line of work deals with the characterization of ideal multipartite secret sharing schemes and their access structures. This line of research was initiated by Brickell [9] and by Brickell and Davenport [10]. They established the relationship between secret sharing schemes and matroids, and characterized ideal secret sharing schemes by matroids. Beimel et al. [4] characterized ideal weighted threshold secret sharing schemes by matroids. The bipartite access structures were characterized in [29], and some partial results about the tripartite case were presented in [13] and [22]. On the basis of the works in [9, 10], Farràs et al. [17,18,19] introduced integer polymatroids for the study of ideal multipartite secret sharing schemes. They studied the connection among multipartite secret sharing schemes, matroids and polymatroids, and presented many new families of multipartite access structures, such as ideal hierarchical access structures (IHASs) and compartmented access structures with upper and lower bounds. Their work implies that the problem of constructing a scheme realizing a multipartite access structure can be transformed into the problem of finding a representation of a matroid from a representation of its associated polymatroid. Nevertheless, Farràs et al. [17, 19] pointed out that it remains open whether there exist efficient algorithms to obtain representations of matroids from representations of their associated polymatroids in general.

1.1 Related Work

Efficient Explicit Constructions of Ideal Multipartite Secret Sharing. Most of the known constructions of ideal multipartite secret sharing schemes in the literature are either inefficient or randomized. Efficient methods to construct ideal hierarchical secret sharing schemes were given by Brickell [9] and by Tassa [35]. Brickell’s construction provides a representation of a matroid associated to DHTASs over finite fields of the form \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda \ge mk^2\), where q is a prime power, m is the number of parts that the set of participants is divided into, and k is the rank of the matroid. An irreducible polynomial of degree \(\lambda \) over \(\mathbb {F}_q\) has to be found, but this can be done in time polynomial in q and \(\lambda \) by using the algorithm given by Shoup [33]. Therefore, a representation can be found in time polynomial in the size of the ground set. Accordingly, ideal linear schemes realizing DHTASs can be obtained by an efficient method. Tassa [35] offered a representation of a matroid associated to CHTASs over prime fields \(\mathbb {F}_{p}\) with p larger than \(2^{-k+2} (k-1)^{( k-1)/2} ( k-1)! N^{(k-1)(k-2)/2}\), where k is the rank of the matroid and N is the maximum identity assigned to the participants. A matrix M is such a representation if certain of its submatrices are nonsingular, and the non-singularity of these submatrices depends on Birkhoff interpolation. There is an efficient algorithm to solve this kind of interpolation over the prime fields \(\mathbb {F}_{p}\), and consequently, ideal linear schemes for CHTASs can be obtained by an efficient method. Ball et al. [1] extended the methods in [9, 35] and obtained two different kinds of representations of biuniform matroids, one by using a primitive element of an extension field and another one by using a large prime field. Schemes for some bipartite access structures can be obtained based on these representations. In addition, efficient methods to construct schemes for some multilevel access structures with two and three levels were presented in [7] and [21], respectively.

Multipartite Secret Sharing, Polymatroids and Matroids. On the basis of the connection among multipartite secret sharing schemes, matroids and polymatroids, Farràs et al. [17,18,19] introduced a unified method based on polymatroid techniques, which simplifies the task of determining whether a given multipartite access structure is ideal or not. In particular, they characterized ideal secret sharing schemes for hierarchical access structures in [18] by this unified method. They gave a precise description of IHASs and showed that every ideal hierarchical access structure is of this form or can be obtained from a structure of this form by removing some participants. Moreover, they presented a general method to construct ideal linear schemes realizing multipartite access structures. Specifically, to construct a secret sharing scheme realizing a given multipartite access structure, first find an integer polymatroid associated to the access structure, then find a representation of the integer polymatroid over some finite field, and finally find a representation of the matroid associated to the access structure over some finite extension of that field, based on the representation of the integer polymatroid. The result in [17] implies that this matroid can be used to construct an ideal linear scheme realizing the access structure.

1.2 Our Results

In this paper, we study how to construct secret sharing schemes realizing multipartite access structures. The main results are the constructions for IHASs, a family that contains all ideal hierarchical access structures and includes DHTASs and CHTASs as particular cases, and the constructions for UCASs and LCASs, two special families of compartmented access structures. We give efficient methods to explicitly construct ideal linear schemes realizing these access structures, combining the general polymatroid-based method in [17] with Brickell’s method to construct ideal linear schemes for DHTASs in [9]. The idea of our construction is described as follows.

Our method to construct multipartite schemes is closely related to the representations of the multipartite matroid associated to the given multipartite access structure. The problem of obtaining a representation of the multipartite matroid can be transformed into that of finding a matrix M such that certain special submatrices of it are nonsingular. Thus, our main goal is to provide a polynomial-time algorithm that constructs such a matrix M for which all the determinants of those special submatrices are nonzero over some finite field. More precisely, we construct a matrix M of special form such that the determinant of each of those submatrices can be viewed as a nonzero polynomial in x of degree at most t over some finite field \(\mathbb {F}_{q}\). Since a nonzero polynomial of degree at most \(t < \lambda \) over \(\mathbb {F}_{q}\) cannot vanish when x is replaced by a root of an irreducible polynomial of degree \(\lambda \) over \(\mathbb {F}_{q}\), such a matrix M yields a representation over \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda > t\); since the required irreducible polynomial can be found by the algorithm given by Shoup [33], a representation of the matroid associated to the given access structure can be found in time polynomial in the size of the ground set.

The idea of finding a matrix M such that the determinants of some of its submatrices are nonzero polynomials in x comes from Brickell [9]; it is the key to finding a representation of the matroid, and it relies on properties of the determinant function. To this end, we introduce approaches to calculating the determinants of two classes of matrices of special form: one applies to the constructions for IHASs and the other to the constructions for UCASs and LCASs.

Specifically, based on the integer polymatroids associated to the three families of multipartite access structures presented in [17,18,19], for each family we give an efficient method to find a representation of the integer polymatroid over some finite field, and then, over some finite extension of that field, an efficient method to find a representation of the matroid associated to the integer polymatroid. Accordingly, we construct ideal linear schemes for these access structures. First, we construct an \(\mathbb {F}_{q}\)-representation of the integer polymatroid that is as simple as possible: in the constructions for IHASs the representation is based on the unit matrix, and in the constructions for UCASs and LCASs the representations are based on Vandermonde matrices. Second, based on this special representation, we construct a matrix M satisfying the required conditions, such that the determinant of each of the relevant submatrices can be viewed as a nonzero polynomial in x over \(\mathbb {F}_{q}\). Thus, a representation of the matroid associated to the given access structure can be found in time polynomial in the size of the ground set by the algorithm in [33].

In addition, we compare our results with the efficient methods to construct multipartite secret sharing schemes from [9, 35] in Sect. 4.3. In particular, we point out that our construction for DHTASs is the same as the one in [9], but we improve the bound for the size of the ground set.

1.3 Organization of the Paper

Section 2 introduces some background on access structures, secret sharing schemes, polymatroids, matroids, and the methods to construct secret sharing schemes from matroids and polymatroids. Section 3 introduces the approaches to calculating the determinant functions of two classes of matrices with special form. Section 4 gives two classes of constructions for ideal linear secret sharing schemes realizing IHASs. Section 5 constructs ideal linear secret sharing schemes realizing UCASs and LCASs.

2 Preliminaries

We introduce here some notation that will be used throughout the paper. In particular, we recall the compact and useful representation of multipartite access structures as in [17,18,19].

We use \(\mathbb {Z}_{+}\) to denote the set of non-negative integers. For every positive integer i we use the notation \([i]:= \{1,\ldots , i\}\), and for every \(i, j \in \mathbb {Z}_{+}\) with \(i < j\) we use the notation \([i, j]:=\{i,\ldots , j\}\). Given a finite set J and two vectors \(\varvec{u} = (u_i)_{i \in J}\) and \(\varvec{v} = (v_i)_{i \in J}\) in \(\mathbb {Z}_{+}^J\), we write \(\varvec{u} \le \varvec{v}\) if \(u_i \le v_i\) for every \(i \in J\). The modulus \(|\varvec{u}|\) of a vector \(\varvec{u} \in \mathbb {Z}_{+}^J\) is defined by \(|\varvec{u}| = \sum _{i \in J} u_i\). For every subset \(X \subseteq J\), we write \(\varvec{u} (X) = (u_i)_{i \in X} \in \mathbb {Z}_{+}^X\). For every positive integer m, we write \(J_m = \{1,\ldots , m\}\) and \(J'_m = \{0, 1,\ldots , m\}\). Of course, the vector notation introduced here applies as well to \(\mathbb {Z}_{+}^m = \mathbb {Z}_{+}^{J_m}\).

2.1 Access Structures and Secret Sharing Schemes

Let \(P = \{ p_1, \ldots , p_n \}\) denote the set of participants and let \(\mathcal {P}(P) = \{\mathcal {V} : \mathcal {V} \subseteq P \}\) denote its power set, which contains all the subsets of P. A collection \(\varGamma \subseteq \mathcal {P}(P)\) is monotone if \(\mathcal {V} \in \varGamma \) and \(\mathcal {V}\subseteq \mathcal {W}\) imply that \(\mathcal {W} \in \varGamma \). An access structure is a monotone collection \(\varGamma \subseteq \mathcal {P}(P)\) of nonempty subsets of P. Sets in \(\varGamma \) are called authorized, and sets not in \(\varGamma \) are called unauthorized. An authorized set \(\mathcal {V} \in \varGamma \) is called a minimal authorized set if for every \(\mathcal {W} \varsubsetneq \mathcal {V}\), the set \(\mathcal {W}\) is unauthorized. An unauthorized set \(\mathcal {V} \notin \varGamma \) is called a maximal unauthorized set if for every \(\mathcal {W} \supsetneq \mathcal {V}\), the set \(\mathcal {W}\) is authorized. The set \(\varGamma ^*=\{ \mathcal {V} : \mathcal {V}^c \notin \varGamma \}\) is called the dual access structure to \(\varGamma \); it is easy to see that \(\varGamma ^*\) is monotone too. Finally, an access structure is said to be connected if every participant belongs to at least one minimal authorized subset.

A family \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) of subsets of P is called here a partition of P if \(P = \bigcup _{i \in J_m} \mathrm {\Pi }_i\) and \(\mathrm {\Pi }_i \cap \mathrm {\Pi }_j = \emptyset \) whenever \(i \ne j\). For a partition \(\mathrm {\Pi }\) of a set P, we consider the mapping \(\mathrm {\Pi }: \mathcal {P}(P) \rightarrow \mathbb {Z}_{+}^m\) defined by \(\mathrm {\Pi } (\mathcal {V}) = ( |\mathcal {V} \cap \mathrm {\Pi }_i |)_{i \in J_m}\). We write \(\mathbf {P} = \mathrm {\Pi } ( \mathcal {P}(P)) = \{ \varvec{u} \in \mathbb {Z}_{+}^m: \varvec{u} \le \mathrm {\Pi } (P) \}\). For a partition \(\mathrm {\Pi }\) of a set P, a \(\mathrm {\Pi }\)-permutation is a permutation \(\sigma \) on P such that \(\sigma (\mathrm {\Pi }_i) = \mathrm {\Pi }_i\) for every part \(\mathrm {\Pi }_i\) of \(\mathrm {\Pi }\). An access structure on P is said to be \(\mathrm {\Pi }\)-partite if every \(\mathrm {\Pi }\)-permutation is an automorphism of it.

As in [17,18,19], we describe a multipartite access structure in a compact way by taking into account that its members are determined by the number of elements they have in each part. If an access structure \(\varGamma \) on P is \(\mathrm {\Pi }\)-partite, then \(\mathcal {V} \in \varGamma \) if and only if \(\mathrm {\Pi } (\mathcal {V} ) \in \mathrm {\Pi } (\varGamma )\). That is, \(\varGamma \) is completely determined by the partition \(\mathrm {\Pi }\) and set of vectors \(\mathrm {\Pi } (\varGamma ) \subseteq \mathbf {P} \subseteq \mathbb {Z}_{+}^m\). Moreover, the set \(\mathrm {\Pi } (\varGamma ) \subseteq \mathbf {P}\) is monotone increasing, that is, if \(\varvec{u} \in \mathrm {\Pi } (\varGamma )\) and \(\varvec{v} \in \mathbf {P}\) is such that \(\varvec{u} \le \varvec{v}\), then \(\varvec{v} \in \mathrm {\Pi } (\varGamma )\). Therefore, \(\mathrm {\Pi } (\varGamma )\) is univocally determined by \(\min \mathrm {\Pi } (\varGamma )\), the family of its minimal vectors, that is, those representing the minimal qualified subsets of \(\varGamma \). By an abuse of notation, we will use \(\varGamma \) to denote both a \(\mathrm {\Pi }\)-partite access structure on P and the corresponding set \(\mathrm {\Pi } (\varGamma )\) of points in \(\mathbf {P}\), and the same applies to \(\min \varGamma \).
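To make this compact representation concrete, the following Python sketch maps a subset of participants to its vector \(\mathrm {\Pi } (\mathcal {V})\) and tests membership in \(\varGamma \) using only the minimal points \(\min \varGamma \); the bipartite example data (parts and minimal points) are illustrative assumptions, not taken from the paper.

```python
# Sketch: storing a Pi-partite access structure by its minimal points.
# The parts and min_points below are made-up illustrative data.
def Pi(V, parts):
    """Map a set of participants V to (|V ∩ Pi_1|, ..., |V ∩ Pi_m|)."""
    return tuple(len(V & part) for part in parts)

def is_authorized(V, parts, min_points):
    """V is authorized iff Pi(V) dominates some minimal point of Gamma."""
    u = Pi(V, parts)
    return any(all(u[i] >= v[i] for i in range(len(v))) for v in min_points)

parts = [{'a1', 'a2', 'a3'}, {'b1', 'b2'}]   # Pi_1, Pi_2
min_points = [(2, 0), (1, 2)]                # min(Gamma), as vectors in P
print(is_authorized({'a1', 'b1', 'b2'}, parts, min_points))   # True
```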

Now, we introduce some families of multipartite access structures.

Definition 1

(Ideal hierarchical access structures). Take \(\varvec{\hat{k}, ~k} \in \mathbb {Z}_{+}^m\) such that \( \hat{k}_1 =0\) and \( \hat{k}_i \le \hat{k}_{i+1} < k_i \le k_{i+1}\) for \(i \in [m-1]\). The following access structures are called ideal hierarchical access structures (IHASs)

$$\begin{aligned} \varGamma = \{\varvec{u} \in \mathbf {P} : |\varvec{u} ([\ell ])| \ge k_{\ell }~\mathrm{for~ some}~ \ell \in J_m ~\mathrm{and}~ |\varvec{u} ([i])| \ge \hat{k}_{i+1}~\mathrm{for~ all}~ i \in [\ell -1]\}. \end{aligned}$$
(1)

In particular, if \(\hat{k}_i =0\) for every \(i \in J_m\) and \(0< k_1< \cdots < k_m = k\), then the IHASs (1) become the disjunctive hierarchical threshold access structures (DHTASs), which can be written as

$$\begin{aligned} \varGamma = \{\varvec{u} \in \mathbf {P} : |\varvec{u} ([i])| \ge k_{i}~\mathrm{for~ some}~ i \in J_m \}, \end{aligned}$$
(2)

and if \(0 = \hat{k}_1< \cdots < \hat{k}_m\) and \( k_1 = \cdots = k_m = k\), then the IHASs (1) become the conjunctive hierarchical threshold access structures (CHTASs), which can be written as

$$\begin{aligned} \varGamma = \{\varvec{u} \in \mathbf {P} : |\varvec{u} ([i])| \ge \tilde{k}_{i}~\mathrm{for~ all}~ i \in J_m \}, \end{aligned}$$
(3)

where \(\tilde{k}_i = \hat{k}_{i+1}\) for \(i \in [m-1]\) and \(\tilde{k}_m = k _m\).
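As an illustration of Definition 1, the following sketch tests whether a point \(\varvec{u} \in \mathbf {P}\) belongs to the IHAS (1). It is written directly from the definition; storing the thresholds in 0-based Python lists is an assumption of the sketch, not a convention of the paper.

```python
# Membership test for the IHAS of Eq. (1).  hat_k and k store
# (hat_k_1, ..., hat_k_m) and (k_1, ..., k_m) as 0-based lists;
# u is assumed to satisfy u <= Pi(P).
def in_ihas(u, hat_k, k):
    m = len(k)
    for ell in range(1, m + 1):                                  # some ell in J_m
        if sum(u[:ell]) >= k[ell - 1] and \
           all(sum(u[:i]) >= hat_k[i] for i in range(1, ell)):   # |u([i])| >= hat_k_{i+1}
            return True
    return False

# Illustrative parameters with m = 3 (hat_k = 0 would give a DHTAS,
# k_1 = ... = k_m a CHTAS):
print(in_ihas((2, 2, 3), hat_k=[0, 1, 2], k=[3, 5, 7]))          # True
```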

Definition 2

(Compartmented access structures). Take \(\varvec{t} \in \mathbb {Z}_{+}^m\) and \(k \in \mathbb {N}\) such that \(k \ge |\varvec{t}|\). The following access structures are called compartmented access structures with lower bounds (LCASs)

$$\begin{aligned} \min \varGamma = \{\varvec{u} \in \mathbf {P}: |\varvec{u}| = k ~\mathrm{and}~\varvec{u} \ge \varvec{t} \}. \end{aligned}$$
(4)

Take \(\varvec{r} \in \mathbb {Z}_{+}^m\) such that \(\varvec{r} \le \mathrm {\Pi } (P)\) and \(r_i \le k \le |\varvec{r}|\) for every \(i \in J_m\). The following access structures are called compartmented access structures with upper bounds (UCASs)

$$\begin{aligned} \min \varGamma = \{\varvec{u} \in \mathbf {P}: |\varvec{u}| = k ~\mathrm{and}~\varvec{u} \le \varvec{r} \}. \end{aligned}$$
(5)
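For small parameters, the minimal points of the two compartmented structures in Definition 2 can be enumerated directly, as in the following sketch; all parameter values are illustrative.

```python
# Enumerating min(Gamma) for the LCAS (4) and the UCAS (5).
# P_max plays the role of Pi(P); all parameters below are made up.
from itertools import product

def min_lcas(P_max, t, k):
    """Minimal points of the LCAS: |u| = k and u >= t (and u <= Pi(P))."""
    ranges = [range(ti, pi + 1) for ti, pi in zip(t, P_max)]
    return [u for u in product(*ranges) if sum(u) == k]

def min_ucas(P_max, r, k):
    """Minimal points of the UCAS: |u| = k and u <= r (and u <= Pi(P))."""
    ranges = [range(0, min(ri, pi) + 1) for ri, pi in zip(r, P_max)]
    return [u for u in product(*ranges) if sum(u) == k]

print(min_lcas(P_max=(3, 3), t=(1, 1), k=3))   # [(1, 2), (2, 1)]
print(min_ucas(P_max=(3, 3), r=(2, 2), k=3))   # [(1, 2), (2, 1)]
```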

We next present the definition of an unconditionally secure perfect secret sharing scheme as given in [3, 12]. For more information about this definition and secret sharing in general, see [2].

Definition 3

(Secret sharing schemes). Let \(P = \{ p_1, \ldots , p_n \}\) be a set of participants. A distribution scheme \(\varSigma = (\varPhi , \mu )\) with domain of secrets \( \mathcal {S}\) is a pair, where \(\mu \) is a probability distribution on some finite set \(\mathcal {R}\) called the set of random strings and \(\varPhi \) is a mapping from \(\mathcal {S} \times \mathcal {R}\) to a set of n-tuples \(\mathcal {S}_1 \times \mathcal {S}_2 \times \cdots \times \mathcal {S}_n\), where \(\mathcal {S}_i\) is called the domain of shares of \(p_i\). A dealer distributes a secret \(s \in \mathcal {S}\) according to \(\varSigma \) by first sampling a random string \(r \in \mathcal {R}\) according to \(\mu \), computing a vector of shares \(\varPhi (s, r) = ( s_1, \ldots , s_n)\), and privately communicating each share \(s_i\) to participant \(p_i\). For a set \(\mathcal {V} \subseteq P\), we denote by \(\varPhi _{\mathcal {V}} (s, r)\) the restriction of \(\varPhi (s, r)\) to its \(\mathcal {V}\)-entries (i.e., the shares of the participants in \(\mathcal {V}\)).

Let \( \mathcal {S}\) be a finite set of secrets, where \(|\mathcal {S}| \ge 2\). A distribution scheme \(\varSigma = (\varPhi , \mu )\) with domain of secrets \( \mathcal {S}\) is a secret sharing scheme realizing an access structure \(\varGamma \subseteq \mathcal {P}(P)\) if the following two requirements hold:

CORRECTNESS. The secret s can be reconstructed by any authorized set of participants. That is, for any authorized set \(\mathcal {V} \in \varGamma \) (where \(\mathcal {V} = \{ p_{i_1}, \ldots , p_{i_{|\mathcal {V}|}} \})\), there exists a reconstruction function \(Recon_{\mathcal {V}} : \mathcal {S}_{i_1} \times \cdots \times \mathcal {S}_{i_{|\mathcal {V}|}} \rightarrow \mathcal {S}\) such that for every \( s \in \mathcal {S}\) and every random string \(r \in \mathcal {R}\),

$$\begin{aligned} Recon_{\mathcal {V}} \big ( \varPhi _{\mathcal {V}} (s, r) \big ) = s. \end{aligned}$$

PRIVACY. Every unauthorized set can learn nothing about the secret (in the information theoretic sense) from their shares. Formally, for any unauthorized set \(\mathcal {W} \notin \varGamma \), every two secrets \( s, s' \in \mathcal {S}\), and every possible \(| \mathcal {W}|\)-tuple of shares \(( s_i )_{p_i \in \mathcal {W}}\),

$$\begin{aligned} Pr \big [ \varPhi _{\mathcal {W}} (s, r) = (s_i)_{p_i \in \mathcal {W}} \big ] = Pr \big [ \varPhi _{\mathcal {W}} (s', r) = (s_i)_{p_i \in \mathcal {W}} \big ] \end{aligned}$$

where the probability is over the choice of r from \(\mathcal {R}\) at random according to \(\mu \).
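A toy instance of Definition 3 may help fix ideas: the following sketch implements a 2-out-of-2 additive scheme over \(\mathbb {Z}_p\) (an illustrative choice, not a scheme from this paper) and checks the correctness and privacy conditions by brute force.

```python
# A toy distribution scheme (Phi, mu): 2-out-of-2 additive sharing over Z_p,
# with mu the uniform distribution on R = Z_p.  Illustrative only.
from collections import Counter

p = 5
R = range(p)                                    # the set of random strings

def Phi(s, r):
    return ((s - r) % p, r)                     # shares of p_1 and p_2

def Recon(s1, s2):
    return (s1 + s2) % p

# Correctness: the authorized set {p_1, p_2} always recovers s.
assert all(Recon(*Phi(s, r)) == s for s in range(p) for r in R)

# Privacy: the share of p_2 alone has the same distribution for every secret.
dists = {s: Counter(Phi(s, r)[1] for r in R) for s in range(p)}
assert len(set(frozenset(d.items()) for d in dists.values())) == 1
```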

Definition 4

(Ideal linear secret sharing schemes). Let \(P = \{ p_1, \ldots , p_n \}\) be a set of participants. Let \(\varSigma = (\varPhi , \mu )\) be a secret sharing scheme with domain of secrets \( \mathcal {S}\), where \(\mu \) is a probability distribution on a set \(\mathcal {R}\) and \(\varPhi \) is a mapping from \(\mathcal {S} \times \mathcal {R}\) to \(\mathcal {S}_1 \times \mathcal {S}_2 \times \cdots \times \mathcal {S}_n\), where \(\mathcal {S}_i\) is called the domain of shares of \(p_i\). We say that \(\varSigma \) is an ideal linear secret sharing scheme over a finite field \(\mathbb {K}\) if \(\mathcal {S} = \mathcal {S}_1 = \cdots = \mathcal {S}_n = \mathbb {K}\), \(\mathcal {R}\) is a \(\mathbb {K}\)-vector space, \(\varPhi \) is a \(\mathbb {K}\)-linear mapping, and \(\mu \) is the uniform probability distribution.

This paper deals with unconditionally secure perfect ideal linear secret sharing schemes.

2.2 Polymatroids and Matroids

In this section we introduce the definitions and some properties of polymatroids and matroids. Most results of this section are from [17,18,19]. For more background on matroids and polymatroids, see [23, 28, 30, 38].

Definition 5

A polymatroid \(\mathcal {S}\) is defined by a pair \((J, h)\), where J is the finite ground set and \(h : \mathcal {P}(J) \rightarrow \mathbb {R}\) is the rank function that satisfies

  (1)

    \(h( \emptyset ) = 0\);

  (2)

    h is monotone increasing: if \(X\subseteq Y \subseteq J\), then \(h(X) \le h(Y)\);

  (3)

    h is submodular: if \(X, Y \subseteq J\), then \(h( X \cup Y) + h (X \cap Y) \le h(X) + h( Y)\).

An integer polymatroid \(\mathcal {Z}\) is a polymatroid with an integer-valued rank function h. An integer polymatroid such that \(h(X) \le |X|\) for any \(X \subseteq J\) is called a matroid.

While matroids abstract some properties related to linear dependency of collections of vectors in a vector space, integer polymatroids do the same with collections of subspaces. Suppose \((V_i)_{i \in J}\) is a finite collection of subspaces of a \(\mathbb {K}\)-vector space V, where \(\mathbb {K}\) is a finite field. The mapping \( h: \mathcal {P}(J) \rightarrow \mathbb {Z}\) defined by \(h(X) = \dim (\sum _{i \in X} V_i)\) is the rank function of an integer polymatroid with ground set J. Integer polymatroids and, in particular, matroids that can be defined in this way are said to be \(\mathbb {K}\)-representable.

Following the analogy with vector spaces we make the following definitions. For an integer polymatroid \(\mathcal {Z}\), the set of integer independent vectors of \(\mathcal {Z}\) is

$$\mathcal {D} = \{ \varvec{u} \in \mathbb {Z}_{+}^J : |\varvec{u} (X)| \le h(X) ~{for~every}~ X \subseteq J \},$$

in which the maximal integer independent vectors are called the integer bases of \(\mathcal {Z}\). Let \(\mathcal {B}\) or \(\mathcal {B} (\mathcal {Z})\) denote the collection of all integer bases of \(\mathcal {Z}\). Then all the elements of \(\mathcal {B} (\mathcal {Z})\) have the identical modulus. In fact, every integer polymatroid \(\mathcal {Z}\) is univocally determined by \(\mathcal {B} (\mathcal {Z})\) since h is determined by \(h(X) = \max \{ |\varvec{u} (X)| : \varvec{u} \in \mathcal {B} (\mathcal {Z}) \}\).

Given an integer polymatroid \(\mathcal {Z} = (J, h)\) and a subset \(X \subseteq J\), let \(\mathcal {Z}|X = (X, h)\) denote the integer polymatroid obtained by restricting \(\mathcal {Z}\) to X, and let \(\mathcal {B} (\mathcal {Z}, X) = \{ \varvec{u} \in \mathcal {D}: \mathrm{supp}( \varvec{u} ) \subseteq X ~ \mathrm{and}~|\varvec{u}| = h(X) \}\), where \(\mathrm{supp}(\varvec{u}) = \{ i \in J: u_i \ne 0 \}\). Then there is a natural bijection between \(\mathcal {B} (\mathcal {Z}, X) \) and \(\mathcal {B} (\mathcal {Z}|X)\).

We next introduce a class of polymatroids as follows.

Definition 6

(Boolean polymatroids). Let S be a finite set and consider a family \((S_i)_{i \in J}\) of subsets of S. The mapping \(h : \mathcal {P}(J) \rightarrow \mathbb {Z}\) defined by \(h(X) = |\bigcup _{i \in X} S_i|\) is clearly the rank function of an integer polymatroid. Integer polymatroids that can be defined in this way are called Boolean polymatroids.

Boolean polymatroids are very simple integer polymatroids that are representable over every finite field \(\mathbb {K}\). If \(| S| = t\), we can assume that S is a basis of the vector space \(V = \mathbb {K}^t\). For every \(i \in J\), consider the vector subspace \(V_i = \langle S_i \rangle \). Obviously, these subspaces form a \(\mathbb {K}\)-representation of the Boolean polymatroid defined by \((S_i)_{i \in J}\).
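The following sketch makes Definition 6 concrete: it computes the rank function of a Boolean polymatroid for a small made-up family \((S_i)_{i \in J}\) and enumerates its integer bases by brute force. The family and the brute-force search are illustrative assumptions of the sketch.

```python
# Rank function and integer bases of a Boolean polymatroid (Definition 6).
from itertools import combinations, product

S = [{1, 2, 3}, {2, 3, 4}, {4, 5}]              # S_1, S_2, S_3 inside S = [5]
J = range(len(S))

def h(X):
    """h(X) = |union of the S_i with i in X|."""
    return len(set().union(*(S[i] for i in X))) if X else 0

def is_independent(u):
    return all(sum(u[i] for i in X) <= h(X)
               for r in range(1, len(S) + 1) for X in combinations(J, r))

D = [u for u in product(*[range(h([i]) + 1) for i in J]) if is_independent(u)]
bases = [u for u in D if sum(u) == h(J)]        # the maximal independent vectors
print(h([0, 1]), bases)                         # 4, followed by the integer bases
```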

2.3 Secret Sharing Schemes, Matroids and Polymatroids

In this section we review the methods to construct ideal linear secret sharing schemes for multipartite access structures by matroids and polymatroids. Most results of this section are from [17,18,19]. We first introduce the method to construct ideal linear schemes by matroids.

Let \(P = \{p_1, \ldots , p_n \}\) be a set of participants and \(p_0 \notin P\) be the dealer. Suppose \(\mathcal {M}\) is a matroid with rank function h on the finite set \(P' = P \cup \{p_0\}\), and let

$$ \varGamma _{p_0} (\mathcal {M}) = \{ A \subseteq P : h (A \cup \{p_0\} ) = h(A) \}. $$

Then \(\varGamma _{p_0} (\mathcal {M})\) is an access structure on P because the monotonicity property is satisfied; it is called the port of the matroid \(\mathcal {M}\) at the point \(p_0\).

Matroid ports play a very important role in secret sharing. Brickell [9] proved that the ports of representable matroids admit ideal secret sharing schemes and provided a method to construct ideal schemes for ports of \(\mathbb {K}\)-representable matroids. These schemes are called \(\mathbb {K}\)-vector space secret sharing schemes. This method was described by Massey [26, 27] in terms of linear codes. Suppose M is a \(k \times (n+1)\) matrix over \(\mathbb {K}\). Then the columns of M determine a \(\mathbb {K}\)-representable matroid \(\mathcal {M}\) with ground set \(P'\) such that the columns of M are in one-to-one correspondence with the elements of \(P'\). In this situation, the matrix M is called a \(\mathbb {K}\)-representation of the matroid \(\mathcal {M}\). An \((n+1, k)\) linear code C over \(\mathbb {K}\) is a k-dimensional subspace of \(\mathbb {K}^{n+1}\), and a generator matrix of C is a matrix whose rows span C. Thus M is a generator matrix of some \((n+1, k)\) linear code C over \(\mathbb {K}\). A secret sharing scheme can be constructed from the matrix M, based on the code C, as follows.

Let \(s \in \mathbb {K}\) be a secret value. Select a codeword \(\varvec{c} = (c_0, c_1, \ldots , c_n) \in C\) uniformly at random such that \(c_0 = s\), and define the share vector as \( (c_1, \ldots , c_n)\); that is, \(c_i\) is the share of participant \(p_i\) for \(i \in [n]\). Let LSSS(M) denote this secret sharing scheme.

Theorem 1

([26]). LSSS(M) is a perfect ideal linear scheme such that a set \(\mathcal {V} \subset P \) is qualified if and only if the first column in M is a linear combination of the columns with indices in \(\mathcal {V}\).
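As a toy illustration of LSSS(M) and Theorem 1, the following sketch works over a small prime field with a Shamir-style generator matrix; the prime p, the matrix M, and the restriction to authorized sets of size k are illustrative assumptions, not the paper's parameters.

```python
# A minimal sketch of LSSS(M) over a small prime field.  Here M generates a
# (4, 2) code, so any two participants form an authorized set.
import random
import sympy as sp

p = 11                                     # assumed prime modulus
M = sp.Matrix([[1, 1, 1, 1],               # column 0: dealer p_0; columns 1-3: p_1..p_3
               [0, 1, 2, 3]])

def share(s):
    """Pick a random codeword c with c_0 = s; return the shares (c_1, ..., c_n)."""
    a = sp.Matrix([[s, random.randrange(p)]])    # works because M_0 = (1, 0)^T
    c = (a * M).applyfunc(lambda e: e % p)
    return list(c)[1:]

def reconstruct(V, shares_V):
    """V: column indices of the chosen participants (column 0 is the dealer);
    here |V| = M.rows, so the corresponding columns form a square matrix."""
    A = M.extract(range(M.rows), list(V))
    lam = (A.inv_mod(p) * M[:, 0]).applyfunc(lambda e: e % p)   # M_0 = sum lam_j M_j
    return sum(l * s for l, s in zip(lam, shares_V)) % p

shares = share(7)
assert reconstruct([1, 2], shares[:2]) == 7      # p_1 and p_2 recover the secret
```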

The dual code \(C^{\bot }\) of a code C consists of all vectors \(\varvec{c^{\bot }} \in \mathbb {K}^{n+1}\) such that \(\langle \varvec{c^{\bot }}, \varvec{c} \rangle = 0\) for all \(\varvec{c} \in C\), where \(\langle \cdot , \cdot \rangle \) denotes the standard inner product. Suppose M and \(M^*\) are generator matrices of some \((n+1, k)\) linear code C and of its dual \(C^{\bot }\) over \(\mathbb {K}\), respectively. Then LSSS(M) and \(LSSS(M^*)\) realize \(\varGamma \) and \(\varGamma ^*\), respectively. Sometimes it is not easy to construct an ideal linear scheme for a given access structure \(\varGamma \) directly. In this case we can first construct a scheme for \(\varGamma ^*\) and then translate it into an ideal linear scheme for \(\varGamma \) using the explicit transformation of [20]. In Sect. 5.2, we will present the construction for LCASs (4) by this method.

This paper deals with unconditionally secure perfect ideal linear secret sharing schemes. Brickell’s method can be applied to construct such schemes. Nevertheless, it is difficult to determine whether a given access structure admits an ideal linear secret sharing scheme or not. Moreover, even for access structures that admit such schemes, it may not be easy to construct them. Some strategies based on matroids and polymatroids were presented in [17, 19] to attack those problems for multipartite access structures.

The relationship between ideal multipartite access structures and integer polymatroids is summarized as follows.

Theorem 2

([17]). Let \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be a partition of the set P, and let \(\mathcal {Z}' = (J'_m, h)\) be an integer polymatroid such that \(h ( \{ 0 \} ) = 1\) and \(h ( \{ i \} ) \le |\mathrm {\Pi }_i|\) for every \( i \in J_m\). Take \(\varGamma _{0} (\mathcal {Z}') = \{ X \subseteq J_m : h (X \cup \{0\} ) = h(X) \}\) and

$$\varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi }) = \{ \varvec{u} \in \mathbf {P} : \mathrm{there ~exist}~X \in \varGamma _{0} (\mathcal {Z}')~\mathrm{and}~ \varvec{v} \in \mathcal {B}(\mathcal {Z}' | J_m, X) ~\mathrm{such~that}~ \varvec{v} \le \varvec{u} \} .$$

Then \(\varGamma = \varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\) is a \(\mathrm {\Pi }\)-partite access structure on P and a matroid port. Moreover, if \(\mathcal {Z}'\) is \(\mathbb {K}\)-representable, then \(\varGamma \) can be realized by some \(\mathbb {L}\)-vector space secret sharing scheme over every large enough finite extension \(\mathbb {L}\) of \(\mathbb {K}\). In addition, \(\mathcal {Z}'\) is univocally determined by \(\varGamma \) if it is connected.

The general method presented by Farràs et al. [17] to construct ideal schemes for the multipartite access structures satisfying the conditions in Theorem 2 is summarized as follows.

Let \(\mathrm {\Pi }_0 = \{ p_0\}\) and let \(\mathrm {\Pi }' = (\mathrm {\Pi }_i)_{i \in J'_m}\) be a partition of the set \(P' = P \cup \{p_0\} \) such that \(|\mathrm {\Pi }_i |= n_i\). Given a connected \(\mathrm {\Pi }\)-partite access structure \(\varGamma \) satisfying the conditions in Theorem 2, proceed as follows:

  • Step 1. Find an integer polymatroid \(\mathcal {Z}'\) such that \(\varGamma = \varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\);

  • Step 2. Find a representation \((V_i)_{i \in J'_m}\) of \(\mathcal {Z}'\) over some finite field \(\mathbb {K}\);

  • Step 3. Over some finite extension of \(\mathbb {K}\), find a representation of the matroid \(\mathcal {M}\) such that \(\varGamma \) is a port of \(\mathcal {M}\). More precisely, construct a \(k \times (n+1)\) matrix \(M = (M_{0} | M_1| \cdots | M_m)\) with the following properties:

    1.

      \(k = h(J'_m)\) and \(n = \sum _{i =1}^m n_i\);

    2.

      \(M_i\) is a \(k \times n_i\) matrix whose columns are vectors in \(V_i\);

    3.

      \(M_{\varvec{u}}\) is nonsingular for any \(\varvec{u} \in \mathcal {B}( \mathcal {Z}')\), where \(M_{\varvec{u}}\) is the \(k \times k\) submatrix of M formed by any \(u_i\) columns in every \(M_i\).

Farràs et al. [17,18,19] proved that all the multipartite access structures introduced in Sect. 2.1 are connected matroid ports. Moreover, they presented the associated integer polymatroids and proved that they are representable. Therefore, the results in [17,18,19] solve Step 1. In this paper, we will give an efficient method to explicitly solve Steps 2 and 3, and hence to construct ideal linear schemes for those families of access structures. Our method is based on the properties of determinant functions.
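The third property in Step 3 is the expensive one: checked naively, it requires testing the non-singularity of every submatrix \(M_{\varvec{u}}\). The following sketch shows this brute-force check, which is exponential in general and is exactly what the explicit constructions below avoid; the function name and interface are illustrative.

```python
# Brute-force version of the check in Step 3: every k x k submatrix M_u,
# obtained by picking u_i columns from each block M_i, must be nonsingular.
from itertools import combinations, product
import sympy as sp

def is_matroid_representation(blocks, bases):
    """blocks[i] is the k x n_i matrix M_i; bases is the list B(Z')."""
    for u in bases:
        choices = [combinations(range(blocks[i].cols), u[i]) for i in range(len(u))]
        for pick in product(*choices):
            cols = [blocks[i][:, j] for i in range(len(u)) for j in pick[i]]
            if sp.Matrix.hstack(*cols).det() == 0:
                return False
    return True
```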

3 Some Properties of Determinant Functions

In this section, we study determinant functions of two classes of matrices with special form, which will be applied to the constructions of representations of matroids associated to multipartite access structures.

3.1 The First Class of Matrices

In this section, we introduce an approach to calculating the determinant of a class of matrices with special form. This approach is very useful for calculating the determinants of matrices containing zero blocks. This class of matrices will be applied to the construction of the representable matroid associated to IHASs. We will use the well-known Laplace expansion theorem for determinants.

Theorem 3

(The Laplace Expansion Theorem). Take an \(n \times n\) matrix A. Let \(\varvec{r}=(r_{1}, \ldots , r_{k})\) be a list of k column indices for A such that \(1 \le r_1< \cdots< r_k \le n\), where \(1 \le k < n\), and let \(\varvec{t}=(t_{1}, \ldots , t_{k})\) be a list of k row indices for A such that \(1 \le t_1< \cdots< t_k \le n\). The submatrix obtained by keeping the entries in the intersections of the rows and columns in the lists is denoted by \(S(A: \varvec{r}, \varvec{t})\). The submatrix obtained by removing the entries in the rows and columns in the lists is denoted by \(S'(A: \varvec{r}, \varvec{t})\). Then the determinant of A is

$$\begin{aligned} \det (A) = (-1)^{| \varvec{r} |} \sum _{ \varvec{t} \in \mathcal {T}} (-1)^{|\varvec{t}|} \det \big (S(A: \varvec{r}, \varvec{t})\big ) \det \big (S'(A: \varvec{r}, \varvec{t})\big ), \end{aligned}$$

where \(\mathcal {T}\) denotes the set of all k-tuples \(\varvec{t}=(t_{1}, \ldots , t_{k})\) for which \(1 \le t_1< \cdots< t_k \le n\).
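The following sketch checks Theorem 3 numerically for a random integer matrix with sympy, using the index conventions of the statement (1-based indices, sign \((-1)^{|\varvec{r}|+|\varvec{t}|}\)); it is only a sanity check, not part of the construction.

```python
# Numerical check of the Laplace expansion along a fixed set of columns.
from itertools import combinations
import sympy as sp

def laplace_along_columns(A, r):
    """Expand det(A) along the columns listed in r (1-based indices)."""
    n, k = A.rows, len(r)
    rows_all = list(range(1, n + 1))
    cols_rest = [c for c in rows_all if c not in r]
    total = 0
    for t in combinations(rows_all, k):
        t_rest = [x for x in rows_all if x not in t]
        sign = (-1) ** (sum(r) + sum(t))
        S = A.extract([i - 1 for i in t], [j - 1 for j in r])                      # S(A: r, t)
        S_prime = A.extract([i - 1 for i in t_rest], [j - 1 for j in cols_rest])   # S'(A: r, t)
        total += sign * S.det() * S_prime.det()
    return total

A = sp.randMatrix(5, 5, min=-3, max=3)
assert laplace_along_columns(A, [1, 2]) == A.det()
```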

Example 1

Take a \(7 \times 7\) matrix \(A= (A_1| A_2| A_3)\) where \(A_1\) and \(A_2\) are \(7 \times 2\) blocks, and \(A_3\) is a \(7 \times 3\) block. Then the determinant of A can be calculated as follows.

Take \(\varvec{r_1}=(r_{1,1}, r_{1,2}) = (1, 2)\) and \(\varvec{t_1} = (t_{1,1}, t_{1,2} )\). Then from Theorem 3,

$$\det (A) = (-1)^{|\varvec{r_1}|} \sum _{ \varvec{t_1} \in \mathcal {T}_1 } (-1)^{|\varvec{t_1}|} \det \big (S(A: \varvec{r_1}, \varvec{t_1})\big ) \det \big (S'(A: \varvec{r_1}, \varvec{t_1})\big ),$$

where \(\mathcal {T}_1\) denotes the set of all 2-tuples \(\varvec{t_1} = (t_{1,1}, t_{1,2})\) for which \(1 \le t_{1,1} < t_{1,2} \le 7\). We proceed to calculate \(\det (S'(A: \varvec{r_1}, \varvec{t_1}))\) by Theorem 3. Take \(\varvec{r_2} = ( r_{2,1}, r_{2,2}) = ( 3,4)\), \(\varvec{r} = (\varvec{r_1}, \varvec{r_2}) = (r_{1,1}, r_{1,2}, r_{2,1}, r_{2,2})\), \(\varvec{t_2} = (t_{2,1}, t_{2,2} )\), \(\varvec{t} = (\varvec{t_1}, \varvec{t_2}) = (t_{1,1}, t_{1,2}, t_{2,1}, t_{2,2})\), and let \(\mathcal {T}_2\) denote the set of all 2-tuples \(\varvec{t_2} = (t_{2,1}, t_{2,2})\) for which \(1 \le t_{2,1} < t_{2,2} \le 7\). For a given \(\varvec{t_1} = (t_{1,1}, t_{1,2} )\), let

$$\mathcal {T}_2(\varvec{t_1} ) = \mathcal {T}_2 \backslash \{(t_{2,1}, t_{2,2}) : t_{2,1} \in \{ t_{1,1}, t_{1,2} \} ~ \mathrm{or} ~ t_{2,2} \in \{ t_{1,1}, t_{1,2} \} \}.$$

Then

$$\det (S'(A: \varvec{r_1}, \varvec{t_1})) = (-1)^{|\varvec{r_2}|}{\sum _{\varvec{t_2} \in \mathcal {T}_2(\varvec{t_1} )}{(-1)^{|\varvec{t_2}|} \det \big (S(A:\varvec{r_2}, \varvec{t_2})\big ) \det \big (S'(A: \varvec{r}, \varvec{t})\big ).}}$$

Hence the determinant of A can also be denoted by

$$\begin{aligned} \det (A) = (-1)^{|\varvec{r}|} \sum _{\varvec{t_1} \in \mathcal {T}_1 }{\sum _{\varvec{t_2} \in \mathcal {T}_2(\varvec{t_1} )}(-1)^{|\varvec{t}|} \det \big (S(A: \varvec{r_1}, \varvec{t_1})\big ) \det \big (S(A: \varvec{r_2}, \varvec{t_2})\big ) \det \big (S'(A : \varvec{r}, \varvec{t})\big ).} \end{aligned}$$

In general, we have the following result.

Proposition 1

Take an \(n \times n\) matrix \(A = ( A_1| \cdots | A_m)\) where \(A_i\) is an \(n \times n_i\) matrix, and take \(n_0 = 0\). For every \(i \in J_m\), let \(\varvec{r_i}=(r_{i,1},\ldots , r_{i,n_i}) = (\sum _{j=0}^{i-1} n_j +1, \ldots , \sum _{j=0}^{i} n_j)\), and let \(\varvec{t_i}=(t_{i,1}, \ldots , t_{i,n_i})\) be a list of \(n_i\) row indices for \(A_i\) such that \(1 \le t_{i,1}< \cdots < t_{i,n_i} \le n\). Take \(\varvec{r} = (\varvec{r_1}, \ldots , \varvec{r_{m-1}})\) and \(\varvec{t} = (\varvec{t_1}, \ldots , \varvec{t_{m-1}})\). Let \(\mathcal {T}_i\) denote the set of all \(n_i\)-tuples \(\varvec{t_i} = (t_{i,1}, \ldots , t_{i,n_i})\) for which \(1 \le t_{i,1}< \cdots < t_{i,n_i} \le n\). For a given \(\varvec{t_i} = (t_{i,1}, \ldots , t_{i,n_i})\), take \(S(\varvec{t_i} )= \{ t_{i,1}, \ldots , t_{i,n_i} \}\), and for given \(\varvec{t_{i'}} = (t_{i',1}, \ldots , t_{i', n_{i'}} )\) with \(i' \in [i-1]\), take

$$\mathcal {T}_i(\varvec{t_{i'}}, i' \in [i-1] ) = \mathcal {T}_i \big \backslash \big \{(t_{i,1},\ldots , t_{i,n_i}) : t_{i,j} \in \bigcup _{i'=1}^{i-1} S(\varvec{t_{i'}} ) ~ \mathrm{for~some} ~ j \in [n_i]\big \}.$$

Then

$$\begin{aligned} \det (A) = (-1)^{|\varvec{r}|} \sum _{ \varvec{t_1} \in \mathcal {T}_1 } \sum _{\varvec{t_2} \in \mathcal {T}_2 (\varvec{t_{1}})} \cdots \sum _{\begin{array}{c} \varvec{t_{m - 1}} \in \mathcal {T}_{m - 1}( \varvec{t_{i'}}, \\ ~~~~~~~~~~~~i' \in [m-2] ) \end{array}} (-1)^{|\varvec{t}|} \prod _{i=1}^{m-1} \det \big (S(A: {\varvec{r_i}}, \varvec{t_i})\big ) \det \big (S'(A:{\varvec{r}}, \varvec{t})\big ). \end{aligned}$$

Proof

Theorem 3 implies

$$ \det (A) = (-1)^{|\varvec{r_1}|} \sum \limits _{\varvec{t_1} \in \mathcal {T}_1 } (-1)^{|\varvec{t_1}|} \det (S(A:\varvec{r_1}, \varvec{t_1})) \det (S'(A: \varvec{r_1}, \varvec{t_1})). $$

We proceed to calculate \(\det (S'(A: \varvec{r_1}, \varvec{t_1}))\) by Theorem 3 and the following result can be obtained

$$ \det (S' (A : \varvec{r_1}, \varvec{t_1})) = (-1)^{|\varvec{r_2}|} \sum _{\varvec{t_2} \in \mathcal {T}_2 (\varvec{t_1} )} (-1)^{|\varvec{t_2}|} \det (S(A : \varvec{r_2}, \varvec{t_2})) \det (S'(A :(\varvec{r}_1, \varvec{r}_2) , (\varvec{t}_1, \varvec{t}_2))). $$

Accordingly, \(\det (S'( A : (\varvec{r}_1,\ldots , \varvec{r}_i) , (\varvec{t}_1, \ldots , \varvec{t}_i)))\) can be obtained by Theorem 3 for \(i \in [2,m-1]\), and the result follows.    \(\square \)

Example 2

Take

$$\begin{aligned} A = \left( \begin{array}{cc|cc|ccc} a_{1,1} &{} a_{1,2} &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ a_{2,1} &{} a_{2,2} &{} a_{2,3} &{} a_{2,4} &{} 0 &{} 0 &{} 0 \\ a_{3,1} &{} a_{3,2} &{} a_{3,3} &{} a_{3,4} &{} a_{3,5} &{} a_{3,6} &{} a_{3,7} \\ 0 &{} 0 &{} a_{4,3} &{} a_{4,4} &{} a_{4,5} &{} a_{4,6} &{} a_{4,7} \\ 0 &{} 0 &{} a_{5,3} &{} a_{5,4} &{} a_{5,5} &{} a_{5,6} &{} a_{5,7} \\ 0 &{} 0 &{} 0 &{} 0 &{} a_{6,5} &{} a_{6,6} &{} a_{6,7} \\ 0 &{} 0 &{} 0 &{} 0 &{} a_{7,5} &{} a_{7,6} &{} a_{7,7} \\ \end{array}\right) . \end{aligned}$$

Then from Example 1,

$$\begin{aligned} \det (A) = ( -1 )^{|\varvec{r}|} \sum _{ \varvec{t_1} \in \mathcal {T}_1 } \sum _{\varvec{t_2} \in \mathcal {T}_2(\varvec{t_1} )} (-1)^{|\varvec{t}|} \det (S(A: \varvec{r_1}, \varvec{t_1})) \det ( S(A: \varvec{r_2}, \varvec{t_2})) \det ( S'( A: \varvec{r}, \varvec{t})). \end{aligned}$$

Note that \(\mathcal {T}_1\) and \(\mathcal {T}_2\) are different from the ones in Example 1, because here A has some zero blocks. In this case, \(\mathcal {T}_1\) denotes the set of all 2-tuples \(\varvec{t_1} = (t_{1,1}, t_{1,2})\) for which \(1 \le t_{1,1} < t_{1,2} \le 3\) and \(\mathcal {T}_2\) denotes the set of all 2-tuples \(\varvec{t_2} = (t_{2,1}, t_{2,2} )\) for which \(2 \le t_{2,1} < t_{2,2} \le 5\); row choices outside these ranges would select a zero row in the corresponding block and hence contribute nothing to the sum.

This example shows that Proposition 1 is particularly suitable for calculating the determinant of a matrix that has many zero blocks in the submatrices formed by groups of its columns.

3.2 The Second Class of Matrices

In this section, we introduce an approach to calculating the determinant of another class of matrices with special form. These matrices will be applied to the construction of the representable matroids associated to UCASs and LCASs. Recall that the determinant function is linear in the columns of a matrix, as follows.

Proposition 2

If a and b are scalars, \(\varvec{\bar{\alpha }}\) and \(\varvec{\bar{\beta }}\) are column vectors, and B is some matrix, then \( \det \big ( (a \varvec{\bar{\alpha }} + b \varvec{\bar{\beta }} ~|B) \big ) = a \det \big ( (\varvec{\bar{\alpha }} ~| B) \big ) + b \det \big ( (\varvec{\bar{\beta }}~| B) \big ). \)

Example 3

Let \(A = (a_{u,v} )_{2 \times 3}\) and \(B = (b_{u,v} )_{3 \times 2}\) be a \(2 \times 3\) matrix and a \(3 \times 2\) matrix, respectively. Then \(A B = \big ( \sum _{i_1 =1}^{3} b_{i_1,1} \varvec{\bar{a}_{i_1}} \big | \sum _{i_2=1}^{3} b_{i_2,2} \varvec{\bar{a}_{i_2}} \big ) \) is a \(2 \times 2\) matrix, where \(\varvec{\bar{a}_{i}} \) denotes the ith column of A. Hence, from Proposition 2,

$$\begin{aligned} \begin{aligned} \det (A B)&= \sum _{i_1=1}^{3} b_{i_1,1} \det \bigg ( \Big ( \varvec{\bar{a}_{i_1}} \Big | \sum _{i_2=1}^{3} b_{i_2, 2} \varvec{\bar{a}_{i_2}} \Big ) \bigg ) \\&= \sum _{i_1=1}^{3} \sum _{i_2=1}^{3} b_{i_1,1} b_{i_2,2} \det \big ( ( \varvec{\bar{a}_{i_1}} | \varvec{\bar{a}_{i_2}}) \big ) \\&= b_{1,1} b_{2,2} \det \big ( ( \varvec{\bar{a}_{1}} | \varvec{\bar{a}_{2}} ) \big ) + b_{1,1} b_{3,2} \det \big ( ( \varvec{\bar{a}_{1}}| \varvec{\bar{a}_{3}} )\big ) + b_{2,1} b_{1,2} \det \big ( ( \varvec{\bar{a}_{2}} | \varvec{\bar{a}_{1}} ) \big ) \\&~~~ + b_{2,1} b_{3,2} \det \big ( ( \varvec{\bar{a}_{2}}| \varvec{\bar{a}_{3}} ) \big ) + b_{3,1} b_{1,2} \det \big ( ( \varvec{\bar{a}_{3}} |\varvec{\bar{a}_{1}} ) \big ) + b_{3,1} b_{2,2} \det \big ( ( \varvec{\bar{a}_{3}} |\varvec{\bar{a}_{2}}) \big ) \\&= \sum _{1 \le j_1< j_2 \le 3} \det \left( \begin{array}{cc} b_{j_1,1} &{} b_{j_1,2} \\ b_{j_2,1} &{} b_{j_2,2} \end{array}\right) \det \big ( ( \varvec{\bar{a}_{j_1}} | \varvec{\bar{a}_{j_2}} ) \big ). \end{aligned} \end{aligned}$$

In general, we have the following proposition.

Proposition 3

Take a \(k \times k \) matrix (AB|D) where \(A= (a_{u,v})\) is a \(k \times r \) matrix, \(B= (b_{u,v})\) is an \(r \times l \) matrix, and \(k \ge r \ge l\), and take \(\varvec{j} = (j_1, \ldots , j_l)\) such that \(1 \le j_{1}< \cdots < j_{l} \le r\). Let \(A(\varvec{j})\) denote the \(k \times l\) submatrix of A formed by its \(j_{1}\)th, ..., \(j_{l}\)th columns, and let \(B(\varvec{j})\) denote the \(l \times l\) submatrix of B formed by its \(j_{1}\)th, ..., \(j_{l}\)th rows. Then

$$ \det \big ( (A B | D)\big ) = \sum _{\varvec{j} \in \mathcal {J}} \det \big ( B(\varvec{j}) \big ) \det \big ( ( A(\varvec{j}) | D) \big ), $$

where \(\mathcal {J}\) denotes the set of all l-tuples \(\varvec{j} = (j_1, \ldots , j_l)\) for which \(1 \le j_{1}< \cdots < j_{l} \le r\).

Proof

If there are two identical columns in a square matrix, then its determinant equals 0. Therefore, from this and Proposition 2,

$$\begin{aligned} \begin{aligned} \det \big ( (A B | D) \big )&= \det \bigg ( \Big ( \sum _{i_1=1}^{r} b_{i_1,1} \varvec{\bar{a}_{i_1}} \Big | \cdots \Big | \sum _{i_l=1}^{r} b_{i_l, l} \varvec{\bar{a}_{i_l}} \Big | ~D \Big ) \bigg ) \\&= \sum _{ i_v \in [r], v \in [l] } \Big ( \prod _{v \in [l]} b_{i_v, v} \Big ) \det \big ( ( \varvec{\bar{a}_{i_1}} | \cdots | \varvec{\bar{a}_{i_l}} | D ) \big ) \\&= \sum _{\varvec{i}} \Big ( \prod _{v \in [l]} b_{i_v, v} \Big ) \det \big ( ( \varvec{\bar{a}_{i_1}} | \cdots | \varvec{\bar{a}_{i_l}} | D) \big ), \end{aligned} \end{aligned}$$

where the summation is over all l-tuples \(\varvec{i} = (i_1, \ldots , i_l)\) for which \(i_v \in [r]\) and \(i_v \ne i_{v'}\) whenever \(v \ne v'\), \(v, v' \in [l]\).

For a given \(\varvec{j} = (j_1, \ldots , j_l)\) such that \(1 \le j_{1}< \cdots < j_{l} \le r\), let \(S(\varvec{j})\) denote the set of all the permutations on the set \(\{ j_1, \ldots , j_l \}\). We claim that

$$ \sum _{\varvec{i} \in S(\varvec{j})} \Big ( \prod _{v \in [l]} b_{i_v, v} \Big ) \det \big ( ( \varvec{\bar{a}_{i_1}}| \cdots |\varvec{\bar{a}_{i_l}} | D ) \big ) = \det \big ( B(\varvec{j}) \big ) \det \big ( ( A(\varvec{j}) | D) \big ) $$

Without loss of generality, we may assume that \(\varvec{j} = (1, \ldots , l)\), that is \(j_v = v\) with \(v \in [l]\). Then

$$ \Big ( \prod _{v \in [l]} b_{i_v, v} \Big ) \det \big ( ( \varvec{\bar{a}_{i_1}} | \cdots |\varvec{\bar{a}_{i_l}} | D ) \big ) = \mathrm{sgn} (\varvec{i}) \Big ( \prod _{v \in [l]} b_{i_v, v} \Big ) \det \big ( ( \varvec{\bar{a}_{1}} | \cdots |\varvec{\bar{a}_{l}} | D ) \big ), $$

where \(\mathrm{sgn} (\varvec{i})\) denotes the sign of \(\varvec{i}\). Note that for \(\varvec{j} = (1, \ldots , l)\),

$$ \sum _{\varvec{i} \in S(\varvec{j}) } \mathrm{sgn} (\varvec{i}) \Big ( \prod _{v \in [l]} b_{i_v, v} \Big ) = \det \big (B(\varvec{j}) \big ). $$

This implies the claim, and the result follows.    \(\square \)
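Proposition 3 can also be checked numerically; the following sketch verifies it with sympy for random integer matrices of small, arbitrarily chosen dimensions.

```python
# Numerical check of Proposition 3 for random A (k x r), B (r x l), D (k x (k-l)).
from itertools import combinations
import sympy as sp

def rhs_prop3(A, B, D):
    """Right-hand side of Proposition 3: sum over column selections j of A."""
    k, r = A.shape
    l = B.cols
    total = 0
    for j in combinations(range(r), l):
        Bj = B.extract(list(j), list(range(l)))     # rows j of B, i.e. B(j)
        Aj = A.extract(list(range(k)), list(j))     # columns j of A, i.e. A(j)
        total += Bj.det() * sp.Matrix.hstack(Aj, D).det()
    return total

k, r, l = 5, 4, 3
A = sp.randMatrix(k, r, min=-3, max=3)
B = sp.randMatrix(r, l, min=-3, max=3)
D = sp.randMatrix(k, k - l, min=-3, max=3)
assert sp.Matrix.hstack(A * B, D).det() == rhs_prop3(A, B, D)
```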

We next give a formula for the determinant of a matrix with special form which will be used in the schemes for UCASs and LCASs.

Proposition 4

Let \(G = (A_1B_1 | \cdots | A_m B_m)\) be a \(k \times k \) matrix such that \(A_i\) is a \(k \times r_i\) block and \(B_i\) is an \(r_i \times l_i\) block, where \(l_i \le r_i \le k\) and \(\sum _{i=1}^{m} l_i = k\). For any \(\varvec{j_i} = (j_{i,1}, \ldots , j_{i,l_i})\) with \(i \in J_m\) such that \(1 \le j_{i,1}< \cdots < j_{i,l_i} \le r_i\), let \(A_i(\varvec{j_i} )\) denote the \(k \times l_i\) submatrix of \( A_i\) formed by its \(j_{i,1}\)th, ..., \(j_{i,l_i}\)th columns, and let \(B_{i}(\varvec{j_i} )\) denote the \(l_i \times l_i\) submatrix of \(B_{i}\) formed by its \(j_{i,1}\)th, ..., \(j_{i,l_i}\)th rows. Then

$$\begin{aligned} \det ( G) = \sum _{\varvec{j_i}, i\in [m]} \bigg ( \prod _{i=1}^{m} \det \big ( B_{i} ( \varvec{j_i}) \big ) \bigg ) \det \Big ( \big ( A_1(\varvec{j_1}) | \cdots | A_m(\varvec{j_m}) \big ) \Big ), \end{aligned}$$

where the summation is over all \(l_i\)-tuples \(\varvec{j_i} = (j_{i,1}, \ldots , j_{i,l_i})\) with \(i \in J_m\), for which \(1 \le j_{i,1}< \cdots < j_{i,l_i} \le r_i\).

Proof

Let \(A_i := (a_{u,v}^{(i)})\) with \(u \in [k]\) and \(v \in [r_i]\), \(B_i := (b_{u,v}^{(i)})\) with \(u \in [r_i]\) and \(v \in [l_i]\), and \(\varvec{\bar{a}^{(i)}_{j} }\) denote the jth column of matrix \(A_i\). From Proposition 3,

$$\begin{aligned} \begin{aligned} \det (G)&= \det \bigg ( \Big ( \sum _{i_{1,1}=1}^{r_1} b^{(1)}_{i_{1,1},1} \varvec{\bar{a}^{(1)}_{i_{1,1}}} \Big | \cdots \Big | \sum _{i_{1, l_1}=1}^{r_1} b^{(1)}_{i_{1, l_1}, l_1} \varvec{\bar{a}^{(1)}_{i_{1, l_1}}} \Big | A_2 B_{2} \Big | \cdots \Big | A_m B_{m} \Big ) \bigg ) \\&= \sum _{\varvec{j_1}} \det \big ( B_{1}(\varvec{j_1} ) \big ) \det \Big ( \big ( A_1( \varvec{j_1} )| A_2 B_{2}| \cdots | A_m B_{m} \big ) \Big ), \end{aligned} \end{aligned}$$

where the summation is over all \(l_1\)-tuples \(\varvec{j_1} = (j_{1,1}, \ldots , j_{1,l_1})\), for which \(1 \le j_{1,1}< \cdots < j_{1,l_1} \le r_1\). The conclusion is obtained by treating \(A_i B_{i}\) for \(i \in [2, m]\) in the same way as \(A_1 B_{1}\).    \(\square \)

4 Secret Sharing Schemes for Ideal Hierarchical Access Structures

In this section, we construct ideal linear secret sharing schemes realizing IHASs by an efficient method. We will present two classes of constructions based on the same representation of an integer polymatroid. We first present an integer polymatroid \(\mathcal {Z}'\) satisfying Theorem 2 such that the IHASs (1) are of the form \(\varGamma _{0} (\mathcal {Z}', \mathrm {\Pi })\).

Given two vectors \(\varvec{\hat{k}, ~k} \in \mathbb {Z}_{+}^{J'_m}\) such that \( \hat{k}_0 = \hat{k}_1 =0\), \(k_0 = 1\), \(k_m=k\), and \( \hat{k}_i \le \hat{k}_{i+1} < k_i \le k_{i+1}\) for \(i \in [0,m-1]\), consider the subsets \(S_i = [\hat{k}_i +1, k_i]\) of the set \(S = [k]\) and the Boolean polymatroid \(\mathcal {Z}' = \mathcal {Z}' (\varvec{\hat{k}, ~k})\) with ground set \(J'_m\) defined from them. The following result was presented in Section IX of [18].

Lemma 1

Let \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be a partition of the set P with \(|\mathrm {\Pi }_i| \ge h(\{ i \}) = k_i - \hat{k}_i\). Then the IHASs (1) are of the form \(\varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\).

Now we introduce a linear representation of the polymatroid defined in Lemma 1, that is, a collection \((V_i)_{i \in J'_m}\) of subspaces of some vector space. Recall that Boolean polymatroids are representable over every finite field. Here, we give a simple representation of \(\mathcal {Z}'\) based on the unit matrix as follows.

Take a \(k \times k\) unit matrix \(I_k\), and for every \(i \in J'_m\), let \(E_i\) denote the submatrix of \(I_k\) formed by its \((\hat{k}_i +1)\)th to \(k_i\)th columns. Consider the \(\mathbb {F}_{q}\)-vector subspace \(V_i \subseteq \mathbb {F}^k_{q}\) spanned by all the columns of \(E_i\). Let \(\mathcal {Z}' = (J'_m, h)\) be the integer polymatroid such that \( h(X) = \dim \big ( \sum _{i \in X} V_i\big )\) for every \(X \subseteq J'_m\). We have the following result.

Proposition 5

For the integer polymatroid \(\mathcal {Z}' \) defined above, the IHASs (1) are of the form \(\varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\) and \(\mathcal {B}( \mathcal {Z}') = \mathcal {B}_1 \cup \mathcal {B}_2\), where

$$\begin{aligned}&\mathcal {B}_1 =\{\varvec{u} \in \mathbb {Z}_{+}^{J'_{m}}: |\varvec{u}| = k, u_0 = 0~ \mathrm{and}~\hat{k}_{i+1}\le |\varvec{u} ([i])| \le k_{i}~\mathrm{for~ all}~ i \in [m - 1] \}, \\&\mathcal {B}_2 =\{ \varvec{u} \in \mathbb {Z}_{+}^{J'_{m}}: |\varvec{u}| = k, u_0 = 1~ \mathrm{and}~\hat{k}_{i+1}-1 \le |\varvec{u} ([i])| \le k_{i}-1~\mathrm{for~ all}~ i \in [m -1] \}.\nonumber \end{aligned}$$
(6)

Proof

Suppose the set \(S = [k]\) and the subsets \(S_i = [\hat{k}_i +1, k_i]\) for every \(i \in J'_m\). Then for every \(X \subseteq J'_m\), \(h(X) = \dim \big ( \sum _{i \in X} V_i\big ) = |\cup _{i \in X} S_i|\). This implies that \(\mathcal {Z}'\) coincides with the Boolean polymatroid \(\mathcal {Z}' (\varvec{\hat{k}, ~k})\) and that \((V_i)_{i \in J'_m}\) is a linear representation of it, so the first claim follows from Lemma 1. In addition, since \(I_k\) is nonsingular and \(E_i\) is a submatrix of \(I_k\) for every \(i \in J'_m\), any k distinct column vectors taken from the matrices \(E_i\) with \(i \in J'_m\) are linearly independent, and the second claim follows.    \(\square \)

This proposition implies that the collection \((V_i)_{i \in J'_m}\) is a linear representation of the integer polymatroid \(\mathcal {Z}'\) associated to the IHASs (1). We will present two classes of constructions for ideal linear schemes realizing IHASs via representable matroids obtained from \(\mathcal {Z}'\).
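The representation just described is easy to write down explicitly. The following sketch builds the blocks \(E_i\) from the unit matrix and recovers the rank function \(h(X) = \dim \big ( \sum _{i \in X} V_i\big )\); the parameter values are illustrative (they satisfy the constraints above and correspond to the example parameters used later in the proof of Lemma 2, extended with the dealer's index 0).

```python
# Sketch: the unit-matrix representation of Z'.  hat_k[i], kk[i] store
# \hat{k}_i and k_i for i = 0, ..., m; the values below are illustrative.
import sympy as sp

def unit_matrix_representation(hat_k, kk):
    """Return the blocks E_0, ..., E_m and the rank function h of Z'."""
    k = kk[-1]
    I = sp.eye(k)
    E = [I[:, hat_k[i]:kk[i]] for i in range(len(kk))]   # columns hat_k_i+1 .. k_i
    def h(X):
        if not X:
            return 0
        return sp.Matrix.hstack(*[E[i] for i in X]).rank()
    return E, h

E, h = unit_matrix_representation(hat_k=[0, 0, 1, 2], kk=[1, 3, 5, 7])
print(h([0, 1, 2, 3]))      # h(J'_m) = k = 7
```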

4.1 Construction for Ideal Hierarchical Access Structures

In this section, we present a class of ideal linear schemes for IHASs, obtained from a representation of the matroid associated to the IHASs.

Suppose \(\mathrm {\Pi }_0 = \{ p_0\}\) and let \(\mathrm {\Pi }' = (\mathrm {\Pi }_i)_{i \in J'_m}\) and \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be the partitions of \(P' = P \cup \{p_0\} \) and P, respectively, such that \(|\mathrm {\Pi }_i |= n_i\). For every \(i \in J_m\), take distinct elements \(\beta _{i, v} \in \mathbb {F} \backslash \{0\}\) with \(v \in [n_i]\) and define a \((k_i - \hat{k}_i) \times n_i\) matrix

$$\begin{aligned} B_{i} = \big ( (\beta _{i,v} x^{m-i} )^{u-1} \big ) \quad \quad u \in [k_i- \hat{k}_i], ~v \in [n_i ]. \end{aligned}$$

Let a \(k \times (n+1)\) matrix be defined as

$$\begin{aligned} M = (M_0 | M_1 | \cdots | M_m), \end{aligned}$$
(7)

where \(M_0= (1, 0, \ldots ,0)^T\) is a k-dimensional column vector and \(M_i = E_i B_i\) for every \(i \in J_m\). Then the secret sharing scheme LSSS(M) is as follows:

Secret Sharing Scheme.

  1.

    Let \(s \in \mathbb {K}\) be a secret value. The dealer chooses a k-dimensional vector \(\varvec{a}\) uniformly at random such that \(\varvec{a} M_0 = s\);

  2.

    The share of each participant \(p_{i,j}\) from compartment \(\mathrm {\Pi }_i\) is \(\varvec{a b^T_{i,j}}\), where \(\varvec{b^T_{i,j}}\) denotes the jth column of \(M_i\) with \(i \in J_m\) and \(j \in [n_i]\).
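The following sketch builds the matrix M of (7) for small illustrative parameters (the same \(\varvec{k}\) and \(\varvec{\hat{k}}\) as in the example inside the proof of Lemma 2 below, together with made-up part sizes \(n_i\) and constants \(\beta _{i,v}\)); x is kept symbolic here, whereas in the actual scheme it is instantiated in a suitable extension field.

```python
# Sketch: the matrix M = (M_0 | M_1 | ... | M_m) of Eq. (7), with M_i = E_i B_i.
# All parameter values are illustrative; x is a symbol.
import sympy as sp

x = sp.symbols('x')
m = 3
hat_k, kk = [0, 0, 1, 2], [1, 3, 5, 7]      # \hat{k}_i and k_i for i = 0..m
n = [1, 3, 4, 5]                            # n_i = |Pi_i| (n_0 = 1 is the dealer)
k = kk[-1]
I = sp.eye(k)
E = [I[:, hat_k[i]:kk[i]] for i in range(m + 1)]

beta = {(i, v): sp.Integer(v + 1)           # distinct nonzero constants per part
        for i in range(1, m + 1) for v in range(n[i])}
B = [None] + [sp.Matrix(kk[i] - hat_k[i], n[i],
                        lambda u, v, i=i: (beta[(i, v)] * x**(m - i))**u)
              for i in range(1, m + 1)]

M0 = sp.Matrix([1] + [0] * (k - 1))         # the single column of E_0
M = sp.Matrix.hstack(M0, *[E[i] * B[i] for i in range(1, m + 1)])
print(M.shape)                              # (7, 13): k rows, n + 1 columns
```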

We proceed to show that LSSS(M) is a perfect ideal linear scheme realizing IHASs. This can be done by proving that M is a representation of the matroid associated to the IHAS (1). Obviously, M satisfies the first two conditions in Step 3 of Sect. 2.3. We will prove that it satisfies the third condition too. We first give the following lemmas.

Lemma 2

For any \(\varvec{u} \in \mathcal {B}_1\) in (6), \(\det ( M_{\varvec{u}}) \) is a nonzero polynomial in x of degree at most K, where

$$\begin{aligned} K = \frac{1}{2} \sum _{i =1}^{m-1} k_i ( k_i-1) - \sum _{i=2}^{m-1} (m-i) (k_i - k_{i-1}) \hat{k}_i. \end{aligned}$$

Proof

For every \(i \in J_m\), take

$$\begin{aligned} B'_{i} = \big ( \beta _{i, v}^{u-1} \big ) \quad u \in [k_i- \hat{k}_i], ~v \in [n_i ], \end{aligned}$$

and for any \(\varvec{u} \in \mathcal {B}_1\) in (6), let \(B_{i}(u_i)\) and \( B'_{i}(u_i)\) denote the submatrices formed by any \(u_i\) columns of \(B_i\) and \(B'_i\), respectively.

Let us first illustrate the computation with an example. Assume that \(m = 3\), \(\varvec{k}=(k_1, k_2, k_3) = (3,5,7)\), and \(\varvec{\hat{k}}=(\hat{k}_1, \hat{k}_2, \hat{k}_3) = (0,1,2)\). Take \(\varvec{u}=(u_1, u_2, u_3) = (2,2,3)\); the corresponding matrix \(M_{\varvec{u}}\) has the following form:

$$\begin{aligned} M_{\varvec{u}} = \left( \begin{array}{cc|cc|ccc} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \beta _{1,1}x^2 &{} \beta _{1,2}x^2 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 \\ (\beta _{1,1}x^2)^2 &{} (\beta _{1,2}x^2)^2 &{} \beta _{2,1}x &{} \beta _{2,2}x &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} (\beta _{2,1}x)^2 &{} (\beta _{2,2}x)^2 &{} \beta _{3,1} &{} \beta _{3,2} &{} \beta _{3,3} \\ 0 &{} 0 &{} (\beta _{2,1}x)^3 &{} (\beta _{2,2}x)^3 &{} \beta _{3,1}^2 &{} \beta _{3,2}^2 &{} \beta _{3,3}^2 \\ 0 &{} 0 &{} 0 &{} 0 &{} \beta _{3,1}^3 &{} \beta _{3,2}^3 &{} \beta _{3,3}^3 \\ 0 &{} 0 &{} 0 &{} 0 &{} \beta _{3,1}^4 &{} \beta _{3,2}^4 &{} \beta _{3,3}^4 \\ \end{array}\right) . \end{aligned}$$

Suppose \(1 \le t_{1,1} < t_{1,2} \le 3\), \(2 \le t_{2,1} < t_{2,2} \le 5\), \(3 \le t_{3,1}< t_{3,2} < t_{3,3}\le 7\), and \(\{t_{1,1}, t_{1,2}, t_{2,1}, t_{2,2}, t_{3,1}, t_{3,2}, t_{3,3} \} = [7]\). Let \(\hat{B}_1\) and \(\hat{B}'_1\) be the blocks formed by the \(t_{1,1}\)th and \(t_{1,2}\)th rows of \(B_{1}(u_1)\) and \(B'_{1}(u_1)\), respectively, \(\hat{B}_{2}\) and \(\hat{B}'_2\) be the blocks formed by the \(t_{2,1}\)th and \(t_{2,2}\)th rows of \(B_{2}(u_2)\) and \(B'_{2}(u_2)\), respectively, and \(\hat{B}_3\) and \(\hat{B}'_3\) be the blocks formed by the \(t_{3,1}\)th, \(t_{3,2}\)th and \(t_{3,3}\)th rows of \(B_{3}(u_3)\) and \(B'_{3}(u_3)\), respectively. Then Proposition 1 implies that each summand in \(\det ( M_{\varvec{u}}) \) can be denoted by

$$\begin{aligned} |a_{t} x^t| : = \det (\hat{B}_1)\det (\hat{B}_2)\det (\hat{B}_3) = \det (\hat{B}'_1)\det (\hat{B}'_2)\det (\hat{B}'_3) x^t \end{aligned}$$

where \(t = 2(t_{1,1}-1) + 2(t_{1, 2}-1) + (t_{2,1}-2) + (t_{2,2}-2)\). Therefore, when \(t_{1,1}=1\), \(t_{1, 2}=2\), \(t_{2,1}=3\) and \(t_{2,2} = 4\), t is minimal. In this case \(t =5\) and \(\hat{B}'_i\) with \(i \in [3]\) are all nonsingular. This implies \(a_5 \ne 0\).

In addition, take \(\varvec{u'} = (u'_1, u'_2, u'_3)\) such that \(|\varvec{u'} ([i])| = k_i\) for every \(i \in [3]\). Then \(\varvec{u'} \in \mathcal {B}_1\). In this case, letting \(t'_{1,1} =1\), \(t'_{1,2} =2\), \(t'_{1,3} =3\), \(t'_{2,1} =4\), \(t'_{2,2}= 5\), \(t'_{3,1} =6\) and \(t'_{3,2} =7\), we get \( t \le 2\sum _{i'=1}^3 (t'_{1,i'}-1) + \sum _{i'=1}^2 (t'_{2,i'}-2) = 11. \) Therefore, \(\det ( M_{\varvec{u}}) \) is a nonzero polynomial on x of degree at most 11. In fact, a direct computation shows that \(t < 11\).
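
As a computational sanity check of this example (a sketch only, with illustrative values \(\beta _{1,1}=\beta _{2,1}=\beta _{3,1}=2\), \(\beta _{1,2}=\beta _{2,2}=\beta _{3,2}=3\), \(\beta _{3,3}=5\), the computation carried out over the rationals with x symbolic), the following code builds \(M_{\varvec{u}}\) exactly as displayed above and confirms that \(\det ( M_{\varvec{u}})\) is a nonzero polynomial of degree at most 11; note that the bound of Lemma 2 gives \(K = \frac{1}{2}(3\cdot 2 + 5\cdot 4) - 1\cdot 2\cdot 1 = 11\) for these parameters.

```python
import sympy as sp

x = sp.symbols('x')
m, k = 3, 7
kk, khat = [3, 5, 7], [0, 1, 2]              # k_i and \hat{k}_i
u = [2, 2, 3]                                # columns taken from each block
beta = {1: [2, 3], 2: [2, 3], 3: [2, 3, 5]}  # illustrative beta_{i,v}

blocks = []
for i in range(1, m + 1):
    rows = kk[i - 1] - khat[i - 1]
    # B_i(u_i): entries (beta_{i,v} x^{m-i})^{u-1}
    B = sp.Matrix(rows, u[i - 1], lambda r, c, i=i: (beta[i][c] * x**(m - i))**r)
    M_i = sp.zeros(k, u[i - 1])              # E_i B_i places B in rows khat_i+1..k_i
    M_i[khat[i - 1]:kk[i - 1], :] = B
    blocks.append(M_i)

M_u = sp.Matrix.hstack(*blocks)
d = sp.expand(M_u.det())
assert d != 0 and sp.degree(d, x) <= 11
print(sp.degree(d, x))
```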

In general, for any \(\varvec{u} \in \mathcal {B}_1\), let \(\hat{B}_i\) and \(\hat{B}'_i\) be the blocks formed by all the \(t_{i, i'}\)th rows of \(B_{i}(u_i)\) and \(B'_{i}(u_i)\), respectively, where \(i ' \in [ u_i]\) such that

$$\hat{k}_i +1 \le t_{i,1}< \cdots < t_{i, u_i} \le k_i \; \mathrm{and} \; \bigcup \nolimits _{i =1}^m \big \{ t_{i, i'} : i ' \in [u_i] \big \} = [k]. $$

Then Proposition 1 implies that each summand in \(\det ( M_{\varvec{u}}) \) can be denoted by

$$|a_{t} x^t| = \prod \nolimits _{i=1}^{m} \det (\hat{B}_i) = \prod \nolimits _{i=1}^{m} \det (\hat{B}'_i) x^t $$

where

$$\begin{aligned} t = \sum _{i=1}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ u_i} (t_{i, i'} - \hat{k}_i - 1) \Big ) = \sum _{j=1}^{m-1} \bigg ( \sum _{i=1}^{j} \Big (\sum _{i' = 1}^{u_i} (t_{i, i'} - \hat{k}_i - 1) \Big ) \bigg ). \end{aligned}$$
(8)

For every \( j \in [m-1]\), take \(T_j = \sum _{i=1}^{j} \big (\sum _{i' = 1}^{ u_i} (t_{i, i'} - \hat{k}_i - 1) \big )\). We have that \(T_{m-1}\) is minimal if \(\bigcup _{i =1}^{m-1} \big \{ t_{i, i'} : i ' \in [ u_i] \big \} = \big [ |\varvec{u}([m-1])| \big ] \). In this case \(T_{m-2}\) is minimal if \(\bigcup _{i =1}^{m-2} \big \{ t_{i, i'}: i ' \in [ u_i] \big \} = \big [ |\varvec{u}([m-2])| \big ]\). Therefore, t is minimal if \(\bigcup _{i =1}^{j} \big \{ t_{i, i'} : i ' \in [ u_i] \big \} = \big [ |\varvec{u}([j])| \big ] \) for all \(j \in [m-1]\). This implies that \(t_{1, i'} = i'\) and \(t_{i, i'} = |\varvec{u}([i-1])| + i'\) for \(i \in [2, m-1]\). Hence,

$$\begin{aligned} t \ge (m-1) \sum _{i' = 1}^{u_1} (i'-1) + \sum _{i=2}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{u_i} \big ( |\varvec{u}([i-1]) | + i' - \hat{k}_i - 1 \big ) \Big ) = t_0. \end{aligned}$$

In this case each \(\hat{B}'_i\) is nonsingular since it is a square submatrix formed by \(u_i\) consecutive rows of \(B'_{i}(u_i)\). This implies that \( a_{t_0} \ne 0\).

In addition, take a vector \(\varvec{u'} \in \mathbb {Z}_{+}^m\) such that \(|\varvec{u'}([i])| = k_i\) for every \(i \in J_m\). Then \(\varvec{u'} \in \mathcal {B}_1\). In this case \(t_{1, i'} = i'\) with \(i' \in [k_1]\) and \(t_{i,i'} =k_{i-1} + i'\) with \(i \in [2, m-1]\) and \(i' \in [k_i - k_{i-1}] \). Then

$$\begin{aligned} \begin{aligned} t&\le (m-1) \sum _{i' = 1}^{k_1} (i'-1) + \sum _{i=2}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ k_i - k_{i-1}} (k_{i-1} + i' - \hat{k}_i - 1) \Big ) \\&= (m-1) \sum _{i' = 1}^{k_1} (i'-1) + \sum _{i=2}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ k_i - k_{i-1}} (k_{i-1} + i' - 1) \Big ) - \sum _{i=2}^{m-1} (m-i) \sum _{i' = 1}^{ k_i - k_{i-1}} \hat{k}_i \\&= \sum _{i =1}^{m-1}(1+ 2+ \cdots + ( k_i-1) ) - \sum _{i=2}^{m-1} (m-i) (k_i - k_{i-1}) \hat{k}_i \\&= \frac{1}{2} \sum _{i =1}^{m-1} k_i ( k_i-1)- \sum _{i=2}^{m-1} (m-i) (k_i - k_{i-1}) \hat{k}_i. \end{aligned} \end{aligned}$$
(9)

This implies the conclusion.    \(\square \)

Lemma 3

For any \(\varvec{u} \in \mathcal {B}_2\), (6), \(\det ( M_{\varvec{u}}) \) is a nonzero polynomial on x of degree at most K.

Proof

Let \(M'\) denote the submatrix obtained by removing the first row and the first column of M and take \(\varvec{k', \hat{k}' } \in \mathbb {Z}_{+}^m\) such that for every \(i \in J_m\), \(k'_i = k_i -1\), and \(\hat{k}'_i = \hat{k}_i\) if \(\hat{k}_i=0\) and \(\hat{k}'_i = \hat{k}_i -1\) if \(\hat{k}_i > 0\). For every \(i \in J_m\), let \(E'_i\) denote the submatrix formed by the \((\hat{k}'_i+1)\)th column to the \(k'_i\)th column of \(I_{k-1}\). Let \(D_1\) and \(D'_1\) denote the submatrices formed by the last \(k'_1\) rows of \(B_1\) and \(B'_1\), respectively. For every \(i \in [2, m]\), if \(\hat{k}_i=0\), let \(D_i\) and \(D'_i\) denote the submatrices formed by the last \(k'_i -1\) rows of \(B_i\) and \(B'_i\), respectively, and if \(\hat{k}_i>0\), let \(D_i= B_i\) and \(D'_i= B'_i\). Then

$$M' = ( M'_1 | \cdots | M'_m)$$

where \(M'_i = E'_i D_i\) and for any \(\varvec{u} \in \mathcal {B}_2\), (6), \(\det ( M_{\varvec{u}}) = \det \big ( M'_{\varvec{u}(J_m)} \big )\). In particular, for any \(\varvec{u} \in \mathcal {B}_2\), (6), \(\hat{k}'_{i+1} \le |\varvec{u} ([i])| \le k'_{i}\) for all \(i \in [m -1]\) and \(|\varvec{u}| = k-1\). Therefore, this claim can be proved by the same method as in the proof of Lemma 2.

For any \(\varvec{u} \in \mathcal {B}_2\), (6), let \( D'_{i}(u_i)\) denote the block formed by any \(u_i\) columns in \(D'_i\), and let \(\hat{D}'_i\) be the block formed by all the \(t_{i, i'}\)th rows of \(D'_{i}(u_i)\). Here, \(i ' \in [ u_i]\) such that \(\hat{k}'_i +1 \le t_{i,1}< \cdots < t_{i, u_i} \le k'_i\) and \(\bigcup _{i =1}^m \big \{ t_{i, i'} : i ' \in [u_i] \big \} = [k-1]\). Then each summand in \(\det \big ( M'_{\varvec{u}(J_m)} \big )\) can be denoted by \(|b_{t'} x^{t'}| = \prod _{i=1}^{m} \det (\hat{D}'_i) x^{t'} \). Similar to (8),

$$\begin{aligned} t' = \sum _{i=1}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ u_i} (t_{i, i'} - \hat{k}'_i - y_i) \Big ) \end{aligned}$$

where \(y_i = 0\) if \(\hat{k}'_i= 0\) and \(y_i = 1\) if \(\hat{k}'_i > 0\). From \(\hat{k}'_i = \hat{k}_i\) if \(\hat{k}_i=0\) and \(\hat{k}'_i = \hat{k}_i -1\) if \(\hat{k}_i > 0\), we have

$$\begin{aligned} t' = \sum _{i=1}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ u_i} (t_{i, i'} - \hat{k}_i ) \Big ). \end{aligned}$$

Similar to the proof of Lemma 2, we obtain that \(t'\) is minimal if \(t_{1, i'} = i'\) and \(t_{i, i'} = |\varvec{u}([i-1])| + i'\) for \(i \in [2,m-1]\), and in this case each \(\hat{D}'_i\) is nonsingular; thus \(\det \big ( M'_{\varvec{u}(J_m)} \big )\) is a nonzero polynomial on x. In addition, take a vector \(\varvec{u'} \in \mathbb {Z}_{+}^m\) such that \(|\varvec{u'}([i])| = k'_i\) for every \(i \in J_m\). Then \(\hat{k}'_{i+1} \le |\varvec{u'} ([i])| \le k'_{i}\) for all \(i \in [m -1]\) and \(|\varvec{u'}| = k-1\). In this case \(t_{1, i'} = i'\) with \(i' \in [k'_1]\) and \(t_{i,i'} =k'_{i-1} + i'\) with \(i \in [2, m-1]\) and \(i' \in [k'_i - k'_{i-1}] \). Similar to (9),

$$\begin{aligned} \begin{aligned} t'&\le (m-1) \sum _{i' = 1}^{k'_1} i' + \sum _{i=2}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ k'_i - k'_{i-1}} (k'_{i-1} + i' - \hat{k}_i ) \Big ) \\&= (m-1) \sum _{i' = 1}^{k_1} (i'-1) + \sum _{i=2}^{m-1} \Big ( (m-i) \sum _{i' = 1}^{ k_i - k_{i-1}} (k_{i-1} + i' - \hat{k}_i - 1) \Big ) = K \\ \end{aligned} \end{aligned}$$

since \(k'_{i} = k_i -1\) for every \(i \in J_m\). This implies \(\det \big ( M'_{\varvec{u}(J_m)} \big )\) is a nonzero polynomial on x of degree at most K, and the claim follows.    \(\square \)

The following result was proved by Shoup [33].

Theorem 4

([33]). Take a finite field \(\mathbb {F}_{q^{\lambda }}\), where q is a prime power and \(\lambda \) is a positive integer. Then there exists an element \(x \in \mathbb {F}_{q^{\lambda }}\) whose minimal polynomial over \(\mathbb {F}_q\) is of degree \(\lambda \), and such an element can be found in time \(O(q, \lambda )\).
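
In practice one also needs to check that a candidate element x indeed has a minimal polynomial of full degree: the minimal polynomial of \(x \in \mathbb {F}_{q^{\lambda }}\) over \(\mathbb {F}_q\) has degree \(\lambda \) if and only if \(x^{q^d} \ne x\) for every proper divisor d of \(\lambda \). The following is a minimal sketch of this test (it is not Shoup's algorithm), run on a toy instance with \(q = 7\), \(\lambda = 2\) and modulus \(X^2 - 3\), which is irreducible over \(\mathbb {F}_7\) because 3 is a quadratic non-residue; these parameters are illustrative only.

```python
q, lam = 7, 2                          # toy parameters (illustrative only)
modulus = [(-3) % q, 0, 1]             # X^2 - 3, coefficients low degree first

def mul(a, b):
    """Product of two elements of F_{q^lam}, given as length-lam coefficient lists."""
    prod = [0] * (2 * lam - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    for d in range(2 * lam - 2, lam - 1, -1):      # reduce modulo the monic modulus
        c = prod[d]
        if c:
            for t in range(lam + 1):
                prod[d - lam + t] = (prod[d - lam + t] - c * modulus[t]) % q
    return prod[:lam]

def power(a, e):
    """Square-and-multiply exponentiation in F_{q^lam}."""
    result, base = [1] + [0] * (lam - 1), a
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result

def has_full_degree(x):
    """True iff the minimal polynomial of x over F_q has degree lam."""
    return all(power(x, q ** d) != x for d in range(1, lam) if lam % d == 0)

print(has_full_degree([0, 1]))         # the class of X itself: prints True
```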

Now, take a finite field \(\mathbb {F}_{q^{\lambda }}\), where \(q > \max _{i \in J_m} \{ n_i\}\) is a prime power and \(\lambda > K\). Take all \(\beta _{i,v}\) in the matrix (7) from \(\mathbb {F}_q \backslash \{0\}\) and take \(x \in \mathbb {F}_{q^{\lambda }}\) such that its minimal polynomial over \(\mathbb {F}_q\) is of degree \(\lambda \). We have the following result.

Theorem 5

The matrix (7) is a representation of the matroid associated to IHASs (1) over \(\mathbb {F}_{q^{\lambda }}\) for some prime power \(q > \max _{i \in J_m} \{ n_i \}\) and some \(\lambda > K\). Moreover, such a representation can be obtained in time \(O(q, \lambda )\).

Proof

Since all the entries in the matrix (7), except the powers of x, are in \(\mathbb {F}_q\), Lemmas 2 and 3 imply that for any \(\varvec{u} \in \mathcal {B}( \mathcal {Z}')\), (6), \(\det (M_{\varvec{u}})\) is a nonzero \(\mathbb {F}_q\)-polynomial on x of degree at most \(K < \lambda \). As the minimal polynomial of x over \(\mathbb {F}_q\) has degree \(\lambda \), it follows that \(\det (M_{\varvec{u}}) \ne 0\), and consequently the matrix \(M_{\varvec{u}}\) is nonsingular. Moreover, Theorem 4 implies that such an element x can be found in time \(O(q, \lambda )\). This implies the claim.    \(\square \)

Proposition 6

Suppose M is the matrix (7). Then LSSS(M) realizes the IHASs (1) over \(\mathbb {F}_{q^{\lambda }}\) defined as in Theorem 5. Moreover, such a scheme can be obtained in time \(O(q, \lambda )\).

Proof

Theorem 1 implies that proving this claim is equivalent to proving that \(\varvec{v}(J_m) \in \varGamma \) if and only if \(M_{0}\) is a linear combination of all the columns in \(M_{\varvec{v}(J_m)}\).

Let \(\varvec{v}(J_m) \in \min \varGamma \), (1), namely, \(\varvec{v} (J_m)= (v_1, v_2, \ldots , v_{\ell }, 0, \ldots , 0)\) for some \(\ell \in J_m\) such that \(\hat{k}_{i+1} \le |\varvec{v} ([i])| < k_i\) for all \(i \in [\ell -1]\) and \(|\varvec{v} ([\ell ])| = k_{\ell }\). Then there must exist a vector \(\varvec{u} \in \mathcal {B}_1\), (6), such that \(\varvec{u} \ge \varvec{v}\) and \(u_i = v_i\) for every \(i \in [\ell ]\). Since the last \(k-k_{\ell }\) rows of \(M_{\varvec{v}(J_m)}\) are all zero rows, it follows that \(M_{\varvec{u}(J_m)}\) has the following form

$$\begin{aligned} M_{\varvec{u}(J_m)} = \left( \begin{array}{cc} \hat{M}_{\varvec{v}(J_m)} &{} A_1 \\ O &{} A_2 \end{array}\right) \end{aligned}$$

where \(\hat{M}_{\varvec{v}(J_m)}\) is the square block formed by the first \(k_{\ell }\) rows of \(M_{\varvec{v}(J_m)}\), \(A_1\) is a \( (k- k_{\ell }) \times k_{\ell }\) block and \(A_2\) is a \((k- k_{\ell }) \times (k- k_{\ell })\) block. From Theorem 5, \(M_{\varvec{u}(J_m)}\) is nonsingular. This with \(\det ( M_{\varvec{u}(J_m)}) = \det (\hat{M}_{\varvec{v}(J_m)}) \cdot \det ( A_2)\) implies that \(\hat{M}_{\varvec{v}(J_m)}\) is nonsingular. In this case, the \(k_{\ell }\)-dimensional column vector formed by the first \(k_{\ell }\) elements of \(M_0\) can be spanned by the columns of \(\hat{M}_{\varvec{v}(J_m)}\). Accordingly, \(M_0\) can be spanned by the columns in \(M_{\varvec{v}(J_m)}\) as the last \(k-k_{\ell }\) elements of \(M_0\) are all zero. Hence, \(M_0\) can be spanned by the columns in \(M_{\varvec{v}(J_m)}\) for any \(\varvec{v}(J_m) \in \varGamma \).

Assume that \(\varvec{v}(J_m) \notin \varGamma \). We know every unauthorized subset may be completed into an authorized subset (though not necessarily minimal) by adding to it at most k participants. Without loss of generality, we may assume that there exists a vector \(\varvec{v'}(J_m) \in \varGamma \) such that \(\varvec{v'}(J_m) \ge \varvec{v}(J_m)\) and \(|\varvec{v'}(J_m)|=|\varvec{v}(J_m)| +1\).

First, assume that \(\varvec{v}(J_m) = (v_1, v_2, \ldots , v_{\ell }, 0, \ldots , 0)\) for some \(\ell \in J_m\) such that \(\hat{k}_{i+1} -1 \le |\varvec{v} ([i])| \le k_i - 1\) for all \(i \in [\ell -1]\) and \(|\varvec{v} ([\ell ])| = k_{\ell }-1\). Then for the vector \(\varvec{v}(J'_m)\) with \(v_0 =1\), namely, \(\varvec{v}(J'_m)= (1, v_1, v_2, \ldots , v_{\ell }, 0, \ldots , 0)\), there must exist a vector \(\varvec{u}(J'_m) \in \mathcal {B}_2\), (6), such that \(\varvec{u}(J'_m) \ge \varvec{v}(J'_m)\) and \(u_i = v_i\) for every \(i \in [0, \ell ]\). From Theorem 5, \(M_{\varvec{u}(J'_m)}\) is nonsingular. This with \(\varvec{v}(J_m) \le \varvec{u}(J_m)\) implies that \(M_0\) cannot be spanned by all the columns in \(M_{\varvec{v}(J_m)}\).

Second, assume that \(\varvec{v}(J_m) = (v_1, v_2, \ldots , v_m)\) with \(|\varvec{v}(J_m) | \ge k\) such that for some \(\ell \in J_m\), \( |\varvec{v}([\ell ]) | = \hat{k}_{\ell +1} -1\), \(\hat{k}_{i+1} -1\le |\varvec{v}([i]) | < k_{i}\) for every \(i \in [\ell - 1]\), and \(v_i = n_i\) for every \(i \in [\ell +1, m]\). Then \(M_0\) cannot be spanned by the columns in \(M_{\varvec{v'}(J_m)}\) for any \(\varvec{v'}(J_m) \le \varvec{v}(J_m)\) if \(M_0\) cannot be spanned by the columns in \(M_{\varvec{v}(J_m)}\). We claim that every column in \(M_{\varvec{v}(J_m)}\) can be spanned by the columns in \(M_{\varvec{u}(J_m)}\) for any \(\varvec{u}(J_m) \le \varvec{v}(J_m)\) with \(|\varvec{u}(J_m) | =k - 1\) such that \(|\varvec{u}([i]) | = |\varvec{v}([i]) |\) for every \(i \in [\ell ]\) and \(\hat{k}_{i+1} - 1 \le |\varvec{u}([i]) | < k_i\) for every \(i \in [\ell +1, m -1]\).

For such a vector \(\varvec{u}(J_m)\), set \(u_0 =1\); then \(\varvec{u}(J'_m) \in \mathcal {B}_2\), (6), so \(M_{\varvec{u}(J'_m)}\) is nonsingular and \(M_0\) cannot be spanned by the columns in \(M_{\varvec{u}(J_m)}\). Consequently, if the claim is true, then \(M_0\) cannot be spanned by the columns in \(M_{\varvec{v}(J_m)}\) either.

We proceed to prove the claim. Note that

$$\begin{aligned} M_{\varvec{u}(J'_m)} = \left( \begin{array}{c|c} M_{\varvec{u}([0, \ell ])}&M_{\varvec{u}([\ell +1, m])} \end{array}\right) = \left( \begin{array}{cc} D_1 &{} O\\ D_2 &{} \bar{M}_{\varvec{u}([\ell +1, m])} \end{array}\right) \end{aligned}$$

where \(\bar{M}_{\varvec{u}([\ell +1, m])}\) is the square block formed by the last \(k - \hat{k}_{\ell +1}\) rows of \( M_{\varvec{u}([\ell +1, m])}\). Since \(M_{\varvec{u}(J'_m)}\) is nonsingular, \(\bar{M}_{\varvec{u}([\ell +1, m])}\) is nonsingular. On the other hand, \( M_{\varvec{v}(J_m)} = \left( \begin{array}{c|c} M_{\varvec{v}([\ell ])}&M_{\varvec{v}([\ell +1, m])} \end{array}\right) , \) where

$$\begin{aligned} M_{\varvec{v}([\ell +1, m])} = \left( \begin{array}{c} O\\ \bar{M}_{\varvec{v}([\ell +1, m])} \end{array}\right) \end{aligned}$$

for which \(\bar{M}_{\varvec{v}([\ell +1, m])}\) is the block formed by the last \(k - \hat{k}_{\ell +1}\) rows of \( M_{\varvec{v}([\ell +1, m])}\). Since \(\bar{M}_{\varvec{u}([\ell +1, m])}\) is a submatrix of \(\bar{M}_{\varvec{v}([\ell +1, m])}\) and \(\bar{M}_{\varvec{u}([\ell +1, m])}\) is nonsingular, it follows that any column in \(\bar{M}_{\varvec{v}([\ell +1, m])}\) can be spanned by the columns in \(\bar{M}_{\varvec{u}([\ell +1, m])}\). Accordingly, any column in \(M_{\varvec{v}([\ell +1, m])}\) is a linear combination of the columns in \(M_{\varvec{u}([\ell +1, m])}\). This with \( M_{\varvec{v}([\ell ])} = M_{\varvec{u}([\ell ])} \) implies the claim.    \(\square \)

4.2 Another Construction for Ideal Hierarchical Access Structures

In this section, we give another construction of ideal linear secret sharing schemes for IHASs, using a technique similar to that of Sect. 4.1. In some cases the parameters of this construction are better than those of the construction in Sect. 4.1.

For every \(i \in J_m\), take \(n_i\) different elements \(\beta _{i, v} \in \mathbb {F} \backslash \{0\}\) and let the \((k_i - \hat{k}_i) \times n_i\) matrix \(B_i\) be defined as follows:

$$\begin{aligned} B_{i} = \big ( (\beta _{i,v} x^{i-1} )^{k_i- \hat{k}_i -u} \big ) \quad u \in [k_i- \hat{k}_i], ~v \in [n_i ]. \end{aligned}$$

Take a k-dimensional column vector \(M_0 = (1, 0, \ldots , 0)^T\) and \(M_i = E_i B_i\) for every \(i \in J_m\). Define a \(k \times (n+1)\) matrix as

$$\begin{aligned} M = (M_0 | M_1| \cdots | M_m). \end{aligned}$$
(10)

Similar to the case in Sect. 4.1, we will prove that LSSS(M) realizes IHASs. First, we give an example to explain this construction as follows.

Example 4

As in Lemma 2, assume that \(m = 3\), \(\varvec{k} =(k_1,k_2,k_3) = (3,5,7)\), and \(\varvec{\hat{k}} =(\hat{k}_1, \hat{k}_2, \hat{k}_3) = (0,1,2)\). Take \(\varvec{u}=(u_1, u_2, u_3) = (2,2,3)\) and the matrix \(M_{\varvec{u}}\) has the following form:

$$\begin{aligned} M_{\varvec{u}} = \left( \begin{array}{cc|cc|ccc} \beta _{1,1}^2 &{} \beta _{1,2}^2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \beta _{1,1} &{} \beta _{1,2} &{} (\beta _{2,1}x)^3 &{} (\beta _{2,2}x)^3 &{} 0 &{} 0 &{} 0 \\ 1 &{} 1 &{} (\beta _{2,1}x)^2 &{} (\beta _{2,2}x)^2 &{} (\beta _{3,1}x^2)^4 &{} (\beta _{3,2}x^2)^4 &{} (\beta _{3,3}x^2)^4 \\ 0 &{} 0 &{} \beta _{2,1}x &{} \beta _{2,2}x &{} (\beta _{3,1}x^2)^3 &{} (\beta _{3,2}x^2)^3 &{} (\beta _{3,3}x^2)^3 \\ 0 &{} 0 &{} 1 &{} 1 &{} (\beta _{3,1}x^2)^2 &{} (\beta _{3,2}x^2)^2 &{} (\beta _{3,3}x^2)^2 \\ 0 &{} 0 &{} 0 &{} 0 &{} \beta _{3,1}x^2 &{} \beta _{3,2}x^2 &{} \beta _{3,3}x^2 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 1 \\ \end{array}\right) . \end{aligned}$$

Note that \(M_{\varvec{u}}\) can be transformed to the following form by exchanging rows and columns

$$\begin{aligned} \tilde{M}_{\varvec{u}} = \left( \begin{array}{ccc|cc|cc} 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ \beta _{3,1}x^2 &{} \beta _{3,2}x^2 &{} \beta _{3,3}x^2 &{} 0 &{} 0 &{} 0 &{} 0 \\ (\beta _{3,1}x^2)^2 &{} (\beta _{3,2}x^2)^2 &{} (\beta _{3,3}x^2)^2 &{} 1 &{} 1 &{} 0 &{} 0 \\ (\beta _{3,1}x^2)^3 &{} (\beta _{3,2}x^2)^3 &{} (\beta _{3,3}x^2)^3 &{} \beta _{2,1}x &{} \beta _{2,2}x &{} 0 &{} 0 \\ (\beta _{3,1}x^2)^4 &{} (\beta _{3,2}x^2)^4 &{} (\beta _{3,3}x^2)^4 &{} (\beta _{2,1}x)^2 &{} (\beta _{2,2}x)^2 &{} 1 &{} 1 \\ 0 &{} 0 &{} 0 &{} (\beta _{2,1}x)^3 &{} (\beta _{2,2}x)^3 &{} \beta _{1,1} &{} \beta _{1,2} \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \beta _{1,1}^2 &{} \beta _{1,2}^2 \end{array}\right) , \end{aligned}$$

Therefore, \(|\det ( M_{\varvec{u}})| = |\det ( \tilde{M}_{\varvec{u}})|\).

Take \(\varvec{\kappa } = (\kappa _1, \kappa _2, \kappa _3) = (k- \hat{k}_3, k- \hat{k}_2, k- \hat{k}_1 ) =(5, 6, 7)\), and \(\varvec{\hat{\kappa }} = (\hat{\kappa }_1, \hat{\kappa }_2, \hat{\kappa }_3) = (k- k_3, k- k_2, k- k_1 ) =(0, 2, 4)\). Then Lemma 2 implies that \(\det (\tilde{M}_{\varvec{u}})\) is a nonzero polynomial on x of degree at most L with

$$\begin{aligned} L = \frac{1}{2} \sum _{i =1}^{2} \kappa _i ( \kappa _i-1)- (\kappa _2 - \kappa _{1}) \hat{\kappa }_2 = 23. \end{aligned}$$

Accordingly, \(\det (M_{\varvec{u}})\) is a nonzero polynomial on x of degree at most L.
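
As a check, the general formula of Lemma 4 below yields the same value for these parameters:

$$\begin{aligned} L = \frac{1}{2} \sum _{i =2}^{3} (7 - \hat{k}_{i}) ( 7 - \hat{k}_{i}-1) - (2-1) (\hat{k}_{3} - \hat{k}_{2}) (7 - k_{2}) = \frac{1}{2}(6 \cdot 5 + 5 \cdot 4) - 1 \cdot 1 \cdot 2 = 23. \end{aligned}$$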

In general, we have the following lemma.

Lemma 4

For any \(\varvec{u} \in \mathcal {B}( \mathcal {Z}')\), (6), \(\det ( M_{\varvec{u}}) \) is a nonzero polynomial on x of degree at most L where

$$\begin{aligned} L = \frac{1}{2} \sum _{i =2}^{m} (k - \hat{k}_{i}) ( k - \hat{k}_{i}-1)- \sum _{i=2}^{m-1} (i-1) (\hat{k}_{i+1} - \hat{k}_{i}) (k - k_{i}). \end{aligned}$$

Proof

For every \(i \in J_m\), take

$$\begin{aligned} \tilde{B}_{i} = \big ( (\beta _{m-i+1 ,v} x^{m-i} )^{u-1} \big ) \quad \quad u \in [k_{m-i+1}- \hat{k}_{m-i+1}],~v \in [n_{m-i+1} ] \end{aligned}$$

and let \(\tilde{E}_i\) be the submatrix formed by the \((k-k_{m-i+1} +1)\)th column to the \((k- \hat{k}_{m-i+1})\)th column of \(I_k\). Let

$$\begin{aligned} \tilde{M} = (\tilde{M}_0 | \tilde{M}_1| \cdots | \tilde{M}_m), \end{aligned}$$

where \(\tilde{M}_0 = (0, 0, \ldots , 0,1)^T\) is a k-dimensional column vector and \(\tilde{M}_i = \tilde{E}_i \tilde{B}_i\) for every \(i \in J_m\). Take \(\tilde{\mathrm {\Pi }}_0 = \mathrm {\Pi }_0\) and \(\tilde{\mathrm {\Pi }}_i = \mathrm {\Pi }_{m-i+1}\) for every \(i \in J_m\). Then \(\tilde{\mathrm {\Pi }} = (\tilde{\mathrm {\Pi }}_i)_{i \in J'_m}\) is a partition of \(P' = P \cup \{p_0\} \) too. Moreover, take \(\varvec{\kappa , ~ \hat{\kappa }} \in \mathbb {Z}_{+}^{J'_m}\) such that \(\kappa _0 = k\), \(\hat{\kappa }_0 = k-1\), and for every \(i \in J_m\), \(\kappa _i = k - \hat{k}_{m-i+1}\) and \(\hat{\kappa }_i = k - k_{m-i+1}\). Then \( \hat{\kappa }_i \le \hat{\kappa }_{i+1} < \kappa _i \le \kappa _{i+1}\) for \(i \in [m-1]\).

If \(\varvec{u} \in \mathcal {B}_1\), (6), then for any matrix \(M_{\varvec{u}}\), as in Example 4, by exchanging rows and columns we can obtain the matrix \(\tilde{M}_{\varvec{u}}\) such that \(|\det (M_{\varvec{u}})| = |\det (\tilde{M}_{\varvec{u}})|\). As \(\hat{k}_{m-i+1} \le |\varvec{u}([m-i])| \le k_{m-i}\) for every \(i \in [m-1]\),

$$ \hat{\kappa }_{i+1} = k - k_{m-i} \le |\varvec{u}([m-i+1, m])| \le k - \hat{k}_{m-i+1} =\kappa _i $$

for every \(i \in [m-1]\). From Lemma 2, \(\det (\tilde{M}_{\varvec{u}})\) is a nonzero polynomial on x of degree at most L where

$$\begin{aligned} \begin{aligned} L&= \frac{1}{2} \sum _{i =1}^{m-1} \kappa _i ( \kappa _i-1)- \sum _{i=2}^{m-1} (m-i) (\kappa _i - \kappa _{i-1}) \hat{\kappa }_i \\&= \frac{1}{2} \sum _{i =2}^{m} (k - \hat{k}_{i}) ( k - \hat{k}_{i}-1)- \sum _{i=2}^{m-1} (i-1) (\hat{k}_{i+1} - \hat{k}_{i}) (k - k_{i}). \end{aligned} \end{aligned}$$

If \(\varvec{u} \in \mathcal {B}_2\), (6), then for any matrix \(M_{\varvec{u}}\), we can obtain a matrix \(\tilde{M}_{\varvec{u}}\) such that \(|\det (M_{\varvec{u}})| = |\det (\tilde{M}_{\varvec{u}})|= |\det (\tilde{M}'_{\varvec{u}})|\), where \(\tilde{M}'_{\varvec{u}}\) is the submatrix obtained by removing the first column and the last row of \(\tilde{M}_{\varvec{u}}\). In this case \(\hat{k}_{m-i+1}-1 \le |\varvec{u}([m-i])| \le k_{m-i}-1\) for every \(i \in [m-1]\), hence

$$ \hat{\kappa }_{i+1} = (k-1) - ( k_{m-i} -1)\le |\varvec{u}([m-i+1, m])| \le (k-1) - (\hat{k}_{m-i+1} -1) =\kappa _i $$

for every \(i \in [m-1]\). Lemma 2 implies that \(\det (\tilde{M}'_{\varvec{u}})\) is a nonzero polynomial on x of degree at most L too, and the claim follows.    \(\square \)

Now, take a finite field \(\mathbb {F}_{q^{\lambda }}\), where \(q > \max _{i \in J_m} \{ n_i\}\) is a prime power and \(\lambda > L\). Take all \(\beta _{i,v}\) in the matrix (10) from \(\mathbb {F}_q \backslash \{0\}\) and take \(x \in \mathbb {F}_{q^{\lambda }}\) such that its minimal polynomial over \(\mathbb {F}_q\) is of degree \(\lambda \). Using the same method as in the proofs of Theorem 5 and Proposition 6, we can obtain the following results.

Theorem 6

The matrix (10) is a representation of the matroid associated to IHASs (1) over \(\mathbb {F}_{q^{\lambda }}\) for some prime power \(q > \max _{i \in J_m} \{ n_i \}\) and some \(\lambda > L\). Moreover, such a representation can be obtained in time \(O(q, \lambda )\).

Proposition 7

Suppose M is the matrix (10). Then LSSS(M) realizes the IHASs (1) over \(\mathbb {F}_{q^{\lambda }}\) defined as in Theorem 6. Moreover, such a scheme can be obtained in time \(O(q, \lambda )\).

Remark 1

In some cases, Proposition 7 can give schemes for IHASs superior to the ones given by Proposition 6. For example, Proposition 6 can give the scheme for the DHTASs (2) over \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda > K = \frac{1}{2} \sum _{i =1}^{m-1} k_i ( k_i-1) \) since \(\hat{k}_1 = \cdots =\hat{k}_m=0\) and the scheme for the CHTASs (3) over \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda > K = \frac{1}{2} \sum _{i =1}^{m-1} k_i ( k_i-1) = \frac{1}{2} (m-1) k ( k-1)\) since \(0 = \hat{k}_1< \cdots < \hat{k}_m\) and \( k_1 = \cdots = k_m = k\).

On the other hand, Proposition 7 gives the scheme for the DHTASs (2) over \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda > L = \frac{1}{2} \sum _{i =2}^{m} (k - \hat{k}_{i}) ( k - \hat{k}_{i}-1) = \frac{1}{2} (m-1) k ( k-1)\) and the scheme for the CHTASs (3) over \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda > L = \frac{1}{2} \sum _{i =1}^{m-1} (k - \tilde{k}_{i}) ( k - \tilde{k}_{i}-1)\).

Therefore, for DHTASs Proposition 6 gives a scheme superior to the one given by Proposition 7, whereas for CHTASs Proposition 7 gives a scheme superior to the one given by Proposition 6.

4.3 Comparisons

Comparison to the Construction of Brickell. Brickell [9] presented an efficient method to construct the ideal linear scheme for the DHTASs (2) over \(\mathbb {F}_{q^{\lambda '}}\) with \(q > \max _{i \in J_m} \{ n_i \}\) and \(\lambda ' \ge mk^2\). Proposition 6 gives a scheme for the DHTASs (2) too. In fact, our scheme is the same as Brickell’s scheme. Nevertheless, Proposition 6 implies the scheme for the DHTASs (2) can be obtained over \(\mathbb {F}_{q^{\lambda }}\) with \(\lambda > K = \frac{1}{2} \sum _{i =1}^{m-1} k_i ( k_i-1) \). Therefore, we improve the bound for the field size since

$$\begin{aligned} \frac{1}{2} \sum _{i =1}^{m-1} k_i ( k_i-1)+1 \le \frac{1}{2}(m-1) k_{m-1}( k_{m-1}-1) +1< \frac{1}{2}(m-1) k_{m-1}^2 < mk^2. \end{aligned}$$

The reason for the improvement is that we give a relatively precise description of \(\det (M_{\varvec{u}})\) by the formula provided in Proposition 1.
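
For instance, for a DHTAS (2) with \(m = 3\) and \(\varvec{k} = (3,5,7)\), Proposition 6 only requires \(\lambda > K = \frac{1}{2}(3 \cdot 2 + 5 \cdot 4) = 13\), i.e., a field of size \(q^{14}\) suffices, whereas the bound \(\lambda ' \ge mk^2\) requires \(\lambda ' \ge 3 \cdot 7^2 = 147\).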

Comparison to the Construction of Tassa. Tassa [35] presented an efficient method to construct the ideal linear scheme for the CHTAS (3) over \(\mathbb {F}_p\) where

$$\begin{aligned} p > 2^{-k+2} (k-1)^{( k-1)/2} ( k-1)! N^{(k-1)(k-2)/2} \end{aligned}$$
(11)

is a prime and N is the largest identity assigned to a participant. Proposition 7 gives a scheme for the CHTAS (3) over \(\mathbb {F}_{q^{\lambda }}\) with \(q > \max _{i \in J_m} \{ n_i\}\) and \(\lambda > L = \frac{1}{2} \sum _{i =1}^{m-1} (k - \tilde{k}_{i}) ( k - \tilde{k}_{i}-1).\)

Since \(( k-1)! \ge 2^{k-2}\) when \(k \ge 2\), it follows that the right-hand side of (11) is greater than or equal to \( (k-1)^{( k-1)/2} N^{(k-1)(k-2)/2}\). Combining this with \(N \ge n \ge \max _{i \in J_m} \{n_i\}\), we have

$$\begin{aligned} q^{L} \le N^{(k-1)(k-2)/2} < 2^{-k+2} (k-1)^{( k-1)/2} ( k-1)! N^{(k-1)(k-2)/2} \end{aligned}$$

if \(L \le \frac{1}{2} (k-1)(k-2)\). In fact, \(\max _{i \in J_m} \{n_i\} \ll N\) in general. This implies that in this case \(2^{-k+2} (k-1)^{( k-1)/2} ( k-1)! N^{(k-1)(k-2)/2} \gg q^{L}\), and consequently our result is superior to Tassa’s result. In the case of \(L > \frac{1}{2} (k-1)(k-2)\), it is still quite possible that \(q^{L}\) is smaller than the right-hand side of (11). Moreover, our efficient methods also work over non-prime fields.

5 Secret Sharing Schemes for Compartmented Access Structures

In this section, we study ideal linear secret sharing schemes for two families of compartmented access structures by efficient methods.

5.1 Construction for Compartmented Access Structures with Upper Bounds

In this section, we construct ideal linear secret sharing schemes realizing UCASs. We first present a representation of the integer polymatroid \(\mathcal {Z}'\) satisfying Theorem 2 such that the UCASs (5) are of the form \(\varGamma _{0} (\mathcal {Z}', \mathrm {\Pi })\).

Let \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be a partition of the set P such that \(|\mathrm {\Pi }_i| = n_i\). Let \(\varvec{r} \in \mathbb {Z}_{+}^{J'_m}\) and \(k \in \mathbb {N}\) be such that \(r_{0} = 1\), \(\varvec{r}(J_m) \le \mathrm {\Pi } (P)\) and \(r_i \le k \le |\varvec{r}(J_m)|\) for every \(i \in J_m\). The following result was presented in Section 8.2 of [17].

Lemma 5

Suppose \(\mathcal {Z}' = (J'_m, h)\) is an integer polymatroid such that \(h(X) = \min \big \{ k, |\varvec{r} (X) | \big \}\) for every \(X \subseteq J'_m\). Then the UCASs (5) are of the form \(\varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\).

Now, we introduce a linear representation of the polymatroid defined in Lemma 5. Take different elements \(\alpha _{i, j} \in \mathbb {F}_{q}\) with \(i \in J'_m\) and \(j \in [r_i]\), where \(q \ge \max _{i \in J_m} \{ n_i, |\varvec{r}(J_m)|+1\}\) is a prime power. For every \(i \in J'_m\), let

$$A_i = \big ( \alpha _{i, v}^{u-1} \big ) \quad \quad u \in [k],~ v \in [r_i]$$

and consider the \(\mathbb {F}_{q}\)-vector subspace \(V_i \subseteq \mathbb {F}^k_{q}\) spanned by all the columns of \(A_i\). Let \(\mathcal {Z}' = (J'_m, h)\) be the integer polymatroid such that \( h(X) = \dim \big ( \sum _{i \in X} V_i \big )\) for every \(X \subseteq J'_m\). We have the following result.

Proposition 8

For the integer polymatroid \(\mathcal {Z}' \) defined above, the UCASs (5) are of the form \(\varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\) and

$$\begin{aligned} \mathcal {B}( \mathcal {Z}')=\{ \varvec{u} \in \mathbb {Z}_{+}^{J'_m}: |\varvec{u}| = k ~\mathrm{and}~\varvec{u} \le \varvec{r} \}. \end{aligned}$$
(12)

Proof

Let \(A= (A_0| A_1 | \cdots | A_m)\). Then it is a \(k \times (|\varvec{r}(J_m)|+1)\) Vandermonde matrix. Therefore, any \(k \times k\) submatrix of A is nonsingular. This with \(\dim (V_i)= r_i\) for every \(i \in J'_m\) implies the second claim. In addition, \(\big |\bigcup _{i \in X} \{ \varvec{a_{i,v}}: v \in [r_i] \} \big |= |\varvec{r} (X)|\) for every \(X \subseteq J'_m\), where \(\varvec{a_{i,v}}\) denotes the vth column of \(A_i\). Hence, \(h(X) = \min \big \{ k, |\varvec{r} (X)| \big \}\) for every \(X \subseteq J'_m\), and the first claim follows.    \(\square \)
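
As a small computational sanity check of this proposition (a sketch with illustrative parameters \(q = 11\), \(k = 3\), \(\varvec{r} = (1, 2, 2, 3)\), and the \(\alpha _{i,j}\) chosen as pairwise distinct elements of \(\mathbb {F}_{11}\)), the following code verifies \(h(X) = \min \{ k, |\varvec{r}(X)| \}\) for every nonempty \(X \subseteq J'_m\) by computing ranks over \(\mathbb {F}_q\) directly:

```python
from itertools import combinations

q, k = 11, 3
r = [1, 2, 2, 3]                       # r_0, r_1, r_2, r_3  (r_0 = 1)
alphas = iter(range(1, q))             # pairwise distinct alpha_{i,j} in F_q

# the columns of A_i are (1, alpha, alpha^2, ..., alpha^{k-1})
A = {i: [[pow(a, e, q) for e in range(k)]
         for a in [next(alphas) for _ in range(r[i])]]
     for i in range(len(r))}

def rank_mod_q(cols):
    """Rank over F_q of the k x len(cols) matrix with the given columns."""
    rows = [list(row) for row in zip(*cols)]
    rank, c = 0, 0
    while rank < len(rows) and c < len(cols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if pivot is None:
            c += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][c], q - 2, q)          # inverse via Fermat's little theorem
        rows[rank] = [(v * inv) % q for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[rank])]
        rank, c = rank + 1, c + 1
    return rank

for size in range(1, len(r) + 1):
    for X in combinations(range(len(r)), size):
        cols = [col for i in X for col in A[i]]
        assert rank_mod_q(cols) == min(k, sum(r[i] for i in X))
print("h(X) = min(k, |r(X)|) holds for all nonempty X")
```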

This proposition implies that the collection \((V_i)_{i \in J'_m}\) is a linear representation of the integer polymatroid \(\mathcal {Z}'\) associated to the UCASs (5). We next present a matrix M based on \(\mathcal {Z}'\), which is a representation of a matroid \(\mathcal {M}\) such that the UCASs (5) are of the form \(\varGamma _{p_0} (\mathcal {M})\).

Let \(\mathrm {\Pi }_0 = \{ p_0\}\) and let \(\mathrm {\Pi }' = (\mathrm {\Pi }_i)_{i \in J'_m}\) and \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be the partition of \(P' = P \cup \{p_0\} \) and P, respectively, such that \(|\mathrm {\Pi }_i |= n_i\). For every \(i \in J'_m\), take \(n_i\) different elements \(\beta _{i, v} \in \mathbb {F}_{q}\) with \(v \in [n_i]\) and let

$$ B_i = \big ( (\beta _{i, v} x) ^{u-1} \big ) \quad \quad u \in [r_i],~ v \in [n_i]. $$

Let a \(k \times (n+1)\) matrix be defined as

$$\begin{aligned} M = (M_{0} | M_1| \cdots | M_m) \end{aligned}$$
(13)

where \(M_i = A_iB_i\). We have the following result.

Lemma 6

For any \(\varvec{u} \in \mathcal {B}( \mathcal {Z}')\), (12), \(\det ( M_{\varvec{u}}) \) is a nonzero polynomial on x of degree at most \(k(r-1)\), where \(r = \max _{i \in J_m} \{r_i\}\).

Proof

Without loss of generality, we may assume that \(M_{\varvec{u}}\) is the \(k \times k\) submatrix of M formed by the first \(u_i\) columns in every \(M_i\). For every \(i \in J'_m\), take \(\bar{B}_i = \big ( \beta _{i,v}^{u-1}\big )\) with \(u \in [r_i]\) and \(v \in [n_i]\), and let \(B'_i\) and \(\bar{B}'_i\) denote the submatrices formed by the first \(u_i\) columns in \(B_i\) and \(\bar{B}_i\), respectively. In addition, for any \(\varvec{j_i} = (j_{i,1}, \ldots , j_{i, u_i})\) with \(i \in J'_m\) such that \(1 \le j_{i,1}< \cdots < j_{i,u_i} \le r_i\), let \(B'_{i}(\varvec{j_i} )\) and \(\bar{B}'_{i}(\varvec{j_i} )\) denote the \(u_i \times u_i\) submatrices formed by the \(j_{i,1}\)th row, ..., \(j_{i,u_i}\)th row of \(B'_i\) and \(\bar{B}'_i\), respectively, and let \(A_i (\varvec{j_i})\) denote the submatrix formed by the \(j_{i,1}\)th column, ..., \(j_{i,u_i}\)th column of \(A_i\). Then

$$\begin{aligned} \det \big (B'_{i}(\varvec{j_i} ) \big ) = \det \big (\bar{B}'_{i}(\varvec{j_i} ) \big ) x^{\sum _{v=1}^{u_i} (j_{i,v}-1)}. \end{aligned}$$

If \(j_{i, v} = r_i - u_i +v\) for \(v \in [u_i]\), then the exponent of x in \(\det ( B'_{i} (\varvec{j_i})) \) is maximum, that is

$$\begin{aligned} \sum _{v=1}^{u_i} (j_{i,v}-1) = \sum _{v=1}^{u_i} ( r_i - u_i +v -1) = u_i ( r_i - u_i ) + \sum _{v=1}^{u_i-1} v = \frac{1}{2} u_i ( 2r_i - u_i - 1 ). \end{aligned}$$
(14)

Note that in this case \(\bar{B}'_{i} (\varvec{j_i})\) is the submatrix formed by the last \(u_i\) rows of \(\bar{B}'_{i}\), so \(\det (\bar{B}'_{i} (\varvec{j_i})) \ne 0\). Hence, Proposition 4 implies that \(\det (M_{\varvec{u}})\) can be viewed as a polynomial on x and the summand with the maximum exponent of x in it is

$$\begin{aligned} \Big ( \prod _{i=1}^{m} \det \big ( \bar{B}'_{i} (\varvec{j_i}) \big ) \Big ) \det \Big ( \big ( A_0(\varvec{j_0})| A_1(\varvec{j_1})| \cdots | A_m(\varvec{j_m}) \big ) \Big ) x^t, \end{aligned}$$
(15)

where for \(i \in J_m\) and \(v \in [u_i]\), \(j_{i, v} = r_i - u_i +v\). As \(\sum _{i=1}^m u_i^2 \ge \sum _{i=1} ^m u_i\) and \(\sum _{i=1} ^m u_i = k~ \mathrm{or} ~ k-1\), from (14), we have

$$\begin{aligned} t = \frac{1}{2} \sum _{i=1}^{m} u_i ( 2r_i - u_i - 1 ) = \sum _{i=1}^{m} u_i r_i - \frac{1}{2} \sum _{i=1}^{m} (u_i^2 + u_i ) \le k ( r - 1 ). \end{aligned}$$
(16)

In addition, the matrix \(\big ( A_0(\varvec{j_0})| A_1(\varvec{j_1})| \cdots |A_m(\varvec{j_m}) \big )\) is nonsingular; thus \(\det (M_{\varvec{u}})\) is a nonzero polynomial on x of degree \(t \le k(r-1)\). The same argument applies to any \(\varvec{u} \in \mathcal {B}( \mathcal {Z}')\), (12).    \(\square \)
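
For a concrete instance of the bound (16): with \(m = 2\), \(k = 3\), \(r_1 = r_2 = 2\) and \(\varvec{u} = (u_0, u_1, u_2) = (0, 1, 2) \in \mathcal {B}( \mathcal {Z}')\), (12), we get

$$\begin{aligned} t = \sum _{i=1}^{2} u_i r_i - \frac{1}{2} \sum _{i=1}^{2} (u_i^2 + u_i) = (2 + 4) - \frac{1}{2}(2 + 6) = 2 \le k(r-1) = 3. \end{aligned}$$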

Now, take a finite field \(\mathbb {F}_{q^{\lambda }}\), where \(q \ge \max _{i \in J_m} \{ n_i , |\varvec{r}(J_m) |+1\}\) is a prime power and \(\lambda > k(r-1)\). Take \(\alpha _{i,v}\) and \(\beta _{i, v}\) in the matrix (13) from \(\mathbb {F}_q \) and take \(x \in \mathbb {F}_{q^{\lambda }}\) such that its minimal polynomial over \(\mathbb {F}_q\) is of degree \(\lambda \). Then, arguing as for Theorem 5 and Proposition 6, this lemma yields the following results.

Theorem 7

The matrix (13) is a representation of the matroid associated to UCASs (5) over \(\mathbb {F}_{q^{\lambda }}\) for some prime power \(q \ge \max _{i \in J_m} \{ n_i , |\varvec{r}(J_m)|+1\}\) and some \(\lambda > k(r-1)\). Moreover, such a representation can be obtained in time \(O(q, \lambda )\).

Proposition 9

Suppose M is the matrix (13). Then LSSS(M) realizes the UCASs (5) over \(\mathbb {F}_{q^{\lambda }}\) defined as in Theorem 7. Moreover, such a scheme can be obtained in time \(O(q, \lambda )\).

Proof

If \(\varvec{u}(J_m) \in \min \varGamma \) and \(u_{0} = 0\), then \(\varvec{u} (J'_m) \in \mathcal {B}( \mathcal {Z}')\), (12). Theorem 7 implies \(M_{\varvec{u}(J_m)}\) is nonsingular. Accordingly, \(M_{0}\) can be spanned by the columns in \(M_{\varvec{u}(J_m)}\) for any \(\varvec{u}(J_m) \in \varGamma \). Assume that \(\varvec{u}(J_m) \notin \varGamma \). As \(h(\{i\}) = r_{i}\) for every \(i \in J_m\), without loss of generality we may assume that \(\varvec{u}(J_m) \le \varvec{r}(J_m)\). Furthermore, we may assume that \(|\varvec{u}(J_m)| =k - 1\), since if \(|\varvec{u}(J_m)| < k - 1\), we may find a vector \(\varvec{u'}(J_m) \ge \varvec{u}(J_m)\) such that \(\varvec{u'}(J_m) \le \varvec{r}(J_m)\) and \(|\varvec{u'} (J_m) | =k - 1\). In this case if \(u_{0} =1\), then \(\varvec{ u} (J'_m) \in \mathcal {B}( \mathcal {Z}')\). Theorem 7 implies \(M_{\varvec{u} (J'_m)}\) is nonsingular, and the claim follows.    \(\square \)

5.2 Construction for Compartmented Access Structures with Lower Bounds

In this section, we describe ideal linear secret sharing schemes realizing LCASs based on the schemes for the dual access structures of LCASs.

The dual access structures of LCASs (4) presented in [37] are defined as

$$\begin{aligned} \varGamma ^* = \{ \varvec{u} \in \mathbf {P}: |\varvec{u}| \ge l ~\mathrm{or}~ u_i \ge \tau _i ~\mathrm{for ~some} ~i \in J_m \} \end{aligned}$$
(17)

where \(l=|P|- k +1\), \(\tau _i= |\mathrm {\Pi }_i| - t_i +1\) for \(i \in J_m\), and \( |\varvec{\tau }| \ge l +m -1\).

We first present a representation of the integer polymatroid \(\mathcal {Z}'\) satisfying Theorem 2 such that the access structures (17) are of the form \(\varGamma _{0} (\mathcal {Z}' , \mathrm {\Pi })\).

Let \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be a partition of the set P such that \(|\mathrm {\Pi }_i| = n_i\). Let \(\varvec{\tau } \in \mathbb {Z}_{+}^{J'_m}\) and \(l \in \mathbb {N}\) be such that \(\tau _0 = 1\), \(\varvec{\tau }(J_m) \le \mathrm {\Pi } (P)\) and \(|\varvec{\tau }(J_m)| \ge l +m -1\). Take \(\varvec{\tau '} \in \mathbb {Z}_{+}^{J'_m}\) such that \(\tau '_{0} =1\) and \(\tau '_{i} = \tau _i -1\) for every \(i \in J_m\). The following result was presented in Section IV-D of [19].

Lemma 7

Suppose \(\mathcal {Z}' = (J'_m, h)\) is an integer polymatroid with h satisfying

  (1)

    \(h(\{0\} ) =1\);

  (2)

    \(h( X) = \min \{l, 1+ |\varvec{\tau '} (X)| \}\) for every \(X \subseteq J_m\);

  (3)

    \(h( X \cup \{ 0 \} ) = h(X)\) for every \(X \subseteq J_m\).

Then the access structures (17) are of the form \(\varGamma _{0} (\mathcal {Z}', \mathrm {\Pi } )\).

We next introduce a linear representation of the polymatroid defined in Lemma 7. Take elements \(\alpha _{i, j} \in \mathbb {F}_{q}\) with \(i \in J'_m\) and \(j \in [\tau _i]\) where \(q > \max _{i \in J_m} \{ n_i, |\varvec{\tau '} (J_m)|\}\) is a prime power such that

  • \(\alpha _{i,1} = \alpha _{0} \) for all \(i \in J'_m\) and

  • the elements \(\alpha _{0}\) and \(\alpha _{i, j}\) with \(i \in J_m\) and \(j \in [2, \tau _i]\) are pairwise distinct.

For every \(i \in J'_m\), let

$$ A_i = \big ( \alpha _{i, v}^{u-1} \big ) \quad \quad u \in [l],~ v \in [\tau _i] $$

and consider the \(\mathbb {F}_{q}\)-vector subspace \(V_i \subseteq \mathbb {F}^l_{q}\) spanned by all the columns of \(A_i\). Let \(\mathcal {Z}' = (J'_m, h)\) be the integer polymatroid such that \(h(X) = \dim \big ( \sum _{i \in X} V_i \big )\) for every \(X \subseteq J'_m\).

Proposition 10

For the integer polymatroid \(\mathcal {Z}' \) defined above, the access structures (17) are of the form \(\varGamma _{0} (\mathcal {Z}', \mathrm {\Pi } )\) and \(\mathcal {B}( \mathcal {Z}') =\mathcal {B}_1 \cup \mathcal {B}_2\), where

$$\begin{aligned} \begin{aligned} \mathcal {B}_1 = \{ \varvec{u} \in \mathbb {Z}_{+}^{J'_m}:&~|\varvec{u}| = l,~u_{0} =0, ~u_{i'} \le \tau _{i'}~\mathrm{for~some}~ i' \in J_m \\&~~\mathrm{and}~u_i \le \tau '_{i} ~\mathrm{for~all}~ i \in J_m \backslash \{i'\} \}, \\ \mathcal {B}_2=\{ \varvec{u} \in \mathbb {Z}_{+}^{J'_m} :&~ |\varvec{u} | = l,~u_{0} =1 ~\mathrm{and}~\varvec{u} (J_m) \le \varvec{\tau '} (J_m)\}. \end{aligned} \end{aligned}$$
(18)

Proof

Proving the first claim is equivalent to proving that h satisfies the three conditions in Lemma 7. First, \(h(\{0\} ) =1\) as \(\dim (V_{0}) =1\). Let A be the matrix formed by the column \(A_0\) and the last \(\tau '_i\) columns of \(A_i\) for every \(i \in J_m\). Then it is an \(l \times (1+ |\varvec{\tau '} (J_m)|)\) Vandermonde matrix. Accordingly, any \(l \times l\) submatrix of A is nonsingular. Since \( \big |\bigcup _{i \in X} \{ \varvec{a_{i,v}} : v \in [\tau _i] \} \big | = 1+ |\varvec{\tau '} (X)| \) for every \(X \subseteq J_m\), where \(\varvec{a_{i,v}}\) denotes the vth column of \(A_i\), it follows that \(h( X) = \min \{l, 1+ |\varvec{\tau '} (X)| \}\) for every \(X \subseteq J_m\). Moreover, \( V_{0} \subseteq V_i\) for every \(i \in J_m\); therefore, \(h( X \cup \{ 0 \} ) = h(X)\) for every \(X \subseteq J_m\).

In addition, since any \(l \times l\) submatrix of A is nonsingular, on the one hand, any l distinct columns from \(A_i\) with \(i \in J_m\) are linearly independent, and on the other hand, \(A_0\) and any \(l-1\) columns from the last \(\tau '_i\) columns of \(A_i\) with \(i \in J_m\) are linearly independent too. This implies the second claim.    \(\square \)

We next present a matrix M which is a representation of a matroid \(\mathcal {M}\) such that the access structures (17) are of the form \(\varGamma _{p_0} (\mathcal {M})\).

Suppose \(\mathrm {\Pi }_0 = \{ p_0\}\) and let \(\mathrm {\Pi }' = (\mathrm {\Pi }_i)_{i \in J'_m}\) and \(\mathrm {\Pi } = (\mathrm {\Pi }_i)_{i \in J_m}\) be the partition of \(P' = P \cup \{p_0\} \) and P, respectively, such that \(|\mathrm {\Pi }_i |= n_i\). Take \(\beta _{0,1} = 0\) and for every \(i \in J_m\), take \(n_i\) different elements \(\beta _{i, v} \in \mathbb {F}_{q}\) with \(v \in [n_i]\) such that \(\beta _{i, v} \ne 0\). For every \(i \in J'_m\), take

$$ B_i = \big ( (\beta _{i, v} x) ^{u-1} \big ) \quad \quad u \in [\tau _i],~ v \in [n_i] $$

and \(M_i = A_i B_i\). Define a \(l \times (n+1)\) matrix as

$$\begin{aligned} M = (M_0 | M_1| \cdots | M_m). \end{aligned}$$
(19)

Lemma 8

For any \(\varvec{u} \in \mathcal {B}( \mathcal {Z}')\), (18), \(\det (M_{\varvec{u}})\) is a nonzero polynomial on x of degree at most \(l ( \tau - 1 )\), where \(\tau = \max _{i \in J_m} \{\tau _i\}\).

Proof

Without loss of generality, we may assume that \(M_{\varvec{u}}\) is the \(l \times l\) submatrix of M formed by the first \(u_i\) columns in every \(M_i\). For every \(i \in J'_m\), take \(\bar{B}_i = \big ( \beta _{i,v}^{u-1}\big )\) with \(u \in [\tau _i]\) and \(v \in [n_i]\), and let \(\bar{B}'_i\) denote the submatrix formed by the first \(u_i\) columns in \(\bar{B}_i\). Proposition 4 implies that \(\det (M_{\varvec{u}})\) can be viewed as a polynomial on x.

In the case of \(\varvec{u} \in \mathcal {B}_1\), let the summand with the maximum exponent of x in \(\det (M_{\varvec{u}})\) be denoted by \(a_{t_1} x^{t_1}\). Then similar to (15),

$$a_{t_1} x^{t_1} = \big ( \prod _{i=1}^{m} \det \big ( \bar{B}'_{i} (\varvec{j_i}) \big ) \big ) \det \big ( \big ( A_1(\varvec{j_1})| \cdots | A_m (\varvec{j_m}) \big ) \big ) x^{t_1},$$

where \(\varvec{j_i} = (j_{i,1}, \ldots , j_{i, u_i})\) with \(i \in J_m\) such that \(j_{i,v} = \tau _i - u_i + v\) for \(v \in [u_i]\). In this case the matrix \(\big ( A_1(\varvec{j_1})| \cdots | A_m(\varvec{j_m}) \big )\) is nonsingular since all of its columns are pairwise distinct. Since, in addition, each \(\bar{B}'_{i} (\varvec{j_i})\) is nonsingular, we have \(a_{t_1} \ne 0\). Moreover, as \(u_i \le \tau _i\) for every \(i \in J_m\), the inequality (16) implies \(t_1 \le l (\tau -1)\).

In the case of \(\varvec{u} \in \mathcal {B}_2\), let the summand with the maximum exponent of x in \(\det (M_{\varvec{u}})\) be denoted by \(a_{t_2} x^{t_2}\). Then

$$a_{t_2} x^{t_2} = \big ( \prod _{i=1}^{m} \det \big ( \bar{B}'_{i} (\varvec{j_i}) \big ) \big ) \det \big ( \big ( A_0 | A_1(\varvec{j_1})| \ldots | A_m(\varvec{j_m}) \big ) \big ) x^{t_2},$$

where \(\varvec{j_i} = (j_{i,1}, \ldots , j_{i, u_i})\) with \(i \in J_m\) such that \(j_{i,v} = \tau _i - u_i + v\) for \(v \in [u_i]\). In this case \(u_i \le \tau _i -1\) for every \(i \in J_m\). Therefore, from the inequality (16), \(t_2 \le l (\tau -1)\). Moreover, \(a_{t_2} \ne 0\): each \(\bar{B}'_{i} (\varvec{j_i})\) is nonsingular, and since \(j_{i,1} = \tau _i - u_i + 1 \ge 2\), the chosen columns avoid the first column of each \(A_i\), so all the columns of \(\big ( A_0 | A_1(\varvec{j_1})| \cdots |A_m(\varvec{j_m}) \big )\) are pairwise distinct and this matrix is nonsingular too.    \(\square \)

Now, take a finite field \(\mathbb {F}_{q^{\lambda }}\), where \(q > \max _{i \in J_m} \{ n_i , |\varvec{\tau '} (J_m)|\}\) is a prime power and \(\lambda > l ( \tau - 1 )\). Take \(\alpha _{i,v}\) and \(\beta _{i, v}\) in the matrix (19) from \(\mathbb {F}_q \backslash \{0\} \) and take \(x \in \mathbb {F}_{q^{\lambda }}\) such that its minimal polynomial over \(\mathbb {F}_q\) is of degree \(\lambda \). Similar to Theorem 7, we can obtain the following results.

Theorem 8

The matrix (19) is a representation of the matroid associated to access structures (17) over \(\mathbb {F}_{q^{\lambda }}\) for some prime power \(q > \max _{i \in J_m} \{ n_i , |\varvec{\tau '} (J_m)|\}\) and some \(\lambda >l ( \tau - 1 )\). Moreover, such a representation can be obtained in time \(O(q, \lambda )\).

Proposition 11

Suppose M is the matrix (19). Then LSSS(M) realizes the access structures (17) over \(\mathbb {F}_{q^{\lambda }}\) defined as in Theorem 8. Moreover, such a scheme can be obtained in time \(O(q, \lambda )\).

Proof

Let \(\varvec{u}(J_m) \in \varGamma ^*\), (17), be a minimal set; then either \(|\varvec{u} (J_m)| = l\) and \(\varvec{u}(J_m) \le \varvec{\tau '}(J_m)\), or \(u_{i} = \tau _{i}\) for some \(i \in J_m\). In the first case, Theorem 8 implies that \(M_{0}\) can be spanned by all the columns in \(M_{\varvec{u}(J_m)}\). Moreover, Theorem 8 implies that any \(\tau _i\) columns of \(M_i\) are linearly independent. From this with \(h(\{0, i\}) = h (\{ i \}) = \tau _i\) for every \(i \in J_m\), \(M_{0}\) is a linear combination of any \(\tau _i\) columns in \(M_i\). Hence, in the second case \(M_{0}\) can be spanned by all the columns in \(M_{\varvec{u}(J_m)}\) too.

Assume that \(\varvec{u}(J_m) \notin \varGamma ^*\), (17). Then \(\varvec{u} (J_m) \le \varvec{\tau '} (J_m)\) and \(|\varvec{u} (J_m) |\le l-1\). Without loss of generality, we may assume that \(|\varvec{u} (J_m) | =l-1\), since if \(|\varvec{u} (J_m)| < l - 1\), we may find a vector \(\varvec{u'}(J_m) \ge \varvec{u}(J_m)\) such that \(\varvec{u'} (J_m) \le \varvec{\tau '}(J_m)\) and \(|\varvec{u'} (J_m)| =l - 1\). As \(l \le |\varvec{\tau '}(J_m) | +1\), the above-described procedure is possible. In this case if \(u_{0} =1\), then \(\varvec{u}(J'_m) \in \mathcal {B}_2\). Theorem 8 implies \(M_{\varvec{u} (J'_m)}\) is nonsingular, and the claim follows.    \(\square \)

Remark 2

From the dual relationship of the access structures (17) and the LCASs (4), we can translate the scheme in Proposition 11 into an ideal linear scheme for the LCASs (4) using the explicit transformation of [20]. In particular, an efficient construction of ideal linear schemes realizing the LCASs (4) can be obtained over \(\mathbb {F}_{q^{\lambda }}\) in time \(O(q, \lambda )\) for some prime power \(q > \max _{i \in J_m} \{ n_i , \sum _{i=1}^m (n_i - t_i) \}\) and some \(\lambda > (n-k+1) t\), where \(t = \max _{i \in J_m} \{n_i - t_i \}\).
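
For a purely illustrative parameter instance: take \(m = 2\), \(n_1 = 4\), \(n_2 = 5\) (so \(n = 9\)), \(k = 4\) and \(t_1 = 1\), \(t_2 = 2\). Then \(l = n - k + 1 = 6\), \(\varvec{\tau } = (4, 4)\) and \(|\varvec{\tau }| = 8 \ge l + m - 1 = 7\), so Proposition 11 applies, and the translated scheme for the LCASs (4) can be obtained over \(\mathbb {F}_{q^{\lambda }}\) with a prime power \(q > \max \{5, 6\} = 6\) and \(\lambda > (n-k+1)t = 6 \cdot 3 = 18\).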