
1 Introduction

Consider a ground set G of n elements and \(k\in N_+\). We define \((k+1)^{G}\) as the family of k-tuples of pairwise disjoint subsets \((X_1,\ldots ,X_k)\), where \(X_i\subseteq G\) for all \(i\in [k]\) and \(X_i\cap X_j=\emptyset \) for all \(i\ne j\). A function \(f:(k+1)^{G}\rightarrow R\) is said to be k-submodular [7], if

$$\begin{aligned} f(\boldsymbol{x})+f(\boldsymbol{y}) \ \ge \ f(\boldsymbol{x}\sqcup \boldsymbol{y})+f(\boldsymbol{x}\sqcap \boldsymbol{y}), \end{aligned}$$

for any \(\boldsymbol{x}=(X_1,\ldots ,X_k)\) and \(\boldsymbol{y}=(Y_1,\ldots ,Y_k)\) in \((k+1)^{G}\), where

$$\begin{aligned} \boldsymbol{x}\sqcup \boldsymbol{y}:=(X_1\cup Y_1\setminus (\bigcup _{i\ne 1}X_i\cup Y_i),\ldots ,X_k\cup Y_k\setminus (\bigcup _{i\ne k}X_i\cup Y_i)), \end{aligned}$$
$$\begin{aligned} \boldsymbol{x}\sqcap \boldsymbol{y}:= (X_1\cap Y_1,\ldots ,X_k\cap Y_k). \end{aligned}$$

For \(k=1\), this is exactly the usual notion of a submodular function.
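To make the operations concrete, here is a small Python sketch (our own illustrative encoding, not from the paper) that represents \(\boldsymbol{x}=(X_1,\ldots ,X_k)\) as a tuple of frozensets and implements \(\sqcup \) and \(\sqcap \); the function \(f(\boldsymbol{x})=|U(\boldsymbol{x})|\) used at the end is one simple example of a monotone k-submodular function:

```python
def join(x, y):
    """x ⊔ y: the i-th part is (X_i ∪ Y_i) minus elements also placed
    in a *different* part of x or y (conflicting placements vanish)."""
    k = len(x)
    out = []
    for i in range(k):
        others = set()
        for j in range(k):
            if j != i:
                others |= x[j] | y[j]
        out.append(frozenset((x[i] | y[i]) - others))
    return tuple(out)

def meet(x, y):
    """x ⊓ y: componentwise intersection."""
    return tuple(frozenset(a & b) for a, b in zip(x, y))

def support(x):
    """U(x): union of all parts."""
    return frozenset().union(*x)

# f(x) = |U(x)| satisfies f(x) + f(y) >= f(x ⊔ y) + f(x ⊓ y)
x = (frozenset({1}), frozenset({2}))
y = (frozenset({2}), frozenset({3}))
assert len(support(x)) + len(support(y)) >= \
       len(support(join(x, y))) + len(support(meet(x, y)))
```

Here element 2 is placed in different parts of x and y, so it disappears from \(\boldsymbol{x}\sqcup \boldsymbol{y}\), exactly as in the definition above.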

As early as 1978, Nemhauser et al. [11] studied monotone submodular maximization subject to a cardinality constraint and obtained a greedy \((1-1/e)\)-approximation algorithm. Many scholars have extended submodular maximization to other constraints and designed approximation algorithms; see [1,2,3,4,5,6, 10, 17, 20]. Among these, knapsack constraints and matroid constraints have received the most attention, and for each of them algorithms achieving the tight \(1-1/e\) approximation ratio are known. However, under the intersection of a knapsack constraint and a matroid constraint, no \((1-1/e)\)-approximation algorithm is known, since the loss incurred by rounding is difficult to avoid. Recently, by combining greedy and local search techniques, Sarpatwar et al. [16] contributed an algorithm achieving a \(\frac{1-e^{-2}}{2}\) approximation ratio.

In recent years, the k-submodular maximization problem has attracted wide attention, and many results are available. For unconstrained k-submodular maximization, Ward and Zivny [19] gave a deterministic greedy algorithm with approximation ratio 1/3 and a randomized greedy algorithm with approximation ratio \(\frac{1}{1+a}\), where \(a=\max \{1, \sqrt{\frac{k-1}{4}}\}\). Iwata et al. [8] improved the ratio to 1/2, and later [14] contributed an algorithm with ratio \(\frac{k^2+1}{2k^2+1}\). Under the monotonicity assumption, Ward and Zivny [19] gave a 1/2-approximation algorithm and Iwata et al. [8] improved the ratio to \(k/(2k-1)\), which is asymptotically tight. There are also many results for nonnegative monotone k-submodular maximization with constraints. In 2015, Ohsaka and Yoshida [13] designed a 1/2-approximation algorithm for a total size constraint. Sakaue [15] presented a 1/2-approximation algorithm under a matroid constraint. For monotone k-submodular maximization subject to a knapsack constraint, Tang et al. [18] proposed an algorithm with approximation ratio \(\frac{1-1/e}{2}\). Liu et al. [9] designed a combinatorial approximation algorithm for monotone k-submodular maximization subject to a knapsack and a matroid constraint and obtained a \(\frac{1}{4}(1-e^{-2})\) approximation ratio.

In this paper, we consider k-submodular maximization subject to a knapsack and a matroid constraint, building on the algorithm given by Liu et al. [9]. The main contributions of this paper are as follows:

  • We extend the algorithm for the k-submodular maximization problem with a knapsack and a matroid constraint to the nonmonotone case and achieve a \(\frac{1}{6}(1-e^{-2})\) approximation ratio, based on the pairwise monotonicity property.

  • We improve the approximation ratio from \(\frac{1}{4}(1-e^{-2})\) in [9] to \(\frac{1}{3}(1-e^{-3})\) under the monotonicity assumption. In the analysis, we no longer rely on the guarantee of the greedy algorithm for the unconstrained k-submodular maximization problem, but instead use properties of k-submodular functions directly to obtain the new result.

We organize the paper as follows. In Sect. 2, we first introduce the k-submodular function and some corresponding results, then formalize the k-submodular maximization problem with a knapsack and a matroid constraint. We present our results for the non-monotone case in Sect. 3. In Sect. 4, we give our theoretical analysis for the monotone case.

2 Preliminaries

2.1 k-Submodular Function

For any two k-tuples of disjoint subsets \(\boldsymbol{x}=(X_1,\ldots ,X_k)\) and \(\boldsymbol{y}=(Y_1,\ldots ,Y_k)\) in \((k+1)^{G}\), we introduce a removal operation and a partial order:

$$\begin{aligned} \boldsymbol{x}\setminus \boldsymbol{y}:= (X_1\setminus Y_1,\ldots ,X_k\setminus Y_k), \end{aligned}$$

\(\boldsymbol{x}\preceq \boldsymbol{y}\), if \(X_i\subseteq Y_i, \forall i\in [k]\).

Define \(\emptyset :=(\emptyset ,\dots ,\emptyset )\in (k+1)^G\), and let \((v,i)\in (k+1)^G\) denote the tuple with \(X_i=\{v\}\) and \(X_{j}=\emptyset \) for all \(j\in [k]\) with \(j\ne i\). Denote \(U(\boldsymbol{x})=\bigcup _{i=1}^{k}X_i\). For \(v\notin U(\boldsymbol{x})\), we write \(f_{\boldsymbol{x}}((v,i))=f(\boldsymbol{x}\sqcup (v,i))-f(\boldsymbol{x})\) for the marginal gain of f. A function f is said to be pairwise monotone if \(f_{\boldsymbol{x}}((v,i))+f_{\boldsymbol{x}}((v,j))\ge 0\) holds for any \(i\ne j\in [k]\). In addition, f is called orthant submodular if \(f_{\boldsymbol{x}}((v,i))\ge f_{\boldsymbol{y}}((v,i))\) holds for any \(\boldsymbol{x}\preceq \boldsymbol{y}\). With these definitions, we have the following equivalent characterization and properties of k-submodular functions.
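These definitions can be checked mechanically on tiny instances. The sketch below (again an illustrative encoding; `add_to`, `marginal`, and `solutions` are our own helper names, not from the paper) computes marginal gains \(f_{\boldsymbol{x}}((v,i))\) and verifies pairwise monotonicity for a toy linear function with nonnegative weights:

```python
from itertools import product

def add_to(x, v, i):
    # x ⊔ (v, i): place v into part i (assumes v not already placed)
    return tuple(p | {v} if j == i else p for j, p in enumerate(x))

def marginal(f, x, v, i):
    # f_x((v, i)) = f(x ⊔ (v, i)) - f(x)
    return f(add_to(x, v, i)) - f(x)

def solutions(G, k):
    # enumerate (k+1)^G: each element gets a part 1..k or stays out (0)
    for a in product(range(k + 1), repeat=len(G)):
        yield tuple(frozenset(v for v, ai in zip(G, a) if ai == j + 1)
                    for j in range(k))

# toy linear f: sum of per-placement weights; with nonnegative weights
# the marginals are constant, so f is monotone k-submodular
w = {('u', 0): 1, ('u', 1): 3, ('v', 0): 2, ('v', 1): 0}
f = lambda x: sum(w[(e, i)] for i, part in enumerate(x) for e in part)

# pairwise monotonicity: f_x((v,i)) + f_x((v,j)) >= 0 for i != j
for x in solutions(['u', 'v'], 2):
    placed = frozenset().union(*x)
    for e in {'u', 'v'} - placed:
        assert marginal(f, x, e, 0) + marginal(f, x, e, 1) >= 0
```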

Definition 1

[19]. A function \(f:(k+1)^G\rightarrow R\) is k-submodular iff it is pairwise monotone and orthant submodular.

Lemma 1

[18]. Given a k-submodular f, we have

$$\begin{aligned} f(\boldsymbol{y})-f(\boldsymbol{x})\le \sum \limits _{(v,i)\preceq \boldsymbol{y}\backslash \boldsymbol{x}}f_{\boldsymbol{x}}((v,i)), \end{aligned}$$

for any \(\boldsymbol{x}\preceq \boldsymbol{y}\).

Checking the definition of k-submodularity yields the following lemma.

Lemma 2

Given a k-submodular f and a fixed \((v,i)\), define \(g:(k+1)^{G\backslash \{v\}}\rightarrow R\) by \(g(\boldsymbol{x})=f(\boldsymbol{x}\sqcup (v,i))\); then g is k-submodular.

2.2 k-Submodular Maximization with a Knapsack and a Matroid Constraint

Let \(\mathcal {L}\subseteq 2^G\) be a family of subsets of G. A pair \((G,\mathcal {L})\) is called an independence system if \((\mathcal {M}1)\) and \((\mathcal {M}2)\) below hold. If \((\mathcal {M}3)\) also holds, the independence system \((G,\mathcal {L})\) is a matroid.

Definition 2

Given a pair \(\mathcal {M}=(G,\mathcal {L})\), where \(\mathcal {L}\subseteq 2^G\), we call \(\mathcal {M}\) a matroid if the following hold:

  • \((\mathcal {M}1)\): \(\emptyset \in \mathcal {L}\).

  • \((\mathcal {M}2)\): for any subset \(A\in \mathcal {L}\), \(B\subseteq A\) implies \(B\in \mathcal {L}\).

  • \((\mathcal {M}3)\): for any two subsets \(A,B\in \mathcal {L}\), \(\mid A\mid >\mid B\mid \) implies that there exists a point \(v\in A\backslash B\) such that \(B\cup \{v\}\in \mathcal {L}\).
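On a small ground set, the three axioms can be checked exhaustively. A brute-force checker (illustrative only, not part of the algorithms in this paper) might look as follows:

```python
from itertools import combinations

def is_matroid(family):
    """Brute-force check of (M1)-(M3) for a family of frozensets."""
    # (M1): the empty set is independent
    if frozenset() not in family:
        return False
    # (M2): downward closed (removing one element at a time suffices)
    for A in family:
        if any(A - {v} not in family for v in A):
            return False
    # (M3): exchange property
    for A in family:
        for B in family:
            if len(A) > len(B) and not any(B | {v} in family for v in A - B):
                return False
    return True

# the uniform matroid of rank 2 on {1, 2, 3}: all subsets of size <= 2
uniform2 = {frozenset(c) for r in range(3) for c in combinations({1, 2, 3}, r)}
assert is_matroid(uniform2)
# {∅, {1,2}} violates (M2), hence is not a matroid
assert not is_matroid({frozenset(), frozenset({1, 2})})
```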

Given a subset \(A\in \mathcal {L}\) and a pair of points (a, b), where \(a\in A\cup \{\emptyset \}\) and \(b\in G\backslash A\), we call the pair (a, b) a swap(a, b) if \(A\backslash \{a\}\cup \{b\}\in \mathcal {L}\). In other words, only such special pairs of points, called swaps, guarantee that \(A\backslash \{a\}\cup \{b\}\) is still an independent set.

We highlight that the next lemma ensures, in the later analysis, that a swap(y(b), b) always exists between the optimal solution \(\boldsymbol{x}^*\) and the current solution \(\boldsymbol{x}^t\). Take the support of the current solution, \(U(\boldsymbol{x}^t)\), as \(A\in \mathcal {L}\) and \(U(\boldsymbol{x}^*)\) as \(B\in \mathcal {L}\). We will look for a special kind of swap(y(b), b) of \(U(\boldsymbol{x}^t)\), where \(b\in U(\boldsymbol{x}^*)\backslash U(\boldsymbol{x}^t)\) and \(y(b)\in (U(\boldsymbol{x}^t)\backslash U(\boldsymbol{x}^*))\cup \{\emptyset \}\).

Lemma 3

[16]. Given two sets \(A,B\in \mathcal {L}\), we can construct a mapping \(y : B\backslash A\rightarrow (A\backslash B)\cup \{\emptyset \}\) such that every point \(b\in B \backslash A\) satisfies \((A\backslash \{y(b)\})\cup \{b\} \in \mathcal {L}\) and every \(a\in A \backslash B\) satisfies \(|y^{-1}(a)|\le 1\).
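Lemma 3 can likewise be confirmed by exhaustive search on small matroids. In the sketch below (our own construction, not the proof technique of [16]; `None` stands in for \(\emptyset \)), we search over all candidate mappings:

```python
from itertools import product, combinations

def exchange_mapping(family, A, B):
    """Search for y : B\\A -> (A\\B) ∪ {∅} as in Lemma 3:
    (A \\ {y(b)}) ∪ {b} independent, each a hit at most once."""
    B_minus_A = sorted(B - A)
    targets = sorted(A - B) + [None]          # None plays the role of ∅
    for choice in product(targets, repeat=len(B_minus_A)):
        used = [a for a in choice if a is not None]
        if len(used) != len(set(used)):       # enforce |y^{-1}(a)| <= 1
            continue
        # A - {None} == A, so the same formula covers y(b) = ∅
        if all((A - {a}) | {b} in family for b, a in zip(B_minus_A, choice)):
            return dict(zip(B_minus_A, choice))
    return None

# uniform matroid of rank 2 on {1, 2, 3, 4}
fam = {frozenset(c) for r in range(3) for c in combinations({1, 2, 3, 4}, r)}
y = exchange_mapping(fam, frozenset({1, 2}), frozenset({3, 4}))
assert y is not None and set(y) == {3, 4}
```

Here both independent sets have full rank, so each \(b\) must evict a distinct element of A, and the search indeed returns an injective mapping.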

Every point v in G carries a weight \(w_v\ge 0\), and B is a total budget. In the following, we assume that each \(w_v\) is an integer; this is without loss of generality, since all \(w_v\) and B can be scaled proportionally. The two constraints restrict the domain of candidate solutions: we may only pick solutions \(\boldsymbol{x}\in (k+1)^G\) such that the total weight of the points in \(U(\boldsymbol{x})\) is at most B and \(U(\boldsymbol{x})\) is an independent set. Define \(w_{\boldsymbol{x}}=\sum \limits _{v\in U(\boldsymbol{x})}w_v\). The problem can be written as

$$\begin{aligned} \max _{\boldsymbol{x}\in (k+1)^G}\{f(\boldsymbol{x})\mid w_{\boldsymbol{x}}\le B ~\textrm{and} ~ U(\boldsymbol{x})\in \mathcal {L} \}. \end{aligned}$$
(1)
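For tiny instances, problem (1) can be solved exactly by enumeration, which is handy for sanity-checking approximation algorithms. A brute-force sketch (illustrative only; the parameter names are our own):

```python
from itertools import product, combinations

def brute_force_opt(f, G, k, w, B, family):
    """Enumerate (k+1)^G; keep x with w(U(x)) <= B and U(x) independent."""
    best_val, best_x = float('-inf'), None
    for a in product(range(k + 1), repeat=len(G)):
        x = tuple(frozenset(v for v, ai in zip(G, a) if ai == j + 1)
                  for j in range(k))
        U = frozenset().union(*x)
        if sum(w[v] for v in U) <= B and U in family:
            if f(x) > best_val:
                best_val, best_x = f(x), x
    return best_val, best_x

# example: f = |U(x)|, unit weights, budget 2, uniform matroid of rank 2
G = [1, 2, 3]
fam = {frozenset(c) for r in range(3) for c in combinations(G, r)}
val, x = brute_force_opt(lambda x: len(frozenset().union(*x)),
                         G, k=2, w={1: 1, 2: 1, 3: 1}, B=2, family=fam)
assert val == 2
```

Both the budget and the matroid rank cap the support at two elements here, so the optimum places any two of the three points.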

In addition, in the later proof, we need to use the following lemma.

Lemma 4

[11]. Given fixed \(P,D\in N_+\) and a sequence of numbers \(\gamma _i\in R_+\), \(i\in [P]\), we have

$$\begin{aligned} \begin{aligned}&\frac{\sum _{i=1}^P \gamma _i}{\min _{t\in [P]}(\sum _{i=1}^{t-1}\gamma _i+D\gamma _t)}\\&\ge 1-(1-\frac{1}{D})^P \ge 1-e^{-P/D}.\end{aligned} \end{aligned}$$
(2)
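Lemma 4 is easy to confirm numerically. The following randomized check (a sanity test, not a proof) evaluates both sides of (2):

```python
import math
import random

def lemma4_ratio(gammas, D):
    """LHS of (2): the total sum over the worst
    prefix-plus-D-weighted denominator."""
    denom = min(sum(gammas[:t]) + D * gammas[t] for t in range(len(gammas)))
    return sum(gammas) / denom

random.seed(0)
for _ in range(200):
    P = random.randint(1, 6)
    D = random.randint(1, 5)
    g = [random.uniform(0.01, 1.0) for _ in range(P)]
    # ratio >= 1 - (1 - 1/D)^P >= 1 - e^{-P/D}
    assert lemma4_ratio(g, D) >= 1 - (1 - 1 / D) ** P - 1e-9
    assert 1 - (1 - 1 / D) ** P >= 1 - math.exp(-P / D) - 1e-9
```

With \(P=1\) the bound is tight: the ratio equals \(\gamma _1/(D\gamma _1)=1/D=1-(1-1/D)\).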

2.3 Algorithm

Before giving the algorithm for problem (1), we first introduce the greedy algorithm of [19] for unconstrained k-submodular maximization. By Definition 1, a k-submodular function f is pairwise monotone, that is, \(f_{\boldsymbol{x}}((v,i))+f_{\boldsymbol{x}}((v,j))\ge 0\) for any \(i\ne j\in [k]\). Hence, for a fixed \(\boldsymbol{x}\in (k+1)^G\) and \(v\in G\backslash U(\boldsymbol{x})\), there are no two positions \(i\ne j\in [k]\) with both \(f_{\boldsymbol{x}}((v,i))<0\) and \(f_{\boldsymbol{x}}((v,j))<0\). So for any \(v\in G\backslash U(\boldsymbol{x})\) we can always find a position \(i\in [k]\) with \(f_{\boldsymbol{x}}((v,i))\ge 0\). Therefore, for every current solution \(\boldsymbol{x}^t\) in Algorithm 1, we add \(v\in G\backslash U(\boldsymbol{x}^t)\) at a greedily chosen position \(i_j\) until all points \(v\in G\) are added to \(U(\boldsymbol{x}^t)\).
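The greedy routine of [19] can be sketched as follows (a simplified illustration in our own encoding; the actual Algorithm 1 processes elements in the order prescribed by MK-KM):

```python
def greedy_unconstrained(f, k, order):
    """Place each element at its best position; by pairwise
    monotonicity some position always has nonnegative gain."""
    x = tuple(frozenset() for _ in range(k))
    for v in order:
        gains = []
        for i in range(k):
            y = tuple(p | {v} if j == i else p for j, p in enumerate(x))
            gains.append((f(y) - f(x), i))
        _, best = max(gains)                  # greedy position choice
        x = tuple(p | {v} if j == best else p for j, p in enumerate(x))
    return x

# toy linear f: weights per (element, position)
w = {('u', 0): 1, ('u', 1): 3, ('v', 0): 2, ('v', 1): 1}
f = lambda x: sum(w[(e, i)] for i, part in enumerate(x) for e in part)
x = greedy_unconstrained(f, k=2, order=['u', 'v'])
assert x == (frozenset({'v'}), frozenset({'u'}))
```

For this linear f the greedy choice is exact, which makes it a convenient test case; for general k-submodular f the greedy solution is only approximate, as Lemma 6 below quantifies.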

Next we present an algorithm for problem (1), inspired by [16] and [18], called MK-KM (short for maximizing a k-submodular function under a knapsack constraint and a matroid constraint). Let us highlight some important points. First, we select three elements with the largest marginal gains from the optimal solution \(\boldsymbol{x}^*\) by enumeration. Second, for the current solution \(\boldsymbol{x}^t\in \mathcal {L}\) and the optimal solution \(\boldsymbol{x}^*\in \mathcal {L}\), we can always find a swap(y(b), b) satisfying \(y(b)\in \boldsymbol{x}^t\backslash \boldsymbol{x}^*\) and \(b\in \boldsymbol{x}^*\backslash \boldsymbol{x}^t\) by Lemma 3, but the algorithm always chooses a swap(a, b) with the highest marginal profit density \(\rho (a,b)\). In line 9 of MK-KM, we reorder \(U(\boldsymbol{x}^t)\) after the operation of swap(a, b) and ensure \(\boldsymbol{x}^{0}\preceq \boldsymbol{x}^t\). Considering the order in which each element of \((U(\boldsymbol{x}^{t-1}\setminus \boldsymbol{x}^0)\setminus \{a\})\cup \{b\}\) was added to the current solution in MK-KM, we add them to the Greedy Algorithm in the same order. Last but not least, S is regenerated in line 5 only when \(\boldsymbol{x}^t\) is updated; otherwise, MK-KM continues to pick and remove the next swap in the loop of lines 6 to 13. MK-KM breaks the loop when \(S=\emptyset \) in line 6.

(figure a: the Greedy Algorithm, Algorithm 1)
(figure b: the MK-KM algorithm)

We also modify MK-KM into the MK-KM' algorithm for problem (1) under monotonicity. MK-KM' selects only two elements with the largest marginal gains from the optimal solution \(\boldsymbol{x}^*\) by enumeration, which reduces the running time.

(figure c: the MK-KM' algorithm)

To pave the way for the analysis of Sect. 4, we view the current solution \(\boldsymbol{x}^t\) as generated by \(\boldsymbol{x}^0\sqcup \tilde{\boldsymbol{x}}^t\). We define \(\tilde{\boldsymbol{x}}^t_j\) as the current solution of the j-th iteration of the greedy algorithm in line 9, where \(j\in \{1, \dots ,|U(\boldsymbol{x}^t)|-2\}\) for every fixed t, and define \((v_j,i_j)=\tilde{\boldsymbol{x}}^t_{j}\backslash \tilde{\boldsymbol{x}}^t_{j-1}\) in the Greedy Algorithm.

For convenience, we write \(\boldsymbol{x}^t_j=\tilde{\boldsymbol{x}}^t_j\sqcup \boldsymbol{x}^0\). Then \((v_j,i_j)=\boldsymbol{x}^t_{j}\backslash \boldsymbol{x}^t_{j-1}=(\tilde{\boldsymbol{x}}^t_{j}\sqcup \boldsymbol{x}^0)\backslash (\tilde{\boldsymbol{x}}^t_{j-1}\sqcup \boldsymbol{x}^0)\) holds immediately. For each fixed iteration step t, there is a sequence of iteration steps \(j\in \{1, \dots ,|U(\boldsymbol{x}^t)|-2\}\) of the nested greedy algorithm.

3 Analysis for Non-monotone k-submodular Maximization with a Knapsack Constraint and a Matroid Constraint

In this section, we use the nested greedy algorithm to solve problem (1). For nonnegative, non-monotone and unconstrained k-submodular maximization, we need the following conclusions. Lemma 5 comes from Proposition 2.1 in [8]: if some solution achieves the optimal value, we can construct an optimal solution containing all points of the ground set. Therefore, for unconstrained k-submodular maximization, we only analyze optimal solutions that are partitions of the ground set of Algorithm 1. Lemma 6 ensures that we can obtain a 1/3-approximate greedy solution from the nested greedy Algorithm 1 by using \((U(\boldsymbol{x}^{t}\setminus \boldsymbol{x}^0)\setminus \{a\})\cup \{b\}\) as the ground set G, where \(OPT_f(G)\) is the optimal value of unconstrained maximization of the k-submodular f over G.

Lemma 5

[8]. For maximizing a non-monotone k-submodular f over a set G, there exists a partition of G achieving the optimal value.

Lemma 6

[19]. For maximizing a non-monotone k-submodular f over a set G, by greedy algorithm, we can get a solution \(\boldsymbol{x}\) such that \(U(\boldsymbol{x})=G\) and \(3f(\boldsymbol{x})\ge OPT_f(G)\).

With the help of the nested greedy algorithm, we reorder each iterative solution of MK-KM and analyze the approximation ratio in two cases.

Theorem 1

Applying the MK-KM algorithm to problem (1), we can obtain a \(\frac{1}{6}(1-e^{-2})\) approximation ratio.

Proof

Using Lemma 3 between the iterative solution \(\boldsymbol{x}^t\) of MK-KM and the optimal solution \(\boldsymbol{x}^*\), there exists swap (y(b), b) satisfying \(y(b)\in (U(\boldsymbol{x}^t)\backslash U(\boldsymbol{x}^*))\cup \{\emptyset \}\) and \(b\in U(\boldsymbol{x}^*)\backslash U(\boldsymbol{x}^t)\).

For any iteration step t, we construct a solution \(\hat{\boldsymbol{x}}^t\): considering all \((b,i)\preceq \boldsymbol{x}^*\backslash \boldsymbol{x}^t\), we add them to \(\boldsymbol{x}^t\) and obtain \(\hat{\boldsymbol{x}}^t\). Note that \(\boldsymbol{x}^0\preceq \boldsymbol{x}^t\preceq \hat{\boldsymbol{x}}^t\) and \(U(\hat{\boldsymbol{x}}^t)=U(\boldsymbol{x}^*)\cup U(\boldsymbol{x}^t)\).

Due to Lemma 5, there exists an optimal solution containing all points of the ground set G. By Lemma 2, \(f(\boldsymbol{x}\sqcup \boldsymbol{x}^0)\) is k-submodular over \(U(\hat{\boldsymbol{x}}^t)\backslash U(\boldsymbol{x}^0)\). So we let \(OPT_{f(\boldsymbol{x}\sqcup \boldsymbol{x}^0)}(U(\hat{\boldsymbol{x}}^t)\backslash U(\boldsymbol{x}^0))\) be the optimal value of \(f(\boldsymbol{x}\sqcup \boldsymbol{x}^0)\) over \(U(\hat{\boldsymbol{x}}^t)\backslash U(\boldsymbol{x}^0)\). Using Lemma 6 for each \(\boldsymbol{x}^t\) in MK-KM, we always have

$$\begin{aligned} \begin{aligned}&OPT_{f(\boldsymbol{x}\sqcup \boldsymbol{x}^0)}(U(\hat{\boldsymbol{x}}^t)\backslash U(\boldsymbol{x}^0))\\&\le 3f(\hat{\boldsymbol{x}}^t)\\&\le 3f(\boldsymbol{x}^t)+3\sum \limits _{(b,i)\preceq \hat{\boldsymbol{x}}^t\backslash \boldsymbol{x}^t}[f(\boldsymbol{x}^t\sqcup (b,i))-f(\boldsymbol{x}^t)]\\&\le 3f(\boldsymbol{x}^t)+3\sum \limits _{(b,i)\preceq \hat{\boldsymbol{x}}^t\backslash \boldsymbol{x}^t}[f((\boldsymbol{x}^t\backslash (y(b),j))\sqcup (b,i))-f(\boldsymbol{x}^t\backslash (y(b),j))].\\ \end{aligned} \end{aligned}$$
(3)

The first inequality is due to Lemma 6, the second to Lemma 1, and the third to orthant submodularity. Recall that MK-KM breaks all loops when \(S=\emptyset \) in line 6; this implies that no qualified swap(a, b) remains to update the output solution \(\boldsymbol{x}\). We only consider the swaps(y(b), b) in \(S(U(\boldsymbol{x}\backslash \boldsymbol{x}^0))\) related to \(b\in U(\boldsymbol{x}^*)\backslash U(\boldsymbol{x})\) instead of all candidate swaps(a, b). We now use this construction to analyze the algorithm in two cases.

Case 1: Consider the special case in which every swap(y(b), b) was rejected solely because \(\rho (y(b),b)\le 0\), rather than because of the knapsack constraint.

Applying formula (3) for the output solution \(\boldsymbol{x}\) and constructed solution \(\hat{\boldsymbol{x}}\), we get

$$\begin{aligned} \begin{aligned}&f(\boldsymbol{x}^*)\\&\le OPT_{f(\boldsymbol{x}\sqcup \boldsymbol{x}^0)}(U(\hat{\boldsymbol{x}})\backslash U(\boldsymbol{x}^0))\\&\le 3f(\boldsymbol{x})+3\sum \limits _{(b,i)\preceq \hat{\boldsymbol{x}}\backslash \boldsymbol{x}}[f((\boldsymbol{x}\backslash (y(b),j))\sqcup (b,i))-f(\boldsymbol{x}\backslash (y(b),j))].\\ \end{aligned} \end{aligned}$$
(4)

Since \(\rho (y(b),b)\le 0\), we have

$$\begin{aligned} \begin{aligned} f((\boldsymbol{x}\backslash (y(b),j))\sqcup (b,i))\le f(\boldsymbol{x}) \end{aligned} \end{aligned}$$
(5)

for all \((b,i)\preceq \hat{\boldsymbol{x}}\backslash \boldsymbol{x}\). We write \(\{(y(b),j)\}_{b\in U(\hat{\boldsymbol{x}}\backslash \boldsymbol{x})} \backslash \{\emptyset \}=\{(y_1,j_1),\dots ,(y_K,j_K)\}\); then we get

$$\begin{aligned} \begin{aligned}&\sum \limits _{(b,i)\preceq \hat{\boldsymbol{x}}\backslash \boldsymbol{x}}[f(\boldsymbol{x})-f(\boldsymbol{x}\backslash (y(b),j))]\\&\le \sum _{l=1}^{K} [f((\boldsymbol{x}\backslash ((y_1,j_1)\sqcup \dots \sqcup (y_{K},j_{K})))\sqcup ((y_1,j_1)\sqcup \dots \sqcup (y_{l},j_{l})))\\&-f((\boldsymbol{x}\backslash ((y_1,j_1)\sqcup \dots \sqcup (y_{K},j_{K})))\sqcup ((y_1,j_1)\sqcup \dots \sqcup (y_{l-1},j_{l-1})))]\\&=f(\boldsymbol{x})-f(\boldsymbol{x}\backslash ((y_1,j_1)\sqcup \dots \sqcup (y_{K},j_{K})))\\&\le f(\boldsymbol{x}). \end{aligned} \end{aligned}$$
(6)

The first inequality is due to orthant submodularity. Because f is nonnegative, the second inequality holds. So we can get

$$\begin{aligned} \begin{aligned}&f(\boldsymbol{x}^*)\le 6f(\boldsymbol{x}). \end{aligned} \end{aligned}$$
(7)

Therefore, we find a 1/6-approximate solution in Case 1.

Case 2: Consider the opposite of Case 1, namely, there exists at least one swap(y(b), b) satisfying \(w_{\boldsymbol{x}}-w_{y(b)}+ w_b > B\).

Consider the special iteration step \(t^*\) at which, for the first time, a swap \((y(b_*),b_*)\) appears in \(S(U(\boldsymbol{x}^{t^*}\backslash \boldsymbol{x}^0))\) such that \(w_{\boldsymbol{x}^{t^*}}-w_{y(b_*)}+ w_{b_*} > B\), where \(b_*\in U(\boldsymbol{x}^*)\backslash U(\boldsymbol{x}^{t^*})\) and \(y(b_*)\in (U(\boldsymbol{x}^{t^*})\backslash U(\boldsymbol{x}^*))\cup \{\emptyset \}\).

Although this swap\((y(b_*),b_*)\) violates the knapsack constraint, we use it to construct a solution \((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*})\). By orthant submodularity, pairwise monotonicity and the greedy choice of \(\boldsymbol{x}^{\alpha }\), \(\boldsymbol{x}^{\beta }\) and \(\boldsymbol{x}^{\gamma }\), we have

$$\begin{aligned} \begin{aligned}&f((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))-f(\boldsymbol{x}^{t^*}) \le \frac{2}{3}f(\boldsymbol{x}^0). \end{aligned} \end{aligned}$$
(8)

The detailed process of proof is shown in the Appendix. By Lemma 2, we know that \(g(\boldsymbol{x})=f(\boldsymbol{x})-f(\boldsymbol{x}^0)\) is a k-submodular function. Then applying formula (3) for the current solution \(\boldsymbol{x}^t\) and constructed solution \(\hat{\boldsymbol{x}}^t\), we can get

$$\begin{aligned} \begin{aligned}&g(\boldsymbol{x}^*)\le 6[g(\boldsymbol{x}^t)+\frac{(B-w_{\boldsymbol{x}^0})}{2}\rho _{t+1}]. \end{aligned} \end{aligned}$$
(9)

for all \(t\in \{1,\ldots ,t^{*}\}\). The detailed process of proof is shown in the Appendix. We introduce a construction method inspired by K. K. Sarpatwar [16]. Its details are still in the Appendix. Due to the construction method, we can get

$$\begin{aligned} \begin{aligned}&\frac{g((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))}{g(\boldsymbol{x}^*)} \ge \frac{1}{6}(1-e^{-2}). \end{aligned} \end{aligned}$$
(10)

Then, combining (8) and (10), we have

$$\begin{aligned} \begin{aligned}&f(\boldsymbol{x}^{t^*})\\&=f(\boldsymbol{x}^0)+g(\boldsymbol{x}^{t^*})\\&= f(\boldsymbol{x}^0)+g((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))\\&-[g((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))-g(\boldsymbol{x}^{t^*})]\\&= f(\boldsymbol{x}^0)+g((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))\\&-[f((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))-f(\boldsymbol{x}^{t^*})]\\&\ge f(\boldsymbol{x}^0)+\frac{1}{6}(1-e^{-2})g(\boldsymbol{x}^*)-\frac{2}{3}f(\boldsymbol{x}^0)\\&\ge \frac{1}{6}(1-e^{-2})f(\boldsymbol{x}^*). \end{aligned} \end{aligned}$$
(11)

Therefore, we have a \(\frac{1}{6}(1-e^{-2})\)-approximate solution \(\boldsymbol{x}^{t^*}\) for MK-KM.

4 Analysis for Monotone k-Submodular Maximization with a Knapsack and a Matroid Constraint

A function f is said to be monotone if \(f(\boldsymbol{x})\le f(\boldsymbol{y})\) for any \(\boldsymbol{x}\preceq \boldsymbol{y}\). It is easy to see that a monotone f must be pairwise monotone. Therefore, a monotone function \(f:(k+1)^G\rightarrow R\) is k-submodular if and only if it is orthant submodular. In this section, we introduce a special construction method inspired by Lan N. Nguyen [12] and obtain a better approximation ratio via the MK-KM' algorithm.

For a fixed iteration t, recall that \((v_j,i_j)=\boldsymbol{x}^t_{j}\backslash \boldsymbol{x}^t_{j-1}\), and let \((v_j,i_*)\preceq \boldsymbol{x}^*\). We construct two sequences \(\{\boldsymbol{o}_{j-1/2}\}\) and \(\{\boldsymbol{o}_j\}\) such that \(\boldsymbol{o}_{j-1/2} =(\boldsymbol{x}^*~\sqcup ~ \boldsymbol{x}_{j}^t)~\sqcup ~\boldsymbol{x}_{j-1}^t\) and \(\boldsymbol{o}_{j} =(\boldsymbol{x}^*~\sqcup ~\boldsymbol{x}_{j}^t)~\sqcup ~\boldsymbol{x}_j^t\), where \(j\in \{1, \dots ,|U(\boldsymbol{x}^t)|-2\}\) and \(\boldsymbol{o}_{0}=\boldsymbol{x}^*\).

Note that \(\boldsymbol{x}^t_{j-1}\preceq \boldsymbol{x}^t_j\preceq \boldsymbol{o}_j\) and \( \boldsymbol{o}_{j-1/2}\preceq \boldsymbol{o}_j\). By Lemma 2, we know that \(g(\boldsymbol{x})=f(\boldsymbol{x})-f(\boldsymbol{x}^0)\) is a monotone k-submodular function. Then for any \(j\in \{1, \dots ,|U(\boldsymbol{x}^t)|-2\}\), we have

$$\begin{aligned} g(\boldsymbol{o}_{j-1})-g(\boldsymbol{o}_j)\le g(\boldsymbol{o}_{j-1})-g(\boldsymbol{o}_{j-1/2})\le g(\boldsymbol{x}^t_{j})-g(\boldsymbol{x}^t_{j-1}). \end{aligned}$$
(12)

The first inequality is due to monotonicity and \( \boldsymbol{o}_{j-1/2}\preceq \boldsymbol{o}_j\). When \(v_j\notin U(\boldsymbol{x}^*)\) or \(v_j\in U(\boldsymbol{x}^*)\) with \(i_j=i_*\), we have \(g(\boldsymbol{o}_{j-1})-g(\boldsymbol{o}_{j-1/2})\le 0\) by monotonicity. When \(v_j\in U(\boldsymbol{x}^*)\) and \(i_j\ne i_*\), we have \(g(\boldsymbol{o}_{j-1})-g(\boldsymbol{o}_{j-1/2})\ge 0\). Using orthant submodularity, we get the following inequality.

$$\begin{aligned} g(\boldsymbol{o}_{j-1})-g(\boldsymbol{o}_{j-1/2}) \le g(\boldsymbol{x}^t_{j-1}\sqcup (v_j,i_*))-g(\boldsymbol{x}^t_{j-1}) \end{aligned}$$
(13)

Then by greedy choice, the inequality (12) holds.

Theorem 2

By the MK-KM' algorithm, a \(\frac{1}{3}(1-e^{-3})\)-approximate solution of problem (1) can be obtained if f is monotone.

Proof

Similarly to Theorem 1, we analyze the algorithm in two cases. When we obtain the output solution \(\boldsymbol{x}\), there is no qualified swap(a, b) left to update \(\boldsymbol{x}\). We only consider the swaps(y(b), b) in \(S(U(\boldsymbol{x}\backslash \boldsymbol{x}^0))\) related to \(b\in U(\boldsymbol{x}^*)\backslash U(\boldsymbol{x})\) instead of all candidate swaps(a, b).

Case 1: Consider the special case in which every swap(y(b), b) was rejected solely because \(\rho (y(b),b)\le 0\), rather than because of the knapsack constraint.

For the optimal solution \(\boldsymbol{x}^*\) and the output solution \(\boldsymbol{x}\), we construct two sequences \(\{\boldsymbol{o}_{j-1/2}\}\) and \(\{\boldsymbol{o}_{j}\}\), where \(j\in \{1, \dots ,|U(\boldsymbol{x})|-2\}\). Summing (12) for j from 1 to \(|U(\boldsymbol{x})|-2\), we have

$$\begin{aligned} \begin{aligned} g(\boldsymbol{x}^*)-g(\boldsymbol{o}_{|U(\boldsymbol{x})|-2})&= \sum ^{|U(\boldsymbol{x})|-2}_{j=1} [g(\boldsymbol{o}_{j-1})-g(\boldsymbol{o}_j)]\\&\le \sum ^{|U(\boldsymbol{x})|-2}_{j=1} [g(\boldsymbol{x}_{j})-g(\boldsymbol{x}_{j-1})]\\&=g(\boldsymbol{x}). \end{aligned} \end{aligned}$$
(14)

Using Lemma 1, orthant submodularity and \(\rho (y(b),b)\le 0\), we get

$$\begin{aligned} \begin{aligned} g(\boldsymbol{x}^*)&\le g(\boldsymbol{o}_{|U(\boldsymbol{x})|-2})+ g(\boldsymbol{x})\\&\le g(\boldsymbol{x}) + \sum \limits _{(b,i)\preceq (\boldsymbol{o}_{|U(\boldsymbol{x})|-2}\backslash \boldsymbol{x})} [g(\boldsymbol{x}\sqcup (b,i))-g(\boldsymbol{x})] + g(\boldsymbol{x})\\&\le 2g(\boldsymbol{x}) + \sum \limits _{(b,i)\preceq (\boldsymbol{o}_{|U(\boldsymbol{x})|-2}\backslash \boldsymbol{x})} [g((\boldsymbol{x}\backslash (y(b),j))\sqcup (b,i))-g(\boldsymbol{x}\backslash (y(b),j))] \\&\le 2g(\boldsymbol{x}) + \sum \limits _{(b,i)\preceq (\boldsymbol{o}_{|U(\boldsymbol{x})|-2}\backslash \boldsymbol{x})} [g(\boldsymbol{x})-g(\boldsymbol{x}\backslash (y(b),j))]. \\ \end{aligned} \end{aligned}$$
(15)

Let \(\{(y(b),j)\}_{b\in U(\boldsymbol{o}_{|U(\boldsymbol{x})|-2}\backslash \boldsymbol{x})} \backslash \{\emptyset \}=\{(y_1,j_1),\dots ,(y_K,j_K)\}\); then we have

$$\begin{aligned} \begin{aligned} g(\boldsymbol{x}^*)&\le 2g(\boldsymbol{x}) + \sum _{l=1}^{K} [g((y_1,j_1)\sqcup \dots \sqcup (y_{l},j_{l}))-g((y_1,j_1)\sqcup \dots \sqcup (y_{l-1},j_{l-1}))]\\&= 2g(\boldsymbol{x}) + g((y_1,j_1)\sqcup \dots \sqcup (y_{K},j_{K}))\\&\le 3g(\boldsymbol{x}). \end{aligned} \end{aligned}$$
(16)

Therefore,

$$\begin{aligned} \begin{aligned} f(\boldsymbol{x}^*) \le 3f(\boldsymbol{x})-2f(\boldsymbol{x}^0)\le 3f(\boldsymbol{x}). \end{aligned} \end{aligned}$$
(17)

We obtain a 1/3 approximation ratio in Case 1.

Case 2: Consider the opposite of case 1 that there exists at least one swap(y(b), b) satisfying \(w_{\boldsymbol{x}}-w_{y(b)}+ w_b > B\).

For the first time, a swap \((y(b_*),b_*)\) appears in \(S(U(\boldsymbol{x}^{t^*}\backslash \boldsymbol{x}^0))\) such that \(w_{\boldsymbol{x}^{t^*}}-w_{y(b_*)}+ w_{b_*} > B\), where \(b_*\in U(\boldsymbol{x}^*)\backslash U(\boldsymbol{x}^{t^*})\) and \(y(b_*)\in (U(\boldsymbol{x}^{t^*})\backslash U(\boldsymbol{x}^*))\cup \{\emptyset \}\). For each \(t\in \{1,\dots ,t^*\}\), we construct two sequences \(\{\boldsymbol{o}_{j-1/2}\}\) and \(\{\boldsymbol{o}_{j}\}\) between \(\boldsymbol{x}^t\) and \(\boldsymbol{x}^*\), where \(j\in \{1, \dots ,|U(\boldsymbol{x}^t)|-2\}\). Summing (12) for j from 1 to \(|U(\boldsymbol{x}^{t})|-2\) and using Lemma 1, we have

$$\begin{aligned} \begin{aligned} g(\boldsymbol{x}^*)&\le g(\boldsymbol{o}_{|U(\boldsymbol{x}^t)|-2})+ g(\boldsymbol{x}^t)\\&\le g(\boldsymbol{x}^t) + \sum \limits _{(b,i)\preceq (\boldsymbol{o}_{|U(\boldsymbol{x}^t)|-2}\backslash \boldsymbol{x}^t)} [g(\boldsymbol{x}^t\sqcup (b,i))-g(\boldsymbol{x}^t)] + g(\boldsymbol{x}^t).\\ \end{aligned} \end{aligned}$$
(18)

Then applying (18) and arguments similar to those for (3) and (6), we can get

$$\begin{aligned} \begin{aligned} g(\boldsymbol{x}^*)&\le 3g(\boldsymbol{x}^t) + (B-w_{\boldsymbol{x}^0})\rho _{t+1},\\ \end{aligned} \end{aligned}$$
(19)

for all \(t\in \{1,\ldots ,t^{*}\}\). The detailed process of proof is shown in the Appendix. Similar to the proof of (10), using (19), we can get

$$\begin{aligned} \begin{aligned}&\frac{g((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))}{g(\boldsymbol{x}^*)} \ge \frac{1}{3}(1-e^{-3}). \end{aligned} \end{aligned}$$
(20)

We modify inequality (8) as follows. By orthant submodularity, monotonicity, and the greedy choices of \(\boldsymbol{x}^{\alpha }\) and \(\boldsymbol{x}^{\beta }\), we have

$$\begin{aligned} \begin{aligned}&f((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))-f(\boldsymbol{x}^{t^*}) \le \frac{f(\boldsymbol{x}^0)}{2}. \end{aligned} \end{aligned}$$
(21)

The detailed process of proof is shown in the Appendix. Combining (20) and (21), we have

$$\begin{aligned} \begin{aligned}&f(\boldsymbol{x}^{t^*})\\&= f(\boldsymbol{x}^0)+g((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))\\&-[f((\boldsymbol{x}^{t^*}\setminus (y(b_*),j_{y(b_*)}))\sqcup (b_*,i_{b_*}))-f(\boldsymbol{x}^{t^*})]\\&\ge f(\boldsymbol{x}^0)+\frac{1}{3}(1-e^{-3})g(\boldsymbol{x}^*)-\frac{f(\boldsymbol{x}^0)}{2}\\&\ge \frac{1}{3}(1-e^{-3})f(\boldsymbol{x}^*). \end{aligned} \end{aligned}$$
(22)

Hence, MK-KM’ has an approximation ratio of at least \(\frac{1}{3}(1-e^{-3})\).

5 Discussion

To summarize, inspired by [16] and [18], we propose a nested algorithm for both monotone and non-monotone k-submodular maximization under the intersection of a knapsack constraint and a matroid constraint. For problem (1), we achieve a \(\frac{1}{6}(1-e^{-2})\) approximation ratio. Inspired by [12], we use a new construction method relating the optimal solution and the current solution; for monotone k-submodular maximization with a knapsack and a matroid constraint, we achieve an approximation ratio of at least \(\frac{1}{3}(1-e^{-3})\).