1 Introduction

Given a ground set G of n elements and \(k\in N_+\), we call \((X_1,\ldots ,X_k)\) k disjoint subsets if \(X_i\subseteq G\) for all \(i\in [k]\) and \(X_i\cap X_j=\emptyset \) for all \(i\ne j\in [k]\); we write \((k+1)^{G}\) for the family of all k disjoint subsets. For any \(\textbf{x}=(X_1,\ldots ,X_k)\) and \(\textbf{y}=(Y_1,\ldots ,Y_k)\) in \((k+1)^{G}\), define the join and meet operations by

$$\begin{aligned}{} & {} \textbf{x}\sqcup \textbf{y}:=(X_1\cup Y_1\setminus (\bigcup _{i\ne 1}X_i\cup Y_i),\ldots ,X_k\cup Y_k\setminus (\bigcup _{i\ne k}X_i\cup Y_i)),\\{} & {} \textbf{x}\sqcap \textbf{y}:= (X_1\cap Y_1,\ldots ,X_k\cap Y_k).\end{aligned}$$

The join operation removes the points assigned to different positions in \(\textbf{x}\) and \(\textbf{y}\), i.e., points v with \(v\in X_i\) and \(v\in Y_j\) for some \(i\ne j\in [k]\); the meet operation is simply a coordinatewise intersection of sets.
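For intuition, the two operations can be sketched in Python; the tuple-of-sets encoding of k disjoint subsets is our own illustration, not part of the formal development:

```python
from typing import Set, Tuple

KTuple = Tuple[Set[int], ...]  # k disjoint subsets, one set per position

def join(x: KTuple, y: KTuple) -> KTuple:
    # position i keeps X_i ∪ Y_i minus every point placed elsewhere in x or y
    k = len(x)
    return tuple(
        (x[i] | y[i]) - set().union(set(), *((x[j] | y[j]) for j in range(k) if j != i))
        for i in range(k)
    )

def meet(x: KTuple, y: KTuple) -> KTuple:
    # coordinatewise intersection
    return tuple(x[i] & y[i] for i in range(len(x)))
```

For example, with \(k=2\), \(\textbf{x}=(\{1,2\},\{3\})\) and \(\textbf{y}=(\{3\},\{4\})\), the point 3 sits in different positions and is dropped by the join: the join is \((\{1,2\},\{4\})\) and the meet is \((\emptyset ,\emptyset )\).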

A function \(f:(k+1)^{G}\rightarrow R\) is said to be k-submodular (Huber and Kolmogorov 2012) if

$$\begin{aligned} f(\textbf{x})+f(\textbf{y}) \ \ge \ f(\textbf{x}\sqcup \textbf{y})+f(\textbf{x}\sqcap \textbf{y}), \end{aligned}$$

for any \(\textbf{x}\) and \(\textbf{y}\) in \((k+1)^{G}\). A k-submodular function generalizes a submodular function: its domain is the family of k disjoint subsets rather than simple subsets, and when \(k=1\) a k-submodular function is exactly a submodular function.
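As a sanity check, the inequality can be verified exhaustively on a tiny ground set for the standard monotone k-submodular function \(f(\textbf{x})=|U(\textbf{x})|\) (the encoding and the choice of f are our own illustration):

```python
from itertools import product

def all_ktuples(ground, k):
    # each element gets a label in {0,...,k}; label 0 means "not used"
    for labels in product(range(k + 1), repeat=len(ground)):
        yield tuple({v for v, lab in zip(ground, labels) if lab == i + 1}
                    for i in range(k))

def join(x, y):
    k = len(x)
    return tuple((x[i] | y[i])
                 - set().union(set(), *((x[j] | y[j]) for j in range(k) if j != i))
                 for i in range(k))

def meet(x, y):
    return tuple(x[i] & y[i] for i in range(len(x)))

def f(x):  # f(x) = |U(x)|, a standard monotone k-submodular example
    return len(set().union(*x))

ground, k = [1, 2, 3], 2
assert all(f(x) + f(y) >= f(join(x, y)) + f(meet(x, y))
           for x in all_ktuples(ground, k)
           for y in all_ktuples(ground, k))
```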

1.1 Related work

There have been many results on the monotone submodular maximization problem. Nemhauser et al. (1978) first gave a greedy \((1-1/e)\)-approximation algorithm under a cardinality constraint, a bound known to be tight. Later, Sviridenko (2004) designed a combinatorial \((1-1/e)\)-approximation algorithm under a knapsack constraint. For the same problem, Ene and Nguyen (2019) obtained an approximation ratio of \((1-1/e-\varepsilon )\) using the multilinear extension, with nearly linear running time. Under a matroid constraint, Calinescu et al. (2011) achieved an approximation ratio of \((1-1/e)\) using the continuous greedy method and the pipage rounding technique. Filmus and Ward (2014) designed a combinatorial algorithm based on local search that also achieves a ratio of \((1-1/e)\). More recently, Sarpatwar et al. (2019) gave an algorithm with an approximation ratio of \(\frac{1-e^{-(m+1)}}{m+1}\), combining greedy and local search techniques, for maximizing a submodular function subject to the intersection of a knapsack and m matroid constraints. For maximizing non-monotone submodular functions, Lee et al. (2010) presented a \((\frac{1}{m+2+\frac{1}{m}+\varepsilon })\)-approximation algorithm under m matroid constraints and a \((\frac{1}{5}-\varepsilon )\)-approximation algorithm under m knapsack constraints. Feldman et al. (2011) and Chekuri et al. (2014) studied constant-factor approximation algorithms for maximizing the multilinear extension of a submodular function over a down-closed polytope, where the fractional solution can be rounded via contention resolution schemes. For more references on submodular maximization, see Bian et al. (2017); Calinescu et al. (2011); Ene and Nguyen (2019); Feldman and Naor (2013); Filmus and Ward (2014); Huang et al. (2022); Liu et al. (2022b); Sviridenko (2004); Yoshida (2019).

As a generalization of the submodular function, the k-submodular function retains the diminishing-returns property, with the domain extended from the family of simple subsets to the family of k disjoint subsets. Many practical applications can be modeled as k-submodular maximization. Ohsaka and Yoshida (2015) studied influence maximization with k topics and sensor placement with k kinds of sensors, both as k-submodular maximization under a size constraint. Rafiey and Yoshida (2020) applied k-submodular maximization to facility location.

In recent years, much research on k-submodular maximization has emerged. For k-submodular maximization without a monotonicity assumption, Ward and Zivny (2014) studied the unconstrained problem and gave a deterministic greedy algorithm and a randomized greedy algorithm achieving approximation ratios of 1/3 and \(\frac{1}{1+a}\) with \(a=\max \{1, \sqrt{\frac{k-1}{4}}\}\), respectively. The ratio was later improved to 1/2 by Iwata et al. (2016), and Oshima (2021) gave a \(\frac{k^2+1}{2k^2+1}\)-approximation algorithm. For monotone k-submodular maximization, Ward and Zivny (2014) showed a 1/2-approximation algorithm without constraints, which Iwata et al. (2016) improved to \(k/(2k-1)\), an asymptotically tight bound. Ohsaka and Yoshida (2015) introduced a construction method between the current solution and an optimal solution to obtain a 1/2-approximation under a total size constraint. Using a similar construction, Sakaue (2017) achieved a 1/2-approximation under a matroid constraint. Tang et al. (2022) gave a \(\frac{1}{2}(1-e^{-1})\)-approximation algorithm under a knapsack constraint, and Xiao et al. found that this result could be improved to \(\frac{1}{2}(1-e^{-2})\). Recently, Liu et al. (2022a) designed a nested greedy and local search \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\)-approximation algorithm for monotone k-submodular maximization subject to the intersection of a knapsack and m matroid constraints.

1.2 Our contributions

In this paper, we consider k-submodular maximization subject to the intersection of a knapsack and m matroid constraints, and discuss the monotone and non-monotone cases respectively. The main contributions of this paper are as follows:

  • We improve the approximation ratio for the monotone k-submodular maximization problem under the intersection of a knapsack and m matroid constraints from \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\) in Liu et al. (2022a) to \(\frac{1}{m+2}(1-e^{-(m+2)})\). In the theoretical analysis, we no longer rely on the guarantee of the greedy algorithm for the unconstrained k-submodular maximization problem; instead, we use the properties of k-submodular functions to obtain the new result. Note that for \(m=1\) our ratio becomes \(\frac{1}{3}(1-e^{-3})\), which improves the \(\frac{1}{4}(1-e^{-2})\) bound of Liu et al. (2022a) for the intersection of a knapsack and a matroid constraint.

  • We extend the approximation algorithm to the non-monotone case. By increasing the number of enumerated points in the algorithm and using the pairwise monotonicity property, we achieve a \(\frac{1}{m+3}(1-e^{-(m+3)})\) approximation ratio. In particular, this yields a \(\frac{1}{4}(1-e^{-4})\) approximation ratio for the non-monotone k-submodular maximization problem under the intersection of a knapsack and a matroid constraint.

1.3 Organization

The paper is organized as follows. In Sect. 2, we introduce notation, properties and some basic results about k-submodular functions. In Sect. 3, we present and explain the nested greedy and local search algorithm. In Sects. 4 and 5, we give the theoretical analysis and main results for the monotone and non-monotone cases, respectively.

2 Preliminaries

2.1 k-Submodular function

In this paper, we set \(k\ge 2\) with \(k\in N_+\), since a k-submodular function is simply a submodular function when \(k=1\). For any two k disjoint subsets \(\textbf{x},~\textbf{y}\in (k+1)^{G}\), we introduce a remove operation and a partial order:

$$\begin{aligned}\textbf{x}\setminus \textbf{y}:= (X_1\setminus Y_1,\ldots ,X_k\setminus Y_k),\end{aligned}$$

and \(\textbf{x}\preceq \textbf{y}\) if \(X_i\subseteq Y_i\) for all \(i\in [k]\).

Define one-item \(\textbf{1}_{v,i}:=(X_1,\dots ,X_k)\), where \(X_i=\{v\}\) and \(X_{j\ne i}=\emptyset \), and empty-item \(\textbf{0}:=(\emptyset ,\dots ,\emptyset )\). Denote the support set \(U(\textbf{x}):=\bigcup _{i=1}^{k}X_i\).

A function \(f:(k+1)^G\rightarrow R\) is said to be monotone if, for any \(\textbf{x}\in (k+1)^G\), \(v\in G\setminus U(\textbf{x})\) and \(i\in [k]\), its marginal gain satisfies:

$$\begin{aligned} \begin{aligned} f_{\textbf{x}}(\textbf{1}_{v,i})=f(\textbf{x}\sqcup \textbf{1}_{v,i})-f(\textbf{x})\ge 0. \end{aligned} \end{aligned}$$

From Ohsaka and Yoshida (2015), f is pairwise monotone if

$$\begin{aligned} f_{\textbf{x}}(\textbf{1}_{v,i})+f_{\textbf{x}}(\textbf{1}_{v,j})\ge 0, \end{aligned}$$

for any \(\textbf{x}\in (k+1)^G\), \(v\in G\setminus U(\textbf{x})\) and \(i\ne j\in [k]\). Moreover, f is orthant submodular if

$$\begin{aligned} f_{\textbf{x}}(\textbf{1}_{v,i})\ge f_{\textbf{y}}(\textbf{1}_{v,i}), \end{aligned}$$

for any \(\textbf{x}\preceq \textbf{y}\in (k+1)^G\), \(v\in G{\setminus } U(\textbf{y})\) and \(i\in [k]\). As stated below, the k-submodular function has a well-known equivalent characterization (Ward and Zivny 2014).

Definition 1

A function \(f:(k+1)^G\rightarrow R\) is k-submodular iff it is pairwise monotone and orthant submodular.

Obviously, the monotonicity of f implies pairwise monotonicity; hence, for a monotone function \(f:(k+1)^G\rightarrow R\), k-submodularity is equivalent to orthant submodularity. In addition, a k-submodular function has the following useful property (Ohsaka and Yoshida 2015).
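Definition 1 can likewise be checked by brute force on small instances. The sketch below tests pairwise monotonicity and orthant submodularity for the monotone function \(f(\textbf{x})=\min \{|U(\textbf{x})|,2\}\); the function and the encoding are illustrative choices of ours:

```python
from itertools import product

def all_ktuples(ground, k):
    for labels in product(range(k + 1), repeat=len(ground)):
        yield tuple({v for v, lab in zip(ground, labels) if lab == i + 1}
                    for i in range(k))

def support(x):
    return set().union(*x)

def add_item(x, v, i):  # x ⊔ 1_{v,i}, assuming v lies outside U(x)
    return tuple(x[j] | {v} if j == i else x[j] for j in range(len(x)))

def marginal(f, x, v, i):  # the marginal gain f_x(1_{v,i})
    return f(add_item(x, v, i)) - f(x)

def f(x):  # truncated cardinality: concave in |U(x)|, hence k-submodular
    return min(len(support(x)), 2)

ground, k = [1, 2, 3], 2
pairwise_monotone = all(
    marginal(f, x, v, i) + marginal(f, x, v, j) >= 0
    for x in all_ktuples(ground, k)
    for v in set(ground) - support(x)
    for i in range(k) for j in range(k) if i != j)
orthant_submodular = all(
    marginal(f, x, v, i) >= marginal(f, y, v, i)
    for x in all_ktuples(ground, k)
    for y in all_ktuples(ground, k)
    if all(x[t] <= y[t] for t in range(k))
    for v in set(ground) - support(y)
    for i in range(k))
assert pairwise_monotone and orthant_submodular
```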

Lemma 1

Given a k-submodular function f, we have

$$\begin{aligned}f(\textbf{y})-f(\textbf{x})\le \sum \limits _{\textbf{1}_{v,i}\preceq \textbf{y}\backslash \textbf{x}}f_{\textbf{x}}(\textbf{1}_{v,i}),\end{aligned}$$

for any \(\textbf{x}, \textbf{y}\in (k+1)^G\) and \(\textbf{x}\preceq \textbf{y}\).

Given fixed k disjoint subsets \(\textbf{y}\in (k+1)^G\), define the family \(D(\textbf{y}):=\{\textbf{x}\in (k+1)^G~|~\textbf{y}\preceq \textbf{x}\}\). In the later analysis, we construct a function \(g: D(\textbf{y})\rightarrow R\) that temporarily hides \(\textbf{y}\) by setting \(g(\textbf{x})=f(\textbf{x})-f(\textbf{y})\); the next lemma records that g is still k-submodular.

Lemma 2

Given a k-submodular function \(f: (k+1)^{G}\rightarrow R\) and \(\textbf{y}\in (k+1)^G\), then \(g(\textbf{x})=f(\textbf{x})-f(\textbf{y}): D(\textbf{y})\rightarrow R\) is a k-submodular function and \(g(\textbf{y})=0\).

2.2 Knapsack and matroid constraints

Given \(\mathcal {L}\subseteq 2^G\), a pair \((G,\mathcal {L})\) is an independence system if \((\mathcal {M}1)\) and \((\mathcal {M}2)\) below hold, and a set A is an independent set if \(A\in \mathcal {L}\). Further, an independence system \((G,\mathcal {L})\) is said to be a matroid if \((\mathcal {M}3)\) also holds.

Definition 2

Given \(\mathcal {L}\subseteq 2^G\), a pair \(\mathcal {M}=(G,\mathcal {L})\) is a matroid if

\((\mathcal {M}1)\): \(\emptyset \in \mathcal {L}\).

\((\mathcal {M}2)\): \(A\subseteq B\) and \(B\in \mathcal {L}\) \(\Longrightarrow \) \(A\in \mathcal {L}\).

\((\mathcal {M}3)\): \(A,B\in \mathcal {L}\) and \(\mid A\mid >\mid B\mid \) \(\Longrightarrow \) \(\exists ~ v\in A\backslash B\), s.t. \(B\cup \{v\}\in \mathcal {L}\).
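The three axioms can be checked mechanically for small ground sets. The sketch below does so via an independence oracle, using a rank-2 uniform matroid and a deliberately broken family as examples (the oracle-based encoding is our own illustration):

```python
from itertools import combinations

def check_matroid_axioms(ground, indep):
    # indep: an independence oracle; indep(A) is True iff A ∈ L
    subsets = [frozenset(c) for n in range(len(ground) + 1)
               for c in combinations(sorted(ground), n)]
    L = [A for A in subsets if indep(A)]
    m1 = frozenset() in L                                   # (M1)
    m2 = all(B in L for A in L for B in subsets if B <= A)  # (M2)
    m3 = all(any(B | {v} in L for v in A - B)               # (M3)
             for A in L for B in L if len(A) > len(B))
    return m1 and m2 and m3

uniform = lambda A: len(A) <= 2             # rank-2 uniform matroid: all axioms hold
broken = lambda A: A <= {1} or A == {2, 3}  # violates (M2): {2,3} ∈ L but {2} ∉ L
assert check_matroid_axioms({1, 2, 3}, uniform)
assert not check_matroid_axioms({1, 2, 3}, broken)
```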

For \(m\in N_+\) and each \(j\in [m]\), let \(\mathcal {L}_j\) be a collection of independent sets such that \(\mathcal {M}_j = (G, \mathcal {L}_j)\) is a matroid. Let B be a nonnegative budget and let each element \(v\in G\) carry a nonnegative weight \(w_v\). Without loss of generality, we assume that the \(w_v\) and B are integers; otherwise, we can scale them to integers by a common factor. Let \(w_{\textbf{x}}=\sum \limits _{v\in U(\textbf{x})}w_v\). The k-submodular maximization problem with the intersection of a knapsack and m matroid constraints is

$$\begin{aligned} \max _{\textbf{x}\in (k+1)^G}\{f(\textbf{x})\mid w_{\textbf{x}}\le B ~\textrm{and} ~ U(\textbf{x})\in \bigcap _{j=1}^{m}\mathcal {L}_j \}. \end{aligned}$$
(1)
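On tiny instances, problem (1) can be solved exactly by enumeration, which is useful for building intuition and for testing heuristics; all names below are our own illustration:

```python
from itertools import product

def all_ktuples(ground, k):
    for labels in product(range(k + 1), repeat=len(ground)):
        yield tuple({v for v, lab in zip(ground, labels) if lab == i + 1}
                    for i in range(k))

def support(x):
    return set().union(*x)

def brute_force_opt(f, ground, k, w, B, matroids):
    # matroids: list of independence oracles; feasibility as in problem (1)
    best, best_val = None, float("-inf")
    for x in all_ktuples(ground, k):
        S = support(x)
        if sum(w[v] for v in S) <= B and all(ind(S) for ind in matroids):
            val = f(x)
            if val > best_val:
                best, best_val = x, val
    return best, best_val

f = lambda x: len(support(x))
w = {1: 1, 2: 1, 3: 2}
x_star, opt = brute_force_opt(f, [1, 2, 3], 2, w, B=2,
                              matroids=[lambda S: len(S) <= 2])
assert opt == 2  # e.g. take elements 1 and 2
```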

For any \(A\subseteq G\), we use \([A]^{m}\) to denote the collection of subsets of A of size at most m. Given an independent set \(A\in \bigcap _{j=1}^{m}\mathcal {L}_j\) and a pair \((\bar{a},b)\) with \(\bar{a}\in [A]^{m}\) and \(b\in G\backslash A\), we call \((\bar{a},b)\) an m-swap if \((A\backslash \bar{a})\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\). The next lemma ensures that some m-swap \((\bar{a},b)\) exists between two independent sets; its detailed proof is given by Sarpatwar et al. (2019).

Lemma 3

Given two independent sets \(A,B\in \bigcap _{j=1}^{m}\mathcal {L}_j\), we can construct a mapping \(y: B\backslash A\rightarrow [A\backslash B]^{m}\) such that \((A\backslash \bar{a})\cup \{b\} \in \bigcap _{j=1}^{m}\mathcal {L}_j\) for every \(b\in B \backslash A\) with \(\bar{a}=y(b)\in [A\backslash B]^{m}\), and each element \(a\in A\backslash B\) appears in the image of y no more than m times.
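An m-swap is easy to state as a predicate; the sketch below (our own encoding) checks the defining conditions against a list of independence oracles:

```python
def is_m_swap(A, a_bar, b, matroids, m):
    # (a_bar, b) is an m-swap for A: a_bar ⊆ A with |a_bar| <= m, b ∉ A,
    # and (A \ a_bar) ∪ {b} independent in every matroid
    return (a_bar <= A and len(a_bar) <= m and b not in A
            and all(ind((A - a_bar) | {b}) for ind in matroids))

rank2 = [lambda S: len(S) <= 2]  # a single rank-2 uniform matroid
assert is_m_swap({1, 2}, {1}, 3, rank2, m=1)        # swap 1 out, 3 in
assert not is_m_swap({1, 2}, set(), 3, rank2, m=1)  # {1,2,3} exceeds rank 2
```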

The following Lemma 4 (Nemhauser et al. 1978) will be used in the later proofs.

Lemma 4

Given fixed \(P,D\in N_+\) and a sequence of nonnegative real numbers \(\{\gamma _i\}_{i\in [P]}\), we have

$$\begin{aligned}{} & {} \frac{\sum _{i=1}^P \gamma _i}{\min _{t\in [P]}(\sum _{i=1}^{t-1}\gamma _i+D\gamma _t)}\nonumber \\{} & {} \quad \ge 1-(1-\frac{1}{D})^P \ge 1-e^{-P/D}. \end{aligned}$$
(2)
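Inequality (2) can be spot-checked numerically on random positive sequences; this is purely a sanity check of the statement, with helper names of our own:

```python
import math
import random

def ratio(gammas, D):
    # left-hand side of (2): sum of the sequence over the minimized denominator
    P = len(gammas)
    denom = min(sum(gammas[:t - 1]) + D * gammas[t - 1] for t in range(1, P + 1))
    return sum(gammas) / denom

random.seed(0)
for _ in range(200):
    P, D = random.randint(1, 6), random.randint(1, 6)
    gammas = [random.uniform(0.01, 1.0) for _ in range(P)]
    lower = 1 - (1 - 1 / D) ** P
    assert ratio(gammas, D) >= lower - 1e-12
    assert lower >= 1 - math.exp(-P / D) - 1e-12
```

For \(P=1\) the bound is tight: the ratio equals \(1/D=1-(1-1/D)^1\).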

3 Algorithm overview

3.1 Greedy algorithm

First, we introduce the Greedy Algorithm \((f, G)\) from Ward and Zivny (2014). By Definition 1, the k-submodularity of f implies pairwise monotonicity, that is, \(f_{\textbf{x}}(\textbf{1}_{v,i})+f_{\textbf{x}}(\textbf{1}_{v,j})\ge 0\) for any \(\textbf{x}\in (k+1)^G\), \(v\notin U(\textbf{x})\) and \(i\ne j\in [k]\). Hence there are no two positions \(i\ne j\in [k]\) such that \(f_{\textbf{x}}(\textbf{1}_{v,i})<0\) and \(f_{\textbf{x}}(\textbf{1}_{v,j})<0\) both hold. For the unconstrained k-submodular maximization problem, there is always an optimal solution \(\textbf{x}^*\) satisfying \(U(\textbf{x}^*)=G\). The input of Greedy Algorithm \((f, G)\) is a set G with a fixed order on its points, say \(G=\{v_1,\dots ,v_{|G|}\}\). Each current solution \(\textbf{x}_l\) is obtained from \(\textbf{x}_{l-1}\) by adding \(v_{l}\in G\backslash U(\textbf{x}_{l-1})\) at a greedily chosen position \(i_{l}\in [k]\), for \(l=1,\ldots ,|G|\).

[Algorithm 1: Greedy Algorithm \((f, G)\); figure omitted]
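A minimal runnable sketch of the greedy scan just described, with k disjoint subsets encoded as a tuple of sets (our own illustration of Algorithm 1, not the exact listing):

```python
def add_item(x, v, i):  # x ⊔ 1_{v,i}, for v outside U(x)
    return tuple(x[j] | {v} if j == i else x[j] for j in range(len(x)))

def greedy(f, ground, k):
    # scan G in its fixed order; put v_l at the position with the largest marginal gain
    x = tuple(set() for _ in range(k))
    for v in ground:
        i_best = max(range(k), key=lambda i: f(add_item(x, v, i)) - f(x))
        x = add_item(x, v, i_best)
    return x

f = lambda x: len(set().union(*x))  # a monotone k-submodular test function
result = greedy(f, [1, 2, 3], k=2)
assert set().union(*result) == {1, 2, 3}  # every point is placed somewhere
```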

3.2 Nested greedy and local search algorithm KM-KM

Next, we present a nested greedy and local search algorithm for problem (1), inspired by Liu et al. (2022a); for brevity we call it KM-KM. If the objective function f is monotone, we choose \(\lambda =2\) in KM-KM; otherwise, we need \(\lambda \ge \frac{(m+1)(m+3)}{m+2+e^{-(m+3)}}\), as required by the proof of the approximation ratio.

KM-KM starts with \(\textbf{x}^\lambda \preceq \textbf{x}^*\), obtained by enumerating the items with the largest marginal profits, where \(\textbf{x}^*\) is an optimal solution of problem (1). If \(|U(\textbf{x}^*)|\le \lambda \), we can find \(\textbf{x}^*\) by enumerating \(\textbf{x}\in (k+1)^G\) with \(|U(\textbf{x})|\le |U(\textbf{x}^*)|\); therefore, we only consider the case \(|U(\textbf{x}^*)|>\lambda \). For a positive integer \(t\ge \lambda \), we define the t-th iteration as the process in which KM-KM finds a suitable m-swap \((\bar{a}^t,b^t)\) to update \(\textbf{x}^{t}\). Clearly \(|(U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}|=|U(\textbf{x}^{t+1}\backslash \textbf{x}^\lambda )|\). If the current m-swap \((\bar{a}^t,b^t)\) satisfies all the conditions in line 11, KM-KM performs lines 12-18 and breaks the loop of lines 9-19 to update \(S^m\) in line 8. In line 12 of KM-KM, we consider the elements in \((U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}\) and feed them to the Greedy Algorithm in the same order as in KM-KM. For \(l\in \{1,\dots ,|(U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}|\}\), Greedy Algorithm \((f(\tilde{\textbf{x}}^{t+1}\sqcup \textbf{x}^\lambda ), ~(U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\})\) reorders the positions i of the points \(v_l\in (U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}\). Define \(\tilde{\textbf{x}}^{t+1}_l\) as the current solution, so that \(\tilde{\textbf{x}}^{t+1}_l=\tilde{\textbf{x}}^{t+1}_{l-1}\sqcup \textbf{1}_{v_l,i_l}\). If the current m-swap \((\bar{a}^t,b^t)\) violates any condition in line 11, KM-KM discards it and continues with the next m-swap. Finally, KM-KM breaks all loops when \(S^m=\emptyset \) in line 9 and returns \(\textbf{x}^t\). We write T for the iteration at which KM-KM outputs \(\textbf{x}^t\); note that \(T\ge \lambda +1\).

[Algorithm 2: KM-KM; figure omitted]
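Since the exact listing of Algorithm 2 is not reproduced here, the following is only a heavily simplified, runnable illustration of the KM-KM flow for a single matroid (\(m=1\)): enumerate a small seed of at most \(\lambda \) elements, then repeatedly apply a profitable feasible swap, re-enumerating positions in place of the nested greedy call. Every name and simplification below is our own; this is not the authors' algorithm:

```python
from itertools import combinations, product

def support(x):
    return set().union(*x)

def assignments(items, k):
    # every way to place the given items into the k positions
    for labels in product(range(k), repeat=len(items)):
        yield tuple({v for v, lab in zip(items, labels) if lab == i}
                    for i in range(k))

def km_km_sketch(f, ground, k, w, B, indep, lam=2):
    feasible = lambda S: sum(w[v] for v in S) <= B and indep(S)
    # enumeration step: best feasible solution on at most lam elements
    best = tuple(set() for _ in range(k))
    for n in range(lam + 1):
        for seed in combinations(ground, n):
            for x in assignments(seed, k):
                if feasible(support(x)) and f(x) > f(best):
                    best = x
    # swap step: repeat while some feasible swap strictly improves f
    while True:
        S = support(best)
        candidates = [x
                      for b in set(ground) - S
                      for a_bar in [set()] + [{a} for a in S]
                      if feasible((S - a_bar) | {b})
                      for x in assignments(sorted((S - a_bar) | {b}), k)]
        if not candidates:
            break
        better = max(candidates, key=f)
        if f(better) <= f(best):
            break
        best = better
    return best

f = lambda x: len(support(x))
w = {v: 1 for v in [1, 2, 3, 4]}
sol = km_km_sketch(f, [1, 2, 3, 4], k=2, w=w, B=2, indep=lambda S: len(S) <= 2)
assert f(sol) == 2
```

Because f strictly increases at every accepted swap and the domain is finite, the loop terminates.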

3.3 A construction method for analysis

To carry out the approximation-ratio analysis, we introduce a construction method based on Algorithm 2. Let \(\textbf{x}^*\) denote an optimal solution of problem (1).

Fix an iteration step \(t\ge \lambda +1\) in KM-KM and \(l\in \{1, \dots ,|U(\textbf{x}^t{\setminus } \textbf{x}^\lambda )|\} \). Define \(\textbf{x}^t_l=\tilde{\textbf{x}}^t_l\sqcup \textbf{x}^\lambda \), so that \(\textbf{x}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}=\textbf{x}^t\). We further construct two sequences \(\{\textbf{o}_{l-1/2}^t\}\) and \(\{\textbf{o}_{l}^t\}\) such that \(\textbf{o}_{l-1/2}^t =(\textbf{x}^*~\sqcup ~ \textbf{x}_{l}^t)~\sqcup ~\textbf{x}_{l-1}^t\), \(\textbf{o}_{l}^t =(\textbf{x}^*~\sqcup ~\textbf{x}_{l}^t)~\sqcup ~\textbf{x}_l^t\) and \(\textbf{o}_{0}^t=\textbf{x}^*\). Note that \(\textbf{x}^t_{l-1}\preceq \textbf{x}^t_l\preceq \textbf{o}_{l}^t\), \( \textbf{o}_{l-1/2}^t\preceq \textbf{o}_{l}^t\) and \(U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)=U(\textbf{x}^*)\backslash U(\textbf{x}^t)\).

By Lemma 2, define the k-submodular function \(g(\textbf{x})=f(\textbf{x})-f(\textbf{x}^\lambda ):D(\textbf{x}^\lambda )\rightarrow R_+\). The construction yields the following conclusions, whose detailed proofs are given in the Appendix.

Lemma 5

Given a fixed iteration step \(t\ge \lambda +1\) in KM-KM and an optimal solution \(\textbf{x}^*\) of problem (1), we have:

(i) when the objective function f is monotone,

$$\begin{aligned}{} & {} g(\textbf{o}_{l-1}^t)-g(\textbf{o}_{l}^t)\le g(\textbf{x}^t_{l})-g(\textbf{x}^t_{l-1}), \end{aligned}$$
(3)
$$\begin{aligned}{} & {} g(\textbf{x}^*)\le g(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})+g(\textbf{x}^t). \end{aligned}$$
(4)

(ii) when the objective function f is non-monotone,

$$\begin{aligned}{} & {} g(\textbf{o}_{l-1}^t)-g(\textbf{o}_{l}^t)\le 2[g(\textbf{x}^t_{l})-g(\textbf{x}^t_{l-1})], \end{aligned}$$
(5)
$$\begin{aligned}{} & {} g(\textbf{x}^*)\le g(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})+2g(\textbf{x}^t). \end{aligned}$$
(6)

4 Analysis for monotone k-submodular maximization with a knapsack and m matroid constraints

In this section, we explain in detail how to obtain the approximation ratio for problem (1). Our proof framework is inspired by Sviridenko (2004); Sarpatwar et al. (2019); Liu et al. (2022a). To streamline the analysis of the approximation ratio, we first give several lemmas, whose detailed proofs are shown in the Appendix.

Lemma 6

Given a fixed iteration step \(t\ge \lambda +1\) in KM-KM and an optimal solution \(\textbf{x}^*\) for problem (1), there exists a mapping y : 

$$\begin{aligned} U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\rightarrow [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m \end{aligned}$$

such that \((U(\textbf{x}^t)\backslash \bar{y}(b))\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\), for \(~b\in U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\), \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\), and each element \(a\in U(\textbf{x}^t)\backslash U(\textbf{x}^*)\) appears in mapping y no more than m times. Then we have

$$\begin{aligned} \begin{aligned} g(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})&\le \sum \limits _{\textbf{1}_{b,i}\preceq (\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}\backslash \textbf{x}^t)} [g((\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})\sqcup \textbf{1}_{b,i})\\&\quad -g(\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})] +g(\textbf{x}^t) \end{aligned}\end{aligned}$$
(7)

and

$$\begin{aligned} \sum \limits _{\textbf{1}_{b,i}\preceq (\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}\backslash \textbf{x}^t)} [g(\textbf{x}^t)-g(\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})] \le mg(\textbf{x}^t). \end{aligned}$$
(8)

for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \).

Assume that, while KM-KM runs, there exists an m-swap \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) satisfying \(w_{\textbf{x}^T}-w_{\bar{y}(b)}+ w_b > B\). Let \(t^*+1\) be the first iteration in which an m-swap \((\bar{y}(b^{t^*}),b^{t^*})\) in \(S^m(U(\textbf{x}^{t^*}))\setminus \{m\)-swap \( (\bar{a},b)~|~\bar{a}\cap U(\textbf{x}^\lambda )\ne \emptyset \}\) violates \(w_{\textbf{x}^{t^*}}-w_{\bar{y}(b^{t^*})}+ w_{b^{t^*}} \le B\), where \(b^{t^*}\in U(\textbf{x}^*)\backslash U(\textbf{x}^{t^*})\) and \(\bar{y}(b^{t^*})\in [U(\textbf{x}^{t^*})\backslash U(\textbf{x}^*)]^m\).

Lemma 7

Considering the current solution \(\textbf{x}^{t^*}\) and the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\) mentioned above, we have

$$\begin{aligned} \begin{aligned}&f((\textbf{x}^{t^*}\setminus \bigsqcup _{y(b^{t^*})\in \bar{y}(b^{t^*})}\textbf{1}_{y(b^{t^*}),j^{t^*}})\sqcup \textbf{1}_{b^{t^*},i^{t^*}})-f(\textbf{x}^{t^*}) \le \frac{1}{2}\cdot f(\textbf{x}^\lambda ), \end{aligned} \end{aligned}$$
(9)

where \(\textbf{1}_{y(b^{t^*}),j^{t^*}}\preceq \textbf{x}^{t^*}\backslash \textbf{x}^\lambda \), if f is monotone.

Lemma 8

Given \(t\in \{\lambda +1,\dots ,t^*\}\) in KM-KM for problem (1), we have

$$\begin{aligned}{} & {} \sum \limits _{\textbf{1}_{b,i}\preceq (\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}\backslash \textbf{x}^t)} [g((\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})\sqcup \textbf{1}_{b,i})-g(\textbf{x}^t)]\nonumber \\{} & {} \quad \le (B-w_{\textbf{x}^\lambda })\rho _{t}, \end{aligned}$$
(10)

for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \).

Lemma 9

Let \(t\in \{\lambda +1,\dots ,t^*\}\) in KM-KM, let \(\alpha , \beta , r\) be positive constants satisfying \(1-\frac{1}{\alpha }(1-e^{-\beta })-r\ge 0\), and let \(\textbf{x}^*\) be an optimal solution of problem (1). If

$$\begin{aligned} g(\textbf{x}^*)\le \alpha [g(\textbf{x}^t)+\frac{(B-w_{\textbf{x}^\lambda })}{\beta }\rho _{t}] \end{aligned}$$

and

$$\begin{aligned} f((\textbf{x}^{t^*}\setminus \bigsqcup _{y(b^{t^*})\in \bar{y}(b^{t^*})}\textbf{1}_{y(b^{t^*}),j^{t^*}})\sqcup \textbf{1}_{b^{t^*},i^{t^*}})-f(\textbf{x}^{t^*}) \le r\cdot f(\textbf{x}^\lambda ) \end{aligned}$$

hold, we have

$$\begin{aligned} f(\textbf{x}^{t^*})\ge \frac{1}{\alpha }(1-e^{-\beta })f(\textbf{x}^*). \end{aligned}$$
(11)

Theorem 1

If the objective function f is monotone for problem (1), we can obtain a \(\frac{1}{m+2}(1-e^{-(m+2)})\)-approximate solution in KM-KM by setting \(\lambda =2\).

Proof

When there is no qualified m-swap \((\bar{a},b)\in S^m\), KM-KM breaks all loops and outputs \(\textbf{x}^T\). Applying Lemma 3 to \(U(\textbf{x}^t)\) and \(U(\textbf{x}^*)\), for a fixed \(t\ge \lambda \), there exists a mapping \(y: U(\textbf{x}^*)\backslash U(\textbf{x}^t)\rightarrow [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\) such that \((U(\textbf{x}^t)\backslash \bar{y}(b))\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\), for \(~b\in U(\textbf{x}^*)\backslash U(\textbf{x}^t)\) and \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\). Thus, there are some m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^t\) and \(\textbf{x}^*\).

When \(t=T\), according to whether the conditions in line 11 of KM-KM are violated, we divide the m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) into two cases.

Case 1: All m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) were rejected only because \(\rho (\bar{y}(b),b)\le 0\), not because of the knapsack constraint.

Due to our assumption about the m-swaps, we get

$$\begin{aligned} \begin{aligned} g((\textbf{x}^T\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})\sqcup \textbf{1}_{b,i})\le g(\textbf{x}^T). \end{aligned} \end{aligned}$$
(12)

Since f is monotone, we combine formula (4) in Lemma 5 with formula (7) in Lemma 6, and then use formula (12) and formula (8) in Lemma 6 to get \(g(\textbf{x}^*)\le (m+2)g(\textbf{x}^T)\). Finally, we have \(f(\textbf{x}^*)\le (m+2)f(\textbf{x}^T)-(m+1)f(\textbf{x}^\lambda )\le (m+2)f(\textbf{x}^T) \) by the nonnegativity of f. Therefore, we find a \(\frac{1}{m+2}\)-approximate solution in Case 1, if f is monotone.
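For concreteness, the chain of inequalities used in Case 1 can be spelled out as follows (each step restates one of the formulas cited above):

$$\begin{aligned} g(\textbf{x}^*)&\le g(\textbf{o}^T_{|U(\textbf{x}^T\backslash \textbf{x}^\lambda )|})+g(\textbf{x}^T)\\&\le \sum \limits _{\textbf{1}_{b,i}\preceq (\textbf{o}^T_{|U(\textbf{x}^T\backslash \textbf{x}^\lambda )|}\backslash \textbf{x}^T)} [g((\textbf{x}^T\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})\sqcup \textbf{1}_{b,i})-g(\textbf{x}^T\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})]+2g(\textbf{x}^T)\\&\le \sum \limits _{\textbf{1}_{b,i}\preceq (\textbf{o}^T_{|U(\textbf{x}^T\backslash \textbf{x}^\lambda )|}\backslash \textbf{x}^T)} [g(\textbf{x}^T)-g(\textbf{x}^T\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})]+2g(\textbf{x}^T)\\&\le mg(\textbf{x}^T)+2g(\textbf{x}^T)=(m+2)g(\textbf{x}^T), \end{aligned}$$

where the first inequality is (4), the second is (7), the third splits each summand as in (13) and drops the nonpositive part by (12), and the last uses (8).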

Case 2: Among the m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\), at least one satisfies \(w_{\textbf{x}^T}-w_{\bar{y}(b)}+ w_b > B\).

For a fixed \(t\ge \lambda \), KM-KM selects a qualified m-swap \((\bar{a}^t,b^t)\) to update \(\textbf{x}^{t}\) in each t-th iteration. In iteration \(t^*+1\), KM-KM checks the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\), where \(b^{t^*}\in U(\textbf{x}^*)\backslash U(\textbf{x}^{t^*})\) and \(\bar{y}(b^{t^*})\in [U(\textbf{x}^{t^*})\backslash U(\textbf{x}^*)]^m\), in line 11 and removes it, for the first time, because \(w_{\textbf{x}^{t^*}}-w_{\bar{y}(b^{t^*})}+ w_{b^{t^*}} > B\). Define \(\rho _t:=\rho (\bar{a}^t,b^t)\) for \(t\in \{\lambda ,\dots ,t^*-1\}\) and

$$\begin{aligned} \rho _{t^*}:=\frac{f((\textbf{x}^{t^*}\setminus \bigsqcup _{y(b^{t^*})\in \bar{y}(b^{t^*})}\textbf{1}_{y(b^{t^*}),j^{t^*}})\sqcup \textbf{1}_{b^{t^*},i^{t^*}})-f(\textbf{x}^{t^*})}{w_{b^{t^*}}}. \end{aligned}$$

For \(t\in \{\lambda +1,\ldots ,t^{*}\}\), we combine formula (4) in Lemma 5 with formula (7) in Lemma 6, and then rewrite formula (7) in Lemma 6 as below:

$$\begin{aligned} \begin{aligned}&g((\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})\sqcup \textbf{1}_{b,i}) -g((\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j}))\\&=[g((\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j})\sqcup \textbf{1}_{b,i})-g(\textbf{x}^t)]\\&+[g(\textbf{x}^t)-g((\textbf{x}^t\backslash \bigsqcup _{y(b)\in \bar{y}(b)}\textbf{1}_{y(b),j}))]. \end{aligned} \end{aligned}$$
(13)

Using formula (8) in Lemma 6 and Lemma 8, we get \(g(\textbf{x}^*)\le (m+2)[g(\textbf{x}^t)+\frac{(B-w_{\textbf{x}^\lambda })}{m+2}\rho _{t}].\) By formula (9) in Lemma 7, we set \(r=\frac{1}{2}\) in Lemma 9. Therefore, \(f(\textbf{x}^{t^*})\ge \frac{1}{m+2}(1-e^{-(m+2)})f(\textbf{x}^*)\) holds immediately, and we obtain the approximation ratio of \(\frac{1}{m+2}(1-e^{-(m+2)})\) in Case 2, if f is monotone. \(\square \)

As shown above, we obtain a \(\frac{1}{m+2}(1-e^{-(m+2)})\) approximation ratio for monotone k-submodular maximization with a knapsack and m matroid constraints, improving the previous ratio \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\) of Liu et al. (2022a).

When \(m=1\), i.e., monotone k-submodular maximization with a knapsack and a matroid constraint, we have the following corollary, which improves the \(\frac{1}{4}(1-e^{-2})\) bound of Liu et al. (2022a).

Corollary 1

If the objective function f is monotone for problem (1) with \(m=1\), we can obtain a \(\frac{1}{3}(1-e^{-3})\)-approximate solution in KM-KM by setting \(\lambda =2\).

5 Analysis for non-monotone k-submodular maximization with a knapsack and m matroid constraints

In this section, we further study non-monotone k-submodular maximization with a knapsack and m matroid constraints. In fact, the monotonicity of f plays no role in Lemmas 6, 8 and 9, so we only need the following Lemma 10. Using Lemmas 6, 8, 9 and 10, we obtain an approximation ratio of \(\frac{1}{m+3}(1-e^{-(m+3)})\).

Lemma 10

Considering the current solution \(\textbf{x}^{t^*}\) and the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\) as in Lemma 7, we have

$$\begin{aligned} \begin{aligned}&f((\textbf{x}^{t^*}\setminus \bigsqcup _{y(b^{t^*})\in \bar{y}(b^{t^*})}\textbf{1}_{y(b^{t^*}),j^{t^*}})\sqcup \textbf{1}_{b^{t^*},i^{t^*}})-f(\textbf{x}^{t^*}) \le \frac{m+1}{\lambda }\cdot f(\textbf{x}^\lambda ), \end{aligned} \end{aligned}$$
(14)

where \(\textbf{1}_{y(b^{t^*}),j^{t^*}}\preceq \textbf{x}^{t^*}\backslash \textbf{x}^\lambda \).

Theorem 2

If the objective function f is non-monotone for problem (1), we can obtain a \(\frac{1}{m+3}(1-e^{-(m+3)})\)-approximate solution in KM-KM by setting \(\lambda \ge \frac{(m+1)(m+3)}{m+2+e^{-(m+3)}}\).

Proof

When \(t=T\), similarly to Theorem 1 in Sect. 4, we divide the m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) into two cases.

Case 1: All m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) were rejected only because \(\rho (\bar{y}(b),b)\le 0\), not because of the knapsack constraint.

We combine formula (6) in Lemma 5 with formula (7) in Lemma 6, and then use formula (12) and formula (8) in Lemma 6 to get \(g(\textbf{x}^*)\le (m+3)g(\textbf{x}^T)\). Finally, we have \(f(\textbf{x}^*)\le (m+3)f(\textbf{x}^T)-(m+2)f(\textbf{x}^\lambda )\le (m+3)f(\textbf{x}^T) \) by the nonnegativity of f. Therefore, we find a \(\frac{1}{m+3}\)-approximate solution in Case 1, if f is non-monotone in problem (1).

Case 2: Among the m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\), at least one satisfies \(w_{\textbf{x}^T}-w_{\bar{y}(b)}+ w_b > B\).

For \(t\in \{\lambda +1,\ldots ,t^{*}\}\), we combine formula (6) in Lemma 5, formula (7) in Lemma 6 and formula (13). Then, using formula (8) in Lemma 6 and Lemma 8, we get \(g(\textbf{x}^*)\le (m+3)[g(\textbf{x}^t)+\frac{(B-w_{\textbf{x}^\lambda })}{m+3}\rho _{t}].\) By formula (14) in Lemma 10, we set \(r=\frac{m+1}{\lambda }\) in Lemma 9. Therefore, \(f(\textbf{x}^{t^*})\ge \frac{1}{m+3}(1-e^{-(m+3)})f(\textbf{x}^*)\) holds immediately, and we obtain the approximation ratio of \(\frac{1}{m+3}(1-e^{-(m+3)})\) in Case 2, if f is non-monotone in problem (1). \(\square \)

As shown above, we obtain a \(\frac{1}{m+3}(1-e^{-(m+3)})\) approximation ratio for non-monotone k-submodular maximization with a knapsack and m matroid constraints, extending the monotone result of Liu et al. (2022a) to the non-monotone case.

When \(m=1\), i.e., non-monotone k-submodular maximization with a knapsack and a matroid constraint, we have the following corollary.

Corollary 2

If the objective function f is non-monotone for problem (1) with \(m=1\), we can obtain a \(\frac{1}{4}(1-e^{-4})\)-approximate solution in KM-KM by setting \(\lambda =3\).

6 Conclusions

In this paper, based on the nested greedy and local search algorithm KM-KM (Liu et al. 2022a) and a construction method (Nguyen and Thai 2020), we improve the approximation ratio for problem (1) from \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\) in Liu et al. (2022a) to \(\frac{1}{m+2}(1-e^{-(m+2)})\) by enumerating \(\lambda =2\) items with the largest marginal profits in the optimal solution; for \(m=1\), this yields a \(\frac{1}{3}(1-e^{-3})\) approximation ratio. Furthermore, we extend the result to the non-monotone case and obtain an approximation ratio of \(\frac{1}{m+3}(1-e^{-(m+3)})\) for problem (1) by enumerating \(\lambda \ge \frac{(m+1)(m+3)}{m+2+e^{-(m+3)}}\) items with \(\lambda \in N_+\); for \(m=1\), this yields a \(\frac{1}{4}(1-e^{-4})\) approximation ratio with \(\lambda =3\) items, again enumerated with the largest marginal profits in the optimal solution.