Abstract
A k-submodular function is a generalization of a submodular function: its domain is the collection of k-tuples of disjoint subsets of the ground set rather than the collection of simple subsets. In this paper, we consider the maximization of a k-submodular function subject to the intersection of a knapsack constraint and m matroid constraints. When the k-submodular function is monotone, a tailored analysis yields an approximation ratio of \(\frac{1}{m+2}(1-e^{-(m+2)})\) for a nested greedy and local search algorithm. For the non-monotone case, we obtain an approximation ratio of \(\frac{1}{m+3}(1-e^{-(m+3)})\).
1 Introduction
Given a ground set G containing n elements and \(k\in N_+\), we refer to \((X_1,\ldots ,X_k)\) as k disjoint subsets if \(X_i\subseteq G\) for all \(i\in [k]\) and \(X_i\cap X_j=\emptyset \) for all \(i\ne j\in [k]\); write \((k+1)^{G}\) for the family of all such k disjoint subsets. Define join and meet operations for any \(\textbf{x}=(X_1,\ldots ,X_k)\) and \(\textbf{y}=(Y_1,\ldots ,Y_k)\) in \((k+1)^{G}\), that is,
The join operation removes the points placed in different positions in \(\textbf{x}\) and \(\textbf{y}\), that is, points v with \(v\in X_i\) and \(v\in Y_j\) for some \(i\ne j\in [k]\); the meet operation is simply the coordinatewise intersection of sets.
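Concretely, if k disjoint subsets are stored as a map from each assigned element to its position \(i\in [k]\), the join and meet can be sketched as follows (Python; the map-based representation is an illustrative choice, not part of the paper):

```python
def join(x, y):
    """x ⊔ y: keep every assigned element, except those that x and y
    place in different positions, which are removed."""
    z = {}
    for v in set(x) | set(y):
        px, py = x.get(v), y.get(v)
        if px is None:
            z[v] = py
        elif py is None or px == py:
            z[v] = px
        # px != py with both present: v is dropped by the join
    return z

def meet(x, y):
    """x ⊓ y: the positionwise intersection X_i ∩ Y_i."""
    return {v: i for v, i in x.items() if y.get(v) == i}

x = {"a": 1, "b": 2}
y = {"a": 1, "b": 1, "c": 2}
jxy = join(x, y)   # 'b' is dropped: x places it in position 2, y in position 1
mxy = meet(x, y)   # only 'a' sits in the same position in both
```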
A function \(f:(k+1)^{G}\rightarrow R\) is said to be k-submodular (Huber and Kolmogorov 2012) if
for any \(\textbf{x}\) and \(\textbf{y}\) in \((k+1)^{G}\). The k-submodular function generalizes the submodular function: its domain is a collection of k disjoint subsets instead of simple subsets, and when \(k=1\) a k-submodular function is exactly a submodular function.
1.1 Related work
There have been many results on the monotone submodular maximization problem. Nemhauser et al. (1978) first gave a greedy \((1-1/e)\)-approximation algorithm under a cardinality constraint, which is known to be a tight bound. Later, Sviridenko (2004) designed a combinatorial \((1-1/e)\)-approximation algorithm under a knapsack constraint. For this problem, Ene and Nguyen (2019) also obtained an approximation ratio of \((1-1/e-\varepsilon )\) by using the multilinear extension, with only nearly linear running time. Under a matroid constraint, Calinescu et al. (2011) achieved an approximation ratio of \((1-1/e)\) by using the continuous greedy method and the pipage rounding technique. Filmus and Ward (2014) designed a combinatorial algorithm using local search, which also achieved an approximation ratio of \((1-1/e)\). More recently, Sarpatwar et al. (2019) contributed an algorithm with an approximation ratio of \(\frac{1-e^{-(m+1)}}{m+1}\), combining greedy and local search techniques, for submodular maximization subject to the intersection of a knapsack constraint and m matroid constraints. For maximizing non-monotone submodular functions, Lee et al. (2010) presented a \((\frac{1}{m+2+\frac{1}{m}+\varepsilon })\)-approximation algorithm under m matroid constraints, and a \((\frac{1}{5}-\varepsilon )\)-approximation algorithm under m knapsack constraints. Feldman et al. (2011) and Chekuri et al. (2014) studied constant-factor approximation algorithms that maximize the multilinear extension of a submodular function over a down-closed polytope; the fractional solution can then be rounded with contention resolution schemes. For more references on submodular maximization, see Bian et al. (2017); Calinescu et al. (2011); Ene and Nguyen (2019); Feldman and Naor (2013); Filmus and Ward (2014); Huang et al. (2022); Liu et al. (2022b); Sviridenko (2004); Yoshida (2019).
As a generalization of submodular function, the k-submodular function still has diminishing marginal benefits, where the definition domain is extended from the collection of simple subsets to the collection of k disjoint subsets. Many practical applications can be attributed to the k-submodular maximization problem. Ohsaka and Yoshida (2015) studied influence maximization with k topics and sensor placement with k sensors both based on k-submodular maximization with a size constraint. Rafiey and Yoshida (2020) applied k-submodular maximization to facility location.
In recent years, much research on k-submodular maximization has emerged. For k-submodular maximization without the monotonicity assumption, Ward and Zivny (2014) studied the unconstrained problem and gave a deterministic greedy algorithm and a randomized greedy algorithm achieving approximation ratios of 1/3 and \(\frac{1}{1+a}\) with \(a=\max \{1, \sqrt{\frac{k-1}{4}}\}\), respectively. Later, the approximation ratio was improved to 1/2 by Iwata et al. (2016), and Oshima (2021) contributed a \(\frac{k^2+1}{2k^2+1}\)-approximation algorithm. For monotone k-submodular maximization, Ward and Zivny (2014) showed a 1/2-approximation algorithm without constraints; this was improved to \(k/(2k-1)\) by Iwata et al. (2016), which is asymptotically tight. Ohsaka and Yoshida (2015) introduced a construction method relating the current solution to an optimal solution to obtain a 1/2-approximation ratio under a total size constraint. Using a similar construction, a 1/2-approximation ratio was also achieved by Sakaue (2017) under a matroid constraint. Tang et al. (2022) contributed a \(\frac{1}{2}(1-e^{-1})\)-approximation algorithm under a knapsack constraint, and Xiao et al. found that this result could be improved to \(\frac{1}{2}(1-e^{-2})\). Recently, Liu et al. (2022a) designed a nested greedy and local search \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\)-approximation algorithm for monotone k-submodular maximization subject to the intersection of a knapsack constraint and m matroid constraints.
1.2 Our contributions
In this paper, we consider k-submodular maximization subject to the intersection of a knapsack constraint and m matroid constraints, and discuss the monotone and non-monotone cases respectively. The main contributions of this paper are as follows:
-
We improve the approximation ratio from \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\) in Liu et al. (2022a) to \(\frac{1}{m+2}(1-e^{-(m+2)})\) for the monotone k-submodular maximization problem subject to the intersection of a knapsack and m matroid constraints. In the theoretical analysis of the algorithm, we no longer rely on the conclusion of the greedy algorithm for the unconstrained k-submodular maximization problem, but instead use the properties of k-submodular functions to obtain the new result. Note that our ratio becomes \(\frac{1}{3}(1-e^{-3})\) when \(m=1\), which improves the ratio \(\frac{1}{4}(1-e^{-2})\) in Liu et al. (2022a) for the intersection of a knapsack and a matroid constraint.
-
We extend the approximation algorithm to the non-monotone case. By increasing the number of enumerated points in the algorithm and using the pairwise monotonicity property, we achieve a \(\frac{1}{m+3}(1-e^{-(m+3)})\) approximation ratio. In particular, we obtain a \(\frac{1}{4}(1-e^{-4})\) approximation ratio for the non-monotone k-submodular maximization problem with the intersection of a knapsack and a matroid constraint.
1.3 Organization
The paper is organized as follows. In Sect. 2, we introduce notation, properties and some basic results about k-submodular functions. In Sect. 3, we present and explain the nested greedy and local search algorithm. In Sects. 4 and 5, we give our theoretical analysis and show the main results for the monotone and non-monotone cases, respectively.
2 Preliminaries
2.1 k-Submodular function
In this paper, we assume \(k\ge 2\) and \(k\in N_+\), because a k-submodular function reduces to a submodular function when \(k=1\). For any two k disjoint subsets \(\textbf{x},~\textbf{y}\in (k+1)^{G}\), we introduce a remove operation and a partial order, i.e.
\(\textbf{x}\preceq \textbf{y}\), if \(X_i\subseteq Y_i, \forall i\in [k]\).
Define one-item \(\textbf{1}_{v,i}:=(X_1,\dots ,X_k)\), where \(X_i=\{v\}\) and \(X_{j\ne i}=\emptyset \), and empty-item \(\textbf{0}:=(\emptyset ,\dots ,\emptyset )\). Denote the support set \(U(\textbf{x}):=\bigcup _{i=1}^{k}X_i\).
Given a function \(f:(k+1)^G\rightarrow R\), for any \(\textbf{x}\in (k+1)^G\), \(v\in G\setminus U(\textbf{x})\) and \(i\in [k]\), it is said to be monotone if its marginal gain satisfies:
From Ohsaka and Yoshida (2015), f is pairwise monotone if
for any \(\textbf{x}\in (k+1)^G\), \(v\in G\setminus U(\textbf{x})\) and \(i\ne j\in [k]\). And f is orthant submodular, if
for \(\textbf{x}\preceq \textbf{y}\in (k+1)^G\), \(v\in G{\setminus } U(\textbf{y})\) and \(i\in [k]\). As shown below, a k-submodular function has a well-known equivalent characterization (Ward and Zivny 2014).
Definition 1
A function \(f:(k+1)^G\rightarrow R\) is k-submodular iff it is pairwise monotone and orthant submodular.
Obviously, the monotonicity of f implies pairwise monotonicity, so for a monotone function \(f:(k+1)^G\rightarrow R\), k-submodularity is equivalent to orthant submodularity. In addition, a k-submodular function has the following useful property (Ohsaka and Yoshida 2015).
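On small instances, Definition 1 can be verified by brute force. The sketch below (Python; the coverage-style function and its covering sets are illustrative assumptions, in the spirit of the k-topic coverage applications cited above) checks pairwise monotonicity and orthant submodularity for \(k=2\) on a three-element ground set:

```python
from itertools import product

G = ["a", "b", "c"]          # tiny ground set
k = 2
# assumed coverage sets: placing element v in position i covers S[v][i]
S = {"a": [{1, 2}, {2, 3}],
     "b": [{2}, {3, 4}],
     "c": [{1, 4}, {4}]}

def f(x):                     # x: dict element -> position in {0, ..., k-1}
    covered = set()
    for v, i in x.items():
        covered |= S[v][i]
    return len(covered)

def marginal(x, v, i):        # f_x(1_{v,i})
    y = dict(x); y[v] = i
    return f(y) - f(x)

# enumerate all of (k+1)^G: label 0 means "absent"
all_x = [{v: l - 1 for v, l in zip(G, labels) if l > 0}
         for labels in product(range(k + 1), repeat=len(G))]

# pairwise monotonicity: f_x(1_{v,i}) + f_x(1_{v,j}) >= 0 for i != j
pairwise = all(marginal(x, v, i) + marginal(x, v, j) >= 0
               for x in all_x for v in G if v not in x
               for i in range(k) for j in range(k) if i != j)

def leq(x, y):                # the partial order x ⪯ y
    return all(y.get(v) == i for v, i in x.items())

# orthant submodularity: marginals shrink along the partial order
orthant = all(marginal(x, v, i) >= marginal(y, v, i)
              for x in all_x for y in all_x if leq(x, y)
              for v in G if v not in y for i in range(k))

print(pairwise, orthant)      # by Definition 1, this certifies k-submodularity
```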
Lemma 1
Given a k-submodular function f, we have
for any \(\textbf{x}, \textbf{y}\in (k+1)^G\) and \(\textbf{x}\preceq \textbf{y}\).
Given fixed k disjoint subsets \(\textbf{y}\in (k+1)^G\), define the family of k disjoint subsets \(D(\textbf{y}):=\{\textbf{x}\in (k+1)^G~|~\textbf{y}\preceq \textbf{x}\}\). In the later analysis, we need to construct a function \(g: D(\textbf{y})\rightarrow R\) that temporarily hides \(\textbf{y}\). To keep the function normalized, we set \(g(\textbf{x})=f(\textbf{x})-f(\textbf{y})\), which is still k-submodular.
Lemma 2
Given a k-submodular function \(f: (k+1)^{G}\rightarrow R\) and \(\textbf{y}\in (k+1)^G\), then \(g(\textbf{x})=f(\textbf{x})-f(\textbf{y}): D(\textbf{y})\rightarrow R\) is a k-submodular function and \(g(\textbf{y})=0\).
2.2 Knapsack and matroid constraints
Given \(\mathcal {L}\subseteq 2^G\), a pair \((G,\mathcal {L})\) is an independence system if \((\mathcal {M}1)\) and \((\mathcal {M}2)\) below hold, and a set A is an independent set if \(A\in \mathcal {L}\). Further, the independence system \((G,\mathcal {L})\) is said to be a matroid if \((\mathcal {M}3)\) also holds.
Definition 2
Given \(\mathcal {L}\subseteq 2^G\), a pair \(\mathcal {M}=(G,\mathcal {L})\) is a matroid if
\((\mathcal {M}1)\): \(\emptyset \in \mathcal {L}\).
\((\mathcal {M}2)\): \(A\subseteq B\) and \(B\in \mathcal {L}\) \(\Longrightarrow \) \(A\in \mathcal {L}\).
\((\mathcal {M}3)\): \(A,B\in \mathcal {L}\) and \(\mid A\mid >\mid B\mid \) \(\Longrightarrow \) \(\exists ~ v\in A\backslash B\), s.t. \(B\cup \{v\}\in \mathcal {L}\).
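For intuition, the three axioms can be verified exhaustively on a small example. The sketch below (Python; the partition matroid and its capacities are an assumed toy instance) checks \((\mathcal {M}1)\)–\((\mathcal {M}3)\):

```python
from itertools import combinations

G = ["a", "b", "c", "d"]
# assumed partition matroid: at most 1 element of {a, b}, at most 2 of {c, d}
blocks = [({"a", "b"}, 1), ({"c", "d"}, 2)]

def independent(A):
    return all(len(A & P) <= cap for P, cap in blocks)

subsets = [frozenset(s) for r in range(len(G) + 1)
           for s in combinations(G, r)]
L = [A for A in subsets if independent(A)]

m1 = frozenset() in L                                       # (M1)
m2 = all(A in L for B in L for A in subsets if A <= B)      # (M2)
m3 = all(any(B | {v} in L for v in A - B)                   # (M3): exchange
         for A in L for B in L if len(A) > len(B))
print(m1, m2, m3)   # a partition matroid satisfies all three axioms
```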
For \(m\in N_+\) and each \(j\in [m]\), \(\mathcal {L}_j\) is a collection of independent sets and \(\mathcal {M}_j = (G, \mathcal {L}_j)\) is a matroid. We are given a nonnegative bound B and, for each element \(v\in G\), a nonnegative weight \(w_v\). Without loss of generality, we assume that \(w_v\) and B are integers; otherwise, we can always scale them to integers in the same proportion. Let \(w_{\textbf{x}}=\sum \limits _{v\in U(\textbf{x})}w_v\). The k-submodular maximization problem subject to the intersection of a knapsack constraint and m matroid constraints is
For any \(A\subseteq G\), we use \([A]^{m}\) to denote the collection of subsets of A whose size does not exceed m. Given an independent set \(A\in \bigcap _{j=1}^{m}\mathcal {L}_j\) and a pair \((\bar{a},b)\) with \(\bar{a}\in [A]^{m}\) and \(b\in G\backslash A\), we call \((\bar{a},b)\) an m-swap if \((A\backslash \bar{a})\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\). The next lemma ensures that there exist m-swaps \((\bar{a},b)\) between any two independent sets; its detailed proof is given by Sarpatwar et al. (2019).
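For concreteness, checking whether a given \(\textbf{x}\) is feasible for problem (1) amounts to one knapsack test plus m independence tests on the support \(U(\textbf{x})\). A minimal sketch (Python; the weights, bound and matroid oracles are assumed toy data):

```python
w = {"a": 3, "b": 2, "c": 4}        # assumed integer weights w_v
B = 6                               # knapsack bound
# m = 2 assumed matroids, given as independence oracles on subsets of G
matroids = [lambda A: len(A) <= 2,                  # uniform matroid of rank 2
            lambda A: len(A & {"a", "c"}) <= 1]     # partition-type matroid

def feasible(x):
    """x: dict element -> position; tests w_x <= B and U(x) in every L_j."""
    support = set(x)                       # U(x)
    w_x = sum(w[v] for v in support)       # w_x = sum of w_v over U(x)
    return w_x <= B and all(indep(support) for indep in matroids)

ok = feasible({"a": 0, "b": 1})            # weight 5 <= 6, both oracles pass
bad = feasible({"a": 0, "b": 1, "c": 0})   # violates the budget and the rank
```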
Lemma 3
Given two independent sets \(A,B\in \bigcap _{j=1}^{m}\mathcal {L}_j\), we can construct a mapping \(y: B\backslash A\rightarrow [A\backslash B]^{m}\) such that \((A\backslash \bar{a})\cup \{b\} \in \bigcap _{j=1}^{m}\mathcal {L}_j\) for every \(b\in B \backslash A\) with \(\bar{a}=y(b)\in [A\backslash B]^{m}\), and each element \(a\in A\backslash B\) appears in the mapping y no more than m times.
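On toy instances, the m-swaps whose existence Lemma 3 guarantees can simply be enumerated. The sketch below (Python; the two matroid oracles are assumptions, so \(m=2\)) lists every pair \((\bar{a}, b)\) with \(|\bar{a}|\le m\) whose exchange stays independent in all matroids:

```python
from itertools import combinations

G = {"a", "b", "c", "d"}
# two assumed matroid oracles, so m = 2
matroids = [lambda S: len(S) <= 2,                  # uniform matroid of rank 2
            lambda S: len(S & {"a", "b"}) <= 1]     # partition-type matroid
m = len(matroids)

def independent(S):
    return all(ind(S) for ind in matroids)

def m_swaps(A):
    """All m-swaps (ā, b): ā ⊆ A with |ā| <= m, b ∉ A,
    and (A \\ ā) ∪ {b} independent in every matroid."""
    swaps = []
    for b in G - A:
        for r in range(m + 1):
            for abar in combinations(sorted(A), r):
                if independent((A - set(abar)) | {b}):
                    swaps.append((frozenset(abar), b))
    return swaps

A = {"a", "c"}
swaps = m_swaps(A)   # e.g. removing "a" lets "b" enter both matroids
```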
The following Lemma 4 (Nemhauser et al. 1978) will be needed in the later theoretical proofs.
Lemma 4
Given two fixed \(P,D\in N_+\) and a sequence of nonnegative real numbers \(\{\gamma _i\}_{i\in [P]}\), then we have
3 Algorithm overview
3.1 Greedy algorithm
First, we introduce Greedy Algorithm (f, G) from Ward and Zivny (2014). By Definition 1, the k-submodularity of f implies pairwise monotonicity, that is, \(f_{\textbf{x}}(\textbf{1}_{v,i})+f_{\textbf{x}}(\textbf{1}_{v,j})\ge 0\) for any \(\textbf{x}\in (k+1)^G\), \(v\notin U(\textbf{x})\) and \(i\ne j\in [k]\). This means there are no two positions \(i\ne j\in [k]\) such that \(f_{\textbf{x}}(\textbf{1}_{v,i})<0\) and \(f_{\textbf{x}}(\textbf{1}_{v,j})<0\) both hold. For the unconstrained k-submodular maximization problem, there is always an optimal solution \(\textbf{x}^*\) satisfying \(U(\textbf{x}^*)=G\). In Greedy Algorithm (f, G), we input a set G and fix an order of the points in G, that is, \(G=\{v_1,\dots ,v_{|G|}\}\). Each current solution \(\textbf{x}_l\) is obtained from \(\textbf{x}_{l-1}\) by adding \(v_{l}\in G\backslash U(\textbf{x}_{l-1})\) at a greedily chosen position \(i_{l}\in [k]\), for \(l=1,\ldots ,|G|\).
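The procedure can be sketched as follows (Python, using the map-based representation; the coverage-type f and its covering sets are illustrative assumptions): scan G in a fixed order and give each point the position with the largest marginal gain.

```python
k = 2
S = {"a": [{1, 2}, {3}],
     "b": [{2}, {3, 4}],
     "c": [{5}, {1, 5}]}           # assumed coverage sets for each pair (v, i)

def f(x):                           # x: dict element -> position i
    covered = set()
    for v, i in x.items():
        covered |= S[v][i]
    return len(covered)

def greedy(f, ground):
    """Scan `ground` in a fixed order v_1, ..., v_|G|; assign each v_l
    the position i_l with the largest marginal gain f_x(1_{v_l, i_l})."""
    x = {}
    for v in ground:
        i_best = max(range(k), key=lambda i: f({**x, v: i}) - f(x))
        x[v] = i_best
    return x

sol = greedy(f, ["a", "b", "c"])    # every element of G ends up assigned
```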
3.2 Nested greedy and local search algorithm KM-KM
Next, we present a nested greedy and local search algorithm for problem (1), inspired by Liu et al. (2022a); for simplicity, we call it KM-KM. If the objective function f is monotone, we choose \(\lambda =2\) in KM-KM; otherwise, we need to choose \(\lambda \ge \frac{(m+1)(m+3)}{m+2+e^{-(m+3)}}\), as required by the proof of the approximation ratio.
KM-KM starts with \(\textbf{x}^\lambda \preceq \textbf{x}^*\), obtained by enumerating the points with the largest marginal profits, where \(\textbf{x}^*\) is an optimal solution of problem (1). If \(|U(\textbf{x}^*)|\le \lambda \), we can find \(\textbf{x}^*\) by enumerating all \(\textbf{x}\in (k+1)^G\) with \(|U(\textbf{x})|\le |U(\textbf{x}^*)|\); therefore, we only consider the case \(|U(\textbf{x}^*)|>\lambda \). For a positive integer \(t\ge \lambda \), we define the t-th iteration as the process in which KM-KM finds a suitable m-swap \((\bar{a}^t,b^t)\) to update \(\textbf{x}^{t}\). Clearly \(|(U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}|=|U(\textbf{x}^{t+1}\backslash \textbf{x}^\lambda )|\). If the current m-swap \((\bar{a}^t,b^t)\) satisfies all the conditions in line 11, KM-KM performs lines 12-18 and breaks loop 9-19 to update \(S^m\) in line 8. In line 12 of KM-KM, we consider the elements in \((U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}\) and feed them to Greedy Algorithm in the same order as in KM-KM. For \(l\in \{1,\dots ,|(U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}|\}\), Greedy Algorithm \((f(\tilde{\textbf{x}}^{t+1}\sqcup \textbf{x}^\lambda ), ~(U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\})\) reassigns the positions i of the points \(v_l\in (U(\textbf{x}^{t}\backslash \textbf{x}^\lambda )\backslash \bar{a}^t)\cup \{b^t\}\). Define \(\tilde{\textbf{x}}^{t+1}_l\) as the current solution, such that \(\tilde{\textbf{x}}^{t+1}_l=\tilde{\textbf{x}}^{t+1}_{l-1}\sqcup \textbf{1}_{v_l,i_l}\). If the current m-swap \((\bar{a}^t,b^t)\) violates any condition in line 11, KM-KM removes it and continues with the next m-swap. Finally, KM-KM breaks all loops when \(S^m=\emptyset \) in line 9 and returns \(\textbf{x}^t\). We define the time when KM-KM outputs \(\textbf{x}^t\) as T, where \(T\ge \lambda +1\).
3.3 A construction method for analysis
In order to give an approximate ratio analysis, we introduce a construction method based on Algorithm 2. Mark \(\textbf{x}^*\) as an optimal solution of problem (1).
Fix an iteration step \(t\ge \lambda +1\) in KM-KM and \(l\in \{1, \dots ,|U(\textbf{x}^t{\setminus } \textbf{x}^\lambda )|\} \). Define \(\textbf{x}^t_l=\tilde{\textbf{x}}^t_l\sqcup \textbf{x}^\lambda \); then \(\textbf{x}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}=\textbf{x}^t\). We further construct two sequences \(\{\textbf{o}_{l-1/2}^t\}\) and \(\{\textbf{o}_{l}^t\}\) such that \(\textbf{o}_{l-1/2}^t =(\textbf{x}^*~\sqcup ~ \textbf{x}_{l}^t)~\sqcup ~\textbf{x}_{l-1}^t\), \(\textbf{o}_{l}^t =(\textbf{x}^*~\sqcup ~\textbf{x}_{l}^t)~\sqcup ~\textbf{x}_l^t\) and \(\textbf{o}_{0}^t=\textbf{x}^*\). Note that \(\textbf{x}^t_{l-1}\preceq \textbf{x}^t_l\preceq \textbf{o}_{l}^t\), \( \textbf{o}_{l-1/2}^t\preceq \textbf{o}_{l}^t\) and \(U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)=U(\mathbf {x^*})\backslash U(\textbf{x}^t)\).
By Lemma 2, define a k-submodular function \(g(\textbf{x})=f(\textbf{x})-f(\textbf{x}^\lambda ):D(\textbf{x}^\lambda )\rightarrow R_+\). The construction method leads to the following conclusions; their detailed proofs are given in the Appendix.
Lemma 5
Given a fixed iteration step \(t\ge \lambda +1\) in KM-KM and an optimal solution \(\textbf{x}^*\) for problem (1), we have:
(i) when the objective function f is monotone,
(ii) when the objective function f is non-monotone,
4 Analysis for monotone k-submodular maximization with a knapsack and m matroid constraints
In this section, we explain in detail how to obtain the approximation ratio for problem (1). Our proof framework is inspired by Sviridenko (2004); Sarpatwar et al. (2019); Liu et al. (2022a). To simplify the analysis of the approximation ratio, we give several lemmas; their detailed proofs are shown in the Appendix.
Lemma 6
Given a fixed iteration step \(t\ge \lambda +1\) in KM-KM and an optimal solution \(\textbf{x}^*\) for problem (1), there exists a mapping y :
such that \((U(\textbf{x}^t)\backslash \bar{y}(b))\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\), for \(~b\in U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\), \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\), and each element \(a\in U(\textbf{x}^t)\backslash U(\textbf{x}^*)\) appears in mapping y no more than m times. Then we have
and
for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \).
Assume that, while KM-KM runs, there exists an m-swap \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) satisfying \(w_{\textbf{x}^T}-w_{\bar{y}(b)}+ w_b > B\). Let \(t^*+1\) be the first iteration in which an m-swap \((\bar{y}(b^{t^*}),b^{t^*})\) in \(S^m(U(\textbf{x}^{t^*}))\setminus \{m\text{-swap } (\bar{a},b)~|~\bar{a}\cap U(\textbf{x}^\lambda )\ne \emptyset \}\) violates \(w_{\textbf{x}^{t^*}}-w_{\bar{y}(b^{t^*})}+ w_{b^{t^*}} \le B\), with \(b^{t^*}\in U(\textbf{x}^*)\backslash U(\textbf{x}^{t^*})\) and \(\bar{y}(b^{t^*})\in [U(\textbf{x}^{t^*})\backslash U(\textbf{x}^*)]^m\).
Lemma 7
Considering the current solution \(\textbf{x}^{t^*}\) and the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\) mentioned above, we have
where \(\textbf{1}_{y(b^{t^*}),j^{t^*}}\preceq \textbf{x}^{t^*}\backslash \textbf{x}^\lambda \), if f is monotone.
Lemma 8
Given \(t\in \{\lambda +1,\dots ,t^*\}\) in KM-KM for problem (1), we have
for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \).
Lemma 9
Given \(t\in \{\lambda +1,\dots ,t^*\}\) in KM-KM, let \(\alpha , \beta , r\) be positive constants satisfying \(1-\frac{1}{\alpha }(1-e^{-\beta })-r\ge 0\) and let \(\textbf{x}^*\) be an optimal solution of problem (1). If
and
hold, we have
Theorem 1
If the objective function f is monotone for problem (1), we can obtain a \(\frac{1}{m+2}(1-e^{-(m+2)})\)-approximate solution in KM-KM by setting \(\lambda =2\).
Proof
When there is no qualified m-swap \((\bar{a},b)\in S^m\), KM-KM breaks all loops and outputs \(\textbf{x}^T\). Using Lemma 3 between \(U(\textbf{x}^t)\) and \(U(\textbf{x}^*)\), for a fixed \(t\ge \lambda \), there exists a mapping \(y: U(\mathbf {x^*})\backslash U(\textbf{x}^t)\rightarrow [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\) such that \((U(\textbf{x}^t)\backslash \bar{y}(b))\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\), for \(b\in U(\mathbf {x^*})\backslash U(\textbf{x}^t)\) and \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\). Thus, there are some m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^t\) and \(\textbf{x}^*\).
When \(t=T\), according to whether the conditions in line 11 of KM-KM are violated, we divide the m-swaps \((\bar{y}(b), b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) into two cases.
Case 1: all m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) were rejected only because \(\rho (\bar{y}(b),b)\le 0\), not because of the knapsack constraint.
Due to our assumption about the m-swaps, we get
Since f is monotone, we combine formula (4) in Lemma 5 with formula (7) in Lemma 6, and then use formula (12) and formula (8) in Lemma 6 to get \(g(\textbf{x}^*)\le (m+2)g(\textbf{x}^T)\). Finally, we have \(f(\textbf{x}^*)\le (m+2)f(\textbf{x}^T)-(m+1)f(\textbf{x}^\lambda )\le (m+2)f(\textbf{x}^T) \) by the nonnegativity of f. Therefore, we find a \(\frac{1}{m+2}\)-approximate solution in Case 1 when f is monotone.
Case 2: among the m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\), there exists at least one satisfying \(w_{\textbf{x}^T}-w_{\bar{y}(b)}+ w_b > B\).
For a fixed \(t\ge \lambda \), KM-KM selects a qualified m-swap \((\bar{a}^t,b^t)\) to update \(\textbf{x}^{t}\) in each t-th iteration. In iteration \(t^*+1\), KM-KM checks the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\), where \(b^{t^*}\in U(\textbf{x}^*)\backslash U(\textbf{x}^{t^*})\) and \(\bar{y}(b^{t^*})\in [U(\textbf{x}^{t^*})\backslash U(\textbf{x}^*)]^m\), in line 11 and removes it, for the first time, because \(w_{\textbf{x}^{t^*}}-w_{\bar{y}(b^{t^*})}+ w_{b^{t^*}} > B\). Define \(\rho _t:=\rho (\bar{a}^t,b^t)\) for \(t\in \{\lambda ,\dots ,t^*-1\}\) and
When \(t\in \{\lambda +1,\ldots ,t^{*}\}\), we combine formula (4) in Lemma 5 and formula (7) in Lemma 6, then rewrite formula (7) in Lemma 6 as below
Using formula (8) in Lemma 6 and Lemma 8, we get \(g(\textbf{x}^*)\le (m+2)[g(\textbf{x}^t)+\frac{(B-w_{\textbf{x}^\lambda })}{m+2}\rho _{t}].\) By formula (9) in Lemma 7, we set \(r=\frac{1}{2}\) in Lemma 9. Therefore, \(f(\textbf{x}^{t^*})\ge \frac{1}{m+2}(1-e^{-(m+2)})f(\textbf{x}^*)\) holds immediately, and we obtain the approximation ratio \(\frac{1}{m+2}(1-e^{-(m+2)})\) in Case 2 when f is monotone. \(\square \)
As shown above, we obtain a \(\frac{1}{m+2}(1-e^{-(m+2)})\) approximation ratio for monotone k-submodular maximization with a knapsack and m matroid constraints, which improves the ratio \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\) of Liu et al. (2022a) for the same problem.
When \(m=1\), i.e., monotone k-submodular maximization with a knapsack and a matroid constraint, we have the following corollary, which also improves the result \(\frac{1}{4}(1-e^{-2})\) in Liu et al. (2022a).
Corollary 1
If the objective function f is monotone for problem (1) with \(m=1\), we can obtain a \(\frac{1}{3}(1-e^{-3})\)-approximate solution in KM-KM by setting \(\lambda =2\).
5 Analysis for non-monotone k-submodular maximization with a knapsack and m matroid constraints
In this section, we further study non-monotone k-submodular maximization with a knapsack and m matroid constraints. In fact, the monotonicity of f is not used in Lemmas 6, 8 and 9, so we only need to give the following Lemma 10. Using Lemmas 6, 8, 9 and 10, we can get an approximation ratio of \(\frac{1}{m+3}(1-e^{-(m+3)})\).
Lemma 10
Considering the current solution \(\textbf{x}^{t^*}\) and the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\) as in Lemma 7, we have
where \(\textbf{1}_{y(b^{t^*}),j^{t^*}}\preceq \textbf{x}^{t^*}\backslash \textbf{x}^\lambda \).
Theorem 2
If the objective function f is non-monotone for problem (1), we can obtain a \(\frac{1}{m+3}(1-e^{-(m+3)})\)-approximate solution in KM-KM by setting \(\lambda \ge \frac{(m+1)(m+3)}{m+2+e^{-(m+3)}}\).
Proof
When \(t=T\), similarly to Theorem 1 in Sect. 4, we divide the m-swaps \((\bar{y}(b), b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) into two cases.
Case 1: all m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\) were rejected only because \(\rho (\bar{y}(b),b)\le 0\), not because of the knapsack constraint.
Combining formula (6) in Lemma 5 with formula (7) in Lemma 6, and then using formula (12) and formula (8) in Lemma 6, we get \(g(\textbf{x}^*)\le (m+3)g(\textbf{x}^T)\). Finally, we have \(f(\textbf{x}^*)\le (m+3)f(\textbf{x}^T)-(m+2)f(\textbf{x}^\lambda )\le (m+3)f(\textbf{x}^T) \) by the nonnegativity of f. Therefore, we find a \(\frac{1}{m+3}\)-approximate solution in Case 1 when f is non-monotone in problem (1).
Case 2: among the m-swaps \((\bar{y}(b),b)\) with respect to \(\textbf{x}^T\) and \(\textbf{x}^*\), there exists at least one satisfying \(w_{\textbf{x}^T}-w_{\bar{y}(b)}+ w_b > B\).
When \(t\in \{\lambda +1,\ldots ,t^{*}\}\), we combine formula (6) in Lemma 5, formula (7) in Lemma 6 and formula (13). Then, using formula (8) in Lemma 6 and Lemma 8, we get \(g(\textbf{x}^*)\le (m+3)[g(\textbf{x}^t)+\frac{(B-w_{\textbf{x}^\lambda })}{m+3}\rho _{t}].\) By formula (14) in Lemma 10, we set \(r=\frac{m+1}{\lambda }\) in Lemma 9. Therefore, \(f(\textbf{x}^{t^*})\ge \frac{1}{m+3}(1-e^{-(m+3)})f(\textbf{x}^*)\) holds immediately, and we obtain the approximation ratio \(\frac{1}{m+3}(1-e^{-(m+3)})\) in Case 2 when f is non-monotone in problem (1). \(\square \)
As shown above, we obtain a \(\frac{1}{m+3}(1-e^{-(m+3)})\) approximation ratio for non-monotone k-submodular maximization with a knapsack and m matroid constraints, thereby extending the monotone result of Liu et al. (2022a) to the non-monotone case.
When \(m=1\), i.e., non-monotone k-submodular maximization with a knapsack and a matroid constraint, we have the following corollary.
Corollary 2
If the objective function f is non-monotone for problem (1) with \(m=1\), we can obtain a \(\frac{1}{4}(1-e^{-4})\)-approximate solution in KM-KM by setting \(\lambda =3\).
6 Conclusions
In this paper, based on the nested greedy and local search algorithm KM-KM (Liu et al. 2022a) and a construction method (Nguyen and Thai 2020), we improve the approximation ratio for problem (1) from \(\frac{1}{2(m+1)}(1-e^{-(m+1)})\) (Liu et al. 2022a) to \(\frac{1}{m+2}(1-e^{-(m+2)})\) by enumerating \(\lambda =2\) items with the largest marginal profits in the optimal solution; for \(m=1\), this yields a \(\frac{1}{3}(1-e^{-3})\) approximation ratio. Furthermore, we extend the result to the non-monotone case and get an approximation ratio of \(\frac{1}{m+3}(1-e^{-(m+3)})\) for problem (1) by enumerating \(\lambda \ge \frac{(m+1)(m+3)}{m+2+e^{-(m+3)}}\) items with \(\lambda \in N_+\); for \(m=1\), this yields a \(\frac{1}{4}(1-e^{-4})\) approximation ratio, obtained by enumerating \(\lambda =3\) items with the largest marginal profits in the optimal solution.
References
Bian AA, Buhmann JM, Krause A, Tschiatschek S (2017) Guarantees for greedy maximization of non-submodular functions with applications. In: International conference on machine learning, 498–507
Calinescu G, Chekuri C, Pal M, Vondrák J (2011) Maximizing a monotone submodular function subject to a matroid constraint. SIAM J Comput 40(6):1740–1766
Chekuri C, Vondrák J, Zenklusen R (2014) Submodular function maximization via the multilinear relaxation and contention resolution schemes. Siam J Comput 43(6):1831–1879
Ene A, Nguyen HL (2019) A nearly-linear time algorithm for submodular maximization with a knapsack constraint. Leibniz Int Proc Inform 132:53:1-53:12
Feldman M, Naor J, Schwartz R (2011) A unified continuous greedy algorithm for submodular maximization. In: 2011 IEEE 52nd annual symposium on foundations of computer science, pp 570–579
Feldman M, and Naor S (2013) Maximization problems with submodular objective functions. PhD thesis, Computer Science Department, Technion, Haifa, Israel
Filmus Y, Ward J (2014) Monotone submodular maximization over a matroid via non-oblivious local search. SIAM J Comput 43(2):514–542
Huang CC, Kakimura N, Mauras S, Yoshida Y (2022) Approximability of monotone submodular function maximization under cardinality and matroid constraints in the streaming model. SIAM J Discret Math 36(1):355–382
Huber A, Kolmogorov V (2012) Towards minimizing k-submodular functions. In: International symposium on combinatorial optimization, pp 451–462
Iwata S, Tanigawa S, Yoshida Y (2016) Improved approximation algorithms for k-submodular function maximization. In: Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pp 404–413
Lee J, Mirrokni VS, Nagarajan V, Sviridenko M (2010) Maximizing nonmonotone submodular functions under matroid or knapsack constraints. SIAM J Discret Math 23(4):2053–2078
Liu Q, Yu K, Li M, Zhou Y (2022) k-submodular maximization with a knapsack constraint and p matroid constraints. Tsinghua Science and Technology, Accepted
Liu Z, Guo L, Du D, Xu D, Zhang X (2022) Maximization problems of balancing submodular relevance and supermodular diversity. J Global Optim 82(1):179–194
Nemhauser GL, Wolsey LA, Fisher ML (1978) An analysis of approximations for maximizing submodular set functions. Math Progr 14(1):265–294
Nguyen L, and Thai M (2020) Streaming k-submodular maximization under noise subject to size constraint. In: International conference on machine learning, pp 7338–7347
Ohsaka N, Yoshida Y (2015) Monotone k-submodular function maximization with size constraints. Adv Neural Inf Process Syst 28:694–702
Oshima H (2021) Improved randomized algorithm for k-submodular function maximization. SIAM J Discret Math 35(1):1–22
Rafiey A, Yoshida Y (2020) Fast and private submodular and \(k\)-submodular functions maximization with matroid constraints. In: International conference on machine learning, pp 7887–7897
Sakaue S (2017) On maximizing a monotone k-submodular function subject to a matroid constraint. Discrete Optim 23:105–113
Sarpatwar KK, Schieber B, Shachnai H (2019) Constrained submodular maximization via greedy local search. Op Res Lett 47(1):1–6
Sviridenko M (2004) A note on maximizing a submodular set function subject to a knapsack constraint. Op Res Lett 32(1):41–43
Tang Z, Wang C, Chan H (2022) On maximizing a monotone k-submodular function under a knapsack constraint. Op Res Lett 50(1):28–31
Ward J, Zivny S (2014) Maximizing k-submodular functions and beyond. ACM Trans Algorithms 12(4):47:1-47:26
Yoshida Y (2019) Maximizing a monotone submodular function with a bounded curvature under a knapsack constraint. SIAM J Discret Math 33(3):1452–1471
This paper is supported by Natural Science Foundation of Shandong Province of China (Nos. ZR2020MA029 and ZR2021MA100) and National Science Foundation of China (No. 12001335).
Appendix
Proof of Lemma 5:
When the k-submodular function f is monotone, the conclusions are as follows. By the monotonicity of f, \(f_{\textbf{x}^t_{l-1}}(\textbf{1}_{v_l,i_l})\ge 0\) holds in Greedy Algorithm; by the definition of g, we have \(g_{\textbf{x}^t_{l-1}}(\textbf{1}_{v_l,i_l})\ge 0\). For \(v_l\) in the l-th iteration of Greedy Algorithm, we compare the position \(i_l\) with \(\textbf{1}_{v_l,i_l}\preceq \textbf{x}^t_l\) and \(i_*\) with \(\textbf{1}_{v_l,i_*}\preceq \textbf{x}^*\).
If \(v_l\in U(\textbf{x}^*)\) with \(i_*=i_l\), then \(\textbf{o}_{l-1}^t=\textbf{o}_l^t\). Therefore, we have
If \(v_l\in U(\textbf{x}^*)\) with \(i_*\ne i_l\), then \(\textbf{o}_{l-1}^t=\textbf{o}_{l-1/2}^t\sqcup \textbf{1}_{v_l,i_*}\) and \(\textbf{o}_{l}^t=\textbf{o}_{l-1/2}^t\sqcup \textbf{1}_{v_l,i_l}\). By the monotonicity of f, the greedy choice in Greedy Algorithm, and orthant submodularity, we get
If \(v_l\notin U(\textbf{x}^*)\), then \(\textbf{o}_{l-1/2}^t=\textbf{o}_{l}^t=\textbf{o}_{l-1}^t\sqcup \textbf{1}_{v_l,i_l}\), and we have
In summary, we have
Summing over l from 1 to \(|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|\), we get
When the k-submodular function f is non-monotone, the conclusion changes as follows. By pairwise monotonicity, \(f_{\textbf{x}^t_{l-1}}(\textbf{1}_{v_l,i_l})\ge 0\) holds in Greedy Algorithm. By the definition of g, we have \(g_{\textbf{x}^t_{l-1}}(\textbf{1}_{v_l,i_l})\ge 0\).
If \(v_l\in U(\textbf{x}^*)\) with \(i_*=i_l\), we have
If \(v_l\in U(\textbf{x}^*)\) with \(i_*\ne i_l\), we get
for any \(i'\in [k]\) with \(i'\ne i_l\). The first inequality follows from pairwise monotonicity; the second from the greedy choice in Greedy Algorithm and orthant submodularity.
If \(v_l\notin U(\textbf{x}^*)\), similar to above, we have
In summary, we have
Summing over l from 1 to \(|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|\), we get
Proof of Lemma 6:
Since \(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}=(\textbf{x}^*\sqcup \textbf{x}^t)\sqcup \textbf{x}^t\), we have \(U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)= U(\textbf{x}^*)\backslash U(\textbf{x}^t)\) and \(\textbf{x}^t\preceq \textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}\). Applying Lemma 3 to \(U(\textbf{x}^*)\) and \(U(\textbf{x}^t)\), there exists a mapping y:
such that \((U(\textbf{x}^t)\backslash \bar{y}(b))\cup \{b\}\in \bigcap _{j=1}^{m}\mathcal {L}_j\) for every \(b\in U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\), where \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\) and each element \(a\in U(\textbf{x}^t)\backslash U(\textbf{x}^*)\) appears in the mapping y no more than m times. Using Lemma 1 and the mapping \(y:b\rightarrow \bar{y}(b)\), we get
for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \).
We now prove the second inequality.
For fixed iteration step \(t\ge \lambda +1\) in KM-KM, the ground set of Greedy Algorithm(f, G) is \(G=\{v_1,\dots ,v_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}\}=U(\textbf{x}^t\backslash \textbf{x}^\lambda )\) in a fixed order as we mentioned earlier.
Each \(b\in U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\) is mapped to \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\). Let \(p=|\bar{y}(b)|\in [m]\) for each such b, and write \(\bar{y}(b)=\{v_{q_1},\dots ,v_{q_p}\}\), where \(1\le q_1<\dots <q_p\le |U(\textbf{x}^t\backslash \textbf{x}^\lambda )|\).
By our settings, we have
for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \). The first inequality is due to orthant submodularity. As mentioned, each element \(a\in U(\textbf{x}^t)\backslash U(\textbf{x}^*)\) appears in the mapping y no more than m times, and in Greedy Algorithm(f, G) all accepted marginal gains satisfy \(g_{\textbf{x}}(\textbf{1}_{v_l,j_l})\ge 0\), whether the input k-submodular function f is monotone or non-monotone; this gives the second inequality. The third inequality uses the nonnegativity of g.
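The m-swap structure above — each incoming element displaces at most one element per matroid, so \(|\bar{y}(b)|\le m\) — can be illustrated on a toy instance of two partition matroids (hypothetical data, not the paper's construction):

```python
def independent(s, partition, capacity=1):
    # Partition-matroid independence: at most `capacity` elements per part.
    counts = {}
    for e in s:
        counts[partition[e]] = counts.get(partition[e], 0) + 1
        if counts[partition[e]] > capacity:
            return False
    return True

# Two partition matroids (m = 2) over {a, b, c}.
part1 = {'a': 0, 'b': 1, 'c': 0}   # in matroid 1, c shares a part with a
part2 = {'a': 0, 'b': 1, 'c': 1}   # in matroid 2, c shares a part with b
current = {'a', 'b'}               # plays the role of U(x^t)
assert independent(current, part1) and independent(current, part2)

# Inserting c forces evicting ybar(c) = {a, b}: one conflict per matroid,
# so |ybar(c)| <= m = 2, matching the bound on the mapping y.
swapped = (current - {'a', 'b'}) | {'c'}
assert independent(swapped, part1) and independent(swapped, part2)
```

Each matroid can veto at most one resident element per insertion, which is exactly why the mapping \(\bar{y}\) never needs more than m evictions per incoming element.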
Proof of Lemma 7:
For problem (1), input a monotone k-submodular function f and \(\lambda =2\) into KM-KM. In the fixed \((t^*+1)\)-th iteration, considering the current solution \(\textbf{x}^{t^*}\) and the m-swap \((\bar{y}(b^{t^*}),b^{t^*})\), we have
The first inequality uses the monotonicity of f, the second orthant submodularity, and the third the greedy choice of \(\textbf{x}^t\) for \(t\in \{1,2\}\). Similarly, we have
Combining the above two formulas, we have
Proof of Lemma 8:
Given a fixed \(t\in \{\lambda ,\dots ,t^*\}\), by the greedy choice in the t-th iteration and the assumption about \(t^*+1\), we have
for m-swaps \((\bar{y}(b),b)\) with \(b\in U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\) and \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\). Since \(U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)=U(\textbf{x}^*)\backslash U(\textbf{x}^t)\), we have
Combining the above formulas, we get
for \(\textbf{1}_{y(b),j}\preceq \textbf{x}^t\backslash \textbf{x}^\lambda \).
Proof of Lemma 9:
We follow a proof framework inspired by Sarpatwar et al. (2019) to obtain
Let \(B_\lambda =0\) and \(B_{t}=\sum _{\tau =\lambda +1}^{t}w_{b^\tau }\) for any \(t\in \{\lambda +1,\ldots ,t^{*}+1\}\). Define \(B'= B_{t^{*}+1}=B_{t^{*}}+w_{b^{t^*}}\) and \(B''= B-w_{\textbf{x}^\lambda }\). By the assumption of case 2, we have \(B'> B\ge B''\). For \(j = 1,\ldots , B'\), we define \(\gamma _j=\rho _{t-1}\) when \(j = B_{t-1}+1,\ldots , B_t\). Note that \(g(\textbf{x}^\tau )-g(\textbf{x}^{\tau -1})=w_{b^{\tau -1}}\rho _{\tau -1}\); using the above definition, we obtain that
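The reindexing here spreads each iteration's gain over unit-weight densities \(\gamma _j\) that are constant on blocks of length equal to the iteration's knapsack weight. A small numeric sketch (hypothetical weights and densities, labels ours) confirms that summing \(\gamma _j\) over a block recovers that iteration's gain \(w\cdot \rho \):

```python
import math

# Hypothetical per-iteration knapsack weights and marginal densities rho,
# one density per iteration block, as in the definition of gamma_j.
weights = [3, 2, 4]
densities = [5.0, 2.5, 1.0]

# Block boundaries B_t: B_lambda = 0, then cumulative sums of the weights.
B = [0]
for w in weights:
    B.append(B[-1] + w)

# gamma_j is constant on each block, equal to that block's density.
gammas = []
for t in range(len(weights)):
    gammas.extend([densities[t]] * weights[t])

# Summing gamma_j over the t-th block recovers the gain w * rho.
for t in range(len(weights)):
    assert math.isclose(sum(gammas[B[t]:B[t + 1]]), weights[t] * densities[t])
```

This step-function view is what allows the per-iteration recursion to be telescoped into the exponential bound that follows.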
for each \(t\in \{\lambda +1,\ldots ,t^{*}\}\), and
Using \(g(\textbf{x}^*)\le \alpha [g(\textbf{x}^t)+\frac{(B-w_{\textbf{x}^\lambda })}{\beta }\rho _{t}]\) and (32), we have the following inequalities
From (33), (34) and Lemma 4, we obtain that
Using \(1-\frac{1}{\alpha }(1-e^{-\beta })-r\ge 0\) and (35), we have
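The bound derived here matches the ratios stated in the abstract. As a quick numeric sanity check (our own evaluation, assuming the parameterization \(\alpha =\beta =m+2\) for the monotone case and \(\alpha =\beta =m+3\) for the non-monotone case suggested by those ratios):

```python
import math

def claimed_ratio(m, monotone=True):
    # (1/(m+2))(1 - e^{-(m+2)}) for monotone f,
    # (1/(m+3))(1 - e^{-(m+3)}) for non-monotone f.
    c = m + 2 if monotone else m + 3
    return (1.0 - math.exp(-c)) / c

# e.g. one matroid plus a knapsack (m = 1)
r_mono = claimed_ratio(1)          # (1 - e^{-3}) / 3 ≈ 0.3167
r_non = claimed_ratio(1, False)    # (1 - e^{-4}) / 4 ≈ 0.2454
assert r_mono > r_non              # the monotone case gives the better bound
```

As expected, both ratios decrease as the number m of matroid constraints grows, and the monotone guarantee dominates the non-monotone one for every m.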
Proof of Lemma 10:
Recall the setting from the proof of Lemma 6: for a fixed iteration step \(t\ge \lambda +1\) in KM-KM, the ground set of Greedy Algorithm(f, G) is \(G=\{v_1,\dots ,v_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|}\}=U(\textbf{x}^t\backslash \textbf{x}^\lambda )\) in a fixed order, as mentioned earlier. Each \(b\in U(\textbf{o}^t_{|U(\textbf{x}^t\backslash \textbf{x}^\lambda )|})\backslash U(\textbf{x}^t)\) is mapped to \(\bar{y}(b)\in [U(\textbf{x}^t)\backslash U(\textbf{x}^*)]^m\). Let \(p=|\bar{y}(b)|\in [m]\) for each such b, and write \(\bar{y}(b)=\{v_{q_1},\dots ,v_{q_p}\}\), where \(1\le q_1<\dots <q_p\le |U(\textbf{x}^t\backslash \textbf{x}^\lambda )|\). We have
for \(\tau \in \{1,\dots ,\lambda \}\). The first inequality is due to pairwise monotonicity, that is, \(-f_{\textbf{x}}((v,i))\le f_{\textbf{x}}((v,j))\) for \(i\ne j\in [k]\). The second is due to orthant submodularity. The third holds because we greedily choose \(\textbf{x}^{t}\) for \(t\in \{1,\dots ,\lambda \}\). Combining the above \(\lambda \) formulas, we have
Cite this article
Yu, K., Li, M., Zhou, Y. et al. On maximizing monotone or non-monotone k-submodular functions with the intersection of knapsack and matroid constraints. J Comb Optim 45, 93 (2023). https://doi.org/10.1007/s10878-023-01021-w