
Frank Wolfe Algorithm for Nonmonotone One-sided Smooth Function Maximization Problem

EasyChair Preprint 9245, version 1

10 pages
Date: November 4, 2022

Abstract

In this paper, we study the problem of maximizing a nonmonotone one-sided-$\eta$-smooth ($OSS$ for short) function $\psi(x)$ over a down-closed convex polytope. The concept of $OSS$ was first proposed by Mehrdad et al. to capture properties of the multilinear extension of certain set functions; it generalizes the continuous $DR$-submodular functions. The $OSS$ property guarantees an alternative bound based on Taylor expansion. When the objective is a nonmonotone diminishing-return ($DR$) submodular function, Bian et al. gave a $1/e$-approximation algorithm with regret bound $O(\frac{LD^{2}}{2K})$; on general convex sets, Dürr et al. gave a $\frac{1}{3\sqrt{3}}$-approximation with regret $O(\frac{LD^{2}}{(\ln K)^{2}})$. Here we consider maximizing the more general $OSS$ function and show that, by adjusting the iterative step of the Jump-Start Frank-Wolfe algorithm, a $1/e$ approximation can still be obtained, at the cost of a larger regret bound $O(\frac{L(\mu D)^{2}}{2K})$, where $L$, $\mu$, $D$ are parameters (see Table 1 in the paper). The larger the parameter $\eta$ we choose, the larger the regret, since $\mu=\left(\frac{\beta}{\beta+1}\right)^{-2\eta}$ with $\beta\in (0,1]$.
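To make the dependence on $\eta$ concrete: taking $\beta=1$ gives $\mu=\left(\frac{1}{2}\right)^{-2\eta}=4^{\eta}$, so the regret bound $O(\frac{L(\mu D)^{2}}{2K})$ scales as $16^{\eta}\cdot\frac{LD^{2}}{2K}$, i.e., exponentially in $\eta$.

As a rough illustration of the algorithmic template (not the paper's exact method), below is a minimal Python sketch of a Frank-Wolfe / continuous-greedy-style loop over a down-closed polytope. It assumes a gradient oracle grad and a linear-maximization oracle lmo over the polytope; the gradient clipping and the damped 1/K step are common devices in nonmonotone analyses, and the paper's adjusted step rule may differ.

import numpy as np

def jump_start_frank_wolfe(grad, lmo, d, K=100):
    # Sketch of a Frank-Wolfe (continuous-greedy-style) loop for maximizing
    # a nonmonotone objective over a down-closed polytope P. Assumptions:
    # P is contained in [0,1]^d and the origin is feasible (down-closedness).
    x = np.zeros(d)
    for _ in range(K):
        g = grad(x)
        # Ascend only along nonnegative gradient coordinates -- a common
        # device for nonmonotone objectives (an assumption in this sketch).
        v = lmo(np.maximum(g, 0.0))
        # Damped ("measured") step keeps x inside [0,1]^d; the paper's
        # adjusted step size may differ from the uniform 1/K used here.
        x = x + (v * (1.0 - x)) / K
    return x

# Toy usage: P = [0,1]^2 with psi(x) = <c, x> - x^T A x (a nonmonotone toy).
A = np.array([[1.0, 0.2], [0.2, 1.0]])
c = np.array([1.5, 0.5])
grad = lambda x: c - 2.0 * A @ x
lmo = lambda g: (g > 0).astype(float)  # exact LMO over the unit box
print(jump_start_frank_wolfe(grad, lmo, d=2))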

Keyphrases: Frank-Wolfe, nonmonotone, one-sided smooth, approximation algorithm

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the entry below (note the publisher folded into the year field) is a workaround that produces the correct reference:
@booklet{EasyChair:9245,
  author    = {Hongxiang Zhang and Chunlin Hao and Wenying Guo and Yapu Zhang},
  title     = {Frank Wolfe Algorithm for Nonmonotone One-sided Smooth Function Maximization Problem},
  howpublished = {EasyChair Preprint 9245},
  year      = {EasyChair, 2022}}