
Reinforcement Learning for Variable Selection in a Branch and Bound Algorithm

EasyChair Preprint 2572

2 pages · Date: February 5, 2020

Abstract

Mixed integer linear programs are commonly solved by Branch and Bound algorithms. A key factor in the efficiency of the most successful commercial solvers is their fine-tuned heuristics. In this paper, we leverage patterns in real-world instances to learn from scratch a new branching strategy optimised for a given problem, and compare it with a commercial solver.
We propose FMSTS, a novel Reinforcement Learning approach specifically designed for this task. The strength of our method lies in the consistency between a local value function and a global metric of interest. In addition, we provide insights for adapting known RL techniques to the Branch and Bound setting, and present a new neural network architecture inspired by the literature. To our knowledge, this is the first time Reinforcement Learning has been used to fully optimise the branching strategy. Computational experiments show that our method is well suited to the task and generalises well to new instances.
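To make the setting concrete, the following is a minimal, self-contained sketch of the general idea described in the abstract; it is not the authors' FMSTS algorithm, and all names, features and hyperparameters are illustrative assumptions. A linear value model is trained, with epsilon-greedy exploration and stochastic gradient steps, to predict the size of the subtree opened by each branching candidate in a toy 0/1 knapsack Branch and Bound, and the greedy policy branches on the candidate with the smallest prediction, so that the locally learned value is aligned with the global tree size.

import random
import numpy as np

random.seed(0)
np.random.seed(0)


def lp_bound(values, weights, capacity, fixed):
    """Fractional-knapsack LP bound with some variables fixed to 0/1.
    Returns (bound, relaxed_solution) or (None, None) if infeasible."""
    n = len(values)
    cap = capacity - sum(weights[i] for i in fixed if fixed[i] == 1)
    if cap < 0:
        return None, None
    x = {i: float(v) for i, v in fixed.items()}
    free = sorted((i for i in range(n) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        take = min(1.0, cap / weights[i])
        x[i] = take
        cap -= take * weights[i]
        if cap <= 0:
            break
    for i in free:
        x.setdefault(i, 0.0)
    return sum(values[i] * x[i] for i in range(n)), x


def node_features(i, x, values):
    """Tiny illustrative feature vector for branching candidate i."""
    frac = abs(x[i] - round(x[i]))
    return np.array([1.0, frac, values[i] / max(values)])


class Brancher:
    """Linear model predicting the size of the subtree opened by branching
    on a candidate; the greedy policy picks the smallest prediction."""

    def __init__(self, dim=3, lr=0.01, eps=0.2):
        self.w = np.zeros(dim)
        self.lr, self.eps = lr, eps

    def select(self, candidates, feats):
        if random.random() < self.eps:            # epsilon-greedy exploration
            return random.choice(candidates)
        return min(candidates, key=lambda i: float(feats[i] @ self.w))

    def update(self, feat, observed_size):        # one SGD step on squared error
        self.w += self.lr * (observed_size - float(feat @ self.w)) * feat


def solve(values, weights, capacity, brancher):
    """Depth-first Branch and Bound; returns the total number of nodes."""
    incumbent = [0.0]

    def recurse(fixed):
        bound, x = lp_bound(values, weights, capacity, fixed)
        if bound is None or bound <= incumbent[0] + 1e-9:
            return 1                              # infeasible or pruned node
        if all(abs(v - round(v)) < 1e-6 for v in x.values()):
            incumbent[0] = max(incumbent[0], bound)
            return 1                              # integral leaf
        candidates = [i for i in range(len(values)) if i not in fixed]
        feats = {i: node_features(i, x, values) for i in candidates}
        i = brancher.select(candidates, feats)
        size = 1 + recurse({**fixed, i: 0}) + recurse({**fixed, i: 1})
        brancher.update(feats[i], size)           # target = observed subtree size
        return size

    return recurse({})


if __name__ == "__main__":
    brancher = Brancher()
    for episode in range(30):                     # random knapsack instances
        n = 10
        values = [random.randint(5, 30) for _ in range(n)]
        weights = [random.randint(3, 20) for _ in range(n)]
        tree_size = solve(values, weights, sum(weights) // 3, brancher)
        if episode % 5 == 0:
            print(f"episode {episode:2d}  tree size {tree_size}")

In this sketch the learning target is simply the observed subtree size, which is one way to keep a locally learned value consistent with the global metric of interest (total tree size); the paper's actual formulation, neural network architecture and training procedure are described in the full preprint.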

Keyphrases: Branch and Bound, Branching Strategy, Mixed Integer Linear Programming, Neural Network, Reinforcement Learning

BibTeX entry
BibTeX does not provide an entry type for preprints; the following is a workaround for producing the correct reference:
@booklet{EasyChair:2572,
  author    = {Marc Etheve and Zacharie Alès and Côme Bissuel and Olivier Juan and Safia Kedad-Sidhoum},
  title     = {Reinforcement Learning for Variable Selection in a Branch and Bound Algorithm},
  howpublished = {EasyChair Preprint 2572},
  year      = {EasyChair, 2020}}