How Many Bits Does it Take to Quantize Your Neural Network?

EasyChair Preprint 1000, version history

Version 1: May 25, 2019, 9 pages.

Version 2: September 12, 2019, 8 pages. We provide an example of the non-monotonicity of robustness and improve our experimental evaluation, which now includes a comparison between encodings and against a recently published gradient descent–based method for quantized networks.

Version 3: February 24, 2020, 18 pages. We provide a comparison between various SMT solvers.

Keyphrases: quantized neural networks, SMT solving, adversarial attacks, bit-vectors

BibTeX entry
BibTeX has no dedicated entry type for preprints. The entry below works around this by placing the publisher name in the year field, so that the rendered reference reads "EasyChair, 2020":
@booklet{EasyChair:1000,
  author       = {Mirco Giacobbe and Thomas A. Henzinger and Mathias Lechner},
  title        = {How Many Bits Does it Take to Quantize Your Neural Network?},
  howpublished = {EasyChair Preprint 1000},
  year         = {EasyChair, 2020}}
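
With a standard bibliography style such as plain, this entry should render along the lines of: Mirco Giacobbe, Thomas A. Henzinger, and Mathias Lechner. How Many Bits Does it Take to Quantize Your Neural Network? EasyChair Preprint 1000, EasyChair, 2020. (The exact punctuation depends on the style used.)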