
Learning from Multiple Proofs: First Experiments

13 pages
Published: August 19, 2013

Abstract

Mathematical textbooks typically present only one proof for most theorems. However, in first-order logic every theorem has infinitely many proofs, and mathematicians are often aware of (and even invent new) important alternative proofs and use such knowledge for (lateral) thinking about new problems.

In this paper we start exploring how the explicit knowledge of multiple (human and ATP) proofs of the same theorem can be used in learning-based premise selection algorithms in large-theory mathematics.
Several methods and their combinations are defined, and their effect on the ATP performance is evaluated on the MPTP2078 large-theory benchmark.
Our first findings are that the proofs used for learning significantly influence the number of problems solved, and that the quality of the proofs matters more than their quantity.
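The abstract does not spell out the combination methods themselves. As a rough illustration of the underlying idea, here is a minimal sketch of how premise sets from several proofs of the same theorem might be merged into weighted training labels for a simple k-NN premise selector. All names (`premise_weights`, `knn_rank`, the toy axiom labels) are hypothetical and not taken from the paper; the actual methods and weighting schemes are defined in the paper itself.

```python
from collections import Counter

def premise_weights(proofs):
    """Combine premise sets from several proofs of one theorem.

    `proofs` is a list of premise sets (one per known proof).
    A premise used in many of the proofs gets a higher weight;
    this is one plausible way to exploit multiple proofs, not
    necessarily the scheme evaluated in the paper.
    """
    counts = Counter(p for proof in proofs for p in set(proof))
    n = len(proofs)
    return {p: c / n for p, c in counts.items()}

def knn_rank(theorem_features, training, k=3):
    """Rank candidate premises for a new theorem by k-NN.

    `training` maps each solved theorem to a pair
    (feature set, premise-weight dict as built above).
    Similarity is the Jaccard overlap of symbol features.
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    neighbours = sorted(
        training.values(),
        key=lambda fw: jaccard(theorem_features, fw[0]),
        reverse=True,
    )[:k]
    scores = Counter()
    for feats, weights in neighbours:
        sim = jaccard(theorem_features, feats)
        for premise, w in weights.items():
            scores[premise] += sim * w
    return [p for p, _ in scores.most_common()]

# Toy usage: two proofs of "thm1", one proof of "thm2".
training = {
    "thm1": ({"subset", "union"}, premise_weights([
        {"ax_union", "ax_subset"},
        {"ax_union", "lemma_distrib"},
    ])),
    "thm2": ({"subset"}, premise_weights([{"ax_subset"}])),
}
print(knn_rank({"subset", "union"}, training, k=2))
```

In this toy run, `ax_union` is boosted because it occurs in both known proofs of `thm1`, which mirrors the intuition that learning can weight premises by how consistently they appear across alternative proofs.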

Keyphrases: automated reasoning, automated theorem proving, machine learning, multiple proofs, premise selection

In: Pascal Fontaine, Renate A. Schmidt and Stephan Schulz (editors). PAAR-2012. Third Workshop on Practical Aspects of Automated Reasoning, EPiC Series in Computing, vol 21, pages 82-94.

BibTeX entry
@inproceedings{PAAR-2012:Learning_from_Multiple_Proofs,
  author    = {Daniel Kuehlwein and Josef Urban},
  title     = {Learning from Multiple Proofs: First Experiments},
  booktitle = {PAAR-2012. Third Workshop on Practical Aspects of Automated Reasoning},
  editor    = {Pascal Fontaine and Renate A. Schmidt and Stephan Schulz},
  series    = {EPiC Series in Computing},
  volume    = {21},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {/publications/paper/Pc},
  doi       = {10.29007/nb2g},
  pages     = {82-94},
  year      = {2013}}