ARCH-COMP22 Repeatability Evaluation Report

9 pages • Published: December 13, 2022

Abstract

This report summarizes the repeatability evaluation for the 6th International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP'22). The competition took place as part of the 2022 workshop Applied Verification for Continuous and Hybrid Systems (ARCH), affiliated with the 41st International Conference on Computer Safety, Reliability and Security (SAFECOMP'22). In its sixth edition, 25 tools submitted artifacts through a Git repository for the repeatability evaluation; these artifacts were applied to solve benchmark instances across 7 competition categories. The majority of participants adhered to this year's repeatability-evaluation specification, which required submitting scripts that automatically install and execute each tool in a containerized virtual environment (specifically, Dockerfiles executed within Docker containers). Some categories used performance-evaluation information from a common execution platform. The repeatability results provide a snapshot of current tools and the types of benchmarks for which they are well suited, and they enable others to repeat the analyses. Owing to the diversity of problems in the verification of continuous and hybrid systems, and in keeping with standard practice in repeatability evaluations, tools are evaluated on a pass/fail basis for repeatability.

Keyphrases: artifact evaluation, hybrid systems, reachability, repeatability evaluation, reproducibility, verification

In: Goran Frehse, Matthias Althoff, Erwin Schoitsch and Jeremie Guiochet (editors). Proceedings of the 9th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH22), vol. 90, pages 222-230.
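The submission format the abstract describes (a Dockerfile that installs a tool and runs its benchmark instances) might look like the following minimal sketch. The base image, package list, tool name, and script paths here are hypothetical illustrations, not taken from any actual ARCH-COMP submission.

```dockerfile
# Minimal sketch of a repeatability-package Dockerfile
# (hypothetical tool name and paths; not from any actual submission).
FROM ubuntu:20.04

# Install the tool's dependencies non-interactively.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Copy the tool and the benchmark instances into the image.
COPY . /repeatability
WORKDIR /repeatability
RUN pip3 install -r requirements.txt

# Running the container executes all benchmark instances and
# writes results (e.g. verdicts and timings) to stdout.
CMD ["python3", "run_all_benchmarks.py"]
```

With such a package, an evaluator can repeat an analysis with two standard Docker commands, e.g. `docker build -t mytool .` followed by `docker run mytool`, which is what makes fully automated installation and execution possible.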