
On the Evaluation and Comparison of Runtime Verification Tools for Hardware and Cyber-Physical Systems

15 pages. Published: December 14, 2017

Abstract

The need for runtime verification (RV), and for tools that enable RV in practice, is widely recognized. Systems that operate autonomously necessitate on-board RV technologies, from Mars rovers that must sustain operation despite delayed communication from operators on Earth, to Unmanned Aerial Systems (UAS) that must fly without a human on board, to robots operating in dynamic or hazardous environments that must take care to preserve both themselves and their surroundings. Enabling all forms of autonomy, from tele-operation to automated control to decision-making to learning, requires some ability for the autonomous system to reason about itself. The broader class of safety-critical systems requires means of runtime self-checking to ensure that critical functions have not degraded during use.
Runtime verification addresses a vital need for self-referential reasoning and system health management, but there is not currently a generalized approach that answers the lower-level questions. What are the inputs to RV? What are the outputs? What level(s) of the system do we need RV tools to verify, from bits and sensor signals to high-level architectures, and at what temporal frequency? How do we know our runtime verdicts are correct? How do the answers to these questions change for software, hardware, or cyber-physical systems (CPS)? How do we benchmark RV tools to assess their (comparative) suitability for particular platforms? The goal of this position paper is to fuel the discussion of ways to improve how we evaluate and compare tools for runtime verification, particularly for cyber-physical systems.

Keyphrases: cyber physical system verification, runtime benchmarks, runtime verification, temporal logic

In: Giles Reger and Klaus Havelund (editors). RV-CuBES 2017. An International Workshop on Competitions, Usability, Benchmarks, Evaluation, and Standardisation for Runtime Verification Tools, vol 3, pages 123-137.

BibTeX entry
@inproceedings{RV-CuBES2017:Evaluation_Comparison_Runtime_Verification,
  author    = {Kristin Yvonne Rozier},
  title     = {On the Evaluation and Comparison of Runtime Verification Tools for Hardware and Cyber-Physical Systems},
  booktitle = {RV-CuBES 2017. An International Workshop on Competitions, Usability, Benchmarks, Evaluation, and Standardisation for Runtime Verification Tools},
  editor    = {Giles Reger and Klaus Havelund},
  series    = {Kalpa Publications in Computing},
  volume    = {3},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2515-1762},
  url       = {/publications/paper/877G},
  doi       = {10.29007/pld3},
  pages     = {123-137},
  year      = {2017}}