Common Metrics for Benchmarking Human-Machine Teams
EasyChair Preprint 14092
10 pages • Date: July 23, 2024

Abstract

In the evolving landscape of human-machine teams, benchmarking performance is crucial to evaluate and enhance collaborative efficacy. This paper presents a comprehensive review of common metrics used to benchmark human-machine teams, focusing on dimensions such as accuracy, efficiency, adaptability, robustness, and user satisfaction. We analyze traditional metrics like task completion time, error rates, and workload distribution, as well as advanced measures including situation awareness, trust, and cognitive load. By examining these metrics through various case studies and experimental setups, we highlight their strengths, limitations, and applicability across different domains. Our findings underscore the importance of multi-faceted evaluation frameworks that integrate both quantitative and qualitative measures to provide a holistic assessment of human-machine collaboration. This study aims to guide researchers and practitioners in selecting appropriate metrics for their specific contexts, thereby fostering the development of more effective and reliable human-machine teams.

Keyphrases: Adaptability Metrics, Benchmarking, Human-Machine Teams, Interaction Metrics, Performance Metrics
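As a purely illustrative sketch (not taken from the paper itself), the snippet below shows one way a multi-faceted evaluation framework might blend normalized quantitative metrics (task completion time, error rate) with qualitative survey scores (trust, user satisfaction) into a single composite score. All names, weights, normalization bounds, and scales here are hypothetical assumptions for demonstration only.

```python
from dataclasses import dataclass


@dataclass
class TeamMetrics:
    # Quantitative measures (hypothetical units and scales)
    completion_time_s: float      # task completion time in seconds
    error_rate: float             # errors per task; 0.0 = error-free
    # Qualitative survey measures, e.g. Likert 1-7 (hypothetical scale)
    trust_score: float
    satisfaction_score: float


def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw metric onto [0, 1], where 1 is best (handles inverted scales)."""
    span = best - worst
    return max(0.0, min(1.0, (value - worst) / span))


def composite_score(m: TeamMetrics, weights=None) -> float:
    """Weighted blend of quantitative and qualitative metrics.

    The weights and normalization bounds are illustrative assumptions,
    not values prescribed by the paper.
    """
    weights = weights or {"time": 0.3, "errors": 0.3, "trust": 0.2, "satisfaction": 0.2}
    parts = {
        "time": normalize(m.completion_time_s, worst=600.0, best=60.0),
        "errors": normalize(m.error_rate, worst=1.0, best=0.0),
        "trust": normalize(m.trust_score, worst=1.0, best=7.0),
        "satisfaction": normalize(m.satisfaction_score, worst=1.0, best=7.0),
    }
    return sum(weights[k] * parts[k] for k in weights)


if __name__ == "__main__":
    run = TeamMetrics(completion_time_s=180.0, error_rate=0.1,
                      trust_score=5.5, satisfaction_score=6.0)
    print(f"Composite team score: {composite_score(run):.3f}")
```

In practice, the choice of weights and normalization bounds would itself be a benchmarking decision, which is one reason the paper argues for selecting metrics according to the specific context and domain.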