Authors: Vincent J. Koeman; Koen V. Hindriks; Catholijn M. Jonker
Addresses: Delft University of Technology, Van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands (all authors)
Abstract: Debugging is notoriously difficult and time-consuming, yet essential for ensuring the reliability and quality of a software system. To reduce debugging effort and enable automated failure detection, we propose an automated testing framework for detecting failures in cognitive agent programs. Our approach is based on the assumption that the modules within such programs form a natural unit for testing. We identify a minimal set of temporal operators that enables the specification of test conditions and show that the resulting test language is sufficiently expressive to detect all failure types of an existing failure taxonomy. We also introduce an approach for specifying test templates that supports a programmer in writing tests. Furthermore, an empirical analysis of agent programs allows us to evaluate whether our approach using test templates adequately detects failures, and to determine the effort required to do so in both single- and multi-agent systems. We also discuss a concrete implementation of the proposed framework for the GOAL agent programming language, developed for the Eclipse IDE. Using this framework, evaluations were performed on test files and accompanying questionnaires submitted by 94 novice programmers.
Keywords: multi-agent systems; MASs; testing; verification; cognitive agents; failure detection; testing framework; failure taxonomy; test templates; run-time validation; agent-oriented software engineering.
International Journal of Agent-Oriented Software Engineering, 2018 Vol.6 No.3/4, pp.275-308
Received: 31 May 2017
Accepted: 27 Apr 2018
Published online: 26 Nov 2018