An Automated Approach to Model-Based Testing of Multi-Agent Systems
Multi-agent systems (MAS) are increasingly used in complex and dynamic environments, which demand that MAS applications work efficiently. MAS are deployed in domains such as e-commerce, banking, air traffic control, and information management because of agents' distinctive features: autonomy to make their own decisions, reactivity to environmental changes, social ability, and pro-activeness in goal-directed behavior to select feasible plans for the current situation.
Software testing plays a vital role in ensuring the quality of multi-agent systems and is a major phase in their development life cycle; an effective testing technique is therefore needed to assure MAS quality. Testing based on extracting test requirements from system models is useful for revealing faults in MAS because testing can start early, without waiting for the complete system to be developed. In the literature, model-based testing (MBT) has been applied to only a few features of MAS, e.g., testing MAS units using system models and testing only one type of interaction between agents. A comprehensive technique that covers aspects ranging from the system-specification level down to detailed plan execution, spanning integration and system-level testing, is still lacking. Existing model-based testing techniques for MAS do not cover all aspects, e.g., dependencies between interactions and goal-plan coverage, and existing goal- and plan-related techniques cover only part of the plans and goals in a specific design artifact. Greater coverage of design artifacts yields higher fault-detection capability.
Prometheus is a well-developed MAS design methodology based on Agent Unified Modeling Language (AUML) notation. Interactions between agents in Prometheus comprise actions, percepts, and messages; however, existing techniques cover only message interactions, which is not enough for a fault-free MAS because action and percept interactions are equally important. A dependency fault occurs when a percept is missing, since percepts are required to trigger events; actions are used to deliver an agent's output to the environment; and messages usually depend on the correct sequential execution of the related actions and percepts involved in an interaction.
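The dependency between percepts, actions, and messages described above can be made concrete with a small sketch. This is an illustrative model, not the paper's tool: interaction steps are typed by kind, and a dependency fault is reported when a step (e.g., a message) executes before the percepts or actions it relies on. All names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    kind: str                # "percept" | "action" | "message"
    name: str
    depends_on: tuple = ()   # names of steps that must occur earlier

def dependency_faults(trace):
    """Return steps whose prerequisites were not observed earlier in the
    execution trace (e.g., a message sent before its triggering percept)."""
    seen, faults = set(), []
    for step in trace:
        missing = [d for d in step.depends_on if d not in seen]
        if missing:
            faults.append((step.name, missing))
        seen.add(step.name)
    return faults

# A message that depends on a percept which never arrived:
trace = [
    Step("action", "QueryStock"),
    Step("message", "SendQuote", depends_on=("StockLevel", "QueryStock")),
]
print(dependency_faults(trace))  # [('SendQuote', ['StockLevel'])]
```

In this reading, a missing-percept dependency fault is simply a message or action whose prerequisite names never appeared earlier in the trace.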
Goals and plans are the key means by which a MAS achieves its targets. Different types of faults can occur if certain plans, goals, or sub-goals, or their order of execution, are incorrect. The literature covers faults such as incorrect beliefs and incorrect context, but certain aspects of MAS are still unaddressed and can cause a MAS to behave unexpectedly, such as inaccurate goal achievement, plan failure, internal agent faults, missing functionality, and scenario-related faults. Such faults can be minimized by ensuring maximum coverage of goals and plans using design artifacts.
We have developed an approach that uses Prometheus design artifacts for integration- and system-level testing of MAS. The AUML interaction protocol describes interaction between agents and the environment, and is further elaborated in process diagrams corresponding to each agent. We present fault models for interaction coverage and goal-plan coverage, in which different integration- and system-level fault types are discussed. In this approach, the different interactions between agents (percepts, actions, and messages) are modeled in a test model, the protocol graph. Coverage criteria for interaction coverage have been devised and applied to generate test paths for interactions between agents. For system-level testing, we construct a second test model, the Goal-Plan Graph (GPG), for goals, sub-goals, and plans, using the Prometheus design artifacts: the goal overview, scenario overview, and agent and capability overview diagrams. We define coverage criteria for system-level testing and apply them to the GPG to generate test paths. Test cases are generated from the test-model paths and their corresponding implementation in an agent development environment; test paths are generated automatically by a tool from the protocol graph and the GPG. We seeded faults in a MAS implementation, executed the interaction- and system-level test cases, and compared expected results, gathered manually, with actual results to evaluate the test cases; failed test cases were further investigated to identify which type of fault was detected.
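The path-generation step can be sketched as follows. This is a minimal illustration, assuming the test model (a protocol graph or Goal-Plan Graph) is a directed graph with a single start node: paths from the start node to sink nodes are enumerated until every edge is covered, a common all-edges coverage criterion. The graph shape and node names are invented for illustration and are not the paper's artifacts.

```python
def all_edge_paths(graph, start):
    """Enumerate start-to-sink paths, keeping each path that covers at
    least one edge not yet covered, until all edges are covered."""
    uncovered = {(u, v) for u, vs in graph.items() for v in vs}
    paths = []

    def dfs(node, path, edges_on_path):
        successors = graph.get(node, [])
        if not successors:                        # sink: candidate test path
            if edges_on_path & uncovered:
                uncovered.difference_update(edges_on_path)
                paths.append(path)
            return
        for nxt in successors:
            if (node, nxt) not in edges_on_path:  # avoid re-walking a loop
                dfs(nxt, path + [nxt], edges_on_path | {(node, nxt)})

    dfs(start, [start], set())
    return paths

# Tiny GPG-like model: one goal achievable by two alternative plans,
# each leading to the same sub-goal.
gpg = {
    "G_root": ["P_planA", "P_planB"],
    "P_planA": ["G_sub"],
    "P_planB": ["G_sub"],
    "G_sub": [],
}
for p in all_edge_paths(gpg, "G_root"):
    print(" -> ".join(p))
```

Each emitted path corresponds to one abstract test path; turning it into an executable test case still requires mapping the nodes onto the agent implementation, as the approach above does via the agent development environment.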