Abstract:
NASA is interested in using intelligent agents in future space missions, for example Mars exploration or deep-space missions. Such missions might involve fully autonomous agents, able to direct their own activity, or might involve human-agent (typically human-robot) teams in which the participants work together. However, the dangers of allowing agents to control, even partially, critical aspects of a mission are clear. Software agents, just like any other computer programs, need to be verified to ensure they are appropriate for use in mission-critical areas. In this talk, we outline some of our ongoing work on verifying, via logical representations of autonomous agents, the behaviour of systems comprising multiple agents. In addition, we indicate the possibilities (and pitfalls) involved in extending this work to more sophisticated human-agent teams.