Abstract:
Human-Robot Interaction (HRI) and Inter-Robot Communication (IRC) are rapidly evolving fields with little standardization. A number of middleware architectures, frameworks and programming languages exist for implementing algorithms on robots, and efforts have been made to enable robots to understand the multitude of available natural languages. Nevertheless, there is a definite lack of intermediary languages for representing symbol grounding mechanisms in robots, and of standards for inter-robot cognitive communication. We address this void by presenting an intermediary meta-language based on a perceptually grounded algorithmic alphabet - the Affordance and kTR Augmented Alphabet based Neuro-Symbolic language, Af-kTRAANS for short. This language yields an abstract layer, sandwiched between the natural language and programming language layers, that robots can use for knowledge representation, sharing and communication, while remaining agnostic to the embodiment, the pertinent human language, and socio-cultural contexts and environments. Based on the k-TR theory of cognitive visual perception and implemented for practical systems using the Affordance Network (AfNet) and the AfRob ontology, the graphical language supports a wide variety of object definition phrases as well as action verbs and object interaction commands, while providing the succinctness necessary for tractable modeling. This paper presents the various aspects of this cognitive inter-robot communication language and demonstrates several examples of its use for common robotic task-based queries, along with the associated grounding mechanisms.