A novel idea for the creation of an intelligent interface that allows the remote control of arbitrarily complex robotic morphologies by translating intuitive human behaviours into purposeful robotic actions.
Taking inspiration from human-robot interaction, ergonomic principles, and current advances in machine learning, artificial intelligence, and autonomous robotics, this project proposes a human-centric framework for robot control. We realise the interface as a software agent connecting the two endpoints of the system, the human and the robot, and providing adaptive, intelligent robot control. So far, our studies have demonstrated the proposed methodology both for the self-exploration of robotic morphologies and for the acquisition of human behaviours.
Constructing this kind of interface rests on two fundamental elements: on one hand, understanding and building methods for the autonomous exploration of robot behaviours; on the other, finding a suitable methodology for human-machine interaction.
For our purposes, the non-linear dynamics approach put forward by Ralf Der, and in particular the homeokinesis principle, is especially interesting: it is a representative example of the bottom-up approach to robot control and exploits self-organisation. Other examples based on the same principle, exploiting self-organisation of the sensorimotor loop in robotic morphologies, can be found in the work of Martius et al. and Hesse et al.
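To give a flavour of the principle, the following minimal sketch (our own toy construction, not Der's or Martius's implementation) adapts a one-dimensional sensorimotor loop: a forward model is trained on its prediction error, while the controller descends the "time-loop error", i.e. the error made when the current sensor value is reconstructed back through the loop. Minimising that error favours loops that are both predictable and sensitive, which is what produces self-organised exploratory behaviour. The toy world, parameter names, and learning rates below are all assumptions.

import numpy as np

rng = np.random.default_rng(0)

def world(y):
    # Toy one-dimensional "world": the next sensor value is a noisy
    # echo of the motor command (a stand-in for the real robot).
    return 0.8 * y + 0.05 * rng.standard_normal()

def controller(x, C, h):
    return np.tanh(C * x + h)

def time_loop_error(x, x_next, C, h, A, b):
    # Reconstruct x from x_next by inverting the loop
    # x_next ~ A * tanh(C * x + h) + b, then measure the squared
    # reconstruction error: the homeokinetic objective.
    u = np.clip((x_next - b) / A, -0.999, 0.999)
    x_hat = (np.arctanh(u) - h) / C
    return (x - x_hat) ** 2

C, h = 0.5, 0.0            # controller parameters (arbitrary initial values)
A, b = 1.0, 0.0            # forward-model parameters
eps_model, eps_ctrl, d = 0.05, 0.01, 1e-4

x = 0.1
for t in range(1000):
    y = controller(x, C, h)
    x_next = world(y)

    # Forward model: gradient descent on the prediction error.
    xi = x_next - (A * y + b)
    A += eps_model * xi * y
    b += eps_model * xi

    # Controller: numerical gradient descent on the time-loop error.
    gC = (time_loop_error(x, x_next, C + d, h, A, b)
          - time_loop_error(x, x_next, C - d, h, A, b)) / (2 * d)
    gh = (time_loop_error(x, x_next, C, h + d, A, b)
          - time_loop_error(x, x_next, C, h - d, A, b)) / (2 * d)
    C = float(np.clip(C - eps_ctrl * gC, 0.1, 5.0))  # clamps keep the toy stable
    h = float(np.clip(h - eps_ctrl * gh, -1.0, 1.0))

    x = x_next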
Regarding the intuitive aspects of the interface, interesting research is described by Niwa et al. (2010). They define the interaction between the user and the interface as an "intention translation" mechanism, by which user intentions are translated into instructions or commands that the interface can understand. Providing a mapping between user intentions and robot behaviours can lead to an intuitive interface: the operator's intentions, expressed through the manipulation of the input device, are taken into account, making the interfacing process easier and more personalised. In this reversed paradigm, users do not have to familiarise themselves with the interface; instead, the interface learns from its interaction with the user. Based on the user's reactions to the behaviours exhibited by the robot, the interface is able to correlate the two, forming a control pattern.
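As a purely illustrative sketch of such an intention-translation layer (our own construction, not Niwa et al.'s system), the interface below learns, online, a linear map from input-device signals to robot commands, adapting from the user's corrective reactions instead of asking the user to memorise a command set. The class, its parameters, and the feedback signal are all assumptions.

import numpy as np

class IntentionTranslator:
    # Toy intention-translation layer: learns a linear map from
    # input-device features to robot commands, shaped online by
    # the user's corrective reactions.

    def __init__(self, n_inputs, n_commands, lr=0.1):
        self.W = np.zeros((n_commands, n_inputs))
        self.lr = lr

    def translate(self, u):
        # Map the raw device signal to a robot command.
        return self.W @ u

    def adapt(self, u, correction):
        # 'correction' is the user's reaction to the robot's behaviour,
        # expressed as the command they wanted minus the one sent.
        self.W += self.lr * np.outer(correction, u)

# Usage: the interface starts ignorant and is shaped by interaction.
translator = IntentionTranslator(n_inputs=3, n_commands=2)
u = np.array([0.2, -0.5, 0.1])     # e.g. joystick axes or device tilt
v = translator.translate(u)        # robot command actually sent
desired = np.array([0.3, 0.0])     # inferred from the user's reaction
translator.adapt(u, desired - v)   # the interface, not the user, adapts

Note that in this sketch the adaptation burden sits entirely on the interface side, which is the point of the reversed paradigm described above.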
Sensing and acting machines in the form of robots have always attracted attention, and human-robot interaction and remote control have long been thriving fields for both research communities and commercial markets. Whether robotic morphologies are built for entertainment or for more "serious" applications, remote control remains one of the main forms of interaction between humans and robots.
Tele-operation of complicated mechanical devices requires a great deal of knowledge about the interfacing mechanisms and the robotic morphology at hand, both from the operator and from the designer of the controlling interface, so that the interface can be tailored to the given robot and, often, to the operator (see, for example, the bespoke controllers designed to accommodate different types of motor disabilities). To obtain such knowledge, the operator has to undergo suitable training in the use of the interface. To communicate a continuous control sequence, for example, most existing approaches require the operator to issue a sequence of commands through a controlling device; such sequences can be hard to remember, prone to mistakes, and difficult to correct on the fly.
Starting from these observations, we aim to design an integrated methodology, focused on the human, that is capable of seamlessly translating any type of human motion into meaningful robotic actions and behaviours: what we can call, as the title suggests, a human-centric approach to the design of tele-operation for robotic morphologies. The main concept is based on the principle that the interfacing controller should be capable of adaptively "understanding" and translating human motion into controlling commands for the robot, rather than requiring the human operator to learn the use of the interface.
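A minimal sketch of how the two halves could meet, assuming a repertoire of behaviours discovered through self-exploration and a set of recognised human motion patterns (both hypothetical here, as are the names and the scoring rule): the interface pairs each pattern with the behaviour that the user's reactions have historically favoured, so the control pattern emerges from interaction rather than from a pre-designed mapping.

import numpy as np

# Hypothetical repertoire of robot behaviours found by self-exploration,
# and hypothetical human motion patterns recognised by the interface.
BEHAVIOURS = ["roll_forward", "turn_left", "turn_right", "stop"]
GESTURES = ["lean_forward", "tilt_left", "tilt_right", "stand_still"]

# Running score of how well each (gesture, behaviour) pairing was received.
scores = np.zeros((len(GESTURES), len(BEHAVIOURS)))

def choose_behaviour(gesture_idx, explore=0.1):
    # Mostly exploit the best-scored pairing, occasionally try a new one.
    if np.random.random() < explore:
        return np.random.randint(len(BEHAVIOURS))
    return int(np.argmax(scores[gesture_idx]))

def record_reaction(gesture_idx, behaviour_idx, reaction):
    # 'reaction' in [-1, 1]: approval or disapproval inferred from the user.
    scores[gesture_idx, behaviour_idx] += reaction

# Over many interactions, the gesture-to-behaviour mapping is learned
# from the user's reactions instead of being designed in advance.
g = GESTURES.index("lean_forward")
b = choose_behaviour(g)
record_reaction(g, b, reaction=1.0)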
Davide Marocco, Jeremy Goslin, Mike Phillips, Ruth Way (Plymouth University), Dr. Hiroyuki Iizuka (Osaka University)