Controlling the Torobo Humanoid using a motion capture suit
Spontaneous interactions between two robots
Two OP2 humanoid robots perform imitative interaction through learning. They show spontaneous turn-taking and switching of movement patterns. This experiment was conducted by Jungsik Hwang, Nadine Wirkuttis, and Jun Tani.
Self-organization of functional hierarchy in MTRNN
A humanoid robot using an MTRNN develops a functional hierarchy through learning: a set of primitive behaviors is learned in the lower level, characterized by fast dynamics, and their sequential combinations are learned in the higher level, characterized by slow dynamics. See Yamashita and Tani (2008) for details.
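As an illustration of the two-timescale idea, here is a minimal sketch (not the authors' code) of the leaky-integrator update at the core of an MTRNN: fast context units with a small time constant and slow context units with a large one are recurrently coupled, so primitives can form in the fast layer while their sequencing forms in the slow layer. The layer sizes, time constants, and random weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fast, n_slow = 30, 10
tau_fast, tau_slow = 2.0, 50.0       # slow units integrate over a much longer horizon
tau = np.concatenate([np.full(n_fast, tau_fast), np.full(n_slow, tau_slow)])

# recurrent weights within and across the two context layers (random placeholders)
W = rng.normal(0.0, 0.1, size=(n_fast + n_slow, n_fast + n_slow))
u = np.zeros(n_fast + n_slow)        # internal (membrane) potentials

def step(u, x_in):
    """One leaky-integration step; sensory input drives the fast units only."""
    c = np.tanh(u)                   # context activations
    drive = W @ c
    drive[:n_fast] += x_in
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * drive

for t in range(100):
    u = step(u, x_in=rng.normal(0.0, 0.1, size=n_fast))
```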
iCub robot controlled by MTRNN
The iCub robot is controlled by an MTRNN. This work was done by Martin Peniak at the University of Plymouth.
Spontaneous generation of actions by developing deterministic chaos
A humanoid robot spontaneously generates sequences of learned primitives. Chaos self-organized in the higher level of the artificial brain (an MTRNN) generates pseudo-stochastic sequences of moving an object among the left, middle, and right positions on a table.
Goal-directed planning using a framework analogous to active inference
Goal-directed planning based on the robot's visual image is performed using the predictive-coding and active-inference frameworks. The model is implemented with an MTRNN (see Arie et al., 2009 for details).
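As a toy illustration of this planning scheme (not the model in Arie et al., 2009), the sketch below fixes a trained forward model and infers a latent "intention" vector z by gradient descent on the error between the predicted final visual state and the goal image. The linear-plus-tanh forward model and all sizes are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.5, size=(8, 4))   # stand-in for a trained forward model

def predict(z):
    """Map an intention vector to a predicted (final) visual state."""
    return np.tanh(A @ z)

goal = predict(rng.normal(size=4))      # a reachable goal image, for the demo
z = np.zeros(4)                         # initial intention state
lr = 0.1
for _ in range(500):
    pred = predict(z)
    err = pred - goal                   # prediction error with respect to the goal
    # gradient of 0.5*||err||^2 through the tanh and the linear map
    z -= lr * A.T @ ((1.0 - pred**2) * err)

print(np.abs(predict(z) - goal).max())  # the error shrinks as the plan converges
```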
Pathology of schizophrenia reconstructed in a humanoid robot
The pathology of schizophrenia (delusion of control) is reconstructed in a humanoid robot. The delusion of control emerges when the top-down and bottom-up interactions in the MTRNN malfunction.
Embodied language
A mobile robot with a hand learns to associate primitive sentences with the corresponding behaviors, with a certain level of generalization. In the video, the robot recognizes the sentence "hit red" and generates the corresponding behavior. The robot was implemented with the RNNPB model.
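The sketch below illustrates the parametric-bias (PB) mechanism behind RNNPB with toy weights (not the trained model): a single shared recurrent network generates different behavior sequences depending on a small static PB vector, and recognizing a sentence amounts to inferring the PB that best explains it. The sizes, weights, and PB values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_h, n_pb, n_out = 16, 2, 3
W_h = rng.normal(0.0, 0.4, size=(n_h, n_h))
W_pb = rng.normal(0.0, 1.0, size=(n_h, n_pb))
W_o = rng.normal(0.0, 0.5, size=(n_out, n_h))

def generate(pb, steps=20):
    """Roll out the shared network; the static PB vector biases every step."""
    h = np.zeros(n_h)
    outs = []
    for _ in range(steps):
        h = np.tanh(W_h @ h + W_pb @ pb)
        outs.append(W_o @ h)
    return np.array(outs)

# two PB settings yield two distinct motor sequences from identical weights
seq_a = generate(pb=np.array([1.0, -1.0]))   # e.g. the behavior bound to "hit red"
seq_b = generate(pb=np.array([-1.0, 1.0]))   # a second learned behavior
```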
Online modification of a robot's motor plan using a framework analogous to active inference
The robot performed online modification and execution of its motor program using a framework analogous to active inference (Friston et al., 2010), implemented in a hierarchically organized RNN. The robot was trained with two types of behavior patterns associated with two positions of a visual object. The video shows the moment the environmental situation changes, when the visual object is moved from a habituated position to another. The representation in the immediate past window, as well as the future movement plan, is modified online by backpropagating the prediction error through time within the past window. See (Tani, 2003) for details.
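The sketch below illustrates this error-regression idea in a toy linear model (not the paper's network): the latent state at the head of a short past window is adjusted by gradient descent on the prediction error accumulated inside the window, and the future plan is then regenerated from the corrected state. The dynamics, observation map, and window length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, -0.2], [0.2, 0.9]])   # toy latent dynamics
C = rng.normal(0.0, 1.0, size=(3, 2))     # toy observation map
W = 10                                     # past-window length

def rollout(z0, steps):
    """Generate the latent trajectory from an initial state."""
    zs, z = [], z0
    for _ in range(steps):
        z = A @ z
        zs.append(z)
    return np.array(zs)

obs = (C @ rollout(rng.normal(size=2), W).T).T   # "true" recent observations
z0 = np.zeros(2)                                 # current (wrong) belief

for _ in range(300):                             # error regression over the window
    zs = rollout(z0, W)
    err = zs @ C.T - obs                         # prediction errors in the window
    # gradient of 0.5*sum||err_t||^2 w.r.t. z0, propagated back through time
    g = np.zeros(2)
    for t in reversed(range(W)):
        g = A.T @ (C.T @ err[t] + g)
    z0 -= 0.01 * g

future_plan = rollout(z0, 20)                    # re-planned future from corrected state
```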