American researchers tested how a robot's behavior in a group of people affects the group's overall interaction while performing tasks. It turned out that in groups where the robot admitted its mistakes and expressed emotions, the interaction took place on a higher level than with silent or neutral robots. The article is published in the journal Proceedings of the National Academy of Sciences.
An important part of research in robotics, alongside the development of new technologies, is the study of how humans interact with robots. These studies often yield counterintuitive results, such as the "uncanny valley" effect, or the finding that people prefer robots that make mistakes. In addition, studies show that the perceived trustworthiness and attractiveness of robots correlate with how, and whether, they explain their actions.
A group of scientists from Yale University led by Nicholas Christakis combined these two lines of research and tested how a robot's errors and explanations affect interaction in a group consisting of people and a robot. Specifically, the purpose of the study was to check whether differences in the robot's behavior affect not only robot-human interaction but also human-to-human interaction.
In total, the study included 153 volunteers, who together with a small humanoid robot, NAO, formed 51 groups of four participants each. During the experiment, the three people and the robot in a group sat on the four sides of a table, with tablets running the game in front of them. In each round, participants were given a set of rails of different lengths and had 40 seconds to build the optimal train route between two points. The game lasted 30 rounds in total, after which the volunteers filled out a questionnaire; the researchers then explained the research questions to them, along with other details that could have affected the results had they been known in advance.