In this work, we investigate whether a robot's behaviour becomes more transparent when human preferences about the actions it performs are taken into account during the learning process.
The lecture was attended by approximately 30 master's students and hosted by Professor Friederike Eyssel.
This research was conducted in collaboration with Professor Vladimir Estivill-Castro of Pompeu Fabra University.
In collaboration with Prof. Friederike Eyssel's team, we developed the first scale to measure perceived transparency in human-robot interaction.
A study with 143 participants revealed that explanations significantly enhance transparency, though results varied when prior knowledge was incorporated.
The workshop is held in collaboration with the Georgia Institute of Technology, USA.
The study highlights how a robot's use of inner speech significantly enhances the transparency of its learning process.
In collaboration with Prof. Angelo Cangelosi's team, we investigated theory of mind and transparency in human-robot interaction.