Archive for category Human-Robot Interaction

Eldercare will change

Here is a link to a short video summary of our work for the SOCRATES EU project, whose overarching focus is robotics in eldercare. The use-cases have become extremely relevant with the coronavirus outbreak. We had often assumed that the lack of sufficient professional personnel would be the main reason for deploying social robots among the older population. Now we also see that robots are necessary for keeping older adults safe and for avoiding the spread of the virus among those who are most vulnerable.


In SOCRATES we (our doctoral student Samuel Olatunji, my colleague Yael Edan, and myself) look at the necessary balance between the robot’s level of autonomy (LOA) and the amount and pace of information it provides (LOT – level of transparency), so that people get just the right amount of feedback from the robot: too much may distract them, too little may cause confusion, distrust, and abandonment of the technology.

Our participants are active older adults who were willing to come to the lab and help us develop our algorithms and applications. We wish them all well and hope they stay healthy. We hope to see them all again in the lab when the time comes and it is possible again.

The robot that you see in the video is not teleoperated; it moves autonomously, following the user’s path and pace. This is the YouTube link:
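To give a flavor of what "following the user’s path and pace" can mean in code, here is a purely illustrative sketch, not the controller used in our project: a hypothetical proportional follow step, assuming a differential-drive robot and a person tracker that reports the user’s position in the robot’s frame. The function name, gains, and limits are all my own assumptions.

```python
# Hypothetical minimal person-following step (illustrative only).
import math

def follow_step(person_x, person_y, desired_dist=1.2,
                k_lin=0.8, k_ang=1.5, max_lin=0.6, max_ang=1.0):
    """Compute (linear, angular) velocity commands from the person's
    position in the robot frame (x forward, y left, in meters)."""
    dist = math.hypot(person_x, person_y)
    heading = math.atan2(person_y, person_x)  # bearing to the person
    # Proportional control: close the distance gap, turn toward the person.
    lin = max(-max_lin, min(max_lin, k_lin * (dist - desired_dist)))
    ang = max(-max_ang, min(max_ang, k_ang * heading))
    if dist < desired_dist:  # never drive into the user
        lin = min(lin, 0.0)
    return lin, ang

# e.g. follow_step(2.2, 0.0) -> (0.6, 0.0): person straight ahead,
# too far away, so drive forward at the speed cap without turning.
```

Matching the user’s pace falls out of the proportional distance term: a faster walker keeps the gap large, so the commanded speed stays high; a slower walker lets the gap shrink, so the robot slows down.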

To read more about this work and about Samuel:




I have been busy

Not very many posts in 2019, but this does not mean that we have not conducted some really interesting research in our lab. On the contrary!

So, over the next few weeks I will begin posting some of our most recent accomplishments.

Here is just one:

Closing the feedback loop – the relationship between input and output modalities in HRI, presentation at the Human Friendly Robotics workshop in Rome 2019

ABC student poster – Tamara Markovich and Shanee Honig




Understanding and Resolving Failures in Human-Robot Interaction

Shanee Honig and I have just finished a literature review on resolving failures in HRI. The full publication can be found in Frontiers.

We mapped a taxonomy of failures, separating technical failures from interaction failures (see the figure below).


A human-robot failure taxonomy

After reviewing the cognitive considerations that influence people’s ability to detect and solve robot failures, as well as the literature on failure handling in human-robot interaction, we developed an information-processing model called the Robot Failure Human Information Processing (RF-HIP) model. It is modeled after Wogalter’s C-HIP (itself an elaboration of Shannon and Weaver’s 1948 model of communication) and describes the way people perceive, process, and act on failures in human-robot interaction.

  • RF-HIP can be used as a tool to systematize the assessment process involved in determining why a particular approach to handling failure is successful or unsuccessful in order to facilitate better design.



The RF-HIP (Robot Failure – Human Information Processing) model


While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people’s perceptions and feelings towards robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction, and mitigating failures. Since little research has been done on these topics within the Human-Robot Interaction (HRI) community, insights from the fields of human-computer interaction (HCI), human factors engineering, cognitive engineering, and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP) that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human-robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or on the cognitive, psychological, and social determinants that impact the design of mitigation strategies.
By providing the stages of human information processing, RF-HIP can be used as a tool to promote the development of user-centered failure-handling strategies for human-robot interactions.




Towards Socially Aware Person-Following Robots

Here is a new publication from our lab: a literature review focused on person following in robotics from the perspective of the user. Published in IEEE THMS.



Significant R&D has been invested in technical issues related to person following. However, a systematic approach for designing robotic person-following behavior that maintains appropriate social conventions across contexts has not yet been developed. To understand why this may be the case, an in-depth literature review of 221 articles on person-following robots was performed, from which 107 are referenced. From these papers, six relevant topics were identified that shed light on the types of social interactions that have been studied in person-following scenarios: a) applications; b) robotic systems; c) environments; d) following strategies; e) human-robot communication; and f) evaluation methods. Gaps in the existing research on person-following robots were identified, mainly in addressing social interaction and user needs, noting that only 25 articles reported proper user studies. Human-related, robot-related, task-related, and environment-related factors that are likely to influence people’s spatial preferences and expectations of a robot’s person-following behavior are then discussed. To guide the design of socially aware person-following robots, a user-needs layered design framework that combines the four factor categories is proposed. The framework provides a systematic way to incorporate social considerations in the design of person-following robots. Finally, framework limitations and future challenges in the field are presented and discussed.


Multimodal communication for guiding a person following robot

Come meet us at RO-MAN 2017, where Dr. Vardit Sarne-Fleischmann and Shanee Honig will present our work on a gesture vocabulary for a person-following robot.

Abstract— Robots that are designed to support people in different tasks at home and in public areas need to be able to recognize users’ intentions and operate accordingly. To date, research has mostly concentrated on developing the technological capabilities of the robot and the mechanism of recognition. Still, little is known about the navigational commands that people would intuitively use to control a robot’s movement. A two-part exploratory study was conducted to evaluate how people naturally guide the motion of a robot and whether an existing gesture vocabulary used for human-human communication can be applied to human-robot interaction. Fourteen participants were first asked to demonstrate ten different navigational commands while interacting with a Pioneer robot using a Wizard-of-Oz (WoZ) technique. In the second part of the study, participants were asked to identify eight predefined commands from the U.S. Army vocabulary. Results show that simple commands yielded higher consistency among participants in the commands they demonstrated. Voice commands were used more frequently than gestures, though a combination of both was sometimes dominant for certain commands. In the second part, an inconsistency in identification rates for opposite commands was observed. The results of this study could serve as a baseline for developing future command vocabularies, promoting a more natural and intuitive human-robot interaction style.

Link to our poster.


