Archive for category HRI

PhD Positions SOCRATES @ BGU


See the SOCRATES project recruitment poster

BGU has three open PhD positions

With Prof. Yael Edan, we are looking for a Ph.D. student in Human-Robot Interaction Design. The research topic will be: interaction design for varying levels of automation.

Ben-Gurion University is seeking outstanding candidates for a PhD student position in interaction design for varying levels of automation at the Department of Industrial Engineering and Management. BGU is an internationally recognized research university with over 19,000 students that attracts outstanding faculty and researchers from around the world. The Industrial Engineering and Management Department at BGU includes a multidisciplinary faculty with expertise in operations research, applied statistics, intelligent systems, human factors engineering, and information systems. Advanced, innovative, multidisciplinary robotics research at BGU is conducted under the auspices of the ABC Robotics Initiative.

The recruitment is part of SOCRATES (SOcial Cognitive Robotic Agents in The European Society), a new Marie Skłodowska-Curie European Training Network (ETN) comprising 7 universities and research institutes: Umeå University and Örebro University in Sweden, Universität Hamburg and Fraunhofer IPA, Stuttgart in Germany, CSIC Barcelona in Spain, the University of the West of England, and Ben-Gurion University of the Negev in Israel. Additional non-academic partners are: Pal Robotics, Adele Robots, Alfred Nobel Science Park, Urquhart-Dykes & Lord LLP, Center for Digital Innovation, UMINOVA, Asea Brown Boveri S.A., and Fundació ACE.

In total, 15 Early Stage Researchers (ESRs) will be recruited as PhD students for research on various aspects of social robotics aimed at eldercare. The projects cover a wide spectrum, from technical design of hardware and interaction methodology to personalization, user studies, and robot ethics. The researchers will receive training in both academic and entrepreneurial spirit and expertise, well suited for a career in both academia and industry. The training includes a research project, courses, seminars, and workshops. An overview of all available positions can be found at

The research project: Within the realm of assistive robotics for the elderly, the Ph.D. student will aim to develop advanced human-robot interfaces and means of interaction for dynamically changing situations. The focus is on means to improve coordination between users and their robots, allowing the user and the robot to operate as a team with varying levels of control and autonomy, dependent on context and tasks, in particular in robot learning scenarios. User involvement, and hence interaction quality, will vary as a result of the learning progress. This is particularly important to consider when the robot interacts with older adults, who may have difficulty identifying changes in the robot's behavior.

The student will visit Örebro University and the Ängen test facility in Sweden to record and analyze user acceptance for different interface and interaction designs and modalities. Specific experiments will be designed to simulate the different types of feedback and changing levels of interaction. These will be implemented on robots in different use-case scenarios with older adults and for different modalities and means of interaction. The research will also include two secondments: one to the Bristol Robotics Laboratory, UK, to investigate the relation between adaptive safety control and human-robot interface design, and one industrial secondment to ADELE Robots to investigate practical case studies.

About the position: The successful applicant will receive a competitive salary for a period of three years of full-time research, provided that the expected study and research results are achieved. No teaching is expected. The salary will be based on the standard Marie Skłodowska-Curie Early-Stage Researcher living and mobility allowances. The expected starting date is April 1, 2017.

Admission requirements: Applicants must have completed an MSc or MA thesis in Engineering, Computer Science, Psychology, or Cognitive Science. The applicant must be skilled in both oral and written communication in English and be able to work independently as well as in collaboration with others. We are looking for candidates with strong technical and programming skills. Experience in robotics, human factors, machine learning, and statistics is a merit. Candidates should have an interest in studying human-robot interaction (though a background in such topics is not required) and be passionate about learning and developing knowledge in a novel and exciting area.

Once approved by BGU’s SOCRATES graduate committee, the student must be accepted to BGU’s Kreitman graduate school and obtain a visa and work permit according to the Israeli Ministry of Interior requirements. The candidate must submit a research proposal and pass a qualification exam on the proposal within the first year of studies.

To promote mobility, the following rule applies: at the time of recruitment, the applicants must not have resided or carried out their main activity in Israel for more than 12 months during the last 3 years. Compulsory national service, work in international organizations, and short stays such as holidays are not taken into account. The applicants must not, at the time of recruitment, have spent more than 4 years doing research, and must not have been awarded a doctoral degree.

Application – a complete application should contain the following documents:

  • A cover letter including a description of your research interests, your reasons to apply for the position, and your contact information.
  • A curriculum vitae.
  • Copies of degree certificates, including documentation of completed academic courses and obtained grades.
  • Copy of completed MSc or MA thesis and other original research publications.
  • Contact information for three persons willing to act as references (including your thesis advisor).
  • Documentation of programming skills and software development experience.

Applications must be submitted electronically to the following email by November 30, 2016.

Applications will be accepted until the position is filled.

For additional information about the position, please contact: Prof. Yael Edan or Prof. Tal Oron-Gilad –

For general information about the SOCRATES project, please contact: Prof. Thomas Hellström –


IEEE RO-MAN 2016 presentations

Two of our works have been accepted as full papers for presentation and publication in the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016).

“Postures of a Robot Arm – Window to Robot Intentions?”, authored by my doctoral student Sridatta Chaterjee and co-authored by my colleagues Drs. Oren Shriki and Idit Shalev.

Abstract— The body language of robot arms has rarely been explored as a medium for conveying robot intentions. An exploratory study was done focusing on two questions: one, whether robot arm postures can convey robot intentions, and two, whether participants coming in contact with the robot arm for the first time can associate any meaning to the postures without watching the robot in action or working with it. Thirty-five participants of a wide age range (25-70) took part in this exploratory study. Results show that participants could interpret some postures. Four distinct types of postures were assigned to four separate categories by the majority of participants, irrespective of their age. In addition, postures in categories such as ‘Robot giving object in a friendly manner’, ‘Robot is saying Hi!’, and ‘Robot has been told not to disturb’ show similarity to the body language exhibited by humans and animals when communicating such messages.


Posture 8, what is the robot doing?

“The Influence of Following Angle on Performance Metrics of a Human-Following Robot”, co-authored by our graduate students Shanee Honig and Dror Katz, and my colleague Prof. Yael Edan.

Abstract— Robots that operate alongside people need to be able to move in socially acceptable ways. As a step toward this goal, we study how and under which circumstances the angle at which a robot follows a person may affect the human experience and robot tracking performance. In this paper, we aimed to assess three following angles (0◦, 30◦, and 60◦) under two conditions: when the robot was carrying a valuable personal item and when it was not. Objective and subjective indicators of the quality of following and participants’ perceptions and preferences were collected. Results indicated that the personal item manipulation increased awareness of the quality of the following and of the following angles. Without the manipulation, participants were indifferent to the behavior of the robot. Our following algorithm was successful for tracking at 0◦ and 30◦ angles, yet it must be improved for wider angles. Further research is required to obtain a better understanding of following-angle preferences for varying environment and task conditions.


Following angles of a person-following robot: straight from behind or wider angles?

See you in NY, looking forward to two great presentations!


Following Angle of a Human-Following Robot

Human-following capabilities of robots may become important in assistive robotic applications, facilitating many daily tasks (e.g., carrying personal items or groceries). A robot’s following distance, following angle, and acceleration influence the quality of the interaction between the human and the robot by impacting walking efficiency (e.g., pace, flow, and unwanted stops), user comfort, and robot likability.

Our team gave a presentation at the ICR 2016 conference focusing on “Subjective preferences regarding human-following robots: preliminary evidence from laboratory experiments”.

The Influence of Following Angle on Performance Metrics – following angles of a human-following Pioneer LX robot (Honig, Katz, Edan & Oron-Gilad)

  • This research effort is led by our graduate student Shanee Honig
  • For the person-tracking and following algorithm (Dror Katz & Yael Edan, work in progress), we use the Pioneer LX robot’s built-in camera and a Microsoft Kinect.
  • Currently we focus on three following angles: back-following (a 0-degree angle), a 30-degree angle, and a 60-degree angle.
  • We use a personal-item manipulation (e.g., a wallet) to examine how participants engage with the robot. Naturally, when participants place a personal item on the robot, they become more engaged with it.
  • Come see us at HCII 2016, where we will present a poster on the sensitivity of older users (68 and above) to the quality of interaction, depending on the robot’s following distance and acceleration and the context of the walk – Follow Me: Proxemics and Responsiveness Preferences of Older Users in a Human-Following Robot.
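To make the following-angle geometry above concrete, here is a minimal sketch of how a goal point behind a walking person could be computed for a given angle and distance. This is an illustration only, not the actual Pioneer LX tracking algorithm; the function name, the 1.5 m default distance, and the pose representation are all assumptions.

```python
import math

def following_goal(person_x, person_y, person_heading, angle_deg, distance=1.5):
    """Compute a goal point for a robot following a person at a given angle.

    angle_deg = 0 places the robot directly behind the person (back-following);
    30 or 60 offsets it to the person's side. person_heading is the walking
    direction in radians; distance is in meters.
    """
    # Start from the direction opposite the walking direction, then rotate
    # by the following angle to get the offset direction from the person.
    offset = person_heading + math.pi + math.radians(angle_deg)
    goal_x = person_x + distance * math.cos(offset)
    goal_y = person_y + distance * math.sin(offset)
    return goal_x, goal_y

# Person at the origin walking along +x; 0-degree following puts the robot behind:
print(following_goal(0.0, 0.0, 0.0, 0))   # (-1.5, ~0.0)
print(following_goal(0.0, 0.0, 0.0, 30))  # behind and offset 30 degrees to the side
```

A real controller would recompute this goal every frame from the tracked person pose and feed it to the robot's motion planner; the sketch only captures the geometry of the three angle conditions.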



What do we think we are doing: principles of coupled self-regulation in human-robot interaction (…

The use of domestic service robots is becoming widespread. While in industrial settings robots are often used for specified tasks, the challenge in the case of robots put to domestic use is to affo…

via What do we think we are doing: principles of coupled self-regulation in human-robot interaction (….



Open positions in Human Factors engineering, Human-robot interaction or Human computer interaction

BGU is seeking excellent candidates for senior or junior faculty positions in the Dept. of Industrial Engineering and Management. Candidates will be part of the Human Factors Engineering team.

Relevant topics are: HCI, HRI, Usability, HFE, or any affiliated fields.

For more information please contact: Prof. Tal Oron-Gilad at


call for PhD student or postdoctoral student in HRI

Ben-Gurion University of the Negev

Department of Brain and Cognitive Sciences

Department of Industrial Engineering and Management

Department of Computer Science

ABC (Agricultural, Biological and Cognitive) Robotics Center

Doctoral/Post-doctoral Position in

Human-Robot Collaboration

Promoting intent and context based interaction and collaboration of humans and robots will be of high importance in the near future, when ‘things’ around us will have more intelligence.

We are looking for a highly motivated PhD student or post-doctoral fellow to lead research aimed at promoting intent- and context-based interaction and collaboration between humans and robots. Such interaction requires the development of novel intent-based interfaces (e.g., brain-computer interfaces) and the generation of a shared mental model (e.g., via augmented reality) for both human and robot. It also requires investigating how collaboration is built over time and how context may affect it. The current effort takes a multidisciplinary perspective on human-robot relations and focuses on the integration of multiple paradigms.

We are looking for candidates with a strong computational background interested in setting up and leading new and exciting research directions.

Closing date for applications: 30 May 2014 or until all positions are filled.

Candidates applying by the above closing date will be informed by July 2014.

Starting date: 1 October 2014 or earlier

For more information, please contact:

Prof. Tal Oron-Gilad, Human Factors Engineering –

Dr. Oren Shriki, Cognitive and Brain Sciences –

Dr. Idit Shalev, Cognitive and Brain Sciences –

Dr. Jihad El-Sana, Computer Science and augmented reality –

See also

, , , , ,

Leave a comment

Scalable interfaces for dismounted soldiers – displaying multiple video feed sources simultaneously

  • One way to enhance soldiers’ orientation and situation awareness (SA) is to add various sources of information (including feeds from unmanned systems) to generate a broader perspective of the environment.


This is a demonstration of the keyhole effect: it may be difficult to determine where on the map (left) the feed shown from the UAV is located.

  • Researchers and practitioners have recently begun to examine the combined use of several types of unmanned systems.
  • To do this well, it is important to minimize the visual load imposed on the soldier, a load that increases with multiple parallel displays.
  • Additional views can increase operator comprehension of the situation but may also cause overload and confusion; too many choices, characteristics, and applications can harm the operator as much as a lack of choices.

Our effort aims to examine the needs of dismounted soldiers in a multiple-video-feed environment (i.e., one where more than one source of information can be provided at a time) and to identify display devices and interfaces that can support dismounted soldiers in such complex intelligence-gathering missions.

Combining UAV and UGV feed.

  • UAVs are meant to deliver the “larger” picture and are necessary for orientation tasks.
  • UGVs are meant to deliver a more focused and specific image.
  • A combination of the two should be advantageous when information is complex or ambiguous; e.g., one may want to detect a target and then identify its features in more detail.
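One simple way to combine the two feeds on a single display is a picture-in-picture layout: the UAV feed as the background and a downscaled UGV feed inset in a corner. The sketch below illustrates the idea with NumPy; the frame sizes, the subsampling-based downscaling, and the inset placement are arbitrary assumptions for illustration, not the experimental interface.

```python
import numpy as np

def compose_display(uav_frame, ugv_frame, inset_scale=3, margin=10):
    """Overlay a downscaled UGV frame onto the corner of the UAV frame.

    uav_frame, ugv_frame: HxWx3 uint8 arrays. The UGV inset is reduced by
    stride subsampling (a real system would filter before resizing).
    """
    out = uav_frame.copy()
    inset = ugv_frame[::inset_scale, ::inset_scale]  # crude downscale
    h, w = inset.shape[:2]
    out[margin:margin + h, margin:margin + w] = inset  # top-left corner inset
    return out

# Dummy frames: a black UAV view and a white UGV view.
uav = np.zeros((480, 640, 3), dtype=np.uint8)
ugv = np.full((240, 320, 3), 255, dtype=np.uint8)
combined = compose_display(uav, ugv)
print(combined.shape)  # (480, 640, 3)
```

In practice the aerial map, waypoint markers, and both live feeds would be composited this way each frame; the open question studied here is how such layouts affect the soldier's attentional allocation, not how to render them.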


This is an example of a combined display, where both UAV and UGV video feeds are shown in addition to the aerial map. Waypoints of interest are marked on the map.

Coming soon – experimental results on attentional allocation and performance in intelligence-gathering tasks with such displays.

