Archive for category robotics

Multimodal communication for guiding a person following robot

Come meet us at RO-MAN 2017, where Dr. Vardit Sarne-Fleischmann and Shanee Honig will present our work on a gesture vocabulary for a person-following robot.

Abstract— Robots that are designed to support people in different tasks at home and in public areas need to be able to recognize users’ intentions and operate accordingly. To date, research has mostly concentrated on developing the technological capabilities of the robot and the mechanism of recognition. Still, little is known about the navigational commands that people would intuitively use to control a robot’s movement. A two-part exploratory study was conducted to evaluate how people naturally guide the motion of a robot and whether an existing gesture vocabulary used for human-human communication can be applied to human-robot interaction. Fourteen participants were first asked to demonstrate ten different navigational commands while interacting with a Pioneer robot using a Wizard of Oz (WoZ) technique. In the second part of the study, participants were asked to identify eight predefined commands from the U.S. Army vocabulary. Results show that simple commands yielded higher consistency among participants in the commands they demonstrated. Voice commands were more frequent than gestures, though a combination of both was dominant for certain commands. In the second part, an inconsistency in identification rates for opposite commands was observed. The results of this study could serve as a baseline for a future command vocabulary, promoting a more natural and intuitive human-robot interaction style.
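To make the setup concrete, here is a minimal, hypothetical sketch of how recognized gesture and voice labels could be mapped onto a shared navigational-command vocabulary and fused. The label sets, command names, and the voice-over-gesture fusion rule are illustrative assumptions, not the study’s actual software.

```python
# A minimal sketch (not the study's actual system) of mapping recognized
# gesture/voice labels to navigational commands for a person-following robot.

from dataclasses import dataclass
from typing import Optional

# Hypothetical vocabularies: recognizer output label -> robot command.
GESTURE_VOCAB = {
    "palm_forward": "stop",
    "beckon": "come",
    "point_left": "turn_left",
    "point_right": "turn_right",
}
VOICE_VOCAB = {
    "stop": "stop",
    "follow me": "follow",
    "go left": "turn_left",
    "go right": "turn_right",
}

@dataclass
class Command:
    name: str
    modality: str  # "gesture", "voice", or "multimodal"

def fuse(gesture: Optional[str], utterance: Optional[str]) -> Optional[Command]:
    """Resolve the two modalities into a single command.

    Simple fusion rule (an assumption): when both modalities agree, report a
    multimodal command; when they disagree, prefer voice, which the study
    found to be the more frequent modality.
    """
    g = GESTURE_VOCAB.get(gesture) if gesture else None
    v = VOICE_VOCAB.get(utterance) if utterance else None
    if g and v:
        return Command(g, "multimodal") if g == v else Command(v, "voice")
    if v:
        return Command(v, "voice")
    if g:
        return Command(g, "gesture")
    return None

if __name__ == "__main__":
    print(fuse("palm_forward", "stop"))  # Command(name='stop', modality='multimodal')
    print(fuse("point_left", None))      # Command(name='turn_left', modality='gesture')
```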

Link to our poster.

 


IEEE RO-MAN 2016 presentations

Two of our works have been accepted as full papers for presentation and publication at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016).

“Postures of a Robot Arm – window to robot intentions?” authored by my doctoral student Sridatta Chaterjee and co-authored by my colleagues Drs. Oren Shriki and Idit Shalev.

Abstract— The body language of robot arms has rarely been explored as a medium for conveying robot intentions. An exploratory study was conducted focusing on two questions: one, whether robot arm postures can convey robot intentions, and two, whether participants coming in contact with this robot arm for the first time can associate any meaning with the postures without watching the robot in action or working with it. Thirty-five participants of a wide age range (25-70) took part in this exploratory study. Results show that participants could interpret some postures. Four distinct postures were assigned to four separate categories by the majority of participants, irrespective of their age. In addition, postures selected in categories such as ‘Robot giving object in a friendly manner’, ‘Robot is saying Hi!’, and ‘Robot has been told not to disturb’ show similarity to body language exhibited by humans and animals while communicating such messages.


Posture 8, what is the robot doing?

“The Influence of Following Angle on Performance Metrics of a Human-Following Robot” co-authored by our graduate students Shanee Honig and Dror Katz, and my colleague Prof. Yael Edan.

Abstract— Robots that operate alongside people need to be able to move in socially acceptable ways. As a step toward this goal, we study how and under which circumstances the angle at which a robot follows a person may affect the human experience and robot tracking performance. In this paper, we aimed to assess three following angles (a 0° angle, a 30° angle, and a 60° angle) under two conditions: when the robot was carrying a valuable personal item and when it was not. Objective and subjective indicators of the quality of following and participants’ perceptions and preferences were collected. Results indicated that the personal-item manipulation increased awareness of the quality of the following and of the following angles. Without the manipulation, participants were indifferent to the behavior of the robot. Our following algorithm was successful for tracking at 0° and 30° angles, yet it must be improved for wider angles. Further research is required to obtain a better understanding of following-angle preferences for varying environment and task conditions.


Following angles of a person-following robot: straight from behind or wider angles?

See you in NY! Looking forward to two great presentations!



Following Angle of a Human-Following Robot

Human-following capabilities of robots may become important in assistive robotic applications to facilitate many daily tasks (e.g., carrying personal items or groceries). A robot’s following distance, following angle, and acceleration influence the quality of the interaction between the human and the robot by impacting walking efficiency (e.g., pace, flow, and unwanted stops), user comfort, and robot likability.

Our team gave a presentation at the ICR 2016 conference focusing on “Subjective preferences regarding human-following robots: preliminary evidence from laboratory experiments”.

Following angles of a human-following Pioneer LX robot, from “The Influence of Following Angle on Performance Metrics” (Honig, Katz, Edan & Oron-Gilad)

  • This research effort is led by our graduate student Shanee Honig.
  • For the person-tracking and following algorithm (Dror Katz & Yael Edan, work in progress) we use the Pioneer LX robot’s built-in camera and a Microsoft Kinect.
  • Currently we focus on three following angles: back following (a 0° angle), a 30° angle, and a 60° angle; the geometry is sketched after this list.
  • We use a personal-item manipulation (e.g., a wallet) to examine how participants engage with the robot; naturally, when participants place a personal item on the robot, they become more engaged with it.
  • Come see us at HCII 2016, where we will present a poster on the sensitivity of older users (68 and above) to the quality of the interaction, depending on the robot’s following distance and acceleration and the context of the walk – Follow Me: Proxemics and Responsiveness Preferences of Older Users in a Human-Following Robot.
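For readers curious about the geometry of the three following angles, below is a minimal sketch of how an angle-offset following goal can be computed from the person’s position and heading. It is an illustrative reconstruction; the function name, sign convention, and 1.2 m following distance are assumptions, not the team’s actual algorithm.

```python
# A minimal sketch of angle-offset following: place the robot's goal at a
# fixed distance behind the person, rotated by the following angle.

import math

def following_goal(person_x: float, person_y: float, person_heading: float,
                   follow_angle_deg: float, distance: float = 1.2):
    """Return an (x, y) goal for the robot.

    person_heading: the person's walking direction in radians.
    follow_angle_deg: 0 = directly behind; positive values place the robot
    toward the person's right under this (assumed) sign convention.
    distance: following distance in meters (an assumed default).
    """
    # Direction from the person back toward the robot: opposite the
    # heading, rotated by the following angle.
    offset_dir = person_heading + math.pi + math.radians(follow_angle_deg)
    goal_x = person_x + distance * math.cos(offset_dir)
    goal_y = person_y + distance * math.sin(offset_dir)
    return goal_x, goal_y

if __name__ == "__main__":
    # Person at the origin walking along +x; compare the three studied angles.
    for angle in (0, 30, 60):
        print(angle, following_goal(0.0, 0.0, 0.0, angle))
```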

 


What do we think we are doing: principles of coupled self-regulation in human-robot interaction (…

The use of domestic service robots is becoming widespread. While in industrial settings robots are often used for specified tasks, the challenge in the case of robots put to domestic use is to affo…

via What do we think we are doing: principles of coupled self-regulation in human-robot interaction (….


BGU is seeking PhD and postdoctoral students for advanced research in multidisciplinary robotics

The ABC Robotics Center (Agricultural, Biological and Cognitive Robotics) at BGU is seeking outstanding students for advanced research in multidisciplinary robotics.

All applicants must be skilled in both oral and written communication in English and be able to work independently as well as in collaboration with others.

PhD applicants must have completed an MSc degree in Engineering, Natural Sciences, Computer Sciences or Psychology with a thesis. Experience in artificial intelligence, robotics, cognitive science and programming is an advantage. The application should include a CV, a list of academic grades, a copy of the degree project report, a list of publications, three personal references (one from the MSc thesis advisor) and one A4 page describing the personal motivation for applying for this position. PhD candidates must submit a research proposal and pass a qualification exam on it within the first year of their PhD studies. The PhD thesis should be completed within a 4-year timeframe. The ABC Robotics PhD Scholarship covers tuition fees and a monthly stipend; the candidate will receive a minimum of 6,930 NIS per month for a duration of 4 years.

The ABC Robotics Postdoc Scholarship is 10,116 NIS per month for a duration of 2 years.

Additional requirements and details may be found at: http://in.bgu.ac.il/en/kreitman_school/Pages/admission.aspx

Applicants should send all necessary registration information to Ms. Sima Koram, email: simagel@exchange.bgu.ac.il as indicated in

http://aristo4bgu.bgu.ac.il/PhdEnglishApplication/PhdApplicationForm/

and send a copy of their application to: abc-robotics@bgu.ac.il

Specific research topics are proposed at: www.bgu.ac.il/abc-robotics

Closing date for applications: 30 May 2014, or until all positions are filled. Candidates applying by the above closing date will be informed by July 2014.

Starting date: 1 October 2014 or earlier


Tools and Techniques to support operators in MOMU (Multiple Operator Multiple UAV) environments

The ‘RICH’ (Rapid Immersion tools/techniques for Coordination and Hand-offs) research project is a US-Israel collaboration. The project aims to research, design and develop tools, techniques and procedures that aid operators in MOMU environments, facilitating task switching and/or coordination with other operators, all for the benefit of improving overall mission performance. The Israeli partners on this task are Jacob Silbiger from Synergy Integration, Lt. Col. Michal Rottem-Hovev from the IAF, and Drs. Tal Oron-Gilad and Talya Porat from the Dept. of Industrial Engineering and Management. The US partners are Jay Shively, Lisa Fern (Human Systems Integration Group Leader, Aeroflightdynamics Directorate, US Army Research Development and Engineering Command (AMRDEC)), and Dr. Mark Draper (USAFRL). RICH is part of the US/Israel MOA (mutual operation agreement) on Rotorcraft Aeromechanics & Man/Machine Integration Technology.

Here I describe in brief the goals of the Israeli team and some of the tools developed.

Motivation: Multiple operators controlling multiple unmanned aerial vehicles (MOMU) can be an efficient operational setup for reconnaissance and surveillance missions. However, it dictates switching and coordination among operators. Efficient switching is time-critical and cognitively demanding, thus vitally affecting mission accomplishment. As such, tools and techniques (T&Ts) to facilitate switching and coordination among operators are required. Furthermore, development of metrics and test-scenarios becomes essential to evaluate, refine, and adjust T&Ts to the specifics of the operational environment.

Tools: Tools can be divided into two categories: 1) tools that facilitate a ‘quick setup’, i.e., that ease the operator into a new mission or area of operation; and 2) tools that facilitate ongoing missions where acquiring new UAVs, delegating, or switching is necessary to complete the tasks at hand. The Israeli team focused primarily on tools of the second type. Among the “successful” tools have been the Castling rays (see our CHI paper for details), the TIE/coupling tool, and the Maintain coverage area tool.

Several outcomes of this effort have been presented and published in conference proceedings.


Scalable interfaces for dismounted soldiers: displaying multiple video feed sources simultaneously

  • One way to enhance soldiers’ orientation and situation awareness (SA) is to add various sources of information (including feeds from unmanned systems) to generate a broader perspective of the environment.


This is a demonstration of the key-hole effect, where it may be difficult to determine where on the map (left) the area shown in the UAV feed is located.

  • Researchers and practitioners have recently begun to examine the combined use of several types of unmanned systems.
  • To do this well, it is important to minimize the visual load imposed on the soldier, a load that grows with each additional parallel display.
  • Additional views can increase the operator’s comprehension of the situation but may also cause overload and confusion; too many choices, characteristics, and applications can hinder the operator as much as a lack of choices.

Our effort aims to examine the needs of dismounted soldiers in a multiple-video-feed environment (i.e., where more than one source of information can be provided at a time) and to identify display devices and interfaces that can support dismounted soldiers in such complex intelligence-gathering missions.
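As one illustration of how the key-hole effect shown above can be mitigated, the sketch below computes the ground footprint of a downward-looking UAV camera so it can be drawn on the aerial map, anchoring the feed to map coordinates. The flat-ground, nadir-camera model and all parameter names are simplifying assumptions, not our project’s actual implementation.

```python
# A hedged sketch: compute the map-frame corners of a nadir camera's ground
# footprint so the footprint can be overlaid on the aerial map.

import math

def camera_footprint(uav_x: float, uav_y: float, altitude: float,
                     heading_rad: float, hfov_deg: float = 60.0,
                     vfov_deg: float = 45.0):
    """Return the four map-frame corners of the camera's ground footprint.

    Assumes flat ground and a camera pointing straight down; the footprint
    is then a rectangle centered at the UAV's ground position and rotated
    by its heading. Field-of-view defaults are illustrative.
    """
    half_w = altitude * math.tan(math.radians(hfov_deg) / 2)  # across-track
    half_h = altitude * math.tan(math.radians(vfov_deg) / 2)  # along-track
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    corners = []
    for dx, dy in ((half_h, half_w), (half_h, -half_w),
                   (-half_h, -half_w), (-half_h, half_w)):
        # Rotate each body-frame corner (dx forward, dy lateral) into the map frame.
        corners.append((uav_x + dx * cos_h - dy * sin_h,
                        uav_y + dx * sin_h + dy * cos_h))
    return corners

if __name__ == "__main__":
    # UAV at (100 m, 200 m), 80 m altitude, heading along +x.
    for corner in camera_footprint(100, 200, 80, 0.0):
        print(f"({corner[0]:.1f}, {corner[1]:.1f})")
```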

Combining UAV and UGV feeds.

  • UAVs are meant to deliver the “larger” picture and are necessary for orientation tasks.
  • UGVs are meant to deliver a more focused and specific image.
  • A combination of the two should be advantageous when information is complex or ambiguous (e.g., one may want to detect a target and then identify its features in more detail).


This is an example of a combined display, where both UAV and UGV video feeds are shown in addition to the aerial map. Waypoints of interest are marked on the map.

Coming soon – experimental results on attentional allocation and performance in intelligence-gathering tasks with such displays.
