Archive for category News
Following Angle of a Human-Following Robot
Posted by Tal Oron-Gilad in HRI, News, robotics on April 26, 2016
Human-following capabilities of robots may become important in assistive robotic applications to facilitate many daily tasks (e.g., carrying personal items or groceries). A robot's following distance, following angle, and acceleration influence the quality of the interaction between the human and the robot by impacting walking efficiency (e.g., pace, flow, and unwanted stops), user comfort, and robot likability.
Our team gave a presentation at the ICR 2016 conference focusing on "Subjective Preferences Regarding Human-Following Robots: Preliminary Evidence from Laboratory Experiments."

Following Angles of a human-following Pioneer LX Robot (Honig, Katz, Edan & Oron-Gilad)
- This research effort is led by our graduate student Shanee Honig
- For the person-tracking and following algorithm (Dror Katz & Yael Edan, work in progress) we use the Pioneer LX Robot's built-in camera and a Microsoft Kinect.
- Currently we focus on three following angles: direct back following (0 degrees), a 30-degree angle, and a 60-degree angle (see the geometry sketch after this list).
- We use a personal-item manipulation (e.g., a wallet) to examine how participants engage with the robot; naturally, when participants place a personal item on the robot, they become more engaged with it.
- Come see us at HCII 2016, where we will present a poster on the sensitivity of older users (68 and above) to the quality of interaction, depending on the robot's following distance and acceleration and the context of the walk – Follow Me: Proxemics and Responsiveness Preferences of Older Users in a Human-Following Robot.
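To make the following-angle notion concrete, here is a minimal, hypothetical sketch of how a goal position for the robot could be derived from the tracked person's pose at a given following distance and angle. This is not the Katz & Edan tracking-and-following algorithm; the function name and parameters are illustrative assumptions only.

```python
import math

def following_goal(person_x, person_y, person_heading, distance=1.0, angle_deg=0.0):
    """Illustrative only: compute a goal position for the robot given the
    tracked person's pose. angle_deg = 0 places the robot directly behind
    the person; 30 or 60 offsets it toward the person's side."""
    # Direction pointing from the person back toward the robot,
    # rotated by the desired following angle.
    offset_heading = person_heading + math.pi + math.radians(angle_deg)
    goal_x = person_x + distance * math.cos(offset_heading)
    goal_y = person_y + distance * math.sin(offset_heading)
    return goal_x, goal_y

# Example: person at the origin walking along +x, 60-degree following angle
print(following_goal(0.0, 0.0, 0.0, distance=1.2, angle_deg=60))
```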
Supervising and controlling unmanned systems: A multi-phase study with subject matter experts
Posted by Tal Oron-Gilad in Military & Law Enforcement Applications, News, UAV, unmanned aerial systems on April 7, 2016
At last, a new publication in Frontiers in Psychology, co-authored with Talya Porat, Michal Rottem-Hovev and Jacob Silbiger (Synergy Integration).
In this article we conduct a retrospective examination of studies concerned with the operator-to-UAS ratio, i.e., how many systems a single operator should control and how systems should be shared among a team (multiple operators – multiple UASs; MOMU).

Multiple operators – multiple UASs (MOMU) simulated environment
Abstract
Proliferation in the use of Unmanned Aerial Systems (UASs) in civil and military operations has presented a multitude of human factors challenges; from how to bridge the gap between demand and availability of trained operators, to how to organize and present data in meaningful ways. Utilizing the Design Research Methodology (DRM), a series of closely related studies with subject matter experts (SMEs) demonstrates how the focus of research gradually shifted from “how many systems can a single operator control” to “how to distribute missions among operators and systems in an efficient way”. The first set of studies aimed to explore the modal number, i.e., how many systems a single operator can supervise and control. It was found that an experienced operator can supervise up to 15 UASs efficiently using moderate levels of automation, and control (mission and payload management) up to 3 systems. Once this limit was reached, a single operator’s performance was compared to that of a team controlling the same number of systems. In general, teams led to better performance; hence, design efforts shifted towards developing tools that support teamwork environments of multiple operators with multiple UASs (MOMU). In MOMU settings, when the tasks are similar or when areas of interest overlap, one operator seems to have an advantage over a team that needs to collaborate and coordinate. However, in all other cases, a team was advantageous over a single operator.
Validation study: Dome Pedestrian Simulator
Abstract
Here we report the results of a validation study conducted on our unique pedestrian simulator.
The validation study confirms the simulator’s ability to correctly simulate the real road environment and strengthens its reliability as a source for statistical inference. The goal of this work was to investigate whether the Dome simulator simulates a typical pedestrian environment in a manner that elicits the same behavior people would exhibit in real-world crossing situations. Data analysis shows that the simulator delivers more reliable results for speeds than for distances. Questionnaire analyses show that the simulator’s fidelity to reality regarding the display, sound effects, and perspective is medium.
The effects of automation failure and secondary task on drivers’ ability to mitigate hazards in highly or semi-automated vehicles
Posted by Tal Oron-Gilad in News on February 22, 2016
In Advances in Transportation Studies: An International Journal, 2016 Special Issue, Vol. 1
In this article, co-authored with Avinoam Borowsky in memory of our late colleague Dr. Adi Ronen, who initiated this research, we present an experimental test-bed for evaluating levels of vehicle automation, in-vehicle secondary tasks, and hazardous scenarios.
- Four levels of automation were implemented: Manual – no automation (M), Adaptive Cruise Control (ACC), Automatic Steering (AS), and Automated Driving (AD).
- Two types of secondary tasks were included: (1) driving-related, which required on-road glances; and (2) driving-unrelated.

The BGU driving simulator (Left) and the in-vehicle secondary task “squares task” (Right)
In the “squares task”, drivers observe nine squares on the in-vehicle display to identify the lighted square. Once identified, the driver is required to press on the lighted square; then another square is lighted, and so on. Once the secondary task begins, drivers have 5 seconds to accurately press on as many squares as they can until the task ends. They receive printed feedback after each press, either showing the response time or indicating that the press was erroneous (an illustrative sketch of this loop follows below).
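To illustrate the structure of the task described above, here is a minimal, hypothetical sketch of the squares-task loop. The `read_press` callback stands in for the touchscreen input handler and is an assumption, not part of the actual test-bed.

```python
import random
import time

SQUARES = list(range(9))   # nine squares on the in-vehicle display
TASK_DURATION = 5.0        # seconds per secondary-task bout

def squares_task(read_press):
    """Illustrative sketch only. `read_press` is a hypothetical blocking
    call that returns the index of the square the driver pressed."""
    start = time.time()
    results = []
    while time.time() - start < TASK_DURATION:
        target = random.choice(SQUARES)   # light one square
        t0 = time.time()
        pressed = read_press()            # wait for the driver's press
        rt = time.time() - t0
        if pressed == target:
            results.append(("correct", round(rt, 3)))  # feedback: response time
        else:
            results.append(("error", None))            # feedback: erroneous press
    return results
```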
An empirical evaluation was conducted to examine how well drivers mitigate road hazards when automation fails unexpectedly, looking at situations where drivers were either engaged with secondary tasks or not prior to the automation failure and/or the hazardous event. In each driving section, typical hazardous events appeared. Automation failure (i.e., the need to assume manual control) was alerted by sound and visually on the touchscreen.
Results showed that while engagement with a non-driving-related secondary task led to more crashes, automation failure did not, especially when drivers were monitoring the road. In addition, drivers’ performance on the secondary task revealed differential effects of automation mode with respect to the road conditions.
IsraHCI – The Fourth Israeli Human-Computer Interaction Research Conference
Posted by Tal Oron-Gilad in News on September 4, 2015
IsraHCI – February 18, 2016
Call for submissions
Conference website: http://israhci.org
Facebook group: https://www.facebook.com/groups/IsraHCI/
LinkedIn group: http://www.linkedin.com/groups?gid=4698270
Important Dates
Submission Date: November 15, 2015.
Notification of Acceptance: December 7, 2015.
Conference: February 18, 2016, Shenkar College of Engineering, Design, Art, Ramat-Gan
Topics of Interest – Topics include, but are not limited to:
Interactive artifacts and wearable computing
Ubiquitous and pervasive computing
Interaction models for children and the elderly
Social aspects of human-computer interaction
New interaction techniques, devices and interfaces
Mobile interaction
Tangible human-computer interaction
Human-robot interaction
Cognitive aspects of human-computer interaction
Evaluation methods for usability and user experience
Group and collaborative interactions
Interactive information visualization
User interaction in the car and in other high-stake environments
Design methods
The organizational and business context of computer interaction
Universal access and international interfaces
Specific issues that are relevant to Israel’s political situation and population
Augmented and virtual reality interfaces
Usability of privacy and security mechanisms
