This is our most recent publication, accepted for publication in Safety Science.
Please cite this article in press as: Tapiro, H., et al. Cell phone conversations and child pedestrian’s crossing behavior; a simulator study. Safety Sci. (2016), http://dx.doi.org/10.1016/j.ssci.2016.05.013
Cell phone conversations and child pedestrian’s crossing behavior; a simulator study
Hagai Tapiro, Yisrael Parmet and Tal Oron-Gilad
Child pedestrians are overrepresented in fatal and severe road crashes and differ in their crossing behavior from adults. Although many children carry cell phones, the effect that cell phone conversations have on children’s crossing behavior has not been thoroughly examined. We compared the crossing behavior of child and adult pedestrians while they were engaged in cell phone conversations. In a semi-immersive virtual environment simulating a typical city, 14 adults and 38 children (11 children aged 7-8, 18 aged 9-10, and 9 aged 11-13) experienced road-crossing traffic-scene scenarios. They were asked to press a response button whenever they felt it was safe to cross, and their eye movements were tracked. Results showed that the crossing behavior of all age groups was affected by cell phone conversations. When occupied with more cognitively demanding conversation types, participants were slower to react to a crossing opportunity, chose smaller crossing gaps, and allocated less visual attention to the peripheral regions of the scene. The ability to make better crossing decisions improved with age, but no interaction with cell phone conversation type was found. The most prominent improvement was in the ‘safety gap’: each age group maintained a longer gap than the preceding younger age group. Based on these findings, cell phone conversations can hinder the safety of both child and adult pedestrians. It is therefore important to take these findings into account when training young pedestrians in road safety and when raising public awareness.
Interested in seeing an interactive visualization app of our data? https://eyemove.shinyapps.io/cell-phone/
Two of our works have been accepted as full papers for presentation and publication in the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016).
“Postures of a Robot Arm – window to robot intentions?” authored by my doctoral student Sridatta Chaterjee and co-authored by my colleagues Drs. Oren Shriki and Idit Shalev.
Abstract— The body language of robot arms has rarely been explored as a medium for conveying robot intentions. An exploratory study was conducted focusing on two questions: first, whether robot arm postures can convey robot intentions, and second, whether participants encountering this robot arm for the first time can associate any meaning with the postures without watching the robot in action or working with it. Thirty-five participants across a wide age range (25-70) took part in this exploratory study. Results show that participants could interpret some postures. Four distinct types of postures were assigned to four separate categories by the majority of participants, irrespective of their age. In addition, postures selected for categories such as ‘Robot giving object in a friendly manner’, ‘Robot is saying Hi!’, and ‘Robot has been told not to disturb’ show similarity to the body language exhibited by humans and animals when communicating such messages.
“The Influence of Following Angle on Performance Metrics of a Human-Following Robot” co-authored by our graduate students Shanee Honig and Dror Katz, and my colleague Prof. Yael Edan.
Abstract— Robots that operate alongside people need to be able to move in socially acceptable ways. As a step toward this goal, we study how, and under which circumstances, the angle at which a robot follows a person may affect the human experience and robot tracking performance. In this paper, we assessed three following angles (0◦, 30◦, and 60◦) under two conditions: whether or not the robot was carrying a valuable personal item. Objective and subjective indicators of the quality of following, along with participants’ perceptions and preferences, were collected. Results indicated that the personal-item manipulation increased awareness of the quality of the following and of the following angles. Without the manipulation, participants were indifferent to the behavior of the robot. Our following algorithm tracked successfully at 0◦ and 30◦ angles, yet it must be improved for wider angles. Further research is required to obtain a better understanding of following-angle preferences for varying environment and task conditions.
See you in NY. Looking forward to two great presentations!
Human-following capabilities may become important in assistive robotic applications, facilitating many daily tasks (e.g., carrying personal items or groceries). A robot’s following distance, following angle, and acceleration influence the quality of the human-robot interaction by affecting walking efficiency (e.g., pace, flow, and unwanted stops), user comfort, and robot likability.
Our team gave a presentation at the ICR 2016 conference focusing on Subjective preferences regarding human-following robots: preliminary evidence from laboratory experiments.
- This research effort is led by our graduate student Shanee Honig
- For the person-tracking and following algorithm (Dror Katz & Yael Edan, work in progress), we use the Pioneer LX robot’s built-in camera and a Microsoft Kinect.
- Currently we focus on three following angles: back following (0 degrees), a 30-degree angle, and a 60-degree angle.
- We use a personal-item manipulation (e.g., a wallet) to examine how participants engage with the robot. Naturally, when participants place a personal item on the robot, they become more engaged with it.
- Come see us at HCII 2016, where we will present a poster on the sensitivity of older users (68 and above) to the quality of interaction, depending on the robot’s following distance and acceleration and on the context of the walk: Follow Me: Proxemics and Responsiveness Preferences of Older Users in a Human-Following Robot.
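As an aside, the geometry behind a fixed following angle is easy to sketch. The snippet below is an illustrative sketch only, not the lab's actual tracking algorithm; the function name, default distance, and sign convention are our assumptions. It computes the goal position a robot should aim for when following a person at a given distance and angle relative to the person's heading.

```python
import math

def follow_goal(person_xy, person_heading, distance=1.5, angle_deg=0.0):
    """Goal position for a robot following a person.

    angle_deg = 0 places the robot directly behind the person;
    positive angles shift it to one side. All values are
    illustrative, not the parameters used in the studies above.
    """
    px, py = person_xy
    # Direction from the person back toward the robot, rotated by the
    # following angle relative to the person's heading.
    back = person_heading + math.pi + math.radians(angle_deg)
    return (px + distance * math.cos(back),
            py + distance * math.sin(back))
```

For a person at the origin heading along +x, a 0-degree angle puts the goal 1.5 m directly behind, while a 30-degree angle swings it sideways along an arc of the same radius.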
Finally, a new publication in Frontiers in Psychology, co-authored with Talya Porat, Michal Rottem-Hovev, and Jacob Silbiger (Synergy Integration).
In this article we conduct a retrospective examination of studies concerned with the operator-to-UAS ratio, i.e., how many systems a single operator should control and when a team should share control of multiple systems (multiple operators – multiple UASs; MOMU).
Proliferation in the use of Unmanned Aerial Systems (UASs) in civil and military operations has presented a multitude of human factors challenges; from how to bridge the gap between demand and availability of trained operators, to how to organize and present data in meaningful ways. Utilizing the Design Research Methodology (DRM), a series of closely related studies with subject matter experts (SMEs) demonstrates how the focus of research gradually shifted from “how many systems can a single operator control” to “how to distribute missions among operators and systems in an efficient way”. The first set of studies aimed to explore the modal number, i.e., how many systems a single operator can supervise and control. It was found that an experienced operator can efficiently supervise up to 15 UASs using moderate levels of automation, and can control (mission and payload management) up to 3 systems. Once this limit was reached, a single operator’s performance was compared to that of a team controlling the same number of systems. In general, teams led to better performance; hence, design efforts shifted toward developing tools that support teamwork environments of multiple operators with multiple UASs (MOMU). In MOMU settings, when the tasks are similar or when areas of interest overlap, a single operator seems to have an advantage over a team that needs to collaborate and coordinate. However, in all other cases, a team was advantageous over a single operator.
Here we report the results of a validation study conducted on our unique pedestrian simulator.
The validation study confirms the simulator’s ability to correctly simulate a real road environment and strengthens its reliability as a source for statistical inference. The goal of this work was to investigate whether the Dome simulator simulates a typical pedestrian environment in a manner that leads people to act as they would in real-world crossing situations. Data analysis shows that the simulator delivers more reliable results for speeds than for distances. Questionnaire analyses show that the simulator’s fidelity to reality regarding the display, sound effects, and perspective is medium.
The effects of automation failure and secondary task on drivers’ ability to mitigate hazards in highly or semi-automated vehicles
In this article, co-authored by Avinoam Borowsky and myself in memory of our late colleague Dr. Adi Ronen, who initiated this research, we present an experimental test-bed for evaluating levels of vehicle automation, in-vehicle secondary tasks, and hazardous scenarios.
- Four levels of automation were implemented: Manual/no automation (M), Adaptive Cruise Control (ACC), Automatic Steering (AS), and Automated Driving (AD).
- Two types of secondary tasks were included: (1) driving-related, requiring on-road glances; and (2) driving-unrelated.
In the “squares task”, drivers observe nine squares on the in-vehicle display and identify the lighted square. Once identified, the driver presses the lighted square; another square then lights up, and so on. Once the secondary task begins, drivers have 5 seconds to accurately press as many squares as they can until the task ends. After each press, feedback is displayed, either showing the response time or indicating that the press was erroneous.
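For concreteness, the scoring logic of such a task might look like the following sketch. This is a hypothetical reimplementation for illustration only; the event format, function name, and 5-second default are our assumptions, not the study's actual software.

```python
def score_squares_task(events, duration=5.0):
    """Score one round of the 'squares task'.

    events: list of (timestamp, lit_square, pressed_square) tuples,
    with timestamps in seconds from task onset. Returns per-press
    feedback: the response time for a correct press, or 'error'
    otherwise. Presses after the time window are ignored.
    """
    feedback = []
    last_correct_t = 0.0  # time the current square lit up
    for t, lit, pressed in events:
        if t > duration:
            break  # task window has ended
        if pressed == lit:
            feedback.append(t - last_correct_t)  # response time
            last_correct_t = t  # next square lights up now
        else:
            feedback.append("error")
    return feedback
```

For example, a correct press at 1.0 s, a wrong press at 2.5 s, and a correct press at 3.0 s would yield the feedback sequence [1.0, "error", 2.0], with any press after 5 s discarded.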
An empirical evaluation was conducted to examine how well drivers mitigate road hazards when automation fails unexpectedly, looking at situations where drivers were either engaged with secondary tasks or not prior to the automation failure and/or the hazardous event. In each driving section, typical hazardous events appeared. Automation failure (i.e., the need to assume manual control) was alerted by sound and visually on the touchscreen.
Results showed that while engagement with a non-driving-related secondary task led to more crashes, automation failure did not, especially when drivers were monitoring the road. In addition, drivers’ performance on the secondary task revealed differential effects of automation mode with respect to road conditions.
At the HFES Annual meeting we presented two studies related to interfaces for dismounted soldiers.
Tactile Interfaces for Dismounted Soldiers: User-perceptions on Content, Context and Loci
Nuphar Katzman, Tal Oron-Gilad, and Yael Salzer
Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2015; 59:421-425.
Interfaces for dismounted soldiers: examination of non-perfect visual and tactile alerts in a simulated hostile urban environment
Tal Oron-Gilad, Yisrael Parmet, and Daniel Benor
Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2015; 59:145-149.