The effects of road environment complexity and age on pedestrian’s visual attention and crossing behavior

Work on pedestrian distraction co-authored with Hagai Tapiro and Yisrael Parmet

Abstract

Introduction: Little is known about how the characteristics of the environment affect pedestrians’ road crossing behavior. Method: In this work, the effect of typical urban visual clutter created by objects and elements in the road proximity (e.g., billboards) on the road crossing behavior of adults and children (aged 9–13) was examined in a controlled laboratory environment, utilizing virtual reality scenarios projected on a large dome screen. Results: Visual load was manipulated at three levels; the results showed that high visual load affected children’s and adults’ road crossing behavior and visual attention. The main effect on participants’ crossing decisions was seen in missed crossing opportunities: children and adults missed more opportunities to cross the road when exposed to more cluttered road environments. An interaction with age was found in the dispersion of visual attention measure. Children, 9–10 and 11–13 years old, had a wider spread of gazes across the scene when the environment was highly loaded, an effect not seen in adults. However, unexpectedly, no other indication of the deterring effect was found in the current study. Still, according to the results, it is reasonable to assume that busier road environments can be more hazardous to adult and child pedestrians. Practical Applications: In that context, it is important to further investigate the possible distracting effect of such objects in the road environment on pedestrians, and especially children. This knowledge can help to create better safety guidelines for children and assist urban planners in creating safer urban environments.
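The "dispersion of visual attention" measure mentioned above can be thought of as how widely gaze samples are spread across the scene. As a rough illustration only (not the study's actual analysis pipeline), here is a minimal Python sketch that quantifies gaze dispersion from hypothetical (x, y) gaze samples:

```python
import numpy as np

def gaze_dispersion(gaze_xy):
    """Root-mean-square distance of gaze samples from their centroid.

    gaze_xy: array of shape (n_samples, 2) with (x, y) gaze coordinates,
    e.g., in degrees of visual angle or screen pixels (hypothetical data).
    A larger value means gaze is spread more widely across the scene.
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    centroid = gaze_xy.mean(axis=0)
    return float(np.sqrt(((gaze_xy - centroid) ** 2).sum(axis=1).mean()))

# Hypothetical example: gaze tends to be more dispersed in a high-clutter scene.
rng = np.random.default_rng(0)
low_clutter = rng.normal(loc=[0.0, 0.0], scale=2.0, size=(500, 2))   # gazes near the crossing area
high_clutter = rng.normal(loc=[0.0, 0.0], scale=6.0, size=(500, 2))  # gazes drawn toward billboards etc.
print(gaze_dispersion(low_clutter), gaze_dispersion(high_clutter))
```

In this toy example the high-clutter samples yield a larger dispersion value, mirroring the kind of age-by-load interaction reported for the child groups.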

 

Read this article

 



I have been busy

There have not been many posts in 2019, but that does not mean we have not conducted some really interesting research in our lab. On the contrary.

So, over the next few weeks I will begin posting some of our most recent accomplishments.

Here is just one:

Closing the feedback loop – the relationship between input and output modalities in HRI, a presentation at the Human Friendly Robotics workshop, Rome, 2019

ABC student poster – Tamara Markovich and Shanee Honig

 

 


Calibrating Adaptable Automation to Individuals

At last it’s out in public. This study, co-authored with Jen Thropp, James Szalma, and P. A. Hancock, investigates whether and how the level of automation (LOA) should be calibrated to individuals’ traits (specifically here, attentional control).

To read more, click on this link.

Abstract:

A detailed understanding of operator individual differences can serve as a foundation for developing a critical window on effective, adaptable, user-centered automation, and even for more autonomous systems. Adaptable automation that functions according to such principles and parameters has many potential benefits in increasing operator trust and acceptance of the automated system. Our current study provides an assessment of the way that individual differences in attentional control (AC) affect the preference for a selection of a desired level of automation (LOA). Participants who scored low or high on AC were either allowed to choose among four possible LOAs or restricted to a predetermined LOA. These manipulations were engaged while the operator was performing visual and auditory target detection tasks. The AC level was found to be inversely proportional to the LOA preference. Operators also performed better when they were preassigned to a fixed LOA rather than given a choice. Individual differences can thus be shown to affect the performance with the automated systems and should be considered in associated design processes. When deciding whether to give the operator control over LOA in a complex system, engineers should consider that the amount of control that operators may want does not necessarily reflect their actual needs.
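To make the "calibration" idea concrete, here is a toy Python sketch that maps a hypothetical attentional-control (AC) score onto one of four levels of automation, reflecting the reported inverse relationship between AC and LOA preference. The LOA labels, score range, and thresholds are invented for illustration and are not the study's conditions:

```python
from dataclasses import dataclass

# Four illustrative levels of automation (LOA); placeholder labels, not the
# exact LOA definitions used in the study.
LOA_LEVELS = ["manual", "advisory", "consent-based", "fully automated"]

@dataclass
class Operator:
    name: str
    attentional_control: float  # hypothetical questionnaire score, 0-100

def suggest_loa(op: Operator) -> str:
    """Suggest an LOA inversely related to attentional control (AC):
    lower-AC operators get more automation, higher-AC operators get less.
    Thresholds are arbitrary and for illustration only."""
    if op.attentional_control >= 75:
        return LOA_LEVELS[0]
    if op.attentional_control >= 50:
        return LOA_LEVELS[1]
    if op.attentional_control >= 25:
        return LOA_LEVELS[2]
    return LOA_LEVELS[3]

print(suggest_loa(Operator("high-AC operator", 82)))  # -> manual
print(suggest_loa(Operator("low-AC operator", 18)))   # -> fully automated
```

Note that, per the abstract, operators performed better when preassigned to a fixed LOA than when given a choice, so a rule of this kind would be applied on the design side rather than offered as an operator-selected setting.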

 

https://ieeexplore.ieee.org/document/8396314/

 



Eurohaptics 2018 – Katzman & Oron-Gilad

Towards a Taxonomy of Vibro-Tactile Cues for Operational Missions, a poster presented by Nuphar Katzman

Abstract. The present study aims to serve as a preliminary stage in the examination and implementation of a taxonomy of vibro-tactile cues for operational missions. Previous research has shown that using the tactile modality can help increase soldiers’ performance in terms of response time, accuracy in navigation, and communication under busy conditions and/or high workload. The experimental pilot reported here focuses on how users (infantry soldiers) perceive tactile cues in terms of implication and urgency during such missions. Fifteen reserve soldiers completed a navigation mission in a virtual environment. During the navigation they received random tactile cues and were asked to assess the suitability of each cue to a specific context. At the end of the session, participants filled out a subjective questionnaire about their experience with the tactile cues. Results revealed three (out of five) superior cues, in terms of accurate identification and consistent association. This work provides the foundation to further develop a taxonomy of tactile cues for information types in operational missions. Future work should examine the identification of cues and their associated meanings when the relevant events occur in the simulation and outside in field tests.
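To illustrate what such a cue taxonomy might look like in practice, here is a minimal, hypothetical Python sketch of a mapping between vibro-tactile patterns and operational meanings. The cue names, pattern encoding, meanings, and urgency levels are invented for the example and are not the five cues evaluated in the study:

```python
from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass(frozen=True)
class TactileCue:
    name: str
    pattern: tuple   # (pulse duration ms, pause ms, repetitions) - illustrative encoding
    meaning: str     # the operational information the cue is meant to convey
    urgency: Urgency

# A hypothetical fragment of a cue taxonomy.
CUE_TAXONOMY = [
    TactileCue("single-long", (800, 0, 1), "navigation: waypoint reached", Urgency.LOW),
    TactileCue("double-short", (150, 150, 2), "communication: message waiting", Urgency.MEDIUM),
    TactileCue("rapid-burst", (80, 80, 5), "threat: take cover", Urgency.HIGH),
]

def cues_by_urgency(level: Urgency):
    """Return all cues associated with a given urgency level."""
    return [c for c in CUE_TAXONOMY if c.urgency is level]

print([c.name for c in cues_by_urgency(Urgency.HIGH)])
```

Framing the taxonomy as explicit cue-to-meaning records is one way to test, as the pilot did, whether users identify cues accurately and associate them consistently with the intended context.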

Katzman and Oron-Gilad Eurohaptics 2018


 



Understanding and Resolving Failures in Human-Robot Interaction

Shanee Honig and I have just finished a literature review on resolving failures in HRI. The full publication can be found in Frontiers.

We mapped a taxonomy of failures, separating technical failures from interaction failures (see the figure below).

Figure: A human-robot failure taxonomy

After reviewing the cognitive considerations that influence people’s ability to detect and solve robot failures, as well as the literature on failure handling in human-robot interaction, we developed an information processing model called the Robot Failure Human Information Processing (RF-HIP) Model. It is modeled after Wogalter’s C-HIP (an elaboration of Shannon and Weaver’s 1948 model of communication) and describes the way people perceive, process, and act on failures in human-robot interaction.

  • RF-HIP can be used as a tool to systematize the assessment process involved in determining why a particular approach to handling failure is successful or unsuccessful in order to facilitate better design.

 

Figure: The RF-HIP (Robot Failure Human Information Processing) Model

Abstract:

While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people’s perceptions and feelings towards robots, and how these effects can be mitigated. 52 studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction, and mitigating failures. Since little research has been done on these topics within the Human-Robot Interaction (HRI) community, insights from the fields of human computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing (RF-HIP)), that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a tool to promote the development of user-centered failure-handling strategies for human-robot interactions.
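As a reading aid only, the three main parts of RF-HIP named in the abstract can be sketched as a simple processing pipeline in Python. The part names follow the abstract, while the data structure, fields, and example episode are invented for illustration and do not reproduce the published model:

```python
from dataclasses import dataclass, field

# The three main parts of RF-HIP, as named in the abstract; this
# representation is illustrative, not the published figure.
RF_HIP_PARTS = [
    "communicating failures",
    "perception and comprehension of failures",
    "solving failures",
]

@dataclass
class FailureEpisode:
    description: str
    failure_type: str                  # "technical" or "interaction", per the taxonomy above
    context: dict = field(default_factory=dict)
    completed_parts: list = field(default_factory=list)

    def advance(self) -> str:
        """Move the episode through the next RF-HIP part and return it."""
        nxt = RF_HIP_PARTS[len(self.completed_parts)]
        self.completed_parts.append(nxt)
        return nxt

# Hypothetical episode: an untrained user facing a technical failure.
episode = FailureEpisode("robot fails to grasp a cup", failure_type="technical",
                         context={"user": "untrained", "mitigation": "verbal apology"})
while len(episode.completed_parts) < len(RF_HIP_PARTS):
    print(episode.advance())
```

In the model itself, each part contains several stages and is shaped by contextual considerations and mitigation strategies; the sketch only shows the top-level flow.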

 

