Archive for category human factors engineering

Calibrating Adaptable Automation to Individuals

At last, it's out in public. This study, co-authored with Jen Thropp, James Szalma, and P.A. Hancock, investigates whether and how the level of automation (LOA) should be calibrated to individuals' traits (here, specifically, attentional control).

To read more, click on the link below.

Abstract:

A detailed understanding of operator individual differences can serve as a foundation for developing a critical window on effective, adaptable, user-centered automation, and even for more autonomous systems. Adaptable automation that functions according to such principles and parameters has many potential benefits in increasing operator trust and acceptance of the automated system. Our current study provides an assessment of the way that individual differences in attentional control (AC) affect the preference for a desired level of automation (LOA). Participants who scored low or high on AC were either allowed to choose among four possible LOAs or restricted to a predetermined LOA. These manipulations were engaged while the operator was performing visual and auditory target detection tasks. The AC level was found to be inversely proportional to the LOA preference. Operators also performed better when they were preassigned to a fixed LOA rather than given a choice. Individual differences can thus be shown to affect performance with automated systems and should be considered in associated design processes. When deciding whether to give the operator control over LOA in a complex system, engineers should consider that the amount of control that operators may want does not necessarily reflect their actual needs.

 

https://ieeexplore.ieee.org/document/8396314/

 



Open positions in Human Factors engineering, Human-robot interaction or Human computer interaction

BGU is seeking excellent candidates for senior or junior faculty positions in the Dept. of Industrial Engineering and Management. Candidates will be part of the Human Factors Engineering team.

Relevant topics include HCI, HRI, usability, HFE, and affiliated fields.

For more information, please contact Prof. Tal Oron-Gilad at orontal@bgu.ac.il



Visual search strategies of child-pedestrians in road crossing tasks

Hagai Tapiro, Anat Meir, Yisrael Parmet & Tal Oron-Gilad

Presentation at HFES-EU Annual meeting, Torino 2013

Abstract

Children are over-represented in road accidents, often due to their limited ability to perform well in road-crossing tasks. The present study examined children's visual search strategies in hazardous road-crossing situations. A sample of 33 young participants (ages 7-13) and 21 adults observed 18 different road-crossing scenarios in a 180° dome-shaped mixed-reality simulator. Gaze data were collected while participants made crossing decisions and were used to characterize their visual scanning strategies. Results showed that age group, limited field of view, and the presence of moving vehicles affect the way pedestrians allocate their attention in the scene. Adults tend to spend relatively more time on farther peripheral areas of interest than younger pedestrians do. The oldest child age group (11-13) also demonstrated more resemblance to the adults in its visual scanning strategy, which may indicate a learning process arising from experience and maturation. Characterization of child pedestrians' eye movements can be used to determine their readiness for independence as pedestrians. The results of this study emphasize the differences among age groups in terms of visual scanning; this information can help promote awareness and inform training directions.

Dirichlet regression model and analysis

For each scenario, five areas of interest (AOIs) were defined (as shown in the Figure). The close central range was defined as the 10 meters of road on each side of the pedestrian's point of view (AOI 3). Symmetric areas to the right and left of center were then defined. The medium right/left range (AOIs 2/4) was the part of the road at least 10 meters but less than 100 meters to the right/left of the point of view. The far right/left range (AOIs 1/5) was the part of the road 100 meters or more to the right/left of the pedestrian's point of view.
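These distance bands translate directly into a lookup on lateral distance. A minimal sketch (the signed-offset convention with positive offsets to the right, and the tie-breaking at exactly 10 m, are assumptions for illustration, not taken from the paper):

```python
def classify_aoi(offset_m: float) -> int:
    """Map a signed lateral offset from the pedestrian's point of view
    (metres; positive = right, negative = left, by assumption) to the
    five AOIs: 3 = close central, 2/4 = medium right/left, 1/5 = far
    right/left."""
    d = abs(offset_m)
    if d <= 10:
        return 3                           # close central range
    if d < 100:
        return 2 if offset_m > 0 else 4    # medium right / left
    return 1 if offset_m > 0 else 5        # far right / left
```

Gaze samples bucketed this way yield, per participant and scenario, a five-part composition of viewing time.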


Open this link to see a sample video of a scenario as seen by a young pedestrian

Why Dirichlet?

  • For each participant and scenario, the gaze distribution over the five AOIs sums to one.
  • Gaze distribution is therefore compositional data, i.e., non-negative proportions with a unit sum.
  • Such data arise whenever we classify objects into disjoint categories and record their resulting relative frequencies, or partition a whole measurement into percentage contributions from its various parts.
  • Applying statistical methods intended for unconstrained data often leads to inappropriate inference.
  • Dirichlet regression, as suggested by Hijazi and Jernigan (2009), is better suited to such cases.
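The unit-sum property can be illustrated with a small sketch: a Dirichlet draw is, by the standard construction, a set of normalised independent Gamma variates, so any sampled gaze composition is automatically non-negative and sums to one. The alpha values below are made up for illustration, not fitted from the study's data:

```python
import random

def sample_dirichlet(alphas, rng=random):
    """One Dirichlet draw via the standard Gamma construction:
    normalise independent Gamma(alpha_i, 1) variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

# Hypothetical gaze composition over the five AOIs:
p = sample_dirichlet([1.0, 2.0, 6.0, 2.0, 1.0])
assert all(x >= 0 for x in p) and abs(sum(p) - 1.0) < 1e-9  # compositional
```

Each draw is one plausible composition; Dirichlet regression models how the alpha parameters vary with covariates such as age group.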

How to use?

  • The Dirichlet regression model was fitted using the DirichletReg package in R. A backward elimination procedure showed that the best-fitting model has three significant main effects.

What did we find?

  • The dependent variable was the vector of AOI proportions, and the independent variables were age group, POV, and FOV; all were statistically significant (p < 0.05). Predicted means for the percentage of time spent in each AOI for each age group, based on the Dirichlet regression model, are shown in the following figure and reveal differences among age groups. Note how children aged 9-10 spend more time gazing at the central area, and note the differences between the mid-left and mid-right areas.
Predicted means (in each AOI) using Dirichlet model across all scenarios per age group

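The predicted means follow from the Dirichlet mean formula, E[p_j] = alpha_j / sum(alpha), where in a Dirichlet regression the alphas come from the fitted linear predictors for a given covariate profile. A minimal sketch with made-up alphas (not the study's fitted values):

```python
def dirichlet_mean(alphas):
    """Expected proportions of a Dirichlet distribution:
    E[p_j] = alpha_j / sum(alpha)."""
    total = sum(alphas)
    return [a / total for a in alphas]

# Hypothetical fitted alphas for one age group over AOIs 1..5;
# the large central alpha yields a dominant central proportion.
means = dirichlet_mean([1.2, 2.5, 7.0, 2.1, 1.0])
```

Evaluating this per age group gives the per-AOI predicted proportions plotted in the figure.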



Tools and Techniques to support operators in MOMU (Multiple Operator Multiple UAV) environments

The ‘RICH’ (Rapid Immersion tools/techniques for Coordination and Hand-offs) research project is a US-Israel collaboration. The project aims to research, design, and develop tools, techniques, and procedures to aid operators in MOMU environments, facilitating task switching and/or coordination with other operators, all for the benefit of improving overall mission performance. The Israeli partners on this task are Jacob Silbiger from Synergy Integration, Lt. Col. Michal Rottem-Hovev from the IAF, and Drs. Tal Oron-Gilad and Talya Porat from the Dept. of Industrial Engineering and Management. The US partners are Jay Shively, Lisa Fern (Human Systems Integration Group Leader, Aeroflightdynamics Directorate, US Army Research Development and Engineering Command (AMRDEC)), and Dr. Mark Draper (USAFRL). RICH is part of the US/Israel MOA (memorandum of agreement) on Rotorcraft Aeromechanics & Man/Machine Integration Technology.

Here I describe in brief the goals of the Israeli team and some of the tools developed.

Motivation: Multiple operators controlling multiple unmanned aerial vehicles (MOMU) can be an efficient operational setup for reconnaissance and surveillance missions. However, it dictates switching and coordination among operators. Efficient switching is time-critical and cognitively demanding, thus vitally affecting mission accomplishment. As such, tools and techniques (T&Ts) to facilitate switching and coordination among operators are required. Furthermore, development of metrics and test-scenarios becomes essential to evaluate, refine, and adjust T&Ts to the specifics of the operational environment.

Tools: Tools can be divided into two categories: 1) tools that facilitate ‘quick setup’, i.e., that ease the operator into a new mission or area of operation; and 2) tools that facilitate ongoing missions where acquiring new UAVs, delegating, or switching is necessary to complete the tasks at hand. The Israeli team focused primarily on tools of the second type. Some “successful” tools have been the Castling rays (see the CHI paper for details), the TIE/coupling tool, and the Maintain coverage area tool.

Several outcomes of this effort have been presented and appear in the following conference proceedings.


Child Pedestrian Crossing Study – a few updates

We have just completed this study. The analysis of the results and the full report are in preparation.

The objective of the research is to lay the foundations for examining whether training child-pedestrians’ hazard-perception (HP) skills while crossing a road may improve their ability to perceive potentially hazardous situations and to predict hazards before they materialize.

  • A first step in developing a training program is to form an understanding of child-pedestrians’ traffic behavior patterns. Comparing adults and children provides a depiction of which elements in the traffic environment are crucial for the road-crossing task.
  • In the present study, children and adults participated in a two-phase experiment. They observed typical urban scenarios (see Figure 1) from a pedestrian’s point of view (see Figure 2) and were required to: (1) press a response button each time they felt it was safe to cross; and (2) describe the features they perceived as relevant to a safe road-crossing decision, i.e., the conceptual model each group of pedestrians holds. Participants’ eye movements were recorded throughout the experiment using a helmet-mounted tracker (Model H6-HS, Eyetrack 6000).
  • To achieve this, a three-dimensional database of a prototypical Israeli city was built in cooperation with b.design (http://www.b-d.co.il/), a leading provider of 3-D content. Cars, trees, billboards, and various other urban elements were also designed uniquely for this environment. Using VR-Vantage and VR-Forces, different scenarios were developed to examine crossing behavior under various conditions.

 


Figure 1. The generic city simulated environment presented in the dome setting (it looks a bit awkward here because it’s intended to be projected on a dome screen). The field of view is: (1) unrestricted (above); (2) partially obscured by the road’s curvature (middle); (3) partially obscured by parked vehicles (below).

 


Figure 2. Simulated environment from a child-pedestrian’s point of view.

