Archive for category: Human Factors Engineering

Open positions in Human Factors Engineering, Human-Robot Interaction, or Human-Computer Interaction

BGU is seeking excellent candidates for senior or junior faculty positions in the Dept. of Industrial Engineering and Management. Candidates will join the Human Factors Engineering team.

Relevant topics include HCI, HRI, usability, HFE, and affiliated fields.

For more information, please contact Prof. Tal Oron-Gilad at orontal@bgu.ac.il.



Visual search strategies of child-pedestrians in road crossing tasks

Hagai Tapiro, Anat Meir, Yisrael Parmet & Tal Oron-Gilad

Presentation at the HFES-EU Annual Meeting, Torino, 2013

Abstract

Children are over-represented in road accidents, often due to their limited ability to perform well in road-crossing tasks. The present study examined children’s visual search strategies in hazardous road-crossing situations. A sample of 33 young participants (ages 7-13) and 21 adults observed 18 different road-crossing scenarios in a 180° dome-shaped mixed-reality simulator. Gaze data were collected while participants made crossing decisions and were used to characterize their visual scanning strategies. Results showed that age group, limited field of view, and the presence of moving vehicles affect the way pedestrians allocate their attention in the scene. Adults tend to spend relatively more time on the farther, peripheral areas of interest than younger pedestrians do. The oldest child age group (ages 11-13) resembled the adults most closely in visual scanning strategy, which may indicate a learning process arising from experience and maturation. Characterizing child pedestrians’ eye movements can be used to determine their readiness to act as independent pedestrians. The results of this study emphasize the differences among age groups in visual scanning; this information can help promote awareness and inform training directions.

Dirichlet regression model and analysis

For each scenario, five areas of interest (AOIs) were defined (as shown in the figure below). The close central range was defined as the 10 meters of road on each side of the pedestrian’s point of view (AOI 3). Symmetrical areas were then defined to the right and left of the center. The medium right/left range (AOIs 2/4) was the part of the road at least 10 meters but less than 100 meters to the right/left of the point of view. The far right/left range (AOIs 1/5) was the part of the road 100 meters or more to the right/left of the pedestrian’s point of view.

[Figures: the five areas of interest marked on the road scene]
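
To make these ranges concrete, here is a small hypothetical R helper (not part of the study’s software; the function name and the sign convention, positive to the right of center, are assumptions) that assigns a gazed road point to an AOI by its signed distance from the pedestrian’s point of view:

```r
# Hypothetical AOI classifier; d is the signed distance (in meters) of the
# gazed point from the pedestrian's point of view (positive = right,
# negative = left), following the range definitions given above.
aoi_of <- function(d) {
  if (abs(d) <= 10)           3L  # close central range (AOI 3)
  else if (d > 0 && d < 100)  2L  # medium right range (AOI 2)
  else if (d < 0 && d > -100) 4L  # medium left range (AOI 4)
  else if (d >= 100)          1L  # far right range (AOI 1)
  else                        5L  # far left range (AOI 5)
}

sapply(c(-150, -50, 0, 50, 150), aoi_of)  # returns 5 4 3 2 1
```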

Open this link to see a sample video of a scenario as seen by a young pedestrian.

Why Dirichlet?

  • For each participant and scenario, the gaze distribution over the five AOIs sums to one.
  • Gaze distribution is therefore compositional data, i.e., non-negative proportions with a unit sum.
  • Such data arise whenever we classify objects into disjoint categories and record their resulting relative frequencies, or partition a whole measurement into percentage contributions from its various parts.
  • Applying statistical methods intended for unconstrained data to compositional data often leads to inappropriate inference.
  • Dirichlet regression, as suggested by Hijazi and Jernigan (2009), is more suitable for such cases.
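
For reference, this is the textbook Dirichlet density over the K = 5 AOI proportions (a standard formulation, not reproduced from the paper), with each concentration parameter tied to the covariates through a log link, as is common in Dirichlet regression:

```latex
% Dirichlet density over K proportions, with a log link relating each
% concentration parameter alpha_k to the covariate vector x:
f(y_1,\dots,y_K;\alpha) =
  \frac{\Gamma\!\left(\sum_{k=1}^{K}\alpha_k\right)}{\prod_{k=1}^{K}\Gamma(\alpha_k)}
  \prod_{k=1}^{K} y_k^{\alpha_k - 1},
\qquad y_k > 0,\quad \sum_{k=1}^{K} y_k = 1,
\qquad \log \alpha_k = \mathbf{x}^{\top}\boldsymbol{\beta}_k
```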

How to use?

  • The Dirichlet regression model was fitted using the DirichletReg package in R. A backward elimination procedure found that the best-fitting model has three significant main effects (a sketch of this workflow follows below).
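
Below is a minimal sketch of such a workflow, assuming a data frame with one row per participant-scenario pair, five AOI proportion columns, and predictor columns named AgeGroup, POV, and FOV; the file and column names are illustrative, not taken from the study.

```r
# Minimal sketch of a Dirichlet regression workflow (not the study's
# actual analysis script). File and column names are illustrative.
library(DirichletReg)

gaze <- read.csv("gaze_proportions.csv")  # hypothetical input file

# Wrap the five AOI proportion columns as a compositional response;
# DR_data() checks non-negativity and normalizes each row to sum to one.
gaze$AOI <- DR_data(gaze[, c("AOI1", "AOI2", "AOI3", "AOI4", "AOI5")])

# Full main-effects model
fit_full <- DirichReg(AOI ~ AgeGroup + POV + FOV, data = gaze)

# One backward-elimination step: refit without a term and compare the
# nested models.
fit_drop <- DirichReg(AOI ~ AgeGroup + POV, data = gaze)
anova(fit_drop, fit_full)

summary(fit_full)  # per-AOI coefficient estimates and significance
```

A usage note: anova() on two nested DirichReg fits performs a likelihood-ratio test, which is one common way to drive backward elimination.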

What did we find?

  • The dependent variable was the vector of AOI gaze proportions and the independent variables were age group, POV, and FOV; all were statistically significant (p < 0.05). Predicted means for the percentage of time spent in each AOI for each age group, based on the Dirichlet regression model, are shown in the following figure and reveal differences among age groups. Note how children aged 9-10 spend more time gazing at the central area; note also the differences between mid-left and mid-right.
Predicted means (in each AOI) using the Dirichlet model, across all scenarios, per age group
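
A rough sketch of how such per-group predicted means could be extracted from the fitted model (continuing the illustrative names from the snippet above):

```r
# Average the fitted mean proportions within each age group to approximate
# the per-group predicted shares shown in the figure (illustrative only).
mu_hat <- fitted(fit_full)  # one row per observation, one column per AOI
aggregate(as.data.frame(mu_hat),
          by = list(AgeGroup = gaze$AgeGroup), FUN = mean)
```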



Tools and Techniques to support operators in MOMU (Multiple Operator Multiple UAV) environments

The ‘RICH’ (Rapid Immersion tools/techniques for Coordination and Hand-offs) research project is a US-Israel collaboration. The project aims to research, design, and develop tools, techniques, and procedures that aid operators in MOMU environments, facilitating task switching and/or coordination with other operators, all for the benefit of improving overall mission performance. The Israeli partners on this task are Jacob Silbiger from Synergy Integration, Lt. Col. Michal Rottem-Hovev from the IAF, and Drs. Tal Oron-Gilad and Talya Porat from the Dept. of Industrial Engineering and Management. The US partners are Jay Shively and Lisa Fern (Human Systems Integration Group, Aeroflightdynamics Directorate, US Army Research Development and Engineering Command (AMRDEC)) and Dr. Mark Draper (US Air Force Research Laboratory). RICH is part of the US/Israel MOA (memorandum of agreement) on Rotorcraft Aeromechanics & Man/Machine Integration Technology.

Here I briefly describe the goals of the Israeli team and some of the tools developed.

Motivation: Multiple operators controlling multiple unmanned aerial vehicles (MOMU) can be an efficient operational setup for reconnaissance and surveillance missions. However, it dictates switching and coordination among operators. Efficient switching is time-critical and cognitively demanding, and thus vitally affects mission accomplishment. As such, tools and techniques (T&Ts) to facilitate switching and coordination among operators are required. Furthermore, developing metrics and test scenarios becomes essential to evaluate, refine, and adjust T&Ts to the specifics of the operational environment.

Tools: Tools can be divided into two categories: 1) tools that facilitate a ‘quick setup’, i.e., that ease the operator’s way into a new mission or area of operation; and 2) tools that facilitate ongoing missions, where acquiring new UAVs, delegating, or switching is necessary to complete the tasks at hand. The Israeli team focused primarily on tools of the second type. Some “successful” tools have been the Castling rays (see the CHI paper for details), the TIE/coupling tool, and the Maintain coverage area tool.

Several outcomes of this effort have been presented and appear in conference proceedings.


Child Pedestrian Crossing Study – a few updates

We have just completed this study. Analysis of the results and a full report are being prepared.

The objective of the research is to lay the foundations for examining whether training child-pedestrians’ hazard perception (HP) skills for road crossing may improve their ability to perceive potentially hazardous situations and to predict hazards prior to their materialization.

  • A first step in developing a training program is to form an understanding of child-pedestrians’ traffic behavior patterns. Comparing adults and children provides a depiction of which elements in the traffic environment are crucial for the road-crossing task.
  • In the present study, children and adults participated in a two-phase experiment. They observed typical urban scenarios (see Figure 1) from a pedestrian’s point of view (see Figure 2) and were required to: (1) press a response button each time they felt it was safe to cross; and (2) describe the features they perceived as relevant to a safe road-crossing decision, i.e., the conceptual model each group of pedestrians holds. Participants’ eye movements were recorded throughout the experiment using a helmet-mounted tracker (Model H6-HS, Eyetrack 6000).
  • To achieve this, a three-dimensional database of a prototypical Israeli city was built in cooperation with b.design (http://www.b-d.co.il/), a leading provider of 3-D content. Cars, trees, billboards, and various other urban elements were designed uniquely for this environment. Using VR-Vantage and VR-Forces, different scenarios were developed to examine crossing behavior under various conditions.

 


Figure 1. The generic simulated city environment presented in the dome setting (it looks a bit awkward here because it is intended to be projected on a dome screen). The field of view is: (1) unrestricted (top); (2) partially obscured by the road’s curvature (middle); (3) partially obscured by parked vehicles (bottom).

 


Figure 2. Simulated environment from a child-pedestrian’s point of view.



Inexperienced drivers training program – Trailer

Driving is a demanding task combining complex motor and cognitive skills. A typical driving task may include maneuvering among other vehicles, paying attention to various road users (e.g., drivers and pedestrians), and discerning static and dynamic road signs and obstacles. The total amount and rate of information presented to the driver exceed what the human brain can handle at any given time. Thus, the road presents a vast array of accessible information, but drivers notice and attend to only a small fraction of it.

Recent evidence suggests that among all driving skills, only hazard awareness, the ability of drivers to read the road and identify hazardous situations, correlates with traffic crashes (e.g., Horswill and McKenna, 2004). Furthermore, McKenna et al. (2006) showed that improving hazard awareness skills (via training to identify hazardous situations) resulted in a decrease in risk-taking attitudes among novice drivers. These findings and others (e.g., Pradhan et al., 2009; Borowsky et al., 2010; Pollatsek et al., 2006; Deery, 1999) acknowledge that young novice drivers may be less aware of potential hazards and risks embedded in a situation, and are thus more susceptible to taking risks while driving because of this lack of awareness.

References

Borowsky, A., Shinar, D., & Oron-Gilad, T. (2010). Age and skill differences in driving-related hazard perception. Accident Analysis and Prevention, 42, 1240-1249.

Deery, H. A. (1999). Hazard and risk perception among young novice drivers. Journal of Safety Research, 30(4), 225-236.

Horswill, M. S., & McKenna, F. P. (2004). Drivers’ hazard perception ability: Situation awareness on the road. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application (pp. 155-175). Aldershot, United Kingdom: Ashgate.

McKenna, F. P., Horswill, M. S., & Alexander, J. L. (2006). Does anticipation training affect drivers’ risk taking? Journal of Experimental Psychology: Applied, 12, 1-10.

Pollatsek, A., Narayanaan, V., Pradhan, A., & Fisher, D. L. (2006). Using eye movements to evaluate a PC-based risk awareness and perception training program on a driving simulator. Human Factors, 48, 447-464.

Pradhan, A. K., Pollatsek, A., Knodler, M., & Fisher, D. L. (2009). Can younger drivers be trained to scan for information that will reduce their risk in roadway traffic scenarios that are hard to identify as hazardous? Ergonomics, 52, 657-673.


Utilizing Hand Gesture Interaction in Standard PC-based Interfaces

  • This work was conducted by my former graduate student Jenny Grinberg. It focused on how a gesture vocabulary should be applied when gestures are used in standard window interfaces (windows, files, and folders). We are currently in the process of writing up the publication.
  • Interface technologies have only started to adopt hand gestures, and most human-computer controls still require physical devices such as a keyboard or mouse.
  • To evaluate the influence of keyboard interaction, gestures, and combined interaction on user experience, an existing hand-gesture recognition system (developed by Stern & Efros, 2005) was integrated into a common Windows environment.
  • Two experiments varied in the way the Gesture Vocabulary (GV) was introduced: in bulk (Experiment 1) or through gradual learning (Experiment 2).
  • Results indicated that all gestures in the GV were simple and could be executed after a relatively short learning period.
  • Nevertheless, keyboard interaction remained the most efficient, least demanding, and most preferred interaction method.
  • Performance and subjective ratings of gestures and combined interaction were significantly different from those of the keyboard, but not from each other.

Interesting differences among genders emerged:

  • Combined interaction was preferred over gestures alone among women.
  • With regard to GV introduction, Experiment 1 revealed that performance time and error rate with gestures were significantly higher for females than for males. However, gradual introduction of the gestures (Experiment 2) improved females’ subjective satisfaction and decreased their performance time without worsening the error rate. For males, no such differences were found.
  • Men and women related differently to the gesture displays, and women perceived textual labels as more useful.

Here is a screenshot of the application. It consists of a standard window that enables users to perform the most commonly used file and folder commands (e.g., open a folder, move the cursor to the right folder) via hand gestures or via the keyboard. To the right is the gesture feedback window (part of the gesture recognition system developed by Stern & Efros, 2005).

[Screenshot: the main task window with the gesture feedback window to its right]

  • To the right: the visual display as captured by the gesture recognition camera.
  • To the left: the main task window containing files in folders.
  • At the bottom of the screen: various parameters regarding the hand’s position, and a label with the name of the current command.

Gesture Vocabulary (GV) design

Nine dynamic gestures were defined, one of them serving as the start/end position. The other eight represented the most commonly used commands in file management and navigation: moving right, left, up, and down; entering and exiting a folder; and copy and paste. A hypothetical encoding of this vocabulary is sketched below.

[Image: the nine gestures in the vocabulary]
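
The vocabulary can be summarized as a simple gesture-to-command mapping; here is a hypothetical encoding in R (gesture names and command labels are invented for illustration, not the recognizer’s actual output):

```r
# Hypothetical encoding of the nine-gesture vocabulary described above:
# one start/end gesture plus eight command gestures.
gesture_vocabulary <- c(
  start_end = "start/end gesture control",
  right     = "move cursor right",
  left      = "move cursor left",
  up        = "move cursor up",
  down      = "move cursor down",
  enter     = "enter (open) folder",
  exit      = "exit (close) folder",
  copy      = "copy selection",
  paste     = "paste"
)

# Dispatching a recognized gesture label to its command:
unname(gesture_vocabulary["enter"])  # "enter (open) folder"
```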

Here is a video demo of the various gestures used.

Gesture Vocabulary demo

 

Initial findings were reported in: Grinberg, J., & Oron-Gilad, T. (2009). Utilizing hand-gesture interaction in standard PC-based interfaces. Proceedings of the International Ergonomics Association (IEA) 2009, Beijing, China.

