Archive for category Military & Law Enforcement Applications
Close Target Reconnaissance: A Field Evaluation of Dismounted Soldiers Utilizing Video Feed From an Unmanned Ground Vehicle in Patrol Missions
By Oron-Gilad and Parmet (2016), published in the Journal of Cognitive Engineering and Decision Making.
- How is the decision cycle of dismounted soldiers affected by the use of a handheld device displaying video feed from an unmanned ground vehicle during a patrol mission?
- Via a handheld monocular display, participants received a route map and sensor imagery from the vehicle that was ~20–50 m ahead.
- Twenty-two male participants were divided into two groups, with or without the sensor imagery. Each participant navigated 2 km in a MOUT (military operations on urban terrain) training facility while encountering civilians, moving and stationary suspects, and improvised explosive devices.
- Boyd’s OODA loop (observe–orient–decide–act) framework was used to examine the soldiers’ decision-making process.
- The experimental group was slower to respond to threats and to orient. They also reported higher workload, more difficulties in allocating their attention to their environment, and more frustration.
- The breakdown of performance metrics into the OODA loop components revealed the major difficulties in the decision-making process and highlighted the need for new roles in combat-team setups and for additional training when unmanned vehicle sensor imagery is introduced.
- The use of a handheld monocular device for gathering intelligence from a UGV impaired participants’ ability to detect events with their own eyes.
- Soldiers were aware of the toll the display device took on their operational mission, yet it continuously attracted their attention.
- Soldiers must understand the capabilities and limitations of the unmanned vehicle and its sensor video, and they should be able to control the pace of its progress.
- Team setups in which only designated roles attend to the sensor video, while more than one individual attends to the immediate environment, may be a better way to utilize the technology.
At last, a new publication in Frontiers in Psychology, co-authored with Talya Porat, Michal Rottem-Hovev, and Jacob Silbiger (Synergy Integration).
In this article we conduct a retrospective examination of studies concerned with the operator-to-UAS ratio, i.e., how many systems a single operator should control, and whether a team should share control of multiple systems (multiple operators – multiple UASs; MOMU).
Proliferation in the use of Unmanned Aerial Systems (UASs) in civil and military operations has presented a multitude of human factors challenges, from how to bridge the gap between demand and availability of trained operators to how to organize and present data in meaningful ways. Utilizing the Design Research Methodology (DRM), a series of closely related studies with subject matter experts (SMEs) demonstrates how the focus of research gradually shifted from “how many systems can a single operator control” to “how to distribute missions among operators and systems in an efficient way.” The first set of studies aimed to establish the modal number, i.e., how many systems a single operator can supervise and control. It was found that an experienced operator can efficiently supervise up to 15 UASs using moderate levels of automation, and control (mission and payload management) up to 3 systems. Once this limit was reached, a single operator’s performance was compared to that of a team controlling the same number of systems. In general, teams performed better. Hence, design efforts shifted toward developing tools that support teamwork environments of multiple operators with multiple UASs (MOMU). In MOMU settings, when the tasks are similar or when areas of interest overlap, a single operator seems to have an advantage over a team that needs to collaborate and coordinate; in all other cases, however, a team was advantageous over a single operator.
At the HFES Annual Meeting we presented two studies related to interfaces for dismounted soldiers.
Tactile Interfaces for Dismounted Soldiers: User-perceptions on Content, Context and Loci
Nuphar Katzman, Tal Oron-Gilad, and Yael Salzer
Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2015; 59:421-425.
Interfaces for dismounted soldiers: examination of non-perfect visual and tactile alerts in a simulated hostile urban environment
Tal Oron-Gilad, Yisrael Parmet, and Daniel Benor
Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2015; 59:145-149.
In January 2015, the Gordon Center for Systems Engineering at the Technion held its annual meeting. This year the meeting was dedicated to Human Factors and its relevance to system design.
During the day, lectures focused on the importance of integrating human factors into systems design. Two communities, human factors practitioners and researchers on one side and systems engineers from leading industries in Israel on the other, had the opportunity to interact and learn. Clearly, there is a need for better integration of the human factors engineering discipline in product and project development.
- Oron-Gilad, T., Hancock, P. A., & Helmick-Rich, J. (accepted October 2013). Coding warnings without interfering with dismounted soldiers’ missions. Applied Ergonomics.
Objectives: Warnings are an effective way to communicate hazard, yet they can also increase task demand when presented to operators involved in real-world tasks. Furthermore, in military-related tasks warnings are often given in codes to avoid counter-intelligence, which may foster additional working memory load. Background: Adherence to warnings in the military domain is crucial to promote safety and reduce accidents and injuries. The empirical question arises as to how aspects of coding the warning may interfere with the primary task the individual is currently performing and vice versa. Method: Six experimental conditions were designed to assess how warning-code storage format, response format, and increasing working memory demand (retention) affected both performance on the primary task and the rate of compliance to warnings, considered here as the secondary task. Results: Results revealed that the combination of warning-code storage and response format affected compliance rate and the highest compliance occurred when warnings were presented as pictorials and responses were coded verbally. Contrary to the proposed hypotheses, warning storage format did not affect performance on the primary task, which was only affected by the level of working memory demand. Thus, the intra-modal warning storages did not interfere with the visual/spatial nature of the primary operational task. However, increase in working memory demand, by increasing the number of memorized warning codes, had an effect on both compliance rate and primary task performance. Conclusions: Rather than warning code storage alone, it is the coupling of warning storage and response format that has the most significant effect on compliance.
The WCCOM (Warning-Color COding Modality) compliance task
This task was developed in collaboration with our colleague Prof. Paul Ward, now at the University of Greenwich in the UK.
The task has storage and retention components. Each warning item is paired with one of ten possible colors. The storage component requires memorizing the color associated with each warning symbol (e.g., boots – black). The retention component involves recalling the stored symbol from the color presented (e.g., black means boots). Both components of the task, the warning item and the color, were displayed in the same modality. There were three storage options: pictorial, written, or verbal, as shown in the Figure. The task aims to examine the sensitivity of working memory to presentation modality while the participant is engaged in a demanding operational task.
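As a rough illustration, the storage and retention components of the task can be sketched in code. The symbol–color pairs and the trial flow below are hypothetical examples for illustration only, not the actual experimental stimuli or software.

```python
# Illustrative sketch of the WCCOM storage/retention structure.
# The symbol-color pairs here are invented examples, not the real stimuli.
WARNING_COLOR_PAIRS = {
    "boots": "black",      # e.g., boots - black, as in the description above
    "helmet": "red",
    "gas_mask": "green",
}

# The three storage formats described in the study.
STORAGE_FORMATS = ("pictorial", "written", "verbal")

def storage_phase(pairs):
    """Storage component: memorize the color associated with each warning symbol."""
    return dict(pairs)

def retention_trial(memorized, color_cue):
    """Retention component: given a color cue, recall which warning symbol it stands for."""
    color_to_symbol = {color: symbol for symbol, color in memorized.items()}
    return color_to_symbol.get(color_cue)

memorized = storage_phase(WARNING_COLOR_PAIRS)
print(retention_trial(memorized, "black"))  # black means boots
```

Increasing the number of memorized pairs is what raises the working memory (retention) demand manipulated in the experiment.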
The UVID International Conference was held on 10/10/2013.
Our paper “Is More Information Better? How Dismounted Soldiers Use Video Feed From Unmanned Vehicles: Attention Allocation and Information Extraction Considerations” won the best paper award for research articles in the area of unmanned systems and human factors. The award ceremony will take place at the conference.
The conference program can be seen at the following link: