Preprint
Article

Using Task Support Requirements During Socio-Technical Systems Design


A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 11 July 2024; Posted: 15 July 2024

Abstract
Critical phases in socio-technical systems (STS) design are the definition of functional requirements for automated or software-supported human activities while also addressing social and human interaction issues. To define automation support for human operations, STS designers need to ensure that specifications satisfy not only the non-functional requirements (NFRs) of the system but also those of its human actors, such as human reliability and workload. However, these human factors aspects are not addressed sufficiently by traditional STS design approaches, which can lead to system failure or rejection. This paper proposes a new STS design method that addresses this problem and introduces a novel type of requirement, Task Support Requirements (TSRs), which assist in specifying the functionality a system should provide to support human agents in undertaking their tasks while addressing human limitations. The proposed method synthesizes a requirements/software engineering approach to STS design with functional allocation and an HCI perspective, facilitating the application of human factors knowledge in conceptual models and evaluation through VR simulation. A case study methodology is employed in this work that allows in-depth, multi-faceted exploration of the complex issues that characterize STS.
Subject: Computer Science and Mathematics - Information Systems

1. Introduction

Socio-technical systems (STS) exhibit technical and social complexity and hence are more difficult to specify than software or hardware systems alone [1,2]. The technical and social parts of STS interact in complex ways, so identifying suitable specifications is far from easy [3,4]. Capturing requirements is a central activity in STS design and implementation, and failing to thoroughly capture and validate requirements is a key reason for system failure [5,6]. During requirements capture, designers need to decide on the most suitable level of automation, also referred to as functional allocation (FA). During this activity, technology should be viewed as a tool that assists humans in meeting their goals, rather than implemented because of assumed efficiency or cost savings [7]. Functional allocation and other human factors issues, which refer to the technological, environmental and organisational aspects that influence human performance, are rarely modelled and linked to requirements in STS design [8]. The need to address human factors in STS has been highlighted in domains such as healthcare [9], transportation [10,11], military [12], city design [13], policy design [14] and business [15], with [16] emphasizing that designers should consider human psychology throughout STS design. Methods that address STS requirements focus on work processes [17] and apply formal modelling such as REASSURE [18], which may utilise input from experts, although they do not explicitly address human factors.
Practitioners in STS design [16,19,20] have requested methods that are not just descriptive but explanatory or predictive in nature, with the ability to test integrated human activity and task support through computer-based models and simulations such as system dynamics [21,22], agent-based modelling or digital twins in virtual settings [23]. These techniques, however, have not been used effectively for designing STS with human factors in mind. New simulation approaches are required to link the top-level aspects of systems with low-level specifications that support human factors concerns [21]. Evidence from the application of two popular design methods used by human factors experts [24], cognitive work analysis and its successor, the cognitive work analysis design toolkit [7], highlights that software tools, simulations and computer-based modelling are needed to evaluate the effects of different designs. In this paper we address this limitation through the application of VR simulation, while also introducing a new type of requirement to bridge the gap between the human and technical facets of STS. We define these as Task Support Requirements (TSRs), which explicitly describe how technology can support human activity (tasks) and performance while addressing human cognitive limitations. TSRs also aim to provide a ‘lingua franca’ for software engineers and HF experts to discuss requirements relating to functional allocation and other HF issues.
The proposed method utilises virtual prototyping based on [25] and is related to simulation-based requirements validation methods [26,27,28] that utilise Bayesian networks and evolutionary computing to validate non-functional requirements (NFRs) and optimise requirements specifications in complex STS. Alternative methods such as physical prototyping could be used to test TSRs, but they are expensive to implement, whereas simulated environments can reduce validation costs, especially for complex systems [29,30].
The contribution of this paper is a new STS method that incorporates TSRs as a representation to bridge the gap between what people do (tasks), what the computer will provide (functional requirements) and the shared user interface. The method combines existing requirements engineering notations (i*, goal trees (GORE) and design rationale) in a framework for considering design alternatives that are influenced by human limitations. The problem addressed is the lack of methods that explicitly consider human factors when specifying requirements to support human limitations. Unlike other STS approaches such as [19,31,32,33,34,35,36], which are based on conceptual models or address functional allocation in a limited manner, the proposed method aims to optimise human activity while validating solutions experimentally through virtual prototyping.
The proposed method is evaluated using case study methodology in two phases. The first phase provides a detailed application of the method during the early stages of designing a smart in-vehicle information system (IVIS). The second phase provides an empirical evaluation with expert and novice designers in the specification of a road-planning application to enhance pedestrian safety. The research questions addressed are:
  • Does the introduction of TSRs and the STS design method improve the quality of the system design?
  • Does the proposed methodology produce designs that are useful?
The paper is organised as follows. First, we review the literature on STS design, requirements analysis approaches and human factors issues: situation awareness (SA), workload and functional allocation. Next, we define TSRs and present the proposed methodology that utilises them. A detailed case study is presented showing an application of the method during the design and validation of an in-vehicle information system. Next, the empirical evaluation of the method is presented using a different case study (risks from contagious diseases while commuting as a pedestrian). The paper concludes with lessons learned and a discussion of the implications of this method and the findings.

2. Related Work

Methods for STS design attempt to elicit user needs either through understanding the problem or by designing an optimum solution given the properties of the constituent system parts. ETHICS [37,38,39] and QUICKethics [40] claim to give the same attention to the needs of the people involved as to the demands of the technology; however, they have been criticised for being slow and costly, for involving unskilled users in the design process [41], and for lacking tool support [42]. Hickey et al. [43] tried to integrate ETHICS with agile approaches such as Extreme Programming, Dynamic Systems Development Method, and Scrum [44], which incorporate user involvement to address user needs; however, agile approaches are mostly concerned with end-user requirements, with no reference to the human factors that inherently affect user performance. Soft Systems Methodology [45,46] takes into account stakeholders’ differing viewpoints to solve a defined problem, but also ignores human factors. Cognitive Work Analysis [36,47] aims to predict what an STS could do, and refers to actors’ cognitive skills but not their cognitive limitations. Cognitive systems engineering [34,35] deals with the analysis of organisational issues based on human factors; however, it lacks the technical systems design dimension. Human-centred design [48] is based on understanding users’ needs and requirements and explicitly refers to social and cultural factors, including working practices and organisational structure, by applying human factors/ergonomics and usability knowledge and techniques. The main criticism of this method is that the analysis tends to view human activities as a static sequential process [49]. The System-Scenarios-Tool is a user-centred methodology for designing or re-designing work systems that uses human and machine properties; its main limitation is that it is largely a conceptual method without tool support for modelling and simulation [4]. Other systems engineering methods for STS design include Adaptive socio-technical systems [31], which use a requirements-driven approach to self-reconfigurable designs using Tropos goal modelling, and the Functional Resonance Analysis Method (FRAM) [19], which is based on resilience engineering and analyses possible emergent behaviour in complex systems.
Overall, STS design methods use evaluation tools based on static or simplified conceptual models or mock-ups that do not explicitly consider how human factors should be addressed during the functional specification of interactive software. Conversely, the majority of human factors analyses investigate human factors alone and not how they can be used to specify solutions that support people’s working practices [50,51,52]. These shortcomings highlight the need to improve STS design to address the complexity of human-system interaction [26] and to optimise the level of automation (functional allocation) [53,54,55], which could sit at any of the eight different levels defined in [56]. When allocating tasks between human operators and the automated system, inefficient automation design often arises from a lack of consideration of the role and limitations of human operators and of their interaction with the automated system [57]. Early FA methods such as the Fitts heuristics [7] aid the allocation of functions between human operators and machines by defining tasks that machines tend to perform “better” than humans and those that humans perform “better” than machines. Fitts suggested that machines are better at routine tasks that require high speed, force and computational power, while humans should undertake tasks that require judgment and creativity. The heuristics also acknowledge the limitations of humans in correctly employing these capabilities when overloaded with excessive task demands, or in maintaining alertness when fatigued. Fitts’ MABA-MABA (“men are better at, machines are better at”) list, despite its age, has persisted throughout the history of functional allocation [58].
One strategy is to increase automation and design out human error, but this comes with its own penalty of impoverished situation awareness (SA), the human agent’s knowledge of what is happening around them. This in turn leads to subsequent errors from leaving the human agent out of the loop [52]. The use of software in safety-critical areas such as intelligent transport systems has increased significantly, so software failures can impair system safety [59]. This highlights the need for better allocation of functionality between humans and technology. Results from the analysis of accident causality indicate that more rigour is needed in analysing HF requirements in safety-related systems [60]: inadequate or misunderstood requirements [61] relating to HF are a major cause of accidents [60]. Methods for partitioning functions between automation, human-only operation and cooperative human-computer functions have been proposed in human-computer interaction [62] and need to be addressed explicitly and strategically at an early stage of STS design to maximise the chances of success [8].
We argue that requirements analysis should incorporate FA to specify software requirements that support human tasks, capabilities and skills. Previous work such as task descriptions [63] defines what user and system must do together [64], using problem space analysis to identify requirements [65]. Work on the integration of goal-oriented with problem-oriented requirements engineering addresses a wider scope of the to-be system [66]; however, it fails to address the human factors that need to be supported to minimise STS failure.
Human factors concerns have been partially addressed in i* modelling [32] through skills, human agent capabilities and goal-skill matching [33]. However, i* does not address mapping human activities and capabilities to system requirements that support human action and cognition (SA, workload, etc.). Past NFR frameworks [67] with Softgoal Interdependency Graphs [68] using the i* notation [32] have addressed issues such as reliability and performance; however, the criteria for their satisfaction are judged without reference to human factors.
In [69], the authors use Quantified Softgoal Interdependency Graphs (QSIGs) to assess the degree of softgoal satisfaction. However, the assessment is based on subjective estimates of the degree of interdependency among soft goals. Virtual prototypes [50,70,71] provide designers with multiple viewpoints of a system’s functionality, which assists requirements validation, e.g., the Immersive Scenario-based Requirements Engineering method [25]. In the automotive industry, virtual reality (VR) is used to test the safety of a vehicle while minimising design costs [72]. The advantages of VR and simulation, however, have not been fully leveraged for STS design due to the complexity of such systems.

3. TSR Definition

Task Support Requirements (TSRs) are requirements for software which interacts with people and directly supports their tasks or jobs. Tasks in HCI (and HF) are actions and procedures linked to goals in a hierarchy. In goal-oriented requirements engineering (GORE), goals are also modelled in a hierarchy, although requirements engineering tends to focus on what the design should do (functional requirements), whereas HCI/HF describes what people do. TSRs are therefore a subclass of functional requirements (FRs) which directly support users, similar to problem-oriented specification in [65], and hence exclude fully automated functions and embedded systems. TSRs specify the interface between the user and the system (UI), which can vary from simple information displays to complex simulations. For example, in a tourist information system, the UI could display a simple map of nearby locations of interest, or an interactive map through which the user can request more details via a touch screen informed by a recommender engine. TSRs also describe the human performance and qualities that should be satisfied for the system to operate effectively. Associated NFRs may specify properties of operation such as safety and privacy, or the desired level of human-system reliability; these can be refined into measures such as the maximal acceptable error rate, learning times, usability problems, etc. TSRs therefore involve specification of (1) software support for the human operator, (2) the user interface that delivers system support, and (3) human factors criteria that affect system performance.
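To make this three-part structure concrete, the sketch below models a TSR as a simple record type. This is a hypothetical illustration in Python, not a notation prescribed by the method; the field names (support_function, ui_specification, hf_criteria) and the example values are our assumptions.

from dataclasses import dataclass, field

@dataclass
class NFRMeasure:
    """A measurable human factors criterion attached to a TSR (illustrative)."""
    name: str       # e.g., "situation awareness"
    metric: str     # e.g., "SAGAT score"
    threshold: str  # e.g., ">= 60%"

@dataclass
class TaskSupportRequirement:
    """Illustrative record of the three TSR components described above."""
    support_function: str  # (1) software support for the human operator
    ui_specification: str  # (2) user interface that delivers the support
    hf_criteria: list[NFRMeasure] = field(default_factory=list)  # (3) HF criteria

# Example based on the tourist information scenario in the text;
# the usability threshold is an invented placeholder value.
tsr = TaskSupportRequirement(
    support_function="Recommend nearby locations of interest",
    ui_specification="Interactive map with touch-screen detail requests",
    hf_criteria=[NFRMeasure("usability", "task completion time", "<= 30 s")],
)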

4. Proposed STS Design Method

The method proposed in this paper is a combination of goal modelling [73], TSR specification, functional allocation [53,54], design rationale [74] and virtual prototyping [25,75]. It is similar to [76], which utilises objectives, design specification and evaluation through a design rationale framework, and to [77], which uses a goal hierarchy to derive functional allocation for the design of an adaptive automation system. In contrast to these methods, however, we propose the use of VR simulation, when appropriate, to evaluate the prototype designs that emerge from the proposed approach. The simulated VR environment is suitable for highly dynamic scenarios (e.g., traffic situations) where prototypes are difficult to create and analyse using traditional techniques such as Wizard of Oz, paper-based sketches and mock-ups.
An initial version of the methodology was based on existing literature and the authors’ experience in STS design.
A process model of the methodology is presented in Figure 1. It initiates with problem decomposition, followed by functional allocation and TSR specification at different levels of granularity. Phase 1 answers the question “what is the problem and which human tasks should be supported through technology?”; phase 2, “what is the optimum specification of the STS to support these tasks?”; and phase 3, “does the proposed STS provide sufficient task support to solve the problem?”.
Central to our method is the notion of functional allocation (FA) that addresses the distribution of functions to human (manual task), computer (full automation) or human-computer cooperation. Different frameworks have been proposed for the distribution of functions between human and automation [78,79,80,81,82,83]. Common to all these models is the assumption that automation constitutes a continuum from no support to full automation of all functions. Adverse effects of inappropriate functional allocation may become apparent when the human operator is taken out of the active decision-making loop, leading to a loss of situation awareness and inability to respond to unexpected events.
At a lower level of granularity, the method is decomposed into the following steps:
  • Analysis of the problem domain and the main human factors issues that need to be considered during STS design.
  • Decomposition of the problem into sub-problems until goals become apparent that can be realised through technology. This goal hierarchy analysis is performed using the GORE method [68]. Goals are statements of the intentions and desired outcomes of a system at different levels of abstraction. During this step, goals are refined into sub-goals until the human factors issues become apparent.
  • Next, i* modelling focuses on a sub-problem from the goal hierarchy of step 2. The key human factors that need to be satisfied are specified as NFRs (soft-goals) realised through functional requirements. The i* framework is a goal-oriented requirements engineering technique that models relationships between different actors in the STS and is used in the early phase of system modelling [32]. Soft-goals in i* are satisfied when their subgoals are satisfied. Tasks refer to activities performed by human or machine agents in the STS. The i* diagram elaborates the tasks, goals, soft-goals and resources required for the selected sub-problem.
  • Functional allocation (FA) analysis of the selected goal from the i* diagram, to identify the best automation scheme. The selected FA scheme is refined into different human-machine interaction options. Different human factors evaluation criteria (e.g., situation awareness, reliability) are used to analyse the effect of each HCI modality on human performance. To visualise the influence of each evaluation criterion we chose the Questions, Options and Criteria (QOC) notation [91], since it is expressive and deals directly with the evaluation of system features [92]. Questions in QOC represent key issues (goals in i*), options are alternative functional allocation solutions/modalities responding to a question, and criteria represent the desirable properties that the system must satisfy, for instance cost of development, safety, or HF criteria. The output from this step is the best functional allocation scheme for the task.
  • The next step focuses on decomposition of the selected FA scheme into low-level tasks (sub-tasks) that need to be performed either by IT or by a human to satisfy the goal associated with it in i*.
  • Identification of the best level of automation for each of the tasks from the previous step. For tasks that are neither fully manual nor fully automatable (HCI tasks), specify the required functionality that the technology should have to support the human agent in performing his/her task without failing (TSRs).
The following two steps (6a and 6b) refine the TSR specification using domain-specific reasoning and trade-off exploration, and are presented in detail in Appendix B.
6a. This step uses design space exploration to identify candidate user interface (UI) metaphors (representing familiar analogies, e.g., the radar analogy). Design rationale is used to explain the reasons behind the decisions made. Options are alternative design solutions; criteria represent the desirable properties (NFRs/softgoals) of the technology and the requirements that it must satisfy. The links between options and criteria make trade-offs explicit and turn the focus on to the purpose of the design.
6b. The selected UI metaphors from step 6a are used to refine the TSRs identified at step 6. TSRs are then evaluated against a set of non-functional requirements (criteria), and the best TSRs are selected to form the specification of the candidate designs that will be prototyped.
7. Implement VR prototypes of each candidate design and specify VR scenarios and NFR metrics.
8. Evaluate the STS prototypes experimentally with users in VR settings. Evaluation criteria (defined as human NFRs) are assessed explicitly during the experiments using different metrics (e.g., electroencephalography (EEG), eye-tracking fixations, heart rate, respiration). If the performance of a design is not satisfactory (evaluation metrics not met), the TSRs are refined and the process is repeated.

5. Detailed Application of the Proposed Method

A case study illustrates an application of the STS design method in the context of smart in-vehicle information systems. The aim is the design of an STS to support drivers’ situation awareness (the problem). The process starts with the specification of drivers’ information needs in terms of goals, identifies the optimum distribution of tasks between the driver and potential software technology (functional allocation) to address these needs, refines the selected FA option into TSRs, and validates the TSRs through VR simulation.
Step 1. Problem specification: Driver safety & support systems
Design of In-Vehicle Information Systems (IVIS) and Advanced Driver Assistance Systems (ADAS) to assist drivers with the complex demands associated with the driving task [84] has explored technologies such as lane departure warning, lane departure prevention, active lane keeping, front crash prevention, blind spot monitoring, rear cross-traffic alert and driver monitoring systems [85]. Automotive design guidelines describe desirable practices that are not mandatory and hence are less strict than standards [86].
In traffic safety, situation awareness (SA) and workload constitute critical safety factors, expressed as associated non-functional requirements. Situation awareness enables the driver to anticipate events under the perceived driving and environmental conditions [87], and is defined as the process of perceiving information from the environment (level 1), comprehending its meaning (level 2) and projecting it into the future (level 3). This is linked to the 3-level model of driving (operational/tactical/strategic) [88], referring to actions for stable control, manoeuvring and route planning. Work by [89,90] stresses that operational driving tasks such as steering and braking responses primarily require level 1 situation awareness support, although level 2 situation awareness may also be involved [89]. For an IVIS to improve drivers’ situation awareness, it is essential to enhance their ability to perceive and interpret traffic and environmental information (situation awareness levels 1 and 2) to support the tactical and operational tasks of driving.
Notifications can assist drivers’ tasks and alert them to changes in their environment [91]; however, the design of effective notifications is challenging [92] since notifications can also act as distractors. In the same vein, workload is linked to situation awareness and refers to humans’ limited cognitive resources and how they can affect human reliability: if a hazardous situation emerges when the driver is overloaded, the risk of committing an error increases.
Step 2. Goal modelling and high-level TSR specification
Goals in this model refer to the 3-level model of driving tasks: strategic, tactical and operational [88]. The first is associated with strategic driver decisions and tasks that relate to the selection of the best route to arrive at the destination. The criterion here could be travel time, scenery, etc. At the tactical level, goals are associated with actions of the driver that relate to the desired manoeuvres to achieve short-term objectives such as overtaking. At the operational level are goals relating to manoeuvres to control the basic operations of driving such as acceleration, deceleration (speed control) and lateral deviations (direction control). Figure 2 depicts the goal hierarchy graph.
During this step goals are decomposed to a level where assumptions about automation become apparent, such as “control vehicle”, in contrast to a non-automated solution such as walking. At this stage TSRs become apparent for achieving lower-level goals. For “control direction” and “control speed”, task support is delivered by the standardised controls of steering wheel, brake and accelerator pedals, although further decomposition and definition of TSRs is possible; for instance, cruise control for “control speed”. In the case of cruise control, the user interface implications are refined into status displays and controls to set/disengage the cruise control mode.
Step 3. STS modelling using i*.
The i* model of Figure 3 provides the link between the goals in Figure 2 and the softgoals (NFRs) that must be satisfied by functional requirements for the system to be successful. The focus in this step is the “Monitor environment” and “Respond to hazards” sub-goals (Figure 2). “Monitor environment” depends on the softgoals “Maintain safety” and “Maintain situation awareness” in Figure 3. The “Respond to hazards” goal also depends on the NFR “Maintain situation awareness” and is decomposed into the tactical and operational tasks of halt, avoid and warn (of pedestrian risks and other road users). These tasks can be supported by technology/functionality (depending on the desired level of automation, explored in subsequent steps), which in turn will satisfy the associated NFRs/softgoals “Maintain situation awareness” and “Maintain safety”. For instance, in the case of system warning, the specification could be refined to provide only critical traffic-condition information to the driver, with an audio warning of imminent threats that are not visible.
Step 4. Functional Allocation analysis for “automated warning” option and HCI modality analysis
In this case study we illustrate the rationale for selecting the automated driver warning option through functional allocation (FA) analysis, the selection of the most appropriate HCI modality, and the refinement of this option into low-level tasks (Figure 6) and TSRs (Table 1).
The FA approach we adopted is based on a combination of the Fitts model and the automation taxonomy framework of [81]. Parasuraman’s framework maps automation on to different stages of human information processing: 1) information acquisition, 2) information integration (comprehension), 3) decision making, and 4) response. Automation can operate at varying levels in all stages of information processing.
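A stage-wise automation profile of this kind can be encoded very simply, as sketched below. The stage names follow the text; the 1-10 level scale and the example profile (roughly matching the “automated warning” option discussed next) are our assumptions based on the cited framework, not values from the paper.

# Automation profile per information-processing stage (Parasuraman-style).
# Levels use an assumed scale of 1 (fully manual) .. 10 (fully automatic).
automation_profile = {
    "information acquisition": 7,  # e.g., automated hazard detection sensors
    "information integration": 5,  # e.g., fused situation display for the driver
    "decision making": 2,          # the driver decides the response
    "response": 1,                 # the driver executes the manoeuvre
}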
Figure 4 illustrates a high-level functional allocation analysis for the “respond to hazards” goal as a design rationale diagram in which the goal corresponds to the question asked. The figure shows two functional allocation options, “manual” and “automated hazard recognition and response”, with the former rejected as it provides no support for SA. The automated option is decomposed into three more detailed options: 1) complete automation of recognition and hazard avoidance, which would require considerable AI processing and is currently being developed in driverless vehicle technology; 2) automated halting, which relies on AI; and 3) automated warning of the driver with speech/audio or a visual display, which does not depend on AI technology.
The AI option of automated halting and avoidance would be the most expensive; however, all options depend on some automated processing to detect dynamic hazards, i.e., other vehicles and pedestrians. If the automated halt/avoid technology works reliably it would be the safest option, but reliability and security doubts [93] reduce this advantage [94]. Warning the driver of hazards contributes to safety and situation awareness with lower cost and better reliability; this option was selected as it represented the best trade-off among the criteria (NFRs). The warning option is then decomposed further to investigate different HCI modalities: audio, speech or haptic warnings, or a visual situation display, as depicted in Figure 5. The speech option may encounter reliability difficulties in giving precise instructions and the location of the hazard within the very short time available. Furthermore, the driver may not have the necessary mental map of the situation to execute an immediate response, so situation awareness is not supported. The same applies to audio messages such as beeping from different orientations within the vehicle. The “visual warning” option does support situation awareness and should encourage the driver to maintain a mental map and awareness of the road situation and potential hazards. Therefore, a “visual situation display” that provides support for situation awareness was chosen as the best solution.
Step 5. Decomposition of the automated warning via the “visual situation display” task into sub-tasks that need to be performed to maintain adequate situation awareness.
The selected visual warning option is refined into low-level tasks showing the key activities the driver needs to perform to maintain a sufficient level of situation awareness (see Figure 6). This analysis depends on domain knowledge, the in-vehicle systems design literature and driver information needs [95]. The functional allocation decisions (Table 1) for each task need to specify which of these tasks should be supported by the visual situation display and what level of automation is appropriate for each task. Table 1 summarises the trade-off issues.
Step 6. Functional allocation analysis for the sub-tasks of the “Provide visual warnings” task and specification of functional requirements to support these tasks.
Functional allocation analysis of the tasks in Figure 6 is performed in tabular notation, as shown in Table 1. This is used as an alternative to QOC diagrams when the number of option and criteria combinations is large. The evaluation of each activity in terms of reliability and automation capability is assessed on a scale of high, medium, low (H/M/L). High indicates that technology is judged to provide superior results to human operation, hence full automation of the activity is possible with current technologies. Low indicates that people are better at performing the activity than the available technology, so the task should be allocated to the human.
Tasks that are suitable for human-machine collaboration are specified in terms of task support requirements (TSRs) for interactive user interfaces, while the human-only tasks become manual operating procedures. The TSRs specified in Table 1 (rightmost column) refer to the visual situation display option, based on the automated warnings and visual HCI modality selected in previous steps. TSRs are further analysed in the following design rationale step (step 6a), where the design specification of candidate options becomes more apparent (step 6b). In a similar manner, the “maintain optimal workload” goal can be refined into its TSRs and analysed for functional allocation options.
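As an illustration of how the tabular H/M/L ratings drive the allocation decision, the sketch below encodes the rule just described. The task names and ratings are invented examples, not the actual contents of Table 1.

# Hypothetical FA table: task -> technology capability/reliability rating (H/M/L)
fa_table = {
    "detect dynamic hazards": "H",       # technology judged superior: automate
    "interpret hazard relevance": "M",   # human-machine collaboration: specify TSRs
    "decide evasive manoeuvre": "L",     # human judged superior: manual procedure
}

def allocate(rating: str) -> str:
    """Map an H/M/L capability rating to an allocation decision."""
    return {
        "H": "full automation",
        "M": "human-machine collaboration (specify TSRs)",
        "L": "manual operating procedure",
    }[rating]

for task, rating in fa_table.items():
    print(f"{task}: {allocate(rating)}")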
Steps 6a and 6b describe the transition from the general method, aimed at specification, into the design phase, where domain-specific reasoning and trade-offs are explored. This detail is given in Appendix B, which reports further design rationale analysis producing two preferred options (radar/arrows) that are then subject to validation studies using virtual prototyping in the final stage (steps 7 and 8, presented next).
Step 7. Implementation of virtual prototypes of the Radar and Arrows designs based on selected TSRs and specification of NFR metrics and VR scenarios.
Virtual prototyping is used to determine which of the two candidate designs will be optimal under a range of operational conditions. An experiment was conducted with participants using a VR driving simulator that incorporated the candidate Arrows and Radar designs (see Figure 9, Appendix B). The designs are evaluated against the situation awareness and workload NFRs.
The VR simulator is customised to create a replica of the environment and hazard scenarios that drivers are likely to experience, to simulate increased workload and stress their situation awareness. Three steps are followed during VR customisation: 1) development of the test traffic environment in terms of buildings, infrastructure and traffic flow; 2) model scenarios in terms of traffic flow and hazards; and 3) modelling the candidate designs through head-up display (HUD) technology. The HUD designs were specified from the requirements refinement process and the design rationale steps in Appendix B, producing the virtual prototypes (Figure 9).
Virtual prototyping may require input from HF experts; however, domain analysis should provide the scenarios, and the design rationale trade-off criteria become the measures in the experiment. During the experiment, driving behaviours were monitored and logged into the simulator’s database. The logged observations from the simulation were analysed as performance data (i.e., driver errors, potential accidents, perception of hazard-critical information) to select the design that best satisfies the NFR criteria. If the minimum level of the NFR criteria is not satisfied, the virtual prototype is redesigned and the process repeated until the NFRs are satisfied.
To be confident that the design supported situation awareness, the situation awareness score threshold was set at >= 60%, indicating that the driver should be able to perceive at least 6 out of 10 separate critical information cues. This represents the minimum level of situation awareness required to maintain safe driving and is a quantitative estimate of the driver’s awareness of: vehicle(s) in the blind spot; vehicle(s) ahead, behind and to the side of the host vehicle; pedestrians on the road; obstacles; own speed and the speed limit; parked cars; congestion; position in the road lane; and distance from vehicle(s) ahead and behind. This threshold is based on Miller’s [96] seven plus or minus two model and the useful field of view test, indicating the minimum information an individual can extract from a dynamic environment [97], along with general driver visual information processing capacity [98,99,100]. Workload NFR satisfaction was measured through an optimal range of electroencephalography (EEG) scores between 45 and 70 out of 100, indicating the optimum level of workload under which the driver remains vigilant but not bored.
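For concreteness, the two thresholds can be expressed as a simple satisfaction check. The snippet below is an illustrative sketch assuming per-participant scores have already been extracted from the simulator logs; the variable and function names are ours, not the study’s.

SA_THRESHOLD = 60.0                 # minimum SAGAT score (% of critical cues perceived)
EEG_WORKLOAD_RANGE = (45.0, 70.0)   # optimal workload band (score out of 100)

def nfr_satisfied(sa_score: float, eeg_score: float) -> bool:
    """Check the situation awareness and workload NFR thresholds from the text."""
    sa_ok = sa_score >= SA_THRESHOLD
    workload_ok = EEG_WORKLOAD_RANGE[0] <= eeg_score <= EEG_WORKLOAD_RANGE[1]
    return sa_ok and workload_ok

# Example: a participant perceiving 7/10 cues with an EEG workload score of 58
print(nfr_satisfied(sa_score=70.0, eeg_score=58.0))  # True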
Step 8. Simulation-based validation of TSR based on selected NFR evaluation metrics.
Seventeen participants from the local population, with a valid driver’s licence and 20/20 vision (or corrective glasses or lenses), took part in the experiment. The subjects selected had at least seven years’ driving experience and were under 55 years old. Prior to the experiment, they were screened for colour blindness and susceptibility to simulator sickness. They were introduced to the various simulator controls, adjusted the seat, and were given a five-minute training session. Before the experiment, subjects completed the Manchester Driving Style questionnaire [51] to identify their driving style, along with demographic information (the average age was 37.1 years and the gender distribution was 55% female to 45% male).
During the experiment drivers were expected to drive along a pre-specified path in the virtual environment. The driving controls included a real steering wheel, brake and accelerator pedals, and a simulated automatic gearbox. Driver behaviour data were recorded, including lane deviations, headway (distance or time gap between vehicles), speed, acceleration, deceleration and EEG. In total, 8,460 data points were collected from each participant for each of the variables. The situation awareness assessment was conducted using the Situation Awareness Global Assessment Technique (SAGAT) [87], by freezing the simulator at different points during the experiment and asking the participants a number of questions about the driving situation. Questionnaire responses from this process were analysed and assessed on a 0-100 score by comparing the actual situation with what the participants reported.
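A SAGAT score of this kind can be computed as the proportion of probe questions answered consistently with the frozen simulation state. The sketch below is an assumed scoring scheme for illustration; the study’s exact rubric is not detailed in the text, and the probe names are hypothetical.

def sagat_score(ground_truth: dict, responses: dict) -> float:
    """Score SAGAT probes 0-100 as the % of responses matching the frozen state.

    ground_truth: probe -> actual value at the simulator freeze
    responses:    probe -> participant's answer
    (Illustrative; the paper's rubric may weight probes differently.)
    """
    correct = sum(1 for p in ground_truth if responses.get(p) == ground_truth[p])
    return 100.0 * correct / len(ground_truth)

freeze = {"vehicle_in_blind_spot": True, "pedestrians_on_road": 2, "over_speed_limit": False}
answer = {"vehicle_in_blind_spot": True, "pedestrians_on_road": 1, "over_speed_limit": False}
print(round(sagat_score(freeze, answer), 1))  # 66.7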
Results from the experiment showed that both designs were significantly better than the control condition (no visual situation display). The required levels of the NFR criteria were satisfied for both designs, with drivers’ situation awareness level averaging at least 60% in all road sections. A two-way repeated-measures ANOVA was carried out on the aggregated SAGAT score and the other dependent variables (speed, EEG and headway) for three data collection points that coincided with hazardous events and three design conditions (radar, arrows and control); it identified a significant main effect of design on situation awareness (F(2,15)=10.90, p<0.01). The radar design (mean 74.3) was superior to the arrows design (71.72) and the control condition (51.15). Both the arrows and radar designs were significantly better than the control (post-hoc tests, p<.001), which verifies that the designs as specified by the TSRs satisfy the NFR “Maintain situation awareness” and the “Respond to hazards” goal in the i* model of Figure 3. Thus, the design process ends.
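An analysis of this shape can be reproduced with a standard repeated-measures ANOVA routine. The sketch below uses statsmodels’ AnovaRM on a long-format table; the column names and the CSV loading step are assumptions for illustration, not the study’s actual analysis script.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x design x collection point.
# Column names are illustrative; in practice, load from the simulator's logged database.
df = pd.read_csv("sagat_scores.csv")  # columns: participant, design, timepoint, sagat

# Two-way repeated-measures ANOVA: design (radar/arrows/control) and
# timepoint (three hazardous-event collection points) as within-subject factors.
res = AnovaRM(df, depvar="sagat", subject="participant",
              within=["design", "timepoint"]).fit()
print(res)  # F and p values for the main effects and the interaction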

6. Empirical Evaluation of the Proposed Method

A summative, qualitative evaluation of the method was conducted with 19 participants: 9 experts from the domains of information systems, intelligent transportation systems and computer science who had worked as professional systems analysts/consultants for more than seven years; 5 postgraduate students who had recently completed a postgraduate course in e-business systems design; and 5 novice participants with a computer science background. The average age was 35 years and the gender distribution was 63.2% male. The validation focused on the first six steps of the proposed method and aimed to evaluate the usefulness and correctness of the TSR specifications that emerged while participants designed a hypothetical STS.
The criteria utilised to evaluate the method are based on [7,11,22] and cover aspects pertaining to the method’s generalisability, learnability, effectiveness, usability, and support for human factors in the design.
A workshop was prepared in a domain that all participants were familiar with: minimising the risks from contagious diseases (e.g., the COVID19 pandemic). Expert and novice subjects applied the method to design a new mobile application to help travellers minimise their risk of contracting a contagious disease (COVID19) while commuting in public spaces, by undertaking the activities in Table 2 and answering the questions in Appendix A. The evaluation was carried out in two phases: the first included a 3-hour session with students and novice subjects, while the second involved 90-minute individual sessions with experts. Subjects were initially trained on the methodology using the driver situation awareness example, and then asked to apply it to the COVID19 scenario. The goal was to specify the most appropriate functional allocation and a specification of TSRs that could address one aspect of the COVID19 problem, such as contact tracing or symptom checking. Upon completing the exercise, expert participants were asked to complete an online questionnaire (see Appendix A), followed by interviews with the researchers. The interviews were unstructured and started with open questions to elicit experts’ opinions about the method’s advantages and limitations. All interviews were recorded. Following the exercise, novice subjects were asked to complete an online questionnaire and to explain how they applied the method to arrive at their design/TSRs. Both experts and novices submitted their completed questionnaires via Google Forms and their designs via email. Participants’ designs were evaluated in terms of how well they contributed to solving the problem (COVID19).
The evaluation results showed that experts perceived the method as easy to learn, structured, helpful in framing their thinking and efficient in addressing HF through the specification of TSRs. Figure 7 shows the percentage of subjects assessing each evaluation question with a score above 3 on a 5-point Likert scale. These results indicate that the method can contribute positively to designing STS and address human factors issues effectively. The practical part of the evaluation was completed by experts, with >75% of participants scoring >65% in each assigned task, as shown in Table 2. The 65% threshold was selected to focus on participants with performance above the minimum acceptable level (50%); higher thresholds (e.g., >75%) were not used since they minimised the number of passing cases and constrained the knowledge that could be drawn from those cases. The evaluation of each task was performed by examining the correctness of the produced outcomes with reference to the requirements of Table 2. For instance, in the first task two example correct answers were “Find the best route to my destination with the minimum infection risk” and “Being aware of the infection risk at a given public place”. Contrary to the experts, students and novice participants found the method more challenging, possibly because of their limited knowledge of systems design. They primarily addressed functional allocation at a high level with limited attention to human factors. In contrast, the experts addressed the human factors in more detail and their designs had a strong link with the associated NFR criterion (contextual risk awareness).
Analysis of the interviews with experts highlighted limitations and recommendations. Experts mentioned the need for tool support for functional allocation selection criteria (what criteria should be used to decide on a functional allocation) and possible software support to guide exploration of the vast space of possible solutions. Recommended improvements included a taxonomy of, or advice on, potential human factors limitations in different domains, and tool support for design space exploration to assist in selecting the best UI options (modality, metaphor) from past similar systems using techniques such as analogical reasoning.
Overall, the evaluation of the method showed that it is useful in specifying requirements of STS to support human activity and addressing human limitations.

7. Threats to Validity

With regard to internal and external validity, the research was conducted using different controls and issues with generalisability were considered. Regarding internal validity, training of novice and expert participants prior to the human factors analysis partially controlled for this threat. Similarly, the TSRs of the proposed designs were implemented as virtual prototypes and evaluated in controlled settings using a VR simulation environment. Hence confounding variables were largely controlled and the effects of the designed artifacts on situation awareness could be measured accurately.
External validity concerns the generalizability of the findings; it depends on the case study application and the number and variety of subjects used in the evaluation. Generalisation about the utility and usability of the method is limited by the evaluation case study and participant backgrounds. However, the method has general applicability to STS in which functional allocation is key. The need for human factors training, identified in the evaluation, poses some limitations on the applicability of the method, although we argue that the initial steps of the method and the TSR concept have more general application, independent of human factors knowledge. Validity limitations for the VR prototyping phase of the method were mitigated by the level of realism of the virtual environment and the immersion of participants, along with increased familiarisation time with the VR environment.

8. Lessons Learned

Application of the method in the IVIS case study and its evaluation by expert and novice subjects identified both strengths and weaknesses. The main weakness is the need for at least basic human factors knowledge on the part of designers to adequately address all its steps. This was highlighted during the evaluation of the method, with novice subjects finding it difficult to specify the human factors relevant to the problem. Secondly, the method needs to provide tailored interpretations of the non-functional properties relevant to different types of STS since, for instance, situation awareness in aviation differs from situation awareness in road transport and should therefore be interpreted differently. Other concerns with the method were the cost of developing the VR prototypes, the design and execution of experiments in the CAVE facility, and the analysis of results. However, depending on the domain, head-mounted VR equipment might be suitable for experimentation, while the use of rapid VR development tools such as Unity makes this process more affordable. Alternative approaches to prototyping that do not require VR technology could be used, such as paper-based (Wizard of Oz) or screen-based techniques, depending on the complexity of the domain. Overall, the method is complex and could be adapted as a combination of subsets of its steps applied separately in different domains. Nevertheless, the method addresses a significant gap in the body of knowledge relating to the importance of non-functional issues in STS design, through the introduction of TSRs and their explicit specification. The method makes the connection between high-level goals that are relevant to stakeholders and design options that embrace functional allocation and design rationale, to specify functional requirements (TSRs) that address important NFRs relating to human factors.

9. Discussion

The FA technique employed in the method stems from the HF literature and provides guidelines for the best allocation of tasks between humans and technology according to their strengths and weaknesses (“men are better at/machines are better at”, MABA-MABA) [54]. Work by [101] developed the task-technology fit model to describe the optimum fit between managerial tasks and mobile IT capabilities under different environmental conditions, to improve overall task performance. Tasks are described in terms of routineness, structure, time criticality and interdependencies; capabilities of mobile IT are seen in terms of functionality and user interface; and context in terms of distractions and obstacles. Their model, however, is specific to the mobile IT domain and requires further empirical research before application in other settings. Our TSR approach, being based on general cognitive theories, can be used in different disciplines with minor adjustments according to the available automation capabilities.
Some FA theories argue that the a priori allocation of functions as illustrated in Fitts’ list is an oversimplification [34,102,103], claiming that capitalizing on some strengths of computers does not replace a human weakness; instead it creates new human strengths and weaknesses that are often unanticipated [73]. Dekker and Woods [103] recommended that system developers abandon the traditional “who does what” approach of FA and move towards STS. Despite this criticism, the Fitts model remains popular for its generalisability and descriptive adequacy [58]. Hence, in our method we utilise the Fitts list together with Parasuraman’s model for the specification of an initial functional allocation.
Several STS design methods have been developed over the past 40 years, including ETHICS and QUICKethics [37,40], Soft Systems Methodology [46], Cognitive Systems Engineering [34,35] and Human-Centred Design [48]. However, most of these are rarely used [20], the main criticism being their limited capability to address prospective STS designs or provide evaluations, concentrating on problem analysis of existing systems rather than on design solutions. An important issue in existing STS design methods is the different and sometimes conflicting value systems among stakeholders, such as improving job satisfaction and work-life balance while at the same time achieving the organisation’s economic objectives. Empathic design [104] and contextual design, e.g., [105], do consider the user’s environment as part of the development process, but their application has been limited. The STS method we propose uses participatory techniques by involving users in the evaluation of the prospective system design. The experimental nature of the evaluation step encourages the involvement of stakeholders (drivers in our example application).
Many STS methods have focused on safety engineering, involving diverse approaches such as activity theory [106], cybernetics [34], Joint Cognitive Systems [34], Work Domain Analysis [107], the Functional Resonance Analysis Method [19,108] and the Framework for Slack Analysis [109]. These are either techniques that address specific problems or are descriptive in nature, focusing on showing how work is currently performed. Baxter et al. [20] highlighted the inability of existing STS methods to address prospective designs due to the difficulty of predicting the interaction between people, technology and context in a system world that does not yet exist (the new world problem). Our method offers a solution to this problem through the introduction of TSRs, which bridge the disciplines of HF and technology design, and integrates existing modelling languages from software engineering and other disciplines.
TSRs extend previous approaches to STS design [20] by providing a more detail-focused method that addresses the frontier between software design and the higher-level heuristic design (human factors and goals) of STS. Mumford’s ETHICS [40] contains general heuristics for analysing and shaping the components and human roles within a framework of principles for the human design of work practice, workplaces and organisations; however, it does not address technology. A review [20] of STS approaches proposes a research agenda for a systems engineering approach to STS, oriented towards a high-level view of process and systems organisation. Similarly, Design X [16] provides another high-level view of STS, emphasizing system complexity, the role of people therein, emergent properties and the inherent complexity of STS. In contrast, TSRs provide a lower-level design focus where components of human-computer activity can be considered within the higher-level framework provided by ETHICS and related approaches [16]. Activity theory [110] can operate at a similar level of granularity to TSRs; however, it only provides a modelling framework of goals, objects and activities, without any view on functional allocation or definition of requirements.
The closest relatives of TSRs and our method are human factors oriented methods such as Ecological Interface Design (EID) [36], CREAM [111] and FRAM [19], which focus on human safety engineering rather than the functional allocation orientation of TSRs. FRAM provides organizing principles and an activity modelling approach for analysing system functions and their interfaces; however, as Hollnagel notes, it is a framework for problem diagnosis rather than one giving detailed advice on FA, human factors guidelines and user interface design. Nevertheless, these methods could be used in conjunction with TSRs. For example, the graphical representation of the system and software world as realistic metaphors for control interfaces, proposed in EID [36], could elaborate the user interface component of TSRs. TSRs draw upon human error frameworks [51] and more general ergonomic advice [112] to inform design and specification.
Requirements engineering methods, e.g., [113], have not addressed the specification of software support for human decision-making and system operation, which is the focus of TSRs. Modelling of human agents and activities is present in requirements engineering models such as i* [114], which also supports investigation of functional requirements (hard goals in i*) and NFRs (i* soft goals). However, i* modelling does not advise on the design of user interface components or functional allocation. Modelling of requirements for adaptive systems [31] provides detailed agent-activity-goal models using a formal extension of i*, combined with a ‘monitor-diagnose-reconcile-compensate’ framework for considering modification of user support requirements; however, task allocation and human factors advice are not supported. FRAM and TSRs could be complementary, with FRAM operating at the level of high-level system components while TSRs unpack system components in terms of software requirements, human operational activities and desired operational conditions. Investigation and validation activities using VR or simulation [115] are time-consuming; nevertheless, the low level of granularity employed in VR simulation enables the quantitative evaluation of prospective systems, filling the gap in existing STS design methods identified by [20]. Guo et al. [116] report a VR-based system to assist product design in its early stages: through interacting with virtual prototypes in an immersive environment, the designer can gain a more explicit understanding of the product before its realisation. Their application of VR, however, is not linked to a systematic process of product design. Moreover, the development of high-fidelity VR prototypes can be prohibitively expensive. An alternative could be head-mounted VR displays, although users may be even more prone to motion sickness than in fixed-base simulators [72,117].
Although it is intended to be generic, the method was tested in a specific context, and therefore generalizations about its effectiveness need further investigation. Furthermore, there may be significant differences in the level of complexity in STS, which may relate to challenges not identified in this study.

10. Conclusions

A new STS design approach is proposed that extends functional allocation with a new type of requirement referred to as TSRs. It aims to support the design of systems through the identification of requirements that support human activities and satisfy a set of qualities that relate to human factors.
An example from the automotive domain demonstrated the application of the method for the design of an IVIS system by addressing the cognitive limitations of human agents in such systems. Workload and situation awareness were identified as critical success factors that needed support by technology. TSRs were specified for prospective systems to support these NFR, and virtual prototypes were developed. The simulated evaluation of the prototypes revealed the design that best satisfied the NFR.
An evaluation of the proposed method conducted with participants provided insights into the method’s advantages and limitations. The results demonstrated that the method can contribute positively to designing STS by addressing human factors issues effectively. Future work will include quantitative means against which the level of automation (FA) can be specified and tool support for design space exploration using analogical reasoning.

Appendix A. Evaluation Criteria during Empirical Evaluation

Evaluation criteria for the COVID19 method evaluation (criterion: question [scale]):
Logical steps: How easy was the method to follow (logic and structure)? [1 very hard – 5 very easy]
Learnability: How easy was the method to learn? [1 very hard – 5 very easy]
Structure my thinking: The method framed my thinking by providing me with a record of my previous design decisions. [1 absolutely disagree – 5 absolutely agree]
Technical/human aspects: The method helped me to address both the technical and the human factors parts of sociotechnical systems. [1 absolutely disagree – 5 absolutely agree]
TSR useful: Task Support Requirements are useful for identifying human factors issues during sociotechnical systems design. [1 absolutely disagree – 5 absolutely agree]
Functional allocation useful: Functional allocation analysis helped me to identify the best level of automation for the new system based on selected system qualities. [1 absolutely disagree – 5 absolutely agree]
Produce effective design: The method helped me to produce an effective system design that solves or contributes towards the solution of a specific aspect of the COVID19 problem. [1 absolutely disagree – 5 absolutely agree]
Efficient design process: The method helped me to produce a design of the system in an efficient manner (guided me towards a solution). [1 absolutely disagree – 5 absolutely agree]
Drill down: The method enabled me to view the problem at a high level and then drill down into specific functional requirements of a new system that will address the problem. [1 absolutely disagree – 5 absolutely agree]

Appendix B. Design Exploration Phase of the Method in the Automotive Vehicles-Safety Domain

Step 6a. User Interface design space exploration for the “Visual warning” option
This step identifies user interface design metaphors for the “Visual warning” option. Warnings can cause drivers to suffer from divided attention between the primary driving task and interpreting the hazard information [118], leading to overloading and reduced situation awareness (SA). This problem has increased interest in head-up displays (HUDs), which may reduce divided attention by overlaying hazard warnings on the driver’s view of the road and surrounding environment [119,120]. HUDs present information in line with the driver’s natural field of vision to improve driver SA. HUDs may, however, also reduce the driver’s detection sensitivity to unexpected events, because the displayed information can capture attention [120,121,122]. The psychological advantages and disadvantages of HUDs, such as split attention, SA and cognitive overloading, have been known for some time [123,124,125], with studies of factors such as display position [126] and text size in visual displays [127] indicating that HUDs are superior to head-down displays for SA as they reduce workload. An overly cluttered HUD, however, can negatively affect SA through ineffective scanning [128]. Designers must therefore specify only the most useful and unambiguous visual cues [121] for the HUD, as shown in Table 1. Thus, the information requirements of the driver are used during the specification of TSRs for the visual situation display.
The QOC diagram in Figure 8 addresses the “Provide visual warnings” task from Figure 6, which has three user interface metaphor options: 1) a radar-like display, 2) an arrows display and 3) a combination of arrows and an audio speech warning. In this step, the selected user interface metaphors (1 and 2) are refined into TSRs. The radar display option provides a street/road map of the current location overlaid with potential hazards. It contributes to good SA and reasonable safety, but imposes a split-attention penalty because the driver has to monitor the map as well as the external world. The arrows design is based on a directional minimal alert [138], expressed visually in the form of arrows highlighting the hazard in the driver’s field of view. This design has the advantages of less information interfering with the driver’s view, a reasonable safety contribution, and relieving the driver of the cognitive workload of attending to a map as well as the external road environment. The third option combines an audio speech warning with the arrows hazard display. Simultaneous audio warnings and visual displays of hazards have been the subject of several studies [98,120,129]; given the increased workload and annoyance they impose [131], the arrows-with-audio design was deemed inferior and was not refined further.
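The method records QOC contributions qualitatively (positive/negative links in the diagram); purely as an illustration, the trade-off could be approximated with numeric scores. The Python sketch below is hypothetical: the options and criteria follow Figure 8, but the +1/−1 weights and all identifiers are assumptions introduced here, not part of the method.

```python
# Minimal sketch of a QOC (Questions, Options, Criteria) trade-off,
# in the spirit of MacLean et al.'s design space analysis [74].
# The numeric weights are hypothetical; the method itself records
# contributions qualitatively.

CRITERIA = ["situation_awareness", "safety", "low_workload"]

# Contribution of each option to each criterion: +1 positive, -1 negative,
# loosely following the rationale described in the text above.
options = {
    "radar_display":     {"situation_awareness": +1, "safety": +1, "low_workload": -1},
    "arrows_display":    {"situation_awareness": +1, "safety": +1, "low_workload": +1},
    "arrows_plus_audio": {"situation_awareness": +1, "safety": +1, "low_workload": -1},
}

def score(option: dict[str, int]) -> int:
    """Sum contributions across criteria (unweighted)."""
    return sum(option[c] for c in CRITERIA)

if __name__ == "__main__":
    for name, contribs in sorted(options.items(), key=lambda kv: -score(kv[1])):
        print(f"{name}: {score(contribs):+d}")
```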
Figure 8. QOC Design rationale diagram for different HCI metaphors of the i* task “Provide visual warning” and its realisation through a “Visual situation display”, with best options shaded.
For the selected designs (arrows, radar) from Figure 8, the TSRs in Table 1 are refined utilising accident causality knowledge [132,133] and the visual attention principles of sensory and cognitive affordance [134], along with drivers’ information needs. Most traffic accidents are caused or influenced by low SA, such as inattention at intersections or during lane changes [132,133], drivers failing to recognise vehicles’ trajectories at intersections [95], failing to notice traffic behind when decelerating or changing lanes, or cutting across in front of another vehicle too soon after overtaking. The TSRs should alleviate these risks by addressing the information needs of drivers. To increase SA, TSRs should specify how visual cues can signal peripheral risks (vehicles, obstacles, etc.) in a non-disruptive manner [135,136]. Relevant SA design knowledge from complex systems design [137] could also be utilised during TSR refinement.
Based on the above knowledge, the TSRs of the two selected designs are refined further. The cognitive affordance (arrows) design aims to support the driver’s situation awareness through the minimal alert paradigm, as illustrated in the virtual user interface prototype in Figure 9 (left side). It is based on DENSO’s intersection movement assist [138] and prioritises information based on risk level. It aims to warn drivers of vehicles that are expected to pull out from side roads but are not yet visible, or vehicles that are in the driver’s blind spot. This design entails features comparable to the spatial attention mechanism of [139], but with minimal visual cues. It is also similar to the Mercedes blind-spot assist system [140], with extended capabilities to warn drivers of threats that are not yet visible. As illustrated in Figure 9, arrows on the HUD appear at approximately 45 degrees from the driver’s natural line of sight, pointing in the direction of the imminent threat [75]. At any given time only one arrow per threat is depicted on the screen, with a maximum of two concurrent warnings. In the event of more than one simultaneous threat, a larger red arrow indicates the most critical one, which needs to be attended to first. Unlike DENSO’s design, which superimposes vehicles on side buildings through augmented reality, the approach proposed here alerts drivers using arrows on the windscreen, thus providing a simple alert with minimal information. Similarly, work by [141] improves driver awareness of the rearward road scene using digital side-mirrors. Their results indicate reductions in decision time and eyes-off-road time when using digital mirrors, suggesting that such technology may enable drivers to take in salient information more rapidly from the rearward view and hence reduce lane-changing accidents.
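To make the arrows behaviour concrete, the following sketch selects and styles HUD arrows under the constraints described above (one arrow per threat, at most two concurrent warnings, the most critical threat shown larger and in red). It is a minimal illustration only; all names, types and the 0–1 risk scale are assumptions, and the actual prototype was implemented in the VR simulator.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    bearing_deg: float   # direction of threat relative to driver's line of sight
    risk: float          # assessed risk level, 0..1 (assumed available from sensing)

def arrows_to_display(threats: list[Threat], max_arrows: int = 2) -> list[dict]:
    """Return HUD arrow descriptors: one arrow per threat, at most
    `max_arrows` shown concurrently; the most critical threat gets a
    larger red arrow so the driver knows what to attend to first."""
    ranked = sorted(threats, key=lambda t: t.risk, reverse=True)[:max_arrows]
    arrows = []
    for i, t in enumerate(ranked):
        arrows.append({
            "direction_deg": t.bearing_deg,           # arrow points toward the threat
            "size": "large" if i == 0 else "small",   # most critical threat enlarged
            "colour": "red" if i == 0 else "amber",
        })
    return arrows

# Example: a vehicle about to pull out on the right, a tailgater behind,
# and a lower-risk threat on the left that is suppressed by the limit.
print(arrows_to_display([Threat(45.0, 0.9), Threat(180.0, 0.6), Threat(-45.0, 0.3)]))
```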
Figure 9. Screenshots from the first-person view of the Arrows (top left) and Radar (top right) designs on the HUD. Superimposed icons on the HUD show visualisations of TSRs. Arrows indicate an imminent vehicle threat about to emerge at an intersection (right arrow) and a vehicle following closely (lower-centre arrow). The radar shows the host vehicle as a blue car at the centre and traffic hazards as red circles. Below, a participant engages with a scenario during an experiment in the VR cave simulator using the radar design.
The radar design uses an information-rich metaphor [75], as illustrated in Figure 9 (top right side). Similar to the global view of surroundings [142], the radar design shows prioritised threats while informing drivers of the traffic situation in surrounding roads. It enables the user to see and distinguish different visual elements, such as issues and anomalies at various priority levels. The host vehicle is shown as a blue overlaid car, surrounded by red and green circles of different sizes representing other vehicles. The size and colour of surrounding vehicles denote their proximity and level of risk. Hence, vehicles that are in the driver’s blind spot are considered high risk and are represented by large red circles; this is in line with work exploring the effects of text size on the visual demand of in-vehicle displays [127]. Close-proximity or hidden vehicles at intersections are also high risk and hence are displayed large and red. The visualisation metaphor is depicted on the vehicle’s windscreen. The UI design maintains the principle of a “searchlight” of visual attention [134] to place the threat alerts, while avoiding the high-superimposition problem [143]. The second principle, pre-attentive processing, which explains how an “odd one out” object can be perceived in visual feature space, is realised through the different colours and sizes of threats on the HUD. Given that the detection rate is better for moving targets than for static ones [144], entities in the radar visualisation pane move according to their criticality and proximity to the host vehicle.
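The mapping from assessed risk and proximity to icon colour and size can likewise be sketched in code. The thresholds and the scaling formula below are hypothetical; they simply illustrate how large red circles would be produced for close, high-risk vehicles and small green circles for distant, low-risk traffic, consistent with the description above.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    distance_m: float    # distance from the host vehicle
    risk: float          # assessed risk, 0..1 (blind spot, hidden at intersection, ...)

def radar_icon(v: Vehicle) -> dict:
    """Map a peripheral vehicle to a radar icon whose size and colour
    encode proximity and risk, exploiting pre-attentive processing:
    high-risk vehicles appear as large red circles, low-risk as small green."""
    high_risk = v.risk >= 0.5            # hypothetical threshold
    return {
        "colour": "red" if high_risk else "green",
        # closer and riskier vehicles get larger circles (hypothetical scaling)
        "radius_px": max(4, int(40 * v.risk / max(v.distance_m / 10.0, 1.0))),
    }

# A blind-spot vehicle (close, high risk) versus distant low-risk traffic.
print(radar_icon(Vehicle(distance_m=8, risk=0.9)))    # large red circle
print(radar_icon(Vehicle(distance_m=120, risk=0.2)))  # small green circle
```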
A prerequisite for the realisation of both designs is the availability of information regarding peripheral vehicles’ positions and speeds. These are assumed to be provided by on-board vehicle sensors and vehicle-to-vehicle communication protocols that utilise connected-vehicle technology [145].
Step 6b. TSR refinement & TSR selection for the Radar and Arrows designs
The design rationale diagrams in Figure 10 and Figure 11 elaborate on the Arrows and Radar designs by evaluating them against the criteria of workload and SA. For the arrows design, the candidate TSRs illustrated in Figure 10 were specified based on: 1) the size of arrows on the visual display (variable or fixed), to indicate risk level; 2) static or dynamic positioning of arrows, to indicate the relative location of a threat; 3) colour coding (or not) of arrows, to indicate risk level; and 4) the maximum number of concurrent arrows on the visual display (<3 or <4). Variable size, dynamic positioning and colour coding of arrows contribute positively to SA by directing the driver’s attention to hazard-relevant information in an intuitive manner, reducing decision-making time. In contrast, fixed-size arrows with no colour coding, statically positioned on the visual display, contribute negatively to workload, since drivers have to decide which threat to evaluate first. Similarly, to minimise the negative effects on distraction and workload, the number of concurrent arrows on the HUD should be minimal. The design rationale in Figure 10 presents the eight candidate TSRs; the four denoted in bold were selected for implementation in the arrows VR prototype. These are: 1) dynamic positioning of the arrows on the screen according to the relative positions of threats; 2) dynamic arrow size by level of risk; 3) colour coding of arrows by risk type; and 4) a maximum of fewer than three concurrent arrows.
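Read this way, the four selected TSRs amount to a configuration of the prototype that the VR experiments compare against alternatives. A minimal sketch, with hypothetical field names, might look as follows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArrowsTSRConfig:
    """The four TSRs selected for the arrows VR prototype (Figure 10).
    Field names are hypothetical; each corresponds to one selected TSR."""
    dynamic_positioning: bool = True   # arrows placed by relative threat position
    dynamic_size_by_risk: bool = True  # arrow size scales with risk level
    colour_coded_by_risk: bool = True  # colour encodes risk type
    max_concurrent_arrows: int = 2     # fewer than three arrows at any time

# The rejected alternatives (static placement, fixed size, no colour coding,
# up to three arrows) would form a second configuration, to be compared
# against pre-set NFR criteria (workload, SA) in the simulator experiments.
selected = ArrowsTSRConfig()
print(selected)
```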
Figure 10. Design rationale for the “arrows” design against two criteria. The selected TSRs to be evaluated in the VR simulator are shaded.
For the radar design, the candidate TSRs illustrated in Figure 11 were specified based on: 1) dynamic versus static size of icons on the visual display (denoting other vehicles/hazards), to indicate the assessed risk level; 2) dynamic versus static colour coding of icons, by risk level; and 3) a limited or unlimited number of concurrent icons on the display. Dynamic size and colour coding of icons, with no limit on the number of icons concurrently on display, contribute positively to SA by directing the driver’s attention to critical cues through threat prioritisation, while at the same time providing contextual information regarding traffic congestion in the peripheral road network. The highlighted TSRs in Figure 11 were the ones selected for prototyping in VR.
From the above process it is evident that the consideration of TSRs and design options is complex and could be delegated to HF experts. However, such experts may not be available. This motivates step 7 of the method: testing TSR design options by virtual prototyping and evaluating their merit through experiments against pre-set NFR criteria.
Figure 11. Design rationale for the radar design against two criteria. The selected TSRs to be evaluated in the VR simulator are shaded.

References

  1. E. L. Trist and K. W. Bamforth, “Some Social and Psychological Consequences of the Longwall Method of Coal-Getting: An Examination of the Psychological Situation and Defences of a Work Group in Relation to the Social Structure and Technological Content of the Work System,” Hum. Relations, vol. 4, no. 1, pp. 3–38, 1951.
  2. C. W. Clegg, “Sociotechnical principles for system design,” Appl. Ergon., no. 31, pp. 463–477, 2000.
  3. A. Lee, “Editor’s comments: MIS quarterly’s editorial policies and practices,” MIS Q., 2001.
  4. H. P. N. Hughes, C. W. Clegg, L. E. Bolton, and L. C. Machon, “Systems scenarios: a tool for facilitating the socio-technical design of work systems,” Ergonomics, vol. 60, no. 10, pp. 1319–1335, 2017.
  5. S. Schneider, J. Wollersheim, H. Krcmar, and A. Sunyaev, “Erratum to: How do requirements evolve over time? A case study investigating the role of context and experiences in the evolution of enterprise software requirements,” J. Inf. Technol., vol. 33, no. 2, p. 171, Jun. 2018.
  6. H. N. N. Mohd and S. Shamsul, “Critical success factors for software projects: A comparative study,” Sci. Res. Essays, vol. 6, no. 10, pp. 2174–2186, May 2011.
  7. G. J. M. Read, P. M. Salmon, M. G. Lenné, and N. A. Stanton, “Designing sociotechnical systems with cognitive work analysis: putting theory back into practice,” Ergonomics, 2015.
  8. R. Challenger, C. W. Clegg, and C. Shepherd, “Function allocation in complex systems: reframing an old problem,” Ergonomics, vol. 56, no. 7, pp. 1051–1069, Jul. 2013.
  9. G. J. Hay, F. E. Klonek, and S. K. Parker, “Diagnosing rare diseases: A sociotechnical approach to the design of complex work systems,” Appl. Ergon., vol. 86, p. 103095, Jul. 2020.
  10. O. F. Hamim, M. Shamsul Hoque, R. C. McIlroy, K. L. Plant, and N. A. Stanton, “A sociotechnical approach to accident analysis in a low-income setting: Using Accimaps to guide road safety recommendations in Bangladesh,” Saf. Sci., vol. 124, p. 104589, Apr. 2020.
  11. L. de Vries and L.-O. Bligård, “Visualising safety: The potential for using sociotechnical systems models in prospective safety assessment and design,” Saf. Sci., vol. 111, pp. 80–93, Jan. 2019.
  12. D. P. Jenkins, N. A. Stanton, P. M. Salmon, G. H. Walker, and M. S. Young, “Using cognitive work analysis to explore activity allocation within military domains,” Ergonomics, vol. 51, no. 6, pp. 798–815, Jun. 2008.
  13. N. P. Patorniti, N. J. Stevens, and P. M. Salmon, “A systems approach to city design: Exploring the compatibility of sociotechnical systems,” Habitat Int., vol. 66, pp. 42–48, Aug. 2017.
  14. T. Carden, N. Goode, G. J. M. Read, and P. M. Salmon, “Sociotechnical systems as a framework for regulatory system design and evaluation: Using Work Domain Analysis to examine a new regulatory system,” Appl. Ergon., vol. 80, pp. 272–280, Oct. 2019.
  15. E. E. Makarius, D. Mukherjee, J. D. Fox, and A. K. Fox, “Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization,” J. Bus. Res., vol. 120, pp. 262–273, Nov. 2020.
  16. D. A. Norman and P. J. Stappers, “DesignX: Complex Sociotechnical Systems,” She Ji, 2015.
  17. Ö. Kafali, N. Ajmeri, and M. P. Singh, “Normative requirements in sociotechnical systems,” in Proceedings - 2016 IEEE 24th International Requirements Engineering Conference Workshops, REW 2016, 2017.
  18. S. Dey and S. W. Lee, “REASSURE: Requirements elicitation for adaptive socio-technical systems using repertory grid,” Inf. Softw. Technol., 2017.
  19. E. Hollnagel, FRAM: The Functional Resonance Analysis Method. CRC Press, 2017.
  20. G. Baxter and I. Sommerville, “Interacting with Computers Socio-technical systems: From design methods to systems engineering,” Interact. Comput., vol. 23, no. 1, pp. 4–17, 2011.
  21. L. J. Hettinger, A. Kirlik, Y. M. Goh, and P. Buckle, “Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments,” Ergonomics, 2015.
  22. G. J. M. Read, P. M. Salmon, N. Goode, and M. G. Lenné, “A sociotechnical design toolkit for bridging the gap between systems-based analyses and system design,” Hum. Factors Ergon. Manuf., 2018.
  23. H. Wache and B. Dinter, “The Digital Twin – Birth of an Integrated System in the Digital Age,” in Proceedings of the 53rd Hawaii International Conference on System Sciences, 2020.
  24. G. J. M. Read, P. M. Salmon, and M. G. Lenné, “When paradigms collide at the road rail interface: evaluation of a sociotechnical systems theory design toolkit for cognitive work analysis,” Ergonomics, vol. 59, no. 9, pp. 1135–1157, Sep. 2016.
  25. A. Sutcliffe, B. Gault, and N. Maiden, “ISRE: immersive scenario-based requirements engineering with virtual prototypes,” Requir. Eng., vol. 10, no. 2, pp. 95–111, May 2005.
  26. A. Gregoriades and A. Sutcliffe, “A socio-technical approach to business process simulation,” Decis. Support Syst., 2008.
  27. A. Gregoriades and A. Sutcliffe, “Scenario-based assessment of nonfunctional requirements,” IEEE Trans. Softw. Eng., 2005.
  28. A. Sutcliffe, W. C. Chang, and R. Neville, “Evolutionary requirements analysis,” in Proceedings of the IEEE International Conference on Requirements Engineering, 2003.
  29. J. Wolfartsberger, “Analyzing the potential of Virtual Reality for engineering design review,” Autom. Constr., vol. 104, pp. 27–37, Aug. 2019.
  30. R. K. Radha, “Flexible smart home design: Case study to design future smart home prototypes,” Ain Shams Eng. J., 2021.
  31. F. Dalpiaz, P. Giorgini, and J. Mylopoulos, “Adaptive socio-technical systems: a requirements-based approach,” Requir. Eng., vol. 18, no. 1, pp. 1–24, Mar. 2013.
32. E. S. K. Yu and J. Mylopoulos, “From E-R to ‘A-R’ — Modelling Strategic Actor Relationships for Business Process Reengineering,” Int. J. Coop. Inf. Syst., vol. 04, no. 02n03, pp. 125–144, 1995.
  33. S. Liaskos, S. M. Khan, M. Soutchanski, and J. Mylopoulos, “Modeling and reasoning with decision-theoretic goals,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2013.
  34. E. Hollnagel and D. D. Woods, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Taylor & Francis, 2005.
  35. D. Woods and E. Hollnagel, Joint cognitive systems: Patterns in cognitive systems engineering. CRC/Taylor & Francis, 2006.
  36. K. J. Vicente, Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. CRC Press, 1999.
  37. E. Mumford, Designing Human Systems for New Technology: The ETHICS Method. Manchester Business School, 1983.
  38. E. Mumford, “The story of socio-technical design: reflections on its successes, failures and potential,” Inf. Syst. J., vol. 16, no. 4, pp. 317–342, Oct. 2006.
  39. E. Mumford, S. Hickey, and H. Matthies, Designing Human Systems. LULU, 2006.
40. E. Mumford, “The ETHICS Approach,” Commun. ACM, vol. 36, no. 4, pp. 82–83, Jun. 1993.
41. D. Avison and G. Fitzgerald, “Methodologies for Developing Information Systems: A Historical Perspective,” in The Past and Future of Information Systems: 1976–2006 and Beyond, 2006, pp. 27–38.
  42. P. Adman and L. Warren, “Participatory sociotechnical design of organizations and information systems – an adaptation of ETHICS methodology,” J. Inf. Technol., vol. 15, pp. 39–51, 2000.
  43. S. Hickey, H. Matthies, and E. Mumford, Designing human systems: An Agile Approach to ETHICS. 2006.
44. P. Abrahamsson, O. Salo, J. Ronkainen, and J. Warsta, Agile software development methods: review and analysis. Oulu, Finland: VTT Technical Research Centre of Finland, 2002.
  45. P. Checkland, Systems Thinking, Systems Practice. Chichester, UK: John Wiley and Sons, 1981.
  46. P. Checkland and J. Scholes, Soft Systems Methodology in Action. Wiley, 1991.
  47. J. Rasmussen, A. M. Pejtersen, and L. P. Goodstein, Cognitive Systems Engineering, 1st ed. New York, NY, USA: Wiley-Interscience, 1994.
48. International Organization for Standardization, “Ergonomics of Human-System Interaction – Part 210: Human-Centred Design for Interactive Systems,” Geneva, Switzerland, 2010.
  49. D. A. Norman, “Human-centered design considered harmful,” interactions, vol. 12, no. 4, p. 14, Jul. 2005.
  50. E. Hollnagel, Human reliability analysis: context and control. 1993.
  51. J. Reason, Human Error. Cambridge University Press, 1990.
  52. E. Hollnagel and A. Bye, “Principles for modelling function allocation,” Int. J. Hum. Comput. Stud., vol. 52, no. 2, pp. 253–265, 2000.
  53. J. Sharit, “Allocation of functions,” in Handbook of Human Factors and Ergonomics, G. Salvendy, Ed. New York: Wiley., 1998.
  54. P. M. Fitts, Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council, 1951.
  55. C. Clegg, “Appropriate technology for humans and organizations,” J. Inf. Technol., vol. 3, no. 3, pp. 133–146, 1988.
  56. M. Vagia, A. A. Transeth, and S. A. Fjerdingen, “A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed?,” Applied Ergonomics. 2016.
  57. J. D. Lee and B. D. Seppelt, “Human Factors and Ergonomics in Automation Design,” in Handbook of Human Factors and Ergonomics: Fourth Edition, 2012.
  58. J. C. F. de Winter and D. Dodou, “Why the Fitts list has persisted throughout the history of function allocation,” Cogn. Technol. Work, vol. 16, no. 1, pp. 1–11, Feb. 2014.
  59. A. Saeed, R. de Lemos, and T. Anderson, “On the safety analysis of requirements specifications for safety-critical software,” ISA Trans., vol. 34, no. 3, pp. 283–295, 1995.
60. A. Simpson and J. Stoker, “Will it be Safe? – An Approach to Engineering Safety Requirements,” in Components of System Safety, 2002, pp. 140–164.
  61. V. Ratan, K. Partridge, J. Reese, and N. Leveson, “Safety analysis tools for requirements specifications,” Proc. 11th Annu. Conf. Comput. Assur. COMPASS ’96, pp. 149–160, 1996.
  62. A. G. Sutcliffe and N. A. M. Maiden, “Bridging the requirements gap: policies, goals and domains,” in Proceedings of 1993 IEEE 7th International Workshop on Software Specification and Design, 1993, pp. 52–55.
  63. S. Lauesen and M. A. Kuhail, “Task descriptions versus use cases,” Requir. Eng., vol. 17, no. 1, pp. 3–18, 2012.
  64. S. Lauesen, “Task Descriptions as Functional Requirements,” IEEE Softw., vol. 20, no. 2, pp. 58–65, Mar. 2003.
65. S. Lauesen, “Problem-Oriented Requirements in Practice – A Case Study,” in Requirements Engineering: Foundation for Software Quality, 2018, pp. 3–19.
  66. K. Beckers, S. Faßbender, M. Heisel, and F. Paci, “Combining Goal-Oriented and Problem-Oriented Requirements Engineering Methods,” in Availability, Reliability, and Security in Information Systems and HCI, 2013, pp. 178–194.
  67. L. Chung, B. A. Nixon, E. Yu, and J. Mylopoulos, Non-Functional Requirements in Software Engineering. Boston, MA: Springer US, 2000.
  68. L. Chung, B. A. Nixon, E. Yu, and J. Mylopoulos, “Softgoal Interdependency Graphs,” in Non-Functional Requirements in Software Engineering, Boston, MA: Springer US, 2000, pp. 47–88.
  69. T. Marew, J.-S. Lee, and D.-H. Bae, “Tactics based approach for integrating non-functional requirements in object-oriented analysis and design,” J. Syst. Softw., vol. 82, no. 10, pp. 1642–1656, Oct. 2009.
  70. J. De Winter, P. Van Leeuwen, and R. Happee, “Advantages and disadvantages of driving simulators: a discussion,” in Measuring Behavior, 2012, pp. 47–50.
  71. R. Stone, “Virtual reality for interactive training: an industrial practitioner’s viewpoint,” Int. J. Hum. Comput. Stud., vol. 55, no. 4, pp. 699–711, 2001.
  72. F. Weidner, A. Hoesch, S. Poeschl, and W. Broll, “Comparing VR and non-VR driving simulations: An experimental user study,” Proc. - IEEE Virtual Real., pp. 281–282, 2017.
  73. S. W. A. Dekker, Ten questions about human error: a new view of human factors and system safety. Lawrence Erlbaum, 2005.
  74. A. Maclean, R. M. Young, V. M. E. Bellotti, and T. P. Moran, “Questions, Options and Criteria: Elements of Design Space Analysis,” Human-Computer Interact., vol. 6, no. 3/4, p. 208, 1991.
  75. A. Gregoriades and A. Sutcliffe, “Simulation-based evaluation of an in-vehicle smart situation awareness enhancement system,” Ergonomics, vol. 61, no. 7, pp. 947–965, 2018.
  76. R. Looije, M. A. Neerincx, and K. V. Hindriks, “Specifying and testing the design rationale of social robots for behavior change in children,” Cogn. Syst. Res., vol. 43, pp. 250–265, 2017.
  77. J. M. Bindewald, M. E. Miller, and G. L. Peterson, “A function-to-task process model for adaptive automation system,” J. Hum. Comput. Stud., vol. 72, no. 12, pp. 822–834, 2014.
  78. P. Milgram, A. Rastogi, and J. J. Grodski, “Telerobotic control using augmented reality,” in Proceedings 4th IEEE International Workshop on Robot and Human Communication, 1995, pp. 21–29.
79. M. R. Endsley and D. B. Kaber, “Level of automation effects on performance, situation awareness and workload in a dynamic control task,” Ergonomics, vol. 42, no. 3, pp. 462–492, 1999.
  80. M. Endsley and E. O. Kiris, “The out-of-the-loop performance problem and level of control in automation,” Hum. Factors, vol. 37, no. 2, pp. 381–394, 1995.
  81. R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and levels of human interaction with automation,” IEEE Trans. Syst. Man, Cybern. Part ASystems Humans., vol. 30, no. 3, pp. 286–297, 2000.
  82. V. Riley, “A general model of mixed-initiative human-machine systems,” in 33rd Annual Human Factors Society Conference, 1989, pp. 124–128.
83. T. B. Sheridan, “Function allocation: algorithm, alchemy or apostasy?,” Int. J. Hum. Comput. Stud., vol. 52, pp. 203–216, 2000.
  84. B. H. Vrkljan and J. Miller-Polgar, “Advancements in vehicular technology: potential implications for the older driver,” Int. J. Veh. Inf. Commun. Syst., vol. 1, no. 1–2, 2005.
  85. I. J. Reagan, J. B. Cicchino, L. B. Kerfoot, and R. A. Weast, “Crash avoidance and driver assistance technologies – Are they used?,” Transp. Res. Part F Traffic Psychol. Behav., vol. 52, pp. 176–190, 2018.
86. P. Green, “Driver Interface Safety and Usability Standards: An Overview,” in Driver Distraction: Theory, Effects, and Mitigation, CRC Press, 2009.
87. M. R. Endsley, “Situation Awareness,” in Handbook of Human Factors and Ergonomics: Fourth Edition, G. Salvendy, Ed. Hoboken, New Jersey: John Wiley & Sons, Inc., 2012, pp. 553–568.
  88. J. A. Michon, “A critical review of driver models: What do we know, what should we do?,” in Human Behavior and Traffic Safety, 1985.
  89. M. L. Matthews, D. J. Bryant, R. D. G. Webb, and J. L. Harbluk, “Model for Situation Awareness and Driving: Application to Analysis and Research for Intelligent Transportation Systems,” Transp. Res. Rec. J. Transp. Res. Board, vol. 1779, no. 1, pp. 26–32, Jan. 2001.
  90. N. J. Ward, “Automation of task processes: An example of intelligent transportation systems,” Hum. Factors Ergon. Manuf., vol. 10, no. 4, pp. 395–408, 2000.
  91. S. T. Iqbal and E. Horvitz, “Notifications and awareness,” in ACM conference on Computer supported cooperative work - CSCW ’10, 2010, no. May 2014, p. 27.
  92. S. J. J. Gould, D. P. Brumby, A. L. Cox, V. M. González, D. D. Salvucci, and N. A. Taatgen, “Multitasking and interruptions: a SIG on bridging the gap between research on the micro and macro worlds,” Chi 2012, 2012.
  93. B. Sheehan, F. Murphy, M. Mullins, and C. Ryan, “Connected and autonomous vehicles: A cyber-risk classification framework,” Transp. Res. Part A Policy Pract., vol. 124, pp. 523–536, Jun. 2019.
  94. A. Papadoulis, M. Quddus, and M. Imprialou, “Evaluating the safety impact of connected and autonomous vehicles on motorways,” Accid. Anal. Prev., vol. 124, pp. 12–22, 2019.
  95. H. Xing, H. Qin, and J. W. Niu, “Driver’s Information Needs in Automated Driving,” in International Conference on Cross-Cultural Design, 2017.
  96. J. G. Miller, “Living systems: Basic concepts,” Behav. Sci., vol. 10, no. 3, pp. 193–237, 1965.
  97. C. Owsley, K. Ball, M. E. Sloane, D. L. Roenker, and J. R. Bruni, “Visual/cognitive correlates of vehicle accidents in older drivers.,” Psychol. Aging, vol. 6, no. 3, pp. 403–415, 1991.
  98. F. Schwarz and W. Fastenmeier, “Augmented reality warnings in vehicles: Effects of modality and specificity on effectiveness,” Accid. Anal. Prev., vol. 101, pp. 55–66, 2017.
  99. G. A. Alvarez and P. Cavanagh, “The Capacity of Visual Short-Term Memory Is Set Both by Visual Information Load and by Number of Objects,” Psychol. Sci., vol. 15, no. 2, pp. 106–111, 2004.
  100. K. Pammer, A. Raineri, V. Beanland, J. Bell, and M. Borzycki, “Expert drivers are better than non-expert drivers at rejecting unimportant information in static driving scenes,” Transp. Res. Part F Traffic Psychol. Behav., vol. 59, pp. 389–400, Nov. 2018.
  101. J. Gebauer, M. Shaw, and M. Gribbins, “Task-technology fit for mobile information systems,” JIT, vol. 25, no. 3, pp. 259–272, 2010.
  102. S. W. A. Dekker and E. Hollnagel, “Human factors and folk models,” Cogn. Technol. Work, vol. 6, pp. 79–86, 2004.
  103. S. W. A. Dekker and D. D. Woods, “MABA-MABA or Abracadabra? Progress on Human-Automation Co-ordination,” Cogn. Technol. Work, vol. 4, no. 4, pp. 240–244, 2002.
  104. D. A. Leonard and J. F. Rayport, “Managing Knowledge Assets, Creativity and Innovation,” Harv. Bus. Rev., vol. 75, no. 6, pp. 102–113, 1997.
  105. H. Beyer and K. Holtzblatt, “Contextual Design,” Interactions, vol. 6, no. 1, pp. 32–42, 1999.
  106. S. Bodker and C. N. Klokmose, “The human-artifact model: An activity theoretical approach to artifact ecologies,” Human-Computer Interact., vol. 26, no. 4, pp. 315–371, 2011.
  107. N. Naikar, R. Hopcroft, and A. Moylan, “Work domain analysis: Theoretical concepts and methodology,” Victoria, Australia, 2005.
  108. G. Praetorius, E. Hollnagel, and J. Dahlman, “Modelling Vessel Traffic Service to understand resilience in everyday operations,” Reliab. Eng. Syst. Saf., vol. 141, pp. 10–21, Sep. 2015.
  109. T. A. Saurin and N. J. B. Werle, “A framework for the analysis of slack in socio-technical systems,” Reliab. Eng. Syst. Saf., vol. 167, pp. 439–451, 2017.
  110. O. Bertelsen and S. Bødker, “Activity Theory,” in HCI models, theories, and frameworks: Toward a multidisciplinary science, MK, 2003, pp. 291–324.
111. E. Hollnagel, Cognitive Reliability and Error Analysis Method (CREAM). Elsevier, 1998.
  112. R. Bailey, Human Performance Engineering: Designing High Quality Professional User Interfaces for Computer Products, Applications and Systems, 3rd ed. Prentice Hall, 1996.
  113. S. Robertson and J. Robertson, “Mastering the Requirements Process Getting Requirements Right,” Work, 2013.
  114. E. S. K. Yu, “Modeling organizations for information systems requirements engineering,” in [1993] Proceedings of the IEEE International Symposium on Requirements Engineering, 1993, pp. 34–41.
  115. A. G. Sutcliffe and A. Gregoriades, “Automating Scenario Analysis of Human and System Reliability,” IEEE Trans. Syst. Man, Cybern. - Part A Syst. Humans, vol. 37, no. 2, pp. 249–261, Mar. 2007.
  116. Z. Guo, D. Zhou, J. Chen, J. Geng, C. Lv, and S. Zeng, “Using virtual reality to support the product’s maintainability design: Immersive maintainability verification and evaluation system,” Comput. Ind., vol. 101, pp. 41–50, Oct. 2018.
  117. B. Aykent, Z. Yang, F. Merienne, and A. Kemeny, “Simulation sickness comparison between a limited field of view virtual reality head mounted display (Oculus) and a medium range field of view static ecological driving simulator (Eco2),” in Driving Simulation Conference Europe, 2014.
  118. C. D. Wickens, “Multiple resources and performance prediction,” Theor. Issues Ergon. Sci., vol. 3, no. 2, pp. 159–177, 2002.
  119. S. Kim and A. K. Dey, “Simulated augmented reality windshield display as a cognitive mapping aid for elder driver navigation,” in 27th international conference on Human factors in computing systems - CHI 09, 2009, pp. 133–142.
  120. G. Jakus, C. Dicke, and J. Sodnik, “A user study of auditory, head-up and multi-modal displays in vehicles,” Appl. Ergon., vol. 46, no. Part A, pp. 184–192, 2015.
  121. S. Fadden, P. M. Ververs, and C. D. Wickens, “Costs and Benefits of Head-Up Display Use: A Meta-Analytic Approach,” Proc. Hum. Factors Ergon. Soc. Annu. Meet., vol. 42, no. 1, pp. 16–20, 1998.
  122. L. C. Thomas and C. D. Wickens, “Eye-tracking and Individual Differences in off-Normal Event Detection when Flying with a Synthetic Vision System Display,” Proc. Hum. Factors Ergon. Soc. Annu. Meet., vol. 48, no. 1, pp. 223–227, 2004.
  123. L. Prinzel and M. Risser, “Head-Up Displays and Attention Capture,” 2004.
  124. C. D. Wickens and A. L. Alexander, “Attentional Tunneling and Task Management in Synthetic Vision Displays,” Int. J. Aviat. Psychol., vol. 19, no. 2, pp. 182–199, Mar. 2009.
  125. P. M. Ververs and C. D. Wickens, “Head-up displays: effects of clutter, display intensity, and display location on pilot performance.,” Int J Aviat Psychol, vol. 8, no. 4, pp. 377–403, 1998.
126. W. J. Horrey, A. Alexander, and C. Wickens, “The Effects of Head-Up Display Clutter and In-Vehicle Display Separation on Concurrent Driving Performance,” in Human Factors and Ergonomics Society 47th Annual Meeting, 2003, pp. 1880–1884.
  127. E. Crundall, D. R. Large, and G. Burnett, “A driving simulator study to explore the effects of text size on the visual demand of in-vehicle displays,” Displays, vol. 43, pp. 23–29, 2016.
  128. M. Yeh, J. L. Merlo, C. D. Wickens, and D. L. Brandenburg, “Head Up versus Head Down: The Costs of Imprecision, Unreliability, and Visual Clutter on Cue Effectiveness for Display Signaling,” Hum. Factors, vol. 45, no. 3, pp. 390–407, 2003.
  129. J. Fagerlönn, “Urgent alarms in trucks: effects on annoyance and subsequent driving performance,” IET Intell. Transp. Syst., vol. 5, no. 4, pp. 252–258, Dec. 2011.
  130. Y. Zhang, X. Yan, and Z. Yang, “Discrimination of Effects between Directional and Nondirectional Information of Auditory Warning on Driving Behavior,” Discret. Dyn. Nat. Soc., vol. 2015, pp. 1–8, 2015.
  131. L. M. Stanley, “Haptic and Auditory Interfaces as a Collision Avoidance Technique during Roadway Departures and Driver Perception of These Modalities,” 2006.
  132. NHTSA, “Analysis of Lane Change Crashes,” 2003.
133. S. G. Klauer, T. A. Dingus, V. Neale, J. D. Sudweeks, and D. J. Ramsey, “The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data,” 2006.
  134. C. Ware, Information Visualization: Perception for Design. Elsevier Science, 2013.
  135. M. Beggiato, M. Pereira, T. Petzoldt, and J. Krems, “Learning and development of trust, acceptance and the mental model of ACC. A longitudinal on-road study,” Transp. Res. Part F Psychol. Behav., vol. 35, pp. 75–84, 2015.
  136. A. J. May, T. Ross, and S. H. Bayer, “Driver’s information requirements when navigating in an urban environment,” J. Navig., vol. 56, no. 1, pp. 89–100, 2003.
  137. M. R. Endsley and D. G. Jones, Designing for situation awareness: an approach to human-centered design, Second. CRC Press, 2012.
  138. DENSO, “Technology to Keep People Safe Wherever They Drive,” 2016.
  139. F. Biocca, C. Owen, A. Tang, and C. Bohil, “Attention Issues in Spatial Information Systems: Directing Mobile Users’ Visual Attention Using Augmented Reality,” J. Manag. Inf. Syst., vol. 23, no. 4, pp. 163–184, May 2007.
  140. Mercedes-Benz, “Active Blind Spot Assist,” 2016.
  141. D. R. Large, E. Crundall, G. Burnett, C. Harvey, and P. Konstantopoulos, “Driving without wings: The effect of different digital mirror locations on the visual behaviour, performance and opinions of drivers,” Appl. Ergon., vol. 55, pp. 138–148, 2016.
  142. H. Cheng, Z. Liu, N. Zheng, and J. Yang, “Enhancing a Driver’s Situation Awareness using a Global View Map,” in Multimedia and Expo, 2007 IEEE International Conference on, 2007.
  143. H. J. Oh, S. M. Ko, and Y. G. Ji, “Effects of Superimposition of a Head-Up Display on Driving Performance and Glance Behavior in the Elderly,” Int. J. Hum. Comput. Interact., vol. 32, no. 2, pp. 143–154, Feb. 2016.
  144. H. E. Petersen and D. J. Dugas, “The Relative Importance of Contrast and Motion in Visual Detection,” 1972.
145. R. Miucic, Connected Vehicles: Intelligent Transportation Systems. Springer, 2019.
Figure 1. Overview of the proposed method, starting with problem analysis (from a human factors perspective), followed by how the problem can be addressed through a TSR specification, and whether the proposed solution is satisfactory.
Figure 2. Goal Hierarchy for Driving Tasks associated with the problem of completing a journey safely.
Figure 3. The driver goals modelled in i* notation, showing agents, goals and softgoals for NFRs (e.g., safety) and human factors desiderata (situation awareness). The overlaid “D” symbol on links denotes the dependence of a softgoal on another goal/softgoal/task for its realisation. The goal on which the analysis focuses is shaded.
Figure 4. High-level FA analysis using QOC notation for the “respond to hazards” goal in the i* model. Solid lines indicate a positive contribution to the criterion. The best FA option is shaded.
Figure 5. HCI modality analysis of the “Automated warnings” task from Figure 4 for the identification of the best modality option (shaded).
Figure 6. i* goal hierarchy and decomposition of the “Respond to hazards” goal into the sub-task “Provide visual warning” from the initial functional allocation step, and specification of the tasks that need to be realised by the human or technology to satisfy the “Maintain Situation Awareness” NFR.
Figure 7. Percentage of subjects with evaluation score > 3 in 5-point Likert scale questions.
Table 1. Functional allocation analysis for the sub-tasks of the task “Provide visual warning” of Figure 6. The last column shows the TSR specification of the information that the new design needs to provide to the driver through the “Visual situation display”.
Driver tasks for adequate situation awareness | Capability of automation to implement requirement (H/M/L) | Reliability of automation in realising the requirement (H/M/L) | Functional allocation (HUMAN/COMPUTER/HCI) | TSRs: information requirements for situation awareness support using automated warnings (visual situation display)
Assess proximity to vehicles ahead, in relation to host vehicle | H | H | HCI: visualise information for human to decide | Information on threat risks in different colours
Assess proximity to rear vehicles, in relation to host vehicle | H | H | HCI: visualise information for human to decide | Information on tailgating vehicle risk
Assess direction of other vehicles’ movements | H | M | HCI: visualise information for human to decide | Information on risk level of peripheral vehicles
Assess risks from right-turning vehicles at unsignalled intersections (right-hand rule) | M | L | HCI: visualise information for human to decide | Information on right-turning vehicle risk
Assess risks from left-turning vehicles at unsignalled intersections (left-hand rule) | M | L | HCI: visualise information for human to decide | Information on left-turning vehicle risk
Assess following-vehicle risk (blind spot, tailgating) | H | H | HCI: visualise information for human to decide | Information on blind-spot risk
Assess congestion information on peripheral roads | M | M | HCI: visualise information for human to decide | Information on peripheral road network traffic
Assess priority of hazards | M | M | HCI: visualise information for human to decide | Prioritised hazard risk information
Assess intention of other vehicles behind and ahead of host vehicle | L | L | Human task | None
Assess risks of hidden vehicles at intersections | H | M | HCI: visualise information for human to decide | Information on hidden-vehicle risk
Table 2. Performance scores of expert and novice subjects during the practical part of the evaluation.
Tasks performed by expert and novice participants during the practical part of the COVID-19 case study | Percentage of expert subjects that addressed the question correctly (score > 65/100) | Percentage of novice subjects that addressed the question correctly (score > 65/100)
Write down the human task you focused on to address the problem (e.g., respond to hazards while driving a vehicle). Which non-functional requirement (human factors) is important to complete this task successfully? (e.g., maintain good driver situation awareness) | 77.7% (mean: 66.11, SD: 5.4) | 30% (mean: 57.5, SD: 15)
What is your recommended functional allocation for the above task and what were your selection criteria? (e.g., improve driver situation awareness through an in-vehicle warning system) | 77.7% (mean: 77.5, SD: 19.8) | 40% (mean: 60, SD: 20)
Specify the tasks required to be performed by a human or technology to realise the selected level of automation from the previous step (e.g., monitor my vehicle’s blind spot while on the motorway) | 77.7% (mean: 68.6, SD: 9.9) | 30% (mean: 79.8, SD: 20.12)
Specify the most appropriate functional allocation for each of the tasks you identified. What were the selection criteria you used? (e.g., automate the assessment of following vehicles’ proximity, let me decide what to do by consulting a user interface) | 88.8% (mean: 74.6, SD: 21.5) | 50% (mean: 62.7, SD: 26.12)
Write down the user interface’s functional requirements for each task from the previous step (e.g., present visual warnings on a head-up display depending on the type and direction of blind-spot risk) | 88.8% (mean: 79.8, SD: 20.12) | 30% (mean: 60, SD: 29.4)