“The safety culture of an organization is the product of individual and group values, attitudes, perceptions, competencies, and patterns of behavior that determine the commitment to, and the status and proficiency of, an organization’s health and safety management”
“Shared values (what is important) and beliefs (how things work) that interact with an organisation’s structures and control systems to produce behavioural norms (the way we do things around here).”
“…the broad suite of technologies that can match or surpass human capabilities, particularly those involving cognition.”
“Just culture means a culture in which front-line operators or other persons are not punished for actions, omissions or decisions taken by them that are commensurate with their experience and training, but in which gross negligence, wilful violations and destructive acts are not tolerated.”
“the burden of responsibility gravitates towards the organisation to provide sufficient and appropriate training to air traffic controllers. If they are not well trained it will be hard to blame them for actions, omissions or decisions arising from AI/ML situations...”
“The functioning of AI challenges traditional tests of intent and causation, which are used in virtually every field of law.”
“It seems counter-intuitive, then, to categorise the level of automation by degree of autonomous control gained over human control lost, when in practice both are needed to ensure safety.” [56]
“Machines are good at doing things right; humans are good at doing the right thing.”
1. This particular variant is for all aviation sectors (ATM, airlines and airports); the ‘official’ EUROCONTROL questionnaire has slightly different wording in some items and is for ATM organisations only.
2. Fatigue appears in this diagram, as it is sometimes added to the other dimensions because of its importance as a factor in aviation, though it is not strictly speaking a safety culture dimension, and is not used in the ATM-only version.
3. For a recent general survey on potential industrial safety and security governance of AI systems, see [60].
Questionnaire Item | Dimension | IA Impact | H/M/L |
---|---|---|---|
B01 My colleagues are committed to safety. | Colleague commitment to safety | The IA would effectively be a digital colleague. The IA’s commitment to safety would likely be judged according to the IA’s performance. Human-Supervised Training, using domain experts with the IA, would help engender trust. The concern is that humans might ‘delegate’ some of their responsibility to the IA. A key issue here is to what extent the IA sticks rigidly to ‘golden rules’ such as aircraft separation minima (5NM lateral separation and 1000 feet vertical separation) or is slightly flexible about them, as controllers may (albeit rarely) need to be. The designer needs to decide whether to ‘hard code’ some of these rules or allow a little leeway (within limits); this determines whether the IA behaves like ‘one of the guys’ or never, ever breaks rules (a minimal sketch of this design choice follows the table). | High |
B04 Everyone I work with in this organization feels that safety is their personal responsibility. | Colleague commitment to safety | Since an IA cannot effectively take responsibility, someone else may be held accountable for an IA’s ‘actions’. If a supervisor fails to see an IA’s ‘mistake’, who will be blamed? HAIKU use cases may shed light on this, if there can be scenarios where the IA gives ‘poor’ or incorrect advice. If an IA is fully autonomous, this may affect the human team’s collective sense of responsibility, since in effect they can no longer be held responsible. | High |
B07 I have confidence in the people that I interact with in my normal working situation. | Colleague commitment to safety | As for B01, this will be judged according to performance. Simulator training with IAs should help pilots and others ‘calibrate’ their confidence in the IA. This may overlap significantly with B01. | High |
B02 Voicing concerns about safety is encouraged. | Just culture and reporting | The IA could ‘speak up’ if a key safety concern is not being discussed or has been missed. This could be integrated into Crew Resource Management (CRM) and Threat and Error Management practices, and CRM’s ATM counterpart, Team Resource Management. However, the IA may then be considered a ‘snitch’, a tool of management to check up on staff. This could also be a two-way street, so that the crew could report on the IA’s performance. | High |
B08 People who report safety related occurrences are treated in a just and fair manner. | Just culture and reporting | The IA could monitor and record all events and interactions in real time, and would be akin to a ‘living’ Black Box recorder. This could affect how humans behave and speak around the IA, if AI ‘testimony’ via data forensics was ever used against a controller in a disciplinary or legal prosecution case. | High |
B12 We get timely feedback on the safety issues we raise. | Just culture and reporting | The IA could significantly increase reporting rates, depending on how its reporting threshold is set, and also record and track how often a safety issue is raised. | Medium |
B14 If I see an unsafe behaviour by a colleague I would talk to them about it. | Just culture and reporting | [See also B02] The IA can ‘query’ behaviour or decisions that may be unsafe. Rather than ‘policing’ the human team, the IA could possibly bring the risk to the human’s attention more sensitively, as a query. | High |
B16 I would speak to my manager if I had safety concerns about the way that we work. | Just culture and reporting | If managers have full access to IA records, the IA could potentially become a ‘snitch’ for management. This would most likely be a deal-breaker for honest teamworking. | Low |
C01 Incidents or occurrences that could affect safety are properly investigated. | Just culture and reporting | As for B08, the IA’s record of events could shed light on the human colleagues’ states of mind and decision-making. There need to be safeguards around such use, however, so that it is only used for safety learning. | High |
C06 I am satisfied with the level of confidentiality of the reporting and investigation process. | Just culture and reporting | As for B16, the use of IA recordings as information or even evidence during investigations needs to be considered. Just Culture policies will need to adapt/evolve to the use of IAs in operational contexts. | High |
C09 A staff member prosecuted for an incident involving a genuine error or mistake would be supported by the management of this organisation. | Just culture and reporting | This largely concerns management attitudes to staff and provision of support. However, the term ‘genuine error or mistake’ needs to encompass the human choice between following IA advice which turns out to be wrong, and ignoring such advice which turns out to be right, since in either case there was no human intention to cause harm. This can be enshrined in Just Culture policies, but judiciaries (and the travelling public) may take an alternative viewpoint. In the event of a fatal accident, black-and-white judgements sharpened by hindsight may be made which reflect neither the complexity of IAs’ and Human-AI Teams’ operating characteristics and the local rationality at the time, nor the overriding benefits to the industry. | High |
C13 Incident or occurrence reporting leads to safety improvement in this organisation. | Just culture and reporting | This is partly administrative and depends on the financial costs of safety recommendations. Nevertheless, the IA may be seen as adding dispassionate evidence and a more balanced assessment of severity, and of how close an event actually came to being an accident (e.g., via Bayesian and other statistical analysis techniques). It will be interesting to see if the credence given to the IA by management is higher than that given to its human counterparts. | High |
C17 A staff member who regularly took unacceptable risks would be disciplined or corrected in this organisation. | Just culture and reporting | As for C09, an IA may be aware of an individual who takes more risks than others. However, there is a secondary aspect, linked to B07: the IA may be trained by humans, and may be biased by the trainers’ own level of risk tolerance and safety-productivity trade-offs. If an IA is seen as offering solutions judged too risky, or conversely ‘too safe’, nullifying operational efficiency, the IA will need ‘re-training’ or re-coding in some way. | High |
B03 We have sufficient staff to do our work safely. | Staff and equipment | Despite many assurances that AI will not replace humans, many see strong commercial imperatives for doing exactly that (e.g., a shortage of commercial pilots and impending shortage of air traffic controllers, post-COVID low return-to-work rate at airports, etc.). | High |
B23 We have appropriate support from safety specialists. | Staff and equipment | The IA could serve as a ‘safety encyclopaedia’ for its team, with all safety rules, incidents and risk models stored in its knowledge base. | Medium |
C02 We have the equipment needed to do our work safely. | Staff and equipment | The perceived safety value of IAs will depend on how useful the IA is for safety, and will be a major question for the HAIKU use cases. One ‘wrong call’ could have a big impact on trust. | High |
B05 My manager is committed to safety. | Management commitment to safety | The advent of IAs needs to be discussed with senior management, to understand if it affects their perception of who/what is keeping their organisation safe. They may come to see the IA as a more manageable asset than people, one that can be ‘turned up or down’ with respect to safety. | High |
B06 Staff have a high degree of trust in management with regard to safety. | Management commitment to safety | Conversely, operational managers may simply be reluctant to allow the introduction of IAs into the system, due to both safety and operational concerns. | Medium |
B10 My manager takes action on the safety issues we raise. | Management commitment to safety | See C13 above. | Low |
B19 Safety is taken seriously in this organization. | Management commitment to safety | Depends on how much the IA is designed to focus on safety. The human team will watch the IA’s ‘behaviour’ closely and judge for themselves whether the IA is there for safety or for other purposes. These could include profitability, but also a focus on environmental issues. Ensuring competing priorities do not conflict may be challenging. | Medium |
B22 My manager would always support me if I had a concern about safety. | Management commitment to safety | See B16, C09, C17. If the IA incorporates a dynamically updated risk model, concerns about safety could be rapidly assessed and addressed according to their risk importance (this is the long-term intent of Use Case 5 in HAIKU). | Low |
B28 Senior management takes appropriate action on the safety issues that we raise. | Management commitment to safety | See B12. A further aspect is whether (and how quickly) the management supports getting the IA ‘fixed’ if its human teammates think it is not behaving safely. | Low |
B09 People in this organization share safety related information. | Communication | The IA could become a source of safety information sharing, but this would still depend on the organisation in terms of how the information would be shared and with whom. The IA could, however, share important day-to-day operational observations, e.g., by flight crew, who can pass on their insights to the next crew flying the same route, or by ground crew at an airport (some airports already use a ‘Community App’ for rapid sharing of such information). | Medium |
B11 Information about safety related changes within this organisation is clearly communicated to staff. | Communication | The IA could again be an outlet for information sharing, e.g., notices could be uploaded instantly and the IA could ‘brief’ colleagues or inject new details as they become relevant during operations. The IA could also upload daily NOTAMs (Notices to Airmen) and safety briefings for controllers, and could distill the key safety points, or remind the team if they forget something from procedures, NOTAMs or briefing notes. | Medium |
B17 There is good communication up and down this organisation about safety. | Communication | An IA could reduce the reporting burden of operational staff if there could be an IA function to transmit details of concerns and safety observations directly to safety departments (though the ‘narrative’ should still be written by humans). An IA ‘network’ or hub could be useful for safety departments to quickly assess safety issues, and prepare messages to be cascaded down by senior/middle management. | Medium |
B21 We learn lessons from safety-related incident or occurrence investigations. | Communication | The IA could provide useful and objective input for safety investigations, including inferences on causal and contributory factors. Use of Bayesian inference and other similar statistical approaches could avoid some typical human statistical biases, to help ensure the right lessons are learned and are considered proportionately to their level of risk. Alternatively, if information is biased or counterfactual evidence is not considered, the way the IA judges risk may be incorrect, leading to a lack of trust by operational people. It could also leave managers focusing on the wrong issues. | High |
B24 I have good access to information regarding safety incidents or occurrences within the organisation. | Communication | IAs or other AI-informed safety intelligence units could store a good deal of information on incidents and accidents, with live updates, possibly structured around risk models, and capturing more contextual factors than are currently reported (this is the aim of HAIKU Use Case 5). Information can then be disseminated via an App or via the IA itself to various crews / staff. | High |
B26 I know what the future plans are for the development of the services we provide. | Communication | The implementation and deployment of IAs into real operational systems needs careful and sensitive introduction, as there will be many concerns and practical questions. Failure to address such concerns may lead to very limited uptake of the IA. | Medium |
C03 I read reports of incidents or occurrences that are relevant to our work. | Communication | The IA could be used to store incidents, but this would not require anything so sophisticated as an IA. However, if the IA is used to provide concurrent (in situ) training, it could bring up past incidents related to the current operating conditions. | Low |
C12 We are sufficiently involved in safety risk assessments. | Communication | Working with an IA might give the team a better appreciation of underlying risk assessments and their relevance to current operations. | Low |
C15 We are sufficiently involved in changes to procedures. | Communication | The IA could build up evidence of procedures that regularly require workarounds or are no longer fit for purpose. The IA could highlight gaps between ‘work as designed’, and ‘work as done’. | Medium |
C16 We openly discuss incidents or occurrences in an attempt to learn from them. | Communication | [See C03] Unless this becomes an added function of the IA, it has low relevance. However, if a group learning review [70], or Threat and Error Management is used in the cockpit following an event, the AI could provide a dispassionate and detailed account of the sequence of events and interactions. | Low |
C18 Operational staff are sufficiently involved in system changes. | Communication | There is a risk that if the IA is a very good information collector, people at the sharp end might gradually be excluded from system change updates, as the system developers will consult data from the IA instead. | Medium |
B13 My involvement in safety activities is sufficient. | Collaboration | As for C15 and C18. | Low |
B15r People who raise safety issues are seen as troublemakers. | Collaboration | It remains to be seen whether an IA could itself be perceived as a troublemaker if it continually questions its human team-mates’ decisions and actions. | Medium |
B20 My team works well with the other teams within the organization. | Collaboration | The way different teams ‘do’ safety in the same job may vary (both inside companies, and between companies). The IA might need to be tailored to each team, or able to vary/nuance its responses accordingly. If people move from one team or department to another, they may need to learn ‘the way the IA does things around here.’ | Medium |
B25r There are people who I do not want to work with because of their negative attitude to safety. | Collaboration | There could conceivably be a clash between an IA and a team member who, for example, was taking significant risks or continually overriding / ignoring safety advice, or an IA that was giving poor advice. If the IA is a continual learning system, its behaviour may evolve over time, and diverge from optimum, even if it starts off safe when first implemented. | High |
B27 Other people in this organization understand how my job contributes to safety. | Collaboration | The implementation of an IA in a particular work area (e.g., a cockpit; an air traffic Ops room; an airport/airline operational control centre) itself suggests safety criticality of human tasks in those areas. If an IA becomes an assimilator of all safety relevant information and activities, it may become clearer how different roles contribute to safety. | Medium |
C05 Good communication exists between Operations and Engineering/Maintenance to ensure safety. | Collaboration | If Engineering/Maintenance ‘own’ the IA, i.e., are responsible for its maintenance and upgrades, then there will need to be good communication between these departments and Ops/Safety. A secondary aspect here is that IAs used in Ops could transmit information to other departments concerning engineering and maintenance needs observed during operations. | Medium |
C10 Maintenance always consults Operations about plans to maintain operational equipment. | Collaboration | It needs to be determined who can upgrade an IA’s system and performance characteristics, e.g., if a manual adjustment is made to the IA to better account for an operational circumstance that has caused safety issues, who makes this adjustment and who needs to be informed? | Medium |
B18 Changes to the organisation, systems and procedures are properly assessed for safety risk. | Risk Handling | The IA could have a model of how things work and how safety is maintained, so any changes will need to be incorporated into that model, which may identify safety issues that may have been overlooked or played down. This is similar to current use of AIs for continuous validation and verification of operating systems, looking for bugs or omissions. Conversely, the IA may give advice that does not make sense to the human team or the organisation, yet be unable to explain its rationale. Humans may find it difficult to adhere to such advice. | High |
C07r We often have to deviate from procedures. | Risk Handling | The IA will observe (and perhaps be party to) procedural deviation, and can record associated reasons as well as frequencies (highlighting common ‘workarounds’). Such data could be used to identify procedures that are no longer fit for purpose, or else inform retraining requirements if the procedures are in fact still fit for purpose. | High |
C14r I often have to take risks that make me feel uncomfortable about safety. | Risk Handling | The IA will likely be unaware of any discomfort on the human’s part (unless emotional AI is employed), but the human can probably utilise the IA’s advice to err on the side of caution. Conversely, a risk-taker, or someone who puts productivity first, may consult an IA until a way around the rules is found (human ingenuity can be used for the wrong reasons). | High |
C04 The procedures describe the way in which I actually do my job. | Procedures and training | People know how to ‘fill in the gaps’ when procedures don’t really fit the situation, and it is not clear how an IA will do this. [This was in part why the earlier Expert Systems movement failed to deliver, leading to the infamous ‘AI winter’]. Also, the IA could conceivably record work as done and contrast it to work as imagined (the procedures). This would, over time, create an evidence base on procedural adequacy (see also C07r). | High |
C08 I receive sufficient safety-related refresher training. | Procedures and training | The IA could take note of human fluency with the procedures and how much support it has to give, thus gaining a picture of whether more refresher training might be beneficial. | Medium |
C11 Adequate training is provided when new systems and procedures are introduced. | Procedures and training | As for C08. | Medium |
C19 The procedures associated with my work are appropriate. | Procedures and training | When humans find themselves outside the procedures, e.g., in a flight upset situation in the cockpit, an IA could rapidly examine all sensor information and supply a course of action for the flight crew. | High |
C20 I have sufficient training to understand the procedures associated with my work. | Procedures and training | As for C08 and C11. | Medium |
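The B01 row above raises a concrete design decision: whether the IA treats ‘golden rules’ such as the 5NM lateral / 1000 feet vertical separation minima as hard-coded, or is allowed the small, bounded leeway that controllers may (rarely) exercise. The snippet below is a minimal, hypothetical sketch of how that choice could be surfaced as an explicit policy parameter rather than left implicit; the class, field names and leeway values are illustrative assumptions and do not describe any actual HAIKU IA.

```python
# Hypothetical sketch of the 'hard-coded golden rule vs. bounded leeway' choice (B01).
# The 5 NM / 1000 ft separation minima are taken from the table above; everything
# else (names, leeway values) is an illustrative assumption.

from dataclasses import dataclass

LATERAL_MINIMUM_NM = 5.0      # lateral separation minimum quoted under B01
VERTICAL_MINIMUM_FT = 1000.0  # vertical separation minimum quoted under B01


@dataclass
class SeparationPolicy:
    """Makes explicit whether the IA treats the minima as absolute or slightly flexible."""
    hard_coded: bool = True          # True: the IA 'never, ever breaks rules'
    lateral_leeway_nm: float = 0.0   # bounded leeway a designer might permit
    vertical_leeway_ft: float = 0.0

    def is_separated(self, lateral_nm: float, vertical_ft: float) -> bool:
        """Separation is maintained if either the lateral or the vertical limit is met."""
        lat_limit = LATERAL_MINIMUM_NM
        vert_limit = VERTICAL_MINIMUM_FT
        if not self.hard_coded:      # 'one of the guys': tolerate a small, bounded shortfall
            lat_limit -= self.lateral_leeway_nm
            vert_limit -= self.vertical_leeway_ft
        return lateral_nm >= lat_limit or vertical_ft >= vert_limit


if __name__ == "__main__":
    strict = SeparationPolicy(hard_coded=True)
    flexible = SeparationPolicy(hard_coded=False, lateral_leeway_nm=0.2)

    # 4.9 NM lateral and 800 ft vertical: the strict policy flags a loss of
    # separation, while the flexible policy tolerates the 0.1 NM shortfall.
    print(strict.is_separated(4.9, 800.0))    # False
    print(flexible.is_separated(4.9, 800.0))  # True
```

Either way, the leeway becomes a visible, auditable design parameter, which is the point made under B01: the choice shapes whether the IA is perceived as ‘one of the guys’ or as a rule-bound enforcer.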
Safety Culture Concerns | Safety Culture Affordances |
---|---|
Humans may become less concerned with safety if the IA is seen as handling safety aspects. This is an extension of the ‘complacency’ issue with automation, and may be expected to increase as the IA’s autonomy increases. | The IA could ‘speak up’ if it assesses a human course of action as unsafe. |
Humans may perceive a double-bind: if they follow ‘bad’ IA advice or fail to follow ‘good’ advice, and there are adverse consequences, they might find themselves being prosecuted. This would lead to a lack of trust in the IA. | The IA could be integrated into Crew Resource Management practices, helping decision-making and post-event review in the cockpit or air traffic Ops Room. |
If the IA reports on human error or human risk-taking or other ‘non-nominal behaviour’ it could be considered a ‘snitch’ for management, and may not be trusted. | The IA could serve as a living black box recorder, recording more of decision-making strategies than is the case today. |
If IA recordings are used by incident and accident investigators, Just Culture policies will need to address such usage both for ethical reasons and to the satisfaction of the human teams involved. Fatal accidents in which an IA was a part of the team are likely to raise new challenges for legal institutions. | If the IA is able to collect and analyse day-to-day safety occurrence information it may be seen as adding objective (dispassionate) evidence and a more balanced assessment of severity, as well as an unbiased evaluation of how close an event came to being an accident (e.g., via Bayesian analysis; a minimal sketch follows the table). |
An IA that is human-trained may adopt its human trainers’ level of risk tolerance, which may not always be optimal for safety. | The IA could significantly increase reporting rates, depending on how its reporting threshold is set, and could also record and track how often a safety-related issue is raised. |
The introduction of Intelligent Assistants may inexorably lead to fewer human staff. Although there are various ways to ‘sugar-coat’ this, e.g., current shortfalls in staffing across the aviation workforce, it may lead to resentment against IAs. This factor will likely be influenced by how society gets on more generally with advanced AI and IAs. | The IA could serve as a safety encyclopedia, or Oracle, able to give instant information on safety rules, risk assessments, hazards, etc. |
If the IA queries humans too often it may be perceived as policing them, or as a trouble-maker. | The IA can upload all NOTAMs and briefings etc. so as to be able to keep the human team current, or to advise them if they have missed something. |
If the IA makes unsafe suggestions, trust will be eroded rapidly. | If the IA makes one really good ‘save’, its perceived utility and trustworthiness will increase. |
The IA may have multiple priorities (e.g., safety, environment, efficiency/profit). This may lead to advice that humans find conflicted or confusing. | The IA could share important day-to-day operational observations, e.g., by flight crew, controllers, or ground crew, who can pass on their insights to the incoming crew. |
Management may come to see the IA as a more manageable safety asset than people, one where they can either ‘turn up’ or ‘tone down’ the accent on safety. | The IA could reduce the reporting ‘burden’ of operational staff by transmitting details of human concerns and safety observations directly to safety departments. An IA ‘network’ or hub would allow safety departments to quickly assess safety issues and prepare messages to be cascaded down by senior/middle management. |
Operational managers may simply be reluctant to allow the introduction of IAs into the system, due to both safety and operational concerns. | The IA could provide objective input for safety investigations, including inferences on causal and contributory factors. Use of Bayesian inference and other similar statistical approaches could help avoid typical human statistical biases, thereby ensuring the right lessons are learned and are considered proportionately to their level of risk. |
If information is biased or counterfactual evidence is not considered, the way the IA judges risk may be incorrect, leading to a lack of trust by operational people. It could also have managers focusing on the wrong issues. | IAs could store information on incidents and associated (correlated) contextual factors, with live updates structured around risk models, and disseminate warnings of potential hazards on the day via an App or via the IA itself communicating with crews / staff. |
There is a risk that if the IA is a very good information collector, people at the sharp end will gradually be excluded from system change updates, as the system developers will consult data from the IA instead. | The IA might serve as a bridge between the way operational people and safety analysts think about risks, via considering more contextual factors not normally encoded in risk assessments. |
There could conceivably be a clash between an IA and a team member who, for example, was taking significant risks or continually over-riding / ignoring safety advice, or, conversely, an IA that was giving bad advice. | The IA could build up evidence of procedures that regularly require workarounds or are no longer fit for purpose. The IA could highlight gaps between ‘work as designed’, and ‘work as done’. |
IAs may need regular maintenance and fine-tuning, which may affect the perceived ‘stability’ of the IA by Ops people, resulting in loss of trust or ‘rapport’. | IAs used in Ops could transmit information to other departments concerning engineering and maintenance needs observed during operations. |
The IA may give advice that does not make sense to the human team or the organisation, yet be unable to explain its rationale. Managers and operational staff may find it difficult to adhere to such advice. | The IA could have a model of how things work and how safety is maintained, so that any changes will need to be incorporated into the model, which may identify safety issues that have been overlooked or ‘played down’. This is similar to current use of AIs for continuous validation and verification of operating systems, looking for bugs or omissions. |
A human risk-taker, or someone who puts productivity first, may consult (‘game’) an IA until a way around the rules is found. | The human can utilise the IA’s safety advice to err on the side of caution, if he or she feels pressured to cut safety corners due to self-imposed, peer or management pressure. |
People know how to fill in the gaps when procedures don’t really fit the situation, and it is not clear how an IA will do this. The AI’s advice might not be so helpful unless it is human-supervisory-trained. | When humans find themselves outside the procedures, e.g., in a flight upset situation in the cockpit, an IA could rapidly examine all sensor information and supply a course of action for the flight crew. |
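Several entries above (C13 in the questionnaire table and the affordance on dispassionate severity assessment) mention Bayesian and similar statistical techniques for judging how close an event came to being an accident. The snippet below is a minimal sketch of the underlying idea only, assuming a simple Beta-Binomial model; the prior, the counts and the names are illustrative and are not drawn from any real dataset.

```python
# Minimal, illustrative sketch of Bayesian updating for 'how close did this come
# to an accident' style questions (see C13). A Beta-Binomial model estimates the
# probability that a recurring precursor event escalates; the prior and the
# counts below are invented for illustration.

from dataclasses import dataclass


@dataclass
class EscalationEstimate:
    alpha: float = 1.0  # prior pseudo-count of 'escalated' outcomes (Beta prior)
    beta: float = 1.0   # prior pseudo-count of 'contained' outcomes

    def update(self, escalated: int, contained: int) -> None:
        """Fold newly reported occurrences of the precursor into the posterior."""
        self.alpha += escalated
        self.beta += contained

    @property
    def mean(self) -> float:
        """Posterior mean probability that the precursor escalates."""
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    estimate = EscalationEstimate()              # uninformative Beta(1, 1) prior
    estimate.update(escalated=2, contained=48)   # hypothetical occurrence reports
    print(f"Posterior escalation probability: {estimate.mean:.3f}")  # ~0.058
```

Because the estimate is updated dispassionately from reported occurrences, this illustrates how an IA could offer the ‘more balanced assessment of severity’ suggested above, while the quality of that assessment still depends entirely on unbiased reporting, as the concerns column notes.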