Results of RQ2
To answer the second research question, the factors for AI adoption described in Section 2 were examined within the articles. For this, each paper was rated on the extent to which it addresses each criterion. A summary is reported in Table 3. Unless otherwise stated, a paper is assigned an empty circle if the respective factor is not mentioned at all. If a factor is named without any deeper description, the paper is given a semi-filled circle. If a detailed description spanning several sentences can be found, the paper is rated with a filled circle. In the latter case, a brief overview of the realization within the respective paper is presented thereafter.
First, the extent of required Personnel was considered. Eleven papers did not mention required employees and their competencies; only technical requirements and functionalities were described. Eleven authors briefly introduced the work of the affected employees, and seven papers describe the affected roles and their tasks in more detail. For instance, Villanueva Zacarias et al. [
46] introduce a framework where domain experts are responsible for the problem definition, whereas data engineers and data scientists take over algorithm-related tasks. Kranzer et al. [
53] follow a different approach: they describe the user's interaction with the system, which is realized via a tablet PC and augmented reality. Senna et al. [
54] sub-divide their development steps into three pillars, one of which is human-machine interaction. Indeed, the authors aim to display relevant information to decision-makers in a human-centered way. Yet, its realization is not outlined. Bocklisch et al. [
62] put a strong focus on the eventual user by testing their interaction with the developed system and subsequently collecting their feedback. Neunzig et al. [
35] introduce an assistance system aimed at employees from development and planning departments. To address user requirements, they develop three different interaction modes based on different skill levels. Angulo et al. [
68] describe the development of a cognitive assistance system that interacts with its user. To achieve appropriate interaction, the authors additionally collect user feedback by means of empirical methods and corresponding scales. Wellsandt [
67] develops a DAS that is able to interact with its users via text-to-speech methods. The user is thereby provided with additional information.
The next aspect deals with the IT infrastructure on the shop floor and its connectivity to the presented assistance systems. It should be noted that the aim here is not to show the extent of IT within the systems themselves, but the connection to IT systems on the production floor. From the results, it is apparent that 31% of the authors did not mention the IT infrastructure a company interested in the developed system would need. 48% of the publications at least named requirements or described them in a few words. Another 21% of the papers gave further descriptions of how to connect the developed model to existing IT infrastructure. For example, Rousopoulou et al. [
57] make use of a data acquisition module that is connected to factory machines and cloud services using an open-source hardware system as well as a Message Queuing Telemetry Transport (MQTT) broker. Liu et al. [
51] integrate several industrial Ethernet, fieldbus, and serial communication protocols, which allows data to be collected from numerous sensors directly embedded in a machining process. Wu et al. [
45] list a number of communication protocols that are used in their application to interact with physical devices in the production hall. Several wireless communication technologies (e.g., Wi-Fi and 4G LTE) enable network connectivity, whereas MTConnect ensures interoperability. Likewise, Deshpande et al. [
58] also make use of MTConnect and use the Hypertext Transfer Protocol (HTTP) for data transport. A similar approach is followed by Woo et al. [
60], who connect their platform to a manufacturing execution system (MES) using MTConnect. Heimes et al. [
66] connect their platform to several open-source and commercial data and cloud services, such as Hadoop, OpenShift, Microsoft Azure, or Amazon Web Services.
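To illustrate what MTConnect-based connectivity can look like in practice, the following Python sketch parses a heavily simplified stand-in for the XML an MTConnect agent returns from its current endpoint over HTTP. The XML snippet, device names, and data-item IDs are invented for illustration and are not taken from any of the cited systems; a real MTConnectStreams response is namespaced, versioned, and far richer.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified stand-in for an MTConnect "current"
# response; real responses carry a versioned MTConnectStreams XML namespace.
SAMPLE_RESPONSE = """
<MTConnectStreams>
  <Streams>
    <DeviceStream name="Mill-1">
      <ComponentStream component="Linear">
        <Samples>
          <Position dataItemId="Xpos" timestamp="2024-01-01T00:00:00Z">12.5</Position>
          <Position dataItemId="Ypos" timestamp="2024-01-01T00:00:00Z">-3.0</Position>
        </Samples>
      </ComponentStream>
    </DeviceStream>
  </Streams>
</MTConnectStreams>
"""

def latest_positions(xml_text: str) -> dict:
    """Map each Position sample's dataItemId to its numeric value."""
    root = ET.fromstring(xml_text)
    return {p.get("dataItemId"): float(p.text) for p in root.iter("Position")}

positions = latest_positions(SAMPLE_RESPONSE)  # {"Xpos": 12.5, "Ypos": -3.0}
```

In a deployed system, the XML string would instead be fetched over HTTP from the agent, and the extracted values forwarded to the assistance system's data layer.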
As described by Jöhnk et al. [
41], a basic requirement for successful adoption is AI awareness, i.e., awareness of AI and its functionalities. Hence, the third factor analyzes how much knowledge about ML affected employees need to have. The review reveals that six of the papers require a high level of knowledge, especially about algorithms and metrics, among other topics. 17 articles present a model that requires some basic knowledge about ML or statistics; deeper knowledge is handled by the framework. The remaining six papers describe models that are easy to use in terms of required background knowledge. For instance, the system developed by Villanueva Zacarias et al. [
46] allows users to give instructions in a language they are familiar with. ML-based tasks are then taken over by the respective experts. The model described by Senna et al. [
54] requires little ML knowledge due to an expert system that handles numerous steps of the ML pipeline and thereby simplifies its use. As the system described by Kranzer et al. [
53] requires little interaction with the user, it is also assigned a full circle. Data is collected from the Supervisory Control and Data Acquisition (SCADA) system via an interface, and the output is finally presented to the users. Fischbach et al. [
55] develop a model where many steps of the ML pipeline are transferred to the assistance system. The user is essentially responsible for data generation and result evaluation. Users of the model presented by Garouani et al. [
70] require little prior ML knowledge due to the high number of automated tasks, such as data ingestion, algorithm selection and tuning, as well as the provision of recommendations based on a knowledge base. Due to the focus on visual inspection, the DAS by Deshpande et al. [
58] allows users to perform ML applications more easily and intuitively. Theoretically, the system developed by Neunzig et al. [
35] would have to be assigned different ratings, as it integrates three different skill modes (beginner, advanced, and expert). These user modes differ in the scope of the instructions and in the variety of functions. Based on the beginner mode, a full circle indicating little required ML knowledge was considered most appropriate within this publication.
Jöhnk et al. [
41] furthermore state that “
upskilling enables employees to learn and develop AI or AI-related skills”. In this context, the papers in the review at hand were investigated regarding their suitability for so-called work-integrated learning. Papers were rated with a full circle if a detailed description of procedures and background knowledge, and thereby methods for non-formal learning, was provided, with a semi-filled circle in case of a brief explanation, and with an empty one otherwise. Specifically, one paper contains in-depth knowledge support, four articles provide at least some ideas, and 24 publications do not contain any deeper knowledge description at all. In contrast to the procedure described above, papers with a semi-filled circle are also described here. Angulo et al. [
68] make use of a cognitive module that analyzes its environment and extracts information. This information is provided to the user for learning purposes. Another possible method for realizing upskilling is delivered by Garouani et al. [
70] through the integration of explainable AI, thereby facilitating the interpretability of the algorithms. Likewise, Terziyan et al. [
65] transfer human knowledge to their system and use it to support decision-making in later steps. As described earlier, Senna et al. [
54] aim to enhance users' cognitive abilities with their assistance system. However, they do not describe how this goal is realized. As described before, Neunzig et al. [
35] make use of different user modes depending on the users' previous experience. They describe that, for example, the length of the instructions varies in this context. Thus, beginners are given longer texts that introduce them to the subject and explain in more detail what to do and what will happen in the DAS.
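A minimal sketch of how such experience-dependent instruction texts could be organized is shown below. The step name, mode labels, and wording are invented for illustration and are not taken from the cited assistance system; the point is only the design choice of keying instruction verbosity to the user's skill mode.

```python
# Hypothetical instruction texts per skill mode (invented for illustration).
INSTRUCTIONS = {
    "import_data": {
        "beginner": ("Click 'Import' and select your CSV file. The system will "
                     "check the file format and explain any problems it finds "
                     "before the analysis continues."),
        "advanced": "Import a CSV file via 'Import'; format checks run automatically.",
        "expert": "Import CSV.",
    },
}

def instruction(step: str, mode: str = "beginner") -> str:
    """Return the instruction text for a step, matched to the user's skill mode."""
    return INSTRUCTIONS[step][mode]

# Beginners receive longer, more explanatory texts than experts.
assert len(instruction("import_data", "beginner")) > len(instruction("import_data", "expert"))
```

Keeping the texts in one table per step makes it straightforward to audit that all modes cover the same steps and to add further modes later.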
Not only the description of stakeholders was examined, but also their Collaborative work. The analysis demonstrates that 22 of the articles do not provide a description of different functions or departments (e.g., manufacturing operators, information technology, or human resources). Six of the papers at least briefly mention or describe the roles of several stakeholders. Only one paper explains roles and their integrative work in detail. As already introduced above, Villanueva Zacarias et al. [
46] indicate that domain experts are responsible for the problem definition and for model evaluation in terms of applicability in manufacturing, whereas data engineers and data scientists are in charge of algorithm-related tasks. Hence, a delineation of tasks is described.
An essential prerequisite for ML models is the
Data availability. Thus, both the quantity and the quality of the data were investigated. The review demonstrates that six papers do not address at all in what way data was used; some of them also do not validate their models. 22 of the articles validate their model by using either open-source data or a complete data set from learning factories or industrial partners. Only one publication generates data while using the developed model and demonstrates practical applicability in that context. As such, Woo et al. [
60] use their framework for energy prediction on a milling machine. In the context of the prototype implementation, they record data with a given set of workpiece, machine tool, and operation.
Also, Data Quality can be considered crucial for ML implementation. Nevertheless, 55% of the articles do not outline in what way data quality is ensured. 24% of the publications briefly describe methods to improve data within their model. Six articles describe extensively how data quality is considered and improved. The model described by Villanueva Zacarias et al. [
46] consists of four sub-modules, one of which is meant for increasing data quality. It also allows summarizing a profile of the data for use in later steps. Zhang et al. [
47] describe in detail, over several paragraphs, the steps necessary for ensuring high data quality and how this is realized in their assistance system. Similarly, Rousopoulou et al. [
57] include data cleaning steps such as missing-value handling and normalization, and remove low-variance features, as these decrease model performance. Similar steps are taken by Garouani et al. [
70], who also conduct a robustness test in order to ensure the applicability of the model in the long term. Lechevalier et al. [
49] include a data pre-processing module in their system aiming to clean, reduce and transform data as necessary. Heimes et al. [
66] place a filter at the beginning of their DAS to maintain data quality. In this way, they ensure that only high-quality data is used and that, in case of doubt, adjustments are made to the data set at an early stage. To achieve this, they rely on various visualization tools.
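The cleaning steps named above (missing-value handling, removal of low-variance features, normalization) can be sketched in a few lines of plain Python. The function below is a toy pipeline, written under the assumption of purely numeric, column-oriented data; it is an illustration of the general technique, not code from any of the cited systems.

```python
from statistics import mean, pvariance

def clean(rows, var_threshold=1e-6):
    """Toy cleaning pipeline: impute missing values (None) with the column
    mean, drop near-constant (low-variance) columns, then min-max normalize.

    `rows` is a list of equal-length numeric rows; returns cleaned rows."""
    cleaned_cols = []
    for col in zip(*rows):  # iterate column-wise
        observed = [v for v in col if v is not None]
        m = mean(observed)
        col = [m if v is None else v for v in col]   # mean imputation
        if pvariance(col) <= var_threshold:          # low-variance filter
            continue  # also guards the division below against lo == hi
        lo, hi = min(col), max(col)
        cleaned_cols.append([(v - lo) / (hi - lo) for v in col])  # min-max scaling
    return [list(r) for r in zip(*cleaned_cols)]

rows = [[1.0, 5.0, 0.0], [None, 5.0, 10.0], [3.0, 5.0, 20.0]]
clean(rows)  # -> [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]: column of 5.0s dropped
```

In the usage example, the missing value is imputed with the column mean (2.0), the constant second column is removed, and the remaining columns are scaled to [0, 1].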
As stated by Jöhnk et al. [
41], Data accessibility should also be considered. It can be outlined that slightly more than half of the papers (15) do not provide information about access to data. A further eleven articles only mention accessibility, while three articles elucidate in detail the access to the data used within their model. For the papers listed here with a full circle, data had to be collected directly from a machine during validation; otherwise, accessibility cannot be proven. Liu et al. [
51] describe several sensors and connectors that collect data directly from machines. Consequently, their system allows real-time data analytics. Wu et al. [
45] make use of MTConnect and Open Platform Communications Unified Architecture (OPC UA) to gather data directly from the shop floor and then store it in a local database. As previously shown, Heimes et al. [
66] link their assistance system with various cloud platforms and can therefore easily access data. They then divide the data into different categories so that their DAS can analyze it precisely.
In addition, a focus was laid on the validation in an industrial environment. Papers were rated with a full circle if the validation was indeed conducted in a manufacturing environment and with a semi-filled circle if the validation took place either on an open-source data set or in a learning factory. If there was no validation at all, papers were rated with an empty circle. The research reveals that five research ideas were validated in the manufacturing environment of partner enterprises. Another nineteen articles validated their models on open-source data sets or in learning factories, and five developments were not validated at all. Frye et al. [
61] perform wear-and-tear monitoring and vibration prediction in the milling process of a real product. After conducting the necessary steps, they outline next steps for long-term deployment. Terziyan et al. [
65] use their assistance system to facilitate decision-making in the absence of actual decision-makers at a company site in Ukraine. It simplifies the decision-making process for non-experts. Rousopoulou et al. [
57] perform anomaly detection on six injection molding machines of an anonymous company site and extract relevant information for a high-quality machining process. Jun et al. [
56] conduct condition monitoring at an injection molding company. They extract data from an MES and feed it into their assistance system. González Rodríguez et al. [
52] solve a hybrid flow shop problem in an industrial production planning process. There, they aim to control the stocks at a tactical level. Heimes et al. [
66] validate their solution in two use cases of automotive battery production for electric vehicles. In this context, they record data from several sensors and investigate whether correlations exist.
Lastly, it was investigated whether the validation was carried out only by the authors of the papers or whether the
target group was actively involved. Deviating from the previously described classification, a paper reporting a validation with non-ML experts was rated with a full circle and with an empty one otherwise. From the findings, it can be seen that the target group was directly involved in four of the 29 papers. In the other 25 publications, only the work of the developers was described. González Rodríguez et al. [
52], for example, assign to several users specific tasks that are relevant for the validation in practice. Yet, from their description, it can be concluded that the authors themselves still strongly support the users during execution. As described above, Bocklisch et al. [
62] test their assistance system with one user, observe them during execution, and thereupon collect their feedback. Terziyan et al. [
65] point out that three employees from a targeted company were involved in the validation. Nevertheless, it remains unclear what their specific tasks were. Angulo et al. [
68] describe how an operator can work collaboratively with the system, in particular what their tasks are and in what way they can overrule the proposals made by the assistance system. Garouani et al. [
70] conduct interviews with the target group after execution to collect feedback on working with their system. A detailed description of the feedback is given subsequently.
Finally, it can be highlighted that the sub-factors Financial budget, AI ethics, Innovativeness, Change management, and Data flow were not considered in any of the papers.