3.1. Research Process Model
The process of creating the artefact should involve a search for a solution to a specified problem, drawing on existing theories and the body of knowledge (Peffers et al. [35]). Meanwhile, the study's findings must be effectively communicated to the appropriate audience [34]. DSR is an incremental and iterative process [34], and the iterative cycles imply constant reflection and abstraction [36], which are necessary foundations for developing a design theory and artefact. A design theory describes how an artefact should be constructed in order to achieve the desired initiatives and results [36]. Thus, the DSR process presented by Peffers et al. [35] suited our research aims; their research process model provides a useful synthesised general model that builds on other approaches [37].
Figure 1.
AI-enabled Process Model adopted.
Furthermore, we find this process model consistent with our research aims. It comprises six activities: (1) identify the problem, (2) define the objectives of a solution, (3) design and development, (4) demonstration, (5) evaluation, and (6) communication. Following the process model of Peffers et al. [35], our research process model (1) identified the problem by analysing the literature, (2) formed the objectives of a solution, (3) developed a conceptual design, (4) builds an artefact as an instantiation of the solution in a further study, (5) evaluates the design of the artefact conceptually and practically, and (6) communicates the problem, the solution, and the usefulness of the solution to researchers and other audiences. Moreover, three iterations are conducted to allow a variety of evaluation methods and to strengthen the validity of the artefact's design.
3.1.1. Phase 1: Problem Identification
This phase identifies a research problem and the importance of solving it. While instantiations reported in the literature demonstrate ML capabilities in different instances and studies [4,30,31], there is a lack of prescriptive design knowledge to guide researchers and practitioners in systematically implementing DSS in NPOs for analysing donor behaviour. To expand awareness of the research problem beyond the literature, two informal interviews were conducted with experts from NPOs during this phase. In the interviews, we asked the experts to (1) describe the process of donor behaviour analysis, (2) state the challenges they face in designing a DSS that helps describe and predict donor behaviour, and (3) explicate the potential of creating a design theory that guides the process of designing an AI-enabled DSS, together with any other suggestions.
Table 1 summarises these interviews, stating the process of analysing donor behaviour, the challenges faced by some NPOs, and suggestions for creating an artefact that analyses donor behaviour.
Valuable insights were noted from the interviews. For example, the experts mentioned that descriptive and predictive analytics assist NPOs in making better decisions, increasing efficiency and performance, and understanding the factors that influence donations. Furthermore, these analytics can be generated and visualised through a DSS. At this stage, the interviews helped identify the problem and increased awareness of the need for a design theory of an artefact to analyse donor behaviour.
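To illustrate the kind of descriptive and predictive analytics the experts referred to, the sketch below summarises donor records and fits a simple classifier on synthetic data. The column names, the synthetic label rule, and the logistic-regression model are assumptions made for illustration; they are not the evaluated artefact's actual design.

```python
# Minimal sketch of descriptive + predictive donor analytics.
# Column names and the model choice are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
donors = pd.DataFrame({
    "gift_count": rng.integers(1, 20, n),          # past donations
    "avg_gift": rng.uniform(5, 500, n),            # average amount
    "months_since_last": rng.integers(0, 36, n),   # recency
})
# Synthetic label: recent, frequent donors tend to donate again.
donors["will_donate"] = (
    (donors["gift_count"] > 5) & (donors["months_since_last"] < 12)
).astype(int)

# Descriptive analytics: summarise behaviour per outcome group.
summary = donors.groupby("will_donate")[["gift_count", "avg_gift"]].mean()
print(summary)

# Predictive analytics: fit a classifier and score a new donor profile.
model = LogisticRegression().fit(
    donors[["gift_count", "avg_gift", "months_since_last"]],
    donors["will_donate"],
)
new_donor = pd.DataFrame(
    [[8, 120.0, 3]],
    columns=["gift_count", "avg_gift", "months_since_last"],
)
print("P(donate again) =", model.predict_proba(new_donor)[0, 1])
```

In a real DSS the summary table would feed descriptive dashboards, while the fitted model would drive the predictive component.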
3.1.2. Phase 2: Objectives of a Solution
In this phase, the objectives and requirements of the intended artefact are elicited to determine the main functionalities of the AI-enabled DSS. The initial requirements for creating the artefact are defined based on the meta-requirements of Meth et al. [36] and on the decision theory of Silver [38]. The guidelines for developing an artefact by Hevner et al. [34] are also followed; these guidelines are intended to help researchers, reviewers, authors, and readers understand what is required for effectual research [34].
A design scientist must understand the artefact's objectives, which can be defined through design requirements.
Table 2 introduces the initial design requirements derived from Meth et al. [36]. Existing research in decision support theory typically describes two primary goals of decision-makers: ensuring maximum decision quality and reducing effort [21,36]. However, a DSS may offer the user only a limited selection of strategies [38], which requires designers of a DSS to minimise such restrictions [39]. The degree to which a DSS pre-selects decision techniques and, as a result, provides decision-makers with only a limited variety of strategies (which may not include their preferred ones) is known as system restrictiveness [39]. Ultimately, the most crucial characteristics of any DSS are perceived advice quality, perceived cognitive effort, and perceived restrictiveness [36]. Therefore, the three design requirements borrowed from Meth et al. [36] formed the foundation of our conceptual design of the AI-enabled DSS for analysing donor behaviour.
Table 2 presents the design requirements borrowed from Meth et al. [36], with an explanation and a justification for each DR.
3.1.3. Phase 3: Design and Development
This phase defines design principles (DPs) and design features (DFs), which interpret the design requirements from the previous phase. A DP can be a statement that tells what the artefact should do [18], while DFs are unique artefact capabilities that fulfil the design principles [36]. DPs are statements that help develop an artefact meeting the design requirements [36], and they are essential design theory elements because they contain important design knowledge [40]. Because one aim is to build an artefact (an AI-enabled DSS for analysing donor behaviour), the DPs should be stated in the form “the system should do…” or “the system should fulfil…” [40].
Table 3 presents six DPs together with their explanation.
DFs are specific artefact capabilities and functionalities that address and fulfil the DPs and design requirements [36]. The DFs are introduced in the last phase of the conceptual design and are created to interpret the DPs (Table 4).
After defining the design requirements, principles, and features, a conceptual design is presented in phase 4, the demonstration. After the conceptual design has been demonstrated and evaluated, an artefact (the AI-enabled DSS) will be built and evaluated to ensure the validity of the design requirements, principles, and features.
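The chain from design requirements to DPs to DFs is essentially a traceability mapping, and its completeness can be checked mechanically. The sketch below uses placeholder identifiers (DR1, DP1, DF1, etc.); the actual mapping lives in Tables 2–4 and is not reproduced here.

```python
# Hypothetical traceability map between design requirements (DRs),
# design principles (DPs), and design features (DFs). The identifiers
# are placeholders; the real mapping is given in Tables 2-4.
dp_to_dr = {"DP1": ["DR1"], "DP2": ["DR1", "DR2"], "DP3": ["DR3"]}
df_to_dp = {"DF1": ["DP1", "DP2"], "DF2": ["DP3"]}

def uncovered(targets, mapping):
    """Return the targets not addressed by any entry in the mapping."""
    covered = {t for refs in mapping.values() for t in refs}
    return sorted(set(targets) - covered)

# Every DR should be addressed by some DP, and every DP by some DF.
print("DRs without a DP:", uncovered(["DR1", "DR2", "DR3"], dp_to_dr))
print("DPs without a DF:", uncovered(["DP1", "DP2", "DP3"], df_to_dp))
```

A check like this makes gaps in the conceptual design visible before the evaluation iterations begin.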
3.1.4. Phase 4: Demonstration
The demonstration phase presents an instantiation of the AI-enabled DSS. The aim of this stage is to show that use of the artefact can solve the problem. In this research context, the demonstration stage is divided into two steps: a conceptual design of the AI-enabled DSS, and an artefact (the AI-enabled DSS) for analysing donor behaviour. Notably, in this paper the aim of the demonstration is to present a conceptual design of the AI-enabled DSS and to confirm that the design requirements, DPs, and DFs can solve the research problem. Therefore, the first part of the evaluation phase (iteration one) is conducted, applying any required changes to the conceptual design.
From the previous stage (Design and Development), we combined the design requirements (the requirements of the AI-enabled DSS), which link to the DPs (our objectives), and the DFs, which interpret our execution of the design requirements and DPs. A preliminary conceptual design was developed and evaluated with NPO stakeholders. The evaluation phase includes interviews with experts from NPOs to provide valuable feedback on the conceptual design. Before iteration one, we met with the experts to demonstrate this preliminary conceptual design and explain how its components emerged.
Figure 2 shows the three main components of the conceptual design: three design requirements, six DPs, and four DFs.
3.1.5. Phase 5: Evaluation
In this phase, the evaluation framework introduced by Venable et al. [42] is used, which distinguishes two types of evaluation: formative and summative. The assessment is intended to evaluate the AI-enabled DSS and the design theory against the relevant design requirements, DPs, and DFs. Formative evaluations are utilised to generate experimentally verified interpretations that serve as the foundation for effective action in enhancing the traits or performance of the evaluand [42]. Summative evaluation provides a foundation for producing shared meanings of the evaluation in different contexts. The evaluation phase will run three iterations; after each iteration, changes will be applied to the design and development of the AI-enabled DSS.
Most importantly, because the demonstration so far presents only the conceptual design (not an instantiated, functional artefact), a cycle goes back to phase 2 (Objectives of a Solution) and then phase 3 (Design and Development), to ensure that the evaluation results of iteration one have been addressed. An artefact (the AI-enabled DSS) will then be built based on the evaluation results from iteration one. After that, iteration two will ensure the functionality of the artefact, followed by iteration three to collect experts' feedback on the effectiveness, efficiency, control, and success of the AI-enabled DSS.
Iteration one aims to evaluate the initial design requirements, DPs, and DFs (a formative assessment) to ensure their relevance to our research aims and objectives. For the iteration one evaluation, semi-structured interviews were conducted with NPO decision-makers, data scientists, volunteers, systems designers and analysts, NPO experts, and NPO managers. Interviews, one of the qualitative research methods, are frequently concerned with obtaining a thorough grasp of a situation or characterising a specific phenomenon [43]. During the interviews, these experts were involved in evaluating our conceptual design, and the results of this iteration lead to applying any changes or suggestions to the design requirements, DPs, and DFs.
3.2. Data Collection and Interview Analysis for Iteration One Evaluation
Iteration one of the evaluation phase was conducted using semi-structured interviews with a total of 16 interviewees from NPOs. In qualitative research, the sample size varies depending on the number of questions and the research objectives, and is frequently smaller than in quantitative research [44], because qualitative methods are concerned with gaining a thorough grasp of a phenomenon or determining its meaning [43]. Therefore, 16 interviewees were invited to participate, considering the variety of their experience, their deep understanding of the research problem, and their availability to conduct the interviews within a certain period of the study.
Each interviewee was invited via email with a consent form and a brief introduction to the research problem and the proposed solution. Each interviewee signed the consent form and agreed to the recording, which was used to analyse the interviews. At each meeting, the conceptual design was introduced briefly through a short presentation. The presentation lasted 10 minutes and included a brief introduction to the research problem, the research aims, the conceptual design, the expected output of the research, and an explanation of the interview process. This was followed by 11 questions (shown in Appendix A), distributed across the five phases of Appreciative Inquiry Theory [45]. Appreciative Inquiry is a method of focusing on what is excellent in an organisation in order to improve it and build a better future [45]. Considering Appreciative Inquiry in designing the questions provided guidance in obtaining the best answers from the stakeholders. The questions were also designed so that participants could easily understand them and provide sufficiently detailed answers.
Following the flow of Appreciative Inquiry, which contains five phases, experts were asked several questions relating to each phase. The five phases are:
Participants: questions about the experts' experience working in NPOs.
Discovery: questions about the experts' experience working with DSS, ML, and data analytics, either in NPOs or in for-profit organisations.
Dream: questions to collect the experts' feedback on the conceptual design of the AI-enabled DSS for analysing donor behaviour.
Design: questions asking the experts about any additional design requirements, DPs, and DFs that could be added to the conceptual design.
Destiny: questions to gauge the experts' expectations of the AI-enabled DSS for analysing donor behaviour in NPOs.
All interview recordings were saved on the University of Technology Sydney OneDrive of the research investigator. Each interview lasted less than an hour, including an introduction to our research framework, an explanation of the conceptual design, and the questions. Subsection 3.1.7 and section 4 present a comprehensive analysis and the results of the interviews. Qualitative data analysis strategies vary widely, depending on the purpose of the collected qualitative data [46]. In this study, two strategies of qualitative data analysis are applied to the interview analysis: coding and categorising. Coding entails giving a datum a symbolic meaning; it is a process of understanding the meanings of various sections of the data. Categorising, on the other hand, groups similar or comparable codes for further analysis. In this paper, four categories are provided to report the analysis results.
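The code-and-categorise strategy described above can be sketched programmatically. The fragment texts, code labels, and category names below are illustrative placeholders, not the study's actual coding scheme (which was produced in MaxQDA).

```python
# Minimal sketch of the code-and-categorise strategy applied to
# interview fragments. Codes and categories are placeholders.
from collections import defaultdict

# Step 1 (coding): assign each interview fragment a symbolic code.
coded_fragments = [
    ("I volunteer at a charity on weekends", "work_experience"),
    ("The mapping of DRs to DPs looks precise", "positive_evaluation"),
    ("Please add tooltips to the dashboards", "usability_request"),
    ("I expect it to predict repeat donors", "prediction_expectation"),
]

# Step 2 (categorising): group similar or comparable codes.
code_to_category = {
    "work_experience": "Work experience",
    "positive_evaluation": "Evaluation of the conceptual design",
    "usability_request": "Additional DRs, DPs and DFs",
    "prediction_expectation": "Expectations of the AI-enabled DSS",
}

categories = defaultdict(list)
for fragment, code in coded_fragments:
    categories[code_to_category[code]].append((code, fragment))

for category, items in categories.items():
    print(category, "->", [code for code, _ in items])
```

Grouping codes under categories in this way is what later allows codes from different categories (for example, work experience and evaluation) to be cross-linked.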
Interestingly, the four categories contain various codes, which are explained accordingly, and some codes from different categories are linked to provide further insight. To support the categorising and coding process, we used MaxQDA, software that specialises in analysing qualitative data. The four categories of interview answers are as follows:
- 1.
Work experience:
This category summarises interviewees' answers during the participants and discovery phases of Appreciative Inquiry Theory.
Table 5 shows the codes of participants' working experience. The experts interviewed have experience in data science, software engineering, systems design and analysis, social science, management, and volunteering as consultants. This variety of experience enriched the answers and the evaluation. The interviews also aimed to produce categories of answers from different experts; these categories led to discovering hidden patterns among the interviewees [43].
During the first part of the interviews, the experts were asked simple questions about their working experience. We found that most experts have some experience working or volunteering in NPOs (of different types, such as charities, religious centres, and youth centres). Two software engineering experts, who had brief volunteering experience in NPOs, provided relevant answers during the interviews. Three NPO managers and one CEO of different NPOs also answered our questions, comprehensively explaining the challenges of analysing donor behaviour in NPOs. Interestingly, one researcher in NPO studies supported our claim that DSS are critical for NPOs to target more donors, stating that NPOs require a clear vision of donor behaviour over a long period. The variety of experts involved in our interviews helped us raise awareness of the problem, understand some of the decision-making requirements of NPOs, and chart the opinions that assisted us in designing the AI-enabled DSS for NPOs. These codes are integrated with the following categories to provide a meaningful analysis of the interviews.
Category construction attempts to group things that appear similar or related [46]. Categorisation is an act of interpretation, which may help in interpreting other categories and their codes. Thus, the work experience category helps in understanding the different answers of different experts, given the variety of their experience. Relevant and varied experience brings knowledge and skills into the evaluation, which leads to an effective evaluation. Therefore, the work experience codes are linked to the following categories and codes to obtain the maximum benefit from the evaluation and to draw useful conclusions from the interview analysis.
- 2.
Evaluation of the conceptual design:
This category collects participants' answers evaluating the conceptual design, summarising the interviewees' answers during the dream phase of Appreciative Inquiry Theory. After asking the experts about additional design requirements, DPs, and DFs, we asked for their opinions on the mapping of design requirements, DPs, and DFs (the conceptual design). We then analysed each answer and assigned it to a code to form the evaluation category. The evaluation category eventually comprised five codes of answers reported by the experts in their general evaluation of our conceptual design.
Table 6 shows the association between the work experience codes and the codes for the evaluation of the conceptual design. The evaluation codes combine the answers of all experts who shared the same general opinions of our conceptual design. A variety of NPO experts stated that the conceptual design is a strong, abstract, and systematic design with precise links between the main components of design requirements, DPs, and DFs. Interestingly, one data scientist and an NPO manager agreed that the mapping of the three components of the conceptual design is good, but would consider adding “adaptive systems” and “security”. The evaluation of the conceptual design reassured us that the mapping of design requirements, DPs, and DFs is a good design; still, certain additional requirements (covered in category 3) should be considered when building the AI-enabled DSS.
- 3.
Additional design requirements, DPs and DFs:
This category combines the similar additional design requirements, DPs, and DFs proposed by participants, summarising the interviewees' answers during the design phase of Appreciative Inquiry Theory. For example, three NPO managers required “usability”, indicating that usability is a key requirement for those lacking technical skills. Similarly, a “very friendly system” was required by one experienced volunteer with NPOs. Notably, “quality of data” was also required alongside other requirements, because it is believed to be an essential requirement for data scientists: any data analysis should be based on accurate, high-quality data [47]. Wang and Strong [48] grouped more than 100 data quality elements into four groups: relevance, accuracy, accessibility, and representation. Data quality will be considered when building the AI-enabled DSS in a further study, by checking these four categories of data quality during the data preparation step, prior to applying data analysis using ML techniques. Some experts requested unique additional requirements, such as “increasing efficiency” and an “adaptive system”. Increasing the efficiency of decision-making corresponds to our DR 2. An “adaptive system”, however, is an interesting requirement for interactive systems [49]: an adaptive system is typically used when not all of the necessary input characteristics are known or when there are slow variations in the input data [50]. The “adaptive systems” requirement is outside the scope and research objectives of this study and of the further studies building an AI-enabled DSS for analysing donor behaviour in NPOs. In addition, an expert in social science mentioned that more NPOs would benefit substantially from a system that is flexible to install and whose contents can be edited. This unique requirement is also considered when building the analytical models (iteration two) and in iteration three of the AI-enabled DSS design science framework. Interestingly, the DPs are derived from the design requirements.
Therefore, we asked the interviewees to add DPs corresponding to the additional design requirements. For example, the experts who asked for “usability” as an additional DR stated that “the AI-enabled DSS should be easy to use to describe and predict donor behaviour”. Another social science expert in NPOs suggested that a possible DP could be “the AI-enabled DSS should be flexible to install and access by NPOs' stakeholders”. This ensures that the “flexibility” of the additional DR can be achieved, saving time and effort for NPOs' decision-makers.
Consequently, the experts who asked for additional design requirements were asked about any additional DFs. The experts who added “usability” as an additional design requirement asked for “tooltips”, along with “easy to navigate” and “choice of colours”, as further DFs.
Table 7 presents the category of additional design requirements, DPs, and DFs, linked with the work experience category.
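The data-preparation screening against Wang and Strong's four data quality groups, mentioned above for the further study, could look like the sketch below. The dataset, column names, and individual checks are assumptions for illustration; they are not the artefact's final implementation.

```python
# Illustrative data-quality screening for the data preparation step,
# loosely organised around Wang and Strong's four groups (relevance,
# accuracy, accessibility, representation). Column names and checks
# are assumptions for this sketch.
import pandas as pd

donations = pd.DataFrame({
    "donor_id": [1, 2, 3, 3, 4],
    "amount": [50.0, -10.0, 200.0, 200.0, None],
    "date": ["2023-01-05", "2023-02-11", "bad-date", "bad-date", "2023-03-02"],
})

report = {
    # Relevance: does the data cover the fields the analysis needs?
    "relevance_missing_columns": [c for c in ["donor_id", "amount", "date"]
                                  if c not in donations.columns],
    # Accuracy: implausible values (negative gifts) and duplicate rows.
    "accuracy_negative_amounts": int((donations["amount"] < 0).sum()),
    "accuracy_duplicate_rows": int(donations.duplicated().sum()),
    # Accessibility: missing cells that downstream models cannot use.
    "accessibility_null_cells": int(donations.isna().sum().sum()),
    # Representation: values failing the expected date format.
    "representation_bad_dates": int(
        pd.to_datetime(donations["date"], errors="coerce").isna().sum()
    ),
}
print(report)
```

Each non-empty finding in the report would be resolved during data preparation, before the ML techniques are applied.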
- 4.
Expectations of the AI-enabled DSS:
This category collects and groups participants' expectations of our AI-enabled DSS for analysing donor behaviour in NPOs, summarising the interviewees' answers during the destiny phase of Appreciative Inquiry Theory. Before concluding each interview, we asked the experts what they expect from the AI-enabled DSS for analysing donor behaviour. One question was asked of all interviewees: “What results/analysis do you expect when implementing the AI-enabled DSS to analyse donor behaviours?”. All answers were then analysed and each was assigned to a code. As a result, four codes of expectations of the AI-enabled DSS were obtained.
Table 8 shows the association between the work experience category and the experts' expectations of the AI-enabled DSS for analysing donor behaviour in NPOs. The work experience category codes group all interviewees with the same role.
Most participants expected our artefact to predict and describe donor behaviour, which represents our main research objectives. Other experts expected the AI-enabled DSS to be a helpful solution for enhancing decision-making in NPOs, based on their understanding of the three conceptual components (design requirements, DPs, and DFs). Essentially, one data scientist and a researcher in social studies of NPOs expected that ML techniques would be required to achieve the objectives of the AI-enabled DSS for analysing donor behaviour. Associating the evaluation codes with work experience helps in providing useful feedback across users' different experience. For example, when different experts agreed on one evaluation code, this indicates the importance of considering that code when applying changes in the following iterations.
The results of the interviews led to discussion of the required changes to the conceptual design. The required changes (explained in section 4) offered the authors the experts' different perspectives from the evaluation of the conceptual design of the AI-enabled DSS for analysing donor behaviour. Moreover, the results confirm that the mapping of design requirements, DPs, and DFs is well presented, which ultimately reflects the achievement of the research aims.