To derive these predictions, a specific level of delegation strength was needed to determine trusting behavior. While it would have been easy to assume that anything above 50% connoted a delegation decision and anything at or below 50% a not-delegate decision, this cutoff would have been arbitrary. First, the bimodal distribution of the data, with a large percentage of observations above 55%, suggests that delegation decisions may sit higher than an equal-odds approach implies. Second, in other empirical research on trust behaviors with AI and robots, performance had to reach 70% to 80% to elicit a trust decision from a participant [60–62]. For these reasons, the predictions treated 75% or above as commensurate with a delegation decision by a human to the AI. While violations of total probability were also found, with larger deviations, at lower thresholds (i.e., 50%, 60%), the researchers decided to align with extant research suggesting that trust in machines is only exhibited at higher levels of reliability, which may be equated with a delegation or trust decision.
4.1. Modeling Delegation Strength in the Study Data with Quantum Open Systems
The violation of total probability is an indicator for using modeling techniques that can account for such violations. One class of models that captures these violations is quantum models of decision making. To elucidate the difference between classical and quantum models, the Markov decision model is discussed first and the quantum model thereafter.
In this model, choice outcomes for a decision maker are agree or disagree with a machine (i.e., AI), and decision outcomes are delegate or not delegate. The set of choice outcome states is $\{A, DisA\}$, where A = "agree" and DisA = "disagree." The set of decision outcome states is $\{D, notD\}$, where D = "delegate" and notD = "not delegate." For simplicity, suppose the initial probability distribution for the agree/disagree choice is represented by a 2 x 1 probability matrix:

$$\phi_0 = \begin{bmatrix} \phi_A \\ \phi_{DisA} \end{bmatrix}, \tag{1}$$

where $\phi_A$ and $\phi_{DisA}$ are positive real numbers that sum to one. In this decision process the decision maker starts with the probability distribution expressed in equation (1). From the agree/disagree choice, the decision maker transitions to the delegate/not-delegate states. The transition matrix that captures this behavioral process can be written as:

$$T = \begin{bmatrix} T_{D,A} & T_{D,DisA} \\ T_{notD,A} & T_{notD,DisA} \end{bmatrix}. \tag{2}$$

The matrix in equation (2) represents the four transition probabilities (e.g., $T_{D,DisA}$ represents the probability of transitioning to the delegate (D) state from the disagree (DisA) state); hence, the entries within each column are non-negative and sum to one. Then the probability distribution over the delegate/not-delegate (D = delegate, notD = not delegate) outcomes can be written as:

$$\phi_t = T\,\phi_0 = \begin{bmatrix} T_{D,A}\,\phi_A + T_{D,DisA}\,\phi_{DisA} \\ T_{notD,A}\,\phi_A + T_{notD,DisA}\,\phi_{DisA} \end{bmatrix}. \tag{3}$$
To interpret and elucidate the final probability distribution in equation (3), a 2 x 2 joint probability distribution table of these four events is shown in Table 2. By using Table 2, one can write the total probability of delegate as follows:

$$p_T(D) = p(A)\,p(D \mid A) + p(DisA)\,p(D \mid DisA). \tag{4}$$

Subsequently, equation (4) can be written as:

$$p_T(D) = \phi_A\,T_{D,A} + \phi_{DisA}\,T_{D,DisA}. \tag{5}$$
Table 2 shows the decision process that elicits the agree/disagree choice of the decision maker before deciding to delegate or not delegate. This delegate or not-delegate decision outcome can also be attained without eliciting the agree/disagree choice. The probability values for this decision process can be represented with Table 3.
To use the Markov model shown in equation (3) to capture the decision process for both conditions shown in Table 2 and Table 3, it is necessary to assume that equation (3), with the same transition matrix, holds in both conditions. Fitting equation (3) to the data then requires three parameters, $p(A)$, $p(D \mid A)$, and $p(D \mid DisA)$, which are obtained from the observed choice-condition data. In return, since there are four data points ($p(A)$, $p(D \mid A)$, $p(D \mid DisA)$, and the decision-only probability $p(D)$), one degree of freedom remains to test the model. This degree of freedom is imposed by the law of total probability that the Markov model must obey. Thus, the Markov model requires (and must predict) that $p(D) = p_T(D)$. In the case where $p(D) \neq p_T(D)$, the Markov model remains applicable only if the transition matrix entries are allowed to change across conditions; in that case, the model cannot be empirically tested [10,58].
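To make the bookkeeping in equations (1)–(5) concrete, the following is a minimal Python sketch of the two-state Markov model. The numerical values are illustrative placeholders, not fitted study parameters, and the variable names (phi0, T, p_D) are our own labels for the quantities defined above:

```python
import numpy as np

# Illustrative (not fitted) parameters for the two-state Markov model.
phi0 = np.array([0.6, 0.4])          # equation (1): [p(A), p(DisA)]

# Equation (2): column-stochastic transition matrix; each column gives
# the transition probabilities out of A and DisA, respectively.
T = np.array([[0.8, 0.3],            # to delegate (D)
              [0.2, 0.7]])           # to not-delegate (notD)

# Equation (3): probability distribution over delegate / not-delegate.
phi_t = T @ phi0
p_D = phi_t[0]

# Equations (4)-(5): the law of total probability gives the same value,
# which is the testable restriction the Markov model must satisfy.
p_D_total = phi0[0] * T[0, 0] + phi0[1] * T[0, 1]
assert np.isclose(p_D, p_D_total)
print(f"p(D) = {p_D:.3f}")           # 0.6*0.8 + 0.4*0.3 = 0.60
```

Because the decision-only prediction is computed from the same column-stochastic matrix, the model has no freedom to produce anything other than the total-probability value; this is the single degree of freedom used to test it.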
For the quantum model, similar to the Markov model, the choice outcome states are $\{A, DisA\}$ and the set of decision outcome states is $\{D, notD\}$. The initial amplitude values for the agree/disagree choice are represented by a 2 x 1 matrix:

$$\psi_0 = \begin{bmatrix} \psi_A \\ \psi_{DisA} \end{bmatrix}. \tag{6}$$

The probability of observing an agree choice becomes $|\psi_A|^2$; the probability of observing a disagree choice becomes $|\psi_{DisA}|^2$. The sum of the squared amplitudes equals one, $|\psi_A|^2 + |\psi_{DisA}|^2 = 1$.
In the case of the quantum model, the choice and decision outcomes form two orthogonal bases in a two-dimensional Hilbert space. Peculiar to the quantum model, the events described in the choice (agree/disagree) basis can be incompatible with the decision (delegate/not-delegate) basis, and either basis can be used to represent the system state:

$$|\Psi\rangle = \psi_A\,|A\rangle + \psi_{DisA}\,|DisA\rangle = \psi_D\,|D\rangle + \psi_{notD}\,|notD\rangle.$$

When a stimulus is presented to a decision maker, the cues in the situation generate a superposition state with respect to agree and disagree, delegate and not delegate. By asking the choice question, agree/disagree, the system state shown in equation (6) becomes a superposition of $|A\rangle$ and $|DisA\rangle$. As shown in Figures 4 and 5, when a decision maker chooses to agree (resulting in agree then delegate, shown in Figure 4) or disagree (resulting in disagree then delegate, shown in Figure 5), the superposition of states with respect to the agree/disagree basis is resolved. Figure 4 corresponds to the first element ($p(A)\,p(D \mid A)$) of the total probability shown in equation (4); Figure 5 represents the second element ($p(DisA)\,p(D \mid DisA)$) of equation (4). Subsequently, either the delegate or not-delegate state is chosen from a new superposition over the delegate and not-delegate states. Contrary to this process, as shown in Figure 6, if the agree/disagree question is never asked and a decision maker chooses delegate or not delegate directly (without expressing an agree or disagree choice), the decision maker never resolves the superposition concerning the agree/disagree basis. This is one of the salient differences between the quantum and Markov models.
Another difference between the quantum and Markov models is the calculation of the transition matrix. In the quantum model, the transition amplitudes are represented as the elements of a unitary matrix:

$$U = \begin{bmatrix} u_{D,A} & u_{D,DisA} \\ u_{notD,A} & u_{notD,DisA} \end{bmatrix}. \tag{7}$$

Since the matrix in (7) is a unitary matrix, it must satisfy the following conditions:

$$|u_{D,A}|^2 + |u_{notD,A}|^2 = 1, \quad |u_{D,DisA}|^2 + |u_{notD,DisA}|^2 = 1, \quad u_{D,A}\,u_{D,DisA}^{*} + u_{notD,A}\,u_{notD,DisA}^{*} = 0. \tag{8}$$

The requirements in equation (8) also imply that the rows satisfy the same constraints:

$$|u_{D,A}|^2 + |u_{D,DisA}|^2 = 1, \quad |u_{notD,A}|^2 + |u_{notD,DisA}|^2 = 1.$$

Similar to a Markov model, a transition matrix is generated in the quantum model. In this case, the elements of the transition matrix are generated from the unitary matrix. The resulting transition matrix is a doubly stochastic matrix, each of whose rows and columns sums to unity. For the choice condition, transition probabilities are calculated from the squared magnitudes of the unitary matrix elements, $T_{ij} = |u_{ij}|^2$, where $T$ represents the transition matrix; hence $T$ must be doubly stochastic. A decision-only situation, directly deciding to delegate or not delegate, is modeled by the following matrix product:

$$\psi_f = U\,\psi_0 = \begin{bmatrix} u_{D,A}\,\psi_A + u_{D,DisA}\,\psi_{DisA} \\ u_{notD,A}\,\psi_A + u_{notD,DisA}\,\psi_{DisA} \end{bmatrix}. \tag{9}$$

Solving equation (9) for the probability of delegate results in:

$$p(D) = \left| u_{D,A}\,\psi_A + u_{D,DisA}\,\psi_{DisA} \right|^2.$$

After expanding the complex conjugate, equation (9) becomes

$$p(D) = |u_{D,A}|^2 |\psi_A|^2 + |u_{D,DisA}|^2 |\psi_{DisA}|^2 + 2\,|u_{D,A}||\psi_A||u_{D,DisA}||\psi_{DisA}|\cos\theta, \tag{10}$$

where the $\theta$ term in equation (10) is the phase of the complex number $u_{D,A}\,\psi_A\,(u_{D,DisA}\,\psi_{DisA})^{*}$; in equation (10) only the real part of this complex number is used, $\mathrm{Re}\!\left[u_{D,A}\,\psi_A\,(u_{D,DisA}\,\psi_{DisA})^{*}\right] = |u_{D,A}\,\psi_A|\,|u_{D,DisA}\,\psi_{DisA}|\cos\theta$.
Equation (10) is called the law of total amplitude [10,58]. As can be seen, because of the interference term

$$\mathrm{Int} = 2\,|u_{D,A}||\psi_A||u_{D,DisA}||\psi_{DisA}|\cos\theta, \tag{11}$$

equation (10) violates the law of total probability. Depending on the value of $\cos\theta$, the probability produced by equation (10) can be higher or lower than equation (4). In the case of $\cos\theta = 0$, since the interference term becomes zero, equation (10) reduces to the law of total probability in equation (4). Following the discussion in [10,58], we proceed with a four-dimensional model because the two-dimensional model of this decision-making process produces a violation of the double stochasticity requirement of the quantum model.
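Before moving to the four-dimensional model, a short numerical sketch (again with illustrative, not fitted, values) shows how equations (9)–(11) generate an interference term; the particular unitary parameterization below is one convenient choice, not the one fitted in this study:

```python
import numpy as np

# Illustrative amplitudes and unitary for the two-state quantum model,
# equations (6)-(11); values are placeholders, not fitted to the data.
psi0 = np.array([np.sqrt(0.6), np.sqrt(0.4)])   # |psi_A|^2=0.6, |psi_DisA|^2=0.4

theta = np.pi / 3                                # free phase parameter
U = np.array([[np.sqrt(0.8),                     np.sqrt(0.2) * np.exp(1j * theta)],
              [-np.sqrt(0.2) * np.exp(-1j * theta), np.sqrt(0.8)]])
assert np.allclose(U.conj().T @ U, np.eye(2))    # unitarity, equation (8)

# Decision-only condition, equation (9): amplitudes are summed first,
# and the delegate probability is the squared magnitude of the sum.
p_D_quantum = abs(U[0] @ psi0) ** 2

# Choice-then-decide condition: the law of total probability, equation (4),
# built from squared magnitudes (the doubly stochastic matrix T = |u|^2).
Tm = np.abs(U) ** 2
p_D_total = Tm[0, 0] * 0.6 + Tm[0, 1] * 0.4

print(p_D_quantum - p_D_total)   # nonzero interference term, equation (11)
```

Setting theta to pi/2 in this sketch zeroes the interference term and recovers the classical total-probability value, mirroring the $\cos\theta = 0$ case above.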
Capitalizing on the state concepts in the two-dimensional models, the four-state model includes the following combination of decision states:

$$\{\,|A,D\rangle,\; |A,notD\rangle,\; |DisA,D\rangle,\; |DisA,notD\rangle\,\}. \tag{12}$$

The state $|A,D\rangle$ in equation (12) represents the state in which the decision maker agrees with the AI and delegates the decision to the AI. Due to the dynamics of the Markov model, even though the state of the system is not known by the modeler, the system will be in a definite state and will jump from one state to another or stay in the same state. The initial probability distribution of this four-dimensional model is represented by a 4 x 1 matrix:

$$\phi(0) = \begin{bmatrix} \phi_{A,D}(0) \\ \phi_{A,notD}(0) \\ \phi_{DisA,D}(0) \\ \phi_{DisA,notD}(0) \end{bmatrix}. \tag{13}$$

Each row in equation (13) represents the probability of being in one of the states listed in equation (12) at time $t = 0$; for example, $\phi_{A,D}(0)$ is the probability of agree and delegate at $t = 0$.

For the choice (agree/disagree) then decide (delegate/not-delegate) task, the condition of the choice being agree requires zero values for the third ($\phi_{DisA,D}$) and fourth ($\phi_{DisA,notD}$) entries of the matrix, because a choice of agree leaves only the two agree states as probable outcomes. As a result, the initial probability distribution is $\phi_A(0) = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \end{bmatrix}^{\mathsf{T}}$ (and, analogously, $\phi_{DisA}(0) = \begin{bmatrix} 0 & 0 & 0.5 & 0.5 \end{bmatrix}^{\mathsf{T}}$ for a disagree choice).

In the decide-only task, for the Markov model, it is assumed that both agree and disagree are probable, but these probability values are not known. Then, capitalizing on the discussion in Busemeyer et al. (2009), the initial probability distribution for this task becomes the following mixture:

$$\phi(0) = p_A\,\phi_A(0) + (1 - p_A)\,\phi_{DisA}(0), \tag{14}$$

where $p_A$ represents the implicit probability of agree for the decision-alone task.
The state evolution for the choice condition is as follows. After choosing to agree/disagree, a decision maker decides to delegate/not delegate. The decision can take place at any time t after the agree/disagree choice. The cognitive process that represents the state evolution of the decision maker during time t can be represented by a 4 x 4 transition matrix, $T(t)$, whose entry $T_{ji}(t)$ gives the probability of transitioning from state i to state j. The time-dependent probability distribution across all of the states in equation (12) can be expressed as:

$$\phi(t) = T(t)\,\phi(0). \tag{15}$$

Then at any time t, the probability of delegating can be expressed as:

$$p(D, t) = \phi_{A,D}(t) + \phi_{DisA,D}(t). \tag{16}$$

The transition matrix for any Markov model must satisfy the Chapman–Kolmogorov equation, $T(t_1 + t_2) = T(t_1)\,T(t_2)$; the solution of this equation results in:

$$T(t) = e^{K t}, \tag{17}$$

so that

$$\phi(t) = e^{K t}\,\phi(0), \tag{18}$$

where K is the intensity matrix, with non-negative off-diagonal entries and entries within each column summing to zero, which is required to generate a transition matrix for the Markov model. Typically, the mental processes concerning agree/disagree and delegate/not-delegate are captured with the intensity matrix.
Following the discussion in [58], defining an indicator matrix is required to operationalize and link equations (17) and (18) to the choice and decision tasks. The indicator matrix for these two tasks is a 4 x 4 matrix:

$$M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \tag{19}$$

The matrix in equation (19) ensures that only the delegate events are included in the matrix multiplication of $M$ and $\phi(t)$; to calculate the probability of delegate from $M\,\phi(t)$, a 1 x 4 row matrix is necessary:

$$L = \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}. \tag{20}$$

By using the L matrix, the final probability for agree and delegate ($p(D \mid A)$) is:

$$p(D \mid A) = L\,M\,e^{K t}\,\phi_A(0). \tag{21}$$

To complete the pair, using the L matrix again, the final probability for disagree and delegate ($p(D \mid DisA)$) is:

$$p(D \mid DisA) = L\,M\,e^{K t}\,\phi_{DisA}(0). \tag{22}$$

For the decision-only task, the probability of delegate is expressed as:

$$p(D) = L\,M\,e^{K t}\left[\,p_A\,\phi_A(0) + (1 - p_A)\,\phi_{DisA}(0)\,\right] \tag{23}$$

$$\phantom{p(D)} = p_A\,p(D \mid A) + (1 - p_A)\,p(D \mid DisA). \tag{24}$$

In this study, following the method of previous research [58,63], the values of the implicit probabilities are determined by $p_A = p(A)$ and $1 - p_A = p(DisA)$, where $p(A)$ and $p(DisA)$ are observed probabilities from the categorization tasks. Although this might involve subjective determination of these values, the Markov model in equation (24) becomes a weighted average of $p(D \mid A)$ and $p(D \mid DisA)$, and is therefore constrained to lie between the agree-and-delegate and disagree-and-delegate condition probabilities (Busemeyer et al. 2009).
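A minimal sketch of the four-dimensional Markov pipeline in equations (13)–(24) follows, assuming an illustrative intensity matrix K (the study's fitted K is not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the four-state Markov model, equations (13)-(24).
# State order: [A&D, A&notD, DisA&D, DisA&notD]; K is illustrative.
# Off-diagonal entries are non-negative and each column sums to zero.
K = np.array([[-1.0,  2.0,  0.0,  0.0],
              [ 1.0, -2.0,  0.0,  0.0],
              [ 0.0,  0.0, -2.0,  1.0],
              [ 0.0,  0.0,  2.0, -1.0]])

t = 1.5                                  # arbitrary deliberation time
T_t = expm(K * t)                        # equation (17): T(t) = exp(Kt)

M = np.diag([1.0, 0.0, 1.0, 0.0])        # equation (19): keep delegate states
L = np.ones(4)                           # equation (20): sum the entries

phi_A    = np.array([0.5, 0.5, 0.0, 0.0])   # choice = agree
phi_DisA = np.array([0.0, 0.0, 0.5, 0.5])   # choice = disagree

p_D_given_A    = L @ M @ T_t @ phi_A        # equation (21)
p_D_given_DisA = L @ M @ T_t @ phi_DisA     # equation (22)

p_A = 0.6                                    # implicit probability of agree
phi_mix = p_A * phi_A + (1 - p_A) * phi_DisA    # equation (14)
p_D_alone = L @ M @ T_t @ phi_mix               # equation (23)

# Equation (24): the Markov prediction is exactly the weighted average.
assert np.isclose(p_D_alone,
                  p_A * p_D_given_A + (1 - p_A) * p_D_given_DisA)
```

The assertion at the end is the Markov model's empirical commitment: no choice of K or p_A can move the decision-only prediction outside the weighted average of the two choice-condition probabilities.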
Identical to the four-dimensional Markov model states shown in equation (12), the four-dimensional quantum model has four decision states:

$$\{\,|A,D\rangle,\; |A,notD\rangle,\; |DisA,D\rangle,\; |DisA,notD\rangle\,\}. \tag{25}$$

The state $|A,D\rangle$ in equation (25) represents the state in which the decision maker agrees with the AI and will delegate the decision to the AI. Due to the nature of the quantum model, the initial state is a superposition of all of the states shown in equation (25). The initial amplitude distribution of this model is represented by a 4 x 1 column vector:

$$\psi(0) = \begin{bmatrix} \psi_{A,D}(0) \\ \psi_{A,notD}(0) \\ \psi_{DisA,D}(0) \\ \psi_{DisA,notD}(0) \end{bmatrix}. \tag{26}$$

The elements of equation (26) represent the probability amplitudes (not transition amplitudes), which are complex numbers, for each of the states in equation (25), and the sum of their squared magnitudes is one:

$$|\psi_{A,D}|^2 + |\psi_{A,notD}|^2 + |\psi_{DisA,D}|^2 + |\psi_{DisA,notD}|^2 = 1. \tag{27}$$

Similar to the Markov model, these probability amplitudes vary with the experimental task. For the task in which the choice is agree/disagree and the decision is delegate/not-delegate, if the choice equals agree, then $\psi_{DisA,D}(0) = \psi_{DisA,notD}(0) = 0$. As a result, the initial amplitude distribution is $\psi_A(0) = \begin{bmatrix} \sqrt{0.5} & \sqrt{0.5} & 0 & 0 \end{bmatrix}^{\mathsf{T}}$ (and $\psi_{DisA}(0) = \begin{bmatrix} 0 & 0 & \sqrt{0.5} & \sqrt{0.5} \end{bmatrix}^{\mathsf{T}}$ for a disagree choice). The foundational difference between the Markov and quantum models is distinguishable in the second task, the decision-only (delegate/not-delegate) condition. In this condition, according to the quantum model, a decision maker never resolves his/her superposition of states concerning agree/disagree; hence, the initial amplitude distribution becomes:

$$\psi(0) = \sqrt{p_A}\,\psi_A(0) + \sqrt{1 - p_A}\,\psi_{DisA}(0). \tag{28}$$
As in the Markov model, after choosing to agree or disagree, the decision maker decides after some period of time t. To represent the cognitive processes of deliberation between choosing to agree/disagree at time t, a 4 x 4 unitary matrix $U(t)$ is used. This $U(t)$ updates the superposition of the initial amplitude distribution:

$$\psi(t) = U(t)\,\psi(0), \tag{29}$$

where $U(t)^{\dagger}\,U(t) = I$ to preserve inner products, and $T_{ij}(t) = |u_{ij}(t)|^2$ is the transition probability matrix. For example, with $U(t)$ being the unitary matrix, the transition probability from state j to state i equals:

$$T_{ij}(t) = |u_{ij}(t)|^2. \tag{30}$$

The transition matrix in (30) must be doubly stochastic. As discussed in [58], the transition matrix for the quantum model satisfies the Chapman–Kolmogorov equation, $U(t_1 + t_2) = U(t_1)\,U(t_2)$; therefore, the unitary matrix $U(t)$ satisfies the following differential equation:

$$\frac{d}{dt}\,U(t) = -i\,H\,U(t), \tag{31}$$

where H is the Hermitian Hamiltonian matrix. The solution of equation (31) is:

$$U(t) = e^{-i H t}. \tag{32}$$

Equation (32) is a matrix exponential function, and it allows the construction of a unitary matrix at any point in time with the same Hamiltonian.
Equation (29) represents the amplitude distribution at any time t and can be expressed as:

$$\psi(t) = e^{-i H t}\,\psi(0). \tag{33}$$

By using equation (33), the probability of delegate can be expressed as:

$$p(D, t) = \left\| M\,\psi(t) \right\|^2. \tag{34}$$

To represent the probability values, as defined in the Markov model, a 4 x 4 indicator matrix is defined for the quantum model as well:

$$M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \tag{35}$$

Multiplication of $M$ with $\psi(t)$ results in a vector that includes the amplitudes for both the agree-then-delegate and disagree-then-delegate cases. As a result, the probability of delegation becomes:

$$p(D, t) = \left\| M\,e^{-i H t}\,\psi(0) \right\|^2. \tag{36}$$

Following this discussion, the probability values of delegate for the agree and disagree conditions become:

$$p(D \mid A) = \left\| M\,e^{-i H t}\,\psi_A(0) \right\|^2, \tag{37}$$

$$p(D \mid DisA) = \left\| M\,e^{-i H t}\,\psi_{DisA}(0) \right\|^2. \tag{38}$$

In the decision-only condition, the probability of delegate becomes:

$$p(D) = \left\| M\,e^{-i H t}\!\left(\sqrt{p_A}\,\psi_A(0) + \sqrt{1 - p_A}\,\psi_{DisA}(0)\right) \right\|^2 = p_A\,p(D \mid A) + (1 - p_A)\,p(D \mid DisA) + 2\sqrt{p_A (1 - p_A)}\,\left|\big\langle M e^{-iHt}\psi_A(0),\, M e^{-iHt}\psi_{DisA}(0)\big\rangle\right| \cos\theta, \tag{39}$$

where $\theta$ is the phase angle of the complex number $\big\langle M e^{-iHt}\psi_A(0),\, M e^{-iHt}\psi_{DisA}(0)\big\rangle$. As can be seen in equation (39), the law of total probability is violated when $\cos\theta \neq 0$.
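The quantum counterpart can be sketched the same way; the Hamiltonian below is an arbitrary Hermitian matrix chosen only to illustrate equations (32)–(39), with cross-coupling between the agree and disagree blocks so that the interference term in equation (39) is nonzero:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the four-state quantum model, equations (25)-(39).
# H is an illustrative Hermitian matrix, not the fitted Hamiltonian.
H = np.array([[ 1.0,  0.5,  0.3,  0.0],
              [ 0.5, -1.0,  0.0,  0.3],
              [ 0.3,  0.0,  1.0,  0.7],
              [ 0.0,  0.3,  0.7, -1.0]])
assert np.allclose(H, H.conj().T)            # Hermitian

t = 1.5
U_t = expm(-1j * H * t)                      # equation (32): U(t) = exp(-iHt)
assert np.allclose(U_t.conj().T @ U_t, np.eye(4))   # unitary

M = np.diag([1.0, 0.0, 1.0, 0.0])            # same indicator as the Markov model

s = np.sqrt(0.5)
psi_A    = np.array([s, s, 0, 0])            # choice = agree
psi_DisA = np.array([0, 0, s, s])            # choice = disagree

p_D_given_A    = np.linalg.norm(M @ U_t @ psi_A) ** 2      # equation (37)
p_D_given_DisA = np.linalg.norm(M @ U_t @ psi_DisA) ** 2   # equation (38)

# Decision-only: an unresolved superposition, equation (28), not a mixture.
p_A = 0.6
psi_0 = np.sqrt(p_A) * psi_A + np.sqrt(1 - p_A) * psi_DisA
p_D_alone = np.linalg.norm(M @ U_t @ psi_0) ** 2           # equation (39)

interference = p_D_alone - (p_A * p_D_given_A + (1 - p_A) * p_D_given_DisA)
print(f"interference term: {interference:+.4f}")           # generally nonzero
```

Contrast this with the Markov sketch above: there the decision-only probability was forced to equal the weighted average, whereas here the printed interference term quantifies exactly how far the quantum prediction departs from it.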
4.2. Comparison of Markov and Quantum Models
Any decision task involving multiple agents (human and machine, or human and human), with conflicting or inconsistent information for identifying a target and delegating a decision to the other agent, involves multiple cognitive processes and their state evolution. Incoming information can often be uncertain or inconsistent, and in some instances a decision must still be made as quickly as possible. However, reprocessing and resampling data is often impractical, and the extra work required may result in missing a critical temporal decision window.
Decision tasks and their conceptual periphery accentuate the importance of considering trust in decision making. A decision theory used in this context can provide the probability of making a choice, typically by assigning a trust rating to decision outcomes and the distribution of time to decide or choose. For instance, random walks are commonly used to model these types of decision tasks; Markov models and quantum random walks are among these models. Random walk models are used quite often in the field of cognitive psychology [57] and are a good fit for modeling multi-agent decision-making situations. These models are additionally beneficial because, when a stimulus is presented, a decision maker samples evidence from the "source" at each point in time. The sampled evidence/information changes the trust regarding the decision outcomes (e.g., delegate or not delegate). Trust may increase or decrease depending on the adjacent confidence level and the consistency of the sampled information, including the trustworthiness of the source. This switching between states continues until a probabilistic threshold (intrinsic to the decision maker) is reached that engenders a delegate or not-delegate decision. In this context, trust continuously influences the time evolution of the system state as it transitions from one state to another, as shown in Figure 7. This influence can be captured by the intensity matrix for the Markov model, or by the Hamiltonian for the quantum model.
In the context of this research, a Markov model and a quantum model were used to describe the state transitions. In the case of a 9-state model, using a Markov model requires that the decision maker be in one of the nine states at any time (even if the modeler does not know that state, as shown in Figure 7). The initial probability distribution over the states for the Markov model is thus uniform, $\phi_i(0) = 1/9$ for each state; consequently, the system starts in one of these nine states. On the other hand, using a quantum model, the 9 states are represented by nine orthonormal basis vectors in a 9-dimensional Hilbert space. Another key difference of the quantum model is that there is no definite state: at any time t the system is in a superposition state, and the initial distribution is also a superposition of the nine states, as seen in Figure 8. Therefore, instead of a probability distribution there is an amplitude distribution with equal amplitude values, $\psi_i(0) = 1/3$ (so that $\sum_i |\psi_i(0)|^2 = 1$).
In addition to the initial state distribution and the evolution of the system state (jumping from one state to another versus evolving as a superposition), Markov models must obey the law of total probability, and quantum models must obey the law of double stochasticity. Due to the nature of the Markov model, the law of total probability and the jumping from one definite state to another generate a definite accumulating trajectory for the evolution of the delegation rate, which is influenced by the inconsistencies of the information and evidence and by trust in the source. On the other hand, the quantum model starts in a state of superposition and evolves as a superposition of states across time for the duration of the tasks.
As can be seen in Figure 9 (9-state case), the Markov model predicts a delegation rate that gradually increases and subsequently reaches an equilibrium state around 3.5 (which could mean that the state is jumping between 3 and 4). As discussed in [10], the probability distribution across states for the Markov model behaves like sand blown by the wind. The sand pile starts from a uniform distribution; then the wind blows the pile up against a wall on the right-hand side of the graph, which is analogous to evidence accumulation. As more sand piles up on the right, the delegation rate becomes trapped around a certain state, which is the equilibrium state.
As can also be seen in Figure 9, the quantum model predicts that the delegation rate initially increases and then begins oscillating around an average value of 1.1; however, there is no definite state for this distribution. As analogized in [10], the quantum model behaves like water blown by the wind. The water is initially distributed equally across states, but when a wind begins blowing, the water is pushed against the wall on the right-hand side of the graph and then recoils back to the left side; hence, the oscillatory behavior emerges. In the context of trust, these two behaviors capture two unique aspects of trust. The Markov model can represent the case in which trust pushes the decision maker to a decision in favor of the AI, or, in the no-trust case, to a decision not in favor of the AI. However, real-time decision making involves hesitation that results in vacillation by the decision maker. As can be seen in Figure 9, this can be captured by the quantum model.
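The qualitative contrast in Figure 9 (monotone approach to equilibrium versus oscillation) can be reproduced with a toy 9-state simulation; the tridiagonal intensity matrix and Hamiltonian below are stand-ins for the paper's fitted models, chosen only to exhibit the two behaviors:

```python
import numpy as np
from scipy.linalg import expm

# Toy 9-state comparison: mean state ("delegation rate") over time for a
# Markov birth-death chain versus a quantum walk. Both generators are
# illustrative stand-ins, not the models fitted in this study.
n = 9
states = np.arange(1, n + 1)

# Markov intensity matrix: drift toward higher states; columns sum to zero.
K = np.zeros((n, n))
for i in range(n - 1):
    K[i + 1, i] += 2.0          # up-rate (evidence pushes toward delegating)
    K[i, i]     -= 2.0
    K[i, i + 1] += 0.5          # down-rate
    K[i + 1, i + 1] -= 0.5

# Quantum Hamiltonian: tridiagonal coupling plus a potential gradient.
H = np.diag(-states.astype(float)) \
    + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

phi0 = np.full(n, 1.0 / n)      # uniform probabilities (Markov)
psi0 = np.full(n, 1.0 / 3.0)    # equal amplitudes: 9 * (1/3)^2 = 1 (quantum)

for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    p_markov  = expm(K * t) @ phi0                     # sand: piles up, settles
    p_quantum = np.abs(expm(-1j * H * t) @ psi0) ** 2  # water: sloshes back
    print(f"t={t:3.1f}  Markov mean {states @ p_markov:4.2f}  "
          f"quantum mean {states @ p_quantum:4.2f}")
```

Running the loop shows the Markov mean state climbing monotonically toward a fixed equilibrium while the quantum mean state rises and then oscillates, matching the sand-versus-water analogy from [10].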