Before commencing the modelling process, several underlying assumptions must be considered. Firstly, honest users generate dedicated, confidential information only once for a given execution and do not reuse it. The Intruder, however, can generate keys, timestamps, and messages for later use, store them, copy elements from other users and present them as his own, impersonate users, and take control of communication. Encryption is assumed to be secure, so the Intruder cannot decrypt messages without the appropriate key. The Intruder may intercept communication and attempt to exploit any information gained after earning the trust of honest users. Additionally, it is essential to consider temporal dependencies, such as lifetimes and delays. All of these assumptions are incorporated into our modelling approach.
3.1. TIS for NSPKT Executions Modelling
We now explain how the previously defined structures are utilised to build the TIS model for NSPKT executions. Translating the protocol specified in ProToc into a Timed Interpreted System extends the method presented in [13,14].
The so-called execution automata represent the transmission of messages within the network. Each execution of the protocol is modelled as a distinct automaton, where each transition corresponds to a single step of the protocol. Additionally, we introduce a separate automaton that models the system's time clock. The collection of all such automata constitutes the environment of our TIS model. In real network scenarios, the execution of a protocol depends on the users' knowledge, as a step can only be executed by a user possessing the requisite knowledge. To represent this, we define and employ knowledge automata that model the knowledge of users [14]. For instance, one such automaton models the knowledge of user A regarding a particular timestamp. These automata function as characteristic functions over the set of knowledge primitives. Each consists of two states: the first indicates that the user lacks a particular piece of knowledge, while the second represents the situation where the user has acquired it. The first transition models the process of acquiring the proper primitive, and the loop in the second state models the necessity of knowing a primitive for executing a protocol step; these loops act as guards on step execution from the user's knowledge point of view. In this way, we can model the protocol's executions as a TIS consisting of three agents: a collection of automata that model the environment (the execution automata [14] and the clock automaton) and the users that execute the protocol (modelled by knowledge automata). For one honest execution of NSPKT, we obtain the structure shown in Figure 1.
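As a minimal sketch of the idea (all names here are ours, not the paper's notation), a knowledge automaton can be viewed as a two-state guard over a single knowledge primitive:

```python
class KnowledgeAutomaton:
    """Two-state characteristic function over one knowledge primitive:
    state 0 = the user lacks the primitive, state 1 = the user has it."""

    def __init__(self, primitive):
        self.primitive = primitive
        self.state = 0  # honest users start without the secret

    def acquire(self):
        # The single 0 -> 1 transition: the primitive is obtained.
        self.state = 1

    def allows(self):
        # The loop in state 1: a protocol step needing this primitive
        # is enabled only once the user already knows it.
        return self.state == 1
```

Checking `allows()` before a step corresponds to the loop guard described above: the step is blocked until `acquire()` has fired.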
This system involves three agents: one representing each honest user and one representing the environment. Within the model, we distinguish the automata that map the execution of the protocol (the first one in Figure 1) and the clock automaton, which can reset each clock within the system. The product of these automata constitutes the environment. We also distinguish a network of synchronised automata that model the knowledge of the individual agents, with blue representing agent A and green representing agent B. For the components of the environment depicted, we consider a set of locations, the actions associated with the protocol steps, and a set of clocks. One clock serves as a global clock, reflecting the passage of time between individual steps. Thus, to transition to the next state, a certain amount of time must elapse; its value depends on the step number i and corresponds to the time required for ticket generation, encryption, message creation, or decryption. In some steps of the protocol, encryption or decryption time is not considered; this occurs when users forward encrypted messages they received earlier or lack the knowledge needed to decrypt a message.
Therefore, each transition depends on a time condition (a guard), and at every step the global clock is reset. The agents use the other two clocks to check the validity of the generated data; these clocks are reset on the transitions during which the appropriate ticket is generated. For example, the environment's clock corresponding to a given ticket is reset when the transition labelled with the ticket-generating action is executed, because the ticket is generated in that step.
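The guard-and-reset mechanics can be sketched as follows (a simplified illustration with hypothetical clock names; the paper's actual clock constraints are richer):

```python
from dataclasses import dataclass


@dataclass
class ClockValuation:
    clocks: dict  # clock name -> current value

    def advance(self, delta):
        # Time passes uniformly for every clock in the system.
        for c in self.clocks:
            self.clocks[c] += delta

    def reset(self, names):
        # Resets happen on transitions, e.g. when a ticket is generated
        # or when a protocol step completes (global clock).
        for c in names:
            self.clocks[c] = 0.0


def step_enabled(val, delay):
    # Guard: the global clock "x0" must have accumulated at least the
    # processing time of the step (generation, encryption, decryption).
    return val.clocks["x0"] >= delay
```

A transition thus fires only after the required delay has elapsed on the global clock, and resetting `x0` restarts the measurement for the next step while ticket clocks keep running.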
We assume local evolution functions for the environment that fire the protocol's actions in order, moving the execution automaton from each location to the next while the clock automaton resets the appropriate clocks.
As previously noted, agents are modelled using knowledge automata. Each such automaton represents the process by which a given agent X gains knowledge about a specific piece of data Y; for instance, one automaton models the behaviour of agent A concerning a given ticket. These automata have two possible states: one representing the absence of knowledge and one representing its possession. Additionally, there are temporal conditions to consider: on an action that uses a ticket, a check is performed to determine whether the validity period of the information has expired, as specified by the ticket's lifetime. It is important to note that we distinguish different lifetimes for different tickets, as the later a ticket is generated, the shorter its remaining duration during the protocol's execution. The local evolution functions of these automata move them from the first state to the second on the action in which the primitive is acquired, and loop in the second state on every action that requires this knowledge.
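The lifetime check attached to a knowledge automaton can be sketched like this (ticket names and lifetime values are illustrative, not taken from the paper):

```python
# Hypothetical lifetimes: a ticket generated later in the run is assigned
# a shorter remaining validity, mirroring the assumption in the text.
LIFETIMES = {"T_A": 10.0, "T_B": 6.0}


def still_valid(ticket, ticket_clock):
    """Temporal guard: a possessed ticket is usable only while the clock
    tracking its age has not exceeded the ticket's lifetime."""
    return ticket_clock <= LIFETIMES[ticket]
```

An action that relies on a ticket is guarded by both the knowledge condition (the agent possesses the ticket) and this temporal condition (the ticket has not expired).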
Analysing a single execution typically does not suffice to identify an attack. It is necessary to consider multiple executions, including those involving the Intruder, who interleaves these executions precisely as they would occur in reality. There are three steps in each execution, and all of them can be interleaved. The rule is that the order of the steps within a single execution must be preserved, and a step cannot be executed if its sender does not possess the knowledge necessary to perform it. The second condition significantly reduces the search space. For example, a step in which the Intruder must present a ticket cannot be executed immediately after the step in which B sends that ticket encrypted, because the Intruder cannot decode the information received from B and therefore does not know the ticket. In our model, the Intruder can continue operations (i.e., the appropriate transitions in the automata are enabled) either by resending a complete intercepted message or by gathering the necessary individual components during the interleaved steps.
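The interleaving rule and the knowledge condition can be sketched together (a toy scenario with hypothetical step labels; real executions have three steps each):

```python
def interleavings(run1, run2):
    """All merges of two executions that preserve each run's internal order."""
    if not run1:
        yield list(run2)
        return
    if not run2:
        yield list(run1)
        return
    for rest in interleavings(run1[1:], run2):
        yield [run1[0]] + rest
    for rest in interleavings(run1, run2[1:]):
        yield [run2[0]] + rest


def feasible(schedule, initial_knowledge):
    """A step (sender, receiver, needs, learned) may fire only if the
    sender already knows everything the step requires; the receiver
    then learns the transmitted components."""
    know = {agent: set(k) for agent, k in initial_knowledge.items()}
    for sender, receiver, needs, learned in schedule:
        if not needs <= know[sender]:
            return False  # unrealistic path: sender lacks the knowledge
        know[receiver] |= learned
    return True


# Toy scenario: the Intruder I can forward the ticket only after A
# has revealed it to him.
step_reveal = ("A", "I", {"ticket"}, {"ticket"})
step_forward = ("I", "B", {"ticket"}, set())
initial = {"A": {"ticket"}, "B": set(), "I": set()}
```

Of the two possible interleavings of these one-step runs, only the one in which the reveal precedes the forward survives the knowledge condition, illustrating how the second rule prunes the search space.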
Three automata are required to model the attack on the NSPKT protocol in the form of a Timed Interpreted System (Figure 2), and their combined product represents the environment. Two of these automata model the protocol's executions, while the third is responsible for resetting the clocks.
Figure 2.
Timed Interpreted System for Lowe attack on NSPKT
For these components of the environment, we define the locations, the actions, and the set of clocks analogously to the single-execution case, where a superscript indicates the protocol execution number. Notably, there are no additional clocks, as the Intruder does not generate any new information; he merely duplicates the information that he has intercepted. The additional actions represent the Intruder's enhanced capabilities: he can retransmit an entire message if it has been intercepted previously, or reconstruct it from individual components.
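This enhanced capability can be stated as a simple predicate (a sketch under our own naming, not the paper's formalism):

```python
def can_send(intruder_knowledge, whole_message, components):
    """The Intruder can emit a message either because he intercepted it
    verbatim earlier, or because he already possesses every individual
    component needed to reconstruct it."""
    return (whole_message in intruder_knowledge
            or all(c in intruder_knowledge for c in components))
```

This predicate is exactly the guard opened by the Intruder's additional actions: replay of a whole intercepted message, or composition from gathered parts.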
An additional agent is introduced to represent the Intruder in the version that includes the attack. Automata are also used to model the Intruder's knowledge (depicted in red in Figure 2). For the Intruder, we have two standard knowledge automata, as well as an automaton that represents the possibility of the Intruder obtaining the entire message.
The Intruder's automata that lack a transition between their two states (Figure 2) model his inability to proceed. We refer to these as blocking automata. Their inclusion limits the constructed interleaving model to realistic behaviours: any path on which the Intruder would send a message he could not formulate is not represented.
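A blocking automaton is simply a knowledge automaton whose acquiring transition is absent, so the guard it contributes never opens (again a sketch with hypothetical names):

```python
class BlockingAutomaton:
    """Two-state automaton with NO transition from 'unknown' to 'known':
    the guard it contributes can never be satisfied, so any interleaving
    in which the Intruder would send a message he cannot formulate is
    pruned from the model."""

    def allows(self):
        return False


class OpenGuard:
    """A guard that is always satisfied, shown for contrast."""

    def allows(self):
        return True


def enabled(steps, guards):
    # A step fires only if every automaton guarding it allows it.
    return [s for s in steps if all(g.allows() for g in guards[s])]
```

A step guarded by a blocking automaton is therefore never enabled, which is precisely how unrealistic Intruder behaviours are excluded from the interleaving model.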
We define the local evolution functions in a similar manner: for the environment, the functions fire the actions of the initial execution and of the second execution in order, and the resetting automaton resets the appropriate clocks. For the agents, we define local evolution functions for the users' knowledge automata and for the ticket automata as before. The Intruder-specific automaton for the complete message moves to its second state once the whole message has been intercepted, while the blocking automata admit no such transition.
The timed model induced by the above description of both NSPKT scenarios, as outlined in subsection 4.1, should be straightforward to infer.