3.2. Main Stages of Neural Network Encryption
Encryption is performed over the plaintext using a key, which consists of a given number of neurons in the neural network, a matrix of weights, and masking operations. Let us consider the main stages of message encryption.
Choice of neural network architecture. The architecture of the neural network is determined by the number of neuroelements N, the number of inputs, and the bit size of the inputs k. The number of neuroelements is determined by the following formula:
N = n/k,   (1)
where n is the bit size of the message and k is the bit size of the inputs.
The incoming messages to be encrypted can have different bit sizes n and different numbers of inputs, the latter being equal to the number of neuroelements N. The architecture of the neural network thus depends on the bit size of the message n and the bit size of the inputs k; for a given message size, several (k, N) variants of the architecture satisfying formula (1) are possible.
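The admissible architecture variants for a given message size follow directly from formula (1). A small sketch (illustrative only; the function name is an assumption, not from the paper):

```python
# Enumerate architecture variants by formula (1), N = n / k, where n is the
# message bit size and k the bit size of one input.

def architecture_variants(n):
    """Return all (k, N) pairs with N = n / k an integer and N > 1."""
    return [(k, n // k) for k in range(2, n) if n % k == 0 and n // k > 1]

variants_32 = architecture_variants(32)
# e.g. [(2, 16), (4, 8), (8, 4), (16, 2)]
```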
Calculation of the matrix of weights. For data encryption and decryption we use an auto-associative neural network, which is trained non-iteratively using principal component analysis (PCA) and performs a linear transformation of the form:
Y = W·X.   (2)
According to formula (2), the matrix W converts the input vector X into the output vector Y. The transformation is performed by a system of linearly independent vectors, namely an orthonormal system of eigenvectors corresponding to the eigenvalues of the covariance matrix of the input data.
Let the input data be represented as a set of N vectors X_j, each of dimension N, j = 1, …, N:
X_j = (x_{1j}, x_{2j}, …, x_{Nj})^T.   (3)
The autocovariance matrix R for the N vectors can be written as:
R = ||r_{ls}||, l, s = 1, …, N,   (4)
where each of the elements is defined by the expression:
r_{ls} = M[(x_l − m_l)(x_s − m_s)],   (5)
where l = 1, …, N and s = 1, …, N, and m_l, m_s are the mathematical expectations of x_l and x_s.
The eigenvalues λ_i of the symmetric non-negative matrix R are real and positive numbers. We arrange them in descending order, λ_1 ≥ λ_2 ≥ … ≥ λ_N. Similarly, we arrange the eigenvectors W_i corresponding to the eigenvalues λ_i. Then the matrix W defines the linear transformation (2), where Y is a vector of the principal components of the PCA which corresponds to the input data vector X. The number of vectors of the principal components N corresponds to the number of input data vectors [29]. The matrix of weights used to encrypt the data is as follows:
W = ||w_{ij}||, i, j = 1, …, N.   (6)
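The construction of the weight matrix from the eigenvectors of the autocovariance matrix (formulas (2)-(6)) can be sketched with numpy. This is a minimal illustration; the function name and sample data are assumptions, not the paper's:

```python
# Form the weight matrix W from the eigenvectors of the autocovariance
# matrix R of the input data, ordered by descending eigenvalue.
import numpy as np

def weight_matrix(X):
    """X: array of shape (num_vectors, dim). Returns W whose rows are the
    eigenvectors of R, arranged by descending eigenvalue."""
    Xc = X - X.mean(axis=0)               # subtract mathematical expectations
    R = Xc.T @ Xc / X.shape[0]            # autocovariance matrix (formulas (4)-(5))
    eigvals, eigvecs = np.linalg.eigh(R)  # symmetric matrix -> real spectrum
    order = np.argsort(eigvals)[::-1]     # descending eigenvalues
    return eigvecs[:, order].T            # eigenvectors as rows of W

X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]])
W = weight_matrix(X)
Y = W @ X[0]   # principal components of the first input vector (formula (2))
```

Because `eigh` returns an orthonormal set of eigenvectors, the resulting W satisfies W·W^T = I, which is what later allows decryption by transposition.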
The basic operation of the neural network used to encrypt data is the calculation of the scalar product. This operation can be implemented efficiently by the tabular-algorithmic method, because the matrix of weights w_{ij}, where i = 1, …, N, j = 1, …, N, is pre-calculated.
Calculation of the table of macro-partial products for data encryption. The specific feature of the scalar product operation used in data encryption is that the weights are pre-calculated constants set in floating-point format, while the input data X_j are in fixed-point format with the point fixed before the high digit of the number. The scalar product is calculated by means of the tabular-algorithmic method according to the formula:
Z = Σ_{j=1}^N x_j w_j = Σ_{i=1}^n 2^{−i} P_i,   (7)
where N is the number of products, x_j is the input data, w_j is the j-th weight coefficient, n is the bit size of the input data, x_{ij} w_j is a partial product, and P_i is the macro-partial product formed by adding N partial products x_{ij} w_j, as follows: P_i = Σ_{j=1}^N x_{ij} w_j, where x_{ij} is the i-th bit of x_j.
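Formula (7) replaces per-bit multiplications with table lookups: since the weights are constants, every possible macro-partial product P_i can be precomputed, and the scalar product is accumulated from the bit slices of the inputs. A pure-Python sketch (function names and data are illustrative assumptions):

```python
# Tabular-algorithmic scalar product: precompute all 2**N macro-partial
# products, then accumulate n bit slices of the inputs per formula (7).

def build_table(weights):
    """Precompute the macro-partial product P for every N-bit address."""
    N = len(weights)
    return [sum(w for j, w in enumerate(weights) if (addr >> j) & 1)
            for addr in range(2 ** N)]

def scalar_product(xs, table, n):
    """xs: inputs as fractions in [0, 1) with n-bit mantissas (fixed point,
    point before the high digit)."""
    bits = [round(x * 2 ** n) for x in xs]      # n-bit fixed-point codes
    z = 0.0
    for i in range(1, n + 1):                   # bit slices, MSB first
        addr = sum(((b >> (n - i)) & 1) << j for j, b in enumerate(bits))
        z += 2.0 ** -i * table[addr]            # add macro-partial product
    return z

ws = [0.25, -0.5, 0.75]
table = build_table(ws)                         # 2**3 = 8 entries
xs = [0.5, 0.25, 0.125]
z = scalar_product(xs, table, 8)                # equals sum of x_j * w_j here
```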
Formation of the tables of macro-partial products for floating-point weights w_j = m_j·2^{e_j} (where m_j is the mantissa of the weight coefficient and e_j its order) foresees the following operations to be performed:
defining the largest common order of the weights, e_max = max_j(e_j);
calculating the difference of orders for each weight coefficient, Δe_j = e_max − e_j;
shifting the mantissa m_j to the right by the difference of orders Δe_j;
calculating the macro-partial product for the case when all address inputs equal one;
determining the number of overflow bits q in the macro-partial product for this case;
obtaining scaled mantissas by shifting them to the right by the number of overflow bits q;
adding the number of overflow bits q to the largest common order of the weights, as per the formula e = e_max + q.
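The order-alignment steps above can be sketched as follows (a minimal illustration of the first three steps; the overflow scaling by q bits is handled analogously; names and data are assumptions):

```python
# Reduce floating-point weights (mantissa, order) to the greatest common
# order before forming the macro-partial product tables.

def align_to_common_order(weights):
    """weights: list of (mantissa, order) pairs with |mantissa| < 1.
    Returns (aligned mantissas, greatest common order)."""
    e_max = max(e for _, e in weights)                  # largest common order
    aligned = [m * 2.0 ** (e - e_max) for m, e in weights]  # right shift by e_max - e
    return aligned, e_max

ms, e = align_to_common_order([(0.75, 3), (0.5, 1), (-0.625, 2)])
# each weight value is preserved: ms[j] * 2**e equals m_j * 2**e_j
```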
The table of macro-partial products is calculated by the formula:
P_i = Σ_{j=1}^N x_{ij} m'_j,   (8)
where x_{ij} are the address inputs of the table and m'_j is the mantissa of the j-th weight coefficient brought to the greatest common order.
The possible number of combinations of macro-partial products, and accordingly the table volume, is determined by the formula:
V = 2^N.   (9)
The memory volume can be reduced by dividing all products into two parts, P'_i and P''_i. Separate tables of macro-partial products are formed for each of these parts, which reduces the total table volume from 2^N to 2·2^{N/2} entries. The tables for the two parts can be stored in separate memory blocks or in a single memory block. When two memory blocks are used, both parts of the macro-partial product are read in one clock cycle; with one memory block, two clock cycles are needed. The macro-partial product P_i is then the sum of the two macro-partial products P'_i and P''_i.
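The split can be sketched in pure Python (names and data are illustrative assumptions):

```python
# Memory-saving split: two half-tables of 2**(N/2) entries each instead of
# one table of 2**N entries; their entries are summed at lookup time.

def split_tables(weights):
    half = len(weights) // 2
    lo = [sum(w for j, w in enumerate(weights[:half]) if (a >> j) & 1)
          for a in range(2 ** half)]
    hi = [sum(w for j, w in enumerate(weights[half:]) if (a >> j) & 1)
          for a in range(2 ** (len(weights) - half))]
    return lo, hi

def lookup(lo, hi, addr, half):
    """Macro-partial product = sum of the two partial lookups."""
    return lo[addr & ((1 << half) - 1)] + hi[addr >> half]

ws = [0.25, -0.5, 0.75, 0.125]
lo, hi = split_tables(ws)       # 4 + 4 entries instead of 16
p = lookup(lo, hi, 0b1011, 2)   # = w0 + w1 + w3
```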
Neural network tabular-algorithmic data encryption. The matrix of weights W, formed by the eigenvectors of the autocovariance matrix R of the input data, is determined during the training of the neural network. The type of auto-associative neural network used for data encryption is shown in Figure 2, where each input is combined with its mask by the XOR (exclusive OR) masking operation before encryption.
The main operation of neural network data encryption is the multiplication of the matrix of weights W by the input data vector X according to the following formula:
Y = W·X.   (10)
The multiplication of the matrix of weights W by the vector of input data X is reduced to performing N operations of calculating the scalar product:
Y_j = Σ_{l=1}^N w_{jl} x_l,   (11)
where N is the number of products and j = 1, …, N.
The scalar products are calculated using the tabular-algorithmic method, where the weights w_{jl} are set in floating-point format and the input data x_l in fixed-point format with the point fixed before the highest digit. The tabular-algorithmic calculation of the mantissa of the scalar product is reduced to reading the macro-partial product P_{ij} from the j-th table (memory) at the address corresponding to the i-th bit slice of the N input data, and adding it to the previously accumulated sum according to:
S_{ij} = S_{(i−1)j} + 2^{−i} P_{ij},   (12)
where S_{0j} = 0, i = 1, …, n, j = 1, …, N, and n is the bit size of the input data. The number of tables of macro-partial products corresponds to the number of rows of the matrix (10). The result of calculating the scalar product Y_j consists of the mantissa S_{nj} and the order determined when forming the table.
The time required to compute the mantissa of the scalar product (SP) is determined by the formula:
t_{SP} = n(t_{tab} + t_{reg} + t_{add}),   (13)
where t_{SP} is the time of calculation of the scalar product, t_{tab} is the time of reading from the table (memory), t_{reg} is the time of reading (writing) a register, and t_{add} is the time of addition.
Data encryption can be performed either sequentially or in parallel, depending on the required speed. In the case of sequential encryption, the encryption time is given by the formula:
t_E = N·t_{SP},   (14)
where t_E is the time required for encryption. The encryption time can be reduced by a factor of N by performing the N operations of calculating the scalar product in parallel.
At the output of the neural network, we obtain encrypted data in the form Y_j = (m_j, e_j), j = 1, …, N, where m_j is the mantissa at the j-th output and e_j is the value of the order at the j-th output. To transmit the encrypted data for decryption, it is advisable to bring all encrypted data to the greatest common order. The reduction to the greatest common order is performed in three stages:
defining the greatest order, e_max = max_j(e_j);
calculating, for each encrypted datum, the difference of orders Δe_j = e_max − e_j;
shifting the mantissa m_j to the right by the difference of orders Δe_j, which yields the mantissas of the encrypted data reduced to the greatest common order.
The mantissas of the encrypted data reduced to the largest common order, together with the largest common order itself, are sent for decryption.
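The three stages above can be sketched with integer mantissas, where the reduction is an explicit right shift (function name and data are illustrative assumptions):

```python
# Bring the encrypted outputs, given as (integer mantissa, order) pairs, to
# the greatest common order before transmission (block floating point).

def to_common_order(outputs):
    """Stage 1: find the greatest order; stages 2-3: shift each mantissa
    right by the difference of orders (low bits shifted out are discarded)."""
    e_max = max(e for _, e in outputs)
    mantissas = [m >> (e_max - e) for m, e in outputs]
    return mantissas, e_max

mantissas, e_max = to_common_order([(96, 2), (80, 4)])  # -> [24, 80], 4
```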
3.3. The Main Stages of Neural Network Cryptographic Data Decryption
Encrypted data in the form of mantissas reduced to the largest common order (block floating point) arrive for decryption. The main stages of decrypting these data are considered below.
Configuration of the neural network architecture for the decryption of encrypted data. In terms of the number of neural elements, the architecture of the neural network for decryption is the same as the architecture of the network used for encryption. In this neural network, the number of inputs and the number of neurons correspond to the number N of encrypted mantissas. The neural network architecture used to decrypt encrypted data is presented in Figure 3.
In the neural network for decrypting encrypted data, the bit size of the inputs corresponds to the bit size of the encrypted mantissas, which determines the decryption time. To reduce the decryption time, the lower bits of the mantissas may be discarded; this does not affect the recovery of the original message.
Formation of the matrix of weights. The matrix of weights for decrypting encrypted data is formed from the matrix of weights for encrypting input data by transposing it:
W' = W^T.   (15)
The basic operation for both the encryption of input data and the decryption of encrypted data is the calculation of the scalar product, which is implemented using the tabular-algorithmic method.
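Because the rows of W form an orthonormal system of eigenvectors (Section 3.2), W·W^T = I, which is why transposing the encryption matrix suffices to decrypt. A minimal numpy sketch of the round trip, with illustrative data:

```python
# Encrypt-decrypt round trip: the transposed weight matrix recovers the
# input exactly because the eigenvector rows of W are orthonormal.
import numpy as np

R = np.array([[2.0, 1.0], [1.0, 2.0]])        # a symmetric covariance-like matrix
eigvals, eigvecs = np.linalg.eigh(R)
W = eigvecs[:, np.argsort(eigvals)[::-1]].T   # encryption weights (rows = eigenvectors)
x = np.array([0.5, -0.25])
y = W @ x            # encrypted vector of principal components
x_rec = W.T @ y      # decryption with the transposed matrix recovers x
```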
Calculation of the table of macro-partial products for decryption of encrypted data. A specific feature of the scalar product operation used to decrypt encrypted data is that the weights are pre-calculated constants set in floating-point format, while the encrypted data arrive in block-floating-point format. The scalar product is calculated by the tabular-algorithmic method using formula (7). The preparation and calculation of the possible variants of macro-partial products is performed as in the previous case, using formula (8). The number of possible variants of macro-partial products, and accordingly the table volume, depends on the amount of encrypted data. For each table of macro-partial products, its largest common order is computed.
Neural network tabular-algorithmic decryption of encrypted data. The main operation of neural network decryption of encrypted data is the multiplication of the matrix of weights W^T by the vector of encrypted data Y according to the following formula:
X = W^T·Y.   (16)
The multiplication of the matrix of weights W^T by the vector of encrypted data Y is reduced to performing N operations of scalar product calculation:
X_j = Σ_{l=1}^N w_{lj} y_l,   (17)
where N is the number of products and j = 1, …, N.
The tabular-algorithmic calculation of the mantissa of the scalar product is reduced to reading the macro-partial product P_{ij} from the table (memory) at the address corresponding to the i-th bit slice of the encrypted data, and adding it to the previously accumulated sum according to the formula:
S_{ij} = S_{(i−1)j} + 2^{−i} P_{ij},   (18)
where S_{0j} = 0, i = 1, …, n_Y, and n_Y is the bit size of the encrypted data. The time necessary to calculate the mantissa of the scalar product is defined by the formula:
t_{SP} = n_Y(t_{tab} + t_{reg} + t_{add}),   (19)
where t_{SP} is the time for the scalar product calculation, t_{tab} is the time for reading from a table (memory), t_{reg} is the time of reading (writing) a register, and t_{add} is the time of addition. The result of the calculation of the scalar product X_j consists of a mantissa and an order, which is equal to the sum of the greatest common order of the encrypted data and the greatest common order of the weights.
At the output of the neural network (see Figure 3), we obtain N decrypted data in the form X_j = (m_j, e_j), where m_j is the mantissa at the j-th output and e_j is the value of the order at the j-th output. To obtain the input data, it is necessary to shift the j-th mantissa m_j by the value of the order e_j.
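The final shift of each mantissa by its order is simply a multiplication by a power of two; a one-line illustrative sketch (name assumed):

```python
# Restore a decrypted value from its (mantissa, order) pair: shifting the
# mantissa by the order is multiplication by 2**order.

def reconstruct(mantissa, order):
    """Return the value represented by the (mantissa, order) pair."""
    return mantissa * 2.0 ** order

value = reconstruct(0.75, 2)  # 0.75 shifted left by 2 orders -> 3.0
```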