Preprint
Article

Backpropagation and F-adjoint


This version is not peer-reviewed

Submitted: 26 April 2023
Posted: 27 April 2023

Abstract
This paper presents a concise mathematical framework for investigating both the feed-forward and the backward processes involved in training an artificial neural network (ANN) to learn model weights. Inspired by the idea of the two-step rule for backpropagation, we define a notion of F-adjoint aimed at a better description of the backpropagation algorithm. In particular, by introducing the notions of F-propagation and F-adjoint through a deep neural network architecture, we prove that the backpropagation associated with a cost/loss function is completely characterized by the F-adjoint of the corresponding F-propagation relative to the partial derivative, with respect to the inputs, of the cost function.
Keywords: 
Subject: Computer Science and Mathematics  -   Applied Mathematics

1. Introduction

An Artificial Neural Network (ANN) is a mathematical model intended to be a universal function approximator which learns from data (cf. McCulloch and Pitts, [1]). In general, an ANN consists of a number of units called artificial neurons, which are compositions of affine mappings and non-linear (activation) mappings (applied element-wise), connected by weighted connections and organized into layers: an input layer, one or more hidden layers, and an output layer. The neurons in an ANN can be connected in many different ways. In the simplest cases, the outputs from one layer are the inputs for the neurons in the next layer. An ANN is said to be a feedforward ANN if outputs from one layer of neurons are the only inputs to the neurons in the following layer. In a fully connected ANN, all neurons in one layer are connected to all neurons in the previous layer (cf. page 24 of [2]). An example of a fully connected feedforward network is presented in Figure 1.
In the present work we focus essentially on feed-forward artificial neural networks, with L hidden layers and a transfer (or activation) function σ , and the corresponding supervised learning problem. Let us define a simple artificial neural network as follows:
$$X^0 = x, \qquad Y^h = W^h X^{h-1}, \quad X^h = \sigma(Y^h), \qquad h = 1, \dots, L, \tag{1}$$
where $x \in \mathbb{R}^n$ is the input to the network, $h$ indexes the hidden layer and $W^h$ is the weight matrix of the $h$-th hidden layer. In what follows we shall refer to the two equations of (1) as the two-step recursive forward formula. The two-step recursive forward formula is very useful for obtaining the outputs of feed-forward deep neural networks.
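The two-step recursive forward formula can be sketched in a few lines of NumPy. The $A[2,3,1]$ architecture, the tanh activation and all numerical values below are illustrative assumptions, not part of the formal development.

```python
import numpy as np

def forward(x, weights, sigma=np.tanh):
    """Two-step recursive forward formula: Y^h = W^h X^{h-1}, X^h = sigma(Y^h)."""
    Xs, Ys = [x], []
    for W in weights:
        Ys.append(W @ Xs[-1])      # step 1: pre-activation Y^h
        Xs.append(sigma(Ys[-1]))   # step 2: activation X^h
    return Xs, Ys

# hypothetical A[2, 3, 1] network with random weights
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
Xs, Ys = forward(np.array([0.5, -1.0]), weights)
```

Keeping every intermediate $X^h$ and $Y^h$ is deliberate: the backward pass described below consumes exactly these quantities.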
A major empirical issue in neural networks is to estimate the unknown parameters $W^h$ from a sample of target and input data values. This estimation procedure is characterized by the recursive updating, or learning, of the estimated parameters; the corresponding algorithm is called the backpropagation algorithm. As reviewed by Schmidhuber [3], back-propagation was introduced and developed during the 1970s and 1980s and refined by Rumelhart et al. [4]. It is well known that the most important algorithm for training artificial neural networks is the back-propagation algorithm. From a mathematical point of view, back-propagation is a method to minimize the error of a loss/cost function through gradient descent. More precisely, an input datum is fed to the network and forwarded through the layers; the produced output is then fed to the cost function to compute the gradient of the associated error. The computed gradient is then back-propagated through the layers to update the weights by the well-known gradient descent algorithm.
As explained in [4], the goal of back-propagation is to compute the partial derivatives of the cost function $J$. In this procedure, each layer $h$ is assigned the so-called delta error term $\delta^h$. For each hidden layer, the delta error term $\delta^h$ is derived from the delta error terms $\delta^k$, $k = h+1, \dots, L$; hence the concept of error back-propagation. The output layer $L$ is the only layer whose error term $\delta^L$ has no error dependencies; $\delta^L$ is then given by the following equation
$$\delta^L = \frac{\partial J}{\partial X^L} \odot \sigma'(Y^L), \tag{2}$$
where $\odot$ denotes element-wise matrix multiplication (the so-called Hadamard product, which is exactly the element-wise multiplication "*" in Python). The error term $\delta^h$ is derived by multiplying $\delta^{h+1}$ with the transposed weight matrix $(W^{h+1})^\top$ and subsequently multiplying element-wise by the derivative $\sigma'$ of the activation function evaluated at the preactivation $Y^h$. Thus, one has the following equation
$$\delta^h = \big((W^{h+1})^\top \delta^{h+1}\big) \odot \sigma'(Y^h), \qquad h = L-1, \dots, 1. \tag{3}$$
Once the layer error terms have been assigned, the partial derivative $\frac{\partial J}{\partial W^h}$ can be computed by
$$\frac{\partial J}{\partial W^h} = \delta^h \, (X^{h-1})^\top. \tag{4}$$
In particular, we deduce that the back-propagation algorithm is solely responsible for computing the weight partial derivatives of $J$ by means of the recursive equations (4) and (3), with the initialization data given by (2). This procedure is often called "the generalized delta rule". The key question we address in the present work is the following: how could one reformulate this "generalized delta rule" as a two-step recursive backward formula analogous to (1)?
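As a sanity check, the generalized delta rule described above can be implemented directly and compared against a finite-difference approximation. The network shape, the tanh activation and the loss $J = \sum_i X^L_i$ used below are illustrative choices, not the paper's setting.

```python
import numpy as np

def forward(x, weights, sigma=np.tanh):
    Xs, Ys = [x], []
    for W in weights:
        Ys.append(W @ Xs[-1])
        Xs.append(sigma(Ys[-1]))
    return Xs, Ys

def delta_rule_grads(Xs, Ys, weights, dJ_dXL, dsigma=lambda t: 1 - np.tanh(t) ** 2):
    """Generalized delta rule: delta^L = dJ/dX^L * sigma'(Y^L) is propagated
    backwards via delta^h = ((W^{h+1})^T delta^{h+1}) * sigma'(Y^h), and the
    weight gradients are dJ/dW^h = delta^h (X^{h-1})^T."""
    L = len(weights)
    deltas = [None] * L
    deltas[-1] = dJ_dXL * dsigma(Ys[-1])
    for h in range(L - 2, -1, -1):
        deltas[h] = (weights[h + 1].T @ deltas[h + 1]) * dsigma(Ys[h])
    return [np.outer(deltas[h], Xs[h]) for h in range(L)]

# finite-difference check for the illustrative loss J = sum(X^L)
rng = np.random.default_rng(1)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
x = np.array([0.3, -0.7])
Xs, Ys = forward(x, weights)
grads = delta_rule_grads(Xs, Ys, weights, dJ_dXL=np.ones(1))

eps = 1e-6
Wp = [W.copy() for W in weights]
Wp[0][0, 0] += eps
num = (forward(x, Wp)[0][-1].sum() - Xs[-1].sum()) / eps  # approximates dJ/dW^1_{00}
```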
In the present work, we shall provide a concise mathematical answer to the above question. In particular, we shall introduce the so-called two-step rule for back-propagation, recently proposed by the author in [5], similar to the one for forward propagation. Moreover, we explore some mathematical concepts behind the two-step rule for backpropagation.
The rest of the paper is organized as follows. Section 2 outlines notation, the setting and the ANN framework. In Section 3 we recall and develop the two-step rule for back-propagation. In Section 4 we introduce the concepts of F-propagation and the associated F-adjoint and rewrite the two-step rule with these notions. In Section 5 we apply this method to some simple cases. In Section 6, the conclusion, related works and future research directions are given.

2. Notations, Setting and the ANN

Let us now fix some notation. Firstly, any vector $X \in \mathbb{R}^n$ is considered as a column $X = (X_1, \dots, X_n)^\top$, and for any family of transfer functions $\sigma_i : \mathbb{R} \to \mathbb{R}$, $i = 1, \dots, n$, we introduce the coordinate-wise map $\sigma : \mathbb{R}^n \to \mathbb{R}^n$ by the following formula
$$\sigma(X) := \big(\sigma_1(X_1), \dots, \sigma_n(X_n)\big)^\top.$$
This map can be considered as an "operator" Hadamard multiplication of the columns $\sigma = (\sigma_1, \dots, \sigma_n)^\top$ and $X = (X_1, \dots, X_n)^\top$, i.e., $\sigma(X) = \sigma \odot X$. Secondly, we shall need some notation for derivatives of multi-variable functions. For any $n, m \in \mathbb{N}^*$ and any function $F$ differentiable with respect to the variable $x$,
$$F : \mathbb{R} \ni x \mapsto F(x) = \big(F_{ij}(x)\big)_{\substack{1 \le i \le m \\ 1 \le j \le n}} \in \mathbb{R}^{m \times n},$$
we use the following notation for the partial derivative of $F$ with respect to $x$:
$$\frac{\partial F}{\partial x} = \left(\frac{\partial F_{ij}(x)}{\partial x}\right)_{\substack{1 \le i \le m \\ 1 \le j \le n}}.$$
In addition, for any $n, m \in \mathbb{N}^*$ and any function $F$ differentiable with respect to the matrix variable
$$F : \mathbb{R}^{m \times n} \ni X = \big(X_{ij}\big)_{\substack{1 \le i \le m \\ 1 \le j \le n}} \mapsto F(X) \in \mathbb{R},$$
we shall use the so-called denominator-layout notation (see page 15 of [6]) for the partial derivative of $F$ with respect to the matrix $X$:
$$\frac{\partial F}{\partial X} = \left(\frac{\partial F(X)}{\partial X_{ij}}\right)_{\substack{1 \le i \le m \\ 1 \le j \le n}}.$$
In particular, this notation leads to the following useful formulas: for any $q \in \mathbb{N}^*$ and any matrix $W \in \mathbb{R}^{q \times m}$ we have
$$\frac{\partial (WX)}{\partial X} = W^\top; \tag{10}$$
when $X \in \mathbb{R}^n$ with $X_n = 1$ one has
$$\frac{\partial (WX)}{\partial X} = \widetilde{W}^\top, \tag{11}$$
where $\widetilde{W}$ is the matrix $W$ whose last column is removed (this formula is highly useful in practice). Moreover, for any matrix $X \in \mathbb{R}^{m \times n}$ we have
$$\frac{\partial (WX)}{\partial W} = X^\top. \tag{12}$$
Then, by the chain rule, for any $q, n, m \in \mathbb{N}^*$ and any function $F$ differentiable with respect to the matrix variables $W, X$,
$$F : (W, X) \mapsto Z := WX \in \mathbb{R}^{q \times n} \mapsto F(Z) \in \mathbb{R},$$
one has
$$\frac{\partial F}{\partial X} = W^\top \frac{\partial F}{\partial Z} \qquad \text{and} \qquad \frac{\partial F}{\partial W} = \frac{\partial F}{\partial Z}\, X^\top. \tag{13}$$
Furthermore, for any function $F$ differentiable with respect to $Y$,
$$F : \mathbb{R}^n \ni Y \mapsto X := \sigma(Y) \in \mathbb{R}^n \mapsto F(X) \in \mathbb{R},$$
we have
$$\frac{\partial F}{\partial Y} = \frac{\partial F}{\partial X} \odot \sigma'(Y). \tag{14}$$
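The two chain-rule identities just stated can be verified numerically on a small example. The test function $F(Z) = \tfrac{1}{2}\|Z\|^2$ below, for which $\partial F/\partial Z = Z$, is an arbitrary illustrative choice; the shapes are likewise arbitrary.

```python
import numpy as np

# Test function F(Z) = 0.5 * ||Z||^2, so dF/dZ = Z, composed with Z = W X.
rng = np.random.default_rng(2)
W = rng.standard_normal((4, 3))
X = rng.standard_normal(3)
Z = W @ X
dF_dZ = Z

dF_dX = W.T @ dF_dZ          # claimed: dF/dX = W^T dF/dZ
dF_dW = np.outer(dF_dZ, X)   # claimed: dF/dW = dF/dZ X^T  (denominator layout)

F = lambda W_, X_: 0.5 * np.sum((W_ @ X_) ** 2)
eps = 1e-6
Xp = X.copy(); Xp[0] += eps
num_X = (F(W, Xp) - F(W, X)) / eps     # finite difference in X_0
Wp = W.copy(); Wp[1, 2] += eps
num_W = (F(Wp, X) - F(W, X)) / eps     # finite difference in W_{12}
```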
Throughout this paper, we consider layered feedforward neural networks and supervised learning tasks. Following [2] (see (2.18) on page 24), we denote such an architecture by
$$A[N_0, \dots, N_h, \dots, N_L],$$
where $N_0$ is the size of the input layer, $N_h$ the size of hidden layer $h$, and $N_L$ the size of the output layer; $L$ is defined as the depth of the ANN, and such a multilayer network is referred to as a Deep Neural Network (DNN). We assume that the layers are fully connected, i.e., neurons between two adjacent layers are fully pairwise connected, but neurons within a single layer share no connections.
In the next section we shall recall, develop and improve the two-step rule for backpropagation, recently introduced by the author in [5].

3. Two-step rule for backpropagation

First, let us mention that the two-step rule for backpropagation is very useful for obtaining estimates of the unknown parameters $W^h$ from a sample of target and input data values (see the examples given in [5]).
Now, let $\alpha^h_{ij}$ denote the weight connecting neuron $j$ in layer $h-1$ to neuron $i$ in layer $h$, and let $\sigma^h_i$ denote the associated transfer function. In applications, two different passes of computation are generally distinguished: the first is referred to as the forward pass, and the second as the backward pass. In the forward pass, the synaptic weights remain fixed throughout the network, and the output $X^h_i$ of neuron $i$ in layer $h$ is computed by the following recursive coordinate form:
$$X^h_i := \sigma^h_i(Y^h_i) \qquad \text{where} \qquad Y^h_i := \sum_{j=1}^{N_{h-1}} \alpha^h_{ij} X^{h-1}_j.$$
In two-step recursive matrix form, one may rewrite the above formulas as
$$X^h = \sigma^h(Y^h) \qquad \text{where} \qquad Y^h = W^h X^{h-1},$$
with
$$\sigma^h := (\sigma^h_1, \dots, \sigma^h_{N_h})^\top, \qquad W^h := (\alpha^h_{ij}) \in \mathbb{R}^{N_h \times N_{h-1}}.$$
Let us assume that, for all $h \in \{1, \dots, L\}$, $\sigma^h = (\sigma, \dots, \sigma)^\top$, where $\sigma$ is a fixed activation function, and define a simple artificial neural network as follows:
$$\text{given } X^0 = x, \quad \text{for } h = 1, \dots, L: \quad Y^h = W^h X^{h-1}, \quad X^h = \sigma(Y^h),$$
where $x \in \mathbb{R}^{N_0}$ is the input to the network.
To optimize the neural network, we compute the partial derivatives $\frac{\partial J(\cdot)}{\partial W^h}$ of the cost $J(\cdot)$ w.r.t. the weight matrices. This quantity can be computed by making use of the chain rule in the back-propagation algorithm. To compute the partial derivatives with respect to the matrix variables $\{X^h, Y^h, W^h\}$, we put
$$X^{*h} = \frac{\partial J(\cdot)}{\partial X^h}, \qquad Y^{*h} = \frac{\partial J(\cdot)}{\partial Y^h}, \qquad \delta W^h = \frac{\partial J(\cdot)}{\partial W^h}. \tag{20}$$
Figure 1. Example of an $A[N_0, \dots, N_L]$ architecture.
Let us notice that the forward pass computation between the two adjacent layers h 1 and h may be represented mathematically as the composition of the following two maps:
$$X^{h-1} \;\longmapsto\; Y^h = W^h X^{h-1} \;\longmapsto\; X^h = \sigma(Y^h).$$
The backward computation between the two adjacent layers h and h 1 may be represented mathematically as follows:
$$X^{*h} \;\longmapsto\; Y^{*h} = X^{*h} \odot \sigma'(Y^h) \;\longmapsto\; X^{*h-1} = (W^h)^\top Y^{*h}.$$
One may represent those maps as simple mapping diagrams with $X^{h-1}$ and $X^{*h}$ as inputs, the respective outputs being $Y^h = W^h X^{h-1}$, $X^h = \sigma(Y^h)$ and $Y^{*h}$, $X^{*h-1}$ (see Figure 2).
Remark 1. 
It is crucial to remark that if we impose the following conditions on $X^h$, $W^h$ and $\sigma^h_{N_h}$:
1. all input vectors have the form $X^h = [X^h_1, \dots, X^h_{N_h - 1}, 1]^\top$ for all $0 \le h \le L-1$;
2. the last function $\sigma^h_{N_h}$ in the column $\sigma^h$ is the constant function equal to 1 for all $1 \le h \le L-1$;
then the $A[N_0, \dots, N_L]$ neural network is equivalent to an $(L-1)$-layered affine neural network with $(N_0 - 1)$-dimensional input and $N_L$-dimensional output. Each hidden layer $h$ contains $(N_h - 1)$ "genuine" neurons and one (last) "formal" neuron associated with the bias; the last column of the matrix $W^h$ is the bias vector of the $h$-th layer (for more details, see the examples given in Section 5).
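The equivalence described in this remark rests on the standard bias-absorption trick, which a few lines of NumPy make concrete; all names and numerical values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
W_genuine = rng.standard_normal((3, 2))   # weights of the "genuine" neurons
b = rng.standard_normal(3)                # bias vector of the layer
x = rng.standard_normal(2)

W = np.hstack([W_genuine, b[:, None]])    # last column of W^h = bias vector
x_aug = np.append(x, 1.0)                 # input in the form [X_1, ..., X_{N-1}, 1]

affine = W_genuine @ x + b                # ordinary affine layer
linear = W @ x_aug                        # purely linear layer on the augmented input
```

The two results coincide, so a layer with bias can be treated as a purely linear layer on the augmented input, exactly as the remark states.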
Figure 2. Mapping diagrams associated to the forward and backward passes between two adjacent layers.
Now, we state our first main mathematical result to answer the above question, in the following theorem.
Theorem 1
(The two-step rule for backpropagation).
Let $L$ be the depth of a Deep Neural Network and $N_h$ the number of neurons in the $h$-th layer. We denote by $X^0 \in \mathbb{R}^{N_0}$ the input of the network and by $W^h \in \mathbb{R}^{N_h \times N_{h-1}}$ the weight matrix defining the synaptic strengths between the hidden layer $h$ and its predecessor $h-1$. The outputs $Y^h$, $X^h$ of the hidden layer $h$ are thus defined as follows:
$$X^0 = x, \quad Y^h = W^h X^{h-1}, \quad X^h = \sigma(Y^h), \quad h = 1, 2, \dots, L, \tag{21}$$
where $\sigma(\cdot)$ is a point-wise differentiable activation function, whose first-order derivative we denote by $\sigma'(\cdot)$, $x \in \mathbb{R}^{N_0}$ is the input to the network and $W^h$ is the weight matrix of the $h$-th layer. To optimize the neural network, we compute the partial derivatives $\frac{\partial J(f(x), y)}{\partial W^h}$ of the loss $J(f(x), y)$ w.r.t. the weight matrices, where $f(x)$ and $y$ are the output of the DNN and the associated target/label, respectively. This quantity can be computed by the following two-step rule:
$$X^{*L} = \frac{\partial J(f(x), y)}{\partial X^L}, \qquad Y^{*h} = X^{*h} \odot \sigma'(Y^h), \quad X^{*h-1} = (W^h)^\top Y^{*h}, \quad h = L, \dots, 1. \tag{22}$$
Once $Y^{*h}$ is computed, the weight update can be computed as
$$\frac{\partial J(f(x), y)}{\partial W^h} = Y^{*h} (X^{h-1})^\top. \tag{23}$$
Hereafter, the two-step recursive formulae given in (22) and (23) will be referred to as "the two-step rule for back-propagation" (see the mapping diagram represented by Figure 3). In particular, this rule provides a simplified formulation of the so-called "generalized delta rule", similar to (1). Thus, we have answered our key question.
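A direct implementation of the two-step rule of Theorem 1, checked against a finite difference, can be sketched as follows; the architecture, tanh activation and squared-error loss are illustrative assumptions.

```python
import numpy as np

def forward(x, weights, sigma=np.tanh):
    Xs, Ys = [x], []
    for W in weights:
        Ys.append(W @ Xs[-1])
        Xs.append(sigma(Ys[-1]))
    return Xs, Ys

def two_step_backward(Xs, Ys, weights, X_star_L, dsigma=lambda t: 1 - np.tanh(t) ** 2):
    """Two-step backward rule: Y*^h = X*^h * sigma'(Y^h), X*^{h-1} = (W^h)^T Y*^h,
    with weight gradients dJ/dW^h = Y*^h (X^{h-1})^T."""
    X_star, grads = X_star_L, [None] * len(weights)
    for h in range(len(weights) - 1, -1, -1):
        Y_star = X_star * dsigma(Ys[h])      # step 1
        grads[h] = np.outer(Y_star, Xs[h])   # gradient w.r.t. W^h
        X_star = weights[h].T @ Y_star       # step 2
    return grads, X_star                     # X_star is now X*^0

# finite-difference check with J = 0.5 * ||X^L - y||^2, so X*^L = X^L - y
rng = np.random.default_rng(4)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((2, 3))]
x, y = np.array([0.2, 0.9]), np.array([0.1, -0.4])
Xs, Ys = forward(x, weights)
grads, _ = two_step_backward(Xs, Ys, weights, X_star_L=Xs[-1] - y)

eps = 1e-6
Wp = [W.copy() for W in weights]
Wp[1][0, 2] += eps
J = lambda out: 0.5 * np.sum((out - y) ** 2)
num = (J(forward(x, Wp)[0][-1]) - J(Xs[-1])) / eps  # approximates dJ/dW^2_{02}
```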
Proof. 
Firstly, for any $h = 1, \dots, L$, let us recall the simplified notations introduced by (20):
$$X^{*h} = \frac{\partial J(f(x), y)}{\partial X^h}, \qquad Y^{*h} = \frac{\partial J(f(x), y)}{\partial Y^h}.$$
Secondly, for fixed $h \in \{1, \dots, L\}$, one has $X^h = \sigma(Y^h)$; then (14) implies that
$$\frac{\partial J(f(x), y)}{\partial Y^h} = \frac{\partial J(f(x), y)}{\partial X^h} \odot \sigma'(Y^h),$$
thus
$$Y^{*h} = X^{*h} \odot \sigma'(Y^h). \tag{24}$$
On the other hand, $Y^h = W^h X^{h-1}$, thus
$$\frac{\partial J(f(x), y)}{\partial X^{h-1}} = (W^h)^\top \frac{\partial J(f(x), y)}{\partial Y^h}$$
by virtue of (13). As a consequence,
$$X^{*h-1} = (W^h)^\top Y^{*h}. \tag{25}$$
Equations (24) and (25) immediately imply (22). Moreover, applying (13) again to the cost function $J$ and the relation $Y^h = W^h X^{h-1}$, we deduce that
$$\frac{\partial J(f(x), y)}{\partial W^h} = \frac{\partial J(f(x), y)}{\partial Y^h}\, (X^{h-1})^\top = Y^{*h} (X^{h-1})^\top.$$
This ends the proof of Theorem 1. Furthermore, in the practical setting mentioned in Remark 1, one should replace $W^h$ by $\widetilde{W}^h$ by virtue of (11). Thus we have, for all $h \in \{L, \dots, 2\}$,
$$X^{*h-1} = (\widetilde{W}^h)^\top Y^{*h},$$
and the associated two-step rule is then given by
$$X^{*L} = \frac{\partial J(f(x), y)}{\partial X^L}, \qquad Y^{*h} = X^{*h} \odot \sigma'(Y^h), \quad X^{*h-1} = (\widetilde{W}^h)^\top Y^{*h}, \quad h = L, \dots, 1. \tag{26}$$

4. The F-propagation and F-adjoint

Based on the above idea of the two-step rule for back-propagation, we shall introduce the following two natural definitions:
Definition 1
(An F-propagation).
Let $X^0 \in \mathbb{R}^{N_0}$ be a given data point, $\sigma$ a coordinate-wise map from $\mathbb{R}^{N_h}$ into $\mathbb{R}^{N_h}$, and $W^h \in \mathbb{R}^{N_h \times N_{h-1}}$ for all $1 \le h \le L$. We say that we have a two-step recursive F-propagation $F$ through the DNN $A[N_0, \dots, N_L]$ if one has the following family of vectors
$$F := \big(X^0, Y^1, X^1, \dots, X^{L-1}, Y^L, X^L\big)$$
with
$$Y^h = W^h X^{h-1}, \quad X^h = \sigma(Y^h), \quad X^h \in \mathbb{R}^{N_h}, \quad h = 1, \dots, L.$$
Before going further, let us point that in the above definition the prefix "F" stands for "Feed-forward".
Definition 2
(The F-adjoint of an F-propagation).
Let $X^{*L} \in \mathbb{R}^{N_L}$ be a given data point and $F$ a two-step recursive F-propagation through the DNN $A[N_0, \dots, N_L]$. We define the F-adjoint $F^*$ of the F-propagation $F$ as follows:
$$F^* := \big(X^{*L}, Y^{*L}, X^{*L-1}, \dots, X^{*1}, Y^{*1}, X^{*0}\big)$$
with
$$Y^{*h} = X^{*h} \odot \sigma'(Y^h), \quad X^{*h-1} = (W^h)^\top Y^{*h}, \quad h = L, \dots, 1. \tag{30}$$
Remark 2.
It is immediately seen that if, for every $h = 1, \dots, L$, $W^h$ is an orthogonal matrix, i.e. $(W^h)^\top W^h = W^h (W^h)^\top = I_{N_h}$, $\sigma(X) = X$ and $X^0 = X^{*0}$, one has
$$Y^1 = W^1 X^{*0} = W^1 (W^1)^\top Y^{*1} = Y^{*1},$$
$$Y^2 = W^2 X^{*1} = W^2 (W^2)^\top Y^{*2} = Y^{*2},$$
and by induction we obtain
$$Y^L = W^L X^{*L-1} = W^L (W^L)^\top Y^{*L} = Y^{*L}.$$
On the other hand,
$$X^1 = \sigma(W^1 X^0) = W^1 X^{*0} = Y^{*1} = X^{*1},$$
and, again by induction,
$$X^L = W^L X^{L-1} = W^L (W^L)^\top Y^{*L} = Y^{*L} = X^{*L}.$$
Consequently, we obtain $F^* = F$, and we refer to this property as F-symmetry. We conjecture that, under the assumption $\sigma(X) = X$ for every vector $X$, a DNN is F-symmetric if and only if $(W^h)^\top W^h = W^h (W^h)^\top = I_{N_h}$ for all $h = 1, \dots, L$, i.e. every $W^h$ is an orthogonal matrix.
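The F-symmetry property of this remark can be checked numerically: with square orthogonal weight matrices and the identity activation, the F-adjoint seeded with $X^{*L} = X^L$ reproduces the forward quantities, in particular $X^{*0} = X^0$. The sizes and seeds below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, L = 4, 3
# square orthogonal weight matrices via QR factorizations
weights = [np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(L)]

X0 = rng.standard_normal(n)
X = X0.copy()
for W in weights:                # forward pass with sigma = identity
    X = W @ X

X_star = X                       # seed the F-adjoint with X*^L = X^L
for W in reversed(weights):      # Y*^h = X*^h, X*^{h-1} = (W^h)^T Y*^h
    X_star = W.T @ X_star
```

Because each $W^h$ is orthogonal, the backward products undo the forward ones exactly, so `X_star` returns to `X0` up to floating-point error.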
Similarly to the note at the end of the proof of Theorem 1, if we are in the practical setting mentioned in Remark 1, one should replace $W^h$ by $\widetilde{W}^h$, by virtue of (11), and (30) by the following relations:
$$Y^{*h} = X^{*h} \odot \sigma'(Y^h), \quad X^{*h-1} = (\widetilde{W}^h)^\top Y^{*h}, \quad h = L, \dots, 1. \tag{31}$$
Now, following these definitions, we can deduce our second main mathematical result.
Theorem 2
(The backpropagation is an F-adjoint of the associated F-propagation).
Let $A[N_0, \dots, N_L]$ be a Deep Neural Network and let $F := \big(X^0, Y^1, X^1, \dots, X^{L-1}, Y^L, X^L\big)$ be an F-propagation through $A[N_0, \dots, N_L]$, with weight matrices $W^h \in \mathbb{R}^{N_h \times N_{h-1}}$ and fixed point-wise activation $\sigma^h = \sigma$, $h = 1, \dots, L$.
Let $X^0 = x \in \mathbb{R}^{N_0}$ be the input to this network and $J(f(x), y)$ the loss function, where $f(x)$ and $y$ are the output of the DNN and the associated target/label, respectively.
Then the associated backpropagation pass is computed by the F-adjoint of $F$ with $X^{*L} = \frac{\partial J(f(x), y)}{\partial X^L}$, that is, $F^* := \big(X^{*L}, Y^{*L}, X^{*L-1}, \dots, X^{*1}, Y^{*1}, X^{*0}\big)$. More precisely, for all $h = 1, \dots, L$,
$$\frac{\partial J(f(x), y)}{\partial W^h} = Y^{*h} (X^{h-1})^\top.$$
Before proving Theorem 2, let us remark that this mathematical result is a simplified version of Theorem 1, based on the notion of F-adjoint introduced in Definition 2.
Proof. 
By definition of the F-adjoint, one has, for all $h = L, \dots, 1$,
$$Y^{*h} = X^{*h} \odot \sigma'(Y^h), \qquad X^{*h-1} = (W^h)^\top Y^{*h}.$$
Moreover, by the two-step rule for backpropagation, we have
$$X^{*L} = \frac{\partial J(\cdot)}{\partial X^L}, \qquad Y^{*h} = X^{*h} \odot \sigma'(Y^h), \qquad \delta W^h = Y^{*h} (X^{h-1})^\top, \qquad X^{*h-1} = (W^h)^\top Y^{*h}.$$
As a consequence, we deduce that the backpropagation pass is exactly determined by the F-adjoint $F^*$ relative to the choice $X^{*L} = \frac{\partial J(\cdot)}{\partial X^L}$. This gives a new proof of Theorem 1 of [5], simpler than the original one.

5. Application to the two simplest cases A [ 1 , 1 , 1 ] and A [ 1 , 2 , 1 ]

In the present section we apply the notion of F-adjoint to the simplest cases of the DNN $A[1, N_1, 1]$ with $N_1 = 1, 2$, computing the partial derivative of the elementary cost function $J$ defined by $J(f(x), y) = f(x) - y$ for any real $x$ and fixed real $y$. In this particular setting, we have the two simplest cases $A[1,1,1]$ and $A[1,2,1]$: one neuron and two neurons in the hidden layer, respectively (see Figure 4 and Figure 5).

5.1. The first case: A [ 1 , 1 , 1 ]

The first and simplest case corresponds to the $A[1,1,1]$ architecture shown in Figure 4. Let us denote by $W^1 = (\alpha^1_{11}\ \ \alpha^1_{12})$ and $W^2 = (\alpha^2_1\ \ \alpha^2_2)$ the weight matrices of the first and second layers. We first evaluate $\delta W^1$ and $\delta W^2$ by the rules of differential calculus, and then recover this result by the two-step rule for back-propagation.
Figure 4. The DNN associated to the case 1.

The F-propagation through this DNN is given by : 

Obviously, we have
$$X^0 = \begin{pmatrix} x \\ 1 \end{pmatrix}, \quad Y^1 = \alpha^1_{11} x + \alpha^1_{12}, \quad X^1 = \begin{pmatrix} \sigma(\alpha^1_{11} x + \alpha^1_{12}) \\ 1 \end{pmatrix},$$
$$Y^2 = \alpha^2_1 \sigma(\alpha^1_{11} x + \alpha^1_{12}) + \alpha^2_2, \quad X^2 = \sigma\big(\alpha^2_1 \sigma(\alpha^1_{11} x + \alpha^1_{12}) + \alpha^2_2\big).$$
Let us denote $y^1 := \alpha^1_{11} x + \alpha^1_{12}$ and $y^2 := \alpha^2_1 \sigma(y^1) + \alpha^2_2$; then
$$F = \left( X^0 = \begin{pmatrix} x \\ 1 \end{pmatrix},\ Y^1 = y^1,\ X^1 = \begin{pmatrix} \sigma(y^1) \\ 1 \end{pmatrix},\ Y^2 = y^2,\ X^2 = \sigma(y^2) \right)$$
and
$$J(f(x), y) = X^2 - y = \sigma(y^2) - y = \sigma\big(\alpha^2_1 \sigma(\alpha^1_{11} x + \alpha^1_{12}) + \alpha^2_2\big) - y.$$
Hence, by the rules of differential calculus, one gets
$$\delta W^2 = \left( \frac{\partial J}{\partial \alpha^2_1},\ \frac{\partial J}{\partial \alpha^2_2} \right) = \big( \sigma'(y^2)\sigma(y^1),\ \sigma'(y^2) \big), \qquad \delta W^1 = \left( \frac{\partial J}{\partial \alpha^1_{11}},\ \frac{\partial J}{\partial \alpha^1_{12}} \right) = \big( \sigma'(y^2)\sigma'(y^1)\alpha^2_1 x,\ \sigma'(y^2)\sigma'(y^1)\alpha^2_1 \big). \tag{32}$$

The F-adjoint of the above F-propagation is given by 

$$F^* = \Big( X^{*2} = 1,\ Y^{*2} = \sigma'(y^2),\ X^{*1} = \alpha^2_1 \sigma'(y^2),\ Y^{*1} = \alpha^2_1 \sigma'(y^2)\sigma'(y^1),\ X^{*0} = \alpha^1_{11} \alpha^2_1 \sigma'(y^2)\sigma'(y^1) \Big),$$
since by (31) we have
$$Y^{*h} = X^{*h} \odot \sigma'(Y^h), \quad X^{*h-1} = (\widetilde{W}^h)^\top Y^{*h}, \quad h = 2, 1;$$
then we deduce that
$$Y^{*2} = X^{*2} \sigma'(y^2) = \sigma'(y^2), \quad X^{*1} = (\widetilde{W}^2)^\top Y^{*2} = \alpha^2_1 \sigma'(y^2), \quad Y^{*1} = X^{*1} \sigma'(y^1) = \alpha^2_1 \sigma'(y^2)\sigma'(y^1)$$
and
$$X^{*0} = (\widetilde{W}^1)^\top Y^{*1} = \alpha^1_{11} Y^{*1} = \alpha^1_{11} \alpha^2_1 \sigma'(y^2)\sigma'(y^1).$$
Hence, by using the F-adjoint and the two-step rule (26), one gets
$$\delta W^2 = Y^{*2} (X^1)^\top = \big( \sigma'(y^2)\sigma(y^1),\ \sigma'(y^2) \big), \qquad \delta W^1 = Y^{*1} (X^0)^\top = \big( \alpha^2_1 \sigma'(y^2)\sigma'(y^1) x,\ \alpha^2_1 \sigma'(y^2)\sigma'(y^1) \big).$$
Thus we recover, via the F-adjoint $F^*$, the computation given by (32).
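The closed-form entries of $\delta W^1$ and $\delta W^2$ obtained above can be confirmed numerically; $\sigma = \tanh$ and all sample values below are illustrative assumptions.

```python
import numpy as np

sigma, dsigma = np.tanh, lambda t: 1 - np.tanh(t) ** 2
a11, a12 = 0.4, -0.2     # alpha^1_11, alpha^1_12 (weight and bias of layer 1)
b1, b2 = 0.7, 0.1        # alpha^2_1,  alpha^2_2  (weight and bias of layer 2)
x, y = 0.5, 0.3

def J(a11_, a12_, b1_, b2_):
    y1 = a11_ * x + a12_
    y2 = b1_ * sigma(y1) + b2_
    return sigma(y2) - y

y1 = a11 * x + a12
y2 = b1 * sigma(y1) + b2
dW2 = np.array([dsigma(y2) * sigma(y1), dsigma(y2)])              # (dJ/da^2_1, dJ/da^2_2)
dW1 = np.array([b1 * dsigma(y2) * dsigma(y1) * x,
                b1 * dsigma(y2) * dsigma(y1)])                    # (dJ/da^1_11, dJ/da^1_12)

eps = 1e-6
num_b1 = (J(a11, a12, b1 + eps, b2) - J(a11, a12, b1, b2)) / eps   # finite diff in alpha^2_1
num_a11 = (J(a11 + eps, a12, b1, b2) - J(a11, a12, b1, b2)) / eps  # finite diff in alpha^1_11
```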

5.2. The second case: A [ 1 , 2 , 1 ]

The second case corresponds to the $A[1,2,1]$ architecture shown in Figure 5. In this case we have $W^1 = \begin{pmatrix} \alpha^1_{11} & \alpha^1_{12} \\ \alpha^1_{21} & \alpha^1_{22} \end{pmatrix}$ and $W^2 = (\alpha^2_1\ \ \alpha^2_2\ \ \alpha^2_3)$. Thus, one deduces immediately the following.
Figure 5. The DNN associated to the case 2.

The F-propagation through this DNN is given by : 

$$F = \left( X^0 = \begin{pmatrix} x \\ 1 \end{pmatrix},\ Y^1 = \begin{pmatrix} y^1_{11} \\ y^1_{21} \end{pmatrix},\ X^1 = \begin{pmatrix} \sigma(y^1_{11}) \\ \sigma(y^1_{21}) \\ 1 \end{pmatrix},\ Y^2 = y^2,\ X^2 = \sigma(y^2) \right).$$
Hence, by the rules of differential calculus, one gets
$$\delta W^2 = \big( \sigma'(y^2)\sigma(y^1_{11}),\ \sigma'(y^2)\sigma(y^1_{21}),\ \sigma'(y^2) \big), \qquad \delta W^1 = \begin{pmatrix} \alpha^2_1 x \sigma'(y^1_{11}) \sigma'(y^2) & \alpha^2_1 \sigma'(y^1_{11}) \sigma'(y^2) \\ \alpha^2_2 x \sigma'(y^1_{21}) \sigma'(y^2) & \alpha^2_2 \sigma'(y^1_{21}) \sigma'(y^2) \end{pmatrix},$$
with
$$y^2 := \alpha^2_1 \sigma(y^1_{11}) + \alpha^2_2 \sigma(y^1_{21}) + \alpha^2_3, \qquad y^1_{11} := \alpha^1_{11} x + \alpha^1_{12}, \qquad y^1_{21} := \alpha^1_{21} x + \alpha^1_{22}.$$

The F-adjoint of the above F-propagation is given by 

$$F^* = \left( X^{*2} = 1,\ Y^{*2} = \sigma'(y^2),\ X^{*1} = \begin{pmatrix} \alpha^2_1 \sigma'(y^2) \\ \alpha^2_2 \sigma'(y^2) \end{pmatrix},\ Y^{*1} = \begin{pmatrix} \alpha^2_1 \sigma'(y^2)\sigma'(y^1_{11}) \\ \alpha^2_2 \sigma'(y^2)\sigma'(y^1_{21}) \end{pmatrix},\ X^{*0} = x^{*0} \right)$$
with $x^{*0} = \alpha^1_{11}\alpha^2_1 \sigma'(y^2)\sigma'(y^1_{11}) + \alpha^1_{21}\alpha^2_2 \sigma'(y^2)\sigma'(y^1_{21})$. As above, by using the F-adjoint and the two-step rule (26), one gets
$$\delta W^2 = Y^{*2} (X^1)^\top = \big( \sigma'(y^2)\sigma(y^1_{11}),\ \sigma'(y^2)\sigma(y^1_{21}),\ \sigma'(y^2) \big), \qquad \delta W^1 = Y^{*1} (X^0)^\top = \begin{pmatrix} \alpha^2_1 \sigma'(y^2)\sigma'(y^1_{11}) x & \alpha^2_1 \sigma'(y^2)\sigma'(y^1_{11}) \\ \alpha^2_2 \sigma'(y^2)\sigma'(y^1_{21}) x & \alpha^2_2 \sigma'(y^2)\sigma'(y^1_{21}) \end{pmatrix}.$$
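As for the one-neuron case, the $A[1,2,1]$ gradients can be confirmed against finite differences; $\sigma = \tanh$ and all sample values below are illustrative assumptions.

```python
import numpy as np

sigma, dsigma = np.tanh, lambda t: 1 - np.tanh(t) ** 2
W1 = np.array([[0.4, -0.2],
               [0.3,  0.6]])            # [[a^1_11, a^1_12], [a^1_21, a^1_22]]
W2 = np.array([0.7, -0.5, 0.1])         # [a^2_1, a^2_2, a^2_3]
x, y = 0.5, 0.3

def J(W1_, W2_):
    y1 = W1_ @ np.array([x, 1.0])
    y2 = W2_ @ np.append(sigma(y1), 1.0)
    return sigma(y2) - y

y1 = W1 @ np.array([x, 1.0])
y2 = W2 @ np.append(sigma(y1), 1.0)
Y2_star = dsigma(y2)                          # Y*^2
Y1_star = W2[:2] * Y2_star * dsigma(y1)       # Y*^1 = (W~^2)^T Y*^2 * sigma'(Y^1)
dW2 = Y2_star * np.append(sigma(y1), 1.0)     # Y*^2 (X^1)^T
dW1 = np.outer(Y1_star, np.array([x, 1.0]))   # Y*^1 (X^0)^T

eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num_a111 = (J(W1p, W2) - J(W1, W2)) / eps     # finite diff in alpha^1_11
W2p = W2.copy(); W2p[1] += eps
num_a22 = (J(W1, W2p) - J(W1, W2)) / eps      # finite diff in alpha^2_2
```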

6. Related works and Conclusion

To the best of our knowledge, the works in the literature most closely related to this paper are [7] and [8]. In the first paper, the authors use a decomposition of the partial derivatives of the cost function, similar to the two-step formula (cf. (22)), to replace standard back-propagation; in addition, they show experimentally that, in specific scenarios, the two-step decomposition yields better generalization performance than standard back-propagation. In the second article, the authors find update equations similar to the one given by (22) that perform comparably to standard back-propagation at convergence. Moreover, their method discovers new variations of back-propagation by learning new propagation rules that optimize the generalization performance after a few epochs of training.
In conclusion, we have provided a two-step rule for back-propagation, similar to the one for forward propagation, together with a new mathematical notion called the F-adjoint which, combined with the two-step rule for back-propagation, describes in a simple and direct way the computational process carried out by the back-propagation pass. We hope that the power and simplicity of the F-adjoint concept may inspire the exploration of novel approaches for optimizing artificial neural network training algorithms and for investigating mathematical properties of the F-propagation and F-adjoint notions. As future work, we envision exploring mathematical results on the F-adjoint for deep neural networks with respect to the choice of the activation function.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 1943, 5, 115–133.
  2. Baldi, P. Deep learning in science; Cambridge University Press: Cambridge, 2021.
  3. Schmidhuber, J. Deep learning in neural networks: An overview. Neural networks 2015, 61, 85–117.
  4. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
  5. Boughammoura, A. A Two-Step Rule for Backpropagation 2023. Preprint at https://www.preprints.org/manuscript/202303.0001/v3, doi:10.20944/preprints202303.0001.v3. [CrossRef]
  6. Ye, J.C. Geometry of Deep Learning; Springer: Heidelberg, 2022.
  7. Alber, M.; Bello, I.; Zoph, B.; Kindermans, P.J.; Ramachandran, P.; Le, Q. Backprop evolution 2018. Preprint at https://arxiv.org/abs/1808.02822.
  8. Hojabr, R.; Givaki, K.; Pourahmadi, K.; Nooralinejad, P.; Khonsari, A.; Rahmati, D.; Najafi, M.H. TaxoNN: A Light-Weight Accelerator for Deep Neural Network Training. 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2020, pp. 1–5.
Figure 3. Mapping diagram associated to the two-step rule for backpropagation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.