Preprint

Iterative List Patterned Reed-Muller Projection Detection-based Packetized Unsourced Massive Random Access

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed.

Submitted: 09 July 2023. Posted: 10 July 2023.

Abstract
We consider a coded compressed sensing protocol for unsourced massive random access (URA) that concatenates a shared Patterned Reed-Muller (PRM) inner codebook with an outer error-correction code. In this paper, an iterative list PRM projection algorithm is proposed to replace the signal detector associated with the inner PRM sequences. In particular, we first propose an enhanced path-saving algorithm, called list PRM projection detection, for the single-user scenario, which keeps multiple candidates in the first few layers to remedy the risk of error spreading. On this basis, we further propose an iterative list PRM projection algorithm for the multi-user scenario. The PRM code vectors and the channel coefficients are jointly detected in an iterative manner, which offers significant improvements in the convergence rate of signal recovery. Furthermore, the performance of the proposed algorithms is analyzed mathematically, and we verify that the theoretical results are consistent with the numerical simulations. Finally, we concatenate the inner PRM codes that employ the iterative list detection with two practical outer error-correction codes. According to the simulation results, we conclude that the packetized URA with the proposed iterative list projection detection works better than the benchmarks in terms of the number of active users it can support in each slot and the energy per bit needed to meet an expected error probability.
Keywords: 
Subject: Engineering - Telecommunications

1. Introduction

One of the most prominent use cases for 6G wireless networks will still be the continued evolution of massive machine-type communication (mMTC+) [1]. Indeed, mMTC+ services feature a large number of machines that sporadically connect to the system while carrying short data packets. Besides, these devices are battery-limited and intended to achieve low transmission latency [2,3,4].
Grant-free transmission [5,6], in which packets are transmitted to the network without informing the base station in advance, is the most promising approach. The instance of "sourced random access" [7] uses grant-free transmission and allocates distinct dictionaries to different users. Since there are so many users, employing various encoders would complicate reception, as the decoder would first have to determine which encoders were being used. Therefore, using the same coding scheme for all users is the most promising strategy. Such systems are referred to as "unsourced random access (URA)" systems [8]. In this way, the identification and decoding responsibilities are decoupled, and the URA receiver only needs to decode the messages without knowing who sent them. Additionally, the per-user probability of error (PUPE) over all active users is used to examine the access and transmission performance of the entire system.
The fundamental limits of the Gaussian multiple-access channel are described in [8], which formulates the URA problem from an information-theoretic perspective. Based on [8], an asymptotic improvement is presented in [9]. Both examine the scenario where the receiver knows a fixed number of active users. These findings are extended in [10] to the case where the number of active users is arbitrary and unknown to the receiver. For the Gaussian multiple-access channel, a range of low-complexity URA algorithms have been suggested. The main schemes are the T-fold irregular repetition slotted ALOHA protocol with collision detection [11,12], the sparse interleave division multiple access (IDMA) scheme [13], random spreading with correlation-based energy detection [14,15], as well as the coded compressed sensing (CCS) based URA scheme [16]. Coded compressed sensing is a divide-and-conquer strategy that concatenates inner codes with outer A-channel codes, where a tree code is a specific instance of an outer A-channel code devised for this kind of application [17,18]. The adaptability of this concatenated form for URA makes it simple to accommodate novel channel models. Several later investigations on URA employed an outer A-channel code [19,20,21,22,23,24]. In [25], the study scope of a tree code acting as the outer A-channel code is extended through the integration of the approximate message passing (AMP) mechanism with sparse regression codes [16]. The coded compressed sensing approach is further enhanced in [19] by enabling the inner AMP decoder and the outer tree decoder to exchange soft information through a common message-passing protocol. In order to simplify the joint detection of high-dimensional sparse signals over several bases, the work in [26] suggests a coded demixing method.
Compressed sensing has made extensive use of codebook construction techniques related to second-order Reed-Muller (RM) sequences [27]. RM sequences are excellent options for the massive access scenario in the continued evolution of massive machine-type communication owing to their high capacity and capability for low-complexity random access [28]. In [29], non-orthogonal Reed-Muller sequences are used as user identifiers (IDs) for active user identification in the grant-based access mechanism of mMTC+, which suffers from low access efficiency and high signaling expenses. In [30], a strategy for massive random access is proposed that makes use of the nested properties of RM codes. Besides, the coded compressed sensing scheme has made extensive use of inner codebook constructions related to second-order RM sequences [31,32], i.e., RM sequences are utilized as inner codes for the CCS protocol, and outer tree codes are employed to couple messages across slotted transmissions. In [33], a shared patterned Reed-Muller (PRM) codebook that embeds zero patterns in the second-order RM codes according to a binary vector space partition principle is employed as the inner codebook for the coded compressed sensing protocol.
This paper is inspired by a slot-controlled CCS protocol that concatenates outer error-correction codes with a common PRM inner codebook [33]. We replace the inner decoder related to PRM with a proposed iterative list PRM projection algorithm. We then derive the theoretical successful detection probabilities of the proposed algorithm and validate their correctness via numerical simulations. We finally assess the performance of the slot-controlled URA system with the proposed iterative list algorithm as the inner-code detector. According to the simulation results, we demonstrate that the schemes using the proposed detection help improve the overall system performance.

1.1. Contributions and Arrangement

The contributions of this article are summarized as follows:
  • A shared PRM codebook that embeds zero patterns in the second-order RM codes according to a binary vector space partition principle is employed as the inner codebook for the coded compressed sensing protocol. On this basis, we propose an enhanced algorithm, called the "list PRM projection algorithm", that uses a list of candidates to hinder error spreading during the detection of each layer.
  • As for the multi-sequence scenario, an iterative list PRM projection algorithm is proposed. More specifically, during each loop we consider all signals except the current user's information as interference. We first remove the interference and then employ the proposed list projection algorithm to recover the PRM sequence. The PRM estimates are then inserted into the channel estimator to enhance precision. After that, the refined channel estimates are utilized in the subsequent iteration to further improve the PRM detection. As a result, the proposed iterative list PRM projection algorithm offers significant advantages regarding the convergence rate.
  • The theoretical successful detection probabilities of the proposed list projection and iterative list projection algorithms are analyzed mathematically, and the results demonstrate that: (i) the simulation results illustrate the factors that affect the efficiency of the list PRM projection algorithm and validate that the recovery reliability of the first few layers is crucial to the overall performance (see Figure 1); (ii) the simulation results further explore how the relation between the rank $\Upsilon$ and $m$ in the PRM sequence properties affects the theoretical successful detection probability (see Figure 2, Figure 3 and Figure 4); (iii) we verify that the theoretical results are consistent with the simulation results (see Figure 5 and Figure 7).
  • We substitute the inner decoder of the scheme in [33] with the proposed iterative list PRM projection algorithm. Simulations over the quasi-static Rayleigh fading multiple-access channel (MAC) are performed numerically (see Figure 8), and we summarize the observations as follows: (i) the proposed scheme is superior to the OMP-based inner-code detection, i.e., the chance of successful recovery is significantly improved since the PRM detector and the channel estimator share information, and the negative impacts can be further reduced by eliminating the sequences that are incorrectly detected in each operation; (ii) by increasing the number of candidates in the first few layers, we can boost the original PRM projection algorithm's performance, which further confirms the above proof of the importance of ensuring the reliable recovery of the first few layers; (iii) the simulation results of the overall URA system confirm that the packetized URA with the proposed iterative list projection detection works better than the benchmarks in terms of the number of active users it can support in each slot and the energy per bit needed to meet an expected error probability.
Figure 1. Given $m$, $N_0 = 10$ and $\Upsilon$, the variation of $\Pr\{P(r)\mid P(1,\ldots,r-1), h\}$ with $r$.
Figure 2. Given $r$, $N_0 = 10$ and $\Upsilon$, the variation of $\Pr\{P(r)\mid P(1,\ldots,r-1), h\}$ with $m$.
Figure 3. Given $r$, $m$ and $\Upsilon$, the variation of $\Pr\{P(r)\mid P(1,\ldots,r-1), h\}$ with $N_0$.
Figure 4. Given $N_0$, $m$ and $\Upsilon$, the variation of $\Pr\{P(r)\mid P(1,\ldots,r-1), h\}$ with $r$.
Figure 5. The comparison of numerical simulation results and theoretical analysis results of the list PRM projection algorithm.
Figure 6. The successful detection probability versus SNR for different schemes, $m = 8$.
Figure 7. The successful detection probability of Algorithm 1 versus SNR.
Figure 8. The simulations of energy efficiency are depicted. The simulated schemes are as follows: the list PRM-tree scheme for $t = 1, \ldots, 3$ from [33], the iterative list PRM-tree scheme for $t = 1, \ldots, 3$, the Reed-Solomon scheme from [23], the list PRM-RS scheme without iteration from [33], and the iterative list PRM-RS schemes for $H = 1$ and $H = 8$, respectively.
The rest of this paper is arranged as follows: in Section 2, we introduce the system model and review the patterned Reed-Muller sequences as well as their projection algorithm. An enhanced list PRM projection algorithm and an iterative list PRM projection algorithm are proposed in Section 3. The theoretical probabilities of the proposed algorithms are derived in Section 4. The simulation results are presented in Section 5, and the paper is concluded in Section 6.

1.2. Conventions

It is stipulated that all vectors, whether complex or binary, are column vectors. We use $\mathbb{F}_2^m$ to represent the $m$-dimensional vector space over the finite (binary) field in this paper. $\mathbf{P}_m$ denotes a matrix of size $m \times m$. $\mathbf{P}^T$ stands for the transpose of the matrix $\mathbf{P}$, and $\mathbf{P}^{-T}$ stands for the inverse transpose of the matrix $\mathbf{P}$. The superscript $H$ stands for the conjugate transpose of a matrix. The vectors $\mathbf{0}_N$ and $\mathbf{1}_N$ are the all-zero and all-one vectors of size $N \times 1$, respectively. We use $\mathcal{A} = \{a_i\}_{i=1}^{I}$ to denote the set of cardinality $I$ consisting of the components $a_i$ $(i = 1, \ldots, I)$. The vectors $\mathbf{B}_1 \in \mathbb{C}^{T_1 \times 1}$ and $\mathbf{B}_2 \in \mathbb{C}^{T_2 \times 1}$ are stacked together to produce $\mathbf{B} = [\mathbf{B}_1; \mathbf{B}_2] \in \mathbb{C}^{(T_1+T_2) \times 1}$. $\mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ stands for a complex standard normal random vector. The notation $x \sim \mathcal{N}(\mu, \sigma^2)$ means that $x$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$, whose probability density function (PDF) is written as $f_{\mathcal{N}}(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, while $x \sim \mathcal{CN}(\mu, \sigma^2)$ denotes that the random variable $x$ follows a circularly symmetric complex Gaussian distribution with PDF $f_{\mathcal{CN}}(x;\mu,\sigma^2) = \frac{1}{\pi\sigma^2}\exp\left(-\frac{|x-\mu|^2}{\sigma^2}\right)$.
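As a quick numerical reference for the circularly symmetric complex Gaussian PDF defined above, a minimal sketch is given below; the helper name `f_cn` is our own and is not part of the paper's scheme:

```python
import math

def f_cn(x, mu=0.0, sigma2=1.0):
    """PDF of the circularly symmetric complex Gaussian CN(mu, sigma2):
    f(x) = exp(-|x - mu|^2 / sigma2) / (pi * sigma2)."""
    return math.exp(-abs(x - mu) ** 2 / sigma2) / (math.pi * sigma2)
```

For example, `f_cn(0)` evaluates the standard CN(0, 1) density at the origin, which equals $1/\pi$.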

2. System Model

Let us reconsider the system of the CCS protocol. $K$ prospective users exist in the URA model, and only $K_a$ of them are engaged in accessing the system owing to the system's occasional use. This case typically fulfills $K_a \ll K$ in the mMTC+ circumstance.
As for the transmitter in practical transmission, a $B$-bit message is carried by each active user as part of the set $\{W_1, W_2, \ldots, W_{K_a}\}$ for accessing the system. We let $N = 2^B$, and each message $W_k$ $(1 \le k \le K_a)$ is encoded into an outer error-correction codeword of length $H_N$; thus $W_k \in [N]$, where $[N] = \{1, \ldots, N\}$. Furthermore, a length-$N = 2^m$ sequence $\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(k,n)$ represents an inner PRM code randomly picked from a shared codebook $\Gamma$ that is sent by the $k$-th user within the $n$-th chunk, where $1 \le n \le H_N$. There is a one-to-one match between the sequence $\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}$ and a partitioned message segment of length $B/H_N$ bits. Using the slot-pattern-control rule from [33], we arrange the $H_N$ PRM sequences into $N_T$ chunks, which leads to the system employing $T = N \cdot N_T$ channel uses to operate a slot-controlled transmission.
As for the receiver of the system, the outer decoder couples the divided information chunks after detecting codewords in each slot. The missed detection rate (MDR) and the false-alarm rate (FAR), which are denoted as $P_e \triangleq \frac{1}{K_a}\sum_{k \in \mathcal{K}_a}\Pr\{W_k \notin \hat{\mathcal{D}}(\mathbf{Y})\}$ and $P_f \triangleq \Pr\{\hat{\mathcal{D}}(\mathbf{Y}) \not\subseteq \{W_k \mid k \in \mathcal{K}_a\}\}$ $(\mathcal{K}_a = \{1, \ldots, K_a\})$, respectively, are used as the main assessments for the performance evaluation from the standpoint of message transmission.
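The two error metrics can be made concrete with a small sketch. The hypothetical helper `error_rates` below evaluates the empirical MDR and FAR from the set of transmitted messages and the receiver's decoded list, under the common convention that the FAR is the fraction of decoded messages that no active user sent:

```python
def error_rates(sent, decoded):
    """Empirical MDR (per-user error) and FAR from message sets.

    sent    -- set of messages transmitted by the K_a active users
    decoded -- set of messages output by the receiver's list decoder
    Illustrative helper, not the paper's exact evaluation code.
    """
    missed = len(sent - decoded)               # messages never recovered
    false_alarms = len(decoded - sent)         # decoded messages no one sent
    p_e = missed / len(sent)                   # MDR: averaged over active users
    p_f = false_alarms / max(len(decoded), 1)  # FAR: fraction of spurious outputs
    return p_e, p_f
```

For example, with four transmitted messages of which one is missed and one spurious message is decoded, both rates evaluate to 0.25.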
For the sake of clarity, the geometric properties of PRM codes and the decoder of the slot-controlled URA structure are briefly reviewed below.

2.1. Patterned Reed-Muller Sequence

A new version of the Binary Subspace Chirp (BSSC) [34,35,36], which embeds zero patterns in the second-order RM codes according to the partition rule of the binary vector space, is obtained by reducing the subspace matrix $\mathbf{H}_\chi$ to $\mathbf{I}_\chi$, where $\chi \subseteq [m]$ indicates the possible subsets of rank $\Upsilon \in \{1, 2, \ldots, m\}$ under full dimension $m$.
The simplified sequence is denoted as the patterned Reed-Muller (PRM) sequence and can be written as follows:
$$\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(\mathbf{v}) = \begin{cases}\frac{1}{\sqrt{2^{\Upsilon}}}\, i^{\,2\mathbf{w}^T\mathbf{B}|_{1}^{\Upsilon}+\mathbf{w}^T\hat{\mathbf{P}}\mathbf{w}}, & \text{if } \mathbf{v} = \mathbf{I}_\chi\mathbf{w}+\mathbf{I}_{\tilde\chi}\mathbf{B}|_{\Upsilon+1}^{m}, \\ 0, & \text{otherwise},\end{cases} \tag{1}$$
where $\mathbf{w} \in \mathbb{F}_2^{\Upsilon}$, the symmetric matrix $\hat{\mathbf{P}}$ and the vector $\mathbf{B}$ are the dominant parameters of RM$(\Upsilon, 2)$; besides, the sub-vector $\mathbf{B}|_{\Upsilon+1}^{m}$ identifies the last $(m-\Upsilon)$-bit segment of the $m$-length vector $\mathbf{B}$, and the subspace matrix $\mathbf{I}_\chi$ is of size $(m \times \Upsilon)$ with column vectors $\mathbf{e}_{\chi(1)}, \mathbf{e}_{\chi(2)}, \ldots, \mathbf{e}_{\chi(\Upsilon)}$, where $\mathbf{e}_{\chi(r)}$ is the unit vector that is non-zero at the $\chi(r)$-th position, $1 \le r \le \Upsilon$. Similarly, the matrix $\mathbf{I}_{\tilde\chi}$ follows the same rule.
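For intuition, in the full-rank special case $\Upsilon = m$ the PRM codeword reduces to an ordinary second-order RM chirp with no zero pattern. The sketch below generates such a sequence; it is illustrative only (the rank-deficient patterned case would additionally zero out all entries off the coset selected by $\mathbf{I}_\chi$), and the helper name `rm2_sequence` is our own:

```python
import itertools
import numpy as np

def rm2_sequence(P, b):
    """Second-order Reed-Muller chirp (full-rank special case of PRM).

    P -- m x m binary symmetric matrix, b -- length-m binary vector.
    Returns the length-2^m sequence c(w) = i^(2 b^T w + w^T P w) / sqrt(2^m),
    enumerating w over F_2^m in lexicographic order.  Sketch only.
    """
    m = len(b)
    c = np.zeros(2 ** m, dtype=complex)
    for idx, bits in enumerate(itertools.product((0, 1), repeat=m)):
        w = np.array(bits)
        exponent = (2 * (b @ w) + w @ P @ w) % 4  # phase exponent of i, mod 4
        c[idx] = 1j ** exponent
    return c / np.sqrt(2 ** m)

# small demonstration for m = 3
P_demo = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1]])
b_demo = np.array([1, 0, 1])
c_demo = rm2_sequence(P_demo, b_demo)
```

Every non-zero entry has magnitude $2^{-m/2}$, so the sequence has unit energy, matching the $1/\sqrt{2^{\Upsilon}}$ normalization in (1) when $\Upsilon = m$.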
For the complex-domain sequence, PRM codes have the following geometric property: for any $\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}$, the PRM sequence satisfies
$$\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_r}\cdot\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} = (-1)^{\mathbf{B}^T(\mathbf{e}_r;\mathbf{0}_{m-\Upsilon})+(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}}\cdot\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}, \tag{2}$$
where $\hat{\mathbf{P}}_r = \hat{\mathbf{P}}\mathbf{e}_r$. According to (2), the PRM sequence has a specific projective property obtained by letting $\mathbf{z} = \mathbf{0}_{m-\Upsilon}$:
$$\lambda_{PRM,\varpi}^{(r)}(\mathbf{0}_{m-\Upsilon})\cdot\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} = \begin{cases}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}, & \text{if } \varpi = (-1)^{B_r}; \\ \mathbf{0}, & \text{if } \varpi \neq (-1)^{B_r},\end{cases} \tag{3}$$
where
$$\lambda_{PRM,\varpi}^{(r)}(\mathbf{0}_{m-\Upsilon}) = \frac{\mathbf{I}_N + \varpi\cdot\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{I}_\chi\hat{\mathbf{P}}_r}}{2} \tag{4}$$
is a projection operator for PRM sequences.

2.2. Detection Algorithm of Multi-PRM Sequence

The signal received at the receiver in the $n$-th slot is expressed as
$$\mathbf{y}_n = \sum_{k=1}^{K_a} h_{k,n}\cdot\mathbf{x}_{k,n} + \mathbf{n}_n, \tag{5}$$
where $h_{k,n} \sim \mathcal{CN}(0,1)$ represents the channel coefficient between the active user $k$ and the access point (AP) in the $n$-th chunk, assumed constant for the duration of a user's transmission, while $\mathbf{n}_n \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ is the complex additive white Gaussian noise (AWGN). The receiver carries out the detection algorithm suggested in [33]. The efficiency of the method in [33] can be improved using a modified version based on list decoding, which is described in the next section.
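A minimal simulation of the slot observation $\mathbf{y}_n = \sum_k h_{k,n}\mathbf{x}_{k,n} + \mathbf{n}_n$, with i.i.d. Rayleigh fading coefficients, might look as follows; `received_slot` is a hypothetical helper with stand-in codewords, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def received_slot(codewords, N0, rng=rng):
    """Simulate one slot observation y_n = sum_k h_k x_k + n.

    codewords -- list of length-N complex inner codewords (stand-ins)
    N0        -- noise variance per complex dimension
    Each channel coefficient h_k ~ CN(0,1) is drawn i.i.d., matching the
    quasi-static Rayleigh fading assumption in the text.
    """
    N = len(codewords[0])
    y = np.zeros(N, dtype=complex)
    for x in codewords:
        h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        y += h * np.asarray(x)
    noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(N0 / 2)
    return y + noise
```

With `N0 = 0` and all-zero codewords the observation is exactly zero, which is a convenient sanity check.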

3. Iterative List PRM Projection Algorithm-based Inner-Code Detector

Our proposed detection scheme for inner codes is described in this section. In addition to the modified list PRM projection detection adopted for a single sequence, we propose an iterative list PRM projection algorithm implemented at the AP for multiple sequences.

3.1. List PRM Projection Algorithm

For the single-user scenario, the received signal is $\mathbf{y} = \mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} + \mathbf{n}$, and the enhanced algorithm based on the detection proposed in [33] is executed. Specifically, in this single-user scenario, we propose a modified algorithm that utilizes a candidate-saving technique during each layer of detection to boost performance. The particular steps of Algorithm 1 are outlined in the following.
Three elements $\mathbf{I}_\chi$, $\hat{\mathbf{P}}$ and $\mathbf{B}$ of the PRM sequence need to be recovered, and the subspace matrix $\mathbf{I}_\chi$ should be retrieved first. Since the rank is unknown, we go over every conceivable rank $1 \le \Upsilon \le m$ and set aside $L$ potential subspace matrices for each one (Lines 3 and 4): after we calculate $F_{\mathrm{sum}}(\mathbf{f}_\chi^{(\Upsilon)}) = \sum_{j=1}^{2^{(m-\Upsilon)}}\mathbf{y}^H\mathbf{E}_{\mathbf{0},\mathbf{f}_\chi^{(\Upsilon)}(j)}\,\mathbf{y}$ for every $\mathbf{f}_\chi^{(\Upsilon)}(j) \in \mathbb{F}_2^m$, we employ the relation $\{\mathbf{f}_\chi^{(\Upsilon)}(j)\}_{j=1}^{2^{(m-\Upsilon)}} = \mathrm{span}(\tilde{\mathbf{I}}_\chi^{(\Upsilon)})$ and seek the largest $2^{(m-\Upsilon)}$ values of $F_{\mathrm{sum}}$, so that the set $\{\mathbf{f}_\chi^{(\Upsilon)}(j)\}_{j=1}^{2^{(m-\Upsilon)}}$ and the subspace matrix $\tilde{\mathbf{I}}_\chi^{(\Upsilon)}$ can be identified. We finally keep $L$ candidates as $\mathcal{I}_\chi^{(\Upsilon)} = \{\tilde{\mathbf{I}}_\chi^{(\Upsilon)}(l)\}_{l=1}^{L}$ for each $\Upsilon$.
The recovery of the remaining elements $\hat{\mathbf{P}}$, $\mathbf{B}$ depends on the subspaces in $\mathcal{I}_\chi^{(\Upsilon)}$. In the following, we decode the vector $\hat{\mathbf{P}}_l$ and the bits $\mathbf{B}_l$ in a layer-by-layer manner, and we keep $H$ parent candidates for the next layer at the end of each iteration. We initialize the corresponding parameters in Line 6: we denote the list of updated symmetric matrices and vectors (under $\tilde{\mathbf{I}}_\chi^{(\Upsilon)}(l)$) as $\mathcal{H}_l$ during each layer, and update the list of search spaces $\{\mathcal{F}_{l,1}(h)\}_{h=1}^{H}$ using (6). During the iteration, for each candidate derived from the previous layers, we obtain a list of $H$ candidates for the current layer (Lines 8-11): we first update the search space from the previous knowledge using (6) (Lines 6 and 19). The detected layers are denoted as $\mathcal{R}$ ($\mathcal{R} = 1, \ldots, r'-1$), and we utilize the recovered vectors $\hat{\mathbf{P}}_{r,l}(h)$ ($\hat{\mathbf{P}}_{r,l}(h)$ being the $h$-th candidate for the $r$-th column vector of $\hat{\mathbf{P}}$ under subspace $\mathcal{I}_\chi^{(\Upsilon)}(l)$) to fill out the symmetric matrix; thus, the matrix at the $r'$-th layer can be represented as $\underline{\hat{\mathbf{P}}}_{\mathcal{R}}(h)$. On this basis, the search space is limited to
$$\mathcal{F}_{l,r'}(h) = \left\{\mathbf{f} \in \mathbb{F}_2^m \,\middle|\, f_i = \underline{\hat{\mathbf{P}}}_{\mathcal{R}}(h)(i, r') \text{ for all } i \in \mathcal{R}\right\}, \tag{6}$$
where the value of $\underline{\hat{\mathbf{P}}}_{\mathcal{R}}(h)(i, r')$ refers to the entry in the $i$-th row and $r'$-th column of the matrix $\underline{\hat{\mathbf{P}}}_{\mathcal{R}}(h)$. Next, we derive $H$ paths for each candidate of the previous decoding (Lines 9 and 10): for all vectors $\mathbf{f} \in \mathcal{F}_{l,r'}(h)$, we compute $F_{\mathrm{sum}}(\mathbf{f}) = \sum_{j=1}^{2^{(m-\Upsilon)}}\mathbf{y}^H\mathbf{E}_{\mathcal{I}_\chi^{(\Upsilon)}(l)\cdot\mathbf{e}_{r'},\,\mathbf{f}}\,\mathbf{y}$ and search for the $H$ largest values of $F_{\mathrm{sum}}$, from which the corresponding $H$ sets $\{\mathbf{f}_j\}_{j=1}^{2^{(m-\Upsilon)}}$ are derived. Based on $\{\mathbf{f}_j\}_{j=1}^{2^{(m-\Upsilon)}} = \mathrm{span}(\tilde{\mathbf{I}}_\chi^{(\Upsilon)}(l)) + \mathcal{I}_\chi^{(\Upsilon)}(l)\hat{\mathbf{P}}_{r',l}\mathbf{e}_{r'}$, the vector $\hat{\mathbf{P}}_{r',l}(h)$ can be obtained. Once the iterations are complete, we finally keep $\{\hat{\mathbf{P}}_{r',l}(h)\}_{h=1}^{H}$ out of the $2H$ possibilities.
As for the detection of $\mathbf{B}_l(h)$ (Lines 14-17), we set $\mathbf{z}^{(m-\Upsilon)} = \mathbf{0}_{m-\Upsilon}$ and compute $F_r(\mathbf{f}) = \mathbf{y}^H\mathbf{E}_{\mathcal{I}_\chi^{(\Upsilon)}(l)\cdot\mathbf{e}_r,\,\mathcal{I}_\chi^{(\Upsilon)}(l)\hat{\mathbf{P}}_{r,l}\mathbf{e}_r}\,\mathbf{y}$; then we obtain $\mathrm{sign}(F_r(\mathbf{f})) = (-1)^{B_{r,l}}$. In light of this, $B_{r,l}$ can be obtained as follows:
$$B_{r,l} = \begin{cases} 1, & \text{if } \mathrm{sign}(F_r(\mathbf{f})) = -1; \\ 0, & \text{if } \mathrm{sign}(F_r(\mathbf{f})) = 1. \end{cases} \tag{7}$$
Algorithm 1: The list PRM projection algorithm.
Line 19 updates all parameters for the next layer of detection. In Line 23, after conducting all iterations, we evaluate each rank's $(L \times H)$ results and retain the optimal one. The final result is selected over all ranks at the end of the procedure (Line 25).
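The candidate-saving mechanism of Algorithm 1 is, at its core, a beam search over layers: the $H$ best partial decoding paths survive each layer instead of a single greedy choice. The generic sketch below illustrates this idea; the function and its arguments are illustrative abstractions of the layer scores $F_{\mathrm{sum}}$, not the paper's exact interface:

```python
def list_layered_search(num_layers, candidates_per_layer, score, H):
    """Generic H-path beam search mirroring Algorithm 1's candidate saving.

    At every layer the surviving partial paths are extended by every
    candidate symbol, and only the H highest-scoring extensions are kept,
    which protects the early layers against error propagation.
    `score` maps a partial path (tuple) to a real number.  Sketch only.
    """
    beams = [()]  # start with the single empty path
    for layer in range(num_layers):
        extended = [path + (c,) for path in beams
                    for c in candidates_per_layer[layer]]
        extended.sort(key=score, reverse=True)
        beams = extended[:H]          # keep the H best parent candidates
    return beams[0]                   # highest-scoring full path
```

For instance, with per-layer candidates `[[0, 1], [0, 2], [0, 3]]`, the score being the path sum, and `H = 2`, the search returns the path `(1, 2, 3)`.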

3.2. Iterative List PRM projection Algorithm

We consider a scenario in which multiple active users are simultaneously present in the system. As described below, we detect multiple PRM sequences iteratively. We begin by introducing some notation: at most $K_0$ users can be detected. We denote by $\hat{\mathbf{C}}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(s,k)$ the detected PRM sequence and by $\hat{h}_k(s)$ the channel estimate, both for the $k$-th user during the $s$-th iteration ($1 \le s \le S_{max}$). The particular steps of Algorithm 2 are outlined in the following.
Algorithm 2: The iterative list PRM projection algorithm.
Firstly, the PRM sequence and the channel coefficient are initialized as $\hat{\mathbf{C}}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(0,k) = \mathbf{0}_N$ and $\hat{h}_k(0) = 0$ for $s = 0$, respectively (Line 1). We then denote the interference for user $k$ during the $s$-th iteration as
$$\Lambda_k(s) = \sum_{k'=1,\, k'\neq k}^{K_0} \hat{h}_{k'}(s')\cdot\hat{\mathbf{C}}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(s', k'), \tag{9}$$
where
$$s' = \begin{cases} s, & \text{if } 1 \le k' \le k-1, \\ s-1, & \text{if } k+1 \le k' \le K_0. \end{cases} \tag{10}$$
Then, in order to retrieve the PRM sequence $\hat{\mathbf{C}}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(s,k)$, we feed the signal $\big(\mathbf{y} - \Lambda_k(s)\big)$ into Algorithm 1.
Next, after detecting each user's PRM sequence, the detected sequences are inserted into the linear least-squares channel estimator to further improve accuracy, as specified in (8) of Algorithm 2. Owing to the exchange of information between the sequence detector and the channel estimator, the effectiveness gradually improves, and the procedure terminates once the results have converged or the maximum number of iterations has been reached.
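The iterative structure of Algorithm 2 can be sketched as a successive-interference-cancellation loop. In the sketch below, `detect` stands in for Algorithm 1 and `estimate_channel` for the linear least-squares estimator; both interfaces, and the toy orthonormal codebook in the demonstration, are our own simplifications:

```python
import numpy as np

def iterative_sic(y, detect, estimate_channel, K0, S_max):
    """Skeleton of the iterative list detection loop (Algorithm 2 sketch).

    For user k, the superposition of all other users' freshest estimates is
    subtracted before single-user detection, and the refreshed sequence list
    is fed back into the channel estimator.
    """
    N = len(y)
    seqs = [np.zeros(N, dtype=complex) for _ in range(K0)]   # C-hat init
    h = np.zeros(K0, dtype=complex)                          # h-hat init
    for s in range(S_max):
        for k in range(K0):
            # interference: every user except k, at its freshest estimate
            interference = sum(h[j] * seqs[j] for j in range(K0) if j != k)
            seqs[k] = detect(y - interference)
            h = estimate_channel(y, seqs)                    # joint LS refresh
    return seqs, h

# toy demonstration with an orthonormal two-word codebook
_e0 = np.array([1.0, 0, 0, 0], dtype=complex)
_e1 = np.array([0, 1.0, 0, 0], dtype=complex)
_codebook = [_e0, _e1]

def _detect(residual):
    # pick the codeword most correlated with the residual
    return max(_codebook, key=lambda c: abs(np.vdot(c, residual)))

def _estimate(y, seqs):
    # least-squares channel refresh from the current sequence estimates
    return np.linalg.lstsq(np.column_stack(seqs), y, rcond=None)[0]

demo_y = 2 * _e0 + 3 * _e1
demo_seqs, demo_h = iterative_sic(demo_y, _detect, _estimate, K0=2, S_max=2)
```

In this noiseless toy case the loop converges after one sweep, and the reconstructed superposition $\hat{h}_1\hat{\mathbf{C}}_1 + \hat{h}_2\hat{\mathbf{C}}_2$ matches the observation exactly.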

4. Performance Analysis

4.1. Successful Recovery Probability of the List PRM Projection Algorithm

For detecting a single PRM sequence, Algorithm 1 recovers the column vectors of the matrix $\hat{\mathbf{P}}$ layer by layer. In this section, we derive the probability of successfully recovering each layer element $\hat{\mathbf{P}}_r$ for $1 \le r \le m$. The corresponding conclusions are summarized in Lemma 1 and Theorem 1.
Lemma 1.
Suppose that the input signal of Algorithm 1 is $\mathbf{y} = h\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} + \mathbf{n}$, where $h \sim \mathcal{CN}(0,1)$ is the channel coefficient, and $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ denotes the complex additive white Gaussian noise (AWGN). The notation $P(r)$ $(1 \le r \le m)$ represents the event "the vector $\hat{\mathbf{P}}_r$ is recovered successfully", and the notation $P(1,\ldots,r-1)$ denotes the event "the vectors $\hat{\mathbf{P}}_1, \ldots, \hat{\mathbf{P}}_{r-1}$ are correctly recovered". On this basis, given $h$ and $P(1,\ldots,r-1)$, for any $\mathbf{z} \in \mathbb{F}_2^{m-\Upsilon}$, $\Pr\{P(r)\mid P(1,\ldots,r-1), h\}$ can be expressed as
$$\Pr\{P(r)\mid P(1,\ldots,r-1), h\} \approx Q\left(-\frac{2^{\Upsilon}|h|^2\cdot(-1)^{B_r+(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}}}{2\sigma_{F,r}}\right)\cdot\left[Q\left(-\frac{2^{\Upsilon}|h|^2\cdot(-1)^{B_r+(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}}}{2\sigma_{F,r}}\right)\right]^{2^{(\Upsilon-r+1)}-1}. \tag{11}$$
In (11), the function $Q(\cdot)$ is $Q(x) = \int_x^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{t^2}{2}\right)dt$, and the variance $\sigma_{F,r}^2$ is defined as
$$\sigma_{F,r}^2 = 2^{m-1}\left(\frac{2|h|^2 N_0}{2^{r-1}} + \left(\frac{N_0}{2^{r-1}}\right)^2\right). \tag{12}$$
Theorem 1.
Taking into account the channel coefficient $h$ and the result of (11), the average probability of successful detection for Algorithm 1 can be calculated as
$$P_{succ} = \int_{h} f_{\mathcal{CN}}(h; 0, 1)\cdot\check{P}(\Upsilon, N_0 \mid h)\, dh, \tag{13}$$
where $\check{P}(\Upsilon, N_0 \mid h) = \prod_{r=1}^{\Upsilon}\Pr\{P(r)\mid P(1,\ldots,r-1), h\}$ and $f_{\mathcal{CN}}(h; 0, 1) = \frac{1}{\pi}\exp(-|h|^2)$.
Proof. We assume that the subspace matrix $\mathbf{I}_\chi$ is decoded correctly and $H = 1$. Define the quantity $F_S(\mathbf{f}_r)$ related to $\mathbf{f}_r \in \mathbb{F}_2^m$ as
$$F_S(\mathbf{f}_r) = (h\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} + \mathbf{n})^H\,\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{f}_r}\,(h\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} + \mathbf{n}) = \underbrace{|h|^2\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{f}_r}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}}_{F_A(\mathbf{f}_r)} + \underbrace{\mathbf{n}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{f}_r}\mathbf{n}}_{Z_1(\mathbf{f}_r)} + \underbrace{h\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{f}_r}\mathbf{n} + \mathbf{n}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_r,\,\mathbf{f}_r}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}}_{Z_2(\mathbf{f}_r)}. \tag{14}$$
According to (14), $Z_1(\mathbf{f}_r)$ and $Z_2(\mathbf{f}_r)$ are the sources of interference, and we use $Z(\mathbf{f}_r) = Z_1(\mathbf{f}_r) + Z_2(\mathbf{f}_r)$ to collect all interference terms. The following discussion demonstrates Lemma 1 through mathematical induction.
(1) When $r = 1$: according to Algorithm 1, the occurrence of the event $P(1)$ is described as $\sum_{\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}} F(\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}) > \sum_{\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}} F(\mathbf{I}_{\tilde\chi}\mathbf{z}+\bar{\bar{\mathbf{p}}}_{\chi,1})$ (we omit the subscript $S$ and write $F(\cdot)$ in this section), where $\tilde{\mathbf{p}}_{\chi,1} = \mathbf{I}_\chi\hat{\mathbf{P}}\mathbf{e}_1$, and the symbol $\bar{\bar{\mathbf{p}}}_{\chi,1}$ denotes any possible vector that is different from $\tilde{\mathbf{p}}_{\chi,1}$. Since the vector $\bar{\bar{\mathbf{p}}}_{\chi,1}$ has a total of $(2^{\Upsilon}-1)$ possibilities, we denote the probability of the event $P(1)$ as follows:
$$\Pr\{P(1)\mid h\} = \Pr\left\{\sum_{\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}} F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}\big) > \max_{j}\sum_{\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}} F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\bar{\bar{\mathbf{p}}}_{\chi,1}(j)\big)\right\} = \prod_{j=1}^{2^{\Upsilon}-1}\Pr\left\{\sum_{\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}} F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}\big) > \sum_{\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}} F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\bar{\bar{\mathbf{p}}}_{\chi,1}(j)\big)\right\}. \tag{15}$$
In (15), $\{\bar{\bar{\mathbf{p}}}_{\chi,1}(j)\}_{j=1}^{2^{\Upsilon}-1}$ is the set of all possible vectors except $\tilde{\mathbf{p}}_{\chi,1}$. Since each term within the summation obeys the same distribution, calculating (15) is equivalent to computing its approximate probability as
$$\prod_{j=1}^{2^{\Upsilon}-1}\Pr\left\{F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}\big) > F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\bar{\bar{\mathbf{p}}}_{\chi,1}(j)\big)\right\}. \tag{16}$$
For convenience, we let $\tilde{\Xi}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}$ and $\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde\chi}\mathbf{z}+\bar{\bar{\mathbf{p}}}_{\chi,1}$. Since the $(2^{\Upsilon}-1)$ events in (16) are independent of one another, we first calculate the probability of a single event:
$$\begin{aligned}
&\Pr\left\{F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}\big) > F\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\bar{\bar{\mathbf{p}}}_{\chi,1}\big)\right\} \\
&\overset{(a)}{=} \Pr\left\{\big|F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)\big| > \big|Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big)\big|\right\} \\
&\overset{(b)}{=} \Pr\left\{F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) > \big|Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big)\big|\right\}\cdot\Pr\left\{F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) > 0\right\} \\
&\quad\; + \Pr\left\{-\Big(F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)\Big) > \big|Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big)\big|\right\}\cdot\Pr\left\{F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) < 0\right\} \\
&\overset{(c)}{\approx} \Pr\left\{F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) > \big|Z\big(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)\big)\big|\right\}\cdot\Pr\left\{F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big)+Z\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) > 0\right\},
\end{aligned} \tag{17}$$
where $F_A(\tilde{\Xi}_{\chi,\mathbf{z}}(1)) = |h|^2\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}$ as in (14); the equality (a) is based on $\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} = 0$, and (b) and (c) are obtained according to the following relations:
$$\Pr(|A| \ge |B|) = \Pr(A > |B|)\Pr(A > 0) + \Pr(-A > |B|)\Pr(A < 0) \approx \Pr(A > |B|)\Pr(A > 0), \tag{18}$$
and $\big||A| - |B|\big| \le |A \pm B| \le |A| + |B|$.
To calculate the two probabilities in the result of (17), it is necessary to study the distributions of the variables $|h|^2\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}$, $Z(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$ and $Z(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1))$, respectively. We prioritize the study of $|h|^2\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}$: omitting the factor $1/2^{r}$ in the representation of $\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}$, we denote $F_A(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$ as follows
$$\begin{aligned}
F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) &= |h|^2\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} \\
&= |h|^2\cdot i^{(\mathbf{I}_\chi\mathbf{e}_1)^T\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\sum_{\mathbf{w}\in\mathbb{F}_2^m}\overline{\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}\big(\mathbf{w}+\mathbf{I}_\chi\mathbf{e}_1\big)}\cdot(-1)^{\mathbf{w}^T\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\cdot\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(\mathbf{w}) \\
&= |h|^2\sum_{\mathbf{w}\in\mathbb{F}_2^m} i^{2(\hat{\mathbf{P}}_1;\mathbf{0})^T\mathbf{Q}_1\mathbf{w} + P_{1,1} + 2\mathbf{B}^T(\mathbf{e}_1;\mathbf{0})}\cdot i^{(\mathbf{I}_\chi\mathbf{e}_1)^T\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\cdot(-1)^{\mathbf{w}^T\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\cdot\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w}+(\mathbf{e}_1;\mathbf{0}),\Upsilon}\cdot\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w},\Upsilon} \\
&= |h|^2\cdot i^{2\mathbf{B}^T(\mathbf{e}_1;\mathbf{0}) + P_{1,1}}\sum_{\mathbf{w}\in\mathbb{F}_2^m} i^{2(\hat{\mathbf{P}}_1;\mathbf{0})^T\mathbf{Q}_1\mathbf{w} + 2\mathbf{w}^T\tilde{\Xi}_{\chi,\mathbf{z}}(1) + (\mathbf{I}_\chi\mathbf{e}_1)^T\tilde{\Xi}_{\chi,\mathbf{z}}(1)}\cdot\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w}+(\mathbf{e}_1;\mathbf{0}),\Upsilon}\cdot\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w},\Upsilon},
\end{aligned} \tag{19}$$
where $P_{1,1} = \mathbf{e}_1^T\hat{\mathbf{P}}\mathbf{e}_1 = \mathbf{e}_1^T\hat{\mathbf{P}}_1$, and $\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w},\Upsilon} = 1$ in the case of $\mathbf{B}|_{\Upsilon+1}^{m} = (\mathbf{Q}_1\mathbf{w})|_{\Upsilon+1}^{m}$. We then substitute $\tilde{\Xi}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}$ into (19), and the expression becomes:
$$F_A\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) = F_A\big(\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}\big) = |h|^2\cdot i^{2\mathbf{B}^T(\mathbf{e}_1;\mathbf{0}) + 2P_{1,1}}\cdot\sum_{\mathbf{w}\in\mathbb{F}_2^m} i^{2(\mathbf{I}_{\tilde\chi}\mathbf{z})^T\mathbf{w}}\cdot\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w}+(\mathbf{e}_1;\mathbf{0}),\Upsilon}\cdot\phi_{\mathbf{B},\mathbf{Q}_1\mathbf{w},\Upsilon} = |h|^2\cdot i^{2\mathbf{B}^T(\mathbf{e}_1;\mathbf{0}) + 2P_{1,1}}\cdot\sum_{\mathbf{w}\in\mathbb{F}_2^m} i^{2(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}} = 2^{\Upsilon}\cdot|h|^2\,(-1)^{B_1+(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}}, \tag{20}$$
where $B_1 = \mathbf{e}_1^T\mathbf{B}$ and $\mathbf{z}\in\mathbb{F}_2^{m-\Upsilon}$. To sum up, $F_A(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$ takes the value $2^{\Upsilon}\cdot|h|^2\cdot(-1)^{B_1+(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}}$ if and only if $\tilde{\Xi}_{\chi,\mathbf{z}}(1) = \mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}$.
Next, we calculate the distribution of $Z(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$, which we expand into two parts:
$$Z_1\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) = \mathbf{n}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\,\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}}\,\mathbf{n} = i^{(\mathbf{I}_\chi\mathbf{e}_1)^T(\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_1)}\sum_{\mathbf{w}\in\mathbb{F}_2^m}\overline{\mathbf{n}\big(\mathbf{w}+\mathbf{I}_\chi\mathbf{e}_1\big)}\cdot(-1)^{\mathbf{w}^T(\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_1)}\cdot\mathbf{n}(\mathbf{w}). \tag{21}$$
Therefore, the variance of $Z_1(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$ is $2^m\cdot N_0^2$. Following the same rule, $Z_2$ becomes
$$Z_2\big(\tilde{\Xi}_{\chi,\mathbf{z}}(1)\big) = h\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\,\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}}\,\mathbf{n} + \mathbf{n}^H\mathbf{E}_{\mathbf{I}_\chi\mathbf{e}_1,\,\mathbf{I}_{\tilde\chi}\mathbf{z}+\tilde{\mathbf{p}}_{\chi,1}}\,\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi} = h\cdot i^{(\mathbf{I}_\chi\mathbf{e}_1)^T(\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_1)}\sum_{\mathbf{w}\in\mathbb{F}_2^m}\overline{\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}\big(\mathbf{w}+\mathbf{I}_\chi\mathbf{e}_1\big)}\cdot(-1)^{\mathbf{w}^T(\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_1)}\cdot\mathbf{n}(\mathbf{w}) + h\cdot i^{(\mathbf{I}_\chi\mathbf{e}_1)^T(\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_1)}\sum_{\mathbf{w}\in\mathbb{F}_2^m}\overline{\mathbf{n}\big(\mathbf{w}+\mathbf{I}_\chi\mathbf{e}_1\big)}\cdot(-1)^{\mathbf{w}^T(\mathbf{I}_{\tilde\chi}\mathbf{z}+\mathbf{I}_\chi\hat{\mathbf{P}}_1)}\cdot\mathbf{C}_{\hat{\mathbf{P}},\mathbf{B},\mathbf{I}_\chi}(\mathbf{w}). \tag{22}$$
According to (22), the mean of the variable $Z_2(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$ is $0$ and its variance is $2\cdot 2^m|h|^2 N_0$. We denote $\sigma_{F,1}^2 = 2^{m-1}\big(2|h|^2 N_0 + N_0^2\big)$. Since the distribution of $Z(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1))$ is the same as that of $Z(\tilde{\Xi}_{\chi,\mathbf{z}}(1))$, we have $Z(\tilde{\Xi}_{\chi,\mathbf{z}}(1)) \sim \mathcal{CN}(0, 2\sigma_{F,1}^2)$, $Z(\bar{\bar{\Xi}}_{\chi,\mathbf{z}}(1)) \sim \mathcal{CN}(0, 2\sigma_{F,1}^2)$, and $F_A(\tilde{\Xi}_{\chi,\mathbf{z}}(1)) + Z(\tilde{\Xi}_{\chi,\mathbf{z}}(1)) \sim \mathcal{CN}\big(2^{\Upsilon}\cdot|h|^2\cdot(-1)^{B_1+(\mathbf{B}|_{\Upsilon+1}^{m})^T\mathbf{z}},\, 2\sigma_{F,1}^2\big)$.
Based on above distribution, the first term of the last equation in (17) can be approximated as:
Pr | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ + Z ( Ξ ˜ χ , z ( 1 ) ) > Z ( Ξ ¯ ¯ χ , z ( 1 ) ) = 1 Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z ( Ξ ˜ χ , z ( 1 ) ) | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ · Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) > 0 Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z ( Ξ ˜ χ , z ( 1 ) ) | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ · Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) < 0 = 1 2 Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) Z ( Ξ ˜ χ , z ( 1 ) ) | h | 2 C P ^ , B , I χ H E I χ e 1 , Ξ ˜ χ , z ( 1 ) C P ^ , B , I χ · Pr Z 1 ( Ξ ¯ ¯ χ , z ( 1 ) ) + Z 2 ( Ξ ¯ ¯ χ , z ( 1 ) ) > 0 ( d ) 1 Q 2 Υ | h | 2 · ( 1 ) B 1 + ( B | Υ + 1 m ) T z 2 σ F , 1 ,
where (d) follows from $Z_1(\bar{\bar{\Xi}}_{\chi,z}^{(1)}) + Z_2(\bar{\bar{\Xi}}_{\chi,z}^{(1)}) - Z(\tilde{\Xi}_{\chi,z}^{(1)}) \sim \mathcal{N}(0, 4\sigma_{F,1}^2)$. We then compute the latter term of (17) as
$$\Pr\Big\{|h|^2 C_{\hat{P},B,I_\chi}^{H} E_{I_\chi e_1,\, \tilde{\Xi}_{\chi,z}^{(1)}} C_{\hat{P},B,I_\chi} + Z(\tilde{\Xi}_{\chi,z}^{(1)}) > 0\Big\} = \Pr\Big\{Z(\tilde{\Xi}_{\chi,z}^{(1)}) > -|h|^2 C_{\hat{P},B,I_\chi}^{H} E_{I_\chi e_1,\, \tilde{\Xi}_{\chi,z}^{(1)}} C_{\hat{P},B,I_\chi}\Big\} = \Pr\Big\{Z(\tilde{\Xi}_{\chi,z}^{(1)}) > -2^{\Upsilon}|h|^2 (-1)^{B_1+(B|_{\Upsilon+1}^{m})^T z}\Big\} = Q\Big({-\frac{2^{\Upsilon}|h|^2 (-1)^{B_1+(B|_{\Upsilon+1}^{m})^T z}}{\sqrt{2}\,\sigma_{F,1}}}\Big).$$
By substituting (23) and (24) into (16), we finally get
$$\prod_{[\bar{\bar{p}}_{\chi,1}(j)]_{j=1}^{2^{\Upsilon}-1}} \Pr\Big\{ F\big(I_\chi \tilde{z} + \tilde{p}_{\chi,1}\big) > F\big(I_\chi \tilde{z} + \bar{\bar{p}}_{\chi,1}(j)\big) \Big\} \approx Q\Big({-\frac{2^{\Upsilon}|h|^2 (-1)^{B_1+(B|_{\Upsilon+1}^{m})^T z}}{\sqrt{2}\,\sigma_{F,1}}}\Big) \cdot \bigg[Q\Big({-\frac{2^{\Upsilon}|h|^2 (-1)^{B_1+(B|_{\Upsilon+1}^{m})^T z}}{2\,\sigma_{F,1}}}\Big)\bigg]^{2^{\Upsilon}-1}.$$
Equation (25) is consistent with (11) in Lemma 1, so Lemma 1 holds for $r = 1$. (2) When $2 \le r \le \Upsilon$: Algorithm 1's input signal becomes
$$y_r = \lambda_{PRM,\varpi}(r)\cdot y_{r-1} = C_{\hat{P},B,I_\chi} + \prod_{i=1}^{r-1}\lambda_{PRM,\varpi}(i)\cdot n = C_{\hat{P},B,I_\chi} + \frac{1}{2^{r-1}}\prod_{i=1}^{r-1}\big(I_N + \varpi_i\, E_{I_\chi e_i,\, I_\chi \hat{P}_i}\big)\, n.$$
By substituting (26) into (14), we have $\sigma_{F,r}^2 = 2^{m-1}\big(\frac{2|h|^2 N_0}{2^{r-1}} + \frac{N_0^2}{(2^{r-1})^2}\big)$. Thus, $Z(\tilde{\Xi}_{\chi,z}^{(r)}) \sim \mathcal{CN}(0, 2\sigma_{F,r}^2)$ and $Z(\bar{\bar{\Xi}}_{\chi,z}^{(r)}) \sim \mathcal{CN}(0, 2\sigma_{F,r}^2)$. Furthermore, $F_A(\tilde{\Xi}_{\chi,z}^{(r)}) + Z(\tilde{\Xi}_{\chi,z}^{(r)}) \sim \mathcal{CN}\big(2^{\Upsilon}|h|^2(-1)^{B_r+(B|_{\Upsilon+1}^{m})^T z},\, 2\sigma_{F,r}^2\big)$. The derivation is similar to the $r = 1$ case and is omitted here; the result is consistent with (11) and (12) in Lemma 1.
The PRM sequence is reconstructed successfully if the estimations for all layers are correct. Therefore, Algorithm 1 has the successful detection probability as follows:
$$\check{P}\big(\Upsilon, N_0 \mid h\big) = \prod_{r=1}^{\Upsilon} \Pr\big\{P(r) \mid P(1,\ldots,r-1), h\big\}.$$
Taking the channel coefficient h into account, we get the following equation
$$P_{\mathrm{succ}} = \int_{h} f_{\mathcal{CN}}\big(h; 0, 1\big)\, \check{P}\big(\Upsilon, N_0 \mid h\big)\, \mathrm{d}h,$$
thus, the proof of Theorem 1 is complete. □
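To make (27)–(28) concrete, the following Python sketch averages a per-layer success probability over $h \sim \mathcal{CN}(0,1)$ by Monte Carlo. The function `layer_prob` is a simplified stand-in for Lemma 1's per-layer expression (a single Q-function factor raised to the number of competing cosets) rather than the exact formula; `m`, `ups`, and `n0` correspond to $m$, $\Upsilon$, and $N_0$.

```python
import math
import random

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def layer_prob(h2, m, ups, r, n0):
    # layer-r noise variance from the proof: the noise power halves at
    # every projection, sigma_{F,r}^2 = 2^(m-1)*(2|h|^2 N0/2^(r-1) + (N0/2^(r-1))^2)
    n0r = n0 / 2 ** (r - 1)
    sigma = math.sqrt(2 ** (m - 1) * (2 * h2 * n0r + n0r ** 2))
    # simplified stand-in for Lemma 1 (assumption): one factor per coset
    return (1.0 - qfunc(2 ** ups * h2 / (2 * sigma))) ** (2 ** ups - 1)

def p_succ_mc(m, ups, n0, trials=2000, seed=0):
    # Theorem 1: average the product of per-layer probabilities over
    # h ~ CN(0, 1), i.e. |h|^2 ~ Exp(1)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        h2 = rng.expovariate(1.0)
        p = 1.0
        for r in range(1, ups + 1):
            p *= layer_prob(h2, m, ups, r, n0)
        total += p
    return total / trials
```

With a fixed seed, lowering $N_0$ can only raise every per-layer factor, so the estimate is monotone in the noise level, mirroring the discussion that follows.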
The factors that affect the efficiency of the layer-by-layer PRM projection algorithm are discussed below. According to Lemma 1, the conditional probability $\Pr\{P(r) \mid P(1,\ldots,r-1), h\}$ of the vector $\hat{P}_r$ is governed by four factors, namely $\Upsilon$, $r$, $m$ and $N_0$. Since it is challenging to express their relationship mathematically, we fix three of the factors and track the change in the conditional probability as the fourth varies. From Figure 1–Figure 4, the following results are obtained:
  • Assume that the PRM code is of full rank with $\Upsilon = m$. When $m$ and $N_0$ are given, the value of $\Pr\{P(r) \mid P(1,\ldots,r-1), h\}$ increases as $r$ grows from 1 to $m$ (Figure 1). This illustrates a property of the projection algorithm: if the first few layers are error-free, the remaining layers are highly likely to be recovered correctly. Moreover, the proof above shows that $n_r \sim \mathcal{CN}(0, N_0/2^{r-1})$, i.e., the noise power in each layer decreases exponentially with $r$, so the probability of successful recovery increases layer by layer. Overall, the recovery reliability of the first few layers is crucial to the performance of the layer-by-layer PRM projection algorithm. This motivates us to keep more candidate paths in the first few layers to obtain better detections.
  • As shown in Figure 2, when r , Υ and N 0 are given, Pr P ( r ) | P ( 1 , , r 1 ) , h increases as m increases, which shows that we can improve the performance of the PRM sequence detection algorithm by increasing the length of the PRM sequence.
  • Figure 3 gives the variation of $\Pr\{P(r) \mid P(1,\ldots,r-1), h\}$ with $N_0$ for fixed $r$, $m$ and $\Upsilon$. As $N_0$ increases, the Signal-to-Noise Ratio (SNR) decreases, so $\Pr\{P(r) \mid P(1,\ldots,r-1), h\}$ decreases accordingly.
  • Figure 4 gives the variation of $\Pr\{P(r) \mid P(1,\ldots,r-1), h\}$ with $r$ for fixed $N_0$, $m$ and $\Upsilon$: a larger $\Upsilon$ yields better performance under the same $m$, consistent with the fact that greater signal power results in a higher signal-to-noise ratio. Furthermore, the larger $m$ is, the worse the performance at the same layer $r$, indicating that the closer $\Upsilon$ is to $m$, the better the performance.
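The trends above can be reproduced qualitatively in a few lines of Python. The per-layer success expression below is a simplified surrogate built from the Q-function and the layer-wise noise variance $N_0/2^{r-1}$ derived in the proof; it illustrates the trends and is not Lemma 1 verbatim.

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def layer_success(h2, m, ups, r, n0):
    # simplified per-layer success surrogate (assumption, not Lemma 1 verbatim)
    n0r = n0 / 2 ** (r - 1)
    sigma = math.sqrt(2 ** (m - 1) * (2 * h2 * n0r + n0r ** 2))
    return (1.0 - qfunc(2 ** ups * h2 / (2 * sigma))) ** (2 ** ups - 1)

# trend 1: success grows with the layer index r (noise halves per layer)
p_by_r = [layer_success(1.0, 6, 6, r, 0.5) for r in (1, 2, 3, 4)]
# trend 2: success grows with the sequence length 2^m at full rank ups = m
p_by_m = [layer_success(1.0, m, m, 1, 0.5) for m in (4, 6, 8)]
# trend 3: success falls as the noise power N0 grows
p_by_n0 = [layer_success(1.0, 6, 6, 1, n0) for n0 in (0.1, 0.5, 1.0)]
```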

4.2. Successful Recovery Probability of the Iterative List PRM Projection Algorithm

In this section, Lemma 2 and Theorem 2 analyze the performance of the iterative list PRM projection algorithm in the multi-user scenario.
Lemma 2.
Suppose that the input signal of Algorithm 2 contains a linear combination of all $K_a$ active users' sequences. We let these sequences be detected in the order $(\rho_1, \rho_2, \ldots, \rho_{K_a})$. We denote the event "the $\rho_k$-th recovered sequence has been successfully detected" by $B_{\rho_k}$ and the event "the sequences with order $(\rho_1, \ldots, \rho_{k-1})$ have been successfully detected and subtracted from the received signal" by $B_{\rho_1,\ldots,\rho_{k-1}}$. Then, given the channel parameters $\underline{h} = [h_1, \ldots, h_{K_a}]$ and the event $B_{\rho_1,\ldots,\rho_{k-1}}$, the probability that the event $B_{\rho_k}$ occurs in the first iteration of Algorithm 2 can be approximated as
$$\Pr\big\{B_{\rho_k} \mid B_{\rho_1,\ldots,\rho_{k-1}}, \underline{h}\big\} = \Pr\big\{P_{\rho_k}(1) \mid \underline{h}\big\} \cdot \check{P}\big(\Upsilon_{\rho_k}-1,\, N_0(\Upsilon_{\rho_k}-1) \mid \underline{h}\big).$$
In (29), Pr P ρ k ( 1 ) | h ̲ refers to the probability of P ^ ρ k e 1 being successfully recovered under the condition h ̲ , which is expressed as
$$\Pr\big\{P_{\rho_k}(1) \mid \underline{h}\big\} = \prod_{j=1}^{2^{\Upsilon_{\rho_k}}-1} Q\Bigg({-\frac{2^{\Upsilon_{\rho_k}}|h_{\rho_k}|^2 + \sum_{\rho_{k+1}\le i\le \rho_{K_a}} n_{\rho_k,i}\, 2^{\Upsilon_{\rho_i}}|h_i|^2 - n_{i,j}\, 2^{\Upsilon_{\rho_i}}|h_i|^2}{\sqrt{2}\,\sigma_{\Lambda,1}}}\Bigg) \cdot \bigg[Q\Big({-\frac{2^{\Upsilon_{\rho_k}}|h_{\rho_k}|^2}{\sigma_{\Lambda,1}}}\Big)\bigg]^{2^{\Upsilon_{\rho_k}}-1},$$
where the notation n ρ k , i indicates the number of intersections of positions corresponding to non-zero F ( · ) for the i-th detected sequence and ρ k -th detected sequence. The notation n i , j represents the number of intersections of positions corresponding to non-zero F ( · ) for the i-th detected sequence and the j-th coset of the set related to non-zero F ( · ) for the first layer recovery of the ρ k -th detected sequence. Besides, the variance in (30) is
$$\sigma_{\Lambda,1}^2 = \sum_{\rho=\rho_k}^{\rho_{K_a}} \sum_{\substack{\rho'=\rho_k \\ \rho'\neq\rho}}^{\rho_{K_a}} \big|h_{\rho}^{H} h_{\rho'}\big|^2\, \sigma_{\mu,1}^2 + 2^{m-1}\Big(\sum_{\rho=\rho_k}^{\rho_{K_a}} 2|h_{\rho}|^2 N_0 + N_0^2\Big),$$
in which $\mu$ is the inner product of two PRM sequences of length $2^m$ and $\sigma_{\mu,1}^2$ denotes its variance. The second component $\check{P}\big(\Upsilon_{\rho_k}-1, N_0(\Upsilon_{\rho_k}-1) \mid \underline{h}\big)$ in Equation (29) represents the probability that the matrix $\hat{P}_{\Upsilon_{\rho_k}-1}$ is successfully recovered given $\underline{h}$, where $\hat{P}_{\Upsilon_{\rho_k}-1}$ denotes the submatrix of the symmetric matrix $\hat{P}_{\rho_k}$ with rank $(\Upsilon_{\rho_k}-1)$ for order $\rho_k$. Besides, $N_0(\Upsilon_{\rho_k}-1) = \frac{1}{2}\big(\sum_{\rho=\rho_{k+1}}^{\rho_{K_a}} |h_{\rho}|^2 + N_0\big)$.
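As a quick numerical check, the variance (31) can be evaluated directly; for $K_a = 2$ it reduces to the two-user variance given later in the proof. The function below is a straightforward transcription, with `h` the list of (complex scalar) channel coefficients of the not-yet-cancelled users and `sigma_mu_sq` standing for $\sigma_{\mu,1}^2$.

```python
import itertools

def sigma_lambda_sq(h, sigma_mu_sq, m, n0):
    # cross-user term of (31): |h_rho^H h_rho'|^2 over ordered pairs rho != rho'
    cross = sum(abs(h[i].conjugate() * h[j]) ** 2
                for i, j in itertools.permutations(range(len(h)), 2))
    # noise-driven term: 2^(m-1) * (sum_rho 2|h_rho|^2 N0 + N0^2)
    noise = 2 ** (m - 1) * (2 * sum(abs(x) ** 2 for x in h) * n0 + n0 ** 2)
    return cross * sigma_mu_sq + noise
```

For `h = [h_a, h_b]` the cross term equals $|h_a^H h_b|^2 + |h_a h_b^H|^2$, matching the two-user expression.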
Theorem 2.
Suppose that the input signal of Algorithm 2 contains a linear combination of all $K_a$ active users' sequences. We denote each sequence by an index $k$ $(1 \le k \le K_a)$ and let these sequences be detected in the order $(\rho_1, \rho_2, \ldots, \rho_{K_a})$. Then, given the channel parameters $\underline{h}$, the probability that Algorithm 2 successfully detects sequence $\rho$ $(1 \le \rho \le K_a)$ in the first iteration is
$$\check{\check{P}}_{\rho}\big(K_a, m, N_0 \mid \underline{h}\big) = \Pr\big\{B_{\rho} \mid \underline{h}\big\} + \sum_{k=2}^{K_a} \sum_{(\rho_1,\ldots,\rho_{k-1})\in\mathcal{N}_k} \prod_{k'=1}^{k-1} \Pr\big\{B_{\rho_{k'}} \mid B_{\rho_1,\ldots,\rho_{k'-1}}, \underline{h}\big\} \cdot \Pr\big\{B_{\rho} \mid B_{\rho_1,\ldots,\rho_{k-1}}, \underline{h}\big\},$$
where the set $\mathcal{N}_k$ contains all permutations of $(k-1)$ detection-order indices drawn from the set $\{1,\ldots,K_a\}\setminus\{\rho\}$. Furthermore, by taking the channel parameters into consideration, the average probability of successful detection in the first iteration of the iterative PRM sequence detection can be approximated as
$$\check{\check{P}}_{\mathrm{ave}}\big(K_a, m, N_0\big) = \frac{1}{K_a}\int_{\underline{h}} \prod_{\rho=1}^{K_a} f_{\mathcal{CN}}\big(h_{\rho}; 0, 1\big) \cdot \sum_{\rho=1}^{K_a} \check{\check{P}}_{\rho}\big(K_a, m, N_0 \mid \underline{h}\big)\, \mathrm{d}\underline{h}.$$
Proof.
For convenience, two users are assumed to be active in the system. Since users are not provided with IDs in the URA scenario, we derive the theoretical successful detection probabilities with the help of the detection order between sequences: we denote the two sequences by $a$ and $b$, and represent their detection order by $\rho_a$ and $\rho_b$. The input signal of the algorithm is then rewritten as a linear superposition of the two users: $y = h_a C_{\hat{P}_a,B_a,I_{\chi,a}} + h_b C_{\hat{P}_b,B_b,I_{\chi,b}} + n$. The ranks of the PRM codes $C_{\hat{P}_a,B_a,I_{\chi,a}}$ and $C_{\hat{P}_b,B_b,I_{\chi,b}}$ are denoted by $\Upsilon_a$ and $\Upsilon_b$, respectively. According to the expansion of (14), $F_M$ can be expanded for the $r$-th layer as
F M ( f r ) = ( h a C P ^ a , B a , I χ , a + h b C P ^ b , B b , I χ , b + n ) H · E I χ e r , f r · ( h a C P ^ a , B a , I χ , a + h b C P ^ b , B b , I χ , b + n ) = | h a | 2 C P ^ a , B a , I χ , a H E I χ e r , f r C P ^ a , B a , I χ , a F a ( f r ) + | h b | 2 C P ^ b , B b , I χ , b H E I χ e r , f r C P ^ b , B b , I χ , b F b ( f r ) + h a H · h b C P ^ a , B a , I χ , a H E I χ e r , f r C P ^ b , B b , I χ , b Z a b , 1 ( f r ) + h a · h b H C P ^ b , B b , I χ , b H E I χ e r , f r C P ^ a , B a , I χ , a Z a b , 2 ( f r ) + h a C P ^ a , B a , I χ , a H E I χ e r , f r n + n H E I χ e r , f r C P ^ a , B a , I χ , a X a , 1 ( f r ) + h b C P ^ b , B b , I χ , b H E I χ e r , f r n + n H E I χ e r , f r C P ^ b , B b , I χ , b X a , 2 ( f r ) + n H E I χ e r , f r n N ( f r ) .
For brevity, we write $\Lambda(f_r) = \big(Z_{ab,1} + Z_{ab,2} + X_{a,1} + X_{a,2} + N\big)(f_r)$.
We first assess the performance of the first-layer ($r=1$) detection for the $\rho_a$-th sequence: suppose that in the first iteration of Algorithm 2, active users are detected successively according to the order $(\rho_a, \rho_b)$. In the following, we write the probability of the event $P_{\rho_a}(1)$ as
Pr P ρ a ( 1 ) | h = Pr z F 2 m Υ a F I ˜ χ , a z + p ˜ χ a , 1 > max z F 2 m Υ a F I ˜ χ , a z + p ¯ ¯ χ a , 1 = [ p ¯ ¯ χ a , 1 ( j ) ] j 2 Υ a 1 Pr z F 2 m Υ a F I ˜ χ , a z + p ˜ χ a , 1 > z F 2 m Υ a F I ˜ χ , a z + p ¯ ¯ χ a , 1 ( j ) .
We omit the subscript $M$ and write $F(\cdot)$ in this section. As seen in (35), the present situation is more complicated than the single-user scenario. One difficulty is that the $2^{m-\Upsilon_a}$ jointly summed events are no longer independent and no longer identically distributed. In addition, owing to the properties of the PRM sequence, the vector $(\tilde{I}_{\chi,a} z + \tilde{p}_{\chi_a,1})$ in (35) is specific to the $\rho_a$-th detected sequence, while the $\rho_b$-th sequence uses a PRM whose rank $\Upsilon_b$ is not necessarily equal to $\Upsilon_a$; the analysis therefore requires a detailed treatment of the interference from the other user. For simplicity, we first approximate a single term in (35) as follows:
Pr z F 2 m Υ a F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + F b ( Ξ ˜ χ a , z ( 1 ) ) z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = ( e ) Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) ( f ) Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) · Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > 0 ,
where $\tilde{\Xi}_{\chi_a,z}^{(1)} = \tilde{I}_{\chi,a} z + \tilde{p}_{\chi_a,1}$ and $\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)} = \tilde{I}_{\chi,a} z + \bar{\bar{p}}_{\chi_a,1}(j)$ (unlike in (17), the superscript $j$ cannot be dropped from $\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)}$ since the jointly multiplied events in (35) are no longer independent), and $\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)}$ is the $j$-th coset of the set $\tilde{\Xi}_{\chi_a,z}^{(1)}$ $(1 \le j \le 2^{\Upsilon_a}-1)$. It is known that $F_a(\tilde{\Xi}_{\chi_a,z}^{(1)})$ takes non-zero values on the set $\tilde{\Xi}_{\chi_a,z}^{(1)}$. Equality $(e)$ in (36) keeps one representative term on each side because the terms are identically distributed, and inequality $(f)$ follows from (18). In the following, we calculate each of the two probabilities in the result of (36).
We first compute the latter probability. Since $F_a(\tilde{\Xi}_{\chi_a,z}^{(1)})$ takes a fixed value, it suffices to obtain the distribution of $\Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)})$. The terms $Z_{ab,1}(\tilde{\Xi}_{\chi_a,z}^{(1)})$ and $Z_{ab,2}(\tilde{\Xi}_{\chi_a,z}^{(1)})$ in $\Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)})$ can be expanded as
Z a b , 1 ( Ξ ˜ χ a , z ( 1 ) ) = h a H · h b · C P ^ a , B a , I χ , a H E I χ a e r , Ξ ˜ χ a , z ( 1 ) C P ^ b , B b , I χ , b = h a H · h b · i ( I χ e r ) T Ξ ˜ χ a , z ( 1 ) w F 2 m C P a ^ , B a , I χ a w + I χ a e r ¯ · ( 1 ) w T · Ξ ˜ χ a , z ( 1 ) · C P b ^ , B b , I χ b ( w ) = h a H · h b · i ( I χ · e r + 2 w ) T · Ξ ˜ χ a , z ( 1 ) · μ ρ a ¯ , ρ b ( 1 )
and
Z a b , 2 ( Ξ ˜ χ a , z ( 1 ) ) = h a · h b H · C P ^ b , B b , I χ , b H E I χ a e r , Ξ ˜ χ a , z ( 1 ) C P ^ a , B a , I χ , a = h a · h b H · i ( I χ e r ) T Ξ ˜ χ a , z ( 1 ) w F 2 m C P b ^ , B b , I χ b w + I χ a e r ¯ · ( 1 ) w T · Ξ ˜ χ a , z ( 1 ) · C P a ^ , B a , I χ a ( w ) = h a · h b H · i ( I χ · e r + 2 w ) T · Ξ ˜ χ a , z ( 1 ) · μ ρ a , ρ b ¯ ( 1 ) ,
respectively, where $\mu_{\bar{\rho}_a,\rho_b}(1)$ and $\mu_{\rho_a,\bar{\rho}_b}(1)$ are inner products of the two sequences. The inner product is related to the pattern form of the PRM. Since the occupancy patterns are equiprobable, the mean of the inner product is 0, and we denote its variance by $\sigma_{\mu,1}^2$. Thus $\Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)})$ follows the distribution $\mathcal{N}(0, \sigma_{\Lambda,1}^2)$, where the variance is
$$\sigma_{\Lambda,1}^2 = \big(|h_a^H h_b|^2 + |h_a h_b^H|^2\big)\, \sigma_{\mu,1}^2 + 2^{m-1}\big(2(|h_a|^2 + |h_b|^2) N_0 + N_0^2\big).$$
In light of this, the latter term of (36) can be computed as
$$\Pr\big\{F_a + \Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)}) > 0\big\} = \Pr\big\{\Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)}) > -F_a(\tilde{\Xi}_{\chi_a,z}^{(1)})\big\} = \Pr\big\{\Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)}) > -2^{\Upsilon_a}|h_a|^2 (-1)^{B_{a,1}+(B_a|_{\Upsilon+1}^{m})^T z}\big\} \stackrel{(g)}{=} Q\Big({-\frac{2^{\Upsilon_a}|h_a|^2}{\sigma_{\Lambda,1}}}\Big).$$
In equality $(g)$ we set $z = 0_{m-\Upsilon}$; this convention is retained in the following formulas.
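The tail probabilities above are evaluations of the Gaussian Q-function, which can be computed via the complementary error function. The numbers below are illustrative placeholders, not values taken from the paper.

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x) = Pr{N(0,1) > x} = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# illustrative evaluation with placeholder values for 2^ups_a * |h_a|^2
# and sigma_lambda (assumptions for the sketch)
ups_a, h_a_sq, sigma_lambda = 4, 1.0, 8.0
p_pos = qfunc(-(2 ** ups_a) * h_a_sq / sigma_lambda)
```

Because the Q-function takes a negative argument here, the resulting probability lies above one half, i.e., the desired quadratic-form statistic is positive with high probability.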
We next compute the probability of the first term in (36). Suppose the set corresponding to non-zero $F_b(\cdot)$ is denoted by $\tilde{\Xi}_{\chi_b,z}^{(1)}$, and let $|\tilde{\Xi}_{\chi_a,z}^{(1)} \cap \tilde{\Xi}_{\chi_b,z}^{(1)}| = n_{ab}$; obviously, $|\tilde{\Xi}_{\chi_b,z}^{(1)}| = 2^{m-\Upsilon_b}$. We further denote $|\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)} \cap \tilde{\Xi}_{\chi_b,z}^{(1)}| = n_{b,j}$, so that $n_{ab} + \sum_{j=1}^{2^{\Upsilon_a}-1} n_{b,j} = 2^{m-\Upsilon_b}$. On this basis, the first term of the result of (36) yields the following equation:
Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + n a b · 2 Υ b | h b | 2 Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + n b , j · 2 Υ b | h b | 2 2 Υ b = 1 Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) Λ ( Ξ ˜ χ a , z ( 1 ) ) 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 · Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) > 0 Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) Λ ( Ξ ˜ χ a , z ( 1 ) ) 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 · Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) < 0 1 Pr Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) Λ ( Ξ ˜ χ a , z ( 1 ) ) 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 = ( h ) 1 Q 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 2 σ Λ , 1 ,
where equality $(h)$ follows from $\Lambda(\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)}) - \Lambda(\tilde{\Xi}_{\chi_a,z}^{(1)}) \sim \mathcal{N}(0, 2\sigma_{\Lambda,1}^2)$. On this basis, we return to (36) and obtain the final result
j = 1 2 Υ a 1 Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) + z F 2 m Υ a F b ( Ξ ˜ χ a , z ( 1 ) ) Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + z F 2 m Υ a F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) = j = 1 2 Υ a 1 Q 2 Υ a | h a | 2 + n a b · 2 Υ b | h b | 2 n b , j · 2 Υ b | h b | 2 2 σ Λ , 1 · Q 2 Υ a | h a | 2 σ Λ , 1 .
This equation is consistent with (30) in Lemma 2.
Now we present a special example to provide a further explanation for the result of (41): we suppose that two users have the same submatrix I χ and different vectors P ^ 1 , thus (35) directly becomes
j = 1 2 Υ a 1 Pr z F 2 m Υ a F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > z F 2 m Υ a F b + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) j = 1 2 Υ a 1 Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) · Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > 0 = Q 2 Υ a | h a | 2 σ Λ , 1 2 Υ a 1 · j = 1 2 Υ a 2 Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) I · Pr F a + Λ ( Ξ ˜ χ a , z ( 1 ) ) > F b ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) + Λ ( Ξ ¯ ¯ χ a , z ( j , 1 ) ) I I = Q 2 Υ a | h a | 2 σ Λ , 1 2 Υ a 1 · Q 2 Υ a | h a | 2 2 σ Λ , 1 2 Υ a 2 · Q 2 Υ a | h a | 2 2 Υ b | h b | 2 2 σ Λ , 1 .
In this case, term I refers to the interference imposed by the cosets other than the set $\tilde{\Xi}_{\chi_a,z}^{(1)}$ on the $\rho_a$-th detected sequence. Since $|\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)} \cap \tilde{\Xi}_{\chi_b,z}^{(1)}| = 2^{\Upsilon_b}$ for one particular $j$, there is only one case in term II. Besides, since $F_b(\bar{\bar{\Xi}}_{\chi_a,z}^{(j,1)}) = 0$ for the remaining $(2^{\Upsilon_a}-2)$ possible $j$, term I admits $(2^{\Upsilon_a}-2)$ possibilities. The result of (42) satisfies (41) provided that $n_{ab} = 0$ and $n_{b,j} = 1$ for one $j$, $1 \le j \le 2^{\Upsilon_a}-1$. This equation is also consistent with (30) in Lemma 2.
Based on the above results, after recovering the vector $\hat{P}_{\rho_a} e_1$, the signal $y_2$ becomes
$$y_2 = \lambda_{PRM,\rho_a}(1)\cdot y_1 = h_{\rho_a} C_{\hat{P}_{\rho_a},B_{\rho_a},I_{\chi,\rho_a}} + \lambda_{PRM,\rho_a}(1)\cdot h_{\rho_b} C_{\hat{P}_{\rho_b},B_{\rho_b},I_{\chi,\rho_b}} + \lambda_{PRM,\rho_a}(1)\cdot n,$$
where $\lambda_{PRM,\rho_a}(1)$ is the first-layer projection operator for the user whose detection order is $\rho_a$. We approximate the residual term as a Gaussian random variable with mean 0 and variance $N_0(\Upsilon_a-1) = \frac{1}{2}\big(|h_{\rho_b}|^2 + N_0\big)$. In light of this, the detection procedure in layers $2 \le r \le \Upsilon_a$ is equivalent to the recovery of the symmetric matrix $\hat{P}_{\rho_a}^{\Upsilon_{\rho_a}-1}$ in a single-sequence scenario with noise variance $N_0(\Upsilon_a-1)$. Since the $\rho_a$-th detected sequence is successfully recovered if and only if both $\hat{P}_{\rho_a} e_1$ and $\hat{P}_{\rho_a}^{\Upsilon_{\rho_a}-1}$ are correct, the probability of the event "the $\rho_k$-th recovered sequence has been successfully detected" is
$$\Pr\big\{B_{\rho_a} \mid \underline{h}\big\} = \Pr\big\{P_{\rho_a}(1) \mid \underline{h}\big\} \cdot \check{P}\big(\Upsilon_{\rho_a}-1,\, N_0(\Upsilon_a-1) \mid \underline{h}\big).$$
This result is consistent with (29) in Lemma 2.
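The residual-interference approximation above, $N_0(\Upsilon_{\rho_k}-1) = \frac{1}{2}\big(\sum_{\rho > \rho_k}|h_\rho|^2 + N_0\big)$, folds the powers of the not-yet-detected users into the noise and halves the total after the projection. A direct transcription:

```python
def effective_noise(h_residual_sq, n0):
    # undetected users' channel powers fold into the noise; the first
    # projection layer halves the total, per N0(ups - 1) in Lemma 2
    return 0.5 * (sum(h_residual_sq) + n0)
```

For the two-user case this gives $N_0(\Upsilon_a-1) = (|h_{\rho_b}|^2 + N_0)/2$, and with no residual users it reduces to the halved noise floor of the single-user analysis.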
Next, assuming the sequence $C_{\hat{P}_{\rho_a},B_{\rho_a},I_{\chi,\rho_a}}$ has been ideally eliminated from the signal $y$, the receiver proceeds to recover $C_{\hat{P}_{\rho_b},B_{\rho_b},I_{\chi,\rho_b}}$, and we denote its probability of successful detection by $\Pr\{B_{\rho_b} \mid B_{\rho_a}, \underline{h}\} = \check{P}(\Upsilon_{\rho_b}, N_0 \mid h_{\rho_b})$.
As a next step, we calculate the average probability that the two sequences $a$ and $b$ are detected successfully. Sequence $a$ can be recovered in the following two scenarios:
  • in the first instance, sequence $a$ is detected successfully first, which happens with probability $\Pr\{B_a \mid \underline{h}\}$;
  • in the second instance, sequence $b$ is detected first with probability $\Pr\{B_b \mid \underline{h}\}$, after which sequence $a$ is detected from the residual signal with probability $\Pr\{B_a \mid B_b, \underline{h}\}$.
As a result, the probability that sequence a will be successfully detected equals
$$\check{\check{P}}\big(K_a, m, N_0 \mid \underline{h}\big) = \Pr\big\{B_1 \mid \underline{h}\big\} + \Pr\big\{B_2 \mid \underline{h}\big\} \cdot \Pr\big\{B_1 \mid B_2, \underline{h}\big\}.$$
The equation is consistent with (32) in Theorem 2. Finally, the average probability of successful detection equals
$$\check{\check{P}}_{\mathrm{ave}}\big(K_a, m, N_0\big) = \frac{1}{2}\int_{\underline{h}} \prod_{\rho=1}^{2} f_{\mathcal{CN}}\big(h_{\rho}; 0, 1\big) \cdot \Big(\check{\check{P}}_1\big(K_a, m, N_0 \mid \underline{h}\big) + \check{\check{P}}_2\big(K_a, m, N_0 \mid \underline{h}\big)\Big)\, \mathrm{d}\underline{h}.$$
The equation is consistent with (33) in Theorem 2. □
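The order enumeration in (32) generalizes the two-user computation above to arbitrary $K_a$. The sketch below sums over all detection orders in which other sequences precede the target one; `p_cond(u, done)` is a caller-supplied stand-in for the conditional success probability $\Pr\{B_u \mid B_{\text{done}}, \underline{h}\}$ (a hypothetical interface, not an API from the paper).

```python
import itertools

def p_detect(rho, users, p_cond):
    # Pr{sequence rho recovered in the first iteration}: the direct term
    # plus, for every ordered prefix of the other users, the probability
    # that the whole prefix succeeds and rho then succeeds on the residual
    others = [u for u in users if u != rho]
    total = p_cond(rho, ())
    for k in range(1, len(others) + 1):
        for prefix in itertools.permutations(others, k):
            p = 1.0
            for i, u in enumerate(prefix):
                p *= p_cond(u, prefix[:i])
            total += p * p_cond(rho, prefix)
    return total
```

For $K_a = 2$ this reduces to the two-user expression above: the direct term plus the term in which the other sequence is cancelled first.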

5. Simulation Results

In this section, we first examine the effectiveness of the proposed algorithms by evaluating the numerical simulation of the successful detection probability, defined as the mean proportion of detected active users to the overall number of active users in the system. Additionally, a slot-controlled CCS scheme with the proposed iterative list projection algorithm in place of the original inner decoder is measured against several benchmarks.

5.1. Performances of the Algorithms 1 and 2

As for the performance of Algorithm 1: Figure 5 compares the numerical simulation results of the list PRM projection algorithm (Algorithm 1) with the theoretical analysis results based on (11) for different settings of $m$. We fix $\Upsilon = m$ and $H = 1$, and normalize the transmit power of an active user to one. The figure shows that the numerical and theoretical results match closely; the small gap between them stems from the approximations made in the theoretical analysis. This confirms the correctness of the analysis in Lemma 1. In addition, comparing the successful detection probability for different $m$, we find that the performance of the list PRM projection algorithm improves as the sequence length increases, further validating the conclusion in Section 4.1.
Next, we compare the list PRM projection algorithm with existing decoding algorithms for RM sequences. The numerical simulation results are shown in Figure 6, where "Origin PRM" and "List PRM" represent the original projection detection algorithm in [33] and its list version (Algorithm 1), respectively. "RM LLD" represents the RM sequence detection algorithm proposed in [29], which is based on a Shift-and-Multiply operation on the received signal, and "List RM LLD" represents its list version. For all algorithms, we set the sequence length to $N = 2^8$ and the list size to $H = 8$. As can be seen from Figure 6, the successful detection probabilities of all algorithms are close to one when the signal-to-noise ratio (SNR) is larger than 2 dB. However, the "RM LLD" algorithm suffers severe performance degradation at low SNR compared with "Origin PRM", which shows strong robustness. In addition, owing to the enhanced list detection, "List RM LLD" and "List PRM" deliver a more pronounced performance gain at low SNR compared with the algorithms without candidate saving.
As for the performance of Algorithm 2: in Figure 7, we compare the numerical simulation results of the iterative list PRM projection detection (Algorithm 2) with the theoretical analysis results derived from Lemma 2. We set the number of iterations in the numerical simulation to 1. As Figure 7 shows, the theoretical results fit the numerical simulations well, especially at high SNR, verifying the correctness of Theorem 2. In addition, both the simulation and theoretical results show that the detection capability of the algorithm decreases as the number of active users in the system increases.

5.2. Iterative List PRM Projection Algorithm-based Slot-controlled URA

In this section, we combine an inner code employing the iterative list PRM projection detector with two types of practicable outer codes ($t$-tree and Reed-Solomon codes) to form a slot-pattern control-based CCS system (from [33]), and demonstrate the positive effects of the proposed algorithms on the overall system via simulation results. We consider a communication system in which each user transmits $B = 100$ bits by employing an $H_N$-length $G$-ary outer code over $T = 2^{15}$ channel uses. The $H_N$ positions are carefully selected within $N_T$ chunks. There are $G$ codewords in the inner codebook, each of length $N = T/N_T$. We employ the PRM sequence under one rank $\Upsilon$ rather than all ranks in this section, and Algorithm 2 is used to decode the inner codes. Numerical results are presented in Figure 8, which plots the energy efficiency of the different schemes: the energy efficiency is the minimum $E_b/N_0$ over the possible scheme-specific parameters of the URA scheme, subject to the per-user error-probability constraint $P_e < 0.1$ and the FAR constraint $P_f < 0.001$.
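For reference, the energy-per-bit figure reported on such plots follows from the per-user symbol energy, the number of channel uses, and the payload size. The function below is a generic sketch of that conversion (an assumed accounting model with placeholder energy and noise values, not the paper's exact optimization).

```python
import math

def eb_n0_db(symbol_energy, n_channel_uses, n_bits, n0):
    # E_b = total transmit energy per user / information bits;
    # report E_b / N_0 on a decibel scale
    return 10.0 * math.log10(symbol_energy * n_channel_uses / (n_bits * n0))

# example with the section's frame parameters: T = 2^15 channel uses, B = 100 bits
# (unit symbol energy and unit N0 are illustrative assumptions)
ebn0 = eb_n0_db(1.0, 2 ** 15, 100, 1.0)
```

The reported energy efficiency is then the smallest such $E_b/N_0$ for which the error constraints are still met.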
First, we concatenate the inner code with iterative list PRM detection to an outer decoder that can correct up to $t$ errors for various setups, i.e., the benchmarks called "$t=3$, list PRM-tree, $H=1$", "$t=2$, list PRM-tree, $H=1$" and "$t=1$, list PRM-tree, $H=1$" are plotted in light blue in Figure 8, and we employ blue lines for "$t=3$, iterative list PRM-tree, $H=8$", "$t=2$, iterative list PRM-tree, $H=8$" and "$t=1$, iterative list PRM-tree, $H=8$". Specifically, the "list PRM-tree, $H=1$" curves for different $t$, drawn directly from [23], are regarded as benchmarks, and "iterative list PRM-tree, $H=8$" for various $t$ denotes the schemes in which the original detection is replaced by the proposed iterative list detection. Besides, a greedy information bit allocation method is applied to all list PRM-tree schemes, where each subsequent slot is assigned the maximum number of information bits while maintaining the $\mathbb{E}[|V_l|] \le 2^{25}$ constraint and seeking the minimum $E_b/N_0$. The energy efficiency is the minimum power needed to serve $K_a$ users with a per-user probability of error below $0.1$; the blue curves in Figure 8 are depicted under the optimal setups.
Next, we denote the CCS scheme that concatenates the inner codes whose distribution is i.i.d. uniform on the power shell to an outer Reed-Solomon code as the "RS scheme” (it is named "Reed-Solomon scheme" in [23]), and use it as the first benchmark. The scheme called "list PRM-RS, H = 1 " is directly drawn from [33] as the second benchmark (it is named "PRM-RS" in [33]). Besides, “iterative list PRM-RS, H = 1 ” is the scheme whose multi-user inner-code detector is substituted by Algorithm 2 without saving candidates while calling Algorithm 1, whereas “iterative list PRM-RS, H = 8 ” keeps H = 8 candidates during each layer while conducting Algorithm 1. The energy efficiency is depicted in Figure 8.
From the simulations depicted in Figure 8, the following observations can be drawn:
  • As for all "list PRM-tree" schemes: since the outer-code length $H_N = 32$ is suitable for the $t=1$ scheme and $H_N = 64$ for the $t=2,3$ cases, the former ($t=1$) can accommodate more active users than $t=2,3$ owing to a smaller number of simultaneous appearances in each slot (the outer-code length affects the probability of appearance over slots; see more setup details in [23]). Besides, a higher $t$ for the outer code improves performance when the inner code has the same length; therefore, $t=3$ is superior to $t=1,2$ in this respect. Furthermore, when comparing the curves of "list PRM-tree" and "iterative list PRM-tree", we conclude that the enhanced inner code with the proposed iterative list projection algorithm helps the entire system lower the energy-per-bit requirement for a target error probability and accommodate more users' transmissions.
  • Figure 8 illustrates how the "iterative list PRM-RS, H = 1 " curve outperforms the "list PRM-RS, H = 1 " curve regarding energy consumption when the number of active users is fixed. This observation is an unambiguous demonstration of the benefit of Algorithm 2, which carries out iterative detection.
  • When comparing the schemes "iterative list PRM-RS, $H=1$" and "iterative list PRM-RS, $H=8$", we see that the detection performance can be refined by increasing the number of candidates kept in the first few layers, confirming the significance of Algorithm 1, which guarantees the reliable recovery of the first few layers.

6. Conclusions

In this paper, we address a packetized and slotted transmission CCS framework, which concatenates an inner PRM code to an outer error-correction code. Firstly, we improve the decoder of the inner code: we propose an enhanced algorithm that keeps multiple candidates so as to remedy the risk of spreading errors. On this basis, an iterative list PRM projection algorithm is proposed for the multi-sequence scenario. We then derive the theoretical success probabilities of the list projection and iterative list projection detection methods, and numerical simulations show that the theoretical and simulated results agree. Finally, we implement the iterative list PRM projection algorithm with two practical error-correction codes. The simulation results show that the packetized URA with the proposed iterative list projection detection outperforms the benchmarks in terms of the number of active users supported in each slot and the energy per bit needed to meet a target error probability.
Additional interesting research avenues include: (i) extending the PRM-based system to support multiple antennas; (ii) considering the decomposition of the Clifford matrix $G_F$ associated with the patterned RM code for information reconstruction at the transmitter, with the iterative approach generalized accordingly at the receiver.

Author Contributions

Methodology, W.X. and H.Z.; software, W.X. and R.T.; validation, W.X.; formal analysis, W.X.and R.T.; investigation, W.X. and R.T.; resources, W.X.; data curation, W.X.; writing—original draft preparation, W.X.; writing—review and editing, W.X. and R.T.; visualization, W.X. and R.T.; supervision, H.Z.; project administration, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saad, W.; Bennis, M.; Chen, M. A vision of 6G wireless systems: Applications, trends, technologies, and open research problems. IEEE network 2019, 34(3), 134–142. [Google Scholar] [CrossRef]
  2. Guo, F.; Yu, F. R.; Zhang, H.; Li, X.; Ji, H.; Leung, V. C. Enabling massive IoT toward 6G: A comprehensive survey. IEEE Internet of Things Journal 2021, 8(15), 11891–11915. [Google Scholar] [CrossRef]
  3. Pan, C.; Mehrpouyan, H.; Liu, Y.; Elkashlan, M.; Arumugam, N. Joint pilot allocation and robust transmission design for ultra-dense user-centric TDD C-RAN with imperfect CSI. IEEE Wirel. Commun 2018, 17(3), 2038–2053. [Google Scholar] [CrossRef]
  4. Chettri, L.; Bera, R. A comprehensive survey on Internet of Things (IoT) toward 5G wireless systems. IEEE Internet of Things Journal 2020, 7(1), 16–32. [Google Scholar] [CrossRef]
  5. Masoudi, M.; Azari, A.; Yavuz, E. A.; Cavdar, C. Grant-free radio access IoT networks: Scalability analysis in coexistence scenarios. In Proceedings of the IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20-24 May 2018; pp. 1–7. [Google Scholar]
  6. Shahab, M. B.; Abbas, R.; Shirvanimoghaddam, M.; Johnson, S. J. Grant-free non-orthogonal multiple access for IoT: A survey. IEEE Commun. Surv. Tut 2020, 22(3), 1805–1838. [Google Scholar] [CrossRef]
  7. Chen, X.; Chen, T. Y.; Guo, D. Capacity of Gaussian many-access channels IEEE Trans. Inf. Theory 2017, 63(6), 3516–3539. [Google Scholar] [CrossRef]
  8. Polyanskiy, Y. A perspective on massive random-access. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2523–2527. [Google Scholar]
  9. Zadik, I.; Polyanskiy, Y.; Thrampoulidis, C. Improved bounds on Gaussian MAC and sparse regression via Gaussian inequalities. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7-12 July 2019; pp. 430–434. [Google Scholar]
  10. Ngo, K. H.; Lancho, A.; Durisi, G. Unsourced multiple access with random user activity. IEEE Trans. Inf. Theory 2023, 1–1. [Google Scholar] [CrossRef]
  11. Ordentlich, O.; Polyanskiy, Y. Low complexity schemes for the random access Gaussian channel. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2528–2532. [Google Scholar]
  12. Marshakov, E.; Balitskiy, G.; Andreev, K.; Frolov, A. A polar code based unsourced random access for the Gaussian MAC. In Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, Hawaii, USA, 22–25 September 2019; pp. 1–5. [Google Scholar]
  13. Pradhan, A. K.; Amalladinne, V. K.; Vem, A.; Narayanan, K. R.; Chamberland, J. F. Sparse IDMA: A Joint Graph-Based Coding Scheme for Unsourced Random Access. IEEE Trans. Commun. 2022, 70(11), 7124–7133. [Google Scholar] [CrossRef]
  14. Ahmadi, M. J.; Duman, T. M. Random spreading for unsourced MAC with power diversity. IEEE Commun. Lett. 2021, 25(12), 3995–3999. [Google Scholar] [CrossRef]
  15. Pradhan, A.K.; Amalladinne, V.K.; Narayanan, K.R.; Chamberland, J.F. LDPC Codes with Soft Interference Cancellation for Uncoordinated Unsourced Multiple Access. In Proceedings of the IEEE International Conference on Communications (ICC), Montreal, QC, Canada, 14–23 June 2021; pp. 1–6. [Google Scholar]
  16. Amalladinne, V.K.; Chamberland, J.F.; Narayanan, K.R. A Coded Compressed Sensing Scheme for Unsourced Multiple Access. IEEE Trans. Inf. Theory 2020, 66, 6509–6533. [Google Scholar] [CrossRef]
  17. Amalladinne, V. K.; Vem, A.; Soma, D. K.; Narayanan, K. R.; Chamberland, J. F. A coupled compressive sensing scheme for unsourced multiple access. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 6628–6632. [Google Scholar]
  18. Lancho, A.; Fengler, A.; Polyanskiy, Y. Finite-blocklength results for the A-channel: Applications to unsourced random access and group testing. In Proceedings of the 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 28–30 September 2022; pp. 1–8. [Google Scholar]
  19. Amalladinne, V.K.; Pradhan, A.K.; Rush, C.; Chamberland, J.F.; Narayanan, K.R. Unsourced random access with coded compressed sensing: Integrating AMP and belief propagation. IEEE Trans. Inf. Theory 2022, 68, 2384–2409. [Google Scholar] [CrossRef]
  20. Andreev, K.; Rybin, P.; Frolov, A. Reed-Solomon coded compressed sensing for the unsourced random access. In Proceedings of the 2021 17th International Symposium on Wireless Communication Systems (ISWCS), Berlin, Germany, 6–9 September 2021; pp. 1–5. [Google Scholar]
  21. Che, J.; Zhang, Z.; Yang, Z.; Chen, X.; Zhong, C.; Ng, D.W.K. Unsourced random massive access with beam-space tree decoding. IEEE J. Sel. Areas Commun. 2021, 40, 1146–1161. [Google Scholar] [CrossRef]
  22. Fengler, A.; Haghighatshoar, S.; Jung, P.; Caire, G. Non-Bayesian Activity Detection, Large-Scale Fading Coefficient Estimation, and Unsourced Random Access With a Massive MIMO Receiver. IEEE Trans. Inf. Theory 2021, 67, 2925–2951. [Google Scholar] [CrossRef]
  23. Andreev, K.; Rybin, P.; Frolov, A. Coded Compressed Sensing with List Recoverable Codes for the Unsourced Random Access. IEEE Trans. Commun. 2022, 70, 7886–7898. [Google Scholar] [CrossRef]
  24. Liang, Z.; Zheng, J.; Ni, J. Index modulation–aided mixed massive random access. Front. Commun. Networks 2021, 2, 694557. [Google Scholar] [CrossRef]
  25. Fengler, A.; Jung, P.; Caire, G. SPARCs for unsourced random access. IEEE Trans. Inf. Theory 2021, 67, 6894–6915. [Google Scholar] [CrossRef]
  26. Ebert, J.R.; Amalladinne, V.K.; Rini, S.; Chamberland, J.F.; Narayanan, K.R. Coded demixing for unsourced random access. IEEE Trans. Signal Process. 2022, 70, 2972–2984. [Google Scholar] [CrossRef]
  27. Zhang, L.; Luo, J.; Guo, D. Neighbor discovery for wireless networks via compressed sensing. Perform. Eval. 2013, 70, 457–471. [Google Scholar] [CrossRef]
  28. Zhang, H.; Li, R.; Wang, J.; Chen, Y.; Zhang, Z. Reed-Muller sequences for 5G grant-free massive access. In Proceedings of the GLOBECOM 2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–7. [Google Scholar]
  29. Wang, J.; Zhang, Z.; Hanzo, L. Joint active user detection and channel estimation in massive access systems exploiting Reed–Muller sequences. IEEE J. Sel. Top. Signal Process. 2019, 13, 739–752. [Google Scholar] [CrossRef]
  30. Wang, J.; Zhang, Z.; Hanzo, L. Incremental massive random access exploiting the nested Reed-Muller sequences. IEEE Trans. Wirel. Commun. 2020, 20, 2917–2932. [Google Scholar] [CrossRef]
  31. Calderbank, R.; Thompson, A. CHIRRUP: A practical algorithm for unsourced multiple access. Inform. Inference J. IMA 2020, 9, 875–897. [Google Scholar] [CrossRef]
  32. Wang, J.; Zhang, Z.; Chen, X.; Zhong, C.; Hanzo, L. Unsourced massive random access scheme exploiting Reed-Muller sequences. IEEE Trans. Commun. 2020, 70, 1290–1303. [Google Scholar] [CrossRef]
  33. Xie, W.; Zhang, H. Patterned Reed–Muller Sequences with Outer A-Channel Codes and Projective Decoding for Slot-Controlled Unsourced Random Access. Sensors 2023, 23(11), 5239. [Google Scholar] [CrossRef] [PubMed]
  34. Pllaha, T.; Tirkkonen, O.; Calderbank, R. Binary subspace chirps. IEEE Trans. Inf. Theory 2022, 68, 7735–7752. [Google Scholar] [CrossRef]
  35. Pllaha, T.; Tirkkonen, O.; Calderbank, R. Reconstruction of multi-user binary subspace chirps. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 531–536. [Google Scholar]
  36. Tirkkonen, O.; Calderbank, R. Codebooks of complex lines based on binary subspace chirps. In Proceedings of the 2019 IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.