Preprint
Review

Optical Imaging Using Coded Aperture Correlation Holography (COACH) With PSF of Spatial-Structured Longitudinal Light Beams – A Study Review

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 28 December 2023. Posted: 29 December 2023.

Abstract
Spatial-structured longitudinal light beams are optical fields sculpted in three-dimensional (3D) space by diffractive optical elements. These beams have been recently suggested for use in improving several imaging capabilities, such as 3D imaging, enhancing image resolution, engineering the depth of field, and sectioning 3D scenes. All these imaging tasks are performed using coded aperture correlation holography systems. Each system designed for a specific application is characterized by a point spread function of a different spatial-structured longitudinal light beam. This article reviews the topic of applying certain structured light beams for optical imaging.
Keywords: 
Subject: Engineering - Electrical and Electronic Engineering

1. Introduction

The topic of structured light beams is wide, with many types of optical beams sorted into several classifications [1]. In this review article, we concentrate on a single class out of the many types of structured beams. The generic beam of this group is termed a spatial-structured longitudinal light beam (SSLLB), and within this class, we focus our attention only on the use of SSLLBs for optical imaging to improve certain imaging capabilities. Each beam of the SSLLBs has its own three-dimensional (3D) distribution, but the common characteristic of all SSLLBs is a tube shape with a length much longer than any transverse size. Another common feature of all SSLLB members is the ability to create each of them by a diffractive optical element (DOE) [2] or computer-generated hologram (CGH) [3] displayed either on a spatial light modulator (SLM) [4] or fabricated as a static optical mask [5]. An SSLLB is created when the DOE/CGH is illuminated by spatially coherent quasimonochromatic light. However, for imaging purposes, SSLLBs are usually used in spatially incoherent imaging systems. There is no contradiction between these two claims because SSLLBs are used as point spread functions (PSFs) of imaging systems, and the PSF is the response to a point source, which by definition is spatially coherent, even when the system is illuminated by a spatially incoherent light source. Based on considerations of power efficiency, we strive to implement DOEs/CGHs as nonabsorbing phase masks. Specifically, the SSLLBs reviewed in this article are axicon-generated beams [6], Airy beams [7], self-rotating beams [8], and pseudonondiffracting beams generated by radial quartic phase functions [9,10], all of which have been recently applied in imaging systems by our research groups. Each beam in this list has its own spatial distribution and is applied to different imaging applications, as described in this review.
Attempts to use SSLLBs for imaging purposes were made shortly after researchers succeeded in controlling such beams [11]. The connection between SSLLBs and imaging started with attempts to extend the depth of field of imaging systems using a limited nondiffracting beam as the PSF of the imaging system [12]. The main difficulty with this approach is the relatively high sidelobes of the PSF in each transverse plane along the nondiffracting range of the PSF. These sidelobes introduce undesired noise into the output image obtained by these systems.
Recently, the use of SSLLBs for imaging has received increased attention because of the invention of a new imaging technology called coded aperture correlation holography (COACH) [13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. Like digital holography [41,42] in general and incoherent digital holography in particular [43,44], COACH is a two-stage imaging process of electro-optical recording of a blurred pattern in the first stage and digital reconstruction of the final image in the second stage. In fact, COACH is considered a method of digital holography in which, between the object and the camera that records the digital hologram, there is a coded aperture that modulates the object's light in a certain way [45,46]. More explicitly, in response to an object in the input, a blurred or scattered pattern is first recorded by a digital camera. In the second stage, a digital reconstruction operates on the blurred image to yield the desired image in the output. Because of the digital reconstruction, the previously mentioned noise problems caused by high sidelobes are partly or completely solved. The digital reconstruction in COACH is performed by linear [13,14,15,16,17,18] or nonlinear [19,40] cross-correlation of the object response with the PSF of the system. Alternatively, digital reconstruction can be performed by an iterative algorithm that contains, in each iteration, linear [19] or nonlinear [47,48,49] cross-correlations between the image of the n-th iteration and the system PSF. As mentioned above, the 3D PSF is the element that connects the world of SSLLBs to imaging. In other words, the PSF of the COACH system is designed as one or as a set of beams that belong to the SSLLB family. The use of SSLLBs in imaging applications is justified only if this technology offers a benefit over PSFs that do not belong to the SSLLB family. As we show in the following, certain PSFs from the SSLLB group were found to be useful for 3D imaging, improving the image resolution, image sectioning, and depth-of-field engineering, with a performance that justifies the use of SSLLBs.
The link between SSLLBs and COACH started in 2020 [50] with the use of SSLLBs created by axicons. In the four years between the first COACH demonstration and 2020, COACH appeared in several versions, each with various limitations. The first COACH system [13] recorded self-interference incoherent holograms [51], one hologram for the PSF and the other for the object. A self-interference incoherent hologram means that the light from each object point is split into two mutually coherent waves. Each of these waves is modulated differently, and together they produce an interference pattern on the camera plane, creating a digital hologram of the object point [51]. The incoherent digital hologram of a general object is a collection of these point holograms, each of which belongs to a different object point. Each complex-valued hologram was a superposition of three camera shots according to the well-known phase-shifting procedure [52]. In addition to the need for three shots, the holographic setup was complicated, required special calibration and a light source with relatively high temporal coherence, and was sensitive to setup vibrations. These limitations were removed by interferenceless COACH (I-COACH) [16], in which there is no beam splitting and only a single object wave is modulated by the coded aperture. I-COACH was subsequently demonstrated with only two shots [17] and even a single shot [18]. The PSFs in all these versions [13,14,15,16,17,18] had a chaotic uniform distribution over a predetermined area on the camera plane. Although 3D imaging was demonstrated with these PSFs, the signal-to-noise ratio (SNR) was relatively low because the light intensity from each object point was spread over a much larger sensor area than the area of the point image in conventional imaging. Hence, the signal intensity per camera pixel was much lower than that in direct conventional imaging. The SNR was later increased by nonlinear image reconstruction [19]. However, the SNR was dramatically improved only after the PSF was changed to a pattern of randomly distributed sparse dots [23]. Unfortunately, the cost of the enhanced SNR was the loss of the 3D imaging capability, since only two-dimensional imaging was possible with this I-COACH version. The reason for this limitation is the longitudinal distribution of the sparse-dot PSF: the dots are all focused on a single transverse plane, whereas before and after this plane, the light is scattered over a wide area such that the SNR decreases below the threshold of acceptable imaging.
Initially, the inability to image a 3D scene with the PSF of a dot pattern was the main driver for introducing SSLLBs into I-COACH; later, however, other applications were demonstrated using SSLLBs in I-COACH systems. Note that the term SSLLB is new, and various other names were used in connection with I-COACH before the present review. The next section describes the general methodology of using SSLLBs in I-COACH systems. As mentioned before, the first example of using I-COACH with an SSLLB was the use of axicon-generated beams as the system PSF [50]. This topic is described in the third section. The PSF of Airy beams is discussed in the fourth section, and the PSF of self-rotating beams is described in the fifth section. The sixth section addresses the topic of depth-of-field (DOF) engineering using the PSF of limited nondiffracting beams. The PSF of tilted limited nondiffracting beams enables image sectioning from a single point of view, which is the topic of the seventh section. The ability to sculpt the axial characteristics of incoherent imagers is discussed in the eighth section. The closing section summarizes the entire article.

2. Methodology

The imaging concept of I-COACH and its integration with SSLLBs are described in this section. The optical configuration of I-COACH is shown in Figure 1. Light from a point object, indicated by a red dot with a glow at the head of the Beatle in the top layer of Figure 1, is incident on a coded phase mask (CPM). The CPM is engineered to generate a single SSLLB [6,7,8,9,10], multiple SSLLBs with random 3D propagation characteristics [53,56–58], or a combination of an SSLLB with regular converging or diverging beams [49,59], depending on the required optical characteristics, the limitations of the illumination and the nature of the observed scene. The light modulated by the CPM is recorded as the point spread function (IPSF). The point object is then shifted to multiple axial locations, and the PSF library is recorded. After this calibration procedure, an object O is placed within the axial limits of the PSF library, and the object intensity pattern IO is recorded. The object intensity pattern is processed with the PSF library to reconstruct 3D information about the object. 3D imaging with I-COACH is possible with a scattering mask, as demonstrated during its development in 2017 [16]. However, the depth of focus when reconstructing an IO using a PSF recorded at a particular z is governed by the axial resolution of the system, which is defined by its numerical aperture (NA). With an ensemble of SSLLBs, it is possible to control the depth of focus by controlling the ensemble. Tuning the depth of focus is possible even in direct imaging mode by controlling the NA, but when the axial resolution, given by ~λ/NA², is changed, the lateral resolution, which depends on λ/NA, is also changed. The bottom two rows of Figure 1 show the depth-of-focus tunability in I-COACH and direct imaging. In I-COACH, when the depth of focus is changed, the lateral resolution remains unaffected, whereas in direct imaging, changing the depth of focus affects the lateral resolution. The mathematical formulation presented next is universal to any I-COACH technique with a CPM, although the recording and reconstruction methods can differ between the various I-COACH systems.
The point object located at $\bar{r}_s=(x_s,y_s)$ at a distance $z_s$ from the CPM emits light with an amplitude of $\sqrt{I_s}$. The complex amplitude at the CPM is given as $C_1\sqrt{I_s}\,L(\bar{r}_s/z_s)\,Q(1/z_s)$, where $Q(a)=\exp[j\pi a\lambda^{-1}(x^2+y^2)]$ and $L(\bar{s}/u)=\exp[j2\pi(\lambda u)^{-1}(s_x x+s_y y)]$ are the quadratic and linear phase functions, respectively, and $C_1$ is a complex constant. The CPM transparency function is given as $\Psi=\exp(j\Phi_{CPM})$. The complex amplitude after the CPM is given as $C_2\sqrt{I_s}\,Q(1/z_s)\,L(\bar{r}_s/z_s)\,\Psi$, where $C_2$ is a complex constant. The IPSF recorded at a distance $z_h$ from the CPM is given as

$$I_{PSF}(\bar{r}_O;\bar{r}_s,z_s)=\left|\left[C_2\sqrt{I_s}\,Q\!\left(\tfrac{1}{z_s}\right)L\!\left(\tfrac{\bar{r}_s}{z_s}\right)\Psi\right]\otimes Q\!\left(\tfrac{1}{z_h}\right)\right|^2, \qquad (1)$$

where $\bar{r}_O$ is the location vector in the sensor plane and '$\otimes$' is a 2D convolution operator. Equation (1) can be expressed as

$$I_{PSF}(\bar{r}_O;\bar{r}_s,z_s)=I_{PSF}\!\left(\bar{r}_O-\tfrac{z_h}{z_s}\bar{r}_s;0,z_s\right). \qquad (2)$$

The object distribution at some plane can be considered a collection of points given as

$$o(\bar{r}_s)=\sum_{p=1}^{S}a_p\,\delta(\bar{r}-\bar{r}_{s,p}), \qquad (3)$$

where $a_p$ is the amplitude of the $p$-th point and $S$ is the number of object points. In this study, only spatially incoherent illumination is considered; thus, the light from different object points does not interfere, and the intensities add. Therefore, the object intensity is given as

$$I_O(\bar{r}_O;z_s)=\sum_{p=1}^{S}a_p\,I_{PSF}\!\left(\bar{r}_O-\tfrac{z_h}{z_s}\bar{r}_{s,p};0,z_s\right). \qquad (4)$$

The image of the object is reconstructed by a cross-correlation between the IO and IPSF, given as

$$I_R(\bar{r}_R)=\int I_O(\bar{r}_O;z_s)\,I_{PSF}^{*}(\bar{r}_O-\bar{r}_R;z_s)\,d\bar{r}_O=\sum_{p=1}^{S}a_p\int I_{PSF}\!\left(\bar{r}_O-\tfrac{z_h}{z_s}\bar{r}_{s,p};0,z_s\right)I_{PSF}^{*}(\bar{r}_O-\bar{r}_R;z_s)\,d\bar{r}_O=\sum_{p=1}^{S}a_p\,\Lambda\!\left(\bar{r}_R-\tfrac{z_h}{z_s}\bar{r}_{s,p}\right), \qquad (5)$$

where $\Lambda$ is a delta-like function that composes the image of the object. The size of $\Lambda$ represents the lateral resolution of the system and is related to $\lambda z_h/(z_s\,\mathrm{NA})$, governed by the NA. The minimum limit of the axial resolution is approximately $\lambda/\mathrm{NA}^2$, and the maximum limit is the focal depth of the SSLLB.
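To make Equations (3)-(5) concrete, a minimal numerical sketch of the incoherent recording and of the linear cross-correlation reconstruction is given below (Python/NumPy). The speckle-like IPSF used here, the squared magnitude of the Fourier transform of a random phase, is only an illustrative stand-in for a propagated CPM response, and all sizes and point locations are arbitrary choices of this sketch.

```python
# Minimal sketch of Eqs. (3)-(5): an incoherent object response built as a
# sum of shifted copies of I_PSF, reconstructed by cross-correlation with I_PSF.
import numpy as np

N = 256
rng = np.random.default_rng(0)

# Illustrative chaotic intensity PSF of a single axial plane
cpm = np.exp(1j * 2 * np.pi * rng.random((N, N)))
I_psf = np.abs(np.fft.fftshift(np.fft.fft2(cpm)))**2
I_psf /= I_psf.sum()

# Eq. (3): the object as a collection of incoherent points
obj = np.zeros((N, N))
obj[100, 90], obj[128, 128], obj[150, 170] = 1.0, 0.7, 0.4

F, Finv = np.fft.fft2, np.fft.ifft2

# Eq. (4): the recorded intensity is the 2D convolution of the object with I_PSF
I_obj = np.real(Finv(F(obj) * F(I_psf)))

# Eq. (5): linear reconstruction by cross-correlation with I_PSF
I_rec = np.real(Finv(F(I_obj) * np.conj(F(I_psf))))
print(np.unravel_index(np.argmax(I_rec), I_rec.shape))  # ~ location of the brightest object point
```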
The reconstruction of an object image using regular cross-correlation between the IO and IPSF, according to Equation (5), is not always the optimal reconstruction method. There are numerous methods available that can be used to reconstruct object information from IO and IPSF, such as phase-only filtering [60], Wiener deconvolution (WD) [61], nonlinear reconstruction (NLR) [19], the Lucy-Richardson algorithm (LRA) [62,63], the Lucy-Richardson-Rosen algorithm (LRRA) [47], and the recently developed incoherent nonlinear deconvolution using an iterative algorithm (INDIA) [64]. A comparison study was carried out between the above reconstruction methods for different types of PSFs, and both the LRRA and WD methods were found to perform better than the LRA and NLR methods. Hence, LRRA and WD are suitable for providing initial guess solutions for INDIA. Overall, INDIA is better than the other four methods.
The reconstruction approach of INDIA is briefly discussed next. INDIA is considered the next generation of nonlinear deconvolution methods. Both NLR and INDIA are based on the fundamental fact that most of the information about an object is in the phase of its spectrum. A nonlinear correlation between the IO and IPSF can be expressed with Fourier transforms as $I_R=\mathcal{F}^{-1}\left\{|\tilde{I}_{PSF}|^{\alpha}\exp[-j\cdot\arg(\tilde{I}_{PSF})]\,|\tilde{I}_O|^{\beta}\exp[j\cdot\arg(\tilde{I}_O)]\right\}$, where α and β are constants whose values are searched until a certain cost function is minimized, arg(∙) refers to the phase, and $\tilde{A}$ is the Fourier transform of A. As described above, the phase information is retained, but the magnitude is tuned to achieve the minimum cost function. However, in NLR, tuning the magnitude does not necessarily yield the optimal solution because the magnitude is not tuned pixel-wise but rather for the matrix as a whole; the parameters α and β raise all the pixel values of the entire matrix to the same power. In INDIA, the magnitude is tuned pixel-wise using an iterative approach, which allows us to estimate a better solution than can be obtained from NLR. Since it has been recently established that LRRA usually performs better than NLR, the output of LRRA is given as the initial guess for INDIA. LRRA was developed from LRA by replacing the regular correlation with the NLR. A schematic of LRRA is shown on the right side of Figure 2. LRRA begins with an initial guess of the object information, which is usually IO in the first iteration, i.e., Rq=1 = IO, but can be any matrix, where Rq is the solution of the q-th iteration. In each iteration, the current solution Rq is convolved with the recorded IPSF, and the recorded IO is divided by the resulting estimate IO′ to obtain a ratio. This ratio is the discrepancy between the recorded IO and the estimate IO′ produced from the current solution. The ratio is correlated with IPSF by the NLR to obtain a residual weight, which multiplies the previous solution Rq to produce the next one. With every iteration, the solution approaches the ideal solution.
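A minimal sketch of the NLR expression and of the LRRA loop described above is given below, assuming real nonnegative intensity matrices of equal size; the cost-function search for α and β, normalizations, and sub-pixel shift handling are omitted, and the default parameter values are illustrative only.

```python
# Sketch of the nonlinear reconstruction (NLR) and of the Lucy-Richardson-Rosen
# algorithm (LRRA), in which the correlation steps of LRA are replaced by NLR.
import numpy as np

def nlr(I_obj, I_psf, alpha=0.0, beta=0.6):
    """I_R = F^-1{ |I~PSF|^alpha exp(-j arg I~PSF) |I~O|^beta exp(j arg I~O) }."""
    Op, Oo = np.fft.fft2(I_psf), np.fft.fft2(I_obj)
    spec = (np.abs(Op) ** alpha * np.exp(-1j * np.angle(Op)) *
            np.abs(Oo) ** beta * np.exp(1j * np.angle(Oo)))
    return np.abs(np.fft.ifft2(spec))

def lrra(I_obj, I_psf, iterations=10, alpha=0.0, beta=0.6):
    """Lucy-Richardson iterations with the correlation replaced by NLR."""
    R = I_obj.copy()                                   # initial guess R_1 = I_O
    for _ in range(iterations):
        est = np.abs(np.fft.ifft2(np.fft.fft2(R) * np.fft.fft2(I_psf)))  # R_q convolved with I_PSF
        ratio = I_obj / (est + 1e-12)                  # discrepancy between recorded and estimated I_O
        R = R * nlr(ratio, I_psf, alpha, beta)         # residual weight from NLR updates the solution
    return R
```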
A schematic of INDIA is shown on the left side of Figure 2. In the first step, the phase matrices of the spectra of IO and IPSF are calculated and multiplied to generate the phase information. The magnitude of the spectrum obtained by LRRA is used as the initial guess of the spectral magnitude and is multiplied by the phase information, after which the inverse Fourier transform is computed. The phase of the resulting complex amplitude is retained, while a limiting window constraint is applied to its magnitude. This limiting window matches the size of the object, which can be estimated either from an autocorrelation or from the result of the NLR or LRRA. The results are compared with those of direct imaging, and the above process is iterated to obtain the optimal solution. In the following sections, INDIA is applied to I-COACH with the different types of SSLLBs.
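The loop below is only a rough illustration of the INDIA procedure as described here, not the exact algorithm of [64]: the correlation phase is kept fixed while the spectral magnitude is refined pixel-wise under an object-support ("limiting window") constraint. The support mask, the use of the LRRA output as the initial guess, and the fixed number of iterations are assumptions of this sketch.

```python
# Rough sketch of the INDIA idea: fixed correlation phase, iterative pixel-wise
# refinement of the spectral magnitude under an object-support constraint.
import numpy as np

def india(I_obj, I_psf, support, R_init, iterations=20):
    """support: binary mask of the estimated object extent (limiting window);
       R_init: initial guess image, e.g., the LRRA output."""
    Oo, Op = np.fft.fft2(I_obj), np.fft.fft2(I_psf)
    corr_phase = np.exp(1j * (np.angle(Oo) - np.angle(Op)))   # fixed phase information
    mag = np.abs(np.fft.fft2(R_init))                         # initial spectral magnitude
    for _ in range(iterations):
        r = np.fft.ifft2(mag * corr_phase)                    # back to the image domain
        r = np.exp(1j * np.angle(r)) * (np.abs(r) * support)  # keep the phase, window the magnitude
        mag = np.abs(np.fft.fft2(r))                          # pixel-wise magnitude update
    return np.abs(np.fft.ifft2(mag * corr_phase))
```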
Before the development of I-COACH, tuning axial resolution was demonstrated in COACH by hybridization methods, where a hybrid CPM is formed by combining the CPMs of COACH and Fresnel incoherent correlation holography (FINCH) [65–68] and tuning their contributions [15,30]. FINCH has a low axial resolution but a higher lateral resolution, whereas COACH has axial and lateral resolutions comparable to those of direct imaging systems with the same NA. In [15], by tuning the contributions of FINCH and COACH in the CPM, the axial and lateral resolutions were tuned between the limits of FINCH and COACH. In [30], a unique hybridization was used to achieve the high axial resolution of COACH and high lateral resolution of FINCH.

3. I-COACH with Bessel Beams

I-COACH with a CPM that generates a random array of sparse Bessel beams was used to tune the axial resolution independently of the lateral resolution by controlling the degree of chaos, i.e., the number of Bessel beams [56]. The greater the number of Bessel beams, the higher the axial resolution. A Bessel beam, when used as a PSF, exhibits lower axial and lateral resolutions than the PSF of a Gaussian beam [6]. The lower axial resolution (i.e., longer DOF) arises from the nearly constant intensity of the Bessel distribution at different depths. The lower lateral resolution results from the characteristic strong sidelobes of the transverse Bessel distribution. In I-COACH, the reconstructed response for a point is not a Bessel distribution but a delta-like function; due to the NLR, this delta-like function is close to the diffraction-limited spot size, which reaches the maximum limit of the lateral resolution. The DOF of the indirect imaging mode is calculated as the width of the axial curve given by the central value (x=y=0) of $I_{PSF}(z_{ref})\ast_{\beta\alpha}I_{PSF}(z)$ along z, where '$\ast_{\beta\alpha}$' indicates the NLR operation. The obtained DOF is the same as the axial length of the Bessel beam. This is because the 2D cross-correlation between mutually axially shifted Bessel beams is close to the 2D autocorrelation of a Bessel beam.
However, if there are two Bessel beams that are not collinear, then with a change in z, even though the two Bessel distributions do not change with respect to themselves, there is a change relative to one another. Therefore, I P S F ( z r e f ) and I P S F ( z ) are no longer the same but vary mainly due to the relative variation between the two Bessel distributions. When the number of Bessel distributions increases further, the difference between I P S F ( z r e f ) and I P S F ( z ) increases further and reaches the axial resolution limit of the direct imaging system with a Gaussian beam. However, the lateral resolutions for all the above cases remain the same. The optical configuration of I-COACH with an ensemble of Bessel beams is shown in Figure 3(a).
A simulation study is shown in Figure 3(b). The simulation was performed with square matrices of 250 K pixels for a wavelength of 0.65 µm, and the physical size of each pixel was set to 10 µm. The number of Bessel beams was increased in steps of 1 from p = 1 to 4, and the behavior was simulated for each value of p. Two test objects, namely, the logos of Ben-Gurion University of the Negev and the University of Tartu, were used for the simulation study. The object distances zs for the two test objects were 37 cm and 40 cm, and the recording distance zh was 40 cm for both targets. The CPMs were designed by random multiplexing [24,27] of diffractive axicons given as $\Psi=\sum_{k=1}^{p}\exp\!\left[j\frac{2\pi}{\Lambda_{k1}}\sqrt{(x-a_{k1})^2+(y-a_{k2})^2}\right]\exp\!\left(j\frac{2\pi}{\Lambda_{k2}}x\right)\exp\!\left(j\frac{2\pi}{\Lambda_{k3}}y\right)Mask(k)$ with different locations and linear phases, where $\Lambda_{k1}$, $\Lambda_{k2}$ and $\Lambda_{k3}$ are the periods of the k-th axicon and of its linear phases in the x and y directions, respectively; $a_{k1}$ and $a_{k2}$ are the shifts in the x and y directions, respectively; and Mask(k) is the k-th binary random mask, which is orthogonal to any other Mask(m), k≠m. There are different approaches for designing mutually exclusive binary random matrices. In our study, a normalized grayscale random matrix was created in the first step. Mutually exclusive binary random matrices were then created by assigning the value one to the pixel locations of the grayscale random matrix that fall within a given range of values and zero to all other locations. For instance, for p = 4, four mutually exclusive binary matrices are created by selecting the pixels of the grayscale random matrix with values of 0 to 0.25, 0.25 to 0.5, 0.5 to 0.75 and 0.75 to 1, such that the sum of all the mutually exclusive random matrices yields a uniform matrix with all pixels set to a value of 1. For p = 1, the Mask function is uniform. The images of the phase mask, the IPSFs at the two distances, the object intensity patterns for the two test objects at locations z1 and z2, and the reconstruction results, obtained using INDIA with the output of LRRA as the initial guess and approximately ten iterations, are shown in Figure 3(b).
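A sketch of this CPM construction, p shifted diffractive axicons with linear phases multiplexed by mutually exclusive binary random masks, is given below; the axicon period, shifts and linear-phase periods are illustrative values, not those of the reported simulations.

```python
# Sketch of the randomly multiplexed axicon CPM: p shifted axicons, each with
# its own linear phase, selected by mutually exclusive binary random masks.
import numpy as np

N, pitch, p = 500, 10e-6, 4                     # matrix size, pixel size (m), number of beams
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
rng = np.random.default_rng(1)

# Mutually exclusive binary masks from one normalized grayscale random matrix
g = rng.random((N, N))
masks = [(g >= k / p) & (g < (k + 1) / p) for k in range(p)]

Psi = np.zeros((N, N), dtype=complex)
for k in range(p):
    L1 = 80e-6                                       # axicon period (kept equal here)
    L2, L3 = rng.uniform(50e-6, 200e-6, 2)           # linear-phase periods in x and y
    a1, a2 = rng.uniform(-1e-3, 1e-3, 2)             # axicon center shifts
    axicon = np.exp(1j * 2 * np.pi / L1 * np.sqrt((X - a1) ** 2 + (Y - a2) ** 2))
    tilt = np.exp(1j * 2 * np.pi * (X / L2 + Y / L3))
    Psi += masks[k] * axicon * tilt

cpm_phase = np.angle(Psi)    # phase-only CPM displayed on the SLM
```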

4. I-COACH with Airy Beams

The optical configuration of I-COACH with an ensemble of Airy beams is shown in Figure 4(a). Airy beams have focal characteristics similar to those of Bessel beams but exhibit nonrectilinear propagation paths. While most studies of Airy beams involve a coherent light source [69–71], there are also a few studies with incoherent sources demonstrating that the focal characteristics are quite similar to those obtained with a coherent source [57,72]. Like Bessel beams, Airy beams have strong sidelobes; moreover, they are not radially symmetric and have peculiar asymmetric distributions, which give rise to sidelobes in the correlation if sufficient diversity is not introduced into the ensemble. Therefore, in addition to designing cubic phase masks, unique shifts and coordinate rotations are introduced. The CPM for generating an ensemble of Airy beams is given as $\Psi=\sum_{k=1}^{p}Mask(k)\exp\!\left\{j\frac{2\pi}{\lambda}\left[\xi_k(x_k+a_{k1})^3+\eta_k(y_k+a_{k2})^3\right]\right\}$, where $x_k=x_0\cos\theta_k+y_0\sin\theta_k$, $y_k=y_0\cos\theta_k-x_0\sin\theta_k$, and $\xi_k$ and $\eta_k$ are the scaling factors along the x and y directions, respectively. The simulation conditions are similar to those for the Bessel beams. The images of the phase masks, the IPSFs for z1 and z2, and the IOs and IRs corresponding to z1 and z2 are shown in Figure 4(b).
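A corresponding sketch for the Airy-beam ensemble is given below: rotated and shifted cubic phases multiplexed by the same kind of mutually exclusive binary masks; the rotation angles, cubic scaling factors and shifts are illustrative assumptions of this sketch.

```python
# Sketch of the Airy-beam CPM: rotated, shifted cubic phases multiplexed by
# mutually exclusive binary random masks.
import numpy as np

N, pitch, p, lam = 500, 10e-6, 4, 0.65e-6
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
rng = np.random.default_rng(2)

g = rng.random((N, N))
masks = [(g >= k / p) & (g < (k + 1) / p) for k in range(p)]

Psi = np.zeros((N, N), dtype=complex)
for k in range(p):
    theta = rng.uniform(0, 2 * np.pi)                 # coordinate rotation of the k-th beam
    xk = X * np.cos(theta) + Y * np.sin(theta)
    yk = Y * np.cos(theta) - X * np.sin(theta)
    xi, eta = rng.uniform(30, 300, 2)                 # cubic scaling factors (1/m^2)
    a1, a2 = rng.uniform(-1e-3, 1e-3, 2)              # transverse shifts
    cubic = 2 * np.pi / lam * (xi * (xk + a1) ** 3 + eta * (yk + a2) ** 3)
    Psi += masks[k] * np.exp(1j * cubic)

cpm_phase = np.angle(Psi)    # phase-only CPM displayed on the SLM
```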
The difference between using an ensemble of Bessel beams and an ensemble of Airy beams is that with Airy beams, it is possible to transversely shift the images of objects located at different axial planes by different amounts. This property arises from the nonrectilinear propagation of Airy beams. The effect is demonstrated when a single Airy beam is used: two cases with two CPMs demonstrating the push and pull effects are shown in Figure 5. The distance between the two objects was increased to 15 cm. In row 1, the objects are pushed away from one another, while in row 2, the objects are pulled toward one another in the transverse direction. This property allows us to separate transversely overlapping object information corresponding to two axial planes.

5. I-COACH with Self-Rotating Beams

A self-rotating beam is a recently introduced optical beam with a long focal depth whose intensity distribution rotates around an axis during propagation [8]. While the self-rotating beam proposed in [8] involves a complicated design with topological charges, such self-rotating beams can also be generated following conventional methods of designing lenses with a long focal depth, such as the axilens [73]. In the axilens, the focal length depends on the radius, i.e., the Fresnel lens is divided into circular zones, and each zone has a different focal length; consequently, the resulting intensity distribution is similar to a Bessel distribution. To design a diffractive element for the generation of a self-rotating beam, the focal length is instead varied with respect to the azimuthal angle. For the optical configuration shown in Figure 6(a), the focal length is given as $f(\theta)=f_0+\frac{\Delta f}{2\pi}\theta$, where Δf is the depth of focus and f0 is the focal length of the sector with θ = 0. The diffractive element is partitioned azimuthally, with every infinitesimal sector designed for a focal length between f0 and f0+Δf. For the optical configuration shown in Figure 6(a), $\frac{1}{f(\theta)}=\frac{1}{z_s}+\frac{1}{z_i}$, and the image distance is $z_i(\theta)=\left[\frac{1}{f(\theta)}-\frac{1}{z_s}\right]^{-1}$. The phase of the diffractive element is given as $\Phi_{FZL-AVF}(R,\theta)=\frac{2\pi^2R^2}{\lambda(2\pi f_0+\Delta f\,\theta)}$. The CPM is designed to create an ensemble of self-rotating beams with different values of Δf and f0. The transparency function of the CPM is given as $\Psi=\sum_{k=1}^{p}Mask(k)\exp\!\left[\frac{j2\pi^2R^2}{\lambda\left(2\pi f_0(k)+\Delta f(k)\,\theta\right)}\right]$. The simulation study was carried out under the same conditions as those used for the Bessel and Airy beams, and the results are shown in Figure 6(b). Images of the CPMs, the IPSFs at the two axial locations, and the IO and IRs at the two axial locations are shown in columns 1 to 6, respectively. Once again, the change in axial resolution is independent of the lateral resolution for different numbers of self-rotating beams. The performance of an ensemble of self-rotating beams is similar to that of an ensemble of Bessel beams because the paths of the beams are rectilinear and because of the asymmetry in the intensity distributions. In [58], a higher SNR was reported for self-rotating beams than for Airy beams when the NLR was used for reconstruction. However, with LRRA and INDIA, the SNRs obtained for the Airy beams and the self-rotating beams are similar, as shown in Figure 5 and Figure 6.
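The diffractive element with an azimuthally varying focal length described above can be sketched as follows; a single element is shown, and an ensemble is obtained by mask-multiplexing several such elements with different f0 and Δf, exactly as in the previous sketches. The sign convention follows the expression in the text, and the numerical values are illustrative.

```python
# Sketch of the azimuthally varying focal-length element Φ_FZL-AVF(R, θ) that
# generates a self-rotating beam: the focal length grows linearly with the
# azimuth from f0 at θ = 0 to f0 + Δf at θ = 2π.
import numpy as np

N, pitch, lam = 500, 10e-6, 0.65e-6
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
R2 = X ** 2 + Y ** 2
theta = np.mod(np.arctan2(Y, X), 2 * np.pi)        # azimuth in [0, 2π)

f0, df = 0.30, 0.05                                # f0 and depth of focus Δf (m)
phi = 2 * np.pi ** 2 * R2 / (lam * (2 * np.pi * f0 + df * theta))
element = np.exp(1j * phi)                         # phase-only diffractive element
```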

6. Depth-of-Field Engineering

DOF engineering means that the imaging system is designed to image objects located at different depths in different ways, such that their images appear in the output or are excluded from it according to their axial location. In the DOF engineering scheme, the object space is divided into several isolated volumes of different lengths along the longitudinal axis. Objects located within a certain volume are selectively imaged with the same sharpness regardless of where they are located inside the volume, where the length of the volume determines the DOF of the system for all objects inside this volume. Objects positioned outside any volume are not imaged at all, meaning that their image does not appear in the system's output. Using DOF engineering, one can selectively image objects from any chosen volume separately or simultaneously with other volumes. These choices are made in the digital reconstruction stage after recording the object hologram, and images from different volumes can be transversely shifted to avoid overlap between images from the various volumes. In this section, we briefly summarize the work on DOF engineering in I-COACH imaging systems based on the PSFs of SSLLBs of limited nondiffracting beams [53].
The DOF engineering proposed in [53] is performed by introducing radial quartic phase functions (RQPFs) into the aperture of the I-COACH system, as schematically shown in Figure 7. The use of RQPFs for creating limited nondiffracting beams was proposed in [9,10] (under a different term than RQPF). The mathematical justification for using the RQPF is based on the McCutchen theorem [54]. According to the McCutchen theorem, there is a Fourier relation between the radial distribution of a transverse aperture of a focusing light field and the longitudinal field distribution around the focal point, where the focal point is the axis origin of this longitudinal distribution. In formal notation, the 2D focusing aperture is $g(r,\theta)$, where $(r,\theta)$ are the polar coordinates of the aperture, and a focusing aperture means that $g(r,\theta)$ is illuminated by a converging spherical wave. The aperture radial distribution is $t(r)=\int_0^{2\pi}g(r,\theta)\,d\theta$. According to the McCutchen theorem, the intensity along the z-axis satisfies the relation $I(z)\propto|\mathcal{F}\{t(\rho)\}|^2$, where $\mathcal{F}$ indicates a one-dimensional Fourier transform, the focal point is the origin of the longitudinal axis z, and $\rho=r^2$. Recalling that the goal is to create a limited nondiffracting beam, $I(z)$ should be constant along a limited range. In a table of Fourier transforms, there are two well-known functions for creating a constant intensity $I(z)$. One is the Dirac delta function $t(\rho)=\delta(\rho-\rho_o)$, where $\rho_o$ can be any constant between zero and the squared radius of the aperture. This delta function dictates an annular aperture for $g(r,\theta)$, and when $g(r,\theta)$ is illuminated by a focusing beam, the resulting beam near the focus is the well-known Bessel beam [55], which is indeed a limited nondiffracting beam. The other optional function that yields a constant $I(z)$ is a quadratic phase function in $\rho$, which is a quartic phase function in $r$, i.e., $t(\rho)=\exp(i2\pi\rho^2/q^4)=\exp(i2\pi r^4/q^4)$, where q is a real constant. This $t(\rho)$ is a pure phase function, and if $g(r,\theta)=\exp(i2\pi r^4/q^4)$, this aperture, unlike the annular aperture, does not absorb light when it is introduced as a mask in the optical system. Therefore, the RQPF has become an attractive alternative to narrow annular apertures or any other light-absorbing apertures. Note that because the McCutchen theorem is valid only for focusing beams, to create a limited nondiffracting beam with an RQPF, one needs to illuminate the RQPF with a focusing beam. Alternatively, two attached components, an RQPF and a thin spherical lens, can be illuminated by a spherical wave such that a converging spherical wave is emitted from the attached components; according to the McCutchen theorem, the limited nondiffracting beam then starts from the focal point of the converging wave.
The simplest case of DOF engineering is the extension of the imaging system's DOF, which means that there is only a single volume instead of the multivolume scheme mentioned above, as shown in Figure 7. The way to extend the DOF proposed in [53] is to integrate a single RQPF into the I-COACH system with a PSF of sparse dots [23]. Without the RQPF, the I-COACH system with a PSF of sparse dots has an aperture of two optical elements. The first element is the CPM, which creates a randomly distributed pattern of several dots. The other element is a converging lens that guarantees that the dots are obtained on the camera plane. The RQPF, which is now the third mask in the combined coded aperture, extends the length of the dots and hence extends the DOF. In principle, the various optical masks can be introduced into the system separately; however, in the present case [53], all the elements appear in a single phase mask displayed on the SLM, where the mask has a phase distribution that is a multiplication of the three phase functions as follows: $\exp\{i[2\pi r^4/q^4-\pi r^2/(\lambda f)+\Phi(r)]\}$, where λ is the central wavelength and $\Phi(r)$ is the chaotic function of the CPM, which creates the randomly distributed dots [23]. The focal distance f of the positive diffractive lens is set to satisfy the imaging relations between the object and camera planes. In formal notation, the focal length f and the gaps between the object and the SLM, u, and between the SLM and the camera, v, fulfill the relation 1/f = (1/u) + (1/v), known as the imaging equation. When the RQPF is attached to the CPM and to the converging lens, the PSF is changed from a certain number of focused dots to the same number of rods that remain in focus along a limited interval; consequently, the DOF of the dots is extended. Therefore, any object point located within the axial range of the DOF yields approximately the same dot pattern on the camera. Moreover, because of the extended DOF of the PSF, any object located within the axial range of the DOF is imaged with approximately the same sharpness. The phase function of the RQPF, $\exp(i2\pi r^4/q^4)$, extends the randomly distributed dots from focal points to intensity rods, where q is the constant dictating the extent of the rods and hence the length of the DOF. In other words, the RQPF generates limited nondiffracting beams on the camera plane with an approximately constant intensity along a limited gap and a relatively narrow beam-like profile at any lateral plane. According to [53], the DOF of the system is $\lambda u^2W^2/(q^4-\lambda uW^2)$, where W is the diameter of the aperture. The DOF length and the initial point of the rods are determined by modifying the constant q of the RQPF and the focal length of the lens f, respectively.
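A sketch of the combined coded aperture for DOF extension follows: the RQPF, the imaging lens and a dot-generating CPM are merged into one phase mask, and the DOF is estimated with the expression quoted above as written in the text. The chaotic CPM of [23] is only mimicked here by randomly multiplexed linear phases, and the aperture size and q value are illustrative.

```python
# Sketch of the DOF-extending aperture: RQPF x diffractive lens x dot-generating
# CPM, displayed as a single phase-only mask on the SLM.
import numpy as np

N, pitch, lam = 500, 10e-6, 0.65e-6
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
r2 = X ** 2 + Y ** 2
rng = np.random.default_rng(3)

u, v = 0.25, 0.22                                  # object-SLM and SLM-camera gaps (m)
f = 1.0 / (1.0 / u + 1.0 / v)                      # imaging equation 1/f = 1/u + 1/v
q = 2.5e-3                                         # RQPF constant (m)
W = N * pitch                                      # aperture diameter (m)

rqpf = 2 * np.pi * r2 ** 2 / q ** 4                # quartic phase 2π r^4 / q^4
lens = -np.pi * r2 / (lam * f)                     # converging diffractive lens

# Crude stand-in for the sparse-dot CPM of [23]: 10 randomly multiplexed tilts
n_dots = 10
g = rng.random((N, N))
phi_cpm = np.zeros((N, N))
for k in range(n_dots):
    sel = (g >= k / n_dots) & (g < (k + 1) / n_dots)
    ax, ay = rng.uniform(-2e4, 2e4, 2)             # spatial frequencies (1/m)
    phi_cpm[sel] = 2 * np.pi * (ax * X[sel] + ay * Y[sel])

aperture = np.exp(1j * (rqpf + lens + phi_cpm))    # single phase-only coded aperture
dof = lam * u ** 2 * W ** 2 / (q ** 4 - lam * u * W ** 2)
print(f"estimated DOF ~ {dof * 100:.1f} cm")       # ~3 cm for these illustrative values
```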
The DOF extension method was demonstrated with the optical setup shown in Figure 7. Two objects located at different distances from the coded aperture were positioned at the two edges of the observed volume, at distances of 24 and 26 cm from the coded aperture displayed on the SLM. The two targets are different parts of the USAF transmission resolution chart; one is element 6 of group 2, and the other is element 1 of group 3. A distance of 22 cm was set between the camera and the SLM. When only a single diffractive positive lens is displayed on the SLM, a direct image of the targets is created on the camera plane [shown in Figure 8(a,b)], where each time a different value of the focal length satisfies the imaging equation for the object located at the corresponding depth. For direct imaging, Figure 8(a) and Figure 8(b) clearly show that the longitudinal distance between these objects is too long for both objects to be in focus in the same camera shot. The I-COACH coded aperture contained a CPM that produced 10 randomly distributed dots on the detection plane. The I-COACH-recovered image is shown in Figure 8(c), which reveals that both objects are in focus without a resolution reduction. According to these results, the system is engineered to have a DOF of 3 cm.
Next, we describe a more advanced DOF engineering technique in which objects from more than a single volume are imaged. Objects from the two volumes shown in Figure 7 are processed such that two situations are considered. In the first situation, in-focus imaging is demonstrated first for objects within volume 1 and then for those within volume 2, where objects located outside these volumes are always out of focus. In the second situation, two images belonging to the two different volumes are reconstructed simultaneously with a lateral shift between them. Moreover, they are reconstructed from the same object hologram used in the first situation. In other words, different reconstruction operations performed on the same hologram yield different images, and there is no need to record a different hologram for every image. The lateral image shift is meaningful for targets positioned on the same line of sight that consequently overlap each other. Measured from the SLM, volume 1 occupied the interval of 16.9-18.5 cm, and volume 2 occupied the interval of 22.2-24.7 cm. The goal of the first situation is to recover, from a single recorded hologram, an image belonging only to either volume 1 or volume 2. Because volumes 1 and 2 are located at different depths, different sets of parameters are needed for the three phase components of the RQPF, lens, and CPM. The two sets of phase elements can be multiplexed either in time or in space, where time multiplexing means recording more than a single hologram, and space multiplexing means a reduced SNR in comparison to time multiplexing. To have a single phase aperture on the SLM and to record a single hologram for objects from both volumes, two different sets of three phase masks were spatially multiplexed, as shown in Figure 9. Binary circular gratings with a ring width of 30 pixels, shown in the fifth column of Figure 9, were used to divide the SLM area between the two sets of phase components, one set for each volume. Specifically, the phase masks corresponding to volume 1 were displayed on the white rings, and those corresponding to volume 2 were displayed on the black rings. Therefore, the final coded aperture has the capability to image targets from both volumes. Furthermore, linear phase masks, shown in the first column of Figure 9, were added to the two sets to transversely separate the images of the targets from the two volumes; horizontal linear phases with deflection angles of +0.9° and −0.9° were introduced to the phase masks of volumes 1 and 2, respectively. These linear phases deflect the light beyond the SLM in two opposite directions; hence, images of objects from the two volumes are shifted in opposite directions.
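The ring-based spatial multiplexing of the two phase sets can be sketched as follows. The two phase sets phi_vol1 and phi_vol2 (each an RQPF + lens + CPM combination matched to its volume, built as in the previous sketch) are represented here by placeholders, while the ring width and deflection angles follow the values quoted above.

```python
# Sketch of the two-volume spatial multiplexing: a binary circular grating with
# 30-pixel-wide rings assigns each SLM pixel to the phase set of volume 1
# (white rings) or volume 2 (black rings), each with an opposite linear phase.
import numpy as np

N, pitch, lam = 500, 10e-6, 0.65e-6
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
R_pix = np.sqrt(X ** 2 + Y ** 2) / pitch               # radius measured in pixels

ring_width = 30
white_rings = (np.floor(R_pix / ring_width) % 2 == 0)  # binary circular grating

deflect = np.deg2rad(0.9)                              # +/-0.9 degree lateral deflections
tilt1 = +2 * np.pi * np.sin(deflect) / lam * X
tilt2 = -2 * np.pi * np.sin(deflect) / lam * X

phi_vol1 = np.zeros((N, N))                            # placeholder: phase set for volume 1
phi_vol2 = np.zeros((N, N))                            # placeholder: phase set for volume 2

phi_slm = np.where(white_rings, phi_vol1 + tilt1, phi_vol2 + tilt2)
aperture = np.exp(1j * phi_slm)                        # combined coded aperture on the SLM
```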
The imaging results for the two volumes in the abovementioned two situations are described in the following. Elements 5-6 of group 3 (12.7-14.25 lp/mm) and element 1 of group 3 (8 lp/mm) of the USAF resolution chart were positioned in volumes 1 and 2, respectively. Figure 10(a) shows direct images of the objects where the image from volume 2 is in-focus and the image from volume 1 is out-of-focus. The images of the two different volumes overlap each other since they are located on the same sightline. This disturbing overlap can be avoided using DOF engineering, as demonstrated in Figure 10(b)-(d). Two different PSFs corresponding to volumes 1 and 2 were formed digitally with the same values of scattering and number of dots. Separate reconstructed images of the targets from the two volumes are shown in Figure 10(b) and Figure 10(c). Cross-correlation with the corresponding PSF of each volume yields these two different images. The results shown in Figure 10(b) and Figure 10(c) illustrate the abovementioned first situation, where the image in Figure 10(d) describes the second situation. The goal of the experiment shown in Figure 10(d) was to reconstruct images of the targets from both volumes via a single digital reconstruction from the same hologram used for the first situation. Figure 10(d) simultaneously shows in-focus images from the two different volumes with transverse separation of the images. The ability of DOF engineering to reconstruct images in different volumes from a single hologram, as well as to achieve lateral separation between the images from the different volumes, has been successfully demonstrated.

7. Image Sectioning from a Single Viewpoint

In this section, we describe a different application of limited nondiffracting beams based on the work published in [40]. The application is image sectioning of a 3D scene from a single viewpoint. Image sectioning means that an image of a single lateral slice of the whole 3D object scene appears in the output, while the out-of-focus contribution from the parts of the object outside the considered slice is suppressed as much as possible. Image sectioning can be considered a special case of DOF engineering in which the entire 3D scene is the only processed volume and the system aims to minimize the DOF of each lateral slice. The sectioning tools proposed in [40] are similar, but not identical, to those used in the DOF engineering [53] discussed in the previous section. Here, we consider three types of phase elements: an RQPF, a linear phase, and a diffractive lens used to satisfy the Fourier relation between the coded aperture plane and the detection plane. The lens appears in the combination only once, whereas several pairs of RQPFs and linear phases are multiplexed on a single phase mask together with the lens. Each linear phase has a random vector $\bar{a}=(a_x,a_y)$ that multiplies the spatial variables (x,y). By combining a diffractive lens with several pairs of RQPFs and linear phases, a set of parallel rods is generated on the camera plane in random order. The transverse location of each rod is determined by the corresponding linear phase. To make this PSF capable of performing image sectioning, one needs to tilt each rod at a different angle relative to the z-axis. Therefore, the proposed method is called sectioning by tilted intensity rods (STIR). As described and demonstrated in [40], a shift between the center of each RQPF and the center of the lens tilts the corresponding rod by an angle relative to the z-axis. In formal notation, the aperture of the I-COACH system is a multiplication of the positive lens with several shifted RQPFs, each of which is multiplied by a different linear phase as follows:
$$T(\bar{r})=\exp\!\left(\frac{i\pi|\bar{r}|^2}{\lambda f_2}\right)\sum_{k=1}^{K}\Theta_k(\bar{r})\exp\!\left(i\pi b\left|\bar{r}-\bar{d}_k\right|^4\right)\exp\!\left(i2\pi\,\bar{a}_k\cdot\bar{r}\right), \qquad (6)$$

where $\bar{r}=(x,y)$, $f_2$ is the focal length of the diffractive lens, K is the number of tilted beams in the PSF, b is the parameter of the RQPFs, which dictates the length of the rods, and $\bar{a}_k=(\sin\alpha_{x,k},\sin\alpha_{y,k})/\lambda$ is the vector of the linear phase parameters of the k-th beam, generating for each k-th rod a horizontal shift of $f_2\sin\alpha_{x,k}$ and a vertical shift of $f_2\sin\alpha_{y,k}$ from the origin of the detection plane. $\Theta_k(\bar{r})$ are orthogonal binary functions that randomly jump between the values 1 and 0 and satisfy $\sum_{k=1}^{K}\sum_{\bar{r}}\Theta_k(\bar{r})=N\cdot M$, assuming that the I-COACH aperture is displayed on a matrix of N×M pixels (i.e., the binary masks are mutually exclusive and together cover the entire aperture). When the phase-only mask of Equation (6) is illuminated with a plane wave, a set of K tilted rods is randomly distributed over the detection plane. All the rods begin at the back focal plane of the diffractive lens and extend along a finite length that depends on the constant b. Each k-th rod has its own shift vector $\bar{d}_k$; hence, each rod is tilted at a different angle and orientation. Therefore, when the point object moves back and forth inside the object volume, at every axial location, the camera records a different PSF of dots, as shown in Figure 11. Every PSF has the same number of dots, but their arrangement on the detection plane is different because of the different tilt of each rod. The system stores all these PSFs in the PSF library. In the reconstruction stage, when one PSF from the library is cross-correlated with the object hologram, only one slice of the 3D object is imaged on the output plane, namely, the slice that corresponds to the chosen PSF.
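A sketch of the STIR aperture of Equation (6) is given below: a diffractive lens multiplied by K shifted RQPFs, each carrying its own linear phase and selected by orthogonal binary masks. The sign conventions follow Equation (6) as written, and all numerical values are illustrative.

```python
# Sketch of the STIR aperture T(r) of Eq. (6): K shifted RQPFs with individual
# linear phases, mask-multiplexed and multiplied by a diffractive lens.
import numpy as np

N, pitch, lam = 500, 10e-6, 0.65e-6
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
rng = np.random.default_rng(4)

f2, K, b = 0.25, 5, 2e11                          # lens focal length (m), rods, RQPF constant (1/m^4)
g = rng.random((N, N))

T = np.zeros((N, N), dtype=complex)
for k in range(K):
    Theta_k = (g >= k / K) & (g < (k + 1) / K)                    # orthogonal binary masks
    dx, dy = rng.uniform(-1e-3, 1e-3, 2)                          # RQPF center shift d_k -> rod tilt
    ax, ay = rng.uniform(-3e3, 3e3, 2)                            # linear-phase vector a_k (1/m)
    rqpf_k = np.exp(1j * np.pi * b * ((X - dx) ** 2 + (Y - dy) ** 2) ** 2)
    tilt_k = np.exp(1j * 2 * np.pi * (ax * X + ay * Y))
    T += Theta_k * rqpf_k * tilt_k

T *= np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (lam * f2))          # diffractive lens term
slm_phase = np.angle(T)                                           # phase-only mask on the SLM
```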
In the following experiment, the imaged scene was designed so that an image from one plane obscures that from another plane by appropriately choosing the lateral locations of the objects. In such a scenario, successful reconstruction of all the objects located at the different planes is considered image sectioning, a challenging task with data captured from only a single viewpoint without scanning along the z-axis. Figure 12 (left) shows the image reconstruction of each plane of interest based on a single hologram, along with lens-based imaging. Here, the compared holographic methods are I-COACH, STIR, and I-COACH with a long DOF (CLDOF), which is implemented with the PSF of straight (untilted) light rods, as discussed in section 6 and in [53]. The CLDOF was incorporated into the experiment to emphasize the significance of the inclination of the light beams for sectioning. Obviously, STIR accomplishes sectioning of the 3D scene, whereas all the other methods do not. Apparently, the occluded signal cannot be recovered from an I-COACH hologram without considerable noise, and the recovered distorted signals contain undesired traces of the object that resides in the adjacent plane. On the other hand, in STIR, the multiple and pseudorandom transverse dislocations between adjacent planes enable us to fully recover the occluded image, leading to a complete and accurate reconstruction of the objects from each plane of interest. Like STIR, CLDOF also extends the in-focus range and captures a hologram that contains multiple planes of interest in focus simultaneously. However, in CLDOF, all the image replicas have the same spatial distribution as that of the 3D scene. Hence, CLDOF cannot distinguish the different planes within the volume, and it recovers all the structures with the same quality. As an additional test, we inspected how each method performs under full overlap between the objects from each plane. In this demonstration, the vertical grating was replaced by the digit '1', and the transverse location was set accordingly. Figure 12 (right) shows that the tilted beam approach of STIR can perform an optical sectioning procedure, exposing the fully obscured object with high fidelity. On the other hand, COACH and CLDOF cannot distinguish the multiple planes of interest, and their reconstruction results are highly distorted.

8. Sculpting Axial Characteristics with Incoherent Imagers

Recently, a modified approach to tune the axial resolution of an incoherent imaging system without affecting the lateral resolution was proposed using an ensemble of only two beams, one with a high focal depth and the other with a low focal depth [59]. This method is based on the hybridization approach demonstrated in 2017 to combine the properties of FINCH and COACH [15]. FINCH has a higher lateral resolution but a lower axial resolution than COACH. To create an incoherent digital holography system with mixed characteristics of FINCH and COACH, a hybridization method was developed in which the phase masks of FINCH and COACH were combined, and a parameter α was used to tune the contributions of the two phase masks: α = 0 corresponds to FINCH, α = 1 corresponds to COACH, and other values of α yield mixed properties of FINCH and COACH. However, in that study [15], the lateral resolution was not constant but varied when the axial resolution was varied. In addition, since the hybridization method was developed in the initial stages of COACH, three camera shots were needed.
To demonstrate the tuning of the axial resolution without affecting the lateral resolution, two phase masks, namely, a diffractive axicon and a diffractive lens, were combined using two parameters, T1 and T2. The CPM function is given as $\Psi\propto\exp\!\left[\frac{i\pi T_1}{\lambda f}(x^2+y^2)\right]+\exp\!\left[\frac{i2\pi T_2}{\Lambda}\sqrt{x^2+y^2}\right]$, where f is the focal length of the diffractive lens, Λ is the period of the diffractive axicon, $0\le T_1\le1$ and $0\le T_2\le1$. When T1 = 1 and T2 = 0, the CPM is a diffractive lens; when T1 = 0 and T2 = 1, the CPM is a diffractive axicon; and for other combinations, mixed properties of the diffractive lens and the axicon are obtained. In the original study [59], random multiplexing was avoided using a phase retrieval algorithm called transport of amplitude into phase based on the Gerchberg–Saxton algorithm (TAP-GSA), which improved the SNR and the light throughput, as described in [59,74]. Previous studies [75] have shown that, instead of random multiplexing or multiplexing by TAP-GSA, it is also possible to use the phase mask obtained from the phase of the sum of the complex transmission functions of the diffractive axicon and the diffractive lens.
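A sketch of the two-beam hybrid CPM is given below, combining the weighted lens and axicon terms into one phase-only element by taking the phase of their complex sum, i.e., the simple alternative mentioned above [75]; the TAP-GSA synthesis of the original work [59] is not reproduced here, the placement of T1 and T2 follows the expression as written in the text, and the focal length and axicon period are illustrative.

```python
# Sketch of the hybrid lens-axicon CPM tuned by T1 and T2; the phase of the
# complex sum of the two terms is used as the displayed phase-only mask.
import numpy as np

N, pitch, lam = 500, 10e-6, 0.65e-6
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
r = np.sqrt(X ** 2 + Y ** 2)

f, period = 0.30, 80e-6                 # lens focal length (m), axicon period (m)
T1, T2 = 0.5, 0.5                       # tuning weights, 0 <= T1, T2 <= 1

terms = []
if T1 > 0:
    terms.append(np.exp(1j * np.pi * T1 * r ** 2 / (lam * f)))   # lens term scaled by T1
if T2 > 0:
    terms.append(np.exp(1j * 2 * np.pi * T2 * r / period))       # axicon term scaled by T2
cpm_phase = np.angle(sum(terms))        # phase-only CPM displayed on the SLM
```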
The simulation conditions are similar to those used for the Airy, Bessel and self-rotating beams. Three cases were considered: (T1,T2) = (1,0), (T1,T2) = (0,1), and (T1,T2) = (0.5,0.5), namely, a diffractive lens, a diffractive axicon, and a combination of 50% lens and 50% axicon. The images of the CPMs, the IPSFs at the two axial locations, and the IO and IRs at the two axial locations are shown in columns 1 to 6 of Figure 13, respectively. As shown in the figure, the axial resolution decreases as the CPM is changed from the lens to the axicon. By choosing appropriate values of T1 and T2, it is possible to sculpt the axial characteristics. Compared to the previous cases, in which Airy, Bessel or self-rotating beams were used, this approach requires only two beams, which simplifies the design. It also improves the SNR compared to an ensemble of beams with a high focal depth because only one random matrix is needed instead of many, resulting in reduced scattering effects.

9. Conclusions

This review addresses the successful integration of two previously unrelated technologies. On the one hand, longitudinal beam shaping [76], a well-known technique dating back to the 1980s, was used mainly for directing laser beams in free space. On the other hand, beam shaping technology in general, and SSLLBs in particular, have been integrated into a recently proposed imaging technology known as I-COACH [16]. As described in this review, this integration has solved several different problems and has been implemented in various imaging applications. Once again, we see that a combination of two unrelated techniques can yield a new technology with capabilities that do not exist in other technologies. This new technology still has much room for improvement in terms of hardware and software. In hardware, better SLMs with more and smaller pixels will undoubtedly improve the imaging performance. In software, more sophisticated image reconstruction algorithms can increase the SNR of the output images.
What is the future of the I-COACH+SSLLB combination? In the short term, additional imaging applications might be implemented using new, or known, types of SSLLBs, and new types of SSLLBs might perform better in already known, or new, imaging applications. In the long run, we predict that the SSLLB technique will be integrated into coherent imaging systems in general [24,27] and for quantitative phase imaging [77,78] in particular. We hope to see these exciting developments as soon as possible.

Author Contributions

Conceptualization, J. R. and V. A.; methodology, J. R. and V.A.; software, J. R. and V.A.; validation, J. R. and V.A.; investigation, J. R. and V.A.; resources, J. R. and V.A.; writing—original draft preparation, J. R. and V.A.; writing—review and editing, J. R. and V.A.; funding acquisition, J. R. and V.A. All the authors have read and agreed to the published version of the manuscript.

Funding

European Union’s Horizon 2020 Research and Innovation Programme grant agreement No. 857627 (CIPHR). The Israel Innovation Authority under MAGNET program No. 79555.

Data Availability Statement

Data is available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, J.; Liang, Y. Generation and Detection of Structured Light: A Review. Front. Phys. 2021, 9, 688284. [Google Scholar] [CrossRef]
  2. Doskolovich, L.L.; Mingazov, A.A.; Byzov, E.V.; Skidanov, R.V.; Ganchevskaya, S.V.; Bykov, D.A.; Bezus, E.A.; Podlipnov, V.V.; Porfirev, A.P.; Kazanskiy, N.L. Hybrid design of diffractive optical elements for optical beam shaping. Opt. Express 2021, 29, 31875–31890. [Google Scholar] [CrossRef]
  3. Tao, S.H.; Yuan, X.-C.; Ahluwalia, B.S. The generation of an array of nondiffracting beams by a single composite computer generated hologram. J. Opt. A Pure Appl. Opt. 2005, 7, 40–46. [Google Scholar] [CrossRef]
  4. Mok, F.; Diep, J.; Liu, H.; Psaltis, D. Real-time computer-generated hologram by means of liquid-crystal television spatial light modulator. Opt. Lett. 1986, 11, 748–750. [Google Scholar] [CrossRef]
  5. Tamura, H.; Ishii, Y. Computer-generated hologram fabricated by electron-beam lithography for noise reduction. Opt. Rev. 2012, 19, 50–57. [Google Scholar] [CrossRef]
  6. Scott, G.; McArdle, N. Efficient generation of nearly diffraction-free beams using an axicon. Opt. Eng. 1992, 31, 2640–2643. [Google Scholar] [CrossRef]
  7. Wang, S.; Lin, Q. Non-diffraction properties of truncated Airy beams. Optik 1995, 101, 47–48. [Google Scholar]
  8. Niu, K.; Zhao, S.; Liu, Y.; Tao, S.; Wang, F. Self-rotating beam in the free space propagation. Opt. Express 2022, 30, 5465–5472. [Google Scholar] [CrossRef] [PubMed]
  9. Rosen, J.; Salik, B.; Yariv, A. Pseudonondiffracting beams generated by radial harmonic functions. J. Opt. Soc. Am. A 1995, 12, 2446–2457. [Google Scholar] [CrossRef]
  10. Rosen, J.; Salik, B.; Yariv, A. Pseudonondiffracting beams generated by radial harmonic functions. J. Opt. Soc. Am. A 1996, 13, 387. [Google Scholar] [CrossRef]
  11. Fatemi, M.; Arad, M.A. A novel imaging system based on nondiffracting X waves. IEEE 1992 Ultrasonics Symposium Proceedings. 92CH3118–7,1, 609–612.
  12. Lu, J.-Y. Bowtie limited diffraction beams for low-sidelobe and large depth of field imaging. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 1995, 42, 1050–1063. [Google Scholar] [CrossRef]
  13. Vijayakumar, A.; Kashter, Y.; Kelner, R.; Rosen, J. Coded aperture correlation holography—A new type of incoherent digital holograms. Opt. Express 2016, 24, 12430–12441. [Google Scholar] [CrossRef]
  14. Vijayakumar, A.; Rosen, J. Spectrum and space resolved 4D imaging by coded aperture correlation holography (COACH) with diffractive objective lens. Opt. Lett. 2017, 42, 947–950. [Google Scholar] [CrossRef] [PubMed]
  15. Vijayakumar, A.; Kashter, Y.; Kelner, R.; Rosen, J. Coded aperture correlation holography system with improved performance [Invited]. Appl. Opt. 2017, 56, F67–F77. [Google Scholar] [CrossRef]
  16. Vijayakumar, A.; Rosen, J. Interferenceless coded aperture correlation holography–a new technique for recording incoherent digital holograms without two-wave interference. Opt. Express 2017, 25, 13883–13896. [Google Scholar] [CrossRef] [PubMed]
  17. Kumar, M.; Vijayakumar, A.; Rosen, J. Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses. Sci. Rep. 2017, 7, 11555. [Google Scholar] [CrossRef]
  18. Rai, M.R.; Vijayakumar, A.; Rosen, J. Single camera shot interferenceless coded aperture correlation holography. Opt. Lett. 2017, 42, 3992–3995. [Google Scholar]
  19. Rai, M.R.; Vijayakumar, A.; Rosen, J. Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH). Opt. Express 2018, 26, 18143–18154. [Google Scholar] [CrossRef]
  20. Liu, C.; Zia, A.; Wan, Y. Interferenceless coded aperture correlation holography (I-COACH) adaptive compression imaging. Proc. SPIE 11209, 11th International Conference on Information Optics and Photonics 112092P (2019).
  21. Rosen, J.; Anand, V.; Rai, M.R.; Mukherjee, S.; Bulbul, A. Review of 3D Imaging by Coded Aperture Correlation Holography (COACH). Appl. Sci. 2019, 9, 605. [Google Scholar] [CrossRef]
  22. Ji, T.; Zhang, L.; Li, W.; Sun, X.; Wang, J.; Liu, J.; Shao, X. Research Progress of Incoherent Coded Aperture Correlation Holography. Laser & Optoelectronics Progress 2019, 56, 080005. [Google Scholar]
  23. Rai, M.R.; Rosen, J. Noise suppression by controlling the sparsity of the point spread function in interferenceless coded aperture correlation holography (I-COACH). Opt. Express 2019, 27, 24311–24323. [Google Scholar] [CrossRef] [PubMed]
  24. Hai, N.; Rosen, J. Interferenceless and motionless method for recording digital holograms of coherently illuminated 3D objects by coded aperture correlation holography system. Opt. Express 2019, 27, 24324–24339. [Google Scholar] [CrossRef] [PubMed]
  25. Liu, C.; Man, T.; Wan, Y. Optimized reconstruction with noise suppression for interferenceless coded aperture correlation holography. Appl. Opt. 2020, 59, 1769–1774. [Google Scholar] [CrossRef] [PubMed]
  26. Rai, M.R.; Rosen, J. Resolution-enhanced imaging using interferenceless coded aperture correlation holography with sparse point response. Sci. Rep. 2020, 10, 5033. [Google Scholar] [CrossRef] [PubMed]
  27. Hai, N.; Rosen, J. Doubling the acquisition rate by spatial multiplexing of holograms in coherent sparse coded aperture correlation holography. Opt. Lett. 2020, 45, 3439–3442. [Google Scholar] [CrossRef] [PubMed]
  28. Kumar, M.; Vijayakumar, A.; Rosen, J.; Matoba, O. Interferenceless coded aperture correlation holography with synthetic point spread holograms. Appl. Opt. 2020, 59, 7321–7329. [Google Scholar] [CrossRef]
  29. Ma, T.; Liu, C.; Wan, Y. Interferenceless coded aperture correlation holography with enhanced reconstruction image quality by employing an optimization coded phase mask. Proc. SPIE 11898, 2021, Holography, Diffractive Optics, and Applications XI, 118980T.
  30. Bulbul, A.; Hai, N.; Rosen, J. Coded aperture correlation holography (COACH) with a superior lateral resolution of FINCH and axial resolution of conventional direct imaging systems. Opt. Express 2021, 29, 42106–42118. [Google Scholar] [CrossRef]
  31. Yu, X.; Wang, K.; Xiao, J.; Li, X.; Sun, Y.; Chen, H. Recording point spread functions by wavefront modulation for interferenceless coded aperture correlation holography. Opt. Lett. 2022, 47, 409–412. [Google Scholar] [CrossRef]
  32. Liu, C.; Man, T.; Wan, Y. High-quality interferenceless coded aperture correlation holography with optimized high SNR holograms. Appl. Opt. 2022, 61, 661–668. [Google Scholar] [CrossRef] [PubMed]
  33. Yu, X.; Chen, H.; Xiao, J.; Sun, Y.; Li, X.; Wang, K. Incoherent optical image encryption based on coded aperture correlation holography. Opt. Comm. 2022, 510, 127889. [Google Scholar] [CrossRef]
  34. Dubey, N.; Rosen, J. Interferenceless coded aperture correlation holography with point spread holograms of isolated chaotic islands for 3D imaging. Sci Rep 2022, 12, 4544. [Google Scholar] [CrossRef] [PubMed]
  35. Xiong, R.; Zhang, X.; Ma, X.; Qi, L.; Li, L.; Jiang, X. Enhancement of Imaging Quality of Interferenceless Coded Aperture Correlation Holography Based on Physics-Informed Deep Learning. Photonics 2022, 9, 967. [Google Scholar] [CrossRef]
  36. Xiong, R.; Zhang, X.; Ma, X.; Li, L.; Qi, L.; Wang, J.; Jiang, X. Full-Dimensional Surface Characterization Based on Polarized Coherent Coded Aperture Correlation Holography. IEEE Photonics J. 2022, 14, 1–7. [Google Scholar] [CrossRef]
  37. Rosen, J.; Bulbul, A.; Hai, N.; Rai, M.R. Coded Aperture Correlation Holography (COACH) - A Research Journey from 3D Incoherent Optical Imaging to Quantitative Phase Imaging. In Holography - Recent Advances and Applications; IntechOpen, 2023.
  38. Zhang, M.; Wan, Y.; Man, T.; Qin, Y.; Zhou, H.; Zhang, W. Interferenceless coded aperture correlation holography based on deep-learning reconstruction of single-shot object hologram. Opt. Laser Technol. 2023, 163, 109349. [Google Scholar] [CrossRef]
  39. Liu, C.; Wan, Y.; Ma, T.; Man, T. Annular multi-focal-phase mask multiplexing based large depth of field imaging by interferenceless coded aperture correlation holography. Sci. Rep. 2023, 13, 11598. [Google Scholar] [CrossRef] [PubMed]
  40. Hai, N.; Rosen, J. Single viewpoint tomography using point spread functions of tilted pseudo-nondiffracting beams in interferenceless coded aperture correlation holography with nonlinear reconstruction. Opt. Laser Technol. 2023, 167, 109788. [Google Scholar] [CrossRef]
  41. Tahara, T.; Quan, X.Y.; Otani, R.; Takaki, Y.; Matoba, O. Digital holography and its multidimensional imaging applications: A review. Microscopy 2018, 67, 55–67. [Google Scholar] [CrossRef]
  42. Javidi, B.; Carnicer, A.; Anand, A.; et al. Roadmap on digital holography. Opt. Express 2021, 29, 35078–35118. [Google Scholar] [CrossRef]
  43. Liu, J.-P.; Tahara, T.; Hayasaki, Y.; Poon, T.-C. Incoherent Digital Holography: A Review. Appl. Sci. 2018, 8, 143. [Google Scholar] [CrossRef]
  44. Tahara, T.; Zhang, Y.; Rosen, J.; Anand, V.; Cao, L.; Wu, J.; Koujin, T.; Matsuda, A.; Ishii, A.; Kozawa, Y.; et al. Roadmap of incoherent digital holography. Appl. Phys. B 2022, 128, 1–31. [Google Scholar] [CrossRef]
  45. Rosen, J.; Vijayakumar, A.; Hai, N. Digital holography based on aperture engineering. In SPIE Spotlight E-Book Series; SPIE: Bellingham, WA, USA, 2023. [Google Scholar]
  46. Rosen, J.; Hai, N.; Rai, M.R. Recent progress in digital holography with dynamic diffractive phase apertures [Invited]. Appl. Opt. 2022, 61, B171–B180. [Google Scholar] [CrossRef] [PubMed]
  47. Anand, V.; Han, M.; Maksimovic, J.; Ng, S.H.; Katkus, T.; Klein, A.; Bambery, K.; Tobin, M.J.; Vongsvivut, J.; Juodkazis, S. Single-shot mid-infrared incoherent holography using Lucy-Richardson-Rosen algorithm. Opto-Electron. Sci. 2022, 1, 210006. [Google Scholar]
  48. Praveen, P.A.; Arockiaraj, F.G.; Gopinath, S.; Smith, D.; Kahro, T.; Valdma, S.-M.; Bleahu, A.; Ng, S.H.; Reddy, A.N.K.; Katkus, T.; et al. Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm. Photonics 2022, 9, 625. [Google Scholar] [CrossRef]
  49. Xavier, A.P.I.; Arockiaraj, F.G.; Gopinath, S.; Rajeswary, A.S.J.F.; Reddy, A.N.K.; Ganeev, R.A.; Singh, M.S.A.; Tania, S.D.M.; Anand, V. Single-Shot 3D Incoherent Imaging Using Deterministic and Random Optical Fields with Lucy–Richardson–Rosen Algorithm. Photonics 2023, 10, 987. [Google Scholar] [CrossRef]
  50. Anand, V.; Katkus, T.; Juodkazis, S. Randomly Multiplexed Diffractive Lens and Axicon for Spatial and Spectral Imaging. Micromachines 2020, 11, 437. [Google Scholar] [CrossRef] [PubMed]
  51. Rosen, J.; Vijayakumar, A.; Kumar, M.; Rai, M.R.; Kelner, R.; Kashter, Y.; Bulbul, A.; Mukherjee, S. Recent advances in self-interference incoherent digital holography. Adv. Opt. Photon. 2019, 11, 1–66. [Google Scholar] [CrossRef]
  52. Nobukawa, T.; Muroi, T.; Katano, Y.; Kinoshita, N.; Ishii, N. Single-shot phase-shifting incoherent digital holography with multiplexed checkerboard phase gratings. Opt. Lett. 2018, 43, 1698–1701. [Google Scholar] [CrossRef]
  53. Rai, M.R.; Rosen, J. Depth-of-field engineering in coded aperture imaging. Opt. Express 2021, 29, 1634–1648. [Google Scholar] [CrossRef]
  54. McCutchen, C.W. Generalized aperture and the three-dimensional diffraction image. J. Opt. Soc. Am. 1964, 54, 240–244. [Google Scholar] [CrossRef]
  55. Durnin, J.; Miceli, J.J., Jr.; Eberly, J.H. Diffraction-free beams. Phys. Rev. Lett. 1987, 58, 1499. [Google Scholar] [CrossRef] [PubMed]
  56. Anand, V. Tuning Axial Resolution Independent of Lateral Resolution in a Computational Imaging System Using Bessel Speckles. Micromachines 2022, 13, 1347. [Google Scholar] [CrossRef]
  57. Kumar, R.; Vijayakumar, A.; Rosen, J. 3D single shot lensless incoherent optical imaging using coded phase aperture system with point response of scattered airy beams. Sci. Rep. 2023, 13, 2996. [Google Scholar] [CrossRef] [PubMed]
  58. Bleahu, A.; Gopinath, S.; Kahro, T.; Angamuthu, P.P.; Rajeswary, A.S.J.F.; Prabhakar, S.; Kumar, R.; Salla, G.R.; Singh, R.P.; Kukli, K.; Tamm, A.; Rosen, J.; Anand, V. 3D Incoherent Imaging Using an Ensemble of Sparse Self-Rotating Beams. Opt. Express 2023, 31, 26120–26134. [Google Scholar] [CrossRef]
  59. Gopinath, S.; Rajeswary, A.S.J.F.; Anand, V. Sculpting axial characteristics of incoherent imagers by hybridization methods. Opt. Lasers Eng. 2024, 172, 107837. [Google Scholar] [CrossRef]
  60. Horner, J.L.; Gianino, P.D. Phase-only matched filtering. Appl. Opt. 1984, 23, 812–816. [Google Scholar] [CrossRef] [PubMed]
  61. Dhawan, A.; Rangayyan, R.; Gordon, R. Image restoration by Wiener deconvolution in limited-view computed tomography. Appl. Opt. 1985, 24, 4013–4020. [Google Scholar] [CrossRef] [PubMed]
  62. Richardson, W. Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. Am. 1972, 62, 55–59. [Google Scholar] [CrossRef]
  63. Lucy, L.B. An iterative technique for the rectification of observed distributions. Astron. J. 1974, 79, 745–754. [Google Scholar] [CrossRef]
  64. Rosen, J.; Anand, V. Incoherent nonlinear deconvolution using an iterative algorithm for recovering limited-support images from blurred digital photographs. Opt. Express 2024, 32, 1034–1046. [Google Scholar] [CrossRef] [PubMed]
  65. Rosen, J.; Brooker, G. Digital spatially incoherent Fresnel holography. Opt. Lett. 2007, 32, 912–914. [Google Scholar] [CrossRef]
  66. Brooker, G.; Siegel, N.; Wang, V.; Rosen, J. Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy. Opt. Express 2011, 19, 5047–5062. [Google Scholar] [CrossRef] [PubMed]
  67. Rosen, J.; Brooker, G. Non-scanning motionless fluorescence three-dimensional holographic microscopy. Nat. Photon. 2008, 2, 190–195. [Google Scholar] [CrossRef]
  68. Rosen, J.; Alford, S.; Anand, V.; Art, J.; Bouchal, P.; Bouchal, Z.; Erdenebat, M.-U.; Huang, L.; Ishii, A.; Juodkazis, S.; et al. Roadmap on Recent Progress in FINCH Technology. J. Imaging 2021, 7, 197. [Google Scholar] [CrossRef] [PubMed]
  69. Siviloglou, G.A.; Broky, J.; Dogariu, A.; Christodoulides, D.N. Observation of accelerating Airy beams. Phys. Rev. Lett. 2007, 99, 213901. [Google Scholar] [CrossRef] [PubMed]
  70. Vettenburg, T.; Dalgarno, H.I.C.; Nylk, J.; Coll-Lladó, C.; Ferrier, D.E.K.; Cizmar, T.; Gunn-Moore, F.J.; Dholakia, K. Light-sheet microscopy using an Airy beam. Nat. Methods 2014, 11, 541–544. [Google Scholar] [CrossRef] [PubMed]
  71. Clerici, M.; Hu, Y.; Lassonde, P.; Milián, C.; Couairon, A.; Christodoulides, D.N.; Chen, Z.G.; Razzari, L.; Vidal, F.; Légaré, F.; et al. Laser-assisted guiding of electric discharges around objects. Sci. Adv. 2015, 1, e1400111. [Google Scholar] [CrossRef]
  72. Lumer, Y.; Liang, Y.; Schley, R.; Kaminer, I.; Greenfield, E.; Song, D.; Zhang, X.; Xu, J.; Chen, Z.; Segev, M. Incoherent self-accelerating beams. Optica 2015, 2, 886–892. [Google Scholar] [CrossRef]
  73. Davidson, N.; Friesem, A.A.; Hasman, E. Holographic axilens: high resolution and long focal depth. Opt. Lett. 1991, 16, 523–525. [Google Scholar] [CrossRef]
  74. Gopinath, S.; Bleahu, A.; Kahro, T.; Rajeswary, A.S.J.F.; Kumar, R.; Kukli, K.; Tamm, A.; Rosen, J.; Anand, V. Enhanced design of multiplexed coded masks for Fresnel incoherent correlation holography. Sci. Rep. 2023, 13, 7390. [Google Scholar] [CrossRef]
  75. Katz, B.; Rosen, J.; Kelner, R.; Brooker, G. Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM). Opt. Express 2012, 20, 9109–9121. [Google Scholar] [CrossRef] [PubMed]
  76. Salik, B.; Rosen, J.; Yariv, A. One-dimensional beam shaping. J. Opt. Soc. Am. A 1995, 12, 1702–1706. [Google Scholar] [CrossRef]
  77. Hai, N.; Rosen, J. Single-plane and multiplane quantitative phase imaging by self-reference on-axis holography with a phase-shifting method. Opt. Express 2021, 29, 24210–24225. [Google Scholar] [CrossRef] [PubMed]
  78. Balasubramani, V.; Kujawińska, M.; Allier, C.; Anand, V.; Cheng, C.-J.; Depeursinge, C.; Hai, N.; Juodkazis, S.; Kalkman, J.; Kuś, A.; et al. Roadmap on Digital Holography-Based Quantitative Phase Imaging. J. Imaging 2021, 7, 252. [Google Scholar] [CrossRef] [PubMed]
Figure 1. I-COACH with an ensemble of SSLLBs.
Figure 2. Schematic of LRRA and INDIA. OTF - optical transfer function; q - iteration index of LRRA (q is an integer, and when q = 1, Rq = IO); Rq - the qth solution; ⊗ - 2D convolution operator; α and β - tuning powers varied from -1 to 1; C, Φ, and M - complex, phase, and magnitude values, respectively; NLR - nonlinear reconstruction; LRRA - Lucy-Richardson-Rosen algorithm; F and F⁻¹ - Fourier and inverse Fourier transforms, respectively; IT - threshold on the number of iterations; exp(P) - the conjugated phase of the IPSF spectrum.
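To make the NLR step named in the caption of Figure 2 concrete, the following is a minimal NumPy sketch, assuming a simulated scattered point response: the spectral magnitudes of the object hologram and of the IPSF are raised to tunable powers α and β (each searched over [-1, 1], here with a coarse grid and an entropy cost). The function names, grid sizes, and the brute-force search are illustrative assumptions and not the implementation used in refs. [19,47,64].

```python
import numpy as np

def nlr_reconstruct(hologram, psf, alpha=0.4, beta=0.6):
    """Nonlinear reconstruction (NLR) sketch: cross-correlate the object
    hologram with the point response while raising the spectral magnitudes
    to the tunable powers alpha and beta (typically searched over [-1, 1])."""
    H = np.fft.fft2(hologram)   # object-hologram spectrum
    P = np.fft.fft2(psf)        # point-response (IPSF) spectrum
    # Keep the phases; reshape only the magnitudes. The IPSF phase is conjugated.
    S = (np.abs(H) ** alpha) * np.exp(1j * np.angle(H)) \
        * (np.abs(P) ** beta) * np.exp(-1j * np.angle(P))
    return np.abs(np.fft.ifft2(S))

def entropy(image):
    """Entropy cost used to select (alpha, beta); lower is usually better."""
    if not np.isfinite(image).all():
        return np.inf
    p = image / (image.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    psf = rng.random((256, 256))                       # stand-in scattered IPSF
    obj = np.zeros((256, 256)); obj[100:120, 90:95] = 1.0
    holo = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))  # I_O = O convolved with IPSF
    best = min(((a, b, entropy(nlr_reconstruct(holo, psf, a, b)))
                for a in np.linspace(-1, 1, 5) for b in np.linspace(-1, 1, 5)),
               key=lambda t: t[2])
    print("alpha, beta, entropy:", best)
```

Roughly speaking, LRRA embeds this NLR correlation inside the Lucy-Richardson iteration loop, which is why the caption lists both the iteration count q and the powers α and β.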
Figure 3. (a) Optical configuration of I-COACH with an ensemble of Bessel distributions. (b) Simulation results of tuning axial resolution independent of lateral resolution. The number of beams was increased from p = 1 to 4, and the results are shown in rows 1 to 4, respectively. The images of the phase mask, IPSF at z1 and z2, and IO and IR at z1 and z2 are shown in columns 1 to 6, respectively.
Figure 4. (a) Optical configuration of I-COACH with an ensemble of Airy distributions. (b) Simulation results of tuning axial resolution independent of lateral resolution. The number of beams was increased from p = 1 to 4, and the results are shown in rows 1 to 4, respectively. The images of the phase mask, IPSF at z1 and z2, and IO and IR at z1 and z2 are shown in columns 1 to 6, respectively.
Figure 5. Push and pull effects on objects located at two axial locations using Airy beams in the I-COACH framework.
Figure 6. (a) Optical configuration of I-COACH with an ensemble of self-rotating beams. (b) Simulation results of tuning axial resolution independent of lateral resolution. The number of beams was increased from p = 1 to 4, and the results are shown in rows 1 to 4, respectively. The images of the phase mask, IPSF at z1 and z2, and IO and IR at z1 and z2 are shown in columns 1 to 6, respectively.
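Figures 3, 4, and 6 share the same recipe: the single beam-generating element is replaced by an ensemble of p elements, and the resulting IPSF is used for reconstruction. The sketch below, written for the Bessel case of Figure 3, shows one plausible way to build such an ensemble mask by randomly multiplexing p diffractive axicons with slightly different periods, in the spirit of refs. [50,56]; the pixel pitch, the periods, and the random-multiplexing rule are illustrative assumptions rather than the exact masks used in those works.

```python
import numpy as np

def axicon_phase(n, pixel=8e-6, period=80e-6):
    """Phase-only transmittance of a diffractive axicon, exp(-i*2*pi*r/period)."""
    x = (np.arange(n) - n / 2) * pixel
    X, Y = np.meshgrid(x, x)
    return np.exp(-2j * np.pi * np.hypot(X, Y) / period)

def multiplexed_axicons(n, periods, seed=0):
    """Random spatial multiplexing: every pixel of the mask is assigned to one
    of the p axicon profiles, so the mask generates an ensemble of p
    Bessel-like beams with different axial intensity profiles."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, len(periods), size=(n, n))
    mask = np.zeros((n, n), dtype=complex)
    for k, period in enumerate(periods):
        axicon = axicon_phase(n, period=period)
        mask[labels == k] = axicon[labels == k]
    return mask

# p = 4 axicons with slightly different periods (illustrative values).
mask = multiplexed_axicons(512, periods=[70e-6, 80e-6, 90e-6, 100e-6])
```

The Airy (Figure 4) and self-rotating (Figure 6) ensembles can be assembled in the same manner by swapping the axicon phase for the corresponding beam-generating phase profile (e.g., a cubic phase for Airy beams).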
Figure 7. Optical scheme of the depth-of-field engineering system. DL - diffractive lens; CPM - coded phase mask; RQPF - radial quartic phase function. Adapted from [53].
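Since Figure 7 introduces the radial quartic phase function (RQPF) used for depth-of-field engineering, a short sketch of a generic RQPF phase profile combined with a random coded phase mask is given below; the quartic coefficient, the grid parameters, and the way the two phases are multiplied are illustrative assumptions and should not be read as the specific design of ref. [53].

```python
import numpy as np

def rqpf_phase(n, pixel=8e-6, a=1e14):
    """Radial quartic phase function: a phase proportional to r**4. The
    coefficient a (rad/m^4) sets the longitudinal extent of the resulting
    pseudo-nondiffracting beam; the value here is purely illustrative."""
    x = (np.arange(n) - n / 2) * pixel
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    return np.exp(1j * a * r2**2)

def coded_rqpf_mask(n, seed=0):
    """Coded phase mask (random phase) combined with the RQPF; the product is
    kept phase-only, so the mask remains non-absorbing."""
    rng = np.random.default_rng(seed)
    cpm = np.exp(2j * np.pi * rng.random((n, n)))
    return cpm * rqpf_phase(n)

mask = coded_rqpf_mask(512)
```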
Figure 8. (a, b) Direct images of the objects with two different lenses and (c) reconstructed image from a single hologram using depth-of-field engineering. Adapted from [53].
Figure 9. Multiplexing process of two sets of phase masks for multivolume imaging. Adapted from [53].
Figure 10. (a) Direct images with overlap between the objects. (b) Reconstructed image with PSH of subvolume 1; the CPMs of the two subvolumes are different. (c) Same as (b) but with the PSH of subvolume 2. (d) Reconstructed image when the CPM is the combination of CPMs of the two volumes. Adapted from [53].
Figure 11. Optical scheme of the image sectioning system, with two positions of the object point indicated by the black and red dots. The PSFs recorded on the image sensor for the two positions have different distributions, allowing the two planes to be imaged separately. The phase mask is given in Equation (6).
Figure 12. Image sectioning of a volumetric scene by several compared methods. STIR outperforms the other approaches when the objects in the two transverse planes are separated by 4 mm. The white scale bar corresponds to 30 μm. Adapted from [40].
Figure 13. (a) Optical configuration of I-COACH with an ensemble of self-rotating Bessel and spherical beams. (b) Simulation results of tuning axial resolution independent of lateral resolution. The cases for a pure diffractive lens (T1,T2)=(1,0), a hybrid diffractive lens-axicon (T1,T2)=(0.5,0.5) and a pure diffractive axicon (T1,T2)=(0,1) are shown in rows 1 to 3, respectively. The images of the phase mask, IPSF at z1 and z2, and IO and IR at z1 and z2 are shown in columns 1 to 6, respectively.
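Finally, for the hybrid cases of Figure 13, a minimal sketch of one possible hybridization rule is shown below: the complex transmittances of a diffractive lens and a diffractive axicon are summed with the weights (T1, T2), and the result is forced back to a phase-only mask. The weighted-sum rule, the focal length, and the axicon period are illustrative assumptions; ref. [59] should be consulted for the hybridization methods actually used.

```python
import numpy as np

def lens_phase(n, pixel=8e-6, wavelength=632.8e-9, focal=0.3):
    """Quadratic phase of a diffractive lens with focal length `focal` (m)."""
    x = (np.arange(n) - n / 2) * pixel
    X, Y = np.meshgrid(x, x)
    return np.exp(-1j * np.pi * (X**2 + Y**2) / (wavelength * focal))

def axicon_phase(n, pixel=8e-6, period=80e-6):
    """Linear radial phase of a diffractive axicon with grating period `period`."""
    x = (np.arange(n) - n / 2) * pixel
    X, Y = np.meshgrid(x, x)
    return np.exp(-2j * np.pi * np.hypot(X, Y) / period)

def hybrid_mask(n, T1, T2):
    """Weighted superposition of the two transmittances, kept phase-only so the
    mask stays non-absorbing. (T1, T2) = (1, 0), (0.5, 0.5), and (0, 1)
    correspond to the three cases listed in the caption of Figure 13."""
    combo = T1 * lens_phase(n) + T2 * axicon_phase(n)
    return np.exp(1j * np.angle(combo))

mask = hybrid_mask(512, T1=0.5, T2=0.5)
```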
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.