
Single Shot 3D Incoherent Imaging Using Deterministic and Random Optical Fields with Lucy-Richardson-Rosen Algorithm

A peer-reviewed article of this preprint also exists.

Submitted: 02 August 2023
Posted: 03 August 2023

Abstract
Coded aperture 3D imaging techniques have been evolving rapidly in recent years. The two main directions of evolution are aperture engineering to generate the optimal optical field and the development of computational reconstruction methods to recover the object's image from the intensity distribution with minimal noise. The goal is to find the ideal aperture-reconstruction method pair and, if that is not possible, to optimize one to match the other when designing an imaging system with the required 3D imaging characteristics. The Lucy-Richardson-Rosen algorithm (LR2A), a recently developed computational reconstruction method, was found to perform better than its predecessors, such as the matched filter, Wiener filter, phase-only filter, Lucy-Richardson algorithm and non-linear reconstruction (NLR), for certain apertures when the point spread function (PSF) is a real and symmetric function. For other cases of the PSF, NLR performed better than the rest of the methods. In this tutorial, LR2A is presented as a generalized approach for any optical field, along with MATLAB codes for reconstructing any image when the PSF is known. The common problems and pitfalls in using LR2A are discussed. Simulation and experimental studies for common optical fields, such as spherical, Bessel and vortex beams, and exotic optical fields, such as Airy, scattered and self-rotating beams, are presented. From this study, it can be seen that it is possible to faithfully transfer the 3D imaging characteristics of non-imaging-type exotic fields to indirect imaging systems using LR2A. The application of LR2A to medical images, such as colonoscopy images and cone beam computed tomography images, with a synthetic PSF is demonstrated. We believe that this tutorial will provide a deeper understanding of computational reconstruction using LR2A.
Keywords: 
Subject: Physical Sciences - Optics and Photonics

1. Introduction

Imaging refers to the process of capturing and reproducing the characteristics of an object as close to reality as possible. There are two primary approaches to imaging: direct and indirect. Direct imaging is the oldest imaging concept, while indirect imaging is relatively much younger. A direct imaging system, as the name suggests, creates the image of an object directly on a sensor. Most commonly available imaging systems, such as the laboratory microscope, digital camera, telescope, and even the human eye, are based on the direct imaging concept. The indirect imaging concept requires at least two steps to complete the imaging process [1]. Holography is an example of indirect imaging consisting of two steps, namely optical recording and numerical reconstruction. While the imaging process of holography is complicated and its setup expensive compared to direct imaging, the advantages of holography are remarkable. In holography, it is possible to record the entire 3D information of an object in a 2D matrix with a single camera shot. In direct imaging, a single camera shot can obtain only 2D information about an object. With holography, it is also possible to record phase information, which enables seeing thickness and refractive index variations in transparent objects. These significant advantages justify the complicated and expensive recording setup and the two-step imaging process of holography [2].
In coherent holography, light from an object is interfered with a reference wave that carries no information about the object but is derived from the same source to obtain the hologram. This hologram recording process is suitable only if the illumination source is coherent [3]. Most commonly available imaging systems, such as the laboratory microscopes, telescopes and digital cameras mentioned above, rely on incoherent illumination. There are many reasons for choosing incoherent illumination over coherent illumination, starting from the primary reasons that it is not practical to shine a laser on every object and that natural light is incoherent. There are also other advantages of incoherent imaging, such as lower imaging noise and higher imaging resolution compared to coherent imaging. However, recording a hologram with an incoherent source is a challenging task and impossible in the framework of coherent holography. A new concept of interference is needed in order to realize holography with an incoherent source [4,5].
One of the aims of holography is to compress 3D information into a 2D intensity distribution. In coherent holography, this is achieved by interfering the object wave, which carries the phase fingerprint, with a reference wave derived from the same source. This method converts the phase fingerprint into an intensity distribution. The 3D object wave can be reconstructed by illuminating the recorded intensity distribution's amplitude or phase transmission matrix with the reference wave, either physically as in analog holography or numerically as in digital holography. This mode of recording a hologram is not possible with incoherent illumination due to the lack of coherence. To realize holography with incoherent illumination, a new concept called self-interference was proposed, which exploits the fact that light from any object point is coherent with respect to itself and can therefore interfere coherently [3,5]. Therefore, instead of interfering the object wave with a reference wave, the object wave from every point can be interfered coherently with the object wave from the same point for incoherent holography. The temporal coherence can be improved by trading off some light using a spectral filter. In this line of research, there have been many interesting architectures, such as the rotational shearing interferometer [6,7,8], the triangle interferometer [9,10] and conoscopic holography [11,12]. In contrast to these earlier architectures [6,7,8,9,10,11,12], the modern-day incoherent holography approaches based on spatial light modulators (SLMs) are Fresnel incoherent correlation holography (FINCH) [13,14] and Fourier incoherent single channel holography [15,16], developed by Rosen and team. The field of incoherent holography evolved rapidly with advancements in optical configurations, recording and reconstruction methods through the contributions of many researchers, such as Poon, Kim, Tahara, Matoba, Nomura, Nobukawa, Bouchal, Huang, Kner and Potcoava [17,18]. Even with all these advancements, the latest version of FINCH required at least two camera shots and multiple optical components [19,20].
In FINCH, light from an object point is split into two beams that are differently modulated by two diffractive lenses and interfered to record the hologram. The object's image is reconstructed by Fresnel back propagation [13]. A variation of FINCH called coded aperture correlation holography (COACH) was developed in 2016, where a quasi-random scattering mask was used instead of one of the two diffractive lenses [21]. The reconstruction in COACH is also different from that of FINCH, as Fresnel back propagation is not possible due to the lack of an image plane. Therefore, in COACH, a point spread hologram is recorded using a point object in the first step, which is then cross-correlated with the object hologram to reconstruct the image of the object. This type of reconstruction is common in coded aperture imaging (CAI) [22,23]. The origin of CAI can be traced back to the motivation of developing imaging technologies for non-visible regions of the electromagnetic spectrum, such as X-rays and gamma rays, where manufacturing lenses is a challenging task. In such cases, the light from an object was modulated by a random array of pinholes created on a plate made of a material that is not transparent to that radiation [22,23]. The point spread function (PSF) is recorded in the first step, the object intensity distribution is recorded next, and the image of the object is reconstructed by processing the two intensity patterns in the computer.
While CAI was used for recording 2D spatial and spectral information, it was not used for recording 3D spatial information as in holography [24]. With the development of COACH, a cross-over point between holography and CAI was identified, which led to the development of interferenceless COACH (I-COACH) [25]. In I-COACH, light from an object is modulated by a quasi-random phase mask and the intensity distribution is recorded. Instead of a single PSF, a PSF library is recorded corresponding to different axial locations. This new approach connected incoherent holography with CAI. Since I-COACH with a quasi-random phase mask resulted in a low SNR, the next direction of research was set on engineering the phase masks to maximize the SNR. Different phase masks were designed and tested for I-COACH for the generation of dot patterns [26], Bessel patterns [27], Airy patterns [28] and self-rotating patterns [29]. These studies also revealed some unexpected capabilities of I-COACH, such as the capability to tune the axial resolution independently of the lateral resolution. Another direction of development in I-COACH was the improvement of the computational reconstruction method. The first version of I-COACH used a matched filter for processing the PSF and object intensity distributions. Later, a phase-only filter was applied to improve the SNR [30]. In subsequent studies of I-COACH, a novel computational reconstruction method called non-linear reconstruction (NLR) was developed, whose performance was significantly better than both the matched and phase-only filters [31]. An investigation was made to understand whether NLR is a universal reconstruction method for all optical fields, with semisynthetic studies for a single plane, and the results were promising [32]. However, additional techniques, such as raising images to a power p and applying median filters, were necessary to obtain a reasonable image reconstruction for deterministic optical fields [33].
Recently, a novel computational reconstruction method called the Lucy-Richardson-Rosen algorithm (LR2A) was developed by combining the well-known Lucy-Richardson algorithm (LRA) with NLR [34,35,36]. The performance of LR2A was found to be significantly better than that of LRA and NLR in many studies [37,38,39]. In this tutorial, we present the possibilities of using LR2A as a generalized reconstruction method for deterministic as well as random optical fields for incoherent 3D imaging applications. The physics of optical fields when switching from a coherent source to an incoherent source is different [40,41]. Many optical fields, such as Bessel, Airy and vortex beams, have unique spatio-temporal characteristics that are useful for many applications. However, they cannot be used directly for imaging applications, as these fields do not have a point focus. In this tutorial, we discuss the procedure for transferring the exotic 3D characteristics of these special beams to imaging applications using LR2A. For the first time, optimized MATLAB codes for implementing LR2A are also provided. The manuscript consists of six sections. The methodology is described in the second section. The third section contains the simulation studies. The experimental studies are presented in the fourth section. In the fifth section, an interesting method to tune the axial resolution is discussed. The conclusion and future perspectives of the study are presented in the final section.

2. Materials and Methods

The aim of this tutorial is to show the step-by-step procedure for faithfully transferring the 3D axial characteristics present in exotic beams to imaging systems using the indirect imaging concept and LR2A. Several recent studies have achieved this in the framework of I-COACH [42,43]. In those studies, NLR was used for reconstruction, which performs best with scattered distributions, and so in both studies the deterministic fields' axial characteristics were transferred to the imaging system with additional sparse random encoding. The optical configuration of the proposed system is shown in Figure 1. Light from an object is incident on a phase mask and recorded by an image sensor. The point spread function (PSF) is pre-recorded and used as the reconstructing function to obtain the object information. There are numerous methods available to process the object intensity distribution with the PSF, including the matched filter, phase-only filter, Wiener filter, NLR and LR2A. In this study, LR2A has been used, as it has a better performance than the other methods when deterministic optical fields are considered.
The proposed analysis is limited to spatially incoherent illumination. The methods can be extended to coherent light when the object is a single point, for depth measurements. A single point source with an amplitude of $\sqrt{I_s}$ located at $(\bar{r}_s, z_s)$ is considered. The complex amplitude at the SLM is given as $\sqrt{I_s}\,C_1 L(\bar{r}_s/z_s)Q(1/z_s)$, where $L$ is a linear phase function given as $L(\bar{s}/z)=\exp\!\left[i2\pi(\lambda z)^{-1}(s_x x + s_y y)\right]$ and $Q$ is a quadratic phase function given as $Q(a)=\exp\!\left[i\pi a\lambda^{-1}R^2\right]$, where $R=(x^2+y^2)^{1/2}$ and $C_1$ is a complex constant. On the SLM, a phase mask for the generation of an optical field is displayed. The phase function $\psi_{PM}$ modulates the incoming light, and the complex amplitude after the phase mask is given as $\sqrt{I_s}\,C_1 L(\bar{r}_s/z_s)Q(1/z_s)\psi_{PM}$. At the image sensor, the recorded intensity distribution is given as
$$I_{PSF}(\bar{r}_0;\bar{r}_s,z_s)=\left|\sqrt{I_s}\,C_1 L\!\left(\frac{\bar{r}_s}{z_s}\right)Q\!\left(\frac{1}{z_s}\right)\psi_{PM}\otimes Q\!\left(\frac{1}{z_h}\right)\right|^2,$$
where $\bar{r}_0=(u,v)$ is the location vector in the sensor plane, '$\otimes$' is a 2D convolution operator and $Q(1/z_h)$ propagates $\sqrt{I_s}\,C_1 L(\bar{r}_s/z_s)Q(1/z_s)\psi_{PM}$ to the image sensor. For depth measurements, as mentioned above, both coherent and incoherent light sources can be used with a point object. In that case, it is sufficient to measure the correlation value at the origin obtained from $I_{PSF}(\bar{r}_0;\bar{r}_s,z_s)\star I_{PSF}(\bar{r}_0;\bar{r}_s,z_s+\Delta z)$, where $\Delta z$ is the depth and '$\star$' is a correlation operator. The magnification of the system is given as $M_T=z_h/z_s$. The lateral and axial resolution limits of the system for a typical lens are $1.22\lambda z_s/D$ and $8\lambda(z_s/D)^2$, respectively, where $D$ is the diameter of the SLM.
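For the depth-measurement case above, a minimal MATLAB sketch (the variable names are illustrative, not from the original codes) of evaluating the correlation value at the origin between two recorded point responses is:

% Ipsf1 and Ipsf2: recorded point responses at zs and zs + dz (same even size N x N).
I1 = Ipsf1 - mean(Ipsf1(:));   I2 = Ipsf2 - mean(Ipsf2(:));      % remove the dc level
xc = fftshift(ifft2(fft2(I1).*conj(fft2(I2))));                  % circular cross-correlation
N  = size(I1,1);
c0 = real(xc(N/2+1,N/2+1))/sqrt(sum(I1(:).^2)*sum(I2(:).^2));    % normalized value at the origin

The value c0 is close to 1 when dz is near zero and decreases as the axial separation grows, which is the basis of the depth measurement.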
The $I_{PSF}$ can be expressed as
$$I_{PSF}(\bar{r}_0;\bar{r}_s,z_s)=I_{PSF}\!\left(\bar{r}_0-\frac{z_h}{z_s}\bar{r}_s;0,z_s\right).$$
A 2D object with $M$ point sources can be represented mathematically as a sum of $M$ Kronecker delta functions,
$$o(\bar{r}_s)=\sum_{j=1}^{M}b_j\,\delta(\bar{r}-\bar{r}_{s,j}),$$
where the $b_j$ are constants. In this study, only spatially incoherent illumination is considered, and so the intensity distributions of light diffracted from every point add up in the sensor plane. Therefore, the object intensity distribution can be expressed as
$$I_O(\bar{r}_0;z_s)=\sum_{j=1}^{M}b_j\,I_{PSF}\!\left(\bar{r}_0-\frac{z_h}{z_s}\bar{r}_{s,j};0,z_s\right).$$
In this study, the image reconstruction is carried out using LR2A, where the $(n+1)$th reconstructed image is given as
$$I_R^{(n+1)}=I_R^{(n)}\left\{\frac{I_O}{I_R^{(n)}\otimes I_{PSF}}\;\tilde{\otimes}\;I_{PSF}\right\},$$
where '$\tilde{\otimes}$' refers to NLR, which is defined for two functions $A$ and $B$ as $\mathcal{F}^{-1}\!\left\{|\tilde{A}|^{\alpha}\exp\!\left[i\arg(\tilde{A})\right]|\tilde{B}|^{\beta}\exp\!\left[-i\arg(\tilde{B})\right]\right\}$, where $\tilde{X}$ is the Fourier transform of $X$. The values of $\alpha$ and $\beta$ are tuned between -1 and 1. In the case of LRA, $\alpha$ and $\beta$ are set to 1, which corresponds to a matched filter. The loop is iterated until an optimal reconstruction is obtained. There is a forward convolution $I_R^{(n)}\otimes I_{PSF}$, and the ratio of $I_O$ to this estimate is non-linearly correlated with $I_{PSF}$. The better estimate provided by NLR enables rapid convergence.
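A minimal MATLAB sketch of the above update rule is given below. The function and variable names (lr2a, nlr, nIter) are illustrative rather than the authors' released codes; FFT-based circular convolution, even-sized matrices with the PSF centered, and a small regularization constant are assumed.

function IR = lr2a(IO, IPSF, alpha, beta, nIter)
% LR2A sketch: IR^(n+1) = IR^n .* [ (IO ./ (IR^n conv IPSF)) non-linearly correlated with IPSF ]
% Save as lr2a.m; IO and IPSF are equally sized, non-negative double matrices.
    IR  = IO;                                         % initial guess: the recorded intensity
    OTF = fft2(IPSF);                                 % transfer function of the point response
    for n = 1:nIter
        fwd   = abs(fftshift(ifft2(fft2(IR).*OTF)));  % forward (circular) convolution IR^n with IPSF
        ratio = IO./max(fwd, 1e-9);                   % maximum-likelihood ratio, regularized
        IR    = IR.*nlr(ratio, IPSF, alpha, beta);    % multiplicative update using the NLR estimate
    end
end

function c = nlr(A, B, alpha, beta)
% Non-linear reconstruction of A with B; alpha = beta = 1 reduces to a matched filter.
    FA = fft2(A);   FB = fft2(B);
    c  = abs(fftshift(ifft2((abs(FA).^alpha).*exp(1i*angle(FA)) ...
                          .*(abs(FB).^beta ).*exp(-1i*angle(FB)))));
end

In practice, alpha and beta are scanned between -1 and 1 and nIter is kept small owing to the rapid convergence noted above; the combination giving the best reconstruction is retained.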

3. Simulation results

In this tutorial, a wide range of phase masks are demonstrated as coded apertures, some of them commonly used and some exotic. In the previous studies with LR2A, it was shown that the performance of LR2A is best when the IPSF is a symmetric distribution along the x and y directions [39,44]. The shift variance of LR2A was demonstrated in [44]. Some solutions in the form of post-processing to address the problems associated with the asymmetry of the IPSF have been discussed in [39]. However, all the above effects were due to the fact that LR2A was not generalized. In this study, LR2A has been generalized to all shapes of IPSFs and to both real and complex cases. The simulation space has been constructed in MATLAB with the following specifications: matrix size = 500 × 500 pixels, pixel size Δ = 10 μm, wavelength λ = 650 nm, object distance zs = 0.4 m, recording distance zh = 0.4 m and focal length of the diffractive lens f = 0.2 m. For the diffractive lens, to prevent the direct imaging mode, the zh value was modified to 0.2 m. The phase masks of a diffractive lens, spiral lens, axicon and spiral axicon are given as $\exp\!\left[-i\pi(\lambda f)^{-1}R^2\right]$, $\exp\!\left[-i\pi(\lambda f)^{-1}R^2\right]\exp(iL\theta)$, $\exp\!\left[-i2\pi\Lambda^{-1}R\right]$ and $\exp\!\left[-i2\pi\Lambda^{-1}R\right]\exp(iL\theta)$, respectively, where L is the topological charge and Λ is the period of the axicon. The optical configuration in this study is simple, consisting of three steps, namely free-space propagation, interaction and again a free-space propagation. The equivalent mathematical operations for these three steps are convolution, product and convolution, respectively. The first mathematical operation is quite direct, as a single point is considered, which is a Kronecker delta function. Any function convolved with a delta function creates a replica of the function. Therefore, the complex amplitude obtained after the first operation is equivalent to $\exp\!\left[i\pi(\lambda z_s)^{-1}R^2\right]$. In the next step, this complex amplitude is multiplied by $\psi_{PM}$ through an element-wise multiplication. In the final step, the resulting complex amplitude is propagated to the sensor plane by a convolution operation expressed as three Fourier transforms, $\mathcal{F}^{-1}\!\left\{\mathcal{F}\!\left\{\exp\!\left[i\pi(\lambda z_s)^{-1}R^2\right]\times\psi_{PM}\right\}\times\mathcal{F}\!\left\{\exp\!\left[i\pi(\lambda z_h)^{-1}R^2\right]\right\}\right\}$. The intensity distribution IPSF is obtained by squaring the absolute value of the complex amplitude matrix at the sensor plane. The corresponding steps in the MATLAB software can be found in the supplementary files of [45].
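The three-step chain described above can be sketched in MATLAB as follows (a single-plane example with the stated specifications; the spiral-lens mask, the lens-sign convention and the variable names are illustrative assumptions):

% Simulation grid and parameters from the text.
N = 500; dx = 10e-6; lambda = 650e-9;             % pixels, pixel pitch, wavelength
zs = 0.4; zh = 0.2; f = 0.2;                      % object distance, recording distance, focal length
x = (-N/2:N/2-1)*dx; [X,Y] = meshgrid(x,x);
R = sqrt(X.^2+Y.^2); theta = atan2(Y,X);
L = 3;                                            % topological charge (example)
PM = exp(-1i*pi*R.^2/(lambda*f)).*exp(1i*L*theta);% spiral lens phase mask
Q  = @(z) exp(1i*pi*R.^2/(lambda*z));             % quadratic (Fresnel) phase factor
E  = Q(zs).*PM;                                   % point-source wave modulated by the mask
Es = fftshift(ifft2(fft2(E).*fft2(Q(zh))));       % propagation to the sensor as a convolution
IPSF = abs(Es).^2;                                % recorded point spread function

The diffractive axicon is obtained by replacing PM with exp(-1i*2*pi*R/Lambda) for an assumed period Lambda, and the plain diffractive lens by dropping the spiral term.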
The simulation was carried out by shifting zs from 0.2 to 0.6 m in steps of 4 mm, and the IPSF(zs) was accumulated into a cube matrix (x, y, z, I). The images of the phase masks for the diffractive lens, spiral lens (L = 1, 3 and 5), diffractive axicon and spiral axicon (L = 1, 3 and 5) are shown in row – 1 of Figure 2. The 3D axial intensity distributions for these diffractive elements are shown in row – 2 of Figure 2. The diffractive lens does not have a focal point, as the imaging condition was not satisfied within this range of zs when zh is 0.2 m. For the spiral lens, a focused ring is obtained at the recording plane corresponding to zs = 0.4 m, and the ring blurs and expands similarly to the case of a diffractive lens. For a diffractive axicon, a Bessel distribution is obtained in the recording plane, and it is invariant with changes in zs [46,47]. A similar behavior is seen for higher-order Bessel beams (HOBB) [48]. As seen from the axial intensity distributions, none of these beams can be directly used for imaging applications. It may be argued that Bessel beams can be used, but, as is well known, the non-changing intensity distribution comes at a price, which is the loss of higher spatial frequencies [33]. Consequently, imaging using a Bessel beam results in low-resolution images.
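Assuming the single-plane sketch above is wrapped into a hypothetical helper simulatePSF(zs, PM), the cube accumulation can be written as:

zsList = 0.2:0.004:0.6;                           % zs from 0.2 m to 0.6 m in 4 mm steps
cube = zeros(N, N, numel(zsList));
for k = 1:numel(zsList)
    cube(:,:,k) = simulatePSF(zsList(k), PM);     % IPSF(zs) stacked along the third dimension
end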
The holographic point spread function is not IPSF itself but $I_{PSF}\,\tilde{\otimes}\,I_{PSF}$, where '$\tilde{\otimes}$' here denotes the LR2A operation with n iterations, which is the nearest equivalent of an autocorrelation. It must be noted that what is done in LR2A is not a regular correlation as in a matched filter. The autocorrelation function was calculated for the above zs variation for the different cases of phase masks and accumulated in a cube matrix, as shown in the third row of Figure 2. As seen, the autocorrelation function is a cylinder with uniform radius for all values of zs. This is the holographic 3D PSF, from whose profile it appears possible to reconstruct the object information with a high resolution for all object planes. To understand the axial imaging characteristics in the holographic domain, the IPSF(zs) was cross-correlated with the IPSF of a reference plane, which in this case was set at zs = 0.4 m. Once again, this cross-correlation is the nearest equivalent term in LR2A, calculated as $I_{PSF}(z_s)\,\tilde{\otimes}\,I_{PSF}(z_s=0.4\ \mathrm{m})$. The cube data obtained for the different phase masks are shown in the fourth row of Figure 2. As seen in the figure, the axial characteristics have been faithfully transferred to the imaging system, except that it is now possible to perform imaging at any one or multiple planes of interest simultaneously. With a diffractive lens, a focal point is seen at a particular plane and is rapidly blurred in the other planes with respect to zs. The same behavior holds for the spiral lenses, which contain the diffractive lens. The case of the axicon shows a long focal depth, and a similar behavior is seen for the spiral axicons. With the indirect imaging concept and LR2A, it is possible to faithfully transfer the exotic axial characteristics of special beams to an imaging system. It must be noted that the cross-correlation was carried out with fixed values of α, β and n for every case. In some of the planes of both the Airy pattern and the self-rotating pattern, some scattering is seen, indicating either a non-optimal reconstruction condition or a slightly lower performance of LR2A.
All the cases demonstrated above involve IPSFs that are radially symmetric. For asymmetric cases such as Airy beams, self-rotating beams and speckle patterns, NLR performed better than LR2A, as LR2A was not optimized for asymmetric cases. In this study, LR2A has been generalized to both complex and asymmetric shapes. Three phase masks are investigated: a cubic phase mask with a phase function $\exp\!\left[i(2\pi/\lambda)(ax^3+by^3)\right]$, where a = b ~ 1000 [28,40]; a diffractive lens with azimuthally varying focal length (DL-AVF), $\exp\!\left[-i\,2\pi^2R^2/\!\left(\lambda(2\pi f_0+\Delta f\,\theta)\right)\right]$ [29,49]; and a quasi-random lens with a scattering ratio of 0.04 obtained using the Gerchberg-Saxton algorithm [3]. The images of the phase masks for these three cases are shown in column – 1 of Figure 3. In this case, to show the curved path of the Airy pattern, the axial range was extended, with zs varied from 0.1 m to 0.7 m (Figure 3). The 3D intensity distributions in the direct imaging mode for the three cases are shown in column – 2 of Figure 3. The 3D autocorrelation distributions are shown in column – 3 of Figure 3. The 3D cross-correlation distributions obtained by LR2A for the three cases are shown in column – 4 of Figure 3. Comparing columns – 2 and 4, it can be seen that the axial characteristics of the exotic beams have been faithfully transferred to the imaging system. The quasi-random lens, or any scattering mask, behaves exactly like a diffractive lens in the holographic domain. Comparing the results in Figure 3 with the previous results [39,44], a significant improvement is seen.
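On the same grid as before, the first two asymmetric masks can be sketched as follows (the numerical values of a, b, f0 and df are illustrative assumptions; the quasi-random lens additionally requires the Gerchberg-Saxton algorithm of [3] and is not reproduced here):

a = 1000; b = 1000;
PM_cubic = exp(1i*(2*pi/lambda)*(a*X.^3 + b*Y.^3));               % cubic phase mask (Airy-type response)
f0 = 0.2; df = 0.05;                                              % base focal length and azimuthal sweep
th = mod(atan2(Y,X), 2*pi);                                       % azimuth in [0, 2*pi)
PM_dlavf = exp(-1i*2*pi^2*R.^2./(lambda*(2*pi*f0 + df*th)));      % DL-AVF (self-rotating response)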
The simulation study of a two-plane object consisting of two test objects with the letters "CIPHR" and "TARTU" at zs = 0.4 m and 0.5 m, respectively, is presented next. Figure 4 shows, for a diffractive lens, spiral lens (L = 5), spiral axicon (L = 3), cubic phase mask, DL-AVF and quasi-random lens, the images of the IPSFs for zs = 0.4 m and 0.5 m, the object intensity pattern IO obtained by convolving the object "CIPHR" with IPSF (zs = 0.4 m) and "TARTU" with IPSF (zs = 0.5 m) followed by a summation, and the reconstructions IR corresponding to the two planes using LR2A. As seen from the results in Figure 4, the cases of the diffractive lens and the quasi-random lens appear similar with respect to the axial behavior, i.e., when a particular plane is reconstructed, only that plane's information is focused and enhanced, while the other plane's information is blurred and weak. However, for elements such as the spiral lens and spiral axicon, the other plane's information contains "hot spots" that are prominent and sometimes even stronger than the information in the reconstructed plane. This is due to the similarity between the two IPSFs. This is one of the pitfalls in using such deterministic optical fields. The problem with such hot spots is that, if the object is not known a priori, it is not possible to discriminate whether a hot spot corresponds to useful information about the object or to blurring from a different plane. The axial characteristics of the Airy beam and the self-rotating beam have been faithfully transferred to the imaging system.
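A sketch of the two-plane simulation, assuming binary test images O1 and O2 (e.g., "CIPHR" and "TARTU") and the two PSFs are available as N × N matrices, and using the lr2a function given in Section 2 (the alpha, beta and nIter values are illustrative):

conv2D = @(A,B) abs(fftshift(ifft2(fft2(A).*fft2(B))));    % circular 2D convolution
IO = conv2D(O1, IPSF_04) + conv2D(O2, IPSF_05);            % incoherent sum of the two planes
IR_04 = lr2a(IO, IPSF_04, 0.2, 0.5, 8);                    % plane at zs = 0.4 m in focus
IR_05 = lr2a(IO, IPSF_05, 0.2, 0.5, 8);                    % plane at zs = 0.5 m in focus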
Another pitfall in 3D imaging in the indirect imaging mode is the depth-wavelength reciprocity [33,50]. The changes in the intensity distribution may appear identical for a change in depth, a change in wavelength, or both. This is true for both deterministic and random optical fields, except for beams carrying orbital angular momentum (OAM), as a change in wavelength produces a fractional topological charge, which is unique. Another point to consider when using IPSFs derived from special functions is their sampling behavior. Except for the cases of the quasi-random lens and the diffractive lens, the other cases have distorted the object information in some way. In the case of the Airy beam, the IPSF consists of periodic dot patterns along the x and y directions, which, when sampling the object information, render curved features as square-like shapes. The IPSFs with rings have shaped the object information in a similar fashion, which again raises concerns about the reliability of the measurement when the object information is not already known.

4. Experiments

The experimental setup is shown in Figure 5. The setup was built using the following optical components: a high-power LED (Thorlabs, 940 mW, λ = 660 nm and Δλ = 20 nm), iris, diffuser, polarizer, refractive lenses, object/pinhole, beam splitter, spatial light modulator (SLM) (Thorlabs Exulus HD2, 1920 × 1200 pixels, pixel size = 8 μm) and an image sensor (Zelux CS165MU/M 1.6 MP monochrome CMOS camera, 1440 × 1080 pixels with pixel size ~3.5 µm). The light from the LED was passed through an iris (I1), which controls the illumination, and then through a diffuser (Thorlabs Ø1" Ground Glass Diffuser, 220 GRIT), which is used to remove the LED's grating lines. The light from the diffuser was collected using a refractive lens (L1) (f = 5 cm) and passed through a polarizer oriented along the active axis of the SLM. Two objects, the digits '3' and '1' from Group 5 of the R1DS1N Negative 1951 USAF test target (Ø1"), were used. A pinhole of 50 μm was used to record the IPSF. The object is critically illuminated using a refractive lens (L2) (f = 5 cm). The light from the object is collimated by another refractive lens (L3) (f = 5 cm), passed through the beam splitter and incident on the SLM. On the SLM, phase masks of deterministic and random optical fields were displayed one after another, and the IPSF and IO were recorded by the image sensor. The IO for the two objects were recorded at two different depths (zs = 5 cm and zs = 5.6 cm) and summed to demonstrate 3D imaging. The experimental results are presented in Figure 6. The phase masks of the deterministic and random optical fields are shown in column – 1, their corresponding IPSFs (zs = 5 cm) and IPSFs (zs = 5.6 cm) are shown in columns – 2 and 3, respectively, IO is shown in column – 4, and IR (zs = 5 cm) and IR (zs = 5.6 cm) are shown in columns – 5 and 6, respectively. Once again, it can be seen that the 3D characteristics of the beams have been faithfully transferred to the indirect imaging system.
To demonstrate the application of LR2A to real day-to-day applications, medical images were obtained from surgeons. A 65-year-old male patient with a known case of prostate cancer had undergone radiotherapy about a year earlier. He developed bleeding and mucous discharge from the anus. The colonoscopy findings show mucosal pallor, telangiectasias, edema, spontaneous hemorrhage and friable mucosa. It can be noted that the mucosa was congested, with ulceration and stricture. The images were captured using an Olympus colonoscope system and CaptureITPro medical imaging software. The direct images obtained using the colonoscope are shown in Figure 7(a) and 7(e). The IPSF can be synthesized in the computer, or an isolated dot can be taken from Figure 7(a), padded with zeros and used as the reconstructing function. In this study, the IPSF was synthetic [38,39]. The different color channels were extracted from the image, processed separately using the synthetic IPSF and LR2A, and then combined, as discussed in [38]. The reconstructed images are shown in Figure 7(b) and 7(f). The magnified regions of Figure 7(a) and 7(b) are shown in Figure 7(c) and 7(d), respectively. The magnified regions of Figure 7(e) and 7(f) are shown in Figure 7(g) and 7(h), respectively. The red region indicates telangiectasias and the black region shows necrosis, which is the death of body tissue. The yellow region shows mucosal sloughing, and the magnified region shows a necrotic area with dead mucosa.
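A sketch of the per-channel processing, with a synthetic Gaussian spot standing in for the IPSF (the file name, spot width and reconstruction parameters are illustrative assumptions, and lr2a is the sketch from Section 2):

img = double(imread('colonoscopy_frame.png'))/255;           % hypothetical RGB frame
[rows, cols, ~] = size(img);
[u, v] = meshgrid(1:cols, 1:rows);
psf = exp(-((u-floor(cols/2)-1).^2 + (v-floor(rows/2)-1).^2)/(2*2^2));  % synthetic IPSF: narrow Gaussian
psf = psf/sum(psf(:));
rec = zeros(size(img));
for ch = 1:3
    tmp = lr2a(img(:,:,ch), psf, 0.2, 0.6, 10);              % reconstruct each color channel separately
    rec(:,:,ch) = tmp/max(tmp(:));                           % normalize before recombining the channels
end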
Cone beam computed tomography (CBCT), a diagnostic imaging modality, was used to image a patient with a focal spot of 0.5 mm, a field of view of 8 × 8 cm, a voxel size of 0.2 mm/0.3 mm and an exposure time of 15.5 s. In the images obtained for a patient with multiple implants placed in the jaw, metallic artifacts compromised the clarity and blurred the finer details, as shown in Row – 1 of Figure 8. Once again, a synthetic IPSF and LR2A were used to improve the resolution and contrast of the images. The reconstructed images are shown in Row – 2 of Figure 8. The reconstructed images have a better resolution and contrast compared to the direct images.

5. Discussion

In this study, LR2A and the indirect imaging concept have been used as tools to faithfully transfer the 3D imaging characteristics from the beam to the imaging system [42,43,51]. In our recent study [51], the possibility of tuning the axial resolution of an imaging system after completing the recording process was demonstrated by post-recording hybridization methods. The same can be done with the common and exotic beams. A hybrid PSF, $I_{HPSF}$, can be formed by summing the pure IPSFs with appropriate weights $w_k$ in the form $I_{HPSF}(z)=\sum_{k=1}^{m}w_k I_{PSF}(k,z)$. The object intensity distribution can be hybridized with the same weights as used for creating the hybrid PSF, $I_{HO}=\sum_{k=1}^{m}w_k I_O(k)$. The object information can then be reconstructed with the effective imaging characteristics using LR2A. Two cases are simulated here. In the first case, hybridization is carried out between the Airy pattern and the self-rotating beam; in the second case, between the Airy pattern and the quasi-random lens. The 4D intensity distributions generated for the hybrid PSFs are shown in Figure 9(a) and 9(c) for cases 1 and 2, respectively. The reconstructed 4D patterns using LR2A for cases 1 and 2 are shown in Figure 9(b) and 9(d), respectively. As expected, the focal depth decreased more in the second case than in the first, as the second ingredient of case 2 has a high axial resolution. This approach is not limited to two ingredients; an ensemble of any number m of beams can be constructed easily after recording.
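A sketch of the hybridization step, assuming the PSF cubes and object recordings of the two ingredient beams are available as matrices with the indicated names, and using the lr2a sketch from Section 2 (the weights and reconstruction parameters are illustrative):

w = [0.7, 0.3];                                    % illustrative weights for the two ingredients
IHPSF = w(1)*cube_airy + w(2)*cube_rot;            % hybrid PSF cube, combined plane by plane
IHO   = w(1)*IO_airy   + w(2)*IO_rot;              % object recording hybridized with the same weights
k = 25;                                            % index of the axial plane of interest
IR = lr2a(IHO, IHPSF(:,:,k), 0.2, 0.5, 8);         % reconstruction with the effective characteristics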

6. Conclusion

In this tutorial, LR2A has been presented as a generalized computational reconstruction method for different types of deterministic fields and scattered patterns. The algorithm was tested for a wide range of beams and variations, and it was always possible to obtain a high-quality reconstruction by tuning the parameters α, β and n. In all cases, the axial characteristics have been faithfully transferred from the non-imaging beam to the imaging system using LR2A. Further, medical images recorded directly using a colonoscope and CBCT have been processed using a synthetic IPSF and LR2A, and an enhancement in resolution and contrast was observed. The method can be extended to aberration correction as well, by choosing an aberrated dot from the recorded images. The high-quality reconstructions obtained in the laboratory setup with a wide range of deterministic and scattered optical fields, as well as with medical images, demonstrate the wide applicability of LR2A, making it a strong candidate for a universal computational reconstruction method.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org.

Author Contributions

Conceptualization, V. A.; methodology, V. A., A. N. K. R., R. A. G., M. S. A. S., S. D. M. T., A. P. I. X., F. G. A., S. G., A. S. J. F. R.; software, V. A., M. S. A. S., S. D. M. T.; validation, V. A., A. N. K. R., R. A. G., M. S. A. S., S. D. M. T., A. P. I. X., F. G. A., S. G., A. S. J. F. R.; formal analysis, V.A., M. S. A. S., S. D. M. T.; investigation, all the authors; resources, V.A., M. S. A. S., S. D. M. T.; writing—original draft preparation, A. P. I. X., F. G. A., V. A.; writing—review and editing, all the authors; supervision, V. A.; project administration, A. S. J. F. R.; funding acquisition, V.A., M. S. A. S., S. D. M. T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by European Union’s Horizon 2020 research and innovation programme grant agreement No. 857627 (CIPHR).

Informed Consent Statement

Informed consent was obtained for the medical images involved in the study. The informed consent has been obtained from the patients of Dr. Scott’s Laser Piles Fistula Center, Nagercoil, Tamil Nadu 629201, India, and Darshan Dental and Orthodontic Clinic, Kanyakumari, Tamil Nadu 629401, India, to publish this paper.

Data Availability Statement

The data can be obtained from the authors upon reasonable request.

Acknowledgments

The authors thank Dr. Scott Clinic, Dr. Jeyasekharan Medical Trust, Rajas Dental College and Hospital and Darshan Dental and Orthodontic Clinic, Kanyakumari, India.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Goodman, J.W. Introduction to Fourier Optics, 3rd ed.; Roberts and Company: Englewood, CO, USA, 2005.
2. Javidi, B.; Carnicer, A.; Anand, A.; Barbastathis, G.; Chen, W.; Ferraro, P.; Goodman, J.W.; Horisaki, R.; Khare, K.; Kujawinska, M.; et al. Roadmap on digital holography. Opt. Express 2021, 29, 35078–35118.
3. Rosen, J.; Vijayakumar, A.; Kumar, M.; Rai, M.R.; Kelner, R.; Kashter, Y.; Bulbul, A.; Mukherjee, S. Recent advances in self-interference incoherent digital holography. Adv. Opt. Photonics 2019, 11, 1–66.
4. Liu, J.P.; Tahara, T.; Hayasaki, Y.; Poon, T.-C. Incoherent digital holography: a review. Applied Sciences 2018, 8, 143.
5. Rosen, J.; Vijayakumar, A.; Hai, N. Digital Holography Based on Aperture Engineering; SPIE Spotlight E-Book Series; SPIE: Bellingham, Washington, USA, 2023.
6. Murty, M.V.R.K.; Hagerott, E.C. Rotational shearing interferometry. Appl. Opt. 1966, 5, 615–619.
7. Armitage, J.D.; Lohmann, A. Rotary shearing interferometry. Opt. Acta 1965, 12, 185–192.
8. Roddier, C.; Roddier, F.; Demarcq, J. Compact rotational shearing interferometer for astronomical applications. Opt. Eng. 1989, 28, 280166.
9. Cochran, G. New method of making Fresnel transforms with incoherent light. J. Opt. Soc. Am. 1966, 56, 1513–1517.
10. Marathay, A.S. Noncoherent-object hologram: its reconstruction and optical processing. J. Opt. Soc. Am. A 1987, 4, 1861–1868.
11. Sirat, G.Y. Conoscopic holography. I. Basic principles and physical basis. J. Opt. Soc. Am. A 1992, 9, 70–83.
12. Mugnier, L.M.; Sirat, G.Y. On-axis conoscopic holography without a conjugate image. Opt. Lett. 1992, 17, 294–296.
13. Rosen, J.; Brooker, G. Digital spatially incoherent Fresnel holography. Opt. Lett. 2007, 32, 912–914.
14. Rosen, J.; Brooker, G. Non-scanning motionless fluorescence three-dimensional holographic microscopy. Nat. Photon. 2008, 2, 190–195.
15. Kelner, R.; Rosen, J. Spatially incoherent single channel digital Fourier holography. Opt. Lett. 2012, 37, 3723–3725.
16. Kelner, R.; Rosen, J.; Brooker, G. Enhanced resolution in Fourier incoherent single channel holography (FISCH) with reduced optical path difference. Opt. Express 2013, 21, 20131–20144.
17. Rosen, J.; Alford, S.; Vijayakumar, A.; Art, J.; Bouchal, P.; Bouchal, Z.; Erdenebat, M.U.; Huang, L.; Ishii, A.; Juodkazis, S.; et al. Roadmap on recent progress in FINCH technology. J. Imaging 2021, 7, 197.
18. Tahara, T.; Zhang, Y.; Rosen, J.; et al. Roadmap of incoherent digital holography. Appl. Phys. B 2022, 128, 193.
19. Tahara, T.; Kozawa, Y.; Ishii, A.; Wakunami, K.; Ichihashi, I.; Oi, R. Two-step phase-shifting interferometry for self-interference digital holography. Opt. Lett. 2021, 46, 669–672.
20. Siegel, N.; Lupashin, V.; Storrie, B.; et al. High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers. Nat. Photon. 2016, 10, 802–808.
21. Vijayakumar, A.; Kashter, Y.; Kelner, R.; Rosen, J. Coded aperture correlation holography—a new type of incoherent digital holograms. Opt. Express 2016, 24, 12430–12441.
22. Ables, J.G. Fourier transform photography: A new method for X-ray astronomy. Proc. Astron. Soc. 1968, 1, 172.
23. Dicke, R.H. Scatter-hole cameras for X-rays and gamma rays. Astrophys. J. 1968, 153, L101.
24. Wagadarikar, A.; John, R.; Willett, R.; Brady, D. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 2008, 47, B44–B51.
25. Vijayakumar, A.; Rosen, J. Interferenceless coded aperture correlation holography–a new technique for recording incoherent digital holograms without two-wave interference. Opt. Express 2017, 25, 13883–13896.
26. Rai, M.R.; Rosen, J. Noise suppression by controlling the sparsity of the point spread function in interferenceless coded aperture correlation holography (I-COACH). Opt. Express 2019, 27, 24311–24323.
27. Vijayakumar, A. Tuning Axial Resolution Independent of Lateral Resolution in a Computational Imaging System Using Bessel Speckles. Micromachines 2022, 13, 1347.
28. Kumar, R.; Vijayakumar, A.; Anand, J. 3D single shot lensless incoherent optical imaging using coded phase aperture system with point response of scattered airy beams. Sci. Rep. 2023, 13, 2996.
29. Bleahu, A.; Gopinath, S.; Kahro, T.; Angamuthu, P.P.; Rajeswary, A.S.J.F.; Prabhakar, S.; Kumar, R.; Salla, G.; Singh, R.; Rosen, J.; Anand, V. 3D Incoherent Imaging Using an Ensemble of Sparse Self-Rotating Beams. Opt. Express 2023, 31, 26120–26134.
30. Vijayakumar, A.; Kashter, Y.; Kelner, R.; Rosen, J. Coded aperture correlation holography system with improved performance. Appl. Opt. 2017, 56, F67–F77.
31. Rai, M.R.; Vijayakumar, A.; Rosen, J. Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH). Opt. Express 2018, 26, 18143–18154.
32. Smith, D.; Gopinath, S.; Arockiaraj, F.G.; et al. Nonlinear Reconstruction of Images from Patterns Generated by Deterministic or Random Optical Masks—Concepts and Review of Research. Journal of Imaging 2022, 8, 174.
33. Vijayakumar, A.; Rosen, J.; Juodkazis, S. Review of engineering techniques in chaotic coded aperture imagers. Light: Advanced Manufacturing 2021, 3, LAM2021090035.
34. Vijayakumar, A.; Han, M.; Maksimovic, J.; Ng, S.H.; Katkus, T.; Klein, A.; Bambery, K.; Tobin, M.J.; Vongsvivut, J.; Juodkazis, S. Single-shot mid-infrared incoherent holography using Lucy-Richardson-Rosen algorithm. Opto-Electron. Sci. 2022, 1, 210006.
35. Richardson, W.H. Bayesian-Based Iterative Method of Image Restoration. Journal of the Optical Society of America 1972, 62, 55–59.
36. Lucy, L.B. An iterative technique for the rectification of observed distributions. Astron. J. 1974, 79, 745.
37. Praveen, P.A.; Arockiaraj, F.G.; Gopinath, S.; Smith, D.; et al. Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm. Photonics 2022, 9, 625.
38. Jayavel, A.; Gopinath, S.; Angamuthu, P.P.; Arockiaraj, F.G.; et al. Improved Classification of Blurred Images with Deep-Learning Networks Using Lucy-Richardson-Rosen Algorithm. Photonics 2023, 10, 396.
39. Gopinath, S.; Angamuthu, P.P.; Kahro, A.; Bleahu, A.; et al. Implementation of a Large-Area Diffractive Lens Using Multiple Sub-Aperture Diffractive Lenses and Computational Reconstruction. Photonics 2022, 10, 3.
40. Lumer, Y.; Liang, Y.; Schley, R.; Kaminer, I.; et al. Incoherent self-accelerating beams. Optica 2015, 2, 886–892.
41. Wang, H.; Wang, H.; Ruan, Q.; Chan, J.Y.E.; et al. Coloured vortex beams with incoherent white light illumination. Nat. Nanotechnol. 2023, 18, 264–272.
42. Rai, M.R.; Rosen, J. Depth-of-field engineering in coded aperture imaging. Opt. Express 2021, 29, 1634–1648.
43. Dubey, N.; Kumar, R.; Rosen, J. Multi-wavelength imaging with extended depth of field using coded apertures and radial quartic phase functions. Opt. Lasers Eng. 2023, 169, 107729.
44. Anand, V.; Khonina, S.; Kumar, R.; Dubey, N.; Reddy, A.N.K.; Rosen, J.; Juodkazis, S. Three-dimensional incoherent imaging using spiral rotating point spread functions created by double-helix beams. Nanoscale Res. Lett. 2022, 17, 37.
45. Anand, V.; Katkus, T.; Linklater, D.P.; Ivanova, E.P.; Juodkazis, S. Lensless Three-Dimensional Quantitative Phase Imaging Using Phase Retrieval Algorithm. J. Imaging 2020, 6, 99.
46. Khonina, S.N.; Kazanskiy, N.L.; Karpeev, S.V.; Butt, M.A. Bessel Beam: Significance and Applications—A Progressive Review. Micromachines 2020, 11, 997.
47. Khonina, S.N.; Kazanskiy, N.L.; Khorin, P.A.; Butt, M.A. Modern Types of Axicons: New Functions and Applications. Sensors 2021, 21, 6690.
48. Khonina, S.N.; Morozov, A.A.; Karpeev, S.V. Effective transformation of a zero-order Bessel beam into a second-order vortex beam using a uniaxial crystal. Laser Phys. 2014, 24, 056101.
49. Niu, K.; Zhao, S.; Liu, Y.; Tao, S.; Wang, F. Self-rotating beam in the free space propagation. Opt. Express 2022, 30, 5465–5472.
50. Wan, Y.; Liu, C.; Ma, T.; Qin, Y.; Lv, S. Incoherent coded aperture correlation holographic imaging with fast adaptive and noise-suppressed reconstruction. Opt. Express 2021, 29, 8064–8075.
51. Gopinath, S.; Rajeswary, A.S.J.F.; Anand, V. Sculpting Axial Characteristics of Incoherent Imagers by Hybridization Methods. Preprint, available at SSRN: https://ssrn.com/abstract=4505825.
Figure 1. Concept figure: Recording images with different phase masks – diffractive lens, spiral lens, spiral axicon and axicon – and reconstruction using LR2A by processing the object intensity and PSF. OTF – optical transfer function; n – number of iterations; ⊗ – 2D convolution operator; ℑ* – complex conjugate following a Fourier transform; ℑ⁻¹ – inverse Fourier transform; Rn is the nth solution and n is an integer; when n = 1, Rn = I; ML – maximum likelihood; α and β are varied from -1 to 1.
Figure 2. Row – 1: Phase masks of a diffractive lens, spiral lens (L = 1, 3 and 5), diffractive axicon and spiral axicon (L = 1, 3 and 5). Row – 2: Cube data of axial intensity distributions obtained at the recording plane for a diffractive lens, spiral lens (L = 1, 3 and 5), diffractive axicon and spiral axicon (L = 1, 3 and 5), row – 3: Cube data of the intensity of the autocorrelation function for the different cases of phase masks, row – 4: Cube data of the cross-correlation function for the different cases of phase masks, when zs is varied from 0.2 to 0.6 m.
Figure 3. Column – 1: Phase images of cubic phase mask, DL-AVF and quasi-random lens. Column – 2: Cube data of axial intensity distributions obtained at the recording plane for cubic phase mask, DL-AVF and quasi-random lens, Column – 3: Cube data of the intensity of the autocorrelation function for the different cases of phase masks, Column – 4: Cube data of the cross-correlation function for the different cases of phase masks, when zs is varied from 0.1 to 0.7 m.
Figure 4. Column – 1: Images of IPSF(zs = 0.4 m), column – 2: images of IPSF(zs = 0.5 m), column – 3: object intensity distributions for the two-plane object consisting of "CIPHR" and "TARTU", column – 4: reconstruction results IR using column – 1, column – 5: reconstruction results IR using column – 2, for diffractive lens (row – 1), spiral lens (L = 5) (row – 2), spiral axicon (L = 3) (row – 3), cubic phase mask (row – 4), DL-AVF (row – 5) and quasi-random lens (row – 6).
Figure 5. Experimental setup: (1) LED, (2) LED power controller, (3) iris(I1), (4) diffuser, (5) refractive lens (L1), (6) polarizer, (7) refractive lens (L2), (8) object/pinhole, (9) refractive lens (L3), (10) iris (I2), (11) beam splitter, (12) SLM, (13) image sensor.
Figure 6. Experimental results. Column – 1: Images of phase masks, column – 2: images of IPSF (zs = 5 cm), column – 3: images of IPSF (zs = 5.6 cm), column – 4: images of summed object intensity distributions IO of the two objects at different depths IO (zs = 5 cm) and IO (zs = 5.6 cm), column – 5: Reconstruction results IR (zs = 5 cm) using corresponding IPSF (zs = 5 cm) in column – 2, column – 6: Reconstruction results IR (zs = 5.6 cm) using corresponding IPSF (zs = 5.6 cm) in column – 3.
Figure 7. Experimental colonoscopy results. Red region – Telangiectasias; black region – necrosis which is the death of body tissue; yellow region – mucosal sloughing; magnified region – Necrotic area (dead mucosa).
Figure 8. Experimental cone beam computed tomography results. The saturated region indicates metal artifacts. The size of each square is 8 cm × 8 cm.
Figure 9. Simulation results of post-hybridization. 4D distribution of the hybrid beam obtained by combining (a) Airy beam and self-rotating beam and (b) its reconstruction using LR2A. 4D distribution of the hybrid beam obtained by combining (c) Airy beam and scattered beam and (d) its reconstruction using LR2A.