1. Introduction
Imaging refers to the process of capturing and reproducing the characteristics of an object as close to reality as possible. There are two primary approaches to imaging: direct and indirect. Direct imaging is the oldest imaging concept, while indirect imaging is relatively much younger. A direct imaging system, as the name suggests, creates the image of an object directly on a sensor. Most commonly available imaging systems, such as the laboratory microscope, digital camera, telescope, and even the human eye, are based on the direct imaging concept. The indirect imaging concept requires at least two steps to complete the imaging process [1]. Holography is an example of indirect imaging consisting of two steps, namely optical recording and numerical reconstruction. While the imaging process of holography is complicated and its setup expensive when compared to direct imaging, the advantages of holography are remarkable. In holography, it is possible to record the entire 3D information of an object in a 2D matrix with a single camera shot, whereas in direct imaging a single camera shot can obtain only 2D information of an object. Holography also records phase information, which makes it possible to see thickness and refractive index variations in transparent objects. These significant advantages justify the complicated and expensive recording setup and the two-step imaging process of holography [2].
In coherent holography, light from an object is interfered with a reference wave that carries no information about the object but is derived from the same source to obtain the hologram. This recording process is suitable only if the illumination source is coherent [3]. Most commonly available imaging systems, such as the laboratory microscopes, telescopes and digital cameras mentioned above, rely on incoherent illumination. There are many reasons for choosing incoherent illumination over coherent illumination, starting with the primary ones: it is not practical to shine a laser on every object, and natural light is incoherent. Incoherent imaging also offers lower imaging noise and higher imaging resolution compared to coherent imaging. However, recording a hologram with an incoherent source is a challenging task and impossible in the framework of coherent holography. A new concept of interference is needed to realize holography with an incoherent source [4,5].
One of the aims of holography is to compress 3D information into a 2D intensity distribution. In coherent holography, this is achieved by interfering the object wave that carries the phase fingerprint with a reference wave derived from the same source, which converts the phase fingerprint into an intensity distribution. The 3D object wave can be reconstructed by illuminating the amplitude or phase transmission matrix of the recorded intensity distribution with the reference wave, either physically as in analog holography or numerically as in digital holography. This mode of recording a hologram is not possible with incoherent illumination due to the lack of coherence. To realize holography with incoherent illumination, a new concept called self-interference was proposed, which means that light from any object point is coherent with respect to itself and can therefore interfere coherently [3,5]. Instead of interfering the object wave with a reference wave, the object wave from every point can be interfered coherently with the object wave from the same point for incoherent holography. The temporal coherence can be improved by trading off some light using a spectral filter. In this line of research, there have been many interesting architectures such as the rotational shearing interferometer [6,7,8], triangle interferometer [9,10] and conoscopic holography [11,12]. The modern-day incoherent holography approaches based on SLMs [6,7,8,9,10,11,12] are Fresnel incoherent correlation holography (FINCH) [13,14] and Fourier incoherent single channel holography [15,16], developed by Rosen and team. The field of incoherent holography rapidly evolved with advancements in optical configuration, recording and reconstruction methods through the contributions of many researchers such as Poon, Kim, Tahara, Matoba, Nomura, Nobukawa, Bouchal, Huang, Kner and Potcoava [17,18]. Even with all these advancements, the latest version of FINCH requires at least two camera shots and multiple optical components [19,20].
In FINCH, light from an object point is split into two waves that are differently modulated by two diffractive lenses and interfered to record the hologram. The object's image is reconstructed by Fresnel back propagation [13]. A variation of FINCH called coded aperture correlation holography (COACH) was developed in 2016, in which a quasi-random scattering mask was used in place of one of the two diffractive lenses [21]. The reconstruction in COACH also differs from that of FINCH, as Fresnel back propagation is not possible due to the lack of an image plane. Therefore, in COACH, a point spread hologram is recorded using a point object in the first step and then cross-correlated with the object hologram to reconstruct the image of the object. This type of reconstruction is common in coded aperture imaging (CAI) [22,23]. The origin of CAI can be traced back to the motivation of extending imaging technologies to non-visible regions of the electromagnetic spectrum, such as X-rays and Gamma rays, where manufacturing lenses is a challenging task. In such cases, the light from an object is modulated by a random array of pinholes created on a plate made of a material that is opaque to that radiation [22,23]. The point spread function (PSF) is recorded in the first step, the object intensity distribution is recorded next, and the image of the object is reconstructed by processing the two intensity patterns in the computer.
While CAI was used for recording 2D spatial and spectral information, it was not used for recording 3D spatial information as in holography [24]. With the development of COACH, a cross-over point between holography and CAI was identified, which led to the development of interferenceless COACH (I-COACH) [25]. In I-COACH, light from an object is modulated by a quasi-random phase mask and the intensity distribution is recorded. Instead of a single PSF, a PSF library is recorded corresponding to different axial locations. This new approach connected incoherent holography with CAI. Since I-COACH with a quasi-random phase mask resulted in a low SNR, the next direction of research was the engineering of phase masks to maximize the SNR. Different phase masks were designed and tested for I-COACH for the generation of dot patterns [26], Bessel patterns [27], Airy patterns [28] and self-rotating patterns [29]. These studies also revealed some unexpected capabilities of I-COACH, such as the ability to tune the axial resolution independently of the lateral resolution. Another direction of development in I-COACH was the improvement of the computational reconstruction method. The first version of I-COACH used a matched filter for processing the PSF and object intensity distributions. Later, a phase-only filter was applied to improve the SNR [30]. In subsequent studies of I-COACH, a novel computational reconstruction method called non-linear reconstruction (NLR) was developed, whose performance was significantly better than both the matched and phase-only filters [31]. An investigation with semisynthetic studies for a single plane was made to understand whether NLR is a universal reconstruction method for all optical fields, and the results were promising [32]. However, additional techniques such as raising the images to the power of p and median filters were necessary to obtain a reasonable image reconstruction for deterministic optical fields [33].
Recently, a novel computational reconstruction method called the Lucy-Richardson-Rosen algorithm (LR2A) was developed by combining the well-known Lucy-Richardson algorithm (LRA) with NLR [34,35,36]. The performance of LR2A was found to be significantly better than that of LRA and NLR in many studies [37,38,39]. In this tutorial, we present the possibilities of using LR2A as a generalized reconstruction method for deterministic as well as random optical fields for incoherent 3D imaging applications. The physics of optical fields changes when switching from a coherent source to an incoherent source [40,41]. Many optical fields, such as Bessel, Airy and vortex beams, have unique spatio-temporal characteristics that are useful for many applications. However, they cannot be used for imaging applications as the above fields do not have a point focus. In this tutorial, we discuss the procedure for transferring the exotic 3D characteristics of these special beams to imaging applications using LR2A. For the first time, optimized computational MATLAB codes for implementing LR2A are also provided. The manuscript consists of six sections. The methodology is described in the second section. The third section contains the simulation studies. The experimental studies are presented in the fourth section. In the fifth section, an interesting method to tune the axial resolution is discussed. The conclusion and future perspectives of the study are presented in the final section.
2. Materials and Methods
The aim of this tutorial is to show the step-by-step procedure for faithfully transferring the 3D axial characteristics of exotic beams to imaging systems using the indirect imaging concept and LR2A. Many recent studies have achieved this in the framework of I-COACH [42,43]. In those studies, NLR was used for reconstruction, which performs best with scattered distributions, and so in both studies the deterministic fields' axial characteristics were transferred to the imaging system with additional sparse random encoding. The optical configuration of the proposed system is shown in Figure 1. Light from an object is incident on a phase mask and recorded by an image sensor. The point spread function (PSF) is pre-recorded and used as the reconstructing function to obtain the object information. There are numerous methods available to process the object intensity distribution with the PSF, including the matched filter, phase-only filter, Wiener filter, NLR and LR2A. In this study, LR2A has been used, as it performs better than the other methods when deterministic optical fields are considered.
The proposed analysis is limited to spatially incoherent illumination. The methods can be extended to coherent light when the object is a single point, for depth measurements. A single point source with an amplitude of √Is located at (r̄s, −zs) is considered. The complex amplitude at the SLM is given as C1√Is Q(1/zs)L(r̄s/zs), where L is a linear phase function given as L(s̄/z) = exp[i2π(λz)⁻¹(sxx + syy)] and Q is a quadratic phase function given as Q(a) = exp[iπaλ⁻¹(x² + y²)], where r̄ = (x, y) is the transverse location vector in the SLM plane and C1 is a complex constant. On the SLM, a phase mask for the generation of an optical field is displayed. The phase function exp[iΦM(r̄)] modulates the incoming light, and the complex amplitude after the phase mask is given as C1√Is Q(1/zs)L(r̄s/zs)exp[iΦM(r̄)]. At the image sensor, the recorded intensity distribution is given as IPSF(r̄0; r̄s, zs) = |C1√Is Q(1/zs)L(r̄s/zs)exp[iΦM(r̄)] ⊗ Q(1/zh)|², where r̄0 = (u, v) is the location vector in the sensor plane, '⊗' is a 2D convolutional operator and Q(1/zh) propagates the modulated wave to the image sensor. For depth measurements, as mentioned above, both coherent as well as incoherent light sources can be used with a point object. In that case, it is sufficient to measure the correlation value at the origin obtained from IPSF(r̄0; zs) * IPSF(r̄0; zs + Δz), where Δz is the depth and '*' is a correlation operator. The magnification of the system is given as MT = zh/zs. The lateral and axial resolution limits of the system for a typical lens are 1.22λzs/D and 8λ(zs/D)², where D is the diameter of the SLM.
The above intensity distribution recorded for a point source is the IPSF of the system. A 2D object with M point sources can be represented mathematically as a sum of M Kronecker Delta functions, O(r̄s) = Σj=1..M aj δ(r̄s − r̄s,j), where the aj are constants. In this study, only spatially incoherent illumination is considered, and so the intensity distributions of light diffracted from every point add up in the sensor plane. Therefore, the object intensity distribution can be expressed as IO(r̄0) = Σj=1..M aj IPSF(r̄0; r̄s,j, zs).
In this study, the image reconstruction is carried out using LR2A, where the (n+1)th reconstructed image is given as Rn+1 = Rn × {[IO/(Rn ⊗ IPSF)] ⊛ IPSF}, where '⊛' refers to NLR, which is defined for two functions A and B as A ⊛ B = F⁻¹{|Ã|^α exp[i·arg(Ã)]·|B̃|^β exp[−i·arg(B̃)]}, where X̃ is the Fourier transform of X. The parameters α and β are tuned between -1 and 1. In the case of LRA, α and β are set to 1, which corresponds to a matched filter. The loop is iterated until an optimal reconstruction is obtained. There is a forward convolution Rn ⊗ IPSF, and the ratio between IO and this estimate is non-linearly correlated with IPSF. The better estimation from NLR enables a rapid convergence.
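A minimal MATLAB sketch of this loop, saved as lr2a.m, is given below; the function and variable names are illustrative, and the guard on the ratio and the non-negativity clamp are implementation choices rather than part of the published algorithm.

function R = lr2a(Io, Ipsf, alpha, beta, niter)
% Lucy-Richardson-Rosen algorithm: Io is the recorded object intensity,
% Ipsf the point spread intensity centred in the array, alpha and beta
% the NLR powers in [-1, 1], niter the number of iterations.
R = Io;                                       % initial guess, R1 = Io
OTF = fft2(ifftshift(Ipsf));                  % PSF origin moved to (1,1)
for n = 1:niter
    fwd   = real(ifft2(fft2(R) .* OTF));      % forward convolution Rn (x) Ipsf
    ratio = Io ./ max(fwd, 1e-9);             % element-wise ratio, guarded
    R     = max(R .* nlr(ratio, Ipsf, alpha, beta), 0);   % NLR step replaces
end                                           % the flipped-PSF correlation of LRA
end

function C = nlr(A, B, alpha, beta)
% Non-linear reconstruction of A against B; eps guards zero magnitudes
% when alpha or beta is negative.
FA = fft2(A); FB = fft2(ifftshift(B));
C  = abs(ifft2((abs(FA) + eps).^alpha .* exp(1i*angle(FA)) .* ...
               (abs(FB) + eps).^beta  .* exp(-1i*angle(FB))));
end

With alpha = beta = 1, the NLR step reduces to the matched-filter correlation and the loop behaves as the classical LRA; tuning alpha and beta within [-1, 1] gives the LR2A behavior described above.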
3. Simulation Results
In this tutorial, a wide range of phase masks is demonstrated as coded apertures, some of them commonly used and some exotic. Previous studies with LR2A showed that the performance of LR2A is best when the IPSF is a symmetric distribution along the x and y directions [39,44]. The shift variance of LR2A was demonstrated in [44]. Some post-processing solutions to the problems associated with the asymmetry of the IPSF have been discussed in [39]. However, all the above effects were due to the fact that LR2A was not generalized. In this study, LR2A has been generalized to all shapes of IPSFs and to both real and complex cases. The simulation space has been constructed in MATLAB with the following specifications: matrix size = 500 × 500 pixels, pixel size Δ = 10 μm, wavelength λ = 650 nm, object distance zs = 0.4 m, recording distance zh = 0.4 m and focal length of the diffractive lens f = 0.2 m.
For the diffractive lens, to prevent the direct imaging mode, the zh value was modified to 0.2 m. The phase masks of a diffractive lens, spiral lens, axicon and spiral axicon are given as exp[−iπr²/(λf)], exp[i(−πr²/(λf) + Lθ)], exp[−i2πr/Λ] and exp[i(−2πr/Λ + Lθ)], respectively, where θ is the azimuthal angle, L is the topological charge and Λ is the period of the axicon. The optical configuration in this study is simple, consisting of three steps, namely free-space propagation, interaction and again free-space propagation. The equivalent mathematical operations for these three steps are convolution, product and convolution, respectively. The first mathematical operation is quite direct, as a single point, i.e., a Kronecker Delta function, is considered. Any function convolved with a Delta function creates a replica of the function. Therefore, the complex amplitude obtained after the first operation is equivalent to the spherical wavefront Q(1/zs). In the next step, this complex amplitude is multiplied by the phase mask exp(iΦM) using an element-wise multiplication. In the final step, the resulting complex amplitude is propagated to the sensor plane by a convolution, expressed as three Fourier transforms, F⁻¹{F{Q(1/zs)exp(iΦM)}·F{Q(1/zh)}}. The intensity distribution IPSF is obtained by squaring the absolute value of the complex amplitude matrix at the sensor plane. The steps in MATLAB can be found in the supplementary files of [45].
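A minimal MATLAB sketch of these three steps is given below; the axicon period value is an illustrative assumption, and the transfer function H is the analytic Fourier transform of Q(1/zh), so the last step performs the same operation as the three-Fourier-transform convolution written above.

% Simulation grid and parameters from the text.
N = 500; dx = 10e-6; lambda = 650e-9;       % matrix size, pixel size, wavelength
zs = 0.4; zh = 0.2; f = 0.2;                % object/recording distances, focal length
L = 3; Lam = 1e-4;                          % topological charge, axicon period (assumed)
x = (-N/2:N/2-1)*dx; [X, Y] = meshgrid(x);
[theta, r] = cart2pol(X, Y);
% Phase masks from the expressions above.
maskDL = exp(-1i*pi*r.^2/(lambda*f));              % diffractive lens
maskSL = exp(1i*(-pi*r.^2/(lambda*f) + L*theta));  % spiral lens
maskAX = exp(-1i*2*pi*r/Lam);                      % diffractive axicon
maskSA = exp(1i*(-2*pi*r/Lam + L*theta));          % spiral axicon
% Step 1: the point source gives a spherical wavefront Q(1/zs) at the mask.
psi = exp(1i*pi*r.^2/(lambda*zs));
% Step 2: element-wise product with the displayed mask (spiral axicon here).
psi = psi .* maskSA;
% Step 3: convolution with Q(1/zh) implemented in the Fourier domain.
fx = (-N/2:N/2-1)/(N*dx); [FX, FY] = meshgrid(fx);
H = exp(-1i*pi*lambda*zh*(FX.^2 + FY.^2));
psi = ifft2(ifftshift(H) .* fft2(psi));
Ipsf = abs(psi).^2;                                % intensity PSF at the sensor

Repeating steps 1 to 3 inside a loop over zs = 0.2:0.004:0.6 accumulates the IPSF(zs) cube used below.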
The simulation was carried out by shifting zs from 0.2 to 0.6 m in steps of 4 mm, and the IPSF(zs) was accumulated into a cube matrix (x, y, z, I). The images of the phase masks for the diffractive lens, spiral lens (L = 1, 3 and 5), diffractive axicon and spiral axicon (L = 1, 3 and 5) are shown in row – 1 of Figure 2. The 3D axial intensity distributions for the same diffractive elements are shown in row – 2 of Figure 2. The diffractive lens does not have a focal point, as the imaging condition is not satisfied within this range of zs when zh is 0.2 m. For the spiral lens, a focused ring is obtained at the recording plane corresponding to zs = 0.4 m, and the ring blurs and expands similarly to the case of a diffractive lens. For a diffractive axicon, a Bessel distribution is obtained in the recording plane and it is invariant with changes in zs [46,47]. A similar behavior is seen in higher-order Bessel beams (HOBB) [48]. As seen in the axial intensity distributions, none of the beams can be directly used for imaging applications. It may be argued that Bessel beams can be used, but, as is known, the non-changing intensity distribution comes at a price, which is the loss of higher spatial frequencies [33]. Consequently, imaging using a Bessel beam results in low-resolution images.
The holographic point spread function is not IPSF but LR2A(IPSF, IPSF), the output of the LR2A operator with n iterations, which is equivalent to the autocorrelation function. It must be noted that what is done in LR2A is not a regular correlation as in a matched filter. The autocorrelation function was calculated over the above zs variation for the different cases of phase masks and accumulated in a cube matrix, as shown in the third row of Figure 2. As seen, the autocorrelation function is a cylinder with uniform radius for all values of zs. This is the holographic 3D PSF, whose profile suggests that it is possible to reconstruct the object information with a high resolution for all the object planes. To understand the axial imaging characteristics in the holographic domain, the IPSF(zs) was cross-correlated with the IPSF of a reference plane, which in this case was set at zs = 0.4 m. Once again, this cross-correlation is the nearest equivalent term in LR2A, calculated as LR2A(IPSF(zs), IPSF(zs = 0.4 m)). The cube data obtained for the different phase masks are shown in the fourth row from the top in Figure 2. As seen in the figure, the axial characteristics have been faithfully transferred to the imaging system, except that it is now possible to perform imaging at any one or multiple planes of interest simultaneously. With a diffractive lens, a focal point is seen at a particular plane and rapidly blurs in the other planes with respect to zs. The same behavior is observed for the spiral lenses, which contain the diffractive lens. The case of the axicon shows a long focal depth, and a similar behavior is seen for the spiral axicons. With the indirect imaging concept and LR2A, it is possible to faithfully transfer the exotic axial characteristics of special beams to an imaging system. It must be noted that the cross-correlation was carried out with fixed values of α, β and n for every case. In some of the planes of both the Airy pattern and the self-rotating pattern, some scattering is seen, indicating either a non-optimal reconstruction condition or a slightly lower performance of LR2A.
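As a usage illustration, this axial response can be assembled by reconstructing each plane's IPSF against the reference-plane IPSF with the lr2a function sketched earlier; simulate_psf is a hypothetical wrapper around the three propagation steps above, and the values of alpha, beta and n are illustrative.

zvals = 0.2:0.004:0.6;                     % zs scan from the text
Iref  = simulate_psf(0.4);                 % reference-plane PSF (hypothetical wrapper)
cube  = zeros(500, 500, numel(zvals));     % (x, y, z) cross-correlation cube
for k = 1:numel(zvals)
    cube(:,:,k) = lr2a(simulate_psf(zvals(k)), Iref, 0.2, 0.6, 8);
end
plot(zvals, squeeze(max(max(cube, [], 1), [], 2)));   % axial peak profile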
All the cases demonstrated above consist of IPSFs that are radially symmetric. For asymmetric cases such as Airy beams, self-rotating beams and speckle patterns, NLR performed better than LR2A, as LR2A was not optimized for asymmetric cases. In this study, LR2A has been generalized to both complex and asymmetric shapes. Three phase masks are investigated: a cubic phase mask with a phase function Φ = ax̃³ + bỹ³, where x̃ and ỹ are normalized transverse coordinates and a = b ~ 1000 [28,40]; a diffractive lens with azimuthally varying focal length (DL-AVF), whose focal length varies linearly with the azimuthal angle [29,49]; and a quasi-random lens with a scattering ratio of 0.04 obtained using the Gerchberg-Saxton algorithm [3]. The images of the phase masks for the above three cases are shown in column – 1 of Figure 3. In this case, to show the curved path of the Airy pattern, the axial range was extended to zs from 0.1 m to 0.7 m. The 3D intensity distributions in direct imaging mode for the three cases are shown in column – 2 of Figure 3. The 3D autocorrelation distributions are shown in column – 3 of Figure 3, and the 3D cross-correlation distributions obtained by LR2A are shown in column – 4 of Figure 3. Comparing columns – 2 and 4, it can be seen that the axial characteristics of the exotic beams have been faithfully transferred to the imaging system. The quasi-random lens, or any scattering mask, behaves exactly like a diffractive lens in the holographic domain. Comparing the results in Figure 3 with the previous results [39,44], a significant improvement is seen.
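A minimal Gerchberg-Saxton sketch for designing such a quasi-random lens is given below; interpreting the scattering ratio as the fraction of the spectral plane to which the intensity is confined is an assumption made for illustration.

% Gerchberg-Saxton design of a quasi-random lens phase.
N = 500; sigma = 0.04;                        % matrix size, scattering ratio
[X, Y] = meshgrid(-N/2:N/2-1); r = sqrt(X.^2 + Y.^2);
target = double(r <= N*sqrt(sigma/pi));       % confined far-field support (assumed definition)
phi = 2*pi*rand(N);                           % random initial phase
for it = 1:200
    A   = fftshift(fft2(exp(1i*phi)));        % mask plane -> spectral plane
    A   = target .* exp(1i*angle(A));         % impose the intensity constraint
    phi = angle(ifft2(ifftshift(A)));         % back to the mask plane, keep phase only
end
% phi is the quasi-random lens phase mask displayed on the SLM.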
The simulation study of a two-plane object consisting of two test objects with the letters "CIPHR" and "TARTU" at zs = 0.4 m and 0.5 m, respectively, is presented next. Figure 4 shows the images of the IPSFs for zs = 0.4 m and 0.5 m, the object intensity pattern IO obtained by convolving the object "CIPHR" with IPSF(zs = 0.4 m) and "TARTU" with IPSF(zs = 0.5 m) followed by a summation, and the reconstructions IR corresponding to the two planes using LR2A for a diffractive lens, spiral lens (L = 5), spiral axicon (L = 3), cubic phase mask, DL-AVF and quasi-random lens. As seen from the results in Figure 4, the cases of the diffractive lens and quasi-random lens appear similar with respect to the axial behavior, i.e., when a particular plane is reconstructed, only that plane's information is focused and enhanced while the other plane's information is blurred and weak. However, for elements such as the spiral lens and spiral axicon, the other plane's information contains "hot spots" that are prominent and sometimes even stronger than the information in the reconstructed plane. This is due to the similarity between the two IPSFs and is one of the pitfalls of using such deterministic optical fields. The problem with such hot spots is that, if the object is not known a priori, it is not possible to discriminate whether a hot spot corresponds to useful information related to the object or to blurring from a different plane. The axial characteristics of the Airy beam and self-rotating beam have been faithfully transferred to the imaging system.
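The two-plane study can be emulated with the functions sketched earlier; simulate_psf is the same hypothetical wrapper, and the rectangles below merely stand in for the "CIPHR" and "TARTU" test targets.

I1 = simulate_psf(0.4); I2 = simulate_psf(0.5);    % PSFs of the two planes
O1 = zeros(500); O1(230:270, 150:350) = 1;         % placeholder object, plane 1
O2 = zeros(500); O2(120:160, 150:350) = 1;         % placeholder object, plane 2
conv2c = @(O, P) real(ifft2(fft2(O) .* fft2(ifftshift(P))));   % 2D convolution
Io  = conv2c(O1, I1) + conv2c(O2, I2);             % incoherent two-plane record
IR1 = lr2a(Io, I1, 0.2, 0.6, 8);                   % refocuses the zs = 0.4 m plane
IR2 = lr2a(Io, I2, 0.2, 0.6, 8);                   % refocuses the zs = 0.5 m plane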
Another pitfall of 3D imaging in the indirect imaging mode is the depth-wavelength reciprocity [33,50]. Changes in the intensity distribution may appear identical for a change in depth, a change in wavelength, or both. This is true for both deterministic and random optical fields, except for beams carrying OAM, as a change in wavelength causes a fractional topological charge, which is unique. Another point to consider when using IPSFs derived from special functions is their sampling. Except for the cases of the quasi-random lens and diffractive lens, the other cases distort the object information in some way. In the case of the Airy beam, the IPSF consists of periodic dot patterns along the x and y directions, which, when sampling the object information, render curved features as square shapes. The IPSFs with rings shape the object information in a similar fashion, which again raises concern about the reliability of the measurement when the object information is not already known.
4. Experiments
The experimental setup is shown in Figure 5. The setup was built using the following optical components: a high-power LED (Thorlabs, 940 mW, λ = 660 nm and Δλ = 20 nm), iris, diffuser, polarizer, refractive lenses, object/pinhole, beam splitter, spatial light modulator (SLM) (Thorlabs Exulus HD2, 1920×1200 pixels, pixel size = 8 μm) and an image sensor (Zelux CS165MU/M 1.6 MP monochrome CMOS camera, 1440×1080 pixels with pixel size ~3.5 μm). The light from the LED was passed through an iris (I1), which controls the illumination, and then through a diffuser (Thorlabs Ø1" Ground Glass Diffuser - 220 GRIT), which removes the LED's grating lines. The light from the diffuser was collected using a refractive lens (L1) (f = 5 cm) and passed through a polarizer oriented along the active axis of the SLM. Two objects, the digits '3' and '1' from Group 5 of an R1DS1N Negative 1951 USAF Test Target (Ø1"), were used. A pinhole of 50 μm was used to record the IPSF. The object is critically illuminated using a refractive lens (L2) (f = 5 cm). The light from the object is collimated by another refractive lens (L3) (f = 5 cm), passed through the beam splitter and incident on the SLM. On the SLM, the phase masks of the deterministic and random optical fields were displayed one after another, and the IPSF and IO were recorded by the image sensor. The IO for the two objects were recorded at two different depths (zs = 5 cm and zs = 5.6 cm) and summed to demonstrate 3D imaging. The experimental results are presented in Figure 6. The phase masks of the deterministic and random optical fields are shown in column – 1, their corresponding IPSFs (zs = 5 cm) and IPSFs (zs = 5.6 cm) in columns – 2 and 3, respectively, IO in column – 4, and IR (zs = 5 cm) and IR (zs = 5.6 cm) in columns – 5 and 6, respectively. Once again, it can be seen that the 3D characteristics of the beams have been faithfully transferred to the indirect imaging system.
To demonstrate the application of LR2A to real-life applications, medical images were obtained from surgeons. A 65-year-old male patient with a known case of prostate cancer had undergone radiotherapy about a year earlier. He developed bleeding and mucous discharge from the anus. The colonoscopy findings show mucosal pallor, telangiectasias, edema, spontaneous hemorrhage and friable mucosa. It can be noted that the mucosa was congested with ulceration and stricture. The images were captured using an Olympus colonoscope system and CaptureITPro medical imaging software. The direct images obtained using the colonoscope are shown in Figure 7(a) and 7(e). The IPSF can be synthesized in the computer, or an isolated dot can be taken from Figure 7(a), padded with zeros and used as the reconstructing function. In this study, the IPSF was synthetic [38,39]. The different color channels were extracted from the image, processed separately using the synthetic IPSF and LR2A, and then combined as discussed in [38]. The reconstructed images are shown in Figure 7(b) and 7(f). The magnified regions of Figure 7(a) and 7(b) are shown in Figure 7(c) and 7(d), respectively, and the magnified regions of Figure 7(e) and 7(f) are shown in Figure 7(g) and 7(h), respectively. The red region indicates telangiectasias and the black region shows necrosis, which is the death of body tissue. The yellow region shows mucosal sloughing, and the magnified region shows a necrotic area with dead mucosa.
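The channel-wise processing can be sketched as follows; the file name is hypothetical and the Gaussian dot used as the synthetic IPSF is an assumption, with [38,39] describing how the synthetic PSF was actually constructed.

rgb = im2double(imread('colonoscopy_frame.png'));   % hypothetical input image
[Ny, Nx, ~] = size(rgb);
[X, Y] = meshgrid(1:Nx, 1:Ny);
Ipsf = exp(-((X - Nx/2).^2 + (Y - Ny/2).^2)/(2*3^2));   % synthetic dot PSF (assumed width)
IR = zeros(size(rgb));
for c = 1:3                                         % process each color channel separately
    IR(:,:,c) = lr2a(rgb(:,:,c), Ipsf, 0.2, 0.6, 8);
end
imshow(IR / max(IR(:)));                            % recombined color reconstruction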
A diagnostic imaging equipment, cone beam computed tomography (CBCT), was used for imaging a patient with a focal spot of 0.5 mm, a field of view of 8 × 8 cm, a voxel size of 0.2 mm/0.3 mm and an exposure time of 15.5 seconds. In the images obtained for a patient with multiple implants placed in the jaw, metallic artifacts compromised the clarity and blurred the finer details, as shown in Row – 1 of Figure 8. Once again, a synthetic IPSF and LR2A were used to improve the resolution and contrast of the images. The reconstructed images are shown in Row – 2 of Figure 8 and have better resolution and contrast than the direct images.
Author Contributions
Conceptualization, V. A.; methodology, V. A., A. N. K. R., R. A. G., M. S. A. S., S. D. M. T., A. P. I. X., F. G. A., S. G., A. S. J. F. R.; software, V. A., M. S. A. S., S. D. M. T.; validation, V. A., A. N. K. R., R. A. G., M. S. A. S., S. D. M. T., A. P. I. X., F. G. A., S. G., A. S. J. F. R.; formal analysis, V.A., M. S. A. S., S. D. M. T.; investigation, all the authors; resources, V.A., M. S. A. S., S. D. M. T.; writing—original draft preparation, A. P. I. X., F. G. A., V. A.; writing—review and editing, all the authors; supervision, V. A.; project administration, A. S. J. F. R.; funding acquisition, V.A., M. S. A. S., S. D. M. T. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Concept figure: Recording images with different phase masks – diffractive lens, spiral lens, spiral axicon and axicon – and reconstruction using LR2A by processing the object intensity and PSF. OTF—Optical transfer function; n—number of iterations; ⊗—2D convolutional operator; ( )*—complex conjugate following a Fourier transform; F⁻¹—inverse Fourier transform; Rn is the nth solution and n is an integer; when n = 1, Rn = IO; ML—Maximum Likelihood; α and β are varied from -1 to 1.
Figure 2.
Row – 1: Phase masks of a diffractive lens, spiral lens (L = 1, 3 and 5), diffractive axicon and spiral axicon (L = 1, 3 and 5). Row – 2: Cube data of axial intensity distributions obtained at the recording plane for a diffractive lens, spiral lens (L = 1, 3 and 5), diffractive axicon and spiral axicon (L = 1, 3 and 5). Row – 3: Cube data of the intensity of the autocorrelation function for the different cases of phase masks. Row – 4: Cube data of the cross-correlation function for the different cases of phase masks, when zs is varied from 0.2 to 0.6 m.
Figure 3.
Column – 1: Phase images of cubic phase mask, DL-AVF and quasi-random lens. Column – 2: Cube data of axial intensity distributions obtained at the recording plane for cubic phase mask, DL-AVF and quasi-random lens, Column – 3: Cube data of the intensity of the autocorrelation function for the different cases of phase masks, Column – 4: Cube data of the cross-correlation function for the different cases of phase masks, when zs is varied from 0.1 to 0.7 m.
Figure 4.
Column – 1: Images of IPSF(zs = 0.4 m). Column – 2: Images of IPSF(zs = 0.5 m). Column – 3: Object intensity distributions for the two-plane object consisting of "CIPHR" and "TARTU". Column – 4: Reconstruction results IR using column – 1. Column – 5: Reconstruction results IR using column – 2, for diffractive lens (row – 1), spiral lens (L = 5) (row – 2), spiral axicon (L = 3) (row – 3), cubic phase mask (row – 4), DL-AVF (row – 5) and quasi-random lens (row – 6).
Figure 5.
Experimental setup: (1) LED, (2) LED power controller, (3) iris(I1), (4) diffuser, (5) refractive lens (L1), (6) polarizer, (7) refractive lens (L2), (8) object/pinhole, (9) refractive lens (L3), (10) iris (I2), (11) beam splitter, (12) SLM, (13) image sensor.
Figure 6.
Experimental results. Column – 1: Images of phase masks. Column – 2: Images of IPSF (zs = 5 cm). Column – 3: Images of IPSF (zs = 5.6 cm). Column – 4: Images of the summed object intensity distributions IO of the two objects at different depths, IO (zs = 5 cm) and IO (zs = 5.6 cm). Column – 5: Reconstruction results IR (zs = 5 cm) using the corresponding IPSF (zs = 5 cm) in column – 2. Column – 6: Reconstruction results IR (zs = 5.6 cm) using the corresponding IPSF (zs = 5.6 cm) in column – 3.
Figure 7.
Experimental colonoscopy results. Red region – Telangiectasias; black region – necrosis which is the death of body tissue; yellow region – mucosal sloughing; magnified region – Necrotic area (dead mucosa).
Figure 8.
Experimental cone beam computed tomography results. The saturated region indicates metal artifacts. The size of each square is 8 cm × 8 cm.
Figure 9.
Simulation results of post-hybridization. 4D distribution of a hybrid beam obtained by combining (a) an Airy beam and a self-rotating beam and (b) its reconstruction using LR2A; 4D distribution of a hybrid beam obtained by combining (c) an Airy beam and a scattered beam and (d) its reconstruction using LR2A.