
Source Design Optimization for Depth Image Reconstruction in X-ray Imaging

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 15 April 2024
Posted: 15 April 2024

Abstract
X-ray tomography is an effective non-destructive testing method for industrial quality control. Limited-angle tomography can be used to reduce the amount of data that needs to be acquired and thus speed up the process. In some industrial applications, however, objects are flat and layered, and laminography is preferred. It can deliver 2D images of the structure of a layered object at a particular depth from very limited data. An image at a particular depth is obtained by summing those parts of the data that contribute to that slice. This produces a sharp image of that slice while superimposing a blurred version of structures present at other depths. In this paper, we investigate an optimal experimental design (OED) problem for laminography that aims to determine the optimal source positions. Not only can this be used to mitigate imaging artifacts, it can also speed up the acquisition process in cases where moving the source and detector is time consuming (e.g., in robotic arm imaging systems). We investigate the imaging artifacts in detail through a modified Fourier Slice Theorem. We address the experimental design problem within the Bayesian risk framework using empirical Bayes risk minimization. Finally, we present numerical experiments on simulated data.
Keywords: 
Subject: Computer Science and Mathematics  -   Computational Mathematics

1. Introduction

Non-destructive testing plays a key role in industrial quality control [1]. X-ray imaging in particular is a highly effective non-destructive testing technique that can deliver detailed 3D images of the internal structure of products. It is therefore widely used in various industries for quality control [2,3,4]. Conventional tomographic acquisition requires a full rotation around the object, limiting its applicability to off-line inspection of small to medium-sized objects. To overcome this limitation, several approaches to tomographic reconstruction from limited-angle data have been developed. These methods typically assume additional prior knowledge of the object to fill in information that is missing from the measurements. A prime example is discrete tomography, where objects consisting of a few distinct materials can be reconstructed from severely under-sampled data. In some applications, objects are wide and flat, and tomographic data can only be collected over a very limited angular range. This pushes conventional limited-angle tomography beyond its limits, and we have to resort to another class of methods [5]. Laminography offers a way to reconstruct an image of a specific slice of a flat, layered object. As shown in Figure 1, it requires a flexible source and detector that move and collect data from several directions, with the X-ray tube remaining on one side of the flat object and the detector on the opposite side throughout the scan [6].
In this paper we consider a particular setup for laminographic imaging which requires the acquisition of X-ray projection images from N source positions on a fixed grid, while the detector is fixed on the opposite side of the object. Our aim is to optimally design acquisition protocols with K ≤ N sources for a given set of objects. By optimally designing the setup we can reduce imaging artifacts as well as potentially speed up the data-acquisition process in cases where moving the source is time consuming (e.g., robotic arm imaging). We pose the resulting optimal experimental design (OED) problem [7,8] as a bi-level optimization problem that employs derivative-based methods to facilitate implementation in high-dimensional settings [9].
Our main contributions are the following.
  • We analyze the corresponding 3D and 2D (slice-based) image reconstruction process, present an analog of the Fourier Slice Theorem for this setup, and characterize the imaging artifacts present in slice images.
  • We pose an experimental design problem for selecting the K most informative source positions for laminography of a class of objects, and formulate a bi-level optimization in the Bayes risk minimization framework to solve it.
The remainder of the paper is organized as follows. In Section 2, we present the forward and backward operators in the continuous domain for the depth imaging procedure. We study depth reconstruction in the discrete domain in Section 3. In Section 4, we propose a bi-level optimization problem to find a source design with K sources chosen from a fixed array of N ≥ K sources. In Section 5, we propose an algorithm to solve this bi-level optimization problem. In Section 6, we show the results obtained by our method and compare them to optimal source designs obtained by exhaustive search. In Section 7, we discuss the limitations of the proposed method and future work, and we conclude in Section 8.

1.1. Notation

Throughout the paper, scalars and vector elements are denoted by lower-case letters, vectors by lower-case boldface letters, and matrices by upper-case boldface letters. The notation $(\cdot)^\top$ stands for the transpose of a vector or matrix. Moreover, $\#$ denotes the cardinality of a vector and $\|\mathbf{x}\| = \sqrt{\sum_{i=1}^{n} x_i^2}$ denotes the $\ell_2$-norm of a vector.

2. Forward and Backward Operators in the Continuous Domain

In this section, we first review the linear X-ray projection model and then introduce the forward and backward operators for the setup shown in Figure 2.

2.1. Linear Projection Model

During the X-ray imaging process, an object is exposed to a beam of X-ray photons emitted by an X-ray tube. Some of these photons are absorbed by the object, while others reach the detector. The captured measurements can be modeled using the Beer-Lambert law, which describes the exponential relationship between the intensity of the X-ray beam and the distance it travels through the object:
$$I_i = I_0 \exp\left(-\int x(\mathbf{t}_i + r\,\boldsymbol{\eta}_i)\,\mathrm{d}r\right).$$
Here, $I_i$ denotes the number of photons that reach the detector along the ray with starting point $\mathbf{t}_i$ and direction $\boldsymbol{\eta}_i$, where $i = 1, \ldots, m$. $I_0$ is the initial intensity of the beam (i.e., the number of photons emitted by the source), and $x$ is the function representing the attenuation coefficient of the object. This relation is usually written as:
$$y_i = \int x(\mathbf{t}_i + r\,\boldsymbol{\eta}_i)\,\mathrm{d}r,$$
with $y_i = -\log(I_i / I_0)$ representing the measured data for a single ray. In the following, we present the forward and backward operators for the X-ray setup shown in Figure 2. To study the depth imaging procedure within this specific setup, we need to understand the forward and backward operators for both 2D and 3D objects.

2.2. Forward and Backward Operators for the Slice Imaging Setup

The proposed imaging setup is schematically depicted in Figure 2. X-rays are emitted by a source at lateral position $\mathbf{s} \in \mathbb{R}^2$ and height $h$. Without loss of generality, we parametrize the detector position by $\mathbf{r} \in \mathbb{R}^2$. The forward operator, $A$, is now given by
$$y(\mathbf{r}, \mathbf{s}) = (A x)(\mathbf{r}, \mathbf{s}) = \int_0^1 x\big(t\,\mathbf{r} + (1-t)\,\mathbf{s},\ (1-t)h\big)\,\mathrm{d}t,$$
as shown in Figure 2. Here, $x(\mathbf{u}, z)$ denotes the attenuation coefficient of the object in terms of lateral ($\mathbf{u} \in \mathbb{R}^2$) and vertical ($z \in \mathbb{R}$) coordinates. In fact, it integrates along the ray from the source at $(\mathbf{s}, h)$ to the detector located at $(\mathbf{r}, 0)$.

2.2.1. A Fourier Slice Theorem

Defining the measurements in receiver-offset coordinates as $g(\mathbf{r}, \mathbf{o}) = y(\mathbf{r}, \mathbf{r} - \mathbf{o})$, we find the following relation between the 2D Fourier transform of $g$ and the 3D Fourier transform of $x$. Let $\boldsymbol{\xi}$ and $\zeta$ represent the spatial frequency coordinates; then (see the derivation in Appendix A.1)
$$\mathcal{F}_{2D}(g)(\boldsymbol{\xi}, \mathbf{o}) = \mathcal{F}_{3D}(x)(\boldsymbol{\xi}, \zeta), \qquad \zeta = h^{-1}\,\boldsymbol{\xi}^\top\mathbf{o},$$
where $\mathcal{F}_{2D}$ and $\mathcal{F}_{3D}$ denote the 2D and 3D Fourier transforms, respectively, and $\mathbf{o} = \mathbf{r} - \mathbf{s}$ is the vector difference between the detector position $\mathbf{r}$ and the source position $\mathbf{s}$. Figure 3 depicts sampled elements of $\mathcal{F}_{2D}(g)$ (black circles) with respect to the Fourier samples of the object $\mathcal{F}_{3D}(x)$ (green squares). Clearly, the sampling procedure misses a portion of the object spectrum due to the limited range of $\mathbf{o}$: the forward operator only samples along $\zeta = h^{-1}\boldsymbol{\xi}^\top\mathbf{o}$, and since the range of $\mathbf{o}$ is finite, this leads to a missing wedge in the frequency domain. This missing wedge makes recovering $x$ from $g$ (or, equivalently, $y$) an ill-posed inverse problem.

2.2.2. The 2D Slice at Depth z

To study the depth-imaging procedure, we introduce the corresponding forward operator $A_z$, which maps a 2D image at depth $z$, denoted by $x_z(\mathbf{u})$, to the corresponding projection data $y(\mathbf{r}, \mathbf{s})$ (see Appendix A.2),
$$y(\mathbf{r}, \mathbf{s}) = (A_z x_z)(\mathbf{r}, \mathbf{s}) = x_z\big((1 - h^{-1}z)\,\mathbf{r} + h^{-1}z\,\mathbf{s}\big).$$
The corresponding adjoint operator $A_z^*$ is given by (see Appendix A.2)
$$A_z^* y(\mathbf{u}) = \frac{1}{1 - h^{-1}z}\int y\!\left(\frac{\mathbf{u} - h^{-1}z\,\mathbf{s}}{1 - h^{-1}z},\ \mathbf{s}\right)\mathrm{d}\mathbf{s}.$$
We then find that $A_z^* A_z$ is given by
$$A_z^* A_z x_z(\mathbf{u}) = \left(\int \frac{\mathrm{d}\mathbf{s}}{1 - h^{-1}z}\right) x_z(\mathbf{u}),$$
so that for a limited range of $\mathbf{s}$, $A_z^* A_z \propto I$ and hence is trivially inverted. Depth imaging of an object consisting of a single thin layer can thus be achieved by a simple back-projection. When multiple layers are present we expect artifacts, which we study next.

2.3. Depth Image Reconstruction

Here, we are interested in studying the artifacts that arise when imaging a slice from measurements of a 3D layered object. In the remainder of this section, we let $h = 1$ for ease of notation. As argued above, we can form a depth image by back-projection, so it suffices to study the operator $A_z^* A$ (see Appendix A.3). In particular, we expect to image layered objects, which we can model as
$$x(\mathbf{u}, z) = \sum_{i=1}^{M} x_i(\mathbf{u})\,\delta(z - z_i).$$
Using a finite number of sources $\{\mathbf{s}_j\}_{j=1}^{N}$, the depth image of the $k$-th layer is then given by
$$\tilde{x}_k(\mathbf{u}) = \frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{N} x_i\!\left(\frac{1 - z_i}{1 - z_k}\,\mathbf{u} + \frac{z_i - z_k}{1 - z_k}\,\mathbf{s}_j\right)$$
$$= x_k(\mathbf{u}) + \frac{1}{N}\sum_{i \neq k}\sum_{j=1}^{N} x_i\!\left(\frac{1 - z_i}{1 - z_k}\,\mathbf{u} + \frac{z_i - z_k}{1 - z_k}\,\mathbf{s}_j\right).$$
Hence, the error in the depth image of the $k$-th layer is a superposition of spatially averaged versions of the other layers. An illustration of the resulting spatial sampling patterns is shown in Figure 4.
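To make the artifact mechanism above concrete, the following sketch evaluates the shift-and-scale formula for $\tilde{x}_k$ numerically (with $h = 1$). It is an illustration only; the layer images, depths, and source grid are hypothetical placeholders, not the phantom used in Section 6.

```python
# Minimal sketch of the depth-imaging formula above (h = 1). All inputs
# (layer images, depths, source grid) are hypothetical placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def depth_image(layers, depths, k, sources, grid):
    """Evaluate x~_k(u) = (1/N) sum_i sum_j x_i((1-z_i)/(1-z_k) u + (z_i-z_k)/(1-z_k) s_j)."""
    u1, u2 = np.meshgrid(grid, grid, indexing="ij")
    out = np.zeros_like(u1)
    z_k = depths[k]
    for x_i, z_i in zip(layers, depths):
        interp = RegularGridInterpolator((grid, grid), x_i,
                                         bounds_error=False, fill_value=0.0)
        a = (1.0 - z_i) / (1.0 - z_k)   # lateral scaling of layer i
        b = (z_i - z_k) / (1.0 - z_k)   # shift proportional to the source position
        for s in sources:
            pts = np.stack([a * u1 + b * s[0], a * u2 + b * s[1]], axis=-1)
            out += interp(pts)
    return out / len(sources)

# Example: three random layers on [-1, 1]^2, imaging the middle one (k = 1)
grid = np.linspace(-1.0, 1.0, 128)
layers = [np.random.rand(128, 128) for _ in range(3)]
depths = [0.2, 0.4, 0.6]                # hypothetical layer depths, 0 < z < h = 1
sources = [(sx, sy) for sx in np.linspace(-1, 1, 4) for sy in np.linspace(-1, 1, 4)]
img = depth_image(layers, depths, k=1, sources=sources, grid=grid)
```

The layer with index $k$ enters with unit scaling and no source-dependent shift, while the other layers are averaged over shifted copies, which is exactly the blur described above.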
The source design problem is now to find a set of source positions $\mathbf{s}_j$ such that the unwanted contributions to the depth image are minimized. As we do not expect to find a closed-form solution to the source design problem in general, we discretize the problem and resort to numerical optimization.

3. Discrete Problem

3.1. Forward and Back Projection

We discretize the forward model in (3) and express it as
$$\mathbf{y} = \mathbf{A}\mathbf{x},$$
where $\mathbf{y} \in \mathbb{R}^m$, $\mathbf{x} \in \mathbb{R}^n$ and $\mathbf{A} \in \mathbb{R}^{m \times n}$. Note that the number of rows of $\mathbf{A}$ is the product of the number of detector pixels and the number of sources. Every block of rows corresponds to a particular X-ray source.
Let $\mathbf{A}_z$ and $\mathbf{x}_z$ denote the forward matrix and the slice object at depth $z$, respectively. In general, $\mathbf{A}_z^\top\mathbf{A}_z$ will not be a multiple of the identity as in the continuous setting (7), so we consider solving $\mathbf{A}_z^\top\mathbf{A}_z\mathbf{x}_z = \mathbf{A}_z^\top\mathbf{y}$. We can view this as an approximation of 3D reconstruction through the Schur complement [10]. To see this, let $\mathbf{A}_c$ and $\mathbf{x}_c$ represent the forward operator and the object for all pixels except the ones contained in $\mathbf{x}_z$. The linear equations for the forward projection (10) can now be expressed as follows:
$$\begin{pmatrix} \mathbf{A}_z & \mathbf{A}_c \end{pmatrix}\begin{pmatrix} \mathbf{x}_z \\ \mathbf{x}_c \end{pmatrix} = \mathbf{y}.$$
Multiplying both sides by $\mathbf{A}^\top$ yields
$$\begin{pmatrix} \mathbf{A}_z^\top\mathbf{A}_z & \mathbf{A}_z^\top\mathbf{A}_c \\ \mathbf{A}_c^\top\mathbf{A}_z & \mathbf{A}_c^\top\mathbf{A}_c \end{pmatrix}\begin{pmatrix} \mathbf{x}_z \\ \mathbf{x}_c \end{pmatrix} = \begin{pmatrix} \mathbf{A}_z^\top\mathbf{y} \\ \mathbf{A}_c^\top\mathbf{y} \end{pmatrix}.$$
Using the Schur complement, one can rewrite this in terms of $\mathbf{x}_z$ as
$$\tilde{\mathbf{A}}\,\mathbf{x}_z = \tilde{\mathbf{y}},$$
where
$$\tilde{\mathbf{A}} = \mathbf{A}_z^\top\mathbf{A}_z - \mathbf{A}_z^\top\mathbf{A}_c\big(\mathbf{A}_c^\top\mathbf{A}_c\big)^{-1}\mathbf{A}_c^\top\mathbf{A}_z, \qquad \tilde{\mathbf{y}} = \mathbf{A}_z^\top\mathbf{y} - \mathbf{A}_z^\top\mathbf{A}_c\big(\mathbf{A}_c^\top\mathbf{A}_c\big)^{-1}\mathbf{A}_c^\top\mathbf{y}.$$
However, computing the corresponding Schur complements is computationally infeasible, and we can instead interpret depth imaging as neglecting the additional terms involving $\mathbf{A}_c$.
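The block elimination above can be checked on a small dense example. The following numpy snippet is an illustration only (not the solver used in the paper): it verifies that the Schur-complement system recovers the slice part of the solution of the full normal equations when the data are consistent.

```python
# Small dense illustration (not the paper's solver): eliminating x_c from the
# block normal equations via the Schur complement recovers the slice unknowns.
import numpy as np

rng = np.random.default_rng(0)
m, nz, nc = 40, 6, 9                          # toy sizes
A_z, A_c = rng.standard_normal((m, nz)), rng.standard_normal((m, nc))
x_true = rng.standard_normal(nz + nc)
y = np.hstack([A_z, A_c]) @ x_true            # consistent (noise-free) data

AcTAc_inv = np.linalg.inv(A_c.T @ A_c)
A_schur = A_z.T @ A_z - A_z.T @ A_c @ AcTAc_inv @ A_c.T @ A_z
y_schur = A_z.T @ y - A_z.T @ A_c @ AcTAc_inv @ A_c.T @ y

x_z = np.linalg.solve(A_schur, y_schur)
print(np.allclose(x_z, x_true[:nz]))          # True: reduced system matches the slice part
```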

3.2. Algebraic Reconstruction

Although the continuous operator $A_z$ is unitary (up to a scaling factor), the corresponding discrete operator $\mathbf{A}_z$ is generally not. Moreover, it is desirable to stabilize the reconstruction with additional regularization. We therefore pose depth imaging as a regularized least-squares problem:
$$\hat{\mathbf{x}}_z = \operatorname*{argmin}_{\mathbf{x}_z}\ \tfrac{1}{2}\,\|\mathbf{A}_z\mathbf{x}_z - \mathbf{y}\|^2 + \tfrac{\lambda}{2}\,\|\mathbf{x}_z\|^2,$$
where $\lambda$ controls the trade-off between the two terms. While the problem is not extremely ill-posed, we use $\lambda$ as a safety factor to ensure uniqueness of the reconstructed depth image. The above optimization problem can be solved using iterative algorithms such as LSQR [11]. Other regularization techniques, such as total variation or sparsity-promoting regularization, introduce different penalty functions that promote desired properties in the reconstructed image.
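As a concrete example, the damped least-squares problem above maps directly onto LSQR's damping parameter. A minimal sketch, assuming $\mathbf{A}_z$ is available as a SciPy sparse matrix or LinearOperator (e.g., wrapping an ASTRA projector), is:

```python
# Minimal sketch: the regularized least-squares depth reconstruction via LSQR.
# A_z is assumed to be a SciPy sparse matrix or LinearOperator; lam is lambda.
import numpy as np
from scipy.sparse.linalg import lsqr

def reconstruct_depth(A_z, y, lam=0.1, iters=200):
    # lsqr minimizes ||A x - y||^2 + damp^2 ||x||^2, so damp = sqrt(lambda)
    sol = lsqr(A_z, y, damp=np.sqrt(lam), iter_lim=iters)
    return sol[0]                 # flattened depth image x_hat_z
```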

4. Efficient Source Design for K Sources

Let $\boldsymbol{\beta} \in \mathbb{R}^N$ be the design parameter, where the components of $\boldsymbol{\beta}$ determine the influence of each source in the reconstruction process. The problem of finding the optimal design parameter $\boldsymbol{\beta}$ for a fixed number of sources in the reconstruction of a specific depth plane can be formulated as the following bi-level optimization problem:
$$\min_{\boldsymbol{\beta}}\ \min_{\alpha(\boldsymbol{\beta})}\ \tfrac{1}{2}\,\big\|\alpha(\boldsymbol{\beta})\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\big\|^2 \quad \mathrm{s.t.}\quad \#\boldsymbol{\beta} = K,\ \ \boldsymbol{\beta} \in \{0, 1\}^N,$$
where $\mathbf{x}_{\mathrm{gt}} \in \mathbb{R}^n$ represents the ground truth and the hyper-parameter $\alpha(\boldsymbol{\beta}) \in \mathbb{R}$ serves as a scaling factor, ensuring that $\alpha(\boldsymbol{\beta})\,\hat{\mathbf{x}}_z(\boldsymbol{\beta})$ is on the same scale as $\mathbf{x}_{\mathrm{gt}}$. Here, $\hat{\mathbf{x}}_z(\boldsymbol{\beta})$ is obtained by solving
$$\hat{\mathbf{x}}_z(\boldsymbol{\beta}) = \operatorname*{argmin}_{\mathbf{x}_z}\ \tfrac{1}{2}\,\big\|\mathbf{T}_{\boldsymbol{\beta}}\big(\mathbf{A}_z\mathbf{x}_z - \mathbf{y}\big)\big\|^2 + \tfrac{\lambda}{2}\,\|\mathbf{x}_z\|^2,$$
where $\mathbf{y}$ denotes the data corresponding to $\mathbf{x}_{\mathrm{gt}}$ and $\mathbf{T}_{\boldsymbol{\beta}}$ is a linear operator which applies the weights $\boldsymbol{\beta}$ to each source's corresponding measurements. The optimization problem (15) minimizes the Euclidean distance between the reconstructed image (scaled by $\alpha(\boldsymbol{\beta})$) and the ground truth, subject to constraints on $\boldsymbol{\beta}$ that enforce its binary nature with $K$ nonzero elements.
The binary constraint on $\boldsymbol{\beta}$ in (15) can be relaxed using bound constraints [12], and the bi-level optimization is then formulated as
$$\min_{\boldsymbol{\beta}}\ \tfrac{1}{2}\,\big\|\hat{\alpha}(\boldsymbol{\beta})\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\big\|^2 \quad \mathrm{s.t.}\quad \|\boldsymbol{\beta}\|_1 = K,\ \ \mathbf{0} \le \boldsymbol{\beta} \le \mathbf{1},$$
where the optimal value of the scalar parameter $\alpha(\boldsymbol{\beta})$ in (15) has been substituted by its closed-form expression $\hat{\alpha}(\boldsymbol{\beta}) = \hat{\mathbf{x}}_z(\boldsymbol{\beta})^\top \mathbf{x}_{\mathrm{gt}} / \|\hat{\mathbf{x}}_z(\boldsymbol{\beta})\|^2$. Mathematically speaking, this problem seeks a $\boldsymbol{\beta}$ that minimizes the Euclidean distance between $\hat{\mathbf{x}}_z(\boldsymbol{\beta})$ and $\mathbf{x}_{\mathrm{gt}}$ and lies on the intersection of the hyperplane $\|\boldsymbol{\beta}\|_1 = K$ and the box $\mathbf{0} \le \boldsymbol{\beta} \le \mathbf{1}$. Note that we cannot guarantee that this relaxation produces binary solutions. In such cases, we convert the solution to binary form by setting the $K$ largest elements to one and the remaining elements to zero.
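The closed-form scaling $\hat{\alpha}(\boldsymbol{\beta})$ and the binarization step can be written compactly. A small sketch with illustrative names is:

```python
# Sketch of the closed-form scaling and the top-K binarization step.
import numpy as np

def alpha_hat(x_hat, x_gt):
    """Optimal scalar alpha for min_alpha ||alpha * x_hat - x_gt||^2."""
    return float(x_hat @ x_gt) / float(x_hat @ x_hat)

def binarize_top_k(beta, K):
    """Set the K largest entries of a relaxed design to one, the rest to zero."""
    b = np.zeros_like(beta)
    b[np.argsort(beta)[-K:]] = 1.0
    return b
```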
In practice the ground truth $\mathbf{x}_{\mathrm{gt}}$ is unknown; however, a set of calibration data $\{\mathbf{x}_{\mathrm{gt}}^i\}_{i=1}^{M}$ of representative images may be available. This can then be used to compute an efficient source design that minimizes the empirical Bayes risk. More precisely, for a given set of calibration data $\{\mathbf{x}_{\mathrm{gt}}^i\}_{i=1}^{M}$ with corresponding measurements $\{\mathbf{y}^i\}_{i=1}^{M}$, we can generalize (17) to
$$\min_{\boldsymbol{\beta}}\ \frac{1}{2M}\sum_{i=1}^{M}\big\|\hat{\alpha}_i(\boldsymbol{\beta})\,\hat{\mathbf{x}}_z^i(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}^i\big\|^2 \quad \mathrm{s.t.}\quad \|\boldsymbol{\beta}\|_1 = K,\ \ \mathbf{0} \le \boldsymbol{\beta} \le \mathbf{1},$$
where $\hat{\mathbf{x}}_z^i(\boldsymbol{\beta})$ is the solution of (16) for the $i$-th calibration measurement $\mathbf{y}^i$.

5. Implementation

The Efficient K-Source Design (EKSD) method requires solving the non-linear optimization problem (17) numerically, starting from an initial guess and refining the solution iteratively. This section discusses the solution of (17) using a projected gradient method.

5.1. Projected Gradient Method

In general, the projected gradient (PG) method solves
$$\min_{\boldsymbol{\beta}}\ f(\boldsymbol{\beta}) \quad \mathrm{s.t.}\quad \boldsymbol{\beta} \in \mathcal{B},$$
where $f : \mathbb{R}^N \to \mathbb{R}$ is a smooth function and $\mathcal{B}$ is a nonempty closed convex set. The projection onto $\mathcal{B}$ is the mapping $\mathcal{P} : \mathbb{R}^N \to \mathbb{R}^N$ defined by:
$$\mathcal{P}(\mathbf{u}) = \operatorname*{argmin}_{\mathbf{x}}\ \|\mathbf{x} - \mathbf{u}\|^2 \quad \mathrm{s.t.}\quad \mathbf{x} \in \mathcal{B}.$$
The PG algorithm is defined by
$$\boldsymbol{\beta}^{k+1} = \mathcal{P}\big(\boldsymbol{\beta}^{k} - \gamma\,\nabla f(\boldsymbol{\beta}^{k})\big),$$
where $\gamma \in (0, 2/L)$ is the step size, $L$ is the Lipschitz constant of $\nabla f$, and $\nabla f(\boldsymbol{\beta}^k)$ is the gradient of $f$ at $\boldsymbol{\beta}^k$. The iterations are stopped when the Euclidean norm of the difference between consecutive design parameters, $\|\boldsymbol{\beta}^{k+1} - \boldsymbol{\beta}^{k}\|$, falls below a specified threshold $\epsilon$.
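A generic sketch of this iteration is given below; `grad_f` and `project_B` are placeholders for the gradient and projection discussed in the following two subsections, and the step size and tolerance mirror $\gamma$ and $\epsilon$ above.

```python
# Generic projected-gradient sketch; grad_f and project_B are supplied callables.
import numpy as np

def projected_gradient(beta0, grad_f, project_B, gamma, eps, max_iter=1000):
    beta = project_B(beta0)
    for _ in range(max_iter):
        beta_next = project_B(beta - gamma * grad_f(beta))
        if np.linalg.norm(beta_next - beta) <= eps:   # stopping criterion from the text
            return beta_next
        beta = beta_next
    return beta
```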

5.2. Gradient of the Objective Function

To implement the non-linear optimization of (17) using the PG algorithm, the gradient of $F(\hat{\mathbf{x}}_z(\boldsymbol{\beta}), \hat{\alpha}(\boldsymbol{\beta})) := \tfrac{1}{2}\|\hat{\alpha}(\boldsymbol{\beta})\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\|^2$ is required. It reads
$$\nabla_{\boldsymbol{\beta}} F = \hat{\alpha}\left(\frac{\partial\hat{\mathbf{x}}_z(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}}\right)^{\!\top}\big(\hat{\alpha}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\big) = 2\,\hat{\alpha}\,\mathbf{C}^\top\big(\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda}\big)^{\!\top}\big(\hat{\alpha}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\big),$$
where $\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda} = \big(\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\mathbf{A} + \lambda\mathbf{I}\big)^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top$, and $\mathbf{C} = [\mathbf{c}_1, \ldots, \mathbf{c}_N]$
with $\mathbf{c}_i = \frac{\partial\mathbf{T}_{\boldsymbol{\beta}}}{\partial\beta_i}\big(\mathbf{y} - \mathbf{A}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta})\big)$. (See the derivation in Appendix A.4.)
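A possible implementation of this gradient is sketched below. It assumes $\mathbf{A}$ is an $m \times n$ SciPy sparse matrix, that the measurements are ordered in contiguous blocks of equal size per source (so that $\partial\mathbf{T}_{\boldsymbol{\beta}}/\partial\beta_i$ selects the rows of source $i$), and that the inverse of $\mathbf{H}_{\boldsymbol{\beta}}$ is applied with conjugate gradients; these are implementation choices, not the paper's code.

```python
# Sketch of the gradient formula above under the stated assumptions.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gradient_F(beta, A, y, x_gt, lam):
    m, n = A.shape
    N = beta.size
    p = m // N                                    # measurements per source
    w = np.repeat(beta, p)                        # diagonal of T_beta
    H = LinearOperator((n, n), matvec=lambda v: A.T @ (w**2 * (A @ v)) + lam * v,
                       dtype=np.float64)
    x_hat, _ = cg(H, A.T @ (w**2 * y))            # x_hat_z(beta)
    alpha = float(x_hat @ x_gt) / float(x_hat @ x_hat)
    u, _ = cg(H, alpha * x_hat - x_gt)            # H^{-1} (alpha x_hat - x_gt)
    v = w * (A @ u)                               # (A^lambda_T)^T applied to the residual
    resid = y - A @ x_hat
    grad = np.empty(N)
    for i in range(N):
        rows = slice(i * p, (i + 1) * p)          # rows belonging to source i
        grad[i] = 2.0 * alpha * resid[rows] @ v[rows]   # c_i^T (A^lambda_T)^T (alpha x_hat - x_gt)
    return grad
```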

5.3. Projection Onto the Simplex Constraint

Here, we investigate the projection operator $\mathcal{P}$ introduced in (21), which maps a point onto the set $\mathcal{B} = \{\mathbf{u} \in \mathbb{R}^{N} \mid \|\mathbf{u}\|_1 = K,\ 0 \le u_i \le 1\}$ (see the derivation in Appendix A.5):
$$\mathcal{P}(\mathbf{p}) = \min\{\max\{\mathbf{p} - \mu\mathbf{1},\ \mathbf{0}\},\ \mathbf{1}\},$$
where $\mathbf{p}, \mathbf{1} \in \mathbb{R}^{N}$, the minimum and maximum are taken element-wise, and $\mu$ is obtained by solving
$$\sum_i \min\{\max\{p_i - \mu,\ 0\},\ 1\} = K,$$
which can be done using Newton's method.
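A sketch of this projection is given below. It brackets the root of the scalar equation for $\mu$ and combines Newton steps with a bisection fallback; the fallback is an implementation choice for robustness, not part of the formulation above.

```python
# Sketch of the projection onto B = {u : sum(u) = K, 0 <= u <= 1}.
# Solves w(mu) = sum(clip(p - mu, 0, 1)) - K = 0 with guarded Newton steps.
import numpy as np

def project_capped_simplex(p, K, tol=1e-10, max_iter=100):
    lo, hi = p.min() - 1.0, p.max()        # bracket: w(lo) >= 0 >= w(hi)
    mu = 0.5 * (lo + hi)
    for _ in range(max_iter):
        t = np.clip(p - mu, 0.0, 1.0)
        w = t.sum() - K
        if abs(w) <= tol:
            break
        if w > 0:                           # root lies at a larger mu
            lo = mu
        else:
            hi = mu
        dw = -np.count_nonzero((p - mu > 0.0) & (p - mu < 1.0))   # w'(mu) = -|I|
        mu = mu - w / dw if dw != 0 else 0.5 * (lo + hi)          # Newton step or bisection
        mu = min(max(mu, lo), hi)
    return np.clip(p - mu, 0.0, 1.0)
```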

6. Numerical Experiments

Here, we present results for our proposed method (EKSD). We evaluate the performance of EKSD using the following criteria:
i: The normalized mean squared error (NMSE) measures the error between the reconstructed image and the ground truth:
$$\mathrm{NMSE} := \frac{\big\|\hat{\alpha}\,\hat{\mathbf{x}}_z - \mathbf{x}_{\mathrm{gt}}\big\|^2}{\|\mathbf{x}_{\mathrm{gt}}\|^2}.$$
ii: The structural similarity index measure (SSIM): unlike the NMSE, SSIM takes into account the structural information and perceived quality of the images. It compares three aspects of the images (luminance, contrast, and structure) and computes a similarity index between 0 and 1, where 1 indicates perfect similarity [13].
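For reference, both criteria can be computed as follows; this is a straightforward sketch that assumes 2D float images and takes the SSIM implementation from scikit-image.

```python
# Sketch of the two evaluation criteria; SSIM is taken from scikit-image.
import numpy as np
from skimage.metrics import structural_similarity

def nmse(x_rec, x_gt, alpha=1.0):
    return np.linalg.norm(alpha * x_rec - x_gt) ** 2 / np.linalg.norm(x_gt) ** 2

def ssim(x_rec, x_gt):
    return structural_similarity(x_rec, x_gt, data_range=x_gt.max() - x_gt.min())
```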

6.1. Data Set

As a benchmark example we consider a two-layer PCB of size 80 × 250 × 250 pixels. It consists of two distinct wire designs (as illustrated in Figure 5) with a thickness of 25 pixels each. These designs are separated by a gap of 20 pixels. The layer of interest is the one depicted in Figure 5a.
We create 200 defect shapes in various locations, forming a data set of defective wire designs for the bottom layer. A few samples of this data set are depicted in Figure 6. We use half of this data set for source design (calibration) and half for validation.

6.2. Implementation Details

We define the X-ray setup as shown in Figure 2. The distance $h$ between the center of the source array and the detector is equal to $6L$, where $L$ represents the width of the phantom. Moreover, we consider $\mathbf{r} \in [-L/2, L/2]^2$ (discretized uniformly into $250 \times 250$ pixels) and $\mathbf{s} \in [-D/2, D/2]^2$ (discretized uniformly into $\sqrt{N} \times \sqrt{N}$ source positions), where $D$ is the fixed array length. The projection data are generated using the ASTRA toolbox [14]. For illustration, we present the projection images from all sources in a $4 \times 4$ array of sources in Figure 7. It is challenging to inspect individual layers of the phantom from a single projection, as the overlapping layers hinder a proper investigation.
We choose a small regularization parameter $\lambda = 0.1$ to ensure the solution's stability and uniqueness without significantly biasing the reconstruction. We carefully tuned the step size, $\gamma = 10^{-9}$, to be sufficiently small, reducing the risk of overshooting during optimization. Additionally, we manually adjusted the stopping threshold, $\epsilon = 0.001$, to ensure that the algorithm stops at approximately consistent convergence points. The parameter vector $\boldsymbol{\beta}$ is initialized either with a vector of ones or with i.i.d. uniform random values in $[0, 1]$; we refer to these variants as EKSD(K) and EKSDR(K), respectively. In some experiments, we compare the results to completely random designs of $K$ out of $N$ source positions, referred to as the RANDOM method.
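A hedged sketch of how such a geometry can be set up with the ASTRA toolbox is given below, using a 'cone_vec' projection geometry with one vector row per source position. The sizes, units, and axis conventions shown are illustrative and should be checked against the ASTRA documentation; they are not the paper's exact configuration.

```python
# Hedged sketch of a laminographic 'cone_vec' geometry in the ASTRA toolbox:
# one vector row per source position on the grid, detector fixed below the object.
# All sizes and coordinate conventions are illustrative.
import numpy as np
import astra

L_pix = 250                                   # lateral phantom size in pixels
h, D, n_side = 6 * L_pix, 4 * L_pix, 4        # source height, array length, sqrt(#sources)

vol_geom = astra.create_vol_geom(L_pix, L_pix, 80)     # (rows, cols, slices)

vectors = []
for sx in np.linspace(-D / 2, D / 2, n_side):
    for sy in np.linspace(-D / 2, D / 2, n_side):
        vectors.append([sx, sy, h,            # source position (s, h)
                        0, 0, -40,            # detector centre, just below the volume
                        1, 0, 0,              # detector pixel direction u
                        0, 1, 0])             # detector pixel direction v
proj_geom = astra.create_proj_geom('cone_vec', L_pix, L_pix, np.array(vectors))

phantom = np.zeros((80, L_pix, L_pix), dtype=np.float32)   # (slices, rows, cols)
proj_id, proj_data = astra.create_sino3d_gpu(phantom, proj_geom, vol_geom)  # needs a CUDA GPU
```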

6.3. Convergence

We evaluate the convergence behavior of the EKSD algorithm across iterations. Here, we let $K = 10$, $N = 16$, $D = 4L$. In Figure 8, we depict the NMSE at each iteration as evaluated on the calibration (left) and validation (right) data. This figure shows that the design is not overfitted to the calibration data.

6.4. Exploring the Effects of Array Length

Here, we study the influence of the array length $D$ on the reconstruction error for $N = 16, 36, 64$. We compare the average NMSE over the validation data across various array lengths, as depicted in Figure 9. The heightened errors observed for both smaller and larger array lengths indicate that certain source positions within the array contribute to artifacts in the depth image, leading to an elevated NMSE.

6.5. Exhaustive Search

We carried out an exhaustive search to assess the performance of EKSD. We consider $N = 16$ source positions with $D = 4L$. Figure 10 shows the average NMSE over the validation data for all possible $K$-source designs for each $K$, as well as the average NMSE achieved by the EKSD design. We see that the designs found by EKSD are nearly optimal.

6.6. Source Design 1

We consider three configurations with $N = 16, 36, 64$ at $D = 4L$, and we show the NMSE and SSIM results in Table 1. First, we display results for depth-reconstructed images utilizing all source positions within the array. Subsequently, we derive $K = 10$ efficient source designs with EKSD and EKSDR and present the corresponding NMSE and SSIM results. For each value of $N$, we observe that EKSD and EKSDR outperform the case that uses all source positions. This improvement can be attributed to the reduction of artifacts in the images, which is likely due to the selective nature of the measurements in the EKSD and EKSDR approaches. As a comparison, we show the average NMSE and SSIM of random designs with $K = 10$ sources; these are significantly worse than all other methods.
Figure 11 shows the corresponding depth images of the bottom layer reconstructed with source positions suggested by EKSD. As demonstrated in Section 2.3, artifacts in these images are attributed to the superposition of features from different layers; the lack of rotational acquisition in laminography gives rise to these artifacts. Clearly, the designs proposed by EKSD minimize artifacts in the depth images compared to random designs, facilitating accurate defect detection within the depth images.

6.7. Source Design 2

We repeat the previous experiment with the array length chosen optimally according to Figure 9, i.e., $D = 2.4L$, for different values of $N$. The results are summarized in Table 2. As the full designs are already close to optimal, there is a less dramatic improvement when optimizing the design with fewer source positions. However, the results for $N = 64$ suggest that increasing the number of positions while maintaining a fixed array length, thereby reducing the distance between sources, may result in a non-binary solution. Converting this solution into a sparse vector with $K = 10$ elements then results in a less optimal outcome. The proposed source designs and corresponding depth images are shown in Figure 12.

7. Discussion

In this research, we explore the methodology of experimental design for a laminographic setup, acknowledging potential challenges that may arise in real-world applications. Here, we outline several significant challenges with our proposed method: (i) The non-convex nature of the objective function may lead to entrapment in local minima. To address this, diversifying the initial conditions increases the likelihood of identifying better solutions or the global minimum. (ii) Our method optimizes the positions of sources within a specified grid. However, this grid may not represent the most efficient set of source positions. An alternative approach could involve optimizing over a continuous domain of positions, leveraging the closed-form formulation for depth image reconstruction presented in this paper. (iii) Our investigation of the experimental design problem assumes noise-free measurements, an assumption that may not hold in real-world scenarios characterized by noisy data. This can be addressed by incorporating noise directly into the optimization process to enhance real-world applicability.

8. Conclusion

In real-world scenarios, imaging a particular depth of a planar object can yield valuable information with less effort than a full 3D reconstruction. In a depth image, the information corresponding to the chosen layer appears sharp, while the details of other layers appear blurred. We argue that source design plays a role in mitigating these unwanted artifacts and pose the experimental design problem as a bi-level optimization problem, which we solve using gradient-based methods. The suggested framework can be tailored to a particular application by incorporating different regularization terms in the lower-level optimization or by adopting an alternative loss function in the upper-level optimization to more effectively capture the characteristics of the image.

Appendix A

Appendix A.1. Forward Operator for the 3D Object

Figure 2 presents a 2D view of the acquisition setup for a planar object $x(\mathbf{u}, z)$, where $\mathbf{u} \in \mathbb{R}^2$ is the lateral coordinate and $z \in \mathbb{R}$ the depth coordinate. The line integral of $x(\mathbf{u}, z)$ from $(\mathbf{s}, h)$ to $(\mathbf{r}, 0)$ for a fixed $h$ can be expressed as follows:
$$y(\mathbf{r}, \mathbf{s}) = \int_0^1 x\big(\mathbf{s} + (\mathbf{r} - \mathbf{s})\,t,\ h - ht\big)\,\mathrm{d}t,$$
where $\mathbf{s} \in \mathbb{R}^2$ and $\mathbf{r} \in \mathbb{R}^2$ denote the source and detector positions, respectively. Next, we substitute for $x(\mathbf{u}, z)$ its representation in the Fourier domain $\mathcal{F}_{3D}(x)(\boldsymbol{\xi}, \zeta)$:
$$x(\mathbf{u}, z) = \int \mathcal{F}_{3D}(x)(\boldsymbol{\xi}, \zeta)\, e^{\,j 2\pi(\boldsymbol{\xi}^\top\mathbf{u} + \zeta z)}\,\mathrm{d}\boldsymbol{\xi}\,\mathrm{d}\zeta.$$
Using (A2) in (A1) yields
$$y(\mathbf{r}, \mathbf{s}) = \int \mathcal{F}_{3D}(x)(\boldsymbol{\xi}, \zeta)\, e^{\,j 2\pi\left(\boldsymbol{\xi}^\top(\mathbf{s} + (\mathbf{r}-\mathbf{s})t) + \zeta(h - ht)\right)}\,\mathrm{d}t\,\mathrm{d}\boldsymbol{\xi}\,\mathrm{d}\zeta.$$
Using $\int e^{\,j 2\pi\left(\boldsymbol{\xi}^\top(\mathbf{r}-\mathbf{s}) - \zeta h\right)t}\,\mathrm{d}t = \delta\big(\boldsymbol{\xi}^\top(\mathbf{r}-\mathbf{s}) - \zeta h\big)$, it follows that
$$y(\mathbf{r}, \mathbf{s}) = \int \mathcal{F}_{3D}(x)(\boldsymbol{\xi}, \zeta)\, e^{\,j 2\pi(\boldsymbol{\xi}^\top\mathbf{s} + \zeta h)}\,\delta\big(\boldsymbol{\xi}^\top(\mathbf{r}-\mathbf{s}) - \zeta h\big)\,\mathrm{d}\zeta\,\mathrm{d}\boldsymbol{\xi},$$
and then
$$y(\mathbf{r}, \mathbf{s}) = \int \mathcal{F}_{3D}(x)\big(\boldsymbol{\xi},\ h^{-1}\boldsymbol{\xi}^\top(\mathbf{r}-\mathbf{s})\big)\, e^{\,j 2\pi\,\boldsymbol{\xi}^\top\mathbf{r}}\,\mathrm{d}\boldsymbol{\xi}.$$
Using $\mathbf{o} = \mathbf{r} - \mathbf{s}$, this can be written as
$$g(\mathbf{r}, \mathbf{o}) = \int \mathcal{F}_{3D}(x)\big(\boldsymbol{\xi},\ h^{-1}\boldsymbol{\xi}^\top\mathbf{o}\big)\, e^{\,j 2\pi\,\boldsymbol{\xi}^\top\mathbf{r}}\,\mathrm{d}\boldsymbol{\xi},$$
which means we can link the 3D Fourier transform of $x$ to the 2D Fourier transform of the data as
$$\mathcal{F}_{2D}(g)(\boldsymbol{\xi}, \mathbf{o}) = \mathcal{F}_{3D}(x)\big(\boldsymbol{\xi},\ h^{-1}\boldsymbol{\xi}^\top\mathbf{o}\big).$$

Appendix A.2. Forward Operator for a 2D Slice on the Depth z

We introduce $x_z$ as a slice of the object $x$ at depth $z$, i.e.,
$$x(\mathbf{u}, z') = x_z(\mathbf{u})\,\delta(z' - z).$$
We then define the forward operator $A_z$ as projecting a 2D object to the detector at $z = 0$ via
$$y(\mathbf{r}, \mathbf{s}) = x_z\big(\mathbf{s} + (1 - h^{-1}z)(\mathbf{r} - \mathbf{s})\big),$$
or equivalently as
$$y(\mathbf{r}, \mathbf{s}) = x_z\big((1 - h^{-1}z)\,\mathbf{r} + h^{-1}z\,\mathbf{s}\big).$$
The adjoint operator $A_z^*$ can be obtained from the definition $\langle A_z x_z, y\rangle_{(\mathbf{s},\mathbf{r})} = \langle x_z, A_z^* y\rangle_{\mathbf{v}}$. First, we have
$$\langle A_z x_z, y\rangle_{(\mathbf{s},\mathbf{r})} = \int\int x_z\big((1 - h^{-1}z)\,\mathbf{r} + h^{-1}z\,\mathbf{s}\big)\, y(\mathbf{r}, \mathbf{s})\,\mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{s}.$$
Now let $\mathbf{v} = (1 - h^{-1}z)\,\mathbf{r} + h^{-1}z\,\mathbf{s}$; then $\mathbf{r} = \frac{\mathbf{v} - h^{-1}z\,\mathbf{s}}{1 - h^{-1}z}$ and $\mathrm{d}\mathbf{r} = \frac{1}{1 - h^{-1}z}\,\mathrm{d}\mathbf{v}$, and it follows that
$$\langle A_z x_z, y\rangle_{(\mathbf{s},\mathbf{r})} = \frac{1}{1 - h^{-1}z}\int\int x_z(\mathbf{v})\, y\!\left(\frac{\mathbf{v} - h^{-1}z\,\mathbf{s}}{1 - h^{-1}z},\ \mathbf{s}\right)\mathrm{d}\mathbf{v}\,\mathrm{d}\mathbf{s} = \langle x_z, A_z^* y\rangle.$$
We thus find that $A_z^*$ is defined as
$$A_z^* y(\mathbf{u}) = \frac{1}{1 - h^{-1}z}\int y\!\left(\frac{\mathbf{u} - h^{-1}z\,\mathbf{s}}{1 - h^{-1}z},\ \mathbf{s}\right)\mathrm{d}\mathbf{s}.$$
Using (A9) in (A11), the composition $A_z^* A_z$ can be written as
$$A_z^* A_z x_z(\mathbf{u}) = x_z(\mathbf{u}) \int \frac{\mathrm{d}\mathbf{s}}{1 - h^{-1}z}.$$

Appendix A.3. Back Projection for a Depth Image

We use $y(\mathbf{r}, \mathbf{s})$ from (A1) to express $A_z^* A$ using (A11):
$$A_z^* A x(\mathbf{u}) = \frac{1}{1 - h^{-1}z}\int\int x\!\left(\frac{\mathbf{u} - h^{-1}z\,\mathbf{s}}{1 - h^{-1}z}\,t + (1 - t)\,\mathbf{s},\ h(1 - t)\right)\mathrm{d}t\,\mathrm{d}\mathbf{s}.$$
Now let $z' = h(1 - t)$; then this can be rewritten as
$$A_z^* A x(\mathbf{u}) = \frac{1}{h - z}\int\int x\!\left(\frac{h - z'}{h - z}\,\mathbf{u} + \frac{z' - z}{h - z}\,\mathbf{s},\ z'\right)\mathrm{d}z'\,\mathrm{d}\mathbf{s}.$$

Appendix A.4. Gradient Calculation of the Objective Function

To find the gradient of $F(\hat{\mathbf{x}}_z(\boldsymbol{\beta}), \hat{\alpha}(\boldsymbol{\beta})) := \tfrac{1}{2}\|\hat{\alpha}(\boldsymbol{\beta})\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\|^2$ with respect to $\boldsymbol{\beta}$, the chain rule gives:
$$\nabla_{\boldsymbol{\beta}} F(\hat{\mathbf{x}}_z(\boldsymbol{\beta}), \hat{\alpha}(\boldsymbol{\beta})) = \left(\frac{\partial\hat{\mathbf{x}}_z}{\partial\boldsymbol{\beta}}\right)^{\!\top}\frac{\partial F}{\partial\hat{\mathbf{x}}_z} + \frac{\partial\hat{\alpha}}{\partial\boldsymbol{\beta}}\,\frac{\partial F}{\partial\hat{\alpha}},$$
where the second term vanishes because $\hat{\alpha}$ is the minimizer of $F$ over $\alpha$. The first term can be written as:
$$\nabla_{\boldsymbol{\beta}} F = \left(\frac{\partial\hat{\mathbf{x}}_z}{\partial\boldsymbol{\beta}}\right)^{\!\top}\frac{\partial F}{\partial\hat{\mathbf{x}}_z} = \hat{\alpha}\left(\frac{\partial\hat{\mathbf{x}}_z}{\partial\boldsymbol{\beta}}\right)^{\!\top}\big(\hat{\alpha}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\big).$$
To obtain an expression for the derivative of $\hat{\mathbf{x}}_z$ with respect to $\boldsymbol{\beta}$, we exploit the fact that the lower-level optimization problem (16) has the closed-form solution:
$$\hat{\mathbf{x}}_z(\boldsymbol{\beta}) = \big(\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\mathbf{A} + \lambda\mathbf{I}\big)^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\,\mathbf{y}.$$
We define $\mathbf{H}_{\boldsymbol{\beta}} := \mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\mathbf{A} + \lambda\mathbf{I}$; then the derivative follows as
$$\frac{\partial\hat{\mathbf{x}}_z(\boldsymbol{\beta})}{\partial\beta_i} = \frac{\partial\mathbf{H}_{\boldsymbol{\beta}}^{-1}}{\partial\beta_i}\,\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\,\mathbf{y} + \mathbf{H}_{\boldsymbol{\beta}}^{-1}\,\frac{\partial\big(\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\,\mathbf{y}\big)}{\partial\beta_i}.$$
Recalling that $\frac{\partial\mathbf{H}_{\boldsymbol{\beta}}^{-1}}{\partial\beta_i} = -\mathbf{H}_{\boldsymbol{\beta}}^{-1}\,\frac{\partial\mathbf{H}_{\boldsymbol{\beta}}}{\partial\beta_i}\,\mathbf{H}_{\boldsymbol{\beta}}^{-1}$, we obtain
$$\frac{\partial\mathbf{H}_{\boldsymbol{\beta}}^{-1}}{\partial\beta_i} = -2\,\mathbf{H}_{\boldsymbol{\beta}}^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\,\frac{\partial\mathbf{T}_{\boldsymbol{\beta}}}{\partial\beta_i}\,\mathbf{A}\,\mathbf{H}_{\boldsymbol{\beta}}^{-1},$$
so that (A18) can be rewritten as
$$\frac{\partial\hat{\mathbf{x}}_z(\boldsymbol{\beta})}{\partial\beta_i} = 2\,\mathbf{H}_{\boldsymbol{\beta}}^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\,\frac{\partial\mathbf{T}_{\boldsymbol{\beta}}}{\partial\beta_i}\big(\mathbf{y} - \mathbf{A}\,\mathbf{H}_{\boldsymbol{\beta}}^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\,\mathbf{y}\big),$$
where $\mathbf{H}_{\boldsymbol{\beta}}^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top\mathbf{T}_{\boldsymbol{\beta}}\,\mathbf{y}$ equals $\hat{\mathbf{x}}_z(\boldsymbol{\beta})$ by (A17) and $\mathbf{H}_{\boldsymbol{\beta}}^{-1}\mathbf{A}^\top\mathbf{T}_{\boldsymbol{\beta}}^\top$ is denoted by $\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda}$. This gives
$$\frac{\partial\hat{\mathbf{x}}_z(\boldsymbol{\beta})}{\partial\beta_i} = 2\,\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda}\,\frac{\partial\mathbf{T}_{\boldsymbol{\beta}}}{\partial\beta_i}\big(\mathbf{y} - \mathbf{A}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta})\big) = 2\,\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda}\,\mathbf{c}_i.$$
For simplicity, we denote $\frac{\partial\mathbf{T}_{\boldsymbol{\beta}}}{\partial\beta_i}\big(\mathbf{y} - \mathbf{A}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta})\big)$ by $\mathbf{c}_i \in \mathbb{R}^m$, and it follows that
$$\frac{\partial\hat{\mathbf{x}}_z(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}} = 2\,\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda}\,\mathbf{C},$$
where $\mathbf{C} = [\mathbf{c}_1, \ldots, \mathbf{c}_N]$. Substituting (A22) into (22) yields
$$\nabla_{\boldsymbol{\beta}} F = 2\,\hat{\alpha}\,\mathbf{C}^\top\big(\mathbf{A}_{\mathbf{T}_{\boldsymbol{\beta}}}^{\lambda}\big)^{\!\top}\big(\hat{\alpha}\,\hat{\mathbf{x}}_z(\boldsymbol{\beta}) - \mathbf{x}_{\mathrm{gt}}\big).$$

Appendix A.5. Projection Onto the Simplex Constraint

Let $\mathcal{P}$ be the operator that maps a point onto the set $\mathcal{B} = \{\mathbf{u} \in \mathbb{R}^{N} \mid \|\mathbf{u}\|_1 = K,\ 0 \le u_i \le 1\}$. The orthogonal projection of $\mathbf{p}$ is the unique solution of:
$$\operatorname*{argmin}_{\mathbf{t}}\ \tfrac{1}{2}\,\|\mathbf{t} - \mathbf{p}\|^2 \quad \mathrm{s.t.}\quad \mathbf{1}^\top\mathbf{t} = K,\ \ 0 \le t_i \le 1.$$
To solve the above optimization problem via its dual, we introduce the Lagrangian [15],
$$\mathcal{L}(\mathbf{t}, \mu) := \tfrac{1}{2}\,\|\mathbf{t} - \mathbf{p}\|^2 + \mu\,\big(\mathbf{1}^\top\mathbf{t} - K\big),$$
where $\mu \in \mathbb{R}$ is a Lagrange multiplier. The dual problem can now be written as
$$J(\mu) = \inf_{0 \le t_i \le 1}\ \mathcal{L}(\mathbf{t}, \mu),$$
where $J(\mu)$ is the dual function, i.e., the minimum value of the Lagrangian over $\mathbf{t}$. By strong duality, the optimal values of (A24) and (A26) are equal if there exists a dual variable $\mu \in \mathbb{R}$ that makes (A26) a tight lower bound for (A24). The solution of the dual problem (A26) can be written as [16]
$$\mathbf{t}(\mu) = \min\{\max\{\mathbf{p} - \mu\mathbf{1},\ \mathbf{0}\},\ \mathbf{1}\},$$
where $\mu \in \mathbb{R}$ is obtained by solving $\mathbf{1}^\top\mathbf{t}(\mu) = K$. Let us define the function $w(\mu) = \sum_i \min\{\max\{p_i - \mu, 0\}, 1\} - K$, which is non-increasing in $\mu$; the equation $w(\mu) = 0$ can be solved using Newton's method. The derivative is given by
$$w'(\mu) = -|\mathcal{I}|, \qquad \mathcal{I} = \{\,i \in \{1, \ldots, N\} : 0 \le p_i - \mu \le 1\,\}.$$

References

  1. Medici, V.; Martarelli, M.; Paone, N.; Pandarese, G.; van de Kamp, W.; Verhoef, B.; Sipsas, K.; Broechler, R.; Besada, L.B.; Alexopoulos, K.; Nikolakis, N. Integration of Non-Destructive Inspection (NDI) systems for Zero-Defect Manufacturing in the Industry 4.0 era. 2023 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT), 2023, pp. 439–444.
  2. Wang, B.; Zhong, S.; Lee, T.L.; Fancey, K.S.; Mi, J. Non-destructive testing and evaluation of composite materials/structures: A state-of-the-art review. Advances in Mechanical Engineering 2020, 12, 1687814020913761.
  3. Schimleck, L.; Dahlen, J.; Apiolaza, L.A.; Downes, G.; Emms, G.; Evans, R.; Moore, J.; Pâques, L.; Van den Bulcke, J.; Wang, X. Non-Destructive Evaluation Techniques and What They Tell Us about Wood Property Variation. Forests 2019, 10.
  4. Hassler, U.; Oeckl, S.; Bauscher, I.; et al. Inline CT methods for industrial production. International Symposium on NDT in Aerospace, 2008, pp. 123–131.
  5. Gondrom, S.; Zhou, J.; Maisl, M.; Reiter, H.; Kröning, M.; Arnold, W. X-ray computed laminography: an approach of computed tomography for applications with limited access. Nuclear Engineering and Design 1999, 190, 141–147.
  6. O'Brien, N.; Mavrogordato, M.; Boardman, R.; Sinclair, I.; Hawker, S.; Blumensath, T. Comparing cone beam laminographic system trajectories for composite NDT. Case Studies in Nondestructive Testing and Evaluation 2016, 6, 56–61.
  7. Haber, E.; Horesh, L.; Tenorio, L. Numerical methods for experimental design of large-scale linear ill-posed inverse problems. Inverse Problems 2008, 24, 055012.
  8. Haber, E.; Magnant, Z.; Lucero, C.; Tenorio, L. Numerical methods for A-optimal designs with a sparsity constraint for ill-posed inverse problems. Computational Optimization and Applications 2012, 52, 293–314.
  9. Ruthotto, L.; Chung, J.; Chung, M. Optimal Experimental Design for Inverse Problems with State Constraints. SIAM Journal on Scientific Computing 2018, 40, B1080–B1100.
  10. Zhang, F. The Schur Complement and Its Applications; Vol. 4, Springer Science & Business Media, 2006.
  11. Paige, C.C.; Saunders, M.A. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Transactions on Mathematical Software (TOMS) 1982, 8, 43–71.
  12. Nemhauser, G.; Wolsey, L. Linear Programming. In Integer and Combinatorial Optimization; John Wiley & Sons, Ltd, 1988; chapter I.2, pp. 27–49. https://onlinelibrary.wiley.com/doi/pdf/10.1002/9781118627372.ch2
  13. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13, 600–612.
  14. van Aarle, W.; Palenstijn, W.J.; Cant, J.; Janssens, E.; Bleichrodt, F.; Dabravolski, A.; Beenhouwer, J.D.; Batenburg, K.J.; Sijbers, J. Fast and flexible X-ray tomography using the ASTRA toolbox. Optics Express 2016, 24, 25129–25147.
  15. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press, 2004.
  16. Kadu, A.; van Leeuwen, T.; Batenburg, K.J. CoShaRP: a convex program for single-shot tomographic shape sensing. Inverse Problems 2021, 37, 105005.
Figure 1. A laminographic setup designed for scanning flat objects, featuring a flexible source that moves along a raster trajectory. Note that dashed arrows specify the source trajectory.
Figure 2. 2D view of the acquisition setup. The source is located at lateral position s R 2 and height h; the detector is parameterized without loss of generality by r R 2 . The object is characterized by x as a function of lateral coordinate u and vertical coordinate z. The ray from the source to the detector is shown by the dashed line.
Figure 3. Schematic depiction of the missing wedge in the Fourier domain sampling of $\mathcal{F}_{3D}(x)$ by the measurements $\mathcal{F}_{2D}(g)$. The green squares represent $\mathcal{F}_{3D}(x)(\boldsymbol{\xi}, \zeta)$ and the black circles represent $\mathcal{F}_{2D}(g)(\boldsymbol{\xi}, \mathbf{o})$, which are limited in $\zeta$ by $h^{-1}\boldsymbol{\xi}^\top\mathbf{o}_{\max}$. The limited aperture in $\mathbf{o}$ gives rise to the missing wedge.
Figure 4. Example of the spatial sampling patterns when imaging the middle layer (at z = 0 ) of an object with M = 3 layers and N = 100 regularly spaced sources in [ 1 , 1 ] 2 at z = 1 . The red circle indicates the point u at which we evaluate the output.
Figure 5. The wire designs with a size of 250 × 250 are located at the bottom and top layers within the PCB, respectively.
Figure 6. Various defects in different locations on the wire design.
Figure 7. Different views from sources in a 4 × 4 array of sources.
Figure 8. The EKSD algorithm's convergence as applied to a calibration data set is depicted. The blue line indicates the mean values, while the black dots highlight the variance of the NMSE at each iteration.
Figure 9. The reconstruction error for various ranges of the area occupied by the source.
Figure 10. The error range applies to any K source positions within the array of 16 positions, where K can be 4, 6, 8, or 10.
Figure 11. Depth images in (a), (b), (c) are reconstructed using 10 proposed source positions from a total of 16, 36, and 64 positions, respectively. Conversely, the depth images in (d), (e), (f) are derived from 10 random source positions out of the same total positions.
Figure 12. Depth images shown in a, b, c are reconstructed by 10 proposed source positions shown in d, e, f, respectively.
Table 1. NMSE and SSIM results with the occupied array length of 4 L . For randomized methods, we present the average and standard deviation over 50 runs.
Source design | NMSE (N=16) | NMSE (N=36) | NMSE (N=64) | SSIM (N=16) | SSIM (N=36) | SSIM (N=64)
ALL (K=N) | 0.1749 | 0.1632 | 0.1603 | 0.5630 | 0.5921 | 0.6046
EKSD (K=10) | 0.1611 | 0.1508 | 0.1528 | 0.6275 | 0.7304 | 0.7397
EKSDR (K=10) | 0.1611 ± 0.0003 | 0.1508 ± 0.0004 | 0.1518 ± 0.0011 | 0.6275 ± 0.0003 | 0.7304 ± 0.0004 | 0.7380 ± 0.0046
RANDOM (K=10) | 0.1819 ± 0.007 | 0.1781 ± 0.011 | 0.1759 ± 0.014 | 0.57 ± 0.015 | 0.6075 ± 0.021 | 0.6248 ± 0.03
Table 2. NMSE and SSIM results with the occupied array length of 2.4 L . For randomized methods, we present the average and standard deviation over 50 runs.
Source design | NMSE (N=16) | NMSE (N=36) | NMSE (N=64) | SSIM (N=16) | SSIM (N=36) | SSIM (N=64)
ALL (K=N) | 0.1467 | 0.1454 | 0.1459 | 0.7030 | 0.7174 | 0.7233
EKSD (K=10) | 0.1488 | 0.1478 | 0.1548 | 0.7230 | 0.7271 | 0.7011
EKSDR (K=10) | 0.1488 ± 0.0005 | 0.1484 ± 0.0009 | 0.1530 ± 0.0025 | 0.7230 ± 0.0005 | 0.7259 ± 0.0027 | 0.7040 ± 0.0047
RANDOM (K=10) | 0.1537 ± 0.004 | 0.1580 ± 0.005 | 0.1614 ± 0.008 | 0.7040 ± 0.01 | 0.7178 ± 0.011 | 0.7237 ± 0.015