Medical imaging plays a crucial role in oncology, particularly in radiotherapy. Computed tomography (CT) is the primary modality used in radiation therapy, providing high-resolution patient geometry and accurate dose calculations [1]. However, CT exposes the patient to a high dose of ionizing radiation. Cone-beam computed tomography (CBCT) offers faster imaging and reduced exposure to non-therapeutic radiation, making it a valuable modality for patient positioning and monitoring in radiotherapy. CBCT is currently used to monitor and detect changes in patient anatomy throughout the treatment; it is also compatible with fractional dose delivery, delivering less additional non-therapeutic dose than traditional CT. However, this modality is prone to scattered-radiation image artifacts such as shading, cupping, and beam hardening [2,3]. These scatter artifacts cause fluctuations in pixel values, so CBCT images cannot be used directly for dose calculation unless correction methods are applied. Reliable techniques for calibrating CBCT images to the Hounsfield unit (HU) values used by CT scanners would expand the clinical usage of CBCT to treatment planning and to the evaluation of tumor shrinkage and organ shift [4,5,6].

In recent years, traditional approaches, such as anti-scatter grids, partial beam blockers, and scatter estimators [
7,8,9], have been joined by deep-learning-based methods, which have shown promising potential to improve CBCT quality [10]. Such methods, leveraging mainly convolutional neural networks (CNNs) and generative adversarial networks (GANs), learn to map the physical model of the x-ray interaction with matter while disregarding the underlying complex analytics and avoiding explicit statistical approaches such as Monte Carlo simulation. Aiming to remove scatter and correct HU values in CBCT scans, many authors have explored various types of CNN, ranging from U-Nets trained with a supervised approach [11,12,13,14,15] to the more complex cycle-consistent generative adversarial network (cGAN), based on an unsupervised training approach [
16,17,18,19,20,21]. The cGAN model consists of generator and discriminator subnetworks with opposing roles: while a generator learns to translate images from one domain to the other, a discriminator distinguishes between real and synthetic images. A cycle-consistency constraint, obtained by mapping an image to the other domain and back, further improves the generator's ability to produce synthetic images that look like real ones.

Focusing on CBCT-to-CT mapping, Xie et al. proposed a scatter-artifact-removal CNN based on a contextual loss function, trained on the pelvis region of 11 subjects to correct CBCT artifacts in the pelvic area [22]. Another study applied a cGAN model to calibrate CBCT HU values in the pelvis region; the model was trained on 49 patients with unpaired data and tested on nine independent subjects, and the authors reported that the method left the anatomical structure of the CBCT images unchanged [
18]. Exploring the use of deep residual neural networks in this field, one study demonstrated the capability of such architectures by proposing an iterative fine-tuning-based training in which images of increasing resolution are used at each step [23]. Likewise, our group recently reported that a cGAN preserves anatomical coherence better than a CNN trained with purely supervised techniques [24]. None of these contributions, however, addressed the consistency of treatment planning performed on the corrected CBCT. Conversely, Zhang et al. [25] tested pelvis treatment planning in proton therapy performed on CNN-corrected CBCT; however, they concluded that the dose distribution calculated for traditional photon-based treatment outperformed the one computed for proton therapy. CBCT corrected with a cGAN has also been applied to evaluate the quality of proton therapy planning across different datasets, with satisfactory results [
13,20].

All the mentioned works addressed CBCT-to-CT HU conversion for CBCT systems with a wide field of view (FOV). However, some systems in clinical practice have a limited FOV that is not sufficient to contain the entire patient volume, e.g., for large regions such as the pelvis or for obese patients [26]. Considering the current use of CBCT for patient positioning, small-FOV CBCT systems may be preferred for their reduced imaging dose, shorter computation time, and increased resolution over the treatment region of interest [27]. However, the limited FOV also causes a truncation problem during reconstruction [28,29]. Consequently, the non-uniqueness of the solution of the iterative reconstruction introduces additional bright-band artifacts into the CBCT [
30]. Even with optimal HU calibration and scatter reduction, a CBCT acquired with a narrow FOV cannot be used for adaptive dose planning. In particular, narrow-FOV CBCT lacks important anatomical information (e.g., the air/skin interface) necessary to properly calculate the beam path.

The present work proposes a deep-learning framework that processes the CBCT to calibrate HU values, remove artifacts due to the cone-beam acquisition geometry, and handle narrow-FOV issues, with the aim of demonstrating the potential use of the corrected CBCT for proton treatment planning updates. The work is part of a larger study carried out in collaboration with the Italian National Center of Hadrontherapy (CNAO, Pavia, Italy), which aims to explore the possibility of using the in-house narrow-FOV CBCT system not only for patient positioning but also for dosimetric evaluation, without hardware modifications [31]. The deep-learning framework builds on the cGAN-based CBCT-to-CT mapping model proposed in [24], extended here to address the narrow-FOV case. Tests were carried out on a public dataset of planning CT scans of 40 oncological patients affected by pancreatic cancer. In a first step, synthetic raw CBCT volumes were generated from the CT scans through Monte Carlo simulation; this allowed us to suppress the anatomical variations usually present between a real CBCT and the corresponding planning CT. Moreover, to demonstrate the feasibility of the methodology with real data, we replicated each experiment with the clinical CBCT scans included in the dataset. As the dataset provides annotations of the segmented lesion and organs at risk, particle-beam dosimetry was computed on both the original planning CT and the corrected CBCT volume, verifying the coherence between the two dose distributions. The main contributions of this paper may therefore be summarized as: