Silicon Supplementation — via Fertigation and Foliar Sprays — Enhances Zucchini Resilience Against Powdery Mildew
Esmée France de Graaf, Johanna A. Bac-Molenaar, Albartus Evenhuis
Posted: 24 December 2025
A Survey on Efficient Protein Language Models
Shouren Wang, Debargha Ganguly, Vinooth Kulkarni, Wang Yang, Zhuoran Qiao, Daniel Blankenberg, Vipin Chaudhary, Xiaotian Han
Posted: 24 December 2025
HB-Eval: A System-Level Reliability Evaluation and Certification Framework for Agentic AI
Abuelgasim Mohamed Ibrahim Adam
Posted: 24 December 2025
Assessing the Validity of Abnormal Repetitive Behaviours as Indicators of Poor Animal Welfare: A Narrative Review
Georgia Mason, Lindsey Kitchenham
Posted: 24 December 2025
Catalytic Performance of Flexible Polyionic Liquid Nanofiber Membranes Derived from Polyacrylonitrile for Advanced Applications
Yue Gao, Xuan Qi, Junfeng Zhang
Posted: 24 December 2025
Nonprofit Evolution: Leading Innovation in Social Ventures
Ulrich Vadez Noubissie
Posted: 24 December 2025
Adaptive Multi-Modal Contextual Verification for Enhanced Cross-Modal Entity Consistency
Ruohan Qi, Tianhao Nian
Posted: 24 December 2025
Optimizing Onboard Deep Learning and Hybrid Models for Resource-Constrained Aerial Operations: A UAV-Based Adaptive Monitoring Framework for Heterogeneous Urban Forest Environments
Won-Ki Jo, Seung-Hwan Go, Jong-Hwa Park
Unmanned Aerial Vehicles (UAVs) are essential tools for high-resolution urban remote sensing; however, maximizing their operational efficiency is often hindered by the Size, Weight, and Power (SWaP) constraints inherent to aerial platforms. High-end sensors (e.g., LiDAR) provide dense data but reduce flight endurance and require extensive post-processing, delaying actionable intelligence. To address the challenge of maximizing data utility through cost-effective means, this study evaluates an adaptive multi-modal monitoring framework utilizing high-resolution RGB imagery. Using a DJI Matrice 300 RTK, we assessed the performance of RGB-based advanced AI architectures across varying urban density zones. We stress-tested End-to-End Deep Learning models (Mask R-CNN, YOLOv8-seg) and a Hybrid approach (U-Net++ fused with RGB-derived Canopy Height Models) to determine their viability for replacing active sensors in precision analysis. Results indicate that the RGB-based Hybrid model achieved superior Semantic IoU (0.551), successfully demonstrating that optical imagery combined with deep learning can substitute for heavy active sensors in area-based estimation tasks. Crucially for autonomous UAV operations, YOLOv8-seg achieved inference speeds of 3.89 seconds per tile, approximately 1.86 times faster than Mask R-CNN, validating its suitability for onboard inference on embedded systems. This study establishes a protocol for high-precision analysis using standard RGB sensors, offering a strategic pathway for deploying scalable, consumer-grade UAV fleets in complex urban environments.
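For context on the headline metric: Semantic IoU is the standard intersection-over-union computed on per-class label masks, and the reported speedup implies Mask R-CNN needs roughly 3.89 × 1.86 ≈ 7.2 seconds per tile. The minimal numpy sketch below illustrates per-class IoU only; it is not the authors' evaluation code, and the class id and tile values are hypothetical.

import numpy as np

def semantic_iou(pred, target, class_id=1):
    # Boolean masks for the class of interest (e.g., canopy pixels).
    pred_mask = (pred == class_id)
    target_mask = (target == class_id)
    intersection = np.logical_and(pred_mask, target_mask).sum()
    union = np.logical_or(pred_mask, target_mask).sum()
    # Convention: IoU is 0 when the class is absent from both masks.
    return intersection / union if union > 0 else 0.0

# Example: two 4x4 label tiles with a single foreground class.
pred = np.array([[0, 1, 1, 0]] * 4)
target = np.array([[0, 0, 1, 1]] * 4)
print(semantic_iou(pred, target))  # 4 / 12 ≈ 0.333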
Posted: 24 December 2025
The Effect of Preoperative Visual Explanation on Anxiety in Children: A Randomized Controlled Trial
Hülya Tosun Söner, Süleyman Kızıldağ, Osman Uzundere, Fatma Acil, Meral Erdal Erbatur, Selen Topalel, Ayhan Kaydu, Cem Kıvılcım Kaçar, Erhan Gökçek, Enes Sirma, +2 authors
Background and Objectives: This study aimed to investigate the effects of explaining the perioperative process to pediatric patients scheduled for adenotonsillectomy using pictures on their anxiety levels. Materials and Methods: A prospective, randomized controlled trial was conducted, enrolling 58 patients. The patients were divided into two groups: Group 1 (n=29), where the perioperative process was explained using pictures, and Group 2 (n=29), the control group, where no pictures were used. Child anxiety was assessed using the modified Yale Preoperative Anxiety Scale Short Form (mYPAS-SF) at five observation time points before anesthesia induction. Parents’ anxiety was measured using the Visual Analog Scale for Anxiety. Results: Patients in Group 1 had significantly lower heart rates during induction and the intraoperative period compared to Group 2 (p = 0.031, p = 0.025, respectively). In terms of anxiety and RSAS scores, patients in Group 1 had significantly lower mYPAS-SF scores at time points t2, t3, t4, and t5 compared to Group 2 (t2: p = 0.001; t3-t5: p < 0.001). No significant difference was observed at t1 (p = 0.068). The mean RSAS scores were also significantly lower in Group 1 (p = 0.029). Parents’ anxiety was significantly lower in Group 1 at all three time points (t1: p = 0.017; t2: p = 0.006; t3: p = 0.036). Conclusion: Our study results demonstrate that illustrating the perioperative process in children undergoing adenotonsillectomy can significantly reduce preoperative anxiety and prevent awakening agitation. Given its ease of implementation, we believe that using visual aids to explain the perioperative process to pediatric patients can facilitate process management for patients, parents, and physicians.
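The abstract does not name the statistical test behind the reported p-values; for two independent groups of 29 with ordinal anxiety scores, a nonparametric comparison such as the Mann-Whitney U test is a common choice. The sketch below uses synthetic, hypothetical scores purely to illustrate such a comparison; it is not the study's data or analysis.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical mYPAS-SF scores (scale-like values, n = 29 per group).
rng = np.random.default_rng(0)
group1 = rng.normal(35, 8, 29)   # picture-based explanation group
group2 = rng.normal(50, 10, 29)  # control group
stat, p = mannwhitneyu(group1, group2, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4g}")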
Posted: 24 December 2025
Age Prediction of Hematoma from Hyperspectral Images Using Convolutional Neural Networks
Arash Keshavarz, Gerald Bieber, Daniel Wulff, Carsten Babian, Stefan Lüdtke
Posted: 24 December 2025
Heart Failure and Atrial Fibrillation in Women: Pathophysiological Links, Clinical Challenges and Therapeutic Perspectives
Luminiţa-Bianca Grosu, Camelia Cristina Diaconu, Laura Gabriela Gavril
Posted: 24 December 2025
Integrating Agentic AI to Automate ICD-10 Medical Coding
Kitti Akkhawatthanakun, Lalita Narupiyakul, Konlakorn Wongpatikaseree, Narit Hnoohom, Chakkrit Termritthikun, Paisarn Muneesawang
Posted: 24 December 2025
Clarifying Observer-Relative Cosmological GUT Curvature from Dissipative Scale Evolution: External Flatness and Internal Einstein Limits in Forced High Energy Noise Limits
Madison Newell
Posted: 24 December 2025
Leveraging Mobile Phones for Enhanced Wireless Health Monitoring: An Architectural Approach
Ulrich Noubissie
Posted: 24 December 2025
Large Language Model Agents: A Comprehensive Survey on Architectures, Capabilities, and Applications
Yiming Lei, Jiawei Xu, Chia Xin Liang, Ziqian Bi, Xiaoming Li, Danyang Zhang, Junhao Song, Zhenyu Yu
Posted: 24 December 2025
Effects of the Minimum Wage: A Systematic Review of the Evidence for Spain
María José Asensio-Coto, Celia Sánchez-López, Manuela A. De Paz-Báñez
Posted: 24 December 2025
A Grazing-Incidence SEM Strategy for High-Contrast Imaging of Multiscale Nanomaterials. MoS2, a Case Study
Mariano Palomba, Francesca Nicolais, Filippo Giubileo, Antonio Di Bartolomeo, Gianfranco Carotenuto, Angela Longo
Posted: 24 December 2025
Packaging Glasses from Containers to Encapsulation: Composition, Performance, and Sustainability Pathways
Leonardo Pagnotta
Posted: 24 December 2025
Weighted Lp Estimates for Multiple Generalized Marcinkiewicz Functions
Mohammed Ali, Hussain Al-Qassem
Posted: 24 December 2025
AI-Driven Multi-Modal Assessment of Visual Impression in Architectural Event Spaces: A Cross-Cultural Behavioral and Sentiment Analysis
Riaz-ul-haque Mian, Yen-Khang Nguyen-Tran
Posted: 24 December 2025