Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Andrei Khrennikov

Abstract: Contemporary discussions of the gap between natural and artificial intelligence often emphasize human capacities such as contextual reasoning, cognitive flexibility, and non-classical decision-making. This paper proposes that quantum and quantum-like models of cognition and decision processes offer a principled framework for addressing these differences. A growing body of empirical evidence shows that human reasoning systematically violates the assumptions of classical probability and logic, exhibiting contextuality, order effects, interference phenomena, and task incompatibility. Quantum probability theory and related quantum-like formalisms provide mathematically rigorous tools—based on Hilbert spaces, superposition, and entanglement—that capture these features more naturally than classical models. While quantum and quantum-like approaches share a common mathematical structure, they differ in physical implementation, motivating two complementary directions in artificial intelligence: quantum AI and quantum-like AI. Together, these approaches suggest a viable pathway toward narrowing, and potentially bridging, the divide between natural and artificial intelligence by grounding AI architectures in models aligned with the structure of human cognition.

Article
Engineering
Marine Engineering

Fatih Ahmad Fachriza, Teguh Putranto, I Ketut Aria Pria Utama, Dendy Satrio, Noorlaila Hayati

Abstract: Stiffened panels are essential structural elements that play a critical role in maintaining the integrity of engineering structures, particularly when subjected to torsional loading. Ensuring their adequate strength is therefore a fundamental requirement in design and assessment. Conventional approaches to strength evaluation using the finite element method (FEM) often face challenges due to the complexity of modeling stiffened geometries and the time-consuming setup required, which can reduce efficiency and limit accessibility for practical applications. To overcome these limitations, this study introduces the development of a graphical user interface (GUI) specifically designed to facilitate FEM-based strength analysis of stiffened panels under torsional loads. The GUI, implemented in Python, automates essential modeling steps, streamlines the input process, and enhances user interaction through an intuitive interface, thereby making torsional strength analysis more efficient and user-friendly. Numerical simulations were carried out on nine panel configurations, systematically combining three variations of plate thickness with three variations of longitudinal stiffener geometry. The results demonstrate that plate thickness has a direct influence on torsional resistance, with thicker plates exhibiting significantly higher strength, while stiffener design was also found to strongly affect performance: the 80 x 80 x 8 stiffener provided the greatest resistance against general torsional loading, whereas the 100 x 65 x 9 stiffener displayed superior behavior under pure torque conditions. 
These findings are consistent with theoretical predictions, confirming the reliability of the developed approach. Overall, the proposed GUI proves to be an effective tool for supporting FEM-based strength assessment of stiffened panels, offering both accuracy and efficiency, and highlights the potential of integrating computational modeling with user-oriented interfaces to broaden the applicability of FEM in structural engineering practice, particularly in analyzing complex torsional behaviors of stiffened panel systems.

Article
Social Sciences
Education

Joseph Xhuxhi

Abstract: This paper considers the impact of increased accountability on the professional identity of academics in British Higher Education and, consequently, the implications for academic professionalism. It explores and interrogates how the context of professional practice and the conditions of academic work are affected and changing. The manuscript discusses in detail the main challenges facing professionals in higher education and how the notions of trust, autonomy, and academic freedom are contested and challenged. I argue that these widespread changes challenge the traditional notion of academic professionalism and result in both the de-professionalisation and the re-professionalisation of the academic. The concept of a new academic professionalism is examined, drawing upon perspectives from the relevant literature. I conclude by suggesting a twofold action: the rethinking and reshaping of accountability together with a redefinition of academic professionalism. The manuscript draws upon theoretical perspectives, the relevant literature, and my own practical experience in my professional environment.

Article
Physical Sciences
Theoretical Physics

Emmanuil Manousos

Abstract: We present an axiomatic framework in which fundamental interactions emerge as necessary consequences of intrinsic self-variation of particle properties, constrained by energy–momentum conservation and causal propagation. Starting from four axioms, we show that Abelian and non-Abelian gauge structures arise naturally. Electromagnetism, Quantum Electrodynamics (QED), Quantum Chromodynamics (QCD), and the electroweak sector are recovered as effective descriptions of underlying self-variation dynamics. Renormalization, confinement, chiral symmetry breaking, and the Higgs mechanism are reinterpreted as geometric and dynamical consequences of self-variation. The Standard Model is thus an effective local limit rather than a fundamental theory.

Review
Chemistry and Materials Science
Analytical Chemistry

Sasa Savic, Sanja Petrovic, Zorica Knežević-Jugović

Abstract:

Polyphenols are a structurally diverse group of plant secondary metabolites widely recognized for their antioxidant, anti-inflammatory, antimicrobial, and chemoprotective properties, which have stimulated their extensive use in food, pharmaceutical, nutraceutical, and cosmetic products. However, their chemical heterogeneity, wide polarity range, and strong interactions with plant matrices pose major challenges for efficient extraction, separation, and reliable analytical characterization. This review provides a critical overview of contemporary strategies for the extraction, separation, and identification of polyphenols from plant-derived matrices. Conventional extraction methods, including maceration, Soxhlet extraction, and percolation, are discussed alongside modern green technologies such as ultrasound-assisted extraction, microwave-assisted extraction, pressurized liquid extraction, and supercritical fluid extraction. Particular emphasis is placed on environmentally friendly solvents, including ethanol, natural deep eutectic solvents, and ionic liquids, as sustainable alternatives that improve extraction efficiency while reducing environmental impact. The review further highlights chromatographic separation approaches—partition, adsorption, ion-exchange, size-exclusion, and affinity chromatography—and underlines the importance of hyphenated analytical platforms (LC–MS, LC–MS/MS, and LC–NMR) for comprehensive polyphenol profiling. Key analytical challenges, including matrix effects, compound instability, and limited availability of reference standards, are addressed, together with perspectives on industrial implementation, quality control, and standardization.

Article
Public Health and Healthcare
Health Policy and Services

Rodney P. Jones

Abstract:

Queuing theory and the Erlang equation are directly applicable to small hospital departments such as maternity and pediatrics. Bed capacity tables can be easily generated linking annual births/admissions to the required available beds, using expected births/admissions and length of stay (LOS). For example, in maternity the total bed days includes any admissions during pregnancy and after birth, i.e., excluding the time spent in the birthing unit. It is emphasized that bed days must be calculated using real-time length of stay as opposed to the usual midnight figure. The bed occupancy margin is directly linked to size and not ‘efficiency’. Based on the Erlang B equation, which links available beds, occupied beds and turn-away, a figure of 0.1% turn-away has been chosen to define the minimum acceptable number of beds, i.e., only 1 in a thousand admissions suffers a delay before a bed can be found. Two bed calculators are provided which can be used for obstetric, maternity, midwife-led, birthing ward and neonatal unit bed capacity. Specific issues relating to neonatal critical care bed capacity are highlighted. The negative effects of turn-away are likely to be context specific, hence, critical care > theatres > birthing unit > maternity unit. The far greater uncertainty regarding future births is discussed along with the variable nature of seasonality in births. For pediatrics, much of bed demand is also influenced by the trend in births. Suggestions are made for a pragmatic approach to bed planning. Evidence is presented which suggests that for maternity (and other relatively short-stay admissions) the majority of overhead/indirect costs and most staffing costs should be apportioned based on admissions, and not LOS. Apportionment based on LOS creates the spurious illusion that LOS is the major cost driver and that reducing LOS will immediately save costs.
Several lines of evidence point to the minimum cost per patient in maternity (antenatal + postnatal) occurring above 30 beds (plus associated labor/birthing beds), and the minimum economic size around 12 beds. Around 30 beds probably marks the point where it is possible to make small cost savings by reducing LOS. Allocating total organizational costs to individual units and then to patients is far less precise than is realized and can be done in different ways, all of which heavily rely on the steady-state assumption. The real world of daily arrivals, case mix and clinical severity is never in steady state. Below 20 to 30 beds, Poisson statistical plus environment-induced randomness in daily arrivals implies that staff costs become increasingly fixed irrespective of LOS. When bed availability is the bottleneck, reducing LOS may increase throughput per bed and increase income; however, is this for the benefit of the patient or for the benefit of the organization, and does it lead to higher unanticipated total costs, including patient harm? Finally, a list of nine ‘never do this’ catastrophic pitfalls is given to help doctors identify dubious capacity advice from managers and external ‘experts’.
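The Erlang B calculation described in the abstract can be sketched in a few lines of Python. This is a generic illustration of the standard recursive Erlang B formula together with a bed-count search against the 0.1% turn-away target, not the authors' calculators; the example load figures are hypothetical.

```python
def erlang_b(n_beds: int, offered_load: float) -> float:
    """Erlang B blocking ('turn-away') probability for n_beds and an
    offered load in erlangs (mean number of beds demanded)."""
    b = 1.0  # blocking probability with zero beds
    for k in range(1, n_beds + 1):
        # Standard recurrence: B(k) = A*B(k-1) / (k + A*B(k-1))
        b = (offered_load * b) / (k + offered_load * b)
    return b

def beds_for_turnaway(offered_load: float, target: float = 0.001) -> int:
    """Smallest bed count whose turn-away is at or below the target (0.1%)."""
    n = 1
    while erlang_b(n, offered_load) > target:
        n += 1
    return n
```

For instance, 3,000 annual admissions with a 2-day real-time LOS correspond to an offered load of 3000 × 2 / 365 ≈ 16.4 erlangs; `beds_for_turnaway(16.4)` then returns the smallest bed count meeting the 0.1% turn-away criterion, which sits well above the mean occupied beds, illustrating the occupancy margin being linked to size rather than 'efficiency'.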

Article
Medicine and Pharmacology
Clinical Medicine

Jeong Woo Kim, Chang Hee Lee

Abstract:

Background/Objectives: Considering the excretion pathways and gadolinium concentrations of gadolinium-based contrast agents (GBCAs), our institution has developed a tailored administration protocol for patients with renal impairment to facilitate more rapid elimination and minimal retention of gadolinium. This study aims to evaluate the 8-year clinical outcomes and safety of this institutional protocol. Methods: This single-center retrospective study included patients with renal impairment who underwent GBCA-enhanced MRI between January 2015 and December 2022. The protocol recommended specific GBCAs and adjusted doses based on chronic kidney disease (CKD) stage and serum bilirubin levels: gadoxetate disodium was used for patients with normal serum bilirubin levels, owing to its dual excretion pathway, while macrocyclic agents were used for those with elevated serum bilirubin levels. During the follow-up period, the occurrence of nephrogenic systemic fibrosis (NSF) and evidence of gadolinium deposition in brain tissues were evaluated. Results: A total of 288 patients (age, 64.6 ± 11.7 years; male, 64.9%) underwent 716 GBCA-enhanced MRI examinations in accordance with the institutional protocol. The cohort included 62 patients with CKD stage 4 and 131 patients with CKD stage 5 or undergoing hemodialysis. In patients with CKD stages 4 and 5 and those undergoing hemodialysis, 597 examinations were performed using gadoxetate disodium, and 119 used macrocyclic agents. No cases of NSF or gadolinium deposition in brain tissues were identified over mean follow-up intervals of 27.5 and 27.8 months, respectively. Conclusions: The tailored GBCA administration protocol, which considers excretion pathways and gadolinium concentrations, appears to be safe with respect to NSF and gadolinium deposition in brain tissues in patients with renal impairment.

Article
Biology and Life Sciences
Ecology, Evolution, Behavior and Systematics

Abraham Hefetz

Abstract:

The epicuticle of Cataglyphis niger is endowed with hydrocarbons comprising both linear and branched alkanes. The linear alkanes create an impermeable layer that protects the ants from desiccation, whereas the branched alkanes have communicative roles. Studies of the biosynthesis of both classes of hydrocarbons revealed disparate pathways, which suggests an independent evolution. It is hypothesized that the driving force for the evolution of alkanes was acquiring the means for attaining impermeability. Being more abundant in foragers, linear alkanes have been secondarily coopted for signaling colony foraging intensity and accordingly adjusting task allocation. The evolution of branched alkanes is less clear and seems more complex. They are biosynthetically derived from branched fatty acids, which may have been the roots of their evolution. Owing to their bactericidal activity, branched fatty acids evolved as a protective means. Secondarily, the biosynthesis of these acids was coopted for producing branched alkanes with communicative roles. Using branched alkanes as signals is adaptive because their numerous isomers convey a large informational content. Moreover, being hydrophobic, they blend within the linear alkane layer that covers the ants’ body surface. However, branched alkanes decrease cuticular impermeability, so hypothetically their proportions are the result of a tradeoff steady state.

Article
Medicine and Pharmacology
Epidemiology and Infectious Diseases

Ming Zheng

Abstract: This study aims to characterize both the pre-existing conditions that increase susceptibility to influenza and the long-term, post-acute sequelae ("long flu") that follow it. A longitudinal cohort study was conducted using data from the FinnGen cohort of 429,209 individuals, including 9,204 influenza cases. A disease-wide association study (DWAS) framework was employed, using Cox proportional hazards models adjusted for age and sex to analyze 110 influenza-comorbid clinical endpoints. Pre-existing conditions, most notably cardiovascular diseases such as heart failure, coronary atherosclerosis, atrial fibrillation, and stroke, were significantly associated with an increased likelihood of a subsequent influenza diagnosis. Following influenza, individuals had a substantially elevated risk of a durable, multi-system "long flu" syndrome. The most robust and persistent risks were for new-onset cardiovascular and neurological diseases. Risks for thromboembolic events, heart failure, atrial fibrillation, stroke, and myocardial infarction remained significantly elevated for one to five years following influenza. Similarly, influenza was associated with a long-term increased incidence of neurological disorders, including migraine (with and without aura), Alzheimer's disease, and dementia. These findings underscore the urgent need to intensify preventive strategies, particularly through targeted vaccination of at-risk individuals, and to develop integrated care pathways to manage the multi-organ sequelae of long flu.

Article
Computer Science and Mathematics
Applied Mathematics

Silvia Dedu, Florentin Șerban

Abstract:

Traditional mean–variance portfolio optimization proves inadequate for cryptocurrency markets, where extreme volatility, fat-tailed return distributions, and unstable correlation structures undermine the validity of variance as a comprehensive risk measure. To address these limitations, this paper proposes a unified entropy-based portfolio optimization framework grounded in the Maximum Entropy Principle (MaxEnt). Within this setting, Shannon entropy, Tsallis entropy, and Weighted Shannon Entropy (WSE) are formally derived as particular specifications of a common constrained optimization problem solved via the method of Lagrange multipliers, ensuring analytical coherence and mathematical transparency. Moreover, the proposed MaxEnt formulation provides an information-theoretic interpretation of portfolio diversification as an inference problem under uncertainty, where optimal allocations correspond to the least informative distributions consistent with prescribed moment constraints. In this perspective, entropy acts as a structural regularizer that governs the geometry of diversification rather than as a direct proxy for risk. This interpretation strengthens the conceptual link between entropy, uncertainty quantification, and decision-making in complex financial systems, offering a robust and distribution-free alternative to classical variance-based portfolio optimization. The proposed framework is empirically illustrated using a portfolio composed of major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB)—based on weekly return data. The results reveal systematic differences in the diversification behavior induced by each entropy measure: Shannon entropy favors near-uniform allocations, Tsallis entropy imposes stronger penalties on concentration and enhances robustness to tail risk, while WSE enables the incorporation of asset-specific informational weights reflecting heterogeneous market characteristics. 
From a theoretical perspective, the paper contributes a coherent MaxEnt formulation that unifies several entropy measures within a single information-theoretic optimization framework, clarifying the role of entropy as a structural regularizer of diversification. From an applied standpoint, the results indicate that entropy-based criteria yield stable and interpretable allocations across turbulent market regimes, offering a flexible alternative to classical risk-based portfolio construction. The framework naturally extends to dynamic multi-period settings and alternative entropy formulations, providing a foundation for future research on robust portfolio optimization under uncertainty.
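The entropy measures compared above have simple closed forms; a minimal sketch using their standard textbook definitions (the weight vectors below are illustrative, not the paper's BTC/ETH/SOL/BNB allocations):

```python
import math

def shannon_entropy(w):
    """Shannon entropy of a weight vector (natural log); maximal for uniform weights."""
    return -sum(wi * math.log(wi) for wi in w if wi > 0)

def tsallis_entropy(w, q=2.0):
    """Tsallis entropy of order q; penalizes concentration more strongly for q > 1."""
    return (1.0 - sum(wi ** q for wi in w)) / (q - 1.0)

uniform = [0.25, 0.25, 0.25, 0.25]       # near-uniform allocation favored by Shannon
concentrated = [0.70, 0.10, 0.10, 0.10]  # concentrated allocation

# Both measures rank the uniform portfolio as more diversified:
# shannon_entropy(uniform) = ln 4 ≈ 1.386 > shannon_entropy(concentrated) ≈ 0.940
# tsallis_entropy(uniform, 2.0) = 0.75 > tsallis_entropy(concentrated, 2.0) = 0.48
```

Maximizing either quantity subject to the budget constraint (weights summing to one) recovers the MaxEnt reading of diversification described in the abstract: with no further moment constraints, the least informative admissible distribution is the uniform allocation.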

Article
Biology and Life Sciences
Agricultural Science and Agronomy

Surendra Neupane, Adam Varenhorst, Madhav P. Nepal

Abstract:

Soybean aphid (SBA), Aphis glycines Matsumura (Hemiptera: Aphididae), and soybean cyst nematode (SCN), Heterodera glycines Ichinoe (Tylenchida: Heteroderidae), are major pests of soybean, Glycine max (L.) Merr., in the U.S. Midwest. This study examined three-way interactions among soybean, SBA, and SCN using demographic and transcriptomic analyses. SCN-resistant and SCN-susceptible cultivars were evaluated under three treatments (SBA, SCN, SCN+SBA) in a randomized complete block design with six replicates, repeated eight times in greenhouse cone-tainers. Plants were infested with 2,000 SCN eggs at planting or 15 SBA at the V2 stage. Aphid populations were counted at 5-, 15-, and 30-days post-infestation (dpi), and SCN eggs were sampled at 30 dpi. SCN egg density increased significantly in the susceptible cultivar but remained unchanged in the resistant cultivar in the presence of SBA, while SBA populations declined under SCN infestation. RNA-seq identified 4,637 differentially expressed genes (DEGs) at 5 dpi and 19,032 DEGs at 30 dpi. Analyses focused on DEGs shared across treatments but discordantly expressed in resistant cultivars during SBA–SCN interactions. Weighted Gene Co-expression Network Analysis revealed seven and nine modules at 5 and 30 dpi, respectively. Enrichment analyses identified ‘Plant–Pathogen Interaction’ and ‘Cutin, Suberin, and Wax Biosynthesis’ at 5 dpi, and ‘Isoflavonoid Biosynthesis’ and ‘One-Carbon Pool by Folate’ at 30 dpi. Several DEGs overlapped with SCN resistance QTLs, identifying candidate genes for cross-resistance breeding.

Article
Computer Science and Mathematics
Computer Vision and Graphics

Asri Mulyani, Muljono, Purwanto, Moch Arief Soeleman

Abstract: Diabetic retinopathy (DR) is a leading cause of vision impairment and permanent blindness worldwide, requiring an accurate and automated system to classify its multi-grade severity to ensure timely patient intervention. However, standard Convolutional Neural Networks (CNNs) often struggle to capture the fine, high-frequency microvascular patterns critical for diagnosis. This study proposes a Robust Intelligent CNN Model (RICNN) designed to improve multi-level DR classification by integrating Gabor-based feature extraction with deep learning. The model also incorporates SMOTE (Synthetic Minority Oversampling Technique) balancing and Adam optimization for efficient convergence. The proposed RICNN was evaluated on the Messidor dataset (1,200 images) across four severity levels: Mild, Moderate, Severe, and Proliferative DR. The results showed that RICNN achieved superior performance with 89% accuracy, 88.75% precision, 89% recall, and 89% F1-score. The model also demonstrated high robustness in identifying advanced stages, achieving AUCs of 97% for Severe DR and 99% for Proliferative DR. Comparative analysis confirms that texture-aware Gabor enhancement significantly outperforms Local Binary Pattern (LBP) and Color Histogram approaches. These findings indicate that the proposed RICNN provides a reliable and intelligent foundation for clinical decision support systems, potentially reducing diagnostic errors and preventing vision loss in high-risk populations.
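Gabor-based feature extraction, which the abstract credits with capturing fine microvascular texture, starts from a standard 2D Gabor kernel. A minimal pure-Python sketch follows; the parameter names and values are illustrative and do not describe the RICNN's actual filter bank:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter kernel as a list of rows.
    theta: orientation (radians); lam: sinusoid wavelength (pixels);
    sigma: Gaussian envelope width; gamma: spatial aspect ratio; psi: phase."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope modulating an oriented sinusoid
            g = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel
```

Convolving a fundus image with a small bank of such kernels at several orientations θ yields texture maps that can be stacked as extra input channels for a CNN, which is the general idea behind texture-aware enhancement of this kind.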

Article
Medicine and Pharmacology
Psychiatry and Mental Health

Ngo Cheung

Abstract: Background: Hoarding disorder (HD) has been classed alongside obsessive-compulsive disorder (OCD) for decades, yet its later age of onset, ego-syntonic saving, and limited response to OCD treatments imply a separate biology. Methods: We re-analysed the 2022 genome-wide association meta-analysis of hoarding symptoms with the same three-step pipeline recently applied to a larger 2025 OCD GWAS. The approach combined (1) MAGMA gene-based tests, (2) partitioned heritability by stratified LD-score regression and custom χ² enrichment, and (3) S-PrediXcan transcriptome-wide association in six brain regions. Identical annotation panels—two glutamatergic sets, two pruning sets, a monoaminergic control, and a housekeeping control—were applied to both disorders to allow direct comparison. Results: HD showed no single-variant genome-wide hits but did reveal pathway-level patterns distinct from OCD. Hoarding heritability concentrated in genes supporting adult synaptic plasticity and cellular metabolism, most notably the BDNF → TrkB → mTOR → CREB cascade and sigma-1/CYP homeostatic modules. The strongest nominal signals included predicted down-regulation of NTRK2 and enrichment of several mTOR components. Pruning pathways displayed modest, secondary enrichment. By contrast, OCD heritability was dominated by immune-mediated synaptic elimination, adhesion, and astrocytic support genes, with glutamatergic panels contributing little. Conclusions: The data argue against a single "pruning-driven" mechanism for compulsive disorders. Instead, they support a model in which HD arises mainly from impaired adult synaptic remodeling and metabolic resilience within reward and decision circuits, producing enduring attachment to possessions rather than ritualistic neutralisation.
This plasticity framework matches the clinical picture of HD and suggests new treatment directions that enhance circuit flexibility—such as BDNF or mTOR agonism—rather than attempting to reverse developmental pruning defects. Replication in larger, deeply phenotyped hoarding cohorts is needed to confirm these findings and to refine therapeutic targets.

Review
Medicine and Pharmacology
Medicine and Pharmacology

Mariana Hirata, Rogerio Padovan Gonçalves, Maria Eduarda Teixeira Pereira Cândido da Silva, Geovanna de Castro Feitosa, Caio Sérgio Galina Spilla, Domingos Donizeti Roque, Lisete Horn Belon Fernandes, Virgínia Maria Cavallari Strozze Catharin, Vitor Cavallari Strozze Catharin, Leila Maria Guissoni Campos, +8 authors

Abstract:

Background/Objectives: Breast cancer is a biologically complex malignancy whose high prevalence and therapeutic resistance represent a continuous challenge for global health. The Tumor Microenvironment (TME) is a crucial component in disease progression, and the Extracellular Matrix (ECM), particularly its 3D collagen architecture, is recognized for mediating interactions that influence invasion, metastasis, and pharmacological response. This review aims to critically synthesize recent evidence to elucidate the multifaceted role of collagen in the progression and modulation of therapeutic response in breast adenocarcinoma. Methods: A comprehensive literature review was conducted, analyzing studies addressing specific collagen subtypes, ECM stiffening (fibrosis), biomechanical signaling, and its impact on drug transport kinetics and immunomodulatory effects. Results: The results demonstrate that structural alterations of collagen not only orchestrate a pro-tumoral microenvironment, fostering aggressive phenotypes and immune evasion, but also create a physical barrier that compromises drug delivery efficiency and promotes metastatic dissemination. The synthesis of the data reinforces collagen as a potent prognostic biomarker and a promising therapeutic target for overcoming stroma-mediated resistance. Conclusions: Targeting the collagen-rich stroma and its 3D network is a critical frontier for therapeutic innovation. Developing adjuvant strategies to modulate the ECM has the potential to enhance clinical outcomes and optimize the distribution of antineoplastic agents, especially in patients with high degrees of tumor fibrosis.

Article
Chemistry and Materials Science
Inorganic and Nuclear Chemistry

Ian R. Butler, Peter N. Horton, William Clegg, Simon J. Coles, Lorretta Murphy, Steven Elliott

Abstract:

The family of N,N-dimethylaminomethylferrocenes is one of the most important in ferrocene chemistry. These compounds serve as precursors for a range of anti-malarial and anti-tumour medicinal compounds, in addition to being key precursors for ferrocene ligands in the Lucite alpha process. A brief discussion of the importance and synthesis of N,N-dimethylaminomethyl-substituted ferrocenes precedes the synthesis of the new ligand 1,1´,2,2´-tetrakis-(N,N-dimethylaminomethyl)ferrocene. The crystal structure of this compound is reported and a comparison is made with its disubstituted analogue, 1,2-bis-(N,N-dimethylaminomethyl)ferrocene. The tetrahedral nickel dichloride complexes of both ligands have been crystallographically characterised. Finally, a pointer to future research in the area is given, including a discussion of a new method for extracting ferrocenylmethylamines from mixtures using additives and a new synthetic route starting from substituted cyclopentadiene itself.

Article
Physical Sciences
Astronomy and Astrophysics

U.V. S. Seshavatharam, S. Lakshminarayana

Abstract: Traditional cosmological redshift is defined as unbounded wavelength stretching from zero to infinity, which is inconsistent with a photon‑energy interpretation and implies physically unreasonable energy loss or divergence. In earlier work, a photon‑energy–based redshift z_new=z/(1+z), naturally bounded between 0 and 1, was introduced and embedded in a Hubble–Hawking cosmological model with positive curvature and light‑speed cosmic rotation. Methods: Using the energy‑based redshift within this rotating Hubble–Hawking framework, direct analytic relations are derived connecting the cosmic scale factor, the Hubble parameter, the age of the universe, luminosity and comoving distances, galactic recession speeds, and a revised form of Hubble’s law with angular velocity equal to the Hubble parameter. The same redshift prescription is then applied to the Son et al. progenitor‑age–corrected Pantheon+ supernova sample to perform a purely kinematic re‑analysis of the expansion history. Results: The analytic relations indicate that the universe has been continuously decelerating since the Planck era, as steadily increasing baryonic mass slows an initially light‑speed expansion, and they predict a slow future decline of the 2.725 K cosmic microwave background temperature. In the re‑analysis of Pantheon+ with progenitor‑age bias removed and z_new=z/(1+z) adopted, the best‑fit solution shifts from mild deceleration to strong, continuous deceleration, incompatible with the late‑time acceleration required by flat ΛCDM; the supernova data no longer favor an accelerating universe but instead support a cosmos that has been decelerating throughout its post‑Planck evolution. 
Independently, the CMB temperature can be related to a Hubble–Hawking temperature via the geometric mean of the Hubble mass and the Planck mass, implying that the product of cosmic mass and the square of the cosmic temperature remains approximately constant, and yielding a current baryon acoustic bubble radius of 135.2 Mpc that can be used to refine the true expansion (or deceleration) rate. On galactic scales, an empirical “super gravity” relation with a mass limit for ordinary gravity of roughly 180 million solar masses reproduces both low-dark-matter and high-dark-matter galaxies by scaling effective dark mass as (baryon mass)^1.5. Conclusions: Taken together, the energy-based redshift, the nearly isotropic CMB sky, and the spinning black-hole-like Hubble–Hawking universe form a single, self-consistent narrative in which the present cosmos is nearly radially static, dominated by light-speed rigid rotation and net deceleration rather than late-time acceleration. In this picture, supernova distances, CMB isotropy, and BAO scales are unified by strong binding and rigid co-rotation into a rotating universe with negligible true expansion. Planck’s nearly isotropic CMB sky, usually interpreted as evidence for a homogeneous FLRW universe with accelerating expansion, can instead be reinterpreted as evidence for a late, slow deceleration phase in a light-speed rotating, positively curved Planck–Hubble–Hawking universe.
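The bounded redshift at the core of this abstract, z_new = z/(1+z), can be transcribed directly. The sketch below only checks its stated bounds and its photon-energy reading (E_obs/E_emit = 1/(1+z)), independently of the cosmological model built on it:

```python
def z_energy(z: float) -> float:
    """Energy-based redshift z_new = z / (1 + z), bounded in [0, 1)."""
    return z / (1.0 + z)

def energy_ratio(z: float) -> float:
    """Observed-to-emitted photon energy ratio, 1 / (1 + z)."""
    return 1.0 / (1.0 + z)

# Algebraically, z_new = 1 - E_obs/E_emit: the fractional photon-energy
# loss, which runs from 0 (at z = 0) toward 1 as z grows without bound.
```

This makes the contrast in the abstract concrete: the conventional z is unbounded, while z_new saturates below 1 however large the wavelength stretching becomes.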

Article
Medicine and Pharmacology
Psychiatry and Mental Health

Piotr Lorkiewicz, Justyna Adamczuk, Justyna Kryńska, Mateusz Maciejczyk, Małgorzata Żendzian-Piotrowska, Robert Flisiak, Anna Moniuszko-Malinowska, Napoleon Waszkiewicz

Abstract: Viral infections have been implicated in psychiatric outcomes through immune-mediated pathways. This 12-month prospective cohort study compared psychiatric symptoms and inflammatory cytokine profiles in patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), hepatitis C virus (HCV), and tick-borne encephalitis virus (TBEV), and assessed their predictive value. Thirty-seven patients hospitalized with viral infections and 32 healthy controls were evaluated using psychiatric interviews and the Hospital Anxiety and Depression Scale (HADS). The study was divided into two stages. In Stage 1, during the acute infection, a psychiatric assessment was conducted and cytokine levels were measured in the patients’ blood. In Stage 2, one year later, the psychiatric assessment was repeated. No significant differences were found in psychiatric diagnosis rates or symptom severity between infection groups, regardless of viral type or neuroinvasive capacity. Some cytokines (e.g., IL-1β, TNF-α, IL-10, and sIL-2Rα) showed associations with individual symptoms, but these were inconsistent and not predictive. Cluster analysis identified two distinct inflammatory profiles: one characterized by higher cytokine levels (predominantly in COVID-19 and TBEV cases) and the other by lower cytokine levels (mostly in HCV and controls). However, the different cytokine profiles did not correspond to clinical outcomes. The results suggest that psychiatric sequelae after viral infections are not directly driven by specific cytokines or infection type but rather emerge from a complex interaction of immune, psychological, and environmental factors. A single cytokine measurement is insufficient and cannot be used as a tool for assessing the risk of developing psychiatric disorders. Future studies should focus on composite biomarkers and systems-based models, such as neuroimmune-metabolic-oxidative pathways (NIMETOX) and the Immune-Inflammatory Response System (IRS)/Compensatory Immune Response System (CIRS)/Oxidative & Nitrosative Stress (O&NS) framework, for improved predictive accuracy.

Article
Business, Economics and Management
Business and Management

Batoul Modarress-Fathi

,

Alexander Ansari

,

Al Ansari

Abstract: This research examines how rising pressures from global risks, including natural disasters, geopolitical conflicts, cybercrime, and government regulations, affect sustainable supply chains and logistics systems. These pressures are becoming more frequent, intense, and unpredictable. The Global Supply Chain Pressure Index, developed by the Federal Reserve Bank of New York, serves as a proxy for quantifying these pressures and as a comprehensive metric of macroeconomically driven stress within global supply chains and logistics systems, although further investigation of its scope is warranted. Quantitative analyses indicate that systemic global risks have a significantly positive effect on these systems. However, additional analyses show that the influence of macroeconomic indicators on these systems remains generally low to moderate. Supplementary statistical tests demonstrate that, among external systemic risks, government trade regulations, cybercrime and cyberattacks, the transportation index, and political conflicts are significant predictors of pressure on global supply chains and logistics. These factors serve as indicators for forecasting economic fluctuations that lead to disruptions, delays, and added costs in supply chain and logistics systems.

Review
Environmental and Earth Sciences
Soil Science

Saif Alharbi

,

Khalid Al Rohily

Abstract: Land degradation (LD) is among the dominant threats of the decade, deteriorating arable lands globally. This intensification of LD has prompted global governing bodies and researchers to address the problem through sustainable and eco-friendly approaches. Geographical mapping is critical for analyzing land formation, land types, and land use; data-based maps provide a detailed overview of land use. In this study, we created simplified SRTM-based maps of Saudi Arabia covering soil types, soil thickness, and land use for vegetation or agriculture using GIS tools. The GIS analyses showed that most of the country's area is sandy, followed by loam and sandy loam, while soil thickness is concentrated either in the 0-4 m or the 43-50 m range. This geological overview of the country could be instrumental in assessing soil types and determining what inputs or steps could make each soil type fertile. We also describe the land degradation pathways affecting the country's arable lands and explain approaches for assessing such land losses. In addition, we outline the most suitable mitigation strategies, including mulching, cover cropping, agroforestry, riparian buffer strips, terracing, and improved nutrient use efficiency. The article also reviews the aims of the Saudi Green Initiative and the steps taken by international bodies such as the UNDP, UNEP, FAO, and the World Bank to mitigate land degradation in the region. However, further studies are required to assess the effectiveness of these solutions for each soil type and thickness.

Brief Report
Environmental and Earth Sciences
Sustainable Science and Technology

Martin Kozelka

,

Jiří Marcan

,

Vladislav Poulek

,

Václav Beránek

,

Tomáš Finsterle

,

Agnieszka Klimek-Kopyra

,

Marcin Kopyra

,

Martin Libra

,

František Kumhála

Abstract: Ground-mounted photovoltaics, including agrivoltaic concepts, are increasingly deployed on agricultural land. In practice, modules damaged during repowering are sometimes stored on-site for prolonged periods, creating localized vegetation suppression and land-stewardship concerns that are rarely quantified. We present two anonymized case studies from Czechia (nominal capacities of 0.861 and 1.109 MWp; commissioned 2010 and 2009; repowered 2022 and 2021), where modules with cracked backsheets and/or broken front glass were stacked and stored directly on grasslands within PV parcels. Using GIS delineation on orthophotos supported by field photographs, we quantified the land area (19,560 and 22,100 m²), PV panel area (plan-view; 4,960 and 5,080 m²), and stored PV module area (plan-view storage footprint; 109 and 100 m²). Stored module counts were estimated from visible stacks (≈1800 and ≈2000 modules). Using a conservative mass range of 18–25 kg/module, the stored masses were ~32–45 t and ~36–50 t, respectively. Although the storage footprints constitute <1% of the land area, they create persistent “dead zones” on agricultural land and concentrate tens of tonnes of material directly on the soil. We discuss regulatory and economic barriers to timely removal in the context of circular-economy goals and propose practical reporting indicators for repowering projects on agricultural land: Astore (m²), Nstore (pcs), Mstore (t), storage duration, condition class, and storage interface.
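The stored-mass figures quoted in the abstract above follow from simple arithmetic on the estimated module counts and the assumed 18–25 kg/module mass range; a minimal sketch (the function name is illustrative, not from the paper):

```python
def stored_mass_range_t(n_modules, kg_min=18, kg_max=25):
    """Return (min, max) stored module mass in tonnes for n_modules,
    given a per-module mass range in kilograms."""
    return (n_modules * kg_min / 1000, n_modules * kg_max / 1000)

# Site A: ~1800 modules; Site B: ~2000 modules (counts from visible stacks)
site_a = stored_mass_range_t(1800)  # (32.4, 45.0) t, i.e. ~32-45 t
site_b = stored_mass_range_t(2000)  # (36.0, 50.0) t, i.e. ~36-50 t
```

The computed ranges match the ~32–45 t and ~36–50 t values reported for the two sites.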

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated