Physical Sciences

Article
Physical Sciences
Theoretical Physics

Shin-Ichi Inage

,

Kouei Ajito

Abstract: This paper introduces the Monte Carlo Stochastic Optimization Technique (MOST), a global optimization framework based on region-wise integral comparison. Unlike classical pointwise methods, MOST evaluates candidate regions through aggregated objective values, enabling a structured and global exploration of the search space. We establish a unified theoretical foundation. Deterministic geometric shrinking of regions ensures that their diameters converge to zero, while a non-circular integral separation principle guarantees global convergence. Incorporating Monte Carlo estimation, we derive exponential concentration bounds and prove almost sure convergence under suitable sampling schedules. For constrained problems, we introduce an extended functional whose minimizers are equivalent to Karush–Kuhn–Tucker (KKT) points, allowing constraint handling without projection or penalty tuning. The framework is further extended to multi-objective optimization, where convergence to Pareto–KKT stationary points is established. Numerical experiments on multimodal benchmark functions confirm the theoretical results. Overall, MOST provides a derivative-free, deterministic–probabilistic framework for global optimization that extends naturally to constrained and multi-objective settings.
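As a minimal illustration of the region-wise idea described in the abstract (not the authors' actual algorithm), the following toy sketch compares sub-regions by Monte Carlo estimates of their mean objective value and shrinks the retained region geometrically; all names and parameters are illustrative.

```python
import random

def most_minimize(f, lo, hi, iters=60, samples=200, seed=0):
    """Toy 1-D region-comparison sketch: estimate the mean objective value
    (a normalized integral) of each half of the current interval by Monte
    Carlo, keep the half with the smaller estimate, and repeat.  The
    interval diameter halves each step, so it shrinks to a point."""
    rng = random.Random(seed)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        means = []
        for a, b in ((lo, mid), (mid, hi)):
            pts = [a + (b - a) * rng.random() for _ in range(samples)]
            means.append(sum(f(x) for x in pts) / samples)
        lo, hi = (lo, mid) if means[0] <= means[1] else (mid, hi)
    return 0.5 * (lo + hi)

# smooth unimodal test case; the abstract's full scheme also treats
# multimodal objectives, which this two-region toy does not capture
x_min = most_minimize(lambda x: (x - 1.3) ** 2, -4.0, 4.0)
```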

Article
Physical Sciences
Particle and Field Physics

Tejinder P. Singh

Abstract: We present a self-contained gauge-sector account of the octonionic programme, starting from the underlying trace-dynamics Lagrangian and ending with closed-form expressions for the strong and electromagnetic couplings, together with a brief review of the weak mixing angle. The derivation has three steps. First, inside the visible bosonic sector we derive the broken-phase relation \( \alpha_s/\alpha_{\mathrm{em}} = 16 \) from a single visible Yang–Mills coupling before symmetry breaking. The mechanism combines the standard visible charge-trace factor \( 8/3 \) with a six-direction support factor \( 6 \) on the real octonionic ladder space \( H_6 \). Second, we recall the 2022 Eur. Phys. J. Plus paper [1], where the minimal visible charge quantum \( q_0 = 1/3 \) fixes the exponential seed \( A := \exp[q_0(q_0 - 3/8)] = \exp[(1/3)(1/3 - 3/8)] \). Combining this seed with the charged-sector datum \( 3/8 \) gives \( \alpha_s^{\mathrm{thv}}(M_Z) = (9/64)\exp[(2/3)(1/3 - 3/8)] = 0.11675418 \), while the broken-phase factor \( 16 \) then yields \( \alpha_{\mathrm{em}}^{\mathrm{thv}}(0) = (9/1024)\exp[(2/3)(1/3 - 3/8)] = 0.00729713629 \). Third, we briefly review the earlier spinorial derivation of the weak mixing angle [2], which leads to \( 1 = \cos^2(\theta_W/2) + \sin(\theta_W/2) \), \( \sin^2\theta_W^{\mathrm{thv}} = 0.24969776 \). A key conceptual point is that the seed is attached to the minimal visible charge quantum \( q_0 = 1/3 \), not to a specific particle species. The electron, whose charge is \( 1 = 3q_0 \), is not omitted: its contribution enters explicitly through the electromagnetic charge trace \( k_{\mathrm{em}} = 8/3 \). In this form the derivation of \( \alpha_{\mathrm{em}} \) is conceptually sharper than in the earlier Eur. Phys. J. Plus presentation [1], because the factor \( 1/16 \) is no longer hidden in a length-identification step but is derived directly from the visible broken-phase gauge structure.
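A quick numerical check, using only values quoted in the abstract, confirms the broken-phase relation: the two closed-form couplings share the same exponential factor, so their ratio reduces to (9/64)/(9/1024) = 16.

```python
# Arithmetic consistency check of the quoted broken-phase relation
# alpha_s / alpha_em = 16, using only numbers stated in the abstract.
alpha_s = 0.11675418      # quoted alpha_s^thv(M_Z)
alpha_em = 0.00729713629  # quoted alpha_em^thv(0)

# the exponentials cancel in the ratio, leaving the prefactor ratio
prefactor_ratio = (9 / 64) / (9 / 1024)
assert prefactor_ratio == 16.0

ratio = alpha_s / alpha_em
print(f"alpha_s/alpha_em = {ratio:.6f}")  # ≈ 16.000000
```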

Article
Physical Sciences
Astronomy and Astrophysics

Huang Hai

Abstract: We derive an effective gravitational potential \( \Phi_{\mathrm{halo}}(r) \sim -[\ln(r/r_*) + 1]/r \) from the asymptotic behavior of dark matter halo models. At microscopic scales, the logarithmic term changes sign, producing repulsion that prevents matter from collapsing into a singularity. The corresponding logarithmically corrected Schwarzschild metric yields parameter-free, a priori predictions for the shadows of Sgr A* and M87* that agree with Event Horizon Telescope observations. Six falsifiable predictions for unobserved black holes, particularly NGC 315, can discriminate this metric from the Kerr solution. On galactic scales, the same logarithmic term fits rotation curves of the Milky Way, Andromeda, and NGC 2974 using only ordinary matter, and passes the Bullet Cluster lensing test. Tidal effects in the Solar System are far below current experimental limits, ensuring consistency with the equivalence principle and parameterized post-Newtonian tests. We further derive the modified field equations via coarse-grained variation (Appendix B) from the effective action of a quantum vortex background, providing a more complete theoretical bridge to the modified Poisson equation and metric used in the main text. This effective framework indicates that gravitational phenomena from black holes to galaxies may share a common quantum topological origin. It provides a unified, testable alternative to the dark matter problem and points to a potential path for the observational detection of quantum gravity effects.
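Differentiating the quoted potential (with the G M prefactor dropped and r_* normalized to 1, an illustrative convention) gives dΦ/dr = ln(r/r_*)/r², so the radial force changes sign exactly at r = r_*, matching the attraction/repulsion behavior the abstract describes. A minimal check:

```python
import math

def grad_phi(r, r_star=1.0):
    """d(Phi)/dr for Phi(r) = -(ln(r/r_star) + 1)/r (prefactors dropped).
    Differentiating gives dPhi/dr = ln(r/r_star)/r**2, so the radial
    force -dPhi/dr is attractive for r > r_star and repulsive for
    r < r_star, with the sign change exactly at r = r_star."""
    return math.log(r / r_star) / r**2

assert grad_phi(2.0) > 0            # attractive regime (-dPhi/dr < 0)
assert grad_phi(0.5) < 0            # repulsive regime at small r
assert abs(grad_phi(1.0)) < 1e-12   # sign change at r = r_star
```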

Article
Physical Sciences
Theoretical Physics

Shin-Ichi Inage

Abstract: This study presents a unified measure-theoretic formulation of the Monte Carlo Stochastic Optimization Technique (MOST), establishing a rigorous framework that encompasses both continuous and discrete optimization. Unlike conventional optimization methods that operate on pointwise evaluations, MOST is based on regional evaluation through normalized integrals, enabling robust and global exploration of the search space. We first reformulate MOST within a finite measure space, where the evaluation of a region is defined as the measure-weighted average of the objective function. This formulation naturally connects regional optimization with expectation under an induced probability measure and provides a theoretical foundation for Monte Carlo approximation. Building upon this framework, we construct a discrete version of MOST by introducing the counting measure and extend it further using weighted measures to rigorously handle odd-cardinality partitions via midpoint sharing. A central contribution of this work is the demonstration that continuous and discrete MOST are structurally identical algorithms arising from a single measure-based principle, differing only in the choice of underlying measure. This result eliminates the traditional separation between continuous and discrete optimization within the MOST framework. Theoretical analysis reveals that MOST is particularly effective when near-optimal regions possess non-negligible measure, while its performance may degrade in the presence of isolated global minima. These properties are validated through numerical experiments using benchmark functions, including the Ackley and Sphere functions, under uniform discretization. The results confirm that discrete MOST achieves accurate approximations of global optima, with errors controlled by discretization resolution and strong robustness in multimodal landscapes. 
Overall, this work establishes MOST as a measure-based optimization paradigm, offering a unified, theoretically grounded, and practically robust approach to global optimization across continuous and discrete domains.
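The counting-measure evaluation underlying the discrete variant can be sketched as a grid-point average whose error is controlled by discretization resolution; the function and grid choice below are illustrative, not the paper's exact construction.

```python
# Regional evaluation in the measure-based sense: the score of a region is
# the normalized integral of f, i.e. an average under the chosen measure.
# With the counting measure on a uniform midpoint grid, this is simply the
# mean of f over the grid points.

def region_score_discrete(f, a, b, n):
    """Counting-measure evaluation: mean of f over n uniform midpoints."""
    pts = [a + (b - a) * (i + 0.5) / n for i in range(n)]
    return sum(f(x) for x in pts) / n

f = lambda x: x * x
# exact normalized integral of x^2 over [0, 1] is 1/3
coarse = region_score_discrete(f, 0.0, 1.0, 10)
fine = region_score_discrete(f, 0.0, 1.0, 10000)
assert abs(fine - 1/3) < abs(coarse - 1/3)  # error shrinks with resolution
assert abs(fine - 1/3) < 1e-6
```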

Article
Physical Sciences
Quantum Science and Technology

Moses Rahnama

Abstract: We prove a conditional uniqueness theorem for projective measurements on pure states: given the \( L^2 \) Hilbert space structure of quantum mechanics, the Born rule \( P = |\psi|^2 \) is the only outcome-local probability assignment compatible with five operational postulates (outcome-locality, normalization, phase independence of the classical record, tensor product factorization, and continuity), with interference consistency used only as a post-hoc check. Physically, the theorem addresses a boundary problem: amplitudes support interference and cancellation before measurement, whereas a stabilized classical record cannot retain the full relative-phase distinction structure in accessible form. Irreversible record formation motivates this phase-insensitivity requirement, although no thermodynamic quantity enters the formal proof. Mathematically, phase independence reduces the map to a function of modulus, factorization and continuity force a power law, and consistency with \( L^2 \) normalization fixes the exponent to \( 2 \). Within this framework, the squared modulus is the unique classicalization map from phase-sensitive amplitudes to accessible record weights. The result is complementary in physical motivation to Gleason-type derivations but narrower in scope: it is confined to the pure-state projective setting and does not derive the general trace rule for mixed states and POVMs.
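The exponent-fixing step can be illustrated numerically: once phase independence and factorization force a power law P_i = |ψ_i|^α, compatibility with L² normalization for every normalized state singles out α = 2. The check below illustrates only this final step, not the full proof.

```python
# If record weights are a power law P_i = |psi_i|**alpha, requiring the
# weights to sum to 1 for every L2-normalized state forces alpha = 2.
def total_weight(amplitudes, alpha):
    return sum(abs(a) ** alpha for a in amplitudes)

psi = [3/5, 4j/5]          # L2-normalized: (3/5)^2 + (4/5)^2 = 1
assert abs(total_weight(psi, 2) - 1.0) < 1e-12   # Born rule: normalized
assert total_weight(psi, 1) > 1.0                # alpha = 1 over-counts
assert total_weight(psi, 3) < 1.0                # alpha = 3 under-counts
```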

Article
Physical Sciences
Theoretical Physics

Michael I. Ojovan

Abstract: Human subjective time exhibits a universal acceleration with age, which we describe using a formal analogy between the dynamics of subjective time and gravitational time dilation in general relativity. To this end, we construct a two-dimensional informational state space defined by accumulated structured experience I(t) and future-directed value λ(t). An informational potential Ψ(I,λ) induces a metric of subjective time analogous to the Schwarzschild metric. This framework naturally yields horizons, points of no return, and black-hole-like collapse corresponding to terminal cognitive decline. The model provides a unified geometric interpretation of aging, burnout, life cycles, and the phenomenology of time.

Article
Physical Sciences
Optics and Photonics

Steffen Wilbrandt

,

Olaf Stenzel

Abstract: The determination of the linear optical constants of solids is an important part of solid-state optical characterization. Reflection spectroscopy and ellipsometry of surfaces or thin solid films are established techniques for accessing those optical constants; however, they may suffer from an ambiguity in the obtained optical constants. We discuss methods for identifying the physically meaningful solution from the solution multiplicity, making use of a proper combination of independent measurements. Elaborating contours of constant reflectance (iso-reflectance curves) facilitates reliable identification of the correct optical constants. A numerical criterion is further provided to select suitable combinations of measurements. The procedure is demonstrated on simulated spectra of a Nb2O5 film in the spectral region where the onset of the fundamental absorption edge is observed.

Article
Physical Sciences
Astronomy and Astrophysics

John Henderson

Abstract: A number of approaches to a theory of quantum gravity assume the cosmological fabric of spacetime is distinct from the spacetime of the material world of matter and energy. On a short time scale, one cannot distinguish between the fabric of spacetime expanding, nominally due to dark energy, or the scale of the material world contracting with respect to the fabric of spacetime. Contraction of the scale of the material world (length, time, mass, and charge contracting equally) maintains observable physical laws and results in a decreasing derived dark energy density matching that reported by the DESI Collaboration in March 2025. The DESI Collaboration fits to the dark energy density over time show a distinct difference between those using scale-dependent supernovae data and those using mostly scale-independent angular measurements, such as from CMB and BAO measurements. That difference is resolved by applying a scale contraction rate of ~3%/Gyr to the supernovae data. Scale contraction of the material world eliminates the need for dark energy to explain the apparent expansion of space, resolving the ~10^122 discrepancy between the dark energy density required to match observation and that calculated for the vacuum energy as the mechanism for dark energy. The large force of the vacuum energy is a potential mechanism for compression of the material world, and would explain why the observed expansion only occurs outside of gravitationally bound systems. A scale-contraction model for cosmological kinematics explains why the dark energy density appears to be decreasing without requiring the underlying vacuum energy to be changing with time. 
Scale contraction of the material world predicts the observed directions and order of magnitude of the Hubble tension and the S8 tension. This has been a challenge for other proposed modifications of Lambda-CDM, since the two tensions have opposite trends over time: the Hubble constant is about 10% larger in the late universe than in the early universe, while the structure parameter S8 is about 10% smaller. Scale contraction of the material world can be tested by modifying the Lambda-CDM cosmological model to include scale contraction over time and assessing whether the Hubble, S8, and other tensions are quantitatively reduced or resolved.

Article
Physical Sciences
Theoretical Physics

Harmen H. Hollestelle

Abstract: This paper is Part 2, continuing the investigation of Part 1 (2024). To recall, Part 1 derived two theorems defining an axiomatic approach to the time-interval-only description. The time-interval-only set approach is an alternative to the usual one-moment time description and the traditional tangent approach to change and differentiation. The two fundamental, nearly equivalent set theorems are the multiplication linearity theorem and the multiplication closure theorem; both state that the product of two time intervals is again a time interval. The present paper constructs an overall simultaneous-emission example surface for any approximation of the limit surface of simultaneous spherically symmetric wave emission, applying a variation of a well-known construction for indecomposable continua. To evaluate the example surface and its construction, several comments and theorems are derived. These include proofs of two new theorems, emission theorems I and II, related to the identity transformation or mapping, which have value in their own right as mathematical theorems. Emission theorems I and II depend on the ‘current parameter’ concept, applying a multiple ‘dimensional domain definition’ within both the one-moment time and time-interval-only descriptions, concepts defined using set representation theory. Another part of the approach is the linear functional theorem. Within the time-interval-only description, zero and non-zero equilibrium and simultaneous emission are defined, with ‘zero’ temperature taken to be the non-zero lowest-bound temperature, resembling boson-type simultaneous emission within a cosmology of groups of multiple star sources. Since the finite time intervals introduced are asymmetric, this approach can be integrated with Curie’s principle for asymmetry. The current-parameter definition allows a comment on the fundamental theorem on polynomials.
Assuming simultaneous-emission internal interaction, including electromagnetic kinetic energy and gravitational energy, yields several overall results, including the introduction of a time-interval-only-description gravitation constant.

Communication
Physical Sciences
Theoretical Physics

Andrew Wutke

Abstract: Relative simultaneity remains a highly debated issue. It is presented as a necessary physical consequence of the Lorentz transformation (LT). However, we demonstrate that the phenomenon is an artefact of a 'mixed-coordinate' algebraic representation, which does not guarantee that the stationary-system S 4-vector and its transform in the moving system S’ are both covariant. A covariant representation of a transformed 4-vector whose components are explicit functions of time requires those components to be expressed as functions of the local time: t in S and t' in the moving frame S'. While one-step multiplication of the LT matrix by the 4-vector in S yields correct algebraic expressions, the ‘raw’ resulting 4-vector retains the variable t throughout all components. This ‘mixed-coordinate’ representation is incomplete; it is not in a form covariant with the vector in S because its components are not functions of t'. The variable t must be eliminated by equating the first component of the transformed vector to ct’ and substituting the resulting expression into all instances of t in the ‘raw’ 4-vector. After this procedure is applied to two simultaneous events in S, the apparent time difference between the events in S’ becomes ∆t’=0. The effect of relative simultaneity, which appears in the ‘mixed-coordinate’ representation, is absent. This highlights the role of an emergent ‘absolute-like time’ hidden within the structure of the LT equations affecting temporal relations, and suggests that the "Relative Now" discussed by Eddington in 1927, resulting from the widely known conclusion ∆t’≠0, is a mathematical artefact of a coordinate convention rather than physical reality.

Article
Physical Sciences
Quantum Science and Technology

Moses Rahnama

Abstract: We propose that quantum measurement can be analyzed as an operational irreversibility transition, or boundary event in the limited operational sense used here: the protocol stage at which a reversible system/pointer correlation is driven across a practical irreversibility threshold into an operationally stable record-bearing channel. We formulate a three-stage taxonomy separating reversible premeasurement (Stage 1), irreversible record stabilization (Stage 2), and memory reset (Stage 3), and identify the stage at which known information-thermodynamic bounds become experimentally testable. Under explicit operational conditions (C1 to C6) in the uncontrolled-decoherence regime, known information-thermodynamic second-law bookkeeping specializes to a conditional prediction: the record-formation channel must dissipate at least kBT ln 2 of heat per bit of classical mutual information I(X;Y). We propose a circuit-QED differential microcalorimetry experiment with matched ON/OFF branches that share identical premeasurement pulses and routing losses, differing only in whether the irreversible Stage 2 channel is opened. The measurand is the differential deposited energy ΔQ ≡ QON − QOFF, which isolates the branch-differential dissipative load associated with opening the Stage 2 channel from common-mode backgrounds. In the deep-quantum regime this signal is expected to be dominated by pointer-energy thermalization rather than by an isolated Landauer floor. The primary deep-quantum demonstration targets the temporal coincidence of heat onset and reversibility loss via a reversal-delay sweep (Control 3), providing a timing diagnostic of irreversibility onset even when ΔQ ≫ kBT ln 2. This timing test is not, by itself, a device-independent proof of objective classicality. Near-floor residual tests (r ≡ ΔQ − kBT ln 2 · I(X;Y)) require lower-energy pointer implementations or elevated operating temperatures and are presented as a roadmap.
The bound is falsified if r is negative at high statistical significance under verified conditions.
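For scale, the kBT ln 2 floor is tiny at dilution-refrigerator temperatures. The sketch below evaluates the bound per bit of mutual information; the 20 mK operating temperature is an illustrative circuit-QED value, not taken from the abstract.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_min_heat(T, mutual_info_bits=1.0):
    """Minimum heat (J) from record formation at temperature T, per the
    bound k_B * T * ln 2 per bit of classical mutual information I(X;Y)."""
    return k_B * T * math.log(2) * mutual_info_bits

q = landauer_min_heat(0.020)   # illustrative 20 mK operating point
assert 1.8e-25 < q < 2.0e-25   # ~1.9e-25 J per bit
```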

Article
Physical Sciences
Theoretical Physics

Evlondo Cooper

Abstract: We present a causal, falsifiable law of observer-indexed entropy retrieval dynamics whose growth rate of retrievable entropy is proportional to the remaining entropy gap, modulated by a hyperbolic-tangent regulator that switches on at a characteristic proper time τchar. Unlike ensemble-averaged, non-causal Page-curve phenomenology, this law follows directly from bounded Tomita–Takesaki modular flow and admits an inverse map for extracting observer-indexed retrieval rates from measured correlation structure. The framework converts global entropy conservation into a Lorentzian-causal, observer-specific access process without invoking global reconstruction or post hoc averaging. It predicts a joint, experimentally testable signature in the g²(t1, t2) correlation envelope, including constrained saturation, protocol-dependent separation, inverse bandwidth scaling of onset time, and interference suppression under controlled asymmetry. Numerical results on a 48-qubit MERA lattice (bond dimension 8) are consistent with the derived law. A modified Ryu–Takayanagi prescription embeds the retrieval dynamics in AdS/CFT without replica-wormhole or island constructions. By replacing ensemble-averaged Page curves with a causal, testable retrieval mechanism, the model reframes the black-hole information paradox as an experimentally accessible dynamical question. Here Smax denotes the Bekenstein–Hawking entropy, γ(τ) the modular-flow retrieval rate, and τchar the characteristic proper-time scale.
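One hedged reading of the stated law is the ODE dS/dτ = γ(τ)(Smax − S), with γ modulated by a tanh regulator that switches on near τchar. The sketch below integrates it with an Euler step; all numerical values (γ0, width w, grid) are illustrative, not from the paper.

```python
import math

def retrieve(S_max=1.0, gamma0=0.5, tau_char=10.0, w=2.0, dtau=0.01, steps=4000):
    """Euler integration of dS/dtau = gamma(tau) * (S_max - S), where
    gamma(tau) = gamma0 * (1 + tanh((tau - tau_char)/w)) / 2 switches on
    around tau_char.  Growth is proportional to the remaining gap, so S
    rises monotonically and saturates below S_max."""
    S, curve = 0.0, []
    for i in range(steps):
        tau = i * dtau
        gamma = gamma0 * 0.5 * (1.0 + math.tanh((tau - tau_char) / w))
        S += dtau * gamma * (S_max - S)
        curve.append(S)
    return curve

curve = retrieve()
assert all(b >= a for a, b in zip(curve, curve[1:]))  # monotone retrieval
assert 0.99 < curve[-1] < 1.0                         # constrained saturation
```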

Article
Physical Sciences
Astronomy and Astrophysics

Pier Franco Nali

Abstract: Le Sage’s old hypothesis on the corpuscular origin of gravity is revisited. The discussion is developed along three lines: the "modern" wave approach, a "mass–flux" model of a relativistic fluid, and the traditional corpuscular model. The predictions obtained in all three approaches converge with other current attempts. The main outcomes are the emergence of a maximal gravitational acceleration – compatible with the surface gravity of neutron stars – and the absence of gravitational field divergences for arbitrarily large or collapsed masses. The resulting theory differs from classical Newtonian gravity in its much clearer separation between the concepts of heavy mass and inert mass, a distinctive characteristic of Le Sage-type (or “shadow gravity” or “Push–Gravity” (PG)) theories. The price to pay is the abandonment of the equivalence principle in its weak form, which might no longer be considered rigorously valid. We only touch on the issue of experimental verification, which remains very difficult: the simple test proposed here is a rough estimate of gravity at the Earth’s equator and poles using PG theory, which indicates only qualitative agreement with experimental data. In this version, the section “XVII. NEWTONIAN VS RELATIVISTIC EFFECTS OF GRAVITY” has been added and small changes have been made throughout the text and bibliography. Finally, a cosmological speculation based on Le Sage’s idea is sketched at a preliminary and tentative level.

Article
Physical Sciences
Particle and Field Physics

Angelo Raffaele Fazio

,

Adam Smetana

Abstract: We present a novel proposal for the effective Lagrangian of low-energy Yang–Mills quantum field theory. The proposed effective Lagrangian exhibits spontaneous BRST symmetry breaking. We build a Fujikawa model coupled to the Yang–Mills elementary field sector, motivated by the analogy with the Chiral Quark Model. We interpret the Fujikawa fields as effective fields composed of the elementary gluon and ghost fields. In order to justify the existence of two massless Nambu–Goldstone modes among the Fujikawa fields, we require not only the BRST but also the anti-BRST invariance of the effective Lagrangian, both spontaneously broken. The most striking consequence is the emergence of effective gluon and ghost masses. We reproduce the Curci–Ferrari model as a special case of our effective model upon spontaneous BRST symmetry breaking. In order to also reproduce the non-nilpotent modified BRST symmetry characteristic of the Curci–Ferrari model, we modify our effective Lagrangian to be invariant under the extended-BRST symmetry, which mixes the elementary and Fujikawa field sectors and which is nilpotent. The Curci–Ferrari model is reproduced by the elementary field sector of the resulting Lagrangian. The remaining Fujikawa-field-dependent terms guarantee the underlying nilpotent extended-BRST symmetry, which is now hidden in the sense of spontaneous symmetry breaking.

Article
Physical Sciences
Astronomy and Astrophysics

V. P. Dutra

Abstract: Background: Persistent cosmological tensions — particularly in the Hubble constant (H0) — motivate physically grounded alternatives to ΛCDM. We propose the Gibbs Energy Redistribution Theory (GERT): a thermodynamic framework in which matter- and Λ-like contributions are promoted to density-controlled functions derived from the Gibbs free energy criterion. GERT interprets dark components as emergent manifestations of a single Primordial Enthalpic Reservoir, without new fields or fine-tuning. Methods: The dynamical H(z) is obtained by promoting FLRW source terms to thermodynamic functions fM(ρ) and fL(ρ), calibrated via MCMC against CMB, BAO, and Type Ia supernova data. Model complexity is reduced from 12 to 2 free parameters through thermodynamic priors. Results: The two-parameter implementation achieves χ²/dof ≈ 0.99 and infers H0 ≈ 72.5 km s⁻¹ Mpc⁻¹, consistent with local distance-ladder determinations. GERT outperforms ΛCDM on WAIC and AIC. Companion papers (I–XIII) extend the framework to gravitational waves, galactic dynamics across 191 galaxies with zero free parameters, baryogenesis, and the proto-quantum frontier. Conclusions: GERT provides a thermodynamically causal account of cosmic evolution. The frozen parameter set constitutes a quantitative prediction accessible to future low-redshift probes.

Article
Physical Sciences
Quantum Science and Technology

Henan Wang

,

Qimeng Zhang

,

Hengyan Wang

,

Hai-Jun Xing

,

Yixiao Huang

Abstract: We investigate a two-component Bose-Einstein condensate as a platform for quantum metrology and characterize the dynamical evolution of the quantum state using two complementary metrics: the quantum Fisher information and the normalized Shannon entropy. With time-dependent control, metrological resources can be prepared and stabilized over a finite time window. These schemes provide a comprehensive assessment of the quantum dynamics in terms of phase sensitivity and the concentration of the state distribution, thereby offering a theoretical basis for designing robust quantum metrology protocols.

Article
Physical Sciences
Theoretical Physics

Charles Opoku

Abstract: We extend the 3.998D unified geometric framework into the territory of the Cosmic Microwave Background (CMB) acoustic peaks, presenting a plausible alternative explanation that avoids reliance on ΛCDM’s Big Bang, Bounce, or Inflation hypotheses. We have already demonstrated that a near-4D spectral geometry effectively reproduces all three generations of particle masses while simultaneously accounting for galactic rotation curves and the Hubble tension; demonstrating the framework’s universality is therefore a logical next step. Here, we apply the same framework rules to reproduce both the positions (l) and power amplitudes (Dl) of the CMB peaks without invoking plasma acoustic mechanics. These values are derived from the spatial resonance within the manifold’s 4-simplex unit cell, where the primary peak (l1 ≈ 221.7) with a theoretical power amplitude of 5907 μK² is determined, aligning with the Planck 2018 observations (≈ 5750–5950). Subsequent power amplitudes for peak positions l2 ≈ 543.5, l3 ≈ 809.5, and l4 ≈ 1109.5 are determined to be 1969 μK², 2363 μK², and 1082 μK², respectively. Given that these values are practically indistinguishable from observations, the model offers a coherent causal origin for cosmological data and provides a more fundamental explanation than current 3D or 4D hypotheses.

Article
Physical Sciences
Optics and Photonics

Nadezhda M. Belozerova

,

Andrei A. Ushkov

,

Dmitriy Dyubo

,

Alexander V. Syuy

,

Alexander I. Chernov

,

Andrey A. Vyshnevyy

,

Sergey M. Novikov

,

Gleb I. Tselikov

,

Aleksey V. Arsenin

,

Vladimir G. Leiman

+1 authors

Abstract: The development of reproducible and stable plasmon-free substrates for surface-enhanced Raman scattering (SERS) is critical for practical applications in analytical chemistry. Transition metal dichalcogenides (TMDCs) have emerged as promising candidates due to their unique electronic properties, yet their performance is often constrained by the chemical inertness of their pristine basal planes. This work presents a systematic comparison of crystalline flakes and nanoparticles of tungsten diselenide (WSe2) and tungsten ditelluride (WTe2), prepared via liquid-phase ultrasonic exfoliation and non-equilibrium femtosecond pulsed laser ablation in liquid (PLAL), respectively. The results demonstrate that nanoparticle-based substrates consistently outperform their flake-based counterparts, achieving enhancement factors on the order of 10⁴. The superior performance of the nanoparticles is attributed to synthesis-induced defects and high-curvature regions in the nanoparticle shells, which facilitate efficient, defect-mediated charge transfer between the substrate and the analyte. At the same time, the inner polycrystalline volume retains important characteristics of the bulk counterparts, such as excitons in semiconducting WSe2 and broadband absorption in semimetallic WTe2, enabling a tunable photothermal colloidal response. The study establishes morphology engineering through non-equilibrium synthesis as a powerful and generalizable strategy for designing high-performance, dual-function colloidal platforms, offering a pathway toward robust and reproducible analytical systems.

Article
Physical Sciences
Applied Physics

Dongxiao Ren

,

Xinyu Zhong

,

Zixiang Ye

,

Xing-Liang Xu

Abstract: For battery management systems, accurate remaining useful life (RUL) prediction is important, yet models trained offline may not remain well matched to individual cells during operation, because degradation trajectories differ across cells and evolve over aging stages. This study examines a lightweight online personalization strategy under a representative convolutional neural network–long short-term memory (CNN–LSTM) online-transfer setting while keeping the backbone architecture and fixed input length unchanged. The proposed method restricts online updates to a small adaptation path and adjusts the effective history span according to recent degradation behavior. Experiments on 22 test cells under unseen protocols show that the method improves average post-adaptation RUL performance relative to the representative baseline, reducing the root mean square error (RMSE) from 186.00 to 160.58. The number of trainable parameters involved in online updating is reduced from 74,880 to 2,193, while the average update time per step decreases slightly from 2.54 s to 2.29 s. Cell-level analysis further shows that the benefit is not uniform across all cells, motivating more selective updating for safer deployment. Overall, the results indicate that lightweight online personalization can improve the accuracy–cost trade-off of deployment-oriented battery prognostics.
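Using only the figures quoted in the abstract, the accuracy–cost trade-off can be summarized as follows (the percentages are derived here, not stated in the paper).

```python
# Deployment trade-off from the quoted numbers: the adaptation path trains
# only a small fraction of the baseline's online parameters while improving
# average post-adaptation RMSE.
baseline_params, adapted_params = 74_880, 2_193
baseline_rmse, adapted_rmse = 186.00, 160.58

param_fraction = adapted_params / baseline_params
rmse_gain = (baseline_rmse - adapted_rmse) / baseline_rmse

print(f"trainable fraction: {param_fraction:.1%}")  # ≈ 2.9%
print(f"RMSE improvement:  {rmse_gain:.1%}")        # ≈ 13.7%
```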

Article
Physical Sciences
Theoretical Physics

Sacha Mohamed

Abstract: We introduce an operational transport latency, the quantum information copy time: the earliest time at which a receiver confined to a region B can certify, with prescribed advantage, which of two global hypotheses was prepared by local operations in a distant sender region A. The benchmark quantity is information-theoretic—the Helstrom advantage on B, namely the trace distance between the reduced states—and it also admits receiver-restricted refinements that make measurement constraints explicit, including few-body and moment-channel receivers. We derive the corresponding kinematic locality constraints for Hamiltonian and Lindbladian dynamics with Lieb–Robinson tails, as well as for circuits and quantum cellular automata with strict light cones. We then establish the theorem-level anchor of the manuscript in the quantum symmetric simple exclusion process (Q-SSEP): for locally prepared charge-biased hypotheses, the Helstrom copy time obeys an unconditional diffusion-limited lower bound expressed in terms of the diffusion constant D and the static susceptibility χ. For closed Hamiltonian systems, we formulate a projection-based reduction in which the operational scaling statement is made conditional on explicit, diagnostically checkable hypotheses, thereby separating what is proved microscopically from what is inferred in a controlled hydrodynamic window. We complement the analytical framework with exact-diagonalization diagnostics in the XXZ chain and with a bundled TEBD/MPS reference protocol plus validation suite (Supplementary S2 and Code SC1), explicitly cross-validated against exact evolution at small sizes.
In the strengthened submission we add three robustness layers beyond the original ED figures: an eight-size common-window transport sweep for the nonintegrable XXZ structure-factor diagnostic, an extractor-robustness check comparing first-crossing and sustained-crossing copy-time rules on the same Helstrom dataset, and a receiver-side validation dataset showing when a simple block-charge measurement saturates the Helstrom advantage in a controlled conserving reference model. Finally, we compare copy time with scrambling diagnostics based on out-of-time-ordered correlators and show how conservation laws can delay certifiability well beyond the ballistic operator-growth front without any contradiction with locality.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated