Appendix A. The cotangent sum and Jordan totients.
The Euler ensemble expectation value of the cotangent sum can be reduced to the prime numbers as follows. We start with an unconstrained sum, which is elementary [12].
Let us consider a sum of an arbitrary function $H(n)$ over $n$ coprime with $m$; in our case, $H$ is the function entering the cotangent sum. Such a sum satisfies the following equation (with $p_1 < p_2 < \dots < p_S$ denoting the $S$ ordered prime factors of $m$, and $n \perp m$ denoting that $n$ and $m$ are coprime):
\[
\sum_{n \perp m} H(n) = \sum_{n} H(n) - \sum_{i=1}^{S} \sum_{p_i | n} H(n) + \sum_{i < j} \sum_{p_i p_j | n} H(n) - \dots
\]
Let us go into detail and consider this equation for a particular $m$. The second term removes from the unconstrained sum in the first term all the terms with $n$ divisible by the first prime factor $p_1$. In the same way, the other terms in the sum remove all the terms of the first sum with $n$ divisible by $p_2, \dots, p_S$. However, there are terms with $n$ divisible by two prime factors, say $p_1 p_2$, and we removed these terms twice, once in the $p_1$ term and a second time in the $p_2$ term. So we have to add them back, with a plus sign for each pair $p_i < p_j$. This addition provides the next term with the double sum over pairs of prime factors.
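Before moving on, here is a minimal Python check of this reduction: for an arbitrary test function it compares the coprime-constrained sum with the inclusion-exclusion combination over the prime factors of $m$. The helper names are illustrative only, not the paper's code.

from math import gcd
from itertools import combinations

def prime_factors(m):
    """Distinct prime factors of m, in increasing order."""
    p, out = 2, []
    while p * p <= m:
        if m % p == 0:
            out.append(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        out.append(m)
    return out

def coprime_sum_direct(H, m):
    """Direct sum of H(n) over n = 1..m coprime with m."""
    return sum(H(n) for n in range(1, m + 1) if gcd(n, m) == 1)

def coprime_sum_ie(H, m):
    """The same sum via inclusion-exclusion over the prime factors of m."""
    primes = prime_factors(m)
    total = sum(H(n) for n in range(1, m + 1))
    for r in range(1, len(primes) + 1):
        for subset in combinations(primes, r):
            d = 1
            for p in subset:
                d *= p
            # terms with n divisible by d enter with sign (-1)^r
            total += (-1) ** r * sum(H(n) for n in range(d, m + 1, d))
    return total

if __name__ == "__main__":
    H = lambda n: n * n + 1          # arbitrary test function
    for m in (12, 30, 105):
        assert coprime_sum_direct(H, m) == coprime_sum_ie(H, m)
    print("inclusion-exclusion check passed")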
In general, the above formula is a particular case of the inclusion-exclusion principle [13]. As the basic equation (A3) is a linear functional of $H$, we can solve this equation separately for each monomial in $H$, and then, by adding these solutions with proper coefficients, we get the solution for our particular $H$.
Let us start with the simplest case, $H(n) = 1$. The solution is the Euler totient $\varphi(m)$. Here is how the equation is satisfied:
\[
\varphi(m) = m - \sum_{i} \frac{m}{p_i} + \sum_{i < j} \frac{m}{p_i p_j} - \dots = m \prod_{i=1}^{S}\left(1 - \frac{1}{p_i}\right).
\]
The next case, a power of $n$, is processed the same way, with the result expressed through products of the same type over the prime factors of $m$. Finally, these products are the generalized totients
\[
J_k(m) = m^k \prod_{i=1}^{S}\left(1 - \frac{1}{p_i^{k}}\right).
\]
Jordan introduced these generalized totients [14] in 1870. The asymptotic behavior of the Jordan totient summatory functions is known:
\[
\sum_{m \le N} J_k(m) \sim \frac{N^{k+1}}{(k+1)\,\zeta(k+1)}.
\]
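As a sanity check on the Jordan totients, the following Python sketch (illustrative names, not the paper's code) compares the product formula with the direct count of $k$-tuples that, together with $m$, are coprime, and crudely checks the summatory growth against $N^{k+1}/((k+1)\zeta(k+1))$.

from math import gcd
from itertools import product

def prime_factors(m):
    p, out = 2, []
    while p * p <= m:
        if m % p == 0:
            out.append(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        out.append(m)
    return out

def jordan_totient(k, m):
    """J_k(m) = m^k * prod_{p|m} (1 - p^(-k)), in exact integer arithmetic."""
    result = m ** k
    for p in prime_factors(m):
        result = result // p ** k * (p ** k - 1)
    return result

def jordan_direct(k, m):
    """Count of k-tuples (n_1..n_k), 1 <= n_i <= m, with gcd(n_1,...,n_k,m) = 1."""
    count = 0
    for tup in product(range(1, m + 1), repeat=k):
        g = m
        for n in tup:
            g = gcd(g, n)
        count += (g == 1)
    return count

if __name__ == "__main__":
    for m in (6, 10, 12, 30):
        assert jordan_totient(2, m) == jordan_direct(2, m)
    # rough check of the summatory growth ~ N^(k+1) / ((k+1) * zeta(k+1))
    N, k = 2000, 2
    S = sum(jordan_totient(k, m) for m in range(1, N + 1))
    print("ratio to N^3 / (3 * zeta(3)):", S * 3 * 1.2020569 / N ** 3)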
Appendix B. Algorithms.
The numerical simulation of the correlation function does not require significant computer resources. It is similar to a simulation of a one-dimensional Ising model with long-range forces. However, a much more efficient method is to simulate the Markov process equivalent to the Fermionic trace [10].

For that quantum trace, we simulate random histories of the changes of the Fermionic occupation numbers, taking each step of the history with Markov matrix probabilities.

We devised an algorithm whose memory requirement does not grow with the number $N$ of points on the fractal curve. This algorithm works for very large $N$, and it can be executed on GPUs in a supercomputer cluster, expanding the statistics of the simulation by massive parallelization.
We use large values of $N$. For the random values of the fractions $p/q$ with coprime $p < q$, we used the following Python algorithm. We used even $q$ and odd $p$, as it was shown in [4] that these are the dominant values in the local limit.
def EulerPair(self):
    M = self.m
    while True:
        q = 2 * np.random.randint(2, M // 2)
        p = np.random.randint(1, q)
        if gcd(p, q) == 1:
            if np.random.randint(0, M) < q:
                break
    return [p, q]

In other words, we generated a random even $q$, after which we tried random $p < q$ until we got a coprime pair $p \perp q$. Statistically, at large $N$, this happens with a finite probability (see [15]). On average, it takes a finite number of attempts to get a coprime pair.
Once we have a candidate $p$, we accept it with the chance $q/M$, which, again, for large $N$ gives a finite acceptance rate. The probabilities multiply to the desired factor, which is proportional to the Euler totient $\varphi(q)$.
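A standalone Python sketch (not the production code) mimics the rejection scheme of EulerPair and checks that the accepted denominators $q$ indeed occur with frequency proportional to $\varphi(q)$; the value of M and the sample size are arbitrary choices for the test.

import numpy as np
from math import gcd
from collections import Counter

def euler_pair(M, rng):
    """Rejection sampling of a coprime pair (p, q) with even q, as in EulerPair."""
    while True:
        q = 2 * rng.integers(2, M // 2)
        p = rng.integers(1, q)
        if gcd(p, q) == 1 and rng.integers(0, M) < q:
            return p, q

def totient(q):
    """Euler totient phi(q)."""
    result, n, p = q, q, 2
    while p * p <= n:
        if n % p == 0:
            result -= result // p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result -= result // n
    return result

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, samples = 40, 200_000
    counts = Counter(euler_pair(M, rng)[1] for _ in range(samples))
    # empirical frequency of q should track phi(q) up to a common normalization
    for q in sorted(counts):
        print(q, counts[q] / samples, totient(q))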
The main task is to generate a sequence of random spins $\sigma_i = \pm 1$ with prescribed numbers of positive and negative values. The statistical observables are additive and depend upon the partial sums $\alpha_k = \sum_{i < k} \sigma_i$. We avoid storing arrays of $\sigma_i$, using iterative methods to compute these observables; the RAM does not grow with $N$, and the CPU time grows only linearly. The C++ class RandomWalker generates this stream of $\sigma_i$.
#pragma once
#include <cstdint>
#include <random>

class RandomWalker {
public:
    RandomWalker(std::int64_t Npos, std::int64_t Nneg)
        : Npos(Npos), Nneg(Nneg), alpha(0),
          gen(std::random_device{}()), unif(0, 1) {}

    int Advance() {
        int sigma = RandomSign();
        (sigma == 1 ? Npos : Nneg)--;   // draw without replacement
        alpha += sigma;
        return sigma;
    }

    std::int64_t get_alpha() const { return alpha; }

private:
    int RandomSign() {
        // -1 with probability Nneg/(Npos+Nneg), +1 otherwise
        return (unif(gen) * double(Npos + Nneg) <= Nneg) ? -1 : 1;
    }

    std::int64_t Npos;
    std::int64_t Nneg;
    std::int64_t alpha;  // alpha[i], the running sum of sigma
    std::minstd_rand gen;
    std::uniform_real_distribution<double> unif;
};
At each consecutive step, the random sign is generated with probabilities proportional to the remaining counts $N_+, N_-$, after which the corresponding number $N_+$ or $N_-$ is decremented, starting with the prescribed values at step zero. By construction, in the end, there will be precisely $N_+$ positive and $N_-$ negative random values $\sigma_i$.
This algorithm describes a Markov chain corresponding to sequentially drawing random spins $\sigma_k$ from a set with initially $N_+$ positive and $N_-$ negative spins; it is equivalent to a one-dimensional random walk on the number line. These conditional probabilities are such that the unconditional probability equals the correct value
\[
\Pr(\sigma_k = \pm 1) = \frac{N_\pm}{N_+ + N_-},
\]
regardless of the position $k$ of the variable $\sigma_k$ in the chain.
The formal proof goes as follows.

Proof. At $k = 0$, with no spins to the left on the chain, the conditional probability equals the total probability, and the statement holds. With some number of spins on the left, averaging over all these spins at a fixed value of $\sigma_k$ is equivalent to averaging over all spin values except $\sigma_k$, because the only condition on the random spins is their net sum, which is a symmetric function of their values. Thus, the average of the conditional probabilities equals the total probability (A18) for any spin on the chain, not just the first one. □
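A small Python re-implementation of the drawing scheme (an illustrative sketch, not the production C++ class) verifies this claim numerically: the empirical probability of $\sigma_k = +1$ is close to $N_+/(N_+ + N_-)$ at every position $k$.

import numpy as np

def draw_spins(npos, nneg, rng):
    """Sequentially draw +-1 spins without replacement, mirroring RandomWalker."""
    spins = []
    while npos + nneg > 0:
        # -1 with probability nneg/(npos+nneg), +1 otherwise
        if nneg > 0 and rng.random() * (npos + nneg) <= nneg:
            sigma = -1
            nneg -= 1
        else:
            sigma = 1
            npos -= 1
        spins.append(sigma)
    return np.array(spins)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    npos, nneg, trials = 7, 5, 100_000
    hits = np.zeros(npos + nneg)
    for _ in range(trials):
        hits += draw_spins(npos, nneg, rng) == 1
    # every position should show P(sigma_k = +1) ~ npos / (npos + nneg)
    print(hits / trials, npos / (npos + nneg))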
Here is the code which computes a sample of the correlation function for given parameters. The 2D vectors are treated as complex numbers, using 128-bit complex arithmetic.

#include <cassert>
#include <cmath>
#include <complex>
#include <cstdint>
#include <iostream>
#include "Euler.h"
#include "RandomWalker.h"
#include "MatrixMaker.h"

using complex = std::complex<double>;
using namespace std::complex_literals;

inline std::complex<double> expi(double a) {
    double sina, cosa;
    sincos(a, &sina, &cosa);   // GNU extension, equivalent to {cos(a), sin(a)}
    return {cosa, sina};
}

double DS(std::int64_t n, std::int64_t m, std::int64_t Npos, std::int64_t Nneg,
          double beta, /*OUT*/ double *oo) {
    assert(n < m);
    std::int64_t M = Npos + Nneg;
    int sigman, sigmam;
    std::int64_t alphan, alpham;
    complex Snm, Smn;

    RandomWalker walker(Npos, Nneg);
    std::int64_t i = 0;
    for (; i != n; i++) {                      // i in [0, n)
        Smn += expi(walker.get_alpha() * beta);
        walker.Advance();
    }
    alphan = walker.get_alpha();
    Snm += expi(alphan * beta);
    sigman = walker.Advance();                 // i = n
    for (i++; i != m; i++) {                   // i in (n, m)
        Snm += expi(walker.get_alpha() * beta);
        walker.Advance();
    }
    alpham = walker.get_alpha();
    Smn += expi(alpham * beta);
    sigmam = walker.Advance();                 // i = m
    for (i++; i != M; i++) {                   // i in (m, M)
        Smn += expi(walker.get_alpha() * beta);
        walker.Advance();
    }
    *oo = -M * (M - 1) / 2 * sigman * sigmam / (2 * pow(tan(beta / 2), 2))
          * pow(sin(beta / 4 * (2 * (alpham - alphan) + sigmam - sigman)), 2);
    Snm /= double(m - n);
    Smn /= double(n + M - m);
    return abs((Snm - Smn) / (2 * sin(beta / 2)));
}
This code fits the architecture of a supercomputer cluster with many GPU units at each node. These GPU units are aggregated in blocks of 32, with every unit inside the block working in sync. We use this synchronization to collect statistics: each block is assigned different random values of $p, q$, and every unit inside the block performs the same computation of the DS function with a different value of the integer seed for the random number generator, automatically provided by the system.
When quantum computers become a reality, our sum over Markov histories of discrete spin (or Fermion) variables will run massively parallel on qubits. Our Euler/Markov ensemble is ideally suited for quantum computers.
Appendix C. The O(3) group average.
The correlation function (59) involves an integral over the $O(3)$ group, with the vectors related to our fractal curve in a particular sample of our ensemble. Both vectors are real. This integral is a particular case of the Harish-Chandra-Itzykson-Zuber (HCIZ) integral formula [16], where $\Delta$ is the Vandermonde determinant.
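For reference, the standard form of the HCIZ formula over the unitary group, written here in common conventions (the notation may differ slightly from the paper's equation), is
\[
\int_{U(N)} dU \, e^{\,t \,\mathrm{tr}\,(A U B U^{\dagger})}
= \left(\prod_{k=1}^{N-1} k!\right)
\frac{\det\left[e^{\,t\, a_i b_j}\right]_{1 \le i,j \le N}}
{t^{N(N-1)/2}\, \Delta(a)\, \Delta(b)},
\qquad
\Delta(a) = \prod_{i<j} (a_j - a_i),
\]
where $a_i$ and $b_j$ are the eigenvalues of the Hermitian matrices $A$ and $B$, and $dU$ is the normalized Haar measure.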
In our case, $N = 2$, so we use this formula with $2 \times 2$ Hermitian matrices built from the Pauli matrices $\sigma_a$. With proper normalization to 1 at zero argument, the HCIZ integral reduces to an elementary function.
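As a minimal worked specialization, assume the two arguments are traceless Hermitian $2\times 2$ matrices with eigenvalues $\pm a$ and $\pm b$, so that $\Delta(a) = -2a$ and $\Delta(b) = -2b$; the determinant in the HCIZ formula collapses to $2\sinh(2 t a b)$, and the normalized integral becomes
\[
\int_{SU(2)} dU\, e^{\,t\,\mathrm{tr}\,(A U B U^{\dagger})} = \frac{\sinh(2 t a b)}{2 t a b} \;\to\; 1 \quad (t \to 0),
\]
with $\sinh$ replaced by $\sin$ when the exponent is purely imaginary, as in a Fourier integral.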
Next, let us consider the Fourier integral needed for the Wilson loop. The group integration is now involved in the averaging over the ensemble of our random loops with vector steps. The matrix in the exponent is not symmetric but real, as the vectors are real. These are unit vectors in a plane, equivalent to complex numbers.
This integral does not reduce to the HCIZ integral and needs special treatment.
We use a quaternionic representation of the $O(3)$ rotation and write this integral over the unit quaternions. In this representation, the integral goes over the unit sphere in four dimensions, and the tensor in the exponent is symmetric.
This Fourier integral can be computed in tens of milliseconds using a fast Python library, quadpy [17]. It is optimized for integration over geometric shapes, including spheres in arbitrary dimensions. We licensed this library from the author and used it for our computations. Here is the code with its output for a random real symmetric tensor R:

def SphericalFourierIntegral(R):
    dim = 4
    scheme = quadpy.un.mysovskikh_2(dim)
    print("tol=", scheme.test_tolerance)
    vol = scheme.integrate(lambda x: np.ones(x.shape[1]), np.zeros(dim), 1.0)

    def func(x):
        return np.exp(1j * np.sum(x * (R.dot(x)), axis=0)) / vol

    return scheme.integrate(func, np.zeros(dim), 1.0)

'''
QuadPy.py::testFourieO3Integral PASSED [100%]
quadpy: SphericalFourierIntegral = (-0.05635295876166728-0.012727710917488425j)
quadpy O(3) Fourier Integral 28.73 ms
'''
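A hedged usage sketch for the function above (assuming the function is in scope and numpy and quadpy are installed):

import numpy as np
import quadpy  # licensed library used for the spherical quadrature

# build a random real symmetric 4x4 tensor R and average exp(i x.R.x) over the unit 3-sphere
rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4))
R = (A + A.T) / 2

value = SphericalFourierIntegral(R)
print("O(3) Fourier integral:", value)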