1. Introduction
The neuromuscular junction (NMJ) is a specialized region that establishes communication between nerve and muscle. The language of this communication is chemical: acetylcholine molecules, packed inside organelles called synaptic vesicles, are released into the synaptic cleft after vesicle fusion. The acetylcholine then diffuses across the cleft and binds to cholinergic receptors at the motor end-plate, promoting a muscular response [2]. From the 1950s onwards, Katz and collaborators systematically carried out a rigorous characterization of this process. In the course of these investigations, the discovery of miniature end-plate potentials (MEPPs) by Katz and Fatt opened a new perspective for understanding the biophysical nature of neurotransmission [3]. Would there be a way to quantify the spontaneous release of acetylcholine? According to the vesicular hypothesis later proposed by Katz, release takes place discretely at the terminal, with a direct correspondence between the fusion of a single vesicle and the generation of a MEPP [4]. Based on these studies, Katz and Del Castillo offered a statistical pillar consistent with the physiological substrate: according to their proposal, release occurs in a random fashion, statistically governed by a Poisson process.
Technological progress improved electrophysiological instrumentation and raised the quality of the recordings. In parallel with these empirical advances, more sophisticated statistical models were proposed, enabling tests of the validity of the Poissonian premises. Within this perspective, several studies emerged showing schemes that diverge from the assumption of pure randomness in neurotransmission. Especially from the 1970s onwards, several authors showed that neurotransmission could also obey other statistical models. For instance, Robinson, examining the amphibian NMJ, showed that MEPPs were best described when analyzed with the gamma function [5]. Washio, studying cockroach neurojunctions, also observed deviations from Poisson statistics, while van der Kloot, again examining the amphibian NMJ, reinforced such divergences [6,7]. More recently, Lowen et al., and independently Takeda et al., suggested a fractality acting on cholinergic release [8,9]. In addition, morphological studies have shown that the physical interaction between vesicles can generate both inhibition and exacerbation of vesicular release [10,11]. This finding encourages the idea of mechanisms based on long-range interactions acting on the release process. In this context, the possibility emerged that such mechanisms are linked to a nonextensive regime, as documented by Silva et al. [1]. These authors suggested the q-Gaussian distribution as an adequate function to fit MEPP amplitudes. Therefore, those results indirectly support the morphological investigations mentioned above.
Numerical patterns are identified in many phenomena of nature. Like so many other discoveries in the history of science, the Newcomb-Benford Law (NBL) was obtained in a completely unexpected manner, as the result of a careful examination of logarithm tables. Using them, Simon Newcomb discovered a curious numerical pattern in 1881 [12]. Newcomb was a respected researcher known for his remarkable skill in dealing with large amounts of data. While handling a catalog of logarithmic tables worn out by routine consultation, he noticed that the initial pages, containing the number 1 as the first digit, were more worn than the pages containing the digit 2, and so on. From these observations, he proposed a simple mathematical description of this curious phenomenon, also known as the law of anomalous numbers or the first-digit law, represented by an elegant logarithmic function.
Decades later, Frank Benford independently rediscovered Newcomb's findings by analyzing a large amount of data extracted from various sources, such as physical constants, molecular weights, and population data. By studying them, Benford reached the same conclusions previously pointed out by Newcomb [13]. The NBL is classified among the several power or scaling laws found in many physical systems. In recent years, it has emerged as a valuable tool for identifying patterns embedded in data from different sources [14]. Nevertheless, the NBL remained a curious mathematical observation for decades, lacking a rigorous treatment explaining why it works so well in so many different phenomena; indeed, despite the functional simplicity of the law, researchers still debate why the NBL holds so broadly [15]. In the mid-1990s, Hill [16,17,18] offered a formal understanding of two remarkable characteristics of the law: scale and base invariance.
A variety of experimental data has accumulated in many fields of knowledge, attesting compliance with the NBL [19,20,21,22,23]. In particular, spatial invariance can have profound morphological implications in physiology, and it is well documented in the heart, lung, and brain [24]. In this framework, it is plausible to hypothesize that if a given data collection obeys the NBL, then it should exhibit base-invariant behavior. Therefore, it is unsurprising that the NBL has been confirmed in several biological systems. Studies carried out on electrocardiogram and electroencephalogram recordings have revealed that these signals follow the NBL at the physiological level [25,26]. Moreover, in in vitro studies using the NMJ of mouse diaphragms, Silva et al. performed a detailed electrophysiological investigation into the validity of the law at different extracellular calcium levels [1]. According to these authors, the intervals between MEPPs obey the NBL regardless of the calcium concentration, indicating robust conformity of their data with the law. Motivated by this work, it is natural to delve into investigations varying the extracellular content of other ionic species.
The potassium ion (K+) is a univalent cation commonly found in body fluids, with resting extracellular concentrations in the low millimolar range, and is crucial for several physiological functions [27]. For example, changes in the K+ gradient represent a potential risk for cardiac function, and the gradient is also responsible for establishing the K+ equilibrium potential, which is vital for several cell functions. Beyond that, membrane depolarization due to high extracellular K+ implies a dramatic increase in MEPP frequency. K+ traffic is mediated by several channels at the NMJ terminal membrane [28]. Thus, beyond the health issues, modifications of the K+ content in the extracellular milieu represent fertile soil for mathematical modeling of the ionic impact on nervous electrical activity. In this framework, the NMJ emerges as a classical but still essential preparation for identifying numerical patterns in a biological scope. Deviations from the NBL could eventually represent an opportunity for uncovering specific numerical patterns associated with particular pathological regimes playing a role in neurotransmission.
In summary, the present work is based on the manipulation of high extracellular potassium ([K+]o) for the following reasons. First, it approximately mimics a physiological stimulation [29]. Second, the impacts exerted by the manipulation of extracellular and intracellular [K+] on the membrane potential of muscle preparations are well characterized [30]. Third, high [K+]o triggers a strong membrane depolarization followed by a dramatic acceleration of the MEPP rate [31]. Fourth, several studies have correlated morphological cellular modifications with the accumulation of extracellular K+ [32]. From this justification, the present work aims to expand our previous study by evaluating whether the intervals between MEPPs still obey the law under conditions of hyperkalemia. It is well accepted that an increase in extracellular potassium concentration increases MEPP frequency. Therefore, this exacerbated electrophysiological activity allows a rigorous verification of the law's validity over a large amount of data, and the conformity level may be studied in more detail.
2. Mathematical Formulation of NBL
Hill introduced the probabilities for the occurrence of the leading significant digits, inferred by the general equation expressed as follows:
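\[
P(D_1 = d_1, \ldots, D_k = d_k) \;=\; \log_{10}\!\left[\,1 + \left(\sum_{i=1}^{k} d_i \cdot 10^{\,k-i}\right)^{-1}\right], \tag{1}
\]
where D_i denotes the i-th significant digit, d_1 ∈ {1, …, 9}, and d_j ∈ {0, …, 9} for j ≥ 2.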
The above expression can be particularized to analyze only the frequency of the first digits. In this case, the equation is then written as follows:
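\[
P(D_1 = d) \;=\; \log_{10}\!\left(1 + \frac{1}{d}\right), \qquad d = 1, \ldots, 9. \tag{2}
\]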
It is worth highlighting that second-digit analysis is often performed in NBL applications. For example, Diekmann documented that data from articles published in the American Journal of Sociology are well described when the second digit is taken [33]. Thus, the probabilities for the appearance of a second digit are given by the expression:
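\[
P(D_2 = d) \;=\; \sum_{k=1}^{9} \log_{10}\!\left(1 + \frac{1}{10k + d}\right), \qquad d = 0, \ldots, 9. \tag{3}
\]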
Nigrini claims that, although the usual analysis of the first or second digits provides important information about the compliance of the analyzed data, it is vital to also consider the analysis of the first-two digits [34]. According to this researcher, investigating the conformity of the first-two digits makes it possible to extract a more detailed picture of how the phenomena obey the law. This case is written in the following functional form:
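\[
P(D_1 D_2 = d) \;=\; \log_{10}\!\left(1 + \frac{1}{d}\right), \qquad d = 10, \ldots, 99, \tag{4}
\]
where d now denotes the number formed by the first two significant digits.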
The expected frequencies for the first and second digits are summarized in Table 1.
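Table 1. Expected NBL frequencies (%) for the first and second digits, computed directly from eqs. (2) and (3).

digit d          0       1       2       3       4       5       6       7       8       9
first digit      –     30.10   17.61   12.49    9.69    7.92    6.69    5.80    5.12    4.58
second digit   11.97   11.39   10.88   10.43   10.03    9.67    9.34    9.04    8.76    8.50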
Several authors have reported that fluctuations in the empirical first-digit frequencies can occur, despite the typically asymmetric distribution of digits. This observation implies deviations between the data and the frequency values predicted by the NBL. Evidence corroborating these observations comes from seismic activity and cognition experiments [35,36]. This issue motivated several authors to propose generalizations of the NBL. In this framework, one may highlight, for instance, the theoretical description introduced by Pietronero et al. [37]. Following these authors, one assumes a power-law probability distribution (where d is the digit(s), D represents an arbitrary digit(s), and α is a constant exponent related to the scale proportion). In the standard form of this generalization, one may write the distribution either directly or through its differential version:
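\[
P(D) \;\propto\; D^{-\alpha}, \tag{5}
\]
or, equivalently,
\[
dP(D) \;=\; C\, D^{-\alpha}\, dD, \tag{6}
\]
where C is a normalization constant.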
Solving eq. (6) results in an α-logarithm:
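Defining the generalized (α-)logarithm as
\[
\ln_{\alpha}(x) \;\equiv\; \frac{x^{\,1-\alpha} - 1}{1-\alpha}, \qquad \lim_{\alpha \to 1}\ln_{\alpha}(x) = \ln(x), \tag{7}
\]
the first-digit probability becomes
\[
P_{\alpha}(d) \;=\; \frac{\ln_{\alpha}(d+1) - \ln_{\alpha}(d)}{\ln_{\alpha}(10)}. \tag{8}
\]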
According to eq. (8), defined as the generalized NBL, first digits more frequent than expected by the NBL imply α > 1, while α < 1 means a first-digit frequency below the predicted percentage. As expected, when α → 1 the NBL is fully recovered. Taking α ≠ 1, equation (8) is rewritten as:
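\[
P_{\alpha}(d) \;=\; \frac{(d+1)^{\,1-\alpha} - d^{\,1-\alpha}}{10^{\,1-\alpha} - 1}, \qquad d = 1, \ldots, 9. \tag{9}
\]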
From the approach developed by Pietronero
et al. [
37], it is also possible to obtain expressions for the second digit:
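\[
P_{\alpha}(D_2 = d) \;=\; \sum_{k=1}^{9} \frac{(10k + d + 1)^{\,1-\alpha} - (10k + d)^{\,1-\alpha}}{100^{\,1-\alpha} - 10^{\,1-\alpha}}, \qquad d = 0, \ldots, 9, \tag{10}
\]
with the denominator ensuring normalization over the two-digit range (this normalization convention is assumed here).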
Finally, the generalized Benford probability for the first-two digits takes the same power-law form, normalized for each α value over the range d = 10, …, 99:
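\[
P_{\alpha}(D_1 D_2 = d) \;=\; \frac{(d+1)^{\,1-\alpha} - d^{\,1-\alpha}}{100^{\,1-\alpha} - 10^{\,1-\alpha}}, \qquad d = 10, \ldots, 99. \tag{11}
\]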
3. Electrophysiological Recordings
The hemidiaphragm is a muscle that separates the thoracic from the abdominal cavity and presents several experimental advantages. One can highlight its easy identification and dissection, which facilitates muscle extraction. Another remarkable advantage is its stereotyped spontaneous electrophysiological activity. The experimental paradigm in the present work follows the same procedure used in our previous works [1,38]. All experimental procedures in the present work were approved by the Animal Research Committee (CETEA - UFMG, protocol 073/03) [39]. Wild-type adult mice were euthanized by cervical dislocation, and the diaphragms were extracted and quickly immersed in a physiological Ringer solution containing (in mM): NaCl (137), NaHCO3 (26), KCl (5), NaH2PO4, glucose (10), CaCl2, and MgCl2 (1.3). The pH was adjusted to 7.4 after gassing with O2 and CO2. In the experiments with high K+, the sodium concentration was adjusted to maintain osmotic equilibrium. The muscles were kept in the solution for at least 30 minutes before the beginning of the electrophysiological recordings, allowing recovery from the mechanical trauma of extraction. Next, the tissues were transferred to a recording chamber continuously perfused with fresh solution at a constant flow rate, at room temperature. A standard intracellular recording technique was used to monitor the frequency of spontaneous MEPPs by inserting a micropipette into the chosen muscle fiber. Borosilicate glass microelectrodes filled with KCl solution were used. A single pipette was inserted into the fiber near the end-plate region, as guided by the presence of MEPPs with fast rise times. The control intervals were extracted from 14 recordings, whereas the high-K+ intervals were collected from 12 experiments. Thus, our experimental paradigm afforded an enormous amount of data, allowing a rigorous analysis. The Strathclyde Electrophysiology Software (John Dempster, University of Strathclyde), the R language, Origin (OriginLab, Northampton, MA), and MATLAB (The MathWorks, Inc., Natick, MA) were employed for electrophysiological acquisition and data analysis.
Figure 1. Representative electrophysiological recording segments collected from two experiments carried out at physiological (left) and high (right) [K+]o.
4. Conformity Analysis
There is an intense debate about NBL compliance testing. Several procedures are available, but the validity of many of these methods has been questioned. For instance, many investigators rely on conventional significance tests, although these manifest the "excess of power" problem, yielding an accumulation of misleading conformity verdicts in the literature. Adopting such tests is even more questionable when dealing with large data sets. In this sense, the "excess power" emerges because these tests consider the sample size in their mathematical formulation. On the other hand, while the sample size is certainly a fundamental parameter in statistical analysis, tests that do not consider the sample size can be interpreted as measuring the distance between the data and the frequencies predicted by the NBL. Within this scheme, methods were proposed in which the sample size is ignored, avoiding the "excess power" problem. To address this issue, Nigrini and Kossovsky suggested the Mean Absolute Deviation (MAD) and the Sum of Squares Difference (SSD), respectively [14,34]. These authors suggest that, despite the importance of the MAD for assessing conformity in certain situations, the SSD is a superior test to the MAD. The main reason is that the SSD test does not involve the absolute value, being instead directly inspired by regression theory, which uses sums of squared errors. Notwithstanding their conceptual differences, the MAD and the SSD are routinely applied in different investigations. In mathematical form, denoting by AP_i and EP_i the actual and expected proportions of digit bin i, and by k the number of bins, the MAD and the SSD are computed as:
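\[
\mathrm{MAD} \;=\; \frac{1}{k}\sum_{i=1}^{k}\bigl|\,AP_i - EP_i\,\bigr| \tag{12}
\]
and
\[
\mathrm{SSD} \;=\; \sum_{i=1}^{k}\bigl(AP_i - EP_i\bigr)^{2} \times 10^{4}, \tag{13}
\]
where the factor 10^4 in eq. (13) corresponds to expressing the proportions as percentages, following Kossovsky's convention (assumed here).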
Table 2 presents the conformance ranges for the MAD and SSD analyses.
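As an illustration of how these quantities can be obtained in practice, the snippet below gives a minimal R sketch of the first-digit extraction and of the MAD/SSD computation; the function and object names are ours and hypothetical, and the code does not reproduce the original analysis scripts.

```r
# Minimal sketch: first-digit NBL conformity of a vector of positive values,
# quantified by MAD (eq. 12) and SSD (eq. 13).
benford_first_digit_test <- function(x) {
  # leading significant digit of each value
  d  <- as.integer(substr(formatC(abs(x), format = "e", digits = 3), 1, 1))
  ap <- as.numeric(table(factor(d, levels = 1:9))) / length(d)  # actual proportions
  ep <- log10(1 + 1 / (1:9))                                    # expected NBL proportions
  list(MAD = mean(abs(ap - ep)),        # eq. (12)
       SSD = sum((100 * (ap - ep))^2))  # eq. (13), proportions in percent
}

# usage (hypothetical object): benford_first_digit_test(mepp_intervals)
```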
Recent studies showed that even the MAD has inaccuracies, which consequently affect the reported compliance level. Investigating the foundations of this method, Lupi and Cerqueti addressed inconsistencies in the MAD premises, which allowed these researchers to give an alternative formulation for extracting the conformance level [40,41]. These authors propose a test still based on the MAD, but invoke the severity principle to adjust the MAD values, yielding the so-called excess MAD test. In their formulation, under the null hypothesis of conformity with the NBL, the MAD is approximately normally distributed, with a mean and a variance that depend explicitly on the sample size n and on the number of digit bins k; the variance involves a k-vector of ones (and its transpose), a diagonal matrix formed from the expected proportions, and the covariance matrix R of the observed digit proportions [40,41]. Because the expected value of the MAD under the null clearly depends on n and k, it is natural to measure the discrepancy of the observed MAD from this expected value, denoted the excess MAD. Therefore, the MAD method, given by (12), is not independent of n, but rather depends on it [40,41]. Thus, we do not apply an excess correction beyond the classical case, because the asymptotic distribution was only demonstrated for the MAD under the null of conformity with the classical NBL [40,41]. For the generalized law, another expression, taking the corresponding α values into account, would probably be obtained. Within this scope, subsequent investigations could delve into this problem in order to obtain a generalization of the excess MAD for the generalized NBL, providing an expected MAD as a function of α, k, and n, and making it possible to analyze the corresponding excess MAD as well.
5. Results
The conformity analysis summarized in Table 3 showed that, at normal [K+]o, the experimental MEPP intervals follow the NBL satisfactorily, considering the first and second digits as well as the first-two digits. All data achieved at least marginal conformity. Moreover, it is essential to highlight that the excess MAD calculation adjusted the compliance levels, especially for the first-two digits, improving conformity in all cases. The excess MAD possibly attenuated the "excess power", while the non-conformities obtained even with the excess MAD may suggest that the NBL in its classic format is inadequate to describe the MEPP intervals. This observation is also reinforced by examining Figure 2, which shows representative results extracted from two electrophysiological recordings, considering both the physiological level and high [K+]o. For the first digit, a visual inspection reveals an excellent agreement between the experimental data and the predicted values. However, taking all results given in Table 3 and Table 4, the conformity pattern revealed an interesting scenario, as different levels of compliance were observed regardless of the test used.
Figure 3 presents the statistical summary of all databases for both potassium contents, where, in general, a more pronounced deviation can be observed for the data taken at high [K+]o. In spite of this larger deviation, one may note, except for the second digit at high [K+]o, the characteristic asymmetric digit distribution predicted by the law.
The conformity levels obtained at high [K+]o indicated that, although some of the tests returned more frequent non-conformities, the overall results suggest that the law remains mostly obeyed. Despite these results, we decided to focus on these deviations from the NBL proportions, especially for the high-[K+]o recordings, and verified whether the generalized NBL might be more appropriate for the data adjustments. In this scheme, the results in Table 5 show that, when α ≠ 1 is assumed in the generalized NBL, conformity generally improves. This observation is readily confirmed by comparing the MAD results of Table 3 and Table 5. The conformity analysis for the SSD, summarized in Table 4 and Table 6, also suggested a better adherence of the generalized NBL to the data as compared to the classical NBL calculations. In fact, according to these results, the compliance level improved in several cases.
The high number of intervals obtained in some electrophysiological recordings at high [K+]o offered the possibility of investigating how the level of compliance might be regulated as a function of the number of MEPP intervals. To address this issue, we performed an analysis based on the cumulative frequency of the data. This procedure divided the experimental data into equal portions along the time series; the compliance level was then successively recalculated as the portions were accumulated.
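As an illustration, this cumulative procedure can be sketched in R by reusing the conformity function from Section 4; again, this is a minimal, hypothetical sketch (the number of portions is arbitrary) rather than the original analysis code.

```r
# Minimal sketch of the cumulative-portion analysis: recompute the first-digit MAD
# on successively larger initial segments of the interval time series.
cumulative_conformity <- function(intervals, n_portions = 20) {
  n          <- length(intervals)
  cut_points <- round(seq(n / n_portions, n, length.out = n_portions))
  sapply(cut_points, function(m) benford_first_digit_test(intervals[1:m])$MAD)
}
```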
Figure 4 illustrates such an analysis made from the intervals of one recording, showing the conformity behavior for the adopted tests. In general, for both [K+]o conditions, the tests revealed fluctuations, given by the presence of local conformities and non-conformities, until reaching the final compliance level. It is worth mentioning that this fluctuation was noted in all data sets, being more evident in the recordings obtained at high [K+]o (results not shown).
This calculation enabled us to glimpse how the data amount may contribute to regulating compliance. The data studied with the MAD yielded a gradual improvement in the first-digit compliance level. This behavior was analogously observed when the SSD was used, where compliance also improved, despite a more pronounced oscillation compared with the MAD result, as the amount of successively added portions increased. For the second digit, there is a tendency for the level of compliance to increase while being more unstable with respect to the number of intervals. On the other hand, the compliance results for the first-two digits presented only a slight variation, suggesting a more attenuated sensitivity to the data size. Furthermore, our calculations revealed that the exponent α also demonstrated an interesting oscillatory behavior, especially for the second digit compared with the first and first-two digits. Regarding the tests used to verify the conformity of the experimental data with the generalized NBL, a significant variation in the second-digit results can be noticed, confirming the tendency observed in the preceding analysis. In addition, adopting the generalized NBL improved the compliance level. In summary, these results strongly suggest that, at least for MEPP intervals recorded at the mammalian NMJ, the conformity level may be modulated by the MEPP frequency or by the time-series length.
The most pronounced deviations, observed at high [K+]o in the statistical summary presented in Figure 3, motivated us to examine the data distribution considering the generalized NBL. Figure 5 provides the statistical summary comparing the distribution of all digits for the experiments carried out in high potassium, in which a satisfactory agreement of the generalized NBL with the experimental data can be verified. This visual conformity is seen for the first and first-two digits. The relevance of the generalized NBL is also highlighted by examining the exponent α: values close to 1, obtained at physiological [K+]o, indicate that the classical NBL is sufficiently satisfactory to describe the MEPP intervals (Figure 6). Moreover, based on the α values, one may point to the importance of the generalized NBL in modeling data extracted from high [K+]o.
6. Discussion
The present study expanded our previous investigation of how changes in the ionic concentration of the artificial physiological solution can modulate the level of compliance of the intervals between MEPPs. In this framework, this report confirmed the validity of the NBL in a hyperkalemic environment. As expected, the analysis initially showed that the MEPP intervals recorded at normal [K+]o agree with the first, second, and first-two digit frequencies. We reached this conclusion by assuming three different conformity tests. At [K+]o = 5 mM, the excess MAD test enabled improved conformity for both the first- and second-digit results; at the same time, for the first-two digits data, all nonconformities were converted into conformance. These findings suggest how the phenomenon revealed as "excess power" may influence the results and data interpretation.
According to our analysis, at [K+]o = 25 mM a heterogeneous conformity scenario emerged, in which nonconformity appeared much more often than in the results obtained with the physiological solution. Despite this observation, the MAD generally indicated that the distribution of digits obeys the NBL, with the level of compliance predicted by this test resting, in most cases examined, at an acceptable or marginal level. On the other hand, the results obtained from the SSD and excess MAD calculations provided several nonconformities, showing the necessity of adopting a generalized version to understand the applicability and limitations of the NBL at [K+]o = 25 mM. Also, the strong depolarization promoted by the high [K+]o resulted in a loss of conformity. In this framework, assuming the generalized NBL brings the adherence of the intervals closer to the law, especially when quantified by the conformity tests adopted here. In addition, the α values emerged as a helpful parameter for verifying whether the data are better described by the classical or the generalized NBL. When the generalized NBL is assumed, the results obtained within the physiological content showed a median α close to 1. In contrast, at high [K+]o, the values deviate more significantly from 1, highlighting the importance of the generalized NBL for data harvested at higher potassium concentrations.
Based on the present findings, one may formulate the following questions: What neural substrate might be associated with the law's validation at high [K+]o? Is there a relationship between morphological modifications and the rate of discharge of MEPPs? Is the generalized NBL, through its α parameter, a possible indicator of structural changes in the NMJ? Previous research suggested that high [K+]o is related to morphological alterations, much as observed in a series of pathologies. Although our approach to tackling this question was indirect, focusing only on the electrical response, further combinations of morphological and electrophysiological studies are required to investigate how changes in morphology can be associated with the α exponent. Within this scope, it would also be essential to investigate the validity of the law in situations of injury, in which it is well accepted that the terminal undergoes morphological restructuring as well. Examination of these issues may confirm the utility of the NBL and its generalization in quantifying numerical patterns in pathologies known to modify the NMJ architecture. If such a correspondence could finally be confirmed, the generalized NBL would arise as a suitable tool for detecting the presence of an anomalous regime beyond those associated with hyperkalemia.
In this work, the experiments were performed at ambient temperature. Besides the [K+]o increment, the thermal level is another critical parameter that promotes MEPP frequency modulations. It is evident that thermal fluctuations modify the resting potential of the nerve terminal, depolarizing and hyperpolarizing it as the biological membrane temperature is raised and lowered, respectively [42]. Consequently, although changes in temperature and [K+]o imply different synaptic mechanisms, rising temperature similarly produces an increment in MEPP frequency [43,44]. Thus, one may hypothesize that at higher temperatures, as observed here for the hyperkalemic environment, the generalized NBL would emerge as the most appropriate formulation to study the first-digit phenomenon at mammalian physiological temperature. In that case, it is plausible to argue that the resting potential might be governed by physiological mechanisms ruled by a generalized formalism. This conjecture is based on the following arguments. First, Procópio and Fornés, inspired by the fluctuation-dissipation theorem, showed how voltage fluctuations impose a mechanism responsible for regulating gating channel behavior [45]. Second, influenced by generalized thermodynamic statistics, Chame and Mello generalized the fluctuation-dissipation theorem [46]. Third, a direct mathematical relation between the NBL and generalized statistics was deduced by Shao and Ma [47]. Finally, studies at the mammalian NMJ performed by da Silva et al. have shown that synaptic transmission statistics are best understood within an approach inspired by this generalized formalism [38,48]. Taken together, these arguments sketch a theoretical pillar for hypothesizing the existence of a relation between the NBL and the generalized statistics within a generalized resting potential, likely valid at mammalian temperatures. In this scheme, α ≠ 1 would imply a resting potential regulated in terms of the generalized formalism and its famous q-index. Therefore, the discussion given above offers a thermodynamic scenario for explaining the decrement, or even the failures of conformity, computed for the first digits at high [K+]o. However, future investigations are required to better comprehend the relationship between temperature, the NBL, and the generalized statistical theory in the neurotransmission context.
Finally, it is essential to mention that large amounts of data, like those extracted at high [K+]o, represent an excellent way to assess how compliance can change as a function of the data size; this became especially significant for the second and first-two digits. Such a discussion allows us to elaborate on a last, profound set of questions. Will the validity of the law reported here under in vitro conditions still be verified at the systemic level, where the junction is intact and attached to the animal organism? Although obtained in an artificial hyperkalemic physiological solution, our results showed local variations in the conformity level. Is this compliance behavior sensitive to the sampling rate or to the size of the time series? Our results show that, at least at the NMJ, the conformity level has a very dynamic behavior. Last, but not least, does the validity of the law change throughout the rodent's life? Could these results be equally extrapolated to the human NMJ? These are open questions within the applicability and validity of the NBL in physiological terms. Further experimental investigations are welcome to assess these intriguing questions.