Preprint
Article

Sparsity Limit to Prune Large Language Models for on-Device AI Assistants: Llama-2 as an Example

This version is not peer-reviewed.

Submitted: 06 August 2024; Posted: 08 August 2024

Abstract
Large language models (LLMs) have shown impressive performance and versatility. However, their billions of parameters and high computational costs hinder the development of personalized and privacy-preserving AI assistants operating locally on user devices. In this work, we explored the potential of pruning LLMs to create lightweight models suitable for user devices, using the moderate-sized Llama-2 7B model as an example. By adopting a simple yet effective pruning method, we found that up to 60% of the weights in the Llama-2 7B model could be pruned without significantly impairing its language modeling capabilities. Furthermore, despite occasional factual inaccuracies, the pruned model at the sparsity limit generated fluent and helpful answers to daily queries, demonstrating the feasibility of on-device AI assistants. These inaccuracies might originate from forgetting or hallucination due to pruning. We proposed a simple protocol to distinguish between the two mechanisms, and discussed future directions to improve the pruned models for local AI assistants.
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Large language models (LLMs) have shown impressive performance on various tasks and a significant impact on human society [1,2,3]. However, they require substantial computational resources and energy. To make LLMs more accessible to individual consumers, a recent trend is to train moderate-sized models such as Llama [4,5], ChatGLM [6], and Falcon [7]. Despite being relatively smaller, these models still contain billions of parameters, posing challenges for the development of personalized and privacy-preserving AI assistants that can operate locally on user devices like laptops or mobile phones. For instance, the smallest model in the Llama-2 series has 7 billion parameters and requires over 20 GB of memory, making it impractical to run on a mobile phone with limited RAM.
Recent advances in neuroscience and deep learning offer promising solutions by leveraging sparse connectivity. Biological neural networks and neuronal activities in the brain are generally sparse [8,9,10,11,12,13], in contrast to the dense connections and activations widely adopted in LLMs, including the QKV matrices and the attention matrices of the Transformer architecture [14]. Notably, a recent computational model found that in the mammalian olfactory system, sparse inter-hemispheric projections can align the two olfactory cortical neural representations of environmental odors when the number of olfactory cortical neurons is sufficiently high [15]. Similarly, theoretical studies revealed that under suitable conditions, sparse Transformers need only O(n) connections per attention layer, rather than O(n^2), to approximate any sequence-to-sequence function, where n is the number of tokens [16]. In practice, the machine learning community has developed various pruning and sparsification techniques to obtain sparse models [17], dating back several decades [18,19,20].
Consequently, pruning and sparsifying LLMs has gained popularity, leading to the development of diverse methods [21,22,23]. For instance, [21] utilized gradient information to perform structural pruning of LLMs. Motivated by empirical observations of emergent hidden-state features with large magnitudes in LLMs [24,25], [22] developed a simple but effective pruning approach on a per-output basis: the weights are first multiplied by the corresponding inputs, and those with the smallest resulting magnitudes are pruned. Additionally, [23] proposed a two-step structured pruning procedure that combines targeted structured pruning, which prunes a model down to the target architecture of existing smaller models, with dynamic batch loading, which adjusts the proportions of training data from different domains according to their respective losses.
Our research focuses on developing local LLMs as AI assistants on user devices, prioritizing lightweight models over perfect accuracy. Therefore, we adopted the simple yet effective method from [22] to prune the Meta-developed Llama-2 7B model [4], explored the sparsity limit of pruning, and evaluated the performance at the sparsity limit, to facilitate the development of personalized AI assistants on user devices. Our contributions are two-fold:
  • We demonstrated that 60% of the weights in the Llama-2 7B model could be pruned without a notable decrease in language modeling capability, as evidenced by the WikiText perplexity metric.
  • We examined the user experience of daily dialogue and query handling at this sparsity limit to assess the feasibility of on-device AI assistants. The pruned model generated fluent and helpful answers but with factual inaccuracies, raising intriguing theoretical questions about the nature of pruning or sparsification. We hypothesized that these inaccuracies might result from forgetting or hallucination, proposed a simple protocol to distinguish between the two mechanisms, and discussed future directions to improve the pruned models.
In summary, our work paves the way for the development of lightweight and local LLMs suitable for user devices, balancing performance with resource efficiency.

2. Methods

We adopted the per-output pruning method detailed in [22]. As shown in Figure 1, unlike magnitude-based, layer-wise pruning, this method examines the contribution of each weight W_ij to a specific output i by multiplying it with the corresponding inputs; that is, the contribution of weight W_ij to output i is estimated as

O_ij = |W_ij| · ||X_j||_2,     (1)

where ||X_j||_2 is the ℓ2 norm of the j-th input dimension vector concatenated across different inputs. Denoting the desired sparsity ratio by s, for each output i we prune a fraction s of the weights W_ij (over j) with the smallest contributions O_ij. In this paper, we varied s from 0 to 90% to determine the sparsity limit of pruned LLMs.
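To make Eq. (1) and the per-row selection concrete, the following is a minimal PyTorch sketch for a single linear layer; the function name and tensor shapes are our own illustrative choices, not the reference implementation released with [22].

```python
import torch

def prune_per_output(W: torch.Tensor, X: torch.Tensor, s: float) -> torch.Tensor:
    """Prune a fraction s of the weights in each row of W (i.e., per output i),
    ranked by O_ij = |W_ij| * ||X_j||_2 as in Eq. (1).

    W: (d_out, d_in) weight matrix of a linear layer.
    X: (n_samples, d_in) calibration inputs to that layer.
    """
    # ||X_j||_2: l2 norm of the j-th input feature across calibration samples
    x_norm = X.norm(p=2, dim=0)                # shape (d_in,)
    O = W.abs() * x_norm.unsqueeze(0)          # contributions O_ij, shape (d_out, d_in)

    k = int(s * W.shape[1])                    # number of weights to prune per output
    if k == 0:
        return W.clone()
    # indices of the k smallest contributions in each row
    _, idx = torch.topk(O, k, dim=1, largest=False)
    mask = torch.ones_like(W)
    mask.scatter_(1, idx, 0.0)                 # zero the pruned positions
    return W * mask
```

In [22] (and here), this rule is applied to every linear weight matrix of the Transformer blocks with the same sparsity ratio s.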
Second, among the different pruning strategies explored in [22], we adopted the unstructured pruning approach rather than the structured N:M sparsity pattern, in which at most N out of every M consecutive weights connected to an output are kept non-zero [26]. Although the latter can exploit NVIDIA GPU sparse tensor cores to accelerate matrix calculations, [22] demonstrated that unstructured pruning generally performs better and is more robust.
Finally, following [22], we used the same calibration data (the C4 training set [27]) to estimate the input distribution, compute the output contributions O_ij, and perform the pruning.
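For illustration, the calibration inputs can be drawn from C4 roughly as follows; this is a sketch assuming the Hugging Face datasets and transformers libraries, and the number of sequences, sequence length, and sampling details used in [22] may differ.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream a small number of C4 training documents as calibration data
# (128 sequences of up to 2048 tokens here; these numbers are illustrative).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
c4_stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

calibration = []
for sample in c4_stream:
    ids = tokenizer(sample["text"], return_tensors="pt",
                    truncation=True, max_length=2048).input_ids
    calibration.append(ids)
    if len(calibration) >= 128:
        break
```

The tokenized sequences are then run through the model so that the inputs X of each linear layer can be recorded and used in Eq. (1).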

3. Results

3.1. The Sparsity Limit to Prune LLMs, Evaluated by WikiText Perplexity

We varied the sparsity ratio s from 0 to 90% to probe the sparsity limit of pruning the Llama-2 7B model, which we denote as s*. Following [21,22,25], WikiText perplexity was used to measure the language modeling capability of the pruned models. As shown in Figure 2A, the perplexity increases slowly for sparsity ratios from 0 to 60%, remaining of the same order of magnitude (~10). From 60% to 90%, however, the perplexity grows exponentially. Furthermore, the perplexity of the full model is p_0 ≈ 5.12 (the red dot in Figure 2B), and we confirmed that the 60%-sparsity model, with fewer than half of the connections, has a perplexity p ≈ 10.05 < 2 p_0, while that of the 61%-sparsity model is above 2 p_0 (the blue-white dot in Figure 2B). This dramatic change around the sparsity ratio of 60%, together with the 2 p_0 threshold, suggests that with the current pruning method the sparsity limit is roughly s* = 60%. Altogether, these results showed that the Llama-2 7B model could be pruned by up to 60%; in other words, only 40% of the connections are required for relatively intact function.
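For reference, the WikiText perplexity reported above can be computed along the following lines. This is a simplified sketch with non-overlapping windows, assuming the Hugging Face transformers and datasets APIs and sufficient GPU memory; the exact evaluation script used in [21,22] may differ slightly.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

def wikitext_perplexity(model_name: str, seq_len: int = 2048) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto").eval()

    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

    nlls = []
    for i in range(0, ids.shape[1] - seq_len, seq_len):    # non-overlapping windows
        chunk = ids[:, i:i + seq_len].to(model.device)
        with torch.no_grad():
            out = model(chunk, labels=chunk)               # mean NLL over the window
        nlls.append(out.loss * seq_len)                    # approximate total NLL
    return torch.exp(torch.stack(nlls).sum() / (len(nlls) * seq_len)).item()
```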

3.2. Pruned Sparse LLM as AI Assistants: User Experience Examination

Next, we investigated the user experience of daily dialogue with the pruned Llama-2 model at the 60% sparsity limit, in order to evaluate the potential of pruned sparse LLMs as future on-device AI assistants. Specifically, to simulate the daily dialogue of mobile phone users, we posed several queries to the 60%-sparsity pruned model (a minimal generation sketch is given after the example queries below). Two example queries are:
  • “Tell me about Boston”;
  • “Describe the Python programming language, in terms of its syntax, history, user experience, and popularity”.
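The sketch below shows how such a query can be posed; it assumes the pruned weights have been saved to a hypothetical local path ./llama2-7b-pruned-60, and the decoding settings are illustrative rather than the exact configuration used in our experiments.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./llama2-7b-pruned-60"   # hypothetical path to the 60%-sparsity checkpoint
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, device_map="auto").eval()

prompt = "Tell me about Boston"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```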
As shown in Figure S1, the 60%-sparsity pruned model generated fluent, meaningful, and helpful answers that are largely indistinguishable from those of the full Llama-2 7B model. For fairness, ChatGPT-4o was used to rate the answers of both models, and the 60%-sparsity pruned model obtained good scores, slightly lower than those of the full model (6 vs. 7 and 7 vs. 8 out of 10; Figures S2–S5).
However, we noticed that the 60%-sparsity model sometimes produced factually incorrect statements (the red text in Figure S1). For example, when discussing the Python language, it wrongly claimed that “(Python) was created in 1990 by Guido van Rosenberg”, whereas the full model correctly stated that “(Python) was created by Guido van Rossum and first released in 1991”. Similarly, for the Boston-related query, the answer from the 60%-sparsity model also contained many factual inaccuracies, as marked by the red text in Figure S1 and identified by ChatGPT-4o (Figure S2).
This observation raises interesting theoretical questions about the nature of pruning (sparsification) and the mechanism generating the factual inaccuracies. With some weights zeroed out during pruning, are these factual mistakes due to forgetting (e.g., the pruned model forgot, and thus could not retrieve, the name of Guido van Rossum as the Python creator)? Or does the pruned model still remember this person’s name and his role as the Python inventor, but fail to retrieve them and hallucinate instead because of the missing weights? A similar phenomenon, known as “catastrophic forgetting”, has been observed and studied in the context of transfer learning [28,29] but is less explored in pruning. In the Discussion, we propose a simple protocol to distinguish between the two mechanisms (forgetting vs. hallucination) in future studies, as well as possible solutions to remedy this inaccuracy issue.
Last but not least, as mentioned in [21], the pruned model sometimes generated meaningless sentences, repetitive tokens, and even mixed multilingual characters. We also confirmed the importance of prompt engineering, as in many unpruned LLMs (see the review [30]). For instance, when we rephrased the Python-related question as “Can you explain briefly to me what is the Python programming language?”, the pruned model occasionally repeated the question and ended the dialogue. In the Discussion, we provide some potential methods to mitigate these problems.

4. Discussion

Summary. We adopted a simple yet effective pruning approach to sparsify the Llama-2 7B model and found that up to 60% of the weights could be pruned without a significant decrease in model performance. At this sparsity limit, the pruned model was able to generate fluent and helpful answers to daily-life queries, demonstrating its potential for local and lightweight AI assistants on user devices. Meanwhile, some factual inaccuracies were identified in the pruned model’s answers, whose mechanisms and remedies will be studied in the future.
Future directions. To distinguish forgetting from hallucination and account for the factual inaccuracies generated by the pruned model, note that the key difference between the two mechanisms is whether the model still retains the true answer. We therefore propose to test the pruned model with appropriate prompts after identifying its factual mistakes. For example, we may ask the 60%-sparsity model who created Python, or who Guido van Rossum is. If it correctly identifies Guido van Rossum as the Python creator instead of “Guido van Rosenberg”, we may conclude that the pruned model still remembers Guido van Rossum as the Python creator but simply failed to retrieve this name and generated the above hallucination. In reality, both mechanisms may coexist after pruning. Nevertheless, this attempt may serve as a small but useful step towards understanding the nature of pruning.
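A minimal version of this probing protocol might look as follows; the checkpoint path and probe prompts are illustrative, and greedy decoding is assumed so that the probe answers are deterministic.

```python
from transformers import pipeline

# Hypothetical local path to the 60%-sparsity checkpoint
ask = pipeline("text-generation", model="./llama2-7b-pruned-60")

# Probes issued after the factual error ("Guido van Rosenberg") was identified
probes = [
    "Who created the Python programming language?",
    "Who is Guido van Rossum?",
    "Was Python created by Guido van Rossum or by Guido van Rosenberg?",
]

for prompt in probes:
    answer = ask(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]
    # If the correct name appears, the fact is likely retained and the original
    # error was a hallucination; if it never appears, forgetting is more likely.
    print(f"{prompt}\n  -> {answer}\n  mentions 'van Rossum': {'van Rossum' in answer}\n")
```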
To eliminate these factual errors and improve the pruned models, we will adopt fine-tuning methods such as the one described in [31]. Another possible route is layer-wise pruning with different sparsity ratios: currently, all layers are pruned evenly with the same sparsity ratio; however, it has been recognized that different layers in deep neural networks are not equally important, and deeper layers can generally be pruned more aggressively [32,33,34,35,36]. Therefore, in the future, we will first estimate the significance of each layer as described in [35,36], and then assign a corresponding sparsity ratio to each layer for more efficient pruning.
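As a sketch of such a non-uniform allocation (assuming per-layer importance scores have already been computed, e.g., in the spirit of [35,36]; the linear allocation rule below is hypothetical, not a procedure from those works):

```python
import numpy as np

def allocate_sparsity(importance, avg_sparsity=0.6, s_min=0.3, s_max=0.9):
    """Map per-layer importance scores (higher = more important) to per-layer
    sparsity ratios: less important layers are pruned more aggressively."""
    imp = np.asarray(importance, dtype=float)
    # Invert and normalize the scores to [0, 1]
    inv = (imp.max() - imp) / (imp.max() - imp.min() + 1e-12)
    s = s_min + inv * (s_max - s_min)
    # Shift so that the mean sparsity matches the desired average, then clip
    s = np.clip(s + (avg_sparsity - s.mean()), s_min, s_max)
    return s

# Example: 8 layers with decreasing importance receive increasing sparsity ratios
print(allocate_sparsity(np.linspace(1.0, 0.2, 8)))
```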
In order to develop on-device AI assistants, we will also try more recent moderate-sized LLMs such as the newly released Llama-3 8B model [5]. Moreover, we will employ various techniques to further reduce the model size while preserving or boosting performance [37]. For instance, LLM quantization, an active research area, stores the parameters with fewer bits to reduce the model size and save resources [25,38,39]. Another example is low-rank factorization, which decomposes the weight matrices into smaller matrices [40]. Altogether, this work represents a small yet promising step towards on-device AI assistants, particularly when combined with fine-tuning, layer-specific pruning, and LLM compression techniques in the long run.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org.

References

  1. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [CrossRef]
  2. Anthropic. Introducing claude, 2023.
  3. Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. [CrossRef]
  4. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. [CrossRef]
  5. Meta AI. Meta llama 3, 2024.
  6. Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Juanzi Li, Lei Zhao, Lindong Wu, Lucen Zhong, Mingdao Liu, Minlie Huang, Peng Zhang, Qinkai Zheng, Rui Lu, Shuaiqi Duan, Shudan Zhang, Shulin Cao, Shuxun Yang, Weng Lam Tam, Wenyi Zhao, Xiao Liu, Xiao Xia, Xiaohan Zhang, Xiaotao Gu, Xin Lv, Xinghan Liu, Xinyi Liu, Xinyue Yang, Xixuan Song, Xunkai Zhang, Yifan An, Yifan Xu, Yilin Niu, Yuantao Yang, Yueyan Li, Yushi Bai, Yuxiao Dong, Zehan Qi, Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, and Zihan Wang. Chatglm: A family of large language models from glm-130b to glm-4 all tools, 2024.
  7. Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, et al. The falcon series of open language models. arXiv preprint arXiv:2311.16867, 2023. [CrossRef]
  8. Karl Friston. Hierarchical models in the brain. PLoS Computational Biology, 4(11):e1000211, 2008. doi:10.1371/journal.pcbi.1000211.
  9. Bruno A Olshausen and David J Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. doi:10.1038/381607a0.
  10. Ron A Jortner, S Sarah Farivar, and Gilles Laurent. A simple connectivity scheme for sparse coding in an olfactory system. Journal of Neuroscience, 27(7):1659–1669, 2007. [CrossRef]
  11. Cindy Poo and Jeffry S Isaacson. Odor representations in olfactory cortex:“sparse” coding, global inhibition, and oscillations. Neuron, 62(6):850–861, 2009. [CrossRef]
  12. Baktash Babadi and Haim Sompolinsky. Sparseness and expansion in sensory representations. Neuron, 83(5):1213–1226, 2014. [CrossRef]
  13. Evan S Schaffer, Dan D Stettler, Daniel Kato, Gloria B Choi, Richard Axel, and LF Abbott. Odor perception on the two sides of the brain: consistency despite randomness. Neuron, 98(4), 2018. [CrossRef]
  14. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
  15. Bo Liu, Shanshan Qin, Venkatesh Murthy, and Yuhai Tu. One nose but two nostrils: Learn to align with sparse connections between two olfactory cortices. ArXiv, 2024.
  16. Chulhee Yun, Yin-Wen Chang, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. O (n) connections are expressive enough: Universal approximability of sparse transformers. Advances in Neural Information Processing Systems, 33:13783–13794, 2020. [CrossRef]
  17. Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research, 22(241):1–124, 2021. [CrossRef]
  18. Sietsma and Dow. Neural net pruning-why and how. In IEEE 1988 international conference on neural networks, pages 325–333. IEEE, 1988.
  19. Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. Advances in neural information processing systems, 2, 1989.
  20. Babak Hassibi and David Stork. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, 5, 1992.
  21. Xinyin Ma, Gongfan Fang, and Xinchao Wang. Llm-pruner: On the structural pruning of large language models. Advances in neural information processing systems, 36:21702–21720, 2023.
  22. Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. arXiv preprint arXiv:2306.11695, 2023. [CrossRef]
  23. Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, and Danqi Chen. Sheared llama: Accelerating language model pre-training via structured pruning. arXiv preprint arXiv:2310.06694, 2023. [CrossRef]
  24. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Gpt3.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318–30332, 2022. [CrossRef]
  25. Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. Spqr: A sparse-quantized representation for near-lossless llm weight compression. arXiv preprint arXiv:2306.03078, 2023.
  26. Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius. Accelerating sparse deep neural networks. arXiv preprint arXiv:2104.08378, 2021.
  27. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020. [CrossRef]
  28. Robert M French. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128–135, 1999. [CrossRef]
  29. Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013. [CrossRef]
  30. Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in large language models: a comprehensive review. arXiv preprint arXiv:2310.14735, 2023. [CrossRef]
  31. Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199, 2021. [CrossRef]
  32. Sharath Girish, Shishira R Maiya, Kamal Gupta, Hao Chen, Larry S Davis, and Abhinav Shrivastava. The lottery ticket hypothesis for object recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 762–771, 2021. [CrossRef]
  33. Chao Jiang, Bo Hui, Bohan Liu, and Da Yan. Successfully applying lottery ticket hypothesis to diffusion model. arXiv preprint arXiv:2310.18823, 2023. [CrossRef]
  34. Bohan Liu, Zijie Zhang, Peixiong He, Zhensen Wang, Yang Xiao, Ruimeng Ye, Yang Zhou, Wei-Shinn Ku, and Bo Hui. A survey of lottery ticket hypothesis. arXiv preprint arXiv:2403.04861, 2024. [CrossRef]
  35. Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853, 2024. [CrossRef]
  36. Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, and Daniel A Roberts. The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887, 2024. [CrossRef]
  37. Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. A survey on model compression for large language models. arXiv preprint arXiv:2308.07633, 2023. [CrossRef]
  38. Zhihang Yuan, Lin Niu, Jiawei Liu, Wenyu Liu, Xinggang Wang, Yuzhang Shang, Guangyu Sun, Qiang Wu, Jiaxiang Wu, and Bingzhe Wu. Rptq: Reorder-based post-training quantization for large language models. arXiv preprint arXiv:2304.01089, 2023. [CrossRef]
  39. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Learning and Systems, 6:87–100, 2024. [CrossRef]
  40. Mingxue Xu, Yao Lei Xu, and Danilo P Mandic. Tensorgpt: Efficient compression of the embedding layer in llms based on the tensor-train decomposition. arXiv preprint arXiv:2307.00526, 2023. [CrossRef]
Figure 1. The per-output pruning method, compared with the magnitude-based, layer-wise approach (adapted from Fig. 1 in [22]). Given an LLM architecture and its Transformer blocks, this method extracts the weight matrices (left panel). For each matrix W (the 4×3 matrix in the middle serves as an illustration), its magnitude matrix |W| is multiplied entry-wise by the input ℓ2-norm vector ||X||_2 to generate the contribution O_ij to output i, according to Eq. (1); then, based on the sorted O_ij values, a fraction s of the weights in each row are pruned (the blue boxes, with s = 50% here). By contrast, the magnitude-based approach considers only |W| and is applied layer-wise, thus producing a different pruned matrix (right panel).
Figure 2. WikiText perplexity p versus sparsity ratio s, used to determine the sparsity limit s*. (A) When the sparsity ratio s < s* = 60% (the red line and the purple dot), the WikiText perplexity increases very slowly and stays within the same order of magnitude; for s > s*, however, it grows exponentially (y-axis: log scale). (B) The full model (s = 0) has a perplexity of p_0 = 5.12 (the red dot). The perplexity at s = 60% is p ≈ 10.05, lower than 2 p_0 (the red line), while that at s = 61% is p ≈ 11.05 > 2 p_0.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.