Preprint
Article

Energy Efficiency Evaluation of Artificial Intelligence Algorithms


A peer-reviewed article of this preprint also exists.

Submitted: 13 September 2024
Posted: 13 September 2024

Abstract

This article advances the discourse on sustainable and energy-efficient software by examining the performance and energy efficiency of intelligent algorithms within the framework of green and sustainable computing. Building on previous research, it explores the theoretical implications of Bremermann's Limit on efforts to enhance computer performance through more extensive methods. The study presents an empirical investigation into heuristic methods for search and optimisation, demonstrating the energy efficiency of various algorithms in both simple and complex tasks. It also identifies key factors influencing the energy consumption of algorithms and their potential impact on computational processes. Furthermore, the article discusses cognitive concepts and their interplay with computational intelligence, highlighting the role of cognition in the evolution of intelligent algorithms. The conclusion offers insights into the future directions of research in this area, emphasising the need for continued exploration of energy-efficient computing methodologies.

Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

The increasing enthusiasm for environmentally sustainable practices naturally extends to green computing, where software plays a pivotal role. Green computing encompasses a wide range of strategies, including optimising hardware and software design, reducing power consumption, employing renewable energy sources, enhancing software efficiency, virtualising servers, and managing electronic waste (e-waste) [1]. The goal of green computing is to improve the performance and energy efficiency of computational systems through both hardware and software optimisation.
Recent societal and environmental concerns have driven the focus towards Responsible Artificial Intelligence (RAI), which aims to develop energy-efficient intelligent software systems [2,3]. While Artificial Intelligence (AI) holds the potential to create a future where all of humanity can thrive, the energy consumption of Information Technologies (IT) – including portable devices, data centers, and cloud servers – has been escalating annually [4]. This surge in energy demand is reflected in global carbon emissions, as highlighted in recent global energy reviews [5,6].
The issue of computing energy efficiency requires a deeper examination. As noted in foundational research, there is a theoretical upper limit to the rate at which data processing can occur. This limit applies to all data processing systems, whether artificial or biological, and posits that “no data processing system, artificial or living, can process more than 2 × 10^47 bits per second per gram of its mass” [7]. The formulation of these computational constraints is grounded in fundamental physical principles: “The capacity of any closed information transmission or processing system does not exceed mc^2/h bits per second, where m is the mass of the system, c is the speed of light, and h is Planck’s constant” [8].
More recent studies have refined this understanding, suggesting that Bremermann’s limit, initially proposed in 1962, should be corrected to align with the principles of general relativity. The revised limit is expressed as (c^5/Gh)^(1/2) ≈ 10^43 bits per second, where c is the speed of light, G is the gravitational constant, and h is Planck’s constant [9]. The existence of Bremermann’s limit suggests that further improvements in computational performance will eventually encounter insurmountable physical barriers. Early signs that computing systems are nearing this threshold include the growing need for cooling systems to dissipate heat, increased electricity consumption, and changes in environmental heat pollution that may have irreversible effects on the climate.
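Both limits quoted above can be checked numerically. The short sketch below, using standard CODATA values for the physical constants, is an illustrative calculation only; it recovers the orders of magnitude quoted in the cited sources (~10^47 and ~10^43 bits per second).

```python
import math

# Physical constants (SI units, CODATA values)
c = 299_792_458.0        # speed of light, m/s
h = 6.626_070_15e-34     # Planck's constant, J*s
G = 6.674_30e-11         # gravitational constant, m^3 kg^-1 s^-2

# Bremermann's limit: mc^2/h bits per second, for m = 1 gram
m = 1e-3                               # one gram, in kg
bremermann = m * c**2 / h              # on the order of 10^47 bits/s

# Relativistically corrected limit: (c^5/Gh)^(1/2) bits per second
corrected = math.sqrt(c**5 / (G * h))  # on the order of 10^43 bits/s

print(f"mc^2/h for 1 g:  {bremermann:.2e} bits/s")
print(f"(c^5/Gh)^(1/2):  {corrected:.2e} bits/s")
```

The computed values land at the quoted orders of magnitude; any residual difference from the figures in [7,9] reflects rounding in the original estimates.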
A potential solution to these limitations is to enhance energy efficiency in a manner similar to natural systems, focusing on both hardware and software optimisation. There has been significant progress in the hardware energy efficiency of large computational systems, including those used by major AI services such as OpenAI: based on the LINPACK Benchmark, supercomputers have improved their hardware energy efficiency more than 200-fold over the past 20 years [10,11,12,13]. Comparable categorical advancements in software energy efficiency, however, remain elusive.
This article, therefore, focuses on the critical need for improving software energy efficiency, particularly in the context of AI and intelligent algorithms, which often handle high volumes of uncertain, time-dependent data.

2. Survey of Related Literature

While hardware energy efficiency has consistently improved over time [14,15], concerns about the energy efficiency of software have only recently emerged, as evidenced by a growing number of publications on the topic [16]. Software sustainability encompasses a range of applications, including specific software products, online applications, and data processing systems. This involves minimising power consumption and optimising the entire software lifecycle, considering human, economic, and energy resources [16].
Given the increasing demand for portable devices such as smartphones, tablets, and laptops, significant research efforts have focused on reducing their energy consumption by enhancing software quality through techniques like code refactoring, which restructures existing source code to be more energy-efficient [17,18,19]. A more advanced approach involves assessing software sustainability holistically, considering factors like efficiency, quality, and other critical properties [20,21,22]. This approach encourages software practitioners to prioritize sustainability during design and development [20], while offering systematic guidelines and frameworks that help professionals evaluate the sustainability impact of software [21,22].
Artificial Intelligence (AI) and intelligent systems, as sophisticated types of software, are fundamental to computing, cloud services, and data processing, influencing multiple aspects of life. Current research predominantly focuses on enhancing the accuracy and reliability of AI-based systems, which often requires vast datasets, large AI models, and resource-intensive infrastructures [23]. Recent studies have hypothesised that, when developing “green” AI systems, the impact of architectural decisions on energy efficiency must be better understood, managed, and reported in order to reduce computational power requirements [23].
Research also indicates that intelligent software development benefits from appropriate data abstractions, heuristic and metaheuristic algorithms, and the reduction of outdated limitations [24,25]. This allows for the adaptation of intelligent software to solve tasks with minimal computational resources, whether these tasks are relatively simple [24,26] or involve problems with a high number of parameters [27,28]. Consequently, there has been a shift towards developing sustainable, green AI-based software systems that utilize architecture-centric methods to model and develop energy-efficient AI systems [23]. However, these approaches often overlook essential properties of natural intelligence, such as cognition and adaptation, which are discussed further in this article.
Furthermore, a comprehensive study on design patterns for machine learning applications identifies 15 distinct patterns, such as solutions for real-time data processing and continuous reprocessing or storing data in a raw format [29]. However, these patterns are generally limited to analytical programming and fail to incorporate elements like abstraction and intuition, which are integral to natural intelligence. Evaluating AI systems’ sustainability and quality also requires measuring and assessing software products and components’ energy efficiency. Recent research efforts have focused on developing a Green Software Measurement Model to categorize existing measurement methods and create adapted methods for specific use cases, such as software types and system components [30]. This model has been adapted for the empirical research and evaluation of experimental software presented in this article.
Another aspect that requires further exploration is the impact of programming language choice on software energy efficiency. Recent studies have shown significant variations in energy consumption depending on the language and compiler used [31]. This highlights the need for more research into the influence of not only computer languages but also human languages on software energy consumption.
There are growing concerns regarding the current trajectory of AI, machine learning, and deep learning due to their exponentially increasing demand for data, training, and infrastructure [32,33]. These trends conflict with emerging regulations and requirements for efficiency and sustainability [34,35], as well as with the natural laws of selection [36]. Achieving harmony with these natural laws could enhance the sustainability of AI systems, prompting key questions, such as: How do biological systems manage data storage and transmission efficiently? Understanding these principles could inform the design of more sustainable AI.
In software engineering, especially for high-performance computing (HPC) systems, achieving a balance between energy efficiency and performance has become a critical non-functional requirement. Software developers must thoroughly understand both the problem domain and the target computer architecture, considering various programming models, languages, tools, and heterogeneous systems, which increases development complexity [37]. AI applications, in particular, demand high performance and energy efficiency, necessitating specialised knowledge from developers. Therefore, methodologies and tools that assist both specialised and general developers are crucial for optimising HPC systems. The time and energy consumption measurement approach discussed in this paper could be invaluable for evaluating intelligent computing and AI software systems.
In AI, computational intelligence, and software development, it is often observed that the same task can be accomplished using different resources and timeframes, similar to the behaviour of biological species. Examples in software include sorting algorithms [38] and adaptive heuristic algorithms in computational intelligence [39]. To enhance intelligent systems, genetic, swarm, evolutionary, heuristic, metaheuristic, and adaptive algorithms are promising. For instance, a study comparing over ten metaheuristic algorithms optimised by swarm intelligence for code smell detection demonstrated notable advancements in these algorithms’ performance [40]. However, this research also highlighted common limitations among these metaheuristics, suggesting the need for further improvements.
The ultimate goal of AI is to develop technology that enables machines to operate in highly intelligent ways [41]. This objective drives the creation of new algorithms and large, high-quality datasets. However, it remains challenging for AI systems to address all potential real-world scenarios fully. Therefore, a critical question is how to harness these uncertainties to ensure socially responsible behaviour in AI algorithms [42]. Defining AI in a manner that aligns with social responsibility remains a significant challenge, and this study questions whether AI can be considered socially responsible based on its energy efficiency and sustainability, necessitating further comprehensive research [43].

3. Materials, Tools and Methods

The selection of methodology and experimental settings for this study was guided by four key principles: minimising energy use, eliminating energy waste, thoroughly evaluating software processes, and avoiding specific settings that might unduly favour certain tasks.
To adhere to these principles, three algorithms and seven test problems were chosen, all of which have been previously studied with results documented in the literature. The selection of test problems was based on the following criteria:
  • The tests must involve problems with unknown optimal solutions.
  • The tests should be scalable to multidimensional formats.
  • The tests should feature heterogeneous landscapes.
The selected numerical tests meet these criteria, offering scalability and varying search spaces. Each test was configured with 100 parameters. The tests include:
  • Griewank Test: A global optimisation problem with an optimal value of 0 [43].
  • Michalewicz Test: A global test with an unknown optimum that varies depending on the number of dimensions [44].
  • Norwegian Test: Another global test with an unknown optimum influenced by dimensionality [45].
  • Rastrigin Test: A global optimisation problem with an optimal value of 0 [46].
  • Rosenbrock Test: A smooth, flat test with a single solution and an optimal value of 0 [47].
  • Schwefel Test: A global optimisation problem with an optimal value of 0 [48].
  • Step Test: A test that introduces plateaus into the topology, which prevents reliance on local correlation in the search process. Its optimal value depends on the number of dimensions and may be unknown for various dimensions [49].
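To make the benchmark definitions concrete, two of the tests above can be sketched in Python for an arbitrary number of dimensions. The formulas follow the standard definitions from the cited literature; the 100-parameter configuration used in the experiments corresponds to a 100-element input vector.

```python
import math

def griewank(x):
    """Griewank function: global optimum 0 at x = (0, ..., 0)."""
    s = sum(xi**2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return 1.0 + s - p

def rastrigin(x):
    """Rastrigin function: global optimum 0 at x = (0, ..., 0)."""
    return 10.0 * len(x) + sum(xi**2 - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

# Both tests scale to any dimensionality, e.g. the 100 parameters used here.
origin = [0.0] * 100
print(griewank(origin), rastrigin(origin))  # both evaluate to 0 at the optimum
```

The remaining tests (Michalewicz, Norwegian, Rosenbrock, Schwefel, Step) follow the same pattern: a scalar function of an n-dimensional vector, evaluated once per candidate solution per iteration.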
In alignment with the study’s principles, three algorithms were selected:
  • Particle Swarm Optimisation (PSO): A swarm-based algorithm for real-coded tasks over continuous spaces [50].
  • Differential Evolution (DE): A heuristic algorithm designed for optimising nonlinear and non-differentiable functions in continuous spaces [51].
  • Free Search (FS): An adaptive heuristic algorithm for search and optimisation within continuous spaces [52].
All algorithms were configured to operate on 10 candidate solutions, with a limit of 100,000 iterations over 320 sequential runs. The experiments aimed to measure both the processing time and energy consumption required to complete the specified number of iterations. Previous publications provide detailed evaluations of these algorithms in similar contexts [27,39,45,52].
The experiments were conducted on a computer system with the following specifications: an Intel XEON E5 1660 V2 processor overclocked to 4.750 GHz, operating in a 1 core – 1 thread configuration with a maximum thermal design power (TDP) of 130 W. The system was equipped with a CPU water cooler, RAM set at 2000 MHz, an ASUS P9X79-E WS motherboard, and a SanDisk Extreme SSD SATA III solid-state drive. Each experiment was executed individually, with one algorithm applied to a single test function at a time to ensure accurate measurement of performance and energy consumption.
By adhering to these methodological principles and experimental setups, the study aims to provide robust, replicable, and meaningful insights into the efficiency of different optimisation algorithms under real-world conditions.

Methodology

This study adopts a modified version of the Green Software Measurement Model as proposed in prior literature [30]. The model was adapted to align with the specific context and requirements of this research by focusing on the following key parameters:
  • Duration (min): Time taken for each experiment.
  • Number of iterations (integer): The count of repeated cycles for each algorithm.
  • Mean system power (W): Average power consumption of the entire system.
  • System energy (Wh): Total energy consumption over time.
  • CPU usage (%): The percentage of CPU utilization during algorithm execution.
  • CPU power (W): Power consumption specific to the CPU.
  • CPU cores (1 core - 1 thread): Number of active cores and threads during processing.
While the complete implementation of the Green Software Measurement Model as described in [30] is beyond the scope of this study, it remains a promising direction for future research, contingent on the availability of the required resources.
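The adapted parameter set can be captured in a small record type, with the system-energy figure derived directly from mean power and duration (Wh = W × min / 60). The field names and the example values below are this sketch's own, not part of the GSMM as defined in [30].

```python
from dataclasses import dataclass

@dataclass
class MeasurementRecord:
    """Adapted subset of the Green Software Measurement Model parameters."""
    duration_min: float          # experiment duration, minutes
    iterations: int              # number of algorithm iterations
    mean_system_power_w: float   # average whole-system power, watts
    cpu_usage_percent: float     # CPU utilisation during execution
    cpu_power_w: float           # CPU-specific power, watts
    cores: int = 1               # 1 core - 1 thread configuration
    threads: int = 1

    @property
    def system_energy_wh(self) -> float:
        # Total energy = mean power (W) x duration (h)
        return self.mean_system_power_w * self.duration_min / 60.0

# Illustrative record using the full-load power level reported below:
rec = MeasurementRecord(duration_min=120, iterations=100_000,
                        mean_system_power_w=185.0,
                        cpu_usage_percent=100.0, cpu_power_w=42.8)
print(rec.system_energy_wh)  # 185 W sustained for 2 h -> 370.0 Wh
```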
To measure power consumption, a digital Power Consumption Energy Meter was used to monitor the entire system’s power usage [53]. The power consumption of each algorithm was calculated as the difference between the system’s power level during the algorithm’s execution and the baseline power level recorded when the system was in standby mode, with only the operating system components and tools for CPU parameter measurement (CPU-Z) [54] and CPU core temperature monitoring (Core Temp) [55] running. These monitoring tools were kept active throughout all experiments, running concurrently with the algorithms.
Each experiment was restricted to a maximum of 100,000 iterations, with the duration recorded manually at the start and the end time automatically logged as an attribute of the results file. This approach ensured that the time-tracking process did not interfere with the performance of the algorithms or the system configuration.

4. Results

The experimental results are summarised in Table 1, which presents the performance metrics of three optimisation algorithms: Particle Swarm Optimisation (PSO), Differential Evolution (DE), and Free Search (FS). The table outlines the time taken by each algorithm to complete the experiments, recorded in hours, minutes, and seconds (hh:mm:ss).
The mean system power consumption in standby mode, with the Task Manager reporting 0% workload, was measured at 166 W at the socket. Under these conditions, the CPU package and core power consumption for a single core with one thread were recorded at 33.4 W and 21.5 W, respectively.
At full workload (100%) for all experiments reported in Table 1, the mean system power consumption increased to 185 W, with a variation of 3% over the execution period. This variation could potentially be attributed to changes in temperature or other environmental factors, which warrants further investigation. Under full workload conditions, the CPU package and core power consumption for a single core with one thread were measured at 42.8 W and 30.4 W, respectively.
The data presented in Table 1 indicate that execution time varies per test and per algorithm.
The analysis of the experimental data reveals that the evaluation time per test is directly influenced by the complexity of the search space. More complex search spaces require longer evaluation times. Additionally, the duration of the search process varies depending on the capabilities and characteristics of the algorithms used.
Figure 1 illustrates the time taken per test, while Figure 2 shows the time required by each algorithm to complete 100,000 iterations for a selected test case. Among the algorithms analysed, Particle Swarm Optimisation consistently required the most time across all tests. Differential Evolution exhibited a moderate range of time consumption, while the Free Search algorithm completed all tests the fastest. Notably, the results for the Schwefel and Step tests (Figure 2) suggest the presence of specific factors that may affect exploration time, indicating that certain features of these functions could be influencing the efficiency of the search process.
During the evaluation period, three distinct components can be considered:
  • Time for Objective Function Evaluation: The duration required to compute the objective value at the visited locations of the search space.
  • Time for Algorithm Execution: The time taken by the algorithm to interpret and assess the evaluated search space.
  • Time for Algorithm Decision Making: The duration needed for the algorithm to make decisions and select subsequent actions.
The energy consumption data presented in Table 2 are calculated from the energy used by the algorithms: the difference in power consumption between 100% workload during the experiments and 0% workload in standby mode, multiplied by the time taken to complete each experiment. Since different algorithms take varying amounts of time to complete the same task, their energy consumption differs accordingly. An analysis aimed at identifying systematic relationships between these components, summarised in Table 3, reveals only general qualitative differences.
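The calculation behind Table 2 can be stated explicitly: the net energy attributed to an algorithm is the load/standby power difference multiplied by the run time. The sketch below uses the power levels reported above (185 W at full load, 166 W in standby); the two durations are illustrative, not values from Table 1.

```python
def algorithm_energy_wh(load_power_w, standby_power_w, duration_h):
    """Net energy attributed to the algorithm, in watt-hours."""
    return (load_power_w - standby_power_w) * duration_h

# Measured system power levels from the experiments
P_LOAD, P_STANDBY = 185.0, 166.0  # watts

# Two algorithms completing the same task in different times
# (illustrative durations)
fast_run = algorithm_energy_wh(P_LOAD, P_STANDBY, duration_h=1.5)
slow_run = algorithm_energy_wh(P_LOAD, P_STANDBY, duration_h=4.0)
print(fast_run, slow_run)  # 28.5 Wh vs 76.0 Wh for the same task
```

Because the power delta is fixed by the hardware configuration, the energy ranking of the algorithms in Table 2 follows directly from their time ranking in Table 1.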
The relative time differences (expressed as percentages in Table 3) generally suggest that the Differential Evolution (DE) algorithm is faster than Particle Swarm Optimisation (PSO) across all tests, while the Free Search (FS) algorithm is faster than both PSO and DE. However, the magnitude of these differences varies significantly, and no precise systematic relationship can be identified for each test or algorithm. A more detailed quantitative analysis could be the focus of future research.

5. Discussion

This section critically examines the results of the study, interpreting them in light of previous research and exploring the role of intelligent algorithms in enhancing energy efficiency. The findings corroborate and, to some extent, clarify earlier studies [27] that investigate computational limitations, energy consumption, and processing time in intelligent algorithms. While it is evident that variations in efficiency can be attributed to differences in software design, implementation, and execution, it is also essential to understand how different software engineering techniques can embody intelligent behaviour.
To ground this analysis, we first turn to epistemological frameworks [56,57]. Models such as Data-Information-Knowledge (DIK) [58] and Data-Information-Knowledge-Wisdom (DIKW) [59] provide valuable insights into how intelligent beings and systems perceive and interact with their environment. These models illustrate a hierarchical process wherein data is generated and pre-processed to abstract essential information, which can then be further refined into knowledge for future use. This hierarchical abstraction not only reduces the amount of data that needs to be stored but also accelerates the processing of familiar cases while enabling adaptation to new ones. Both factors significantly enhance the efficiency and sustainability of intelligent entities. Translating this process into intelligent computing software can thereby contribute to the overall sustainability of AI systems.
Among the various definitions of knowledge, the one most applicable to software design and implementation is “Knowledge is the perception of the agreement or disagreement of two ideas” [60,61,62]. This conceptualisation is crucial for understanding how cognitive processes can strengthen machine learning algorithms and improve the sustainability of intelligent systems. According to the literature, “Knowledge of the external world can be obtained either by intuition or by abstraction” [63]. Understanding these cognitive processes, particularly intuition and abstraction, is pivotal for advancing the process of machine learning and knowledge construction.
William of Ockham provides a useful distinction between intuitive and abstractive cognition [64]:
  • Intuitive cognition involves the immediate apprehension that allows the intellect to make evident judgments about the existence or qualities of an object.
  • Abstractive cognition, on the other hand, is an act of cognition where such judgments cannot be evidently made.
Applying these concepts to the “black box” model facilitates the operation of adaptive heuristic algorithms such as Free Search, which can perform more efficiently across heterogeneous landscapes and tasks. Faster performance directly translates to better energy efficiency. For simple tasks, high computational intelligence and competition between different algorithms and systems enhance software sustainability and energy efficiency. For more complex problems – such as those involving a search space exceeding 10^1,000,000 possible locations, where exploration time could approach infinity – adaptive intelligent behaviour becomes crucial in minimising both time and energy consumption. An example is the application of Free Search to optimisation tasks involving 100,000 parameters, which achieved notable efficiency gains [28].
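The search-space figure above follows from simple combinatorics: if each of 100,000 parameters is resolved to, say, 10^10 distinct values (an assumed discretisation, for illustration only), the number of candidate locations is (10^10)^100,000 = 10^1,000,000. Working in logarithms keeps the arithmetic tractable:

```python
import math

parameters = 100_000
values_per_parameter = 10**10   # assumed discretisation, for illustration only

# log10 of the search-space size: parameters * log10(values per parameter)
log10_space = parameters * math.log10(values_per_parameter)
print(f"search space ~ 10^{log10_space:,.0f} locations")
```

At this scale exhaustive enumeration is physically impossible even at Bremermann's limit, which is why adaptive exploration rather than raw throughput determines both time and energy cost.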
Future research should focus on developing new models and software implementations that enhance machine learning, intelligent computing, environmental interaction, knowledge construction, and adaptive behaviour. These advancements are critical for creating more efficient and sustainable AI systems.

6. Conclusions

This study contributes to the ongoing discourse on sustainable and energy-efficient software, specifically addressing the question: Can Artificial Intelligence (AI) be classified as socially responsible based on its energy efficiency and sustainability? While this question remains open and requires further comprehensive research, our findings provide a foundational perspective on the energy efficiency of computing systems, highlighting several key aspects:
  • The overall growth in energy consumption by computational systems poses significant challenges, especially considering the fundamental physical limitations that, if left unaddressed, could lead to global negative consequences.
  • Although there have been positive changes in hardware energy efficiency, the sustainability of software, particularly the energy efficiency of intelligent algorithms, plays a critical role.
  • Our empirical evaluation demonstrates the variation in time and energy consumption of intelligent, adaptive algorithms applied to heterogeneous numerical tests, revealing substantial differences in energy efficiency and speed when different algorithms are used for the same tasks.
  • The study identifies potential benefits of time- and energy-efficient software, underscoring the importance of optimising computational processes to reduce their environmental impact.
  • The discussion on the interrelationship between concepts, computational intelligence, and the role of cognition in advancing intelligent algorithms further elucidates the complexities involved in this area of study.
While the study provides valuable insights, it also raises several questions that remain unanswered and merit further investigation, such as the sustainability of other algorithms, the energy efficiency of algorithms when applied to real-world problems, and the broader contribution of intelligent computing to green computing. Future research should aim for more precise quantitative analyses, focusing on the evaluation and improvement of a wide range of software products and services to promote energy-efficient and sustainable computing.
By exploring these avenues, this study hopes to contribute to a deeper understanding of the potential for AI and other intelligent computing solutions to align with principles of social responsibility and environmental sustainability.

Author Contributions

Conceptualization - Kalin Penev & Alexander Gegov; methodology, software and validation - Kalin Penev; formal analysis - Kalin Penev & Olufemi Isiaq; investigation, resources and data curation - Kalin Penev; writing - original draft preparation - Kalin Penev; writing - review and editing - Kalin Penev & Alexander Gegov & Olufemi Isiaq & Raheleh Jafari. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Paul, S.G.; Saha, A.; Arefin, M.S.; Bhuiyan, T.; Biswas, A.A.; Reza, A.W.; Alotaibi, N.M.; Alyami, S.A.; Moni, M.A. A Comprehensive Review of Green Computing: Past, Present, and Future Research. IEEE Access. 2023, 11, 87445–87494. [Google Scholar] [CrossRef]
  2. Cheng, L.; Varshney, K.R.; Liu, H. Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research. 2021, 71, 1137–1181. [Google Scholar] [CrossRef]
  3. Lee, S.U.; Fernando, N.; Lee, K.; Schneider, J. A survey of energy concerns for software engineering. Journal of Systems and Software. 2024, 210, 111944. [Google Scholar] [CrossRef]
  4. Naumann, S.; Dick, M.; Kern, E.; Johann, T. The GREENSOFT Model: A reference model for green and sustainable software and its engineering. Sustainable Computing: Informatics and Systems. 2011, 1, 294–304. [Google Scholar] [CrossRef]
  5. Raimi, D.; Zhu, Y.; Newell, R.G.; Prest, B.C. Global Energy Outlook 2024: Peaks or Plateaus? 2024. Available online: https://www.rff.org/publications/reports/global-energy-outlook-2024/ (accessed on 10 June 2024).
  6. IEA, GlobalEnergyReview2021. Available online: https://www.iea.org/reports/global-energy-review-2021 (accessed on 10 June 2024).
  7. Bremermann, H.J. Optimization through Evolution and Recombination. Self-organizing Systems, M.C. Yovits, G.T. Jacobi and G.D. Goldstein, Eds. Washington, DC: Spartan Books. 1962, 93-106.
  8. Bremermann, H.J. Quantum noise and information. Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, University of California Press Berkeley, CA. 1967, 4, 15–20. [Google Scholar]
  9. Gorelik, G. Bremermann’s Limit and cGh-physics. arXiv: General Relativity and Quantum Cosmology, arXiv:General Relativity and Quantum Cosmology. 2009. [CrossRef]
  10. Top500A 2024, FRONTIER - HPE CRAY EX235A, AMD OPTIMIZED 3RD GENERATION EPYC 64C 2GHZ, AMD INSTINCT MI250X, SLINGSHOT-11. Available online: https://top500.org/system/180047/ (accessed on 15 May 2024).
  11. Top500B, 2024, BLUEGENE/L - ESERVER BLUE GENE SOLUTION. Available online: https://top500.org/system/174210/ (accessed on 15 May 2024).
  12. Top500C, 2024, JEDI - BULLSEQUANA XH3000, GRACE HOPPER SUPERCHIP 72C 3GHZ, NVIDIA GH200 SUPERCHIP, QUAD-RAIL NVIDIA INFINIBAND NDR200. Available online: https://top500.org/system/180269/ (accessed on 15 May 2024).
  13. Dongarra, J. 2007, Frequently Asked Questions on the Linpack Benchmark and Top500. Available online: https://www.netlib.org/utk/people/JackDongarra/faq-linpack.html (accessed on 19 January 2024).
  14. JVA Initiative Committee and Iowa State University, 2011, ATANASOFF BERRY COMPUTER. Available online: https://jva.cs.iastate.edu/operation.php (accessed on 12 June 2024).
  15. Freiberger, Paul A. and Swaine, Michael R.. “Atanasoff-Berry Computer”. Encyclopedia Britannica, 20 Mar. 2023. Available online: https://www.britannica.com/technology/Atanasoff-Berry-Computer (accessed on 11 June 2024).
  16. Calero, C.; Mancebo, J.; Garcia, F.; Moraga, M.A.; Berna, J.A.G.; Fernandez-Aleman, J.L.; Toval, A. 5Ws of green and sustainable software. Tsinghua Science and Technology. 2020, 25, 401–414. [Google Scholar] [CrossRef]
  17. Gottschalk, M.; Jelschen, J.; Winter, A. Energy-Efficient Code by Refactoring. Softwaretechnik-Trends. 2013, 33. [Google Scholar] [CrossRef]
  18. Sanlıalp, İ.; Öztürk, M.M.; Yiğit, T. Energy Efficiency Analysis of Code Refactoring Techniques for Green and Sustainable Software in Portable Devices. Electronics. 2022, 11, 442. [Google Scholar] [CrossRef]
  19. Anwar, H.; Pfahl, D.; Srirama, S.N. Evaluating the impact of code smell refactoring on the energy consumption of Android applications. 2019 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Kallithea, Greece, 2019, 82-86. [CrossRef]
  20. Noman, H.; Mahoto, N.; Bhatti, S.; Rajab, A.; Shaikh, A. Towards sustainable software systems: A software sustainability analysis framework. Information and Software Technology. 2024, 169, 107411. [Google Scholar] [CrossRef]
  21. Heldal, R.; Nguyen, N.; Moreira, A.; Lago, P.; Duboc, L.; Betz, S.; Coroamă, V.C.; Penzenstadler, B.; Porras, J.; Capilla, R.; Brooks, I.; Oyedeji, S.; Venters, C.C. Sustainability competencies and skills in software engineering: An industry perspective. Journal of Systems and Software. 2024, 211, 111978. [Google Scholar] [CrossRef]
  22. Venters, C.C.; Capilla, R.; Nakagawa, E.Y.; Betz, S.; Penzenstadler, B.; Crick, T.; Brooks, I. Sustainable software engineering: Reflections on advances in research and practice. Information and Software Technology. 2023, 164, 107316. [Google Scholar] [CrossRef]
  23. Martínez-Fernández, S.; Franch, X.; Durán, F. Towards green AI-based software systems: an architecture-centric approach (GAISSA). 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Durres, Albania. [CrossRef]
  24. Penev, K.; Littlefair, G. Free Search—a comparative analysis. Information Sciences. 2005, 172, 173–193. [Google Scholar] [CrossRef]
  25. Penev, K.; Gegov, A. (Eds.) Free Search of Real Value or How to Make Computers Think. St. Qu. [Google Scholar]
  26. Vasileva, V.; Penev, K. Free search of global value. 2012 6th IEEE International Conference Intelligent Systems, Sofia, Bulgaria, 2012. [Google Scholar]
  27. Penev, K. Free Search in Multidimensional Space III. Lirkov, I., Margenov, S. (eds) Large-Scale Scientific Computing. LSSC 2017. Lecture Notes in Computer Science, Springer, 2018, 10665, 399–407. [Google Scholar]
  28. Penev, K. 2022, An optimal value for 100 000-dimensional Michalewicz test. Available online: https://pure.solent.ac.uk/files/33733992/100_000_dimensional_Michalewicz_test_2.pdf (accessed on 12 June 2024).
  29. Washizaki, H.; Khomh, F.; Gueheneuc, Y.; Takeuchi, H.; Natori, N.; Doi, T.; Okuda, S. Software-Engineering Design Patterns for Machine Learning Applications. Computer. 2022, 55, 30–39. [Google Scholar] [CrossRef]
  30. Guldner, A.; Bender, R.; Calero, C.; Fernando, G.S.; Funke, M.; Gröger, J.; Hilty, L.M.; Hörnschemeyer, J.; Hoffmann, G.; Junger, D.; Kennes, T.; Kreten, S.; Lago, P.; Mai, F.; Malavolta, I.; Murach, J.; Obergöker, K.; Schmidt, B.; Tarara, A.; De Veaugh-Geiss, J.P.; Weber, S.; Westing, M.; Wohlgemuth, V.; Naumann, S. Development and evaluation of a reference measurement model for assessing the resource and energy efficiency of software products and components—Green Software Measurement Model (GSMM). Future Generation Computer Systems. 2024, 155, 402–418. [Google Scholar] [CrossRef]
  31. Koedijk, L.; Oprescu, A. Finding Significant Differences in the Energy Consumption when Comparing Programming Languages and Programs. 2022 International Conference on ICT for Sustainability (ICT4S), Plovdiv, Bulgaria, 2022, 1-12. [CrossRef]
  32. Wu, C.; Raghavendra, R.; Gupta, U.; Acun, B.; Ardalani, N.; Maeng, K.; Chang, G.; Behram, F.A.; Huang, J.; Bai, C.; Gschwind, M.; Gupta, A.; Ott, M.; Melnikov, A.; Candido, S.; Brooks, D.; Chauhan, G.; Lee, B.; Lee, H.S.; Akyildiz, B.; Balandat, M.; Spisak, J.; Jain, R.; Rabbat, M.; Hazelwood, K. Sustainable AI: Environmental Implications, Challenges and Opportunities. arXiv preprint arXiv:2111.00364, 2021. [CrossRef]
  33. Patterson, D.; Gonzalez, J.; Le, Q.; Liang, C.; Munguia, L.; Rothchild, D.; So, D.; Texier, M.; Dean, J. Carbon Emissions and Large Neural Network Training. arXiv preprint arXiv:2104.10350, 2021. [Google Scholar] [CrossRef]
  34. EU, "Regulation (EU) 2020/852 of the European Parliament and of the Council of 18 June 2020 on the establishment of a framework to facilitate sustainable investment, and amending Regulation (EU) 2019/2088 (Text with EEA relevance)," Official Journal of the European Communities, vol. 198, pp. 13–43, 2020. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32020R0852&from=EN (accessed on 13 June 2024).
  35. EU, AI Act, 2024. Available online: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed on 13 June 2024).
  36. Darwin, C., On the Origin of Species by Means of Natural Selection. New York: D. Appleton and Company, 443 & 445 Broadway. MDCCCLXI. Available online: https://darwin-online.org.uk/converted/pdf/1861_OriginNY_F382.pdf (accessed on 13 June 2024).
  37. Pinto, P.; Bispo, J.; Cardoso, J.M.P.; Barbosa, J.G.; Gadioli, D.; Palermo, G.; Martinovic, J.; Golasowski, M.; Slaninova, K.; Cmar, R.; Silvano, C. Pegasus: Performance Engineering for Software Applications Targeting HPC Systems. IEEE Transactions on Software Engineering. 2022, 48, 732–754. [Google Scholar] [CrossRef]
  38. GG, 2024, Sorting Algorithms. Available online: https://www.geeksforgeeks.org/sorting-algorithms/ (accessed on 13 June 2024).
  39. Penev, K. Free Search – comparative analysis 100. International Journal of Metaheuristics. 2014, 3, 118–132. [Google Scholar] [CrossRef]
  40. Jain, S.; Saha, A. Improving and comparing performance of machine learning classifiers optimized by swarm intelligent algorithms for code smell detection. Science of Computer Programming. 2024, 237, 103140. [Google Scholar] [CrossRef]
  41. Deng, L. Artificial Intelligence in the Rising Wave of Deep Learning: The Historical Path and Future Outlook [Perspectives]. IEEE Signal Processing Magazine. 2018, 35, 177–180. [Google Scholar] [CrossRef]
  42. Legg, S.; Hutter, M. A Collection of Definitions of Intelligence. Proceedings of the 2007 Conference on Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006. 2007, 17–24. [CrossRef]
  43. Griewank, A.O. Generalized Descent for Global Optimization. Journal of Optimization Theory and Applications. 1981, 34, 11–39. [Google Scholar] [CrossRef]
  44. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs. Springer, Berlin, Heidelberg, 1996. [CrossRef]
  45. Penev, K. Free Search in Multidimensional Space II. Dimov, I., Fidanova, S., Lirkov, I. (eds) Numerical Methods and Applications. NMA 2014, Lecture Notes in Computer Science, Springer. 2015, 8962, 103–111. [Google Scholar] [CrossRef]
  46. Mühlenbein, H.; Schomisch, M.; Born, J. The Parallel Genetic Algorithm as Function Optimizer. Parallel Computing. 1991, 17, 619–632. [Google Scholar] [CrossRef]
  47. Rosenbrock, H. An automatic method for finding the greatest or least value of a function. The Computer Journal. 1960, 3, 175–184. [Google Scholar] [CrossRef]
  48. Schwefel, H.P. Numerical Optimization of Computer Models. John Wiley & Sons, 1981. [Google Scholar]
  49. De Jong, K. An Analysis of the Behavior of a Class of Genetic Adaptive Systems. PhD Thesis, University of Michigan, 1975.
  50. Kennedy, J.; Eberhart, R. Particle Swarm Optimisation. Proceedings of the IEEE International Conference on Neural Networks. 1995, 4, 1942–1948. [Google Scholar] [CrossRef]
  51. Storn, R.; Price, K. Differential Evolution – A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. Journal of Global Optimization. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  52. Penev, K. Adaptive Heuristic Applied to Large Constraint Optimisation Problem. Lirkov, I., Margenov, S., Waśniewski, J. (eds) Large-Scale Scientific Computing. LSSC 2007. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg. 2008, 4818. [CrossRef]
  53. GMM-DDS108 (KWE-PM01) Digital Power Consumption Energy Meter UK Plug Socket, 2024. Available online: https://testmeter.sg/webshaper/pcm/files/Data%20Sheet/GMM-DDS108-KWE-PM01-UK.pdf (accessed on 17 June 2024).
  54. CUPID, 2024, CPU-Z for Windows® x86/x64. Available online: https://www.cpuid.com/softwares/cpu-z.html (accessed on 17 June 2024).
  55. CoreTemp, 2024, Core Temp 1.18.1. Available online: https://www.alcpu.com/CoreTemp/ (accessed on 17 June 2024).
  56. Stroll, Avrum and Martinich, A.P.. “epistemology”. Encyclopedia Britannica, 19 Apr. 2024. Available online: https://www.britannica.com/topic/epistemology (accessed on 18 June 2024).
  57. Steup, M.; Neta, R. Epistemology. The Stanford Encyclopedia of Philosophy (Spring 2024 Edition), Edward N. Zalta & Uri Nodelman (eds.), Metaphysics Research Lab, Stanford University. 2024. Available online: https://plato.stanford.edu/archives/spr2024/entries/epistemology/ (accessed on 18 June 2024).
  58. Zins, C. Conceptual approaches for defining data, information, and knowledge. Journal of the American Society for Information Science and Technology. 2007, 58, 479–493. [Google Scholar] [CrossRef]
  59. Rowley, J. The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science. 2007, 33, 163–180. [Google Scholar] [CrossRef]
  60. Locke, J. An Essay Concerning Human Understanding, 1689. ISBN 0-87220-217-8.
  61. Nonaka, I.; Takeuchi, H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, 1995. [Google Scholar]
  62. Davenport, T.H.; Prusak, L. Working Knowledge: How Organizations Manage What They Know. Boston, MA: Harvard Business School Press, 1998. [Google Scholar] [CrossRef]
  63. Tommaso Campanella (1568-1639), Encyclopaedia of Philosophy, Macmillan, New York, Vol.2, p. 12.
  64. William of Ockham (1285 – 1349), Encyclopaedia of Philosophy, Macmillan, New York, Vol.8, p. 308.
Figure 1. Tests time comparison per algorithm.
Figure 2. Algorithms time comparison per test.
Table 1. Time for execution of 100-dimensional version of the tests.
Test          PSO (hh:mm:ss)   DE (hh:mm:ss)   FS (hh:mm:ss)
Griewank      01:45:00         00:41:00        00:14:00
Michalewicz   02:44:00         01:46:00        01:02:00
Norwegian     01:50:00         00:47:00        00:12:00
Rastrigin     01:46:00         00:45:00        00:11:00
Rosenbrock    01:39:00         00:40:00        00:05:00
Schwefel      02:44:00         01:03:00        00:27:00
Step          02:37:00         00:42:00        00:06:00
Table 2. Energy use for execution of 100-dimensional version of the tests.
Test          PSO (Wh)   DE (Wh)   FS (Wh)
Griewank      33.25      12.98      4.43
Michalewicz   51.93      33.57     19.63
Norwegian     34.83      14.88      3.80
Rastrigin     33.57      14.25      3.48
Rosenbrock    31.35      12.67      1.58
Schwefel      51.93      19.95      8.55
Step          49.72      13.30      1.90
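Dividing each energy figure in Table 2 by the corresponding run time in Table 1 gives a near-constant average draw of roughly 19 W for every run, so the energy ranking of the algorithms mirrors the time ranking. The following Python sketch illustrates this check; the sample values are transcribed from the tables, and the ~19 W figure is inferred from them rather than reported in the text:

```python
def hours(hms: str) -> float:
    """Convert an hh:mm:ss entry from Table 1 into hours."""
    h, m, s = map(int, hms.split(":"))
    return h + m / 60 + s / 3600

# (run time from Table 1, energy in Wh from Table 2) for three sample runs
samples = [
    ("01:45:00", 33.25),  # Griewank, PSO
    ("00:41:00", 12.98),  # Griewank, DE
    ("00:05:00", 1.58),   # Rosenbrock, FS
]

for t, e in samples:
    watts = e / hours(t)      # average power draw during the run
    print(round(watts, 1))    # close to 19.0 W in every case
```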
Table 3. Relative time difference per test in %.
Test          DE/PSO (%)   FS/PSO (%)   FS/DE (%)
Griewank      39           13           34
Michalewicz   65           38           58
Norwegian     43           11           26
Rastrigin     42           10           24
Rosenbrock    40            5           13
Schwefel      38           16           43
Step          27            4           14
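Each percentage in Table 3 is the faster algorithm's run time from Table 1 expressed as a share of the slower one's. A short Python sketch reproduces the ratios for one row; the times are transcribed from Table 1 for illustration:

```python
def hours(hms: str) -> float:
    """Convert an hh:mm:ss entry from Table 1 into hours."""
    h, m, s = map(int, hms.split(":"))
    return h + m / 60 + s / 3600

def ratio_pct(numerator: str, denominator: str) -> int:
    """Relative time of one algorithm to another (e.g. DE/PSO), rounded to %."""
    return round(100 * hours(numerator) / hours(denominator))

# Griewank row of Table 1: PSO 01:45:00, DE 00:41:00, FS 00:14:00
print(ratio_pct("00:41:00", "01:45:00"))  # DE/PSO -> 39, as in Table 3
print(ratio_pct("00:14:00", "01:45:00"))  # FS/PSO -> 13
print(ratio_pct("00:14:00", "00:41:00"))  # FS/DE  -> 34
```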
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.