How the SP System May Promote Sustainability in Energy Consumption in IT Systems
Abstract
1. Introduction
1.1. Chasms to Be Bridged
1.1.1. Energy Demands of Deep Neural Networks
- Beating the world’s best human players at the game of Go. See, for example: AlphaGo, https://tinyurl.com/yaxg9vz4; Mastering the game of Go with deep neural networks and tree search, https://tinyurl.com/12ge3oht; both retrieved on 1 March 2021.
- Solving protein-folding problems that are widely recognised to be very difficult. See, for example: AlphaFold: Using AI for scientific discovery, https://tinyurl.com/1bnwivkp; AI protein-folding algorithms solve structures faster than ever, https://tinyurl.com/3bt77zo4; both retrieved on 1 March 2021.
1.1.2. Communications and Big Data
“The Square Kilometre Array is one of the most ambitious scientific projects ever undertaken. Its organizers plan on setting up a massive radio telescope made up of more than half a million antennas spread out across vast swaths of Australia and South Africa.” John E. Kelly III and Steve Hamm ([7], p. 62)
“The SKA is the ultimate big data challenge. ... The telescope will collect a veritable deluge of radio signals from outer space—amounting to fourteen exabytes of digital data per day ...” (ibid., p. 63)
- “Communication networks face a potentially disastrous ‘capacity crunch’.” This quote summarises the conclusions of a meeting organised by the UK’s Royal Society, introduced in [8], with other papers from the meeting at bit.ly/2fSy6qN, retrieved 1 March 2021.
- “Internet access may soon need to be rationed because the UK power grid and communications network cannot cope with the demand from consumers.” This quote is from a website of the telecoms company BT, bit.ly/2eUfMbS, retrieved 1 March 2021.
- The SPS is intended, in itself, to combine Simplicity with descriptive and explanatory Power.
- In addition, the SPS works entirely by the compression of information, a process that may itself be seen as creating structures which combine conceptual Simplicity with descriptive and explanatory Power.
2. Introduction to the SPS
3. Biological Foundations
- Natural selection. In human biology, and in the biology of non-human animals, it seems likely that IC would play a prominent role in natural selection because: (1) IC can speed up the transmission of a given body of information, I, in a given bandwidth; or it requires less bandwidth to transmit I at a given speed. (2) Likewise, IC can reduce the storage space required for a given body of information, I; or it can increase the amount of information that can be accommodated in a given store.
- Research in language learning. The SP programme of research is founded on earlier research developing computer models of language learning [14] and incorporates many insights from that research, especially the importance of IC in language learning.
- Cell assembly and pattern assembly. Although unsupervised learning in the SPS is entirely different from ‘Hebbian’ learning (see Appendix A.4), Donald Hebb’s [15] concept of a ‘cell assembly’ is quite similar to the SP-Neural concept of a pattern assembly (([16], Chapter 11), [17]).
- Localist v distributed representation of knowledge. The weight of evidence seems now to favour the ‘localist’ kind of knowledge representation adopted in the SPS and seems not to favour the ‘distributed’ style of knowledge representation adopted in DNNs ([18], pp. 461–463).
- SP-Neural. Although the SP-Neural version of the SPS (Appendix A.7) is still embryonic, there appears to be considerable potential for the expression of IC via neural equivalents of ICMUP, SPMA, and SP-grammars, which are fundamental in the abstract version of the SPS.
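The bandwidth and storage advantages of IC noted in the first point above can be illustrated with a general-purpose compressor. This is only a rough analogy: zlib’s DEFLATE algorithm is not ICMUP, but it shows how redundancy in a body of information I translates directly into savings in storage space and transmission bandwidth:

```python
import zlib

# A body of information 'I' with much internal redundancy (repeated
# patterns), like the recurring structures that ICMUP exploits.
I = b"black clouds rain " * 500

compressed = zlib.compress(I)

# The compressed form needs less storage, and less bandwidth to
# transmit I at a given speed.
print(len(I), len(compressed))
assert len(compressed) < len(I)

# Decompression recovers I exactly: the saving is lossless.
assert zlib.decompress(compressed) == I
```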
4. One-Shot Learning
5. Transfer Learning
“Humans can learn from much less data because we engage in transfer learning, using learning from situations which may be fairly different from what we are trying to learn.” Ray Kurzweil ([20], p. 230)
6. Catastrophic Forgetting
7. Information Compression
7.1. IC and Reducing the Size of Data
7.2. IC and Probabilities
7.2.1. IC and Probabilities Are Two Sides of the Same Coin
- IC via the matching and unification of patterns (Appendix A.2). If two or more patterns match each other, it is not hard to see how, in subsequent processing, the beginning of one pattern may lead to a prediction that the remainder is likely to follow. For example, if we know that ‘black clouds rain’ is a recurring pattern, then if we see ‘black clouds’, it is natural to predict that ‘rain’ is likely to follow.
- IC via SP-multiple-alignment (Appendix A.3). When the SPCM encounters a New SP-pattern like ‘t h e a p p l e s a r e s w e e t’, it is likely to start building an SPMA like the one shown in Figure A2. As it proceeds, it is guided by the kinds of probabilistic inferences just mentioned. These apply at all and any level in the hierarchy of structures: at the level of words, at the level of phrases, and at the level of sentences.
- IC via unsupervised learning (Appendix A.4). Unsupervised learning in the SPCM creates one or more SP-grammars which are effective in the compression of a given set of New SP-patterns. It is this process which ensures that in IC via SPMA, the Old SP-patterns and the values for IC and probability accord with the DONSVIC principle (Appendix A.4.8).
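The link between recurring patterns, prediction, and IC sketched above can be illustrated with a toy frequency model. This is not the SPCM; the observations and counts are invented for illustration, and the Shannon code-length formula simply makes the “two sides of the same coin” point concrete:

```python
import math
from collections import Counter

# Toy observations in which 'black clouds' is usually followed by 'rain'.
observations = [
    ("black clouds", "rain"), ("black clouds", "rain"),
    ("black clouds", "rain"), ("black clouds", "sun"),
]

counts = Counter(observations)
context = "black clouds"
total = sum(n for (c, _), n in counts.items() if c == context)

for (c, nxt), n in counts.items():
    if c == context:
        p = n / total
        # Shannon: an event of probability p needs about -log2(p) bits,
        # so the frequent continuation 'rain' gets the shorter code --
        # compression and probability are two sides of the same coin.
        print(f"{nxt}: p = {p:.2f}, ideal code length = {-math.log2(p):.2f} bits")
```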
7.2.2. Probabilities and Saving Energy
7.3. Processing in Two Stages
- Phase 1: Create a grammar for data of a given type.
- Choose a largish sample of data which is representative of the kinds of data to be processed.
- Process the sample via unsupervised learning within the SPCM to create one or two ‘good’ SP-grammars for those representative data. This stage is relatively demanding and may be done on a relatively powerful computer or SP Machine (Appendix A.8).
- Phase 2: Process one or more new streams of data. Overall, the data for Phase 2 should be very much larger than the data for Phase 1.
- Use computers of relatively low power that have each been supplied with the SP-grammar from Phase 1.
- Provided that each stream is not too large, it should be possible to achieve useful analyses of the data in terms of the SP-grammar.
- The analyses produced by this processing need not be simply parsings like the one shown in Figure A2. They may be any or all of the kinds of processing described in Appendix A.5.
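The two-phase scheme above can be sketched in miniature. The ‘grammar’ here is just a set of frequent words, an invented stand-in for the SPCM’s far more sophisticated unsupervised learning; the point is only the split between a demanding train-once phase and a cheap apply-many phase:

```python
from collections import Counter

# Phase 1 (relatively powerful computer): derive a crude 'grammar' from
# a representative sample of data. NOT the SPCM's learning, just a
# stand-in showing where the expensive work happens.
sample = "black clouds rain black clouds rain white clouds sun".split()
grammar = {word for word, n in Counter(sample).items() if n >= 2}

# Phase 2 (low-powered computers, each supplied with the grammar):
# analyse new, much larger streams cheaply in terms of that grammar.
def analyse(stream):
    return [(word, word in grammar) for word in stream.split()]

print(analyse("black clouds snow"))
```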
7.4. Model-Based Coding
“Consider an image transmission system that works like this. At the transmitter, we have a person who examines the image to be transmitted and comes up with a description of the image. At the receiver, we have another person who then proceeds to create that image. For example, suppose the image we wish to transmit is a picture of a field of sunflowers. Instead of trying to send the picture, we simply send the words ‘field of sunflowers’. The person at the receiver paints a picture of a field of sunflowers on a piece of paper and gives it to the user. Thus, an image of an object is transmitted from the transmitter to the receiver in a highly compressed form.” Khalid Sayood ([24], p. 592)
“Imagine that we had at the receiver a sort of rubbery model of a human face. Or we might have a description of such a model stored in the memory of a huge electronic computer. First, the transmitter would have to look at the face to be transmitted and ‘make up’ the model at the receiver in shape and tint. The transmitter would also have to note the sources of light and reproduce these in intensity and direction at the receiver. Then, as the person before the transmitter talked, the transmitter would have to follow the movements of his eyes, lips and jaws, and other muscular movements and transmit these so that the model at the receiver could do likewise.” John Pierce ([25], Location 2278)
“Such a scheme might be very effective, and it could become an important invention if anyone could specify a useful way of carrying out the operations I have described. Alas, how much easier it is to say what one would like to do (whether it be making such an invention, composing Beethoven’s tenth symphony, or painting a masterpiece on an assigned subject) than it is to do it.” ([25], Locations 2278–2287)
7.4.1. Using an SP-Grammar for the Efficient Transmission of Data
7.4.2. Unsupervised Learning of G
7.4.3. Alice and Bob Both Receive Copies of G
7.4.4. E for Any Given D Would Normally Be Very Small Compared with D
7.4.5. Model-Based Coding Compared with Standard Compression Methods
- Any ‘learning’ with ordinary compression methods is part of the encoding stage, not an independent process.
- Any such learning with ordinary compression methods is normally relatively unsophisticated and designed to favour speed of processing on low-powered computers rather than high levels of information compression.
- In addition, if there is any ‘learning’ with ordinary compression methods, Alice transmits both G and E together, not E by itself. As we shall see, this is likely to mean much smaller savings than if E is transmitted alone.
- In some versions of MPEG compression, Alice and Bob may be provided with some elements of G—such as the structure of human faces or bodies—but these are normally hard coded and not learned.
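One way to make the scheme of Sections 7.4.3 and 7.4.4 concrete is zlib’s preset-dictionary mechanism, which is only a crude stand-in for an SP-grammar: the shared dictionary plays the role of G, held by both Alice and Bob in advance, so that only the small encoding E travels over the wire:

```python
import zlib

# G: a shared model, known to both Alice and Bob before any data is
# sent -- here approximated by a zlib preset dictionary.
G = b"the quick brown fox jumps over the lazy dog. " * 20

# D: the data Alice wants to send, largely predictable from G.
D = b"the quick brown fox jumps over the lazy dog. again and again."

# Alice: encode D relative to G; only the small encoding E is transmitted.
compressor = zlib.compressobj(zdict=G)
E = compressor.compress(D) + compressor.flush()

# Bob: decode E using his own copy of G to recover D exactly.
decompressor = zlib.decompressobj(zdict=G)
assert decompressor.decompress(E) == D

print(len(D), len(E))  # E is much smaller than D
```

Because G is distributed once and reused for many transmissions, its cost is amortised, which mirrors the point that E for any given D would normally be very small compared with D.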
7.4.6. The Potential of the SPS for Model-Based Coding
7.4.7. Concluding Remarks about Model-Based Coding
8. Conclusions
- IC makes data smaller, so there is less to process.
- The close connection between IC and concepts of probability means that there are probabilities that can be exploited to improve the efficiency of searching for matches between patterns.
- Model-Based Coding, described by John Pierce in 1961, may become a reality with the development of an industrial-strength SP Machine:
  - With a relatively powerful computer, create an SP-grammar from, for example, a collection of TV programmes.
  - Distribute the SP-grammar to TV transmitters and many computerised TV receivers.
  - For each programme to be transmitted, Alice first encodes it in terms of the SP-grammar; the relatively small encoding is then transmitted; and, finally, Bob decodes the encoding in terms of the SP-grammar to recreate the programme exactly.
  - This makes the greatest savings when the creation and distribution of the SP-grammar is relatively infrequent compared with the uses of the SP-grammar in the encoding and decoding of TV programmes and the like.
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AI: Artificial Intelligence
DNN: Deep Neural Network
IC: Information Compression
ICMUP: Information Compression via the Matching and Unification of Patterns
SPCM: SP Computer Model
SPMA: SP-multiple-alignment
SPS: SP System
Appendix A. Outline of the SPS
Appendix A.1. SP-Patterns, SP-Symbols, and Redundancy
Appendix A.1.1. SP-Patterns and SP-Symbols
Appendix A.1.2. Redundancy
Appendix A.2. IC via the Matching and Unification of Patterns
Appendix A.3. IC via SP-Multiple-Alignment
- At the beginning of processing, the SPCM has a store of Old SP-patterns including those shown in rows 1 to 8 (one SP-pattern per row), and many others. When the SPCM is more fully developed, those Old SP-patterns would have been learned from raw data as outlined in Appendix A.4, but for now they are supplied to the program by the user.
- The next step is to read in the New SP-pattern, ‘t h e a p p l e s a r e s w e e t’.
- Then the program searches for ‘good’ matches between SP-patterns, where ‘good’ means matches that yield relatively high levels of compression of the New SP-pattern in terms of Old SP-patterns with which it has been unified. The details of the relevant calculations are given in ([29], Section 4.1) and ([16], Section 3.5).
- As can be seen in the figure, matches are identified at early stages between (parts of) the New SP-pattern and (parts of) the Old SP-patterns ‘D 17 t h e #D’, ‘N Nr 6 a p p l e #N’, ‘V Vp 11 a r e #V’, and ‘A 21 s w e e t #A’.
- Each of these matches may be seen as a partial SPMA. For example, the match between ‘t h e’ in the New SP-pattern and the Old SP-pattern ‘D 17 t h e #D’ may be seen as an SPMA between the SP-pattern in row 0 and the SP-pattern in row 3.
- After unification of the matching symbols, each such SPMA may be seen as a single SP-pattern. So the unification of ‘t h e’ with ‘D 17 t h e #D’ yields the unified SP-pattern ‘D 17 t h e #D’, with exactly the same sequence of SP-symbols as the second of the two SP-patterns from which it was derived.
- As processing proceeds, similar pair-wise matches and unifications eventually lead to the creation of SPMAs like that shown in Figure A2. At every stage, all the SPMAs that have been created are evaluated in terms of IC (details of the coding are described in ([29], Section 4.1) and ([16], Section 3.5)), and then the best SPMAs are retained and the remainder are discarded. In this case, the overall ‘winner’ is the SPMA shown in Figure A2.
- This process of searching for good SPMAs in stages, with selection of good partial solutions at each stage, is an example of heuristic search. This kind of search is necessary because there are too many possibilities for much to be achieved via exhaustive search within a reasonable time. By contrast, heuristic search can normally deliver results that are reasonably good within a reasonable time, but it cannot guarantee that the best possible solution has been found.
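The staged search with selection of good partial solutions at each stage can be sketched as a simple beam search. This toy is not the SPCM’s actual SPMA-building algorithm; the patterns, beam width, and scoring function are invented for illustration:

```python
# Beam search: at each stage, extend the partial solutions, score them,
# keep only the best few, and discard the rest. This trades a guarantee
# of optimality for acceptable running time.

BEAM_WIDTH = 2

new_pattern = ["t h e", "a p p l e s", "a r e", "s w e e t"]
old_patterns = {
    "t h e": "D 17 t h e #D",
    "a p p l e s": "N Nr 6 a p p l e #N s",
    "a r e": "V Vp 11 a r e #V",
    "s w e e t": "A 21 s w e e t #A",
}

def score(alignment):
    # Toy score: how many New segments are explained by Old SP-patterns
    # (a crude proxy for the SPCM's IC-based evaluation).
    return sum(1 for seg, old in alignment if old is not None)

beam = [[]]  # start with one empty partial alignment
for segment in new_pattern:
    candidates = []
    for partial in beam:
        # Either match the segment against a stored Old SP-pattern...
        if segment in old_patterns:
            candidates.append(partial + [(segment, old_patterns[segment])])
        # ...or leave it unexplained.
        candidates.append(partial + [(segment, None)])
    # Selection: keep only the best partial solutions at this stage.
    beam = sorted(candidates, key=score, reverse=True)[:BEAM_WIDTH]

best = beam[0]
print(score(best))  # all four segments explained
```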
Appendix A.4. IC via Unsupervised Learning
“Unsupervised learning represents one of the most promising avenues for progress in AI. ... However, it is also one of the most difficult challenges facing the field. A breakthrough that allowed machines to efficiently learn in a truly unsupervised way would likely be considered one of the biggest events in AI so far, and an important waypoint on the road to AGI.” Martin Ford ([20], pp. 11–12), emphasis added.
Appendix A.4.1. Unsupervised Learning in the SPS Is Entirely Different from ‘Hebbian’ Learning
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” ([15], Location 1496). Or, more briefly: “Neurons that fire together, wire together.”
Appendix A.4.2. Learning with a Tabula Rasa
“... the first time you train a convolutional network you train it with thousands, possibly even millions of images of various categories.” Yann LeCun ([20], p. 124)
“We can imagine systems that can learn by themselves without the need for huge volumes of labeled training data.” Martin Ford ([20], p. 12)
Appendix A.4.3. Learning with Previously Stored Knowledge
Appendix A.4.4. No Match between New and Old SP-Patterns
Partial matches between New and Old SP-patterns
Appendix A.4.5. Unsupervised Learning of SP-Grammars
Appendix A.4.6. Speeds of Learning
Appendix A.4.7. Current and Future Developments
Appendix A.4.8. The DONSVIC Principle
Appendix A.5. Strengths and Potential of the SPS in AI-Related Functions
Appendix A.5.1. Versatility in Aspects of Intelligence
Appendix A.5.2. Versatility in Reasoning
Appendix A.5.3. Versatility in the Representation of Knowledge
Appendix A.5.4. Seamless Integration of Diverse Aspects of Intelligence, and Diverse Kinds of Knowledge, in Any Combination
Appendix A.6. Potential Benefits and Applications of the SPS
- Big data. Somewhat unexpectedly, it has been discovered that the SPS has potential to help solve nine significant problems associated with big data [1]. These are: overcoming the problem of variety in big data; the unsupervised learning of structures and relationships in big data; interpretation of big data via pattern recognition and natural language processing; the analysis of streaming data; compression of big data; Model-Based Coding for the efficient transmission of big data; potential gains in computational and energy efficiency in the analysis of big data; managing errors and uncertainties in data; and visualisation of structure in big data, with an audit trail in the processing of big data.
- Autonomous robots. The SPS opens up a radically new approach to the development of intelligence in autonomous robots [32];
- An intelligent database system. The SPS has potential in the development of an intelligent database system with several advantages compared with traditional database systems [33]. In this connection, the SPS has potential to add several kinds of reasoning and other aspects of intelligence to the ‘database’ represented by the World Wide Web, especially if the SP Machine were to be supercharged by replacing its foundational search mechanisms with the high-parallel search mechanisms of any of the leading search engines.
- Medical diagnosis. The SPS may serve as a vehicle for medical knowledge and to assist practitioners in medical diagnosis, with potential for the automatic or semi-automatic learning of new knowledge [34];
- Computer vision and natural vision. The SPS opens up a new approach to the development of computer vision and its integration with other aspects of intelligence. It also throws light on several aspects of natural vision [30];
- Neuroscience. Abstract concepts in the SP Theory of Intelligence map quite well into concepts expressed in terms of neurons and their interconnections in a version of the theory called SP-Neural ([17], ([16], Chapter 11)). This has potential to illuminate aspects of neuroscience and to suggest new avenues for investigation.
- Commonsense reasoning. In addition to the previously described strengths of the SPS in several kinds of reasoning, the SPS has potential in the surprisingly challenging area of “commonsense reasoning and commonsense knowledge” [35]. How the SPS may meet the several challenges in this area is described in [36].
- Other areas of application. The SPS has potential in several other areas of application including ones described in [37]: the simplification and integration of computing systems; best-match and semantic forms of information retrieval; software engineering [38]; the representation of knowledge, reasoning, and the semantic web; information compression; bioinformatics; the detection of computer viruses; and data fusion.
- Mathematics. The concept of ICMUP provides an entirely novel interpretation of mathematics [39]. This interpretation is quite unlike anything described in existing writings about the philosophy of mathematics or its application in science. There are potential benefits in science and beyond from this new interpretation of mathematics.
Appendix A.7. SP-Neural
- Neural validation. Although SP-Neural is derived from the SPS, an abstract model of information processing, it maps quite well on to known features of neural tissue in the brain. This may be seen as a kind of neural validation of the abstract model from which it derives.
- The role of inhibition in the brain. Given the importance of IC as a unifying principle in the SPS, the prevalence of inhibitory tissue in the brain and the known role of inhibitory neurons in some parts of the nervous system (([40], p. 505), [41]) together suggest that inhibition could prove to be the key to understanding how IC may be achieved in SP-Neural, and hence in real brains.
Appendix A.8. Development of an SP Machine
References
- Wolff, J.G. Big data and the SP Theory of Intelligence. IEEE Access 2014, 2, 301–315.
- Lai, G.; Chang, W.-C.; Yang, Y.; Liu, H. Modeling long- and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’18), Ann Arbor, MI, USA, 8–12 July 2018.
- Yun, K.; Huyen, A.; Lu, T. Deep neural networks for pattern recognition. arXiv 2018, arXiv:1809.09645.
- Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy, 28 July–2 August 2019.
- Furber, S. To build a brain. IEEE Spectr. 2012, 49, 44–49.
- Jabr, F. Does Thinking Really Hard Burn More Calories? Available online: https://www.scientificamerican.com/article/thinking-hard-calories/ (accessed on 20 April 2021).
- Kelly, J.E.; Hamm, S. Smart Machines: IBM’s Watson and the Era of Cognitive Computing; Columbia University Press: New York, NY, USA, 2013.
- Ellis, A.D.; Suibhne, N.M.; Saad, D.; Payne, D.N. Communication networks beyond the capacity crunch. Philos. Trans. R. Soc. A 2015, 374, 191.
- Wolff, J.G. Information compression as a unifying principle in human learning, perception, and cognition. Complexity 2019, 2019.
- Attneave, F. Some informational aspects of visual perception. Psychol. Rev. 1954, 61, 183–193.
- Attneave, F. Applications of Information Theory to Psychology; Holt, Rinehart and Winston: New York, NY, USA, 1959.
- Barlow, H.B. Sensory mechanisms, the reduction of redundancy, and intelligence. In The Mechanisation of Thought Processes; Her Majesty’s Stationery Office: London, UK, 1959; pp. 535–559.
- Barlow, H.B. Trigger features, adaptation and economy of impulses. In Information Processes in the Nervous System; Leibovic, K.N., Ed.; Springer: New York, NY, USA, 1969; pp. 209–230.
- Wolff, J.G. Learning syntax and meanings through optimization and distributional analysis. In Categories and Processes in Language Acquisition; Levy, Y., Schlesinger, I.M., Braine, M.D.S., Eds.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1988; pp. 179–215.
- Hebb, D.O. The Organization of Behaviour; John Wiley & Sons: New York, NY, USA, 1949.
- Wolff, J.G. Unifying Computing and Cognition: The SP Theory and Its Applications. 2006. Available online: https://www.cognitionresearch.org/ (accessed on 19 April 2021).
- Wolff, J.G. Neural Mechanisms for Information Compression by Multiple Alignment, Unification and Search. Technical Report. 2002. Available online: http://arXiv.org/abs/cs.AI/0307060 (accessed on 7 March 2021).
- Page, M. Connectionist modelling in psychology: A localist manifesto. Behav. Brain Sci. 2000, 23, 443–512.
- Wolff, J.G. Problems in AI research and how the SP System may solve them. arXiv 2021, arXiv:2009.09079v3.
- Ford, M. Architects of Intelligence: The Truth About AI from the People Building It; Packt Publishing: Birmingham, UK, 2018.
- Solomonoff, R.J. A formal theory of inductive inference. Parts I and II. Inf. Control 1964, 7, 1–22, 224–254.
- Solomonoff, R.J. The discovery of algorithmic probability. J. Comput. Syst. Sci. 1997, 55, 73–88.
- Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications, 4th ed.; Springer: New York, NY, USA, 2019.
- Sayood, K. Introduction to Data Compression; Morgan Kaufmann: Amsterdam, The Netherlands, 2012.
- Pierce, J.R. Symbols, Signals and Noise, 1st ed.; Harper & Brothers: New York, NY, USA, 1961.
- Aizawa, K.; Harashima, H.; Saito, T. Model-based analysis synthesis image coding (MBASIC) system for a person’s face. Signal Process. Image Commun. 1989, 1, 139–152.
- Wu, F.; Gao, P.; Gao, W. Model-based coding. Chin. J. Comput. 1999, 12, 1239–1245.
- Moghaddam, B.; Pentland, A. An automatic system for model-based coding of faces. In Proceedings of the IEEE Data Compression Conference (DCC ’95), Snowbird, UT, USA, 28–30 March 1995; pp. 362–370.
- Wolff, J.G. The SP Theory of Intelligence: An overview. Information 2013, 4, 283–341.
- Wolff, J.G. Application of the SP Theory of Intelligence to the understanding of natural vision and the development of computer vision. SpringerPlus 2014, 3, 552–570.
- Wolff, J.G. The SP Theory of Intelligence: Its distinctive features and advantages. IEEE Access 2016, 4, 216–246.
- Wolff, J.G. Autonomous robots and the SP Theory of Intelligence. IEEE Access 2014, 2, 1629–1651.
- Wolff, J.G. Towards an intelligent database system founded on the SP theory of computing and cognition. Data Knowl. Eng. 2007, 60, 596–624.
- Wolff, J.G. Medical diagnosis as pattern recognition in a framework of information compression by multiple alignment, unification and search. Decis. Support Syst. 2006, 42, 608–625.
- Davis, E.; Marcus, G. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM 2015, 58, 92–103.
- Wolff, J.G. Commonsense reasoning, commonsense knowledge, and the SP Theory of Intelligence. arXiv 2016, arXiv:1609.07772.
- Wolff, J.G. The SP Theory of Intelligence: Benefits and applications. Information 2014, 5, 1–27.
- Wolff, J.G. Software engineering and the SP Theory of Intelligence. arXiv 2017, arXiv:1708.06665.
- Wolff, J.G. Mathematics as information compression via the matching and unification of patterns. Complexity 2019, 2019.
- Squire, L.R.; Berg, D.; Bloom, F.E.; du Lac, S.; Ghosh, A.; Spitzer, N.C. (Eds.) Fundamental Neuroscience, 4th ed.; Elsevier: Amsterdam, The Netherlands, 2013.
- Békésy, G. Sensory Inhibition; Princeton University Press: Princeton, NJ, USA, 1967.
- Palade, V.; Wolff, J.G. A roadmap for the development of the ‘SP Machine’ for artificial intelligence. Comput. J. 2019, 62, 1584–1604.
Wolff, J.G. How the SP System May Promote Sustainability in Energy Consumption in IT Systems. Sustainability 2021, 13, 4565. https://doi.org/10.3390/su13084565