Editorial

An Entropy Approach to Interdependent Human–Machine Teams

by William Lawless 1,* and Ira S. Moskowitz 2
1 Department of Mathematics and Psychology, Paine College, Augusta, GA 30901, USA
2 Information Technology Division, Naval Research Laboratory, Washington, DC 20375, USA
* Author to whom correspondence should be addressed.
Entropy 2025, 27(2), 176; https://doi.org/10.3390/e27020176
Submission received: 18 December 2024 / Revised: 10 January 2025 / Accepted: 16 January 2025 / Published: 7 February 2025
Overview of recent developments in the field. Recently, Bengio et al. [1] argued in the journal Science that AI is advancing rapidly and may pose
“risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems.”
While leaders may be unable to counter this threat, we have found that the self-organization of autonomous humans, conveyed by interdependence, is a resource that counters such threats [2]. Further, Cummings [3] reported that the best teams of human scientists worked in states of maximum interdependence. But how can we generalize from human teams to human–machine teams?
We began this Special Issue in Entropy by recognizing that generalizing interdependence to human–machine teams “… presents … [a] series of challenges … [with] more questions than answers …” Manipulating states of interdependence in a team of humans is considered too difficult to manage in the laboratory (Jones, 1998, p. 33; in [4]). As an example of generalizing inside a machine team (sketched in the code below): if one machine’s process, A, depends on another machine’s process, B; B’s process depends on a third machine’s process, C; and the results depend on A, B and C, then non-verbal interdependence performs well (e.g., chem-labs developing new laser materials; in [5]), but it does so by ignoring cognitive beliefs about the situation, context, or the world at large.
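A minimal Python sketch of that non-verbal dependency chain; the process names are hypothetical, chosen only to echo the chem-lab example in [5], and only the dependency structure matters:

```python
# Hypothetical sketch: A depends on B, B depends on C, and the team's
# result depends on all three -- no shared beliefs about the situation,
# context, or world at large are modeled anywhere in the chain.
def process_c():
    return "candidate compounds"          # C runs first

def process_b(c_output):
    return f"screened {c_output}"         # B depends on C's output

def process_a(b_output):
    return f"synthesized {b_output}"      # A depends on B's output

def team_result():
    c = process_c()
    b = process_b(c)
    a = process_a(b)
    return {"A": a, "B": b, "C": c}       # result depends on A, B and C

print(team_result())
```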
The gaps in existing knowledge and how this Special Issue addressed them. Our goal of achieving interdependent cognition and behavior (viz., where teammates both speak and react physically to each other) is not as far along; for example, as reported in the Wall Street Journal [6], robots cannot yet manage other robots or each other:
“Some traditional warehouse roles have proved too difficult for Amazon to fully automate … Humans [but not robots] can easily look into a storage container packed full of goods, identify a particular item and know how to pick it up and handle it …”
Still, the development of generative AI is rapid. In this Special Issue, our contributors addressed some of the following gaps:
1. The first article, by Abdo’s team, explored collaboration and strategies in cyber competition with agent-based models (ABMs).
2. In two articles, Boris Galitsky applied deep-learning neural nets to improve explanations and question answering by large language models (LLMs).
3. Scheuerman’s team addressed uncertainty and data drift with iterative feedback in human–machine teams.
4. Moskowitz’s team explored optimal decision making for multi-agent systems, finding that the way in which agents are combined influences channel capacity.
5. Russell’s team combined qualitative and quantitative health data and human expertise to find that the combination increased precision.
6. Santos’ team applied inverse reinforcement learning to command styles and interference in swarms, finding that heterogeneous scenarios worked best.
7. Lawless applied complementary cognition and behavior in human–machine teams to find energy-entropy trade-offs in team decisions.
8. Finally, Mylrea’s team crafted a trust framework and notional data to underscore the need for ethical safeguards in AI.
Future research that should be considered. The problem with machines and robotics is that, while increasingly capable, they are not yet competent with interdependent language and behavior [7]. A contributing factor to this problem is that classical social science has not been able to establish a valid process for constructing and testing the reproducibility of its concepts. As examples of scales found to be invalid, the claims made for self-esteem could not be reproduced in 2005 [8]; for implicit racism, in 2009 [9]; for ego-depletion, in 2016 [10]; and for the honesty scale, in 2021 (Note 1). The reproducibility crisis was formally identified in 2015 [11], leading to a plan to strengthen the tests of conceptual scales by, among other things, preregistering the measures and analyses supporting claims before a reproducibility experiment is run. However, the test of the plan itself was retracted after the editors lost confidence in it, based on criticisms raised about the results it had produced (Note 2).
Moreover, the National Academy of Sciences [12] reported that unraveling interdependent effects may not even be possible:
The “performance of a team is not decomposable to, or an aggregation of, individual performances …”
These problems with cognitive science suggest a loss of information within the interaction, acting like quantum superposition [7]. However, the opposite problem, yet with the same result, occurs with behaviorism (e.g., [13]). From a different perspective, and also with the same result, logic applied to collective decision making operating in the open has failed as well [14]. These problems with social science, logic and the academy all support the conclusions drawn by Jones [4].
Alternatively, the assembly theory of alien life [15] avoids the complications of interdependence by counting successful assemblies: the greater the number, the more likely that life is involved. Building on assembly theory’s idea of observable effects by focusing on the observable results of interdependence in a team, we have found that entropy is key: by managing a team’s structure to low entropy, the team can maximize its performance by producing maximum entropy in its output [2].
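To make the entropy claim concrete, a minimal sketch using Shannon entropy, H(p) = −Σ p_i log2 p_i; the two distributions below are hypothetical illustrations, not data from [2]:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_i p_i * log2(p_i) in bits; zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

# A well-managed team occupies its structural states nearly
# deterministically (low structural entropy) ...
structure = [0.97, 0.01, 0.01, 0.01]
# ... while its output spreads across the full space of options
# (maximum output entropy).
output = [0.25, 0.25, 0.25, 0.25]

print(f"structure: {shannon_entropy(structure):.2f} bits")  # ~0.24 (low)
print(f"output:    {shannon_entropy(output):.2f} bits")     # 2.00 (maximum for 4 outcomes)
```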
In sum, future research must mathematically integrate and model the effects of interdependence for both cognition/behavior and human–machine teams by
1. Deriving a theory of integrated cognition/behavior (viz., “words alone do not equal reality” [16]) that addresses all risks posed by AI to humans.
2. Assuming “words alone do not equal reality” means that the tools of generative AI (e.g., tensors, LLMs and machine learning) used to model “reality” are formed from separable entities that generate independent and identically distributed (i.i.d.) data, but that these tools cannot recreate whatever social interactions are captured in the data [16]. This situation distinguishes large language models from quantum superposition and entanglement [17]:
“The term tensor is one that comes up a lot in the context of machine learning … e.g., [a tensor product is] a matrix multiplication or a convolution, performed on any number of input tensors and resulting in a single output tensor.”
However, if we can write a quantum state as a tensor product of two quantum states, then we say that the states are not entangled [18] (see the separability sketch after this list). This discussion helps us to understand why separable tensor products and, in general, i.i.d. data [16] cannot recreate whatever social interactions have been captured [2]. Why? Words account for only 7 percent of communication [19]; moreover, we do not need language to think [20]:
“Language is a defining characteristic of our species, but the function, or functions, that it serves has been debated for centuries. Here, we … argue that in modern humans, language is a tool for communication … [not] for thinking.”
From this discussion, and to match the claim by the National Academy of Sciences regarding the decomposition of teams, we conclude that interdependence is quantum-like [2] and cannot be represented by a tensor product [7].
3. Including negative and positive emotion. We have proposed a mathematics of state dependency [21] to model interdependence for human–machine teams [7]. In this model, emotional distress increases entropy in a team’s structure, a trade-off that diverts available energy from a team’s maximum entropy production, reducing the team’s performance [2]. Generalizing this model to a society of quantum-like coupled harmonic oscillators (see the oscillator sketch after this list) could fulfill Lewin’s vision of an interdependent whole [22], such as a human–machine team, organization or system. There, under the uncertainty caused by free choice (e.g., the failure rate of corporate mergers is about 50% [23], exemplified by the recent failure of Walgreens’ merger [24]), political compromise [25] and innovation might be combined into an index that differentiates evolvable, autonomous, and observable self-organized assemblies ([15]; e.g., [26]) of interdependence randomly seeking the positive emotion of “animal spirits” [27]:
“… if the animal spirits are dimmed and spontaneous optimism falters, leaving us to depend on nothing but a mathematical expectation, enterprise will fade and die.”
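For point 2, a minimal sketch of the separability test behind [18]: a two-qubit state is a tensor (Kronecker) product of one-qubit states exactly when its Schmidt rank, the number of nonzero singular values of its reshaped amplitude matrix, equals 1. The states below are standard textbook examples, not drawn from the cited works:

```python
import numpy as np

def schmidt_rank(state, tol=1e-12):
    """Schmidt rank of a two-qubit state: the number of nonzero singular
    values of its 2x2 amplitude matrix. Rank 1 => a tensor product
    (separable); rank 2 => entangled."""
    s = np.linalg.svd(np.reshape(state, (2, 2)), compute_uv=False)
    return int(np.sum(s > tol))

a = np.array([1.0, 0.0])                             # |0>
b = np.array([1.0, 1.0]) / np.sqrt(2)                # |+>
product_state = np.kron(a, b)                        # |0> (x) |+>, separable by construction

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(schmidt_rank(product_state))  # 1 -> separable (a tensor product)
print(schmidt_rank(bell))           # 2 -> entangled, no tensor-product form
```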
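For point 3, a purely classical sketch of two coupled harmonic oscillators with unit masses and hypothetical spring constants: the coupling splits the normal-mode frequencies, so neither oscillator’s motion decomposes into independent individual motions, loosely analogous to the National Academies’ claim that team performance is not decomposable:

```python
import numpy as np

# Two unit-mass oscillators with identical spring constant k, coupled by
# a spring kappa (values are hypothetical, for illustration only).
k, kappa = 1.0, 0.25
K = np.array([[k + kappa, -kappa],
              [-kappa,    k + kappa]])   # stiffness matrix

# Normal-mode squared frequencies are the eigenvalues of K; the coupling
# splits them: in-phase mode sqrt(k), out-of-phase mode sqrt(k + 2*kappa).
omega_sq, modes = np.linalg.eigh(K)
print(np.sqrt(omega_sq))   # [1.0, 1.2247...] -- distinct, coupled modes
```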

Conflicts of Interest

The authors declare no conflicts of interest.

Notes

1. The honesty scale was retracted by the Editor of PNAS: Berenbaum, M.R. Proc. Natl. Acad. Sci. USA 2021, 118, e2115397118.
2. See the retraction at: Protzko, J.; Krosnick, J.; Nelson, L.; Nosek, B.A.; Axt, J.; Berent, M.; Buttrick, N.; DeBell, M.; Ebersole, C.R.; Lundmark, S.L.; et al. Retracted Article: High replicability of newly discovered social-behavioural findings is achievable. Nat. Hum. Behav. 2024, 8, 311–319. https://doi.org/10.1038/s41562-023-01749-9.

References

1. Bengio, Y.; Hinton, G.; Yao, A.; Song, D.; Abbeel, P.; Darrell, T.; Harari, Y.N.; Zhang, Y.Q.; Xue, L.; Shalev-Shwartz, S.; et al. Managing extreme AI risks amid rapid progress. Science 2024, 384, 842–845.
2. Lawless, W.F.; Moskowitz, I.S.; Doctor, K.Z. A Quantum-like Model of Interdependence for Embodied Human–Machine Teams: The Path to Autonomy Facing Complexity and Uncertainty. Entropy 2023, 25, 1323.
3. Cummings, J. Team Science Successes and Challenges. In Proceedings of the National Science Foundation Sponsored Workshop on Fundamentals of Team Science and the Science of Team Science, Bethesda, MD, USA, 2015.
4. Jones, E.E. Major Developments in Five Decades of Social Psychology. In The Handbook of Social Psychology; McGraw-Hill: Boston, MA, USA, 1998; Volume I, pp. 3–57.
5. Service, R.F. AI-driven robots discover record-setting laser compound. Science 2024, 384, 725.
6. Young, L. Amazon’s New Robotic Warehouse Will Rely Heavily on Human Workers. Wall Street Journal. Available online: https://www.wsj.com/articles/amazons-new-robotic-warehouse-will-rely-heavily-on-human-workers-f95e06b6 (accessed on 17 December 2024).
7. Lawless, W.F.; Moskowitz, I.S. Shannon Holes, Black Holes and Knowledge. Knowledge 2024, 4, 331–357.
8. Baumeister, R.F.; Campbell, J.D.; Krueger, J.I.; Vohs, K.D. Exploding the self-esteem myth. Sci. Am. 2005, 292, 84–91.
9. Blanton, H.; Jaccard, J.; Klick, J.; Mellers, B.; Mitchell, G.; Tetlock, P.E. Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT. J. Appl. Psychol. 2009, 94, 567–582.
10. Hagger, M.S.; Chatzisarantis, N.L.; Alberts, H.; Anggono, C.O.; Batailler, C.; Birt, A.R.; Brand, R.; Brandt, M.J.; Brewer, G.; Bruyneel, S.; et al. A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspect. Psychol. Sci. 2016, 11, 546–573.
11. Nosek, B. Estimating the Reproducibility of Psychological Science. Science 2015, 349, aac4716.
12. National Academies of Sciences, Engineering, and Medicine. Human-AI Teaming: State-of-the-Art and Research Needs. Available online: https://nap.nationalacademies.org/catalog/26355/human-ai-teaming-state-of-the-art-and-research-needs (accessed on 1 June 2022).
13. Thagard, P. Cognitive Science. Stanford Encyclopedia of Philosophy, 2019. Available online: https://plato.stanford.edu/entries/cognitive-science/ (accessed on 17 December 2024).
14. Mann, R.P. Collective decision making by rational individuals. Proc. Natl. Acad. Sci. USA 2018, 115, E10387–E10396.
15. Sharma, A.; Czégel, D.; Lachmann, M.; Kempes, C.P.; Walker, S.I.; Cronin, L. Assembly theory explains and quantifies selection and evolution. Nature 2023, 622, 321–328.
16. Schölkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Towards Causal Representation Learning. Proc. IEEE 2021, 109, 612–634.
17. Brakel, F.; Odyurt, U.; Varbanescu, A.L. Model Parallelism on Distributed Infrastructure: A Literature Review from Theory to LLM Case-Studies. arXiv 2024, arXiv:2403.03699.
18. Trevisan, L. CS259Q: Quantum Computing Handout 2; Stanford University: Stanford, CA, USA, 2012.
19. Mehrabian, A. Silent Messages; Wadsworth: Belmont, CA, USA, 1971.
20. Fedorenko, E.; Piantadosi, S.T.; Gibson, E.A.F. Language is primarily a tool for communication rather than thought. Nature 2024, 630, 575–586.
21. Davies, P. Does new physics lurk inside living matter? Phys. Today 2020, 73, 34–40.
22. Lewin, K. Field Theory in Social Science; Tavistock Publications: London, UK, 1951.
23. Christensen, C.M.; Alton, R.; Rising, C.; Waldeck, A. The Big Idea: The New M&A Playbook. Harvard Business Review, 2011. Available online: https://hbr.org/2011/03/the-big-idea-the-new-ma-playbook (accessed on 27 October 2022).
24. Lombardo, C.; Thomas, L.; Mathews, A.W. Walgreens Is in Talks to Sell Itself to Private-Equity Firm Sycamore Partners. Wall Street Journal, 2024. Available online: https://www.wsj.com/business/deals/walgreens-sycamore-partners-private-equity-deal-5d14c920 (accessed on 17 December 2024).
25. Gutmann, A.; Thompson, D.F. The Spirit of Compromise; Princeton University Press: Princeton, NJ, USA, 2014.
26. Lawless, W.F.; Moskowitz, I.S. Human–Machine Teams: Advantages Afforded by the Quantum Likeness of Interdependence; Elsevier: Amsterdam, The Netherlands, 2025.
27. Keynes, J.M. The General Theory of Employment, Interest, and Money; Macmillan: London, UK, 1936.