Proceeding Paper

Agogic Principles in Trans-Human Settings †

Christian Stary
Department of Business Information Systems-Communications Engineering, Johannes Kepler University, 4040 Linz, Austria
Presented at the IS4SI 2017 Summit DIGITALISATION FOR A SUSTAINABLE SOCIETY, Gothenburg, Sweden, 12–16 June 2017.
Proceedings 2017, 1(3), 236; https://doi.org/10.3390/IS4SI-2017-03949
Published: 8 June 2017

Abstract

This contribution proposes a learning approach to system design in a transhuman era. When transhuman settings are understood as systems of co-creation and co-evolution, their development processes can be informed by learning principles. A development framework is proposed and scenarios of intervention are sketched. Illustrating the application of agogic principles sets the stage for further research in this highly diverse and dynamically evolving field.

1. Introduction

According to More, transhumanism is about evolving intelligent life beyond its currently human form and overcoming human limitations by means of science and technology. It should be guided by life-promoting principles and values [1]. Advocating the improvement of human capacities through advanced technologies has triggered intense discussions about future IT and artefact developments, in particular since Bostrom [2,3] argued that self-emergent artificial systems could finally control the development of intelligence, and thus, human life. In that respect, Kenney and Haraway [4] have argued for co-construction anticipating a variety of humanoid and artificial actors: ‘Multi-species-becoming-with, multi-species co-making, making together; sym-poiesis rather than auto-poiesis’ (p. 260).
Such an approach challenges system design that takes into account humans and technological artefacts, as transhuman entities need to be considered at some stage of development. Since they may show self-emergent behavior, mutual learning processes need to be triggered to implement co-creating and co-evolving systems. Taking a double-loop learning perspective, as proposed by Argyris for organizations [5], such a process can be facilitated by principles stemming from educational disciplines such as pedagogy or andragogy.
In the following, we discuss the application of agogic principles to transhuman system development. These principles affect the situation-aware behavior of actors in transhuman settings. We start out with the basic characteristics of transhuman systems and their development before introducing a double-loop learning framework and detailing major agogic principles. We then exemplify their application in development settings.

2. The Development of Transhuman Systems

Having developed gradually as a loosely defined movement over the past two decades, transhumanism's objective, namely to accelerate the evolution of intelligent life beyond its currently human form by means of science and technology, has become more and more substantiated by concepts and artefacts. Grounded in a variety of disciplines, transhumanism researchers analyze the dynamic interplay between humanity and the acceleration of technology. On the technology side, biotechnology, robotics, information technology, molecular nanotechnology, and artificial general intelligence are the focus of interest (cf. [6]). According to the Transhumanist Declaration (http://humanityplus.org/philosophy/transhumanist-declaration/; see also the Appendix), transhumanists strive for the ethical use of technologies when developing posthuman actors. They feature uploading, i.e., the process of transferring an intellect from a biological brain to a computer system, and anticipate a point in time ‘when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. The most likely cause of a singularity would be the creation of some form of rapidly self-enhancing greater-than-human intelligence’ (http://humanityplus.org/philosophy/transhumanist-faq/, accessed 20 May 2017) (cf. [7]).
Transhumanist protagonists envision overcoming aging and widening cognitive capabilities mainly by whole brain emulation, thereby creating substrate-independent minds. According to the declaration, system development should be guided by risk management and by social processes ‘where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented’ (see declaration item 4 in the Appendix).
Increasing personal choice over how individuals design their lives based on assistive and complementary technologies (termed ‘human modification and enhancement technologies’ in item 8 of the Transhumanist Declaration in the Appendix) is a vision that seems to attract many people. For instance, the Singularity Network (https://www.facebook.com/groups/techsingularity/), one of the largest of hundreds of transhumanist-themed groups on the web, grew from around 400 members in 2011 to 26,372 in 2017 (as indicated on the website on 20 May 2017). As design entity, transhumanists envision ‘posthumans’, created through a continuous growth of intelligence that can be uploaded to computer systems. By laying ground to levels of consciousness that human brains cannot access so far, posthumans could either be completely synthetic artificial intelligences or be composed of many smaller systems augmenting biological human capabilities, which finally culminates in profound enrichments of human capabilities (cf. [8]).
Besides advanced nano-, neuronal, and genetic engineering, artificial intelligence and advanced information management play a crucial role when developing intermediary forms between the human and the posthuman (termed transhumans). Intelligent machines are therefore subjects of design and, due to their self-replicating capability, later on designers themselves, covering an ever-increasing range of tasks and levels of autonomy. Increasingly replacing human intelligence by machine intelligence is expected at some point to create machine intelligence superior to individual human cognitive intelligence, characterized by effective and efficient planning and by self-emergence (cf. [3]).
Haraway [9] contrasts the striving for trans- and posthumans with humans being rooted in their natural context, which in particular influences the evolution of species: ‘I am a compost-ist, not a posthuman-ist, we are all compost not posthuman’ (p. 160). Taking such categorical binding into account challenges design approaches due to the intended decoupling of human and machine intelligence. Given the current developments and the increasing interest in transhumanist developments, a learning cycle needs to be triggered in the course of design that informs development on both an operational and a meta-operational level (cf. [5]).
In such an intertwined learning setting, the operational level concerns reflecting on concrete development activities (single-loop learning), e.g., applying cyber-physical development routines, whereas the meta-operational level concerns the underlying assumptions and value systems that direct and trigger the operational level (double-loop learning). These processes should follow common principles that can be accepted by developers and embodied in transhuman developments, once compost-ists rather than posthumanists are at the center of interest. We first introduce the guiding principles before providing the application framework adjusting both learning levels and cycles.
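To make the distinction between the two learning levels more tangible, the following minimal sketch illustrates how operational adjustments (single loop) differ from revising the governing assumptions themselves (double loop). All names, such as GoverningValues and DevelopmentStep, and the simple mismatch rule are hypothetical illustrations of Argyris' distinction [5], not an implementation taken from the literature.

```python
# Minimal sketch of single- vs. double-loop learning in a development setting.
# All names (GoverningValues, DevelopmentStep, ...) are hypothetical illustrations,
# not taken from Argyris [5] or any framework implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class GoverningValues:
    """Meta-operational level: assumptions and values directing development."""
    risk_tolerance: float = 0.2          # how much development risk is acceptable


@dataclass
class DevelopmentStep:
    """Operational level: a concrete development activity and its observed outcome."""
    description: str
    observed_risk: float


def single_loop_ok(step: DevelopmentStep, values: GoverningValues) -> bool:
    """Single loop: judge and adjust actions within the given value frame."""
    return step.observed_risk <= values.risk_tolerance


def double_loop(history: List[DevelopmentStep], values: GoverningValues) -> GoverningValues:
    """Double loop: question the value frame itself when mismatches keep recurring."""
    mismatches = [s for s in history if not single_loop_ok(s, values)]
    if len(mismatches) > 2:
        # Deliberate on the governing assumption; here it is simply relaxed
        # for illustration, but it could equally be tightened.
        values.risk_tolerance = min(1.0, values.risk_tolerance + 0.1)
    return values
```

The single loop only checks actions against the current value frame; only the double loop may change that frame, which is the step the remainder of this paper seeks to guide by agogic principles.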

3. Agogic Principles

If we consider development issues such as transhuman system design meaningful to human life and the existence of being human, guidance needs to be provided for all stakeholder groups, which defines it as a pedagogic, andragogic, and gerontagogic issue. Consequently, it poses an agogic challenge to researchers looking for a scientific approach that can provide a mode of thought that might help in overcoming present dilemmas. The Greek idea of the agein, from which the adjective agogic is derived, refers to being together in the world (‘mitwelt’, ‘mitsein’). When system development is framed as an agogic approach, the respective guiding principles can be investigated with respect to their applicability.
A starting point for an agogic model can be Dewey's distinction between impulsive, routine, and reflective action (cf. [10,11]). Impulsive action is based on trial and error; routine action is based largely on authority and tradition; reflective action is based on ‘the active, persistent and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it’ ([11], p. 9). He explains reflective thinking as a ‘chain’ involving not only ‘a sequence of ideas but a con-sequence’ of thoughts (ibid., p. 4). In his understanding, open-mindedness and responsibility in acting are consequences of reflective thinking.
Schön likewise refers to professional practice. His Reflective Practitioner approach aims at the professional capability to handle complex and unpredictable problems of actual practice with confidence, skill, and care [12]. According to Adler, a professional practitioner can not only think while acting, but also deal with the conflicts involved in situations [13]. Unique or surprising situations are handled through reframing and finding new solutions (‘reflection-in-action’). This process is (i) a conscious one, though not necessarily articulated in words; (ii) a critiquing one, as it leads to questions and re-structuring; and (iii), most importantly, immediately significant for action (cf. [14], p. 29).
An agogic and situation-aware mindset asserts that an actor's time perspective changes from postponed application of experiences and knowledge to immediacy of application; accordingly, the orientation of acting shifts from subject-centered activities to focused interaction in co-creative settings (cf. [15]). There, several agogic principles apply:
  • Activities are set in accordance with the needs of participating actors under the given conditions and capabilities to act
  • Each actor has certain resources that are not only the starting point but also design entities; they are accepted as being limited.
  • Actors determine their own way and pace of development, as development needs to be balanced with the current conditions. Both active participation and retreat are part of development processes.
It is the latter principle that is of crucial importance for triggering development in co-creative settings. Agogic actors need to embody (cf. [16]), and thus self-manage:
  • Empathy, as the sensitive understanding of others
  • Appreciation of another personality, i.e., acceptance and respect without preconditions
  • Congruence, meaning the authenticity and coherence of one's person and behavior
Congruence is decisive in making system values and their attributes visible to others. Authenticity refers to meeting a person ‘as a person’, as an equal, experiencing a situation with the entire spectrum of channels (perceived impulses, feelings, impressions, etc.). Coherence includes judging to what extent or at what point in time values or elements can be shared with others, i.e., become visible to others. An essential part of congruence is that all participating actors or systems have the same, transparent understanding of the co-creative system, including preset conditions and irreversible process design, e.g., normative or role-specific behavior. Reflection in such settings can be structured along four guiding questions:
  • WHAT IS? What did you see, hear, smell, taste, feel? What happened, when, and how? Can you describe it in detail?
  • WHAT SHOULD BE? Which perspective, which sense do you see? What needs to be achieved? Which priorities do you want to set? What exactly do you want, and why? Which state satisfies you?
  • WHY? Which meaning do the observations have for you? Which relations do you recognize? What do you reckon? How can you explain that? What are your conclusions?
  • HOW? How to proceed? Which means shall be used? Which tactics shall we choose? What is to be done? Who does what, with what, with whom, when, and how?
Agogic behavior in a certain situation (see Figure 1), e.g., co-creating transhuman systems, indicates that sensing is crucial for setting cognitive acts intentionally. As indicated in the figure, it means capturing the rationale of acting in terms of perceiving a situation and cognitively reflecting on the perceived information. Both serve as a kind of preprocessor to acting and are guided by intention and planned action. According to that model, various subsystems are involved in preparing actions by reflecting on system information and in making action resulting from system processing visible to others in the shared setting, e.g., a transhuman system development space.
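As an illustration of this perceive-reflect-act cycle, the following minimal sketch walks an actor through the four guiding questions before making a planned action visible. The class, method names, and prompt wordings are assumptions made for illustration; they are not taken from Arbeitsagogik.ch or any published implementation.

```python
# Illustrative sketch of the perceive-reflect-act cycle around Figure 1.
# Class, method, and prompt wordings are assumptions made for illustration.

from typing import Dict

REFLECTION_PROMPTS: Dict[str, str] = {
    "WHAT IS": "What did you perceive? What happened, when and how?",
    "WHAT SHOULD BE": "Which state should be achieved, with which priorities, and why?",
    "WHY": "What meaning and relations do the observations carry? What do you conclude?",
    "HOW": "How to proceed, with which means, by whom, and when?",
}


class AgogicActor:
    """An actor that reflects on a perceived situation before acting."""

    def __init__(self, name: str):
        self.name = name
        self.reflections: Dict[str, str] = {}

    def perceive(self, situation: str) -> str:
        # Sensing: capture the situation before any cognitive processing.
        return situation.strip()

    def reflect(self, situation: str) -> Dict[str, str]:
        # Walk through the four guiding questions; real answers would come
        # from the (human or artificial) actor, here they remain stubs.
        perceived = self.perceive(situation)
        self.reflections = {
            question: f"{prompt} [to be answered for: {perceived}]"
            for question, prompt in REFLECTION_PROMPTS.items()
        }
        return self.reflections

    def act(self) -> str:
        # Make the outcome of reflection visible to others in the shared setting.
        return f"{self.name} acts on: {self.reflections.get('HOW', 'no plan yet')}"
```

The point of the sketch is the ordering: perception and reflection precede, and remain distinguishable from, the action that becomes visible to co-creating actors.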

4. Towards Agogic Development Settings

Firestone and McElroy [17] have designed a development life cycle intertwining single- and double-loop learning processes, termed the Knowledge Life Cycle (KLC). In case of mismatches on the operational level, in our case developing transhuman systems, problem detection induces knowledge processing through problem and knowledge claim formulation (codified beliefs, guiding principles, and meta-cognitive elements). By iteratively acquiring and processing information along individual, system, and collective learning processes, informed decision-making on codified knowledge claims is prepared. These claims may survive, based on the decision taken, and be further processed for diffusion on the collective level, e.g., inducing a certain system behavior.
Repositories, such as a distributed organizational knowledge base, play a crucial role once knowledge processing is coupled with operational procedures, e.g., reflecting on a development task. As an integrative, living design memory, they allow reconfiguring previously produced knowledge claims and tying them to running codification schemes, e.g., representing values, and to operational (transhuman system development) processes. When stepping beyond operation into a knowledge processing environment on transhuman system development, a double-loop learning process is triggered.
The KLC bridges the gap between single- and double-loop learning, which can be argued to be essential for transhuman system development processes: it allows reflecting on ongoing development processes as well as on fundamental design decisions, e.g., deciding whether the point in time for handing over control to machine intelligence has come. In the double loop, meta-information controlling the operational development environment is processed, as knowledge claims are formulated, evaluated, processed, and prepared for integrating modifications into running processes (where further optimization in terms of single-loop learning, e.g., reducing development risks, can be performed).
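To make this claim-centered processing more concrete, the following minimal sketch traces knowledge claims from production through evaluation to integration. The function names, the evidence score, and the survival threshold are simplifying assumptions introduced for illustration; they do not reproduce Firestone and McElroy's [17] own formalization.

```python
# Minimal sketch of a KLC-style pass from knowledge production to integration,
# loosely following Firestone and McElroy [17]. Names, the evidence score, and
# the survival threshold are simplifying assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class KnowledgeClaim:
    statement: str          # codified belief, guiding principle, or meta-cognitive element
    evidence_score: float   # outcome of collective evaluation, in [0, 1]
    surviving: bool = False


def produce_claims(problem: str) -> List[KnowledgeClaim]:
    """Knowledge production: formulate candidate claims for a detected mismatch."""
    return [KnowledgeClaim(f"Candidate principle addressing '{problem}'", evidence_score=0.0)]


def evaluate_claims(claims: List[KnowledgeClaim], threshold: float = 0.6) -> List[KnowledgeClaim]:
    """Knowledge claim evaluation: only sufficiently supported claims survive."""
    for claim in claims:
        claim.surviving = claim.evidence_score >= threshold
    return [claim for claim in claims if claim.surviving]


def integrate_claims(surviving: List[KnowledgeClaim],
                     knowledge_base: List[KnowledgeClaim]) -> None:
    """Knowledge integration: diffuse surviving claims into the shared repository,
    from where they can change operational development behavior (double loop)."""
    knowledge_base.extend(surviving)
```

In this reading, the shared knowledge base plays the role of the living design memory described above, and integrated claims are what feed modifications back into running development processes.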
Hence, if the development of a system cannot satisfy the project needs with the means of the development environment, double-loop learning is initiated. In a cycle, feedback and achievements are exchanged based on triggers from the operational development, leading to informed adjustments or changes (cf. [18]). It is in the KLC's knowledge processing phases, namely knowledge production and knowledge integration, that several agogic principles are of importance, since the underlying drivers of developing transhuman systems need to be addressed explicitly and in a co-creative way. Of particular importance is the system's or actor's balancing of development needs with current conditions, in order to set development activities in accordance with the needs and capabilities of participating actors or systems (see previous section). In terms of co-creation, the production and evaluation of knowledge claims can be structured and guided by the major principles of conversational knowledge creation, as proposed by Wagner ([19], p. 270):
  • Open: If development content is found to be incomplete or poorly organized, any actor or system should be able to edit it as it sees fit.
  • Incremental: Development content can be linked to other development content, enforcing system thinking and contextual inquiry.
  • Organic: The structure and content of a system under development is open to continuous evolution.
  • Mundane: A certain number of conventions and features need to be agreed for shared access to development content.
  • Universal: The mechanisms for further development and organization are the same as those for creation, so that any actor, system, or system developer can take both roles, that of an operating and that of a developing system.
  • Overt: The output suggests the input required to reproduce development content.
  • Unified: Labels are drawn from a flat space so that no additional context is required to interpret them.
  • Precise: Development content items are titled with sufficient precision to avoid most label or name clashes.
  • Tolerant: Interpretable behavior is preferred to error messages.
  • Observable: Activities involving development content, structure, or specifications can be watched and reviewed by other stakeholders, on both the cognitive and the social level.
  • Convergent: Duplication can be discouraged or removed by identifying and linking similar or related development content.
Achieving these properties throughout development supports congruence in co-creation, as transparency and intelligibility can be improved through meta-layer information or interventions that affect the operational layer of development. Developers can thus build a contextual understanding of operational development procedures, qualifying them for informed transhuman design, even as the involvement of artificial intelligence increases continuously.
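As a closing illustration, the following minimal sketch shows how a shared repository of development content could exhibit a few of these conversational properties (open, incremental, observable, convergent). Class and method names are hypothetical; this is not Wagner's [19] implementation, merely one way such properties could be embodied in a co-creative development space.

```python
# Minimal sketch of a shared development-content repository exhibiting some of
# Wagner's conversational principles (open, incremental, observable, convergent).
# Class and method names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple


@dataclass
class ContentItem:
    title: str                                         # precise: titles avoid name clashes
    text: str = ""
    links: Set[str] = field(default_factory=set)       # incremental: ties to other content
    history: List[Tuple[str, str]] = field(default_factory=list)  # observable: reviewable edits


class DevelopmentRepository:
    def __init__(self) -> None:
        self.items: Dict[str, ContentItem] = {}

    def edit(self, title: str, text: str, editor: str) -> None:
        # Open/organic: any actor or system may create or revise content at any time.
        item = self.items.setdefault(title, ContentItem(title))
        item.history.append((editor, item.text))
        item.text = text

    def link(self, source: str, target: str) -> None:
        # Incremental: content may reference content that does not exist yet.
        self.items.setdefault(source, ContentItem(source)).links.add(target)

    def converge(self, duplicate: str, canonical: str) -> None:
        # Convergent: duplicate content is folded into related content and removed.
        dup = self.items.pop(duplicate, None)
        if dup is not None:
            target = self.items.setdefault(canonical, ContentItem(canonical))
            target.text = (target.text + "\n" + dup.text).strip()
            target.links |= dup.links
```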

Conflicts of Interest

The author declares no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix

Transhumanist Declaration (downloaded 20 May 2017 from http://humanityplus.org/philosophy/transhumanist-declaration/).
Transhumanism seeks the ethical use of these and other speculative technologies. Our theoretical interests focus on posthuman topics of the singularity, extinction risk, and mind uploading (whole brain emulation and substrate-independent minds).
(1) Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.
(2) We believe that humanity’s potential is still mostly unrealized. There are possible scenarios that lead to wonderful and exceedingly worthwhile enhanced human conditions.
(3) We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.
(4) Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.
(5) Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.
(6) Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
(7) We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.
(8) We favour allowing individuals wide personal choice over how they enable their lives. This includes use of techniques that may be developed to assist memory, concentration, and mental energy; life extension therapies; reproductive choice technologies; cryonics procedures; and many other possible human modification and enhancement technologies.
The Transhumanist Declaration was originally crafted in 1998 by an international group of authors: Doug Baily, Anders Sandberg, Gustavo Alves, Max More, Holger Wagner, Natasha Vita-More, Eugene Leitl, Bernie Staring, David Pearce, Bill Fantegrossi, den Otter, Ralf Fletcher, Kathryn Aegis, Tom Morrow, Alexander Chislenko, Lee Daniel Crocker, Darren Reynolds, Keith Elis, Thom Quinn, Mikhail Sverdlov, Arjen Kamphuis, Shane Spaulding, and Nick Bostrom. This Transhumanist Declaration has been modified over the years by several authors and organizations. It was adopted by the Humanity + Board in March 2009.

References

  1. More, M. Transhumanism: Towards a futurist philosophy. Extropy 1990, 6, 6–12.
  2. Bostrom, N. The future of humanity. In New Waves in Philosophy of Technology; Olsen, J.-K.B., Selinger, E., Riis, S., Eds.; Palgrave Macmillan: New York, NY, USA, 2009; pp. 186–216.
  3. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014.
  4. Kenney, M.; Haraway, D. Anthropocene, capitalocene, plantationocene, chthulucene: Donna Haraway in conversation with Martha Kenney. In Art in the Anthropocene: Encounters among Aesthetics, Environments and Epistemologies; Davis, H., Turpin, E., Eds.; Open Humanities Press: London, UK, 2015; pp. 229–244.
  5. Argyris, C. Double-Loop Learning. In Wiley Encyclopedia of Management; Wiley: Hoboken, NJ, USA, 2000; pp. 115–125.
  6. Goldblatt, M. DARPA’s Programs in Enhancing Human Performance. In Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and the Cognitive Science; NBIC Report; Roco, M.C., Schummer, J., Eds.; National Science Foundation: Arlington, VA, USA, 2002; pp. 337–341.
  7. Kurzweil, R. The Singularity Is Near: When Humans Transcend Biology; Viking: New York, NY, USA, 2006.
  8. More, M.; Vita-More, N. (Eds.) The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future; John Wiley: Hoboken, NJ, USA, 2013.
  9. Haraway, D. Anthropocene, capitalocene, plantationocene, chthulucene: Making kin. Environ. Hum. 2015, 6, 159–165.
  10. Dewey, J. Educational Essays; Cedric Chivers: Bath, UK, 1910.
  11. Dewey, J. How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process; Heath & Co.: Boston, MA, USA, 1933.
  12. Schön, D.A. The Reflective Practitioner: How Professionals Think in Action; Basic Books: New York, NY, USA, 1983.
  13. Adler, S. The Reflective Practitioner and the Curriculum of Teacher Education. In Proceedings of the Annual Meeting of the Association of Teacher Educators, Las Vegas, NV, USA, 5–8 February 1990. Available online: http://files.eric.ed.gov/fulltext/ED319693.pdf (accessed on 20 May 2017).
  14. Schön, D.A. Educating the Reflective Practitioner; Jossey-Bass: San Francisco, CA, USA, 1987.
  15. Bronfenbrenner, U. Die Ökologie der menschlichen Entwicklung [The Ecology of Human Development]; Klett-Cotta: Stuttgart, Germany, 1981.
  16. Rogers, C.R. Die klientenzentrierte Psychotherapie [Client-Centered Psychotherapy]; Kindler: München, Germany, 2003.
  17. Firestone, J.M.; McElroy, M.W. Key Issues in the New Knowledge Management; Routledge: New York, NY, USA, 2003.
  18. Argyris, C.; Schön, D.A. Organizational Learning II: Theory, Method and Practice; Addison-Wesley: Reading, MA, USA, 1996.
  19. Wagner, C. Wiki: A technology for conversational knowledge management and group collaboration. Commun. Assoc. Inf. Syst. 2004, 13, 265–289.
Figure 1. Work-agogy (according to Arbeitsagogik.ch).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
