Open Access
Entropy 2017, 19(4), 181; https://doi.org/10.3390/e19040181
Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems
Sony Computer Science Laboratories, Inc., Takanawa Muse Bldg. 3F, 3-14-13, Higashi Gotanda, Shinagawa-ku, Tokyo 141-0022, Japan
Academic Editor: J. A. Tenreiro Machado
Received: 30 December 2016 / Accepted: 20 April 2017 / Published: 22 April 2017
Recently emerging data-driven citizen sciences need to harness an increasing amount of massive data of varying quality. This paper develops essential theoretical frameworks, example models, and a general definition of a complexity measure, and examines its computational complexity for interactive data-driven citizen science within the context of guided self-organization. We first define a conceptual model that incorporates the quality of observation in terms of accuracy and reproducibility, ranging between subjectivity, inter-subjectivity, and objectivity. Next, we examine the database’s algebraic and topological structure in relation to informational complexity measures, and evaluate its computational complexities with respect to an exhaustive optimization. Conjectures on criticality are obtained for the self-organizing processes of observation and dynamical model development. An example analysis is demonstrated with the use of a biodiversity assessment database—a process that inevitably involves human subjectivity for management within open complex systems.
Keywords: inter-subjective objectivity; complexity measure; computational complexity; criticality; citizen science; open complex system
Recent innovation in information and communication technologies (ICT) embedded in real environments is drastically changing the way society interacts with computation. This has been described as the fourth industrial revolution. In particular, ubiquitous sensors and mobile communication tools have led to an increasing capacity for distributed and interactive environmental sensing. These technological supports bring new, effective methodologies to tackle complex self-organising behaviours in social–ecological systems that are difficult to understand with conventional modelling and simulation approaches (e.g., [2,3]). Massive amounts of sparse and heterogeneous data based on internal observation from within various collective phenomena call for an extended analytical framework, ranging from objective measurements (e.g., with sensors) to subjective data such as human evaluations and feedback.
Redefining a standard formalization of computation and its complexity associated with self-organised citizen science can raise multiple criteria for the evaluation of critical phenomena, spread over the dynamical process of observation, management, and knowledge formation in open complex systems [4,5]. Self-organised criticality appears in various natural and social phenomena, often with scale-free statistical properties [6,7]. These manifest in power laws, which can be reduced to a simple combination of inherent stochastic processes, and whose realizations provide proxies of emergent functionality (e.g., [9,10,11]). The large fluctuation of the power law distributes the statistical complexity over multiple scales that cannot be represented by a simple mean value for predictive purposes. A sampled time series from a power-law distribution encounters intermittent shifts of the sample average due to the infinite variance of the distribution—even with the upper-bounded power laws of the real world (e.g., in the magnitude distribution of earthquakes). This situation exposes a statistical limit on prediction by modelling and simulation of the phenomena alone, but also presents a positive reason to engage human elements as a practical solution in actual management—especially in tasks involving semantic and cognitive judgements [12,13]. On the technology side, machine learning models have long attempted to optimize the prediction of unknown stochastic sources, implementing interactive estimation processes to exploit the hidden causal structure of temporal observation sequences. Modelling studies of guided self-organization have recently been explored with implementations in robotics, simulated neural networks, networks of agents, etc.
Although most of these achievements are discussed within the predictability of a confined experimental setting, a hybrid system combining human and computational elements is always the premise in real-world situations, which has been little exploited, except for some prototypical interfaces for the internet of things. For cost-effective monitoring and control within restricted resources, guided criticality should be introduced on the user side of technology, in order to migrate and abstract the decision-making process from computation to human ability [3,4,17].
In particular, in solving global agendas such as sustainability goals, a comprehensive approach is required that should make use of the full potential of self-organisation in coupled social–ecological systems [5,18,19]. These efforts practically take on the engagement of citizens and multi-disciplinary stakeholders as important actors in the data acquisition and the implementation of an interactive management through guided self-organization, as a novel type of collective intelligence in the era of the fourth industrial revolution [3,20,21].
In facing the transition of data-driven citizen science towards the achievement of dynamical control in managing real-world open complex systems, this article presents fundamental theories and example models to support the discussion of complexity, computation, and criticality in the most general form possible. We formalize the basic objectives as follows, and explore them in the correspondingly numbered sections:
- Section 2: How can we formalize and treat databases of varying quality from both machine and human observations, which range from subjective bias to objective fact? How can we set up scientific measures that assure compatibility with the principles of accuracy and reproducibility?
- Section 3: How can we generalize the concept of complexity measures in application to the human–computer hybrid systems in citizen science?
- Section 4: What is the nature of computational complexities in actual data processing?
- Section 5: What is the general condition to yield guided self-organization for cost-effective citizen science?
Although these questions are universal across multiple industries, a common basis for understanding the problems and for the mutual development of ICT infrastructure is still lacking, with each sector developing its own in isolation. Through the exploration of these topics, this paper attempts to provide a common terminology and establish a theoretical basis for the realisation of cost-effective citizen science in open complex systems. This is becoming increasingly important for solving transdisciplinary problems through the participation of multiple stakeholders in the real world.
2. Inter-Subjective Objectivity Model
We first consider the expression of the quality of data ranging between human subjectivity and machine objectivity in the general form of a database. As a premise, any information that can be represented in digital computing is compatible with natural number theory. At the infinite limit of computational memory, the representation of the database extends to general sets on a real data type with countably infinite precision, which accepts the definition of a σ-finite measure in a measure-theoretical formulation. We define the general form of an arbitrary database $D$ as follows:
$$D := \mathbb{R}^n \times S^m,$$
where $\mathbb{R}$ is a real data type, $S^m$ is the product of $m$ arbitrary symbolic sets, and the dimensions $n$ and $m$ are natural numbers including 0. Any variable in this article is assumed to be storable in $D$. For mathematical simplicity, we hereafter consider the real data type as the real numbers. In practice, $\mathbb{R}^n$ describes the values of $n$ real variables (such as time, spatial coordinates, probabilities, etc.), and $S^m$ represents $m$ discrete sets of symbols (such as the names of variables, occurrences of discrete variables, text data, etc.). Obviously, $S \subset \mathbb{R}$ holds under mathematical simplification, but we separate the notations to distinguish between the quantitative and qualitative variable types.
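As a minimal illustration of this database form, a record of $D$ can be sketched as a typed tuple of $n$ real fields and $m$ symbolic fields; the concrete field semantics below (timestamp, coordinates, observer, species) are hypothetical examples, not prescribed by the definition:

```python
from dataclasses import dataclass
from typing import Tuple

# One record of the database D = R^n x S^m: n real-valued fields and
# m symbolic fields. The field meanings in the example are illustrative.
@dataclass(frozen=True)
class Record:
    reals: Tuple[float, ...]   # element of R^n (e.g., time, latitude, longitude)
    symbols: Tuple[str, ...]   # element of S^m (e.g., observer id, species name)

r = Record(reals=(1483056000.0, 35.62, 139.72),
           symbols=("observer_1", "Passer montanus"))
print(len(r.reals), len(r.symbols))  # n = 3, m = 2
```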
2.1. Formalization of Subjectivity, Inter-Subjectivity, Subjective–Objective Unity, and Objectivity
Digital data from citizen science vary from subjective human perception to objective sensor measurement, with different degrees of human-induced bias. Here, subjectivity and objectivity matter because they influence the accuracy and reproducibility of data that are fundamental to establishing scientific analysis. We formalize the nature of observation variables between subjectivity, objectivity, and their interactions as follows:
- Subjectivity is the quality of observation that is based on human perception without the substantial support of a machine.
- Inter-Subjectivity is the degree of commonality between the subjectivities of multiple subjects.
- Objectivity is the quality of observation that is based on a machine measurement whose consequence does not depend on the operator’s will.
- Subjective–Objective Unity is the degree of commonality between the subjectivity and objectivity.
- Inter-Subjective Objectivity is the quality of observation that satisfies the coincidence of both inter-subjectivity and subjective–objective unity.
These follow basic concepts in philosophy and social science and are adapted to the situation of data analysis. The concept of subjectivity is commonly used in philosophy as the collection of the perceptions, experiences, expectations, and personal or cultural understanding and beliefs specific to a person, which influences, informs, and biases people’s judgments and evaluations. In contrast, objectivity refers to a view of truth or reality which is free from any individual’s influence. The most simplistic form of inter-subjectivity in social science employs the term in the sense of having a shared definition of an object, or shared subjectivity.
The relations between these classifications are shown in Figure 1a. For example, text data written by humans are subjective data, whether or not the fact described is based on an objective phenomenon. Sensor logs are objective data, even when measured on a human body, such as a heart rate that could be influenced by subjective thought. When multiple subjects give the same subjective evaluation, such as a rating of web contents, the commonality augments the degree of inter-subjectivity, which is often adapted to crowd-sourced data validation (e.g., [24,25]). When a subjective evaluation coincides with an objective measurement, the commonality represents the degree of subjective–objective unity. A highly reproducible subjective–objective unity can provide on-site practical measurement in field science, typically in biodiversity assessment and soil texture analysis (e.g., [25,26]). This is because these plausible subjective–objective unity measures also coincide with high inter-subjectivity after sufficient training, which guarantees the accuracy of on-site application without confirming the accordance with objective measurement each time. When the methodology is highly established with respect to accuracy and reproducibility, it belongs to inter-subjective objectivity, where each subjective and objective measurement converges to the same result. The developmental process of reproducible subjective evaluations that converge with objective measurements is depicted in Figure 1b. By training the subjective–objective unity of each human observer, their inter-subjectivity increases, and the commonality of measurement grows into a self-organizing loop between subjective–objective unity and inter-subjectivity, whose mutual feedback attains a higher degree of inter-subjective objectivity.
Note that in a philosophical generalization (e.g., phenomenology), all data are derivatives of subjectivity, because a machine observation is also constructed on human perception in the establishment of the measurement principle, the construction of sensing devices and data processing workflows, and the final interpretation. To avoid a trivial argument that does not affect the reproducibility of the results, we adopt the standpoint that separates subjectivity and objectivity by the degree of human versus machine intervention in the observation outcome. We call this conceptual model the inter-subjective objectivity model.
2.2. Representative Model: Buoy–Anchor–Raft Model
In order to apply the inter-subjective objectivity model to a quantitative framework of actual data processing, we develop a general example model with more familiar and analogical terminology that is intuitively easier to understand: the buoy–anchor–raft model, schematically expressed in Figure 2. The definitions and correspondence to the inter-subjective objectivity model are given as follows:
- Buoy refers to subjective data that fluctuates on the sea surface, representing subjectivity. Buoy can provide subjective estimates of an observation object lying on the objective sea floor, but the observation is biased by subjective fluctuations.
- Anchor refers to objective data that is fixed on the sea floor representing objectivity, without the influence from the subjective sea surface. Anchors can be connected to buoys, which provide the evaluation of subjective fluctuation with respect to objective machine measurements.
- Raft represents the relationship between buoys, and refers to inter-subjectivity of data without reference to anchors. A buoy can evaluate another buoy using relative difference of fluctuation on a subjective sea surface, and the overall commonality between buoys is represented as the raft. Nevertheless, it is based on an internal observation between buoys without an objective system of units, and is therefore susceptible to a global drift of collective standard.
- Buoy–Anchor connection rope defines the degree of subjective–objective unity. As a buoy’s movement is more controlled by its anchor, higher subjective–objective unity is assured.
- Raft–Anchor connection ropes define the degree of inter-subjective objectivity. In addition to the commonality between buoys represented as a raft, the effects of the global drift from subjective sea surface could be controlled with anchors within a plausible range of error with respect to the objective sea floor.
Concrete examples of the buoy, anchor, and raft in various social systems and scientific domains are given in Table 1. While inter-subjective objectivity is a conceptual framework that classifies the quality of observation, the buoy, anchor, and raft refer to actual constructs of databases implemented with ICT. The terms arose from the developmental process of management systems in open systems science, sharing a perspective with the transversal question in the grand challenge of AI research regarding the effective extraction of scientific knowledge from heterogeneous data of varying quality. Without properly positioning the subjective background of a study, it is often the case that knowledge established with large-scale experiments and statistical analyses is revealed to be false in high-throughput discovery-oriented research, resulting in a null field with statistically prevailing bias. As shown in Table 1, conceptual problematics for the implementation of ICT in various fields can be mutually characterized with the use of the buoy–anchor–raft model. This means the ICT infrastructure can be applied and shared in a synergistic way across domains, which is beneficial especially for the open-source development advocated in complex systems science. Recent development of application programming interfaces for big data integration has increased the support for this challenge, which calls for a general theoretical framework of information processing that the buoy–anchor–raft model can provide.
We then consider a mathematical expression of the buoy–anchor–raft model with a view to providing a simplified idea of computation with respect to the evaluation of inter-subjective objectivity. Recently emerging contexts of citizen science make use of buoys as important information sources, in contrast to objective sciences such as traditional physics, which are usually self-contained with anchors. Buoys fluctuate with human subjectivity, which is scientifically called bias. Suppose we cannot directly measure observation objects as anchors. This constraint does not necessarily arise from the observation principle but rather from resource limitations: for example, a field evaluation of biodiversity mostly depends on human observation because massive DNA barcoding is too costly or even ineffective. Thus, the accuracy of buoy data should be evaluated with other buoy–anchor connections compatible with the observation objects. By defining buoy data $b$ and corresponding measurable anchor data $a$, a buoy–anchor connection $c$ can be defined as an error function between $b$ and $a$:
$$c := \mathrm{Err}(b, a).$$
In the case of $n$ observation objects and one observer, a typical example of the buoy–anchor connection is given by the regularized mean squared error:
$$c := \frac{1}{n}\sum_{k=1}^{n} (b_k - a_k)^2.$$
The regularization makes $c$ accessible to canonical evaluations of confidence intervals, such as the t-test. As a generalization to $m$ observers, let us describe the buoy–anchor connections as the vector $(c_1, \ldots, c_m)$, where $c_i$ denotes the error function of observer $i$’s buoy data $(b_{i1}, \ldots, b_{in})$ with respect to the anchor data $(a_1, \ldots, a_n)$.
Next, we consider the raft model. In most social systems, the case-wise precise measurement of anchors is impossible, and we call for the raft of common sense and other social feedback as a premise of plausible judgement. Consider $m$ observers with somehow quantifiable opinions (buoys) on $n$ observation objects. We define the raft matrix $B := (b_{ik})_{m \times n}$ as a generalization of buoy data to $m$ observers and $n$ observation objects, where the raft by definition refers to the commonality contained between these buoys. In a completely equal society where every observer’s opinion is equally respected, we obtain the mean inter-subjective evaluation on the $n$ objects as follows:
$$\bar{b}_k := \frac{1}{m}\sum_{i=1}^{m} b_{ik}, \quad k = 1, \ldots, n.$$
Decision-making based on the evaluation of the raft can represent the community’s mean quantifiable opinions, although it is not free from collective bias; it remains only within the framework of inter-subjectivity. For a better evaluation in terms of inter-subjective objectivity, we need to introduce a connection with anchors. Introducing the buoy–anchor connections $c_i$ from Equation (4), an example of the inter-subjective objective evaluation in the sense of a raft–anchor connection can be given by the weighted mean
$$\tilde{b}_k := \sum_{i=1}^{m} w_i\, b_{ik},$$
where the weight $w_i$ of observer $i$ decreases with the buoy–anchor error $c_i$ and is normalized so that $\sum_{i=1}^{m} w_i = 1$.
This means that the error function of the buoy–anchor connection is reflected as an entropy that represents the subjective–objective unity of each observer. The opinion of an observer with higher subjective–objective unity is weighted according to the informational scarcity of subjective errors. Such integrated evaluations incorporating a scoring system of observers’ quality are among the general solutions in web-based citizen science.
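A sketch of such a weighted evaluation, under the assumption of a softmax-style weighting in which a smaller buoy–anchor error yields a larger weight (the exact weighting of the original equation may differ):

```python
import numpy as np

def raft_anchor_evaluation(raft, errors):
    """Weighted inter-subjective objective evaluation of n objects:
    observers (rows of raft) with smaller buoy-anchor error get larger
    weight. The softmax weighting is an illustrative choice."""
    raft = np.asarray(raft, dtype=float)      # shape (m, n)
    errors = np.asarray(errors, dtype=float)  # shape (m,)
    w = np.exp(-errors)
    w /= w.sum()                              # weights normalized to sum to 1
    return w @ raft                           # weighted mean over observers

raft = [[2.1, 4.2], [3.0, 5.0]]
errors = [0.02, 1.0]
est = raft_anchor_evaluation(raft, errors)
plain_mean = np.mean(raft, axis=0)
print(est, plain_mean)  # the estimate is pulled toward the low-error observer
```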
Note that the $n$ objects of observation can also coincide with the $m$ observers themselves. As the raft can be obtained independently of anchors, the model can also accept subjective objects of observation for which direct anchors do not exist, such as psychological states or the quantification of qualia, as in Quality Function Deployment (QFD) and pain scales. In such cases, traditional methods only employ simple raft evaluation without anchors, as formalized in Equation (8). In contrast, with the buoy–anchor–raft model, it is possible to relate indirect anchors to other related, objectively quantifiable variables by expanding the database into a more comprehensive system. In either case, this model provides accessibility to the inter-subjective objective evaluation by properly defining the buoy, anchor, and raft and their connections.
The correspondence between the buoy–anchor–raft model and the computational variables developed in the following sections is listed in Table 2.
3. Complexity Measures
We consider the generalization of complexity measures with respect to essential information processing in citizen science, based on the inter-subjective objectivity model with buoy–anchor–raft constructs. The concept and definition of complexity vary across fields, such as algorithmic complexity, statistical complexity, biological complexity, etc. In this paper, we take a generalized definition of a complexity measure as a projection from a system’s variables to a one-dimensional quantity, composed to express a distinctive characteristic of the system. This includes classical indices mentioned in the context of complexity, as well as various forms of information expressed as numbers in ICT, such as the feature dimensions of machine learning.
3.1. Complexity Measure and Search Function
We consider general forms of complexity defined on the database in relation to the search function. Complexity measures are widely studied in information theory, with the underlying principle of abstracting a low-dimensional representative index of useful features for the functional characterization of complex systems. Usually, complexity measures defined on $n$ real variables are epimorphisms onto the one-dimensional real number line, $\mathbb{R}^n \to \mathbb{R}$. The general complexity measure for citizen science is therefore a projection of the database to a real-valued index, $D \to \mathbb{R}$, with the condition that this transformation provides some utility for management.
The importance of utility depends on the need for information retrieval in the citizen science process, or the conditions that are practically used in a database search. Indeed, the search function is the retrieval of the corresponding data set with respect to a given condition, such that
$$x = S(D, q),$$
where $x$ stands for the search result on database $D$ with search query $q$. For example, $q$ is an if–then construct that can specify the value range of real variables, or the matching with a specific symbolic sequence, which returns the corresponding data sets into $x$.
In order to perform computations such as the buoy–anchor–raft model evaluation, the integral $I$ of a σ-finite measure $\mu$ on $D$ with respect to the condition $q$ can be defined as follows, with indicator function $\mathbf{1}_q$:
$$I := \int_D \mathbf{1}_q(x)\, d\mu(x),$$
where $\mathbf{1}_q(x) = 1$ if $x$ satisfies $q$, and $0$ otherwise.
In the one-dimensional case, $x$ can represent either a buoy or an anchor. If we define $\mu$ through the occurrence probability $p(x)$ of $x$, such as
$$d\mu(x) = -p(x)\log p(x)\, dx,$$
then $I$ coincides with the entropy
$$H = -\int p(x) \log p(x)\, dx,$$
one of the typical information-theoretical complexity measures. $\mu$ can also include a joint distribution $p(x, y)$, in which case the mutual information,
$$MI(X; Y) = \iint p(x, y) \log \frac{p(x, y)}{p(x)\,p(y)}\, dx\, dy,$$
can incorporate raft, buoy–anchor, and raft–anchor connections.
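Discrete counterparts of these two measures can be sketched as follows; the binning of the database into probability vectors is assumed to have been done beforehand:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum p log p (natural log), ignoring zero bins."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def mutual_information(joint):
    """MI(X;Y) = sum_xy p(x,y) log( p(x,y) / (p(x) p(y)) ) for a joint
    probability table with X as rows and Y as columns."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of X
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

print(entropy([0.5, 0.5]))                      # log 2
print(mutual_information([[0.5, 0.0],
                          [0.0, 0.5]]))         # fully dependent: log 2
print(mutual_information([[0.25, 0.25],
                          [0.25, 0.25]]))       # independent: 0
```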
As a search query, $q$ provides a value of the complexity measure $I$; we can also inversely use $I$ to specify $q$. We consider the invertible map $G$ that generates all possible queries which return the set of $x$ associated with the given value of the complexity measure $I$. For example, we can search for the datasets whose entropy is higher than a threshold $\theta$ by setting the query to $G(I)$ with $I > \theta$.
Nevertheless, complexity measures that specifically define an arbitrary query $q$ are generally not given explicitly. In practice, we usually compare the performance of known complexity measures with respect to their ability to characterize the features on which we focus our analysis. The general task is to invent a novel complexity measure that can exclusively separate patterns in $D$, given implicitly as search results. For that purpose, the following theorem holds:
Theorem 1. For any search condition $q$, we can construct an exclusively selective complexity measure which can sort out effects from other variables, with an invertible function $G$ mapping the measure’s value to $q$. The definition of the invertibility of $G$ follows that of the invertible map introduced above.
Proofs of the theorems are given in Appendix A.
The intuitive geometric meaning of the inverse function relationship between complexity measures and search function is shown in Figure 3.
3.2. Observation Commonality as Complexity
Inter-subjective objectivity is based on the commonality among subjectivity, inter-subjectivity, and objectivity. Essential computation is therefore the search for commonality between different observation datasets, whether it be from humans or machines. We consider the observation commonality to be a complexity measure that conforms to inter-subjective objectivity, and analyze its general mathematical structure.
We consider σ-finite probabilistic measures $\mu_i$, $\mu_j$ on the measurable database space $(D, \mathcal{B}(D))$, where $\mathcal{B}(D)$ stands for the Borel σ-algebra of $D$. Then, the convolution $\mu_i * \mu_j$ is defined as follows:
$$(\mu_i * \mu_j)(A) := \int_{D} \mu_j(A - x)\, d\mu_i(x), \quad A \in \mathcal{B}(D),$$
where $\mathcal{B}(\mathbb{R}^n)$ and $\mathcal{B}(S^m)$ represent the σ-algebras of $\mathbb{R}^n$ and $S^m$, respectively.
Through an appropriate variable transformation, the convolution of probability measures with real-type variables in Equation (21) can be expressed as the probability distribution of the sum of the variables $X_i + X_j$:
$$(\mu_i * \mu_j)(A) = P(X_i + X_j \in A).$$
By choosing finite sets of real-type variables, such as a time period, a geographic range, and other real-type variable ranges, as well as symbols such as the names of observation objects, one can define the commonality of observations as a part of the convolution of the probabilities from different observers. The observations $\mu_i$ and $\mu_j$ can be of any nature between subjectivity, inter-subjectivity, and objectivity.
We now consider the condition of valid observation with respect to the regularization of the probability measures as follows, for a general number of observers $i = 1, \ldots, N$:
$$\mu_i(D) = 1.$$
This means that by expanding the scale of the real-type variables to infinity, one can observe the occurrence with probability 1. The same formalization also applies to a σ-finite measure on $S^m$, which is integrated in the formalization with $\mathbb{R}^n$.
Next, consider a confined variable range $r$ with positive probability measure. This range can be of any complex form as long as it supports positive measure. In a real situation, this can correspond to intermittent observation time intervals, a scattered geographical range, and other discrete ranges of the real-type variables. We define the rate of observation by observer $i$ within the variable range $r$ as $\mu_i(r)$, which converges to (23) with $r \to D$.
The commonality of observation between two observers $i$, $j$ based on $r$ is expressed as the convolution confined to $r$, which also means taking the sum of joint distributions between all smallest measurable events in $r$. The additional condition limits the integral of each variable within $r$. The following generalization holds:
Theorem 2. For $N$ independent and valid observations on the variable range $r$, let the commonality be the $N$-fold convolution of the observation measures confined to $r$, scaled by a coefficient Λ that remains invariant under the convolution. Then the convolution factorizes into the product of the individual observation rates $\mu_i(r)$.
This means that the $\frac{1}{N}$-th power of the multiple convolution represents the geometric mean of the $N$ independent valid observation rates. By choosing the regularization factor Λ, the commonality corresponds to the ensemble of possible mean values, integrated values, and other weighted sums of $N$ random samplings from $r$. The regularization parameter can further be generalized to an arbitrary measurable function representing commonality characteristics, taking $x$ as a variable.
With the use of the logarithmic scale, the information of the commonality is the sum of those of the individual observations:
$$-\log \prod_{i=1}^{N} \mu_i(r) = \sum_{i=1}^{N} -\log \mu_i(r).$$
As a similar property related to the geometric mean, note that the following Young’s inequality also holds:
$$\|\mu_i * \mu_j\|_{TV} \leq \|\mu_i\|_{TV}\, \|\mu_j\|_{TV},$$
where $\|\cdot\|_{TV}$ denotes the total variation. This assures us that the variation of the commonality remains within the order of the product of each observation’s variation.
However, it is important to note that, as a general property of convolution, the confined convolution does not in general factorize into the product of the individual observation rates: the equality only holds in special cases, such as $r = D$, without implication for the independence of the observations. For the convolution on a general subset $r \subset D$, the exact definition requires direct calculation on $r$ without relevance to the factorized form. In order to obtain a fast computable form, the following asymptotic generalization holds:
Theorem 3. As $N \to \infty$, for observation measures with finite means $m_i$ and variances $\sigma_i^2$ that are absolutely continuous with respect to the Lebesgue measure $\lambda$ on $\mathbb{R}$, the density of the $N$-fold convolution converges almost everywhere to the normal probability density with mean $\sum_{i=1}^{N} m_i$ and variance $\sum_{i=1}^{N} \sigma_i^2$:
$$\frac{1}{\sqrt{2\pi \sum_i \sigma_i^2}} \exp\!\left(-\frac{\left(x - \sum_i m_i\right)^2}{2\sum_i \sigma_i^2}\right).$$
A numerical example of the convolution is presented in Figure 4. Theorems 2 and 3 can be directly generalized to higher dimensions.
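The asymptotic normality of the repeated convolution can also be checked numerically, in the spirit of Figure 4; a sketch using a skewed discrete base distribution and NumPy’s `convolve` for the N-fold convolution (the grid and base distribution are illustrative assumptions):

```python
import numpy as np

# Skewed base distribution of a single observation on the grid {0, 1, 2}.
p = np.array([0.6, 0.3, 0.1])
mu = float((np.arange(3) * p).sum())               # mean of one observation
var = float(((np.arange(3) - mu) ** 2 * p).sum())  # variance of one observation

# N-fold convolution = distribution of the sum of N independent observations.
N = 50
pN = p.copy()
for _ in range(N - 1):
    pN = np.convolve(pN, p)

# Normal density with matching mean N*mu and variance N*var,
# evaluated on the integer support of the convolved distribution.
k = np.arange(len(pN))
normal = np.exp(-(k - N * mu) ** 2 / (2 * N * var)) / np.sqrt(2 * np.pi * N * var)

max_dev = float(np.abs(pN - normal).max())
print(max_dev)  # small: the 50-fold convolution is already close to normal
```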
3.3. Topological Structure of Complexity 1: Total Order of Observations
We consider the topological structure of inter-subjective objectivity based on the complexity, defined as the convolution between different observations. As the commonality within inter-subjective objectivity is defined with multiple different observations, the topological ordering based on these complexity measures is possible with observations of any nature.
We consider the commonality space with each observation dataset as a point, and the commonality between datasets as the distance between each pair of points. This can be considered as an undirected complete graph with $N$ vertices, with the pair-wise complexity measures as edge lengths. The general property of Euclidean space allows a complete graph of size $N$ to be embedded in $N-1$ dimensions (e.g., any line between two points is a one-dimensional space, and any triangle of three points is a two-dimensional surface, etc.), although additional quantitative restrictions such as the triangle inequality on each triplet of edges are required. In order to treat an arbitrary set of complexity measures and yield general characteristics of the commonality space, we need to focus not on the actual values of complexity, but on the topological order between them.
Let us first consider the total order between complexity values with observation data contained in $N$ vertices $V$. One can determine the total order between edges $E$ by taking the mean order relationship between each pair of edges with the following algorithm (namely, the pair-wise order algorithm):
- For each pair of edges $e_i, e_j \in E$, calculate the order relation $e_i > e_j$ or $e_i < e_j$ with respect to the given complexity measure as an edge attribute such as length.
- Score each edge $e_i$ by mapping to an integer: add $+1$ if $e_i > e_j$ and add $-1$ if $e_i < e_j$, with respect to all other edges $e_j$.
- Sorting by the score provides the total order of $E$.
Note that the quantitative difference is completely lost in the case of antisymmetry of the score. We will consider the meaning of this information loss with respect to other compatible sets of observation in Section 3.4.
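The pair-wise order algorithm above can be sketched directly; ties contribute nothing to the score, reflecting the information loss noted above:

```python
def pairwise_order(values):
    """Total order by pair-wise comparison: each item scores +1 for every
    item it exceeds and -1 for every item exceeding it; sorting by score
    yields the total order (equal values score 0 against each other)."""
    keys = list(values)
    score = {k: 0 for k in keys}
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            if values[a] > values[b]:
                score[a] += 1
                score[b] -= 1
            elif values[a] < values[b]:
                score[a] -= 1
                score[b] += 1
    return sorted(keys, key=lambda k: score[k])

edges = {"e1": 0.2, "e2": 1.5, "e3": 0.7}
print(pairwise_order(edges))  # ['e1', 'e3', 'e2']
```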
Next, we consider the topological order of complexity for observations according to the total order of these commonalities. We need here to translate the total order between edges $E$ into that of observations $V$. This can be obtained by calculating, for each triplet of vertices, the associated total order of edges with the following algorithm (namely, the triplet order algorithm, schematically represented in Figure 5):
- For each triplet of observations and its associated edges, update the score of each observation by mapping to an integer with the following six rules:
- If , then , , .
- If , then , , .
- If , then , , .
- If , then , , .
- If , then , , .
- If , then , , .
- Sorting by the score provides the total order of $V$.
The commonality order of V represents the topological structure of collective intelligence in citizen science with respect to inter-subjective objectivity, which corresponds to the topological inclusion relation of the Venn diagram in Figure 1.
3.4. Topological Structure of Complexity 2: Permutation between Total Orders of Observations
We expand the situation to two sets of observations—namely, observations I and II. For example, observers I and II observing $N$ objects, or $N$ observers observing two different objects I and II. It can also represent the application of two different complexity measures I and II to $N$ observations. For simplicity, we limit the formalization to two sets of observations, but generalization to a greater number of sets is possible.
In the general case, total orders I and II do not necessarily coincide. The relationship between two total orders with $N$ observations can be described with a permutation of $N$ elements (Figure 6a). In order to analyze the permutation between total orders, let $\mathfrak{S}_N$ be the symmetric group of degree $N$. For $\sigma \in \mathfrak{S}_N$, we define a linear transformation $P_\sigma$ by
$$(P_\sigma x)_k := x_{\sigma(k)}, \quad k = 1, \ldots, N,$$
which describes the permutation between commonality orders I and II.
We define the subspace spanned by the components whose order is changed by σ, which represents the subspace with a compromise of total order, while by defining its complementary subspace we obtain the subspace in which there is no compromise, i.e., the complete matching of the two commonality orders. The whole commonality space can be divided into the direct sum of these two subspaces:
As depicted in Figure 6a,b, the compromise between two commonality orders is expressed as a non-linear folding relationship between them. Under the assumption that the complexity measure is a continuous function, the integrated complexity measure that supports both commonality orders can be expressed as a folded structure (topologically speaking), such as the shape of the letter “N” (also the capital letter of Non-identical), taking the commonality measures of I and II as affine coordinates: the example with a red dotted line in Figure 6b shows that we can compose an integrated commonality measure by bending the commonality measure of II into an “N” shape while keeping that of I straight (in an “I” shape, for Identical), which resolves the compromise. The “N”-shape transformation of the commonality measure changes the topology of the commonality order with respect to a permutation σ, while the “I” shape represents the identical order. The non-compromising part of the two commonality orders conserves its order under projection onto any linear combination of the two commonality measures, which topologically does not require “N”-shape folding, but maintains “I”-shape matching.
For simplicity, we call the topological compromise between commonality orders the I–N compromise, and the topologically identical matching the I–I matching. Then, the I–I matching subspace can be obtained as the linear combinations of commonality measures I and II, and the subspace required for the resolution of the I–N compromise corresponds to the complementary space (Figure 6b,c).
We call the former the I–I space, consisting of I–I dimensions, and the latter the I–N resolution space, consisting of I–N resolution dimensions. The mean commonality order of the two commonality orders projected onto the I–I space (red solid arrows in Figure 6b,c) can be obtained with the use of the pair-wise order algorithm of Section 3.3, applied not to the commonality itself, but to the commonality orders. We call this the I–N mean commonality order, since it adopts the mean total order of the commonality orders of I and II, resolving the I–N compromise. Note that the information lost by the antisymmetry of the pair-wise order algorithm does not affect the division into I–I and I–N resolution subspaces. Geometrical representations of the I–N compromise, the I–I matching, the corresponding dimensions and spaces, and the I–N mean commonality order are given in Figure 6.
We finally consider a statistical test on the degree of coincidence (TDC) between two commonality orders.
Statistical test on the degree of coincidence (TDC) between two commonality orders:
Given that commonality orders I and II with N observations follow a uniformly random permutation as the null hypothesis, the degree of coincidence between the two commonality orders follows a binomial distribution B(C(N,2), 1/2), where B(n, p) signifies a binomial distribution with parameters n and p, the degree of coincidence D is the number of I–I matching pairs among the C(N,2) pairs of observations, and p = 1/2 is the probability of I–I matching for each pair.
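The TDC can be computed directly from two rankings. Following the theorem, the number of pairwise coincidences D is tested against Binomial(C(N,2), 1/2); the sketch below is illustrative.

```python
from itertools import combinations
from math import comb

def tdc_pvalue(order1, order2):
    """Degree of coincidence (TDC) between two total orders of the
    same N observations, with an upper-tail binomial p-value.

    Each of the C(N, 2) pairs either keeps its relative order in both
    rankings (I-I matching) or is flipped (I-N compromise).  Under the
    null hypothesis of a uniformly random permutation, the number of
    matches D is modeled as Binomial(C(N, 2), 1/2).
    Returns (D, P[X >= D])."""
    rank1 = {v: i for i, v in enumerate(order1)}
    rank2 = {v: i for i, v in enumerate(order2)}
    n_pairs = comb(len(order1), 2)
    d = sum(
        (rank1[a] < rank1[b]) == (rank2[a] < rank2[b])
        for a, b in combinations(order1, 2)
    )
    p = sum(comb(n_pairs, k) for k in range(d, n_pairs + 1)) / 2 ** n_pairs
    return d, p
```

For two identical orders of seven observers, all 21 pairs match and the null hypothesis is rejected at any conventional threshold (p = 2⁻²¹).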
With respect to the buoy–anchor–raft model in Section 2.2, the following correspondence is possible:
- Two observers observing N objects: Commonality orders I and II can correspond to either subjective (buoy) or objective (anchor) observation. The I–N resolution provides an integrated commonality measure such as buoy–anchor connection and raft evaluation, according to the nature of the observation. TDC provides connections between buoys and/or anchors.
- N observers observing two different objects: The commonalities of N observers—whether subjective (buoy) or objective (anchor)—are ranked with respect to two different objects I and II. The I–N resolution provides a mean ranking of the N observers’ commonality upon these observations. TDC provides the reproducibility of commonality among the N observers.
- Application of two different complexity measures I and II to N observations: For example, the case of raft–anchor connection where N subjective observers (buoys) are ranked with inter-subjective commonality (raft evaluation) and weighted with two different anchors. The I–N resolution provides a mean ranking of the N observers’ inter-subjective objectivity, integrating multiple criteria of inter-subjective and objective evaluation. TDC represents statistical dependencies between two complexity measures in response to a given inter-subjective objective measurement. While a significant matching between two commonality orders assures reproducibility based on the coincidence of observation with these measures, non-significance can also be used to quantify the complementarity of different evaluations.
4. Computational Complexity
The computation of complexity measures and commonality orders depends on the exhaustive calculation of combinatorics between observations. The computational complexity of such calculation should also be investigated in terms of topological complexity, in order to yield a general theoretical platform that does not depend on the particularity of the database.
4.1. Topological Complexity of Commonality
First, we investigate the topological order of commonality among N observations. Using the convolution as commonality (27), we define the maximum commonality order as follows:
The general topological structure of is depicted in Figure 7.
On the cardinality of this structure, the following holds:
As N → ∞, the cardinality approaches ℵ₀, where ℵ₀ represents aleph-naught, the cardinality of the natural numbers.
This means that for any elaborated inter-subjective objectivity, there is always the possibility of developing another set of observations that attains higher inter-subjective objectivity by increasing the dataset. This structure assures the representation of a paradigm shift in science when sufficient contradicting evidence gains a majority over an old model. For example, minority reports in biology that may lead to novel discoveries in the future can be properly stored and distinguished from erroneous reports as more evidence accumulates.
4.2. Algorithmic Complexity
Secondly, we evaluate the computational complexity with respect to the computing time scale. Since data-driven citizen science requires real-time computation in a highly interactive manner with the observation process, the algorithmic complexity of the calculation of complexity measures is an essential limiting factor of performance. As commonality is based on the intersection of multiple observations, its exhaustive computation confronts combinatorial explosion as datasets increase. Although the computation of complexity itself (or the resolution of a search query as a mathematical theorem) is provable and an algorithmic solution can be found, the computational resource is another practical issue for real-world implementation, especially in distributed observation.
The computational time scale required for sorting a database according to a given utility such as commonality is listed in Table 3. Under a general condition with an observation probability database of size N, the maximum complexity lies in the calculation of the commonality order based on the intersection of ⌊N/2⌋ or ⌈N/2⌉ elements, whose sorting time belongs to the factorial order of N. An example case is depicted in Figure 7. This means that an algorithmic burden exists towards the calculation of middle-scale commonality with respect to the data size. As inter-subjective objectivity successfully increases in citizen science, this peaking of algorithmic complexity at intermediate scales may hinder the effective feedback necessary for guided self-organization.
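The peaking of the combinatorial burden at middle-scale commonality can be seen directly from the binomial coefficients: the number of k-element intersections to examine in a database of size N is C(N, k), maximal near k = N/2. A small illustration:

```python
from math import comb

def commonality_workload(n):
    """Number of candidate element subsets whose intersection must be
    examined for each commonality order k, in a database of size n."""
    return {k: comb(n, k) for k in range(1, n + 1)}

workload = commonality_workload(20)
# The burden peaks at the middle commonality order k = n // 2:
peak_order = max(workload, key=workload.get)
```

For n = 20, the peak is at order 10 with C(20, 10) = 184756 candidate intersections, while the extreme orders (k = 1 or k = 20) require almost no work.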
However, in a practical situation, the actual computation time may remain within polynomial order if the effective data size shrinks with respect to the increase of the maximum commonality order:
By defining the diminution rate of data combination with respect to the maximum commonality order, the order of its product is upper bounded by the d-th root of the maximum computational complexity, where |·| returns the size of the database, and c represents the polynomial order of the algorithm with respect to the data size N.
From this result, we can conjecture a condition that will assure exhaustive feedback with polynomial response time of degree c. Since the left side is based on past calculations of lower maximum commonality orders, we can anticipate whether interactive information processing can assure comprehensive feedback. This adds a criterion on the criticality of guided self-organization mediated by computation, which will be explored in Section 5.
Another methodology besides exhaustive computing is to implement a local gradient algorithm as a local interaction that leads to a global heuristic solution without top-down control. This can also be achieved with the use of a limited maximum commonality order, which keeps the computational time within polynomial order.
4.3. Big Data Integration
Thirdly, we consider the computational complexity required for big data integration. As open data becomes increasingly available in citizen science, the integration of massive databases from different resources has become one of the most important data processing methods. The conversion of different databases through an application programming interface is a basic protocol when the database is distributed over multiple servers.
The computation required in big data integration is the extensive calculation of commonality in the direct product of multiple databases. For simplicity, we consider the integration of two databases of sizes N and M, respectively. A joint distribution between subsets of the two databases needs to be determined with respect to common parameters in order to obtain an integrated database, including the calculation of commonality up to a given order, such as order-wise correlations. Exhaustive computing follows the argument in Section 4.2, giving the extension of Theorem 6:
Given the diminution rates of data combination with respect to the maximum commonality orders during the integration of the two databases, the order of their product is upper-bounded by the d-th root of the maximum computational complexity, where c represents the polynomial order of the algorithm with respect to the data sizes N and M.
In this formalization, the computational complexity of database integration also confronts combinatorial explosion with respect to data size. Similarly to (42), we then explore a practical condition under which the effective maximum commonality order can be treated within polynomial time of degree c, such that
For that purpose, we set the uniform sparseness u of random databases, representing the density of combinations that support the existence of commonality at each order, which keeps the diminution rates of data combination (40) and (43) invariant under the definition. With respect to the total size of the database after integration, L, the following holds:
As in random data (46), the mean condition of (45) converges to the following inequality, which represents polynomial time constraints on computational complexity for the exhaustive calculation of newly emerging commonality orders within data size L, where * signifies the discrete convolution (20):
Numerical observation of the proof is given in Figure 8.
This signifies that the convolution of the power functions of each database’s size serves as the complexity measure of big data integration with respect to computational complexity. This provides the condition on data sparseness u such that the exhaustive calculation of all newly generated commonality orders within size L can be treated in polynomial time order c under the algorithmic constraint d. As the inequality indicates, the sparser the data, the easier the calculation of joint commonality.
5. Conjectures on Guided Self-Organization
With effective feedback by computation, citizen science dynamics is expected to converge to a critical state where the objective is collectively optimized through the mutual increase of inter-subjective objectivity. However, several aspects may intervene in the resulting self-organized state, which require theoretical interpretation. In this section, important general aspects are exemplified in relation to self-organized criticality.
5.1. Criticality by Limitation
The accuracy and reproducibility of observation are primary factors that define the consequent resolution of information represented in a database. Computational complexity also constrains the speed of information processing for prediction. Several limiting factors may generically arise, such as:
- Limitation by principle: Deterministic chaos inherent in a natural system does not allow for long-term prediction, because the tiniest observation error of the present state develops in exponential order. The short-term validity of meteorological prediction is a typical example.
- Limitation by computational complexity: As explored in Section 4.2, extensive feedback based on exhaustive computing is often impossible with respect to available computing resources. The resolution of feedback may include time delays or incomplete optimization. The spatio-temporal scale of the forecast also sets a constraint, as a general trade-off between prediction accuracy and computational resources: the coarser the forecast granularity is, the less costly the calculation becomes, and the more likely it is to realize an accurate long-term prediction.
These limitations fundamentally regulate the order of significant digits in the prediction process, at the edge of the resulting precision where the accuracy reaches criticality. The whole dynamics is also confined by the criticality of the observed phenomena themselves, by which observers’ behaviour is influenced.
5.2. Criticality by Successful Learning
The motivation of citizen science is not necessarily the construction of versatile artificial intelligence, but also the integration and augmentation of human capacity [4,12,13]. The success of citizen science can also be defined in terms of information transfer from machine to human, in which criticality is assumed to appear.
Let us consider the case when successful learning mediated by computation has transferred an effective prediction model into human cognitive capacity. We take an example with Bayesian estimation, which is also a general model of our brain function. The general formulation of Bayesian estimation updates the parameter θ of a hypothesized prior probability p(θ) with respect to the observed data x, and provides an estimation of the posterior probability p(θ|x) given by Bayes’ theorem, p(θ|x) = p(x|θ)p(θ)/p(x), where p(x|θ) is considered the likelihood function, which updates p(θ) to p(θ|x).
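The prior-to-posterior update can be made concrete with a standard conjugate beta-binomial example (illustrative only, not taken from the paper's data): a Beta prior on a species-occurrence probability updated by a citizen's binary reports.

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Bayes' theorem in conjugate form: a Beta(alpha, beta) prior on
    an occurrence probability theta, combined with a binomial
    likelihood, yields a Beta(alpha + successes, beta + failures)
    posterior."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1); a citizen reports a species in 7 of 10 visits.
a, b = beta_binomial_update(1, 1, 7, 3)
estimate = posterior_mean(a, b)  # posterior estimate of occurrence
```

The posterior mean 8/12 ≈ 0.67 shifts from the prior mean 0.5 toward the observed frequency 0.7, which is the update p(θ) → p(θ|x) in its simplest closed form.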
We now consider that the prior probability (or the model of prediction) depends on the process of computation C and human decision D, as human decision is supported by computation.
This formalization corresponds to Bayesian hierarchical modelling, where computation C provides hyperparameter of human decision D as prior distribution:
When the human has successfully acquired the model represented in the computational model, the decision and the computation behave as independent identical distributions, and as independent and informationally homologous distributions.
This criticality qualitatively corresponds to the saturation stage of the Markov chain Monte Carlo (MCMC) method in the optimization of the hierarchical model (52), where hyperparameter and parameter converge to independent stable distributions. Therefore, by monitoring the dependency of machine–human interaction with respect to the actual predictability, one can judge whether the computational model or the human observation should change, or whether the actual phenomenon is in transition:
- When the actual prediction accuracy is high and human–machine interaction is high, this indicates successful modelling of the observed phenomenon with the use of computation.
- When the actual prediction accuracy is high and human–machine interaction is low, the human has achieved a successful understanding of the phenomenon with less dependency on the machine.
- When the actual prediction accuracy is low and human–machine interaction is high, this indicates the possibility that computational capacity is not sufficient to effectively treat the phenomenon. Otherwise, the observed phenomenon might be in a dynamical transition such that the effective computational model needs to be changed.
- When the actual prediction accuracy is low and human–machine interaction is low, more human effort needs to be engaged, both in actual observation and in the utilization of the machine interface.
5.3. Criticality by Guided Optimization
The actual management task of citizen science is often firmly related to the sustainability of a social–ecological system, where the achievement of robustness and resilience is an important criterion of criticality [3,5]. A universally robust model with respect to an arbitrary variable cost function is canonically given by the uniform distribution, which is commonly adopted as a prior of Bayesian estimation and random search algorithms. It is also widely prevalent in biological phenomena, as the survival rate depends on the geometric mean of evolutionary fitness, which is maximized with uniformity in space, time, and statistical configuration [32,37].
On the other hand, a short-term management goal is usually biased by a given objective. How to reconcile short-term local efficiency and long-term global sustainability is a crucial issue for guided self-organization of management in citizen science.
In order to optimize the balance between different spatio-temporal scales, information geometry can provide a theoretical compromise in terms of informational complexity. Suppose the actual distribution of the variable, a short-term management goal, and an idealized long-term robust distribution are given. In many natural systems, the uniformity supporting robustness as the result of self-organization is expressed with the entropy maximization principle under parameter constraints such as resource availability and energy flux level.
For simplicity, take an example with Shannon’s diversity index H = −Σ p_i log p_i, defined on a discrete distribution over n + 1 symbols, such as the frequencies of n species in biodiversity observation, where the additional symbol represents the non-occurrence of any species. The distribution could be either buoy or anchor. Note that H can be generalized to mutual information in order to express raft, buoy–anchor, and raft–anchor connections, computed from the joint distribution of the two variables concerned.
By maximizing H, we can determine the most diverse distribution as the uniform distribution, which represents the most robust ecosystem under the assumption that every species, including the gap, is equally valuable in terms of ecosystem function in a randomly changing environment.
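A quick numerical check of the diversity index and its maximization by the uniform distribution (the distributions below are illustrative values, not observation data):

```python
from math import log

def shannon_index(p):
    """Shannon diversity H = -sum p_i log p_i (natural log), with the
    convention 0 * log 0 = 0 for absent symbols."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

n_symbols = 4  # e.g., three species plus the non-occurrence symbol
uniform = [1 / n_symbols] * n_symbols
skewed = [0.7, 0.1, 0.1, 0.1]
# H is maximized by the uniform distribution, where H = log(n_symbols);
# any skewed distribution scores strictly lower.
```

Here shannon_index(uniform) equals log 4 ≈ 1.386, while the skewed community scores lower, quantifying its reduced robustness in the sense discussed above.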
With respect to the short-term management goal, both an increase and a decrease of H could occur. However, a general relationship between biodiversity and ecosystem functions imposes an increase of H, meaning a net positive impact on biodiversity and good management in terms of sustainability. H can be generalized to the complexity measure in Section 3.1 with respect to the commonality in Section 3.2, which will be detailed in Section 7.
Expressed as an exponential family, the distribution can be parameterized as a statistical manifold based on the canonical setting of information geometry, with the dual-flat coordinates θ and H and their respective potential functions, based on the Fisher information metric g and connection coefficients [39,40], under the correspondence of the following transformation for discrete distributions.
The elements of the Fisher information metric are given with respect to the dual coordinates, where one matrix is the inverse of the other. This relation defines θ and H as dual coordinate systems orthogonal to each other with respect to g. The α-connection coefficients with respect to a real number α are given by the Fisher information metric, where μ is the mean value function. The values α = 1 and α = −1 are essential in information geometry, defining the e- and m-flat connections, respectively, in terms of the invariance of the tangent space under the covariant differential on arbitrary coordinates of the statistical manifold. For example, the model is e-flat with respect to the coordinates θ, and m-flat with respect to the coordinates H. This pair is called the dual-flat connection of the statistical manifold. The concept of flatness defined by these connections further extends to the concepts of geometric parallelism and geodesics. As autoparallel submanifolds with respect to the connections, the e- and m-flat geodesics between two distributions are defined as follows, with a one-dimensional parameter w:
The unique divergence that is non-negative, vanishes if and only if the two distributions coincide, and remains invariant under possible transformations of the dual-flat coordinates with the connections is the canonical divergence, whose dual divergence coincides with the Kullback–Leibler divergence in the case of α = −1.
From the Pythagorean relation and the projection theorem of the Kullback–Leibler divergence on the dual-flat statistical manifold (p. 63), the following holds:
Let the dual-flat coordinates of the actual distribution, the short-term goal, and the ideal distribution be given, with the canonical definition of the e- and m-flat dual connections. We define the optimal distribution with coordinates on the m-flat geodesic between the goal and the ideal distribution, with parameter w, as
By optimizing with the orthogonal projection of the e-flat geodesic from the actual distribution, the following Pythagorean relations hold, where the two terms are the Kullback–Leibler divergence and its dual divergence, respectively.
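The projection onto the m-flat geodesic can be illustrated numerically with a toy discrete example: a grid search for the mixture parameter w minimizing the Kullback–Leibler divergence from the actual distribution approximates the e-projection in the theorem. The distributions below are illustrative, not from the paper's data.

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def m_geodesic(q, p, w):
    """m-flat geodesic (mixture path) between distributions q and p."""
    return [(1 - w) * qi + w * pi for qi, pi in zip(q, p)]

def project_on_geodesic(actual, q, p, steps=1000):
    """Approximate e-projection of `actual` onto the m-geodesic by a
    grid search for the w minimizing D(actual || geodesic(w))."""
    return min((kl(actual, m_geodesic(q, p, k / steps)), k / steps)
               for k in range(steps + 1))

actual = [0.5, 0.3, 0.1, 0.1]
goal = [0.7, 0.1, 0.1, 0.1]        # short-term management target
ideal = [0.25, 0.25, 0.25, 0.25]   # long-term most diverse state
d_min, w_opt = project_on_geodesic(actual, goal, ideal)
```

The minimized divergence is never larger than the divergence to either endpoint of the geodesic, i.e., the projected point balances the short-term goal and the long-term ideal with respect to the actual distribution.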
Figure 9 shows the geometrical structure of this theorem. In this case, supposing the effectiveness of the complexity measure for management, we want to find the optimal distribution of biodiversity balancing between the short-term goal and the ideal distribution with respect to the actual distribution, based on statistical dependencies between variables that can be orthogonally separated with the Pythagorean relation. As a result, the optimization provides the distribution with minimum informational discrepancy from the short-term objective to the ideal transition towards the long-term most diverse state. The meanings of the major components of the Kullback–Leibler divergence to be used as a guide of self-organization are listed as follows:
- : Discrepancy between actual distribution and optimum portfolio strategy that orthogonally decomposes and attempts to achieve a balance between short-term management objective and long-term sustainability.
- : Target risk of short-term management objective.
- : Buffering element of robustness trade-off between short-term management objective and long-term sustainability.
- : Potential risk of optimum portfolio w.r.t. long-term sustainability.
- : Potential risk of short-term management objective w.r.t. long-term sustainability.
- , : Potential risk of actual distribution w.r.t. long-term sustainability.
6. Results from Biodiversity Management
We demonstrate the application of the model developed in this article to actual citizen science observation data, taking a biodiversity observation activity supported by an interactive database as a typical example. The sample data contain observations by seven citizen participants on 48 subjective binary indices of species occurrence as buoy data on biological diversity, resulting in 336 samples. On the other hand, a buoy–anchor connection was established separately by objective evaluation of each participant’s ability to detect these species.
Commonality orders among the seven observers were obtained both for inter-subjectivity, based on the mutual information of buoy data, and for subjective–objective unity, by simply ranking with the buoy–anchor connection data. These orders are shown in Figure 10. The binomial test defined in (38) was performed on the comparison between the inter-subjective and subjective–objective commonality orders. The random order distribution hypothesis was rejected at the significance threshold. The matching was more consistent at higher orders of commonality, which implies the intervention of subjective bias at lower orders. With respect to the conjectures on criticality in Section 5, the results can be interpreted as a significant self-organization process towards criticality with the increase of inter-subjective objectivity.
We have tackled the general situation in data-driven citizen science where scientific accuracy and reproducibility can only be discussed at the intersection of subjectivity, inter-subjectivity, and objectivity. Based on the conceptual definition of inter-subjective objectivity, a general topological structure was characterized with respect to complexity measure, search function, computational complexity, and criticality conditions. The results provide theoretical criteria for the development of information and communication technology in view of effective assistance and guidance of citizen science from a complex systems perspective.
The universality of the developed theory and models lies in the generality of the commonality concept formalized as convolution. In reality, a joint distribution of N variables can be represented as a function of the convolution of degree N, which allows for an extensive expression of informational complexities.
For example, by choosing a time range with positive Lebesgue measure, the marginal distribution can be expressed as the time integral of the probability measure, according to (24).
On the other hand, the joint distribution is also the time integral of the products between each variable’s probability measure within a simultaneous time range, where the integral is taken with respect to the Lebesgue measure. As defined in (25), this derives the practical form for actual data processing.
Taking the limit, we obtain the canonical definition of the joint distribution with real-valued resolution of time. This generalizes to N variables with (A5), and in the limit it converges to the joint distribution of all N variables.
Therefore, based on the commonality as convolution, we derive all orders of the joint distribution necessary for the calculation of known complexity measures. In a general form, any complexity measure incorporating the information of a joint distribution can be described as a function of the convolution, following the formalization of Section 3.1.
Commonality order is also accessible to existing algorithms that extract the total order of system elements, such as Dulmage–Mendelsohn decomposition and phylogenetic tree analyses. Although the calculation of joint distributions of all orders out of matrix data generally confronts exponential computational time, total orders based on partial combinatorics and statistical testing with a known distribution of the p-value can provide a quick evaluation of the matching between results from different algorithms. The pair-wise and triplet order algorithms on N observations can be processed in O(N²) and O(N³) time, respectively, similar to the range of most other ranking algorithms based on low-order statistics. The comparison between N total orders of commonality requires only second-degree polynomial time (38). Taking such partial optimization and algorithm-wise comparison of performance into account, as an extensive Bayesian estimator including the human of Section 5.2, a deep learning model with the use of massively parallel machine learning can be structurally effective for an interactive recombination of an estimation model based on human feedback.
In order to effectively attain criticality in citizen science where knowledge acquisition, transfer, and control are optimized through self-organization, we need to reach a collective intelligence that is distributed in a parallel way both in our subjective mind and in objective reality. The cost of data-driven science sometimes depends on the overly weighted objective measurement for complete modelling, which can also hinder the agility of taking actions, and opportunity of effective interaction through internal observation . As explored in this article, if there exist natural laws extended in our collective intelligence—much like the physical law in objective nature—we may count on such topological structure, and it may be possible to take effective guidance through partial and distributed observation. Such a way to organize collective intelligence among independent and parallel activity producers could be considered as a social–environmental expansion of the “intelligence without representation”, which is based on the direct interface to the world through perception and action, rather than comprehensive representation of knowledge isolated from the environment . Data acquisition needs to generate potentially effective action strategies, or the affordance under global management principles, instead of modelling the phenomena without essential intervention of actors . This can be described as data-affordance science in contrast to exhaustive data-driven science, in which we substantially depend on the emergent topological structure of inter-subjective objectivity to make decisions in real time, represented at the intersection of the human mind, computation, and natural phenomena. The buoy–anchor–raft model developed as a mutual framework can provide a theoretical basis that expands external observation of conventional science to internal observation necessary for the management and knowledge extraction as a data-affordance science [5,27]. 
As a cumulative effect of synergistic efficiency, observation and data processing could diminish within a computable time scale by implicitly augmenting the knowledge representation incorporated into actual action principles. With measurement–action unity as a process of affordance in both data and reality, a cost-effective interface and a human-dependable system could be realized within the framework of internal observation, as a crucial premise for a sustainable solution. The edge of criticality for a successful citizen science—in terms of its nature and resource restriction—could find its limits neither in our internal mind nor external world, but on the topology of these interactions.
This study was funded by Sony Computer Science Laboratories, Inc.
Conflicts of Interest
The author declares no conflict of interest.
Proof of Theorem 1.
Let us formulate Equation (12) accordingly. As I is an epimorphism but not necessarily a monomorphism, its inverse function generally retrieves a larger subset of conditions, including the original one:
Recursively defining the inverse by specifying the value of I, one obtains the inverse function that brings us back exactly to the comprehensive search condition.
Now, we consider the epimorphism with its right-sided inverse, and set the composition so that the search condition is resolved. By resolving with respect to the search function, we obtain the coincidence with G under an exclusively selective complexity measure. The exact construction depends on the exhaustive computation process, whose computational complexity is characterized in Section 4. ☐
Proof of Theorem 2.
From Tonelli’s theorem, the order of integration can be exchanged, which proves the claim. ☐
Proof of Theorem 3.
The central limit theorem with Lindeberg’s condition assures the following convergence as the sampling number and the number of distributions tend to infinity, where the variables follow independent distributions with finite means and variances taken over the samples.
Based on the central limit theorem, we consider the numerical convergence of the convolution in an accessible way. The convolution represents infinite random sampling in the limit, from N independent distributions as the population distributions with finite means and variances, where each mean and variance is bounded within the total variation of r.
If the measures are finite, we obtain the following from the central limit theorem for independent distributions with finitely bounded means and variances, which coincides with (33) in the limit. In (A12), the additional term serves as the overall normalisation factor, since the convolution is not necessarily normalized as a probability distribution with total probability 1. Since the convolution is replaced by the integral of a normal distribution with a single variable, by restricting to an arbitrary subset, we obtain the theorem (32):
In the case where includes infinite measures that do not guarantee the above convergence, , such that , though
Because, in the opposite case, , , which contradicts the definition (24). Since infinite measures could only appear within a countable set of zero Lebesgue measure, , which means for almost every , the theorem holds. ☐
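As a hedged illustration of the convergence argument in Theorem 3, the following Python sketch sums samples from N heterogeneous independent distributions with finite mean and variance and standardizes the sum, which the central limit theorem drives towards a standard normal. The specific distributions (uniform, exponential, fair coin) are illustrative assumptions, not those used in the proof or in Figure 4.

```python
import math
import random

def standardized_sum_samples(n_dists=30, n_samples=20000, seed=1):
    """Sample S = X_1 + ... + X_N from N heterogeneous independent
    distributions with finite mean and variance, and return the
    standardized values (S - mu) / sigma, which the central limit
    theorem drives towards a standard normal as N grows."""
    rng = random.Random(seed)
    mu, var = 0.0, 0.0                  # accumulated population mean/variance
    sums = [0.0] * n_samples
    for i in range(n_dists):
        kind = i % 3
        if kind == 0:                   # uniform on [0, 1]
            mu += 0.5
            var += 1.0 / 12.0
            draw = rng.random
        elif kind == 1:                 # exponential with rate 1
            mu += 1.0
            var += 1.0
            draw = lambda: -math.log(1.0 - rng.random())
        else:                           # fair coin in {0, 1}
            mu += 0.5
            var += 0.25
            draw = lambda: float(rng.randint(0, 1))
        for j in range(n_samples):
            sums[j] += draw()
    sigma = math.sqrt(var)
    return [(s - mu) / sigma for s in sums]

z = standardized_sum_samples()
mean = sum(z) / len(z)
variance = sum(v * v for v in z) / len(z) - mean * mean
frac_within_196 = sum(1 for v in z if abs(v) < 1.96) / len(z)
```

The empirical mean and variance of the standardized sums approach 0 and 1, and roughly 95% of values fall within ±1.96, as expected for a standard normal.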
Proof of Theorem 4.
The null hypothesis can be represented as a random order distribution, in which pairs of N observations are susceptible to generating an I–N compromise between I and . Choose an arbitrary commonality order I and consider the null hypothesis distribution of .
With respect to an arbitrary pair out of N observations, all permutations in can be divided into two sets and , which correspond to those generating I–I matching and I–N compromises, respectively:
Here, and are not groups, but subsets of the same size, , because , where for ,
Then, with respect to the random permutation, the probability p that each pair from N observations will be judged as I–I matching is given by: , which leads the occurrence number of I–I matchings to follow a binomial distribution with parameters and p.
Note that in this case the binomial distribution can be approximated by a normal distribution with , according to the condition on the mean value and variance . ☐
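As an illustrative check of the null hypothesis model, the following sketch interprets I–I matching for a pair as the pair keeping its relative order under a random permutation; this reading is an assumption for illustration, not the paper's exact matching criterion. A uniform random permutation preserves each pair with probability p = 1/2, so the count of matchings concentrates around half of the C(N, 2) pairs, consistent with the binomial parameters in the proof:

```python
import random
from itertools import combinations

def concordant_pair_counts(n=10, trials=4000, seed=7):
    """For random permutations of n observations, count the pairs whose
    relative order is preserved (interpreted here as I-I matching).
    A uniform random permutation preserves each pair with probability
    p = 1/2, so the count concentrates around C(n, 2) / 2."""
    rng = random.Random(seed)
    items = list(range(n))
    counts = []
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        pos = {v: i for i, v in enumerate(perm)}   # position of each item
        preserved = sum(1 for a, b in combinations(items, 2) if pos[a] < pos[b])
        counts.append(preserved)
    return counts

n = 10
n_pairs = n * (n - 1) // 2                         # C(10, 2) = 45
counts = concordant_pair_counts(n)
mean_count = sum(counts) / len(counts)
```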
Proof of Theorem 5.
Take and consider the database , , in which we divide m observations with elements and their intersections as the commonality structure; denotes the floor function.
As the cardinality of the rational numbers is , any positive common fraction, or , corresponds uniquely to . Now, for an arbitrary , , such that (for example, take with the ceiling function ). Since , for simplicity, let us consider the correspondence for any . With the use of Cantor’s pairing function , we obtain the unique counting natural number for all pairs of :
As contains permutational symmetry with respect to k and m, the uniqueness does not hold for , though from the inverse function of ,
As is equivalent to either or , , which results in
Taking , , and gives the theorem. ☐
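Cantor's pairing function used in the proof admits a short concrete implementation. The sketch below (variable names are mine) gives the standard bijection from pairs of naturals onto the naturals, together with its inverse:

```python
import math

def cantor_pair(k1: int, k2: int) -> int:
    """Cantor's pairing function: a bijection from N x N onto N,
    enumerating all pairs diagonal by diagonal."""
    s = k1 + k2
    return s * (s + 1) // 2 + k2

def cantor_unpair(z: int) -> tuple:
    """Inverse of Cantor's pairing function, recovering (k1, k2) from z."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # index of the diagonal containing z
    t = w * (w + 1) // 2                   # smallest value on that diagonal
    k2 = z - t
    return (w - k2, k2)
```

Because the pairing is bijective, `cantor_unpair(cantor_pair(a, b)) == (a, b)` for all naturals, which is exactly the unique-counting property the proof relies on.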
Proof of Theorem 6.
From the definition of , when there is no diminution of data or equivalently for all ,
As the product monotonically decreases as each of its elements decreases, the above relation gives the upper bound. The sorting time of N elements is usually given by , , and can be generalized to algorithms with polynomial order . ☐
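The O(N log N) sorting bound invoked above can be checked concretely. The sketch below counts comparisons in a merge sort (chosen here for its clean worst-case bound; the paper's canonical examples are bubble sort and quick sort) and verifies that the count stays within N·⌈log₂N⌉:

```python
import math
import random

def merge_sort_counting(a):
    """Merge sort returning (sorted_list, comparison_count); the count
    is bounded by N * ceil(log2 N), an instance of the O(N log N)
    sorting time used in the theorem."""
    if len(a) <= 1:
        return a[:], 0
    mid = len(a) // 2
    left, c_left = merge_sort_counting(a[:mid])
    right, c_right = merge_sort_counting(a[mid:])
    merged, comps = [], c_left + c_right
    i = j = 0
    while i < len(left) and j < len(right):
        comps += 1                       # one element comparison
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps

rng = random.Random(0)
data = [rng.random() for _ in range(1024)]
ordered, comps = merge_sort_counting(data)
bound = 1024 * math.ceil(math.log2(1024))   # N * ceil(log2 N) = 10240
```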
Proof of Theorem 7.
From , we directly obtain . ☐
Proof of Theorem 8.
The condition (45) can be translated into the following with respect to the data sparseness u:
Expressing both sides of the formula as orders of computational time, omitting for simplicity, , and taking the logarithmic scale,
We consider the application of Chebyshev’s inequality on the left side, such that
Since as , and removing the constant coefficient , the asymptotic behaviour of (A41) can be evaluated essentially from for the left side and from for the right side, defined as follows: , with which (A41) is described as
As that becomes dominant as , since , , such that , then . This condition holds with , for both and . Note that although an explicit inequality between and exists in (A43), these converge to the same asymptotic order for all N and M, because as , and , which remain within the range of multiplication by a constant. The relations (A45) and (A46) can be proved by examining the minimum and maximum values of , , and . By considering the range of from the symmetry between N and M ( ), we derive the following monotonicity conditions with respect to N, from which we obtain: the minimum value of at ; the maximum value of at ; the minimum value of at ; the maximum value of at ; the minimum value of at ; and the maximum value of at ; with the associated convergence as . Numerical observation of the convergence between , , and is given in Figure 8a.
As both sides of (A43) converge to the same asymptotic behaviour, we apply the left side of Chebyshev’s inequality to (A40), which gives the asymptotic relation , where the coefficient is derived from the relation (A41), including the effect of the transformations and . As , and taking the sum over N, it converges to the theorem:
Numerical observation of the proof is given in Figure 8b. ☐
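Chebyshev's inequality, the key estimate in the proof above, can also be verified empirically. The following sketch (the binomial parameters and k are arbitrary choices for illustration, not those of the theorem) compares the tail probability of a binomial sample with the 1/k² bound:

```python
import random

def chebyshev_tail_vs_bound(n=100, p=0.3, k=3.0, trials=10000, seed=2):
    """Empirically estimate P(|X - mu| >= k * sigma) for X ~ Binomial(n, p)
    and return it together with Chebyshev's bound 1 / k**2."""
    rng = random.Random(seed)
    mu = n * p
    sigma = (n * p * (1.0 - p)) ** 0.5
    hits = 0
    for _ in range(trials):
        x = sum(1 for _ in range(n) if rng.random() < p)  # one binomial draw
        if abs(x - mu) >= k * sigma:
            hits += 1
    return hits / trials, 1.0 / k ** 2

tail, bound = chebyshev_tail_vs_bound()
```

For k = 3 the bound is 1/9 ≈ 0.111, while the actual binomial tail is far smaller; Chebyshev's inequality is loose but distribution-free, which is what the proof exploits.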
Proof of Theorem 9.
We consider the coordinates of as , which constitutes the e-flat geodesic between and as
The tangent vector of the e-geodesic is expressed asand the tangent vector of the m-geodesic as
Then the inner product of these tangent vectors at is expressed as , since, from the duality of the coordinates in (60),
As , , and are aligned on the m-geodesic, the relation (A60) can be translated to , with some constants and .
Now, from the definition of -divergence (65) and its dual divergence (66), the Pythagorean relations between Kullback–Leibler divergences are expressed as
When orthogonality holds between the e- and m- geodesic, for (A62), which proves the Pythagorean relations from (A63).
Finally, we prove that satisfies the minimum condition (68). By considering with a parameter as , we obtain the Pythagorean relation
Since from the definition of divergence, and hold, which means is a stationary point giving the minimum value with respect to , on the m-geodesic between and . Note that the theorem also holds when takes arbitrary finite values other than 1. ☐
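The Pythagorean relation between Kullback–Leibler divergences can be checked on a small discrete example. The sketch below is an illustrative assumption, not the diversity-portfolio setting of the theorem: it takes q as the m-projection of a 2×2 joint distribution p onto the e-flat family of product (independent) distributions, so that D(p‖r) = D(p‖q) + D(q‖r) holds exactly for any other product distribution r:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint distribution p on {0,1} x {0,1}, flattened in the order
# (0,0), (0,1), (1,0), (1,1).
p = [0.3, 0.2, 0.1, 0.4]

# q: the product of p's marginals -- the m-projection of p onto the
# e-flat family of independent (product) distributions.
px = [p[0] + p[1], p[2] + p[3]]
py = [p[0] + p[2], p[1] + p[3]]
q = [px[i] * py[j] for i in range(2) for j in range(2)]

# r: an arbitrary other point of the same e-flat family.
rx, ry = [0.6, 0.4], [0.25, 0.75]
r = [rx[i] * ry[j] for i in range(2) for j in range(2)]

# Pythagorean relation: the m-geodesic from p to q meets the e-flat
# family orthogonally, so D(p||r) decomposes exactly.
lhs = kl(p, r)
rhs = kl(p, q) + kl(q, r)
```

Here D(p‖q) is the mutual information of p, and the exact decomposition reflects the orthogonality of the e- and m-geodesics established in the proof.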
- Schwab, K. The Fourth Industrial Revolution; Crown Business: New York, NY, USA, 2017. [Google Scholar]
- Nature’s Notebook. Available online: https://www.usanpn.org/natures_notebook (accessed on 21 April 2017).
- Funabashi, M.; Hanappe, P.; Isozaki, T.; Maes, A.M.; Sasaki, T.; Steels, L.; Yoshida, K. Foundation of CS-DC e-Laboratory: Open Systems Exploration for Ecosystems Leveraging. In First Complex Systems Digital Campus World E-Conference 2015, Springer Proceedings in Complexity; Springer International Publishing Switzerland: Cham, Switzerland, 2017; pp. 351–374. [Google Scholar]
- Funabashi, M. Open Systems Exploration: An Example with Ecosystems Management. In First Complex Systems Digital Campus World E-Conference 2015, Springer Proceedings in Complexity; Springer International Publishing Switzerland: Cham, Switzerland, 2017; pp. 223–243. [Google Scholar]
- Tokoro, M. Open Systems Science: A Challenge to Open Systems Problems. In First Complex Systems Digital Campus World E-Conference 2015, Springer Proceedings in Complexity; Springer International Publishing Switzerland: Cham, Switzerland, 2017; pp. 213–221. [Google Scholar]
- Bak, P. How Nature Works: The Science of Self-Organized Criticality; Copernicus: New York, NY, USA, 1996. [Google Scholar]
- Jensen, H.J. Self-Organized Criticality; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
- Takayasu, H.; Sato, A.-H.; Takayasu, M. Stable Infinite Variance Fluctuations in Randomly Amplified Langevin Systems. Phys. Rev. Lett. 1997, 79, 966. [Google Scholar] [CrossRef]
- Scanlon, T.M.; Caylor, K.K.; Levin, S.A.; Rodriguez-Iturbe, I. Positive feedbacks promote power-law clustering of Kalahari vegetation. Nature 2007, 449, 209–212. [Google Scholar] [CrossRef] [PubMed]
- Gabaix, X. Power Laws in Economics: An Introduction. J. Econ. Perspect. 2016, 30, 185–206. [Google Scholar] [CrossRef]
- Alves, L.G.A.; Ribeiro, H.V.; Lenzi, E.K.; Mendes, R.S. Empirical analysis on the connection between power-law distributions and allometries for urban indicators. Phys. A 2014, 409, 175–182. [Google Scholar] [CrossRef]
- Michelucci, P.; Dickinson, J.L. The power of crowds. Science 2016, 351, 32–33. [Google Scholar] [CrossRef] [PubMed]
- Hanappe, P.; Dunlop, R.; Maes, A.; Steels, L.; Duval, N. Agroecology: A Fertile Field for Human Computation. Hum. Comput. 2016, 1, 1–9. [Google Scholar] [CrossRef]
- Scott, S.L. A modern Bayesian look at the multi-armed bandit. Appl. Stoch. Models Bus. Ind. 2010, 26, 639–658. [Google Scholar] [CrossRef]
- Prokopenko, M. Guided Self-Organization: Inception; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
- Rekimoto, J.; Nagao, K. The World through the Computer: Computer Augmented Interaction with Real World Environments. In Proceedings of the 8th Annual ACM Symposium on User Interface and Software Technology (UIST’95), Pittsburgh, PA, USA, 15–17 November 1995; pp. 29–36. [Google Scholar]
- Funabashi, M. IT-Mediated Development of Sustainable Agriculture Systems: Toward a Data-Driven Citizen Science. J. Inf. Technol. Appl. Educ. 2013, 2, 179–182. [Google Scholar] [CrossRef]
- Aichi Biodiversity Targets. Available online: https://www.cbd.int/sp/targets/ (accessed on 21 April 2017).
- Funabashi, M. Synecological farming: Theoretical foundation on biodiversity responses of plant communities. Plant Biotechnol. 2016, 33, 213–234. [Google Scholar] [CrossRef]
- Goodchild, M.F. Citizens as sensors: the world of volunteered geography. GeoJournal 2007, 69, 211–221. [Google Scholar] [CrossRef]
- ISC-PIF (Institut des Systèmes Complexes, Paris Île-de-France). French Roadmap for Complex Systems. ISC-PIF, 2009. Available online: http://cnsc.unistra.fr/uploads/media/FeuilleDeRouteNationaleSC09.pdf (accessed on 21 April 2017).
- Solomon, R.C. Subjectivity. In Oxford Companion to Philosophy; Honderich, T., Ed.; Oxford University Press: Oxford, UK, 2005; p. 900. [Google Scholar]
- Gillespie, A.; Cornish, F. Intersubjectivity: Towards a Dialogical Analysis. J. Theory Soc. Behav. 2009, 40, 19–46. [Google Scholar] [CrossRef]
- Galaxy Zoo. Available online: https://www.galaxyzoo.org/ (accessed on 21 April 2017).
- iNaturalist. Available online: http://www.inaturalist.org/ (accessed on 21 April 2017).
- Rowell, D.L. Soil Science: Methods & Applications; Wiley: New York, NY, USA, 1994. [Google Scholar]
- Kitano, H. Artificial Intelligence to Win the Nobel Prize and Beyond: Creating the Engine for Scientific Discovery. AI Mag. 2016, 37, 39–50. [Google Scholar]
- Ioannidis, J.P. Why most published research findings are false. PLoS Med. 2005, 2, e124. [Google Scholar] [CrossRef] [PubMed]
- Linked Data. Available online: http://linkeddata.org (accessed on 21 April 2017).
- Akao, Y. QFD: Quality Function Deployment—Integrating Customer Requirements into Product Design; Productivity Press: New York, NY, USA, 2004. [Google Scholar]
- Hawker, G.A.; Mian, S.; Kendzerska, T.; French, M. Measures of adult pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), McGill Pain Questionnaire (MPQ), Short-Form McGill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (SF-36 BPS), and Measure of Intermittent and Constant Osteoarthritis Pain (ICOAP). Arthritis Care Res. 2011, 63, 240–252. [Google Scholar]
- Funabashi, M. Network Decomposition and Complexity Measures: An Information Geometrical Approach. Entropy 2014, 16, 4132–4167. [Google Scholar] [CrossRef]
- Rudin, W. Fourier Analysis on Groups; Interscience Tracts in Pure and Applied Mathematics, No. 12; Wiley: New York, NY, USA, 1962. [Google Scholar]
- Symmetrical 5-Set Venn Diagram. Available online: https://commons.wikimedia.org/wiki/File:Symmetrical_5-set_Venn_diagram.svg (accessed on 21 April 2017).
- Funabashi, M. Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network. Int. J. Bifurc. Chaos 2015, 25, 1550054. [Google Scholar] [CrossRef]
- Doya, K.; Ishii, S.; Pouget, A.; Rao, R.P.N. Bayesian Brain: Probabilistic Approaches to Neural Coding; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
- Yoshimura, J.; Clark, C.W. Individual adaptations in stochastic environments. Evol. Ecol. 1991, 5, 173–192. [Google Scholar] [CrossRef]
- Harte, J. Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
- Amari, S.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society: Providence, RI, USA, 2007. [Google Scholar]
- Rao, C.R. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91. [Google Scholar]
- Murota, K. Matrices and Matroids for Systems Analysis; Springer: Berlin, Germany, 2000. [Google Scholar]
- Roy, S.S.; Dasgupta, R.; Bagchi, A. A Review on Phylogenetic Analysis: A Journey through Modern Era. Comput. Mol. Biosci. 2014, 4, 39–45. [Google Scholar] [CrossRef]
- Brooks, R.A. Intelligence without representation. Artif. Intell. 1991, 47, 139–159. [Google Scholar] [CrossRef]
- Gibson, J.J. The Ecological Approach to Visual Perception; Houghton Mifflin: Boston, MA, USA, 1979. [Google Scholar]
Figure 1. Schematic representation of the inter-subjective objectivity model. (a) Relations between two subjectivities A and B, objectivity, inter-subjectivity between A and B, subjective–objective unity for A and B, and inter-subjective objectivity are depicted as inclusion relations between each other set. (b) Development of inter-subjective objectivity as effective measurements of citizen science. As the inter-subjectivity increases along with the training of subjective–objective unity and inter-subjective feedbacks, the accuracy and reproducibility of measurement based on subjectivity can be assured by the convergence to inter-subjective objectivity.
Figure 2. Schematic representation of buoy–anchor–raft model. Buoy, raft, anchor, and connection rope refer to subjectivity, inter-subjectivity, objectivity, and subjective–objective unity, respectively. Concrete real-world examples are given in Table 1.
Figure 3. Schematic representation of complexity measures as non-linear feature space and search function as its inverse functions. (a) Utility characteristics of a complex system, or complexity measure in general terms, is expressed with a complex configuration in parameter space. Parameters can also represent other complexity measures. (b) Complexity measures transform parameter space into non-linear feature space, which provides easier interpretation by sorting the order of a given utility. The inverse functions of complexity measures therefore correspond to search functions with respect to the search condition on utility.
Figure 4. Numerical example of convolution . For two kinds of probability measure (green distribution) and (blue distribution) on (supported by black rug), the convolution with are shown with different colors based on random sampling of points from pairs of and . The case of is simulated, which shows the canonical convergence towards normal distribution following the central limit theorem with , where as defined in (33) and (A13). For simplicity, is adjusted to 0 by the symmetric selection of , , and r.
Figure 5. Schematic representation of the triplet order algorithm that calculates the total order of three observations with respect to the complexity defined on the pair-wise commonality between them. Three observations A, B, and C are expressed as vertices of a triangle on a two-dimensional surface, whose edge lengths A–B, B–C, and A–C represent the commonality of each vertex pair. For simplicity, the triangles are projected as regular triangles, but the actual edge lengths generally differ, which provides the total order of edges. The six case statements of the algorithm are shown separately. Given the total order between the edges in blue magnitude relation, the corresponding total order of observations is depicted with orange axes at the side of each triangle. Orange axes superimposed with triangles signify that by orthogonally projecting the vertices onto them, the total order of vertices is obtained, whose generalization is developed in Section 3.4. This holds for arbitrary three positive values of edge length without the constraint of the triangular inequality, by considering an appropriate projection of the triangles onto a non-Euclidean surface.
Figure 6. Integration of two commonality orders. (a) The correspondence between commonality orders I and (orange arrows) can be described as the permutation between N observations (black circles), providing the topology of I–I matching (green dotted line) and I–N compromise (blue dotted line); (b) Affine space with respect to the commonality orders I and as coordinate system (orange arrows) for the resolution of I–N compromise. The I–N mean commonality order (red solid arrow) can be calculated from the pair-wise order algorithm (Section 3.3) applied on the commonality orders I and , which makes the I–I matching identical to the I–I dimension (green arrow) and sets the mean order to I–N compromise. One I-N resolution dimension is required to resolve one I–N compromise (blue arrow). The implicit structure of the integrated commonality order with continuity assumption takes a complex form reflecting I–N compromises (red dotted arrow as an example), which corresponds to the complex utility configuration in Figure 3a; (c) The general case with an arbitrary number of I–N compromises. Total commonality space of dimensions is divided between I–N resolution dimensions (blue arrows) and I–I dimensions (green arrow), between which I–N mean commonality order can be defined (red arrow). axes of I–N resolution dimensions are required to resolve k I–N compromises (blue arrows). Taking the I–I dimensions and I–N resolution dimensions as Affine coordinates, the integrated commonality order is projected onto the I–N mean commonality order as a simplest sorted order of utility, which corresponds to Figure 3b.
Figure 7. Topological hierarchy of commonality between observations. As an example, five observations A, B, C, D, E are depicted with correspondence to the commonality order of each topological subset. The Venn diagram on the left represents the commonality structure within observation probability database on variable ( in Section 4.2), where coincident observation is superimposed. The maximum commonality order is the projection from these topological subsets to the natural numbers on the right axis, describing the number of matching observations. Venn diagram cited from .
Figure 8. Numerical observation of the proof of Theorem 8. (a) Chebyshev’s inequality (A41) and asymptotic convergence to (A44) with respect to , . Y-axis is plotted with log scale. The equality in (A41) is given at ; (b) Behaviour of , , and with respect to . For visibility, the Y-axis scale is given as that represents smaller Y value to the bottom, and Y-axis label shows the value of . The surface below the solid line represents the convolution multiplied by L, . The mean value of solid line therefore corresponds to the upper limit of u that satisfies the polynomial constraint (45) with respect to given L. were used for the simulation.
Figure 9. Information geometrical optimization of diversity strategy portfolio with respect to actual distribution , short-term management objective , and long-term sustainability . On a dual-flat statistical manifold based on Fisher information metric, each distribution is represented as a point (black circles). The m-geodesic is depicted with a blue line, while the e-geodesic is shown with a red line, which orthogonally cross at the optimized strategy . Topological correspondence between complexity measure (aligned on left orange arrow) and diversity strategy portfolio ( and ) is shown with dotted lines with respect to the magnitude relation.
Figure 10. Results of inter-subjective and subjective–objective commonality orders in citizen observation of biodiversity. Seven people represented with numerical ID are aligned with commonality orders (a) based on inter-subjectivity; and (b) based on subjective–objective unity, which showed a 3.92% residual error probability regarding the rejection of the random order distribution hypothesis with respect to the binomial test (38).
Table 1. Examples of buoy, raft, and anchor in various social systems and scientific domains. Examples are not comprehensive, but a partial list of typical data from the recently increasing public availability.
| | Economy | Judiciary | Biodiversity Record | Medical Treatment |
|---|---|---|---|---|
| Buoy | Demand, satisfaction | Sense of justice, guilt | Visual identification of species | Pain, psychological state |
| Raft | Price, exchange rate | Law, court decision | Identification with voting | Diagnosis, prescription |
| Anchor | Goods abundance | Evidential matter | DNA sequences | Physiological markers |
Table 2. Correspondence between buoy–anchor–raft model and computational variables in this article.
|Buoy||,||,||Data contained in vertices V||Com. order I and between N objects||Observations A, B, C, D, E||, , , , ,|
|Raft||,||,||Edge attribute of E||Com. order I and b/w N observers, TDC, I-I and I-N res. dim.||, ,|
Table 3. Algorithmic complexity for the calculation of commonality orders. With respect to the maximum commonality order in (39), the exhaustive number of combinations using the observation probability database of size N and the time required for sorting the commonality measure are shown. Sorting time is based on the worst-case performance of canonical algorithms such as bubble sort and quick sort (polynomial degree ). denotes Landau's asymptotic notation. and require the maximum calculation and sorting time. Note that the total computation time is upper-bounded by the sorting process ( ) rather than by the combinatorics of commonality ( ), though the calculation time of each commonality measure, such as the convolution , should be further considered in actual implementation.
|Maximum Commonality Order||Number of Combination||Sorting Time ()|
© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).