
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

A wealth of current research in microengineering aims at fabricating devices of increasing complexity, notably by (self-)assembling elementary components into heterogeneous functional systems. At the same time, a large body of robotic research, known as swarm robotics, pursues the opposite trend of minimalism, entrusting functionality to the collective behavior of many simple agents.

On one side of the continuum, robotic agents are normally macroscopic, and the swarm-robotic trend of minimalism strives to progressively reduce their individual size and complexity while preserving their collective functionality.

On the other side of the continuum, an important body of precision manufacturing's research aims at producing very-small systems of increasing complexity. One promising route to fabricate complex functional systems is the autonomous self-organization and buildup of structures from their simpler subunits. As a prominent example, active and passive M/NEMS devices come in large quantities owing to batch fabrication technologies [

In our view, smart minimal particles (SMPs) represent the natural convergence locus of such opposing tendencies observed in M/NEMS technology and swarm robotics.

We define SMPs as mobile, sub-millimeter sized, purely reactive agents that compensate their lack of on-board resources with their specifically engineered reactivity to external physical stimulation as well as ability to scavenge energy and information from their local environment. SMPs may be subject to both global and local physical influences, yet they are only capable of local interactions. Influences on SMPs can derive from specific stimuli (e.g., wireless actuation by frequency-selective magnetic induction [

SMPs blur the boundary between M/NEMS and swarm robotics by pointing toward an ideal continuum of sophistication between passive micro/nanofabricated components and active robotic agents.

In this paper, we argue the need for and the proposal of SMPs (i) by outlining the mentioned convergence toward SMPs with experimental and theoretical examples drawn from both the M/NEMS' and modular robotics' literature on self-assembly and aggregation, (ii) by illustrating our suggested SMP perspective with specific respect to the modeling of the dynamics of self-assembling SMPs, and (iii) by adopting a shared, hybrid terminology where possible.

The paper is structured as follows. Section 2 briefly illustrates the varieties of self-assembly possible at most scales before specifically focusing on its static type as the most pertinent to SMPs (though far from exhausting their potentialities). Examples are drawn from both the modular robotics (Section 2.1) and the M/NEMS literature (Section 2.2); quasi-statics, transient dynamics and collective dynamics of self-assembly are also discussed (Section 2.3). The collective dynamics of SMPs is then addressed in detail in Sections 3 and 4, which critically review the main, sometimes analogous models proposed so far for passive particles—

In its broadest sense, self-assembly (SA) denotes the spontaneous organization of pre-existing components into ordered patterns or structures through local interactions, without external direction.

More importantly, SA processes can be broadly classified according to the role played by energy and to the level of pro-activity of the particles to achieve aggregation [

In the following sections, recent results concerning experimental and modeling SA activities in robotics and M/NEMS are reviewed.

Achieving SA and aggregation are important tasks in distributed and modular robotics [

Probabilistic models were developed for the aggregation and SA of mobile robots [

Aggregation of passive objects mediated by mobile robots [

SA of sub-centimeter sized robots is also being addressed. Donald and co-workers, for instance, demonstrated untethered, electrostatically-actuated MEMS microrobots capable of controlled motion and docking.

Consistently with our SMP perspective, the ongoing miniaturization of robotic modules may thus further decrease the gap, both in size and performance, with M/NEMS—whose SA is reviewed next.

For the assembly of micro- and nanosystems, a wealth of static templated SA processes were proposed and demonstrated, as detailed in several recent reviews [

SA entails several correlated phenomena at different levels of detail, each being possibly subject to modeling. Models of sSA of passive particles mainly focus on three main aspects: the physics of the particle interactions, the statistical properties of large particle ensembles, and the dynamics of the assembly process.

Analytical models (based on first-order approximations and/or first-principles equations) and finite-element numerical simulations, often coupling multiple physical domains, are well suited to physical modeling. Due to scaling laws, the hierarchy of magnitudes of physical forces at sub-millimetric scales, where surface phenomena dominate, is different from that at the macroscale [
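To make the scaling argument concrete, the following sketch compares the textbook scaling of a body force (weight, proportional to L^3) with that of a surface force (capillarity, proportional to L). The prefactors are rough order-of-magnitude assumptions for a water/solid system, not values taken from the cited literature.

```python
# Illustrative comparison of how forces scale with characteristic length L.
# Exponents are textbook scaling laws (weight ~ L^3, capillary force ~ L^1);
# prefactors are rough order-of-magnitude assumptions.

RHO = 1e3        # density of the part, kg/m^3 (assumed)
G = 9.81         # gravitational acceleration, m/s^2
GAMMA = 72e-3    # surface tension of water, N/m

def weight(L):
    """Gravitational force on a cube of side L: scales as L^3."""
    return RHO * G * L**3

def capillary(L):
    """Capillary force on a wetted perimeter of size L: scales as L^1."""
    return GAMMA * L

for L in (1e-2, 1e-3, 1e-4, 1e-6):  # 1 cm down to 1 um
    ratio = capillary(L) / weight(L)
    print(f"L = {L:8.0e} m  capillary/weight = {ratio:10.3e}")
```

The ratio crosses unity between the centimeter and the millimeter scale, which is the crossover the text alludes to: below it, surface forces dominate body forces.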

Statistical mechanics is the discipline most devoted to the probabilistic modeling of large ensembles of particles and their collective properties, including their dynamics [

The main models of the SA dynamics of passive particles proposed in the literature belong to one of two general approaches, namely: (1) analytical, rate equation-based; and (2) numerical, agent-based. They are illustrated and exemplified in the following sections.

A master equation describes the time evolution of the occupation probability p_i(t) of each state i of a system in terms of the transition rates a_ij between pairs of states i and j: dp_i(t)/dt = Σ_j [a_ji p_j(t) − a_ij p_i(t)].

The master equation derives from a deterministic description of the time evolution of the probability distribution of an underlying, memoryless stochastic process.

Very common in statistical physics and chemistry, master equations were also adopted in models of the collective dynamics of smart particles. Before introducing the latter models, we shall review their most fundamental assumptions:

The system is well mixed, i.e., spatially homogeneous, so that particle positions need not be tracked explicitly.

The reactions are memoryless, i.e., the underlying process is Markovian.

The reaction probabilities are constant in time and identical for all particles of the same species.

Only bi-particle events are considered, both for assembly (producing a larger aggregate out of two colliding units) and for disassembly (splitting an aggregate into two smaller units).

Such assumptions are discussed in Section 3.1.4.

In their 1995 work, Hosokawa et al. modeled the dynamics of a self-assembling system by analogy with chemical kinetics. (The same authors later applied the same model to a simpler self-assembling system, composed of flat sub-millimetric particles that floated at the water-air interface and interacted by capillary flotation forces.)

The system was composed of a uniform population of centimeter-sized, polyurethane triangles endowed with neodymium magnets along two of their sides. The flat particles were put in a rotating box which constrained their random motion to a vertical plane. Since the triangles were equilateral, the assembly of exactly six of them formed a full hexagon, and all the intermediate assembly products were known a priori.
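A minimal mean-field sketch in the spirit of Hosokawa's chemical-kinetics analogy can be written down for the hexagon system. The single, size-independent rate constant K used below is an illustrative assumption (the original model derived size-dependent bonding probabilities from the cluster geometries).

```python
# Mean-field rate equations in the spirit of Hosokawa's chemical-kinetics
# analogy: clusters of size i and j merge irreversibly into one of size
# i + j, up to the complete hexagon of six triangles.  The single rate
# constant K is an illustrative assumption.

K = 1.0          # illustrative rate constant
DT = 1e-3        # Euler integration step
STEPS = 20000
MAX_SIZE = 6     # a full hexagon contains six triangles

# n[i] = number density of clusters made of i triangles (index 0 unused)
n = [0.0] * (MAX_SIZE + 1)
n[1] = 1.0       # start from unit density of free triangles

for _ in range(STEPS):
    dn = [0.0] * (MAX_SIZE + 1)
    for i in range(1, MAX_SIZE):
        for j in range(i, MAX_SIZE + 1 - i):
            rate = K * n[i] * n[j]
            dn[i] -= rate       # a size-i cluster is consumed ...
            dn[j] -= rate       # ... together with a size-j cluster ...
            dn[i + j] += rate   # ... producing one cluster of size i + j
    for s in range(1, MAX_SIZE + 1):
        n[s] += DT * dn[s]

total = sum(s * n[s] for s in range(1, MAX_SIZE + 1))
print(f"hexagon yield: {6 * n[6] / total:.3f}, mass conserved: {total:.3f}")
```

Each merging event conserves the total number of triangles by construction, which is a useful sanity check for any such rate-equation model.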

In 1995, Verma

Given the initial populations of free parts and of binding sites, together with the corresponding assembly and disassembly rate constants, the model yields the time evolution of the number of assembled parts and its limiting values in the special cases of vanishing disassembly or vanishing free populations.

Recently, Mastrangeli et al. extended this class of rate-equation models to account for two distinct disassembly pathways, namely spontaneous self-disassembly and collision-induced (kinetic) disassembly.

Solving the steady state (SS) for the normalized disassembly rates χ_1 ≡ k_D1/k_A and χ_2 ≡ k_D2/k_A yields the asymptotic yield of the process.

The special cases, including only one of the two disassembly pathways, can be recovered by setting χ_1 = 0 or χ_2 = 0, respectively.

In 2005, Zheng and Jacobs demonstrated a three-dimensional, molten solder-driven and shape matching-directed fluidic process to self-assemble submillimetric LEDs onto glass carriers. The associated rate-equation model tracks the populations n_C of free carriers and n_L of free LEDs, whose pairing into assemblies n_A is governed by the assembly rate constant k_A.

Mastrangeli et al. modeled this process with rate equations including both assembly (rate constant k_A) and disassembly (rate constant k_D). The time evolution of the number of assemblies is expressed in terms of r_1 and r_2, with r_1 ≥ r_2 > 0 being the roots of a second-order polynomial in the populations n_L and n_C and in the rate constants k_A and k_D.

When no disassembly is possible (k_D = 0), the roots reduce to the initial populations, r_1 = n_L(0) and r_2 = n_C(0).
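The steady state of such reversible, two-population rate models can be checked numerically. In the following sketch the rate constants KA and KD and the initial populations are illustrative assumptions; the quadratic steady-state condition follows directly from equating the assembly and disassembly fluxes.

```python
import math

# Reversible heterogeneous assembly (LEDs L onto carriers C) in the spirit
# of the rate-equation models reviewed above.  KA and KD are illustrative
# rate constants (assumptions); na(t) is the number of assembled pairs.

KA, KD = 1.0, 0.2       # assumed assembly / disassembly rate constants
NL0, NC0 = 60.0, 30.0   # assumed initial populations of LEDs and carriers
DT, STEPS = 1e-4, 400000

na = 0.0
for _ in range(STEPS):
    # d(na)/dt = KA * (free LEDs) * (free carriers) - KD * (assemblies)
    na += DT * (KA * (NL0 - na) * (NC0 - na) - KD * na)

# At steady state KA*(NL0 - x)*(NC0 - x) = KD*x, a quadratic whose smaller
# root r2 is the physically reachable yield (the larger root exceeds NC0).
b = NL0 + NC0 + KD / KA
r2 = (b - math.sqrt(b * b - 4 * NL0 * NC0)) / 2
print(f"integrated na = {na:.4f}, analytic root r2 = {r2:.4f}")
```

The integration converges to the smaller root of the quadratic, consistent with the role of r_2 as the asymptotic yield in the models above.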

The rate constants appearing in the previous models lump many interacting factors. Though some of them may be experimentally determined (e.g., mean (dis)assembly time), a complete theoretical model should be able to derive such parameters from first principles describing e.g., the physics of the assembly interactions. On the other hand, such lumping provides a simplification that prevents the models from being application-specific. This, together with the focus on average behavior implicit in the mean-field approach, accounts for the models' high abstraction level.

Multi-particle collision events are not considered. These are less and less probable as the number of parts involved grows; in general, reactions simultaneously involving more than two particles are neglected.

The models predict higher assembly speed and yield for higher particle redundancy. Nevertheless, there could be a practical limit on the maximum number of parts present in a bounded assembly space. Too high a particle density may increase the chance of damaging collisions, which irreversibly decrease the yield. It may also affect the transport and mixing of the particles themselves, thus altering the assembly rates and making them density-dependent. Therefore, practically speaking, the assumptions listed in Section 3.1 are valid for ensembles of sparse particles, i.e., whose occupied (excluded) volume is a small fraction of the assembly space.

Master equations considering

Importantly, all the evoked concepts of particle density, excluded volume and diffusion entail the spatiality of the system, which mean-field models abstract away.

Modeling based on the representation of the behavior of system agents is a natural, bottom-up framework to capture the properties of DSs. An agent can be identified with an actual element of the system under consideration and/or with one of its variables. An agent-based model (ABM) describes the system through the local states, behavioral rules and interactions of its constituent agents.

In 2009, Mermoud et al. proposed a spatial ABM of stochastic pair-wise aggregation in which each dimer is characterized by the orientational state vector (θ_1, θ_2) ∈ [0, 2π)^2 of its two building blocks, and the probability p_b for a dimer to break is derived from the relative alignment of its components.

As a result, the better the alignment of the two building blocks, the lower their breakage probability p_b.

Mastrangeli

Mastrangeli built a NetLogo model to simulate single realizations of the actual SA process; while simplified, it still elicits important insights into the system dynamics. The assembly space, bounded by hard walls, closely reproduces the actual one; the main model parameters are the critical contact surface angle θ_CCS, which conditions successful binding upon collision, and the assembly time t_A.

The model was used to simulate the effects of the populations n_C and n_L and of θ_CCS on SA performance.

Agent-based modeling is particularly suitable to the description of systems involving agents whose behaviors are non-linear (e.g., characterized by non-linear interactions, conditional rules, thresholds), non-Markovian (including memory, path-dependence, hysteresis, adaptation), or of systems in which spatiality and communication play an important role [

An ABM can produce

Additionally, and in spite of its reductionism, an ABM can also capture a system's

Finally, an ABM is essentially a

The following sections illustrate models that can be partly considered extensions of those discussed in Section 3, as they can also capture programmed particle actions (e.g., planned structure formation) and thus span wider fields.

Originating from chemistry, stochastic reaction networks describe the time evolution of a well-mixed set of interacting species as a sequence of random reaction events.

To specify a Markov process, one provides the transition rates a_ij between every pair of states i and j.

The probability distribution at time t, given the initial distribution at time t_0, is then obtained by integrating the master equation forward from t_0.

In some cases, the state can be conveniently expressed by the populations n_s of each species s ∈ S, with transitions defined by a set of reactions r ∈ R, each endowed with its own rate constant c_r.
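As a minimal illustration of propagating a probability distribution through a Markov process, consider a two-state chain (states "single" and "bound", here in discrete time) with an assumed transition matrix; iterating p(k+1) = p(k)P drives the distribution to its stationary value.

```python
# Propagating the probability distribution of a two-state Markov chain
# (states: "single" and "bound").  The transition probabilities are
# illustrative assumptions, not taken from any specific model above.

P = [[0.9, 0.1],   # P[i][j] = probability of moving from state i to state j
     [0.3, 0.7]]

def step(p, P):
    """One application of the transition matrix: p(k+1) = p(k) P."""
    return [sum(p[i] * P[i][j] for i in range(len(p)))
            for j in range(len(P[0]))]

p = [1.0, 0.0]                 # initially all probability mass on "single"
for _ in range(200):           # iterate toward the stationary distribution
    p = step(p, P)

# The stationary distribution solves pi = pi P; here pi = (0.75, 0.25).
print(f"p after 200 steps: ({p[0]:.4f}, {p[1]:.4f})")
```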

A reaction r transforms the current state by consuming its reactant particles and producing its products. Its propensity is a_r = c_r h_r, where c_r is the stochastic rate constant of reaction r and h_r is the number of distinct combinations of reactant particles available in the current state.

Hosokawa's idea of identifying intermediate assembly products with state variables (see Section 3.1.1) can be partly inscribed in the framework of stochastic reaction networks. Explicitly inspired by Hosokawa's work, Klavins and co-workers used graph grammars to specify, program and analyze the self-assembly of robotic parts.

A single realization of a reaction process reduces to tracing a sample path through the system's set of states starting from specific initial conditions. Gillespie derived an exact numerical algorithm for the simulation of such stochastic processes. Let t^+ be the time at which the next reaction occurs, as counted from instant t_0, and n_0 the state at t_0. Then:

the waiting time t^+ − t_0 is exponentially distributed, with parameter a(n_0) = Σ_r c_r h_r(n_0);

the conditional probability for the next reaction to be r, given the state n_0, is c_r h_r(n_0)/a(n_0);

the waiting time t^+ − t_0 and the identity of the next reaction are conditionally independent.

The algorithm can then be described as follows: starting from the initial condition n_0 at time t_0, sample the waiting time t^+ − t_0 from the exponential distribution of parameter a(n_0); if t^+ exceeds the simulation horizon t_max, stop; otherwise, select the next reaction r with probability c_r h_r(n_0)/a(n_0), update the state and the time accordingly, and iterate.
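The loop just described can be sketched with Gillespie's direct method for an illustrative assembly/disassembly reaction pair; the rate constants and populations below are assumptions, not values from the reviewed systems.

```python
import math
import random

# Gillespie's direct method for the illustrative reaction set
#   2 S -> D   (assembly,    stochastic rate C_A)
#   D  -> 2 S  (disassembly, stochastic rate C_D)

random.seed(2)
C_A, C_D = 0.002, 0.1      # assumed stochastic rate constants
n_s, n_d = 200, 0          # singles and dimers
t, t_max = 0.0, 50.0

while t < t_max:
    # Propensities a_r = c_r * h_r, with h_r the number of distinct
    # reactant combinations in the current state.
    a_assembly = C_A * n_s * (n_s - 1) / 2
    a_disassembly = C_D * n_d
    a_total = a_assembly + a_disassembly
    if a_total == 0:
        break
    # Waiting time to the next reaction: exponential with parameter a_total.
    t += -math.log(random.random()) / a_total
    if t >= t_max:
        break
    # Pick the next reaction with probability proportional to its propensity.
    if random.random() < a_assembly / a_total:
        n_s -= 2
        n_d += 1
    else:
        n_s += 2
        n_d -= 1

print(f"t = {t:.2f}, singles = {n_s}, dimers = {n_d}")
```

Note that the invariant n_s + 2 n_d is conserved by both reactions, which provides a simple correctness check on the state updates.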

Gillespie developed several computationally more efficient variants of the original numerical algorithm, as well [

With his concept of Brownian agents, Schweitzer proposed a general, statistical-physics-grounded framework for the modeling of self-organization in distributed systems.

Each agent i is described by a set of state variables u_i^(k), whose dynamics results from the superposition of deterministic (f_i^(k)) and stochastic (F_i^stoch) influences: du_i^(k)/dt = f_i^(k) + F_i^stoch. The deterministic term f_i^(k) subsumes all specifiable influences on the state variables, such as non-linear, rule-based interactions with other agents, external conditions as defined by control parameters (e.g., dissipative forces, forces deriving from external potentials, in/outflux of resources), and internal time-dependent agent dynamics. On the other hand, F_i^stoch sums up all the remaining influences not specified at the agent level, represented as a random term.
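A one-dimensional Langevin dynamics of this kind can be sketched with an Euler–Maruyama integration; the friction and noise parameters below are illustrative assumptions, with linear friction as the deterministic part and Gaussian white noise as the stochastic part.

```python
import math
import random

# Euler-Maruyama integration of Langevin-type Brownian-agent dynamics:
#   dv/dt = -GAMMA * v + noise,   dx/dt = v
# GAMMA (friction) and SIGMA (noise strength) are illustrative assumptions.

random.seed(7)
GAMMA = 1.0      # friction coefficient (deterministic part)
SIGMA = 0.5      # noise strength (stochastic part)
DT = 1e-3
STEPS = 5000

x, v = 0.0, 0.0
for _ in range(STEPS):
    # Deterministic drift plus a stochastic kick of variance SIGMA^2 * DT
    v += DT * (-GAMMA * v) + SIGMA * math.sqrt(DT) * random.gauss(0.0, 1.0)
    x += DT * v

print(f"final position: {x:.4f}, final velocity: {v:.4f}")
```

On top of this passive baseline, agent sophistication is added by extending the deterministic term with rule-based interactions, exactly as in Schweitzer's constructive design.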

The Brownian agent approach is focused on cooperative agent interactions, particularly self-organization and aggregation, instead of on individual actions. Brownian agents are thus minimal, reactive agents whose relevance emerges at the population level.

Upon the baseline of purely passive agents undergoing Brownian motion, Schweitzer adopts a minimalistic, constructive agent design, in which the complexity and potentialities of each agent are progressively augmented by the cumulative addition of built-in features and capabilities. For each subsequent level of agent sophistication, the simplest set of rules is used and a homogeneous population of agents is simulated to investigate its collective properties and potential behavior(s). This way, Schweitzer shows that Brownian agents can be progressively enabled to perform or reproduce various collective patterns, e.g., deterministic chaotic, intermittent or swarming types of motion; simulations of aggregation and structure formation processes of physico-chemical systems; self-organization of stable and adaptive networks; trail formation, and consequent reinforced biased random walk; and social aggregation phenomena, like urban sprawl and opinion formation, as well [

As an interesting alternative, Milutinović proposed a framework linking directly the microscopic, deterministic behavior of agents to the macroscopic dynamics of their populations, as inspired by the kinetic gas theory [

Stochastic reaction networks' abstraction level lies between that of mean-field differential equations and of agent behavior-oriented ABMs. Particularly, they are not as spatially embedded as the latter; yet, in contrast to more abstract mean-field models, they can include network topologies. In fact, their strength resides in taking advantage of the developing tools of network theory [

Brownian agents provide an important example of a reductionist but nevertheless constructive approach to describe a wide range of DSs at several scales and increasing levels of sophistication and complexity—in a way reminiscent of Braitenberg's cumulative design of vehicles [

In Milutinović's framework, the collective macroscopic dynamics is directly connected to the microscopic agent dynamics through the evolution of the associated probability distribution, while the actual interactions among agents are lumped, and their complexity hidden, into the stochastic components of the event generators. It represents an interesting attempt at including both macroscopic (mean-field) and microscopic (individual agent) modeling levels within the context of hybrid automata control; it is also amenable to analytical treatment, especially in the Markovian approximation.

Finally, stochastic reaction networks and hybrid automata can also be readily included into an even more comprehensive modeling framework: the multi-level modeling approach presented in the next section.

One of the main difficulties in modeling ensembles of SMPs, and particularly those involving aggregation and self-assembly, is the inherent randomness and hybrid nature of their dynamics. For instance, while a robot's controller is essentially a deterministic, discrete entity, it has to interact with a noisy, continuous environment. At the microscale, a particle's binding site may or may not be occupied (discrete state variable), and this may depend on the temperature of the system (continuous parameter), as in the case of DNA-mediated binding sites.

These challenges motivate the combination of multiple levels of abstractions—ranging from detailed, submicroscopic models up to more general, macroscopic ones—into a consistent multi-level modeling framework. On the one hand, one needs submicroscopic models that are able to capture the complete state of particles, including spatiality and embodiment (e.g., pose, shape, surface properties). On the other hand, one is also interested in models that can yield accurate numerical predictions of collective metrics, and investigate, possibly in closed form, macroscopic properties such as the sizes, types, and proportions of the resulting assemblies. Multi-level modeling allows the fulfillment of both requirements in a very efficient way by building up models of increasing levels of abstraction in order to capture the relevant features of the system.

Within this context, we classify models of distributed systems into three main categories: (i) submicroscopic, physics-based models; (ii) microscopic, agent-based models; and (iii) macroscopic, mean-field models.

Originally, the multi-level modeling methodology was developed in the context of swarm robotics [

The most detailed level of modeling is provided by physics-based simulations, which bridge the gap between model and reality by accurately capturing each system particle as well as its sub-components. These simulations faithfully account for a subset of physical phenomena (e.g., capillarity, hydrophobic interaction, electromagnetic forces), which are considered most relevant to the dynamics of the system. The strength of this type of models is their direct anchoring to reality, even though the number of their parameters tends to grow rapidly with the number of physical phenomena to be modeled. Also, while these models enable, in principle, the direct visualization of a particle's behavior, their heavy computational requirements limit their applicability to systems that involve a limited number of particles. Examples include finite-element numerical simulations and molecular dynamics (representing the most radical physical ABM) for M/NEMS (see Section 2.3), and case-specific software faithfully reproducing robot behaviors, such as Webots [

Even though microscopic models capture the state of each individual particle in the system, their state vector is significantly smaller than that of their submicroscopic counterparts. This state reduction is typically obtained through appropriate aggregation of the state variables, which can be more or less aggressive as a function of the desired level of detail. Section 3.2 described two such spatial models. Spatial models offer an interesting modeling framework for multi-agent systems, but they can be expensive both in terms of memory and computation. Indeed, these models store the position and the orientation of each agent as well as the precise structure of each aggregate. Also, they must determine whether a collision occurred, or not, at each iteration and for each pair of agents.

One can go even further in the process of abstracting details that are not significant to the dynamics of the process under investigation. Hereafter we describe a Monte Carlo-based version of the ABM presented in Section 3.2.1 which does not represent the spatiality of the system.

This model assumes that agents aggregate pair-wise to form dimers only, and it keeps track of only one property of the dimers, that is, the relative alignment of their building blocks. Since the model is non-spatial, collisions are no longer deterministic, but are instead randomly sampled from a Poisson distribution whose mean depends on the encountering probability p_c and on the current number N_s of single agents; each newly formed dimer i is assigned a random orientational state (θ_{1,i}, θ_{2,i}) and appended to the list of aggregates Ξ_a.

One subtlety in building non-spatial models of aggregation is to accurately capture the encountering probabilities. Here, we assume a constant encountering probability p_c per pair of agents, independent of the number N_d of dimers already formed and of the total population N_tot.

The Monte Carlo procedure iterates as follows:

1. Initialize the populations: set the number of single agents N_s(0) = N_0 and the numbers of aggregates of any size N_{2,3,…} = 0.

2. Sample the number n_c of new collision events from a Poisson distribution of mean p_c N_s(k)^2.

3. Generate and append to the aggregate list Ξ_a the n_c new dimers d_1, …, d_{n_c}, assigning to each dimer d_i a random orientational state (θ_{1,i}, θ_{2,i}).

4. Generate a random vector of |Ξ_a| uniform samples, one per aggregate in Ξ_a.

5. Compute the breakage probability p_b(i) of each aggregate i ∈ Ξ_a from the alignment of its components, and mark as leaving those aggregates whose random sample falls below p_b(i).

6. Remove the leaving dimers from Ξ_a, return their components to the single-agent population, update N_s(k+1) accordingly, and iterate from step 2.
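The iteration just listed can be sketched as follows; the encountering probability, the iteration count and the breakage law used here are illustrative assumptions (the actual model derives p_b from the physics of the interaction).

```python
import math
import random

# A minimal non-spatial Monte Carlo sketch of the pair-wise aggregation
# process outlined above.  P_C, N0 and the breakage law are assumptions.

random.seed(11)

def poisson(lam):
    """Knuth's method for sampling a Poisson-distributed integer."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

P_C = 1e-5            # assumed per-iteration encountering probability
N0 = 500              # initial number of single agents
n_s = N0
dimers = []           # list of (theta1, theta2) orientational states

for _ in range(2000):
    # 1. Sample the number of new collisions (each consumes two singles).
    n_c = min(poisson(P_C * n_s * n_s), n_s // 2)
    # 2. Create the new dimers with random orientations in [0, 2*pi).
    for _ in range(n_c):
        dimers.append((random.uniform(0, 2 * math.pi),
                       random.uniform(0, 2 * math.pi)))
    n_s -= 2 * n_c
    # 3. Break each dimer with a probability increasing with misalignment.
    survivors = []
    for (t1, t2) in dimers:
        misalignment = abs(math.remainder(t1 - t2, 2 * math.pi)) / math.pi
        p_b = 0.05 * misalignment      # assumed breakage law
        if random.random() < p_b:
            n_s += 2                   # a broken dimer returns two singles
        else:
            survivors.append((t1, t2))
    dimers = survivors

print(f"singles: {n_s}, dimers: {len(dimers)}")
```

Since badly aligned dimers break more often, the surviving population is biased toward well-aligned pairs, reproducing the selective effect of the alignment-dependent breakage probability.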

The stochastic model previously described provides a single realization of the time evolution of the system at each run, and does not scale well with the number of robots. As a result, one must usually perform a large number of computationally expensive runs in order to obtain statistically meaningful results. Hereafter, we describe a non-spatial macroscopic model of the same aggregation process.

Macroscopic models can also track properties of the aggregates other than their size, e.g., by splitting the state variable N_2 (representing the average number of dimers) into sub-populations N_{2,i}, one for each discretized class i of relative alignment.

One can describe the dynamics of each particle P by a Markov chain whose states are either single or paired, the paired states being indexed by the discretized relative alignment of the pair. The state space of the Markov chain thus comprises the single state plus one paired state per alignment class. The probability for a single particle to form a pair of a given alignment class follows from the encountering probability p_c, while the probability p_l(i) for a particle to leave a pair of class i is derived from the corresponding breakage probability p_b(i).

Using a set of difference equations, one can summarize the average state transitions of each individual Markov dynamical system, and thus keep track of the average number N_{d,i} of aggregates of each alignment class i.

The average number of single particles evolves as N_s(k+1) = N_s(k) − 2 p_c N_s(k)^2 + 2 Σ_i p_l(i) N_{d,i}(k), with p_l : Z_+ → [0, 1]. The term 2 p_c N_s(k)^2 is the average number of particles that collided and formed a pair at iteration k, while p_l(i) N_{d,i}(k) is the average number of dimers of class i that broke apart, each returning two particles to the single population.
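Such difference equations can be iterated directly; in the sketch below the encountering probability, the number of alignment classes and the leaving probabilities p_l(i) are illustrative assumptions. Note that the update conserves the total number of particles N_s + 2 Σ_i N_{d,i} by construction.

```python
# A non-spatial macroscopic sketch: difference equations for the average
# number of singles n_s and of dimers n_d[i] per discretized alignment
# class i.  P_C, N_CLASSES and p_l are illustrative assumptions.

P_C = 1e-5                      # assumed encountering probability
N_CLASSES = 10                  # discretization of the relative alignment
# Assumed leaving probabilities, higher for badly aligned classes:
p_l = [0.05 * (i + 0.5) / N_CLASSES for i in range(N_CLASSES)]

n_s = 500.0
n_d = [0.0] * N_CLASSES

for _ in range(2000):
    collided = P_C * n_s * n_s           # average number of new dimers
    left = [p_l[i] * n_d[i] for i in range(N_CLASSES)]
    n_s = n_s - 2 * collided + 2 * sum(left)
    for i in range(N_CLASSES):
        # New dimers distribute uniformly over the alignment classes.
        n_d[i] = n_d[i] + collided / N_CLASSES - left[i]

total = n_s + 2 * sum(n_d)
print(f"N_s = {n_s:.2f}, dimer ratio = {2 * sum(n_d) / total:.3f}")
```

A single run of these real-valued updates replaces many stochastic realizations, which is the efficiency gain of the macroscopic level discussed next.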

Mean-field macroscopic models can be computationally efficient; but they are approximations of the models at lower abstraction levels and, ultimately, of the real system. Particularly, mean-field macroscopic models aggregate discrete entities into real-valued state variables that describe only the average behavior of the ensemble.

For N_0 = 1,000, all models show a good agreement, even though the Monte Carlo and the macroscopic model exhibit a slightly faster convergence, probably due to their non-spatiality. Indeed, a particle that is surrounded by stable aggregates may take quite some time before encountering another single particle. Such suboptimal mixing tends to slow the process down; this phenomenon is not captured by non-spatial models, but has been repeatedly reported in spatial ABMs (Section 3.2).

Also, one can clearly see that the accuracy of the macroscopic model with respect to the Monte Carlo one degrades gracefully as N_0 decreases; for N_0 = 50, the macroscopic model actually predicts a much faster growth of the pair ratio than that observed in Monte Carlo simulations, whereas an almost perfect match is observed for N_0 = 500. These results exhibit the limits of the ODE approximation for nonlinear dynamical systems. More complex behaviors are observed when varying the discretization factor of the alignment classes.

This paper proposed a novel and unifying perspective for the design and control of self-assembling micro-/nano- and distributed intelligent systems. This perspective results from the extrapolation of ongoing technological trends observed in these domains, namely smartening and minimalism, respectively. We believe that, thanks to such trends, both domains will converge and eventually merge into a single locus, defined by what we call smart minimal particles (SMPs). SMPs bridge the complexity and scale gap between micro/nanosystems and robotic systems; contextually, SMPs point to the existence of a continuum of sophistication between passive and active particles, both from a technological and a methodological standpoint. Moreover, the proposed unification emphasizes the cross-fertilizations among the originally separate domains concerning terminologies and methodologies. Particularly (but not exclusively) in the case of the modeling of aggregation and self-assembly dynamics, the mutual advantages of shared knowledge and tools are evident and very promising, as we showed in reviewing the efforts pursued in both M/NEMS and distributed robotics in terms of manufacturing technology and distributed control strategies.

A major motivation for proposing the concept of SMPs is the development of the vast, necessarily multi-disciplinary knowledge required to master the control of the hierarchical organization of matter into adaptive artificial structures—

This paper accordingly attempted to put the collective efforts of vast research communities into a shared perspective—as a step toward the ambitious direction outlined above. We are actively pursuing both technological and theoretical investigations on SMPs, and it is our hope that the introduction of the SMP perspective may help foster stronger and more fruitful interactions between the M/NEMS and robotics communities, so as to catalyze further research into self-assembly—needed to pursue the targeted goals and to cope with the many challenges yet to be tackled by both communities.

The convergence toward Smart Minimal Particles (SMPs).

Taxonomy of Self-Assembly.

Hosokawa's intermediate assembly products

Simulated evolution of self-assembly yield as in Hosokawa's model (from [

Mastrangeli's ABM of Zheng and Jacobs' fluidic SA process.

The geometrical

ABM simulation of fluidic SA (t_A = 15 h; initial agents speed: 100 mm/s; statistics out of 10 realizations for each CCS value).

ABM-simulated effects of the model parameters (θ_CCS = 80°, initial agents velocity: 100 mm/s; statistics out of 10 realizations for each parameter value).

Simulated effects of inert dimers (resulting from assembly) on fluidic SA performance: for the out (in) case, the dimers are (not) removed from the assembly space after assembly (ABM parameters: 10× smaller assembly space; initial populations: 60 LEDs and 30 carriers, θ_{CCS} = 80°, initial agents velocity: 100 mm/s; statistics out of 10 realizations for each case).

Evolution predicted for N_0 = 1,000 by the macroscopic model (dashed), the Monte Carlo simulation (continuous), and the agent-based simulation (bold); analogous comparisons for decreasing values of N_0.

Massimo Mastrangeli and Grégory Mermoud are sponsored by the SelfSys project funded by the Swiss research initiative Nano-Tera.ch.