Special Issue "Control and Optimization of Multi-Agent Systems and Complex Networks for Systems Engineering"

A special issue of Processes (ISSN 2227-9717). This special issue belongs to the section "Computational Methods".

Deadline for manuscript submissions: closed (30 September 2020).

Special Issue Editors

Dr. Manuel Herrera
Guest Editor
Institute for Manufacturing – Department of Engineering, University of Cambridge, 17 Charles Babbage Road, Cambridge CB3 0FS, UK
Interests: network science; graph signal processing; distributed AI; decentralized systems; predictive analytics; critical infrastructure; asset management; digital water
Dr. Marco Pérez-Hernández
Guest Editor
Institute for Manufacturing, Dept. of Engineering, University of Cambridge, UK
Interests: agent-based software architectures, autonomous and adaptable systems, decentralised and constrained decision-making processes, industrial IoT
Dr. Ajith Kumar Parlikad
Guest Editor
Institute for Manufacturing, Dept. of Engineering, University of Cambridge, UK
Interests: asset management, maintenance engineering, reliability engineering
Prof. Dr. Joaquín Izquierdo
Guest Editor
Institute for Multidisciplinary Mathematics, Dept. of Applied Mathematics, Universitat Politècnica de València, Spain
Interests: mathematical modeling; knowledge-based systems; DSSs in engineering (mainly urban hydraulics)

Special Issue Information

Dear Colleagues,

The recent development of technologies such as personal mobile computing, cloud-based computing, and the ubiquitous internet has opened new perspectives for digital management in systems engineering. This interdisciplinary branch of engineering overlaps disciplines such as industrial, mechanical, manufacturing, control, software, electrical, and civil engineering. Systems engineering deals with work processes, optimization methods, and risk management tools related to all these areas. Within a smart environment for system operation and control, biological and nature-based processes, such as those derived from epidemiology models, can be used to achieve optimal and automated decision-making in systems engineering. In particular, compartmental epidemiology models blend areas related to computational methods, multi-agent systems, and complex network analysis, which serve as a basis for key developments in the criticality and risk analysis of systems engineering. Agent-based systems, such as those based on swarm intelligence (ACO, PSO, ASO, etc.), provide valuable heuristic methodologies to explore optimal solutions for systems engineering in general, as well as for network management, operation, and design. Swarm intelligence methods have applications in self-healing networks, community detection, and label propagation, among others.
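As a minimal illustration of the swarm-intelligence heuristics mentioned above, the sketch below implements a basic particle swarm optimizer in Python. All parameter values (inertia weight, acceleration coefficients, swarm size) are illustrative defaults, not taken from any contribution in this issue.

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimise f over [lo, hi]^dim with a basic particle swarm."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # blend inertia, memory of own best, and the swarm's best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# e.g. minimise the sphere function in 5 dimensions
random.seed(0)
best, best_val = pso(lambda x: sum(v * v for v in x), dim=5)
```

Each particle blends its own best-known position with the swarm's global best; this cooperative information sharing is the mechanism that swarm methods exploit to search large design spaces.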

This Special Issue on "Control and Optimization of Multi-Agent Systems and Complex Networks for Systems Engineering" aims to curate novel advances in theoretical developments and applications of biological and nature-inspired multi-agent and network models for systems engineering. Topics include but are not limited to the following:

  • Theoretical and practical advances in biological and nature-inspired mathematical models;
  • Diffusion processes and dynamics in complex networks;
  • Advanced mesoscale models and multi-layer approaches for complex networks analysis;
  • Graph-theoretical methods and complex networks applications for risk and resilience analysis in systems engineering;
  • Swarm intelligence applications in networked systems;
  • Intelligent infrastructure and asset management;
  • Approaches and bounded strategies for learning in multi-agent systems at different scales;
  • Exploitation, control and optimisation of edge resources via multi-agent systems;
  • Multi-agent learning solutions for near-real time decision-making.

Dr. Manuel Herrera
Dr. Marco Pérez-Hernández
Dr. Ajith Kumar Parlikad
Prof. Joaquín Izquierdo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Processes is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Please note that for papers submitted after 31 December 2020 an APC of 2000 CHF applies. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • complex networks
  • multi-agent systems
  • nature-inspired models
  • biological models
  • systems engineering

Published Papers (10 papers)


Research

Jump to: Review

Open Access Article
Grand Tour Algorithm: Novel Swarm-Based Optimization for High-Dimensional Problems
Processes 2020, 8(8), 980; https://doi.org/10.3390/pr8080980 - 13 Aug 2020
Abstract
Agent-based algorithms, based on the collective behavior of natural social groups, exploit innate swarm intelligence to produce metaheuristic methodologies to explore optimal solutions for diverse processes in systems engineering and other sciences. Especially for complex problems, the processing time, and the chance of reaching only a local optimal solution, are drawbacks of these algorithms, and to date, none has proved its superiority. In this paper, an improved swarm optimization technique, named Grand Tour Algorithm (GTA), based on the behavior of a peloton of cyclists, which embodies relevant physical concepts, is introduced and applied to fourteen benchmark optimization problems to evaluate its performance in comparison to four other popular classical optimization metaheuristic algorithms. These problems are tackled initially, for comparison purposes, with 1000 variables. Then, they are confronted with up to 20,000 variables, a very large number inspired by the human genome. The obtained results show that GTA clearly outperforms the other algorithms. To strengthen GTA's value, various sensitivity analyses are performed to verify the minimal influence of the initial parameters on efficiency. It is demonstrated that GTA fulfils the fundamental requirements of an optimization algorithm, such as ease of implementation, speed of convergence, and reliability. Since optimization permeates modeling and simulation, we finally propose that GTA will be appealing to the agent-based community, and of great help for a wide variety of agent-based applications.

Open Access Article
Cooling Performance Analysis of the Lab-Scale Hybrid Oyster Refrigeration System
Processes 2020, 8(8), 899; https://doi.org/10.3390/pr8080899 - 27 Jul 2020
Cited by 1
Abstract
Compared with waste-to-heat and electricity-based hybrid refrigeration systems, the innovative lab-scale refrigeration system integrates DC and AC cooling units that are able to use solar power and electricity as energy resources. Previous studies found that temperature control and uniform temperature distribution in refrigeration systems are both critical factors for reducing Vibrio growth on raw oysters and for saving energy. Therefore, this refrigeration system was also equipped with a specially designed divider and was used to test various air circulation strategies to achieve uniform temperature distribution in six individual compartments. The objective is to investigate and evaluate the effects of air circulation strategies and operating conditions on the cooling performance, including temperature distribution, standard deviation of compartment temperatures, and cooling time, using a factorial design method. Results indicated the maximum temperature difference between the compartments was 8.9 ± 2.0 °C, 6.7 ± 2.0 °C, and 4.8 ± 2.0 °C in the scenarios of no air circulation, natural air circulation, and combined natural and forced air circulation, respectively. The interaction of fan location and fan direction showed a significant effect on the compartment temperatures, while there was no significant effect on cooling time. A circulation fan on the lower part of the 12-volt section with an air supply from the 12- to 110-volt section was determined to be the optimal condition to achieve relatively uniform temperature distribution. The refrigeration system also achieved a cooling temperature of 7.2 °C within 150 min to meet regulations. To that end, the innovative hybrid oyster refrigeration system will benefit oyster industries, as well as aquaculture farmers, in terms of complying with regulations and saving energy.

Open Access Article
The Neural Network Revamping the Process’s Reliability in Deep Lean via Internet of Things
Processes 2020, 8(6), 729; https://doi.org/10.3390/pr8060729 - 23 Jun 2020
Abstract
Deep lean is a novel approach concerned with the in-depth analysis of waste behavior at hidden layers in manufacturing processes, in order to enhance process reliability upstream. Ideal Standard Co. for bathtubs suffered from defects and cost losses in the spraying section, owing to variations in paint coat thickness caused by bubbles, which are produced by eddies and move toward the bathtubs through the hoses. These bubbles and their movement are considered a form of lean waste. The spraying liquid inside the tanks and hoses must move with uniform velocity, viscosity, pressure, and feed rate, and with suitable Reynolds circulation values, to eliminate the causes of eddies. These factors are tackled through the adoption of Internet of Things (IoT) technologies aided by neural networks (NN): when an abnormal flow rate is detected from real-time sensor data, defects can be reduced. The NN aims to forecast the movement lines of the bubble-carrying eddies so that they can be burst before entering the hoses, using Design of Experiments (DOE). This paper illustrates a deep lean perspective driven by the define, measure, analyze, improve, and control (DMAIC) methodology to improve reliability. The eddy moves downstream slowly with an anti-clockwise flow for some of the optimal values of the influencing factors, whereas the circulation Ω increases for both vertical and horizontal travel.

Open Access Article
Assessing Supply Chain Risks in the Automotive Industry through a Modified MCDM-Based FMECA
Processes 2020, 8(5), 579; https://doi.org/10.3390/pr8050579 - 13 May 2020
Abstract
Supply chains are complex networks that receive assiduous attention in the literature. Like any complex network, a supply chain is subject to a wide variety of risks that can result in significant economic losses and negative impacts in terms of image and prestige for companies. In circumstances of aggressive competition among companies, effective management of supply chain risks (SCRs) is crucial, and is currently a very active field of research. Failure Mode, Effects and Criticality Analysis (FMECA) has recently been extended to SCR identification and prioritization, aiming at reducing potential losses caused by lack of risk control. This article has a twofold objective. First, SCR assessment is investigated, and a comprehensive list of specific risks related to the automotive industry is compiled to extend the set of most commonly considered risks. Second, an alternative way of calculating the Risk Priority Number (RPN) is proposed within the FMECA framework by means of an integrated Multi-Criteria Decision-Making (MCDM) approach. We give a new calculation procedure by making use of the Analytic Hierarchy Process (AHP) to derive factor weights, and then the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) to evaluate the new factor of "dependence" among the risks. The developed joint analysis constitutes a risk analysis support tool for criticality in systems engineering. The approach also deals with uncertainty and vagueness associated with input data through the use of fuzzy numbers. The results obtained from a relevant case study in the automotive industry showcase the effectiveness of this approach, which brings important value to those companies: when planning prevention/mitigation interventions, primary importance should be given to (1) supply chain disruptions due to natural disasters; (2) manufacturing facilities, human resources, policies and breakdown processes; and (3) inefficient transport.
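The abstract above combines AHP-derived factor weights with fuzzy DEMATEL inside FMECA. The sketch below illustrates only the AHP weighting step and one plausible way to fold the weights into a risk score; it is a hypothetical illustration, not the authors' exact procedure, and the pairwise comparison values are invented.

```python
def ahp_weights(M, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix M by power iteration, normalised to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical pairwise comparisons of severity (S), occurrence (O),
# and detectability (D) on Saaty's 1-9 scale: S dominates O and D.
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
wS, wO, wD = ahp_weights(M)

def weighted_rpn(S, O, D):
    # Weighted geometric aggregation of 1-10 factor scores; because the
    # weights sum to 1, the result stays on the same 1-10 scale.
    return (S ** wS) * (O ** wO) * (D ** wD)

risk_score = weighted_rpn(S=8, O=4, D=6)
```

Compared with the classical RPN product S × O × D, a weighted aggregation like this lets the analyst express that, say, severity matters more than detectability when ranking failure modes.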

Open Access Article
Optimum Design of a Standalone Solar Photovoltaic System Based on Novel Integration of Iterative-PESA-II and AHP-VIKOR Methods
Processes 2020, 8(3), 367; https://doi.org/10.3390/pr8030367 - 22 Mar 2020
Cited by 3
Abstract
Solar energy is considered one of the most important renewable energy resources, and can be used to power a stand-alone photovoltaic (SAPV) system for supplying electricity in a remote area. However, the inconstancy and unpredictability of solar radiation are major obstacles in designing SAPV systems. Therefore, an accurate sizing method must be applied in order to find an optimal configuration and fulfil the required load demand. In this study, a novel hybrid sizing approach was developed on the basis of techno-economic objectives to optimally size the SAPV system. The proposed hybrid method consists of an intuitive method to estimate the initial numbers of PV modules and storage batteries, an iterative approach to accurately generate a wide range of optimal configurations, and a Pareto envelope-based selection algorithm (PESA-II) to reduce the large set of configurations by efficiently obtaining a set of Pareto front (PF) solutions. Subsequently, the optimal configurations were ranked using an integrated analytic hierarchy process (AHP) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR). The techno-economic objectives were loss of load probability, life cycle cost, and levelized cost of energy. The performance analysis results demonstrated that the lead–acid battery was reliable and more cost-effective than the other types of storage battery.

Open Access Feature Paper Article
Layout Optimization Process to Minimize the Cost of Energy of an Offshore Floating Hybrid Wind–Wave Farm
Processes 2020, 8(2), 139; https://doi.org/10.3390/pr8020139 - 21 Jan 2020
Cited by 1
Abstract
Offshore floating hybrid wind and wave energy is a young technology yet to be scaled up. A way to reduce the total costs of the energy production process in order to ensure competitiveness in the sustainable energy market is to maximize the farm's efficiency. To do so, an energy generation and costs calculation model was developed with the objective of minimizing the technology's Levelized Cost of Energy (LCOE) for the P80 hybrid wind-wave concept, designed by the company Floating Power Plant A/S. A Particle Swarm Optimization (PSO) algorithm was then implemented on top of other technical and decision-making processes, taking as decision variables the layout, the offshore substation position, and the export cable choice. The process was applied off the west coast of Ireland in a site of interest for the company, and after a quantitative and qualitative optimization process, a minimized LCOE was obtained. It was then found that lower costs of ~73% can be reached in the short term, and the room for improvement in the structure's design and materials was highlighted, with an LCOE reduction potential of up to 32%. The model serves usefully as a preliminary analysis. However, the uncertainty estimate of 11% indicates that further site-specific studies and measurements are essential.

Open Access Article
Optimal Design of Standalone Photovoltaic System Based on Multi-Objective Particle Swarm Optimization: A Case Study of Malaysia
Processes 2020, 8(1), 41; https://doi.org/10.3390/pr8010041 - 01 Jan 2020
Cited by 7
Abstract
This paper presents a multi-objective particle swarm optimization (MOPSO) method for the optimal sizing of standalone photovoltaic (SAPV) systems. Loss of load probability (LLP) analysis is considered to determine the technical evaluation of the system. Life cycle cost (LCC) and levelized cost of energy (LCE) are treated as the economic criteria. The two variants of the proposed PSO method, referred to as adaptive weights PSO (AWPSOcf) and sigmoid function PSO (SFPSOcf), are implemented using MATLAB software to optimize the number of PV modules (in series and parallel) and the number of storage batteries. The case study of the proposed SAPV system is executed using hourly meteorological data and a typical load demand for one year in a rural area in Malaysia. The performance outcomes of the proposed AW/SFPSOcf methods give various configurations at desired LLP values and the corresponding minimum cost. The performance results showed the superiority of SFPSOcf in terms of accuracy in selecting an optimal configuration, with a fitness function value of 0.031268, an LLP value of 0.002431, an LCC of 53,167 USD, and an LCE of 1.6413 USD. The accuracy of the AW/SFPSOcf methods is verified by using the iterative method.

Open Access Article
Global Supervisory Structure for Decentralized Systems of Flexible Manufacturing Systems Using Petri Nets
Processes 2019, 7(9), 595; https://doi.org/10.3390/pr7090595 - 04 Sep 2019
Cited by 2
Abstract
Decentralized supervisory structures have drawn much attention in recent years as a way to address the computational complexity of designing supervisory structures for large Petri net models. Many studies have been reported in the automata paradigm, while few can be found in the Petri net paradigm. A decentralized supervisory structure can address the computational complexity, but it adds to the structural complexity of the supervisory structure. This paper proposes a new method of designing a global controller for decentralized systems of a large Petri net model for flexible manufacturing systems. The proposed method can reduce both the computational complexity, by decomposing large Petri net models into several subnets, and the structural complexity, by designing a global supervisory structure that greatly reduces the cost at the implementation stage. Two efficient algorithms are developed in the proposed method. Algorithm 1 is used to compute decentralized working zones from the given Petri net model for flexible manufacturing systems. Algorithm 2 is used to compute the global controller that enforces liveness on the decentralized working zones. The ring assembling method is used to reconnect and control the working zones via the global controller. The proposed method can be applied to large Petri nets and, in general, has lower computational and structural complexity. Experimental examples are presented to explore the applicability of the proposed method.
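The token-game semantics that such Petri net supervisors build on can be sketched in a few lines: a transition is enabled when every input place holds enough tokens, and firing it moves tokens from input to output places. The two-transition manufacturing cell below is a hypothetical toy example, not the decomposition or controller from the paper.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        # enabled iff every input place holds at least the arc weight
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Toy single-machine cell: a part moves from buffer b1 through the
# machine to buffer b2; place "free" models machine availability.
net = PetriNet({"b1": 2, "free": 1, "busy": 0, "b2": 0})
net.add_transition("start", {"b1": 1, "free": 1}, {"busy": 1})
net.add_transition("finish", {"busy": 1}, {"b2": 1, "free": 1})
net.fire("start")
net.fire("finish")
```

A supervisory controller in this setting is itself expressed as extra places and arcs (such as "free" above) that disable transitions whose firing would violate a specification, which is what keeps liveness enforcement structural rather than computed at run time.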

Open Access Article
Transient Modeling of Grain Structure and Macrosegregation during Direct Chill Casting of Al-Cu Alloy
Processes 2019, 7(6), 333; https://doi.org/10.3390/pr7060333 - 01 Jun 2019
Cited by 1
Abstract
Grain structure and macrosegregation are two important aspects for assessing the quality of direct chill (DC) cast billets, and the phenomena responsible for their formation strongly interact. Transient modeling of grain structure and macrosegregation during DC casting is achieved with a cellular automaton (CA)–finite element (FE) model, by which the macroscopic transport is coupled with microscopic relations for grain growth. In the CAFE model, a two-dimensional (2D) axisymmetric description is used for the cylindrical geometry, and a Lagrangian representation is employed for both FE and CA calculations. This model is applied to the DC casting of two industrial-scale Al-6.0 wt % Cu round billets, with and without grain refiner. The grain structure and macrosegregation under thermal and solutal convection are studied. It is shown that the grain structure is fully equiaxed in the grain-refined billet, while a fine columnar grain region and a coarse columnar grain region are formed in the non-grain-refined billet. With increasing casting speed, grains become finer and grow in a direction more perpendicular to the axis, and the positive segregation near the centerline becomes more pronounced. Increasing the casting temperature makes grains coarser and the negative segregation near the surface more pronounced.

Review

Jump to: Research

Open Access Feature Paper Review
Multi-Agent Systems and Complex Networks: Review and Applications in Systems Engineering
Processes 2020, 8(3), 312; https://doi.org/10.3390/pr8030312 - 08 Mar 2020
Cited by 2
Abstract
Systems engineering is a ubiquitous discipline of engineering overlapping industrial, chemical, mechanical, manufacturing, control, software, electrical, and civil engineering. It provides tools for dealing with the complexity and dynamics involved in the optimisation of physical, natural, and virtual systems management. This paper presents a review of how multi-agent systems and complex networks theory are brought together to address systems engineering and management problems. The review also encompasses current and future research directions, both for theoretical fundamentals and for applications in industry. This is done by considering trends such as mesoscale, multiscale, and multilayer networks, along with state-of-the-art analysis of network dynamics and intelligent networks. Critical and smart infrastructure, manufacturing processes, and supply chain networks are instances of research topics for which this literature review is highly relevant.
