Table of Contents

Designs, Volume 3, Issue 1 (March 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-16
Open Access Article Tetrahedron-Based Porous Scaffold Design for 3D Printing
Received: 9 January 2019 / Revised: 4 February 2019 / Accepted: 4 February 2019 / Published: 18 February 2019
Viewed by 130 | PDF Full-text (5242 KB) | HTML Full-text | XML Full-text
Abstract
Tissue repair has been the ultimate goal of surgery, especially with the emergence of reconstructive medicine. A large amount of research devoted to exploring innovative porous scaffold designs, both homogeneous and inhomogeneous, has been presented in the literature. The triply periodic minimal surface has been a versatile source of biomorphic structure design due to its smooth surface and high interconnectivity. Nonetheless, many 3D models are rendered in the form of triangular meshes for efficiency and convenience, and the requirement of regular hexahedral meshes then becomes one of the limitations of the triply periodic minimal surface method. In this paper, we make a successful attempt to generate microscopic pore structures using tetrahedral implicit surfaces. To replace conventional Cartesian coordinates, a new coordinate system is built on the perpendicular distances between a point and the faces of a tetrahedron, capturing the periodicity of a tetrahedral implicit surface. Similarly to the triply periodic minimal surface, a variety of tetrahedral implicit surfaces, including P-, D-, and G-surfaces, are defined by combinations of trigonometric functions. We further compare triply periodic minimal surfaces with tetrahedral implicit surfaces in terms of shape, porosity, and mean curvature to discuss the similarities and differences of the two surfaces. An example of femur scaffold construction demonstrates the detailed process of modeling porous architectures using the tetrahedral implicit surface. Full article
(This article belongs to the Special Issue Design and Applications of Additive Manufacturing and 3D Printing)
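The P-, D-, and G-surfaces mentioned in the abstract are, in the triply periodic case, standard trigonometric level sets; for instance, the Schwarz P-surface is commonly approximated as the zero set of cos x + cos y + cos z. A minimal sketch of how such an implicit surface yields a porosity estimate for one periodic cell (the grid-sampling approach here is an illustration, not the paper's method):

```python
import math

def p_surface(x, y, z, c=0.0):
    """Level-set approximation of the Schwarz P-surface:
    points with f < 0 are taken as the void phase."""
    return math.cos(x) + math.cos(y) + math.cos(z) - c

def porosity(f, n=20, period=2 * math.pi):
    """Estimate the void fraction of one periodic cell by
    sampling f on an n x n x n grid."""
    inside = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (period * i / n, period * j / n, period * k / n)
                if f(x, y, z) < 0.0:
                    inside += 1
    return inside / n ** 3

# The symmetric P-surface at c = 0 splits the cell roughly in half.
phi = porosity(p_surface)
```

Shifting the offset c trades porosity against wall thickness, which is how level-set scaffolds are usually graded.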
Open Access Article A Full Model-Based Design Environment for the Development of Cyber-Physical Systems
Received: 30 September 2018 / Revised: 3 February 2019 / Accepted: 9 February 2019 / Published: 13 February 2019
Viewed by 185 | PDF Full-text (1644 KB)
Abstract
This paper discusses a full model-based design approach for the applicative development of cyber-physical systems (CPSs), targeting the fast development of logic controllers (i.e., the "cyber" side of a CPS). The proposed modeling language provides a synthesis between somewhat conflicting constraints: it is graphical, easily usable by designers, and self-contained with no need for extra information, and it leads to efficient implementations even on low-end embedded systems. Its main features include the ease of describing parallelism of actions, precise time handling, and communication with other systems according to various interfaces and protocols. Taking advantage of the modeling ease deriving from these features, the language encourages modeling whole CPSs, that is, their logical and physical sides working together; such whole models are simulated in order to gain insight into their interaction and to spot possible flaws in the controller. Once validated, the very same model, without the physical side, is compiled into the logic controller, ready to be flashed onto the controller board and to interact with the physical side. The language has been implemented in a real model-based development environment, TaskScript, which has been in use for a few years in the development of production-grade systems. Results about its effectiveness in terms of model expressivity and design effort are presented; they show the effectiveness of the approach: real production-grade systems have been developed and tested in a few days. Full article
Open Access Correction Correction: Sharpening the Scythe of Technological Change: Socio-Technical Challenges of Autonomous and Adaptive Cyber-Physical Systems
Received: 30 January 2019 / Accepted: 31 January 2019 / Published: 11 February 2019
Viewed by 142 | PDF Full-text (670 KB) | HTML Full-text | XML Full-text
Abstract
We, the authors, wish to make the following corrections to our paper [...] Full article
Open Access Article A Competitive Design and Material Consideration for Fabrication of Polymer Electrolyte Membrane Fuel Cell Bipolar Plates
Received: 9 January 2019 / Revised: 28 January 2019 / Accepted: 29 January 2019 / Published: 8 February 2019
Viewed by 219 | PDF Full-text (1424 KB) | HTML Full-text | XML Full-text
Abstract
The bipolar plate is one of the most significant components of a polymer electrolyte membrane (PEM) fuel cell and contributes substantially to the cost structure and the weight of the stacks. A number of graphite polymer composites with different fabrication techniques have been reported in the literature. Graphite composites show excellent electromechanical properties and chemical stability in acidic environments. Compression and injection molding are the most common manufacturing methods used for mass production. In this study, a competitive bipolar plate design and fabrication technique is adopted in order to develop a low-cost and lightweight expanded graphite (EG) polymer composite bipolar plate for an air-breathing PEM fuel cell. Cutting molds are designed to cut fuel flow channels in thin EG sheets (0.6 mm thickness). Three separate sheets, with the flow channel textures removed, are glued to each other with a commercial conductive epoxy to build a single bipolar plate. The final product has a density of 1.79 g/cm³, and a bipolar plate with a 20 cm² active area weighs only 11.38 g. The manufacturing cost is estimated to be $7.77/kWe, and a total manufacturing time of 2 minutes per plate is achieved with lab-scale fabrication. A flexural strength of 29 MPa is obtained with the three-point bending method, and a total resistance of 22.3 mΩ·cm² is measured for the three-layer bipolar plate. We expect that the suggested design and fabrication process can be a competitive alternative for small-scale as well as mass production of bipolar plates. Full article
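The reported figures can be cross-checked with simple arithmetic: at 1.79 g/cm³, the 11.38 g plate occupies about 6.4 cm³, and the 22.3 mΩ·cm² area-specific resistance implies a small ohmic loss at typical loads. A sketch of this back-of-the-envelope check (the 0.5 A/cm² operating point is an assumption for illustration, not a value from the paper):

```python
def plate_volume_cm3(mass_g, density_g_cm3):
    """Volume occupied by one bipolar plate."""
    return mass_g / density_g_cm3

def ohmic_loss_mV(asr_mohm_cm2, current_density_A_cm2):
    """Voltage drop across the plate: ASR times current density."""
    return asr_mohm_cm2 * current_density_A_cm2

volume = plate_volume_cm3(11.38, 1.79)   # ~6.36 cm^3
loss = ohmic_loss_mV(22.3, 0.5)          # ~11 mV at an assumed 0.5 A/cm^2
```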
Open Access Article Development of a New Span-Morphing Wing Core Design
Received: 4 January 2019 / Revised: 1 February 2019 / Accepted: 2 February 2019 / Published: 7 February 2019
Viewed by 236 | PDF Full-text (7488 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a new design for the core of a span-morphing unmanned aerial vehicle (UAV) wing that increases the spanwise length of the wing by fifty percent. The purpose of morphing the wingspan is to increase lift and fuel efficiency during extension, to increase maneuverability during contraction, and to add roll control capability through asymmetrical span morphing. The span morphing is continuous throughout the wing, which comprises multiple partitions. Three main components make up the structure of each partition: a zero-Poisson's-ratio honeycomb substructure, telescoping carbon fiber spars, and a linear actuator. The honeycomb substructure is an assembly of rigid internal ribs and flexible chevrons. This innovative multi-part honeycomb design allows the ribs and chevrons to be 3D printed separately from different materials in order to offer different directional stiffness and to accommodate design iterations and future maintenance. Because of its transverse rigidity and spanwise compliance, the design maintains the airfoil shape and the cross-sectional area during morphing. The telescoping carbon fiber spars interconnect to provide structural support throughout the wing while it morphs. The wing model has been computationally analyzed, manufactured, assembled, and experimentally tested. Full article
(This article belongs to the Special Issue Design and Applications of Additive Manufacturing and 3D Printing)
Open Access Concept Paper Design of Direct Injection Jet Ignition High Performance Naturally Aspirated Motorcycle Engines
Received: 9 January 2019 / Revised: 28 January 2019 / Accepted: 3 February 2019 / Published: 5 February 2019
Viewed by 196 | PDF Full-text (4744 KB) | HTML Full-text | XML Full-text
Abstract
Thanks to the adoption of high-pressure direct injection and jet ignition, plus electrically assisted turbo-compounding, the fuel conversion efficiency of Fédération Internationale de l'Automobile (FIA) F1 engines has been spectacularly improved, up to values above 46% at peak power and 50% at peak efficiency, by running lean of stoichiometry and stratified in a high-boost, high-compression-ratio environment. In contrast, Fédération Internationale de Motocyclisme (FIM) Moto-GP engines are still naturally aspirated, port injected, and spark ignited, working with homogeneous mixtures. This old-fashioned but highly optimized design is responsible for relatively low fuel conversion efficiencies, and yet delivers an outstanding specific power density of 200 kW/liter. The potential to improve the fuel conversion efficiency of Moto-GP engines through the adoption of direct injection and jet ignition, currently prevented by the rules, is herein discussed based on simulations. As two-stroke engines may benefit from direct injection and jet ignition more than four-stroke engines, the opportunity of a return of two-stroke engines is also argued, similarly based on simulations. Roughly the same power as today's 1000 cm³ four-stroke engines, but at better fuel efficiency, may be obtained with lean, stratified, direct injection jet ignition engines: a four-stroke of 1450 cm³ or a two-stroke of 1050 cm³. About the same power and fuel efficiency may also be delivered by a stoichiometric direct injection jet ignition two-stroke of 750 cm³. Full article
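The displacement figures in the abstract imply the specific-power trade-off: matching the total power of a 1000 cm³ engine at ~200 kW/L with a 1450 cm³ lean-burn engine means accepting roughly 138 kW/L. A small sketch of that arithmetic, using only the 200 kW/L figure stated above:

```python
def total_power_kW(specific_power_kW_per_L, displacement_cm3):
    """Power output from specific power and displacement."""
    return specific_power_kW_per_L * displacement_cm3 / 1000.0

def required_specific_power(target_kW, displacement_cm3):
    """Specific power needed to match a given power target."""
    return target_kW * 1000.0 / displacement_cm3

target = total_power_kW(200.0, 1000.0)                      # 200 kW today
lean_four_stroke = required_specific_power(target, 1450.0)  # ~138 kW/L
lean_two_stroke = required_specific_power(target, 1050.0)   # ~190 kW/L
```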
Open Access Article Probability Study on the Thermal Stress Distribution in Thick HK40 Stainless Steel Pipe Using Finite Element Method
Received: 8 November 2018 / Revised: 16 December 2018 / Accepted: 22 January 2019 / Published: 1 February 2019
Viewed by 121 | PDF Full-text (2805 KB)
Abstract
The present work deals with the development of a finite element methodology for obtaining the stress distributions in a thick cylindrical HK40 stainless steel pipe that carries high-temperature fluids. The material properties and loading were assumed to be random variables. Thermal stresses generated along the radial, axial, and tangential directions are generally computed using very complex analytical expressions. To circumvent this issue, probability theory and mathematical statistics have been applied to many engineering problems, allowing safety to be determined both quantitatively and objectively based on the concepts of reliability. Monte Carlo simulation was used to study the probabilistic characteristics of the thermal stresses, and was implemented to estimate their probabilistic distributions against the variations arising from material properties and load. A 2-D probabilistic finite element code was developed in MATLAB, and the deterministic solution was compared with ABAQUS solutions. The stresses obtained from the variation of the elastic modulus were found to be low compared to the case where the load alone was varying. The probability of failure of the pipe structure was predicted against the variations in internal pressure and thermal gradient. These finite element framework developments are useful for the life estimation of piping structures in high-temperature applications and for the subsequent quantification of the uncertainties in loading and material properties. Full article
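The Monte Carlo treatment of load uncertainty can be illustrated with the classical Lamé solution for the hoop stress at the inner wall of a thick cylinder under internal pressure, used here as a closed-form stand-in for the paper's finite element model. The geometry (a = 50 mm, b = 70 mm) and pressure statistics (mean 10 MPa, 5% coefficient of variation) are illustrative assumptions, not values from the paper:

```python
import random

def hoop_stress_inner(p, a, b):
    """Lame hoop stress at the inner wall of a thick cylinder:
    sigma = p * (a^2 + b^2) / (b^2 - a^2)."""
    return p * (a * a + b * b) / (b * b - a * a)

def monte_carlo_stress(n=10000, seed=42):
    """Sample the internal pressure as a normal variable and return
    the empirical mean and 95th percentile of the hoop stress."""
    rng = random.Random(seed)
    a, b = 0.050, 0.070  # inner/outer radius in metres (assumed)
    samples = sorted(
        hoop_stress_inner(rng.gauss(10e6, 0.5e6), a, b) for _ in range(n)
    )
    mean = sum(samples) / n
    p95 = samples[int(0.95 * n)]
    return mean, p95

mean_stress, p95_stress = monte_carlo_stress()
```

The gap between the mean and the 95th percentile is exactly the kind of margin a probabilistic code quantifies and a deterministic run hides.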
Open Access Article A Lazy Bailout Approach for Dual-Criticality Systems on Uniprocessor Platforms
Received: 20 October 2018 / Revised: 16 November 2018 / Accepted: 28 January 2019 / Published: 1 February 2019
Viewed by 104 | PDF Full-text (944 KB)
Abstract
A challenge in the design of cyber-physical systems is to integrate the scheduling of tasks of different criticality while still providing service guarantees for the higher-criticality tasks in the case of resource shortages caused by faults. While standard real-time scheduling is agnostic to the criticality of tasks, the scheduling of tasks with different criticalities is called mixed-criticality scheduling. In this paper, we present the Lazy Bailout Protocol (LBP), a mixed-criticality scheduling method in which low-criticality jobs overrunning their time budget cannot threaten the timeliness of high-criticality jobs, while at the same time the method tries to complete as many low-criticality jobs as possible. The key principle of LBP is, instead of immediately abandoning low-criticality jobs when a high-criticality job overruns its optimistic WCET estimate, to put them in a low-priority queue for later execution. To compare mixed-criticality scheduling methods, we introduce a formal quality criterion which first compares the schedulability of high-criticality jobs and only afterwards the schedulability of low-criticality jobs. Based on this criterion, we prove that LBP behaves better than the original Bailout Protocol (BP). We show that LBP can be further improved by slack-time exploitation and by gain-time collection at runtime, resulting in LBPSG. We also show that these improvements of LBP perform better than the analogous improvements based on BP. Full article
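The core demotion rule of LBP — move pending low-criticality jobs to a low-priority queue on a high-criticality overrun rather than abandoning them — can be sketched with two queues. This is a toy simplification for illustration; the protocol's bailout fund accounting and budget tracking are omitted:

```python
from collections import deque

class LazyBailoutSketch:
    """Toy model of the LBP demotion rule."""

    def __init__(self):
        self.ready = deque()         # normal ready queue of (job, criticality)
        self.low_priority = deque()  # demoted low-criticality jobs

    def release(self, job, criticality):
        self.ready.append((job, criticality))

    def on_high_crit_overrun(self):
        """Demote all pending LO jobs; HI jobs keep their place."""
        kept = deque()
        while self.ready:
            job, crit = self.ready.popleft()
            if crit == "LO":
                self.low_priority.append((job, crit))
            else:
                kept.append((job, crit))
        self.ready = kept

    def next_job(self):
        """LO jobs only run once the ready queue is drained."""
        if self.ready:
            return self.ready.popleft()[0]
        if self.low_priority:
            return self.low_priority.popleft()[0]
        return None

s = LazyBailoutSketch()
s.release("hi1", "HI"); s.release("lo1", "LO"); s.release("hi2", "HI")
s.on_high_crit_overrun()
order = [s.next_job(), s.next_job(), s.next_job()]  # hi1, hi2, lo1
```

The "lazy" part is visible in the final order: lo1 is delayed, not dropped.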
Open Access Article Retrofit of Residential Buildings in Europe
Received: 9 December 2018 / Revised: 14 January 2019 / Accepted: 22 January 2019 / Published: 24 January 2019
Viewed by 145 | PDF Full-text (5522 KB) | HTML Full-text | XML Full-text
Abstract
Recently, many cities in Europe have been encouraging the recovery of the existing residential heritage. To maximize the benefits of these campaigns, a multi-purpose program of architectural, functional, and structural retrofit is essential. Additionally, a fast-changing society requires new living criteria; new models need to be developed to respond to the developing requirements of communities and markets. This paper proposes a method of analysis for 49 residential retrofit projects, a range of "best practices" presented through the definition of strategies, actions, and thematic packages, aiming to summarize, in a systematic way, the complex panorama of the state of the art in Europe. Each project was analyzed using a data sheet, while synoptic views and tables provided key interpretations and a panorama of strategies and approaches. The analysis of the state of the art showed that lightweight interventions achieved using dry stratified construction technologies of structure/cladding/finishing are a widespread approach to renovation and requalification, both for superficial/two-dimensional actions and for volumetric/spatial actions. The study also highlights the leading role of the envelope within retrofit interventions. The retrofit approaches appear to reach the greatest efficiency when reversible, because only in this way do they ensure environmentally friendly actions with the possibility of dismantling. The intervention should improve the flexibility of the existing construction with a correct balance between planning for the present and planning for the future. Full article
(This article belongs to the Special Issue Integrated Sustainable Building Design, Construction and Operation)
Open Access Article Adaptive Time-Triggered Multi-Core Architecture
Received: 27 September 2018 / Revised: 7 December 2018 / Accepted: 18 January 2019 / Published: 22 January 2019
Viewed by 215 | PDF Full-text (2676 KB) | HTML Full-text | XML Full-text
Abstract
The static resource allocation in time-triggered systems offers significant benefits for the safety arguments of dependable systems. However, adaptation is a key factor for energy efficiency and fault recovery in cyber-physical systems (CPSs). This paper introduces the Adaptive Time-Triggered Multi-Core Architecture (ATMA), which supports adaptation using multi-schedule graphs while preserving the key properties of time-triggered systems, including implicit synchronization, temporal predictability, and avoidance of resource conflicts. ATMA is an overall architecture for safety-critical CPSs based on a network-on-a-chip, with building blocks for context agreement and adaptation. Context information is established in a globally consistent manner, providing the foundation for the temporally aligned switching of schedules in the network interfaces. A meta-scheduling algorithm computes schedule graphs and avoids state explosion using reconvergence horizons for events. For each tile, the relevant part of the schedule graph is efficiently stored using difference encodings and interpreted by the adaptation logic. The architecture was evaluated using an FPGA-based implementation and example scenarios employing adaptation for improved energy efficiency. The evaluation demonstrated the benefits of adaptation while showing the overhead and the trade-off between the degree of adaptation and the memory consumption of multi-schedule graphs. Full article
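The difference encoding mentioned in the abstract can be sketched: instead of storing every schedule variant of the multi-schedule graph in full, each adaptation variant stores only the slots that differ from a base schedule. The slot layout and task names below are illustrative, not taken from ATMA:

```python
def encode_diff(base, variant):
    """Store a variant schedule as only the {slot: task} entries
    that differ from the base schedule (per-tile view)."""
    return {i: t for i, (b, t) in enumerate(zip(base, variant)) if b != t}

def decode(base, diff):
    """Reconstruct a variant from the base and its difference set."""
    return [diff.get(i, t) for i, t in enumerate(base)]

base = ["A", "B", "idle", "C"]           # base time-triggered slot table
low_power = ["A", "idle", "idle", "C"]   # adaptation variant
diff = encode_diff(base, low_power)      # only slot 1 changes
```

Memory use then scales with how much the variants differ, which is the trade-off against the degree of adaptation that the evaluation reports.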
Open Access Article A Two-Layer Component-Based Allocation for Embedded Systems with GPUs
Received: 14 December 2018 / Revised: 12 January 2019 / Accepted: 16 January 2019 / Published: 19 January 2019
Viewed by 214 | PDF Full-text (872 KB) | HTML Full-text | XML Full-text
Abstract
Component-based development is a software engineering paradigm that can facilitate the construction of embedded systems and tackle their complexities. Modern embedded systems have increasingly demanding requirements. One way to cope with such a versatile and growing set of requirements is to employ heterogeneous processing power, i.e., CPU–GPU architectures. The new CPU–GPU embedded boards deliver increased performance but also introduce additional complexity and challenges. In this work, we address the component-to-hardware allocation for CPU–GPU embedded systems. The allocation for such systems is much more complex due to the increased amount of GPU-related information. For example, while in traditional embedded systems the allocation mechanism may consider only the CPU memory usage of components to find an appropriate allocation scheme, in heterogeneous systems the GPU memory usage also needs to be taken into account. This paper aims at decreasing the component-to-hardware allocation complexity by introducing a two-layer component-based architecture for heterogeneous embedded systems. The detailed CPU–GPU information of the system is abstracted at a high layer by compacting connected components into single units that behave as regular components. The allocator, based on the compacted information received from the high-level layer, computes feasible allocation schemes with a decreased complexity. In the last part of the paper, the two-layer allocation method is evaluated using an existing embedded system demonstrator, namely an underwater robot. Full article
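The compaction step can be sketched: connected components are merged into single units whose CPU and GPU memory demands are the sums of their members, and the allocator then checks feasibility over the compacted units. The first-fit rule and all numbers below are illustrative stand-ins for the paper's allocator:

```python
def compact(components, groups):
    """Merge each connected group of components into one unit by
    summing CPU and GPU memory demands."""
    units = []
    for group in groups:
        cpu = sum(components[name]["cpu"] for name in group)
        gpu = sum(components[name]["gpu"] for name in group)
        units.append({"members": group, "cpu": cpu, "gpu": gpu})
    return units

def first_fit(units, nodes):
    """Assign each compacted unit to the first node with enough
    remaining CPU and GPU memory; return None if infeasible."""
    free = [dict(n) for n in nodes]
    placement = []
    for u in units:
        for i, node in enumerate(free):
            if node["cpu"] >= u["cpu"] and node["gpu"] >= u["gpu"]:
                node["cpu"] -= u["cpu"]
                node["gpu"] -= u["gpu"]
                placement.append(i)
                break
        else:
            return None
    return placement

components = {
    "camera": {"cpu": 64, "gpu": 128},  # memory demands in MB, illustrative
    "vision": {"cpu": 32, "gpu": 256},
    "logger": {"cpu": 16, "gpu": 0},
}
units = compact(components, [["camera", "vision"], ["logger"]])
plan = first_fit(units, [{"cpu": 128, "gpu": 512}, {"cpu": 64, "gpu": 0}])
```

The allocator only ever sees two units instead of three components, which is where the complexity reduction comes from.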
Open Access Editorial Acknowledgement to Reviewers of Designs in 2018
Published: 19 January 2019
Viewed by 164 | PDF Full-text (237 KB) | HTML Full-text | XML Full-text
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
Open Access Article Real-Time Behaviour Planning and Highway Situation Analysis Concept with Scenario Classification and Risk Estimation for Autonomous Vehicles
Received: 1 December 2018 / Revised: 28 December 2018 / Accepted: 11 January 2019 / Published: 15 January 2019
Viewed by 263 | PDF Full-text (1746 KB) | HTML Full-text | XML Full-text
Abstract
The development of autonomous vehicles is one of the most active research areas in the automotive industry. The objective of this study is to present a concept for analysing a vehicle's current situation, together with a decision-making algorithm that determines an optimal and safe series of manoeuvres to be executed. Our work focuses on a machine-learning-based approach, using neural networks for risk estimation, comparing different classification algorithms for traffic density estimation, and using probabilistic and decision networks for behaviour planning. The situation analysis is carried out by a traffic density classifier module and a risk estimation algorithm, which predicts risks in a discrete manoeuvre space. For real-time operation, we applied a neural network that approximates the results of the algorithm used as ground truth, together with a labelling solution for the network's training data. For the classification of the current traffic density, we used a support vector machine. The situation analysis provides the input for decision making, for which we applied probabilistic networks. Full article
(This article belongs to the Special Issue Advances in Modeling, Control and Safety of Vehicle Systems)
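The traffic density classification step can be illustrated with a minimal stand-in for the paper's support vector machine: a nearest-centroid rule over simple traffic features. The feature choice (vehicle count, mean speed) and the training values are assumptions for illustration, not from the paper:

```python
def train_centroids(samples):
    """samples: {label: [feature vectors]} -> {label: centroid}."""
    centroids = {}
    for label, vectors in samples.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return centroids

def classify(centroids, x):
    """Assign x to the label with the nearest centroid
    (a stand-in for the SVM used in the paper)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Features: (vehicles within sensor range, mean speed in m/s) -- illustrative.
training = {
    "sparse": [(2, 33.0), (3, 30.0)],
    "dense":  [(12, 14.0), (15, 10.0)],
}
centroids = train_centroids(training)
label = classify(centroids, (13, 12.0))  # near the dense-traffic centroid
```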
Open Access Article Designing Flexibility and Adaptability: The Answer to Integrated Residential Building Retrofit
Received: 9 December 2018 / Revised: 30 December 2018 / Accepted: 8 January 2019 / Published: 11 January 2019
Viewed by 194 | PDF Full-text (227 KB) | HTML Full-text | XML Full-text
Abstract
In discussions of building retrofit in Europe, attention is often focused on the residential building stock built after the Second World War, which represents 75% of the total number of buildings on the territory. Recently, many cities have been encouraging campaigns to retrofit the housing heritage built after the Second World War, since, in terms of cost, time, financing, consumption, and sustainability, the practice appears more convenient than building anew. To maximize the benefits of these retrofit campaigns, it is essential to promote multi-purpose and innovative strategies that simultaneously consider architectural, functional, and structural aspects. In the field of housing, in particular, it is necessary to develop new models able to respond to the new living styles of a dynamic society. In fact, today as in the past, one of the downfalls of the housing sector is failing to recognize the human dimension within the design process. This paper evaluates past architectural practices for achieving adaptability and flexibility in the residential sector and evaluates strategies for integrated retrofit based on two macro-areas: architectural/societal/functional and structural/technological/constructional. Full article
(This article belongs to the Special Issue Integrated Sustainable Building Design, Construction and Operation)
Open Access Article Quantifying Usability via Task Flow-Based Usability Checklists for User-Centered Design
Received: 11 December 2018 / Revised: 3 January 2019 / Accepted: 5 January 2019 / Published: 10 January 2019
Viewed by 171 | PDF Full-text (1057 KB) | HTML Full-text | XML Full-text
Abstract
In this study, we investigated the effectiveness of a method to quantify overall product usability using an expert review. The expert review involved a general-purpose, task flow-based usability checklist that provides a single quantitative usability score; the checklist was expected to reduce rating variation among evaluators. To confirm its effectiveness, two experiments were performed. In Experiment 1, the usability score obtained using the proposed checklist was compared with traditional usability measures (task completion ratio, task completion time, and subjective rating). The results demonstrated that the usability score obtained using the proposed checklist shows a tendency similar to that of the traditional measures. In Experiment 2, we investigated the inter-rater agreement of the proposed checklist by comparing it with a similar method. The results demonstrated that the inter-rater agreement of the proposed task flow-based usability checklist is greater than that of structured user interface design and evaluation. Full article
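A single quantitative score from a task flow-based checklist can be sketched as a weighted pass rate over checklist items grouped by task step. The items and weights below are illustrative, not the paper's checklist:

```python
def usability_score(checklist):
    """checklist: list of (weight, passed) pairs; returns the
    weighted fraction of passed items on a 0-100 scale."""
    total = sum(w for w, _ in checklist)
    earned = sum(w for w, ok in checklist if ok)
    return 100.0 * earned / total

items = [
    (3, True),   # step 1: entry point discoverable
    (2, True),   # step 2: input format explained
    (2, False),  # step 3: error recovery provided
    (1, True),   # step 4: completion feedback shown
]
score = usability_score(items)  # 6 of 8 weighted points -> 75.0
```

Because every evaluator scores the same fixed items, differences between raters shrink to item-level pass/fail judgments, which is the mechanism behind the reduced rating variation.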
Open Access Article A Computational Framework for Procedural Abduction Done by Smart Cyber-Physical Systems
Received: 11 October 2018 / Revised: 18 December 2018 / Accepted: 19 December 2018 / Published: 25 December 2018
Viewed by 274 | PDF Full-text (3033 KB) | HTML Full-text | XML Full-text
Abstract
To be able to provide appropriate services in social and human application contexts, smart cyber-physical systems (S-CPSs) need ampliative reasoning and decision-making (ARDM) mechanisms. As one option, procedural abduction (PA), a knowledge-based computation and learning mechanism, is suggested for self-managing S-CPSs. The objective of this article is to provide a comprehensive description of the computational framework proposed for PA. Towards this end, first the essence of smart cyber-physical systems is discussed; then, the main recent research results related to computational abduction and ampliative reasoning are reviewed. PA facilitates belief-driven assessment of the momentary performance of S-CPSs, including a ‘best option’-based setting of the servicing objective and the realization of any demanded adaptation. The computational framework of PA includes eight clusters of computational activities: (i) run-time extraction of signals and data by sensing, (ii) recognition of events, (iii) inferring about existing situations, (iv) building awareness of the state and circumstances of operation, (v) devising alternative performance enhancement strategies, (vi) deciding on the best system adaptation, (vii) devising and scheduling the implied interventions, and (viii) actuating effectors and controls. Several cognitive algorithms and computational actions are used to implement PA in a compositional manner. PA necessitates not only a synergic interoperation of the algorithms, but also an objective-dependent fusion of pre-programmed and run-time-acquired chunks of knowledge. A fully fledged implementation of PA is underway, which will make verification and validation possible in the context of various smart CPSs. Full article
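The eight clusters read as a sense-reason-act pipeline. A minimal sketch chains them as stages over a shared context, with cluster (i), sensing, supplying the raw input; the stage bodies are placeholders, and only the ordering comes from the abstract:

```python
def run_pa_cycle(signals, stages):
    """Run one procedural-abduction cycle: each stage receives the
    context produced so far and adds its own contribution."""
    context = {"signals": signals}  # cluster (i): sensed signals
    for name, stage in stages:
        context[name] = stage(context)
    return context

# Placeholder stages mirroring clusters (ii)-(viii) from the abstract.
stages = [
    ("events",     lambda c: [s for s in c["signals"] if s > 0.5]),
    ("situations", lambda c: "overload" if len(c["events"]) > 1 else "nominal"),
    ("awareness",  lambda c: {"state": c["situations"]}),
    ("strategies", lambda c: ["throttle", "reroute"]
                   if c["awareness"]["state"] == "overload" else ["hold"]),
    ("decision",   lambda c: c["strategies"][0]),
    ("schedule",   lambda c: [(0, c["decision"])]),
    ("actuation",  lambda c: "apply:" + c["decision"]),
]
result = run_pa_cycle([0.9, 0.7, 0.1], stages)
```

Each stage only reads what earlier stages wrote, which mirrors the compositional implementation the abstract describes.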
Designs EISSN 2411-9660. Published by MDPI AG, Basel, Switzerland.