Special Issue "Soft Computing Application to Engineering Design"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering".

Deadline for manuscript submissions: 20 March 2022.

Special Issue Editors

Prof. Dr. Chang Yong Song
Guest Editor
Department of Naval Architecture & Ocean Engineering, Mokpo National University, Jeonnam 58554, Korea
Interests: soft computing; optimization; probabilistic design methodology; AI application to design
Prof. Dr. Wu Deng
Guest Editor
College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
Interests: intelligent optimization; resource scheduling; intelligent diagnosis; artificial intelligence

Special Issue Information

Dear Colleagues,

Products and machines manufactured through various design and production processes must perform the functions intended by the engineer or researcher under a wide range of external environments and operating conditions. However, because of uncertainties in design, manufacturing, and testing, design variables or system parameters may fluctuate, and the desired function may not be performed properly. Engineering design is one of the most important development processes in industrial fields such as shipbuilding and offshore systems, automobiles, mechanical systems, architecture, and civil engineering. To increase the reliability of engineering design, methods based on experience-derived safety factors have traditionally been used: because it is difficult for an engineer or researcher to accurately quantify the uncertainty occurring in a system, the safety factor is determined mainly from past experience. Recently, the need for methods that can quantitatively guarantee the reliability of engineering design has grown, driven by increasingly complex functions and the application of new materials. Engineering design should therefore incorporate soft computing applications built on methodologies such as design of experiments, meta-models, AI, fuzzy logic, evolutionary algorithms, neural networks, reliability analysis, and robust design theory.

Prof. Dr. Chang Yong Song
Prof. Dr. Wu Deng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then using the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • soft computing
  • design of experiments
  • meta-model
  • AI
  • fuzzy
  • evolutionary algorithm
  • neural network
  • reliability analysis
  • robust design theory
  • numerical simulation
  • engineering design

Published Papers (13 papers)


Research

Article
Tri-Partition Alphabet-Based State Prediction for Multivariate Time-Series
Appl. Sci. 2021, 11(23), 11294; https://doi.org/10.3390/app112311294 - 29 Nov 2021
Abstract
Predicting multivariate time series (MTS) has recently attracted much attention as a way to obtain richer semantics with similar or better performance. In this paper, we propose a tri-partition alphabet-based state (tri-state) prediction method for symbolic MTSs. First, for each variable, the set of all symbols, i.e., the alphabet, is divided into strong, medium, and weak using two user-specified thresholds. With the tri-partitioned alphabet, the tri-state takes the form of a matrix: one order contains all the variables, and the other is a feature vector that includes the most likely occurring strong, medium, and weak symbols. Second, a tri-partition strategy based on the deviation degree is proposed. We introduce piecewise and symbolic aggregate approximation techniques to polymerize and discretize the original MTS; in this way, a stronger symbol has a larger deviation. Moreover, most popular numerical or symbolic similarity or distance metrics can be combined. Third, we propose an along–across similarity model to obtain the k-nearest matrix neighbors. This model considers the associations among the time stamps and the variables simultaneously. Fourth, we design two post-filling strategies to obtain a completed tri-state. The experimental results on four-domain datasets show that (1) the tri-state has greater recall but lower precision; (2) the two post-filling strategies can slightly improve the recall; and (3) the along–across similarity model composed of the Triangle and Jaccard metrics is recommended first for new datasets.
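The two-threshold partition at the heart of this abstract can be sketched as follows; the symbols, deviation degrees, and threshold values below are illustrative, not values from the paper:

```python
def tri_partition(deviation, t_strong, t_weak):
    """Split an alphabet into strong/medium/weak regions by deviation degree.

    Symbols with deviation >= t_strong are 'strong', those below t_weak are
    'weak', and the rest are 'medium' (both thresholds are user-specified)."""
    strong = {s for s, d in deviation.items() if d >= t_strong}
    weak = {s for s, d in deviation.items() if d < t_weak}
    medium = set(deviation) - strong - weak
    return strong, medium, weak

# Hypothetical deviation degrees for a four-symbol alphabet.
devs = {"a": 0.9, "b": 0.5, "c": 0.1, "d": 0.7}
strong, medium, weak = tri_partition(devs, t_strong=0.7, t_weak=0.3)
```

Each variable's alphabet is partitioned independently, so the matrix-shaped tri-state collects one such partition per variable.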
(This article belongs to the Special Issue Soft Computing Application to Engineering Design)

Article
A Study on Anti-Shock Performance of Marine Diesel Engine Based on Multi-Body Dynamics and Elastohydrodynamic Lubrication
Appl. Sci. 2021, 11(23), 11259; https://doi.org/10.3390/app112311259 - 27 Nov 2021
Abstract
Diesel engine anti-shock performance is important for navy ships, and calculation is a fast and economical alternative to underwater explosion trials in this field. Research on diesel engine anti-shock performance has mainly used the spring-damping model to simulate the main bearings of a diesel engine, while the elastohydrodynamic lubrication method has been continuously applied to the main bearings of diesel engines under normal working conditions. This research aims to apply the elastohydrodynamic lubrication method to the main bearings of a diesel engine under external shock conditions. The main bearing elastohydrodynamic lubrication and diesel engine multi-body dynamics analyses are based on the AVL EXCITE Power Unit software. The external shock acts as an interference on the elastohydrodynamic lubrication calculation, and whether the elastohydrodynamic lubrication algorithm can complete the calculation under this interference is the key question of the study. By adopting a very small calculation step size, a high number of iterations, and an increased thrust bearing stiffness, the elastohydrodynamic lubrication calculation can be completed successfully in the external impact environment. The calculated accelerations at the engine block feet show a trend similar to the experimental results. Diesel engines with and without shock absorbers under external shock conditions are calculated. The model can also be used for diesel engine dynamics and main bearing lubrication calculations under normal working conditions.

Article
A Novel K-Means Clustering Algorithm with a Noise Algorithm for Capturing Urban Hotspots
Appl. Sci. 2021, 11(23), 11202; https://doi.org/10.3390/app112311202 - 25 Nov 2021
Abstract
With the growth of cities, congestion is a nearly unavoidable problem for almost every large-scale city. Road planning is an effective means of alleviating urban congestion; it is a classical non-deterministic polynomial-time (NP) hard problem and has become an important research hotspot in recent years. The K-means clustering algorithm, an iterative clustering analysis algorithm, has been regarded by scholars as an effective means of solving urban road planning problems for the past several decades; however, it is very difficult to determine the number of clusters, and the algorithm is sensitive to the initialization of the cluster centers. To solve these problems, a novel K-means clustering algorithm based on a noise algorithm is developed in this paper to capture urban hotspots. The noise algorithm is employed to randomly enhance the attribution of data points and to refine the clustering output by adding a noise judgment, so that the number of clusters and the initial cluster centers are obtained automatically for the given data. Four unsupervised evaluation indexes, namely DB, PBM, SC, and SSE, are used to evaluate and analyze the clustering results, and a nonparametric Wilcoxon statistical analysis method is employed to verify the distribution states of and differences between clustering results. Finally, five taxi GPS datasets from Aracaju (Brazil), San Francisco (USA), Rome (Italy), Chongqing (China), and Beijing (China) are selected to test and verify the effectiveness of the proposed noise K-means clustering algorithm by comparing it with fuzzy C-means, K-means, and K-means plus approaches. The comparative experimental results show that the noise algorithm can reasonably obtain the number of clusters and initialize the cluster centers, and that the proposed noise K-means clustering algorithm demonstrates better clustering performance, accurately obtains clustering results, and effectively captures urban hotspots.
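The baseline the paper builds on is Lloyd's K-means over GPS points; a minimal sketch follows. The paper's noise-based judgment for choosing the number of clusters and the initial centers is not reproduced here (this version seeds deterministically with the first k points, and the point coordinates are invented):

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd's K-means on 2-D points: alternate between assigning each
    point to its nearest center and recomputing each center as the mean of
    its assigned points."""
    centers = [p for p in points[:k]]  # deterministic seeding for illustration
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]  # keep an empty cluster's old center
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated point groups standing in for GPS hotspots.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
centers, clusters = kmeans(pts, k=2)
```

On this toy data the centers converge to the two group centroids, (0.5, 0.5) and (10.5, 10.5); the paper's contribution is precisely to automate what is hard-coded here (k and the seeding).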

Article
A Novel Adaptive Sparrow Search Algorithm Based on Chaotic Mapping and T-Distribution Mutation
Appl. Sci. 2021, 11(23), 11192; https://doi.org/10.3390/app112311192 - 25 Nov 2021
Abstract
To address the slow convergence speed of the basic sparrow search algorithm (SSA) and its tendency to fall into local optima, a chaotic mapping strategy, an adaptive weighting strategy, and a t-distribution mutation strategy are introduced to develop a novel adaptive sparrow search algorithm, termed CWTSSA, in this paper. In the proposed CWTSSA, the chaotic mapping strategy is employed to initialize the population and enhance population diversity. The adaptive weighting strategy is applied to balance local exploitation against global exploration and to improve the convergence speed. An adaptive t-distribution mutation operator is designed that uses the iteration number as the degrees-of-freedom parameter of the t-distribution, improving both global and local exploration and helping the search avoid local optima. To prove the effectiveness of the CWTSSA, 15 standard test functions are used, and other improved SSAs as well as differential evolution (DE), particle swarm optimization (PSO), and gray wolf optimization (GWO) are selected for comparison. The comparative experimental results indicate that the proposed CWTSSA obtains higher convergence accuracy, faster convergence speed, and better diversity and exploration abilities, providing a new optimization algorithm for solving complex optimization problems.
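Two of the three strategies lend themselves to a short sketch: chaotic (logistic-map) initialization and t-distribution mutation with the iteration number as the degrees of freedom. The map constant, bounds, and population sizes below are illustrative, and the adaptive weighting strategy is omitted:

```python
import math
import random

def logistic_init(n, dim, lb, ub, x0=0.7, mu=4.0):
    """Chaotic population initialization: iterating x <- mu*x*(1-x) produces a
    chaotic sequence in (0, 1), which is scaled to the search bounds to cover
    the space more evenly than default random seeding."""
    pop, x = [], x0
    for _ in range(n):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)
            ind.append(lb + x * (ub - lb))
        pop.append(ind)
    return pop

def t_mutate(ind, iteration, rng):
    """t-distribution mutation with df = iteration number: small df early on
    gives heavy-tailed exploratory jumps, large df later approaches a Gaussian
    and favours local exploitation."""
    df = max(1, iteration)
    out = []
    for x in ind:
        z = rng.gauss(0.0, 1.0)
        v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))  # chi-square(df)
        out.append(x + x * z / math.sqrt(v / df))              # t-distributed step
    return out

pop = logistic_init(n=20, dim=3, lb=-5.0, ub=5.0)
mutant = t_mutate(pop[0], iteration=10, rng=random.Random(42))
```

Sampling the t-distribution as a normal divided by a scaled chi-square avoids any dependency beyond the standard library.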

Article
TDGVRPSTW of Fresh Agricultural Products Distribution: Considering Both Economic Cost and Environmental Cost
Appl. Sci. 2021, 11(22), 10579; https://doi.org/10.3390/app112210579 - 10 Nov 2021
Abstract
The time-dependent vehicle routing problem with time windows for fresh agricultural products distribution is studied here, considering both economic cost and environmental cost. A calculation method for road travel time across time periods is designed. A freshness measure function for agricultural products and a measure function for the carbon emission rate are employed, taking into account time-varying vehicle speeds, fuel consumption, carbon emissions, perishable products, customers' time windows, and minimum freshness. A time-dependent green vehicle routing problem with soft time windows (TDGVRPSTW) model is formulated, whose objective is to minimize the sum of economic and environmental costs. According to the characteristics of the model, a new variable neighborhood adaptive genetic algorithm is designed that integrates the global search ability of the genetic algorithm with the local search ability of the variable neighborhood descent algorithm. Finally, the experimental data show that the proposed approaches effectively avoid traffic congestion, reduce total distribution costs, and promote energy conservation and emission reduction.
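The abstract mentions a calculation method for road travel time across time periods but does not give it; a common FIFO-consistent piecewise-speed computation looks like the sketch below (the speed profile and units are invented for illustration):

```python
def travel_time(depart, distance, speed_profile):
    """Road travel time when speed changes across time periods: drive through
    each period at its speed until the remaining distance is used up.

    speed_profile: list of (period_end, speed) sorted by period_end; the last
    period must cover the arrival time. Units here are hours and km/h."""
    t, remaining = depart, distance
    for period_end, speed in speed_profile:
        if t >= period_end:
            continue  # this period is already over at the current clock time
        reachable = speed * (period_end - t)
        if remaining <= reachable:
            return t + remaining / speed - depart
        remaining -= reachable
        t = period_end
    raise ValueError("speed profile does not cover the trip")

# Illustrative profile: 60 km/h until 08:00, congestion at 30 km/h until
# 10:00, then 50 km/h for the rest of the day.
profile = [(8.0, 60.0), (10.0, 30.0), (24.0, 50.0)]
elapsed = travel_time(depart=7.0, distance=90.0, speed_profile=profile)
```

A 90 km trip departing at 07:00 covers 60 km in the first hour, then crosses into the congested period and needs another hour for the remaining 30 km, so the elapsed time is 2 hours.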

Article
An Improved Image Filtering Algorithm for Mixed Noise
Appl. Sci. 2021, 11(21), 10358; https://doi.org/10.3390/app112110358 - 04 Nov 2021
Abstract
In recent years, image filtering has been a hot research direction in the field of image processing. Many methods have been proposed for noise removal from images, and they achieve quite good denoising results. However, most methods target a single noise type, such as Gaussian noise, salt-and-pepper noise, or multiplicative noise. For mixed noise, such as salt-and-pepper plus Gaussian noise, the denoising effect of the available methods is not ideal, leaving considerable room for improvement. To solve this problem, this paper proposes a filtering algorithm for mixed salt-and-pepper plus Gaussian noise that combines an improved median filtering algorithm, an improved wavelet threshold denoising algorithm, and an improved non-local means (NLM) algorithm. The algorithm makes full use of the advantages of the median filter in removing salt-and-pepper noise and of the wavelet threshold and NLM algorithms in filtering Gaussian noise. We first improved the three algorithms individually and then combined them in a fixed pipeline to obtain a new method for removing mixed noise. Specifically, we adjusted the window size of the median filtering algorithm and improved its method of detecting noise points; we improved the threshold function of the wavelet threshold algorithm, analyzed its mathematical characteristics, and derived an adaptive threshold; and for the NLM algorithm, we improved its Euclidean distance function and the corresponding distance weight function.
To test the denoising effect of this method, salt-and-pepper plus Gaussian noise at different levels was added to the test images, and several state-of-the-art denoising algorithms were selected for comparison, including K-Singular Value Decomposition (KSVD), Non-locally Centralized Sparse Representation (NCSR), Structured Overcomplete Sparsifying Transform Model with Block Cosparsity (OCTOBOS), Trilateral Weighted Sparse Coding (TWSC), Block Matching and 3D Filtering (BM3D), and Weighted Nuclear Norm Minimization (WNNM). Experimental results show that the proposed algorithm achieves a Peak Signal-to-Noise Ratio (PSNR) about 2–7 dB higher than the above algorithms and also performs better in Root Mean Square Error (RMSE), Structural Similarity (SSIM), and Feature Similarity (FSIM). Overall, the algorithm has better denoising performance, restores image details and edge information more faithfully, and is more robust than the above-mentioned algorithms.
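The median-filter stage rests on a detect-then-filter idea: only pixels flagged as impulse (salt-and-pepper) candidates are replaced by the local median. The paper's detector and adaptive window are more elaborate; the sketch below uses the crudest detector (extreme pixel values) on an invented image:

```python
def local_median(img, i, j, r=1):
    """Median of the (2r+1)x(2r+1) neighbourhood, clipped at the borders."""
    h, w = len(img), len(img[0])
    vals = sorted(
        img[ii][jj]
        for ii in range(max(0, i - r), min(h, i + r + 1))
        for jj in range(max(0, j - r), min(w, j + r + 1))
    )
    return vals[len(vals) // 2]

def remove_impulses(img, lo=0, hi=255):
    """Detect-then-filter median filtering for salt-and-pepper noise: pixels
    at the extreme values are treated as noise candidates and replaced by the
    local median; all other pixels pass through untouched."""
    return [
        [local_median(img, i, j) if v in (lo, hi) else v
         for j, v in enumerate(row)]
        for i, row in enumerate(img)
    ]

noisy = [[10] * 5 for _ in range(5)]
noisy[2][2] = 255  # one salt pixel
clean = remove_impulses(noisy)
```

Replacing only detected pixels is what preserves edges and detail compared with filtering every pixel; the Gaussian component of the mixed noise is then left to the wavelet-threshold and NLM stages.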

Article
BiTTM: A Core Biterms-Based Topic Model for Targeted Analysis
Appl. Sci. 2021, 11(21), 10162; https://doi.org/10.3390/app112110162 - 29 Oct 2021
Abstract
While most existing topic models perform a full analysis on a set of documents to discover all topics, it has recently been noted that in many situations users are interested only in fine-grained topics related to some specific aspects. Targeted analysis (or focused analysis) has been proposed to address this problem: given a corpus of documents from a broad area, targeted analysis discovers only topics related to user-interested aspects, expressed by a set of user-provided query keywords. Existing approaches for targeted analysis suffer from problems such as topic loss and topic suppression because of their inherent assumptions and strategies. Moreover, existing approaches are not designed for computational efficiency, even though targeted analysis is supposed to respond to user queries as quickly as possible. In this paper, we propose a core BiTerms-based Topic Model (BiTTM). By modelling topics from core biterms that are potentially relevant to the target query, BiTTM captures context information across documents to alleviate topic loss and suppression, while also enabling the efficient modelling of topics related to specific aspects. Our experiments on nine real-world datasets demonstrate that BiTTM outperforms existing approaches in both effectiveness and efficiency.
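A biterm is an unordered pair of words co-occurring in the same short context. A crude stand-in for BiTTM's core-biterm selection (which is more sophisticated) is to keep only biterms touching a query keyword; the documents and query below are invented:

```python
from itertools import combinations

def core_biterms(docs, query):
    """Extract biterms (unordered co-occurring word pairs) from each document
    and keep only those containing a query keyword -- a simplistic sketch of
    relevance-based core-biterm selection."""
    kept = set()
    for doc in docs:
        for a, b in combinations(doc, 2):
            if a != b and (a in query or b in query):
                kept.add(tuple(sorted((a, b))))
    return kept

docs = [["deep", "learning", "model"], ["model", "training", "data"]]
pairs = core_biterms(docs, query={"model"})
```

Restricting topic modelling to such a reduced biterm set is what makes targeted analysis cheaper than a full-corpus model.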

Article
Layout Design and Die Casting Using CAE Simulation for Household Appliances
Appl. Sci. 2021, 11(21), 10128; https://doi.org/10.3390/app112110128 - 28 Oct 2021
Abstract
With the development and industrialization of science and technology, aluminum alloys have found applications in many fields. Recently, governments have been pursuing ways to decrease the weight and increase the recyclability of components in order to conserve resources, energy, and the environment. In keeping with this trend, cast iron products are being replaced by aluminum products in the foundry industry through high-pressure die casting (HPDC). Casting layout design has traditionally relied on the experience and knowledge of mold designers, which is insufficient to respond to the rapidly changing needs of the era and to increasing production costs, so designing and producing casting layouts using CAD/CAM/CAE technology has become a critical issue. The use of Computer-Aided Engineering (CAE) is growing rapidly with the development of computer software and hardware: CAE not only predicts defects in mass production but also supports filling and solidification analysis during the mold design stage, before production, enabling optimal mold design. New technologies that combine computer-simulated filling and solidification analysis with existing techniques and practical field experience are spreading rapidly in the foundry industry, where the layout and design of casting products has traditionally progressed through trial and error based on empirical knowledge. The solutions achieved through scientific calculation and analysis using CAE can save a great deal of money and time in the design, fabrication, and building of die-casting molds. In this study, numerical analysis of a household appliance (a cooking grill) quickly and accurately predicts problems arising from the filling and solidification of the melted metal in the casting process, thereby ensuring the quality of the final cast product. These results can be used to quickly establish a sound casting layout with reduced production costs.

Article
Early Robust Design—Its Effect on Parameter and Tolerance Optimization
Appl. Sci. 2021, 11(20), 9407; https://doi.org/10.3390/app11209407 - 11 Oct 2021
Abstract
The development of complex, high-quality products for dynamic markets requires robust design and tolerancing workflows that support the entire product development process. Despite the large number of methods and tools available to designers and tolerance engineers, there are hardly any consistent approaches applicable throughout all development stages. This is mainly due to the break between the primarily qualitative approaches of the concept stage and the quantitative parameter and tolerance design activities of subsequent stages. Motivated by this, the paper bridges the gap between these two views by contrasting the terminology and methods used in each. Moreover, it studies the effects of early robust design decisions, with a focus on Suh's Axiomatic Design axioms, on later parameter and tolerance optimization. Since most robust design activities in concept design can be ascribed to these axioms, this allows, for the first time, reliable statements about the specific benefits of early robust design decisions for the entire variation-aware product development process. The effects on the optimization of nominal design parameters and their tolerance values are shown by means of a case study based on ski bindings.
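Suh's independence axiom is commonly checked on a binary design matrix mapping functional requirements to design parameters; a minimal sketch of that classification (the matrices are illustrative, and the paper's analysis goes well beyond this check):

```python
def classify_design(A):
    """Independence-axiom check from Axiomatic Design: a binary design matrix
    (rows = functional requirements, columns = design parameters) is
    'uncoupled' if diagonal, 'decoupled' if triangular (the requirements can
    be satisfied in a fixed order), and 'coupled' otherwise."""
    n = len(A)
    off = [(i, j) for i in range(n) for j in range(n) if i != j and A[i][j]]
    if not off:
        return "uncoupled"
    if all(i > j for i, j in off) or all(i < j for i, j in off):
        return "decoupled"
    return "coupled"
```

Uncoupled or decoupled concepts are preferred in early robust design precisely because they keep later parameter and tolerance optimization separable.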

Article
A Novel Fault Feature Extraction Method for Bearing Rolling Elements Using Optimized Signal Processing Method
Appl. Sci. 2021, 11(19), 9095; https://doi.org/10.3390/app11199095 - 29 Sep 2021
Abstract
A rolling element signal passes along a long transmission path during acquisition, which makes its fault features difficult to extract. Therefore, a novel weak-fault feature extraction method, KMVMD-PGMCKD, is proposed, which combines optimized variational mode decomposition with kurtosis mean (KMVMD) and maximum correlated kurtosis deconvolution based on power spectrum entropy and grid search (PGMCKD). First, a VMD with kurtosis mean (KMVMD) is proposed. Then, an adaptive parameter selection method for MCKD based on power spectrum entropy and grid search, namely PGMCKD, is proposed to determine the deconvolution period T and the filter order L. The complementary advantages of KMVMD and PGMCKD are integrated to construct a novel weak-fault feature extraction model (KMVMD-PGMCKD). Finally, the power spectrum of the signal obtained by KMVMD-PGMCKD is used to implement feature extraction. Bearing rolling element signals from Case Western Reserve University and actual rolling element data are selected to prove the validity of KMVMD-PGMCKD. The experimental results show that KMVMD-PGMCKD can effectively extract the fault features of bearing rolling elements and accurately diagnose weak faults under variable working conditions.
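The PGMCKD parameter selection amounts to a grid search over the deconvolution period T and filter order L, scored by the entropy of a power spectrum. A minimal sketch of both pieces follows; the toy objective and grid values are illustrative, not the paper's:

```python
import math

def spectral_entropy(power):
    """Shannon entropy of a normalized power spectrum: low entropy means the
    energy concentrates in a few frequencies (a clearer fault signature)."""
    total = sum(power)
    ps = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in ps)

def grid_search(objective, Ts, Ls):
    """Pick the (T, L) pair minimizing the objective over a discrete grid --
    the selection scheme applied to the MCKD deconvolution period T and
    filter order L."""
    return min(((objective(T, L), T, L) for T in Ts for L in Ls))[1:]

# Toy objective standing in for "entropy of the MCKD output spectrum".
T_best, L_best = grid_search(lambda T, L: (T - 30) ** 2 + (L - 100) ** 2,
                             Ts=[10, 30, 50], Ls=[50, 100, 150])
```

In the actual method the objective would run MCKD with the candidate (T, L) and score the entropy of the resulting spectrum, so the grid search selects the most impulsive deconvolution.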

Article
Reliability Analysis of C4ISR Systems Based on Goal-Oriented Methodology
Appl. Sci. 2021, 11(14), 6335; https://doi.org/10.3390/app11146335 - 08 Jul 2021
Abstract
Command and control (C4ISR) systems are typical hard-and-software integrated systems comprising both software and hardware; their failures result from complicated common cause failures and common (or shared) signals that render classical reliability analysis methods inapplicable. To this end, this paper applies the Goal-Oriented (GO) methodology to analyze the reliability of a C4ISR system in detail. The reliability and failure probability of the C4ISR system are obtained from the constructed GO model. At the component level, the reliability of the units of the C4ISR system is computed. Importance analysis of the failures of such a system is completed using the qualitative analysis capability of the GO model, by which critical hardware failures, such as communication module and motherboard module failures, as well as critical software failures, such as network module application software and decompression module software failures, are ascertained. The method of this paper contributes to the reliability analysis of all hard-and-software integrated systems.
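GO operators propagate success probabilities along the system's signal flow; in the simplest cases they reduce to the familiar series and parallel reliability formulas, sketched below (the module names and reliability values are invented, and the GO methodology's treatment of shared signals and common cause failures is not reproduced):

```python
def series(*rs):
    """Reliability of components in series: all must work."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """Reliability of redundant (parallel) components: at least one works."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# A toy hard-and-software chain: two redundant communication modules feeding
# a motherboard and an application-software unit.
system_r = series(parallel(0.95, 0.95), 0.99, 0.98)
```

The GO model's value lies in handling structures that do not decompose this cleanly, which is exactly what plain series/parallel algebra cannot do.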

Article
A Study on Learning Parameters in Application of Radial Basis Function Neural Network Model to Rotor Blade Design Approximation
Appl. Sci. 2021, 11(13), 6133; https://doi.org/10.3390/app11136133 - 01 Jul 2021
Abstract
Meta-models are generally applied to approximate multi-objective optimization, reliability analysis, reliability-based design optimization, etc., not only to improve the efficiency of numerical calculation and convergence, but also to facilitate the analysis of design sensitivity. The radial basis function neural network (RBFNN) is a meta-model employing a hidden layer of radial units and an output layer of linear units, characterized by relatively fast training, good generalization, and a compact network structure. When using the RBFNN, it is important to smooth out scattered noisy data in the approximated design space to prevent local minima in gradient-based optimization or reliability analysis; for the RBFNN to serve as the meta-model in an actual structural design problem, the smoothing parameter must therefore be properly determined. This study aims to identify the effect of various learning parameters, including the spline smoothing parameter, on the performance of RBFNN design approximation. An actual rotor blade design problem was considered to investigate the characteristics of the RBFNN approximation with respect to the range of the spline smoothing parameter, the number of training data, and the number of hidden layers. In the RBFNN approximation of the rotor blade design, design sensitivity characteristics such as main effects were also evaluated, along with the performance for different learning parameter settings. From the evaluation of learning parameters in the rotor blade design, it was found that the number of training data had a larger influence on the accuracy of the RBFNN meta-model than the spline smoothing parameter, while the number of hidden layers had little effect on performance.
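The role of the smoothing parameter can be illustrated with a Gaussian RBF fit regularized on the diagonal: zero smoothing interpolates the (noisy) training data exactly, while positive smoothing trades fit error for a smoother surface. This is a stand-in for the paper's spline smoothing parameter study, with invented 1-D data:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, sigma=1.0, smooth=0.0):
    """Gaussian-RBF network fit; `smooth` is the smoothing (regularization)
    parameter added to the kernel matrix diagonal before solving for the
    hidden-to-output weights."""
    n = len(xs)
    Phi = [[math.exp(-((xs[i] - xs[j]) ** 2) / (2 * sigma ** 2))
            + (smooth if i == j else 0.0) for j in range(n)] for i in range(n)]
    w = solve(Phi, list(ys))

    def predict(x):
        return sum(w[j] * math.exp(-((x - xs[j]) ** 2) / (2 * sigma ** 2))
                   for j in range(n))
    return predict

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
exact = rbf_fit(xs, ys, smooth=0.0)     # interpolates the data exactly
smoothed = rbf_fit(xs, ys, smooth=1.0)  # shrinks toward a smoother surface
```

With smoothing, the prediction at the middle point drops below the training value, which is the damping of scattered noise that prevents spurious local minima in gradient-based optimization over the meta-model.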

Article
Meta-Models and Genetic Algorithm Application to Approximate Optimization with Discrete Variables for Fire Resistance Design of A60 Class Bulkhead Penetration Piece
Appl. Sci. 2021, 11(7), 2972; https://doi.org/10.3390/app11072972 - 26 Mar 2021
Abstract
An A60 class bulkhead penetration piece is a fire-resistance apparatus installed in bulkhead compartments to protect lives and prevent flame diffusion in case of a fire accident in ships and offshore plants. In this study, approximate optimization with discrete variables was carried out for the fire-resistance design of an A60 class bulkhead penetration piece (A60 BPP) using various meta-models and a multi-island genetic algorithm. Transient heat transfer analysis was carried out to evaluate the fire-resistance design, and the results of the analysis were verified via a fire test. The design of experiments method was applied to generate the meta-models used in the approximate optimization, integrating the verified results of the transient heat transfer analysis. The meta-models were the response surface model, Kriging, and a radial basis function-based neural network. In the approximate optimization, the bulkhead penetration piece length, diameter, material type, and insulation density were treated as discrete design variables, and constraints on temperature, productivity, and cost were considered. The meta-model-based approximate optimum design problem was formulated so that the discrete design variables minimize the weight of the A60 BPP subject to the limit values of the constraints. Regarding approximation accuracy, the solutions from the approximate optimization were compared to actual analysis results. It was concluded that, among the meta-models used, the radial basis function-based neural network showed the most accurate optimum design results for the fire-resistance design of the A60 class bulkhead penetration piece.
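The formulation — minimize weight over discrete design variables subject to constraint limits evaluated on a meta-model — can be sketched with brute-force enumeration in place of the multi-island genetic algorithm. The variable domains, weight function, and constraint below are invented for illustration:

```python
from itertools import product

def discrete_optimize(weight, constraints, domains):
    """Minimize a weight function over discrete design variables subject to
    constraint limits g(x) <= 0, by exhaustive enumeration of the discrete
    domains (feasible for small grids; a GA replaces this at scale)."""
    best_w, best_x = float("inf"), None
    for x in product(*domains):
        if all(g(x) <= 0 for g in constraints) and weight(x) < best_w:
            best_w, best_x = weight(x), x
    return best_x, best_w

# Toy problem: two discrete sizing variables, minimize weight = x0 * x1
# subject to a "capacity" requirement x0 * x1 >= 10.
best_x, best_w = discrete_optimize(
    weight=lambda x: x[0] * x[1],
    constraints=[lambda x: 10 - x[0] * x[1]],
    domains=[(2, 3, 4), (2, 3, 4)],
)
```

In the paper's setting the weight and constraint functions would be the meta-model predictions (temperature, productivity, cost) rather than closed-form expressions.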
