Computation, Volume 12, Issue 3 (March 2024) – 25 articles

Cover Story: Crash simulations are required to guarantee low levels of acceleration during impact and thereby reduce damage. Additive manufacturing allows parts to be created for in situ testing in order to adjust the required force for a given impact configuration. This research shows the repeatability of the compression force–displacement curves of different PLA cubes, demonstrating how a single stress–strain curve can be used for different sizes and orientations. Simulations of compression and impact are compared with experimental tests and show good correlation. A protocol is given to determine the dimensions, mass, and velocity for each cube side. Explicit impact simulations are optimised using mass scaling for time steps, as well as classic plasticity and foam material models at large strain rates.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 2039 KiB  
Article
Multivariate Peristalsis in a Straight Rectangular Duct for Carreau Fluids
by Iosif C. Moulinos, Christos Manopoulos and Sokrates Tsangaris
Computation 2024, 12(3), 62; https://doi.org/10.3390/computation12030062 - 20 Mar 2024
Viewed by 669
Abstract
Peristaltic flow in a straight rectangular duct, driven by contraction pulses implemented by pairs of horizontal cylindrical segments with their axes perpendicular to the flow direction, is examined. The wave propagation speed is considered in a range that triggers laminar fluid motion. The setting is analyzed over a set of variables that includes the propagation speed, the relative occlusion, the modality of the squeezing pulse profile, and the Carreau power index. The numerical solution of the equations of motion on Cartesian meshes is grounded in the immersed boundary method. An increase in the peristaltic pulse modality reduces the shear rate levels on the central tube axis and shifts the peristaltic characteristics to higher pressure values. The effect of the no-slip side walls (NSSWs) is elucidated by comparison with results for the flow field produced under the same assumptions but with slip side walls (SSWs). Shear-thinning behavior has a significantly larger effect on transport efficiency for the NSSWs duct than for the SSWs duct. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1

13 pages, 557 KiB  
Article
Exploring Numba and CuPy for GPU-Accelerated Monte Carlo Radiation Transport
by Tair Askar, Argyn Yergaliyev, Bekdaulet Shukirgaliyev and Ernazar Abdikamalov
Computation 2024, 12(3), 61; https://doi.org/10.3390/computation12030061 - 20 Mar 2024
Viewed by 785
Abstract
This paper examines the performance of two popular GPU programming platforms, Numba and CuPy, for Monte Carlo radiation transport calculations. We conducted tests involving random number generation and one-dimensional Monte Carlo radiation transport in plane-parallel geometry on three GPU cards: NVIDIA Tesla A100, Tesla V100, and GeForce RTX 3080. We compared Numba and CuPy to each other and to our CUDA C implementation. The results show that CUDA C, as expected, delivers the fastest performance and highest energy efficiency, while Numba offers comparable performance when data movement is minimal. CuPy, while easiest to implement, performs slower for compute-heavy tasks. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
Show Figures

Figure 1
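The paper's GPU kernels are not reproduced in this listing; purely as an illustration of the underlying physics, a minimal CPU-side sketch of one-dimensional Monte Carlo transport in plane-parallel geometry (isotropic scattering, made-up slab depth and albedo) might look like:

```python
import numpy as np

def transmit_fraction(n_photons, tau_max, albedo, rng):
    """Estimate the fraction of photons escaping a plane-parallel slab.

    Photons enter at tau = 0 moving toward tau_max; at each interaction
    they are absorbed with probability (1 - albedo) or re-emitted
    isotropically. Returns the transmitted fraction.
    """
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                       # optical depth, direction cosine
        while True:
            tau += mu * -np.log(rng.random())    # sample an exponential free path
            if tau >= tau_max:                   # escaped through the far face
                transmitted += 1
                break
            if tau < 0:                          # escaped back out the near face
                break
            if rng.random() > albedo:            # absorbed
                break
            mu = 2.0 * rng.random() - 1.0        # isotropic scattering direction

    return transmitted / n_photons

rng = np.random.default_rng(42)
frac = transmit_fraction(20_000, tau_max=0.1, albedo=0.9, rng=rng)
```

On a GPU, the per-photon loop is what a Numba or CUDA C kernel would parallelize, roughly one thread per photon history.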

13 pages, 298 KiB  
Article
Practical Improvement in the Implementation of Two Avalanche Tests to Measure Statistical Independence in Stream Ciphers
by Evaristo José Madarro-Capó, Eziel Christians Ramos Piñón, Guillermo Sosa-Gómez and Omar Rojas
Computation 2024, 12(3), 60; https://doi.org/10.3390/computation12030060 - 19 Mar 2024
Viewed by 731
Abstract
This study describes the implementation of two algorithms in a parallel environment. The algorithms correspond to two statistical tests based on the bit independence criterion and the strict avalanche criterion, which are used to measure avalanche properties in stream ciphers. These criteria allow the statistical independence between the outputs and the internal state of a bit-level cipher to be determined. Both tests require extensive input parameters to assess the performance of current stream ciphers, leading to long execution times. The presented implementation significantly reduces the execution time of both tests, making them suitable for evaluating ciphers in practical applications. The evaluation results compare the performance of the RC4 and HC256 stream ciphers in sequential and parallel environments. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
Show Figures

Figure 1
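The parallel implementation evaluated in the paper is not shown here; as a toy illustration of the strict avalanche criterion itself, the sketch below measures the average fraction of output bits that flip per single-bit input change, using SHA-256 as a stand-in for a cipher's output function (the criterion looks for values near 0.5):

```python
import hashlib

def bits(data: bytes) -> int:
    return int.from_bytes(data, "big")

def avalanche_fraction(message: bytes) -> float:
    """Average fraction of output bits that flip when one input bit flips.

    A function with good strict-avalanche behaviour scores close to 0.5.
    SHA-256 stands in here for a cipher's keystream/output function.
    """
    base = bits(hashlib.sha256(message).digest())
    n_in, n_out = 8 * len(message), 256
    total_flips = 0
    for i in range(n_in):
        flipped = bytearray(message)
        flipped[i // 8] ^= 1 << (i % 8)              # flip input bit i
        out = bits(hashlib.sha256(bytes(flipped)).digest())
        total_flips += bin(base ^ out).count("1")    # Hamming distance
    return total_flips / (n_in * n_out)

frac = avalanche_fraction(b"example message!")       # 16 bytes -> 128 input bits
```

The paper's tests flip bits of the cipher's internal state rather than of a message, but the counting structure is the same, and the per-bit-flip loop is what parallelizes naturally.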

24 pages, 4029 KiB  
Article
Personalized Tourist Recommender System: A Data-Driven and Machine-Learning Approach
by Deepanjal Shrestha, Tan Wenan, Deepmala Shrestha, Neesha Rajkarnikar and Seung-Ryul Jeong
Computation 2024, 12(3), 59; https://doi.org/10.3390/computation12030059 - 18 Mar 2024
Viewed by 918
Abstract
This study introduces a data-driven, machine-learning approach to designing a personalized tourist recommendation system for Nepal. It examines key tourist attributes, such as demographics, behaviors, preferences, and satisfaction, to develop four sub-models for data collection and machine learning. A structured survey is conducted with 2400 international and domestic tourists, featuring 28 major questions and 125 variables. The data are preprocessed, and significant features are extracted to enhance the accuracy and efficiency of the machine-learning models, which are evaluated using metrics such as accuracy, precision, recall, F-score, ROC, and lift curves. A comprehensive database for Pokhara City, Nepal, is developed from various sources and includes attributes such as location, cost, popularity, rating, ranking, and trend. The machine-learning models provide intermediate categorical recommendations, which are further mapped using a personalized recommender algorithm that makes decisions based on weights assigned to each decision attribute. The system’s performance is compared with popular recommender systems implemented by TripAdvisor, Google Maps, the Nepal tourism website, and others; the proposed system surpasses them, offering more accurate and optimized recommendations to visitors in Pokhara. This pioneering study holds significant implications for the tourism industry and the governing sector of Nepal in enhancing the overall tourism business. Full article
(This article belongs to the Special Issue Intelligent Computing, Modeling and its Applications)
Show Figures

Figure 1

13 pages, 5414 KiB  
Review
Accurate Height Determination in Uneven Terrains with Integration of Global Navigation Satellite System Technology and Geometric Levelling: A Case Study in Lebanon
by Murat Mustafin and Hiba Moussa
Computation 2024, 12(3), 58; https://doi.org/10.3390/computation12030058 - 13 Mar 2024
Viewed by 874
Abstract
The technology for determining a point’s coordinates on the earth’s surface using the global navigation satellite system (GNSS) is becoming the norm alongside ground-based methods. In this case, determining coordinates does not cause any particular difficulties. However, to identify normal heights with a given accuracy using this technology, special research is required. Satellite determinations of geodetic height (h) over an ellipsoid surface differ from ground-based measurements of normal height (HN) over a quasi-geoid surface by a value called the quasi-geoid height, or height anomaly (ζ). When determining the heights of a given territory using a geoid model, the concept of geoid height (N) is usually employed. In this work, geodetic and normal heights are determined for five control points in three different regions of Lebanon, where measurements are carried out using GNSS technology and geometric levelling. The obtained quasi-geoid heights are compared with geoid heights derived from the global Earth model EGM2008. The results show that, in the absence of gravimetric data, the combination of global Earth model data, geometric levelling for selected areas, and satellite determinations allows for the creation of a highly accurate height network for mountainous areas. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1

20 pages, 2060 KiB  
Article
Turbomachinery GPU Accelerated CFD: An Insight into Performance
by Daniel Molinero-Hernández, Sergio R. Galván-González, Nicolás D. Herrera-Sandoval, Pablo Guzman-Avalos, J. Jesús Pacheco-Ibarra and Francisco J. Domínguez-Mota
Computation 2024, 12(3), 57; https://doi.org/10.3390/computation12030057 - 11 Mar 2024
Viewed by 904
Abstract
Driven by the emergence of Graphics Processing Units (GPUs), the solution of increasingly large and intricate numerical problems has become feasible. Yet, the integration of GPUs into Computational Fluid Dynamics (CFD) codes still presents a significant challenge. This study undertakes an evaluation of the computational performance of GPUs for CFD applications. Two Compute Unified Device Architecture (CUDA)-based implementations within the Open Field Operation and Manipulation (OpenFOAM) environment were employed for the numerical solution of a 3D Kaplan turbine draft tube workbench. A series of tests were conducted to assess the fixed-size grid problem speedup in accordance with Amdahl’s Law. Additionally, tests were performed to identify the optimal configuration utilizing various linear solvers, preconditioners, and smoothers, along with an analysis of memory usage. Full article
Show Figures

Figure 1
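The fixed-size speedup tests above are framed by Amdahl's Law; as a quick reference, the law itself is a one-liner (the 90% parallel fraction below is illustrative, not a figure from the paper):

```python
def amdahl_speedup(parallel_fraction: float, n: float) -> float:
    """Amdahl's Law: overall speedup of a fixed-size problem when a
    fraction of the work is accelerated by a factor n."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# Even with a 20x faster accelerated section, a 10% serial part caps the gain:
s = amdahl_speedup(0.9, 20)                 # ~6.9x overall
limit = amdahl_speedup(0.9, float("inf"))   # asymptotic limit: 10x
```

This is why the study's profiling of the serial (host-side) portion matters as much as raw GPU kernel throughput.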

10 pages, 6283 KiB  
Correction
Correction: Ghnatios et al. A Regularized Real-Time Integrator for Data-Driven Control of Heating Channels. Computation 2022, 10, 176
by Chady Ghnatios, Victor Champaney, Angelo Pasquale and Francisco Chinesta
Computation 2024, 12(3), 56; https://doi.org/10.3390/computation12030056 - 11 Mar 2024
Viewed by 614
Abstract
There was an error in the original publication, section data availability, stating “Data is available upon request” [...] Full article
Show Figures

Figure 3

16 pages, 963 KiB  
Article
Group Classification for the Search and Identification of Related Patterns Using a Variety of Multivariate Techniques
by Nisa Boukichou-Abdelkader, Miguel Ángel Montero-Alonso and Alberto Muñoz-García
Computation 2024, 12(3), 55; https://doi.org/10.3390/computation12030055 - 09 Mar 2024
Viewed by 863
Abstract
Recently, many methods and algorithms have been developed that can be quickly adapted to different situations within a population of interest, especially in the health sector. Success has been achieved by generating better models and higher-quality results to facilitate decision making, as well as by proposing new diagnostic procedures and treatments adapted to each patient. These models can also improve people’s quality of life, dissuade bad health habits, reinforce good ones, and modify pre-existing ones. In this sense, the objective of this study was to apply supervised and unsupervised classification techniques, where the clustering algorithm was the key factor for grouping. This led to the development of three optimal groups of clinical patterns based on their characteristics. The supervised classification methods used in this study were Correspondence Analysis (CA) and Decision Trees (DT), which served as visual aids to identify the possible groups and, at the same time, as exploratory mechanisms to confirm the results against the existing information, which enhanced the value of the final results. In conclusion, this multi-technique approach was found to be feasible when sufficient data are available. It was thus necessary to reduce the dimensional space, impute missing values to obtain high-quality information, and apply classification models to search for patterns in the clinical profiles, with a view to grouping the patients efficiently and accurately so that the clinical results can be applied in other research studies. Full article
Show Figures

Figure 1

14 pages, 2138 KiB  
Article
Creation of a Simulated Sequence of Dynamic Susceptibility Contrast—Magnetic Resonance Imaging Brain Scans as a Tool to Verify the Quality of Methods for Diagnosing Diseases Affecting Brain Tissue Perfusion
by Seweryn Lipiński
Computation 2024, 12(3), 54; https://doi.org/10.3390/computation12030054 - 08 Mar 2024
Viewed by 859
Abstract
DSC-MRI examination is one of the best methods for diagnosing brain diseases. For this purpose, so-called perfusion parameters are defined, of which the most used are CBF, CBV, and MTT. There are many approaches to determining these parameters, but regardless of the approach, there is a problem with the quality assessment of methods. To solve this problem, this article proposes a virtual DSC-MRI brain examination, which consists of two steps. The first step is to create curves that are typical of DSC-MRI studies and characteristic of different brain regions, i.e., the gray and white matter and blood vessels. Using perfusion descriptors, the curves are classified into three sets, which give the model curves for each of the three regions. The curves corresponding to the perfusion of different regions of the brain, in an arrangement consistent with human anatomy, form a model of the DSC-MRI examination. In the created model, the values of the complex perfusion parameters, as well as the basic perfusion descriptors, are known in advance. The model study can be disturbed in a controlled manner, not only by adding noise but also by determining the location of disturbances that are characteristic of specific brain diseases. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
Show Figures

Figure 1
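The perfusion parameters named above follow standard definitions; as an illustrative sketch (gamma-variate parameters made up, no arterial-input-function deconvolution), the area and normalized first moment of a concentration-time curve give a quantity proportional to CBV and the mean transit time, respectively:

```python
import numpy as np

# Gamma-variate bolus model C(t) = A * t^alpha * exp(-t / beta), the classic
# shape for DSC-MRI concentration-time curves (parameters here are invented).
t = np.linspace(0.0, 60.0, 6001)       # time axis in seconds
A, alpha, beta = 1.0, 3.0, 1.5
C = A * t**alpha * np.exp(-t / beta)

dt = t[1] - t[0]
area = float((C * dt).sum())           # integral of C(t): proportional to CBV
mtt = float((t * C * dt).sum()) / area # normalized first moment: MTT
# For this model the first moment is (alpha + 1) * beta = 6.0 s.
```

CBF then follows from the central volume theorem, CBF = CBV / MTT, which is why a synthetic model with known parameters can serve as ground truth for method validation.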

15 pages, 630 KiB  
Article
Taylor Polynomials in a High Arithmetic Precision as Universal Approximators
by Nikolaos Bakas
Computation 2024, 12(3), 53; https://doi.org/10.3390/computation12030053 - 07 Mar 2024
Viewed by 944
Abstract
Function approximation is a fundamental process in a variety of problems in computational mechanics, structural engineering, and other domains that require the precise approximation of a phenomenon with an analytic function. This work demonstrates a unified approach to these techniques, utilizing partial sums of the Taylor series in high arithmetic precision. In particular, the proposed approach is capable of interpolation, extrapolation, numerical differentiation, numerical integration, solution of ordinary and partial differential equations, and system identification. The method employs Taylor polynomials and hundreds of digits in the computations to obtain precise results. Interestingly, some well-known problems are found to arise from calculation accuracy rather than methodological inefficiencies, as would be expected. In particular, the approximation errors are precisely predictable, the Runge phenomenon is eliminated, and the extrapolation extent may be anticipated a priori. The attained polynomials offer a precise representation of the unknown system as well as its radius of convergence, which provides a rigorous estimate of the prediction ability. The approximation errors are comprehensively analyzed for a variety of calculation digits and test problems and can be reproduced with the provided computer code. Full article
(This article belongs to the Special Issue Computational Methods in Structural Engineering)
Show Figures

Figure 1
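The paper's own code is referenced but not reproduced here; the core idea, partial Taylor sums evaluated in very high arithmetic precision, can be sketched with Python's standard decimal module (exp at 120 digits is an illustrative choice, not the paper's setup):

```python
from decimal import Decimal, getcontext

getcontext().prec = 120                 # work with ~120 significant digits

def exp_taylor(x: Decimal, n_terms: int) -> Decimal:
    """Partial Taylor sum of exp(x) about 0 in high-precision arithmetic."""
    term, total = Decimal(1), Decimal(1)
    for k in range(1, n_terms):
        term *= x / k                   # x^k / k! built incrementally
        total += term
    return total

e = exp_taylor(Decimal(1), 100)
# Truncation error after 100 terms is ~1/100! (~1e-158), far below the
# 120-digit working precision, so the result is accurate to the full context.
```

In double precision the same partial sum would stall at ~1e-16 accuracy; raising the working precision is what makes the truncation error, rather than roundoff, the observable quantity.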

26 pages, 11579 KiB  
Article
Algorithm for Propeller Optimization Based on Differential Evolution
by Andry Sedelnikov, Evgenii Kurkin, Jose Gabriel Quijada-Pioquinto, Oleg Lukyanov, Dmitrii Nazarov, Vladislava Chertykovtseva, Ekaterina Kurkina and Van Hung Hoang
Computation 2024, 12(3), 52; https://doi.org/10.3390/computation12030052 - 06 Mar 2024
Viewed by 1022
Abstract
This paper describes the development of a methodology for air propeller optimization using Bézier curves to describe blade geometry. The proposed approach allows for more flexibility in setting the propeller shape, for example, using a variable airfoil over the blade span. The goal of the optimization is to identify the propeller geometry that minimizes the power required to achieve a given thrust. Because the proposed optimization problem is constrained, a penalty function was used to convert it into an unconstrained optimization. A variant of the differential evolution algorithm was used, which includes adaptive techniques for the evolutionary operators and a population size reduction method. The aerodynamic characteristics of the propellers were obtained using the isolated section method (ISM), similar to blade element momentum theory (BEMT), and the XFOIL program. Replacing the angle of geometric twist with the angle of attack of the airfoil section as a design variable made it possible to increase the robustness of the optimization algorithm and reduce the calculation time. The optimization technique was implemented in the OpenVINT code and has been used to design helicopter and tractor propellers for unmanned aerial vehicles. The developed algorithm was validated experimentally and with a CFD numerical method. The experimental tests confirm that the optimized propeller geometry is superior to commercial analogues available on the market. Full article
Show Figures

Figure 1
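Neither the paper's adaptive DE variant nor its XFOIL-coupled objective is reproduced here; a minimal DE/rand/1/bin loop with a penalty-converted constraint (toy objective, invented bounds and control parameters) illustrates the core mechanics:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=300, seed=1):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Constrained toy problem via penalty: minimize sum(x^2) s.t. x[0] >= 0.5,
# mirroring how a constrained design problem becomes unconstrained.
def penalized(x):
    return float(np.sum(x**2) + 1e3 * max(0.0, 0.5 - x[0])**2)

bounds = np.array([[-2.0, 2.0]] * 3)
x_best, f_best = differential_evolution(penalized, bounds)
```

In the paper's setting, `penalized` would wrap the ISM/XFOIL power evaluation at fixed thrust, and the adaptive machinery would tune F, CR, and the population size during the run.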

18 pages, 3048 KiB  
Article
Extension of Cubic B-Spline for Solving the Time-Fractional Allen–Cahn Equation in the Context of Mathematical Physics
by Mubeen Fatima, Ravi P. Agarwal, Muhammad Abbas, Pshtiwan Othman Mohammed, Madiha Shafiq and Nejmeddine Chorfi
Computation 2024, 12(3), 51; https://doi.org/10.3390/computation12030051 - 05 Mar 2024
Viewed by 990
Abstract
A B-spline is defined by its degree and number of knots, and it is observed to provide a high level of flexibility in curve and surface layout. In this study, extended cubic B-spline (ExCBS) functions with a new approximation for the second derivative and a finite difference technique are incorporated to solve the time-fractional Allen–Cahn equation (TFACE). Initially, Caputo’s formula is used to discretize the time-fractional derivative, while the new ExCBS is used for the spatial derivative’s discretization. Convergence analysis is carried out and the stability of the proposed method is analyzed. The scheme’s applicability and feasibility are demonstrated through numerical analysis. Full article
Show Figures

Figure 1

14 pages, 1625 KiB  
Article
Numerical Covariance Evaluation for Linear Structures Subject to Non-Stationary Random Inputs
by M. Domaneschi, R. Cucuzza, L. Sardone, S. Londoño Lopez, M. Movahedi and G. C. Marano
Computation 2024, 12(3), 50; https://doi.org/10.3390/computation12030050 - 05 Mar 2024
Viewed by 774
Abstract
Random vibration analysis is a mathematical tool that offers great advantages in predicting the mechanical response of structural systems subjected to external dynamic loads whose nature is intrinsically stochastic, as in the cases of sea waves, wind pressure, and vibrations due to road asperity. When the input is properly modeled as a stochastic process, random vibration analysis makes it possible to derive high-quality information about the structural response (compared with other tools), especially in terms of reliability prediction. However, the random vibration approach becomes quite complex in cases of non-linearity, as well as for non-stationary inputs, as in the case of seismic events. For non-stationary inputs, the assessment of second-order spectral moments requires solving the Lyapunov matrix differential equation. In this research, a numerical procedure is proposed that provides an expression of the response in the state space which, to the best of our knowledge, has not yet been presented in the literature, with a formal justification for earthquake input modeled as modulated white noise with evolving parameters. The computational effort is reduced by exploiting the symmetry of the covariance matrix. The approach is applied to a multi-story building, aiming to determine the reliability related to the maximum inter-story displacement surpassing a specified acceptable threshold. The building is presumed to experience seismic input characterized by a process that is non-stationary in both amplitude and frequency, utilizing a general Kanai–Tajimi stationary earthquake input model. The case study is modeled as a multi-degree-of-freedom plane shear frame system. Full article
(This article belongs to the Special Issue Computational Methods in Structural Engineering)
Show Figures

Figure 1
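For a stationary white-noise input, the Lyapunov matrix differential equation mentioned above settles to its algebraic form; a sketch for a single-degree-of-freedom oscillator (illustrative coefficients, forward-Euler time stepping, not the paper's procedure) shows the covariance propagation in state space:

```python
import numpy as np

# State-space SDOF oscillator: x = [displacement, velocity],
# dx/dt = A x + white noise entering the velocity equation.
k, c = 4.0, 1.0                        # stiffness and damping (illustrative)
A = np.array([[0.0,  1.0],
              [-k,   -c ]])
Q = np.array([[0.0, 0.0],
              [0.0, 1.0]])             # input intensity B W B^T

# Lyapunov differential equation dP/dt = A P + P A^T + Q,
# integrated with forward Euler until the response becomes stationary.
P = np.zeros((2, 2))
dt = 1e-3
for _ in range(50_000):
    P = P + dt * (A @ P + P @ A.T + Q)

residual = np.linalg.norm(A @ P + P @ A.T + Q)   # ~0 at stationarity
```

At stationarity P solves the algebraic Lyapunov equation exactly (here the displacement and velocity variances converge to 1/(2ck) = 0.125 and 1/(2c) = 0.5); for the modulated, non-stationary input of the paper, P(t) must instead be tracked through time, which is where the proposed procedure and the covariance-symmetry savings apply.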

19 pages, 17607 KiB  
Article
Systematic Investigation of the Explicit, Dynamically Consistent Methods for Fisher’s Equation
by Husniddin Khayrullaev, Issa Omle and Endre Kovács
Computation 2024, 12(3), 49; https://doi.org/10.3390/computation12030049 - 04 Mar 2024
Viewed by 926
Abstract
We systematically investigate the performance of numerical methods to solve Fisher’s equation, which contains a linear diffusion term and a nonlinear logistic term. The usual explicit finite difference algorithms are only conditionally stable for this equation, and they can yield concentrations below zero or above one, even if they are stable. Here, we collect the stable and explicit algorithms, most of which we invented recently. All of them are unconditionally dynamically consistent for Fisher’s equation; thus, the concentration remains in the unit interval for arbitrary parameters. We perform tests in the cases of 1D and 2D systems to explore how the errors depend on the coefficient of the nonlinear term, the stiffness ratio, and the anisotropy of the system. We also measure running times and recommend which algorithms should be used in specific circumstances. Full article
Show Figures

Figure 1
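The paper's own algorithms are not reproduced in this listing; one well-known example of the kind of explicit, dynamically consistent scheme described, a nonstandard finite-difference update that keeps the concentration in [0,1] for any time step, can be sketched as follows (periodic boundary via np.roll, invented parameters):

```python
import numpy as np

def fisher_nsfd_step(u, D, r, dt, h):
    """One explicit, dynamically consistent step for Fisher's equation
    u_t = D u_xx + r u (1 - u).

    Nonstandard finite-difference update
        u_new = (u + R (u_left + u_right) + dt r u) / (1 + 2R + dt r u),
    with R = D dt / h^2. For 0 <= u <= 1 the numerator never exceeds the
    denominator, so u_new stays in [0, 1] for ANY time step dt.
    """
    R = D * dt / h**2
    left, right = np.roll(u, 1), np.roll(u, -1)   # periodic neighbours
    return (u + R * (left + right) + dt * r * u) / (1.0 + 2.0 * R + dt * r * u)

# Step initial data on a periodic grid, with a deliberately huge time step
# that would make a naive explicit scheme blow up.
x = np.linspace(0.0, 1.0, 101)
u = np.where(x < 0.3, 1.0, 0.0)
for _ in range(200):
    u = fisher_nsfd_step(u, D=1.0, r=5.0, dt=10.0, h=x[1] - x[0])
```

The concentration remains in the unit interval regardless of dt, which is the dynamic-consistency property the paper's collected algorithms share; the schemes it actually benchmarks differ in construction and accuracy.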

21 pages, 4380 KiB  
Article
Analyzing the MHD Bioconvective Eyring–Powell Fluid Flow over an Upright Cone/Plate Surface in a Porous Medium with Activation Energy and Viscous Dissipation
by Francis Peter, Paulsamy Sambath and Seshathiri Dhanasekaran
Computation 2024, 12(3), 48; https://doi.org/10.3390/computation12030048 - 04 Mar 2024
Cited by 1 | Viewed by 1000
Abstract
Non-Newtonian fluids play a very important role in heat and mass transfer applications. This study examines the magnetohydrodynamic (MHD) bioconvective Eyring–Powell fluid flow on a permeable cone and plate, considering the viscous dissipation (0.3 ≤ Ec ≤ 0.7), the uniform heat source/sink (−0.1 ≤ Q0 ≤ 0.1), and the activation energy (−1 ≤ E1 ≤ 1). The primary focus of this study is to examine how MHD and porosity impact heat and mass transfer in a fluid with microorganisms. A similarity transformation (ST) changes the nonlinear partial differential equations (PDEs) into ordinary differential equations (ODEs), which are solved with the Keller Box (KB) finite difference method. Our findings demonstrate that adding MHD (0.5 ≤ M ≤ 0.9) and porosity (0.3 ≤ Γ ≤ 0.7) effects improves microbial diffusion, boosting the rates of mass and heat transfer. A comparison of our findings with prior studies shows that they are reliable. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1

19 pages, 2056 KiB  
Study Protocol
Artificial Intelligence Techniques and Pedigree Charts in Oncogenetics: Towards an Experimental Multioutput Software System for Digitization and Risk Prediction
by Luana Conte, Emanuele Rizzo, Tiziana Grassi, Francesco Bagordo, Elisabetta De Matteis and Giorgio De Nunzio
Computation 2024, 12(3), 47; https://doi.org/10.3390/computation12030047 - 03 Mar 2024
Viewed by 1188
Abstract
Pedigree charts remain essential in oncological genetic counseling for identifying individuals with an increased risk of developing hereditary tumors. However, this valuable data source often remains confined to paper files, going unused. We propose a computer-aided detection/diagnosis system, based on machine learning and deep learning techniques, capable of the following: (1) assisting genetic oncologists in digitizing paper-based pedigree charts, and in generating new digital ones, and (2) automatically predicting the genetic predisposition risk directly from these digital pedigree charts. To the best of our knowledge, there are no similar studies in the current literature, and consequently, no utilization of software based on artificial intelligence on pedigree charts has been made public yet. By incorporating medical images and other data from omics sciences, there is also a fertile ground for training additional artificial intelligence systems, broadening the software predictive capabilities. We plan to bridge the gap between scientific advancements and practical implementation by modernizing and enhancing existing oncological genetic counseling services. This would mark the pioneering development of an AI-based application designed to enhance various aspects of genetic counseling, leading to improved patient care and advancements in the field of oncogenetics. Full article
(This article belongs to the Special Issue Computational Biology and High-Performance Computing)
Show Figures

Figure 1

32 pages, 7198 KiB  
Article
Boundary Layer Stagnation Point Flow and Heat Transfer over a Nonlinear Stretching/Shrinking Sheet in Hybrid Carbon Nanotubes: Numerical Analysis and Response Surface Methodology under the Influence of Magnetohydrodynamics
by Nazrul Azlan Abdul Samat, Norfifah Bachok and Norihan Md Arifin
Computation 2024, 12(3), 46; https://doi.org/10.3390/computation12030046 - 03 Mar 2024
Viewed by 942
Abstract
The present study aims to offer new numerical solutions and optimisation strategies for the fluid flow and heat transfer behaviour at a stagnation point through a nonlinear sheet that is expanding or contracting in water-based hybrid nanofluids. Most hybrid nanofluids typically use metallic nanoparticles. However, we deliver a new approach by combining single- and multi-walled carbon nanotubes (SWCNTs-MWCNTs). The flow is presumptively steady, laminar, and surrounded by a constant temperature of the ambient and body walls. By using similarity variables, a model of partial differential equations (PDEs) with the magnetohydrodynamics (MHD) effect on the momentum equation is converted into a model of non-dimensional ordinary differential equations (ODEs). Then, the dimensionless first-order ODEs are solved numerically using the MATLAB R2022b bvp4C program. In order to explore the range of computational solutions and physical quantities, several dimensionless variables are manipulated, including the magnetic parameter, the stretching/shrinking parameter, and the volume fraction parameters of hybrid and mono carbon nanotubes. To enhance the originality and effectiveness of this study for practical applications, we optimise the heat transfer coefficient via the response surface methodology (RSM). We apply a face-centred central composite design (CCF) and perform the CCF using Minitab. All of our findings are presented and illustrated in tabular and graphic form. We have made notable contributions in the disciplines of mathematical analysis and fluid dynamics. From our observations, we find that multiple solutions appear when the magnetic parameter is less than 1. We also detect double solutions in the shrinking region. Furthermore, the increase in the magnetic parameter and SWCNTs-MWCNTs volume fraction parameter increases both the skin friction coefficient and the local Nusselt number. 
Comparing the performance of hybrid and mono nanofluids, we note that hybrid nanofluids outperform mono nanofluids in both the skin friction and heat transfer coefficients. Full article
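The similarity reduction described above turns the PDE model into a two-point boundary value problem, which solvers such as bvp4c handle by collocation. As a rough, self-contained illustration of the same idea (not the authors' code: the classical Hiemenz stagnation-point equation with an assumed magnetic term M(1 − f′) is used, and every parameter value below is illustrative), a shooting-method sketch in plain Python:

```python
# Shooting sketch for a stagnation-point boundary value problem:
#   f''' + f f'' + 1 - f'^2 + M (1 - f') = 0,
#   f(0) = 0, f'(0) = lam, f'(inf) = 1.
# Bisection on the unknown wall shear f''(0), integrating with RK4.

def hiemenz_residual(fpp0, M=0.0, lam=0.0, L=8.0, n=1600):
    """Integrate from eta = 0 with f''(0) = fpp0; return a signed value
    saying whether the far-field condition f' -> 1 is over- or undershot."""
    h = L / n
    f, fp, fpp = 0.0, lam, fpp0

    def rhs(f, fp, fpp):
        return fp, fpp, -(f * fpp + 1.0 - fp * fp + M * (1.0 - fp))

    for _ in range(n):
        k1 = rhs(f, fp, fpp)
        k2 = rhs(f + 0.5 * h * k1[0], fp + 0.5 * h * k1[1], fpp + 0.5 * h * k1[2])
        k3 = rhs(f + 0.5 * h * k2[0], fp + 0.5 * h * k2[1], fpp + 0.5 * h * k2[2])
        k4 = rhs(f + h * k3[0], fp + h * k3[1], fpp + h * k3[2])
        f += (h / 6.0) * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0])
        fp += (h / 6.0) * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1])
        fpp += (h / 6.0) * (k1[2] + 2.0 * k2[2] + 2.0 * k3[2] + k4[2])
        if fp > 1.5:   # f''(0) too large: profile overshoots f' = 1
            return 1.0
        if fp < -0.5:  # f''(0) too small: profile collapses
            return -1.0
    return fp - 1.0

def solve_skin_friction(M=0.0, lam=0.0):
    """Bisect on f''(0), which is proportional to the skin friction."""
    lo, hi = 0.0, 3.0  # bracket: residual(lo) < 0 < residual(hi)
    for _ in range(48):
        mid = 0.5 * (lo + hi)
        if hiemenz_residual(mid, M, lam) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For M = 0 and lam = 0 this recovers the classical Hiemenz value f″(0) ≈ 1.2326, and increasing M raises the wall shear, consistent with the abstract's finding that the magnetic parameter increases the skin friction coefficient.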
16 pages, 2428 KiB  
Article
Unraveling the Dual Inhibitory Mechanism of Compound 22ac: A Molecular Dynamics Investigation into ERK1 and ERK5 Inhibition in Cancer
by Elliasu Y. Salifu, Mbuso A. Faya, James Abugri and Pritika Ramharack
Computation 2024, 12(3), 45; https://doi.org/10.3390/computation12030045 - 01 Mar 2024
Viewed by 1172
Abstract
Cancer remains a major challenge in the field of medicine, necessitating innovative therapeutic strategies. Mitogen-activated protein kinase (MAPK) signaling pathways, particularly Extracellular Signal-Regulated Kinase 1 and 2 (ERK1/2), play pivotal roles in cancer pathogenesis. Recently, ERK5 (also known as MAPK7) has emerged as an attractive target due to its compensatory role in cancer progression upon termination of ERK1 signaling. This study explores the potential of Compound 22ac, a novel small molecule inhibitor, to simultaneously target both ERK1 and ERK5 in cancer cells. Using molecular dynamics simulations, we investigate the binding affinity, conformational dynamics, and stability of Compound 22ac when interacting with ERK1 and ERK5. Our results indicate that Compound 22ac forms strong interactions with key residues in the ATP-binding pocket of both ERK1 and ERK5, effectively inhibiting their catalytic activity. Furthermore, the simulations reveal subtle differences in the binding modes of Compound 22ac within the two kinases, shedding light on the dual inhibitory mechanism. This research not only elucidates a structural mechanism of action of Compound 22ac, but also highlights its potential as a promising therapeutic agent for cancer treatment. The dual inhibition of ERK1 and ERK5 by Compound 22ac offers a novel approach to disrupting the MAPK signaling cascade, thereby hindering cancer progression. These findings may contribute to the development of targeted therapies that could improve the prognosis for cancer patients. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Chemistry)
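Binding-pocket interactions of the kind reported above are commonly summarized with a distance-based contact criterion between ligand and protein atoms. A minimal NumPy sketch of that criterion (illustrative only: the coordinates, residue IDs, and 4 Å cutoff are assumptions, and real trajectory analyses use dedicated tools such as MDAnalysis or CPPTRAJ):

```python
import numpy as np

def binding_pocket_residues(protein_xyz, residue_ids, ligand_xyz, cutoff=4.0):
    """Sorted IDs of residues having any atom within `cutoff` angstroms of
    any ligand atom -- a simple distance-based definition of the pocket."""
    protein_xyz = np.asarray(protein_xyz, dtype=float)
    ligand_xyz = np.asarray(ligand_xyz, dtype=float)
    # pairwise distance matrix, shape (n_protein_atoms, n_ligand_atoms)
    d = np.linalg.norm(protein_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
    in_contact = d.min(axis=1) <= cutoff
    return sorted(set(np.asarray(residue_ids)[in_contact].tolist()))
```

Applied frame-by-frame over an MD trajectory, the persistence of such contacts is one way to compare how a ligand like Compound 22ac sits in the ATP-binding pockets of two kinases.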
25 pages, 7331 KiB  
Article
A Deep Learning Approach for Brain Tumor Firmness Detection Based on Five Different YOLO Versions: YOLOv3–YOLOv7
by Norah Fahd Alhussainan, Belgacem Ben Youssef and Mohamed Maher Ben Ismail
Computation 2024, 12(3), 44; https://doi.org/10.3390/computation12030044 - 01 Mar 2024
Cited by 1 | Viewed by 1578
Abstract
Brain tumor diagnosis traditionally relies on the manual examination of magnetic resonance images (MRIs), a process that is prone to human error and time-consuming. Recent advancements leverage machine learning models to categorize tumors, such as distinguishing between “malignant” and “benign” classes. This study focuses on the supervised machine learning task of classifying “firm” and “soft” meningiomas, a distinction critical for determining optimal brain tumor treatment. The research aims to enhance meningioma firmness detection using state-of-the-art deep learning architectures. The study employs a YOLO architecture adapted for meningioma classification (firm vs. soft); this YOLO-based model serves as the machine learning component of a proposed CAD system. To improve model generalization and combat overfitting, transfer learning and data augmentation techniques are explored. Intra-model analysis is conducted for each of the five YOLO versions, optimizing parameters such as the optimizer, batch size, and learning rate based on sensitivity and training time. YOLOv3, YOLOv4, and YOLOv7 demonstrate exceptional sensitivity, reaching 100%. A comparative analysis highlights their superiority over other state-of-the-art models. YOLOv7, utilizing the SGD optimizer, a batch size of 64, and a learning rate of 0.01, achieves outstanding overall performance, with a mean average precision of 99.96%, precision of 98.50%, specificity of 97.95%, balanced accuracy of 98.97%, and F1-score of 99.24%. This research showcases the effectiveness of YOLO architectures in meningioma firmness detection, with YOLOv7 emerging as the optimal model. The study’s findings underscore the significance of model selection and parameter optimization for achieving high sensitivity and robust overall performance in brain tumor classification. Full article
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)
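The metrics reported in this abstract all derive from binary confusion-matrix counts, with the "firm" class taken as positive. A small sketch of the standard definitions (not the authors' evaluation code; the counts in the usage note are made up):

```python
def classification_metrics(tp, fp, tn, fn):
    """Binary-classification metrics from confusion-matrix counts,
    with "firm" treated as the positive class."""
    sensitivity = tp / (tp + fn)          # recall on firm tumors
    specificity = tn / (tn + fp)          # recall on soft tumors
    precision = tp / (tp + fp)
    balanced_accuracy = 0.5 * (sensitivity + specificity)
    f1 = 2.0 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "balanced_accuracy": balanced_accuracy,
        "f1": f1,
    }
```

For example, `classification_metrics(tp=9, fp=2, tn=8, fn=1)` gives a sensitivity of 0.9 and a balanced accuracy of 0.85; balanced accuracy averages the two per-class recalls, which matters when firm and soft cases are unevenly represented.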
20 pages, 889 KiB  
Article
Entropy Generation and Thermal Radiation Impact on Magneto-Convective Flow of Heat-Generating Hybrid Nano-Liquid in a Non-Darcy Porous Medium with Non-Uniform Heat Flux
by Nora M. Albqmi and Sivasankaran Sivanandam
Computation 2024, 12(3), 43; https://doi.org/10.3390/computation12030043 - 29 Feb 2024
Viewed by 952
Abstract
The principal objective of the study is to examine the impact of thermal radiation and entropy generation on the magnetohydrodynamic hybrid nano-fluid, Al₂O₃/H₂O, flow in a Darcy–Forchheimer porous medium with variable heat flux when subjected to an electric field. Investigating the impact of thermal radiation and non-uniform heat flux on the hybrid nano-liquid magnetohydrodynamic flow in a non-Darcy porous environment produces novel and insightful findings, and this is the goal of the current study. The non-linear governing equations are reduced to a set of ordinary differential equations by applying the proper transformations. The resultant dimensionless model is solved numerically in MATLAB using the bvp4c solver. We obtain numerical results for the temperature and velocity distributions, skin friction, and local Nusselt number across a broad range of controlling parameters, and find a significant degree of agreement with previously published studies. The results show that an increase in the Reynolds and Brinkman numbers corresponds to an increase in entropy production. Furthermore, a high electric field accelerates fluid velocity, whereas the unsteadiness parameter and the presence of a magnetic field slow it down. This study is beneficial to other researchers as well as to technical applications in thermal science because it discusses the factors that lead to the working hybrid nano-liquid's thermal enhancement. Full article
17 pages, 2693 KiB  
Article
Predicting Time-to-Healing from a Digital Wound Image: A Hybrid Neural Network and Decision Tree Approach Improves Performance
by Aravind Kolli, Qi Wei and Stephen A. Ramsey
Computation 2024, 12(3), 42; https://doi.org/10.3390/computation12030042 - 28 Feb 2024
Viewed by 1003
Abstract
Despite the societal burden of chronic wounds and despite advances in image processing, automated image-based prediction of wound prognosis is not yet in routine clinical practice. While specific tissue types are known to be positive or negative prognostic indicators, image-based wound healing prediction systems demonstrated to date do not (1) use information about the proportions of tissue types within the wound or (2) predict time-to-healing (most predict categorical clinical labels). In this work, we analyzed a unique dataset of time-series images of healing wounds from a controlled study in dogs, as well as human wound images annotated for tissue-type composition. In the context of a hybrid-learning approach (neural network segmentation and decision tree regression) for the image-based prediction of time-to-healing, we tested whether explicitly incorporating tissue-type-derived features into the model would improve the accuracy of time-to-healing prediction versus not including such features. We tested four deep convolutional encoder–decoder neural network models for wound image segmentation and identified, in the context of both the original wound images and an augmented wound image set, that a SegNet-type network trained on the augmented image set has the best segmentation performance. Furthermore, using three different regression algorithms, we evaluated models for predicting wound time-to-healing using features extracted from the four best-performing segmentation models. We found that XGBoost regression using features that are (i) extracted from a SegNet-type network and (ii) reduced using principal components analysis performed best for time-to-healing prediction. We demonstrated that a neural network model can classify the regions of a wound image as one of four tissue types, and that adding features derived from the superpixel classifier improves the performance of healing-time prediction. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis)
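The feature-reduction step named in the abstract, principal components analysis ahead of the regression stage, can be sketched with a centered SVD (a generic illustration of PCA, not the paper's pipeline; the XGBoost stage itself is omitted):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of the feature matrix X onto its top principal
    components via a centered SVD; also return the fraction of total
    variance each retained component explains."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # reduced features
    explained = (S[:n_components] ** 2) / (S ** 2).sum()
    return scores, explained
```

The reduced `scores` matrix would then be fed to the regressor in place of the raw segmentation-derived features; keeping only the leading components discards correlated, low-variance directions.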
14 pages, 1733 KiB  
Article
Physically Informed Deep Learning Technique for Estimating Blood Flow Parameters in Four-Vessel Junction after the Fontan Procedure
by Alexander Isaev, Tatiana Dobroserdova, Alexander Danilov and Sergey Simakov
Computation 2024, 12(3), 41; https://doi.org/10.3390/computation12030041 - 25 Feb 2024
Viewed by 1276
Abstract
This study introduces an innovative approach leveraging physics-informed neural networks (PINNs) for the efficient computation of blood flows at the boundaries of a four-vessel junction formed by a Fontan procedure. The methodology incorporates a 3D mesh generation technique based on the parameterization of the junction’s geometry, coupled with an advanced physically regularized neural network architecture. Synthetic datasets are generated through stationary 3D Navier–Stokes simulations within immobile boundaries, offering a precise alternative to resource-intensive computations. A comparative analysis of standard grid sampling and Latin hypercube sampling data generation methods is conducted, resulting in datasets comprising 1.1×10⁴ and 5×10³ samples, respectively. The following two families of feed-forward neural networks (FFNNs) are then compared: the conventional “black-box” approach using mean squared error (MSE) and a physically informed FFNN employing a physically regularized loss function (PRLF), incorporating the mass conservation law. The study demonstrates that combining the PRLF with Latin hypercube sampling enables the rapid minimization of relative error (RE) when using the smaller dataset, achieving a relative error value of 6% on the test set. This approach offers a viable alternative to resource-intensive simulations, showcasing potential applications in patient-specific 1D network models of hemodynamics. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Biology)
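The physically regularized loss function (PRLF) augments a data-fit term with a mass-conservation penalty: the four signed vessel flows meeting at the junction should sum to zero. A minimal NumPy sketch of such a loss (the weighting `lam` and the array layout are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def prlf_loss(pred_flows, true_flows, lam=1.0):
    """Data-fit MSE plus a mass-conservation penalty: the signed flows of
    the four vessels meeting at the junction must sum to zero for every
    sample, so any nonzero net flow is penalized."""
    pred_flows = np.asarray(pred_flows, dtype=float)
    true_flows = np.asarray(true_flows, dtype=float)
    mse = np.mean((pred_flows - true_flows) ** 2)
    net_flow = pred_flows.sum(axis=1)  # one net-flow value per sample
    return mse + lam * np.mean(net_flow ** 2)
```

Because the penalty depends only on the prediction, it steers the network toward physically admissible flow splits even where training data are sparse, which is the mechanism the abstract credits for the fast error reduction on the smaller Latin-hypercube dataset.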
18 pages, 4576 KiB  
Article
High-Compression Crash Simulations and Tests of PLA Cubes Fabricated Using Additive Manufacturing FDM with a Scaling Strategy
by Andres-Amador Garcia-Granada
Computation 2024, 12(3), 40; https://doi.org/10.3390/computation12030040 - 23 Feb 2024
Viewed by 1111
Abstract
Impacts due to drops or crashes between moving vehicles necessitate the search for energy absorption elements to prevent damage to the transported goods or individuals. To ensure safety, a maximum acceptable level of deceleration is specified. The optimization of deformable parts to absorb impact energy is typically conducted through explicit simulations, in which kinetic energy is converted into plastic deformation energy. The introduction of additive manufacturing techniques enables this optimization to be conducted with more efficient shapes, previously unachievable with conventional manufacturing methods. This paper presents an initial approach to validating explicit simulations of impacts against solid cubes of varying sizes and fabrication directions. The cubes were fabricated from PLA, the most commonly used material, on a desktop printer. All simulations could be conducted using a single material law description, employing solid elements with a controlled time step suitable for industrial applications. With this approach, the simulations were capable of predicting deceleration levels across a broad range of impact configurations for solid cubes. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
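Mass scaling, mentioned in the cover story for this article, raises element density so that the explicit stability limit dt ≈ L/c (with dilatational wave speed c = √(E/ρ)) reaches a target time step; since dt scales with √ρ, the required density multiplier is (dt_target/dt₀)². A sketch with illustrative PLA-like values (the E and ρ figures below are generic assumptions, not the paper's calibrated material data):

```python
import math

def stable_time_step(elem_size, youngs_modulus, density):
    """Explicit-dynamics stability limit dt = L / c, where
    c = sqrt(E / rho) is the dilatational wave speed."""
    wave_speed = math.sqrt(youngs_modulus / density)
    return elem_size / wave_speed

def mass_scaling_factor(elem_size, youngs_modulus, density, target_dt):
    """Density multiplier needed so the stable step reaches target_dt.
    Since dt scales with sqrt(rho), the factor is (target_dt / dt0)^2;
    1.0 means the mesh already meets the target without added mass."""
    dt0 = stable_time_step(elem_size, youngs_modulus, density)
    return max(1.0, (target_dt / dt0) ** 2)
```

For a 1 mm element with E ≈ 3.5 GPa and ρ ≈ 1240 kg/m³, the unscaled stable step is well below a microsecond, which is why explicit crash runs are costly; the added mass from scaling must stay small enough not to distort the predicted decelerations.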
13 pages, 4648 KiB  
Article
Data-Driven Anisotropic Biomembrane Simulation Based on the Laplace Stretch
by Alexey Liogky and Victoria Salamatova
Computation 2024, 12(3), 39; https://doi.org/10.3390/computation12030039 - 22 Feb 2024
Viewed by 962
Abstract
Data-driven simulations are gaining popularity in the mechanics of biomaterials since they do not require an explicit form of the constitutive relations, but data-driven modeling based on neural networks lacks interpretability. In this study, we propose an interpretable data-driven finite element modeling approach for hyperelastic materials. This approach employs the Laplace stretch as the strain measure and utilizes response functions to define the constitutive equations. To validate the proposed method, we apply it to the inflation of anisotropic membranes on the basis of synthetic data for porcine skin represented by the Holzapfel–Gasser–Ogden model. Our results demonstrate the applicability of the method and show good agreement with reference displacements, although some discrepancies are observed in the stress calculations. Despite these discrepancies, the proposed method demonstrates its potential usefulness for the simulation of hyperelastic biomaterials. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Biology)
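The Laplace stretch used here as the strain measure is the upper-triangular factor of a Gram–Schmidt (QR) factorization of the deformation gradient F. A minimal NumPy sketch of that kinematic decomposition (illustrative only, not the authors' finite element code):

```python
import numpy as np

def laplace_stretch(F):
    """Upper-triangular Laplace stretch U from the Gram-Schmidt (QR)
    factorization F = R U, where R is a rotation; the diagonal of U is
    forced positive so the factorization is unique."""
    Q, U = np.linalg.qr(np.asarray(F, dtype=float))
    sign = np.sign(np.diag(U))
    sign[sign == 0.0] = 1.0
    # F = (Q D)(D U) with D = diag(sign) and D D = I, so D U is still
    # a valid triangular factor, now with positive diagonal entries
    return np.diag(sign) @ U
```

Because rigid rotations are absorbed into R, the six independent entries of U are directly measurable quantities, which is what makes response functions defined on the Laplace stretch interpretable.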
16 pages, 3793 KiB  
Article
Method to Forecast the Presidential Election Results Based on Simulation and Machine Learning
by Luis Zuloaga-Rotta, Rubén Borja-Rosales, Mirko Jerber Rodríguez Mallma, David Mauricio and Nelson Maculan
Computation 2024, 12(3), 38; https://doi.org/10.3390/computation12030038 - 22 Feb 2024
Viewed by 2031
Abstract
The forecasting of presidential election results (PERs) is a very complex problem due to the diversity of electoral factors and the uncertainty involved. A hybrid approach combining machine learning (ML) and simulation is promising for forecasting tasks because the former yields good results but requires a good balance between data quantity and quality, while the latter supplies that requirement; nonetheless, each technique has its own limitations, parameters, processes, and application contexts, which should be treated as a whole to improve the results. This study proposes a systematic method to build a model that forecasts PERs with high precision, based on the factors that influence voter preferences and on the use of ML and simulation techniques. The method consists of four phases, uses contextual and synthetic data, and follows a procedure that guarantees high precision in predicting the PER. The method was applied to real cases in Brazil, Uruguay, and Peru, resulting in a predictive model with 100% agreement with the actual first-round results in all cases. Full article
(This article belongs to the Special Issue Computational Social Science and Complex Systems)
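Although the paper's four-phase method is far richer, the simulation side of such a hybrid forecast can be caricatured as a Monte Carlo perturbation of forecast vote shares (every number and the Gaussian noise model below are assumptions for illustration, not the authors' procedure):

```python
import random

def first_round_win_probability(mean_shares, sd=0.02, n_sims=20000, seed=7):
    """Monte Carlo sketch: jitter forecast vote shares with Gaussian
    noise, renormalize to sum to one, and count how often each
    candidate finishes first in the simulated first round."""
    rng = random.Random(seed)
    wins = [0] * len(mean_shares)
    for _ in range(n_sims):
        draws = [max(1e-9, s + rng.gauss(0.0, sd)) for s in mean_shares]
        total = sum(draws)
        shares = [d / total for d in draws]
        wins[shares.index(max(shares))] += 1
    return [w / n_sims for w in wins]
```

In a fuller pipeline, the per-candidate mean shares would come from the ML model trained on electoral factors, and the simulated draws would generate the synthetic data the abstract mentions.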