Computation doi: 10.3390/computation12030060
Authors: Evaristo José Madarro-Capó Eziel Christians Ramos Piñón Guillermo Sosa-Gómez Omar Rojas
This study describes the implementation of two algorithms in a parallel environment. These algorithms correspond to two statistical tests based on the bit independence criterion and the strict avalanche criterion, which are used to measure avalanche properties in stream ciphers. These criteria make it possible to determine the statistical independence between the outputs and the internal state of a bit-level cipher. Both tests require extensive input parameters to assess the performance of current stream ciphers, leading to long execution times. The presented implementation significantly reduces the execution time of both tests, making them suitable for evaluating ciphers in practical applications. The evaluation results compare the performance of the RC4 and HC-256 stream ciphers in sequential and parallel environments.
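As a rough illustration of the kind of parallelization described, the sketch below distributes a strict-avalanche-style test over many random keys with Python multiprocessing; `sac_test_one_key` and the placeholder keystream function are hypothetical stand-ins, not the paper's implementation.

```python
# Hedged sketch: farming a strict avalanche criterion (SAC) style test over
# many random keys with multiprocessing. `keystream_bits` is a placeholder
# (a hash), to be replaced by actual RC4/HC-256 keystream generation.
import os
import hashlib
from multiprocessing import Pool

def keystream_bits(key: bytes, n_bits: int) -> int:
    return int.from_bytes(hashlib.shake_128(key).digest(n_bits // 8), "big")

def sac_test_one_key(key: bytes, n_bits: int = 256) -> float:
    """Fraction of output bits flipped when a single key bit is flipped."""
    base = keystream_bits(key, n_bits)
    flipped = bytearray(key)
    flipped[0] ^= 1                       # flip one input bit
    diff = base ^ keystream_bits(bytes(flipped), n_bits)
    return bin(diff).count("1") / n_bits  # ideally close to 0.5

if __name__ == "__main__":
    keys = [os.urandom(16) for _ in range(10_000)]
    with Pool() as pool:                  # one worker per CPU core by default
        ratios = pool.map(sac_test_one_key, keys)
    print(sum(ratios) / len(ratios))
```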
Computation doi: 10.3390/computation12030059
Authors: Deepanjal Shrestha Tan Wenan Deepmala Shrestha Neesha Rajkarnikar Seung-Ryul Jeong
This study introduces a data-driven and machine-learning approach to design a personalized tourist recommendation system for Nepal. It examines key tourist attributes, such as demographics, behaviors, preferences, and satisfaction, to develop four sub-models for data collection and machine learning. A structured survey is conducted with 2400 international and domestic tourists, featuring 28 major questions and 125 variables. The data are preprocessed, and significant features are extracted to enhance the accuracy and efficiency of the machine-learning models. These models are evaluated using metrics such as accuracy, precision, recall, F-score, ROC, and lift curves. A comprehensive database for Pokhara City, Nepal, is developed from various sources that includes attributes such as location, cost, popularity, rating, ranking, and trend. The machine-learning models provide intermediate categorical recommendations, which are further mapped using a personalized recommender algorithm. This algorithm makes decisions based on weights assigned to each decision attribute to make the final recommendations. The system’s performance is compared with other popular recommender systems implemented by TripAdvisor, Google Maps, the Nepal tourism website, and others. It is found that the proposed system surpasses existing ones, offering more accurate and optimized recommendations to visitors in Pokhara. This study is a pioneering one and holds significant implications for the tourism industry and the governing sector of Nepal in enhancing the overall tourism business.
Computation doi: 10.3390/computation12030058
Authors: Murat Mustafin Hiba Moussa
The technology for determining a point’s coordinates on the earth’s surface using the global navigation satellite system (GNSS) is becoming the norm along with ground-based methods. In this case, determining coordinates does not cause any particular difficulties. However, to identify normal heights using this technology with a given accuracy, special research is required. The fact is that satellite determinations of geodetic heights (h) over an ellipsoid surface differ from ground-based measurements of normal height (HN) over a quasi-geoid surface by a certain value called the quasi-geoid height or height anomaly (ζ). When determining the heights of a certain territory, the concept of geoid height (N) is usually used when dealing with a geoid model. In this work, geodetic and normal heights are determined for five control points in three different regions in Lebanon, where measurements are carried out using GNSS technology and geometric levelling. The obtained quasi-geoid heights are compared with geoid heights derived from the global Earth model EGM2008. The results obtained showed that, in the absence of gravimetric data, the combination of global Earth model data, geometric levelling for selected areas, and satellite determinations allows for the creation of a highly accurate altitude network for mountainous areas.
Computation doi: 10.3390/computation12030057
Authors: Daniel Molinero-Hernández Sergio R. Galván-González Nicolás D. Herrera-Sandoval Pablo Guzman-Avalos J. Jesús Pacheco-Ibarra Francisco J. Domínguez-Mota
Driven by the emergence of Graphics Processing Units (GPUs), the solution of increasingly large and intricate numerical problems has become feasible. Yet, the integration of GPUs into Computational Fluid Dynamics (CFD) codes still presents a significant challenge. This study undertakes an evaluation of the computational performance of GPUs for CFD applications. Two Compute Unified Device Architecture (CUDA)-based implementations within the Open Field Operation and Manipulation (OpenFOAM) environment were employed for the numerical solution of a 3D Kaplan turbine draft tube workbench. A series of tests were conducted to assess the fixed-size grid problem speedup in accordance with Amdahl’s Law. Additionally, tests were performed to identify the optimal configuration utilizing various linear solvers, preconditioners, and smoothers, along with an analysis of memory usage.
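For context, the fixed-size speedup bound referred to here is Amdahl's Law; a minimal illustration follows, in which the parallel fraction and acceleration factor are invented for the example:

```python
# Amdahl's law: overall speedup of a fixed-size problem when a fraction p
# of the runtime is accelerated by a factor s (e.g. offloaded to a GPU).
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# If 90% of the solver time runs 20x faster on the GPU:
print(amdahl_speedup(0.9, 20.0))  # ~6.9; the limit as s -> inf is 1/(1-p) = 10
```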
Computation doi: 10.3390/computation12030056
Authors: Chady Ghnatios Victor Champaney Angelo Pasquale Francisco Chinesta
There was an error in the original publication, section data availability, stating “Data is available upon request” [...]
Computation doi: 10.3390/computation12030055
Authors: Nisa Boukichou-Abdelkader Miguel Ángel Montero-Alonso Alberto Muñoz-García
Recently, many methods and algorithms have been developed that can be quickly adapted to different situations within a population of interest, especially in the health sector. Success has been achieved by generating better models and higher-quality results to facilitate decision making, as well as to propose new diagnostic procedures and treatments adapted to each patient. These models can also improve people’s quality of life, dissuade bad health habits, reinforce good habits, and modify the pre-existing ones. In this sense, the objective of this study was to apply supervised and unsupervised classification techniques, where the clustering algorithm was the key factor for grouping. This led to the development of three optimal groups of clinical patterns based on their characteristics. The supervised classification methods used in this study were Correspondence Analysis (CA) and Decision Trees (DT), which served as visual aids to identify the possible groups. At the same time, they were used as exploratory mechanisms to confirm the results for the existing information, which enhanced the value of the final results. In conclusion, this multi-technique approach was found to be a feasible method that can be used in different situations when there are sufficient data. It was thus necessary to reduce the dimensional space, impute missing values to obtain high-quality information, and apply classification models to search for patterns in the clinical profiles, with a view to grouping the patients efficiently and accurately so that the clinical results can be applied in other research studies.
Computation doi: 10.3390/computation12030054
Authors: Seweryn Lipiński
DSC-MRI examination is one of the best methods for diagnosing brain diseases. For this purpose, so-called perfusion parameters are defined, of which the most widely used are cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT). There are many approaches to determining these parameters, but regardless of the approach, there is a problem with the quality assessment of methods. To solve this problem, this article proposes a virtual DSC-MRI brain examination, which consists of two steps. The first step is to create curves that are typical for DSC-MRI studies and characteristic of different brain regions, i.e., the gray and white matter, and blood vessels. Using perfusion descriptors, the curves are classified into three sets, which give us the model curves for each of the three regions. The curves corresponding to the perfusion of different regions of the brain, in a suitable arrangement (consistent with human anatomy), form a model of the DSC-MRI examination. In the created model, one knows in advance the values of the complex perfusion parameters, as well as the basic perfusion descriptors. The resulting model study can be disturbed in a controlled manner, not only by adding noise, but also by determining the location of disturbances that are characteristic of specific brain diseases.
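For orientation, two of the standard indicator-dilution relations behind these parameters are sketched below; this is a simplified view, and the article's estimators (including the deconvolution step typically used for CBF) are not reproduced:

```python
# Hedged sketch of standard DSC-MRI relations: CBV is proportional to the
# area under the tissue concentration curve normalized by the arterial
# input curve, and MTT = CBV / CBF (central volume principle).
import numpy as np

def trapezoid(y, t):
    """Simple trapezoidal area under curve y sampled at times t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def cbv(t, c_tissue, c_artery):
    return trapezoid(c_tissue, t) / trapezoid(c_artery, t)

def mtt(cbv_value, cbf_value):
    return cbv_value / cbf_value  # central volume principle
```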
Computation doi: 10.3390/computation12030053
Authors: Nikolaos Bakas
Function approximation is a fundamental process in a variety of problems in computational mechanics, structural engineering, as well as other domains that require the precise approximation of a phenomenon with an analytic function. This work demonstrates a unified approach to these techniques, utilizing partial sums of the Taylor series in high arithmetic precision. In particular, the proposed approach is capable of interpolation, extrapolation, numerical differentiation, numerical integration, solution of ordinary and partial differential equations, and system identification. The method employs Taylor polynomials and hundreds of digits in the computations to obtain precise results. Interestingly, some well-known problems are found to arise from the calculation accuracy and not from methodological inefficiencies, as might be expected. In particular, the approximation errors are precisely predictable, the Runge phenomenon is eliminated, and the extrapolation extent may be anticipated a priori. The attained polynomials offer a precise representation of the unknown system as well as its radius of convergence, which provides a rigorous estimation of the prediction ability. The approximation errors are comprehensively analyzed for a variety of calculation digits and test problems and can be reproduced by the provided computer code.
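The core idea can be sketched in a few lines with a multiple-precision library; this is an illustration under assumed settings, not the paper's provided code:

```python
# Hedged sketch: Taylor partial sums evaluated with hundreds of digits
# (mpmath), so truncation rather than round-off dominates the error.
from mpmath import mp, mpf, factorial

mp.dps = 300  # ~300 significant decimal digits

def taylor_partial_sum(derivs, x0, x):
    """Evaluate sum_k derivs[k] * (x - x0)**k / k! in high precision."""
    return sum(mpf(d) * (x - x0) ** k / factorial(k)
               for k, d in enumerate(derivs))

# exp(x) about 0: all derivatives equal 1; 50 terms give ~64 correct digits
print(taylor_partial_sum([1] * 50, mpf(0), mpf(1)))  # compare with mp.e
```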
Computation doi: 10.3390/computation12030052
Authors: Andry Sedelnikov Evgenii Kurkin Jose Gabriel Quijada-Pioquinto Oleg Lukyanov Dmitrii Nazarov Vladislava Chertykovtseva Ekaterina Kurkina Van Hung Hoang
This paper describes the development of a methodology for air propeller optimization using Bezier curves to describe the blade geometry. The proposed approach allows for more flexibility in setting the propeller shape, for example, using a variable airfoil over the blade span. The goal of optimization is to identify the appropriate geometry of a propeller that reduces the power required to achieve a given thrust. Because the proposed optimization problem is a constrained optimization process, a penalty function was used to convert the process into an unconstrained optimization. For the optimization process, a variant of the differential evolution algorithm was used, which includes adaptive techniques for the evolutionary operators and a population size reduction method. The aerodynamic characteristics of the propellers were obtained using the isolated section method (ISM), which is similar to blade element momentum theory (BEMT), and the XFOIL program. Replacing the angle of geometric twist with the angle of attack of the airfoil section as a design variable made it possible to increase the robustness of the optimization algorithm and reduce the calculation time. The optimization technique was implemented in the OpenVINT code and has been used to design helicopter and tractor propellers for unmanned aerial vehicles. The developed algorithm was validated experimentally and using a CFD numerical method. The experimental tests confirm that the optimized propeller geometry is superior to commercial analogues available on the market.
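The penalty-function conversion mentioned here can be illustrated as follows; the aerodynamic model, bounds, and penalty weight are invented placeholders rather than the OpenVINT formulation:

```python
# Hedged sketch: convert "minimize power P(x) subject to thrust T(x) >= T_req"
# into an unconstrained problem via a quadratic penalty, then apply
# differential evolution. `power` and `thrust` stand in for the BEMT/ISM model.
from scipy.optimize import differential_evolution

T_REQ = 50.0  # required thrust (assumed units)

def power(x):   # placeholder aerodynamic model
    return (x[0] - 1.0) ** 2 + x[1] ** 2 + 10.0

def thrust(x):  # placeholder aerodynamic model
    return 60.0 * x[0] - 5.0 * x[1] ** 2

def penalized(x, mu=1e3):
    violation = max(0.0, T_REQ - thrust(x))
    return power(x) + mu * violation ** 2

result = differential_evolution(penalized, bounds=[(0.0, 2.0), (-1.0, 1.0)])
print(result.x, result.fun)
```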
Computation doi: 10.3390/computation12030051
Authors: Mubeen Fatima Ravi P. Agarwal Muhammad Abbas Pshtiwan Othman Mohammed Madiha Shafiq Nejmeddine Chorfi
A B-spline is defined by its degree and the number of knots, and it is observed to provide a higher level of flexibility in curve and surface layout. The extended cubic B-spline (ExCBS) functions, with a new approximation for the second derivative, and a finite difference technique are incorporated in this study to solve the time-fractional Allen–Cahn equation (TFACE). Initially, Caputo’s formula is used to discretize the time-fractional derivative, while a new ExCBS is used for the spatial derivative’s discretization. Convergence analysis is carried out and the stability of the proposed method is also analyzed. The scheme’s applicability and feasibility are demonstrated through numerical analysis.
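For reference, the classical L1 discretization of the Caputo derivative, the usual starting point for schemes of this kind, is sketched below; the paper's scheme may differ in its details:

```python
# Hedged sketch: the L1 discretization of the Caputo derivative of order
# 0 < alpha < 1 at time level n:
#   D_t^alpha u(t_n) ~ dt**(-alpha) / Gamma(2 - alpha)
#                      * sum_{j=0}^{n-1} b_j * (u[n-j] - u[n-j-1]),
#   with weights b_j = (j+1)**(1-alpha) - j**(1-alpha).
import math

def caputo_l1(u, dt, alpha):
    n = len(u) - 1
    coeff = dt ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)]
    return coeff * sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(n))
```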
Computation doi: 10.3390/computation12030050
Authors: M. Domaneschi R. Cucuzza L. Sardone S. Londoño Lopez M. Movahedi G. C. Marano
Random vibration analysis is a mathematical tool that offers great advantages in predicting the mechanical response of structural systems subjected to external dynamic loads whose nature is intrinsically stochastic, as in cases of sea waves, wind pressure, and vibrations due to road asperity. When the input is properly modeled as a stochastic process, random vibration analysis makes it possible to derive high-quality information about the structural response (compared with other tools), especially in terms of reliability prediction. However, the random vibration approach is quite complex in cases of non-linearity, as well as for non-stationary inputs, as in cases of seismic events. For non-stationary inputs, the assessment of second-order spectral moments requires resolving the Lyapunov matrix differential equation. In this research, a numerical procedure is proposed, providing an expression of the response in the state-space that, to the best of our knowledge, has not yet been presented in the literature, with a formal justification for earthquake input modeled as a modulated white noise with evolutive parameters. The computational effort is reduced by exploiting the symmetry of the covariance matrix. The adopted approach is applied to analyze a multi-story building, aiming to determine the reliability related to the maximum inter-story displacement surpassing a specified acceptable threshold. The building is presumed to experience seismic input characterized by a non-stationary process in both amplitude and frequency, utilizing a general Kanai–Tajimi earthquake input stationary model. The adopted case study is modeled in the form of a multi-degree-of-freedom plane shear frame system.
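The Lyapunov matrix differential equation mentioned here has a compact form for a linear state-space system driven by white noise; a minimal integration sketch follows (the paper's exact formulation and amplitude/frequency modulation are not reproduced):

```python
# Hedged sketch: for a linear system dx = A x dt + G dW with white-noise
# intensity Q, the response covariance P(t) satisfies the Lyapunov matrix ODE
#   dP/dt = A P + P A^T + G Q G^T,
# integrated below with classical RK4. The symmetry of P can be exploited
# to reduce cost, as the paper notes.
import numpy as np

def lyapunov_rhs(P, A, GQGt):
    return A @ P + P @ A.T + GQGt

def integrate_covariance(A, GQGt, P0, dt, steps):
    P = P0.copy()
    for _ in range(steps):
        k1 = lyapunov_rhs(P, A, GQGt)
        k2 = lyapunov_rhs(P + 0.5 * dt * k1, A, GQGt)
        k3 = lyapunov_rhs(P + 0.5 * dt * k2, A, GQGt)
        k4 = lyapunov_rhs(P + dt * k3, A, GQGt)
        P = P + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        P = 0.5 * (P + P.T)  # enforce symmetry against round-off
    return P
```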
Computation doi: 10.3390/computation12030049
Authors: Husniddin Khayrullaev Issa Omle Endre Kovács
We systematically investigate the performance of numerical methods to solve Fisher’s equation, which contains a linear diffusion term and a nonlinear logistic term. The usual explicit finite difference algorithms are only conditionally stable for this equation, and they can yield concentrations below zero or above one, even if they are stable. Here, we collect the stable and explicit algorithms, most of which we invented recently. All of them are unconditionally dynamically consistent for Fisher’s equation; thus, the concentration remains in the unit interval for arbitrary parameters. We perform tests in the cases of 1D and 2D systems to explore how the errors depend on the coefficient of the nonlinear term, the stiffness ratio, and the anisotropy of the system. We also measure running times and recommend which algorithms should be used in specific circumstances.
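The failure mode being avoided can be seen in the conventional explicit step below; this is the standard conditionally stable baseline, not one of the paper's unconditionally positive schemes:

```python
# Hedged baseline for context: a conventional explicit (FTCS) step for
# Fisher's equation u_t = D u_xx + r u (1 - u). This scheme is only
# conditionally stable (roughly dt <= dx**2 / (2 D) in 1D for the diffusion
# part) and can leave [0, 1], exactly what the paper's schemes prevent.
import numpy as np

def ftcs_fisher_step(u, dt, dx, D=1.0, r=1.0):
    u_new = u.copy()
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_new[1:-1] = u[1:-1] + dt * (D * lap + r * u[1:-1] * (1.0 - u[1:-1]))
    return u_new  # boundary values held fixed (Dirichlet)
```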
Computation doi: 10.3390/computation12030048
Authors: Francis Peter Paulsamy Sambath Seshathiri Dhanasekaran
In the field of heat and mass transfer applications, non-Newtonian fluids are considered to play a very important role. This study examines the magnetohydrodynamic (MHD) bioconvective Eyring–Powell fluid flow on a permeable cone and plate, considering the viscous dissipation (0.3 ≤ Ec ≤ 0.7), the uniform heat source/sink (−0.1 ≤ Q0 ≤ 0.1), and the activation energy (−1 ≤ E1 ≤ 1). The primary focus of this study is to examine how MHD and porosity impact heat and mass transfer in a fluid with microorganisms. A similarity transformation (ST) changes the nonlinear partial differential equations (PDEs) into ordinary differential equations (ODEs). The Keller Box (KB) finite difference method solves these equations. Our findings demonstrate that adding MHD (0.5 ≤ M ≤ 0.9) and porosity (0.3 ≤ Γ ≤ 0.7) effects improves microbial diffusion, boosting the rates of mass and heat transfer. A comparison of our findings with prior studies shows that they are reliable.
Computation doi: 10.3390/computation12030047
Authors: Luana Conte Emanuele Rizzo Tiziana Grassi Francesco Bagordo Elisabetta De Matteis Giorgio De Nunzio
Pedigree charts remain essential in oncological genetic counseling for identifying individuals with an increased risk of developing hereditary tumors. However, this valuable data source often remains confined to paper files, going unused. We propose a computer-aided detection/diagnosis system, based on machine learning and deep learning techniques, capable of the following: (1) assisting genetic oncologists in digitizing paper-based pedigree charts, and in generating new digital ones, and (2) automatically predicting the genetic predisposition risk directly from these digital pedigree charts. To the best of our knowledge, there are no similar studies in the current literature, and consequently, no utilization of software based on artificial intelligence on pedigree charts has been made public yet. By incorporating medical images and other data from omics sciences, there is also a fertile ground for training additional artificial intelligence systems, broadening the software predictive capabilities. We plan to bridge the gap between scientific advancements and practical implementation by modernizing and enhancing existing oncological genetic counseling services. This would mark the pioneering development of an AI-based application designed to enhance various aspects of genetic counseling, leading to improved patient care and advancements in the field of oncogenetics.
Computation doi: 10.3390/computation12030046
Authors: Nazrul Azlan Abdul Samat Norfifah Bachok Norihan Md Arifin
The present study aims to offer new numerical solutions and optimisation strategies for the fluid flow and heat transfer behaviour at a stagnation point through a nonlinear sheet that is expanding or contracting in water-based hybrid nanofluids. Most hybrid nanofluids typically use metallic nanoparticles. However, we deliver a new approach by combining single- and multi-walled carbon nanotubes (SWCNTs-MWCNTs). The flow is presumed to be steady, laminar, and surrounded by a constant temperature of the ambient and body walls. By using similarity variables, a model of partial differential equations (PDEs) with the magnetohydrodynamics (MHD) effect on the momentum equation is converted into a model of non-dimensional ordinary differential equations (ODEs). Then, the dimensionless first-order ODEs are solved numerically using the MATLAB R2022b bvp4c solver. In order to explore the range of computational solutions and physical quantities, several dimensionless variables are manipulated, including the magnetic parameter, the stretching/shrinking parameter, and the volume fraction parameters of hybrid and mono carbon nanotubes. To enhance the originality and effectiveness of this study for practical applications, we optimise the heat transfer coefficient via the response surface methodology (RSM). We apply a face-centred central composite design (CCF) and perform the CCF using Minitab. All of our findings are presented and illustrated in tabular and graphic form. We have made notable contributions in the disciplines of mathematical analysis and fluid dynamics. From our observations, we find that multiple solutions appear when the magnetic parameter is less than 1. We also detect double solutions in the shrinking region. Furthermore, an increase in the magnetic parameter and the SWCNTs-MWCNTs volume fraction parameter increases both the skin friction coefficient and the local Nusselt number. Comparing the performance of hybrid nanofluids and mono nanofluids, we note that hybrid nanofluids work better than single nanofluids in terms of both the skin friction and heat transfer coefficients.
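The solution pattern here, a similarity ODE system handed to a boundary-value solver such as bvp4c, can be sketched with the SciPy analogue; the classical Blasius boundary-layer equation is used below as a stand-in for the paper's stagnation-point system:

```python
# Hedged sketch of the bvp4c-style workflow using SciPy's solve_bvp on the
# Blasius similarity ODE f''' + 0.5 f f'' = 0, with f(0)=f'(0)=0, f'(inf)=1.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):                       # y = [f, f', f'']
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(y0, yinf):
    return np.array([y0[0], y0[1], yinf[1] - 1.0])

eta = np.linspace(0.0, 10.0, 100)
y_init = np.zeros((3, eta.size))
y_init[1] = eta / eta[-1]              # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y_init)
print(sol.y[2, 0])                     # wall shear f''(0), ~0.332
```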
Computation doi: 10.3390/computation12030045
Authors: Elliasu Y. Salifu Mbuso A. Faya James Abugri Pritika Ramharack
Cancer remains a major challenge in the field of medicine, necessitating innovative therapeutic strategies. Mitogen-activated protein kinase (MAPK) signaling pathways, particularly Extracellular Signal-Regulated Kinase 1 and 2 (ERK1/2), play pivotal roles in cancer pathogenesis. Recently, ERK5 (also known as MAPK7) has emerged as an attractive target due to its compensatory role in cancer progression upon termination of ERK1 signaling. This study explores the potential of Compound 22ac, a novel small molecule inhibitor, to simultaneously target both ERK1 and ERK5 in cancer cells. Using molecular dynamics simulations, we investigate the binding affinity, conformational dynamics, and stability of Compound 22ac when interacting with ERK1 and ERK5. Our results indicate that Compound 22ac forms strong interactions with key residues in the ATP-binding pocket of both ERK1 and ERK5, effectively inhibiting their catalytic activity. Furthermore, the simulations reveal subtle differences in the binding modes of Compound 22ac within the two kinases, shedding light on the dual inhibitory mechanism. This research not only elucidates a structural mechanism of action of Compound 22ac, but also highlights its potential as a promising therapeutic agent for cancer treatment. The dual inhibition of ERK1 and ERK5 by Compound 22ac offers a novel approach to disrupting the MAPK signaling cascade, thereby hindering cancer progression. These findings may contribute to the development of targeted therapies that could improve the prognosis for cancer patients.
Computation doi: 10.3390/computation12030044
Authors: Norah Fahd Alhussainan Belgacem Ben Youssef Mohamed Maher Ben Ismail
Brain tumor diagnosis traditionally relies on the manual examination of magnetic resonance images (MRIs), a process that is prone to human error and is also time-consuming. Recent advancements leverage machine learning models to categorize tumors, such as distinguishing between “malignant” and “benign” classes. This study focuses on the supervised machine learning task of classifying “firm” and “soft” meningiomas, critical for determining the optimal brain tumor treatment. The research aims to enhance meningioma firmness detection using state-of-the-art deep learning architectures. The study employs a YOLO architecture adapted for meningioma classification (Firm vs. Soft). This YOLO-based model serves as a machine learning component within a proposed CAD system. To improve model generalization and combat overfitting, transfer learning and data augmentation techniques are explored. Intra-model analysis is conducted for each of the five YOLO versions, optimizing parameters such as the optimizer, batch size, and learning rate based on sensitivity and training time. YOLOv3, YOLOv4, and YOLOv7 demonstrate exceptional sensitivity, reaching 100%. Comparative analysis against state-of-the-art models highlights their superiority. YOLOv7, utilizing the SGD optimizer, a batch size of 64, and a learning rate of 0.01, achieves outstanding overall performance with metrics including mean average precision (99.96%), precision (98.50%), specificity (97.95%), balanced accuracy (98.97%), and F1-score (99.24%). This research showcases the effectiveness of YOLO architectures in meningioma firmness detection, with YOLOv7 emerging as the optimal model. The study’s findings underscore the significance of model selection and parameter optimization for achieving high sensitivity and robust overall performance in brain tumor classification.
Computation doi: 10.3390/computation12030043
Authors: Nora M. Albqmi Sivasankaran Sivanandam
The principal objective of the study is to examine the impact of thermal radiation and entropy generation on the magnetohydrodynamic hybrid nanofluid, Al2O3/H2O, flow in a Darcy–Forchheimer porous medium with variable heat flux when subjected to an electric field. Investigating the impact of thermal radiation and non-uniform heat flux on the hybrid nano-liquid magnetohydrodynamic flow in a non-Darcy porous environment produces novel and insightful findings. Thus, the goal of the current study is to investigate this. The non-linear governing equations can be reduced to a set of ordinary differential equations by applying the proper transformations. The resultant dimensionless model is numerically solved in MATLAB using the bvp4c command. We obtain numerical results for the temperature and velocity distributions, skin friction, and local Nusselt number across a broad range of controlling parameters. Our results show a significant degree of agreement with other research reported in the literature. The results show that an increase in the Reynolds and Brinkman numbers corresponds to an increase in entropy production. Furthermore, a high electric field accelerates the fluid velocity, whereas the unsteadiness parameter and the presence of a magnetic field slow it down. This study is beneficial to other researchers as well as technical applications in thermal science because it discusses the factors that lead to the thermal enhancement of the working hybrid nano-liquid.
Computation doi: 10.3390/computation12030042
Authors: Aravind Kolli Qi Wei Stephen A. Ramsey
Despite the societal burden of chronic wounds and despite advances in image processing, automated image-based prediction of wound prognosis is not yet in routine clinical practice. While specific tissue types are known to be positive or negative prognostic indicators, image-based wound healing prediction systems that have been demonstrated to date do not (1) use information about the proportions of tissue types within the wound and (2) predict time-to-healing (most predict categorical clinical labels). In this work, we analyzed a unique dataset of time-series images of healing wounds from a controlled study in dogs, as well as human wound images that are annotated for tissue type composition. In the context of a hybrid-learning approach (neural network segmentation and decision tree regression) for the image-based prediction of time-to-healing, we tested whether explicitly incorporating tissue type-derived features into the model would improve the accuracy of time-to-healing prediction versus not including such features. We tested four deep convolutional encoder–decoder neural network models for wound image segmentation and identified, in the context of both the original wound images and an augmented wound image set, that a SegNet-type network trained on the augmented image set has the best segmentation performance. Furthermore, using three different regression algorithms, we evaluated models for predicting wound time-to-healing using features extracted from the four best-performing segmentation models. We found that XGBoost regression using features that are (i) extracted from a SegNet-type network and (ii) reduced using principal components analysis performed the best for time-to-healing prediction. We demonstrated that a neural network model can classify the regions of a wound image as one of four tissue types, and that adding features derived from the superpixel classifier improves the performance of healing-time prediction.
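The best-performing regression stage described here can be sketched as a PCA-plus-XGBoost pipeline; the feature extraction from the SegNet-type network is assumed done elsewhere, and the data below are invented stand-ins:

```python
# Hedged sketch: PCA-reduced segmentation-derived features feeding an
# XGBoost regressor for time-to-healing prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # stand-in segmentation features
y = rng.uniform(5, 60, size=200)      # stand-in time-to-healing (days)

model = make_pipeline(PCA(n_components=10), XGBRegressor(n_estimators=200))
model.fit(X, y)
print(model.predict(X[:3]))
```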
Computation doi: 10.3390/computation12030041
Authors: Alexander Isaev Tatiana Dobroserdova Alexander Danilov Sergey Simakov
This study introduces an innovative approach leveraging physics-informed neural networks (PINNs) for the efficient computation of blood flows at the boundaries of a four-vessel junction formed by a Fontan procedure. The methodology incorporates a 3D mesh generation technique based on the parameterization of the junction’s geometry, coupled with an advanced physically regularized neural network architecture. Synthetic datasets are generated through stationary 3D Navier–Stokes simulations within immobile boundaries, offering a precise alternative to resource-intensive computations. A comparative analysis of standard grid sampling and Latin hypercube sampling data generation methods is conducted, resulting in datasets comprising 1.1 × 10⁴ and 5 × 10³ samples, respectively. The following two families of feed-forward neural networks (FFNNs) are then compared: the conventional “black-box” approach using mean squared error (MSE) and a physically informed FFNN employing a physically regularized loss function (PRLF), incorporating the mass conservation law. The study demonstrates that combining PRLF with Latin hypercube sampling enables the rapid minimization of relative error (RE) when using a smaller dataset, achieving a relative error value of 6% on the test set. This approach offers a viable alternative to resource-intensive simulations, showcasing potential applications in patient-specific 1D network models of hemodynamics.
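The PRLF idea can be sketched as a data-fit term plus a mass-conservation penalty; the signs, scaling, and exact residual used in the paper are assumptions here:

```python
# Hedged sketch of a physically regularized loss function (PRLF): MSE plus
# a penalty enforcing mass conservation at the four-vessel junction
# (signed inflows/outflows of incompressible blood must sum to zero).
import torch

def prlf_loss(pred_flows, true_flows, lam=1.0):
    """pred_flows: (batch, 4) signed flow rates at the 4 junction boundaries."""
    mse = torch.mean((pred_flows - true_flows) ** 2)
    mass_residual = pred_flows.sum(dim=1)  # should vanish by conservation
    physics = torch.mean(mass_residual ** 2)
    return mse + lam * physics
```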
Computation doi: 10.3390/computation12030040
Authors: Andres-Amador Garcia-Granada
Impacts due to drops or crashes between moving vehicles necessitate the search for energy absorption elements to prevent damage to the transported goods or individuals. To ensure safety, an acceptable level of deceleration is specified. The optimization of deformable parts to absorb impact energy is typically conducted through explicit simulations, where kinetic energy is converted into plastic deformation energy. The introduction of additive manufacturing techniques enables this optimization to be conducted with more efficient shapes, previously unachievable with conventional manufacturing methods. This paper presents an initial approach to validating explicit simulations of impacts against solid cubes of varying sizes and fabrication directions. Such cubes were fabricated using PLA, the most commonly used material, and a desktop printer. All simulations could be conducted using a single material law description, employing solid elements with a controlled time step suitable for industrial applications. With this approach, the simulations were capable of predicting deceleration levels across a broad range of impact configurations for solid cubes.
Computation doi: 10.3390/computation12030039
Authors: Alexey Liogky Victoria Salamatova
Data-driven simulations are gaining popularity in the mechanics of biomaterials since they do not require an explicit form of the constitutive relations. However, data-driven modeling based on neural networks lacks interpretability. In this study, we propose an interpretable data-driven finite element modeling approach for hyperelastic materials. This approach employs the Laplace stretch as the strain measure and utilizes response functions to define the constitutive equations. To validate the proposed method, we apply it to the inflation of anisotropic membranes on the basis of synthetic data for porcine skin represented by the Holzapfel–Gasser–Ogden model. Our results demonstrate the applicability of the method and show good agreement with the reference displacements, although some discrepancies are observed in the stress calculations. Despite these discrepancies, the proposed method demonstrates its potential usefulness for the simulation of hyperelastic biomaterials.
Computation doi: 10.3390/computation12030038
Authors: Luis Zuloaga-Rotta Rubén Borja-Rosales Mirko Jerber Rodríguez Mallma David Mauricio Nelson Maculan
The forecasting of presidential election results (PERs) is a very complex problem due to the diversity of electoral factors and the uncertainty involved. A hybrid approach composed of techniques such as machine learning (ML) and simulation is promising for forecasting tasks because the former yields good results but requires a good balance between data quantity and quality, and the latter can supply that requirement; nonetheless, each technique has its limitations, parameters, processes, and application contexts, which should be treated as a whole to improve the results. This study proposes a systematic method to build a model to forecast the PERs with high precision, based on the factors that influence the voter’s preferences and the use of ML and simulation techniques. The method consists of four phases, uses contextual and synthetic data, and follows a procedure that guarantees high precision in predicting the PER. The method was applied to real cases in Brazil, Uruguay, and Peru, resulting in a predictive model with 100% agreement with the actual first-round results for all cases.
Computation doi: 10.3390/computation12020037
Authors: Alexander Lopato Pavel Utkin
The propagation of detonation waves (i.e., supersonic combustion waves) in non-uniform gaseous mixtures has become a matter of interest over the past several years due to the development of rotating detonation engines. It was shown in a number of recent theoretical studies of one-dimensional pulsating detonation that perturbation of the parameters in front of the detonation wave can lead to a resonant amplification of intrinsic pulsations for a certain range of perturbation wavelengths. This work is dedicated to the clarification of the mechanism of this effect. One-dimensional reactive Euler equations with single-step Arrhenius kinetics were solved. Detonation propagation in a gas with sine waves in density was simulated in a shock-attached frame of reference. We carried out a series of simulations, varying the wavelength of the disturbances. We obtained a non-linear dependence of the amplitude of these pulsations on the wavelength of disturbances with resonant amplification for a certain range of wavelengths. The gain in velocity was about 25% of the Chapman–Jouguet velocity of the stable detonation wave. The effect is explained using the characteristic analysis in the x-t diagram. For the resonant case, we correlated the pulsation period with the time it takes for the C+ and C− characteristics to travel through the effective reaction zone. A similar pulsation mechanism is realized when a detonation wave propagates in a homogeneous medium.
Computation doi: 10.3390/computation12020036
Authors: Vangelis Sarlis George Papageorgiou Christos Tjortjis
This research paper examines Sports Analytics, focusing on injury patterns in the National Basketball Association (NBA) and their impact on players’ performance. It employs a unique dataset to identify common NBA injuries, determine the most affected anatomical areas, and analyze how these injuries influence players’ post-recovery performance. This study’s novelty lies in its integrative approach that combines injury data with performance metrics and salary data, providing new insights into the relationship between injuries and economic and on-court performance. It investigates the periodicity and seasonality of injuries, seeking patterns related to time and external factors. Additionally, it examines the effect of specific injuries on players’ per-match analytics and performance, offering perspectives on the implications of injury rehabilitation for player performance. This paper contributes significantly to sports analytics, assisting coaches, sports medicine professionals, and team management in developing injury prevention strategies, optimizing player rotations, and creating targeted rehabilitation plans. Its findings illuminate the interplay between injuries, salaries, and performance in the NBA, aiming to enhance player welfare and the league’s overall competitiveness. With a comprehensive and sophisticated analysis, this research offers unprecedented insights into the dynamics of injuries and their long-term effects on athletes.
Computation doi: 10.3390/computation12020035
Authors: Evgenii Kurkin Oscar Ulises Espinosa Barcenas Evgenii Kishov Oleg Lukyanov
The current study aims to develop a methodology for obtaining topology-optimal structures made of short fiber-reinforced polymers. Each iteration of topology optimization involves two consecutive steps: the first is a simulation of the injection molding process for obtaining the fiber orientation tensor, and the second is a structural analysis with anisotropic material properties. Accounting for the molding process during the internal iterations of topology optimization makes it possible to enhance the weight efficiency of structures, a crucial aspect, especially in aerospace. Anisotropy is considered through the fiber orientation tensor, which is modeled by solving the plastic molding equations for non-Newtonian fluids and then introduced as a variable in the stiffness matrix during the structural analysis. Structural analysis using a linear anisotropic material model was employed within the topology optimization. For verification, a non-linear elasto-plastic material model was used based on an exponential-and-linear hardening law. The evaluation of the weight efficiency of structures composed of short fiber-reinforced composite materials using a dimensionless criterion is addressed. Experimental verification was performed to confirm the validity of the developed methodology. The evidence illustrates that considering anisotropy leads to stiffer structures, and structural elements should be oriented in the direction of maximal stiffness. The load-carrying factor is expressed in terms of failure criteria. The presented multidisciplinary methodology can be used to improve the quality of the design of structures made of short fiber-reinforced composites (SFRC), where high stiffness, high strength, and minimum mass are the primary required structural characteristics.
Computation doi: 10.3390/computation12020034
Authors: Khalaf M. Alanazi
We derive a reaction–diffusion model with time-delayed nonlocal effects to study an epidemic’s spatial spread numerically. The model describes infected individuals in the latent period using a structured model with diffusion. The epidemic model assumes that infectious individuals are subject to containment measures. To simulate the model in two-dimensional space, we use the continuous Runge–Kutta method of the fourth order and the discrete Runge–Kutta method of the third order with six stages. The numerical results admit the existence of traveling wave solutions for the proposed model. We use the COVID-19 epidemic to conduct numerical experiments and investigate the minimal speed of spread of the traveling wave front. The minimal spreading speeds of COVID-19 are found and discussed. Also, we assess the power of containment measures to contain the epidemic. The results depict a clear drop in the spreading speed of the traveling wave front after applying containment measures to at-risk populations.
Computation doi: 10.3390/computation12020033
Authors: Muhammad Asad Arshed Hafiz Abdul Rehman Saeed Ahmed Christine Dewi Henoch Juli Christanto
The DNA virus responsible for monkeypox, transmitted from animals to humans, exhibits two distinct genetic lineages in central and eastern Africa. Beyond the zoonotic transmission involving direct contact with the infected animals’ bodily fluids and blood, the spread of monkeypox can also occur through skin lesions and respiratory secretions among humans. Both monkeypox and chickenpox involve skin lesions and can also be transmitted through respiratory secretions, but they are caused by different viruses. The key difference is that monkeypox is caused by an orthopoxvirus, while chickenpox is caused by the varicella-zoster virus. In this study, the utilization of a patch-based vision transformer (ViT) model for the identification of monkeypox and chickenpox disease from human skin color images marks a significant advancement in medical diagnostics. Employing a transfer learning approach, the research investigates the ViT model’s capability to discern subtle patterns which are indicative of monkeypox and chickenpox. The dataset was enriched through carefully selected image augmentation techniques, enhancing the model’s ability to generalize across diverse scenarios. During the evaluation phase, the patch-based ViT model demonstrated substantial proficiency, achieving an accuracy, precision, recall, and F1-score of 93%. This positive outcome underscores the practicality of employing sophisticated deep learning architectures, specifically vision transformers, in the realm of medical image analysis. Through the integration of transfer learning and image augmentation, not only is the model’s responsiveness to monkeypox- and chickenpox-related features enhanced, but concerns regarding data scarcity are also effectively addressed. The model outperformed the state-of-the-art studies and the CNN-based pre-trained models in terms of accuracy.
Computation doi: 10.3390/computation12020032
Authors: Qanita Bani Baker Ruba A. Al-Hussien Mahmoud Al-Ayyoub
Multiple sequence alignment (MSA) stands as a critical tool for understanding the evolutionary and functional relationships among biological sequences. Obtaining an exact solution for MSA, termed exact-MSA, is a significant challenge due to the combinatorial nature of the problem. The dynamic programming technique for solving MSA is recognized as highly computationally complex. To cope with the computational demands of MSA, parallel computing offers the potential for significant speedup. In this study, we investigated the utilization of parallelization to solve the exact-MSA using three proposed novel approaches. In these approaches, we used multi-threading techniques to improve the performance of the dynamic programming algorithms in solving the exact-MSA. We developed and employed three parallel approaches, named diagonal traversing, blocking, and slicing, to improve MSA performance. The proposed method accelerated the exact-MSA algorithm by around 4×. The proposed approaches could serve as foundational elements, offering potential integration with existing techniques for comprehensive MSA enhancement.
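The diagonal-traversing idea rests on the fact that all cells of one anti-diagonal of the dynamic programming table are mutually independent; the pairwise (Needleman–Wunsch) sketch below vectorizes each anti-diagonal, whereas the paper multi-threads it and works with more than two sequences:

```python
# Hedged sketch of "diagonal traversing": cells on one anti-diagonal of the
# alignment DP table depend only on earlier anti-diagonals, so they can be
# computed simultaneously (vectorized here; multi-threaded in the paper).
import numpy as np

def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    A, B = np.array(list(a)), np.array(list(b))
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    H[:, 0] = gap * np.arange(n + 1)
    H[0, :] = gap * np.arange(m + 1)
    for d in range(2, n + m + 1):                    # anti-diagonal i + j = d
        i = np.arange(max(1, d - m), min(n, d - 1) + 1)
        j = d - i
        sub = np.where(A[i - 1] == B[j - 1], match, mismatch)
        H[i, j] = np.maximum(H[i - 1, j - 1] + sub,
                             np.maximum(H[i - 1, j] + gap, H[i, j - 1] + gap))
    return H[n, m]

print(nw_score("GATTACA", "GCATGCU"))
```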
Computation doi: 10.3390/computation12020031
Authors: Spyros Damikoukas Nikos D. Lagaros
Engineers have consistently prioritized the maintenance of structural serviceability and safety. Recent strides in design codes, computational tools, and Structural Health Monitoring (SHM) have sought to address these concerns. On the other hand, the burgeoning application of machine learning (ML) techniques across diverse domains has been noteworthy. This research proposes the combination of ML techniques with SHM to bridge the gap between high-cost and affordable measurement devices. A significant challenge associated with low-cost instruments lies in the heightened noise introduced into the recorded data, particularly obscuring structural responses in ambient vibration (AV) measurements. Consequently, the signal obscured within the noise makes it challenging for engineers to identify the eigenfrequencies of structures. This article concentrates on eliminating additive noise, particularly electronic noise stemming from sensor circuitry and components, in AV measurements. The proposed MLDAR (Machine Learning-based Denoising of Ambient Response) model employs a neural network architecture, featuring a denoising autoencoder with convolutional and upsampling layers. The MLDAR model undergoes training using AV response signals from various Single-Degree-of-Freedom (SDOF) oscillators. These SDOFs span the 1–10 Hz frequency band, encompassing low, medium, and high eigenfrequencies, and their accuracy forms an integral part of the model’s evaluation. The results are promising: after being submitted to the trained model, AV measurements in image format become free of additive noise. This, with the aid of upscaling, makes it possible to derive the target eigenfrequencies without altering or deforming them. Qualitative and quantitative comparisons, in terms of the mean magnitude-squared coherence, mean phase difference, and Signal-to-Noise Ratio (SNR), showed great performance.
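A convolutional denoising autoencoder with upsampling layers, of the kind described for MLDAR, can be sketched as follows; the layer counts, kernel sizes, and input shape are assumptions, not the paper's architecture:

```python
# Hedged sketch of a convolutional denoising autoencoder: noisy AV response
# images in, clean responses out; trained with an MSE reconstruction loss.
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 128, 1))          # AV response as an image
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(2)(x)

x = layers.Conv2D(16, 3, activation="relu", padding="same")(encoded)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# training pairs: noisy AV images as inputs, clean responses as targets
# autoencoder.fit(noisy_imgs, clean_imgs, epochs=50, batch_size=32)
```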
Computation doi: 10.3390/computation12020030
Authors: Francisco Botana Tomas Recio
We compare the performance of two systems, ChatGPT 3.5 and GeoGebra 5, in a restricted, but quite relevant, benchmark from the realm of classical geometry: the determination of geometric loci, focusing, in particular, on the computation of envelopes of families of plane curves. In order to study the loci calculation abilities of ChatGPT, we begin by entering an informal description of a geometric construction involving a locus or an envelope and then we ask ChatGPT to compute its equation. The chatbot fails in most situations, showing that it is not mature enough to deal with the subject. Then, the same constructions are also approached through the automated reasoning tools implemented in the dynamic geometry program, GeoGebra Discovery, which successfully resolves most of them. Furthermore, although ChatGPT is able to write general computer code, it cannot currently output that of GeoGebra. Thus, we consider describing a simple method for ChatGPT to generate GeoGebra constructions. Finally, in case GeoGebra fails, or gives an incorrect solution, we refer to the need for improved computer algebra algorithms to solve the loci/envelope constructions. Other than exhibiting the current problematic performance of the involved programs in this geometric context, our comparison aims to show the relevance and benefits of analyzing the interaction between them.
Computation doi: 10.3390/computation12020029
Authors: Vighnesh Shenoy Prathvi Shenoy Santhosh Krishnan Venkata
This paper delves into precisely measuring liquid levels using a specific methodology with diverse real-world applications such as process optimization, quality control, fault detection and diagnosis, etc. It demonstrates the process of liquid level measurement by employing a chaotic observer, which senses multiple variables within a system. A three-dimensional computational fluid dynamics (CFD) model is meticulously created using ANSYS to explore the laminar flow characteristics of liquids comprehensively. The methodology integrates the system identification technique to formulate a third-order state-space model that characterizes the system. Based on this mathematical model, we develop estimators inspired by Lorenz’s and Rössler’s principles to gauge the liquid level under specified liquid temperature, density, inlet velocity, and sensor placement conditions. The estimated results are compared with those of an artificial neural network (ANN) model. These ANN models learn and adapt to the patterns and features in data and capture non-linear relationships between input and output variables. The accuracy and error minimization of the developed model are confirmed through a thorough validation process. Experimental setups are employed to ensure the reliability and precision of the estimation results, thereby underscoring the robustness of our liquid-level measurement methodology. In summary, this study helps to estimate unmeasured states using the available measurements, which is essential for understanding and controlling the behavior of a system. It helps improve the performance and robustness of control systems, enhance fault detection capabilities, and contribute to dynamic systems’ overall efficiency and reliability.
Computation doi: 10.3390/computation12020028
Authors: Nirmalya Thakur Shuqi Cui Victoria Knieling Karam Khanna Mingchen Shao
The work presented in this paper makes multiple scientific contributions with a specific focus on the analysis of misinformation about COVID-19 on YouTube. First, the results of topic modeling performed on the video descriptions of YouTube videos containing misinformation about COVID-19 revealed four distinct themes or focus areas—Promotion and Outreach Efforts, Treatment for COVID-19, Conspiracy Theories Regarding COVID-19, and COVID-19 and Politics. Second, the results of topic-specific sentiment analysis revealed the sentiment associated with each of these themes. For the videos belonging to the theme of Promotion and Outreach Efforts, 45.8% were neutral, 39.8% were positive, and 14.4% were negative. For the videos belonging to the theme of Treatment for COVID-19, 38.113% were positive, 31.343% were neutral, and 30.544% were negative. For the videos belonging to the theme of Conspiracy Theories Regarding COVID-19, 46.9% were positive, 31.0% were neutral, and 22.1% were negative. For the videos belonging to the theme of COVID-19 and Politics, 35.70% were positive, 32.86% were negative, and 31.44% were neutral. Third, topic-specific language analysis was performed to detect the various languages in which the video descriptions for each topic were published on YouTube. This analysis revealed multiple novel insights. For instance, for all the themes, English and Spanish were the most widely used and second most widely used languages, respectively. Fourth, the patterns of sharing these videos on other social media channels, such as Facebook and Twitter, were also investigated. The results revealed that videos containing video descriptions in English were shared the highest number of times on Facebook and Twitter. Finally, correlation analysis was performed by taking into account multiple characteristics of these videos. The results revealed that the correlation between the length of the video title and the number of tweets and the correlation between the length of the video title and the number of Facebook posts were statistically significant.
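The topic-modeling step can be sketched with a standard LDA pipeline; the two toy descriptions below are invented, and the study's actual preprocessing and corpus are not shown:

```python
# Hedged sketch: LDA topic modeling over video descriptions, yielding
# themes like the four reported in the study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["miracle cure will treat covid fast",        # invented examples
        "election politics and covid policy debate"]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(k, [terms[i] for i in topic.argsort()[-5:]])  # top terms per theme
```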
Computation doi: 10.3390/computation12020027
Authors: Büryan Apaçoğlu-Turan Kadir Kırkköprü Murat Çakan
Fused Deposition Modeling (FDM) is a commonly used 3D printing method for rapid prototyping and the fabrication of plastic components. The history of temperature variation during the FDM process plays a crucial role in the degree of bonding between layers. This study presents research on the thermal analysis of the 3D printing process using a developed simulation code. The code employs numerical discretization methods with an implicit scheme and an effective heat transfer coefficient for cooling. The computational model is validated by comparing the results with analytical solutions, demonstrating an agreement of more than 99%. The code is then utilized to perform thermal analyses of the 3D printing process. Interlayer and intralayer reheating effects, sensitivity to printing parameters, and realistic printing patterns are investigated. It is shown that concentric and zigzag paths yield similar peaks at different time intervals. Nodal temperatures can fall below the glass transition temperature (Tg) during the printing process, especially at the outer nodes of the domain and under conditions where the cooling period is longer and the printed volume per unit time is smaller. The article suggests future work to calculate the welding time at different conditions and locations for the estimation of the degree of bonding.
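An implicit step of the kind described, conduction plus convective cooling through an effective heat transfer coefficient, can be sketched in 1D as follows; the geometry, material data, and boundary handling of the actual code are assumptions:

```python
# Hedged sketch: one backward-Euler step of 1D conduction T_t = alpha T_xx
# with a Robin (convective cooling) boundary via an effective coefficient
# h_eff at the free surface and a fixed bed temperature at the other end.
import numpy as np

def implicit_step(T, dt, dx, alpha, h_eff, k, T_amb):
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    b = T.copy()
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    # Robin boundary at node 0 via ghost-node elimination
    A[0, 0] = 1 + 2 * r + 2 * r * h_eff * dx / k
    A[0, 1] = -2 * r
    b[0] += 2 * r * h_eff * dx / k * T_amb
    A[-1, -1] = 1.0                       # fixed bed temperature
    return np.linalg.solve(A, b)
```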
Computation doi: 10.3390/computation12020026
Authors: Ivanna Andrusyak Oksana Brodyak Petro Pukach Myroslava Vovk
A simple cell population growth model is proposed, where cells are assumed to have a physiological structure (e.g., a model describing cancer cell maturation, where cells are structured by maturation stage, size, or mass). The main question is whether we can guarantee, using the death rate as a control mechanism, that the total number of cells or the total cell biomass has prescribed dynamics, which may be applied to modeling the effect of chemotherapeutic agents on malignant cells. Such types of models are usually described by partial differential equations (PDE). The population dynamics are modeled by an inverse problem for PDE in our paper. The main idea is to reduce this model to a simplified integral equation that can be more easily studied by various analytical and numerical methods. Our results were obtained using the characteristics method.
Computation doi: 10.3390/computation12020025
Authors: Dejan Brkić
Closed-loop pipe systems allow the possibility of the flow of gas from both directions across each route, ensuring supply continuity in the event of a failure at one point, but their main shortcoming is in the necessity to model them using iterative methods. Two iterative methods of determining the optimal pipe diameter in a gas distribution network with closed loops are described in this paper, offering the advantage of maintaining the gas velocity within specified technical limits, even during peak demand. They are based on the following: (1) a modified Hardy Cross method with the correction of the diameter in each iteration and (2) the node-loop method, which provides a new diameter directly in each iteration. The calculation of the optimal pipe diameter in such gas distribution networks relies on ensuring mass continuity at nodes, following the first Kirchhoff law, and concluding when the pressure drops in all the closed paths are algebraically balanced, adhering to the second Kirchhoff law for energy equilibrium. The presented optimisation is based on principles developed by Hardy Cross in the 1930s for the moment distribution analysis of statically indeterminate structures. The results are for steady-state conditions and for the highest possible estimated demand of gas, while the distributed gas is treated as a noncompressible fluid due to the relatively small drop in pressure in a typical network of pipes. There is no unique solution; instead, an infinite number of potential outcomes exist, alongside infinite combinations of pipe diameters for a given fixed flow pattern that can satisfy the first and second Kirchhoff laws in the given topology of the particular network at hand.
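For reference, the classical Hardy Cross loop-flow correction that this diameter calculation generalizes is sketched below; the paper's variant corrects pipe diameters rather than flows:

```python
# Hedged sketch of the classical Hardy Cross iteration: each closed loop's
# flows are corrected by
#   delta = -sum(s_i r_i Q_i |Q_i|^(n-1)) / (n * sum(r_i |Q_i|^(n-1)))
# until loop pressure drops balance (second Kirchhoff law).
def hardy_cross(loops, r, Q, n=2.0, tol=1e-8, max_iter=100):
    """loops: lists of (pipe_index, sign); r: resistances; Q: initial flows
    that already satisfy nodal continuity (first Kirchhoff law)."""
    for _ in range(max_iter):
        worst = 0.0
        for loop in loops:
            num = sum(s * r[i] * Q[i] * abs(Q[i]) ** (n - 1) for i, s in loop)
            den = sum(n * r[i] * abs(Q[i]) ** (n - 1) for i, _ in loop)
            delta = -num / den
            for i, s in loop:
                Q[i] += s * delta
            worst = max(worst, abs(delta))
        if worst < tol:
            break
    return Q
```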
Computation doi: 10.3390/computation12020024
Authors: Clara Guilhaumon Nicolas Hascoët Francisco Chinesta Marc Lavarde Fatima Daim
Machine learning approaches are currently used to understand or model complex physical systems. In general, a substantial number of samples must be collected to create a model with reliable results. However, collecting large amounts of data is often time-consuming or expensive. Moreover, the problems of industrial interest tend to be more and more complex and depend on a high number of parameters. High-dimensional problems intrinsically involve the need for large amounts of data through the curse of dimensionality. That is why new approaches based on smart sampling techniques, such as active learning methods, have been investigated to minimize the number of samples needed to train the model. Here, we propose a technique based on a combination of the Fisher information matrix and sparse proper generalized decomposition that enables the definition of a new active learning informativeness criterion in high dimensions. We provide examples demonstrating the performance of this technique on a theoretical 5D polynomial function and on an industrial crash simulation application. The results prove that the proposed strategy outperforms the usual ones.
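A generic Fisher-information-based informativeness criterion can be illustrated as below for a linear-in-parameters model (greedy D-optimal selection); the paper's criterion couples the Fisher information matrix with sparse PGD, which is not reproduced here:

```python
# Hedged illustration: greedily pick candidate samples that most increase
# the log-determinant of the information matrix (D-optimality).
import numpy as np

def greedy_d_optimal(candidates, n_pick, ridge=1e-6):
    d = candidates.shape[1]
    M = ridge * np.eye(d)                 # regularized information matrix
    chosen = []
    for _ in range(n_pick):
        gains = [np.linalg.slogdet(M + np.outer(x, x))[1] for x in candidates]
        best = int(np.argmax(gains))
        chosen.append(best)
        M += np.outer(candidates[best], candidates[best])
    return chosen

X = np.random.default_rng(1).normal(size=(100, 5))
print(greedy_d_optimal(X, 3))
```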
Computation doi: 10.3390/computation12020023
Authors: Carlos Balsa M. Victoria Otero-Espinar Sílvio Gama
This work focuses on optimizing the displacement of a passive particle interacting with vortices located on the surface of a sphere. The goal is to minimize the energy expended during the displacement within a fixed time. The modeling of particle dynamics, whether in Cartesian or spherical coordinates, gives rise to alternative formulations of the identical problem. Thanks to these two versions of the same problem, we can assert that the algorithm, employed to transform the optimal control problem into an optimization problem, is effective, as evidenced by the obtained controls. The numerical resolution of these formulations through a direct approach consistently produces optimal solutions, regardless of the selected coordinate system.
Computation doi: 10.3390/computation12020022
Authors: Robert S. Eisenberg
Maxwell defined a ‘true’ or ‘total’ current in a way not widely used today. He said that “… true electric current … is not the same thing as the current of conduction but that the time-variation of the electric displacement must be taken into account in estimating the total movement of electricity”. We show that the true or total current is a universal property of electrodynamics independent of the properties of matter. We use mathematics without the approximation of a dielectric constant. The resulting Maxwell current law is a generalization of the Kirchhoff law of current used in circuit analysis, that also includes the displacement current. The generalization is not a long-time low-frequency approximation in contrast to the traditional presentation of Kirchhoff’s law.
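In modern notation (a standard rendering consistent with the abstract; ε0 is the vacuum permittivity and no dielectric-constant approximation is made), the total current and its exactly solenoidal character, which yields the Maxwell current law, read:

```latex
% Total ("true") current: conduction current density J plus Maxwell's
% displacement term. Taking the divergence of the Ampere--Maxwell law
% \nabla \times \mathbf{B} = \mu_0 \mathbf{J}_{\mathrm{total}} shows the
% total current is exactly solenoidal, generalizing Kirchhoff's current law.
\mathbf{J}_{\mathrm{total}} = \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\qquad
\nabla \cdot \mathbf{J}_{\mathrm{total}} = 0
```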
]]>Computation doi: 10.3390/computation12020021
Authors: Sadiq Alinsaif
Cardiac arrhythmias, characterized by deviations from the normal rhythmic contractions of the heart, pose a formidable diagnostic challenge. Early and accurate detection remains an integral component of effective diagnosis, informing critical decisions made by cardiologists. This review paper surveys diverse computational intelligence methodologies employed for arrhythmia analysis within the context of the widely utilized MIT-BIH dataset. The paucity of adequately annotated medical datasets significantly impedes advancements in various healthcare domains. Publicly accessible resources such as the MIT-BIH Arrhythmia Database serve as invaluable tools for evaluating and refining computer-assisted diagnosis (CAD) techniques specifically targeted toward arrhythmia detection. However, even this established dataset grapples with the challenge of class imbalance, further complicating its effective analysis. This review explores the current research landscape surrounding the application of graph-based approaches for both anomaly detection and classification within the MIT-BIH database. By analyzing diverse methodologies and their respective accuracies, this investigation aims to empower researchers and practitioners in the field of ECG signal analysis. The ultimate objective is to refine and optimize CAD algorithms, ultimately culminating in improved patient care outcomes.
]]>Computation doi: 10.3390/computation12020020
Authors: Mario-Ignacio González-Silva Ricardo-Armando González-Silva
This research proposes a new variant of Nowak and Sigmund’s indirect reciprocity model focused on agents’ individualism, meaning that an agent strengthens its profile to the extent that it makes a profit; the model is implemented using agent-based modeling. In addition, our model includes environment-related conditions, such as visibility and cooperative demand, and internal traits, such as obstinacy. The simulation results show that cooperators appear in a more significant proportion under conditions of low reputation visibility and high cooperative demand. Still, severe defectors take advantage of this situation and exceed the cooperators’ ratio. Some runs show a heterogeneous society only under conditions of high obstinacy and cooperative demand. In general, the simulations show diverse scenarios, including centralized, polarized, and mixed societies. The simulation results show no healthy cooperation in indirect reciprocity due to individualism.
]]>Computation doi: 10.3390/computation12020019
Authors: Hayati Tutar Ali Güneş Metin Zontul Zafer Aslan
With the rapid development of technology in recent years, the use of cameras and the production of video and image data have similarly increased. Therefore, there is a great need to develop and improve video surveillance techniques, particularly in terms of their speed, performance, and resource utilization. It is challenging to accurately detect anomalies and to increase performance by minimizing false positives, especially in crowded and dynamic areas. Therefore, this study proposes a hybrid video anomaly detection model combining multiple machine learning algorithms with pixel-based video anomaly detection (PBVAD) and frame-based video anomaly detection (FBVAD) models. In the PBVAD model, the motion influence map (MIM) algorithm based on spatio-temporal (ST) factors is used, while in the FBVAD model, the k-nearest neighbors (kNN) and support vector machine (SVM) machine learning algorithms are used in a hybrid manner. An important result of our study is the high-performance anomaly detection achieved using the proposed hybrid algorithms on the UCF-Crime data set, which contains 128 h of original real-world video data and has not been extensively studied before. The AUC performance metrics obtained using our FBVAD-kNN algorithm averaged 98.0%, while the success rates obtained using our PBVAD-MIM algorithm averaged 80.7%. Our study contributes significantly to the prevention of possible harm by detecting anomalies in video data in near real time.
]]>Computation doi: 10.3390/computation12010018
Authors: Dmitry S. Kolybalov Evgenii D. Kadtsyn Sergey G. Arkhipov
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused the recent outbreak of COVID-19 disease, the most significant challenge to public health for decades. Despite the successful development of vaccines and promising therapies, the development of novel drugs remains of interest to the scientific community. The SARS-CoV-2 main protease Mpro is one of the key proteins in the lifecycle of the virus and is considered an intriguing target. We used a structure-based drug design approach as part of the search for new inhibitors of SARS-CoV-2 Mpro and hence new potential drugs for treating COVID-19. Four potential inhibitors were modeled: (4S)-2-(2-(1H-imidazol-5-yl)ethyl)-4-amino-2-(1,3-dihydroxypropyl)-3-hydroxy-5-(1H-imidazol-5-yl)pentanal (L1), (2R,4S)-2-((1H-imidazol-4-yl)methyl)-4-chloro-8-hydroxy-7-(hydroxymethyl)octanoic acid (L2), 1,9-dihydroxy-6-(hydroxymethyl)-6-(((1S)-1,7,7-trimethylbicyclo[2.2.1]heptan-2-yl)amino)nonan-4-one (L3), and 2,4,6-tris((4H-1,2,4-triazol-3-yl)amino)benzonitrile (L4). Three-dimensional structures of the ligand–protein complexes were modeled and their potential binding efficiency assessed. Docking and molecular dynamics simulations were performed for these compounds, and a detailed trajectory analysis of the ligands’ binding conformations was carried out. Binding free energies were estimated by the MM/PBSA approach. The results suggest a high potential efficiency of the studied inhibitors.
]]>Computation doi: 10.3390/computation12010017
Authors: Ahmed S. Rashed Ehsan H. Nasr Samah M. Mabrouk
Many biotechnology sectors that depend on fluids and their physical characteristics, including the phenomenon of bioconvection, have generated a great deal of discussion. The term “bioconvection” describes the organized movement of microorganisms, such as bacteria or algae. Microorganisms that participate in bioconvection display directed movement, frequently in the form of upward or downward streaming, which can lead to the production of distinctive patterns. These patterns are driven by the interaction between the microbes’ swimming behavior and the physical forces acting on them, such as buoyancy and fluid flow. This work considers the laminar mixed-convection incompressible flow at the stagnation point with viscous and gyrotactic microorganisms in an unsteady electrically conducting hybrid nanofluid (Fe3O4-Cu/water). In addition, hybrid nanofluid flow over a horizontal porous stretched sheet, as well as external and induced magnetic field effects, can be used in biological domains, including drug delivery and microcirculatory system flow dynamics. The governing system has been reduced to a set of ordinary differential equations (ODEs) through the use of the group technique. The current research examines the impacts of multiple parameters: the Prandtl number Pr, magnetic diffusivity η0, shape factor n, microorganism diffusion coefficient Dn, Brownian motion coefficient DB, thermophoresis diffusion coefficient DT, bioconvection Peclet number Pe, temperature difference δt, and concentration difference δc. The results show that as Pr rises, the temperature, heat flux, and nanoparticle concentration all decrease. In contrast, as η0 increases, the magnetic field and velocity decrease. Heat flux, bacterial density, and temperature decrease as DB rises, yet the nanoparticle concentration increases. As DT increases, the temperature, heat flux, and nanoparticle concentration all rise while the bacterial density decreases. As δc climbs, the temperature, heat flux, nanoparticle concentration, and bacterial density all decrease. Bacterial density rises with Dn but falls with increasing δt and Pe. When n increases, the temperature and heat flux increase, but the bacterial and nanoparticle densities decrease. The physical significance and behavior of the present parameters are illustrated graphically.
]]>Computation doi: 10.3390/computation12010016
Authors: W. M. Faizal C. Y. Khor Suhaimi Shahrin M. H. M. Hazwan M. Ahmad M. N. Misbah A. H. M. Haidiezul
Obstructive sleep apnea (OSA) is a common medical condition that impacts a significant portion of the population. To better understand this condition, research has been conducted on inhalation and exhalation airflow parameters in patients with obstructive sleep apnea. However, a detailed understanding of how the cross-sectional area of the airways affects OSA patients and the airflow dynamics in the upper airway is lacking; this is the primary problem addressed by this research. The aim was to fill this gap by conducting a detailed computational fluid dynamics (CFD) analysis of the influence of cross-sectional areas on airflow characteristics during inhalation and exhalation in OSA patients. A steady-state Reynolds-averaged Navier–Stokes (RANS) approach and an SST turbulence model were utilized to simulate the upper airway airflow, and a 3D airway model was created using the Materialise Interactive Medical Image Control System (MIMICS) and ANSYS. The simulations revealed that the cross-sectional area of the airway has a notable impact on velocity, Reynolds number, and turbulent kinetic energy (TKE). TKE, which measures turbulence in different breathing scenarios among patients, could potentially be utilized to assess the severity of OSA. This research found a vital correlation between maximum pharyngeal TKE and cross-sectional areas in OSA patients, with a variance of 29.47%. A reduced cross-sectional area may result in a significant TKE rise of roughly 10.28% during inspiration and 10.18% during expiration.
]]>Computation doi: 10.3390/computation12010015
Authors: Najmu Nissa Sanjay Jamwal Mehdi Neshat
This paper addresses the global surge in heart disease prevalence and its impact on public health, stressing the need for accurate predictive models. The timely identification of individuals at risk of developing cardiovascular ailments is paramount for implementing preventive measures and timely interventions. The World Health Organization (WHO) reports that cardiovascular diseases, responsible for an alarming 17.9 million annual fatalities, constitute a significant 31% of the global mortality rate. The intricate clinical landscape, characterized by inherent variability and a complex interplay of factors, poses challenges for accurately diagnosing the severity of cardiac conditions and predicting their progression. Consequently, early identification emerges as a pivotal factor in the successful treatment of heart-related ailments. This research presents a comprehensive framework for the prediction of cardiovascular diseases, leveraging advanced boosting techniques and machine learning methodologies, including CatBoost, Random Forest, Gradient Boosting, LightGBM, and AdaBoost. Focusing on “Early Heart Disease Prediction using Boosting Techniques”, this paper aims to contribute to the development of robust models capable of reliably forecasting cardiovascular health risks. Model performance is rigorously assessed using a substantial dataset on heart illnesses from the UCI machine learning repository. With 26 feature-based numerical and categorical variables, this dataset encompasses 8763 samples collected globally. The empirical findings highlight AdaBoost as the preeminent performer, achieving a notable accuracy of 95% and excelling in metrics such as negative predictive value (0.83), false positive rate (0.04), false negative rate (0.04), and false discovery rate (0.01). These results underscore AdaBoost’s superiority in predictive accuracy and overall performance compared to alternative algorithms, contributing valuable insights to the field of cardiovascular health prediction.
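As an illustration of the boosting setup described above, the sketch below trains scikit-learn's AdaBoostClassifier and reports two of the metrics the paper cites; synthetic data stands in for the UCI heart dataset, which is not reproduced here, and the hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in: 26 features, mirroring the dataset's feature count.
X, y = make_classification(n_samples=2000, n_features=26, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = AdaBoostClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
y_pred = model.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
print("accuracy:", accuracy_score(y_te, y_pred))
print("false positive rate:", fp / (fp + tn))   # metric reported above
print("false negative rate:", fn / (fn + tp))   # metric reported above
```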
]]>Computation doi: 10.3390/computation12010014
Authors: Nayereh Tanha Behrouz Parsa Moghaddam Mousa Ilie
This study presents an algorithmically efficient approach to address the complexities associated with nonlocal variable-order operators characterized by diverse definitions. The proposed method employs integro-spline quasi-interpolation to approximate these operators, aiming for enhanced accuracy and computational efficiency. We conduct a thorough comparison of the outcomes obtained through this approach with other established techniques, including finite difference, IQS, and B-spline methods, documented in the applied mathematics literature for handling nonlocal variable-order derivatives and integrals. The numerical results showcased in this paper validate the notable advantages offered by our approach. Furthermore, this study examines the impact of selecting different variable-order values, contributing to a deeper understanding of the algorithm’s behavior across a spectrum of scenarios. In summary, this research seeks to provide a practical and effective solution to the challenges associated with nonlocal variable-order operators, contributing to the applied mathematics literature.
]]>Computation doi: 10.3390/computation12010013
Authors: El-Awady Attia Md Maniruzzaman Miah Abu Sayeed Arif Ali AlArjani Mahmud Hasan Md Sharif Uddin
This paper focuses on production systems that may produce a proportion of recyclable defective products. The developed model is called an Economic Recycle Quantity (ERQ) model, with the assumption of full recovery of defective items. The defective parts are collected during the production-off time and can be used during the next production cycle of the same category. The demand rate of the non-defective items is a two-level piecewise function: one level during the production-run time and another during the production-off time. The developed model aims to optimize the total inventory cost, the order quantity, and the amount of recyclable defective items that represents the ERQ. The mathematical formulation of the model is deduced theoretically, the model is solved analytically, and the optimal results are illustrated. A sensitivity analysis is carried out to investigate the effect of varying system parameters and to validate the proposed model. The results of the sensitivity analysis show that accounting for defective part recycling reduces the total inventory cost, since less raw material is required. The cost reduction is about 1%, and the environmental benefit is even more appreciable. Furthermore, the managerial implications are described, and future perspectives are discussed.
]]>Computation doi: 10.3390/computation12010012
Authors: Md Rakibuzzaman Sang-Ho Suh Hyung-Woon Roh Kyung Hee Song Kwang Chul Song Ling Zhou
Small submersible drainage pumps are used to discharge leaking water and rainwater in buildings. In an emergency (e.g., heavy rain or an accident), advance monitoring of the flow rate is essential for optimal operation, given that the pump may operate abnormally when the water level rises rapidly. Moreover, pump performance optimization is crucial for energy-saving policy. Therefore, it is necessary to meet the challenges of submersible pump systems, including sustainability and pump efficiency. The final goal of this study was to develop an energy-saving and highly efficient submersible drainage pump capable of performing efficiently in emergencies. In particular, this paper targeted the hydraulic performance improvement of a submersible drainage pump model. Prior to the development of driving-mode-related technology capable of emergency response, a way to improve the performance characteristics of the existing submersible drainage pump was sought. Instead of designing a new pump, the current pump was disassembled and reverse engineered. Numerical simulation was performed to analyze the flow characteristics and pump efficiency, and an experiment was carried out to obtain the performance and validate the numerical results. The results reveal that changing the cross-sectional shape of the impeller reduced the flow separation and enhanced the velocity and pressure distributions; it also reduced the power consumption and increased the efficiency. The results also show that the pump’s efficiency increased by 5.56% at a discharge rate of 0.17 m³/min, and the overall average efficiency increased by 6.53%. It was concluded that the submersible pump design method is suitable for the numerical design of an optimized pump impeller and casing. This paper provides insight into the design optimization of pumps.
]]>Computation doi: 10.3390/computation12010011
Authors: İrem Bağlan Erman Aslan
A two-dimensional heat diffusion problem with a heat source, which is a quasilinear parabolic problem, is examined analytically and numerically. Periodic boundary conditions are employed. As the problem is nonlinear, Picard’s successive approximation theorem is utilized. We demonstrate the existence, uniqueness, and continuous dependence of the solution on the data using the generalized Fourier method, under specific conditions of natural regularity and consistency imposed on the input data. For the numerical solution, an implicit finite difference scheme is used. The results obtained from the analytical and numerical solutions closely match each other.
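To illustrate the kind of implicit finite-difference scheme mentioned above, here is a minimal backward-Euler step for the linear 1D heat equation u_t = a u_xx with periodic boundary conditions; the paper's problem is 2D and quasilinear, so this sketch only shows the implicit step and the periodic coupling, under assumed grid and coefficient values.

```python
import numpy as np

N, a, dx, dt = 64, 1.0, 1.0 / 64, 1e-3
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)                     # initial condition

# Periodic second-difference matrix L (cyclic tridiagonal structure).
L = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
L[0, -1] = L[-1, 0] = 1                       # wrap-around (periodic) entries

# Backward Euler: (I - a*dt/dx^2 * L) u^{n+1} = u^n, unconditionally stable.
A = np.eye(N) - (a * dt / dx**2) * L

for _ in range(100):                          # advance 100 implicit time steps
    u = np.linalg.solve(A, u)

print("max |u| after 100 steps:", np.abs(u).max())
```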
]]>Computation doi: 10.3390/computation12010010
Authors: Eirini Maria Kanakaki Anna Samnioti Vassilis Gaganis
Flash calculations are essential in reservoir engineering applications, most notably in compositional flow simulation and separation processes, to provide phase distribution factors, known as k-values, at a given pressure and temperature. The calculation output is subsequently used to estimate composition-dependent properties of interest, such as the equilibrium phases’ molar fraction, composition, density, and compressibility. However, when the flash conditions approach criticality, minor inaccuracies in the computed k-values may lead to significant deviations in the dependent properties, which are eventually inherited by the simulator, leading to large errors in the simulation. Although several machine-learning-based regression approaches have emerged to drastically accelerate flash calculations, the criticality issue persists. To address this problem, a novel resampling technique for the ML models’ training data population is proposed, which aims to fine-tune the training dataset distribution and optimally exploit the models’ learning capacity across various flash conditions. The results demonstrate significantly improved accuracy in predicting phase behavior near criticality, offering valuable contributions not only to the subsurface reservoir engineering industry but also to the broader field of thermodynamics. By understanding and optimizing the model’s training, this research enables more precise predictions and better-informed decision-making in domains involving phase separation phenomena. The proposed technique is applicable to any ML-based regression problem where properties dependent on the machine output, rather than the model output itself, are of interest.
]]>Computation doi: 10.3390/computation12010009
Authors: Kanellos Toudas Stefanos Archontakis Paraskevi Boufounou
This study focuses on testing the efficiency of alternative bankruptcy prediction models (Altman, Ohlson, Zmijewski) and on assessing the possible reasons why the prevailing model is or is not confirmed. Data from the financial statements of listed Greek construction companies before the economic crisis were utilized. The results showed that Altman’s main predictive model, as well as the revised models, have low overall predictability for all three years before bankruptcy.
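For reference, the classic 1968 Altman Z-score underlying such tests combines five financial ratios with fixed coefficients; the sketch below computes it with the standard coefficients and conventional cut-offs (Z < 1.81 distress zone, Z > 2.99 safe zone), on illustrative figures rather than the study's data. The revised Altman models use different coefficients and are not reproduced here.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Classic (1968) Altman Z-score from five balance-sheet ratios."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Illustrative figures (in millions) for a hypothetical construction firm.
z = altman_z(10, 15, 8, 40, 120, 100, 60)
zone = "safe" if z > 2.99 else "distress" if z < 1.81 else "grey"
print(f"Z = {z:.2f} ({zone} zone)")
```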
]]>Computation doi: 10.3390/computation12010008
Authors: Olexander Grebenikov Andrii Humennyi Serhii Svitlychnyi Vasyl Lohinov Valerii Matviienko
The typical and most widespread stress concentrators in the lower wing panels of aircraft are the drain holes located on the stringer vertical ribs. These are prime sources for the initiation and development of fatigue cracks, which lead to early failure of the wing structure. Therefore, improving fatigue life in these critical areas is a significant research issue. Two combined methods of surface plastic treatment of the areas around drain holes are discussed in this paper. Using the finite element method and ANSYS software, we created a finite element model and obtained nonlinear solution results for the case of tension in a plate with three holes. In addition, the development of residual stress due to the surface plastic treatment of the hole-adjacent areas was taken into account. It is shown that after surface treatment of the corresponding areas around the holes, a residual stress exceeding the yield stress of the plate material is induced. When combined with the alternating tensile stress, this reduces the amplitude of the local stresses, thus increasing the number of stress cycles before failure. The benefits of this technology were confirmed by fatigue test results, which include the fatigue failure types of the plates. Graphs showing the impact of the combined surface treatment methods on the number of cycles to failure were also plotted.
]]>Computation doi: 10.3390/computation12010007
Authors: Khalid Hattaf
This study develops a new definition of a fractional derivative that mixes the definitions of fractional derivatives with singular and non-singular kernels. This definition encompasses many types of fractional derivatives, such as the Riemann–Liouville and Caputo fractional derivatives for singular kernel types, as well as the Caputo–Fabrizio, the Atangana–Baleanu, and the generalized Hattaf fractional derivatives for non-singular kernel types. The associated fractional integral of the new mixed fractional derivative is rigorously introduced. Furthermore, a novel numerical scheme is developed to approximate the solutions of a class of fractional differential equations (FDEs) involving the mixed fractional derivative. Finally, an application in computational biology is presented.
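For orientation, two representative definitions that such a mixed derivative must encompass, for order 0 < α < 1, are the singular-kernel Caputo derivative and the non-singular-kernel Caputo–Fabrizio derivative; the paper's new mixed definition itself is not reproduced here.

```latex
% Caputo derivative (singular power-law kernel):
{}^{C}D^{\alpha} f(t)
  \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\, ds ,
% Caputo--Fabrizio derivative (non-singular exponential kernel;
% M(\alpha) is a normalization function with M(0) = M(1) = 1):
{}^{CF}D^{\alpha} f(t)
  \;=\; \frac{M(\alpha)}{1-\alpha}
        \int_0^t f'(s)\,
        \exp\!\left( -\frac{\alpha\,(t-s)}{1-\alpha} \right) ds .
```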
]]>Computation doi: 10.3390/computation12010006
Authors: Yashwanth Reddy Konda Vamsi Krishna Ponnaganti Peram Venkata Sivarami Reddy R. Raja Singh Paolo Mercorelli Edison Gundabattini Darius Gnanaraj Solomon
In recent times, there has been an increased demand for electric vehicles. In this context, the energy management of electric motors, which are an important constituent of electric vehicles, plays a pivotal role. A lot of research has been conducted on the optimization of heat flow through electric motors, thus reducing the wastage of energy via heat. Futuristic power sources may increasingly rely on cutting-edge innovations like energy harvesting and self-powered induction motors. In this context, effective thermal management techniques are discussed in this paper. Emphasis is given to the potential energy losses, hotspots, the influence of overheating on motor efficiency, different cooling strategies, certain experimental approaches, and power control techniques. Two types of thermal analysis computation methods, namely the lumped-parameter circuit method (LPCM) and the finite element method (FEM), are discussed, and different cooling strategies are reviewed. The experimental research showed that the efficiency was 11% greater with the copper rotor than with the aluminum rotor; each rotor type was reviewed based on its temperature rise and efficiency at higher temperatures. Compared to the air-cooling method, the water-cooling method reduced the working temperatures by 39.49% at the end windings, 41.67% at the side windings, and by a large margin of 56.95% at the yoke of the induction motor; hence, the water-cooling method is better. Lastly, modern cooling strategies are proposed to provide an effective thermal management solution for squirrel-cage induction motors.
]]>Computation doi: 10.3390/computation12010005
Authors: Andrea Moreno-Ceballos María Eugenia Castro Norma A. Caballero Liliana Mammino Francisco J. Melendez
In the search to cover the urgent need to combat infectious diseases, natural products have gained attention in recent years. The caespitate molecule, isolated from the plant Helichrysum caespititium of the Asteraceae family, is used in traditional African medicine. Caespitate is an acylphloroglucinol with biological activity. Acylphloroglucinols have attracted attention for treating tuberculosis due to their structural characteristics, highlighting the stabilizing effect of their intramolecular hydrogen bonds (IHBs). In this work, a conformational search for caespitate was performed using the MM method. Subsequently, DFT calculations with the APFD functional were used for full optimization and vibrational frequencies, obtaining stable structures. A population analysis was performed to predict the distribution of the most probable conformers. The calculations were performed in the gas phase and in solution using the implicit SMD model for water, chloroform, acetonitrile, and DMSO solvents. Additionally, the multiscale ONIOM QM1/QM2 model was used to simulate the explicit solvent. The implicit and explicit solvent effects on the global reactivity indexes were evaluated using the conceptual-DFT approach. In addition, the QTAIM approach was applied to analyze the properties of the IHBs of the most energetically stable and most populated conformers. The results indicate that the most stable and most populated conformer adopts an extended conformation in the gas phase and in chloroform, whereas in water, acetonitrile, and DMSO it adopts a hairpin shape. The optimized structures are well preserved in explicit solvent, and the interaction energies of the IHBs were lower in explicit than in implicit solvents due to the non-covalent interactions formed with the solvent molecules. Finally, both methodologies, with implicit and explicit solvents, were validated against 1H and 13C NMR experimental data. In both cases, the results agreed with the experimental data reported in CDCl3 solvent.
]]>Computation doi: 10.3390/computation12010004
Authors: Konstantinos Poulinakis Dimitris Drikakis Ioannis W. Kokkinakis S. Michael Spottswood Talib Dbouk
This paper concerns the application of a long short-term memory model (LSTM) for high-resolution reconstruction of turbulent pressure fluctuation signals from sparse (reduced) data. The model’s training was performed using data from high-resolution computational fluid dynamics (CFD) simulations of high-speed turbulent boundary layers over a flat panel. During the preprocessing stage, we employed cubic spline functions to increase the fidelity of the sparse signals and subsequently fed them to the LSTM model for a precise reconstruction. We evaluated our reconstruction method with the root mean squared error (RMSE) metric and via inspection of power spectrum plots. Our study reveals that the model achieved a precise high-resolution reconstruction of the training signal and could be transferred to new unseen signals of a similar nature with extremely high success. The numerical simulations show promising results for complex turbulent signals, which may be experimentally or computationally produced.
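A rough sketch of the pipeline described above, assuming a synthetic stand-in signal and hypothetical window and layer sizes: a sparse signal is upsampled with SciPy's cubic spline, and short windows of the interpolated signal are fed to a small Keras LSTM trained with a mean-squared-error (RMSE-equivalent) objective.

```python
import numpy as np
from scipy.interpolate import CubicSpline
import tensorflow as tf

# Synthetic stand-in for a turbulent pressure signal (high resolution).
t_hi = np.linspace(0, 1, 1024)
p_hi = np.sin(40 * t_hi) + 0.3 * np.random.randn(1024)

# Sparse (reduced) samples, then cubic-spline upsampling back to the fine grid.
t_lo = t_hi[::8]
spline = CubicSpline(t_lo, p_hi[::8])
p_spline = spline(t_hi)

# Sliding windows: spline values as input, true high-res values as target.
W = 32
X = np.stack([p_spline[i:i + W] for i in range(len(t_hi) - W)])[..., None]
y = p_hi[W:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(W, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```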
]]>Computation doi: 10.3390/computation12010003
Authors: Moses N. Arthur Kristeen Bebla Emmanuel Broni Carolyn Ashley Miriam Velazquez Xianin Hua Ravi Radhakrishnan Samuel K. Kwofie Whelton A. Miller
The prognosis of mixed-lineage leukemia (MLL) has remained a significant health concern, especially for infants. The scarcity of treatments available for this aggressive type of leukemia has been an ongoing problem. Chromosomal translocations of the KMT2A gene are known as MLL, and they express MLL fusion proteins. The protein menin is an important oncogenic cofactor for these MLL fusion proteins, thus providing a new avenue for treatments against this subset of acute leukemias. In this study, we report results of a structure-based drug design (SBDD) approach used to discover potential novel MLL-mediated leukemia inhibitors from natural products targeting menin. The three-dimensional (3D) protein model was derived from the Protein Data Bank (PDB ID: 4GQ4), and EasyModeller 4.0 and I-TASSER were used to fix missing residues during rebuilding. Out of the ten protein models generated (five each from EasyModeller and I-TASSER), one model was selected. The selected model demonstrated the most reasonable quality, with 75.5% of residues in the most favored regions, 18.3% in additionally allowed regions, 3.3% in generously allowed regions, and 2.9% in disallowed regions. A ligand library containing 25,131 ligands from a Chinese database, together with three known menin inhibitors, was virtually screened using AutoDock Vina. The top 10 compounds, namely ZINC000103526876, ZINC000095913861, ZINC000095912705, ZINC000085530497, ZINC000095912718, ZINC000070451048, ZINC000085530488, ZINC000095912706, ZINC000103580868, and ZINC000103584057, had binding energies of −11.0, −10.7, −10.6, −10.2, −10.2, −9.9, −9.9, −9.9, −9.9, and −9.9 kcal/mol, respectively. To confirm the stability of the menin–ligand complexes and the binding mechanisms, molecular dynamics simulations, including molecular mechanics Poisson–Boltzmann surface area (MM/PBSA) computations, were performed. The amino acid residues found to be potentially crucial for ligand binding included Phe243, Met283, Cys246, Tyr281, Ala247, Ser160, Asn287, Asp185, Ser183, Tyr328, Asn249, His186, Leu182, Ile248, and Pro250. MI-2-2 and PubChem CIDs 71777742 and 36294 were shown to possess anti-menin properties, which justifies the need to experimentally determine the activity of the identified compounds. The compounds identified herein were found to have good pharmacological profiles and negligible toxicity. Additionally, these compounds were predicted to be antileukemic, antineoplastic, chemopreventive, and apoptotic agents. The 10 natural compounds can be further explored as potential novel agents for the effective treatment of MLL-mediated leukemia.
]]>Computation doi: 10.3390/computation12010002
Authors: Tatsuma Kato Kosuke Nishizawa Mingcong Deng
Recently, microreactors, which are tubular reactors capable of fast and highly efficient chemical reactions, have attracted attention. However, precise temperature control is required because temperature changes due to reaction heat can cause reactions to proceed differently from those designed. In a previous study, a single-input/single-output nonlinear control system was proposed using a model in which the microreactor is divided into three regions and the thermal equation is formulated considering the temperature gradient, but it could not control two different temperatures. In this paper, a multi-input, multi-output (MIMO) nonlinear control system was designed using operator theory. In addition, to reduce costs when the number of parallel microreactors increases, a sensorless control method using M-SVR with a generalized Gaussian kernel was incorporated into the MIMO nonlinear control system, and the effectiveness of the proposed method was confirmed via experimental results.
]]>Computation doi: 10.3390/computation12010001
Authors: Junlong Zheng Chaiyan Jettanasen Pathomthat Chiradeja
Our research focuses on a new optimization algorithm and makes three contributions. First, the Maritime Search and Rescue Algorithm (MSRA) is proposed. The algorithm not only offers good optimization performance but can also plan a path to the best site, a combination of capabilities not found in existing intelligent optimization algorithms. Second, the mathematical model of the MSRA was established, and pseudo-code for the computer program was written. Third, the MSRA was verified experimentally.
]]>Computation doi: 10.3390/computation11120255
Authors: Simone Brogi Ilaria Guarino Lorenzo Flori Hajar Sirous Vincenzo Calderone
In this study, we applied a computer-based protocol to identify novel antioxidant agents that can reduce oxidative stress (OxS), one of the main hallmarks of several disorders, including cancer, cardiovascular disease, and neurodegenerative disorders. Accordingly, the identification of novel and safe agents, particularly natural products, could represent a valuable strategy to prevent and slow down the cellular damage caused by OxS. Employing two properly prepared chemical libraries, comprising both natural products and world-approved and investigational drugs, we performed a high-throughput docking campaign to identify potential compounds able to target the KEAP1 protein. This protein is the main cellular component, along with NRF2, involved in the activation of the antioxidant cellular pathway. Furthermore, several post-search filtering approaches were applied to improve the reliability of the computational protocol, such as the evaluation of ligand binding energies and the assessment of the ADMET profile, to provide a final set of compounds whose binding stability was evaluated by molecular dynamics studies. By following the screening protocol mentioned above, we identified a few undisclosed natural products and drugs that show great promise as antioxidant agents. Among the natural products, isoxanthochymol, gingerenone A, and meranzin hydrate showed the best predicted profiles for behaving as antioxidant agents, whereas, among the drugs, nedocromil, zopolrestat, and bempedoic acid could be considered for a repurposing approach to identify possible antioxidant agents. In addition, they showed satisfactory ADMET properties with a safe profile, suggesting possible long-term administration. In conclusion, the identified compounds represent a valuable starting point for the identification of novel, safe, and effective antioxidant agents to be employed in cell-based tests and in vivo studies to properly evaluate their action against OxS and the optimal dosage for exerting antioxidant effects.
]]>Computation doi: 10.3390/computation11120254
Authors: Anton Tolstikhin
The sampling problem has recently gained wide popularity in the field of autonomous mobile agent control due to the wide range of practical and fundamental problems described within its framework. This paper considers a combined decentralized control strategy that incorporates elements of both biologically inspired and gradient-based approaches. Its key feature is multitasking: several tasks included in the sampling problem can be solved in parallel, namely the localization and monitoring of several sources and the restoration of given level-line boundaries.
]]>Computation doi: 10.3390/computation11120253
Authors: Shixin Xu Robert Eisenberg Zilong Song Huaxiong Huang
This study introduces a mathematical model for electrolytic chemical reactions, employing an energy variation approach grounded in classical thermodynamics. Our model combines electrostatics and chemical reactions within well-defined energetic and dissipative functionals. Extending the energy variation method to open systems consisting of charge, mass, and energy inputs, this model explores energy transformation from one form to another. Electronic devices and biological channels and transporters are open systems. By applying this generalized approach, we investigate the conversion of an electrical current to a proton flow by cytochrome c oxidase, a vital mitochondrial enzyme contributing to ATP production, the ‘energetic currency of life’. This model shows how the enzyme’s structure directs currents and mass flows governed by energetic and dissipative functionals. The interplay between electron and proton flows, guided by Kirchhoff’s current law within the mitochondrial membrane and the mitochondria itself, determines the function of the systems, where electron flows are converted into proton flows and gradients. This important biological system serves as a practical example of the use of energy variation methods to deal with electrochemical reactions in open systems. We combine chemical reactions and Kirchhoff’s law in a model that is much simpler to implement than a full accounting of all the charges in a chemical system.
]]>Computation doi: 10.3390/computation11120252
Authors: Carlos Macancela Manuel Eugenio Morocho-Cayamcela Oscar Chang
In August 2020, the World Health Assembly launched a global initiative to eliminate cervical cancer by 2030, setting three primary targets. One key goal is to achieve a 70% screening coverage rate for cervical cancer, primarily relying on the precise analysis of Papanicolaou (Pap) or digital Pap smears. However, the responsibility of reviewing Pap smear samples to identify potentially cancerous cells primarily falls on pathologists, a task known to be exceptionally challenging and time-consuming. This paper proposes a solution to address the shortage of pathologists for cervical cancer screening. It leverages the OpenAI-GYM API to create a deep reinforcement learning environment utilizing liquid-based Pap smear images. By employing the Proximal Policy Optimization algorithm, autonomous agents navigate Pap smear images, identifying cells with the aid of rewards, penalties, and accumulated experiences. Furthermore, the use of a pre-trained convolutional neural network like ResNet50 enhances the classification of detected cells based on their potential for malignancy. The ultimate goal of this study is to develop a highly efficient, automated Papanicolaou analysis system, ultimately reducing the need for human intervention in regions with limited pathologists.
]]>Computation doi: 10.3390/computation11120251
Authors: Sergei Abramovich
The paper deals with the exploration of subsequences of polygonal numbers of different sides derived through step-by-step elimination of terms of the original sequences. Eliminations are based on special rules similarly to how the classic sieve of Eratosthenes was developed through the elimination of multiples of primes. These elementary number theory activities, appropriate for technology-enhanced secondary mathematics education courses, are supported by a spreadsheet, Wolfram Alpha, Maple, and the Online Encyclopedia of Integer Sequences. General formulas for subsequences of polygonal numbers referred to in the paper as polygonal number sieves of order k, that include base-two exponential functions of k, have been developed. Different problem-solving approaches to the derivation of such and other sieves based on the technology-immune/technology-enabled framework have been used. The accuracy of computations and mathematical reasoning is confirmed through the technique of computational triangulation enabled by using more than one digital tool. A few relevant excerpts from the history of mathematics are briefly featured.
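As a small self-contained illustration of the objects discussed above: the general term of an s-gonal number is P(s, n) = ((s-2)n² - (s-4)n)/2, and a sieve can be built by repeatedly eliminating terms according to a rule. The elimination rule below (drop every second remaining term, k times) is only an illustrative stand-in, not the paper's rules; note that applying it k times keeps every 2^k-th original term, which is one natural way base-two exponential functions of k arise in such sieves.

```python
def polygonal(s, count):
    """First `count` s-gonal numbers: s=3 triangular, s=4 square, etc."""
    return [((s - 2) * n * n - (s - 4) * n) // 2 for n in range(1, count + 1)]

def sieve_order_k(seq, k):
    """Apply k rounds of eliminating every second remaining term."""
    for _ in range(k):
        seq = seq[::2]   # keep the 1st, 3rd, 5th, ... remaining terms
    return seq

triangular = polygonal(3, 32)
print(triangular[:8])                # 1, 3, 6, 10, 15, 21, 28, 36
print(sieve_order_k(triangular, 2))  # an order-2 sieve of triangular numbers
```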
]]>Computation doi: 10.3390/computation11120250
Authors: José G. Hasbani Evan M. C. Kias Roberto Suarez-Rivera Victor M. Calo
The laboratory measurements conducted on Vaca Muerta formation samples demonstrate stress-dependent elastic behavior and compaction under representative in situ conditions. The experimental results reveal that the analyzed samples display elastoplastic deformation and shear-enhanced compaction as primary plasticity mechanisms. These experimental findings contradict the expected linear elastic response anticipated before brittle failure, as reported in several studies on the geomechanical characterization of the Vaca Muerta formation. Therefore, we present a comprehensive laboratory analysis of Vaca Muerta formation samples showing their nonlinear elastic behavior and irrecoverable shear-enhanced compaction. Additionally, we calibrate an elastoplastic constitutive model based on these experimental observations. The resulting model accurately reproduces the observed phenomena, playing a pivotal role in geoengineering applications within the energy industry.
]]>Computation doi: 10.3390/computation11120249
Authors: Muratkan Madiyarov Nurlan Temirbekov Nurlana Alimbekova Yerzhan Malgazhdarov Yerlan Yergaliyev
This paper proposes a new approach to predicting the distribution of harmful substances in the atmosphere based on the combined use of the parameter estimation technique and machine learning algorithms. The essence of the proposed approach is the assumption that the concentration values predicted by machine learning algorithms at observation points can be used to refine the pollutant concentration field when solving a differential equation of the convection-diffusion-reaction type. This approach reduces to minimizing an objective functional on some admissible set by choosing the atmospheric turbulence coefficient. We consider two atmospheric turbulence models and recover their unknown parameters by using the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. Three ensemble machine learning algorithms are analyzed for the prediction of concentration values at observation points, and a comparison of the predicted values with the measurement results is presented. The proposed approach has been tested on an example of two cities in the Republic of Kazakhstan. In addition, due to the lack of data on pollution sources and their intensities, an approach for identifying this information is presented.
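A minimal sketch of the parameter-estimation step: choose a model parameter (here a single turbulence-like coefficient) to minimize the misfit between model-predicted and observed concentrations, using SciPy's implementation of the limited-memory BFGS algorithm. The forward model below is a hypothetical surrogate, not the paper's convection-diffusion-reaction solver.

```python
import numpy as np
from scipy.optimize import minimize

obs_points = np.linspace(0.1, 1.0, 10)
c_observed = np.exp(-2.5 * obs_points)            # synthetic "measurements"

def forward_model(k, x):
    # Hypothetical surrogate: concentration decays with coefficient k.
    return np.exp(-k * x)

def objective(theta):
    k = theta[0]
    residual = forward_model(k, obs_points) - c_observed
    return 0.5 * np.sum(residual ** 2)            # least-squares functional

result = minimize(objective, x0=[1.0], method="L-BFGS-B",
                  bounds=[(1e-6, 10.0)])
print("recovered coefficient:", result.x[0])      # ~2.5
```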
]]>Computation doi: 10.3390/computation11120248
Authors: Jonathan Sinclair Paul John Taylor
This study examined the effects of minimal, maximal and conventional running footwear on tibial strains and stress fracture probability using finite element and probabilistic analyses. The current investigation examined fifteen males running in three footwear conditions (minimal, maximal and conventional). Kinematic data were collected during overground running at 4.0 m/s using an eight-camera motion-capture system and ground reaction forces using a force plate. Tibial strains were quantified using finite element modelling and stress fracture probability calculated via probabilistic modelling over 100 days of running. Ninetieth percentile tibial strains were significantly greater in minimal (4681.13 με) (p < 0.001) and conventional (4498.84 με) (p = 0.007) footwear compared to maximal (4069.65 με). Furthermore, tibial stress fracture probability was significantly greater in minimal footwear (0.22) (p = 0.047) compared to maximal (0.15). The observations from this investigation show that compared to minimal footwear, maximal running shoes appear to be effective in attenuating runners’ likelihood of developing a tibial stress fracture.
]]>Computation doi: 10.3390/computation11120247
Authors: Louis H. Kauffman
This paper explores a formal model of autopoiesis as presented by Maturana, Uribe and Varela, and analyzes this model and its implications through the lens of the notions of eigenforms (fixed points) and the intricacies of Goedelian coding. The paper discusses the connection between autopoiesis and eigenforms and a variety of different perspectives and examples. The paper puts forward original philosophical reflections and generalizations about its various conclusions concerning specific examples, with the aim of contributing to a unified way of understanding (formal models of) living systems within the context of natural sciences, and to see the role of such systems and the formation of information from the point of view of analogs of biological construction. To this end, we pay attention to models for fixed points, self-reference and self-replication in formal systems and in the description of biological systems.
]]>Computation doi: 10.3390/computation11120246
Authors: Catur Supriyanto Abu Salam Junta Zeniarja Adi Wijaya
This research paper presents a deep-learning approach to the early detection of skin cancer using image augmentation techniques. We introduce a two-stage image augmentation process utilizing geometric augmentation and a generative adversarial network (GAN) to differentiate skin cancer categories. The public HAM10000 dataset was used to evaluate the proposed model. Various pre-trained convolutional neural network (CNN) models, including Xception, Inceptionv3, Resnet152v2, EfficientnetB7, InceptionresnetV2, and VGG19, were employed. Our approach demonstrates an accuracy of 96.90%, precision of 97.07%, recall of 96.87%, and F1-score of 96.97%, surpassing the performance of other state-of-the-art methods. The paper also discusses the use of Shapley Additive Explanations (SHAP), an interpretability technique for skin cancer diagnosis, which can help clinicians understand the reasoning behind a diagnosis and improve trust in the system. Overall, the proposed method presents a promising approach to automated skin cancer detection that could improve patient outcomes and reduce healthcare costs.
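A sketch of the first (geometric) augmentation stage only, applied in front of one of the pre-trained backbones named above; the GAN stage is beyond a short example. The specific layers and factors are assumptions, chosen as typical geometric transforms for dermoscopic images such as those in HAM10000.

```python
import tensorflow as tf

# Stage 1: on-the-fly geometric augmentation (flips, rotation, zoom, shift).
geometric_augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),            # up to ~36 degrees
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomTranslation(0.05, 0.05),
])

# Pre-trained Xception backbone, reused as a feature extractor.
backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    geometric_augment,
    backbone,
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 HAM10000 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```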
]]>Computation doi: 10.3390/computation11120245
Authors: Martin Juříček Roman Parák Jakub Kůdela
The significance of robot manipulators in engineering applications and scientific research has increased substantially in recent years. The utilization of robot manipulators to save labor and increase production accuracy is becoming common practice in industry. Evolutionary computation (EC) techniques are optimization methods that have found use in diverse engineering fields. This state-of-the-art review focuses on recent developments and progress in their applications to industrial robotics, especially for path planning problems that need to satisfy various constraints implied by both the geometry of the robot and its surroundings. We discuss the most-used EC methods and the modifications that suit this particular purpose, as well as the different simulation environments used for their development. Lastly, we outline possible research gaps and the expected directions that future research in this area will take.
]]>Computation doi: 10.3390/computation11120244
Authors: Stefan Hensel Marin B. Marinov Raphael Panter
In recent years, the advancement of micro-aerial vehicles has been rapid, leading to their widespread utilization across various domains due to their adaptability and efficiency. This research paper focuses on the development of a camera-based tracking system specifically designed for low-cost drones. The primary objective of this study is to build a system capable of detecting objects and locating them on a map in real time. Detection and positioning are achieved solely through the drone’s camera and sensors. To accomplish this goal, several deep learning algorithms were assessed and adopted based on their suitability for the system. Object detection is based upon a single-shot detector architecture, chosen for maximum computation speed, and tracking is based upon the combination of deep neural-network-based features with an efficient sorting strategy. Subsequently, the developed system is evaluated using diverse metrics to determine its detection and tracking performance. To further validate the approach, the system was employed in the real world to demonstrate its possible deployment. For this, two distinct scenarios were chosen to adjust the algorithms and system setup: a search and rescue scenario with user interaction and precise geolocalization of missing objects, and a livestock control scenario, showing the capability of surveying individual members and keeping track of their number and area. The results demonstrate that the system is capable of operating in real time, and the evaluation verifies that the implemented system enables precise and reliable determination of detected object positions. The ablation studies prove that object identification through small variations in phenotypes is feasible with our approach.
]]>Computation doi: 10.3390/computation11120243
Authors: Alexander Feoktistov Alexei Edelev Andrei Tchernykh Sergey Gorsky Olga Basharina Evgeniy Fereferov
Implementing high-performance computing (HPC) to solve problems in energy infrastructure resilience research in a heterogeneous environment based on an in-memory data grid (IMDG) presents a challenge to workflow management systems. Large-scale energy infrastructure research needs multi-variant planning and tools to allocate and dispatch distributed computing resources that pool together to let applications share data, taking into account the subject domain specificity, resource characteristics, and quotas for resource use. To that end, we propose an approach to implement HPC-based resilience analysis using our Orlando Tools (OT) framework. To dynamically scale computing resources, we provide their integration with the relevant software, identifying key application parameters that can have a significant impact on the amount of data processed and the amount of resources required. We automate the startup of the IMDG cluster to execute workflows. To demonstrate the advantage of our solution, we apply it to evaluate the resilience of the existing energy infrastructure model. Compared to similar approaches, our solution allows us to investigate large infrastructures by modeling multiple simultaneous failures of different types of elements down to the number of network elements. In terms of task and resource utilization efficiency, we achieve almost linear speedup as the number of nodes of each resource increases.
]]>Computation doi: 10.3390/computation11120242
Authors: Aphisak Witthayapraphakorn Sasarose Jaijit Peerayuth Charnsethikul
A novel approach was developed that combined LP-based row generation with optimization-based sorting to tackle computational challenges posed by budget allocation problems with combinatorial constraints. The proposed approach dynamically generated constraints using row generation and prioritized them using optimization-based sorting to ensure a high-quality solution. Computational experiments and case studies revealed that as the problem size increased, the proposed approach outperformed simplex solutions in terms of solution search time. Specifically, for a problem with 50 projects (N = 50) and 2,251,799,813,685,250 constraints, the proposed approach found a solution in just 1.4 s, while LP failed due to the problem size. The proposed approach demonstrated enhanced computational efficiency and solution quality compared to traditional LP methods.
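An illustrative row-generation loop (not the paper's exact procedure, which adds optimization-based sorting): start from a small subset of constraints, solve the LP, then add the most violated constraint found by a separation step, and repeat until no violation remains. The constraint pool below is a small random instance; in the paper, the pool is combinatorially large and never materialized in full.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 10                                     # decision variables (budget shares)
A_pool = rng.uniform(0, 1, size=(200, n))  # stand-in for the constraint pool
b_pool = rng.uniform(5, 6, size=200)
c = -rng.uniform(1, 2, size=n)             # maximize benefit -> minimize -c.x

active = [0]                               # start with one arbitrary row
while True:
    res = linprog(c, A_ub=A_pool[active], b_ub=b_pool[active],
                  bounds=[(0, 1)] * n, method="highs")
    violations = A_pool @ res.x - b_pool   # separation: scan the full pool
    worst = int(np.argmax(violations))
    if violations[worst] <= 1e-9:          # all constraints satisfied: done
        break
    active.append(worst)                   # row generation step

print("rows actually needed:", len(active), "of", len(b_pool))
```

The point the experiments above make is visible even in this toy: only a small fraction of the enormous constraint pool ever has to be handed to the LP solver.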
]]>Computation doi: 10.3390/computation11120241
Authors: Eduardo J. Solteiro Pires Adelaide Cerveira José Baptista
This work addresses the wind farm (WF) layout optimization problem considering several substations. Given a set of wind turbines jointly with a set of substations, the goal is to obtain the optimal design that minimizes the infrastructure cost and the cost of electrical energy losses during the wind farm lifetime. The turbine set is partitioned into subsets, each assigned to a substation. In each subset, the cable types and the connections that collect the energy produced by the wind turbines and forward it to the corresponding substation are selected. The proposed technique uses a genetic algorithm (GA) and an integer linear programming (ILP) model simultaneously. The GA creates a partition of the turbine set and assigns each of the obtained subsets to a substation in order to optimize a fitness function corresponding to the minimum total cost of the WF layout. Evaluating the fitness function requires solving an ILP model for each substation to determine the optimal cable connection layout. This methodology is applied to four onshore WFs. The obtained results show that the proposed approach achieves economic savings of up to 0.17% compared to the clustering with ILP approach (an exact approach).
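A compact sketch of the hybrid scheme: a GA evolves the assignment of turbines to substations, and each fitness evaluation calls an inner solver for the optimal layout of every substation's group. The inner ILP is replaced here by a hypothetical placeholder cost (simple radial cable length to the substation), since a full ILP formulation would not fit a short example; coordinates and GA settings are also assumptions.

```python
import random

random.seed(7)
TURBINES = [(random.random(), random.random()) for _ in range(20)]
SUBSTATIONS = [(0.2, 0.2), (0.8, 0.8)]

def layout_cost(group, sub):
    # Placeholder for the per-substation ILP: radial connections only.
    return sum(((x - sub[0]) ** 2 + (y - sub[1]) ** 2) ** 0.5 for x, y in group)

def fitness(assign):
    # Total layout cost over all substations for one GA individual.
    return sum(layout_cost([TURBINES[i] for i, a in enumerate(assign) if a == s],
                           SUBSTATIONS[s]) for s in range(len(SUBSTATIONS)))

pop = [[random.randrange(2) for _ in TURBINES] for _ in range(30)]
for _ in range(50):                                   # GA generations
    pop.sort(key=fitness)
    parents = pop[:10]                                # truncation selection
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TURBINES))
        child = a[:cut] + b[cut:]                     # one-point crossover
        i = random.randrange(len(TURBINES))
        child[i] = 1 - child[i]                       # mutation: flip one gene
        children.append(child)
    pop = parents + children

print("best total layout cost:", fitness(min(pop, key=fitness)))
```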
]]>Computation doi: 10.3390/computation11120240
Authors: Helen Papadaki Evaggelos Kaselouris Makis Bakarezos Michael Tatarakis Nektarios A. Papadogiannis Vasilis Dimitriou
The dynamic behavior of solid Si targets irradiated by nanosecond laser pulses is computationally studied with transient, thermomechanical three-dimensional finite element method simulations. The dynamic phase changes of the target and the generation and propagation of surface acoustic waves (SAWs) around the laser focal spot are captured by a finite element model with a very fine, uniformly structured mesh, able to provide high-resolution results on short and long spatiotemporal scales. The dynamic changes in the Si material properties up to the melting regime are considered, and the simulation results provide a detailed description of the irradiated area response, accompanied by the dynamics of the generation and propagation of ultrasonic waves. The new findings indicate that, due to the low thermal expansion coefficient and the high penetration depth of Si, the amplitude of the generated SAW is small, and the time and distance needed for the ultrasound to be generated are greater than in dense metals. Additionally, in the melting regime, the development of highly nonlinear thermal stresses leads to the generation and formation of an irregular ultrasound. Understanding the interaction between nanosecond lasers and Si is pivotal for advancing a wide range of technologies related to material processing and characterization.
]]>Computation doi: 10.3390/computation11120239
Authors: Luís Fonseca Fernando Ribeiro José Metrôlho
In-bed posture classification has attracted considerable research interest and has significant potential to enhance healthcare applications. Recent works generally use approaches based on pressure maps and machine learning algorithms, focusing mainly on obtaining high accuracy in posture classification. Typically, these solutions use different datasets with varying numbers of sensors and classify the four main postures (supine, prone, left-facing, and right-facing) or, in some cases, a few variants of those main postures. Accordingly, this article has three main objectives. The first is the fine-grained detection of the postures of bedridden people, identifying a large number of postures, including small variations: considering 28 different postures helps to identify the actual position of the bedridden person with higher accuracy, and this number of postures is considerably higher than that used in any other related work. The second is to analyze the impact of pressure map resolution on posture classification accuracy, which has also not been addressed in other studies. The third is to use the PoPu dataset, a dataset that includes pressure maps from 60 participants in 28 different postures. The dataset was analyzed using five distinct ML algorithms (k-nearest neighbors, linear support vector machines, decision tree, random forest, and multi-layer perceptron). This study’s findings show that the used algorithms achieve high accuracy in 4-posture classification (up to 99% in the case of the MLP) using the PoPu dataset, with lower accuracies in the finer-grained 28-posture classification (up to 68% in the case of the random forest). The results indicate that using ML algorithms for finer-grained applications can specify the patient’s exact position to some degree, since the parent posture is still accurately classified. Furthermore, reducing the resolution of the pressure maps seems to affect the classifiers only slightly, which suggests that a lower resolution might suffice for applications that do not need finer granularity.
]]>Computation doi: 10.3390/computation11120238
Authors: Yunus Emre Orhan Harun Pirim Yusuf Akbulut
This study examines how U.S. senators strategically used hashtags to create political communities on Twitter during the 2022 Midterm Elections. We propose a way to model topic-based implicit interactions among Twitter users and introduce the concept of Building Political Hashtag Communities (BPHC). Using multiplex network analysis, we provide a comprehensive view of elites’ behavior. Through AI-driven topic modeling on real-world data, we observe that, at a general level, Democrats heavily rely on BPHC. Yet, when disaggregating the network across layers, this trend does not uniformly persist. Specifically, while Republicans engage more intensively in BPHC discussions related to immigration, Democrats heavily rely on BPHC in topics related to identity and women. However, only a select group of Democratic actors engage in BPHC for topics on labor and the environment—domains where Republicans scarcely, if at all, participate in BPHC efforts. This research contributes to the understanding of digital political communication, offering new insights into echo chamber dynamics and the role of politicians in polarization.
]]>Computation doi: 10.3390/computation11120237
Authors: P. Ashok B. Bala Tripura Sundari
Stochastic circuits are used in applications that require low area and power consumption. The computing performed using these circuits is referred to as stochastic computing (SC). The arithmetic operations in this computing can be realized using minimal logic circuits. The SC system allows a tradeoff between computational accuracy and area; the challenge in SC is therefore improving the accuracy. The accuracy depends on the stochastic number generator (SNG) part of the SC system. SNGs provide the appropriate stochastic input required for stochastic computation. Hence, we explore the accuracy in SC for various arithmetic operations performed using stochastic computing with the help of logic circuits. The contributions of this paper are as follows. First, we performed stochastic computing for arithmetic components using two different SNGs: traditional linear feedback shift register (LFSR)-based stochastic number generators and S-box-based stochastic number generators. Second, the arithmetic components are implemented in a combinational circuit for an algebraic expression in the stochastic domain using the two different SNGs. Third, a computational analysis of the stochastic arithmetic components and the stochastic algebraic equation was conducted. Finally, accuracy analysis and measurement are performed between the LFSR-based and S-box-based computations. The novel aspects of this work are the use of an S-box-based SNG in the development of stochastic computing for arithmetic components, the implementation of stochastic computing in a combinational circuit using the developed basic arithmetic components, and an exploration of accuracy with respect to the stochastic number generators used.
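A minimal Python sketch of the core SC idea the abstract builds on: a unipolar SNG compares a pseudo-random LFSR value with a threshold, and a single AND gate then multiplies the encoded probabilities. The 16-bit taps, stream length, and seeds are assumptions, and the two streams here are phases of the same m-sequence, so a small correlation error remains.

```python
# Sketch of stochastic-computing multiplication: unipolar bitstreams from a
# toy LFSR-based SNG, multiplied with a bitwise AND. Illustrative only.

def lfsr_states(seed: int, n: int, taps=(16, 14, 13, 11), width=16):
    """Fibonacci LFSR (x^16 + x^14 + x^13 + x^11 + 1); yields n raw states."""
    state = seed & ((1 << width) - 1)
    out = []
    for _ in range(n):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def sng(p: float, n: int, seed: int, width=16):
    """SNG: compare LFSR output with a threshold to encode probability p."""
    thr = int(p * (1 << width))
    return [1 if r < thr else 0 for r in lfsr_states(seed, n, width=width)]

n = 4096
a, b = sng(0.5, n, seed=0xACE1), sng(0.75, n, seed=0x1D2B)
prod = [x & y for x, y in zip(a, b)]            # AND gate = unipolar multiply
print("estimated 0.5 * 0.75 =", sum(prod) / n)  # ~0.375, with SC error
```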
]]>Computation doi: 10.3390/computation11120236
Authors: Gabriel-Alexandru Constantin Mariana-Gabriela Munteanu Gheorghe Voicu Gigel Paraschiv Elena-Madalina Ștefan
The baking process in tunnel ovens can be influenced by many parameters. Among these, the most important are the baking time, the volume of the dough pieces, the texture and humidity of the dough, the distribution of temperature inside the oven, and the flow of air currents in the baking chamber. To obtain a constant quality of bakery or pastry products, and for the efficient operation of the oven, the designers' solution should be subjected to modelling, simulation, and analysis before manufacture; for this purpose, the Computational Fluid Dynamics (CFD) numerical simulation tool can be applied. In this study, we analyzed the air flow inside the baking chamber of an oven that is used very frequently on pastry lines. After performing the modelling and simulation, the temperature distribution inside the oven was obtained in the longitudinal and transverse planes. For experimental validation of the temperatures obtained in the computer-assisted simulation, the temperatures inside the analyzed electric oven were measured. The measured temperatures validated the simulation results with a maximum error of 7.6%.
]]>Computation doi: 10.3390/computation11120235
Authors: Dinara Akhmetsadyk Arkady Ilyin Nazim Guseinov Gary Beall
SO2 (sulfur dioxide) is a toxic substance emitted into the environment by the burning of sulfur-containing fossil fuels in cars, factories, power plants, and homes. This issue is of grave concern because of its negative effects on the environment and human health. Therefore, the search for materials capable of detecting SO2 through adsorption holds significant importance for environmental and health applications. It is well known that one effective method for predicting the structure and electronic properties of systems capable of interacting with a molecule is based on quantum mechanical approaches. In this work, the DFT (Density Functional Theory) program DMol3 in Materials Studio was used to study the interactions between the SO2 molecule and four systems. The adsorption energy, bond lengths, bond angle, charge transfer, and density of states of the SO2 molecule on pristine graphene, N-doped graphene, Ga-doped graphene, and -Ga-N- co-doped graphene were investigated using DFT calculations. The obtained data indicate that the bonding between the SO2 molecule and pristine graphene is relatively weak, with a binding energy of −0.32 eV and a bond length of 3.06 Å, indicating physical adsorption. Next, the adsorption of the molecule on the N-doped graphene system was considered. The adsorption of SO2 molecules on N-doped graphene is negligible; in general, the interaction of SO2 molecules with this system does not significantly change the electronic properties. However, the adsorption energy of the gas molecule on Ga-doped graphene increased significantly relative to pristine graphene. The evidence of chemisorption is the increased adsorption energy and the decreased adsorption distance between SO2 and Ga-doped graphene. In addition, our results show that introducing -Ga-N- co-dopants in an “ortho” configuration into pristine graphene significantly affects the adsorption between the gas molecule and graphene. Thus, this approach is of significant practical value for the adsorption of SO2 molecules.
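Adsorption studies of this kind typically report E_ads = E(substrate+molecule) - E(substrate) - E(molecule); the snippet below shows only that bookkeeping, with placeholder total energies chosen so the result matches the abstract's -0.32 eV for pristine graphene. It is not a DMol3 calculation.

```python
# Adsorption-energy bookkeeping commonly used in DFT adsorption studies:
# E_ads = E(substrate + molecule) - E(substrate) - E(molecule).
# The inputs below are placeholders, not values reported in the paper
# (only the resulting -0.32 eV for pristine graphene is from the abstract).

def adsorption_energy(e_total: float, e_substrate: float, e_molecule: float) -> float:
    """Negative values indicate energetically favorable adsorption."""
    return e_total - e_substrate - e_molecule

e_ads = adsorption_energy(e_total=-1234.56, e_substrate=-1210.10, e_molecule=-24.14)
print(f"E_ads = {e_ads:.2f} eV")  # -0.32 eV -> weak binding, physisorption regime
```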
]]>Computation doi: 10.3390/computation11110234
Authors: Nirmalya Thakur Shuqi Cui Kesha A. Patel Nazif Azizi Victoria Knieling Changhee Han Audrey Poon Rishika Shah
During virus outbreaks in the recent past, web behavior mining, modeling, and analysis have served as means to examine, explore, interpret, assess, and forecast the worldwide perception, readiness, reactions, and response linked to these outbreaks. The recent outbreak of the Marburg Virus disease (MVD), the high fatality rate of MVD, and the conspiracy theory linking the FEMA alert signal in the United States on 4 October 2023 with MVD and a zombie outbreak resulted in a diverse range of reactions in the general public, which translated into a surge in web behavior in this context. As a result, “Marburg Virus” featured in the list of top trending topics on Twitter on 3 October 2023, and “Emergency Alert System” and “Zombie” featured in the list of top trending topics on Twitter on 4 October 2023. No prior work in this field has mined and analyzed the emerging trends in web behavior in this context. The work presented in this paper aims to address this research gap and makes multiple scientific contributions to this field. First, it presents the results of performing time-series forecasting of the search interests related to MVD emerging from 216 different regions on a global scale using ARIMA, LSTM, and autocorrelation. The results of this analysis identify the optimal model for forecasting web behavior related to MVD in each of these regions. Second, the correlation between search interests related to MVD and search interests related to zombies was investigated. The findings show that there were several regions where there was a statistically significant correlation between MVD-related searches and zombie-related searches on Google on 4 October 2023. Finally, the correlation between zombie-related searches in the United States and other regions was investigated. This analysis helped to identify those regions where this correlation was statistically significant.
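As a sketch of the per-region forecasting step, the snippet below fits a single ARIMA model to a synthetic search-interest series with statsmodels; the order (2, 1, 1) and the horizon are assumptions, whereas the study selects the best of ARIMA, LSTM, and autocorrelation for each region.

```python
# Minimal sketch of per-region ARIMA forecasting of search-interest series.
# The series here is synthetic stand-in data, not Google Trends exports.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
series = np.cumsum(rng.normal(0, 1, 120)) + 50   # synthetic interest index

model = ARIMA(series, order=(2, 1, 1)).fit()     # (p, d, q) assumed here
forecast = model.forecast(steps=14)              # two-week-ahead forecast
print(forecast.round(2))
```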
]]>Computation doi: 10.3390/computation11110233
Authors: Moshibudi Ramoshaba Thuto Mosuang
A full-potential, all-electron density functional method within the generalized gradient approximation is used herein to investigate correlations among the electronic, elastic, and thermo-electric transport properties of cubic copper sulphide and copper selenide. The electronic band structure and density of states suggest metallic behaviour with a zero energy band gap for both materials. Elastic property calculations suggest stiff materials, with bulk-to-shear modulus ratios of 0.35 and 0.44 for Cu2S and Cu2Se, respectively. Thermo-electric transport properties were estimated using the Boltzmann transport approach. The Seebeck coefficient, electrical conductivity, thermal conductivity, and power factor all suggest a potential p-type conductivity for α-Cu2S and n-type conductivity for α-Cu2Se.
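The power factor quoted above combines the Seebeck coefficient and electrical conductivity as PF = S²σ; the snippet below shows this computation with illustrative magnitudes, none taken from the paper.

```python
# Thermoelectric power factor PF = S^2 * sigma, as used in Boltzmann-transport
# screening of thermoelectrics. Input values are illustrative placeholders.

def power_factor(seebeck_V_per_K: float, sigma_S_per_m: float) -> float:
    """Returns PF in W m^-1 K^-2."""
    return seebeck_V_per_K ** 2 * sigma_S_per_m

S = 120e-6      # 120 uV/K, a typical magnitude for a degenerate semiconductor
sigma = 1.0e5   # electrical conductivity, S/m
print(f"PF = {power_factor(S, sigma):.2e} W/m/K^2")   # ~1.4e-3 W/m/K^2
```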
]]>Computation doi: 10.3390/computation11110232
Authors: Ana Paula Aravena-Cifuentes Jose David Nuñez-Gonzalez Andoni Elola Malinka Ivanova
This study presents a model for predicting photovoltaic power generation based on meteorological, temporal, and geographical variables, without using irradiance values, which have traditionally posed challenges and difficulties for accurate prediction. Validation methods and evaluation metrics are used to analyse four different approaches that vary in the distribution of the training and test databases and in whether or not location-independent modelling is performed. The coefficient of determination, R2, is used to measure the proportion of variation in photovoltaic power generation that can be explained by the model's variables, while gCO2eq represents the amount of CO2 emissions equivalent to each unit of power generated. Both are used to compare model performance and environmental impact. The results show significant differences between the locations, with substantial improvements in some cases, while in others the improvements are limited. The importance of customising the predictive model for each specific location is emphasised. Furthermore, it is concluded that environmental impact studies in model production are an additional step towards the creation of more sustainable and efficient models. Likewise, this research considers both the accuracy of solar energy predictions and the environmental impact of the computational resources used in the process, thereby promoting the responsible and sustainable progress of data science.
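A one-liner computes the R2 metric used to compare the four approaches; the arrays below are placeholders, not the study's data.

```python
# Computing the coefficient of determination R^2 used to compare the
# approaches above; y_true/y_pred are illustrative placeholder arrays.
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.1, 4.0, 5.2, 6.8, 7.5])   # measured PV power (e.g., kW)
y_pred = np.array([3.0, 4.2, 5.0, 6.5, 7.9])   # model predictions
print(f"R^2 = {r2_score(y_true, y_pred):.3f}")
```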
]]>Computation doi: 10.3390/computation11110231
Authors: Azad Arif Hama Amin Aso M. Aladdin Dler O. Hasan Soran R. Mohammed-Taha Tarik A. Rashid
Analyzing stochastic algorithms for comprehensive performance comparison across diverse contexts is essential. By evaluating and adjusting algorithm effectiveness across a wide spectrum of test functions, including both classical benchmarks and CEC-C06 2019 conference functions, distinct patterns of performance emerge in specific situations, underscoring the importance of choosing algorithms contextually. Additionally, researchers have encountered a critical issue when employing a statistical model chosen at random to determine significance values, without conducting further studies to select a specific model for evaluating performance outcomes. To address this concern, this study employs rigorous statistical testing to underscore substantial performance variations between pairs of algorithms, thereby emphasizing the pivotal role of statistical significance in comparative analysis. It also yields valuable insights into the suitability of algorithms for various optimization challenges, providing professionals with information to make informed decisions. This is achieved by pinpointing algorithm pairs with favorable statistical distributions, facilitating practical algorithm selection. The study encompasses multiple nonparametric statistical hypothesis models, such as the Wilcoxon rank-sum test, single-factor analysis, and two-factor ANOVA tests. This thorough evaluation enhances our grasp of algorithm performance across various evaluation criteria. Notably, the research addresses discrepancies in previous statistical test findings in algorithm comparisons, enhancing result reliability for later research. The results proved that there are differences in significance results, as seen in examples such as Leo versus the FDO and the DA versus the WOA. This highlights the need to tailor test models to specific scenarios, as p-value outcomes differ among the various tests applied to the same algorithm pair.
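The core pairwise test is easy to reproduce; the sketch below applies the Wilcoxon rank-sum test to two synthetic samples of best-fitness values (30 runs each, an assumed sample size) with SciPy.

```python
# Sketch of the pairwise nonparametric comparison described above: the
# Wilcoxon rank-sum test on best-fitness samples from two optimizers over
# repeated runs. The samples below are synthetic.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(7)
alg_a = rng.normal(loc=0.02, scale=0.01, size=30)  # 30 runs of algorithm A
alg_b = rng.normal(loc=0.03, scale=0.01, size=30)  # 30 runs of algorithm B

stat, p = ranksums(alg_a, alg_b)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p:.4f}")
if p < 0.05:
    print("difference is statistically significant at the 5% level")
```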
]]>Computation doi: 10.3390/computation11110230
Authors: Pedro Lagos-Eulogio Pedro Miranda-Romagnoli Juan Carlos Seck-Tuoh-Mora Norberto Hernández-Romero
In this work, we propose a variation of the cellular particle swarm optimization algorithm with differential evolution hybridization (CPSO-DE) that includes constrained optimization, named Ts-CPD. It is implemented as the kernel of an electronic design automation (EDA) tool capable of sizing circuit components, considering a single-objective design with restrictions and constraints. The aim is to improve the optimization solutions in the sizing of analog circuits. To evaluate our proposal's performance, we present the design of three analog circuits: a differential amplifier, a two-stage operational amplifier (op-amp), and a folded-cascode operational transconductance amplifier. Numerical simulation results indicate that Ts-CPD can find better solutions, in terms of the design objective and the satisfaction of constraints, than those reported in previous works. The Ts-CPD implementation was performed in Matlab using Ngspice and can be found on GitHub (see Data Availability Statement).
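Ts-CPD itself is available per the Data Availability Statement; as a much simpler illustration of how constrained sizing objectives are typically handled inside such kernels, the sketch below runs a plain PSO with a static quadratic penalty on a toy constrained problem. Nothing here reproduces the cellular topology or DE hybridization of Ts-CPD.

```python
# Illustrative constrained PSO with a static penalty term: a far simpler
# scheme than Ts-CPD, shown only to make the constrained-sizing idea concrete.
# Toy problem: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1.
import numpy as np

def objective(x):
    return x[0] ** 2 + x[1] ** 2

def penalty(x):
    g = 1.0 - (x[0] + x[1])          # g(x) <= 0 when feasible
    return 1e3 * max(0.0, g) ** 2    # quadratic static penalty

rng = np.random.default_rng(1)
n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

x = rng.uniform(-2, 2, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objective(p) + penalty(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([objective(p) + penalty(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best point:", gbest.round(4), "objective:", objective(gbest).round(4))
```

The expected optimum of the toy problem is x = (0.5, 0.5) with f = 0.5, so the run can be checked by eye.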
]]>Computation doi: 10.3390/computation11110229
Authors: B. Kh. Khuzhayorov K. K. Viswanathan F. B. Kholliev A. I. Usmonov
In this work, anomalous solute transport with adsorption effects and the decomposition of the solute was studied. During the filtration of inhomogeneous liquids, a number of new phenomena arise, and this is very important for understanding the mechanisms of the filtration process. Recently, issues of mathematical modeling of substance transfer processes have been intensively discussed. Modeling approaches are based on the law of matter balance in a certain control volume together with additional phenomenological relationships. The process of anomalous solute transport in a porous medium was modeled by differential equations with a fractional derivative. A new mobile-immobile model is proposed to describe anomalous solute transport with a scale-dependent dispersion in inhomogeneous porous media. The profiles of changes in the concentrations of suspended particles in the macropores and micropores were determined. The influence of the order of the derivative with respect to the coordinate and time, i.e., the fractal dimension of the medium, was estimated based on the characteristics of the solute transport in both zones. The hydrodynamic dispersion was set through various relations: constant, linear, and exponential. Based on the numerical results, the concentration fields were determined for different values of the initial data and different relations for the hydrodynamic dispersion.
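For orientation, a generic single-zone form of such a time-fractional advection-dispersion equation is sketched below in LaTeX; the paper's actual two-zone mobile-immobile system, its coefficients, and its fractional orders are specified in the article itself, so this is an illustrative shape only.

```latex
% Illustrative time-fractional advection--dispersion equation with a
% scale-dependent dispersion coefficient D(x); \alpha \in (0,1] is the
% fractional order (Caputo sense) and \lambda a decay rate.
% Not the paper's exact system.
\frac{\partial^{\alpha} C}{\partial t^{\alpha}}
  = \frac{\partial}{\partial x}\!\left( D(x)\,\frac{\partial C}{\partial x} \right)
  - v\,\frac{\partial C}{\partial x}
  - \lambda C
```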
]]>Computation doi: 10.3390/computation11110228
Authors: Helmi Temimi
In this paper, we present an innovative approach to solve a system of boundary value problems (BVPs), using the newly developed discontinuous Galerkin (DG) method, which eliminates the need for auxiliary variables. This work is the first in a series of papers on DG methods applied to partial differential equations (PDEs). By consecutively applying the DG method to each space variable of the PDE using the method of lines, we transform the problem into a system of ordinary differential equations (ODEs). We investigate the convergence criteria of the DG method on systems of ODEs and generalize the error analysis to PDEs. Our analysis demonstrates that the DG error’s leading term is determined by a combination of specific Jacobi polynomials in each element. Thus, we prove that DG solutions are superconvergent at the roots of these polynomials, with an order of convergence of O(h^(p+2)).
]]>Computation doi: 10.3390/computation11110227
Authors: Reza Hassanian Morris Riedel
This study introduces an approach for modeling an arm of a Stewart platform to analyze the location of the sections with high deflection among the arms. Given the dynamic nature of the Stewart platform, its arms experience static and dynamic loads. The static loads originate from the platform's own weight components, while the dynamic loads arise from moving or holding equipment in a specific position using the end-effector. These loads are distributed among the platform arms. The arm encompasses various design categories, including spring-mass, spring-mass-damper, mass-actuator, and spring-mass-actuator. In accordance with these designs, joint points should be strategically placed away from critical sections where maximum buckling or deformation is prominent. The current study presents a novel model employing Euler's formula, a fundamental concept in buckling analysis, to propose this approach. The results align with experimental and numerical reports in the literature showing that the internal force of the platform arm affects the arm stiffness. The equivalent stiffness of an arm is related to its internal force and its deflection. The study demonstrates how higher levels of dynamic loading influence the dynamic platform, causing variations in the maximum buckling deflection of the arm, its precise location, and the associated deflection slope. Notably, in platform arms capable of adjusting their tilt angles relative to the vertical axis, the angle of inclination directly correlates with the deflection and its gradient. The assumption of linearity in Euler's formula appears to reveal distinctive behavior in the deflection gradients of dynamic mechanisms.
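Euler's formula referenced above gives the critical compressive load of a slender arm; a minimal sketch follows, with the column geometry, material, and end conditions as assumed illustrative values.

```python
# Euler's critical buckling load for a slender column, the formula underlying
# the arm-buckling analysis: P_cr = pi^2 * E * I / (K * L)^2.
# Geometry and material values below are illustrative, not from the paper.
import math

def euler_critical_load(E: float, I: float, L: float, K: float = 1.0) -> float:
    """E: Young's modulus (Pa), I: area moment (m^4), L: length (m),
    K: effective-length factor (1.0 for pinned-pinned ends)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

E = 200e9                      # steel, Pa
d = 0.05                       # 50 mm solid circular arm (assumed)
I = math.pi * d ** 4 / 64      # second moment of area of a circular section
print(f"P_cr = {euler_critical_load(E, I, L=1.2):.0f} N")
```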
]]>Computation doi: 10.3390/computation11110226
Authors: Ibrahim Nali Attila Dénes
We propose a within-host mathematical model for the dynamics of Usutu virus infection, incorporating a Crowley–Martin functional response. The basic reproduction number R0 is found by applying the next-generation matrix approach. Depending on this threshold parameter, the global asymptotic stability of one of the two possible equilibria is also established by constructing appropriate Lyapunov functions and using LaSalle’s invariance principle. We present numerical simulations to illustrate the results, and a sensitivity analysis of R0 was also completed. Finally, we fit the model to actual data on Usutu virus titers. Our study provides new insights into the dynamics of Usutu virus infection.
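As an illustration of how a Crowley–Martin incidence term βxv/((1+ax)(1+bv)) enters such within-host dynamics, the sketch below integrates a generic three-compartment target-cell model with SciPy; the compartment structure and all parameter values are assumptions, not the paper's model.

```python
# Sketch of a generic target-cell-limited within-host model with a
# Crowley-Martin incidence beta*x*v / ((1 + a*x)(1 + b*v)). The structure
# and all parameter values are illustrative, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

beta, a, b = 2e-5, 1e-4, 1e-2   # infection rate and CM saturation constants
lam, d, delta, p, c = 100.0, 0.1, 0.5, 50.0, 3.0

def rhs(t, y):
    x, w, v = y                  # uninfected cells, infected cells, virus
    inc = beta * x * v / ((1 + a * x) * (1 + b * v))
    return [lam - d * x - inc,   # target-cell turnover minus new infections
            inc - delta * w,     # infected-cell gain and death
            p * w - c * v]       # virion production and clearance

sol = solve_ivp(rhs, (0, 60), [1000.0, 0.0, 1.0], dense_output=True)
t = np.linspace(0, 60, 7)
print(np.round(sol.sol(t)[2], 2))   # viral load at selected times
```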
]]>Computation doi: 10.3390/computation11110225
Authors: Denise Alonso-Vázquez Omar Mendoza-Montoya Ricardo Caraza Hector R. Martinez Javier M. Antelis
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease that affects the nerve cells in the brain and spinal cord. This condition leads to the loss of motor skills and, in many cases, the inability to speak. Decoding spoken words from electroencephalography (EEG) signals emerges as an essential tool to enhance the quality of life for these patients. This study compares two classification techniques: (1) the extraction of spectral power features across various frequency bands combined with support vector machines (PSD + SVM) and (2) EEGNet, a convolutional neural network specifically designed for EEG-based brain–computer interfaces. An EEG dataset was acquired from 32 electrodes in 28 healthy participants pronouncing five words in Spanish. Average accuracy rates of 91.04 ± 5.82% for Attention vs. Pronunciation, 73.91 ± 10.04% for Short words vs. Long words, 81.23 ± 10.47% for Word vs. Word, and 54.87 ± 14.51% in the multiclass scenario (All words) were achieved. EEGNet outperformed the PSD + SVM method in three of the four classification scenarios. These findings demonstrate the potential of EEGNet for decoding words from EEG signals, laying the groundwork for future research in ALS patients using non-invasive methods.
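The PSD + SVM baseline described above is straightforward to prototype; below is a minimal sketch using Welch band powers per channel and an RBF SVM on synthetic EEG-shaped trials. The band edges, sampling rate, and binary task are assumptions standing in for the study's setup, and EEGNet itself is omitted.

```python
# Sketch of the PSD + SVM baseline: Welch band-power features per channel,
# then an SVM classifier. Synthetic data stand in for the 32-channel EEG.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs, n_trials, n_ch, n_samp = 250, 200, 32, 500
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]    # theta/alpha/beta/gamma

rng = np.random.default_rng(3)
trials = rng.standard_normal((n_trials, n_ch, n_samp))
labels = rng.integers(0, 2, n_trials)            # e.g., short vs. long words

def band_powers(trial):
    f, pxx = welch(trial, fs=fs, nperseg=256, axis=-1)
    return np.concatenate([pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
                           for lo, hi in bands])

X = np.array([band_powers(tr) for tr in trials])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean().round(3))
```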
]]>Computation doi: 10.3390/computation11110224
Authors: Nataliya Shakhovska Roman Kaminskyy Bohdan Khudoba Vladyslav Mykhailyshyn Ihor Helzhynskyi
This article offers experimental studies and a new methodology for analyzing the influence of micro-stresses on human operator activity in man–machine information and search interfaces. Human-centered design is a problem-solving technique that puts real people at the center of the design process; mindfulness is therefore one of the most important aspects in fields such as medicine, industry, and decision-making. The human-operator activity model can be used to create a database of specialized test images and a computer program for its implementation. The peculiarity of the tests is that they represent images of real work situations, obtained as a result of texture stylization, and allow the use of an appropriate search-difficulty scale. A mathematical model of the decision-making operator is built, and the requirements for creating a switch to solve the given problem are discussed. This work summarizes the accumulated experience of such studies.
]]>Computation doi: 10.3390/computation11110222
Authors: David Nocar George Grossman Jiří Vaško Tomáš Zdráhal
This article explores the accessibility of symbolic computations, such as using the Wolfram Mathematica environment, in promoting the shift from informal experimentation to formal mathematical justifications. We investigate the accuracy of computational results from mathematical software in the context of a certain summation in trigonometry. In particular, the key issue addressed here is the calculated sum ∑_{n=0}^{44} tan((1+4n)°). This paper utilizes Wolfram Mathematica to handle the irrational numbers in the sum more accurately, which it achieves by representing them symbolically rather than using numerical approximations. Can we rely on the calculated result from Wolfram, especially if almost all the addends are irrational, or must the students eventually prove it mathematically? It is clear that the problem can be solved using software; however, the nature of the result raises questions about its correctness, and this inherent informality can encourage a few students to seek viable mathematical proofs. In this way, a balance is reached between formal and informal mathematics.
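The numerical side of the question is easy to reproduce outside Mathematica; the SymPy sketch below evaluates the sum with exact symbolic angles to high precision, which probes, but does not prove, the closed-form value.

```python
# Numerically evaluating the trigonometric sum discussed above,
# sum over n = 0..44 of tan((1 + 4n) degrees), with exact symbolic angles.
# This checks the value to high precision but is not a formal proof.
import sympy as sp

total = sum(sp.tan((1 + 4 * n) * sp.pi / 180) for n in range(45))
print(total.evalf(20))   # evaluate with 20 significant digits
```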
]]>Computation doi: 10.3390/computation11110223
Authors: Alexi Delgado Joshis Culqui Marisabel Lazo Valeria Guerrero Isabel Delgado
The section of the Mantaro River that flows through the department of Huancavelica, Peru, has been affected by toxic wastes and mineral residues from industrial and mining activities, which have directly impacted the water quality. In this work, a grey system model based on the grey clustering method was used to assess the water quality. The grey clustering method was applied using the central point of triangular whitening weight functions (CTWF). In addition, the Prati index and the Environmental Quality Standards for water of the Peruvian government were reviewed and used for this study. In the case study, six physicochemical parameters, pH, DO, BOD, Cd, As, and Pb, were assessed at nine monitoring points along the Mantaro River. The results showed that the sixth monitoring point (P6), which is influenced by mining activity, was highly contaminated, while the other points were classified as non-contaminated. Finally, the results obtained by applying the grey clustering method can be useful to the competent authorities for decision making on water management in this watershed.
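The CTWF mentioned above assigns each measured value a triangular membership in each grey class; a minimal sketch with invented class centres (the study instead derives them from the Prati index and Peruvian standards) is given below.

```python
# Sketch of the centre-point triangular whitening weight function (CTWF)
# used in grey clustering: membership of a measured value x in grey class k
# rises linearly to the class centre c_k and falls to the next centre.
# Class centres below are illustrative, not the study's standards.
import numpy as np

def ctwf(x: float, centres: list, k: int) -> float:
    """Triangular membership of x in class k given ordered class centres."""
    c = centres
    if k > 0 and c[k - 1] <= x <= c[k]:
        return (x - c[k - 1]) / (c[k] - c[k - 1])
    if k < len(c) - 1 and c[k] < x <= c[k + 1]:
        return (c[k + 1] - x) / (c[k + 1] - c[k])
    return 1.0 if (k == 0 and x < c[0]) or (k == len(c) - 1 and x > c[-1]) else 0.0

centres = [2.0, 5.0, 8.0]          # e.g., non-contaminated / moderate / high
for x in (1.0, 3.5, 6.5, 9.0):
    ms = [ctwf(x, centres, k) for k in range(len(centres))]
    print(f"x={x}: memberships={np.round(ms, 2)}, class={int(np.argmax(ms))}")
```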
]]>Computation doi: 10.3390/computation11110221
Authors: Dmytro Chumachenko Tetiana Dudkina Tetyana Chumachenko Plinio Pelegrini Morita
Background: The COVID-19 pandemic has profoundly transformed the global scenario, marked by overwhelming infections, fatalities, overburdened healthcare infrastructures, economic upheavals, and significant lifestyle modifications. Concurrently, the Russian full-scale invasion of Ukraine on 24 February 2022 triggered a severe humanitarian and public health crisis, leading to healthcare disruptions, medical resource shortages, and heightened emergency care needs. Italy emerged as a significant refuge for displaced Ukrainians during this period. Aim: This research aims to discern the impact of the Russian full-scale invasion of Ukraine on the COVID-19 transmission dynamics in Italy. Materials and Methods: The study employed advanced simulation methodologies, particularly those integrating machine learning, to model the pandemic’s trajectory. The XGBoost algorithm was adopted to construct a predictive model for the COVID-19 epidemic trajectory in Italy. Results: The model demonstrated a commendable accuracy of 86.03% in forecasting new COVID-19 cases in Italy over 30 days and an impressive 96.29% accuracy in estimating fatalities. When applied to the initial 30 days following the escalation of the conflict (24 February 2022 to 25 March 2022), the model’s projections suggested that the influx of Ukrainian refugees into Italy did not significantly alter the country’s COVID-19 epidemic course. Discussion: While simulation methodologies have been pivotal in the pandemic response, their accuracy is intrinsically linked to data quality, assumptions, and modeling techniques. Enhancing these methodologies can further their applicability in future public health emergencies. The findings from the model underscore that external geopolitical events, such as the mass migration from Ukraine, did not play a determinative role in Italy’s COVID-19 epidemic dynamics during the study period. Conclusion: The research provides empirical evidence negating a substantial influence of the Ukrainian refugee influx due to the Russian full-scale invasion on the COVID-19 epidemic trajectory in Italy. The robust performance of the developed model affirms its potential value in public health analyses.
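A minimal sketch of lag-feature forecasting with XGBoost, the algorithm named above, on a synthetic daily-case series follows; the lag window, hyperparameters, and 30-day holdout mirror the paper's horizon but are otherwise assumptions.

```python
# Sketch of short-horizon epidemic forecasting with XGBoost using lagged
# daily case counts as features. Data are synthetic; lag width and model
# settings are assumptions, not the paper's configuration.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
cases = np.abs(np.cumsum(rng.normal(50, 20, 400)))   # synthetic daily cases

lags = 14
X = np.array([cases[i:i + lags] for i in range(len(cases) - lags)])
y = cases[lags:]

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:-30], y[:-30])                          # hold out the last 30 days

pred = model.predict(X[-30:])
mape = np.mean(np.abs(pred - y[-30:]) / y[-30:]) * 100
print(f"30-day MAPE: {mape:.2f}%  (accuracy ~ {100 - mape:.2f}%)")
```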
]]>Computation doi: 10.3390/computation11110220
Authors: Rogelio A. Delgado-Alfaro Zeferino Gómez-Sandoval
Antioxidants are molecules that neutralize free radicals. In general, the reaction mechanisms of antioxidants are well known. The main reaction mechanisms of antioxidants are electron transfer (ET), hydrogen transfer (HT), and radical adduct formation (RAF). The study of these mechanisms is helpful in understanding how antioxidants control high free radical levels in the cell. There are many studies focused on determining the main mechanism by which an antioxidant neutralizes a wide spectrum of radicals, mainly reactive oxygen species (ROS)-type radicals. Most of these antioxidants are polyphenol-type compounds. Some esters, amides, and metal-containing antioxidants have shown antioxidant activity, but there are few experimental and theoretical studies of the antioxidant reaction mechanisms of these compounds. In this work, we show the reaction mechanisms proposed for two esters, tri-butyl p-coumarate and its tri-butyl-tin p-coumarate counterpart, using Sn(IV). We show how Sn(IV) increases the electron transfer in polar media and the H transfer in non-polar media. Even though esters can be polar or non-polar compounds, the antioxidant activity of the Sn(IV)-p-coumarate complex is good in non-polar media.
]]>Computation doi: 10.3390/computation11110219
Authors: Bulat-Batyr Yesmagambetov
This article is devoted to methods of processing random processes. This task becomes particularly relevant when the random process is broadband and non-stationary; in such cases, the measurement of a random process amounts to an assessment of its probabilistic characteristics. Very often, a non-stationary broadband random process is represented by a single realization with a priori uncertainty about the type of distribution function. Such random processes occur in information and measuring communication systems in which information is transmitted in real time (for example, radio telemetry systems in spacecraft). The use of methods of traditional mathematical statistics, for example, maximum likelihood methods, to determine the probability characteristics is not possible in this case. In addition, the on-board computing systems of spacecraft operate under restrictions on mass, dimensions, and energy consumption. Therefore, there is a need for accelerated methods of processing the measured random processes. This article discusses a method for processing non-stationary broadband random processes based on non-parametric methods of decision theory. An algorithm for dividing the observation interval into stationary intervals using non-parametric Kendall statistics is considered, as are methods for estimating probabilistic characteristics on a stationary interval using order statistics. This article presents the results of statistical modeling using the Mathcad program.
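The segmentation idea can be sketched with Kendall's rank correlation against time, a Mann-Kendall-style trend test, applied window by window: windows without a significant monotone trend are treated as locally stationary. The window length and significance level below are assumptions.

```python
# Sketch of trend detection with Kendall's rank statistic, the building block
# of the stationary-interval segmentation described above. Synthetic data.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 200),
                    rng.normal(0, 1, 200) + np.linspace(0, 4, 200)])  # drift

win, alpha = 100, 0.05
for start in range(0, len(x) - win + 1, win):
    seg = x[start:start + win]
    tau, p = kendalltau(np.arange(win), seg)   # monotone-trend test vs. time
    verdict = "stationary" if p >= alpha else "trend detected"
    print(f"[{start:3d}, {start + win:3d}): tau={tau:+.2f}, p={p:.3f} -> {verdict}")
```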
]]>Computation doi: 10.3390/computation11110218
Authors: Li Shang Zi Zhang Fujian Tang Qi Cao Nita Yodo Hong Pan Zhibin Lin
Welded joints are used to connect metallic pipelines and other structures. Welding defects, such as cracks and lack of fusion, are vulnerable spots for early-age cracking and corrosion initiation. Present damage identification techniques use ultrasonic-guided wave procedures, which rely on changes in the physical characteristics of waveforms as they propagate to determine damage states. However, the complexity of the geometry and material discontinuities (e.g., the roughness of a weldment with or without defects) can lead to complicated wave reflections and scattering, thus increasing the difficulty of signal processing. Artificial intelligence and machine learning exhibit a capability for data fusion, including the processing of signals originating from ultrasonic-guided waves. This study aims to utilize deep learning approaches, including a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a hybrid CNN-LSTM model, to demonstrate their capability for automated damage detection in pipes with welded joints embedded in soil. Damage features in terms of welding defect types and severity, as well as multiple defects, are used to assess the effectiveness of the hybrid CNN-LSTM model, which is further compared to the two commonly used deep learning approaches, CNN and LSTM. The results showed that the hybrid CNN-LSTM model has much higher classification accuracy for damage states under all scenarios than the CNN and LSTM models. Furthermore, the impacts of pipelines embedded in different types of materials, ranging from loose sand to stiff soil, on signal processing and data classification were calibrated. The results demonstrated that these deep learning approaches can still perform well in detecting various pipeline damage under varying embedment conditions. However, the results also show that when concrete is used as the embedding material, its high absorption of signal energy can pose a challenge for signal processing, particularly under high noise levels.
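A minimal hybrid CNN-LSTM for 1D guided-wave signals can be sketched in Keras as below; layer sizes, signal length, and the four-class damage setup are assumptions, not the paper's architecture.

```python
# Minimal hybrid CNN-LSTM sketch for classifying guided-wave signals:
# Conv1D layers extract local waveform features, an LSTM models their
# temporal ordering. All sizes are assumptions; data are synthetic.
import numpy as np
from tensorflow.keras import layers, models

n_samples, n_steps, n_classes = 256, 1024, 4
X = np.random.randn(n_samples, n_steps, 1).astype("float32")  # synthetic signals
y = np.random.randint(0, n_classes, n_samples)

model = models.Sequential([
    layers.Input(shape=(n_steps, 1)),
    layers.Conv1D(16, 9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.LSTM(64),                       # temporal modeling of CNN features
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```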
]]>Computation doi: 10.3390/computation11110217
Authors: Elsayed Dahy Ahmed M. Elaiw Aeshah A. Raezah Hamdy Z. Zidan Abd Elsattar A. Abdellatif
In this paper, we study a model that enhances our understanding of cytokine-influenced HIV-1 infection. The impact of the adaptive immune response (cytotoxic T lymphocytes (CTLs) and antibodies) and time delay on HIV-1 infection is included. The model takes into account two types of distributed delays: (i) the delay in the HIV-1 infection of CD4+ T cells and (ii) the maturation delay of new virions. We first investigated the fundamental characteristics of the system and then found the system’s equilibria. We derived five threshold parameters, ℜi, i = 0, 1,…, 4, which completely determine the existence and stability of the equilibria. The Lyapunov method was used to prove global asymptotic stability for all equilibria. We illustrate the theoretical results by performing numerical simulations. We also performed a sensitivity analysis on the basic reproduction number ℜ0 and identified the most sensitive parameters. We found that pyroptosis contributes to the number ℜ0, so neglecting it will leave ℜ0 underestimated. Necrosulfonamide and highly active antiretroviral drug therapy (HAART) can be effective in preventing pyroptosis and reducing viral replication. Further, it was found that increasing the time delays can effectively decrease ℜ0 and thus inhibit HIV-1 replication. Furthermore, it is shown that both the CTL and antibody immune responses have no effect on ℜ0, although they can result in less HIV-1 infection.
]]>Computation doi: 10.3390/computation11110216
Authors: Toma Sikora Jonathan Klein Schiphorst Riccardo Scattolini
Roboat is an autonomous surface vessel (ASV) for urban waterways, developed as a research project by the AMS Institute and MIT. The platform can provide numerous functions to a city, such as transport, dynamic infrastructure, and an autonomous waste management system. This paper presents the development of a learning-based controller for the Roboat platform with the goal of achieving robustness and generalization. Specifically, when subject to uncertainty in the model or external disturbances, the proposed controller should be able to track set trajectories with a smaller tracking error than the nonlinear model predictive controller (NMPC) currently used on the ASV. To achieve this, a simulation of the system dynamics was developed as part of this work, based on the literature and on previous research performed on the Roboat platform. The simulation also included the modeling of the necessary uncertainties and disturbances. In this simulation, a trajectory tracking agent was trained using the proximal policy optimization (PPO) algorithm. The trajectory tracking of the trained agent was then validated and compared to the current control strategy both in simulation and in the real world.
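A sketch of the PPO training loop with Stable-Baselines3 follows, using Pendulum-v1 as a stand-in continuous-control environment since the Roboat dynamics simulator is not public here; the hyperparameters are library defaults, not the paper's.

```python
# Sketch of PPO training for continuous control with Stable-Baselines3.
# Pendulum-v1 stands in for the vessel-dynamics simulator used in the study.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=50_000)           # train the tracking policy

obs, _ = env.reset(seed=0)
for _ in range(200):                          # roll out the trained agent
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```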
]]>