Search Results (110)

Search Parameters:
Keywords = distance regularization term

16 pages, 3906 KB  
Article
S3PM: Entropy-Regularized Path Planning for Autonomous Mobile Robots in Dense 3D Point Clouds of Unstructured Environments
by Artem Sazonov, Oleksii Kuchkin, Irina Cherepanska and Arūnas Lipnickas
Sensors 2026, 26(2), 731; https://doi.org/10.3390/s26020731 - 21 Jan 2026
Viewed by 176
Abstract
Autonomous navigation in cluttered and dynamic industrial environments remains a major challenge for mobile robots. Traditional occupancy-grid and geometric planning approaches often struggle in such unstructured settings due to partial observability, sensor noise, and the frequent presence of moving agents (machinery, vehicles, humans). These limitations seriously undermine long-term reliability and safety compliance—both essential for Industry 4.0 applications. This paper introduces S3PM, a lightweight entropy-regularized framework for simultaneous mapping and path planning that operates directly on dense 3D point clouds. Its key innovation is a dynamics-aware entropy field that fuses per-voxel occupancy probabilities with motion cues derived from residual optical flow. Each voxel is assigned a risk-weighted entropy score that accounts for both geometric uncertainty and predicted object dynamics. This representation enables (i) robust differentiation between reliable free space and ambiguous/hazardous regions, (ii) proactive collision avoidance, and (iii) real-time trajectory replanning. The resulting multi-objective cost function effectively balances path length, smoothness, safety margins, and expected information gain, while maintaining high computational efficiency through voxel hashing and incremental distance transforms. Extensive experiments in both real-world and simulated settings, conducted on a Raspberry Pi 5 (with and without the Hailo-8 NPU), show that S3PM achieves 18–27% higher IoU in static/dynamic segmentation, 0.94–0.97 AUC in motion detection, and 30–45% fewer collisions compared to OctoMap + RRT* and standard probabilistic baselines. The full pipeline runs at 12–15 Hz on the bare Pi 5 and 25–30 Hz with NPU acceleration, making S3PM highly suitable for deployment on resource-constrained embedded platforms.
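For illustration, a minimal sketch of a risk-weighted voxel entropy score of the kind the abstract describes, assuming a binary per-voxel occupancy probability and a motion score in [0, 1] derived from residual optical flow; the fusion rule and the name lambda_motion are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def voxel_entropy(p, eps=1e-9):
    """Binary occupancy entropy per voxel; p is the occupancy probability."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def risk_weighted_entropy(p, motion_score, lambda_motion=2.0):
    """Fuse geometric uncertainty (entropy) with a motion cue in [0, 1],
    e.g. from residual optical flow, so dynamic voxels cost more."""
    return voxel_entropy(p) * (1.0 + lambda_motion * motion_score)

# Example: a confidently free static voxel vs. an ambiguous, moving one.
print(risk_weighted_entropy(0.05, 0.0))  # low risk
print(risk_weighted_entropy(0.50, 0.8))  # high risk: uncertain and dynamic
```

A planner can then penalize candidate trajectories by the accumulated risk-weighted entropy along the path, alongside length, smoothness, and safety-margin terms.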
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing—2nd Edition)

24 pages, 3755 KB  
Article
The Role of Annual-Fee Memberships in Promoting Citizen Involvement in Community-Level Biodiversity Conservation
by Rasuna Mishima, Makoto Kobayashi and Noboru Kuramoto
Conservation 2026, 6(1), 7; https://doi.org/10.3390/conservation6010007 - 6 Jan 2026
Viewed by 434
Abstract
The necessity of citizen involvement in biodiversity conservation activities is widely recognized in practical conservation operations. Clarifying the roles of annual-fee membership schemes is important, as they enable diverse styles of citizen participation. Kyororo is a museum whose main theme is the Satoyama in snowy regions, and the Kyororo Friends Association is an affiliated annual-fee membership program. This study analyzes the results of a questionnaire survey distributed among the association’s members to examine their perceptions of Kyororo’s activities, in addition to their characteristics—such as age group, place of residence, and type of involvement—and their motivations for joining the association. This study contributes by identifying four potential roles of annual-fee membership in promoting citizen participation. The first is as a platform for citizen involvement that is independent of geographic distance or direct participation. The second is as a platform for sustaining the involvement of individuals who have contributed to the accumulated history of the activities. The third is as a platform through which citizens who understand and trust community-level nature, conservation activities, and their values can affiliate and provide support. The fourth is as a platform for sustained citizen support through regular fixed-amount payments to trusted entities.

37 pages, 8656 KB  
Article
Anomaly-Aware Graph-Based Semi-Supervised Deep Support Vector Data Description for Anomaly Detection
by Taha J. Alhindi
Mathematics 2025, 13(24), 3987; https://doi.org/10.3390/math13243987 - 14 Dec 2025
Viewed by 543
Abstract
Anomaly detection in safety-critical systems often operates under severe label constraints, where only a small subset of normal and anomalous samples can be reliably annotated, while large unlabeled data streams are contaminated and high-dimensional. Deep one-class methods, such as deep support vector data description (DeepSVDD) and deep semi-supervised anomaly detection (DeepSAD), address this setting. However, they treat samples largely in isolation and do not explicitly leverage the manifold structure of unlabeled data, which can limit robustness and interpretability. This paper proposes Anomaly-Aware Graph-based Semi-Supervised Deep Support Vector Data Description (AAG-DSVDD), a boundary-focused deep one-class approach that couples a DeepSAD-style hypersphere with a label-aware latent k-nearest neighbor (k-NN) graph. The method combines a soft-boundary enclosure for labeled normals, a margin-based push-out for labeled anomalies, an unlabeled center-pull, and a k-NN graph regularizer on the squared distances to the center. The resulting graph term propagates information from scarce labels along the latent manifold, aligns anomaly scores of neighboring samples, and supports sample-level interpretability through graph neighborhoods, while test-time scoring remains a single distance-to-center computation. On a controlled two-dimensional synthetic dataset, AAG-DSVDD achieves a mean F1-score of 0.88 ± 0.02 across ten random splits, improving on the strongest baseline by about 0.12 absolute F1. On three public benchmark datasets (Thyroid, Arrhythmia, and Heart), AAG-DSVDD attains the highest F1 on all datasets, with F1-scores of 0.719, 0.675, and 0.800, respectively, compared to all baselines. In a multi-sensor fire monitoring case study, AAG-DSVDD reduces the average absolute error in fire starting time to approximately 473 s (about 30% improvement over DeepSAD) while keeping the average pre-fire false-alarm rate below 1% and avoiding persistent pre-fire alarms. These results indicate that graph-regularized deep one-class boundaries offer an effective and interpretable framework for semi-supervised anomaly detection under realistic label budgets.
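A hedged PyTorch sketch of a loss combining the four ingredients the abstract lists (soft-boundary enclosure for labeled normals, margin push-out for labeled anomalies, center-pull on unlabeled samples, and a k-NN graph regularizer on squared center distances); the label encoding, weights, and batch assumptions are illustrative, not the paper's exact formulation:

```python
import torch

def aag_dsvdd_loss(z, y, c, W, R, nu=0.1, margin=1.0, lam_u=1.0, lam_g=0.1):
    """z: latent embeddings (n, d); y: labels (+1 normal, -1 anomaly,
    0 unlabeled); c: hypersphere center (d,); W: k-NN affinity matrix (n, n);
    R: soft-boundary radius. Assumes each batch contains all three label types."""
    d2 = ((z - c) ** 2).sum(dim=1)                 # squared distance to center

    # (i) soft-boundary enclosure of labeled normals
    enclosure = R ** 2 + (1.0 / nu) * torch.relu(d2[y == 1] - R ** 2).mean()
    # (ii) margin-based push-out of labeled anomalies
    push_out = torch.relu(margin - d2[y == -1]).mean()
    # (iii) center-pull on unlabeled samples
    center_pull = d2[y == 0].mean()
    # (iv) graph term: neighboring samples should receive similar scores
    diff2 = (d2.unsqueeze(0) - d2.unsqueeze(1)) ** 2
    graph_reg = (W * diff2).sum() / W.sum().clamp(min=1e-9)

    return enclosure + push_out + lam_u * center_pull + lam_g * graph_reg
```

Test-time scoring stays a single distance-to-center computation, d2, as the abstract notes.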

24 pages, 4004 KB  
Article
Graph-Attention-Regularized Deep Support Vector Data Description for Semi-Supervised Anomaly Detection: A Case Study in Automotive Quality Control
by Taha J. Alhindi
Mathematics 2025, 13(23), 3876; https://doi.org/10.3390/math13233876 - 3 Dec 2025
Viewed by 352
Abstract
This paper addresses semi-supervised anomaly detection in settings where only a small subset of normal data can be labeled. Such conditions arise, for example, in industrial quality control of windshield wiper noise, where expert labeling is costly and limited. Our objective is to learn a one-class decision boundary that leverages the geometry of unlabeled data while remaining robust to contamination and scarcity of labeled normals. We propose a graph-attention-regularized deep support vector data description (GAR-DSVDD) model that combines a deep one-class enclosure with a latent k-nearest-neighbor graph whose edges are weighted by similarity- and score-aware attention. The resulting loss integrates (i) a distance-based enclosure on labeled normals, (ii) a graph smoothness term on squared distances over the attention-weighted graph, and (iii) a center-pull regularizer on unlabeled samples to avoid over-smoothing and boundary drift. Experiments on a controlled simulated dataset and an industrial windshield wiper acoustics dataset show that GAR-DSVDD consistently improves the F1 score under scarce label conditions. On average, F1 increases from 0.78 to 0.84 on the simulated benchmark and from 0.63 to 0.86 on the industrial case study relative to the best competing baseline.
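A hedged sketch of how similarity- and score-aware attention over a latent k-NN graph, and the resulting smoothness term on squared center distances, might be formed; the softmax gating, temperature tau, and all names are illustrative assumptions rather than the paper's exact construction:

```python
import torch

def attention_weights(z, d2, knn_idx, tau=1.0):
    """z: embeddings (n, d); d2: squared distances to center (n,);
    knn_idx: (n, k) indices of each sample's latent k-nearest neighbors."""
    sim = -((z.unsqueeze(1) - z[knn_idx]) ** 2).sum(-1) / tau  # similarity logit
    gap = -(d2.unsqueeze(1) - d2[knn_idx]).abs()               # score-aware term
    return torch.softmax(sim + gap, dim=1)                     # (n, k) weights

def graph_smoothness(d2, knn_idx, attn):
    """Attention-weighted smoothness of squared center distances over edges."""
    return (attn * (d2.unsqueeze(1) - d2[knn_idx]) ** 2).mean()
```

Down-weighting edges with large score gaps keeps the smoothness term from dragging anomalies toward normal neighborhoods, which is one plausible reading of "score-aware" attention here.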
(This article belongs to the Special Issue Data Mining and Machine Learning with Applications, 2nd Edition)

26 pages, 12950 KB  
Article
Qualitative Assessment of Point Cloud from SLAM-Based MLS for Quarry Digital Twin Creation
by Ľudovít Kovanič, Patrik Peťovský, Branislav Topitzer, Peter Blišťan and Ondrej Tokarčík
Appl. Sci. 2025, 15(22), 12326; https://doi.org/10.3390/app152212326 - 20 Nov 2025
Viewed by 794
Abstract
Quarries represent critical sites for raw material extraction, for which regular monitoring and mine surveying documentation, along with its updating, is essential to ensuring safety, environmental protection, and effective management of the mining process. This article aims to evaluate the modern approach to quarry surveying and the creation of a base mining map using advanced laser scanning methods, such as terrestrial laser scanning (TLS) and simultaneous localization and mapping (SLAM)-based mobile laser scanning (MLS). Particular attention is given to the analysis of noise generated by the TLS and SLAM-based MLS methods. An analysis of mutual differences between point clouds is presented to compare the spatial accuracy of the point clouds obtained using MLS technology against those from the reference TLS method on both horizontally and vertically oriented test areas. To assess the quality and usability of data obtained using the TLS and MLS methods, a selected section of the mining wall was analyzed based on the distance between points (Cloud-to-Cloud analysis), cross-section analysis, and volume calculations based on 3D mesh models generated from stage edges and point clouds. The findings offer valuable insights into the effective use of each method in quarry surveying, contributing to the development of innovative approaches to spatial data collection as a base for creating Digital Twins of quarries. The article also evaluates the efficiency of both measurement approaches in terms of accuracy, measurement speed, and practical applicability in mining practices. The results show that the point cloud obtained by the TLS Leica RTC360 device, compared to that by the MLS method using the FARO Orbis device (FARO Technologies, Inc., Lake Mary, FL, USA), achieves better values in terms of average noise level, standard deviation, interval of highest point density, and RMSD (Root Mean Square Deviation) in test areas. Our conclusions highlight the high potential of laser scanning for the modernization of mining documentation and the improvement of surveying processes in the smart mining industry, particularly for updating Digital Elevation Models (DEMs), Digital Terrain Models (DTMs), Digital Surface Models (DSMs), and other 3D models of quarries for the creation of their Digital Twins.
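A minimal sketch of the Cloud-to-Cloud (C2C) comparison and RMSD summary described above, assuming both clouds are given as XYZ arrays; the use of a k-d tree and the choice of summary statistics are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference_pts, test_pts):
    """reference_pts (TLS) and test_pts (MLS): (n, 3) arrays of XYZ points.
    For every test point, find the nearest reference point and summarize."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(test_pts, k=1)       # nearest-neighbor distances
    rmsd = np.sqrt(np.mean(dists ** 2))        # Root Mean Square Deviation
    return {"mean": dists.mean(), "std": dists.std(), "rmsd": rmsd}
```

The same per-point distances can be binned to estimate the interval of highest point density that the comparison reports.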
(This article belongs to the Special Issue Surface and Underground Mining Technology and Sustainability)

10 pages, 5564 KB  
Proceeding Paper
Bayesian Regularization for Dynamical System Identification: Additive Noise Models
by Robert K. Niven, Laurent Cordier, Ali Mohammad-Djafari, Markus Abel and Markus Quade
Phys. Sci. Forum 2025, 12(1), 17; https://doi.org/10.3390/psf2025012017 - 14 Nov 2025
Viewed by 418
Abstract
Consider the dynamical system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$, where $\mathbf{x} \in \mathbb{R}^n$ is the state vector, $\dot{\mathbf{x}}$ is the time or spatial derivative, and $\mathbf{f}$ is the system model. We wish to identify the unknown $\mathbf{f}$ from its time-series or spatial data. For this, we propose a Bayesian framework based on the maximum a posteriori (MAP) point estimate, to give a generalized Tikhonov regularization method with the residual and regularization terms identified, respectively, with the negative logarithms of the likelihood and prior distributions. As well as estimates of the model coefficients, the Bayesian interpretation provides access to the full Bayesian apparatus, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives a Gaussian posterior distribution, in which the numerator contains a Mahalanobis distance or “Gaussian norm”. In this study, two Bayesian algorithms for the estimation of hyperparameters—the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA)—are compared to the popular SINDy, LASSO, and ridge regression algorithms for the analysis of several dynamical systems with additive noise. We consider two dynamical systems, the Lorenz convection system and the Shil’nikov cubic system, with four choices of noise model: symmetric Gaussian or Laplace noise and skewed Rayleigh or Erlang noise, with different magnitudes. The posterior Gaussian norm is found to provide a robust metric for quantitative model selection—with quantification of the model uncertainties—across all dynamical systems and noise models examined.
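A worked sketch of the MAP-as-generalized-Tikhonov identification for the Gaussian case, written SINDy-style with a library $\boldsymbol{\Theta}(\mathbf{X})$ of candidate terms and coefficients $\boldsymbol{\xi}$; the notation is illustrative rather than the paper's:

```latex
% Negative log-posterior for a Gaussian likelihood (covariance Sigma) and a
% Gaussian prior (mean mu, covariance Gamma): a generalized Tikhonov problem.
\hat{\boldsymbol{\xi}}_{\mathrm{MAP}}
  = \arg\min_{\boldsymbol{\xi}}\;
    \tfrac{1}{2}\bigl\|\dot{\mathbf{X}} - \boldsymbol{\Theta}(\mathbf{X})\,\boldsymbol{\xi}\bigr\|^{2}_{\boldsymbol{\Sigma}^{-1}}
    + \tfrac{1}{2}\bigl\|\boldsymbol{\xi} - \boldsymbol{\mu}\bigr\|^{2}_{\boldsymbol{\Gamma}^{-1}},
\qquad
\|\mathbf{v}\|^{2}_{\mathbf{A}} := \mathbf{v}^{\top}\mathbf{A}\,\mathbf{v} .
```

The first term (negative log-likelihood) is the residual, the second (negative log-prior) the regularizer, and each is a Mahalanobis "Gaussian norm" in the sense of the abstract.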

17 pages, 2801 KB  
Article
Glenoid Radiolucent Lines and Subsidence Show Limited Impact on Clinical and Functional Long-Term Outcomes After Anatomic Total Shoulder Arthroplasty: A Retrospective Analysis of Cemented Polyethylene Glenoid Components
by Felix Hochberger, Jonas Limmer, Justus Muhmann, Frank Gohlke, Laura Elisa Streck, Maximilian Rudert and Kilian List
J. Clin. Med. 2025, 14(19), 7058; https://doi.org/10.3390/jcm14197058 - 6 Oct 2025
Viewed by 877
Abstract
Background: Glenoid radiolucent lines (gRLL) and glenoid component subsidence (gSC) after anatomic total shoulder arthroplasty (aTSA) have traditionally been linked to implant loosening and functional decline. However, their impact on long-term clinical outcomes remains unclear. This study aimed to evaluate whether gRLL and gSC are associated with inferior clinical or functional results in patients without revision surgery. Methods: In this retrospective study, 52 aTSA cases (2008–2015) were analyzed with a minimum of five years of clinical and radiographic follow-up. Based on final imaging, patients were categorized according to the presence and extent of gRLL and gSC. Clinical outcomes included the Constant-Murley Score, DASH, VAS for pain, and range of motion (ROM). Radiographic parameters included the critical shoulder angle (CSA), acromiohumeral distance (AHD), lateral offset (LO), humeral head-stem index (HSI), and cranial humeral head decentration (DC). Group comparisons were conducted between: (1) ≤2 vs. 3 gRLL zones, (2) 0 vs. 1 zone, (3) 0 vs. 3 zones, (4) gSC vs. no gSC, and (5) DC vs. no DC. Results: Demographics and baseline characteristics were comparable across groups. Functional scores (Constant, DASH), pain (VAS), and ROM were largely similar. Patients with extensive gRLL showed reduced external rotation (p = 0.01), but the difference remained below the MCID. Similarly, gSC was associated with lower forward elevation (p = 0.04) and external rotation (p = 0.03), both below MCID thresholds. No significant differences were observed for DC. Conclusions: Neither extensive gRLL nor gSC significantly impaired long-term clinical or functional outcomes. As these radiographic changes can occur in the absence of symptoms, regular radiographic monitoring is essential, and revision decisions should be made individually in cases of progressive bone loss.
(This article belongs to the Special Issue Clinical Updates on Shoulder Arthroplasty)

17 pages, 3914 KB  
Article
Adaptive Structured Latent Space Learning via Component-Aware Triplet Convolutional Autoencoder for Fault Diagnosis in Ship Oil Purifiers
by Sun Geu Chae, Gwang Ho Yun, Jae Cheul Park and Hwa Sup Jang
Processes 2025, 13(9), 3012; https://doi.org/10.3390/pr13093012 - 21 Sep 2025
Cited by 2 | Viewed by 663
Abstract
Timely and accurate fault diagnosis of ship oil purifiers is essential for maintaining the operational reliability of a degree-4 maritime autonomous surface ship (MASS). Conventional approaches rely on manual feature engineering or simple machine learning classifiers, limiting their robustness in dynamic maritime environments. This study proposes an adaptive latent space learning framework that couples a two-dimensional convolutional autoencoder (2D-CAE) with a component-aware triplet-loss regularizer. The loss term structures the latent space to reflect both the fault severity progression and component-specific distinctions, enabling severity-proportional distances among latent vectors learned directly from vibration signals, even in limited-data environments. Using data collected on a dedicated ship oil purifier test bed, the method yields a latent representation that encodes the fault severity and physical provenance, enhancing interpretability and diagnostic accuracy. Experiments demonstrate enhanced performance over state-of-the-art deep models, while offering clear insight into fault evolution and inter-component dependencies. The framework thus advances intelligent, condition-based maintenance for autonomous maritime systems.
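A hedged PyTorch sketch of a component-aware triplet term with a severity-proportional margin, so that latent distances grow with the fault-severity gap; the margin rule, the severity encoding, and all names are illustrative assumptions rather than the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def component_aware_triplet(anchor, positive, negative,
                            sev_a, sev_n, alpha=0.2, beta=1.0):
    """anchor/positive: embeddings from the same component and severity;
    negative: embeddings from another component or severity level.
    sev_a, sev_n: severity values of anchor and negative samples."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    margin = alpha + beta * (sev_n - sev_a).abs()  # severity-proportional gap
    return torch.relu(d_ap - d_an + margin).mean()
```

Adding this term to the 2D-CAE reconstruction loss is one plausible way to obtain a latent space ordered by both component and severity, as the abstract describes.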
(This article belongs to the Section Automation Control Systems)

23 pages, 355 KB  
Article
Two Types of Geometric Jensen–Shannon Divergences
by Frank Nielsen
Entropy 2025, 27(9), 947; https://doi.org/10.3390/e27090947 - 11 Sep 2025
Viewed by 1963
Abstract
The geometric Jensen–Shannon divergence (G-JSD) has gained popularity in machine learning and information sciences thanks to its closed-form expression between Gaussian distributions. In this work, we introduce an alternative definition of the geometric Jensen–Shannon divergence tailored to positive densities which does not normalize geometric mixtures. This novel divergence is termed the extended G-JSD, as it applies to the more general case of positive measures. We explicitly report the gap between the extended G-JSD and the G-JSD when considering probability densities, and show how to express the G-JSD and extended G-JSD using the Jeffreys divergence and the Bhattacharyya distance or Bhattacharyya coefficient. The extended G-JSD is proven to be an f-divergence, which is a separable divergence satisfying information monotonicity and invariance in information geometry. We derive a corresponding closed-form formula for the two types of G-JSDs when considering the case of multivariate Gaussian distributions that is often met in applications. We consider Monte Carlo stochastic estimations and approximations of the two types of G-JSD using the projective γ-divergences. Although the square root of the JSD yields a metric distance, we show that this is no longer the case for the two types of G-JSD. Finally, we explain how these two types of geometric JSDs can be interpreted as regularizations of the ordinary JSD.
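In symbols, a hedged sketch of the two constructions the abstract contrasts (the skew-mixture notation is illustrative): the G-JSD replaces the arithmetic mixture of the ordinary JSD with a normalized geometric mixture, while the extended G-JSD omits the normalization; the normalizer at α = 1/2 is the Bhattacharyya coefficient mentioned above.

```latex
% Ordinary JSD uses the arithmetic mixture m = (p + q)/2:
\mathrm{JS}(p, q) = \tfrac{1}{2}\,\mathrm{KL}(p : m) + \tfrac{1}{2}\,\mathrm{KL}(q : m).
% G-JSD: replace m by the normalized geometric mixture
(pq)^{G}_{\alpha}(x) = \frac{p(x)^{1-\alpha}\, q(x)^{\alpha}}{Z_{\alpha}},
\qquad
Z_{\alpha} = \int p^{1-\alpha} q^{\alpha}\,\mathrm{d}\mu .
% The extended G-JSD instead keeps the unnormalized geometric mixture
% p^{1-\alpha} q^{\alpha} and applies the KL divergence extended to positive
% measures; the normalizer Z_alpha accounts for the gap between the two.
```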
(This article belongs to the Section Information Theory, Probability and Statistics)
17 pages, 310 KB  
Article
Statistical Entropy Based on the Generalized-Uncertainty-Principle-Induced Effective Metric
by Soon-Tae Hong, Yong-Wan Kim and Young-Jai Park
Universe 2025, 11(8), 256; https://doi.org/10.3390/universe11080256 - 2 Aug 2025
Viewed by 711
Abstract
We investigate the statistical entropy of black holes within the framework of the generalized uncertainty principle (GUP) by employing effective metrics that incorporate leading-order and all-order quantum gravitational corrections. We construct three distinct effective metrics induced by the GUP, which are derived from the GUP-corrected temperature, entropy, and all-order GUP corrections, and analyze their impact on black hole entropy using ’t Hooft’s brick wall method. Our results show that, despite the differences in the effective metrics and the corresponding ultraviolet cutoffs, the statistical entropy consistently satisfies the Bekenstein–Hawking area law when expressed in terms of an invariant (coordinate-independent) distance near the horizon. Furthermore, we demonstrate that the GUP naturally regularizes the ultraviolet divergence in the density of states, eliminating the need for artificial cutoffs and yielding finite entropy even when counting quantum states only in the vicinity of the event horizon. These findings highlight the universality and robustness of the area law under GUP modifications and provide new insights into the interplay between quantum gravity effects and black hole thermodynamics.
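For orientation, a common leading-order form of the GUP and the Bekenstein–Hawking area law that the analysis recovers; the form of the correction and the conventions are assumptions here, and the paper's all-order effective metrics go beyond this leading form:

```latex
% Leading-order GUP: the minimal length regularizes the brick-wall
% divergence that otherwise needs an artificial ultraviolet cutoff.
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl[\,1 + \beta\,(\Delta p)^{2}\Bigr]
\;\;\Longrightarrow\;\;
\Delta x_{\min} = \hbar\sqrt{\beta}\,,
\qquad
S_{\mathrm{BH}} = \frac{k_{B}\,c^{3} A}{4\,G\hbar} = \frac{k_{B}\,A}{4\,\ell_{p}^{2}} .
```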
(This article belongs to the Collection Open Questions in Black Hole Physics)
24 pages, 3524 KB  
Article
Transient Stability Assessment of Power Systems Based on Temporal Feature Selection and LSTM-Transformer Variational Fusion
by Zirui Huang, Zhaobin Du, Jiawei Gao and Guoduan Zhong
Electronics 2025, 14(14), 2780; https://doi.org/10.3390/electronics14142780 - 10 Jul 2025
Cited by 1 | Viewed by 1229
Abstract
To address the challenges brought by the high penetration of renewable energy in power systems, such as multi-scale dynamic interactions, high feature dimensionality, and limited model generalization, this paper proposes a transient stability assessment (TSA) method that combines temporal feature selection with deep learning-based modeling. First, a two-stage feature selection strategy is designed using the inter-class Mahalanobis distance and Spearman rank correlation. This helps extract highly discriminative and low-redundancy features from wide-area measurement system (WAMS) time-series data. Then, a parallel LSTM-Transformer architecture is constructed to capture both short-term local fluctuations and long-term global dependencies. A variational inference mechanism based on a Gaussian mixture model (GMM) is introduced to enable dynamic representation fusion and uncertainty modeling. A composite loss function combining improved focal loss and Kullback–Leibler (KL) divergence regularization is designed to enhance model robustness and training stability under complex disturbances. The proposed method is validated on a modified IEEE 39-bus system. Results show that it outperforms existing models in accuracy, robustness, and interpretability. This provides an effective solution for TSA in power systems with high renewable energy integration.
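A hedged sketch of such a two-stage selection, using a per-feature standardized mean gap as a univariate proxy for the inter-class Mahalanobis distance and pruning Spearman-redundant features; the thresholds, the greedy order, and the proxy itself are illustrative assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

def select_features(X, y, keep=20, rho_max=0.9):
    """X: (n_samples, n_features) WAMS features; y: 0/1 stability labels."""
    mu_gap = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0))
    scores = mu_gap / (X.std(0) + 1e-9)     # stage 1: class separation
    rho, _ = spearmanr(X)                   # (n_features, n_features) ranks
    chosen = []
    for j in np.argsort(scores)[::-1]:      # best-separated features first
        if all(abs(rho[j, k]) < rho_max for k in chosen):
            chosen.append(j)                # stage 2: drop redundant features
        if len(chosen) == keep:
            break
    return chosen
```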
(This article belongs to the Special Issue Advanced Energy Systems and Technologies for Urban Sustainability)

15 pages, 72897 KB  
Article
Dual-Dimensional Gaussian Splatting Integrating 2D and 3D Gaussians for Surface Reconstruction
by Jichan Park, Jae-Won Suh and Yuseok Ban
Appl. Sci. 2025, 15(12), 6769; https://doi.org/10.3390/app15126769 - 16 Jun 2025
Viewed by 5335
Abstract
Three-Dimensional Gaussian Splatting (3DGS) has revolutionized novel-view synthesis, enabling real-time rendering of high-quality scenes. Two-Dimensional Gaussian Splatting (2DGS) improves geometric accuracy by replacing 3D Gaussians with flat 2D Gaussians. However, the flat nature of 2D Gaussians reduces mesh quality on volumetric surfaces and results in over-smoothed reconstruction. To address this, we propose Dual-Dimensional Gaussian Splatting (DDGS), which integrates both 2D and 3D Gaussians. First, we generalize the homogeneous transformation matrix based on 2DGS to initialize all Gaussians in 3D. Subsequently, during training, we selectively convert Gaussians into 2D representations based on their scale. This approach leverages the complementary strengths of 2D and 3D Gaussians, resulting in more accurate surface reconstruction across both flat and volumetric regions. Additionally, to mitigate over-smoothing, we introduce gradient-based regularization terms. Quantitative evaluations on the DTU and TnT datasets demonstrate that DDGS consistently outperforms prior methods, including 3DGS, SuGaR, and 2DGS, achieving the best Chamfer Distance and F1 score across a wide range of scenes.
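A hedged sketch of one plausible scale-based conversion rule; the min/max axis-ratio criterion and the threshold are illustrative assumptions, not the paper's exact test:

```python
import torch

def flatten_mask(scales, ratio_thresh=0.1):
    """scales: (n, 3) per-Gaussian axis lengths. A 3D Gaussian whose smallest
    axis is tiny relative to its largest is nearly planar, so it is a
    candidate for conversion to a flat 2D Gaussian during training."""
    ratio = scales.min(dim=1).values / scales.max(dim=1).values
    return ratio < ratio_thresh            # True -> convert to 2D
```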
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 1138 KB  
Article
Fuzzy Clustering Approaches Based on Numerical Optimizations of Modified Objective Functions
by Erind Bedalli, Shkelqim Hajrulla, Rexhep Rada and Robert Kosova
Algorithms 2025, 18(6), 327; https://doi.org/10.3390/a18060327 - 29 May 2025
Cited by 1 | Viewed by 1070
Abstract
Fuzzy clustering is a form of unsupervised learning that assigns the elements of a dataset into multiple clusters with varying degrees of membership rather than assigning them to a single cluster. The classical Fuzzy C-Means algorithm operates as an iterative procedure that minimizes an objective function defined based on the weighted distance between each point and the cluster centers. The algorithm performs well on many datasets but struggles with datasets whose clusters overlap or vary irregularly in shape, density, and size. Meanwhile, there is a growing demand for accurate and scalable clustering techniques, especially in high-dimensional data analysis. This research work aims to address these shortcomings of the classical fuzzy clustering algorithm by applying several modification approaches to the objective function of this algorithm. These modifications include several regularization terms aiming to make the algorithm more robust on specific types of datasets. The optimization of the modified objective functions is handled based on several numerical methods: gradient descent, root mean square propagation (RMSprop), and adaptive moment estimation (Adam). These methods are implemented in a Python environment, and extensive experimental studies are conducted, carefully following the steps of dataset selection, algorithm implementation, hyper-parameter tuning, evaluation metric selection, and result analysis. A comparison of the features of these algorithms on various datasets is carefully summarized.
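For reference, a minimal sketch of the classical Fuzzy C-Means iteration that these modified objectives build on; the regularized variants add penalty terms to this objective, and initialization and stopping criteria are simplified here:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n, d) data. Returns memberships U (n, c) and centers V (c, d),
    minimizing J = sum_ik u_ik^m * ||x_k - v_i||^2."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random soft memberships
    for _ in range(n_iter):
        Um = U ** m                                       # fuzzified weights
        V = (Um.T @ X) / Um.T.sum(axis=1, keepdims=True)  # weighted centers
        d = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-9
        ratio = d[:, :, None] / d[:, None, :]             # d_ik / d_jk
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)  # membership update
    return U, V
```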
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

39 pages, 9959 KB  
Article
Hydrodynamic Performance and Motion Prediction Before Twin-Barge Float-Over Installation of Offshore Wind Turbines
by Mengyang Zhao, Xiang Yuan Zheng, Sheng Zhang, Kehao Qian, Yucong Jiang, Yue Liu, Menglan Duan, Tianfeng Zhao and Ke Zhai
J. Mar. Sci. Eng. 2025, 13(5), 995; https://doi.org/10.3390/jmse13050995 - 21 May 2025
Cited by 3 | Viewed by 1921
Abstract
In recent years, the twin-barge float-over method has been widely used in offshore installations. This paper conducts numerical simulation and experimental research on the twin-barge float-over installation of offshore wind turbines (TBFOI-OWTs), focusing primarily on seakeeping performance, and also explores the influence of the gap distance on the hydrodynamic behavior of TBFOI-OWTs. Model tests are conducted in the ocean basin at Tsinghua Shenzhen International Graduate School. A physical model with a scale ratio of 1:50 is designed and fabricated, comprising two barges, a truss carriage frame, two small wind turbines, and a spread catenary mooring system. A series of model tests, including free decay tests, regular wave tests, and random wave tests, are carried out to investigate the hydrodynamics of TBFOI-OWTs. The experimental results and the numerical results are in good agreement, thereby validating the accuracy of the numerical simulation method. The motion RAOs of TBFOI-OWTs are small, demonstrating their good seakeeping performance. Compared with the regular wave situation, the surge and sway motions in random waves have greater ranges and amplitudes. This reveals that the mooring analysis cannot depend on regular waves only and, more importantly, that the random nature of realistic waves is less favorable for float-over installations. The responses in random waves are primarily controlled by the motions’ natural frequencies and the incident wave frequency. It is also revealed that the distance between the two barges has a significant influence on the motion RAOs in beam seas. Within a certain range of incident wave periods (10.00 s < T < 15.00 s), increasing the gap distance reduces the sway RAO and roll RAO due to the energy dissipated by the damping pool of the barge gap. For installation safety within an operating window, it is meaningful but challenging to predict the forthcoming motions accurately. To this end, this study employs the Whale Optimization Algorithm (WOA) to optimize a Long Short-Term Memory (LSTM) neural network. Both the stepwise iterative model and the direct multi-step model of the LSTM achieve high accuracy in predicting heave motions. This study, to some extent, affirms the feasibility of float-over installation in the offshore wind power industry and provides a useful scheme for short-term motion prediction.
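A hedged sketch of the direct multi-step idea: one LSTM maps a window of past heave samples straight to the next H samples, whereas the stepwise model feeds each one-step prediction back in. The WOA hyperparameter search is omitted, and all sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DirectMultiStepLSTM(nn.Module):
    def __init__(self, hidden=64, horizon=50):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)   # all H future steps at once

    def forward(self, x):                        # x: (batch, window, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                  # (batch, horizon)

model = DirectMultiStepLSTM()
past = torch.randn(8, 200, 1)                    # 8 windows of 200 heave samples
pred = model(past)                               # predicted next 50 samples
```

In a WOA-optimized setup, the window length, hidden size, and learning rate would be the search variables.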
(This article belongs to the Section Coastal Engineering)

15 pages, 3214 KB  
Article
Dimensional Accuracy of Regular- and Fast-Setting Vinyl Polysiloxane Impressions Using Customized Metal and Plastic Trays—An In Vitro Study
by Moritz Waldecker, Karla Jetter, Stefan Rues, Peter Rammelsberg and Andreas Zenthöfer
Materials 2025, 18(9), 2164; https://doi.org/10.3390/ma18092164 - 7 May 2025
Viewed by 949
Abstract
The aim of this study was to compare the dimensional accuracy of vinyl polysiloxane impressions differing in terms of curing time (regular-setting (RS) or fast-setting (FS)) in combination with different tray materials (metal (M) and plastic (P)). A typodont reference model simulated a partially edentulous maxilla. Reference points were given by center points of either precision balls welded to specific teeth or finishing-line centers of prepared teeth. These reference points enabled the detection of dimensional deviations between the digitized reference and the scans of the models achieved from the study impressions. Twenty impressions were made for each of the following four test groups: RS-M, RS-P, FS-M and FS-P. Global scan data accuracy was measured by distance and tooth axis deviations from the reference, while local accuracy was determined based on the trueness and precision of the abutment tooth surfaces. Statistical analysis was conducted using ANOVA accompanied by pairwise Tukey post hoc tests (α = 0.05). Most of the distances tended to be underestimated. Global accuracy was favorable; even for long distances, the mean absolute distance deviations were < 100 µm. Local accuracy was excellent for all test groups, with trueness ≤ 11 µm and precision ≤ 9 µm. Within the limitations of this study, all impression and tray materials were suitable to fabricate models with clinically acceptable accuracy.
(This article belongs to the Special Issue Advanced Biomaterials for Dental Applications (2nd Edition))