Search Results (187)

Search Parameters:
Keywords = weighted Euclidean distance

18 pages, 3983 KiB  
Article
Prediction of Mature Body Weight of Indigenous Camel (Camelus dromedarius) Breeds of Pakistan Using Data Mining Methods
by Daniel Zaborski, Wilhelm Grzesiak, Abdul Fatih, Asim Faraz, Mohammad Masood Tariq, Irfan Shahzad Sheikh, Abdul Waheed, Asad Ullah, Illahi Bakhsh Marghazani, Muhammad Zahid Mustafa, Cem Tırınk, Senol Celik, Olha Stadnytska and Oleh Klym
Animals 2025, 15(14), 2051; https://doi.org/10.3390/ani15142051 - 11 Jul 2025
Abstract
The determination of the live body weight of camels (required for their successful breeding) is a rather difficult task due to the problems with handling and restraining these animals. Therefore, the main aim of this study was to predict the ABW of eight indigenous camel (Camelus dromedarius) breeds of Pakistan (Bravhi, Kachi, Kharani, Kohi, Lassi, Makrani, Pishin, and Rodbari). Selected productive (hair production, milk yield per lactation, and lactation length) and reproductive (age of puberty, age at first breeding, gestation period, dry period, and calving interval) traits served as the predictors. Six data mining methods [classification and regression trees (CARTs), chi-square automatic interaction detector (CHAID), exhaustive CHAID (EXCHAID), multivariate adaptive regression splines (MARSs), multilayer perceptron (MLP), and radial basis function (RBF) network] were applied for ABW prediction. Additionally, hierarchical cluster analysis with Euclidean distance was performed for the phenotypic characterization of the camel breeds. The highest Pearson correlation coefficient between the observed and predicted values (0.84, p < 0.05) was obtained for MLP, which was also characterized by the lowest root-mean-square error (RMSE) (20.86 kg), standard deviation ratio (SDratio) (0.54), mean absolute percentage error (MAPE) (2.44%), and mean absolute deviation (MAD) (16.45 kg). The most influential predictor for all the models was the camel breed. The applied methods allowed for the moderately accurate prediction of ABW (average R² equal to 65.0%) and the identification of the most important productive and reproductive traits affecting its value. However, one important limitation of the present study is its relatively small dataset, especially for training the artificial neural networks (MLP and RBF). Hence, the obtained preliminary results should be validated on larger datasets in the future.
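The error metrics reported above (RMSE, MAPE, and MAD) are standard regression diagnostics; a minimal sketch of how they are computed, using made-up observed/predicted weights rather than the study's data:

```python
import math

def regression_metrics(observed, predicted):
    """Return RMSE, MAPE (%) and MAD for paired observed/predicted values."""
    n = len(observed)
    errors = [o - p for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mape = 100.0 * sum(abs(e) / abs(o) for e, o in zip(errors, observed)) / n
    mad = sum(abs(e) for e in errors) / n
    return rmse, mape, mad

# Illustrative camel body weights in kg (not from the paper)
obs = [600.0, 650.0, 700.0]
pred = [590.0, 660.0, 695.0]
rmse, mape, mad = regression_metrics(obs, pred)
```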
(This article belongs to the Section Animal System and Management)

27 pages, 7066 KiB  
Article
A Deep Learning-Based Trajectory and Collision Prediction Framework for Safe Urban Air Mobility
by Junghoon Kim, Hyewon Yoon, Seungwon Yoon, Yongmin Kwon and Kyuchul Lee
Drones 2025, 9(7), 460; https://doi.org/10.3390/drones9070460 - 26 Jun 2025
Abstract
As urban air mobility moves rapidly toward real-world deployment, accurate vehicle trajectory prediction and early collision risk detection are vital for safe low-altitude operations. This study presents a deep learning framework based on an LSTM–Attention network that captures both short-term flight dynamics and long-range dependencies in trajectory data. The model is trained on fifty-six routes generated from a UAM planned commercialization network, sampled at 0.1 s intervals. To unify spatial dimensions, the model uses Earth-Centered Earth-Fixed (ECEF) coordinates, enabling efficient Euclidean distance calculations. The trajectory prediction component achieves an RMSE of 0.2172, MAE of 0.1668, and MSE of 0.0524. The collision classification module built on the LSTM–Attention prediction backbone delivers an accuracy of 0.9881. Analysis of attention weight distributions reveals which temporal segments most influence model outputs, enhancing interpretability and guiding future refinements. Moreover, this model is embedded within the Short-Term Conflict Alert component of the Safety Nets module in the UAM traffic management system to provide continuous trajectory prediction and collision risk assessment, supporting proactive traffic control. The system exhibits robust generalizability on unseen scenarios and offers a scalable foundation for enhancing operational safety. Validation currently excludes environmental disturbances such as wind, physical obstacles, and real-world flight logs. Future work will incorporate atmospheric variability, sensor and communication uncertainties, and obstacle detection inputs to advance toward a fully integrated traffic management solution with comprehensive situational awareness. Full article
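The point of using Earth-Centered Earth-Fixed coordinates, as described above, is that straight-line Euclidean distance between two aircraft positions becomes meaningful. A sketch of the standard WGS84 geodetic-to-ECEF conversion (the paper's own implementation may differ):

```python
import math

# WGS84 ellipsoid defining parameters
A = 6378137.0             # semi-major axis (m)
F = 1 / 298.257223563     # flattening
E2 = F * (2 - F)          # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert latitude/longitude (degrees) and ellipsoidal height (m) to ECEF (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return (x, y, z)

def separation(p, q):
    """Euclidean distance between two ECEF positions."""
    return math.dist(p, q)
```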
(This article belongs to the Special Issue Urban Air Mobility Solutions: UAVs for Smarter Cities)

29 pages, 7833 KiB  
Article
A Novel Multi-Criteria Quantum Group Decision-Making Model Considering Decision Makers’ Risk Perception Based on Type-2 Fuzzy Numbers
by Wen Li, Shuaicheng Lu, Zhiliang Ren and Obaid Ur Rehman
Symmetry 2025, 17(7), 1006; https://doi.org/10.3390/sym17071006 - 26 Jun 2025
Abstract
In multi-criteria group decision making, decision makers are commonly regarded as independent. However, in practice, heterogeneous backgrounds and complex cognitive processes lead to mutual interference among their judgments. To address this gap, a novel multi-criteria quantum group decision-making model is proposed that explicitly incorporates opinion interference effects. First, type-2 fuzzy numbers are employed to represent evaluation information, and a specialized Euclidean distance measure for them is introduced. Second, an extended distance-based criteria importance through an inter-criteria correlation method incorporating Deng entropy is developed to derive robust criteria weights under uncertainty. Third, the TODIM method integrates cumulative prospect theory to capture decision makers’ risk perceptions and computes prospect-based dominance degrees. Fourth, a quantum-inspired aggregation mechanism models the mutual interference in group opinions. Finally, a case study on FinTech startup investment demonstrates the model’s practical applicability, while sensitivity analysis and comparisons to established methods confirm its robustness and effectiveness. Full article
(This article belongs to the Section Mathematics)

27 pages, 2176 KiB  
Article
Minimum Critical Test Scenario Set Selection for Autonomous Vehicles Prior to First Deployment and Public Road Testing
by Balint Toth and Zsolt Szalay
Appl. Sci. 2025, 15(13), 7031; https://doi.org/10.3390/app15137031 - 22 Jun 2025
Abstract
The growing complexity of autonomous vehicle functionalities poses significant challenges for vehicle testing, validation, and regulatory approval. Despite the availability of various testing protocols and standards, a harmonized and widely accepted method specifically targeting the selection of critical test scenarios—especially for safety assessments prior to public road testing—has not yet been developed. This study introduces a systematic methodology for selecting a minimum critical set of test scenarios tailored to an autonomous vehicle’s Operational Design Domain (ODD) and capabilities. Building on existing testing frameworks (e.g., EuroNCAP protocols, ISO standards, UNECE and EU regulations), the proposed method combines a structured questionnaire with a weighted cosine similarity based filtering mechanism to identify relevant scenarios from a robust database of over 1000 test cases. Further refinement using similarity metrics such as Euclidean and Manhattan distances ensures the elimination of redundant test scenarios. Application of the framework to real-world projects demonstrates significant alignment with expert-identified cases, while also identifying overlooked but relevant scenarios. By addressing the need for a structured and efficient scenario selection method, this work supports the advancement of systematic safety assurance for autonomous vehicles and provides a scalable solution for authorities and vehicle testing companies. Full article
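The two filtering stages described above, weighted cosine similarity for relevance and distance thresholds for redundancy, can be sketched roughly as follows; this assumes plain numeric scenario feature vectors, caller-supplied weights, and a greedy Euclidean threshold rule, none of which are specified in the abstract:

```python
import math

def weighted_cosine(u, v, w):
    """Weighted cosine similarity between two scenario feature vectors."""
    num = sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))
    du = math.sqrt(sum(wi * ui * ui for wi, ui in zip(w, u)))
    dv = math.sqrt(sum(wi * vi * vi for wi, vi in zip(w, v)))
    return num / (du * dv)

def deduplicate(scenarios, eps):
    """Greedily drop scenarios within Euclidean distance eps of an already kept one."""
    kept = []
    for s in scenarios:
        if all(math.dist(s, k) > eps for k in kept):
            kept.append(s)
    return kept
```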
(This article belongs to the Special Issue Advances in Autonomous Driving and Smart Transportation)

29 pages, 3690 KiB  
Article
Application of the Adaptive Mixed-Order Cubature Particle Filter Algorithm Based on Matrix Lie Group Representation for the Initial Alignment of SINS
by Ning Wang and Fanming Liu
Information 2025, 16(5), 416; https://doi.org/10.3390/info16050416 - 20 May 2025
Abstract
Under large azimuth misalignment conditions, the initial alignment of strapdown inertial navigation systems (SINS) is challenged by the nonlinear characteristics of the error model. Traditional particle filter (PF) algorithms suffer from the inappropriate selection of importance density functions and severe particle degeneration, which limit their applicability in high-precision navigation. To address these limitations, this paper proposes an adaptive mixed-order spherical simplex-radial cubature particle filter (MLG-AMSSRCPF) algorithm based on matrix Lie group representation. In this approach, attitude errors are represented on the matrix Lie group SO(3), while velocity errors and inertial sensor biases are retained in Euclidean space. Efficient bidirectional conversion between Euclidean and manifold spaces is achieved through exponential and logarithmic maps, enabling accurate attitude estimation without the need for Jacobian matrices. A hybrid-order cubature transformation is introduced to reduce model linearization errors, thereby enhancing the estimation accuracy. To improve the algorithm's adaptability in dynamic noise environments, an adaptive noise covariance update mechanism is integrated. Meanwhile, the particle similarity is evaluated using Euclidean distance, allowing the dynamic adjustment of particle numbers to balance the filtering accuracy and computational load. Furthermore, a multivariate Huber loss function is employed to adaptively adjust particle weights, effectively suppressing the influence of outliers and significantly improving the robustness of the filter. Simulation and experimental results validate the superior performance of the proposed algorithm under moving-base alignment conditions. Compared with the conventional cubature particle filter (CPF), the heading accuracy of the MLG-AMSSRCPF algorithm was improved by 31.29% under measurement outlier interference and by 39.79% under system noise mutation scenarios. In comparison with the unscented Kalman filter (UKF), it yields improvements of 58.51% and 58.82%, respectively. These results demonstrate that the proposed method substantially enhances the filtering accuracy, robustness, and computational efficiency of SINS, confirming its practical value for initial alignment in high-noise, complex dynamic, and nonlinear navigation systems.
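The exponential and logarithmic maps on SO(3) mentioned above have closed forms (Rodrigues' formula and its inverse); a minimal pure-Python sketch of the bidirectional conversion between rotation vectors and rotation matrices:

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def hat(w):
    """Skew-symmetric (hat) matrix of a 3-vector."""
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def so3_exp(w):
    """Exponential map: rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = math.sqrt(sum(c * c for c in w))
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if theta < 1e-12:
        return I
    K = hat([c / theta for c in w])
    K2 = matmul(K, K)
    return [[I[i][j] + math.sin(theta) * K[i][j] + (1.0 - math.cos(theta)) * K2[i][j]
             for j in range(3)] for i in range(3)]

def so3_log(R):
    """Logarithmic map: rotation matrix -> rotation vector."""
    tr = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
    if theta < 1e-12:
        return [0.0, 0.0, 0.0]
    s = theta / (2.0 * math.sin(theta))
    return [s * (R[2][1] - R[1][2]), s * (R[0][2] - R[2][0]), s * (R[1][0] - R[0][1])]
```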
(This article belongs to the Section Artificial Intelligence)

26 pages, 7526 KiB  
Article
Salp Swarm Algorithm Optimized A* Algorithm and Improved B-Spline Interpolation in Path Planning
by Hang Zhou, Tianning Shang, Yongchuan Wang and Long Zuo
Appl. Sci. 2025, 15(10), 5583; https://doi.org/10.3390/app15105583 - 16 May 2025
Cited by 1
Abstract
The efficiency and smoothness of path planning algorithms are critical factors influencing their practical applications. A traditional A* algorithm suffers from limitations in search efficiency, path smoothness, and obstacle avoidance. To address these challenges, this paper introduces an improved A* algorithm that integrates the Salp Swarm Algorithm (SSA) for heuristic function optimization and proposes a refined B-spline interpolation method for path smoothing. The first major improvement involves enhancing the A* algorithm by optimizing its heuristic function through the SSA. The heuristic function combines Chebyshev distance, Euclidean distance, and obstacle density, with the SSA adjusting the weight parameters to maximize efficiency. The simulation experimental results demonstrate that this modification reduces the number of searched nodes by more than 78.2% and decreases planning time by over 48.1% compared to traditional A* algorithms. The second key contribution is an improved B-spline interpolation method incorporating a two-stage optimization strategy for smoother and safer paths. A corner avoidance strategy first adjusts control points near sharp turns to prevent collisions, followed by a path obstacle avoidance strategy that fine-tunes control point positions to ensure safe distances from obstacles. The simulation experimental results show that the optimized path increases the minimum obstacle distance by 0.2–0.5 units, improves the average distance by over 43.0%, and reduces path curvature by approximately 61.8%. Comparative evaluations across diverse environments confirm the superiority of the proposed method in computational efficiency, path smoothness, and safety. This study presents an effective and robust solution for path planning in complex scenarios. Full article
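The abstract names the three heuristic ingredients (Chebyshev distance, Euclidean distance, obstacle density) but not their functional form; this sketch assumes a simple linear combination whose weights `w1`..`w3` (hypothetical names) would be tuned by the Salp Swarm Algorithm:

```python
import math

def heuristic(node, goal, w1, w2, w3, obstacle_density):
    """A* heuristic as a weighted combination of Chebyshev and Euclidean
    distance to the goal plus a local obstacle-density penalty."""
    dx, dy = abs(goal[0] - node[0]), abs(goal[1] - node[1])
    chebyshev = max(dx, dy)
    euclidean = math.hypot(dx, dy)
    return w1 * chebyshev + w2 * euclidean + w3 * obstacle_density(node)
```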
(This article belongs to the Special Issue Collaborative Learning and Optimization Theory and Its Applications)

19 pages, 1553 KiB  
Article
Optimal Portfolio Construction Using the Realized Volatility Concept: Empirical Evidence from the Stock Exchange of Thailand
by Sanae Rujivan, Thapakon Khuatongkeaw and Athinan Sutchada
J. Risk Financial Manag. 2025, 18(5), 269; https://doi.org/10.3390/jrfm18050269 - 15 May 2025
Abstract
This paper addresses the problem of constructing optimal equity portfolios under volatile market conditions by minimizing realized volatility—an alternative risk quantifier that more accurately captures short-term market fluctuations than traditional variance-based approaches. This issue is particularly relevant for investors seeking robust risk management strategies in dynamic and uncertain environments. We propose a mathematical optimization framework that determines portfolio weights by minimizing realized volatility, subject to expected return constraints. The model is empirically validated using historical data from stocks listed in the Stock Exchange of Thailand 50 (SET50) index. Through a comparative analysis of realized volatility and variance-based optimization across multiple portfolio sizes and return levels, we find that portfolios constructed using realized volatility consistently achieve higher Sharpe ratios, indicating superior risk-adjusted performance. We further introduce an efficiency metric based on the Euclidean distance between optimal portfolio weight vectors to evaluate the stability of allocations under extended investment horizons. The findings underscore the practical advantages of realized volatility in portfolio construction, offering enhanced responsiveness to market dynamics and improved performance outcomes. The novelty of this study lies in integrating realized volatility into a constrained portfolio optimization model and empirically demonstrating its superiority, thereby extending traditional mean-variance methods in both scope and effectiveness. Full article
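The two evaluation quantities above, the Sharpe ratio and the Euclidean-distance stability metric on portfolio weight vectors, are straightforward to compute; a minimal sketch with illustrative numbers (not the SET50 results):

```python
import math

def sharpe_ratio(mean_return, risk_free, volatility):
    """Sharpe ratio: excess return per unit of risk."""
    return (mean_return - risk_free) / volatility

def weight_stability(w_old, w_new):
    """Euclidean distance between two optimal portfolio weight vectors;
    smaller values indicate allocations that stay stable over the horizon."""
    return math.dist(w_old, w_new)
```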
(This article belongs to the Section Mathematics and Finance)

19 pages, 251 KiB  
Article
Defending Federated Learning from Collaborative Poisoning Attacks: A Clique-Based Detection Framework
by Dimitrios Anastasiadis and Ioannis Refanidis
Electronics 2025, 14(10), 2011; https://doi.org/10.3390/electronics14102011 - 15 May 2025
Abstract
Federated Learning (FL) systems are increasingly vulnerable to data poisoning attacks, in which malicious clients attempt to manipulate their training data in order to compromise the corresponding machine learning model. Existing detection techniques rely mostly on identifying clients who provide weight updates that significantly diverge from the average across multiple training rounds. In this work, we propose a Clique-Based Detection Framework (CBDF) that focuses on similarity patterns between client updates instead of their deviation. Specifically, we make use of the Euclidean distance to measure similarity between the weight update vectors of different clients over training iterations. Clients that provide consistently similar weight updates and exceed a predefined threshold are flagged as potential adversaries. Therefore, this method detects the coordination patterns of the attackers and uses them to strengthen FL systems against sophisticated, coordinated data poisoning attacks. We validate the effectiveness of this approach through extensive experimental evaluation. Moreover, we provide suggestions regarding fine-tuning hyperparameters to maximize the performance of the detection method. This approach represents a novel advancement in protecting FL models from malicious interference. Full article
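The core idea above, flagging clients whose weight updates stay suspiciously close across rounds, can be sketched as follows; this is a simplified stand-in for the paper's clique extraction, with a plain pairwise count in place of graph clique finding and hypothetical threshold parameters:

```python
import math
from itertools import combinations
from collections import defaultdict

def flag_colluding_clients(update_history, dist_threshold, round_threshold):
    """update_history: list of rounds, each a dict client_id -> update vector.
    Flags client pairs whose updates stay within dist_threshold of each other
    for at least round_threshold rounds."""
    similar_rounds = defaultdict(int)
    for round_updates in update_history:
        for a, b in combinations(sorted(round_updates), 2):
            if math.dist(round_updates[a], round_updates[b]) <= dist_threshold:
                similar_rounds[(a, b)] += 1
    return {pair for pair, n in similar_rounds.items() if n >= round_threshold}
```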
(This article belongs to the Special Issue Recent Advances in Intrusion Detection Systems Using Machine Learning)
17 pages, 2994 KiB  
Article
Similarity and Homogeneity of Climate Change in Local Destinations: A Globally Reproducible Approach from Slovakia
by Csaba Sidor, Branislav Kršák and Ľubomír Štrba
World 2025, 6(2), 68; https://doi.org/10.3390/world6020068 - 15 May 2025
Abstract
In terms of climate change, while tourism’s natural resources may be considered climate vulnerable, a large part of tourism’s primary industries are high carbon consumers. With the growth of worldwide efforts to adopt climate resilience actions across all industries, Destination Management Organizations could become focal points for raising awareness and leadership among local tourism stakeholders. The manuscript communicates a simple, reproducible approach to observing and analyzing climate change at a high territorial granularity to empower local destinations with the capability to disseminate quantifiable information about past, current, and future climate projections. In relation to Slovakia’s 39 local destinations, the approach utilizes six sub-sets of the latest high-resolution Köppen–Geiger climate classification grid data. The main climate categories’ similarity for local destinations was measured across six periods through the Pearson Correlation Coefficient of Pairwise Euclidean Distances between the linkage matrices of hierarchical clusters adopting Ward’s Linkage Method. The Shannon Entropy Analysis was adopted for the quantification of the homogeneity of the DMOs’ main climate categories, and Weighted Variance Analysis was adopted to identify the main climate categories’ weight fluctuations. The current results indicate not only a major shift from destination climates classified as cold to temperate, but also a transformation to more heterogeneous climates in the future. Full article
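The Shannon entropy used above to quantify how homogeneous a destination's climate-category mix is has a standard definition; a minimal sketch over a vector of category weights:

```python
import math

def shannon_entropy(weights):
    """Shannon entropy (bits) of a distribution of climate-category weights;
    0 means a fully homogeneous destination, larger values mean a more mixed one."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)
```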
(This article belongs to the Special Issue Data-Driven Strategic Approaches to Public Management)

21 pages, 2806 KiB  
Article
A Computer-Aided Approach to Canine Hip Dysplasia Assessment: Measuring Femoral Head–Acetabulum Distance with Deep Learning
by Pedro Franco-Gonçalo, Pedro Leite, Sofia Alves-Pimenta, Bruno Colaço, Lio Gonçalves, Vítor Filipe, Fintan McEvoy, Manuel Ferreira and Mário Ginja
Appl. Sci. 2025, 15(9), 5087; https://doi.org/10.3390/app15095087 - 3 May 2025
Abstract
Canine hip dysplasia (CHD) screening relies on radiographic assessment, but traditional scoring methods often lack consistency due to inter-rater variability. This study presents an AI-driven system for automated measurement of the femoral head center to dorsal acetabular edge (FHC/DAE) distance, a key metric in CHD evaluation. Unlike most AI models that directly classify CHD severity using convolutional neural networks, this system provides an interpretable, measurement-based output to support a more transparent evaluation. The system combines a keypoint regression model for femoral head center localization with a U-Net-based segmentation model for acetabular edge delineation. It was trained on 7967 images for hip joint detection, 571 for keypoints, and 624 for acetabulum segmentation, all from ventrodorsal hip-extended radiographs. On a test set of 70 images, the keypoint model achieved high precision (Euclidean Distance = 0.055 mm; Mean Absolute Error = 0.0034 mm; Mean Squared Error = 2.52 × 10−5 mm2), while the segmentation model showed strong performance (Dice Score = 0.96; Intersection over Union = 0.92). Comparison with expert annotations demonstrated strong agreement (Intraclass Correlation Coefficients = 0.97 and 0.93; Weighted Kappa = 0.86 and 0.79; Standard Error of Measurement = 0.92 to 1.34 mm). By automating anatomical landmark detection, the system enhances standardization, reproducibility, and interpretability in CHD radiographic assessment. Its strong alignment with expert evaluations supports its integration into CHD screening workflows for more objective and efficient diagnosis and CHD scoring. Full article
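The segmentation scores quoted above (Dice and Intersection over Union) have simple definitions on binary masks; a minimal sketch with toy masks, not the study's radiographs:

```python
def dice_and_iou(mask_a, mask_b):
    """Dice score and Intersection-over-Union for two binary masks,
    given as flat sequences of 0/1 values."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    sa, sb = sum(mask_a), sum(mask_b)
    dice = 2 * inter / (sa + sb)
    iou = inter / (sa + sb - inter)
    return dice, iou
```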
(This article belongs to the Special Issue Research on Machine Learning in Computer Vision)

16 pages, 1277 KiB  
Article
Research on the Reproductive Strategies of Different Provenances/Families of Juglans mandshurica Maxim. Based on the Fruit Traits
by Yitong Chen, Ruixue Guo, Xiaona Pei, Dan Peng, Zihan Yan, Mingrui Kang, Yulu Pan, Jingxin Yu, Lu Xu, Huicong Lin, Chuang Liu, Qinhui Zhang and Xiyang Zhao
Horticulturae 2025, 11(5), 495; https://doi.org/10.3390/horticulturae11050495 - 2 May 2025
Abstract
This study systematically analyzed the fruit traits of four provenances and 117 families of Juglans mandshurica Maxim. in Jilin Province. By measuring key traits such as fruit phenotype and nut phenotype, the relationship between fruit characteristics and environmental adaptability was explored, leading to the selection of superior materials with high oil content potential. The study used fruit from J. mandshurica of 117 families (random sampling) across four provenances as experimental materials and measured 13 fruit phenotypic traits, including fruit length and fruit width. Finally, principal component analysis was conducted and genetic variation parameters were estimated. The results of the analysis of variance (ANOVA) indicated that, except for the nut roundness index, all other traits exhibited highly significant differences among provenances and families (p < 0.01). The ranges of the genetic and phenotypic variation coefficients for the various traits were 7.47–23.23% and 8.76–29.59%, respectively. The family heritability ranged from 0.968 to 0.988. Correlation analysis among fruit traits revealed a non-significant correlation between fruit width and seed yield, fruit type index and nut weight, kernel weight and kernel yield, as well as nut longitudinal diameter and kernel yield. However, significant correlations were observed among all other traits. The Pearson correlation analysis between fruit traits and environmental factors revealed a significant negative correlation between longitude and seed yield. Cluster analysis results, based on the Euclidean distance method, showed that materials from the four provenances were categorized into three groups at a genetic distance of 5. Principal component analysis (PCA) revealed that the cumulative contribution rate of four principal components reached 87.00%. PC1 demonstrated the highest contribution rate and included traits such as fruit length, nut longitudinal diameter, nut transverse diameter, nut side diameter, three-diameter mean, and nut weight. One elite provenance and five elite families were preliminarily selected. The realized gain for the selected provenance fruit traits was higher for fruit weight and kernel weight, with values of 2.41% and 3.67%, respectively. For the selected families, the genetic gain was highest for kernel yield and kernel weight, with values of 16.51% and 26.66%, respectively. The findings will provide insights into breeding strategies to enhance walnut oil yield. The identified traits may be used to guide breeding programs for developing high-oil-content varieties; however, further validation studies are required to confirm these traits and their applicability in large-scale breeding efforts.
(This article belongs to the Section Genetics, Genomics, Breeding, and Biotechnology (G2B2))

14 pages, 734 KiB  
Article
MWMOTE-FRIS-INFFC: An Improved Majority Weighted Minority Oversampling Technique for Solving Noisy and Imbalanced Classification Datasets
by Dong Zhang, Xiang Huang, Gen Li, Shengjie Kong and Liang Dong
Appl. Sci. 2025, 15(9), 4670; https://doi.org/10.3390/app15094670 - 23 Apr 2025
Abstract
Fault diagnosis and product quality inspection data in industrial settings widely contain high-noise, imbalanced samples, and such samples are very difficult to analyze. Oversampling has proved to be a simple solution to imbalanced data in the past, but it offers no significant resistance to noise. To solve the binary classification problem of high-noise imbalanced data, an enhanced majority weighted minority oversampling technique, MWMOTE-FRIS-INFFC, is introduced in this study, specifically designed for processing noisy, imbalanced classification datasets. The method uses Euclidean distance to assign sample weights and synthesizes new samples from higher-weight minority-class instances, thus alleviating data scarcity in the smaller class clusters. Then, the fuzzy rough instance selection (FRIS) method is used to eliminate the subsets of synthetic minority samples with low clustering membership, which effectively reduces the overfitting tendency caused by synthetic oversampling. In addition, the integration of the classification fusion iterative noise filter (INFFC) helps mitigate noise issues in both the raw data and the synthetic data. On this basis, a series of experiments on 8 datasets is designed to show that the proposed MWMOTE-FRIS-INFFC algorithm improves upon the performance of 6 existing oversampling algorithms.
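The core synthesis step of MWMOTE-family oversamplers is SMOTE-style interpolation between minority samples, with selection probabilities driven by distance-based weights; a rough sketch under those assumptions (the `closeness_weight` form here is a hypothetical simplification, not the paper's exact weighting):

```python
import math

def synthesize(sample, neighbor, g):
    """Place a synthetic minority sample at fraction g (0..1) along the
    segment from `sample` to a minority-class `neighbor` (SMOTE-style)."""
    return [a + g * (b - a) for a, b in zip(sample, neighbor)]

def closeness_weight(minority_pt, majority_pt):
    """Simplified hardness weight: inversely proportional to the Euclidean
    distance from a minority sample to a nearby majority sample, so
    borderline minority samples receive larger weights."""
    return 1.0 / (1e-9 + math.dist(minority_pt, majority_pt))
```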
(This article belongs to the Special Issue Fuzzy Control Systems: Latest Advances and Prospects)

21 pages, 8070 KiB  
Article
Housing Price Modeling Using a New Geographically, Temporally, and Characteristically Weighted Generalized Regression Neural Network (GTCW-GRNN) Algorithm
by Saeed Zali, Parham Pahlavani, Omid Ghorbanzadeh, Ali Khazravi, Mohammad Ahmadlou and Sara Givekesh
Buildings 2025, 15(9), 1405; https://doi.org/10.3390/buildings15091405 - 22 Apr 2025
Abstract
The location of housing has a significant influence on its price. Generally, spatial autocorrelation and spatial heterogeneity affect housing price data. Additionally, time is a crucial factor in housing price modeling, as it helps capture market trends and fluctuations. Currency market fluctuations also directly affect housing prices. Therefore, in addition to the physical features of the property, such as the area of the residential unit and the building age, the exchange rate (dollar price) was added to the set of independent variables. This study used real estate transaction records from Iran's registration system, covering February, May, August, and November in 2017–2019. Initially, 7464 transactions were collected, but after preprocessing, the dataset was refined to 7161 records. Unlike feedforward neural networks, the generalized regression neural network does not converge to local minima, so in this research, the Geographically, Temporally, and Characteristically Weighted Generalized Regression Neural Network (GTCW-GRNN) was developed for housing price modeling. In addition to being able to model the spatial-temporal heterogeneity present in the observations, this algorithm is accurate and faster than MLR, GWR, GRNN, and GCW-GRNN. The average adjusted coefficient of determination for the MLR, GWR, GTWR, GRNN, GCW-GRNN, and proposed GTCW-GRNN methods, across the different modes of using Euclidean or travel distance and a fixed or adaptive kernel, was equal to 0.760, 0.797, 0.854, 0.777, 0.774, and 0.813, respectively, which showed the success of the proposed GTCW-GRNN algorithm. The results also showed that the dollar exchange rate and the area of the housing unit were significantly important variables.
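A generalized regression neural network, as used above, is at heart a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson form); a minimal sketch with a plain Euclidean distance kernel, omitting the geographic/temporal/characteristic weighting that the GTCW variant adds:

```python
import math

def grnn_predict(train_x, train_y, query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training targets,
    with the kernel applied to Euclidean distances and bandwidth sigma."""
    weights = [math.exp(-math.dist(x, query) ** 2 / (2 * sigma ** 2))
               for x in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```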
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
15 pages, 802 KiB  
Article
A Theoretical Framework for Computing Generalized Weighted Voronoi Diagrams Based on Lower Envelopes
by Martin Held and Stefan de Lorenzo
Geometry 2025, 2(2), 5; https://doi.org/10.3390/geometry2020005 - 17 Apr 2025
Viewed by 705
Abstract
This paper presents a theoretical framework for constructing generalized weighted Voronoi diagrams (GWVDs) of weighted points and straight-line segments (“sites”) in the Euclidean plane, based on lower envelopes constructed in three-dimensional space. Central to our approach is an algebraic distance function that defines the minimum weighted distance from a point to a site. We also introduce a parameterization for the bisectors, ensuring a precise representation of Voronoi edges. The connection to lower envelopes allows us to derive (almost tight) bounds on the combinatorial complexity of a GWVD. We conclude with a short discussion of implementation strategies, ranging from leveraging computational geometry libraries to employing graphics hardware for approximate solutions. Full article
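The lower-envelope connection can be made concrete with a discretized sketch: evaluate each site's weighted distance surface over a grid and take the pointwise minimum, which is essentially the approximate, graphics-hardware-friendly strategy the abstract alludes to. This example is a simplification using multiplicatively weighted point sites only (distance divided by weight); the paper's algebraic distance function and segment sites are not reproduced here:

```python
import numpy as np

def approx_weighted_voronoi(sites, weights, xs, ys):
    """Label each grid cell with the index of its nearest weighted site.

    Uses the multiplicatively weighted distance d(p, s_i) / w_i; taking
    the argmin over all per-site distance surfaces is a discretized
    lower envelope in three-dimensional space.
    """
    gx, gy = np.meshgrid(xs, ys)                      # grid coordinates
    dists = np.stack([
        np.hypot(gx - sx, gy - sy) / w                # weighted distance surface per site
        for (sx, sy), w in zip(sites, weights)
    ])
    return np.argmin(dists, axis=0)                   # lower envelope -> region labels
```

With equal weights this reduces to the ordinary Voronoi diagram; increasing a site's weight enlarges its region, shifting the bisectors accordingly.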
16 pages, 4637 KiB  
Article
Indoor Air Pollution Source Localization Based on Small-Sample Training Convolutional Neural Networks
by Tiancheng Ye and Mengtao Han
Buildings 2025, 15(8), 1244; https://doi.org/10.3390/buildings15081244 - 10 Apr 2025
Viewed by 490
Abstract
In addressing the problem of indoor air pollution source localization, traditional methods suffer from limitations such as strong sample dependence and low computational efficiency. This study uses a convolutional neural network to establish a pollution source inversion method based on small samples. By integrating computational fluid dynamics simulation data and deep learning techniques, a spatial pollution source identification model suitable for limited-sample conditions was constructed. In a benchmark scenario, the optimized model achieved a weighted localization accuracy of 82.3% within a prediction radius of 1 m, and the corresponding normalized error of the detected area was less than 0.26%. In cross-scenario verification, the localization accuracy within a 1 m radius increased to 100%, and the corresponding predicted Euclidean distance error decreased by 21.43%. By using the optimal cutting ratio (α = 0.25) and a rotation-enhanced dataset (θ = 10°, n = 36), the model reduced the cross-space sample requirement to 1/5 of that of the benchmark scenario while preserving the accuracy of the spatial representation. These findings provide an efficient and reliable deep learning solution for localizing pollution sources in complex spaces. Full article
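The two evaluation quantities in the abstract, Euclidean distance error and hit rate within a prediction radius, can be computed as follows. This is a generic sketch of those metrics, not the authors' evaluation code; the function name and argument layout are assumptions:

```python
import numpy as np

def localization_metrics(pred, true, radius=1.0):
    """Mean Euclidean localization error and hit rate within `radius`.

    `pred` and `true` are (n, d) arrays of predicted and actual source
    coordinates; the hit rate is the fraction of predictions landing
    within `radius` (e.g. 1 m) of the true source.
    """
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    err = np.linalg.norm(pred - true, axis=1)         # per-sample Euclidean error
    return err.mean(), float(np.mean(err <= radius))  # (mean error, accuracy within radius)
```

A reported "localization accuracy within a 1 m radius" then corresponds to the second returned value with `radius=1.0`.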
(This article belongs to the Special Issue New Technologies in Assessment of Indoor Environment)