Search Results (692)

Search Parameters:
Keywords = label assigned

28 pages, 41726 KiB  
Article
Robust Unsupervised Feature Selection Algorithm Based on Fuzzy Anchor Graph
by Zhouqing Yan, Ziping Ma, Jinlin Ma and Huirong Li
Entropy 2025, 27(8), 827; https://doi.org/10.3390/e27080827 - 4 Aug 2025
Abstract
Unsupervised feature selection aims to characterize the cluster structure of original features and select the optimal subset without label guidance. However, existing methods overlook fuzzy information in the data, failing to model the cluster structure of the data effectively, and rely on squared error for data reconstruction, which exacerbates the impact of noise. Therefore, a robust unsupervised feature selection algorithm based on fuzzy anchor graphs (FWFGFS) is proposed. To address inaccuracies in neighbor assignments, a fuzzy anchor graph learning mechanism is designed. This mechanism models the association between nodes and clusters using fuzzy membership distributions, effectively capturing potential fuzzy neighborhood relationships between nodes and avoiding rigid assignments to specific clusters. This soft cluster assignment improves clustering accuracy and the robustness of the graph structure while maintaining low computational costs. Additionally, to mitigate the interference of noise in the feature selection process, an adaptive fuzzy weighting mechanism is presented. This mechanism assigns different weights to features based on their contribution to the error, thereby reducing errors caused by redundant features and noise. Orthogonal tri-factorization is applied to the low-dimensional representation matrix, which guarantees that each center represents only one class of features and yields more independent cluster centers. Experimental results on 12 public datasets show that FWFGFS improves average clustering accuracy by 5.68% to 13.79% compared with state-of-the-art methods.
(This article belongs to the Section Information Theory, Probability and Statistics)
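The soft cluster assignment the abstract describes can be illustrated with the standard fuzzy C-means membership formula; FWFGFS's exact formulation is not given here, so this is only a sketch of the general idea, with illustrative function and variable names.

```python
import numpy as np

def fuzzy_memberships(X, centers, m=2.0, eps=1e-12):
    """Soft (fuzzy) cluster memberships: each point receives a membership
    distribution over clusters instead of a hard assignment.
    Standard FCM membership formula; m > 1 controls the fuzziness."""
    # pairwise distances, shape (n_points, n_clusters)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)  # rows sum to 1

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
U = fuzzy_memberships(X, centers)
```

Points near a center receive membership close to 1 for that cluster, while points between clusters keep a genuinely split distribution, which is the "soft" behavior the abstract contrasts with rigid assignment.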

18 pages, 1481 KiB  
Article
Ambiguities, Built-In Biases, and Flaws in Big Data Insight Extraction
by Serge Galam
Information 2025, 16(8), 661; https://doi.org/10.3390/info16080661 - 2 Aug 2025
Abstract
I address the challenge of extracting reliable insights from large datasets using a simplified model that illustrates how hierarchical classification can distort outcomes. The model consists of discrete pixels labeled red, blue, or white. Red and blue indicate distinct properties, while white represents unclassified or ambiguous data. A macro-color is assigned only if one color holds a strict majority among the pixels; otherwise, the aggregate is labeled white, reflecting uncertainty. This setup mimics a percolation threshold at fifty percent. Assuming that the color proportions cannot be accessed directly from the data, I implement a hierarchical coarse-graining procedure: elements (first pixels, then aggregates) are recursively grouped and reclassified via local majority rules, ultimately producing a single super-aggregate whose color represents the inferred macro-property of the collection of pixels as a whole. Analytical results supported by simulations show that the process introduces additional white aggregates beyond any white pixels present initially; these arise from groups lacking a clear majority, requiring arbitrary symmetry-breaking decisions to attribute a color to them. While each local resolution may appear minor and inconsequential, their repetition introduces a growing systematic bias. Even with complete data, unavoidable asymmetries in local rules are shown to skew outcomes. This study highlights a critical limitation of recursive data reduction: insight extraction is shaped not only by data quality but also by how local ambiguity is handled, resulting in built-in biases. The flaws are thus due not to the data but to structural choices made during local aggregation. Although based on a simple model, these findings expose a high likelihood of inherent flaws in widely used hierarchical classification techniques.
(This article belongs to the Section Artificial Intelligence)
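The recursive majority-rule aggregation described above can be sketched in a few lines; the group size and tie-breaking rule are illustrative choices, not the paper's exact parameters. The example shows how the same red/blue proportions can yield different super-aggregate colors depending on how pixels fall into groups:

```python
def local_majority(group, tie_break=None):
    """One aggregation step: strict majority of red ('R') or blue ('B')
    wins; a tie is ambiguous and becomes white ('W') unless a
    symmetry-breaking rule forces a color."""
    r, b = group.count('R'), group.count('B')
    if r > b:
        return 'R'
    if b > r:
        return 'B'
    return tie_break or 'W'

def coarse_grain(pixels, size=3, tie_break=None):
    """Recursively regroup and reclassify until one super-aggregate remains."""
    level = list(pixels)
    while len(level) > 1:
        level = [local_majority(level[i:i + size], tie_break)
                 for i in range(0, len(level), size)]
    return level[0]

# Both collections contain 5 red and 4 blue pixels, but grouping decides:
majority_wins = ['R', 'R', 'B', 'B', 'R', 'B', 'R', 'B', 'R']   # -> 'R'
minority_wins = ['R', 'B', 'B', 'R', 'B', 'B', 'R', 'R', 'R']   # -> 'B'
```

The second arrangement lets the minority color win two of three groups, illustrating the systematic distortion that repeated local aggregation can introduce even with complete data.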

31 pages, 2495 KiB  
Article
AIM-Net: A Resource-Efficient Self-Supervised Learning Model for Automated Red Spider Mite Severity Classification in Tea Cultivation
by Malathi Kanagarajan, Mohanasundaram Natarajan, Santhosh Rajendran, Parthasarathy Velusamy, Saravana Kumar Ganesan, Manikandan Bose, Ranjithkumar Sakthivel and Baskaran Stephen Inbaraj
AgriEngineering 2025, 7(8), 247; https://doi.org/10.3390/agriengineering7080247 - 1 Aug 2025
Abstract
Tea cultivation faces significant threats from red spider mite (RSM: Oligonychus coffeae) infestations, which reduce yields and economic viability in major tea-producing regions. Current automated detection methods rely on supervised deep learning models requiring extensive labeled data, limiting scalability for smallholder farmers. This article proposes AIM-Net (AI-based Infestation Mapping Network), evaluating SwAV (Swapping Assignments between Views), a self-supervised learning framework, for classifying RSM infestation severity (Mild, Moderate, Severe) on Cam-RSM, a geo-referenced, field-acquired dataset of RSM-infested tea leaves. The methodology combines SwAV pre-training on unlabeled data with fine-tuning on labeled subsets, employing multi-crop augmentation and online clustering to learn discriminative features without full supervision. Comparative analysis against a fully supervised ResNet-50 baseline used 5-fold cross-validation, assessing accuracy, F1-scores, and computational efficiency. Results demonstrate SwAV's superiority: 98.7% overall accuracy (vs. 92.1% for ResNet-50) and macro-average F1-scores of 98.3% across classes, with a 62% reduction in labeled-data requirements. The model showed particular strength in Mild_RSM-class detection (F1-score: 98.5%) and computational efficiency, enabling deployment on edge devices. Statistical validation confirmed significant improvements (p < 0.001) over baseline approaches. These findings establish self-supervised learning as a transformative tool for precision pest management, offering resource-efficient solutions for early infestation detection while maintaining high accuracy.

31 pages, 11269 KiB  
Review
Advancements in Semantic Segmentation of 3D Point Clouds for Scene Understanding Using Deep Learning
by Hafsa Benallal, Nadine Abdallah Saab, Hamid Tairi, Ayman Alfalou and Jamal Riffi
Technologies 2025, 13(8), 322; https://doi.org/10.3390/technologies13080322 - 30 Jul 2025
Abstract
Three-dimensional semantic segmentation is a fundamental problem in computer vision with a wide range of applications in autonomous driving, robotics, and urban scene understanding. The task involves assigning semantic labels to each point in a 3D point cloud, a data representation that is inherently unstructured, irregular, and spatially sparse. In recent years, deep learning has become the dominant framework for addressing this task, leading to a broad variety of models and techniques designed to tackle the unique challenges posed by 3D data. This survey presents a comprehensive overview of deep learning methods for 3D semantic segmentation. We organize the literature into a taxonomy that distinguishes between supervised and unsupervised approaches. Supervised methods are further classified into point-based, projection-based, voxel-based, and hybrid architectures, while unsupervised methods include self-supervised learning strategies, generative models, and implicit representation techniques. In addition to presenting and categorizing these approaches, we provide a comparative analysis of their performance on widely used benchmark datasets, discuss key challenges such as generalization, model transferability, and computational efficiency, and examine the limitations of current datasets. The survey concludes by identifying potential directions for future research in this rapidly evolving field.
(This article belongs to the Section Information and Communication Technologies)

18 pages, 308 KiB  
Article
Roman Domination of Cartesian Bundles of Cycles over Cycles
by Simon Brezovnik and Janez Žerovnik
Mathematics 2025, 13(15), 2351; https://doi.org/10.3390/math13152351 - 23 Jul 2025
Abstract
A Roman dominating function (RDF) f of a graph G = (V, E) assigns labels from the set {0, 1, 2} to vertices such that every vertex labeled 0 has a neighbor labeled 2. The weight of an RDF f is defined as w(f) = Σ_{v ∈ V} f(v), and the Roman domination number, γ_R(G), is the minimum weight among all RDFs of G. This paper studies the domination and Roman domination numbers of Cartesian bundles of cycles over cycles. Furthermore, the constructed optimal patterns improve known bounds and suggest that even better bounds might be achieved by combining patterns, especially for bundles involving shifts of order 4k and 5k.
(This article belongs to the Special Issue Graph Theory: Advanced Algorithms and Applications, 2nd Edition)
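As a minimal illustration of the definition above (not the paper's construction for Cartesian bundles), the Roman domination condition and weight can be checked directly; the 5-cycle example and its labelings are illustrative:

```python
def is_roman_dominating(adj, f):
    """Roman domination condition: every vertex labeled 0 must have
    at least one neighbor labeled 2."""
    return all(any(f[u] == 2 for u in adj[v])
               for v in adj if f[v] == 0)

def weight(f):
    """w(f) = sum of f(v) over all vertices v."""
    return sum(f.values())

# 5-cycle C5 with vertices 0..4
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
f = {0: 2, 1: 0, 2: 1, 3: 1, 4: 0}  # valid RDF of weight 4
g = {0: 1, 1: 0, 2: 1, 3: 1, 4: 0}  # invalid: vertex 1 has no neighbor labeled 2
```

A brute-force minimum over all 3^n labelings passing this check would give γ_R(G) for small graphs, which is how the definitions compose.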
23 pages, 2543 KiB  
Article
Beyond Standard Losses: Redefining Text-to-SQL with Task-Specific Optimization
by Iker Azurmendi, Ekaitz Zulueta, Gustavo García, Nekane Uriarte-Arrazola and Jose Manuel Lopez-Guede
Mathematics 2025, 13(14), 2315; https://doi.org/10.3390/math13142315 - 20 Jul 2025
Abstract
In recent years, large language models (LLMs) have shown impressive ability in translating text into SQL queries. However, in real-world applications, standard loss functions frequently fail to capture the complexity of queries adequately. Therefore, in this study, a dynamic loss function is proposed that assigns different weights to specific groups of tokens, such as SQL keywords or table names. The objective is to guide the model during training toward mastering the more fundamental concepts of SQL. Our custom loss function is composed of four components: cross-entropy with sequence matching loss, focal loss, F-beta loss, and contrastive sequence loss. During training, the weight of each component is dynamically adjusted to prioritize different aspects of query generation at the appropriate stage. This avoids computationally expensive steps such as SQL validation or detokenization, improving the efficiency of the learning process compared to alternative methods. We empirically tested this method on several open-source LLMs with fewer than 2 billion parameters, using a customized real vehicle-diagnostic dataset. The findings show that our dynamic loss function can improve SQL execution accuracy by up to 20% compared with standard cross-entropy loss, demonstrating that task-specific loss functions can improve the efficiency of LLMs without enlarging the model or acquiring additional labeled data. The proposed technique is also scalable and adaptable to new domains or more complex weighting schemes, highlighting the importance of custom loss function design in real-world applications.
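The stage-dependent weighting idea can be sketched as a weighted sum whose coefficients are interpolated over training; the component values, names, and linear schedules below are illustrative stand-ins, since the paper's exact formulas are not reproduced here.

```python
def dynamic_loss(components, step, total_steps, schedules):
    """Combine loss components with weights that shift during training.
    `components`: name -> scalar loss value for the current batch.
    `schedules`: name -> (start_weight, end_weight), linearly interpolated."""
    t = step / max(total_steps, 1)
    return sum(((1 - t) * w0 + t * w1) * components[name]
               for name, (w0, w1) in schedules.items())

# Early training emphasizes cross-entropy; later stages mix in the
# focal, F-beta, and contrastive terms (all weights are illustrative).
schedules = {"cross_entropy": (1.0, 0.5), "focal": (0.0, 0.3),
             "f_beta": (0.0, 0.1), "contrastive": (0.0, 0.1)}
losses = {"cross_entropy": 2.0, "focal": 1.0, "f_beta": 0.5, "contrastive": 0.2}
early = dynamic_loss(losses, step=0, total_steps=100, schedules=schedules)   # 2.0
late = dynamic_loss(losses, step=100, total_steps=100, schedules=schedules)  # 1.37
```

Because the combination happens on already-computed scalar losses, no SQL validation or detokenization is needed inside the training loop, which matches the efficiency argument in the abstract.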

21 pages, 7139 KiB  
Article
Comparative Study of a Topical and Oral Combination Therapy Containing Oleanolic Acid, Apigenin, and Biotinyl Tripeptide-1 in Patients with Androgenetic Alopecia: A Prospective, Open-Label Trial
by Vlad-Mihai Voiculescu and Mihai Lupu
Cosmetics 2025, 12(4), 152; https://doi.org/10.3390/cosmetics12040152 - 16 Jul 2025
Abstract
Background: Androgenetic alopecia (AGA) is a prevalent condition characterized by progressive follicular miniaturization. Topical minoxidil and oral finasteride are the gold standard, but they are limited by local and systemic adverse effects. Combination therapies targeting both follicular stimulation and nutritional support may enhance clinical outcomes. Objective: To evaluate the efficacy of a combined topical and oral therapy compared with topical monotherapy in patients with AGA using trichoscopic and clinical parameters. Methods: In this open-label, prospective trial, 48 patients were assigned to receive either a topical spray alone (Group A) or in combination with oral capsules (Group B) for 3 months. Trichoscopic parameters were assessed at baseline and post-treatment. Paired and independent t-tests, along with Cohen's d effect sizes, were used to evaluate intra- and inter-group changes. Results: Both groups demonstrated improvements in hair density, thickness, and anagen/telogen ratio. Group B exhibited significantly greater increases in total hair count and anagen conversion (p < 0.05). Effect sizes ranged from small to large, with the most pronounced change observed in the anagen/telogen ratio (Cohen's d = 0.841) in males. Conclusions: The combination of topical and oral treatment led to greater trichologic improvements than topical therapy alone. While extrapolated projections at 6 and 12 months suggest continued benefit, future studies with longer duration and placebo controls are required to validate these findings.
(This article belongs to the Section Cosmetic Formulations)

27 pages, 6169 KiB  
Article
Application of Semi-Supervised Clustering with Membership Information and Deep Learning in Landslide Susceptibility Assessment
by Hua Xia, Zili Qin, Yuanxin Tong, Yintian Li, Rui Zhang and Hongxia Luo
Land 2025, 14(7), 1472; https://doi.org/10.3390/land14071472 - 15 Jul 2025
Abstract
Landslide susceptibility assessment (LSA) plays a crucial role in disaster prevention and mitigation. Traditional random selection of non-landslide samples (labeled as 0) suffers from poor representativeness and high randomness, which may include potential landslide areas and reduce the accuracy of LSA. To address this issue, this study proposes a novel Landslide Susceptibility Index-based Semi-supervised Fuzzy C-Means (LSI-SFCM) sampling strategy that incorporates membership degrees. It uses landslide and unlabeled samples to map the landslide membership degree via Semi-supervised Fuzzy C-Means (SFCM); non-landslide samples are then selected from low-membership regions and assigned their membership values as labels. This study developed three models for LSA (Convolutional Neural Network (CNN), U-Net, and Support Vector Machine (SVM)) and compared three negative-sample sampling strategies: Random Sampling (RS), SFCM (samples labeled 0), and LSI-SFCM. The results demonstrate that LSI-SFCM effectively enhances the representativeness and diversity of negative samples, improving predictive performance and classification reliability. Deep learning models using LSI-SFCM showed superior predictive capability: the CNN model achieved an area under the receiver operating characteristic curve (AUC) of 95.52% and a prediction-rate curve value of 0.859. Furthermore, compared with traditional unsupervised fuzzy C-means (FCM) clustering, SFCM produced a more reasonable distribution of landslide membership degrees, better reflecting the distinction between landslides and non-landslides. This approach enhances the reliability of LSA and provides a scientific basis for disaster prevention and mitigation authorities.
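The negative-sampling step can be sketched as follows: given a landslide membership map produced by SFCM, candidate non-landslide samples are drawn only from low-membership cells, and the membership value itself becomes the soft label. The threshold, sample count, and membership values are illustrative, not the paper's settings.

```python
import numpy as np

def select_negatives(membership, n_samples, threshold=0.2, seed=0):
    """Pick non-landslide training cells from low landslide-membership
    regions and return the membership values as soft labels, rather
    than labeling every negative as exactly 0."""
    rng = np.random.default_rng(seed)
    low = np.flatnonzero(membership < threshold)   # candidate negatives
    idx = rng.choice(low, size=n_samples, replace=False)
    return idx, membership[idx]                    # indices and soft labels

membership = np.array([0.9, 0.05, 0.8, 0.1, 0.02, 0.7, 0.15, 0.95])
idx, labels = select_negatives(membership, n_samples=3)
```

Restricting the draw to low-membership cells is what keeps potential landslide areas out of the negative class, and carrying the membership forward as the label is the difference between LSI-SFCM and the plain SFCM (label 0) strategy.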

16 pages, 3611 KiB  
Article
Study on the Effectiveness of Multi-Dimensional Approaches to Urban Flood Risk Assessment
by Hyung Jun Park, Su Min Song, Dong Hyun Kim and Seung Oh Lee
Appl. Sci. 2025, 15(14), 7777; https://doi.org/10.3390/app15147777 - 11 Jul 2025
Abstract
The increasing frequency and severity of urban flooding, driven by climate change and urban population growth, present major challenges. Traditional flood control infrastructure alone cannot fully prevent flood damage, highlighting the need for a comprehensive, multi-dimensional disaster management approach. This study proposes the Flood Risk Index for Building (FRIB), a building-level assessment framework that integrates vulnerability, hazard, and exposure. FRIB assigns customized risk levels to individual buildings and evaluates the effectiveness of a multi-dimensional method. Compared with traditional indicators such as flood depth, FRIB more accurately identifies high-risk areas by incorporating diverse risk factors. It also enables efficient resource allocation by excluding low-risk buildings and focusing efforts on high-risk zones. For example, in a case where 5124 buildings were targeted based on a 1 m flood depth, applying FRIB excluded 24 buildings at the "low" risk level and up to 530 at the "high" level, reducing unnecessary interventions. Moreover, quantitative metrics such as entropy and variance showed that as FRIB levels rise, flood depth distributions become more balanced, demonstrating that depth alone does not determine risk. In conclusion, while qualitative labels from "very low" to "very high" aid intuitive understanding, FRIB's quantitative, multi-dimensional approach enhances precision in urban flood management. Future research may expand FRIB's application to varied regions, supporting tailored flood response strategies.

27 pages, 14035 KiB  
Article
Unsupervised Segmentation and Classification of Waveform-Distortion Data Using Non-Active Current
by Andrea Mariscotti, Rafael S. Salles and Sarah K. Rönnberg
Energies 2025, 18(13), 3536; https://doi.org/10.3390/en18133536 - 4 Jul 2025
Abstract
Non-active current in the time domain is considered for the diagnostics and classification of loads in power grids based on waveform-distortion characteristics, taking as a working example several recordings of the pantograph current in an AC railway system. Data are processed with a deep autoencoder for feature extraction and then clustered via k-means to allow identification of patterns in the latent space. Clustering enables evaluation of the relationship between the physical meaning and operation of the system and the distortion phenomena emerging in the waveforms during operation. Euclidean distance (ED) is used to measure the diversity and pertinence of observations within pattern groups and to identify anomalies (abnormal distortion, transients, etc.). This approach allows the classification of new data by assigning them to clusters based on proximity to centroids. This unsupervised method exploiting non-active current is novel and has proven useful for providing labeled data for later supervised learning with a 1D-CNN, which achieved a balanced accuracy of 96.46% under normal conditions. The ED and 1D-CNN methods were tested on an additional unlabeled dataset and achieved 89.56% agreement in identifying normal states. Additionally, Grad-CAM, applied to the 1D-CNN, quantitatively identifies the waveform parts that influence the model's predictions, significantly enhancing the interpretability of the classification results. This is particularly useful for better understanding load operation, including anomalies that affect grid stability and energy efficiency. Finally, the method has also been successfully validated for general applicability with data from a different scenario (charging of electric vehicles). The method can be applied to load identification and classification for non-intrusive load monitoring, with the aim of implementing automatic, unsupervised assessment of load behavior, including transient detection, power-quality issues, and improvements in energy efficiency.
(This article belongs to the Section F: Electrical Engineering)
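The cluster-assignment and anomaly steps can be sketched directly: new observations are assigned to the nearest k-means centroid in the latent space, and a Euclidean-distance threshold flags observations that fit no pattern group well. The centroids, data, and threshold below are illustrative, and the autoencoder that would produce the latent features is omitted.

```python
import numpy as np

def assign_and_flag(latent, centroids, dist_threshold):
    """Assign each latent-space observation to its nearest centroid
    (Euclidean distance); flag observations whose minimum distance
    exceeds the threshold as anomalies."""
    # pairwise distances, shape (n_obs, n_clusters)
    d = np.linalg.norm(latent[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return labels, d.min(axis=1) > dist_threshold

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
latent = np.array([[0.5, 0.2], [9.8, 10.1], [4.0, 5.0]])
labels, anomalous = assign_and_flag(latent, centroids, dist_threshold=2.0)
```

The third observation lands far from both centroids, so it is assigned a cluster but flagged, which mirrors how ED separates pertinent observations from abnormal distortion or transients.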

25 pages, 2314 KiB  
Article
Automatic Classification of BIM Object Based on IFC Data Using the Uniclass Classification Standard
by Shi Tang, Takamasa Bito and Kazuya Shide
Buildings 2025, 15(13), 2347; https://doi.org/10.3390/buildings15132347 - 4 Jul 2025
Abstract
Classification of BIM objects is critical for enhancing information interoperability and standardization within construction projects; however, research on automated BIM object classification based on standardized classification systems remains limited. Therefore, this study proposes an automated method to classify BIM objects from IFC data under the Uniclass system, aiming to enhance standardization, semantic clarity, and practical applicability. The proposed method first assigns Uniclass codes to 8715 BIM objects, then extracts 13 types of IFC-derived feature variables (including semantic, spatial, and dimensional information) and uses two categories of Uniclass coding information (the EF and Ss tables) as classification labels, comprising 11 and 17 classes, respectively. A Random Forest model with 100 decision trees and 10-fold cross-validation is then employed for automatic classification. Experimental results show that the proposed method achieves classification accuracies of 1.00 and 0.99 on the Elements/Functions and Systems classification tasks, respectively. This study demonstrates that accurate, fine-grained classification of BIM objects can be achieved using only low-LOD IFC data, thereby contributing to standardized information structuring and facilitating intelligent model management during the early design phase.
(This article belongs to the Section Construction Management, and Computers & Digitization)
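The classification setup (a 100-tree Random Forest evaluated with 10-fold cross-validation) can be sketched with scikit-learn; the feature matrix below is a synthetic stand-in for the 13 IFC-derived feature variables, and the class labels are artificial.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))           # stand-in for 13 IFC-derived features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic class labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
mean_acc = scores.mean()
```

In the paper's setting the labels would instead be the 11 EF or 17 Ss Uniclass classes, and the reported accuracy is the mean over the ten folds.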

23 pages, 578 KiB  
Article
Distributed Partial Label Multi-Dimensional Classification via Label Space Decomposition
by Zhen Xu and Sicong Chen
Electronics 2025, 14(13), 2623; https://doi.org/10.3390/electronics14132623 - 28 Jun 2025
Abstract
Multi-dimensional classification (MDC), in which training data are concurrently associated with multiple label variables across several dimensions, has garnered significant interest recently. Most current MDC methods follow the supervised learning framework, inducing a predictive model from a large amount of precisely labeled data, so they struggle to obtain satisfactory results when the training data are annotated not with precise labels but with ambiguous ones. Moreover, current MDC algorithms consider only the centralized setting, where all training data are handled at a single node for classifier induction. In some real applications, however, the training data are not consolidated at a single fusion center but are dispersed among multiple nodes. In this study, we focus on decentralized classification of multi-dimensional data with partially accessible candidate labels and develop a distributed method, dPL-MDC, for learning with these partial labels. The algorithm conducts one-vs.-one decomposition on the originally heterogeneous multi-dimensional output space, transforming the partial MDC problem into distributed partial multi-label learning. Then, using several shared anchor data points to characterize the global distribution of label variables, we propose a novel distributed approach to learn the label confidence of the training data. Under the supervision of the recovered credible labels, the classifier is induced by exploiting high-order label dependencies in a common low-dimensional subspace. Experiments on various datasets indicate that the proposed method achieves satisfactory learning performance in distributed partial MDC.

73 pages, 2833 KiB  
Article
A Comprehensive Methodological Survey of Human Activity Recognition Across Diverse Data Modalities
by Jungpil Shin, Najmul Hassan, Abu Saleh Musa Miah and Satoshi Nishimura
Sensors 2025, 25(13), 4028; https://doi.org/10.3390/s25134028 - 27 Jun 2025
Abstract
Human Activity Recognition (HAR) systems aim to understand human behavior and assign a label to each action, attracting significant attention in computer vision due to their wide range of applications. HAR can leverage various data modalities, such as RGB images and video, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, and radar signals. Each modality provides unique and complementary information suited to different application scenarios, and numerous studies have consequently investigated diverse approaches to HAR using these modalities. This paper presents a comprehensive survey of the latest advancements in HAR from 2014 to 2025, focusing on Machine Learning (ML) and Deep Learning (DL) approaches categorized by input data modality; it includes only peer-reviewed research papers published in English to ensure linguistic consistency and academic integrity. We review both single-modality and multi-modality techniques, highlighting fusion-based and co-learning frameworks. Additionally, we cover advancements in hand-crafted action features, methods for recognizing human-object interactions, and activity detection. The survey includes a detailed dataset description for each modality, a summary of the latest HAR systems with a mathematical derivation for evaluating the deep learning models of each modality, and comparative results on benchmark datasets. Finally, we provide insightful observations and propose effective future research directions for HAR.
(This article belongs to the Special Issue Computer Vision and Sensors-Based Application for Intelligent Systems)

22 pages, 1233 KiB  
Article
Radio Mean Labeling Algorithm, Its Complexity and Existence Results
by Meera Saraswathi, K. N. Meera and Yuqing Lin
Mathematics 2025, 13(13), 2057; https://doi.org/10.3390/math13132057 - 20 Jun 2025
Abstract
Radio mean labeling of a connected graph G is an assignment of distinct positive integers to the vertices of G satisfying a mathematical constraint called the radio mean condition. The maximum label assigned to any vertex of G is called the span of the radio mean labeling, and the minimum span over all feasible radio mean labelings of G is the radio mean number of G, denoted by rmn(G). In our previous study, we proved that if G has order n, then rmn(G) ∈ [n, rmn(P_n)], where P_n is a path of order n. All graphs of diameter 1, 2, or 3 have radio mean number equal to the order n, but they are not the only graphs on n vertices with radio mean number n. Graphs isomorphic to the path P_n have the maximum diameter among all graphs of order n, and they possess the maximum feasible radio mean number. In this paper, we show that, for any integer in the range of achievable radio mean numbers, there always exists a graph of order n with that integer as its radio mean number. This is approached by introducing a special type of tree whose construction is detailed in the article. The task of assigning radio mean labels to a graph can be considered an optimization problem. This paper critiques the limitations of existing Integer Linear Programming (ILP) models for radio mean labeling and proposes a new one: the existing ILP model does not guarantee that the vertex labels are distinct, positive, and satisfy the radio mean condition, prompting the need for an improved approach. The proposed ILP model involves n² constraints if the input graph's order is n, and we use it to obtain a radio mean labeling of a cycle of order 10. In our previous study, we showed that, for any graph G, radio mean labelings of its diametral paths can be extended to the vertex set of G to obtain radio mean labelings of G. This insight forms the basis for an algorithm presented in this paper to obtain radio mean labels for a given graph G with n vertices and diameter d. The correctness and complexity of this algorithm are analyzed in detail. Radio mean labelings have been proposed for cryptographic key generation in previous works, and the algorithm presented here is general enough to support similar applications across various graph structures.
(This article belongs to the Section E1: Mathematics and Computer Science)
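The abstract does not restate the radio mean condition; in the literature it is commonly given as d(u, v) + ⌈(f(u) + f(v))/2⌉ ≥ 1 + diam(G) for every pair of distinct vertices, with all labels distinct positive integers. Under that assumption, a candidate labeling can be checked directly; the path P4 and the labelings below are illustrative.

```python
from itertools import combinations
from math import ceil

def satisfies_radio_mean(dist, diam, f):
    """Check a candidate radio mean labeling: labels must be distinct
    positive integers, and every pair u != v must satisfy
    d(u, v) + ceil((f(u) + f(v)) / 2) >= 1 + diam."""
    if len(set(f.values())) != len(f) or min(f.values()) < 1:
        return False
    return all(dist[u][v] + ceil((f[u] + f[v]) / 2) >= 1 + diam
               for u, v in combinations(f, 2))

# Path P4 with vertices 0..3: d(i, j) = |i - j|, diameter 3
dist = {i: {j: abs(i - j) for j in range(4)} for i in range(4)}
ok = satisfies_radio_mean(dist, 3, {0: 3, 1: 4, 2: 5, 3: 2})   # valid, span 5
bad = satisfies_radio_mean(dist, 3, {0: 1, 1: 2, 2: 3, 3: 4})  # fails for (0, 1)
```

Minimizing the maximum label over all labelings passing this check yields rmn(G), which is the quantity the paper's ILP model and algorithm target.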

24 pages, 9889 KiB  
Article
An Intelligent Management System and Advanced Analytics for Boosting Date Production
by Shaymaa E. Sorour, Munira Alsayyari, Norah Alqahtani, Kaznah Aldosery, Anfal Altaweel and Shahad Alzhrani
Sustainability 2025, 17(12), 5636; https://doi.org/10.3390/su17125636 - 19 Jun 2025
Abstract
The date palm industry is a vital pillar of agricultural economies in arid and semi-arid regions; however, it remains vulnerable to challenges such as pest infestations, post-harvest diseases, and limited access to real-time monitoring tools. This study applied the baseline YOLOv11 model and its optimized variant, YOLOv11-Opt, to automate the detection, classification, and monitoring of date fruit varieties and disease-related defects. The models were trained on a curated dataset of real-world images collected in Saudi Arabia and enhanced through advanced data augmentation techniques, dynamic label assignment (SimOTA++), and extensive hyperparameter optimization. The experimental results demonstrated that YOLOv11-Opt significantly outperformed the baseline YOLOv11, achieving an overall classification accuracy of 99.04% for date types and 99.69% for disease detection, with ROC-AUC scores exceeding 99% in most cases. The optimized model effectively distinguished visually complex diseases, such as scale insect and dry date skin, across multiple date types, enabling high-resolution, real-time inference. Furthermore, a visual analytics dashboard was developed to support strategic decision-making by providing insights into production trends, disease prevalence, and varietal distribution. These findings underscore the value of integrating optimized deep learning architectures and visual analytics for intelligent, scalable, and sustainable precision agriculture.
(This article belongs to the Special Issue Sustainable Food Processing and Food Packaging Technologies)
