Search Results (112)

Search Parameters:
Keywords = regular open set

26 pages, 1611 KB  
Article
Evaluating a Virtual Learning Environment for Secondary English in a Public School: Usability, Motivation, and Engagement
by Myriam Tatiana Velarde Orozco and Bárbara Luisa de Benito Crosetti
Educ. Sci. 2026, 16(1), 169; https://doi.org/10.3390/educsci16010169 - 22 Jan 2026
Viewed by 44
Abstract
Public schools often operate with shared devices, unstable connectivity, and limited support for digital tools, which can make feature-heavy platforms difficult to adopt and sustain. This study reports the first formal design iteration and formative evaluation of VLEPIC, a school-centred virtual learning environment (VLE) developed to support secondary English as a Foreign Language in a low-resource Ecuadorian public school. Using a design-based research approach with a convergent mixed-methods design, one Grade 10 cohort (n = 42; two intact classes) used VLEPIC for one month as a complement to regular lessons. Data were collected through questionnaires on perceived usability and motivation, platform usage logs, and open-ended feedback from students and the teacher; results were analysed descriptively and thematically and then integrated to inform design decisions. Students reported high perceived usability and strong motivational responses in attention, relevance, and satisfaction, while confidence was more heterogeneous. Usage logs indicated recurrent but uneven engagement, with distinct low-, medium-, and high-activity profiles. Qualitative feedback highlighted enjoyment and clarity alongside issues with progress tracking between missions, navigation on mobile devices, and task submission reliability. The main contribution is a set of empirically grounded, context-sensitive design principles linking concrete interface and task-design decisions to perceived usability, motivation, and real-world usage patterns in constrained school settings. Full article

26 pages, 1674 KB  
Article
Relaxed Monotonic QMIX (R-QMIX): A Regularized Value Factorization Approach to Decentralized Multi-Agent Reinforcement Learning
by Liam O’Brien and Hao Xu
Robotics 2026, 15(1), 28; https://doi.org/10.3390/robotics15010028 - 21 Jan 2026
Viewed by 89
Abstract
Value factorization methods have become a standard tool for cooperative multi-agent reinforcement learning (MARL) in the centralized-training, decentralized-execution (CTDE) setting. QMIX (a monotonic mixing network for value factorization), in particular, constrains the joint action–value function to be a monotonic mixing of per-agent utilities, which guarantees consistency with individual greedy policies but can severely limit expressiveness on tasks with non-monotonic agent interactions. This work revisits this design choice and proposes Relaxed Monotonic QMIX (R-QMIX), a simple regularized variant of QMIX that encourages but does not strictly enforce the monotonicity constraint. R-QMIX removes the sign constraints on the mixing network weights and introduces a differentiable penalty on negative partial derivatives of the joint value with respect to each agent’s utility. This preserves the computational benefits of value factorization while allowing the joint value to deviate from strict monotonicity when beneficial. R-QMIX is implemented in a standard PyMARL (an open-source MARL codebase) and evaluated on the StarCraft Multi-Agent Challenge (SMAC). On a simple map (3m), R-QMIX matches the asymptotic performance of QMIX while learning substantially faster. On more challenging maps (MMM2, 6h vs. 8z, and 27m vs. 30m), R-QMIX significantly improves both sample efficiency and final win rate (WR), for example increasing the final-quarter mean win rate from 42.3% to 97.1% on MMM2, from 0.0% to 57.5% on 6h vs. 8z, and from 58.0% to 96.6% on 27m vs. 30m. These results suggest that soft monotonicity regularization is a practical way to bridge the gap between strictly monotonic value factorization and fully unconstrained joint value functions. A further comparison against QTRAN (Q-value transformation), a more expressive value factorization method, shows that R-QMIX achieves higher and more reliably convergent win rates on the challenging SMAC maps considered. Full article
(This article belongs to the Special Issue AI-Powered Robotic Systems: Learning, Perception and Decision-Making)
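As an illustration of the soft-monotonicity idea described in the abstract, the sketch below replaces the paper's differentiable autograd penalty with a finite-difference estimate; `mix_fn` is a hypothetical stand-in for the mixing network, not the authors' implementation:

```python
import numpy as np

def monotonicity_penalty(mix_fn, utilities, eps=1e-4):
    """Penalize negative partial derivatives of the joint value with
    respect to each agent's utility -- a finite-difference stand-in for
    the differentiable soft-monotonicity penalty described above."""
    u = np.asarray(utilities, dtype=float)
    base = mix_fn(u)
    penalty = 0.0
    for i in range(u.size):
        bumped = u.copy()
        bumped[i] += eps
        slope = (mix_fn(bumped) - base) / eps
        penalty += max(0.0, -slope)  # only downward slopes are penalized
    return penalty

# a monotonic mixer incurs no penalty; a non-monotonic one does
monotone_mix = lambda u: u.sum()
nonmonotone_mix = lambda u: u[0] - 2.0 * u[1]
```

Because the penalty is soft rather than a hard sign constraint on the mixing weights, the joint value may still deviate from monotonicity when the task reward makes that worthwhile, which is the relaxation R-QMIX exploits.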

18 pages, 10569 KB  
Article
State of the Art of Remote Sensing Data: Gradient Pattern in Pseudocolor Composite Images
by Alexey Terekhov, Ravil I. Mukhamediev and Igor Savin
J. Imaging 2026, 12(1), 23; https://doi.org/10.3390/jimaging12010023 - 4 Jan 2026
Viewed by 171
Abstract
The thematic processing of pseudocolor composite images, especially those created from remote sensing data, is of considerable interest. The set of spectral classes comprising such images is typically described by a nominal scale, meaning the absence of any predetermined relationships between the classes. However, in many cases, images of this type may contain elements of a regular spatial order, one variant of which is a gradient structure. Gradient structures are characterized by a certain regular spatial ordering of spectral classes. Recognizing gradient patterns in the structure of pseudocolor composite images opens up new possibilities for deeper thematic image processing. This article describes an algorithm for analyzing the spatial structure of a pseudocolor composite image to identify gradient patterns. In this process, the initial nominal scale of spectral classes is transformed into a rank scale of the gradient legend. The algorithm is based on the analysis of Moore neighborhoods for each image pixel. This creates an array of the prevalence of all types of local binary patterns (the pixel’s nearest neighbors). All possible variants of the spectral class rank scale composition are then considered. The rank scale variant that describes the largest proportion of image pixels within its gradient order is used as the final result. The user can independently define the criteria for the significance of the gradient order in the analyzed image, focusing either on the overall statistics of the proportion of pixels consistent with the spatial structure of the selected gradient or on the statistics of a selected key image region. The proposed algorithm is illustrated using the analysis of test examples. Full article
(This article belongs to the Section Image and Video Processing)
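The Moore-neighborhood statistic the algorithm builds on can be sketched as follows; this is an illustrative reconstruction (the function name and dictionary representation are assumptions), with a class co-occurrence dictionary standing in for the prevalence array of local patterns:

```python
import numpy as np
from itertools import product

def moore_pair_counts(img):
    """Count co-occurrences of spectral classes within each pixel's
    Moore (8-connected) neighborhood -- the raw statistic on which the
    gradient-ordering step described in the abstract is based."""
    img = np.asarray(img)
    h, w = img.shape
    counts = {}
    for y, x in product(range(h), range(w)):
        for dy, dx in product((-1, 0, 1), repeat=2):
            if (dy, dx) == (0, 0):
                continue  # skip the center pixel itself
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                pair = (img[y, x], img[ny, nx])
                counts[pair] = counts.get(pair, 0) + 1
    return counts
```

A candidate rank ordering of the spectral classes could then be scored by the fraction of these neighbor pairs whose ranks differ by at most one step, and the highest-scoring ordering kept as the gradient legend.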

33 pages, 1463 KB  
Article
Hybrid LLM-Assisted Fault Diagnosis Framework for 5G/6G Networks Using Real-World Logs
by Aymen D. Salman, Akram T. Zeyad, Shereen S. Jumaa, Safanah M. Raafat, Fanan Hikmat Jasim and Amjad J. Humaidi
Computers 2025, 14(12), 551; https://doi.org/10.3390/computers14120551 - 12 Dec 2025
Viewed by 768
Abstract
This paper presents Hy-LIFT (Hybrid LLM-Integrated Fault Diagnosis Toolkit), a multi-stage framework for interpretable and data-efficient fault diagnosis in 5G/6G networks that integrates a high-precision interpretable rule-based engine (IRBE) for known patterns, a semi-supervised classifier (SSC) that leverages scarce labels and abundant unlabeled logs via consistency regularization and pseudo-labeling, and an LLM Augmentation Engine (LAE) that generates operator-ready, context-aware explanations and zero-shot hypotheses for novel faults. Evaluations on a five-class, imbalanced Dataset-A and a simulated production setting with noise and label scarcity show that Hy-LIFT consistently attains higher macro-F1 than rule-only and standalone ML baselines while maintaining strong per-class precision/recall (≈0.85–0.93), including minority classes, indicating robust generalization under class imbalance. IRBE supplies auditable, high-confidence seeds; SSC expands coverage beyond explicit rules without sacrificing precision; and LAE improves operational interpretability and surfaces potential “unknown/novel” faults without altering classifier labels. The paper’s contributions are as follows: (i) a reproducible, interpretable baseline that doubles as a high-quality pseudo-label source; (ii) a principled semi-supervised learning objective tailored to network logs; (iii) an LLM-driven explanation layer with zero-shot capability; and (iv) an open, end-to-end toolkit with scripts to regenerate all figures and tables. Overall, Hy-LIFT narrows the gap between brittle rules and opaque black-box models by combining accuracy, data efficiency, and auditability, offering a practical path toward trustworthy AIOps in next-generation mobile networks. Full article
(This article belongs to the Section AI-Driven Innovations)
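The rule-based stage (IRBE) can be sketched as a first-match pattern engine over log lines; the fault patterns and labels below are hypothetical examples for illustration, not the actual rules or taxonomy from the paper:

```python
import re

# Hypothetical fault patterns -- illustrative only, not the paper's IRBE rules.
RULES = [
    (re.compile(r"handover.*(fail|timeout)", re.IGNORECASE), "handover_failure"),
    (re.compile(r"rrc.*reject", re.IGNORECASE), "rrc_reject"),
    (re.compile(r"sync\s*loss", re.IGNORECASE), "sync_loss"),
]

def classify(log_line):
    """High-precision first-match rule engine: lines matching no rule
    are labeled 'unknown' and deferred to downstream stages (the
    semi-supervised classifier and LLM augmentation engine)."""
    for pattern, label in RULES:
        if pattern.search(log_line):
            return label
    return "unknown"
```

In the framework's pipeline, the confident labels from such an engine double as pseudo-label seeds for the semi-supervised classifier, while "unknown" lines are candidates for LLM-generated zero-shot hypotheses.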

36 pages, 7233 KB  
Article
Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
by K. E. ArunKumar, Matthew E. Wilson, Nathan E. Blake, Tylor J. Yost and Matthew Walker
Sensors 2025, 25(24), 7557; https://doi.org/10.3390/s25247557 - 12 Dec 2025
Viewed by 729
Abstract
Early detection of breast cancer commonly relies on imaging technologies such as ultrasound, mammography and MRI. Among these, breast ultrasound is widely used by radiologists to identify and assess lesions. In this study, we developed image segmentation techniques and multiclass classification artificial intelligence (AI) tools based on pretrained models to segment lesions and detect breast cancer. The proposed workflow includes both the development of segmentation models and the development of a series of classification models to classify ultrasound images as normal, benign or malignant. The pretrained models were trained and evaluated on the Breast Ultrasound Images (BUSI) dataset, a publicly available collection of grayscale breast ultrasound images with corresponding expert-annotated masks. For segmentation, images and ground-truth masks were used to train pretrained encoder (ResNet18, EfficientNet-B0 and MobileNetV2)–decoder (U-Net, U-Net++ and DeepLabV3) models, including the DeepLabV3 architecture integrated with a Frequency-Domain Feature Enhancement Module (FEM). The proposed FEM improves spatial and spectral feature representations using the Discrete Fourier Transform (DFT), GroupNorm, dropout regularization and adaptive fusion. For classification, each image was assigned a label (normal, benign or malignant). Optuna, an open-source software framework, was used for hyperparameter optimization and for testing various pretrained models to determine the best encoder–decoder segmentation architecture. Five different pretrained models (ResNet18, DenseNet121, InceptionV3, MobileNetV3 and GoogLeNet) were optimized for multiclass classification. DeepLabV3 outperformed other segmentation architectures, with consistent performance across training, validation and test images, with Dice Similarity Coefficient (DSC, a metric describing the overlap between predicted and true lesion regions) values of 0.87, 0.80 and 0.83 on the training, validation and test sets, respectively. ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training, while ResNet18:U-Net++ achieved the best Dice coefficient (0.83), IoU (0.71) and area under the curve (AUC, 0.91) scores on the test (unseen) dataset when compared to other models. However, the proposed ResNet18:FrequencyAwareDeepLabV3 (FADeepLabV3) achieved a DSC of 0.85 and an IoU of 0.72 on the test dataset, demonstrating improvements over standard DeepLabV3. Notably, the frequency-domain enhancement substantially improved the AUC from 0.90 to 0.98, indicating enhanced prediction confidence and clinical reliability. For classification, ResNet18 produced an F1 score—a measure combining precision and recall—of 0.95 and an accuracy of 0.90 on the training dataset, while InceptionV3 performed best on the test dataset, with an F1 score of 0.75 and an accuracy of 0.83. We demonstrate a comprehensive approach to automating the segmentation and multiclass classification of breast ultrasound images into benign, malignant or normal classes using transfer learning models on an imbalanced ultrasound image dataset. Full article
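The two overlap metrics reported above, DSC and IoU, have standard definitions over binary masks; the following is a minimal sketch of those definitions, not the authors' evaluation code:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice Similarity Coefficient and Intersection over Union for
    binary segmentation masks: DSC = 2|A∩B| / (|A|+|B|),
    IoU = |A∩B| / |A∪B|."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return dice, iou
```

Note that DSC is always at least as large as IoU for the same masks (DSC = 2·IoU/(1+IoU)), which is consistent with the reported pairs such as DSC 0.85 vs. IoU 0.72.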

20 pages, 1345 KB  
Article
Difference Lindelöf Perfect Function in Topology and Statistical Modeling
by Ali A. Atoom, Mohammad A. Bani Abdelrahman, Tariq S. Alshammari, Khudhayr A. Rashedi and Mahmoud Zuhier Aldrabseh
Mathematics 2025, 13(24), 3961; https://doi.org/10.3390/math13243961 - 12 Dec 2025
Viewed by 273
Abstract
We develop the theory of Difference Lindelöf perfect functions. Through difference covers, we provide intrinsic characterizations; prove stability under composition, subspace restriction, and suitable products; and obtain preservation theorems. Under standard separation axioms, properties such as D-countable compactness, regularity, paracompactness, and the closedness of projections transfer along D-Lindelöf perfect maps. We also connect the framework to statistics. Uses include decision regions expressed as differences of open sets and parameter screening, with visualizations of countable subcovers and their pushforwards. The results point to practical countable cores for learning and inference and suggest extensions to bitopological and fuzzy contexts. Full article
(This article belongs to the Section B: Geometry and Topology)

19 pages, 484 KB  
Article
Which Islamic Index to Invest?
by Burak Doğan and Umut Ugurlu
J. Risk Financial Manag. 2025, 18(11), 651; https://doi.org/10.3390/jrfm18110651 - 19 Nov 2025
Viewed by 1939
Abstract
This paper compares the rulebooks of five main Shariah-compliant equity indices—DJIMI, KLSI, FTSE Shariah, MSCI Islamic, and STOXX Europe Islamic 50—inside one fixed S&P 500 stock list from Q1 2019 to Q4 2023. For each index, we build both equally weighted and market-capitalization-weighted portfolios, then evaluate their performance using the Sharpe, Treynor, and Jensen’s alpha ratios. All Islamic portfolios beat the regular S&P 500 after adjusting for risk, with STOXX as the most stable winner. Its market-cap version reaches a level of 253.01 by Q4 2023, far above the S&P 500 level of 210.46. Market-cap portfolios, in general, perform better than equally weighted ones. Furthermore, STOXX offers better protection in rough markets, while DJIMI shows relatively better performance when prices recover. Most rule sets confer small advantages on the Islamic portfolios compared to conventional ones, but STOXX’s 33% limit on leverage and liquidity results in higher Sharpe ratios. These results suggest that screening details shape portfolio behavior and point to the need for one clear, shared Shariah rulebook so investors can compare products with confidence. From a business ethics view, our study also shows that strict and open screening brings a real “moral dividend”: smaller losses when markets fall and stronger risk-adjusted returns overall, linking faith-based rules to the wider conversation on responsible investing and stakeholder welfare. Full article
(This article belongs to the Special Issue Islamic Financial Markets in Times of Global Uncertainty)
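The three risk-adjusted performance measures used in the paper have standard textbook forms; the sketch below uses hypothetical return figures, not the study's data:

```python
import numpy as np

def sharpe(returns, rf):
    """Sharpe ratio: mean excess return over its sample standard deviation."""
    excess = np.asarray(returns, dtype=float) - rf
    return excess.mean() / excess.std(ddof=1)

def treynor(returns, rf, beta):
    """Treynor ratio: mean excess return per unit of systematic risk (beta)."""
    return (np.mean(returns) - rf) / beta

def jensen_alpha(returns, rf, beta, market_returns):
    """Jensen's alpha: mean return minus the CAPM-predicted return."""
    expected = rf + beta * (np.mean(market_returns) - rf)
    return np.mean(returns) - expected
```

The three measures rank portfolios differently when total risk, systematic risk, and benchmark-relative performance diverge, which is why the paper reports all three rather than any single one.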

27 pages, 657 KB  
Review
Artificial Intelligence in Finance: From Market Prediction to Macroeconomic and Firm-Level Forecasting
by Flavius Gheorghe Popa and Vlad Muresan
AI 2025, 6(11), 295; https://doi.org/10.3390/ai6110295 - 17 Nov 2025
Cited by 1 | Viewed by 3991
Abstract
This review surveys how contemporary machine learning is reshaping financial and economic forecasting across markets, macroeconomics, and corporate planning. We synthesize evidence on model families, such as regularized linear methods, tree ensembles, and deep neural architectures, and explain their optimization (with gradient-based training) and design choices (activation and loss functions). Across tasks, Random Forest and gradient-boosted trees emerge as robust baselines, offering strong out-of-sample accuracy and interpretable variable importance. For sequential signals, recurrent models, especially LSTM ensembles, consistently improve directional classification and volatility-aware predictions, while transformer-style attention is a promising direction for longer contexts. Practical performance hinges on aligning losses with business objectives (for example, cross-entropy vs. RMSE/MAE), handling class imbalance, and avoiding data leakage through rigorous cross-validation. In high-dimensional settings, regularization (such as ridge/lasso/elastic-net) stabilizes estimation and enhances generalization. We compile task-specific feature sets for macro indicators, market microstructure, and firm-level data, and distill implementation guidance covering hyperparameter search, evaluation metrics, and reproducibility. We conclude with open challenges (the accuracy–interpretability trade-off, limited causal insight) and outline a research agenda combining econometrics with representation learning and data-centric evaluation. Full article
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)

29 pages, 166274 KB  
Article
Bridging Vision Foundation and Vision–Language Models for Open-Vocabulary Semantic Segmentation of UAV Imagery
by Fan Li, Zhaoxiang Zhang, Xuanbin Wang, Xuan Wang and Yuelei Xu
Remote Sens. 2025, 17(22), 3704; https://doi.org/10.3390/rs17223704 - 13 Nov 2025
Viewed by 1192
Abstract
Open-vocabulary semantic segmentation (OVSS) is of critical importance for unmanned aerial vehicle (UAV) imagery, as UAV scenes are highly dynamic and characterized by diverse, unpredictable object categories. Current OVSS approaches mainly rely on the zero-shot capabilities of vision–language models (VLMs), but their image-level pretraining objectives yield ambiguous spatial relationships and coarse-grained feature representations, resulting in suboptimal performance in UAV scenes. In this work, we propose a novel hybrid framework for OVSS in UAV imagery, named HOSU, which leverages the priors of vision foundation models to unleash the potential of vision–language models in representing complex spatial distributions and capturing fine-grained small-object details in UAV scenes. Specifically, we propose a distribution-aware fine-tuning method that aligns CLIP with DINOv2 across intra- and inter-region feature distributions, enhancing the capacity of CLIP to model complex scene semantics and capture fine-grained details critical for UAV imagery. Meanwhile, we propose a text-guided multi-level regularization mechanism that leverages the text embeddings of CLIP to impose semantic constraints on the visual features, preventing their drift from the original semantic space during fine-tuning and ensuring stable vision–language correspondence. Finally, to address the pervasive occlusion in UAV imagery, we propose a mask-based feature consistency strategy that enables the model to learn stable representations, remaining robust against viewpoint-induced occlusions. Extensive experiments across four training settings on six UAV datasets demonstrate that our approach consistently achieves state-of-the-art performance compared with previous methods, while comprehensive ablation studies and analyses further validate its effectiveness. Full article

23 pages, 6818 KB  
Article
Enhanced Osseointegration, Osteogenic Differentiation and Adherence Behaviour of Healthy Human Osteoblasts on a Roughened Titanium Surface by Vitamin K2 and Vitamin D3
by Katharina Tscheu, Ann Kathrin Bergmann, Christoph V. Suschek and Uwe Maus
Materials 2025, 18(21), 5012; https://doi.org/10.3390/ma18215012 - 3 Nov 2025
Viewed by 760
Abstract
The number of endoprosthetic implants is constantly increasing. Successful osseointegration of the inserted material into the bone is essential for a prosthesis to remain in the bone as long as possible. In the clinical setting, a roughened titanium surface of implants is used as standard to enable the best possible osseointegration. Vitamin K2 and vitamin D3 play a decisive role in dynamic bone metabolism and therefore also influence osseointegration. For the first time, we carried out in vitro investigations with clinically relevant cells, primary healthy human osteoblasts (hOBs). We qualitatively compared the adhesion behaviour of hOBs on a plastic surface, a smooth, regular titanium surface structure and a roughened, irregular titanium surface structure by scanning electron microscopy and fluorescence microscopy. The osteogenic behaviour and the osteogenic differentiation capacity were quantitatively investigated by analysing the activity of alkaline phosphatase and the alizarin red S assay under the influence of vitamin K2, vitamin D3 and the combination of both vitamins. It was shown that more adhesion points formed between the cells and the titanium on the rough surface structure. In addition, a solid cell network developed more quickly on this surface, with cell runners forming in three-dimensional space, i.e., interconnections between cells across different cell layers. On the other hand, a structured cell network also appeared on the regular smooth surface structure, suggesting that the network forms and builds up along a defined structure. The addition of vitamins further increased the osteogenic differentiation capacity on the rough titanium surface structure. In particular, the isolated addition of vitamin K2 showed improved osteogenic differentiation in the long-term observation, whereas the combined addition of both vitamins promoted the initial osteogenic differentiation. Vitamin K2, therefore, plays a greater role in osseointegration than previously assumed. This opens up new possibilities for the use of vitamin K2 during and after the surgical insertion of an implant. The use of vitamin K2 should be reconsidered for clinical applications in implant care and further investigated clinically. Full article
(This article belongs to the Section Biomaterials)

10 pages, 216 KB  
Article
Associations Between Decision-Making Biases and Swallowing and Physical Functions in Community-Dwelling Older Adults: A Cross-Sectional Study
by Ayane Horike, Kohei Yamaguchi, Kanako Toda Shibahara, Jun Aida, Rieko Moritoyo, Kanako Yoshimi, Kazuharu Nakagawa and Haruka Tohara
Geriatrics 2025, 10(6), 138; https://doi.org/10.3390/geriatrics10060138 - 24 Oct 2025
Viewed by 576
Abstract
Background/Objective: In the context of global aging, maintaining daily habits such as adequate nutrition and regular exercise is essential to achieve healthy aging. Therefore, the preservation of swallowing and physical functions is fundamental. Jaw-opening force, an important swallowing function, is linked to physical function. Daily health behaviors are shaped by decision-making biases. Individuals with high procrastination tendencies may be less likely to engage in health-promoting behaviors, potentially leading to functional decline. While such tendencies are associated with general health behaviors, little is known about their associations with swallowing and physical functions among older adults. The objective of this study was to examine the associations between decision-making biases and swallowing and physical functions in community-dwelling older adults. Methods: A questionnaire survey was conducted to collect basic information and assess decision-making biases. Jaw-opening force (swallowing function) and grip strength (physical function) were measured. Associations of decision-making biases with jaw-opening force and grip strength were examined using multivariable linear regression analysis. We further conducted sex-stratified sensitivity analyses. Results: This cross-sectional study targeted 107 community-dwelling older adults. There was a significant negative association between procrastination tendency and both jaw-opening force (B = −0.715, p = 0.005) and grip strength (B = −1.552, p = 0.003), indicating that individuals with a propensity for procrastination had lower jaw-opening force and grip strength. Conclusions: Procrastination tendency may serve as an indicator for detecting decline in swallowing and physical functions. Moreover, incorporating this modifiable factor to promote behavior change may prevent functional decline. The study results highlight the significance of considering individuals’ decision-making biases—particularly procrastination tendency—in clinical settings. Full article
(This article belongs to the Section Dysphagia)
29 pages, 3003 KB  
Review
Efficient and Secure GANs: A Survey on Privacy-Preserving and Resource-Aware Models
by Niovi Efthymia Apostolou, Elpida Vasiliki Balourdou, Maria Mouratidou, Eleni Tsalera, Ioannis Voyiatzis, Andreas Papadakis and Maria Samarakou
Appl. Sci. 2025, 15(20), 11207; https://doi.org/10.3390/app152011207 - 19 Oct 2025
Viewed by 1709
Abstract
Generative Adversarial Networks (GANs) generate synthetic content to support applications such as data augmentation, image-to-image translation, and training models where data availability is limited. Nevertheless, their broader deployment is constrained by limitations in data availability, high computational and energy demands, as well as privacy and security concerns. These factors restrict their scalability and integration in real-world applications. This survey provides a systematic review of research aimed at addressing these challenges. Techniques such as few-shot learning, consistency regularization, and advanced data augmentation are examined to address data scarcity. Approaches designed to reduce computational and energy costs, including hardware-based acceleration and model optimization, are also considered. In addition, strategies to improve privacy and security, such as privacy-preserving GAN architectures and defense mechanisms against adversarial attacks, are analyzed. By organizing the literature into these thematic categories, the review highlights available solutions, their trade-offs, and remaining open issues. Our findings underline the growing role of GANs in artificial intelligence, while also emphasizing the importance of efficient, sustainable, and secure designs. This work not only consolidates current knowledge but also lays the groundwork for future research. Full article
(This article belongs to the Special Issue Big Data Analytics and Deep Learning for Predictive Maintenance)

26 pages, 4563 KB  
Article
Personalized Smart Home Automation Using Machine Learning: Predicting User Activities
by Mark M. Gad, Walaa Gad, Tamer Abdelkader and Kshirasagar Naik
Sensors 2025, 25(19), 6082; https://doi.org/10.3390/s25196082 - 2 Oct 2025
Cited by 2 | Viewed by 1967
Abstract
A personalized framework for smart home automation is introduced, utilizing machine learning to predict user activities and allow for the context-aware control of living spaces. Predicting user activities, such as ‘Watch_TV’, ‘Sleep’, ‘Work_On_Computer’, and ‘Cook_Dinner’, is essential for improving occupant comfort, optimizing energy consumption, and offering proactive support in smart home settings. The Edge Light Human Activity Recognition Predictor, or EL-HARP, is the main prediction model used in this framework to predict user behavior. The system combines open-source software for real-time sensing, facial recognition, and appliance control with affordable hardware, including the Raspberry Pi 5, ESP32-CAM, Tuya smart switches, NFC (Near Field Communication), and ultrasonic sensors. To predict daily user activities, three gradient-boosting models—XGBoost, CatBoost, and LightGBM—are trained for each household using engineered features and past behavior patterns. Using extended temporal features, LightGBM in particular achieves strong predictive performance within EL-HARP. The framework is optimized for edge deployment with efficient training, regularization, and class imbalance handling. A fully functional prototype demonstrates real-time performance and adaptability to individual behavior patterns. This work contributes a scalable, privacy-preserving, and user-centric approach to intelligent home automation. Full article
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)
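The abstract's core idea (engineer temporal features from a household's activity log, then train a per-household model) can be sketched in a few lines. This is a minimal, hypothetical illustration: a frequency-table baseline stands in for the LightGBM model used in EL-HARP, and the feature choices and activity names are illustrative assumptions, not taken from the paper.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of per-household activity prediction. A most-frequent-
# label baseline stands in for the gradient-boosting models (XGBoost,
# CatBoost, LightGBM) described in the abstract.
def engineer_features(events):
    """events: list of (hour, activity). Returns (features, labels)."""
    X, y = [], []
    for i in range(1, len(events)):
        hour, activity = events[i]
        prev_activity = events[i - 1][1]
        X.append((hour // 6, prev_activity))  # coarse time-of-day bucket + last activity
        y.append(activity)
    return X, y

def fit_baseline(X, y):
    """Map each feature tuple to its most frequent next activity."""
    table = defaultdict(Counter)
    for features, label in zip(X, y):
        table[features][label] += 1
    return {f: c.most_common(1)[0][0] for f, c in table.items()}

events = [(7, "Cook_Dinner"), (8, "Work_On_Computer"),
          (20, "Watch_TV"), (22, "Sleep"),
          (7, "Cook_Dinner"), (8, "Work_On_Computer"),
          (20, "Watch_TV"), (22, "Sleep")]
X, y = engineer_features(events)
model = fit_baseline(X, y)
print(model[(3, "Watch_TV")])  # → Sleep
```

In the real system this lookup table would be replaced by a trained LightGBM classifier over richer engineered features; the data flow (log → features → per-household model → predicted activity) is the same.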
22 pages, 414 KB  
Article
Computing Probabilities of Finding Extremes in a Random Set
by Gheorghiţă Zbăganu, Anişoara Maria Răducan and Marius Rădulescu
Mathematics 2025, 13(19), 3074; https://doi.org/10.3390/math13193074 - 24 Sep 2025
Viewed by 447
Abstract
This study examines the occurrence of extreme points in random samples of size n obtained by mapping uniformly distributed random variables through a function into a multidimensional space. We focus on the probabilities that such sets contain a unique componentwise maximum, minimum, or both. Our interest lies in the asymptotic behavior of these probabilities. We found that for certain irregular mappings, the limits of these probabilities may fail to exist as n tends to infinity. This contrasts with our earlier work, where the assumptions of smoothness and regularity of the mapping function ensured well-behaved limits. In the present study, we investigate scenarios in which these smoothness conditions are relaxed or absent. Because the general multidimensional case is highly challenging, we restrict attention to a simpler but illustrative setting: finite random sets in the plane that lie on the graph of a real function defined over the unit interval. We present partial results in this setting and discuss open questions that remain for future research. Full article
(This article belongs to the Special Issue Advances in Convex Analysis and Inequalities)
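The planar setting the abstract describes is easy to probe numerically: sample n points on the graph of a function f over the unit interval and check whether one of them dominates all the others componentwise. The sketch below is a hedged Monte Carlo illustration with example functions chosen by us, not ones from the paper.

```python
import random

# Monte Carlo estimate of p_n: the probability that a random set of n points
# (x, f(x)), with x uniform on [0, 1], contains a unique componentwise maximum.
# The functions below are illustrative choices, not the paper's examples.
def has_unique_max(points):
    """True iff some point weakly dominates every other point in both
    coordinates (unique with probability 1 when coordinates are distinct)."""
    return any(all(p[0] >= q[0] and p[1] >= q[1] for q in points)
               for p in points)

def estimate_p(f, n, trials=2000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [(x, f(x)) for x in (rng.random() for _ in range(n))]
        hits += has_unique_max(pts)
    return hits / trials

# For an increasing f the rightmost point always dominates, so p_n = 1.
print(estimate_p(lambda x: x * x, 5))   # → 1.0
# For a decreasing f no point can dominate another (n >= 2), so p_n = 0.
print(estimate_p(lambda x: 1 - x, 5))   # → 0.0
```

The two degenerate examples bracket the interesting regime: for irregular f that mixes increasing and decreasing pieces, the estimate lands strictly between 0 and 1, and the paper studies whether such probabilities converge as n grows.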
13 pages, 265 KB  
Article
Multidual Complex Numbers and the Hyperholomorphicity of Multidual Complex-Valued Functions
by Ji Eun Kim
Axioms 2025, 14(9), 683; https://doi.org/10.3390/axioms14090683 - 5 Sep 2025
Cited by 2 | Viewed by 639
Abstract
We develop a rigorous algebraic–analytic framework for multidual complex numbers DCn within the setting of Clifford analysis and establish a comprehensive theory of hyperholomorphic multidual complex-valued functions. Our main contributions are (i) a fully coupled multidual Cauchy–Riemann system derived from the Dirac operator, yielding precise differentiability criteria; (ii) generalized conjugation laws and the associated norms that clarify metric and geometric structure; and (iii) explicit operator and kernel constructions—including generalized Cauchy kernels and Borel–Pompeiu-type formulas—that produce new representation theorems and regularity results. We further provide matrix–exponential and functional calculus representations tailored to DCn, which unify algebraic and analytic viewpoints and facilitate computation. The theory is illustrated through a portfolio of examples (polynomials, rational maps on invertible sets, exponentials, and compositions) and a solvable multidual boundary value problem. Connections to applications are made explicit via higher-order automatic differentiation (using nilpotent infinitesimals) and links to kinematics and screw theory, highlighting how multidual analysis expands classical holomorphic paradigms to richer, nilpotent-augmented coordinate systems. Our results refine and extend prior work on dual/multidual numbers and situate multidual hyperholomorphicity within modern Clifford analysis. We close with a concise summary of notation and a set of concrete open problems to guide further development. Full article
(This article belongs to the Special Issue Mathematical Analysis and Applications IV)