Search Results (2,040)

Search Parameters:
Keywords = regular sets

72 pages, 2054 KB  
Article
Neural Network IDS/IPS Intrusion Detection and Prevention System with Adaptive Online Training to Improve Corporate Network Cybersecurity, Evidence Recording, and Interaction with Law Enforcement Agencies
by Serhii Vladov, Victoria Vysotska, Svitlana Vashchenko, Serhii Bolvinov, Serhii Glubochenko, Andrii Repchonok, Maksym Korniienko, Mariia Nazarkevych and Ruslan Herasymchuk
Big Data Cogn. Comput. 2025, 9(11), 267; https://doi.org/10.3390/bdcc9110267 - 22 Oct 2025
Abstract
This article examines the problem of reliable online detection and IDS/IPS intrusion prevention in dynamic corporate networks, where traditional signature-based methods fail to keep pace with new and evolving attacks, and streaming data is susceptible to drift and targeted “poisoning” of the training dataset. As a solution, we propose a hybrid neural network system with adaptive online training, a formal minimax false-positive control framework, and a set of robustness mechanisms (a Huber model, a pruned learning rate, DRO, a gradient-norm regularizer, and prioritized replay). In practice, the system combines modal encoders for traffic, logs, and metrics; a temporal GNN for entity correlation; a variational module for uncertainty assessment; a differentiable symbolic unit for logical rules; an RL agent for incident prioritization; and an NLG module for explanations and the preparation of forensically relevant artifacts. The applied components are connected via a cognitive layer (cross-modal fusion memory), a Bayesian neural network fuser, and a single multi-task loss function. The practical implementation includes the pipeline “novelty detection → active labelling → incremental supervised update” and chain-of-custody mechanisms for evidential fitness. A significant improvement in quality is demonstrated experimentally: the developed system achieves an ROC AUC of 0.96, an F1-score of 0.95, and a significantly lower FPR than baseline architectures (MLP, CNN, and LSTM). In applied validation tasks, detection rates of ≈92–94% and resistance to distribution drift are noted.
(This article belongs to the Special Issue Internet Intelligence for Cybersecurity)
18 pages, 1270 KB  
Article
When AI Is Fooled: Hidden Risks in LLM-Assisted Grading
by Alfredo Milani, Valentina Franzoni, Emanuele Florindi, Assel Omarbekova, Gulmira Bekmanova and Banu Yergesh
Educ. Sci. 2025, 15(11), 1419; https://doi.org/10.3390/educsci15111419 - 22 Oct 2025
Abstract
This study investigates how targeted attacks can compromise the reliability and applications of large language models (LLMs) in educational assessment, highlighting security vulnerabilities that are frequently underestimated in current AI-supported learning environments. As LLMs and other AI tools are increasingly being integrated into grading, providing feedback, and supporting the evaluation workflow, educators are adopting them for their potential to increase efficiency and scalability. However, this rapid adoption also introduces new risks. An unexplored threat is prompt injection, whereby a student acting as an attacker embeds malicious instructions within seemingly regular assignment submissions to influence the model’s behaviour and obtain a more favourable evaluation. To the best of our knowledge, this is the first systematic comparative study to investigate the vulnerability of popular LLMs within a real-world educational context. We analyse a significant representative scenario involving prompt injection in exam assessment to highlight how easily such manipulations can bypass the teacher’s oversight and distort results, thereby disrupting the entire evaluation process. By modelling the structure and behavioural patterns of LLMs under attack, we aim to clarify the underlying mechanisms and expose their limitations when used in educational settings.
(This article belongs to the Special Issue Cybersecurity and Online Learning in Tertiary Education)
24 pages, 4441 KB  
Article
Assessing the Uncertainty of Traditional Sample-Based Forest Inventories in Mixed and Single Species Conifer Systems Using a Digital Forest Twin
by Mikhail Kondratev, Mark V. Corrao, Ryan Armstrong and Alistar M. S. Smith
Forests 2025, 16(11), 1617; https://doi.org/10.3390/f16111617 - 22 Oct 2025
Abstract
Forest managers need regular, accurate assessments of forest conditions to make informed decisions associated with harvest schedules, growth projections, merchandising, investment, and overall management planning. Traditionally, this is achieved through field-based sampling (i.e., timber cruising) of a subset of the trees within a desired area (e.g., 1%–2%): the landscape is stratified to group similar vegetation structures, and a grid is applied within each stratum where fixed- or variable-radius sample locations (i.e., plots) are installed to gather information used to estimate trees throughout the unmeasured remainder of the area. These traditional approaches are often limited in their assessment of uncertainty until trees are harvested and processed. However, the increasing availability of airborne laser scanning datasets in commercial forestry, processed into Digital Inventories®, enables non-destructive assessment of the accuracy of these field-based surveys, which are commonly referred to as cruises. In this study, we assess the uncertainty of common field sampling-based estimation methods by comparing them to a population of individual trees developed using established and validated methods in operational use on the University of Idaho Experimental Forest (UIEF) and a commercial conifer plantation in Louisiana, USA (PLLP). A series of repeated sampling experiments, representing over 90 million simulations, was conducted under industry-standard cruise specifications, and the resulting estimates are compared against the population values. The analysis reveals key limitations in current sampling approaches, highlighting biases and inefficiencies inherent in certain specifications. Specifically, the methods applied to handle edge plots (i.e., measurements conducted on or near the boundary of a sampling stratum) and stratum delineation contribute most significantly to systematic bias in estimates of the mean and the variance around the mean. The study also shows that conventional estimators, designed for perfectly randomized experiments, are highly sensitive to plot location strategies in field settings, leading to potentially inaccurate estimates of BAA and TPA. Overall, the study highlights the challenges and limitations of traditional forest sampling and the impacts specific sampling design decisions can have on the reliability of key statistical estimates.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
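The repeated-sampling logic described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: the synthetic tree population, plot sizes, and cruise counts below are all assumptions, and random subsets stand in for fixed-radius plots. It shows the baseline fact the study leans on: under truly random plot placement the plot-based mean is unbiased, so systematic bias must come from design choices such as edge-plot handling and stratum delineation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: stem diameters (cm) for 10,000 trees on one tract.
population = rng.gamma(shape=4.0, scale=8.0, size=10_000)
true_mean = population.mean()

def cruise(n_plots=8, trees_per_plot=25):
    """One simulated cruise: average the per-plot means of randomly placed
    plots (random subsets stand in for fixed-radius plots here)."""
    plot_means = [
        rng.choice(population, size=trees_per_plot, replace=False).mean()
        for _ in range(n_plots)
    ]
    return float(np.mean(plot_means))

# Repeat the cruise many times to estimate sampling bias and spread.
estimates = np.array([cruise() for _ in range(500)])
bias = estimates.mean() - true_mean        # ~0 under random plot placement
spread = estimates.std(ddof=1)             # sampling uncertainty of one cruise
```

A digital-twin comparison like the paper's amounts to running such an experiment against a full enumerated population rather than a model of one.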
17 pages, 552 KB  
Article
Winning Opinion in the Voter Model: Following Your Friends’ Advice or That of Their Friends?
by Francisco J. Muñoz and Juan Carlos Nuño
Entropy 2025, 27(11), 1087; https://doi.org/10.3390/e27111087 - 22 Oct 2025
Abstract
We investigate a variation of the classical voter model where the set of influencing agents depends on an individual’s current opinion. The initial population is made up of a random sample of equally sized sub-populations for each state, and two types of interactions are considered: (i) direct neighbors and (ii) second neighbors (friends of direct neighbors, excluding the direct neighbors themselves). The neighborhood size, reflecting regular network connectivity, remains constant across all agents. Our findings show that varying the interaction range introduces asymmetries that affect the probability of consensus and convergence time. At low connectivity, direct neighbor interactions dominate, leading to consensus. As connectivity increases, the probability of either state reaching consensus becomes equal, reflecting symmetric dynamics. This asymmetric effect on the probability of consensus is shown to be independent of network topology in small-world and scale-free networks. Asymmetry also influences convergence time: while symmetric cases display decreasing times with increased connectivity, asymmetric cases show an almost linear increase. Unlike the probability of reaching consensus, the impact of asymmetry on convergence time depends on the network topology. The introduction of stubborn agents further magnifies these effects, especially when they favor the less dominant state, significantly lengthening the time to consensus. We conclude by discussing the implications of these findings for decision-making processes and political campaigns in human populations.
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics II)
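The classical voter model underlying this study is simple to simulate. The sketch below covers only the direct-neighbor case on a ring of connectivity k; the paper's second-neighbor variant would swap in a different neighborhood, as noted in the comment. Population size, connectivity, and step cap are illustrative assumptions.

```python
import random

def voter_model(n=20, k=2, max_steps=500_000, seed=0):
    """Classical voter model on a ring: a random agent copies the opinion of
    a random neighbour within distance k. Returns (winning state, steps to
    consensus). The paper's second-neighbour variant would replace the
    offsets below with {-2k..-(k+1)} and {k+1..2k} (mod n)."""
    rng = random.Random(seed)
    state = [i % 2 for i in range(n)]          # equally sized sub-populations
    offsets = [d for d in range(-k, k + 1) if d != 0]
    for step in range(1, max_steps + 1):
        i = rng.randrange(n)
        j = (i + rng.choice(offsets)) % n
        state[i] = state[j]                    # copy a direct neighbour
        if len(set(state)) == 1:               # consensus reached
            return state[0], step
    return None, max_steps

winner, steps = voter_model()
```

Averaging the returned `steps` over many seeds gives the convergence-time curves the abstract discusses; the opinion-dependent neighborhoods are the paper's contribution, not reproduced here.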
34 pages, 3112 KB  
Article
Artificial Intelligence Applied to Soil Compaction Control for the Light Dynamic Penetrometer Method
by Jorge Rojas-Vivanco, José García, Gabriel Villavicencio, Miguel Benz, Antonio Herrera, Pierre Breul, German Varas, Paola Moraga, Jose Gornall and Hernan Pinto
Mathematics 2025, 13(21), 3359; https://doi.org/10.3390/math13213359 - 22 Oct 2025
Abstract
Compaction quality control in earthworks and pavements still relies mainly on density-based acceptance referenced to laboratory Proctor tests, which are costly, time-consuming, and spatially sparse. The lightweight dynamic cone penetrometer (LDCP) provides rapid indices, such as qd0 and qd1, yet acceptance thresholds commonly depend on ad hoc, site-specific calibrations. This study develops and validates a supervised machine learning framework that estimates qd0, qd1, and Zc directly from readily available soil descriptors (gradation, plasticity/activity, moisture/state variables, and GTR class) using a multi-campaign dataset of n = 360 observations. While the framework does not remove the need for the standard soil characterization performed during design (e.g., W, γd,field, and RCSPC), it reduces reliance on additional LDCP calibration campaigns to obtain device-specific reference curves. Models compared under a unified pipeline include regularized linear baselines, support vector regression, Random Forest, XGBoost, and a compact multilayer perceptron (MLP). The evaluation used a fixed 80/20 train–test split with 5-fold cross-validation on the training set and multiple error metrics (R², RMSE, MAE, and MAPE). Interpretability combined SHAP with permutation importance, 1D partial dependence (PDP), and accumulated local effects (ALE); calibration diagnostics and split-conformal prediction intervals connected the predictions to QA/QC decisions. A naïve GTR-average baseline was added for reference. Computation was lightweight. On the test set, the MLP attained the best accuracy for qd1 (R² = 0.794, RMSE = 5.866), with XGBoost close behind (R² = 0.773, RMSE = 6.155). Paired bootstrap contrasts with Holm correction indicated that the MLP–XGBoost difference was not statistically significant. Explanations consistently highlighted density- and moisture-related variables (γd,field, RCSPC, and W) as dominant, with gradation/plasticity contributing second-order adjustments; these attributions are model-based and associational rather than causal. The results support interpretable, computationally efficient surrogates of LDCP indices that can complement density-based acceptance and enable risk-aware QA/QC via conformal prediction intervals.
(This article belongs to the Special Issue Artificial Intelligence and Data Science, 2nd Edition)
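The split-conformal prediction intervals this abstract mentions are a generic recipe, sketched below on synthetic data. The features, the OLS point predictor, and the sample sizes are assumptions for illustration only, not the paper's pipeline: the method wraps any regressor, using held-out calibration residuals to set an interval half-width with finite-sample coverage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for the soil descriptors / LDCP index.
X = rng.normal(size=(400, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=400)

# Three-way split: proper training set, calibration set, test set.
X_tr, y_tr = X[:200], y[:200]
X_cal, y_cal = X[200:300], y[200:300]
X_test, y_test = X[300:], y[300:]

# Any point predictor works; ordinary least squares keeps the sketch simple.
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
resid = np.abs(y_cal - X_cal @ beta)           # calibration residuals

# Split-conformal quantile: the ceil((n+1)(1-alpha))-th smallest residual.
alpha = 0.1
n = len(resid)
q = np.sort(resid)[int(np.ceil((n + 1) * (1 - alpha))) - 1]

lower = X_test @ beta - q
upper = X_test @ beta + q
coverage = float(np.mean((y_test >= lower) & (y_test <= upper)))
```

The guarantee is marginal coverage of at least 1 − α regardless of the model, which is what makes the intervals usable for risk-aware QA/QC acceptance decisions.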
18 pages, 329 KB  
Article
Irregular Bundles on Hopf Surfaces
by Edoardo Ballico and Elizabeth Gasparim
Mathematics 2025, 13(20), 3356; https://doi.org/10.3390/math13203356 - 21 Oct 2025
Abstract
We discuss the complex-analytic subsets of the moduli spaces of rank 2 vector bundles on a classical Hopf surface formed by irregular bundles. We stratify the set of irregular bundles by weight (and irregular profiles). We provide the topological result (vanishing of higher cohomology groups) on the part of the moduli spaces parameterizing regular bundles.
16 pages, 4901 KB  
Article
Diagnostic Performance of CBCT in Detecting Different Types of Root Fractures with Various Intracanal Post Systems
by Serhat Efeoglu, Ecem Ozgur, Aysenur Oncu, Ahmet Tohumcu, Rana Nalcaci and Berkan Celikten
Tomography 2025, 11(10), 116; https://doi.org/10.3390/tomography11100116 - 21 Oct 2025
Abstract
Objective: This study aimed to evaluate the diagnostic accuracy of two cone beam computed tomography (CBCT) devices using 18 imaging modalities in detecting root fractures—vertical, horizontal, and oblique—in teeth with intracanal post systems. Materials and methods: Ninety-six extracted single-rooted mandibular premolars were endodontically treated and restored with Bundle, Reforpost, or Splendor Single Adjustable posts. Controlled fractures of different types were induced using a universal testing machine. Each tooth was scanned with NewTom 7G and NewTom Go (Quantitative Radiology, Verona, Italy) under nine imaging protocols per device, varying in dose and voxel size, yielding 1728 CBCT images. Three observers (a professor of endodontics, a specialist, and a postgraduate student in endodontics) independently evaluated the images. Results: Observers demonstrated almost perfect agreement (κ ≥ 0.81) with the gold standard in fracture detection using NewTom 7G. No significant differences were found in sensitivity, specificity, or accuracy across voxel size and dose parameters for either device in detecting fracture presence (p > 0.05). Similarly, both devices displayed comparable performance in identifying horizontal and oblique fractures (p > 0.05). However, with NewTom Go, regular and low doses with different voxel sizes showed reduced sensitivity and accuracy in detecting vertical fractures across all post systems (p ≤ 0.05). Conclusions: NewTom 7G, with its advanced detector system and smaller voxel sizes, provides superior diagnostic accuracy for root fractures. In contrast, NewTom Go displays reduced sensitivity for vertical fractures at lower settings. Clinical relevance: CBCT device selection and imaging protocols significantly affect the diagnosis of vertical root fractures.
33 pages, 525 KB  
Article
Limit Theorem for Kernel Estimate of the Conditional Hazard Function with Weakly Dependent Functional Data
by Abderrahmane Belguerna, Abdelkader Rassoul, Hamza Daoudi, Zouaoui Chikr Elmezouar and Fatimah Alshahrani
Symmetry 2025, 17(10), 1777; https://doi.org/10.3390/sym17101777 - 21 Oct 2025
Abstract
This paper examines the asymptotic behavior of the conditional hazard function using kernel-based methods, with particular emphasis on functional weakly dependent data. In particular, we establish the asymptotic normality of the proposed estimator when the covariate follows a functional quasi-associated process. This contribution extends the scope of nonparametric inference under weak dependence within the framework of functional data analysis. The estimator is constructed through kernel smoothing techniques inspired by the classical Nadaraya–Watson approach, and its theoretical properties are rigorously derived under appropriate regularity conditions. To evaluate its practical performance, we carried out an extensive simulation study, where finite-sample outcomes were compared with their asymptotic counterparts. The results showed the robustness and reliability of the estimator across a range of scenarios, thereby confirming the validity of the proposed limit theorem in empirical settings.
(This article belongs to the Section Mathematics)
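A Nadaraya–Watson-type conditional hazard estimate of the kind this abstract builds on can be sketched for a scalar covariate (the paper's functional, weakly dependent setting is not reproduced here; the data-generating process, bandwidths, and kernels below are illustrative assumptions). The hazard is estimated as the ratio of a kernel conditional density to the kernel conditional survival function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar stand-in for the covariate: Y | X = x ~ Exp(rate = 1 + x),
# whose true conditional hazard is constant in y: h(y | x) = 1 + x.
n = 5_000
X = rng.uniform(0, 1, n)
Y = rng.exponential(1.0 / (1.0 + X))

def gauss(u):
    """Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def hazard_nw(x, y, hx=0.1, hy=0.1):
    """Nadaraya-Watson-type estimate h(y|x) = f(y|x) / (1 - F(y|x))."""
    w = gauss((x - X) / hx)                               # covariate weights
    f = np.sum(w * gauss((y - Y) / hy)) / (hy * np.sum(w))  # cond. density
    F = np.sum(w * (Y <= y)) / np.sum(w)                    # cond. cdf
    return f / (1.0 - F)

est = hazard_nw(x=0.5, y=0.3)     # true value here: 1 + 0.5 = 1.5
```

The limit theorem in the paper concerns the asymptotic normality of exactly this kind of ratio estimator, under quasi-association rather than the i.i.d. sampling assumed above.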
26 pages, 2625 KB  
Article
De Novo Single-Cell Biological Analysis of Drug Resistance in Human Melanoma Through a Novel Deep Learning-Powered Approach
by Sumaya Alghamdi, Turki Turki and Y-h. Taguchi
Mathematics 2025, 13(20), 3334; https://doi.org/10.3390/math13203334 - 20 Oct 2025
Abstract
Elucidating drug response mechanisms in human melanoma is crucial for improving treatment outcomes. Moreover, the existing tools intended to provide deeper insight into melanoma drug resistance have notable limitations. Therefore, we propose a deep learning (DL)-based approach that works as follows. First, we processed two single-cell datasets related to human melanoma from the GEO (GSE108383_A375 and GSE108383_451Lu) database and trained a fully connected neural network with five adapted methods (L1-Regularization, DeepLIFT, SHAP, IG, and LRP). We then identified 100 genes by ranking all genes from the highest to the lowest based on the sum of absolute values for corresponding weights across all neurons in the first hidden layer. From a biological perspective, compared to existing bioinformatics tools, the presented DL-based methods identified a higher number of expressed genes in four well-established melanoma cell lines: MALME-3M, MDA-MB435, SK-MEL-28, and SK-MEL-5. Furthermore, we identified FDA-approved melanoma drugs (e.g., Vemurafenib and Dabrafenib), critical genes such as ARAF, SOX10, DCT, and AXL, and key TFs including MITF and TFAP2A. From a classification perspective, we utilized five-fold cross-validation and provided gene sets using all the abovementioned methods to three randomly selected machine learning algorithms, namely, support vector machines, random forests, and neural networks with different hyperparameters. The results demonstrate that the integrated gradients (IG) method adapted in our DL approach contributed to 2.2% and 0.5% overall performance improvements over the best-performing baselines when using A375 and 451Lu cell line datasets. Additional comparison against no gene selection demonstrated that IG is the only method to generate statistically significant results, with 14.4% and 11.7% overall performance improvements.
(This article belongs to the Special Issue Computational Intelligence for Bioinformatics)
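The integrated gradients (IG) method highlighted in this abstract has a compact definition: attribute feature i the quantity (x_i − x'_i) times the path-averaged gradient from baseline x' to input x. The sketch below applies it to a toy differentiable model (the model, weights, and baseline are assumptions, not the paper's network) and checks the completeness axiom, which states that the attributions sum to f(x) − f(x').

```python
import numpy as np

# Toy differentiable model f(x) = tanh(w . x) with an analytic gradient.
w = np.array([0.7, -1.2, 0.4])
f = lambda x: np.tanh(w @ x)
grad = lambda x: (1 - np.tanh(w @ x) ** 2) * w

def integrated_gradients(x, baseline, steps=2_000):
    """IG_i = (x_i - x'_i) * average gradient along the straight-line path
    from baseline to x, approximated with a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    avg_grad = np.mean([grad(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

x, base = np.array([1.0, 0.5, -0.3]), np.zeros(3)
ig = integrated_gradients(x, base)   # completeness: ig.sum() ~ f(x) - f(base)
```

In the paper's setting the same computation runs over a trained network's input gradients, and genes are ranked by attribution magnitude.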
29 pages, 3003 KB  
Review
Efficient and Secure GANs: A Survey on Privacy-Preserving and Resource-Aware Models
by Niovi Efthymia Apostolou, Elpida Vasiliki Balourdou, Maria Mouratidou, Eleni Tsalera, Ioannis Voyiatzis, Andreas Papadakis and Maria Samarakou
Appl. Sci. 2025, 15(20), 11207; https://doi.org/10.3390/app152011207 - 19 Oct 2025
Abstract
Generative Adversarial Networks (GANs) generate synthetic content to support applications such as data augmentation, image-to-image translation, and model training where data availability is limited. Nevertheless, their broader deployment is constrained by limitations in data availability, high computational and energy demands, and privacy and security concerns. These factors restrict their scalability and integration in real-world applications. This survey provides a systematic review of research aimed at addressing these challenges. Techniques such as few-shot learning, consistency regularization, and advanced data augmentation are examined to address data scarcity. Approaches designed to reduce computational and energy costs, including hardware-based acceleration and model optimization, are also considered. In addition, strategies to improve privacy and security, such as privacy-preserving GAN architectures and defense mechanisms against adversarial attacks, are analyzed. By organizing the literature into these thematic categories, the review highlights available solutions, their trade-offs, and remaining open issues. Our findings underline the growing role of GANs in artificial intelligence, while also emphasizing the importance of efficient, sustainable, and secure designs. This work not only consolidates current knowledge but also lays the groundwork for future research.
(This article belongs to the Special Issue Big Data Analytics and Deep Learning for Predictive Maintenance)
18 pages, 1645 KB  
Article
An In-Hospital Mortality Prediction Model for Acute Pesticide Poisoning in the Emergency Department
by Yoonseo Jeon, Da-Eun Kim, Inyong Jeong, Se-Jin Ahn, Nam-Jun Cho, Hyo-Wook Gil and Hwamin Lee
Toxics 2025, 13(10), 893; https://doi.org/10.3390/toxics13100893 - 18 Oct 2025
Abstract
Pesticide poisoning remains a significant public health issue, characterized by high morbidity and mortality, particularly among patients presenting to the emergency department. This study aimed to develop a 14-day in-hospital mortality prediction model for patients with acute pesticide poisoning using early clinical and laboratory data. This retrospective cohort study included 1056 patients who visited Soonchunhyang University Cheonan Hospital between January 2015 and December 2020. The cohort was randomly divided into train (n = 739) and test (n = 317) sets using stratification by pesticide type and outcome. Candidate predictors were selected based on univariate Cox regression, LASSO regularization, random forest feature importance, and clinical relevance derived from established prognostic scoring systems. Logistic regression models were constructed using six distinct feature sets. The best-performing model combined LASSO-selected and clinically curated features (AUC 0.926 [0.890–0.957]), while the final model—selected for interpretability—used only LASSO-selected features (AUC 0.923 [0.884–0.955]; balanced accuracy 0.835; sensitivity 0.843; specificity 0.857; F1.5 score 0.714 at threshold 0.450). SHapley Additive exPlanations (SHAP) analysis identified paraquat ingestion, Glasgow Coma Scale, bicarbonate level, base excess, and alcohol history as major mortality predictors. The proposed model outperformed the APACHE II score (AUC 0.835 [0.781–0.888]) and may serve as a valuable tool for early risk stratification and clinical decision making in pesticide-poisoned patients.
(This article belongs to the Special Issue Hazardous Effects of Pesticides on Human Health—2nd Edition)
15 pages, 653 KB  
Article
Basic Vaidya White Hole Evaporation Process
by Qingyao Zhang
Symmetry 2025, 17(10), 1762; https://doi.org/10.3390/sym17101762 - 18 Oct 2025
Abstract
We develop a self-consistent double-null description of an evaporating white-hole spacetime by embedding the outgoing Vaidya solution in a coordinate system that remains regular across the future horizon. Starting from the radiation-coordinate form, we specialize to retarded time so that a monotonically decreasing mass function M(u) encodes outgoing positive-energy flux. Expressing the metric in null coordinates (u, v), Einstein’s equations for a single-directional null-dust stress–energy tensor, T_{uu} = ρ(u), then reduce to one first-order PDE for the areal radius: ∂_v r = B(u)[1 − 2M(u)/r]. Its integral, r + 2M(u) ln|r − 2M(u)| = v − C(u), defines an implicit foliation of outgoing null cones. The metric coefficient follows algebraically as f(u, v) = 1 − 2M(u)/r. Residual gauge freedom in B(u) and C(u) is fixed so that u matches the Bondi retarded time at null infinity, while v remains analytic at the apparent horizon, generalizing the Kruskal prescription to dynamical mass loss. In the limit M(u) → M, the construction reduces to the familiar Eddington–Finkelstein and Kruskal forms. Our solution therefore provides a compact analytic framework for studying white-hole evaporation, Hawking-like energy fluxes, and back-reaction in spherically symmetric settings without encountering coordinate singularities.
(This article belongs to the Special Issue Advances in Black Holes, Symmetry and Chaos)
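The implicit foliation r + 2M ln|r − 2M| = v − C and the radial equation ∂_v r = B(u)[1 − 2M(u)/r] can be cross-checked numerically in the constant-mass gauge B = 1, C = 0 (a sketch with stdlib tools only; M and the sample point v are arbitrary choices): solve the implicit relation for r(v) by bisection and verify that its numerical derivative matches 1 − 2M/r.

```python
from math import log

M = 1.0  # constant mass: the static limit M(u) -> M of the abstract

def implicit(r, v):
    """Residual of r + 2M ln(r - 2M) - v, valid for r > 2M."""
    return r + 2 * M * log(r - 2 * M) - v

def r_of_v(v, lo=2 * M + 1e-9, hi=1e6):
    """Invert the implicit relation by bisection (residual increases in r)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if implicit(mid, v) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v0, h = 10.0, 1e-5
r0 = r_of_v(v0)
drdv = (r_of_v(v0 + h) - r_of_v(v0 - h)) / (2 * h)   # numerical dr/dv
expected = 1 - 2 * M / r0                            # f(u, v) = 1 - 2M/r
```

The derivative agreeing with 1 − 2M/r is the statement that the foliation is built from outgoing null cones of the (static-limit) metric; the paper's dynamical case lets M depend on u.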
17 pages, 1010 KB  
Article
A Prolog-Based Expert System with Application to University Course Scheduling
by Wan-Yu Lin and Che-Chern Lin
Electronics 2025, 14(20), 4093; https://doi.org/10.3390/electronics14204093 - 18 Oct 2025
Abstract
University course scheduling is a kind of timetabling problem and can be mathematically formulated as an integer linear programming problem. Essentially, a university course scheduling problem is an optimization problem that aims to minimize a cost function subject to a set of constraints. The huge search space of the course scheduling problem means a long time is needed to find the optimal solution. Therefore, some studies have used soft computing approaches to solve course scheduling problems in order to reduce the search space. However, using soft computing approaches for university course scheduling may require designing algorithms and conducting numerous experiments to achieve maximum efficiency. Thus, in this study, instead of employing soft computing methods, we propose an SWI-PROLOG-based expert system to solve the course scheduling problem. An experiment was conducted using real-world data from a department at a national university in southern Taiwan. During the experiment, each teacher in the department chose five preferred time slots. The experimental results show that about 99% of courses were scheduled within the teachers' five preferred time slots, with an acceptable SWI-PROLOG execution time of 127 milliseconds on a regular personal computer. This study thus provides a framework for solving course scheduling problems with an expert system, which is its main contribution.
(This article belongs to the Section Artificial Intelligence)
13 pages, 272 KB  
Article
On the Eigenvalue Spectrum of Cayley Graphs: Connections to Group Structure and Expander Properties
by Mohamed A. Abd Elgawad, Junaid Nisar, Salem A. Alyami, Mdi Begum Jeelani and Qasem Al-Mdallal
Mathematics 2025, 13(20), 3298; https://doi.org/10.3390/math13203298 - 16 Oct 2025
Abstract
Cayley graphs sit at the intersection of algebra, geometry, and theoretical computer science. Their spectra encode fine structural information about both the underlying group and the graph itself. Building on classical work of Alon–Milman, Dodziuk, Margulis, Lubotzky–Phillips–Sarnak, and many others, we develop a unified representation-theoretic framework that yields several new results. We establish a monotonicity principle showing that the algebraic connectivity never decreases when generators are added. We provide closed-form spectra for canonical 3-regular dihedral Cayley graphs, with exact spectral gaps. We prove a quantitative obstruction demonstrating that bounded-degree Cayley graphs of groups with growing abelian quotients cannot form expander families. In addition, we present two universal comparison theorems: one for quotients and one for direct products of groups. We also derive explicit eigenvalue formulas for class-sum-generating sets together with a Hoffman-type second-moment bound for all Cayley graphs. We also establish an exact relation between the Laplacian spectra of a Cayley graph and its complement, giving a closed-form expression for the complementary spectral gap. These results give new tools for deciding when a given family of Cayley graphs can or cannot expand, sharpening and extending several classical criteria.
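The representation-theoretic machinery behind results like these is concrete for abelian groups: the characters of Z_n diagonalize the adjacency matrix of any Cayley graph Cay(Z_n, S), so the eigenvalues are λ_j = Σ_{s∈S} e^{2πijs/n} (real when S = −S). A small numerical check (the particular n and generating set are arbitrary choices, not from the paper):

```python
import numpy as np

# Cayley graph of the cyclic group Z_12 with symmetric generating set S.
n, S = 12, [1, -1, 3, -3]

A = np.zeros((n, n))
for i in range(n):
    for s in S:
        A[i, (i + s) % n] = 1          # edge i -> i + s (mod n)

# Eigenvalues two ways: numerically, and via the character formula
# lambda_j = sum over s in S of cos(2*pi*j*s/n)  (real since S = -S).
eig_numeric = np.sort(np.linalg.eigvalsh(A))
eig_chars = np.sort([
    sum(np.cos(2 * np.pi * j * s / n) for s in S) for j in range(n)
])

gap = eig_numeric[-1] - eig_numeric[-2]   # spectral gap below the degree |S|
```

The same diagonalization is what drives the paper's obstruction: large abelian quotients force eigenvalues of this explicit form close to the degree, killing any uniform spectral gap and hence expansion.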
8 pages, 207 KB  
Communication
Oculocutaneous Albinism in Northern Madagascar: Clinical Burden, Social Stigma, and Impact of a Community-Based Photoprotection Program
by Rebecca Donadoni, Andrea Michelerio and Valeria Brazzelli
Cosmetics 2025, 12(5), 229; https://doi.org/10.3390/cosmetics12050229 - 15 Oct 2025
Abstract
Oculocutaneous albinism (OCA) increases susceptibility to ultraviolet (UV) skin damage, skin cancer risk, and psychosocial burden. Data from Madagascar are lacking. We conducted a six-month pilot study (July–December 2024) in northern Madagascar (DIANA and SAVA regions). Forty-one individuals with OCA were enrolled. Baseline socio-demographic, clinical, and behavioral data were collected through interviews and dermatological examinations. A structured program provided education, culturally adapted materials, and photoprotective resources, with monthly follow-up visits. The cohort included 22 males and 19 females, with a mean age of 18 years (range: 1 month–35 years). Actinic keratoses were present in 61% of participants, and invasive skin cancer in 4.9%. All patients had photophobia and nystagmus. Social discrimination was reported by 65.9%, with 12.2% describing severe abuse. Baseline photoprotection was inadequate: 43.9% reported no protective practices, 7.3% used sunscreen, and 19.5% avoided midday sun. Follow-up was completed by 20/41 patients (48.8%). Among completers, paired analysis showed a decrease in sunburn prevalence from 95.0% to 10.0% (p < 0.0001), an increase in regular sunscreen use from 0.0% to 100.0% (p < 0.0001), use of protective clothing from 35.0% to 80.0% (p = 0.0039), and adoption of behavioral strategies from 15.0% to 50.0% (p = 0.0156). This first study on OCA in northern Madagascar demonstrates a high burden of UV-related dermatoses and stigma. A low-cost community intervention significantly improved photoprotection. Wider implementation could reduce morbidity and enhance quality of life in resource-limited settings.
(This article belongs to the Section Cosmetic Dermatology)