Search Results (500)

Search Parameters:
Keywords = generalized information criterion

14 pages, 1162 KB  
Article
A Teamwork Science Approach to Trust Dynamics in Hybrid Product Development Teams: Modeling Non-Verbal Interactions Through Bayesian Networks
by Tsuyoshi Aburai
Adm. Sci. 2026, 16(5), 208; https://doi.org/10.3390/admsci16050208 - 29 Apr 2026
Abstract
Motivation: In modern organizations where remote and hybrid work has become normalized, fostering trust without frequent face-to-face interaction is a critical management challenge. This study aims to explore how non-verbal digital dynamics associate with trust formation within hybrid product development teams from a teamwork science perspective, integrating Big Five traits and established trust scales. Methods: The empirical study observed twelve product development teams (N = 40) participating in a major innovation competition over an eight-month period. Dynamic behavioral data, including speaking time, nodding, smiling, and silence, were extracted from online workshop recordings using synchronized behavioral coding validated by high inter-rater reliability (Cohen’s kappa ≥ 0.78). These were integrated with Big Five personality traits, mutual trust scales, and idea value metrics into a Bayesian Network (BN) to model probabilistic dependencies. The structural model was validated using the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to ensure predictive robustness. Furthermore, we performed sensitivity analysis on the BN to quantify how specific shifts in non-verbal cues—particularly nodding and the functional categories of silence—disproportionately affect the “Mutual Trust” node. While this exploratory study utilizes a sample of “digital native” student teams, it provides a critical baseline for “high digital fluency” collaboration, which we contextualize against the “asymmetric cues” found in multi-generational corporate environments. Results: Sensitivity analysis identified specific probabilistic associations suggesting that effective role fulfillment is the strongest predictor of idea originality. Crucially, nodding was identified as a behavioral ‘digital reward’ that enhances psychological safety, facilitating divergent thinking. 
Smiling showed a strong association with feasibility and consensus-building during convergent phases. The model further identifies distinct behavioral ‘fingerprints’: high-trust sequences are characterized by frequent non-verbal backchanneling and deliberate “thinking silences,” whereas low-trust sequences exhibit a disproportionate increase in unproductive lapses (e.g., a 10% increase in lapses correlating with an 18% decrease in trust probability). Furthermore, a probabilistic pathway was identified where teams with highly open members and frequent non-verbal validation exhibit higher mutual support behaviors. Conclusions: This research offers empirical insights into how trust can be modeled in hybrid environments through specific combinations of behavioral and personality traits. Practically, this study proposes “Hybrid Team Protocols”—such as intentional backchanneling and the normalization of deliberative silence—as actionable Organizational Development (OD) interventions. These provide managers with data-driven guidelines to visualize and monitor the quality of digital collaboration while emphasizing the ethical necessity of transparent implementation to prevent “digital performance” and ensure psychological safety across diverse organizational structures. Full article
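The sensitivity analysis this abstract describes can be illustrated with a minimal two-node Bayesian network (Nodding → Mutual Trust). This is a hypothetical sketch: the conditional probabilities below are invented for illustration and are not values reported by the study.

```python
# Illustrative two-node Bayesian network: Nodding -> MutualTrust.
# All probabilities are hypothetical, not figures from the paper.

def trust_marginal(p_nod, p_trust_given_nod=0.8, p_trust_given_no_nod=0.4):
    """P(trust) = sum over nodding states of P(trust | nod) * P(nod)."""
    return p_nod * p_trust_given_nod + (1 - p_nod) * p_trust_given_no_nod

baseline = trust_marginal(0.5)           # 0.5*0.8 + 0.5*0.4 = 0.60
shifted = trust_marginal(0.6)            # 0.6*0.8 + 0.4*0.4 = 0.64
sensitivity = (shifted - baseline) / 0.1 # change in P(trust) per unit shift in P(nod)
print(baseline, shifted, round(sensitivity, 2))
```

A full BN sensitivity analysis repeats this kind of perturbation for every parent node and conditional-probability entry; the toy model only shows the mechanics.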

11 pages, 609 KB  
Article
Using Natural Language and Health Ontologies in Hope Recommender System: Evaluation of Use in Medicine
by Hans Eguia, Carlos Sánchez-Bocanegra, Carlos Fernandez Llatas, Fernando Alvarez López and Francesc Saigí-Rubió
Appl. Syst. Innov. 2026, 9(5), 86; https://doi.org/10.3390/asi9050086 - 27 Apr 2026
Abstract
Objectives: Despite the widespread availability of digital clinical information, timely access to relevant biomedical evidence during routine consultations remains limited in practice. Primary care clinicians, in particular, face significant time constraints that make it difficult to integrate comprehensive literature searches into everyday workflows. This study evaluates whether an ontology-based recommender system can support routine clinical workflows by reducing information retrieval time while preserving the clinically acceptable usefulness of retrieved evidence. We assessed the performance of the HOPE (Health Operation for Personalised Evidence) system compared with realistic manual PubMed searches conducted by physicians. Materials and Methods: We conducted an observational evaluation involving 50 primary care physicians, who independently assessed 30 anonymised, rewritten clinical cases representative of common primary care scenarios. HOPE automatically extracted biomedical concepts from case descriptions using natural language processing and mapped them to Unified Medical Language System (UMLS) ontologies to generate ranked PubMed recommendations. A subset of 10 physicians also conducted manual PubMed searches in line with their usual clinical practice. Article relevance was assessed using a predefined binary criterion, and a reference relevance set was established by consensus among three senior physicians using a pooled document set. Retrieval performance was evaluated using Precision@k, relative Recall@k, and Normalised Discounted Cumulative Gain (NDCG@k). Manual search time was measured using a standardised stopwatch protocol, whereas HOPE response time was logged automatically by the system. Results: Inter-physician agreement in relevance assessment was substantial (Fleiss’ κ = 0.66; 95% CI: 0.61–0.70). 
HOPE achieved moderate-to-high precision within the top-ranked results (Precision@3 = 0.72), with relative recall increasing as additional documents were considered. Ranking metrics indicated that relevant articles were generally positioned early in the result lists. The mean total retrieval time for manual PubMed searches was 13.3 ± 1.7 min per case, compared with 17.4 ± 2.1 s for HOPE-assisted retrieval (p < 0.001). Conclusions: In a controlled, workflow-oriented evaluation using synthetic clinical cases, HOPE substantially reduced information retrieval time while maintaining clinically acceptable relevance in the retrieved literature. These findings support the use of ontology-based, AI-assisted systems as workflow-support tools to facilitate timely access to biomedical evidence, without replacing clinical judgment. Full article
(This article belongs to the Special Issue AI-Enhanced Decision Support Systems)
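The retrieval metrics reported in the abstract above (Precision@k and NDCG@k) can be computed in a few lines; the sketch below assumes binary relevance and uses hypothetical document IDs, not the study's data.

```python
import math

def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k ranked documents that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def ndcg_at_k(relevant, ranked, k):
    """Binary-relevance NDCG@k: DCG of the ranking over DCG of an ideal ranking."""
    dcg = sum(1 / math.log2(i + 2) for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg else 0.0

relevant = {"d1", "d3", "d4"}           # hypothetical relevance judgments
ranked = ["d1", "d2", "d3", "d5", "d4"]  # hypothetical system output
print(round(precision_at_k(relevant, ranked, 3), 4))  # 2 of top 3 relevant -> 0.6667
```

NDCG rewards placing relevant documents early, which is why the abstract reports it alongside Precision@k.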
23 pages, 1052 KB  
Article
Technology Analysis of Extended Reality Using Machine Learning and Statistical Models
by Sunghae Jun
Virtual Worlds 2026, 5(2), 19; https://doi.org/10.3390/virtualworlds5020019 - 20 Apr 2026
Abstract
Extended reality (XR), encompassing augmented reality (AR), virtual reality (VR), and mixed reality (MR), is a key enabling technology for virtual worlds, and XR-related patents continue to grow rapidly. However, patent-based XR technology analysis faces a fundamental challenge: document–keyword matrices (DKMs) built from patent titles and abstracts are typically high-dimensional, sparse, and often exhibit excess zeros, which can distort inference when conventional text mining pipelines are applied without a generative count perspective. In this study, we propose a statistically grounded XR technology analysis framework that combines likelihood-based count modeling with interpretable structure mining to map XR sub-technologies from a patent DKM. Using an XR patent–keyword matrix, we fit Poisson regression (PR), negative binomial regression (NBR), and zero-inflated negative binomial regression (ZINBR) models via maximum likelihood estimation (MLE), controlling for document-length effects. Model selection by Akaike information criterion (AIC) consistently favored NBR for both target keywords, indicating substantial overdispersion in XR patent counts. We interpret exponentiated coefficients as incidence rate ratios (IRRs) and construct a technology relatedness network from significant IRR edges, revealing a dual-axis XR structure: “reality” is anchored in an AR/VR experience-and-content axis (e.g., “virtual”, “augment”), whereas “extend” is embedded in a structure-and-integration axis (e.g., “surface”, “edge”, “layer”, and connectivity-related terms). To demonstrate applicability to real domains, we retrieved XR patent documents and analyzed them with the proposed framework. Full article
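The AIC-based choice between Poisson and negative binomial (NB) count models that the abstract reports can be sketched as follows. This is a simplified illustration on synthetic overdispersed counts, using method-of-moments NB estimates rather than the authors' MLE regression fits.

```python
import numpy as np
from scipy import stats

# Synthetic overdispersed counts (variance >> mean), standing in for keyword counts.
rng = np.random.default_rng(42)
data = rng.negative_binomial(2, 0.2, size=500)

# Poisson: the MLE of the rate is the sample mean (1 free parameter).
lam = data.mean()
ll_pois = stats.poisson.logpmf(data, lam).sum()
aic_pois = 2 * 1 - 2 * ll_pois

# NB: moment-matched n, p (2 free parameters); valid here because var > mean.
m, v = data.mean(), data.var(ddof=1)
p = m / v
n = m * p / (1 - p)
ll_nb = stats.nbinom.logpmf(data, n, p).sum()
aic_nb = 2 * 2 - 2 * ll_nb

print(aic_nb < aic_pois)  # lower AIC = preferred model -> True
```

Overdispersion makes the Poisson likelihood collapse, so the NB's extra parameter is easily worth its AIC penalty, mirroring the abstract's finding that NBR was consistently favored.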

23 pages, 877 KB  
Article
Statistical Analysis of NO2 Emissions from Eskom’s Majuba Coal-Fired Power Station in Mpumalanga, South Africa
by Mpendulo Wiseman Mamba and Delson Chikobvu
Atmosphere 2026, 17(4), 415; https://doi.org/10.3390/atmos17040415 - 19 Apr 2026
Abstract
Gaseous emissions from coal combustion during electricity generation continue to be a challenge in South Africa. To meet the regulatory limits, it is crucial to understand the statistical distribution of such emissions from the power generating plants. The current paper characterises the nitrogen dioxide (NO2) emissions from Eskom’s Majuba coal-fired power station by making use of the quantile–quantile (QQ) plots and derivative plots of three statistical parent distributions, namely, the Weibull, Lognormal, and Pareto distributions. These distributions are fitted and compared according to their tail heaviness as they cater for data that may have tails lighter or heavier than that of the Exponential distribution. Of the three distributions evaluated here, the Lognormal gave the best fit for the full body of the data according to the QQ and derivative plots, and the goodness-of-fit tools (bootstrap Kolmogorov–Smirnov (KS), Anderson–Darling (AD), Akaike Information Criterion (AIC), Schwarz’s Bayesian Information Criterion (BIC), and the BIC-corrected Vuong test for non-nested distributions). The Lognormal distribution also gave the best fit for the overall upper tail, while at the very top six largest NO2 emission observations in the upper tail, a Pareto-type tail was observed. The practical implication of a heavy tail like the Pareto is that it models more frequent larger sized NO2 emissions compared to lighter tails like the Weibull and Lognormal tails. The methods used in this study give a framework on how emissions of NO2 from a coal-fired power station can be modelled using statistical parent distributions whilst also taking into account the distribution of the data in the tails which is mostly ignored when fitting statistical parent distributions. Understanding the distribution of the upper tail is very important since higher and rare emissions are of the most concern and are dangerous to human health and the environment. Full article
(This article belongs to the Special Issue Modeling and Monitoring of Air Quality: From Data to Predictions)
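The fit-and-compare workflow in the abstract above (fit several candidate parent distributions, then rank them by goodness of fit) can be sketched with SciPy. The data here are synthetic lognormal values standing in for emission measurements; the real study also used KS/AD tests and the Vuong test, which are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=3.0, sigma=0.5, size=1000)  # synthetic "emissions"

candidates = {
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "pareto": stats.pareto,
}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)   # MLE; location fixed at 0 for stability
    k = len(params) - 1               # floc=0 is not an estimated parameter
    ll = dist.logpdf(data, *params).sum()
    aic[name] = 2 * k - 2 * ll

best = min(aic, key=aic.get)
print(best)
```

On lognormal data the lognormal fit wins, as it did for the full body of the NO2 series in the study; tail behavior can still differ, which is why the authors examined the upper tail separately.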

40 pages, 6612 KB  
Article
A Method for Selecting Key Flight Parameters of Aircraft Based on Dual-Domain Rough Set and Three-Branch Decision
by Shengkai Yan, Qiang Wang, Jiayang Yu, Jiajin Li, Qiuhan Liu and Gaocheng Chen
Aerospace 2026, 13(4), 382; https://doi.org/10.3390/aerospace13040382 - 17 Apr 2026
Abstract
The precise selection of key flight parameters is fundamental to enhancing aircraft condition monitoring and risk warning capabilities. However, existing methods typically rely on a single source of information, i.e., either solely expert judgments or solely objective flight data, and lack effective mechanisms to reconcile conflicts between subjective opinions and objective data characteristics, which limits their applicability in complex aviation safety scenarios. To address this issue, a flight parameter selection method based on dual-domain rough sets and three-way decision theory is proposed in this paper. First, regret theory is introduced to quantify experts’ psychological preferences, and a subjective evaluation model integrating both psychological and absolute agreement is constructed. Second, a subjective–objective conflict information system is established within a dual-domain framework. Based on this system, bidirectional decision rules are designed to simultaneously consider positive-domain and negative-domain conditional probabilities, through which candidate sets of key flight parameters are generated. Finally, a new Bayesian minimum loss criterion is designed to determine the optimal parameter set. Experimental results demonstrate that the accuracy and robustness of flight parameter selection are improved by the proposed method while interpretability is maintained, offering reliable decision support for aviation safety analysis. Full article

33 pages, 5648 KB  
Article
Extreme Daily Rainfall Assessment in Arid Environments Through Statistical Modeling
by Ali Aldrees and Abubakr Taha Bakheit Taha
Atmosphere 2026, 17(4), 402; https://doi.org/10.3390/atmos17040402 - 16 Apr 2026
Abstract
Rainfall is a significant input for several engineering designs, such as hydraulic structures, culverts, bridges and ducts, storm sewers, and highway drainage systems. Detailed statistical analysis of extreme daily rainfall in each arid region is essential to estimate the relevant design inputs for engineering structures and agricultural planning. This paper aims to identify the best-fitting distribution for estimating design rainfall depth (XT) and maximum rainfall values for different return periods (2, 10, 25, 50, 100, and 150 years). The study used extreme daily rainfall records for 1970–2020 from four rain gauge stations near Wadi Al-Aqiq: Al Faqir (J109), Umm Al Birak (J112), Madinah Munawara (M001), and Bir Al Mashi (M103). The methodology adopted in this paper examined four frequency distributions, namely GEV (Generalised Extreme Value), Gumbel, Weibull, and Pearson Type III, to identify the most suitable storm design depth corresponding to different return periods. The results demonstrate that GEV and Pearson Type III produce higher extreme values, while the Weibull method is commonly suggested by the HYFRAN-PLUS model's decision support system (DSS) on criterion suitability. For the 100-year design storm, extreme values generated by the HYFRAN-PLUS model are higher than those from the DSS. All comparative DSS values are less than the maximum historical values for 1970–2020, except at the Al Faqir station, where the DSS value of 79.6 mm exceeds the historical maximum of 71 mm. This study provides useful information about the study area for water resources planners, farmers, and urban engineers to assess water availability and plan storage. Full article
(This article belongs to the Section Meteorology)
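The design depths for the return periods listed in the abstract are quantiles of the fitted distribution at non-exceedance probability 1 − 1/T. A sketch for a Gumbel fit follows; the location and scale values are hypothetical, not fitted station parameters.

```python
from scipy import stats

# Hypothetical Gumbel parameters (mm); real values come from fitting station AMS data.
loc, scale = 40.0, 12.0

def design_depth(T, loc=loc, scale=scale):
    """Depth exceeded on average once every T years under a Gumbel fit."""
    return stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)

for T in (2, 10, 25, 50, 100, 150):
    print(T, round(design_depth(T), 1))
```

The same quantile construction applies to GEV, Weibull, or Pearson Type III once their parameters are estimated, which is how the different distributions yield different extreme design values.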

18 pages, 835 KB  
Article
Prism-Based Mapping of 6G Use Cases Integrating Technical Requirements and Multidimensional Service Classification
by Sunhye Kim, Yoon Seo, Seung-Hoon Hwang and Byungun Yoon
Systems 2026, 14(4), 404; https://doi.org/10.3390/systems14040404 - 7 Apr 2026
Abstract
Purpose: With the advent of sixth-generation (6G) communication technology, systematic mapping of its use cases to associated technical requirements has become essential for accelerating standardization, guiding R&D investment, and informing policy formulation. Methods: This study consolidated 65 use case scenarios from key academic and institutional 6G sources into 21 representative cases. A three-round Delphi-based expert assessment, employing a five-point Likert scale and interquartile-range-based consensus monitoring, was used to assign primary and secondary technical requirements across six core dimensions: immersive communication, massive communication, hyper-reliable low-latency communication, integrated sensing and communication, integrated artificial intelligence and communication (IAAC), and ubiquitous connectivity. A three-dimensional (3D) prism-based visualization framework was subsequently developed to represent the interdependencies among these requirements. Results: IAAC and massive communication emerged as the most critical requirements, each functioning as a primary or secondary driver across most use cases. The prism framework revealed hierarchical and complementary relationships among the six dimensions that conventional 2D wheel diagrams cannot adequately capture. Furthermore, a nine-criterion multidimensional classification framework, encompassing data transmission mode, decision-making mode, communication flow, interaction type, device type, deployment type, human activity innovation, user type, and personalization level, was developed, offering industry-specific guidance for service design. Collectively, the proposed framework supports user-centric design, informs strategic technology planning, and contributes to policy development while acknowledging existing limitations in quantitative mapping and economic analysis. Full article

18 pages, 4298 KB  
Article
Spatial Pattern of Soil Erosion Drivers and Prioritizing Soil Conservation Areas Using Ordinary Least Squares and Geographically Weighted Regression
by Nazila Alaei, Fatemeh Saeedi Nazarlu, Hassan Khavarian Nehzak and Raoof Mostafazadeh
Earth 2026, 7(2), 59; https://doi.org/10.3390/earth7020059 - 4 Apr 2026
Abstract
The spatial assessment of soil erosion drivers provides essential information for prioritizing soil conservation areas. This study aims to compare the performance of the Ordinary Least Squares (OLS) regression model and the Geographically Weighted Regression (GWR) model in explaining and analyzing the spatial variations of soil erosion in the Qara-Su watershed (Ardabil Province, Iran) and identifying the relative roles of the driving factors affecting erosion. To determine the relative importance of factors influencing soil erosion in the Qara-Su watershed, potential soil erosion (A) data and RUSLE model factors, including R, K, LS, C, and P, were collected at 13,845 points within the watershed. Initially, general relationships between erosion and contributing factors were examined using the OLS regression model. Subsequently, to analyze the spatial variability of relationships and identify the relative importance of factors at different locations within the watershed, the GWR model with an adaptive kernel and optimal bandwidth selection based on AICc was employed. The performance of the OLS and GWR models was compared based on fit indices such as R2 and Akaike Information Criterion corrected (AICc), and the relative importance of erosion factors was determined based on the mean local GWR coefficients. Results from the RUSLE model indicated an average annual soil erosion of approximately 7.64 tons per hectare, suggesting that the watershed falls into the moderate erosion risk category. According to the GWR model, significant improvements in explaining variations and reducing errors were observed, with higher R2 and adjusted R2 values (0.62 vs. 0.50) and lower AICc values (3687 vs. 97,848) compared to the OLS model. The local GWR coefficients confirmed spatial non-stationarity and revealed that LS (topography) has the highest importance in mountainous areas. The C factor showed a stronger protective effect in agricultural land-use areas. 
These results provide a basis for developing targeted strategies to mitigate and manage erosion drivers with higher relative importance and facilitate a better understanding of the causes and mechanisms of soil erosion across the watershed. Full article
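The core idea behind GWR, as contrasted with OLS in the abstract above, is a weighted least-squares fit at each target location, with weights that decay with distance. A minimal sketch for one target point on synthetic data (all values invented; real GWR repeats this at every location and selects the bandwidth by AICc):

```python
import numpy as np

rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, size=(200, 2))            # synthetic point locations
x = rng.normal(size=200)                              # a single predictor (e.g., LS)
beta_local = 1.0 + 0.2 * coords[:, 0]                 # slope varies west -> east
y = beta_local * x + rng.normal(scale=0.1, size=200)  # response (e.g., erosion A)

def gwr_coef(target, coords, x, y, bandwidth=2.0):
    """Gaussian-kernel weighted least squares centered on one target location."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # [intercept, local slope]

west = gwr_coef(np.array([1.0, 5.0]), coords, x, y)
east = gwr_coef(np.array([9.0, 5.0]), coords, x, y)
print(west[1] < east[1])  # local slope grows eastward -> True
```

An OLS fit would return a single slope for the whole area; the location-dependent coefficients are what let GWR expose the spatial non-stationarity the study reports.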

18 pages, 3933 KB  
Article
Feature Selection Based on Height Mutual Information in Airborne LiDAR Filtering
by Zhan Cai, Luying Zhao, Qiuli Chen, Zhijun He, Shaoyun Bi and Xiaolong Xu
Remote Sens. 2026, 18(7), 1031; https://doi.org/10.3390/rs18071031 - 30 Mar 2026
Abstract
Filtering constitutes a critical step in the post-processing of airborne Light Detection And Ranging (LiDAR) data. Over the past decade, machine learning has emerged as a prominent methodological paradigm across numerous disciplines, attracting significant research interest in its application to LiDAR filtering. From a machine learning perspective, filtering is essentially a binary classification task that aims to discriminate between ground and non-ground points. However, the limited information inherent in point clouds often leads to the generation of highly correlated features, particularly those derived from height data, which can compromise filtering accuracy. To address this issue, feature selection becomes imperative. In this study, we employed height-based mutual information as a criterion to identify and eliminate less discriminative features for filtering. The AdaBoost (Adaptive Boosting) algorithm was adopted as the classifier for point cloud filtering. For each point, nineteen features were derived from the raw LiDAR point cloud based on height and other geometric attributes within a defined neighborhood. The performance of the proposed feature selection approach was evaluated using benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results demonstrate that the method is effective and reliable. After removing three selected features, the average kappa coefficient improved, along with a reduction in three categories of error, although a slight increase in Type II error (0.15%) was observed. Full article
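Ranking features by mutual information with the class label (ground vs. non-ground), as the abstract describes, can be sketched with a histogram-based MI estimate. The data below are synthetic; the study's nineteen LiDAR-derived features and AdaBoost classifier are not reproduced here.

```python
import numpy as np

def mutual_information(feature, labels, bins=16):
    """Histogram estimate of I(F;L) = sum p(f,l) * log(p(f,l) / (p(f)p(l)))."""
    f_bins = np.digitize(feature, np.histogram_bin_edges(feature, bins))
    joint = np.zeros((bins + 2, 2))
    for fb, lb in zip(f_bins, labels):
        joint[fb, lb] += 1
    joint /= joint.sum()
    pf = joint.sum(axis=1, keepdims=True)  # marginal over feature bins
    pl = joint.sum(axis=0, keepdims=True)  # marginal over labels
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pf @ pl)[nz])).sum())

rng = np.random.default_rng(3)
labels = rng.integers(0, 2, size=2000)                     # 0 = ground, 1 = non-ground
height = labels * 2.0 + rng.normal(scale=0.5, size=2000)   # informative feature
noise = rng.normal(size=2000)                              # uninformative feature
print(mutual_information(height, labels) > mutual_information(noise, labels))
```

Features whose MI with the label is low (like `noise` here) are the candidates for elimination, which is the selection criterion the study applies to its height-derived features.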

10 pages, 7086 KB  
Article
Identifying Predictors of Lung Volume in Pediatric Patients Undergoing Surgery: A STROBE-Compliant Retrospective Cross-Sectional Chest Computed Tomography Study
by Sou-Hyun Lee, Dong Gun Lim, Sung-Sik Park, Younghoon Jeon, Jinseok Yeo, Hoon Jung, Jiyong Yeom, Chanhyo Choi and Kyung-Hwa Kwak
J. Clin. Med. 2026, 15(6), 2313; https://doi.org/10.3390/jcm15062313 - 18 Mar 2026
Abstract
Background/Objectives: Tidal volume is determined by height and sex in adults under mechanical ventilation, and it serves as the foundation for implementing a lung-protective ventilation strategy. In children, tidal volume is often calculated based on actual body weight, without established guidelines regarding the predictors of lung volume. The aim of this study was to identify the key predictors of lung volume in children aged 0–5 years. Methods: This retrospective study involved 51 children aged 0–5 years who underwent chest computed tomography (CT) and surgery under general anesthesia between 2014 and 2024. The total lung volume was calculated using three-dimensional segmentation of the CT images. Linear regression models were used to assess predictors, including height, weight, age, sex, and body mass index (BMI). Model performance was evaluated using the adjusted R-squared and Akaike Information Criterion (AIC). Bootstrap validation with 2000 iterations was used to validate model reliability. Results: Height was the strongest predictor of lung volume (adjusted R-squared: 0.5621), and it showed collinearity with age. The final model included age and sex as the covariates. Bootstrap validation confirmed the model’s reliability. Conclusions: Age and sex are key predictors of the CT-derived total lung volume in children aged 0–5 years. Further studies are required to validate these findings. In addition, research is needed to derive and validate a tidal volume equation based on these predictors and assess the influence of this equation on clinical outcomes such as atelectasis, oxygenation, and inflammatory markers in pediatric surgery. Full article
(This article belongs to the Section Anesthesiology)

9 pages, 308 KB  
Article
Analysis of Influences of Sjögren’s Disease and Anti-Ro/SS-A Antibodies on Clinical Course of Patients with Rheumatoid Arthritis Complicated by Lymphoproliferative Disorders: A Pilot Study
by Yoshiro Horai, Shota Kurushima, Hideki Nakamura and Atsushi Kawakami
J. Clin. Med. 2026, 15(6), 2271; https://doi.org/10.3390/jcm15062271 - 17 Mar 2026
Abstract
Background/Objectives: Lymphoproliferative disorders (LPDs) are adverse effects of methotrexate (MTX) prescribed for rheumatoid arthritis (RA). Sjögren’s disease (SjD), for which the presence of anti-Ro/SS-A antibodies (Abs) is a diagnostic criterion, might accompany RA and be a risk factor for LPDs. We conducted a retrospective study to analyze the effects of SjD or anti-Ro/SS-A Ab positivity on the clinical course of patients with RA complicated by LPDs. Methods: We retrospectively analyzed 25 patients in our department who had RA complicated by LPDs, specifically collecting clinical information regarding the complications of SjD and positivity for anti-Ro/SS-A Abs. Results: In total, 25 patients with RA were included in this study, 3 of whom were diagnosed with SjD by attending physicians based on sicca symptoms and positivity for anti-Ro/SS-A antibodies. No significant differences in clinical characteristics, except for the SjD diagnosis given by attending physicians, were found between the patients positive for anti-Ro/SS-A Abs and those negative for anti-Ro/SS-A Abs. The most common histologic LPD subtype was diffuse large B cell lymphoma, while mucosa-associated lymphoid tissue lymphoma, the histologic subtype often diagnosed as SjD-LPD, was found in only one patient, who was positive for anti-Ro/SS-A Abs without an SjD diagnosis. There were no significant differences in the intervals between the RA and LPD diagnoses with respect to SjD status and anti-Ro/SS-A Ab positivity. Conclusions: While the rate of anti-Ro/SS-A Ab positivity in the study population seemed to be higher than that in the general RA population, any potential effects of SjD on RA-LPD development were not ascertained in this study. Full article
(This article belongs to the Special Issue Clinical Updates on Rheumatoid Arthritis: 2nd Edition)

41 pages, 8144 KB  
Article
Statistical Development of Rainfall IDF Curves and Machine Learning-Based Bias Assessment: A Case Study of Wadi Al-Rummah, Saudi Arabia
by Ibrahim T. Alhbib, Ibrahim H. Elsebaie and Saleh H. Alhathloul
Hydrology 2026, 13(3), 96; https://doi.org/10.3390/hydrology13030096 - 16 Mar 2026
Abstract
Reliable estimation of extreme rainfall is essential for hydraulic design and flood risk mitigation, particularly in arid regions where rainfall exhibits strong temporal and spatial variability. This study presents a statistical framework for developing rainfall intensity-duration-frequency (IDF) curves, complemented by a machine learning-based assessment of model bias and performance. The analysis was conducted using data from ten rainfall stations located within or near the Wadi Al-Rummah Basin. Annual maximum series (AMS) from 1969 to 2024 were first reconstructed to address missing years using a modified normal ratio method (NRM) combined with nearest-station selection, ensuring spatial consistency while preserving station-specific rainfall characteristics. Six probability distributions (Weibull, Gumbel, gamma, lognormal, generalized extreme value (GEV), and generalized Pareto) were fitted to each station, and the best-fit distribution was identified using multiple goodness-of-fit (GOF) criteria, including the Kolmogorov–Smirnov (K-S) test, Anderson–Darling (A-D) test, root mean square error (RMSE), chi-square (χ2) statistic, Akaike information criterion (AIC), Bayesian information criterion (BIC), and the coefficient of determination (R2). Statistical IDF curves were then developed for durations ranging from 5 to 1440 min and return periods from 2 to 1000 years. To evaluate the robustness of the statistically derived IDF curves, three machine learning (ML) models, multiple linear regression (MLR), regression random forest (RRF), and multilayer feed-forward neural network (MFFNN), were trained as surrogate models using duration, return period, and station geographic attributes as predictor variables. Model performance was evaluated using RMSE, MAE, and mean bias metrics across stations and return periods. The lognormal distribution emerged as the best-fit model for four stations, while the Gumbel and gamma distributions were selected for two stations each. 
Overall, no single probability distribution consistently outperformed others, indicating station-dependent behavior. Among the machine learning models, the MFFNN achieved the closest agreement with statistical IDF estimates (RMSE = 0.97, MAE = 0.65, bias = 0.02), followed by RRF and MLR based on global average performance across all stations and return periods. The proposed framework offers a reliable approach for rainfall IDF development and evaluation in arid region watersheds. Full article
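The distribution-selection step described in the abstract can be sketched in a few lines: fit each candidate distribution to an annual-maximum series and rank by AIC. This is an illustrative sketch with synthetic data, not the study's code; the Weibull and generalized Pareto candidates are omitted for brevity, and the study additionally applies the K-S, A-D, RMSE, chi-square, BIC, and R² criteria.

```python
# Sketch: fit candidate distributions to a synthetic annual-maximum series
# (AMS) and rank them by AIC. Data and parameters are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic 56-year AMS (mm), loosely mimicking the 1969-2024 record length.
ams = stats.gumbel_r.rvs(loc=40.0, scale=12.0, size=56, random_state=rng)

candidates = {
    "gumbel": stats.gumbel_r,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "GEV": stats.genextreme,
}

def aic(dist, data):
    """AIC = 2k - 2 * log-likelihood at the maximum-likelihood estimates."""
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

scores = {name: aic(dist, ams) for name, dist in candidates.items()}
best = min(scores, key=scores.get)
print(f"best-fit distribution by AIC: {best}")
```

Lower AIC is better; between nested candidates such as Gumbel and GEV, the 2k term penalizes the extra shape parameter unless it buys a correspondingly better likelihood.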
(This article belongs to the Section Statistical Hydrology)

15 pages, 2004 KB  
Article
Testing Five Nonlinear Equations for Quantifying Leaf Area Inequality of Semiarundinaria densiflora
by Hanzhou Qiu, Lin Wang and Johan Gielis
Symmetry 2026, 18(3), 501; https://doi.org/10.3390/sym18030501 - 15 Mar 2026
Viewed by 280
Abstract
Accurately quantifying the inequality of plant organ size distributions, such as leaf area, is essential for understanding plant resource allocation strategies, and this is commonly achieved using Lorenz curves. Previous studies have shown that the performance equation (PE) and its generalized form (GPE) effectively describe Lorenz curves that are rotated 135° counterclockwise around the origin and shifted rightward by 2 units. However, few studies have compared the fitting performance of PE (and GPE) with other traditional equations generating Lorenz curves in modeling empirical leaf area distributions, and even fewer have considered the validity of linear approximation assumptions in these nonlinear models. To address this gap, we quantified the inequality of leaf area distributions in Semiarundinaria densiflora, a bamboo species for which the abundant and measurable leaves per culm provide an ideal system for examining the ecological strategies underlying leaf allocation patterns. Five nonlinear models were employed to fit the leaf area distribution: PE, GPE, the Sarabia equation (SarabiaE), the Sarabia–Castillo–Slottje equation (SCSE), and the Sitthiyot–Holasut equation (SHE). Model performance was assessed using root-mean-square error (RMSE) and Akaike information criterion (AIC), while nonlinearity curvature measures were applied to evaluate the close-to-linear behavior of parameter estimates. In addition, the Lorenz asymmetry coefficient (LAC) was used to quantify the asymmetry of the Lorenz curves. Our results showed a clear trade-off between predictive accuracy and linear approximation behavior. Among the five models, GPE achieved the best fit, with the lowest RMSE and AIC values, yet did not show good close-to-linear behavior. In contrast, SHE provided the poorest fit but demonstrated the strongest close-to-linear properties. 
LAC values indicated that relatively abundant, larger leaves disproportionately contributed to the inequality in leaf area distribution. These findings highlight an inherent trade-off in using Lorenz-based models to describe leaf area frequency distributions: predictive accuracy does not necessarily align with statistical validity. By integrating model fit, nonlinearity diagnostics, and asymmetry assessment, this study provides new perspectives and methodological tools for future investigations into inequality in plant organ size distributions and their ecological significance. Full article
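The Lorenz asymmetry coefficient (LAC) used above can be computed directly from the sorted organ sizes. A minimal sketch under the standard Damgaard–Weiner definition S = F(μ) + L(μ), where F(μ) is the share of leaves smaller than the mean area and L(μ) is their share of total area; S > 1 indicates that large organs drive the inequality. The leaf-area values below are hypothetical, not data from the paper.

```python
# Lorenz asymmetry coefficient S = F(mu) + L(mu) for a sample of organ sizes.
import numpy as np

def lorenz_asymmetry(x):
    x = np.sort(np.asarray(x, dtype=float))
    n, mu = len(x), np.mean(x)
    m = np.searchsorted(x, mu)                    # count of values below the mean
    delta = (mu - x[m - 1]) / (x[m] - x[m - 1])   # interpolate between x_m and x_{m+1}
    f = (m + delta) / n                           # F(mu): fraction of individuals
    l = (x[:m].sum() + delta * x[m]) / x.sum()    # L(mu): fraction of total size
    return f + l

leaf_areas = [2.1, 3.4, 3.9, 4.2, 5.0, 7.8, 12.5]  # cm^2, illustrative
print(round(lorenz_asymmetry(leaf_areas), 3))
```

A perfectly symmetric Lorenz curve gives S = 1; for the skewed sample above, S exceeds 1, matching the paper's finding that larger leaves contribute disproportionately.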
(This article belongs to the Section Mathematics)

30 pages, 954 KB  
Article
Poisson Mixed-Effects Count Regression Model Based on Double SCAD Penalty and Its Simulation Study
by Keqian Li, Xueni Ren, Hanfang Li and Youxi Luo
Axioms 2026, 15(3), 214; https://doi.org/10.3390/axioms15030214 - 12 Mar 2026
Viewed by 248
Abstract
This paper focuses on variable selection and parameter estimation for mixed-effects Poisson count regression models. To simultaneously select important variables in both fixed effects and random effects, we propose a double-penalized Poisson count regression model with the Smoothly Clipped Absolute Deviation (SCAD) penalty imposed on both components. To estimate the unknown parameters, we develop a new iterative algorithm called the Double SCAD–Local Quadratic Approximation (DSCAD-LQA) algorithm. Under regularity conditions, the consistency and Oracle property of the proposed estimator are established. Simulation studies are conducted under two types of penalty parameter selection criteria: the Schwarz Information Criterion (SIC) and the Generalized Approximate Cross-Validation (GACV). We evaluate the performance of the proposed method under different levels of correlation among explanatory variables and different covariance structures of random effects. Comparisons are also carried out with the non-penalized model, the single-penalized model, and the double LASSO-penalized model. The results demonstrate that the proposed double SCAD penalty method performs better than the other three methods in terms of important variable selection and coefficient estimation, and is especially effective for sparse models. Full article
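The SCAD penalty that the paper imposes on both the fixed- and random-effects components is a three-regime function. Below is a minimal sketch of the standard Fan–Li form, assuming the conventional a = 3.7; `lam` stands in for the tuning parameter that the paper selects via SIC or GACV. This illustrates the penalty itself, not the DSCAD-LQA estimation algorithm.

```python
# SCAD penalty: LASSO-like near zero, quadratic transition, constant tail.
# The constant tail is what yields nearly unbiased estimates of large effects.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    t = np.abs(np.asarray(theta, dtype=float))
    inner = lam * t                                          # |t| <= lam
    middle = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))  # lam < |t| <= a*lam
    outer = (a + 1) * lam**2 / 2                             # |t| > a*lam
    return np.where(t <= lam, inner, np.where(t <= a * lam, middle, outer))

print(scad_penalty([0.0, 0.5, 1.0, 5.0], lam=0.5))
```

The three pieces join continuously at |θ| = λ and |θ| = aλ, and the penalty is flat beyond aλ, which is why SCAD shrinks small coefficients to zero while leaving large ones essentially unpenalized.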

19 pages, 364 KB  
Article
New Fuzzy Topologies via Ideals and Generalized Openness
by Ahu Açıkgöz
Mathematics 2026, 14(5), 904; https://doi.org/10.3390/math14050904 - 6 Mar 2026
Viewed by 261
Abstract
This paper introduces and investigates a new class of generalized open sets, called fuzzy hI-open sets, in fuzzy ideal topological spaces (X, τ̃, Ĩ). We prove that the collection of all fuzzy hI-open sets forms a fuzzy topology τ̃hI satisfying τ̃ ⊆ τ̃hI and show that τ̃∗ and τ̃hI are in general incomparable, demonstrating that the hI-construction captures fundamentally different information from the ∗-topology. We establish precise conditions under which these topologies coincide and introduce a fuzzy hI-T1 separation axiom. Furthermore, we develop a comprehensive hierarchy of generalizations—fuzzy hαI-open, fuzzy hpI-open, fuzzy hsI-open, and fuzzy hβI-open sets—and prove that these classes are pairwise distinct through genuinely fuzzy (non-characteristic) examples. We introduce fuzzy hI-continuous and fuzzy hI-irresolute functions, providing six equivalent characterizations and a closed-set criterion via the ∗-interior operator. The framework is applied to a concrete multi-criteria decision-making problem, where the ideal filters negligible criteria and the hI-interior provides a refined ranking that demonstrably outperforms the original fuzzy topology. Full article
(This article belongs to the Topic Fuzzy Sets Theory and Its Applications)