Search Results (459)

Search Parameters:
Keywords = standard errors (S.E.)

25 pages, 8881 KB  
Article
Evaluating Machine Learning Techniques for Brain Tumor Detection with Emphasis on Few-Shot Learning Using MAML
by Soham Sanjay Vaidya, Raja Hashim Ali, Shan Faiz, Iftikhar Ahmed and Talha Ali Khan
Algorithms 2025, 18(10), 624; https://doi.org/10.3390/a18100624 - 2 Oct 2025
Viewed by 237
Abstract
Accurate brain tumor classification from MRI is often constrained by limited labeled data. We systematically compare conventional machine learning, deep learning, and few-shot learning (FSL) for four classes (glioma, meningioma, pituitary, no tumor) using a standardized pipeline. Models are trained on the Kaggle Brain Tumor MRI Dataset and evaluated across dataset regimes (100%→10%). We further test generalization on BraTS and quantify robustness to resolution changes, acquisition noise, and modality shift (T1→FLAIR). To support clinical trust, we add visual explanations (Grad-CAM/saliency) and report per-class results (confusion matrices). A fairness-aligned protocol (shared splits, optimizer, early stopping) and a complexity analysis (parameters/FLOPs) enable balanced comparison. With full data, Convolutional Neural Networks (CNNs)/Residual Networks (ResNets) perform strongly but degrade with 10% data; Model-Agnostic Meta-Learning (MAML) retains competitive performance (AUC-ROC ≥ 0.9595 at 10%). Under cross-dataset validation (BraTS), FSL—particularly MAML—shows smaller performance drops than CNN/ResNet. Variability tests reveal FSL’s relative robustness to down-resolution and noise, although modality shift remains challenging for all models. Interpretability maps confirm correct activations on tumor regions in true positives and explain systematic errors (e.g., “no tumor”→pituitary). Conclusion: FSL provides accurate, data-efficient, and comparatively robust tumor classification under distribution shift. The added per-class analysis, interpretability, and complexity metrics strengthen clinical relevance and transparency. Full article
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
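For readers unfamiliar with MAML, the core of the method is an inner adaptation loop on each task's support set followed by a meta-update from the query loss. A minimal first-order sketch in PyTorch (the tiny classifier, task sampler, and hyperparameters below are placeholders, not the authors' implementation):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder 4-class classifier; the paper's CNN/ResNet backbones would go here.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 4))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, inner_steps = 0.01, 5

def sample_task():
    """Hypothetical sampler returning (support_x, support_y, query_x, query_y) for one 4-way, 5-shot task."""
    x = torch.randn(40, 1, 64, 64)
    y = torch.arange(4).repeat_interleave(5).repeat(2)
    return x[:20], y[:20], x[20:], y[20:]

for meta_step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):                               # tasks per meta-batch
        sx, sy, qx, qy = sample_task()
        learner = copy.deepcopy(model)               # task-specific copy of the initialization
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # inner-loop adaptation on the support set
            inner_opt.zero_grad()
            F.cross_entropy(learner(sx), sy).backward()
            inner_opt.step()
        query_loss = F.cross_entropy(learner(qx), qy)
        query_loss.backward()                        # gradients w.r.t. the adapted copy
        for p, lp in zip(model.parameters(), learner.parameters()):
            p.grad = lp.grad if p.grad is None else p.grad + lp.grad  # first-order approximation
    meta_opt.step()
```

Full MAML backpropagates through the inner-loop updates (second-order gradients); the first-order variant shown here is a common, cheaper approximation of the same training scheme.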

23 pages, 398 KB  
Article
Business Strategies and Corporate Reporting for Sustainability: A Comparative Study of Materiality, Stakeholder Engagement, and ESG Performance in Europe
by Andreas-Errikos Delegkos, Michalis Skordoulis and Petros Kalantonis
Sustainability 2025, 17(19), 8814; https://doi.org/10.3390/su17198814 - 1 Oct 2025
Viewed by 274
Abstract
This study investigates the relationship between corporate reporting practices and the value relevance of accounting information by analyzing 100 publicly listed non-financial European firms between 2015 and 2019. Drawing on the Ohlson valuation framework, the analysis combines random effects with Driscoll–Kraay standard errors and System GMM estimations to assess the role of financial and non-financial disclosures. Materiality and stakeholder engagement were scored through content analysis of corporate reports, while ESG performance data were obtained from Refinitiv Eikon. The results show that financial fundamentals remain the most robust determinants of firm value, consistent with Ohlson’s model. Among qualitative disclosures, materiality demonstrates a strong and statistically significant positive association with market value in the random effects specification, while stakeholder engagement and ESG scores do not attain statistical significance. In the dynamic panel model, lagged market value is highly significant, confirming the persistence of valuation, while the effect of materiality and stakeholder engagement diminishes. Interaction models further indicate that materiality strengthens the relevance of earnings but reduces the role of book value, underscoring its selective contribution. Overall, the findings provide partial support for the claim that Integrated Reporting enhances the value relevance of accounting information. They suggest that the usefulness of IR depends less on adoption per se and more on the quality and substance of disclosures, particularly the integration of financially material ESG issues into corporate reporting. This highlights IR’s potential to improve transparency, accountability, and investor decision making, thereby contributing to more effective capital market outcomes. Full article
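For context, a value-relevance regression with Driscoll–Kraay standard errors can be set up in a few lines with the linearmodels Python package; this is a simplified pooled sketch with illustrative column names, not the paper's random-effects or System GMM specifications:

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# df: one row per firm-year with columns 'firm', 'year', 'price',
# 'bvps' (book value per share), 'eps', 'materiality' (content-analysis score) -- illustrative names
df = pd.read_csv("panel.csv").set_index(["firm", "year"])

# Ohlson-style specification: market value on book value, earnings, and disclosure quality
model = PanelOLS.from_formula("price ~ 1 + bvps + eps + materiality", data=df)

# cov_type="kernel" requests Driscoll-Kraay HAC standard errors,
# robust to heteroskedasticity and cross-sectional dependence
res = model.fit(cov_type="kernel")
print(res.summary)
```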

16 pages, 1462 KB  
Systematic Review
Application of Radiomics in Melanoma: A Systematic Review and Meta-Analysis
by Rosa Falcone, Sofia Verkhovskaia, Francesca Romana Di Pietro, Chiara Scianni, Giulia Poti, Maria Francesca Morelli, Paolo Marchetti, Federica De Galitiis, Matteo Sammarra and Armando Ugo Cavallo
Cancers 2025, 17(19), 3130; https://doi.org/10.3390/cancers17193130 - 26 Sep 2025
Viewed by 342
Abstract
Background/Objectives: Radiomics is a powerful and emerging tool in oncology, with many potential applications in predicting therapy response and prognosis. To assess the current state of radiomics in melanoma, we conducted a systematic review of its various clinical uses. Methods: We searched three databases: PubMed, Web of Science and Scopus. Each study was classified based on multiple variables, including patient number, metastasis number, therapy, imaging modality, clinical endpoints and analysis methods. The risk of bias in the systematic review was assessed with QUADAS-2, and the certainty of evidence in the meta-analysis with GRADE. Results: Forty studies involving 4673 patients and 24,561 lesions were included in the analysis. Metastatic disease was the most frequently studied clinical setting (85%). Immunotherapy was the most commonly investigated treatment, featured in half of the studies. Computed tomography (CT) was the preferred imaging modality, appearing in 17 studies (42.5%). Radiomic features were most often extracted using three-dimensional (3D) analysis (72.5%). Across 24 studies investigating the prediction of treatment response and survival, only 9 provided sufficient data (Area Under the Curve, AUC, and standard error, SE) for inclusion. A random-effects model estimated a pooled AUC of 0.83 (95% CI: 0.74 to 0.92), indicating strong discriminative performance of the radiomic models included. Low to moderate heterogeneity was observed (I2 = 28.6%, p = 0.4741). No evidence of publication bias was detected (p = 0.470). Conclusions: Radiomics is increasingly being explored in the context of melanoma, particularly in advanced disease settings and in relation to immunotherapy. Most studies rely on CT imaging and 3D feature extraction, while molecular integration remains limited. Despite promising findings with strong discriminative performance in predicting therapy response, further prospective, standardized studies with higher methodological rigor are needed to validate radiomic biomarkers and integrate them into clinical decision-making. Full article
(This article belongs to the Special Issue Development of Biomarkers and Antineoplastic Drugs in Solid Tumors)
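The pooled estimate described here follows the standard DerSimonian–Laird random-effects calculation from each study's AUC and SE; a minimal sketch (the three input values are placeholders, not the nine studies actually pooled):

```python
import numpy as np

def random_effects_pool(auc, se):
    """DerSimonian-Laird random-effects pooling of effect sizes with standard errors."""
    auc, se = np.asarray(auc, float), np.asarray(se, float)
    w_fixed = 1.0 / se**2
    mu_fixed = np.sum(w_fixed * auc) / np.sum(w_fixed)
    Q = np.sum(w_fixed * (auc - mu_fixed) ** 2)             # Cochran's Q
    k = len(auc)
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (k - 1)) / c)                       # between-study variance
    w = 1.0 / (se**2 + tau2)
    mu = np.sum(w * auc) / np.sum(w)
    se_mu = np.sqrt(1.0 / np.sum(w))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    return mu, (mu - 1.96 * se_mu, mu + 1.96 * se_mu), i2

# Placeholder inputs; the paper pooled nine studies reporting AUC and SE.
pooled, ci, i2 = random_effects_pool([0.80, 0.85, 0.78], [0.04, 0.05, 0.06])
print(pooled, ci, i2)
```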

29 pages, 7187 KB  
Article
A Novel Framework for Predicting Daily Reference Evapotranspiration Using Interpretable Machine Learning Techniques
by Elsayed Ahmed Elsadek, Mosaad Ali Hussein Ali, Clinton Williams, Kelly R. Thorp and Diaa Eldin M. Elshikha
Agriculture 2025, 15(18), 1985; https://doi.org/10.3390/agriculture15181985 - 20 Sep 2025
Cited by 1 | Viewed by 385
Abstract
Accurate estimation of daily reference evapotranspiration (ETo) is crucial for sustainable water resource management and irrigation scheduling, especially in water-scarce regions like Arizona. The standardized Penman–Monteith (PM) method is costly and requires specialized instruments and expertise, making it generally impractical for commercial growers. This study developed 35 ETo models to predict daily ETo across Coolidge, Maricopa, and Queen Creek in Pinal County, Arizona. Seven input combinations of daily meteorological variables were used for training and testing five machine learning (ML) models: Artificial Neural Network (ANN), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Support Vector Machine (SVM). Four statistical indicators, coefficient of determination (R2), the normalized root-mean-squared error (RMSEn), mean absolute error (MAE), and simulation error (Se), were used to evaluate the ML models’ performance in comparison with the FAO-56 PM standardized method. The SHapley Additive exPlanations (SHAP) method was used to interpret each meteorological variable’s contribution to the model predictions. Overall, the 35 ETo-developed models showed an excellent to fair performance in predicting daily ETo over the three weather stations. Employing ANN10, RF10, XGBoost10, CatBoost10, and SVM10, incorporating all ten meteorological variables, yielded the highest accuracies during training and testing periods (0.994 ≤ R2 ≤ 1.0, 0.729 ≤ RMSEn ≤ 3.662, 0.030 ≤ MAE ≤ 0.181 mm·day−1, and 0.833 ≤ Se ≤ 2.295). Excluding meteorological variables caused a gradual decline in ETo-developed models’ performance across the stations. However, 3-variable models using only maximum, minimum, and average temperatures (Tmax, Tmin, and Tave) predicted ETo well across the three stations during testing (13.469 ≤ RMSEn ≤ 17.655 and Se ≤ 15.45%). Results highlighted that Tmax, solar radiation (Rs), and wind speed at 2 m height (U2) are the most influential factors affecting ETo at the central Arizona sites, followed by extraterrestrial solar radiation (Ra) and Tave. In contrast, humidity-related variables (RHmin, RHmax, and RHave), along with Tmin and precipitation (Pr), had minimal impact on the model’s predictions. The results are informative for assisting growers and policymakers in developing effective water management strategies, especially for arid regions like central Arizona. Full article
(This article belongs to the Section Agricultural Water Management)
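As a rough illustration of the evaluation protocol, one of the candidate regressors can be trained on daily weather inputs and scored against FAO-56 PM ETo with R2, normalized RMSE, and MAE; the column names and the simple train/test split below are assumptions, and the paper's simulation error (Se) is not reproduced:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# df: daily station records with FAO-56 PM ETo as the target; column names are assumptions
df = pd.read_csv("station_daily.csv")
X = df[["Tmax", "Tmin", "Tave", "Rs", "U2", "RHmax", "RHmin"]]
y = df["ETo_PM"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
pred = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_tr, y_tr).predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
print("R2   :", r2_score(y_te, pred))
print("RMSEn:", 100 * rmse / y_te.mean(), "%")      # RMSE normalized by the mean observed ETo
print("MAE  :", mean_absolute_error(y_te, pred), "mm/day")
```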

28 pages, 5282 KB  
Article
Predicting Empathy and Other Mental States During VR Sessions Using Sensor Data and Machine Learning
by Emilija Kizhevska, Hristijan Gjoreski and Mitja Luštrek
Sensors 2025, 25(18), 5766; https://doi.org/10.3390/s25185766 - 16 Sep 2025
Viewed by 800
Abstract
Virtual reality (VR) is often regarded as the “ultimate empathy machine” because of its ability to immerse users in alternative perspectives and environments beyond physical reality. In this study, 105 participants (average age 22.43 ± 5.31 years, range 19–45, 75% female) with diverse educational and professional backgrounds experienced three-dimensional 360° VR videos featuring actors expressing different emotions. Despite the availability of established methodologies in both research and clinical domains, there remains a lack of a universally accepted “gold standard” for empathy assessment. The primary objective was to explore the relationship between the empathy levels of the participants and the changes in their physiological responses. Empathy levels were self-reported using questionnaires, while physiological attributes were recorded through various sensors. The main outcomes of the study are machine learning (ML) models capable of predicting state empathy levels and trait empathy scores during VR video exposure. The Random Forest (RF) regressor achieved the best performance for trait empathy prediction, with a mean absolute percentage error (MAPE) of 9.1%, and a standard error of the mean (SEM) of 0.32% across folds. For classifying state empathy, the RF classifier achieved the highest balanced accuracy of 67%, and a standard error of the proportion (SE) of 1.90% across folds. This study contributes to empathy research by introducing an objective and efficient method for predicting empathy levels using physiological signals, demonstrating the potential of ML models to complement self-reports. Moreover, by providing a novel dataset of VR empathy-eliciting videos, the work offers valuable resources for future research and clinical applications. Additionally, predictive models were developed to detect non-empathic arousal (78% balanced accuracy ± 0.63% SE) and to distinguish empathic vs. non-empathic arousal (79% balanced accuracy ± 0.41% SE). Furthermore, statistical tests explored the influence of narrative context, as well as empathy differences toward different genders and emotions. We also make available a set of carefully designed and recorded VR videos specifically created to evoke empathy while minimizing biases and subjective perspectives. Full article
(This article belongs to the Special Issue Sensors and Wearables for AR/VR Applications)
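The fold-wise summaries reported above (a mean error together with its standard error across folds) follow the usual cross-validation bookkeeping; a minimal sketch with placeholder feature and score arrays:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_percentage_error

X = np.random.rand(105, 20)                 # placeholder physiological features per participant
y = np.random.rand(105) * 40 + 20           # placeholder trait-empathy scores

fold_mape = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(random_state=0).fit(X[tr], y[tr])
    fold_mape.append(mean_absolute_percentage_error(y[te], model.predict(X[te])))

fold_mape = np.asarray(fold_mape)
sem = fold_mape.std(ddof=1) / np.sqrt(len(fold_mape))   # standard error of the mean across folds
print(f"MAPE = {100 * fold_mape.mean():.1f}% +/- {100 * sem:.2f}% (SEM)")
```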

23 pages, 5510 KB  
Article
Research on Intelligent Generation of Line Drawings from Point Clouds for Ancient Architectural Heritage
by Shuzhuang Dong, Dan Wu, Weiliang Kong, Wenhu Liu and Na Xia
Buildings 2025, 15(18), 3341; https://doi.org/10.3390/buildings15183341 - 15 Sep 2025
Viewed by 298
Abstract
Addressing the inefficiency, subjective errors, and limited adaptability of existing methods for surveying complex ancient structures, this study presents an intelligent hierarchical algorithm for generating line drawings guided by structured architectural features. Leveraging point cloud data, our approach integrates prior semantic and structural knowledge of ancient buildings to establish a multi-granularity feature extraction framework encompassing local geometric features (normal vectors, curvature, Simplified Point Feature Histograms-SPFH), component-level semantic features (utilizing enhanced PointNet++ segmentation and geometric graph matching for specialized elements), and structural relationships (adjacency analysis, hierarchical support inference). This framework autonomously achieves intelligent layer assignment, line type/width selection based on component semantics, vectorization optimization via orthogonal and hierarchical topological constraints, and the intelligent generation of sectional views and symbolic annotations. We implemented an algorithmic toolchain using the AutoCAD Python API (pyautocad version 0.5.0) within the AutoCAD 2023 environment. Validation on point cloud datasets from two representative ancient structures—Guanchang No. 11 (Luoyuan County, Fujian) and Li Tianda’s Residence (Langxi County, Anhui)—demonstrates the method’s effectiveness in accurately identifying key components (e.g., columns, beams, Dougong brackets), generating engineering-standard line drawings with significantly enhanced efficiency over traditional approaches, and robustly handling complex architectural geometries. This research delivers an efficient, reliable, and intelligent solution for digital preservation, restoration design, and information archiving of ancient architectural heritage. Full article
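The local geometric features such a pipeline starts from, per-point normals and curvature (surface variation) from a PCA of the k-nearest neighbours, can be computed with NumPy/SciPy alone; a minimal sketch (the SPFH descriptors, PointNet++ segmentation, and AutoCAD generation stages are not shown):

```python
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=30):
    """Per-point unit normal and surface-variation curvature from the local covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[nbrs].mean(axis=0)
        cov = local.T @ local / k
        evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        normals[i] = evecs[:, 0]                    # normal = smallest-variance direction
        curvature[i] = evals[0] / evals.sum()       # surface variation in [0, 1/3]
    return normals, curvature

pts = np.random.rand(5000, 3)                        # placeholder for a scanned point cloud
n, c = normals_and_curvature(pts)
```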

16 pages, 347 KB  
Article
Interaction Effects of Green Finance and Digital Platforms on China’s Economic Growth
by He Li, Nurhafiza Abdul Kader Malim, Xiaojun Xie and Xuyang Du
Sustainability 2025, 17(18), 8171; https://doi.org/10.3390/su17188171 - 11 Sep 2025
Viewed by 572
Abstract
This study examines the interaction effects of green finance and digital platforms on China’s economic growth, employing panel data from 30 provinces spanning the period 2013–2023. Green finance is measured using green bonds and green credit, while digital platforms are proxied by electronic payment and e-commerce penetration. Previous empirical findings indicate that both green finance and digital platforms significantly contribute to economic growth. A 1% increase in the interaction term between green finance and digital platforms, based on fixed effects models with robust standard errors, results in a 0.0204% increase in GDP, supporting the hypothesis of a positive interaction. Control variables, including money supply, fiscal expenditure, inflation rate, fixed asset investment, and industrial structure, are included to isolate the net effects of green finance and digital platforms on GDP growth, reinforcing the study’s econometric robustness. This study contributes novel evidence on how the integration of green finance and digital infrastructure fosters sustainable and inclusive economic development. Full article
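A minimal sketch of this kind of specification, log GDP regressed on green finance, digital platforms, their interaction, and controls with two-way fixed effects and robust standard errors, using statsmodels; all column names are illustrative:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: province-year panel, 2013-2023; column names are assumptions
df = pd.read_csv("province_panel.csv")

# gf:dp is the green-finance x digital-platform interaction of interest;
# C(province) and C(year) absorb two-way fixed effects.
res = smf.ols(
    "log_gdp ~ gf * dp + money_supply + fiscal_exp + inflation + fai + ind_structure"
    " + C(province) + C(year)",
    data=df,
).fit(cov_type="HC1")            # heteroskedasticity-robust standard errors

print(res.params["gf:dp"], res.bse["gf:dp"])
```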

22 pages, 3577 KB  
Article
Sensortoolkit—A Python Library for Standardizing the Ingestion, Analysis, and Reporting of Air Sensor Data for Performance Evaluation
by Menaka Kumar, Samuel G. Frederick, Karoline K. Barkjohn and Andrea L. Clements
Sensors 2025, 25(18), 5645; https://doi.org/10.3390/s25185645 - 10 Sep 2025
Viewed by 530
Abstract
Open-source software tools designed specifically for evaluating and reporting air sensor performance are limited. The available tools do not provide a means for summarizing the sensor performance using common statistical metrics and figures, nor are they suited for handling the wide variety of data formats currently used by air sensors. We developed sensortoolkit v1.1.0 as a free, open-source Python v3.8.20 library to encourage the use of the U.S. Environmental Protection Agency’s (U.S. EPA) recommended performance evaluation protocols for air sensors measuring particulate matter and gases. The library compares the collocated air sensor against reference monitor data and includes procedures to reformat both datasets into a standardized format using an interactive setup module. Library modules calculate performance metrics (e.g., the coefficient of determination (R2), slope, intercept, and root mean square error (RMSE)) and make plots to visualize the data. These metrics and plots can be used to better understand sensor accuracy, the precision between sensors of the same make and model, and the influence of meteorological parameters at 1 h and 24 h averages. The results can be compiled into a reporting template allowing for the easier comparison of sensor performance results generated by different organizations. This paper provides a summary of the sensortoolkit and a case study to demonstrate its utility. Full article
(This article belongs to the Section Environmental Sensing)
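The regression metrics such an evaluation reports can be computed directly from paired, time-averaged sensor and reference observations; the generic sketch below shows the underlying statistics and is not sensortoolkit's own API:

```python
import numpy as np
from scipy import stats

def collocation_metrics(sensor, reference):
    """Slope, intercept, R^2, and RMSE for a sensor vs. a collocated reference monitor."""
    sensor, reference = np.asarray(sensor, float), np.asarray(reference, float)
    ok = ~np.isnan(sensor) & ~np.isnan(reference)      # drop incomplete averaging intervals
    s, r = sensor[ok], reference[ok]
    fit = stats.linregress(r, s)                       # sensor regressed on reference
    rmse = np.sqrt(np.mean((s - r) ** 2))
    return {"slope": fit.slope, "intercept": fit.intercept,
            "R2": fit.rvalue ** 2, "RMSE": rmse}

# e.g., 24 h averaged PM2.5 in ug/m^3 (placeholder values)
print(collocation_metrics([10.2, 12.5, 8.1, 15.0], [9.8, 13.1, 7.5, 14.2]))
```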

70 pages, 6601 KB  
Review
A Comparative Study of Waveforms Across Mobile Cellular Generations: From 0G to 6G and Beyond
by Farah Arabian and Morteza Shoushtari
Telecom 2025, 6(3), 67; https://doi.org/10.3390/telecom6030067 - 9 Sep 2025
Viewed by 1101
Abstract
Waveforms define the shape, structure, and frequency characteristics of signals, whereas modulation schemes determine how information symbols are mapped onto these waveforms for transmission. Their appropriate selection plays a critical role in determining the efficiency, robustness, and reliability of data transmission. In wireless communications, the choice of waveform influences key factors, such as network capacity, coverage, performance, power consumption, battery life, spectral efficiency (SE), bandwidth utilization, and the system’s resistance to noise and electromagnetic interference. This paper provides a comprehensive analysis of the waveforms and modulation schemes used across successive generations of mobile cellular networks, exploring their fundamental differences, structural characteristics, and trade-offs for various communication scenarios. It also situates this analysis within the historical evolution of mobile standards, highlighting how advances in modulation and waveform technologies have shaped the development and proliferation of cellular networks. It further examines criteria for waveform selection—such as SE, bit error rate (BER), throughput, and latency—and discusses methods for assessing waveform performance. Finally, this study presents a comparative evaluation of modulation schemes across multiple mobile generations, focusing on key performance metrics, with the BER analysis conducted through MATLAB simulations. Full article
(This article belongs to the Special Issue Advances in Wireless Communication: Applications and Developments)
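The BER-versus-Eb/N0 comparisons discussed here can be reproduced for simple waveforms in a few lines; a minimal BPSK-over-AWGN sketch (in Python rather than the paper's MATLAB), with the Monte Carlo estimate checked against the closed-form Q-function result:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
n_bits = 1_000_000

for ebn0_db in range(0, 11, 2):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                                    # BPSK mapping 0/1 -> -1/+1
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    received = symbols + noise
    ber_sim = np.mean((received > 0).astype(int) != bits)
    ber_theory = 0.5 * erfc(np.sqrt(ebn0))                    # Q(sqrt(2*Eb/N0))
    print(f"Eb/N0 = {ebn0_db:2d} dB  simulated {ber_sim:.2e}  theory {ber_theory:.2e}")
```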

19 pages, 1748 KB  
Article
On the True Significance of the Hubble Tension: A Bayesian Error Decomposition Accounting for Information Loss
by Nathalia M. N. da Rocha, Andre L. B. Ribeiro and Francisco B. S. Oliveira
Universe 2025, 11(9), 303; https://doi.org/10.3390/universe11090303 - 6 Sep 2025
Viewed by 349
Abstract
The Hubble tension, a persistent discrepancy between early and late Universe measurements of H0, poses a significant challenge to the standard cosmological model. In this work, we present a new Bayesian hierarchical framework designed to meticulously decompose this observed tension into its constituent parts: standard measurement errors, information loss arising from parameter-space projection, and genuine physical tension. Our approach, employing Fisher matrix analysis with MCMC-estimated loss coefficients and explicitly modeling information loss via variance inflation factors (λ), is particularly important in high-precision analysis where even seemingly small information losses can impact conclusions. We find that the real tension component (Treal) has a mean value of 5.94 km/s/Mpc (95% CI: [3.32, 8.64] km/s/Mpc). Quantitatively, approximately 78% of the observed tension variance is attributed to real tension, 13% to measurement error, and 9% to information loss. Despite this, our decomposition indicates that the observed ∼6.39σ discrepancy is predominantly a real physical phenomenon, with real tension contributing ∼5.64σ. Our findings strongly suggest that the Hubble tension is robust and probably points toward new physics beyond the ΛCDM model. Full article
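Reading the reported percentages as shares of the tension's variance reproduces the quoted significances: 78% of a 6.39σ tension corresponds to 6.39 × √0.78 ≈ 5.64σ of real tension. A short check of that arithmetic (the variance-share interpretation is an assumption on our part):

```python
import numpy as np

observed_sigma = 6.39
shares = {"real": 0.78, "measurement": 0.13, "information_loss": 0.09}

for name, frac in shares.items():
    # variance fractions scale the significance by their square root
    print(f"{name:17s}: {observed_sigma * np.sqrt(frac):.2f} sigma")
```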

23 pages, 2046 KB  
Article
Does the Forensic Filler-Control Method Reduce Examiner Overconfidence? An Experimental Investigation Using Mock Fingerprint Examiners
by Hannah J. Rath, Bethany Rocha, Andrew M. Smith and Laura Smalarz
Behav. Sci. 2025, 15(9), 1191; https://doi.org/10.3390/bs15091191 - 31 Aug 2025
Viewed by 594
Abstract
Examiner overconfidence is a persistent challenge in the field of forensic science, where testimony overstating the validity of forensic techniques has contributed to numerous wrongful convictions. Scholars have proposed a new method for reducing examiner overconfidence (i.e., subjective confidence that exceeds objective accuracy): the forensic filler-control method. The forensic filler-control method, which includes known non-matching “filler” samples alongside the suspect’s sample, is theorized to reduce examiner overconfidence through the provision of immediate error feedback to examiners following match judgments on fillers. We conducted two experiments that failed to yield support for this claim. Among both an undergraduate student sample (Experiment 1) and a forensic science student sample (Experiment 2), the filler-control method was associated with worse calibration (C) and greater overconfidence (O/U) in affirmative match judgments than the standard method. Moreover, the filler-control method produced less accurate non-match judgments, undermining the exonerating value of forensic analysis (i.e., NPV). However, the filler-control method’s ability to draw false positive matches away from innocent-suspect samples and onto fillers produced more reliable incriminating evidence (i.e., PPV) compared to the standard procedure. Our findings suggest that neither the standard procedure nor the filler-control procedure offers a uniformly superior method of conducting forensic analysis. We suggest alternative procedures for enhancing both the inculpatory and exculpatory value of forensic analysis. Full article
(This article belongs to the Special Issue Social Cognitive Processes in Legal Decision Making)
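Calibration (C) and over/underconfidence (O/U) here are the standard confidence–accuracy statistics, and PPV/NPV are the usual predictive values; a minimal sketch of how they are computed from binned confidence judgments (bin edges, judgments, and counts are placeholders):

```python
import numpy as np

def calibration_stats(confidence, correct, bins=(0, 20, 40, 60, 80, 100)):
    """C: weighted squared gap between mean confidence and accuracy per bin; O/U: mean confidence - accuracy."""
    confidence = np.asarray(confidence, float)      # confidence in percent
    correct = np.asarray(correct, float)            # 1 = correct judgment, 0 = error
    c_stat = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        upper = confidence <= hi if hi == bins[-1] else confidence < hi
        in_bin = (confidence >= lo) & upper
        if in_bin.any():
            gap = confidence[in_bin].mean() / 100 - correct[in_bin].mean()
            c_stat += in_bin.mean() * gap ** 2      # bin weight = proportion of judgments in bin
    over_under = confidence.mean() / 100 - correct.mean()
    return c_stat, over_under

print(calibration_stats([90, 70, 50, 95, 60], [1, 1, 0, 1, 0]))

# PPV/NPV of match vs. non-match judgments against ground truth (placeholder counts)
tp, fp, tn, fn = 40, 5, 80, 15
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
```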

19 pages, 3081 KB  
Article
Temporal and Statistical Insights into Multivariate Time Series Forecasting of Corn Outlet Moisture in Industrial Continuous-Flow Drying Systems
by Marko Simonič and Simon Klančnik
Appl. Sci. 2025, 15(16), 9187; https://doi.org/10.3390/app15169187 - 21 Aug 2025
Viewed by 507
Abstract
Corn drying is a critical post-harvest process to ensure product quality and compliance with moisture standards. Traditional optimization approaches often overlook dynamic interactions between operational parameters and environmental factors in industrial continuous flow drying systems. This study integrates statistical analysis and deep learning to predict outlet moisture content, leveraging a dataset of 3826 observations from an operational dryer. The effects of inlet moisture, target air temperature, and material discharge interval on thermal behavior of the system were evaluated through linear regression and t-test, which provided interpretable insights into process dependencies. Three neural network architectures (LSTM, GRU, and TCN) were benchmarked for multivariate time-series forecasting of outlet corn moisture, with hyperparameters optimized using grid search to ensure fair performance comparison. Results demonstrated GRU’s superior performance in the context of absolute deviations, achieving the lowest mean absolute error (MAE = 0.304%) and competitive mean squared error (MSE = 0.304%), compared to LSTM (MAE = 0.368%, MSE = 0.291%) and TCN (MAE = 0.397%, MSE = 0.315%). While GRU excelled in average prediction accuracy, LSTM’s lower MSE highlighted its robustness against extreme deviations. The hybrid methodology bridges statistical insights for interpretability with deep learning’s dynamic predictive capabilities, offering a scalable framework for real-time process optimization. By combining traditional analytical methods (e.g., regression and t-test) with deep learning-driven forecasting, this work advances intelligent monitoring and control of industrial drying systems, enhancing process stability, ensuring compliance with moisture standards, and indirectly supporting energy efficiency by reducing over drying and enabling more consistent operation. Full article
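A minimal sketch of a GRU forecaster of the kind benchmarked here, mapping a window of process variables to the next outlet-moisture reading in PyTorch; window length, feature count, and layer sizes are placeholders rather than the paper's tuned hyperparameters:

```python
import torch
import torch.nn as nn

class MoistureGRU(nn.Module):
    """Maps a window of process variables to the next outlet-moisture value."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        _, h = self.gru(x)                # h: (1, batch, hidden) -- last hidden state
        return self.head(h[-1])           # (batch, 1) predicted outlet moisture

model = MoistureGRU()
window = torch.randn(32, 48, 5)           # e.g., 48 past readings of 5 process variables
pred = model(window)
loss = nn.L1Loss()(pred, torch.randn(32, 1))   # MAE-style training objective, as compared in the paper
```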

12 pages, 545 KB  
Article
Signal Detection Based on Separable CNN for OTFS Communication Systems
by Ying Wang, Zixu Zhang, Hang Li, Tao Zhou and Zhiqun Cheng
Entropy 2025, 27(8), 839; https://doi.org/10.3390/e27080839 - 7 Aug 2025
Viewed by 633
Abstract
This paper proposes a low-complexity signal detection method for orthogonal time frequency space (OTFS) communication systems, based on a separable convolutional neural network (SeCNN), termed SeCNN-OTFS. A novel SeparableBlock architecture is introduced, which integrates residual connections and a channel attention mechanism to enhance feature discrimination and training stability under high Doppler conditions. By decomposing standard convolutions into depthwise and pointwise operations, the model achieves a substantial reduction in computational complexity. To validate its effectiveness, simulations are conducted under a standard OTFS configuration with 64-QAM modulation, comparing the proposed SeCNN-OTFS with conventional CNN-based models and classical linear estimators, such as least squares (LS) and minimum mean square error (MMSE). The results show that SeCNN-OTFS consistently outperforms LS and MMSE, and when the signal-to-noise ratio (SNR) exceeds 12.5 dB, its bit error rate (BER) performance becomes nearly identical to that of 2D-CNN. Notably, SeCNN-OTFS requires only 19% of the parameters compared to 2D-CNN, making it highly suitable for resource-constrained environments such as satellite and IoT communication systems. For scenarios where higher accuracy is required and computational resources are sufficient, the CNN-OTFS model—with conventional convolutional layers replacing the separable convolutional layers—can be adopted as a more precise alternative. Full article
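The parameter savings quoted for the separable design follow from splitting each standard convolution into a depthwise step plus a pointwise step; a small PyTorch comparison (channel counts are illustrative):

```python
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

c_in, c_out, k = 64, 64, 3

standard = nn.Conv2d(c_in, c_out, k, padding=1)
separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, k, padding=1, groups=c_in),   # depthwise: one k x k filter per channel
    nn.Conv2d(c_in, c_out, 1),                           # pointwise: 1 x 1 channel mixing
)

print(n_params(standard), n_params(separable))           # 36,928 vs. 4,800 weights for this block
```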

23 pages, 1191 KB  
Article
The Power of Interaction: Fan Growth in Livestreaming E-Commerce
by Hangsheng Yang and Bin Wang
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 203; https://doi.org/10.3390/jtaer20030203 - 6 Aug 2025
Viewed by 1188
Abstract
Fan growth serves as a critical performance indicator for the sustainable development of livestreaming e-commerce (LSE). However, existing research has paid limited attention to this topic. This study investigates the unique interactive advantages of LSE over traditional e-commerce by examining how interactivity drives fan growth through the mediating role of user retention and the moderating role of anchors’ facial attractiveness. To conduct the analysis, real-time data were collected from 1472 livestreaming sessions on Douyin, China’s leading LSE platform, between January and March 2023, using Python-based (3.12.7) web scraping and third-party data sources. This study operationalizes key variables through text sentiment analysis and image recognition techniques. Empirical analyses are performed using ordinary least squares (OLS) regression with robust standard errors, propensity score matching (PSM), and sensitivity analysis to ensure robustness. The results reveal the following: (1) Interactivity has a significant positive effect on fan growth. (2) User retention partially mediates the relationship between interactivity and fan growth. (3) There is a substitution effect between anchors’ facial attractiveness and interactivity in enhancing user retention, highlighting the substitution relationship between anchors’ personal characteristics and livestreaming room attributes. This research advances the understanding of interactivity’s mechanisms in LSE and, notably, is among the first to explore the marketing implications of anchors’ facial attractiveness in this context. The findings offer valuable insights for both academic research and managerial practice in the evolving livestreaming commerce landscape. Full article

26 pages, 1698 KB  
Article
Photoplethysmography-Based Blood Pressure Calculation for Neonatal Telecare in an IoT Environment
by Camilo S. Jiménez, Isabel Cristina Echeverri-Ocampo, Belarmino Segura Giraldo, Carolina Márquez-Narváez, Diego A. Cortes, Fernando Arango-Gómez, Oscar Julián López-Uribe and Santiago Murillo-Rendón
Electronics 2025, 14(15), 3132; https://doi.org/10.3390/electronics14153132 - 6 Aug 2025
Viewed by 663
Abstract
This study presents an algorithm for non-invasive blood pressure (BP) estimation in neonates using photoplethysmography (PPG), suitable for resource-constrained neonatal telecare platforms. Using the Windkessel model, the algorithm processes PPG signals from a MAX 30102 sensor (Analog Devices, formerly Maxim Integrated; San Jose, CA, USA), filtering motion noise and extracting cardiac cycle time and systolic time (ST). These parameters inform a derived blood flow signal, the input for the Windkessel model. Calibration utilizes average parameters based on the newborn’s post-conceptional age, weight, and gestational age. Performance was validated against readings from a standard non-invasive BP cuff at SES Hospital Universitario de Caldas. Two parameter estimation methods were evaluated. The first yielded root mean square errors (RMSEs) of 24.14 mmHg for systolic and 19.13 mmHg for diastolic BP. The second method significantly improved accuracy, achieving RMSEs of 2.31 mmHg and 5.13 mmHg, respectively. The successful adaptation of the Windkessel model to single PPG signals allows for BP calculation alongside other physiological variables within the telecare program. A device analysis was conducted to determine the appropriate device based on computational capacity, availability of programming tools, and ease of integration within an Internet of Things environment. This study paves the way for future research that focuses on parameter variations due to cardiovascular changes in newborns during their first month of life. Full article
(This article belongs to the Section Circuit and Signal Processing)
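A minimal sketch of the two-element Windkessel relation underlying the pressure estimate, driven by a placeholder pulsatile flow and integrated with SciPy; the age- and weight-based calibration described in the paper is not modeled, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C = 3.8, 0.1          # peripheral resistance and arterial compliance (illustrative values)
hr = 150 / 60            # neonatal heart rate of ~150 bpm, in beats per second

def flow(t):
    """Placeholder pulsatile inflow: half-sine systole over the first 40% of each cycle."""
    phase = (t * hr) % 1.0
    return 45.0 * np.sin(np.pi * phase / 0.4) if phase < 0.4 else 0.0

def dP_dt(t, P):
    # Two-element Windkessel: C * dP/dt = Q(t) - P / R
    return [(flow(t) - P[0] / R) / C]

sol = solve_ivp(dP_dt, (0, 5), [40.0], max_step=1e-3)
steady = sol.y[0][sol.t > 4]                     # discard the initial transient
print("systolic ~", steady.max(), "  diastolic ~", steady.min())
```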
