Search Results (1,579)

Search Parameters:
Keywords = end-user application

35 pages, 2202 KB  
Article
LLM-Guided Dynamic Security Testing of Android Applications: A Comparative Study Across Selected Models
by Aleksandra Łabęda and Mariusz Sepczuk
Electronics 2026, 15(10), 2106; https://doi.org/10.3390/electronics15102106 - 14 May 2026
Abstract
The rapid growth of publicly available digital services increases the need for scalable security assessment. This is particularly important for software directly used by end users, such as Android applications. Due to staff shortages and financial constraints, small and medium-sized enterprises are often unable to test their applications for vulnerabilities. One possible support mechanism is the use of large language models (LLMs) to assist testers during such assessments. The aim of this study was to investigate the possibility of using an LLM as an interactive guide for dynamic application security testing (DAST) of Android applications. For this purpose, five LLM-based systems were compared: Gemini 2.5 Flash, GPT-oss 120B, Llama 3.3 70B, Qwen 3 32B, and Trinity Large Preview, accessed via OpenRouter. The models were evaluated on intentionally vulnerable Android applications using weighted step-level scoring and three selected exploit guidance scenarios. In the main guidance experiment, Gemini achieved the highest combined Fully Discovered and Partially Discovered (FD + PD) detection rate of 79.1% in the representative run, while repeated runs for selected models showed limited aggregate variability. The results also indicate that more detailed prompts improve the quality of generated guidance. The findings suggest that LLMs can serve as interactive guides for DAST of Android applications, although they should be treated as supporting tools rather than standalone security-testing systems. Full article
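The abstract names weighted step-level scoring and a combined FD + PD rate without defining them, so the following is only a minimal sketch of one plausible scheme; the weights, the status labels, and the half-credit for partially discovered steps are all assumptions, not the paper's actual rubric:

```python
def step_score(steps):
    """steps: list of (weight, status) pairs, status in {'FD', 'PD', 'miss'}."""
    credit = {"FD": 1.0, "PD": 0.5, "miss": 0.0}  # PD at half credit (assumed)
    total = sum(w for w, _ in steps)
    return sum(w * credit[s] for w, s in steps) / total

def fd_pd_rate(steps):
    """Share of total step weight that was at least partially discovered."""
    total = sum(w for w, _ in steps)
    return sum(w for w, s in steps if s != "miss") / total

run = [(3, "FD"), (2, "PD"), (1, "miss"), (2, "FD")]
print(step_score(run), fd_pd_rate(run))  # 0.75 0.875
```

Separating the weighted score from the FD + PD rate mirrors how the abstract reports detection coverage independently of guidance quality.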
27 pages, 1695 KB  
Article
SABER-BIM: A Component-Level Adaptive Lightweighting Framework for Digital Twin BIM Models
by Zhengbing Yang, Mahemujiang Aihemaiti, Beilikezi Abudureheman and Hongfei Tao
Sensors 2026, 26(10), 2990; https://doi.org/10.3390/s26102990 - 9 May 2026
Viewed by 488
Abstract
Lightweighting Building Information Modeling (BIM) models for digital-twin applications requires balancing aggressive geometric reduction with component-level engineering tolerances and mesh usability. Most geometric simplification pipelines apply uniform ratios or hand-tuned heuristics, which struggle to accommodate the strong heterogeneity of BIM components in functional role, geometric complexity, and detail distribution. End-to-end learning-based simplification can be adaptive, but it often entangles decision-making with geometric editing, making engineering constraints difficult to enforce and audit. We present Semantic-Geometric Co-driven Adaptive Budget Estimation and Reduction for BIM (SABER-BIM), which formulates lightweighting as a component-level face-budget allocation problem. Conditioned on Industry Foundation Classes (IFC) types and structure-sensitive geometric descriptors, SABER-BIM predicts target face counts for individual components and then meets a user-specified global budget through global scaling. The predicted budgets are executed by a robust geometric backend (e.g., quadric error metrics, QEM), yielding an auditable and easily deployable pipeline. To address the absence of direct supervision, we introduce an offline pseudo-ground-truth procedure that searches for the minimum feasible target face count for each component under semantic-aware tolerance and mesh-validity constraints. Experiments on the IFCNet dataset show that SABER-BIM allocates budgets more effectively under identical global constraints, improving stability in both geometric error control and engineering usability. Full article
(This article belongs to the Section Internet of Things)
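The global-scaling step described above — per-component predicted face budgets rescaled to meet a user-specified global budget — can be sketched as follows. The proportional rounding and the `min_faces` mesh-validity floor are assumptions for illustration, not SABER-BIM's exact rule:

```python
def allocate_budgets(predicted, global_budget, min_faces=4):
    """predicted: {component_id: predicted_face_count}; returns scaled budgets."""
    total = sum(predicted.values())
    if total <= global_budget:
        return dict(predicted)  # already within the global budget
    scale = global_budget / total
    # proportional scaling; the floor keeps every component a valid mesh,
    # so the final sum can exceed the budget slightly when floors bind
    return {cid: max(min_faces, round(n * scale)) for cid, n in predicted.items()}

budgets = allocate_budgets({"wall": 8000, "pipe": 1500, "valve": 500}, 5000)
print(budgets)  # {'wall': 4000, 'pipe': 750, 'valve': 250}
```

The allocated budgets would then be handed to a geometric backend such as QEM, which is what keeps the decision-making auditable and separate from the mesh editing.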
8 pages, 604 KB  
Proceeding Paper
uqStudio: A Modular Framework for Uncertainty Quantification in Multidisciplinary Design
by Tawfiq Ahmed and Marko Alder
Eng. Proc. 2026, 133(1), 87; https://doi.org/10.3390/engproc2026133087 - 7 May 2026
Viewed by 137
Abstract
Uncertainty quantification (UQ) is essential for the robust and competitive design of climate-friendly transportation systems, such as aircraft and space launch systems. However, supporting software applications for UQ are fragmented across numerous open-source libraries, often require in-depth knowledge of the mathematics underlying UQ, and commercial solutions often involve licensing costs. This can make it difficult for design experts to take uncertainties into account. To address this issue, we propose a modular, web-based framework that will guide practitioners through the most common UQ processes, such as statistical sampling, propagation through design workflows, and statistical analysis of the results. Adopting a modern client-server architecture, a backend service, called uqFramework, wraps relevant software libraries for each of the aforementioned steps. The current version focuses on probabilistic approaches, enabling the generation of Design-of-Experiment (DOE) inputs via Quasi-Monte Carlo, Latin Hypercube, and Low Discrepancy Sequence sampling methods. Furthermore, it enables the parallel execution of design and analysis workflows via DLR’s Remote Component Environment (RCE) or Python scripts. Finally, uqFramework performs global sensitivity analyses using Sobol, FAST, or Morris techniques. An interactive front-end application called uqStudio connects to uqFramework through a Representational State Transfer (REST) interface. It guides users through the UQ process via an intuitive, step-by-step interface. Interactive visualizations enable detailed exploration of each step. The framework’s capabilities are illustrated through two examples, the Ishigami function and a multidisciplinary UAV design study, verifying its precision, adaptability, and user-friendliness. 
We demonstrate that uqStudio enables researchers to conduct integrated UQ studies covering uncertainty specification, propagation, and sensitivity analysis without the difficulty of installing and properly using fragmented libraries. Future work includes extending visualization capabilities and integrating surrogate-modeling capabilities to enable faster workflow execution. Full article
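The Ishigami function used above as a verification example is a standard UQ benchmark with known analytic Sobol indices. uqFramework itself wraps existing libraries for this; the sketch below only illustrates the underlying math for the usual parameters a = 7, b = 0.1:

```python
import math

def ishigami_sobol(a=7.0, b=0.1):
    """Analytic first-order Sobol indices of the Ishigami function."""
    p4, p8 = math.pi ** 4, math.pi ** 8
    total = a ** 2 / 8 + b * p4 / 5 + b ** 2 * p8 / 18 + 0.5  # total variance
    v1 = 0.5 * (1 + b * p4 / 5) ** 2  # main effect of x1
    v2 = a ** 2 / 8                   # main effect of x2
    v3 = 0.0                          # x3 acts only through interaction with x1
    return v1 / total, v2 / total, v3 / total

s1, s2, s3 = ishigami_sobol()
print(round(s1, 3), round(s2, 3), s3)  # 0.314 0.442 0.0
```

A Sobol, FAST, or Morris routine run against the same function should reproduce these values, which is what makes the Ishigami function a convenient precision check for a UQ pipeline.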
13 pages, 1143 KB  
Article
Near-Infrared Spectroscopy for the Single-Kernel Analysis of Sorghum Protein Content
by Princess Tiffany D. Mendoza, Paul R. Armstrong, Erin D. Scully, Xiaorong Wu, Kamaranga H. S. Peiris, Scott R. Bean and Kaliramesh Siliveru
Sensors 2026, 26(10), 2936; https://doi.org/10.3390/s26102936 - 7 May 2026
Viewed by 559
Abstract
Protein content is an important quality trait in sorghum that influences breeding approaches, end-use applications, and market value. Shaped by genetic, agronomic, and environmental variability, sorghum is characterized by wide variation in composition, which may also be evident among kernels from the same sample. This study developed and evaluated a method for the non-destructive and rapid prediction of protein content in individual sorghum kernels using single-kernel near-infrared spectroscopy (SKNIR). Calibration models were developed using partial least squares regression, applying different pre-processing techniques to spectra collected from intact kernels and using reference protein content values obtained from the LECO combustion method. The best model was obtained using multiplicative scatter correction as pre-processing, resulting in a standard error of prediction of 0.83% and a relative predictive determinant of 3.40, indicative of a predictive ability sufficient for quality control and sorting applications. These results highlight the potential of SKNIR to capture the inter-kernel variability in sorghum protein content and enhance screening for grain quality in breeding and grain processing. Full article
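Multiplicative scatter correction (MSC), the pre-processing step that gave the best model above, is a small, well-known algorithm: each spectrum is regressed against the mean spectrum and then rescaled. A minimal library-free sketch:

```python
def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the mean."""
    n = len(spectra[0])
    mean = [sum(col) / len(spectra) for col in zip(*spectra)]
    m_bar = sum(mean) / n
    corrected = []
    for x in spectra:
        x_bar = sum(x) / n
        # ordinary least squares fit x ~ a + b * mean
        b = sum((m - m_bar) * (v - x_bar) for m, v in zip(mean, x)) / \
            sum((m - m_bar) ** 2 for m in mean)
        a = x_bar - b * m_bar
        corrected.append([(v - a) / b for v in x])
    return corrected

# two spectra differing only by a multiplicative scatter factor
# collapse onto the mean spectrum after correction
print(msc([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]))
```

Removing the additive and multiplicative scatter components is what lets the PLS regression focus on the chemistry-related variation in the spectra.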
18 pages, 639 KB  
Article
Digitalization of Last-Mile Delivery: Comparative Assessment of Mobile Applications for Urban Parcel Locker Networks
by Maria Cieśla and Artur Budzyński
Urban Sci. 2026, 10(5), 247; https://doi.org/10.3390/urbansci10050247 - 4 May 2026
Viewed by 499
Abstract
The rapid growth of e-commerce has significantly increased direct-to-consumer deliveries, putting competitive and environmental pressure on urban last-mile logistics. Out-of-home (OOH) delivery options, particularly parcel lockers, are increasingly integrated into city mobility strategies to reduce congestion and emissions. However, the role of mobile applications front-ending these networks remains under-researched. This study aims to evaluate the user experience (UX) and functional adequacy across three major parcel-locker apps in Poland: InPost Mobile, DPD Mobile, and ORLEN Paczka. A cross-sectional, mixed-methods approach combining in situ corridor testing and structured post-task questionnaires was employed with 30 users at real locker locations in Katowice. The results indicate that interface simplicity, predictable information flow, and technical stability are the dimensions most consistently associated with higher user ratings. InPost Mobile consistently achieved the highest ratings due to its focus on core workflows, whereas applications emphasizing broader functional coverage (ORLEN Paczka) exhibited usability trade-offs, and DPD Mobile underperformed in speed and stability. Because the study relied on a small convenience sample (n = 30) in a single city and was skewed toward younger adults (18–24), the findings should be interpreted as exploratory and primarily reflective of a digitally proficient demographic rather than the broader user population. Full article
(This article belongs to the Special Issue Advances in Urban Planning and the Digitalization of City Management)
35 pages, 27039 KB  
Article
A Complete Grocery Pick-and-Pack Application Using a Computationally Lightweight Vision-Based Mobile Manipulator
by Thanavin Mansakul, Gilbert Tang, Phil Webb, Jamie Rice, Daniel Oakley and James Fowler
Sensors 2026, 26(9), 2860; https://doi.org/10.3390/s26092860 - 3 May 2026
Viewed by 1061
Abstract
Mobile manipulators have become essential platforms for autonomous tasks that demand high-quality performance and efficient operational processes. This paper presents a complete grocery pick-and-pack system for a mobile manipulator, integrating a graphical user interface (GUI) with an end-to-end vision-based grasp detection pipeline designed for lightweight computation. The system is evaluated on the Grocery Pick-and-Pack Benchmark (Level-3), the most challenging level due to deformable objects, dimensional constraints, and strict grasp-point requirements. Experimental results demonstrate an average success rate of 92% across five item classes, with the deformable sweet bag the most challenging at 60% and an average execution time of 7.5 s on an edge device. The system achieves strong computational efficiency, reflected by a compute-to-speed ratio (CSR) of 0.008, with a total model size of only 30.9 MB. Performance is further validated across multiple hardware platforms and under real competition scenarios in the European Robotics League 2025. The findings highlight the practical impact of lightweight, vision-based mobile manipulation and provide insights into current challenges and future research directions for autonomous robotic applications. Full article
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)
22 pages, 2528 KB  
Article
Demographic Patterns in the Aesthetic Acceptance of Building-Integrated Photovoltaics in Apartment Housing: Implications for Solar Energy Design and Policy
by Jenan Abu Qadourah, Saba Alnusairat and Rund Hiyasat
Buildings 2026, 16(9), 1758; https://doi.org/10.3390/buildings16091758 - 29 Apr 2026
Viewed by 251
Abstract
This study examines how demographic and socioeconomic characteristics shape the aesthetic perception of building-integrated photovoltaics (BIPV) in apartment housing. A survey of 418 respondents was conducted using visual scenarios showing PV integrated into rooftops, facades, balconies, and windows. Data were analysed using descriptive statistics, Pearson correlations, one-way analysis of variance (ANOVA) with Scheffé post hoc tests, multiple regression, and thematic analysis of open-ended responses. The results indicate that aesthetic responses to BIPV are not uniform across user groups. Younger respondents, participants with higher educational attainment, and respondents working in energy or technical fields tended to be more receptive to certain forms of BIPV integration, while architecture and design professionals were generally more critical of visually dominant applications. Rooftop PV received the highest overall ratings, while façade- and balcony-integrated applications generated greater disagreement. The regression models explained only a limited share of the variance, indicating that demographic factors are associated with broad perception patterns but do not strongly predict individual aesthetic judgement. The study offers context-specific evidence for facade-sensitive design guidance, retrofit prioritisation, and targeted stakeholder engagement in Jordan, with only cautious relevance to comparable settings pending cross-cultural validation. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
24 pages, 1924 KB  
Article
BIM-SeL: Building Information Modelling Data-Adaptive Natural-Language Sequence Labeling Using Machine Learning
by Qi Qiu, Xiaoping Zhou, Yukang Wang, Jichao Zhao, Maozu Guo and Xin Zhang
Buildings 2026, 16(9), 1731; https://doi.org/10.3390/buildings16091731 - 27 Apr 2026
Viewed by 203
Abstract
Building Information Modelling has become a common paradigm in the construction industry. To bridge the gap between end users and BIM data, some studies have adopted Natural Language Processing (NLP) in BIM applications. Due to incorrect segmentation of users' natural language, most NLP-based BIM applications provide users with redundant or inaccurate BIM data. Sequence labeling has been widely studied in NLP to find the correct segments of a natural language sequence. However, existing sequence labeling schemes perform poorly on specific BIM models. To address this issue, this study proposed a BIM-model-adaptive natural-language Sequence Labeling scheme using machine learning, termed BIM-SeL. We first presented the problem definition of sequence labeling and the overall framework of BIM-SeL. BIM-SeL employs a Conditional Random Field (CRF) to model the sequence labeling problem and machine learning to train a sequence labeling model on a corpus of millions of entries from the news and web domains. A BIM dictionary extraction algorithm is then developed to collect the exclusive vocabularies of the BIM models. A BIM dictionary-enhanced sequence labeling scheme is proposed to achieve BIM-model-adaptive sequence labeling by jointly utilizing the trained sequence labeling model and the BIM dictionary. To further enhance contextual representation and compare with state-of-the-art deep learning methods, we extend BIM-SeL with an advanced BERT-BiLSTM-CRF model under the same framework. The effectiveness of BIM-SeL was verified on two real-world projects, the BUCEA Library and a water pump house. The experiment results showed that the sequence accuracies of BIM-SeL in the BUCEA Library and water pump house projects reached 92.61% and 93.41%, respectively, and the vocabulary accuracies reached 96.77% and 97.32%, respectively.
Compared with the original CRF-based sequence labeling algorithm, the BIM-SeL improved the sequence accuracies by 7.05 and 18.50 times, and the vocabulary accuracies by 1.33 and 2.48 times, in the two projects. Meanwhile, the BERT-BiLSTM-CRF variant obtains up to 99.93% vocabulary accuracy on real BIM test sequences, further validating the generality and advancement of the proposed framework. These observations proved that the BIM-SeL contributed to the natural language understanding of BIM applications using BIM data and could bridge the gap between users and BIM data. Full article
(This article belongs to the Special Issue Intelligence and Automation in Construction—2nd Edition)
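The dictionary-enhanced idea can be illustrated with greedy forward maximum matching: vocabulary extracted from the BIM model is preferred over a character-level fallback. This is only the dictionary side of the scheme (BIM-SeL combines it with a trained CRF model), and the sample vocabulary is invented:

```python
def dict_segment(text, bim_vocab, max_len=8):
    """Greedy longest-match segmentation against a BIM vocabulary."""
    vocab = set(bim_vocab)
    out, i = [], 0
    while i < len(text):
        # try the longest candidate first, shrinking toward one character
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab:
                out.append(text[i:j])
                i = j
                break
        else:  # no vocabulary entry starts here; emit one character
            out.append(text[i])
            i += 1
    return out

print(dict_segment("firepumproom", {"fire", "pump", "room", "pumproom"}))
```

Because "pumproom" is in the model-specific dictionary, it is kept as one segment instead of being split into generic-corpus tokens, which is the failure mode the abstract describes for off-the-shelf sequence labelers.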
15 pages, 1743 KB  
Article
Essential HDRescue: A Co-Targeting Strategy to Enhance Precision Genome Editing by Co-Editing Essential Genes
by Jamaica F. Siwak, Jon P. Connelly and Shondra M. Pruett-Miller
Cells 2026, 15(9), 768; https://doi.org/10.3390/cells15090768 - 24 Apr 2026
Viewed by 751
Abstract
Genome editing is widely used and conceptually simple, yet in practice, it is hindered by laborious workflows and high costs. These challenges stem from the difficulty of identifying and isolating cells that contain the desired user-defined modifications, a problem compounded by the wide variability in editing efficiencies across cell types. While homology-directed repair (HDR) provides a mechanism for precise genome modification following nuclease-induced double-strand breaks (DSBs), it is frequently outcompeted by the dominant mutagenic non-homologous end-joining (NHEJ) pathway in mammalian cells. Therefore, we developed a novel enrichment method, Essential HDRescue, to increase the frequency of HDR events at a target site by co-targeting an essential genomic locus. Using both intrinsic positive and negative selection at a common essential gene, we enabled enrichment of precise editing events at a second, unlinked target site. We demonstrated that co-targeting essential genes in cancer cell lines and iPSCs increased HDR rates without the need for an exogenous reporter or selective drug. Analysis of resulting clones revealed that Essential HDRescue produced up to a 6-fold increase in single-allele edits and an ~4-fold increase in homozygous edits relative to single-targeted controls. By harnessing the intrinsic cellular dependencies that arise from DSB repair at essential loci, Essential HDRescue offers a widely applicable method to improve precise genome editing outcomes in mammalian cells, leaving only a minimal, protein-silent scar at the essential gene. Full article
(This article belongs to the Special Issue Genome Editing in Biomedicine)
25 pages, 37592 KB  
Article
Deep-Learning-Based Mobile Application for Real-Time Recognition of Cultural Artifacts in Museum Environments
by Pablo Minango, Marcelo Zambrano, Carmen Inés Huerta Suarez and Juan Minango
Appl. Sci. 2026, 16(9), 4064; https://doi.org/10.3390/app16094064 - 22 Apr 2026
Viewed by 563
Abstract
The dissemination and conservation of cultural heritage are challenged by accessibility in museums, where traditional information delivery systems are at times ineffective in terms of interaction with visitors. This paper presents RumiArt IA, a mobile application that identifies cultural objects in real time without relying on internet connectivity. The system, developed for the Rumiñahui Museum and Cultural Center, Ecuador, uses transfer learning on the MobileNetV2 architecture with INT8 post-training quantization to identify 21 cultural artifacts spread across six thematic rooms. The experiment involved building a dataset of 36,000 images under diverse lighting conditions, viewing angles, and distances; furthermore, artificial transformations were explicitly crafted to simulate real museum conditions such as glass reflections and non-frontal capture angles. Quantization reduced each model from 2.4 MB to 775 KB, with an accuracy loss of no more than 0.5 percent (DKL < 0.05). Assessment on 9450 validation images yielded an overall accuracy of 92.2%, with an inference time of 63 ms on current high-throughput devices and 215 ms on mid-range hardware from 2020. Practical validation involving 50 museum visitors showed a success rate of 93.7%, with average user satisfaction of 8.5/10; 87% indicated they would recommend the application. An in-depth error study of the most difficult room (88.3% accuracy) indicated that 47% of the errors were due to camera angles that blocked out distinguishing features, and 22% were caused by display case reflections and the shadows of visitors. These results indicate that end-to-end machine learning can provide consistent cultural heritage recognition in resource-constrained settings, but its performance is susceptible to physical capture factors that cannot be resolved by data augmentation. The system's offline mode and low memory footprint (less than 90 MB with all six models loaded) are especially relevant where cloud connectivity is not guaranteed. Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)
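The INT8 post-training quantization and the DKL check reported above can be sketched on a single vector: symmetric quantization to int8, dequantization, and a KL-divergence comparison of the softmax outputs. The sample logits are invented, and real pipelines quantize whole tensors with calibrated scales:

```python
import math

def quantize_int8(w):
    """Symmetric post-training quantization of a float vector to int8."""
    scale = max(abs(v) for v in w) / 127 or 1.0  # fall back to 1.0 for all-zero input
    q = [max(-128, min(127, round(v / scale))) for v in w]
    return q, scale

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

logits = [2.0, 0.5, -1.2, 0.1]
q, scale = quantize_int8(logits)
dequantized = [v * scale for v in q]
dkl = kl_divergence(softmax(logits), softmax(dequantized))
print(q, round(dkl, 6))
```

A small DKL between the pre- and post-quantization output distributions is exactly the kind of evidence behind the abstract's "accuracy loss not reaching more than 0.5 percent (DKL < 0.05)" claim.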
41 pages, 3508 KB  
Systematic Review
Who, Where, What, and How to Nudge: A Systematic Review of Co-Designed Digital Nudges for Behavioral Interventions
by Alaa Ziyud, Khaled Al-Thelaya and Jens Schneider
Multimodal Technol. Interact. 2026, 10(4), 43; https://doi.org/10.3390/mti10040043 - 21 Apr 2026
Viewed by 860
Abstract
Digital nudges refer to subtle modifications in digital choice architectures that are increasingly applied across domains such as healthcare, human–computer interactions, and behavioral science. However, existing approaches often overlook users’ needs, contextual factors, and ethical considerations related to transparency and autonomy. This systematic literature review, guided by PRISMA 2020, examines the integration of co-design methodologies in digital nudging across four dimensions: participants, application domains, nudge forms, and development methods. The findings show that co-design is primarily driven by end-users, supported by domain experts and technology specialists. Applications are concentrated in health-related contexts, particularly chronic disease management and mental health. The effectiveness of priming varied across studies, with some reporting short-term benefits and others indicating user fatigue, suggesting context-dependent impact and limited long-term effectiveness. Full article
38 pages, 2768 KB  
Review
Sulla coronaria, a Multifunctional Legume for Climate-Smart Agriculture and the Green Economy: A Review
by Roberta Rossi, Giovanna Piluzza and Leonardo Sulas
Agronomy 2026, 16(8), 813; https://doi.org/10.3390/agronomy16080813 - 15 Apr 2026
Viewed by 404
Abstract
Climate change threatens crop yields and farming profitability, especially in drought-prone regions, requiring a transition to climate-resilient farming systems. Concurrently, growing demand for health-promoting and bio-based materials is creating new market opportunities for farmers. Sulla (Sulla coronaria Medik; syn. Hedysarum coronarium L.), a Mediterranean forage crop, may represent a strategic resource for sustainable intensification by simultaneously providing high-value commodities and a wide range of ecosystem services. This review explores the multifunctional potential of sulla following a holistic approach and is structured in thematic chapters exploring: i. agronomy, ii. ecosystem services and agroecological value, iii. the plant biochemical profile, iv. emerging applications for the bio-based industry, and v. genetic diversity (including rhizobia diversity) and breeding perspectives for target environments and end uses. A SWOT analysis synthesizes strengths, research gaps, and bottlenecks hindering large-scale adoption and valorization. The review proposes a strategic framework matching research priorities with specific, actionable goals. It aims to increase awareness of the multifaceted value of sulla as a promising model legume for increasing sustainability in agriculture and promoting product diversification and farming profitability, while ensuring important ecosystem benefits. Full article
(This article belongs to the Section Agroecology Innovation: Achieving System Resilience)
19 pages, 3225 KB  
Article
Metaheuristic Optimized Random Forest Regression with Streamlit Web Application for Predicting Jute Yarn Tenacity
by Nageshkumar T, Avijit Das, Sanjoy Debnath and D. B. Shakyawar
Textiles 2026, 6(2), 46; https://doi.org/10.3390/textiles6020046 - 14 Apr 2026
Viewed by 430
Abstract
Yarn tenacity is one of the vital quality parameters that determine performance, fabric durability and end-use suitability. The tenacity of a yarn is largely influenced by the characteristics of the fibres used. The physical properties of jute fibres, including root content, defects, bundle strength, and fineness, exert a significant influence on yarn tenacity. This study utilized metaheuristic-optimized random forest regression (RFR) to predict jute yarn tenacity from fibre parameters. The hyperparameters of the RFR models were optimized using four metaheuristic algorithms: the whale optimization algorithm (WOA), grey wolf optimization (GWO), beetle antennae search (BAS) and ant colony optimization (ACO). The models utilized a dataset comprising 414 experimental data points, with 70% of the data for training and 30% for testing, using input variables such as bundle strength (g/tex), defects (%), root content (%) and fineness (tex) to predict yarn tenacity (cN/tex). The developed models effectively predicted yarn tenacity; however, RFR–GWO achieved slightly better performance, with an R2 of 1.0 for the training set and 0.96 for the test set. Regarding execution time, RFR–GWO was also the fastest, requiring only 14.25 s. SHAP analysis revealed that the bundle strength and root content of jute fibre are the most influential factors, whereas defects and fineness exert the least influence on the model's prediction. The best model, RFR–GWO, was deployed in an interactive Streamlit web application, offering an intuitive and user-friendly platform for the real-time estimation of yarn tenacity. Full article
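Grey wolf optimization, the best-performing tuner above, is compact enough to sketch in full. The loop below minimizes a toy two-parameter "validation error" surface standing in for cross-validated RFR error; the wolf count, iteration budget, and toy objective are assumptions, not the paper's settings:

```python
import random

def gwo(objective, bounds, n_wolves=12, n_iter=60, seed=1):
    """Minimize objective over box bounds with grey wolf optimization."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=objective)
        leaders = [list(w) for w in wolves[:3]]  # alpha, beta, delta snapshot
        a = 2 - 2 * t / n_iter  # exploration coefficient decays from 2 to 0
        for w in wolves:
            for d, (lo, hi) in enumerate(bounds):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))  # average pull toward the leaders
    return min(wolves, key=objective)

# toy stand-in for cross-validated model error, optimum near
# (300 trees, depth 12); the real objective would refit the RFR
error = lambda p: (p[0] - 300) ** 2 / 1e4 + (p[1] - 12) ** 2
best = gwo(error, [(50, 500), (2, 30)])
print([round(v, 1) for v in best])
```

Swapping `error` for a function that trains an RFR on the 70% split and scores it on held-out data turns this sketch into the kind of RFR–GWO tuner the abstract describes.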
28 pages, 4829 KB  
Article
OH-MEMA: An Integrated One Health Mixed-Effects Modeling Approach for Syndromic Surveillance
by Aseel Basheer, Parisa Masnadi Khiabani, Wolfgang Jentner, Aaron Wendelboe, Jason R. Vogel, Katrin Gaardbo Kuhn, Michael C. Wimberly, Dean Hougen and David Ebert
J. Clin. Med. 2026, 15(8), 2966; https://doi.org/10.3390/jcm15082966 - 14 Apr 2026
Abstract
Background/Objectives: Integrating heterogeneous One Health time series into transparent and usable surveillance workflows remains difficult because data preparation, modeling, and interpretation are often separated across tools. In this paper, we introduce OH-MEMA (One Health Mixed-Effects Modeling and Analytics), an interactive visual analytics framework that integrates heterogeneous One Health data streams, including human clinical outcomes, environmental factors, and wastewater surveillance data, to support syndromic surveillance and pandemic preparedness. Methods: The system enables users to upload and analyze multi-source datasets through an interactive web-based interface. The modeling component supports fixed effects for multi-source predictors, random effects for spatial, temporal, and demographic grouping variables, optional random slopes, and rolling time-series validation. Model results are visualized as time series comparing observed and predicted outcomes, with evaluation metrics including Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and correlation. To support iterative exploration, the system incorporates analytic provenance through a visual model tree that records prior configurations. Results: OH-MEMA was validated through both quantitative and qualitative evaluations. Quantitatively, mixed-effects models were assessed across multiple counties and outcomes using RMSE, MAE, and correlation, demonstrating robust predictive performance. Qualitatively, expert users, including epidemiologists and disease surveillance analysts, evaluated the system using the NASA Task Load Index and open-ended interviews, indicating improved interpretability, manageable cognitive workload, and effective workflow integration. Conclusions: OH-MEMA provides an interpretable, human-in-the-loop platform for exploratory forecasting and comparative model analysis in syndromic surveillance. The framework effectively bridges data integration, modeling, and interpretation, supporting user-centered analytical reasoning and decision-making in One Health applications. Full article
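The rolling time-series validation and error metrics described in the abstract can be sketched as follows. This is a minimal illustration, not OH-MEMA's actual mixed-effects model: the naive last-value forecaster, the window size, and the synthetic counts are all assumptions made for the example.

```python
import math

def rolling_validation(series, min_train=4, forecast=lambda hist: hist[-1]):
    """Rolling-origin validation: at each step t, forecast from
    series[:t] and score the prediction against observed series[t]."""
    preds, actuals = [], []
    for t in range(min_train, len(series)):
        preds.append(forecast(series[:t]))
        actuals.append(series[t])
    errors = [p - a for p, a in zip(preds, actuals)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse

# Synthetic weekly case counts; the naive forecaster predicts the
# previous week's value, a common surveillance baseline. In OH-MEMA the
# forecaster would be a fitted mixed-effects model instead.
mae, rmse = rolling_validation([10, 12, 11, 13, 15, 14, 18, 17])
```

Rolling-origin validation preserves temporal ordering (the model never sees future observations), which is why it is preferred over random train/test splits for surveillance forecasting.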
(This article belongs to the Special Issue New Advances of Infectious Disease Epidemiology)

26 pages, 14566 KB  
Article
Compound-Resolved Gas–Water Assessment of RDF Pyrolysis with Wet Scrubbing: Operating Windows for Internal Combustion Engine Combined Heat and Power and Closed-Loop Water Management
by Sergejs Osipovs and Aleksandrs Pučkins
Energies 2026, 19(8), 1870; https://doi.org/10.3390/en19081870 - 11 Apr 2026
Abstract
Pyrolysis of refuse-derived fuel (RDF) is a promising waste-to-energy route, but its use in higher-value applications remains limited by tar carryover, benzene, toluene, ethylbenzene, and xylenes (BTEX), heteroatom-containing compounds, and pollutant accumulation in recirculated scrubber water. This study evaluated operating windows for RDF pyrolysis coupled with direct wet scrubbing and closed-loop water reuse, with the aim of identifying regimes suitable for different end-use tiers. A Taguchi L27 design of experiments (DOE), i.e., an orthogonal array comprising 27 experimental runs, was applied to evaluate the effects of pyrolysis temperature, residence time, scrubber liquid-to-gas ratio, and scrubber-water temperature, while sequential reuse of the same scrubber-water inventory was evaluated at 5, 10, and 15 cycles. Cleaned-gas pollutants were quantified by compound-resolved gas chromatography–mass spectrometry (GC–MS) after solid-phase adsorption (SPA) sampling, while phenolics and polycyclic aromatic hydrocarbons (PAHs) in scrubber water were determined by extraction followed by GC–MS. Feasibility within each end-use tier was defined as simultaneous satisfaction of tier-specific cleaned-gas thresholds (Ctar, CBTEX, IN, and IS) and the corresponding water-loop hazard limit (Itox), using literature-informed engineering screening criteria. The results showed that stronger scrubbing reduced gas-phase tar and BTEX burdens, whereas extended water reuse caused systematic accumulation of phenolics and PAHs and increased the composite water-loop hazard index. Boiler-grade operation remained feasible across a broad operating range, with 23 of the 27 tested conditions remaining robust, whereas internal combustion engine combined heat and power (ICE-CHP) feasibility was restricted to a narrow robust regime, and no robust microturbine-grade condition was identified. These findings show that operating windows for RDF pyrolysis must be defined jointly by gas cleanliness and water-loop management constraints. Full article
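The tier-screening logic, simultaneous satisfaction of the cleaned-gas thresholds and the water-loop hazard limit, can be sketched as a simple threshold check. The numeric limits below are hypothetical placeholders for illustration, not the literature-informed criteria used in the study.

```python
# Hypothetical tier limits for illustration only; the study derives
# Ctar, CBTEX, IN, IS and Itox limits from literature-informed
# engineering screening criteria.
TIER_LIMITS = {
    "boiler":       {"Ctar": 500.0, "CBTEX": 100.0, "IN": 50.0, "IS": 50.0, "Itox": 1.0},
    "ice_chp":      {"Ctar": 100.0, "CBTEX": 50.0,  "IN": 20.0, "IS": 20.0, "Itox": 0.5},
    "microturbine": {"Ctar": 5.0,   "CBTEX": 10.0,  "IN": 5.0,  "IS": 5.0,  "Itox": 0.2},
}

def feasible(measured, tier):
    """A run is feasible for a tier only if every cleaned-gas indicator
    AND the water-loop hazard index meet that tier's limit."""
    limits = TIER_LIMITS[tier]
    return all(measured[k] <= limits[k] for k in limits)

# One hypothetical experimental run: clean enough for boiler and
# ICE-CHP tiers, but its tar burden exceeds the microturbine limit.
run = {"Ctar": 80.0, "CBTEX": 40.0, "IN": 15.0, "IS": 10.0, "Itox": 0.4}
ok_tiers = [t for t in TIER_LIMITS if feasible(run, t)]
```

Because feasibility is a conjunction over all indicators, a single exceedance (here, the water-loop index Itox after many reuse cycles) can eliminate an otherwise clean gas-side operating point, which is exactly why the paper defines operating windows jointly over gas and water constraints.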
(This article belongs to the Section A: Sustainable Energy)
