Search Results (1,157)

Search Parameters:
Keywords = regular producers

15 pages, 2636 KiB  
Article
Chest Compression Skill Evaluation System Using Pose Estimation and Web-Based Application
by Ryota Watanabe, Jahidul Islam, Xin Zhu, Emiko Kaneko, Ken Iseki and Lei Jing
Appl. Sci. 2025, 15(15), 8252; https://doi.org/10.3390/app15158252 - 24 Jul 2025
Viewed by 259
Abstract
It is critical to provide life-sustaining treatment to out-of-hospital cardiac arrest (OHCA) patients before ambulance care arrives. However, incorrectly performed resuscitation maneuvers reduce the victims' chances of survival and recovery, so rescuers must train regularly and learn to perform chest compressions correctly. To facilitate regular chest compression training, this study aims to improve the accuracy of a pose-estimation-based chest compression evaluation system and to develop a web application around it. YOLOv8 pose estimation was used to measure compression depth, recoil, and tempo, and its accuracy was compared against a training manikin equipped with a built-in evaluation system. Comparative experiments with different camera angles and heights were conducted to optimize evaluation accuracy; a camera angle of 30 degrees at a height of 50 cm produced the best results. The web application allows users to upload videos for analysis and receive the corresponding compression parameters. A usability evaluation confirmed the application's ease of use and accessibility and yielded positive feedback. These findings suggest that optimizing recording conditions significantly improves the accuracy of pose-based chest compression evaluation. Future work will focus on real-time feedback and improvements to the web application's user interface.
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)
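As a rough illustration of how pose keypoints can drive a compression-tempo estimate, here is a minimal Python sketch; the video file name, the choice of the left-wrist keypoint, and the peak-detection settings are my assumptions, not details from the paper (which also measures depth and recoil against a calibrated setup).

```python
# Minimal sketch, not the authors' system: estimate compression tempo from the
# vertical wrist trajectory tracked by a YOLOv8 pose model.
import numpy as np
import cv2
from scipy.signal import find_peaks
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")                   # any YOLOv8 pose checkpoint
cap = cv2.VideoCapture("compressions.mp4")        # hypothetical uploaded video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

wrist_y = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    if result.keypoints is None or len(result.keypoints.xy) == 0:
        continue
    kpts = result.keypoints.xy[0].cpu().numpy()   # COCO order: index 9 = left wrist
    wrist_y.append(float(kpts[9, 1]))
cap.release()

if wrist_y:
    # Each downstroke shows up as a local minimum of the wrist's image-plane height.
    signal = -np.asarray(wrist_y)                 # flip so compressions become peaks
    peaks, _ = find_peaks(signal, distance=max(1, int(fps * 0.3)))
    rate_cpm = 60.0 * len(peaks) * fps / len(signal)
    print(f"Estimated compression rate: {rate_cpm:.0f} per minute")
```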

27 pages, 2617 KiB  
Article
Monte Carlo Gradient Boosted Trees for Cancer Staging: A Machine Learning Approach
by Audrey Eley, Thu Thu Hlaing, Daniel Breininger, Zarindokht Helforoush and Nezamoddin N. Kachouie
Cancers 2025, 17(15), 2452; https://doi.org/10.3390/cancers17152452 - 24 Jul 2025
Viewed by 290
Abstract
Machine learning algorithms are commonly employed for the classification and interpretation of high-dimensional data. The classification task is often broken into two separate procedures, with different methods applied to achieve accurate results and produce interpretable outcomes: first, an effective subset of the high-dimensional features is extracted, and then the selected subset is used to train a classifier. Gradient Boosted Trees (GBT) are an ensemble model and, particularly due to their robustness, ability to model complex nonlinear interactions, and feature interpretability, are well suited to complex applications. XGBoost (eXtreme Gradient Boosting) is a high-performance implementation of GBT that incorporates regularization, parallel computation, and efficient tree pruning, making it an efficient, interpretable, and scalable classifier with potential applications to medical data analysis. In this study, a Monte Carlo Gradient Boosted Trees (MCGBT) model is proposed for both feature reduction and classification. The proposed MCGBT method was applied to a lung cancer dataset containing 107 radiomic features, quantitative imaging biomarkers extracted from CT scans. A reduced set of 12 radiomic features was identified, and patients were classified into cancer stages. A cancer staging accuracy of 90.3% across 100 independent runs was achieved, on par with that obtained using the full set of 107 features, enabling lean and deployable classifiers.
(This article belongs to the Section Cancer Informatics and Big Data)
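The workflow described above (repeated gradient-boosted-tree fits, importance-based reduction to 12 radiomic features, and accuracy averaged over 100 runs) might look roughly like the sketch below; the hyperparameters, split ratio, and importance-averaging rule are assumptions on my part, not the authors' published MCGBT procedure.

```python
# Illustrative sketch only, not the published MCGBT implementation.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def mc_gbt(X, y, n_runs=100, n_keep=12, seed=0):
    """X: (n_patients, 107) radiomic features; y: integer stage labels 0..K-1."""
    rng = np.random.default_rng(seed)
    importances = np.zeros(X.shape[1])
    accuracies = []
    for _ in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y,
            random_state=int(rng.integers(10**6)))
        clf = XGBClassifier(n_estimators=200, max_depth=3,
                            learning_rate=0.1, eval_metric="mlogloss")
        clf.fit(X_tr, y_tr)
        importances += clf.feature_importances_          # accumulate over Monte Carlo runs
        accuracies.append(accuracy_score(y_te, clf.predict(X_te)))
    selected = np.argsort(importances)[::-1][:n_keep]    # top-ranked radiomic features
    return selected, float(np.mean(accuracies))
```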

37 pages, 55522 KiB  
Article
EPCNet: Implementing an ‘Artificial Fovea’ for More Efficient Monitoring Using the Sensor Fusion of an Event-Based and a Frame-Based Camera
by Orla Sealy Phelan, Dara Molloy, Roshan George, Edward Jones, Martin Glavin and Brian Deegan
Sensors 2025, 25(15), 4540; https://doi.org/10.3390/s25154540 - 22 Jul 2025
Viewed by 223
Abstract
Efficient object detection is crucial to real-time monitoring applications such as autonomous driving or security systems. Modern RGB cameras can produce high-resolution images for accurate object detection. However, increased resolution results in increased network latency and power consumption. To minimise this latency, Convolutional Neural Networks (CNNs) often have a resolution limitation, requiring images to be down-sampled before inference, causing significant information loss. Event-based cameras are neuromorphic vision sensors with high temporal resolution, low power consumption, and high dynamic range, making them preferable to regular RGB cameras in many situations. This project proposes the fusion of an event-based camera with an RGB camera to mitigate the trade-off between temporal resolution and accuracy, while minimising power consumption. The cameras are calibrated to create a multi-modal stereo vision system where pixel coordinates can be projected between the event and RGB camera image planes. This calibration is used to project bounding boxes detected by clustering of events into the RGB image plane, thereby cropping each RGB frame instead of down-sampling to meet the requirements of the CNN. Using the Common Objects in Context (COCO) dataset evaluator, the average precision (AP) for the bicycle class in RGB scenes improved from 21.08 to 57.38. Additionally, AP increased across all classes from 37.93 to 46.89. To reduce system latency, a novel object detection approach is proposed where the event camera acts as a region proposal network, and a classification algorithm is run on the proposed regions. This achieved a 78% improvement over baseline. Full article
(This article belongs to the Section Sensing and Imaging)
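The cropping idea, projecting an event-cluster box into the RGB frame and feeding the crop rather than a down-sampled full frame to the detector, could be sketched as below; a planar homography stands in for the calibrated stereo projection described in the abstract, and all names are hypothetical.

```python
# Sketch under stated assumptions: homography instead of the paper's stereo calibration.
import numpy as np
import cv2

def crop_from_event_box(rgb_frame, event_box, H_event_to_rgb, pad=16):
    """event_box = (x0, y0, x1, y1) in event-camera pixel coordinates."""
    x0, y0, x1, y1 = event_box
    corners = np.float32([[x0, y0], [x1, y0], [x1, y1], [x0, y1]]).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(corners, H_event_to_rgb).reshape(-1, 2)
    h, w = rgb_frame.shape[:2]
    xa = max(int(np.floor(mapped[:, 0].min())) - pad, 0)
    ya = max(int(np.floor(mapped[:, 1].min())) - pad, 0)
    xb = min(int(np.ceil(mapped[:, 0].max())) + pad, w)
    yb = min(int(np.ceil(mapped[:, 1].max())) + pad, h)
    return rgb_frame[ya:yb, xa:xb]   # full-resolution crop passed to the CNN
```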

16 pages, 3526 KiB  
Article
Effects of Glomus iranicum Inoculation on Growth and Nutrient Uptake in Potatoes Associated with Broad Beans Under Greenhouse Conditions
by Duglas Lenin Contreras-Pino, Samuel Pizarro, Patricia Verastegui-Martinez, Richard Solórzano-Acosta and Edilson J. Requena-Rojas
Microbiol. Res. 2025, 16(7), 164; https://doi.org/10.3390/microbiolres16070164 - 21 Jul 2025
Viewed by 330
Abstract
The rising global demand for food, including potatoes, necessitates increased crop production. To achieve higher yields, farmers frequently depend on regular applications of nitrogen and phosphate fertilizers. As people seek more environmentally friendly alternatives, biofertilizers are gaining popularity as a potential replacement for synthetic fertilizers. This study aimed to determine how Glomus iranicum affects the growth of potatoes (Solanum tuberosum L.) and the nutritional value of potato tubers when grown alongside broad beans (Vicia faba L.). An experiment was conducted using potatoes tested at five dosage levels of G. iranicum, ranging from 0 to 4 g, to see its impact on the plants and soil. Inoculation with G. iranicum produced variable results in associated potato and bean crops, with significant effects on some variables. In particular, inoculation with 3 g of G. iranicum produced an increase in plant height (24%), leaf dry weight (90%), and tuber dry weight (57%) of potatoes. Similarly, 4 g of G. iranicum produced an increase in the foliar fresh weight (115%), root length (124%), root fresh weight (159%), and root dry weight (243%) of broad beans compared to no inoculation. These findings suggest that G. iranicum could be a helpful biological tool in Andean crops to improve the productivity of potatoes associated with broad beans. This could potentially reduce the need for chemical fertilizers in these crops. Full article

23 pages, 1755 KiB  
Article
An Efficient Continuous-Variable Quantum Key Distribution with Parameter Optimization Using Elitist Elk Herd Random Immigrants Optimizer and Adaptive Depthwise Separable Convolutional Neural Network
by Vidhya Prakash Rajendran, Deepalakshmi Perumalsamy, Chinnasamy Ponnusamy and Ezhil Kalaimannan
Future Internet 2025, 17(7), 307; https://doi.org/10.3390/fi17070307 - 17 Jul 2025
Viewed by 296
Abstract
Quantum memory is essential for the prolonged storage and retrieval of quantum information. Nevertheless, no current studies have focused on the creation of effective quantum memory for continuous variables while accounting for the decoherence rate. This work presents an effective continuous-variable quantum key distribution method with parameter optimization utilizing the Elitist Elk Herd Random Immigrants Optimizer (2E-HRIO) technique. At the outset of transmission, the quantum device undergoes initialization and authentication via Compressed Hash-based Message Authentication Code with Encoded Post-Quantum Hash (CHMAC-EPQH). The settings are subsequently optimized from the authenticated device via 2E-HRIO, which mitigates the effects of decoherence by adaptively tuning system parameters. Subsequently, quantum bits are produced from the verified device, and pilot insertion is executed within the quantum bits. The pilot-inserted signal is thereafter subjected to pulse shaping using a Gaussian filter. The pulse-shaped signal undergoes modulation. Authenticated post-modulation, the prediction of link failure is conducted through an authenticated channel using Radial Density-Based Spatial Clustering of Applications with Noise. Subsequently, transmission occurs via a non-failure connection. The receiver performs channel equalization on the received signal with Recursive Regularized Least Mean Squares. Subsequently, a dataset for side-channel attack authentication is gathered and preprocessed, followed by feature extraction and classification using Adaptive Depthwise Separable Convolutional Neural Networks (ADS-CNNs), which enhances security against side-channel attacks. The quantum state is evaluated based on the signal received, and raw data are collected. Thereafter, a connection is established between the transmitter and receiver. Both the transmitter and receiver perform the scanning process. Thereafter, the calculation and correction of the error rate are performed based on the sifting results. Ultimately, privacy amplification and key authentication are performed using the repaired key via B-CHMAC-EPQH. The proposed system demonstrated improved resistance to decoherence and side-channel attacks, while achieving a reconciliation efficiency above 90% and increased key generation rate. Full article

18 pages, 4803 KiB  
Article
Global Health as Vector for Agroecology in Collective Gardens in Toulouse Region (France)
by Wilkens Jules, Stéphane Mombo and Camille Dumat
Urban Sci. 2025, 9(7), 272; https://doi.org/10.3390/urbansci9070272 - 15 Jul 2025
Viewed by 722
Abstract
Agroecological transitions in collective urban gardens in Toulouse region were studied through the prism of global health (2011–2022). The specific meaning of “global health” in the context of urban gardens concerns the health of gardeners (well-being and physical health), plants, soil, and animals, as well as the interactions between humans and non-humans, which are crucial for gardeners. A sociotechnical research project was developed on four different collective gardening sites, consisting of the following: 1. surveys issued to 100 garden stakeholders to highlight issues and practices, participation in meetings with the social centers in charge of events, and focus groups; 2. participative agronomic and environmental measurements and field observations, including soil quality analyses; and 3. analysis of the available documentary corpus. In order to produce the results, these three research methods (surveys, agronomy, document analysis) were combined through a transdisciplinary approach, in that both the field experimentation outcomes and retrieved scientific publications and technical documents informed the discussions with gardeners. Consideration of the four different sites enabled the exploration of various contextual factors—such as soil or air quality—affecting the production of vegetables. A rise in the concerns of gardeners about the impacts of their activities on global health was observed, including aspects such as creating and enjoying landscapes, taking care of the soil and biodiversity, developing social connections through the transmission of practices, and regular outside physical activity and healthier eating. The increased consideration for global health issues by all stakeholders promotes the implementation of agroecological practices in gardens to improve biodiversity and adherence to circular economy principles. Four concepts emerged from the interviews: health, production of vegetables, living soil, and social interactions. Notably, nuances between the studied sites were observed, according to their history, environment, and organization. These collective gardens can thus be considered as accessible laboratories for social and agroecological experimentation, being areas that can strongly contribute to urban ecosystem services. Full article
(This article belongs to the Special Issue Social Evolution and Sustainability in the Urban Context)

16 pages, 6900 KiB  
Article
Infrared Small Target Detection via Modified Fast Saliency and Weighted Guided Image Filtering
by Yi Cui, Tao Lei, Guiting Chen, Yunjing Zhang, Gang Zhang and Xuying Hao
Sensors 2025, 25(14), 4405; https://doi.org/10.3390/s25144405 - 15 Jul 2025
Viewed by 282
Abstract
The robust detection of small targets is crucial in infrared (IR) search and tracking applications. Considering that many state-of-the-art (SOTA) methods are still unable to suppress various edges satisfactorily, especially under complex backgrounds, an effective infrared small target detection algorithm inspired by modified fast saliency and the weighted guided image filter (WGIF) is presented in this paper. Initially, the fast saliency map modulated by the steering kernel (SK) is calculated. Then, a set of edge-preserving smoothed images are produced by WGIF using different filter radii and regularization parameters. After that, utilizing the fuzzy sets technique, the background image is predicted reasonably according to the results of the saliency map and smoothed or non-smoothed images. Finally, the differential image is calculated by subtracting the predicted image from the original one, and IR small targets are detected through a simple thresholding. Experimental results on four sequences demonstrate that the proposed method can not only suppress background clutter effectively under strong edge interference but also detect targets accurately with a low false alarm rate. Full article
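A bare-bones version of the background-subtraction idea is sketched below, with a plain guided filter (from opencv-contrib) standing in for the weighted guided image filter and a mean-plus-k-sigma threshold standing in for the paper's saliency-modulated fusion; parameters are illustrative only.

```python
# Rough sketch, not the paper's WGIF/saliency pipeline.
import cv2
import numpy as np

def detect_small_targets(ir_image, radius=9, eps=0.04, k=4.0):
    f = ir_image.astype(np.float32) / 255.0
    background = cv2.ximgproc.guidedFilter(f, f, radius, eps)  # edge-preserving smoothing
    residual = f - background                                  # small targets stand out
    threshold = residual.mean() + k * residual.std()
    return (residual > threshold).astype(np.uint8)
```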

13 pages, 900 KiB  
Hypothesis
Beyond Classical Multipoles: The Magnetic Metapole as an Extended Field Source
by Angelo De Santis and Roberto Dini
Foundations 2025, 5(3), 25; https://doi.org/10.3390/foundations5030025 - 14 Jul 2025
Viewed by 185
Abstract
We introduce the concept of the magnetic metapole—a theoretical extension of classical multipole theory involving a fractional pole count j (related to the harmonic degree n as j = 2^n). Defined by a scalar potential with colatitudinal dependence and no radial variation, the metapole yields a magnetic field that decays as 1/r and is oriented along spherical surfaces. Unlike classical multipoles, the metapole cannot be described as a point source; rather, it corresponds to an extended or filamentary magnetic distribution, as derived from Maxwell’s equations. We demonstrate that pairs of oppositely oriented metapoles (up/down) can, at large distances, produce magnetic fields resembling those of classical monopoles. A regularized formulation removes the singularities of both the potential and the field. When applied in a bounded region, it yields finite field energy, enabling practical modeling applications. We propose that the metapole can serve as a conceptual and computational framework for representing large-scale magnetic field structures, particularly where standard dipole-based models fall short. This construct may have utility in both geophysical and astrophysical contexts, and it provides a new tool for equivalent source modeling and magnetic field decomposition.
(This article belongs to the Section Physical Sciences)
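The 1/r decay quoted above follows directly from a scalar potential that depends only on colatitude; a short sketch of that step, taking V = f(θ) as the assumed form and using the standard spherical gradient:

```latex
V(r,\theta,\varphi) = f(\theta)
\quad\Longrightarrow\quad
\mathbf{B} \propto -\nabla V
  = -\frac{1}{r}\,\frac{\mathrm{d}f}{\mathrm{d}\theta}\,\hat{\boldsymbol{\theta}} ,
```

so the field has no radial component, lies along spherical surfaces, and falls off as 1/r, matching the behaviour stated in the abstract.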

20 pages, 108154 KiB  
Article
Masks-to-Skeleton: Multi-View Mask-Based Tree Skeleton Extraction with 3D Gaussian Splatting
by Xinpeng Liu, Kanyu Xu, Risa Shinoda, Hiroaki Santo and Fumio Okura
Sensors 2025, 25(14), 4354; https://doi.org/10.3390/s25144354 - 11 Jul 2025
Viewed by 410
Abstract
Accurately reconstructing tree skeletons from multi-view images is challenging. Most existing works apply skeletonization to 3D point clouds, but thin branches with low texture contrast often cause multi-view stereo (MVS) to produce noisy and fragmented point clouds that break branch connectivity. Leveraging recent developments in accurate mask extraction from images, we introduce a mask-guided graph optimization framework that estimates a 3D skeleton directly from multi-view segmentation masks, bypassing the reliance on point cloud quality. In our method, a skeleton is modeled as a graph whose nodes store positions and radii while its adjacency matrix encodes branch connectivity. We use 3D Gaussian splatting (3DGS) to render silhouettes of the graph and directly optimize the nodes and the adjacency matrix to fit the given multi-view silhouettes in a differentiable manner. Furthermore, we use a minimum spanning tree (MST) algorithm during the optimization loop to regularize the graph to a tree structure. Experiments on synthetic and real-world plants show consistent improvements in completeness and structural accuracy over existing point-cloud-based and heuristic baseline methods.
(This article belongs to the Section Remote Sensors)
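The MST regularization step mentioned above can be pictured with a small SciPy sketch; Euclidean edge weights and a dense distance matrix are my simplifications, whereas the paper optimizes the adjacency matrix differentiably alongside node positions and radii.

```python
# Small sketch of projecting a skeleton graph onto a tree structure via an MST.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def project_to_tree(node_xyz):
    """node_xyz: (N, 3) array of skeleton node positions."""
    weights = cdist(node_xyz, node_xyz)                 # pairwise branch lengths
    mst = minimum_spanning_tree(weights).toarray()      # directed, upper-triangular
    adjacency = ((mst + mst.T) > 0).astype(np.float32)  # symmetric 0/1 tree adjacency
    return adjacency
```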

26 pages, 1556 KiB  
Article
Modified Two-Parameter Ridge Estimators for Enhanced Regression Performance in the Presence of Multicollinearity: Simulations and Medical Data Applications
by Muteb Faraj Alharthi and Nadeem Akhtar
Axioms 2025, 14(7), 527; https://doi.org/10.3390/axioms14070527 - 10 Jul 2025
Viewed by 247
Abstract
Predictive regression models often face a common challenge known as multicollinearity. This phenomenon can distort the results, causing models to overfit and produce unreliable coefficient estimates. Ridge regression is a widely used approach that incorporates a regularization term to stabilize parameter estimates and improve the prediction accuracy. In this study, we introduce four newly modified ridge estimators, referred to as RIRE1, RIRE2, RIRE3, and RIRE4, that are aimed at tackling severe multicollinearity more effectively than ordinary least squares (OLS) and other existing estimators under both normal and non-normal error distributions. The ridge estimators are biased, so their efficiency cannot be judged by variance alone; instead, we use the mean squared error (MSE) to compare their performance. Each new estimator depends on two shrinkage parameters, k and d, making the theoretical analysis complex. To address this, we employ Monte Carlo simulations to rigorously evaluate and compare these new estimators with OLS and other existing ridge estimators. Our simulations show that the proposed estimators consistently minimize the MSE better than OLS and other ridge estimators, particularly in datasets with strong multicollinearity and large error variances. We further validate their practical value through applications using two real-world datasets, demonstrating both their robustness and theoretical alignment. Full article
(This article belongs to the Special Issue Applied Mathematics and Mathematical Modeling)
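To make the MSE comparison concrete, here is a small Monte Carlo sketch with a one-parameter ridge estimator standing in for the proposed two-parameter RIRE estimators; the simulation design (collinearity level, error variance, and true coefficients) is my own illustration, not the paper's setup.

```python
# Monte Carlo estimate of coefficient MSE for OLS vs. a generic ridge estimator.
import numpy as np

def coefficient_mse(n=50, p=4, rho=0.99, sigma=5.0, k=1.0, runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.ones(p)
    cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # strongly collinear design
    sse_ols, sse_ridge = 0.0, 0.0
    for _ in range(runs):
        X = rng.multivariate_normal(np.zeros(p), cov, size=n)
        y = X @ beta + rng.normal(0.0, sigma, size=n)
        b_ols = np.linalg.solve(X.T @ X, X.T @ y)
        b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
        sse_ols += np.sum((b_ols - beta) ** 2)
        sse_ridge += np.sum((b_ridge - beta) ** 2)
    return sse_ols / runs, sse_ridge / runs               # estimated MSE of each estimator

print(coefficient_mse())   # ridge typically shows a much smaller MSE here
```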

19 pages, 3291 KiB  
Article
Predicting High-Cost Healthcare Utilization Using Machine Learning: A Multi-Service Risk Stratification Analysis in EU-Based Private Group Health Insurance
by Eslam Abdelhakim Seyam
Risks 2025, 13(7), 133; https://doi.org/10.3390/risks13070133 - 8 Jul 2025
Viewed by 303
Abstract
Healthcare cost acceleration and resource allocation issues have worsened across European health systems, where a small group of patients drives excessive healthcare spending. The prediction of high-cost utilization patterns is important for the sustainable management of healthcare and focused intervention measures. The aim of our study was to derive and validate machine learning algorithms for high-cost healthcare utilization prediction based on detailed administrative data and by comparing three algorithmic methods for the best risk stratification performance. The research analyzed extensive insurance beneficiary records which compile data from health group collective funds operated by non-life insurers across EU countries, across multiple service classes. The definition of high utilization was equivalent to the upper quintile of overall health expenditure using a moderate cost threshold. The research applied three machine learning algorithms, namely logistic regression using elastic net regularization, the random forest, and support vector machines. The models used a comprehensive set of predictor variables including demographics, policy profiles, and patterns of service utilization across multiple domains of healthcare. The performance of the models was evaluated using the standard train–test methodology and rigorous cross-validation procedures. All three models demonstrated outstanding discriminative ability by achieving area under the curve values at near-perfect levels. The random forest achieved the best test performance with exceptional metrics, closely followed by logistic regression with comparable exceptional performance. Service diversity proved to be the strongest predictor across all models, while dentistry services produced an extraordinarily high odds ratio with robust confidence intervals. The group of high utilizers comprised approximately one-fifth of the sample but demonstrated significantly higher utilization across all service classes. Machine learning algorithms are capable of classifying patients eligible for the high utilization of healthcare services with nearly perfect discriminative ability. The findings justify the application of predictive analytics for proactive case management, resource planning, and focused intervention measures across private group health insurance providers in EU countries. Full article
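A schematic version of the top-quintile labeling and model comparison might look like the sketch below; the file name, column names, and hyperparameters are hypothetical, numeric features are assumed, and only two of the study's three models are shown.

```python
# Schematic only; not the study's data or tuned models.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("claims.csv")                                          # hypothetical data
y = (df["total_cost"] >= df["total_cost"].quantile(0.80)).astype(int)   # top expenditure quintile
X = df.drop(columns=["total_cost"])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)
scaler = StandardScaler().fit(X_tr)

enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=1.0, max_iter=5000)
enet.fit(scaler.transform(X_tr), y_tr)
rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)

for name, scores in [("elastic net", enet.predict_proba(scaler.transform(X_te))[:, 1]),
                     ("random forest", rf.predict_proba(X_te)[:, 1])]:
    print(name, round(roc_auc_score(y_te, scores), 3))
```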

26 pages, 2441 KiB  
Article
Structure–Property Relationship in Isotactic Polypropylene Under Contrasting Processing Conditions
by Edin Suljovrujic, Dejan Milicevic, Katarina Djordjevic, Zorana Rogic Miladinovic, Georgi Stamboliev and Slobodanka Galovic
Polymers 2025, 17(14), 1889; https://doi.org/10.3390/polym17141889 - 8 Jul 2025
Viewed by 606
Abstract
Polypropylene (PP), with its good physical, thermal, and mechanical properties and excellent processing capabilities, has become one of the most used synthetic polymers. It is known that the overall properties of semicrystalline polymers, including PP, are governed by morphology, which is influenced by the crystallization behavior of the polymer under specific conditions. The most important industrial PP remains the isotactic one, and it has been studied extensively for its polymorphic characteristics and crystallization behavior for over half a century. Due to its regular chain structure, isotactic polypropylene (iPP) belongs to the group of polymers with a high tendency for crystallization. The rapid quenching of molten iPP fails to produce a completely amorphous polymer but leads to an intermediate crystalline order. On the other hand, slow cooling yields a material with high crystalline content. The processing conditions that occur in practice and industry are between these two extremes and, in some cases, are even very close. Therefore, the study of limits in processability and the impact of extreme preparation conditions on morphology, structure, thermal, and mechanical properties fills a gap in the current understanding of how the processing conditions of iPP can be used to design the desired properties for specific applications and is in the focus of this research. The first set of samples (Q samples) was obtained by rapid quenching, while the second was prepared by very slow cooling from the melt to room temperature (SC samples). Testing of samples was performed by optical microscopy (OM), scanning electron microscopy (SEM), wide-angle X-ray diffraction (WAXD), Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), dynamic dielectric spectroscopy (DDS), and mechanical measurements. Characterization revealed that slowly cooled samples exhibited a significantly higher degree of crystallinity and larger crystallites (χ ≥ 55% and L(110) ≈ 20 nm), compared to quenched samples (χ < 30%, L(110) ≤ 3 nm). Mechanical testing showed a drastic contrast: quenched samples exhibited elongation at break > 500%, while slowly cooled samples broke below 15%, reflecting their brittle behavior. For the first time, DDS is applied to investigate molecular mobility differences between processing-dependent structural forms, specifically the mesomorphic (smectic) and α-monoclinic forms. In slowly cooled samples, α relaxation exhibited both enhanced intensity and an upward temperature shift, indicating stronger structural constraints due to a much higher crystalline phase content and significantly larger crystallite size, respectively. These findings provide novel insights into the structure–property–processing relationship, which is crucial for industrial applications. Full article
(This article belongs to the Special Issue Thermal and Elastic Properties of Polymer Materials)
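The L(110) value quoted above is the kind of crystallite size normally extracted from WAXD line broadening via the Scherrer relation; the formula is given here as a reminder of that standard step, not as a claim about the authors' exact fitting procedure.

```latex
L_{(110)} \;=\; \frac{K\,\lambda}{\beta_{(110)}\,\cos\theta}
```

Here K ≈ 0.9 is the shape factor, λ the X-ray wavelength, β_(110) the corrected full width at half maximum of the (110) reflection in radians, and θ its Bragg angle.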

33 pages, 8582 KiB  
Article
Mobile Tunnel Lining Measurable Image Scanning Assisted by Collimated Lasers
by Xueqin Wu, Jian Ma, Jianfeng Wang, Hongxun Song and Jiyang Xu
Sensors 2025, 25(13), 4177; https://doi.org/10.3390/s25134177 - 4 Jul 2025
Viewed by 235
Abstract
The health of road tunnel linings directly impacts traffic safety and requires regular inspection. Appearance defects on tunnel linings can be measured through images scanned by cameras mounted on a car to avoid disrupting traffic. Existing tunnel lining mobile scanning methods often fail in image stitching due to the lack of corresponding feature points in the lining images, or require complex, time-consuming algorithms to eliminate stitching seams caused by the same issue. This paper proposes a mobile scanning method aided by collimated lasers, which uses lasers as corresponding points to assist with image stitching to address the problems. Additionally, the lasers serve as structured light, enabling the measurement of image projection relationships. An inspection car was developed based on this method for the experiment. To ensure operational flexibility, a single checkerboard was used to calibrate the system, including estimating the poses of lasers and cameras, and a Laplace kernel-based algorithm was developed to guarantee the calibration accuracy. Experiments show that the performance of this algorithm exceeds that of other benchmark algorithms, and the proposed method produces nearly seamless, measurable tunnel lining images, demonstrating its feasibility. Full article
(This article belongs to the Section Remote Sensors)
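For context, a single-checkerboard calibration with standard OpenCV primitives is sketched below; this is the generic routine, not the paper's Laplace-kernel-based algorithm, and the board geometry and file paths are assumptions.

```python
# Generic checkerboard calibration with OpenCV; not the paper's calibration method.
import glob
import cv2
import numpy as np

pattern = (9, 6)                          # inner corners of the (assumed) board
square = 0.025                            # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in sorted(glob.glob("calib/*.png")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
```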

23 pages, 7163 KiB  
Article
Entropy-Regularized Attention for Explainable Histological Classification with Convolutional and Hybrid Models
by Pedro L. Miguel, Leandro A. Neves, Alessandra Lumini, Giuliano C. Medalha, Guilherme F. Roberto, Guilherme B. Rozendo, Adriano M. Cansian, Thaína A. A. Tosta and Marcelo Z. do Nascimento
Entropy 2025, 27(7), 722; https://doi.org/10.3390/e27070722 - 3 Jul 2025
Viewed by 395
Abstract
Deep learning models such as convolutional neural networks (CNNs) and vision transformers (ViTs) perform well in histological image classification, but often lack interpretability. We introduce a unified framework that adds an attention branch and CAM Fostering, an entropy-based regularizer, to improve Grad-CAM visualizations. Six backbone architectures (ResNet-50, DenseNet-201, EfficientNet-b0, ResNeXt-50, ConvNeXt, CoatNet-small) were trained, with and without our modifications, on five H&E-stained datasets. We measured explanation quality using coherence, complexity, confidence drop, and their harmonic mean (ADCC). Our method increased the ADCC in five of the six backbones; ResNet-50 saw the largest gain (+15.65%), and CoatNet-small achieved the highest overall score (+2.69%), peaking at 77.90% on the non-Hodgkin lymphoma set. The classification accuracy remained stable or improved in four models. These results show that combining attention and entropy produces clearer, more informative heatmaps without degrading performance. Our contributions include a modular architecture for both convolutional and hybrid models and a comprehensive, quantitative explainability evaluation suite. Full article
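A minimal PyTorch sketch of an entropy-based attention regularizer is given below; the sign, the weighting, and the way the attention map enters the loss are my assumptions and merely stand in for the paper's CAM Fostering term.

```python
# Minimal sketch of adding an entropy term for the attention map to the loss.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, targets, attention_map, lam=0.1):
    """attention_map: (B, H, W) non-negative attention/CAM scores."""
    ce = F.cross_entropy(logits, targets)
    p = attention_map.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + 1e-8)             # per-image normalization
    entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()  # Shannon entropy of the map
    return ce - lam * entropy                                # reward higher-entropy maps
```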

16 pages, 4381 KiB  
Article
The Influence of Different Foaming Agents on the Properties and Foaming Mechanisms of Foam Ceramics from Quartz Tailings
by Huiyang Gao and Jie Zhang
Crystals 2025, 15(7), 606; https://doi.org/10.3390/cryst15070606 - 28 Jun 2025
Viewed by 276
Abstract
The type of foaming agent significantly influences the pore structure and properties of foam ceramics, particularly their compressive strength. This study used quartz sand tailings and waste glass powder as raw materials to fabricate foam ceramic materials. The effects of different foaming agents (SiC, CaCO3, and MnO2) on the phase evolution, microstructure, pore size distribution, and physical properties of the foam ceramics were investigated, and the foaming mechanisms were elucidated. The results indicated that when SiC was employed as the foaming agent, the viscosity was high at elevated temperatures and pores with irregular shapes tended to form because of the anisotropy of the quartz crystals. CaO generated from CaCO3 decomposition reduced the melt viscosity by disrupting the [SiO4] tetrahedra, whereas the formation of anorthite and diopside stabilized the pore morphology, resulting in regular circular pores. When MnO2 was used as the foaming agent, the pressure from the gas produced during oxidation exceeded the surface tension of the molten phase owing to its viscosity, leading to the formation of larger, irregular, and interconnected pores. The foam ceramic material exhibited optimal properties when 2% CaCO3 was used as the foaming agent, with a water absorption rate of 30%, bulk density of 0.62 g/cm3, porosity of 68.4%, compressive strength of 9.67 MPa, and thermal conductivity of 0.26 W/(m·K). Full article
(This article belongs to the Section Polycrystalline Ceramics)
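The gas-forming steps usually attributed to these three agents are commonly written as below; these are textbook reactions given for orientation, and the paper's mechanism discussion (melt viscosity, disruption of the [SiO4] network, pore stabilization by anorthite and diopside) goes beyond them.

```latex
\mathrm{SiC} + 2\,\mathrm{O_2} \rightarrow \mathrm{SiO_2} + \mathrm{CO_2}\uparrow,
\qquad
\mathrm{CaCO_3} \xrightarrow{\ \Delta\ } \mathrm{CaO} + \mathrm{CO_2}\uparrow,
\qquad
4\,\mathrm{MnO_2} \xrightarrow{\ \Delta\ } 2\,\mathrm{Mn_2O_3} + \mathrm{O_2}\uparrow.
```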
