Search Results (1,649)

Search Parameters:
Keywords = zero-set

12 pages, 204 KiB  
Article
The Proximal Point Method with Remotest Set Control for Maximal Monotone Operators and Quasi-Nonexpansive Mappings
by Alexander J. Zaslavski
Mathematics 2025, 13(14), 2282; https://doi.org/10.3390/math13142282 - 16 Jul 2025
Abstract
In the present paper, we use the proximal point method with remotest set control to find an approximate common zero of a finite collection of maximal monotone maps in a real Hilbert space in the presence of computational errors. We prove that the inexact proximal point method generates an approximate solution if these errors are summable. We also show that if the computational errors are small enough, then the inexact proximal point method generates approximate solutions. Full article
(This article belongs to the Special Issue Variational Inequality, 2nd Edition)
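To make the iteration concrete, here is a minimal sketch of the exact proximal point method for a single maximal monotone operator, using the subdifferential of f(x) = |x| (whose resolvent is soft-thresholding and whose unique zero is 0). The operator choice, step size, and iteration count are illustrative assumptions; the paper itself treats finite families of operators with remotest-set control and summable computational errors.

```python
# Sketch of the (exact) proximal point iteration x_{k+1} = (I + lam*T)^(-1)(x_k)
# for the maximal monotone operator T = subdifferential of f(x) = |x|.
# The resolvent here is the soft-thresholding map; the unique zero of T is x = 0.

def soft_threshold(x, lam):
    """Resolvent (I + lam * d|.|)^(-1) of the subdifferential of |x|."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(x0, lam=0.5, steps=20):
    x = x0
    for _ in range(steps):
        x = soft_threshold(x, lam)
    return x

print(proximal_point(3.0))  # converges to 0.0, the zero of the operator
```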
26 pages, 3369 KiB  
Article
Zero-Day Threat Mitigation via Deep Learning in Cloud Environments
by Sebastian Ignacio Berrios Vasquez, Pamela Alejandra Hermosilla Monckton, Dante Ivan Leiva Muñoz and Hector Allende
Appl. Sci. 2025, 15(14), 7885; https://doi.org/10.3390/app15147885 - 15 Jul 2025
Viewed by 86
Abstract
The growing sophistication of cyber threats has increased the need for advanced detection techniques, particularly in cloud computing environments. Zero-day threats pose a critical risk due to their ability to bypass traditional security mechanisms. This study proposes a deep learning model called mixed vision transformer (MVT), which converts binary files into images and applies deep attention mechanisms for classification. The model was trained using the MaLeX dataset in a simulated Docker environment. It achieved an accuracy between 70% and 80%, with better performance in detecting malware compared with benign files. The proposed MVT approach not only demonstrates its potential to significantly enhance zero-day threat detection in cloud environments but also sets a foundation for robust and adaptive solutions to emerging cybersecurity challenges. Full article
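A hedged sketch of the binary-to-image preprocessing the abstract describes (converting a file's bytes into a grayscale image for a vision model); the square-ish layout and zero-padding are common conventions, assumed here rather than taken from the paper.

```python
import math
import numpy as np

def binary_to_image(data: bytes, width=None):
    """Render a byte sequence as a square-ish grayscale uint8 image,
    zero-padding the tail -- a common 'malware as image' preprocessing step
    (layout details are assumptions, not the paper's exact recipe)."""
    arr = np.frombuffer(data, dtype=np.uint8)
    if width is None:
        width = math.ceil(math.sqrt(arr.size))
    height = math.ceil(arr.size / width)
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[:arr.size] = arr
    return padded.reshape(height, width)

img = binary_to_image(bytes(range(20)))
print(img.shape)  # (4, 5)
```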
21 pages, 3826 KiB  
Article
UAV-OVD: Open-Vocabulary Object Detection in UAV Imagery via Multi-Level Text-Guided Decoding
by Lijie Tao, Guoting Wei, Zhuo Wang, Zhaoshuai Qi, Ying Li and Haokui Zhang
Drones 2025, 9(7), 495; https://doi.org/10.3390/drones9070495 - 14 Jul 2025
Viewed by 157
Abstract
Object detection in drone-captured imagery has attracted significant attention due to its wide range of real-world applications, including surveillance, disaster response, and environmental monitoring. The majority of existing methods are developed under closed-set assumptions; although some recent studies have begun to explore open-vocabulary or open-world detection, their application to UAV imagery remains limited and underexplored. In this paper, we address this limitation by exploring the relationship between images and textual semantics to extend object detection in UAV imagery to an open-vocabulary setting. We propose a novel and efficient detector named Unmanned Aerial Vehicle Open-Vocabulary Detector (UAV-OVD), specifically designed for drone-captured scenes. To facilitate open-vocabulary object detection, we propose improvements from three complementary perspectives. First, at the training level, we design a region–text contrastive loss to replace conventional classification loss, allowing the model to align visual regions with textual descriptions beyond fixed category sets. Structurally, building on this, we introduce a multi-level text-guided fusion decoder that integrates visual features across multiple spatial scales under language guidance, thereby improving overall detection performance and enhancing the representation and perception of small objects. Finally, from the data perspective, we enrich the original dataset with synonym-augmented category labels, enabling more flexible and semantically expressive supervision. Experiments conducted on two widely used benchmark datasets demonstrate that our approach achieves significant improvements in both mAP and Recall. For instance, for Zero-Shot Detection on xView, UAV-OVD achieves 9.9 mAP and 67.3 Recall, 1.1 and 25.6 points higher, respectively, than those of YOLO-World. In terms of speed, UAV-OVD achieves 53.8 FPS, nearly twice as fast as YOLO-World and five times faster than DetrReg, demonstrating its strong potential for real-time open-vocabulary detection in UAV imagery. Full article
(This article belongs to the Special Issue Applications of UVs in Digital Photogrammetry and Image Processing)
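A minimal numpy sketch of a symmetric region–text contrastive loss of the kind the training-level change describes; the temperature value and the one-positive-per-row pairing are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def region_text_contrastive_loss(regions, texts, tau=0.07):
    """Symmetric InfoNCE-style loss aligning L2-normalized region embeddings
    with their paired text embeddings (row i of each matrix forms a pair).
    tau and the pairing scheme are assumptions for illustration."""
    r = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    t = texts / np.linalg.norm(texts, axis=1, keepdims=True)
    logits = r @ t.T / tau

    def xent(m):
        # cross-entropy of each row's softmax against the diagonal target
        m = m - m.max(axis=1, keepdims=True)
        logp = m - np.log(np.exp(m).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average the region->text and text->region directions
    return 0.5 * (xent(logits) + xent(logits.T))

emb = np.eye(4)
print(region_text_contrastive_loss(emb, emb))  # near zero for aligned pairs
```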
19 pages, 910 KiB  
Article
Robust Gas Demand Prediction Using Deep Neural Networks: A Data-Driven Approach to Forecasting Under Regulatory Constraints
by Kostiantyn Pavlov, Olena Pavlova, Tomasz Wołowiec, Svitlana Slobodian, Andriy Tymchyshak and Tetiana Vlasenko
Energies 2025, 18(14), 3690; https://doi.org/10.3390/en18143690 - 12 Jul 2025
Viewed by 164
Abstract
Accurate gas consumption forecasting is critical for modern energy systems due to complex consumer behavior and regulatory requirements. Deep neural networks (DNNs), such as Seq2Seq with attention, TiDE, and Temporal Fusion Transformers, are promising for modeling complex temporal relationships and non-linear dependencies. This study compares state-of-the-art architectures using real-world data from over 100,000 consumers to determine their practical viability for forecasting gas consumption under operational and regulatory conditions. Particular attention is paid to the impact of data quality, feature attribution, and model reliability on performance. The main use cases for natural gas consumption forecasting are tariff setting by regulators and system balancing for suppliers and operators. The study used monthly natural gas consumption data from 105,527 households in the Volyn region of Ukraine from January 2019 to April 2023 and meteorological data on average monthly air temperature. Missing values were replaced with zeros or imputed using seasonal imputation and K-nearest neighbors. The results showed that previous consumption is the dominant feature for all models, confirming their autoregressive nature and the high importance of historical data. Temperature and category were identified as supporting features. Imputed data consistently improved the performance of all models. Seq2SeqPlus showed high accuracy, TiDE was the most stable, and TFT offered flexibility and interpretability. Implementing these models requires careful integration with data management, regulatory frameworks, and operational workflows. Full article
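As a sketch of the seasonal imputation step mentioned above, the following fills missing monthly values with the mean of the same calendar month across years; this simple variant is an assumption, since the listing does not spell out the paper's exact imputation formula.

```python
import numpy as np

def seasonal_impute(series, period=12):
    """Fill NaNs in a periodic (e.g. monthly) series with the mean of the
    same position in the cycle across all years -- one simple form of
    seasonal imputation (assumed variant, for illustration)."""
    out = np.asarray(series, dtype=float).copy()
    for m in range(period):
        idx = np.arange(m, out.size, period)
        vals = out[idx]
        vals[np.isnan(vals)] = np.nanmean(vals)  # same-month mean
        out[idx] = vals
    return out

data = [10, 20, np.nan, 30, 14, 22, 40, 34]  # period=4 toy example
print(seasonal_impute(data, period=4))  # the NaN becomes 40.0
```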
21 pages, 874 KiB  
Article
Explainable Use of Foundation Models for Job Hiring
by Vishnu S. Pendyala, Neha Bais Thakur and Radhika Agarwal
Electronics 2025, 14(14), 2787; https://doi.org/10.3390/electronics14142787 - 11 Jul 2025
Viewed by 563
Abstract
Automating candidate shortlisting is a non-trivial task that stands to benefit substantially from advances in artificial intelligence. We evaluate a suite of foundation models, including Llama 2, Llama 3, Mixtral, Gemma-2b, Gemma-7b, Phi-3 Small, Phi-3 Mini, Zephyr, and Mistral-7b, for their ability to predict hiring outcomes in both zero-shot and few-shot settings. Using only features extracted from applicants’ submissions, these models, on average, achieved an AUC above 0.5 in zero-shot settings. Providing a few examples similar to the job applicants, selected via a nearest neighbor search, improved the prediction rate marginally, indicating that the models perform competently even without task-specific fine-tuning. For Phi-3 Small and Mixtral, all reported performance metrics fell within the 95% confidence interval across evaluation strategies. Model outputs were interpreted quantitatively via post hoc explainability techniques and qualitatively through prompt engineering, revealing that decisions are largely attributable to knowledge acquired during pre-training. A task-specific MLP classifier trained solely on the provided dataset outperformed the strongest foundation model (Zephyr in the 5-shot setting) by only approximately 3 percentage points on accuracy, while all the foundation models outperformed the baseline model by more than 15 percentage points on F1 and recall, underscoring the competitive strength of general-purpose language models in the hiring domain. Full article
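The AUC threshold of 0.5 referenced above is chance level; a small rank-based implementation makes the metric concrete (this is the standard definition, not code from the paper):

```python
def auc(labels, scores):
    """Rank-based AUC: probability that a random positive outranks a random
    negative (ties count half). 0.5 is chance level."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```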
25 pages, 1669 KiB  
Article
Zero-Shot Infrared Domain Adaptation for Pedestrian Re-Identification via Deep Learning
by Xu Zhang, Yinghui Liu, Liangchen Guo and Huadong Sun
Electronics 2025, 14(14), 2784; https://doi.org/10.3390/electronics14142784 - 10 Jul 2025
Viewed by 141
Abstract
In computer vision, the performance of detectors trained under optimal lighting conditions is significantly impaired when applied to infrared domains due to the scarcity of labeled infrared target domain data and the inherent degradation in infrared image quality. Progress in cross-domain pedestrian re-identification is hindered by the lack of labeled infrared image data. To address the degradation of pedestrian recognition in infrared environments, we propose a framework for zero-shot infrared domain adaptation. This integrated approach is designed to mitigate the challenges of pedestrian recognition in infrared domains while enabling zero-shot domain adaptation. Specifically, an advanced reflectance representation learning module and an exchange–re-decomposition–coherence process are employed to learn illumination invariance and to enhance the model’s effectiveness, respectively. Additionally, the CLIP (Contrastive Language–Image Pretraining) image encoder and DINO (Distillation with No Labels) are fused for feature extraction, improving model performance under infrared conditions and enhancing its generalization capability. To further improve model performance, we introduce the Non-Local Attention (NLA) module, the Instance-based Weighted Part Attention (IWPA) module, and the Multi-head Self-Attention module. The NLA module captures global feature dependencies, particularly long-range feature relationships, effectively mitigating issues such as blurred or missing image information in feature degradation scenarios. The IWPA module focuses on localized regions to enhance model accuracy in complex backgrounds and unevenly lit scenes. Meanwhile, the Multi-head Self-Attention module captures long-range dependencies between cross-modal features, further strengthening environmental understanding and scene modeling. 
The key innovation of this work lies in the skillful combination and application of existing technologies to new domains, overcoming the challenges posed by vision in infrared environments. Experimental results on the SYSU-MM01 dataset show that, under the single-shot setting, Rank-1 Accuracy (Rank-1) and mean Average Precision (mAP) values of 37.97% and 37.25%, respectively, were achieved, while in the multi-shot setting, values of 34.96% and 34.14% were attained. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Computer Vision)
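All three attention modules above build on the same scaled dot-product core; a minimal single-head numpy sketch follows (the dimensions and random weights are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention: each output row is a softmax-weighted
    mix of all value rows, which is how long-range feature dependencies are
    captured across an entire feature map or token sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))            # 5 tokens, feature dim 8
w = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(x, *w).shape)     # (5, 8)
```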
22 pages, 2221 KiB  
Article
Enhanced Pedestrian Navigation with Wearable IMU: Forward–Backward Navigation and RTS Smoothing Techniques
by Yilei Shen, Yiqing Yao, Chenxi Yang and Xiang Xu
Technologies 2025, 13(7), 296; https://doi.org/10.3390/technologies13070296 - 9 Jul 2025
Viewed by 224
Abstract
Accurate and reliable pedestrian positioning service is essential for providing Indoor Location-Based Services (ILBSs). Zero-Velocity Update (ZUPT)-aided Strapdown Inertial Navigation System (SINS) based on foot-mounted wearable Inertial Measurement Units (IMUs) has shown great performance in pedestrian navigation systems. Though velocity errors are corrected whenever a zero-velocity measurement is available, the navigation system errors accumulated during measurement outages can be further reduced by utilizing historical data from both the stance and swing phases of pedestrian gait. Thus, in this paper, a novel Forward–Backward navigation and Rauch–Tung–Striebel smoothing (FB-RTS) navigation scheme is proposed. First, to efficiently re-estimate the past system state and reduce accumulated navigation error once a zero-velocity measurement is available, both the forward and backward integration method and the corresponding error equations are constructed. Second, to further improve navigation accuracy and reliability by exploiting historical observation information, both backward and forward RTS algorithms are established, where the system model and observation model are built under the output correction mode. Finally, both navigation results are combined to achieve the final estimation of attitude and velocity, where the position is recalculated from the optimized data. Through simulation experiments and two sets of field tests, the FB-RTS algorithm demonstrated superior performance in reducing navigation errors and smoothing pedestrian trajectories compared to the traditional ZUPT method and to the FB and RTS methods alone; its advantage becomes more pronounced over longer navigation periods, offering a robust solution for positioning applications in smart buildings, indoor wayfinding, and emergency response operations. Full article
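To illustrate the RTS ingredient of the scheme, here is a minimal 1-D Kalman filter with a Rauch–Tung–Striebel backward sweep; the random-walk model and noise values are toy assumptions, far simpler than the paper's full SINS error-state model.

```python
import numpy as np

def kalman_rts(zs, q=1e-3, r=0.25):
    """1-D random-walk Kalman filter followed by a Rauch-Tung-Striebel
    backward sweep, which re-estimates past states using later measurements
    (the smoothing idea behind FB-RTS). q, r are assumed noise variances."""
    n = len(zs)
    xf = np.zeros(n); pf = np.zeros(n)    # filtered mean / variance
    xp = np.zeros(n); pp = np.zeros(n)    # predicted mean / variance
    x, p = zs[0], 1.0
    for k, z in enumerate(zs):
        xp[k], pp[k] = x, p + q           # predict (identity dynamics)
        kgain = pp[k] / (pp[k] + r)       # measurement update
        x = xp[k] + kgain * (z - xp[k])
        p = (1 - kgain) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):        # backward RTS sweep
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs

zs = np.array([0.0, 0.1, -0.1, 0.05, 0.0, 0.02])
print(np.round(kalman_rts(zs), 3))  # smoothed trajectory
```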
28 pages, 3966 KiB  
Article
Photovoltaic Power Forecasting Based on Variational Mode Decomposition and Long Short-Term Memory Neural Network
by Zhijian Hou, Yunhui Zhang, Xuemei Cheng and Xiaojiang Ye
Energies 2025, 18(13), 3572; https://doi.org/10.3390/en18133572 - 7 Jul 2025
Viewed by 256
Abstract
The accurate forecasting of photovoltaic (PV) power is vital for grid stability. This paper presents a hybrid forecasting model that combines Variational Mode Decomposition (VMD) and Long Short-Term Memory (LSTM). The model uses VMD to decompose the PV power into modal components and residuals. These components are combined with meteorological variables and their first-order differences, and feature extraction techniques are used to generate multiple sets of feature vectors. These vectors are utilized as inputs for LSTM sub-models, which predict the modal components and residuals. Finally, the aggregation of prediction results is used to achieve the PV power prediction. Validated on Australia’s 1.8 MW Yulara PV plant, the model surpasses 13 benchmark models, achieving an MAE of 63.480 kW, RMSE of 81.520 kW, and R2 of 92.3%. Additionally, the results of a paired t-test showed that the mean differences in the MAE and RMSE were negative, and the 95% confidence intervals for the difference did not include zero, indicating statistical significance. To further evaluate the model’s robustness, white noise with varying levels of signal-to-noise ratios was introduced to the photovoltaic power and global radiation signals. The results showed that the model exhibited higher prediction accuracy and better noise tolerance compared to other models. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
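The decompose → forecast-per-component → aggregate pipeline can be sketched as follows; note that a moving-average split stands in for VMD and a persistence rule stands in for the LSTM sub-models, both purely illustrative assumptions.

```python
import numpy as np

def decompose(y, win=3):
    """Stand-in for VMD: split a series into a smooth component (moving
    average) and a residual. VMD itself solves a variational problem; this
    toy split only illustrates the decompose -> forecast -> sum pipeline."""
    kernel = np.ones(win) / win
    trend = np.convolve(y, kernel, mode="same")
    return trend, y - trend

def forecast_last(component):
    """Toy per-component forecaster (persistence), standing in for the LSTMs."""
    return component[-1]

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
trend, resid = decompose(y)
# Aggregating the component forecasts yields the series-level forecast.
prediction = forecast_last(trend) + forecast_last(resid)
print(prediction)  # ~6.0, since the components sum back to the series
```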
23 pages, 372 KiB  
Article
Computability of the Zero-Error Capacity of Noisy Channels
by Holger Boche and Christian Deppe
Information 2025, 16(7), 571; https://doi.org/10.3390/info16070571 - 3 Jul 2025
Viewed by 257
Abstract
The zero-error capacity of discrete memoryless channels (DMCs), introduced by Shannon, is a fundamental concept in information theory with significant operational relevance, particularly in settings where even a single transmission error is unacceptable. Despite its importance, no general closed-form expression or algorithm is known for computing this capacity. In this work, we investigate the computability-theoretic boundaries of the zero-error capacity and establish several fundamental limitations. Our main result shows that the zero-error capacity of noisy channels is not Banach–Mazur-computable and therefore is also not Borel–Turing-computable. This provides a strong form of non-computability that goes beyond classical undecidability, capturing the inherent discontinuity of the capacity function. As a further contribution, we analyze the deep connections between (i) the zero-error capacity of DMCs, (ii) the Shannon capacity of graphs, and (iii) Ahlswede’s operational characterization via the maximum-error capacity of 0–1 arbitrarily varying channels (AVCs). We prove that key semi-decidability questions are equivalent for all three capacities, thus unifying these problems into a common algorithmic framework. While the computability status of the Shannon capacity of graphs remains unresolved, our equivalence result clarifies what makes this problem so challenging and identifies the logical barriers that must be overcome to resolve it. Together, these results chart the computational landscape of zero-error information theory and provide a foundation for further investigations into the algorithmic intractability of exact capacity computations. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
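For a single channel use, the zero-error notion is concrete and computable: the largest set of pairwise non-confusable inputs is a maximum independent set of the confusability graph. The sketch below brute-forces this for the pentagon channel; it is the non-computability of the regularized limit over graph products, not this one-shot quantity, that the paper addresses.

```python
from itertools import combinations

def one_shot_zero_error_rate(n, confusable):
    """Size of a maximum independent set of the confusability graph on n
    inputs: the most codewords usable with zero error in one channel use.
    Brute force, so only suitable for tiny alphabets."""
    for k in range(n, 1, -1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in confusable
                   for p in combinations(subset, 2)):
                return k
    return 1

# Pentagon channel C5: input i is confusable with its cyclic neighbors.
edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
print(one_shot_zero_error_rate(5, edges))  # 2 -- e.g. inputs {0, 2}
```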
20 pages, 1979 KiB  
Article
Salivary Biosensing Opportunities for Predicting Cognitive and Physical Human Performance
by Sara Anne Goring, Evan D. Gray, Eric L. Miller and Tad T. Brunyé
Biosensors 2025, 15(7), 418; https://doi.org/10.3390/bios15070418 - 1 Jul 2025
Viewed by 410
Abstract
Advancements in biosensing technologies have introduced opportunities for non-invasive, real-time monitoring of salivary biomarkers, enabling progress in fields ranging from personalized medicine to public health. Identifying and prioritizing the most critical analytes to measure in saliva is essential for estimating physiological status and forecasting performance in applied contexts. This study examined the value of 12 salivary analytes, including hormones, metabolites, and enzymes, for predicting cognitive and physical performance outcomes in military personnel (N = 115) engaged in stressful laboratory and field tasks. We calculated a series of features to quantify time-series analyte data and applied multiple regression techniques, including Elastic Net, Partial Least Squares, and Random Forest regression, to evaluate their predictive utility for five outcomes of interest: the ability to move, shoot, communicate, navigate, and sustain performance under stress. Predictive performance was poor across all models, with R-squared values near zero and limited evidence that salivary analytes provided stable or meaningful performance predictions. While certain features (e.g., post-peak slopes and variance metrics) appeared more frequently than others, no individual analyte emerged as a reliable predictor. These results suggest that salivary biomarkers alone are unlikely to provide robust insights into cognitive and physical performance outcomes. Future research may benefit from combining salivary and other biosensor data with contextual variables to improve predictive accuracy in real-world settings. Full article
(This article belongs to the Section Wearable Biosensors)
21 pages, 32152 KiB  
Article
Efficient Gamma-Based Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement
by Huitao Zhao, Shaoping Xu, Liang Peng, Hanyang Hu and Shunliang Jiang
Appl. Sci. 2025, 15(13), 7382; https://doi.org/10.3390/app15137382 - 30 Jun 2025
Viewed by 253
Abstract
In recent years, the continuous advancement of deep learning technology and its integration into the domain of low-light image enhancement have led to a steady improvement in enhancement effects. However, this progress has been accompanied by an increase in model complexity, imposing significant constraints on applications that demand high real-time performance. To address this challenge, inspired by the state-of-the-art Zero-DCE approach, we introduce a novel method that transforms the low-light image enhancement task into a curve estimation task tailored to each individual image, utilizing a lightweight shallow neural network. Specifically, we first design a novel curve formula based on Gamma correction, which we call the Gamma-based light-enhancement (GLE) curve. This curve enables outstanding performance in the enhancement task by directly mapping the input low-light image to the enhanced output at the pixel level, thereby eliminating the need for multiple iterative mappings as required in the Zero-DCE algorithm. As a result, our approach significantly improves inference speed. Additionally, we employ a lightweight network architecture to minimize computational complexity and introduce a novel global channel attention (GCA) module to enhance the nonlinear mapping capability of the neural network. The GCA module assigns distinct weights to each channel, allowing the network to focus more on critical features. Consequently, it enhances the effectiveness of low-light image enhancement while incurring a minimal computational cost. Finally, our method is trained using a set of zero-reference loss functions, akin to the Zero-DCE approach, without relying on paired or unpaired data. This ensures the practicality and applicability of our proposed method. 
The experimental results of both quantitative and qualitative comparisons demonstrate that, despite its lightweight design, the images enhanced using our method not only exhibit perceptual quality, authenticity, and contrast comparable to those of mainstream state-of-the-art (SOTA) methods but in some cases even surpass them. Furthermore, our model demonstrates very fast inference speed, making it suitable for real-time inference in resource-constrained or mobile environments, with broad application prospects. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
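The Gamma correction the GLE curve builds on can be sketched in a few lines; the fixed exponent here is an illustrative assumption, whereas in the paper the curve parameters are predicted per image by the shallow network.

```python
import numpy as np

def gle_enhance(img, gamma=0.4):
    """One-pass gamma brightening of a [0, 1]-normalized image -- the family
    of curves the GLE formulation builds on. A single fixed gamma is an
    assumption; the paper predicts the mapping per image at the pixel level."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return img ** gamma  # gamma < 1 lifts dark pixels more than bright ones

dark = np.array([[0.05, 0.1], [0.2, 0.8]])
print(np.round(gle_enhance(dark), 3))  # dark pixels lifted, highlights kept
```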
19 pages, 14879 KiB  
Article
Computational Adaptive Optics for HAR Hybrid Trench Array Topography Measurement by Utilizing Coherence Scanning Interferometry
by Wenyou Qiao, Zhishan Gao, Qun Yuan, Lu Chen, Zhenyan Guo, Xiao Huo and Qian Wang
Sensors 2025, 25(13), 4085; https://doi.org/10.3390/s25134085 - 30 Jun 2025
Viewed by 246
Abstract
High aspect ratio (HAR) sample-induced aberrations seriously affect topography measurement of the bottom of microstructures by coherence scanning interferometry (CSI). Previous research proposed an aberration compensating method using deformable mirrors at the conjugate position of the pupil. However, it failed to compensate for the shift-variant aberrations introduced by an HAR hybrid trench array composed of multiple trenches with different parameters. Here, we propose a computational aberration correction method for measuring the topography of HAR structures using the particle swarm optimization (PSO) algorithm, without constructing a database or requiring prior knowledge, and a phase filter in the spatial frequency domain is constructed to restore interference signals distorted by shift-variant aberrations. Since the aberrations of each sampling point are essentially unchanged in the field of view corresponding to a single trench, each trench under test can be considered a separate isoplanatic region. Therefore, a multi-channel aberration correction scheme utilizing the virtual phase filter based on isoplanatic region segmentation is established for hybrid trench array samples. The PSO algorithm is adopted to derive the optimal Zernike polynomial coefficients representing the filter, in which the interference fringe contrast is taken as the optimization criterion. Additionally, aberrations introduce phase distortion within the 3D transfer function (3D-TF), while the 3D-TF bandwidth remains unchanged. Accordingly, we set the non-zero part of the 3D-TF as a window function to preprocess the interferogram by filtering out the signals outside the window. Finally, experiments are performed on a single trench sample and two hybrid trench array samples with depths ranging from 100 to 300 μm and widths from 10 to 30 μm to verify the effectiveness and accuracy of the proposed method. Full article
(This article belongs to the Section Physical Sensors)
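A minimal PSO sketch showing the kind of optimizer used for the Zernike coefficients; here it minimizes a simple sphere function with generic hyperparameters (inertia 0.7, acceleration 1.5, both assumptions), rather than the paper's fringe-contrast criterion.

```python
import numpy as np

def pso(f, dim, n=20, iters=100, seed=0):
    """Minimal particle swarm optimizer (gbest topology). The paper applies
    PSO to Zernike coefficients with fringe contrast as the criterion; here
    we just minimize a test function to show the mechanics."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

best = pso(lambda p: np.sum(p ** 2), dim=3)
print(best)  # near the optimum at the origin
```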
19 pages, 994 KiB  
Article
(Finite-Time) Thermodynamics, Hyperbolicity, Lorentz Invariance: Study of an Example
by Bernard Guy
Entropy 2025, 27(7), 700; https://doi.org/10.3390/e27070700 - 29 Jun 2025
Viewed by 288
Abstract
Our study lies at the intersection of three fields: finite-time thermodynamics, relativity theory, and the theory of hyperbolic conservation laws. Each of these fields has its own requirements and richness, and in order to link them together as effectively as possible, we have simplified each one, reducing it to its fundamental principles. The example chosen concerns the propagation of chemical changes in a very large reactor, as found in geology. We ask ourselves two sets of questions: (1) How do the finiteness of propagation speeds modeled by hyperbolic problems (diffusion is neglected) and the finiteness of the time allocated to transformations interact? (2) How do the finiteness of time and that of resources interact? The similarity in the behavior of the pairs of variables (x, t and resources, resource flows) in Lorentz relativistic transformations allows us to put them on the same level and propose complementary-type relationships between the two classes of finiteness. If times are finite, so are resources, which can be neither zero nor infinite. In hyperbolic problems, a condition is necessary to select solutions with a physical sense among the multiplicity of weak solutions: this is given by the entropy production, which is Lorentz invariant (and not entropy alone). Full article
(This article belongs to the Special Issue The First Half Century of Finite-Time Thermodynamics)
11 pages, 3071 KiB  
Article
Pathologic Response and Survival Outcomes on HER2-Low vs. HER2-Zero in Breast Cancer Receiving Neoadjuvant Chemotherapy
by Rumeysa Colak, Caner Kapar, Ezgi Degerli, Seher Yildiz Tacar, Aysegul Akdogan Gemici, Nursadan Gergerlioglu, Serdar Altinay and Mesut Yilmaz
Medicina 2025, 61(7), 1168; https://doi.org/10.3390/medicina61071168 - 27 Jun 2025
Viewed by 235
Abstract
Background and Objectives: The clinical value of HER2-low breast cancer (BC), defined by immunohistochemistry (IHC) scores of 1+ or 2+/ISH-negative without HER2 amplification, remains unclear in the neoadjuvant setting. This study aimed to determine whether HER2-low and HER2-zero tumors differ in pathological complete response (pCR) rates and disease-free survival (DFS) among early-stage breast cancer patients undergoing neoadjuvant chemotherapy (NAC). Materials and Methods: We retrospectively analyzed 134 early BC patients treated with NAC between 2017 and 2023. Patients were categorized as HER2-zero (IHC 0) or HER2-low (IHC 1+ or 2+/ISH–). The primary endpoint was total pCR (tpCR); secondary endpoints included breast (bpCR), nodal (npCR), and radiologic complete response (rCR), alongside DFS analysis stratified by hormone receptor (HR) status. Results: Of the cohort, 91 patients (67.9%) were HER2-zero and 43 (32.1%) were HER2-low. There was no statistically significant difference in tpCR (26.4% vs. 27.9%, p = 0.852), bpCR (28.6% vs. 30.2%, p = 0.843), npCR (37.4% vs. 32.6%, p = 0.588), and rCR (23.1% vs. 30.2%, p = 0.374) between HER2-zero and HER2-low groups. DFS did not significantly differ between HER2-zero and HER2-low groups overall (p = 0.714), nor within HR-positive (p = 0.540) or TNBC (p = 0.523) subgroups. Conclusions: HER2-low tumors demonstrated similar pathological responses and survival outcomes compared to HER2-zero tumors. While a HER2-low status does not appear to define a distinct biological subtype in early BC, it remains a relevant classification for emerging HER2-targeted therapies, needing further investigation in prospective studies. Full article
(This article belongs to the Section Oncology)
12 pages, 918 KiB  
Article
Fault-Tolerant Edge Metric Dimension of Zero-Divisor Graphs of Commutative Rings
by Omaima Alshanquiti, Malkesh Singh and Vijay Kumar Bhat
Axioms 2025, 14(7), 499; https://doi.org/10.3390/axioms14070499 - 26 Jun 2025
Viewed by 220
Abstract
In recent years, the intersection of algebraic structures and graph-theoretic concepts has attracted significant interest, particularly through the study of zero-divisor graphs derived from commutative rings. Let Z*(S) be the set of non-zero zero divisors of a finite commutative ring S with unity. Consider a graph Γ(S) with vertex set V(Γ) = Z*(S), in which two vertices are adjacent if and only if their product is zero. This graph Γ(S) is known as the zero-divisor graph of S. Zero-divisor graphs provide a powerful bridge between abstract algebra and graph theory. The zero-divisor graphs of finite commutative rings and their minimum fault-tolerant edge-resolving sets are studied in this article. Through analytical and constructive techniques, we highlight how the algebraic properties of the ring influence the edge metric structure of its associated graph. In addition, the existence of a connected graph G, constructed from a star graph K1,2n, having a resolving set of cardinality 2n + 2 is studied. Full article
(This article belongs to the Special Issue Recent Developments in Graph Theory)
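The construction of Γ(S) described above is easy to reproduce for S = Z_n; the following builds the zero-divisor graph of Z_12 (the choice n = 12 is just an example).

```python
from itertools import combinations

def zero_divisor_graph(n):
    """Zero-divisor graph of the ring Z_n: vertices are the non-zero zero
    divisors, with an edge when the product is 0 mod n -- the construction
    Gamma(S) described above, specialized to S = Z_n."""
    vertices = [a for a in range(1, n)
                if any(a * b % n == 0 for b in range(1, n))]
    edges = {frozenset((a, b)) for a, b in combinations(vertices, 2)
             if a * b % n == 0}
    return vertices, edges

v, e = zero_divisor_graph(12)
print(v)  # [2, 3, 4, 6, 8, 9, 10]
print(sorted(sorted(p) for p in e))
```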