Search Results (782)

Search Parameters:
Keywords = iterated Ore extension

25 pages, 17227 KiB  
Article
Distributed Online Voltage Control with Feedback Delays Under Coupled Constraints for Distribution Networks
by Jinxuan Liu, Yanjian Peng, Xiren Zhang, Zhihao Ning and Dingzhong Fan
Technologies 2025, 13(8), 327; https://doi.org/10.3390/technologies13080327 - 31 Jul 2025
Abstract
High penetration of photovoltaic (PV) generation presents new challenges for voltage regulation in distribution networks (DNs), primarily due to output intermittency and constrained reactive power capabilities. This paper introduces a distributed voltage control method leveraging reactive power compensation from PV inverters. Instead of relying on centralized computation, the proposed method allows each inverter to make local decisions using real-time voltage measurements and delayed communication with neighboring PV nodes. To account for practical asynchronous communication and feedback delay, a Distributed Online Primal–Dual Push–Sum (DOPP) algorithm that integrates a fixed-step delay model into the push–sum coordination framework is developed. Through extensive case studies on a modified IEEE 123-bus system, it has been demonstrated that the proposed method maintains robust performance under both static and dynamic scenarios, even in the presence of fixed feedback delays. Specifically, in static scenarios, the proposed strategy rapidly eliminates voltage violations within 50–100 iterations, effectively regulating all nodal voltages into the acceptable range of [0.95, 1.05] p.u. even under feedback delays with a delay step of 10. In dynamic scenarios, the proposed strategy ensures 100% voltage compliance across all nodes, demonstrating superior voltage regulation and reactive power coordination performance over conventional droop and incremental control approaches. Full article
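
The paper's DOPP algorithm is not reproduced here; as a rough, self-contained illustration of the push-sum-with-delay idea it builds on, the sketch below runs plain push-sum average consensus over a small directed ring in which every message arrives a fixed number of steps late. The topology, the deviation values, and the delay of 3 steps are all invented for the example, and the primal-dual voltage-control layer is omitted.

```python
import numpy as np
from collections import deque

# Directed ring of n nodes; each node keeps half of its mass and pushes half to its successor.
n, delay, steps = 6, 3, 200
rng = np.random.default_rng(0)
v_dev = rng.uniform(-0.04, 0.04, size=n)        # toy local voltage deviations (p.u.)

x = v_dev.copy()          # push-sum numerators
w = np.ones(n)            # push-sum denominators
# in_flight[d] holds messages that will be delivered d steps from now
in_flight = deque([np.zeros((n, 2)) for _ in range(delay)], maxlen=delay)

for _ in range(steps):
    keep_x, keep_w = 0.5 * x, 0.5 * w
    send = np.stack([0.5 * x, 0.5 * w], axis=1)
    arriving = in_flight.popleft()               # messages sent `delay` steps ago
    in_flight.append(np.roll(send, 1, axis=0))   # node i's message goes to node (i + 1) % n
    x = keep_x + arriving[:, 0]
    w = keep_w + arriving[:, 1]

estimate = x / w   # each node's estimate of the network-average deviation
print(estimate, v_dev.mean())
```
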
14 pages, 2206 KiB  
Article
Numerical Simulation Study on the Fracture Process of CFRP-Reinforced Concrete
by Xiangqian Fan, Jueding Liu, Li Zou and Juan Wang
Buildings 2025, 15(15), 2636; https://doi.org/10.3390/buildings15152636 - 25 Jul 2025
Viewed by 166
Abstract
To investigate the crack extension mechanism in CFRP-reinforced concrete, this paper derives analytical expressions for the external load and the crack opening displacement during the fracture process of CFRP-reinforced concrete beams, taking the crack initiation toughness criterion and the Paris displacement formula as the theoretical basis. A numerical iterative method was then used to simulate the fracture process of CFRP-reinforced concrete beams and to analyze the effect of different initial crack lengths on that process. The results indicate that the simulated crack initiation loads agree well with the test results, and that the simulated crack propagation curves are essentially consistent with the tests until the CFRP-concrete interface debonds. The predicted ultimate loads are lower than the measured values, which makes the prediction conservative, and therefore safe, for fracture assessment in practical engineering. As the initial crack length increases, its influence on the critical effective crack propagation length becomes more pronounced. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)

22 pages, 16961 KiB  
Article
Highly Accelerated Dual-Pose Medical Image Registration via Improved Differential Evolution
by Dibin Zhou, Fengyuan Xing, Wenhao Liu and Fuchang Liu
Sensors 2025, 25(15), 4604; https://doi.org/10.3390/s25154604 - 25 Jul 2025
Viewed by 181
Abstract
Medical image registration is an indispensable preprocessing step to align medical images to a common coordinate system before in-depth analysis. The registration precision is critical to the following analysis. In addition to representative image features, the initial pose settings and multiple poses in images will significantly affect the registration precision, which is largely neglected in state-of-the-art works. To address this, the paper proposes a dual-pose medical image registration algorithm based on improved differential evolution. More specifically, the proposed algorithm defines a composite similarity measurement based on contour points and utilizes this measurement to calculate the similarity between frontal–lateral positional DRR (Digitally Reconstructed Radiograph) images and X-ray images. In order to ensure the accuracy of the registration algorithm in particular dimensions, the algorithm implements a dual-pose registration strategy. A PDE (Phased Differential Evolution) algorithm is proposed for iterative optimization, enhancing the optimization algorithm’s ability to globally search in low-dimensional space, aiding in the discovery of global optimal solutions. Extensive experimental results demonstrate that the proposed algorithm provides more accurate similarity metrics compared to conventional registration algorithms; the dual-pose registration strategy largely reduces errors in specific dimensions, resulting in reductions of 67.04% and 71.84%, respectively, in rotation and translation errors. Additionally, the algorithm is more suitable for clinical applications due to its lower complexity. Full article
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
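
As a rough illustration of the differential evolution machinery underlying the registration search described above (not the paper's Phased Differential Evolution variant or its contour-based similarity measure), the sketch below is a classic DE/rand/1/bin minimizer applied to a made-up 6-DoF pose cost; every constant and the quadratic stand-in cost are assumptions for the example.

```python
import numpy as np

def de_minimize(cost, bounds, pop_size=30, F=0.6, CR=0.9, iters=200, seed=0):
    """Classic DE/rand/1/bin minimizer over box bounds (lo, hi) per dimension."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy stand-in for a DRR/X-ray similarity cost over a 6-DoF pose (3 rotations, 3 translations).
target = np.array([5.0, -3.0, 10.0, 0.1, -0.2, 0.05])
cost = lambda pose: float(np.sum((pose - target) ** 2))
pose, err = de_minimize(cost, bounds=[(-20, 20)] * 3 + [(-1, 1)] * 3)
print(pose, err)
```
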

25 pages, 6911 KiB  
Article
Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network
by Li Zhao, Tongyang Zhu, Chuang Wang, Feng Tian and Hongge Yao
Mathematics 2025, 13(15), 2370; https://doi.org/10.3390/math13152370 - 24 Jul 2025
Viewed by 268
Abstract
To address the challenges of image inpainting in scenarios with extensive or irregular missing regions—particularly detail oversmoothing, structural ambiguity, and textural incoherence—this paper proposes an Image Structure-Guided (ISG) framework that hierarchically integrates structural priors with semantic-aware texture synthesis. The proposed methodology advances a two-stage restoration paradigm: (1) Structural Prior Extraction, where adaptive edge detection algorithms identify residual contours in corrupted regions, and a transformer-enhanced network reconstructs globally consistent structural maps through contextual feature propagation; (2) Structure-Constrained Texture Synthesis, wherein a multi-scale generator with hybrid dilated convolutions and channel attention mechanisms iteratively refines high-fidelity textures under explicit structural guidance. The framework introduces three innovations: (1) a hierarchical feature fusion architecture that synergizes multi-scale receptive fields with spatial-channel attention to preserve long-range dependencies and local details simultaneously; (2) spectral-normalized Markovian discriminator with gradient-penalty regularization, enabling adversarial training stability while enforcing patch-level structural consistency; and (3) dual-branch loss formulation combining perceptual similarity metrics with edge-aware constraints to align synthesized content with both semantic coherence and geometric fidelity. Our experiments on the two benchmark datasets (Places2 and CelebA) have demonstrated that our framework achieves more unified textures and structures, bringing the restored images closer to their original semantic content. Full article
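
One of the named components, the spectral-normalized Markovian (patch-level) discriminator, can be sketched compactly; the following is a minimal PyTorch stand-in (layer counts and channel widths are guesses, not the paper's architecture), shown only to make the patch-level output of such a discriminator concrete.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNPatchDiscriminator(nn.Module):
    """Markovian (patch-level) discriminator with spectral normalization on every conv."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(
                spectral_norm(nn.Conv2d(cin, cout, kernel_size=4, stride=stride, padding=1)),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            block(in_ch, base, 2),
            block(base, base * 2, 2),
            block(base * 2, base * 4, 2),
            spectral_norm(nn.Conv2d(base * 4, 1, kernel_size=4, stride=1, padding=1)),
        )

    def forward(self, x):
        return self.net(x)   # one real/fake score per receptive-field patch

disc = SNPatchDiscriminator()
scores = disc(torch.randn(2, 3, 256, 256))
print(scores.shape)          # a grid of patch scores, e.g. torch.Size([2, 1, 31, 31])
```
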

35 pages, 11039 KiB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Viewed by 151
Abstract
A novel unified progressive hybrid censoring is introduced to combine both progressive and hybrid censoring plans to allow flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced from such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build the asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis—Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference using informative prior information. Two real-world applications utilizing rare minerals from gold and diamond durability studies are examined to demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science. By applying four different optimality criteria to multiple competing plans, an analysis of various progressive censoring strategies that yield the best performance is conducted. The proposed censoring framework is effectively applied to real-world datasets involving diamonds and gold, demonstrating its practical utility in modeling the reliability and failure behavior of rare and high-value minerals. Full article
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
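
As a minimal illustration of the Metropolis-Hastings step used for posterior sampling (on a toy exponential lifetime model with a gamma prior rather than the inverted Nadarajah-Haghighi model or the unified progressive hybrid censoring scheme), the sketch below runs a random-walk sampler on the log of the rate parameter; all data and prior settings are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=1 / 1.7, size=60)   # toy lifetimes, true rate 1.7

a0, b0 = 2.0, 1.0          # independent Gamma(a0, b0) prior on the rate (an assumption)

def log_post(rate):
    if rate <= 0:
        return -np.inf
    log_lik = len(data) * np.log(rate) - rate * data.sum()   # exponential likelihood
    log_prior = (a0 - 1) * np.log(rate) - b0 * rate          # gamma prior, up to a constant
    return log_lik + log_prior

# Random-walk Metropolis-Hastings on log(rate) so proposals stay positive.
n_iter, step = 20_000, 0.15
theta = np.log(1.0)
chain = np.empty(n_iter)
for t in range(n_iter):
    prop = theta + step * rng.normal()
    # The log transform contributes a Jacobian term (+theta) to the acceptance ratio.
    log_alpha = (log_post(np.exp(prop)) + prop) - (log_post(np.exp(theta)) + theta)
    if np.log(rng.random()) < log_alpha:
        theta = prop
    chain[t] = np.exp(theta)

burned = chain[5000:]
print(burned.mean(), np.percentile(burned, [2.5, 97.5]))   # posterior mean and 95% credible interval
```
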

34 pages, 1247 KiB  
Article
SBCS-Net: Sparse Bayesian and Deep Learning Framework for Compressed Sensing in Sensor Networks
by Xianwei Gao, Xiang Yao, Bi Chen and Honghao Zhang
Sensors 2025, 25(15), 4559; https://doi.org/10.3390/s25154559 - 23 Jul 2025
Viewed by 213
Abstract
Compressed sensing is widely used in modern resource-constrained sensor networks. However, achieving high-quality and robust signal reconstruction under low sampling rates and noise interference remains challenging. Traditional CS methods have limited performance, so many deep learning-based CS models have been proposed. Although these models show strong fitting capabilities, they often lack the ability to handle complex noise in sensor networks, which affects their performance stability. To address these challenges, this paper proposes SBCS-Net. This framework innovatively expands the iterative process of sparse Bayesian compressed sensing using convolutional neural networks and Transformer. The core of SBCS-Net is to optimize key SBL parameters through end-to-end learning. This can adaptively improve signal sparsity and probabilistically process measurement noise, while fully leveraging the powerful feature extraction and global context modeling capabilities of deep learning modules. To comprehensively evaluate its performance, we conduct systematic experiments on multiple public benchmark datasets. These studies include comparisons with various advanced and traditional compressed sensing methods, comprehensive noise robustness tests, ablation studies of key components, computational complexity analysis, and rigorous statistical significance tests. Extensive experimental results consistently show that SBCS-Net outperforms many mainstream methods in both reconstruction accuracy and visual quality. In particular, it exhibits excellent robustness under challenging conditions such as extremely low sampling rates and strong noise. Therefore, SBCS-Net provides an effective solution for high-fidelity, robust signal recovery in sensor networks and related fields. Full article
(This article belongs to the Section Sensor Networks)
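
For orientation on the underlying reconstruction problem, the sketch below solves a toy compressed-sensing instance with classical ISTA (iterative soft thresholding), one of the traditional baselines such learned methods are compared against; it is not SBCS-Net, and the signal, sensing matrix, and regularization weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true + 0.01 * rng.normal(size=m)

# ISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative reconstruction error
```
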

20 pages, 5862 KiB  
Article
ICP-Based Mapping and Localization System for AGV with 2D LiDAR
by Felype de L. Silva, Eisenhawer de M. Fernandes, Péricles R. Barros, Levi da C. Pimentel, Felipe C. Pimenta, Antonio G. B. de Lima and João M. P. Q. Delgado
Sensors 2025, 25(15), 4541; https://doi.org/10.3390/s25154541 - 22 Jul 2025
Viewed by 183
Abstract
This work presents the development of a functional real-time SLAM system designed to enhance the perception capabilities of an Automated Guided Vehicle (AGV) using only a 2D LiDAR sensor. The proposal aims to address recurring gaps in the literature, such as the need for low-complexity solutions that are independent of auxiliary sensors and capable of operating on embedded platforms with limited computational resources. The system integrates scan alignment techniques based on the Iterative Closest Point (ICP) algorithm. Experimental validation in a controlled environment indicated better performance using Gauss–Newton optimization and the point-to-plane metric, achieving pose estimation accuracy of 99.42%, 99.6%, and 99.99% in the position (x, y) and orientation (θ) components, respectively. Subsequently, the system was adapted for operation with data from the onboard sensor, integrating a lightweight graphical interface for real-time visualization of scans, estimated pose, and the evolving map. Despite the moderate update rate, the system proved effective for robotic applications, enabling coherent localization and progressive environment mapping. The modular architecture developed allows for future extensions such as trajectory planning and control. The proposed solution provides a robust and adaptable foundation for mobile platforms, with potential applications in industrial automation, academic research, and education in mobile robotics. Full article
(This article belongs to the Section Remote Sensors)
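
A minimal point-to-point ICP in 2D (using nearest neighbours and an SVD-based alignment step, rather than the point-to-plane metric and Gauss-Newton optimization the paper found best) conveys the scan-alignment idea; the L-shaped toy scan and pose below are fabricated for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iters=30):
    """Minimal point-to-point ICP: returns R (2x2), t (2,) aligning source onto target."""
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest target point for each source point
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step            # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the total transform
    return R, t

# Toy scan: rotate and shift a noisy L-shaped point set, then recover the pose.
rng = np.random.default_rng(0)
pts = np.vstack([np.c_[np.linspace(0, 2, 50), np.zeros(50)],
                 np.c_[np.zeros(50), np.linspace(0, 1, 50)]])
theta = np.deg2rad(12)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scan = pts @ R_true.T + np.array([0.3, -0.2]) + 0.005 * rng.normal(size=pts.shape)
R_est, t_est = icp_2d(pts, scan)
print(np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)
```
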

18 pages, 3220 KiB  
Article
High-Throughput Microfluidic Electroporation (HTME): A Scalable, 384-Well Platform for Multiplexed Cell Engineering
by William R. Gaillard, Jess Sustarich, Yuerong Li, David N. Carruthers, Kshitiz Gupta, Yan Liang, Rita Kuo, Stephen Tan, Sam Yoder, Paul D. Adams, Hector Garcia Martin, Nathan J. Hillson and Anup K. Singh
Bioengineering 2025, 12(8), 788; https://doi.org/10.3390/bioengineering12080788 - 22 Jul 2025
Viewed by 429
Abstract
Electroporation-mediated gene delivery is a cornerstone of synthetic biology, offering several advantages over other methods: higher efficiencies, broader applicability, and simpler sample preparation. Yet, electroporation protocols are often challenging to integrate into highly multiplexed workflows, owing to limitations in their scalability and tunability. These challenges ultimately increase the time and cost per transformation. As a result, rapidly screening genetic libraries, exploring combinatorial designs, or optimizing electroporation parameters requires extensive iterations, consuming large quantities of expensive custom-made DNA and cell lines or primary cells. To address these limitations, we have developed a High-Throughput Microfluidic Electroporation (HTME) platform that includes a 384-well electroporation plate (E-Plate) and control electronics capable of rapidly electroporating all wells in under a minute with individual control of each well. Fabricated using scalable and cost-effective printed-circuit-board (PCB) technology, the E-Plate significantly reduces consumable costs and reagent consumption by operating on nano to microliter volumes. Furthermore, individually addressable wells facilitate rapid exploration of large sets of experimental conditions to optimize electroporation for different cell types and plasmid concentrations/types. Use of the standard 384-well footprint makes the platform easily integrable into automated workflows, thereby enabling end-to-end automation. We demonstrate transformation of E. coli with pUC19 to validate the HTME’s core functionality, achieving at least a single colony forming unit in more than 99% of wells and confirming the platform’s ability to rapidly perform hundreds of electroporations with customizable conditions. This work highlights the HTME’s potential to significantly accelerate synthetic biology Design-Build-Test-Learn (DBTL) cycles by mitigating the transformation/transfection bottleneck. Full article
(This article belongs to the Section Cellular and Molecular Bioengineering)

25 pages, 22731 KiB  
Article
Scalable and Efficient GCL Scheduling for Time-Aware Shaping in Autonomous and Cyber-Physical Systems
by Chengwei Zhang and Yun Wang
Future Internet 2025, 17(8), 321; https://doi.org/10.3390/fi17080321 - 22 Jul 2025
Viewed by 216
Abstract
The evolution of the internet towards supporting time-critical applications, such as industrial cyber-physical systems (CPSs) and autonomous systems, has created an urgent demand for networks capable of providing deterministic, low-latency communication. Autonomous vehicles represent a particularly challenging use case within this domain, requiring both reliability and determinism for massive data streams—a requirement that traditional Ethernet technologies cannot satisfy. This paper addresses this critical gap by proposing a comprehensive scheduling framework based on Time-Aware Shaping (TAS) within the Time-Sensitive Networking (TSN) standard. The framework features two key contributions: (1) a novel baseline scheduling algorithm that incorporates a sub-flow division mechanism to enhance schedulability for high-bandwidth streams, computing Gate Control Lists (GCLs) via an iterative SMT-based method; (2) a separate heuristic-based computation acceleration algorithm to enable fast, scalable GCL generation for large-scale networks. Through extensive simulations, the proposed baseline algorithm demonstrates a reduction in end-to-end latency of up to 59% compared to standard methods, with jitter controlled at the nanosecond level. The acceleration algorithm is shown to compute schedules for 200 data streams in approximately one second. The framework’s effectiveness is further validated on a real-world TSN hardware testbed, confirming its capability to achieve deterministic transmission with low latency and jitter in a physical environment. This work provides a practical and scalable solution for deploying deterministic communication in complex autonomous and cyber-physical systems. Full article
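
The paper's SMT-based GCL computation is not reproduced here, but the flavour of encoding gate windows as SMT constraints can be shown with a tiny Z3 sketch (the z3-solver package is assumed installed): three made-up streams get non-overlapping gate-open windows inside one cycle, with a simple objective standing in for the paper's optimization criteria.

```python
from z3 import Int, Optimize, And, Or, sat

cycle_us = 1000                                   # one scheduling cycle, in microseconds
streams = {"cam": 300, "lidar": 250, "ctrl": 50}  # toy transmission windows (us) per stream

opt = Optimize()
start = {name: Int(f"start_{name}") for name in streams}

for name, dur in streams.items():
    opt.add(And(start[name] >= 0, start[name] + dur <= cycle_us))  # window fits in the cycle

names = list(streams)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = names[i], names[j]
        # Gate-open windows on the shared egress port must not overlap.
        opt.add(Or(start[a] + streams[a] <= start[b],
                   start[b] + streams[b] <= start[a]))

opt.minimize(start["ctrl"] + streams["ctrl"])      # e.g. prefer early completion of the control stream

if opt.check() == sat:
    model = opt.model()
    for name in names:
        s = model[start[name]].as_long()
        print(f"{name}: gate open [{s}, {s + streams[name]}) us")
```
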

22 pages, 13310 KiB  
Article
Dual-Domain Joint Learning Reconstruction Method (JLRM) Combined with Physical Process for Spectral Computed Tomography (SCT)
by Genwei Ma, Ping Yang and Xing Zhao
Symmetry 2025, 17(7), 1165; https://doi.org/10.3390/sym17071165 - 21 Jul 2025
Viewed by 147
Abstract
Spectral computed tomography (SCT) enables material decomposition, artifact reduction, and contrast enhancement, leveraging symmetry principles across its technical framework to enhance material differentiation and image quality. However, its nonlinear data acquisition process involving noise and scatter leads to a highly ill-posed inverse problem. To address this, we propose a dual-domain iterative reconstruction network that combines joint learning reconstruction with physical process modeling, which also uses the symmetric complementary properties of the two domains for optimization. A dedicated physical module models the SCT forward process to ensure stability and accuracy, while a residual-to-residual strategy reduces the computational burden of model-based iterative reconstruction (MBIR). Our method, which won the AAPM DL-Spectral CT Challenge, achieves high-accuracy material decomposition. Extensive evaluations also demonstrate its robustness under varying noise levels, confirming the method’s generalizability. This integrated approach effectively combines the strengths of physical modeling, MBIR, and deep learning. Full article
(This article belongs to the Section Mathematics)

22 pages, 875 KiB  
Article
Towards Robust Synthetic Data Generation for Simplification of Text in French
by Nikos Tsourakis
Mach. Learn. Knowl. Extr. 2025, 7(3), 68; https://doi.org/10.3390/make7030068 - 19 Jul 2025
Viewed by 315
Abstract
We present a pipeline for synthetic simplification of text in French that combines large language models with structured semantic guidance. Our approach enhances data generation by integrating contextual knowledge from Wikipedia and Vikidia articles and injecting symbolic control through lightweight knowledge graphs. To construct document-level representations, we implement a progressive summarization process that incrementally builds running summaries and extracts key ideas. Simplifications are generated iteratively and assessed using semantic comparisons between input and output graphs, enabling targeted regeneration when critical information is lost. Our system is implemented using LangChain’s orchestration framework, allowing modular and extensible coordination of LLM components. Evaluation shows that context-aware prompting and semantic feedback improve simplification quality across successive iterations. Full article
(This article belongs to the Special Issue Knowledge Graphs and Large Language Models)

22 pages, 1295 KiB  
Article
Enhanced Similarity Matrix Learning for Multi-View Clustering
by Dongdong Zhang, Pusheng Wang and Qin Li
Electronics 2025, 14(14), 2845; https://doi.org/10.3390/electronics14142845 - 16 Jul 2025
Viewed by 153
Abstract
Graph-based multi-view clustering is a fundamental analysis method that learns the similarity matrix of multi-view data. Despite its success, it has two main limitations: (1) complementary information is not fully utilized by directly combining graphs from different views; (2) existing multi-view clustering methods do not adequately address redundancy and noise in the data, significantly affecting performance. To address these issues, we propose the Enhanced Similarity Matrix Learning (ES-MVC) for multi-view clustering, which dynamically integrates global graphs from all views with local graphs from each view to create an improved similarity matrix. Specifically, the global graph captures cross-view consistency, while the local graph preserves view-specific geometric patterns. The balance between global and local graphs is controlled through an adaptive weighting strategy, where hyperparameters adjust the relative importance of each graph, effectively capturing complementary information. In this way, our method can learn the clustering structure that contains fully complementary information, leveraging both global and local graphs. Meanwhile, we utilize a robust similarity matrix initialization to reduce the negative effects caused by noisy data. For model optimization, we derive an effective optimization algorithm that converges quickly, typically requiring fewer than five iterations for most datasets. Extensive experimental results on diverse real-world datasets demonstrate the superiority of our method over state-of-the-art multi-view clustering methods. In our experiments on datasets such as MSRC-v1, Caltech101, and HW, our proposed method achieves superior clustering performance with average accuracy (ACC) values of 0.7643, 0.6097, and 0.9745, respectively, outperforming the most advanced multi-view clustering methods such as OMVFC-LICAG, which yield ACC values of 0.7284, 0.4512, and 0.8372 on the same datasets. Full article
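
A toy version of the global-plus-local graph fusion idea (with a fixed weight standing in for the paper's adaptive weighting and optimization) can be put together from standard scikit-learn pieces; the two "views" below are synthetic projections of the digits dataset, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score

digits = load_digits()
X, y = digits.data, digits.target

# Two synthetic "views" of the same samples: raw pixels and a coarse 4x4 pooling.
view1 = X
view2 = X.reshape(-1, 8, 8).reshape(-1, 4, 2, 4, 2).mean(axis=(2, 4)).reshape(len(X), -1)

def knn_affinity(V, k=10):
    W = kneighbors_graph(V, n_neighbors=k, mode="connectivity").toarray()
    return np.maximum(W, W.T)                 # symmetrize the kNN graph

W1, W2 = knn_affinity(view1), knn_affinity(view2)
W_global = 0.5 * (W1 + W2)                    # cross-view consensus graph
alpha = 0.5                                    # fixed weight standing in for the adaptive strategy
W = alpha * W_global + (1 - alpha) * W1        # fuse global consensus with one view's local graph

labels = SpectralClustering(n_clusters=10, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(adjusted_rand_score(y, labels))
```
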

22 pages, 1703 KiB  
Article
Developing a Concept for an OPC UA Standard to Improve Interoperability in Battery Cell Production: A Methodological Approach for Standardization in Heterogeneous Production Environments
by Julia Sawodny, Simon Otte, Fabian Böttinger, Fabian Haag, Andreas Schlereth, Tom-Hendrik Hülsmann, Felix Tidde, David Roth, Arno Schmetz, Alexander Puchta, Sebastian Schabel, Thomas Bauernhansl and Jürgen Fleischer
Technologies 2025, 13(7), 302; https://doi.org/10.3390/technologies13070302 - 14 Jul 2025
Viewed by 371
Abstract
The development of interoperable and reusable information models is a key challenge for digitalization in manufacturing domains with heterogeneous and complex process chains. Ensuring seamless data exchange requires the standardization of both data syntax and semantics, while maintaining compatibility with existing industry standards. This paper presents a methodology for deriving standardizable and generalizable OPC UA information models tailored to domains with high process variability and interdisciplinary requirements. The methodology integrates system analysis, parameter mapping, and the development of modular submodels, supported by expert input and validation. It emphasizes the reuse and extension of existing OPC UA Companion Specifications to reduce complexity, avoid redundancy, and enable long-term standardization. The approach is exemplified by its application to battery cell production, an emerging manufacturing domain combining process and mechanical engineering with continuous and discrete processes. Its high degree of heterogeneity and lack of domain-specific standards pose significant challenges for model development. Through iterative expert workshops and structured model validation, a dedicated and transferable OPC UA framework is created. The resulting layered model structure combines a cross-industry standard with newly developed, process-aware model elements. This enables both broad applicability and the depth required for complex production environments, while supporting use cases such as traceability, regulatory reporting (e.g., EU Battery Passport), and process optimization. The resulting model improves interoperability, transparency, and data integration, offering a scalable blueprint for other complex manufacturing sectors. Full article
(This article belongs to the Section Information and Communication Technologies)

30 pages, 956 KiB  
Article
Stochastic Production Planning with Regime-Switching: Sensitivity Analysis, Optimal Control, and Numerical Implementation
by Dragos-Patru Covei
Axioms 2025, 14(7), 524; https://doi.org/10.3390/axioms14070524 - 8 Jul 2025
Viewed by 187
Abstract
This study investigates a stochastic production planning problem with regime-switching parameters, inspired by economic cycles impacting production and inventory costs. The model considers types of goods and employs a Markov chain to capture probabilistic regime transitions, coupled with a multidimensional Brownian motion representing stochastic demand dynamics. The production and inventory cost optimization problem is formulated as a quadratic cost functional, with the solution characterized by a regime-dependent system of elliptic partial differential equations (PDEs). Numerical solutions to the PDE system are computed using a monotone iteration algorithm, enabling quantitative analysis. Sensitivity analysis and model risk evaluation illustrate the effects of regime-dependent volatility, holding costs, and discount factors, revealing the conservative bias of regime-switching models when compared to static alternatives. Practical implications include optimizing production strategies under fluctuating economic conditions and exploring future extensions such as correlated Brownian dynamics, non-quadratic cost functions, and geometric inventory frameworks. In contrast to earlier studies that imposed static or overly simplified regime-switching assumptions, our work presents a fully integrated framework—combining optimal control theory, a regime-dependent system of elliptic PDEs, and comprehensive numerical and sensitivity analyses—to more accurately capture the complex stochastic dynamics of production planning and thereby deliver enhanced, actionable insights for modern manufacturing environments. Full article
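
As a one-dimensional toy of the monotone iteration idea (repeated linear solves that approach the solution of a semilinear elliptic problem from a subsolution), the sketch below treats -u'' = exp(-u) with zero boundary values; the regime-switching system and the economic model from the paper are not represented.

```python
import numpy as np

# Monotone iteration for a 1-D semilinear toy problem: -u'' = exp(-u) on (0,1), u(0) = u(1) = 0.
N = 199                       # interior grid points
h = 1.0 / (N + 1)
lam = 1.0                     # >= sup |d f / d u| for u >= 0, since f(u) = exp(-u)

# Discrete operator for -u'' + lam*u with homogeneous Dirichlet boundary conditions.
main = (2.0 / h**2 + lam) * np.ones(N)
off = (-1.0 / h**2) * np.ones(N - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

u = np.zeros(N)               # u0 = 0 is a subsolution: -u0'' = 0 <= exp(-u0) = 1
for it in range(100):
    rhs = np.exp(-u) + lam * u        # linearize around the previous iterate
    u_new = np.linalg.solve(A, rhs)
    delta = np.max(np.abs(u_new - u))
    u = u_new
    if delta < 1e-10:
        break

print(it, u.max())            # iterations used and the peak of the computed profile
```
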

21 pages, 5977 KiB  
Article
A Two-Stage Machine Learning Approach for Calving Detection in Rangeland Cattle
by Yuxi Wang, Andrés Perea, Huiping Cao, Mehmet Bakir and Santiago Utsumi
Agriculture 2025, 15(13), 1434; https://doi.org/10.3390/agriculture15131434 - 3 Jul 2025
Viewed by 397
Abstract
Monitoring parturient cattle during calving is crucial for reducing cow and calf mortality, enhancing reproductive and production performance, and minimizing labor costs. Traditional monitoring methods include direct animal inspection or the use of specialized sensors. These methods can be effective, but impractical in large-scale ranching operations due to time, cost, and logistical constraints. To address this challenge, a network of low-power and long-range IoT sensors combining the Global Navigation Satellite System (GNSS) and tri-axial accelerometers was deployed to monitor in real-time 15 parturient Brangus cows on a 700-hectare pasture at the Chihuahuan Desert Rangeland Research Center (CDRRC). A two-stage machine learning approach was tested. In the first stage, a fully connected autoencoder with time encoding was used for unsupervised detection of anomalous behavior. In the second stage, a Random Forest classifier was applied to distinguish calving events from other detected anomalies. A 5-fold cross-validation, using 12 cows for training and 3 cows for testing, was applied at each iteration. While 100% of the calving events were successfully detected by the autoencoder, the Random Forest model failed to classify the calving events of two cows and misidentified the onset of calving for a third cow by 46 h. The proposed framework demonstrates the value of combining unsupervised and supervised machine learning techniques for detecting calving events in rangeland cattle under extensive management conditions. The real-time application of the proposed AI-driven monitoring system has the potential to enhance animal welfare and productivity, improve operational efficiency, and reduce labor demands in large-scale ranching. Future advancements in multi-sensor platforms and model refinements could further boost detection accuracy, making this approach increasingly adaptable across diverse management systems, herd structures, and environmental conditions. Full article
(This article belongs to the Special Issue Modeling of Livestock Breeding Environment and Animal Behavior)
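
The two-stage structure (unsupervised anomaly flagging followed by supervised classification of the flagged windows) can be sketched with scikit-learn stand-ins; the "autoencoder" below is just an MLP trained to reconstruct its input, and the feature windows and labels are synthetic, so nothing here reflects the paper's sensors, features, or results.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed activity features: normal behaviour, calving-like
# anomalies (shifted up), and other anomalies (shifted down).
X_normal = rng.normal(0.0, 1.0, size=(2000, 12))
X_calving = rng.normal(3.0, 1.0, size=(40, 12))
X_other = rng.normal(-3.0, 1.0, size=(40, 12))
X_all = np.vstack([X_normal, X_calving, X_other])
y_calving = np.r_[np.zeros(2000, int), np.ones(40, int), np.zeros(40, int)]

# Stage 1: an autoencoder-style model trained on normal windows flags anything it
# reconstructs poorly.
ae = MLPRegressor(hidden_layer_sizes=(6,), max_iter=3000, random_state=0)
ae.fit(X_normal, X_normal)
err = np.mean((ae.predict(X_all) - X_all) ** 2, axis=1)
threshold = np.percentile(np.mean((ae.predict(X_normal) - X_normal) ** 2, axis=1), 99)
flagged = err > threshold

# Stage 2: a Random Forest separates calving events from the other flagged anomalies.
Xf, yf = X_all[flagged], y_calving[flagged]
Xtr, Xte, ytr, yte = train_test_split(Xf, yf, test_size=0.3, random_state=0, stratify=yf)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(flagged.sum(), clf.score(Xte, yte))
```
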
