Search Results (378)

Search Parameters:
Keywords = algebraic specification

25 pages, 874 KB  
Article
Deep Learning with Visualization-Based Worked Examples to Enhance Students’ Algebra Problem Solving Ability and Metacognitive Awareness
by Windia Hadi, Benny Hendriana, Widyah Noviana and Csaba Csíkos
Educ. Sci. 2026, 16(4), 608; https://doi.org/10.3390/educsci16040608 - 10 Apr 2026
Abstract
This study examines the improvement of algebra problem-solving ability and metacognitive awareness among junior high school students through visualization-based worked examples grounded in a deep learning approach. The research employed a quantitative method with a quasi-experimental pretest–posttest control group design. The population consisted of all students from public schools in Tangerang City, Indonesia; the sample comprised 51 seventh-grade students studying algebra, with the experimental and control groups determined by purposive sampling. The instruments were an algebra problem-solving ability test of nine essay questions and a 52-item metacognitive awareness questionnaire, administered as a pretest before the intervention and a posttest afterward. Data analysis began with prerequisite tests and continued with independent-sample t-tests, nonparametric tests, ANCOVA, and multiple linear regression. The statistical results indicated a significant improvement in students’ algebra problem-solving ability with a large effect size; nevertheless, the absolute increase in problem-solving scores in the experimental group was very small (N-gain mean = 0.02). Metacognitive awareness was not a significant predictor of problem-solving ability; instead, initial ability (pretest) emerged as the strongest predictor. Among the metacognitive components, only understanding the problem had a moderate effect, planning strategies had a small effect, and the remaining components had no effect. In conclusion, visualization-based worked examples with a deep learning approach had a statistically significant effect, but the practical impact on students’ abilities was limited and should be interpreted with caution.
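The N-gain figure cited in the abstract can be reproduced with Hake's standard normalized-gain formula; a minimal sketch, assuming the conventional definition and purely illustrative numbers (the study's actual scoring scale is not given here):

```python
def n_gain(pre, post, max_score):
    """Hake's normalized gain: fraction of the available headroom
    (max_score - pre) actually gained between pretest and posttest."""
    if max_score == pre:
        raise ValueError("pretest at ceiling; N-gain undefined")
    return (post - pre) / (max_score - pre)

# Illustrative numbers only (not taken from the study): a 1.2-point rise
# from 40 on a 100-point scale matches the reported mean of about 0.02.
print(round(n_gain(40.0, 41.2, 100.0), 3))  # 0.02
```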
42 pages, 544 KB  
Article
Encoding-Relative Structural Diagnostics for Differential Operators
by Robert Castro
Symmetry 2026, 18(4), 631; https://doi.org/10.3390/sym18040631 - 9 Apr 2026
Abstract
Differential operators often admit multiple algebraically equivalent symbolic formulations, yet those formulations can differ in the organization of their internal structure prior to solution analysis. A reproducible symbolic framework is introduced to compare such formulations at the level of operator expressions. Within a declared symbolic specification consisting of a fixed grammar, an admissible weight class, canonical compression rules, and an admissible family of reformulations, we define four encoding-relative structural descriptors: structural strain τ, structural curvature κ, compressibility σ, and the balance ratio Γ = κ/τ. Structural strain compares an encoding to a designated reference representation, while compressibility measures reduction under canonical symbolic compression. These quantities are deterministic descriptors within the declared encoding class rather than coordinate-free invariants of the underlying operator. The structural length functional underlying these descriptors is developed, canonical compression is formalized, and finite symbolic comparison is distinguished from pathwise symbolic deformation. A robustness theorem shows that, away from the threshold surface Γ = σ, sufficiently small admissible perturbations preserve the induced diagnostic label. A supporting weight-robustness result further shows that qualitative labels persist across a local admissible family of weight choices under corresponding nondegeneracy conditions. The framework serves as a reproducible diagnostic for operator representations alongside Lyapunov, spectral, pseudospectral, and energy-based stability theories. Examples of representative ordinary and partial differential operators illustrate how the descriptors are computed and how they behave under admissible re-expression, while the appendices provide the technical backbone of the paper: formal definitions, a reproducibility protocol, extended perturbation arguments, and an explicit failure-mode analysis. Additional sensitivity checks regarding encoding, weights, and threshold variation, together with the explicit failure modes, delineate the boundary cases in which the descriptors cease to apply. The main contribution is a formally delimited and reproducible symbolic framework for comparing differential operators under a fixed, declared specification, supported by robustness results and worked examples.
(This article belongs to the Section Mathematics)
12 pages, 259 KB  
Article
From Dedekind’s Level 12 Identities to Combinatorial Structures of Colored Partitions
by Fatemah Mofarreh, Arooj Fatima and Ahmer Ali
Axioms 2026, 15(4), 270; https://doi.org/10.3390/axioms15040270 - 8 Apr 2026
Abstract
The Dedekind η-function plays an important role in number theory, particularly in the study of modular forms, q-series, and partition identities. In this paper, we investigate several level-12 η-function identities and examine their combinatorial implications. These identities are obtained from algebraic transformations of known expansions involving mock theta functions, which were originally introduced by Srinivasa Ramanujan. By employing classical q-series techniques and modular transformations, we derive identities that reveal interesting relationships among η-functions. We further interpret these identities combinatorially to establish correspondences between specific classes of colored partitions with prescribed color restrictions. These results provide new insights into the structure of colored partition functions and highlight the interplay between mock theta functions, Dedekind η-function identities, and combinatorial partition theory. Our findings contribute to a deeper understanding of the connections between modular forms and colored partitions and suggest further directions for research in number theory and combinatorics.
(This article belongs to the Special Issue Advances in Applied Algebra and Related Topics)
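As a small illustration of the q-series machinery involved (a textbook computation, not one of the paper's level-12 identities), the q-expansion of Euler's product ∏(1 − qⁿ) — the η-function with its q^(1/24) prefactor stripped — can be generated directly, and its sparse ±1 coefficients recover the pentagonal number theorem:

```python
def euler_product(N):
    """Coefficients of prod_{n>=1} (1 - q^n) up to q^N: the q-expansion of
    the Dedekind eta-function without the q^(1/24) prefactor."""
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for n in range(1, N + 1):
        for d in range(N, n - 1, -1):   # multiply in the factor (1 - q^n)
            coeffs[d] -= coeffs[d - n]
    return coeffs

# Pentagonal number theorem: the nonzero coefficients are +-1 and sit at
# the generalized pentagonal numbers 0, 1, 2, 5, 7, 12, 15, ...
print(euler_product(8))  # [1, -1, -1, 0, 0, 1, 0, 1, 0]
```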
13 pages, 313 KB  
Article
Almost Extraspecial Structures and Pseudofermionic Operators
by Daniele Ettore Otera and Francesco G. Russo
Symmetry 2026, 18(4), 615; https://doi.org/10.3390/sym18040615 - 5 Apr 2026
Abstract
We survey recent combinatorial properties found in the context of the algebras of ladder operators in quantum mechanics. More specifically, we review dynamical systems that have nonselfadjoint Hamiltonians and admit a formalization in terms of pseudofermionic operators. For these systems, we identify structural analogies between algebras of pseudofermionic operators and the abstract notion of a central product, originally studied for finite groups.
(This article belongs to the Special Issue Advances in Topology and Algebraic Geometry)
22 pages, 1876 KB  
Article
Extended LSTM to Enhance Learner Performance Prediction
by Adel Ihichr, Soukaina Hakkal, Omar Oustous, Younès El Bouzekri El Idrissi and Ayoub Ait Lahcen
Algorithms 2026, 19(4), 251; https://doi.org/10.3390/a19040251 - 25 Mar 2026
Abstract
Knowledge Tracing (KT) is a fundamental task in intelligent education systems, designed to track students’ evolving knowledge states and predict their future performance. While Deep Learning-based Knowledge Tracing (DLKT) models have advanced the field, they often face significant limitations in jointly capturing short-term performance fluctuations and long-term knowledge retention, which restricts their predictive precision in complex learning trajectories. This paper proposes the Extended Deep Knowledge Tracing (xDKT) model, which integrates the Extended Long Short-Term Memory (xLSTM) architecture to enhance multi-scale temporal learning representations. Rigorous ablation studies over extended learning sequences (up to 1000 steps) indicate that the exponential gating and advanced scalar memory of sLSTM units are the primary drivers of performance. This architecture effectively captures both short-term performance shifts and long-term knowledge retention without the vanishing-gradient degradation inherent to standard LSTMs. We evaluate xDKT across six diverse benchmark datasets, including Synthetic, Algebra2005–2006, Statics2011, and the ASSISTments series, covering over 22,000 learners. Experimental results show that xDKT yields improved Area Under the ROC Curve (AUC) scores on Statics2011 (0.8562) and ASSISTments2009 (0.8318) compared to baseline models such as DKT, DKVMN, and AKT. These findings suggest that the xDKT architecture provides a robust and promising framework for accurate and adaptive learning environments.
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
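The exponential gating credited with the gains can be sketched in a few lines. This is a toy scalar cell in the spirit of the published sLSTM design (stabilized exponential input/forget gates plus a normalizer state), not the authors' xDKT implementation; all weights and inputs below are illustrative:

```python
import math

def slstm_step(x, h_prev, c_prev, n_prev, m_prev, w):
    """One scalar sLSTM-style step: exponential input/forget gates kept
    numerically stable by a running-max state m, with a normalizer n.
    `w` maps gate name -> (input weight, recurrent weight); toy values."""
    pre = {g: w[g][0] * x + w[g][1] * h_prev for g in ("i", "f", "z", "o")}
    m = max(pre["f"] + m_prev, pre["i"])        # stabilizer update
    i = math.exp(pre["i"] - m)                  # stabilized exp input gate
    f = math.exp(pre["f"] + m_prev - m)         # stabilized exp forget gate
    z = math.tanh(pre["z"])                     # candidate cell input
    o = 1.0 / (1.0 + math.exp(-pre["o"]))       # sigmoid output gate
    c = f * c_prev + i * z                      # cell state
    n = f * n_prev + i                          # normalizer state
    h = o * (c / n)                             # normalized hidden state
    return h, c, n, m

w = {g: (0.5, 0.1) for g in ("i", "f", "z", "o")}
h, c, n, m = 0.0, 0.0, 0.0, 0.0
for x in (0.2, -0.4, 0.9):                      # a toy response sequence
    h, c, n, m = slstm_step(x, h, c, n, m, w)
```

Because the cell state is divided by the normalizer, the hidden output stays bounded even though the gates themselves are unbounded exponentials.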
16 pages, 1313 KB  
Article
New Accurate Local-Buckling Analysis of Equal-Leg Angle Steels in Transmission Towers
by Dongrui Song, Xiaocheng Tang, Zhiwei Sun, Dong Han, Xiaozhuo Guan and Huashun Li
Vibration 2026, 9(1), 22; https://doi.org/10.3390/vibration9010022 - 22 Mar 2026
Abstract
This study presents an analytical solution procedure for the local-buckling problem of angle steels using a two-dimensional improved Fourier-series method (2D-IFSM). The effect of coupling between the sub-plates of an angle steel on its local-buckling behaviour is studied by incorporating rotational spring constraints between them. The proposed procedure converts the local-buckling problem of angle steels into sets of linear algebraic equations, effectively simplifying the solution process. The critical-load and buckling-mode results obtained in this study agree well with existing analytical solutions and finite-element numerical data, verifying the effectiveness of the proposed method. Based on the derived solutions, a quantitative analysis is conducted to investigate the influences of aspect ratio, width–thickness ratio, and rotational constraint degree on the local-buckling behaviour of angle steels.
26 pages, 3122 KB  
Article
A 94 GHz Millimeter-Wave Radar System for Remote Vehicle Height Measurement to Prevent Bridge Collisions
by Natan Steinmetz, Eyal Magori, Yael Balal, Yonatan B. Sudai and Nezah Balal
Sensors 2026, 26(6), 1921; https://doi.org/10.3390/s26061921 - 18 Mar 2026
Abstract
Collisions between over-height vehicles and low-clearance bridges cause infrastructure damage and pose safety risks. Existing detection systems rely primarily on optical sensors, which suffer from performance degradation in adverse weather conditions. This paper presents an alternative approach based on a 94 GHz millimeter-wave radar that achieves velocity-independent height measurement. The proposed technique exploits the ratio of Doppler shifts from two scattering centers on a vehicle, specifically the roof and the wheel–road interface. This ratio depends only on the measurement geometry, as the unknown vehicle velocity cancels algebraically, enabling direct height computation without speed measurement. The paper provides a closed-form height estimation model, analyzes the trade-off between frequency resolution and geometric constancy during integration, and presents experimental validation using a scaled laboratory testbed. An optical tracking system is used solely for ground-truth validation in the laboratory and is not required for operational deployment. Results across six test cases with heights ranging from 20 cm to 46 cm demonstrate an average absolute error of 0.60 cm and relative errors below 3.3 percent. A scaling analysis for representative full-scale geometries indicates that at highway speeds of 80 km/h, integration times in the millisecond range (approximately 3–18 ms for representative 20–50 m measurement standoff) are feasible; warning distance can be extended independently by upstream radar placement. The expected advantage in fog, rain, and dust is based on established W-band propagation characteristics; dedicated adverse-weather and full field validation (including multipath, clutter, and multi-vehicle scenarios) remain future work.
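The velocity-cancellation idea can be illustrated with a deliberately simplified 2-D geometry (an assumption for illustration; the paper's closed-form model and scattering-center geometry may differ). For a horizontally moving scatterer, the Doppler shift scales with v·x/R, so the roof-to-wheel ratio depends only on the two slant ranges and can be inverted for height without knowing v:

```python
import math

def doppler_ratio(h, x, Hr):
    """Forward model: Doppler ~ v*x/R for a horizontal mover, so the
    roof/wheel ratio is R_wheel / R_roof; the speed v cancels."""
    return math.hypot(x, Hr) / math.hypot(x, Hr - h)

def height_from_ratio(rho, x, Hr):
    """Invert the ratio for roof height h (radar mounted above the roof)."""
    R_roof = math.hypot(x, Hr) / rho
    return Hr - math.sqrt(R_roof**2 - x**2)

Hr, x, h_true = 6.0, 30.0, 4.2   # radar height, standoff, roof height (m); toy values
rho = doppler_ratio(h_true, x, Hr)
print(round(height_from_ratio(rho, x, Hr), 6))  # 4.2
```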
24 pages, 4628 KB  
Article
Numerical Scheme for Modified Anomalous Time-Fractional Sub-Diffusion Equations Using the Shifted Dickson Polynomials of the Second Kind
by Waleed Mohamed Abd-Elhameed and Ahmed Gamal Atta
Mathematics 2026, 14(6), 1008; https://doi.org/10.3390/math14061008 - 16 Mar 2026
Abstract
This paper develops a numerical algorithm for treating the modified anomalous time-fractional sub-diffusion problems (MAFSDPs). The proposed numerical algorithm relies on the tau method. The basis functions, namely, shifted Dickson polynomials of the second kind, are employed to obtain the proposed numerical solutions. Many theoretical formulas of the Dickson polynomials of the second kind and their shifted polynomials, such as the linearization formula, derivative relations, and some specific definite integrals, are developed. These formulas will serve as a fundamental basis for designing our proposed numerical algorithm. The approximate solution is expressed as a truncated double expansion in shifted Dickson basis functions. The utilization of the tau method transforms the equation, along with its underlying conditions, into a system of algebraic equations that can be numerically treated. Rigorous convergence of the double-shifted expansion is studied. Numerical examples are included to verify the accuracy and applicability of the proposed algorithm. In addition, comparisons with some existing numerical methods are presented to confirm the superior performance of our algorithm.
(This article belongs to the Special Issue Theory and Applications of Fractional Models)
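For reference, Dickson polynomials of the second kind satisfy the three-term recurrence E₀ = 1, E₁ = x, Eₙ = x·Eₙ₋₁ − a·Eₙ₋₂; a minimal evaluator (the paper's shifted variants add an affine change of variable and the full tau machinery, neither of which is shown here):

```python
def dickson2(n, x, a):
    """Dickson polynomial of the second kind E_n(x, a), via the recurrence
    E_0 = 1, E_1 = x, E_n = x*E_{n-1} - a*E_{n-2}."""
    if n == 0:
        return 1
    e_prev, e = 1, x
    for _ in range(n - 1):
        e_prev, e = e, x * e - a * e_prev
    return e

# E_2 = x^2 - a and E_3 = x^3 - 2*a*x
print(dickson2(2, 3.0, 2.0), dickson2(3, 2.0, 1.0))  # 7.0 4.0
```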
17 pages, 539 KB  
Article
Wavelet-Based Error-Correcting Codes: Performance Comparison with BCH in Modern Channels
by Alla Levina and Sergey Boyko
Mathematics 2026, 14(6), 993; https://doi.org/10.3390/math14060993 - 14 Mar 2026
Abstract
Reliable data transmission over noisy channels requires effective error-correcting codes. While classical algebraic constructions, such as Bose–Chaudhuri–Hocquenghem (BCH) codes, remain industry standards, structured alternatives based on discrete wavelet transforms offer potential benefits in terms of implementation complexity and error resilience. This study presents a comparative analysis of BCH and wavelet-based linear block codes, focusing on their error-correction capability and overall performance under realistic wireless channel conditions. This work evaluates both coding schemes across five channel models: additive white Gaussian noise (AWGN), Rayleigh fading, sinusoidal attenuation, multiplicative Gaussian noise, and a composite Rayleigh-plus-sinusoid channel. Performance is assessed using bit error rate (BER), frame error rate (FER), and decoding reliability across a range of signal-to-noise ratios. Results show that wavelet codes achieve error-correction performance comparable to or slightly better than BCH in most channels. Notably, they demonstrate a consistent advantage in scenarios with periodic or slow-varying interference, outperforming BCH starting from the 1.5 dB SNR threshold where the wavelet code achieves a BER reduction of up to 48% and a 37.5% improvement in FER, significantly enhancing decoding reliability in structured noise environments. These findings indicate that wavelet-based codes are not only viable but, in specific practical environments characterized by structured noise, represent a superior alternative for robust and reliable communication systems.
(This article belongs to the Section E1: Mathematics and Computer Science)
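The building block of such codes, the discrete wavelet transform, is easy to exhibit. A one-level unnormalized Haar step (shown purely for orientation; the paper's actual encoder construction is more involved) splits a signal into an exactly invertible trend/detail pair:

```python
def haar_step(x):
    """One level of the unnormalized Haar transform: pairwise averages
    (trend) and halved differences (detail). len(x) must be even."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact inverse: each pair is (trend + detail, trend - detail)."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

x = [4.0, 2.0, 5.0, 5.0]
a, d = haar_step(x)          # a = [3.0, 5.0], d = [1.0, 0.0]
assert haar_inverse(a, d) == x
```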
24 pages, 743 KB  
Article
Tensor Train Completion from Fiberwise Observations Along a Single Mode
by Shakir Showkat Sofi and Lieven De Lathauwer
Mathematics 2026, 14(5), 922; https://doi.org/10.3390/math14050922 - 9 Mar 2026
Abstract
Tensor completion is an extension of matrix completion aimed at recovering a multiway data tensor by leveraging a given subset of its entries (observations) and the pattern of observation. The low-rank assumption is key in establishing a relationship between the observed and unobserved entries of the tensor. The low-rank tensor completion problem is typically solved using numerical optimization techniques, where the rank information is used either implicitly (in the rank minimization approach) or explicitly (in the error minimization approach). Current theories concerning these techniques often study probabilistic recovery guarantees under conditions such as random uniform observations and incoherence requirements. However, if an observation pattern exhibits some low-rank structure that can be exploited, more efficient algorithms with deterministic recovery guarantees can be designed by leveraging this structure. This work shows how to use only standard linear algebra operations to compute the tensor train decomposition of a specific type of “fiber-wise” observed tensor, where some of the fibers of a tensor (along a single specific mode) are either fully observed or entirely missing, unlike the usual entry-wise observations. From an application viewpoint, this setting is relevant when it is easier to sample or collect a multiway data tensor along a specific mode (e.g., temporal). The proposed completion method is fast and is guaranteed to work under reasonable deterministic conditions on the observation pattern. Through numerical experiments, we showcase interesting applications and use cases that illustrate the effectiveness of the proposed approach.
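For orientation, the tensor train decomposition that the method builds on can be computed for a fully observed tensor by the standard TT-SVD of Oseledets (sequential reshapes and SVDs). This sketch handles only the fully observed case, not the fiberwise-observed setting of the paper:

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Standard TT-SVD: sequential SVDs produce cores G_k of shape
    (r_{k-1}, n_k, r_k) whose chained contraction reproduces T."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int((s > tol * max(s[0], 1e-300)).sum()))
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the train back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=1)  # last axis of out vs. first of G
    return out.reshape(tuple(c.shape[1] for c in cores))

# A rank-1 test tensor is recovered exactly (up to floating point),
# and every TT rank is detected as 1.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3), rng.normal(size=4), rng.normal(size=5)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T)
print(np.allclose(tt_full(cores), T))  # True
```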
26 pages, 951 KB  
Article
q-Fractional Fuzzy Frank Aggregation Operators and Their Application in Decision-Making
by Muhammad Amad Sarwar, Yuezheng Gong and Sarah A. Alzakari
Fractal Fract. 2026, 10(3), 163; https://doi.org/10.3390/fractalfract10030163 - 28 Feb 2026
Abstract
Multi-criteria decision-making (MCDM) involves evaluating alternatives under uncertain, vague, and conflicting criteria. While fuzzy set theories, such as intuitionistic, Pythagorean, Fermatean, and q-rung orthopair fuzzy sets have advanced uncertainty modeling, they remain limited in capturing extreme judgments where membership reaches a value of one alongside significant non-membership. The recently introduced q-fractional fuzzy set (q-FrFS) addresses these shortcomings via a flexible constraint, making it suitable for extreme contexts. However, existing q-FrFS methodologies lack robust aggregation mechanisms capable of balancing trade-offs and modulating compensation during information fusion. To overcome this, this study proposes a novel class of Frank-based aggregation operators tailored specifically to q-FrFS environments. Leveraging the parameterized structure of Frank t-norms and t-conorms, we develop two operators: q-FrFFWA (Frank weighted averaging) and q-FrFFWG (Frank weighted geometric) alongside their essential algebraic properties. These operators enhance the representation and fusion of complex and uncertain data. Furthermore, we present a comprehensive MCDM framework utilizing the proposed operators and demonstrate its applicability by selecting optimal vehicle routing software for last-mile delivery. Sensitivity and comparative analyses affirm the stability and credibility of the proposed methodology. This research contributes to the evolving landscape of fuzzy decision-making by integrating the expressive power of q-FrFS with the adaptive flexibility of Frank aggregation, offering a potent tool for modeling and analyzing multidimensional uncertainties in complex decision environments.
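The Frank family underlying the proposed operators is a one-parameter t-norm/t-conorm pair; a minimal numeric sketch of that pair (the q-FrFFWA/q-FrFFWG operators themselves add weighting and the q-FrFS membership constraints, which are not reproduced here):

```python
import math

def frank_tnorm(a, b, lam):
    """Frank t-norm: log_lam(1 + (lam^a - 1)(lam^b - 1)/(lam - 1)),
    defined for lam > 0, lam != 1; tends to the product a*b as lam -> 1."""
    num = (lam**a - 1.0) * (lam**b - 1.0) / (lam - 1.0)
    return math.log(1.0 + num) / math.log(lam)

def frank_tconorm(a, b, lam):
    """Dual Frank t-conorm via S(a, b) = 1 - T(1 - a, 1 - b)."""
    return 1.0 - frank_tnorm(1.0 - a, 1.0 - b, lam)

# Near lam = 1 the t-norm behaves like the product a*b.
print(round(frank_tnorm(0.6, 0.7, 1.0001), 4))  # 0.42
```

The parameter lam is what lets the aggregation modulate compensation between criteria, which is the flexibility the abstract refers to.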
22 pages, 568 KB  
Article
Application of Extended Dirac Equation to Photon–Electron Interactions and Electron–Positron Collision Processes: A Quantum Theoretical Approach Using a 256 × 256 Matrix Representation
by Hirokazu Maruyama
Atoms 2026, 14(2), 14; https://doi.org/10.3390/atoms14020014 - 19 Feb 2026
Abstract
We propose a novel theoretical framework for describing photon–electron interactions and electron collision processes in a unified manner within quantum electrodynamics. Specifically, we develop a method to construct the Dirac operator in curved spacetime using only matrix representations rooted in the basis structure of four-dimensional gamma matrix algebra, without introducing vierbeins (tetrads) or independent spin connections. We realize 16 gamma matrices with two indices as 256×256 matrices and embed the spacetime metric directly into the matrix elements. This reduces geometric operations such as covariantization, connection-like operations, and basis transformations to matrix products and trace calculations, yielding a unified and transparent computational scheme. The spacetime dimension remains four, and the number 16 is the number of basis elements of the four-dimensional gamma matrix algebra (2⁴ = 16). Based on the extended QED Lagrangian, vertex rules, propagators, spin sums, and traces can be handled uniformly, making the approach suitable for automation. As validation, we analyzed four fundamental scattering processes in atomic and particle physics: (i) Compton scattering (photon–electron scattering), (ii) muon pair production (e⁺e⁻ → μ⁺μ⁻), (iii) Møller scattering (electron–electron collision), and (iv) Bhabha scattering (electron–positron collision). In the flat spacetime limit, we confirmed exact reproduction of standard quantum electrodynamics (QED) results, including the Klein–Nishina formula. Furthermore, trial calculations using a metric with off-diagonal components show systematic deviations from the flat results near scattering angle θ ≈ 90°, suggesting that metric-induced angular dependence could in principle serve as an observable signature. The matrix representation developed in this work enables unified pipeline execution of theoretical calculations for photon interactions and charged-particle collision processes, with expected applications to precision calculations in atomic and particle physics.
(This article belongs to the Section Atomic, Molecular and Nuclear Spectroscopy and Collisions)
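The 16-element basis count comes from the four-dimensional Clifford algebra. A small pure-Python check in the standard 4×4 Dirac representation (not the paper's 256×256 embedding) verifies {γ^μ, γ^ν} = 2η^{μν}I, from which the 2⁴ = 16 basis elements {I, γ^μ, γ^μγ^ν (μ<ν), γ^μγ⁵, γ⁵} follow:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A)) for l in range(len(B))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I2, sx = [[1, 0], [0, 1]], [[0, 1], [1, 0]]
sy, sz = [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]

# Dirac representation: gamma^0 = sigma_z (x) I2, gamma^i = (i sigma_y) (x) sigma_i
gammas = [kron(sz, I2)] + [kron([[0, 1], [-1, 0]], s) for s in (sx, sy, sz)]
eta = [1, -1, -1, -1]  # metric signature (+, -, -, -)

ok = True
for mu in range(4):
    for nu in range(4):
        ab = matmul(gammas[mu], gammas[nu])
        ba = matmul(gammas[nu], gammas[mu])
        for i in range(4):
            for j in range(4):
                want = 2 * eta[mu] if (mu == nu and i == j) else 0
                ok = ok and abs(ab[i][j] + ba[i][j] - want) < 1e-12
print(ok)  # True
```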
20 pages, 1566 KB  
Article
A Methodological Framework for Chaos-Aware Evaluation of Self-Organization in Swarm-Based Engineering Systems
by Nikitas Gerolimos, Vasileios Alevizos and Georgios Priniotakis
Systems 2026, 14(2), 215; https://doi.org/10.3390/systems14020215 - 18 Feb 2026
Abstract
In the field of engineered systems, there has been an increasing trend toward self-organizing and swarm-based methodologies, employed to maintain functionality in the presence of uncertainty. However, prevailing evaluation continues to be dominated by task-level KPIs (e.g., coverage, latency), providing limited insight into organizational quality, specifically stability near critical regimes and recoverability. This paper proposes a methodological framework based on the Chaos-Aware Design Index (CADI), integrated into a Descriptive Study II (DS-II) context. Validation follows a dual-tier strategy: (i) tier I (behavioral), utilizing a behavioral emulator of consensus dynamics; (ii) tier II (urban), employing a macro-scale manifold analysis of 17,692 urban spatial polygons from the nuScenes dataset. Results demonstrate that an ensemble-based surrogate model (Random Forest), trained on a representative manifold curated via Latin Hypercube Sampling (LHS), captures organizational stability with high predictive fidelity (R² = 0.9136, p < 0.001) under strict scene-independent (GroupKFold) validation. Stability descriptors are grounded in spectral graph theory, leveraging algebraic connectivity (λ₂) and morphological proxies (node density, aspect ratio, convexity). The CADI framework serves as an auditable reporting scaffold, showing that swarm coherence is governed by observable geometric manifold dynamics. The findings establish that urban morphology exerts a dominant deterministic influence on collective stability, providing a rigorous foundation for early-stage design decisions in autonomous systems.
(This article belongs to the Special Issue Modeling of Complex Systems and Systems of Systems)
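Algebraic connectivity is directly computable from the graph Laplacian; a minimal sketch on a toy graph (not the paper's nuScenes-derived structures): the second-smallest Laplacian eigenvalue is positive exactly when the graph is connected, and it rises as the graph becomes harder to disconnect:

```python
import numpy as np

def algebraic_connectivity(edges, n):
    """Fiedler value: second-smallest eigenvalue of the graph Laplacian
    of an undirected n-node graph given as a list of (u, v) edges."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return np.linalg.eigvalsh(L)[1]  # eigvalsh returns ascending eigenvalues

# A 4-node path has lambda_2 = 2 - sqrt(2); closing it into a cycle
# raises the connectivity to 2.0.
path = [(0, 1), (1, 2), (2, 3)]
print(round(algebraic_connectivity(path, 4), 3))             # 0.586
print(round(algebraic_connectivity(path + [(3, 0)], 4), 3))  # 2.0
```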
12 pages, 756 KB  
Communication
Revised Long-Term Scheduling Model for Multi-Stage Biopharmaceutical Processes
by Vaibhav Kumar and Munawar A. Shaik
Math. Comput. Appl. 2026, 31(1), 32; https://doi.org/10.3390/mca31010032 - 15 Feb 2026
Abstract
Biopharmaceuticals are therapeutic drugs engineered to target specific sites within the body. Their manufacturing process comprises two primary stages: upstream processing (USP) and downstream processing (DSP). USP primarily involves cell culture and growth, whereas DSP focuses on purifying and packaging the final product. The recent literature reports only a few studies addressing production planning and scheduling in biopharmaceutical manufacturing. In this work, we address a long-term scheduling and midterm planning problem incorporating on-time or late delivery of final products with unknown finite delivery rates. Early delivery is prohibited, and late delivery incurs a penalty cost. Published models and evolutionary algorithms exhibit key limitations in areas such as shelf-life modeling, inventory management, and product delivery. To overcome these shortcomings, we propose a revised mixed-integer linear programming (MILP) model implemented using the General Algebraic Modeling System (GAMS). When applied to two illustrative examples, the model reduces optimum event counts by two to three, improving computational efficiency through fewer binary variables, continuous variables, and constraints. Furthermore, it achieves up to 7% improvement over two published benchmarks, underscoring its potential to enhance scheduling strategies for multiproduct biopharmaceutical facilities.
(This article belongs to the Special Issue Applied Optimization in Automatic Control and Systems Engineering)
39 pages, 2415 KB  
Article
Unified Algebraic Framework for Centralized and Decentralized MIMO RST Control for Strongly Coupled Processes
by Cesar A. Peregrino, Guadalupe Lopez Lopez, Nelly Ramirez-Corona, Victor M. Alvarado, Froylan Antonio Alvarado Lopez and Monica Borunda
Mathematics 2026, 14(4), 677; https://doi.org/10.3390/math14040677 - 14 Feb 2026
Abstract
Reliable multivariable control is critical for industrial sectors where processes exhibit severe nonlinearities and interactions. A Continuous Stirred Tank Reactor (CSTR) is a rigorous benchmark for testing control strategies addressing these complexities. This work first establishes a linear MIMO mathematical framework to define the specific structure of such interactive systems. Analysis via phase planes and steady-state analysis reveals low controllability, bistability, and strong coupling, leading to the collapse of traditional decoupled control schemes. To address these issues via multivariable control, we propose a centralized MIMO RST control structure synthesized via a Matrix Fraction Description (MFD) and the extended Bézout equation. Simulations for performance evaluation and comparison highlight the following key findings: (1) the centralized RST maintains stability and tracking precision in regions where decentralized RST loops fail; (2) it exhibits performance comparable to the Augmented State Pole Placement with Integral Action (ASPPIA) method and outperforms the standard Model-Based Predictive Control (MPC) baseline, particularly during critical equilibrium point transitions; and (3) it offers a robust yet computationally simple design that provides superior flexibility for pole placement, accommodating future identification-based models and adaptive tuning. These results validate our algebraic synthesis as a robust, computationally efficient solution for managing highly interactive nonlinear dynamics.
(This article belongs to the Section E2: Control Theory and Mechanics)