Search Results (235)

Search Parameters:
Keywords = modular neural network

14 pages, 1097 KB  
Article
Low-Power Embedded Sensor Node for Real-Time Environmental Monitoring with On-Board Machine-Learning Inference
by Manuel J. C. S. Reis
Sensors 2026, 26(2), 703; https://doi.org/10.3390/s26020703 - 21 Jan 2026
Viewed by 67
Abstract
This paper presents the design and optimisation of a low-power embedded sensor-node architecture for real-time environmental monitoring with on-board machine-learning inference. The proposed system integrates heterogeneous sensing elements for air quality and ambient parameters (temperature, humidity, gas concentration, and particulate matter) into a modular embedded platform based on a low-power microcontroller coupled with an energy-efficient neural inference accelerator. The design emphasises end-to-end energy optimisation through adaptive duty-cycling, hierarchical power domains, and edge-level data reduction. The embedded machine-learning layer performs lightweight event/anomaly detection via on-device multi-class classification (normal/anomalous/critical) using quantised neural models in fixed-point arithmetic. A comprehensive system-level analysis, performed via MATLAB Simulink simulations, evaluates inference accuracy, latency, and energy consumption under realistic environmental conditions. Results indicate that the proposed node achieves 94% inference accuracy, 0.87 ms latency, and an average energy consumption of approximately 2.9 mWh, enabling energy-autonomous operation with hybrid solar–battery harvesting. The adaptive LoRaWAN communication strategy further reduces data transmissions by ≈88% relative to periodic reporting. These findings indicate that on-device inference can reduce network traffic while maintaining reliable event detection under the evaluated operating conditions. The proposed architecture is intended to support energy-efficient environmental sensing deployments in smart-city and climate-monitoring contexts. Full article
(This article belongs to the Special Issue Applications of Sensors Based on Embedded Systems)

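The quantised, fixed-point classifier described above can be pictured with a short sketch. This is a minimal illustration, not the paper's firmware: the 4-16-3 layer sizes, scales, class labels, and random weights are assumptions, and a real deployment would use trained parameters exported to the microcontroller or accelerator.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, scale):
    """Symmetric per-tensor quantisation of floats to int8."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Hypothetical trained weights for a tiny 4-16-3 MLP over normalised
# (temperature, humidity, gas, particulate-matter) readings.
w1, b1 = rng.normal(size=(4, 16)), rng.normal(size=16)
w2 = rng.normal(size=(16, 3))
s_in, s_w = 0.01, 0.02                       # input and weight scales (assumed)
w1_q, w2_q = quantize(w1, s_w), quantize(w2, s_w)
b1_q = (b1 / (s_in * s_w)).astype(np.int32)  # bias expressed in the accumulator scale

def classify(reading):
    """Integer-only forward pass, accumulating in int32 as a small MCU would."""
    x_q = quantize(reading, s_in).astype(np.int32)
    h = np.maximum(x_q @ w1_q.astype(np.int32) + b1_q, 0)   # ReLU
    h >>= 7                                                  # requantise with a cheap shift
    logits = h @ w2_q.astype(np.int32)
    return ["normal", "anomalous", "critical"][int(np.argmax(logits))]

print(classify(np.array([0.23, 0.41, 0.12, 0.08])))
```
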
17 pages, 3006 KB  
Article
Development of an Early Warning System for Compound Coastal and Fluvial Flooding: Implementation at the Alfeios River Mouth, Greece
by Anastasios S. Metallinos, Michalis K. Chondros, Andreas G. Papadimitriou and Vasiliki K. Tsoukala
J. Mar. Sci. Eng. 2026, 14(2), 110; https://doi.org/10.3390/jmse14020110 - 6 Jan 2026
Viewed by 290
Abstract
An integrated early warning system (EWS) for compound coastal and fluvial flooding is developed for Pyrgos, Western Greece, where low-lying geomorphology and past storm events highlight the need for rapid, impact-based forecasting. The methodology couples historical and climate-informed metocean and river discharge datasets within a numerical modeling framework consisting of a mild-slope wave model, the CSHORE coastal profile model, and HEC-RAS 2D inundation simulations. A weighted K-Means clustering approach is used to generate representative extreme scenarios, yielding more than 4000 coupled simulations that train and validate Artificial Neural Networks (ANNs). The optimal feed-forward ANN accurately predicts spatially distributed flood depths across the HEC-RAS grid using only offshore wave characteristics, water level, and river discharge as inputs, reducing computation time from hours to seconds. Blind tests demonstrate close agreement with full numerical simulations, with average differences typically below 5% and minor deviations confined to negligible water depths. These results confirm the ANN’s capability to emulate complex compound flooding dynamics with high computational efficiency. Deployed as a web application (EWS_CoCoFlood), the system provides actionable, near-real-time inundation forecasts to support local civil protection authorities. The framework is modular and scalable, enabling future integration of urban and rainfall-induced flooding processes and coastal morphological change. Full article
(This article belongs to the Section Coastal Engineering)

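As a rough illustration of the surrogate-modelling step (an ANN trained on coupled-simulation outputs to predict flood depths from offshore forcing), here is a hedged sketch on synthetic placeholder data. The feature set (wave height, peak period, water level, discharge), grid size, and network shape are assumptions, not the EWS_CoCoFlood configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_cells = 200                                   # assumed number of HEC-RAS grid cells
# Offshore forcing: significant wave height, peak period, water level, river discharge.
X = rng.uniform([0.5, 4.0, 0.0, 50.0], [6.0, 14.0, 1.5, 2000.0], size=(4000, 4))
# Placeholder "simulated" depths: any smooth function of the forcing works for the sketch.
W = rng.uniform(size=(4, n_cells))
y = (X / X.max(axis=0)) @ W                     # (4000, n_cells) pseudo flood-depth maps

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
surrogate.fit(X[:3600], y[:3600])               # train on the clustered extreme scenarios
depths = surrogate.predict(X[3600:3601])        # near-instant inundation map for a new event
print(depths.shape)                             # (1, n_cells)
```
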
28 pages, 8796 KB  
Article
CPU-Only Spatiotemporal Anomaly Detection in Microservice Systems via Dynamic Graph Neural Networks and LSTM
by Jiaqi Zhang and Hao Yang
Symmetry 2026, 18(1), 87; https://doi.org/10.3390/sym18010087 - 3 Jan 2026
Viewed by 247
Abstract
Microservice architecture has become a foundational component of modern distributed systems due to its modularity, scalability, and deployment flexibility. However, the increasing complexity and dynamic nature of service interactions have introduced substantial challenges in accurately detecting runtime anomalies. Existing methods often rely on multiple monitoring metrics, which introduce redundancy and noise while increasing the complexity of data collection and model design. This paper proposes a novel spatiotemporal anomaly detection framework that integrates Dynamic Graph Neural Networks (D-GNN) combined with Long Short-Term Memory (LSTM) networks to model both the structural dependencies and temporal evolution of microservice behaviors. Unlike traditional approaches, our method uses only CPU utilization as the sole monitoring metric, leveraging its high observability and strong correlation with service performance. From a symmetry perspective, normal microservice behaviors exhibit approximately symmetric spatiotemporal patterns: structurally similar services tend to share similar CPU trajectories, and recurring workload cycles induce quasi-periodic temporal symmetries in utilization signals. Runtime anomalies can therefore be interpreted as symmetry-breaking events that create localized structural and temporal asymmetries in the service graph. The proposed framework is explicitly designed to exploit such symmetry properties: the D-GNN component respects permutation symmetry on the microservice graph while embedding the evolving structural context of each service, and the LSTM module captures shift-invariant temporal trends in CPU usage to highlight asymmetric deviations over time. Experiments conducted on real-world microservice datasets demonstrate that the proposed method delivers excellent performance, achieving 98 percent accuracy and 98 percent F1-score. Compared to baseline methods such as DeepTraLog, which achieves 0.93 precision, 0.978 recall, and 0.954 F1-score, our approach performs competitively, achieving 0.980 precision, 0.980 recall, and 0.980 F1-score. Our results indicate that a single-metric, symmetry-aware spatiotemporal modeling approach can achieve competitive performance without the complexity of multi-metric inputs, providing a lightweight and robust solution for real-time anomaly detection in large-scale microservice environments. Full article
(This article belongs to the Section Computer)

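A compact sketch of the spatiotemporal modelling idea, with assumed shapes and layers: a hand-rolled graph convolution over the service-call graph embeds structural context per microservice, and an LSTM models the temporal evolution of its CPU utilisation. This is illustrative of the D-GNN plus LSTM pairing, not a reproduction of the paper's model.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, x, adj):
        # Row-normalised aggregation keeps the layer permutation-equivariant.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj / deg) @ x))

class CpuAnomalyModel(nn.Module):
    def __init__(self, d_hidden=32):
        super().__init__()
        self.gnn = GraphConv(1, d_hidden)
        self.lstm = nn.LSTM(d_hidden, d_hidden, batch_first=True)
        self.score = nn.Linear(d_hidden, 1)
    def forward(self, cpu, adj):
        # cpu: (T, N) utilisation per time step and service; adj: (N, N) call graph.
        emb = torch.stack([self.gnn(cpu[t].unsqueeze(-1), adj) for t in range(cpu.shape[0])])
        out, _ = self.lstm(emb.transpose(0, 1))       # (N, T, d_hidden)
        return torch.sigmoid(self.score(out[:, -1]))  # per-service anomaly score

cpu = torch.rand(20, 8)                    # 20 time steps, 8 services (synthetic)
adj = (torch.rand(8, 8) > 0.7).float()
print(CpuAnomalyModel()(cpu, adj).shape)   # torch.Size([8, 1])
```
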
31 pages, 3962 KB  
Article
Modular Model of Neuronal Activity That Captures the Dynamics of Main Molecular Targets of Antiepileptic Drugs
by Pavel Y. Kondrakhin and Fedor A. Kolpakov
Int. J. Mol. Sci. 2026, 27(1), 490; https://doi.org/10.3390/ijms27010490 - 3 Jan 2026
Viewed by 226
Abstract
This paper presents a modular mathematical model of neuronal activity, designed to simulate the dynamics of main molecular targets of antiepileptic drugs and their pharmacological effects. The model was developed based on several existing synaptic transmission models that capture cellular processes crucial to the pathology of epilepsy. It incorporates the primary molecular mechanisms involved in regulating excitation and inhibition within the neural network. Special attention is given to the dynamics of ion currents (Na⁺, K⁺, Ca²⁺), receptors (AMPA, NMDA, GABA-A, GABA-B, and mGlu), and neurotransmitters (glutamate and GABA). Examples of simulations illustrating the inhibitory effects on synaptic transmission are provided. The numerical results are consistent with experimental data reported in the literature. Full article
(This article belongs to the Special Issue Bioinformatics of Gene Regulations and Structure–2025)

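For readers unfamiliar with the kind of dynamics such a modular model composes, below is a minimal conductance-based membrane sketch (Hodgkin-Huxley-style Na⁺/K⁺/leak currents plus a tonic GABA-A term). The parameters are textbook-style placeholders, not the structure or values of the published model.

```python
import numpy as np

dt, T = 0.01, 50.0                 # time step and duration in ms
V = -65.0; m, h, n = 0.05, 0.6, 0.32
g_na, g_k, g_l, g_gaba = 120.0, 36.0, 0.3, 0.5      # conductances, mS/cm^2 (assumed)
E_na, E_k, E_l, E_gaba = 50.0, -77.0, -54.4, -70.0  # reversal potentials, mV
I_ext = 10.0                                        # constant drive, uA/cm^2

def rates(V):
    """Standard Hodgkin-Huxley rate constants."""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10)); bm = 4 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20);                 bh = 1 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10)); bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

spikes = 0
for _ in np.arange(0, T, dt):
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_na = g_na * m**3 * h * (V - E_na)
    I_k = g_k * n**4 * (V - E_k)
    I_gaba = g_gaba * (V - E_gaba)     # tonic GABA-A inhibition (illustrative)
    V_new = V + dt * (I_ext - I_na - I_k - g_l * (V - E_l) - I_gaba)
    spikes += (V < 0.0) and (V_new >= 0.0)  # count upward zero crossings as spikes
    V = V_new
print("spikes in 50 ms:", spikes)
```
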
19 pages, 1680 KB  
Article
A Hybrid Decision-Making Framework for Autonomous Vehicles in Urban Environments Based on Multi-Agent Reinforcement Learning with Explainable AI
by Ameni Ellouze, Mohamed Karray and Mohamed Ksantini
Vehicles 2026, 8(1), 8; https://doi.org/10.3390/vehicles8010008 - 2 Jan 2026
Viewed by 487
Abstract
Autonomous vehicles (AVs) are expected to operate safely and efficiently in complex urban environments characterized by dynamic and uncertain elements such as pedestrians, cyclists, and adverse weather. Although current decision-making approaches based on neural networks, fuzzy logic, and reinforcement learning have shown promise, they often struggle to handle ambiguous situations, such as partially hidden road signs or unpredictable human behavior. This paper proposes a new hybrid decision-making framework combining multi-agent reinforcement learning (MARL) and explainable artificial intelligence (XAI) to improve robustness, adaptability, and transparency. Each agent of the MARL architecture is specialized in a specific sub-task (e.g., obstacle avoidance, trajectory planning, intention prediction), enabling modular and cooperative learning. XAI techniques are integrated to provide interpretable rationales for decisions, facilitating human understanding and regulatory compliance. The proposed system will be validated using the CARLA simulator, combined with reference data, to demonstrate improved performance in safety-critical and ambiguous driving scenarios. Full article
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)

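The modular, cooperative decision structure with attached rationales can be caricatured in a few lines. The agents, rules, and arbitration below are hypothetical and hand-coded purely for illustration; the proposed framework learns these policies with MARL and generates explanations with XAI techniques rather than fixed rules.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    action: str
    urgency: float     # 0-1, how strongly the agent wants its action
    rationale: str

def obstacle_agent(obs):
    if obs["pedestrian_distance_m"] < 8:
        return Proposal("obstacle_avoidance", "brake", 0.9, "pedestrian within 8 m")
    return Proposal("obstacle_avoidance", "keep_lane", 0.1, "no nearby obstacle")

def planner_agent(obs):
    return Proposal("trajectory_planning", "keep_lane", 0.4, "route continues straight")

def decide(obs):
    proposals = [obstacle_agent(obs), planner_agent(obs)]
    best = max(proposals, key=lambda p: p.urgency)       # simple priority arbitration
    explanation = "; ".join(f"{p.agent}: {p.action} ({p.rationale})" for p in proposals)
    return best.action, explanation

action, why = decide({"pedestrian_distance_m": 5.0})
print(action)   # brake
print(why)      # human-readable rationale for the chosen manoeuvre
```
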
23 pages, 13345 KB  
Article
Neural-Based Controller on Low-Density FPGAs for Dynamic Systems
by Edson E. Cruz-Miguel, José R. García-Martínez, Jorge Orrante-Sakanassi, José M. Álvarez-Alvarado, Omar A. Barra-Vázquez and Juvenal Rodríguez-Reséndiz
Electronics 2026, 15(1), 198; https://doi.org/10.3390/electronics15010198 - 1 Jan 2026
Viewed by 187
Abstract
This work introduces a logic resource-efficient Artificial Neural Network (ANN) controller for embedded control applications on low-density Field-Programmable Gate Array (FPGA) platforms. The proposed design relies on 32-bit fixed-point arithmetic and incorporates an online learning mechanism, enabling the controller to adapt to system variations while maintaining low hardware complexity. Unlike conventional artificial intelligence solutions that require high-performance processors or Graphics Processing Units (GPUs), the proposed approach targets platforms with limited logic, memory, and computational resources. The ANN controller was described using a Hardware Description Language (HDL) and validated via cosimulation between ModelSim and Simulink. A practical comparison was also made between Proportional-Integral-Derivative (PID) control and an ANN for motor position control. The results confirm that the architecture efficiently utilizes FPGA resources, consuming approximately 50% of the available Digital Signal Processor (DSP) units, less than 40% of logic cells, and only 6% of embedded memory blocks. Owing to its modular design, the architecture is inherently scalable, allowing additional inputs or hidden-layer neurons to be incorporated with minimal impact on overall resource usage. Additionally, the computational latency can be precisely determined and scales with (16n+39)m+31 clock cycles, enabling precise timing analysis and facilitating integration into real-time embedded control systems. Full article

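The reported latency expression, (16n+39)m+31 clock cycles, is easy to turn into concrete figures. The sketch below assumes n is the number of inputs, m the number of hidden-layer neurons, and a 50 MHz clock; both the reading of n and m and the clock frequency are assumptions made for illustration.

```python
# Worked example of the latency scaling quoted in the abstract.
def latency_cycles(n_inputs: int, m_neurons: int) -> int:
    return (16 * n_inputs + 39) * m_neurons + 31

f_clk = 50e6  # Hz, assumed FPGA clock
for n, m in [(2, 4), (4, 8), (8, 16)]:
    cycles = latency_cycles(n, m)
    print(f"n={n}, m={m}: {cycles} cycles = {cycles / f_clk * 1e6:.2f} µs at 50 MHz")
```
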
27 pages, 3190 KB  
Article
A Dynamic Asymmetric Overcurrent-Limiting Strategy for Grid-Forming Modular Multilevel Converters Considering Multiple Physical Constraints
by Qian Chen, Yi Lu, Feng Xu, Fan Zhang, Mingyue Han and Guoteng Wang
Symmetry 2026, 18(1), 53; https://doi.org/10.3390/sym18010053 - 27 Dec 2025
Viewed by 226
Abstract
Grid-forming (GFM) converters are promising for renewable energy integration, but their overcurrent limitation during grid faults remains a critical challenge. Existing overcurrent-limiting strategies were primarily developed for two-level converters and are often inadequate for Modular Multilevel Converters (MMCs). By overlooking the MMC’s unique topology and internal physical constraints, these conventional methods compromise both operational safety and grid support capabilities. Thus, this paper proposes a dynamic asymmetric overcurrent-limiting strategy for grid-forming MMCs that considers multiple physical constraints. The proposed strategy establishes a dynamic asymmetric overcurrent boundary based on three core physical constraints: capacitor voltage ripple, capacitor voltage peak, and the modulation signal. This boundary accurately defines the converter’s true safe operating area under arbitrary operating conditions. To address the complexity of the boundary’s analytical form for real-time application, an offline-trained neural network is introduced as a high-precision function approximator to efficiently and accurately reproduce this dynamic asymmetric boundary. The effectiveness of the proposed strategy is verified by hardware-in-the-loop experiments. Experimental results demonstrate that the proposed strategy reduces the capacitor voltage ripple by 30.7% and maintains the modulation signal safely within the linear range, significantly enhancing both system safety and fault ride-through performance. Full article

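A hedged sketch of the boundary-approximation step: an MLP is fitted offline to a mapping from the operating point to an admissible current limit, and the limiter clamps the current reference to the prediction. The input features and the placeholder "true" boundary below are invented for illustration and are not the constraints derived in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Operating point: [capacitor voltage ripple (pu), capacitor voltage peak (pu), modulation index]
X = rng.uniform([0.02, 0.95, 0.6], [0.12, 1.15, 1.0], size=(5000, 3))
# Placeholder boundary: a smooth, made-up function standing in for the offline analysis.
i_max_true = 1.6 - 2.0 * X[:, 0] - 0.8 * (X[:, 1] - 1.0) - 0.5 * (X[:, 2] - 0.8)

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0).fit(X, i_max_true)

def limit_current(i_ref, operating_point):
    """Clamp the current reference to the learned dynamic boundary."""
    i_max = float(nn.predict(np.asarray(operating_point).reshape(1, -1))[0])
    return float(np.clip(i_ref, -i_max, i_max))

print(limit_current(1.8, [0.06, 1.05, 0.9]))
```
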
15 pages, 2483 KB  
Article
Intelligent Identification of Micro-NPR Bolt Shear Deformation Based on Modular Convolutional Neural Network
by Guang Han, Chen Shang, Zhigang Tao, Xu Yang, Bowen Du, Xiaoyun Sun and Liang Geng
Sensors 2026, 26(1), 184; https://doi.org/10.3390/s26010184 - 26 Dec 2025
Viewed by 313
Abstract
As an important means of reinforcement and support, bolts can effectively address the problem of slope instability. Micro-Negative Poisson's Ratio (Micro-NPR) bolts outperform conventional bolts in mitigating the large deformations caused by geological shifts. The large number of bolt anchoring systems in service requires non-destructive testing technology for quality inspection; this technology uses time-domain signal characteristics to detect internal defects in the bolt anchoring systems of support engineering. Combining stress-wave non-destructive detection with a modular convolutional neural network makes it possible to identify shear deformation in anchored slope supports. Integrating the identification results of the shear-angle and shear-location sub-modules improves the accuracy of detecting shear deformation in the Micro-NPR bolt anchoring system, which will be of considerable value in future engineering applications. Full article
(This article belongs to the Section Intelligent Sensors)

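A minimal sketch of the modular arrangement described above: two small 1-D CNN sub-modules process the same stress-wave trace, one for the shear angle and one for the shear location, and their outputs are combined for the final assessment. Layer sizes, class counts, and the random input are assumptions, not the published network.

```python
import torch
import torch.nn as nn

class SubModule(nn.Module):
    def __init__(self, n_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(16, n_out))
    def forward(self, x):
        return self.net(x)

angle_head = SubModule(n_out=4)      # e.g. 4 shear-angle classes (assumed)
location_head = SubModule(n_out=10)  # e.g. 10 discretised positions along the bolt (assumed)

signal = torch.randn(1, 1, 1024)     # one stress-wave time-domain trace (synthetic)
angle = angle_head(signal).argmax(1)
location = location_head(signal).argmax(1)
print(f"shear angle class {int(angle)}, location bin {int(location)}")
```
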
33 pages, 40054 KB  
Article
MVDCNN: A Multi-View Deep Convolutional Network with Feature Fusion for Robust Sonar Image Target Recognition
by Yue Fan, Cheng Peng, Peng Zhang, Zhisheng Zhang, Guoping Zhang and Jinsong Tang
Remote Sens. 2026, 18(1), 76; https://doi.org/10.3390/rs18010076 - 25 Dec 2025
Viewed by 400
Abstract
Automatic Target Recognition (ATR) in single-view sonar imagery is severely hampered by geometric distortions, acoustic shadows, and incomplete target information due to occlusions and the slant-range imaging geometry, which frequently give rise to misclassification and hinder practical underwater detection applications. To address these critical limitations, this paper proposes a Multi-View Deep Convolutional Neural Network (MVDCNN) based on feature-level fusion for robust sonar image target recognition. The MVDCNN adopts a highly modular and extensible architecture consisting of four interconnected modules: an input reshaping module that adapts multi-view images to match the input format of pre-trained backbone networks via dimension merging and channel replication; a shared-weight feature extraction module that leverages Convolutional Neural Network (CNN) or Transformer backbones (e.g., ResNet, Swin Transformer, Vision Transformer) to extract discriminative features from each view, ensuring parameter efficiency and cross-view feature consistency; a feature fusion module that aggregates complementary features (e.g., target texture and shape) across views using max-pooling to retain the most salient characteristics and suppress noisy or occluded view interference; and a lightweight classification module that maps the fused feature representations to target categories. Additionally, to mitigate the data scarcity bottleneck in sonar ATR, we design a multi-view sample augmentation method based on sonar imaging geometric principles: this method systematically combines single-view samples of the same target via the combination formula and screens valid samples within a predefined azimuth range, constructing high-quality multi-view training datasets without relying on complex generative models or massive initial labeled data. Comprehensive evaluations on the Custom Side-Scan Sonar Image Dataset (CSSID) and Nankai Sonar Image Dataset (NKSID) demonstrate the superiority of our framework over single-view baselines. Specifically, the two-view MVDCNN achieves average classification accuracies of 94.72% (CSSID) and 97.24% (NKSID), with relative improvements of 7.93% and 5.05%, respectively; the three-view MVDCNN further boosts the average accuracies to 96.60% and 98.28%. Moreover, MVDCNN substantially elevates the precision and recall of small-sample categories (e.g., Fishing net and Small propeller in NKSID), effectively alleviating the class imbalance challenge. Mechanism validation via t-Distributed Stochastic Neighbor Embedding (t-SNE) feature visualization and prediction confidence distribution analysis confirms that MVDCNN yields more separable feature representations and more confident category predictions, with stronger intra-class compactness and inter-class discrimination in the feature space. The proposed MVDCNN framework provides a robust and interpretable solution for advancing sonar ATR and offers a technical paradigm for multi-view acoustic image understanding in complex underwater environments. Full article
(This article belongs to the Special Issue Underwater Remote Sensing: Status, New Challenges and Opportunities)

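The feature-level fusion step is easy to sketch: a shared encoder embeds each view, element-wise max-pooling across the view dimension keeps the most salient response per feature, and a light classifier maps the fused vector to a class. The toy encoder below stands in for the pre-trained CNN or Transformer backbones; all sizes are assumptions.

```python
import torch
import torch.nn as nn

class MVFusionNet(nn.Module):
    def __init__(self, n_classes=8, d=64):
        super().__init__()
        self.encoder = nn.Sequential(               # stand-in for a pre-trained backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d))
        self.classifier = nn.Linear(d, n_classes)
    def forward(self, views):                       # views: (B, V, 3, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).view(b, v, -1)  # shared weights per view
        fused, _ = feats.max(dim=1)                 # keep the most salient response per feature
        return self.classifier(fused)

x = torch.randn(2, 3, 3, 128, 128)                  # batch of 2 targets, 3 views each
print(MVFusionNet()(x).shape)                       # torch.Size([2, 8])
```
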
30 pages, 1176 KB  
Article
Towards Secure and Adaptive AI Hardware: A Framework for Optimizing LLM-Oriented Architectures
by Sabya Shtaiwi and Dheya Mustafa
Computers 2026, 15(1), 10; https://doi.org/10.3390/computers15010010 - 25 Dec 2025
Viewed by 741
Abstract
With the increasing computational demands of large language models (LLMs), there is a pressing need for more specialized hardware architectures capable of supporting their dynamic and memory-intensive workloads. This paper examines recent studies on hardware acceleration for AI, focusing on three critical aspects: energy efficiency, architectural adaptability, and runtime security. While notable advancements have been made in accelerating convolutional and deep neural networks using ASICs, FPGAs, and compute-in-memory (CIM) approaches, most existing solutions remain inadequate for the scalability and security requirements of LLMs. Our comparative analysis highlights two key limitations: restricted reconfigurability and insufficient support for real-time threat detection. To address these gaps, we propose a novel architectural framework grounded in modular adaptivity, memory-centric processing, and security-by-design principles. The paper concludes with a proposed evaluation roadmap and outlines promising future research directions, including RISC-V-based secure accelerators, neuromorphic co-processors, and hybrid quantum-AI integration. Full article

14 pages, 13781 KB  
Article
Neurosynaptic Core Prototype for Memristor Crossbar Arrays Diagnostics
by Ivan V. Alyaev, Igor A. Surazhevsky, Dmitry V. Ichyotkin, Vladimir V. Rylkov and Vyacheslav A. Demin
Electronics 2025, 14(24), 4965; https://doi.org/10.3390/electronics14244965 - 18 Dec 2025
Viewed by 612
Abstract
The use of neural network technologies is becoming more widespread today, from automating routine office tasks to developing new medicines. However, at the same time, the load on power grids and generation systems increases significantly, which, alongside the desire to increase equipment performance, further motivates the development of specialized architectures for hardware implementation and training of neural networks. Memristor-based systems are considered one of the promising areas for creating energy-efficient platforms for artificial intelligence (AI) due to their ability to implement in-memory computing at the hardware level. A crucial step towards the realization of such systems is the comprehensive characterization of memristive devices. This work presents the implementation of a hardware platform for the automated measurement of key memristor characteristics, including current-voltage (I-V) curves, retention time, and endurance. The developed device features a modular architecture for validating the functionality of individual subsystems and incorporates a unipolar pulse switching scheme to mitigate the risk of gate-oxide breakdown in 1T1R active arrays that can occur when applying negative voltages during synaptic weight programming. Full article
(This article belongs to the Section Artificial Intelligence)

24 pages, 4739 KB  
Article
Design and Testing of an EMG-Controlled Semi-Active Knee Prosthesis
by Kassymbek Ozhikenov, Yerkebulan Nurgizat, Abu-Alim Ayazbay, Arman Uzbekbayev, Aidos Sultan, Arailym Nussibaliyeva, Nursultan Zhetenbayev, Raushan Kalykpaeva and Gani Sergazin
Sensors 2025, 25(24), 7505; https://doi.org/10.3390/s25247505 - 10 Dec 2025
Viewed by 1641
Abstract
Affordable, sensor-driven lower-limb prostheses remain scarce in middle-income health systems. We report the design, numerical justification, and bench validation of a semi-active transfemoral prosthesis featuring surface electromyography (EMG) control and inertial sensing for low-resource deployment. The mechanical architecture combines a titanium–aluminum–carbon composite frame (total mass 0.87 kg; parts cost < USD 400) with topology optimization (SIMP) to minimize weight while preserving stiffness. Finite-element analyses (critical load 2.94 kN) confirmed structural safety (yield safety factor ≥ 1.6) and favorable fatigue margins. A dual-channel sensing scheme—surface EMG from the rectus femoris and an IMU—drives a five-state gait finite state machine implemented on a low-power STM32H platform. The end-to-end EMG→PWM latency remained <200 ms (mean 185 ms). Bench tests reproduced commanded flexion within ±2.2%, with average electrical power of ~4.6 W and battery autonomy of ~5.7 h using a 1650 mAh Li-Po pack. Results demonstrate a pragmatic trade-off between functionality and cost: semi-active damping with EMG-triggered control and open, modular hardware suitable for small-lab fabrication. Meeting target metrics (mass ≤ 1 kg, latency ≤ 200 ms, autonomy ≥ 6 h, cost ≤ USD 500), the prototype indicates a viable pathway to broaden access to intelligent prostheses and provides a platform for future upgrades (e.g., neural network control and higher-efficiency actuators). Full article
(This article belongs to the Special Issue Recent Advances in Sensor Technology and Robotics Integration)

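The five-state gait finite state machine can be illustrated with a small transition table. The state names, thresholds, and damping duty cycles below are assumptions chosen for the sketch, not the controller implemented on the STM32H platform.

```python
STATES = ["stance", "pre_swing", "swing_flexion", "swing_extension", "terminal"]

def next_state(state, emg_level, knee_velocity):
    """emg_level: normalised rectus femoris activation (0-1); knee_velocity: deg/s (IMU)."""
    if state == "stance" and emg_level > 0.4:
        return "pre_swing"
    if state == "pre_swing" and knee_velocity > 30:
        return "swing_flexion"
    if state == "swing_flexion" and knee_velocity < 0:
        return "swing_extension"
    if state == "swing_extension" and abs(knee_velocity) < 5:
        return "terminal"
    if state == "terminal" and emg_level < 0.2:
        return "stance"
    return state

def damping_pwm(state):
    """Map each gait state to a semi-active damping command (illustrative duty cycles)."""
    return {"stance": 0.9, "pre_swing": 0.5, "swing_flexion": 0.2,
            "swing_extension": 0.35, "terminal": 0.7}[state]

state = "stance"
for emg, vel in [(0.5, 5), (0.5, 40), (0.3, -10), (0.1, 2), (0.1, 0)]:
    state = next_state(state, emg, vel)
    print(state, damping_pwm(state))
```
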
35 pages, 8401 KB  
Article
A Multi-Output Neural Network-Based Hybrid Control Strategy for MMC-HVDC Systems
by Shunxi Guo, Ho Chun Wu, Shing Chow Chan and Jizhong Zhu
Electronics 2025, 14(24), 4803; https://doi.org/10.3390/electronics14244803 - 6 Dec 2025
Viewed by 351
Abstract
The modular multilevel converter (MMC) has become a pivotal technology in high-voltage direct current (HVDC) transmission systems due to its modularity, superior harmonic performance, and enhanced controllability. However, conventional control strategies, including model predictive control (MPC) and sorting-based voltage balancing methods, often suffer from high computational complexity, limited real-time performance, and inadequate handling of transient events. To address these challenges, this paper proposes a novel Multi-Output Neural Network-based hybrid control strategy that integrates a multi-output neural network (MONN) with an optimized reduced-switching-frequency (RSF) sorting algorithm. The MONN directly outputs precise submodule switching signals, eliminating the need for traditional sorting processes and significantly reducing switching losses. Meanwhile, the RSF algorithm further minimizes unnecessary switching operations while maintaining voltage balance. Furthermore, to enhance the accuracy of the predicted switching stages, we extend the MONN for submodule activation count prediction (ACP) and employ a novel Cardinality-Constrained Post-Inference Projection (CCPIP) to further align the predicted switching stages and activation count. Simulation results under dynamic load conditions demonstrate that the proposed method achieves a 76.1% reduction in switching frequency compared to conventional bubble sort, with high switch prediction accuracy (up to 92.01%). This approach offers a computationally efficient, scalable, and adaptive solution for real-time MMC control, enhancing both dynamic response and steady-state stability. Full article

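To make the reduced-switching idea concrete, here is a generic hold-band sorting heuristic: submodules are re-sorted and re-assigned only when the capacitor-voltage spread exceeds a tolerance, otherwise the previous gating pattern is kept. This is a textbook-style sketch, not the paper's RSF algorithm or its neural-network stage; the voltages and tolerance are illustrative.

```python
import numpy as np

def select_submodules(v_caps, n_on, prev_on, arm_current, tol=0.03):
    """Return indices of submodules to insert for this control period."""
    spread = v_caps.max() - v_caps.min()
    if spread < tol * v_caps.mean() and len(prev_on) == n_on:
        return prev_on                               # hold the pattern -> no extra switching
    order = np.argsort(v_caps)                       # lowest capacitor voltage first
    if arm_current > 0:                              # charging: insert the lowest-voltage SMs
        return list(order[:n_on])
    return list(order[::-1][:n_on])                  # discharging: insert the highest-voltage SMs

v_caps = np.array([1.62, 1.58, 1.60, 1.61, 1.57, 1.63])  # kV, illustrative
prev = [0, 2, 3]
print(select_submodules(v_caps, n_on=3, prev_on=prev, arm_current=+120.0))
```
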
36 pages, 6895 KB  
Article
Machine-Learning Algorithms for Remote-Control and Autonomous Operation of the Very-Small, Long-Life, Modular (VSLLIM) Microreactor
by Mohamed S. El-Genk, Timothy M. Schriener and Ahmad N. Shaheen
J. Nucl. Eng. 2025, 6(4), 54; https://doi.org/10.3390/jne6040054 - 2 Dec 2025
Viewed by 526
Abstract
This work investigated machine-learning algorithms for remote-control and autonomous operation of the Very-Small, Long-Life, Modular (VSLLIM) microreactor. This walk-away safe reactor can continuously generate 1.0–10 MW of thermal power for 92 and 5.6 full power years, respectively, is cooled by natural circulation of in-vessel liquid sodium, does not require on-site storage of either fresh or spent nuclear fuel, and offers redundant means of control and passive decay heat removal. The two ML algorithms investigated are Supervised Learning with Long Short-Term Memory networks (SL-LSTM) and Soft-Actor Critic with Feedforward Neural Networks (SAC-FNN). They are trained to manage the movement of the control rods in the reactor core during various transients, including startup, shutdown, and changes in the reactor steady-state power of up to 10 MW. The trained algorithms are incorporated into a Programmable Logic Controller (PLC) coupled to a digital twin dynamic model of the VSLLIM microreactor. Although the SL-LSTM algorithms demonstrate high prediction accuracy of up to 99.95%, they show inferior performance when incorporated into the PLC. Conversely, the PLC with the SAC-FNN algorithm accurately adjusts the control rod positions during the reactor startup transients to within ±1.6% of the target values. Full article

12 pages, 597 KB  
Article
AgentMol: Multi-Model AI System for Automatic Drug-Target Identification and Molecule Development
by Piotr Karabowicz, Radosław Charkiewicz, Alicja Charkiewicz, Anetta Sulewska and Jacek Nikliński
Methods Protoc. 2025, 8(6), 143; https://doi.org/10.3390/mps8060143 - 1 Dec 2025
Viewed by 711
Abstract
Drug discovery remains a time-consuming and costly process, necessitating innovative computational approaches to accelerate early stage target identification and compound development. We introduce AgentMol, a modular multimodel AI system that integrates large language models, chemical language modeling, and deep learning–based affinity prediction to automate the discovery pipeline. AgentMol begins with disease-related queries processed through a Retrieval-Augmented Generation system using the Large Language Model to identify protein targets. Protein sequences are then used to condition a GPT-2–based chemical language model, which generates corresponding small-molecule candidates in SMILES format. Finally, a regression convolutional neural network (RCNN) predicts the drug-target interaction by estimating binding affinities (pKi). Models were trained and validated on 470,560 ligand–protein pairs from the BindingDB database. The chemical language model achieved high validity (1.00), uniqueness (0.96), and diversity (0.89), whereas the RCNN model demonstrated robust predictive performance with R2 > 0.6 and Pearson’s R > 0.8. By leveraging LangGraph for orchestration, AgentMol delivers a scalable, interpretable pipeline, effectively enabling the end-to-end generation and evaluation of drug candidates conditioned on protein targets. This system represents a significant step toward practical AI-driven molecular discovery with accessible computational demands. Full article
(This article belongs to the Special Issue Advanced Methods and Technologies in Drug Discovery)
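
The generation metrics quoted above (validity, uniqueness, diversity) are commonly computed along the lines of the sketch below, using RDKit parsing, canonical SMILES, and mean pairwise Tanimoto distance on Morgan fingerprints. This is a generic recipe with placeholder SMILES, not AgentMol's evaluation code.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

generated = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCO", "CCN(CC)CC"]  # placeholder output

mols = [Chem.MolFromSmiles(s) for s in generated]
valid = [m for m in mols if m is not None]
validity = len(valid) / len(generated)
uniqueness = len({Chem.MolToSmiles(m) for m in valid}) / len(valid)  # canonical SMILES set

fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in valid]
sims = [DataStructs.TanimotoSimilarity(fps[i], fps[j])
        for i in range(len(fps)) for j in range(i + 1, len(fps))]
diversity = 1.0 - sum(sims) / len(sims)             # mean pairwise Tanimoto distance

print(f"validity={validity:.2f} uniqueness={uniqueness:.2f} diversity={diversity:.2f}")
```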