Search Results (58)

Search Parameters:
Keywords = message-passing interface

29 pages, 2553 KB  
Article
Adaptive Path Planning for Autonomous Underwater Vehicle (AUV) Based on Spatio-Temporal Graph Neural Networks and Conditional Normalizing Flow Probabilistic Reconstruction
by Guoshuai Li, Jinghua Wang, Jichuan Dai, Tian Zhao, Danqiang Chen and Cui Chen
Algorithms 2026, 19(2), 147; https://doi.org/10.3390/a19020147 - 11 Feb 2026
Viewed by 58
Abstract
In underwater reconnaissance and patrol, an AUV has to sense and judge traversability in cluttered areas that include reefs, cliffs, and seabed infrastructure. A narrow sonar field of view, occlusion, and current-driven disturbances leave the vehicle with local, time-varying information, so decisions are made with incomplete and uncertain observations. A path-planning framework is built around two coupled components: spatio-temporal graph neural network prediction and conditional normalizing flow (CNF)-based probabilistic environment reconstruction. Forward-looking sonar and inertial navigation system (INS) measurements are fused online to form a local environment graph with temporal encoding. Cross-temporal message passing captures how occupancy and maneuver patterns evolve, which supports path prediction under dynamic reachability and collision-avoidance constraints. For regions that remain unobserved, the CNF performs conditional generation from the available local observations, producing a probabilistic completion and an explicit uncertainty output. Conformal calibration then maps model confidence to credible intervals with controlled miscoverage, giving a consistent probabilistic interface for risk budgeting. To keep pace with ocean currents and moving targets, edge weights and graph connectivity are updated online as new observations arrive. Compared with Informed Rapidly-exploring Random Tree star (Informed RRT*), D* Lite, Soft Actor-Critic (SAC), and Graph Neural Network-Probabilistic Roadmap (GNN-PRM), the proposed method achieves a near-100% success rate at 20% occlusion and maintains about an 80% success rate even under 70% occlusion. In dynamic obstacle scenarios, it yields about a 4% collision rate at low speeds and keeps the collision rate below 20% when obstacle speed increases to 3 m/s. Ablation studies further demonstrate that temporal modeling improves the success rate by about 7.1%, CNF-based probabilistic completion boosts the success rate by about 13.2% and reduces collisions by about 17%, and conformal calibration reduces coverage error by about 6.6%, confirming robust planning under heavy occlusion and time-varying uncertainty.
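
The conformal step in this abstract follows a standard recipe that is easy to state concretely. Below is a minimal C sketch of split conformal calibration, assuming held-out nonconformity scores are available: sort them and take the ceil((n+1)(1-alpha))-th smallest as the interval radius. The scores and alpha are made-up values for illustration; the paper's exact calibration procedure may differ.

```c
/* Minimal sketch of split conformal calibration: given nonconformity
 * scores from a held-out calibration set, compute the radius q that
 * yields prediction intervals with miscoverage at most alpha.
 * Compile with -lm. Values are illustrative, not from the paper. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* q = ceil((n+1)(1-alpha))-th smallest calibration score */
double conformal_radius(double *scores, int n, double alpha) {
    qsort(scores, n, sizeof(double), cmp_double);
    int k = (int)ceil((n + 1) * (1.0 - alpha));
    if (k > n) k = n;                 /* clamp for small n */
    return scores[k - 1];
}

int main(void) {
    double scores[] = {0.12, 0.40, 0.25, 0.33, 0.08, 0.51, 0.19, 0.27};
    int n = sizeof scores / sizeof scores[0];
    double q = conformal_radius(scores, n, 0.10);
    /* any new prediction y_hat gets interval [y_hat - q, y_hat + q] */
    printf("calibrated radius q = %.3f\n", q);
    return 0;
}
```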

28 pages, 673 KB  
Article
How to Find the Covering Radius of Linear Codes over Finite Fields Using a Parity-Check Matrix in Parallel
by Iliya Bouyukliev, Dushan Bikov and Maria Pashinska-Gadzheva
Mathematics 2026, 14(3), 534; https://doi.org/10.3390/math14030534 - 2 Feb 2026
Viewed by 180
Abstract
We present a parallel algorithm for computing the covering radius of a linear [n,k]_q code using its parity-check matrix. The method is based on the systematic generation of syndromes associated with linear combinations of columns of the parity-check matrix. To improve scalability, the search space is partitioned and processed in parallel using a master–worker strategy implemented with the Message Passing Interface (MPI). The proposed approach significantly reduces the computational effort required for covering radius computation, a problem known to be NP-hard in general. Experimental results demonstrate that the parallelization achieves substantial speedups and makes the exact computation of the covering radius feasible for codes with larger parameters.
(This article belongs to the Section E1: Mathematics and Computer Science)
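
The master–worker strategy mentioned above is a standard MPI pattern, and a minimal skeleton helps make it concrete. In the sketch below, rank 0 hands out fixed-size chunks of an abstract search space on demand and the per-worker maxima are reduced at the end; CHUNK, the task encoding, and process_chunk are placeholders, not the authors' syndrome-generation code.

```c
/* Hedged sketch of a master-worker search: the master deals out chunks
 * of an index space on demand; workers return their best partial result
 * via a final reduction. The scoring function is a stand-in. */
#include <mpi.h>
#include <stdio.h>

#define CHUNK 1000000L
#define TAG_WORK 1
#define TAG_STOP 2

static long process_chunk(long start, long len) {  /* placeholder scorer */
    return (start + len) % 97;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long total = 100 * CHUNK, local_best = 0, global_best;
    if (rank == 0) {                      /* master: deal out chunks */
        long next = 0;
        int active = size - 1;
        while (active > 0) {
            MPI_Status st;
            long dummy;
            MPI_Recv(&dummy, 1, MPI_LONG, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (next < total) {
                MPI_Send(&next, 1, MPI_LONG, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                next += CHUNK;
            } else {
                MPI_Send(&next, 1, MPI_LONG, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                active--;
            }
        }
    } else {                              /* worker: request, process */
        long start, req = 0;
        MPI_Status st;
        for (;;) {
            MPI_Send(&req, 1, MPI_LONG, 0, TAG_WORK, MPI_COMM_WORLD);
            MPI_Recv(&start, 1, MPI_LONG, 0, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            long r = process_chunk(start, CHUNK);
            if (r > local_best) local_best = r;
        }
    }
    MPI_Reduce(&local_best, &global_best, 1, MPI_LONG, MPI_MAX, 0,
               MPI_COMM_WORLD);
    if (rank == 0) printf("best score: %ld\n", global_best);
    MPI_Finalize();
    return 0;
}
```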

15 pages, 3863 KB  
Proceeding Paper
Fast Parallel Gaussian Filter Based on Partial Sums
by Atanaska Bosakova-Ardenska, Hristina Andreeva and Ivan Halvadzhiev
Eng. Proc. 2025, 104(1), 1; https://doi.org/10.3390/engproc2025104001 - 21 Aug 2025
Viewed by 661
Abstract
As a convolution in the spatial domain, Gaussian filtering involves a large number of computational operations, and this number grows with both image size and kernel size. Finding methods to accelerate such computations is therefore significant for reducing overall time complexity, and this paper proposes the use of partial sums to achieve that acceleration. The MPI (Message Passing Interface) library and the C programming language are used for parallel implementations of Gaussian filtering based on 1D and 2D kernels, with and without partial sums, followed by a theoretical and practical evaluation of the effectiveness of the proposed implementations. The experimental results indicate a significant acceleration of the computation when partial sums are used in both sequential and parallel processing. The PSNR (Peak Signal-to-Noise Ratio) metric is used to assess the filtering quality of the proposed algorithms against the MATLAB implementation of Gaussian filtering, and their time performance is also evaluated.
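
The partial-sums idea generalizes well beyond this paper, and a small sketch shows why it pays off. With a prefix-sum array, any window sum costs O(1), so a box (mean) pass is O(n) per row regardless of kernel size, and repeated box passes approximate a Gaussian; this is a common technique and only illustrates the general principle, not necessarily the authors' exact formulation.

```c
/* Sketch of the partial-sums idea: a prefix-sum array makes each window
 * sum O(1), so a box pass is O(n) independent of the radius; three box
 * passes approximate a Gaussian blur. Sizes are illustrative. */
#include <stdio.h>

#define N 16

/* one box pass of radius r over row[], using partial sums */
void box_pass(const double *row, double *out, int n, int r) {
    double psum[N + 1];
    psum[0] = 0.0;
    for (int i = 0; i < n; i++) psum[i + 1] = psum[i] + row[i];
    for (int i = 0; i < n; i++) {
        int lo = i - r < 0 ? 0 : i - r;
        int hi = i + r >= n ? n - 1 : i + r;
        out[i] = (psum[hi + 1] - psum[lo]) / (hi - lo + 1);
    }
}

int main(void) {
    double a[N], b[N];
    for (int i = 0; i < N; i++) a[i] = (i == N / 2) ? 1.0 : 0.0;
    box_pass(a, b, N, 2);     /* three box passes of radius 2 */
    box_pass(b, a, N, 2);     /* approximate a Gaussian blur   */
    box_pass(a, b, N, 2);
    for (int i = 0; i < N; i++) printf("%.4f ", b[i]);
    printf("\n");
    return 0;
}
```

In the parallel setting the paper describes, rows (or row blocks) of the image would be distributed across MPI processes, each applying passes like this one locally.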

20 pages, 2206 KB  
Article
Parallelization of Rainbow Tables Generation Using Message Passing Interface: A Study on NTLMv2, MD5, SHA-256 and SHA-512 Cryptographic Hash Functions
by Mark Vainer, Arnas Kačeniauskas and Nikolaj Goranin
Appl. Sci. 2025, 15(15), 8152; https://doi.org/10.3390/app15158152 - 22 Jul 2025
Viewed by 5165
Abstract
Rainbow table attacks utilize a time-memory trade-off to efficiently crack passwords by employing precomputed tables containing chains of passwords and hash values. Generating these tables is computationally intensive, and several researchers have proposed utilizing parallel computing to speed up the generation process. This paper introduces a modification to the traditional master-slave parallelization model using the MPI framework: unlike previous approaches, the generation of starting points is decentralized, allowing each process to generate its own tasks independently. This design is intended to reduce communication overhead and improve the efficiency of rainbow table generation; the number of inter-process communications is reduced by letting each process generate chains independently. We conducted three experiments to evaluate the performance of the parallel rainbow table generation algorithm for four cryptographic hash functions: NTLMv2, MD5, SHA-256 and SHA-512. The first experiment assessed parallel performance, showing near-linear speedup and 95–99% efficiency across varying numbers of nodes. The second experiment evaluated scalability by increasing the number of processed chains from 100 to 100,000, revealing that higher workloads significantly impacted execution time, with SHA-512 being the most computationally intensive. The third experiment evaluated the effect of chain length on execution time, confirming that longer chains increase computational cost, with SHA-512 consistently requiring the most resources. The proposed approach offers an efficient and practical solution to the computational challenges of rainbow table generation. The findings of this research can benefit key stakeholders, including cybersecurity professionals, ethical hackers, digital forensics experts and researchers in cryptography, by providing an efficient method for generating rainbow tables to analyze password security.
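
The decentralized design described above removes the master from the inner loop entirely, which a short sketch makes concrete: every rank derives a disjoint stream of starting points from its own rank and generates chains with no task traffic, reducing only final statistics. The chain step below is a placeholder, not a real hash-plus-reduction function, and the sizes are illustrative.

```c
/* Sketch of decentralized starting-point generation: no master hands
 * out work; each rank computes its own disjoint starting points and
 * builds chains independently. Only a final count is reduced. */
#include <mpi.h>
#include <stdio.h>
#include <stdint.h>

#define CHAINS_PER_RANK 1000
#define CHAIN_LEN 100

static uint64_t step(uint64_t x, int pos) {  /* stand-in for hash+reduce */
    return x * 6364136223846793005ULL + 1442695040888963407ULL + (uint64_t)pos;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long done = 0, total;
    for (long c = 0; c < CHAINS_PER_RANK; c++) {
        /* starting point derived locally: no master communication */
        uint64_t x = (uint64_t)rank * CHAINS_PER_RANK + (uint64_t)c;
        for (int i = 0; i < CHAIN_LEN; i++) x = step(x, i);
        done++;            /* a real table would store the (start, end) pair */
    }
    MPI_Reduce(&done, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("chains generated: %ld\n", total);
    MPI_Finalize();
    return 0;
}
```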

26 pages, 2688 KB  
Article
Improved Parallel Differential Evolution Algorithm with Small Population for Multi-Period Optimal Dispatch Problem of Microgrids
by Tianle Li, Yifei Li, Fang Wang, Cheng Gong, Jingrui Zhang and Hao Ma
Energies 2025, 18(14), 3852; https://doi.org/10.3390/en18143852 - 19 Jul 2025
Cited by 3 | Viewed by 700
Abstract
Microgrids have drawn attention for their role in the development of renewable energy. To make the best use of fluctuating and unpredictable renewable energy, an optimal power dispatch scheme is needed for each micro-source in a microgrid. However, the computational time of solving the optimal dispatch problem increases greatly as the grid structure becomes more complex. An improved parallel differential evolution (PDE) approach based on the message-passing interface (MPI) is proposed for the optimal dispatch problem of a microgrid (MG), reducing the time consumed without degrading the quality of the obtained solution. In the new approach, the main population of the parallel algorithm is divided into several small populations, each of which performs the original operators of a differential evolution algorithm, i.e., mutation, crossover, and selection, in a separate process concurrently. Gather and scatter operations are employed after several iterations to enhance population diversity. Improvements to mutation and adaptive parameters, together with the introduction of a migration operation, are also proposed. Two test systems are employed to verify and evaluate the proposed approach, and comparisons with traditional differential evolution are also reported. The results show that the proposed PDE algorithm can reduce the time consumed while obtaining solutions that are no worse.
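
A toy version of the small-population scheme clarifies the communication pattern: each process runs DE (rand/1/bin here) on its own subpopulation, and every few generations the subpopulations are gathered at rank 0, rotated by one block, and scattered back to refresh diversity. The objective, parameters, and mixing rule are illustrative; the paper's improved mutation, adaptive parameters, and migration operation are not reproduced.

```c
/* Sketch of parallel DE with small populations: independent evolution
 * per rank, periodic gather/rotate/scatter to mix subpopulations. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DIM 4
#define NP 10            /* individuals per process */
#define GENS 200
#define MIX_EVERY 25
#define F 0.5
#define CR 0.9

static double sphere(const double *x) {      /* toy objective */
    double s = 0;
    for (int d = 0; d < DIM; d++) s += x[d] * x[d];
    return s;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    srand(1234 + rank);

    double pop[NP][DIM], trial[DIM];
    for (int i = 0; i < NP; i++)
        for (int d = 0; d < DIM; d++)
            pop[i][d] = 10.0 * rand() / RAND_MAX - 5.0;

    double *all = malloc((size_t)size * NP * DIM * sizeof(double));
    for (int g = 1; g <= GENS; g++) {
        for (int i = 0; i < NP; i++) {          /* mutation + crossover */
            int a = rand() % NP, b = rand() % NP, c = rand() % NP;
            for (int d = 0; d < DIM; d++)
                trial[d] = ((double)rand() / RAND_MAX < CR)
                         ? pop[a][d] + F * (pop[b][d] - pop[c][d])
                         : pop[i][d];
            if (sphere(trial) < sphere(pop[i]))  /* selection */
                for (int d = 0; d < DIM; d++) pop[i][d] = trial[d];
        }
        if (g % MIX_EVERY == 0) {               /* gather, rotate, scatter */
            MPI_Gather(pop, NP * DIM, MPI_DOUBLE, all, NP * DIM,
                       MPI_DOUBLE, 0, MPI_COMM_WORLD);
            if (rank == 0) {                    /* rotate blocks by one */
                double tmp[NP * DIM];
                memcpy(tmp, all, sizeof tmp);
                memmove(all, all + NP * DIM,
                        (size_t)(size - 1) * NP * DIM * sizeof(double));
                memcpy(all + (size_t)(size - 1) * NP * DIM, tmp, sizeof tmp);
            }
            MPI_Scatter(all, NP * DIM, MPI_DOUBLE, pop, NP * DIM,
                        MPI_DOUBLE, 0, MPI_COMM_WORLD);
        }
    }
    double best = sphere(pop[0]), gbest;
    for (int i = 1; i < NP; i++) {
        double f = sphere(pop[i]);
        if (f < best) best = f;
    }
    MPI_Reduce(&best, &gbest, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("best objective: %g\n", gbest);
    free(all);
    MPI_Finalize();
    return 0;
}
```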

20 pages, 1535 KB  
Article
Multi-Agentic LLMs for Personalizing STEM Texts
by Michael Vaccaro, Mikayla Friday and Arash Zaghi
Appl. Sci. 2025, 15(13), 7579; https://doi.org/10.3390/app15137579 - 6 Jul 2025
Cited by 3 | Viewed by 3205
Abstract
Multi-agent large language models promise flexible, modular architectures for delivering personalized educational content. Drawing on a pilot randomized controlled trial with middle school students (n = 23), we introduce a two-agent GPT-4 framework in which a Profiler agent infers learner-specific preferences and a Rewrite agent dynamically adapts science passages via an explicit message-passing protocol. We implement structured system and user prompts as inter-agent communication schemas to enable real-time content adaptation. The results of an ordinal logistic regression analysis hinted that students may be more likely to prefer texts aligned with their profile, demonstrating the feasibility of multi-agent system-driven personalization and highlighting the need for additional work to build upon this pilot study. Beyond empirical validation, we present a modular multi-agent architecture detailing agent roles, communication interfaces, and scalability considerations. We discuss design best practices, ethical safeguards, and pathways for extending this framework to collaborative agent networks—such as feedback-analysis agents—in K-12 settings. These results advance both our theoretical and applied understanding of multi-agent LLM systems for personalized learning.

22 pages, 2191 KB  
Review
Towards Efficient HPC: Exploring Overlap Strategies Using MPI Non-Blocking Communication
by Yuntian Zheng and Jianping Wu
Mathematics 2025, 13(11), 1848; https://doi.org/10.3390/math13111848 - 2 Jun 2025
Viewed by 2811
Abstract
As high-performance computing (HPC) platforms continue to scale up, communication costs have become a critical bottleneck affecting overall application performance. An effective strategy to overcome this limitation is to overlap communication with computation. The Message Passing Interface (MPI), as the de facto standard for communication in HPC, provides non-blocking communication primitives that make such overlapping feasible. By enabling asynchronous communication, non-blocking operations reduce the idle time of cores caused by data transfer delays, thereby improving resource utilization. Overlapping communication with computation is particularly important for enhancing the performance of large-scale scientific applications, such as numerical simulations, climate modeling, and other data-intensive tasks. However, achieving efficient overlapping is non-trivial and depends not only on advances in hardware technologies such as Remote Direct Memory Access (RDMA), but also on well-designed and optimized MPI implementations. This paper presents a comprehensive survey of the principles of MPI non-blocking communication, the core techniques for achieving computation–communication overlap, and representative applications in scientific computing. Alongside the survey, we include a preliminary experimental study evaluating the effectiveness of the asynchronous progress mechanism on modern HPC platforms, to support HPC researchers and practitioners in developing parallel programs.
(This article belongs to the Special Issue Numerical Analysis and Algorithms for High-Performance Computing)
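
The overlap pattern the survey covers has a canonical shape, sketched below for a 1D Jacobi-style stencil: post MPI_Irecv/MPI_Isend for halo values, update the interior points that need no remote data while the messages are in flight, then MPI_Waitall and finish the boundary points. Sizes and the update rule are illustrative.

```c
/* Minimal computation-communication overlap with non-blocking MPI:
 * start the halo exchange, compute the interior, wait, finish edges. */
#include <mpi.h>
#include <stdio.h>

#define N 1024            /* local interior points per rank */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = rank > 0        ? rank - 1 : MPI_PROC_NULL;
    int right = rank < size - 1 ? rank + 1 : MPI_PROC_NULL;

    double u[N + 2], unew[N + 2];   /* u[0] and u[N+1] are halo cells */
    for (int i = 0; i < N + 2; i++) u[i] = rank;

    MPI_Request reqs[4];
    /* 1) start the halo exchange */
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&u[N],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

    /* 2) overlap: update interior points that need no halo data */
    for (int i = 2; i <= N - 1; i++)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

    /* 3) complete communication, then update the two boundary points */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    unew[1] = 0.5 * (u[0] + u[2]);
    unew[N] = 0.5 * (u[N - 1] + u[N + 1]);

    if (rank == 0) printf("unew[1] = %.2f\n", unew[1]);
    MPI_Finalize();
    return 0;
}
```

Whether the interior computation actually hides the transfer depends on the asynchronous progress mechanism the survey's experiments evaluate.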

35 pages, 11134 KB  
Article
Error Classification and Static Detection Methods in Tri-Programming Models: MPI, OpenMP, and CUDA
by Saeed Musaad Altalhi, Fathy Elbouraey Eassa, Sanaa Abdullah Sharaf, Ahmed Mohammed Alghamdi, Khalid Ali Almarhabi and Rana Ahmad Bilal Khalid
Computers 2025, 14(5), 164; https://doi.org/10.3390/computers14050164 - 28 Apr 2025
Viewed by 1573
Abstract
The growing adoption of supercomputers across various scientific disciplines, particularly by researchers without a background in computer science, has intensified the demand for parallel applications. These applications are typically developed using a combination of programming models within languages such as C, C++, and Fortran. However, modern multi-core processors and accelerators necessitate fine-grained control to achieve effective parallelism, complicating the development process. To address this, developers commonly utilize high-level programming models such as Open Multi-Processing (OpenMP), Open Accelerators (OpenACC), Message Passing Interface (MPI), and Compute Unified Device Architecture (CUDA). These models may be used independently or combined into dual- or tri-model applications to leverage their complementary strengths. However, integrating multiple models introduces subtle and difficult-to-detect runtime errors such as data races, deadlocks, and livelocks that often elude conventional compilers. This complexity is exacerbated in applications that simultaneously incorporate MPI, OpenMP, and CUDA, where the origin of runtime errors, whether from individual models, user logic, or their interactions, becomes ambiguous. Moreover, existing tools are inadequate for detecting such errors in tri-model applications, leaving a critical gap in development support. To address this gap, the present study introduces a static analysis tool designed specifically for tri-model applications combining MPI, OpenMP, and CUDA in C++-based environments. The tool analyzes source code to identify both actual and potential runtime errors prior to execution. Central to this approach is the introduction of error dependency graphs, a novel mechanism for systematically representing and analyzing error correlations in hybrid applications. By offering both error classification and comprehensive static detection, the proposed tool enhances error visibility and reduces manual testing effort. This contributes significantly to the development of more robust parallel applications for high-performance computing (HPC) and future exascale systems.
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
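
One of the error classes discussed above is easy to reproduce in a few lines. The deliberately buggy sketch below compiles cleanly yet can deadlock at runtime: both ranks issue a blocking MPI_Send first, so if the message exceeds the MPI implementation's eager-buffering threshold, each send blocks waiting for a matching receive that is never posted. The call order alone reveals the hazard, which is exactly what a static analyzer can exploit.

```c
/* A minimal potential-deadlock pattern that no compiler flags.
 * BUG (intentional): send-before-receive on both ranks. */
#include <mpi.h>
#include <stdio.h>

#define N (1 << 22)       /* large enough to defeat eager buffering */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) MPI_Abort(MPI_COMM_WORLD, 1);  /* pattern needs 2 ranks */

    static double sbuf[N], rbuf[N];
    int other = 1 - rank;

    /* BUG: both ranks block in MPI_Send -> possible deadlock */
    MPI_Send(sbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
    MPI_Recv(rbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    /* Safe alternative: MPI_Sendrecv pairs the two operations:
     * MPI_Sendrecv(sbuf, N, MPI_DOUBLE, other, 0,
     *              rbuf, N, MPI_DOUBLE, other, 0,
     *              MPI_COMM_WORLD, MPI_STATUS_IGNORE);
     */
    if (rank == 0) printf("done (buffering saved this run)\n");
    MPI_Finalize();
    return 0;
}
```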

22 pages, 3570 KB  
Article
High-Performance Computing and Parallel Algorithms for Urban Water Demand Forecasting
by Georgios Myllis, Alkiviadis Tsimpiris, Stamatios Aggelopoulos and Vasiliki G. Vrana
Algorithms 2025, 18(4), 182; https://doi.org/10.3390/a18040182 - 22 Mar 2025
Cited by 5 | Viewed by 2167
Abstract
This paper explores the application of parallel algorithms and high-performance computing (HPC) in the processing and forecasting of large-scale water demand data. Building upon prior work, which identified the need for more robust and scalable forecasting models, this study integrates parallel computing frameworks such as Apache Spark for distributed data processing, Message Passing Interface (MPI) for fine-grained parallel execution, and CUDA-enabled GPUs for deep learning acceleration. These advancements significantly improve model training and deployment speed, enabling near-real-time data processing. Apache Spark's in-memory computing and distributed data handling optimize data preprocessing and model execution, while MPI provides enhanced control over custom parallel algorithms, ensuring high performance in complex simulations. By leveraging these techniques, urban water utilities can implement scalable, efficient, and reliable forecasting solutions critical for sustainable water resource management in increasingly complex environments. Additionally, expanding these models to larger datasets and diverse regional contexts will be essential for validating their robustness and applicability in different urban settings. Addressing these challenges will help bridge the gap between theoretical advancements and practical implementation, ensuring that HPC-driven forecasting models provide actionable insights for real-world water management decision-making.

23 pages, 6475 KB  
Article
Genetic Algorithm-Enhanced Direct Method in Protein Crystallography
by Ruijiang Fu, Wu-Pei Su and Hongxing He
Molecules 2025, 30(2), 288; https://doi.org/10.3390/molecules30020288 - 13 Jan 2025
Cited by 3 | Viewed by 1624
Abstract
Direct methods based on iterative projection algorithms can determine protein crystal structures directly from X-ray diffraction data without prior structural information. However, traditional direct methods often converge to local minima during electron density iteration, leading to reconstruction failure. Here, we present an enhanced direct method incorporating genetic algorithms for electron density modification in real space. The method features customized selection, crossover, and mutation strategies; premature convergence prevention; and efficient Message Passing Interface (MPI) parallelization. We systematically tested the method on 15 protein structures from different space groups with diffraction resolutions of 1.35∼2.5 Å. The test cases included high-solvent-content structures, high-resolution structures with medium solvent content, and structures with low solvent content and non-crystallographic symmetry (NCS). Results showed that the enhanced method significantly improved success rates from below 30% to nearly 100%, with average phase errors reduced below 40°. The reconstructed electron density maps were of sufficient quality for automated model building. This method provides an effective alternative for solving structures that are difficult to predict accurately by AlphaFold3 or challenging to solve by molecular replacement and experimental phasing methods. The implementation is available on GitHub.
(This article belongs to the Special Issue Advanced Research in Macromolecular Crystallography)
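
One common way to MPI-parallelize a GA of this kind is to replicate the population on every rank but split the expensive fitness evaluations across ranks, sharing the results with MPI_Allgather so that selection, crossover, and mutation can proceed identically everywhere. The sketch below shows that pattern only; the fitness function is a toy placeholder, not the paper's electron-density figure of merit, and the paper's customized operators are omitted.

```c
/* Sketch of rank-parallel fitness evaluation for a replicated GA
 * population: each rank scores a contiguous slice, then all ranks
 * share the scores so the GA operators stay deterministic everywhere. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define POP 32            /* must be divisible by the number of ranks */
#define DIM 8

static double fitness(const double *x) {       /* placeholder objective */
    double s = 0;
    for (int d = 0; d < DIM; d++) s += (x[d] - 1.0) * (x[d] - 1.0);
    return s;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (POP % size != 0) MPI_Abort(MPI_COMM_WORLD, 1);

    srand(42);                       /* same seed: replicated population */
    double pop[POP][DIM], fit[POP];
    for (int i = 0; i < POP; i++)
        for (int d = 0; d < DIM; d++)
            pop[i][d] = 4.0 * rand() / RAND_MAX - 2.0;

    int chunk = POP / size, lo = rank * chunk;
    double local[POP];               /* this rank's slice of scores */
    for (int i = 0; i < chunk; i++)
        local[i] = fitness(pop[lo + i]);
    MPI_Allgather(local, chunk, MPI_DOUBLE, fit, chunk, MPI_DOUBLE,
                  MPI_COMM_WORLD);

    int best = 0;                    /* every rank now sees all scores */
    for (int i = 1; i < POP; i++)
        if (fit[i] < fit[best]) best = i;
    if (rank == 0) printf("best fitness: %g\n", fit[best]);
    MPI_Finalize();
    return 0;
}
```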

25 pages, 1511 KB  
Article
Performance Study of an MRI Motion-Compensated Reconstruction Program on Intel CPUs, AMD EPYC CPUs, and NVIDIA GPUs
by Mohamed Aziz Zeroual, Karyna Isaieva, Pierre-André Vuissoz and Freddy Odille
Appl. Sci. 2024, 14(21), 9663; https://doi.org/10.3390/app14219663 - 23 Oct 2024
Cited by 3 | Viewed by 2228
Abstract
Motion-compensated image reconstruction enables new clinical applications of Magnetic Resonance Imaging (MRI), but it relies on computationally intensive algorithms. This study focuses on the Generalized Reconstruction by Inversion of Coupled Systems (GRICS) program, applied to the reconstruction of 3D images in cases of non-rigid or rigid motion. It uses hybrid parallelization with MPI (Message Passing Interface) and OpenMP (Open Multi-Processing). For clinical integration, GRICS needs to efficiently harness the computational resources of compute nodes. We aim to improve GRICS's performance without any code modification. This work presents a performance study of GRICS on two CPU architectures: Intel Xeon Gold and AMD EPYC. The roofline model is used to study the software–hardware interaction and quantify the code's performance. For CPU–GPU comparison purposes, we propose a preliminary MATLAB–GPU implementation of the GRICS reconstruction kernel. We establish the roofline model of the kernel on two NVIDIA GPU architectures: Quadro RTX 5000 and A100. After the performance study, we propose optimization patterns for the code's execution on CPUs, first considering only the OpenMP implementation (using thread binding and affinity and appropriate architecture-specific compilation flags) and then looking for the optimal combination of MPI processes and OpenMP threads in the hybrid MPI–OpenMP implementation. The results show that GRICS performed well on the AMD EPYC CPUs, with an architectural efficiency of 52%. The kernel's execution was fast on the NVIDIA A100 GPU, but the roofline model reported low architectural efficiency and utilization.
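
A minimal hybrid MPI-OpenMP program shows the setup whose process/thread balance the study tunes: MPI_Init_thread requests FUNNELED support (only the main thread calls MPI), OpenMP threads share each rank's slice of a loop, and communication happens outside the parallel region. The array size and workload are illustrative.

```c
/* Minimal hybrid MPI+OpenMP skeleton: FUNNELED threading, OpenMP
 * reduction inside each rank, MPI reduction across ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    static double x[N];
    double local = 0.0, total;

    /* OpenMP threads split this rank's strided chunk of the array */
    #pragma omp parallel for reduction(+ : local)
    for (int i = rank; i < N; i += size)
        local += (x[i] = 1.0 / (i + 1.0));

    /* only the main thread communicates (FUNNELED) */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("ranks=%d threads=%d sum=%f\n", size,
               omp_get_max_threads(), total);
    MPI_Finalize();
    return 0;
}
```

The process/thread split and binding are then controlled at launch, e.g. via OMP_NUM_THREADS and OMP_PROC_BIND together with the MPI launcher's mapping options, which is the knob space the paper's optimization explores.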

13 pages, 3590 KB  
Proceeding Paper
Performance Evaluation of Recursive Mean Filter Using Scilab, MATLAB, and MPI (Message Passing Interface)
by Hristina Andreeva and Atanaska Bosakova-Ardenska
Eng. Proc. 2024, 70(1), 33; https://doi.org/10.3390/engproc2024070033 - 8 Aug 2024
Viewed by 1273
Abstract
As a popular linear filter, the mean filter is widely used in different applications as a basic tool for image enhancement. Its main purpose is to reduce noise in an image and thus prepare the picture for subsequent image-processing operations, depending on the task at hand. In the last decade, the amount of data, particularly images, that has to be processed in a variety of applications has increased significantly, making the use of effective and fast filtering algorithms crucial. The aim of the present research is to identify which type of software (MATLAB, Scilab, or MPI-based) is preferable for reducing filtering time and, consequently, saving energy; it thus aligns with current trends in information processing and with green computing concepts. A set of experimental images divided into two groups, one of small images and one of large images, is used for performance evaluation of the recursive mean filter. This type of linear filter was chosen for its very good denoising characteristics. The filter is implemented in the MATLAB and Scilab environments using their specific commands, and it is also implemented in the C language with the MPI library to allow parallel execution. Two mobile computer systems are used for the experimental evaluation, and the results indicate that filtering is slowest with Scilab and fastest with the C implementation using MPI. Depending on the number and size of the images to be filtered, the study offers guidance for achieving effective performance throughout the image-processing workflow.
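
The recursive formulation evaluated above has a compact 1D core: after the first window, each output is derived from the previous window sum by adding the entering sample and subtracting the leaving one, so the cost per sample is constant regardless of window width. The sketch below shows that core in C; the signal, radius, and border handling are illustrative, and an image filter would apply the same pass per row and per column.

```c
/* Recursive (running-sum) mean filter: O(1) work per sample,
 * independent of the window width 2*R+1. */
#include <stdio.h>

#define N 20
#define R 2               /* window = 2*R + 1 samples */

void recursive_mean(const double *in, double *out, int n, int r) {
    int w = 2 * r + 1;
    double sum = 0.0;
    for (int i = 0; i < w; i++) sum += in[i];
    out[r] = sum / w;
    for (int i = r + 1; i < n - r; i++) {      /* recursive update */
        sum += in[i + r] - in[i - r - 1];
        out[i] = sum / w;
    }
    for (int i = 0; i < r; i++) {              /* copy the borders */
        out[i] = in[i];
        out[n - 1 - i] = in[n - 1 - i];
    }
}

int main(void) {
    double x[N], y[N];
    for (int i = 0; i < N; i++) x[i] = (i % 4 == 0) ? 4.0 : 1.0;
    recursive_mean(x, y, N, R);
    for (int i = 0; i < N; i++) printf("%.2f ", y[i]);
    printf("\n");
    return 0;
}
```

In the MPI version the paper benchmarks, image rows would be distributed across processes, each running this pass locally.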

9 pages, 5236 KB  
Article
Beamline Optimisation for High-Intensity Muon Beams at PSI Using the Heterogeneous Island Model
by Eremey Valetov, Giovanni Dal Maso, Peter-Raymond Kettle, Andreas Knecht and Angela Papa
Particles 2024, 7(3), 683-691; https://doi.org/10.3390/particles7030039 - 1 Aug 2024
Viewed by 2286
Abstract
The High Intensity Muon Beams (HIMB) project at the Paul Scherrer Institute (PSI) will deliver muon beams with unprecedented intensities of up to 10^10 muons/s for next-generation particle physics and material science experiments. This represents a hundredfold increase over the current state-of-the-art muon intensities, also provided by PSI. We performed beam dynamics optimisations and studies for the design of the HIMB beamlines MUH2 and MUH3 using Graphics Transport, Graphics Turtle, and G4beamline, the latter incorporating PSI's own measured π+ cross-sections and variance reduction. We initially performed large-scale beamline optimisations using asynchronous Bayesian optimisation with DeepHyper. We are now developing an island-based evolutionary optimisation code, glyfada, based on the Paradiseo framework, in which we implemented Message Passing Interface (MPI) islands with OpenMP parallelisation within each island. Furthermore, we implemented an island model that is also suitable for high-throughput computing (HTC) environments, with asynchronous communication via a Redis database. The code interfaces with COSY INFINITY and G4beamline. glyfada will provide heterogeneous island model optimisation using evolutionary optimisation and local search methods, as well as part-wise optimisation of the beamline with automatic advancement through stages. We will use glyfada for a future large-scale optimisation of the HIMB beamlines.
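
A simplified, synchronous island model makes the MPI side of this design concrete: each rank evolves its own island and periodically passes its best individual to the next rank in a ring (MPI_Sendrecv avoids deadlock), replacing the receiver's worst individual. Local evolution is stubbed out, and the paper's heterogeneous islands, OpenMP parallelisation within islands, and asynchronous Redis-based communication are beyond this sketch.

```c
/* Sketch of an MPI island model with synchronous ring migration:
 * one island per rank; the best individual migrates each epoch. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ISLAND 8          /* individuals per island */
#define DIM 4
#define EPOCHS 10

static double eval(const double *x) {        /* toy objective */
    double s = 0;
    for (int d = 0; d < DIM; d++) s += x[d] * x[d];
    return s;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int next = (rank + 1) % size, prev = (rank + size - 1) % size;

    srand(7 + rank);
    double isl[ISLAND][DIM], incoming[DIM];
    for (int i = 0; i < ISLAND; i++)
        for (int d = 0; d < DIM; d++)
            isl[i][d] = 2.0 * rand() / RAND_MAX - 1.0;

    for (int e = 0; e < EPOCHS; e++) {
        /* ... local evolution on this island would run here ... */
        int best = 0, worst = 0;
        for (int i = 1; i < ISLAND; i++) {
            if (eval(isl[i]) < eval(isl[best])) best = i;
            if (eval(isl[i]) > eval(isl[worst])) worst = i;
        }
        /* migration: best travels around the ring, replaces local worst */
        MPI_Sendrecv(isl[best], DIM, MPI_DOUBLE, next, 0,
                     incoming, DIM, MPI_DOUBLE, prev, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int d = 0; d < DIM; d++) isl[worst][d] = incoming[d];
    }
    double lb = eval(isl[0]), gb;
    for (int i = 1; i < ISLAND; i++)
        if (eval(isl[i]) < lb) lb = eval(isl[i]);
    MPI_Reduce(&lb, &gb, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("best across islands: %g\n", gb);
    MPI_Finalize();
    return 0;
}
```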

12 pages, 5199 KB  
Article
EGG: Accuracy Estimation of Individual Multimeric Protein Models Using Deep Energy-Based Models and Graph Neural Networks
by Andrew Jordan Siciliano, Chenguang Zhao, Tong Liu and Zheng Wang
Int. J. Mol. Sci. 2024, 25(11), 6250; https://doi.org/10.3390/ijms25116250 - 6 Jun 2024
Viewed by 1944
Abstract
Reliable and accurate methods of estimating the accuracy of predicted protein models are vital to understanding their respective utility. Discerning how the quaternary structure conforms can significantly improve our collective understanding of cell biology, systems biology, disease formation, and disease treatment. Accurately determining the quality of multimeric protein models is still computationally challenging, as the space of possible conformations is significantly larger when proteins form in complex with one another. Here, we present EGG (energy and graph-based architectures) to assess the accuracy of predicted multimeric protein models. We implemented message-passing and transformer layers to infer the overall fold and interface accuracy scores of predicted multimeric protein models. When evaluated with CASP15 targets, our methods achieved promising results against single model predictors: fourth and third place for determining the highest-quality model when estimating overall fold accuracy and overall interface accuracy, respectively, and first place for determining the top three highest quality models when estimating both overall fold accuracy and overall interface accuracy.
(This article belongs to the Special Issue Structural and Functional Analysis of Amino Acids and Proteins)

13 pages, 8672 KB  
Article
Efficient Parallel FDTD Method Based on Non-Uniform Conformal Mesh
by Kaihui Liu, Tao Huang, Liang Zheng, Xiaolin Jin, Guanjie Lin, Luo Huang, Wenjing Cai, Dapeng Gong and Chunwang Fang
Appl. Sci. 2024, 14(11), 4364; https://doi.org/10.3390/app14114364 - 21 May 2024
Cited by 1 | Viewed by 2867
Abstract
The finite-difference time-domain (FDTD) method is a versatile electromagnetic simulation technique, widely used for solving various broadband problems. However, when dealing with complex structures and large dimensions, and especially when applying perfectly matched layer (PML) absorbing boundaries, tremendous computational burdens occur. To reduce computational time and memory, this paper presents a Message Passing Interface (MPI) parallel scheme based on non-uniform conformal FDTD that supports convolutional perfectly matched layer (CPML) absorbing boundaries and adopts a domain decomposition approach, dividing the entire computational domain into several subdomains. Importantly, only one magnetic-field exchange is required per iteration, and the electric-field update is divided into internal and external parts, allowing the synchronous communication of magnetic fields between adjacent subdomains to proceed alongside the internal electric-field updates. Finally, models of unmanned helicopters, helical antennas, 100-period folded waveguides, and 16 × 16 phased-array antennas are used to verify the accuracy and efficiency of the algorithm. Moreover, parallel tests on a supercomputing platform show a satisfactory reduction in computational time and excellent parallel efficiency.
(This article belongs to the Special Issue Parallel Computing and Grid Computing: Technologies and Applications)
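
A 1D stand-in clarifies the single-exchange scheme described above: subdomain seams sit on duplicated electric-field nodes, the magnetic field is exchanged once per time step, and the electric-field update is split into internal points (no remote data) and external seam points (using the received halo values). The update coefficients and source are illustrative and do not reflect the paper's non-uniform conformal mesh.

```c
/* 1D FDTD with domain decomposition: H update is fully local, one
 * boundary-H exchange per step, then internal + external E updates. */
#include <mpi.h>
#include <stdio.h>

#define NZ 100            /* local cells per subdomain */
#define STEPS 50

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = rank > 0        ? rank - 1 : MPI_PROC_NULL;
    int right = rank < size - 1 ? rank + 1 : MPI_PROC_NULL;

    double ex[NZ + 1] = {0}, hy[NZ] = {0};  /* ex[0], ex[NZ] are seams */
    double hleft = 0.0, hright = 0.0;       /* halo H values (0 = PEC) */

    for (int t = 0; t < STEPS; t++) {
        /* H update: fully local on each subdomain */
        for (int k = 0; k < NZ; k++)
            hy[k] += 0.5 * (ex[k + 1] - ex[k]);

        /* the single per-iteration exchange: boundary H, both ways */
        MPI_Sendrecv(&hy[NZ - 1], 1, MPI_DOUBLE, right, 0,
                     &hleft,      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&hy[0],      1, MPI_DOUBLE, left,  1,
                     &hright,     1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* internal E update needs no remote data ... */
        for (int k = 1; k < NZ; k++)
            ex[k] += 0.5 * (hy[k] - hy[k - 1]);
        /* ... external (seam) E nodes use the received halo values,
         * so duplicated seam nodes stay consistent on both sides */
        ex[0]  += 0.5 * (hy[0] - hleft);
        ex[NZ] += 0.5 * (hright - hy[NZ - 1]);

        if (rank == 0 && t < 10) ex[NZ / 2] += 1.0;   /* soft source */
    }
    if (rank == 0) printf("ex[1] = %e\n", ex[1]);
    MPI_Finalize();
    return 0;
}
```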
