Search Results (23)

Search Parameters:
Keywords = primality test

27 pages, 2254 KiB  
Article
Distributed Optimization Strategy for Voltage Regulation in PV-Integrated Power Systems with Limited Sensor Deployment
by Xun Lu, Junlei Liu, Xinmiao Liu, Jun Liu and Lingxue Lin
Energies 2025, 18(14), 3598; https://doi.org/10.3390/en18143598 - 8 Jul 2025
Viewed by 234
Abstract
This paper presents a distributed optimization strategy for reactive power–voltage control in distribution networks with high photovoltaic (PV) penetration under limited sensor deployment scenarios. To address voltage violations and minimize network power losses, a novel distributed optimization framework is developed that utilizes selective nodal measurements from PV-integrated nodes and critical T-junction locations, coupled with inter-node communication for information exchange. The methodology integrates an adaptive step size algorithm within a dynamic projected primal–dual distributed optimization framework, eliminating manual parameter tuning requirements while ensuring theoretical convergence guarantees through Lyapunov stability analysis. Comprehensive validation on the IEEE 33-bus distribution test system demonstrates that the proposed strategy achieves significant performance improvements. The distributed control framework reduces measurement infrastructure requirements while maintaining near-optimal performance, demonstrating superior economic efficiency and operational reliability. These results establish the practical viability of the proposed approach for real-world distribution network applications with high renewable energy integration, providing a cost-effective solution for voltage regulation under incomplete observability conditions. Full article
(This article belongs to the Special Issue Advances in Power Distribution Systems)

15 pages, 11069 KiB  
Article
Implementation of a Non-Intrusive Primal–Dual Method with 2D-3D-Coupled Models for the Analysis of a DCB Test with Cohesive Zones
by Ricardo Hernández, Jorge Hinojosa, Ignacio Fuenzalida-Henríquez and Víctor Tuninetti
Appl. Sci. 2025, 15(12), 6924; https://doi.org/10.3390/app15126924 - 19 Jun 2025
Viewed by 298
Abstract
This study explores a global–local non-intrusive computational strategy to address problems in computational mechanics, specifically applied to a double cantilever beam (DCB) with cohesive interfaces. The method aims to reduce computational requirements while maintaining accuracy. The DCB, representing two plates connected by a cohesive zone simulating delamination, was modeled with a 3D representation using the cohesive zone method for crack propagation. Different mesh configurations were tested to evaluate the strategy’s effectiveness. The results showed that the global–local strategy successfully provided solutions that were comparable to monolithic models. Mesh size had a significant impact on the results, but even with a simplified local model that did not fully represent the plate thickness, the structural deformation and crack displacement were accurately captured. The interface near the study area influenced the stress distribution. Although effective, the strategy requires careful mesh selection due to its sensitivity to mesh size. Future research could optimize mesh configurations, expand the strategy to other structures, and explore the use of orthotropic materials. This research introduces a computational approach that reduces costs while simulating delamination and crack propagation, highlighting the importance of mesh configuration for real-world applications. Full article

22 pages, 2740 KiB  
Article
Unsupervised Canine Emotion Recognition Using Momentum Contrast
by Aarya Bhave, Alina Hafner, Anushka Bhave and Peter A. Gloor
Sensors 2024, 24(22), 7324; https://doi.org/10.3390/s24227324 - 16 Nov 2024
Cited by 2 | Viewed by 2585
Abstract
We describe a system for identifying dog emotions based on dogs’ facial expressions and body posture. Towards that goal, we built a dataset with 2184 images of ten popular dog breeds, grouped into seven similarly sized primal mammalian emotion categories defined by neuroscientist and psychobiologist Jaak Panksepp as ‘Exploring’, ‘Sadness’, ‘Playing’, ‘Rage’, ‘Fear’, ‘Affectionate’ and ‘Lust’. We modified the contrastive learning framework MoCo (Momentum Contrast for Unsupervised Visual Representation Learning) to train it on our original dataset, achieving an accuracy of 43.2% against a baseline of 14%. We also trained this model on a second publicly available dataset, where it reached an accuracy of 48.46% against a baseline of 25%. We compared our unsupervised approach with a supervised model based on a ResNet50 architecture. This model, when tested on our dataset with the seven Panksepp labels, resulted in an accuracy of 74.32%. Full article
(This article belongs to the Special Issue Integrated Sensor Systems for Multi-modal Emotion Recognition)

20 pages, 487 KiB  
Article
On Implementing a Two-Step Interior Point Method for Solving Linear Programs
by Sajad Fathi Hafshejani, Daya Gaur and Robert Benkoczi
Algorithms 2024, 17(7), 303; https://doi.org/10.3390/a17070303 - 8 Jul 2024
Cited by 2 | Viewed by 1630
Abstract
A new two-step interior point method for solving linear programs is presented. The technique uses a convex combination of the auxiliary and central points to compute the search direction. To update the central point, we find the best value for the step size such that the feasibility condition holds. Since we use the information from the previous iteration to find the search direction, the inverse of the system is evaluated only once per iteration. A detailed empirical evaluation is performed on NETLIB instances, which compares two variants of the approach to the primal-dual log barrier interior point method. Results show that the proposed method is faster. The method reduces the number of iterations and CPU time(s) by 27% and 18%, respectively, on the NETLIB instances tested compared to the classical interior point algorithm. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
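The log-barrier machinery that this two-step method builds on can be illustrated with a tiny sketch. Everything below (the toy problem, step rule, and function names) is an illustrative assumption, not taken from the paper:

```python
# A minimal log-barrier / central-path sketch -- the classical interior-point
# idea only, NOT the paper's two-step method.
# Toy problem: minimize x subject to x >= 1.
# For barrier weight mu > 0, the central point minimizes
#     f_mu(x) = x - mu * log(x - 1),
# whose exact minimizer is x*(mu) = 1 + mu, so x*(mu) -> 1 as mu -> 0.

def newton_center(x, mu, tol=1e-12, max_iter=100):
    """Minimize f_mu by damped Newton steps, staying strictly feasible."""
    for _ in range(max_iter):
        grad = 1.0 - mu / (x - 1.0)        # f_mu'(x)
        hess = mu / (x - 1.0) ** 2         # f_mu''(x) > 0
        step = grad / hess
        # fraction-to-boundary rule: never cross the constraint x > 1
        if step > 0 and x - step <= 1.0:
            step = 0.99 * (x - 1.0)
        x -= step
        if abs(step) < tol:
            break
    return x

def barrier_solve(mu=1.0, shrink=0.1, rounds=8):
    x = 2.0                                # strictly feasible start
    for _ in range(rounds):
        x = newton_center(x, mu)           # re-center on the central path
        mu *= shrink                       # then tighten the barrier
    return x
```

Shrinking mu traces the central path toward the constrained optimum; the paper's actual contribution lies in how its search direction combines auxiliary and central points, which this sketch does not reproduce.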

18 pages, 2784 KiB  
Article
Optimization of Torrefaction Parameters Using Metaheuristic Approach
by Alok Dhaundiyal and Laszlo Toth
Energies 2024, 17(13), 3314; https://doi.org/10.3390/en17133314 - 5 Jul 2024
Viewed by 814
Abstract
The probabilistic technique was used to optimize the torrefaction parameters that indirectly influence the yield of end-products obtained through the pyrolysis of biomass. In the same pursuit, pine cones underwent thermal pre-treatment at 210 °C, 220 °C, 230 °C, 240 °C, and 250 °C in the presence of N2 gas at a flow rate of 0.7 L∙s−1, whereas the duration of the pre-treatment process was 5 min, 10 min, and 15 min at each temperature. To facilitate the processing of pine waste, a muffle furnace was improvised for pilot-scale testing. The thermal process used to carry out torrefaction was quasi-static. The average dynamic head of volatile gases inside the chamber was 1.04 m. The criteria for determining the optimal solution were based on calorific value, solid yield, energy consumption during the pre-treatment process, and ash handling. In absolute terms, time and temperature did not influence the statistical deviation in cellulose and hemicellulose decomposition after thermal pre-treatment. While considering ash content as a primal factor, thermal processing should be conducted for 5 min at 210 °C for the bounded operating conditions, which are similar to the operating conditions obtained experimentally. The optimal solid yield would be obtained if the thermal pre-treatment is performed at 250 °C for 5 min. The solution derived through a simulated annealing technique provided better convergence with the experimental dataset. Full article
(This article belongs to the Section K: State-of-the-Art Energy Related Technologies)
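For readers unfamiliar with the optimizer used here, the following is a minimal, generic simulated-annealing sketch. The objective, search interval, and cooling schedule are placeholders of our own, not the paper's torrefaction model:

```python
import math
import random

def simulated_annealing(f, lo, hi, t0=1.0, cooling=0.995, steps=4000, seed=42):
    """Generic simulated-annealing minimizer on the interval [lo, hi]."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_f = x, f(x)
    t = t0
    for _ in range(steps):
        # propose a random neighbour, clipped to the search interval
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
        delta = f(cand) - f(x)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if f(x) < best_f:
                best_x, best_f = x, f(x)
        t *= cooling          # geometric cooling schedule
    return best_x, best_f

# toy stand-in objective (NOT the paper's torrefaction cost criteria)
toy_cost = lambda x: (x - 3.0) ** 2
x_opt, f_opt = simulated_annealing(toy_cost, 0.0, 10.0)
```

The gradual cooling is what lets the method escape local minima early on and settle into a near-optimal solution late, which is the convergence behaviour the abstract reports.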

29 pages, 4234 KiB  
Article
Comparative Analysis of the Particle Swarm Optimization and Primal-Dual Interior-Point Algorithms for Transmission System Volt/VAR Optimization in Rectangular Voltage Coordinates
by Haltor Mataifa, Senthil Krishnamurthy and Carl Kriger
Mathematics 2023, 11(19), 4093; https://doi.org/10.3390/math11194093 - 27 Sep 2023
Cited by 1 | Viewed by 1691
Abstract
Optimal power flow (OPF) is one of the most widely studied problems in the field of operations research, as it applies to the optimal and efficient operation of the electric power system. Both the problem formulation and solution techniques have attracted significant research interest over the decades. A wide range of OPF problems have been formulated to cater for the various operational objectives of the power system and are mainly expressed either in polar or rectangular voltage coordinates. Many different solution techniques falling into the two main categories of classical/deterministic optimization and heuristic/non-deterministic optimization techniques have been explored in the literature. This study considers the Volt/VAR optimization (VVO) variant of the OPF problem formulated in rectangular voltage coordinates, which is something of a departure from the majority of the studies, which tend to use the polar coordinate formulation. The heuristic particle swarm optimization (PSO) and the classical primal-dual interior-point method (PDIPM) are applied to the solution of the VVO problem and a comparative analysis of the relative performance of the two algorithms for this problem is presented. Four case studies based on the 6-bus, IEEE 14-bus, 30-bus, and 118-bus test systems are presented. The comparative performance analysis reveals that the two algorithms have complementary strengths, when evaluated on the basis of the solution quality and computational efficiency. Particularly, the PSO algorithm achieves greater power loss minimization, whereas the PDIPM exhibits greater speed of convergence (and, thus, better computational efficiency) relative to the PSO algorithm, particularly for higher-dimensional problems. 
An additional distinguishing characteristic of the proposed solution is that it incorporates the Newton–Raphson load flow computation, also formulated in rectangular voltage coordinates, which adds to the efficiency and effectiveness of the presented solution method. Full article
(This article belongs to the Special Issue Control, Optimization and Intelligent Computing in Energy)
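The PSO half of the comparison can be sketched generically. The sphere objective below merely stands in for the power-loss objective, and all parameter values are illustrative assumptions, not the paper's VVO setup:

```python
import random

def pso_minimize(f, dim=2, n_particles=30, iters=300, seed=1,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Bare-bones particle swarm optimizer (toy sketch, not the paper's VVO solver)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # per-particle best position
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)         # stand-in for network losses
best, loss = pso_minimize(sphere)
```

The population-based search explains the trade-off the abstract reports: PSO explores more of the solution space (better loss minimization) at the cost of many more objective evaluations than a Newton-based PDIPM.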

34 pages, 686 KiB  
Article
A Non-Archimedean Interior Point Method and Its Application to the Lexicographic Multi-Objective Quadratic Programming
by Lorenzo Fiaschi and Marco Cococcioni
Mathematics 2022, 10(23), 4536; https://doi.org/10.3390/math10234536 - 30 Nov 2022
Cited by 5 | Viewed by 2037
Abstract
This work presents a generalized implementation of the infeasible primal-dual interior point method (IPM) achieved by the use of non-Archimedean values, i.e., infinite and infinitesimal numbers. The extended version, called here the non-Archimedean IPM (NA-IPM), is proved to converge in polynomial time to a global optimum and to be able to manage infeasibility and unboundedness transparently, i.e., without considering them as corner cases: by means of a mild embedding (addition of two variables and one constraint), the NA-IPM implicitly and transparently manages their possible presence. Moreover, the new algorithm is able to solve a wider variety of linear and quadratic optimization problems than its standard counterpart. Among them, the lexicographic multi-objective one deserves particular attention, since the NA-IPM overcomes the issues that standard techniques (such as scalarization or preemptive approach) have. To support the theoretical properties of the NA-IPM, the manuscript also shows four linear and quadratic non-Archimedean programming test cases where the effectiveness of the algorithm is verified. This also stresses that the NA-IPM is not just a mere symbolic or theoretical algorithm but actually a concrete numerical tool, paving the way for its use in real-world problems in the near future. Full article
(This article belongs to the Special Issue Mathematical Modeling and Optimization)

18 pages, 4080 KiB  
Article
Magnetotelluric Regularized Inversion Based on the Multiplier Method
by Deshan Feng, Xuan Su, Xun Wang, Siyuan Ding, Cen Cao, Shuo Liu and Yi Lei
Minerals 2022, 12(10), 1230; https://doi.org/10.3390/min12101230 - 28 Sep 2022
Cited by 2 | Viewed by 2044
Abstract
Magnetotellurics (MT) is an important geophysical method for resource exploration and mineral evaluation. As a direct and effective form of data interpretation, MT inversion is usually considered to be a penalty-function constraint-based optimization strategy. However, conventional MT inversion involves a large number of calculations in penalty terms and causes difficulties in selecting exact regularization factors. For this reason, we propose a multiplier-based MT inversion scheme, implemented by introducing the incremental Lagrangian function. This avoids solving the primal-dual subproblem of the penalty function exactly and further reduces the sensitivity to the regularization factors, thus improving convergence efficiency and accelerating the optimization calculation of the inverse algorithm. In this study, two models were used to verify the performance of the multiplier method in regularized MT inversion. The first experiment, with an undulating two-layer model of metal ore, verified that the multiplier method could effectively avoid the MT inversion falling into local minima. The second experiment, with a wedge model, showed that the multiplier method is robust, which expands the admissible range of the regularization factors and eases their selection. We tested the feasibility of the multiplier method on field data and compared its results with those of conventional inversion methods in order to verify its accuracy. Full article
(This article belongs to the Special Issue Electromagnetic Exploration: Theory, Methods and Applications)
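The multiplier (augmented Lagrangian) update at the heart of such schemes can be shown on a one-dimensional toy problem; nothing below is specific to MT inversion, and the problem and constants are our own illustration:

```python
# Method of multipliers (augmented Lagrangian) on a toy problem -- a sketch
# of the general idea only, not the paper's magnetotelluric inversion:
#   minimize f(x) = (x - 2)^2   subject to   h(x) = x - 1 = 0.
# Augmented Lagrangian: L(x, lam) = f(x) + lam*h(x) + (rho/2)*h(x)^2.

def multiplier_method(rho=10.0, iters=25):
    lam = 0.0
    x = 0.0
    for _ in range(iters):
        # inner step: minimize L(x, lam) in x; quadratic, so closed form:
        #   dL/dx = 2*(x - 2) + lam + rho*(x - 1) = 0
        x = (4.0 - lam + rho) / (2.0 + rho)
        # outer step: steepest-ascent update of the multiplier
        lam += rho * (x - 1.0)
    return x, lam

x, lam = multiplier_method()   # x ≈ 1 (feasible), lam ≈ 2 (the KKT multiplier)
```

Unlike a pure penalty method, rho can stay moderate here: the multiplier update absorbs the constraint mismatch, which is why the approach is less sensitive to the exact penalty/regularization weight.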

14 pages, 786 KiB  
Article
Portfolio Insurance through Error-Correction Neural Networks
by Vladislav N. Kovalnogov, Ruslan V. Fedorov, Dmitry A. Generalov, Andrey V. Chukalin, Vasilios N. Katsikis, Spyridon D. Mourtas and Theodore E. Simos
Mathematics 2022, 10(18), 3335; https://doi.org/10.3390/math10183335 - 14 Sep 2022
Cited by 22 | Viewed by 2228
Abstract
Minimum-cost portfolio insurance (MCPI) is a well-known investment strategy that tries to limit the losses a portfolio may incur as stocks decrease in price, without requiring the portfolio manager to sell those stocks. In this research, we define and study the time-varying MCPI problem as a time-varying linear programming problem. More precisely, using real-world datasets, three different error-correction neural networks are employed to address this financial time-varying linear programming problem in continuous time. These neural network solvers are the zeroing neural network (ZNN), the linear-variational-inequality primal-dual neural network (LVI-PDNN), and the simplified LVI-PDNN (S-LVI-PDNN). The neural network solvers are tested using real-world data on portfolios of up to 20 stocks, and the results show that they are capable of solving the financial problem efficiently, in some cases more than five times faster than traditional methods, though their accuracy declines as the size of the portfolio increases. This demonstrates the speed and accuracy of neural network solvers, showing their superiority over traditional methods in moderate-size portfolios. To disseminate the outcomes of this research, we created two MATLAB repositories for the interested user, publicly accessible on GitHub. Full article

12 pages, 2260 KiB  
Article
An Algorithm for Task Allocation and Planning for a Heterogeneous Multi-Robot System to Minimize the Last Task Completion Time
by Abhishek Patil, Jungyun Bae and Myoungkuk Park
Sensors 2022, 22(15), 5637; https://doi.org/10.3390/s22155637 - 28 Jul 2022
Cited by 8 | Viewed by 2855
Abstract
This paper proposes an algorithm that provides operational strategies for multiple heterogeneous mobile robot systems utilized in many real-world applications, such as deliveries, surveillance, search and rescue, monitoring, and transportation. Specifically, the authors focus on developing an algorithm that solves a min–max multiple depot heterogeneous asymmetric traveling salesperson problem (MDHATSP). The algorithm is designed based on a primal–dual technique to operate given multiple heterogeneous robots located at distinctive depots by finding a tour for each robot such that all the given targets are visited by at least one robot while minimizing the last task completion time. Building on existing work, the newly developed algorithm can solve more generalized problems, including asymmetric cost problems with a min–max objective. Though producing optimal solutions requires high computational loads, the authors aim to find reasonable sub-optimal solutions within a short computation time. The algorithm was repeatedly tested in a simulation with varying problem sizes to verify its effectiveness. The computational results show that the algorithm can produce reliable solutions to apply in real-time operations within a reasonable time. Full article
(This article belongs to the Special Issue Sensors for Mobile Robot)

14 pages, 1569 KiB  
Article
GPU-Accelerated PD-IPM for Real-Time Model Predictive Control in Integrated Missile Guidance and Control Systems
by Sanghyeon Lee, Heoncheol Lee, Yunyoung Kim, Jaehyun Kim and Wonseok Choi
Sensors 2022, 22(12), 4512; https://doi.org/10.3390/s22124512 - 14 Jun 2022
Cited by 14 | Viewed by 3443
Abstract
This paper addresses the problem of real-time model predictive control (MPC) in the integrated guidance and control (IGC) of missile systems. When the primal-dual interior point method (PD-IPM), which is a convex optimization method, is used as an optimization solution for the MPC, the real-time performance of PD-IPM degenerates due to the elevated computation time in checking the Karush–Kuhn–Tucker (KKT) conditions in PD-IPM. This paper proposes a graphics processing unit (GPU)-based method to parallelize and accelerate PD-IPM for real-time MPC. The real-time performance of the proposed method was tested and analyzed on a widely-used embedded system. The comparison results with the conventional PD-IPM and other methods showed that the proposed method improved the real-time performance by reducing the computation time significantly. Full article
(This article belongs to the Special Issue Recent Trends and Advances in SLAM with Multi-Robot Systems)

19 pages, 1552 KiB  
Article
Equilibrium Pricing with Duality-Based Method: Approach for Market-Oriented Capacity Remuneration Mechanism
by Perica Ilak, Lin Herenčić, Ivan Rajšl, Sara Raos and Željko Tomšić
Energies 2021, 14(3), 567; https://doi.org/10.3390/en14030567 - 22 Jan 2021
Cited by 4 | Viewed by 2267
Abstract
The crucial design elements of a good capacity remuneration mechanism are market orientation, insurance of long-term power system adequacy, and optimal cross-border generation capacity utilization. With these design elements in mind, this research aims to propose a financially fair pricing mechanism that will guarantee enough new capacity and will not present state aid. The proposed capacity remuneration mechanism is an easy-to-implement linear programming problem presented in its primal and dual form. The shadow prices in the primal problem and the dual variables in the dual problem are used to calculate the prices of firm capacity, which is the capacity needed for long-term power system adequacy under a capacity remuneration mechanism. In order to test whether the mechanism ensures sufficient new capacity under fair prices, it is tested on the European Network of Transmission System Operators for Electricity (ENTSO-E) regional block consisting of Austria, Slovenia, Hungary, and Croatia, with the simulation conducted for a period of one year at a one-hour resolution and for different scenarios of credible critical events from a standpoint of security of supply; different amounts of newly installed firm capacity; different short-run marginal costs of newly installed firm capacity; and different capacity factors of newly installed firm capacity. Test data such as electricity prices and electricity load refer to the year 2018. The results show that the worst-case scenario for Croatia is an isolated system scenario with dry hydrology, which results in high values of the indicators expected energy not served (EENS), loss of load expectation (LOLE), and loss of load probability (LOLP) for Croatia. Therefore, new capacity of several hundred MW is needed to stabilize these indicators at lower values.
Price for that capacity depends on the range of installed firm capacity and should be in range of 1000–7000 €/MW/year for value of lost load (VoLL) in Croatia of 1000 €/MWh and 3000–22,000 €/MW/year for VoLL of 3100 €/MWh that correlates with prices from already established capacity markets. The presented methodology can assist policymakers, regulators, and market operators when determining capacity remuneration mechanism rules and both capacity and price caps. On the other hand, it can help capacity market participants to prepare the most suitable and near-optimal bids on capacity markets. Full article
(This article belongs to the Section C: Energy Economics and Policy)

13 pages, 836 KiB  
Article
Effect of Breed Types and Castration on Carcass Characteristics of Boer and Large Frame Indigenous Veld Goats of Southern Africa
by Gertruida L. van Wyk, Louwrens C. Hoffman, Phillip E. Strydom and Lorinda Frylinck
Animals 2020, 10(10), 1884; https://doi.org/10.3390/ani10101884 - 15 Oct 2020
Cited by 11 | Viewed by 4160
Abstract
Weaner male Boer Goats (BG; n = 36; 21 bucks and 15 wethers) and large frame Indigenous Veld Goats (IVG; n = 41; 21 bucks and 20 wethers) were raised on hay and natural grass ad libitum and the recommended amount of commercial pelleted diet to a live weight between 30 and 35 kg. Carcass quality characteristics (live weight, carcass weights, dressing %, chilling loss and eye muscle area) were measured. The right sides of the carcasses were divided into wholesale cuts and dissected into subcutaneous fat, meat and bone. Large frame Indigenous Veld Goat (IVG) wethers were slightly lighter than the IVG bucks with no significant difference observed between BG. Wethers compared to bucks had higher dressing %, subcutaneous fat % in all primal cuts, intramuscular fat %, kidney fat % and, overall, slightly less bone %. Some breed–wether interactions were noticed: IVG wethers were slightly lighter than the IVG bucks, but the IVG bucks tended to produce higher % meat compared to other test groups. Judged on the intramuscular fat % characteristics, it seems as if wethers should produce juicier and more flavorsome meat compared to bucks. Full article
(This article belongs to the Collection Carcass Composition and Meat Quality of Small Ruminants)

21 pages, 989 KiB  
Article
Parallel Matrix-Free Higher-Order Finite Element Solvers for Phase-Field Fracture Problems
by Daniel Jodlbauer, Ulrich Langer and Thomas Wick
Math. Comput. Appl. 2020, 25(3), 40; https://doi.org/10.3390/mca25030040 - 7 Jul 2020
Cited by 19 | Viewed by 4164
Abstract
Phase-field fracture models lead to variational problems that can be written as a coupled variational equality and inequality system. Numerically, such problems can be treated with Galerkin finite elements and primal-dual active set methods. Specifically, low-order and high-order finite elements may be employed, where, for the latter, only a few studies exist to date. The most time-consuming part of the discrete version of the primal-dual active set (semi-smooth Newton) algorithm consists of solving the changing linear systems that arise at each semi-smooth Newton step. We propose a new parallel matrix-free monolithic multigrid preconditioner for these systems. We provide two numerical tests and discuss the performance of the parallel solver proposed in the paper. Furthermore, we compare our new preconditioner with a block-AMG preconditioner available in the literature. Full article
(This article belongs to the Special Issue High-Performance Computing 2020)

12 pages, 251 KiB  
Article
On the Number of Witnesses in the Miller–Rabin Primality Test
by Shamil Talgatovich Ishmukhametov, Bulat Gazinurovich Mubarakov and Ramilya Gakilevna Rubtsova
Symmetry 2020, 12(6), 890; https://doi.org/10.3390/sym12060890 - 1 Jun 2020
Cited by 9 | Viewed by 4595
Abstract
In this paper, we investigate the popular Miller–Rabin primality test and study its effectiveness. The ability of the test to determine prime integers is based on the difference of the number of primality witnesses for composite and prime integers. Let W(n) denote the set of all primality witnesses for odd n. By Rabin’s theorem, if n is prime, then each positive integer a < n is a primality witness for n. For composite n, the power of W(n) is less than or equal to φ(n)/4, where φ(n) is Euler’s totient function. We derive new exact formulas for the power of W(n) depending on the number of factors of tested integers. In addition, we study the average probability of errors in the Miller–Rabin test and show that it decreases when the length of tested integers increases. This allows us to reduce estimations for the probability of the Miller–Rabin test errors and increase its efficiency. Full article
(This article belongs to the Special Issue Number Theory and Symmetry)
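The witness set W(n) studied in the paper can be enumerated by brute force for small n. A sketch, with the caveat that the function names are ours and "witness" follows the paper's usage (a base for which n passes the test):

```python
import math

def passes_miller_rabin(n, a):
    """True iff odd n > 2 passes one Miller–Rabin round for base a,
    i.e. a is a 'primality witness' for n in the paper's terminology."""
    d, s = n - 1, 0
    while d % 2 == 0:       # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)        # modular exponentiation
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def witness_set(n):
    """Brute-force W(n): all bases 1 <= a < n for which n passes."""
    return [a for a in range(1, n) if passes_miller_rabin(n, a)]

def euler_phi(n):
    """Euler's totient, by brute force (fine for small n)."""
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

# If n is prime, every base 1 <= a < n is a witness (Rabin's theorem) ...
assert len(witness_set(97)) == 96
# ... while a composite n has at most phi(n)/4 witnesses.
assert len(witness_set(91)) <= euler_phi(91) // 4
```

For n = 91 = 7 · 13 the bound is attained: |W(91)| = 18 = φ(91)/4, which is why repeating the test with independent random bases is needed to drive the error probability down.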