Search Results (267)

Search Parameters:
Keywords = arithmetic multiplication

22 pages, 3598 KB  
Article
Research on Denoising Methods for Magnetocardiography Signals in a Non-Magnetic Shielding Environment
by Biao Xing, Xie Feng and Binzhen Zhang
Sensors 2025, 25(19), 6096; https://doi.org/10.3390/s25196096 - 3 Oct 2025
Abstract
Magnetocardiography (MCG) offers a noninvasive method for early screening and precise localization of cardiovascular diseases by measuring picotesla-level weak magnetic fields induced by cardiac electrical activity. However, in unshielded magnetic environments, geomagnetic disturbances, power-frequency electromagnetic interference, and physiological/motion artifacts can significantly overwhelm effective magnetocardiographic components. To address this challenge, this paper systematically constructs an integrated denoising framework, termed “AOA-VMD-WT”. In this approach, the Arithmetic Optimization Algorithm (AOA) adaptively optimizes the key parameters (decomposition level K and penalty factor α) of Variational Mode Decomposition (VMD). The decomposed components are then regularized based on their modal center frequencies: components with frequencies ≥50 Hz are directly suppressed; those with frequencies <50 Hz undergo wavelet threshold (WT) denoising; and those with frequencies <0.5 Hz undergo baseline correction. The purified signal is subsequently reconstructed. For quantitative evaluation, we designed performance indicators including QRS amplitude retention rate, high/low frequency suppression amount, and spectral entropy. Further comparisons are made with baseline methods such as FIR and wavelet soft/hard thresholds. Experimental results on multiple sets of measured MCG data demonstrate that the proposed method achieves an average improvement of approximately 8–15 dB in high-frequency suppression, 2–8 dB in low-frequency suppression, and a decrease in spectral entropy ranging from 0.1 to 0.6 without compromising QRS amplitude. Additionally, the parameter optimization exhibits high stability. These findings suggest that the proposed framework provides engineerable algorithmic support for stable MCG measurement in ordinary clinic scenarios. Full article
(This article belongs to the Section Biomedical Sensors)
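The frequency-routing rule at the heart of AOA-VMD-WT (suppress modes at or above 50 Hz, wavelet-threshold the mid band, baseline-correct below 0.5 Hz) can be sketched compactly. The fragment below is a minimal illustration, not the authors' code: the VMD modes and their center frequencies are assumed already computed, a plain time-domain soft threshold stands in for wavelet-domain thresholding, and baseline correction is reduced to mean removal.

```python
import numpy as np

def soft_threshold(x, thr):
    """Elementwise soft threshold, the core operation of WT denoising."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def route_modes(modes, center_freqs, thr=0.05):
    """Recombine VMD modes using the paper's frequency rules:
    >= 50 Hz: suppress entirely; < 0.5 Hz: treat as baseline drift;
    otherwise: apply (soft-threshold stand-in for) wavelet denoising."""
    out = np.zeros_like(modes[0])
    for m, f in zip(modes, center_freqs):
        if f >= 50.0:
            continue                       # power-frequency band: drop
        elif f < 0.5:
            out += m - np.mean(m)          # crude baseline correction
        else:
            out += soft_threshold(m, thr)  # denoise mid-band components
    return out
```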

25 pages, 6705 KB  
Article
Machine Learning-Enhanced Monitoring and Assessment of Urban Drinking Water Quality in North Bhubaneswar, Odisha, India
by Kshyana Prava Samal, Rakesh Ranjan Thakur, Alok Kumar Panda, Debabrata Nandi, Alok Kumar Pati, Kumarjeeb Pegu and Bojan Đurin
Limnol. Rev. 2025, 25(3), 44; https://doi.org/10.3390/limnolrev25030044 - 12 Sep 2025
Viewed by 1105
Abstract
Access to clean drinking water is crucial for any region’s social and economic growth. However, rapid urbanization and industrialization have significantly deteriorated water quality, posing severe pollution threats from domestic, agricultural, and industrial sources. This study presents an innovative framework for assessing water quality in North Bhubaneswar, integrating the Water Quality Index (WQI) with statistical analysis, geospatial technologies, and machine learning models. The WQI, calculated using the Weighted Arithmetic Index method, provides a single composite value representing overall water quality based on several key physicochemical parameters. To evaluate potable water quality across 21 wards in the northern zone, several key parameters were monitored, including pH, electrical conductivity (EC), dissolved oxygen (DO), hardness, chloride, total dissolved solids (TDSs), and biochemical oxygen demand (BOD). The Weighted Arithmetic WQI method was employed to determine overall water quality, which ranged from excellent to good. Furthermore, Principal Component Analysis (PCA) revealed a strong positive correlation (r > 0.6) between pH, conductivity, hardness, and alkalinity. To enhance the accuracy and reliability of water quality assessment, multiple machine learning models, namely Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naïve Bayes (NB), were applied to classify water quality based on these parameters. Among them, the Decision Tree (DT) and Random Forest (RF) models demonstrated the highest precision (91.8% and 92.7%, respectively) and overall accuracy (91.7%), making them the most effective at predicting water quality within this integrated WQI, machine learning, and statistical framework.
The study emphasizes the importance of continuous water quality monitoring and offers data-driven recommendations to ensure sustainable access to clean drinking water in North Bhubaneswar. Full article
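For readers unfamiliar with the Weighted Arithmetic WQI named above, a minimal sketch follows. The weighting scheme (unit weights inversely proportional to each parameter's permissible standard Si, quality ratings Qi scaled to 100 at the standard) is the textbook form of the method; the specific standards and ideal values used in the study are not reproduced here and are assumptions of this example.

```python
def weighted_arithmetic_wqi(values, standards, ideals=None):
    """Weighted Arithmetic WQI: WQI = sum(Qi*Wi) / sum(Wi), where
    Wi = K/Si with K = 1/sum(1/Si), and Qi = 100*(Vi - V0)/(Si - V0)."""
    n = len(values)
    if ideals is None:
        ideals = [0.0] * n  # ideal value is 0 for most parameters (7.0 for pH)
    k = 1.0 / sum(1.0 / s for s in standards)
    weights = [k / s for s in standards]          # note: these sum to 1
    q = [100.0 * (v - v0) / (s - v0)
         for v, s, v0 in zip(values, standards, ideals)]
    return sum(qi * wi for qi, wi in zip(q, weights)) / sum(weights)
```

Lower WQI values indicate better quality; the excellent/good bands reported in the abstract correspond to the low end of this scale.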

16 pages, 3123 KB  
Article
Numerical Modeling of Tissue Irradiation in Cylindrical Coordinates Using the Fuzzy Finite Pointset Method
by Anna Korczak
Appl. Sci. 2025, 15(18), 9923; https://doi.org/10.3390/app15189923 - 10 Sep 2025
Viewed by 199
Abstract
This study focuses on the numerical analysis of heat transfer in biological tissue. The proposed model is formulated using the Pennes equation for a two-dimensional cylindrical domain. The tissue undergoes laser irradiation, where internal heat sources are determined based on the Beer–Lambert law. Moreover, key parameters—such as the perfusion rate and effective scattering coefficient—are modeled as functions dependent on tissue damage. In addition, a fuzzy heat source associated with magnetic nanoparticles is also incorporated into the model to account for magnetothermal effects. A novel aspect of this work is the introduction of uncertainty in selected model parameters by representing them as triangular fuzzy numbers. Consequently, the entire Finite Pointset Method (FPM) framework is extended to operate with fuzzy-valued quantities, which—to the best of our knowledge—has not been previously applied in two-dimensional thermal modeling of biological tissues. The numerical computations are carried out using the fuzzy-adapted FPM approach. All calculations follow fuzzy arithmetic rules, applied via α-cuts. This fuzzy formulation inherently captures the variability of uncertain parameters, effectively replacing the need for a traditional sensitivity analysis. As a result, the need for multiple simulations over a wide range of input values is eliminated. The findings, discussed in the final section, demonstrate that this extended FPM formulation is a viable and effective tool for analyzing heat transfer processes under uncertainty, with an evaluation of α-cut widths and the influence of the degree of fuzziness on the results also carried out. Full article
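The α-cut machinery underlying the fuzzy FPM can be illustrated compactly. A triangular fuzzy number (a, b, c) yields, at membership level α, the closed interval computed below, and fuzzy arithmetic then reduces to interval arithmetic on those cuts. This sketch is generic textbook fuzzy arithmetic, not tied to the paper's FPM implementation.

```python
def alpha_cut(tfn, alpha):
    """alpha-cut of a triangular fuzzy number (a, b, c): the closed interval
    [a + alpha*(b - a), c - alpha*(c - b)]. alpha = 1 collapses to the peak b."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_mul(x, y):
    """Product of two intervals under standard interval arithmetic:
    take the min and max over all endpoint products."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))
```

Evaluating a model over a ladder of α levels (e.g., 0, 0.25, ..., 1) produces nested output intervals whose widths quantify how input fuzziness propagates, which is the sensitivity information the abstract refers to.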

14 pages, 4655 KB  
Article
Evaluation of Surface Roughness with Reduced Data of BRDF Pattern
by Jui-Hsiang Yen, Zih-Ying Fang and Cheng-Huan Chen
Appl. Sci. 2025, 15(17), 9850; https://doi.org/10.3390/app15179850 - 8 Sep 2025
Viewed by 1273
Abstract
Traditional non-destructive measurement of surface roughness exploits complete data of bidirectional reflective distribution function (BRDF). The instrument is normally bulky and the process should be conducted off-line, hence it is time-consuming. If only a part of BRDF data can be sufficient to determine the surface roughness, both the measurement equipment and processing time can be significantly reduced. This paper proposes a compact device capable of detecting multiple angular intensities of reflective scattering with different incident angles from different spatial points of the target object at the same time. It is used to evaluate the surface roughness of a standard specimen with arithmetic mean roughness (Ra) values ranging from 0.13 µm to 2.1 µm. The case of measuring two spatial points of the specimen is used for illustrating the calibration procedure of the device and how the data were searched and processed to increase the reliability and robustness for evaluating the surface roughness with reduced data of BRDF. Similar methodologies can be applicable for other real-time detection methods based on the scattering process. Full article
(This article belongs to the Topic Advances in Non-Destructive Testing Methods, 3rd Edition)

16 pages, 2206 KB  
Article
Environmental Factors Driving Carbonate Distribution in Marine Sediments in the Canary Current Upwelling System
by Hasnaa Nait-Hammou, Khalid El Khalidi, Ahmed Makaoui, Melissa Chierici, Chaimaa Jamal, Nezha Mejjad, Otmane Khalfaoui, Fouad Salhi, Mohammed Idrissi and Bendahhou Zourarah
J. Mar. Sci. Eng. 2025, 13(9), 1709; https://doi.org/10.3390/jmse13091709 - 4 Sep 2025
Viewed by 395
Abstract
This study illustrates the complex interaction between environmental parameters and carbonate distribution in marine sediments along the Tarfaya–Boujdour coastline (26–28° N) of Northwest Africa. Analysis of 21 surface sediment samples and their associated bottom water properties (salinity, temperature, dissolved oxygen, nutrients) reveals CaCO3 content ranging from 16.8 wt.% to 60.5 wt.%, with concentrations above 45 wt.% occurring in multiple stations, especially in nearshore deposits. Mineralogy indicates a general decrease in quartz, with an arithmetic mean and standard deviation of 52.5 wt.% ± 19.8 towards the open sea, and an increase in carbonate minerals (calcite ≤ 24%, aragonite ≤ 10%) with depth. Sediments are predominantly composed of fine sand (78–99%), poorly classified, with gravel content reaching 6.7% in energetic coastal stations. An inverse relationship between organic carbon (0.63–3.23 wt.%) and carbonates is observed in upwelling zones, correlated with nitrate concentrations exceeding 19 μmol/L. Hydrological gradients show temperatures from 12.41 °C (offshore) to 21.62 °C (inshore), salinity from 35.64 to 36.81 psu and dissolved oxygen from 2.06 to 4.21 mL/L. The weak correlation between carbonates and depth (r = 0.10) reflects the balance between three processes: biogenic production stimulated by upwelling, dilution by Saharan terrigenous inputs, and hydrodynamic sorting redistributing bioclasts. These results underline the need for models integrating hydrology, mineralogy and hydrodynamics to predict carbonate dynamics in desert margins under upwelling. Full article
(This article belongs to the Section Geological Oceanography)

8 pages, 921 KB  
Proceeding Paper
Design of Complementary Metal–Oxide–Semiconductor Encoder/Decoder with Compact Circuit Structure for Booth Multiplier
by Yu-Nsin Wang and Yu-Cherng Hung
Eng. Proc. 2025, 103(1), 21; https://doi.org/10.3390/engproc2025103021 - 1 Sep 2025
Viewed by 368
Abstract
Multipliers are crucial components in digital processing and the arithmetic logic unit (ALU) of central processing unit (CPU) design. As the data bit length increases, the number of partial products in the multiplication process increases, resulting in an increased summation time for the partial products. Consequently, the speed of the multiplier circuit is adversely affected by increased time delays. In this article, we present a combined radix-4 Booth encoding module that employs metal–oxide–semiconductor (MOS) transistors that share common control signals to reduce the transistor count. In HSPICE simulations, the functionality of the proposed circuit architecture was verified, and the number of transistors used was successfully reduced. Full article
(This article belongs to the Proceedings of The 8th Eurasian Conference on Educational Innovation 2025)
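Radix-4 Booth recoding, which the encoder/decoder above implements in hardware, halves the number of partial products by mapping overlapping triplets of multiplier bits to digits in {-2, -1, 0, +1, +2}. A behavioral Python sketch of the recoding (the paper's contribution is the compact transistor-level circuit, not this algorithm):

```python
def booth_radix4_digits(y, nbits):
    """Recode an nbits-bit two's-complement multiplier into radix-4 Booth
    digits. Digit i is -2*y[2i+1] + y[2i] + y[2i-1], with implicit y[-1] = 0.
    The digits reconstruct the multiplier as sum(d_i * 4**i)."""
    assert nbits % 2 == 0
    bits = [(y >> i) & 1 for i in range(nbits)]
    digits, prev = [], 0
    for i in range(0, nbits, 2):
        digits.append(-2 * bits[i + 1] + bits[i] + prev)
        prev = bits[i + 1]
    return digits
```

Each digit selects a partial product from {0, ±x, ±2x}, all of which are obtainable by shifting and negating the multiplicand, which is why the recoding shortens the summation tree.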

16 pages, 4762 KB  
Article
ACR: Adaptive Confidence Re-Scoring for Reliable Answer Selection Among Multiple Candidates
by Eunhye Jeong and Yong Suk Choi
Appl. Sci. 2025, 15(17), 9587; https://doi.org/10.3390/app15179587 - 30 Aug 2025
Viewed by 522
Abstract
With the improved reasoning capabilities of large language models (LLMs), their applications have rapidly expanded across a wide range of tasks. In recent question answering tasks, performance gains have been achieved through Self-Consistency, where LLMs generate multiple reasoning paths and determine the final answer via majority voting. However, this approach can fail when the correct answer is generated but does not appear frequently enough to be selected, highlighting its vulnerability to inconsistent generations. To address this, we propose Adaptive Confidence Re-scoring (ACR)—a method that adaptively evaluates and re-scores candidate answers to select the most trustworthy one when LLMs fail to generate consistent reasoning. Experiments on arithmetic and logical reasoning benchmarks show that ACR maintains or improves answer accuracy while significantly reducing inference cost. Compared to existing verification methods such as FOBAR, ACR reduces the number of inference calls by up to 95%, while improving inference efficiency—measured as accuracy gain per inference call—by a factor of 2× to 17×, depending on the dataset and model. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
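Self-Consistency and the failure mode ACR targets can be sketched in a few lines. The margin test and the external score table below are illustrative stand-ins for ACR's adaptive re-scoring, whose exact form the abstract does not specify.

```python
from collections import Counter

def self_consistency(answers, scores=None, margin=2):
    """Majority vote over sampled answers (Self-Consistency). When the vote is
    not decisive (top count within `margin` of the runner-up) and re-scores are
    available, pick the candidate with the highest confidence instead -- the
    spirit of ACR's re-scoring of candidates under inconsistent generations."""
    counts = Counter(answers).most_common()
    if len(counts) == 1 or counts[0][1] - counts[1][1] >= margin:
        return counts[0][0]          # consistent generations: trust the vote
    if scores is None:
        return counts[0][0]          # no re-scorer: plain majority voting
    return max(set(answers), key=lambda a: scores[a])
```

Because re-scoring is invoked only on inconsistent cases, most questions cost nothing extra, which is consistent with the large reduction in inference calls reported above.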

22 pages, 445 KB  
Article
Design of Real-Time Gesture Recognition with Convolutional Neural Networks on a Low-End FPGA
by Rui Policarpo Duarte, Tiago Gonçalves, Gustavo Jacinto, Paulo Flores and Mário Véstias
Electronics 2025, 14(17), 3457; https://doi.org/10.3390/electronics14173457 - 29 Aug 2025
Viewed by 452
Abstract
Hand gesture recognition is used in human–computer interaction, with multiple applications in assistive technologies, virtual reality, and smart systems. While vision-based methods are commonly employed, they are often computationally intensive, sensitive to environmental conditions, and raise privacy concerns. This work proposes a hardware/software co-optimized system for real-time hand gesture recognition using accelerometer data, designed for a portable, low-cost platform. A Convolutional Neural Network from TinyML is implemented on a Xilinx Zynq-7000 SoC-FPGA, utilizing fixed-point arithmetic to minimize computational complexity while maintaining classification accuracy. Additionally, combined architectural optimizations, including pipelining and loop unrolling, are applied to enhance processing efficiency. The final system achieves a 62× speedup over an unoptimized floating-point implementation while reducing power consumption, making it suitable for embedded and battery-powered applications. Full article
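The fixed-point arithmetic mentioned above replaces floating-point multiply-accumulate with integer operations plus a shift. A minimal Q-format sketch (the FPGA design's actual word lengths and rounding mode are not stated in the abstract; 8 fractional bits and truncation are assumptions of this example):

```python
def to_fixed(x, frac_bits=8):
    """Quantize a float to a fixed-point integer with frac_bits fractional
    bits (round to nearest)."""
    return int(round(x * (1 << frac_bits)))

def fixed_mul(a, b, frac_bits=8):
    """Fixed-point multiply: the raw product carries 2*frac_bits fractional
    bits, so shift right to renormalize (truncation, as cheap hardware does)."""
    return (a * b) >> frac_bits

def to_float(x, frac_bits=8):
    """Convert a fixed-point integer back to a float for inspection."""
    return x / (1 << frac_bits)
```

On an FPGA each such multiply maps to a DSP slice and a wire shift, which is where the speedup over a software floating-point baseline comes from.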

30 pages, 1703 KB  
Article
A Three-Stage Stochastic–Robust Scheduling for Oxy-Fuel Combustion Capture Involved Virtual Power Plants Considering Source–Load Uncertainties and Carbon Trading
by Jiahong Wang, Xintuan Wang and Bingkang Li
Sustainability 2025, 17(16), 7354; https://doi.org/10.3390/su17167354 - 14 Aug 2025
Viewed by 466
Abstract
Driven by the “dual carbon” goal, virtual power plants (VPPs) are the core vehicle for integrating distributed energy resources, but the multiple uncertainties in wind power, electricity/heat load, and electricity price, coupled with the impact of carbon-trading cost, make it difficult for traditional scheduling methods to balance the robustness and economy of VPPs. Therefore, this paper proposes an oxy-fuel combustion capture (OCC)-VPP architecture, integrating an OCC unit to improve the energy efficiency of the system through the “electricity-oxygen-carbon” cycle. Ten typical scenarios are generated by Latin hypercube sampling and K-means clustering to describe the uncertainties of source and load probability distribution, combined with the polyhedral uncertainty set to delineate the boundary of source and load fluctuations, and the stepped carbon-trading mechanism is introduced to quantify the cost of carbon emission. Then, a three-stage stochastic–robust scheduling model is constructed. The simulation based on the arithmetic example of OCC-VPP in North China shows that (1) OCC-VPP significantly improves the economy through the synergy of electric–hydrogen production and methanation (52% of hydrogen is supplied with heat and 41% is methanated), and the cost of carbon sequestration increases with the prediction error, but the carbon benefit of stepped carbon trading is stabilized at the base price of 320 DKK/ton; (2) when the uncertainty is increased from 0 to 18, the total cost rises by 45%, and the cost of purchased gas increases by the largest amount, and the cost of energy abandonment increases only by 299.6 DKK, which highlights the smoothing effect of energy storage; (3) the proposed model improves the solution speed by 70% compared with stochastic optimization, and reduces cost by 4.0% compared with robust optimization, which balances economy and robustness efficiently. Full article
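The stepped (tiered) carbon-trading mechanism referenced above prices emissions beyond the free quota in bands of increasing unit cost, while surplus allowances can be sold. The band width and price growth rate below are illustrative assumptions; only the base price (320 per ton in the paper's example) comes from the abstract.

```python
def stepped_carbon_cost(emissions, quota, base_price=320.0,
                        step=1000.0, growth=0.25):
    """Stepped carbon-trading cost: emissions above the free quota are priced
    in bands of width `step`, each band (1 + growth) times dearer than the
    last. Emissions below quota earn revenue at the base price (negative cost)."""
    excess = emissions - quota
    if excess <= 0:
        return base_price * excess      # sell surplus allowances
    cost, price = 0.0, base_price
    while excess > 0:
        band = min(excess, step)
        cost += band * price
        excess -= band
        price *= (1.0 + growth)
    return cost
```

The escalating marginal price is what gives the scheduler an incentive to keep emissions near or below quota rather than simply paying a flat penalty.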

31 pages, 2421 KB  
Article
Optimization of Cooperative Operation of Multiple Microgrids Considering Green Certificates and Carbon Trading
by Xiaobin Xu, Jing Xia, Chong Hong, Pengfei Sun, Peng Xi and Jinchao Li
Energies 2025, 18(15), 4083; https://doi.org/10.3390/en18154083 - 1 Aug 2025
Cited by 1 | Viewed by 425
Abstract
In the context of achieving low-carbon goals, building low-carbon energy systems is a crucial development direction and implementation pathway. Renewable energy is favored because of its clean characteristics, but the access may have an impact on the power grid. Microgrid technology provides an effective solution to this problem. Uncertainty exists in single microgrids, so multiple microgrids are introduced to improve system stability and robustness. Electric carbon trading and profit redistribution among multiple microgrids have been challenges. To promote energy commensurability among microgrids, expand the types of energy interactions, and improve the utilization rate of renewable energy, this paper proposes a cooperative operation optimization model of multi-microgrids based on the green certificate and carbon trading mechanism to promote local energy consumption and a low carbon economy. First, this paper introduces a carbon capture system (CCS) and power-to-gas (P2G) device in the microgrid and constructs a cogeneration operation model coupled with a power-to-gas carbon capture system. On this basis, a low-carbon operation model for multi-energy microgrids is proposed by combining the local carbon trading market, the stepped carbon trading mechanism, and the green certificate trading mechanism. Secondly, this paper establishes a cooperative game model for multiple microgrid electricity carbon trading based on the Nash negotiation theory after constructing the single microgrid model. Finally, the ADMM method and the asymmetric energy mapping contribution function are used for the solution. The case study uses a typical 24 h period as an example for the calculation. Case study analysis shows that, compared with the independent operation mode of microgrids, the total benefits of the entire system increased by 38,296.1 yuan and carbon emissions were reduced by 30,535 kg through the coordinated operation of electricity–carbon coupling. 
The arithmetic example verifies that the method proposed in this paper can effectively improve the economic benefits of each microgrid and reduce carbon emissions. Full article

20 pages, 1104 KB  
Article
Fast Algorithms for the Small-Size Type IV Discrete Hartley Transform
by Vitalii Natalevych, Marina Polyakova and Aleksandr Cariow
Electronics 2025, 14(14), 2841; https://doi.org/10.3390/electronics14142841 - 15 Jul 2025
Viewed by 313
Abstract
This paper presents new fast algorithms for the fourth type discrete Hartley transform (DHT-IV) for input data sequences of lengths from 3 to 8. Fast algorithms for small-sized trigonometric transforms can be used as building blocks for synthesizing algorithms for large-sized transforms. Additionally, they can be utilized to process small data blocks in various digital signal processing applications, thereby reducing overall system latency and computational complexity. The existing polynomial algebraic approach and the approach based on decomposing the transform matrix into cyclic convolution submatrices involve rather complicated housekeeping and a large number of additions. To avoid the noted drawback, this paper uses a structural approach to synthesize new algorithms. The starting point for constructing fast algorithms was to represent DHT-IV as a matrix–vector product. The next step was to bring the block structure of the DHT-IV matrix to one of the matrix patterns following the structural approach. In this case, if the block structure of the DHT-IV matrix did not match one of the existing patterns, its rows and columns were reordered, and, if necessary, the signs of some entries were changed. If this did not help, the DHT-IV matrix was represented as the sum of two or more matrices, and then each matrix was analyzed separately, if necessary, subjecting the matrices obtained by decomposition to the above transformations. As a result, the factorizations of matrix components were obtained, which led to a reduction in the arithmetic complexity of the developed algorithms. To illustrate the space–time structures of computational processes described by the developed algorithms, their data flow graphs are presented, which, if necessary, can be directly mapped onto the VLSI structure. The obtained DHT-IV algorithms can reduce the number of multiplications by an average of 75% compared with the direct calculation of matrix–vector products. 
However, the number of additions increases by an average of 4%. Full article
(This article belongs to the Section Circuit and Signal Processing)
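As a reference point for the fast algorithms, the direct O(N²) DHT-IV is just the matrix-vector product below. DHT-IV conventions in the literature differ by index shifts and normalization; this sketch uses one common self-inverse form and is not necessarily the exact definition used in the paper.

```python
import math

def dht_iv(x):
    """Direct type-IV discrete Hartley transform as a plain matrix-vector
    product, using the kernel cas(pi*(2n+1)*(2k+1)/(2N)) with
    cas(t) = cos(t) + sin(t). With this convention the transform matrix H
    satisfies H @ H = N * I, i.e., it is self-inverse up to a 1/N factor."""
    N = len(x)
    cas = lambda t: math.cos(t) + math.sin(t)
    return [sum(x[n] * cas(math.pi * (2 * n + 1) * (2 * k + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]
```

A fast factorization of this matrix trades most of the N² kernel multiplications for additions and a few multiplications, which is the 75% reduction the abstract reports.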

20 pages, 1069 KB  
Article
Cognitive, Behavioral, and Learning Profiles of Children with Above-Average Cognitive Functioning: Insights from an Italian Clinical Sample
by Daniela Pia Rosaria Chieffo, Valentina Arcangeli, Valentina Delle Donne, Giulia Settimi, Valentina Massaroni, Angelica Marfoli, Monia Pellizzari, Ida Turrini, Elisa Marconi, Laura Monti, Federica Moriconi, Delfina Janiri, Gabriele Sani and Eugenio Maria Mercuri
Children 2025, 12(7), 926; https://doi.org/10.3390/children12070926 - 13 Jul 2025
Viewed by 486
Abstract
Background/Objectives: Children with above-average cognitive functioning often present complex developmental profiles, combining high cognitive potential with heterogeneous socio-emotional and learning trajectories. Although the cognitive and behavioral characteristics of giftedness have been widely studied in Anglophone countries, evidence remains limited in Southern Europe. This study aimed to investigate the cognitive, academic, and emotional–behavioral profiles of Italian children and adolescents with above-average cognitive functioning, using an inclusive, dimensional approach (IQ > 114). Methods: We analyzed a cross-sectional sample of 331 children and adolescents (ages 2.11–16.5 years), referred for clinical cognitive or behavioral evaluations. Participants were assessed using the WPPSI-III or WISC-IV for cognitive functioning, the MT battery for academic achievement, and the Child Behavior Checklist (CBCL) for emotional and behavioral symptoms. Comparative and correlational analyses were performed across age, gender, and functional domains. A correction for multiple testing was applied using the Benjamini–Hochberg procedure. Results: Gifted participants showed strong verbal comprehension (mean VCI: preschoolers = 118; school-aged = 121) and relative weaknesses in working memory (WM = 106) and processing speed (PS = 109). Males outperformed females in perceptual reasoning (PR = 121 vs. 118; p = 0.032), while females scored higher in processing speed (112 vs. 106; p = 0.021). Difficulties in writing and arithmetic were observed in 47.3% and 41.8% of school-aged participants, respectively. Subclinical internalizing problems were common in preschool and school-aged groups (mean CBCL T = 56.2–56.7). Working memory negatively correlated with total behavioral problems (r = −0.13, p = 0.046). 
Conclusions: These findings confirm the heterogeneity of gifted profiles and underscore the need for personalized educational and psychological interventions to support both strengths and vulnerabilities in gifted children. Caution is warranted when interpreting these associations, given their modest effect sizes and the exploratory nature of the study. Full article
(This article belongs to the Section Pediatric Mental Health)

16 pages, 1013 KB  
Article
Multidimensional Educational Inequality in Italy: A Stacking-Based Approach for Gender and Territorial Analysis
by Martina De Anna and Enrico Ivaldi
Sustainability 2025, 17(14), 6243; https://doi.org/10.3390/su17146243 - 8 Jul 2025
Viewed by 529
Abstract
This study investigates regional and gender disparities in educational attainment across Italy in 2021, drawing on the Fair and Sustainable Well-being (BES) dataset from ISTAT. By applying cluster analysis and composite indicators—including the Mazziotta–Pareto Index (MPI), geometric and arithmetic means, min-max normalization, and principal component analysis (PCA)—we assess the robustness and consistency of educational performance across regions. A key methodological innovation is the use of the stacking method to ensure comparability between genders. Results show persistent North–South educational divides and a consistent female advantage across all indicators. The paper contributes to Sustainable Development Goals by providing empirical insights into SDG 4 (Quality Education) through measurement of educational inequality and access; SDG 5 (Gender Equality) by highlighting structural advantages of women in educational outcomes; and SDG 10 (Reduced Inequalities) through a territorial analysis of disparities and policy implications. The findings offer both a methodological contribution—by testing multiple aggregation techniques—and a practical tool for policy evaluation, emphasizing the importance of multidimensional and gender-sensitive approaches in achieving educational sustainability. Full article
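The aggregation techniques compared above differ mainly in how much they let strong indicators compensate for weak ones. A minimal sketch of min-max normalization and the two means (the MPI and PCA steps are beyond a short example):

```python
def min_max(values):
    """Min-max normalize a set of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def arithmetic_mean(xs):
    """Fully compensatory aggregation: a high indicator offsets a low one."""
    return sum(xs) / len(xs)

def geometric_mean(xs):
    """Partially non-compensatory aggregation: unbalanced profiles are
    penalized, since the product is dragged down by any low indicator."""
    p = 1.0
    for x in xs:
        p *= x
    return p ** (1.0 / len(xs))
```

Since the geometric mean never exceeds the arithmetic mean, comparing the two rankings is a quick robustness check on how sensitive a region's composite score is to imbalance across indicators.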

24 pages, 1061 KB  
Article
High- and Low-Rank Optimization of SNOVA on ARMv8: From High-Security Applications to IoT Efficiency
by Minwoo Lee, Minjoo Sim, Siwoo Eum and Hwajeong Seo
Electronics 2025, 14(13), 2696; https://doi.org/10.3390/electronics14132696 - 3 Jul 2025
Viewed by 582
Abstract
The increasing threat of quantum computing to traditional cryptographic systems has prompted intense research into post-quantum schemes. Despite SNOVA’s potential for lightweight and secure digital signatures, its performance on embedded devices (e.g., ARMv8 platforms) remains underexplored. This research addresses this gap by presenting the optimal SNOVA implementations on embedded devices. This paper presents a performance-optimized implementation of the SNOVA post-quantum digital signature scheme on ARMv8 processors. SNOVA is a multivariate cryptographic algorithm under consideration in the NIST’s additional signature standardization. Our work targets the performance bottlenecks in the SNOVA scheme. Specifically, we employ matrix arithmetic over GF16 and AES-CTR-based pseudorandom number generation by exploiting the NEON SIMD extension and tailoring the computations to the matrix rank. At a low level, we develop rank-specific SIMD kernels for addition and multiplication. Rank 4 matrices (i.e., 16 bytes) are handled using fully vectorized instructions that align with 128-bit-wise registers, while rank 2 matrices (i.e., 4 bytes) are processed in batches of four to ensure full SIMD occupancy. At the high level, core routines such as key generation and signature evaluation are structurally refactored to provide aligned memory layouts for batched execution. This joint optimization across algorithmic layers reduces the overhead and enables seamless hardware acceleration. The resulting implementation supports 12 SNOVA parameter sets and demonstrates substantial efficiency improvements compared to the reference baseline. These results highlight that fine-grained SIMD adaptation is essential for the efficient deployment of multivariate cryptography on modern embedded platforms. Full article
(This article belongs to the Special Issue Trends in Information Systems and Security)
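The GF(16) matrix arithmetic at SNOVA's core reduces to many independent 4-bit field multiplications, which is exactly what makes NEON batching attractive. A scalar sketch using the common modulus x⁴ + x + 1 (the SNOVA specification fixes its own field representation; this choice is illustrative):

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiply two elements of GF(2^4): carry-less (XOR) schoolbook multiply,
    then reduce modulo the degree-4 polynomial x^4 + x + 1 (0b10011)."""
    p = 0
    for i in range(4):                  # shift-and-XOR partial products
        if (b >> i) & 1:
            p ^= a << i
    for i in range(7, 3, -1):           # reduce degrees 7..4, high to low
        if (p >> i) & 1:
            p ^= poly << (i - 4)
    return p
```

A SIMD kernel of the kind described above evaluates this multiply for 16 or 32 operand pairs at once (e.g., via nibble table lookups with `TBL`), rather than looping as this scalar sketch does.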

33 pages, 3647 KB  
Article
Research on the Operation Optimisation of Integrated Energy System Based on Multiple Thermal Inertia
by Huiqiang Zhi, Min Zhang, Xiao Chang, Rui Fan, Huipeng Li, Le Gao and Jinge Song
Energies 2025, 18(13), 3500; https://doi.org/10.3390/en18133500 - 2 Jul 2025
Viewed by 321
Abstract
Addressing the problem that energy supply and load demand cannot be matched due to the difference in inertia effects among multiple energy sources, and taking into account the thermoelectric load, this paper designs a two-stage operation optimization model of IES considering multi-dimensional thermal inertia and constructs an intelligent adaptive solution method based on a time scale-model base. Validation is conducted through an arithmetic example. Scenario 2 has 15.3% fewer CO2 emissions than Scenario 1, 19.7% less purchased electricity, and 20.0% less purchased electricity cost. The optimal algorithm for the day-ahead phase is GA, and the optimal algorithm for the intraday phase is PSO, which is able to produce optimization results in a few minutes. Full article
(This article belongs to the Section A: Sustainable Energy)
