Search Results (3,332)

Search Parameters:
Keywords = two-hybrid analysis

38 pages, 6505 KiB  
Review
Trends in Oil Spill Modeling: A Review of the Literature
by Rodrigo N. Vasconcelos, André T. Cunha Lima, Carlos A. D. Lentini, José Garcia V. Miranda, Luís F. F. de Mendonça, Diego P. Costa, Soltan G. Duverger and Elaine C. B. Cambui
Water 2025, 17(15), 2300; https://doi.org/10.3390/w17152300 (registering DOI) - 2 Aug 2025
Abstract
Oil spill simulation models are essential for predicting the oil spill behavior and movement in marine environments. In this study, we comprehensively reviewed a large and diverse body of peer-reviewed literature obtained from Scopus and Web of Science. Our initial analysis phase focused on examining trends in scientific publications, utilizing the complete dataset derived after systematic screening and database integration. In the second phase, we applied elements of a systematic review to identify and evaluate the most influential contributions in the scientific field of oil spill simulations. Our analysis revealed a steady and accelerating growth of research activity over the past five decades, with a particularly notable expansion in the last two. The field has also experienced a marked increase in collaborative practices, including a rise in international co-authorship and multi-authored contributions, reflecting a more global and interdisciplinary research landscape. We cataloged the key modeling frameworks that have shaped the field from established systems such as OSCAR, OIL-MAP/SIMAP, and GNOME to emerging hybrid and Lagrangian approaches. Hydrodynamic models were consistently central, often integrated with biogeochemical, wave, atmospheric, and oil-spill-specific modules. Environmental variables such as wind, ocean currents, and temperature were frequently used to drive model behavior. Geographically, research has concentrated on ecologically and economically sensitive coastal and marine regions. We conclude that future progress will rely on the real-time integration of high-resolution environmental data streams, the development of machine-learning-based surrogate models to accelerate computations, and the incorporation of advanced biodegradation and weathering mechanisms supported by experimental data. These advancements are expected to enhance the accuracy, responsiveness, and operational value of oil spill modeling tools, supporting environmental monitoring and emergency response. Full article
(This article belongs to the Special Issue Advanced Remote Sensing for Coastal System Monitoring and Management)
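
As a concrete illustration of the Lagrangian approaches surveyed in this review, the sketch below advects surface oil particles under a uniform current, a wind drift term, and random-walk diffusion; the 3% wind drift factor, the diffusivity, and the forcing fields are illustrative assumptions rather than values taken from any reviewed model.

```python
import numpy as np

def advect_particles(pos, current, wind, dt, n_steps,
                     wind_drift=0.03, diffusivity=10.0, seed=0):
    """Advect surface oil particles: currents + wind drift + random-walk diffusion.

    pos     : (N, 2) particle positions in metres
    current : (2,) ambient current velocity in m/s (assumed uniform here)
    wind    : (2,) 10 m wind velocity in m/s (assumed uniform here)
    """
    rng = np.random.default_rng(seed)
    pos = pos.copy()
    drift = current + wind_drift * wind            # deterministic advection velocity
    for _ in range(n_steps):
        # random walk equivalent to a horizontal eddy diffusivity K (m^2/s)
        jitter = rng.normal(scale=np.sqrt(2.0 * diffusivity * dt), size=pos.shape)
        pos += drift * dt + jitter
    return pos

# Example: 1,000 particles released at the origin and advected for 6 hours
start = np.zeros((1000, 2))
final = advect_particles(start, current=np.array([0.2, 0.0]),
                         wind=np.array([5.0, 2.0]), dt=60.0, n_steps=360)
print(final.mean(axis=0))   # mean slick displacement in metres
```
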
24 pages, 3553 KiB  
Article
A Hybrid Artificial Intelligence Framework for Melanoma Diagnosis Using Histopathological Images
by Alberto Nogales, María C. Garrido, Alfredo Guitian, Jose-Luis Rodriguez-Peralto, Carlos Prados Villanueva, Delia Díaz-Prieto and Álvaro J. García-Tejedor
Technologies 2025, 13(8), 330; https://doi.org/10.3390/technologies13080330 (registering DOI) - 1 Aug 2025
Abstract
Cancer remains one of the most significant global health challenges due to its high mortality rates and the limited understanding of its progression. Early diagnosis is critical to improving patient outcomes, especially in skin cancer, where timely detection can significantly enhance recovery rates. Histopathological analysis is a widely used diagnostic method, but it is a time-consuming process that heavily depends on the expertise of highly trained specialists. Recent advances in Artificial Intelligence have shown promising results in image classification, highlighting its potential as a supportive tool for medical diagnosis. In this study, we explore the application of hybrid Artificial Intelligence models for melanoma diagnosis using histopathological images. The dataset used consisted of 506 histopathological images, from which 313 curated images were selected after quality control and preprocessing. We propose a two-step framework that employs an Autoencoder for dimensionality reduction and feature extraction of the images, followed by a classification algorithm to distinguish between melanoma and nevus, trained on the extracted feature vectors from the bottleneck of the Autoencoder. We evaluated Support Vector Machines, Random Forest, Multilayer Perceptron, and K-Nearest Neighbours as classifiers. Among these, the combinations of Autoencoder with K-Nearest Neighbours achieved the best performance and inference time, reaching an average accuracy of approximately 97.95% on the test set and requiring 3.44 min per diagnosis. The baseline comparison results were consistent, demonstrating strong generalisation and outperforming the other models by 2 to 13 percentage points. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
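
A minimal sketch of the two-step framework described above: an autoencoder is trained to reconstruct the (flattened) histopathological images, and a k-nearest-neighbours classifier is then fitted on the bottleneck vectors. Layer sizes, the latent dimension, and k are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class AE(nn.Module):
    """Autoencoder whose bottleneck supplies feature vectors for the classifier."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def fit_ae_knn(X_train, y_train, epochs=50, k=5):
    """Step 1: train the autoencoder on the images; step 2: k-NN on bottleneck vectors."""
    X = torch.tensor(X_train, dtype=torch.float32)
    model, loss_fn = AE(X.shape[1]), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(X)
        loss = loss_fn(recon, X)
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, z = model(X)
    knn = KNeighborsClassifier(n_neighbors=k).fit(z.numpy(), y_train)
    return model, knn   # at inference: encode a new image, then knn.predict(z_new)
```
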

20 pages, 4765 KiB  
Article
Ultrasonic EDM for External Cylindrical Surface Machining with Graphite Electrodes: Horn Design and Hybrid NSGA-II–AHP Optimization of MRR and Ra
by Van-Thanh Dinh, Thu-Quy Le, Duc-Binh Vu, Ngoc-Pi Vu and Tat-Loi Mai
Machines 2025, 13(8), 675; https://doi.org/10.3390/machines13080675 (registering DOI) - 1 Aug 2025
Abstract
This study presents the first investigation into the application of ultrasonic vibration-assisted electrical discharge machining (UV-EDM) using graphite electrodes for external cylindrical surface machining—an essential surface in the production of tablet punches and sheet metal-forming dies. A custom ultrasonic horn was designed and fabricated using 90CrSi material to operate effectively at a resonant frequency of 20 kHz, ensuring stable vibration transmission throughout the machining process. A Box–Behnken experimental design was employed to explore the effects of five process parameters—vibration amplitude (A), pulse-on time (Ton), pulse-off time (Toff), discharge current (Ip), and servo voltage (SV)—on two key performance indicators: material removal rate (MRR) and surface roughness (Ra). The optimization process was conducted in two stages: single-objective analysis to maximize MRR while ensuring Ra < 4 µm, followed by a hybrid multi-objective approach combining NSGA-II and the Analytic Hierarchy Process (AHP). The optimal solution achieved a high MRR of 9.28 g/h while maintaining Ra below the critical surface finish threshold, thus meeting the practical requirements for punch surface quality. The findings confirm the effectiveness of the proposed horn design and hybrid optimization strategy, offering a new direction for enhancing productivity and surface integrity in cylindrical EDM applications using graphite electrodes. Full article
(This article belongs to the Section Advanced Manufacturing)
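
The AHP stage of the hybrid optimisation can be illustrated with a short sketch: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and used to rank Pareto-optimal (MRR, Ra) candidates returned by NSGA-II. The comparison values and candidate points below are hypothetical, not the paper's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalised principal eigenvector of an AHP comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Illustrative 2x2 judgement: MRR considered 3x as important as Ra (not the paper's values)
A = np.array([[1.0, 3.0],
              [1.0 / 3.0, 1.0]])
w = ahp_weights(A)

# Hypothetical Pareto-front candidates from NSGA-II: columns are (MRR in g/h, Ra in um)
pareto = np.array([[9.3, 3.9], [8.1, 3.2], [6.5, 2.4]])
mrr = (pareto[:, 0] - pareto[:, 0].min()) / np.ptp(pareto[:, 0])        # benefit criterion
ra = 1.0 - (pareto[:, 1] - pareto[:, 1].min()) / np.ptp(pareto[:, 1])   # cost criterion, inverted
scores = w[0] * mrr + w[1] * ra
print("best compromise solution:", pareto[np.argmax(scores)])
```
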

26 pages, 1263 KiB  
Article
Identifying Key Digital Enablers for Urban Carbon Reduction: A Strategy-Focused Study of AI, Big Data, and Blockchain Technologies
by Rongyu Pei, Meiqi Chen and Ziyang Liu
Systems 2025, 13(8), 646; https://doi.org/10.3390/systems13080646 (registering DOI) - 1 Aug 2025
Abstract
The integration of artificial intelligence (AI), big data analytics, and blockchain technologies within the digital economy presents transformative opportunities for promoting low-carbon urban development. However, a systematic understanding of how these digital innovations influence urban carbon mitigation remains limited. This study addresses this gap by proposing two research questions (RQs): (1) What are the key success factors for artificial intelligence, big data, and blockchain in urban carbon emission reduction? (2) How do these technologies interact and support the transition to low-carbon cities? To answer these questions, the study employs a hybrid methodological framework combining the decision-making trial and evaluation laboratory (DEMATEL) and interpretive structural modeling (ISM) techniques. The data were collected through structured expert questionnaires, enabling the identification and hierarchical analysis of twelve critical success factors (CSFs). Grounded in sustainability transitions theory and institutional theory, the CSFs are categorized into three dimensions: (1) digital infrastructure and technological applications; (2) digital transformation of industry and economy; (3) sustainable urban governance. The results reveal that e-commerce and sustainable logistics, the adoption of the circular economy, and cross-sector collaboration are the most influential drivers of digital-enabled decarbonization, while foundational elements such as smart energy systems and digital infrastructure act as key enablers. The DEMATEL-ISM approach facilitates a system-level understanding of the causal relationships and strategic priorities among the CSFs, offering actionable insights for urban planners, policymakers, and stakeholders committed to sustainable digital transformation and carbon neutrality. Full article
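
The DEMATEL step described above reduces to a small matrix computation: normalise the expert direct-influence matrix D and form the total-relation matrix T = D(I - D)^(-1); row and column sums then give each factor's prominence (r + c) and net cause/effect role (r - c). The 4x4 matrix below is an illustrative stand-in for the study's twelve-factor expert data.

```python
import numpy as np

def dematel(direct):
    """Total-relation matrix plus prominence (r + c) and relation (r - c) per factor."""
    n = direct.shape[0]
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())  # normalisation constant
    D = direct / s
    T = D @ np.linalg.inv(np.eye(n) - D)   # T = D (I - D)^(-1)
    r, c = T.sum(axis=1), T.sum(axis=0)    # influence given / received
    return T, r + c, r - c

# Illustrative 4-factor direct-influence matrix (0-4 expert scores), not the study's data
X = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)
T, prominence, relation = dematel(X)
print(np.round(relation, 2))   # positive entries are net causes, negative are net effects
```
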

25 pages, 2082 KiB  
Article
XTTS-Based Data Augmentation for Profanity Keyword Recognition in Low-Resource Speech Scenarios
by Shin-Chi Lai, Yi-Chang Zhu, Szu-Ting Wang, Yen-Ching Chang, Ying-Hsiu Hung, Jhen-Kai Tang and Wen-Kai Tsai
Appl. Syst. Innov. 2025, 8(4), 108; https://doi.org/10.3390/asi8040108 - 31 Jul 2025
Abstract
As voice cloning technology rapidly advances, the risk of personal voices being misused by malicious actors for fraud or other illegal activities has significantly increased, making the collection of speech data increasingly challenging. To address this issue, this study proposes a data augmentation method based on XText-to-Speech (XTTS) synthesis to tackle the challenges of small-sample, multi-class speech recognition, using profanity as a case study to achieve high-accuracy keyword recognition. Two models were therefore evaluated: a CNN model (Proposed-I) and a CNN-Transformer hybrid model (Proposed-II). Proposed-I leverages local feature extraction, improving accuracy on a real human speech (RHS) test set from 55.35% without augmentation to 80.36% with XTTS-enhanced data. Proposed-II integrates CNN’s local feature extraction with Transformer’s long-range dependency modeling, further boosting test set accuracy to 88.90% while reducing the parameter count by approximately 41%, significantly enhancing computational efficiency. Compared to a previously proposed incremental architecture, the Proposed-II model achieves an 8.49% higher accuracy while reducing parameters by about 98.81% and MACs by about 98.97%, demonstrating exceptional resource efficiency. By utilizing XTTS and public corpora to generate a novel keyword speech dataset, this study enhances sample diversity and reduces reliance on large-scale original speech data. Experimental analysis reveals that an optimal synthetic-to-real speech ratio of 1:5 significantly improves the overall system accuracy, effectively addressing data scarcity. Additionally, the Proposed-I and Proposed-II models achieve accuracies of 97.54% and 98.66%, respectively, in distinguishing real from synthetic speech, demonstrating their strong potential for speech security and anti-spoofing applications. Full article
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)
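
A small sketch of the synthetic-to-real mixing step, parameterised by the 1:5 ratio reported as optimal above; the clip paths and the helper function are hypothetical.

```python
import random

def mix_training_set(real_clips, synthetic_clips, synth_per_real=0.2, seed=0):
    """Return a shuffled training list with a fixed synthetic-to-real ratio.

    synth_per_real=0.2 corresponds to the 1:5 synthetic-to-real ratio reported above.
    """
    rng = random.Random(seed)
    n_synth = int(len(real_clips) * synth_per_real)
    picked = rng.sample(synthetic_clips, min(n_synth, len(synthetic_clips)))
    mixed = list(real_clips) + picked
    rng.shuffle(mixed)
    return mixed

real = [f"rhs/kw_{i:04d}.wav" for i in range(500)]      # real human speech clips (hypothetical paths)
synth = [f"xtts/kw_{i:04d}.wav" for i in range(2000)]   # XTTS-generated clips (hypothetical paths)
train_list = mix_training_set(real, synth)
print(len(train_list))   # 500 real + 100 synthetic
```
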
18 pages, 1910 KiB  
Article
Hierarchical Learning for Closed-Loop Robotic Manipulation in Cluttered Scenes via Depth Vision, Reinforcement Learning, and Behaviour Cloning
by Hoi Fai Yu and Abdulrahman Altahhan
Electronics 2025, 14(15), 3074; https://doi.org/10.3390/electronics14153074 (registering DOI) - 31 Jul 2025
Abstract
Despite rapid advances in robot learning, the coordination of closed-loop manipulation in cluttered environments remains a challenging and relatively underexplored problem. We present a novel two-level hierarchical architecture for a depth vision-equipped robotic arm that integrates pushing, grasping, and high-level decision making. Central to our approach is a prioritised action–selection mechanism that facilitates efficient early-stage learning via behaviour cloning (BC), while enabling scalable exploration through reinforcement learning (RL). A high-level decision neural network (DNN) selects between grasping and pushing actions, and two low-level action neural networks (ANNs) execute the selected primitive. The DNN is trained with RL, while the ANNs follow a hybrid learning scheme combining BC and RL. Notably, we introduce an automated demonstration generator based on oriented bounding boxes, eliminating the need for manual data collection and enabling precise, reproducible BC training signals. We evaluate our method on a challenging manipulation task involving five closely packed cubic objects. Our system achieves a completion rate (CR) of 100%, an average grasping success (AGS) of 93.1% per completion, and only 7.8 average decisions taken for completion (DTC). Comparative analysis against three baselines—a grasping-only policy, a fixed grasp-then-push sequence, and a cloned demonstration policy—highlights the necessity of dynamic decision making and the efficiency of our hierarchical design. In particular, the baselines yield lower AGS (86.6%) and higher DTC (10.6 and 11.4) scores, underscoring the advantages of content-aware, closed-loop control. These results demonstrate that our architecture supports robust, adaptive manipulation and scalable learning, offering a promising direction for autonomous skill coordination in complex environments. Full article
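
A minimal PyTorch sketch of the two-level structure: a high-level decision network picks between the grasp and push primitives, and the chosen low-level action network scores candidate locations. Network shapes and the depth-feature encoding are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DecisionNet(nn.Module):
    """High-level DNN: scores the two primitives (0 = grasp, 1 = push)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, feat):
        return self.head(feat)

class ActionNet(nn.Module):
    """Low-level ANN: scores candidate action locations for one primitive."""
    def __init__(self, feat_dim=128, n_locations=64 * 64):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, n_locations))
    def forward(self, feat):
        return self.head(feat)

dnn, grasp_net, push_net = DecisionNet(), ActionNet(), ActionNet()

def select_action(depth_feat):
    """One closed-loop step: pick a primitive, then the chosen network's best location."""
    with torch.no_grad():
        primitive = dnn(depth_feat).argmax(dim=-1).item()
        ann = grasp_net if primitive == 0 else push_net
        location = ann(depth_feat).argmax(dim=-1).item()   # flattened pixel index
    return primitive, location

feat = torch.randn(1, 128)   # stand-in for a depth-image embedding
print(select_action(feat))
```
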

16 pages, 3038 KiB  
Article
The Interaction Mechanism Between Modified Selective Catalytic Reduction Catalysts and Volatile Organic Compounds in Flue Gas: A Density Functional Theory Study
by Ke Zhuang, Hanwen Wang, Zhenglong Wu, Yao Dong, Yun Xu, Chunlei Zhang, Xinyue Zhou, Yangwen Wu and Bing Zhang
Catalysts 2025, 15(8), 728; https://doi.org/10.3390/catal15080728 (registering DOI) - 31 Jul 2025
Abstract
The overall efficiency of combining denitrification and volatile organic compound (VOC) removal through selective catalytic reduction (SCR) technology is currently mainly limited by the VOC removal aspect. However, existing studies have not studied the microscopic mechanism of the interaction between VOCs and catalysts, failing to provide a theoretical basis for catalysts. Therefore, this work explored the interaction mechanisms between SCR catalysts doped with different additives and typical VOCs (acetone and toluene) in flue gas based on density functional theory (DFT) calculations. The results showed that the VNi-TiO2 surface exhibited a high adsorption energy of −0.80 eV for acetone and a high adsorption energy of −1.02 eV for toluene on the VMn-TiO2 surface. Electronic structure analysis revealed the VMn-TiO2 and VNi-TiO2 surfaces exhibited more intense orbital hybridization with acetone and toluene, promoting charge transfer between the two and resulting in stronger interactions. The analysis of temperature on adsorption free energy showed that VMn-TiO2 and VNi-TiO2 still maintained high activity at high temperatures. This work contributes to clarifying the interaction mechanism between SCR and VOCs and enhancing the VOC removal efficiency. Full article
(This article belongs to the Section Computational Catalysis)
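
The adsorption energies quoted above follow the usual DFT convention E_ads = E(surface+adsorbate) - E(surface) - E(adsorbate), with more negative values indicating stronger binding; a one-line helper with illustrative totals (not the paper's raw outputs) is:

```python
def adsorption_energy(e_surface_plus_adsorbate, e_surface, e_adsorbate):
    """E_ads = E(surface+adsorbate) - E(surface) - E(adsorbate); negative means favourable."""
    return e_surface_plus_adsorbate - e_surface - e_adsorbate

# Illustrative DFT totals in eV (not the paper's raw outputs); reproduces a -0.80 eV case
print(adsorption_energy(-1051.32, -1000.10, -50.42))
```
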

23 pages, 1652 KiB  
Article
Case Study on Emissions Abatement Strategies for Aging Cruise Vessels: Environmental and Economic Comparison of Scrubbers and Low-Sulphur Fuels
by Luis Alfonso Díaz-Secades, Luís Baptista and Sandrina Pereira
J. Mar. Sci. Eng. 2025, 13(8), 1454; https://doi.org/10.3390/jmse13081454 - 30 Jul 2025
Abstract
The maritime sector is undergoing rapid transformation, driven by increasingly stringent international regulations targeting air pollution. While newly built vessels integrate advanced technologies for compliance, the global fleet averages 21.8 years of age and must meet emission requirements through retrofitting or operational changes. This study evaluates, at environmental and economic levels, two key sulphur abatement strategies for a 1998-built cruise vessel nearing the end of its service life: (i) the installation of open-loop scrubbers with fuel enhancement devices, and (ii) a switch to marine diesel oil as main fuel. The analysis was based on real operational data from a cruise vessel. For the environmental assessment, a Tier III hybrid emissions model was used. The results show that scrubbers reduce SOx emissions by approximately 97% but increase fuel consumption by 3.6%, raising both CO2 and NOx emissions, while particulate matter decreases by only 6.7%. In contrast, switching to MDO achieves over 99% SOx reduction, an 89% drop in particulate matter, and a nearly 5% reduction in CO2 emissions. At an economic level, it was found that, despite a CAPEX of nearly USD 1.9 million, scrubber installation provides an average annual net saving exceeding USD 8.2 million. From the deterministic and probabilistic analyses performed, including Monte Carlo simulations under various fuel price correlation scenarios, scrubber installation consistently shows high profitability, with NPVs surpassing USD 70 million and payback periods under four months. Full article
(This article belongs to the Special Issue Sustainable and Efficient Maritime Operations)
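
A sketch of the probabilistic economics along the lines described above: annual scrubber savings are driven by the sampled HFO/MDO price spread (with the 3.6% extra fuel burn applied to the scrubber case) and discounted against the roughly USD 1.9 million CAPEX. Price distributions, consumption, discount rate, and horizon are illustrative assumptions, not the study's inputs.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, with year 0 first."""
    years = np.arange(len(cash_flows))
    return float(np.sum(cash_flows / (1.0 + rate) ** years))

rng = np.random.default_rng(42)
n_sims, horizon, discount = 10_000, 6, 0.08          # remaining life and rate are assumptions
capex = 1.9e6                                        # scrubber CAPEX from the abstract (USD)
fuel_tonnes = 30_000                                 # assumed annual fuel consumption (t)

# Assumed fuel price distributions in USD/tonne; the HFO/MDO spread drives the saving
hfo = rng.normal(450, 60, size=(n_sims, horizon))
mdo = rng.normal(750, 80, size=(n_sims, horizon))
# Scrubber case burns cheaper HFO but 3.6% more of it; the baseline burns MDO
annual_saving = mdo * fuel_tonnes - hfo * fuel_tonnes * 1.036

npvs = np.array([npv(np.concatenate(([-capex], annual_saving[i])), discount)
                 for i in range(n_sims)])
print(f"median NPV: {np.median(npvs) / 1e6:.1f} MUSD, P(NPV > 0): {(npvs > 0).mean():.2%}")
```
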

22 pages, 1703 KiB  
Article
Towards Personalized Precision Oncology: A Feasibility Study of NGS-Based Variant Analysis of FFPE CRC Samples in a Chilean Public Health System Laboratory
by Eduardo Durán-Jara, Iván Ponce, Marcelo Rojas-Herrera, Jessica Toro, Paulo Covarrubias, Evelin González, Natalia T. Santis-Alay, Mario E. Soto-Marchant, Katherine Marcelain, Bárbara Parra and Jorge Fernández
Curr. Issues Mol. Biol. 2025, 47(8), 599; https://doi.org/10.3390/cimb47080599 - 30 Jul 2025
Abstract
Massively parallel or next-generation sequencing (NGS) has enabled the genetic characterization of cancer patients, allowing the identification of somatic and germline variants associated with their diagnosis, tumor classification, and therapy response. Despite its benefits, NGS testing is not yet available in the Chilean public health system, rendering it both costly and time-consuming for patients and clinicians. Using a retrospective cohort of 67 formalin-fixed, paraffin-embedded (FFPE) colorectal cancer (CRC) samples, we aimed to implement the identification, annotation, and prioritization of relevant actionable tumor somatic variants in our laboratory, as part of the public health system. We compared two different library preparation methodologies (amplicon-based and capture-based) and different bioinformatics pipelines for sequencing analysis to assess advantages and disadvantages of each one. We obtained 80.5% concordance between actionable variants detected in our analysis and those obtained in the Cancer Genomics Laboratory from the Universidad de Chile (62 out of 77 variants), a validated laboratory for this methodology. Notably, 98.4% (61 out of 62) of variants detected previously by the validated laboratory were also identified in our analysis. Then, comparing the hybridization capture-based library preparation methodology with the amplicon-based strategy, we found ~94% concordance between identified actionable variants across the 15 shared genes, analyzed by the TumorSecTM bioinformatics pipeline, developed by the Cancer Genomics Laboratory. Our results demonstrate that it is entirely viable to implement an NGS-based analysis of actionable variant identification and prioritization in cancer samples in our laboratory, being part of the Chilean public health system and paving the way to improve the access to such analyses. Considering the economic realities of most Latin American countries, using a small NGS panel, such as TumorSecTM, focused on relevant variants of the Chilean and Latin American population is a cost-effective approach to extensive global NGS panels. Furthermore, the incorporation of automated bioinformatics analysis in this streamlined assay holds the potential of facilitating the implementation of precision medicine in this geographic region, which aims to greatly support personalized treatment of cancer patients in Chile. Full article
(This article belongs to the Special Issue Linking Genomic Changes with Cancer in the NGS Era, 2nd Edition)
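
The concordance figures above amount to set comparisons between variant call sets; a minimal sketch with hypothetical (gene, position, ref, alt) keys:

```python
def concordance(calls_a, calls_b):
    """Share of the union detected by both call sets, and share of set B recovered by A."""
    a, b = set(calls_a), set(calls_b)
    shared = a & b
    return len(shared) / len(a | b), len(shared) / len(b)

# Hypothetical actionable variants keyed by (gene, genomic position, ref, alt)
in_house  = {("KRAS", 25245350, "C", "T"), ("BRAF", 140753336, "A", "T"),
             ("PIK3CA", 179218303, "G", "A")}
validated = {("KRAS", 25245350, "C", "T"), ("BRAF", 140753336, "A", "T"),
             ("TP53", 7674220, "C", "T")}
overall, recovered = concordance(in_house, validated)
print(f"concordance {overall:.1%}, validated-lab variants recovered {recovered:.1%}")
```
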

27 pages, 8496 KiB  
Article
Comparative Performance of Machine Learning Models for Landslide Susceptibility Assessment: Impact of Sampling Strategies in Highway Buffer Zone
by Zhenyu Tang, Shumao Qiu, Haoying Xia, Daming Lin and Mingzhou Bai
Appl. Sci. 2025, 15(15), 8416; https://doi.org/10.3390/app15158416 - 29 Jul 2025
Abstract
Landslide susceptibility assessment is critical for hazard mitigation and land-use planning. This study evaluates the impact of two different non-landslide sampling methods—random sampling and sampling constrained by the Global Landslide Hazard Map (GLHM)—on the performance of various machine learning and deep learning models, including Naïve Bayes (NB), Support Vector Machine (SVM), SVM-Random Forest hybrid (SVM-RF), and XGBoost. The study area is a 2 km buffer zone along the Duku Highway in Xinjiang, China, with 102 landslide and 102 non-landslide points extracted by aforementioned sampling methods. Models were tested using ROC curves and non-parametric significance tests based on 20 repetitions of 5-fold spatial cross-validation data. GLHM sampling consistently improved AUROC and accuracy across all models (e.g., AUROC gains: NB +8.44, SVM +7.11, SVM–RF +3.45, XGBoost +3.04; accuracy gains: NB +11.30%, SVM +8.33%, SVM–RF +7.40%, XGBoost +8.31%). XGBoost delivered the best performance under both sampling strategies, reaching 94.61% AUROC and 84.30% accuracy with GLHM sampling. SHAP analysis showed that GLHM sampling stabilized feature importance rankings, highlighting STI, TWI, and NDVI as the main controlling factors for landslides in the study area. These results highlight the importance of hazard-informed sampling to enhance landslide susceptibility modeling accuracy and interpretability. Full article
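
A scikit-learn sketch of the evaluation idea: the same classifier scored by repeated 5-fold cross-validated AUROC under two non-landslide sampling strategies. Plain stratified folds stand in for the spatial cross-validation, a gradient-boosting model stands in for XGBoost, and the feature matrices are placeholders, not the study's conditioning factors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def mean_auroc(X, y, seed=0):
    """AUROC over 20 repeats of 5-fold CV (plain stratified folds stand in for spatial CV)."""
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=seed)
    model = GradientBoostingClassifier(random_state=seed)   # stand-in for XGBoost
    return cross_val_score(model, X, y, scoring="roc_auc", cv=cv).mean()

rng = np.random.default_rng(0)
y = np.repeat([1, 0], 102)                     # 102 landslide and 102 non-landslide points
X_random = rng.normal(size=(204, 10))          # placeholder factors under random sampling
X_glhm = X_random + 0.5 * y[:, None]           # pretend GLHM-constrained sampling sharpens contrast
print(f"random sampling AUROC {mean_auroc(X_random, y):.3f} vs GLHM {mean_auroc(X_glhm, y):.3f}")
```
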

28 pages, 7048 KiB  
Article
Enhanced Conjunction Assessment in LEO: A Hybrid Monte Carlo and Spline-Based Method Using TLE Data
by Shafeeq Koheal Tealib, Ahmed Magdy Abdelaziz, Igor E. Molotov, Xu Yang, Jian Sun and Jing Liu
Aerospace 2025, 12(8), 674; https://doi.org/10.3390/aerospace12080674 - 28 Jul 2025
Abstract
The growing density of space objects in low Earth orbit (LEO), driven by the deployment of large satellite constellations, has elevated the risk of orbital collisions and the need for high-precision conjunction analysis. Traditional methods based on Two-Line Element (TLE) data suffer from limited accuracy and insufficient uncertainty modeling. This study proposes a hybrid collision assessment framework that combines Monte Carlo simulation, spline-based refinement of the time of closest approach (TCA), and a multi-stage deterministic refinement process. The methodology begins with probabilistic sampling of TLE uncertainties, followed by a coarse search for TCA using the SGP4 propagator. A cubic spline interpolation then enhances temporal resolution, and a hierarchical multi-stage refinement computes the final TCA and minimum distance with sub-second and sub-kilometer accuracy. The framework was validated using real-world TLE data from over 2600 debris objects and active satellites. Results demonstrated a reduction in average TCA error to 0.081 s and distance estimation error to 0.688 km. The approach is computationally efficient, with average processing times below one minute per conjunction event using standard hardware. Its compatibility with operational space situational awareness (SSA) systems and scalability for high-volume screening make it suitable for integration into real-time space traffic management workflows. Full article
(This article belongs to the Section Astronautics & Space Science)
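
The spline-refinement step can be sketched directly with SciPy: sample the inter-object separation on a coarse grid, fit a cubic spline, and minimise it within the bracketing interval to obtain a sub-step TCA. The toy separation function below stands in for SGP4-propagated relative states.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def refine_tca(separation, t_coarse):
    """Coarse scan, cubic-spline fit of the separation, then a bounded minimisation."""
    d = np.array([separation(t) for t in t_coarse])
    i = int(np.argmin(d))
    lo, hi = t_coarse[max(i - 1, 0)], t_coarse[min(i + 1, len(t_coarse) - 1)]
    spline = CubicSpline(t_coarse, d)
    res = minimize_scalar(spline, bounds=(lo, hi), method="bounded")
    return float(res.x), float(res.fun)        # refined TCA (s) and miss distance

# Toy separation in km: closest approach near t = 127.3 s (stand-in for SGP4-propagated states)
def sep(t):
    rel = np.array([7.5 * (t - 127.3), 0.4 * (t - 127.3), 1.2])
    return np.linalg.norm(rel)

tca, miss = refine_tca(sep, np.arange(0.0, 300.0, 10.0))
print(f"TCA ~ {tca:.3f} s, miss distance ~ {miss:.3f} km")
```
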

13 pages, 1650 KiB  
Article
A Fast TaqMan® Real-Time PCR Assay for the Detection of Mitochondrial DNA Haplotypes in a Wolf Population
by Rita Lorenzini, Lorenzo Attili, Martina De Crescenzo and Antonella Pizzarelli
Genes 2025, 16(8), 897; https://doi.org/10.3390/genes16080897 - 28 Jul 2025
Abstract
Background/Objectives: The gene pool of the Apennine wolf is affected by admixture with domestic variants due to anthropogenic hybridisation with dogs. Genetic monitoring at the population level involves assessing the extent of admixture in single individuals, ranging from pure wolves to recent hybrids or wolf backcrosses, through the analysis of nuclear and mitochondrial DNA (mtDNA) markers. Although individually non-diagnostic, mtDNA is nevertheless essential for completing the final diagnosis of genetic admixture. Typically, the identification of wolf mtDNA haplotypes is carried out via sequencing of coding genes and non-coding DNA stretches. Our objective was to develop a fast real-time PCR assay to detect the mtDNA haplotypes that occur exclusively in the Apennine wolf population, as a valuable alternative to the demanding sequence-based typing. Methods: We validated a qualitative duplex real-time PCR that exploits the combined presence of diagnostic point mutations in two mtDNA segments, the NDH-4 gene and the control region, and is performed in a single-tube step through TaqMan-MGB chemistry. The aim was to detect mtDNA multi-fragment haplotypes that are exclusive to the Apennine wolf, bypassing sequencing. Results: Basic validation of 149 field samples, consisting of pure Apennine wolves, dogs, wolf × dog hybrids, and Dinaric wolves, showed that the assay is highly specific and sensitive, with genomic DNA amounts as low as 10−5 ng still producing positive results. It also proved high repeatability and reproducibility, thereby enabling reliable high-throughput testing. Conclusions: The results indicate that the assay presented here provides a valuable alternative method to the time- and cost-consuming sequencing procedure to reliably diagnose the maternal lineage of the still-threatened Apennine wolf, and it covers a wide range of applications, from scientific research to conservation, diagnostics, and forensics. Full article
(This article belongs to the Section Animal Genetics and Genomics)

23 pages, 19710 KiB  
Article
Hybrid EEG Feature Learning Method for Cross-Session Human Mental Attention State Classification
by Xu Chen, Xingtong Bao, Kailun Jitian, Ruihan Li, Li Zhu and Wanzeng Kong
Brain Sci. 2025, 15(8), 805; https://doi.org/10.3390/brainsci15080805 - 28 Jul 2025
Abstract
Background: Decoding mental attention states from electroencephalogram (EEG) signals is crucial for numerous applications such as cognitive monitoring, adaptive human–computer interaction, and brain–computer interfaces (BCIs). However, conventional EEG-based approaches often focus on channel-wise processing and are limited to intra-session or subject-specific scenarios, lacking robustness in cross-session or inter-subject conditions. Methods: In this study, we propose a hybrid feature learning framework for robust classification of mental attention states, including focused, unfocused, and drowsy conditions, across both sessions and individuals. Our method integrates preprocessing, feature extraction, feature selection, and classification in a unified pipeline. We extract channel-wise spectral features using short-time Fourier transform (STFT) and further incorporate both functional and structural connectivity features to capture inter-regional interactions in the brain. A two-stage feature selection strategy, combining correlation-based filtering and random forest ranking, is adopted to enhance feature relevance and reduce dimensionality. Support vector machine (SVM) is employed for final classification due to its efficiency and generalization capability. Results: Experimental results on two cross-session and inter-subject EEG datasets demonstrate that our approach achieves classification accuracy of 86.27% and 94.01%, respectively, significantly outperforming traditional methods. Conclusions: These findings suggest that integrating connectivity-aware features with spectral analysis can enhance the generalizability of attention decoding models. The proposed framework provides a promising foundation for the development of practical EEG-based systems for continuous mental state monitoring and adaptive BCIs in real-world environments. Full article
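
A minimal sketch of the channel-wise spectral stage: short-time Fourier transform per channel, mean log band power per canonical EEG band, and an SVM on the resulting feature vector. The sampling rate, window length, band definitions, and synthetic trials are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(eeg, fs=128, nperseg=256):
    """eeg: (n_channels, n_samples) -> mean log band power per channel and band."""
    feats = []
    for ch in eeg:
        f, _, Z = stft(ch, fs=fs, nperseg=nperseg)
        psd = np.abs(Z) ** 2
        for lo, hi in BANDS.values():
            mask = (f >= lo) & (f < hi)
            feats.append(np.log(psd[mask].mean() + 1e-12))
    return np.array(feats)

# Synthetic stand-in: 40 trials, 8 channels, 10 s at 128 Hz; odd trials get extra alpha power
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * np.arange(1280) / 128)
X = np.array([band_power_features(rng.normal(size=(8, 1280)) + (i % 2) * alpha)
              for i in range(40)])
y = np.arange(40) % 2
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```
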

17 pages, 4667 KiB  
Article
Workspace Analysis and Dynamic Modeling of 6-DoF Multi-Pattern Cable-Driven Hybrid Mobile Robot
by Jiahao Song, Meiqi Wang, Jiabao Wu, Qing Liu and Shuofei Yang
Machines 2025, 13(8), 659; https://doi.org/10.3390/machines13080659 - 28 Jul 2025
Abstract
A cable-driven hybrid mobile robot is a kind of robot consisting of two modules connected in series, which uses multiple parallel cables to drive the moving platforms. Cable-driven robots benefit from a large workspace, low inertia, excellent dynamic performance due to the lightweight and high extensibility of cables, making them ideal for a wide range of applications, such as sports cameras, large radio telescopes, and planetary exploration. Considering the fundamental dynamic constraint imposed by the unilateral constraint of cables, the workspace and dynamic modeling for cable-driven robots require specialized study. In this paper, a novel cable-driven hybrid robot, which has two motion patterns, is designed, and an arc intersection method for analyzing workspace is applied to solve the robot workspace of two motion patterns. Based on the workspace analysis, a dynamic model for the cable-driven hybrid robot is established, laying the foundation for subsequent trajectory planning. Simulation results in MATLAB R2021a demonstrate that the cable-driven hybrid robot has a large workspace in both motion patterns and is capable of meeting various motion requirements, indicating promising application potential. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
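
The unilateral cable constraint mentioned above is what shapes the workspace: a pose is wrench-feasible only if the required platform wrench can be produced with every cable tension inside positive bounds. A small linear-programming check, with an illustrative planar geometry rather than the paper's robot, is:

```python
import numpy as np
from scipy.optimize import linprog

def tensions_feasible(structure_matrix, wrench, t_min=5.0, t_max=500.0):
    """Cables can only pull: find tensions t in [t_min, t_max] with A @ t = wrench."""
    n = structure_matrix.shape[1]
    res = linprog(c=np.ones(n), A_eq=structure_matrix, b_eq=wrench,
                  bounds=[(t_min, t_max)] * n, method="highs")
    return res.success, (res.x if res.success else None)

# Illustrative planar platform: 4 cables with unit direction vectors, supporting its own weight
A = np.array([[0.6, -0.6, 0.7, -0.7],
              [0.8,  0.8, 0.7,  0.7]])
ok, t = tensions_feasible(A, wrench=np.array([0.0, 98.1]))   # 10 kg platform under gravity (N)
print(ok, np.round(t, 1) if ok else None)
```
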

21 pages, 1201 KiB  
Article
A Comparison of the Black Hole Algorithm Against Conventional Training Strategies for Neural Networks
by Péter Veres
Mathematics 2025, 13(15), 2416; https://doi.org/10.3390/math13152416 - 27 Jul 2025
Abstract
Artificial Intelligence continues to demand robust and adaptable training methods for neural networks, particularly in scenarios involving limited computational resources or noisy, complex data. This study presents a comparative analysis of four training algorithms, Backpropagation, Genetic Algorithm, Black-hole Algorithm, and Particle Swarm Optimization, evaluated across both classification and regression tasks. Each method was implemented from scratch in MATLAB ver. R2024a, avoiding reliance on pre-optimized libraries to isolate algorithmic behavior. Two types of datasets were used, namely a synthetic benchmark dataset and a real-world dataset preprocessed into classification and regression formats. All algorithms were tested in both basic and advanced forms using consistent network architectures and training constraints. Results indicate that while Backpropagation maintained strong performance in smooth regression settings, the Black-hole and PSO algorithms demonstrated more stable and faster initial progress in noisy or discrete classification tasks. These findings highlight the practical viability of the Black-hole Algorithm as a competitive, gradient-free alternative for neural network training, particularly in early-stage learning or hybrid optimization frameworks. Full article
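
A compact NumPy sketch of the Black-hole Algorithm as a gradient-free optimiser over a weight vector: candidate "stars" drift toward the current best solution (the black hole), and stars crossing the event horizon are re-initialised. Here it fits a toy linear model rather than a full network; the population size, iteration count, and bounds are illustrative, not the study's settings.

```python
import numpy as np

def black_hole_optimize(loss, dim, n_stars=30, n_iter=200, bounds=(-5.0, 5.0), seed=0):
    """Black-hole Algorithm: stars drift toward the best solution; absorbed stars restart."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    stars = rng.uniform(lo, hi, size=(n_stars, dim))
    fitness = np.array([loss(s) for s in stars])
    bh_idx = int(np.argmin(fitness))
    bh, bh_fit = stars[bh_idx].copy(), fitness[bh_idx]
    for _ in range(n_iter):
        stars += rng.uniform(size=(n_stars, 1)) * (bh - stars)   # attraction toward the black hole
        fitness = np.array([loss(s) for s in stars])
        if fitness.min() < bh_fit:                               # a better star becomes the black hole
            bh_idx = int(np.argmin(fitness))
            bh, bh_fit = stars[bh_idx].copy(), fitness[bh_idx]
        radius = bh_fit / (fitness.sum() + 1e-12)                # event-horizon radius
        absorbed = np.linalg.norm(stars - bh, axis=1) < radius
        absorbed[bh_idx] = False
        stars[absorbed] = rng.uniform(lo, hi, size=(int(absorbed.sum()), dim))
        fitness[absorbed] = np.array([loss(s) for s in stars[absorbed]])
        stars[bh_idx], fitness[bh_idx] = bh, bh_fit
    return bh, bh_fit

# Toy objective: recover linear-model weights (stand-in for a small network's training loss)
rng = np.random.default_rng(1)
X, w_true = rng.normal(size=(100, 5)), np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ w_true
best_w, best_loss = black_hole_optimize(lambda w: float(np.mean((X @ w - y) ** 2)), dim=5)
print(np.round(best_w, 2), round(best_loss, 4))
```
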
