Search Results (4,721)

Search Parameters:
Keywords = Artificial (General) Intelligence

18 pages, 11604 KiB  
Article
A Philosophical Framework for Data-Driven Miscomputations
by Alessandro G. Buda, Chiara Manganini and Giuseppe Primiero
Philosophies 2025, 10(4), 88; https://doi.org/10.3390/philosophies10040088 (registering DOI) - 6 Aug 2025
Abstract
This paper introduces a first approach to miscomputations for data-driven systems. First, we establish an ontology for data-driven learning systems and categorize various computational errors based on the Levels of Abstraction ontology. Next, we turn to computational errors associated with users’ evaluations and requirements, adopting a user-level ontology and identifying two additional types of miscomputation.
(This article belongs to the Special Issue Semantics and Computation)

16 pages, 824 KiB  
Article
ChatGPT and Microsoft Copilot for Cochlear Implant Side Selection: A Preliminary Study
by Daniele Portelli, Sabrina Loteta, Mariangela D’Angelo, Cosimo Galletti, Leonard Freni, Rocco Bruno, Francesco Ciodaro, Angela Alibrandi and Giuseppe Alberti
Audiol. Res. 2025, 15(4), 100; https://doi.org/10.3390/audiolres15040100 - 6 Aug 2025
Abstract
Background/Objectives: Artificial Intelligence (AI) is increasingly being applied in otolaryngology, including cochlear implants (CIs). This study evaluates the accuracy and completeness of ChatGPT-4 and Microsoft Copilot in determining the appropriate implantation side based on audiological and radiological data, as well as the presence of tinnitus. Methods: Data from 22 CI patients (11 males, 11 females; 12 right-sided, 10 left-sided implants) were used to query both AI models. Each patient’s audiometric thresholds, hearing aid benefit, tinnitus presence, and radiological findings were provided. The AI-generated responses were compared to the clinician-chosen sides. Accuracy and completeness were scored by two independent reviewers. Results: ChatGPT had a 50% concordance rate for right-side implantation and a 70% concordance rate for left-side implantation, while Microsoft Copilot achieved 75% and 90%, respectively. Chi-square tests showed significant associations between AI-suggested and clinician-chosen sides for both AI models (p < 0.05). ChatGPT outperformed Microsoft Copilot in identifying radiological alterations (60% vs. 40%) and tinnitus presence (77.8% vs. 66.7%). Cronbach’s alpha was >0.70 only for ChatGPT accuracy, indicating better agreement between reviewers. Conclusions: Both AI models showed significant alignment with clinician decisions. Microsoft Copilot was more accurate in implantation side selection, while ChatGPT better recognized radiological alterations and tinnitus. These results highlight AI’s potential as a clinical decision support tool in CI candidacy, although further research is needed to refine its application in complex cases.
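As a rough illustration of the concordance analysis described in this abstract, the sketch below tabulates AI-suggested against clinician-chosen implantation sides and runs a chi-square test of association with SciPy; the side assignments are fabricated placeholders rather than the study's patient data.

```python
# Illustrative sketch (not the authors' code): chi-square test of association
# between AI-suggested and clinician-chosen cochlear implant sides.
# The side lists below are fabricated placeholders, not study data.
import numpy as np
from scipy.stats import chi2_contingency

clinician = ["right", "left", "right", "left", "right", "left", "right", "left", "right", "left"]
ai_model  = ["right", "left", "left",  "left", "right", "left", "right", "right", "right", "left"]

# Build the 2x2 contingency table: rows = clinician choice, columns = AI suggestion.
sides = ["right", "left"]
table = np.array([[sum(c == r and a == s for c, a in zip(clinician, ai_model))
                   for s in sides] for r in sides])

concordance = np.trace(table) / table.sum()        # fraction of matching sides
chi2, p, dof, expected = chi2_contingency(table)   # test of association

print(f"Concordance: {concordance:.0%}, chi2 = {chi2:.2f}, p = {p:.3f}")
```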

19 pages, 253 KiB  
Article
The Application of Artificial Intelligence in Acute Prescribing in Homeopathy: A Comparative Retrospective Study
by Rachael Doherty, Parker Pracjek, Christine D. Luketic, Denise Straiges and Alastair C. Gray
Healthcare 2025, 13(15), 1923; https://doi.org/10.3390/healthcare13151923 - 6 Aug 2025
Abstract
Background/Objective: The use of artificial intelligence to assist in medical applications is an emerging area of investigation and discussion. The researchers studied whether there was a difference between homeopathy guidance provided by artificial intelligence (AI) (automated) and live professional practitioners (live) for acute illnesses. Additionally, the study explored the practical challenges associated with validating AI tools used for homeopathy and sought to generate insights on the potential value and limitations of these tools in the management of acute health complaints. Method: Randomly selected cases at a homeopathy teaching clinic (n = 100) were entered into a commercially available homeopathic remedy finder to investigate the consistency between automated and live recommendations. Client symptoms, medical disclaimers, remedies, and posology were compared. The findings of this study show that the purpose-built homeopathic remedy finder is not a one-to-one replacement for a live practitioner. Result: In the 100 cases compared, the automated online remedy finder provided between 1 and 20 prioritized remedy recommendations for each complaint, leaving the user to make the final remedy decision based on how well their characteristic symptoms were covered by each potential remedy. The live practitioner-recommended remedy was included somewhere among the automated results in 59% of the cases, appeared in the top three results in 37% of the cases, and was a top remedy match in 17% of the cases. The automated tool offered none of the guidance on managing remedy responses that is provided in live clinical settings. Conclusion: This study highlights the challenge and importance of validating AI remedy recommendations against real cases. The automated remedy finder used covered 74 acute complaints. The live cases from the teaching clinic included 22 of the 74 complaints.
(This article belongs to the Special Issue The Role of AI in Predictive and Prescriptive Healthcare)
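The study's headline figures reduce to checking where the practitioner's remedy falls in the automated finder's ranked list. A minimal sketch of that top-k inclusion calculation is given below, assuming invented case records and remedy names in place of the clinic's data.

```python
# Illustrative sketch (not the study's code): for each case, check whether the
# live practitioner's remedy appears anywhere, in the top 3, or as the top match
# of the automated finder's ranked recommendations. All records are placeholders.
cases = [
    {"live": "Arnica",     "automated": ["Arnica", "Bryonia", "Rhus tox"]},
    {"live": "Belladonna", "automated": ["Aconite", "Belladonna", "Gelsemium", "Ferrum phos"]},
    {"live": "Nux vomica", "automated": ["Pulsatilla", "Ipecac"]},
]

def inclusion_rate(cases, k=None):
    """Share of cases whose live remedy appears in the automated list (top-k if k is set)."""
    hits = sum(1 for c in cases
               if c["live"] in (c["automated"][:k] if k else c["automated"]))
    return hits / len(cases)

print(f"included anywhere: {inclusion_rate(cases):.0%}")
print(f"in top 3:          {inclusion_rate(cases, k=3):.0%}")
print(f"top match:         {inclusion_rate(cases, k=1):.0%}")
```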
26 pages, 2638 KiB  
Article
How Explainable Really Is AI? Benchmarking Explainable AI
by Giacomo Bergami and Oliver Robert Fox
Logics 2025, 3(3), 9; https://doi.org/10.3390/logics3030009 (registering DOI) - 6 Aug 2025
Abstract
This work explores the possibility of deriving a unifying artificial intelligence framework in the spirit of General, Explainable, and Verified Artificial Intelligence (GEVAI): by considering explainability not only at the level of the results produced by a specification but also for the inference process and the data processing step, we can both ensure human explainability of the process leading to the final results and mitigate machine faults that lead to incorrect results. This, in turn, requires the adoption of automated verification processes beyond system fine-tuning, which are especially relevant in an increasingly interconnected world. The challenges of fully automating a data processing pipeline, which mostly requires human-in-the-loop approaches, force us to tackle the framework from a different perspective: alongside a preliminary implementation of GEVAI, used mainly as an AI test-bed interconnecting different state-of-the-art AI algorithms, we propose two further data processing pipelines, LaSSI and EMeriTAte+DF, each a specific instantiation of GEVAI for a specific problem (Natural Language Processing and Multivariate Time Series Classification, respectively). Preliminary results from our ongoing work strengthen the position of the proposed framework by showing it to be a viable path toward improving current state-of-the-art AI algorithms.

30 pages, 2414 KiB  
Review
Melittin-Based Nanoparticles for Cancer Therapy: Mechanisms, Applications, and Future Perspectives
by Joe Rizkallah, Nicole Charbel, Abdallah Yassine, Amal El Masri, Chris Raffoul, Omar El Sardouk, Malak Ghezzawi, Therese Abou Nasr and Firas Kreidieh
Pharmaceutics 2025, 17(8), 1019; https://doi.org/10.3390/pharmaceutics17081019 - 6 Aug 2025
Abstract
Melittin, a cytolytic peptide derived from honeybee venom, has demonstrated potent anticancer activity through mechanisms such as membrane disruption, apoptosis induction, and modulation of key signaling pathways. Melittin exerts its anticancer activity by interacting with key molecular targets, including downregulation of the PI3K/Akt and NF-κB signaling pathways, and by inducing mitochondrial apoptosis through reactive oxygen species generation and cytochrome c release. However, its clinical application is hindered by its systemic and hemolytic toxicity, rapid degradation in plasma, poor pharmacokinetics, and immunogenicity, necessitating the development of targeted delivery strategies to enable safe and effective treatment. Nanoparticle-based delivery systems have emerged as a promising strategy for overcoming these challenges, offering improved tumor targeting, reduced off-target effects, and enhanced stability. This review provides a comprehensive overview of the mechanisms through which melittin exerts its anticancer effects and evaluates the development of various melittin-loaded nanocarriers, including liposomes, polymeric nanoparticles, dendrimers, micelles, and inorganic systems. It also summarizes the preclinical evidence for melittin nanotherapy across a wide range of cancer types, highlighting both its cytotoxic and immunomodulatory effects. The potential of melittin nanoparticles to overcome multidrug resistance and synergize with chemotherapy, immunotherapy, photothermal therapy, and radiotherapy is discussed. Despite promising in vitro and in vivo findings, its clinical translation remains limited. Key barriers include toxicity, manufacturing scalability, regulatory approval, and the need for more extensive in vivo validation. A key future direction is the application of computational tools, such as physiologically based pharmacokinetic modeling and artificial-intelligence-based modeling, to streamline development and guide its clinical translation. Addressing these challenges through focused research and interdisciplinary collaboration will be essential to realizing the full therapeutic potential of melittin-based nanomedicines in oncology. Overall, this review synthesizes the findings from over 100 peer-reviewed studies published between 2008 and 2025, providing an up-to-date assessment of melittin-based nanomedicine strategies across diverse cancer types.
(This article belongs to the Special Issue Development of Novel Tumor-Targeting Nanoparticles, 2nd Edition)

16 pages, 2750 KiB  
Article
Combining Object Detection, Super-Resolution GANs and Transformers to Facilitate Tick Identification Workflow from Crowdsourced Images on the eTick Platform
by Étienne Clabaut, Jérémie Bouffard and Jade Savage
Insects 2025, 16(8), 813; https://doi.org/10.3390/insects16080813 - 6 Aug 2025
Abstract
Ongoing changes in the distribution and abundance of several tick species of medical relevance in Canada have prompted the development of the eTick platform—an image-based crowd-sourcing public surveillance tool for Canada enabling rapid tick species identification by trained personnel, and public health guidance based on tick species and province of residence of the submitter. Considering that more than 100,000 images from over 73,500 identified records representing 25 tick species have been submitted to eTick since the public launch in 2018, a partial automation of the image processing workflow could save substantial human resources, especially as submission numbers have been steadily increasing since 2021. In this study, we evaluate an end-to-end artificial intelligence (AI) pipeline to support tick identification from eTick user-submitted images, characterized by heterogeneous quality and uncontrolled acquisition conditions. Our framework integrates (i) tick localization using a fine-tuned YOLOv7 object detection model, (ii) resolution enhancement of cropped images via super-resolution Generative Adversarial Networks (RealESRGAN and SwinIR), and (iii) image classification using deep convolutional (ResNet-50) and transformer-based (ViT) architectures across three datasets (12, 6, and 3 classes) of decreasing granularities in terms of taxonomic resolution, tick life stage, and specimen viewing angle. ViT consistently outperformed ResNet-50, especially in complex classification settings. The configuration yielding the best performance—relying on object detection without incorporating super-resolution—achieved a macro-averaged F1-score exceeding 86% in the 3-class model (Dermacentor sp., other species, bad images), with minimal critical misclassifications (0.7% of “other species” misclassified as Dermacentor). Given that Dermacentor ticks represent more than 60% of tick volume submitted on the eTick platform, the integration of a low granularity model in the processing workflow could save significant time while maintaining very high standards of identification accuracy. Our findings highlight the potential of combining modern AI methods to facilitate efficient and accurate tick image processing in community science platforms, while emphasizing the need to adapt model complexity and class resolution to task-specific constraints.
(This article belongs to the Section Medical and Livestock Entomology)
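A compressed sketch of the detect-crop-classify workflow described in this abstract appears below. The detector stub, image paths, and labels are stand-ins: the study fine-tunes YOLOv7 for localization and compares ResNet-50 with ViT classifiers, whereas this sketch only wires a generic ResNet-50 head to a placeholder bounding box and scores predictions with a macro-averaged F1.

```python
# Illustrative sketch (not the eTick pipeline code): crop a detected tick from a
# submitted photo, classify the crop with a 3-class CNN head, and score the model
# with a macro-averaged F1, mirroring the low-granularity setting described above.
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.metrics import f1_score

CLASSES = ["Dermacentor sp.", "other species", "bad image"]

# Placeholder for the object-detection stage (the study fine-tunes YOLOv7).
def detect_tick_box(image):
    # A real detector would return (left, upper, right, lower) around the tick.
    return (0, 0, image.width, image.height)

# 3-class classifier head on a ResNet-50 backbone (a ViT would be a drop-in alternative).
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def classify(image_path):
    image = Image.open(image_path).convert("RGB")
    crop = image.crop(detect_tick_box(image))   # detection -> crop
    batch = preprocess(crop).unsqueeze(0)        # optional super-resolution would go here
    with torch.no_grad():
        return int(model(batch).argmax(dim=1))

# Macro F1 over a labelled hold-out set (indices into CLASSES); values are dummies.
y_true = [0, 0, 1, 2, 1, 0]
y_pred = [0, 0, 1, 2, 0, 0]
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```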

22 pages, 3804 KiB  
Article
Enabling Intelligent 6G Communications: A Scalable Deep Learning Framework for MIMO Detection
by Muhammad Yunis Daha, Ammu Sudhakaran, Bibin Babu and Muhammad Usman Hadi
Telecom 2025, 6(3), 58; https://doi.org/10.3390/telecom6030058 - 6 Aug 2025
Abstract
Artificial intelligence (AI) has emerged as a transformative technology in the evolution of massive multiple-input multiple-output (ma-MIMO) systems, positioning them as a cornerstone for sixth-generation (6G) wireless networks. Despite their significant potential, ma-MIMO systems face critical challenges at the receiver end, particularly in signal detection under high-dimensional and noisy environments. To address these limitations, this paper proposes MIMONet, a novel deep learning (DL)-based MIMO detection framework built upon a lightweight and optimized feedforward neural network (FFNN) architecture. MIMONet is specifically designed to achieve a balance between detection performance and complexity by optimizing the neural network architecture for MIMO signal detection tasks. Through extensive simulations across multiple MIMO configurations, the proposed MIMONet detector consistently demonstrates superior bit error rate (BER) performance. It achieves a notably lower error rate compared to conventional benchmark detectors, particularly under moderate to high signal-to-noise ratio (SNR) conditions. In addition to its enhanced detection accuracy, MIMONet maintains significantly reduced computational complexity, highlighting its practical feasibility for advanced wireless communication systems. These results validate the effectiveness of the MIMONet detector in optimizing detection accuracy without imposing excessive processing burdens. Moreover, the architectural flexibility and efficiency of MIMONet lay a solid foundation for future extensions toward large-scale ma-MIMO configurations, paving the way for practical implementations in beyond-5G (B5G) and 6G communication infrastructures.
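MIMONet's exact architecture is not given in the abstract, so the sketch below shows only the general idea of a lightweight feedforward detector trained to recover BPSK symbols over a small MIMO channel; the layer sizes, training loop, and channel model are assumptions for illustration, not the paper's design.

```python
# Illustrative sketch (not MIMONet itself): a lightweight feedforward detector for
# a real-valued 4x4 MIMO model y = Hx + n with BPSK symbols. All hyperparameters
# below are assumptions chosen for a quick demonstration.
import torch
import torch.nn as nn

NT = NR = 4  # transmit / receive antennas

class FFNNDetector(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Input: received vector y concatenated with the flattened channel matrix H.
        self.net = nn.Sequential(
            nn.Linear(NR + NR * NT, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NT), nn.Tanh(),   # soft BPSK estimates in [-1, 1]
        )

    def forward(self, y, H):
        return self.net(torch.cat([y, H.flatten(1)], dim=1))

def batch(n, snr_db=10.0):
    x = torch.randint(0, 2, (n, NT)).float() * 2 - 1        # BPSK symbols
    H = torch.randn(n, NR, NT) / NT ** 0.5                   # random channels
    noise = torch.randn(n, NR) * 10 ** (-snr_db / 20)
    y = torch.bmm(H, x.unsqueeze(2)).squeeze(2) + noise
    return x, H, y

model = FFNNDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                         # short demo training loop
    x, H, y = batch(256)
    loss = nn.functional.mse_loss(model(y, H), x)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                                        # estimate the bit error rate
    x, H, y = batch(10_000)
    ber = (model(y, H).sign() != x).float().mean()
print(f"BER at 10 dB SNR: {ber.item():.4f}")
```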

25 pages, 502 KiB  
Article
Passing with ChatGPT? Ethical Evaluations of Generative AI Use in Higher Education
by Antonio Pérez-Portabella, Mario Arias-Oliva, Graciela Padilla-Castillo and Jorge de Andrés-Sánchez
Digital 2025, 5(3), 33; https://doi.org/10.3390/digital5030033 - 6 Aug 2025
Abstract
The emergence of generative artificial intelligence (GenAI) in higher education offers new opportunities for academic support while also raising complex ethical concerns. This study explores how university students ethically evaluate the use of GenAI in three academic contexts: improving essay writing, preparing for exams, and generating complete essays without personal input. Drawing on the Multidimensional Ethics Scale (MES), the research assesses five philosophical frameworks—moral equity, relativism, egoism, utilitarianism, and deontology—based on a survey conducted among undergraduate social sciences students in Spain. The findings reveal that students generally view GenAI use as ethically acceptable when used to improve or prepare content, but express stronger ethical concerns when authorship is replaced by automation. Gender and full-time employment status also influence ethical evaluations: women respond differently than men in utilitarian dimensions, while working students tend to adopt a more relativist stance and are more tolerant of full automation. These results highlight the importance of context, individual characteristics, and philosophical orientation in shaping ethical judgments about GenAI use in academia.
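For readers unfamiliar with how a Multidimensional Ethics Scale analysis is typically run, the sketch below shows one hedged way to aggregate Likert items into dimension scores and compare groups; the items, their mapping to dimensions, and the respondents are fabricated and do not reflect the authors' instrument or sample.

```python
# Illustrative sketch (not the authors' analysis): average Likert items into MES
# dimensions and compare utilitarianism scores between two groups.
# The respondent data and item-to-dimension mapping are fabricated placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M"],
    "util_1":  [5, 3, 4, 2, 5, 3],      # two hypothetical utilitarianism items
    "util_2":  [4, 3, 5, 2, 4, 2],
    "deont_1": [2, 4, 3, 5, 2, 4],      # one hypothetical deontology item
})

df["utilitarianism"] = df[["util_1", "util_2"]].mean(axis=1)
df["deontology"] = df["deont_1"]

stat, p = mannwhitneyu(df.loc[df.gender == "F", "utilitarianism"],
                       df.loc[df.gender == "M", "utilitarianism"])
print(df[["gender", "utilitarianism", "deontology"]])
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```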

24 pages, 2345 KiB  
Article
Towards Intelligent 5G Infrastructures: Performance Evaluation of a Novel SDN-Enabled VANET Framework
by Abiola Ifaloye, Haifa Takruri and Rabab Al-Zaidi
Network 2025, 5(3), 28; https://doi.org/10.3390/network5030028 - 5 Aug 2025
Abstract
Critical Internet of Things (IoT) data in Fifth Generation Vehicular Ad Hoc Networks (5G VANETs) demands Ultra-Reliable Low-Latency Communication (URLLC) to support mission-critical vehicular applications such as autonomous driving and collision avoidance. Achieving the stringent Quality of Service (QoS) requirements for these applications remains a significant challenge. This paper proposes a novel framework integrating Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV) as embedded functionalities in connected vehicles. A lightweight SDN Controller model, implemented via vehicle on-board computing resources, optimised QoS for communications between connected vehicles and the Next-Generation Node B (gNB), achieving a consistent packet delivery rate of 100%, compared to 81–96% for existing solutions leveraging SDN. Furthermore, a Software-Defined Wide-Area Network (SD-WAN) model deployed at the gNB enabled the efficient management of data, network, identity, and server access. Performance evaluations indicate that SDN and NFV are reliable and scalable technologies for virtualised and distributed 5G VANET infrastructures. Our SDN-based in-vehicle traffic classification model for dynamic resource allocation achieved 100% accuracy, outperforming existing Artificial Intelligence (AI)-based methods with 88–99% accuracy. In addition, a significant increase of 187% in flow rates over time highlights the framework’s decreasing latency, adaptability, and scalability in supporting URLLC class guarantees for critical vehicular services.
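The in-vehicle traffic classification step mentioned above can be pictured as a small supervised classifier over per-flow features that an SDN controller queries before assigning a queue. The sketch below is a toy version under assumed features and flow records, not the paper's model.

```python
# Illustrative sketch (not the paper's model): classify vehicular flows into QoS
# classes from simple per-flow features so an SDN controller can prioritise
# URLLC-class traffic. Feature names and the training records are assumptions.
from sklearn.tree import DecisionTreeClassifier

# Per-flow features: [mean packet size (bytes), mean inter-arrival time (ms)]
X_train = [[120, 2], [90, 1], [100, 3],          # collision-avoidance style flows
           [1400, 30], [1200, 45], [1300, 25]]   # infotainment style flows
y_train = ["URLLC", "URLLC", "URLLC", "best-effort", "best-effort", "best-effort"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

new_flow = [[110, 2]]
qos_class = clf.predict(new_flow)[0]
# The controller would map the predicted class to a queue or slice, e.g. a
# high-priority queue for URLLC so latency-critical messages meet their targets.
print(f"assign flow to {qos_class} queue")
```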
26 pages, 1589 KiB  
Systematic Review
Machine Learning and Generative AI in Learning Analytics for Higher Education: A Systematic Review of Models, Trends, and Challenges
by Miguel Ángel Rodríguez-Ortiz, Pedro C. Santana-Mancilla and Luis E. Anido-Rifón
Appl. Sci. 2025, 15(15), 8679; https://doi.org/10.3390/app15158679 (registering DOI) - 5 Aug 2025
Abstract
This systematic review examines how machine learning (ML) and generative AI (GenAI) have been integrated into learning analytics (LA) in higher education (2018–2025). Following PRISMA 2020, we screened 9590 records and included 101 English-language, peer-reviewed empirical studies that applied ML or GenAI within LA contexts. Records came from 12 databases (last search 15 March 2025), and the results were synthesized via thematic clustering. ML approaches dominate LA tasks, such as engagement prediction, dropout-risk modelling, and academic-performance forecasting, whereas GenAI—mainly transformer models like GPT-4 and BERT—is emerging in real-time feedback, adaptive learning, and sentiment analysis. The included studies spanned multiple world regions. Most ML papers (n = 75) examined engagement or dropout, while GenAI papers (n = 26) focused on adaptive feedback and sentiment analysis. No formal risk-of-bias assessment was conducted due to the heterogeneity of the included studies. While ML methods are well-established, GenAI applications remain experimental and face challenges related to transparency, pedagogical grounding, and implementation feasibility. This review offers a comparative synthesis of paradigms and outlines future directions for responsible, inclusive, theory-informed AI use in education.

26 pages, 514 KiB  
Article
Improving Voice Spoofing Detection Through Extensive Analysis of Multicepstral Feature Reduction
by Leonardo Mendes de Souza, Rodrigo Capobianco Guido, Rodrigo Colnago Contreras, Monique Simplicio Viana and Marcelo Adriano dos Santos Bongarti
Sensors 2025, 25(15), 4821; https://doi.org/10.3390/s25154821 - 5 Aug 2025
Abstract
Voice biometric systems play a critical role in numerous security applications, including electronic device authentication, banking transaction verification, and confidential communications. Despite their widespread utility, these systems are increasingly targeted by sophisticated spoofing attacks that leverage advanced artificial intelligence techniques to generate realistic synthetic speech. Addressing the vulnerabilities inherent to voice-based authentication systems has thus become both urgent and essential. This study proposes a novel experimental analysis that extensively explores various dimensionality reduction strategies in conjunction with supervised machine learning models to effectively identify spoofed voice signals. Our framework involves extracting multicepstral features followed by the application of diverse dimensionality reduction methods, such as Principal Component Analysis (PCA), Truncated Singular Value Decomposition (SVD), statistical feature selection (ANOVA F-value, Mutual Information), Recursive Feature Elimination (RFE), regularization-based LASSO selection, Random Forest feature importance, and Permutation Importance techniques. Empirical evaluation using the ASVSpoof 2017 v2.0 dataset measures the classification performance with the Equal Error Rate (EER) metric, achieving values of approximately 10%. Our comparative analysis demonstrates significant performance gains when dimensionality reduction methods are applied, underscoring their value in enhancing the security and effectiveness of voice biometric verification systems against emerging spoofing threats.
(This article belongs to the Special Issue Sensors and Machine-Learning Based Signal Processing)
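The evaluation loop described here (multicepstral features, a dimensionality reduction step, a supervised classifier, and an EER readout) can be sketched as follows; the feature matrix and labels are random stand-ins rather than ASVSpoof 2017 data, and PCA stands in for the full set of reduction methods compared in the paper.

```python
# Illustrative sketch (not the authors' pipeline): reduce multicepstral features with
# PCA, score utterances with a supervised model, and report an approximate Equal
# Error Rate (EER). Features and labels below are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))                # stand-in multicepstral feature vectors
y = rng.integers(0, 2, size=400)              # 1 = genuine, 0 = spoofed (random labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=20).fit(X_tr)          # dimensionality reduction step
clf = RandomForestClassifier(random_state=0).fit(pca.transform(X_tr), y_tr)
scores = clf.predict_proba(pca.transform(X_te))[:, 1]

# Approximate EER: the operating point where false-accept and false-reject rates meet.
fpr, tpr, _ = roc_curve(y_te, scores)
fnr = 1 - tpr
idx = np.argmin(np.abs(fnr - fpr))
eer = (fpr[idx] + fnr[idx]) / 2
print(f"EER ~ {eer:.2%}")
```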

29 pages, 16357 KiB  
Article
Evaluation of Heterogeneous Ensemble Learning Algorithms for Lithological Mapping Using EnMAP Hyperspectral Data: Implications for Mineral Exploration in Mountainous Region
by Soufiane Hajaj, Abderrazak El Harti, Amin Beiranvand Pour, Younes Khandouch, Abdelhafid El Alaoui El Fels, Ahmed Babeker Elhag, Nejib Ghazouani, Mustafa Ustuner and Ahmed Laamrani
Minerals 2025, 15(8), 833; https://doi.org/10.3390/min15080833 - 5 Aug 2025
Abstract
Hyperspectral remote sensing plays a crucial role in guiding and supporting various mineral prospecting activities. Combined with artificial intelligence, hyperspectral remote sensing technology becomes a powerful and versatile tool for a wide range of mineral exploration activities. This study investigates the effectiveness of ensemble learning (EL) algorithms for lithological classification and mineral exploration using EnMAP hyperspectral imagery (HSI) in a semi-arid region. The Moroccan Anti-Atlas mountainous region is known for its complex geology, high mineral potential and rugged terrain, making it a challenging area for mineral exploration. This research applies core and heterogeneous ensemble learning methods, i.e., boosting, stacking, voting, bagging, blending, and weighting, to improve the accuracy and robustness of lithological classification and mapping in the Moroccan Anti-Atlas mountainous region. Several state-of-the-art models, including support vector machines (SVMs), random forests (RFs), k-nearest neighbors (k-NNs), multi-layer perceptrons (MLPs), extra trees (ETs) and extreme gradient boosting (XGBoost), were evaluated and used as individual and ensemble classifiers. The results show that the EL methods clearly outperform (single) base classifiers. The potential of EL methods to improve the accuracy of HSI-based classification is emphasized by an optimal blending model that achieves the highest overall accuracy (96.69%). The heterogeneous EL models exhibit better generalization ability than the baseline (single) ML models in lithological classification. The current study contributes to a more reliable assessment of resources in mountainous and semi-arid regions by providing accurate delineation of lithological units for mineral exploration objectives.
(This article belongs to the Special Issue Feature Papers in Mineral Exploration Methods and Applications 2025)
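As a hedged illustration of the heterogeneous ensembles compared in this study, the sketch below builds soft-voting and stacking ensembles from a few of the named base learners with scikit-learn; the synthetic spectra and class count are placeholders for the EnMAP data and lithological units. Blending and weighting follow the same pattern, with a held-out meta set or per-model weights replacing the stacking meta-learner.

```python
# Illustrative sketch (not the study's workflow): heterogeneous ensembles built from
# a few of the base learners named above, evaluated on stand-in "pixel spectra".
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)   # 4 stand-in lithological units
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
    ("rf",  RandomForestClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
]

voting = VotingClassifier(estimators=base, voting="soft").fit(X_tr, y_tr)
stacking = StackingClassifier(estimators=base).fit(X_tr, y_tr)   # logistic meta-learner by default

print("voting accuracy:  ", voting.score(X_te, y_te))
print("stacking accuracy:", stacking.score(X_te, y_te))
```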

42 pages, 7526 KiB  
Review
Novel Nanomaterials for Developing Bone Scaffolds and Tissue Regeneration
by Nazim Uddin Emon, Lu Zhang, Shelby Dawn Osborne, Mark Allen Lanoue, Yan Huang and Z. Ryan Tian
Nanomaterials 2025, 15(15), 1198; https://doi.org/10.3390/nano15151198 - 5 Aug 2025
Abstract
Nanotechnologies are bringing a rapid paradigm shift to hard and soft bone tissue regeneration (BTR) through unprecedented control over the nanoscale structures and chemistry of biocompatible materials used to regenerate the intricate architecture and functional adaptability of bone. This review focuses on the transformative analyses and prospects of current and next-generation nanomaterials in designing bioactive bone scaffolds, emphasizing hierarchical architecture, mechanical resilience, and regenerative precision. In particular, this review elucidates the innovative findings, new capabilities, unmet challenges, and possible future opportunities associated with biocompatible inorganic ceramics (e.g., phosphates, metallic oxides) and United States Food and Drug Administration (USFDA)-approved synthetic polymers, including their nanoscale structures. Furthermore, this review demonstrates newly available approaches for achieving customized porosity, mechanical strength, and accelerated bioactivity to construct an optimized nanomaterial-oriented scaffold. Numerous strategies, including three-dimensional bioprinting, electrospinning techniques, and meticulous nanomaterials (NMs) fabrication, are well established for achieving scientific precision in BTR engineering. Contemporary research continues to decode the pathways for spatial and temporal release of osteoinductive agents to enhance targeted therapy and promote healing processes. Additionally, successful material design and the integration of osteoinductive and osteoconductive agents with contemporary technologies will bring substantial progress in this field. Furthermore, machine learning (ML) and artificial intelligence (AI) can further decode the current complexities of material design for BTR, although these methods call for an in-depth understanding of bone composition, its relationships with and impacts on biochemical processes, the distribution of stem cells on the matrix, and NM functionalization strategies for better scaffold development. Overall, this review integrates important technological progress with ethical considerations, aiming for a future where nanotechnology-facilitated bone regeneration is supported by enhanced functionality, safety, inclusivity, and long-term environmental responsibility. Specialized research designs that uphold ethical standards will be needed to resolve the challenges and questions the field presently faces.
(This article belongs to the Special Issue Applications of Functional Nanomaterials in Biomedical Science)

29 pages, 3266 KiB  
Article
Wavelet Multiresolution Analysis-Based Takagi–Sugeno–Kang Model, with a Projection Step and Surrogate Feature Selection for Spectral Wave Height Prediction
by Panagiotis Korkidis and Anastasios Dounis
Mathematics 2025, 13(15), 2517; https://doi.org/10.3390/math13152517 - 5 Aug 2025
Abstract
The accurate prediction of significant wave height presents a complex yet vital challenge in the field of ocean engineering. This capability is essential for disaster prevention, fostering sustainable development and deepening our understanding of various scientific phenomena. We explore the development of a comprehensive predictive methodology for wave height prediction by integrating novel Takagi–Sugeno–Kang fuzzy models within a multiresolution analysis framework. The multiresolution analysis is carried out with wavelets, prominent models characterised by their inherent multiresolution nature. The maximal overlap discrete wavelet transform is utilised to generate the detail and approximation components of the time series resulting from this multiresolution analysis. The novelty of the proposed model lies in its hybrid training approach, which combines least squares with AdaBound, a gradient-based algorithm derived from the deep learning literature. Significant wave height prediction is studied as a time series problem; hence, the appropriate inputs to the model are selected by a purpose-built surrogate-based wrapper algorithm. The developed wrapper-based algorithm employs Bayesian optimisation to deliver a fast and accurate method for feature selection. In addition, we introduce a projection step to further refine the approximation capabilities of the resulting predictive system. The proposed methodology is applied to a real-world time series pertaining to spectral wave height, obtained from the Poseidon operational oceanography system at the Institute of Oceanography, part of the Hellenic Center for Marine Research. Numerical studies showcase a high degree of approximation performance. The predictive scheme with the projection step yields a coefficient of determination of 0.9991, indicating a high level of accuracy. Furthermore, it outperforms the second-best comparative model by approximately 49% in terms of root mean squared error. Comparative evaluations against powerful artificial intelligence models, using regression metrics and hypothesis tests, underscore the effectiveness of the proposed methodology.
(This article belongs to the Special Issue Applications of Mathematics in Neural Networks and Machine Learning)
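The decomposition step can be pictured as splitting the wave-height series into same-length detail and approximation components that then feed the fuzzy predictor. The sketch below uses PyWavelets' stationary wavelet transform as a stand-in for the maximal overlap discrete wavelet transform, applied to a synthetic series rather than the Poseidon data.

```python
# Illustrative sketch (not the paper's model): a shift-invariant wavelet decomposition
# of a wave-height series into detail and approximation components. The stationary
# wavelet transform (swt) stands in for the maximal overlap discrete wavelet transform.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(512)                                  # length must be a multiple of 2**level for swt
series = 1.5 + 0.8 * np.sin(2 * np.pi * t / 64) + 0.2 * rng.normal(size=t.size)  # toy Hs series

level = 3
coeffs = pywt.swt(series, "db4", level=level)       # list of (approximation, detail) pairs per level

# Each component has the same length as the input, so lagged values of every
# component can be stacked as candidate inputs for the downstream fuzzy predictor.
features = np.column_stack([c for pair in coeffs for c in pair])
print(features.shape)                               # (512, 2 * level)
```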

15 pages, 4422 KiB  
Article
Advanced Deep Learning Methods to Generate and Discriminate Fake Images of Egyptian Monuments
by Daniyah Alaswad and Mohamed A. Zohdy
Appl. Sci. 2025, 15(15), 8670; https://doi.org/10.3390/app15158670 (registering DOI) - 5 Aug 2025
Abstract
Artificial intelligence technologies, particularly machine learning and computer vision, are increasingly being utilized to preserve, restore, and create immersive virtual experiences with cultural artifacts and sites, thus aiding in conserving cultural heritage and making it accessible to a global audience. This paper examines the performance of Generative Adversarial Networks (GANs), especially the Style-Based Generator Architecture (StyleGAN), as a deep learning approach for producing realistic images of Egyptian monuments. We used Sigmoid loss for Language–Image Pre-training (SigLIP) as an image–text alignment system to guide monument generation through semantic elements. We also studied truncation methods to regulate noise in the generated images and to identify the parameter settings that best balance faithful architectural representation against output diversity. An improved discriminator design that combined noise addition with squeeze-and-excitation blocks and a modified MinibatchStdLayer produced 27.5% better Fréchet Inception Distance performance than the original discriminator models. Moreover, differential evolution for latent-space optimization reduced alignment errors in specific monument generation tasks by about 15%. We evaluated truncation values from 0.1 to 1.0 and found 0.4 to 0.7 to be the best range, as it preserved architectural accuracy while retaining a wide variety of architectural elements. Our findings indicate that specific model optimization strategies produce superior outcomes by creating better-quality and historically accurate representations of diverse Egyptian monuments. Thus, the developed technology may be instrumental in generating educational and archaeological visualization assets while adding virtual tourism capabilities.
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)
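The truncation values swept in this study refer to the StyleGAN-style truncation trick, in which sampled latents are pulled toward the average latent by a factor psi. The sketch below shows that interpolation on stand-in latents; the mapping network and latent statistics are placeholders, not the trained model.

```python
# Illustrative sketch (not the authors' implementation): the StyleGAN-style truncation
# trick. Latents are pulled toward the average latent w_avg by a factor psi; smaller
# psi trades diversity for fidelity. The mapping network is replaced by a random stand-in.
import numpy as np

rng = np.random.default_rng(0)
W_DIM = 512

w_avg = rng.normal(size=W_DIM) * 0.1      # stand-in for the learned average latent
w = rng.normal(size=W_DIM)                # stand-in for mapping_network(z)

def truncate(w, w_avg, psi):
    """Interpolate a latent toward the average latent (psi = 1 leaves it unchanged)."""
    return w_avg + psi * (w - w_avg)

for psi in (0.1, 0.4, 0.7, 1.0):
    w_t = truncate(w, w_avg, psi)
    print(f"psi={psi:.1f}  distance from w_avg: {np.linalg.norm(w_t - w_avg):.2f}")
```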