Search Results (41)

Search Parameters:
Keywords = AI/ML predictive pointing

36 pages, 3174 KB  
Review
A Bibliometric-Systematic Literature Review (B-SLR) of Machine Learning-Based Water Quality Prediction: Trends, Gaps, and Future Directions
by Jeimmy Adriana Muñoz-Alegría, Jorge Núñez, Ricardo Oyarzún, Cristian Alfredo Chávez, José Luis Arumí and Lien Rodríguez-López
Water 2025, 17(20), 2994; https://doi.org/10.3390/w17202994 - 17 Oct 2025
Abstract
Predicting the quality of freshwater, both surface and groundwater, is essential for the sustainable management of water resources. This study collected 1822 articles from the Scopus database (2000–2024) and filtered them using Topic Modeling to create the study corpus. The B-SLR analysis identified exponential growth in scientific publications since 2020, indicating that this field has reached a stage of maturity. The results showed that the predominant techniques for predicting water quality, both for surface and groundwater, fall into three main categories: (i) ensemble models, with Bagging and Boosting representing 43.07% and 25.91%, respectively, particularly random forest (RF), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGB), along with their optimized variants; (ii) deep neural networks such as long short-term memory (LSTM) and convolutional neural network (CNN), which excel at modeling complex temporal dynamics; and (iii) traditional algorithms like artificial neural network (ANN), support vector machines (SVMs), and decision tree (DT), which remain widely used. Current trends point towards the use of hybrid and explainable architectures, with increased application of interpretability techniques. Emerging approaches such as Generative Adversarial Network (GAN) and Group Method of Data Handling (GMDH) for data-scarce contexts, Transfer Learning for knowledge reuse, and Transformer architectures that outperform LSTM in time series prediction tasks were also identified. Furthermore, the most studied water bodies (e.g., rivers, aquifers) and the most commonly used water quality indicators (e.g., WQI, EWQI, dissolved oxygen, nitrates) were identified. The B-SLR and Topic Modeling methodology provided a more robust, reproducible, and comprehensive overview of AI/ML/DL models for freshwater quality prediction, facilitating the identification of thematic patterns and research opportunities. Full article
(This article belongs to the Special Issue Machine Learning Applications in the Water Domain)
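
To make the class of ensemble predictors the review surveys concrete, here is a minimal sketch of a random-forest WQI regressor. The data file and feature columns are hypothetical stand-ins, not taken from the reviewed studies.

```python
# Minimal sketch of an ensemble water-quality predictor of the kind the review
# surveys: random forest regression of a WQI-style target from physico-chemical
# features. The CSV path and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("river_quality.csv")  # hypothetical monitoring dataset
features = ["pH", "dissolved_oxygen", "nitrates", "turbidity", "conductivity"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["WQI"], test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print("R2 on held-out samples:", r2_score(y_test, model.predict(X_test)))
```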

26 pages, 1008 KB  
Article
FedECPA: An Efficient Countermeasure Against Scaling-Based Model Poisoning Attacks in Blockchain-Based Federated Learning
by Rukayat Olapojoye, Tara Salman, Mohamed Baza and Ali Alshehri
Sensors 2025, 25(20), 6343; https://doi.org/10.3390/s25206343 - 14 Oct 2025
Viewed by 154
Abstract
Artificial intelligence (AI) and machine learning (ML) have become integral to various applications, leveraging vast amounts of heterogeneous, globally distributed Internet of Things (IoT) data to identify patterns and build accurate ML models for predictive tasks. Federated learning (FL) is a distributed ML technique developed to learn from such distributed data while ensuring privacy. Nevertheless, traditional FL requires a central server for aggregation, which can be a central point of failure and raises trust issues. Blockchain-based federated learning (BFL) has emerged as an FL extension that provides guaranteed decentralization alongside other security assurances. However, due to the inherent openness of blockchain, BFL comes with several vulnerabilities that remain unexplored in the literature, e.g., a higher possibility of model poisoning attacks. This paper investigates how scaling-based model poisoning attacks are made easier in BFL systems and their effects on model performance. Subsequently, it proposes FedECPA, an extension of the FedAvg aggregation algorithm with an Efficient Countermeasure against scaling-based model Poisoning Attacks in BFL. FedECPA filters out clients with outlier weights and protects the model against these attacks. Several experiments are conducted with different attack scenarios and settings. We further compared our results to a frequently used defense mechanism, Multikrum. The results show the effectiveness of our defense mechanism in protecting BFL from these attacks. On the MNIST dataset, it maintains overall accuracies of 98% and 89% and outperforms our baseline by 4% and 38% in the IID and non-IID settings, respectively. Similar results were achieved with the CIFAR-10 dataset. Full article
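
The abstract states that FedECPA extends FedAvg by filtering out clients with outlier weights, but does not give the filtering rule. The sketch below assumes a simple z-score test on client update norms purely as an illustrative stand-in, not the paper's actual criterion.

```python
# Illustrative stand-in for an outlier-filtering FedAvg aggregator.
# The z-score test on update norms is an assumption made for illustration only.
import numpy as np

def filtered_fedavg(client_updates, client_sizes, z_thresh=2.0):
    """Average client weight vectors, dropping clients whose update norm is a
    z-score outlier (a crude proxy for scaling-based poisoning)."""
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    z = (norms - norms.mean()) / (norms.std() + 1e-12)
    keep = [i for i in range(len(client_updates)) if abs(z[i]) <= z_thresh]
    weights = np.array([client_sizes[i] for i in keep], dtype=float)
    weights /= weights.sum()
    return sum(w * client_updates[i] for w, i in zip(weights, keep))

# Toy example: one client scales its update heavily, mimicking a poisoning attempt.
honest = [np.random.randn(10) * 0.1 for _ in range(9)]
poisoned = honest + [np.random.randn(10) * 5.0]
global_update = filtered_fedavg(poisoned, client_sizes=[100] * 10)
print(global_update)
```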

41 pages, 1713 KB  
Review
A Review of Pointing Modules and Gimbal Systems for Free-Space Optical Communication in Non-Terrestrial Platforms
by Dhruv and Hemani Kaushal
Photonics 2025, 12(10), 1001; https://doi.org/10.3390/photonics12101001 - 11 Oct 2025
Viewed by 195
Abstract
As the world advances technologically, the integration of FSO communication in non-terrestrial platforms is transforming the landscape of global connectivity. By enabling high-data-rate inter-satellite links, secure UAV–ground channels, and efficient HAPS backhaul, FSO technology is paving the way for sustainable 6G non-terrestrial networks. However, the stringent requirement for precise line-of-sight (LoS) alignment between the optical transmitter and receivers poses a hindrance to practical deployment. As non-terrestrial missions require continuous movement across the mission area, the platform is subject to vibrations, dynamic motion, and environmental disturbances. This makes maintaining the LoS between the transceivers difficult. While fine-pointing mechanisms such as fast steering mirrors and adaptive optics are effective for microradian angular corrections, they rely heavily on an initial coarse alignment to maintain the LoS. Coarse pointing modules or gimbals serve as the primary mechanical interface for steering and stabilizing the optical beam over wide angular ranges. This survey presents a comprehensive analysis of coarse pointing and gimbal modules that are being used in FSO communication systems for non-terrestrial platforms. The paper classifies gimbal architectures based on actuation type, degrees of freedom, and stabilization strategies. Key design trade-offs are examined, including angular precision, mechanical inertia, bandwidth, and power consumption, which directly impact system responsiveness and tracking accuracy. This paper also highlights emerging trends such as AI-driven pointing prediction and lightweight gimbal design for SWaP-constrained platforms. The final part of the paper discusses open challenges and research directions in developing scalable and resilient coarse pointing systems for aerial FSO networks. Full article
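
The review flags AI-driven pointing prediction as an emerging trend. Purely as an illustration (not a method from the paper), the sketch below trains a small regressor to predict the next coarse-pointing error from a short history of errors, the kind of feed-forward cue a gimbal controller could consume; the synthetic signal, window length, and model choice are all assumptions.

```python
# Purely illustrative: predict the next azimuth pointing error from a short
# history of errors, as a feed-forward cue for a coarse-pointing gimbal.
# The synthetic disturbance, windowing scheme, and model are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(5000) * 0.01
error = 0.5 * np.sin(2 * np.pi * 0.7 * t) + 0.05 * rng.standard_normal(t.size)

window = 20  # number of past samples used as features
X = np.stack([error[i:i + window] for i in range(error.size - window)])
y = error[window:]  # one-step-ahead target

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:4000], y[:4000])
print("test MSE:", np.mean((model.predict(X[4000:]) - y[4000:]) ** 2))
```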

37 pages, 2546 KB  
Review
POC Sensor Systems and Artificial Intelligence—Where We Are Now and Where We Are Going?
by Prashanthi Kovur, Krishna M. Kovur, Dorsa Yahya Rayat and David S. Wishart
Biosensors 2025, 15(9), 589; https://doi.org/10.3390/bios15090589 - 8 Sep 2025
Viewed by 1203
Abstract
The integration of machine learning (ML) and artificial intelligence (AI) into point-of-care (POC) sensor systems represents a transformative advancement in healthcare. This integration enables sophisticated data analysis and real-time decision-making in emergency and intensive care settings. AI and ML algorithms can process complex biomedical data, improve diagnostic accuracy, and enable early disease detection for better patient outcomes. Predictive analytics in POC devices supports proactive healthcare by analyzing data to forecast health issues and facilitating early intervention and personalized treatment. This review covers the key areas of ML and AI integration in POC devices, including data analysis, pattern recognition, real-time decision support, predictive analytics, personalization, automation, and workflow optimization. Examples of current POC devices that use ML and AI are also discussed, including AI-powered blood glucose monitors, portable imaging devices, wearable cardiac monitors, AI-enhanced infectious disease detection, and smart wound care sensors. The review further explores new directions for POC sensors and ML integration, including mental health monitoring, nutritional monitoring, metabolic health tracking, and decentralized clinical trials (DCTs). We also examine the impact of integrating ML and AI into POC devices on healthcare accessibility, efficiency, and patient outcomes. Full article

21 pages, 493 KB  
Proceeding Paper
Natural Hazards and Spatial Data Infrastructures (SDIs) for Disaster Risk Reduction
by Michail-Christos Tsoutsos and Vassilios Vescoukis
Eng. Proc. 2025, 87(1), 101; https://doi.org/10.3390/engproc2025087101 - 5 Aug 2025
Viewed by 556
Abstract
In the absence of disaster prevention measures, natural hazards can lead to disasters. An essential part of disaster risk management is the geospatial modeling of devastating hazards, where data sharing is of paramount importance in the context of early-warning systems. This research points out the usefulness of Spatial Data Infrastructures (SDIs) for disaster risk reduction through a literature review, focusing on the necessity of data unification and availability. Initially, the principles of SDIs are presented, given that this framework contributes significantly to the fulfilment of specific targets and priorities of the Sendai Framework for Disaster Risk Reduction 2015–2030. Thereafter, the challenges of SDIs are investigated in order to underline the main drawbacks that stakeholders in emergency management have to contend with, namely semantic misalignment that impedes efficient data retrieval, malfunctions in the interoperability of datasets and web services, the non-availability of data despite their existence, and a lack of quality data, while also highlighting the obstacles encountered in real case studies of national SDIs. As a result, diachronic observations of disasters cannot be compiled, even though these would comprise a meaningful dataset for disaster mitigation. Consequently, the harmonization of national SDIs with international schemes, such as the Group on Earth Observations (GEO) and the European Union's space program Copernicus, and the usefulness of Artificial Intelligence (AI) and Machine Learning (ML) for disaster mitigation through the prediction of natural hazards are demonstrated. For the purpose of disaster preparedness, real-world implementation barriers that preclude SDIs from being completed or deter their functionality are presented, culminating in proposed future research directions and topics for SDIs that need further investigation. SDIs constitute an ongoing collaborative effort intended to offer valuable operational tools for decision-making under the threat of a devastating event. Despite the operational potential of SDIs, the complexity of data standardization and coordination remains a core challenge. Full article
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)

33 pages, 14482 KB  
Article
AI-Driven Surrogate Model for Room Ventilation
by Jaume Luis-Gómez, Francisco Martínez, Alejandro González-Barberá, Javier Mascarós, Guillem Monrós-Andreu, Sergio Chiva, Elisa Borrás and Raúl Martínez-Cuenca
Fluids 2025, 10(7), 163; https://doi.org/10.3390/fluids10070163 - 26 Jun 2025
Cited by 2 | Viewed by 998
Abstract
The control of ventilation systems is often performed by automatic algorithms that do not consider the future evolution of the system in their control policies. Digital twins allow system forecasting for more sophisticated control. This paper explores a novel methodology to create a Machine Learning (ML) model for the predictive control of a ventilation system, combining Computational Fluid Dynamics (CFD) with Artificial Intelligence (AI). The predictive model forecasts the temperature and humidity evolution of a ventilated room and is intended for implementation in a digital twin enabling better unsupervised control strategies. To replicate the full range of annual conditions, a series of CFD simulations were configured and executed based on seasonal data collected by sensors positioned inside and outside the room. These simulations generated a dataset used to develop the predictive model, which was based on a Deep Neural Network (DNN) with fully connected layers. The model's performance was evaluated, yielding final average absolute errors of 0.34 K for temperature and 2.2 percentage points for relative humidity. The presented results highlight the potential of this methodology to create AI-driven digital twins for the control of room ventilation. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence in Fluid Mechanics)
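
A minimal sketch of a fully connected surrogate of the kind described, mapping assumed room and boundary conditions to next-step temperature and relative humidity. The feature layout, layer sizes, and random stand-in data are assumptions, not the authors' configuration.

```python
# Minimal fully connected surrogate sketch (not the authors' exact network):
# maps current room state and boundary conditions to next-step temperature and
# relative humidity. Feature layout and sizes are assumed for illustration.
import numpy as np
from tensorflow import keras

n_features = 6  # e.g. T_room, RH_room, T_out, RH_out, vent flow, solar load
X = np.random.rand(10_000, n_features).astype("float32")  # stand-in for CFD samples
y = np.random.rand(10_000, 2).astype("float32")           # [T_next, RH_next]

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(2),  # temperature and humidity outputs
])
model.compile(optimizer="adam", loss="mae")  # MAE mirrors the reported error metric
model.fit(X, y, epochs=10, batch_size=256, validation_split=0.1, verbose=0)
```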

32 pages, 1435 KB  
Review
Resistance in Lung Cancer Immunotherapy and How to Overcome It: Insights from the Genetics Perspective and Combination Therapies Approach
by Paweł Zieliński, Maria Stępień, Hanna Chowaniec, Kateryna Kalyta, Joanna Czerniak, Martyna Borowczyk, Ewa Dwojak, Magdalena Mroczek, Grzegorz Dworacki, Antonina Ślubowska, Hanna Markiewicz, Rafał Ałtyn and Paula Dobosz
Cells 2025, 14(8), 587; https://doi.org/10.3390/cells14080587 - 12 Apr 2025
Cited by 2 | Viewed by 3810
Abstract
Lung cancer, with the highest number of new cases diagnosed in Europe and in Poland, remains an example of a malignancy with a very poor prognosis despite the recent progress in medicine. Different treatment strategies are now available for cancer therapy based on its type, molecular subtype and other factors, including overall health, the stage of disease and the cancer's molecular profile. Immunotherapy is emerging as a potential addition to surgery, chemotherapy, radiotherapy or other targeted therapies, but is also considered a mainstay therapy mode. This combination is an area of active investigation in order to enhance efficacy and overcome resistance. Due to the complexity and dynamics of the cancer ecosystem, novel therapeutic targets and strategies require continued research into the cellular and molecular mechanisms within the tumour microenvironment. From the genetic point of view, several signatures, ranging from a few mutated genes to hundreds of them, have been identified and associated with therapy resistance and metastatic potential. ML techniques and AI can enhance the predictive potential of genetic signatures and model the prognosis. Here, we present an overview of existing treatment approaches and the current findings on key aspects of immunotherapy, such as immune checkpoint inhibitors (ICIs), existing molecular biomarkers like PD-L1 expression, tumour mutation burden, immunoscore, and neoantigens, as well as their roles as predictive markers for treatment response and resistance. Full article
(This article belongs to the Special Issue Advances in Immunotherapy for Non-Small-Cell Lung Cancer)

39 pages, 3054 KB  
Review
Applications of Machine Learning in Food Safety and HACCP Monitoring of Animal-Source Foods
by Panagiota-Kyriaki Revelou, Efstathia Tsakali, Anthimia Batrinou and Irini F. Strati
Foods 2025, 14(6), 922; https://doi.org/10.3390/foods14060922 - 8 Mar 2025
Cited by 4 | Viewed by 6512
Abstract
Integrating advanced computing techniques into food safety management has attracted significant attention recently. Machine learning (ML) algorithms offer innovative solutions for Hazard Analysis Critical Control Point (HACCP) monitoring by providing advanced data analysis capabilities and have proven to be powerful tools for assessing the safety of Animal-Source Foods (ASFs). Studies that link ML with HACCP monitoring in ASFs are limited. The present review provides an overview of ML, feature extraction, and selection algorithms employed for food safety. Several non-destructive techniques are presented, including spectroscopic methods, smartphone-based sensors, paper chromogenic arrays, machine vision, and hyperspectral imaging combined with ML algorithms. Prospects include enhancing predictive models for food safety with the development of hybrid Artificial Intelligence (AI) models and the automation of quality control processes using AI-driven computer vision, which could revolutionize food safety inspections. However, addressing potential biases in AI models is vital to ensuring fair and accurate hazard assessments across a variety of food production settings. Moreover, improving the interpretability of ML models will make them more transparent and dependable. Conclusively, applying ML algorithms allows real-time monitoring and predictive analytics and can significantly reduce the risks associated with ASF consumption. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)

31 pages, 3248 KB  
Systematic Review
Diagnosis and Management of Sexually Transmitted Infections Using Artificial Intelligence Applications Among Key and General Populations in Sub-Saharan Africa: A Systematic Review and Meta-Analysis
by Claris Siyamayambo, Edith Phalane and Refilwe Nancy Phaswana-Mafuya
Algorithms 2025, 18(3), 151; https://doi.org/10.3390/a18030151 - 7 Mar 2025
Cited by 1 | Viewed by 2046
Abstract
The Fourth Industrial Revolution (4IR) has significantly impacted healthcare, including sexually transmitted infection (STI) management in Sub-Saharan Africa (SSA), particularly among key populations (KPs) with limited access to health services. This review investigates 4IR technologies, including artificial intelligence (AI) and machine learning (ML), that assist in diagnosing, treating, and managing STIs across SSA. By leveraging affordable and accessible solutions, 4IR tools support KPs who are disproportionately affected by STIs. Following systematic review guidelines using Covidence, this study examined 20 relevant studies conducted across 20 SSA countries, with Ethiopia, South Africa, and Zimbabwe emerging as the most researched nations. All the studies reviewed used secondary data and favored supervised ML models, with random forest and XGBoost frequently demonstrating high performance. These tools assist in tracking access to services, predicting risks of STI/HIV, and developing models for community HIV clusters. While AI has enhanced the accuracy of diagnostics and the efficiency of management, several challenges persist, including ethical concerns, issues with data quality, and a lack of expertise in implementation. There are few real-world applications or pilot projects in SSA. Notably, most of the studies primarily focus on the development, validation, or technical evaluation of the ML methods rather than their practical application or implementation. As a result, the actual impact of these approaches on the point of care remains unclear. This review highlights the effectiveness of various AI and ML methods in managing HIV and STIs through detection, diagnosis, treatment, and monitoring. The study strengthens knowledge on the practical application of 4IR technologies in diagnosing, treating, and managing STIs across SSA. Understanding this has potential to improve sexual health outcomes, address gaps in STI diagnosis, and surpass the limitations of traditional syndromic management approaches. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
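
As an illustration of the supervised models the reviewed studies favor (random forest and XGBoost on secondary data), here is a hedged risk-classification sketch; the data file and feature names are hypothetical, not drawn from any reviewed study.

```python
# Illustrative sketch only: a supervised risk classifier of the kind the
# reviewed studies report (random forest on tabular survey features).
# The CSV path and feature names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("sti_survey.csv")  # hypothetical secondary dataset
X = df[["age", "partners_12m", "condom_use", "prior_sti", "hiv_status_known"]]
y = df["sti_positive"]

clf = RandomForestClassifier(n_estimators=400, class_weight="balanced", random_state=1)
print("cross-validated ROC AUC:", cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean())
```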

35 pages, 4021 KB  
Review
Agricultural Non-Point Source Pollution: Comprehensive Analysis of Sources and Assessment Methods
by Fida Hussain, Shakeel Ahmed, Syed Muhammad Zaigham Abbas Naqvi, Muhammad Awais, Yanyan Zhang, Hao Zhang, Vijaya Raghavan, Yiheng Zang, Guoqing Zhao and Jiandong Hu
Agriculture 2025, 15(5), 531; https://doi.org/10.3390/agriculture15050531 - 28 Feb 2025
Cited by 11 | Viewed by 4472
Abstract
Agricultural non-point source pollution (ANPSP) significantly affects worldwide water quality, soil integrity, and ecosystems. Primary factors are nutrient runoff, pesticide leaching, and inadequate livestock waste management. Nonetheless, a thorough assessment of ANPSP sources and efficient control techniques is still lacking. This research delineates the origins and present state of ANPSP, emphasizing its influence on agricultural practices, livestock, and rural waste management. It assesses current evaluation models, encompassing field- and watershed-scale methodologies, and investigates novel technologies such as Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT) that possess the potential to enhance pollution monitoring and predictive precision. The research examines strategies designed to alleviate ANPSP, such as sustainable agricultural practices, fertilizer reduction, and waste management technology, highlighting the necessity for integrated, real-time monitoring systems. This report presents a comprehensive analysis of current tactics, finds significant gaps, and offers recommendations for enhancing both research and policy initiatives to tackle ANPSP and foster sustainable farming practices. Full article
(This article belongs to the Section Agricultural Soils)

20 pages, 22126 KB  
Article
Nonlinear Load-Deflection Analysis of Steel Rebar-Reinforced Concrete Beams: Experimental, Theoretical and Machine Learning Analysis
by Muhammet Karabulut
Buildings 2025, 15(3), 432; https://doi.org/10.3390/buildings15030432 - 29 Jan 2025
Cited by 6 | Viewed by 1867
Abstract
The integration of cutting-edge technologies into reinforced concrete (RC) design is reshaping the construction industry, enabling smarter and more sustainable solutions. Among these, machine learning (ML), a subset of artificial intelligence (AI), has emerged as a transformative tool, offering unprecedented accuracy in prediction and optimization. This study investigated the flexural behavior of steel rebar RC beams, focusing on varying concrete compressive strengths via theoretical, experimental and ML analysis. Nine steel rebar RC beams with low (SC20), moderate (SC30) and high (SC40) concrete compressive strength, measuring 150 × 200 × 1100 mm, were produced and subjected to three-point bending tests. An average error of less than 5% was obtained between the theoretical and experimental ultimate load-carrying capacities of the reinforced concrete beams. By combining three-point bending experiments with ML-powered prediction models, this research bridges the gap between experimental insights and advanced analytical techniques. A groundbreaking aspect of this work is the deployment of 18 ML regression models using Python's PyCaret library to predict deflection values with an average accuracy of 95%. Notably, the K Neighbors Regressor and Gradient Boosting Regressor models demonstrated exceptional performance, providing fast, consistent and highly accurate predictions, making them an invaluable tool for structural engineers. The results revealed distinct failure mechanisms: SC30 and SC40 RC beams exhibited ductile flexural cracking, while SC20 RC beams showed brittle shear cracking and failure with sudden collapse. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
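
A sketch of the kind of PyCaret comparison workflow the study describes, where many regressors are ranked on beam test data; the CSV file and column names are hypothetical, not the authors' dataset.

```python
# Sketch of a PyCaret model-comparison workflow of the kind the study describes.
# The data file and column names are hypothetical stand-ins.
import pandas as pd
from pycaret.regression import setup, compare_models, predict_model

df = pd.read_csv("beam_tests.csv")  # e.g. columns: compressive_strength, load_kN, deflection_mm

setup(data=df, target="deflection_mm", session_id=42)
best = compare_models()             # ranks candidate regressors (KNN, gradient boosting, ...)
print(predict_model(best).head())   # hold-out predictions of mid-span deflection
```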

17 pages, 2734 KB  
Article
An Efficient Deep Learning Framework for Optimized Event Forecasting
by Emad Ul Haq Qazi, Muhammad Hamza Faheem, Tanveer Zia, Muhammad Imran and Iftikhar Ahmad
Information 2024, 15(11), 701; https://doi.org/10.3390/info15110701 - 4 Nov 2024
Viewed by 1877
Abstract
There have been several catastrophic events that have impacted multiple economies and resulted in thousands of fatalities, and such violence has generated severe political and financial crises. Multiple studies have been centered around the artificial intelligence (AI) and machine learning (ML) approaches that are most widely used in practice to detect or forecast violent activities. However, machine learning algorithms become less accurate in identifying and forecasting violent activity as data volume and complexity increase. For the prediction of future events, we propose a hybrid deep learning (DL)-based model that is composed of a convolutional neural network (CNN), long short-term memory (LSTM), and an attention layer to learn temporal features from the benchmark Global Terrorism Database (GTD). The GTD is an internationally recognized database that includes around 190,000 violent events and occurrences worldwide from 1970 to 2020. We took into account two factors for this experimental work: the type of event and the type of object used. The LSTM model first takes the complex features extracted by the CNN to determine the chronological links between data points, whereas the attention layer is used for the time-series prediction of an event. The results show that the proposed model achieved good accuracies for both cases (type of event and type of object) compared to benchmark studies using the same dataset (98.1% and 97.6%, respectively). Full article
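
A hedged sketch of a CNN + LSTM + attention classifier in the spirit of the proposed hybrid model; the window length, layer sizes, and number of classes are assumptions rather than the authors' configuration.

```python
# Hedged sketch of a CNN + LSTM + attention sequence classifier; sequence
# length, feature count, layer widths, and class count are assumed values.
from tensorflow import keras
from tensorflow.keras import layers

seq_len, n_features, n_classes = 30, 16, 9  # assumed input window and label count

inputs = keras.Input(shape=(seq_len, n_features))
x = layers.Conv1D(64, kernel_size=3, activation="relu", padding="same")(inputs)
x = layers.LSTM(64, return_sequences=True)(x)  # chronological links between steps
x = layers.Attention()([x, x])                 # self-attention over the LSTM outputs
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```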

19 pages, 24023 KB  
Article
Optimization and Multimachine Learning Algorithms to Predict Nanometal Surface Area Transfer Parameters for Gold and Silver Nanoparticles
by Steven M. E. Demers, Christopher Sobecki and Larry Deschaine
Nanomaterials 2024, 14(21), 1741; https://doi.org/10.3390/nano14211741 - 30 Oct 2024
Cited by 1 | Viewed by 1444
Abstract
Interactions between gold metallic nanoparticles and molecular dyes have been well described by the nanometal surface energy transfer (NSET) mechanism. However, the expansion and testing of this model for nanoparticles of different metal composition is needed to develop a greater variety of nanosensors for medical and commercial applications. In this study, the NSET formula was slightly modified in the size-dependent dampening constant and skin depth terms to allow for modeling of different metals as well as testing the quenching effects created by variously sized gold, silver, copper, and platinum nanoparticles. Overall, the metal nanoparticles followed the NSET prediction more closely than the Förster resonance energy transfer prediction, though scattering effects began to occur at nanoparticle diameters of 20 nm. To further improve the NSET theoretical equation, an attempt was made to fit the NSET theoretical curve onto the Au and Ag data points. An exhaustive grid search optimizer was applied over two variables in the ranges 0.1 ≤ C ≤ 2.0 and 0 ≤ α ≤ 4, representing the metal dampening constant and the orientation of the donor relative to the metal surface, respectively. Three different grid searches, starting from coarse (entire range) to finer (narrower range), resulted in more than one million total calculations with best-fit values C = 2.0 and α = 0.0736. The results improved the calculation, but further analysis needs to be conducted in order to find any additional missing physics. With that motivation, two artificial intelligence/machine learning (AI/ML) algorithms, multilayer perceptron and least absolute shrinkage and selection operator (LASSO) regression, gave a correlation coefficient, R², greater than 0.97, indicating that the small dataset was not overfitting and was method-independent. This analysis indicates that an investigation is warranted to focus on deeper physics-informed machine learning for the NSET equations. Full article
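
To illustrate the exhaustive grid search over C and α described above, the sketch below fits a placeholder quenching curve to synthetic data; the nset_efficiency function is an assumed stand-in, not the actual NSET expression.

```python
# Sketch of an exhaustive grid search over the two fit parameters mentioned in
# the abstract (0.1 <= C <= 2.0, 0 <= alpha <= 4). nset_efficiency is a
# placeholder curve, not the real NSET formula.
import numpy as np

def nset_efficiency(distance_nm, C, alpha):
    # Placeholder distance-dependent quenching curve, for illustration only.
    d0 = 5.0 * C * (1.0 + alpha / 4.0)  # assumed effective quenching distance
    return 1.0 / (1.0 + (distance_nm / d0) ** 4)

distances = np.linspace(2, 20, 30)  # hypothetical donor-metal separations (nm)
measured = nset_efficiency(distances, 1.2, 0.5) + 0.01 * np.random.randn(distances.size)

best = None
for C in np.arange(0.1, 2.0 + 1e-9, 0.01):       # coarse-to-fine search shown as one pass
    for alpha in np.arange(0.0, 4.0 + 1e-9, 0.05):
        sse = np.sum((nset_efficiency(distances, C, alpha) - measured) ** 2)
        if best is None or sse < best[0]:
            best = (sse, C, alpha)
print("best fit (SSE, C, alpha):", best)
```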

20 pages, 3602 KB  
Article
Effective Machine Learning Solution for State Classification and Productivity Identification: Case of Pneumatic Pressing Machine
by Alexandros Kolokas, Panagiotis Mallioris, Michalis Koutsiantzis, Christos Bialas, Dimitrios Bechtsis and Evangelos Diamantis
Machines 2024, 12(11), 762; https://doi.org/10.3390/machines12110762 - 30 Oct 2024
Cited by 3 | Viewed by 1575
Abstract
The fourth industrial revolution (Industry 4.0) brought significant changes in manufacturing, driven by technologies like artificial intelligence (AI), the Internet of Things (IoT), 5G, robotics, and big data analytics. For industries to remain competitive, the primary goals must be the improvement of the efficiency and safety of machinery, the reduction of production costs, and the enhancement of product quality. Predictive maintenance (PdM) utilizes historical data and AI models to diagnose equipment health and predict the remaining useful life (RUL), providing critical insights into machinery effectiveness and product manufacturing. This prediction is a key strategy to maximize the useful life of equipment, especially in large-scale and important infrastructures. This study focuses on developing an unsupervised machine state-classification solution utilizing real-world industrial measurements collected from a pneumatic pressing machine. Unsupervised machine learning (ML) models were tested to diagnose and output the working state of the pressing machine at each given point (offline, idle, pressing, defective). Our research contributes to extracting valuable insights regarding real-world industrial settings for PdM and production efficiency using unsupervised ML, promoting operational safety, cost reduction, and productivity enhancement in modern industries. Full article
(This article belongs to the Section Machines Testing and Maintenance)
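
An illustrative sketch of an unsupervised state-classification pipeline of the kind described: sensor snapshots are clustered into four groups that an engineer would then map to the offline, idle, pressing, and defective states; the data file and feature names are hypothetical.

```python
# Illustrative unsupervised state-classification sketch: cluster sensor
# snapshots from a pressing machine into four operating states for manual
# labeling. The CSV path and feature names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("pressing_machine_log.csv")  # hypothetical sensor log
X = StandardScaler().fit_transform(
    df[["air_pressure", "ram_position", "cycle_time", "motor_current"]]
)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["state_cluster"] = km.labels_  # clusters later mapped to named machine states
print(df.groupby("state_cluster").mean(numeric_only=True))
```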

26 pages, 19393 KB  
Article
ML Approaches for the Study of Significant Heritage Contexts: An Application on Coastal Landscapes in Sardinia
by Marco Cappellazzo, Giacomo Patrucco and Antonia Spanò
Heritage 2024, 7(10), 5521-5546; https://doi.org/10.3390/heritage7100261 - 5 Oct 2024
Cited by 2 | Viewed by 2237
Abstract
Remote Sensing (RS) and Geographic Information Science (GIS) techniques are powerful tools for spatial data collection, analysis, management, and digitization within cultural heritage frameworks. Despite their capabilities, challenges remain in automating data semantic classification for conservation purposes. To address this, leveraging airborne Light Detection And Ranging (LiDAR) point clouds, complex spatial analyses, and automated data structuring is crucial for supporting heritage preservation and knowledge processes. In this context, the present contribution investigates the latest Artificial Intelligence (AI) technologies for automating the structuring of existing LiDAR data, focusing on the case study of the Sardinia coastlines. Moreover, the study preliminarily addresses automation challenges from the perspective of mapping historical defensive landscapes. Since historical defensive architectures and landscapes are characterized by several challenging complexities, including their association with dark periods in recent history and their chronological stratification, their digitization and preservation are highly multidisciplinary issues. This research aims to improve data structuring automation in these large heritage contexts with a multiscale approach by applying Machine Learning (ML) techniques to low-scale 3D Airborne Laser Scanning (ALS) point clouds. The study thus develops a predictive Deep Learning Model (DLM) for the semantic segmentation of sparse point clouds (<10 pts/m²), adaptable to large landscape heritage contexts and heterogeneous data scales. Additionally, a preliminary investigation into object-detection methods has been conducted to map specific fortification artifacts efficiently. Full article
