Search Results (2,803)

Search Parameters:
Keywords = automatic collection

32 pages, 13059 KiB  
Article
Verifying the Effects of the Grey Level Co-Occurrence Matrix and Topographic–Hydrologic Features on Automatic Gully Extraction in Dexiang Town, Bayan County, China
by Zhuo Chen and Tao Liu
Remote Sens. 2025, 17(15), 2563; https://doi.org/10.3390/rs17152563 - 23 Jul 2025
Abstract
Erosion gullies can reduce arable land area and decrease agricultural machinery efficiency; therefore, automatic gully extraction on a regional scale should be one of the preconditions of gully control and land management. The purpose of this study is to compare the effects of the grey level co-occurrence matrix (GLCM) and topographic–hydrologic features on automatic gully extraction and guide future practices in adjacent regions. To accomplish this, GaoFen-2 (GF-2) satellite imagery and high-resolution digital elevation model (DEM) data were first collected. The GLCM and topographic–hydrologic features were generated, and then, a gully label dataset was built via visual interpretation. Second, the study area was divided into training, testing, and validation areas, and four practices using different feature combinations were conducted. The DeepLabV3+ and ResNet50 architectures were applied to train five models in each practice. Third, the training set gully intersection over union (IOU), test set gully IOU, receiver operating characteristic curve (ROC), area under the curve (AUC), user’s accuracy, producer’s accuracy, Kappa coefficient, and gully IOU in the validation area were used to assess the performance of the models in each practice. The results show that the validated gully IOU was 0.4299 (±0.0082) when only the red (R), green (G), blue (B), and near-infrared (NIR) bands were applied, and solely combining the topographic–hydrologic features with the RGB and NIR bands significantly improved the performance of the models, which boosted the validated gully IOU to 0.4796 (±0.0146). Nevertheless, solely combining GLCM features with RGB and NIR bands decreased the accuracy, which resulted in the lowest validated gully IOU of 0.3755 (±0.0229). Finally, by employing the full set of RGB and NIR bands, the GLCM and topographic–hydrologic features obtained a validated gully IOU of 0.4762 (±0.0163) and tended to show an equivalent improvement with the combination of topographic–hydrologic features and RGB and NIR bands. A preliminary explanation is that the GLCM captures the local textures of gullies and their backgrounds, and thus introduces ambiguity and noise into the convolutional neural network (CNN). Therefore, the GLCM tends to provide no benefit to automatic gully extraction with CNN-type algorithms, while topographic–hydrologic features, which are also original drivers of gullies, help determine the possible presence of water-origin gullies when optical bands fail to tell the difference between a gully and its confusing background. Full article
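The gully IOU figures quoted above are standard mask-overlap scores. A minimal sketch of how such a score can be computed from binary prediction and label masks (our illustration, not the authors' code; the toy masks are invented):

```python
# Illustrative only: IoU of the gully (foreground) class between two binary masks.
import numpy as np

def gully_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 1.0

pred = np.array([[0, 1, 1, 0]] * 4)   # toy predicted gully mask
truth = np.array([[0, 1, 0, 0]] * 4)  # toy reference mask
print(gully_iou(pred, truth))         # 0.5
```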

20 pages, 8763 KiB  
Article
An Integrated Approach to Real-Time 3D Sensor Data Visualization for Digital Twin Applications
by Hyungki Kim and Hyowon Suh
Electronics 2025, 14(15), 2938; https://doi.org/10.3390/electronics14152938 - 23 Jul 2025
Abstract
Digital twin technology is emerging as a core technology that models physical objects or systems in a digital space and links real-time data to accurately reflect the state and behavior of the real world. For the effective operation of such digital twins, high-performance visualization methods that support an intuitive understanding of the vast amounts of data collected from sensors and enable rapid decision-making are essential. The proposed system is designed as a balanced 3D monitoring solution that prioritizes intuitive, real-time state observation. Conventional 3D-simulation-based systems, while offering high physical fidelity, are often unsuitable for real-time monitoring due to their significant computational cost. Conversely, 2D-based systems are useful for detailed analysis but struggle to provide an intuitive, holistic understanding of multiple assets within a spatial context. This study introduces a visualization approach that bridges this gap. By leveraging sensor data, our method generates a physically plausible representation on 3D CAD models, enabling at-a-glance comprehension in a visual format reminiscent of simulation analysis, without claiming equivalent physical accuracy. The proposed method includes GPU-accelerated interpolation, the user-selectable application of geodesic and Euclidean distance calculations, the automatic resolution of CAD model connectivity issues, the integration of Physically Based Rendering (PBR), and enhanced data interpretability through ramp shading. The proposed system was implemented in the Unity3D environment. Through various experiments, it was confirmed that the system maintained high real-time performance, achieving tens to hundreds of Frames Per Second (FPS), even with complex 3D models and numerous sensor data. Moreover, the application of geodesic distance yielded a more intuitive representation of surface-based phenomena, while PBR integration significantly enhanced visual realism, thereby enabling the more effective analysis and utilization of sensor data in digital twin environments. Full article
(This article belongs to the Section Computer Science & Engineering)
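As a rough illustration of the interpolation idea mentioned above (not the paper's implementation), the sketch below spreads sparse sensor readings over mesh vertices with inverse-distance weighting using Euclidean distance; the geodesic variant would substitute on-surface distances for `dists`. All array shapes and values are assumptions.

```python
# Illustrative IDW interpolation of sensor values onto mesh vertices (Euclidean variant).
import numpy as np

def idw_interpolate(vertices, sensor_pos, sensor_vals, power=2.0, eps=1e-9):
    # vertices: (V, 3), sensor_pos: (S, 3), sensor_vals: (S,)
    dists = np.linalg.norm(vertices[:, None, :] - sensor_pos[None, :, :], axis=-1)
    weights = 1.0 / (dists ** power + eps)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ sensor_vals          # (V,) interpolated value per vertex

vertices = np.random.rand(1000, 3)                      # stand-in CAD mesh vertices
sensors = np.array([[0.1, 0.2, 0.0], [0.8, 0.9, 0.1]])  # two sensor positions
values = np.array([21.5, 48.0])                         # e.g., temperature readings
field = idw_interpolate(vertices, sensors, values)
print(field.shape)                                       # (1000,)
```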

23 pages, 7173 KiB  
Article
LiDAR Data-Driven Deep Network for Ship Berthing Behavior Prediction in Smart Port Systems
by Jiyou Wang, Ying Li, Hua Guo, Zhaoyi Zhang and Yue Gao
J. Mar. Sci. Eng. 2025, 13(8), 1396; https://doi.org/10.3390/jmse13081396 - 23 Jul 2025
Abstract
Accurate ship berthing behavior prediction (BBP) is essential for enabling collision warnings and supporting decision-making. Existing methods based on Automatic Identification System (AIS) data perform well in the task of ship trajectory prediction over long time-series and large scales, but struggle with addressing the fine-grained and highly dynamic changes in berthing scenarios. Therefore, the accuracy of BBP remains a crucial challenge. In this paper, a novel BBP method based on Light Detection and Ranging (LiDAR) data is proposed. To test its feasibility, a comprehensive dataset is established by conducting on-site collection of berthing data at Dalian Port (China) using a shore-based LiDAR system. This dataset comprises equal-interval data from 77 berthing activities involving three large ships. In order to find a straightforward architecture to provide good performance on our dataset, a cascading network model combining a convolutional neural network (CNN), a bi-directional gated recurrent unit (BiGRU), and bi-directional long short-term memory (BiLSTM) is developed to serve as the baseline. Experimental results demonstrate that the baseline outperformed other commonly used prediction models and their combinations in terms of prediction accuracy. In summary, our research findings help overcome the limitations of AIS data in berthing scenarios and provide a foundation for predicting complete berthing status, thereby offering practical insights for safer, more efficient, and automated management in smart port systems. Full article
(This article belongs to the Section Ocean Engineering)
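A minimal sketch of what a cascaded CNN -> BiGRU -> BiLSTM baseline of this kind can look like in Keras; the window length, feature count, and output horizon below are placeholders, not the authors' configuration.

```python
# Cascaded CNN -> BiGRU -> BiLSTM regressor over windowed LiDAR-derived berthing features.
import tensorflow as tf
from tensorflow.keras import layers

def build_baseline(timesteps=30, n_features=6, horizon=3):
    return tf.keras.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        layers.Bidirectional(layers.GRU(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(horizon),   # e.g., next steps of distance/approach speed
    ])

model = build_baseline()
model.compile(optimizer="adam", loss="mse")
model.summary()
```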

17 pages, 2307 KiB  
Article
DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning
by Doston Khasanov, Halimjon Khujamatov, Muksimova Shakhnoza, Mirjamol Abdullaev, Temur Toshtemirov, Shahzoda Anarova, Cheolwon Lee and Heung-Seok Jeon
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841 - 22 Jul 2025
Viewed by 27
Abstract
Background/Objectives: The accurate identification of insect bites from images of skin is daunting due to the fine gradations among diverse bite types, variability in human skin response, and inconsistencies in image quality. Methods: For this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. Our model aggregates three semantically diverse convolutional neural networks—DenseNet121, EfficientNet-B0, and MobileNetV3-Small—using a stacked meta-classifier designed to combine their predicted outcomes into an integrated, discriminatively strong output. Our technique balances heterogeneous feature representation with suppression of individual model biases. Our model was trained and evaluated on a hand-collected set of 1932 labeled images representing eight classes, consisting of common bites such as mosquito, flea, and tick bites, and unaffected skin. Our domain-specific augmentation pipeline introduced realistic variability in lighting, occlusion, and skin tone, thereby boosting generalizability. Results: Our model, DeepBiteNet, achieved a training accuracy of 89.7%, validation accuracy of 85.1%, and test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators, viz., precision (0.880), recall (0.870), and F1-score (0.875). Our model, optimized for mobile deployment with quantization and TensorFlow Lite, enables rapid on-client computation and eliminates reliance on cloud-based processing. Conclusions: Our work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can boost the reliability and usability of automatic insect bite diagnosis. Our model, DeepBiteNet, forms a promising foundation for future integration with mobile health (mHealth) solutions and may complement early diagnosis and triage in dermatologically underserved regions. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
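The stacking idea can be pictured with a toy sketch (ours, not DeepBiteNet itself): class-probability outputs of the three backbones are concatenated and passed to a logistic-regression meta-classifier. The random arrays stand in for real DenseNet121 / EfficientNet-B0 / MobileNetV3-Small predictions.

```python
# Toy stacked-ensemble meta-classifier over concatenated base-model class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 1932, 8
probs_densenet = rng.dirichlet(np.ones(n_classes), n_samples)      # stand-in outputs
probs_efficientnet = rng.dirichlet(np.ones(n_classes), n_samples)
probs_mobilenet = rng.dirichlet(np.ones(n_classes), n_samples)
y = rng.integers(0, n_classes, n_samples)

X = np.hstack([probs_densenet, probs_efficientnet, probs_mobilenet])  # (N, 24)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

meta = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("meta-classifier accuracy:", meta.score(X_te, y_te))  # ~chance on random stand-ins
```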

14 pages, 730 KiB  
Article
Opportunities and Limitations of Wrist-Worn Devices for Dyskinesia Detection in Parkinson’s Disease
by Alexander Johannes Wiederhold, Qi Rui Zhu, Sören Spiegel, Adrin Dadkhah, Monika Pötter-Nerger, Claudia Langebrake, Frank Ückert and Christopher Gundler
Sensors 2025, 25(14), 4514; https://doi.org/10.3390/s25144514 - 21 Jul 2025
Viewed by 159
Abstract
During the in-hospital optimization of dopaminergic dosage for Parkinson’s disease, drug-induced dyskinesias emerge as a common side effect. Wrist-worn devices present a substantial opportunity for continuous movement recording and the supportive identification of these dyskinesias. To bridge the gap between dyskinesia assessment and machine learning-enabled detection, the recorded information requires meaningful data representations. This study evaluates and compares two distinct representations of sensor data: a task-dependent, semantically grounded approach and automatically extracted large-scale time-series features. Each representation was assessed on public datasets to identify the best-performing machine learning model and subsequently applied to our own collected dataset to assess generalizability. Data representations incorporating semantic knowledge demonstrated comparable or superior performance to reported works, with peak F1 scores of 0.68. Generalization to our own dataset from clinical practice resulted in an observed F1 score of 0.53 using both setups. These results highlight the potential of semantic movement data analysis for dyskinesia detection. Dimensionality reduction in accelerometer-based movement data positively impacts performance, and models trained with semantically obtained features avoid overfitting. Expanding cohorts with standardized neurological assessments labeled by medical experts is essential for further improvements. Full article
(This article belongs to the Section Wearables)
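A toy sketch (ours, on synthetic data) of the semantically grounded representation: each accelerometer window is reduced to a few hand-picked movement features and a classifier is scored with F1, mirroring the evaluation metric used above.

```python
# Hand-picked ("semantic") features per accelerometer window, scored with F1 on synthetic labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def window_features(acc_xyz):
    # acc_xyz: (T, 3) wrist accelerometer samples in one window
    mag = np.linalg.norm(acc_xyz, axis=1)
    return np.array([mag.mean(), mag.std(), np.percentile(mag, 90),
                     np.abs(np.diff(mag)).mean()])   # jerk-like term

rng = np.random.default_rng(1)
windows = rng.normal(size=(500, 128, 3))   # 500 synthetic windows
labels = rng.integers(0, 2, 500)           # 1 = dyskinetic segment (synthetic)
X = np.stack([window_features(w) for w in windows])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```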

16 pages, 2914 KiB  
Article
Smart Dairy Farming: A Mobile Application for Milk Yield Classification Tasks
by Allan Hall-Solorio, Graciela Ramirez-Alonso, Alfonso Juventino Chay-Canul, Héctor A. Lee-Rangel, Einar Vargas-Bello-Pérez and David R. Lopez-Flores
Animals 2025, 15(14), 2146; https://doi.org/10.3390/ani15142146 - 21 Jul 2025
Viewed by 160
Abstract
This study analyzes the use of a lightweight image-based deep learning model to classify dairy cows into low-, medium-, and high-milk-yield categories by automatically detecting the udder region of the cow. The implemented model was based on the YOLOv11 architecture, which enables efficient object detection and classification with real-time performance. The model is trained on a public dataset of cow images labeled with 305-day milk yield records. Thresholds were established to define the three yield classes, and a balanced subset of labeled images was selected for training, validation, and testing purposes. To assess the robustness and consistency of the proposed approach, the model was trained 30 times following the same experimental protocol. The system achieves precision, recall, and mean Average Precision (mAP@50) of 0.408 ± 0.044, 0.739 ± 0.095, and 0.492 ± 0.031, respectively, across all classes. The highest precision (0.445 ± 0.055), recall (0.766 ± 0.107), and mAP@50 (0.558 ± 0.036) were observed in the low-yield class. Qualitative analysis revealed that misclassifications mainly occurred near class boundaries, emphasizing the importance of consistent image acquisition conditions. The resulting model was deployed in a mobile application designed to support field-level assessment by non-specialist users. These findings demonstrate the practical feasibility of applying vision-based models to support decision-making in dairy production systems, particularly in settings where traditional data collection methods are unavailable or impractical. Full article
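The class-definition step can be illustrated with a short sketch (ours); the yield thresholds below are placeholders, not the cut-offs used in the study.

```python
# Hypothetical binning of 305-day milk-yield records into the three detector classes.
def yield_class(yield_305d_kg: float, low_cut: float = 7000.0, high_cut: float = 9000.0) -> str:
    if yield_305d_kg < low_cut:
        return "low"
    if yield_305d_kg < high_cut:
        return "medium"
    return "high"

records = {"cow_017": 6500.0, "cow_042": 8200.0, "cow_103": 10100.0}
print({cow: yield_class(kg) for cow, kg in records.items()})
# {'cow_017': 'low', 'cow_042': 'medium', 'cow_103': 'high'}
```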

24 pages, 2021 KiB  
Article
A Framework for Constructing Large-Scale Dynamic Datasets for Water Conservancy Image Recognition Using Multi-Role Collaboration and Intelligent Annotation
by Xueying Song, Xiaofeng Wang, Ganggang Zuo and Jiancang Xie
Appl. Sci. 2025, 15(14), 8002; https://doi.org/10.3390/app15148002 - 18 Jul 2025
Viewed by 121
Abstract
The construction of large-scale, dynamic datasets for specialized domain models often suffers from low efficiency and poor consistency. This paper proposes a method that integrates multi-role collaboration with automated annotation to address these issues. The framework introduces two new roles, data augmentation specialists and automatic annotation operators, to establish a closed-loop process that includes dynamic classification adjustment, data augmentation, and intelligent annotation. Two supporting tools were developed: an image classification modification tool that automatically adapts to changes in categories and an automatic annotation tool with rotation-angle perception based on the rotation matrix algorithm. Experimental results show that this method increases annotation efficiency by 40% compared to traditional approaches, while achieving 100% annotation consistency after classification modifications. The method’s effectiveness was validated using the WATER-DET dataset, a collection of 1500 annotated images from the water conservancy engineering field. A model trained on this dataset achieved an F1-score of 0.9 for identifying water environment problems in rivers and lakes. This research offers an efficient framework for dynamic dataset construction, and the developed methods and tools are expected to promote the application of artificial intelligence in specialized domains. Full article
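The rotation-matrix step behind a rotation-angle-aware annotation tool can be sketched as follows (our illustration, not the released tool): the corners of an axis-aligned box are rotated about its centre to obtain an oriented box label.

```python
# Oriented-box corners from centre, size, and rotation angle via a 2D rotation matrix.
import numpy as np

def rotated_box(cx, cy, w, h, angle_deg):
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2,  h / 2], [-w / 2,  h / 2]])
    return corners @ rot.T + np.array([cx, cy])   # (4, 2) corner coordinates

print(rotated_box(100, 50, 40, 20, angle_deg=30).round(1))
```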

26 pages, 2215 KiB  
Article
Smart Routing for Sustainable Supply Chain Networks: An AI and Knowledge Graph Driven Approach
by Manuel Felder, Matteo De Marchi, Patrick Dallasega and Erwin Rauch
Appl. Sci. 2025, 15(14), 8001; https://doi.org/10.3390/app15148001 - 18 Jul 2025
Viewed by 211
Abstract
Small and medium-sized enterprises (SMEs) face growing challenges in optimizing their sustainable supply chains because of fragmented logistics data and changing regulatory requirements. In particular, globally operating manufacturing SMEs often lack suitable tools, resulting in manual data collection and making reliable accounting and benchmarking of transport emissions in lifecycle assessments (LCAs) time-consuming and difficult to scale. This paper introduces a novel hybrid AI-supported knowledge graph (KG) which combines large language models (LLMs) with graph-based optimization to automate industrial supply chain route enrichment, completion, and emissions analysis. The proposed solution automatically resolves transportation gaps through generative AI and programming interfaces to create optimal routes for cost, time, and emission determination. The application merges separate routes into a single multi-modal network which allows users to evaluate sustainability against operational performance. A case study shows the capabilities in simplifying data collection for emissions reporting, therefore reducing manual effort and empowering SMEs to align logistics decisions with Industry 5.0 sustainability goals. Full article
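The graph-based optimization component can be pictured with a toy multi-modal route graph (ours, not the paper's system): each leg carries cost, time, and CO2 attributes, and a weighted objective selects the route. The nodes, values, and weights are invented for illustration.

```python
# Toy multi-modal route graph scored by a weighted cost/time/CO2 objective.
import networkx as nx

G = nx.DiGraph()
G.add_edge("factory", "port_A", mode="truck", cost=400, time=6, co2=120)
G.add_edge("port_A", "port_B", mode="ship", cost=900, time=96, co2=300)
G.add_edge("factory", "hub", mode="rail", cost=550, time=20, co2=60)
G.add_edge("hub", "port_B", mode="truck", cost=300, time=8, co2=90)

def objective(u, v, d, w_cost=1.0, w_time=2.0, w_co2=5.0):
    return w_cost * d["cost"] + w_time * d["time"] + w_co2 * d["co2"]

route = nx.shortest_path(G, "factory", "port_B", weight=objective)
print(route)   # ['factory', 'hub', 'port_B'] under these illustrative weights
```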

38 pages, 5137 KiB  
Systematic Review
Current State of the Art and Potential for Construction and Demolition Waste Processing: A Scoping Review of Sensor-Based Quality Monitoring and Control for In- and Online Implementation in Production Processes
by Lieve Göbbels, Alexander Feil, Karoline Raulf and Kathrin Greiff
Sensors 2025, 25(14), 4401; https://doi.org/10.3390/s25144401 - 14 Jul 2025
Viewed by 416
Abstract
Automated quality assurance is gaining popularity across application areas; however, automatization for monitoring and control of product quality in waste processing is still in its infancy. At the same time, research on this topic is scattered, limiting efficient implementation of already developed strategies and technologies across research and application areas. To this end, the current work describes a scoping review conducted to systematically map available sensor-based quality assurance technologies and research based on the PRISMA-ScR framework. Additionally, the current state of research and potential automatization strategies are described in the context of construction and demolition waste processing. The results show 31 different sensor types extracted from a collection of 364 works, which have varied popularity depending on the application. However, visual imaging and spectroscopy sensors in particular seem to be popular overall. Only five works describing quality control system implementation were found, of which three describe varying manufacturing applications. Most works found describe proof-of-concept quality prediction systems on a laboratory scale. Compared to other application areas, works regarding construction and demolition waste processing indicate that the area seems to be especially behind in terms of implementing visual imaging at higher technology readiness levels. Moreover, given the importance of reliable and detailed data on material quality to transform the construction sector into a sustainable one, future research on quality monitoring and control systems could therefore focus on the implementation on higher technology readiness levels and the inclusion of detailed descriptions on how these systems have been verified. Full article
(This article belongs to the Section Intelligent Sensors)

23 pages, 4070 KiB  
Article
A Deep Learning-Based System for Automatic License Plate Recognition Using YOLOv12 and PaddleOCR
by Bianca Buleu, Raul Robu and Ioan Filip
Appl. Sci. 2025, 15(14), 7833; https://doi.org/10.3390/app15147833 - 12 Jul 2025
Viewed by 345
Abstract
Automatic license plate recognition (ALPR) plays an important role in applications such as intelligent traffic systems, vehicle access control in specific areas, and law enforcement. The main novelty brought by the present research consists in the development of an automatic vehicle license plate recognition system adapted to the Romanian context, which integrates the YOLOv12 detection architecture with the PaddleOCR library while also providing functionalities for recognizing the type of vehicle on which the license plate is mounted and identifying the county of registration. The integration of these functionalities allows for an extension of the applicability range of the proposed solution, including for addressing issues related to restricting access for certain types of vehicles in specific areas, as well as monitoring vehicle traffic based on the county of registration. The dataset used in the study was manually collected and labeled using the makesense.ai platform and was made publicly available for future research. It includes 744 images of vehicles registered in Romania, captured in real traffic conditions (the training dataset being expanded by augmentation). The YOLOv12 model was trained to automatically detect license plates in images with vehicles, and then it was evaluated and validated using standard metrics such as precision, recall, F1 score, mAP@0.5, mAP@0.5:0.95, etc., proving very good performance. Experimental results demonstrate that YOLOv12 achieved superior performance compared to YOLOv11 for the analyzed issue. YOLOv12 outperforms YOLOv11 with a 2.3% increase in precision (from 97.4% to 99.6%) and a 1.1% improvement in F1 score (from 96.7% to 97.8%). Full article
(This article belongs to the Collection Machine Learning in Computer Engineering Applications)
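A hedged pipeline sketch of the detect-then-read idea (ours): a trained YOLO detector crops plate regions that PaddleOCR then reads. The weights file name is hypothetical, and the exact PaddleOCR call and result structure vary between library versions.

```python
# Detect plates with a trained YOLO model, crop each box, and read it with PaddleOCR.
import cv2
from ultralytics import YOLO
from paddleocr import PaddleOCR

detector = YOLO("plates_yolov12.pt")   # hypothetical locally trained weights
reader = PaddleOCR(lang="en")

image = cv2.imread("vehicle.jpg")
for box in detector(image)[0].boxes.xyxy.cpu().numpy():
    x1, y1, x2, y2 = box.astype(int)
    crop = image[y1:y2, x1:x2]
    result = reader.ocr(crop)           # result structure assumed per PaddleOCR 2.x
    if result and result[0]:
        text, conf = result[0][0][1]    # first recognized line: (text, confidence)
        print(f"plate: {text}  (confidence {conf:.2f})")
```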

12 pages, 450 KiB  
Proceeding Paper
Methodology for Automatic Information Extraction and Summary Generation from Online Sources for Project Funding
by Mariya Zhekova
Eng. Proc. 2025, 100(1), 44; https://doi.org/10.3390/engproc2025100044 - 11 Jul 2025
Viewed by 84
Abstract
The summarized content of one or more extensive text documents helps users extract only the most important key information, instead of reviewing and reading hundreds of pages of text. This study uses extractive and abstractive mechanisms to automatically extract and summarize information retrieved from various web documents on the same topic. The research aims to develop a methodology for designing and developing an information system for pre- and post-processing natural language obtained through web content search and web scraping, and for the automatic generation of a summary of the retrieved text. The research outlines two subtasks. As a first step, the system is designed to collect and process up-to-date information based on specific criteria from diverse web resources related to project funding, initiated by various organizations such as startups, sustainable companies, municipalities, government bodies, schools, the NGO sector, and others. As a second step, the collected extensive textual information about current projects and programs, which is typically intended for financial professionals, is to be summarized into a shorter version and transformed into a suitable format for a wide range of non-specialist users. The automated AI software tool, which will be developed using the proposed methodology, will be able to crawl and read project funding information from various web documents, select, process, and prepare a shortened version containing only the most important key information for its clients. Full article
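A minimal extractive-summarization sketch (ours, not the proposed system): sentences are scored by summed TF-IDF weight and the top-ranked ones are kept in their original order. The sample text is invented.

```python
# Score sentences by summed TF-IDF weight and keep the top-k in document order.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(text: str, k: int = 2) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-k:])          # keep original sentence order
    return ". ".join(sentences[i] for i in top) + "."

doc = ("The call funds digital innovation projects for municipalities. "
       "Applications close on 30 September. Grants cover up to 80 percent of costs. "
       "A short concept note is required at submission.")
print(extractive_summary(doc, k=2))
```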

15 pages, 1557 KiB  
Article
Factors Associated with Cure and Prediction of Cure of Clinical Mastitis of Dairy Cows
by Larissa V. F. Cruz, Ruan R. Daros, André Ostrensky and Cristina S. Sotomaior
Dairy 2025, 6(4), 37; https://doi.org/10.3390/dairy6040037 - 11 Jul 2025
Viewed by 255
Abstract
To study behavioral and productive factors to detect changes that may indicate and predict clinical mastitis cure, Holstein dairy cows (n = 60), in an automatic milking system (AMS) and equipped with a behavioral monitoring collar, were monitored from the diagnosis of clinical mastitis (D0) until clinical cure. The parameters collected through sensors were feeding activity, milk electrical conductivity (EC), milk yield, Mastitis Detection Index (MDi), milk flow, and number of gate passages. Clinical mastitis cases (n = 22) were monitored and divided into cured cases (n = 14) and non-cured cases within 30 days (n = 8), paired with a control case group (n = 28). Cows were assessed three times per week, and cure was determined when both clinical assessment and California Mastitis Test (CMT) results were negative in three consecutive evaluations. Mixed generalized linear regression was used to assess the relationship between parameters and clinical mastitis results. Mixed generalized logistic regression was used to create a predictive model. The average clinical cure time for cows with clinical mastitis was 11 days. Feeding activity, gate passages, milk yield, milk flow, EC, and the MDi were associated with cure. The predictive model based on data from D0 showed an Area Under the Curve of 0.89 (95% CI = 0.75–1). Sensitivity and specificity were 1 (95% CI = 1–1) and 0.63 (95% CI = 0.37–0.91), respectively. The predictive model demonstrated good internal sensitivity and specificity, showing promising potential for predicting clinical mastitis cure within 14 days based on data on the day of clinical mastitis diagnosis. Full article
(This article belongs to the Section Dairy Animal Health)
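The D0 prediction step can be illustrated with a small sketch on synthetic data (ours, not the study's model or data): day-of-diagnosis sensor features feed a logistic model, which is then summarized by AUC, sensitivity, and specificity.

```python
# Logistic model on synthetic D0 sensor features, reported as AUC / sensitivity / specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
# columns: feeding activity, EC, milk yield, milk flow, MDi, gate passages (D0 values)
X = rng.normal(size=(60, 6))
y = rng.integers(0, 2, 60)              # 1 = cured within 14 days (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("AUC:", roc_auc_score(y, prob))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```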

27 pages, 4490 KiB  
Article
An Indoor Environmental Quality Study for Higher Education Buildings with an Integrated BIM-Based Platform
by Mukhtar Maigari, Changfeng Fu, Efcharis Balodimou, Prapooja Kc, Seeja Sudhakaran and Mohammad Sakikhales
Sustainability 2025, 17(13), 6155; https://doi.org/10.3390/su17136155 - 4 Jul 2025
Viewed by 287
Abstract
Indoor environmental quality (IEQ) of higher education (HE) buildings significantly impacts the built environment sector. This research aimed to optimize learning environments and enhance student comfort, especially post-COVID-19. The study adopts the principles of Post-occupancy Evaluation (POE) to collect and analyze various quantitative and qualitative data through environmental data monitoring, a user perceptions survey, and semi-structured interviews with professionals. Although the environmental conditions generally met existing standards, the findings indicated opportunities for further improvements to better support university communities’ comfort and health. A significant challenge identified by this research is the inability of the facility management to physically manage and operate the vast and complex spaces within HE buildings with contemporary IEQ standards. In response to these findings, this research developed a BIM-based prototype for the real-time monitoring and automated control of IEQ. The prototype integrates a BIM model with Arduino-linked sensors, motors, and traffic lights, with the latter visually indicating IEQ status, while motors automatically adjust environmental conditions based on sensor inputs. The outcomes of this study not only contribute to the ongoing discourse on sustainable building management, especially post-pandemic, but also demonstrate an advancement in the application of BIM technologies to improve IEQ and by extension, occupant wellbeing in HE buildings. Full article
(This article belongs to the Special Issue Building a Sustainable Future: Sustainability and Innovation in BIM)
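The traffic-light logic of such a prototype can be sketched in a few lines (ours; the thresholds are illustrative, not those used in the study): sensor readings are mapped to a green/amber/red IEQ status that would drive the indicator and, in the red case, the corrective actuation.

```python
# Map sensor readings to a traffic-light IEQ status; thresholds are placeholders.
def ieq_status(co2_ppm: float, temp_c: float) -> str:
    if co2_ppm > 1500 or not 18 <= temp_c <= 28:
        return "red"      # would trigger corrective actuation (e.g., ventilation motor)
    if co2_ppm > 1000 or not 20 <= temp_c <= 26:
        return "amber"
    return "green"

print(ieq_status(co2_ppm=850, temp_c=22.5))   # green
print(ieq_status(co2_ppm=1200, temp_c=22.5))  # amber
print(ieq_status(co2_ppm=1600, temp_c=31.0))  # red
```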

20 pages, 1198 KiB  
Article
Semi-Supervised Deep Learning Framework for Predictive Maintenance in Offshore Wind Turbines
by Valerio F. Barnabei, Tullio C. M. Ancora, Giovanni Delibra, Alessandro Corsini and Franco Rispoli
Int. J. Turbomach. Propuls. Power 2025, 10(3), 14; https://doi.org/10.3390/ijtpp10030014 - 4 Jul 2025
Viewed by 352
Abstract
The increasing deployment of wind energy systems, particularly offshore wind farms, necessitates advanced monitoring and maintenance strategies to ensure optimal performance and minimize downtime. Supervisory Control And Data Acquisition (SCADA) systems have become indispensable tools for monitoring the operational health of wind turbines, generating vast quantities of time series data from various sensors. Anomaly detection techniques applied to this data offer the potential to proactively identify deviations from normal behavior, providing early warning signals of potential component failures. Traditional model-based approaches for fault detection often struggle to capture the complexity and non-linear dynamics of wind turbine systems. This has led to a growing interest in data-driven methods, particularly those leveraging machine learning and deep learning, to address anomaly detection in wind energy applications. This study focuses on the development and application of a semi-supervised, multivariate anomaly detection model for horizontal axis wind turbines. The core of this study lies in Bidirectional Long Short-Term Memory (BI-LSTM) networks, specifically a BI-LSTM autoencoder architecture, to analyze time series data from a SCADA system and automatically detect anomalous behavior that could indicate potential component failures. Moreover, the approach is reinforced by the integration of the Isolation Forest algorithm, which operates in an unsupervised manner to further refine normal behavior by identifying and excluding additional anomalous points in the training set, beyond those already labeled by the data provider. The research utilizes a real-world dataset provided by EDP Renewables, encompassing two years of comprehensive SCADA records collected from a single offshore wind turbine operating in the Gulf of Guinea. Furthermore, the dataset contains the logs of failure events and recorded alarms triggered by the SCADA system across a wide range of subsystems. The paper proposes a multi-modal anomaly detection framework orchestrating an unsupervised module (i.e., decision tree method) with a supervised one (i.e., BI-LSTM AE). The results highlight the efficacy of the BI-LSTM autoencoder in accurately identifying anomalies within the SCADA data that exhibit strong temporal correlation with logged warnings and the actual failure events. The model’s performance is rigorously evaluated using standard machine learning metrics, including precision, recall, F1 Score, and accuracy, all of which demonstrate favorable results. Further analysis is conducted using Cumulative Sum (CUSUM) control charts to gain a deeper understanding of the identified anomalies’ behavior, particularly their persistence and timing leading up to the failures. Full article
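A compact sketch of the BI-LSTM autoencoder idea (ours, not the paper's code): windows of SCADA signals are reconstructed, and windows with large reconstruction error are flagged. Window length, channel count, and the random data are assumptions; in the paper, the training windows would first be filtered by the Isolation Forest step.

```python
# BiLSTM autoencoder on SCADA windows; large reconstruction error flags an anomaly.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_signals = 12, 8             # assumed window length and SCADA channels
model = tf.keras.Sequential([
    layers.Input(shape=(timesteps, n_signals)),
    layers.Bidirectional(layers.LSTM(32)),                           # encoder
    layers.RepeatVector(timesteps),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),    # decoder
    layers.TimeDistributed(layers.Dense(n_signals)),
])
model.compile(optimizer="adam", loss="mse")

X_train = np.random.rand(256, timesteps, n_signals)   # "normal" windows (pre-filtered)
model.fit(X_train, X_train, epochs=2, batch_size=32, verbose=0)

X_new = np.random.rand(16, timesteps, n_signals)
err = np.mean((model.predict(X_new, verbose=0) - X_new) ** 2, axis=(1, 2))
threshold = err.mean() + 3 * err.std()                 # simple alarm threshold
print("anomalous windows:", np.where(err > threshold)[0])
```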

9 pages, 1016 KiB  
Article
TinyML-Based Swine Vocalization Pattern Recognition for Enhancing Animal Welfare in Embedded Systems
by Tung Chiun Wen, Caroline Ferreira Freire, Luana Maria Benicio, Giselle Borges de Moura, Magno do Nascimento Amorim and Késia Oliveira da Silva-Miranda
Inventions 2025, 10(4), 52; https://doi.org/10.3390/inventions10040052 - 4 Jul 2025
Cited by 1 | Viewed by 336
Abstract
The automatic recognition of animal vocalizations is a valuable tool for monitoring pigs’ behavior, health, and welfare. This study investigates the feasibility of implementing a convolutional neural network (CNN) model for classifying pig vocalizations using tiny machine learning (TinyML) on a low-cost, resource-constrained embedded system. The dataset was collected in 2011 at the University of Illinois at Urbana-Champaign on an experimental pig farm. In this experiment, 24 piglets were housed in environmentally controlled rooms and exposed to gradual thermal variations. Vocalizations were recorded using directional microphones, processed to reduce background noise, and categorized into “agonistic” and “social” behaviors using a CNN model developed on the Edge Impulse platform. Despite hardware limitations, the proposed approach achieved an accuracy of over 90%, demonstrating the potential of TinyML for real-time behavioral monitoring. These findings underscore the practical benefits of integrating TinyML into swine production systems, enabling early detection of issues that may impact animal welfare, reducing reliance on manual observations, and enhancing overall herd management. Full article
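A sketch of the deployment path described above (ours, not the study's model): a small CNN over spectrogram patches is converted to TensorFlow Lite with post-training quantization for a resource-constrained board. The input size and layer sizes are assumptions.

```python
# Small spectrogram CNN converted to TensorFlow Lite with post-training quantization.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),            # assumed spectrogram patch size
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),      # "agonistic" vs "social"
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]    # post-training quantization
open("swine_vocal.tflite", "wb").write(converter.convert())
```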
