Search Results (73)

Search Parameters:
Keywords = onsite work learning

26 pages, 5344 KiB  
Article
Real-Time Progress Monitoring of Bricklaying
by Ramez Magdy, Khaled A. Hamdy and Yasmeen A. S. Essawy
Buildings 2025, 15(14), 2456; https://doi.org/10.3390/buildings15142456 - 13 Jul 2025
Viewed by 355
Abstract
The construction industry is one of the largest contributors to the world economy. However, the level of automation and digitalization in the construction industry is still in its infancy compared with other industries due to the complex nature and large size of construction projects. Meanwhile, construction projects are prone to cost overruns and schedule delays because on-site progress is still retrieved with traditional monitoring techniques, and indoor activities account for a considerable share of this work. Improvements in deep learning and Computer Vision (CV) algorithms provide promising results in detecting objects in real time, and researchers have investigated the feasibility of using CV as a tool to create a Digital Twin (DT) of construction sites. This paper proposes a model utilizing the state-of-the-art YOLOv8 algorithm to monitor the progress of bricklaying activities, automatically extracting and analyzing real-time data from construction sites. The detected data are then integrated into a 3D Building Information Model (BIM), which serves as a DT, allowing project managers to visualize, track, and compare the actual progress of bricklaying with the planned schedule. By incorporating this technology, the model aims to enhance accuracy in progress monitoring, reduce human error, and enable real-time updates to project timelines, contributing to more efficient project management and timely completion. Full article
(This article belongs to the Special Issue AI in Construction: Automation, Optimization, and Safety)
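The abstract above describes detecting laid bricks with YOLOv8 and comparing progress against a BIM-based plan. A minimal sketch of that comparison step, not the authors' implementation, might look like the following; the weights file, the "laid brick" class index, the zone name, and the planned quantity are invented placeholders, and it assumes the ultralytics YOLO package.

```python
# Illustrative sketch (not the authors' implementation) of comparing detected
# bricklaying progress against a planned BIM quantity. The weights file, the
# "laid brick" class index, the zone name, and the planned count are assumptions.
from ultralytics import YOLO  # assumes the ultralytics package and a trained YOLOv8 model

model = YOLO("bricklaying_yolov8.pt")             # hypothetical trained weights
result = model("site_camera_frame.jpg")[0]        # single-frame inference on a site image

laid_bricks = int((result.boxes.cls == 0).sum())  # assumption: class 0 = laid brick

# Hypothetical planned quantity pulled from the BIM/digital-twin schedule for one wall zone
planned = {"zone_A_wall_3": 1200}
progress = laid_bricks / planned["zone_A_wall_3"]
print(f"Zone A wall 3: {laid_bricks} bricks detected, {progress:.1%} of plan")
```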

32 pages, 5494 KiB  
Review
Colorimetric Biosensors: Advancements in Nanomaterials and Cutting-Edge Detection Strategies
by Yubeen Lee, Izzati Haizan, Sang Baek Sim and Jin-Ha Choi
Biosensors 2025, 15(6), 362; https://doi.org/10.3390/bios15060362 - 5 Jun 2025
Viewed by 1006
Abstract
Colorimetric-based biosensors are practical detection devices that can detect the presence and concentration of biomarkers through simple color changes. Conventional laboratory-based tests are highly sensitive but require long processing times and expensive equipment, which makes them difficult to apply for on-site diagnostics. In contrast, the colorimetric method offers advantages for point-of-care testing and real-time monitoring due to its flexibility, simple operation, rapid results, and versatility across many applications. In order to enhance the color change reactions in colorimetric techniques, functional nanomaterials are often integrated due to their desirable intrinsic properties. In this review, the working principles of nanomaterial-based detection strategies in colorimetric systems are introduced. In addition, current signal amplification methods for colorimetric biosensors are comprehensively outlined and evaluated. Finally, the latest trends in artificial intelligence (AI) and machine learning integration into colorimetric-based biosensors, including their potential for technological advancements in the near future, are discussed. Future research is expected to develop highly sensitive and multifunctional colorimetric methods, which will serve as powerful alternatives for point-of-care testing and self-testing. Full article
(This article belongs to the Special Issue Functional Materials for Biosensing Applications)
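Since the review centers on reading analyte concentration from a color change, a minimal worked sketch of the underlying quantification step may help: fit a calibration curve from known standards, then invert it for an unknown sample. All values below are invented, and the linear model is only an assumption about how such a readout is often approximated.

```python
# Minimal worked sketch of the quantification step a colorimetric readout relies on:
# fit a calibration curve from known standards, then invert it for an unknown sample.
# All values are invented, and the linear model is only an assumption.
import numpy as np

standards_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # known analyte concentrations (a.u.)
standards_signal = np.array([0.02, 0.11, 0.21, 0.43, 0.86])  # e.g., mean colour-channel absorbance

slope, intercept = np.polyfit(standards_conc, standards_signal, 1)  # linear calibration fit

sample_signal = 0.30
sample_conc = (sample_signal - intercept) / slope
print(f"Estimated concentration: {sample_conc:.2f} a.u.")
```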

28 pages, 50539 KiB  
Article
A Complete System for Automated Semantic–Geometric Mapping of Corrosion in Industrial Environments
by Rui Pimentel de Figueiredo, Stefan Nordborg Eriksen, Ignacio Rodriguez and Simon Bøgh
Automation 2025, 6(2), 23; https://doi.org/10.3390/automation6020023 - 30 May 2025
Viewed by 1303
Abstract
Corrosion, a naturally occurring process leading to the deterioration of metallic materials, demands diligent detection for quality control and the preservation of metal-based objects, especially within industrial contexts. Traditional techniques for corrosion identification, including ultrasonic testing, radiographic testing, and magnetic flux leakage, necessitate the deployment of expensive and bulky equipment on-site for effective data acquisition. An unexplored alternative involves employing lightweight, conventional camera systems and state-of-the-art computer vision methods for its identification. In this work, we propose a complete system for semi-automated corrosion identification and mapping in industrial environments. We leverage recent advances in three-dimensional (3D) point-cloud-based methods for localization and mapping, with vision-based semantic segmentation deep learning techniques, in order to build semantic–geometric maps of industrial environments. Unlike the previous corrosion identification systems available in the literature, which are either intrusive (e.g., electrochemical testing) or based on costly equipment (e.g., ultrasonic sensors), our designed multi-modal vision-based system is low cost, portable, and semi-autonomous and allows the collection of large datasets by untrained personnel. A set of experiments performed in relevant test environments demonstrated quantitatively the high accuracy of the employed 3D mapping and localization system, using a light detection and ranging (LiDAR) device, with less than 0.05 m and 0.02 m average absolute and relative pose errors. Also, our data-driven semantic segmentation model was shown to achieve 70% precision in corrosion detection when trained with our pixel-wise manually annotated dataset. Full article
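A hedged sketch of the semantic–geometric fusion idea described above: project 3D points into the camera image and label those that land on pixels the segmentation model marked as corroded. The intrinsics, mask, and points below are placeholders, not values from the paper.

```python
# Hedged sketch of semantic-geometric fusion: project 3D points into the camera image
# and label those falling on "corrosion" pixels of a segmentation mask. The intrinsics,
# mask, and points are placeholders, not values from the paper.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                # assumed pinhole intrinsics
points_cam = np.random.rand(1000, 3) * [4, 3, 5] + [0, 0, 1]  # 3D points in camera frame (m)
mask = np.zeros((480, 640), dtype=bool)                        # segmentation output: True = corrosion
mask[200:300, 250:400] = True                                  # toy corroded region

uv = (K @ points_cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                                    # perspective divide
u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

in_img = (u >= 0) & (u < 640) & (v >= 0) & (v < 480)
corroded = np.zeros(len(points_cam), dtype=bool)
corroded[in_img] = mask[v[in_img], u[in_img]]                  # tag points hit by corrosion pixels
print(f"{int(corroded.sum())} of {len(points_cam)} points labeled as corroded")
```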

40 pages, 3224 KiB  
Article
A Comparative Study of Image Processing and Machine Learning Methods for Classification of Rail Welding Defects
by Mohale Emmanuel Molefe, Jules Raymond Tapamo and Siboniso Sithembiso Vilakazi
J. Sens. Actuator Netw. 2025, 14(3), 58; https://doi.org/10.3390/jsan14030058 - 29 May 2025
Viewed by 1730
Abstract
Defects formed during the thermite welding process of two sections of rails require the welded joints to be inspected for quality, and the most commonly used non-destructive method for inspection is radiography testing. However, the conventional defect investigation process from the obtained radiography images is costly, lengthy, and subjective, as it is conducted manually by trained experts. Additionally, it has been shown that most rail breaks occur due to a crack initiated from a weld joint defect that was either misclassified or undetected. To improve the condition monitoring of rails, the railway industry requires an automated defect investigation system capable of detecting and classifying defects automatically. Therefore, this work proposes a method based on image processing and machine learning techniques for the automated investigation of defects. Histogram Equalization methods are first applied to improve image quality. Then, the extraction of the weld joint from the image background is achieved using the Chan–Vese Active Contour Model. A comparative investigation is carried out between Deep Convolutional Neural Networks, Local Binary Pattern extractors, and Bag of Visual Words methods (with the Speeded-Up Robust Features extractor) for extracting features from weld joint images. Classification of features extracted by local feature extractors is achieved using Support Vector Machines, K-Nearest Neighbor, and Naive Bayes classifiers. The highest classification accuracy of 95% is achieved by the Deep Convolutional Neural Network model. A Graphical User Interface is provided for the on-site investigation of defects. Full article
(This article belongs to the Special Issue AI-Assisted Machine-Environment Interaction)
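As an illustration of one of the compared classical branches (histogram equalization, texture features, and a conventional classifier), here is a small sketch using OpenCV, scikit-image, and scikit-learn; file names and labels are hypothetical, and this is not the authors' pipeline or dataset.

```python
# Small sketch of one classical branch compared in the paper: histogram equalization,
# LBP texture features, and an SVM classifier. File names and labels are hypothetical.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, points=8, radius=1):
    """Uniform LBP histogram as a fixed-length texture descriptor."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def extract_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.equalizeHist(gray)                    # contrast-enhancement step
    return lbp_histogram(gray)

# hypothetical radiographs of already-extracted weld-joint regions
train_paths = ["weld_ok_01.png", "weld_porosity_01.png"]
train_labels = ["no_defect", "porosity"]

X = np.array([extract_features(p) for p in train_paths])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict([extract_features("weld_unknown.png")]))
```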

20 pages, 6117 KiB  
Article
Enhancing Dense-Scene Millet Appearance Quality Inspection Based on YOLO11s with Overlap-Partitioning Strategy for Procurement
by Leilei He, Ruiyang Wei, Yusong Ding, Juncai Huang, Xin Wei, Rui Li, Shaojin Wang and Longsheng Fu
Agronomy 2025, 15(6), 1284; https://doi.org/10.3390/agronomy15061284 - 23 May 2025
Viewed by 499
Abstract
Accurate millet appearance quality assessment is critical for fair procurement pricing. Traditional manual inspection is time-consuming and subjective, necessitating an automated solution. This study proposes a machine-vision-based approach using deep learning for dense-scene millet detection and quality evaluation. High-resolution images of standardized millet samples were collected via smartphone and annotated into seven categories covering impurities, high-quality grains, and various defects. To address the challenges of small-object detection and feature loss, the YOLO11s model was introduced with an overlap-partitioning strategy that divides the high-resolution images into smaller patches for improved object representation. The experimental results show that the optimized model achieved a mean average precision (mAP) of 94.8%, significantly outperforming traditional whole-image detection, which reached a mAP of 15.9%. The optimized model was deployed in a custom-developed mobile application, enabling low-cost, real-time millet inspection directly on smartphones. It can process full-resolution images (4608 × 3456 pixels) containing over 5000 kernels within 6.8 s. This work provides a practical solution for on-site quality evaluation in procurement and contributes to real-time agricultural inspection systems. Full article
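The overlap-partitioning idea described above can be sketched as a simple tiling step: cut the high-resolution image into overlapping patches, run the detector per patch, and shift detections back to full-image coordinates before de-duplicating. Tile size and overlap below are assumptions, not the paper's settings.

```python
# Minimal sketch of an overlap-partitioning step: slice the high-resolution image into
# overlapping patches so small kernels remain well represented, then map per-patch boxes
# back to full-image coordinates. Tile size and overlap are assumptions.
import numpy as np

def overlapping_tiles(image, tile=1024, overlap=128):
    """Yield (patch, x_offset, y_offset) covering the whole image."""
    h, w = image.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield image[y:y + tile, x:x + tile], x, y

def to_global(boxes_xyxy, x_off, y_off):
    """Shift per-patch boxes back into full-image coordinates."""
    boxes = np.asarray(boxes_xyxy, dtype=float).copy()
    boxes[:, [0, 2]] += x_off
    boxes[:, [1, 3]] += y_off
    return boxes

# usage sketch: run the trained detector (e.g., a YOLO11s model) on each patch, call
# to_global() on its boxes, then apply non-maximum suppression across all patches to
# remove duplicates in the overlap bands.
```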

22 pages, 8426 KiB  
Article
Development of an In-Line Vision-Based Measurement System for Shape and Size Calculation of Cross-Cutting Boards—Straightening Process Case
by Shitao Ge, Wei Zhang, Licheng Han, Yan Peng and Jianliang Sun
Appl. Sci. 2025, 15(10), 5752; https://doi.org/10.3390/app15105752 - 21 May 2025
Viewed by 291
Abstract
In the production process of cross-cutting boards, real-time online measurement of dimensions has been a long-standing technical problem in the production field. Currently, the detection of board dimensions on the production floor relies on manual observation based on workers’ operational experience or on stopping the machine for measurement. This paper proposes a machine vision-based real-time online measurement system for the dimensional measurement of cross-cutting units. A measurement model at a certain camera angle is established using an area-array industrial camera, and more accurate edge contour extraction is realized through deep learning. A novel edge intersection extraction algorithm based on line fitting and the least squares method is proposed to accurately measure the length, width, and diagonals of cross-cutting boards from the four corner-point coordinates. Measurements of 100 cross-cutting boards at an industrial production site show that the proposed online measurement system for cross-cut board dimensions has high accuracy, with a length error of ±50 mm, a width error of ±2 mm, and a diagonal difference of ±5 mm, meeting the production requirements of industrial settings. On-site shutdown measurement work was reduced, thereby doubling production efficiency and freeing two staff members. Full article
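A hedged sketch of the geometric core described above: fit each board edge with least squares, intersect adjacent edges to obtain the four corners, and derive length, width, and diagonals. The edge samples and pixel-to-millimetre scale below are invented for illustration.

```python
# Hedged sketch of the geometric step: least-squares edge lines, corner intersections,
# and board dimensions. Edge samples and the pixel-to-mm scale are invented.
import numpy as np

def fit_line(points):
    """Least-squares fit of a line n . p = c (unit normal n) to edge points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                                # principal direction of the edge
    normal = np.array([-direction[1], direction[0]])
    return normal[0], normal[1], float(normal @ centroid)

def intersect(l1, l2):
    """Corner where two fitted edges meet (solve the 2x2 linear system)."""
    a = np.array([[l1[0], l1[1]], [l2[0], l2[1]]])
    b = np.array([l1[2], l2[2]])
    return np.linalg.solve(a, b)

# toy edge samples for a roughly 2000 x 1000 px board image
top = [(x, 10 + 0.001 * x) for x in range(0, 2000, 50)]
right = [(2012, y) for y in range(0, 1000, 50)]
bottom = [(x, 1010 + 0.001 * x) for x in range(0, 2000, 50)]
left = [(12, y) for y in range(0, 1000, 50)]

lines = [fit_line(e) for e in (top, right, bottom, left)]
corners = [intersect(lines[i], lines[(i + 1) % 4]) for i in range(4)]  # TR, BR, BL, TL

mm_per_px = 0.5  # assumed calibration factor
length = np.linalg.norm(corners[3] - corners[0]) * mm_per_px
width = np.linalg.norm(corners[0] - corners[1]) * mm_per_px
diagonal = np.linalg.norm(corners[0] - corners[2]) * mm_per_px
print(f"length {length:.1f} mm, width {width:.1f} mm, diagonal {diagonal:.1f} mm")
```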

24 pages, 8329 KiB  
Article
Leveraging Deep Learning and Internet of Things for Dynamic Construction Site Risk Management
by Li-Wei Lung, Yu-Ren Wang and Yung-Sung Chen
Buildings 2025, 15(8), 1325; https://doi.org/10.3390/buildings15081325 - 17 Apr 2025
Cited by 2 | Viewed by 1070
Abstract
The construction industry faces persistent occupational health and safety challenges, with numerous risks arising from construction sites’ complex and dynamic nature. Accidents frequently result from inadequate safety distances and poorly managed worker–machine interactions, highlighting the need for advanced safety management solutions. This study develops and validates an innovative hazard warning system that leverages deep learning-based image recognition (YOLOv7) and Internet of Things (IoT) modules to enhance construction site safety. The system achieves a mean average precision (mAP) of 0.922 and an F1 score of 0.88 at a 0.595 confidence threshold, detecting hazards in under 1 s. Integrating IoT-enabled smart wearable devices provides real-time monitoring, delivering instant hazard alerts and personalized safety warnings, even in areas with limited network connectivity. The system employs the DIKW knowledge management framework to extract, transform, and load (ETL) high-quality labeled data and optimize worker and machinery recognition. Robust feature extraction is performed using convolutional neural networks (CNNs) and a fully connected approach for neural network training. Key innovations, such as perspective projection coordinate transformation (PPCT) and the security assessment block module (SABM), further enhance hazard detection and warning generation accuracy and reliability. Validated through extensive on-site experiments, the system demonstrates significant advancements in real-time hazard detection, improving site safety, reducing accident rates, and increasing productivity. The integration of IoT enhances scalability and adaptability, laying the groundwork for future advancements in construction automation and safety management. Full article
(This article belongs to the Special Issue Data Analytics Applications for Architecture and Construction)
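One way to picture the ground-plane safety check implied above is a homography-based mapping from image pixels to site coordinates followed by a distance threshold; this stands in for the paper's perspective projection coordinate transformation, and the calibration points, detections, and threshold below are assumptions.

```python
# Sketch of a ground-plane safety check: map detected foot points from pixels to site
# coordinates with a homography (a stand-in for the paper's perspective projection
# coordinate transformation), then compare the distance with a threshold. The
# calibration points, detections, and threshold are assumptions.
import numpy as np
import cv2

# four image points and their known ground-plane positions (metres) - assumed calibration
img_pts = np.float32([[100, 700], [1180, 700], [900, 300], [380, 300]])
gnd_pts = np.float32([[0, 0], [10, 0], [10, 15], [0, 15]])
H = cv2.getPerspectiveTransform(img_pts, gnd_pts)

def to_ground(pixel_xy):
    """Project one pixel coordinate onto the assumed ground plane."""
    return cv2.perspectiveTransform(np.float32([[pixel_xy]]), H)[0, 0]

worker_px = (420, 560)      # e.g., bottom-centre of a detected "worker" box
excavator_px = (760, 540)   # bottom-centre of a detected "machine" box

dist = float(np.linalg.norm(to_ground(worker_px) - to_ground(excavator_px)))
SAFE_DISTANCE_M = 5.0       # assumed threshold
if dist < SAFE_DISTANCE_M:
    print(f"ALERT: worker within {dist:.1f} m of machinery")  # would be pushed to the wearable
```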

20 pages, 13379 KiB  
Article
From Simulation to Field Validation: A Digital Twin-Driven Sim2real Transfer Approach for Strawberry Fruit Detection and Sizing
by Omeed Mirbod, Daeun Choi and John K. Schueller
AgriEngineering 2025, 7(3), 81; https://doi.org/10.3390/agriengineering7030081 - 17 Mar 2025
Cited by 1 | Viewed by 1721
Abstract
Typically, developing new digital agriculture technologies requires substantial on-site resources and data. However, the crop’s growth cycle provides only limited time windows for experiments and equipment validation. This study presents a photorealistic digital twin of a commercial-scale strawberry farm, coupled with a simulated ground vehicle, to address these constraints by generating high-fidelity synthetic RGB and LiDAR data. These data enable the rapid development and evaluation of a deep learning-based machine vision pipeline for fruit detection and sizing without continuously relying on real-field access. Traditional simulators often lack visual realism, leading many studies to mix real images or adopt domain adaptation methods to address the reality gap. In contrast, this work relies solely on photorealistic simulation outputs for training, eliminating the need for real images or specialized adaptation approaches. After training exclusively on images captured in the virtual environment, the model was tested on a commercial-scale strawberry farm using a physical ground vehicle. Two separate trials with field images resulted in F1-scores of 0.92 and 0.81 for detection and a sizing error of 1.4 mm (R2 = 0.92) when comparing image-derived diameters against caliper measurements. These findings indicate that a digital twin-driven sim2real transfer can offer substantial time and cost savings by refining crucial tasks such as stereo sensor calibration and machine learning model development before extensive real-field deployments. In addition, the study examined geometric accuracy and visual fidelity through systematic comparisons of LiDAR and RGB sensor outputs from the virtual and real farms. Results demonstrated close alignment in both topography and textural details, validating the digital twin’s ability to replicate intricate field characteristics, including raised bed geometry and strawberry plant distribution. The techniques developed and validated in this strawberry project have broad applicability across agricultural commodities, particularly for fruit and vegetable production systems. This study demonstrates that integrating digital twins with simulation tools can significantly reduce the need for resource-intensive field data collection while accelerating the development and refinement of agricultural robotics algorithms and hardware. Full article
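A small sketch of the sizing evaluation mentioned above: convert a detected fruit's pixel width to a physical diameter with a pinhole approximation and compare against caliper readings. The focal length, depths, and measurements are illustrative values only, not data from the study.

```python
# Sketch of the sizing evaluation: convert a fruit's pixel width to a physical diameter
# with a pinhole approximation and compare against caliper readings. Focal length,
# depths, and all measurements are illustrative, not data from the study.
import numpy as np

def diameter_mm(pixel_width, depth_mm, focal_px):
    """Pinhole approximation: physical size = pixel size * depth / focal length."""
    return pixel_width * depth_mm / focal_px

focal_px = 1400.0                                  # assumed focal length in pixels
pixel_widths = np.array([52, 47, 61, 39])          # detected fruit widths (px)
depths_mm = np.array([620, 655, 590, 700])         # per-fruit depth from stereo/LiDAR (mm)
caliper_mm = np.array([23.4, 22.1, 25.8, 19.6])    # ground-truth caliper diameters (mm)

estimated = diameter_mm(pixel_widths, depths_mm, focal_px)
errors = estimated - caliper_mm
r2 = 1 - np.sum(errors**2) / np.sum((caliper_mm - caliper_mm.mean())**2)
print(f"mean abs error = {np.abs(errors).mean():.2f} mm, R^2 = {r2:.2f}")
```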

17 pages, 9626 KiB  
Article
Semantic Segmentation of Distribution Network Point Clouds Based on NF-PTV2
by Long Han, Bin Song, Shaocheng Wu, Deyu Nie, Zhenyang Chen and Linong Wang
Electronics 2025, 14(4), 812; https://doi.org/10.3390/electronics14040812 - 19 Feb 2025
Cited by 3 | Viewed by 563
Abstract
An on-site survey is the primary task in live working on distribution networks. However, the traditional manual on-site survey method is neither intuitive nor efficient. The application of 3D point cloud technology has opened up new avenues for on-site surveys in live working on distribution networks. This paper focuses on the application of the Point Transformer V2 (PTV2) model to the segmentation of distribution network point clouds. Given its deficiencies in boundary discrimination and its limited feature extraction ability when processing distribution network point clouds, an improved Non-local Focal Loss-Point Transformer V2 (NF-PTV2) model is proposed. With PTV2 as its core, this model incorporates non-local attention to capture long-distance feature dependencies, thereby compensating for the PTV2 model’s shortcomings in extracting features of large-scale objects with complex features. Simultaneously, the Focal Loss function is introduced to address class imbalance and enhance the model’s ability to learn from small, complex samples. The experimental results demonstrate that the overall accuracy (OA) of this model on the distribution network dataset reached 93.28%, the mean intersection over union (mIoU) reached 81.58%, and the mean accuracy (mAcc) reached 87.21%. In summary, the NF-PTV2 model proposed in this article performs well in the point cloud segmentation task of distribution networks and can accurately identify various objects, which, to some extent, overcomes the limitations of the PTV2 model. Full article
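The Focal Loss mentioned above is a standard remedy for class imbalance; a generic multi-class form (not the authors' exact implementation) is sketched below in PyTorch, with the number of classes and point count chosen arbitrarily.

```python
# Generic multi-class focal loss sketch (not the authors' exact implementation):
# down-weight well-classified points so training focuses on hard, rare classes.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """logits: (N, C) per-point class scores; targets: (N,) integer class indices."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    return (-((1 - pt) ** gamma) * log_pt).mean()                  # focusing term * CE

# usage sketch: logits from a segmentation head over, e.g., 7 assumed point classes
logits = torch.randn(4096, 7)
targets = torch.randint(0, 7, (4096,))
print(focal_loss(logits, targets).item())
```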

16 pages, 1181 KiB  
Article
An Evaluation of Health Behavior Change Training for Health and Care Professionals in St. Helena
by Wendy Maltinsky, Vivien Swanson, Kamar Tanyan and Sarah Hotham
Healthcare 2025, 13(4), 435; https://doi.org/10.3390/healthcare13040435 - 18 Feb 2025
Cited by 1 | Viewed by 1570
Abstract
Background: Health behavior consultations support self-management if delivered by skilled practitioners. We summarize here the results of a collaborative training intervention program delivered to health and care practitioners working in a remote-island context. The program was designed to build confidence in the implementation of communication and behavior change skills and to sustain their use in work settings. The setting for the behavior change training program was the South Atlantic island of St. Helena, a remote low-to-middle-income country whose population has high levels of obesity and a high prevalence of long-term conditions. Objectives: We aimed to increase knowledge, confidence, and implementation of behavior change techniques (BCTs) and communication skills of health and social care staff by delivering and evaluating training using the MAP (Motivation, Action, Prompt) behavior change framework. A successful training intervention could ultimately improve self-management and patient health outcomes. Methods: Co-production with onsite representatives adapted the program for local delivery. A two-day training program was delivered face-to-face to 32 multidisciplinary staff. Pre- and post-intervention and 18-month follow-up evaluations assessed reactions, learning, and implementation using multiple methods, including participant feedback and primary care patient reports. Results: Positive reactions to training and significant improvements in confidence, perceived importance, intention to use, and implementation of BCTs and communication skills were observed immediately post-training and at long-term follow-up. Patient reports suggested some techniques became routinely used. Methodological difficulties arose due to staff retention and disruption through the COVID-19 pandemic. Conclusions: The delivery of health behavior change training can be effective in remote contexts, with sustainable impacts on healthcare. There are challenges to working in this context, including staff continuity and technological reliability. Full article
(This article belongs to the Section Nutrition and Public Health)

17 pages, 665 KiB  
Article
Telework Uncovered: Employees’ Perceptions Across Various Occupations in an Industrial Company
by Tea Korkeakunnas, Malin Lohela-Karlsson, Marina Heiden and Komalsingh Rambaree
Adm. Sci. 2025, 15(2), 56; https://doi.org/10.3390/admsci15020056 - 11 Feb 2025
Viewed by 1349
Abstract
To understand how telework is perceived among occupational groups with different work tasks within the same company, this qualitative study aimed to explore how managers and employees experience telework in relation to well-being, individual performance, and the work environment. The study used a phenomenographic approach. Fourteen online interviews, comprising seven managers and seven employees from the same industrial company, were conducted between February 2022 and September 2023. The data were analyzed inductively to capture variations in telework perceptions. The findings showed that telework is not universally beneficial or challenging; its effectiveness depends on contextual factors such as team setting, job role, type of work, and organizational culture. Telework benefited both employees and managers engaged in individual tasks (e.g., reading, drafting contracts, and preparing reports) or global collaborations, with gains in well-being, work–life balance, and overall performance. However, these benefits depended on starting with an office-based period that facilitated team cohesion, faster learning, and a deeper understanding of the organizational culture. Face-to-face onsite work could be time-consuming, and therefore stressful, for some, but time-saving for others. Onsite employees and managers faced increased workloads when colleagues teleworked, as employees tended to rely more on colleagues physically present in the office. This research highlights the need for tailored strategies to enhance the advantages of telework while reducing its challenges. It contributes to existing research by providing nuanced insights into the relationship between telework and occupational groups within an industrial setting and offering practical guidance for telework in this field. Full article

27 pages, 3968 KiB  
Article
Drowsiness Detection of Construction Workers: Accident Prevention Leveraging Yolov8 Deep Learning and Computer Vision Techniques
by Adetayo Olugbenga Onososen, Innocent Musonda, Damilola Onatayo, Abdullahi Babatunde Saka, Samuel Adeniyi Adekunle and Eniola Onatayo
Buildings 2025, 15(3), 500; https://doi.org/10.3390/buildings15030500 - 5 Feb 2025
Cited by 2 | Viewed by 1467
Abstract
Construction projects’ unsatisfactory performance has been linked to factors influencing individuals’ well-being and mental alertness on projects. Drowsiness is a significant indicator of sleep deprivation and fatigue, so identifying whether workers on site are cognitively and physically prepared to engage in construction tasks is important. As a consequence of the strenuous nature of construction work, long work hours, and environmental conditions, drowsiness is commonplace, yet it has received comparatively little attention despite being a leading cause of on-site accidents. Detecting drowsiness is essential for determining the safety and well-being of site workers. This study presents a vision-based approach using an improved version of the You Only Look Once (YOLOv8) algorithm for real-time drowsiness detection among construction workers. The proposed method leverages computer vision techniques to analyze facial and eye features, enabling the early detection of signs of drowsiness, effectively preventing accidents, and enhancing on-site safety. The model showed significant precision and efficiency in detecting drowsiness on the given dataset, achieving a mean average precision (mAP) of 92% for the drowsiness class. However, it also exhibited difficulties handling imbalanced classes, particularly the underrepresented ‘Awake with PPE’ class, which was detected with high precision but comparatively lower recall and mAP. This highlights the necessity of balanced datasets for optimal deep learning performance. The YOLOv8 model’s average mAP of 78% in drowsiness detection compared favorably with other studies employing different methodologies. The system improves productivity and reduces costs by preventing accidents and enhancing worker safety. However, limitations, such as sensitivity to lighting conditions and occlusions, must be addressed in future iterations. Full article
(This article belongs to the Special Issue Advances in Safety and Health at Work in Building Construction)
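On top of a per-frame detector like the one described above, a simple temporal rule can suppress single-frame false alarms by requiring the drowsy state to persist over several frames. The window length, threshold, and class name below are assumptions, not the study's parameters.

```python
# Sketch of a frame-level decision layer over a per-frame detector: raise an alert only
# when the "drowsy" class persists across several consecutive frames, damping
# single-frame false positives. Window length, threshold, and class name are assumptions.
from collections import deque

WINDOW = 15        # frames considered (roughly 0.5 s at 30 fps)
MIN_DROWSY = 12    # how many of them must contain a "drowsy" detection to alert

recent = deque(maxlen=WINDOW)

def update(frame_labels):
    """frame_labels: set of class names detected in the current frame."""
    recent.append("drowsy" in frame_labels)
    if sum(recent) >= MIN_DROWSY:
        return "ALERT: worker appears drowsy"
    return None

# usage sketch: for each video frame, pass the detector's class names, e.g.
# message = update(set(detected_class_names))
```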

25 pages, 595 KiB  
Article
‘They Were Surprised That Such Jobs Even Exist…’ Supporting Students’ Career Awareness During Learning Activities at Museums and Environmental Education Centres
by Helene Uppin and Inge Timoštšuk
Soc. Sci. 2024, 13(12), 696; https://doi.org/10.3390/socsci13120696 - 20 Dec 2024
Viewed by 1054
Abstract
Many factors influence students’ career awareness and future career choices. Curricula-related learning activities that entail boundary-crossing between formal and nonformal contexts, such as museums and environmental education centres, can also broaden perspectives. Out-of-school learning can unveil career trajectories, introduce professions, spark interest in new topics, and support lifelong learning. Nevertheless, it is unclear how on-site educators of museums and environmental education centres perceive or address supporting students’ career awareness. We aimed to explore how Estonian on-site educators perceive the connection between curricula-related learning at their institutions and students’ career awareness (namely, work-related knowledge and self-awareness). The qualitative data are drawn from two datasets: (1) 27 out-of-school educators chosen by purposeful sampling participated in focus-group interviews about their practice; (2) 43 out-of-school educators filled out open-ended online surveys on career awareness education. Qualitative content analysis was used to find meaningful patterns from the dataset. Various specific examples of work-related learning activities emerged. However, career awareness was often understood narrowly or had not been previously conceptualised: students’ self-awareness was seldom explicitly perceived as part of career awareness. Moreover, supporting students’ lifelong learning or the development of sustainability competencies was explicitly emphasised only by more experienced or outstanding on-site educators. Full article
(This article belongs to the Special Issue Improving Integration of Formal Education and Work-Based Learning)

38 pages, 5080 KiB  
Article
An Ensemble of Machine Learning Models for the Classification and Selection of Categorical Variables in Traffic Inspection Work of Importance for the Sustainable Execution of Events
by Aleksandar Đukić, Milorad K. Banjanin, Mirko Stojčić, Tihomir Đurić, Radenka Đekić and Dejan Anđelković
Sustainability 2024, 16(22), 9720; https://doi.org/10.3390/su16229720 - 7 Nov 2024
Viewed by 1507
Abstract
Traffic inspection (TraffIns) work in this article is positioned as a specific module of road traffic, with its primary function oriented towards monitoring and sustainably controlling safe traffic and the execution of significant events within a particular geographic area. Exploratory research on the significance of event execution in simple, complicated, and complex traffic flow and process situations is related to the activities of monitoring and controlling the functional states and performance of categorical variables. These variables include objects and locations of road infrastructure, communication infrastructure, and networks of traffic inspection resources. It is emphasized that the words “work” and “traffic” have the semantic status of synonyms (in one world language), which is explained in the design of the agent-based model of the complexity of the content and contextual structure of TraffIns work at the singular and plural levels, with 12 points of interest (POI) in the thematic research. An Event Execution Log (EEL) was created for on-site data collection with eight variables, seven of which are independent (event type, activities, objects, locations, host, duration period, and periodicity of the event) and one of which is dependent (significance of the event). The structured dataset includes 10,994 input-output vectors in 970 categories, collected in the EEL by 32 human agents (traffic inspectors) over a 30-day period. An algorithmic presentation of the methodological research procedure for preprocessing and final data processing in the ensemble of machine learning models for the classification and selection of TraffIns tasks is provided. Data cleaning was performed on the available dataset to increase data consistency for further processing. Vector elimination was carried out based on the Location variable, such that the total number of vectors equals the number of unique categories of this variable, which is 636. The main result of this research is the classification modeling of the significance of events in TraffIns work based on machine learning techniques and a Stacking ensemble. The machine learning models created for event significance classification achieve high accuracy. To evaluate the performance metrics of the Stacking ensemble, the confusion matrix, Precision, Recall, and F1 score are used. Full article
(This article belongs to the Special Issue Traffic Safety, Traffic Management, and Sustainable Mobility)
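A minimal sketch of a stacking ensemble over one-hot-encoded categorical variables, in the spirit of the pipeline described above; the columns, categories, base learners, and labels are invented placeholders rather than the authors' configuration.

```python
# Minimal stacking-ensemble sketch for classifying event significance from categorical
# inspection-log variables. Columns, categories, labels, and model choices are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# toy event-log rows; real variables and categories come from the EEL described above
X = pd.DataFrame({
    "event_type": ["sports", "concert", "protest", "fair", "sports", "concert"],
    "location":   ["stadium", "square", "bridge", "park", "stadium", "square"],
    "duration":   ["short", "long", "short", "long", "short", "long"],
})
y = ["high", "low", "high", "low", "high", "low"]   # invented significance labels

pre = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), list(X.columns))]
)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=2,
)
model = Pipeline([("pre", pre), ("stack", stack)]).fit(X, y)
print(model.predict(X.iloc[[0]]))
```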

24 pages, 1129 KiB  
Article
Infrared Image Generation Based on Visual State Space and Contrastive Learning
by Bing Li, Decao Ma, Fang He, Zhili Zhang, Daqiao Zhang and Shaopeng Li
Remote Sens. 2024, 16(20), 3817; https://doi.org/10.3390/rs16203817 - 14 Oct 2024
Cited by 1 | Viewed by 2068
Abstract
The preparation of infrared reference images is of great significance for improving the accuracy and precision of infrared imaging guidance. However, collecting infrared data on-site is difficult and time-consuming. Fortunately, infrared images can be obtained from the corresponding visible-light images to enrich the infrared data. To this end, the present work proposes an image translation algorithm that converts visible-light images to infrared images. This algorithm, named V2IGAN, is founded on a visual state space attention module and a multi-scale feature contrastive learning loss. Firstly, we introduce a visual state space attention module designed to sharpen the generative network’s focus on critical regions within visible-light images. This enhancement not only improves feature extraction but also bolsters the generator’s capacity to accurately model features, ultimately enhancing the quality of the generated images. Furthermore, the method incorporates a multi-scale feature contrastive learning loss function, which serves to bolster the robustness of the model and refine the detail of the generated images. Experimental results show that the V2IGAN method outperforms existing typical infrared image generation techniques in both subjective visual assessments and objective metric evaluations. This suggests that the V2IGAN method is adept at enhancing the feature representation in images, refining the details of the generated infrared images, and yielding reliable, high-quality results. Full article
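The multi-scale feature contrastive loss mentioned above builds on patch-wise contrastive objectives; a generic InfoNCE-style sketch over corresponding feature patches is shown below. This is an assumed, simplified form, not the V2IGAN implementation.

```python
# Generic InfoNCE-style patch contrastive sketch of the kind a multi-scale feature
# contrastive loss builds on: matching patches from the visible input and the generated
# infrared image are pulled together, mismatched ones pushed apart. Assumed form only.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_gen, temperature=0.07):
    """feat_src, feat_gen: (N, D) features of N corresponding patches."""
    src = F.normalize(feat_src, dim=-1)
    gen = F.normalize(feat_gen, dim=-1)
    logits = gen @ src.t() / temperature      # (N, N) patch-to-patch similarities
    targets = torch.arange(len(src))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# usage sketch: sum this term over patch features sampled at several encoder scales
scales = [torch.randn(256, 128), torch.randn(256, 256)]
loss = sum(patch_nce_loss(f, f + 0.1 * torch.randn_like(f)) for f in scales)
print(loss.item())
```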
