Search Results (88)

Search Parameters:
Keywords = push net

21 pages, 3607 KB  
Article
Efficient Image Restoration for Autonomous Vehicles and Traffic Systems: A Knowledge Distillation Approach to Enhancing Environmental Perception
by Yongheng Zhang
Computers 2025, 14(11), 459; https://doi.org/10.3390/computers14110459 - 24 Oct 2025
Viewed by 436
Abstract
Image restoration tasks such as deraining, deblurring, and dehazing are crucial for enhancing the environmental perception of autonomous vehicles and traffic systems, particularly for tasks like vehicle detection, pedestrian detection, and lane line identification. While transformer-based models excel in these tasks, their prohibitive computational complexity hinders real-world deployment on resource-constrained platforms. To bridge this gap, this paper introduces a novel Soft Knowledge Distillation (SKD) framework, designed specifically for creating highly efficient yet powerful image restoration models. Our core innovation is twofold: first, we propose a Multi-dimensional Cross-Net Attention (MCA) mechanism that allows a compact student model to learn comprehensive attention relationships from a large teacher model across both spatial and channel dimensions, capturing fine-grained details essential for high-quality restoration. Second, we pioneer the use of a contrastive learning loss at the reconstruction level, treating the teacher’s outputs as positives and the degraded inputs as negatives, which significantly elevates the student’s reconstruction quality. Extensive experiments demonstrate that our method achieves a superior trade-off between performance and efficiency, notably enhancing downstream tasks like object detection. The primary contributions of this work lie in delivering a practical and compelling solution for real-time perceptual enhancement in autonomous systems, pushing the boundaries of efficient model design. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
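The reconstruction-level contrastive loss described in this abstract, with the teacher's output as the positive and the degraded input as the negative, can be sketched as a distance ratio. The following is a hedged numpy illustration; the L1 metric, function name, and toy tensors are assumptions, not the paper's exact formulation:

```python
import numpy as np

def contrastive_restoration_loss(student, teacher, degraded, eps=1e-8):
    """Contrastive loss at the reconstruction level: pull the student's
    output toward the teacher's restoration (positive sample) and push
    it away from the degraded input (negative sample)."""
    pos = np.abs(student - teacher).mean()   # distance to positive
    neg = np.abs(student - degraded).mean()  # distance to negative
    return pos / (neg + eps)                 # lower is better

# A student output close to the teacher scores lower than one that
# stays close to the degraded input.
teacher = np.ones((8, 8))
degraded = np.zeros((8, 8))
good = contrastive_restoration_loss(teacher * 0.9, teacher, degraded)
bad = contrastive_restoration_loss(degraded + 0.1, teacher, degraded)
assert good < bad
```

Minimizing the ratio simultaneously rewards closeness to the teacher and distance from the degradation, which is the "positives vs. negatives" behavior the abstract describes.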

14 pages, 3620 KB  
Article
Lung Opacity Segmentation in Chest CT Images Using Multi-Head and Multi-Channel U-Nets with Partially Supervised Learning
by Shingo Mabu, Takuya Hamada, Satoru Ikebe and Shoji Kido
Appl. Sci. 2025, 15(19), 10373; https://doi.org/10.3390/app151910373 - 24 Sep 2025
Viewed by 335
Abstract
There has been a large amount of research applying deep learning to the medical field. However, obtaining sufficient training data is challenging in the medical domain because annotation requires specialized knowledge and significant effort. This is especially true for segmentation tasks, where preparing fully annotated data for every pixel within an image is difficult. To address this, we propose methods to extract useful features for segmentation using two types of U-net-based networks and partially supervised learning with incomplete annotated data. This research specifically focuses on the segmentation of diffuse lung disease opacities in chest CT images. In our dataset, each image is partially annotated with a single type of lung opacity. To tackle this, we designed two distinct U-net architectures: a multi-head U-net, which utilizes a shared encoder and separated decoders for each opacity type, and a multi-channel U-net, which shares the encoder and decoder layers for more efficient feature learning. Furthermore, we integrated partially supervised learning with these networks. This involves employing distinct loss functions to both bring annotated regions (ground truth) and segmented regions (predictions) closer, and to push them apart, thereby suppressing erroneous predictions. In our experiments, we trained the models on partially annotated data and subsequently tested them on fully annotated data to compare the segmentation performance of each method. The results show that the multi-channel model applying partially supervised learning achieved the best performance while also reducing the number of weight parameters. Full article
(This article belongs to the Special Issue Pattern Recognition Applications of Neural Networks and Deep Learning)
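The attraction/repulsion idea behind the partially supervised loss can be illustrated schematically. The abstract does not specify the exact loss functions, so the formulation, names, and weights below are assumptions:

```python
import numpy as np

def partial_supervision_loss(probs, mask, annotated_ch, push_weight=0.5, eps=1e-7):
    """probs: (C, H, W) per-opacity probabilities. mask: (H, W) binary
    annotation for the single opacity type labelled in this image
    (all other pixels are unlabelled, not negative).
    Attraction: negative log-likelihood of the annotated channel on
    annotated pixels. Repulsion: push the other channels toward zero
    on those same pixels, suppressing contradictory predictions."""
    p = np.clip(probs[annotated_ch], eps, 1.0)
    attract = -np.log(p[mask == 1]).mean()
    others = np.delete(probs, annotated_ch, axis=0)
    repel = others[:, mask == 1].mean()
    return attract + push_weight * repel

# A prediction that agrees with the partial annotation scores lower.
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1
good = np.full((2, 4, 4), 0.5); good[0][mask == 1] = 0.95; good[1][mask == 1] = 0.05
bad = np.full((2, 4, 4), 0.5); bad[0][mask == 1] = 0.05; bad[1][mask == 1] = 0.95
assert partial_supervision_loss(good, mask, 0) < partial_supervision_loss(bad, mask, 0)
```

Note that unannotated pixels contribute nothing, which is the key difference from fully supervised training.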

14 pages, 3320 KB  
Article
Numerical Simulation Research on Thermoacoustic Instability of Cryogenic Hydrogen Filling Pipeline
by Qidong Zhang, Yuan Ma, Fushou Xie, Liqiang Ai, Shengbao Wu and Yanzhong Li
Cryo 2025, 1(3), 9; https://doi.org/10.3390/cryo1030009 - 9 Jul 2025
Cited by 1 | Viewed by 439
Abstract
This article uses FLUENT to construct a two-dimensional axisymmetric numerical model of a cryogenic hydrogen charging pipeline. By imposing an initial temperature gradient and a transient initial pressure disturbance, the basic characteristics of low-temperature hydrogen Taconis thermoacoustic oscillation (TAO) are calculated, including temperature, heat flux density distribution, pressure amplitude, and frequency. The instability boundary of hydrogen TAO is also obtained. The results show that (1) the temperature distribution and flow characteristics of the gas inside the pipeline exhibit significant periodic changes. In the first half of the oscillation period, the cold-end gas moves towards the end of the pipeline. Low-viscosity cold hydrogen is easily heated and rapidly expands. In the second half of the cycle, the expanding cold gas pushes the hot-end gas to move towards the cold end, forming a low-pressure zone and causing gas backflow. (2) Thermoacoustic oscillation can also cause additional thermal leakage on the pipeline wall. The average heat flux during one cycle is 1150.1 W/m2 for inflow and 1087.7 W/m2 for outflow, with a net inflow heat flux of 62.4 W/m2. (3) The instability boundary of the system is mainly determined by the temperature ratio of the cold and hot ends α, the temperature gradient β, and the length ratio of the cold and hot ends ξ. Increasing the pipe diameter and minimizing the pipe length can effectively weaken the amplitude of thermoacoustic oscillations. This study provides theoretical support for predicting thermoacoustic oscillations in low-temperature hydrogen transport pipeline systems and offers insights for system stability control and design verification. Full article

43 pages, 6844 KB  
Article
CORE-ReID V2: Advancing the Domain Adaptation for Object Re-Identification with Optimized Training and Ensemble Fusion
by Trinh Quoc Nguyen, Oky Dicky Ardiansyah Prima, Syahid Al Irfan, Hindriyanto Dwi Purnomo and Radius Tanone
AI Sens. 2025, 1(1), 4; https://doi.org/10.3390/aisens1010004 - 4 Jul 2025
Cited by 1 | Viewed by 1632
Abstract
This study presents CORE-ReID V2, an enhanced framework built upon CORE-ReID V1. The new framework extends its predecessor by addressing unsupervised domain adaptation (UDA) challenges in person ReID and vehicle ReID, with further applicability to object ReID. During pre-training, CycleGAN is employed to synthesize diverse data, bridging image characteristic gaps across different domains. In the fine-tuning stage, an advanced ensemble fusion mechanism, consisting of the Efficient Channel Attention Block (ECAB) and the Simplified Efficient Channel Attention Block (SECAB), enhances both local and global feature representations while reducing ambiguity in pseudo-labels for target samples. Experimental results on widely used UDA person ReID and vehicle ReID datasets demonstrate that the proposed framework outperforms state-of-the-art methods, achieving top performance in mean average precision (mAP) and Rank-k Accuracy (Top-1, Top-5, Top-10). Moreover, the framework supports lightweight backbones such as ResNet18 and ResNet34, ensuring both scalability and efficiency. Our work not only pushes the boundaries of UDA-based object ReID but also provides a solid foundation for further research and advancements in this domain. Full article
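The abstract does not detail ECAB/SECAB internals. As a rough illustration only, an ECA-style channel-attention block (the general mechanism the names suggest) can be sketched as follows; the kernel values, kernel size, and padding mode are assumptions:

```python
import numpy as np

def eca_channel_attention(x, kernel):
    """x: (C, H, W) feature map. Global-average-pool each channel into
    a descriptor, slide a small 1-D convolution across the channels
    (local cross-channel interaction), squash with a sigmoid, and
    rescale the input channel-wise."""
    k = len(kernel)                          # e.g. 3; assumed odd
    desc = x.mean(axis=(1, 2))               # (C,) channel descriptor
    padded = np.pad(desc, k // 2, mode="edge")
    attn = np.array([padded[i:i + k] @ kernel for i in range(len(desc))])
    weights = 1.0 / (1.0 + np.exp(-attn))    # sigmoid -> (0, 1)
    return x * weights[:, None, None]

x = np.ones((4, 5, 5))
out = eca_channel_attention(x, np.array([0.2, 0.6, 0.2]))
assert out.shape == x.shape and np.all(out < x)  # weights < 1 shrink positive maps
```

Channel attention of this kind re-weights feature channels cheaply, which is consistent with the framework's emphasis on lightweight backbones.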

14 pages, 2035 KB  
Article
Integration of YOLOv9 Segmentation and Monocular Depth Estimation in Thermal Imaging for Prediction of Estrus in Sows Based on Pixel Intensity Analysis
by Iyad Almadani, Aaron L. Robinson and Mohammed Abuhussein
Digital 2025, 5(2), 22; https://doi.org/10.3390/digital5020022 - 13 Jun 2025
Viewed by 777
Abstract
Many researchers focus on improving reproductive health in sows and ensuring successful breeding by accurately identifying the optimal time of ovulation through estrus detection. One promising non-contact technique involves using computer vision to analyze temperature variations in thermal images of the sow’s vulva. However, variations in camera distance during dataset collection can significantly affect the accuracy of this method, as different distances alter the resolution of the region of interest, causing pixel intensity values to represent varying areas and temperatures. This inconsistency hinders the detection of the subtle temperature differences required to distinguish between estrus and non-estrus states. Moreover, failure to maintain a consistent camera distance, along with external factors such as atmospheric conditions and improper calibration, can distort temperature readings, further compromising data accuracy and reliability. Furthermore, without addressing distance variations, the model’s generalizability diminishes, increasing the likelihood of false positives and negatives and ultimately reducing the effectiveness of estrus detection. In our previously proposed methodology for estrus detection in sows, we utilized YOLOv8 for segmentation and keypoint detection, while monocular depth estimation was used for camera calibration. This calibration helps establish a functional relationship between the measurements in the image (such as distances between labia, the clitoris-to-perineum distance, and vulva perimeter) and the depth distance to the camera, enabling accurate adjustments and calibration for our analysis. Estrus classification is performed by comparing new data points with reference datasets using a three-nearest-neighbor voting system. In this paper, we aim to enhance our previous method by incorporating the mean pixel intensity of the region of interest as an additional factor. 
We propose a detailed four-step methodology coupled with two stages of evaluation. First, we carefully annotate masks around the vulva to calculate its perimeter precisely. Leveraging the advantages of deep learning, we train a model on these annotated images, enabling segmentation using the cutting-edge YOLOv9 algorithm. This segmentation enables the detection of the sow’s vulva, allowing for analysis of its shape and facilitating the calculation of the mean pixel intensity in the region. Crucially, we use monocular depth estimation from the previous method, establishing a functional link between pixel intensity and the distance to the camera, ensuring accuracy in our analysis. We then introduce a classification approach that differentiates between estrus and non-estrus regions based on the mean pixel intensity of the vulva. This classification method involves calculating Euclidean distances between new data points and reference points from two datasets: one for “estrus” and the other for “non-estrus”. The classification process identifies the five closest neighbors from the datasets and applies a majority voting system to determine the label. A new point is classified as “estrus” if the majority of its nearest neighbors are labeled as estrus; otherwise, it is classified as “non-estrus”. This automated approach offers a robust solution for accurate estrus detection. To validate our method, we propose two evaluation stages: first, a quantitative analysis comparing the performance of our new YOLOv9 segmentation model with the older U-Net and YOLOv8 models. Secondly, we assess the classification process by defining a confusion matrix and comparing the results of our previous method, which used the three nearest points, with those of our new model that utilizes five nearest points. This comparison allows us to evaluate the improvements in accuracy and performance achieved with the updated model. 
The automation of this vital process holds the potential to revolutionize reproductive health management in agriculture, boosting breeding success rates. Through thorough evaluation and experimentation, our research highlights the transformative power of computer vision, pushing forward more advanced practices in the field. Full article
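The five-nearest-neighbour majority vote over Euclidean distances described above can be sketched in a few lines of numpy; the one-dimensional feature layout (mean pixel intensity) and the reference values below are hypothetical:

```python
import numpy as np

def knn_vote(point, estrus_refs, non_estrus_refs, k=5):
    """Classify a feature point (e.g. mean vulva pixel intensity) by
    Euclidean distance to the two reference sets, with a majority vote
    over the k nearest neighbours."""
    refs = np.vstack([estrus_refs, non_estrus_refs])
    labels = np.array([1] * len(estrus_refs) + [0] * len(non_estrus_refs))
    d = np.linalg.norm(refs - point, axis=1)      # Euclidean distances
    nearest = labels[np.argsort(d)[:k]]           # labels of k closest refs
    return "estrus" if nearest.sum() > k // 2 else "non-estrus"

# Hypothetical intensity references for the two classes.
estrus = np.array([[200.0], [205.0], [210.0], [198.0]])
non_estrus = np.array([[150.0], [148.0], [155.0], [160.0]])
assert knn_vote(np.array([202.0]), estrus, non_estrus) == "estrus"
assert knn_vote(np.array([152.0]), estrus, non_estrus) == "non-estrus"
```

Moving from three to five neighbours, as the paper does, simply changes `k`; an odd `k` avoids tied votes.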

25 pages, 1339 KB  
Article
Link-State-Aware Proactive Data Delivery in Integrated Satellite–Terrestrial Networks for Multi-Modal Remote Sensing
by Ranshu Peng, Chunjiang Bian, Shi Chen and Min Wu
Remote Sens. 2025, 17(11), 1905; https://doi.org/10.3390/rs17111905 - 30 May 2025
Viewed by 1094
Abstract
This paper seeks to address the limitations of conventional remote sensing data dissemination algorithms, particularly their inability to model fine-grained multi-modal heterogeneous feature correlations and adapt to dynamic network topologies under resource constraints. This paper proposes multi-modal-MAPPO, a novel multi-modal deep reinforcement learning (MDRL) framework designed for a proactive data push in large-scale integrated satellite–terrestrial networks (ISTNs). By integrating satellite cache states, user cache states, and multi-modal data attributes (including imagery, metadata, and temporal request patterns) into a unified Markov decision process (MDP), our approach pioneers the application of the multi-actor-attention-critic with parameter sharing (MAPPO) algorithm to ISTNs push tasks. Central to this framework is a dual-branch actor network architecture that dynamically fuses heterogeneous modalities: a lightweight MobileNet-v3-small backbone extracts semantic features from remote sensing imagery, while parallel branches—a multi-layer perceptron (MLP) for static attributes (e.g., payload specifications, geolocation tags) and a long short-term memory (LSTM) network for temporal user cache patterns—jointly model contextual and historical dependencies. A dynamically weighted attention mechanism further adapts modality-specific contributions to enhance cross-modal correlation modeling in complex, time-varying scenarios. To mitigate the curse of dimensionality in high-dimensional action spaces, we introduce a multi-dimensional discretization strategy that decomposes decisions into hierarchical sub-policies, balancing computational efficiency and decision granularity. Comprehensive experiments against state-of-the-art baselines (MAPPO, MAAC) demonstrate that multi-modal-MAPPO reduces the average content delivery latency by 53.55% and 29.55%, respectively, while improving push hit rates by 0.1718 and 0.4248. 
These results establish the framework as a scalable and adaptive solution for real-time intelligent data services in next-generation ISTNs, addressing critical challenges in resource-constrained, dynamic satellite–terrestrial environments. Full article
(This article belongs to the Special Issue Advances in Multi-Source Remote Sensing Data Fusion and Analysis)

48 pages, 10706 KB  
Review
Navigating Sustainability: A Bibliometric Exploration of Environmental Decision-Making and Behavioral Shifts
by Maria Alexandra Crăciun, Adrian Domenteanu, Monica Dudian and Camelia Delcea
Sustainability 2025, 17(6), 2646; https://doi.org/10.3390/su17062646 - 17 Mar 2025
Cited by 2 | Viewed by 1430
Abstract
In recent years, the research area around environmental decision-making has drawn more and more interest, driven by a major global push to achieve net-zero emissions. Significant investments in areas such as electric vehicles and renewable energy, coupled with increasingly limited access to natural resources, have intensified research efforts in this field. New and emerging research is aimed at shifting consumer behavior to make more sustainable decisions. Building on this context, the purpose of this paper is to explore academic publications related to decision-making and behavioral shifts in the context of sustainability. Using an advanced bibliometric tool such as Biblioshiny, the authors analyze an Institute for Scientific Information Web of Science dataset to identify the main authors and affiliated journals and map the academic and thematic evolution of this interdisciplinary field, including the key topics and countries involved. The analysis reveals a 6.68% annual growth rate. Through thematic maps, three-field plots, word clouds, and a review of the top 10 most cited papers, this study provides a comprehensive overview of the evolving domain of environmental decision-making. Full article
(This article belongs to the Special Issue Sustainable Energy: The Path to a Low-Carbon Economy)

15 pages, 9987 KB  
Article
Characterizing Optimum N Rate in Waterlogged Maize (Zea mays L.) with Unmanned Aerial Vehicle (UAV) Remote Sensing
by Bhawana Acharya, Syam Dodla, Brenda Tubana, Thanos Gentimis, Fagner Rontani, Rejina Adhikari, Dulis Duron, Giulia Bortolon and Tri Setiyono
Agronomy 2025, 15(2), 434; https://doi.org/10.3390/agronomy15020434 - 10 Feb 2025
Cited by 2 | Viewed by 1319
Abstract
High soil moisture due to frequent excessive precipitation can lead to reductions in maize grain yields and increased nitrogen (N) loss. The traditional methods of computing N status in crops are destructive and time-consuming, especially in waterlogged fields. Therefore, in this study, we used unmanned aerial vehicle (UAV) remote sensing to evaluate the status of maize under different N rates and excessive soil moisture conditions. The experiment was performed using a split plot design with four replications, with soil moisture conditions as main plots and different N rates as sub-plots. The SciPy (version 1.5.2) optimization library and a spherical response function were used to estimate the economically optimum N rate (EONR) under the different treatments. The computed EONR for CRS 2022 was 157 kg N ha−1 for both treatments, with the maximum net return to N of USD 1203 ha−1. In 2023, the analysis suggested a lower maximum attainable yield in excessive water conditions, with the EONR pushed up to 197 kg N ha−1 as compared to 185 kg N ha−1 in the control treatment, resulting in a lower maximum net return to N of USD 884 ha−1 as compared to USD 1019 ha−1 in the control treatment. This study reveals a slight reduction in the fraction of NDRE at EONR to maximum NDRE under excessive water conditions, highlighting the need to address such abiotic stress circumstances when arriving at an N rate recommendation based on an N-rich strip concept. This study confirms the importance of sensing technology for N monitoring in maize, particularly in supporting decision making in nutrient management under adverse weather conditions. Full article
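The paper fits a spherical response function with SciPy; the EONR logic itself is easier to see with an illustrative quadratic yield response, where the optimum N rate equates the marginal yield value with the N cost. All coefficients and prices below are hypothetical:

```python
import numpy as np

def eonr_quadratic(a, b, c, grain_price, n_cost):
    """For an illustrative quadratic yield response
    yield(N) = a + b*N + c*N**2 (c < 0), the economically optimum
    N rate satisfies b + 2*c*N = n_cost / grain_price."""
    return (n_cost / grain_price - b) / (2 * c)

# Hypothetical response, checked against a brute-force net-return search.
a, b, c = 4000.0, 45.0, -0.12        # yield coefficients (kg grain / ha)
price, cost = 0.20, 1.10             # USD per kg grain / per kg N
n_star = eonr_quadratic(a, b, c, price, cost)
grid = np.linspace(0, 300, 3001)
net = price * (a + b * grid + c * grid**2) - cost * grid
assert abs(grid[np.argmax(net)] - n_star) < 0.5
```

With a spherical (or any other) response fitted to field data, the same principle applies: maximize net return, not raw yield, which is why the waterlogged treatment can shift the EONR even as attainable yield falls.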

17 pages, 5789 KB  
Article
Development of Low-Resistance Coastal Stow Net Using Numerical Analysis and Model Experiments
by Sua Park, Jihoon Lee, Minseo Park and Daeyeon Cho
Fishes 2024, 9(11), 434; https://doi.org/10.3390/fishes9110434 - 26 Oct 2024
Viewed by 1051
Abstract
In coastal stow net fishing, the heavy weight of a typical anchor (750–1000 kg) can increase the risk of capsizing the boat and crew member injury during hoisting operations. Thus, to prevent these accidents, a reduction in the anchor weight is required. One strategy to achieve this is to reduce the resistance force of the fishing gear used, which would allow lighter anchors to be employed. This requires the accurate estimation of the resistance force for various gear designs. Therefore, the resistance force and shape during the operation of two representative types of coastal stow nets currently employed in the Korean coastal stow net fishing industry were investigated using simulations and modeling experiments. The modeled fishing gear was divided into four sections according to the mesh size. Based on the results, the twine thickness was reduced in order to target areas of the gear where the greatest resistance was observed, while the front part of the gear was redesigned to prevent the front of the net from being pushed back into a suboptimal shape. The proposed low-resistance fishing gear has the potential to improve occupational safety in the coastal stow net fishing industry. Full article

26 pages, 5286 KB  
Article
0-D Dynamic Performance Simulation of Hydrogen-Fueled Turboshaft Engine
by Mattia Magnani, Giacomo Silvagni, Vittorio Ravaglioli and Fabrizio Ponti
Aerospace 2024, 11(10), 816; https://doi.org/10.3390/aerospace11100816 - 6 Oct 2024
Cited by 2 | Viewed by 1935
Abstract
In the last few decades, the problem of pollution resulting from human activities has pushed research toward zero or net-zero carbon solutions for transportation. The main objective of this paper is to perform a preliminary performance assessment of the use of hydrogen in conventional turbine engines for aeronautical applications. A 0-D dynamic model of the Allison 250 C-18 turboshaft engine was designed and validated using conventional aviation fuel (kerosene Jet A-1). A dedicated experimental campaign covering the whole engine operating range was conducted to obtain the thermodynamic data for the main engine components: the compressor, lateral ducts, combustion chamber, high- and low-pressure turbines, and exhaust nozzle. A theoretical chemical combustion model based on the NASA-CEA database was used to account for the energy conversion process in the combustor and to obtain quantitative feedback from the model in terms of fuel consumption. Once the engine and its turbomachinery were characterized, the work focused on designing a 0-D dynamic engine model based on the engine’s characteristics and the experimental data using the MATLAB/Simulink environment, capable of replicating the real engine behavior. The 0-D dynamic model was then validated against the acquired data and used to predict the engine’s performance with a different throttle profile (close to realistic request profiles during flight). Finally, the 0-D dynamic engine model was used to predict the performance of the engine using hydrogen as the input of the theoretical combustion model. The outputs of simulations running conventional kerosene Jet A-1 and hydrogen using different throttle profiles were compared, showing up to a 64% reduction in fuel mass flow rate and a 3% increase in thermal efficiency using hydrogen in flight-like conditions. The results confirm the potential of hydrogen as a suitable alternative fuel for small turbine engines and aircraft. Full article
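The reported 64% reduction in fuel mass flow is consistent with a back-of-the-envelope heating-value comparison. The LHV figures below are approximate textbook values, and real fuel flow also depends on cycle and efficiency changes:

```python
# Approximate lower heating values (MJ/kg); exact values vary by source.
LHV_JET_A1 = 43.0
LHV_H2 = 120.0

# At equal heat input to the combustor, hydrogen's higher energy density
# cuts the required fuel mass flow roughly by the LHV ratio.
reduction = 1.0 - LHV_JET_A1 / LHV_H2
print(f"mass-flow reduction ≈ {reduction:.0%}")  # ≈ 64%
```

The agreement suggests the simulated fuel-flow saving is dominated by the energy-density difference between the two fuels.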

13 pages, 1813 KB  
Perspective
The Potential Relationship between Biomass, Biorefineries, and Bitcoin
by Georgeio Semaan, Guizhou Wang, Quoc Si Vo and Gopalakrishnan Kumar
Sustainability 2024, 16(18), 7919; https://doi.org/10.3390/su16187919 - 11 Sep 2024
Cited by 1 | Viewed by 4779
Abstract
Despite advances in biofuel production and biomass processing technologies, biorefineries still experience commercialization issues. When costs exceed revenues, their long-term economic sustainability is threatened. Although integrated biorefineries have significant global potential due to process integration and product co-generation, it is crucial that they generate a positive net return, thereby incentivizing their continual operation. Nonetheless, research and development into new system designs and process integration are required to address current biorefinery inefficiencies. The integration of Bitcoin mining into biorefineries represents an innovative approach to diversify revenue streams and potentially offset costs, ensuring the economic viability and commercial success of biorefineries. When using bio-H2, a total of 3904 sats/kg fuel can be obtained as opposed to 537 sats/kg fuel when using syngas. Bitcoin, whether produced onsite or not, is an accretive asset that can offset the sales price of other produced biochemicals and biomaterials, thereby making biorefineries more competitive at offering their products. Collaborations with policy makers and industry stakeholders will be essential to address regulatory challenges and develop supportive frameworks for widespread implementation. Over time, the integration of Bitcoin mining in biorefineries could transform the financial dynamics of the bio-based products market, making them more affordable and accessible whilst pushing towards sustainable development and energy transition. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)

21 pages, 6058 KB  
Article
Synergistic Pushing and Grasping for Enhanced Robotic Manipulation Using Deep Reinforcement Learning
by Birhanemeskel Alamir Shiferaw, Tayachew F. Agidew, Ali Saeed Alzahrani and Ramasamy Srinivasagan
Actuators 2024, 13(8), 316; https://doi.org/10.3390/act13080316 - 20 Aug 2024
Cited by 2 | Viewed by 3062
Abstract
In robotic manipulation, achieving efficient and reliable grasping in cluttered environments remains a significant challenge. This study presents a novel approach that integrates pushing and grasping actions using deep reinforcement learning. The proposed model employs two fully convolutional neural networks—Push-Net and Grasp-Net—that predict pixel-wise Q-values for potential pushing and grasping actions from heightmap images of the scene. The training process utilizes deep Q-learning with a reward structure that incentivizes both successful pushes and grasps, encouraging the robot to create favorable conditions for grasping through strategic pushing actions. Simulation results demonstrate that the proposed model significantly outperforms traditional grasp-only policies, achieving an 87% grasp success rate in cluttered environments, compared to 60% for grasp-only approaches. The model shows robust performance in various challenging scenarios, including well-ordered configurations and novel objects, with completion rates of up to 100% and grasp success rates as high as 95.8%. These findings highlight the model’s ability to generalize to unseen objects and configurations, making it a practical solution for real-world robotic manipulation tasks. Full article
(This article belongs to the Section Actuators for Robotics)
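Pixel-wise Q-value maps turn action selection into an argmax over primitive × image location. A minimal sketch under assumed map shapes (not the authors' code; the networks named Push-Net and Grasp-Net would supply the maps):

```python
import numpy as np

def select_action(q_push, q_grasp):
    """Given pixel-wise Q-value maps for the two motion primitives,
    pick the primitive and image location with the highest
    expected return."""
    q = np.stack([q_push, q_grasp])           # (2, H, W)
    prim, y, x = np.unravel_index(np.argmax(q), q.shape)
    return ("push", "grasp")[prim], (y, x)

q_push = np.zeros((4, 4)); q_push[1, 2] = 0.6
q_grasp = np.zeros((4, 4)); q_grasp[3, 0] = 0.9
assert select_action(q_push, q_grasp) == ("grasp", (3, 0))
```

Because both primitives compete in one argmax, a push is chosen exactly when the network expects it to set up a higher-value grasp than any grasp available now.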

25 pages, 2248 KB  
Article
SCMs: Systematic Conglomerated Models for Audio Cough Signal Classification
by Sunil Kumar Prabhakar and Dong-Ok Won
Algorithms 2024, 17(7), 302; https://doi.org/10.3390/a17070302 - 8 Jul 2024
Cited by 2 | Viewed by 1671
Abstract
Coughing is a common and natural physiological response of the human body that pushes air and other waste out of the airways. Coughs occur due to environmental factors, allergic responses, pollution, or disease, and can be either dry or wet depending on the amount of mucus produced. A characteristic feature of a cough is its sound, which mostly resembles a quacking sound. Human cough sounds can be monitored continuously, and so cough sound classification has attracted a lot of interest in the research community in the last decade. In this research, three systematic conglomerated models (SCMs) are proposed for audio cough signal classification. The first conglomerated technique utilizes robust models such as the Cross-Correlation Function (CCF) and Partial Cross-Correlation Function (PCCF) model, the Least Absolute Shrinkage and Selection Operator (LASSO) model, and an elastic net regularization model with Gabor dictionary analysis and efficient ensemble machine learning techniques; the second technique utilizes stacked conditional autoencoders (SAEs); and the third technique utilizes efficient feature extraction schemes like the Tunable Q Wavelet Transform (TQWT), sparse TQWT, the Maximal Information Coefficient (MIC), and the Distance Correlation Coefficient (DCC), along with feature selection techniques like the Binary Tunicate Swarm Algorithm (BTSA), aggregation functions (AFs), factor analysis (FA), and explanatory factor analysis (EFA), classified with machine learning classifiers such as the kernel extreme learning machine (KELM), arc-cosine ELM, and Rat Swarm Optimization (RSO)-based KELM. The techniques were evaluated on publicly available datasets, and the results show that the highest classification accuracy of 98.99% was obtained when sparse TQWT with AF was implemented with an arc-cosine ELM classifier. Full article
(This article belongs to the Special Issue Quantum and Classical Artificial Intelligence)
13 pages, 3051 KB  
Article
Shoulder Bone Segmentation with DeepLab and U-Net
by Michael Carl, Kaustubh Lall, Darren Pai, Eric Y. Chang, Sheronda Statum, Anja Brau, Christine B. Chung, Maggie Fung and Won C. Bae
Osteology 2024, 4(2), 98-110; https://doi.org/10.3390/osteology4020008 - 11 Jun 2024
Cited by 3 | Viewed by 2964
Abstract
Evaluation of the 3D bone morphology of the glenohumeral joint is necessary for pre-surgical planning. Zero echo time (ZTE) magnetic resonance imaging (MRI) provides excellent bone contrast and can potentially be used in place of computed tomography. Segmentation of the shoulder anatomy, particularly the humeral head and the acetabulum, is needed for the detailed assessment of each anatomy and for pre-surgical preparation. In this study, we compared the performance of two popular deep learning models, based on Google's DeepLab and U-Net, for automated segmentation on ZTE MRI of human shoulders. Axial ZTE images of normal shoulders (n = 31) acquired at 3-Tesla were annotated for training with DeepLab and 2D U-Net, and the trained models were validated with testing data (n = 13). While both models showed visually satisfactory results for segmenting the humeral bone, U-Net slightly over-estimated and DeepLab under-estimated the segmented area compared to the ground truth. Testing accuracy quantified by Dice score was significantly higher (p < 0.05) for U-Net (88%) than for DeepLab (81%) for humeral segmentation. We have also implemented the U-Net model on an MRI console for push-button DL segmentation processing. Although this is an early work with limitations, our approach has the potential to improve shoulder MR evaluation hindered by manual post-processing and may provide clinical benefit for quickly visualizing bones of the glenohumeral joint. Full article
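The Dice score used above to quantify segmentation accuracy can be sketched as follows; the toy masks are illustrative stand-ins for humeral-bone segmentations, not data from the study:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks: the prediction slightly over-estimates the ground truth,
# analogous to the U-Net behavior described in the abstract.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True   # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:7] = True    # 20 predicted pixels (over-segmented)

print(round(dice_score(pred, truth), 3))  # 2*16/(20+16) = 0.889
```

A perfect segmentation gives a Dice score of 1.0; the metric penalizes both over- and under-segmentation symmetrically through the denominator.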

15 pages, 1877 KB  
Article
Life Cycle Assessment of a Wood Biomass Gasification Plant and Implications for Syngas and Biochar Utilization
by Francesco Arfelli, Cristian Tosi, Luca Ciacci and Fabrizio Passarini
Energies 2024, 17(11), 2599; https://doi.org/10.3390/en17112599 - 28 May 2024
Cited by 11 | Viewed by 4127
Abstract
The growing attention to environmental challenges in the energy sector is pushing the industrial system to investigate more sustainable and renewable energy sources to replace fossil ones. Among the promising alternatives, biomass is considered a valid source for reducing the fossil fraction of national energy mixes, but its multiple potential uses require environmental evaluation to understand the actual benefit of its use as an energy resource. For this purpose, life cycle assessment (LCA) is applied to a wood biomass gasification system aimed at producing electricity and heat from the combustion of the produced syngas, together with the management of the biochar. The aim is to provide a quantitative comparison of (i) a baseline scenario where wood biomass is sourced from waste and (ii) a second scenario where wood biomass is drawn from dedicated cultivation. A further evaluation investigated the environmental implications of the biochar composition, assuming it is used on land. The proposed strategies resulted in an environmental credit for both examined scenarios, but the outcomes showed a net preference for the baseline scenario, which achieved better environmental performance in all examined categories. This underlines the potential of using waste-sourced biomass. However, according to the Climate Change category, if on-site dedicated biomass cultivation is assumed for the second scenario, the baseline is preferable only if the biomass transportation distance is <600 km, estimated as the theoretical break-even distance between the scenarios. Finally, the biochar composition proved to be a particular concern for the toxicity-related categories. This study highlights the importance of applying objective and standardized methodologies such as LCA to evaluate energy production systems based on alternative sources and to support decision-making toward achieving sustainability goals. Full article
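The break-even logic behind the <600 km figure can be sketched as a simple comparison of burdens: waste-sourced biomass stays preferable while its extra transport impact is smaller than the cultivation impact it avoids. All numbers below are assumed, purely illustrative factors (not from the paper's inventory), chosen only to show the form of the calculation:

```python
# Hypothetical per-tonne impacts and transport emission factor (assumed values):
cultivation_impact = 30.0  # kg CO2-eq / t, dedicated on-site cultivation
waste_impact = 0.0         # kg CO2-eq / t, waste biomass treated as burden-free
transport_factor = 0.05    # kg CO2-eq / (t * km), lorry transport

# Waste sourcing remains preferable while:
#   waste_impact + transport_factor * distance < cultivation_impact
# Solving for distance gives the break-even point:
break_even_km = (cultivation_impact - waste_impact) / transport_factor
print(break_even_km)  # 600.0 under these assumed factors
```

With different inventory data the break-even distance changes accordingly; the structure of the comparison, a fixed avoided burden against a distance-proportional transport burden, is what produces a single threshold distance.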
(This article belongs to the Section B: Energy and Environment)
