Search Results (49)

Search Parameters:
Keywords = points of interest annotation

22 pages, 8968 KB  
Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 - 16 Oct 2025
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation with conventional desktop-based authoring tools has become a bottleneck, as it is time-consuming and requires skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, evaluations comparing our proposal with conventional tools have so far been insufficient to demonstrate its superiority, particularly in terms of interaction, authoring performance, and cognitive workload, where our tool uses 6DoF device movement for spatial input while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35 with varying LAR development experience to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster content creation with a 60% lower cognitive load, whereas the desktop tool demanded greater mental effort for manual data input and object verification. Full article
(This article belongs to the Section Information Applications)

10 pages, 4186 KB  
Proceeding Paper
Indirect Crop Line Detection in Precision Mechanical Weeding Using AI: A Comparative Analysis of Different Approaches
by Ioannis Glykos, Gerassimos G. Peteinatos and Konstantinos G. Arvanitis
Eng. Proc. 2025, 104(1), 32; https://doi.org/10.3390/engproc2025104032 - 25 Aug 2025
Viewed by 369
Abstract
Growing interest in organic food, European regulations limiting chemical usage, and the declining effectiveness of herbicides due to weed resistance are all driving a shift towards mechanical weeding. For mechanical weeding to be effective, tools must pass near the crops in both the inter- and intra-row areas. AI-based computer vision can assist in detecting crop lines and accurately guiding weeding tools. Additionally, AI-driven image analysis can be used for selective intra-row weeding with mechanized blades, distinguishing crops from weeds. Until now, however, these tasks have required two separate systems. To enable simultaneous in-row weeding and row alignment, YOLOv8n and YOLO11n were trained and compared in a lettuce field (Lactuca sativa L.). The models were evaluated on several metrics and on inference time for three image sizes. Crop lines were generated through linear regression on the bounding box centers of detected plants and compared against manually drawn ground-truth lines, produced during the annotation process, using several deviation metrics. As more than one line can appear per image, the proposed methodology for assigning points to their corresponding crop line was tested with three approaches using different empirical factor values. The best-performing approach achieved a mean horizontal error of 45 pixels, demonstrating the feasibility of a dual-function system using a single vision model. Full article
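The crop-line step above (a linear regression over the bounding-box centers of detected plants, scored by horizontal deviation from ground truth) can be sketched as follows; the box format, the x-on-y parameterization, and the error metric are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def fit_crop_line(boxes):
    """Fit a crop line through bounding-box centers via least squares.

    boxes: list of (x1, y1, x2, y2) detections assigned to one crop row.
    Returns (slope, intercept) of the line x = slope * y + intercept,
    parameterized by image row so near-vertical rows stay stable.
    """
    centers = np.array([((x1 + x2) / 2, (y1 + y2) / 2)
                        for x1, y1, x2, y2 in boxes])
    xs, ys = centers[:, 0], centers[:, 1]
    slope, intercept = np.polyfit(ys, xs, deg=1)
    return slope, intercept

def mean_horizontal_error(line, gt_points):
    """Mean |x_pred - x_gt| in pixels over ground-truth points (x, y)."""
    slope, intercept = line
    return float(np.mean([abs(slope * y + intercept - x)
                          for x, y in gt_points]))
```

Parameterizing x as a function of y keeps the fit stable for near-vertical crop rows, where a conventional y-on-x regression would degenerate.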

21 pages, 4314 KB  
Article
Panoptic Plant Recognition in 3D Point Clouds: A Dual-Representation Learning Approach with the PP3D Dataset
by Lin Zhao, Sheng Wu, Jiahao Fu, Shilin Fang, Shan Liu and Tengping Jiang
Remote Sens. 2025, 17(15), 2673; https://doi.org/10.3390/rs17152673 - 2 Aug 2025
Viewed by 922
Abstract
The advancement of Artificial Intelligence (AI) has significantly accelerated progress across research domains, with growing interest in plant science due to its substantial economic potential. However, the integration of AI with digital vegetation analysis remains underexplored, largely due to the absence of large-scale, real-world plant datasets, which are crucial for advancing this field. To address this gap, we introduce the PP3D dataset: a meticulously labeled collection of about 500 potted plants represented as 3D point clouds, with fine-grained annotations for approximately 20 species. The species span model organisms (e.g., Arabidopsis thaliana), potted ornamentals (e.g., foliage and flowering plants), and horticultural plants (e.g., Solanum lycopersicum), covering many commonly cultivated plants. Leveraging this dataset, we propose the panoptic plant recognition task, which combines semantic segmentation (stems and leaves) with leaf instance segmentation. To tackle this challenge, we present SCNet, a novel dual-representation learning network designed specifically for plant point cloud segmentation. SCNet integrates two key branches: a cylindrical feature extraction branch for robust spatial encoding and a sequential slice feature extraction branch for detailed structural analysis. By efficiently propagating features between these representations, SCNet achieves superior flexibility and computational efficiency, establishing a new baseline for panoptic plant recognition and paving the way for future AI-driven research in plant science. Full article

21 pages, 8731 KB  
Article
Individual Segmentation of Intertwined Apple Trees in a Row via Prompt Engineering
by Herearii Metuarea, François Laurens, Walter Guerra, Lidia Lozano, Andrea Patocchi, Shauny Van Hoye, Helin Dutagaci, Jeremy Labrosse, Pejman Rasti and David Rousseau
Sensors 2025, 25(15), 4721; https://doi.org/10.3390/s25154721 - 31 Jul 2025
Viewed by 818
Abstract
Computer vision is of wide interest for phenotyping horticultural crops such as apple trees at high throughput. In orchards constructed specially for variety testing or breeding programs, computer vision tools should be able to extract phenotypic information from each tree separately. We focus on segmenting individual apple trees as the main task in this context. Segmenting individual apple trees in dense orchard rows is challenging because of complex outdoor illumination and intertwined branches. Traditional methods rely on supervised learning, which requires a large amount of annotated data. In this study, we explore an alternative approach using prompt engineering with the Segment Anything Model and its variants in a zero-shot setting. Specifically, we first detect the trunk and then position a prompt (five points in a diamond shape) above the detected trunk to feed to the Segment Anything Model. We evaluate our method on the apple REFPOP dataset, a new large-scale European apple tree dataset, and on another publicly available dataset. On these datasets, our trunk detector, a trained YOLOv11 model, achieves a detection rate of 97%; the prompt placed above the detected trunk yields Dice scores of 70% on the REFPOP dataset and 84% on the publicly available dataset, in both cases without training on the target dataset. We demonstrate that our method equals or even outperforms purely supervised segmentation approaches and non-prompted foundation models. These results underscore the potential of foundation models guided by well-designed prompts as scalable and annotation-efficient solutions for plant segmentation in complex agricultural environments. Full article
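The prompting step (five points in a diamond shape above the detected trunk) reduces to simple geometry on the trunk's bounding box. A minimal sketch, in which the diamond offsets `half_w` and `half_h` and the image-border clamping are hypothetical choices rather than the authors' tuned values:

```python
def diamond_prompt(trunk_box, half_w=40, half_h=60):
    """Place five point prompts in a diamond above a detected trunk.

    trunk_box: (x1, y1, x2, y2) trunk detection in image coordinates
    (y grows downward). The diamond center sits half_h pixels above the
    trunk's top edge; half_w/half_h set the diamond's extent. These
    offsets are illustrative and would be tuned to the canopy size.
    """
    x1, y1, x2, y2 = trunk_box
    cx = (x1 + x2) / 2              # trunk center line
    cy = max(0, y1 - half_h)        # diamond center, clamped to the image
    return [
        (cx, cy),                    # center
        (cx, max(0, cy - half_h)),   # top
        (cx, cy + half_h),           # bottom (at the trunk's top edge)
        (cx - half_w, cy),           # left
        (cx + half_w, cy),           # right
    ]
```

The returned points would be passed to the Segment Anything Model as positive point prompts for the tree above that trunk.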

14 pages, 2035 KB  
Article
Integration of YOLOv9 Segmentation and Monocular Depth Estimation in Thermal Imaging for Prediction of Estrus in Sows Based on Pixel Intensity Analysis
by Iyad Almadani, Aaron L. Robinson and Mohammed Abuhussein
Digital 2025, 5(2), 22; https://doi.org/10.3390/digital5020022 - 13 Jun 2025
Viewed by 694
Abstract
Many researchers focus on improving reproductive health in sows and ensuring successful breeding by accurately identifying the optimal time of ovulation through estrus detection. One promising non-contact technique involves using computer vision to analyze temperature variations in thermal images of the sow’s vulva. However, variations in camera distance during dataset collection can significantly affect the accuracy of this method, as different distances alter the resolution of the region of interest, causing pixel intensity values to represent varying areas and temperatures. This inconsistency hinders the detection of the subtle temperature differences required to distinguish between estrus and non-estrus states. Moreover, failure to maintain a consistent camera distance, along with external factors such as atmospheric conditions and improper calibration, can distort temperature readings, further compromising data accuracy and reliability. Furthermore, without addressing distance variations, the model’s generalizability diminishes, increasing the likelihood of false positives and negatives and ultimately reducing the effectiveness of estrus detection. In our previously proposed methodology for estrus detection in sows, we utilized YOLOv8 for segmentation and keypoint detection, while monocular depth estimation was used for camera calibration. This calibration helps establish a functional relationship between the measurements in the image (such as distances between labia, the clitoris-to-perineum distance, and vulva perimeter) and the depth distance to the camera, enabling accurate adjustments and calibration for our analysis. Estrus classification is performed by comparing new data points with reference datasets using a three-nearest-neighbor voting system. In this paper, we aim to enhance our previous method by incorporating the mean pixel intensity of the region of interest as an additional factor. 
We propose a detailed four-step methodology coupled with two stages of evaluation. First, we carefully annotate masks around the vulva to calculate its perimeter precisely. Leveraging the advantages of deep learning, we train a model on these annotated images, enabling segmentation using the cutting-edge YOLOv9 algorithm. This segmentation enables the detection of the sow’s vulva, allowing for analysis of its shape and facilitating the calculation of the mean pixel intensity in the region. Crucially, we use monocular depth estimation from the previous method, establishing a functional link between pixel intensity and the distance to the camera, ensuring accuracy in our analysis. We then introduce a classification approach that differentiates between estrus and non-estrus regions based on the mean pixel intensity of the vulva. This classification method involves calculating Euclidean distances between new data points and reference points from two datasets: one for “estrus” and the other for “non-estrus”. The classification process identifies the five closest neighbors from the datasets and applies a majority voting system to determine the label. A new point is classified as “estrus” if the majority of its nearest neighbors are labeled as estrus; otherwise, it is classified as “non-estrus”. This automated approach offers a robust solution for accurate estrus detection. To validate our method, we propose two evaluation stages: first, a quantitative analysis comparing the performance of our new YOLOv9 segmentation model with the older U-Net and YOLOv8 models. Secondly, we assess the classification process by defining a confusion matrix and comparing the results of our previous method, which used the three nearest points, with those of our new model that utilizes five nearest points. This comparison allows us to evaluate the improvements in accuracy and performance achieved with the updated model. 
The automation of this vital process holds the potential to revolutionize reproductive health management in agriculture, boosting breeding success rates. Through thorough evaluation and experimentation, our research highlights the transformative power of computer vision, pushing forward more advanced practices in the field. Full article
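The voting scheme described above is a standard k-nearest-neighbor majority vote over Euclidean distances. A minimal sketch, assuming each sample is a small feature vector (e.g., mean vulva pixel intensity plus depth-corrected measurements; that pairing is illustrative, not the paper's exact feature set):

```python
import numpy as np

def classify_estrus(new_point, estrus_refs, non_estrus_refs, k=5):
    """Label a feature vector by majority vote over its k nearest references.

    estrus_refs / non_estrus_refs: lists of reference feature vectors from
    the labeled 'estrus' and 'non-estrus' datasets.
    """
    points = np.vstack([estrus_refs, non_estrus_refs])
    labels = np.array([1] * len(estrus_refs) + [0] * len(non_estrus_refs))
    # Euclidean distance from the new point to every reference point
    dists = np.linalg.norm(points - np.asarray(new_point, float), axis=1)
    nearest = labels[np.argsort(dists)[:k]]   # labels of the k closest
    return "estrus" if nearest.sum() > k // 2 else "non-estrus"
```

With k=5 (the updated model) a tie is impossible; the earlier three-point variant is just `k=3`.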

10 pages, 1707 KB  
Technical Note
A Proposed Method for Deep Learning-Based Automatic Tracking with Minimal Training Data for Sports Biomechanics Research
by Daichi Yamashita, Minoru Matsumoto and Takeo Matsubayashi
Biomechanics 2025, 5(2), 25; https://doi.org/10.3390/biomechanics5020025 - 13 Apr 2025
Viewed by 1725
Abstract
Background: This technical note proposes a deep learning-based, few-shot automatic key point tracking technique tailored to sports biomechanics research. Methods: The present method facilitates the arbitrary definition of key points on athletes’ bodies or sports equipment. Initially, a limited number of video frames are manually digitized to mark the points of interest. These annotated frames are subsequently used to train a deep learning model that leverages a pre-trained VGG16 network as its backbone and incorporates an additional convolutional head. Feature maps extracted from three intermediate layers of VGG16 are processed by the head network to generate a probability map, highlighting the most likely locations of the key points. Transfer learning is implemented by freezing the backbone weights and training only the head network. By restricting the training data generation to regions surrounding the manually annotated points and training specifically for each video, this approach minimizes training time while maintaining high precision. Conclusions: This technique substantially reduces the time and effort required compared to frame-by-frame manual digitization in various sports settings, and enables customized training tailored to specific analytical needs and video environments. Full article
(This article belongs to the Special Issue Biomechanics in Sport and Ageing: Artificial Intelligence)
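The head network's output is a probability map over pixel locations; recovering a key-point coordinate from such a map can be sketched as an argmax followed by a local center-of-mass refinement. The 3x3 refinement window is an assumption for illustration; the note does not specify its exact decoding step:

```python
import numpy as np

def keypoint_from_heatmap(prob_map):
    """Recover a (row, col) key-point location from a probability map.

    Takes the argmax cell, then refines it with a center of mass over a
    3x3 neighborhood for sub-pixel precision. The map itself would come
    from the VGG16-backbone + convolutional-head model described above;
    here it is simply a 2D array of scores.
    """
    h, w = prob_map.shape
    r, c = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    # Clip the 3x3 window to the map borders
    r0, r1 = max(0, r - 1), min(h, r + 2)
    c0, c1 = max(0, c - 1), min(w, c + 2)
    patch = prob_map[r0:r1, c0:c1]
    ys, xs = np.mgrid[r0:r1, c0:c1]
    total = patch.sum()
    return (float((ys * patch).sum() / total),
            float((xs * patch).sum() / total))
```

The same decoding runs per key point and per frame, so its cost is negligible next to the network's forward pass.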

22 pages, 5840 KB  
Article
Fast Monocular Measurement via Deep Learning-Based Object Detection for Real-Time Gas-Insulated Transmission Line Deformation Monitoring
by Guiyun Yang, Wengang Yang, Entuo Li, Qinglong Wang, Huilong Han, Jie Sun and Meng Wang
Energies 2025, 18(8), 1898; https://doi.org/10.3390/en18081898 - 8 Apr 2025
Viewed by 649
Abstract
Deformation monitoring of Gas-Insulated Transmission Lines (GILs) is critical for the early detection of structural issues and for ensuring safe power transmission. In this study, we introduce a rapid monocular measurement method that leverages deep learning for real-time monitoring. A YOLOv10 model is developed for automatically identifying regions of interest (ROIs) that may exhibit deformations. Within these ROIs, grayscale data is used to dynamically set thresholds for FAST corner detection, while the Shi–Tomasi algorithm filters redundant corners to extract unique feature points for precise tracking. Subsequent subpixel refinement further enhances measurement accuracy. To correct image tilt, ArUco markers are employed for geometric correction and to compute a scaling factor based on their known edge lengths, thereby reducing errors caused by non-perpendicular camera angles. Simulated experiments validate our approach, demonstrating that combining refined ArUco marker coordinates with manually annotated features significantly improves detection accuracy. Our method achieves a mean absolute error of no more than 1.337 mm and a processing speed of approximately 0.024 s per frame, meeting the precision and efficiency requirements for GIL deformation monitoring. This integrated approach offers a robust solution for long-term, real-time monitoring of GIL deformations, with promising potential for practical applications in power transmission systems. Full article
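The scaling step (converting pixel displacements to millimetres from an ArUco marker's known edge length) can be sketched as below; the corner ordering follows OpenCV's `cv2.aruco` convention, and averaging the four edges is an illustrative way to damp residual perspective error, not necessarily the paper's exact formula:

```python
import numpy as np

def pixel_scale_mm(marker_corners, edge_mm):
    """Scale factor (mm per pixel) from one detected ArUco marker.

    marker_corners: 4x2 array of corner pixels in order (TL, TR, BR, BL),
    as returned by an ArUco detector. Averaging the four edge lengths
    damps noise from slight residual perspective distortion.
    """
    c = np.asarray(marker_corners, dtype=float)
    # Lengths of the four edges, including the BL -> TL wraparound
    edges = np.linalg.norm(c - np.roll(c, -1, axis=0), axis=1)
    return edge_mm / edges.mean()

def displacement_mm(p0, p1, scale):
    """Convert a tracked feature's pixel displacement to millimetres."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p0, float)) * scale)
```

In the monitoring loop, `displacement_mm` would be applied to each tracked corner's position relative to its reference frame to flag deformations above threshold.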

13 pages, 553 KB  
Article
Early-Stage Infection-Specific Heterobasidion annosum (Fr.) Bref. Transcripts in the H. annosum–Pinus sylvestris L. Pathosystem
by Maryna Ramanenka, Dainis Edgars Ruņģis and Vilnis Šķipars
Int. J. Mol. Sci. 2024, 25(21), 11375; https://doi.org/10.3390/ijms252111375 - 23 Oct 2024
Cited by 2 | Viewed by 1174
Abstract
Transcriptomes from stem-inoculated Scots pine saplings were analyzed to identify unique and enriched H. annosum transcripts in the early stages of infection. Comparing different time points since inoculation identified 131 differentially expressed H. annosum genes with p-values of ≤0.01. Our research supports the results of previous studies on the Norway spruce–Heterobasidion annosum s.l. pathosystem, indicating the role of carbohydrate and lignin degradation genes in pathogenesis at different time points post-inoculation and the role of lipid metabolism genes (including but not limited to the delta-12 fatty acid desaturase gene previously reported to be an important factor). The results of this study indicate that the malic enzyme could be a potential gene of interest in the context of H. annosum virulence. During this study, difficulties related to incomplete reference material of the host plant species and a low proportion of H. annosum transcripts in the RNA pool were encountered. In addition, H. annosum transcripts are currently not well annotated. Improvements in sequencing technologies (including sequencing depth) or bioinformatics focusing on small subpopulations of RNA would be welcome. Full article
(This article belongs to the Section Molecular Genetics and Genomics)

29 pages, 6572 KB  
Article
Robust Parking Space Recognition Approach Based on Tightly Coupled Polarized Lidar and Pre-Integration IMU
by Jialiang Chen, Fei Li, Xiaohui Liu and Yuelin Yuan
Appl. Sci. 2024, 14(20), 9181; https://doi.org/10.3390/app14209181 - 10 Oct 2024
Cited by 1 | Viewed by 1978
Abstract
Improving the accuracy of parking space recognition is crucial for Automated Valet Parking (AVP) in autonomous driving. In AVP, accurate free space recognition significantly impacts the safety and comfort of both vehicles and drivers. To enhance parking space recognition and annotation in unknown environments, this paper proposes an automatic parking space annotation approach with tight coupling of Lidar and an Inertial Measurement Unit (IMU). First, the pose of the Lidar frame was tightly coupled with high-frequency IMU data to compensate for vehicle motion, reducing its impact on the pose transformation of the Lidar point cloud. Next, simultaneous localization and mapping (SLAM) was performed using the compensated Lidar frames. By extracting two-dimensional polarized edge features and planar features from the three-dimensional Lidar point cloud, a polarized Lidar odometry was constructed. The polarized Lidar odometry factor and loop closure factor were jointly optimized in iSAM2. Finally, the pitch angle of the constructed local map was evaluated to filter out ground points, and the regions of interest (ROIs) were projected onto a grid map. The free space between adjacent vehicle point clouds was assessed on the grid map using convex hull detection and straight-line fitting. Experiments were conducted on both local and open datasets. The proposed method achieved an average precision and recall of 98.89% and 98.79%, respectively, on the local dataset, and 97.08% and 99.40% on the nuScenes dataset. It also reduced storage usage by 48.38% while maintaining running time. Comparative experiments on open datasets show that the proposed method adapts to various scenarios and exhibits strong robustness. Full article

25 pages, 7113 KB  
Article
LidPose: Real-Time 3D Human Pose Estimation in Sparse Lidar Point Clouds with Non-Repetitive Circular Scanning Pattern
by Lóránt Kovács, Balázs M. Bódis and Csaba Benedek
Sensors 2024, 24(11), 3427; https://doi.org/10.3390/s24113427 - 26 May 2024
Cited by 5 | Viewed by 4445
Abstract
In this paper, we propose LidPose, a novel vision-transformer-based end-to-end pose estimation method for real-time human skeleton estimation in non-repetitive circular scanning (NRCS) lidar point clouds. Building on the ViTPose architecture, we introduce novel adaptations to address the unique properties of NRCS lidars, namely their sparsity and unusual rosette-like scanning pattern. The proposed method addresses a common issue of NRCS lidar-based perception: the sparsity of the measurement, which requires balancing the spatial and temporal resolution of the recorded data for efficient analysis of various phenomena. LidPose utilizes foreground and background segmentation techniques for the NRCS lidar sensor to select a region of interest (RoI), making LidPose a complete end-to-end approach to moving pedestrian detection and skeleton fitting from raw NRCS lidar measurement sequences captured by a static sensor in surveillance scenarios. To evaluate the method, we have created a novel, real-world, multi-modal dataset containing camera images and lidar point clouds from a Livox Avia sensor, with annotated 2D and 3D human skeleton ground truth. Full article
(This article belongs to the Section Optical Sensors)

28 pages, 1703 KB  
Article
The Complete Genome of a Novel Typical Species Thiocapsa bogorovii and Analysis of Its Central Metabolic Pathways
by Ekaterina Petushkova, Makhmadyusuf Khasimov, Ekaterina Mayorova, Yanina Delegan, Ekaterina Frantsuzova, Alexander Bogun, Elena Galkina and Anatoly Tsygankov
Microorganisms 2024, 12(2), 391; https://doi.org/10.3390/microorganisms12020391 - 15 Feb 2024
Cited by 4 | Viewed by 2403
Abstract
The purple sulfur bacterium Thiocapsa roseopersicina BBS is interesting from both fundamental and practical points of view. It possesses a thermostable HydSL hydrogenase, which is involved in the reaction of reversible hydrogen activation and in a unique reaction of sulfur reduction to hydrogen sulfide, making it a very promising enzyme for enzymatic hydrogenase electrodes. It has been speculated that the HydSL hydrogenase of purple bacteria is closely related to sulfur metabolism, but confirmation requires the full genome sequence. Here, we sequenced and assembled the complete genome of this bacterium. Analysis of the obtained whole genome, through an integrative approach comprising estimation of the Average Nucleotide Identity (ANI) and digital DNA-DNA hybridization (dDDH) parameters, allowed validation of the systematic position of T. roseopersicina as T. bogorovii BBS. For the first time, we have assembled the whole genome of this type strain of a new bacterial species and carried out its functional description against another purple sulfur bacterium, Allochromatium vinosum DSM 180T. We refined the automatic annotation of the whole genome of T. bogorovii BBS and localized the genomic positions of several studied genes, including those involved in sulfur metabolism and genes encoding the enzymes required for the TCA and glyoxylate cycles and other central metabolic pathways. Eleven additional genes encoding proteins involved in pigment biosynthesis were found. Full article
(This article belongs to the Section Systems Microbiology)

13 pages, 2102 KB  
Review
A Systematic Review of Lipid-Focused Cardiovascular Disease Research: Trends and Opportunities
by Uchenna Alex Anyaegbunam, Piyush More, Jean-Fred Fontaine, Vincent ten Cate, Katrin Bauer, Ute Distler, Elisa Araldi, Laura Bindila, Philipp Wild and Miguel A. Andrade-Navarro
Curr. Issues Mol. Biol. 2023, 45(12), 9904-9916; https://doi.org/10.3390/cimb45120618 - 9 Dec 2023
Cited by 4 | Viewed by 4037
Abstract
Lipids are important modifiers of protein function, particularly as parts of lipoproteins, which transport lipophilic substances and mediate cellular uptake of circulating lipids. As such, lipids are of particular interest as blood biological markers for cardiovascular disease (CVD) as well as for conditions linked to CVD such as atherosclerosis, diabetes mellitus, obesity and dietary states. Notably, lipid research is particularly well developed in the context of CVD because of the relevance and multiple causes and risk factors of CVD. The advent of methods for high-throughput screening of biological molecules has recently resulted in the generation of lipidomic profiles that allow monitoring of lipid compositions in biological samples in an untargeted manner. These and other earlier advances in biomedical research have shaped the knowledge we have about lipids in CVD. To evaluate the knowledge acquired on the multiple biological functions of lipids in CVD and the trends in their research, we collected a dataset of references from the PubMed database of biomedical literature focused on plasma lipids and CVD in human and mouse. Using annotations from these records, we were able to categorize significant associations between lipids and particular types of research approaches, distinguish non-biological lipids used as markers, identify differential research between human and mouse models, and detect the increasingly mechanistic nature of the results in this field. Using known associations between lipids and proteins that metabolize or transport them, we constructed a comprehensive lipid–protein network, which we used to highlight proteins strongly connected to lipids found in the CVD-lipid literature. Our approach points to a series of proteins for which lipid-focused research would bring insights into CVD, including Prostaglandin G/H synthase 2 (PTGS2, a.k.a. COX2) and Acylglycerol kinase (AGK). 
In this review, we summarize our findings, putting them in a historical perspective of the evolution of lipid research in CVD. Full article
(This article belongs to the Special Issue A Focus on Molecular Basis in Cardiac Diseases)

28 pages, 24166 KB  
Article
Semi-Supervised Learning Method for the Augmentation of an Incomplete Image-Based Inventory of Earthquake-Induced Soil Liquefaction Surface Effects
by Adel Asadi, Laurie Gaskins Baise, Christina Sanon, Magaly Koch, Snehamoy Chatterjee and Babak Moaveni
Remote Sens. 2023, 15(19), 4883; https://doi.org/10.3390/rs15194883 - 9 Oct 2023
Cited by 6 | Viewed by 3296
Abstract
Soil liquefaction often occurs as a secondary hazard during earthquakes and can lead to significant structural and infrastructure damage. Liquefaction is most often documented through field reconnaissance and recorded as point locations. Complete liquefaction inventories across the impacted area are rare but valuable for developing empirical liquefaction prediction models. Remote sensing analysis can be used to rapidly produce the full spatial extent of liquefaction ejecta after an event to inform and supplement field investigations. Visually labeling liquefaction ejecta from remotely sensed imagery is time-consuming and prone to human error and inconsistency. This study uses a partially labeled liquefaction inventory created from visual annotations by experts and proposes a pixel-based approach to detecting unlabeled liquefaction using advanced machine learning and image processing techniques, and to generating an augmented inventory of liquefaction ejecta with high spatial completeness. The proposed methodology is applied to aerial imagery taken from the 2011 Christchurch earthquake and considers the available partial liquefaction labels as high-certainty liquefaction features. This study consists of two specific comparative analyses. (1) To tackle the limited availability of labeled data and their spatial incompleteness, a semi-supervised self-training classification via Linear Discriminant Analysis is presented, and the performance of the semi-supervised learning approach is compared with supervised learning classification. (2) A post-event aerial image with RGB (red-green-blue) channels is used to extract color transformation bands, statistical indices, texture components, and dimensionality reduction outputs, and performances of the classification model with different combinations of selected features from these four groups are compared. 
Building footprints are also used as the only non-imagery geospatial information to improve classification accuracy by masking out building roofs from the classification process. To prepare the multi-class labeled data, regions of interest (ROIs) were drawn to collect samples of seven land cover and land use classes. The labeled samples of liquefaction were also clustered into two groups (dark and light) using the Fuzzy C-Means clustering algorithm to split the liquefaction pixels into two classes. A comparison of the generated maps with fully and manually labeled liquefaction data showed that the proposed semi-supervised method performs best when selected high-ranked features of the two groups of statistical indices (gradient weight and sum of the band squares) and dimensionality reduction outputs (first and second principal components) are used. It also outperforms supervised learning and can better augment the liquefaction labels across the image in terms of spatial completeness. Full article
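The semi-supervised self-training scheme described in the abstract can be sketched with scikit-learn, whose `SelfTrainingClassifier` wraps a base estimator (here Linear Discriminant Analysis, as in the study), fits it on the labeled pixels, and iteratively adds its most confident predictions as pseudo-labels. This is a minimal sketch on synthetic two-class "pixel" features, not the authors' actual pipeline or data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Toy pixel features: two separable classes standing in for
# "liquefaction" vs. "other land cover".
X0 = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
X1 = rng.normal(loc=3.0, scale=1.0, size=(200, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Mark ~90% of pixels as unlabeled (-1), mimicking a spatially
# incomplete inventory where only some features were annotated.
y_partial = y.copy()
y_partial[rng.random(400) < 0.9] = -1

# Self-training: fit LDA on the labeled subset, then repeatedly
# pseudo-label the unlabeled pixels it predicts with high confidence.
model = SelfTrainingClassifier(LinearDiscriminantAnalysis(), threshold=0.95)
model.fit(X, y_partial)

accuracy = (model.predict(X) == y).mean()
print(f"accuracy on all pixels: {accuracy:.2f}")
```

In the study, the equivalent of `y_partial` would come from the expert-annotated high-certainty liquefaction labels, with the remaining pixels treated as unlabeled.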
16 pages, 435 KB  
Article
Using Open-Source Automatic Speech Recognition Tools for the Annotation of Dutch Infant-Directed Speech
by Anika van der Klis, Frans Adriaans, Mengru Han and René Kager
Multimodal Technol. Interact. 2023, 7(7), 68; https://doi.org/10.3390/mti7070068 - 3 Jul 2023
Cited by 3 | Viewed by 3646
Abstract
There is considerable interest in the annotation of speech addressed to infants. Infant-directed speech (IDS) has acoustic properties that might pose a challenge to automatic speech recognition (ASR) tools developed for adult-directed speech (ADS). While ASR tools could potentially speed up the annotation process, their effectiveness on this speech register is currently unknown. In this study, we assessed to what extent open-source ASR tools can successfully transcribe IDS. We used speech data from 21 Dutch mothers reading picture books containing target words to their 18- and 24-month-old children (IDS) and to the experimenter (ADS). In Experiment 1, we examined how the ASR tool Kaldi-NL performs at annotating target words in IDS vs. ADS. We found that Kaldi-NL found only 55.8% of the target words in IDS, while it correctly annotated 66.8% in ADS. In Experiment 2, we aimed to assess the difficulties in annotating IDS more broadly by transcribing all IDS utterances manually and comparing the word error rates (WERs) of two different ASR systems: Kaldi-NL and WhisperX. We found that WhisperX performs significantly better than Kaldi-NL. While there is much room for improvement, the results show that automatic transcriptions provide a promising starting point for researchers who have to transcribe a large amount of speech directed at infants. Full article
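Word error rate, the metric used above to compare Kaldi-NL and WhisperX, is the word-level edit distance (substitutions + deletions + insertions) divided by the number of words in the reference transcript. A minimal self-contained implementation (an illustration of the metric, not the evaluation code used in the study):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance (dynamic programming)."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER = 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

In practice, libraries such as `jiwer` compute WER (and normalize punctuation and casing first), which matters for IDS transcripts with disfluencies and repetitions.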
(This article belongs to the Special Issue Child–Computer Interaction and Multimodal Child Behavior Analysis)
20 pages, 3378 KB  
Article
Deep Bayesian-Assisted Keypoint Detection for Pose Estimation in Assembly Automation
by Debo Shi, Alireza Rahimpour, Amin Ghafourian, Mohammad Mahdi Naddaf Shargh, Devesh Upadhyay, Ty A. Lasky and Iman Soltani
Sensors 2023, 23(13), 6107; https://doi.org/10.3390/s23136107 - 2 Jul 2023
Cited by 4 | Viewed by 2858
Abstract
Pose estimation is crucial for automating assembly tasks, yet achieving sufficient accuracy for assembly automation remains challenging and part-specific. This paper presents a novel, streamlined approach to pose estimation that facilitates automation of assembly tasks. Our proposed method employs deep learning on a limited number of annotated images to identify a set of keypoints on the parts of interest. To compensate for network shortcomings and enhance accuracy, we incorporated a Bayesian updating stage that leverages our detailed knowledge of the assembly part design. This Bayesian updating step refines the network output, significantly improving pose estimation accuracy. For this purpose, we utilized a subset of higher-quality network-generated keypoint positions as measurements, while for the remaining keypoints the network outputs serve only as priors. The geometry data aid in constructing likelihood functions, which in turn yield enhanced posterior distributions of keypoint pixel positions. We then employed the maximum a posteriori (MAP) estimates of keypoint locations to obtain a final pose, allowing for an update to the nominal assembly trajectory. We evaluated our method on a 14-point snap-fit dash trim assembly for a Ford Mustang dashboard, demonstrating promising results. Our approach does not require tailoring to new applications, nor does it rely on extensive machine learning expertise or large amounts of training data. This makes our method a scalable and adaptable solution for the production floor. Full article
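For a single keypoint coordinate with a Gaussian prior (the network output) and a Gaussian likelihood (a geometry-derived measurement), the Bayesian update has a closed form: the posterior precision is the sum of the precisions, and the MAP estimate is the precision-weighted average of prior mean and measurement. The sketch below uses made-up pixel values and scalar variances; it illustrates the conjugate-Gaussian update, not the paper's full likelihood construction:

```python
import numpy as np

def gaussian_map_update(prior_mean, prior_var, meas_mean, meas_var):
    """MAP estimate for a Gaussian prior combined with a Gaussian likelihood:
    posterior variance is the inverse of the summed precisions, and the
    posterior (MAP) mean is the precision-weighted average of the two means."""
    prior_prec = 1.0 / prior_var
    meas_prec = 1.0 / meas_var
    post_var = 1.0 / (prior_prec + meas_prec)
    post_mean = post_var * (prior_prec * prior_mean + meas_prec * meas_mean)
    return post_mean, post_var

# A loose network prior is pulled strongly toward a tight measurement.
post_mean, post_var = gaussian_map_update(
    prior_mean=np.array([120.0, 85.0]),  # keypoint (x, y) from the network
    prior_var=25.0,                      # loose prior (std = 5 px)
    meas_mean=np.array([124.0, 83.0]),   # hypothetical geometry-based measurement
    meas_var=1.0,                        # tight likelihood (std = 1 px)
)
print(post_mean, post_var)
```

Because the measurement variance is 25x smaller than the prior variance, the MAP estimate lands close to the measurement; keypoints without a trusted measurement would simply keep their network prior.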
(This article belongs to the Section Sensors and Robotics)