Search Results (9)

Search Parameters:
Keywords = LBP-TOP

17 pages, 4681 KiB  
Article
Effects of Postpartal Relative Body Weight Change on Production Performance, Serum Biomarkers, and Fecal Microbiota in Multiparous Holstein Cows
by Siyuan Zhang, Yiming Xu, Tianyu Chen, Duo Gao, Jingjun Wang, Yimin Zhuang, Wen Jiang, Guobin Hou, Shuai Liu, Shengli Li, Wei Shao and Zhijun Cao
Animals 2025, 15(9), 1252; https://doi.org/10.3390/ani15091252 - 29 Apr 2025
Cited by 1 | Viewed by 411
Abstract
This study aimed to determine the effects of postpartal relative body weight change (PRBWC) on production performance and serum biomarkers, and the relation between PRBWC and the gastrointestinal microbiota. A total of 59 multiparous cows participated in this research. Each cow’s PRBWC was calculated by the following equation: PRBWC = (BW21 − BW0)/BW0 × 100%, in which BW21 refers to body weight on Day 21 post-calving and BW0 refers to body weight on the day of parturition. Among the 59 enrolled cows, those with the top 21 ranked PRBWC values were categorized into the high PRBWC (H-PRBWC) group, and those with the bottom 21 ranked PRBWC values were categorized into the low PRBWC (L-PRBWC) group. PRBWC did not significantly influence average daily milk yield (ADMY). However, on Day 21, cows in the H-PRBWC group displayed significantly higher body weight (BW) and body condition scores (BCS) (BW, p = 0.02; BCS, p < 0.01). Additionally, levels of serum glucose (GLU) and albumin (ALB) were significantly higher in the H-PRBWC group on Day 21 (GLU, p = 0.05; ALB, p < 0.01), while the lipopolysaccharide-binding protein (LBP) level was significantly lower (p = 0.03). Moreover, the microbiota of fecal samples on Day 0 (FE0) differed notably between groups, as evidenced by several alpha diversity indices, including Shannon (p = 0.02), Simpson (p = 0.03), and Pielou_e (p = 0.02), and by principal coordinate analysis (p = 0.002). The relative abundances of Monoglobus, norank_f__UCG-010, and Christensenellaceae_R-7_group were significantly higher in the H-PRBWC group (p < 0.05), while the relative abundances of Clostridium_sensu_stricto_1, Turicibacter, and Romboutsia were significantly lower (p < 0.05). Pathways related to amino acid biosynthesis were significantly enriched in the FE0 of the H-PRBWC group, while pathways involved in carbohydrate metabolism were significantly upregulated in the FE0 of the L-PRBWC group. This study demonstrates the potential of PRBWC to describe alterations in energy status during the postpartum period, as evidenced by production performance, serum biomarkers, and the fecal microbiota.
(This article belongs to the Section Cattle)
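As an aside for readers who want to reproduce the grouping step, the sketch below applies the PRBWC formula and the top/bottom-21 split described in the abstract above; the pandas DataFrame and its body-weight values are a hypothetical stand-in for the study data.

```python
import pandas as pd

# Hypothetical records: body weight (kg) at calving (BW0) and on Day 21 (BW21).
cows = pd.DataFrame({
    "cow_id": range(1, 60),
    "BW0":  [650 + (i % 7) * 10 for i in range(59)],
    "BW21": [640 + (i % 11) * 8 for i in range(59)],
})

# PRBWC = (BW21 - BW0) / BW0 * 100%, as defined in the abstract.
cows["PRBWC"] = (cows["BW21"] - cows["BW0"]) / cows["BW0"] * 100

ranked = cows.sort_values("PRBWC", ascending=False)
h_prbwc = ranked.head(21)   # top 21 ranked PRBWC values -> H-PRBWC group
l_prbwc = ranked.tail(21)   # bottom 21 ranked PRBWC values -> L-PRBWC group
print(h_prbwc["PRBWC"].mean(), l_prbwc["PRBWC"].mean())
```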

21 pages, 1915 KiB  
Article
Multi-Modal Temporal Hypergraph Neural Network for Flotation Condition Recognition
by Zunguan Fan, Yifan Feng, Kang Wang and Xiaoli Li
Entropy 2024, 26(3), 239; https://doi.org/10.3390/e26030239 - 8 Mar 2024
Cited by 1 | Viewed by 1796
Abstract
Efficient flotation beneficiation relies heavily on accurate flotation condition recognition based on monitored froth video. However, recognition accuracy is hindered by the limitations of extracting temporal features from froth videos and of establishing correlations between complex multi-modal high-order data. To address the difficulties of inadequate temporal feature extraction, inaccurate online condition detection, and inefficient flotation process operation, this paper proposes a novel flotation condition recognition method, named the multi-modal temporal hypergraph neural network (MTHGNN), to extract and fuse multi-modal temporal features. To extract abundant dynamic texture features from froth images, the MTHGNN employs an enhanced version of the local binary patterns from three orthogonal planes (LBP-TOP) algorithm and incorporates additional features from the three-dimensional space as supplements. Furthermore, a novel multi-view temporal feature aggregation network (MVResNet) is introduced to extract temporal aggregation features from the froth image sequence. By constructing a temporal multi-modal hypergraph neural network, we encode complex high-order temporal features, establish robust associations between data structures, and flexibly model the features of the froth image sequence, thus enabling accurate flotation condition identification through the fusion of multi-modal temporal features. The experimental results validate the effectiveness of the proposed method for flotation condition recognition, providing a foundation for optimizing flotation operations.
(This article belongs to the Special Issue Advances in Complex Systems Modelling via Hypergraphs II)
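Since LBP-TOP recurs throughout these results, the following sketch (assuming NumPy and scikit-image, with a synthetic video volume in place of froth footage) shows the baseline idea the MTHGNN enhances: concatenating LBP histograms computed on the XY, XT, and YT planes of an image sequence.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(volume, P=8, R=1):
    """Basic LBP-TOP: LBP histograms from the XY, XT and YT planes of a T x H x W volume."""
    n_bins = P + 2                       # 'uniform' LBP yields P + 2 distinct codes
    hists = []
    planes = [
        [volume[t] for t in range(volume.shape[0])],          # XY planes
        [volume[:, y, :] for y in range(volume.shape[1])],    # XT planes
        [volume[:, :, x] for x in range(volume.shape[2])],    # YT planes
    ]
    for slices in planes:
        codes = np.concatenate([
            local_binary_pattern(s, P, R, method="uniform").ravel() for s in slices
        ])
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)         # 3 * (P + 2) dimensional dynamic-texture feature

# Synthetic stand-in for a grayscale froth video: 30 frames of 64 x 64 pixels.
video = (np.random.rand(30, 64, 64) * 255).astype(np.uint8)
print(lbp_top(video).shape)              # (30,)
```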

16 pages, 6327 KiB  
Article
Research on Multi-Scale Feature Extraction and Working Condition Classification Algorithm of Lead-Zinc Ore Flotation Foam
by Xiaoping Jiang, Huilin Zhao, Junwei Liu, Suliang Ma and Mingzhen Hu
Appl. Sci. 2023, 13(6), 4028; https://doi.org/10.3390/app13064028 - 22 Mar 2023
Cited by 4 | Viewed by 2097
Abstract
To address the problems of difficult online monitoring, low recognition efficiency, and the subjectivity of working-condition identification in mineral flotation processes, a foam flotation performance state recognition method is developed. This method combines multi-dimensional CNN (convolutional neural network) characteristics and improved LBP (local binary pattern) characteristics. We divide the foam flotation conditions into six categories. First, the multi-directional and multi-scale selectivity and anisotropy of the nonsubsampled shearlet transform (NSST) are used to decompose the flotation foam images at multiple frequency scales, and a multi-channel CNN network is designed to extract static features from the images at different frequencies. Then, the flotation video image sequences are rotated and dynamic features are extracted by LBP-TOP (local binary patterns from three orthogonal planes), and the CNN-extracted static picture features are fused with the LBP dynamic video features. Finally, classification decisions are made by a PSO-RVFLNs (particle swarm optimization–random vector functional link networks) algorithm to accurately identify the foam flotation performance states. Experimental results show that the detection accuracy of the new method is significantly improved, by 4.97% and 6.55% compared to the single CNN algorithm and the traditional LBP algorithm, respectively. The accuracy of flotation performance state classification was as high as 95.17%, and the method reduced manual intervention, thus improving production efficiency.
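A minimal sketch of the fusion-and-classification stage described above, with concatenated stand-in features and a plain random vector functional link network in place of PSO-RVFLNs; the NSST decomposition, the CNN, and the particle swarm optimization are omitted, and all array sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rvfl(X, y, n_hidden=100, ridge=1e-3):
    """Random vector functional link network: random hidden layer plus direct input links,
    with output weights solved in closed form by ridge regression."""
    n_classes = int(y.max()) + 1
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H])                       # direct links + hidden features
    Y = np.eye(n_classes)[y]                    # one-hot targets
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return (D @ beta).argmax(axis=1)

# Hypothetical fused features: CNN static features concatenated with LBP-TOP dynamic features.
cnn_feat, lbp_feat = rng.normal(size=(200, 64)), rng.normal(size=(200, 177))
X = np.hstack([cnn_feat, lbp_feat])
y = rng.integers(0, 6, size=200)                # six working-condition classes
W, b, beta = train_rvfl(X, y)
print((predict_rvfl(X, W, b, beta) == y).mean())
```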

11 pages, 1798 KiB  
Article
Persistent Homology for Breast Tumor Classification Using Mammogram Scans
by Aras Asaad, Dashti Ali, Taban Majeed and Rasber Rashid
Mathematics 2022, 10(21), 4039; https://doi.org/10.3390/math10214039 - 31 Oct 2022
Cited by 10 | Viewed by 2622
Abstract
An important tool in the field of topological data analysis is persistent homology (PH), which is used to encode abstract representations of the homology of data at different resolutions in the form of a persistence barcode (PB). Normally, one obtains a single PB from a digital image when using a sublevel-set filtration method. In this work, we build more than one PB representation of a single image based on a landmark selection method, known as local binary patterns (LBP), which encodes different types of local texture from a digital image. Starting from the top-left corner of any 3-by-3 patch selected from an input image, the LBP process subtracts the central pixel value from each of its eight neighboring pixel values. Each cell is then assigned 1 if the subtraction outcome is positive, and 0 otherwise, to obtain an 8-bit binary representation. This process identifies a set of landmark pixels to represent 0-simplices, and Vietoris–Rips filtration is used to obtain the corresponding PB. Using LBP, we can construct up to 56 PBs from a single image if we restrict ourselves to the binary codes that have two circular transitions between 1 and 0. These 56 PBs contain detailed local and global topological and geometrical information, which can be used to design effective machine learning models. We used four different PB vectorizations, namely persistence landscapes, persistence images, Betti curves (barcode binning), and PB statistics. We tested the effectiveness of the proposed landmark-based PH on two publicly available breast abnormality detection datasets of mammogram scans. The sensitivity and specificity of the landmark-based PH were over 90% and 85%, respectively, in both datasets for the detection of abnormal breast scans. Finally, the experimental results provide new insights on using different PB vectorizations with sublevel-set filtrations and landmark-based Vietoris–Rips filtration from digital mammogram scans.
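The LBP landmark coding described in this abstract is straightforward to reproduce; the sketch below computes the 8-bit code of a 3-by-3 patch and confirms that exactly 56 codes have two circular transitions. The example patch values are arbitrary.

```python
import numpy as np
from itertools import product

def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: neighbors (clockwise from top-left) vs. the centre pixel."""
    c = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return [1 if n > c else 0 for n in neighbors]

def circular_transitions(bits):
    """Number of 0/1 changes when the 8-bit code is read circularly."""
    return sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))

# There are exactly 56 eight-bit codes with two circular transitions, as stated in the abstract.
codes = [bits for bits in product([0, 1], repeat=8) if circular_transitions(bits) == 2]
print(len(codes))   # 56

patch = np.array([[10, 52, 30], [40, 35, 60], [20, 70, 15]])
print(lbp_code(patch), circular_transitions(lbp_code(patch)))
```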

21 pages, 3531 KiB  
Article
An Improved Micro-Expression Recognition Method Based on Necessary Morphological Patches
by Yue Zhao and Jiancheng Xu
Symmetry 2019, 11(4), 497; https://doi.org/10.3390/sym11040497 - 5 Apr 2019
Cited by 16 | Viewed by 4151
Abstract
Micro-expression is a spontaneous emotional representation that is not controlled by logic. A micro-expression is both transitory (short duration) and subtle (small intensity), so it is difficult to detect in people. Micro-expression detection is widely used in the fields of psychological analysis, criminal justice, and human–computer interaction. Like traditional facial expressions, micro-expressions also involve local muscle movement. Psychologists have shown that micro-expressions have necessary morphological patches (NMPs), which are triggered by emotion. The objective of this paper is to sort and filter these NMPs and to extract features from them to train classifiers that recognize micro-expressions. Firstly, we use the optical flow method to compare the onset frame and the apex frame of the micro-expression sequences; by doing this, we can locate active facial patches. Secondly, to find the NMPs of micro-expressions, this study calculates local binary patterns from three orthogonal planes (LBP-TOP) operators and cascades them with optical flow histograms to form the fusion features of the active patches. Finally, a random forest feature selection (RFFS) algorithm is used to identify the NMPs, which are then characterized via a support vector machine (SVM) classifier. We evaluated the proposed method on two popular publicly available databases: CASME II and SMIC. Results show that NMPs are statistically determined and provide significant discriminative ability compared with holistic utilization of all facial regions.
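A minimal sketch of the first step, locating active facial patches from the motion between the onset and apex frames; OpenCV's Farneback optical flow and a simple grid ranking are used as stand-ins for the authors' exact procedure, and the synthetic frames merely mimic a small facial shift.

```python
import cv2
import numpy as np

def active_patches(onset, apex, grid=6, top_k=10):
    """Rank grid patches by mean optical-flow magnitude between onset and apex frames."""
    flow = cv2.calcOpticalFlowFarneback(onset, apex, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    ph, pw = h // grid, w // grid
    scores = {(r, c): mag[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].mean()
              for r in range(grid) for c in range(grid)}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Synthetic stand-ins for grayscale onset and apex frames of a micro-expression clip.
onset = np.random.randint(0, 256, (120, 120), dtype=np.uint8)
apex = np.roll(onset, 2, axis=1)            # a small horizontal shift mimics facial motion
print(active_patches(onset, apex))
```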

13 pages, 1192 KiB  
Article
Objective Classes for Micro-Facial Expression Recognition
by Adrian K. Davison, Walied Merghani and Moi Hoon Yap
J. Imaging 2018, 4(10), 119; https://doi.org/10.3390/jimaging4100119 - 15 Oct 2018
Cited by 98 | Viewed by 9281
Abstract
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow), and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed five classes on CASME II using HOG 3D, outperforming the state-of-the-art five-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
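To illustrate the relabelling idea, the sketch below assigns classes from coded Action Units rather than self-reported emotion; the AU-to-class mapping is invented for the example and is not the paper's class definition.

```python
# Hypothetical illustration: group samples by their coded Action Units instead of the
# self-reported emotion label. The mapping below is NOT the paper's proposed classes.
AU_CLASS_MAP = {
    frozenset({"AU4", "AU7"}): "class_A",
    frozenset({"AU12"}): "class_B",
}

def objective_class(sample_aus, mapping=AU_CLASS_MAP):
    """Assign a class from the coded Action Units, ignoring the self-reported emotion."""
    for aus, label in mapping.items():
        if aus <= set(sample_aus):
            return label
    return "other"

samples = [{"aus": ["AU4", "AU7", "AU9"], "self_report": "disgust"},
           {"aus": ["AU12"], "self_report": "happiness"}]
print([objective_class(s["aus"]) for s in samples])   # ['class_A', 'class_B']
```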

15 pages, 2480 KiB  
Article
Necessary Morphological Patches Extraction for Automatic Micro-Expression Recognition
by Yue Zhao and Jiancheng Xu
Appl. Sci. 2018, 8(10), 1811; https://doi.org/10.3390/app8101811 - 3 Oct 2018
Cited by 7 | Viewed by 4542
Abstract
Micro-expressions are usually subtle and brief facial expressions that humans use to hide their true emotional states. In recent years, micro-expression recognition has attracted wide attention in the fields of psychology, mass media, and computer vision. The shortest micro-expression lasts only 1/25 s. Furthermore, unlike macro-expressions, micro-expressions have considerably low intensity and inadequate contraction of the facial muscles. Because of these characteristics, automatic micro-expression detection and recognition are great challenges in the field of computer vision. In this paper, we propose a novel automatic facial expression recognition framework based on necessary morphological patches (NMPs) to better detect and identify micro-expressions. A micro-expression is a subconscious facial muscle response; it is not controlled by the rational thought of the brain, so it calls on only a few facial muscles and has local properties. NMPs are the facial regions that must be involved when a micro-expression occurs. NMPs were screened by weighting the active facial patches instead of the holistic utilization of the entire facial area. Firstly, we manually define the active facial patches according to the facial landmark coordinates and the facial action coding system (FACS). Secondly, we use an LBP-TOP descriptor to extract features in these patches and the entropy-weight method to select the NMPs. Finally, we obtain the weighted LBP-TOP features of these NMPs. We test on two recent publicly available datasets, CASME II and SMIC, which provide sufficient samples. Compared with many recent state-of-the-art approaches, our method achieves more promising recognition results.
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology)
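The entropy-weight selection step described above can be sketched as follows, assuming a matrix of patch activity scores under several criteria; the criteria themselves are not specified in the abstract, so the score matrix here is hypothetical.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy-weight method: criteria whose scores vary more across patches get larger weights."""
    n = scores.shape[0]
    p = scores / scores.sum(axis=0, keepdims=True)        # column-wise proportions
    p = np.where(p == 0, 1e-12, p)                         # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)
    degree = 1.0 - entropy                                 # degree of divergence per criterion
    return degree / degree.sum()

# Hypothetical activity scores of 10 candidate facial patches under 4 criteria.
rng = np.random.default_rng(1)
scores = rng.random((10, 4))
w = entropy_weights(scores)
patch_rank = np.argsort(-(scores @ w))     # weighted score per patch, highest first
print(w, patch_rank[:5])                   # e.g. keep the top-ranked patches as NMPs
```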

33 pages, 15199 KiB  
Article
Using Deep Learning and Low-Cost RGB and Thermal Cameras to Detect Pedestrians in Aerial Images Captured by Multirotor UAV
by Diulhio Candido De Oliveira and Marco Aurelio Wehrmeister
Sensors 2018, 18(7), 2244; https://doi.org/10.3390/s18072244 - 12 Jul 2018
Cited by 61 | Viewed by 7966
Abstract
The use of Unmanned Aerial Vehicles (UAV) has been increasing over the last few years in many sorts of applications, due mainly to the decreasing cost of this technology. One can see the use of the UAV in several civilian applications such as surveillance and search and rescue. Automatic detection of pedestrians in aerial images is a challenging task. The computer vision system must deal with many sources of variability in the aerial images captured with the UAV, e.g., low-resolution images of pedestrians, images captured at distinct angles due to the degrees of freedom in which a UAV can move, and possible instability of the camera platform while the UAV flies, among others. In this work, we created and evaluated different implementations of Pattern Recognition Systems (PRS) aimed at the automatic detection of pedestrians in aerial images captured with a multirotor UAV. The main goal is to assess the feasibility and suitability of distinct PRS implementations running on top of low-cost computing platforms, e.g., single-board computers such as the Raspberry Pi or regular laptops without a GPU. For that, we used four machine learning techniques in the feature extraction and classification steps, namely Haar cascade, LBP cascade, HOG + SVM and Convolutional Neural Networks (CNN). In order to improve the system performance (especially the processing time) and also to decrease the rate of false alarms, we applied the Saliency Map (SM) and Thermal Image Processing (TIP) within the segmentation and detection steps of the PRS. The classification results show the CNN to be the best technique, with 99.7% accuracy, followed by HOG + SVM with 92.3%. In situations of partial occlusion, the CNN showed 71.1% sensitivity, which can be considered a good result in comparison with the current state of the art, since part of the original image data is missing. As demonstrated in the experiments, by combining TIP with CNN, the PRS can process more than two frames per second (fps), whereas the PRS that combines TIP with HOG + SVM was able to process 100 fps. It is important to mention that our experiments show that a trade-off analysis must be performed during the design of a pedestrian detection PRS. The faster implementations lead to a decrease in PRS accuracy. For instance, by using HOG + SVM with TIP, the PRS presented the best runtime performance, but the obtained accuracy was 35 percentage points lower than that of the CNN. The obtained results indicate that the best detection technique (i.e., the CNN) requires more computational resources to decrease the PRS computation time. Therefore, this work shows and discusses the pros/cons of each technique and trade-off situations, and such an analysis can be used to improve and tailor the design of a PRS to detect pedestrians in aerial images.
(This article belongs to the Section Remote Sensors)
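As a rough illustration of the HOG + SVM stage of such a PRS, the sketch below uses OpenCV's stock HOG people detector; it is trained on ground-level pedestrians and the image path is a placeholder, so it only stands in for the detectors trained in the paper.

```python
import cv2

# OpenCV's stock HOG + linear SVM people detector as a stand-in for the HOG + SVM stage.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("aerial_frame.jpg")          # placeholder path for a UAV-captured frame
if image is not None:
    # Slide the detection window over an image pyramid and return bounding boxes.
    rects, _ = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", image)
```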

13 pages, 1610 KiB  
Article
NIRExpNet: Three-Stream 3D Convolutional Neural Network for Near Infrared Facial Expression Recognition
by Zhan Wu, Tong Chen, Ying Chen, Zhihao Zhang and Guangyuan Liu
Appl. Sci. 2017, 7(11), 1184; https://doi.org/10.3390/app7111184 - 17 Nov 2017
Cited by 15 | Viewed by 5888
Abstract
Facial expression recognition (FER) under active near-infrared (NIR) illumination has the advantage of illumination invariance. In this paper, we propose a three-stream 3D convolutional neural network, named NIRExpNet, for NIR FER. The 3D structure of NIRExpNet makes it possible to automatically extract not only spatial features but also temporal features. The multi-stream design of NIRExpNet enables it to fuse local and global facial expression features. To avoid over-fitting, NIRExpNet has a moderate size to suit the Oulu-CASIA NIR facial expression database, which is a medium-sized database. Experimental results show that the proposed NIRExpNet outperforms some previous state-of-the-art methods, such as Histogram of Oriented Gradients 3D (HOG 3D), local binary patterns from three orthogonal planes (LBP-TOP), the deep temporal appearance-geometry network (DTAGN), and adapted 3D Convolutional Neural Networks (3D CNN DAP).
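A minimal PyTorch sketch of a single 3D-convolutional stream, showing how 3D convolutions capture spatial and temporal features jointly; the layer sizes are invented and NIRExpNet's three-stream fusion is omitted.

```python
import torch
import torch.nn as nn

class Stream3D(nn.Module):
    """One illustrative 3D-convolutional stream; layer sizes are invented, not NIRExpNet's."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                        # halves frames, height and width
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                           # x: (batch, 1, frames, height, width)
        return self.classifier(self.features(x).flatten(1))

clip = torch.randn(2, 1, 16, 64, 64)                # two NIR clips of 16 frames, 64 x 64 pixels
print(Stream3D()(clip).shape)                       # torch.Size([2, 6])
```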