Search Results (15)

Search Parameters:
Keywords = augmented descriptor system approach

14 pages, 1712 KB  
Article
Machine Learning-Based Predictive Model for Risk Stratification of Multiple Myeloma from Monoclonal Gammopathy of Undetermined Significance
by Amparo Santamaría, Marcos Alfaro, Cristina Antón, Beatriz Sánchez-Quiñones, Nataly Ibarra, Arturo Gil, Oscar Reinoso and Luis Payá
Electronics 2025, 14(15), 3014; https://doi.org/10.3390/electronics14153014 - 29 Jul 2025
Viewed by 515
Abstract
Monoclonal Gammopathy of Undetermined Significance (MGUS) is a precursor to hematologic malignancies such as Multiple Myeloma (MM) and Waldenström Macroglobulinemia (WM). Accurate risk stratification of MGUS patients remains a clinical and computational challenge, with existing models often misclassifying both high-risk and low-risk individuals, leading to inefficient healthcare resource allocation. This study presents a machine learning (ML)-based approach for early prediction of MM/WM progression, using routinely collected hematological data, which are selected based on clinical relevance. A retrospective cohort of 292 MGUS patients, including 7 who progressed to malignancy, was analyzed. For each patient, a feature descriptor was constructed incorporating the latest biomarker values, their temporal trends over the previous year, age, and immunoglobulin subtype. To address the inherent class imbalance, data augmentation techniques were applied. Multiple ML classifiers were evaluated, with the Support Vector Machine (SVM) achieving the highest performance (94.3% accuracy and F1-score). The model demonstrates that a compact set of clinically relevant features can yield robust predictive performance. These findings highlight the potential of ML-driven decision-support systems in electronic health applications, offering a scalable solution for improving MGUS risk stratification, optimizing clinical workflows, and enabling earlier interventions.
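
As a concrete illustration of the descriptor-plus-augmentation pipeline described in this abstract, the sketch below builds a per-patient feature vector (latest biomarker values, one-year trends, age, immunoglobulin subtype) and balances classes by jittered oversampling. The function names and the noise-based augmentation scheme are illustrative assumptions; the paper does not publish its exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_descriptor(latest, previous_year, age, subtype_onehot):
    """Concatenate the latest biomarker values, their one-year trends
    (latest minus year-ago values), age, and a one-hot immunoglobulin
    subtype. All field layouts here are illustrative, not the paper's."""
    latest = np.asarray(latest, dtype=float)
    trend = latest - np.asarray(previous_year, dtype=float)
    return np.concatenate([latest, trend, [age], subtype_onehot])

def oversample_minority(X, y, minority_label=1, noise_scale=0.05):
    """Jitter-based augmentation: replicate minority-class samples with
    small Gaussian noise until the classes are balanced (one common
    remedy for a 7-vs-285 imbalance; the paper does not specify its
    exact scheme)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    minority = X[y == minority_label]
    need = int((y != minority_label).sum()) - len(minority)
    picks = minority[rng.integers(0, len(minority), size=need)]
    synth = picks + rng.normal(0.0, noise_scale * X.std(axis=0), picks.shape)
    return (np.vstack([X, synth]),
            np.concatenate([y, np.full(need, minority_label)]))
```

The balanced matrix can then be fed to any off-the-shelf SVM trainer.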

21 pages, 324 KB  
Review
Enhancing Communication and Swallowing Skills in Children with Cri Du Chat Syndrome: A Comprehensive Speech Therapy Guide
by Soultana Papadopoulou, Areti Anagnostopoulou, Dimitra V. Katsarou, Kalliopi Megari, Efthymia Efthymiou, Alexandros Argyriadis, Georgios Kougioumtzis, Maria Theodoratou, Maria Sofologi, Agathi Argyriadi, Efterpi Pavlidou and Eugenia I. Toki
Children 2024, 11(12), 1526; https://doi.org/10.3390/children11121526 - 16 Dec 2024
Cited by 2 | Viewed by 3607
Abstract
Background: A specific deletion on the short arm of chromosome 5 (5p) is the hallmark of the rare genetic syndrome called Cri du Chat Syndrome (CdCS). It causes severe difficulty with swallowing, speech, motor skills, and cognitive deficiencies. These arise from characteristic laryngeal abnormalities and oral–motor dysfunctions. Objective: This study aims to investigate the effectiveness of speech and language intervention in addressing the multifaceted challenges of CdCS, including speech and language impairments, feeding difficulties, and social communication deficits. Methods: A narrative review was conducted to synthesize existing studies from the last 35 years on therapeutic interventions for individuals with CdCS. This review focused on interventions targeting speech, language, and swallowing therapy. Comprehensive searches were performed in the PubMed and Scopus databases using descriptors such as “Cri du Chat”, “swallowing disorders”, “speech disorders”, “speech and language disorders”, and “speech and language therapy.” From the identified records, 40 peer-reviewed English-language publications that addressed speech, language, and swallowing interventions were selected based on relevance and inclusion criteria. Data extraction was performed independently by four reviewers, working in two teams. Any disagreements between the teams were resolved through discussion with an independent researcher to ensure reliability and minimize bias. Results: The findings demonstrate that speech and language therapy (SLT) significantly enhances speech clarity, articulation, and oral–motor coordination. Augmentative communication systems effectively bridge gaps in nonverbal communication, fostering improved social interaction. Specific interventions reduce aspiration risks and improve feeding safety, enhancing the overall quality of life. Early multidisciplinary approaches and tailored therapeutic strategies are key to maximizing the benefits of SLT. Conclusions: SLT is crucial for improving communication, swallowing, and social integration in individuals with CdCS. Regular early intervention involving individualized programs and family participation is recommended to achieve optimal outcomes. Further research is needed to evaluate long-term effects and develop cultural and technologically adaptable therapies.
(This article belongs to the Section Global Pediatric Health)
23 pages, 7574 KB  
Article
Monitoring and Reconstruction of Actuator and Sensor Attacks for Lipschitz Nonlinear Dynamic Systems Using Two Types of Augmented Descriptor Observers
by Hao Wang, Zhi-Wei Gao and Yuanhong Liu
Processes 2024, 12(7), 1383; https://doi.org/10.3390/pr12071383 - 2 Jul 2024
Cited by 1 | Viewed by 1634
Abstract
Fault data injection attacks may lead to a decrease in system performance and even a malfunction in system operation for an automatic feedback control system, which motivates the development of an effective method for rapidly detecting such attacks so that appropriate countermeasures can be taken. In this study, a secure descriptor estimation technique is proposed for continuous-time Lipschitz nonlinear cyber-physical systems affected by actuator attacks, sensor attacks, and unknown process uncertainties. Specifically, by forming a new state vector composed of the original system states and sensor faults, an equivalent descriptor dynamic system is built. A proportional and derivative sliding-mode observer is presented so that the system states, sensor attack, and actuator attack can be reconstructed successfully. The observer gains are obtained by using linear matrix inequalities to secure robustly stable estimation error dynamics. Moreover, a robust descriptor fast adaptive observer estimator is presented as a complement. Finally, the efficacy levels of the proposed design approaches are validated using a vertical take-off and landing aircraft system. Comparison studies are also carried out to assess the tracking performances of the proposed algorithms.
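
The "new state vector composed of original system states and sensor faults" step has a standard generic form; a minimal sketch in conventional notation (the paper's exact matrices may differ):

```latex
% Plant with actuator attack f_a and sensor attack f_s:
%   \dot{x} = A x + B u + F_a f_a, \qquad y = C x + F_s f_s.
% Stacking the sensor attack into the state yields a descriptor system:
\bar{x} = \begin{bmatrix} x \\ f_s \end{bmatrix},\quad
\bar{E} = \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix},\quad
\bar{E}\,\dot{\bar{x}} =
\begin{bmatrix} A & 0 \\ 0 & -I \end{bmatrix}\bar{x}
+ \begin{bmatrix} B \\ 0 \end{bmatrix} u
+ \begin{bmatrix} F_a \\ 0 \end{bmatrix} f_a
+ \begin{bmatrix} 0 \\ I \end{bmatrix} f_s,\quad
y = \begin{bmatrix} C & F_s \end{bmatrix}\bar{x}
```

An observer for the augmented pair \((\bar{E}, \bar{A})\) then estimates the attack signal alongside the plant state.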
(This article belongs to the Special Issue Monitoring and Control of Processes in the Context of Industry 4.0)

16 pages, 3323 KB  
Article
Advancing Pulmonary Nodule Diagnosis by Integrating Engineered and Deep Features Extracted from CT Scans
by Wiem Safta and Ahmed Shaffie
Algorithms 2024, 17(4), 161; https://doi.org/10.3390/a17040161 - 18 Apr 2024
Cited by 8 | Viewed by 3930
Abstract
Enhancing lung cancer diagnosis requires precise early detection methods. This study introduces an automated diagnostic system leveraging computed tomography (CT) scans for early lung cancer identification. The main approach is the integration of three distinct feature analyses: the novel 3D-Local Octal Pattern (LOP) descriptor for texture analysis, the 3D-Convolutional Neural Network (CNN) for extracting deep features, and geometric feature analysis to characterize pulmonary nodules. The 3D-LOP method innovatively captures nodule texture by analyzing the orientation and magnitude of voxel relationships, enabling the distinction of discriminative features. Simultaneously, the 3D-CNN extracts deep features from raw CT scans, providing comprehensive insights into nodule characteristics. Geometric features assessing nodule shape further augment this analysis, offering a holistic view of potential malignancies. By amalgamating these analyses, our system employs a probability-based linear classifier to deliver a final diagnostic output. Validated on 822 Lung Image Database Consortium (LIDC) cases, the system’s performance was exceptional, with measures of 97.84%, 98.11%, 94.73%, and 0.9912 for accuracy, sensitivity, specificity, and Area Under the ROC Curve (AUC), respectively. These results highlight the system’s potential as a significant advancement in clinical diagnostics, offering a reliable, non-invasive tool for lung cancer detection that promises to improve patient outcomes through early diagnosis.
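
A probability-based linear classifier over the three analyses can be sketched as a weighted average of per-stream malignancy probabilities. The equal weights and the 0.5 decision threshold below are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

def fuse_probabilities(p_lop, p_cnn, p_geom, weights=(1/3, 1/3, 1/3)):
    """Linear fusion of per-nodule malignancy probabilities from the
    three streams (3D-LOP texture, 3D-CNN deep features, geometric
    features). Returns the fused probability and a binary label
    (1 = malignant). Weights and threshold are illustrative."""
    probs = np.stack([p_lop, p_cnn, p_geom])      # (3, n_nodules)
    w = np.asarray(weights)[:, None]
    fused = (w * probs).sum(axis=0)
    return fused, (fused >= 0.5).astype(int)
```

With calibrated weights, the same one-liner generalizes to any number of feature streams.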
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)

31 pages, 3525 KB  
Article
Automatic Face Recognition System Using Deep Convolutional Mixer Architecture and AdaBoost Classifier
by Qaisar Abbas, Talal Saad Albalawi, Ganeshkumar Perumal and M. Emre Celebi
Appl. Sci. 2023, 13(17), 9880; https://doi.org/10.3390/app13179880 - 31 Aug 2023
Cited by 21 | Viewed by 5447
Abstract
In recent years, advances in deep learning (DL) techniques for video analysis have been developed to solve the problem of real-time processing. Automated face recognition in the runtime environment has become necessary in video surveillance systems for urban security. This is a difficult task due to face occlusion, which makes it hard to capture effective features. Existing work focuses on improving performance while ignoring issues like small datasets, high computational complexity, and a lack of lightweight and efficient feature descriptors. In this paper, a face recognition (FR) algorithm using a convolutional mixer (AFR-Conv) is developed to handle face occlusion problems. A novel AFR-Conv architecture is designed by assigning priority-based weights to the different face patches, along with residual connections and an AdaBoost classifier for automatically recognizing human faces. The AFR-Conv also leverages the strengths of pre-trained CNNs by extracting features using ResNet-50, Inception-v3, and DenseNet-161. The AdaBoost classifier combines these features’ weighted votes to predict labels for testing images. To develop this system, we use data augmentation to enlarge the dataset of human face images. The AFR-Conv method is then used to extract robust features from images. Finally, to recognize human identity, an AdaBoost classifier is utilized. For the training and evaluation of the AFR-Conv model, a set of face images was collected from online data sources. The experimental results of the AFR-Conv approach are presented in terms of precision (PR), recall (RE), detection accuracy (DA), and F1-score metrics. In particular, the proposed approach attains 95.5% PR, 97.6% RE, 97.5% DA, and a 98.5% F1-score on 8500 face images. The experimental results show that our proposed scheme outperforms advanced methods for face classification.
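
AdaBoost's "weighted votes" over the per-backbone predictions can be sketched as follows. Treating each backbone's classifier as one weak learner with weight α = ½·ln((1−ε)/ε) is the textbook AdaBoost rule; that it matches the paper's exact combination scheme is an assumption.

```python
import numpy as np

def learner_weight(err):
    """Textbook AdaBoost weight for a weak learner with error rate err."""
    return 0.5 * np.log((1.0 - err) / err)

def weighted_vote(preds, alphas):
    """Combine per-backbone label predictions (e.g. from classifiers on
    ResNet-50, Inception-v3 and DenseNet-161 features) by weighted
    voting. Labels are in {-1, +1}; rows of `preds` are learners."""
    preds = np.asarray(preds, dtype=float)        # (n_learners, n_samples)
    alphas = np.asarray(alphas, dtype=float)[:, None]
    return np.sign((alphas * preds).sum(axis=0))
```

A more accurate backbone (lower ε) thus dominates the final label when the learners disagree.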
(This article belongs to the Special Issue Mobile Computing and Intelligent Sensing)

18 pages, 593 KB  
Article
Robust Fusion Kalman Estimator of the Multi-Sensor Descriptor System with Multiple Types of Noises and Packet Loss
by Jie Zheng, Wenxia Cui and Sian Sun
Sensors 2023, 23(15), 6968; https://doi.org/10.3390/s23156968 - 5 Aug 2023
Cited by 2 | Viewed by 1504
Abstract
Under the influence of multiple types of noises, missing measurements, one-step measurement delay and packet loss, the robust Kalman estimation problem is studied for the multi-sensor descriptor system (MSDS) in this paper. Moreover, the established MSDS model describes uncertain-variance noises, multiplicative noises, time delay and packet loss phenomena. Different types of noises and packet loss make it more difficult to build the estimators of the MSDS. Firstly, the MSDS is transformed into a new system model by applying the singular value decomposition (SVD) method, an augmented state and a fictitious noise approach. Furthermore, the robust Kalman estimator is constructed for the newly deduced augmented system based on the min-max robust estimation principle and Kalman filter theory. In addition, the given estimator consists of four parts: the usual Kalman filter, predictor, smoother and white noise deconvolution estimator. Then, the robust fusion Kalman estimator is obtained for the MSDS according to the relation between the augmented state and the original system state. Simultaneously, the robustness is demonstrated for the actual Kalman estimator of the MSDS by using the mathematical induction method and Lyapunov’s equation. Furthermore, the error variance of the obtained Kalman estimator is guaranteed to stay within its upper bound for all admissible uncertain noise variances. Finally, the simulation example of a circuit system is examined to illustrate the performance and effectiveness of the robust estimators.
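
The "usual Kalman filter" component underlying the fused estimator reduces to the textbook predict/update recursion; a minimal sketch (the descriptor transformation, delay, and packet-loss handling of the actual MSDS estimator are omitted):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the standard Kalman filter for
    x_{k+1} = A x_k + w, z_k = C x_k + v, with cov(w)=Q, cov(v)=R."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement z
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

The robust min-max variants replace Q and R with their worst-case (upper-bound) values, which is what guarantees the bounded error variance stated in the abstract.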
(This article belongs to the Section Intelligent Sensors)

24 pages, 11309 KB  
Article
Mobile-HR: An Ophthalmologic-Based Classification System for Diagnosis of Hypertensive Retinopathy Using Optimized MobileNet Architecture
by Muhammad Zaheer Sajid, Imran Qureshi, Qaisar Abbas, Mubarak Albathan, Kashif Shaheed, Ayman Youssef, Sehrish Ferdous and Ayyaz Hussain
Diagnostics 2023, 13(8), 1439; https://doi.org/10.3390/diagnostics13081439 - 17 Apr 2023
Cited by 17 | Viewed by 6023
Abstract
Hypertensive retinopathy (HR) is a serious eye disease that causes the retinal arteries to change. This change is mainly caused by high blood pressure. Cotton wool patches, bleeding in the retina, and retinal artery constriction are characteristic lesions of HR. An ophthalmologist often makes the diagnosis of eye-related diseases by analyzing fundus images to identify the stages and symptoms of HR. Early detection of HR can significantly decrease the likelihood of vision loss. In the past, a few computer-aided diagnostics (CADx) systems were developed to automatically detect HR using machine learning (ML) and deep learning (DL) techniques. Compared to ML methods, the DL techniques used in CADx systems require the setting of hyperparameters, domain-expert knowledge, a huge training dataset, and a high learning rate. Those CADx systems have proven good at automating the extraction of complex features, but they suffer from class imbalance and overfitting. State-of-the-art efforts focus on performance enhancement while ignoring the issues of small HR datasets, high computational complexity, and the lack of lightweight feature descriptors. In this study, a pretrained transfer learning (TL)-based MobileNet architecture is developed by integrating dense blocks to optimize the network for the diagnosis of HR. We developed a lightweight HR diagnosis system, known as Mobile-HR, by integrating a pretrained model and dense blocks. To increase the size of the training and test datasets, we applied a data augmentation technique. The outcomes of the experiments show that the suggested approach outperformed the alternatives in many cases. The Mobile-HR system achieved an accuracy of 99% and an F1 score of 0.99 on different datasets. The results were verified by an expert ophthalmologist. These results indicate that the Mobile-HR CADx model produces positive outcomes and outperforms state-of-the-art HR systems in terms of accuracy.
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

25 pages, 14005 KB  
Article
On Deceiving Malware Classification with Section Injection
by Adeilson Antonio da Silva and Mauricio Pamplona Segundo
Mach. Learn. Knowl. Extr. 2023, 5(1), 144-168; https://doi.org/10.3390/make5010009 - 16 Jan 2023
Cited by 5 | Viewed by 4061
Abstract
We investigate how to modify executable files to deceive malware classification systems. This work’s main contribution is a methodology to inject bytes randomly across a malware file and use it both as an attack to decrease classification accuracy and as a defensive method that augments the data available for training. It respects the operating system file format to make sure the malware will still execute after our injection and will not change its behavior. We reproduced five state-of-the-art malware classification approaches to evaluate our injection scheme: one based on a Global Image Descriptor (GIST) + K-Nearest-Neighbors (KNN), three Convolutional Neural Network (CNN) variations and one Gated CNN. We performed our experiments on a public dataset with 9339 malware samples from 25 different families. Our results show that a mere 7% increase in the malware size causes an accuracy drop between 25% and 40% for malware family classification. They show that an automatic malware classification system may not be as trustworthy as initially reported in the literature. We also evaluate using modified malware alongside the original ones to increase network robustness against the mentioned attacks. The results show that a combination of reordering malware sections and injecting random data can improve the overall performance of the classification. All the code is publicly available.
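
The random byte-injection idea can be illustrated on a raw byte string. This toy keeps a fixed-length header intact but omits the PE section-table bookkeeping the paper performs to keep samples executable; `inject_bytes` and its parameters are hypothetical names for illustration.

```python
import random

def inject_bytes(data: bytes, header_len: int,
                 ratio: float = 0.07, seed: int = 0) -> bytes:
    """Grow the file by `ratio` (default 7%, matching the size increase
    discussed in the abstract) by splicing random bytes at random
    offsets past the header. Real format-aware injection must also
    patch section headers so the sample still executes."""
    rng = random.Random(seed)
    n_inject = max(1, int(len(data) * ratio))
    body = bytearray(data[header_len:])
    for _ in range(n_inject):
        pos = rng.randrange(len(body) + 1)   # any offset, including the end
        body.insert(pos, rng.randrange(256))
    return data[:header_len] + bytes(body)
```

Applied to training data instead of test data, the same routine doubles as the defensive augmentation the paper evaluates.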
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

30 pages, 4124 KB  
Article
CAD-ALZ: A Blockwise Fine-Tuning Strategy on Convolutional Model and Random Forest Classifier for Recognition of Multistage Alzheimer’s Disease
by Qaisar Abbas, Ayyaz Hussain and Abdul Rauf Baig
Diagnostics 2023, 13(1), 167; https://doi.org/10.3390/diagnostics13010167 - 3 Jan 2023
Cited by 14 | Viewed by 4665
Abstract
Mental deterioration or Alzheimer’s (ALZ) disease is progressive and causes both physical and mental dependency. There is a need for a computer-aided diagnosis (CAD) system that can help doctors make an immediate decision. (1) Background: Currently, CAD systems are developed based on hand-crafted features, machine learning (ML), and deep learning (DL) techniques. Those CAD systems frequently require domain-expert knowledge and massive datasets to extract deep features or train models, which causes problems with class imbalance and overfitting. Additionally, radiologists still resort to manual approaches due to the lack of available datasets and the cost of training models. Existing works pursue performance improvements while neglecting the problems of limited datasets, high computational complexity, and the unavailability of lightweight and efficient feature descriptors. (2) Methods: To address these issues, a new approach, CAD-ALZ, is developed by extracting deep features through a ConvMixer layer with a blockwise fine-tuning strategy on a very small original dataset. At first, we apply the data augmentation method to images to increase the size of the datasets. In this study, a blockwise fine-tuning strategy is employed on the ConvMixer model to detect robust features. Afterwards, a random forest (RF) is used to classify ALZ disease stages. (3) Results: The proposed CAD-ALZ model obtained significant results on six evaluation metrics: the F1-score, Kappa, accuracy, precision, sensitivity, and specificity. The CAD-ALZ model achieved a sensitivity of 99.69% and an F1-score of 99.61%. (4) Conclusions: The suggested CAD-ALZ approach is a potential technique for clinical use, with computational efficiency compared to state-of-the-art approaches. The CAD-ALZ model code is freely available on GitHub for the scientific community.
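
The reported metrics follow directly from a binary (one-vs-rest) confusion matrix; a small helper makes the definitions explicit (Cohen's kappa needs the full multiclass matrix and is omitted here):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix
    counts: true/false positives (tp, fp) and true/false negatives
    (tn, fn), as used per class in a one-vs-rest multistage setting."""
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}
```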
(This article belongs to the Special Issue AI/ML-Based Medical Image Processing and Analysis)

19 pages, 4247 KB  
Article
Solubility Characteristics of Acetaminophen and Phenacetin in Binary Mixtures of Aqueous Organic Solvents: Experimental and Deep Machine Learning Screening of Green Dissolution Media
by Piotr Cysewski, Tomasz Jeliński, Maciej Przybyłek, Wiktor Nowak and Michał Olczak
Pharmaceutics 2022, 14(12), 2828; https://doi.org/10.3390/pharmaceutics14122828 - 16 Dec 2022
Cited by 15 | Viewed by 5212
Abstract
The solubility of active pharmaceutical ingredients is a mandatory physicochemical characteristic in pharmaceutical practice. However, the number of potential solvents and their mixtures prevents direct measurements of all possible combinations for finding environmentally friendly, operational and cost-effective solubilizers. That is why support from theoretical screening seems to be valuable. Here, a collection of acetaminophen and phenacetin solubility data in neat and binary solvent mixtures was used for the development of a nonlinear deep machine learning model using new intuitive molecular descriptors derived from COSMO-RS computations. The literature dataset was augmented with results of new measurements in aqueous binary mixtures of 4-formylmorpholine, DMSO and DMF. The solubility values back-computed with the developed ensemble of neural networks are in perfect agreement with the experimental data, which enables the extensive screening of many combinations of solvents not studied experimentally within the applicability domain of the trained model. The final predictions were presented not only in the form of the set of optimal hyperparameters but also in a more intuitive way by the set of parameters of the Jouyban–Acree equation often used in the co-solvency domain. This new and effective approach is easily extendible to other systems, enabling the fast and reliable selection of candidates for new solvents and directing the experimental solubility screening of active pharmaceutical ingredients. Full article
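
The Jouyban–Acree equation mentioned at the end takes, in its common co-solvency form, the following shape (with \(w_1, w_2\) the solvent fractions, \(x_{i,T}\) the solubilities in the neat solvents at temperature \(T\), and \(J_i\) fitted constants; the paper's exact parameterization may differ):

```latex
\ln x_{m,T} \;=\; w_1 \ln x_{1,T} \;+\; w_2 \ln x_{2,T}
\;+\; \frac{w_1 w_2}{T} \sum_{i=0}^{2} J_i \,(w_1 - w_2)^{i}
```

Reporting the trained model's predictions as fitted \(J_i\) values is what makes them directly usable by practitioners already working with this equation.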
(This article belongs to the Special Issue Strategies for Enhancing the Bioavailability of Poorly Soluble Drugs)

18 pages, 1826 KB  
Article
Evaluation of Keypoint Descriptors for Flight Simulator Cockpit Elements: WrightBroS Database
by Karolina Nurzynska, Przemysław Skurowski, Magdalena Pawlyta and Krzysztof Cyran
Sensors 2021, 21(22), 7687; https://doi.org/10.3390/s21227687 - 19 Nov 2021
Cited by 5 | Viewed by 3137
Abstract
The goal of the WrightBroS project is to design a system supporting the training of pilots in a flight simulator. The desired software should work on smart glasses, supplementing the visual information with augmented reality data, displaying, for instance, additional training information or descriptions of visible devices in real time. Therefore, the rapid recognition of observed objects and their exact positioning is crucial for successful deployment. The keypoint descriptor approach is a natural framework used for this purpose. For it to be applied, a thorough examination of specific keypoint location methods and types of keypoint descriptors is required first, as these are essential factors that affect the overall accuracy of the approach. In the presented research, we prepared a dedicated database presenting 27 various devices of a flight simulator. Then, we used it to compare existing state-of-the-art techniques and verify their applicability. We investigated the time necessary for the computation of a keypoint position, the time needed for the preparation of a descriptor, and the classification accuracy of the considered approaches. In total, we compared the outcomes of 12 keypoint location methods and 10 keypoint descriptors. The best score recorded for our database was almost 96%, for a combination of the ORB method for keypoint localization followed by the BRISK approach as a descriptor.
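
Binary descriptors such as ORB and BRISK are compared by Hamming distance with brute-force nearest-neighbour matching; a toy version with integer-packed descriptors (real descriptors are 256- or 512-bit strings, matched the same way):

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints:
    the number of bit positions where they differ."""
    return bin(d1 ^ d2).count("1")

def match(query, train):
    """Brute-force nearest-neighbour matching: for every query
    descriptor, find the train descriptor at minimum Hamming distance.
    Returns (query_idx, train_idx, distance) triples."""
    out = []
    for qi, q in enumerate(query):
        ti, d = min(((i, hamming(q, t)) for i, t in enumerate(train)),
                    key=lambda p: p[1])
        out.append((qi, ti, d))
    return out
```

This cheap XOR-and-popcount matching is precisely why binary descriptors suit real-time pipelines like the one evaluated here.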

29 pages, 10554 KB  
Article
Mobile Augmented Reality for Low-End Devices Based on Planar Surface Recognition and Optimized Vertex Data Rendering
by Styliani Verykokou, Argyro-Maria Boutsi and Charalabos Ioannidis
Appl. Sci. 2021, 11(18), 8750; https://doi.org/10.3390/app11188750 - 20 Sep 2021
Cited by 4 | Viewed by 3776
Abstract
Mobile Augmented Reality (MAR) is designed to keep pace with high-end mobile computing and its powerful sensors. This evolution excludes users with low-end devices and network constraints. This article presents ModAR, a hybrid Android prototype that expands the MAR experience to the aforementioned target group. It combines feature-based image matching and pose estimation with fast rendering of 3D textured models. Planar objects of the real environment are used as pattern images for overlaying users’ meshes or the app’s default ones. Since ModAR is based on the OpenCV C++ library at Android NDK and the OpenGL ES 2.0 graphics API, there are no dependencies on additional software, operating system version or model-specific hardware. The developed 3D graphics engine implements optimized vertex-data rendering with a combination of data grouping, synchronization, sub-texture compression and instancing for limited CPU/GPU resources and a single-threaded approach. It achieves up to 3× speed-up compared to standard index rendering, and AR overlay of a 50 K-vertex 3D model in less than 30 s. Several deployment scenarios on pose estimation demonstrate that the oriented FAST detector with an upper threshold of features per frame, combined with the ORB descriptor, yields the best results in terms of robustness and efficiency, achieving a 90% reduction of image matching time compared to the time required by the AGAST detector and the BRISK descriptor, corresponding to pattern recognition accuracy above 90% for a wide range of scale changes, regardless of any in-plane rotations and partial occlusions of the pattern.
(This article belongs to the Special Issue Extended Reality: From Theory to Applications)

25 pages, 481 KB  
Article
H∞ and Passive Fuzzy Control for Non-Linear Descriptor Systems with Time-Varying Delay and Sensor Faults
by Houssem Jerbi, Mourad Kchaou, Attia Boudjemline, Mohamed Amin Regaieg, Sondes Ben Aoun and Ahmed Lakhdar Kouzou
Mathematics 2021, 9(18), 2203; https://doi.org/10.3390/math9182203 - 8 Sep 2021
Cited by 6 | Viewed by 2380
Abstract
In this paper, the problem of reliable control design with mixed H∞/passive performance is discussed for a class of Takagi–Sugeno (TS) fuzzy descriptor systems with time-varying delay, sensor failure, and randomly occurring non-linearity. Based on the Lyapunov theory, firstly, a less conservative admissibility criterion is established by combining the delay decomposition and reciprocally convex approaches. Then, attention is focused on the design of a reliable static output feedback (SOF) controller with mixed H∞/passive performance requirements. The key merit of the paper is to propose a simple method to design such a controller, since the system output is subject to probabilistic missing data and noise. Using the output vector as a state component, an augmented model is introduced, and sufficient conditions are derived to achieve the desired performance of the closed-loop system. In addition, the cone complementarity linearization (CCL) algorithm is provided to calculate the controller gains. At last, three numerical examples, including computer-simulated truck-trailer and ball-and-beam systems, are given to show the efficacy of our proposed approach, compared with existing ones in the literature.
(This article belongs to the Special Issue Fuzzy and Extension of Fuzzy Theories)

33 pages, 918 KB  
Article
Robust Filtering for Discrete-Time Linear Parameter-Varying Descriptor Systems
by Carlos Rodriguez, Karina A. Barbosa and Daniel Coutinho
Symmetry 2020, 12(11), 1871; https://doi.org/10.3390/sym12111871 - 13 Nov 2020
Cited by 1 | Viewed by 2011
Abstract
This paper deals with robust state estimation for discrete-time linear parameter-varying (LPV) descriptor systems. It is assumed that all the system state-space matrices are affine functions of the uncertain parameters, and that both the parameters and their variations are bounded functions of time [...] Read more.
This paper deals with robust state estimation for discrete-time linear parameter-varying (LPV) descriptor systems. It is assumed that all the system state-space matrices are affine functions of the uncertain parameters, and that both the parameters and their variations are bounded functions of time with known minimum and maximum values. First, necessary and sufficient conditions are proposed for admissibility and bounded realness of discrete linear time-varying (DLTV) descriptor systems. Next, two convex-optimisation-based methods are proposed for designing admissible stationary linear descriptor filters for LPV descriptor systems which ensure a prescribed upper bound on the ℓ2-induced gain from the noise signal to the estimation error regardless of model uncertainties. The proposed filter designs are based on parameter-dependent generalised Lyapunov functions, and full-order, augmented-order, and reduced-order filters are considered. Numerical examples are presented to show the effectiveness of the proposed filtering scheme. In particular, the proposed approach is used to estimate the state variables of a controlled horizontal 2-DOF robotic manipulator from noisy measurements. Full article
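The affine parameter dependence assumed in the abstract, with each matrix of the form A(θ) = A0 + Σ θi Ai and each θi confined to a known range, can be sketched as follows. The function names and the clipping step are illustrative, not the paper's notation.

```python
import numpy as np

def eval_affine(A0, A_list, theta, theta_min, theta_max):
    """Evaluate A(theta) = A0 + sum_i theta_i * A_i after clipping
    each parameter to its known [min, max] interval, matching the
    bounded-parameter assumption in the abstract."""
    theta = np.clip(np.asarray(theta, dtype=float), theta_min, theta_max)
    A = A0.copy()
    for th, Ai in zip(theta, A_list):
        A += th * Ai
    return A

# toy 2x2 example with a single scalar parameter in [-1, 1]
A0 = np.array([[0.9, 0.1], [0.0, 0.8]])
A1 = np.array([[0.05, 0.0], [0.0, -0.05]])
theta_min, theta_max = np.array([-1.0]), np.array([1.0])
A = eval_affine(A0, [A1], [0.5], theta_min, theta_max)
```

Because the dependence is affine and the parameter box is known, LMI-based filter designs of this kind typically only need to be checked at the vertices of the box, e.g. θ ∈ {−1, 1} in the toy example.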

11 pages, 1919 KB  
Article
Large-Scale Whale-Call Classification by Transfer Learning on Multi-Scale Waveforms and Time-Frequency Features
by Lilun Zhang, Dezhi Wang, Changchun Bao, Yongxian Wang and Kele Xu
Appl. Sci. 2019, 9(5), 1020; https://doi.org/10.3390/app9051020 - 12 Mar 2019
Cited by 26 | Viewed by 5149
Abstract
Whale vocal calls contain valuable information and abundant characteristics that are important for the classification of whale sub-populations and related biological research. In this study, an effective data-driven approach based on pre-trained Convolutional Neural Networks (CNNs) using multi-scale waveforms and time-frequency feature representations is [...] Read more.
Whale vocal calls contain valuable information and abundant characteristics that are important for the classification of whale sub-populations and related biological research. In this study, an effective data-driven approach based on pre-trained Convolutional Neural Networks (CNNs) using multi-scale waveforms and time-frequency feature representations is developed to classify whale calls from a large open-source dataset recorded by sensors carried by whales. Specifically, the classification is carried out through a transfer learning approach using pre-trained state-of-the-art CNN models from the field of computer vision. Raw 1D waveforms and 2D log-mel features of the whale-call data are used as inputs to the CNN models. For raw waveform input, windows are applied to capture multiple sketches of a whale-call clip at different time scales, and the features from these sketches are stacked for classification. When using the log-mel features, the delta and delta-delta features are also calculated to produce a 3-channel feature representation for analysis. During training, 4-fold cross-validation is employed to reduce overfitting, and the mix-up technique is applied for data augmentation to further improve system performance. The results show that the proposed method improves accuracy by more than 20 percentage points for classification into 16 whale pods compared with the baseline method using groups of 2D shape descriptors of spectrograms and Fisher discriminant scores on the same dataset. Moreover, it is shown that classifications based on log-mel features have higher accuracies than those based directly on raw waveforms. A phylogeny graph is also produced to illustrate the relationships among the whale sub-populations. Full article
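The mix-up augmentation mentioned in the training description blends pairs of examples and their one-hot labels with a Beta-distributed weight. A minimal sketch, assuming alpha = 0.2 (a common choice, not necessarily the paper's setting):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix-up augmentation: return a convex combination of two
    feature vectors and of their one-hot label vectors, weighted
    by a single draw lam ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# toy example with 16-class one-hot labels, matching the 16 whale pods
x1, x2 = np.zeros(8), np.ones(8)
y1, y2 = np.eye(16)[0], np.eye(16)[3]
x, y = mixup(x1, y1, x2, y2)
# x lies between x1 and x2; the mixed label y still sums to 1
```

Because the same lam weights both the inputs and the labels, the mixed target remains a valid probability distribution over the 16 classes, which is what lets the network train on the interpolated examples with an ordinary cross-entropy loss.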
(This article belongs to the Section Acoustics and Vibrations)
