Search Results (52)

Search Parameters:
Keywords = automatic rank determination

14 pages, 2180 KiB  
Article
Validation of the Automatic Real-Time Monitoring of Airborne Pollens in China Against the Reference Hirst-Type Trap Method
by Yiwei Liu, Wen Shao, Xiaolan Lei, Wenpu Shao, Zhongshan Gao, Jin Sun, Sixu Yang, Yunfei Cai, Zhen Ding, Na Sun, Songqiang Gu, Li Peng and Zhuohui Zhao
Atmosphere 2025, 16(5), 531; https://doi.org/10.3390/atmos16050531 - 30 Apr 2025
Viewed by 457
Abstract
Background: There is a lack of automatic real-time monitoring of airborne pollens in China, and no validation study has been performed. Methods: Two-year continuous automatic real-time pollen monitoring (n = 437) was completed in 2023 (3 April–31 December) and 2024 (1 April–30 November) in Shanghai, China, in parallel with standard daily pollen sampling (n = 437) using a volumetric Hirst sampler (Hirst-type trap, per the European standard). Daily ambient particulate matter and meteorological factors were collected simultaneously. Results: Across 2023 and 2024, the daily mean pollen concentration was 7 ± 9 (mean ± standard deviation (SD)) grains/m³ by automatic monitoring and 8 ± 10 grains/m³ by the standard Hirst-type method. The spring season had higher daily pollen levels by both methods (11 ± 14 grains/m³ and 12 ± 15 grains/m³), and the daily maxima reached 106 grains/m³ and 100 grains/m³, respectively. A strong correlation was observed between the two methods by both Pearson (coefficient 0.87, p < 0.001) and Spearman's rank correlation (coefficient 0.70, p < 0.001). Compared to the standard method, both simple (R² = 0.76) and multiple linear regression models (R² = 0.76) showed a relatively high goodness of fit, which remained robust under a 5-fold cross-validation approach. The multiple regression model adjusted for five additional covariates: daily mean temperature, relative humidity, wind speed, precipitation, and PM10. In the subset of samples with daily pollen concentration ≥ 10 grains/m³ (n = 98) and in the spring season (n = 145), the simple linear models remained robust and performed even better (R² = 0.71 and 0.83). Conclusions: This is the first validation study of automatic real-time pollen monitoring by volumetric concentration in China against the international standard manual method. A simple linear regression model was determined to be reliable and adequate, and the fit was better on days with higher pollen levels (≥10 grains/m³) and in the spring season. More validation studies are needed in places with different ecological and climate characteristics to promote volumetric real-time monitoring of pollens in China.
(This article belongs to the Section Air Quality)
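
A minimal sketch of the validation idea described above, calibrating an automatic counter against a Hirst reference with a simple linear model and 5-fold cross-validation; the paired counts below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for paired daily counts: Hirst reference vs. automatic sensor.
hirst = rng.gamma(shape=1.2, scale=7.0, size=437)     # grains/m^3
auto = 0.85 * hirst + rng.normal(0.0, 2.0, size=437)  # correlated sensor reading
X, y = auto.reshape(-1, 1), hirst

# Simple linear calibration model, validated with 5-fold cross-validation.
model = LinearRegression().fit(X, y)
r2_folds = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"in-sample R^2 = {model.score(X, y):.2f}")
print(f"5-fold R^2    = {r2_folds.mean():.2f} +/- {r2_folds.std():.2f}")
```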

16 pages, 1043 KiB  
Article
Relationship Between Subclinical Mastitis Occurrence and Pathogen Prevalence in Two Different Automatic Milking Systems
by Karise Fernanda Nogara, Marcos Busanello and Maity Zopollatto
Animals 2025, 15(6), 776; https://doi.org/10.3390/ani15060776 - 9 Mar 2025
Viewed by 1255
Abstract
This study compared two types of automatic milking systems (AMSs) and their relationship with epidemiological indices of subclinical mastitis (SCM) and the prevalence of mastitis-causing pathogens. Conducted between 2020 and 2023 on a dairy farm in Vacaria, Rio Grande do Sul, Brazil, this study analyzed data from 464 lactating cows housed in compost-bedded pack barns (CBPBs) and milked by eight AMS units: four from DeLaval (which uses teat cups for teat cleaning) and four from Lely (which uses brushes for teat cleaning). SCM incidence, prevalence, and the percentages of chronic and cured cows were determined using somatic cell counts (SCCs) and microbiological cultures. Statistical analyses included the Wilcoxon signed-rank test and the Chi-square test to evaluate SCM indices and pathogen associations with AMSs. No significant difference was observed between AMS types in SCM prevalence (p = 0.3371), percentage of chronic (p = 0.3590) and cured cows (p = 0.4038), SCC (p = 0.1290), or total bacterial count (TBC) (p = 0.8750). However, SCM incidence was higher in the Lely (14.7%) than in the DeLaval AMS (9.1%) (p = 0.0032). The Chi-square results revealed that the Lely AMS was associated with major pathogens such as Staphylococcus aureus and Escherichia coli, whereas DeLaval showed associations with minor environmental and contagious pathogens, particularly non-aureus Staphylococci. The findings indicate a relationship between AMS cleaning systems and pathogen spread, suggesting that the Lely AMS may contribute to more aggressive infections due to its cleaning system.
(This article belongs to the Section Cattle)
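
The two tests named above are standard SciPy calls; a small sketch with invented prevalence figures and an invented pathogen contingency table:

```python
import numpy as np
from scipy.stats import wilcoxon, chi2_contingency

rng = np.random.default_rng(1)

# Paired monthly SCM prevalence (%) for the two AMS types (synthetic values).
delaval = rng.normal(25.0, 5.0, size=36)
lely = delaval + rng.normal(1.0, 4.0, size=36)

stat, p = wilcoxon(delaval, lely)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")

# 2x2 contingency table: pathogen isolations (major vs. minor) per AMS type.
table = np.array([[48, 30],   # Lely:    major, minor
                  [22, 41]])  # DeLaval: major, minor
chi2, p, dof, _ = chi2_contingency(table)
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```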

24 pages, 9722 KiB  
Article
Automation Applied to the Collection and Generation of Scientific Literature
by Nadia Paola Valadez-de la Paz, Jose Antonio Vazquez-Lopez, Aidee Hernandez-Lopez, Jaime Francisco Aviles-Viñas, Jose Luis Navarro-Gonzalez, Alfredo Valentin Reyes-Acosta and Ismael Lopez-Juarez
Publications 2025, 13(1), 11; https://doi.org/10.3390/publications13010011 - 6 Mar 2025
Viewed by 1265
Abstract
Preliminary activities of searching and selecting relevant articles are crucial in scientific research to determine the state of the art (SOTA) and enhance overall outcomes. While automatic keyword-extraction tools exist, these algorithms are often computationally expensive, storage-intensive, and reliant on institutional subscriptions for metadata retrieval; most importantly, they still require manual selection of literature. This paper introduces a framework that automates keyword searching in article abstracts to help select relevant literature for the SOTA by identifying matching key terms that we hereafter call source words. A case study in the food and beverage industry demonstrates the algorithm's application. In the study, five relevant knowledge areas were defined to guide literature selection. The database from scientific repositories was categorized using six classification rules based on impact factor (IF), Open Access (OA) status, and JCR journal ranking. This classification revealed the knowledge area with the highest presence and highlighted the effectiveness of the selection rules in identifying articles for the SOTA. A panel of experts confirmed the algorithm's effectiveness in identifying source words in high-quality articles. The algorithm's performance was evaluated using the F1 score, which reached 0.83 after filtering out non-relevant articles. This result validates the algorithm's ability to extract significant source words and demonstrates its usefulness in building the SOTA by focusing on the most scientifically impactful articles.
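
A toy illustration of scoring a source-word filter with the F1 measure; the source words, abstracts, and expert labels are all made up:

```python
# Toy relevance filter: an abstract is flagged relevant if it contains any source word.
source_words = {"rank", "automatic", "determination"}

def is_relevant(abstract: str) -> bool:
    tokens = set(abstract.lower().split())
    return bool(source_words & tokens)

# Evaluate against expert labels with precision/recall/F1.
abstracts = ["automatic rank methods ...", "history of brewing ...", "rank determination ..."]
expert = [True, False, True]
pred = [is_relevant(a) for a in abstracts]

tp = sum(p and e for p, e in zip(pred, expert))
fp = sum(p and not e for p, e in zip(pred, expert))
fn = sum(e and not p for p, e in zip(pred, expert))
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```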

11 pages, 2309 KiB  
Article
Radiomics Feature Stability in True and Virtual Non-Contrast Reconstructions from Cardiac Photon-Counting Detector CT Datasets
by Luca Canalini, Elif G. Becker, Franka Risch, Stefanie Bette, Simon Hellbrueck, Judith Becker, Katharina Rippel, Christian Scheurig-Muenkler, Thomas Kroencke and Josua A. Decker
Diagnostics 2024, 14(22), 2483; https://doi.org/10.3390/diagnostics14222483 - 7 Nov 2024
Viewed by 1018
Abstract
Objectives: Virtual non-contrast (VNC) series reconstructed from contrast-enhanced cardiac scans acquired with photon-counting detector CT (PCD-CT) systems have the potential to replace true non-contrast (TNC) series. However, a quantitative comparison of the image characteristics of TNC and VNC data is necessary to determine to what extent they are interchangeable. This work quantitatively evaluates the image similarity between VNC and TNC reconstructions by measuring the stability of multi-class radiomics features extracted from intra-patient TNC and VNC reconstructions. Methods: TNC and VNC series of 84 patients were retrospectively collected. For each patient, the myocardium and epicardial adipose tissue (EAT) were semi-automatically segmented in both VNC and TNC reconstructions, and 105 radiomics features were extracted in each mask. Intra-feature correlation scores were computed using the intraclass correlation coefficient (ICC), and stable features were defined as those with an ICC higher than 0.75. Results: In the myocardium, 41 stable features were identified; the three with the highest ICC were glrlm_GrayLevelVariance (ICC3 0.98 [0.97, 0.99]), ngtdm_Strength (ICC3 0.97 [0.95, 0.98]), and firstorder_Variance (ICC3 0.96 [0.94, 0.98]). For the epicardial fat, 40 stable features were found; the three highest ranked were firstorder_Median (ICC3 0.96 [0.93, 0.97]), firstorder_RootMeanSquared (ICC3 0.95 [0.92, 0.97]), and firstorder_Mean (ICC3 0.95 [0.92, 0.97]). A total of 24 features (22.8%; 24/105) were stable in both anatomical structures. Conclusions: The significant differences in the correlation of radiomics features in VNC and TNC volumes of the myocardium and epicardial fat suggest that the two reconstructions differ more than initially assumed. This indicates that they may not be interchangeable, and such differences could have clinical implications. Therefore, care should be taken when selecting VNC as a substitute for TNC in radiomics research to ensure accurate and reliable analysis. Moreover, the observed variations may impact clinical workflows, where precise tissue characterization is critical for diagnosis and treatment planning.
(This article belongs to the Special Issue Recent Developments and Future Trends in Thoracic Imaging)
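
One way to reproduce the stability criterion is pingouin's ICC implementation; the sketch below uses synthetic TNC/VNC feature values and the paper's 0.75 threshold:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)

# Synthetic paired feature values: one radiomics feature per patient, measured
# on TNC and VNC reconstructions (the two "raters" in ICC terms).
n = 84
tnc = rng.normal(50.0, 10.0, size=n)
vnc = tnc + rng.normal(0.0, 3.0, size=n)

df = pd.DataFrame({
    "patient": np.tile(np.arange(n), 2),
    "series": ["TNC"] * n + ["VNC"] * n,
    "value": np.concatenate([tnc, vnc]),
})

icc = pg.intraclass_corr(data=df, targets="patient", raters="series", ratings="value")
icc3 = icc.set_index("Type").loc["ICC3"]
stable = icc3["ICC"] > 0.75  # stability threshold used in the paper
print(f"ICC3 = {icc3['ICC']:.2f}, stable = {stable}")
```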

22 pages, 2550 KiB  
Article
Ensemble Fusion Models Using Various Strategies and Machine Learning for EEG Classification
by Sunil Kumar Prabhakar, Jae Jun Lee and Dong-Ok Won
Bioengineering 2024, 11(10), 986; https://doi.org/10.3390/bioengineering11100986 - 29 Sep 2024
Cited by 2 | Viewed by 2223
Abstract
Electroencephalography (EEG) assesses the electrical activity of the brain so that neuronal activity is captured effectively. EEG is used to analyze many neurological disorders, as it relies on low-cost equipment. Diagnosing and treating neurological disorders requires lengthy EEG recordings, and various machine learning and deep learning techniques have been developed to classify EEG signals automatically. In this work, five ensemble models are proposed for EEG signal classification, with epilepsy as the main neurological disorder analyzed. The first proposed ensemble technique utilizes an equidistant assessment and ranking determination mode with the proposed Enhance the Sum of Connection and Distance (ESCD)-based feature selection technique; the second utilizes Infinite Independent Component Analysis (I-ICA) and multiple classifiers with majority voting; the third utilizes a Genetic Algorithm (GA)-based feature selection technique and a bagging Support Vector Machine (SVM)-based classification model; the fourth utilizes the Hilbert–Huang Transform (HHT) and multiple classifiers with GA-based multiparameter optimization; and the fifth utilizes factor analysis with an ensemble-layer K-nearest neighbor (KNN) classifier. The best results are obtained by the ensemble hybrid model using the equidistant assessment and ranking determination method with the proposed ESCD-based feature selection technique and an SVM classifier, achieving a classification accuracy of 89.98%.
(This article belongs to the Special Issue Machine Learning Technology in Predictive Healthcare)
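
A generic majority-voting ensemble in the spirit of the second technique, sketched with scikit-learn on synthetic features; the base classifiers and data are placeholders, not the paper's pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted EEG features (epileptic vs. non-epileptic).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",  # majority vote across the base classifiers
)
scores = cross_val_score(ensemble, X, y, cv=5)
print(f"majority-vote accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```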

22 pages, 6500 KiB  
Article
Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) Framework
by Jesse Stevens, Daniel N. Wilke and Isaac I. Setshedi
Math. Comput. Appl. 2024, 29(5), 85; https://doi.org/10.3390/mca29050085 - 25 Sep 2024
Viewed by 1637
Abstract
Linear latent variable models such as principal component analysis (PCA), independent component analysis (ICA), canonical correlation analysis (CCA), and factor analysis (FA) identify latent directions (or loadings), either ordered or unordered. The data are then projected onto the latent directions to obtain their projected representations (or scores). For example, PCA solvers usually rank principal directions from most to least explained variance, whereas ICA solvers usually return independent directions unordered, often with a single source spread across multiple directions as multiple sub-sources, severely diminishing usability and interpretability. This paper proposes a general framework to enhance latent space representations and improve the interpretability of linear latent spaces. Although the concepts are programming-language agnostic, the framework is written in Python. It simplifies the clustering and ranking of latent vectors, enhancing the latent information carried per vector and the interpretation of latent vectors. Several enhancements are incorporated: latent ranking (LR), latent scaling (LS), latent clustering (LC), and latent condensing (LCON). LR ranks latent directions according to a specified scalar metric; LS scales latent directions according to a specified metric; LC automatically clusters latent directions into a specified number of clusters; and LCON automatically determines the appropriate number of clusters to condense the latent directions for a given metric, enabling optimal latent discovery. Additional functionality includes single-channel and multi-channel data sources and data pre-processing strategies such as Hankelisation, seamlessly expanding the applicability of linear latent variable models (LLVMs) to a wider variety of data. The effectiveness of LR, LS, LC, and LCON is shown on two foundational problems crafted with two applied latent variable models, namely PCA and ICA.
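
A minimal latent-ranking (LR) sketch: FastICA returns components unordered, so we rank them by how much observed variance each one reconstructs; the metric choice is illustrative, not the framework's API:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

# Two mixed sources observed through a random mixing matrix.
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))]
X = sources @ rng.normal(size=(2, 5))

ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(X)  # unordered independent components

# Latent ranking (LR): order components by a scalar metric, here the variance
# each component reconstructs in the observed data.
var = [np.var(np.outer(scores[:, i], ica.mixing_[:, i])) for i in range(2)]
order = np.argsort(var)[::-1]
print("component ranking (most to least variance):", order, np.round(np.sort(var)[::-1], 4))
```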

18 pages, 2372 KiB  
Article
Exploring Taxonomic and Genetic Relationships in the Pinus mugo Complex Using Genome Skimming Data
by Joanna Sikora and Konrad Celiński
Int. J. Mol. Sci. 2024, 25(18), 10178; https://doi.org/10.3390/ijms251810178 - 22 Sep 2024
Cited by 1 | Viewed by 1425
Abstract
Genome skimming is a novel approach that enables obtaining large-scale genomic information from the high-copy DNA fractions of shallow whole-genome sequencing. The simplicity of this method, low analysis costs, and the large amounts of data generated have made it widely used in plant research, including species identification, especially for protected or endangered taxa. This task is particularly difficult for closely related taxa. The Pinus mugo complex includes several dozen closely related taxa occurring in the most important mountain ranges of Europe; the taxonomic rank, origin, and distribution of many of these taxa have been debated for years. In this study, we used genome skimming and multilocus DNA barcoding to obtain different sequence data sets and to determine their genetic diversity and suitability for distinguishing closely related taxa in the Pinus mugo complex. We generated seven data sets, which were analyzed using three discrimination methods: tree based, distance based, and assembling species by automatic partitioning. Genetic diversity among populations and taxa was also investigated using haplotype network analysis and principal coordinate analysis. The proposed data set based on divergence hotspots is as much as twenty times more variable than the other analyzed sets and improves the phylogenetic resolution of the Pinus mugo complex. In light of the results, Pinus × rhaetica does not belong to the Pinus mugo complex and should not be identified with either Pinus uliginosa or Pinus rotundata; it appears to represent a fixed hybrid or introgressant between Pinus sylvestris and Pinus mugo. In turn, Pinus mugo and Pinus uncinata apparently played an important role in the origins of Pinus uliginosa and Pinus rotundata.
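
A toy sketch of the distance-based discrimination idea, computing p-distances between short aligned sequences and grouping under an arbitrary threshold; the sequences and the threshold are invented:

```python
# Minimal distance-based discrimination sketch: pairwise p-distances between
# aligned sequences, then grouping under a fixed threshold.
seqs = {
    "taxonA_1": "ATGGCCTTAG",
    "taxonA_2": "ATGGCCTTAA",
    "taxonB_1": "ATGACCGTCG",
}

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = list(seqs)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        d = p_distance(seqs[names[i]], seqs[names[j]])
        same = "same group" if d < 0.15 else "distinct"
        print(f"{names[i]} vs {names[j]}: p = {d:.2f} ({same})")
```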

19 pages, 1137 KiB  
Article
A Bayesian Tensor Decomposition Method for Joint Estimation of Channel and Interference Parameters
by Yuzhe Sun, Wei Wang, Yufan Wang and Yuanfeng He
Sensors 2024, 24(16), 5284; https://doi.org/10.3390/s24165284 - 15 Aug 2024
Cited by 2 | Viewed by 1595
Abstract
Bayesian tensor decomposition has been widely applied to channel parameter estimation, particularly in the presence of interference. However, existing Bayesian tensor decompositions do not consider the type of interference, making it difficult to estimate interference parameters accurately. In this paper, we present a robust tensor variational method using a CANDECOMP/PARAFAC (CP)-based additive interference model for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. A more realistic interference model than traditional colored noise is considered, covering co-channel interference (CCI) and front-end interference (FEI). In contrast to conventional algorithms that filter out interference, the proposed method jointly estimates the channel and interference parameters in the time–frequency domain. Simulation results validate the correctness of the proposed method via the evidence lower bound (ELBO) and show that it outperforms traditional information-theoretic methods, tensor decomposition models, and the robust CP-based model (RCP) in estimation accuracy. Furthermore, the interference parameter estimation technique has profound implications for anti-interference applications and dynamic spectrum allocation.
(This article belongs to the Special Issue Integrated Localization and Communication: Advances and Challenges)
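
A CP decomposition sketch with TensorLy on a synthetic rank-3 tensor; this illustrates only the tensor model underlying the method, not the paper's variational Bayesian estimator or its interference terms:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(4)

# Synthetic 3-way "channel" tensor (e.g., antennas x subcarriers x snapshots)
# built from a rank-3 CP structure plus additive noise.
rank = 3
factors_true = [rng.normal(size=(dim, rank)) for dim in (8, 16, 10)]
tensor = tl.cp_to_tensor((np.ones(rank), factors_true))
tensor += 0.05 * rng.normal(size=tensor.shape)

# CP decomposition recovers per-mode factor matrices, from which parameter
# estimators would then read off physical quantities.
weights, factors = parafac(tl.tensor(tensor), rank=rank, n_iter_max=200)
recon = tl.cp_to_tensor((weights, factors))
err = tl.norm(recon - tensor) / tl.norm(tensor)
print(f"relative reconstruction error: {err:.3f}")
```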

31 pages, 4589 KiB  
Article
Band Selection via Band Density Prominence Clustering for Hyperspectral Image Classification
by Chein-I Chang, Yi-Mei Kuo and Kenneth Yeonkong Ma
Remote Sens. 2024, 16(6), 942; https://doi.org/10.3390/rs16060942 - 7 Mar 2024
Cited by 2 | Viewed by 1461
Abstract
Band clustering has been widely used for hyperspectral band selection (BS). However, selecting an appropriate band to represent a band cluster is a key issue. Density peak clustering (DPC) provides an effective means for this purpose, referred to as DPC-based BS (DPC-BS). It uses two indicators, cluster density and cluster distance, to rank all bands for BS. This paper reinterprets cluster density and cluster distance as band local density (BLD) and band distance (BD) and introduces a new concept, the band prominence value (BPV), as a third indicator. Combining BLD and BD with BPV yields new band prioritization criteria for BS, extending the current DPC-BS to a new method referred to as band density prominence clustering (BDPC). Taking advantage of the three key indicators of BDPC, i.e., the cut-off band distance bc, the k-nearest-neighboring-band local density, and BPV, two versions of BDPC can be derived, called bc-BDPC and k-BDPC, both of which differ from existing DPC-based BS methods in three respects. First, the parameter bc of bc-BDPC and the parameter k of k-BDPC can be automatically determined by the number of clusters and the virtual dimensionality (VD), respectively. Second, instead of Euclidean distance, a spectral discrimination measure is used to calculate BD as well as inter-band correlation. Most importantly, a novel idea combines BPV with BLD and BD to derive new band prioritization criteria for BS. Extensive experiments demonstrate that BDPC generally performs better than DPC-BS as well as many current state-of-the-art BS methods.
(This article belongs to the Special Issue Advances in Hyperspectral Data Exploitation II)
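
A NumPy sketch of the two DPC indicators (band local density and band distance) used to rank bands; it uses Euclidean distance and a fixed k for brevity, whereas the paper substitutes a spectral discrimination measure and derives k from the virtual dimensionality:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic hyperspectral cube flattened to (pixels, bands).
pixels, bands = 500, 30
cube = rng.normal(size=(pixels, bands))

# Pairwise band distances (Euclidean here, for brevity).
D = np.linalg.norm(cube.T[:, None, :] - cube.T[None, :, :], axis=2)

# Band local density: inverse mean distance to the k nearest neighboring bands.
k = 5
density = 1.0 / np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)

# Band distance: distance to the nearest band of higher density (DPC rule).
delta = np.empty(bands)
for i in range(bands):
    higher = density > density[i]
    delta[i] = D[i, higher].min() if higher.any() else D[i].max()

# Rank bands by the product of the two indicators and keep the top ones.
ranking = np.argsort(density * delta)[::-1]
print("top-5 selected bands:", ranking[:5])
```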

36 pages, 721 KiB  
Article
An Approach Based on Intuitionistic Fuzzy Sets for Considering Stakeholders’ Satisfaction, Dissatisfaction, and Hesitation in Software Features Prioritization
by Vassilis C. Gerogiannis, Dimitrios Tzimos, George Kakarontzas, Eftychia Tsoni, Omiros Iatrellis, Le Hoang Son and Andreas Kanavos
Mathematics 2024, 12(5), 680; https://doi.org/10.3390/math12050680 - 26 Feb 2024
Cited by 3 | Viewed by 3507
Abstract
This paper introduces a semi-automated approach for prioritizing software features in medium- to large-sized software projects, considering stakeholders' satisfaction and dissatisfaction as key criteria for incorporating candidate features. Our research acknowledges an inherent asymmetry in stakeholders' evaluations between the satisfaction from offering certain features and the dissatisfaction from not offering the same features. Even with systematic, ordinal-scale-based prioritization techniques, stakeholders may exhibit hesitation and uncertainty in their assessments. Our approach addresses these challenges by employing the Binary Search Tree prioritization method and leveraging the mathematical framework of Intuitionistic Fuzzy Sets to quantify stakeholders' uncertainty when assessing the value of software features. Stakeholders' rankings, with satisfaction and dissatisfaction as prioritization criteria, are mapped to Intuitionistic Fuzzy Numbers, and objective weights are computed automatically. Rankings associated with less hesitation count more toward the final feature priorities than rankings with more hesitation, which reflects indeterminacy or lack of knowledge on the stakeholders' part. We validate the proposed approach with a case study illustrating its application and conduct a comparative analysis with existing software requirements prioritization methods.
(This article belongs to the Special Issue Applications of Soft Computing in Software Engineering)
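
A toy sketch of the hesitation-weighting idea: each assessment is an intuitionistic fuzzy pair (mu, nu) with hesitation pi = 1 − mu − nu, and less-hesitant assessments receive larger objective weights; the numbers and the simple mu − nu score function are illustrative, not the paper's exact scheme:

```python
# Aggregate stakeholder assessments of one feature as intuitionistic fuzzy
# numbers (mu = satisfaction, nu = dissatisfaction, pi = hesitation).
assessments = [
    # (membership mu, non-membership nu); pi = 1 - mu - nu is the hesitation
    (0.7, 0.2),   # confident stakeholder, pi = 0.1
    (0.5, 0.2),   # more hesitant, pi = 0.3
    (0.6, 0.1),   # pi = 0.3
]

hesitation = [1.0 - mu - nu for mu, nu in assessments]
# Objective weights: rankings with less hesitation count more.
raw = [1.0 - pi for pi in hesitation]
weights = [r / sum(raw) for r in raw]

score = sum(w * (mu - nu) for w, (mu, nu) in zip(weights, assessments))
print("weights:", [round(w, 3) for w in weights])
print(f"aggregated score (mu - nu): {score:.3f}")
```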

17 pages, 1435 KiB  
Article
Reliability Study of an Intelligent Profiling Progressive Automatic Glue Cutter Based on the Improved FMECA Method
by Heng Zhang, Yaya Chen, Jingyu Cong, Junxiao Liu, Zhifu Zhang and Xirui Zhang
Agriculture 2023, 13(8), 1475; https://doi.org/10.3390/agriculture13081475 - 26 Jul 2023
Cited by 6 | Viewed by 1673
Abstract
This study introduces a fuzzy-theory approach to enhance the traditional failure mode, effects, and criticality analysis (FMECA) method and address its limitations, which stem primarily from subjectivity and a lack of quantitative analysis. The proposed method, an FMECA improvement based on fuzzy comprehensive evaluation, quantifies the qualitative aspects of the analysis and provides a detailed outline of the analysis procedure. Applying the enhanced FMECA method to assess the reliability of an intelligent profiling progressive automatic rubber cutter yields a hazard ranking for each failure mode of the cutter, thereby identifying areas that require reliability improvement. The analysis outcomes demonstrate that this method establishes a theoretical foundation for subsequent cutter improvement designs and enables early identification of potential failures, leading to a reduced failure rate and an enhanced reliability level for the intelligent profiling progressive automatic cutter. Furthermore, this reliability analysis and testing approach holds significant value for elevating the reliability standards of agricultural equipment as a whole and can be explored and implemented in other agricultural machinery contexts.
(This article belongs to the Special Issue Design, Optimization and Analysis of Agricultural Machinery)
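
A minimal fuzzy comprehensive evaluation step as it might apply to one failure mode: a membership matrix over hazard levels combined with factor weights; all numbers are invented for illustration:

```python
import numpy as np

# Fuzzy comprehensive evaluation sketch for one failure mode.
levels = ["low", "moderate", "high", "critical"]

# Membership matrix R: rows = risk factors (severity, occurrence, detectability),
# columns = degree of membership in each hazard level (rows sum to 1).
R = np.array([
    [0.1, 0.2, 0.4, 0.3],
    [0.3, 0.4, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
])
w = np.array([0.5, 0.3, 0.2])  # factor weights (severity dominates)

B = w @ R  # fuzzy comprehensive evaluation vector
print({lvl: round(b, 3) for lvl, b in zip(levels, B)})
print("assigned hazard level:", levels[int(np.argmax(B))])
```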

22 pages, 640 KiB  
Article
Intelligent Identification of Trend Components in Singular Spectrum Analysis
by Nina Golyandina, Pavel Dudnik and Alex Shlemov
Algorithms 2023, 16(7), 353; https://doi.org/10.3390/a16070353 - 24 Jul 2023
Cited by 5 | Viewed by 2850
Abstract
Singular spectrum analysis (SSA) is a non-parametric adaptive technique for time series analysis. It allows solving various time series problems without the need to define a model. In this study, we focus on trend extraction. Extracting trends with SSA requires grouping elementary components, but automating this grouping is challenging due to the nonparametric nature of SSA. Known approaches to automated grouping in SSA do not work well when the signal components are mixed. In this paper, a novel approach is proposed that combines automated identification of trend components with separability improvement. We also consider a new method called EOSSA for separability improvement, along with other known methods. The automated modifications are compared numerically and applied to real-life time series. The proposed approach demonstrated its advantage in extracting trends when dealing with mixed signal components. The separability-improving method EOSSA proved to be the most accurate when the signal rank was properly detected or slightly exceeded. The automated SSA was very successfully applied to US unemployment data to separate an annual trend from seasonal effects. The proposed approach has shown its capability to automatically extract trends without the need to determine their parametric form.
(This article belongs to the Special Issue Machine Learning for Time Series Analysis)
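
The basic (non-automated) SSA pipeline the paper builds on, sketched in NumPy: Hankel embedding, SVD, grouping, and diagonal averaging. Here the grouping is the manual choice of the leading component, which is exactly the step the paper automates:

```python
import numpy as np

rng = np.random.default_rng(6)

# Series = slow trend + seasonality + noise.
n = 200
t = np.arange(n)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=n)

# SSA step 1: embed into a Hankel trajectory matrix with window length L.
L = 60
K = n - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])

# SSA step 2: SVD into elementary components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# SSA step 3: group components (here, simply the leading one as "trend").
X_trend = s[0] * np.outer(U[:, 0], Vt[0])

# SSA step 4: diagonal averaging back to a series.
trend = np.array([np.mean(X_trend[::-1, :].diagonal(k)) for k in range(-L + 1, K)])
print("trend estimate (first 5):", np.round(trend[:5], 3))
```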

11 pages, 1792 KiB  
Proceeding Paper
Automated Approach for Generating and Evaluating Traffic Incident Response Plans
by Adel Almohammad and Panagiotis Georgakis
Eng. Proc. 2023, 39(1), 13; https://doi.org/10.3390/engproc2023039013 - 28 Jun 2023
Viewed by 1397
Abstract
Traffic incidents usually have negative effects on transportation systems, such as delays and traffic jams. A traffic incident response plan can therefore guide management actors and operators to act effectively and promptly after traffic incidents. In this paper, an approach is proposed to generate and evaluate traffic incident response plans automatically when a traffic incident is detected. In this approach, a library of response action templates is constructed beforehand for use in the real-time generation of a response plan template. According to the type and severity of the detected and confirmed traffic incident, a combination of relevant response action templates provides a set of response plans. In addition, we developed a simulation model of the study area using Aimsun Next software (version 20.0.3), developed by Aimsun, to evaluate the performance of the generated response plans; the simulation outcomes determine the ranking of the generated response plans, including the optimal ones. The proposed approach considers the characteristics of the input traffic incident and the transport road network when generating response plans, and the choice of the optimal response plan likewise considers the characteristics of the input incident. The implementation results show that the generated response plans can efficiently enhance overall network performance and conditions. In addition, the response plan ranking serves as a supportive tool in network operators' decision-making about which response plan to implement or propagate.
(This article belongs to the Proceedings of The 9th International Conference on Time Series and Forecasting)
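
A heavily simplified sketch of the generate-then-rank loop; the template library is hypothetical and the Aimsun Next evaluation is stubbed with a random delay, since the real ranking comes from simulation outcomes:

```python
import itertools
import random

random.seed(0)

# Hypothetical template library keyed by incident type; severity decides how
# many actions are combined. The Aimsun run is stubbed with a random "delay".
templates = {
    "collision": ["close lane", "reroute via A40", "dispatch patrol"],
    "breakdown": ["hard-shoulder stop", "variable speed limit", "dispatch patrol"],
}

def generate_plans(incident_type: str, severity: int):
    actions = templates[incident_type]
    return list(itertools.combinations(actions, min(severity, len(actions))))

def simulated_delay(plan) -> float:
    return random.uniform(5.0, 30.0)  # stand-in for an Aimsun Next simulation run

plans = generate_plans("collision", severity=2)
ranked = sorted(plans, key=simulated_delay)  # lowest simulated delay first
for rank, plan in enumerate(ranked, start=1):
    print(rank, plan)
```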

25 pages, 15181 KiB  
Article
A Hybrid Feature Selection and Multi-Label Driven Intelligent Fault Diagnosis Method for Gearbox
by Di Liu, Xiangfeng Zhang, Zhiyu Zhang and Hong Jiang
Sensors 2023, 23(10), 4792; https://doi.org/10.3390/s23104792 - 16 May 2023
Cited by 5 | Viewed by 1936
Abstract
Gearboxes are utilized in practically all complicated machinery because they offer great transmission accuracy and load capacity, so their failure frequently results in significant financial losses. The classification of high-dimensional data remains difficult even though numerous data-driven intelligent diagnosis approaches have been suggested and employed for compound fault diagnosis in recent years with successful outcomes. To achieve the best diagnostic performance, a feature selection and fault decoupling framework is proposed in this paper, based on multi-label K-nearest neighbors (ML-kNN) classifiers, that can automatically determine the optimal subset of the original high-dimensional feature set. The proposed feature selection method is a hybrid framework divided into three stages. In the first stage, three filter models, the Fisher score, information gain, and Pearson's correlation coefficient, pre-rank candidate features. In the second stage, a weighting scheme based on the weighted average method fuses the pre-ranking results from the first stage, and a genetic algorithm optimizes the weights to re-rank the features. In the third stage, the optimal subset is found automatically and iteratively using three heuristic strategies: binary search, sequential forward search, and sequential backward search. The method accounts for feature irrelevance, redundancy, and inter-feature interaction in the selection process, and the selected optimal subsets have better diagnostic performance. On two gearbox compound fault datasets, ML-kNN performs exceptionally well using the optimal subset, with subset accuracies of 96.22% and 100%. The experimental findings demonstrate the effectiveness of the proposed method in predicting multiple labels for compound fault samples to identify and decouple compound faults. The proposed method performs better in terms of classification accuracy and optimal subset dimensionality than other existing methods.
(This article belongs to the Section Fault Diagnosis & Sensors)
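
A sketch of the first two stages, pre-ranking features with the three filters and fusing them by a weighted average; the weights are fixed here, whereas the paper optimizes them with a genetic algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=15, n_informative=5, random_state=0)

# Filter 1: Fisher score (between-class over within-class variance, 2 classes).
m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
fisher = (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

# Filter 2: information gain (mutual information with the label).
mi = mutual_info_classif(X, y, random_state=0)

# Filter 3: absolute Pearson correlation with the label.
pearson = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# Fuse the three pre-rankings with a weighted average of normalized scores.
def norm(s):
    return (s - s.min()) / (s.max() - s.min())

w = np.array([0.4, 0.35, 0.25])  # fixed stand-in for the GA-optimized weights
fused = w[0] * norm(fisher) + w[1] * norm(mi) + w[2] * norm(pearson)
print("re-ranked features (best first):", np.argsort(fused)[::-1])
```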

28 pages, 5967 KiB  
Article
Air Traffic Complexity Evaluation with Hierarchical Graph Representation Learning
by Lu Zhang, Hongyu Yang and Xiping Wu
Aerospace 2023, 10(4), 352; https://doi.org/10.3390/aerospace10040352 - 3 Apr 2023
Cited by 5 | Viewed by 2899
Abstract
Air traffic management (ATM) relies on the running condition of the air traffic control sector (ATCS), and assessing whether a sector is overloaded is crucial for the efficiency and safety of the entire aviation industry. Previous approaches to evaluating air traffic complexity in a sector were mostly based on aircraft operational status, lacked comprehensive characterization, and were less adaptable to real situations. To address these issues, a deep learning technique grounded in complex networks is proposed, employing the flight conflict network (FCN) to generate an air traffic situation graph (ATSG), with the air traffic control instruction (ATCOI) received by each aircraft included as an extra node attribute to increase evaluation accuracy. A pooling method with a graph neural network (GNN) is used to analyze the graph-structured air traffic information and produce the sector complexity rank automatically. The model, Hierarchical Graph Representation Learning (HGRL), builds comprehensive feature representations in two parts: graph structure coarsening and graph attribute learning. Structure coarsening reduces the feature map size through adaptive node selection, while attribute coarsening selects key nodes in the graph-level representation. Experimental findings on a real dataset from the Chinese aviation industry reveal that the proposed model exceeds prior methods in extracting critical information from an ATSG. Moreover, the approach can be applied to the two main types of sectors without extra factor calculations to determine airspace complexity.
(This article belongs to the Special Issue Advances in Air Traffic and Airspace Control and Management)
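
A toy NumPy sketch of the structure-coarsening idea: one message-passing layer followed by top-k node selection and a graph-level readout; the scoring vector and dimensions are placeholders, not HGRL itself:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy air-traffic situation graph: 6 aircraft nodes, random conflict edges,
# 4 attributes per node (e.g., speed, altitude, heading, ATCOI count).
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(6)  # symmetric adjacency with self-loops
H = rng.normal(size=(6, 4))

# One mean-aggregation message-passing layer.
W = rng.normal(size=(4, 4)) * 0.5
H = np.tanh((A / A.sum(1, keepdims=True)) @ H @ W)

# Structure coarsening: keep the top-k nodes by a scalar score,
# then read out a graph-level embedding for complexity ranking.
p = rng.normal(size=4)
scores = H @ p
keep = np.argsort(scores)[-3:]  # adaptive node selection (top-3 here)
graph_embedding = H[keep].mean(axis=0)
print("kept nodes:", keep, "graph embedding:", np.round(graph_embedding, 3))
```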
