Search Results (8)

Search Parameters:
Keywords = heuristic DNN

31 pages, 5907 KiB  
Article
A Lightweight Breast Cancer Mass Classification Model Utilizing Simplified Swarm Optimization and Knowledge Distillation
by Wei-Chang Yeh, Wei-Chung Shia, Yun-Ting Hsu, Chun-Hui Huang and Yong-Shiuan Lee
Bioengineering 2025, 12(6), 640; https://doi.org/10.3390/bioengineering12060640 - 11 Jun 2025
Viewed by 596
Abstract
In recent years, an increasing number of women worldwide have been affected by breast cancer. Early detection is crucial, as it offers the only opportunity to identify abnormalities while the disease is still at an early stage. However, most deep learning models developed for classifying breast cancer abnormalities are large-scale and computationally intensive, overlooking the constraints of cost and limited computational resources. This research addresses these challenges by utilizing the CBIS-DDSM dataset and introducing a novel concatenated classification architecture together with a two-stage strategy to develop an optimized, lightweight model for breast mass abnormality classification. Through data augmentation and image preprocessing, the proposed model demonstrates superior performance compared to standalone CNN and DNN models. The two-stage strategy first constructs a compact model using knowledge distillation and then refines its structure with a heuristic approach known as Simplified Swarm Optimization (SSO). The experimental results confirm that knowledge distillation significantly enhances the model’s performance. Furthermore, by applying SSO’s full-variable update mechanism, the final model—SSO-Concatenated NASNetMobile (SSO-CNNM)—achieves a compression rate of 96.17%, with accuracy, precision, recall, and AUC scores of 96.47%, 97.4%, 94.94%, and 98.23%, respectively, outperforming other existing methods.
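As a concrete illustration of the distillation step described above, here is a minimal PyTorch sketch of the standard knowledge-distillation objective (soft teacher targets blended with hard labels). The temperature and weighting values are illustrative assumptions, not the paper’s settings, and the SSO refinement stage is not shown.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence with hard-label cross-entropy."""
    # Soften both distributions with temperature T; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a binary mass-classification task.
student = torch.randn(8, 2)
teacher = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student, teacher, labels))
```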

19 pages, 7961 KiB  
Article
A Gait Sub-Phase Switching-Based Active Training Control Strategy and Its Application in a Novel Rehabilitation Robot
by Junyu Wu, Ran Wang, Zhuoqi Man, Yubin Liu, Jie Zhao and Hegao Cai
Biosensors 2025, 15(6), 356; https://doi.org/10.3390/bios15060356 - 4 Jun 2025
Viewed by 576
Abstract
This study proposes a heuristic hybrid deep neural network (DNN) gait sub-phase recognition model based on multi-source heterogeneous motion data fusion, which quantifies gait phases and is applied in balance disorder rehabilitation control, achieving a recognition accuracy exceeding 99%. Building upon this model, a motion control strategy for a novel rehabilitation training robot is designed and developed. For patients with some degree of independent movement, an active training strategy is introduced that combines gait recognition with variable admittance control. This strategy provides assistance during the stance phase and moderate support during the swing phase, effectively enhancing the patient’s autonomous movement capabilities and increasing engagement in the rehabilitation process. The gait phase recognition system not only provides rehabilitation practitioners with a comprehensive tool for patient assessment but also serves as a theoretical foundation for collaborative control in rehabilitation robots. Through the innovative active–passive training control strategy and its application in the novel rehabilitation robot, this work overcomes the limitation of traditional rehabilitation robots, which typically operate in a single functional mode, thereby expanding their functional boundaries and enabling more precise, personalized rehabilitation training tailored to patients at different stages of recovery.
(This article belongs to the Special Issue Wearable Sensors for Precise Exercise Monitoring and Analysis)
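For readers unfamiliar with variable admittance control, the sketch below shows the general pattern of switching admittance gains by gait sub-phase, which the abstract pairs with the DNN phase recognizer. The first-order admittance model, gain values, and phase labels are assumptions for illustration, not the authors’ controller.

```python
def admittance_step(f_ext, v_prev, phase, dt=0.01):
    """One Euler step of M*dv/dt + B*v = f_ext with phase-dependent gains."""
    # Hypothetical gains: more supportive (stiffer) in stance, more
    # compliant in swing, mirroring the assist/support split described above.
    gains = {"stance": (8.0, 40.0), "swing": (2.0, 10.0)}  # (M, B)
    M, B = gains[phase]
    dv = (f_ext - B * v_prev) / M
    return v_prev + dv * dt

v = 0.0
for t, (f, ph) in enumerate([(15.0, "stance")] * 3 + [(5.0, "swing")] * 3):
    v = admittance_step(f, v, ph)
    print(f"step {t}: phase={ph}, commanded velocity={v:.3f} m/s")
```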

14 pages, 7528 KiB  
Article
Optimal Power Allocation in Optical GEO Satellite Downlinks Using Model-Free Deep Learning Algorithms
by Theodore T. Kapsis, Nikolaos K. Lyras and Athanasios D. Panagopoulos
Electronics 2024, 13(3), 647; https://doi.org/10.3390/electronics13030647 - 4 Feb 2024
Cited by 3 | Viewed by 1612
Abstract
Geostationary (GEO) satellites are employed at optical frequencies for a variety of satellite services providing wide coverage and connectivity. Multi-beam GEO high-throughput satellites offer Gbps broadband rates and, jointly with low-Earth-orbit mega-constellations, are anticipated to enable a large-scale free-space optical (FSO) network. In this paper, a power allocation methodology based on deep reinforcement learning (DRL) is proposed for optical satellite systems that requires no knowledge of channel statistics. An all-FSO, multi-aperture GEO-to-ground system is considered, and an ergodic capacity optimization problem for the downlink is formulated with transmitted power constraints. A power allocation algorithm was developed, aided by a deep neural network (DNN) that is fed channel state information (CSI) observations and trained in a parameterized on-policy manner through a stochastic policy gradient approach. The proposed method does not require the channels’ transition models or fading distributions. To validate and test the proposed allocation scheme, experimental measurements from the European Space Agency’s ARTEMIS optical satellite campaign were utilized. The predicted average capacity greatly exceeds that of the baseline heuristic algorithms while strongly converging to the supervised, unparameterized approach. The predicted average channel powers differ by only 0.1 W from the reference ones, while the baselines differ significantly more, by about 0.1–0.5 W.
(This article belongs to the Special Issue New Advances of Microwave and Optical Communication)
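A toy sketch of the model-free, on-policy pattern the abstract outlines: a small network maps a CSI observation to a power allocation under a sum-power budget and is updated with a REINFORCE-style stochastic policy gradient, with no channel transition model or fading distribution assumed. The reward proxy, network size, and aperture count are placeholders, not the paper’s setup.

```python
import torch

N, P_TOTAL = 4, 2.0  # apertures and total power budget (illustrative)
net = torch.nn.Sequential(torch.nn.Linear(N, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, N))
log_std = torch.zeros(N, requires_grad=True)
opt = torch.optim.Adam(list(net.parameters()) + [log_std], lr=1e-2)

for step in range(300):
    csi = torch.rand(N)                        # stand-in CSI observation
    dist = torch.distributions.Normal(net(csi), log_std.exp())
    a = dist.sample()                          # stochastic raw action
    p = torch.softmax(a, dim=0) * P_TOTAL      # enforce the sum-power constraint
    reward = torch.log1p(csi * p).sum()        # capacity-like proxy reward
    loss = -dist.log_prob(a).sum() * reward    # REINFORCE estimator
    opt.zero_grad(); loss.backward(); opt.step()
```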

24 pages, 6018 KiB  
Review
Application and Prospect of Artificial Intelligence Methods in Signal Integrity Prediction and Optimization of Microsystems
by Guangbao Shan, Guoliang Li, Yuxuan Wang, Chaoyang Xing, Yanwen Zheng and Yintang Yang
Micromachines 2023, 14(2), 344; https://doi.org/10.3390/mi14020344 - 29 Jan 2023
Cited by 10 | Viewed by 3633
Abstract
Microsystems are widely used in 5G, the Internet of Things, smart electronic devices, and other fields, and signal integrity (SI) determines their performance. Establishing accurate, fast predictive models and intelligent optimization models for SI in microsystems is therefore essential. Recently, neural networks (NNs) and heuristic optimization algorithms have been widely used to predict the SI performance of microsystems. This paper systematically summarizes the neural network methods applied to the prediction of microsystem SI performance, including the artificial neural network (ANN), deep neural network (DNN), recurrent neural network (RNN), and convolutional neural network (CNN), as well as intelligent algorithms applied to the optimization of microsystem SI, including the genetic algorithm (GA), differential evolution (DE), deep partition tree Bayesian optimization (DPTBO), and two-stage Bayesian optimization (TSBO). The characteristics and application fields of these methods are compared and discussed, and future development prospects are outlined.
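To make the survey’s central pattern concrete, the hypothetical sketch below pairs a cheap surrogate model (standing in for a trained NN that replaces slow SI simulation) with a toy genetic algorithm searching the design space. The surrogate function, parameter bounds, and GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_eye_height(x):
    """Placeholder for a trained NN mapping design parameters (e.g., trace
    width, spacing, via stub length) to a predicted eye height."""
    return -np.sum((x - np.array([0.2, 0.5, 0.1])) ** 2, axis=-1)

pop = rng.uniform(0, 1, size=(20, 3))               # 20 candidate designs
for gen in range(50):
    fit = surrogate_eye_height(pop)
    parents = pop[np.argsort(fit)[-10:]]            # keep the fittest half
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.05, (10, 3))
    pop = np.vstack([parents, np.clip(children, 0, 1)])
print("best design:", pop[np.argmax(surrogate_eye_height(pop))])
```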

22 pages, 2945 KiB  
Article
Factors Predicting Surgical Effort Using Explainable Artificial Intelligence in Advanced Stage Epithelial Ovarian Cancer
by Alexandros Laios, Evangelos Kalampokis, Racheal Johnson, Sarika Munot, Amudha Thangavelu, Richard Hutson, Tim Broadhead, Georgios Theophilou, Chris Leach, David Nugent and Diederick De Jong
Cancers 2022, 14(14), 3447; https://doi.org/10.3390/cancers14143447 - 15 Jul 2022
Cited by 26 | Viewed by 5234
Abstract
(1) Background: Surgical cytoreduction for epithelial ovarian cancer (EOC) is a complex procedure. Among the performance skills needed to achieve surgical precision, intra-operative surgical decision-making remains a core feature. The use of eXplainable Artificial Intelligence (XAI) could potentially interpret the influence of human factors on the surgical effort for the cytoreductive outcome in question. (2) Methods: This retrospective cohort study evaluated 560 consecutive EOC patients who underwent cytoreductive surgery between January 2014 and December 2019 in a single public institution. The eXtreme Gradient Boosting (XGBoost) and Deep Neural Network (DNN) algorithms were employed to develop the predictive model, including patient- and operation-specific features as well as novel features reflecting human factors in surgical heuristics. The precision, recall, F1 score, and area under the curve (AUC) were compared between the two training algorithms. The SHapley Additive exPlanations (SHAP) framework was used to provide global and local explainability for the predictive model. (3) Results: A surgical complexity score (SCS) cut-off value of five was calculated using a Receiver Operating Characteristic (ROC) curve, above which incomplete cytoreduction was more likely (AUC = 0.644; 95% confidence interval [CI] = 0.598–0.69; sensitivity and specificity 34.1% and 86.5%, respectively; p = 0.000). XGBoost outperformed the DNN in predicting the above-threshold surgical effort outcome (AUC = 0.77; 95% CI = 0.69–0.85; p < 0.05 vs. AUC = 0.739; 95% CI = 0.655–0.823; p < 0.95). We identified “turning points” that demonstrated a clear preference for effort above the given cut-off level: consultant surgeons with <12 years of experience, aged <53 years, who, when attempting primary cytoreductive surgery, recorded the presence of ascites, an Intraoperative Mapping of Ovarian Cancer score >4, and a Peritoneal Carcinomatosis Index >7, in a surgical environment with optimized infrastructural support. (4) Conclusions: Using XAI, we explain how intra-operative decisions may weigh human factors alongside factual knowledge during EOC cytoreduction in order to maximize the magnitude of the selected trade-off in effort. XAI techniques are critical for a better understanding of Artificial Intelligence frameworks and for enhancing their incorporation in medical applications.
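A minimal sketch of the XGBoost-plus-SHAP pattern described in the Methods, run on synthetic stand-in data; the features, cohort, and model settings here are illustrative, not the study’s variables.

```python
import numpy as np
import shap
import xgboost
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(560, 4))                        # stand-in cohort features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 560) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)

# Global explainability: mean |SHAP value| ranks each feature's influence;
# the per-row SHAP values give the local explanations.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(np.abs(shap_values).mean(axis=0))
```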

21 pages, 3790 KiB  
Article
EGFAFS: A Novel Feature Selection Algorithm Based on Explosion Gravitation Field Algorithm
by Lan Huang, Xuemei Hu, Yan Wang and Yuan Fu
Entropy 2022, 24(7), 873; https://doi.org/10.3390/e24070873 - 25 Jun 2022
Cited by 2 | Viewed by 1628
Abstract
Feature selection (FS) is a vital step in data mining and machine learning, especially for analyzing data in high-dimensional feature spaces. Gene expression data usually consist of a few samples characterized by a high-dimensional feature space, so they are not well suited to simple methods such as filter-based approaches. In this study, we propose a novel feature selection algorithm based on the Explosion Gravitation Field Algorithm, called EGFAFS. To reduce the feature space to an acceptable dimensionality, we construct a recommended feature pool using a series of Random Forests based on the Gini index. By concentrating the search on the features in this recommended pool, the best subset can be found more efficiently. To verify the performance of EGFAFS, we tested it on eight gene expression datasets against four heuristic-based FS methods (GA, PSO, SA, and DE) and four other FS methods (Boruta, HSICLasso, DNN-FS, and EGSG). The results show that EGFAFS outperforms the other eight FS algorithms on gene expression data in terms of the evaluation metrics. The genes selected by EGFAFS play an essential role in the differential co-expression network and in several biological functions, further demonstrating its success in solving FS problems on gene expression data.
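The “recommended feature pool” idea can be sketched as follows: accumulate the Gini-based importances from a series of Random Forests, then restrict the downstream search to the top-ranked features. The pool size, forest settings, and synthetic data are assumptions; the EGFA search itself is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))                 # few samples, many genes
y = rng.integers(0, 2, 60)

importance = np.zeros(X.shape[1])
for seed in range(5):                           # a series of Random Forests
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    importance += rf.feature_importances_       # Gini-index-based importances

pool = np.argsort(importance)[-100:]            # recommended feature pool
print("pool size:", pool.size)                  # downstream FS searches only these
```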

19 pages, 1202 KiB  
Article
A Machine Learning Approach for Global Steering Control Moment Gyroscope Clusters
by Charalampos Papakonstantinou, Ioannis Daramouskas, Vaios Lappas, Vassilis C. Moulianitis and Vassilis Kostopoulos
Aerospace 2022, 9(3), 164; https://doi.org/10.3390/aerospace9030164 - 17 Mar 2022
Cited by 13 | Viewed by 3384
Abstract
This paper addresses the problem of singularity avoidance for a four-Control Moment Gyroscope (CMG) pyramid cluster, as used for the attitude control of a satellite, using machine learning (ML) techniques. A dataset, generated using a heuristic algorithm, relates the initial gimbal configuration and the desired maneuver (inputs) to the number of null-space motions the gimbals have to execute (output). Two ML techniques, a Deep Neural Network (DNN) and a Random Forest Classifier (RFC), are utilized to predict the required null motion for trajectories that are not included in the training set. The principal advantage of this approach is the exploitation of global information gathered from the whole maneuver, in contrast to conventional steering laws that consider only local information near the current gimbal configuration and are prone to local extrema. The dataset generation and the ML predictions can be performed offline, so no further calculations are needed on board, making it possible to inspect how the system responds to any commanded maneuver before its execution. The RFC demonstrates better accuracy on the test data than the DNN, validating that the null motion can be correctly predicted even for maneuvers not included in the training data.
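A toy sketch of the offline pattern the abstract describes: learn a mapping from (initial gimbal configuration, commanded maneuver) to a discrete null-motion label and compare a small DNN against a Random Forest Classifier. The synthetic data and labeling rule are placeholders, not the paper’s heuristic-generated dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(2000, 7))      # 4 gimbal angles + 3-axis maneuver
y = (np.sin(X[:, :4]).sum(axis=1) > 0).astype(int)  # stand-in null-motion label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_tr, y_tr)
rfc = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print("DNN accuracy:", dnn.score(X_te, y_te))
print("RFC accuracy:", rfc.score(X_te, y_te))
```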

19 pages, 1644 KiB  
Article
Experimental Analysis of Hyperparameters for Deep Learning-Based Churn Prediction in the Banking Sector
by Edvaldo Domingos, Blessing Ojeme and Olawande Daramola
Computation 2021, 9(3), 34; https://doi.org/10.3390/computation9030034 - 16 Mar 2021
Cited by 47 | Viewed by 7338
Abstract
Until recently, traditional machine learning techniques (TMLTs) such as multilayer perceptrons (MLPs) and support vector machines (SVMs) were used successfully for churn prediction, but with significant effort expended on the configuration of the training parameters. The selection of the right training parameters for supervised learning is almost always determined experimentally in an ad hoc manner. Deep neural networks (DNNs) have shown significant predictive strength over TMLTs when used for churn prediction. However, the more complex architecture of DNNs and their capacity to process huge amounts of non-linear input data demand more time and effort to configure the training hyperparameters during churn modeling. This makes the process more challenging for inexperienced machine learning practitioners and researchers. So far, limited research has established the effects of different hyperparameters on the performance of DNNs in churn prediction, and there is a lack of empirically derived heuristic knowledge to guide hyperparameter selection when DNNs are used for churn modeling. This paper presents an experimental analysis of the effects of different hyperparameters when DNNs are used for churn prediction in the banking sector. The results from three experiments revealed that the DNN model performed better than the MLP when a rectifier function was used for activation in the hidden layers and a sigmoid function was used in the output layer. The performance of the DNN was better when the batch size was smaller than the size of the test set, and the RMSprop training algorithm achieved better accuracy than the stochastic gradient descent (SGD), Adam, AdaGrad, Adadelta, and AdaMax algorithms. The study provides heuristic knowledge that could guide researchers and practitioners in machine learning-based churn prediction from tabular data for customer relationship management in the banking sector when DNNs are used.
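A minimal Keras sketch of the configuration the experiments favored: rectifier (ReLU) hidden layers, a sigmoid output, the RMSprop optimizer, and a batch size smaller than the test set. The layer widths, feature count, and synthetic data are assumptions, not the study’s banking dataset.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10)).astype("float32")   # stand-in tabular features
y = (rng.random(1000) < 0.2).astype("float32")      # churn labels, ~20% positive

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),   # rectifier hidden layers
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"), # churn probability
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, batch_size=32, epochs=5, verbose=0)  # small batch, per the findings
```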
