Search Results (11)

Search Parameters:
Keywords = differentiable architecture search (DARTS)

18 pages, 2777 KiB  
Article
Chromosome Image Classification Based on Improved Differentiable Architecture Search
by Jianming Li, Changchang Zeng, Min Zhou, Zeyi Shang and Jiangang Zhu
Electronics 2025, 14(9), 1820; https://doi.org/10.3390/electronics14091820 - 29 Apr 2025
Viewed by 425
Abstract
Chromosomes are essential carriers of human genetic material, and karyotype diagnosis plays a crucial role in prenatal diagnostics, genetic disease identification, and medical research. Physicians rely heavily on karyotype images to diagnose potential abnormalities in chromosome number and structure, but the process is tedious and challenging. To improve diagnostic efficiency and accuracy, artificial intelligence (AI) researchers have developed convolutional neural networks (CNNs) for chromosome image classification. Despite this progress, the gap between cytogeneticists and AI experts keeps the workflow time-consuming. In this study, we propose a framework based on an improved Differentiable Architecture Search (DARTS) to automatically design convolutional architectures for the classification task. The improvements to DARTS are implemented in two stages. First, a procedural approach comprehensively analyzes the evolution of the architectural parameters; based on this analysis, the DARTS search space is refined into an optimized search space. Second, an entropy-based regularization term is incorporated into the supernetwork's objective function to guide the algorithm toward a more effective architecture. Extensive experiments on CIFAR-10, ImageNet, and the Copenhagen datasets compare the searched architecture with related works. The network composed of the searched architecture achieved accuracies of 97.27 ± 0.05%, 75.40%, and 98.64% on the three datasets, respectively. These results demonstrate that the searched architecture performs well and that the proposed framework for designing chromosome classification networks is effective.
(This article belongs to the Section Artificial Intelligence)
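The abstract names an entropy-based regularizer on the supernetwork objective but does not spell it out. Below is a minimal sketch of how such a term is commonly attached to DARTS architecture parameters in PyTorch; the function names, the penalty sign, and the weight `lam` are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def entropy_regularizer(alphas: torch.Tensor) -> torch.Tensor:
    """Mean entropy of the per-edge operation distributions.

    alphas: [num_edges, num_ops] architecture parameters.
    Penalizing this term pushes each edge toward a confident,
    near one-hot choice of operation.
    """
    log_probs = F.log_softmax(alphas, dim=-1)
    probs = log_probs.exp()
    return -(probs * log_probs).sum(dim=-1).mean()

# Hypothetical search step: lam > 0 rewards decisive architectures.
# loss = criterion(supernet(x), y) + lam * entropy_regularizer(alphas)
```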

20 pages, 458 KiB  
Article
Neural Architecture Search via Trainless Pruning Algorithm: A Bayesian Evaluation of a Network with Multiple Indicators
by Yiqi Lin, Yuki Endo, Jinho Lee and Shunsuke Kamijo
Electronics 2024, 13(22), 4547; https://doi.org/10.3390/electronics13224547 - 19 Nov 2024
Viewed by 1390
Abstract
Neural Architecture Search (NAS) has found applications in various areas of computer vision, including image recognition and object detection. An increasing number of algorithms, such as ENAS (Efficient Neural Architecture Search via Parameter Sharing) and DARTS (Differentiable Architecture Search), have been applied to NAS. Nevertheless, current training-free NAS methods remain unreliable and inefficient. This paper introduces a training-free pruning-based algorithm called TTNAS (True-Skill Training-Free Neural Architecture Search), which uses a Bayesian method (the TrueSkill algorithm) to combine multiple indicators for evaluating neural networks across different datasets. The algorithm demonstrates highly competitive accuracy and efficiency compared to state-of-the-art approaches on various datasets. Specifically, it achieves 93.90% accuracy on CIFAR-10, 71.91% accuracy on CIFAR-100, and 44.96% accuracy on ImageNet 16-120, using 1466 GPU seconds in NAS-Bench-201. Additionally, the algorithm adapts well to other datasets and tasks.
(This article belongs to the Special Issue Computational Imaging and Its Application)
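The listing does not give TTNAS's exact update rule. The sketch below shows only the general pattern of fusing several trainless indicators with a TrueSkill-style Bayesian rating; the `trueskill` package, the pairwise-match scheme, and ranking by the posterior mean are all assumptions:

```python
import trueskill  # pip install trueskill

def rank_architectures(archs, indicators):
    """Fuse several trainless indicators into one Bayesian rating.

    archs: list of hashable architecture identifiers.
    indicators: list of functions scoring an architecture without
    training (e.g., gradient- or pruning-based proxies).
    """
    ratings = {a: trueskill.Rating() for a in archs}
    for indicator in indicators:
        scores = {a: indicator(a) for a in archs}
        # Each pairwise comparison under each indicator counts as
        # one match; the winner's Gaussian skill is updated.
        for i, a in enumerate(archs):
            for b in archs[i + 1:]:
                if scores[a] >= scores[b]:
                    ratings[a], ratings[b] = trueskill.rate_1vs1(ratings[a], ratings[b])
                else:
                    ratings[b], ratings[a] = trueskill.rate_1vs1(ratings[b], ratings[a])
    return sorted(archs, key=lambda a: ratings[a].mu, reverse=True)
```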

28 pages, 24617 KiB  
Article
Noise-Disruption-Inspired Neural Architecture Search with Spatial–Spectral Attention for Hyperspectral Image Classification
by Aili Wang, Kang Zhang, Haibin Wu, Shiyu Dai, Yuji Iwahori and Xiaoyu Yu
Remote Sens. 2024, 16(17), 3123; https://doi.org/10.3390/rs16173123 - 24 Aug 2024
Cited by 2 | Viewed by 1526
Abstract
In view of the complexity and diversity of hyperspectral images (HSIs), classification has been a major challenge in remote sensing image processing. Hyperspectral image classification (HSIC) methods based on neural architecture search (NAS) are an attractive frontier: they not only automatically search for neural network architectures best suited to the characteristics of HSI data, but also avoid the limitations of manually designed networks when dealing with new classification tasks. However, existing NAS-based HSIC methods have two limitations: (1) the search space lacks efficient convolution operators that can fully extract discriminative spatial–spectral features, and (2) NAS based on traditional differentiable architecture search (DARTS) suffers from performance collapse caused by unfair competition. To overcome these limitations, we propose a neural architecture search method with receptive-field spatial–spectral attention (RFSS-NAS), designed to automatically search for the optimal architecture for HSIC. Considering the model's core need to extract more discriminative spatial–spectral features, we designed a novel and efficient attention search space whose core component is the receptive-field spatial–spectral attention convolution operator, which focuses precisely on the critical information in the image and thus greatly enhances the quality of feature extraction. To resolve the unfair-competition issue in the traditional DARTS strategy, we introduce the Noisy-DARTS strategy, which keeps the search fair and efficient and avoids the risk of performance collapse. In addition, to further improve the model's robustness and its ability to recognize difficult-to-classify samples, we propose a fusion loss function that combines the advantages of the label-smoothing loss and the polynomial expansion perspective loss: it smooths the label distribution, reduces the risk of overfitting, and handles difficult samples effectively, improving overall classification accuracy. Experiments on three public datasets fully validate the superior performance of RFSS-NAS.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
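Noisy-DARTS, which the abstract adopts, counters the unfair advantage of skip connections by injecting noise into them during the search. A minimal sketch follows; the zero-mean Gaussian form and the scale `sigma` are assumptions:

```python
import torch
import torch.nn as nn

class NoisySkip(nn.Module):
    """Skip connection carrying zero-mean Gaussian noise during the
    search, so it cannot win edges merely by being the easiest
    gradient path; the noise is dropped at evaluation time."""

    def __init__(self, sigma: float = 0.2):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x
```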

15 pages, 570 KiB  
Article
LMD-DARTS: Low-Memory, Densely Connected, Differentiable Architecture Search
by Zhongnian Li, Yixin Xu, Peng Ying, Hu Chen, Renke Sun and Xinzheng Xu
Electronics 2024, 13(14), 2743; https://doi.org/10.3390/electronics13142743 - 12 Jul 2024
Viewed by 1371
Abstract
Neural network architecture search (NAS) technology is pivotal for designing lightweight convolutional neural networks (CNNs), enabling the automatic discovery of network structures without extensive prior knowledge. However, NAS is resource-intensive, consuming significant computational power and time to evaluate numerous candidate architectures. To address the high memory usage and slow search speed of traditional NAS algorithms, we propose the Low-Memory, Densely Connected, Differentiable Architecture Search (LMD-DARTS) algorithm. To speed up the updating of candidate-operation weights during the search, LMD-DARTS introduces a continuous strategy based on weight redistribution. Furthermore, to mitigate the influence of low-weight operations on classification results and reduce the number of searches, LMD-DARTS employs a dynamic sampler that prunes underperforming operations during the search, lowering memory consumption and simplifying individual searches. Additionally, to sparsify the dense connection matrix and mitigate redundant connections while maintaining network performance, we introduce an adaptive downsampling search algorithm. Our experimental results show that LMD-DARTS achieves a 20% reduction in search time, along with a significant decrease in memory utilization within the NAS process. Notably, the lightweight CNNs derived through this algorithm exhibit strong classification accuracy, underscoring their effectiveness and efficiency for practical applications.
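A minimal sketch of the dynamic-sampler idea described above: candidate operations whose mixture weight stays low are switched off so they are no longer evaluated. The threshold and the masking scheme are assumptions:

```python
import torch
import torch.nn.functional as F

def prune_weak_ops(alphas: torch.Tensor, active: torch.Tensor,
                   threshold: float = 0.05) -> torch.Tensor:
    """Deactivate candidate operations whose current softmax weight
    falls below `threshold`.

    alphas: [num_edges, num_ops] architecture parameters.
    active: boolean mask of the same shape (initially all True);
    pruned operations stay off, which is what saves memory and
    shortens each search step.
    """
    probs = F.softmax(alphas.masked_fill(~active, float("-inf")), dim=-1)
    active = active & (probs >= threshold)
    # Keep at least one operation per edge: re-enable the argmax.
    empty = ~active.any(dim=-1)
    active[empty, probs[empty].argmax(dim=-1)] = True
    return active
```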

14 pages, 1403 KiB  
Article
Remote Sensing Image Classification Based on Neural Networks Designed Using an Efficient Neural Architecture Search Methodology
by Lan Song, Lixin Ding, Mengjia Yin, Wei Ding, Zhigao Zeng and Chunxia Xiao
Mathematics 2024, 12(10), 1563; https://doi.org/10.3390/math12101563 - 17 May 2024
Cited by 2 | Viewed by 1412
Abstract
Successful applications of machine learning to the analysis of remote sensing images remain limited by the difficulty of designing neural networks manually. While neural architecture search offers the potential to discover new and more effective network architectures, existing neural architecture search algorithms are computationally intensive, require large amounts of data and computational resources, and are therefore challenging to apply to developing optimal neural network architectures for remote sensing image classification. Our proposed method uses a differentiable neural architecture search approach for remote sensing image classification. We utilize a binary gate strategy for partial channel connections to reduce the number of network parameters, creating a sparse connection pattern that lowers memory consumption and NAS computational cost. Experimental results indicate that our method achieves a 15.1% increase in validation accuracy during the search phase compared to DDSAS, although accuracy is slightly lower (by 4.5%) than DARTS. However, we reduced the search time by 88% and the network parameter size by 84% compared to DARTS. In the architecture evaluation phase, our method demonstrates a 2.79% improvement in validation accuracy over a manually configured CNN network.
(This article belongs to the Special Issue Deep Learning and Adaptive Control, 2nd Edition)
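The binary-gated partial channel connection mentioned above is in the spirit of PC-DARTS: the candidate operations see only a fraction of the channels, and the rest bypass the mixture. A minimal sketch; the proportion 1/k and the omission of channel shuffling are assumptions:

```python
import torch
import torch.nn as nn

class PartialChannelMixedOp(nn.Module):
    """Apply the candidate operations to 1/k of the channels; the
    remaining channels bypass the mixture unchanged, shrinking the
    memory footprint of the search roughly k-fold."""

    def __init__(self, ops, k: int = 4):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.k = k

    def forward(self, x: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        c = x.size(1) // self.k
        xa, xb = x[:, :c], x[:, c:]            # gated / bypassed split
        mixed = sum(w * op(xa) for w, op in zip(weights, self.ops))
        return torch.cat([mixed, xb], dim=1)   # PC-DARTS also shuffles channels here
```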

25 pages, 4779 KiB  
Article
NDARTS: A Differentiable Architecture Search Based on the Neumann Series
by Xiaoyu Han, Chenyu Li, Zifan Wang and Guohua Liu
Algorithms 2023, 16(12), 536; https://doi.org/10.3390/a16120536 - 25 Nov 2023
Viewed by 2123
Abstract
Neural architecture search (NAS) has shown great potential for discovering powerful and flexible network models and has become an important branch of automated machine learning (AutoML). Although search methods based on reinforcement learning and evolutionary algorithms can find high-performance architectures, they typically require hundreds of GPU days. Unlike these methods, which search a discrete space, differentiable neural architecture search (DARTS) continuously relaxes the search space, allowing optimization with gradient-based methods. Building on DARTS, we propose NDARTS in this article. The new algorithm uses the Implicit Function Theorem and the Neumann series to approximate the hyper-gradient, which yields better results than DARTS. In simulation experiments, an ablation study examined the influence of different parameters on NDARTS and determined the optimal weights, after which NDARTS was evaluated in the DARTS search space and the NAS-Bench-201 search space. Compared with other NAS algorithms, NDARTS achieved excellent results on the CIFAR-10, CIFAR-100, and ImageNet datasets, showing that it is an effective neural architecture search algorithm.
(This article belongs to the Topic Advances in Artificial Neural Networks)
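The listing does not reproduce NDARTS's derivation. In the standard Implicit Function Theorem route the abstract names, the hyper-gradient involves an inverse Hessian that the Neumann series approximates; a hedged sketch of that form, with $\eta$ a step size and the truncation depth $K$ the key parameter:

```latex
\frac{d\mathcal{L}_{\mathrm{val}}}{d\alpha}
  = \nabla_{\alpha}\mathcal{L}_{\mathrm{val}}
  - \nabla^{2}_{\alpha,w}\mathcal{L}_{\mathrm{train}}
    \left[\nabla^{2}_{w}\mathcal{L}_{\mathrm{train}}\right]^{-1}
    \nabla_{w}\mathcal{L}_{\mathrm{val}},
\qquad
\left[\nabla^{2}_{w}\mathcal{L}_{\mathrm{train}}\right]^{-1}
  \approx \eta \sum_{k=0}^{K}
    \left(I - \eta\,\nabla^{2}_{w}\mathcal{L}_{\mathrm{train}}\right)^{k}.
```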

24 pages, 688 KiB  
Article
TA-DARTS: Temperature Annealing of Discrete Operator Distribution for Effective Differential Architecture Search
by Jiyong Shin, Kyongseok Park and Dae-Ki Kang
Appl. Sci. 2023, 13(18), 10138; https://doi.org/10.3390/app131810138 - 8 Sep 2023
Cited by 2 | Viewed by 1342
Abstract
In machine learning, hyperparameter optimization and neural architecture design are laborious and time-intensive endeavors. To address these challenges, considerable research effort has been directed toward Automated Machine Learning (AutoML), with a focus on reducing these inherent inefficiencies. A pivotal facet of this pursuit is Neural Architecture Search (NAS), a domain dedicated to the automated formulation of neural network architectures. Given the pronounced impact of network architecture on performance, NAS techniques strive to identify architectures that yield optimal performance. A prominent algorithm in this area is Differentiable Architecture Search (DARTS), which transforms discrete search spaces into continuous counterparts amenable to gradient-based methods, thereby surpassing prior NAS methodologies. Notwithstanding DARTS's achievements, a discrepancy between discrete and continuously encoded architectures persists. To ameliorate this disparity, we propose TA-DARTS, a temperature annealing technique applied to the Softmax function used to encode the continuous search space. By leveraging temperature values, architectural weights are adjusted to alleviate biases in the search process or to align resulting architectures more closely with discrete values. Our findings show advancements over the original DARTS methodology, evidenced by a 0.07%p gain in validation accuracy and a 0.16%p gain in test accuracy on the CIFAR-100 dataset. Through systematic experimentation on benchmark datasets, we establish the superiority of TA-DARTS over the original mixed operator, underscoring its efficacy in automating neural architecture design.
(This article belongs to the Special Issue Recent Advances in Automated Machine Learning)
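The temperature-annealed Softmax at the heart of TA-DARTS can be sketched in a few lines; the annealing schedule and its constants are assumptions:

```python
import torch
import torch.nn.functional as F

def arch_weights(alphas: torch.Tensor, temperature: float) -> torch.Tensor:
    """Temperature-scaled Softmax over architecture parameters.

    High temperature -> near-uniform mixture (unbiased exploration);
    temperature -> 0 approaches a one-hot vector, shrinking the gap
    between the continuous encoding and the final discrete cell.
    """
    return F.softmax(alphas / temperature, dim=-1)

# Hypothetical schedule: geometric decay over search epochs.
# T = T0 * (decay ** epoch), e.g. T0 = 5.0, decay = 0.9
```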

22 pages, 24276 KiB  
Article
Multi-Scale Spatial–Spectral Attention-Based Neural Architecture Search for Hyperspectral Image Classification
by Yingluo Song, Aili Wang, Yan Zhao, Haibin Wu and Yuji Iwahori
Electronics 2023, 12(17), 3641; https://doi.org/10.3390/electronics12173641 - 29 Aug 2023
Cited by 4 | Viewed by 1876
Abstract
Convolutional neural networks (CNNs) are commonly employed for hyperspectral image (HSI) classification. However, CNN architectures typically require manual design and fine-tuning, which can be quite laborious. Fortunately, recent advances in Neural Architecture Search (NAS) enable the automatic design of networks, and these techniques have pushed the accuracy of HSI classification to new levels. This article proposes a Multi-Scale Spatial–Spectral Attention-based NAS (MS3ANAS) framework that automatically designs neural network structures for HSI classifiers. First, the framework constructs a search space extended with a multi-scale attention mechanism, which uses multi-scale filters to reduce parameters while maintaining a large receptive field, and enhanced multi-scale spectral–spatial feature extraction to increase the network's sensitivity to hyperspectral information. Next, we combine the slow–fast learning paradigm for architecture updates to optimize and iteratively update the architecture vector, effectively improving the model's generalization ability. Finally, we introduce the Lion optimizer, which tracks only momentum and uses sign operations to calculate updates, reducing memory overhead and training time. The proposed NAS method demonstrates impressive classification performance and effectively improves accuracy across three HSI datasets (University of Pavia, Xuzhou, and WHU-Hi-Hanchuan).
(This article belongs to the Special Issue Deep Learning in Image Processing and Pattern Recognition)
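The Lion update the abstract refers to keeps only momentum as state and takes the sign of an interpolation of momentum and gradient. A minimal sketch following the published update rule, with hyperparameter values as assumptions:

```python
import torch

@torch.no_grad()
def lion_step(param: torch.Tensor, grad: torch.Tensor, m: torch.Tensor,
              lr: float = 1e-4, beta1: float = 0.9, beta2: float = 0.99,
              wd: float = 0.0) -> None:
    """One Lion update on `param`, modifying `param` and the
    momentum buffer `m` in place. The sign operation makes the
    update direction cheap to compute and store."""
    update = (beta1 * m + (1 - beta1) * grad).sign()
    param.mul_(1 - lr * wd).sub_(update, alpha=lr)  # decoupled weight decay
    m.mul_(beta2).add_(grad, alpha=1 - beta2)       # momentum update
```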

22 pages, 2988 KiB  
Article
Efficient Object Detection in SAR Images Based on Computation-Aware Neural Architecture Search
by Chuanyou Li, Yifan Li, Huanyun Hu, Jiangwei Shang, Kun Zhang, Lei Qian and Kexiang Wang
Appl. Sci. 2022, 12(21), 10978; https://doi.org/10.3390/app122110978 - 29 Oct 2022
Cited by 4 | Viewed by 3508
Abstract
Remote sensing techniques are becoming more sophisticated as radar imaging matures. Synthetic aperture radar (SAR) can now provide high-resolution images for day-and-night earth observation, and detecting objects in SAR images plays an increasingly significant role in a range of applications. In this paper, we address a detection problem at the edge in scenarios with ship-like objects, where detection accuracy and efficiency must be considered together. The key to ship detection lies in feature extraction. To extract features efficiently, many existing studies have proposed lightweight neural networks obtained by pruning well-known models from the computer vision field. We found that although different baseline models have been tailored, a large amount of computation is still required. To achieve a lighter neural network-based ship detector, we propose Darts_Tiny, a novel differentiable neural architecture search model, to design dedicated convolutional neural networks automatically. Darts_Tiny is customized from Darts: it prunes superfluous operations to simplify the search model and adopts a computation-aware search process to enhance detection efficiency. The computation-aware search process not only integrates a scheme that deliberately cuts down the number of channels but also adopts a synthetic loss function combining the cross-entropy loss with the amount of computation. Comprehensive experiments evaluate Darts_Tiny on two open datasets, HRSID and SSDD. Experimental results demonstrate that our neural networks are at least an order of magnitude lighter in model complexity than SOTA lightweight models. A representative model obtained from Darts_Tiny (158 KB model volume, 28 K parameters, and 0.58 G computations) yields a detection speed of more than 750 frames per second (800×800 SAR images) on a platform equipped with an Nvidia Tesla V100 and an Intel Xeon Platinum 8260. The lightweight neural networks generated by Darts_Tiny remain competitive in detection accuracy: the F1 score reaches more than 83 and 90 on HRSID and SSDD, respectively.
(This article belongs to the Special Issue Advanced Pattern Recognition & Computer Vision)
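The synthetic loss in Darts_Tiny combines cross-entropy with the amount of computation. Below is a minimal sketch of one common way to make that differentiable, weighting each candidate operation's FLOP cost by its mixture weight; the trade-off constant `lam` and the exact form are assumptions:

```python
import torch
import torch.nn.functional as F

def computation_aware_loss(logits, targets, alphas, op_flops, lam=1e-9):
    """Cross-entropy plus the expected computation of the mixed cell.

    alphas:   [num_edges, num_ops] architecture parameters.
    op_flops: [num_ops] tensor of per-operation FLOP costs.
    """
    ce = F.cross_entropy(logits, targets)
    expected_flops = (F.softmax(alphas, dim=-1) * op_flops).sum()
    return ce + lam * expected_flops
```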

15 pages, 3787 KiB  
Article
FastDARTSDet: Fast Differentiable Architecture Joint Search on Backbone and FPN for Object Detection
by Chunxian Wang, Xiaoxing Wang, Yiwen Wang, Shengchao Hu, Hongyang Chen, Xuehai Gu, Junchi Yan and Tao He
Appl. Sci. 2022, 12(20), 10530; https://doi.org/10.3390/app122010530 - 19 Oct 2022
Cited by 6 | Viewed by 2236
Abstract
Neural architecture search (NAS) is a popular branch of automated machine learning (AutoML) that aims to search for efficient network structures. Many prior works have explored a wide range of search algorithms for classification tasks and have achieved better performance than manually designed network architectures. However, few works have explored NAS for object detection, owing to the difficulty of training convolutional neural networks from scratch. In this paper, we propose a framework, named FastDARTSDet, to search directly on a large-scale object detection dataset (MS-COCO). Specifically, we apply the differentiable architecture search method (DARTS) to jointly search backbone and feature pyramid network (FPN) architectures for the object detection task. Extensive experimental results on MS-COCO show the efficiency and efficacy of our method. Specifically, our method achieves 40.0% mean average precision (mAP) on the test set, outperforming many recent NAS methods.
(This article belongs to the Special Issue Deep Learning in Object Detection and Tracking)
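Joint backbone-and-FPN search, as described above, amounts to relaxing two groups of architecture parameters and optimizing them with a single architecture optimizer. A hedged configuration sketch in which every shape and hyperparameter is an assumption:

```python
import torch

# Two groups of relaxed architecture parameters: one for backbone
# cells, one for FPN cells ([edges x candidate ops]; shapes assumed).
alpha_backbone = torch.zeros(14, 8, requires_grad=True)
alpha_fpn = torch.zeros(6, 4, requires_grad=True)

# One optimizer updates both groups from the detection loss on a
# held-out batch, so the two sub-searches stay coupled.
arch_optimizer = torch.optim.Adam([alpha_backbone, alpha_fpn],
                                  lr=3e-4, betas=(0.5, 0.999))
```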

20 pages, 6069 KiB  
Article
Tab2vox: CNN-Based Multivariate Multilevel Demand Forecasting Framework by Tabular-To-Voxel Image Conversion
by Euna Lee, Myungwoo Nam and Hongchul Lee
Sustainability 2022, 14(18), 11745; https://doi.org/10.3390/su141811745 - 19 Sep 2022
Cited by 4 | Viewed by 2402
Abstract
Since demand is influenced by a wide variety of causes, it is necessary to decompose the explanatory variables into different levels, extract their relationships effectively, and reflect them in the forecast. This contextual information can be especially useful in demand forecasting with large demand volatility or intermittent demand patterns. Convolutional neural networks (CNNs) have been used successfully in many fields where the important information in data is represented by images: they accept samples as images and use adjacent voxel sets to integrate multi-dimensional information and learn important features. On the other hand, although demand-forecasting models have improved, their input data remain limited to tabular form, which is not suitable for CNN modeling. In this study, we propose the Tab2vox neural architecture search (NAS) model, a method to convert a high-dimensional tabular sample into a well-formed 3D voxel image for use in a 3D CNN. For each image representation, the 3D CNN forecasting model produced by the Tab2vox framework showed superior performance compared to existing time series and machine learning techniques using tabular data and to recent image transformation studies.
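The data path of Tab2vox can be illustrated with a fixed tabular-to-voxel reshape feeding a 3D convolution. In Tab2vox itself the feature-to-voxel layout is what the NAS learns, so the mapping below is only an assumption for illustration:

```python
import torch
import torch.nn as nn

def tabular_to_voxel(sample: torch.Tensor, shape=(4, 4, 4)) -> torch.Tensor:
    """Place a flat feature vector into a 3D voxel grid so a 3D CNN
    can exploit locality; assumes len(sample) == prod(shape)."""
    d, h, w = shape
    return sample.reshape(1, d, h, w)  # [channels, depth, height, width]

# A 3D convolution then consumes the voxel image:
conv = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
voxel = tabular_to_voxel(torch.randn(64))
features = conv(voxel.unsqueeze(0))    # batch dim -> [1, 8, 4, 4, 4]
```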
