Search Results (21)

Search Parameters:
Authors = Zhengtong Yin ORCID = 0000-0002-9818-9205

16 pages, 4090 KiB  
Article
Enhancing Chinese Dialogue Generation with Word–Phrase Fusion Embedding and Sparse SoftMax Optimization
by Shenrong Lv, Siyu Lu, Ruiyang Wang, Lirong Yin, Zhengtong Yin, Salman A. AlQahtani, Jiawei Tian and Wenfeng Zheng
Systems 2024, 12(12), 516; https://doi.org/10.3390/systems12120516 - 24 Nov 2024
Cited by 3 | Viewed by 878
Abstract
Chinese dialogue generation faces multiple challenges, such as semantic understanding, information matching, and response fluency. Generative dialogue systems for Chinese are difficult to construct because of the flexible word order, the strong impact of word substitution on semantics, and the complex implicit context. Existing methods still have limitations in addressing these issues. To tackle these problems, this paper proposes an improved Chinese dialogue generation model based on the transformer architecture. The model uses a multi-layer transformer decoder as the backbone and introduces two key techniques: incorporating pre-trained language model word embeddings and optimizing the sparse Softmax loss function. For word-embedding fusion, we concatenate the word vectors from the pre-trained model with character-based embeddings to enhance the semantic information of word representations. The sparse Softmax optimization effectively mitigates overfitting by introducing a sparsity regularization term. Experimental results on the Chinese short text conversation (STC) dataset demonstrate that the proposed model significantly outperforms the baseline models on automatic evaluation metrics such as BLEU and Distinct, with an average improvement of 3.5 percentage points. Human evaluations also validate the superiority of the model in generating fluent and relevant responses. This work provides new insights and solutions for building more intelligent and human-like Chinese dialogue systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
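
As a rough illustration of the two techniques named in this abstract, the sketch below (PyTorch) concatenates frozen pre-trained word vectors with character embeddings and applies a top-k "sparse Softmax"; the module names, dimensions, and the top-k formulation are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedEmbedding(nn.Module):
    """Concatenate frozen pre-trained word vectors with learned char embeddings."""
    def __init__(self, word_vectors: torch.Tensor, char_vocab: int, char_dim: int = 128):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=True)
        self.char_emb = nn.Embedding(char_vocab, char_dim)

    def forward(self, word_ids, char_ids):
        # word_ids, char_ids: (batch, seq_len), aligned per token position
        return torch.cat([self.word_emb(word_ids), self.char_emb(char_ids)], dim=-1)

def sparse_softmax(logits: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Top-k truncated softmax: probability mass only on the k largest logits."""
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    return torch.zeros_like(logits).scatter_(-1, topk_idx, F.softmax(topk_vals, dim=-1))
```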

15 pages, 5740 KiB  
Article
Stacked Noise Reduction Auto Encoder–OCEAN: A Novel Personalized Recommendation Model Enhanced
by Bixi Wang, Wenfeng Zheng, Ruiyang Wang, Siyu Lu, Lirong Yin, Lei Wang, Zhengtong Yin and Xinbing Chen
Systems 2024, 12(6), 188; https://doi.org/10.3390/systems12060188 - 26 May 2024
Cited by 29 | Viewed by 2073
Abstract
With the continuous development of information technology and the rapid increase in new users of social networking sites, recommendation technology is becoming more and more important. After research, it was found that the behavior of users on social networking sites has a great correlation with their personalities. The five characteristics of the OCEAN personality model can cover all aspects of a user’s personality. In this research, a micro-directional propagation model based on the OCEAN personality model and a Stacked Denoising Auto Encoder (SDAE) was built through the application of deep learning to a collaborative filtering technique. Firstly, the dimension of the user and item feature matrices was lowered using SDAE in order to extract deeper information. The user OCEAN personality model matrix and the reduced user feature matrix were integrated to create a new user feature matrix. Finally, the multiple linear regression approach was used to predict user-unrated goods and generate recommendations. This approach allowed us to leverage the relationships between various factors to deliver personalized recommendations. This experiment evaluated the RMSE and MAE of the model. The evaluation results show that the stacked denoising auto encoder collaborative filtering algorithm can improve the accuracy of recommendations, and the user’s OCEAN personality model improves the accuracy of the model to a certain extent. Full article
(This article belongs to the Special Issue Business Intelligence as a Tool for Business Competitiveness)
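
A minimal sketch of the recommendation pipeline described above, under the assumption that a single denoising-autoencoder layer stands in for the full SDAE; all array names and shapes are hypothetical.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LinearRegression

class DenoisingAE(nn.Module):
    """One denoising autoencoder layer; stacking several gives an SDAE."""
    def __init__(self, n_in: int, n_hidden: int, noise: float = 0.2):
        super().__init__()
        self.noise = noise
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        corrupted = x * (torch.rand_like(x) > self.noise).float()  # masking noise
        hidden = torch.relu(self.enc(corrupted))
        return self.dec(hidden), hidden

# user_features: (n_users, n_items) interaction matrix; ocean: (n_users, 5) trait
# scores (both hypothetical). After training the autoencoder to reconstruct
# user_features, its hidden codes replace the raw rows and are fused with OCEAN:
#   _, user_codes = dae(torch.tensor(user_features, dtype=torch.float32))
#   fused = np.hstack([user_codes.detach().numpy(), ocean])
#   LinearRegression().fit(fused[train_idx], ratings[train_idx])  # predict unrated items
```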

15 pages, 6550 KiB  
Article
FI-NPI: Exploring Optimal Control in Parallel Platform Systems
by Ruiyang Wang, Qiuxiang Gu, Siyu Lu, Jiawei Tian, Zhengtong Yin, Lirong Yin and Wenfeng Zheng
Electronics 2024, 13(7), 1168; https://doi.org/10.3390/electronics13071168 - 22 Mar 2024
Cited by 61 | Viewed by 1724
Abstract
Typically, the current and speed loops of the parallel platform's servo motors are closed with incremental PI regulation. This control method is robust, but the parameter tuning process is cumbersome, and it is difficult to reach the optimal control state. To further improve performance, this paper proposes a double-loop control structure based on a fuzzy integral and a neuron proportional-integral controller (FI-NPI). The structure exploits the advantages of the fuzzy controller and the integrator to improve speed closed-loop control, and, through a feedforward branch, uses the speed error as the teacher signal for the neuron's supervised learning, which improves current closed-loop control. Comparative simulation experiments verify that the FI-NPI controller has a faster dynamic response than the traditional PI controller. Finally, the FI-NPI controller is implemented in C on the servo drive's embedded (lower-level) computer, and a speed closed-loop test of the BLDC motor is carried out. The experimental results show that the FI-NPI double-loop controller outperforms the traditional double-PI controller in indicators such as convergence rate and RMSE, confirming that it is better suited to BLDC servo control. Full article
(This article belongs to the Special Issue State-of-the-Art Research in Systems and Control Engineering)
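
The following sketch illustrates a generic single-neuron PI regulator with supervised weight updates driven by a teacher signal, one plausible reading of the NPI part of the structure described above; the gains are hypothetical and the fuzzy-integral speed loop is omitted.

```python
class NeuronPI:
    """Generic single-neuron PI controller with Hebb-like supervised weight updates."""
    def __init__(self, gain: float = 1.0, lr_p: float = 0.01, lr_i: float = 0.01):
        self.K = gain
        self.w = [0.5, 0.5]          # weights for the P and I input components
        self.lr = [lr_p, lr_i]
        self.prev_err = 0.0
        self.u = 0.0                 # accumulated control output

    def step(self, err: float, teacher: float) -> float:
        x = [err - self.prev_err, err]            # incremental P and I terms
        for i in range(2):                        # update driven by the teacher signal
            self.w[i] += self.lr[i] * teacher * self.u * x[i]
        s = sum(abs(wi) for wi in self.w) or 1.0  # normalize weights
        self.u += self.K * sum((wi / s) * xi for wi, xi in zip(self.w, x))
        self.prev_err = err
        return self.u
```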

3 pages, 166 KiB  
Editorial
Geospatial AI in Earth Observation, Remote Sensing, and GIScience
by Shan Liu, Kenan Li, Xuan Liu and Zhengtong Yin
Appl. Sci. 2023, 13(22), 12203; https://doi.org/10.3390/app132212203 - 10 Nov 2023
Cited by 1 | Viewed by 3347
Abstract
Geospatial artificial intelligence (Geo-AI) methods have had a revolutionary impact on earth observation and remote sensing [...] Full article
(This article belongs to the Special Issue Geospatial AI in Earth Observation, Remote Sensing and GIScience)
12 pages, 10510 KiB  
Article
An Efficient Sinogram Domain Fully Convolutional Interpolation Network for Sparse-View Computed Tomography Reconstruction
by Fupei Guo, Bo Yang, Hao Feng, Wenfeng Zheng, Lirong Yin, Zhengtong Yin and Chao Liu
Appl. Sci. 2023, 13(20), 11264; https://doi.org/10.3390/app132011264 - 13 Oct 2023
Cited by 5 | Viewed by 2537
Abstract
Recently, deep learning techniques have been used for low-dose CT (LDCT) reconstruction to reduce the radiation risk for patients. Despite the improvement in performance, the network models used for LDCT reconstruction are becoming increasingly complex and computationally expensive under the mantra of “deeper is better”. However, in clinical settings, lightweight models with a low computational cost and short reconstruction times are more popular. For this reason, this paper proposes a computationally efficient CNN model with a simple structure for sparse-view LDCT reconstruction. Inspired by super-resolution networks for natural images, the proposed model interpolates projection data directly in the sinogram domain with a fully convolutional neural network that consists of only four convolution layers. The proposed model can be used directly for sparse-view CT reconstruction by concatenating the classic filtered back-projection (FBP) module, or it can be incorporated into existing dual-domain reconstruction frameworks as a generic sinogram domain module. The proposed model is validated on both the 2016 NIH-AAPM-Mayo Clinic LDCT Grand Challenge dataset and The Lung Image Database Consortium dataset. It is shown that despite the computational simplicity of the proposed model, its reconstruction performance at lower sparsity levels (1/2 and 1/4 radiation dose) is comparable to that of the sophisticated baseline models and shows some advantages at higher sparsity levels (1/8 and 1/15 radiation dose). Compared to existing sinogram domain baseline models, the proposed model is computationally efficient and easy to train on small training datasets, and is thus well suited for clinical real-time reconstruction tasks. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
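
As a concrete picture of what a four-convolution-layer sinogram interpolation network can look like, here is a minimal PyTorch sketch; the channel counts, upsampling factor, and layer arrangement are assumptions rather than the published architecture, and FBP is applied afterwards by a separate module.

```python
import torch.nn as nn

class SinogramInterpolator(nn.Module):
    """Upsamples a sparse-view sinogram along the view (angle) axis."""
    def __init__(self, upscale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=(upscale, 1), mode="bilinear", align_corners=False),
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, sparse_sino):   # (batch, 1, n_views_sparse, n_detectors)
        return self.net(sparse_sino)  # (batch, 1, n_views_sparse * upscale, n_detectors)
```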

18 pages, 14960 KiB  
Article
U-Net-LSTM: Time Series-Enhanced Lake Boundary Prediction Model
by Lirong Yin, Lei Wang, Tingqiao Li, Siyu Lu, Jiawei Tian, Zhengtong Yin, Xiaolu Li and Wenfeng Zheng
Land 2023, 12(10), 1859; https://doi.org/10.3390/land12101859 - 29 Sep 2023
Cited by 126 | Viewed by 7666
Abstract
Change detection of natural lake boundaries is one of the important tasks in remote sensing image interpretation. In an ordinary fully connected network, or CNN, the signal of neurons in each layer can only be propagated to the upper layer, and the processing of samples is independent at each moment. However, for time-series data with transferability, the learned change information needs to be recorded and utilized. To solve the above problems, we propose a lake boundary change prediction model combining U-Net and LSTM. The ensemble of LSTMs helps to improve the overall accuracy and robustness of the model by capturing the spatial and temporal nuances in the data, resulting in more precise predictions. This study selected Lake Urmia as the research area and used the annual panoramic remote sensing images from 1996 to 2014 (Lat: 37°00′ N to 38°15′ N, Lon: 46°10′ E to 44°50′ E) obtained by Google Earth Professional Edition 7.3 software as the research data set. This model uses the U-Net network to extract multi-level change features and analyze the change trend of lake boundaries. The LSTM module is introduced after U-Net to optimize the predictive model using historical data storage and forgetting as well as current input data. This method enables the model to automatically fit the trend of time series data and mine the deep information of lake boundary changes. Through experimental verification, the model’s prediction accuracy for lake boundary changes after training can reach 89.43%. Comparative experiments with the existing U-Net-STN model show that the U-Net-LSTM model used in this study has higher prediction accuracy and lower mean square error. Full article
(This article belongs to the Special Issue Ground Deformation Monitoring via Remote Sensing Time Series Data)
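
A minimal sketch of the temporal module described above: yearly features (assumed to come from a U-Net encoder, not shown) are fed to an LSTM whose final state is decoded into the next boundary mask; the dimensions and decoding head are hypothetical simplifications.

```python
import torch
import torch.nn as nn

class BoundaryLSTM(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden: int = 256, mask_pixels: int = 128 * 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, mask_pixels)

    def forward(self, yearly_feats):                  # (batch, n_years, feat_dim)
        out, _ = self.lstm(yearly_feats)              # LSTM stores/forgets across years
        return torch.sigmoid(self.head(out[:, -1]))   # predicted next-year mask (flattened)
```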

17 pages, 9834 KiB  
Article
YOLOV4_CSPBi: Enhanced Land Target Detection Model
by Lirong Yin, Lei Wang, Jianqiang Li, Siyu Lu, Jiawei Tian, Zhengtong Yin, Shan Liu and Wenfeng Zheng
Land 2023, 12(9), 1813; https://doi.org/10.3390/land12091813 - 21 Sep 2023
Cited by 113 | Viewed by 3124
Abstract
The identification of small land targets in remote sensing imagery has emerged as an important research objective. Despite significant advancements in deep learning-based object detection for visible remote sensing images, performance on small, densely distributed targets remains suboptimal. To address this issue, this study introduces an improved model named YOLOV4_CSPBi, based on the YOLOV4 architecture, specifically designed to enhance the detection of small land targets in remote sensing imagery. The proposed model enhances the traditional CSPNet by redefining its channel partitioning and integrating this enhanced structure into the neck of the YOLO network. Additionally, the conventional pyramid fusion structure used in the traditional BiFPN is removed. By integrating a weight-based bidirectional multi-scale mechanism for feature fusion, the model can effectively reason about objects of various sizes, with a particular focus on small land targets, without a significant increase in computational cost. Using the DOTA dataset as research data, this study quantifies the object detection performance of the proposed model. Compared with various baseline models, its AP for small-target detection improves by nearly 8% over YOLOV4. By combining these modifications, the proposed model demonstrates promising results in identifying small land targets in visible remote sensing images. Full article
(This article belongs to the Section Land – Observation and Monitoring)
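
The weight-based bidirectional fusion mentioned above is commonly realized as "fast normalized fusion" in BiFPN-style necks; the sketch below shows that mechanism in PyTorch, assuming the input feature maps have already been resized to a common resolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Learnable, non-negative, normalized weights for fusing multi-scale features."""
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):                 # list of (batch, C, H, W) tensors
        w = F.relu(self.w)                    # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)          # fast normalized fusion
        return sum(wi * fi for wi, fi in zip(w, feats))
```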

16 pages, 2170 KiB  
Article
Adapting Feature Selection Algorithms for the Classification of Chinese Texts
by Xuan Liu, Shuang Wang, Siyu Lu, Zhengtong Yin, Xiaolu Li, Lirong Yin, Jiawei Tian and Wenfeng Zheng
Systems 2023, 11(9), 483; https://doi.org/10.3390/systems11090483 - 20 Sep 2023
Cited by 129 | Viewed by 4479
Abstract
Text classification has been highlighted as the key process to organize online texts for better communication in the Digital Media Age. Text classification establishes classification rules based on text features, so the accuracy of feature selection is the basis of text classification. Facing fast-increasing Chinese electronic documents in the digital environment, scholars have accumulated quite a few algorithms for the feature selection for the automatic classification of Chinese texts in recent years. However, discussion about how to adapt existing feature selection algorithms for various types of Chinese texts is still inadequate. To address this, this study proposes three improved feature selection algorithms and tests their performance on different types of Chinese texts. These include an enhanced CHI square with mutual information (MI) algorithm, which simultaneously introduces word frequency and term adjustment (CHMI); a term frequency–CHI square (TF–CHI) algorithm, which enhances weight calculation; and a term frequency–inverse document frequency (TF–IDF) algorithm enhanced with the extreme gradient boosting (XGBoost) algorithm, which improves the algorithm’s ability of word filtering (TF–XGBoost). This study randomly chooses 3000 texts from six different categories of the Sogou news corpus to obtain the confusion matrix and evaluate the performance of the new algorithms with precision and the F1-score. Experimental comparisons are conducted on support vector machine (SVM) and naive Bayes (NB) classifiers. The experimental results demonstrate that the feature selection algorithms proposed in this paper improve performance across various news corpora, although the best feature selection schemes for each type of corpus are different. Further studies of the application of the improved feature selection methods in other languages and the improvement in classifiers are suggested. Full article
(This article belongs to the Special Issue Communication for the Digital Media Age)
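
A minimal sketch of one plausible reading of the CHI-square-with-mutual-information idea: score terms with both statistics and mix them; the mixing weight, tokenization, and variable names are assumptions, not the paper's exact CHMI formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2, mutual_info_classif

def select_terms(docs, labels, top_k=2000, alpha=0.5):
    # docs are assumed to be pre-segmented Chinese texts (space-separated tokens)
    vec = CountVectorizer(tokenizer=str.split)
    X = vec.fit_transform(docs)
    chi_scores, _ = chi2(X, labels)
    mi_scores = mutual_info_classif(X, labels, discrete_features=True)

    def norm(s):                      # rescale both scores to [0, 1] before mixing
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    combined = alpha * norm(chi_scores) + (1 - alpha) * norm(mi_scores)
    terms = np.array(vec.get_feature_names_out())
    return terms[np.argsort(combined)[::-1][:top_k]]
```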

19 pages, 8760 KiB  
Article
Three-Dimensional Point Cloud Reconstruction Method of Cardiac Soft Tissue Based on Binocular Endoscopic Images
by Jiawei Tian, Botao Ma, Siyu Lu, Bo Yang, Shan Liu and Zhengtong Yin
Electronics 2023, 12(18), 3799; https://doi.org/10.3390/electronics12183799 - 8 Sep 2023
Cited by 4 | Viewed by 1997
Abstract
Three-dimensional reconstruction technology based on binocular stereo vision is a key research area with potential clinical applications. Mainstream research has focused on sparse point reconstruction within the soft tissue domain, limiting the comprehensive 3D data acquisition required for effective surgical robot navigation. This study introduces a new paradigm to address existing challenges. An innovative stereoscopic endoscopic image correction algorithm is proposed, exploiting intrinsic insights into stereoscopic calibration parameters. The synergy between the stereoscopic endoscope parameters and the disparity map derived from the cardiac soft tissue images ultimately leads to the acquisition of precise 3D points. Guided by deliberate filtering and optimization methods, the triangulation process subsequently facilitates the reconstruction of the complex surface of the cardiac soft tissue. The experimental results strongly emphasize the accuracy of the calibration algorithm, confirming its utility in stereoscopic endoscopy. Furthermore, the image rectification algorithm exhibits a significant reduction in vertical parallax, which effectively enhances the stereo matching process. The resulting 3D reconstruction technique enables the targeted surface reconstruction of different regions of interest in the cardiac soft tissue landscape. This study demonstrates the potential of binocular stereo vision-based 3D reconstruction techniques for integration into clinical settings. The combination of joint calibration algorithms, image correction innovations, and precise tissue reconstruction enhances the promise of improved surgical precision and outcomes in the field of cardiac interventions. Full article
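
The generic calibrated-stereo pipeline behind this kind of reconstruction can be sketched with OpenCV as follows; the calibration inputs (K1, D1, K2, D2, R, T) are assumed to come from a prior joint calibration step, and the matcher settings are illustrative only, not the authors' algorithm.

```python
import cv2
import numpy as np

def reconstruct(left, right, K1, D1, K2, D2, R, T):
    size = (left.shape[1], left.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left, *map_l, cv2.INTER_LINEAR)     # rectified pair: rows aligned,
    right_r = cv2.remap(right, *map_r, cv2.INTER_LINEAR)   # vertical parallax removed
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgbm.compute(left_r, right_r).astype(np.float32) / 16.0  # SGBM is fixed-point
    points = cv2.reprojectImageTo3D(disp, Q)                        # (H, W, 3) point cloud
    return points[disp > 0]                                         # keep valid disparities
```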

23 pages, 8544 KiB  
Article
U-Net-STN: A Novel End-to-End Lake Boundary Prediction Model
by Lirong Yin, Lei Wang, Tingqiao Li, Siyu Lu, Zhengtong Yin, Xuan Liu, Xiaolu Li and Wenfeng Zheng
Land 2023, 12(8), 1602; https://doi.org/10.3390/land12081602 - 14 Aug 2023
Cited by 138 | Viewed by 3710
Abstract
Detecting changes in land cover is a critical task in remote sensing image interpretation, with particular significance placed on accurately determining the boundaries of lakes. Lake boundaries are closely tied to land resources, and any alterations can have substantial implications for the surrounding environment and ecosystem. This paper introduces an innovative end-to-end model that combines U-Net and spatial transformation network (STN) to predict changes in lake boundaries and investigate the evolution of the Lake Urmia boundary. The proposed approach involves pre-processing annual panoramic remote sensing images of Lake Urmia, obtained from 1996 to 2014 through Google Earth Pro Version 7.3 software, using image segmentation and grayscale filling techniques. The results of the experiments demonstrate the model’s ability to accurately forecast the evolution of lake boundaries in remote sensing images. Additionally, the model exhibits a high degree of adaptability, effectively learning and adjusting to changing patterns over time. The study also evaluates the influence of varying time series lengths on prediction accuracy and reveals that longer time series provide a larger number of samples, resulting in more precise predictions. The maximum achieved accuracy reaches 89.3%. The findings and methodologies presented in this study offer valuable insights into the utilization of deep learning techniques for investigating and managing lake boundary changes, thereby contributing to the effective management and conservation of this significant ecosystem. Full article
(This article belongs to the Special Issue Assessment of Land Use/Cover Change Using Geospatial Technology)
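
A minimal PyTorch sketch of the spatial-transformer ingredient: a small regressor predicts an affine transform from two consecutive lake masks and warps the later one to forecast the next; the regressor and tensor shapes are simplified assumptions, not the published U-Net-STN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskSTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(2, 8, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 6),
        )
        # initialize the affine head to the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, mask_prev, mask_curr):         # each (batch, 1, H, W)
        theta = self.loc(torch.cat([mask_prev, mask_curr], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, mask_curr.size(), align_corners=False)
        return F.grid_sample(mask_curr, grid, align_corners=False)  # predicted next mask
```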

16 pages, 1368 KiB  
Article
Developing Multi-Labelled Corpus of Twitter Short Texts: A Semi-Automatic Method
by Xuan Liu, Guohui Zhou, Minghui Kong, Zhengtong Yin, Xiaolu Li, Lirong Yin and Wenfeng Zheng
Systems 2023, 11(8), 390; https://doi.org/10.3390/systems11080390 - 1 Aug 2023
Cited by 139 | Viewed by 4382
Abstract
Facing fast-increasing electronic documents in the Digital Media Age, the need to extract textual features of online texts for better communication is growing. Sentiment classification might be the key method to catch emotions in online communication, and developing corpora with emotion annotations is the first step towards sentiment classification. However, labour-intensive and costly manual annotation has resulted in a lack of corpora for emotional words. Furthermore, single-label semantic corpora can hardly meet the requirements of modern analysis of users' complicated emotions, and tagging emotional words with multiple labels is even more difficult than usual. Improved methods for automatic emotion tagging with multiple emotion labels are urgently needed to construct new semantic corpora. Taking Twitter short texts as the case, this study proposes a new semi-automatic method to annotate Internet short texts with multiple labels and form a multi-labelled corpus for further algorithm training. Each sentence is tagged with both the emotional tendency and polarity, and each tweet, which generally contains several sentences, is tagged with the first two major emotional tendencies. The semi-automatic multi-labelled annotation proceeds by selecting the base corpus and emotional tags, preprocessing the data, annotating automatically through word matching and weight calculation, and correcting manually in cases where multiple emotional tendencies are found. Experiments on the published Sentiment140 Twitter corpus demonstrate the effectiveness of the proposed approach and show consistency between the results of semi-automatic annotation and manual annotation. By applying this method, this study summarises the annotation specification and constructs a multi-labelled emotion corpus with 6500 tweets for further algorithm training. Full article
(This article belongs to the Special Issue Communication for the Digital Media Age)
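
A minimal sketch of the automatic tagging step described above, assuming a hypothetical emotion lexicon of word-to-(emotion, weight) entries; it accumulates weights per emotion, keeps the top two tendencies, and flags ambiguous cases for manual correction.

```python
from collections import Counter

def tag_tweet(tokens, lexicon, top_n=2):
    """lexicon: dict mapping word -> list of (emotion, weight) pairs (hypothetical)."""
    scores = Counter()
    for tok in tokens:
        for emotion, weight in lexicon.get(tok.lower(), []):
            scores[emotion] += weight
    top = scores.most_common(top_n)
    # a tie between the leading tendencies is left for manual correction
    needs_review = len(top) == top_n and top[0][1] == top[-1][1]
    return [emotion for emotion, _ in top], needs_review
```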

15 pages, 2108 KiB  
Article
A Scenario-Generic Neural Machine Translation Data Augmentation Method
by Xiner Liu, Jianshu He, Mingzhe Liu, Zhengtong Yin, Lirong Yin and Wenfeng Zheng
Electronics 2023, 12(10), 2320; https://doi.org/10.3390/electronics12102320 - 21 May 2023
Cited by 73 | Viewed by 4042
Abstract
Amid the rapid advancement of neural machine translation, the challenge of data sparsity has been a major obstacle. To address this issue, this study proposes a general data augmentation technique for various scenarios. It examines the difficulty of obtaining diverse, high-quality parallel corpora in both rich- and low-resource settings and integrates the low-frequency word substitution method and the reverse translation approach for complementary benefits. Additionally, the method improves the pseudo-parallel corpus generated by reverse translation by substituting low-frequency words and includes a grammar error correction module to reduce grammatical errors in low-resource scenarios. The experimental data are partitioned into rich- and low-resource scenarios at a 10:1 ratio, and the experiments verify the necessity of grammatical error correction for pseudo-corpora in low-resource scenarios. Models and methods are chosen from the backbone network and related literature for comparative experiments. The experimental findings demonstrate that the proposed data augmentation approach is suitable for both rich- and low-resource scenarios and effectively enhances the training corpus to improve the performance of translation tasks. Full article
(This article belongs to the Special Issue Natural Language Processing and Information Retrieval)
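
A minimal sketch of the augmentation loop, assuming a hypothetical back_translate wrapper and synonym table; it combines reverse translation with substitution targeted at low-frequency words (the direction of substitution shown here is an assumption) and omits the grammar-correction module.

```python
from collections import Counter

def augment(monolingual_targets, back_translate, synonyms, freq_threshold=5):
    # corpus-level word frequencies identify the low-frequency vocabulary
    freq = Counter(w for sent in monolingual_targets for w in sent.split())
    pseudo_pairs = []
    for tgt in monolingual_targets:
        src = back_translate(tgt)                  # target -> synthetic source sentence
        words = [
            synonyms.get(w, w) if freq[w] < freq_threshold else w  # swap rare words
            for w in tgt.split()
        ]
        pseudo_pairs.append((src, " ".join(words)))
    return pseudo_pairs
```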

15 pages, 3118 KiB  
Article
Heterogeneous Quasi-Continuous Spiking Cortical Model for Pulse Shape Discrimination
by Runxi Liu, Haoran Liu, Bo Yang, Borui Gu, Zhengtong Yin and Shan Liu
Electronics 2023, 12(10), 2234; https://doi.org/10.3390/electronics12102234 - 14 May 2023
Cited by 1 | Viewed by 1774
Abstract
The present study introduces the heterogeneous quasi-continuous spiking cortical model (HQC-SCM) method as a novel approach for neutron and gamma-ray pulse shape discrimination. The method utilizes specific neural responses to extract features in the falling edge and delayed fluorescence parts of radiation pulse signals. In addition, the study investigates the contributions of HQC-SCM’s parameters to its discrimination performance, leading to the development of an automatic parameter selection strategy. As HQC-SCM is a chaotic system, a genetic algorithm-based parameter optimization method was proposed to locate local optima of HQC-SCM’s parameter solutions efficiently and robustly in just a few iterations of evolution. The experimental results of this study demonstrate that the HQC-SCM method outperforms traditional and state-of-the-art pulse shape discrimination algorithms, including falling edge percentage slope, zero crossing, charge comparison, frequency gradient analysis, pulse-coupled neural network, and ladder gradient methods. The outstanding discrimination performance of HQC-SCM enables plastic scintillators to compete with liquid and crystal scintillators’ neutron and gamma-ray pulse shape discrimination ability. Additionally, the HQC-SCM method outperforms other methods when dealing with noisy radiation pulse signals. Therefore, it is an effective and robust approach that can be applied in radiation detection systems across various fields. Full article
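
For orientation, the sketch below runs a generic one-dimensional spiking cortical model over a normalized pulse and sums the firing activity in the tail region as a discrimination feature; it does not reproduce the heterogeneous, quasi-continuous modifications of HQC-SCM, and all constants and the neighbor kernel are hypothetical.

```python
import numpy as np

def scm_feature(pulse, iterations=30, f=0.7, g=0.6, h=10.0):
    s = pulse / (np.max(np.abs(pulse)) + 1e-12)   # normalized stimulus
    u = np.zeros_like(s)                          # internal neuron activity
    e = np.ones_like(s)                           # dynamic firing threshold
    y = np.zeros_like(s)                          # firing output
    fire_count = np.zeros_like(s)
    for _ in range(iterations):
        link = np.convolve(y, [0.5, 0.0, 0.5], mode="same")   # neighbor coupling
        u = f * u + s * link + s
        y = (u > e).astype(float)
        e = g * e + h * y                         # threshold rises after each firing
        fire_count += y
    # emphasize the falling-edge / delayed-fluorescence region of the pulse
    tail = fire_count[np.argmax(pulse):]
    return tail.sum()
```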

17 pages, 3033 KiB  
Article
Study on the Thermospheric Density Distribution Pattern during Geomagnetic Activity
by Lirong Yin, Lei Wang, Lijun Ge, Jiawei Tian, Zhengtong Yin, Mingzhe Liu and Wenfeng Zheng
Appl. Sci. 2023, 13(9), 5564; https://doi.org/10.3390/app13095564 - 30 Apr 2023
Cited by 49 | Viewed by 3381
Abstract
The atmospheric density of the thermosphere is a fundamental parameter for spacecraft launch and orbit control. Under magnetic storm conditions, the thermospheric atmospheric density experiences significant fluctuations, which have a negative impact on spacecraft control. Exploring thermospheric density during geomagnetic storms can help to mitigate the effects of such events. Research on the inversion of accelerometer measurements for different satellites and the variations of atmospheric density under extreme conditions is still in its infancy. In this paper, the distribution of atmospheric density during three geomagnetic storms is investigated from the inversion results of the Swarm-C accelerometer. Three major geomagnetic storms and their recovery phases are selected as case studies. The thermospheric density obtained by Swarm-C is separated into day and night regions. The empirical orthogonal function analysis method is used to study the spatiotemporal distribution of thermospheric density during geomagnetic storms. The results indicate that storms have a more significant impact on nighttime thermospheric density. The impact of magnetic storms on the temporal distribution of thermospheric density is considerable. The first-order empirical orthogonal function (EOF) time coefficient value on the day after the storm is the largest, reaching 2–3 times that before the magnetic storm. The impact of magnetic storms on atmospheric density is mainly reflected in the time distribution. The spatial distribution of atmospheric density is less affected by magnetic storms and is relatively stable in the short term. The impact of magnetic storms on the spatial distribution of nighttime thermospheric density is more significant than that of daytime regions, and the response of daytime regions to magnetic storms is slower. Full article
(This article belongs to the Special Issue State-of-the-Art Earth Sciences and Geography in China)
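
The empirical orthogonal function analysis mentioned above can be sketched as an SVD of the space-time anomaly matrix; the density array (time steps by grid cells) is a hypothetical stand-in for the Swarm-C derived data.

```python
import numpy as np

def eof_analysis(density, n_modes=3):
    anomalies = density - density.mean(axis=0)          # remove the time mean per cell
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt[:n_modes]                                  # spatial patterns (EOF modes)
    pcs = u[:, :n_modes] * s[:n_modes]                   # time coefficients per mode
    explained = s[:n_modes] ** 2 / np.sum(s ** 2)        # fraction of variance per mode
    return eofs, pcs, explained
```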

21 pages, 10391 KiB  
Article
A Novel Architecture of a Six Degrees of Freedom Parallel Platform
by Qiuxiang Gu, Jiawei Tian, Bo Yang, Mingzhe Liu, Borui Gu, Zhengtong Yin, Lirong Yin and Wenfeng Zheng
Electronics 2023, 12(8), 1774; https://doi.org/10.3390/electronics12081774 - 9 Apr 2023
Cited by 61 | Viewed by 4155
Abstract
With the rapid development of the manufacturing industry, industrial automation equipment, typified by computer numerical control (CNC) machine tools, places ever higher requirements on the machining accuracy of parts. Compared with multi-axis serial platforms, parallel platforms are theoretically better suited to high-precision machining equipment. Many parallel platform designs exist, but none provides a common physical platform for testing the effectiveness of a variety of control algorithms. To fill this gap, this paper focuses on the construction of a Stewart six-degrees-of-freedom parallel platform. The study completed the mechanical structure design of the platform and, based on a microprogrammed control unit (MCU) + pre-driver chip + three-phase full-bridge solution, the circuit design of the motor driver. We wrote the MCU firmware that drives the six parallel robotic arms as well as the PC-side control-center program for the platform, and completed joint debugging of the system. Closed-loop control of the platform's workspace pose is realized. Full article
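
For context, the core computation in such a control center is the Stewart-platform inverse kinematics: mapping a desired six-degree-of-freedom pose to the six leg lengths the arms must realize. The sketch below assumes hypothetical 6x3 joint-coordinate arrays in the base and platform frames.

```python
import numpy as np

def rotation(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def leg_lengths(pose, base_pts, plat_pts):
    """pose = (x, y, z, roll, pitch, yaw); base_pts, plat_pts: (6, 3) joint coordinates."""
    x, y, z, roll, pitch, yaw = pose
    r = rotation(roll, pitch, yaw)
    p = np.array([x, y, z])
    legs = p + plat_pts @ r.T - base_pts    # vector from each base joint to platform joint
    return np.linalg.norm(legs, axis=1)     # commanded length for each of the six arms
```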
