Search Results (4)

Search Parameters:
Keywords = auto-adaptive bias

20 pages, 445 KiB  
Article
Utilizing Language Models to Expand Vision-Based Commonsense Knowledge Graphs
by Navid Rezaei and Marek Z. Reformat
Symmetry 2022, 14(8), 1715; https://doi.org/10.3390/sym14081715 - 17 Aug 2022
Cited by 1 | Viewed by 3254
Abstract
The introduction and ever-growing size of the transformer deep-learning architecture have had a tremendous impact not only on natural language processing but also on other fields. Transformer-based language models have contributed to a renewed interest in commonsense knowledge due to the abilities of deep learning models. Recent literature has focused on analyzing the commonsense embedded within the pre-trained parameters of these models and on embedding missing commonsense using knowledge graphs and fine-tuning. We base our current work on the empirically proven language understanding of very large transformer-based language models to expand a limited commonsense knowledge graph initially generated from visual data alone. Few-shot-prompted pre-trained language models can learn the context of an initial knowledge graph with less bias than language models fine-tuned on a large initial corpus. We also show that these models can offer new concepts to add to the vision-based knowledge graph. This two-step approach of vision mining and language-model prompting results in the auto-generation of a commonsense knowledge graph well equipped with physical commonsense, the human commonsense gained by interacting with the physical world. To prompt the language models, we adapted the chain-of-thought prompting method. To the best of our knowledge, this is a novel contribution to the generation of commonsense knowledge, and it can yield a five-fold cost reduction compared to the state of the art. A further contribution is the assignment of fuzzy linguistic terms to the generated triples. The process is end-to-end in the context of knowledge graphs: triples are verbalized into natural language, and after processing, the results are converted back into triples and added to the commonsense knowledge graph.
(This article belongs to the Special Issue Computational Intelligence and Soft Computing: Recent Applications)
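The verbalize–prompt–parse loop the abstract describes can be sketched as follows. This is a toy illustration, not the authors' code: the model call is mocked, and the relation templates and helper names (`verbalize`, `parse`, `expand_graph`) are hypothetical.

```python
def verbalize(triple):
    """Turn a (head, relation, tail) triple into a natural-language sentence."""
    head, rel, tail = triple
    templates = {
        "UsedFor": "A {h} is used for {t}.",
        "MadeOf": "A {h} is made of {t}.",
    }
    return templates[rel].format(h=head, t=tail)

def parse(sentence):
    """Invert the templates above, mapping a sentence back to a triple."""
    if " is used for " in sentence:
        h, t = sentence.rstrip(".").split(" is used for ")
        return (h.removeprefix("A "), "UsedFor", t)
    if " is made of " in sentence:
        h, t = sentence.rstrip(".").split(" is made of ")
        return (h.removeprefix("A "), "MadeOf", t)
    return None  # unparseable model output is dropped

def expand_graph(seed_triples, language_model):
    """Few-shot prompting: verbalized seed triples form the context,
    the model continues with new sentences, and those are parsed
    back into triples and returned as graph additions."""
    prompt = "\n".join(verbalize(t) for t in seed_triples)
    completion = language_model(prompt)
    new = [parse(line) for line in completion.splitlines()]
    return [t for t in new if t is not None and t not in seed_triples]

# Mock standing in for a large pre-trained language model.
def mock_lm(prompt):
    return "A cup is made of ceramic.\nA hammer is used for driving nails."

seed = [("knife", "UsedFor", "cutting"), ("table", "MadeOf", "wood")]
added = expand_graph(seed, mock_lm)
```

The round trip through natural language is what lets the pre-trained model contribute: the graph is only ever exposed to it in verbalized form.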

17 pages, 3315 KiB  
Article
PFVAE: A Planar Flow-Based Variational Auto-Encoder Prediction Model for Time Series Data
by Xue-Bo Jin, Wen-Tao Gong, Jian-Lei Kong, Yu-Ting Bai and Ting-Li Su
Mathematics 2022, 10(4), 610; https://doi.org/10.3390/math10040610 - 16 Feb 2022
Cited by 114 | Viewed by 13464
Abstract
Prediction based on time series has a wide range of applications. Due to the complex nonlinear and random distribution of time series data, the performance of learned prediction models can be degraded by modeling bias or overfitting. This paper proposes a novel planar flow-based variational auto-encoder prediction model (PFVAE), which uses a long short-term memory (LSTM) network as the auto-encoder and designs the variational auto-encoder (VAE) as a time series predictor to overcome noise effects. In addition, the internal structure of the VAE is transformed using planar flow, which enables it to learn and fit the nonlinearity of time series data and improves the dynamic adaptability of the network. Prediction experiments verify that the proposed model is superior to other models in prediction accuracy and is effective for predicting time series data.
(This article belongs to the Special Issue Mathematical Method and Application of Machine Learning)
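The planar-flow transform the abstract refers to is a standard invertible map applied to a latent sample; a minimal sketch of one step (standalone NumPy, not tied to the paper's LSTM architecture) looks like this:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar-flow step: z' = z + u * tanh(w.z + b).
    Returns the transformed latent sample and log|det Jacobian|,
    the change-of-variables term a VAE needs in its likelihood."""
    a = np.tanh(w @ z + b)           # scalar activation
    z_new = z + u * a
    psi = (1.0 - a ** 2) * w         # gradient of tanh(w.z + b) w.r.t. z
    log_det = np.log(np.abs(1.0 + u @ psi))
    return z_new, log_det
```

Stacking several such steps lets a simple Gaussian posterior be bent into a more flexible distribution, which is the mechanism PFVAE uses to fit nonlinear time series.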

10 pages, 3163 KiB  
Article
Low Phase Noise and Wide-Range Class-C VCO Using Auto-Adaptive Bias Technique
by Jeong-Yun Lee, Gwang Sub Kim, Goo-Han Ko, Kwang-Il Oh, Jae Gyeong Park and Donghyun Baek
Electronics 2020, 9(8), 1290; https://doi.org/10.3390/electronics9081290 - 11 Aug 2020
Cited by 10 | Viewed by 7167
Abstract
This paper proposes a new 24-GHz class-C voltage-controlled oscillator (VCO) structure using an auto-adaptive bias technique. The VCO uses a digitally controlled circuit to eliminate the start-up failure that a class-C structure can suffer, and achieves low phase noise and a wide frequency range. To expand the frequency tuning range, a 3-bit cap-bank is used, and a triple-coupled transformer serves as the core inductor. The proposed class-C VCO is implemented in a 65-nm RF CMOS process. It achieves phase noise of −105 dBc/Hz or better at a 1-MHz offset, and its output frequency ranges from 22.8 GHz to 27.3 GHz while consuming 8.3–10.6 mW of power. The figure-of-merit with tuning range (FoMT) of this design reaches 191.1 dBc/Hz.
(This article belongs to the Special Issue 5G Front-End Transceivers)
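The reported FoMT follows the conventional definition, which folds phase noise, carrier-to-offset ratio, tuning range, and power into one number. A quick sketch of that formula with the abstract's figures (the exact operating point behind the 191.1 dBc/Hz value is not stated, so the worst-case numbers below land a little lower):

```python
import math

def fomt(pn_dbc_hz, f0_hz, offset_hz, p_mw, tuning_range_pct):
    """Conventional VCO figure-of-merit with tuning range, in dBc/Hz:
    FoMT = |PN| + 20log10(f0/df) + 20log10(FTR/10) - 10log10(P/1mW)."""
    return (-pn_dbc_hz
            + 20 * math.log10(f0_hz / offset_hz)
            + 20 * math.log10(tuning_range_pct / 10)
            - 10 * math.log10(p_mw / 1.0))

# Figures from the abstract: -105 dBc/Hz at 1-MHz offset,
# 22.8-27.3 GHz tuning range, 8.3-10.6 mW power draw.
f_lo, f_hi = 22.8e9, 27.3e9
ftr = 100 * (f_hi - f_lo) / ((f_hi + f_lo) / 2)   # ~18% tuning range
value = fomt(-105, (f_lo + f_hi) / 2, 1e6, 8.3, ftr)
```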

27 pages, 935 KiB  
Article
State of Charge Estimation Using the Extended Kalman Filter for Battery Management Systems Based on the ARX Battery Model
by Shifei Yuan, Hongjie Wu and Chengliang Yin
Energies 2013, 6(1), 444-470; https://doi.org/10.3390/en6010444 - 17 Jan 2013
Cited by 133 | Viewed by 15469
Abstract
State of charge (SOC) is a critical factor in guaranteeing that a battery system operates in a safe and reliable manner. Many uncertainties and noise sources, such as fluctuating current, sensor measurement accuracy and bias, temperature effects, calibration errors, or even sensor failure, pose a challenge to accurate SOC estimation in real applications. This paper adds two contributions to the existing literature. First, the auto-regressive exogenous (ARX) model is proposed to simulate the battery's nonlinear dynamics; owing to its discrete form and ease of implementation, this straightforward approach is well suited to real applications. Second, its order-selection principle and parameter-identification method are illustrated in detail. Hybrid pulse power characterization (HPPC) cycles are run on a 60 Ah LiFePO4 battery module for model identification and validation. Based on the proposed ARX model, SOC estimation is pursued using the extended Kalman filter. The adaptability of the battery models and the robustness of the SOC estimation algorithm are also verified. The results indicate that the Kalman-filter SOC estimation method based on the ARX model performs well: it increases the model's output-voltage accuracy and therefore has the potential to be used in real applications such as EVs and HEVs.
(This article belongs to the Special Issue Vehicle to Grid)
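The predict/update cycle of EKF-based SOC estimation can be sketched with a toy one-state model. This is only in the spirit of the abstract: the paper's ARX voltage model carries more regressor terms, and the OCV curve, internal resistance, and noise covariances below are illustrative stand-ins, not values from the paper.

```python
CAP_AS = 60 * 3600.0   # 60 Ah capacity in ampere-seconds (from the abstract)
R_INT = 0.002          # assumed internal resistance, ohms (illustrative)

def ocv(soc):
    """Toy linear open-circuit-voltage curve (real curves are nonlinear)."""
    return 3.0 + 0.4 * soc

def docv(soc):
    """d OCV / d SOC for the linear stand-in; the EKF's measurement Jacobian."""
    return 0.4

def ekf_step(soc, P, current_a, v_meas, dt, Q=1e-7, R=1e-3):
    # Predict: coulomb counting (discharge current positive)
    soc_pred = soc - current_a * dt / CAP_AS
    P_pred = P + Q
    # Update: correct the prediction with the terminal-voltage measurement
    H = docv(soc_pred)
    v_pred = ocv(soc_pred) - R_INT * current_a
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1 - K * H) * P_pred
    return soc_new, P_new
```

The correction step is what gives the filter its robustness to sensor bias: a wrong initial SOC is pulled toward the value consistent with the measured voltage.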
