Search Results (3)

Search Parameters:
Keywords = artistic font generation

12 pages, 868 KiB  
Article
Trademark Text Recognition Combining SwinTransformer and Feature-Query Mechanisms
by Boxiu Zhou, Xiuhui Wang, Wenchao Zhou and Longwen Li
Electronics 2024, 13(14), 2814; https://doi.org/10.3390/electronics13142814 - 17 Jul 2024
Cited by 1 | Viewed by 1002
Abstract
The task of trademark text recognition is a fundamental component of scene text recognition (STR), which currently faces a number of challenges, including unordered, irregular, or curved text, as well as text that is distorted or rotated. In applications such as trademark infringement detection and brand-effect analysis, the diversity of artistic fonts in trademarks and the complexity of the product surfaces on which trademarks appear pose major challenges for research. To tackle these issues, this paper proposes a novel recognition framework, SwinCornerTR, which aims to enhance the accuracy and robustness of trademark text recognition. First, a feature-extraction network based on SwinTransformer with an enhanced feature pyramid network (EFPN) is proposed. With SwinTransformer as the backbone, global information in trademark images is captured efficiently through the self-attention mechanism and the enhanced feature pyramid module, providing more accurate and expressive feature representations for subsequent text extraction. Then, during the encoding stage, a feature point-retrieval algorithm based on corner detection is designed: an OTSU-based fast corner detector generates a corner map efficiently and accurately, and key-point regions are prioritized during retrieval, eliminating character-to-character connecting lines and suppressing background interference. Finally, extensive experiments were conducted on two open-access benchmark datasets, SVT and CUTE80, as well as a self-constructed trademark dataset, to assess the effectiveness of the proposed method. The proposed method achieved accuracies of 92.9%, 92.3%, and 84.8%, respectively, on these datasets.
These results demonstrate the effectiveness and robustness of the proposed method for the analysis of trademark data.
(This article belongs to the Section Artificial Intelligence)
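The OTSU-thresholded corner map described in the abstract can be sketched in plain NumPy. The abstract does not specify the paper's fast corner detector, so a standard Harris-style corner response with a 3×3 box window stands in here; only the idea of "corner response → Otsu threshold → binary corner map" is taken from the text:

```python
import numpy as np

def sobel_gradients(img):
    """Image gradients via 3x3 Sobel filters (edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, float)
    gy = np.zeros_like(img, float)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return gx, gy

def harris_response(img, k=0.04):
    """Harris corner response det(M) - k*trace(M)^2 over a 3x3 window."""
    gx, gy = sobel_gradients(img)
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    def box(a):  # 3x3 box-filter sum as the structure-tensor window
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    tr = sxx + syy
    return det - k * tr * tr

def otsu_threshold(values, bins=256):
    """Otsu's threshold: maximize between-class variance over a histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # class-0 probability up to each bin
    m = np.cumsum(p * centers)   # class-0 cumulative mean numerator
    mt = m[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.argmax(np.nan_to_num(var))]

def corner_map(img):
    """Binary corner map: positive Harris responses above the Otsu threshold."""
    r = harris_response(img)
    t = otsu_threshold(r[r > 0]) if (r > 0).any() else 0.0
    return r > max(t, 0.0)
```

Applying Otsu to the positive responses rather than a fixed cutoff makes the detector adapt to each trademark image's contrast, which is the property the abstract emphasizes.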

14 pages, 11759 KiB  
Article
Design and Implementation of Dongba Character Font Style Transfer Model Based on AFGAN
by Congwang Bao, Yuan Li and En Lu
Sensors 2024, 24(11), 3424; https://doi.org/10.3390/s24113424 - 26 May 2024
Cited by 2 | Viewed by 1635
Abstract
Dongba characters are ancient ideographic scripts whose abstract forms differ greatly from modern Chinese characters, so directly applying existing methods cannot achieve font style transfer for Dongba characters. This paper proposes an Attention-based Font style transfer Generative Adversarial Network (AFGAN). Based on the characteristics of Dongba character images, two core modules are built into AFGAN: a void constraint and a font-stroke constraint. In addition, to enhance the network's feature-learning ability and improve the style-transfer effect, a Convolutional Block Attention Module (CBAM) is added in the down-sampling stage to help the network adapt to input font images with different styles. Quantitative and qualitative comparisons between generated and real fonts were conducted, in consultation with professional artists, on a newly built dataset of small seal script, slender gold script, and Dongba characters, and the styles of small seal script and slender gold script were transferred to Dongba characters. The results indicate that AFGAN outperforms existing networks in evaluation indexes and visual quality. The method effectively learns the style features of small seal script and slender gold script and transfers them to Dongba characters, demonstrating its effectiveness.
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
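The CBAM mechanism mentioned in the abstract follows a standard recipe: channel attention from average- and max-pooled descriptors passed through a shared MLP, then spatial attention from channel-wise pooling. A minimal NumPy sketch of that forward pass, with caller-supplied weights standing in for learned parameters and a fixed averaging kernel standing in for the learned 7×7 spatial conv:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: (C, H, W). Shared two-layer MLP (w1: (C//r, C), w2: (C, C//r))
    applied to avg- and max-pooled channel descriptors."""
    avg = x.mean(axis=(1, 2))   # (C,)
    mx = x.max(axis=(1, 2))     # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    a = sigmoid(mlp(avg) + mlp(mx))          # per-channel gate in (0, 1)
    return x * a[:, None, None]

def spatial_attention(x, k=7):
    """Channel-wise avg/max maps -> kxk conv -> sigmoid gate per pixel.
    A uniform averaging kernel stands in for the learned conv weights."""
    avg = x.mean(axis=0)        # (H, W)
    mx = x.max(axis=0)          # (H, W)
    feat = np.stack([avg, mx])  # (2, H, W)
    pad = k // 2
    p = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    H, W = avg.shape
    out = np.zeros((H, W))
    for c in range(2):
        for i in range(k):
            for j in range(k):
                out += p[c, i:i + H, j:j + W]
    out /= 2 * k * k
    return x * sigmoid(out)[None]

def cbam(x, w1, w2):
    """CBAM order: channel attention first, then spatial attention."""
    return spatial_attention(channel_attention(x, w1, w2))
```

Because both gates are sigmoid outputs in (0, 1), CBAM only rescales features; inserting it in the down-sampling path lets the generator emphasize stroke regions without changing feature-map shapes.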

15 pages, 3461 KiB  
Article
Deep Deformable Artistic Font Style Transfer
by Xuanying Zhu, Mugang Lin, Kunhui Wen, Huihuang Zhao and Xianfang Sun
Electronics 2023, 12(7), 1561; https://doi.org/10.3390/electronics12071561 - 26 Mar 2023
Cited by 4 | Viewed by 3081
Abstract
The essence of font style transfer is to move the style features of an image onto a font while maintaining the font's glyph structure. At present, generative adversarial networks based on convolutional neural networks play an important role in font style generation. However, traditional convolutional neural networks for font-image recognition suffer from poor adaptability to unseen image changes, weak generalization ability, and poor texture feature extraction; when the glyph structure is very complex, stylized font images cannot be recognized effectively. In this paper, a deep deformable style transfer network is proposed for artistic font style transfer, which can adjust the degree of font deformation according to the style and realize multiscale artistic style transfer of text. The model consists of a sketch module that learns glyph mapping, a glyph module that learns style features, and a transfer module that fuses style textures. In the glyph module, a Deform-Resblock encoder is designed to extract glyph features: a deformable convolution is introduced and the size of the residual module is changed to fuse feature information at different scales, better preserve the font structure, and enhance the controllability of text deformation. As a result, the network has greater control over text, processes image feature information better, and can produce more refined artistic fonts.
(This article belongs to the Section Computer Science & Engineering)
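The deformable convolution inside the Deform-Resblock samples each kernel tap at a learned per-position offset via bilinear interpolation, which is what lets the encoder bend its receptive field along curved strokes. A minimal single-channel NumPy sketch; in a real layer the offsets are predicted by a companion conv, whereas here they are passed in directly:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample img at fractional coordinates (y, x) with border clamping."""
    H, W = img.shape
    y = np.clip(y, 0, H - 1)
    x = np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    dy, dx = y - y0, x - x0
    return (img[y0, x0] * (1 - dy) * (1 - dx) + img[y0, x1] * (1 - dy) * dx
            + img[y1, x0] * dy * (1 - dx) + img[y1, x1] * dy * dx)

def deform_conv2d(img, weight, offsets):
    """3x3 deformable conv on a single channel.
    img: (H, W); weight: (3, 3);
    offsets: (H, W, 3, 3, 2) giving a (dy, dx) shift for every kernel tap
    at every output position. Zero offsets reduce to an ordinary conv."""
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for ki in range(3):
                for kj in range(3):
                    dy, dx = offsets[i, j, ki, kj]
                    # Shift each tap away from its regular grid location.
                    acc += weight[ki, kj] * bilinear_sample(
                        img, i + ki - 1 + dy, j + kj - 1 + dx)
            out[i, j] = acc
    return out
```

With all offsets at zero and an identity kernel (1 at the center tap), the layer passes the image through unchanged, which is a convenient sanity check on the sampling logic.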
