Search Results (19)

Search Parameters:
Keywords = sentence compression

14 pages, 1658 KB  
Article
The Effect of Modulation Enhancement Scheme on Speech Recognition in Spatial Noise Among Young Adults with Normal Hearing
by Vibha Kanagokar, M. A. Yashu, Jayashree S. Bhat and Arivudai Nambi Pitchaimuthu
Audiol. Res. 2026, 16(1), 26; https://doi.org/10.3390/audiolres16010026 - 14 Feb 2026
Viewed by 449
Abstract
Background/Objectives: Speech understanding in noise relies on both temporal fine structure (TFS) and temporal envelope (ENV) cues. While TFS primarily conveys interaural time differences (ITDs) at low frequencies, ENV cues can also support ITD processing, especially when TFS is unavailable or degraded. Expanding the ENV by increasing modulation depth has been proposed to improve speech perception, but its effects on spatial release from masking (SRM) and binaural temporal processing in normal-hearing listeners remain unclear. The goal of this study was to evaluate the effect of ENV enhancement on SRM in young adults with normal hearing and its influence on ITD sensitivity and interaural coherence (IC). Method: Thirty normal-hearing native Kannada speakers (19–34 years) participated. Speech stimuli consisted of Kannada sentences embedded in four-talker babble at −5, 0, and +5 dB signal to noise ratio (SNR). Target and masker were spatialized using head-related transfer functions at 0°, 15°, and 37.5° azimuths. Stimuli were presented with and without ENV enhancement (compression–expansion algorithm). Speech recognition scores were analyzed using generalized linear mixed models, and SRM was calculated as performance differences between co-located and spatially separated conditions. Cross-correlation analyses were performed to estimate ITDs and IC across SNRs. Result: ENV enhancement yielded significantly higher SRM values across all SNRs and spatial separations. Benefits were greatest at lower SNRs and wider target–masker separations. Cross-correlation analysis showed enhanced IC and more reliable ITD estimates under the expanded condition, particularly at moderate SNRs. Conclusions: Temporal ENV enhancement strengthens spatial unmasking and binaural timing cues in normal-hearing adults, especially under adverse listening conditions. These findings highlight its potential application in auditory rehabilitation and hearing technologies where ENV cues are critical. 
Full article
(This article belongs to the Section Hearing)
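The cross-correlation analysis mentioned in this abstract can be illustrated with a minimal sketch (not the authors' code): the interaural time difference is estimated as the lag that maximizes the cross-correlation between the two ear signals. The signal, delay, and sampling rate below are illustrative.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation between the two ear signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag in samples
    return lag / fs

# Toy check: a broadband noise burst delayed by 10 samples at 44.1 kHz.
fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
delayed = np.roll(sig, 10)
print(estimate_itd(delayed, sig, fs))  # 10 / 44100 ≈ 0.000227 s
```

A broadband signal is used here because a pure tone would make the cross-correlation peak ambiguous across periods.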

26 pages, 842 KB  
Article
Speech Production Intelligibility Is Associated with Speech Recognition in Adult Cochlear Implant Users
by Victoria A. Sevich, Davia J. Williams, Aaron C. Moberly and Terrin N. Tamati
Brain Sci. 2025, 15(10), 1066; https://doi.org/10.3390/brainsci15101066 - 30 Sep 2025
Viewed by 1868
Abstract
Background/Objectives: Adult cochlear implant (CI) users exhibit broad variability in speech perception and production outcomes. Cochlear implantation improves the intelligibility (comprehensibility) of CI users’ speech, but the degraded auditory signal delivered by the CI may attenuate this benefit. Among other effects, degraded auditory feedback can lead to compression of the acoustic–phonetic vowel space, which makes vowel productions confusable, decreasing intelligibility. Sustained exposure to degraded auditory feedback may also weaken phonological representations. The current study examined the relationship between subjective ratings and acoustic measures of speech production, speech recognition accuracy, and phonological processing (cognitive processing of speech sounds) in adult CI users. Methods: Fifteen adult CI users read aloud a series of short words, which were analyzed in two ways. First, acoustic measures of vowel distinctiveness (i.e., vowel dispersion) were calculated. Second, thirty-seven normal-hearing (NH) participants listened to the words produced by the CI users and rated the subjective intelligibility of each word from 1 (least understandable) to 100 (most understandable). CI users also completed an auditory sentence recognition task and a nonauditory cognitive test of phonological processing. Results: CI users rated as having more understandable speech demonstrated more accurate sentence recognition than those rated as having less understandable speech, but intelligibility ratings were only marginally related to phonological processing. Further, vowel distinctiveness was marginally associated with sentence recognition but not related to phonological processing or subjective ratings of intelligibility. 
Conclusions: The results suggest that speech intelligibility ratings are related to speech recognition accuracy in adult CI users, and future investigation is needed to identify the extent to which this relationship is mediated by individual differences in phonological processing. Full article
(This article belongs to the Special Issue Language, Communication and the Brain—2nd Edition)
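The vowel-dispersion measure this abstract refers to can be sketched as the mean Euclidean distance of each vowel's (F1, F2) point from the speaker's vowel-space centroid; the formant values below are hypothetical, not data from the study.

```python
import math

def vowel_dispersion(formants):
    """Mean Euclidean distance of each vowel's (F1, F2) point from the
    vowel-space centroid: larger values = more distinct vowels."""
    n = len(formants)
    cf1 = sum(f1 for f1, _ in formants) / n
    cf2 = sum(f2 for _, f2 in formants) / n
    return sum(math.dist((f1, f2), (cf1, cf2)) for f1, f2 in formants) / n

# Hypothetical corner vowels /i a u/ vs. a compressed vowel space (Hz).
corners = [(300, 2300), (750, 1300), (350, 900)]
compressed = [(450, 1700), (600, 1500), (480, 1200)]
print(vowel_dispersion(corners) > vowel_dispersion(compressed))  # True
```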

22 pages, 3497 KB  
Article
CPS-LSTM: Privacy-Sensitive Entity Adaptive Recognition Model for Power Systems
by Hao Zhang, Jing Wang, Xuanyuan Wang, Xuhui Lü, Zhenzhi Guan, Zhenghua Cai and Hua Zhang
Energies 2025, 18(8), 2013; https://doi.org/10.3390/en18082013 - 14 Apr 2025
Viewed by 669
Abstract
With the widespread application of Android devices in the energy sector, an increasing number of applications rely on SDKs to access privacy-sensitive data such as device identifiers, location information, energy consumption, and user behavior. However, these data are often stored under differing formats and naming conventions, which makes consistent extraction and identification difficult. Traditional taint analysis methods identify these entities inefficiently, hindering accurate identification. To address this issue, we first propose a high-quality data construction method based on privacy protocols, which includes sentence segmentation, compression encoding, and entity annotation. We then introduce CPS-LSTM (Character-level Privacy-sensitive Entity Adaptive Recognition Model), which enhances the recognition of privacy-sensitive entities in mixed Chinese and English text through character-level embedding and word-vector fusion. The model features a streamlined architecture, which accelerates convergence and enables parallel sentence processing. Our experimental results demonstrate that CPS-LSTM significantly outperforms the baseline methods in terms of accuracy and recall: its accuracy is 0.09 higher than that of Lattice LSTM, 0.14 higher than WC-LSTM, and 0.05 higher than FLAT, while its recall is 0.07 higher than Lattice LSTM, 0.12 higher than WC-LSTM, and 0.02 higher than FLAT. Full article
(This article belongs to the Section F1: Electrical Power System)

24 pages, 3475 KB  
Article
A Knowledge-Graph-Driven Method for Intelligent Decision Making on Power Communication Equipment Faults
by Huiying Qu, Yiying Zhang, Kun Liang, Siwei Li and Xianxu Huo
Electronics 2023, 12(18), 3939; https://doi.org/10.3390/electronics12183939 - 18 Sep 2023
Cited by 13 | Viewed by 2932
Abstract
Grid terminals deploy numerous types of communication equipment for the digital construction of the smart grid, and a failure in this equipment can jeopardize the safety of the power grid. The sheer amount of communication equipment leads to a dramatic increase in fault diagnosis data, making it difficult to locate fault information during equipment maintenance. This paper therefore designs a knowledge-graph-driven method for intelligent decision making on power communication equipment faults. The method consists of two parts: power knowledge extraction and user-intent multi-feature learning recommendation. The power knowledge extraction model uses a multi-layer bidirectional encoder to capture the global features of a sentence and then characterizes its deep local semantics through a convolutional pooling layer, achieving joint extraction and visual display of fault entity relations. The user-intent multi-feature learning recommendation model uses a graph convolutional neural network to aggregate the higher-order neighborhood information of faulty entities and a cross-compression matrix to model the feature interaction between the user and the graph, achieving accurate prediction for fault retrieval. The experimental results show that the method is optimal in knowledge extraction compared to classical models such as BERT-CRF, reaching an F1 value of 81.7% and effectively extracting fault knowledge. The user-intent multi-feature learning recommendation also performs best, with an F1 value of 87%, an improvement of 5%~11% over classical models such as CKAN and KGCN, which effectively addresses insufficient mining of user retrieval intent. This method realizes accurate retrieval and personalized recommendation of fault information for electric power communication equipment. Full article

13 pages, 1602 KB  
Article
Text Summarization Method Based on Gated Attention Graph Neural Network
by Jingui Huang, Wenya Wu, Jingyi Li and Shengchun Wang
Sensors 2023, 23(3), 1654; https://doi.org/10.3390/s23031654 - 2 Feb 2023
Cited by 10 | Viewed by 4260
Abstract
Text summarization is an information compression technique that extracts important information from long text and has become a challenging research direction in natural language processing. Deep-learning-based summarization models have shown good results, but how to model the relationships between words more effectively, extract feature information more accurately, and eliminate redundant information remains an open problem. This paper proposes GA-GNN, a graph neural network model based on gated attention, which effectively improves the accuracy and readability of text summarization. First, words are encoded with a concatenated sentence encoder to generate deeper vectors containing local and global semantic information. Second, the ability to extract key information features is improved by using gated attention units to eliminate locally irrelevant information. Finally, the loss function is optimized in three respects, contrastive learning, confidence calculation of important sentences, and graph feature extraction, to improve the robustness of the model. Experimental validation on the CNN/Daily Mail and MR datasets showed that the proposed model outperforms existing methods. Full article
(This article belongs to the Special Issue Advances in Deep Learning for Intelligent Sensing Systems)
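The gated attention unit described in this abstract can be sketched in its simplest form (an assumption about the general mechanism, not the paper's exact layer): a sigmoid gate computed from the input decides how much of each feature passes through, suppressing irrelevant information.

```python
import numpy as np

rng = np.random.default_rng(42)

def gated_attention(x, W, b):
    """Elementwise gating: a sigmoid gate learned from the input decides
    how much of each feature to pass through (near 0 = suppressed)."""
    gate = 1.0 / (1.0 + np.exp(-(x @ W + b)))  # sigmoid in (0, 1)
    return gate * x

# Toy run: 4 word vectors of dimension 8, with hypothetical random weights.
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8))
b = np.zeros(8)
out = gated_attention(x, W, b)
print(out.shape)  # (4, 8)
print(np.all(np.abs(out) <= np.abs(x)))  # True: the gate only attenuates
```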

23 pages, 2053 KB  
Article
Automatic Text Summarization for Hindi Using Real Coded Genetic Algorithm
by Arti Jain, Anuja Arora, Jorge Morato, Divakar Yadav and Kumar Vimal Kumar
Appl. Sci. 2022, 12(13), 6584; https://doi.org/10.3390/app12136584 - 29 Jun 2022
Cited by 36 | Viewed by 5517
Abstract
In the present scenario, Automatic Text Summarization (ATS) is in great demand to address the ever-growing volume of text data available online and to discover relevant information faster. In this research, an ATS methodology is proposed for the Hindi language using a Real Coded Genetic Algorithm (RCGA) over a health corpus available in the Kaggle dataset. The methodology comprises five phases: preprocessing, feature extraction, processing, sentence ranking, and summary generation. Rigorous experimentation on varied feature sets is performed, in which distinguishing features, namely sentence similarity and named entity features, are combined with others for computing the evaluation metrics. The top 14 feature combinations are evaluated using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) measure. RCGA computes appropriate feature weights through strings of features, chromosome selection, and the reproduction operators Simulated Binary Crossover and Polynomial Mutation. Different compression rates are tested to extract the highest-scored sentences as the corpus summary. In comparison with existing summarization tools, the extractive ATS method gives a summary reduction of 65%. Full article
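The sentence-ranking and compression-rate step this abstract describes can be sketched generically (not the paper's RCGA pipeline): keep the top-scored sentences, sized by the compression rate, and emit them in their original order.

```python
def extractive_summary(sentences, scores, rate):
    """Keep the highest-scored sentences, sized by the compression rate,
    and emit them in their original (chronological) order."""
    k = max(1, round(len(sentences) * rate))
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

# Hypothetical sentences and scores, 40% compression rate.
sents = ["S1", "S2", "S3", "S4", "S5"]
scores = [0.9, 0.2, 0.7, 0.4, 0.8]
print(extractive_summary(sents, scores, 0.4))  # ['S1', 'S5']
```

In the paper, the scores would come from the RCGA-weighted feature combination; here they are placeholders.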

10 pages, 1063 KB  
Article
A Hierarchical Representation Model Based on Longformer and Transformer for Extractive Summarization
by Shihao Yang, Shaoru Zhang, Ming Fang, Fengqin Yang and Shuhua Liu
Electronics 2022, 11(11), 1706; https://doi.org/10.3390/electronics11111706 - 27 May 2022
Cited by 11 | Viewed by 5467
Abstract
Automatic text summarization compresses a document while preserving the main idea of the original text, and comes in two forms: extractive and abstractive summarization. Extractive summarization selects important sentences from the original document to serve as the summary, so the document representation method is crucial to the quality of the generated summary. To represent the document effectively, we propose a hierarchical document representation model, Long-Trans-Extr, for extractive summarization, which uses Longformer as the sentence encoder and Transformer as the document encoder. The advantage of Longformer as the sentence encoder is that it can take long inputs of up to 4096 tokens with relatively little additional computation. Long-Trans-Extr is evaluated on three benchmark datasets: CNN (Cable News Network), DailyMail, and the combined CNN/DailyMail. It achieves 43.78 (Rouge-1) and 39.71 (Rouge-L) on CNN/DailyMail, and 33.75 (Rouge-1), 13.11 (Rouge-2), and 30.44 (Rouge-L) on CNN. These are highly competitive results and show that our model performs particularly well on long documents such as the CNN corpus. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

15 pages, 2778 KB  
Article
X-Transformer: A Machine Translation Model Enhanced by the Self-Attention Mechanism
by Huey-Ing Liu and Wei-Lin Chen
Appl. Sci. 2022, 12(9), 4502; https://doi.org/10.3390/app12094502 - 29 Apr 2022
Cited by 26 | Viewed by 7430
Abstract
Machine translation has received significant attention in natural language processing, not only because of its challenges but also because of the translation needs that arise in modern daily life. In this study, we design a new machine translation model named X-Transformer, which refines the original Transformer model in three respects. First, the encoder's parameters are compressed. Second, the encoder structure is modified by adopting two consecutive self-attention layers and reducing the point-wise feed-forward layer, helping the model understand the semantic structure of sentences precisely. Third, we streamline the decoder size while maintaining accuracy. Through experiments, we demonstrate that a large number of decoder layers not only hurts translation performance but also increases inference time. The X-Transformer reaches state-of-the-art results of 46.63 and 55.63 BiLingual Evaluation Understudy (BLEU) points on the WMT 2014 English–German and English–French translation corpora, outperforming the Transformer by 19 and 18 BLEU points, respectively. The X-Transformer also reduces training time to one-third that of the Transformer. In addition, the heat maps of the X-Transformer reach token-level precision (i.e., token-to-token attention), while the Transformer model remains at the sentence level (i.e., token-to-sentence attention). Full article
(This article belongs to the Section Computing and Artificial Intelligence)

14 pages, 3505 KB  
Article
A Novel Approach for Semantic Extractive Text Summarization
by Waseemullah, Zainab Fatima, Shehnila Zardari, Muhammad Fahim, Maria Andleeb Siddiqui, Ag. Asri Ag. Ibrahim, Kashif Nisar and Laviza Falak Naz
Appl. Sci. 2022, 12(9), 4479; https://doi.org/10.3390/app12094479 - 28 Apr 2022
Cited by 19 | Viewed by 8348
Abstract
Text summarization is a technique for shortening a long text or document while extracting its essence. It becomes critical when someone needs a quick and accurate summary of very long content, since manual summarization is expensive and time-consuming. While summarizing, important content such as information, concepts, and features of the document can be lost, reducing the retention ratio (the share of informative sentences kept); conversely, retaining more information lengthens the summary and worsens compression, so there is a tradeoff between the two ratios (compression and retention). The proposed model preserves the informative sentences by retaining long sentences and removing short ones, at a modest cost in compression. It balances the retention ratio by avoiding textual redundancy and filters irrelevant information by removing outliers. It emits sentences in chronological order, as they appear in the original document, and uses a heuristic to select the best cluster, whose more meaningful sentences appear at the top of the summary. Our proposed extractive summarizer addresses these deficiencies and balances the compression and retention ratios. Full article
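The compression/retention tradeoff discussed in this abstract can be made concrete with one common formulation of the two ratios (the paper's exact definitions may differ, so treat this as an illustrative sketch):

```python
def compression_ratio(original_sents, summary_sents):
    """Fraction of the original removed: higher = shorter summary."""
    return 1 - len(summary_sents) / len(original_sents)

def retention_ratio(informative_sents, summary_sents):
    """Fraction of the informative sentences that survive in the summary."""
    kept = sum(1 for s in informative_sents if s in summary_sents)
    return kept / len(informative_sents)

# Hypothetical 10-sentence document with 4 informative sentences.
original = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
informative = ["A", "C", "F", "I"]
summary = ["A", "C", "F"]
print(compression_ratio(original, summary))   # 0.7
print(retention_ratio(informative, summary))  # 0.75
```

Shrinking the summary further would raise compression but drop informative sentences, lowering retention, which is exactly the tradeoff the abstract describes.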

24 pages, 4184 KB  
Systematic Review
Central Auditory Functions of Alzheimer’s Disease and Its Preclinical Stages: A Systematic Review and Meta-Analysis
by Hadeel Y. Tarawneh, Holly K. Menegola, Andrew Peou, Hanadi Tarawneh and Dona M. P. Jayakody
Cells 2022, 11(6), 1007; https://doi.org/10.3390/cells11061007 - 16 Mar 2022
Cited by 25 | Viewed by 6644
Abstract
In 2020, 55 million people worldwide were living with dementia, and this number is projected to reach 139 million in 2050. However, approximately 75% of people living with dementia have not received a formal diagnosis. Hence, they do not have access to treatment and care. Without effective treatment in the foreseeable future, it is essential to focus on modifiable risk factors and early intervention. Central auditory processing is impaired in people diagnosed with Alzheimer’s disease (AD) and its preclinical stages and may manifest many years before clinical diagnosis. This study systematically reviewed central auditory processing function in AD and its preclinical stages using behavioural central auditory processing tests. Eleven studies met the full inclusion criteria, and seven were included in the meta-analyses. The results revealed that those with mild cognitive impairment perform significantly worse than healthy controls within channel adaptive tests of temporal response (ATTR), time-compressed speech test (TCS), Dichotic Digits Test (DDT), Dichotic Sentence Identification (DSI), Speech in Noise (SPIN), and Synthetic Sentence Identification-Ipsilateral Competing Message (SSI-ICM) central auditory processing tests. In addition, this analysis indicates that participants with AD performed significantly worse than healthy controls in DDT, DSI, and SSI-ICM tasks. Clinical implications are discussed in detail. Full article
(This article belongs to the Special Issue Biomarkers of Alzheimer’s Disease: New Insights)

11 pages, 427 KB  
Article
Word Sense Disambiguation Using Clustered Sense Labels
by Jeong Yeon Park, Hyeong Jin Shin and Jae Sung Lee
Appl. Sci. 2022, 12(4), 1857; https://doi.org/10.3390/app12041857 - 11 Feb 2022
Cited by 11 | Viewed by 3862
Abstract
Sequence labeling models for word sense disambiguation have proven highly effective when the sense vocabulary is compressed based on a thesaurus hierarchy. In this paper, we propose a method for compressing the sense vocabulary without using a thesaurus. To this end, sense definitions in a dictionary are converted into sentence vectors and clustered into compressed senses. The very large set of sense vectors is first partitioned to reduce computational complexity and then clustered hierarchically with awareness of homographs. Experiments on the English Senseval and Semeval datasets and the Korean Sejong sense-annotated corpus demonstrate that performance increases greatly over the uncompressed sense model and is comparable to that of the thesaurus-based model. Full article

21 pages, 3230 KB  
Article
Chinese Neural Question Generation: Augmenting Knowledge into Multiple Neural Encoders
by Ming Liu and Jinxu Zhang
Appl. Sci. 2022, 12(3), 1032; https://doi.org/10.3390/app12031032 - 19 Jan 2022
Cited by 5 | Viewed by 3875
Abstract
Neural question generation (NQG) is the task of automatically generating a question from a given passage and answer with sequence-to-sequence neural models. Passage compression has been proposed to address the challenge of generating questions from a long passage by extracting only the relevant sentences containing the answer. However, it may not work well if the discarded sentences contain contextual information needed for the target question. This study therefore investigated how to incorporate knowledge triples into the sequence-to-sequence neural model to reduce such contextual information loss and proposed a multi-encoder neural model for Chinese question generation. The approach was extensively evaluated on a large Chinese question-and-answer dataset. The results showed that our approach outperformed state-of-the-art NQG models by 5.938 points on the BLEU score and 7.120 points on the ROUGE-L score on average, since the proposed model is answer-focused, which helps it produce an interrogative word matching the answer type. In addition, augmenting the model with information from the knowledge graph improves the BLEU score by 10.884 points. Finally, we discuss the challenges remaining for Chinese NQG. Full article
(This article belongs to the Special Issue Technologies and Environments of Intelligent Education)

12 pages, 553 KB  
Article
Sentence Compression Using BERT and Graph Convolutional Networks
by Yo-Han Park, Gyong-Ho Lee, Yong-Seok Choi and Kong-Joo Lee
Appl. Sci. 2021, 11(21), 9910; https://doi.org/10.3390/app11219910 - 23 Oct 2021
Cited by 5 | Viewed by 3756
Abstract
Sentence compression is a natural language processing task that produces a short paraphrase of an input sentence by deleting words while ensuring grammatical correctness and preserving the meaningful core information. This study introduces a graph convolutional network (GCN) into the sentence compression task to encode syntactic information such as dependency trees. As we upgrade the GCN to handle directed edges, the compression model with GCN layers can distinguish between parent and child nodes of a dependency tree when aggregating adjacent nodes. Furthermore, by increasing the number of GCN layers, the model can gradually collect higher-order information from the dependency tree as node information propagates through the layers. We implement sentence compression models for both Korean and English. Each model consists of three components: a pre-trained BERT model, GCN layers, and a scoring layer. The scoring layer decides whether a word should remain in the compressed sentence, relying on the word vector containing contextual and syntactic information encoded by the BERT and GCN layers. To train and evaluate the proposed model, we used the Google sentence compression dataset for English and a Korean sentence compression corpus containing about 140,000 sentence pairs. The experimental results demonstrate that the proposed model achieves state-of-the-art performance for English. To the best of our knowledge, this is the first deep-learning sentence compression model trained on a large-scale corpus for Korean. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
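The deletion-based compression this abstract describes reduces, at inference time, to thresholding per-token keep scores while preserving word order. A minimal sketch (the scores here are hypothetical placeholders for what the paper's BERT+GCN scoring layer would output):

```python
def compress(tokens, keep_scores, threshold=0.5):
    """Deletion-based sentence compression: keep each token whose score
    clears the threshold; word order is preserved."""
    return " ".join(t for t, s in zip(tokens, keep_scores) if s >= threshold)

# Hypothetical keep scores, as a scoring layer might emit them.
tokens = "The company , founded in 1998 , announced record profits today".split()
scores = [0.9, 0.8, 0.2, 0.1, 0.1, 0.3, 0.2, 0.9, 0.7, 0.8, 0.4]
print(compress(tokens, scores))  # "The company announced record profits"
```

The modeling contribution of the paper lies in producing good scores, not in this trivial decoding step.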

18 pages, 1916 KB  
Article
Acoustic Sensing Analytics Applied to Speech in Reverberation Conditions
by Piotr Odya, Jozef Kotus, Adam Kurowski and Bozena Kostek
Sensors 2021, 21(18), 6320; https://doi.org/10.3390/s21186320 - 21 Sep 2021
Cited by 8 | Viewed by 3713
Abstract
The paper discusses a case study of acoustic sensing analytics and technology applied to reverberation conditions. Reverberation is one of the issues that makes speech in indoor spaces difficult to understand; the problem is particularly critical in large spaces with few absorbing or diffusing surfaces. One natural remedy to improve speech intelligibility in such conditions is to speak slowly, and algorithms can reduce the rate of speech (RoS) in real time. The study therefore aims to find recommended RoS values in the context of the speech transmission index (STI) in different acoustic environments. In the experiments, speech intelligibility for six impulse responses recorded in spaces with different STIs is investigated using a sentence test (for the Polish language). Fifteen subjects with normal hearing participated in these tests. The analysis enabled us to propose a curve specifying the maximum RoS values that still yield understandable speech under given acoustic conditions. This curve can be used in speech processing control technology as well as compressive reverse acoustic sensing. Full article
(This article belongs to the Special Issue Analytics and Applications of Audio and Image Sensing Techniques)
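An STI-to-maximum-RoS curve of the kind this abstract proposes could be applied by simple interpolation. The curve points below are entirely made up for illustration; the paper derives its own curve from the listening tests.

```python
def rate_of_speech(word_count, duration_s):
    """Rate of speech in words per minute."""
    return 60.0 * word_count / duration_s

def max_ros(sti, curve):
    """Linearly interpolate a (hypothetical) STI -> max intelligible RoS curve."""
    pts = sorted(curve)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= sti <= x1:
            return y0 + (y1 - y0) * (sti - x0) / (x1 - x0)
    raise ValueError("STI outside curve range")

# Illustrative curve only: worse acoustics (lower STI) tolerate slower speech.
curve = [(0.3, 90.0), (0.5, 130.0), (0.7, 170.0)]
print(rate_of_speech(120, 60))  # 120.0 wpm
print(max_ros(0.6, curve))      # 150.0
```

A real-time RoS reduction algorithm would compare the measured RoS against this ceiling and slow the speech when it is exceeded.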

14 pages, 314 KB  
Article
Abstractive Sentence Compression with Event Attention
by Su Jeong Choi, Ian Jung, Seyoung Park and Seong-Bae Park
Appl. Sci. 2019, 9(19), 3949; https://doi.org/10.3390/app9193949 - 20 Sep 2019
Cited by 2 | Viewed by 3097
Abstract
Sentence compression aims at generating a shorter sentence from a long, complex source sentence while preserving its important content. Because it enhances comprehensibility and readability, sentence compression is required for summarizing news articles, in which event words play a key role in delivering the meaning of the source sentence. This paper therefore proposes abstractive sentence compression with event attention. When compressing a sentence from a news article, event words should be preserved as important information; to this end, event attention is proposed, which focuses on the event words of the source sentence while generating the compressed sentence. The global information in the source sentence is as significant as the event words, since it captures the information of the whole source sentence, so the proposed model generates a compressed sentence by combining both attentions. According to the experimental results, the proposed model outperforms both a standard sequence-to-sequence model and the pointer generator on three datasets, namely the MSR dataset, the Filippova dataset, and a Korean sentence compression dataset. In particular, it shows a 122% higher BLEU score than the sequence-to-sequence model. The proposed model is therefore effective for sentence compression. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
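One simple way to realize the event-attention idea in this abstract is to add a bias to attention logits at event-word positions before the softmax; this is a sketch of the general mechanism, not the paper's exact formulation.

```python
import numpy as np

def event_attention(scores, event_mask, bias=2.0):
    """Softmax attention with extra weight added at event-word positions,
    so generation focuses on the events of the source sentence."""
    boosted = scores + bias * event_mask
    e = np.exp(boosted - boosted.max())
    return e / e.sum()

# Toy attention logits over 4 source tokens; position 2 holds an event word.
scores = np.array([0.2, 1.0, 0.1, 0.5])
event_mask = np.array([0.0, 0.0, 1.0, 0.0])
plain = event_attention(scores, np.zeros(4))
biased = event_attention(scores, event_mask)
print(biased[2] > plain[2])  # True: the event position gains attention mass
```

Combining this distribution with an unbiased (global) one, as the paper combines its two attentions, would weight both event and whole-sentence information.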
