Search Results

6 articles matched your search query. Search Parameters:
Keywords = time-frequency texture


Open Access Article: A Novel Vision-Based Classification System for Explosion Phenomena
J. Imaging 2017, 3(2), 14; doi:10.3390/jimaging3020014
Received: 4 October 2016 / Revised: 10 April 2017 / Accepted: 10 April 2017 / Published: 15 April 2017
Abstract
The need for a properly designed and implemented surveillance system that detects and categorizes explosion phenomena is rising as part of development planning for risk-reduction processes, including mitigation and preparedness. In this context, we introduce state-of-the-art explosion classification using pattern recognition techniques. We define seven classes of explosion and non-explosion phenomena: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds. Toward the classification goal, we collected a new dataset of 5327 2D RGB images that are used to train the classifier. Furthermore, to achieve high reliability in the proposed explosion classification system and to provide multiple analyses of the monitored phenomena, we employ several approaches to feature extraction: texture features, features in the spatial domain, and features in the transform domain. Texture features are measured on intensity levels, using the Principal Component Analysis (PCA) algorithm to obtain the 100 highest eigenvectors and eigenvalues. Features in the spatial domain are calculated using amplitude features such as the YCbCr color model; PCA is then used to reduce the vectors' dimensionality to 100 features. Lastly, features in the transform domain are calculated using the Radix-2 Fast Fourier Transform (Radix-2 FFT), and PCA is employed to extract the 100 highest eigenvectors. These texture, amplitude, and frequency features are combined into an input vector of length 300, which provides valuable insight into the images under consideration. The features are fed into a combiner that maps the input frames to the desired outputs and divides the space into regions or categories.
Thus, we employ a one-against-one multi-class degree-3 polynomial kernel Support Vector Machine (SVM). The proposed methodology was evaluated on a total of 980 frames retrieved from multiple YouTube videos, taken in real outdoor environments for the seven defined classes. We obtained an accuracy of 94.08%, and categorizing one frame took approximately 0.12 s.
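The abstract's pipeline (per-group PCA, then concatenation of the reduced groups into one input vector) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random data, array shapes, and the reduced toy setting of k=10 components per group are assumptions.

```python
import numpy as np

def pca_reduce(X, k=100):
    """Project the samples in X (n_samples x n_features) onto the
    top-k principal components (eigenvectors of the covariance matrix)."""
    Xc = X - X.mean(axis=0)                    # center each feature
    cov = np.cov(Xc, rowvar=False)             # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top

# Toy stand-ins for the three feature groups (texture, spatial/YCbCr
# amplitude, Radix-2 FFT magnitudes), each reduced and then concatenated.
rng = np.random.default_rng(0)
texture  = pca_reduce(rng.normal(size=(50, 200)), k=10)
spatial  = pca_reduce(rng.normal(size=(50, 150)), k=10)
spectral = pca_reduce(rng.normal(size=(50, 180)), k=10)
combined = np.hstack([texture, spatial, spectral])
print(combined.shape)  # (50, 30); the paper keeps 100 per group for 300 total
```

The combined vectors would then be passed to the classifier (here, the paper's degree-3 polynomial SVM).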
Open Access Article: Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor
Sensors 2017, 17(4), 734; doi:10.3390/s17040734
Received: 8 February 2017 / Revised: 28 March 2017 / Accepted: 29 March 2017 / Published: 31 March 2017
Abstract
The 3D measuring range and accuracy of traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this method must capture several fringe patterns with phase differences, which limits real-time performance. This study introduces a smart active optical sensor that utilizes a composite pattern. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies, so the zero frequency can be removed using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of the method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face.
(This article belongs to the Section Physical Sensors)
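The zero-frequency (DC) component that motivates this work can be seen on a synthetic one-dimensional fringe signal. This is a sketch of plain DC removal in the Fourier domain only; the offset, amplitude, and carrier frequency are made-up values, and the paper's composite-pattern method itself is not reproduced here.

```python
import numpy as np

# One scan line of a synthetic fringe pattern: a background (DC) offset plus
# a carrier-frequency cosine, as in Fourier transform profilometry.
n = 256
x = np.arange(n)
pattern = 120.0 + 40.0 * np.cos(2 * np.pi * 8 * x / n)

spectrum = np.fft.fft(pattern)
spectrum[0] = 0.0                      # suppress the zero (DC) component
filtered = np.fft.ifft(spectrum).real

print(round(pattern.mean(), 1))        # 120.0: the DC offset dominates the mean
print(round(abs(filtered.mean()), 1))  # 0.0: DC component removed
```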

Open Access Article: Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition
Sensors 2015, 15(1), 1458-1478; doi:10.3390/s150101458
Received: 16 September 2014 / Accepted: 1 December 2014 / Published: 14 January 2015
Cited by 7
Abstract
The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). This paper presents a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensities in different frequency bands; in terms of human visual perception, the texture of an emotional speech spectrogram at multiple resolutions should therefore be a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can discriminate between emotions more clearly than uniform-resolution analysis. To provide highly accurate emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based features, inspired by human visual perception of the spectrogram image, provide significant classification performance for real-life speech emotion recognition.
(This article belongs to the Section Physical Sensors)
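The time-frequency image that MRTII operates on is a spectrogram. A minimal short-time Fourier transform sketch, assuming an illustrative window length, hop size, and test tone (none taken from the paper), shows how a 1-D signal becomes a 2-D image that texture analysis can then be applied to:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram (frequency x time image) via a Hann-windowed STFT."""
    w = np.hanning(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# A rising test tone standing in for a speech signal; multi-resolution
# texture analysis would then operate on this 2-D image (for example,
# on progressively downsampled copies of it).
t = np.linspace(0.0, 1.0, 4000, endpoint=False)
sig = np.sin(2 * np.pi * (200 + 300 * t) * t)
S = spectrogram(sig)
print(S.shape)  # (frequency bins, time frames)
```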
Open Access Article: The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech
Sensors 2014, 14(9), 16692-16714; doi:10.3390/s140916692
Received: 9 June 2014 / Revised: 24 August 2014 / Accepted: 29 August 2014 / Published: 9 September 2014
Cited by 13
Abstract
In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). The idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from time-frequency representations of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize the emotional state. To evaluate the effectiveness of the proposed emotion recognition method in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, together with one self-recorded database (KHUSC-EmoDB), to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, provides significant classification performance for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual form, beyond conveying pitch and formant tracks. In addition, de-noising can be completed more easily in 2-D images than in 1-D speech.
(This article belongs to the Section Physical Sensors)
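Laws' masks, which the abstract uses to extract texture image information, are outer products of small 1-D kernels. The sketch below illustrates the general technique on a toy striped image standing in for a spectrogram; the image, the filtering details, and the energy measure are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Laws' 1-D kernels; 2-D texture masks are their outer products.
L5 = np.array([ 1,  4, 6, 4,  1], float)   # level (local average)
E5 = np.array([-1, -2, 0, 2,  1], float)   # edge
S5 = np.array([-1,  0, 2, 0, -1], float)   # spot

def texture_energy(img, row_k, col_k):
    """Mean absolute response of img filtered with the mask row_k (x) col_k."""
    mask = np.outer(row_k, col_k)
    m = mask.shape[0]
    h, w = img.shape
    out = np.zeros((h - m + 1, w - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + m, j:j + m] * mask)
    return np.abs(out).mean()

# A toy image with vertical stripes: it responds to horizontal edge
# detection (L5 along rows, E5 along columns) but gives exactly zero for
# the transposed mask, because each column is constant.
img = np.tile(np.repeat([0.0, 1.0, 0.0, 1.0], 4), (16, 1))
print(texture_energy(img, L5, E5) > texture_energy(img, E5, L5))  # True
```

In a full TII pipeline, the energies of several mask combinations (L5/E5/S5 and their transposes) would form the texture feature vector for each spectrogram.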
Open Access Article: Effect of Radio Frequency Heating on Yoghurt, II: Microstructure and Texture
Foods 2014, 3(2), 369-393; doi:10.3390/foods3020369
Received: 28 February 2014 / Revised: 17 April 2014 / Accepted: 9 June 2014 / Published: 20 June 2014
Cited by 2
Abstract
Radio frequency (RF) heating was applied to stirred yoghurt after culturing in order to enhance shelf-life and thereby meet industrial demands in countries where the distribution cold chain cannot be implicitly guaranteed. In parallel, a convectional (CV) heating process was also tested. In order to meet consumers' expectations with regard to texture and sensory properties, the yoghurts were heated to different temperatures (58, 65 and 72 °C). This second part of our feasibility study focused on the changes in microstructure and texture caused by post-fermentative heat treatment. Additional heat treatment always produced microstructural changes. Compared to the dense and compact casein network in the stirred reference yoghurt, network contractions and further protein aggregation were observed after heat treatment, while at the same time larger pore geometries were detected. The changes in microstructure, as well as other physical and sensory texture properties (syneresis, hardness, cohesiveness, gumminess, apparent viscosity, G', G'', homogeneity), were in good agreement with the temperature and time of the heat treatment (thermal stress). The RF-heated products were found to be very similar to the stirred reference yoghurt, showing potential for further industrial development, such as novel heating strategies to obtain products with prolonged shelf-life.
(This article belongs to the Special Issue Thermal Processing of Foods)
Open Access Article: Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network
Sensors 2011, 11(11), 10534-10556; doi:10.3390/s111110534
Received: 4 July 2011 / Revised: 8 October 2011 / Accepted: 1 November 2011 / Published: 4 November 2011
Cited by 14
Abstract
The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida, we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. For bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same frequency. The automated image analysis protocols for both study cases were created in MATLAB 7.1. Automation for Munida counting combined filtering and background correction (median and top-hat filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classification was carried out with multivariate morphometric statistics (Partial Least Squares Discriminant Analysis; PLSDA) on the mean RGB value of each object plus Fourier Descriptors (RGBv+FD), and with SIFT and ED. The SIFT approach returned the best results: higher percentages of images were correctly classified, and fewer misclassification errors (an animal present but not detected) occurred. In contrast, RGBv+FD and ED produced a high incidence of records for animals that were not present.
Bacterial mat coverage was estimated in terms of percent coverage and fractal dimension. A constant Region of Interest (ROI) was defined, and background extraction by a Gaussian blurring filter was performed. Image subtraction within the ROI was followed by summation of the RGB channel matrices, and percent coverage was calculated on the resulting image. Fractal dimension was estimated using the box-counting method: the images were resized to a dimension in pixels equal to a power of 2, allowing subdivision into sub-multiple quadrants. In comparisons of manual and automated percent coverage and fractal dimension estimates, the manual estimates tended to overestimate both parameters. The primary limitations on the automatic analysis of benthic images were habitat variations in sediment texture and water column turbidity. The application of filters for background correction is a required preliminary step for the efficient recognition of animals and bacterial mat patches.
(This article belongs to the Section Physical Sensors)
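The box-counting estimate of fractal dimension mentioned in the abstract can be sketched as follows: count occupied boxes at dyadic scales (this is why the images are resized to power-of-2 dimensions) and fit the log-log slope. The binary mask, the scales, and the least-squares fit here are generic assumptions, not the study's exact code.

```python
import numpy as np

def box_count_dimension(mask):
    """Estimate the fractal dimension of a binary coverage mask by box
    counting. mask is a square boolean array whose side is a power of 2."""
    n = mask.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 1:
        k = n // size
        # number of size x size boxes containing at least one True pixel
        boxes = mask.reshape(k, size, k, size).any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(boxes)
        size //= 2
    # slope of log(count) versus log(1/box size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope

# A fully covered plane should come out at dimension 2.
filled = np.ones((128, 128), bool)
print(round(box_count_dimension(filled), 2))  # 2.0
```

For a real coverage mask, the slope falls between 0 (isolated points) and 2 (full coverage), which is what makes it a useful summary of mat patchiness.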
