
Detection of Left-Sided and Right-Sided Hearing Loss via Fractional Fourier Transform

School of Computer Science and Technology, Nanjing Normal University, Nanjing 210023, China
Department of Radiology, Nanjing Children’s Hospital, Nanjing Medical University, Nanjing 210008, China
School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan 430073, China
School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
School of Information Science and Engineering, Changzhou University, Changzhou 213164, China
Department of Radiology, Zhong Da Hospital, Southeast University, Nanjing 210009, China
Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, Nanjing 210042, China
Key Laboratory of Statistical Information Technology and Data Mining, State Statistics Bureau, Chengdu 610225, China
Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
Author to whom correspondence should be addressed.
These authors contributed equally to this paper.
Entropy 2016, 18(5), 194;
Submission received: 30 March 2016 / Revised: 12 May 2016 / Accepted: 16 May 2016 / Published: 19 May 2016
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)


In order to detect hearing loss more efficiently and accurately, this study proposed a new method based on the fractional Fourier transform (FRFT). Three-dimensional volumetric magnetic resonance images were obtained from 15 patients with left-sided hearing loss (LHL), 20 healthy controls (HC), and 14 patients with right-sided hearing loss (RHL). Twenty-five FRFT spectra were reduced by principal component analysis with thresholds of 90%, 95%, and 98%, respectively. The classifier was a single-hidden-layer feed-forward neural network (SFN) trained by the Levenberg–Marquardt algorithm. The results showed that the accuracies for all three classes were higher than 95%. Overall, our method is promising and may attract interest from other researchers.

1. Introduction

Sensorineural hearing loss (SNHL) is a type of deafness characterized by the gradual decrease of frequency response thresholds [1]. SNHL comprises sensory hearing loss and neural hearing loss [2]. The former is mainly caused by poor cochlear hair cell function, while the latter is due to damage to the cochlear nerve [3,4].
From the neuroimaging point of view, SNHL is characterized by slight atrophy in several brain regions [5,6,7]. Nevertheless, it is rather difficult for physicians to identify the altered areas visually. Hence, computer-aided diagnosis (CAD) is commonly used to assist physicians.
A CAD system commonly comprises three stages: feature extraction, feature selection, and classification. The feature extraction stage obtains distinguishing features. The feature selection stage reduces the number of features; it may be omitted when the feature number is small. The final classification stage builds a classifier to recognize the input features.
Scholars tend to use discrete wavelet transform (DWT). Fatemizadeh and Shooshtari [8] used region-based DWT and an adaptive mesh design to realize Magnetic Resonance (MR) image compression. Gareis et al. [9] utilized discrete dyadic wavelet transform to extract features on brain-computer interfaces. Arizmendi et al. [10] employed DWT and Bayesian neural networks over Magnetic Resonance Spectroscopy (MRS) data to classify human brain tumors. Vivas et al. [11] used DWT and an adaptive neuro-fuzzy inference system to develop a brain-machine interface. Nayak et al. [12] used DWT to classify brain magnetic resonance (MR) images. Saber et al. [13] used DWT to detect parallel transmission line faults. Yang et al. [14] used DWT to analyze a spectrum for detecting brain tumors. Sharma et al. [15] used DWT to identify focal electroencephalogram signals. Sours et al. [16] used DWT to investigate multiple frequency ranges of resting state functional connectivity in mild traumatic brain injury patients.
However, it is difficult to determine the optimal wavelet function. Besides, the DWT suffers from translation variance. Although the stationary wavelet transform and the wavelet packet transform can solve this problem, they increase the computational burden significantly [17,18,19].
In this paper, we suggest the use of a new transform, the fractional Fourier transform (FRFT) [20]. The FRFT is related to the fractional derivative [21], fractal geometry [22], the conformable derivative [23], and fractal theory [24]. The FRFT can transform a given image to the so-called "unified time-frequency domain (UTFD)". The FRFT has been proven to deliver better performance than the DWT in many applications [25,26].
The remainder of this paper is organized as follows: Section 2 presents the materials. Section 3 gives the preprocessing steps. Section 4 describes the methodology. Section 5 offers the results and discussion. Finally, Section 6 concludes the paper and raises some potential research directions.

2. Materials

The study cohort included 15 patients with left-sided hearing loss (LHL), 14 patients with right-sided hearing loss (RHL), and 20 age- and sex-matched healthy controls (HC). Patients with sudden sensorineural unilateral hearing loss (UHL) of moderate-to-severe degree were enrolled from the outpatient clinic of a department of otorhinolaryngology and head-and-neck surgery, and healthy controls were recruited from the community by advertisement. Subjects were excluded if there was evidence of known psychiatric or neurological diseases, brain lesions such as tumors or strokes, use of psychotropic medications, or contraindications to MR imaging. Informed written consent was obtained from all subjects, and the study was approved by the Ethics Committee of Zhongda Hospital, which is affiliated with Southeast University.
Magnetic resonance imaging (MRI) was performed using a 3.0 T MRI system (Siemens Verio, Erlangen, Germany). The imaging parameters were as follows: 3D SPGR, TR 1900 ms, TE 2.48 ms, TI 900 ms, flip angle 9°, FOV 256 mm × 256 mm, voxel size 1 mm × 1 mm × 1 mm, and 1.0 mm sagittal slices.
Pure tone audiometry at six octave frequencies (250, 500, 1000, 2000, 4000, and 8000 Hz) was used to evaluate the pure tone average (PTA) and to reflect hearing performance. Note that all patients were diagnosed with normal hearing in one ear (PTA ≤ 25 dB) and UHL in the other (PTA ≥ 40 dB). The hearing loss was sudden and persistent for each patient. No patient used a hearing aid over the impaired ear. Subject characteristics are shown in Table 1.

3. Preprocessing

The FMRIB Software Library (FSL) v5.0 was used for preprocessing. We used the brain extraction tool (BET) to extract the brain and remove the skull. The results are shown in Figure 1, where the red lines outline the edges of the extracted brains.
Then, the brains of all subjects were normalized into the standard stereotaxic Montreal Neurological Institute (MNI) space using FMRIB's Linear Image Registration Tool (FLIRT) and FMRIB's Nonlinear Image Registration Tool (FNIRT). The former performs linear registration, i.e., it translates, rotates, zooms, and shears the brain image to match the standard MNI template; the latter permits local deformation so as to achieve better registration results. The normalized images were resampled to 2 mm isotropic voxels.
Finally, the images were spatially smoothed with an isotropic Gaussian filter with a full width at half maximum (FWHM) of 10 mm. Three experienced radiologists were instructed to select the most distinctive slice (around the 40th) between SNHL patients and HCs, i.e., the slice containing the most significant discrepancy information.
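The smoothing step was performed in FSL in the study; as an equivalent sketch, the 10 mm FWHM can be converted to a standard deviation in voxel units (assuming the 2 mm resampled grid) and applied with `scipy.ndimage.gaussian_filter`. The volume below is random placeholder data, not a real scan:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# FWHM -> standard deviation: FWHM = 2 * sqrt(2 * ln 2) * sigma (approx. 2.355 * sigma)
fwhm_mm, voxel_mm = 10.0, 2.0                      # 10 mm kernel on a 2 mm isotropic grid
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)) * voxel_mm)

# Toy volume with the dimensions of the MNI 2 mm template grid
vol = np.random.default_rng(0).normal(size=(91, 109, 91))
smoothed = gaussian_filter(vol, sigma=sigma_vox)   # isotropic Gaussian smoothing
```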

4. Methodology

The fractional Fourier transform (FRFT) [27,28,29] can be viewed as a transform that obtains spectra in a unified time-frequency domain (UTFD). Ran et al. [30] reviewed the progress of the FRFT; they pointed out that the FRFT can be regarded as a rotation in the time-frequency plane, and thus defined the UTFD. Deng and Tao [31] also regarded the FRFT as a unified time-frequency transform. Zhang et al. [32] employed the FRFT to obtain the unified time-frequency spectrum.
The unified time-frequency spectrum of a time-domain signal is a representation of that signal in the UTFD. It has been reported that the UTFD offers better classification performance than the discrete wavelet transform (DWT) in many fields, because the UTFD permits rotation angles of arbitrary precision, whereas the DWT has an upper limit on the number of decomposition levels. For example, Pan et al. [33] proposed a UTFD orthogonal frequency division multiplexing transmission system with self-interference cancellation. Zhu et al. [34] used the UTFD to analyze time-modulated arrays. Tripathy et al. [35] employed the UTFD in a differential relaying scheme for a double-circuit transmission line.
Mathematically, the fractional Fourier transform (FRFT) [36,37,38] is a powerful tool for analyzing signals in the UTFD. Suppose a one-dimensional (1D) signal is x(t); its FRFT with rotation angle α is [39] (the two-dimensional case is obtained by applying the transform separably along rows and columns)
$$ X_\alpha(u) = \int_{-\infty}^{+\infty} x(t) \, K_\alpha(u, t) \, \mathrm{d}t $$
where u denotes the spectral frequency (not the angular frequency), and K_α denotes the transform kernel [40]
$$ K_\alpha(u, t) = \sqrt{1 - j \cot\alpha} \, \exp\!\left( j\pi \left( t^2 \cot\alpha - 2ut \csc\alpha + u^2 \cot\alpha \right) \right) $$
where j denotes the imaginary unit. When α is a multiple of π, the kernel above is singular, and the final result is obtained by taking the limit of the function [41,42,43].
To illustrate, a simulated sine function sin(t) with two periods is used. Figure 2 shows the FRFT results as α increases from 0 to 1 with an equal step of 0.2. These FRFT results correspond to the UTFD, with α acting as an adjusting parameter. When α increases to 1, the UTFD approximates the traditional frequency spectrum. Conversely, when α decreases to 0, the UTFD approximates the time domain (for a time signal) or the spatial domain (for an image).
In this study, we assigned the five values 0.2, 0.4, 0.6, 0.8, and 1.0 to both rotation angles, (i) α for the row direction and (ii) β for the column direction. There are in total 5 × 5 = 25 combinations of α and β; hence, the FRFT yields 25 UTFD spectra for each brain image. The FRFT programs used in this study were downloaded from the website [44].
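The FRFT implementation itself came from an external website [44] and is not reproduced in the paper. As a minimal sketch, a discrete FRFT of order α can be realized as the α-th fractional power of the unitary DFT matrix, applied separably along rows and columns. The helper names (`frft_matrix`, `frft2`) and the toy 32 × 32 image are our own illustrative assumptions, and this discretization is only one of several in the literature, not necessarily the one the authors used:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def frft_matrix(n, a):
    # Unitary DFT matrix; its a-th fractional power acts as a discrete FRFT
    # of order a (a = 1 recovers the ordinary DFT, a = 0 the identity).
    F = np.fft.fft(np.eye(n), norm="ortho")
    return fractional_matrix_power(F, a)

def frft2(img, alpha, beta):
    # Separable 2D FRFT: angle beta along columns, angle alpha along rows.
    Fb = frft_matrix(img.shape[0], beta)
    Fa = frft_matrix(img.shape[1], alpha)
    return Fb @ img @ Fa.T

img = np.random.default_rng(0).normal(size=(32, 32))  # toy stand-in for a brain slice
angles = (0.2, 0.4, 0.6, 0.8, 1.0)
spectra = {(a, b): frft2(img, a, b) for a in angles for b in angles}  # 25 UTFD spectra
```

With α = β = 1.0 the result coincides with the ordinary 2D FFT, matching the observation that the UTFD degrades to the traditional frequency spectrum at that setting.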
The UTFD spectra produced by the FRFT were then vectorized and concatenated into a column vector C. Afterwards, principal component analysis (PCA) was used to extract features from C with three different thresholds of 90%, 95%, and 98% of the total variance, respectively.
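The variance-threshold reduction can be sketched with scikit-learn, whose `PCA` accepts a fractional `n_components` as exactly this kind of threshold. The data matrix here is a random stand-in for the 49 subjects' concatenated spectrum vectors:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 200))   # 49 subjects x (toy) vectorized UTFD features

# n_components in (0, 1) keeps the smallest number of components whose
# cumulative explained variance reaches the threshold (here 98%).
pca = PCA(n_components=0.98, svd_solver="full")
Z = pca.fit_transform(X)
```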
The reduced features were then submitted to a single-hidden-layer feed-forward neural network (SFN) [45,46,47]. We did not use multiple hidden layers, since the sample size is small and the problem is not very complicated. To guarantee performance, the number of hidden neurons was initially assigned a large value (50 in this study) and was then decreased until the classification performance deteriorated. To train the weights and biases of the SFN, we employed the classical Levenberg–Marquardt algorithm [48,49,50], which shows superior performance in many fields.
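An SFN trained by Levenberg-Marquardt is most directly available as MATLAB's `trainlm`; as a rough Python sketch under our own assumptions (toy data, a tanh hidden layer, linear outputs, and sum-of-squares residuals), the LM solver in `scipy.optimize.least_squares` can play the same role:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                  # toy features: 40 samples, 5 inputs
T = np.eye(3)[rng.integers(0, 3, size=40)]    # one-hot targets for 3 classes

n_in, n_hid, n_out = 5, 10, 3                 # the paper starts from 50 hidden neurons

def unpack(w):
    # Split the flat parameter vector into layer weights and biases.
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    return W1, b1, W2, b2

def residuals(w):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)                  # single hidden layer
    return ((H @ W2 + b2) - T).ravel()        # per-output residuals for LM

n_par = n_in * n_hid + n_hid + n_hid * n_out + n_out
w0 = rng.normal(scale=0.1, size=n_par)
fit = least_squares(residuals, w0, method="lm")   # Levenberg-Marquardt
```

Note that `method="lm"` requires at least as many residuals as parameters (here 120 residuals vs. 93 parameters), which is one practical reason to keep the hidden layer small on a small dataset.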
Our dataset is small, so dividing it into fixed training, validation, and test sets would risk overfitting. Instead, 10-fold cross-validation was used to help avoid overfitting and to estimate the out-of-sample error. We repeated the 10-fold cross-validation 10 times; the 10 repetitions alleviate random effects, and our experience showed that further increasing the number of repetitions merely enlarges the computational burden.
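The repeated 10-fold protocol can be sketched with scikit-learn's `RepeatedStratifiedKFold` (stratification is our assumption, to keep the 20/15/14 class proportions in every fold); the features here are random placeholders:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

y = np.array([0] * 20 + [1] * 15 + [2] * 14)        # HC, LHL, RHL labels
X = np.random.default_rng(0).normal(size=(49, 8))   # toy reduced features

# 10 folds x 10 repetitions = 100 train/test partitions in total
rkf = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
folds = list(rkf.split(X, y))
```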

5. Results and Discussion

5.1. Unified Time-Frequency Domain

The 25 UTFD spectra are displayed in Figure 3. Here we can see that the UTFD degrades to the traditional frequency spectrum when both rotation angles equal 1.0. These 25 spectra reflect unified time-frequency features that the traditional Fourier transform cannot extract.

5.2. Optimal Threshold of PCA

Next, the vectorized features from the 25 spectra of each image were assembled into a data matrix. PCA was employed with thresholds of 90%, 95%, and 98%, respectively. The average accuracy (AA) was used as the performance measure.
Figure 4 shows that the AA reached 91.84% for a threshold of 90%, 94.29% for 95%, and 95.10% for 98%, i.e., the AA increases as the threshold becomes larger. Nevertheless, a larger threshold also retains more features after reduction, which increases the computational burden. Weighing accuracy against cost, we finally set the threshold to 98%.

5.3. Evaluation

The evaluation results of the 10 repetitions of 10-fold cross-validation are displayed in Table 2. The overall average accuracy of our method is 95.10%. For the single HC class, we achieved a sensitivity of 96.50%, a specificity of 97.93%, a precision of 96.98%, and an accuracy of 97.35%. For the single LHL class, we achieved a sensitivity of 94.00%, a specificity of 97.35%, a precision of 94.00%, and an accuracy of 96.33%. For the single RHL class, we achieved a sensitivity of 94.29%, a specificity of 97.43%, a precision of 93.62%, and an accuracy of 96.53%.
Table 2 shows that our method yields satisfactory detection results for HC, LHL, and RHL: the detection accuracies are all higher than 95%. This indicates that our method could be applied in hospitals to assist physicians in making diagnoses based on magnetic resonance images. Nevertheless, our method does not achieve 100% accuracy, which leaves a direction for future research.

6. Conclusions

In this study, we developed a new method for detecting unilateral hearing loss (both left-sided and right-sided), based on the combination of the fractional Fourier transform and principal component analysis. The results show that our method yields promising performance.
In the future, we will continue to improve the classification performance and will test advanced classifiers, such as the linear regression classifier [51]. Besides, FLAIR imaging [52] and computed tomography (CT) will be incorporated to increase the classification performance. Another research direction is to use the fractional derivative [53] to extract hearing-loss-related features.


Acknowledgments

This paper was supported by the Natural Science Foundation of Jiangsu Province (BK20150983), the Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing (BM2013006), the Program of Natural Science Research of Jiangsu Higher Education Institutions (15KJB470010), the Special Funds for Scientific and Technological Achievement Transformation Project in Jiangsu Province (BA2013058), the Nanjing Normal University Research Foundation for Talented Scholars (2013119XGQ0061, 2014119XGQ0080), the Open Project Program of the State Key Lab of CAD&CG, Zhejiang University (A1616), the Open Fund of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, Jilin University (93K172016K17), the Open Fund of the Key Laboratory of Statistical Information Technology and Data Mining, State Statistics Bureau (SDL201608), the Science and Technology Program of Changzhou City (CE20145055), and the Qing Lan Project of Jiangsu Province. We also thank Y. Chen for his substantial help.

Author Contributions

Shuihua Wang and Ming Yang conceived the study. Yudong Zhang and Yin Zhang designed the model. Ming Yang and Bin Liu acquired the data. Jianwu Li and Ling Zou analyzed the data. Siyuan Lu and Yudong Zhang processed the data. Shuihua Wang and Jiquan Yang interpreted the results. Shuihua Wang and Ming Yang developed the program. Shuihua Wang, Jianwu Li and Siyuan Lu wrote the draft. All authors gave critical revisions. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.


Abbreviations

The following abbreviations are used in this manuscript:

SNHL	Sensorineural hearing loss
UHL	Unilateral hearing loss
LHL	Left-sided hearing loss
RHL	Right-sided hearing loss
HC	Healthy control
MRI	Magnetic resonance imaging
PTA	Pure tone average
MNI	Montreal Neurological Institute
UTFD	Unified time-frequency domain


  1. Tseng, C.-C.; Hu, L.-Y.; Liu, M.-E.; Yang, A.C.; Shen, C.-C.; Tsai, S.-J. Risk of depressive disorders following sudden sensorineural hearing loss: A nationwide population-based retrospective cohort study. J. Affect. Disord. 2016, 197, 94–99. [Google Scholar] [CrossRef] [PubMed]
  2. Kitoh, R.; Nishio, S.Y.; Ogawa, K.; Okamoto, M.; Kitamura, K.; Gyo, K.; Sato, H.; Nakashima, T.; Fukuda, S.; Fukushima, K.; et al. SOD1 gene polymorphisms in sudden sensorineural hearing loss. Acta Oto-Laryngol. 2016, 136, 465–469. [Google Scholar] [CrossRef] [PubMed]
  3. Kim, T.S.; Yoo, M.H.; Lee, H.S.; Yang, C.J.; Ahn, J.H.; Chung, J.W.; Park, H.J. Short-term changes in tinnitus pitch related to audiometric shape in sudden sensorineural hearing loss. Auris Nasus Larynx 2016, 43, 281–286. [Google Scholar] [CrossRef] [PubMed]
  4. Aarhus, L.; Tambs, K.; Nafstad, P.; Bjorgan, E.; Engdahl, B. Childhood sensorineural hearing loss: Effects of combined exposure with aging or noise exposure later in life. Eur. Arch. Oto-Rhino-Laryn. 2016, 273, 1099–1105. [Google Scholar] [CrossRef] [PubMed]
  5. Komara, M.; John, A.; Suleiman, J.; Ali, B.R.; Al-Gazali, L. Clinical and molecular delineation of dysequilibrium syndrome type 2 and profound sensorineural hearing loss in an inbred Arab family. Am. J. Med. Genet. A 2016, 170, 540–543. [Google Scholar] [CrossRef] [PubMed]
  6. Geng, Z.J.; Zhang, Q.; Li, W.; Zhang, J. Auditory cortical responses evoked by pure tones in healthy and sensorineural hearing loss subjects: Functional MRI and magnetoencephalography. Chin. Med. J. 2006, 119, 1548–1554. [Google Scholar]
  7. Fan, W.L.; Zhang, W.J.; Li, J.; Zhao, X.Y.; Mella, G.; Lei, P.; Liu, Y.; Wang, H.H.; Cheng, H.M.; Shi, H.; et al. Altered contralateral auditory cortical morphology in unilateral sudden sensorineural hearing loss. Otol. Neurotol. 2015, 36, 1622–1627. [Google Scholar] [CrossRef] [PubMed]
  8. Fatemizadeh, E.; Shooshtari, P. Roi-based 3D human brain magnetic resonance images compression using adaptive mesh design and region-based discrete wavelet transform. Int. J. Wavelets Multiresolut. Inf. Process. 2010, 8, 407–430. [Google Scholar] [CrossRef]
  9. Gareis, I.; Gentiletti, G.; Acevedo, R.; Rufiner, L. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results. J. Phys. Conf. Ser. 2011, 313, 012011. [Google Scholar] [CrossRef]
  10. Arizmendi, C.; Vellido, A.; Romero, E. Classification of human brain tumours from MRS data using Discrete Wavelet Transform and Bayesian Neural Networks. Expert Syst. Appl. 2012, 39, 5223–5232. [Google Scholar] [CrossRef]
  11. Vivas, E.L.A.; Garcia-Gonzalez, A.; Figueroa, I.; Fuentes, R.Q. Discrete Wavelet Transform and ANFIS Classifier for Brain-Machine Interface based on EEG. In Proceedings of the 6th International Conference on Human System Interactions, Sopot, Poland, 6–8 June 2013; Paja, W.A., Wilamowski, B.M., Eds.; IEEE: New York, NY, USA, 2013; pp. 137–144. [Google Scholar]
  12. Nayak, D.R.; Dash, R.; Majhi, B. Brain MR image classification using two-dimensional discrete wavelet transform and AdaBoost with random forests. Neurocomputing 2016, 177, 188–197. [Google Scholar] [CrossRef]
  13. Saber, A.; Emam, A.; Amer, R. Discrete wavelet transform and support vector machine-based parallel transmission line faults classification. IEEJ Trans. Electr. Electron. Eng. 2016, 11, 43–48. [Google Scholar] [CrossRef]
  14. Yang, G.; Nawaz, T.; Barrick, T.R.; Howe, F.A.; Slabaugh, G. Discrete Wavelet Transform-Based Whole-Spectral and Subspectral Analysis for Improved Brain Tumor Clustering Using Single Voxel MR Spectroscopy. IEEE Trans. Biomed. Eng. 2015, 62, 2860–2866. [Google Scholar] [CrossRef] [PubMed]
  15. Sharma, R.; Pachori, R.B.; Acharya, U.R. An Integrated Index for the Identification of Focal Electroencephalogram Signals Using Discrete Wavelet Transform and Entropy Measures. Entropy 2015, 17, 5218–5240. [Google Scholar] [CrossRef]
  16. Sours, C.; Chen, H.; Roys, S.; Zhuo, J.; Varshney, A.; Gullapalli, R.P. Investigation of Multiple Frequency Ranges Using Discrete Wavelet Decomposition of Resting-State Functional Connectivity in Mild Traumatic Brain Injury Patients. Brain Connect. 2015, 5, 442–450. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, S.; Du, S.; Atangana, A.; Liu, A.; Lu, Z. Application of stationary wavelet entropy in pathological brain detection. Multimed. Tools Appl. 2016. [Google Scholar] [CrossRef]
  18. Hemmati, F.; Orfali, W.; Gadala, M.S. Roller bearing acoustic signature extraction by wavelet packet transform, applications in fault detection and size estimation. Appl. Acoust. 2016, 104, 101–118. [Google Scholar] [CrossRef]
  19. Asgarian, B.; Aghaeidoost, V.; Shokrgozar, H.R. Damage detection of jacket type offshore platforms using rate of signal energy using wavelet packet transform. Mar. Struct. 2016, 45, 1–21. [Google Scholar] [CrossRef]
  20. Poularikas, A.D. Transforms and Applications Handbook; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
  21. Atangana, A.; Alkahtanil, B.S.T. New model of groundwater flowing within a confine aquifer: Application of Caputo-Fabrizio derivative. Arabian J. Geosci. 2016, 9. [Google Scholar] [CrossRef]
  22. Cattani, C.; Pierro, G. On the Fractal Geometry of DNA by the Binary Image Analysis. Bull. Math. Biol. 2013, 75, 1544–1570. [Google Scholar] [CrossRef] [PubMed]
  23. Atangana, A.; Baleanu, D.; Alsaedi, A. New properties of conformable derivative. Open Math. 2015, 13, 889–898. [Google Scholar] [CrossRef]
  24. Jian-Kai, L.; Cattani, C.; Wan-Qing, S. Power Load Prediction Based on Fractal Theory. Adv. Math. Phys. 2015, 2015, 827238. [Google Scholar] [CrossRef]
  25. Zhang, Y.-D.; Chen, S.; Wang, S.-H.; Yang, J.-F.; Phillips, P. Magnetic resonance brain image classification based on weighted-type fractional Fourier transform and nonparallel support vector machine. Int. J. Imaging Syst. Technol. 2015, 25, 317–327. [Google Scholar] [CrossRef]
  26. Yang, R.Q.; Bai, Z.Y.; Yin, L.G.; Gao, H. Detecting of Copy-Move Forgery in Digital Images Using Fractional Fourier Transform. In Proceedings of the Seventh International Conference on Digital Image Processing, Los Angeles, CA, USA, 9–10 April 2015.
  27. Ozaktas, H.M.; Kutay, M.A.; Mendlovic, D. Introduction to the Fractional Fourier Transform and Its Applications. Adv. Imaging Electron Phys. 1999, 106, 239–291. [Google Scholar]
  28. Healy, J.J.; Kutay, M.A.; Ozaktas, H.M.; Sheridan, J.T. Linear Canonical Transforms; Springer: New York, NY, USA, 2016. [Google Scholar]
  29. Ozaktas, H.M.; Zalevsky, Z.; Kutay, M.A. The Fractional Fourier Transform; Wiley: Chichester, UK, 2001. [Google Scholar]
  30. Ran, T.; Feng, Z.; Yue, W. Research progress on discretization of fractional Fourier transform. Sci. China Ser. F Inf. Sci. 2008, 51, 859–880. [Google Scholar]
  31. Deng, B.; Tao, R. The Analysis of Resolution of the Discrete Fractional Fourier Transform. In Proceedings of the First International Conference on Innovative Computing, Information and Control, Beijing, China, 30 August–1 September 2006; pp. 10–13.
  32. Zhang, Y.D.; Wang, S.H.; Liu, G.; Yang, J.Q. Computer-aided diagnosis of abnormal breasts in mammogram images by weighted-type fractional Fourier transform. Adv. Mech. Eng. 2016, 8. [Google Scholar] [CrossRef]
  33. Pan, C.Y.; Dai, L.L.; Yang, Z.X. Unified Time-Frequency OFDM Transmission with Self Interference Cancellation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2013, E96A, 807–813. [Google Scholar] [CrossRef]
  34. Zhu, Q.J.; Yang, S.W.; Yao, R.L.; Huang, M.; Nie, Z.P. Unified Time- and Frequency-Domain Study on Time-Modulated Arrays. IEEE Trans. Antennas Propag. 2013, 61, 3069–3076. [Google Scholar] [CrossRef]
  35. Tripathy, L.N.; Samantaray, S.R.; Dash, P.K. A fast time-frequency transform based differential relaying scheme for UPFC based double-circuit transmission line. Int. J. Electric. Power Energy Syst. 2016, 77, 404–417. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Yang, X.; Cattani, C.; Rao, R.; Wang, S.; Phillips, P. Tea Category Identification Using a Novel Fractional Fourier Entropy and Jaya Algorithm. Entropy 2016, 18, 77. [Google Scholar] [CrossRef]
  37. Azoug, S.E.; Bouguezel, S. A non-linear preprocessing for opto-digital image encryption using multiple-parameter discrete fractional Fourier transform. Opt. Commun. 2016, 359, 85–94. [Google Scholar] [CrossRef]
  38. Goel, N.; Singh, K. Convolution and correlation theorems for the offset fractional Fourier transform and its application. AEU Int. J. Electron. Commun. 2016, 70, 138–150. [Google Scholar] [CrossRef]
  39. Ozaktas, H.M.; Arik, S.O.; Coskun, T. Fundamental structure of Fresnel diffraction: Longitudinal uniformity with respect to fractional Fourier order. Opt. Lett. 2012, 37, 103–105. [Google Scholar] [CrossRef] [PubMed]
  40. Oktem, F.S.; Ozaktas, H.M. Equivalence of linear canonical transform domains to fractional Fourier domains and the bicanonical width product: A generalization of the space-bandwidth product. J. Opt. Soc. Am. A 2010, 27, 1885–1895. [Google Scholar] [CrossRef] [PubMed]
  41. Yang, X.; Sun, P.; Dong, Z.; Liu, A.; Yuan, T.-F. Pathological Brain Detection by a Novel Image Feature—Fractional Fourier Entropy. Entropy 2015, 17, 8275–8296. [Google Scholar]
  42. Elhoseny, H.M.; Faragallah, O.S.; Ahmed, H.E.H.; Kazemian, H.B.; El-sayed, H.S.; Abd El-Samie, F.E. The Effect of Fractional Fourier Transform Angle in Encryption Quality for Digital Images. Optik 2016, 127, 315–319. [Google Scholar] [CrossRef]
  43. Tang, L.L.; Huang, C.T.; Pan, J.S.; Liu, C.Y. Dual watermarking algorithm based on the Fractional Fourier Transform. Multimed. Tools Appl. 2015, 74, 4397–4413. [Google Scholar] [CrossRef]
  44. Calculation of the Fractional Fourier Transform. Available online: (accessed on 18 May 2016).
  45. Zhang, Y.; Wang, S.; Ji, G.; Phillips, P. Fruit classification using computer vision and feedforward neural network. J. Food Eng. 2014, 143, 167–177. [Google Scholar] [CrossRef]
  46. Lahmiri, S. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques. Physica A 2016, 444, 388–396. [Google Scholar] [CrossRef]
  47. Simsir, M.; Bayjr, R.; Uyaroglu, Y. Real-Time Monitoring and Fault Diagnosis of a Low Power Hub Motor Using Feedforward Neural Network. Comput. Intell. Neurosci. 2016, 2016, 7129376. [Google Scholar] [CrossRef] [PubMed]
  48. Zhang, Y.; Wu, L.; Naggaz, N.; Wang, S.; Wei, G. Remote-sensing Image Classification Based on an Improved Probabilistic Neural Network. Sensors 2009, 9, 7516–7539. [Google Scholar] [CrossRef] [PubMed]
  49. Celik, O.; Teke, A.; Yildirim, H.B. The optimized artificial neural network model with Levenberg-Marquardt algorithm for global solar radiation estimation in Eastern Mediterranean Region of Turkey. J. Clean. Prod. 2016, 116, 1–12. [Google Scholar] [CrossRef]
  50. Prado, D.R.; Alvarez, J.; Arrebola, M.; Pino, M.R.; Ayestaran, R.G.; Las-Heras, F. Efficient, Accurate and Scalable Reflectarray Phase-Only Synthesis Based on the Levenberg-Marquardt Algorithm. Appl. Comput. Electromagn. Soc. J. 2015, 30, 1246–1255. [Google Scholar]
  51. Seal, A.; Bhattacharjee, D.; Nasipuri, M.; Basu, D.K. UGC-JU face database and its benchmarking using linear regression classifier. Multimed. Tools Appl. 2015, 74, 2913–2937. [Google Scholar] [CrossRef]
  52. Naganawa, S.; Kawai, H.; Taoka, T.; Suzuki, K.; Iwano, S.; Satake, H.; Sone, M.; Ikeda, M. Heavily T2-Weighted 3D-FLAIR Improves the Detection of Cochlear Lymph Fluid Signal Abnormalities in Patients with Sudden Sensorineural Hearing Loss. Magn. Reson. Med. Sci. 2016, 15, 203–211. [Google Scholar] [CrossRef] [PubMed]
  53. Atangana, A.; Alqahtani, R.T. Modelling the Spread of River Blindness Disease via the Caputo Fractional Derivative and the Beta-derivative. Entropy 2016, 18, 40. [Google Scholar] [CrossRef]
Figure 1. Brain extraction result. (a) Sagittal; (b) Coronal; (c) Axial directions.
Figure 2. FRFT results of (a) the sine function with different α values: (b) 0.2; (c) 0.4; (d) 0.6; (e) 0.8; (f) 1.0. (Red represents the real part, and blue represents the imaginary part; the horizontal direction denotes the x-axis, and the vertical direction denotes the y-axis.)
Figure 3. UTFDs of the brain image.
Figure 4. The overall accuracy versus PCA threshold.
Table 1. Characteristics of subjects.

|                          | LHL         | RHL         | Control    | F/χ²/t  | p Value |
|--------------------------|-------------|-------------|------------|---------|---------|
| Gender (m/f)             | 8/7         | 6/8         | 8/12       |         |         |
| Age (year)               | 51.7 ± 9.6  | 53.9 ± 7.6  | 53.6 ± 5.4 | 0.305   | 0.739   |
| Education level (year)   | 12.5 ± 1.7  | 12.1 ± 2.4  | 11.5 ± 3.2 | 0.487   | 0.618   |
| Disease duration (year)  | 17.6 ± 17.3 | 14.2 ± 14.9 |            | 0.517   | 0.610   |
| PTA of left ear (dB)     | 78.1 ± 17.9 | 21.8 ± 3.2  | 22.2 ± 2.1 | 156.427 | 0.00    |
| PTA of right ear (dB)    | 20.4 ± 4.2  | 80.9 ± 17.4 | 21.3 ± 2.2 | 167.796 | 0.00    |
Table 2. Evaluation.
