Search Results (84)

Search Parameters:
Keywords = Just Noticeable Difference

15 pages, 2654 KB  
Article
The Evaluation of a Deep Learning Approach to Automatic Segmentation of Teeth and Shade Guides for Tooth Shade Matching Using the SAM2 Algorithm
by KyeongHwan Han, JaeHyung Lim, Jin-Soo Ahn and Ki-Sun Lee
Bioengineering 2025, 12(9), 959; https://doi.org/10.3390/bioengineering12090959 - 6 Sep 2025
Cited by 1 | Viewed by 857
Abstract
Accurate shade matching is essential in restorative and prosthetic dentistry yet remains difficult due to subjectivity in visual assessments. We develop and evaluate a deep learning approach for the simultaneous segmentation of natural teeth and shade guides in intraoral photographs using four fine-tuned variants of Segment Anything Model 2 (SAM2: tiny, small, base plus, and large) and a UNet baseline trained under the same protocol. Spatial performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and the 95th-percentile Hausdorff distance normalized by the ground-truth equivalent diameter (HD95). Color consistency within masks was quantified by the coefficient of variation (CV) of the CIELAB components (L*, a*, b*). Perceptual color difference was measured using CIEDE2000 (ΔE00). On a held-out test set, all SAM2 variants achieved high overlap accuracy; SAM2-large performed best (DSC: 0.987 ± 0.006; IoU: 0.975 ± 0.012; HD95: 1.25 ± 1.80%), followed by SAM2-small (0.987 ± 0.008; 0.974 ± 0.014; 2.96 ± 11.03%), SAM2-base plus (0.985 ± 0.011; 0.971 ± 0.021; 1.71 ± 3.28%), and SAM2-tiny (0.979 ± 0.015; 0.959 ± 0.028; 6.16 ± 11.17%). UNet reached DSC = 0.972 ± 0.020, IoU = 0.947 ± 0.035, and HD95 = 6.54 ± 16.35%. The CV distributions for all prediction models closely matched the ground truth (e.g., GT L*: 0.164 ± 0.040; UNet: 0.144 ± 0.028; SAM2-small: 0.164 ± 0.038; SAM2-base plus: 0.162 ± 0.039). The full-mask ΔE00 was low across models, with summary statistics reported as the median (mean ± SD): UNet: 0.325 (0.487 ± 0.364); SAM2-tiny: 0.162 (0.410 ± 0.665); SAM2-small: 0.078 (0.126 ± 0.166); SAM2-base plus: 0.072 (0.198 ± 0.417); SAM2-large: 0.065 (0.167 ± 0.257). These ΔE00 values lie well below the ≈1 just noticeable difference threshold on average, indicating close chromatic agreement between the predictions and annotations.
Within a single dataset and training protocol, fine-tuned SAM2, especially its larger variants, provides robust spatial accuracy, boundary reliability, and color fidelity suitable for clinical shade-matching workflows, while UNet offers a competitive convolutional baseline. These results indicate technical feasibility rather than clinical validation; broader baselines and external, multi-center evaluations are needed to determine its suitability for routine shade-matching workflows. Full article
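As a rough illustration of the overlap metrics reported above, here is a minimal NumPy sketch of DSC and IoU for binary masks. The array values are made up for the example; the paper's masks, HD95 normalization, and color metrics are not reproduced here.

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice Similarity Coefficient and Intersection over Union for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(iou)

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy "predicted tooth" mask
gt   = np.array([[1, 1, 0], [0, 0, 0]])   # toy ground-truth mask
dsc, iou = dice_iou(pred, gt)             # dsc = 0.8, iou ≈ 0.667
```

Both scores reach 1.0 only when prediction and annotation coincide exactly, which is why values near 0.98 indicate very tight agreement.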

24 pages, 1751 KB  
Article
Robust JND-Guided Video Watermarking via Adaptive Block Selection and Temporal Redundancy
by Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Manuel Cedillo-Hernandez, Ismael Dominguez-Jimenez and David Conchouso-Gonzalez
Mathematics 2025, 13(15), 2493; https://doi.org/10.3390/math13152493 - 3 Aug 2025
Viewed by 785
Abstract
This paper introduces a robust and imperceptible video watermarking framework designed for blind extraction in dynamic video environments. The proposed method operates in the spatial domain and combines multiscale perceptual analysis, adaptive Just Noticeable Difference (JND)-based quantization, and temporal redundancy via multiframe embedding. Watermark bits are embedded selectively in blocks with high perceptual masking using a Quantization Index Modulation (QIM) strategy, and the corresponding DCT coefficients are estimated directly from the spatial domain to reduce complexity. To enhance resilience, each bit is redundantly inserted across multiple keyframes selected based on scene transitions. Extensive simulations over 21 benchmark videos (CIF, 4CIF, HD) validate that the method achieves superior performance in robustness and perceptual quality, with an average Bit Error Rate (BER) of 1.03%, PSNR of 50.1 dB, SSIM of 0.996, and VMAF of 97.3 under compression, noise, cropping, and temporal desynchronization. The system outperforms several recent state-of-the-art techniques in both quality and speed, requiring no access to the original video during extraction. These results confirm the method’s viability for practical applications such as copyright protection and secure video streaming. Full article
(This article belongs to the Section E: Applied Mathematics)
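QIM embedding of the kind named in the abstract quantizes a coefficient onto one of two interleaved lattices, one per bit value. The sketch below shows the basic non-dithered scheme with a fixed step; in the paper the step would instead be set per block from the JND model, so the fixed `DELTA` here is an assumption for illustration.

```python
import numpy as np

DELTA = 4.0  # quantization step; larger = more robust but less imperceptible

def qim_embed(coeff: float, bit: int, delta: float = DELTA) -> float:
    """Quantize the coefficient onto the lattice associated with `bit`."""
    return float(delta * np.round((coeff - bit * delta / 2) / delta) + bit * delta / 2)

def qim_extract(coeff: float, delta: float = DELTA) -> int:
    """Blind extraction: pick the bit whose lattice reconstruction is closer."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return int(d1 < d0)

marked = qim_embed(10.3, 1)   # -> 10.0 (nearest point on the bit-1 lattice)
recovered = qim_extract(marked)
```

Extraction needs no original coefficient, which is what makes the scheme blind; robustness comes from the fact that noise smaller than `delta / 4` cannot move the coefficient closer to the wrong lattice.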

24 pages, 18515 KB  
Article
Simplified Fly Tower Modeling for Preliminary Acoustic Predictions in Opera Houses
by Fabrizio Cumo, Umberto Derme and Sofia Agostinelli
Appl. Sci. 2025, 15(15), 8393; https://doi.org/10.3390/app15158393 - 29 Jul 2025
Viewed by 584
Abstract
The acoustic field of an opera house is much more difficult to predict than that of a concert hall because, in the fly tower, the absorption characteristics vary from production to production, according to the layout of each opera piece. For this reason, the paper aims to find a simplified fly tower model to be used as a fixed reference in a preliminary acoustic prediction for opera houses. Firstly, referring to a case study, the effects of the fly tower depth and absorptive characteristics are investigated to identify the simplified model. As a traditional opera is set on an empty stage, and modern pieces are supported by a virtual projected environment, the influence of the variable stage elements on Reverberation Time (RT), Clarity (C80), and Strength (G) is considered, comparing the traditional Semiramide opera to a modern digital one, according to the Just Noticeable Difference (JND). Results confirm the utility of the suggested fly tower model, which does not require any set definition. Full article
(This article belongs to the Special Issue Acoustics Analysis and Noise Control for Buildings)
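A JND-based comparison of room-acoustic parameters like the one above can be sketched generically. The thresholds below are the commonly cited ISO 3382-1 values (about 5% relative for reverberation time, about 1 dB for C80 and G); the numeric inputs are illustrative, not the paper's measurements.

```python
# Typical just-noticeable differences from ISO 3382-1:
# reverberation time ~5 % (relative); clarity C80 and strength G ~1 dB (absolute).
JND = {"RT": 0.05, "C80": 1.0, "G": 1.0}

def perceptibly_different(param: str, a: float, b: float) -> bool:
    """True if the change between two room-acoustic values exceeds one JND."""
    if param == "RT":                       # relative JND, referenced to the first value
        return abs(a - b) / a > JND["RT"]
    return abs(a - b) > JND[param]          # absolute JND in dB

rt_change = perceptibly_different("RT", 1.60, 1.72)   # 7.5 % change -> audible
c80_change = perceptibly_different("C80", 2.0, 2.6)   # 0.6 dB change -> inaudible
```

This is exactly the kind of test used to decide whether two fly-tower configurations are acoustically interchangeable: if no parameter differs by more than one JND, a listener should not notice the substitution.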

32 pages, 15292 KB  
Article
Compression Ratio as Picture-Wise Just Noticeable Difference Predictor
by Nenad Stojanović, Boban Bondžulić, Vladimir Lukin, Dimitrije Bujaković, Sergii Kryvenko and Oleg Ieremeiev
Mathematics 2025, 13(9), 1445; https://doi.org/10.3390/math13091445 - 28 Apr 2025
Cited by 1 | Viewed by 809
Abstract
This paper presents results of applying the compression ratio (CR) to the prediction of the boundary between visually lossless and visually lossy compression, which is of particular importance in perceptual image compression. The prediction is carried out through the objective quality (peak signal-to-noise ratio, PSNR) and image representation in bits per pixel (bpp). In this analysis, the results of subjective tests from four publicly available databases are used as ground truth for comparison with the results obtained using the compression ratio as a predictor. Through an extensive analysis of color and grayscale infrared JPEG and Better Portable Graphics (BPG) compressed images, values are proposed for the parameters that control these two types of compression and for which CR is calculated. It is shown that PSNR and bpp predictions can be significantly improved by using CR calculated at these proposed parameter values, regardless of the type of compression and whether color or infrared images are used. In this paper, CR is used for the first time in predicting the boundary between visually lossless and visually lossy compression for images from the infrared part of the electromagnetic spectrum, as well as in the prediction of BPG compressed content. This paper indicates the great potential of CR, so that in future research it can be used in joint prediction based on several features or through the CR curve obtained for different values of the parameters controlling the compression. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
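The three quantities the prediction runs through (CR, PSNR, bpp) are straightforward to compute. A small sketch with made-up image sizes, not the paper's proposed parameter values:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a distorted image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak**2 / mse))

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    return raw_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes: int, width: int, height: int) -> float:
    return 8 * compressed_bytes / (width * height)

# e.g. a 512x512 8-bit grayscale image (262144 bytes raw) compressed to 16 KiB:
cr = compression_ratio(512 * 512, 16 * 1024)   # 16.0
bpp = bits_per_pixel(16 * 1024, 512, 512)      # 0.5 bpp
```

The paper's idea is that CR, which needs no decoded image, correlates with the PSNR/bpp point where distortion first becomes visible, so it can stand in as a cheap predictor of that boundary.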

14 pages, 615 KB  
Article
Genetic Diversity and Breeding System of the Pestiferous Subterranean Termite Reticulitermes flaviceps Across Shaanxi and Sichuan Provinces
by Zahid Khan, Haroon, Yu-Feng Meng and Lian-Xi Xing
Curr. Issues Mol. Biol. 2025, 47(5), 304; https://doi.org/10.3390/cimb47050304 - 26 Apr 2025
Viewed by 704
Abstract
The genetic diversity of 22 colonies of the termite Reticulitermes flaviceps was analyzed in Shaanxi and Sichuan provinces. It was found that the genetic diversity in both regions was quite similar. However, the distribution of genetic variations within the colonies was uneven. The termite colonies showed moderately high genetic diversity, a positive sign for adaptability and survival. The study also revealed a favorable mix of different genetic types within the colonies, indicating a healthy level of genetic variation. However, there was limited genetic exchange among different colonies, leading to noticeable genetic differences. When looking at the genetic structures, the colonies in Shaanxi were quite similar; those in Sichuan showed more variation, and some Sichuan colonies had identical genetic structures to those in Shaanxi. Regarding breeding systems, the colonies in Shaanxi were mainly extended families, meaning they had multiple generations living together. In contrast, most colonies in Sichuan were simple families consisting of just one generation; this difference might be due to the natural, less disturbed environments in Shaanxi, which support more extensive and complex colonies. On the other hand, the urban environments in Sichuan, with their intricate cement structures, made it difficult for termite colonies to expand. Overall, the study highlights the genetic diversity and breeding strategies of R. flaviceps in different environments, providing insights into their adaptability and survival mechanisms. Full article
(This article belongs to the Section Bioinformatics and Systems Biology)

21 pages, 13069 KB  
Article
An Adaptive Luminance Mapping Scheme for High Dynamic Range Content Display
by Deju Huang, Xifeng Zheng, Jingxu Li, Junchang Chen, Fengxia Liu, Xinyue Mao, Yufeng Chen and Yu Chen
Electronics 2025, 14(6), 1202; https://doi.org/10.3390/electronics14061202 - 19 Mar 2025
Viewed by 1081
Abstract
Ideally, the Perceptual Quantizer (PQ) for High Dynamic Range (HDR) image presentation requires a 12-bit depth to ensure accurate quantization. In most cases, mainstream displays employ a limited 10-bit PQ function for HDR image display, resulting in notable issues such as perceived contrast loss and the emergence of pseudo-contours in the image hierarchy, particularly in low-brightness scenes. To address this issue, this paper proposes a novel luminance mapping relationship while preserving a 10-bit depth. Unlike conventional methods that derive PQ using a fixed Just Noticeable Difference (JND) fraction, this approach incorporates an adaptive adjustment factor. By adjusting the JND fraction according to brightness levels, this method effectively optimizes the quantization interval, improves reproducible contrast, and ensures uniform perception. The results demonstrate that the proposed approach effectively reduces the perception of contrast loss in low-brightness scenes, eliminates potential artifacts, and enhances the presentation quality of the display system. Full article
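For reference, the standard PQ inverse EOTF (SMPTE ST 2084) maps absolute luminance to a normalized signal, which a display then quantizes to 10-bit codes. The sketch below shows the fixed-step quantization that the paper's adaptive JND fraction replaces; the constants are the published ST 2084 values.

```python
# SMPTE ST 2084 PQ inverse EOTF: absolute luminance (cd/m^2) -> signal in [0, 1].
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance: float) -> float:
    y = (luminance / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def quantize(signal: float, bits: int = 10) -> int:
    """Uniform full-range code value; the paper adapts the step per brightness instead."""
    return round(signal * (2**bits - 1))

code_100nit = quantize(pq_encode(100.0))   # roughly 520 of 1023 for SDR reference white
```

With only 1024 uniform codes over a 0-10,000 cd/m^2 range, the per-step luminance jump in dark regions can exceed one JND, which is the source of the banding the paper targets.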

13 pages, 1754 KB  
Article
Cross-Modal Interactions and Movement-Related Tactile Gating: The Role of Vision
by Maria Casado-Palacios, Alessia Tonelli, Claudio Campus and Monica Gori
Brain Sci. 2025, 15(3), 288; https://doi.org/10.3390/brainsci15030288 - 8 Mar 2025
Cited by 1 | Viewed by 1661
Abstract
Background: When engaging with the environment, multisensory cues interact and are integrated to create a coherent representation of the world around us, a process that has been suggested to be affected by the lack of visual feedback in blind individuals. In addition, the presence of voluntary movement can be responsible for suppressing somatosensory information processed by the cortex, which might lead to a worse encoding of tactile information. Objectives: In this work, we aim to explore how cross-modal interaction can be affected by active movements and the role of vision in this process. Methods: To this end, we measured the precision of 18 blind individuals and 18 age-matched sighted controls in a velocity discrimination task. The participants were instructed to judge which of two sequential stimuli was faster, in both passive and active touch conditions. The sensory stimulation was either tactile only or audio–tactile, where a non-informative sound co-occurred with the tactile stimulation. The measure of precision was obtained by computing the just noticeable difference (JND) of each participant. Results: The results show worse precision with the audio–tactile sensory stimulation in the active condition for the sighted group (p = 0.046) but not for the blind one (p = 0.513). For blind participants, only the movement itself had an effect. Conclusions: For sighted individuals, the presence of noise from active touch made them vulnerable to auditory interference. However, the blind group exhibited less sensory interaction, experiencing only the detrimental effect of movement. Our work should be considered when developing next-generation haptic devices. Full article
(This article belongs to the Special Issue Multisensory Perception of the Body and Its Movement)
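A JND like the one measured here is typically read off a psychometric function fitted to the discrimination responses. Below is a minimal cumulative-Gaussian fit by grid search; the response proportions and velocity values are invented for illustration, not the study's data, and a real analysis would use a proper optimizer.

```python
import math

# Proportion of "comparison felt faster" responses per comparison velocity (cm/s).
# Illustrative numbers only.
velocities = [4.0, 4.5, 5.0, 5.5, 6.0]
p_faster   = [0.08, 0.25, 0.50, 0.76, 0.93]

def cum_gauss(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Coarse least-squares grid search over (mu, sigma).
mu, sigma = min(
    ((m / 100, s / 100) for m in range(400, 601) for s in range(20, 201)),
    key=lambda p: sum((cum_gauss(x, *p) - y) ** 2 for x, y in zip(velocities, p_faster)),
)
jnd = 0.6745 * sigma   # distance from the 50 % to the 75 % point of the fit
```

A smaller JND means finer discrimination, so the group comparison in the study amounts to comparing these fitted sigmas across conditions.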

14 pages, 966 KB  
Article
Stiffness Perception Analysis in Haptic Teleoperation with Imperfect Communication Network
by Yonghyun Park, Chanyoung Ju and Hyoung Il Son
Electronics 2025, 14(4), 792; https://doi.org/10.3390/electronics14040792 - 18 Feb 2025
Viewed by 964
Abstract
Incomplete communication networks (e.g., time delay and packet loss/switching) in haptic interaction and remote teleoperation systems can degrade both user performance and system stability. In this study, we hypothesized that human operator performance would decrease monotonically as network imperfections worsened. To test this hypothesis, we conducted two psychophysical experiments measuring the just-noticeable difference (JND), point of subjective equality (PSE), and perception time under varying conditions of packet separation time and packet loss. Our findings show that increasing packet separation time significantly elevated both JND and PSE, indicating a poorer discrimination ability and a systematic bias toward perceiving the environment as stiffer. By contrast, packet loss rates of up to 75% had no significant impact on perceptual performance, suggesting that, at sufficiently high sampling rates, human operators can compensate for substantial data loss. Overall, the results underscore that packet separation time, rather than packet loss, is the dominant factor affecting perceptual performance in haptic teleoperation. Full article
(This article belongs to the Special Issue Haptic Systems and the Tactile Internet: Design and Applications)

14 pages, 1928 KB  
Article
Comparison of Microfluidic Synthesis of Silver Nanoparticles in Flow and Drop Reactors at Low Dean Numbers
by Konstantia Nathanael, Nina M. Kovalchuk and Mark J. H. Simmons
Micromachines 2025, 16(1), 75; https://doi.org/10.3390/mi16010075 - 10 Jan 2025
Cited by 2 | Viewed by 2469
Abstract
This study evaluates the performance of continuous flow and drop-based microfluidic devices for the synthesis of silver nanoparticles (AgNPs) under identical hydrodynamic and chemical conditions. Flows at low values of Dean number (De < 1) were investigated, where the contribution of the vortices forming inside the drop to the additional mixing inside the reactor should be most noticeable. In the drop-based microfluidic device, discrete aqueous drops serving as reactors were generated by flow focusing using silicone oil as the continuous phase. Aqueous solutions of reagents were supplied through two different channels merging just before the drops were formed. In the continuous flow device, the reagents merged at a Tee junction, and the reaction was carried out in the outlet tube. Although continuous flow systems may face challenges such as particle concentration reduction due to deposition on the channel wall or fouling, they are often more practical for research due to their operational simplicity, primarily through the elimination of the need to separate the aqueous nanoparticle dispersion from the oil phase. The results demonstrate that both microfluidic approaches produced AgNPs of similar sizes when the hydrodynamic conditions defined by the values of De and the residence time within the reactor were similar. Full article
(This article belongs to the Special Issue Microfluidic Nanoparticle Synthesis)
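The Dean number used above characterizes secondary (vortex) flow in a curved channel. A small sketch using the common definition De = Re·sqrt(d / 2Rc), with illustrative water-like parameters rather than the authors' actual geometry:

```python
import math

def dean_number(velocity: float, diameter: float, curvature_radius: float,
                density: float = 1000.0, viscosity: float = 1e-3) -> float:
    """De = Re * sqrt(d / (2 * Rc)); water properties by default, SI units."""
    reynolds = density * velocity * diameter / viscosity
    return reynolds * math.sqrt(diameter / (2 * curvature_radius))

# e.g. 1 mm/s flow in a 200 um channel curving with a 5 mm radius:
de = dean_number(velocity=1e-3, diameter=200e-6, curvature_radius=5e-3)
# de ≈ 0.028, well inside the De < 1 regime studied in the paper
```

At such low De the centrifugally driven vortices are weak, which is why any extra mixing contributed by circulation inside the drops becomes the distinguishing factor between the two reactor types.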

21 pages, 4458 KB  
Article
Video-Wise Just-Noticeable Distortion Prediction Model for Video Compression with a Spatial–Temporal Network
by Huanhua Liu, Shengzong Liu, Jianyu Xiao, Dandan Xu and Xiaoping Fan
Electronics 2024, 13(24), 4977; https://doi.org/10.3390/electronics13244977 - 18 Dec 2024
Viewed by 1355
Abstract
Just-Noticeable Difference (JND) in an image/video refers to the maximum difference that the human visual system cannot perceive, which has been widely applied in perception-guided image/video compression. In this work, we propose a Binary Decision-based Video-Wise Just-Noticeable Difference Prediction Method (BD-VW-JND-PM) with deep learning. Firstly, we model the VW-JND prediction problem as a binary decision process to reduce the inferring complexity. Then, we propose a Perceptually Lossy/Lossless Predictor for Compressed Video (PLLP-CV) to identify whether the distortion can be perceived or not. In the PLLP-CV, a Spatial–Temporal Network-based Perceptually Lossy/Lossless predictor (ST-Network-PLLP) is proposed for key frames by learning the spatial and temporal distortion features, and a threshold-based integration strategy is proposed to obtain the final results. Experimental results evaluated on the VideoSet database show that the mean prediction accuracy of PLLP-CV is about 95.6%, and the mean JND prediction error is 1.46 in QP and 0.74 in Peak Signal-to-Noise Ratio (PSNR), which achieve 15% and 14.9% improvements, respectively. Full article

18 pages, 2627 KB  
Article
Some Approaches for Light and Color on the Surface of Mars
by Manuel Melgosa, Javier Hernández-Andrés, Manuel Sánchez-Marañón, Javier Cuadros and Álvaro Vicente-Retortillo
Appl. Sci. 2024, 14(23), 10812; https://doi.org/10.3390/app142310812 - 22 Nov 2024
Viewed by 1050
Abstract
We analyzed the main colorimetric characteristics of lights on Mars’ surface from 3139 total spectral irradiances provided by the COMIMART model (J. Space Weather Space Clim. 5, A33, 2015), modifying the parameters of ‘solar zenith angle’ and ‘opacity’, related to the time of day and the amount of dust in the atmosphere of Mars, respectively. Lights on Mars’ surface have chromaticities that are mainly located below the Planckian locus, correlated color temperatures in the range of 2333 K–5868 K, and CIE 2017 color fidelity indices above 93. For the 24 samples in the X-Rite ColorChecker® and an extreme dust opacity change from 0.1 to 8.1 in the atmosphere, the average color inconstancy generated by the change in Mars’ light using the chromatic adaptation transform CIECAT16 was about 5 and 8 CIELAB units for solar zenith angles of 0° and 72°, respectively. We propose a method to compute total spectral irradiances on the surface of Mars from a selected value of correlated color temperature in the range of 2333 K–5868 K. This method is analogous to the one currently adopted by the International Commission on Illumination to compute daylight illuminants on the surface of Earth (CIE 015:2018, clause 4.1.2). The average accuracy of the 3139 reconstructed total spectral irradiances using the proposed method was 0.9999558 in GFC (J. Opt. Soc. Am. A 14, 1007–1014, 1997) and 0.0009 ΔEuv units, a value just below the chromaticity difference perceptible by human observers at 50% probability. Total spectral irradiances proposed by Barnes for five correlated color temperatures agreed with those obtained from the proposed method: on average, GFC = 0.9979521 and 0.0023 ΔEuv units. Full article
(This article belongs to the Special Issue Interdisciplinary Approaches and Applications of Optics & Photonics)
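The GFC used to score the reconstructions is the goodness-of-fit coefficient from the cited J. Opt. Soc. Am. A reference: the normalized inner product of measured and reconstructed spectra, equal to 1 for identical spectral shapes. A toy sketch with synthetic spectra (the Gaussian irradiance and sinusoidal error are invented for illustration):

```python
import numpy as np

def gfc(measured: np.ndarray, reconstructed: np.ndarray) -> float:
    """Goodness-of-fit coefficient between two spectra (1 = identical shape)."""
    num = abs(np.dot(measured, reconstructed))
    return float(num / (np.linalg.norm(measured) * np.linalg.norm(reconstructed)))

wl = np.linspace(380, 780, 81)                  # visible wavelengths, 5 nm steps
spectrum = np.exp(-(((wl - 550) / 80) ** 2))    # toy total irradiance
noisy = spectrum + 0.01 * np.sin(wl / 10)       # imperfect reconstruction
g_self = gfc(spectrum, spectrum)                # exactly 1.0
g_noisy = gfc(spectrum, noisy)                  # slightly below 1.0
```

Because GFC is insensitive to overall scaling, a value like 0.9999558 says the reconstructed curves match the model's spectra almost perfectly in shape, independently of absolute irradiance level.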

14 pages, 3201 KB  
Article
A Vibrotactile Belt for Measuring Vibrotactile Acuities on the Human Torso Using Coin Motors
by Shaoyi Wang, Wei Dai, Lichao Yu, Yong Liu, Yidong Yang, Ruomi Guo, Yuemin Hong, Jianning Chen, Shangxiong Lin, Xingxing Ruan, Qiangqiang Ouyang and Xiaoying Wang
Micromachines 2024, 15(11), 1341; https://doi.org/10.3390/mi15111341 - 31 Oct 2024
Cited by 1 | Viewed by 1935
Abstract
Accurate measurement of the vibrotactile acuities of the human torso is the key to designing effective torso-worn vibrotactile displays for healthcare applications such as navigation aids for visually impaired persons. Although efforts have been made to measure vibrotactile acuities, there remains a lack of systematic studies addressing the spatial, temporal, and intensity-related aspects of vibrotactile sensitivity on the human torso. In this work, a torso-worn vibrotactile belt consisting of two crossed coin motor arrays was designed and a psychophysical study was carried out to measure the spatial, temporal, and intensity-related vibrotactile acuities of a set of human subjects wearing the designed belt. The objective parameters of vibrational intensity and the timing latency of the coin motor were also determined before measuring the vibrotactile acuities. The experimental results indicated that the tested coin motor was able to generate a median number of five and six available just-noticeable differences in intensity and duration, respectively. Among the four parameters of vibrational intensity, the perceived intensity was the most relevant to vibrational displacement. The spatial acuities measured as the degree of two-point spatial thresholds (TPTs) showed less individual difference than the distance TPTs. The results from the current work provide valuable guidance for the design of a comfortable torso-worn vibrotactile display using coin motors. Full article
(This article belongs to the Section B:Biology and Biomedicine)

16 pages, 3583 KB  
Article
BHT-QAOA: The Generalization of Quantum Approximate Optimization Algorithm to Solve Arbitrary Boolean Problems as Hamiltonians
by Ali Al-Bayaty and Marek Perkowski
Entropy 2024, 26(10), 843; https://doi.org/10.3390/e26100843 - 6 Oct 2024
Cited by 2 | Viewed by 2003
Abstract
A new methodology is introduced to solve classical Boolean problems as Hamiltonians, using the quantum approximate optimization algorithm (QAOA). This methodology is termed the “Boolean-Hamiltonians Transform for QAOA” (BHT-QAOA). Because prior research has mainly focused on solving combinatorial optimization problems with QAOA, the BHT-QAOA adds a further capability to QAOA: finding all optimized approximated solutions for Boolean problems, by transforming such problems from Boolean oracles (in different structures) into phase oracles, and then into the Hamiltonians of QAOA. Through this transformation, the total numbers of qubits and quantum gates utilized in the generated Hamiltonians of QAOA are dramatically reduced. In this article, arbitrary Boolean problems are examined and successfully solved with our BHT-QAOA, using different structures based on various logic synthesis methods, an IBM quantum computer, and a classical optimization minimizer. Accordingly, the BHT-QAOA provides broad opportunities to solve many classical Boolean-based problems as Hamiltonians, for practical engineering applications in algorithms, digital synthesizers, robotics, and machine learning, to name a few, in the hybrid classical-quantum domain. Full article
(This article belongs to the Special Issue The Future of Quantum Machine Learning and Quantum AI)
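The underlying idea of turning a Boolean function into a phase oracle can be illustrated with a diagonal cost operator C|x> = f(x)|x>, whose cost layer in QAOA is exp(-i*gamma*C). The sketch below is a generic textbook-style construction in NumPy, not the BHT-QAOA transform itself, and the 3-variable majority function is a made-up example.

```python
import numpy as np
from itertools import product

def f(bits: tuple) -> int:
    """Example Boolean oracle: majority of three inputs."""
    return int(sum(bits) >= 2)

n = 3
# Diagonal cost Hamiltonian: C|x> = f(x)|x> over the computational basis.
cost_diag = np.array([f(x) for x in product((0, 1), repeat=n)])
# The corresponding QAOA cost layer is the phase oracle exp(-i * gamma * C).
gamma = 0.7
phase_layer = np.diag(np.exp(-1j * gamma * cost_diag))
# Satisfying assignments are the basis states that acquire the f(x)=1 phase.
solutions = [x for x in product((0, 1), repeat=n) if f(x)]
```

In QAOA the satisfying assignments are then amplified by alternating this phase layer with a mixer; the paper's contribution is obtaining such Hamiltonians from synthesized Boolean oracles with far fewer qubits and gates than a naive construction.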

18 pages, 14537 KB  
Article
Experimental Study of the Influence of Occupants on Speech Intelligibility in an Automotive Cabin
by Linda Liang, Miao Ren, Linghui Liao, Ye Zhao, Wei Xiong and Liuying Ou
Appl. Sci. 2024, 14(17), 7942; https://doi.org/10.3390/app14177942 - 5 Sep 2024
Viewed by 1499
Abstract
Adding occupants to an enclosed space often leads to perceptible changes in the sound field and, therefore, in speech intelligibility (SI); however, this issue has not yet been examined in automotive cabins. This study investigated the effect of occupants in an automotive cabin on SI. Binaural room impulse responses (BRIRs) were measured in an automotive cabin with an artificial mouth and dummy head under different speaker–listener position configurations and occupancy modes. Based on the measured BRIRs, the speech transmission index (STI) was determined, and subjective speech–reception thresholds (SRTs) in Mandarin Chinese were assessed. The results indicate that SI mostly decreased slightly after adding occupants. In most cases, the occupants did not significantly affect SI, with STI variations of no more than the just-noticeable difference and SRT variations within 1 dB. When the listener was in the back-right seat, the effect of the occupants on SI could not be ignored, with STI variations of up to 0.07 and an SRT variation of 2 dB under different occupancy modes. In addition, the influence of front-row passengers on the SI of rear-row listeners was extremely small, and vice versa. Furthermore, altering the distribution of occupants had an effect comparable to changing the number of occupants. Full article
(This article belongs to the Section Acoustics and Vibrations)

32 pages, 6053 KB  
Article
Are Strong Baselines Enough? False News Detection with Machine Learning
by Lara Aslan, Michal Ptaszynski and Jukka Jauhiainen
Future Internet 2024, 16(9), 322; https://doi.org/10.3390/fi16090322 - 5 Sep 2024
Cited by 2 | Viewed by 6186
Abstract
False news refers to false, fake, or misleading information presented as real news. In recent years, there has been a noticeable increase in false news on the Internet. The goal of this paper was to study the automatic detection of such false news using machine learning and natural language processing techniques and to determine which techniques work most effectively. This article first examines what constitutes false news and how it differs from other types of misleading information, and reviews the results achieved by other researchers on the same topic. After building this foundation, the article presents its own experiments, carried out on four different datasets, one of which was created specifically for this article, using 10 different machine learning methods. The results were satisfactory and answered the research questions posed at the outset. From the experiments, we determined that passive-aggressive algorithms, support vector machines, and random forests are the most efficient methods for automatic false news detection. The article also concludes that more complex tasks, such as using multiple levels of identifying false news or detecting computer-generated false news, require more complex machine learning models. Full article
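Of the methods found most efficient, the passive-aggressive classifier is simple enough to sketch from scratch: on each misclassified (or low-margin) example it makes the smallest weight update that restores a unit margin (the PA-I rule). The toy phrases below are invented, and this bag-of-words sketch is not the article's pipeline or data.

```python
import re
from collections import Counter

def features(text: str) -> Counter:
    """Bag-of-words feature vector as a sparse dict of token counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def pa_train(samples, epochs: int = 5, C: float = 1.0) -> Counter:
    """Passive-aggressive (PA-I) binary classifier; labels are +1 (real) / -1 (false)."""
    w = Counter()
    for _ in range(epochs):
        for text, y in samples:
            x = features(text)
            margin = y * sum(w[t] * c for t, c in x.items())
            loss = max(0.0, 1.0 - margin)          # hinge loss
            if loss > 0:                           # "aggressive" step only on errors
                tau = min(C, loss / sum(c * c for c in x.values()))
                for t, c in x.items():
                    w[t] += tau * y * c
    return w

def predict(w: Counter, text: str) -> int:
    return 1 if sum(w[t] * c for t, c in features(text).items()) >= 0 else -1

train = [("scientists publish peer reviewed study", 1),
         ("shocking miracle cure doctors hate", -1),
         ("official report confirms findings", 1),
         ("you won't believe this one secret", -1)]
w = pa_train(train)
predict(w, "peer reviewed report")      # 1  (classified as real)
predict(w, "shocking secret miracle")   # -1 (classified as false)
```

The "passive" half of the name is visible in the loop: examples already classified with margin >= 1 leave the weights untouched, which is what makes the method fast on large news corpora.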
