Search Results (44)

Search Parameters:
Keywords = computationally biased

31 pages, 1988 KiB  
Article
The Effect of Macroeconomic Announcements on U.S. Treasury Markets: An Autometric General-to-Specific Analysis of the Greenspan Era
by James J. Forest
Econometrics 2025, 13(3), 24; https://doi.org/10.3390/econometrics13030024 - 21 Jun 2025
Viewed by 814
Abstract
This research studies the impact of macroeconomic announcement surprises on daily U.S. Treasury excess returns during the heart of Alan Greenspan’s tenure as Federal Reserve Chair, addressing the possible limitations of standard static regression (SSR) models, which may suffer from omitted variable bias, parameter instability, and poor mis-specification diagnostics. To complement the SSR framework, an automated general-to-specific (Gets) modeling approach, enhanced with modern indicator saturation methods for robustness, is applied to improve empirical model discovery and mitigate potential biases. By progressively reducing an initially broad set of candidate variables, the Gets methodology steers the model toward congruence, discards unstable parameters, and limits information loss while preserving precision. The findings herein suggest that U.S. Treasury market responses to macroeconomic news shocks exhibited stability for a core set of announcements that reliably influenced excess returns. In contrast to computationally costless standard static models, the automated Gets-based approach enhances parameter precision and provides a more adaptive structure for identifying relevant predictors. These results demonstrate the potential value of incorporating interpretable automated model selection techniques alongside traditional SSR and Markov switching approaches to improve empirical insights into macroeconomic announcement effects on financial markets. Full article
(This article belongs to the Special Issue Advancements in Macroeconometric Modeling and Time Series Analysis)
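As a rough illustration of the general-to-specific idea described in the abstract (not the Autometrics algorithm itself, which adds multi-path search, diagnostic testing, and indicator saturation), a minimal backward-elimination sketch might look like this; the variable names, data, and t-statistic cutoff are assumptions:

```python
import numpy as np

def gets_reduction(y, X, names, t_crit=2.0):
    """Minimal general-to-specific reduction: starting from the full
    candidate set, repeatedly drop the regressor with the smallest
    |t-statistic| until every remaining coefficient is significant.
    (Illustrative only -- real Gets/Autometrics also runs multi-path
    searches, diagnostic tests, and indicator saturation.)"""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - len(keep))
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        tstats = beta / np.sqrt(np.diag(cov))
        weakest = int(np.argmin(np.abs(tstats)))
        if abs(tstats[weakest]) >= t_crit:
            break                      # all retained regressors are significant
        keep.pop(weakest)              # drop the least significant candidate
    return [names[i] for i in keep]

# Hypothetical example: 500 trading days, 10 candidate announcement surprises
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.standard_normal(500)
print(gets_reduction(y, X, [f"surprise_{i}" for i in range(10)]))
```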

16 pages, 643 KiB  
Article
Cross-Cultural Biases of Emotion Perception in Music
by Marjorie G. Li, Kirk N. Olsen and William Forde Thompson
Brain Sci. 2025, 15(5), 477; https://doi.org/10.3390/brainsci15050477 - 29 Apr 2025
Cited by 1 | Viewed by 1470
Abstract
Objectives: Emotion perception in music is shaped by cultural background, yet the extent of cultural biases remains unclear. This study investigated how Western listeners perceive emotion in music across cultures, focusing on the accuracy and intensity of emotion recognition and the musical features that predict emotion perception. Methods: White-European (Western) listeners from the UK, USA, New Zealand, and Australia (N = 100) listened to 48 ten-second excerpts of Western classical and Chinese traditional bowed-string music that were validated by experts to convey happiness, sadness, agitation, and calmness. After each excerpt, participants rated the familiarity, enjoyment, and perceived intensity of the four emotions. Musical features were computationally extracted for regression analyses. Results: Western listeners experienced Western classical music as more familiar and enjoyable than Chinese music. Happiness and sadness were recognised more accurately in Western classical music, whereas agitation was more accurately identified in Chinese music. The perceived intensity of happiness and sadness was greater for Western classical music; conversely, the perceived intensity of agitation was greater for Chinese music. Furthermore, emotion perception was influenced by both culture-shared (e.g., timbre) and culture-specific (e.g., dynamics) musical features. Conclusions: Our findings reveal clear cultural biases in the way individuals perceive and classify music, highlighting how these biases are shaped by the interaction between cultural familiarity and the emotional and structural qualities of the music. We discuss the possibility that purposeful engagement with music from diverse cultural traditions—especially in educational and therapeutic settings—may cultivate intercultural empathy and an appreciation of the values and aesthetics of other cultures. Full article
(This article belongs to the Special Issue Advances in Emotion Processing and Cognitive Neuropsychology)

15 pages, 11172 KiB  
Article
GaussianMix: Rethinking Receptive Field for Efficient Data Augmentation
by A. F. M. Shahab Uddin, Maryam Qamar, Jueun Mun, Yuje Lee and Sung-Ho Bae
Appl. Sci. 2025, 15(9), 4704; https://doi.org/10.3390/app15094704 - 24 Apr 2025
Viewed by 418
Abstract
Mixed Sample Data Augmentation (MSDA) enhances deep learning model generalization by blending a source patch into a target image. Selecting source patches based on image saliency helps to prevent label errors and irrelevant content; however, it relies on computationally expensive saliency detection algorithms. Studies suggest that a convolutional neural network’s receptive field follows a Gaussian distribution, with central pixels being more influential. Leveraging this, we propose GaussianMix, an effective and efficient augmentation strategy that selects source patches using a center-biased Gaussian distribution, thereby avoiding additional computational costs. GaussianMix achieves top-1 error rates of 21.26% and 20.09% on ResNet-50 and ResNet-101 for ImageNet classification, respectively, while also improving robustness against adversarial perturbations and enhancing object detection performance. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
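The core of the method, as described in the abstract, is sampling the source-patch location from a Gaussian centered on the image rather than uniformly. A minimal sketch of that idea (the function name, default standard deviation, and CutMix-style label weighting are assumptions, not the authors' code):

```python
import numpy as np

def gaussian_mix(src, tgt, lam, sigma_frac=0.25, rng=np.random.default_rng()):
    """Sketch of a CutMix-style blend whose source patch is centered on a
    location drawn from a Gaussian biased toward the image center, the
    idea behind GaussianMix (names and defaults here are illustrative)."""
    H, W, _ = src.shape
    ph, pw = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))  # patch size from mix ratio
    # sample the patch center near the image center, where the receptive field is strongest
    cy = int(np.clip(rng.normal(H / 2, sigma_frac * H), ph / 2, H - ph / 2))
    cx = int(np.clip(rng.normal(W / 2, sigma_frac * W), pw / 2, W - pw / 2))
    y0, x0 = cy - ph // 2, cx - pw // 2
    mixed = tgt.copy()
    mixed[y0:y0 + ph, x0:x0 + pw] = src[y0:y0 + ph, x0:x0 + pw]
    lam_adj = 1 - (ph * pw) / (H * W)          # label weight of the target image
    return mixed, lam_adj

img_a, img_b = np.zeros((224, 224, 3)), np.ones((224, 224, 3))
mixed, lam = gaussian_mix(img_a, img_b, lam=0.7)
```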

44 pages, 8130 KiB  
Article
Classification-Based Q-Value Estimation for Continuous Actor-Critic Reinforcement Learning
by Chayoung Kim
Symmetry 2025, 17(5), 638; https://doi.org/10.3390/sym17050638 - 23 Apr 2025
Viewed by 522
Abstract
Stable Q-value estimation is critical for effective policy learning in deep reinforcement learning (DRL), especially in continuous control tasks. Traditional algorithms like Soft Actor-Critic (SAC) and Twin Delayed Deep Deterministic (TD3) policy gradients rely on Mean Squared Error (MSE) loss for Q-value approximation, which may cause instability due to misestimation and overestimation biases. Although distributional reinforcement learning (RL) algorithms like C51 have improved robustness in discrete action spaces, their application to continuous control remains computationally expensive owing to distribution projection needs. To address this, we propose a classification-based Q-value learning method that reformulates Q-value estimation as a classification problem rather than a regression task. Replacing MSE loss with cross-entropy (CE) and Kullback–Leibler (KL) divergence losses, the proposed method improves learning stability and mitigates overestimation errors. Our statistical analysis across 30 independent runs shows that the approach achieves an approximately 10% lower Q-value estimation error in the pendulum environment and a 40–60% reduced training time compared to SAC and Continuous Twin Delayed Distributed Deep Deterministic (CTD4) Policy Gradient. Experimental results on OpenAI Gym benchmark environments demonstrate that our approach, with up to 77% fewer parameters, outperforms the SAC and CTD4 policy gradients in terms of training stability and convergence speed, while maintaining competitive final policy performance. Full article
(This article belongs to the Special Issue Symmetry in Intelligent Algorithms)
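A minimal sketch of the central idea — treating Q-value estimation as classification over a fixed grid of value bins and training the critic with a cross-entropy loss against a projection of the scalar Bellman target — might look as follows; the bin range, network sizes, and two-hot projection are assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

V_MIN, V_MAX, N_BINS = -100.0, 100.0, 51
bins = torch.linspace(V_MIN, V_MAX, N_BINS)

class ClassifierCritic(nn.Module):
    """Critic that outputs a categorical distribution over Q-value bins
    instead of a scalar (illustrative sketch, not the paper's network)."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, N_BINS))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))   # logits over value bins

def two_hot(target_q):
    """Project a scalar Bellman target onto its two neighbouring bins."""
    target_q = target_q.clamp(V_MIN, V_MAX)
    idx = torch.bucketize(target_q, bins).clamp(1, N_BINS - 1)
    lo, hi = bins[idx - 1], bins[idx]
    w_hi = (target_q - lo) / (hi - lo)
    probs = torch.zeros(target_q.shape[0], N_BINS)
    probs.scatter_(1, (idx - 1).unsqueeze(1), (1 - w_hi).unsqueeze(1))
    probs.scatter_(1, idx.unsqueeze(1), w_hi.unsqueeze(1))
    return probs

critic = ClassifierCritic(obs_dim=3, act_dim=1)
obs, act = torch.randn(32, 3), torch.randn(32, 1)
target_q = torch.randn(32) * 10                      # stand-in Bellman targets
loss = F.cross_entropy(critic(obs, act), two_hot(target_q))   # CE against soft target
loss.backward()
```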

24 pages, 2795 KiB  
Article
Importance Sampling for Cost-Optimized Estimation of Burn Probability Maps in Wildfire Monte Carlo Simulations
by Valentin Waeselynck and David Saah
Fire 2024, 7(12), 455; https://doi.org/10.3390/fire7120455 - 3 Dec 2024
Viewed by 916
Abstract
Background: Wildfire modelers rely on Monte Carlo simulations of wildland fire to produce burn probability maps. These simulations are computationally expensive. Methods: We study the application of importance sampling to accelerate the estimation of burn probability maps, using L2 distance as the metric of deviation. Results: Assuming a large area of interest, we prove that the optimal proposal distribution reweights the probability of ignitions by the square root of the expected burned area divided by the expected computational cost and then generalize these results to the assets-weighted L2 distance. We also propose a practical approach to searching for a good proposal distribution. Conclusions: These findings contribute quantitative methods for optimizing the precision/computation ratio of wildfire Monte Carlo simulations without biasing the results, offer a principled conceptual framework for justifying and reasoning about other computational shortcuts, and can be readily generalized to a broader spectrum of simulation-based risk modeling. Full article
(This article belongs to the Special Issue Patterns, Drivers, and Multiscale Impacts of Wildland Fires)
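The stated result — that the optimal proposal reweights each ignition's probability by the square root of its expected burned area divided by its expected computational cost, with sampled fires then reweighted by p/q so the burn-probability estimate stays unbiased — can be sketched as follows; the ignition catalogue and cost model are hypothetical:

```python
import numpy as np

def optimal_proposal(p_ignition, exp_area, exp_cost):
    """Proposal q_i proportional to p_i * sqrt(E[burned area_i] / E[cost_i]),
    as stated in the abstract (a sketch of the result, not the paper's code)."""
    q = p_ignition * np.sqrt(exp_area / exp_cost)
    return q / q.sum()

# Hypothetical ignition catalogue
rng = np.random.default_rng(1)
p = np.full(1000, 1 / 1000)                # original ignition probabilities
area = rng.lognormal(3.0, 1.0, 1000)       # expected burned area per ignition
cost = 1.0 + 0.01 * area                   # expected simulation cost per ignition
q = optimal_proposal(p, area, cost)

# Draw ignitions from q; each simulated fire carries weight p/q so the
# resulting burn probability map remains an unbiased estimate.
draws = rng.choice(1000, size=200, p=q)
weights = p[draws] / q[draws]
```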

21 pages, 1536 KiB  
Article
Deep Learning Classification of Traffic-Related Tweets: An Advanced Framework Using Deep Learning for Contextual Understanding and Traffic-Related Short Text Classification
by Wasen Yahya Melhem, Asad Abdi and Farid Meziane
Appl. Sci. 2024, 14(23), 11009; https://doi.org/10.3390/app142311009 - 27 Nov 2024
Cited by 3 | Viewed by 1203
Abstract
Classifying social media (SM) messages into relevant or irrelevant categories is challenging due to data sparsity, imbalance, and ambiguity. This study aims to improve Intelligent Transport Systems (ITS) by enhancing short text classification of traffic-related SM data. Deep learning methods such as RNNs, CNNs, and BERT are effective at capturing context, but they can be computationally expensive, struggle with very short texts, and perform poorly with rare words. On the other hand, transfer learning leverages pre-trained knowledge but may be biased towards the pre-training domain. To address these challenges, we propose DLCTC, a novel system combining character-level, word-level, and context features with BiLSTM and TextCNN-based attention. By utilizing external knowledge, DLCTC ensures an accurate understanding of concepts and abbreviations in traffic-related short texts. BiLSTM captures context and term correlations; TextCNN captures local patterns. Multi-level attention focuses on important features across character, word, and concept levels. Experimental studies demonstrate DLCTC’s effectiveness over well-known short-text classification approaches based on CNN, RNN, and BERT. Full article
(This article belongs to the Special Issue Speech Recognition and Natural Language Processing)
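A stripped-down sketch of the two-branch idea — a BiLSTM capturing sequential context alongside TextCNN-style convolutions capturing local n-gram patterns — is given below; the real DLCTC additionally uses character-level and concept features with multi-level attention, and all sizes here are placeholders:

```python
import torch
import torch.nn as nn

class BiLSTMTextCNN(nn.Module):
    """Simplified two-branch short-text classifier in the spirit of DLCTC:
    a BiLSTM models context while parallel convolutions capture local
    n-gram patterns (the actual model is richer than this sketch)."""
    def __init__(self, vocab_size, emb_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 64, kernel_size=k) for k in (2, 3, 4)])
        self.fc = nn.Linear(2 * hidden + 3 * 64, n_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        e = self.emb(tokens)                        # (batch, seq, emb)
        _, (h, _) = self.lstm(e)
        ctx = torch.cat([h[0], h[1]], dim=-1)       # final forward/backward states
        c = e.transpose(1, 2)                       # (batch, emb, seq) for Conv1d
        local = torch.cat([conv(c).amax(dim=-1) for conv in self.convs], dim=-1)
        return self.fc(torch.cat([ctx, local], dim=-1))

model = BiLSTMTextCNN(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (8, 30)))   # 8 tweets, 30 tokens each
```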

23 pages, 2176 KiB  
Article
Robust Liu Estimator Used to Combat Some Challenges in Partially Linear Regression Model by Improving LTS Algorithm Using Semidefinite Programming
by Waleed B. Altukhaes, Mahdi Roozbeh and Nur A. Mohamed
Mathematics 2024, 12(17), 2787; https://doi.org/10.3390/math12172787 - 9 Sep 2024
Cited by 3 | Viewed by 993
Abstract
Outliers are a common problem in applied statistics, together with multicollinearity. In this paper, robust Liu estimators are introduced into a partially linear model to combat the presence of multicollinearity and outlier challenges when the error terms are not independent and some linear constraints are assumed to hold in the parameter space. The Liu estimator is used to address the multicollinearity, while robust methods are used to handle the outlier problem. In the literature on the Liu methodology, obtaining the best value for the biasing parameter plays an important role in model prediction and is still an unsolved problem. In this regard, some robust estimators of the biasing parameter are proposed based on the least trimmed squares (LTS) technique and its extensions using a semidefinite programming approach. Based on a set of observations with a sample size of n and an integer trimming parameter h ≤ n, the LTS estimator computes the hyperplane that minimizes the sum of the lowest h squared residuals. The LTS estimator is not only statistically more effective than the widely used least median of squares (LMS) estimator but also less complicated computationally. It is shown that the proposed robust extended Liu estimators perform better than classical estimators. As part of our proposal, using Monte Carlo simulation schemes and a real data example, the performance of the robust Liu estimators is compared with that of classical ones in restricted partially linear models. Full article
(This article belongs to the Special Issue Nonparametric Regression Models: Theory and Applications)
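A basic sketch of the LTS principle described above, via random starts and concentration steps, is shown below; the paper's actual contribution — robust choices of the Liu biasing parameter via semidefinite programming — is not reproduced here, and the data are hypothetical:

```python
import numpy as np

def lts_fit(X, y, h, n_starts=50, n_csteps=20, rng=np.random.default_rng(0)):
    """Least trimmed squares via random elemental starts and concentration
    steps: fit OLS on a subset, keep the h smallest squared residuals,
    refit, and iterate (a minimal sketch of the basic LTS idea)."""
    n, p = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        subset = rng.choice(n, size=p + 1, replace=False)   # elemental start
        for _ in range(n_csteps):
            beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
            r2 = (y - X @ beta) ** 2
            subset = np.argsort(r2)[:h]                     # concentration step
        obj = np.sort(r2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

# Hypothetical data with 10% gross outliers
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + 0.5 * rng.standard_normal(200)
y[:20] += 25                                                # outliers
beta_lts = lts_fit(X, y, h=int(0.75 * 200))
```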

16 pages, 476 KiB  
Article
A Comparison of Limited Information Estimation Methods for the Two-Parameter Normal-Ogive Model with Locally Dependent Items
by Alexander Robitzsch
Stats 2024, 7(3), 576-591; https://doi.org/10.3390/stats7030035 - 21 Jun 2024
Cited by 1 | Viewed by 1276
Abstract
The two-parameter normal-ogive (2PNO) model is one of the most popular item response theory (IRT) models for analyzing dichotomous items. Consistent parameter estimation of the 2PNO model using marginal maximum likelihood estimation relies on the local independence assumption. However, the assumption of local independence might be violated in practice. Likelihood-based estimation of the local dependence structure is often computationally demanding. Moreover, many IRT models that model local dependence do not have a marginal interpretation of item parameters. In this article, limited information estimation methods are reviewed that allow the convenient and straightforward handling of local dependence in estimating the 2PNO model. In detail, pairwise likelihood, weighted least squares, and normal-ogive harmonic analysis robust method (NOHARM) estimation are compared with marginal maximum likelihood estimation that ignores local dependence. A simulation study revealed that item parameters can be consistently estimated with limited information methods, whereas marginal maximum likelihood estimation resulted in biased item parameter estimates in the presence of local dependence. From a practical perspective, there were only minor differences in the statistical quality of the item parameter estimates across the different estimation methods. The estimation methods are also compared on two empirical datasets. Full article
(This article belongs to the Special Issue Statistics, Analytics, and Inferences for Discrete Data)

18 pages, 10221 KiB  
Article
Development of a DC-Biased AC-Stimulated Microfluidic Device for the Electrokinetic Separation of Bacterial and Yeast Cells
by Nuzhet Nihaar Nasir Ahamed, Carlos A. Mendiola-Escobedo, Victor H. Perez-Gonzalez and Blanca H. Lapizco-Encinas
Biosensors 2024, 14(5), 237; https://doi.org/10.3390/bios14050237 - 9 May 2024
Cited by 3 | Viewed by 1598
Abstract
Electrokinetic (EK) microsystems, which are capable of performing separations without the need for labeling analytes, are a rapidly growing area in microfluidics. The present work demonstrated three distinct binary microbial separations, computationally modeled and experimentally performed, in an insulator-based EK (iEK) system stimulated by DC-biased AC potentials. The separations had an increasing order of difficulty. First, a separation between cells of two distinct domains (Escherichia coli and Saccharomyces cerevisiae) was demonstrated. The second separation was for cells from the same domain but different species (Bacillus subtilis and Bacillus cereus). The last separation included cells from two closely related microbial strains of the same domain and the same species (two distinct S. cerevisiae strains). For each separation, a novel computational model, employing a continuous spatial and temporal function for predicting the particle velocity, was used to predict the retention time (tR,p) of each cell type, which aided the experimentation. All three cases resulted in separation resolution values Rs > 1.5, indicating complete separation between the two cell types, with good reproducibility between the experimental repetitions (deviations < 6%) and good agreement (deviations < 18%) between the predicted (tR,p) and experimental (tR,e) retention time values. This study demonstrated the potential of DC-biased AC iEK systems for performing challenging microbial separations. Full article
(This article belongs to the Special Issue Advanced Microfluidic Devices and Lab-on-Chip (Bio)sensors)

23 pages, 5414 KiB  
Article
Estimation of Temperature and Salinity from Marine Seismic Data—A Two-Step Approach
by Dwaipayan Chakraborty and Subhashis Mallick
J. Mar. Sci. Eng. 2024, 12(3), 471; https://doi.org/10.3390/jmse12030471 - 9 Mar 2024
Viewed by 1710
Abstract
Ocean-water temperature and salinity are two vital properties that are required for weather-, climate-, and marine biology-related research. These properties are usually measured using disposable instruments at sparse locations, typically from tens to hundreds of kilometers apart. Laterally interpolating these sparse measurements provides smooth temperature and salinity distributions within the oceans, although they may not be very accurate. Marine seismic data, on the other hand, show visible reflections within the water-column which are primarily controlled by subtle sound-speed variations. Because these variations are functions of the temperature, salinity, and pressure, estimating sound-speed from marine seismic data and relating it to temperature and salinity have been attempted in the past. These seismically derived properties are of much higher lateral resolution (less than 25 m) than the sparse measurements and can potentially be used for climate and marine biology research. Estimating sound-speeds from seismic data, however, requires running iterative seismic inversions, which need a good initial model. Currently practiced ways to generate this initial model are computationally challenging, labor-intensive, and subject to human error and bias. In this research, we outline an automated method to generate the initial model that is neither computationally challenging nor labor-intensive, and is not prone to human error and bias. We also use a two-step process: first estimating the sound-speed from the seismic data through inversion, and then estimating the salinity and temperature. Furthermore, by applying this method to real seismic data, we demonstrate the feasibility of our approach and discuss how the use of machine learning can further improve the computational efficiency of the method and make an impact on the future of climate modeling, weather prediction, and marine biology research. Full article
(This article belongs to the Section Physical Oceanography)

20 pages, 4189 KiB  
Article
Statistical Study of the Bias and Precision for Six Estimation Methods for the Fractal Dimension of Randomly Rough Surfaces
by Jorge Luis Flores Alarcón, Carlos Gabriel Figueroa, Víctor Hugo Jacobo, Fernando Velázquez Villegas and Rafael Schouwenaars
Fractal Fract. 2024, 8(3), 152; https://doi.org/10.3390/fractalfract8030152 - 7 Mar 2024
Cited by 5 | Viewed by 1871
Abstract
The simulation and characterisation of randomly rough surfaces is an important topic in surface science, tribology, geo- and planetary sciences, image analysis and optics. Extensions to general random processes with two continuous variables are straightforward. Several surface generation algorithms are available, and preference for one or another method often depends on the specific scientific field. The same holds for the methods to estimate the fractal dimension D. This work analyses six algorithms for the determination of D as a function of the size of the domain, variance, and the input value for D, using surfaces generated by Fourier filtering techniques and the random midpoint displacement algorithm. Several of the methods to determine fractal dimension are needlessly complex and severely biased, whereas simple and computationally efficient methods produce better results. A fine-tuned analysis of the power spectral density is very precise and shows how the different surface generation algorithms deviate from ideal fractal behaviour. For large datasets defined on equidistant two-dimensional grids, it is clearly the most sensitive and precise method to determine fractal dimension. Full article
(This article belongs to the Special Issue Fractal Analysis and Its Applications in Geophysical Science)
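To make the power-spectral-density approach concrete, the sketch below generates a surface by Fourier filtering and estimates D from the slope of the radially averaged PSD. The conversion D = (8 + slope)/2 assumes an isotropic self-affine surface and is one common convention; it is not necessarily the fine-tuned estimator used in the article:

```python
import numpy as np

def fourier_surface(n, D, rng):
    """Generate a self-affine surface by Fourier filtering: spectral
    amplitude ~ k**-(H + 1) with Hurst exponent H = 3 - D (one of the
    generation methods compared in the article)."""
    H = 3 - D
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                  # avoid division by zero at DC
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    return np.fft.ifft2(k ** -(H + 1) * phase).real

def fractal_dim_psd(z):
    """Estimate D from the slope of the radially averaged power spectral
    density; D = (8 + slope) / 2 assumes an isotropic self-affine surface."""
    n = z.shape[0]
    psd = np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2
    ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
    r = np.hypot(kx, ky).astype(int).ravel()
    counts = np.bincount(r)
    radial = np.bincount(r, weights=psd.ravel())[1:n // 2] / counts[1:n // 2]
    freqs = np.arange(1, n // 2)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return (8 + slope) / 2                         # slope is negative

rng = np.random.default_rng(3)
print(fractal_dim_psd(fourier_surface(256, D=2.5, rng=rng)))   # roughly 2.5 expected
```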

20 pages, 362 KiB  
Article
Variable Selection for Length-Biased and Interval-Censored Failure Time Data
by Fan Feng, Guanghui Cheng and Jianguo Sun
Mathematics 2023, 11(22), 4576; https://doi.org/10.3390/math11224576 - 8 Nov 2023
Cited by 1 | Viewed by 1429
Abstract
Length-biased failure time data occur often in various biomedical fields, including clinical trials, epidemiological cohort studies and genome-wide association studies, and their analyses have been attracting a surge of interest. In practical applications, because one may collect a large number of candidate covariates for the failure event of interest, variable selection becomes a useful tool to identify the important risk factors and enhance the estimation accuracy. In this paper, we consider Cox’s proportional hazards model and develop a penalized variable selection technique with various popular penalty functions for length-biased data, in which the failure event of interest suffers from interval censoring. Specifically, a computationally stable and reliable penalized expectation-maximization algorithm via two-stage data augmentation is developed to overcome the challenge in maximizing the intractable penalized likelihood. We establish the oracle property of the proposed method and present some simulation results, suggesting that the proposed method outperforms the traditional variable selection method based on the conditional likelihood. The proposed method is then applied to a set of real data arising from the Prostate, Lung, Colorectal and Ovarian cancer screening trial. The analysis results show that being African American and having immediate family members with prostate cancer significantly increase the risk of developing prostate cancer, while having diabetes is associated with a significantly lower risk. Full article
(This article belongs to the Section D1: Probability and Statistics)
14 pages, 5255 KiB  
Article
Critical Pattern Selection Method Based on CNN Embeddings for Full-Chip Optimization
by Qingyan Zhang, Junbo Liu, Ji Zhou, Chuan Jin, Jian Wang, Song Hu and Haifeng Sun
Photonics 2023, 10(11), 1186; https://doi.org/10.3390/photonics10111186 - 25 Oct 2023
Viewed by 1865
Abstract
Source mask optimization (SMO), a primary resolution enhancement technology, is one of the most pivotal technologies for enhancing lithography imaging quality. Due to the high computation complexity of SMO, patterns should be selected by a selection algorithm before optimization. However, existing selection methods have two limitations: they are computationally intensive, and they produce biased selection results. A representative method with the former limitation is the diffraction signature method, while IBM’s method based on the rigid transfer function tends to produce biased selection results. To address this problem, this study proposes a novel pattern cluster and selection algorithm architecture based on a convolutional neural network (CNN). The proposed method provides a paradigm for solving the critical pattern selection problem by using a CNN to map patterns from the source image domain to unified embeddings in a K-dimensional feature space, exhibiting higher efficiency while maintaining high accuracy. Full article
(This article belongs to the Section Data-Science Based Techniques in Photonics)
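A hedged sketch of the overall pipeline — embed layout clips with a CNN, cluster the embeddings, and keep the pattern nearest each centroid as a representative — is shown below; the encoder architecture, k-means clustering, and nearest-to-centroid rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class PatternEncoder(nn.Module):
    """Tiny CNN that maps a layout clip to a K-dimensional embedding
    (architecture is illustrative, not the paper's network)."""
    def __init__(self, k_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, k_dim))
    def forward(self, x):
        return self.features(x)

def select_critical_patterns(patterns, n_select=8):
    """Embed patterns, cluster the embeddings with k-means, and return the
    index of the pattern closest to each cluster centroid."""
    with torch.no_grad():
        emb = PatternEncoder()(patterns).numpy()
    km = KMeans(n_clusters=n_select, n_init=10, random_state=0).fit(emb)
    return [int(np.argmin(np.linalg.norm(emb - c, axis=1)))
            for c in km.cluster_centers_]

clips = torch.rand(200, 1, 64, 64)      # 200 hypothetical layout clips
print(select_critical_patterns(clips))
```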

24 pages, 2495 KiB  
Article
RayBench: An Advanced NVIDIA-Centric GPU Rendering Benchmark Suite for Optimal Performance Analysis
by Peng Wang and Zhibin Yu
Electronics 2023, 12(19), 4124; https://doi.org/10.3390/electronics12194124 - 2 Oct 2023
Cited by 2 | Viewed by 2481
Abstract
This study aims to collect GPU rendering programs and analyze their characteristics to construct a benchmark dataset that reflects the characteristics of GPU rendering programs, providing a reference basis for designing the next generation of graphics processors. The research framework includes four parts: GPU rendering program integration, data collection, program analysis, and similarity analysis. In the program integration and data collection phase, 1000 GPU rendering programs were collected from open-source repositories, and 100 representative programs were selected as the initial benchmark dataset. The program analysis phase involves instruction-level, thread-level, and memory-level analysis, as well as five machine learning algorithms for importance ranking. Finally, through Pearson similarity analysis, rendering programs with high similarity were eliminated, and the final GPU rendering program benchmark dataset was selected based on the benchmark’s comprehensiveness and representativeness. The experimental results of this study show that, due to the need to load and process texture and geometry data in rendering programs, the average global memory access efficiency is generally lower compared to the averages of the Rodinia and Parboil benchmarks. The GPU occupancy rate is related to the computationally intensive tasks of rendering programs. The efficiency of stream processor execution and thread bundle execution is influenced by branch statements and conditional judgments. Common operations such as lighting calculations and texture sampling in rendering programs require branch judgments, which reduce the execution efficiency. Bandwidth utilization is improved because rendering programs reduce frequent memory access and data transfer to the main memory through data caching and reuse. Furthermore, this study used multiple machine learning methods to rank the importance of 160 characteristics of 100 rendering programs on four different NVIDIA GPUs. Different methods demonstrate robustness and stability when facing different data distributions and characteristic relationships. By comparing the results of multiple methods, biases inherent to individual methods can be reduced, thus enhancing the reliability of the results. The contribution of this study lies in the analysis of workload characteristics of rendering programs, enabling targeted performance optimization to improve the efficiency and quality of rendering programs. By comprehensively collecting GPU rendering program data and performing characteristic analysis and importance ranking using machine learning methods, reliable reference guidelines are provided for GPU design. This is of significant importance in driving the development of rendering technology. Full article
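The similarity-analysis step can be illustrated with a simple greedy filter over Pearson correlations between the programs' characteristic vectors; this is a sketch under assumed data shapes, not the paper's exact procedure:

```python
import numpy as np

def prune_similar(features, names, threshold=0.9):
    """Drop programs whose characteristic vectors are nearly redundant:
    greedily keep a program only if its Pearson correlation with every
    already-kept program is below the threshold."""
    corr = np.corrcoef(features)            # programs x programs correlation
    kept = []
    for i in range(len(names)):
        if all(abs(corr[i, j]) < threshold for j in kept):
            kept.append(i)
    return [names[i] for i in kept]

# Hypothetical: 100 programs, each described by 160 profiled characteristics
rng = np.random.default_rng(4)
feats = rng.standard_normal((100, 160))
feats[1] = feats[0] + 0.01 * rng.standard_normal(160)   # a near-duplicate program
print(len(prune_similar(feats, [f"prog_{i}" for i in range(100)])))
```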

27 pages, 13847 KiB  
Article
Reconstructing 42 Years (1979–2020) of Great Lakes Surface Temperature through a Deep Learning Approach
by Miraj B. Kayastha, Tao Liu, Daniel Titze, Timothy C. Havens, Chenfu Huang and Pengfei Xue
Remote Sens. 2023, 15(17), 4253; https://doi.org/10.3390/rs15174253 - 30 Aug 2023
Cited by 3 | Viewed by 2317
Abstract
Accurate estimates for the lake surface temperature (LST) of the Great Lakes are critical to understanding the regional climate. Dedicated lake models of various complexity have been used to simulate LST but they suffer from noticeable biases and can be computationally expensive. Additionally, the available historical LST datasets are limited by either short temporal coverage (<30 years) or lower spatial resolution (0.25° × 0.25°). Therefore, in this study, we employed a deep learning model based on Long Short-Term Memory (LSTM) neural networks to produce a daily LST dataset for the Great Lakes that spans an unparalleled 42 years (1979–2020) at a spatial resolution of ~1 km. In our dataset, the Great Lakes are represented by ~33,000 unstructured grid points and the LSTM training incorporated the information from each grid point. The LSTM was trained with seven meteorological variables from reanalysis data as feature variables and the LST from a historical satellite-derived dataset as the target variable. The LSTM was able to capture the spatial heterogeneity of LST in the Great Lakes well and exhibited high correlation (≥0.92) and low bias (limited to ±1.5 °C) for the temporal evolution of LST during the training (1995–2020) and testing (1979–1994) periods. Full article
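A minimal sketch of the modeling setup — an LSTM mapping a window of seven daily meteorological features at a grid point to that day's lake surface temperature — is shown below; the window length, layer sizes, and training details are placeholders rather than the authors' configuration:

```python
import torch
import torch.nn as nn

class LSTRegressor(nn.Module):
    """LSTM that maps a window of daily meteorological forcings (7 features
    per day) at one grid point to lake surface temperature (a minimal
    sketch; layer sizes and window length are placeholders)."""
    def __init__(self, n_features=7, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                        # x: (batch, days, 7)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)  # LST on the final day

model = LSTRegressor()
x = torch.randn(64, 30, 7)                       # 64 grid points, 30-day windows
target = torch.randn(64)                         # satellite-derived LST (stand-in)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
```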
