1. Introduction
Digital games have transformed the entertainment industry and are increasingly integrated into educational and technological domains. Their interactive nature enhances user engagement and fosters cognitive development, such as flexible thinking and strategic problem-solving. Technological advancements have propelled these digital experiences to new heights [1,2,3,4], offering both functional applications and opportunities for learning and personal growth.
A feature that distinguishes digital games is interactivity; users can make decisions and actively engage within virtual environments. This dynamic interaction cultivates adaptable thinking, enabling players to navigate complex scenes and apply diverse problem-solving strategies [5,6]. In such environments, adaptation requires attending to environmental cues, and these challenges are amplified by intricate design and elegant visuals and sounds [4,5].
Supersampling is a widely adopted method to reduce aliasing and improve visual quality [7,8,9]. However, traditional implementations rely on fixed settings and suffer from performance bottlenecks and poor hardware adaptability.
Figure 1 illustrates a supersampled scene in the developed game, showing enhanced detail from 1920 × 1080 to 6000 × 6000 resolution. A key limitation in existing scaling algorithms is their inability to adapt in real time to changing performance needs. This often results in degraded visuals or unstable frame rates in demanding environments. To address this, we propose CPOY SR (Continuous Procedural Output Yielder for Scaling Resolution), a dynamic resolution scaling algorithm that adjusts resolution during gameplay based on system load, without requiring manual intervention.
The key contributions include a dynamic resolution scaling technique using Mathf.Clamp to ensure numerical stability, evaluation across multiple hardware configurations and resolutions (2K, 4K), and consistent visual fidelity with improved performance in real-time use.
This paper is structured as follows: Section 2 reviews prior work; Section 3 presents the algorithm; Section 4 describes the experimental results; Section 5 concludes with directions for future work.
Traditional rendering samples pixels at native resolution, often introducing artifacts in high-contrast scenes. Supersampling mitigates this by averaging multiple samples per pixel and then downscaling, resulting in smoother images. While resource-intensive, modern hardware and optimization techniques make it feasible. Our goal is to maintain superior graphics at a minimum of 30 FPS, leveraging Shader Model 5.0. To further optimize performance, occlusion culling was implemented as well: only objects visible in the camera's view are rendered, reducing memory load during Garbage Collection (GC).
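To make the averaging step concrete, the following is a minimal sketch of a supersampled downscale, assuming a row-major float RGB buffer and an integer factor n; these assumptions are for illustration only and do not reproduce the paper's implementation.

```csharp
// Minimal sketch of supersampling's downscale step: each output pixel is the
// average of an n x n block of high-resolution samples. The row-major float
// RGB layout and the factor n are illustrative assumptions.
static class SupersampleSketch
{
    public static float[] Downsample(float[] hiRes, int hiWidth, int hiHeight, int n)
    {
        int loWidth = hiWidth / n, loHeight = hiHeight / n;
        var loRes = new float[loWidth * loHeight * 3];
        for (int y = 0; y < loHeight; y++)
        for (int x = 0; x < loWidth; x++)
        {
            float r = 0f, g = 0f, b = 0f;
            for (int sy = 0; sy < n; sy++)      // accumulate the n x n block
            for (int sx = 0; sx < n; sx++)      // of high-resolution samples
            {
                int i = ((y * n + sy) * hiWidth + (x * n + sx)) * 3;
                r += hiRes[i]; g += hiRes[i + 1]; b += hiRes[i + 2];
            }
            int o = (y * loWidth + x) * 3;
            float inv = 1f / (n * n);
            loRes[o] = r * inv; loRes[o + 1] = g * inv; loRes[o + 2] = b * inv;
        }
        return loRes;
    }
}
```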
2. Related Work
Resolution scaling algorithms in graphics rendering are designed to dynamically adjust image resolution to optimize performance and visual quality. A common issue in many implementations is that the scaling percentage fails to exceed 100%. This often stems from the algorithm not fully utilizing available computational resources or lacking optimization for parallel processing and advanced hardware, leading to performance bottlenecks.
These algorithms also face difficulties with complex visual content, such as large textures, dynamic lighting, or rapidly changing scenes. If they cannot adapt to such variations, their ability to scale effectively is limited.
Many games and certain algorithms also lack real-time adaptability: they cannot dynamically adjust to changing computational demands. Modern applications and graphics-intensive tasks require algorithms that scale resolution based on available resources and user preferences; algorithms without this adaptability fail to respond to the evolving demands of the system.
Dynamic Super Resolution (DSR) enhances visual fidelity in gaming by rendering at higher-than-native resolutions and downscaling to the display size. This improves image quality across textures, shadows, effects, anti-aliasing, ambient occlusion, and geometric detail. However, its effectiveness can vary, and some in-game elements may not show noticeable improvement, retaining their original resolution despite scaling. To ensure consistent visual enhancement, developers must continuously optimize game scenes and strike a balance between image quality and performance [10,11]. DSR's performance also depends on hardware capabilities, requiring thorough testing across a range of systems, from low- to high-end rigs. Understanding these limitations helps developers make informed decisions about implementing resolution scaling in their games. Developers must also consider the dynamic nature of gaming environments; continuous refinement of these algorithms can help mitigate artifacts or inconsistencies that arise in such scenarios, ensuring a seamless and immersive experience for the player.
Variable Supersampling Rate (VSR) is a rendering technique that dynamically adjusts in-game resolution to balance image quality and performance. Unlike traditional settings, VSR adapts in real time to scene complexity and available hardware. While it offers significant benefits on high-performance systems, VSR can strain lower-end PCs, increasing computational demands and reducing frame rates.
To address this, users can often customize VSR settings, adjusting target resolution, scaling factors, and more to suit their hardware. However, this customization can be challenging on low-end systems, and not all games offer these settings. Lack of support or poor implementation may lead to compatibility issues, bugs, or glitches, limiting VSR's effectiveness in certain titles [12].
Passionate users with experience in resolution scaling often focus on maximizing sharpening effects, though these typically offer fewer parameters to fine-tune compared to the broader adjustments in scaling. This can feel limiting for those used to more nuanced control. As a result, users must adapt to the available tools and explore supplementary or complementary settings to enhance visual quality. Providing feedback to developers for greater customization is also important, potentially influencing future updates to better match user preferences.
Sharpening effect techniques such as Fast Approximate Anti-Aliasing (FXAA), Morphological Anti-Aliasing (MLAA), Subpixel Morphological Anti-Aliasing (SMAA), and Temporal Anti-Aliasing (TAA) do improve image quality; however, they operate in different ways and have distinct effects on resolution scaling, and they can degrade the per-pixel detail shown on screen [9,13,14].
Fast Approximate Anti-Aliasing (FXAA) is a post-processing technique that smooths jagged edges and reduces aliasing by analyzing the final rendered image and applying a filter to reduce high-frequency noise. Unlike traditional methods, FXAA does not rely on high-resolution sampling. While it effectively improves image quality, it has limitations in resolution scaling. When resolution is scaled down, FXAA may still smooth edges, but overall image quality suffers due to fewer pixels. FXAA's artificial extension of pixel size is not a substitute for native resolution; the benefits are limited, and it cannot fully compensate for the loss of pixel information. Furthermore, FXAA struggles with transparency and alpha-tested textures, often producing artifacts or insufficient smoothing in areas of partial transparency, as the technique operates on the final image without access to underlying geometric or sub-pixel data [7,13,15,16,17,18]. In our game, the implemented algorithm works especially well at higher resolutions, keeping edges clean without significant loss of detail; at lower resolutions there is less pixel information to utilize, but transparency is preserved.
Morphological Anti-Aliasing (MLAA) is another anti-aliasing technique that smooths jagged edges, similar to FXAA. It operates by analyzing the pixels in an image and applying a morphological filter along detected edges. While MLAA can enhance image quality by reducing aliasing, it does not inherently contribute to resolution scaling: it applies a blur to the pixels along edges to smooth out jagged lines, and this blur is not a substitute for native resolution. It cannot fully compensate for the loss of pixel information that occurs when the native resolution is not utilized to its full potential [9,19,20,21].
Subpixel Morphological Anti-Aliasing (SMAA) uses color and depth data from neighboring pixels to apply anti-aliasing selectively. Temporal Anti-Aliasing (TAA), meanwhile, incorporates data from previous frames to smooth transitions, reducing flickering or shimmering on moving objects and producing a more stable image. However, both SMAA and TAA face limitations with respect to resolution scaling: the information available to them comes from the lower-resolution image, so when that image is scaled up, the algorithms cannot recover finer details that were never captured [2,4,13].
In a review of existing literature on supersampling, several techniques were identified for improving visual quality, such as Dynamic Super Resolution (DSR) and Variable Supersampling Rate (VSR), as well as various sharpening effects like Fast Approximate Anti-Aliasing (FXAA), Morphological Anti-Aliasing (MLAA), Subpixel Morphological Anti-Aliasing (SMAA), and Temporal Anti-Aliasing (TAA). While these methods are relevant to the domain of graphics rendering, they were not incorporated into the CPOY SR algorithm. They are discussed and mentioned here solely for descriptive and contextual purposes to highlight the distinctions of the proposed method. A key limitation of traditional anti-aliasing techniques is their inability to fully compensate for the loss of pixel information that occurs during resolution scaling. For example, FXAA operates by filtering the final rendered image to smooth jagged edges, but its effectiveness diminishes at lower resolutions where there is less pixel data to work with. Similarly, MLAA applies a blur to smooth edges but is not a substitute for native resolution. TAA and SMAA also face limitations, as the information available to them is based on a lower-resolution image, which may not capture the fine details needed when scaled up. Unlike these methods, the CPOY SR algorithm dynamically adjusts the resolution itself based on system load, providing a real-time solution to balance visual fidelity and performance.
3. Proposed Method
The advantage of using “Scaling Resolution” is the ability to both decrease and increase resolution on the fly, without exiting the game. This allows scaling up to 4K or higher. As the resolution increases procedurally, the algorithm settles on the appropriate resolution for each screen.
In Figure 2, starting from a fixed base matrix resolution of 432 × 768, resolution scaling smooths and interpolates the distribution of pixels on the screen. Scaling this base matrix to fit a different screen resolution involves calculating the appropriate pixel values for the new camera dimensions while maintaining the visual integrity of the original content. Keeping 432 × 768 as a fixed reference resolution that runs identically on every screen requires fixed dimensions.
Each screen resolution is different, but each scales by a percentage factor of 2× or 3× as the value increases (see Table 1, Figure 3).
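As a minimal illustration of fitting the fixed base matrix to a target screen, consider the sketch below; the uniform-fit rule it uses is an assumed simplification for illustration, while Table 1 defines the actual 2×/3× mapping used by the algorithm.

```csharp
// Illustrative sketch: uniformly scale the fixed 432 x 768 base matrix so it
// fits a target screen while preserving its aspect ratio. The uniform-fit
// rule is an assumption for illustration, not the paper's Table 1 mapping.
static class BaseResolutionFit
{
    const float BaseW = 432f, BaseH = 768f;

    public static (float width, float height) Fit(float targetW, float targetH)
    {
        // Choose the largest uniform factor that still fits both axes.
        float scale = System.Math.Min(targetW / BaseW, targetH / BaseH);
        return (BaseW * scale, BaseH * scale);
    }
}
```

For a 1366 × 768 laptop screen, for example, Fit(1366, 768) returns 432 × 768 (a factor of 1.0), since the vertical axis is the binding constraint.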
Figure 4 illustrates the proposed algorithm, CPOY SR (Continuous Procedural Output Yielder for Scaling Resolution), for dynamic, real-time resolution scaling. The more detailed the pixels rendered on screen, the heavier the load on graphics card memory: with many pixels, performance slows, while with fewer pixels it runs more smoothly. For example, a screen with 1920 × 1080 native resolution can run at 5508 × 7972, at or beyond the ~3840 × 2160 of 4K. For a laptop screen with a resolution of 1366 × 768, the scaled resolution can take decimal values, e.g., 756.3 × 562.1, depending on the screen's native resolution.
In Figure 4, the Mathf.Clamp function plays a key role in maintaining numerical stability. This function ensures that a given value remains within a specified minimum and maximum range, preventing computational artifacts caused by values exceeding expected bounds. Applied to the scaling factors on resolution dimensions, Mathf.Clamp preserves image fidelity by restricting scale adjustments to a defined safe range, avoiding the distortions and performance degradation that extreme upscaling or downscaling can cause. This is particularly important in adaptive systems where resolution changes dynamically in response to runtime conditions and user input.
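A minimal Unity C# sketch of this clamping step follows; the bounds (0.5–4.0) and the 30 FPS frame-time heuristic are illustrative assumptions, not the exact CPOY SR parameters.

```csharp
using UnityEngine;

// Illustrative sketch of clamped, load-driven resolution scaling in Unity.
// The bounds and the frame-time heuristic are assumptions for illustration.
public class ClampedResolutionScaler : MonoBehaviour
{
    const float MinScale = 0.5f, MaxScale = 4.0f;   // assumed safe range
    float scale = 1.0f;
    float appliedScale = 1.0f;

    void Update()
    {
        // Nudge the factor down when a frame misses the 30 FPS budget
        // (33.3 ms) and up when there is headroom.
        float frameMs = Time.unscaledDeltaTime * 1000f;
        scale += frameMs > 33.3f ? -0.05f : 0.01f;

        // Mathf.Clamp keeps the factor inside the safe range, preventing
        // extreme upscaling or downscaling from destabilizing the output.
        scale = Mathf.Clamp(scale, MinScale, MaxScale);

        if (Mathf.Abs(scale - appliedScale) >= 0.05f)   // avoid per-frame churn
        {
            appliedScale = scale;
            int w = Mathf.RoundToInt(Screen.currentResolution.width * scale);
            int h = Mathf.RoundToInt(Screen.currentResolution.height * scale);
            Screen.SetResolution(w, h, FullScreenMode.FullScreenWindow);
        }
    }
}
```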
One of the strongest features is the ability to function across diverse hardware, from low-end laptops to high-resolution displays, making it accessible and effective in a wide range of computing environments. Users operate a wide spectrum of devices with varying GPU capabilities, memory limits, and processing power. By dynamically adjusting resolution scaling parameters in real time, the CPOY SR algorithm ensures smooth gameplay and stable frame rates even on entry-level systems, without significantly compromising visual fidelity. On high-performance systems, the algorithm leverages available hardware resources to maximize graphical detail and responsiveness. This cross-platform robustness reduces the need for manual configuration or hardware-specific optimization, widening accessibility for developers and end-users alike.
Bilinear and bicubic filters are popular choices for their ability to interpolate pixel values and create smoother images [22]. While it is tempting to apply them as on-screen image filters, using them to resize the entire screen has drawbacks. The primary issue is that these filters are designed to enhance image quality (a sharpening effect) by introducing intermediate pixel values; resizing the screen reduces the number of pixels, and the interpolation these filters perform is not well suited to that task. The filters may blur or misrepresent details, leading to a loss of clarity and visual fidelity.
The challenge relates to the inherent limitations of these methods at large scaling factors. The algorithms listed are commonly employed for resizing images, each with characteristics suited to specific scenarios: nearest-neighbor interpolation can produce blocky artifacts, while bicubic interpolation tends to be smoother but can introduce blurring.
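For reference, below is a minimal sketch of bilinear interpolation on a single-channel, row-major image; it illustrates the intermediate-pixel estimation discussed above and is not the filter used by CPOY SR.

```csharp
// Illustrative bilinear interpolation for a single-channel image stored
// row-major; a minimal sketch of the technique, not the CPOY SR filter.
static class BilinearSketch
{
    public static float Sample(float[] img, int width, int height, float u, float v)
    {
        // Map normalized coordinates (0..1) into pixel space.
        float x = u * (width - 1), y = v * (height - 1);
        int x0 = (int)x, y0 = (int)y;
        int x1 = System.Math.Min(x0 + 1, width - 1);
        int y1 = System.Math.Min(y0 + 1, height - 1);
        float fx = x - x0, fy = y - y0;

        // Blend the four neighbors by their fractional distances; this is
        // the intermediate-pixel estimation that can blur fine detail.
        float top    = img[y0 * width + x0] * (1 - fx) + img[y0 * width + x1] * fx;
        float bottom = img[y1 * width + x0] * (1 - fx) + img[y1 * width + x1] * fx;
        return top * (1 - fy) + bottom * fy;
    }
}
```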
To achieve precise native pixel resolution for this project, it was worth comparing and exploring different techniques. Examining the impact of source image characteristics and experimenting with different combinations and parameters helped identify the most suitable approach for the algorithm's use case.
Inaccurate pixel values can introduce noticeable discrepancies in the interpolated results, leading to color banding, aliasing, or other undesirable effects.
To address these challenges, advanced interpolation techniques often incorporate higher-order polynomials or more sophisticated algorithms to enhance the accuracy of pixel estimation, particularly in regions where visual fidelity is critical.
Dynamic selection among resolutions empowers users to switch seamlessly based on their specific needs or preferences, fostering flexibility and customization in how information is presented on screen and providing a more extensive canvas to work with.
This dynamic capability is valuable in various contexts, such as graphic design, gaming, or multimedia content creation, where users require different resolutions for optimal visual clarity and performance.
The project aims to support multiple PC hardware configurations, ensuring compatibility with a range of display devices, from compact laptops to large, high-resolution monitors. This adaptability is important in modern computing environments, where users interact with various devices interchangeably. In practical terms, dynamically reconfiguring screen resolution promotes a user-friendly experience: it eliminates tedious manual adjustments and simplifies the process. Whether engaged in detailed photo editing, immersive gaming, or managing multiple applications concurrently, the ability to dynamically select resolutions enhances productivity and fosters a more personalized and efficient computing environment.
One may encounter challenges in configuring the camera's dimensions and field of view. The field-of-view parameter determines the extent of the observable environment through the camera lens, and fine-tuning it for different desired perspectives is necessary.
To address camera size parameters, resolution must be correctly reconfigured: if the screen is too large in the vertical direction, objects appear with inaccurate dimensions, and the same holds in the horizontal direction. It is therefore necessary to specify the number of pixels along both the X and Y axes.
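A hedged Unity C# sketch of this camera reconfiguration follows; explicitly assigning Camera.aspect is an illustrative choice (Unity otherwise derives it from the current viewport), and the default values are assumptions.

```csharp
using UnityEngine;

// Illustrative sketch: lock the camera's aspect ratio to the chosen pixel
// dimensions so objects keep accurate proportions after a resolution change.
[RequireComponent(typeof(Camera))]
public class CameraAspectLock : MonoBehaviour
{
    public int pixelsX = 1366;   // pixels along the X axis (assumed default)
    public int pixelsY = 768;    // pixels along the Y axis (assumed default)
    public float baseFov = 60f;  // vertical field of view in degrees

    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.aspect = (float)pixelsX / pixelsY;  // prevent stretched objects
        cam.fieldOfView = baseFov;
    }
}
```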
4. Experiments
These experiments involved testing on various hardware configurations, including different PCs and laptops, to ensure optimal performance and compatibility and to identify potential hardware-specific bottlenecks. Delivering a seamless and immersive experience required rigorous testing with different game resources, graphics rendering capabilities, and overall efficiency levels, taking into account input devices and system requirements. This comprehensive approach aimed to create a project that not only met technical standards but also provided a satisfying and enjoyable gameplay experience for a diverse audience, ensuring that the algorithm performs optimally on a variety of hardware setups while maintaining a high level of visual and interactive quality [23,24,25,26].
Testing RAM behavior during resolution scaling at different resolutions, such as 2K, 4K, and beyond, means examining how efficiently the RAM handles the increased workload of rendering content at higher resolutions.
For instance, when scaling a 1080p image to a 2K display, the algorithm did not require much additional processing power to ensure a clear and sharp image. At higher resolutions, such as 4K (3840 × 2160), the algorithm remains stable and does not require much additional memory to store graphical data for smooth rendering. Its performance under different conditions, including resolution changes, provides valuable insight into the algorithm's capabilities.
To ensure the reliability of the findings, each experiment was conducted under controlled conditions and repeated ten times across all hardware configurations (Intel i3-4005U, Intel i5-4570, and Intel i9-10900X). The system was tested with no other resource-intensive applications running in the background, only the application itself in game with the scene loaded. The performance metrics, including FPS, were recorded over a 20 s duration. To present the most stable and representative data, the figures and accompanying analysis in this paper use the average values from these repeated tests. This approach minimizes the impact of transient system fluctuations and provides a robust assessment of the CPOY SR algorithm's performance across different hardware setups.
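As a sketch of how such an average-FPS reading can be taken in Unity over the 20 s window described above (the coroutine structure and logging are illustrative assumptions, not the paper's exact instrumentation):

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch of the measurement protocol: average FPS over a 20 s
// window. The structure and logging are assumptions for illustration.
public class FpsProbe : MonoBehaviour
{
    IEnumerator Start()
    {
        const float window = 20f;   // seconds, matching the protocol above
        int frames = 0;
        float elapsed = 0f;
        while (elapsed < window)
        {
            yield return null;                  // wait one rendered frame
            elapsed += Time.unscaledDeltaTime;
            frames++;
        }
        Debug.Log($"Average FPS over {window:F0} s: {frames / elapsed:F1}");
    }
}
```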
The figures present a comparative analysis of in-game resource utilization across three distinct rendering resolutions: standard (1366 × 768), 1K, and 2K. The time graphs in the first parts of the figures illustrate the dynamic processing workload throughout in-game execution.
Figure 5 on 1K and 2K Resolutions: Experiments on a laptop with an Intel i3-4005U CPU showed that scaling from the standard 1366 × 768 resolution to 1K and then 2K increased RAM usage. The RAM usage increased from 42% at standard resolution to 48% at 1K, and then to 52% at 2K, with a corresponding change in CPU core load.
Figure 6 and Figure 7 on 4K and 8K Resolutions: A test on a PC with an Intel i5-4570 CPU running on a 3840 × 2160 screen showed the impact of scaling to 4K and 8K. RAM usage increased from 5626 MB to 5802 MB at 4K and to 6628 MB at 8K, out of 32,659 MB total RAM. Similarly, a more powerful PC with an Intel i9-10900X CPU running the same 3840 × 2160 screen saw RAM usage increase from 5844 MB to 5984 MB at 4K and to 6555 MB at 8K, out of 97,982 MB total RAM.
Regarding FPS on an Intel i3-4005U laptop, the average FPS at the native resolution (1366 × 768) was 60 FPS. When the resolution was scaled up to 1K, the average FPS dropped to 23.9, and at 2K, it further decreased to 12.3. On a medium-end PC with an Intel i5-4570 CPU, the average FPS was 60 at the native resolution (3840 × 2160), 44.7 at 4K, and 17.3 at 8K. This data quantifies the performance trade-offs at higher resolutions, moving beyond qualitative observations of stability. The goal of maintaining a minimum of 30 FPS was achieved for resolutions up to 4K on the high-end PC with the Intel i9-10900X CPU, which had an average FPS of 59.6 at native resolution and 47.6 at 4K. This provides a direct measure of the method’s performance.
The algorithm's latency graph is color-coded as follows: performance is excellent when latency is <2 ms (blue), very good when <4 ms (dark blue), good when <8 ms (yellow), acceptable when <12 ms (orange), and at the stuttering threshold when >12 ms (red). Although this metric display was designed for a specific type of hardware, it was also tested on multiple computers, demonstrating applicability across various systems. The experimental evaluation was carried out on machines with different configurations to assess the algorithm's performance and stability. These tests confirmed the metric's expected outcome and provided valuable insight into its consistency across practical scenarios, making it a strong candidate for broader deployment and benchmarking.
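These latency bands translate directly into a simple classifier; the sketch below encodes the thresholds stated above, with illustrative type and method names not taken from the paper's code.

```csharp
// Direct encoding of the latency bands described above; the names are
// illustrative assumptions.
static class LatencySketch
{
    public enum Band { Excellent, VeryGood, Good, Acceptable, Stuttering }

    public static Band Classify(float latencyMs)
    {
        if (latencyMs < 2f)  return Band.Excellent;   // blue
        if (latencyMs < 4f)  return Band.VeryGood;    // dark blue
        if (latencyMs < 8f)  return Band.Good;        // yellow
        if (latencyMs < 12f) return Band.Acceptable;  // orange
        return Band.Stuttering;                       // red
    }
}
```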
Resolution scaling algorithms can impact processing on the display side: the amount of data that must be processed and stored in RAM may introduce latency, affecting the responsiveness of game loading and potentially impeding the overall gaming experience. These algorithms are responsible for dynamically adjusting the resolution based on the rendering workload and available resources, which can significantly affect processing requirements and memory utilization.
Factors such as excessive game memory load introduce additional computational overhead and can degrade visual fidelity. Finding the right balance between resolution, texture quality, and other visual parameters is a delicate optimization process, and improper configurations may lead to suboptimal performance or visual artifacts.
Figure 8 provides a quantitative analysis of image quality. The image quality test was conducted on a PC with an Intel i5-4570 CPU running on a 3840 × 2160 screen. The results show a Peak Signal-to-Noise Ratio (PSNR) of 21.16 dB and a Structural Similarity Index (SSIM) of 0.9675. The PSNR metric measures the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. In simpler terms, a higher PSNR value indicates a lower amount of noise or degradation in the image. The SSIM metric, on the other hand, evaluates image quality based on perceived changes in structural information, brightness, and contrast. A value close to 1.0 indicates a very high degree of similarity between the test image and the reference image, suggesting minimal loss of visual quality. The high SSIM score of 0.9675 shows that the CPOY SR algorithm maintains the visual integrity of the image even with resolution scaling.
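For reference, the standard definitions of these metrics, as commonly given in the image quality literature (not reproduced from the paper), are:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathit{MAX}_I^{2}}{\mathrm{MSE}}\right),
\qquad
\mathrm{SSIM}(x,y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
                          {(\mu_x^{2} + \mu_y^{2} + C_1)(\sigma_x^{2} + \sigma_y^{2} + C_2)}
```

Here, MAX_I is the maximum possible pixel value, MSE is the mean squared error between the test and reference images, μ and σ denote local means, variances, and covariance, and C1 and C2 are small constants that stabilize the division.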
5. Conclusions and Future Work
This article presented an analysis of a resolution scaling algorithm based on memory load in a multimedia game project. While resolution scaling algorithms are powerful tools for enhancing performance in graphics-intensive applications, they come with trade-offs that must be weighed from many perspectives. The dynamic adjustments these algorithms introduce can exert notable influence on processing demands, RAM storage requirements, and the overall gaming experience.
Striking the right algorithmic balance in resolution scaling can be an intensive task in a project; trading off visual quality against performance is a critical challenge for developers in the evolving landscape of gaming technology. With well-optimized algorithms, much more can be achieved.
Thorough testing across a spectrum of use cases, incorporating feedback from in-game scenarios, and embracing advancements in hardware technology allow the resolution scaling algorithm not only to exceed the 100% threshold but also to enhance the visual experience and facilitate optimal performance across a wide range of applications [2,27,28,29].
We considered introducing post-processing, such as denoising images during resolution scaling, but this is a computationally intensive task that often requires significant resources and memory. Denoising involves reducing the noise and artifacts present in the original image, a complex operation that typically uses filtering techniques to estimate pixel values and to differentiate between actual detail and noise. Memory usage was a particular concern: high-resolution images require large amounts of memory to store and subsequently filter. To address these challenges, researchers and engineers often employ optimization techniques, parallel processing, and hardware acceleration to speed up denoising.
Despite these efforts, denoising remains a resource-intensive computational operation. To avoid sacrificing memory, it was not implemented in this project.