Search Results (44)

Search Parameters:
Keywords = Human Visual System (HVS)

15 pages, 1662 KB  
Article
YOLO-HVS: Infrared Small Target Detection Inspired by the Human Visual System
by Xiaoge Wang, Yunlong Sheng, Qun Hao, Haiyuan Hou and Suzhen Nie
Biomimetics 2025, 10(7), 451; https://doi.org/10.3390/biomimetics10070451 - 8 Jul 2025
Cited by 4 | Viewed by 1371
Abstract
To address the challenges of background interference and limited multi-scale feature extraction in infrared small target detection, this paper proposes a YOLO-HVS detection algorithm inspired by the human visual system. Based on YOLOv8, we design a multi-scale spatially enhanced attention module (MultiSEAM) using multi-branch depthwise-separable convolution to suppress background noise and enhance occluded targets, integrating local details and global context. Meanwhile, the C2f_DWR (dilation-wise residual) module, with its regional-semantic dual residual structure, is designed to significantly improve the efficiency of capturing multi-scale contextual information through dilated convolution and a two-step feature extraction mechanism. We construct the DroneRoadVehicles dataset containing 1028 infrared images captured at 70–300 m, covering complex occlusion and multi-scale targets. Experiments show that YOLO-HVS achieves an mAP50 of 83.4% and 97.8% on the public DroneVehicle dataset and the self-built dataset, respectively, improvements of 1.1% and 0.7% over the baseline YOLOv8, while model parameters grow by only 2.3 M and GFLOPs by only 0.1 G. The experimental results demonstrate that the proposed approach exhibits enhanced robustness in detecting targets under severe occlusion and low-SNR conditions, while enabling efficient real-time infrared small target detection.
(This article belongs to the Special Issue Advanced Biologically Inspired Vision and Its Application)
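For readers unfamiliar with the building block named above, here is a minimal PyTorch sketch of a multi-branch depthwise-separable attention module in the spirit of MultiSEAM. The branch kernel sizes, the sigmoid fusion, and the class names are illustrative assumptions, not the published design.

```python
# Sketch only: a multi-branch depthwise-separable attention block (assumed design).
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, channels, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MultiBranchAttentionSketch(nn.Module):
    """Three depthwise-separable branches at different kernel sizes, fused into
    an attention map that re-weights the input features (fusion rule assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            [DepthwiseSeparable(channels, k) for k in (3, 5, 7)])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        attn = torch.sigmoid(self.fuse(multi))   # values in (0, 1)
        return x * attn                          # re-weighted features

x = torch.randn(1, 64, 80, 80)
print(MultiBranchAttentionSketch(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```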

19 pages, 25413 KB  
Article
No-Reference Image Quality Assessment with Moving Spectrum and Laplacian Filter for Autonomous Driving Environment
by Woongchan Nam, Taehyun Youn and Chunghun Ha
Vehicles 2025, 7(1), 8; https://doi.org/10.3390/vehicles7010008 - 21 Jan 2025
Cited by 4 | Viewed by 1922
Abstract
The increasing integration of autonomous driving systems into modern vehicles heightens the significance of Image Quality Assessment (IQA), as it pertains directly to vehicular safety. In this context, the development of metrics that can emulate the Human Visual System (HVS) in assessing image quality assumes critical importance. Given that blur is often the primary aberration in images captured by aging or deteriorating camera sensors, this study introduces a No-Reference (NR) IQA model termed BREMOLA (Blind/Referenceless Model via Moving Spectrum and Laplacian Filter). This model is designed to sensitively respond to varying degrees of blur in images. BREMOLA employs the Fourier transform to quantify the decline in image sharpness associated with increased blur. Subsequently, deviations in the Fourier spectrum arising from factors such as nighttime lighting or the presence of various objects are normalized using the Laplacian filter. Experimental application of the BREMOLA model demonstrates its capability to differentiate between images processed with a 3 × 3 average filter and their unprocessed counterparts. Additionally, the model effectively mitigates the variance introduced in the Fourier spectrum due to variables like nighttime conditions, object count, and environmental factors. Thus, BREMOLA presents a robust approach to IQA in the specific context of autonomous driving systems.
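A minimal sketch of the general idea described above: score blur from the Fourier spectrum, then normalize scene-dependent spectrum variation with a Laplacian response. The exact formulas, the "moving spectrum" step, and the file name below are assumptions, not BREMOLA itself.

```python
# Sketch of a Fourier-spectrum blur score normalized by Laplacian activity (assumed).
import cv2
import numpy as np

def fourier_high_freq_ratio(gray, radius_frac=0.25):
    """Share of spectral energy outside a low-frequency disk; drops as blur grows."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.abs(f)
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    low = mag[r <= radius_frac * min(h, w)].sum()
    return 1.0 - low / mag.sum()

def laplacian_energy(gray):
    """Variance of the Laplacian, a standard scene-activity proxy."""
    return cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F).var()

def blur_score(gray):
    # Hypothetical normalization: spectral ratio divided by Laplacian activity.
    return fourier_high_freq_ratio(gray) / (laplacian_energy(gray) + 1e-9)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
blurred = cv2.blur(img, (3, 3))   # the 3x3 average filter mentioned in the abstract
print(blur_score(img), blur_score(blurred))
```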

24 pages, 12882 KB  
Article
Infrared Small Target Detection Based on Weighted Improved Double Local Contrast Measure
by Han Wang, Yong Hu, Yang Wang, Long Cheng, Cailan Gong, Shuo Huang and Fuqiang Zheng
Remote Sens. 2024, 16(21), 4030; https://doi.org/10.3390/rs16214030 - 30 Oct 2024
Cited by 8 | Viewed by 2910
Abstract
The robust detection of infrared small targets plays an important role in infrared early warning systems. However, the high-brightness interference present in the background makes it challenging. To solve this problem, we propose a weighted improved double local contrast measure (WIDLCM) algorithm in this paper. Firstly, we utilize a fixed-scale three-layer window to compute the double neighborhood gray difference to screen candidate target pixels and estimate the target size. Then, according to the size information of each candidate target pixel, an improved double local contrast measure (IDLCM) based on the gray difference is designed to enhance the target and suppress the background. Next, considering the structural characteristics of the target edge, we propose the variance-based weighting coefficient to eliminate clutter further. Finally, the targets are detected by an adaptive threshold. Extensive experimental results demonstrate that our method outperforms several state-of-the-art methods.
(This article belongs to the Section Remote Sensing Image Processing)
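A simplified single-scale sketch of the core contrast idea behind measures like the one above: compare a center cell of a three-layer window with its surrounding ring of cells. The paper's gray-difference screening, size estimation, and variance weighting are more elaborate; the cell size and contrast rule here are assumptions.

```python
# Sketch of a center-vs-ring local contrast map (assumed simplification of WIDLCM).
import numpy as np

def local_contrast(img, cell=3):
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(cell, h - 2 * cell):
        for x in range(cell, w - 2 * cell):
            center = img[y:y + cell, x:x + cell].mean()
            ring = []
            for dy in (-cell, 0, cell):
                for dx in (-cell, 0, cell):
                    if dy or dx:
                        ring.append(img[y + dy:y + dy + cell,
                                        x + dx:x + dx + cell].mean())
            # Contrast is positive only when the center outshines every neighbor,
            # which enhances small bright targets and suppresses edges.
            out[y, x] = max(0.0, center - max(ring))
    return out
```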

18 pages, 11943 KB  
Article
Efficient Image Details Preservation of Image Processing Pipeline Based on Two-Stage Tone Mapping
by Weijian Xu, Yuyang Cai, Feng Qian, Yuan Hu and Jingwen Yan
Mathematics 2024, 12(10), 1592; https://doi.org/10.3390/math12101592 - 20 May 2024
Cited by 1 | Viewed by 3133
Abstract
Converting a camera’s RAW image to an RGB format for human perception involves an imaging pipeline composed of a series of processing modules. Existing modules often incur varying degrees of original-information loss, which can render the reverse imaging pipeline unable to recover the original RAW image information. To this end, this paper proposes a new, almost reversible imaging pipeline, so that RGB images and RAW images can be effectively converted into each other. Considering the impact of original-information loss, this paper introduces a two-stage tone mapping operation (TMO). In the first stage, the RAW image with a linear response is transformed into an RGB color image. In the second stage, color scale mapping corrects the dynamic range of the image to suit human perception through linear stretching and reduces the loss of information to which the human eye is sensitive during integer quantization, effectively preserving the original image’s dynamic information. The DCRAW imaging pipeline handles highlight overflow by directly clipping highlights; the proposed imaging pipeline instead constructs an independent highlight processing module that preserves the highlight information of the image. The experimental results demonstrate that the two-stage tone mapping operation embedded in the proposed imaging processing pipeline ensures that the image output is suitable for human visual system (HVS) perception and retains more original image information.
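A hedged sketch of a two-stage tone mapping of the kind described above: stage one maps the linear RAW response to display-referred RGB, stage two linearly stretches the dynamic range before integer quantization. The gamma curve, percentile choices, and highlight handling are assumptions, not the paper's transfer functions.

```python
# Sketch of a two-stage TMO (assumed curves, not the published pipeline).
import numpy as np

def stage1_raw_to_rgb(raw_linear, gamma=1 / 2.2):
    """Linear RAW (0..1) to display-referred RGB via a simple gamma curve (assumed)."""
    return np.clip(raw_linear, 0.0, 1.0) ** gamma

def stage2_color_scale(rgb, low_pct=0.5, high_pct=99.5):
    """Linear stretch between robust percentiles, then quantize to 8 bits."""
    lo, hi = np.percentile(rgb, [low_pct, high_pct])
    stretched = np.clip((rgb - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return np.round(stretched * 255).astype(np.uint8)   # the integerization step

raw = np.random.rand(256, 256, 3)          # stand-in for demosaiced linear RAW
out8 = stage2_color_scale(stage1_raw_to_rgb(raw))
```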

19 pages, 5498 KB  
Article
Integral Imaging Display System Based on Human Visual Distance Perception Model
by Lijin Deng, Zhihong Li, Yuejianan Gu and Qi Wang
Sensors 2023, 23(21), 9011; https://doi.org/10.3390/s23219011 - 6 Nov 2023
Cited by 4 | Viewed by 2790
Abstract
In an integral imaging (II) display system, the self-adjustment ability of the human eye can result in blurry observations when viewing 3D targets outside the focal plane within a specific range. This can impact the overall imaging quality of the II system. This research examines the visual characteristics of the human eye and analyzes the path of light from a point source to the eye in the process of capturing and reconstructing the light field. Then, an overall depth of field (DOF) model of II is derived based on the human visual system (HVS). On this basis, an II system based on the human visual distance (HVD) perception model is proposed, and an interactive II display system is constructed. The experimental results confirm the effectiveness of the proposed method. The display system improves the viewing distance range, enhances spatial resolution and provides better stereoscopic display effects. When comparing our method with three other methods, it is clear that our approach produces better results in optical experiments and objective evaluations: the cumulative probability of blur detection (CPBD) value is 38.73%, the structural similarity index (SSIM) value is 86.56%, and the peak signal-to-noise ratio (PSNR) value is 31.12. These values align with subjective evaluations based on the characteristics of the human visual system.
(This article belongs to the Collection 3D Imaging and Sensing System)
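Two of the objective scores quoted above (SSIM, PSNR) are standard metrics that can be reproduced with scikit-image as below; CPBD requires a dedicated implementation and is omitted. The arrays here are stand-ins, not the paper's reconstructed views.

```python
# Quick reproduction of the style of objective evaluation reported above.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(128, 128)                     # stand-in reference view
test = ref + 0.01 * np.random.randn(128, 128)      # stand-in reconstructed view

print("PSNR:", peak_signal_noise_ratio(ref, test, data_range=1.0))
print("SSIM:", structural_similarity(ref, test, data_range=1.0))
```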

19 pages, 4882 KB  
Article
No-Reference Image Quality Assessment Based on a Multitask Image Restoration Network
by Fan Chen, Hong Fu, Hengyong Yu and Ying Chu
Appl. Sci. 2023, 13(11), 6802; https://doi.org/10.3390/app13116802 - 3 Jun 2023
Cited by 5 | Viewed by 3498
Abstract
When image quality is evaluated, the human visual system (HVS) infers the details in the image through its internal generative mechanism. In this process, the HVS integrates both local and global information about the image, utilizes contextual information to restore the original image information, and compares it with the distorted image information for image quality evaluation. Inspired by this mechanism, a no-reference image quality assessment method is proposed based on a multitask image restoration network. The multitask image restoration network generates a pseudo-reference image as the main task and produces a structural similarity index measure map as an auxiliary task. By mutually promoting the two tasks, a higher-quality pseudo-reference image is generated. In addition, when predicting the image quality score, both the quality restoration features and the difference features between the distorted and reference images are used, thereby fully utilizing the information from the pseudo-reference image. In order to facilitate the model’s ability to extract both global and local features, we introduce a multi-scale feature fusion module. Experimental results demonstrate that the proposed method achieves excellent performance on both synthetically and authentically distorted databases.
(This article belongs to the Special Issue Artificial Neural Network Applications in Pattern Recognition)
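The auxiliary task above predicts an SSIM map. A per-pixel SSIM map of the kind that could serve as such a training target is available directly from scikit-image; how the paper actually generates its maps is not specified in this listing.

```python
# Computing a per-pixel SSIM map (a plausible auxiliary training target).
import numpy as np
from skimage.metrics import structural_similarity

reference = np.random.rand(128, 128)                   # pseudo-reference (stand-in)
distorted = reference + 0.05 * np.random.randn(128, 128)

score, ssim_map = structural_similarity(
    reference, distorted, data_range=1.0, full=True)   # full=True returns the map
print(score, ssim_map.shape)                           # scalar SSIM and a (128, 128) map
```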

20 pages, 4338 KB  
Article
Using HVS Dual-Pathway and Contrast Sensitivity to Blindly Assess Image Quality
by Fan Chen, Hong Fu, Hengyong Yu and Ying Chu
Sensors 2023, 23(10), 4974; https://doi.org/10.3390/s23104974 - 22 May 2023
Cited by 4 | Viewed by 2676
Abstract
Blind image quality assessment (BIQA) aims to evaluate image quality in a way that closely matches human perception. To achieve this goal, the strengths of deep learning and the characteristics of the human visual system (HVS) can be combined. In this paper, inspired by the ventral pathway and the dorsal pathway of the HVS, a dual-pathway convolutional neural network is proposed for BIQA tasks. The proposed method consists of two pathways: the “what” pathway, which mimics the ventral pathway of the HVS to extract the content features of distorted images, and the “where” pathway, which mimics the dorsal pathway of the HVS to extract the global shape features of distorted images. Then, the features from the two pathways are fused and mapped to an image quality score. Additionally, gradient images weighted by contrast sensitivity are used as the input to the “where” pathway, allowing it to extract global shape features that are more sensitive to human perception. Moreover, a dual-pathway multi-scale feature fusion module is designed to fuse the multi-scale features of the two pathways, enabling the model to capture both global features and local details, thus improving the overall performance of the model. Experiments conducted on six databases show that the proposed method achieves state-of-the-art performance.
(This article belongs to the Section Sensing and Imaging)
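A sketch of a "contrast-sensitivity-weighted gradient" input of the kind described above: a gradient magnitude image whose spectrum is re-weighted with the classic Mannos-Sakrison contrast sensitivity function. The paper's exact CSF and weighting are not given in this listing, so the CSF choice and the frequency mapping below are assumptions.

```python
# Sketch: Sobel gradient magnitude re-weighted in the Fourier domain by a CSF.
import cv2
import numpy as np

def csf_mannos_sakrison(f):
    """CSF(f) = 2.6 (0.0192 + 0.114 f) exp(-(0.114 f)^1.1), f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weighted_gradient(gray, cycles_per_degree_scale=30.0):
    g = gray.astype(np.float64)
    gx = cv2.Sobel(g, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_64F, 0, 1)
    grad = np.hypot(gx, gy)
    h, w = grad.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Crude mapping from normalized frequency to cycles/degree (assumed).
    f = np.hypot(fy, fx) * cycles_per_degree_scale
    return np.fft.ifft2(np.fft.fft2(grad) * csf_mannos_sakrison(f)).real
```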

17 pages, 2000 KB  
Article
PW-360IQA: Perceptually-Weighted Multichannel CNN for Blind 360-Degree Image Quality Assessment
by Abderrezzaq Sendjasni and Mohamed-Chaker Larabi
Sensors 2023, 23(9), 4242; https://doi.org/10.3390/s23094242 - 24 Apr 2023
Cited by 9 | Viewed by 2613
Abstract
Image quality assessment of 360-degree images is still in its early stages, especially when it comes to solutions that rely on machine learning. There are many challenges to be addressed related to training strategies and model architecture. In this paper, we propose a perceptually weighted multichannel convolutional neural network (CNN) using a weight-sharing strategy for 360-degree IQA (PW-360IQA). Our approach involves extracting visually important viewports based on several visual scan-path predictions, which are then fed to a multichannel CNN using DenseNet-121 as the backbone. In addition, we account for users’ exploration behavior and human visual system (HVS) properties by using information regarding visual trajectory and distortion probability maps. Inter-observer variability is integrated by leveraging different visual scan-paths to enrich the training data. PW-360IQA is designed to learn the local quality of each viewport and its contribution to the overall quality. We validate our model on two publicly available datasets, CVIQ and OIQA, and demonstrate that it performs robustly. Furthermore, the adopted strategy considerably reduces complexity compared to the state-of-the-art, enabling the model to attain comparable, if not better, results at a much lower computational cost.
(This article belongs to the Special Issue Deep Learning-Based Image and Signal Sensing and Processing)
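A skeleton of the weight-sharing multichannel idea described above: one DenseNet-121 backbone scores every viewport, and a second head predicts each viewport's contribution to the overall quality. The head sizes and the softmax-weighted aggregation rule are assumptions, not the published PW-360IQA design.

```python
# Skeleton: shared backbone over N viewports, local quality + contribution heads.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class MultiViewportIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = densenet121(weights=None).features  # shared across viewports
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.quality = nn.Linear(1024, 1)    # local quality per viewport
        self.weight = nn.Linear(1024, 1)     # perceptual weight per viewport

    def forward(self, viewports):            # (batch, n_views, 3, H, W)
        b, n, c, h, w = viewports.shape
        feat = self.pool(self.backbone(viewports.reshape(b * n, c, h, w)))
        feat = feat.flatten(1).reshape(b, n, -1)
        q = self.quality(feat).squeeze(-1)                      # (b, n)
        wgt = torch.softmax(self.weight(feat).squeeze(-1), 1)   # sums to 1 over views
        return (q * wgt).sum(dim=1)                             # overall score

model = MultiViewportIQA()
print(model(torch.randn(2, 5, 3, 224, 224)).shape)   # torch.Size([2])
```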

15 pages, 5694 KB  
Article
Full-Reference Image Quality Assessment with Transformer and DISTS
by Pei-Fen Tsai, Huai-Nan Peng, Chia-Hung Liao and Shyan-Ming Yuan
Mathematics 2023, 11(7), 1599; https://doi.org/10.3390/math11071599 - 26 Mar 2023
Cited by 2 | Viewed by 6527
Abstract
To improve data transmission efficiency, image compression is a commonly used method, with the disadvantage of accompanying image distortion. There are many image restoration (IR) algorithms, and among the most advanced are generative adversarial network (GAN)-based methods, which correlate highly with the human visual system (HVS). To evaluate the performance of GAN-based IR algorithms, we propose an ensemble image quality assessment (IQA) model called ATDIQA (Auxiliary Transformer with DISTS IQA), which weights the multiscale global self-attention features of a transformer and the local convolutional neural network (CNN) features of DISTS. The resulting model not only performs better on the perceptual image processing algorithms (PIPAL) dataset, whose images come from GAN IR algorithms, but also generalizes well to LIVE and TID2013 as traditional distorted-image datasets. The ATDIQA ensemble successfully demonstrates its performance with a high correlation with human judgment scores of distorted images.

16 pages, 8732 KB  
Article
Just Noticeable Difference Model for Images with Color Sensitivity
by Zhao Zhang, Xiwu Shang, Guoping Li and Guozhong Wang
Sensors 2023, 23(5), 2634; https://doi.org/10.3390/s23052634 - 27 Feb 2023
Cited by 6 | Viewed by 5727
Abstract
The just noticeable difference (JND) model reflects the visibility limitations of the human visual system (HVS), plays an important role in perceptual image/video processing, and is commonly applied to perceptual redundancy removal. However, existing JND models are usually constructed by treating the color components of the three channels equally, and their estimation of the masking effect is inadequate. In this paper, we introduce visual saliency and color sensitivity modulation to improve the JND model. Firstly, we comprehensively combine contrast masking, pattern masking, and edge protection to estimate the masking effect. Then, the visual saliency of the HVS is taken into account to adaptively modulate the masking effect. Finally, we build color sensitivity modulation according to the perceptual sensitivities of the HVS to adjust the sub-JND thresholds of the Y, Cb, and Cr components, yielding the color-sensitivity-based JND model (CSJND). Extensive experiments and subjective tests were conducted to verify the effectiveness of the CSJND model. We found that consistency between the CSJND model and the HVS was better than that of existing state-of-the-art JND models.
(This article belongs to the Special Issue Image/Signal Processing and Machine Vision in Sensing Applications)
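JND models of this family commonly start from Chou and Li's luminance-adaptation threshold; a sketch of that classic baseline is below. The CSJND model itself adds pattern masking, saliency modulation, and per-channel color sensitivity scaling not reproduced here; the chroma scaling factors in the example are explicitly hypothetical.

```python
# Classic Chou-Li style luminance-adaptation JND baseline (not the CSJND model).
import numpy as np

def luminance_adaptation_jnd(background_luma):
    """Visibility threshold as a function of background luminance (0..255)."""
    bg = background_luma.astype(np.float64)
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0   # higher threshold in dark areas
    bright = (3.0 / 128.0) * (bg - 127.0) + 3.0       # slowly rising in bright areas
    return np.where(bg <= 127, dark, bright)

jnd_y = luminance_adaptation_jnd(np.full((8, 8), 60.0))
# Hypothetical color-sensitivity modulation: chroma tolerates larger errors than luma.
jnd_cb, jnd_cr = 1.5 * jnd_y, 1.3 * jnd_y   # scaling factors are assumptions
```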

14 pages, 3959 KB  
Article
Image Interpolation Based on Spiking Neural Network Model
by Mürsel Ozan İncetaş
Appl. Sci. 2023, 13(4), 2438; https://doi.org/10.3390/app13042438 - 14 Feb 2023
Cited by 8 | Viewed by 3003
Abstract
Image interpolation is used in many areas of image processing. Many techniques developed to date have succeeded in both preserving edges and increasing image quality. However, these techniques generally detect edges with gradient-based linear calculations. In this study, spiking neural networks (SNNs), which are known to successfully simulate the human visual system (HVS), are used instead of the gradient to detect edge pixels. With the help of the proposed SNN-based model, the pixels marked as edges are interpolated with a 1D directional filter. For the remaining pixels, the standard bicubic interpolation technique is used. Additionally, the success of the proposed method is compared to that of known methods using various metrics. The experimental results show that the proposed method is more successful than the other methods.
(This article belongs to the Special Issue Advances in Digital Image Processing)
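To make the SNN idea concrete, here is a toy leaky integrate-and-fire (LIF) sketch: a neuron driven by the local intensity difference spikes sooner on edges than in flat regions, so early spike latency marks edge pixels. The parameters and the input coding are illustrative only, not the paper's SNN model.

```python
# Toy LIF edge response: lower first-spike latency indicates a stronger edge.
import numpy as np

def lif_edge_response(gray, steps=50, leak=0.9, threshold=1.0):
    g = gray.astype(np.float64) / 255.0
    # Input current: maximum absolute difference to the 4-neighborhood.
    pad = np.pad(g, 1, mode="edge")
    h, w = g.shape
    diffs = [np.abs(g - pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx])
             for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))]
    current = np.max(diffs, axis=0)

    v = np.zeros_like(g)                     # membrane potential
    first_spike = np.full(g.shape, steps)    # spike latency per pixel
    for t in range(steps):
        v = leak * v + current               # leaky integration of the input
        fired = (v >= threshold) & (first_spike == steps)
        first_spike[fired] = t
        v[v >= threshold] = 0.0              # reset after spiking
    return first_spike                       # threshold this to pick edge pixels
```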

13 pages, 4174 KB  
Article
Multi-Scale Strengthened Directional Difference Algorithm Based on the Human Vision System
by Yuye Zhang, Ying Zheng and Xiuhong Li
Sensors 2022, 22(24), 10009; https://doi.org/10.3390/s222410009 - 19 Dec 2022
Cited by 3 | Viewed by 2402
Abstract
The human visual system (HVS) mechanism has been successfully introduced into the field of infrared small target detection. However, most of the current detection algorithms based on the mechanism of the human visual system ignore the continuous direction information and are easily disturbed by highlight noise and object edges. In this paper, a multi-scale strengthened directional difference (MSDD) algorithm is proposed. It is mainly divided into two parts: local directional intensity measure (LDIM) and local directional fluctuation measure (LDFM). In LDIM, an improved window is used to suppress most edge clutter, highlights, and holes and enhance true targets. In LDFM, the characteristics of the target area, the background area, and the connection between the target and the background are considered, which further highlights the true target signal and suppresses the corner clutter. Then, the MSDD saliency map is obtained by fusing the LDIM map and the LDFM map. Finally, an adaptive threshold segmentation method is employed to capture true targets. The experiments show that the proposed method achieves better detection performance in complex backgrounds than several classical and widely used methods.
(This article belongs to the Section Intelligent Sensors)
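The final "adaptive threshold segmentation" step named above is typically the standard mean-plus-k-sigma rule used throughout the infrared small target literature; a sketch is below, with k a tunable constant whose value here is assumed.

```python
# Standard adaptive threshold on a saliency map: T = mu + k * sigma.
import numpy as np

def adaptive_threshold_segment(saliency_map, k=4.0):
    mu, sigma = saliency_map.mean(), saliency_map.std()
    t = mu + k * sigma
    return saliency_map > t    # boolean mask of detected target pixels
```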

18 pages, 1541 KB  
Article
No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System
by Md Mehedi Hasan, Md. Ariful Islam, Sejuti Rahman, Michael R. Frater and John F. Arnold
Appl. Sci. 2022, 12(19), 10090; https://doi.org/10.3390/app121910090 - 7 Oct 2022
Cited by 4 | Viewed by 2436
Abstract
Provisioning stereoscopic 3D (S3D) video transmission services of admissible quality in a wireless environment is an immense challenge for video service providers. Unlike for 2D videos, a widely accepted no-reference objective model for assessing transmitted 3D videos that appropriately explores the Human Visual System (HVS) has not yet been developed. Distortions perceived in 2D and 3D videos are significantly different due to the sophisticated manner in which the HVS handles the dissimilarities between the two views. In real-time video transmission, viewers only have the distorted, receiver-end content of the original video acquired through the communication medium. In this paper, we propose a no-reference quality assessment method that can estimate the quality of a stereoscopic 3D video based on the HVS. By evaluating perceptual aspects and correlations of visual binocular impacts in a stereoscopic video, the approach allows the objective quality measure to assess impairments similarly to a human observer experiencing the same material. Firstly, the disparity is measured and quantified by a region-based similarity matching algorithm; then, the magnitude of the edge difference is calculated to delimit the visually perceptible areas of an image. Finally, an objective metric is approximated by extracting these significant perceptual image features. Experimental analysis with standard S3D video datasets demonstrates the lower computational complexity for the video decoder, and comparison with state-of-the-art algorithms shows the efficiency of the proposed approach for 3D video transmission at different quantization (QP 26 and QP 32) and loss rate (1% and 3% packet loss) parameters, along with the perceptual distortion features.
(This article belongs to the Special Issue Computational Intelligence in Image and Video Analysis)
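The two measurements named above can be illustrated with standard OpenCV tools: a block-matching disparity between the views and an edge-difference magnitude. OpenCV's StereoBM stands in for the paper's own region-based matching, and the file names are hypothetical.

```python
# Sketch: disparity via block matching plus an edge-difference map between views.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical view pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Disparity via block matching (stand-in for the region-based similarity matcher).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float64) / 16.0  # fixed-point to pixels

# Edge-difference magnitude between the two views.
edges_left = cv2.Canny(left, 100, 200)
edges_right = cv2.Canny(right, 100, 200)
edge_diff = np.abs(edges_left.astype(np.int16) - edges_right.astype(np.int16))

print(disparity.mean(), edge_diff.mean())
```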

19 pages, 3897 KB  
Article
A No-Reference Quality Assessment Method for Screen Content Images Based on Human Visual Perception Characteristics
by Yuxin Hong, Caihong Wang and Xiuhua Jiang
Electronics 2022, 11(19), 3155; https://doi.org/10.3390/electronics11193155 - 1 Oct 2022
Cited by 1 | Viewed by 2038
Abstract
The widespread application of screen content images (SCIs) has met the needs of remote display and online working. Quality assessment for SCIs is a challenging and worthwhile research topic. However, existing methods focus on extracting handcrafted features to predict image quality, which are subjective and incomplete, or they lack good interpretability. To overcome these problems, we propose an effective quality assessment method for SCIs based on human visual perceptual characteristics. The proposed method simulates the multi-channel working mechanism of the human visual system (HVS) through pyramid decomposition and the information extraction process of the brain with the help of dictionary learning and sparse coding. The input SCIs are first decomposed at multiple scales, and then dictionary learning and sparse coding are applied to the images at each scale. Furthermore, the sparse representation results are analyzed from multiple perspectives. First, a pooling scheme based on the generalized Gaussian distribution and the log-normal distribution is designed to describe the sparse coefficients with and without zero values, respectively. Then, the sparse coefficients are used to characterize the energy characteristics. Additionally, the probability of each atom is calculated to describe the statistical properties of SCIs. Since the above process only deals with brightness, color-related features are also added to make the model more general and robust. Experimental results on three public SCI databases show that the proposed method can achieve better performance than existing methods.
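A sketch of the dictionary-learning and sparse-coding stage described above, using scikit-learn. The pyramid decomposition, the GGD/log-normal pooling, and the exact statistics are not reproduced; the patch and dictionary sizes are assumptions.

```python
# Sketch: learn a patch dictionary, sparse-code patches, and gather basic statistics.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

gray = np.random.rand(128, 128)                        # stand-in for one pyramid level
patches = extract_patches_2d(gray, (8, 8), max_patches=2000)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)         # remove per-patch DC

dico = MiniBatchDictionaryLearning(n_components=64,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5)
codes = dico.fit(patches).transform(patches)           # sparse coefficients

# Statistics of the kind a pooling stage could work from:
nonzero = codes[codes != 0]                            # coefficients with nonzero values
atom_usage = (codes != 0).mean(axis=0)                 # empirical probability per atom
print(nonzero.std(), atom_usage[:5])
```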

21 pages, 2099 KB  
Article
A Human Visual System Inspired No-Reference Image Quality Assessment Method Based on Local Feature Descriptors
by Domonkos Varga
Sensors 2022, 22(18), 6775; https://doi.org/10.3390/s22186775 - 7 Sep 2022
Cited by 9 | Viewed by 4090
Abstract
Objective quality assessment of natural images plays a key role in many fields related to imaging and sensor technology. This paper therefore introduces an innovative quality-aware feature extraction method for no-reference image quality assessment (NR-IQA). To be more specific, a sequence of various HVS-inspired filters is applied to the color channels of an input image to enhance those statistical regularities in the image to which the human visual system is sensitive. From the obtained feature maps, the statistics of a wide range of local feature descriptors are extracted to compile quality-aware features, since they treat images from the human visual system’s point of view. To prove the efficiency of the proposed method, it was compared to 16 state-of-the-art NR-IQA techniques on five large benchmark databases, i.e., CLIVE, KonIQ-10k, SPAQ, TID2013, and KADID-10k. It is demonstrated that the proposed method is superior to the state-of-the-art in terms of three different performance indices.
(This article belongs to the Special Issue Advanced Measures for Imaging System Performance and Image Quality)
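One concrete example of the "local feature descriptor statistics" mentioned above is a local binary pattern (LBP) histogram, computable with scikit-image as below. Which descriptors and filters the paper actually combines is not detailed in this listing.

```python
# Sketch: an LBP histogram as a quality-aware feature vector for one channel.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(channel, points=8, radius=1):
    lbp = local_binary_pattern(channel, points, radius, method="uniform")
    n_bins = points + 2    # uniform patterns plus the "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

features = lbp_histogram((np.random.rand(64, 64) * 255).astype(np.uint8))
print(features.shape)      # (10,)
```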
