
Search Results (26)

Search Parameters:
Keywords = low color information dependency

26 pages, 16392 KiB  
Article
TOSD: A Hierarchical Object-Centric Descriptor Integrating Shape, Color, and Topology
by Jun-Hyeon Choi, Jeong-Won Pyo, Ye-Chan An and Tae-Yong Kuc
Sensors 2025, 25(15), 4614; https://doi.org/10.3390/s25154614 - 25 Jul 2025
Viewed by 273
Abstract
This paper introduces a hierarchical object-centric descriptor framework called TOSD (Triplet Object-Centric Semantic Descriptor). The goal of this method is to overcome the limitations of existing pixel-based and global feature embedding approaches. To this end, the framework adopts a hierarchical representation that is explicitly designed for multi-level reasoning. TOSD combines shape, color, and topological information without depending on predefined class labels. The shape descriptor captures the geometric configuration of each object. The color descriptor focuses on internal appearance by extracting normalized color features. The topology descriptor models the spatial and semantic relationships between objects in a scene. These components are integrated at both object and scene levels to produce compact and consistent embeddings. The resulting representation covers three levels of abstraction: low-level pixel details, mid-level object features, and high-level semantic structure. This hierarchical organization makes it possible to represent both local cues and global context in a unified form. We evaluate the proposed method on multiple vision tasks. The results show that TOSD performs competitively compared to baseline methods, while maintaining robustness in challenging cases such as occlusion and viewpoint changes. The framework is applicable to visual odometry, SLAM, object tracking, global localization, scene clustering, and image retrieval. In addition, this work extends our previous research on the Semantic Modeling Framework, which represents environments using layered structures of places, objects, and their ontological relations.
(This article belongs to the Special Issue Event-Driven Vision Sensor Architectures and Application Scenarios)
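The object-level fusion described in the abstract — combining shape, color, and topology descriptors into one compact, consistent embedding — might be sketched as follows. This is only a minimal illustration of the general idea: the fixed-size inputs, the L2-normalized concatenation, and the function name are assumptions for illustration, not TOSD's actual formulation.

```python
import numpy as np

def fuse_object_descriptor(shape_vec, color_vec, topo_vec):
    """Fuse per-object shape, color, and topology descriptors into one
    compact embedding by L2-normalizing each part (so no modality
    dominates) and concatenating. Illustrative sketch only."""
    parts = []
    for v in (shape_vec, color_vec, topo_vec):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        parts.append(v / n if n > 0 else v)  # guard against zero vectors
    emb = np.concatenate(parts)
    return emb / np.linalg.norm(emb)  # unit-norm final embedding
```

Unit-norm embeddings of this kind can be compared with a simple dot product, which is convenient for the retrieval and localization tasks the abstract mentions.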

22 pages, 4021 KiB  
Article
Image Characteristic-Guided Learning Method for Remote-Sensing Image Inpainting
by Ying Zhou, Xiang Gao, Xinrong Wu, Fan Wang, Weipeng Jing and Xiaopeng Hu
Remote Sens. 2025, 17(13), 2132; https://doi.org/10.3390/rs17132132 - 21 Jun 2025
Viewed by 408
Abstract
Inpainting noisy remote-sensing images can reduce the cost of acquiring remote-sensing images (RSIs). Since RSIs contain complex land structure features and concentrated obscured areas, existing inpainting methods often produce color inconsistency and structural smoothing when applied to RSIs with a high missing ratio. To address these problems, inspired by tensor recovery, a lightweight image Inpainting Generative Adversarial Network (GAN) method combining low-rankness and local-smoothness (IGLL) is proposed. IGLL utilizes the low-rankness and local-smoothness characteristics of RSIs to guide the deep-learning inpainting. Based on the strong low-rankness characteristic of RSIs, IGLL fully utilizes the background information for foreground inpainting and constrains the consistency of the key ranks. Based on the local-smoothness characteristic of RSIs, learnable edges and structure priors are designed to enhance the non-smoothness (sharpness) of the results. Specifically, the generator of IGLL consists of a pixel-level reconstruction net (PIRN) and a perception-level reconstruction net (PERN). In PIRN, the proposed global attention module (GAM) establishes long-range pixel dependencies. GAM performs precise normalization and avoids overfitting. In PERN, the proposed flexible feature similarity module (FFSM) computes the similarity between background and foreground features and selects a reasonable feature for recovery. Compared with existing works, FFSM improves the fineness of feature matching. To avoid over-smoothing in the results, both the generator and discriminator utilize the structure priors and learnable edges to regularize large concentrated missing regions. Additionally, IGLL incorporates mathematical constraints into deep-learning models: a singular value decomposition (SVD) loss term is proposed to model the low-rankness characteristic, and it constrains feature consistency. Extensive experiments demonstrate that the proposed IGLL performs favorably against state-of-the-art methods in terms of reconstruction quality and computation cost, especially on RSIs with high mask ratios. Moreover, our ablation studies reveal the effectiveness of GAM, FFSM, and the SVD loss. Source code is publicly available on GitHub.
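An SVD loss of the kind the abstract describes — constraining the singular-value spectrum of a recovered feature matrix to match that of a reference, thereby encoding low-rankness — could take roughly the following form. This NumPy sketch is one plausible reading: the top-k truncation, the L1 penalty, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def svd_lowrank_loss(pred, target, k=8):
    """Penalize mismatch between the leading singular values of predicted
    and target feature matrices; matching spectra encourages the prediction
    to share the target's (low-rank) structure."""
    s_pred = np.linalg.svd(pred, compute_uv=False)[:k]   # descending order
    s_true = np.linalg.svd(target, compute_uv=False)[:k]
    return float(np.abs(s_pred - s_true).mean())
```

In a training loop, a term like this would be added to the usual pixel and adversarial losses with a small weight.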

29 pages, 18881 KiB  
Article
A Novel Entropy-Based Approach for Thermal Image Segmentation Using Multilevel Thresholding
by Thaweesak Trongtirakul, Karen Panetta, Artyom M. Grigoryan and Sos S. Agaian
Entropy 2025, 27(5), 526; https://doi.org/10.3390/e27050526 - 14 May 2025
Viewed by 730
Abstract
Image segmentation is a fundamental challenge in computer vision, transforming complex image representations into meaningful, analyzable components. While entropy-based multilevel thresholding techniques, including Otsu, Shannon, fuzzy, Tsallis, Renyi, and Kapur approaches, have shown potential in image segmentation, they encounter significant limitations when processing thermal images, such as poor spatial resolution, low contrast, lack of color and texture information, and susceptibility to noise and background clutter. This paper introduces a novel adaptive unsupervised entropy algorithm (A-Entropy) to enhance multilevel thresholding for thermal image segmentation. Our key contributions include (i) an image-dependent thermal enhancement technique specifically designed for thermal images to improve visibility and contrast in regions of interest, (ii) a so-called A-Entropy concept for unsupervised thermal image thresholding, and (iii) a comprehensive evaluation using the Benchmarking IR Dataset for Surveillance with Aerial Intelligence (BIRDSAI). Experimental results demonstrate the superiority of our proposal compared to other state-of-the-art methods on the BIRDSAI dataset, which comprises both real and synthetic thermal images with substantial variations in scale, contrast, background clutter, and noise. Comparative analysis indicates improved segmentation accuracy and robustness compared to traditional entropy-based methods. The framework’s versatility suggests promising applications in brain tumor detection, optical character recognition, thermal energy leakage detection, and face recognition.
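For reference, the classical Kapur approach named in the abstract selects a threshold by maximizing the sum of the entropies of the two histogram classes it induces. A minimal single-level sketch (the paper's A-Entropy is an adaptive extension of ideas like this, not this code):

```python
import numpy as np

def kapur_threshold(hist):
    """Single-level Kapur entropy thresholding on a grayscale histogram:
    pick the threshold t maximizing H(class below t) + H(class above t)."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue  # empty class: entropy undefined
        p0, p1 = p[:t] / w0, p[t:] / w1   # within-class distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

Multilevel variants search over tuples of thresholds with the same objective, which is where metaheuristic optimizers usually enter.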

28 pages, 2925 KiB  
Review
Cationized Cellulose Materials: Enhancing Surface Adsorption Properties Towards Synthetic and Natural Dyes
by Arvind Negi
Polymers 2025, 17(1), 36; https://doi.org/10.3390/polym17010036 - 27 Dec 2024
Cited by 6 | Viewed by 2437
Abstract
Cellulose is a homopolymer composed of β-glucose units linked by 1,4-beta linkages in a linear arrangement, providing its structure with intermolecular H-bonding networking and crystallinity. The participation of hydroxy groups in the H-bonding network results in a low-to-average nucleophilicity of cellulose, which is insufficient for executing a nucleophilic reaction. Importantly, as a polyhydroxy biopolymer, cellulose has a high proportion of hydroxy groups in secondary and primary forms, giving it limited aqueous solubility that is highly dependent on its form, size, and other material properties. Therefore, cellulose materials are generally known for their low reactivity and limited aqueous solubility and usually undergo aqueous medium-assisted pretreatment methods. The cationization of cellulose materials is one such pretreatment, which introduces a positive charge over the surface, improving its accessibility towards anionic group-containing molecules or application-targeted functionalization. The chemistry of cellulose cationization has been widely explored, leading to the development of various building blocks for different material-based applications. Specifically, in coloration applications, cationized cellulose materials have been extensively studied, as the dyeing process benefits from the enhanced ionic interactions with anionic groups (such as sulfate, carboxylic, or phenolic groups), minimizing or eliminating the need for chemical auxiliaries. This study provides insights into the chemistry of cellulose cationization that can benefit material, polymer, textile, and color chemists, detailing how cationization enhances the reactivity of cellulose fibers during processing.
(This article belongs to the Special Issue Reactive and Functional Biopolymers)

19 pages, 2461 KiB  
Article
Optimization of Breeding Tools in Quinoa (Chenopodium quinoa) and Identification of Suitable Breeding Material for NW Europe
by Tim Vleugels, Chris Van Waes, Ellen De Keyser and Gerda Cnops
Plants 2025, 14(1), 3; https://doi.org/10.3390/plants14010003 - 24 Dec 2024
Viewed by 1090
Abstract
Quinoa (Chenopodium quinoa) cultivation has become increasingly popular in NW Europe, but little is known about the performance of contract-free varieties in this region. In this study, we phenotyped 25 quinoa varieties on a single-plant basis in a field trial in Belgium. In addition, we optimized breeding tools such as NIRS (near-infrared reflectance spectroscopy) to estimate seed crude protein content and a multiplex PCR set to identify true F1 progeny from pair crosses. We identified 14 varieties with sufficiently early maturity, 17 varieties with plant height below 150 cm, 21 large-seeded varieties, four varieties with a crude protein content exceeding 15%, and two low-saponin varieties. A variety of seed colors and plant morphological traits was observed. Seed yield was not correlated with maturity, plant height, or saponin content, but was negatively correlated with seed crude protein content. NIRS could accurately predict seed crude protein content with a determination coefficient of 0.94. Our multiplex SSR set could correctly identify the paternity in 77% to 97% of progeny, depending on the pair cross. In conclusion, our study identified various contract-free varieties that may be suitable for cultivation in NW Europe. In addition, our study provides valuable phenotypic information and breeding tools that breeders can harness for breeding efforts in NW European quinoa.
(This article belongs to the Special Issue Genomics-Assisted Improvement of Quinoa)
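Paternity identification with an SSR marker set, as used above, typically rests on allele exclusion: at each locus, one of the offspring's two alleles must be attributable to the mother and the other must be present in the candidate father. A minimal sketch — the genotype encoding (one `(allele_a, allele_b)` pair per locus) and the function name are hypothetical, not the authors' pipeline:

```python
def consistent_father(offspring, mother, father):
    """Return True if, at every SSR locus, the offspring's allele pair can
    be explained by one maternal and one candidate-paternal allele."""
    for o, m, f in zip(offspring, mother, father):
        a, b = o
        # either assignment of the two offspring alleles must work
        if not ((a in m and b in f) or (b in m and a in f)):
            return False  # exclusion at this locus rules the father out
    return True
```

Real analyses additionally handle null alleles, shared alleles between parents, and genotyping error, which is why assignment rates in the study fall between 77% and 97% rather than 100%.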

15 pages, 7767 KiB  
Article
Can Surface Water Color Accurately Determine Sediment Concentration and Grain Size? A Hyperspectral Imaging Study
by David Bazzett and Ruo-Qian Wang
Water 2024, 16(15), 2184; https://doi.org/10.3390/w16152184 - 1 Aug 2024
Viewed by 1472
Abstract
The characteristics of suspended sediments determine water color, and remote sensing methods have been developed to leverage this physics to determine sediment concentration and size. However, current measurement practices rely on empirical correlations, which have only been tested for a limited range of particle conditions; this gap limits their applicability in the field. To address the issue, this study analyzes hyperspectral spectra across various wavelength bands to characterize the spectral signatures of different sediment sizes and concentrations. The results reveal inflection points in the light scattering of suspended sediment solutions depending on particle concentration and size: light scattering correlates positively with concentration at low concentrations but negatively at high concentrations, while it correlates negatively with particle size at low concentrations but positively at high concentrations. Sensitivity analyses indicate increased responsiveness to concentration changes at low concentrations and a higher sensitivity to particle-size changes at both low and high concentrations. Machine learning models were tested for simulated satellite bands, and it was found that existing machine learning models are limited in reliably determining sediment characteristics, reaching an R-squared of up to 0.8 for concentration and 0.7 for particle size. This research highlights the importance of selecting appropriate wavelength bands for the appropriate range of sediments and the need to develop advanced models for remote sensing measurements. This work underscores hyperspectral imaging’s potential in environmental monitoring and remote sensing, revealing the complicated physics behind water color changes due to turbidity and informing next-generation remote sensing technology for turbidity measurements.
(This article belongs to the Section Water Erosion and Sediment Transport)

27 pages, 6545 KiB  
Article
Compositional and Microstructural Investigations of Prehistoric Ceramics from Southern Romania (Middle Neolithic Pottery)
by Rodica-Mariana Ion, Ancuta-Elena Pungoi, Lorena Iancu, Ramona Marina Grigorescu, Gabriel Vasilievici, Anca Irina Gheboianu, Sofia Slamnoiu-Teodorescu and Elvira Alexandrescu
Appl. Sci. 2024, 14(13), 5755; https://doi.org/10.3390/app14135755 - 1 Jul 2024
Cited by 1 | Viewed by 2257
Abstract
In this paper, based on our previous expertise on ceramic artifacts, several archaeometric methods applied to samples collected from the Dudești archaeological site (Oltenia region, Romania) are reported for the first time in the literature. The chemical composition and the microstructural and morphological characterization of these samples offer important conclusions about the processing conditions. Specific techniques such as X-ray diffraction (XRD), wavelength-dispersive X-ray fluorescence (WDXRF), optical microscopy (OM), stereomicroscopy, environmental scanning electron microscopy (ESEM), Fourier-transform infrared spectroscopy (FTIR), and Raman spectroscopy provide information about the composition and the decay processes. Additionally, the Brunauer–Emmett–Teller (BET) method helps to estimate pore sizes and specific surface areas. Thermogravimetric analysis (TGA/DTG) was used to establish details regarding the production technology and the source of the raw materials used to make the ceramics. The obtained results indicated that the ceramics are based on a paste of muscovite and feldspar, with high plasticity, together with quartz, hematite/goethite, and calcite, the latter in very low concentrations. According to the obtained results, we could assume that the clays from the investigated samples had a low concentration of calcium. Gypsum is present in the paste in a very low concentration, identified by the presence of a sulphate group in WDXRF. In the same context, the firing atmosphere has a significant impact on iron-rich clay, resulting in blackening under reducing conditions and a reddish coloration under oxidative conditions. The use of hematite and gypsum as pigments further contributes to the color variations in the pottery. The consistent firing temperature range of 200–600 °C in Dudești pottery implies a standardized production process, with the variation in color depending on the specific reducing/oxidative regime (a reducing atmosphere followed by rapid oxidation). The relationship between clay composition and local sources suggests a connection to Neolithic pottery production in the region.

18 pages, 4037 KiB  
Article
Saliency Detection Based on Multiple-Level Feature Learning
by Xiaoli Li, Yunpeng Liu and Huaici Zhao
Entropy 2024, 26(5), 383; https://doi.org/10.3390/e26050383 - 30 Apr 2024
Cited by 2 | Viewed by 2118
Abstract
Finding the most interesting areas of an image is the aim of saliency detection. Conventional methods based on low-level features rely on biological cues like texture and color. These methods, however, have trouble with processing complicated or low-contrast images. In this paper, we introduce a deep neural network-based saliency detection method. First, using semantic segmentation, we construct a pixel-level model that gives each pixel a saliency value depending on its semantic category. Next, we create a region feature model by combining both hand-crafted and deep features, which extracts and fuses the local and global information of each superpixel region. Third, we combine the results from the previous two steps, along with the over-segmented superpixel images and the original images, to construct a multi-level feature model. We feed the model into a deep convolutional network, which generates the final saliency map by learning to integrate the macro and micro information based on the pixels and superpixels. We assess our method on five benchmark datasets and contrast it against 14 state-of-the-art saliency detection algorithms. According to the experimental results, our method performs better than the other methods in terms of F-measure, precision, recall, and runtime. Additionally, we analyze the limitations of our method and propose potential future developments.

18 pages, 7473 KiB  
Article
Green Space Reverse Pixel Shuffle Network: Urban Green Space Segmentation Using Reverse Pixel Shuffle for Down-Sampling from High-Resolution Remote Sensing Images
by Mingyu Jiang, Hua Shao, Xingyu Zhu and Yang Li
Forests 2024, 15(1), 197; https://doi.org/10.3390/f15010197 - 19 Jan 2024
Cited by 2 | Viewed by 2197
Abstract
Urban green spaces (UGS) play a crucial role in the urban environmental system by helping to mitigate the urban heat island effect, promoting sustainable urban development, and supporting the physical and mental well-being of residents. The utilization of remote sensing imagery enables real-time surveying and mapping of UGS. Analysis of the spatial distribution and spectral information of UGS shows that they constitute a kind of low-rank feature; thus, the accuracy of a UGS segmentation model is not heavily dependent on the depth of the neural network. On the contrary, preserving more surface texture features and color information contributes significantly to enhancing the model’s segmentation accuracy. In this paper, we propose a UGS segmentation model specifically designed around these characteristics, named the Green Space Reverse Pixel Shuffle Network (GSRPnet). GSRPnet is a straightforward but effective model that uses an improved RPS-ResNet as the feature-extraction backbone to enhance its ability to extract UGS features. Experiments conducted on GaoFen-2 remote sensing imagery and the Wuhan Dense Labeling Dataset (WHDLD) demonstrate that, in comparison with other methods, GSRPnet achieves superior results in terms of precision, F1-score, intersection over union, and overall accuracy. It produces smoother edges in UGS border regions and excels at identifying discrete small-scale UGS. Meanwhile, the ablation experiments validated the hypotheses and methods proposed in this paper. Additionally, GSRPnet has only 17.999 M parameters, demonstrating that its accuracy gains are not simply the result of an increase in model parameters.
(This article belongs to the Special Issue Image Processing for Forest Characterization)

25 pages, 5029 KiB  
Article
Pasture Biomass Estimation Using Ultra-High-Resolution RGB UAVs Images and Deep Learning
by Milad Vahidi, Sanaz Shafian, Summer Thomas and Rory Maguire
Remote Sens. 2023, 15(24), 5714; https://doi.org/10.3390/rs15245714 - 13 Dec 2023
Cited by 10 | Viewed by 3244
Abstract
The continuous assessment of grassland biomass during the growing season plays a vital role in making informed, location-specific management choices. The implementation of precision agriculture techniques can facilitate and enhance these decision-making processes. Nonetheless, precision agriculture depends on the availability of prompt and precise data on plant characteristics, necessitating both high spatial and temporal resolutions. Utilizing structural and spectral attributes extracted from low-cost sensors on unmanned aerial vehicles (UAVs) presents a promising non-invasive method to evaluate plant traits, including above-ground biomass and plant height. Therefore, the main objective was to develop an artificial neural network capable of estimating pasture biomass using UAV RGB images and canopy height models (CHM) during the growing season over three common types of paddocks: rest, bale grazing, and sacrifice. This study first explored the variation of structural and color-related features derived from statistics of CHM and RGB image values under different levels of plant growth. Then, an ANN model was trained for accurate biomass volume estimation based on a rigorous assessment employing statistical criteria and ground observations. The model demonstrated a high level of precision, yielding a coefficient of determination (R²) of 0.94 and a root mean square error (RMSE) of 62 g/m². The evaluation underscores the critical role of ultra-high-resolution photogrammetric CHMs and red, green, and blue (RGB) values in capturing meaningful variations and enhancing the model’s accuracy across diverse paddock types, including bale grazing, rest, and sacrifice paddocks. Furthermore, the model’s sensitivity to areas with minimal or virtually absent biomass during the plant growth period is visually demonstrated in the generated maps. Notably, it effectively discerned low-biomass regions in bale grazing paddocks and areas with reduced biomass impact in sacrifice paddocks compared to other types. These findings highlight the model’s versatility in estimating biomass across a range of scenarios, making it well suited for deployment across various paddock types and environmental conditions.
(This article belongs to the Special Issue UAS Technology and Applications in Precision Agriculture)
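The R² of 0.94 and RMSE of 62 g/m² reported above are standard regression metrics; for reference, they can be computed directly from predicted and observed biomass values:

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R²) and root mean square error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    ss_res = np.sum(resid ** 2)                      # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot, rmse
```

Note that R² compares the model against a constant mean predictor, so it can be negative for models worse than that baseline.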

16 pages, 2920 KiB  
Article
Underwater Image Super-Resolution via Dual-aware Integrated Network
by Aiye Shi and Haimin Ding
Appl. Sci. 2023, 13(24), 12985; https://doi.org/10.3390/app132412985 - 5 Dec 2023
Cited by 5 | Viewed by 1989
Abstract
Underwater scenes are often affected by issues such as blurred details, color distortion, and low contrast, which are primarily caused by wavelength-dependent light scattering; these factors significantly impact human visual perception. Convolutional neural networks (CNNs) have recently displayed very promising performance in underwater super-resolution (SR). However, CNN-based methods are built on local operations, which makes it difficult to reconstruct rich features. To solve these problems, we present an efficient and lightweight dual-aware integrated network (DAIN) comprising a series of dual-aware enhancement modules (DAEMs) for underwater SR tasks. In particular, each DAEM primarily consists of a multi-scale color correction block (MCCB) and a Swin Transformer layer (STL). These components work together to incorporate both local and global features, thereby enhancing the quality of image reconstruction. The MCCB processes the color channels of underwater images separately to restore the true colors and details affected by uneven underwater light attenuation. The STL captures long-range dependencies and global contextual information, enabling the extraction of features that are otherwise neglected in underwater images. Experimental results demonstrate significant improvements of DAIN over conventional SR methods.
(This article belongs to the Special Issue AI, Machine Learning and Deep Learning in Signal Processing)
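The per-channel color correction idea behind the MCCB — compensating the channel-dependent attenuation of underwater light — has a classical single-scale baseline in gray-world white balancing, sketched below for context. This is an illustrative baseline only, not the MCCB itself.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: rescale each channel so its mean matches
    the global mean intensity, compensating channel-dependent attenuation
    (e.g., the strong red-channel loss in underwater images)."""
    img = np.asarray(img, dtype=float)
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    gain = channel_means.mean() / np.maximum(channel_means, 1e-8)
    return img * gain  # broadcast per-channel gains over H x W
```

A learned, multi-scale block can adapt these gains locally instead of applying one global gain per channel, which is the advantage the abstract claims for the MCCB.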

21 pages, 3539 KiB  
Article
Light Absorption by Optically Active Components in the Arctic Region (August 2020) and the Possibility of Application to Satellite Products for Water Quality Assessment
by Tatiana Efimova, Tatiana Churilova, Elena Skorokhod, Vyacheslav Suslin, Anatoly S. Buchelnikov, Dmitry Glukhovets, Aleksandr Khrapko and Natalia Moiseeva
Remote Sens. 2023, 15(17), 4346; https://doi.org/10.3390/rs15174346 - 4 Sep 2023
Cited by 6 | Viewed by 1864
Abstract
In August 2020, during the 80th cruise of the R/V “Akademik Mstislav Keldysh”, the chlorophyll a concentration (Chl-a) and spectral coefficients of light absorption by phytoplankton pigments, non-algal particles (NAP) and colored dissolved organic matter (CDOM) were measured in the Norwegian Sea, the Barents Sea and the adjacent area of the Arctic Ocean. It was shown that the spatial distribution of the three light-absorbing components in the explored Arctic region was non-homogenous. It was revealed that CDOM contributed largely to the total non-water light absorption (atot(λ) = aph(λ) + aNAP(λ) + aCDOM(λ)) in the blue spectral range in the Arctic Ocean and the Barents Sea. The fraction of NAP in the total non-water absorption was low (less than 20%). The depth of the euphotic zone depended on atot(λ) in the surface water layer, which was described by a power equation. The Arctic Ocean, the Norwegian Sea and the Barents Sea did not differ in the Chl-a-specific light absorption coefficients of phytoplankton. In the blue maximum of phytoplankton absorption spectra, Chl-a-specific light absorption coefficients of phytoplankton in the upper mixed layer (UML) were higher than those below the UML. Relationships between phytoplankton absorption coefficients and Chl-a were derived by least squares fitting to power functions for the whole visible domain with a 1 nm interval. The OCI, OC3 and GIOP algorithms were validated using a database of co-located results (day-to-day) of in situ measurements (n = 63) and the ocean color scanner data: the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra (EOS AM) and Aqua (EOS PM) satellites, the Visible and Infrared Imager/Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (S-NPP) and JPSS-1 satellites (also known as NOAA-20), and the Ocean and Land Color Imager (OLCI) onboard the Sentinel-3A and Sentinel-3B satellites. The comparison showed that despite the technological progress in optical scanners and the refinement of algorithms, the considered standard products (chlor_a, chl_ocx, aph_443, adg_443) carried little information about inherent optical properties in Arctic waters. Based on the statistical metrics (Bias, MdAD, MAE and RMSE), it was concluded that refinement of the algorithms for retrieval of water bio-optical properties from remote sensing data is required for the Arctic region.
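Power-function relationships like those fitted above (euphotic depth vs. atot, absorption coefficients vs. Chl-a) are commonly obtained by linear least squares in log-log space. A generic sketch — the function name is illustrative and the coefficients are not the authors':

```python
import numpy as np

def fit_power(x, y):
    """Least-squares fit of y = A * x**B via linear regression in
    log-log space (valid for strictly positive x and y)."""
    lx, ly = np.log(x), np.log(y)
    B, logA = np.polyfit(lx, ly, 1)   # slope = exponent, intercept = log A
    return np.exp(logA), B
```

Because the fit minimizes error in log space, it weights relative rather than absolute deviations, which is often appropriate for absorption data spanning orders of magnitude.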

32 pages, 4766 KiB  
Review
Stockpile Volume Estimation in Open and Confined Environments: A Review
by Ahmad Alsayed and Mostafa R. A. Nabawy
Drones 2023, 7(8), 537; https://doi.org/10.3390/drones7080537 - 20 Aug 2023
Cited by 11 | Viewed by 9793
Abstract
This paper offers a comprehensive review of traditional and advanced stockpile volume-estimation techniques employed within both outdoor and indoor confined spaces, whether terrestrial- or aerial-based. Traditional methods, such as manual measurement and satellite imagery, exhibit limitations in handling irregular or constantly changing stockpiles. More advanced techniques, such as global navigation satellite system (GNSS) surveying, terrestrial laser scanning (TLS), drone photogrammetry, and airborne light detection and ranging (LiDAR), have emerged to address these challenges, providing enhanced accuracy and efficiency. Terrestrial techniques relying on GNSS, TLS, and LiDAR offer accurate solutions; however, to minimize or eliminate occlusions, surveyors must access geometrically constrained places, which represents a serious safety hazard. With the rapid rise of drone technologies, it is unsurprising that they have found their way into stockpile volume estimation, offering advantages such as ease of use, speed, safety, occlusion elimination, and acceptable accuracy compared to current standard methods, such as TLS and GNSS. For outdoor drone missions, image-based approaches, such as drone photogrammetry, surpass airborne LiDAR in cost-effectiveness, ease of deployment, and color information, whereas airborne LiDAR becomes advantageous when mapping complex terrain with vegetation cover, mapping during low-light or dusty conditions, and/or detecting small or narrow objects. Indoor missions, on the other hand, face challenges such as low lighting, obstacles, dust, and limited space. For such applications, most studies applied LiDAR sensors mounted on tripods or integrated on rail platforms, whereas very few utilized drone solutions. Ultimately, the choice of the most suitable technique depends on factors such as site complexity, required accuracy, project cost, and safety considerations.
This review places particular focus on the potential of drones for stockpile volume estimation in confined spaces, and explores emerging technologies, such as solid-state LiDAR and indoor localization systems, which hold significant promise for the future. Notably, further research and real-world applications of these technologies will be essential for realizing their full potential and overcoming the challenges of operating robots in confined spaces. Full article
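All of the surveying techniques compared above ultimately reduce the scanned surface to an elevation model from which volume is integrated. A minimal sketch of that final step (generic grid-summation approach, not any specific method from the review; the function name and DEM values are hypothetical):

```python
import numpy as np

def stockpile_volume(dem, base_level, cell_size):
    """Estimate stockpile volume from a gridded surface model (DEM).

    dem: 2D array of surface elevations (m); base_level: reference
    floor elevation (m); cell_size: grid spacing (m). Volume is the
    sum of positive heights above the base times the cell area.
    """
    heights = np.clip(np.asarray(dem, dtype=float) - base_level, 0.0, None)
    return heights.sum() * cell_size ** 2

# Hypothetical 3x3 DEM (m) on a 1 m grid with the floor at 0 m
dem = [[0.0, 1.0, 0.0],
       [1.0, 2.0, 1.0],
       [0.0, 1.0, 0.0]]
vol = stockpile_volume(dem, base_level=0.0, cell_size=1.0)  # 6.0 m^3
```

In practice the DEM would be rasterized from a photogrammetric or LiDAR point cloud, and occlusions or missing cells are exactly where the sensor-placement trade-offs discussed in the review come in.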

15 pages, 4919 KiB  
Article
Efficient and Low Color Information Dependency Skin Segmentation Model
by Hojoon You, Kunyoung Lee, Jaemu Oh and Eui Chul Lee
Mathematics 2023, 11(9), 2057; https://doi.org/10.3390/math11092057 - 26 Apr 2023
Cited by 2 | Viewed by 2749
Abstract
Skin segmentation involves segmenting the human skin region in an image. It is a preprocessing technique used in many applications, such as face detection, hand gesture recognition, and remote biosignal measurement. As the performance of skin segmentation directly affects the performance of these applications, precise skin segmentation methods have been studied. However, previous skin segmentation methods are unsuitable for real-world environments because they rely heavily on color information. In addition, deep-learning-based skin segmentation methods incur high computational costs, even though skin segmentation is mainly used for preprocessing. This study proposes a lightweight skin segmentation model with high performance. Additionally, we used data augmentation techniques that modify hue, saturation, and value, allowing the model to better learn texture and contextual information without relying on color information. Our proposed model requires 1.09 M parameters and 5.04 giga multiply-accumulate operations (GMACs). Through experiments, we demonstrated that our proposed model achieves high performance, with an F-score of 0.9492, and remains consistent even on color-modified images. Furthermore, our proposed model showed a fast processing speed of approximately 68 fps for 3 × 512 × 512 images on an NVIDIA RTX 2080 Ti GPU (11 GB VRAM). Full article
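The HSV augmentation idea described in this abstract can be sketched per pixel as follows (a generic illustration, not the authors' implementation; the function name, jitter ranges, and sample pixel are hypothetical):

```python
import colorsys
import random

def hsv_jitter(rgb, max_h=0.1, max_s=0.3, max_v=0.3, rng=None):
    """Randomly perturb the hue, saturation, and value of one RGB
    pixel (channels in [0, 1]). Applied image-wide during training,
    this kind of augmentation decorrelates skin appearance from
    color, pushing a segmentation model toward texture and context
    cues instead."""
    rng = rng or random.Random()
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + rng.uniform(-max_h, max_h)) % 1.0  # hue is cyclic
    s = min(max(s * (1 + rng.uniform(-max_s, max_s)), 0.0), 1.0)
    v = min(max(v * (1 + rng.uniform(-max_v, max_v)), 0.0), 1.0)
    return colorsys.hsv_to_rgb(h, s, v)

# Example: jitter a skin-toned pixel with a fixed seed
out = hsv_jitter((0.9, 0.7, 0.6), rng=random.Random(0))
```

A production pipeline would vectorize this over whole images (e.g., with OpenCV or torchvision color-jitter transforms) rather than looping per pixel.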

14 pages, 5302 KiB  
Article
Low-Light Image Enhancement by Combining Transformer and Convolutional Neural Network
by Nianzeng Yuan, Xingyun Zhao, Bangyong Sun, Wenjia Han, Jiahai Tan, Tao Duan and Xiaomei Gao
Mathematics 2023, 11(7), 1657; https://doi.org/10.3390/math11071657 - 30 Mar 2023
Cited by 8 | Viewed by 3739
Abstract
In low-light imaging environments, the insufficient light reflected from objects often results in unsatisfactory images with degradations such as low contrast, noise artifacts, or color distortion. The captured low-light images usually lead to poor visual perception quality for color-deficient or normal observers. To address the above problems, we propose an end-to-end low-light image enhancement network combining a transformer and a CNN (convolutional neural network) to restore normal-light images. Specifically, the proposed enhancement network is designed as a U-shaped structure with several functional fusion blocks. Each fusion block includes a transformer stem and a CNN stem, and the two stems collaborate to accurately extract local and global features. In this way, the transformer stem is responsible for efficiently learning global semantic information and capturing long-term dependencies, while the CNN stem is good at learning local features and focusing on detail. Thus, the proposed enhancement network can accurately capture the comprehensive semantic information of low-light images, which contributes significantly to recovering normal-light images. The proposed method is compared with current popular algorithms quantitatively and qualitatively. Subjectively, our method significantly improves image brightness, suppresses image noise, and maintains texture details and color information. For objective metrics such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), DeltaE, and NIQE, our method improves the best values by 1.73 dB, 0.05, 0.043, 0.7939, and 0.6906, respectively, compared with other methods. The experimental results show that our proposed method can effectively solve the problems of underexposure, noise interference, and color inconsistency in micro-optical images, and has practical application value. Full article
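Of the metrics listed in this abstract, PSNR is the simplest to state concretely. A minimal sketch of its standard definition (generic, not tied to this paper; the function name and toy arrays are hypothetical):

```python
import numpy as np

def psnr(reference, enhanced, max_val=1.0):
    """Peak signal-to-noise ratio between a reference image and an
    enhanced result, in dB; higher means closer to the reference.
    PSNR = 10 * log10(max_val^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    enhanced = np.asarray(enhanced, dtype=float)
    mse = np.mean((reference - enhanced) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a toy "image" with a uniform reconstruction error of 0.1
ref = np.ones((4, 4))
out = ref - 0.1
```

With a uniform error of 0.1 on a [0, 1] scale, the MSE is 0.01, giving 20 dB; the 1.73 dB gain reported above is measured on this scale.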
(This article belongs to the Special Issue Advances in Computer Vision and Machine Learning)
