Journal of Imaging
  • Article
  • Open Access

20 December 2025

Development of a Multispectral Image Database in Visible–Near–Infrared for Demosaicking and Machine Learning Applications

ImViA Laboratory, UFR Sciences et Techniques, Université Bourgogne Europe, 21000 Dijon, France
* Author to whom correspondence should be addressed.
J. Imaging 2026, 12(1), 2; https://doi.org/10.3390/jimaging12010002
This article belongs to the Special Issue Imaging Applications in Agriculture

Abstract

The use of Multispectral (MS) imaging is growing fast across many research fields. However, one of the obstacles researchers face is the limited availability of multispectral image databases. This arises from two factors: multispectral cameras are a relatively recent technology, and they are not widely available. Hence, the development of an image database is crucial for research on multispectral images. This study takes advantage of two high-end MS cameras in the visible and near-infrared, based on filter array technology and developed on the PImRob platform of the University of Burgundy, to provide a freely accessible database. The database includes high-resolution MS images of different plants and weeds, along with annotated images and masks. Both the original raw images and the demosaicked images are provided. The database has been developed for research on demosaicking techniques, segmentation algorithms, and deep learning for crop/weed discrimination.

1. Introduction

Multispectral (MS) imaging provides rich spectral information through high-resolution spatial images, using either single-shot or multiple-shot acquisition methods. This capability makes it highly valuable in various domains such as medical diagnostics, food quality control assessment, object detection, and remote sensing. In recent years, MS imaging has received considerable attention in agriculture, especially in precision farming, where it supports tasks such as crop monitoring, weed detection, and stress analysis [1,2,3].
Traditionally, an MS image is represented as a cube of 3 to 8 spectral bands, typically covering the visible (VIS) spectrum or extending into the ultraviolet (UV) or near-infrared (NIR) regions. The number of bands and the spectral regions selected depend largely on the target application and the required processing algorithms. The wide range of MS imaging applications includes information extraction, segmentation, object detection, classification, geometric measurements, and image fusion [4,5,6,7,8,9,10].
MS imaging has been growing rapidly and has responded to the needs of diverse fields [11,12,13,14,15,16,17]. There exist several types of MS image acquisition systems, such as Single Sensor with Filter Wheel, Array of Monochromatic Cameras, and Beam Splitter with Multiple Sensors. A recent advancement in MS imaging is the Multispectral Filter Array (MSFA) technology, which is an extension of the color filter pattern used in color cameras to include more spectral bands. Accordingly, MSFA-based cameras acquire a raw MS image in a single shot, where each pixel records a specific band. These images require dedicated demosaicking algorithms to reconstruct the full spectral cube [18]. This technology offers compact and inexpensive MS cameras, which makes it promising for many applications. Also, MS imaging based on MSFA technology reduces the mechanical complexity of the imaging system compared to set-ups needing separate filters or multiple cameras.
The significance of MS imaging for crop/weed segmentation and deep learning-based plant detection arises from the fact that the spectral response of each object is unique. Hence, by analyzing images at specific spectral bands, different plants can be detected and discriminated more effectively. Borregaard et al. [19] employed line imaging spectroscopy from 670 to 1070 nm to separate potatoes, sugar beet, and several weed species; analysis of selected bands showed high recognition rates, with potato vs. weeds and sugar beet vs. weeds correctly classified in 94% and 87% of cases, respectively. Liu et al. [20] collected hyperspectral images from 380 to 760 nm over carrot seedlings and several common weeds; by working entirely in the VIS domain, they still achieved up to about 91% accuracy when all weeds were aggregated into a single class, suggesting that visible-only imaging can suffice where color and leaf structure differ strongly. Zu et al. [21] measured canopy reflectance of two cabbage cultivars and five grass and broadleaf weeds between 350 and 2500 nm, but band-selection analysis indicated that a carefully chosen subset of bands, heavily concentrated in the VIS and VNIR region, was enough to reach 100% classification accuracy, so full SWIR coverage was not essential for crop–weed separation. More recently, UAV-based datasets such as WeedsGalore have used discrete multispectral bands, including RGB, a red-edge band around 730 nm, and an NIR band around 840 nm, and transformer–CNN segmentation models using these bands have achieved mean IoU values near 79%, markedly outperforming RGB-only baselines and confirming the benefit of adding NIR and red-edge channels in the 700–900 nm region for robust crop–weed segmentation under variable field conditions.
The increasing demand for robust algorithms in these domains highlights the need for publicly available, high-quality MS datasets. However, such datasets remain scarce, particularly for MSFA-based systems, thus limiting progress in research areas such as demosaicking, segmentation, and deep learning. Hordley et al. [22] developed a small database of MS images, which was used for the comparison of an MS acquisition system based on interferometry and a spectral tele-spectroradiometer. The database included 22 images. Chang et al. [23] provided an MS image database for face recognition. The whole dataset included 2624 MS images. The MS acquisition system used consisted of a monochrome camera coupled with an electronically tuned LCTF (Liquid Crystal Tunable Filter). Lapray et al. [24] developed a polarimetric and multispectral image database that covered the VIS and NIR regions of the spectrum. Their acquisition system, based on a dual-RGB method, produced six spectral/polarimetric channels in each acquisition. The database includes a variety of objects such as plastic, painting, liquid, fabric, etc. In a recent study, Gutierrez-Navarro et al. [25] introduced an open-access MS image database for hand perfusion evaluation based on MSFA technology. The MS camera utilized in this study was a snapshot camera which could record 8 spectral bands in VIS and NIR. The database includes the MS images of the hands of 45 healthy participants.
This work introduces a novel MS image database designed to address the lack of MS image databases based on MSFA technology. The dataset includes raw mosaicked images and corresponding annotations, enabling both unsupervised and supervised learning approaches. It comprises acquisitions from two MS cameras operating in the VIS and NIR ranges, capturing the same agricultural scenes. The proposed dataset is especially relevant for agricultural imaging applications, such as the following:
  • Development and benchmarking of demosaicking and mosaicking techniques adapted to MSFA.
  • Semantic segmentation of vegetation (e.g., crop vs. weed).
  • Plant species classification and phenotyping under field conditions.
  • Design and evaluation of deep learning models for multispectral agricultural tasks.
To the best of our knowledge, this is the first publicly available, free MS image dataset based on MSFA technology, specifically tailored for both general-purpose and agricultural image processing applications.

2. Materials and Methods

This section presents the details of the image acquisition system, the structure of the images, and the organization of the database. It aims to help researchers prepare their desired data from raw images and develop their own algorithms based on the dataset and the provided labeled images. Establishing a database of MS images involves several steps, including the design and development of MSFAs, development of the MS camera, creation of supporting hardware and software, image acquisition, and preliminary processing. These steps will be explained in the following sections.

2.1. Multispectral Acquisition Set-Up

The cameras were developed on the PImRob platform of the ImViA (Image et Vision Artificielle) laboratory of the Université Bourgogne Europe. The acquisition system consisted of two MS cameras covering the VIS and NIR (Figure 1). Together, the cameras capture 16 spectral bands (8 bands each), from 380 to 720 nm for VIS and from 680 to 950 nm for NIR. The acquisition set-up consisted of the two cameras placed beside each other and three halogen lamps arranged around them. The cameras were controlled simultaneously by two software interfaces. The images are saved in TIF format at a resolution of 1280 × 1024 pixels.
Figure 1. Set-up for MS image acquisition (camera development supported by the EU PENTA/CAVIAR project).
The database includes MS images of four crop plants (tomato, cucumber, garden bean, and bell pepper) and five weed species (bindweed, edible sedge, Plantago lanceolata, potentilla, and sorrel). The plants were brought fresh to the lab for image acquisition and were moved to the imaging scene with their roots intact, which kept them healthy during acquisition. Figure 2 shows several color examples of the scene. The database includes different arrangements of plants, with sparse or dense imaging scenes.
Figure 2. Color examples representing the scene where MS images were obtained.

2.2. Multispectral Filter Array (MSFA)

An MSFA consists of a mosaic of filters arranged in a matrix, where each filter passes a specific spectral band. In this work, a set of 8 filters based on the Fabry–Perot principle was used to cover the VIS and NIR regions [26]. The architectures of the MSFAs are presented in Figure 3. The figure shows a single mosaic pattern, which is repeated across the entire image sensor. The architecture of the MSFA affects spectral image reconstruction, as it determines how many pixels are associated with each band and how the missing information for each spectral band must be interpolated or estimated.
Figure 3. The architectures of MSFAs: (a) visible camera and (b) near-infrared camera.
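For illustration, the sketch below derives per-band sampling masks and sparse band images from a periodic MSFA pattern in Python. The 2 × 4 arrangement of the 8 band indices used here is a placeholder; the actual layouts follow Figure 3 and differ between the two cameras.

```python
import numpy as np

# Hypothetical 2 x 4 arrangement of the 8 band indices; the real layouts
# follow Figure 3 and differ between the VIS and NIR cameras.
PATTERN = np.array([[0, 1, 2, 3],
                    [4, 5, 6, 7]])

def band_masks(shape, pattern=PATTERN):
    """Boolean mask per band marking which sensor pixels sample that band."""
    ph, pw = pattern.shape
    reps = (shape[0] // ph + 1, shape[1] // pw + 1)
    tiled = np.tile(pattern, reps)[:shape[0], :shape[1]]
    return [tiled == b for b in range(int(pattern.max()) + 1)]

def split_bands(raw, pattern=PATTERN):
    """One sparse image per band: measured values in place, zeros elsewhere."""
    sparse = []
    for mask in band_masks(raw.shape, pattern):
        img = np.zeros(raw.shape, dtype=float)
        img[mask] = raw[mask]
        sparse.append(img)
    return sparse
```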
Table 1 provides the details of the spectral bands for both the VIS and NIR cameras. After demosaicking, the extracted images can be associated with their spectral bands for further processing. This information is useful for research on the detection and analysis of plants based on specific spectral bands. For example, in the VIS camera, band 7 has a center wavelength of 646 nm, which is well suited to analysis of plant Chlorophyll b (peak absorption at 642 nm). The table also shows the FWHM (Full Width at Half Maximum), which varies between bands. A narrow FWHM captures light from a small wavelength range, allowing more precise spectral measurements, while a wider FWHM integrates information from a broader range of wavelengths into the MS image. Note that the last band of the VIS camera and the first band of the NIR camera share the same center wavelength.
Table 1. Details of the spectral bands of MSFAs of both cameras.
Previous research shows that the selected bands can be used for effective discrimination of crops and weeds. Brown et al. [27] used 400–900 nm leaf reflectance in no-till corn and identified key wavelengths around 440, 530, 650, and 730 nm to discriminate corn from several weed species, showing that both Chlorophyll-related visible bands and the red edge are informative for weed sensing. Jurado-Expósito et al. [28] exploited a 750–950 nm window to distinguish broadleaf weeds, sunflower, and wheat seedlings, highlighting the power of NIR reflectance and early red-edge behavior for separating cereal crops from broadleaf weeds. Mao et al. [29] focused on 700–1100 nm for wheat and broadleaf weeds such as goosefoots and shepherd’s purse, reporting about 97% correct classification when using selected NIR bands, again confirming the diagnostic value of NIR for crop–weed differentiation at the seedling stage.
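Once the cube is reconstructed, such band combinations are typically exploited through simple spectral indices. As a minimal, illustrative example (the band choices here are assumptions to be checked against Table 1), an NDVI-style index can be computed as follows:

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """NDVI-style index from two demosaicked band images.

    Assumed band choices (illustrative only): `red` from the VIS camera's
    646 nm band (band 7 in Table 1) and `nir` from an NIR-camera band,
    e.g., one near 840 nm. eps avoids division by zero on dark pixels.
    """
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)
```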

2.3. Hybrid Sensor

The choice of the CMOS sensor is of high importance, as its performance directly impacts the quality of the final images. An E2V sensor (Teledyne e2v, Saint-Égrève, France) was chosen for its small pixel size of 4.5 µm and its extended sensitivity in the near-infrared. The sensor offers a frame rate of 50 fps and a bit depth of 10 bits. The MSFAs were then placed on top of the image sensors to form the MS image sensors, which were finally assembled with the other components, such as electronic boards, cables, and camera cases.

2.4. Demosaicking

The image taken by the MS camera is a single-layer image providing reflectance information across different spectral bands. To reconstruct an image for each spectral band, it is necessary to extract the spectral information of that band and estimate the missing values. MS image demosaicking estimates the missing pixels in the different spectral channels of the mosaic image. This was performed using the bilinear demosaicking technique in MATLAB (R2018b, MathWorks, Natick, MA, USA). The reconstructed images have the same size as the original images.
To evaluate the computational performance of the proposed demosaicking method, we measured the average execution time on a set of three test images. Each image was processed using MATLAB on a PC equipped with a 13th Gen Intel® Core™ i7-1370P processor running at 1.90 GHz and 32 GB of RAM. The average execution time observed was 0.85 s. The maximum memory usage during processing was approximately 125 MB.
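For readers replicating this benchmark in another environment, a minimal timing harness along the following lines may be used; demosaick stands for any demosaicking callable, and absolute figures will naturally differ from the MATLAB measurements above.

```python
import time

def mean_runtime(demosaick, images, repeats=3):
    """Average wall-clock time per image for a demosaicking callable.

    `demosaick` and `images` are placeholders for any implementation and
    test set; absolute figures differ across hardware and languages.
    """
    t0 = time.perf_counter()
    for _ in range(repeats):
        for raw in images:
            demosaick(raw)
    return (time.perf_counter() - t0) / (repeats * len(images))
```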

2.5. Image Labeling

Preparing labeled images is a challenging and time-consuming task. Several techniques and tools exist to perform image labeling, including automatic or manual toolboxes as well as online platforms. In this study, the MATLAB Image Labeler application was used to label each image manually, to achieve the highest possible precision. The application provides a user-friendly interface and robust features for accurate image annotation. Images are loaded into the application, and labeling begins by defining regions of interest (ROIs) around the objects of interest in each image. Labels are then assigned to each ROI to identify the corresponding objects, using the built-in text and selection tools. The manual drawing functionality also allowed us to draw contours around objects, providing additional flexibility for annotation. Once the ROIs are defined and labels are assigned, the annotations can be organized into logical groups and exported in various formats compatible with the needs of the research or application project.

3. Discussion

3.1. Structure of the Database

The database has been organized to facilitate the automatic reading of images. The raw MS image, the demosaicked images, the binary masks, and the annotated image are stored in separate folders, all grouped under a folder named by image number; all image folders sit in the main database folder. The raw MS images have a resolution of 1280 × 1024 pixels and are saved as 8-bit TIF files. The database provides a total of 512 images.
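As a sketch of such automatic reading, the loader below walks this layout; the folder names (raw, masks, etc.) are assumptions inferred from the description above and should be adjusted to the released database.

```python
from pathlib import Path

import numpy as np
from PIL import Image  # Pillow, assumed available for TIF reading

# Hypothetical layout inferred from the description; the released database
# may use different folder names:
#   <root>/<image_number>/raw/*.tif
#   <root>/<image_number>/demosaicked/*.tif
#   <root>/<image_number>/masks/*.tif
#   <root>/<image_number>/annotated/*.tif
def iter_samples(root):
    """Yield (image_number, raw mosaic, list of binary masks) per folder."""
    for folder in sorted(Path(root).iterdir()):
        if not folder.is_dir():
            continue
        raw_path = sorted((folder / "raw").glob("*.tif"))[0]
        raw = np.array(Image.open(raw_path))              # 1024 x 1280, 8-bit
        masks = [np.array(Image.open(p))
                 for p in sorted((folder / "masks").glob("*.tif"))]
        yield folder.name, raw, masks
```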

3.2. Demosaicking

Demosaicking is the final and crucial step in the multispectral image acquisition process, aiming to reconstruct the missing pixel values as closely as possible to their true values. To reconstruct a full multispectral data cube from the raw mosaic produced by the Multispectral Filter Array (MSFA), we applied a band-wise bilinear demosaicking procedure adapted to the MSFA pattern (Figure 4). First, the MSFA layout was used to identify the spatial locations corresponding to each spectral band. For every band, a sparse 2D image was initialized by placing the measured pixel values at their corresponding positions and assigning zeros elsewhere. Each band image was then independently interpolated onto the full spatial grid using bilinear interpolation, where missing pixels were estimated as the weighted average of their four nearest neighbors in the horizontal and vertical directions. This approach leverages the local spatial smoothness commonly present in multispectral scenes while preserving the relative intensity consistency between neighboring pixels. Although bilinear interpolation is a simple and computationally efficient method, it provides a stable baseline reconstruction for MSFA-based imaging and enables subsequent evaluation against more advanced demosaicking techniques.
Traditional demosaicking techniques include nearest-neighbor replication, bilinear interpolation, cubic spline interpolation, heuristic methods, wavelet-based methods, and so on [30,31,32]. Recently, deep learning methods have been gaining much attention for demosaicking [33,34,35,36,37,38,39,40,41]. However, simple interpolation extensions (bilinear, guided filtering, and pseudo-panchromatic (PPAN) strategies) remain attractive for their low complexity and reasonable spatial preservation when a dense reference band exists; recent generic PPAN-based algorithms show robust spectral consistency across varied MSFA layouts but can blur fine spatial details when spectral sampling is very sparse [42]. Model-based approaches (sparse coding, iterative linear solvers, and physics-informed reconstructions) explicitly enforce spectral priors and inter-band correlations, yielding higher spectral fidelity and better preservation of narrow absorption features at the cost of heavier computation and parameter tuning; iterative methods that estimate pseudo-panchromatic images or jointly solve for spatial and spectral components are particularly effective when training data are limited [43].
Figure 4. Raw image and its diagonally demosaicked version.
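As a concrete baseline, the band-wise bilinear procedure described above can be expressed as a normalized convolution with a separable triangle kernel. The Python sketch below is illustrative and not necessarily identical to the MATLAB implementation used for the database:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_kernel(period):
    """Separable triangle (bilinear) kernel for a sampling period (py, px)."""
    py, px = period
    ky = 1.0 - np.abs(np.arange(-py + 1, py)) / py
    kx = 1.0 - np.abs(np.arange(-px + 1, px)) / px
    return np.outer(ky, kx)

def demosaick_bilinear(raw, pattern):
    """Band-wise bilinear demosaicking of a raw MSFA mosaic.

    Each band's sparse image (zeros at unmeasured pixels) is interpolated by
    normalized convolution with a triangle kernel, which reproduces bilinear
    interpolation on the band's regular sampling grid and handles borders.
    """
    h, w = raw.shape
    ph, pw = pattern.shape
    tiled = np.tile(pattern, (h // ph + 1, w // pw + 1))[:h, :w]
    kernel = bilinear_kernel((ph, pw))
    n_bands = int(pattern.max()) + 1
    cube = np.empty((n_bands, h, w))
    for b in range(n_bands):
        mask = (tiled == b).astype(float)
        sparse = raw.astype(float) * mask
        num = convolve(sparse, kernel, mode="mirror")   # interpolated values
        den = convolve(mask, kernel, mode="mirror")     # local sample density
        cube[b] = num / np.maximum(den, 1e-12)
    return cube
```

At measured pixel locations the triangle kernel leaves the original values untouched (the kernel is 1 at the center and 0 at sample-period offsets), so the reconstruction is exact wherever samples exist.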
On the other hand, deep learning methods (2D/3D CNNs, residual networks, and attention modules) now dominate top performance in benchmarks: they recover fine spatial details and reduce aliasing/artifacts when large, representative training sets are available, but they are computationally expensive to train, can suffer spectral distortion if loss functions do not explicitly penalize spectral errors, and generalize poorly across sensor types unless domain adaptation or paired RGB/MS capture strategies are used [44]. Recent work therefore emphasizes hybrid designs—e.g., using a fast interpolation/PPAN initializer followed by a light-weight learned refinement or a sparsity/physics prior—to balance spectral fidelity, spatial resolution, and practical compute/memory constraints for real snapshot multispectral cameras [45].
Yu et al. [46] used a two-step method for demosaicking MS images based on MSFA technology. The first step was to apply a weighted bilinear interpolation convolution filter and then use the multi-scale dense connections large kernel attention method (MDLKA) to obtain high-resolution images. Liu et al. [47] proposed a deep learning method based on pseudo-panchromatic images for demosaicking of snapshot MS images. The proposed technique included two networks: Deep PPI Generation Network (DPG-Net) and Deep Demosaic Network (DDM-Net). They reported that the proposed method outperformed previous traditional and deep learning methods.

3.3. Annotated Images

The database provides annotated images for all MS images. Figure 5 shows examples of the annotated images and the corresponding RGB images. In these images, all spots of the same color belong to a specific plant. These labels can be utilized during the training and evaluation of deep networks. As the figure shows, the annotation has been performed precisely, especially on the edges of leaves and the critical curvatures. Such annotated images are necessary for training and testing in many segmentation and deep learning techniques. By incorporating these ground truth images into the training process, models can learn to accurately distinguish between different object classes, estimate their positions, and recognize complex patterns.
Figure 5. Examples of annotated ground truth images.
In addition to the annotated images, we have provided binary masks in the database (Figure 6). These masks are images of the same size as the original images and provide labels corresponding only to a single plant. These masks will help researchers to build their dataset more easily for deep learning applications, as each plant can be introduced separately to the network.
Figure 6. Examples of extraction of binary masks from labeled images.
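Should users need masks for plant classes beyond those provided, equivalent binary masks can be regenerated from the color-coded annotations. A minimal sketch, assuming one RGB color per class and a black background:

```python
import numpy as np

def masks_from_annotation(annotated):
    """Split a color-coded annotation (H x W x 3) into per-class binary masks.

    Assumes one RGB color per plant class (as in Figure 5) and an all-black
    background; returns a dict mapping each color triple to a boolean mask.
    """
    colors = np.unique(annotated.reshape(-1, 3), axis=0)
    masks = {}
    for color in colors:
        if not color.any():                      # skip the black background
            continue
        mask = np.all(annotated == color, axis=-1)
        masks[tuple(int(c) for c in color)] = mask
    return masks
```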
It is worth noting that while the database has been prepared for demosaicking, segmentation, and deep learning research, it can also serve agricultural research for various purposes, including plant detection, leaf counting, single-leaf segmentation, and so forth. Work remains on extending the database: increasing the number of spectral bands, extending the spectral range, and enlarging the volume of images and the number of plant species.

3.4. Comparison with Other Datasets

To better position our contribution within the context of existing multispectral datasets, we compared our MSFA-based database with three widely used public datasets: the CAVE dataset, the Harvard dataset, and the NTIRE 2020 dataset. Table 2 presents this comparison in terms of the number of images, spatial resolution, number of spectral bands, spectral range, acquisition technology, and availability of annotations. Unlike CAVE and Harvard, which rely on rotary filter systems with sequential image acquisition, our database uses an MSFA approach, enabling single-shot acquisition through spatial mosaicking. This technology is particularly well suited for dynamic scenes and applications requiring high-speed image capture.
Table 2. Comparison of MS Image Databases.
The database consists of 512 high-resolution mosaicked images (1280 × 1024 pixels), each comprising 16 spectral bands—8 in the visible range (400–720 nm) and 8 in the near-infrared (660–950 nm). The images are provided in TIFF format, along with pixel-level annotated ground truth to support supervised learning tasks. To our knowledge, this is the only publicly available MSFA-based dataset that combines high-resolution imaging, precise annotation, and a wide spectral coverage (400–1100 nm). In comparison,
  • The CAVE dataset contains only 32 images, limited to the visible spectrum (400–700 nm).
  • The Harvard dataset offers 50 images, with a slightly extended range (420–720 nm).
  • The NTIRE 2020 dataset includes 480 images, but annotations are limited to specific subtasks, and the image resolutions vary.
This comparison highlights the added value of our dataset for agricultural research, plant segmentation, and the development of deep learning methods based on MSFA technology.

4. Conclusions

Motivated by the need for MS image databases, this work provides a free database dedicated to demosaicking and to deep learning-based plant/weed discrimination. The database is built on MSFA technology, one of the most recent technologies in MS imaging, making it a valuable resource for research on MSFA design, optimization, and demosaicking algorithms. It includes original MS images in the visible and near-infrared, each composed of eight spectral bands. The VIS and NIR images share the same MSFA pattern and are well suited to the development of demosaicking techniques. A diverse set of plants was used to construct scenes with significant variability. The MS images come with annotated images, making them usable for both supervised and unsupervised segmentation or deep learning techniques. Future work will focus on expanding the diversity of plant species, integrating advanced calibration procedures, and offering ready-to-use benchmarks to promote reproducibility and facilitate collaboration within the research community.

Author Contributions

Conceptualization, P.G.; methodology, P.G. and V.M.; software, S.G.S.; validation, P.G.; investigation, V.M.; data acquisition, S.G.S.; writing—original draft preparation, V.M.; writing—review and editing, P.G. and V.M.; supervision, P.G.; funding acquisition, P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EU PENTA/CAVIAR project (17003).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request. The database is downloadable via a direct link upon request: https://multispectraldatabase.vercel.app/ (accessed on 16 December 2025).

Acknowledgments

The two cameras were developed on the PImRob platform of the University of Burgundy. The authors gratefully acknowledge the support of the EU PENTA/CAVIAR project (2018–2022) and the Bourgogne-Franche-Comté (BFC) Regional Council, France, for the financial support of the PImRob platform.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Su, W.H.; Sun, D.W. Multispectral imaging for plant food quality analysis and visualization. Compr. Rev. Food Sci. Food Saf. 2018, 17, 220–239. [Google Scholar] [CrossRef] [PubMed]
  2. Xu, R.; Li, C.; Paterson, A.H. Multispectral imaging and unmanned aerial systems for cotton plant phenotyping. PLoS ONE 2019, 14, e0205083. [Google Scholar] [CrossRef] [PubMed]
  3. Ferreira Lima, M.C.; Krus, A.; Valero, C.; Barrientos, A.; Del Cerro, J.; Roldán-Gómez, J.J. Monitoring plant status and fertilization strategy through multispectral images. Sensors 2020, 20, 435. [Google Scholar] [CrossRef] [PubMed]
  4. Dian, R.; Li, S.; Sun, B.; Guo, A. Recent advances and new guidelines on hyperspectral and multispectral image fusion. Inf. Fusion 2021, 69, 40–51. [Google Scholar] [CrossRef]
  5. Ma, F.; Yuan, M.; Kozak, I. Multispectral imaging: Review of current applications. Surv. Ophthalmol. 2023, 68, 889–904. [Google Scholar] [CrossRef]
  6. Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the accuracy of multispectral image pansharpening by learning a deep residual network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799. [Google Scholar] [CrossRef]
  7. Podorozhniak, A.; Lubchenko, N.; Balenko, O.; Zhuikov, D. Neural network approach for multispectral image processing. In Proceedings of the 2018 14th International Conference on Advanced Trends in Radioelecrtronics, Telecommunications and Computer Engineering (TCSET), Lviv-Slavske, Ukraine, 20–24 February 2018; IEEE: New York, NY, USA, 2018; pp. 978–981. [Google Scholar]
  8. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  9. Dian, R.; Li, S.; Kang, X. Regularizing hyperspectral and multispectral image fusion by CNN denoiser. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1124–1135. [Google Scholar] [CrossRef]
  10. Berry, S.; Giraldo, N.A.; Green, B.F.; Cottrell, T.R.; Stein, J.E.; Engle, E.L.; Xu, H.; Ogurtsova, A.; Roberts, C.; Wang, D.; et al. Analysis of multispectral imaging with the AstroPath platform informs efficacy of PD-1 blockade. Science 2021, 372, eaba2609. [Google Scholar] [CrossRef]
  11. Geelen, B.; Blanch, C.; Gonzalez, P.; Tack, N.; Lambrechts, A. A tiny VIS-NIR snapshot multispectral camera. In Proceedings of the Advanced Fabrication Technologies for Micro/Nano Optics and Photonics VIII, San Francisco, CA, USA, 7–12 February 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9374, pp. 194–201. [Google Scholar]
  12. Cao, X.; Yue, T.; Lin, X.; Lin, S.; Yuan, X.; Dai, Q.; Carin, L.; Brady, D.J. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. IEEE Signal Process. Mag. 2016, 33, 95–108. [Google Scholar] [CrossRef]
  13. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [Google Scholar] [CrossRef]
  14. Morales, A.; Guerra, R.; Horstrand, P.; Diaz, M.; Jimenez, A.; Melian, J.; Lopez, S.; Lopez, J.F. A multispectral camera development: From the prototype assembly until its use in a UAV system. Sensors 2020, 20, 6129. [Google Scholar] [CrossRef] [PubMed]
  15. Genser, N.; Seiler, J.; Kaup, A. Camera array for multi-spectral imaging. IEEE Trans. Image Process. 2020, 29, 9234–9249. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, J.F.; Liu, D.W.; Xue, B.; Lyu, J.; Liu, J.J.; Li, F.; Ren, X.; Ge, W.; Liu, B.; Ma, X.L.; et al. Design and ground verification for multispectral camera on the Mars Tianwen-1 rover. Space Sci. Rev. 2022, 218, 19. [Google Scholar] [CrossRef]
  17. Di Gennaro, S.F.; Toscano, P.; Gatti, M.; Poni, S.; Berton, A.; Matese, A. Spectral comparison of UAV-Based hyper and multispectral cameras for precision viticulture. Remote Sens. 2022, 14, 449. [Google Scholar] [CrossRef]
  18. Hounsou, N.; Mahama, A.T.S.; Gouton, P. 4-Band Multispectral Images Demosaicking Combining LMMSE and Adaptive Kernel Regression Methods. J. Imaging 2022, 8, 295. [Google Scholar] [CrossRef]
  19. Borregaard, T.; Nielsen, H.; Nørgaard, L.; Have, H. Crop–weed discrimination by line imaging spectroscopy. J. Agric. Eng. Res. 2000, 75, 389–400. [Google Scholar] [CrossRef]
  20. Liu, B.; Fang, J.Y.; Liu, X.; Zhang, L.F.; Zhang, B.; Tong, Q.X. Research on crop-weed discrimination using a field imaging spectrometer. Guang Pu Xue Yu Guang Pu Fen Xi 2010, 30, 1830–1833. [Google Scholar]
  21. Zu, Q.; Zhao, C.J.; Deng, W.; Wang, X. Research on discrimination of cabbage and weeds based on visible and near-infrared spectrum analysis. Spectrosc. Spectr. Anal. 2013, 33, 1202–1205. [Google Scholar]
  22. Hordley, S.; Finalyson, G.; Morovic, P. A multi-spectral image database and its application to image rendering across illumination. In Proceedings of the Third International Conference on Image and Graphics (ICIG’04), Washington, DC, USA, 18–20 December 2004; IEEE: New York, NY, USA, 2004; pp. 394–397. [Google Scholar]
  23. Chang, H.; Harishwaran, H.; Yi, M.; Koschan, A.; Abidi, B.; Abidi, M. An indoor and outdoor, multimodal, multispectral and multi-illuminant database for face recognition. In Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), New York, NY, USA, 17–22 June 2006; IEEE: New York, NY, USA, 2006; p. 54. [Google Scholar]
  24. Lapray, P.J.; Gendre, L.; Foulonneau, A.; Bigué, L. Database of polarimetric and multispectral images in the visible and NIR regions. In Proceedings of the Unconventional Optical Imaging, Strasbourg, France, 22–26 April 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10677, pp. 666–679. [Google Scholar]
  25. Gutierrez-Navarro, O.; Granados-Castro, L.; Mejia-Rodriguez, A.R.; Campos-Delgado, D.U. A multi-spectral image database for in-vivo hand perfusion evaluation. IEEE Access 2023, 11, 87543–87557. [Google Scholar] [CrossRef]
  26. Mohammadi, V.; Gouton, P.; Rossé, M.; Katakpe, K.K. Design and Development of Large-Band Dual-MSFA Sensor Camera for Precision Agriculture. Sensors 2023, 24, 64. [Google Scholar] [CrossRef] [PubMed]
  27. Brown, R.B.; Steckler, J.P.G.A.; Anderson, G.W. Remote sensing for identification of weeds in no-till corn. Trans. ASAE 1994, 37, 297–302. [Google Scholar] [CrossRef]
  28. Jurado-Expósito, M.; López-Granados, F.; Atenciano, S.; Garcıa-Torres, L.; González-Andújar, J.L. Discrimination of weed seedlings, wheat (Triticum aestivum) stubble and sunflower (Helianthus annuus) by near-infrared reflectance spectroscopy (NIRS). Crop Prot. 2003, 22, 1177–1180. [Google Scholar] [CrossRef]
  29. Mao, W.H.; Wang, Y.Q.; Wang, Y.M.; Zhang, X.C. Spectrum analysis of crop and weeds at seedling. Guang Pu Xue Yu Guang Pu Fen Xi 2005, 25, 984–987. [Google Scholar]
  30. Li, X.; Gunturk, B.; Zhang, L. Image demosaicking: A systematic survey. In Proceedings of the Visual Communications and Image Processing, San Jose, CA, USA, 27–31 January 2008; SPIE: Bellingham, WA, USA, 2008; Volume 6822, pp. 489–503. [Google Scholar]
  31. Safna Asiq, M.S.; Sam Emmanuel, W.R. Colour filter array demosaicking: A brief survey. Imaging Sci. J. 2018, 66, 502–512. [Google Scholar] [CrossRef]
  32. Wang, Y.; Cao, R.; Guan, Y.; Liu, T.; Yu, Z. A deep survey in the Applications of demosaicking. In Proceedings of the 2021 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST), Guangzhou, China, 10–12 December 2021; IEEE: New York, NY, USA, 2021; pp. 596–602. [Google Scholar]
  33. Tan, D.S.; Chen, W.Y.; Hua, K.L. DeepDemosaicking: Adaptive image demosaicking via multiple deep fully convolutional networks. IEEE Trans. Image Process. 2018, 27, 2408–2419. [Google Scholar] [CrossRef]
  34. Kokkinos, F.; Lefkimmiatis, S. Deep image demosaicking using a cascade of convolutional residual denoising networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 303–319. [Google Scholar]
  35. Chi, Z.; Shu, X.; Wu, X. Joint demosaicking and blind deblurring using deep convolutional neural network. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: New York, NY, USA, 2019; pp. 2169–2173. [Google Scholar]
  36. Wang, Y.; Yin, S.; Zhu, S.; Ma, Z.; Xiong, R.; Zeng, B. NTSDCN: New three-stage deep convolutional image demosaicking network. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 3725–3729. [Google Scholar] [CrossRef]
  37. Khadidos, A.O.; Khadidos, A.O.; Khan, F.Q.; Tsaramirsis, G.; Ahmad, A. Bayer image demosaicking and denoising based on specialized networks using deep learning. Multimed. Syst. 2021, 27, 807–819. [Google Scholar] [CrossRef]
  38. Kumar, S.P.; Peter, K.J.; Kingsly, C.S. De-noising and Demosaicking of Bayer image using deep convolutional attention residual learning. Multimed. Tools Appl. 2023, 82, 20323–20342. [Google Scholar] [CrossRef]
  39. Houssou, M.E.E.; Mahama, A.T.S.; Gouton, P.; Degla, G. Comparison of Four Demosaicking Methods for Facial Recognition Algorithms. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 20–28. [Google Scholar]
  40. Bai, C.; Qiao, W.; Li, J. Universal deep demosaicking for sparse color filter arrays. Signal Process. Image Commun. 2024, 126, 117135. [Google Scholar] [CrossRef]
  41. Chinnaiyan, A.M.; Alfred Sylam, B.W. Deep demosaicking convolution neural network and quantum wavelet transform-based image denoising. Netw. Comput. Neural Syst. 2024, 36, 1138–1162. [Google Scholar] [CrossRef]
  42. Rathi, V.; Goyal, P. Generic multispectral demosaicking using spectral correlation between spectral bands and pseudo-panchromatic image. Signal Process. Image Commun. 2023, 110, 116893. [Google Scholar] [CrossRef]
  43. Jeong, K.; Kim, S.; Kang, M.G. Multispectral demosaicing based on iterative-linear-regression model for estimating pseudo-panchromatic image. Sensors 2024, 24, 760. [Google Scholar] [CrossRef]
  44. Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Wu, Y.; Wu, X.; Fan, Z.; Xia, C.; Zhang, F.; et al. NTIRE 2022 spectral demosaicing challenge and data set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 882–896. [Google Scholar]
  45. Wisotzky, E.L.; Wallburg, L.; Hilsmann, A.; Eisert, P.; Wittenberg, T.; Göb, S. Efficient and accurate hyperspectral image demosaicing with neural network architectures. arXiv 2023, arXiv:2403.12050. [Google Scholar] [CrossRef]
  46. Yu, S.; Song, B.; Du, W.; Yuan, J.; Sun, W. Multispectral Image Demosaicking Based on Multi-scale Dense Connections and Large-kernel Attention. In Proceedings of the 2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI), Atlanta, GA, USA, 6–8 November 2023; IEEE: New York, NY, USA, 2023; pp. 1007–1011. [Google Scholar]
  47. Liu, S.; Zhang, Y.; Chen, J.; Lim, K.P.; Rahardja, S. A deep joint network for multispectral demosaicking based on pseudo-panchromatic images. IEEE J. Sel. Top. Signal Process. 2022, 16, 622–635. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
