Article

Advancing Visible Spectroscopy through Integrated Machine Learning and Image Processing Techniques

1 Department of Mechanical Engineering, Parala Maharaja Engineering College, Berhampur 761003, India
2 Department of Automobile Engineering, Parala Maharaja Engineering College, Berhampur 761003, India
3 Department of Advanced Materials Technology, CSIR-Institute of Minerals and Materials Technology, Bhubaneswar 751013, India
4 Academy of Scientific and Innovative Research, CSIR-HRD Centre Campus, Ghaziabad 201002, India
5 School of Mechanical Engineering, Lovely Professional University, Phagwara 144001, India
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4527; https://doi.org/10.3390/app14114527
Submission received: 10 April 2024 / Revised: 21 May 2024 / Accepted: 23 May 2024 / Published: 25 May 2024

Abstract

This research introduces an approach to visible spectroscopy that leverages image processing techniques and machine learning (ML) algorithms. The methodology involves calculating the hue value of an image and deriving the corresponding dominant wavelength. Initially, a sixth-degree polynomial regression supervised machine learning model is trained to establish a relationship between hue values and dominant wavelengths. Subsequently, the ML model is employed to analyse the visible wavelengths emitted by various sources, including sodium vapour, neon lamps, mercury vapour, copper vapour lasers, and helium vapour. The performance of the proposed method is evaluated through error analysis, revealing low error percentages of 0.04%, 0.01%, 3.7%, 1%, and 0.07% for sodium vapour, the neon lamp, mercury vapour, the copper vapour laser, and helium vapour, respectively. This approach offers a promising avenue for accurate and efficient visible spectroscopy, with potential applications in diverse fields such as material science, environmental monitoring, and biomedical research.

1. Introduction

Visible spectroscopy, a fundamental technique in analytical chemistry and material science, plays a pivotal role in characterising the spectral properties of various substances based on their interaction with light. Traditional spectroscopic methods often rely on complex instrumentation and the manual interpretation of spectral data, posing challenges in terms of accuracy and efficiency. However, recent advancements in machine learning (ML) and image processing have opened new avenues for improving the accuracy and automation of spectroscopic analysis. This paper introduces a methodology that combines ML algorithms with image processing techniques to enhance visible spectroscopy. By leveraging the power of ML, which excels in pattern recognition and data analysis, in conjunction with image processing, which enables the extraction of valuable spectral information from visual data, this approach aims to overcome the limitations of traditional spectroscopic methods.
The foundation of this methodology lies in the integration of polynomial regression with ML algorithms. The machine learning model employed is a sixth-degree polynomial regression, a supervised learning approach designed to capture complex relationships between input features and the target variable by fitting a polynomial equation. The algorithm learns from labelled data to determine the coefficients of the polynomial equation that best fits the training data, thereby enabling it to make predictions on unseen data. This model is adept at capturing nonlinear patterns in the data, though caution must be exercised to prevent overfitting, potentially requiring regularisation techniques for improved generalisation.
Polynomial regression serves as a mathematical framework for modelling the relationship between hue values extracted from images and the corresponding dominant wavelengths. By fitting polynomial functions to the dataset, the methodology captures complex spectral patterns and enables the accurate prediction of dominant wavelengths. Polynomial regression is often preferred in scenarios where the relationship between the independent and dependent variables is non-linear. Unlike simple linear regression, which assumes a linear relationship between the variables, polynomial regression allows for more flexible modelling by fitting a polynomial function to the data. This flexibility enables polynomial regression to capture complex patterns and variations in the data that cannot be adequately represented by a straight line. One of the key advantages of polynomial regression is its ability to model curvature in the data. By introducing polynomial terms of a higher degree, the model can better capture the curvature and non-linearity present in many real-world datasets. This makes polynomial regression a valuable tool for predicting outcomes in fields such as finance, economics, and engineering, where relationships between variables may be more intricate and nuanced.
Additionally, polynomial regression can provide improved accuracy compared to linear regression when the underlying relationship between the variables is non-linear. By allowing the model to flexibly adapt to the data, polynomial regression can yield more accurate predictions and better capture the true underlying patterns in the data.
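As a toy illustration of this point (using synthetic data rather than the spectroscopic dataset analysed later), the following sketch compares a straight-line fit and a sixth-degree polynomial fit to a curved relationship:

import numpy as np

# Toy illustration (not the paper's data): a curved relationship fitted with
# a straight line versus a sixth-degree polynomial.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)  # a clearly non-linear target
linear_fit = np.polynomial.Polynomial.fit(x, y, deg=1)
poly_fit = np.polynomial.Polynomial.fit(x, y, deg=6)
for label, model in [("degree 1", linear_fit), ("degree 6", poly_fit)]:
    rmse = np.sqrt(np.mean((model(x) - y) ** 2))
    print(f"{label}: RMSE = {rmse:.3f}")  # the degree-6 fit tracks the curvature far better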
Furthermore, the utilisation of image processing techniques enables the extraction of hue values from images of vapour sources, thereby providing quantitative inputs for ML algorithms. This integrated approach facilitates the automated analysis of spectral data, eliminating the need for manual interpretation and enhancing the efficiency of spectroscopic analysis. The significance of this research lies in its potential to revolutionise visible spectroscopy through offering a robust and automated methodology for spectral analysis. By harnessing the capabilities of machine learning and image processing, researchers can achieve unprecedented levels of accuracy and efficiency in characterising the spectral properties of various substances. This paper presents the methodology, experimental results, and implications of this approach, highlighting its potential applications in diverse fields such as material science, environmental monitoring, and biomedical research.

2. Literature Review

The literature review explores the growing field of spectroscopy, which has become increasingly important across scientific and industrial domains as image processing and machine learning have advanced. Spectroscopy is widely used in natural surface characterisation, sensor calibration, material analysis, and environmental sensing.
Milton et al. [1] provided a comprehensive review of field spectroscopy, emphasising its applications in characterising natural surfaces and supporting sensor calibration. Building on this, John-Herpin et al. [2] introduced a technique utilising visible light for infrared spectroscopy, highlighting its potential in material analysis and environmental sensing. Pholpho et al. [3] presented a study on classifying bruised longan fruits using visible spectroscopy, demonstrating its industrial applications. Similarly, Wang and ElMasry [4] developed a spectrophotometric method for detecting surface bruises on apples, offering an automated quality control approach in fruit processing. Beberniss and Ehrhardt [5] discussed the impact of high-speed computers on theoretical spectroscopy, which enabled detailed analyses and facilitated collaborations in experimental research. In another domain, Van Vliet and De Groot [6] reviewed high-pressure sodium discharge lamps, focusing on advancements in design and their applications for outdoor lighting. Kitsinelis et al. [7] investigated medium-pressure mercury discharge lamps as intense white light sources, providing insights into their atomic emission spectra and potential applications.
Further extending the exploration of discharge characteristics, Trunec et al. [8] studied atmospheric pressure glow discharge in neon, shedding light on its behaviour in different gas mixtures. Complementing this, Golubovskii et al. [9] utilised numerical simulations to analyse homogeneous barrier discharge in helium, contributing to the understanding of plasma physics. Amendola and Meneghetti [10] proposed a method for sizing gold nanoparticles using UV-vis spectroscopy, offering a practical approach for nanoparticle characterisation. Finally, Godiksen et al. [11] employed electron paramagnetic resonance spectroscopy to identify and quantify copper sites in zeolites, enhancing our understanding of their catalytic properties.
Windom et al. [12] utilised Raman spectroscopy to study molybdenum disulfide and trioxide, demonstrating their applications in tribological systems. Rauscher et al. [13] highlighted the need for improved detector technologies for spectroscopic biosignature characterisation in space telescopes, underscoring the significance of advanced detection techniques. Kurniastuti et al. [14] developed a method for determining Hue Saturation Value (HSV) colour features in kidney histology images, enabling automated segmentation. Similarly, Cantrell et al. [15] investigated the Hue parameter in the HSV colour space for bitonal optical sensors, demonstrating its superior precision in sensing applications. Ma et al. [16] proposed a computational framework for turbid water single-pixel imaging, which enhanced image recovery in underwater environments. This work was complemented by Steinegger et al. [17], who discussed various aspects of spectroscopies related to pH values and the benefits of different materials and their applications. Wang et al. [18] designed a refinement module to optimise the details of underwater images, achieving better visual effects and reporting the effectiveness of the proposed UIE-Convformer. In the realm of medical diagnosis, Simbolon et al. [19] employed colour segmentation in CT scans for pneumonia detection, demonstrating the significant potential of visual computing techniques in healthcare applications. Garaba et al.’s [20] study on the effectiveness of the Forel-Ule Colour Index (FUI) scale in assessing natural water properties revealed its versatility in measuring water quality through ocean colour remote sensing products, handheld scales, and mobile apps. Verma et al. [21] developed multiple polynomial regression (MPR) for haze removal from images, enhancing methods and addressing challenges in environmental monitoring and medical diagnostics using advanced spectroscopy and imaging technologies.
Kang et al. [22] reported that the machine learning-assisted fluorescence hyperspectral technique was beneficial for classifying rice varieties. Blake et al.’s [23] literature review on machine learning methods for cancer classification using Raman spectroscopy data highlighted the popularity of deep learning models, the need to address methodological challenges, and the need for benchmark datasets. Ede [24] discussed the future scope of deep learning in the field of electron microscopy, pointing towards its potential for significant advancements. Carey et al. [25] utilised machine learning and genetic programming to analyse spectroscopic data, providing insights into material composition and enabling informed decision-making through interpretable rules. In a related study, Li et al. [26] presented a methodology for dermatological disease detection using image processing and machine learning, showcasing the potential of these techniques in improving diagnostic accuracy and efficiency in dermatology. Rodellar et al. [27] focused on image processing and machine learning for the morphological analysis of blood cells, highlighting the utility of these techniques in automating cell image analysis and supporting clinical diagnosis. Finally, Carey et al. [28] presented a machine learning method that interprets vibrational spectroscopy data, creating interpretable rules for explanatory analysis.
Zeng et al. [29] developed a method for the speciation of arsenic using liquid phase microextraction coupled with flame atomic absorption spectroscopy, demonstrating its applicability in environmental and biological sample analysis. Similarly, Bai and Fan [30] proposed a flow injection micelle-mediated methodology for the determination of lead using electrothermal atomic absorption spectrometry, showcasing its sensitivity and accuracy in trace lead detection. Li et al.’s [31] study utilised image-based modelling and deep learning to predict the mechanical properties of heterogeneous materials, highlighting the potential of deep learning to establish implicit mappings between macroscale and mesoscale structures, providing insights for material behaviour optimisation. Additionally, Zhang et al. [32] explored the performance of classification models for toxins based on Raman spectroscopy using machine learning algorithms. Their study demonstrated the effectiveness of preprocessing methods and classification models in accurately identifying protein toxins, offering promising avenues for toxin detection and public health protection.
In recent years, advancements in imaging technology have revolutionised various industries, including agriculture, food processing, and pharmaceuticals. For instance, Kruse et al. [33] introduced the Spectral Image Processing System (SIPS), facilitating the interactive visualisation and analysis of imaging spectrometer data. Yang et al. [34] investigated the identification of the geographic origin of peaches using visible-near infrared spectroscopy, fluorescence spectroscopy, and image processing technology. Similarly, Simon et al. [35] compared external bulk video imaging (BVI) with focused beam reflectance measurement (FBRM) and ultraviolet-visible (UV/vis) spectroscopy for identifying metastable zones in crystallisation processes. Furthermore, Vong and Larbi [36] evaluated processed image data obtained from different commercial cameras for visible spectroscopy applications. Liu et al. [37] provided a comprehensive review of wavelength selection techniques for hyperspectral image processing in the food industry. Plaza et al. [38] showcased recent advancements in hyperspectral image processing techniques, focusing on algorithms for handling high-dimensional data and integrating spatial and spectral information, highlighting their potential for innovation across various industries.
Yonezawa et al. [39] created MOLASS, an analytical software that automates the processing of matrix data from SAXS, UV-visible spectroscopy, and SEC-SAXS, using matrix optimisation with low-rank factorisation. Building on advancements in educational tools, Grasse et al. [40] introduced a method for teaching UV-Vis spectroscopy using a 3D-printable smartphone spectrophotometer, thus addressing accessibility issues associated with traditional laboratory equipment in educational settings. Green et al.’s [41] study on imaging spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) provides a comprehensive analysis of its diverse applications in Earth remote sensing research, including environmental monitoring and Earth observation.
Shakya et al. [42] introduced a visible light spectroscopy method using plasmonic colour filter arrays, demonstrating its feasibility and potential in hyperspectral imaging systems for improved spectral resolution and accuracy. Usami et al. [43] developed a real-time terahertz imaging system with a spatial resolution of 1.4 mm and a video rate of up to 30 frames per second for materials science and imaging applications. Delaney et al.’s [44] study utilised reflectance imaging spectroscopy for pigment mapping in paintings, demonstrating the effectiveness of hyperspectral imaging cameras in identifying and mapping artist pigments. Al Ktash et al. [45] used UV-Vis/NIR spectroscopy and hyperspectral imaging to identify raw cotton types, revealing proteins, hydrocarbons, and hydroxyl groups as dominant structures. Baka et al. [46] developed a UV-Vis spectroscopy method for measuring transformer oil’s interfacial tension, offering a cost-effective and rapid alternative for transformer maintenance. Haffert et al. [47] developed VIS-X, a high-resolution spectrograph for the Magellan Clay 6.5 m telescope, enhancing sensitivity to protoplanetary systems and providing insights into planet formation processes in the visible spectrum. Van der Woerd et al. [48] conducted a study on the use of hue-angle products in optical satellite sensors for water quality monitoring, revealing accurate colour properties from ocean colour satellite instruments since 1997.
The identified literature gap revealed a lack of processes utilising image processing for visible spectroscopy. This research addressed this gap by conducting a visible spectroscopy of chemical element vapours through the integration of machine learning and image processing techniques.

3. Methodology

Visible spectroscopy plays a crucial role in various scientific and industrial applications, facilitating the analysis of materials based on their spectral characteristics. In recent years, the integration of image processing techniques and machine learning algorithms has emerged as a promising approach to enhance the accuracy and efficiency of spectroscopic methods. This research aims to leverage such advancements by developing a methodology for visible spectroscopy using polynomial regression and machine learning algorithms.
The TAS-990 Atomic Absorption Spectrophotometer was used for the spectroscopy of the different vapours; it is among the more precise spectrometers available and is known for its accuracy in providing data on elemental emissions (see Table 1 for its features and specifications).
Artificial intelligence (AI) encompasses a broad spectrum of technologies and techniques aimed at enabling machines to mimic human cognitive functions, learn from data, and make autonomous decisions. Within the realm of AI, machine learning is a pivotal subfield that empowers systems to improve their performance on tasks through experience. One notable technique within machine learning is polynomial regression, a type of regression analysis in which the relationship between the independent variable (or variables) and the dependent variable is modelled as an nth-degree polynomial function. Polynomial regression extends linear regression to capture more complex relationships between variables, making it particularly useful when the data do not follow a strictly linear pattern. By fitting a polynomial curve to the data points, polynomial regression enables AI systems to uncover intricate patterns and trends, thus enhancing their predictive capabilities.
In the context of AI, polynomial regression finds applications in various domains, including finance, economics, engineering, and the sciences. For instance, in finance, polynomial regression can be employed to model stock price movements, considering factors beyond linear trends. Similarly, in climate science, it can aid in analysing the nonlinear relationships between atmospheric variables. Moreover, polynomial feature expansions serve as a building block for more advanced AI techniques: models such as neural networks likewise learn and represent nonlinear relationships in data, albeit through nonlinear activation functions rather than explicit polynomial terms. In summary, polynomial regression plays a crucial role in the AI landscape by enabling systems to model and understand nonlinear phenomena, thereby advancing their capabilities in prediction, decision-making, and pattern recognition.
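In general terms, an nth-degree polynomial regression models the dependent variable as

y = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n + \varepsilon,

where the coefficients \beta_0, ..., \beta_n are estimated from the training data (typically by least squares) and \varepsilon denotes the residual error; in this work, x is the hue value, y is the dominant wavelength, and n = 6.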
The research begins by acquiring a dataset containing hue versus dominant wavelength values. In Van der Woerd and Wernand’s [48] study, the generation of the dataset of hue values and corresponding wavelengths involves a multi-step process. Initially, 500 simulated satellite remote sensing reflectance (Rrs) spectra are utilised and corrected for atmospheric influences and other optical properties. These Rrs spectra are functions of wavelength, representing the measured light intensity at different wavelengths. Tristimulus values are then determined by integrating Rrs with the standard colorimetric two-degree Colour Matching Functions (CMFs), which serve as weighting functions based on the sensitivity of human vision to different wavelengths.
For each sensor used in the study, such as SeaWiFS, MODIS, MERIS, OLCI, Landsat, and MSI, weights (M) are precalculated based on the CMFs and the central wavelengths of each sensor’s spectral bands. These sensors typically have narrow spectral bands, ensuring the accurate measurement of Rrs at specific wavelengths. Subsequently, the offset for hue angles is determined by comparing the reconstructed sensor spectrum with the original hyperspectral spectrum. This comparison ensures the accuracy of hue angle calculations across different sensors. For instruments with broad spectral bands, such as the MSI on Sentinel-2 or the MODIS 500-m product, which have a varying detection efficiency within each band described by the Spectral Response Function (SRF), a similar approach is followed. Weights are precalculated based on the mean wavelength of each band, and Rrs is derived by folding IOCCG and field spectra with the sensor’s SRF. This process allows for the determination of hue angles from the derived Rrs values, which are then compared with the original hyperspectral data to assess accuracy. In summary, the dataset of hue values and corresponding wavelengths is generated through a meticulous process involving the correction of Rrs spectra, integration with CMFs, the pre-calculation of weights based on sensor characteristics, and the folding of spectra with sensor SRFs to derive Rrs values for each sensor band, ultimately allowing for the accurate determination of hue angles.
In this study, a total of 241 data points were used, with variations across different materials and their corresponding hue values being calculated. A breakdown of the dataset by light source would provide further insight into the distribution and composition of the samples. The hue value represents the colour information extracted from an image, while the dominant wavelength corresponds to the peak wavelength of the emitted light. This dataset serves as the foundation for establishing a mathematical relationship between hue values and dominant wavelengths.
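As a minimal sketch of this colorimetric recipe (not the authors’ code, and assuming a CMF table of shape (N, 4) holding wavelength, x-bar, y-bar and z-bar values, plus the standard equal-energy white point at chromaticity (1/3, 1/3)), a hue angle can be derived from a reflectance spectrum as follows:

import numpy as np

def hue_angle_from_spectrum(wavelengths, rrs, cmf):
    # Interpolate the two-degree colour matching functions onto the Rrs wavelength grid
    xbar = np.interp(wavelengths, cmf[:, 0], cmf[:, 1])
    ybar = np.interp(wavelengths, cmf[:, 0], cmf[:, 2])
    zbar = np.interp(wavelengths, cmf[:, 0], cmf[:, 3])
    # Tristimulus values: integrate Rrs weighted by the CMFs
    X = np.trapz(rrs * xbar, wavelengths)
    Y = np.trapz(rrs * ybar, wavelengths)
    Z = np.trapz(rrs * zbar, wavelengths)
    # Chromaticity coordinates and hue angle relative to the white point (1/3, 1/3)
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    angle = np.degrees(np.arctan2(y - 1.0 / 3.0, x - 1.0 / 3.0))
    return angle % 360.0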
To model this relationship between the independent variable X (representing the hue value) and the dependent variable y (representing the dominant wavelength), polynomial regression is initially applied to the dataset. Polynomial regression is a statistical technique utilised to fit a polynomial function to observed data points, allowing us to capture potentially nonlinear relationships between variables.
In the code snippet provided in Table 2, the dataset is divided into training and testing sets using the “train_test_split” function, where X and Y represent the independent and dependent variables, respectively, with a test size of 20% and a specified random state for reproducibility.
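For convenience, the steps of Table 2 can be assembled into a single runnable sketch. The hue and wavelength values below are every tenth entry of Table 3, standing in for the full 241-point dataset, and the final scoring line is an illustrative addition rather than part of the original snippet:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Every tenth point of Table 3, standing in for the full hue/wavelength dataset
hue = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120,
                130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240])
wavelength = np.array([610, 598, 590, 585, 580, 576, 572, 567, 561, 553, 545, 531, 517,
                       508, 503, 499, 496, 494, 492, 489, 487, 484, 480, 473, 453])
X = hue.reshape(-1, 1).astype(float)   # independent variable: hue
Y = wavelength.astype(float)           # dependent variable: dominant wavelength (nm)

# Step 1 of Table 2: split the data into training and testing sets (80/20)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Steps 2-3: build 6th-degree polynomial features from the training hues
poly_reg = PolynomialFeatures(degree=6)
X_poly = poly_reg.fit_transform(X_train)

# Steps 4-5: fit a linear model on the polynomial features
pol_reg = LinearRegression()
pol_reg.fit(X_poly, y_train)
print("R^2 on the held-out hues:", pol_reg.score(poly_reg.transform(X_test), y_test))

In practice, rescaling the hue values before the polynomial expansion improves the numerical conditioning of the high-degree terms, although the pipeline runs without it.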
In this study, polynomial regression models with degrees ranging from 2 to 5 are constructed and evaluated. However, upon analysis, it is observed that these models exhibit significant errors and fail to adequately capture the underlying patterns in the data.
Recognising the need for a more flexible and expressive model, a 6th-degree polynomial regression model is subsequently employed. The decision to increase the polynomial degree is motivated by the desire to capture higher-order relationships and finer details in the dataset. Remarkably, the 6th-degree polynomial regression model demonstrates improved performance, exhibiting better alignment with the dataset and reduced errors compared to lower-degree models. This highlights the importance of selecting an appropriate model complexity to accurately represent the underlying data distribution.
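A sketch of this degree-selection step, reusing the training and test splits from the assembled snippet above, is given below; the paper does not state the exact evaluation protocol, so the held-out R² reported here is only illustrative:

from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline

# Fit polynomial regressions of degree 2 to 7 and compare their R^2 scores
for degree in range(2, 8):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree}: R^2 = {r2_score(y_test, model.predict(X_test)):.4f}")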
The successful implementation of the 6th-degree polynomial regression model lays the groundwork for further advancements in the research. With an effective mathematical model in place, the next step involves integrating machine learning algorithms to enhance the spectroscopic analysis process. Specifically, the developed model serves as a basis for training machine learning algorithms to predict dominant wavelengths based on hue values extracted from input images.
By combining polynomial regression with machine learning algorithms, the proposed methodology offers a framework for visible spectroscopy. This integrated approach leverages the strengths of both techniques, enabling the accurate and efficient analysis of spectral data. The ML algorithms were integrated into another algorithm, which calculated the corresponding wavelength values based on the hue values extracted from images of different vapours. The hue value was calculated using the OpenCV framework.
OpenCV calculates the hue value of an image using the HSV (Hue, Saturation, Value) colour space. In this colour space, hue represents the dominant wavelength of light, effectively denoting the colour itself. To calculate the hue value of an image, OpenCV first converts the image from the default BGR (Blue, Green, Red) colour space to HSV. This conversion separates the colour information from the brightness and intensity components, facilitating the easier manipulation and analysis of colours. Once the image is in the HSV colour space, OpenCV extracts the hue component, which ranges from 0 to 179 for 8-bit images (each step corresponding to two degrees on the colour wheel) or from 0 to 360 degrees for floating-point images, depending on the data type used. This hue component is essentially a measure of the colour’s position on the colour wheel, with red at 0 degrees, green at 120 degrees, and blue at 240 degrees. By isolating the hue component, OpenCV effectively quantifies the colour of each pixel in the image, providing a numerical representation of its hue.
This combined approach enabled the ML model to accurately predict the dominant wavelengths associated with the hue values obtained from the input images. In simpler terms, the algorithm processed the images to extract hue values, which were then fed into the ML model to determine the corresponding wavelengths, as shown in Figure 1. This allowed the system to effectively analyse various vapour sources and provide accurate spectral information based on the visual data input.
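The following sketch illustrates this pipeline for a single image. The file name, the brightness threshold, the use of the median hue as the representative value, and the rescaling of OpenCV’s 8-bit hue (0 to 179) to degrees are all assumptions made for illustration; the trained poly_reg and pol_reg objects are those from the earlier snippet:

import cv2
import numpy as np

bgr = cv2.imread("sodium_vapour.jpg")            # hypothetical input image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)       # convert BGR to HSV
hue_deg = hsv[:, :, 0].astype(float) * 2.0       # 8-bit hue spans 0-179, i.e. degrees/2
brightness = hsv[:, :, 2]
bright_hues = hue_deg[brightness > 128]          # keep reasonably bright pixels only (assumed threshold)
dominant_hue = float(np.median(bright_hues))     # representative hue in degrees

# Feed the extracted hue into the trained polynomial model
wavelength_nm = pol_reg.predict(poly_reg.transform([[dominant_hue]]))[0]
print(f"Estimated dominant wavelength: {wavelength_nm:.2f} nm")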
In this study, machine learning algorithms were seamlessly integrated into an image processing framework, orchestrating the extraction of hue values from images capturing different vapours. This synergistic approach facilitated the ML model’s ability to make precise predictions regarding the predominant wavelengths corresponding to these hue values derived from the input imagery. The experiment encompassed the examination of sodium vapour, a neon lamp, a copper vapour laser, mercury vapour, and helium vapour, where their respective hue values were utilised to deduce their dominant wavelengths, as shown in Figure 2. This methodological fusion empowered the system to adeptly scrutinise diverse vapour sources, harnessing visual data input to furnish valuable spectral insights.
The significance of this phase lies in its ability to bridge the gap between visual information and spectral data through a systematic and data-driven approach. By acquiring images of different vapours along with their known dominant wavelengths, researchers can establish a robust dataset for training ML algorithms. This dataset serves as a foundation for teaching the ML model to recognise and associate specific hue values extracted from the vapour images with their corresponding dominant wavelengths.
Moreover, the development of an algorithm to process these images and extract hue values adds a layer of sophistication to the analysis process. This algorithm plays a pivotal role in preprocessing the visual data and converting it into a format suitable for ML analysis. By calculating the hue values from the images, the algorithm provides quantitative inputs that can be directly fed into the ML model.
The integrated approach of combining image processing with ML techniques offers several significant advantages. It enables the ML algorithm to learn complex patterns and relationships between hue values and dominant wavelengths. Through iterative training on a diverse dataset of vapour images, the ML model can refine its predictions and improve its accuracy over time.
Combining polynomial regression with machine learning (ML) algorithms represents a powerful synergy that enhances the methodology for visible spectroscopy in several ways.
Firstly, polynomial regression provides a mathematical framework to model the relationship between hue values and dominant wavelengths. By fitting a polynomial function to the dataset, it captures the underlying patterns and trends in the spectral data. This allows for more accurate predictions of dominant wavelengths based on hue values extracted from images. The use of polynomial regression ensures that the model can effectively handle nonlinear relationships and complex spectral variations, which are common in visible spectroscopy.
On the other hand, ML algorithms bring additional capabilities to the methodology. These algorithms have the ability to learn from data, recognise patterns, and make predictions without being explicitly programmed. By integrating ML techniques, the methodology gains the advantages of adaptability and scalability. ML algorithms can iteratively refine their predictions based on new data, leading to continuous improvement in accuracy and performance over time. Moreover, ML algorithms can handle complex datasets with high dimensionality, making them well-suited for analysing diverse vapour sources with varying spectral characteristics. Unlike traditional spectroscopic methods that may be limited to specific types of samples or conditions, ML algorithms can generalise patterns across different vapour sources. This flexibility enables the methodology to accommodate a wide range of applications and scenarios, from industrial process monitoring to environmental sensing. Furthermore, the combination of polynomial regression with ML algorithms offers a robust framework for spectroscopic analysis. Polynomial regression provides a solid mathematical foundation, while ML algorithms add a layer of intelligence and adaptability. This dual approach ensures that the methodology is both accurate and versatile, capable of handling the complexities inherent in visible spectroscopy. Overall, through integrating polynomial regression with ML algorithms, the methodology for visible spectroscopy becomes more than the sum of its parts. It offers a comprehensive and effective framework for analysing spectral data, enhancing accuracy, and enabling the analysis of diverse vapour sources. This synergy between mathematical modelling and machine learning represents a significant advancement in spectroscopic techniques, with wide-ranging applications across various scientific and industrial fields.

4. Results and Discussions

In this study, polynomial regression models spanning degrees 2 to 5 were formulated and assessed. However, upon examination, it was noted that these models yielded notable errors and struggled to accurately depict the inherent patterns within the dataset. The resultant curves displayed a poor alignment with the data points, highlighting the constraints associated with the selected polynomial degrees, as illustrated in Figure 3.
According to Figure 3, the coefficient of determination of the polynomial regression is 97.77% for the second degree, 98.31% for the third degree, 98.69% for the fourth degree, 99.71% for the fifth degree, 99.77% for the sixth degree, and 99.72% for the seventh degree. The fifth-, sixth-, and seventh-degree polynomial regressions are therefore nearly equivalent; however, among all of them, the sixth-degree polynomial regression has the highest coefficient of determination. Notably, the sixth-degree polynomial regression model exhibits enhanced performance, displaying superior alignment with the dataset and diminished errors. For a clearer interpretation, the sixth-degree polynomial regression is shown with the training data in Figure 4.
After applying sixth-degree polynomial regression, the machine learning model generates a curve, as shown in Figure 5. Table 3 provides a comparison between the wavelengths in the dataset and the corresponding wavelengths at certain points after training. The results presented in Table 4 indicate the performance of the proposed methodology in predicting the dominant wavelengths emitted by various vapour sources compared to their known or original values. The methodology involves a combination of polynomial regression and machine learning algorithms to analyse spectral data extracted from images.
The original wavelengths (in nm) for the various vapour lamps are as follows: sodium (589), neon (588.2), copper (578.2), mercury (546.074), and helium (587.562) [49,50,51,52,53].
For sodium (Na), the predicted dominant wavelength is found to be 588.73231 nm, while the original value is 589 nm. This results in a very low error percentage of 0.04%, indicating a highly accurate prediction.
Similarly, for neon (Ne), the predicted wavelength is 588.14 nm, slightly lower than the original value of 588.2 nm, resulting in an error percentage of only 0.01%. This demonstrates a high level of precision in predicting the dominant wavelength of neon vapour.
However, for copper (Cu), there is a larger discrepancy between the predicted wavelength (572.38412 nm) and the original value (578.2 nm), resulting in an error percentage of 1%. While this error is relatively small, it suggests that there may be some limitations or challenges in accurately predicting the spectral characteristics of copper vapour using the current methodology.
The results for mercury (Hg) indicate a more significant error, with the predicted wavelength (566.5212 nm) deviating considerably from the original value (546.074 nm). The error percentage in this case is 3.7%, indicating that the methodology may struggle to accurately predict the dominant wavelength of mercury vapour, potentially due to the complex spectral characteristics of mercury emissions.
Lastly, for helium (He), the predicted wavelength (587.97620 nm) is very close to the original value of 587.562 nm, resulting in a relatively low error percentage of 0.07%. This indicates that the methodology performs well in predicting the dominant wavelength of helium vapour with a high level of accuracy.
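These error percentages follow directly from the relative-error formula, error % = |predicted − original| / original × 100, as the following quick check of the Table 4 values confirms:

# Quick check of the error percentages reported in Table 4
pairs = {
    "Sodium":  (588.73231, 589.0),
    "Neon":    (588.14, 588.2),
    "Copper":  (572.38412, 578.2),
    "Mercury": (566.5212, 546.074),
    "Helium":  (587.97620, 587.562),
}
for name, (predicted, original) in pairs.items():
    error_pct = abs(predicted - original) / original * 100.0
    print(f"{name}: {error_pct:.3f}%")
# Prints approximately 0.045%, 0.010%, 1.006%, 3.744% and 0.070%, in line with
# the values reported in Table 4 (0.04%, 0.01%, 1%, 3.7% and 0.07%).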
Overall, the results, as shown in Table 4 and Figure 5, demonstrate the effectiveness of the proposed methodology in predicting the dominant wavelengths of vapour sources, with generally low error percentages across different elements. While some discrepancies exist, particularly for elements like copper and mercury, the methodology shows promise for a wide range of applications in visible spectroscopy. Further refinement and optimisation may be necessary to improve accuracy, particularly for elements with complex spectral characteristics.
This approach enhances the versatility and applicability of spectroscopic analysis. By leveraging image data, researchers can analyse vapour sources in real-world scenarios, capturing variations in lighting conditions, camera settings, and other environmental factors. This capability is particularly valuable for field applications where traditional spectroscopic instruments may be impractical or inaccessible.
Furthermore, the ability to accurately predict dominant wavelengths based on hue values extracted from vapour images holds immense potential for various scientific and industrial applications. For instance, in environmental monitoring, this methodology could be employed to identify and quantify pollutants or trace gases based on their spectral signatures. In material science, it could aid in the characterisation of materials based on their optical properties.
Overall, the integration of image processing and ML techniques in spectroscopic analysis represents a significant advancement with far-reaching implications. By leveraging visual data to enhance spectral analysis, researchers can unlock new insights into the composition, behaviour, and properties of vapour sources, paving the way for solutions in diverse fields.

5. Conclusions

In conclusion, the presented research offers a methodology for visible spectroscopy through integrating polynomial regression with machine learning algorithms. The results demonstrate the effectiveness of this approach in accurately predicting the dominant wavelengths emitted by various vapour sources. The methodology achieves low error percentages for most elements, indicating a high precision in wavelength prediction. Specifically, for sodium, neon, and helium, the error percentages are exceptionally low at 0.04%, 0.01%, and 0.07%, respectively. Despite minor discrepancies observed in elements like copper and mercury, with error percentages of 1% and 3.7%, respectively, the overall performance of the methodology remains good.
The combination of polynomial regression and machine learning provides a robust framework for spectroscopic analysis, enhancing the accuracy and versatility of the methodology. While some discrepancies exist, particularly for elements with complex spectral characteristics such as copper and mercury, overall, the methodology shows promise for a wide range of applications in visible spectroscopy.
Further refinement and optimisation may be necessary to improve accuracy, particularly for elements with challenging spectral profiles. Additionally, future research could explore the scalability of the methodology to analyse a broader range of vapour sources and its applicability in real-world scenarios.
Overall, this research represents a significant advancement in visible spectroscopy, offering a comprehensive and effective approach that holds promise for various scientific and industrial applications. With continued development and validation, the methodology has the potential to contribute to advancements in fields such as colorimetric assay, the analysis of transition metal complexes, and the analysis of natural pigments.

Author Contributions

Conceptualisation, A.P., K.K., A.B. and S.P.; data curation, A.P., A.B. and S.P.; formal analysis, A.P., K.K. and S.P.; investigation, A.P. and A.B.; methodology, A.P. and K.K.; resources, A.P. and K.K.; software, A.P.; supervision, S.P.; validation, A.P. and A.B.; visualisation, A.B.; writing—original draft, A.P., K.K. and A.B.; writing—review and editing, K.K., A.B. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Milton, E.J.; Schaepman, M.E.; Anderson, K.; Kneubühler, M.; Fox, N. Progress in Field Spectroscopy. Remote Sens. Environ. 2009, 113, S92–S109. [Google Scholar] [CrossRef]
  2. John-Herpin, A.; Tittl, A.; Kühner, L.; Richter, F.; Huang, S.H.; Shvets, G.; Oh, S.; Altug, H. Metasurface-Enhanced Infrared Spectroscopy: An Abundance of Materials and Functionalities. Adv. Mater. 2022, 35, 2110163. [Google Scholar] [CrossRef]
  3. Pholpho, T.; Pathaveerat, S.; Sirisomboon, P. Classification of Longan Fruit Bruising Using Visible Spectroscopy. J. Food Eng. 2011, 104, 169–172. [Google Scholar] [CrossRef]
  4. Wang, N.; ElMasry, G. Bruise Detection of Apples Using Hyperspectral Imaging. In Hyperspectral Imaging for Food Quality Analysis and Control; Academic Press: Cambridge, MA, USA, 2010; pp. 295–320. [Google Scholar]
  5. Beberniss, T.J.; Ehrhardt, D.A. High-Speed 3D Digital Image Correlation Vibration Measurement: Recent Advancements and Noted Limitations. Mech. Syst. Signal Process. 2017, 86, 35–48. [Google Scholar] [CrossRef]
  6. Van Vliet, J.A.J.M.; De Groot, J.J. High-Pressure Sodium Discharge Lamps. IEE Proc. 1981, 128, 415. [Google Scholar] [CrossRef]
  7. Kitsinelis, S.; Devonshire, R.; Stone, D.A.; Tozer, R.C. Medium Pressure Mercury Discharge for Use as an Intense White Light Source. J. Phys. D Appl. Phys. 2005, 38, 3208–3216. [Google Scholar] [CrossRef]
  8. Trunec, D.; Brablec, A.; Buchta, J. Atmospheric Pressure Glow Discharge in Neon. J. Phys. D Appl. Phys. 2001, 34, 1697–1699. [Google Scholar] [CrossRef]
  9. Golubovskii, Y.B.; Maiorov, V.A.; Behnke, J.; Behnke, J.F. Modelling of the Homogeneous Barrier Discharge in Helium at Atmospheric Pressure. J. Phys. D Appl. Phys. 2002, 36, 39–49. [Google Scholar] [CrossRef]
  10. Amendola, V.; Meneghetti, M. Size Evaluation of Gold Nanoparticles by UV−vis Spectroscopy. J. Phys. Chem. C 2009, 113, 4277–4285. [Google Scholar] [CrossRef]
  11. Godiksen, A.; Vennestrøm, P.N.R.; Rasmussen, S.B.; Mossin, S. Identification and Quantification of Copper Sites in Zeolites by Electron Paramagnetic Resonance Spectroscopy. Top. Catal. 2016, 60, 13–29. [Google Scholar] [CrossRef]
  12. Windom, B.; Sawyer, W.G.; Hahn, D.W. A Raman Spectroscopic Study of MoS2 and MoO3: Applications to Tribological Systems. Tribol. Lett. 2011, 42, 301–310. [Google Scholar] [CrossRef]
  13. Rauscher, B.J.; Canavan, E.R.; Moseley, S.H.; Sadleir, J.E.; Stevenson, T. Detectors and Cooling Technology for Direct Spectroscopic Biosignature Characterization. J. Astron. Telesc. Instrum. Syst. 2016, 2, 041212. [Google Scholar] [CrossRef]
  14. Kurniastuti, I.; Yuliati, E.N.I.; Yudianto, F.; Wulan, T.D. Determination of Hue Saturation Value (HSV) Color Feature in Kidney Histology Image. J. Phys. Conf. Ser. 2022, 2157, 012020. [Google Scholar] [CrossRef]
  15. Cantrell, K.; Erenas, M.M.; de Orbe-Payá, I.; Capitán-Vallvey, L.F. Use of the Hue Parameter of the Hue, Saturation, Value Color Space As a Quantitative Analytical Parameter for Bitonal Optical Sensors. Anal. Chem. 2010, 82, 531–542. [Google Scholar] [CrossRef] [PubMed]
  16. Ma, M.; Gu, L.; Shen, Y.; Guan, Q.; Wang, C.; Deng, H.; Zhong, X.; Xia, M.; Shi, D. Computational Framework for Turbid Water Single-Pixel Imaging by Polynomial Regression and Feature Enhancement. IEEE Trans. Instrum. Meas. 2023, 72, 5021111. [Google Scholar] [CrossRef]
  17. Steinegger, A.; Wolfbeis, O.S.; Borisov, S. Optical Sensing and Imaging of pH Values: Spectroscopies, Materials, and Applications. Chem. Rev. 2020, 120, 12357–12489. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, B.; Xu, H.; Jiang, G.; Yu, M.; Ren, T.; Luo, T.; Zhu, Z. UIE-ConvFormer: Underwater Image Enhancement Based on Convolution and Feature Fusion Transformer. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 1952–1968. [Google Scholar] [CrossRef]
  19. Simbolon, S.; Jumadi, J.; Khairil, K.; Yupianti, Y.; Yulianti, L.; Supiyandi, S.; Windarto, A.P.; Wahyuni, S. Image Segmentation Using Color Value of the Hue in CT Scan Result. J. Phys. Conf. Ser. 2022, 2394, 012017. [Google Scholar] [CrossRef]
  20. Garaba, S.P.; Friedrichs, A.; Voß, D.; Zielinski, O. Classifying Natural Waters with the Forel-Ule Colour Index System: Results, Applications, Correlations and Crowdsourcing. Int. J. Environ. Res. Public Health 2015, 12, 16096–16109. [Google Scholar] [CrossRef]
  21. Verma, M.; Yadav, V.; Kaushik, V.D.; Pathak, V.K. Multiple Polynomial Regression for Solving Atmospheric Scattering Model. Int. J. Adv. Intell. Paradig. 2019, 12, 400. [Google Scholar] [CrossRef]
  22. Kang, Z.; Fan, R.; Chen, Z.; Wu, Y.; Lin, Y.; Li, K.; Qu, R.; Xu, L. The Rapid Non-Destructive Differentiation of Different Varieties of Rice by Fluorescence Hyperspectral Technology Combined with Machine Learning. Molecules 2024, 29, 682. [Google Scholar] [CrossRef] [PubMed]
  23. Blake, N.; Gaifulina, R.; Griffin, L.D.; Bell, I.M.; Thomas, G.M.H. Machine Learning of Raman Spectroscopy Data for Classifying Cancers: A Review of the Recent Literature. Diagnostics 2022, 12, 1491. [Google Scholar] [CrossRef] [PubMed]
  24. Ede, J.M. Deep Learning in Electron Microscopy. Mach. Learn. Sci. Technol. 2021, 2, 011004. [Google Scholar] [CrossRef]
  25. Goodacre, R. Explanatory Analysis of Spectroscopic Data Using Machine Learning of Simple, Interpretable Rules. Vib. Spectrosc. 2003, 32, 33–45. [Google Scholar] [CrossRef]
  26. Li, L.; Zhang, Q.; Ding, Y.; Jiang, H.; Thiers, B.H.; Wang, J. Automatic Diagnosis of Melanoma Using Machine Learning Methods on a Spectroscopic System. BMC Med. Imaging 2014, 14, 36. [Google Scholar] [CrossRef] [PubMed]
  27. Rodellar, J.; Alférez, S.; Acevedo, A.; Molina, Á.; Merino, A. Image Processing and Machine Learning in the Morphological Analysis of Blood Cells. Int. J. Lab. Hematol. 2018, 40, 46–53. [Google Scholar] [CrossRef]
  28. Carey, C.; Boucher, T.; Mahadevan, S.; Dyar, M.D.; Bartholomew, P. Machine Learning Tools for Mineral Recognition and Classification from Raman Spectroscopy. J. Raman Spectrosc. 2014, 1783, 5053. [Google Scholar] [CrossRef]
  29. Zeng, C.; Yan, Y.; Tang, J.; Wu, Y.; Zhong, S. Speciation of Arsenic(III) and Arsenic(V) Based on Triton X-100 Hollow Fiber Liquid Phase Microextraction Coupled with Flame Atomic Absorption Spectrometry. Spectrosc. Lett. 2017, 50, 220–226. [Google Scholar] [CrossRef]
  30. Bai, F.; Fan, Z. Flow Injection Micelle-Mediated Methodology for Determination of Lead by Electrothermal Atomic Absorption Spectrometry. Mikrochim. Acta 2007, 159, 235–240. [Google Scholar] [CrossRef]
  31. Li, X.; Liu, Z.; Cui, S.; Luo, C.; Li, C.; Zhuang, Z. Predicting the Effective Mechanical Property of Heterogeneous Materials by Image Based Modeling and Deep Learning. Comput. Methods Appl. Mech. Eng. 2019, 347, 735–753. [Google Scholar] [CrossRef]
  32. Zhang, P.; Liu, B.; Mu, X.; Xu, J.; Du, B.; Wang, J.; Liu, Z.; Tong, Z. Performance of Classification Models of Toxins Based on RAMAN Spectroscopy Using Machine Learning Algorithms. Molecules 2023, 29, 197. [Google Scholar] [CrossRef] [PubMed]
  33. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A. The Spectral Image Processing System (SIPS)—Interactive Visualization and Analysis of Imaging Spectrometer Data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  34. Yang, Q.; Tian, S.; Xu, H. Identification of the Geographic Origin of Peaches by VIS-NIR Spectroscopy, Fluorescence Spectroscopy and Image Processing Technology. J. Food Compos. Anal. 2022, 114, 104843. [Google Scholar] [CrossRef]
  35. Simon, L.L.; Nagy, Z.K.; Hungerbühler, K. Comparison of External Bulk Video Imaging with Focused Beam Reflectance Measurement and Ultra-Violet Visible Spectroscopy for Metastable Zone Identification in Food and Pharmaceutical Crystallization Processes. Chem. Eng. Sci. 2009, 64, 3344–3351. [Google Scholar] [CrossRef]
  36. Vong, C.N.; Larbi, P.A. Comparison of Image Data Obtained with Different Commercial Cameras for Use in Visible Spectroscopy. In Proceedings of the 2016 ASABE Annual International Meeting, Orlando, FL, USA, 17–20 July 2016. [Google Scholar] [CrossRef]
  37. Liu, D.; Sun, D.; Zeng, X. Recent Advances in Wavelength Selection Techniques for Hyperspectral Image Processing in the Food Industry. Food Bioprocess Technol. 2013, 7, 307–323. [Google Scholar] [CrossRef]
  38. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.G.; et al. Recent Advances in Techniques for Hyperspectral Image Processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  39. Yonezawa, K.; Takahashi, M.; Yatabe, K.; Nagatani, Y.; Shimizu, N. MOLASS: Software for Automatic Processing of Matrix Data Obtained from Small-Angle X-Ray Scattering and UV–Visible Spectroscopy Combined with Size-Exclusion Chromatography. Biophys. Physicobiol. 2023, 20, e200001. [Google Scholar] [CrossRef]
  40. Grasse, E.K.; Torcasio, M.H.; Smith, A.W. Teaching UV–Vis Spectroscopy with a 3D-Printable Smartphone Spectrophotometer. J. Chem. Educ. 2015, 93, 146–151. [Google Scholar] [CrossRef]
  41. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.; Pavri, B.; Chovit, C.; Solis, M.; et al. Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
  42. Shakya, J.R.; Shashi, F.H.; Wang, A.X. Plasmonic Color Filter Array Based Visible Light Spectroscopy. Sci. Rep. 2021, 11, 23687. [Google Scholar] [CrossRef]
  43. Usami, M.; Iwamoto, T.; Fukasawa, R.; Tani, M.; Watanabe, M.; Sakai, K. Development of a THz Spectroscopic Imaging System. Phys. Med. Biol. 2002, 47, 3749–3753. [Google Scholar] [CrossRef] [PubMed]
  44. Delaney, J.K.; Zeibel, J.G.; Thoury, M.; Littleton, R.T.; Morales, K.M.; Palmer, M.R.; De La Rie, E.R. Visible and Infrared Reflectance Imaging Spectroscopy of Paintings: Pigment Mapping and Improved Infrared Reflectography. In Proceedings of the SPIE, Munich, Germany, 7 July 2009. [Google Scholar] [CrossRef]
  45. Ktash, M.A.; Hauler, O.; Ostertag, E.; Brecht, M. Ultraviolet-Visible/near Infrared Spectroscopy and Hyperspectral Imaging to Study the Different Types of Raw Cotton. J. Spectr. Imaging 2020, 9, 1–11. [Google Scholar] [CrossRef]
  46. Baka, N.A.; Abu-Siada, A.; Islam, S.; El-Naggar, M. A New Technique to Measure Interfacial Tension of Transformer Oil Using UV-Vis Spectroscopy. IEEE Trans. Dielectr. Electr. Insul. 2015, 22, 1275–1282. [Google Scholar] [CrossRef]
  47. Haffert, S.Y.; Males, J.R.; Close, L.M.; Van Gorkom, K.; Long, J.D.; Hedglen, A.D.; Guyon, O.; Schatz, L.; Kautz, M.; Lumbres, J.; et al. The Visible Integral-field Spectrograph eXtreme (VIS-X): High-resolution spectroscopy with MagAO-X. arXiv 2022, arXiv:2208.02720. [Google Scholar]
  48. Van der Woerd, H.J.; Wernand, M.R. Hue-Angle Product for Low to Medium Spatial Resolution Optical Satellite Sensors. Remote Sens. 2018, 10, 180. [Google Scholar] [CrossRef]
  49. Wikipedia Contributors Sodium-Vapor Lamp. Available online: https://en.wikipedia.org/wiki/Sodium-vapor_lamp (accessed on 7 April 2024).
  50. Wikipedia Contributors Gas-Discharge Lamp. Available online: https://en.wikipedia.org/wiki/Gas-discharge_lamp (accessed on 7 April 2024).
  51. Xometry, T. Copper Vapor Laser: Definition, Importance, and How It Works. Available online: https://www.xometry.com/resources/sheet/copper-vapor-laser/#:~:text=A%20copper%20vapor%20laser%20is,temperatures%20required%20to%20vaporize%20copper%20lamp (accessed on 7 April 2024).
  52. Atomic Spectra. Available online: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/atspect.html (accessed on 7 April 2024).
  53. Wikipedia Contributors Mercury-Vapor Lamp. Available online: https://en.wikipedia.org/wiki/Mercury-vapor_lamp (accessed on 7 April 2024).
Figure 1. Process flow of the work.
Figure 2. Different vapour light discharge test cases: (i) sodium [49] (ii) neon [50] (iii) copper [51] (iv) helium [52] (v) mercury [53].
Figure 3. Different degrees of polynomial regression with respect to the training data.
Figure 4. Graph of the 6th-degree polynomial with respect to the training data.
Figure 5. Predicted vs. original wavelengths and error percentage for different vapours.
Table 1. Specifications of the spectrometer.
Technique: Atomic Absorption Spectroscopy (AAS)
Wavelength Range: 190 nm to 900 nm
Spectral Bandwidth: 0.1, 0.2, 0.4, 1.0, 2.0 nm (selectable)
Wavelength Accuracy: ±0.25 nm
Wavelength Repeatability: ±0.15 nm
Baseline Drift: ≤0.005 Abs/30 min
Background Correction: Deuterium lamp background correction
Atomisation Methods: Flame Atomiser, Graphite Furnace Atomiser
Heating Temperature Range (Graphite Furnace): Room temperature to 2650 °C
Lamp Type: Hollow Cathode Lamp (HCL)
Number of Lamp Positions: 8 (automatic turret)
Sample Types: Liquids, Solids
Precision (RSD): ≤3%
Software Compatibility: Windows Platform
Table 2. Code snippet of polynomial regression.
Step 1: X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)  # splitting the data into training and testing sets
Step 2: poly_reg = PolynomialFeatures(degree=6)  # creating polynomial features with the specified degree
Step 3: X_poly = poly_reg.fit_transform(X_train)  # transforming the training features into polynomial features
Step 4: pol_reg = LinearRegression()  # initialising the linear regression model
Step 5: pol_reg.fit(X_poly, y_train)  # fitting the polynomial features to the target variable
Table 3. Comparison of the dataset-based wavelength, i.e., the training data (Van der Woerd and Wernand [48]), and the ML-based wavelength, i.e., the polynomial regression.
Columns (repeated in three groups across the width of the table): Hue, Training Data [48] (nm), Polynomial Regression (nm)
0610613.1381561558.97162496493.99
1609611.0282560558.10163496493.70
2607609.0483560557.22164495493.42
3605607.1784559556.32165495493.16
4604605.4285558555.41166495492.90
5603603.7686557554.49167495492.67
6602602.2187557553.56168494492.44
7601600.7588556552.62169494492.23
8600599.3989554551.67170494492.03
9599598.1090553550.72171494491.84
10598596.9091552549.75172494491.66
11597595.7892552548.77173493491.49
12596594.7393552547.79174493491.33
13595593.7594551546.80175493491.18
14594592.8395550545.81176493491.04
15594591.9796549544.81177492490.91
16593591.1797548543.80178492490.78
17592590.4398547542.79179492490.66
18592589.7399546541.78180492490.54
19591589.08100545540.76181492490.43
20590588.47101544539.75182491490.33
21590587.90102542538.73183491490.23
22589587.37103541537.71184491490.13
23589586.88104540536.69185491490.03
24588586.41105538535.67186490489.93
25588585.97106537534.65187490489.84
26587585.56107536533.63188490489.74
27586585.17108534532.61189490489.64
28586584.81109533531.60190489489.54
29585584.46110531530.59191489489.44
30585584.12111530529.59192489489.33
31584583.80112528528.59193489489.22
32584583.50113526527.59194489489.10
33583583.20114525526.60195488488.97
34583582.91115523525.62196488488.83
35583582.63116522524.64197488488.69
36582582.35117521523.67198488488.53
37582582.07118519522.71199487488.37
38581581.79119518521.76200487488.19
39581581.52120517520.81201487488.00
40580581.24121516519.88202486487.79
41580580.96122515518.96203486487.57
42579580.67123514518.04204486487.33
43579580.38124513517.14205486487.07
44578580.08125512516.25206485486.79
45578579.78126511515.37207485486.50
46578579.46127510514.51208485486.18
47577579.14128510513.66209484485.84
48577578.80129509512.82210484485.47
49576578.45130508511.99211484485.08
50576578.09131508511.18212483484.67
51575577.72132507510.38213483484.23
52575577.34133506509.60214483483.76
53575576.93134506508.83215482483.25
54574576.52135505508.07216482482.72
55574576.09136505507.34217481482.16
56573575.64137504506.61218481481.56
57573575.18138504505.91219480480.93
58572574.70139503505.22220480480.26
59572574.20140503504.54221479479.56
60572573.69141503503.89222479478.82
61571573.16142502503.25223478478.04
62571572.61143502502.62224478477.21
63570572.04144501502.02225477476.35
64570571.46145501501.43226476475.44
65569570.86146501500.86227476474.49
66569570.24147500500.30228475473.49
67568569.61148500499.76229474472.45
68568568.95149500499.24230473471.36
69567568.28150499498.74231472470.22
70567567.60151499498.25232471469.03
71566566.89152499497.78233470467.79
72566566.17153498497.33234468466.49
73565565.43154498496.89235467465.15
74565564.68155498496.48236465463.74
75564563.91156498496.07237463462.29
76564563.12157497495.69238460460.77
77563562.32158497495.32239457459.20
78563561.51159497494.96240453457.57
79562560.68160496494.62
80561559.83161496494.30
Table 4. Predicted wavelength of different vapours.
Vapour | Predicted Wavelength (nm) | Original Wavelength (nm) | Error Percentage
Sodium | 588.73231 | 589 | 0.04%
Neon | 588.14 | 588.2 | 0.01%
Copper | 572.38412 | 578.2 | 1%
Mercury | 566.5212 | 546.074 | 3.7%
Helium | 587.97620 | 587.562 | 0.07%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Patra, A.; Kumari, K.; Barua, A.; Pradhan, S. Advancing Visible Spectroscopy through Integrated Machine Learning and Image Processing Techniques. Appl. Sci. 2024, 14, 4527. https://doi.org/10.3390/app14114527

AMA Style

Patra A, Kumari K, Barua A, Pradhan S. Advancing Visible Spectroscopy through Integrated Machine Learning and Image Processing Techniques. Applied Sciences. 2024; 14(11):4527. https://doi.org/10.3390/app14114527

Chicago/Turabian Style

Patra, Aman, Kanchan Kumari, Abhishek Barua, and Swastik Pradhan. 2024. "Advancing Visible Spectroscopy through Integrated Machine Learning and Image Processing Techniques" Applied Sciences 14, no. 11: 4527. https://doi.org/10.3390/app14114527

