<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/"
 xmlns:dc="http://purl.org/dc/elements/1.1/"
 xmlns:dcterms="http://purl.org/dc/terms/"
 xmlns:cc="http://web.resource.org/cc/"
 xmlns:prism="http://prismstandard.org/namespaces/basic/2.0/"
 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 xmlns:admin="http://webns.net/mvcb/"
 xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel rdf:about="https://www.mdpi.com/rss/journal/lights">
		<title>Lights</title>
		<description>Latest open access articles published in Lights at https://www.mdpi.com/journal/lights</description>
		<link>https://www.mdpi.com/journal/lights</link>
		<admin:generatorAgent rdf:resource="https://www.mdpi.com/journal/lights"/>
		<admin:errorReportsTo rdf:resource="mailto:support@mdpi.com"/>
		<dc:publisher>MDPI</dc:publisher>
		<dc:language>en</dc:language>
		<dc:rights>Creative Commons Attribution (CC-BY)</dc:rights>
		<prism:copyright>MDPI</prism:copyright>
		<prism:rightsAgent>support@mdpi.com</prism:rightsAgent>
		<image rdf:resource="https://pub.mdpi-res.com/img/design/mdpi-pub-logo.png?13cf3b5bd783e021&amp;1778581344"/>
		<items>
			<rdf:Seq>
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/2/2/3" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/2/1/2" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/2/1/1" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/1/1/5" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/1/1/4" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/1/1/3" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/1/1/2" />
				<rdf:li rdf:resource="https://www.mdpi.com/3042-7886/1/1/1" />
			</rdf:Seq>
		</items>
				<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/" />
	</channel>

        <item rdf:about="https://www.mdpi.com/3042-7886/2/2/3">

	<title>Lights, Vol. 2, Pages 3: Designing Reproducible Test Environments for rPPG: A System for Camera Sensor Response Validation</title>
	<link>https://www.mdpi.com/3042-7886/2/2/3</link>
	<description>Remote photoplethysmography (rPPG) enables non-contact vital sign measurements using standard smart device cameras, opening up the potential of scalable health applications on consumer smart devices. However, rPPG signal quality is highly sensitive to camera sensor characteristics and image processing pipelines, which can vary between devices. This variation limits reproducibility and generalisation of rPPG-based algorithms beyond specific hardware platforms. This work presents a reproducible test environment for the validation of the camera sensor response in the context of rPPG signals. A microcontroller-driven illumination system and mechanically constrained setup are used to generate controlled, repeatable optical signals. Two characterisation tests are introduced: a time domain morphology analysis and a frequency domain attenuation analysis. Pulse timing consistency, pulse waveform morphology and normalised frequency responses are compared to assess sensor similarity. This method is applied to selected consumer devices and demonstrates consistent camera response patterns under the controlled test conditions. By explicitly addressing validation of the camera sensor and image processing pipeline, this work supports the development of more robust and transferable rPPG-based vital sign applications across a wider range of consumer devices.</description>
	<pubDate>2026-03-25</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 2, Pages 3: Designing Reproducible Test Environments for rPPG: A System for Camera Sensor Response Validation</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/2/2/3">doi: 10.3390/lights2020003</a></p>
	<p>Authors:
		Lieke Dorine van Putten
		Ivan Veleslavov
		Ayman Ahmed
		Aristide Mathieu
		Simon Wegerif
		</p>
	<p>Remote photoplethysmography (rPPG) enables non-contact vital sign measurements using standard smart device cameras, opening up the potential of scalable health applications on consumer smart devices. However, rPPG signal quality is highly sensitive to camera sensor characteristics and image processing pipelines, which can vary between devices. This variation limits reproducibility and generalisation of rPPG-based algorithms beyond specific hardware platforms. This work presents a reproducible test environment for the validation of the camera sensor response in the context of rPPG signals. A microcontroller-driven illumination system and mechanically constrained setup are used to generate controlled, repeatable optical signals. Two characterisation tests are introduced: a time domain morphology analysis and a frequency domain attenuation analysis. Pulse timing consistency, pulse waveform morphology and normalised frequency responses are compared to assess sensor similarity. This method is applied to selected consumer devices and demonstrates consistent camera response patterns under the controlled test conditions. By explicitly addressing validation of the camera sensor and image processing pipeline, this work supports the development of more robust and transferable rPPG-based vital sign applications across a wider range of consumer devices.</p>
	]]></content:encoded>

	<dc:title>Designing Reproducible Test Environments for rPPG: A System for Camera Sensor Response Validation</dc:title>
			<dc:creator>Lieke Dorine van Putten</dc:creator>
			<dc:creator>Ivan Veleslavov</dc:creator>
			<dc:creator>Ayman Ahmed</dc:creator>
			<dc:creator>Aristide Mathieu</dc:creator>
			<dc:creator>Simon Wegerif</dc:creator>
		<dc:identifier>doi: 10.3390/lights2020003</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2026-03-25</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2026-03-25</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>3</prism:startingPage>
		<prism:doi>10.3390/lights2020003</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/2/2/3</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/2/1/2">

	<title>Lights, Vol. 2, Pages 2: Photobiomodulation Applications in Clinical Veterinary Surgery: Current Status and Future Perspectives</title>
	<link>https://www.mdpi.com/3042-7886/2/1/2</link>
	<description>Photobiomodulation (PBM) has emerged as a noninvasive therapeutic tool with promising clinical applications in veterinary clinical surgery. Its mechanism of action is based on the stimulation of cellular processes through low-intensity light, promoting adenosine triphosphate production, inflammatory modulation, and tissue regeneration. This narrative review examines the current state of knowledge on the use of PBM in veterinary surgical contexts, with an emphasis on its clinical application in wound healing, postoperative pain control, and functional recovery. The physiological foundations of the technique, the main technical parameters that determine its effectiveness (wavelength, dose, frequency, and mode of application), and the available clinical evidence from different specialties such as soft tissue surgery, orthopedics, dentistry, and neurosurgery are analyzed. Current limitations, such as the lack of standardized protocols and their limited inclusion in clinical guidelines, are also addressed, as are future opportunities related to treatment personalization, the development of specific veterinary devices, and integration with emerging technologies. PBM represents a safe and effective adjuvant therapeutic strategy with the potential to become an integral part of veterinary postoperative management.</description>
	<pubDate>2026-02-03</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 2, Pages 2: Photobiomodulation Applications in Clinical Veterinary Surgery: Current Status and Future Perspectives</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/2/1/2">doi: 10.3390/lights2010002</a></p>
	<p>Authors:
		Mario García-González
		Francisco Vidal-Negreira
		Antonio González-Cantalapiedra
		</p>
	<p>Photobiomodulation (PBM) has emerged as a noninvasive therapeutic tool with promising clinical applications in veterinary clinical surgery. Its mechanism of action is based on the stimulation of cellular processes through low-intensity light, promoting adenosine triphosphate production, inflammatory modulation, and tissue regeneration. This narrative review examines the current state of knowledge on the use of PBM in veterinary surgical contexts, with an emphasis on its clinical application in wound healing, postoperative pain control, and functional recovery. The physiological foundations of the technique, the main technical parameters that determine its effectiveness (wavelength, dose, frequency, and mode of application), and the available clinical evidence from different specialties such as soft tissue surgery, orthopedics, dentistry, and neurosurgery are analyzed. Current limitations, such as the lack of standardized protocols and their limited inclusion in clinical guidelines, are also addressed, as are future opportunities related to treatment personalization, the development of specific veterinary devices, and integration with emerging technologies. PBM represents a safe and effective adjuvant therapeutic strategy with the potential to become an integral part of veterinary postoperative management.</p>
	]]></content:encoded>

	<dc:title>Photobiomodulation Applications in Clinical Veterinary Surgery: Current Status and Future Perspectives</dc:title>
			<dc:creator>Mario García-González</dc:creator>
			<dc:creator>Francisco Vidal-Negreira</dc:creator>
			<dc:creator>Antonio González-Cantalapiedra</dc:creator>
		<dc:identifier>doi: 10.3390/lights2010002</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2026-02-03</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2026-02-03</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>2</prism:startingPage>
		<prism:doi>10.3390/lights2010002</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/2/1/2</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/2/1/1">

	<title>Lights, Vol. 2, Pages 1: DeepFluoNet: A Novel Deep Learning Framework for Enhanced Analysis of Fluorescence Microscopy Data</title>
	<link>https://www.mdpi.com/3042-7886/2/1/1</link>
	<description>Fluorescence microscopy is a cornerstone technique in biological research, offering unparalleled insights into cellular and subcellular structures. However, inherent limitations such as photobleaching, phototoxicity, and low signal-to-noise ratios (SNR) often hinder its full potential. This paper introduces DeepFluoNet, a novel deep learning framework designed to significantly enhance the analysis of fluorescence microscopy data. DeepFluoNet leverages a sophisticated convolutional neural network architecture, meticulously optimized for denoising, segmentation, and classification tasks in fluorescence images. DeepFluoNet achieved a 98.5% accuracy in cell nucleus classification, a 95.2% F1-score in mitochondrial segmentation, and a 25% improvement in SNR for low-light images, surpassing state-of-the-art methods by an average of 7.3% in overall performance metrics. Furthermore, the inference time of DeepFluoNet is optimized to be 0.05 s per image, making it suitable for high-throughput analysis. This research bridges critical gaps in existing methodologies by providing a robust, efficient, and highly accurate solution for fluorescence microscopy data analysis, paving the way for more precise biological discoveries.</description>
	<pubDate>2026-01-04</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 2, Pages 1: DeepFluoNet: A Novel Deep Learning Framework for Enhanced Analysis of Fluorescence Microscopy Data</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/2/1/1">doi: 10.3390/lights2010001</a></p>
	<p>Authors:
		Fatema A. Albalooshi
		M. R. Qader
		Mazen Ali
		Yasser Ismail
		</p>
	<p>Fluorescence microscopy is a cornerstone technique in biological research, offering unparalleled insights into cellular and subcellular structures. However, inherent limitations such as photobleaching, phototoxicity, and low signal-to-noise ratios (SNR) often hinder its full potential. This paper introduces DeepFluoNet, a novel deep learning framework designed to significantly enhance the analysis of fluorescence microscopy data. DeepFluoNet leverages a sophisticated convolutional neural network architecture, meticulously optimized for denoising, segmentation, and classification tasks in fluorescence images. DeepFluoNet achieved a 98.5% accuracy in cell nucleus classification, a 95.2% F1-score in mitochondrial segmentation, and a 25% improvement in SNR for low-light images, surpassing state-of-the-art methods by an average of 7.3% in overall performance metrics. Furthermore, the inference time of DeepFluoNet is optimized to be 0.05 s per image, making it suitable for high-throughput analysis. This research bridges critical gaps in existing methodologies by providing a robust, efficient, and highly accurate solution for fluorescence microscopy data analysis, paving the way for more precise biological discoveries.</p>
	]]></content:encoded>

	<dc:title>DeepFluoNet: A Novel Deep Learning Framework for Enhanced Analysis of Fluorescence Microscopy Data</dc:title>
			<dc:creator>Fatema A. Albalooshi</dc:creator>
			<dc:creator>M. R. Qader</dc:creator>
			<dc:creator>Mazen Ali</dc:creator>
			<dc:creator>Yasser Ismail</dc:creator>
		<dc:identifier>doi: 10.3390/lights2010001</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2026-01-04</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2026-01-04</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>1</prism:startingPage>
		<prism:doi>10.3390/lights2010001</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/2/1/1</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/1/1/5">

	<title>Lights, Vol. 1, Pages 5: Advancing Light Research: From Quantum Light to Sustainable Innovation</title>
	<link>https://www.mdpi.com/3042-7886/1/1/5</link>
	<description>With advances in photonics, light research is at a turning point in its history, as technological advances and breakthroughs shift from fundamental research to transformative applications [...]</description>
	<pubDate>2025-12-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 1, Pages 5: Advancing Light Research: From Quantum Light to Sustainable Innovation</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/1/1/5">doi: 10.3390/lights1010005</a></p>
	<p>Authors:
		Roberto Morandotti
		</p>
	<p>With advances in photonics, light research is at a turning point in its history, as technological advances and breakthroughs shift from fundamental research to transformative applications [...]</p>
	]]></content:encoded>

	<dc:title>Advancing Light Research: From Quantum Light to Sustainable Innovation</dc:title>
			<dc:creator>Roberto Morandotti</dc:creator>
		<dc:identifier>doi: 10.3390/lights1010005</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2025-12-18</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2025-12-18</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Editorial</prism:section>
	<prism:startingPage>5</prism:startingPage>
		<prism:doi>10.3390/lights1010005</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/1/1/5</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/1/1/4">

	<title>Lights, Vol. 1, Pages 4: Spatiotemporal Dynamics of Anthropogenic Night Light in China</title>
	<link>https://www.mdpi.com/3042-7886/1/1/4</link>
	<description>Anthropogenic night light (ANL) provides a unique observable for the spatially explicit mapping of human-modified landscapes in the form of lighted infrastructure. Since 2013, the Visible Infrared Imaging Radiometer Suite (VIIRS) Day Night Band (DNB) on the Suomi NPP satellite has provided more than a decade of near-daily observations of anthropogenic night light. The objective of this study is to quantify changes in ANL in developed eastern China post-2013 using VIIRS DNB monthly mean brightness composites. Specifically, to constrain sub-annual and interannual changes in night light brightness to distinguish between apparent and actual change of ANL sources, and then conduct a spatiotemporal analysis of observed changes to identify areas of human activity, urban development and rural electrification. This analysis is based on a combination of time-sequential bitemporal brightness distributions and quantification of the spatiotemporal evolution of night light using Empirical Orthogonal Function (EOF) analysis. Bitemporal brightness distributions show that bright (&gt;~1 nW/cm2/sr) ANL is heteroskedastic, with temporal variability diminishing with increasing brightness. Hence, brighter lights are more temporally stable. In contrast, dimmer (&lt;~1 nW/cm2/sr) ANL is much more variable on monthly time scales. The same patterns of heteroskedasticity and variability of the lower tail of the brightness distribution are observed in year-to-year distributions. However, year-to-year brightness increases vary somewhat among different years. While bivariate distributions quantify aggregate changes on both subannual and interannual time scales, spatiotemporal analysis quantifies spatial variations in the year-to-year temporal evolution of ANL. 
The spatial distribution of brightening (and, much less commonly, dimming) revealed by the EOF analysis indicates that most of the brightening since 2013 has occurred at the peripheries of large cities and throughout the networks of smaller settlements on the North China Plain, the Yangtze River Valley, and the Sichuan Basin. A particularly unusual pattern of sequential brightening and dimming is observed on the Loess Plateau north of Xi&#8217;an, where extensive terrace construction has occurred. All aspects of this analysis highlight the difference between apparent and actual changes in night light sources. This is important because many users of VIIRS night light attribute all observed changes in imaged night light to actual changes in anthropogenic light sources&#8212;without consideration of low luminance variability related to the imaging process itself.</description>
	<pubDate>2025-11-21</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 1, Pages 4: Spatiotemporal Dynamics of Anthropogenic Night Light in China</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/1/1/4">doi: 10.3390/lights1010004</a></p>
	<p>Authors:
		Christopher Small
		</p>
	<p>Anthropogenic night light (ANL) provides a unique observable for the spatially explicit mapping of human-modified landscapes in the form of lighted infrastructure. Since 2013, the Visible Infrared Imaging Radiometer Suite (VIIRS) Day Night Band (DNB) on the Suomi NPP satellite has provided more than a decade of near-daily observations of anthropogenic night light. The objective of this study is to quantify changes in ANL in developed eastern China post-2013 using VIIRS DNB monthly mean brightness composites. Specifically, to constrain sub-annual and interannual changes in night light brightness to distinguish between apparent and actual change of ANL sources, and then conduct a spatiotemporal analysis of observed changes to identify areas of human activity, urban development and rural electrification. This analysis is based on a combination of time-sequential bitemporal brightness distributions and quantification of the spatiotemporal evolution of night light using Empirical Orthogonal Function (EOF) analysis. Bitemporal brightness distributions show that bright (&gt;~1 nW/cm2/sr) ANL is heteroskedastic, with temporal variability diminishing with increasing brightness. Hence, brighter lights are more temporally stable. In contrast, dimmer (&lt;~1 nW/cm2/sr) ANL is much more variable on monthly time scales. The same patterns of heteroskedasticity and variability of the lower tail of the brightness distribution are observed in year-to-year distributions. However, year-to-year brightness increases vary somewhat among different years. While bivariate distributions quantify aggregate changes on both subannual and interannual time scales, spatiotemporal analysis quantifies spatial variations in the year-to-year temporal evolution of ANL. 
The spatial distribution of brightening (and, much less commonly, dimming) revealed by the EOF analysis indicates that most of the brightening since 2013 has occurred at the peripheries of large cities and throughout the networks of smaller settlements on the North China Plain, the Yangtze River Valley, and the Sichuan Basin. A particularly unusual pattern of sequential brightening and dimming is observed on the Loess Plateau north of Xi&rsquo;an, where extensive terrace construction has occurred. All aspects of this analysis highlight the difference between apparent and actual changes in night light sources. This is important because many users of VIIRS night light attribute all observed changes in imaged night light to actual changes in anthropogenic light sources&mdash;without consideration of low luminance variability related to the imaging process itself.</p>
	]]></content:encoded>

	<dc:title>Spatiotemporal Dynamics of Anthropogenic Night Light in China</dc:title>
			<dc:creator>Christopher Small</dc:creator>
		<dc:identifier>doi: 10.3390/lights1010004</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2025-11-21</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2025-11-21</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>4</prism:startingPage>
		<prism:doi>10.3390/lights1010004</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/1/1/4</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/1/1/3">

	<title>Lights, Vol. 1, Pages 3: Optical Diagnostics Applications to Laboratory Astrophysical Research</title>
	<link>https://www.mdpi.com/3042-7886/1/1/3</link>
	<description>Laboratory astrophysics is an emerging interdisciplinary field bridging high-energy-density plasma physics and astrophysics. Optical diagnostic techniques offer high spatiotemporal resolution and the unique capability for simultaneous multi-field measurements. These attributes make them indispensable for deciphering extreme plasma dynamics in laboratory astrophysics. This review systematically elaborates on the physical principles and inversion methodologies of key optical diagnostics, including Nomarski interferometry, shadowgraphy, and Faraday rotation. Highlighting frontier progress by our team, we showcase the application of these techniques in analyzing jet collimation mechanisms, turbulent magnetic reconnection, collisionless shocks, and particle acceleration. Future trajectories for optical diagnostic development are also discussed.</description>
	<pubDate>2025-10-31</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 1, Pages 3: Optical Diagnostics Applications to Laboratory Astrophysical Research</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/1/1/3">doi: 10.3390/lights1010003</a></p>
	<p>Authors:
		Wei Sun
		Dawei Yuan
		Zhe Zhang
		Jiayong Zhong
		Gang Zhao
		</p>
	<p>Laboratory astrophysics is an emerging interdisciplinary field bridging high-energy-density plasma physics and astrophysics. Optical diagnostic techniques offer high spatiotemporal resolution and the unique capability for simultaneous multi-field measurements. These attributes make them indispensable for deciphering extreme plasma dynamics in laboratory astrophysics. This review systematically elaborates on the physical principles and inversion methodologies of key optical diagnostics, including Nomarski interferometry, shadowgraphy, and Faraday rotation. Highlighting frontier progress by our team, we showcase the application of these techniques in analyzing jet collimation mechanisms, turbulent magnetic reconnection, collisionless shocks, and particle acceleration. Future trajectories for optical diagnostic development are also discussed.</p>
	]]></content:encoded>

	<dc:title>Optical Diagnostics Applications to Laboratory Astrophysical Research</dc:title>
			<dc:creator>Wei Sun</dc:creator>
			<dc:creator>Dawei Yuan</dc:creator>
			<dc:creator>Zhe Zhang</dc:creator>
			<dc:creator>Jiayong Zhong</dc:creator>
			<dc:creator>Gang Zhao</dc:creator>
		<dc:identifier>doi: 10.3390/lights1010003</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2025-10-31</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2025-10-31</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>3</prism:startingPage>
		<prism:doi>10.3390/lights1010003</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/1/1/3</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/1/1/2">

	<title>Lights, Vol. 1, Pages 2: Noise Suppression Strategies in Computer Holography: Methods and Techniques</title>
	<link>https://www.mdpi.com/3042-7886/1/1/2</link>
	<description>Computer holography enables precise modulation of optical fields, facilitating advanced applications such as optical manipulation, micro-/nanofabrication, and high-resolution three-dimensional displays. However, noise remains one of the most critical challenges, as it significantly reduces the accuracy and visual quality of the reconstructed optical fields. Over the past decades, substantial research has been devoted to identifying noise sources and developing a wide range of suppression techniques. In this article, we present a systematic analysis of the origins and characteristics of noise in computer holography, structured based on computational methods, device characteristics, and system configurations. The representative suppression strategies aimed at enhancing holographic reconstruction quality are investigated. This study aims to deepen the understanding of noise characteristics and provide valuable insights and guidance for future developments in hologram optimization, system integration, and high-performance holographic reconstruction techniques.</description>
	<pubDate>2025-09-11</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 1, Pages 2: Noise Suppression Strategies in Computer Holography: Methods and Techniques</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/1/1/2">doi: 10.3390/lights1010002</a></p>
	<p>Authors:
		Songzhi Tian
		Zijia Feng
		Hao Zhang
		Qiaofeng Tan
		Liqun Sun
		</p>
	<p>Computer holography enables precise modulation of optical fields, facilitating advanced applications such as optical manipulation, micro-/nanofabrication, and high-resolution three-dimensional displays. However, noise remains one of the most critical challenges, as it significantly reduces the accuracy and visual quality of the reconstructed optical fields. Over the past decades, substantial research has been devoted to identifying noise sources and developing a wide range of suppression techniques. In this article, we present a systematic analysis of the origins and characteristics of noise in computer holography, structured based on computational methods, device characteristics, and system configurations. The representative suppression strategies aimed at enhancing holographic reconstruction quality are investigated. This study aims to deepen the understanding of noise characteristics and provide valuable insights and guidance for future developments in hologram optimization, system integration, and high-performance holographic reconstruction techniques.</p>
	]]></content:encoded>

	<dc:title>Noise Suppression Strategies in Computer Holography: Methods and Techniques</dc:title>
			<dc:creator>Songzhi Tian</dc:creator>
			<dc:creator>Zijia Feng</dc:creator>
			<dc:creator>Hao Zhang</dc:creator>
			<dc:creator>Qiaofeng Tan</dc:creator>
			<dc:creator>Liqun Sun</dc:creator>
		<dc:identifier>doi: 10.3390/lights1010002</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2025-09-11</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2025-09-11</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>2</prism:startingPage>
		<prism:doi>10.3390/lights1010002</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/1/1/2</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/3042-7886/1/1/1">

	<title>Lights, Vol. 1, Pages 1: Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review</title>
	<link>https://www.mdpi.com/3042-7886/1/1/1</link>
	<description>Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise the reconstruction quality and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to gain a deeper understanding of the interplay between lighting and reconstruction and provides research directions for the future that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.</description>
	<pubDate>2025-07-14</pubDate>

	<content:encoded><![CDATA[
	<p><b>Lights, Vol. 1, Pages 1: Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review</b></p>
	<p>Lights <a href="https://www.mdpi.com/3042-7886/1/1/1">doi: 10.3390/lights1010001</a></p>
	<p>Authors:
		Dimitar Rangelov
		Sierd Waanders
		Kars Waanders
		Maurice van Keulen
		Radoslav Miltchev
		</p>
	<p>Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise the reconstruction quality and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to gain a deeper understanding of the interplay between lighting and reconstruction and provides research directions for the future that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.</p>
	]]></content:encoded>

	<dc:title>Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review</dc:title>
			<dc:creator>Dimitar Rangelov</dc:creator>
			<dc:creator>Sierd Waanders</dc:creator>
			<dc:creator>Kars Waanders</dc:creator>
			<dc:creator>Maurice van Keulen</dc:creator>
			<dc:creator>Radoslav Miltchev</dc:creator>
		<dc:identifier>doi: 10.3390/lights1010001</dc:identifier>
	<dc:source>Lights</dc:source>
	<dc:date>2025-07-14</dc:date>

	<prism:publicationName>Lights</prism:publicationName>
	<prism:publicationDate>2025-07-14</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>1</prism:startingPage>
		<prism:doi>10.3390/lights1010001</prism:doi>
	<prism:url>https://www.mdpi.com/3042-7886/1/1/1</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
    
<cc:License rdf:about="https://creativecommons.org/licenses/by/4.0/">
	<cc:permits rdf:resource="https://creativecommons.org/ns#Reproduction" />
	<cc:permits rdf:resource="https://creativecommons.org/ns#Distribution" />
	<cc:permits rdf:resource="https://creativecommons.org/ns#DerivativeWorks" />
</cc:License>

</rdf:RDF>
