Search Results (26)

Search Parameters:
Keywords = document image registration

25 pages, 9298 KB  
Article
Integrated Construction-Site Hazard Detection System Using AI Algorithms in Support of Sustainable Occupational Safety Management
by Zuzanna Woźniak, Krzysztof Trybuszewski, Tomasz Nowobilski, Marta Stolarz and Filip Šmalec
Sustainability 2025, 17(23), 10584; https://doi.org/10.3390/su172310584 - 26 Nov 2025
Viewed by 1490
Abstract
Despite preventive measures, the construction industry continues to exhibit high accident rates. In response, a visual detection system was developed to support safety management on construction sites and promote sustainable working environments. The solution integrates the YOLOv8 algorithm with asynchronous video processing, incident registration, an open API, and a web-based interface. The system detects the absence of safety helmets (NHD) and worker falls (FD). Its low hardware requirements make it suitable for small and medium-sized construction enterprises, contributing to resource efficiency and digital transformation in line with sustainable development goals. This study advances practice by providing an integrated, low-resource solution that unites multi-hazard detection, event documentation, and system interoperability, addressing a key gap in existing research and implementations. The contribution includes an operational architecture proven to run in real time, addressing a gap between model-centred research and deployable OHS applications. The system was validated using two independent test datasets, each comprising 100 images: one for NHD and one for FD. For NHD, the system achieved a precision of 0.93, an accuracy of 0.88, and an F1-score of 0.79. For FD, it achieved a precision of 1.00, though with a limited recall of 0.45. The results demonstrate the system’s potential for sustainable construction site safety monitoring. Full article
(This article belongs to the Section Sustainable Engineering and Science)
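The detection pipeline above builds on YOLOv8. Its code is not reproduced in the abstract, but any detector in this family relies on greedy non-maximum suppression (NMS) to discard duplicate, overlapping boxes around the same helmet or fallen worker. A minimal NumPy sketch (function names and the threshold are ours, not the authors'):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]       # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return keep
```

Greedy NMS keeps the highest-scoring box and drops neighbours whose IoU exceeds the threshold, repeating until no candidates remain.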

22 pages, 114644 KB  
Article
Bringing Light into the Darkness: Integrating Light Painting and 3D Recording for the Documentation of the Hypogean Tomba dell’Orco, Tarquinia
by Matteo Lombardi, Maria Felicia Rega, Vincenzo Bellelli, Riccardo Frontoni, Maria Cristina Tomassetti and Daniele Ferdani
Appl. Sci. 2025, 15(23), 12463; https://doi.org/10.3390/app152312463 - 24 Nov 2025
Viewed by 948
Abstract
The three-dimensional documentation of hypogean structures poses significant methodological challenges due to the absence of natural light, confined spaces, and the presence of fragile painted surfaces. This study presents an integrated workflow for the survey of the Tomba dell’Orco (Tarquinia), combining terrestrial laser scanning, photogrammetry, and the light painting technique. Borrowed from photographic practice, light painting was employed as a dynamic lighting strategy during photogrammetric acquisition to overcome issues of uneven illumination and harsh shadows typical of underground environments. By moving handheld LED sources throughout long-exposure shots, operators produced evenly illuminated images suitable for feature extraction and high-resolution texture generation. These image datasets were subsequently integrated with laser scanning point clouds through a structured pipeline encompassing registration, optimization, and texture reprojection, culminating in web dissemination via the ATON framework. The methodological focus demonstrates that light painting provides a scalable and replicable solution for documenting complex hypogean contexts, improving the photometric quality and surface readability of 3D models while reducing acquisition time compared to static lighting setups. The results highlight the potential of dynamic illumination as an operational enhancement for 3D recording workflows in low-light cultural heritage environments. Full article

27 pages, 24458 KB  
Article
Application of Structure from Motion Techniques Using Historical Aerial Images, Orthomosaics, and Aerial LiDAR Point Cloud Datasets for the Investigation of Debris Flow Source Areas
by Bianca Voglino, Danilo Godone, Marco Baldo, Barbara Bono, Fabio Luino, Riccardo Bonomelli, Paolo Colosio, Luca Beretta, Luca Albertelli and Laura Turconi
Remote Sens. 2025, 17(22), 3658; https://doi.org/10.3390/rs17223658 - 7 Nov 2025
Viewed by 1390
Abstract
Detecting topographic change in mountainous areas using historical aerial imagery is challenging due to complex terrain and variable data quality. This study evaluates the potential of Structure from Motion (SfM) for deriving 3D information from archival photograms in the Rabbia basin (Central Italian Alps), a catchment with a well-documented history of debris flow activity. The aim is to assess the impact of input configurations and photogrammetric processing strategies on the quality and interpretability of 3D reconstructions from historical aerial imagery, as a basis for further geomorphological analyses. A 1999 aerial dataset was processed via SfM workflow to generate a point cloud and orthomosaic, and then co-registered with a 2021 LiDAR-derived dataset. Multi-temporal analysis was conducted using point cloud distance computations and visual interpretation of orthomosaics. Additional aerial images spanning nearly 80 years expanded the temporal scale of the analysis, providing valuable retrospective insight into long-term terrain evolution. The results, although considered semi-quantitative due to data quality limitations, are consistent with geomorphological trends in the area. The study confirms that historical SfM-derived products, when supported by robust co-registration and quality checks, can contribute to sediment dynamics and hazard evaluation in alpine environments, though result interpretation should remain cautious due to dataset-specific uncertainties. Full article
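The multi-temporal analysis above rests on point cloud distance computations. In its simplest cloud-to-cloud (C2C) form this is a nearest-neighbour distance per point; the sketch below is a brute-force illustration, not the study's actual tooling (dedicated methods such as M3C2 handle surface roughness and registration error far better):

```python
import numpy as np

def cloud_to_cloud_distance(source, target):
    """For each 3D point in `source`, the Euclidean distance to its
    nearest neighbour in `target` (brute force, O(N*M) memory/time)."""
    # (N, 1, 3) - (1, M, 3) -> (N, M, 3) pairwise differences
    diff = source[:, None, :] - target[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return dists.min(axis=1)
```

For clouds of realistic size a k-d tree (e.g. `scipy.spatial.cKDTree`) replaces the brute-force pairwise matrix.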

25 pages, 34678 KB  
Article
Historical Coast Snaps: Using Centennial Imagery to Track Shoreline Change
by Fátima Valverde, Rui Taborda, Amy E. East and Cristina Ponte Lira
Remote Sens. 2025, 17(8), 1326; https://doi.org/10.3390/rs17081326 - 8 Apr 2025
Cited by 1 | Viewed by 2144
Abstract
Understanding long-term coastal evolution requires historical data, yet accessing reliable information becomes increasingly challenging for extended periods. While vertical aerial imagery has been extensively used in coastal studies since the mid-20th century, and satellite-derived shoreline measurements are now revolutionizing shoreline change studies, ground-based images, such as historical photographs and picture postcards, provide an alternative source of shoreline data for earlier periods when other datasets are scarce. Despite their frequent use for documenting qualitative morphological changes, these valuable historical data sources have rarely supported quantitative assessments of coastal evolution. This study demonstrates the potential of historical ground-oblique images for quantitatively assessing shoreline position and long-term change. Using Conceição-Duquesa Beach (Cascais, Portugal) as a case study, we analyze shoreline evolution over 92 years by applying a novel methodology to historical photographs and postcards. The approach combines image registration, shoreline detection, coordinate transformation, and rectification while accounting for positional uncertainty. Results reveal a significant counterclockwise rotation of the shoreline between the 20th and 21st centuries, exceeding estimated uncertainty thresholds. This study highlights the feasibility of using historical ground-based imagery to reconstruct shoreline positions and quantify long-term coastal change. The methodology is straightforward, adaptable, and offers a promising avenue for extending the temporal range of shoreline datasets, advancing our understanding of coastal evolution. Full article
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones II)
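The rectification step in the methodology above amounts to estimating a planar homography from control points and mapping detected shoreline pixels through it. A hedged sketch of the classic Direct Linear Transform (the study's exact pipeline and uncertainty handling are not reproduced here):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography mapping src -> dst
    from >= 4 point correspondences given as (x, y) pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)            # null vector = flattened H
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map one (x, y) point through homography H."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

In practice the control points would be stable landmarks visible in both the historical photograph and a modern georeferenced image.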

19 pages, 6419 KB  
Article
Efficacy of Tocopherol vs. Chlorhexidine in the Management of Oral Biopsy Site: A Randomized Clinical Trial
by Arianna Baldin, Clara Nucibella, Claudia Manera and Christian Bacci
J. Clin. Med. 2025, 14(3), 788; https://doi.org/10.3390/jcm14030788 - 25 Jan 2025
Viewed by 2357
Abstract
Background/Objectives: Chlorhexidine digluconate (CHX) is widely regarded as the gold standard for oral mucosa antiseptic treatments but has been associated with delayed healing, scar formation, microbiome alterations, and fibroblast toxicity. Tocopherol, with its ability to accelerate tissue healing and minimal side effects, has emerged as a potential alternative. This randomized clinical trial aimed to compare the efficacy of topical tocopherol acetate and 0.2% chlorhexidine in managing postoperative pain and wound healing following oral cavity biopsies. Methods: Seventy-seven patients undergoing oral biopsies were divided into two groups: the test group (tocopherol acetate) and the control group (0.2% chlorhexidine). Pain was assessed using VAS (Visual Analogue Scale) scores on days 1 and 6 postoperatively, and wound healing was evaluated through measurements of the biopsy site’s height and width from standardized photographs analyzed with ImageJ. Painkiller use was also documented. The study followed CONSORT (Consolidated Standards of Reporting Trials) guidelines, with ethical approval from the Padua Ethics Committee and registration on ISRCTN. Results: No significant differences were found between the groups in VAS scores, wound dimensions, or painkiller use (p > 0.05). However, significant pain reduction within each group was observed (p < 0.0001). Conclusions: Tocopherol acetate showed comparable efficacy to chlorhexidine, suggesting it could be a viable alternative for postoperative care in oral surgery. Full article
(This article belongs to the Special Issue Current Challenges in Oral Surgery)

20 pages, 9857 KB  
Article
Data Science for Health Image Alignment: A User-Friendly Open-Source ImageJ/Fiji Plugin for Aligning Multimodality/Immunohistochemistry/Immunofluorescence 2D Microscopy Images
by Filippo Piccinini, Marcella Tazzari, Maria Maddalena Tumedei, Mariachiara Stellato, Daniel Remondini, Enrico Giampieri, Giovanni Martinelli, Gastone Castellani and Antonella Carbonaro
Sensors 2024, 24(2), 451; https://doi.org/10.3390/s24020451 - 11 Jan 2024
Cited by 6 | Viewed by 3471
Abstract
Most of the time, the deep analysis of a biological sample requires the acquisition of images at different time points, using different modalities and/or different stainings. This information gives morphological, functional, and physiological insights, but the acquired images must be aligned to be able to proceed with the co-localisation analysis. Practically speaking, according to Aristotle’s principle, “The whole is greater than the sum of its parts”, multi-modal image registration is a challenging task that involves fusing complementary signals. In the past few years, several methods for image registration have been described in the literature, but unfortunately, no single method works for all applications. In addition, there is currently no user-friendly solution for aligning images that does not require any computer skills. In this work, DS4H Image Alignment (DS4H-IA), an open-source ImageJ/Fiji plugin for aligning multimodality, immunohistochemistry (IHC), and/or immunofluorescence (IF) 2D microscopy images, designed with the goal of being extremely easy to use, is described. All of the available solutions for aligning 2D microscopy images have also been reviewed. The DS4H-IA source code; standalone applications for MAC, Linux, and Windows; video tutorials; manual documentation; and sample datasets are publicly available. Full article
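The plugin's alignment internals are not described in the abstract, but a standard building block for registering two 2D microscopy images is phase correlation, which recovers a translation from the cross-power spectrum. A NumPy sketch limited to integer shifts (DS4H-IA itself offers richer, landmark-based transforms):

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) shift to apply to `moving`
    so that it aligns with `ref`, via the cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

The sharp peak of the inverse-transformed phase term sits at the translation between the two images; sub-pixel variants interpolate around that peak.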

24 pages, 26594 KB  
Article
Unfolding WWII Heritages with Airborne and Ground-Based Laser Scanning
by Kathleen Fei-Ching Sit, Chun-Ho Pun, Wallace W. L. Lai, Dexter Kin-Wang Chung and Chi-Man Kwong
Heritage 2023, 6(9), 6189-6212; https://doi.org/10.3390/heritage6090325 - 4 Sep 2023
Cited by 2 | Viewed by 3817
Abstract
Finding a pin in the ocean is all but impossible; likewise, painstaking searches among historical documents and eyewitness accounts often end up with more unknowns and questions. We developed a three-tier geo-spatial tech-based approach to discover and unfold the lost WWII heritage features in the countryside of Hong Kong that can be applied in other contexts. It started with an analysis of historical texts, old maps, aerial photos, and military plans in the historical geographic information system (HGIS) Project ‘The Battle of Hong Kong 1941: a Spatial History Project’ by Hong Kong Baptist University to define regions/points of interest. Then, 3D point clouds extracted from the government’s airborne LiDAR were migrated to form a digital terrain model (DTM) for geo-registration in GIS. All point clouds were geo-referenced in the HK1980 Grid via accurate positioning using the global navigation satellite system with real-time kinematics (GNSS-RTK). A red relief image map (RRIM) was then used to image the tunnels, trenches, and pillboxes in great detail by calculating the topographical openness. The last tier of the tech work was field work involving ground validation of the findings from the previous two tiers and on-site imaging using terrestrial LiDAR. The ground 3D LiDAR model of each heritage feature was then built and integrated into the DTM. The three-tier tech-based approach developed in this paper is standardised and can be adopted to streamline the workflow of historical and archaeological studies not only in Hong Kong but also elsewhere. Full article
(This article belongs to the Special Issue Photogrammetry, Remote Sensing and GIS for Built Heritage)

18 pages, 8605 KB  
Article
Railway Bridge Geometry Assessment Supported by Cutting-Edge Reality Capture Technologies and 3D As-Designed Models
by Rafael Cabral, Rogério Oliveira, Diogo Ribeiro, Anna M. Rakoczy, Ricardo Santos, Miguel Azenha and José Correia
Infrastructures 2023, 8(7), 114; https://doi.org/10.3390/infrastructures8070114 - 20 Jul 2023
Cited by 19 | Viewed by 3742 | Correction
Abstract
Documentation of structural visual inspections is necessary for monitoring, maintenance, and decisions about rehabilitation and structural strengthening. In recent times, close-range photogrammetry (CRP) based on unmanned aerial vehicles (UAVs) and terrestrial laser scanners (TLS) have greatly improved the survey phase. These technologies can be used independently or in combination to provide a 3D as-is image-based model of the railway bridge. In this study, TLS captured the side and bottom sections of the deck, while the CRP-based UAV captured the side and top sections of the deck and the track. The combination of post-processing techniques enabled the merging of the TLS and CRP models, resulting in an accurate 3D representation of the complete railway bridge deck. Additionally, a 3D as-designed model was developed based on the design plans of the bridge. The as-designed model is compared to the as-is model through a 3D digital registration. The comparison allows the detection of dimensional deviations and surface alignments. The results reveal slight deviations in the structural dimensions, with a global average value of 9 mm. Full article

25 pages, 3690 KB  
Review
Optical Coherence Tomography and Optical Coherence Tomography Angiography in Pediatric Retinal Diseases
by Chung-Ting Wang, Yin-Hsi Chang, Gavin S. W. Tan, Shu Yen Lee, R. V. Paul Chan, Wei-Chi Wu and Andrew S. H. Tsai
Diagnostics 2023, 13(8), 1461; https://doi.org/10.3390/diagnostics13081461 - 18 Apr 2023
Cited by 5 | Viewed by 5730
Abstract
Indirect ophthalmoscopy and handheld retinal imaging are the most common and traditional modalities for the evaluation and documentation of the pediatric fundus, especially for pre-verbal children. Optical coherence tomography (OCT) allows for in vivo visualization that resembles histology, and optical coherence tomography angiography (OCTA) allows for non-invasive depth-resolved imaging of the retinal vasculature. Both OCT and OCTA have been extensively used and studied in adults, but not in children. The advent of prototype handheld OCT and OCTA has allowed for detailed imaging in younger infants and even neonates in the neonatal intensive care unit with retinopathy of prematurity (ROP). In this review, we discuss the use of OCT and OCTA in various pediatric retinal diseases, including ROP, familial exudative vitreoretinopathy (FEVR), Coats disease and other less common diseases. For example, handheld portable OCT was shown to detect subclinical macular edema and incomplete foveal development in ROP, as well as subretinal exudation and fibrosis in Coats disease. Some challenges in the pediatric age group include the lack of a normative database and the difficulty in image registration for longitudinal comparison. We believe that technological improvements in the use of OCT and OCTA will improve our understanding and care of pediatric retina patients in the future. Full article

31 pages, 8132 KB  
Article
Multi-Sensor Data Fusion for 3D Reconstruction of Complex Structures: A Case Study on a Real High Formwork Project
by Linlin Zhao, Huirong Zhang and Jasper Mbachu
Remote Sens. 2023, 15(5), 1264; https://doi.org/10.3390/rs15051264 - 24 Feb 2023
Cited by 25 | Viewed by 7403
Abstract
As the most comprehensive document type for recording and displaying real-world information about construction projects, 3D realistic models can record and display textures and geometric shapes simultaneously in the same 3D scene. However, at present, the documentation of much construction infrastructure faces significant challenges. Based on TLS, GNSS/IMU, mature photogrammetry, a UAV platform, computer vision technologies, and AI algorithms, this study proposes a workflow for 3D modeling of complex structures with multi-source data. A deep learning LoFTR network was used first for image matching, which can improve matching accuracy. Then, a NeuralRecon network was employed to generate a 3D point cloud with global consistency. GNSS information was used to reduce the search space in image matching and produce an accurate transformation matrix between the image scene and the global reference system. In addition, to enhance the effectiveness and efficiency of the co-registration of the two source point clouds, an RPM-net was used. The proposed workflow processed the 3D laser point cloud and UAV low-altitude multi-view image data to generate a complete, accurate, high-resolution, and detailed 3D model. Experimental validation on a real high formwork project was carried out, and the result indicates that the generated 3D model has satisfactory accuracy, with a registration error of 5 cm. Model comparisons between the TLS, image-based, data fusion 1 (using the common method), and data fusion 2 (using the proposed method) models were conducted in terms of completeness, geometrical accuracy, texture appearance, and appeal to professionals. The results show that the generated 3D model has accuracy similar to the TLS model while also providing a complete model with a photorealistic appearance, which most professionals chose as their favorite. Full article
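Co-registration of the two source point clouds is handled in the study with a learned RPM-net; whatever produces the correspondences, the final rigid transform is classically recovered in closed form with the Kabsch algorithm. A sketch assuming known point matches:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t such that
    R @ source_i + t ~= target_i (Kabsch / Procrustes, no scaling)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    S, T = source - mu_s, target - mu_t          # centre both clouds
    U, _, Vt = np.linalg.svd(T.T @ S)            # SVD of cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    t = mu_t - R @ mu_s
    return R, t
```

Iterative schemes such as ICP alternate this closed-form step with re-estimating the correspondences.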

18 pages, 1548 KB  
Article
Multimodal Registration for Image-Guided EBUS Bronchoscopy
by Xiaonan Zang, Wennan Zhao, Jennifer Toth, Rebecca Bascom and William Higgins
J. Imaging 2022, 8(7), 189; https://doi.org/10.3390/jimaging8070189 - 8 Jul 2022
Cited by 8 | Viewed by 4399
Abstract
The state-of-the-art procedure for examining the lymph nodes in a lung cancer patient involves using an endobronchial ultrasound (EBUS) bronchoscope. The EBUS bronchoscope integrates two modalities into one device: (1) videobronchoscopy, which gives video images of the airway walls; and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures situated outside the airways. During the procedure, the physician first employs videobronchoscopy to navigate the device through the airways. Next, upon reaching a given node’s approximate vicinity, the physician probes the airway walls using EBUS to localize the node. Because lymph nodes lie beyond the airways, EBUS is essential for confirming a node’s location. Unfortunately, it is well documented that EBUS is difficult to use. In addition, while new image-guided bronchoscopy systems provide effective guidance for videobronchoscopic navigation, they offer no assistance for guiding EBUS localization. We propose a method for registering a patient’s chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. The method entails an optimization process that registers CT-based virtual EBUS views to live EBUS probe views. Results using lung cancer patient data show that the method correctly registered 28/28 (100%) lymph nodes scanned by EBUS, with a mean registration time of 3.4 s. In addition, the mean position and direction errors of registered sites were 2.2 mm and 11.8°, respectively. Sensitivity studies also show the method’s robustness to parameter variations. Lastly, we demonstrate the method’s use in an image-guided system designed for guiding both phases of EBUS bronchoscopy. Full article
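The optimization that registers CT-based virtual EBUS views to live probe views needs a similarity metric that tolerates very different image appearances; mutual information is the classic choice for multimodal registration (the paper's exact metric is not reproduced here). A histogram-based sketch:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equal-sized images,
    computed from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                     # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of b
    nz = p_ab > 0                                # skip empty cells
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

A registration optimizer would maximize this score over the virtual camera's pose parameters; identical images score near the marginal entropy, while unrelated images score near zero.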

27 pages, 24116 KB  
Article
Two-Step Alignment of Mixed Reality Devices to Existing Building Data
by Jelle Vermandere, Maarten Bassier and Maarten Vergauwen
Remote Sens. 2022, 14(11), 2680; https://doi.org/10.3390/rs14112680 - 3 Jun 2022
Cited by 13 | Viewed by 3978
Abstract
With the emergence of XR technologies, the demand for new time- and cost-saving applications in the AEC industry based on these new technologies is rapidly increasing. Their real-time feedback and digital interaction in the field make these systems very well suited for construction site monitoring, maintenance, project planning, and so on. However, the continuously changing environments of construction sites and facilities require extraordinarily robust and dynamic data acquisition technologies to capture and update the built environment. New XR devices already have the hardware to accomplish these tasks, but the framework to document and geolocate multi-temporal mappings of a changing environment is still very much the subject of ongoing research. The goal of this research is, therefore, to study whether Lidar and photogrammetric technologies can be adapted to process XR sensory data and align multiple time series in the same coordinate system. Given the sometimes drastic changes on sites, we use not only the sensory data but also any preexisting remote sensing data and as-is or as-designed BIM to aid the registration. In this work, we specifically study the low-resolution geometry and image matching of the Hololens 2 during consecutive stages of a construction project. During the experiments, multiple time series of constructions are captured and registered. The experiments show that XR-captured data can be reliably registered to preexisting datasets with an accuracy that matches or exceeds the resolution of the sensory data. These results indicate that this method is an excellent way to align generic XR devices to a wide variety of existing reference data. Full article
(This article belongs to the Special Issue 3D Indoor Mapping and BIM Reconstruction)

12 pages, 1151 KB  
Article
Social Media and the Pandemic: Consumption Habits of the Spanish Population before and during the COVID-19 Lockdown
by Diego Gudiño, María Jesús Fernández-Sánchez, María Teresa Becerra-Traver and Susana Sánchez
Sustainability 2022, 14(9), 5490; https://doi.org/10.3390/su14095490 - 3 May 2022
Cited by 12 | Viewed by 6364
Abstract
The confinement of the Spanish population due to the COVID-19 pandemic triggered a change in patterns of electronic device usage, leading to an increase in internet traffic. This study sought to evaluate the use of social media by the Spanish population before and during the COVID-19 lockdown. An extensive ad hoc questionnaire was prepared and distributed to a total of 397 people of different ages from different Spanish provinces. The questionnaire was previously validated and was found to be reliable. The results showed that during the lockdown, the most frequently used social networks were WhatsApp and Facebook, although others, such as Telegram and TikTok, also experienced a significant increase in user registrations. There was also an increase in the number of hours spent per week using social media, especially Facebook, WhatsApp and YouTube, to share images, videos and audio messages, with a significant increase in document sharing and knowledge acquisition. The final section discusses some of the results and concludes by highlighting the importance of analyzing social behavior in times of crisis in order to design more effective and personalized communication strategies. Full article

18 pages, 87785 KB  
Article
Reflectance Imaging Spectroscopy (RIS) for Operation Night Watch: Challenges and Achievements of Imaging Rembrandt’s Masterpiece in the Glass Chamber at the Rijksmuseum
by Francesca Gabrieli, John K. Delaney, Robert G. Erdmann, Victor Gonzalez, Annelies van Loon, Patrick Smulders, Roy Berkeveld, Robert van Langh and Katrien Keune
Sensors 2021, 21(20), 6855; https://doi.org/10.3390/s21206855 - 15 Oct 2021
Cited by 25 | Viewed by 9563
Abstract
Visible and infrared reflectance imaging spectroscopy is one of the several non-invasive techniques used during Operation Night Watch for the study of Rembrandt’s iconic masterpiece The Night Watch (1642). The goals of this project include the identification and mapping of the artists’ materials, providing information about the painting technique used as well as documenting the painting’s current state and ultimately determining the possible conservation plan. The large size of the painting (3.78 m by 4.53 m) and the diversity of the technical investigations being performed make Operation Night Watch the largest research project ever undertaken at the Rijksmuseum. To construct a complete reflectance image cube at a high spatial resolution (168 µm²) and spectral resolution (2.54 to 6 nm), the painting was imaged with two high-sensitivity line scanning hyperspectral cameras (VNIR 400 to 1000 nm, 2.54 nm, and SWIR 900 to 2500 nm, 6 nm). Given the large size of the painting, a custom computer-controlled 3D imaging frame was constructed to move each camera, along with lights, across the painting surface. A third axis, normal to the painting, was added along with a distance-sensing system which kept the cameras in focus during the scanning. A total of 200 hyperspectral image swaths were collected, mosaicked and registered to a high-resolution color image to sub-pixel accuracy using a novel registration algorithm. The preliminary analysis of the VNIR and SWIR reflectance images has identified many of the pigments used and their distribution across the painting. The SWIR, in particular, has provided an improved visualization of the preparatory sketches and changes in the painted composition. These data sets, when combined with the results from the other spectral imaging modalities and paint sample analyses, will provide the most complete understanding of the materials and painting techniques used by Rembrandt in The Night Watch. Full article

20 pages, 10112 KB  
Article
Documenting Paintings with Gigapixel Photography
by Pedro M. Cabezos-Bernal, Pablo Rodriguez-Navarro and Teresa Gil-Piqueras
J. Imaging 2021, 7(8), 156; https://doi.org/10.3390/jimaging7080156 - 21 Aug 2021
Cited by 16 | Viewed by 4705
Abstract
Digital photographic capture of pictorial artworks with gigapixel resolution (around 1000 megapixels or greater) is a novel technique that is beginning to be used by some important international museums as a means of documentation, analysis, and dissemination of their masterpieces. This line of research is extremely interesting, not only for art curators and scholars but also for the general public. The results can be disseminated through online virtual museum displays, offering a detailed interactive visualization. These virtual visualizations allow the viewer to delve into the artwork in such a way that it is possible to zoom in and observe those details, which would be negligible to the naked eye in a real visit. Therefore, this kind of virtual visualization using gigapixel images has become an essential tool to enhance cultural heritage and to make it accessible to everyone. Since today’s professional digital cameras provide images of around 40 megapixels, obtaining gigapixel images requires some special capture and editing techniques. This article describes a series of photographic methodologies and equipment, developed by the team of researchers, that have been put into practice to achieve a very high level of detail and chromatic fidelity, in the documentation and dissemination of pictorial artworks. The result of this research work consisted in the gigapixel documentation of several masterpieces of the Museo de Bellas Artes of Valencia, one of the main art galleries in Spain. The results will be disseminated through the Internet, as will be shown with some examples. Full article
