Search Results (44)

Search Parameters:
Keywords = artefact density

23 pages, 13439 KB  
Article
Quality Assessment of Digital 3D Models of Museum Artefacts from the Mobile LiDAR iPhone and Structured Light Scanners
by Jerzy Montusiewicz, Marek Milosz, Wojciech Sarnowski and Rahim Kayumov
Appl. Sci. 2026, 16(4), 2100; https://doi.org/10.3390/app16042100 - 21 Feb 2026
Cited by 1 | Viewed by 546
Abstract
Creating digital 3D models of museum artefacts has been common practice for many years. Such models can be used for archiving, research, and marketing purposes, as well as to counteract various types of exclusion. A digital copy created with professional 3D scanners based on structured-light scanning (3D SLS) or terrestrial laser scanning technology requires expensive equipment, specialised postprocessing software, and a trained team. The introduction of mobile phones with Light Detection and Ranging (LiDAR) sensors and the development of appropriate open-access software have enabled phones to be used to generate digital 3D models. This study compares the quality of 3D models created with 3D SLS and mobile LiDAR technologies using three identical small museum artefacts from the Silk Road area held by the Samarkand State University museum in Uzbekistan; they were digitised in 2017 and 2025. The results indicate that digital 3D models generated with an iPhone 16 Pro Max using the Scaniverse LiDAR software are incomplete and thus less versatile, so they cannot serve as archival models. However, their accuracy and quality (mesh density, size, and texture quality), together with the speed of model generation, make them well suited to marketing and digital tourism. Full article

22 pages, 2732 KB  
Article
Automated Single-Sensor 3D Scanning and Modular Benchmark Objects for Human-Scale 3D Reconstruction
by Kartik Choudhary, Mats Isaksson, Gavin W. Lambert and Tony Dicker
Sensors 2026, 26(4), 1331; https://doi.org/10.3390/s26041331 - 19 Feb 2026
Viewed by 514
Abstract
High-fidelity 3D reconstruction of human-sized objects typically requires multi-sensor scanning systems that are expensive, complex, and rely on proprietary hardware configurations. Existing low-cost approaches often rely on handheld scanning, which is inherently unstructured and operator-dependent, leading to inconsistent coverage and variable reconstruction quality. This limitation necessitates a controlled, repeatable, and affordable scanning method that can generate high-quality data without requiring multi-sensor hardware or external tracking markers. This study presents a marker-less scanning platform designed for human-scale reconstruction. The system consists of a single structured-light sensor mounted on a vertical linear actuator, synchronised with a motorised turntable that rotates the subject. This constrained kinematic setup ensures a repeatable cylindrical acquisition trajectory. To address the geometric ambiguity often found in vertical translational symmetry (i.e., where distinct elevation steps appear identical), the system employs a sensor-assisted initialisation strategy, where feedback from the rotary encoder and linear drive serves as constraints for the registration pipeline. The captured frames are reconstructed into a complete model through a two-step Iterative Closest Point (ICP) procedure that eliminates the vertical drift and model collapse (often referred to as “telescoping”) common in unconstrained scanning. To evaluate system performance, a modular anthropometric benchmark object representing a human-sized target (1.6 m) was scanned. The reconstructed model was assessed in terms of surface coverage and volumetric fidelity relative to a CAD reference. The results demonstrate high sampling stability, achieving a mean surface density of 0.760 points/mm² on front-facing surfaces. Geometric deviation analysis revealed a mean signed error of −1.54 mm (σ = 2.27 mm), corresponding to a relative volumetric error of approximately 0.096% over the full vertical span. These findings confirm that a single-sensor system, when guided by precise kinematics, can mitigate the non-linear bending and drift artefacts of handheld acquisition, providing an accessible yet rigorously accurate alternative to industrial multi-sensor systems. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)

47 pages, 5559 KB  
Review
Phase Behaviour of Binary Mixtures Involving Near-Critical and Supercritical Carbon Dioxide—A Review
by Pradnya N. P. Ghoderao and Patrice Paricaud
Molecules 2026, 31(4), 614; https://doi.org/10.3390/molecules31040614 - 10 Feb 2026
Viewed by 738
Abstract
Near-critical and supercritical carbon dioxide (SC-CO₂) is extensively utilized in high-pressure separation, extraction, polymer processing, and carbon capture and utilization (CCU) technologies owing to its tunable density, low viscosity, high diffusivity, and environmentally benign nature. Reliable phase equilibrium data are indispensable for process design and optimization, especially in the near-critical region characterized by pronounced non-idealities, high compressibility, and density fluctuations. This review synthesizes experimental phase behaviour studies for binary mixtures of CO₂ with diverse components, including hydrocarbons, alcohols, ethers, esters, ketones, water, monomers/polymers, ionic liquids (ILs), and deep eutectic solvents (DESs), compiling extensive vapour–liquid equilibrium (VLE), liquid–liquid equilibrium (LLE), and critical data across industrially relevant pressure (up to 40 MPa) and temperature (up to 400 K) ranges. It critically evaluates analytical (sampling and non-sampling) and synthetic methodologies, addressing challenges in CO₂-rich phase handling, depressurization artefacts, and near-critical phenomena, while assessing data consistency against established reliability criteria. Key trends emerge, such as enhanced solubility with increasing pressure and CO₂ density, chain-length dependencies in hydrocarbons and alcohols, and Lewis acid–base interactions driving solvation in polar systems. The review highlights gaps in multicomponent data and proposes integrating high-quality experiments with advanced thermodynamic modelling to enhance predictive accuracy. Future directions emphasize high-precision in situ techniques, expanded datasets for complex mixtures, and novel CO₂-philic solvents to advance sustainable SC-CO₂ applications. Full article
(This article belongs to the Special Issue Review Papers in Physical Chemistry)

9 pages, 253 KB  
Comment
Comment on Makó et al. Examination of Age-Depth Models Through Loess-Paleosol Sections in the Carpathian Basin. Quaternary 2025, 8, 55
by Zoran M. Perić, Milica G. Bosnić, Rastko S. Marković and Slobodan B. Marković
Quaternary 2026, 9(1), 10; https://doi.org/10.3390/quat9010010 - 30 Jan 2026
Viewed by 463
Abstract
This commentary re-evaluates the study by Makó et al., which reconstructs dust accumulation rates from loess–paleosol sequences in the Carpathian Basin. Several methodological and factual issues substantially limit the reliability of their interpretations. The study reports linear sedimentation rates (mm a⁻¹) as mass accumulation rates (MARs) without accounting for bulk density, rendering their values non-comparable with established MAR datasets. It also overlooks a documented systematic bias between ¹⁴C- and luminescence-derived MARs, which are shown to differ by a factor of nearly three in Perić et al., a directly relevant synthesis that is not cited. Furthermore, the conflation of distinct sites (Surduk and Veliki Surduk) and the incorrect attribution of the Surduk section’s location indicate errors in basic site metadata. Together, these issues suggest that the reported “high accumulation axis” may reflect methodological artefacts rather than genuine environmental gradients. Improved methodological transparency and consistency are essential for robust regional reconstructions. Full article
20 pages, 6158 KB  
Article
Improving Surface Roughness and Printability of LPBF Ti6246 Components Without Affecting Their Structure, Mechanical Properties and Building Rate
by Thibault Mouret, Aurore Leclercq, Patrick K. Dubois and Vladimir Brailovski
Metals 2026, 16(1), 32; https://doi.org/10.3390/met16010032 - 27 Dec 2025
Cited by 1 | Viewed by 621
Abstract
Laser powder bed fusion (LPBF) is the best-suited technology for manufacturing temperature-resistant Ti-6Al-2Sn-4Zr-6Mo parts with complex geometrical features for high-end applications. Improving printing accuracy by reducing the layer thickness (t) generally requires repeating a tedious and time-consuming process optimization routine. To simplify this endeavour, the present work proposes three process equivalence criteria that allow optimized process conditions to be transferred from one printing parameter set to another. This approach recommends keeping the volumetric laser energy density (VED) and the hatching space-to-layer thickness ratio (h/t) constant, while adjusting the scanning speed (v) and hatching space (h) accordingly. To validate this approach, Ti6246 parts were printed with 50 µm and 25 µm layer thicknesses, while keeping VED = 100 J/mm³ and h/t = 3 constant for both cases. The printed samples were analyzed in terms of their density, microstructure and mechanical properties, as well as the geometric compliance of wall-, gap- and channel-containing artefacts. Highly dense samples exhibiting comparable microstructures and mechanical properties were obtained with both parameter sets investigated. However, the two sets induced markedly different geometric characteristics. Notably, using 25 µm layers allowed printing walls as thin as 0.2 mm, as compared to 1.0 mm for 50 µm layers. Full article
(This article belongs to the Special Issue Recent Advances in Powder-Based Additive Manufacturing of Metals)
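The transfer rule in this abstract — hold VED and h/t constant while changing the layer thickness, then solve for the new hatching space and scanning speed — can be sketched numerically. This is a minimal illustration, assuming the standard definition VED = P/(v·h·t) and a hypothetical 200 W laser power (the abstract does not state P); it is not the authors' code.

```python
def transfer_parameters(P, t_old, v_old, h_old, t_new):
    """Return (v_new, h_new) preserving VED and h/t at a new layer thickness.

    Units assumed: P in W, thicknesses/spacings in mm, speeds in mm/s.
    """
    ved = P / (v_old * h_old * t_old)   # volumetric energy density, J/mm^3
    ratio = h_old / t_old               # hatching-space-to-layer-thickness ratio
    h_new = ratio * t_new               # keep h/t constant
    v_new = P / (ved * h_new * t_new)   # keep VED constant
    return v_new, h_new

# Halving t from 50 um to 25 um (values in mm): h halves and v quadruples,
# so the volumetric build rate v*h*t is unchanged at fixed laser power --
# consistent with the title's claim of an unaffected building rate.
v_new, h_new = transfer_parameters(P=200.0, t_old=0.050, v_old=266.67,
                                   h_old=0.150, t_new=0.025)
```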

12 pages, 1982 KB  
Article
Spectroscopic Probing of Solute–Solvent Interactions in Aqueous Methylsulphonylmethane (MSM) Solutions: An Integrated ATR-FTIR, Chemometric, and DFT Study
by Aneta Panuszko, Przemysław Pastwa, Paulina Giemza and Piotr Bruździak
Int. J. Mol. Sci. 2025, 26(22), 10953; https://doi.org/10.3390/ijms262210953 - 12 Nov 2025
Viewed by 690
Abstract
The widespread use of methylsulphonylmethane (MSM) as a dietary supplement highlights the need to understand its fundamental behaviour in aqueous solutions. In this paper, we investigate changes in the MSM band shape as a function of its concentration using Attenuated Total Reflection FTIR (ATR-FTIR) spectroscopy. ATR spectra may be complicated by significant optical artefacts arising from refractive index changes. These can be misinterpreted as genuine vibrational shifts, leading to erroneous conclusions. Here, we systematically investigate aqueous MSM solutions using three different internal reflection elements. Applying a rigorous ATR correction procedure, validated by transmission measurements and PARAFAC (Parallel Factor Analysis) analysis, decouples physical phenomena from optical distortions. The corrected spectra reveal a crucial finding: the primary effect of MSM is not a shift in the sulphone band position, but a distinct change in its shape. This result, supported by DFT (Density Functional Theory) calculations, indicates increased heterogeneity of local hydration environments and demonstrates the criticality of proper ATR correction. Full article
(This article belongs to the Special Issue FTIR Microspectroscopy: Opportunities and Challenges)

32 pages, 12348 KB  
Article
Advances in Unsupervised Parameterization of the Seasonal–Diurnal Surface Wind Vector
by Nicholas J. Cook
Meteorology 2025, 4(3), 21; https://doi.org/10.3390/meteorology4030021 - 29 Jul 2025
Viewed by 1145
Abstract
The Offset Elliptical Normal (OEN) mixture model represents the seasonal–diurnal surface wind vector for wind engineering design applications. This study upgrades the parameterization of OEN by accounting for changes in the format of the global database of surface observations, improving performance by eliminating manual supervision, and extending the scope of the model to include skewness. The previous coordinate transformation of binned speed and direction, used to evaluate the joint probability distributions of the wind vector, is replaced by direct kernel density estimation. The slow process of sequentially adding additional components is replaced by initializing all components together using fuzzy clustering. The supervised process of sequencing each mixture component through time is replaced by a fully automated unsupervised process using pattern matching. Previously reported departures from normal in the tails of the fuzzy-demodulated OEN orthogonal vectors are investigated by directly fitting the bivariate skew generalized t distribution, showing that the small observed skew is likely real but that the observed kurtosis is an artefact of the demodulation process, leading to a new Offset Skew Normal mixture model. The supplied open-source R scripts fully automate parameterization for locations in the NCEI Integrated Surface Hourly global database of wind observations. Full article

26 pages, 6721 KB  
Article
Advanced Detection and Classification of Kelp Habitats Using Multibeam Echosounder Water Column Point Cloud Data
by Amy W. Nau, Vanessa Lucieer, Alexandre C. G. Schimel, Haris Kunnath, Yoann Ladroit and Tara Martin
Remote Sens. 2025, 17(3), 449; https://doi.org/10.3390/rs17030449 - 28 Jan 2025
Cited by 4 | Viewed by 2971
Abstract
Kelps are important habitat-forming species in shallow marine environments, providing critical habitat, structure, and productivity for temperate reef ecosystems worldwide. Many kelp species are currently endangered by myriad pressures, including changing water temperatures, invasive species, and anthropogenic threats. This situation necessitates advanced methods to detect kelp density, which would allow tracking density changes, understanding ecosystem dynamics, and informing evidence-based management strategies. This study introduces an innovative approach to detect kelp density with multibeam echosounder water column data. First, these data are filtered into a point cloud. Then, a range of variables are derived from these point cloud data, including average acoustic energy, volume, and point density. Finally, these variables are used as input to a Random Forest model in combination with bathymetric variables to classify sand, bare rock, sparse kelp, and dense kelp habitats. At 5 m resolution, we achieved an overall accuracy of 72.5% with an overall Area Under the Curve of 0.874. Notably, our method achieved high accuracy across the entire multibeam swath, with only a 1 percentage point decrease in model accuracy for data falling within the part of the water column affected by sidelobe artefact noise. This significantly expands the potential of this data type for wide-scale monitoring of threatened kelp ecosystems. Full article
(This article belongs to the Section Ocean Remote Sensing)
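The variable-derivation step in this abstract (average acoustic energy, volume, and point density per grid cell) can be sketched as follows. This is a minimal illustration with assumed array layouts and units, not the authors' pipeline; in the study, these per-cell features, together with bathymetric variables, feed a Random Forest classifier.

```python
import numpy as np

def cell_features(points, energies, cell=5.0):
    """Aggregate water-column point-cloud returns into per-cell features (sketch).

    points   : (N, 3) array of x, y, z positions in metres
    energies : (N,) acoustic energy per point (units assumed)
    Returns {(ix, iy): {"mean_energy", "volume", "point_density"}} on a
    cell x cell metre grid, mirroring the variable types named in the abstract.
    """
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    feats = {}
    for key in set(zip(ix.tolist(), iy.tolist())):
        m = (ix == key[0]) & (iy == key[1])
        z = points[m, 2]
        height = float(z.max() - z.min())                 # vertical extent of returns
        feats[key] = {
            "mean_energy": float(energies[m].mean()),
            "volume": cell * cell * height,               # rough occupied volume, m^3
            "point_density": float(m.sum()) / (cell * cell),  # points per m^2
        }
    return feats
```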

28 pages, 9638 KB  
Article
Structure of Spectral Composition and Synchronization in Human Sleep on the Whole Scalp: A Pilot Study
by Jesús Pastor, Paula Garrido Zabala and Lorena Vega-Zelaya
Brain Sci. 2024, 14(10), 1007; https://doi.org/10.3390/brainsci14101007 - 6 Oct 2024
Cited by 2 | Viewed by 1745
Abstract
We used numerical methods to define the normative structure of the different stages of sleep and wake (W) in a pilot study of 19 participants without pathology (18–64 years old) using a double-banana bipolar montage. Artefact-free 120–240 s epoch lengths were visually identified and divided into 1 s windows with a 10% overlap. Differential channels were grouped into frontal, parieto-occipital, and temporal lobes. For every channel, the power spectrum (PS) was calculated via fast Fourier transform and used to compute the areas for the delta (0–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), and beta (13–30 Hz) bands, which were log-transformed. Furthermore, Pearson’s correlation coefficient and coherence by bands were computed. Differences in logPS and synchronization from the whole scalp were observed between the sexes for specific stages. However, these differences vanished when specific lobes were considered. Considering the location and stages, the logPS and synchronization vary highly and specifically in a complex manner. Furthermore, the average spectra for every channel and stage were very well defined, with phase-specific features (e.g., the sigma band during N2 and N3, or the occipital alpha component during wakefulness), although the slow alpha component (8.0–8.5 Hz) persisted during NREM and REM sleep. The average spectra were symmetric between hemispheres. The properties of K-complexes and the sigma band (mainly due to sleep spindles—SSs) were deeply analyzed during the NREM N2 stage. The properties of the sigma band are directly related to the density of SSs. The average frequency of SSs in the frontal lobe was lower than that in the occipital lobe. In approximately 30% of the participants, SSs showed bimodal components in the anterior regions. qEEG can be easily and reliably used to study sleep in healthy participants and patients. Full article
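The windowing and spectral steps this abstract describes — 1 s windows with 10% overlap, an FFT power spectrum per channel, and log-transformed band areas for delta, theta, alpha, and beta — can be sketched as follows. This is a minimal illustration assuming a 256 Hz sampling rate and synthetic data; it is not the authors' code.

```python
import numpy as np

FS = 256                                   # assumed sampling rate, Hz
BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_log_areas(window):
    """Log-transformed band areas of one window's FFT power spectrum."""
    power = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    return {name: float(np.log10(power[(freqs >= lo) & (freqs < hi)].sum()))
            for name, (lo, hi) in BANDS.items()}

# Divide an epoch into 1 s windows with 10% overlap, as in the pipeline above.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(120 * FS)      # 120 s synthetic "artefact-free" epoch
step = int(FS * 0.9)                       # 90% hop = 10% overlap
windows = [epoch[i:i + FS] for i in range(0, len(epoch) - FS + 1, step)]
areas = [band_log_areas(w) for w in windows]
```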

28 pages, 16840 KB  
Article
Working in Tandem to Uncover 3D Artefact Distribution in Archaeological Excavations: Mathematical Interpretation through Positional and Relational Methods
by Miguel Ángel Dilena
Heritage 2024, 7(8), 4472-4499; https://doi.org/10.3390/heritage7080211 - 18 Aug 2024
Cited by 1 | Viewed by 2429
Abstract
In recent years, the most advanced techniques in the computing field have found application in assorted areas. Deep learning approaches, including artificial neural networks (ANNs), have become popular thanks to their ability to draw inferences from intricate and seemingly unconnected datasets. Additionally, 3D clustering techniques manage to associate groups of elements by identifying the inherent structures exhibited by such objects based on similarity measures. Generally, the characteristics of archaeological information gathered after extraction operations align with the previously mentioned challenges. Hence, an excavation is an opportunity to apply these innovative computing approaches. Our objective is to integrate software techniques to organise recovered artefacts and derive logical conclusions from their spatial location and the correlation between tangible attributes. These results can statistically improve our approach to investigations and provide a mathematical interpretation of archaeological excavations. Full article

25 pages, 13831 KB  
Review
Energy Performance Indicators for Air-Conditioned Museums in Tropical Climates
by Elena Lucchi
Buildings 2024, 14(8), 2301; https://doi.org/10.3390/buildings14082301 - 25 Jul 2024
Cited by 7 | Viewed by 4882
Abstract
The energy design of museums in developing countries is a subject that has been poorly studied, despite its significant implications for heritage preservation, human comfort, energy efficiency, and environmental sustainability. This study introduces a comprehensive framework of Energy Performance Indicators tailored to air-conditioned museums in tropical regions, which represent the most prevalent museum type. These indicators are particularly important as international standards may not be applicable in these contexts. A comprehensive review of the factors and their design implications is provided at the building, system, and component levels. Efficient integration of lighting and air conditioning systems can optimize energy use while maintaining appropriate conditions for both artefact preservation and visitor comfort. Parameters such as average illuminance, uniformity of lighting, lighting power density and lighting energy use intensity are critical in balancing visual quality and energy efficiency. Recommended values and strategies, such as the use of LED lighting and daylight harvesting, help to minimize energy consumption. In addition, parameters such as power density and energy use intensity of air conditioning systems are essential for assessing their efficiency. Techniques such as the integration of solar-assisted, optimized performance indices can effectively reduce energy consumption. Synthetic indicators for assessing lighting quality and overall energy performance are (i) Average Illuminance Ratio, which assesses the adequacy of lighting in a space by comparing the average measured illuminance with the recommended illuminance levels for that space, and (ii) Energy Use Intensity, which represents the total annual energy consumption per unit area of conditioned space. By adopting these indicators, tropical museums can advance energy efficiency and broader sustainability objectives, taking a significant step towards a more energy-conscious and sustainable future. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

24 pages, 41154 KB  
Article
A Novel and Reliable Pixel Response Correction Method (DAC-Shifting) for Spectral Photon-Counting CT Imaging
by Navrit Johan Singh Bal, Imaiyan Chitra Ragupathy, Trine Tramm and Jasper Nijkamp
Tomography 2024, 10(7), 1168-1191; https://doi.org/10.3390/tomography10070089 - 22 Jul 2024
Cited by 1 | Viewed by 2469
Abstract
Spectral photon-counting cone-beam computed tomography (CT) imaging is challenged by individual pixel response behaviours, which lead to noisy projection images and subsequent image artefacts like rings. Existing methods to correct for this either use calibration measurements, like signal-to-thickness calibration (STC), or perform a post-processing ring artefact correction of sinogram data or scan reconstructions without taking the pixel response explicitly into account. Here, we present a novel post-processing method (digital-to-analogue converter (DAC)-shifting) which explicitly measures the current pixel response using flat-field images and subsequently corrects the projection data. The DAC-shifting method was evaluated using a repeat series of the spectral photon-counting imaging (Medipix3) of a phantom with different density inserts and iodine K-edge imaging. The method was also compared against polymethyl methacrylate (PMMA)-based STC. The DAC-shifting method was shown to be effective in correcting individual pixel responses and was robust against detector instability; it led to a 47.4% average reduction in CT-number variation in homogeneous materials, with a range of 40.7–55.6%. On the contrary, the STC correction showed varying results; a 13.7% average reduction in CT-number variation, ranging from a 43.7% increase to a 45.5% reduction. In K-edge imaging, DAC-shifting provides a sharper attenuation peak and more uniform CT values, which are expected to benefit iodine concentration quantifications. Full article

25 pages, 2065 KB  
Review
Challenges and Prospects of Applying Nanocellulose for the Conservation of Wooden Cultural Heritage—A Review
by Paulina Kryg, Bartłomiej Mazela, Waldemar Perdoch and Magdalena Broda
Forests 2024, 15(7), 1174; https://doi.org/10.3390/f15071174 - 5 Jul 2024
Cited by 8 | Viewed by 3728
Abstract
Nanocellulose is a nanostructured form of cellulose, which retains valuable properties of cellulose such as renewability, biodegradability, biocompatibility, nontoxicity, and sustainability and, due to its nano-sizes, acquires several useful features, such as low density, high aspect ratio and stiffness, a high specific surface area, easy processing and functionalisation, and good thermal stability. All these make it a highly versatile green nanomaterial for multiple applications, including the conservation of cultural heritage. This review provides the basic characteristics of all nanocellulose forms and their properties and presents the results of recent research on nanocellulose formulations applied for conserving historical artefacts made of wood and paper, discussing their effectiveness, advantages, and disadvantages. Pure nanocellulose proves particularly useful for conserving historical paper since it can form a durable, stable coating that consolidates the surface of a degraded object. However, it is not as effective for wood consolidation treatment due to its poor penetration into the wood structure. The research shows that this disadvantage can be overcome by various chemical modifications of the nanocellulose surface; owing to its specific chemistry, nanocellulose can be easily functionalised and, thus, enriched with the properties required for an effective wood consolidant. Moreover, combining nanocellulose with other agents can also improve its properties, adding new functionalities to the developed supramolecular systems that would address multiple needs of degraded artefacts. Since the broad use of nanocellulose in conservation practice depends on its properties, price, and availability, the development of new, effective, green, and industrial-scale production methods ensuring the manufacture of nanocellulose particles with standardised properties is necessary. Nanocellulose is an interesting and very promising solution for the conservation of cultural heritage artefacts made of paper and wood; however, further thorough interdisciplinary research is still necessary to devise new green methods of its production as well as develop new effective and sustainable nanocellulose-based conservation agents, which would replace synthetic, non-sustainable consolidants and enable proper conservation of historical objects of our cultural heritage. Full article
(This article belongs to the Special Issue Wood as Cultural Heritage Material: 2nd Edition)

23 pages, 3810 KB  
Article
Improved Video-Based Point Cloud Compression via Segmentation
by Faranak Tohidi, Manoranjan Paul, Anwaar Ulhaq and Subrata Chakraborty
Sensors 2024, 24(13), 4285; https://doi.org/10.3390/s24134285 - 1 Jul 2024
Cited by 4 | Viewed by 3850
Abstract
A point cloud is a representation of objects or scenes utilising unordered points comprising 3D positions and attributes. The ability of point clouds to mimic natural forms has gained significant attention from diverse applied fields, such as virtual reality and augmented reality. However, the point cloud, especially those representing dynamic scenes or objects in motion, must be compressed efficiently due to its huge data volume. The latest video-based point cloud compression (V-PCC) standard for dynamic point clouds divides the 3D point cloud into many patches using computationally expensive normal estimation, segmentation, and refinement. The patches are projected onto a 2D plane to apply existing video coding techniques. This process often results in losing proximity information and some original points. This loss induces artefacts that adversely affect user perception. The proposed method segments dynamic point clouds based on shape similarity and occlusion before patch generation. This segmentation strategy helps maintain the points’ proximity and retain more original points by exploiting the density and occlusion of the points. The experimental results establish that the proposed method significantly outperforms the V-PCC standard and other relevant methods regarding rate–distortion performance and subjective quality testing for both geometric and texture data of several benchmark video sequences. Full article
(This article belongs to the Section Sensing and Imaging)

18 pages, 6597 KB  
Article
A Performance Comparison of 3D Survey Instruments for Their Application in the Cultural Heritage Field
by Irene Lunghi, Emma Vannini, Alice Dal Fovo, Valentina Di Sarno, Alessandra Rocco and Raffaella Fontana
Sensors 2024, 24(12), 3876; https://doi.org/10.3390/s24123876 - 15 Jun 2024
Cited by 8 | Viewed by 2187
Abstract
Thanks to the recent development of innovative instruments and software with high accuracy and resolution, 3D modelling provides useful insights in several sectors (from industrial metrology to cultural heritage). Moreover, the 3D reconstruction of objects of artistic interest is becoming mandatory, not only because of the risks to which works of art are increasingly exposed (e.g., wars and climatic disasters) but also because of the leading role that the virtual fruition of art is taking. In this work, we compared the performance of four 3D instruments based on different working principles and techniques (laser micro-profilometry, structured-light topography and the phase-shifting method) by measuring four samples of different sizes, dimensions and surface characteristics. We aimed to assess the capabilities and limitations of these instruments to verify their accuracy and the technical specifications given in the suppliers’ data sheets. To this end, we calculated the point densities and extracted several profiles from the models to evaluate both their lateral (XY) and axial (Z) resolution. A comparison between the nominal resolution values and those calculated on samples representative of cultural artefacts was used to predict the performance of the instruments in real case studies. Overall, the purpose of this comparison is to provide a quantitative assessment of the performance of the instruments that allows for their correct application to works of art according to their specific characteristics. Full article
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)