Search Results (37)

Search Parameters:
Keywords = polynomial reconstruction problem

14 pages, 5850 KiB  
Article
Reconstruction of Tokamak Plasma Emissivity Distribution by Approximation with Basis Functions
by Tomasz Czarski, Maryna Chernyshova, Katarzyna Mikszuta-Michalik and Karol Malinowski
Sensors 2025, 25(10), 3162; https://doi.org/10.3390/s25103162 - 17 May 2025
Viewed by 435
Abstract
The present study focuses on the development of a diagnostic system for measuring radiated power and core soft X-ray intensity emissions with the goal of detecting a broad spectrum of photon energies emitted from the central plasma region of the DEMO tokamak. The principal objective of the diagnostic apparatus is to deliver a comprehensive characterization of the radiation emitted by the plasma, with a particular focus on estimating the radiated power from the core region. This measurement is essential for determining and monitoring the power crossing the separatrix, which is a critical parameter controlling overall plasma performance. Since diagnostics rely on line-integrated measurements, the application of tomographic reconstruction techniques is necessary to extract spatially resolved information on core plasma radiation. This contribution presents the development of numerical algorithms addressing the problem of radiation tomography reconstruction. A robust and computationally efficient method is proposed for reconstructing the spatial distribution of plasma radiated power, with a view toward enabling real-time applications. The reconstruction methodology is based on a linear model formulated using a set of predefined basis functions, which define the radiation distribution within a specified plasma cross-section. In the initial stages of emissivity reconstruction in tokamak plasmas, it is typically assumed that the radiation distribution is dependent on magnetic flux surfaces. As a baseline approach, the plasma radiative properties are considered invariant along these surfaces and can thus be represented as one-dimensional profiles parameterized by the poloidal magnetic flux. Within this framework, the reconstruction method employs an approximation model utilizing three sets of basis functions: (i) polynomial splines, as well as Gaussian functions with (ii) sigma parameters and (iii) position parameters. The performance of the proposed method was evaluated using two synthetic radiated power emission phantoms, developed for the DEMO plasma scenario. The results indicate that the method is effective under the specified conditions. Full article
(This article belongs to the Special Issue Tomographic and Multi-Dimensional Sensors)
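
A minimal sketch of the linear basis-function reconstruction idea described in the abstract above: emissivity is modeled as a linear combination of basis functions, and the expansion coefficients are recovered from line-integrated measurements by least squares. The 1D geometry, Gaussian basis, and chord matrix below are illustrative stand-ins, not the paper's DEMO setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D "flux" coordinate and Gaussian basis functions b_j(rho)
rho = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.0, 1.0, 8)
sigma = 0.12
B = np.exp(-0.5 * ((rho[:, None] - centers[None, :]) / sigma) ** 2)  # (200, 8)

# Synthetic "true" emissivity profile and a random chord-integration matrix
c_true = rng.uniform(0.0, 1.0, size=8)
emissivity_true = B @ c_true
L = rng.uniform(0.0, 1.0, size=(40, 200))              # 40 lines of sight
y = L @ emissivity_true + 0.01 * rng.normal(size=40)   # line integrals + noise

# Linear forward model: y = (L @ B) c  ->  least squares for coefficients c
A = L @ B
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
emissivity_hat = B @ c_hat

print("coefficient error:", np.linalg.norm(c_hat - c_true))
```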

16 pages, 9232 KiB  
Article
DSM Reconstruction from Uncalibrated Multi-View Satellite Stereo Images by RPC Estimation and Integration
by Dong-Uk Seo and Soon-Yong Park
Remote Sens. 2024, 16(20), 3863; https://doi.org/10.3390/rs16203863 - 17 Oct 2024
Viewed by 1399
Abstract
In this paper, we propose a 3D Digital Surface Model (DSM) reconstruction method from uncalibrated Multi-view Satellite Stereo (MVSS) images, where Rational Polynomial Coefficient (RPC) sensor parameters are not available. While recent investigations have introduced several techniques to reconstruct high-precision and high-density DSMs from MVSS images, they inherently depend on the use of geo-corrected RPC sensor parameters. However, RPC parameters from satellite sensors can be erroneous due to inaccurate sensor data. In addition, with the increasing availability of data from the internet, uncalibrated satellite images without RPC parameters can be easily obtained. This study proposes a novel method to reconstruct a 3D DSM from uncalibrated MVSS images by estimating and integrating RPC parameters. To do this, we first employ a structure from motion (SfM) and 3D homography-based geo-referencing method to reconstruct an initial DSM. Second, we sample 3D points from the initial DSM as references and reproject them to the 2D image space to determine 3D–2D correspondences. Using the correspondences, we directly calculate all RPC parameters. To overcome memory limitations when processing large satellite images, we also propose an RPC integration method. The image space is partitioned into multiple tiles, and RPC estimation is performed independently in each tile. Then, all tiles' RPCs are integrated into a final RPC to represent the geometry of the whole image space. Finally, the integrated RPC is used to run a true MVSS pipeline to obtain the 3D DSM. The experimental results show that the proposed method can achieve a 1.455 m Mean Absolute Error (MAE) in height map reconstruction from multi-view satellite benchmark datasets. We also show that the proposed method can be used to reconstruct a geo-referenced 3D DSM from uncalibrated and freely available Google Earth imagery. Full article
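
A hedged sketch of the core idea behind fitting RPC-style parameters from 3D–2D correspondences: an image coordinate r is modeled as a ratio of polynomials r = P(X,Y,Z)/Q(X,Y,Z), and fixing Q's constant term to 1 turns r·Q − P = 0 into a linear system in the remaining coefficients. A reduced first-order term set is used for brevity; real RPCs use 20 cubic terms per polynomial and normalized coordinates.

```python
import numpy as np

def terms(X, Y, Z):
    # First-order monomial basis [1, X, Y, Z]; illustrative only.
    return np.stack([np.ones_like(X), X, Y, Z], axis=-1)

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(500, 3))      # sampled 3D reference points
X, Y, Z = pts.T

# Synthetic ground-truth rational model for the row coordinate r
p_true = np.array([0.3, 1.2, -0.5, 0.8])
q_true = np.array([1.0, 0.05, -0.02, 0.03])  # constant term fixed to 1
T = terms(X, Y, Z)
r = (T @ p_true) / (T @ q_true)

# Linear system: T p - r * (T[:, 1:] q_rest) = r   (Q's 1 moved across)
A = np.hstack([T, -r[:, None] * T[:, 1:]])
x, *_ = np.linalg.lstsq(A, r, rcond=None)
p_hat, q_rest_hat = x[:4], x[4:]
print("p error:", np.linalg.norm(p_hat - p_true))
print("q error:", np.linalg.norm(q_rest_hat - q_true[1:]))
```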

14 pages, 841 KiB  
Article
A Closed-Form Analytical Conversion between Zernike and Gatinel–Malet Basis Polynomials to Present Relevant Aberrations in Ophthalmology and Refractive Surgery
by Masoud Mehrjoo, Damien Gatinel, Jacques Malet and Samuel Arba Mosquera
Photonics 2024, 11(9), 883; https://doi.org/10.3390/photonics11090883 - 20 Sep 2024
Cited by 2 | Viewed by 1639
Abstract
The Zernike representation of wavefronts interlinks low- and high-order aberrations, which may result in imprecise clinical estimates. Recently, the Gatinel–Malet wavefront representation has been introduced to resolve this problem by deriving a new, unlinked basis originating from Zernike polynomials. This new basis preserves the classical low and high aberration subgroups' structure, as well as the orthogonality within each subgroup, but not the orthogonality between low and high aberrations. This feature has led to conversions relying on separate wavefront reconstructions for each subgroup, which may increase the associated numerical errors. This study proposes a robust, minimised-error (lossless) analytical approach for conversion between the Zernike and Gatinel–Malet spaces. This method analytically reformulates the conversion as a nonhomogeneous system of linear equations and solves it computationally using matrix factorisation and decomposition techniques with high accuracy. This work demonstrates the lossless expression of complex wavefronts in a format that is more clinically interpretable, with potential applications in various areas of ophthalmology, such as refractive surgery. Full article
(This article belongs to the Special Issue Visual Optics)
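
A minimal sketch of the conversion idea, under the assumption that the two polynomial bases are related by a known matrix M (each new-basis function expressed in the old basis): converting a coefficient vector then amounts to solving the nonhomogeneous linear system M x = c, done stably here via QR factorization. M below is a random stand-in, not the actual Zernike-to-Gatinel–Malet matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 15                                         # number of modes kept
M = np.eye(n) + 0.1 * rng.normal(size=(n, n))  # illustrative basis-change matrix
c_old = rng.normal(size=n)                     # coefficients in the original basis

# Solve M x = c_old via QR: M = Q R, then R x = Q^T c_old
Q, R = np.linalg.qr(M)
x = np.linalg.solve(R, Q.T @ c_old)            # same wavefront, new basis

# Lossless check: mapping back to the old basis reproduces c_old exactly
assert np.allclose(M @ x, c_old)
print(x[:5])
```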

17 pages, 477 KiB  
Article
Robust Direction Estimation of Terrestrial Signal via Sparse Non-Uniform Array Reconfiguration under Perturbations
by Rongling Lang, Hao Xu and Fei Gao
Remote Sens. 2024, 16(18), 3482; https://doi.org/10.3390/rs16183482 - 19 Sep 2024
Cited by 3 | Viewed by 1220
Abstract
DOA (Direction of Arrival), as an important observation parameter for accurately locating Signals of Opportunity (SOP), is vital for navigation in GNSS-challenged environments and can be effectively obtained through sparse arrays. In practical applications, array perturbations affect the estimation accuracy and stability of DOA, thereby adversely affecting the positioning performance of SOP. Against this backdrop, we propose an approach to reconstruct non-uniform arrays under perturbation conditions, aiming to improve the robustness of DOA estimation in sparse arrays. First, we theoretically derive the mathematical expressions of the Cramér–Rao Bound (CRB) and Spatial Correlation Coefficient (SCC) for a uniform linear array (ULA) with perturbation. Then, we minimize the CRB as the objective function to mitigate the adverse effects of array perturbations on DOA estimation, and use the SCC as a constraint to suppress sidelobes. By doing this, the non-uniform array reconstruction model is formulated as a high-order 0–1 optimization problem. To effectively solve this nonconvex model, we propose a polynomial-time algorithm, which can converge to the optimal approximate solution of the original model. Finally, through a series of simulation experiments utilizing a frequency modulation (FM) signal as an example, the exceptional performance of this method in array reconstruction is thoroughly validated. Experimental data show that the reconstructed non-uniform array excels in DOA estimation accuracy compared to other sparse arrays, making it particularly suitable for estimating the direction of terrestrial SOP in perturbed environments. Full article
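
A hedged sketch of the spatial correlation coefficient (SCC) used above as a sidelobe-suppression constraint: the normalized inner product of steering vectors at two directions for a 0–1 element selection on a candidate grid. The positions, wavelength, and selection vector are illustrative, not the paper's optimized design.

```python
import numpy as np

wavelength = 1.0
grid = np.arange(12) * 0.5 * wavelength     # candidate positions (half-wave grid)
select = np.array([1,0,1,1,0,0,1,0,1,1,0,1], dtype=bool)  # a sparse 0-1 choice
pos = grid[select]

def steering(theta_deg):
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * pos * np.sin(np.deg2rad(theta_deg)))

def scc(t1, t2):
    a1, a2 = steering(t1), steering(t2)
    return np.abs(a1.conj() @ a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))

# Low SCC between the look direction and off-axis angles suppresses sidelobes
print(scc(0.0, 0.0))    # 1.0 by construction
print(scc(0.0, 20.0))   # cross-correlation toward a potential sidelobe
```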

24 pages, 6869 KiB  
Article
Automobile-Demand Forecasting Based on Trend Extrapolation and Causality Analysis
by Zhengzhu Zhang, Haining Chai, Liyan Wu, Ning Zhang and Fenghe Wu
Electronics 2024, 13(16), 3294; https://doi.org/10.3390/electronics13163294 - 19 Aug 2024
Viewed by 3697
Abstract
Accurate automobile-demand forecasting can provide effective guidance for automobile-manufacturing enterprises in terms of production planning and supply planning. However, automobile sales volume is affected by historical sales volume and other external factors, and it shows strong non-stationarity, nonlinearity, autocorrelation and other complex characteristics. It is difficult to accurately forecast sales volume using traditional models. To solve this problem, a forecasting model combining trend extrapolation and causality analysis is proposed, derived from the historical predictors of sales volume and the influence of external factors. In the trend-extrapolation submodel, the historical predictors of the sales series were captured based on the Seasonal Autoregressive Integrated Moving Average (SARIMA) and Polynomial Regression (PR); then, Empirical Mode Decomposition (EMD), a stationarity-test algorithm, and an autocorrelation-test algorithm were introduced to reconstruct the sales sequence into stationary components with strong seasonality and trend components, which reduced the influence of non-stationarity and nonlinearity on the modeling. In the causality-analysis submodel, 31-dimensional feature data were extracted from influencing factors, such as date, macroeconomy, and promotion activities, and a Gradient-Boosting Decision Tree (GBDT) was used to establish the mapping between influencing factors and future sales because of its excellent ability to fit nonlinear relationships. Finally, the forecasting performance of three combination strategies, namely the boosting series, stacking parallel and weighted-average parallel strategies, was tested. Comparative experiments on three groups of sales data showed that the weighted-average parallel combination strategy had the best performance, with loss reductions of 16.81% and 4.68% for data from the number-one brand, 25.60% and 2.79% for data from the number-two brand, and 46.26% and 14.37% for data from the number-three brand compared with the other combination strategies. Other ablation studies and comparative experiments with six basic models proved the effectiveness and superiority of the proposed model. Full article
(This article belongs to the Special Issue Innovations, Challenges and Emerging Technologies in Data Engineering)
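
A minimal sketch of the weighted-average parallel combination strategy named above: two submodel forecasts are blended with weights inversely proportional to their validation errors. The submodels here are deliberate stand-ins (a polynomial trend and a flat average), not SARIMA/EMD/GBDT, and the error numbers are assumed.

```python
import numpy as np

y = np.array([10., 12., 15., 14., 18., 21., 20., 24.])   # toy sales history
t = np.arange(len(y))

# Submodel 1: polynomial trend extrapolation (degree 2)
coef = np.polyfit(t, y, deg=2)
f_trend = np.polyval(coef, len(y))        # one-step-ahead forecast

# Submodel 2: placeholder for the causality model's forecast
f_causal = y[-3:].mean()

# Weights from (inverse) validation MAE of each submodel -- assumed numbers
mae_trend, mae_causal = 1.8, 2.4
w = np.array([1 / mae_trend, 1 / mae_causal])
w /= w.sum()
forecast = w[0] * f_trend + w[1] * f_causal
print(f"combined one-step forecast: {forecast:.2f}")
```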

19 pages, 11782 KiB  
Article
Forest 3D Radar Reflectivity Reconstruction at X-Band Using a Lidar Derived Polarimetric Coherence Tomography Basis
by Roman Guliaev, Matteo Pardini and Konstantinos P. Papathanassiou
Remote Sens. 2024, 16(12), 2146; https://doi.org/10.3390/rs16122146 - 13 Jun 2024
Cited by 1 | Viewed by 1350
Abstract
Tomographic Synthetic Aperture Radar (SAR) allows the reconstruction of the 3D radar reflectivity of forests from a large(r) number of multi-angular acquisitions. However, in most practical implementations it suffers from limited vertical resolution and/or reconstruction artefacts as the result of non-ideal acquisition setups. Polarisation Coherence Tomography (PCT) offers an alternative to traditional tomographic techniques that allows the reconstruction of the low-frequency 3D radar reflectivity components from a small(er) number of multi-angular SAR acquisitions. PCT formulates the tomographic reconstruction problem as a series expansion on a given function basis. The expansion coefficients are estimated from interferometric coherence measurements between acquisitions. In its original form, PCT uses the Legendre polynomial basis for the reconstruction of the 3D radar reflectivity. This paper investigates the use of new basis functions, derived from available lidar waveforms, for the reconstruction of the X-band 3D radar reflectivity of forests. This approach enables an improved 3D radar reflectivity reconstruction with enhanced vertical resolution, tailored to individual forest conditions. It also allows the translation of sparse lidar waveform vertical reflectivity information into continuous vertical reflectivity estimates when combined with interferometric SAR measurements. This is especially relevant for exploring the synergy of current missions such as GEDI and TanDEM-X. The quality of the reconstructed 3D radar reflectivity is assessed by comparing simulated InSAR coherences derived from the reconstructed 3D radar reflectivity against measured coherences at different spatial baselines. The assessment is performed and discussed for interferometric TanDEM-X acquisitions over two tropical Gabonese rainforest sites: Mondah and Lopé. The results demonstrate that the lidar-derived basis provides more physically realistic vertical reflectivity profiles, which also produce a smaller bias in the simulated coherence validation, compared to the conventional Legendre polynomial basis. Full article
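
A hedged sketch of the series-expansion idea at the heart of PCT: a vertical reflectivity profile f(z) is written as a short expansion on a function basis and only the coefficients are estimated. Here the coefficients are fit to a synthetic profile by least squares; in PCT proper they come from InSAR coherences, and the paper replaces the Legendre basis shown here with a lidar-derived one.

```python
import numpy as np
from numpy.polynomial import legendre

z = np.linspace(-1.0, 1.0, 120)                       # normalized height in [-1, 1]
profile = np.exp(-0.5 * ((z - 0.4) / 0.25) ** 2) + 0.3  # synthetic canopy-like profile

# Low-order Legendre basis: columns are P_0..P_4 evaluated on z
B = legendre.legvander(z, 4)                          # shape (120, 5)
coef, *_ = np.linalg.lstsq(B, profile, rcond=None)
approx = B @ coef

print("relative L2 error:",
      np.linalg.norm(approx - profile) / np.linalg.norm(profile))
```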

26 pages, 2812 KiB  
Article
Algorithms for the Reconstruction of Genomic Structures with Proofs of Their Low Polynomial Complexity and High Exactness
by Konstantin Gorbunov and Vassily Lyubetsky
Mathematics 2024, 12(6), 817; https://doi.org/10.3390/math12060817 - 11 Mar 2024
Cited by 1 | Viewed by 1259
Abstract
In multiple subject areas (biology, pattern recognition, etc.), the mathematical side of applied problems reduces to the following discrete optimization problem: given a network with graphs at its leaves, find an assignment of graphs to the non-leaf nodes at which a given functional reaches its minimum. Such a problem, even in the simplest case, is NP-hard, which forces restrictions on the network, the graphs, or the functional. In this publication, the problem is addressed in the case where all graphs are so-called "structures", i.e., directed loaded graphs consisting of paths and cycles, and the functional is the sum, over all edges of the network, of the distances between the structures at the endpoints of each edge. The distance itself equals the minimal length of a sequence of operations from a fixed list whose composition transforms the structure at one endpoint of an edge into the structure at the other endpoint. The list of operations (and their costs) on such a graph is fixed. Under these conditions, the given discrete optimization problem is called the reconstruction problem. This paper presents novel algorithms for solving the reconstruction problem, along with full proofs of their low error and low polynomial complexity. For example, on a network the problem is solved by a zero-error algorithm with linear computational complexity, and on a tree by an algorithm with a multiplicative error of at most two and second-order polynomial computational complexity. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
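
A toy sketch of the shape of the reconstruction problem described above: leaves of a tree carry fixed objects, and internal nodes must be assigned objects so that the sum of distances over tree edges is minimized. The objects here are short bit strings with Hamming distance, a deliberately simple stand-in for the paper's structures and operation-based distance.

```python
from itertools import product

def ham(a, b):
    # Hamming distance between equal-length strings
    return sum(x != y for x, y in zip(a, b))

# Star tree: one internal node connected to three leaves
leaves = ["0011", "0101", "0110"]
candidates = ["".join(bits) for bits in product("01", repeat=4)]

# Brute-force the internal label minimizing the total edge cost
best = min(candidates, key=lambda c: sum(ham(c, leaf) for leaf in leaves))
cost = sum(ham(best, leaf) for leaf in leaves)
print(f"optimal internal label: {best}, total edge cost: {cost}")
```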

23 pages, 3806 KiB  
Article
Design and Experiment of Bionic Straw-Cutting Blades Based on Locusta Migratoria Manilensis
by Jinpeng Hu, Lizhang Xu, Yang Yu, Jin Lu, Dianlei Han, Xiaoyu Chai, Qinhao Wu and Linjun Zhu
Agriculture 2023, 13(12), 2231; https://doi.org/10.3390/agriculture13122231 - 1 Dec 2023
Cited by 16 | Viewed by 2150
Abstract
To address the problems of existing straw choppers on combine harvesters, such as high cutting resistance and poor cutting performance, bionic engineering technology was combined with biological characteristics: a bionic model was used to extract the characteristics of the cutting edges of the upper jaw of Locusta migratoria manilensis. 3D point cloud reconstruction and machine vision methods were used to fit a polynomial curve to the blade edge using Matlab 2016. The straw-cutting process was simulated using the discrete element method, and the cutting effect of the bionic blade was verified. Cutting experiments with rice straws were conducted using a physical property tester, and the cutting resistance of straw to bionic blades and general blades was compared. On the whole, the average cutting force of the bionic blades was lower than that of the general blades. The average cutting force of the bionic blade was 18.74~38.23% lower than that of a smooth blade and 1.63~25.23% lower than that of a serrated blade. Similarly, the maximum instantaneous cutting force of the bionic blade was reduced by 2.30~2.89% compared with the general blade, demonstrating a significant drag-reduction effect. By comparing the time–force curves of different blades' cutting processes, it was determined that the drag-reducing effect of the bionic blade lies in shortening the straw rupture time. The larger the contact area between the blade and the straw, the more uniform the cutting morphology of the straw after cutting. Field experiment results indicate that the average power consumption of a straw chopper partially fitted with bionic blades, measured using a wireless torque analysis module, was 5.48% lower than that of one with smooth blades. In this study, the structure of the straw chopper of an existing combine harvester was improved based on the bionic principle, which reduced resistance when cutting crop straw, thus reducing the power consumption required by the straw chopper and improving the effectiveness and stability of the blades. Full article
(This article belongs to the Section Agricultural Technology)
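
A minimal sketch of the curve-fitting step described above: fitting a polynomial to sampled blade-edge points, analogous to the paper's Matlab polynomial fit of the mandible edge extracted from a 3D point cloud. The sample points and the cubic degree are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 60)                        # mm along the blade edge
edge = 0.02 * x**3 - 0.3 * x**2 + 1.1 * x + rng.normal(0, 0.05, x.size)

# Cubic polynomial fit of the edge curve (degree chosen illustratively)
coef = np.polyfit(x, edge, deg=3)
print("fitted coefficients (x^3..x^0):", np.round(coef, 4))

# Evaluate the fitted edge, e.g. for export to a downstream DEM simulation
edge_fit = np.polyval(coef, x)
print("max fit residual:", np.abs(edge_fit - edge).max())
```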

14 pages, 543 KiB  
Article
Chebyshev Interpolation Using Almost Equally Spaced Points and Applications in Emission Tomography
by Vangelis Marinakis, Athanassios S. Fokas, George A. Kastis and Nicholas E. Protonotarios
Mathematics 2023, 11(23), 4757; https://doi.org/10.3390/math11234757 - 24 Nov 2023
Cited by 1 | Viewed by 3974
Abstract
Since their introduction, Chebyshev polynomials of the first kind have been extensively investigated, especially in the context of approximation and interpolation. Although standard interpolation methods usually employ equally spaced points, this is not the case in Chebyshev interpolation. Instead of equally spaced points along a line, Chebyshev interpolation involves the roots of Chebyshev polynomials, known as Chebyshev nodes, corresponding to equally spaced points along the unit semicircle. By reviewing prior research on the applications of Chebyshev interpolation, it becomes apparent that this interpolation is rather impractical for medical imaging. Especially in clinical positron emission tomography (PET) and in single-photon emission computerized tomography (SPECT), the so-called sinogram is always calculated at equally spaced points, since the detectors are almost always uniformly distributed. We have been able to overcome this difficulty as follows. Suppose that the function to be interpolated has compact support and is known at q equally spaced points in [−1, 1]. We extend the domain to [−a, a], a > 1, and select a sufficiently large value of a such that exactly q Chebyshev nodes are included in [−1, 1], which are almost equally spaced. This construction provides a generalization of the concept of standard Chebyshev interpolation to almost equally spaced points. Our preliminary results indicate that our modification of the Chebyshev method provides comparable, or, in several cases including Runge's phenomenon, superior interpolation over the standard Chebyshev interpolation. In terms of the L∞ norm of the interpolation error, a decrease of up to 75% was observed. Furthermore, our approach opens the way for using Chebyshev polynomials in the solution of the inverse problems arising in PET and SPECT image reconstruction. Full article
(This article belongs to the Special Issue Advances in Inverse Problems and Imaging)
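
A hedged sketch of the construction described above: Chebyshev nodes are generated on an extended interval [−a, a], and a is chosen so that exactly q of them fall inside [−1, 1], where they are almost equally spaced and can stand in for the q uniform sample points of a sinogram. The total node count N and the search grid for a are illustrative choices.

```python
import numpy as np

def cheb_nodes(N, a):
    # Roots of the degree-N Chebyshev polynomial, scaled to [-a, a]
    k = np.arange(N)
    return a * np.cos((2 * k + 1) * np.pi / (2 * N))

def find_a(q, N, a_grid=np.linspace(1.0, 5.0, 4001)):
    # Smallest a on the grid with exactly q nodes inside [-1, 1]
    for a in a_grid:
        if (np.abs(cheb_nodes(N, a)) <= 1.0).sum() == q:
            return a
    raise ValueError("no suitable a in grid; increase N or widen the grid")

q, N = 16, 40
a = find_a(q, N)
inner = np.sort(cheb_nodes(N, a)[np.abs(cheb_nodes(N, a)) <= 1.0])
print(f"a = {a:.3f}, nodes in [-1,1]: {inner.size}")
print("spacings:", np.round(np.diff(inner), 3))   # nearly uniform
```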

32 pages, 22341 KiB  
Article
Nonrigid Point Cloud Registration Using Piecewise Tricubic Polynomials as Transformation Model
by Philipp Glira, Christoph Weidinger, Johannes Otepka-Schremmer, Camillo Ressl, Norbert Pfeifer and Michaela Haberler-Weber
Remote Sens. 2023, 15(22), 5348; https://doi.org/10.3390/rs15225348 - 13 Nov 2023
Cited by 2 | Viewed by 3533
Abstract
Nonrigid registration presents a significant challenge in the domain of point cloud processing. The general objective is to model complex nonrigid deformations between two or more overlapping point clouds. Applications are diverse and span multiple research fields, including registration of topographic data, scene flow estimation, and dynamic shape reconstruction. To provide context, the first part of the paper gives a general introduction to the topic of point cloud registration, including a categorization of existing methods. Then, a general mathematical formulation for the point cloud registration problem is introduced, which is then extended to also cover nonrigid registration methods. A detailed discussion and categorization of existing approaches to nonrigid registration follows. In the second part of the paper, we propose a new method that uses piecewise tricubic polynomials for modeling nonrigid deformations. Our method offers several advantages over existing methods. These advantages include easy control of flexibility through a small number of intuitive tuning parameters, a closed-form optimization solution, and an efficient transformation of huge point clouds. We demonstrate our method through multiple examples that cover a broad range of applications, with a focus on remote sensing applications—namely, the registration of airborne laser scanning (ALS), mobile laser scanning (MLS), and terrestrial laser scanning (TLS) point clouds. The implementation of our algorithms is open source and can be found in our public repository. Full article
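
A hedged sketch of the transformation-model idea: a nonrigid deformation is represented by tricubic polynomial displacement fields in x, y, z evaluated at each point, here via numpy's polyval3d on random coefficient grids. The paper uses piecewise tricubic polynomials with continuity across cells; this sketch covers a single cell only.

```python
import numpy as np
from numpy.polynomial.polynomial import polyval3d

rng = np.random.default_rng(9)
pts = rng.uniform(0, 1, size=(1000, 3))       # point cloud inside one cell

# One (4,4,4) tricubic coefficient grid per displacement component (dx,dy,dz)
coeffs = [0.01 * rng.normal(size=(4, 4, 4)) for _ in range(3)]

x, y, z = pts.T
disp = np.stack([polyval3d(x, y, z, c) for c in coeffs], axis=1)
transformed = pts + disp                      # nonrigidly deformed cloud
print("mean displacement magnitude:",
      float(np.linalg.norm(disp, axis=1).mean()))
```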

25 pages, 12707 KiB  
Article
Unsupervised Nonlinear Hyperspectral Unmixing with Reduced Spectral Variability via Superpixel-Based Fisher Transformation
by Zhangqiang Yin and Bin Yang
Remote Sens. 2023, 15(20), 5028; https://doi.org/10.3390/rs15205028 - 19 Oct 2023
Cited by 1 | Viewed by 1843
Abstract
In hyperspectral unmixing, dealing with nonlinear mixing effects and spectral variability (SV) is a significant challenge. Traditional linear unmixing can be seriously deteriorated by the coupled residuals of nonlinearity and SV in remote sensing scenarios. For simplicity of calculation, current unmixing studies usually treat nonlinearity and SV separately. As a result, errors individually caused by the nonlinearity or SV still persist, potentially leading to overfitting and decreased accuracy of the estimated endmembers and abundances. In this paper, a novel unsupervised nonlinear unmixing method accounting for SV is proposed. First, an improved Fisher transformation scheme is constructed by combining an abundance-driven dynamic classification strategy with superpixel segmentation. It can enlarge the differences between different types of pixels and reduce the differences between pixels corresponding to the same class, thereby reducing the influence of SV. In addition, spectral similarity can be well maintained in local homogeneous regions. Second, the polynomial postnonlinear model is employed to represent observed pixels and explain nonlinear components. Regularized by a Fisher transformation operator and the abundances' spatial smoothness, data reconstruction errors in the original spectral space and the transformed space are weighted to derive the unmixing problem. Finally, this problem is solved by a dimensional division-based particle swarm optimization algorithm to produce accurate unmixing results. Extensive experiments on synthetic and real hyperspectral remote sensing data demonstrate the superiority of the proposed method in comparison with state-of-the-art approaches. Full article
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing Image Processing)
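
A minimal sketch of the polynomial postnonlinear mixing model (PPNM) named above, used to represent an observed pixel: a linear mixture M a plus an elementwise quadratic term scaled by b. The endmember spectra, abundances, and b are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
L, P = 50, 3                        # bands, endmembers
M = rng.uniform(0.1, 0.9, (L, P))   # endmember spectra (columns)
a = np.array([0.5, 0.3, 0.2])       # abundances: nonnegative, sum to one
b = 0.15                            # nonlinearity coefficient

lin = M @ a
pixel = lin + b * lin**2            # PPNM: y = Ma + b (Ma) * (Ma)
noisy = pixel + 0.001 * rng.normal(size=L)
print("nonlinear contribution (mean):", float(np.mean(b * lin**2)))
```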

19 pages, 12411 KiB  
Article
Modelling Spectral Unmixing of Geological Mixtures: An Experimental Study Using Rock Samples
by Maitreya Mohan Sahoo, R. Kalimuthu, Arun PV, Alok Porwal and Shibu K. Mathew
Remote Sens. 2023, 15(13), 3300; https://doi.org/10.3390/rs15133300 - 27 Jun 2023
Cited by 3 | Viewed by 3454
Abstract
Spectral unmixing of geological mixtures, such as rocks, is a challenging inversion problem because of nonlinear interactions of light with the intimately mixed minerals at a microscopic scale. The fine-scale mixing of minerals in rocks limits the sensor’s ability to identify pure mineral endmembers and spectrally resolve these constituents within a given spatial resolution. In this study, we attempt to model the spectral unmixing of two rocks, namely, serpentinite and granite, by acquiring their hyperspectral images in a controlled environment, having uniform illumination, using a laboratory-based imaging spectroradiometer. The endmember spectra of each rock were identified by comparing a limited set of pure hyperspectral image pixels with the constituent minerals of the rocks based on their diagnostic spectral features. A series of spectral unmixing paradigms for explaining geological mixtures, including those ranging from simple physics-based light interaction models (linear, bilinear, and polynomial models) to classification-based models (support vector machines (SVMs) and half Siamese network (HSN)), were tested to estimate the fractional abundances of the endmembers at each pixel position of the image. The analysis of the results of the spectral unmixing algorithms using the ground truth abundance maps and actual mineralogical composition of the rock samples (estimated using X-ray diffraction (XRD) analysis) indicate a better performance of the pure pixel-guided HSN model in comparison to the linear, bilinear, polynomial, and SVM-based unmixing approaches. The HSN-based approach yielded reduced errors of abundance estimation, image reconstruction, and mineralogical composition for serpentinite and granite. With its ability to train using limited pure pixels, the half-Siamese network model has a scope for spectrally unmixing rock samples of varying mineralogical composition and grain sizes. Hence, HSN-based approaches effectively address the modelling of nonlinear mixing in geological mixtures. Full article
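
A hedged sketch of the simplest unmixing paradigm tested above: a linear mixing model inverted per pixel with nonnegative least squares, followed by sum-to-one normalization. The endmember spectra are synthetic stand-ins for the image-derived mineral endmembers.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
bands, n_end = 60, 3
E = rng.uniform(0.05, 0.95, (bands, n_end))   # endmember matrix (columns)

a_true = np.array([0.6, 0.25, 0.15])
pixel = E @ a_true + 0.002 * rng.normal(size=bands)

a_hat, _ = nnls(E, pixel)                     # nonnegativity constraint
a_hat /= a_hat.sum()                          # enforce sum-to-one post hoc
print("estimated abundances:", np.round(a_hat, 3))
```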

19 pages, 1143 KiB  
Article
Algorithm for Enhancing Event Reconstruction Efficiency by Addressing False Track Filtering Issues in the SPD NICA Experiment
by Gulshat Amirkhanova, Madina Mansurova, Gennadii Ososkov, Nasurlla Burtebayev, Adai Shomanov and Murat Kunelbayev
Algorithms 2023, 16(7), 312; https://doi.org/10.3390/a16070312 - 22 Jun 2023
Viewed by 1608
Abstract
This paper introduces methods for parallelizing the algorithm to enhance the efficiency of event recovery in Spin Physics Detector (SPD) experiments at the Nuclotron-based Ion Collider Facility (NICA). The problem of eliminating false tracks during the particle trajectory detection process remains a crucial challenge in overcoming performance bottlenecks in processing collider data generated in high volumes and at a fast pace. In this paper, we propose and show fast parallel false track elimination methods based on the introduced criterion of a clustering-based thresholding approach with a chi-squared quality-of-fit metric. The proposed strategy achieves a good trade-off between the effectiveness of track reconstruction and the pace of execution on today’s advanced multicore computers. To facilitate this, a quality benchmark for reconstruction is established, using the root mean square (rms) error of spiral and polynomial fitting for the datasets identified as the subsequent track candidate by the neural network. Choosing the right benchmark enables us to maintain the recall and precision indicators of the neural network track recognition performance at a level that is satisfactory to physicists, even though these metrics will inevitably decline as the data noise increases. Moreover, it has been possible to improve the processing speed of the complete program pipeline by 6 times through parallelization of the algorithm, achieving a rate of 2000 events per second, even when handling extremely noisy input data. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
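
A minimal sketch of the quality-of-fit criterion described above: fit a low-order polynomial to a track candidate's hits and reject candidates whose rms residual exceeds a threshold. The hit data, polynomial degree, and threshold are illustrative, not SPD values.

```python
import numpy as np

def rms_residual(xs, ys, deg=2):
    # rms of the residuals after a degree-`deg` polynomial fit
    coef = np.polyfit(xs, ys, deg)
    return float(np.sqrt(np.mean((np.polyval(coef, xs) - ys) ** 2)))

rng = np.random.default_rng(6)
xs = np.linspace(0, 1, 12)
true_track = 0.4 * xs**2 + 0.1 * xs + rng.normal(0, 0.01, xs.size)  # smooth hits
ghost = rng.uniform(0, 1, xs.size)                                  # random hits

THRESH = 0.05
for name, ys in [("candidate A", true_track), ("candidate B", ghost)]:
    r = rms_residual(xs, ys)
    print(f"{name}: rms={r:.4f} ->", "keep" if r < THRESH else "reject")
```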

13 pages, 3524 KiB  
Article
A Two-Step Model-Based Reconstruction and Imaging Method for Baseline-Free Lamb Wave Inspection
by Hang Fan, Fei Gao, Wenhao Li and Kun Zhang
Symmetry 2023, 15(6), 1171; https://doi.org/10.3390/sym15061171 - 30 May 2023
Cited by 2 | Viewed by 1707
Abstract
Traditional Lamb wave inspection and imaging methods heavily rely on prior knowledge of dispersion curves and baseline recordings, which may not be feasible in the majority of real cases due to production uncertainties and environmental variations. To solve this problem, a two-step Lamb wave strategy utilizing adaptive multiple signal classification (MUSIC) and sparse dispersion reconstruction is proposed. The multimodal Lamb waves are initially reconstructed in the f-k domain using random measurements, allowing for the identification and characterization of multimodal Lamb waves. Then, using local polynomial expansion and derivation, the phase and group velocities for each Lamb wave mode can be computed. Thus, the steering vectors of all potential scattering Lamb waves for each grid point in the scanning area can be established, thereby allowing for the formulation of the MUSIC algorithm. To increase the precision and adaptability of the MUSIC method, the local wave components resulting from potential scatterers are extracted with an adaptive window, which is governed by the group velocities and distances of Lamb wave propagation. As a result, the reconstructed dispersion relations and windowed wave components can be used to highlight the scattering features. For the method investigation, both a simulation and an experiment are carried out, and both the dispersion curves and damage locations can be detected. The results demonstrate that damage localization is possible without theoretical dispersion data and baseline recordings while exhibiting considerable accuracy and resolution. Full article
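
A hedged sketch of the MUSIC core used in the second step: eigendecompose the measurement covariance, keep the noise subspace, and scan steering vectors for peaks in the pseudospectrum. This is generic narrowband MUSIC on a uniform sensor line, not the paper's dispersion-aware Lamb-wave formulation; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
m, snapshots = 8, 200
d = 0.5                                   # sensor spacing in wavelengths
theta_true = np.deg2rad(25.0)

def steer(theta):
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

# Simulate one source plus noise and form the sample covariance
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
noise = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
X = np.outer(steer(theta_true), s) + noise
R = X @ X.conj().T / snapshots

w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :-1]                            # noise subspace (one source assumed)

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En
scan = np.deg2rad(np.linspace(-90, 90, 721))
p = [1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in scan]
print("estimated DOA (deg):", np.rad2deg(scan[int(np.argmax(p))]))
```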

20 pages, 58299 KiB  
Article
A Rehabilitation of Pixel-Based Spectral Reconstruction from RGB Images
by Yi-Tun Lin and Graham D. Finlayson
Sensors 2023, 23(8), 4155; https://doi.org/10.3390/s23084155 - 21 Apr 2023
Cited by 7 | Viewed by 4114
Abstract
Recently, many deep neural networks (DNN) have been proposed to solve the spectral reconstruction (SR) problem: recovering spectra from RGB measurements. Most DNNs seek to learn the relationship between an RGB viewed in a given spatial context and its corresponding spectra. Significantly, it is argued that the same RGB can map to different spectra depending on the context with respect to which it is seen and, more generally, that accounting for spatial context leads to improved SR. However, as it stands, DNN performance is only slightly better than the much simpler pixel-based methods where spatial context is not used. In this paper, we present a new pixel-based algorithm called A++ (an extension of the A+ sparse coding algorithm). In A+, RGBs are clustered, and within each cluster, a designated linear SR map is trained to recover spectra. In A++, we cluster the spectra instead in an attempt to ensure neighboring spectra (i.e., spectra in the same cluster) are recovered by the same SR map. A polynomial regression framework is developed to estimate the spectral neighborhoods given only the RGB values in testing, which in turn determines which mapping should be used to map each testing RGB to its reconstructed spectrum. Compared to the leading DNNs, not only does A++ deliver the best results, it is parameterized by orders of magnitude fewer parameters and has a significantly faster implementation. Moreover, in contradistinction to some DNN methods, A++ uses pixel-based processing, which is robust to image manipulations that alter the spatial context (e.g., blurring and rotations). Our demonstration on the scene relighting application also shows that, while SR methods, in general, provide more accurate relighting results compared to the traditional diagonal matrix correction, A++ provides superior color accuracy and robustness compared to the top DNN methods. Full article
(This article belongs to the Special Issue Hyperspectral Imaging and Sensing)
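
A minimal sketch of the cluster-then-linear-map idea behind A+/A++: training samples are grouped, one linear RGB-to-spectrum map is fit per group, and a test RGB is mapped by its group's matrix. The clustering here is a crude brightness split standing in for proper clustering; A++ instead clusters the spectra and uses a polynomial-regression classifier to pick the cluster at test time.

```python
import numpy as np

rng = np.random.default_rng(8)
n, bands = 600, 31
spectra = rng.uniform(0, 1, (n, bands))
S = rng.uniform(0, 1, (bands, 3))        # toy sensor sensitivities
rgb = spectra @ S                        # simulated RGB measurements

# Crude 2-cluster split on RGB brightness (stand-in for real clustering)
thresh = np.median(rgb.sum(axis=1))
labels = (rgb.sum(axis=1) > thresh).astype(int)

maps = []
for c in (0, 1):
    Rc, Qc = rgb[labels == c], spectra[labels == c]
    W, *_ = np.linalg.lstsq(Rc, Qc, rcond=None)   # per-cluster 3 x 31 map
    maps.append(W)

# Reconstruct a test pixel's spectrum with its cluster's linear map
test_rgb = rgb[0]
cluster = int(test_rgb.sum() > thresh)
recovered = test_rgb @ maps[cluster]
print("per-band abs error (mean):", float(np.mean(np.abs(recovered - spectra[0]))))
```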
