Search Results (33)

Search Parameters:
Keywords = RocksDB

20 pages, 10015 KB  
Article
Simulation and Optimization of Highly Efficient Sound-Absorbing and -Insulating Materials
by Xiao Liu, Chengyuan Wu, Haopeng Wang, Wangqiang Xiao and Zhiqin Cai
Processes 2025, 13(9), 2947; https://doi.org/10.3390/pr13092947 - 16 Sep 2025
Viewed by 543
Abstract
Although they are crucial transport equipment in coal mining enterprises, tubular belt conveyors cause serious noise pollution. In this paper, the sound absorption and insulation performance of three kinds of highly efficient sound-absorbing and -insulating materials was studied using the finite element multiphysics software COMSOL together with acoustic tests, and the structure of these materials was optimized. The results show that the acoustic superstructure plate has an excellent sound insulation effect of 36 dB and achieves a sound absorption coefficient of 0.95 at 210 Hz in the acoustic simulation test. The simulated weighted sound insulation of the acoustic metamaterial plate is 37 dB, rising to 42 dB when the plate is filled with particle material, an improvement of 4–7 dB; the comprehensive absorption coefficient for high-frequency noise above 800 Hz reaches 0.94, and the filled plate also absorbs and blocks low-frequency noise effectively. Rock wool acoustic panels achieve good absorption around 500 Hz, with an absorption coefficient of 0.8 or more, but their low-frequency performance remains insufficient, so they cannot address the full frequency band. The acoustic metamaterial plate therefore has the best overall sound absorption and insulation effect. Finally, the honeycomb-based acoustic metamaterial was optimized: an inclined-plate angle of 60° and an inclined-plate length of 693 mm yield the optimal sound absorption and insulation structure. This provides a solution to the noise pollution caused by tubular belt conveyors.

21 pages, 2482 KB  
Article
SwiftKV: A Metadata Indexing Scheme Integrating LSM-Tree and Learned Index for Distributed KV Stores
by Zhenfei Wang, Jianxun Feng, Longxiang Dun, Ziliang Bao and Chunfeng Du
Future Internet 2025, 17(9), 398; https://doi.org/10.3390/fi17090398 - 30 Aug 2025
Viewed by 594
Abstract
Optimizing metadata indexing remains critical for enhancing distributed file system performance. The traditional Log-Structured Merge-Tree (LSM-Tree) architecture, while effective for write-intensive operations, exhibits significant limitations when handling massive metadata workloads, particularly manifesting as suboptimal read performance and substantial indexing overhead. Although existing learned indexes perform well on read-only workloads, they struggle to support modifications such as inserts and updates effectively. This paper proposes SwiftKV, a novel metadata indexing scheme that combines LSM-Trees and learned indexes to address these issues. Firstly, SwiftKV employs a dynamic partition strategy to narrow the metadata search range. Secondly, a two-level learned index block, consisting of Greedy Piecewise Linear Regression (Greedy-PLR) and Linear Regression (LR) models, replaces the typical Sorted String Table (SSTable) index block for faster location prediction than binary search. Thirdly, SwiftKV incorporates a load-aware construction mechanism and parallel optimization to minimize training overhead and enhance efficiency. This work bridges the gap between LSM-Trees’ write efficiency and learned indexes’ query performance, offering a scalable and high-performance solution for modern distributed file systems. A prototype of SwiftKV was implemented on RocksDB. The experimental results show that it narrows the memory usage of index blocks by 30.06% and reduces read latency by 1.19×–1.60× without affecting write performance. Furthermore, SwiftKV’s two-level learned index achieves a 15.13% reduction in query latency and a 44.03% reduction in memory overhead compared to a single-level model. For all YCSB workloads, SwiftKV outperforms other schemes.
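
The learned-index idea behind SwiftKV (predict a key's position with a fitted model, then correct within a known error bound) can be sketched as follows. This is an illustrative single-segment model, not SwiftKV's actual Greedy-PLR implementation; the class name and structure are assumptions for the example.

```python
import bisect

class LinearIndex:
    """Toy learned index: fit position = a*key + b over the sorted keys,
    then correct the prediction with a bounded local search."""
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Least-squares fit of rank against key value.
        mean_k = sum(self.keys) / n
        mean_r = (n - 1) / 2
        cov = sum((k - mean_k) * (r - mean_r) for r, k in enumerate(self.keys))
        var = sum((k - mean_k) ** 2 for k in self.keys)
        self.a = cov / var if var else 0.0
        self.b = mean_r - self.a * mean_k
        # The maximum prediction error defines the correction window.
        self.err = max(abs(self.predict(k) - r) for r, k in enumerate(self.keys))

    def predict(self, key):
        return round(self.a * key + self.b)

    def lookup(self, key):
        n = len(self.keys)
        p = self.predict(key)
        lo = max(0, p - self.err)
        hi = min(n, p + self.err + 1)
        # Binary search only inside the small [lo, hi) window.
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < n and self.keys[i] == key else -1
```

For nearly linear key distributions the window stays tiny, so a lookup touches far fewer entries than a full binary search over the block.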

23 pages, 6440 KB  
Article
A Gravity Data Denoising Method Based on Multi-Scale Attention Mechanism and Physical Constraints Using U-Net
by Bing Liu, Houpu Li, Shaofeng Bian, Chaoliang Zhang, Bing Ji and Yujie Zhang
Appl. Sci. 2025, 15(14), 7956; https://doi.org/10.3390/app15147956 - 17 Jul 2025
Viewed by 593
Abstract
Gravity and gravity gradient data serve as fundamental inputs for geophysical resource exploration and geological structure analysis. However, traditional denoising methods—including wavelet transforms, moving averages, and low-pass filtering—exhibit signal loss and limited adaptability under complex, non-stationary noise conditions. To address these challenges, this study proposes an improved U-Net deep learning framework that integrates multi-scale feature extraction and attention mechanisms. Furthermore, a Laplace consistency constraint is introduced into the loss function to enhance denoising performance and physical interpretability. Notably, the datasets used in this study are generated by the authors, involving simulations of subsurface prism distributions with realistic density perturbations (±20% of typical rock densities) and the addition of controlled Gaussian noise (5%, 10%, 15%, and 30%) to simulate field-like conditions, ensuring the diversity and physical relevance of training samples. Experimental validation on these synthetic datasets and real field datasets demonstrates the superiority of the proposed method over conventional techniques. For noise levels of 5%, 10%, 15%, and 30% in test sets, the improved U-Net achieves Peak Signal-to-Noise Ratios (PSNR) of 59.13 dB, 52.03 dB, 48.62 dB, and 48.81 dB, respectively, outperforming wavelet transforms, moving averages, and low-pass filtering by 10–30 dB. In multi-component gravity gradient denoising, our method excels in detail preservation and noise suppression, improving Structural Similarity Index (SSIM) by 15–25%. Field data tests further confirm enhanced identification of key geological anomalies and overall data quality improvement. In summary, the improved U-Net not only delivers quantitative advancements in gravity data denoising but also provides a novel approach for high-precision geophysical data preprocessing.
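
The PSNR figures quoted above follow the standard definition, PSNR = 20·log10(peak) − 10·log10(MSE). A minimal computation, assuming the peak value is taken from the reference signal unless given explicitly:

```python
import math

def psnr(reference, estimate, peak=None):
    """Peak Signal-to-Noise Ratio in dB (assumes a nonzero error):
    PSNR = 20*log10(peak) - 10*log10(MSE)."""
    n = len(reference)
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / n
    if peak is None:
        peak = max(abs(v) for v in reference)
    return 20.0 * math.log10(peak) - 10.0 * math.log10(mse)
```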
(This article belongs to the Special Issue Applications of Machine Learning in Earth Sciences—2nd Edition)

22 pages, 1159 KB  
Article
Compaction-Aware Flash Memory Remapping for Key–Value Stores
by Jialin Wang, Zhen Yang, Yi Fan and Yajuan Du
Micromachines 2025, 16(6), 699; https://doi.org/10.3390/mi16060699 - 11 Jun 2025
Viewed by 1640
Abstract
With the rapid development of big data and artificial intelligence, the demand for memory has exploded. As a key data structure in modern databases and distributed storage systems, the Log-Structured Merge Tree (LSM-tree) has been widely employed in key–value (KV) stores (such as LevelDB and RocksDB) due to its efficient write performance. In LSM-tree-based KV stores, typically deployed on systems with DRAM–SSD storage, KV items are first buffered in a MemTable in main memory. When the buffer exceeds a size threshold, the MemTable is flushed to the SSD and reorganized into an SSTable, which is then passed down level by level through compaction. However, compaction degrades write performance and SSD endurance due to significant write amplification. To address this issue, recent proposals have mostly focused on redesigning the structure of LSM-trees. We discover the prevalence of unchanged data blocks (UDBs) in the LSM-tree compaction process, i.e., blocks that are written back to the SSD exactly as they were read into memory, which induces extra write amplification and degrades I/O performance. In this paper, we propose a KV store design in SSD, called RemapCom, to exploit remapping on these UDBs. RemapCom first identifies UDBs with a lightweight state machine integrated into the compaction merge process. To increase the ratio of UDBs, RemapCom also designs a UDB retention method that further develops the benefit of remapping. Moreover, we implement a prototype of RemapCom on LevelDB by providing two primitives for the remapping. The evaluation results demonstrate that, compared to the state of the art, RemapCom can reduce write amplification by up to 53% and improve write throughput by up to 30%.
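
The core observation (a data block whose key range is untouched by the other compaction input can be carried over instead of rewritten) can be sketched as below. This is a simplified illustration over plain key lists, not RemapCom's state machine; the function and labels are hypothetical, and version resolution for duplicate keys is omitted.

```python
import bisect
import heapq

def merge_with_passthrough(lower_blocks, upper_keys):
    """Merge sorted `lower_blocks` (each a sorted key list) with the sorted
    key list `upper_keys`. Blocks whose key range contains no upper key are
    passed through unchanged (remap candidates); the rest are merged."""
    out, passthrough = [], 0
    pending = []                       # lower-run keys that must be merged
    for blk in lower_blocks:
        lo, hi = blk[0], blk[-1]
        # Any upper key inside [lo, hi] forces a rewrite of this block.
        i = bisect.bisect_left(upper_keys, lo)
        if i < len(upper_keys) and upper_keys[i] <= hi:
            pending.extend(blk)
        else:
            out.append(("remap", blk))  # unchanged data block
            passthrough += 1
    merged = list(heapq.merge(pending, upper_keys))
    if merged:
        out.append(("rewrite", merged))
    return out, passthrough
```

Only identification is modeled; a real compaction would also keep the global output order and update the flash mapping for remapped blocks.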

16 pages, 4292 KB  
Article
PreEdgeDB: A Lightweight Platform for Energy Prediction on Low-Power Edge Devices
by Woojin Cho, Dongju Kim, Byunghyun Lim and Jaehoi Gu
Electronics 2025, 14(10), 1912; https://doi.org/10.3390/electronics14101912 - 8 May 2025
Viewed by 678
Abstract
Rising energy costs due to environmental degradation, climate change, global conflicts, and pandemics have prompted the need for efficient energy management. Edge devices are increasingly recognized for improving energy efficiency; however, their role as primary computing units remains underexplored. This study presents PreEdgeDB, a lightweight platform deployed on low-power edge devices to optimize energy usage in industrial complexes, which consume approximately 57.29% of South Korea’s total energy. The platform integrates real-time data preprocessing, time-series storage, and prediction capabilities, enabling independent operation at individual factories. A low-resource preprocessing module was developed to handle missing and anomalous data. For storage, RocksDB—a lightweight, high-performance key–value database—was optimized for edge environments. For prediction, Light Gradient Boosting Machine (LightGBM) was adopted due to its efficiency and high accuracy on limited-resource systems. The resulting model achieved a coefficient of variation of the root mean squared error (CV(RMSE)) of 14.36% and a prediction score of 0.8240. The total processing time from data collection to prediction was under 300 milliseconds. With memory usage below 150 MB and CPU utilization around 60%, PreEdgeDB enables fully autonomous energy prediction and analysis on edge devices, without relying on centralized servers.
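
The CV(RMSE) metric reported above is the RMSE normalized by the mean of the measured values, expressed as a percentage. A minimal sketch:

```python
import math

def cv_rmse(actual, predicted):
    """Coefficient of variation of the RMSE, in percent:
    CV(RMSE) = RMSE / mean(actual) * 100."""
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return 100.0 * rmse / (sum(actual) / n)
```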
(This article belongs to the Section Computer Science & Engineering)

22 pages, 1541 KB  
Article
A Framework for Integrating Log-Structured Merge-Trees and Key–Value Separation in Tiered Storage
by Charles Jaranilla, Guangxun Zhao, Gunhee Choi, Sohyun Park and Jongmoo Choi
Electronics 2025, 14(3), 564; https://doi.org/10.3390/electronics14030564 - 30 Jan 2025
Viewed by 1719
Abstract
This paper presents an approach that integrates tiered storage into the Log-Structured Merge (LSM)-tree to balance Key–Value Store (KVS) performance and storage financial cost trade-offs. The implementation focuses on applying tiered storage to LSM-tree-based KVS architectures, using both vertical and horizontal storage alignment strategies or a combination of both. Additionally, these configurations leverage key–value (KV) separation to further improve performance. Our experiments reveal that this approach reduces storage financial costs while offering trade-offs in write and read performance. For write-intensive workloads, our approach achieves competitive performance compared to a fast NVMe Solid State Drive (SSD)-only approach while storing 96% of data on more affordable SATA SSDs. Additionally, it exhibits lookup performance comparable to BlobDB, and improves range query performance by 1.8x over RocksDB on NVMe SSDs. Overall, the approach results in a 49.5% reduction in storage financial cost compared to RocksDB and BlobDB on NVMe SSDs. The integration of selective KV separation further advances these improvements, setting the stage for future research into offloading remote data in LSM-tree tiered storage systems.
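
Key–value separation, which the framework leverages, keeps large values in an append-only blob log (which could sit on the cheaper storage tier) while the index holds only small entries. A toy sketch under assumed names; a dict stands in for the LSM-tree index:

```python
class SeparatedStore:
    """Toy KV separation: values of at least `threshold` bytes are appended
    to a blob log and indexed by (offset, length); smaller values are kept
    inline. The dict here stands in for a real LSM-tree index."""
    def __init__(self, threshold=64):
        self.threshold = threshold
        self.log = bytearray()   # stand-in for a blob file on the cheap tier
        self.index = {}          # stand-in for the LSM-tree

    def put(self, key, value):
        if len(value) >= self.threshold:
            off = len(self.log)
            self.log += value
            self.index[key] = ("blob", off, len(value))
        else:
            self.index[key] = ("inline", value)

    def get(self, key):
        entry = self.index.get(key)
        if entry is None:
            return None
        if entry[0] == "inline":
            return entry[1]
        _, off, length = entry
        return bytes(self.log[off:off + length])
```

Because the index entries stay small, compaction rewrites far fewer bytes, which is what makes it viable to keep the bulky blob log on slower, cheaper media.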
(This article belongs to the Special Issue Future Trends of Artificial Intelligence (AI) and Big Data)

24 pages, 11340 KB  
Article
Experimental Investigation of Embedment Depth Effects on the Rocking Behavior of Foundations
by Mohamadali Moradi, Ali Khezri, Seyed Majdeddin Mir Mohammad Hosseini, Hongbae Park and Daeyong Lee
Geosciences 2024, 14(12), 351; https://doi.org/10.3390/geosciences14120351 - 18 Dec 2024
Viewed by 1410
Abstract
Shallow foundations supporting high-rise structures are often subjected to extreme lateral loading from wind and seismic activities. Nonlinear soil–foundation system behaviors, such as foundation uplift or bearing capacity mobilization (i.e., rocking behavior), can act as energy dissipation mechanisms, potentially reducing structural demands. However, such merits may be achieved at the expense of large residual deformations and settlements, which are influenced by various factors. One key factor which is highly influential on soil deformation mechanisms during rocking is the foundation embedment depth. This aspect of rocking foundations is investigated in this study under varying subgrade densities and initial vertical factors of safety (FSv), using the PIV technique and appropriate instrumentation. A series of reduced-scale slow cyclic tests were performed using a single-degree-of-freedom (SDOF) structure model. This study first examines the deformation mechanisms of strip foundations with depth-to-width (D/B) ratios of 0, 0.25, and 1, and then explores the effects of embedment depth on the performance of square foundations, evaluating moment capacity, settlement, recentering capability, rotational stiffness, and damping characteristics. The results demonstrate that the predominant deformation mechanism of the soil mass transitions from a wedge mechanism in surface foundations to a scoop mechanism in embedded foundations. Increasing the embedment depth enhances recentering capabilities, reduces damping, decreases settlement, increases rotational stiffness, and improves the moment capacity of the foundations. This comprehensive exploration of foundation performance and soil deformation mechanisms, considering varying embedment depths, FSv values, and soil relative densities, offers insights for optimizing the performance of rocking foundations under lateral loading conditions.
(This article belongs to the Special Issue Geotechnical Earthquake Engineering and Geohazard Prevention)

18 pages, 13407 KB  
Article
The Coupled Application of the DB-IWHR Model and the MIKE 21 Model for the Assessment of Dam Failure Risk
by Junling Ma, Feng Zhou, Chunfang Yue, Qiji Sun and Xuehu Wang
Water 2024, 16(20), 2919; https://doi.org/10.3390/w16202919 - 14 Oct 2024
Cited by 2 | Viewed by 1751
Abstract
The phenomenon of global climate change has led to an increase in the frequency of extreme precipitation events, an acceleration in the melting of glaciers and snow cover, and an elevation of the risk of flooding. In this study, the DB-IWHR model was employed in conjunction with the MIKE 21 hydrodynamic model to develop a simulation system for the dam failure flow process of an earth and rock dam. The study concentrated on the KET reservoir, and 12 dam failure scenarios were devised based on varying design flood criteria. The impact of reservoir failures on flood-risk areas was subjected to detailed analysis, with consideration given to a range of potential failure scenarios and flood sizes. It was determined that under identical inflow frequency conditions, the higher the water level, the more rapid the breakout process and the corresponding increase in flood peak discharge. Conversely, for a given frequency of incoming water, an elevated water level results in a transient breach process, accompanied by a reduction in flood peak flow. Moreover, for a given water level, an increase in water frequency results in a reduction in breaching time, an extension of flood duration, and an increase in flood peak flow. The observed trend of flood spreading is generally north-south, and this process is highly compatible with the topographic and geomorphological features, demonstrating good adaptability.

15 pages, 4718 KB  
Article
Intelligent Prediction of Ore Block Shapes Based on Novel View Synthesis Technology
by Lin Bi, Dewei Bai and Boxun Chen
Appl. Sci. 2024, 14(18), 8273; https://doi.org/10.3390/app14188273 - 13 Sep 2024
Viewed by 1143
Abstract
To address the problem of incomplete perception of limited viewpoints of ore blocks in future remote and intelligent shoveling-dominated mining scenarios, a method of using new view generation technology to predict ore blocks with limited view based on a latent diffusion model is proposed. Initially, an ore block image-pose dataset is created. Then, based on prior knowledge, the latent diffusion model undergoes transfer learning to develop an intelligent ore block shape prediction model (IOBSPM) for rock blocks. During training, structural similarity loss is innovatively introduced to constrain the prediction results and solve the issue of discontinuity in generated images. Finally, neural surface reconstruction is performed using the generated multi-view images of rock blocks to obtain a 3D model. Experimental results show that the prediction model, trained on the rock block dataset, produces better morphological and detail generation compared to the original model, with single-view generation time within 5 s. The average PSNR, SSIM, and LPIPS values reach 23.02 dB, 0.754, and 0.268, respectively. The generated views also demonstrate good performance in 3D reconstruction, highlighting significant implications for future research on remote and autonomous shoveling.

20 pages, 1232 KB  
Review
Requirements and Trade-Offs of Compression Techniques in Key–Value Stores: A Survey
by Charles Jaranilla and Jongmoo Choi
Electronics 2023, 12(20), 4280; https://doi.org/10.3390/electronics12204280 - 16 Oct 2023
Cited by 5 | Viewed by 4648
Abstract
The prevalence of big data has caused a notable surge in both the diversity and magnitude of data. Consequently, this has prompted the emergence and advancement of two distinct technologies: unstructured data management and data volume reduction. Key–value stores, such as Google’s LevelDB and Meta’s RocksDB, have emerged as a popular solution for managing unstructured data due to their ability to handle diverse data types with a simple key–value abstraction. Simultaneously, a multitude of data management tools have actively adopted compression techniques, such as Snappy and Zstd, to effectively reduce data volume. The objective of this study is to explore how these two technologies influence each other. For this purpose, we first examine a classification of compression techniques and discuss their strengths and weaknesses, especially those adopted by modern key–value stores. We also investigate the internal structures and operations, such as batch writing and compaction, in order to grasp the characteristics of key–value stores. Then, we quantitatively evaluate the compression ratio and performance using RocksDB under diverse compression techniques, block sizes, value sizes, and workloads. Our evaluation shows that compression not only saves storage space but also decreases compaction overhead. It also reveals that compression techniques have their inherent trade-offs, meaning that some provide a better compression ratio, while others yield better compression performance. Based on our evaluation, a number of potential avenues for further research have been identified. These include the exploration of a compression-aware compaction mechanism, selective compression, and revisiting compression granularity.
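
The ratio-versus-speed trade-off discussed above can be illustrated with Python's standard-library codecs as stand-ins (zlib at a low level leans toward speed, lzma toward ratio); Snappy and Zstd themselves require third-party bindings. The sample data and names here are illustrative:

```python
import zlib
import lzma

def ratios(data):
    """Compression ratios (original size / compressed size) for three
    codec settings; higher means smaller output. zlib level 1 favors
    speed, zlib level 9 and lzma favor ratio at higher CPU cost."""
    return {
        "zlib-1": len(data) / len(zlib.compress(data, 1)),
        "zlib-9": len(data) / len(zlib.compress(data, 9)),
        "lzma": len(data) / len(lzma.compress(data)),
    }

# Repetitive KV-like payload, loosely resembling an SSTable data block.
sample = b"".join(b"key%08d:value-%04d;" % (i, i % 100) for i in range(5000))
```

On such repetitive data all three settings compress well; timing the same calls (e.g., with `time.perf_counter`) exposes the speed side of the trade-off.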
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

17 pages, 3165 KB  
Article
HoaKV: High-Performance KV Store Based on the Hot-Awareness in Mixed Workloads
by Jingyu Liu, Xiaoqin Fan, Youxi Wu, Yong Zheng and Lu Liu
Electronics 2023, 12(15), 3227; https://doi.org/10.3390/electronics12153227 - 26 Jul 2023
Cited by 1 | Viewed by 2267
Abstract
Key–value (KV) stores based on the LSM-tree have become the mainstream of contemporary storage engines, but they suffer from high write and read amplification. Moreover, real-world workloads are highly skewed, and existing KV stores lack hot-awareness, leading to unreliable and poor performance on such workloads. In this paper, we propose HoaKV, which unifies the key design ideas of hot-data awareness, KV separation, and hybrid indexing in a single system. Specifically, HoaKV uses the heat differentiation among KV pairs to manage hot and cold data and performs real-time dynamic adjustment of the data classification. It also uses partial KV separation to manage large and small KV pairs differently within the cold data. In addition, HoaKV uses hybrid indexing to index hot data and cold data, respectively, improving read, write, and scan performance at the same time. Experiments on mixed read–write workloads show that HoaKV performs significantly better than several state-of-the-art KV stores, such as LevelDB, RocksDB, PebblesDB, and WiscKey.
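
One simple way to realize hot-awareness (not necessarily HoaKV's actual policy; class and parameter names are assumptions) is an exponentially decayed access counter per key, with the top-ranked keys treated as hot:

```python
class HeatTracker:
    """Toy hot/cold classifier: exponentially decayed access counts;
    the `hot_capacity` highest-heat keys are treated as hot."""
    def __init__(self, decay=0.9, hot_capacity=2):
        self.decay = decay
        self.hot_capacity = hot_capacity
        self.heat = {}

    def access(self, key):
        self.heat[key] = self.heat.get(key, 0.0) + 1.0

    def tick(self):
        # Periodic decay so stale popularity fades over time.
        for k in self.heat:
            self.heat[k] *= self.decay

    def hot_keys(self):
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        return set(ranked[: self.hot_capacity])
```

A store could then place `hot_keys()` in a fast in-memory index and demote the rest to the cold, KV-separated path.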
(This article belongs to the Special Issue AI-Driven Network Security and Privacy)

22 pages, 7239 KB  
Article
The Single-Channel Microseismic Mine Signal Denoising Method and Application Based on Frequency Domain Singular Value Decomposition (FSVD)
by Quanjie Zhu, Longkun Sui, Qingsong Li, Yage Li, Lei Gu and Dacang Wang
Sustainability 2023, 15(13), 10588; https://doi.org/10.3390/su151310588 - 5 Jul 2023
Cited by 3 | Viewed by 1883
Abstract
The purpose of denoising microseismic mine signals (MMS) is to extract relevant signals from background interference, enabling their use in wave classification, identification, arrival-time analysis, location calculations, and detailed mining feature analysis, among other applications. To enhance the signal-to-noise ratio (SNR) of single-channel MMS, a frequency-domain denoising method based on the Fourier transform, its inverse, and singular value decomposition (SVD) was proposed, along with its processing workflow. Key parameters were introduced, including the time delay τ, the reconstruction order k, and the Hankel matrix length n and dimension m. The reconstruction order for SVD was determined by introducing the energy difference spectrum E, and the denoised two-dimensional microseismic time series was obtained based on the SVD recovery principle. Through the analysis and processing of three typical types of mining microseismic waveforms (blasts, rock bursts, and background noise), evaluated with four indicators (SNR, ESN, RMSE, and STI), the results show that the SNR is improved by more than 10 dB after FSVD processing, indicating a strong noise suppression capability. This method is of significant importance for the rapid analysis and processing of microseismic signals in mining, for subsequently picking initial arrival times accurately, and for exploring microseismic signal characteristics in mines.
(This article belongs to the Special Issue Mining Risk and Safety Management)

18 pages, 12394 KB  
Article
Quantifying the Rock Damage Intensity Controlled by Mineral Compositions: Insights from Fractal Analyses
by Özge Dinç Göğüş, Elif Avşar, Kayhan Develi and Ayten Çalık
Fractal Fract. 2023, 7(5), 383; https://doi.org/10.3390/fractalfract7050383 - 3 May 2023
Cited by 16 | Viewed by 2635
Abstract
Since each rock type represents different deformation characteristics, prediction of the damage beforehand is one of the most fundamental problems of industrial activities and rock engineering studies. Previous studies have predicted the stress–strain behaviors preceding rock failure; however, quantitative analyses of the progressive damage in different rocks under stress have not been accurately presented. This study aims to quantify pre-failure rock damage by investigating the stress-induced microscale cracking process in three different rock types, including diabase, ignimbrite, and marble, representing strong, medium-hard, and weak rock types, respectively. We demonstrate crack intensity at critical stress levels where cracking initiates (σci), propagates (σcd), and where failure occurs (σpeak) based on scanning electron microscope (SEM) images. Furthermore, the progression of rock damage was quantified for each rock type through the fractal analyses of crack patterns on these images. Our results show that the patterns in diabase have the highest fractal dimensions (DB) for all three stress levels. While marble produces the lowest DB value up to σci stress level, it presents greater DB values than those of ignimbrite, starting from the σcd level. This is because rock damage in ignimbrite is controlled by the groundmass, proceeding from such stress level. Rock texture controls the rock stiffness and, hence, the DB values of cracking. The mineral composition is effective on the rock strength, but the textural pattern of the minerals has a first-order control on the rock deformation behavior. Overall, our results provide a better understanding of progressive damage in different rock types, which is crucial in the design of engineering structures.
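
A box-counting fractal dimension like the DB above is the slope of log N(s) versus log(1/s), where N(s) counts the boxes of side s that intersect the pattern. A sketch on a binary crack map (function name and box sizes are assumptions):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary 2-D array:
    count boxes of side s containing any nonzero pixel, then fit the
    slope of log N(s) against log s (dimension = -slope)."""
    img = np.asarray(img) > 0
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

Sanity checks: a filled square gives dimension 2, a single straight line gives 1; real crack networks fall in between.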
(This article belongs to the Topic Geomechanics for Energy and the Environment)

14 pages, 3887 KB  
Article
A Novel Strategy for Comprehensive Estimation of Lattice Energy, Bulk Modulus, Chemical Hardness and Electronic Polarizability of A^N B^(8−N) Binary Inorganic Crystals
by Xinyu Zhao and Xiaoli Wang
Crystals 2023, 13(4), 668; https://doi.org/10.3390/cryst13040668 - 12 Apr 2023
Cited by 4 | Viewed by 2457
Abstract
How to predict the physicochemical properties of inorganic crystals from a simple microscopic parameter, without a complicated calculation process, is an important issue in materials science. Herein, this paper presents a new and facile technique for the comprehensive estimation of lattice energy (U), bulk modulus (B), chemical hardness (η), and electronic polarizability (α) using a simple fitted formula with a few structure parameters, for systems such as rock salt crystals (groups I–VII, II–VI) and tetrahedrally coordinated crystals (groups II–VI, III–V). For the typical binary A^N B^(8−N) crystal systems, our conclusions suggest a good quantitative correlation between U, B, η, α and the chemical bond length (d); the general mathematical expression is P = a·d^b (where P represents these physicochemical parameters), the constants a and b depend on the type of crystal, and the relevant squared correlation coefficients (R^2) are larger than 0.9. The results indicate that lattice energy, bulk modulus, and chemical hardness decrease with increasing chemical bond length, while electronic polarizability increases with it. Meanwhile, new values of the lattice energy, bulk modulus, chemical hardness, and electronic polarizability of the binary A^N B^(8−N) crystal systems considered here are calculated via the obtained curve-fitting equations without any complex calculation process. We find a very good linear trend between our calculated results and the values reported in the literature. The present study will be important in solid-state chemistry and may give researchers useful guidance in searching for relevant data to predict the properties of new materials or synthetic routes based on a simple empirical model.
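
Fitting a power law P = a·d^b reduces to linear regression in log–log space, which also yields the squared correlation coefficient R^2 of the kind quoted above. A minimal sketch with illustrative numbers (not the paper's data):

```python
import math

def fit_power_law(d, p):
    """Fit p = a * d**b by least squares on log(p) = log(a) + b*log(d);
    returns (a, b, R^2), where R^2 is the squared correlation coefficient."""
    xs = [math.log(v) for v in d]
    ys = [math.log(v) for v in p]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx
    a = math.exp(my - b * mx)
    r2 = sxy * sxy / (sxx * syy)
    return a, b, r2
```

A negative fitted exponent b reproduces the reported trend of U, B, and η decreasing with bond length; a positive one, the increase in α.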

16 pages, 4590 KB  
Article
Petrophysical Database for European Pegmatite Exploration—EuroPeg
by Claudia Haase and Claudia M. Pohl
Minerals 2022, 12(12), 1498; https://doi.org/10.3390/min12121498 - 24 Nov 2022
Cited by 7 | Viewed by 4234
Abstract
Granitic pegmatites contain natural concentrations of a variety of raw materials invaluable for modern technologies and a green and sustainable society. The most abundant ones are silicon for high-purity quartz applications, and indispensable lithium for today’s batteries. However, the exploration of these target materials in Europe is underdeveloped, causing high dependencies on non-European supply chains. The European Commission Horizon 2020 project GREENPEG (GA no. 869274) is addressing the exploration of buried, small-scale pegmatite deposits in Europe through the development of innovative new exploration toolsets. One component of these toolsets is petrophysical data of pegmatite ores and their wall rock. These data are essential to supplement and ground-truth non-invasive geophysical investigations and deposit modeling. Both important tools in mineral exploration can then be used in a more targeted and cost-effective way. Petrophysical parameters measured on drill core and field samples and acquired through geophysical borehole logging are compiled in the first database for European pegmatite deposits: EuroPeg_PetroDB. Samples are supplemented with meta-information, and the database is comprehensively structured in an easy-to-use format. Supporting the initiative of FAIR data, EuroPeg is freely accessible on an open data repository. The sample content and petrophysical measurements are described, followed by the structure and usability of the database.
(This article belongs to the Special Issue Petrology and Mineralogy of Pegmatite Deposits)
