Search Results (57)

Search Parameters:
Keywords = pre-flaw

25 pages, 1292 KiB  
Article
Trust Domain Extensions Guest Fuzzing Framework for Security Vulnerability Detection
by Eran Dahan, Itzhak Aviv and Michael Kiperberg
Mathematics 2025, 13(11), 1879; https://doi.org/10.3390/math13111879 - 4 Jun 2025
Viewed by 663
Abstract
The Intel® Trust Domain Extensions (TDX) encrypt guest memory and minimize host interactions to provide hardware-enforced isolation for sensitive virtual machines (VMs). Software vulnerabilities in the guest OS continue to pose a serious risk even as TDX improves security against a malicious hypervisor. We propose a comprehensive TDX Guest Fuzzing Framework that systematically explores the guest’s code paths handling untrusted inputs. Our method first applies static analysis to identify possible attack surfaces where the guest reads data from the host, then uses a customized coverage-guided fuzzer to target those pathways with random input mutations. To achieve high throughput, we also use snapshot-based virtual machine execution, which returns the guest to its pre-interaction state at the end of each fuzz iteration. We show how our framework reveals undiscovered vulnerabilities in device initialization procedures, hypercall error-handling, and random number seeding logic using a QEMU/KVM-based TDX emulator and a TDX-enabled Linux kernel. We demonstrate that a large number of vulnerabilities occur when developers implicitly rely on values supplied by a hypervisor rather than thoroughly verifying them. This study highlights the urgent need for ongoing, automated testing in private computing environments by connecting theoretical completeness arguments for coverage-guided fuzzing with real-world results on TDX-specific code. Our coverage-guided fuzzing campaigns uncovered several memory corruption and concurrency weaknesses in the TDX guest OS, ranging from nested #VE handler deadlocks to buffer overflows in paravirtual device initialization and faulty randomness-seeding logic. If exploited, these vulnerabilities could compromise TDX’s hardware-based memory isolation or enable denial-of-service attacks. Thus, our results demonstrate that, although TDX offers a robust hardware barrier, comprehensive input validation and equally stringent software defenses are essential to preserving overall security. Full article
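
As a rough illustration of the snapshot-based, coverage-guided loop the abstract describes, the sketch below mutates host-supplied inputs, replays them against a reset guest, and keeps any input that reaches new coverage. The harness hooks (restore_snapshot, run_guest_with_input) are hypothetical placeholders, not part of the authors' framework.

```python
import random

# Hypothetical harness hooks: stand-ins for the snapshot/coverage interface a
# QEMU/KVM-based setup would expose.  They are illustrative only.
def restore_snapshot():
    """Reset the TD guest to its pre-interaction snapshot."""

def run_guest_with_input(data):
    """Feed host-controlled bytes to the guest and return the covered code edges."""
    return frozenset()

def mutate(data):
    """Random byte-level mutation of a host-supplied input."""
    buf = bytearray(data) or bytearray(b"\x00")
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed_corpus, iterations=10_000):
    """Coverage-guided loop: keep only mutants that reach new coverage."""
    corpus = list(seed_corpus)
    global_coverage = set()
    for _ in range(iterations):
        restore_snapshot()                      # cheap per-iteration reset
        candidate = mutate(random.choice(corpus))
        edges = run_guest_with_input(candidate)
        if edges - global_coverage:             # new edges -> promising input
            global_coverage.update(edges)
            corpus.append(candidate)
    return corpus, global_coverage

corpus, coverage = fuzz([b"\x00" * 64])
```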

31 pages, 9582 KiB  
Article
Increasing the Classification Achievement of Steel Surface Defects by Applying a Specific Deep Strategy and a New Image Processing Approach
by Fatih Demir and Koray Sener Parlak
Appl. Sci. 2025, 15(8), 4255; https://doi.org/10.3390/app15084255 - 11 Apr 2025
Viewed by 891
Abstract
Defect detection is still challenging to apply in practice because the goal of the classification task is to identify the exact type and location of every defect in an image. Since defect detection involves both localization and categorization, it is difficult to account for both accuracy factors when designing related solutions. Deploying flaw detection requires a dedicated, accurately annotated detection dataset. Producing steel free of flaws is crucial, particularly in large production systems. Thus, in this study, we propose a novel deep learning-based flaw detection system focused on automated steel surface defect identification in industry. A novel method was applied to create processed images from raw steel surface images. A new deep learning model, the Parallel Attention–Residual CNN (PARC) model, was constructed to extract deep features concurrently by training residual structures and attention. The Iterative Neighborhood Component Analysis (INCA) technique was used to select the most discriminative features and lower the computational cost. Classification was performed with an SVM and evaluated on a widely used dataset (Severstal: Steel Defect Detection). Accuracy in both the binary and multi-class classification tests was above 90%. Moreover, using the same dataset, the proposed model was compared with existing models. Full article
(This article belongs to the Special Issue Object Detection and Image Classification)
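
A minimal sketch of the feature-selection-plus-SVM classification stage described above, using scikit-learn's NeighborhoodComponentsAnalysis as a stand-in for the paper's iterative INCA step; the feature matrix here is synthetic, whereas in the paper the features come from the PARC model.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative stand-in features; in the paper these come from the PARC model.
X, y = make_classification(n_samples=600, n_features=256, n_informative=40,
                           n_classes=4, random_state=0)

# Learn a discriminative projection (NCA as a stand-in for INCA), then classify
# the reduced features with an SVM, as in the abstract's pipeline.
clf = make_pipeline(StandardScaler(),
                    NeighborhoodComponentsAnalysis(n_components=32, random_state=0),
                    SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```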

14 pages, 1814 KiB  
Article
Analysis of Phosphorus Soil Sorption Data: Improved Results from Global Least-Squares Fitting
by Joel Tellinghuisen, Paul Holford and Paul J. Milham
Soil Syst. 2025, 9(1), 22; https://doi.org/10.3390/soilsystems9010022 - 4 Mar 2025
Cited by 1 | Viewed by 637
Abstract
Phosphate sorption data are often analyzed by least-squares fitting to the two- or three-parameter Freundlich model. The standard methods are flawed by (1) treating the measured pseudo-equilibrium concentration C as the independent (hence error-free) variable and (2) neglecting the weighting that should accommodate the varying precision of the data. Here, we address both of these shortfalls and use a global fit model to achieve optimal precision in fitting data for five acidic Australian soil types. Each individual dataset consists of measured C values for up to nine phosphate spiking levels C0. For each soil type, there are three–five such datasets from varying levels of phosphate fertilizer pre-exposure (Pf) two years earlier. These datasets are fitted simultaneously by expressing the Freundlich capacity factor a and exponent b as theoretically predicted functions of the assay amounts of Fe, Al, and P measured for each Pf. The analysis allows for uncertainty in both C and C0, with inverse-variance weighting from variance functions estimated by residuals analysis. The estimated presorbed P amounts Q depend linearly on Pf, with positive intercepts at Pf = 0, indicating residual phosphate in the soils prior to the laboratory phosphate treatments. The key takeaway points are as follows: (1) global analysis yields optimal estimates and improved precision for the fit parameters; (2) allowing for uncertainty in C is essential when the data include C values near 0; (3) varying data precision requires weighting to yield optimal parameter estimates and reliable uncertainties. Full article
(This article belongs to the Special Issue Adsorption Processes in Soils and Sediments)
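
For readers unfamiliar with the fitting problem, the sketch below shows a weighted two-parameter Freundlich fit (S = a·C^b) with inverse-variance weighting via scipy; the data and variance function are made up for illustration, and the paper's global model goes further by fitting all pre-exposure levels jointly and allowing for uncertainty in C as well.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(C, a, b):
    """Two-parameter Freundlich isotherm: sorbed amount S = a * C**b."""
    return a * np.power(C, b)

# Illustrative data: pseudo-equilibrium concentrations C (mg/L) and
# sorbed amounts S (mg/kg); the values are invented for the example.
C = np.array([0.05, 0.1, 0.3, 0.8, 2.0, 5.0])
S = np.array([30.0, 45.0, 80.0, 130.0, 210.0, 330.0])
sigma_S = 0.05 * S + 2.0          # assumed variance function -> per-point std. dev.

# Inverse-variance weighting enters through sigma; absolute_sigma keeps the
# reported parameter uncertainties on the scale of the data errors.
popt, pcov = curve_fit(freundlich, C, S, p0=(100.0, 0.5),
                       sigma=sigma_S, absolute_sigma=True)
a_fit, b_fit = popt
a_err, b_err = np.sqrt(np.diag(pcov))
print(f"a = {a_fit:.1f} +/- {a_err:.1f}, b = {b_fit:.2f} +/- {b_err:.2f}")
```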

17 pages, 10082 KiB  
Article
Damage Evolution and Failure Precursor of Rock-like Material Under Uniaxial Compression Based on Strain Rate Field Statistics
by Jin Jin, Ping Cao, Jun Zhang, Yanchao Wang, Chenxi Miao, Jie Li and Xiaohong Bai
Appl. Sci. 2025, 15(2), 686; https://doi.org/10.3390/app15020686 - 12 Jan 2025
Cited by 1 | Viewed by 816
Abstract
In rock engineering, it is crucial to collect and analyze precursor information of rock failure. This paper studies the strain rate field of rock-like material to obtain precursor information on its failure. Based on available laboratory experiments, an intact BPM (bonded-particle model) and BPMs containing a single open prefabricated flaw were simulated with PFC (Particle Flow Code). The volume strain rate field data before the peak stress were obtained from two hundred measurement circles across each model. The strain rate field data were first statistically analyzed to explore the failure precursor based on the intact model and the 45° flaw model, and then compared to find the influence of the pre-existing flaw on the damage evolution and precursor signal. The results indicate that (1) all types of statistical data are positively correlated with the increment of microcracks; (2) corresponding to the fluctuation patterns of the statistical data, the damage evolution of BPMs in the pre-peak stage can be divided into three parts; (3) the pre-existing flaw accelerates the damage evolution; (4) the location and evolution rate of damage can be determined by comprehensively analyzing the average deviation curve, the coefficient of variation, and the contour maps of the strain rate field. These analyses of the particle displacement field can be used to distinguish the impacts of the flaw angle and provide some assistance for failure forecasting. Full article
(This article belongs to the Special Issue Recent Advances in Rock Mass Engineering)
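
A minimal sketch of the field statistics the abstract tracks as failure precursors, assuming the measurement-circle data are exported as a (load steps × circles) array; the array shape and values are illustrative, not the PFC output format.

```python
import numpy as np

def field_statistics(strain_rate):
    """Per-step statistics of a strain-rate field sampled by measurement circles.

    strain_rate: array of shape (n_steps, n_circles), e.g. 200 circles per model.
    Returns the average (absolute) deviation and coefficient of variation per
    load step, the two indicators tracked as failure precursors.
    """
    mean = strain_rate.mean(axis=1, keepdims=True)
    avg_dev = np.abs(strain_rate - mean).mean(axis=1)
    cv = strain_rate.std(axis=1) / np.abs(mean[:, 0])
    return avg_dev, cv

# Illustrative synthetic field: 500 load steps x 200 measurement circles.
rng = np.random.default_rng(0)
field = rng.normal(loc=1e-6, scale=2e-7, size=(500, 200))
avg_dev, cv = field_statistics(field)
```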

17 pages, 2640 KiB  
Article
An Expanded Wing Crack Model for Fracture and Mechanical Behavior of Sandstone Under Triaxial Compression
by Esraa Alomari, Kam Ng and Lokendra Khatri
Materials 2024, 17(23), 5973; https://doi.org/10.3390/ma17235973 - 6 Dec 2024
Cited by 1 | Viewed by 860
Abstract
A new model is developed to predict the mechanical behavior of brittle sandstone under triaxial compression. The proposed model aims to determine the normalized critical crack length (Lcr), through which the failure strength (σf) of sandstone can be estimated based on fracture mechanics applied to secondary cracks emanating from pre-existing flaws, while considering the interaction of neighboring cracks. In this study, the wing crack model developed by Ashby and Hallam (1986) was adopted to account for the total stress intensity at the crack tip (KI) as the summation of the stress intensity due to crack initiation and crack interaction. The proposed model is developed by first deriving the Lcr and then setting the crack length equal to the Lcr. Next, the total stress intensity is set equal to the rock fracture toughness in the original equation of KI, resulting in an estimate of the σf. Finally, to evaluate the performance of the proposed model on predicting σf, theoretical results are compared with laboratory data obtained on sandstone formations collected from Wyoming and the published literature. Moreover, the σf predicted by our proposed model is compared with those predicted from other failure criteria from the literature. The comparison shows that the proposed model better predicts the rock failure strength under triaxial compression, based on the lowest RMSE and MAD values of 36.95 and 30.93, respectively. Full article
(This article belongs to the Special Issue Advances in Rock and Mineral Materials)
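
The sketch below illustrates the numerical solution strategy the abstract outlines: find the normalized crack length at which the total stress intensity peaks, then solve K_I = K_IC for the failure strength. The K_I expression and all coefficients are placeholders for illustration only, not the Ashby–Hallam formulation used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

K_IC = 1.0          # fracture toughness, MPa*sqrt(m)  (illustrative value)
a0 = 0.01           # half-length of the pre-existing flaw, m (illustrative)

def K_I(sigma1, sigma3, L):
    """Placeholder total stress intensity at the wing-crack tip.

    Combines an initiation term that decays with normalized wing length L and
    an interaction term that grows with L, in the spirit of the abstract's model.
    Coefficients are illustrative, NOT the Ashby-Hallam expressions.
    """
    drive = 0.1 * (sigma1 - 3.0 * sigma3) * np.sqrt(np.pi * a0)
    return drive / (1.0 + L) ** 1.5 + 0.05 * sigma1 * np.sqrt(np.pi * a0 * L)

def critical_length(sigma1, sigma3, L_grid=np.linspace(1e-3, 5.0, 2000)):
    """Normalized crack length at which K_I is largest for the given stresses."""
    return L_grid[np.argmax(K_I(sigma1, sigma3, L_grid))]

def failure_strength(sigma3, bracket=(1.0, 2000.0)):
    """Axial stress sigma1 at which K_I(sigma1, sigma3, L_cr) reaches K_IC."""
    def g(sigma1):
        return K_I(sigma1, sigma3, critical_length(sigma1, sigma3)) - K_IC
    return brentq(g, *bracket)

print(failure_strength(sigma3=10.0))   # predicted failure strength, MPa
```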

15 pages, 301 KiB  
Article
Chosen-Ciphertext Secure Unidirectional Proxy Re-Encryption Based on Asymmetric Pairings
by Benjamin Zengin, Paulin Deupmann, Nicolas Buchmann and Marian Margraf
Appl. Sci. 2024, 14(23), 11322; https://doi.org/10.3390/app142311322 - 4 Dec 2024
Viewed by 920
Abstract
Proxy re-encryption (PRE) is a cryptographic primitive that extends public key encryption by allowing ciphertexts to be re-encrypted from one user to another without revealing information about the underlying plaintext. This makes it an essential privacy-enhancing technology, as only the intended recipient is able to decrypt sensitive personal information. Previous PRE schemes were commonly based on symmetric bilinear pairings. However, these have been found to be slower and less secure than the more modern asymmetric pairings. To address this, we propose two new PRE scheme variants, based on the unidirectional symmetric pairing-based scheme by Weng et al. and adapted to utilize asymmetric pairings. We employ a known automated black-box reduction technique to transform the base scheme to the asymmetric setting, identify its shortcomings, and subsequently present an alternative manual transformation that fixes these flaws. The adapted schemes retain the properties of the base scheme and are therefore CCA-secure in the adaptive corruption model without the use of random oracles, while being faster, practical, and more secure overall than the base scheme. Full article
(This article belongs to the Special Issue Cryptography in Data Protection and Privacy-Enhancing Technologies)
19 pages, 6034 KiB  
Article
GMN+: A Binary Homologous Vulnerability Detection Method Based on Graph Matching Neural Network with Enhanced Attention
by Zheng Zhao, Tianhao Zhang, Xiaoya Fan, Qian Mao, Dafeng Wang and Qi Zhao
Appl. Sci. 2024, 14(22), 10762; https://doi.org/10.3390/app142210762 - 20 Nov 2024
Viewed by 1661
Abstract
The widespread reuse of code in the open-source community has led to the proliferation of homologous vulnerabilities, which are security flaws propagated across diverse software systems through the reuse of vulnerable code. Such vulnerabilities pose serious cybersecurity risks, as attackers can exploit the same weaknesses across multiple platforms. Deep learning has emerged as a promising approach for detecting homologous vulnerabilities in binary code due to their automated feature extraction and high efficiency. However, existing deep learning methods often struggle to capture deep semantic features in binary code, limiting their effectiveness. To address this limitation, this paper presents GMN+, which is a novel graph matching neural network with enhanced attention for detecting homologous vulnerabilities. This method comprehensively considers the information contained in instructions and incorporates types of input instruction. Masked Language Modeling and Instruction Type Prediction are developed as pre-training tasks to enhance the ability of GMN+ in extracting semantic information from basic blocks. GMN+ utilizes an attention mechanism to focus concurrently on the critical semantic information within functions and differences between them, generating robust function embeddings. Experimental results indicate that GMN+ outperforms state-of-the-art methods in various tasks and achieves notable performance in real-world vulnerability detection scenarios. Full article

17 pages, 315 KiB  
Article
Pre-Service Science Teachers’ Beliefs About Creativity at School: A Study in the Hispanic Context
by Leidy Dahiana Rios-Atehortua, Tarcilo Torres-Valois, Joan Josep Solaz-Portolés and Vicente Sanjosé
Educ. Sci. 2024, 14(11), 1194; https://doi.org/10.3390/educsci14111194 - 31 Oct 2024
Viewed by 1275
Abstract
The present study examines the beliefs of pre-service science teachers on creativity in science teaching and learning and identifies factors in the school environment that, in their view, can influence students’ creativity. A total of 152 Colombian prospective science teachers participated in this study. A questionnaire, with an open and a closed part, was administered to participants. Descriptive and inferential statistical analysis of the qualitative and quantitative data collected was carried out. The results revealed that (a) the concept of creativity held by the participants was incomplete and significantly diverged from expert definitions; (b) they viewed creativity as a universal potential that can be nurtured within the school system; (c) the ability to identify problems and ask challenging questions was rarely selected as a creative personality trait; (d) they demonstrated unclear ideas about the relationship between creativity and intelligence and the role of prior knowledge in students’ creativity; and (e) the subject or curricular domain was seen as an important factor influencing students’ creativity. From all this, it could be concluded that Colombian future science teachers exhibited flawed concepts of creativity based on poorly articulated beliefs, which is consistent with findings in other international studies. Full article
16 pages, 4258 KiB  
Article
The Remaining Life Prediction of Rails Based on Convolutional Bi-Directional Long and Short-Term Memory Neural Network with Residual Self-Attention Mechanism
by Gang Huang, Lin Gong, Yuhan Zhang, Zhongmei Wang and Songlin Yuan
Appl. Sci. 2024, 14(9), 3781; https://doi.org/10.3390/app14093781 - 28 Apr 2024
Cited by 2 | Viewed by 1697
Abstract
In the railway industry, the rail is the basic load-bearing structure of railway tracks. The prediction of the remaining useful life (RUL) for rails is important to avoid unexpected system failures and reduce the cost of maintaining the system. However, the existing detection of rail flaws is difficult, the rail deterioration mechanisms are diverse, and the traditional data-driven methods have insufficient feature extraction. This causes low prediction accuracy. With objectives set in relation to the problems outlined above, a rail RUL prediction approach based on a convolutional bidirectional long- and short-term memory neural network with a residual self-attention (CNNBiLSTM-RSA) mechanism is designed. Firstly, the pre-processed vibration data are taken as the input for the convolutional bi-directional long- and short-term memory neural network (CNNBiLSTM) to extract the forward and backward dependencies and features of the rail data. Secondly, the RSA mechanism is introduced in order to obtain the contributions of the features at different moments during the degradation process of the rail. Finally, an end-to-end RUL prediction implementation based on the convolutional bi-directional long- and short-term memory neural network with the residual self-attention mechanism is established. The experiments were carried out using the full life-cycle data of rails collected at the railway site. The results show that the method achieves a higher accuracy in the RUL prediction of rails. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
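
A minimal PyTorch sketch of the CNN → BiLSTM → residual self-attention → regression pipeline the abstract describes; the layer sizes, window length, and pooling are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTM_RSA(nn.Module):
    """Sketch of a CNN + BiLSTM + residual self-attention RUL regressor."""

    def __init__(self, n_features=1, hidden=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                     # local feature extraction
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)          # scalar RUL estimate

    def forward(self, x):                             # x: (batch, time, features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bilstm(h)                         # forward + backward context
        a, _ = self.attn(h, h, h)
        h = h + a                                     # residual self-attention
        return self.head(h.mean(dim=1)).squeeze(-1)   # pool over time -> RUL

model = CNNBiLSTM_RSA()
rul = model(torch.randn(8, 200, 1))                   # 8 vibration windows, 200 steps
```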

68 pages, 4001 KiB  
Systematic Review
Surface Treatment of Dental Mini-Sized Implants and Screws: A Systematic Review with Meta-Analysis
by Ana Luísa Figueiredo, Raquel Travassos, Catarina Nunes, Madalena Prata Ribeiro, Mariana Santos, Flavia Iaculli, Anabela Baptista Paula, Carlos Miguel Marto, Francisco Caramelo, Inês Francisco and Francisco Vale
J. Funct. Biomater. 2024, 15(3), 68; https://doi.org/10.3390/jfb15030068 - 10 Mar 2024
Viewed by 4062
Abstract
Miniscrews are devices that allow for absolute skeletal anchorage. However, their use has a higher failure rate (10–30%) than dental implants (10%). To overcome these flaws, chemical and/or mechanical treatment of the surface of miniscrews has been suggested. There is no consensus in the current literature about which of these methods is the gold standard; thus, our objective was to carry out a systematic review and meta-analysis of the literature on surface treatments of miniscrews. The review protocol was registered (PROSPERO CRD42023408011) and is in accordance with the PRISMA guidelines. A bibliographic search was carried out on PubMed via MEDLINE, Cochrane Library, Embase and Web of Science. The initial search of the databases yielded 1684 results, with 98 studies included in the review, with one article originating from the search in the bibliographic references of the included studies. The results of this systematic review show that the protocols of miniscrew surface treatments, such as acid-etching; sandblasting, large-grit and acid-etching; photofunctionalization with ultraviolet light; and photobiomodulation, can increase stability and the success of orthodontic treatment. The meta-analysis revealed that the treatment with the highest removal torque is SLA, followed by acid-etching. On the other hand, techniques such as oxidative anodization, anodization with pre-calcification and heat treatment, as well as deposition of chemical compounds, require further investigation to confirm their effectiveness. Full article
(This article belongs to the Special Issue Orthodontics Materials and Technologies)

32 pages, 18478 KiB  
Article
Explainable Deep Learning Approach for Multi-Class Brain Magnetic Resonance Imaging Tumor Classification and Localization Using Gradient-Weighted Class Activation Mapping
by Tahir Hussain and Hayaru Shouno
Information 2023, 14(12), 642; https://doi.org/10.3390/info14120642 - 30 Nov 2023
Cited by 27 | Viewed by 5337
Abstract
Brain tumors (BT) present a considerable global health concern because of their high mortality rates across diverse age groups. A delay in diagnosing BT can lead to death. Therefore, a timely and accurate diagnosis through magnetic resonance imaging (MRI) is crucial. A radiologist makes the final decision to identify the tumor through MRI. However, manual assessments are flawed, time-consuming, and rely on experienced radiologists or neurologists to identify and diagnose a BT. Computer-aided classification models often lack performance and explainability for clinical translation, particularly in neuroscience research, so physicians perceive the results of such black-box models as inadequate. Explainable deep learning (XDL) can advance neuroscientific research and healthcare tasks. To enhance the explainability of deep learning (DL) and provide diagnostic support, we propose a new classification and localization model that combines existing methods. We adopt a pre-trained visual geometry group (pre-trained-VGG-19), scratch-VGG-19, and EfficientNet model that runs a modified form of the class activation mapping (CAM), gradient-weighted class activation mapping (Grad-CAM) and Grad-CAM++ algorithms. These algorithms, introduced into a convolutional neural network (CNN), uncover a crucial part of the classification and can provide an explanatory interface for diagnosing BT. The experimental results demonstrate that the pre-trained-VGG-19 with Grad-CAM provides better classification and visualization results than the scratch-VGG-19, EfficientNet, and cutting-edge DL techniques in both visual and quantitative evaluations, with increased accuracy. The proposed approach may contribute to reducing diagnostic uncertainty and validating BT classification. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
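
A minimal Grad-CAM sketch over a pre-trained VGG-19, in the spirit of the visualization pipeline described above; the target layer, class selection, and normalization are illustrative, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

model = vgg19(weights="IMAGENET1K_V1").eval()
target_layer = model.features[-1]             # last block of the VGG-19 backbone

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(image):                          # image: (1, 3, 224, 224), normalized
    logits = model(image)
    score = logits[0, logits.argmax()]        # explain the top-predicted class
    model.zero_grad()
    score.backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1))     # weighted activations
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:], mode="bilinear")
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```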

6 pages, 710 KiB  
Proceeding Paper
A Novel Quality Assessment Method for the Clinical Reproduction of Orthodontic Attachments Based on Differential Entropy
by Fabio Salmeri, Emmanuele Barberi, Frank Lipari and Fabiana Nicita
Eng. Proc. 2023, 56(1), 15; https://doi.org/10.3390/ASEC2023-15245 - 26 Oct 2023
Cited by 4 | Viewed by 812
Abstract
In this study, the effectiveness of an experimental clinical technique for the reproduction of attachments during an orthodontic treatment with clear aligners was evaluated using a new index (CorAl) for quality assessment that exploits the differential entropy of point clouds. The procedure involves the use of a pre-drilled template and a second pre-loaded template with a high-viscosity composite and is compared with the standard technique. Attachment planning was conducted on four prototypes of dental arches with extracted teeth which were divided into two groups according to the proposed operating procedures. Digital scans were utilized to capture dental impressions for both the purposes of virtual planning and to reproduce the clinical outcomes post-procedure. The point clouds obtained after the reproduction of the attachments were aligned with those from the virtual planning, and the deviation analysis was conducted using the quality index of the CorAl method. Though no significant discrepancies were found among the groups regarding morphological flaws, detachments, or maximum defect values, the differential entropy analysis revealed that the experimental technique offers good alignment in attachment placement. The outcome supports that the innovative procedure of the clinical reproduction of attachments proved to be reliable and operationally simple, with additional benefits derived from using the CorAl index. The advantages of CorAl include the use of a single comparison index, no problem of comparison commutativity, noise immunity, low influence from the presence of holes, and point cloud densities. This allows for the drawing of quality maps that show areas with the highest deviation. Full article
(This article belongs to the Proceedings of The 4th International Electronic Conference on Applied Sciences)
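
A minimal sketch of the per-point differential entropy underlying a CorAl-style index, assuming each local neighborhood of the point cloud is approximately Gaussian; the neighborhood radius and handling of degenerate neighborhoods are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_entropies(points, radius=0.5):
    """Per-point differential entropy of a point cloud.

    Assumes each local neighborhood is roughly Gaussian, so
    h = 0.5 * ln((2*pi*e)^3 * det(Sigma)) with Sigma the neighborhood covariance.
    """
    tree = cKDTree(points)
    entropies = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:                      # too few neighbors for a covariance
            continue
        cov = np.cov(points[idx].T)
        det = np.linalg.det(cov)
        if det > 0:
            entropies[i] = 0.5 * np.log((2 * np.pi * np.e) ** 3 * det)
    return entropies

# Illustrative use: compare the mean entropy of the planned vs. reproduced
# attachment surfaces after alignment; lower joint entropy means better agreement.
cloud = np.random.default_rng(1).normal(size=(2000, 3))
print(np.nanmean(point_entropies(cloud, radius=0.3)))
```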

13 pages, 1461 KiB  
Review
Pellet Production from Pruning and Alternative Forest Biomass: A Review of the Most Recent Research Findings
by Rodolfo Picchio, Nicolò Di Marzio, Luca Cozzolino, Rachele Venanzi, Walter Stefanoni, Leonardo Bianchini, Luigi Pari and Francesco Latterini
Materials 2023, 16(13), 4689; https://doi.org/10.3390/ma16134689 - 29 Jun 2023
Cited by 12 | Viewed by 2785
Abstract
Typically, coniferous sawdust from debarked stems is used to make pellets. Given the high lignin content, which ensures strong binding and high calorific values, this feedstock provides the best quality available. However, finding alternative feedstocks for pellet production is crucial if small-scale pellet production is to be developed and used to support the economy and energy independence of rural communities. These communities have to be able to create pellets devoid of additives and without biomass pre-processing so that the feedstock price remains low. The features of pellets made from other sources of forest biomass, such as different types of waste, broadleaf species, and pruning biomass, have attracted some attention in this context. This review sought to provide an overview of the most recent (2019–2023) knowledge on the subject and to bring into consideration potential feedstocks for the growth of small-scale pellet production. Findings from the literature show that poor bulk density and mechanical durability are the most frequent issues when making pellets from different feedstocks. All of the tested alternative biomass typologies have these shortcomings, which are also a result of the use of low-performance pelletizers in small-scale production, preventing the achievement of adequate mechanical qualities. Pellets made from pruning biomass, coniferous residues, and wood from short-rotation coppice plants all have significant flaws in terms of ash content and, in some cases, nitrogen, sulfur, and chlorine content as well. All things considered, research suggests that broadleaf wood from beech and oak trees, collected through routine forest management activities, makes the best feasible feedstock for small-scale pellet production. Despite having poor mechanical qualities, these feedstocks can provide pellets with a low ash level. High ash content is a significant disadvantage when considering pellet manufacture and use on a small scale since it can significantly raise maintenance costs, compromising the supply chain’s ability to operate cost-effectively. Pellets with low bulk density and low mechanical durability can be successfully used in a small-scale supply chain with the advantages of reducing travel distance from the production site and storage time. Full article
(This article belongs to the Special Issue Mechanical Processing of Granular and Fibrous Materials)

20 pages, 10404 KiB  
Article
Strength Properties and Damage Evolution Mechanism of Single-Flawed Brazilian Discs: An Experimental Study and Particle Flow Simulation
by Yao Bai, Haoyu Dou, Peng Sun, Tiancheng Ma, Yujing Wang and Yuqin Wang
Symmetry 2023, 15(4), 895; https://doi.org/10.3390/sym15040895 - 10 Apr 2023
Cited by 4 | Viewed by 2223
Abstract
Understanding the tensile strength properties and damage evolution mechanism in fissured rock is very important to fundamental research and engineering design. The effects of flaw dip angle on the tensile strength, macroscopic crack propagation and failure mode of symmetrical Brazilian discs of rock-like materials were investigated. A parallel bonding model was proposed to examine the damage of pre-flawed discs under splitting load. The microscopic parameters of particles and bonds in the model that can characterize rock-like materials’ mechanical and deformation properties were obtained by calibrating against the laboratory test results. The crack development, energy evolution and damage characteristics of Brazilian discs containing a single pre-existing flaw were studied at the microscopic scale. The results show that the flaw significantly weakens the strength of the Brazilian disc, and both the peak load and the initial cracking load decrease with increasing flaw angle. The failure modes of the rock-like specimens are mainly divided into three types: wing crack penetration damage mode, tensile-shear penetration damage mode and radial penetration failure mode. Except at a flaw dip angle of 0°, the wing cracks generally sprouted at the tip of the pre-flaw, and the wing cracks at the two tips of the pre-flaw are centrosymmetric. Crack coalescence was concentrated in the post-peak stage. Based on the particle flow code (PFC) energy partitions, damage variables characterized by dissipation energy were proposed. The disc specimen’s pre-peak damage variables and peak damage variables decreased with increasing flaw angle, and the damage was concentrated in the post-peak phase. Full article
(This article belongs to the Topic Advances in Computational Materials Sciences)

17 pages, 8413 KiB  
Article
Discrete Element Modeling of Thermally Damaged Sandstone Containing Two Pre-Existing Flaws at High Confining Pressure
by Jinzhou Tang, Shengqi Yang, Ke Yang, Wenling Tian, Guangjian Liu and Minke Duan
Sustainability 2023, 15(7), 6318; https://doi.org/10.3390/su15076318 - 6 Apr 2023
Cited by 7 | Viewed by 1857
Abstract
The underground coal gasification (UCG) process is strongly exothermic and causes thermal damage to the rock cap. We propose a new thermal damage numerical model based on the two-dimensional particle flow code (PFC2D) to analyze the initiation and extension of cracks in pre-cracked red sandstone thermally treated at temperatures of 25~1000 °C. The results indicate that: (1) a thermal damage value DT, obtained by extracting the thermal crack area from scanning electron microscope (SEM) images, can be used as an indicator of the degree of thermal damage of the sandstone; (2) a thermal damage numerical model, established by replacing the flat-joint model with the smooth-joint model based on the thermal damage value DT, can properly simulate the mechanical behavior and failure patterns of sandstone; (3) the critical temperature for strength reduction was 750 °C: the peak strength increased as the pre-treatment temperature increased from 25 to 750 °C and then decreased, while the elastic modulus E1 decreased with increasing thermal treatment temperature; (4) micro-scale cracks initiate at the tip of the prefabricated fissure, extend along its direction, and finally develop into a macroscopic fracture. This approach has the potential to enhance the predictive capability of modeling and provides a reliable model for simulating the mechanical behavior of thermally damaged sandstones, thereby offering a sound scientific basis for the utilization of the space after UCG. Full article
(This article belongs to the Special Issue Sustainable Engineering: Prevention of Rock and Thermal Damage)
