Search Results (155)

Search Parameters:
Keywords = K-file

29 pages, 9069 KiB  
Article
Prediction of Temperature Distribution with Deep Learning Approaches for SM1 Flame Configuration
by Gökhan Deveci, Özgün Yücel and Ali Bahadır Olcay
Energies 2025, 18(14), 3783; https://doi.org/10.3390/en18143783 - 17 Jul 2025
Viewed by 205
Abstract
This study investigates deep learning (DL) techniques for predicting temperature fields in the SM1 swirl-stabilized turbulent non-premixed flame. Two distinct DL approaches were developed using a comprehensive CFD database generated with the steady laminar flamelet model coupled with the SST k-ω turbulence model. The first approach employs a fully connected dense neural network that directly maps scalar input parameters (fuel velocity, swirl ratio, and equivalence ratio) to high-resolution temperature contour images; its performance was benchmarked against ResNet, EfficientNetB0, and Inception V3 architectures, with the Inception V3 model and the developed dense model outperforming ResNet and EfficientNetB0. Model file sizes and usability were also compared. The second approach employs a U-Net-based convolutional neural network enhanced by an RGB Fusion preprocessing technique, which combines multiple scalar fields from non-reacting (cold-flow) conditions into composite images, significantly improving spatial feature extraction; the same alternative architectures were evaluated for comparison. Both models were trained on 80% of the CFD data and tested on the remaining 20% to assess generalization to new input conditions. The U-Net model stands out for its low error and small file size. The dense network suits direct parametric analyses, while the image-based U-Net model offers a rapid, scalable way to exploit cold-flow CFD images. This framework can be refined in future research to estimate additional flow variables and validated against experimental measurements for enhanced applicability. Full article
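The dense-network idea above, mapping three scalar inputs to a temperature contour, can be sketched in miniature. Everything here (layer sizes, the 16 × 16 grid, the synthetic data) is an illustrative assumption, not the authors' architecture:

```python
import numpy as np

# Minimal sketch: a fully connected network mapping three scalar inputs
# (fuel velocity, swirl ratio, equivalence ratio) to a flattened
# "temperature field". Sizes and weights are illustrative only.
rng = np.random.default_rng(0)

H, W = 16, 16        # coarse stand-in for the high-resolution contour
n_hidden = 64

W1 = rng.normal(0, 0.1, (3, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, H * W))
b2 = np.zeros(H * W)

def predict(x):
    """Forward pass: three scalars -> an H x W temperature image."""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).reshape(H, W)

# 80/20 train/test split of a synthetic dataset, mirroring the paper's protocol
X = rng.uniform(size=(100, 3))
n_train = int(0.8 * len(X))
X_train, X_test = X[:n_train], X[n_train:]

field = predict(X_test[0])
print(field.shape)  # (16, 16)
```

In the paper the output is a full-resolution contour image and the weights are learned; this sketch only shows the scalars-to-image mapping and the 80/20 split.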

16 pages, 2296 KiB  
Article
Magnetoelectric Effects in Bilayers of PZT and Co and Ti Substituted M-Type Hexagonal Ferrites
by Sujoy Saha, Sabita Acharya, Sidharth Menon, Rao Bidthanapally, Michael R. Page, Menka Jain and Gopalan Srinivasan
J. Compos. Sci. 2025, 9(7), 336; https://doi.org/10.3390/jcs9070336 - 27 Jun 2025
Viewed by 229
Abstract
This report concerns Co- and Ti-substituted M-type barium and strontium hexagonal ferrites, which are reported to be single-phase multiferroics owing to a transition from Néel-type ferrimagnetic order to a spiral spin structure accompanied by a ferroelectric polarization in an applied magnetic field. The focus here is the nature of magnetoelectric (ME) interactions in bilayers of ferroelectric PZT and Co- and Ti-substituted BaM and SrM. The ME coupling in the ferrite-PZT bilayers arises from the transfer of magnetostriction-induced mechanical deformation in the ferrite under a magnetic field, which induces an electric field in the PZT. Polycrystalline Co- and Ti-doped ferrites, Ba(CoTi)x Fe12−2xO19 (BCTx) and Sr(CoTi)x Fe12−2xO19 (SCTx) (x = 0–4), were found to be free of impurity phases for all x-values, except that SCTx showed a small amount of α-Fe2O3 in the X-ray diffraction patterns for x ≤ 2.0. The magnetostriction of the ferrites increased with applied field H to a maximum of around 2 to 6 ppm for H ~ 5 kOe. BCTx/SCTx samples showed ferromagnetic resonance (FMR) for x = 1.5–2.0, and the estimated anisotropy field was on the order of 5 kOe. The magnetization increased with the amount of Co and Ti doping and then decreased rapidly for x > 1.0. ME coupling strengths were measured on bilayers of BCTx/SCTx platelets bonded to PZT. Each bilayer was subjected to AC and DC magnetic fields H, and the magnetoelectric voltage coefficient (MEVC) was measured as a function of H and the frequency of the AC field. For BCTx-PZT, the maximum MEVC at low frequency was ~5 mV/cm Oe, increasing about 40-fold at electromechanical resonance (EMR). SCTx–PZT composites showed similar behavior, with the highest MEVC values of ~14 mV/cm Oe at low frequencies and ~200 mV/cm Oe at EMR. All the bilayers showed ME coupling at zero magnetic bias because the magnetocrystalline anisotropy field in the ferrite provided a built-in bias field. Full article
(This article belongs to the Special Issue Metal Composites, Volume II)
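For readers unfamiliar with the figure of merit, the MEVC is the induced voltage normalized by the piezoelectric layer thickness and the AC drive field. A minimal sketch with made-up numbers (the 100 µm thickness and 1 Oe drive are assumptions, not the study's values):

```python
# MEVC = V_out / (t * H_ac), conventionally quoted in mV/(cm Oe).
def mevc(v_out_mv, thickness_cm, h_ac_oe):
    """Magnetoelectric voltage coefficient from a bilayer measurement."""
    return v_out_mv / (thickness_cm * h_ac_oe)

# 0.05 mV induced across a 0.01 cm (100 um) PZT layer at a 1 Oe AC field
print(round(mevc(0.05, 0.01, 1.0), 1))  # -> 5.0, the order reported at low frequency
```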

16 pages, 1913 KiB  
Article
Evaluation of Ultra-Low-Dose CBCT Protocols to Investigate Vestibular Bone Defects in the Context of Immediate Implant Planning: An Ex Vivo Study on Cadaver Skulls
by Mats Wernfried Heinrich Böse, Jonas Buchholz, Florian Beuer, Stefano Pieralli and Axel Bumann
J. Clin. Med. 2025, 14(12), 4196; https://doi.org/10.3390/jcm14124196 - 12 Jun 2025
Viewed by 510
Abstract
Background/Objectives: This ex vivo study aimed to evaluate the diagnostic performance of ultra-low-dose (ULD) cone-beam computed tomography (CBCT) protocols in detecting vestibular bone defects for immediate implant planning, using intraoral scan (IOS) data as a reference. Methods: Four CBCT protocols (ENDO, A, B, C) were applied to four dried human skulls using a standardized setup and a single CBCT unit (Planmeca ProMax® 3D Mid, Planmeca Oy, Helsinki, Finland). All scans were taken at 90 kV, with varying parameters: (1) ENDO (40 × 50 mm, 75 µm, 12 mA, 80–120 µSv, 15 s), (2) A (50 × 50 mm, 75 µm, 9 mA, 20–40 µSv, 5 s), (3) B (100 × 60 mm, 150 µm, 7.1 mA, 22–32 µSv, 5 s), and (4) C (100 × 100 mm, 200 µm, 7.1 mA, 44 µSv, 4 s). Vestibular root surfaces of single-rooted teeth (FDI regions 15–25 and 35–45) were digitized via IOS and exported as STL files. CBCT datasets were superimposed using 3D software (Blender 2.79), and surface defects were measured and compared using one-sample t-tests and Bland–Altman analysis. The level of significance was set at p < 0.05. Results: A total of 330 vestibular surfaces from 66 teeth were analyzed. Compared to the IOS reference, protocols ENDO and A showed minimal differences (p > 0.05). In contrast, protocols B and C exhibited statistically significant deviations (p < 0.05). Protocol B demonstrated a mean difference of −0.477 mm2 with limits of agreement (LoA) from −2.04 to 1.09 mm2 and significant intra-rater variability (p < 0.05). Protocol C revealed a similar mean deviation (−0.455 mm2) but a wider LoA (−2.72 to 1.81 mm2), indicating greater measurement variability. Overall, larger voxel sizes were associated with increased random error, although deviations remained within clinically acceptable limits. Conclusions: Despite statistical significance, deviations for protocols B and C remained within clinically acceptable limits. 
ULD CBCT protocols are, thus, suitable for evaluating vestibular bone defects with reduced radiation exposure. Full article
(This article belongs to the Special Issue Emerging Technologies for Dental Imaging)
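The Bland–Altman quantities reported above (mean difference and limits of agreement) reduce to a few lines. The data below are synthetic stand-ins for the 330 surface measurements, with an assumed bias of roughly the magnitude reported for protocol B:

```python
import numpy as np

def bland_altman(measured, reference):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = measured - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
ios = rng.uniform(1.0, 5.0, 330)            # synthetic defect areas, mm^2
cbct = ios + rng.normal(-0.45, 0.8, 330)    # a protocol with a small negative bias
bias, (lo, hi) = bland_altman(cbct, ios)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Wider limits of agreement, as seen for protocol C, indicate greater random measurement variability even when the mean bias is similar.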

11 pages, 554 KiB  
Article
Exploring the Antimicrobial and Clinical Efficacy of a Novel Technology in Pediatric Endodontics: An In Vivo Study
by Luca De Gregoriis, Tatiane Cristina Dotta, Morena Petrini, Silvia Di Lodovico, Loredana D’Ercole, Simonetta D’Ercole and Domenico Tripodi
Appl. Sci. 2025, 15(12), 6491; https://doi.org/10.3390/app15126491 - 9 Jun 2025
Viewed by 414
Abstract
Pediatric dentistry continually seeks effective and efficient treatments for young patients, especially within pediatric endodontics, where cooperation can often be challenging. This in vivo study aimed to evaluate the effectiveness of a novel photodynamic therapy (PDT) protocol using a 5-aminolevulinic acid gel (Aladent, ALAD) combined with light irradiation during the endodontic treatment of primary teeth. This study included primary teeth requiring root canal therapy due to carious lesions or trauma, with clinical symptoms suggesting irreversible pulpitis or acute apical periodontitis. Following local anesthesia and isolation with a rubber dam, carious lesions were excavated, and access to the pulp chamber was established. Canal preparation included determining the working length and using a sequence of k-files. Afterward, ALAD gel was applied, and the patients were divided into two groups based on their visit duration (Group A with a single visit, Group B returning after one week). Microbiological analysis was conducted on the samples taken before and after treatment. The findings demonstrated significant antibacterial efficacy of the PDT protocol in reducing root canal bacterial load, suggesting ALAD-based PDT may serve as an alternative to traditional endodontic treatment in cases where retaining primary teeth is essential for orthodontic or strategic reasons. Clinically, improvement in symptoms and fistula resolution were observed. Treatment time, patient compliance, and protocol safety in pediatric applications are also discussed, highlighting the protocol’s potential to enhance clinical outcomes in pediatric endodontics. Full article

13 pages, 1873 KiB  
Article
Achieving Patency in Straight Canals Obturated with AH Plus Bioceramic Sealer: An Ex Vivo Study
by Inês Ferreira, Beatriz Fernandes, Ana Cristina Braga, Maria Ascensão Lopes and Irene Pina-Vaz
Appl. Sci. 2025, 15(11), 5855; https://doi.org/10.3390/app15115855 - 23 May 2025
Viewed by 399
Abstract
This study compared the efficacy of different solutions in achieving patency in teeth filled with AH Plus Bioceramic sealer. Eighty-five premolars with a straight canal were prepared. After sealer placement, a master gutta-percha cone was introduced 2 mm short of the working length. The teeth were stored at 37 °C and 100% humidity for five weeks before retreatment. Filling materials were removed up to the length of the gutta-percha cone. The canals were then randomly assigned to groups: G1 (control, no solution), G2 (5.25% NaOCl), G3 (17% EDTA), G4 (10% citric acid), and G5 (10% formic acid). Apical patency was attempted with a size 10 K-file for up to 10 min by a blinded operator. Additionally, sealer samples were immersed in the solutions and examined by scanning electron microscopy. The Kruskal–Wallis test was used for statistical analysis. Patency was achieved in all canals except one in the control group and one in the NaOCl group. No significant differences were found in the time required to achieve patency. Acid solutions had a greater impact on the sealer’s structural integrity, and a decalcifying effect of EDTA and citric acid was registered. Apical patency in straight canals obturated with AH Plus Bioceramic sealer was consistently achieved regardless of the solution used. Full article
(This article belongs to the Special Issue Advanced Dental Materials and Its Applications)
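The Kruskal–Wallis test used for the between-group comparison ranks the pooled observations and compares rank sums. A hedged sketch of the H statistic (no tie correction, and synthetic values rather than the study's patency times):

```python
# Kruskal-Wallis H statistic for k independent groups (assumes no tied values).
def kruskal_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}  # rank of each observation
    n = len(pooled)
    # Sum over groups of (rank sum)^2 / group size
    s = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

# Three fully separated groups give a large H (compare to chi-square, df = 2)
print(round(kruskal_h([1, 2, 3], [4, 5, 6], [7, 8, 9]), 1))  # -> 7.2
```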

40 pages, 371 KiB  
Article
Determinants and Drivers of Large Negative Book-Tax Differences: Evidence from S&P 500
by Sina Rahiminejad
J. Risk Financial Manag. 2025, 18(6), 291; https://doi.org/10.3390/jrfm18060291 - 23 May 2025
Viewed by 488
Abstract
Temporary book-tax differences (BTDs) serve as critical proxies for understanding corporate earnings management and tax planning. However, the drivers of large negative BTDs (LNBTDs)—where book income falls below taxable income—remain underexplored. This study investigates the determinants and components of LNBTDs, focusing on their relationship with deferred tax assets (DTAs) and liabilities (DTLs). Utilizing hand-collected data from the tax disclosures of S&P 500 firms’ 10-K filings (2007–2023), I analyze 4685 firm-year observations to identify specific accounting items driving LNBTDs. Findings reveal that deferred revenue, goodwill impairments, R&D, CapEx, environmental obligations, pensions, contingency liabilities, leases, and receivables are significant contributors, often generating substantial DTAs due to timing mismatches between book and tax recognition. Notably, high-tech industries, like the pharmaceutical, medical, and computers and software industries, exhibit pronounced LNBTDs, driven by upfront revenue recognition for tax purposes and deferred recognition for financial reporting, capitalization, amortization and depreciation effects, and other deferred tax components. Regression analyses confirm strong associations between these components and LNBTDs, with asymmetry in reversal patterns suggesting that initial differences do not always offset symmetrically over time. While prior research emphasizes large positive BTDs and tax avoidance, this study highlights economic and industry-specific characteristics as key LNBTD drivers, with limited evidence of earnings manipulation via deferred taxes. These insights enhance the value relevance of deferred tax disclosures and offer implications for reporting standards, tax policy, and research into BTD dynamics. Full article
(This article belongs to the Section Applied Economics and Finance)
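The mechanics behind an LNBTD are timing arithmetic: when an item such as deferred revenue is taxed up front but booked later, taxable income temporarily exceeds book income and a deferred tax asset arises. A toy illustration (the 21% rate and the amounts are assumptions, not figures from the study):

```python
# One temporary difference from deferred revenue (illustrative numbers).
book_income = 900.0       # revenue deferred for financial reporting
taxable_income = 1000.0   # the same revenue recognized up front for tax
tax_rate = 0.21           # assumed statutory rate

btd = book_income - taxable_income   # negative: book income below taxable income
dta = -btd * tax_rate                # timing difference reverses later -> deferred tax asset
print(btd, round(dta, 2))  # -100.0 21.0
```

The paper's point about asymmetric reversal is that such initial differences do not always unwind one-for-one in later periods.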
42 pages, 4293 KiB  
Article
Optimizing Hydrogen Liquefaction Efficiency Through Waste Heat Recovery: A Comparative Study of Three Process Configurations
by Seyed Masoud Banijamali, Adrian Ilinca, Ali Alizadeh Afrouzi and Daniel R. Rousse
Processes 2025, 13(5), 1349; https://doi.org/10.3390/pr13051349 - 28 Apr 2025
Viewed by 623
Abstract
Hydrogen (H2) liquefaction is an energy-intensive process, and improving its efficiency is critical for large-scale deployment in H2 infrastructure. Industrial waste heat recovery contributes to energy savings and environmental improvements in liquid H2 processes. This study proposes a comparative framework for industrial waste heat recovery in H2 liquefaction systems by examining three recovery cycles: an ammonia–water absorption refrigeration (ABR) unit, a diffusion absorption refrigeration (DAR) process, and a combined organic Rankine/Kalina plant. All scenarios incorporate 2 MW of industrial waste heat to improve precooling and reduce the external power demand. The simulations were conducted in Aspen HYSYS (V10) combined with MATLAB (R2022b) m-file code to model each configuration under consistent operating conditions. Detailed energy and exergy analyses are performed to assess performance. Among the three scenarios, the ORC/Kalina-based system achieves the lowest specific power consumption (4.306 kWh/kg LH2) and the highest exergy efficiency in the precooling unit (70.84%), making it the most energy-efficient solution. The DAR-based system shows slightly lower performance, while the ABR-based system achieves the highest overall exergy efficiency of 52.47% despite its reduced energy efficiency. By comparing three innovative configurations using the same industrial waste heat input, this work provides a valuable tool for selecting the most suitable design based on either energy performance or thermodynamic efficiency. The proposed methodology can serve as a foundation for future system optimization and scale-up. Full article
(This article belongs to the Special Issue Insights into Hydrogen Production Using Solar Energy)
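The headline figure of merit, specific power consumption, is net electrical power per unit of liquid hydrogen produced. A trivial sketch (the 1000 kg/h plant size is an assumption, chosen only to reproduce the reported 4.306 kWh/kg):

```python
def specific_power(net_power_kw, lh2_flow_kg_per_h):
    """kWh of electricity consumed per kg of liquid hydrogen produced."""
    return net_power_kw / lh2_flow_kg_per_h

# e.g. a hypothetical 1000 kg/h train drawing 4306 kW net
print(specific_power(4306.0, 1000.0))  # -> 4.306
```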

17 pages, 5465 KiB  
Article
A Machine Learning-Based Ransomware Detection Method for Attackers’ Neutralization Techniques Using Format-Preserving Encryption
by Jaehyuk Lee, Jinwook Kim, Hanjo Jeong and Kyungroul Lee
Sensors 2025, 25(8), 2406; https://doi.org/10.3390/s25082406 - 10 Apr 2025
Cited by 1 | Viewed by 1209
Abstract
Ransomware, a type of malware that first appeared in 1989, encrypts user files and demands money for decryption, causing increasing global damage. To reduce its impact, various file-based detection technologies are being developed; however, these have limitations, such as difficulty detecting ransomware that bypasses traditional methods like decoy files. A newer approach measures file entropy to detect infected files, but attackers counter this by using encoding algorithms such as Base64 to stay below detection thresholds. Attackers can also neutralize detection through format-preserving encryption (FPE), which encrypts files without changing their format, further complicating detection. In this article, we present a machine learning-based method for detecting ransomware-infected files encrypted with FPE, countering neutralization attacks that manipulate file entropy to evade detection. We employed various machine learning models, including K-Nearest Neighbors (KNN), Logistic Regression, and Decision Tree, and found that most trained models, except Logistic Regression and Multi-Layer Perceptron (MLP), effectively detected ransomware-infected files encrypted with FPE. The experimental results showed an average precision of 94.64% across various datasets, indicating that the proposed method effectively detects ransomware-infected files. The findings of this study therefore offer a way to address new ransomware attacks that aim to bypass entropy-based detection techniques, contributing to the advancement of ransomware detection and the protection of users’ files and systems. Full article
(This article belongs to the Special Issue Cyber Security and AI—2nd Edition)
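The entropy signal that FPE is designed to defeat is easy to state: conventionally encrypted bytes look uniform and approach the 8 bits/byte ceiling, while plaintext and format-preserving output sit lower. A minimal sketch of the feature (the sample strings are invented; the paper's KNN and Decision Tree classifiers would consume features like this):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (max 8.0 for uniform bytes)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

plain = b"hello world, plain text " * 40
uniform = bytes(range(256)) * 4        # stand-in for conventionally encrypted data
print(round(shannon_entropy(plain), 2))    # low: few distinct byte values
print(shannon_entropy(uniform))            # 8.0, the ceiling entropy checks look for
```

FPE-encrypted digits, for example, remain digits, so their entropy stays near that of digit strings (at most log2(10) ≈ 3.3 bits/byte); that is why a fixed entropy threshold fails and a learned classifier is needed.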

15 pages, 548 KiB  
Article
Centralized Hierarchical Coded Caching Scheme for Two-Layer Network
by Kun Zhao, Jinyu Wang and Minquan Cheng
Entropy 2025, 27(3), 316; https://doi.org/10.3390/e27030316 - 18 Mar 2025
Viewed by 390
Abstract
This paper considers a two-layer hierarchical network, where a server containing N files is connected to K1 mirrors and each mirror is connected to K2 users. Each mirror and each user has a cache memory of size M1 and M2 files, respectively. The server can only broadcast to the mirrors, and each mirror can only broadcast to its connected users. For such a network, we propose a novel coded caching scheme based on two known placement delivery arrays (PDAs). To fully utilize the cache memory of both the mirrors and the users, we first treat the mirrors and users as cache nodes of the same type, i.e., the cache memory of each mirror is regarded as an additional part of its connected users’ cache, and the server broadcasts messages to all mirrors according to a K1K2-user PDA in the first layer. In the second layer, each mirror first cancels useless file packets (if any) in the received messages and forwards them to its connected users, so that each user can decode the requested packets not cached by the mirror; the mirror then broadcasts coded subpackets to its connected users according to a K2-user PDA, so that each user can decode the requested packets cached by the mirror. The proposed scheme is extended to a heterogeneous two-layer hierarchical network, where the number of users connected to different mirrors may differ. Numerical comparison showed that the proposed scheme achieved lower coding delays than existing hierarchical coded caching schemes at most memory ratio points. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)

12 pages, 284 KiB  
Article
Coded Distributed Computing Under Combination Networks
by Yongcheng Yang, Yifei Huang, Xiaohuan Qin and Shenglian Lu
Entropy 2025, 27(3), 311; https://doi.org/10.3390/e27030311 - 16 Mar 2025
Viewed by 587
Abstract
Coded distributed computing (CDC) is a powerful approach to reducing the communication overhead in distributed computing frameworks by utilizing coding techniques. In this paper, we focus on the CDC problem in (H, L)-combination networks, where H APs act as intermediate pivots and K = C(H, L) workers (one per L-subset of APs) are connected to distinct subsets of L APs. Each worker processes a subset of the input file and computes intermediate values (IVs) locally, which are then exchanged via uplink and downlink transmissions through the AP station so that all workers can compute their assigned output functions. We first characterize the transmission scheme for the shuffle phase in a novel way, from the viewpoint of the coefficient matrix, and then obtain the scheme using a Combined Placement Delivery Array (CPDA). Compared with the baseline scheme, our scheme significantly improves the uplink and downlink communication loads while maintaining the robustness and efficiency of the combined multi-AP network. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)

24 pages, 5296 KiB  
Article
LSTM Attention-Driven Similarity Learning for Effective Bug Localization
by Geunseok Yang, Jinfeng Ji and Eontae Kim
Appl. Sci. 2025, 15(3), 1582; https://doi.org/10.3390/app15031582 - 4 Feb 2025
Viewed by 978
Abstract
Objective: The complexity of software systems, with their multifaceted functionalities and intricate source code structures, poses significant challenges for developers in identifying and resolving bugs. This study aims to address these challenges by proposing an efficient bug localization method that improves the accuracy and effectiveness of identifying faulty code based on bug reports. Method: We introduce a novel bug localization approach that integrates a Long Short-Term Memory (LSTM) attention mechanism with top-K code similarity learning. The proposed method preprocesses bug reports and source code files, calculates top-K code similarities using the BM25 algorithm, and trains an LSTM-Attention model to predict the most relevant buggy source code files. Results: The model was evaluated on six open-source projects (Tomcat, AspectJ, Birt, Eclipse, JDT, SWT) and demonstrated significant improvements over the baseline method, DNNLoc. Notably, the proposed approach improved accuracy across all projects, with average gains of 18% in prediction accuracy compared to the baseline. Conclusion: This study highlights the efficacy of combining similarity learning with attention mechanisms for bug localization. By streamlining debugging workflows and enhancing predictive accuracy, the proposed method offers a practical solution for improving software quality and reducing development costs. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
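The top-K similarity step pairs each bug report with its closest source files under BM25. A compact sketch of BM25 scoring over toy token lists (k1 = 1.5 and b = 0.75 are common defaults, an assumption here; real inputs would be the preprocessed report and file tokens):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["null", "pointer", "widget"],
        ["file", "parser", "error"],
        ["widget", "render", "null"]]
query = ["null", "widget"]
scores = bm25_scores(query, docs)
print(max(range(len(docs)), key=scores.__getitem__))  # index of the top-ranked file
```

In the paper these top-K candidates are then re-ranked by the LSTM-Attention model rather than taken as the final answer.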

19 pages, 7037 KiB  
Article
An Artificial Intelligence Home Monitoring System That Uses CNN and LSTM and Is Based on the Android Studio Development Platform
by Guo-Ming Sung, Sachin D. Kohale, Te-Hui Chiang and Yu-Jie Chong
Appl. Sci. 2025, 15(3), 1207; https://doi.org/10.3390/app15031207 - 24 Jan 2025
Cited by 1 | Viewed by 933
Abstract
This paper presents an artificial intelligence home environment monitoring system built with the Android Studio development platform. A database was constructed within a server to store sensor data. The proposed system comprises multiple sensors, the Message Queuing Telemetry Transport (MQTT) communication protocol, cloud data storage and computation, and end-device control. The mobile application is backed by MongoDB, a document-oriented NoSQL database management system written in C++, which serves as the database for processing the big sensor data. The k-nearest neighbors (KNN) algorithm was used to impute missing data. Node-RED was used within the server as a data-receiving, storage, and computing environment that is convenient to manage and maintain. Data on indoor temperature, humidity, and carbon dioxide concentration are transmitted to a mobile phone application through the MQTT protocol for real-time display and monitoring. The system can control a fan or warning light through the mobile application to maintain the ambient temperature inside the house and to warn users of emergencies. A long short-term memory (LSTM) model and a convolutional neural network (CNN) model were used to predict indoor temperature, humidity, and carbon dioxide concentrations. Average relative errors in the predicted values of humidity and carbon dioxide concentration were approximately 0.0415% and 0.134%, respectively, with data imputed using the KNN algorithm. For indoor temperature prediction, the LSTM model had a mean absolute percentage error of 0.180% and a root-mean-squared error of 0.042 °C; the CNN–LSTM model had a mean absolute percentage error of 1.370% and a root-mean-squared error of 0.117 °C. Full article
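The two error metrics quoted for the temperature models, MAPE and RMSE, are worth pinning down since they carry different units (percent vs. °C). A small sketch on invented readings, not the paper's sensor data:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root-mean-squared error, in the data's own units (here deg C)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [25.0, 25.5, 26.0, 24.8]       # synthetic indoor temperatures
predicted = [25.1, 25.4, 26.1, 24.7]    # synthetic model outputs
print(round(mape(actual, predicted), 3), round(rmse(actual, predicted), 3))
```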

22 pages, 12407 KiB  
Article
Analyzing Archive Transit Multibeam Data for Nodule Occurrences
by Mark E. Mussett, David F. Naar, David W. Caress, Tracey A. Conrad, Alastair G. C. Graham, Max Kaufmann and Marcia Maia
J. Mar. Sci. Eng. 2024, 12(12), 2322; https://doi.org/10.3390/jmse12122322 - 18 Dec 2024
Cited by 1 | Viewed by 1090
Abstract
We show that analyzing archived and future multibeam backscatter and bathymetry data, in tandem with regional environmental parameters, can help to identify polymetallic nodule fields in the world’s oceans. Extensive archived multibeam transit data through remote areas of the world’s oceans are available for data mining. New multibeam data will be made available through the Seabed 2030 Project. Uniformity of along- and across-track backscatter, backscatter intensity, angular response, water depth, nearby ground-truth data, local slope, sedimentation rate, and seafloor age provide thresholds for discriminating areas that are permissive to nodule presence. A case study of this methodology is presented, using archived multibeam data from a remote section of the South Pacific along the Foundation Seamounts between the Selkirk paleomicroplate and East Pacific Rise, that were collected during the 1997 Foundation–Hotline expedition on R/V Atalante. The 12 kHz Simrad EM12D multibeam data and the other aforementioned data strongly suggest that a previously unknown nodule occurrence exists along the expedition transit. We also compare the utility of three different backscatter products to demonstrate that scans of printed backscatter maps can be a useful substitute for digital backscatter mosaics calculated using primary multibeam data files. We show that this expeditious analysis of legacy multibeam data could characterize benthic habitat types efficiently in remote deep-ocean areas, prior to more time-consuming and expensive video and sample acquisition surveys. Additionally, utilizing software other than specialty sonar processing programs during this research allows an exploration of how multibeam data products could be interrogated by a broader range of scientists and data users. Future mapping, video, and sampling cruises in this area would test our prediction and investigate how far it might extend to the north and south. Full article
(This article belongs to the Section Marine Environmental Science)
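The threshold-screening logic described above can be expressed as a simple predicate over the per-pixel parameters. Every cutoff below is an invented placeholder that shows the shape of the screen, not a value calibrated in the study:

```python
# Screen a seafloor pixel for nodule-permissive conditions.
# All thresholds here are illustrative assumptions, not the study's values.
def permissive(backscatter_db, slope_deg, depth_m, sed_rate_cm_per_kyr):
    return (backscatter_db > -25              # high, uniform backscatter
            and slope_deg < 3                 # low local slope
            and 3500 < depth_m < 6000         # abyssal depths typical of nodule fields
            and sed_rate_cm_per_kyr < 1)      # slow sedimentation

print(permissive(-20, 1.0, 4800, 0.4))   # candidate pixel
print(permissive(-35, 1.0, 4800, 0.4))   # low backscatter -> rejected
```

In practice each threshold would be tuned against nearby ground-truth data, and uniformity of the along- and across-track backscatter would be tested over a window rather than per pixel.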
10 pages, 1640 KiB  
Proceeding Paper
Lamiaceae Plants and Cardiovascular Health: A Patent-Driven Path to Functional Foods
by Reda El Boukhari and Ahmed Fatimi
Biol. Life Sci. Forum 2024, 40(1), 2; https://doi.org/10.3390/blsf2024040002 - 12 Dec 2024
Viewed by 1116
Abstract
The Lamiaceae family of medicinal plants holds immense promise in the development of functional foods aimed at preventing and treating cardiovascular diseases (CVDs). These plants are rich in bioactive compounds, such as phenolic acids, flavonoids, and terpenoids, which act as potent enzyme inhibitors and exhibit strong antioxidant, anti-inflammatory, and antihyperlipidemic properties. Key phenolic compounds, such as rosmarinic acid and caffeic acid, along with flavonoids like luteolin, apigenin, and quercetin, contribute to these health benefits. Essential oils derived from Lamiaceae species have demonstrated diverse biological activities, including vasorelaxant, thrombolytic, and cytotoxic effects, making them valuable in nutraceutical formulations. This study analyzes and investigates global patent trends related to Lamiaceae plants targeting cardiovascular health, focusing on applications in nutraceuticals and functional foods. Using patent databases, we examine the technological landscape, identify leading applicants, and evaluate the geographical distribution of innovations. Our analysis reveals a notable increase in patent filings since the late 1970s, peaking in 2007, indicating a growing interest in leveraging Lamiaceae plants for cardiovascular health. Tianjin Tasly Pharmaceuticals Co., Ltd. emerges as a leading applicant, reflecting active engagement by pharmaceutical companies alongside independent researchers and organizations. Geographically, China leads patent activity, followed by the United States and Europe, underscoring global interest and market potential. Key International Patent Classification (IPC) codes identified include A61K36/53 (Lamiaceae extracts), A61P9/00 (cardiovascular drugs), and A61P9/10 (treatments of ischemic or atherosclerotic diseases). 
These findings highlight the therapeutic and commercial relevance of Lamiaceae bioactives, offering insights into their potential in advancing cardiovascular health and shaping the future of the functional food and nutraceutical industries.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Foods)

9 pages, 741 KiB  
Article
Multicenter Study on the Impact of the Masker Babble Spectrum on the Acceptable Noise Level (ANL) Test
by Mark Laureyns, Giorgia Pugliese, Melinda Freyaldenhoven Bryan, Marieke Willekens, Anna Maria Gasbarre, Diego Zanetti, Julien Gilson, Paul Van Doren and Federica Di Berardino
Audiol. Res. 2024, 14(6), 1075-1083; https://doi.org/10.3390/audiolres14060088 - 7 Dec 2024
Viewed by 1047
Abstract
Introduction: The Acceptable Noise Level (ANL) is defined as the difference between the most comfortable listening level (MCL) for speech and the maximum background noise level an individual will tolerate; it is calculated by subtracting the latter from the MCL. The ANL test has been used over time to predict hearing aid use and the impact of digital noise reduction. This study analyzes this impact by using different masker babble spectra when performing the ANL test in both hearing-impaired and healthy subjects in three different languages (Dutch, French, and Italian). Materials and Methods: A total of 198 patients underwent the ANL test in their native language using a standardized protocol. The babble file was speech-weighted to match the long-term spectrum of the specific ANL language version. ANL was proposed in three different masking conditions: with multitalker Matched babble speech noise, with the same masking signal with the spectrum reduced from 2 kHz onwards (High Cut), and with the spectrum increased from 2 kHz onwards (High Boost). Results: In all of the comparisons among the three languages, ANL with High Boost noise gave significantly higher (worse) scores than ANL with Matched noise (p-value S1: <0.0001, S2: <0.0001, S3: 0.0003) and ANL with High Cut noise (p-value S1: 0.0002, S2: <0.0001, S3: <0.0001). The ANL values did not show any significant correlation with age and gender. In French, a weak correlation was found between ANL with High Cut noise and the Fletcher index of the worst ear. In Italian, a weak correlation was found between both ANL with Matched and High Boost noise and the Fletcher index of the best ear. Conclusions: ANL with High Boost added to noise stimuli was less acceptable for all patients in all of the languages. The ANL results did not vary in relation to the patients’ characteristics. This study confirms that the ANL test has potential application for clinical use regardless of the native language spoken.
(This article belongs to the Special Issue Hearing Loss: Causes, Symptoms, Diagnosis, and Treatment)
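The ANL computation described in the abstract is a simple subtraction of two measured levels. A minimal sketch, with illustrative example values that are not data from the study:

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL = MCL - BNL: the most comfortable listening level for speech
    minus the highest background noise level the listener accepts
    (both in dB). A lower ANL means more background noise is tolerated,
    which has been associated with greater hearing aid use."""
    return mcl_db - bnl_db

# Hypothetical measurements for one listener:
anl = acceptable_noise_level(mcl_db=65.0, bnl_db=58.0)  # 7.0 dB
```

A listener who accepts noise above their MCL would produce a negative ANL, so the score has no fixed lower bound.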
