Search Results (42)

Search Parameters:
Keywords = block storage extension

19 pages, 1994 KB  
Article
IVCLNet: A Hybrid Deep Learning Framework Integrating Signal Decomposition and Attention-Enhanced CNN-LSTM for Lithium-Ion Battery SOH Prediction and RUL Estimation
by Yulong Pei, Hua Huo, Yinpeng Guo, Shilu Kang and Jiaxin Xu
Energies 2025, 18(21), 5677; https://doi.org/10.3390/en18215677 - 29 Oct 2025
Viewed by 638
Abstract
Accurate prediction of the degradation trajectory and estimation of the remaining useful life (RUL) of lithium-ion batteries are crucial for ensuring the reliability and safety of modern energy storage systems. However, many existing approaches rely on deep or highly complex models to achieve high accuracy, often at the cost of computational efficiency and practical applicability. To tackle this challenge, we propose a novel hybrid deep-learning framework, IVCLNet, which predicts the battery’s state-of-health (SOH) evolution and estimates RUL by identifying the end-of-life threshold (SOH = 80%). The framework integrates Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN), Variational Mode Decomposition (VMD), and an attention-enhanced Long Short-Term Memory (LSTM) network. IVCLNet leverages a cascade decomposition strategy to capture multi-scale degradation patterns and employs multiple indirect health indicators (HIs) to enrich feature representation. A lightweight Convolutional Block Attention Module (CBAM) is embedded to strengthen the model’s perception of critical features, guiding the one-dimensional convolutional layers to focus on informative components. Combined with LSTM-based temporal modeling, the framework ensures both accuracy and interpretability. Extensive experiments conducted on two publicly available lithium-ion battery datasets demonstrated that IVCLNet significantly outperforms existing methods in terms of prediction accuracy, robustness, and computational efficiency. The findings indicate that the proposed framework is promising for practical applications in battery health management systems. Full article
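The end-of-life rule the framework uses (RUL is the number of cycles until the predicted SOH first crosses the 80% threshold) can be sketched in a few lines. The helper and the SOH trajectory below are illustrative stand-ins, not the paper's code or data:

```python
def estimate_rul(soh_series, current_cycle, eol_threshold=0.80):
    """Cycles remaining until predicted SOH first drops below the
    end-of-life threshold; None if the trajectory never crosses it."""
    for cycle, soh in enumerate(soh_series):
        if cycle >= current_cycle and soh < eol_threshold:
            return cycle - current_cycle
    return None

# Hypothetical predicted SOH trajectory (fraction of rated capacity),
# one value per cycle -- illustrative numbers, not dataset values.
predicted_soh = [1.00, 0.95, 0.90, 0.85, 0.82, 0.79, 0.75]
```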

28 pages, 5988 KB  
Article
Triple Active Bridge Modeling and Decoupling Control
by Andrés Camilo Henao-Muñoz, Mohammed B. Debbat, Antonio Pepiciello and José Luis Domínguez-García
Electronics 2025, 14(21), 4224; https://doi.org/10.3390/electronics14214224 - 29 Oct 2025
Cited by 1 | Viewed by 638
Abstract
The increased penetration of power electronics interfaced resources in modern power systems is unlocking new opportunities and challenges. New concepts like multiport converters can further enhance the efficiency and power density of power electronics-based solutions. The triple active bridge is an isolated multiport converter with soft switching and high voltage gain that can integrate different sources, storage, and loads, or act as a building block for modular systems. However, the triple active bridge suffers from power flow cross-coupling, which affects its dynamic performance if it is not removed or mitigated. Unlike the extensive literature on two-port power converters, studies on modeling and control comparison for multiport converters are still lacking. Therefore, this paper presents and compares different modeling and decoupling control approaches applied to the triple active bridge converter, highlighting their benefits and limitations. The converter operation and modulation are introduced, and modeling and control strategies based on single phase shift power flow control are detailed. The switching model, generalized full-order average model, and reduced-order model derivations are presented thoroughly, and a comparison reveals that first harmonic approximations can be detrimental when modeling the triple active bridge. The model accuracy is also highly sensitive to the operating point, showing that the generalized average model better represents some dynamics than the lossless reduced-order model. In addition, three decoupling control strategies are derived, aiming to mitigate cross-coupling effects to ensure decoupled power flow and improve system stability. To assess their performance, the TAB converter is subjected to power and voltage disturbances and parameter uncertainty. A comprehensive comparison reveals that linear PI controllers with an inverse decoupling matrix can effectively control the TAB but exhibit large settling times and voltage deviations due to persistent cross-coupling. Moreover, the decoupling matrix is highly sensitive to inaccuracies in the converter’s model parameters. In contrast, linear active disturbance rejection control and sliding mode control based on a linear extended state observer achieve rapid stabilization, demonstrating strong decoupling capability under disturbances. Both control strategies also demonstrate robust performance under parameter uncertainty. Full article
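The inverse-decoupling idea that the comparison starts from can be illustrated with a toy gain matrix. The values of `G` below are invented for illustration, not taken from the paper's TAB model:

```python
import numpy as np

# Hypothetical small-signal gain matrix of a TAB: entry G[i, j] couples
# phase shift j to port power i (illustrative values only).
G = np.array([[1.0, 0.4],
              [0.3, 1.0]])

# Inverse decoupling: feeding the PI outputs through D = G^-1 makes the
# compensated plant G @ D the identity, so each loop sees a
# single-input single-output channel.
D = np.linalg.inv(G)
compensated = G @ D
```

The sensitivity noted in the abstract follows directly: if the entries of `G` drift from their nominal values, `G @ D` is no longer the identity and residual cross-coupling reappears.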
(This article belongs to the Special Issue Power Electronics and Renewable Energy System)

23 pages, 4503 KB  
Article
Design and Evaluation of a Cloud Computing System for Real-Time Measurements in Polarization-Independent Long-Range DAS Based on Coherent Detection
by Abdusomad Nur, Almaz Demise and Yonas Muanenda
Sensors 2024, 24(24), 8194; https://doi.org/10.3390/s24248194 - 22 Dec 2024
Cited by 1 | Viewed by 1198
Abstract
CloudSim is a versatile simulation framework for modeling cloud infrastructure components that supports customizable and extensible application provisioning strategies, allowing for the simulation of cloud services. On the other hand, Distributed Acoustic Sensing (DAS) is a ubiquitous technique used for measuring vibrations over an extended region. Data handling in DAS remains an open issue, as many applications need continuous monitoring of a volume of samples whose storage and processing in real time require high-capacity memory and computing resources. We employ the CloudSim tool to design and evaluate a cloud computing scheme for long-range, polarization-independent DAS using coherent detection of Rayleigh backscattering signals and uncover valuable insights on the evolution of the processing times for a diverse range of Virtual Machine (VM) capacities as well as sizes of blocks of processed data. Our analysis demonstrates that the choice of VM significantly impacts computational times in real-time measurements in long-range DAS and that achieving polarization independence introduces minimal processing overheads in the system. Additionally, the increase in the block size of processed samples per cycle results in diminishing increments in overall processing times per batch of new samples added, demonstrating the scalability of cloud computing schemes in long-range DAS and its capability to manage larger datasets efficiently. Full article
(This article belongs to the Special Issue Optical Sensors for Industrial Applications)

16 pages, 5072 KB  
Article
Experimental Investigation of Enhanced Oil Recovery Mechanism of CO2 Huff and Puff in Saturated Heavy Oil Reservoirs
by Xiaorong Shi, Qian Wang, Ke Zhao, Yongbin Wu, Hong Dong, Jipeng Zhang and Ye Yao
Energies 2024, 17(24), 6391; https://doi.org/10.3390/en17246391 - 19 Dec 2024
Cited by 3 | Viewed by 1148
Abstract
Due to the significance of carbon utilization and storage, CO2 huff and puff is increasingly receiving attention. However, the mechanisms and effects of CO2 huff and puff extraction in medium to deep saturated heavy oil reservoirs remain unclear. Therefore, in this study, by targeting the medium to deep saturated heavy oil reservoirs in the block Xia of the Xinjiang oil field, measurements of physical properties were conducted through PVT analysis and viscosity measurement to explore the dissolution and diffusion characteristics of CO2-degassed and CO2-saturated oil systems. Multiple sets of physical simulation of CO2 huff and puff in medium to deep saturated heavy oil reservoirs were conducted using a one-dimensional core holder to evaluate the EOR mechanism of CO2 huff and puff. The results demonstrate that the solubility of CO2 in degassed crude oil is linearly correlated with pressure. Higher pressure effectively increases the solubility of CO2, reaching 49.1 m3/m3 at a saturation pressure of 10.0 MPa, thus facilitating oil expansion and viscosity reduction. Meanwhile, crude oil saturated with CH4 still retains the capacity to further dissolve additional CO2, reaching 24.5 m3/m3 of incremental CO2 solubilization at 10.0 MPa, and the hybrid effect of CO2 and CH4 reduces oil viscosity to 1161 mPa·s, which is slightly lower than the pure CO2 dissolution case. Temperature increases suppress solubility but promote molecular diffusion, allowing CH4 and CO2 to maintain a certain solubility at high temperatures. In terms of dynamic dissolution and diffusion, the initial CO2 dissolution rate is high, reaching 0.009 m3/(m3·min), the mid-term dissolution rate stabilizes at approximately 0.002 m3/(m3·min), and the dissolution capability significantly decreases later on. CO2 exhibits high molecular diffusion capability in gas-saturated crude oil, with a diffusion coefficient of 8.62 × 10−7 m2/s. 
For CO2 huff and puff, oil production is positively correlated with the CO2 injection rate and the cycle injection volume; it initially increases with the extension of the soak time but eventually decreases. Therefore, the optimal injection speed, injection volume, and soak time should be determined in conjunction with reservoir characteristics. During the huff and puff process, the bottom hole pressure should be higher than the bubble point pressure of the crude oil to prevent gas escape. Moreover, as the huff and puff cycles increase, the content of saturates in the oil rises, while those of aromatic, resin, and asphaltene decrease, leading to a gradual deterioration of the huff and puff effect. This study provides a comprehensive reference method and conclusions for studying the fluid property changes and enhanced recovery mechanisms in medium to deep heavy oil reservoirs with CO2 huff and puff. Full article
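The reported linear pressure-solubility relation can be expressed as a one-line model. The slope below is back-computed from the single reported point (49.1 m³/m³ at 10.0 MPa) purely for illustration; the true coefficient would come from the PVT measurements:

```python
def co2_solubility(pressure_mpa, slope=4.91):
    """Linear solubility model S = k * P in m^3/m^3 of degassed crude
    oil. The slope is back-computed from the single reported value
    (49.1 m^3/m^3 at 10.0 MPa) and is for illustration only."""
    return slope * pressure_mpa
```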

20 pages, 7525 KB  
Article
Study on Quantitative Assessment of CO2 Leakage Risk Along the Wellbore Under the Geological Storage of the Salt Water Layer
by Shaobo Gao, Shanpo Jia, Yanwei Zhu, Long Zhao, Yuxuan Cao, Xianyin Qi and Fatian Guan
Energies 2024, 17(21), 5302; https://doi.org/10.3390/en17215302 - 25 Oct 2024
Cited by 5 | Viewed by 1383
Abstract
In the process of CO2 geological storage in the salt water layer, CO2 leakage along the wellbore will seriously affect the effective storage of CO2 in the target geological area. To solve this problem, based on the investigation of a large number of failure cases of CO2 storage along the wellbore and failure cases of gas storage wells in the injection stage of the wellbore, the influencing factors of CO2 leakage risk along the wellbore were investigated in detail. Based on the analytic hierarchy process (AHP) and extension theory, 17 basic evaluation indexes were selected from 6 perspectives to establish the evaluation index system of CO2 leakage risk along the wellbore. The established evaluation system was used to evaluate the leakage risk of a CO2 storage well in the X gas field of BZ Block. The results showed that the influencing factors of tubing had the smallest weight, followed by cement sheath, and the influencing factors of casing–cement sheath interface and cement sheath–formation interface had the largest weight, accounting for 23.73% and 34.32%, respectively. The CO2 storage well leakage risk evaluation grade was I, with minimal leakage risk. The CO2 storage effect was excellent. The evaluation system comprehensively considers the tubing string, cement sheath, and micro-annulus interface, which can provide a scientific basis for the risk assessment of CO2 leakage along the wellbore under the CO2 geological storage of the salt water layer. Full article
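The AHP step (deriving index weights from a pairwise-comparison matrix via its principal eigenvector) can be sketched as follows. The 3 × 3 matrix `A` is a hypothetical example for three factor groups (tubing, cement sheath, interfaces), not the paper's actual judgments:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix: A[i, j] states how much more
# important factor i is than factor j; reciprocal by construction.
A = np.array([[1.0, 1 / 3, 1 / 5],   # tubing
              [3.0, 1.0,   1 / 2],   # cement sheath
              [5.0, 2.0,   1.0]])    # interfaces

# AHP weights: principal eigenvector of A, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
```

With these illustrative judgments the interface factors receive the largest weight, matching the ordering reported in the abstract.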
(This article belongs to the Section D: Energy Storage and Application)

14 pages, 3443 KB  
Article
Learning the Meta Feature Transformer for Unsupervised Person Re-Identification
by Qing Li, Chuan Yan and Xiaojiang Peng
Mathematics 2024, 12(12), 1812; https://doi.org/10.3390/math12121812 - 11 Jun 2024
Cited by 1 | Viewed by 2011
Abstract
Although unsupervised person re-identification (Re-ID) has drawn increasing research attention, it still faces the challenge of learning discriminative features in the absence of pairwise labels across disjoint camera views. To tackle the issue of label scarcity, researchers have delved into clustering and multilabel learning using memory dictionaries. Although effective in improving unsupervised Re-ID performance, these methods require substantial computational resources and introduce additional training complexity. To address this issue, we propose a conceptually simple yet effective learnable module, named the meta feature transformer (MFT). MFT is a streamlined, lightweight network architecture that operates without the need for complex networks or feature memory bank storage. It primarily focuses on learning interactions between sample features within small groups using a transformer mechanism in each mini-batch. It then generates a new sample feature for each group through a weighted sum. The main benefits of MFT arise from two aspects: (1) it allows for the use of numerous new samples for training, which significantly expands the feature space and enhances the network’s generalization capabilities; (2) the trainable attention weights highlight the importance of samples, enabling the network to focus on more useful or distinguishable samples. We validate our method on two popular large-scale Re-ID benchmarks, where extensive evaluations show that our MFT outperforms previous methods and significantly improves Re-ID performance. Full article
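The group-level weighted sum at the core of MFT can be sketched with plain NumPy. Scoring each vector against the group mean below is a simple stand-in for the learned transformer attention, not the paper's actual mechanism:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def meta_feature(group):
    """Combine a small group of sample features into one new feature:
    score each vector against the group mean (a stand-in for learned
    attention), softmax the scores, and return the weighted sum."""
    group = np.asarray(group, dtype=float)
    anchor = group.mean(axis=0)
    scores = group @ anchor / np.sqrt(group.shape[1])
    weights = softmax(scores)
    return weights @ group
```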
(This article belongs to the Section E1: Mathematics and Computer Science)

17 pages, 3364 KB  
Article
A Novel Lightweight Model for Underwater Image Enhancement
by Botao Liu, Yimin Yang, Ming Zhao and Min Hu
Sensors 2024, 24(10), 3070; https://doi.org/10.3390/s24103070 - 11 May 2024
Cited by 7 | Viewed by 4871
Abstract
Underwater images suffer from low contrast and color distortion. In order to improve the quality of underwater images and reduce storage and computational resources, this paper proposes a lightweight model, Rep-UWnet, to enhance underwater images. The model consists of a fully connected convolutional network and three densely connected RepConv blocks in series, with the input image connected to the output of each block through a skip connection. First, the original underwater image is subjected to feature extraction by the SimSPPF module, and the result is summed with the original image to produce the network input. Then, the first convolutional layer, with a kernel size of 3 × 3, generates 64 feature maps, and the multi-scale hybrid convolutional attention module enhances the useful features by reweighting the features of different channels. Second, three RepConv blocks are connected to reduce the number of parameters in extracting features and increase the test speed. Finally, a convolutional layer with three kernels generates the enhanced underwater image. Our method reduces the number of parameters from 2.7 M to 0.45 M (around 83% reduction), yet outperforms state-of-the-art algorithms in extensive experiments. Furthermore, we demonstrate that our Rep-UWnet effectively improves high-level vision tasks like edge detection and single image depth estimation. This method not only surpasses the compared methods in objective quality, but also significantly improves the contrast, colorimetry, and clarity of underwater images in subjective quality. Full article

23 pages, 615 KB  
Article
Comparison of MOEAs in an Optimization-Decision Methodology for a Joint Order Batching and Picking System
by Fabio Maximiliano Miguel, Mariano Frutos, Máximo Méndez, Fernando Tohmé and Begoña González
Mathematics 2024, 12(8), 1246; https://doi.org/10.3390/math12081246 - 19 Apr 2024
Cited by 1 | Viewed by 1614
Abstract
This paper investigates the performance of a two-stage multi-criteria decision-making procedure for order scheduling problems. These problems are represented by a novel nonlinear mixed integer program. Hybridizations of three Multi-Objective Evolutionary Algorithms (MOEAs) based on dominance relations are studied and compared to solve small, medium, and large instances of the joint order batching and picking problem in storage systems with multiple blocks of two and three dimensions. The performance of these methods is compared using a set of well-known metrics and running an extensive battery of simulations based on a methodology widely used in the literature. The main contributions of this paper are (1) the hybridization of MOEAs to deal efficiently with the combination of orders in one or several picking tours, scheduling them for each picker, and (2) a multi-criteria approach to scheduling multiple picking teams for each wave of orders. Based on the experimental results obtained, it can be stated that, in environments with a large number of different items and orders with high variability in volume, the proposed approach can significantly reduce operating costs while allowing the decision-maker to anticipate the positioning of orders in the dispatch area. Full article
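The dominance relation underlying all three dominance-based MOEAs can be written compactly. This minimization-convention sketch is generic, not the paper's implementation, and the example objective vectors are invented:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Non-dominated subset of a list of objective vectors, e.g.
    (picking cost, tardiness) for candidate batching/picking plans."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```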

16 pages, 7192 KB  
Article
A Self-Healing Gel with an Organic–Inorganic Network Structure for Mitigating Circulation Loss
by Cheng Wang, Jinsheng Sun, Yifu Long, Hongjun Huang, Juye Song, Ren Wang, Yuanzhi Qu and Zexing Yang
Gels 2024, 10(2), 93; https://doi.org/10.3390/gels10020093 - 25 Jan 2024
Cited by 1 | Viewed by 2438
Abstract
Lost circulation control remains a challenge in drilling operations. Self-healing gels, capable of self-healing in fractures and forming an entire gel block, exhibit excellent resilience and erosion resistance, and have thus been studied extensively for lost circulation control. In this study, layered double hydroxide, acrylic acid, 2-acrylamido-2-methylpropane sulfonic acid, and CaCl2 were employed to synthesize an organic–inorganic nanocomposite gel with self-healing properties. The chemical properties of the nanocomposite gels were characterized using X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray photoelectron spectroscopy, and thermogravimetric analysis. Layered double hydroxide could be dispersed and exfoliated in the mixed solution of acrylic acid and 2-acrylamido-2-methylpropane sulfonic acid, and the swelling behavior, self-healing time, rheological properties, and mechanical performance of the nanocomposite gels were influenced by the addition of layered double hydroxide and Ca2+. The optimized nanocomposite gel AC6L3, at 90 °C, exhibits a self-healing time of only 3.5 h in bentonite mud, with a storage modulus of 4176 Pa, a tensile strength of 6.02 kPa, and an adhesive strength of 1.94 kPa. In comparison to conventional gels, the nanocomposite gel with self-healing capabilities demonstrated superior pressure-bearing capacity. Based on these characteristics, the nanocomposite gel proposed in this work holds promise as a candidate lost circulation material. Full article
(This article belongs to the Special Issue Gels for Oil Drilling and Enhanced Recovery (2nd Edition))

17 pages, 11761 KB  
Article
RepECN: Making ConvNets Better Again for Efficient Image Super-Resolution
by Qiangpu Chen, Jinghui Qin and Wushao Wen
Sensors 2023, 23(23), 9575; https://doi.org/10.3390/s23239575 - 2 Dec 2023
Cited by 2 | Viewed by 2320
Abstract
Traditional Convolutional Neural Network (ConvNet, CNN)-based image super-resolution (SR) methods have lower computation costs, making them more friendly for real-world scenarios. However, they suffer from lower performance. On the contrary, Vision Transformer (ViT)-based SR methods have achieved impressive performance recently, but these methods often suffer from high computation costs and model storage overhead, making them hard to meet the requirements in practical application scenarios. In practical scenarios, an SR model should reconstruct an image with high quality and fast inference. To handle this issue, we propose a novel CNN-based Efficient Residual ConvNet enhanced with structural Re-parameterization (RepECN) for a better trade-off between performance and efficiency. A stage-to-block hierarchical architecture design paradigm inspired by ViT is utilized to keep the state-of-the-art performance, while the efficiency is ensured by abandoning the time-consuming Multi-Head Self-Attention (MHSA) and by re-designing the block-level modules based on CNN. Specifically, RepECN consists of three structural modules: a shallow feature extraction module, a deep feature extraction module, and an image reconstruction module. The deep feature extraction module comprises multiple ConvNet Stages (CNS), each containing 6 Re-Parameterization ConvNet Blocks (RepCNB), a head layer, and a residual connection. The RepCNB utilizes larger kernel convolutions rather than MHSA to enhance the capability of learning long-range dependencies. In the image reconstruction module, an upsampling module consisting of nearest-neighbor interpolation and pixel attention is deployed to reduce parameters and maintain reconstruction performance, while bicubic interpolation on another branch allows the backbone network to focus on learning high-frequency information.
The extensive experimental results on multiple public benchmarks show that our RepECN can achieve 2.5∼5× faster inference than the state-of-the-art ViT-based SR model with better or competitive super-resolving performance, indicating that our RepECN can reconstruct high-quality images with fast inference. Full article
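The structural re-parameterization idea RepECN relies on (folding parallel training-time branches into a single inference-time kernel) can be verified numerically. This sketch merges a hypothetical parallel 1 × 1 branch into a 3 × 3 kernel; it illustrates the general trick, not RepECN's specific blocks:

```python
import numpy as np

def merge_branches(k3, k1):
    """Fold a parallel 1x1 branch (scalar weight k1) into a 3x3 kernel
    by adding it at the kernel centre, so one convolution reproduces
    the two-branch training topology at inference time."""
    merged = k3.copy()
    merged[1, 1] += k1
    return merged

def conv2d(img, k):
    """Naive valid-mode 2D cross-correlation, used only for the check."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
k3 = rng.standard_normal((3, 3))
k1 = 0.7  # scalar weight of the hypothetical 1x1 branch

two_branch = conv2d(img, k3) + k1 * img[1:-1, 1:-1]
one_branch = conv2d(img, merge_branches(k3, k1))
```

The two outputs match exactly, which is why the merged model keeps training-time accuracy while running as a single plain convolution.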

14 pages, 926 KB  
Article
Optimizing the Performance of the Sparse Matrix–Vector Multiplication Kernel in FPGA Guided by the Roofline Model
by Federico Favaro, Ernesto Dufrechou, Juan P. Oliver and Pablo Ezzatti
Micromachines 2023, 14(11), 2030; https://doi.org/10.3390/mi14112030 - 31 Oct 2023
Cited by 4 | Viewed by 3043
Abstract
The widespread adoption of massively parallel processors over the past decade has fundamentally transformed the landscape of high-performance computing hardware. This revolution has recently driven the advancement of FPGAs, which are emerging as an attractive alternative to power-hungry many-core devices in a world increasingly concerned with energy consumption. Consequently, numerous recent studies have focused on implementing efficient dense and sparse numerical linear algebra (NLA) kernels on FPGAs. To maximize the efficiency of these kernels, a key aspect is the exploration of analytical tools to comprehend the performance of the developments and guide the optimization process. In this regard, the roofline model (RLM) is a well-known graphical tool that facilitates the analysis of computational performance and identifies the primary bottlenecks of a specific software when executed on a particular hardware platform. Our previous efforts advanced in developing efficient implementations of the sparse matrix–vector multiplication (SpMV) for FPGAs, considering both speed and energy consumption. In this work, we propose an extension of the RLM that enables optimizing runtime and energy consumption for NLA kernels based on sparse blocked storage formats on FPGAs. To test the power of this tool, we use it to extend our previous SpMV kernels by leveraging a block-sparse storage format that enables more efficient data access. Full article
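The roofline bound itself is a one-liner. The peak, bandwidth, and per-nonzero SpMV traffic figures below are illustrative assumptions, not measurements from the paper:

```python
def roofline(peak_gflops, bw_gbs, flops, bytes_moved):
    """Attainable performance (GFLOP/s) under the roofline model: the
    minimum of the compute peak and bandwidth * arithmetic intensity."""
    intensity = flops / bytes_moved  # FLOP per byte of memory traffic
    return min(peak_gflops, bw_gbs * intensity)

# Illustrative CSR SpMV traffic per nonzero: one 8-byte value plus one
# 4-byte column index for 2 FLOPs, i.e. 1/6 FLOP/byte, which pins the
# kernel on the memory roof of this hypothetical FPGA.
attainable = roofline(peak_gflops=100.0, bw_gbs=10.0,
                      flops=2.0, bytes_moved=12.0)
```

Blocked sparse formats raise the arithmetic intensity by amortizing index traffic over a block of values, which is exactly how they move the kernel up the memory roof.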
(This article belongs to the Special Issue FPGA Applications and Future Trends)

24 pages, 6631 KB  
Article
A Study on Two-Warehouse Inventory Systems with Integrated Multi-Purpose Production Unit and Partitioned Rental Warehouse
by Viswanath Jagadeesan, Thilagavathi Rajamanickam, Vladimira Schindlerova, Sreelakshmi Subbarayan and Robert Cep
Mathematics 2023, 11(18), 3986; https://doi.org/10.3390/math11183986 - 19 Sep 2023
Cited by 4 | Viewed by 3080
Abstract
A study of two-warehouse inventory systems with a production unit is developed in this article with some constraints of practical applicability, in order to optimize the total production cycle and its cost. The production unit evolves in three different states to retain its quality and prolong its lifetime: the state of producing items, the state of reworking the identified defective items, and the state of being idle. It processes the items up to a certain time point. The screening process starts immediately after a product comes out of the production unit. The classified non-defective items are first stored in the own warehouse (OW) until it is filled to its maximum capacity, and the remaining items are stored in the first block, RW1, of the rental warehouse (RW). All identified defective items are stored in the second block, RW2, of RW. The holding cost of an item is higher in RW than in OW. All defective items are sent to the production unit for rework as a single lot immediately after production stops, and the reworked items are stored in RW1 to satisfy demand. The items in RW1 have higher priority in satisfying demand after the production unit stops producing new items, so as to reduce the total cost. Demand is assumed to be both time- and advertisement-dependent and is encouraged once production starts. The deterioration rate differs between the two warehouses. No backlogging is allowed. The study is directed at achieving the optimum total cycle cost through the attainment of the optimum production time slot and the entire cycle of the system. We have arrived at explicit expressions for the total cost function of the entire production cycle. An analytic optimization process of the discriminant method is employed in the form of an algorithm to arrive at the optimum total cost. A numerical illustration of a specific environment is provided. The implications of the current research work are as follows.
The optimum utility of production units in three different states in arriving at the optimum total cost is extensively studied with respect to deterioration, demand, and production rates. It also examined the influence of fluctuating deterioration, demand, and production parameters in arriving at optimum deterioration cost, holding cost, and total cycle cost, as they have important managerial insights. The effect of rental charges on the optimum total cost is examined as the system is used for multi-purpose storage. Full article
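The cost trade-off behind the optimum production time slot can be illustrated with a hypothetical convex cycle-cost function. The functional form and all parameters below are invented for illustration, and a coarse grid search stands in for the paper's analytic discriminant method:

```python
def cycle_cost(t, setup=500.0, holding=20.0, deterioration=5.0):
    """Hypothetical cost per unit time for a production time slot t:
    amortized setup cost plus holding and deterioration costs that
    grow with the stock carried over the cycle (illustrative only)."""
    return setup / t + (holding + deterioration) * t

# Coarse grid search over candidate time slots, standing in for the
# analytic optimization used in the paper.
grid = [t / 100 for t in range(10, 1001)]
t_opt = min(grid, key=cycle_cost)
```

For this form the analytic optimum is t* = sqrt(setup / (holding + deterioration)), and the grid search lands on the nearest grid point.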

18 pages, 3215 KB  
Article
SSDStacked-BLS with Extended Depth and Width: Infrared Fault Diagnosis of Rolling Bearings under Dual Feature Selection
by Jianmin Zhou, Lulu Liu and Xiwen Shen
Mathematics 2023, 11(17), 3677; https://doi.org/10.3390/math11173677 - 25 Aug 2023
Cited by 1 | Viewed by 1590
Abstract
In fault diagnosis, broad learning systems (BLS) have been applied in recent years. However, the best fault diagnosis cannot be guaranteed by width node extension alone, so a stacked broad learning system (stacked BLS) was proposed. Most of the methods for choosing the number of depth layers use optimization algorithms that tend to increase computation time. In addition, the data under single feature selection are not sufficiently representative, and effective features are easily lost. To solve these problems, this article proposes an infrared fault diagnosis model for rolling bearings based on the integration of principal component analysis and singular value decomposition (IPS) and a stacked BLS with a self-selected depth model (SSDStacked-BLS). First, 72 second-order statistical features are extracted from the pre-processed infrared images of rolling bearings. Next, feature selection is performed using IPS. The IPS feature selection module consists of principal component analysis (PCA) and singular value decomposition (SVD). Feature selection is performed by PCA and SVD separately, and the results are then stitched together to form a new feature. This ensures comprehensive coverage of the infrared image features. Finally, the acquired features are input into SSDStacked-BLS. This model establishes a data storage group for the residual training characteristics of stacked BLS, adding one block at a time. The accuracy rate of each newly added block is output and saved to the data storage group. If the diagnostic rate fails to increase three consecutive times, the block stacking is stopped and the results are output. IPS-SSDStacked-BLS achieved an accuracy of 0.9667 in 0.1775 s. This is almost five times faster than stacked BLS optimized using the grid search method. Compared with the original BLS, its accuracy was 0.0445 higher and the computation time was comparable.
Compared with IPS-SVM, IPS-RF, IPS-1DCNN and 2DCNN, IPS-SSDStacked-BLS was more advantageous in terms of accuracy and time consumption. Full article
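The self-selected depth rule described in the abstract (add one block at a time, save each block's accuracy to a data storage group, and stop after three consecutive failures to improve) can be sketched as an early-stopping loop. This is a hypothetical illustration, not the authors' implementation: `add_block` and `evaluate` are placeholder callbacks standing in for stacking one more BLS block and measuring its diagnostic accuracy.

```python
# Sketch of the self-selected-depth stopping rule: stack blocks one at a
# time, record each block's accuracy in a history list (the "data storage
# group"), and stop once accuracy fails to increase `patience` times in a
# row. `add_block` and `evaluate` are assumed placeholders.

def select_depth(add_block, evaluate, max_blocks=20, patience=3):
    history = []           # accuracy after each newly added block
    best = float("-inf")
    stall = 0              # consecutive additions without improvement
    for _ in range(max_blocks):
        add_block()                  # stack one more BLS block
        acc = evaluate()             # diagnostic accuracy of current stack
        history.append(acc)
        if acc > best:
            best, stall = acc, 0
        else:
            stall += 1
            if stall >= patience:    # three failed increases: stop stacking
                break
    return best, history
```

Because the loop stops as soon as deeper stacking stops paying off, it avoids the exhaustive sweep over depths that grid-search optimization would perform, which is consistent with the reported speed advantage.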

27 pages, 372 KB  
Article
Dual-Layer Index for Efficient Traceability Query of Food Supply Chain Based on Blockchain
by Chaopeng Guo, Yiming Liu, Meiyu Na and Jie Song
Foods 2023, 12(11), 2267; https://doi.org/10.3390/foods12112267 - 5 Jun 2023
Cited by 15 | Viewed by 2979
Abstract
Blockchain techniques have been introduced to achieve decentralized and transparent traceability systems, which are critical components of food supply chains. Academia and industry have tried to enhance the efficiency of blockchain-based food supply chain traceability queries, but the cost of such queries remains high. In this paper, we propose a dual-layer index structure for optimizing traceability queries in blockchain, consisting of an external index and an internal index. The external layer accelerates jumps between blocks, while the internal layer accelerates transaction search within a block, all while preserving the original characteristics of the blockchain. We establish an experimental environment by modeling the blockchain storage module and run extensive simulation experiments. The results show that although the dual-layer index introduces modest extra storage and construction time, it significantly improves traceability query efficiency: the query rate is seven to eight times that of the original blockchain. Full article
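The dual-layer idea can be illustrated with a minimal dict-based sketch, under the assumption (the abstract does not specify the concrete data structures) that the external index maps each traced item to the block heights that mention it, and a per-block internal index maps the item to transaction positions inside that block. All names here are illustrative, not the authors' API.

```python
# Hypothetical sketch of a dual-layer traceability index: the external
# layer lets a query jump directly between relevant blocks instead of
# scanning the whole chain; the internal layer then locates the item's
# transactions inside each of those blocks.

class DualLayerIndex:
    def __init__(self):
        self.external = {}   # item_id -> list of block heights containing it
        self.internal = {}   # block height -> {item_id: [tx offsets]}

    def add_block(self, height, txs):
        """txs: list of (item_id, payload) tuples in block order."""
        per_block = {}
        for offset, (item_id, _payload) in enumerate(txs):
            per_block.setdefault(item_id, []).append(offset)
            blocks = self.external.setdefault(item_id, [])
            if not blocks or blocks[-1] != height:
                blocks.append(height)
        self.internal[height] = per_block

    def trace(self, item_id):
        """Return (height, offset) for every transaction touching item_id,
        visiting only the blocks that actually contain it."""
        return [(h, off)
                for h in self.external.get(item_id, [])
                for off in self.internal[h][item_id]]
```

A trace query thus touches only the blocks listed in the external index, which is the mechanism behind the reported seven- to eight-fold speedup over scanning the original chain; the extra memory for the two dictionaries corresponds to the modest storage overhead the paper reports.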

19 pages, 7443 KB  
Article
Biodegradable Preformed Particle Gel (PPG) Made of Natural Chitosan Material for Water Shut-Off Application
by Reem Elaf, Ahmed Ben Ali, Mohammed Saad, Ibnelwaleed A. Hussein, Hassan Nimir and Baojun Bai
Polymers 2023, 15(8), 1961; https://doi.org/10.3390/polym15081961 - 20 Apr 2023
Cited by 23 | Viewed by 3521
Abstract
Oil and gas extraction frequently produces substantial volumes of produced water, leading to several mechanical and environmental issues. Several methods have been applied over the decades, among which chemical processes such as in-situ crosslinked polymer gels and preformed particle gels (PPGs) are currently the most effective. This study developed a green, biodegradable PPG made of PAM and chitosan as a blocking agent for water shutoff, helping to counter the toxicity of several commercially used PPGs. The ability of chitosan to act as a crosslinker was confirmed by FTIR spectroscopy and observed by scanning electron microscopy. Extensive swelling capacity measurements and rheological experiments were performed to determine the optimal PAM/Cs formulation across several PAM and chitosan concentrations and under typical reservoir conditions of salinity, temperature, and pH. The optimum PAM concentration with 0.5 wt% chitosan was 5–9 wt%, while the optimum chitosan amount with 6.5 wt% PAM was in the 0.25–0.5 wt% range, as these concentrations produce PPGs with high swellability and sufficient strength. The swelling capacity of PAM/Cs is lower in high saline water (HSW) with a TDS of 67.2976 g/L than in fresh water, which is related to the osmotic pressure gradient between the swelling medium and the PPG: up to 80.37 g/g in freshwater versus 18.73 g/g in HSW. The storage moduli were higher in HSW than in freshwater, ranging over 1695–5000 Pa in freshwater and 2053–5989 Pa in HSW. The storage modulus of PAM/Cs samples was higher in a near-neutral medium (pH = 6); the fluctuating behavior under different pH conditions is related to electrostatic repulsion and hydrogen bond formation. The increase in swelling capacity with progressively increasing temperature is associated with hydrolysis of amide groups to carboxylate groups. The sizes of the swollen particles are controllable, designed to be 0.63–1.62 mm in DIW and 0.86–1.00 mm in HSW. PAM/Cs showed promising swelling and rheological characteristics while demonstrating long-term thermal and hydrolytic stability under high-temperature, high-salinity conditions. Full article
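The swelling capacities quoted above (g/g) follow the standard gravimetric definition: water taken up per gram of dry gel. Since the abstract does not state whether mass gain or total swollen mass per dry gram was used, the gain-based form below is an assumption, shown only to make the units concrete.

```python
# Gravimetric swelling capacity in grams of absorbed water per gram of dry
# PPG. The gain-based definition (m_swollen - m_dry) / m_dry is an assumed
# convention, not confirmed by the abstract.

def swelling_capacity(m_dry_g, m_swollen_g):
    """Return swelling capacity (g water absorbed per g dry gel)."""
    if m_dry_g <= 0:
        raise ValueError("dry mass must be positive")
    return (m_swollen_g - m_dry_g) / m_dry_g
```

Under this definition, a 1 g dry particle swelling to about 81.4 g in freshwater corresponds to the reported ~80.37 g/g, versus roughly 19.7 g total in HSW for 18.73 g/g, which makes the osmotic suppression of swelling in saline water easy to see numerically.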
(This article belongs to the Special Issue Polymeric Gels in Oil and Gas Applications)
