Search Results (1,132)

Search Parameters:
Keywords = multiple streams

26 pages, 1942 KB  
Review
Peach Palm (Bactris gasipaes) as a Sustainable Source of Plant Proteins, Dietary Fiber and Other Functional Ingredients: Recovery Techniques and Functional Food Applications
by Kartik Sharma, Nattaya Konsue, Samart Sai-Ut, Ekasit Onsaard, Wanli Zhang, Shusong Wu, Jia-Qiang Huang, Young Hoon Jung and Saroat Rawdkuen
Foods 2026, 15(4), 736; https://doi.org/10.3390/foods15040736 - 16 Feb 2026
Abstract
The rise in global population and the growing demand for food supply have directed the attention of researchers towards sustainable, plant-based sources, particularly underutilized crops. Bactris gasipaes is one such underutilized crop with significant functional food value. During processing, 84% of the total weight of the palm is discarded in the form of waste, or so-called by-products, which are a rich source of bioactive compounds. These compounds can be effectively recovered through modern extraction and valorization techniques. This review critically examines the extraction methods, nutritional profiles, and valorization opportunities of peach palm, highlighting both traditional uses and innovative processing strategies. Recent advances enable the targeted recovery of multiple peach palm fractions, e.g., proteins are commonly extracted using alkaline methods, lipids and carotenoids via green solvents or supercritical CO2, and starch and dietary fiber through hydrothermal or downstream separation processes. Key nutritional findings demonstrate that peach palm fractions offer significant protein content (with isolates reaching 40 to 60%), a favorable starch profile (up to 79%), and abundant unsaturated lipids and carotenoids, making them suitable for gluten-free, protein-enriched, and functional ingredient applications. Previous studies have focused mainly on the edible pulp of peach palm for protein, lipid, and carotenoid extraction, whereas other fractions such as peel, seed, and processing residues remain comparatively underexplored due to technological and safety constraints. This review provides a consolidated and critical overview of recent advances in fractionation and green extraction strategies for multiple value-added streams (proteins, lipids, carotenoids, starch, and dietary fiber), highlighting knowledge gaps and opportunities for sustainable food ingredient development. Full article
14 pages, 1938 KB  
Article
Nitrogen Adsorption Measurement for Pore Structure Characterisation of Cement–Oil Shale Ash Composite Exposed to an Aggressive Salt Environment
by Regina Kalpokaitė-Dičkuvienė
Materials 2026, 19(4), 772; https://doi.org/10.3390/ma19040772 - 16 Feb 2026
Abstract
Despite cement remaining a dominant material in the construction industry, researchers are increasingly exploring strategies to reduce its consumption by incorporating supplementary cementitious materials or by developing alternative binder systems utilising various ashes produced by power plants during the combustion of different waste streams. In this context, the present study investigates the influence of two types of oil shale ash on the pore structure of C–S–H under aggressive environmental conditions. To this end, a comprehensive pore structure analysis was conducted using nitrogen gas physisorption, applying multiple analytical approaches, including Dubinin–Radushkevich, Horvath–Kawazoe, quenched solid density functional theory, and Barrett–Joyner–Halenda, for pore volume and pore size distribution. The pore surface fractal dimensions obtained by the Neimark–Kiselev and Frenkel–Halsey–Hill methods were compared. The results revealed that the deterioration of the C–S–H structure depends on the ash type and the exposure duration to the sulfate–chloride solution. Full article
28 pages, 2555 KB  
Article
Deep Learning-Based Video Watermarking: A Robust Framework for Spatial–Temporal Embedding and Retrieval
by Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Francisco Javier Garcia-Ugalde and Manuel Cedillo-Hernandez
Future Internet 2026, 18(2), 104; https://doi.org/10.3390/fi18020104 - 16 Feb 2026
Abstract
This paper introduces a deep learning-based framework for video watermarking that achieves robust, imperceptible, and fast embedding under a wide range of visual and temporal conditions. The proposed method is organized into seven modules that collaboratively perform frame encoding, semantic region analysis, block selection, watermark transformation, and spatiotemporal injection, followed by decoding and multi-objective optimization. A key component of the framework is its ability to learn a visual importance map, which guides a saliency-based block selection strategy. This allows the model to embed the watermark in perceptually redundant regions while minimizing distortion. To enhance resilience, the watermark is distributed across multiple frames, leveraging temporal redundancy to improve recovery under frame loss, insertion, and reordering. Experimental evaluations conducted on a large-scale video dataset demonstrate that the proposed method achieves high fidelity while maintaining low decoding error rates under compression, noise, and temporal distortions. The proposed method processes 38 video frames per second on a standard GPU. Additional ablation studies confirm the contribution of each module to the system’s robustness. This framework offers a promising solution for watermarking in streaming, surveillance, and content verification applications. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
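
The temporal-redundancy idea in this abstract (spreading the watermark across frames so it survives loss and reordering) can be illustrated with a simple repetition-plus-majority-vote scheme. The sketch below is an assumption-laden analogy only: the repetition factor, the frame-to-bit mapping, and the bit-level payload model are invented for illustration and do not reflect the paper's learned embedding and decoding networks.

```python
# Toy illustration of temporal redundancy: each watermark bit is replicated
# into several frames and recovered by per-bit majority voting, so the payload
# survives frame loss or reordering. The repetition factor and bit-per-frame
# payload model are illustrative assumptions, not the paper's method.
import random

def spread_watermark(bits, n_frames, repeat=3):
    """Assign each bit to `repeat` distinct frames; returns frame -> [(bit_idx, bit)]."""
    payload = {f: [] for f in range(n_frames)}
    for idx, bit in enumerate(bits):
        for r in range(repeat):
            payload[(idx * repeat + r) % n_frames].append((idx, bit))
    return payload

def recover_watermark(payload, n_bits, surviving_frames):
    """Majority-vote each bit from whichever frames survived (order is irrelevant)."""
    votes = [[] for _ in range(n_bits)]
    for f in surviving_frames:
        for idx, bit in payload.get(f, []):
            votes[idx].append(bit)
    return [int(sum(v) > len(v) / 2) if v else 0 for v in votes]

if __name__ == "__main__":
    random.seed(1)
    bits = [random.randint(0, 1) for _ in range(32)]
    payload = spread_watermark(bits, n_frames=60)
    kept = random.sample(range(60), 45)   # simulate losing 25% of the frames
    print(recover_watermark(payload, len(bits), kept) == bits)
```
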
21 pages, 21467 KB  
Article
Exploitation of Multi-Sensor UAS Surveying for Monitoring the Volcanic Unrest at Vulcano Island (September 2021–June 2024)
by Matteo Cagnizi, Mauro Coltelli, Luigi Lodato, Peppe Junior Valentino D’Aranno, Maria Marsella and Francesco Rossi
Remote Sens. 2026, 18(4), 601; https://doi.org/10.3390/rs18040601 - 14 Feb 2026
Viewed by 61
Abstract
In September 2021, significant changes in the geophysical and geochemical parameters on Vulcano Island were recorded by the surveillance network activities and periodic surveys. Between October 2021 and June 2024, additional surveys were conducted to acquire LIDAR, thermal, and RGB datasets for the generation of Digital Terrain Models (DTMs), orthophotos, and fumarole field maps. These data were collected using DJI Matrice 300 UAS platforms. Precision positioning was ensured through a POS/NAV RTK georeferencing approach. The instrumentation included Genius R-Fans-16 and DJI Zenmuse L1 laser scanners for structural mapping, alongside Zenmuse H20T infrared cameras for the thermal detection of potential instabilities on the volcano flanks, focused on the northern area and summit of Gran Cratere La Fossa; these surveys were subsequently repeated in May 2022, October 2022, October 2023, and June 2024. Additionally, 3D reconstruction targeted morphological variations in unstable areas like the cone top, Forgia Vecchia, and the 1988 landslide site. In May 2022, anomalous degassing in the Eastern Bay led to increased gas and hydrothermal fluid emissions, causing water whitening in front of Baia di Levante. Optical-thermal monitoring, both on land and at sea, detected multiple hydrothermal gas streams, aiding in assessing the magnitude and areal extent of fumarolic fields. These findings contribute to establishing a comprehensive monitoring approach for understanding the volcanic unrest evolution cost-effectively and safely. Full article
26 pages, 851 KB  
Review
Exploring the Work Perceptions and Experiences of Gig Workers Globally: A Scoping Review
by Sameera Hussain-Khan, Shanya Reuben and Anna Meyer-Weitz
Adm. Sci. 2026, 16(2), 98; https://doi.org/10.3390/admsci16020098 - 13 Feb 2026
Viewed by 190
Abstract
The rapid expansion of the gig economy is reshaping work globally, producing both new opportunities and significant challenges for workers across diverse regions. This scoping review mapped global evidence on gig workers’ experiences between 2018 and 2024, following PRISMA-ScR guidelines. A comprehensive search of academic databases (EBSCOhost, Scopus, Sage, Springer, Taylor & Francis, Wiley, and Google Scholar) was conducted, yielding 1986 records, of which 26 met the inclusion criteria. Data were charted and synthesised to identify patterns in how gig workers describe their work experiences within broader socioeconomic and platform-based structures. Three interconnected themes emerged. First, freedom and flexibility remain central attractions of gig work, particularly for younger workers who value autonomy, scheduling control, and opportunities for combining multiple income streams. Second, gig work experiences vary significantly across demographic and geographic contexts, revealing unequal pathways shaped by gender, education, skill, migration status, and national labour-market conditions. Third, across all gig-work categories, workers reported precarity, including inconsistent income, job insecurity, algorithmic surveillance, limited benefits, and emotional strain. Taken together, the findings illustrate how autonomy and vulnerability coexist within the gig economy, highlighting the importance of policies and supports that address intersecting forms of inequality and promote safe, stable, and dignified work in a rapidly evolving labour landscape. Full article
31 pages, 5111 KB  
Article
Research on Movement Intention Recognition Based on CNN-LSTM
by Xiaohua Shi, Jiawei Hou, Jiyang Wang, Hao Lu, Sixiu Li, Xiangwei Meng and Kaiyuan Li
Electronics 2026, 15(4), 797; https://doi.org/10.3390/electronics15040797 - 12 Feb 2026
Viewed by 113
Abstract
Existing methods for recognizing motion intent in lower limb rehabilitation robots focus on spatial feature extraction while neglecting movement continuity, thus failing to extract temporal features. This paper proposes a movement intention recognition model based on a CNN-LSTM parallel dual-stream spatio-temporal neural network, taking surface electromyography (sEMG) signals as the core data. This model concurrently extracts temporal and spatial features from sEMG signals, integrating dual-dimensional information to comprehensively explore deep signal characteristics. By overcoming the limitations of traditional single-feature extraction, it significantly enhances recognition accuracy. Experimental results from movement intention recognition studies involving multiple subjects demonstrate an average recognition accuracy of 97%, providing reliable technical support for precise intent recognition and human–robot collaborative control in lower limb rehabilitation robots. Full article
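
The parallel dual-stream idea described above maps naturally onto a small neural-network definition. The following is a minimal PyTorch sketch, not the authors' model: the 8-channel input, 200-sample window, layer sizes, and concatenation-based fusion are illustrative assumptions.

```python
# Minimal sketch of a parallel CNN-LSTM dual-stream classifier for windowed
# sEMG signals. All layer sizes, the 8-channel input, the 200-sample window,
# and the concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn

class DualStreamCNNLSTM(nn.Module):
    def __init__(self, n_channels=8, n_classes=6, hidden=64):
        super().__init__()
        # Spatial stream: 1-D convolutions over the time axis of each window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                    # -> (batch, 64, 1)
        )
        # Temporal stream: LSTM over the raw multichannel sequence.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        # Fuse both feature vectors and classify the movement intention.
        self.head = nn.Linear(64 + hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels), e.g. a 200-sample sEMG window.
        spatial = self.cnn(x.transpose(1, 2)).squeeze(-1)   # (batch, 64)
        _, (h_n, _) = self.lstm(x)                          # h_n: (1, batch, hidden)
        temporal = h_n[-1]                                   # (batch, hidden)
        return self.head(torch.cat([spatial, temporal], dim=1))

if __name__ == "__main__":
    model = DualStreamCNNLSTM()
    window = torch.randn(4, 200, 8)        # 4 windows, 200 samples, 8 electrodes
    print(model(window).shape)             # torch.Size([4, 6])
```
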
29 pages, 8492 KB  
Article
Dual-Stream Hybrid Attention Network for Robust Intelligent Spectrum Sensing
by Bixue Song, Yongxin Feng, Fan Zhou and Peiying Zhang
Computers 2026, 15(2), 120; https://doi.org/10.3390/computers15020120 - 11 Feb 2026
Viewed by 108
Abstract
UAV communication, leveraging high mobility and flexible deployment, is gradually becoming an important component of 6G integrated air–ground networks. With the expansion of aerial services, air–ground spectrum resources are increasingly scarce, and spectrum sharing and opportunistic access have become key technologies for improving spectrum utilization. Spectrum sensing is the prerequisite for UAVs to perform dynamic access and avoid causing interference to primary users. However, in air–ground links, the channel time variability caused by Doppler effects, carrier frequency offset, and Rician fading can weaken feature separability, making it difficult for deep learning-based spectrum sensing methods to maintain reliable detection in complex environments. In this paper, a dual-stream hybrid-attention spectrum sensing method (DSHA) is proposed, which represents the received signal simultaneously as a time-domain I/Q sequence and an STFT time-frequency map to extract complementary features and employs a hybrid attention mechanism to model key intra-branch dependencies and achieve inter-branch interaction and fusion. Furthermore, a noise-consistent paired training strategy is introduced to mitigate the bias induced by noise randomness, thereby enhancing weak-signal discrimination capability. Simulation results show that under different false-alarm constraints, the proposed method achieves higher detection probability in low-SNR scenarios as well as under fading and CFO perturbations. In addition, compared with multiple typical baselines, DSHA exhibits better robustness and generalization; under Rician channels, its detection probability is improved by about 28.6% over the best baseline. Full article
(This article belongs to the Special Issue Wireless Sensor Networks in IoT)
18 pages, 6606 KB  
Data Descriptor
Annotated IoT Dataset of Waste Collection Events
by Peter Tarábek, Andrej Michalek, Roman Hriník, Ľubomír Králik and Karol Decsi
Data 2026, 11(2), 38; https://doi.org/10.3390/data11020038 - 11 Feb 2026
Viewed by 87
Abstract
This work presents a curated dataset of multimodal sensor measurements from Internet of Things (IoT) units mounted on waste collection vehicles. Each unit records multiple data streams including GPS position, vehicle velocity, radar-based container presence, accelerometer readings of the lifting arm, and RFID tag identifiers of the bins. The dataset provides two complementary forms of annotation: (1) algorithmically generated events that were manually cleaned through visual inspection of sensor signals, offering large-scale coverage across 5 vehicles over a total of 25 collection days, and (2) manually validated events derived from synchronized video recordings, representing ground truth for 3 vehicles over 8 collection days. In total, the dataset contains 12,391 annotated waste collection events. The dataset spans diverse operational conditions with varying container sizes and includes both RFID-equipped and non-RFID bins. It can be used to train and evaluate machine learning models for event detection, anomaly recognition, or explainability studies, and to support practical applications such as Pay-as-you-throw (PAYT) waste management schemes. By combining multimodal sensor signals with reliable annotations, the dataset represents a unique resource for advancing research in smart waste collection and the broader field of IoT-enabled urban services. Full article
(This article belongs to the Section Information Systems and Data Management)
29 pages, 2638 KB  
Article
Satellite-Maritime Communication Network Based on RSMA and RIS: Sum Rate Maximization and Transmission Time Minimization
by Ying Zhang, Yuandi Zhao, Yongkang Chen, Weixiang Zhou, Zhihua Hu, Xinqiang Chen and Guowei Chen
J. Mar. Sci. Eng. 2026, 14(4), 342; https://doi.org/10.3390/jmse14040342 - 10 Feb 2026
Viewed by 116
Abstract
The maritime wireless communication network (MWCN) faces challenges such as limited coverage, inaccurate channel state information (CSI), and the sparse distribution of maritime vessel users. To overcome the above challenges, this paper proposes a low Earth orbit satellite (LEO) MWCN based on rate-splitting multiple access (RSMA) and reconfigurable intelligent surface (RIS). Common data streams transmit broadcast-shared information to all vessel users. Private data streams provide differentiated supplements. The primary optimization objective is to maximize the sum rate. The transmission time is also introduced as a supplementary performance indicator to assess the system’s transmission capability. To overcome the problems of imperfect CSI and the low efficiency of the weighted minimum mean square error (WMMSE) algorithm, a block coordinate descent (BCD) algorithm is proposed based on the deep unfolding method (DU) and momentum-accelerated projection gradient descent (PGD). Numerical results show that DU-WMMSE reduces the number of convergence iterations from 8 to 4, improves the sum rate by 11.06%, and achieves lower transmission time. In addition, active RIS mitigates severe fading more effectively in complex channels. The proposed scheme also exhibits excellent scalability. Full article
(This article belongs to the Section Ocean Engineering)
23 pages, 4812 KB  
Article
Improving Flood Simulation Performance of Distributed Hydrological Model in the Plain–Hilly Transition Zone via DEM Stream Burning and PSO
by Zhiwei Huang, Yangbo Chen and Kai Wang
Remote Sens. 2026, 18(4), 555; https://doi.org/10.3390/rs18040555 - 10 Feb 2026
Viewed by 177
Abstract
Accurate flood simulation and forecasting in plain–hilly transition zones remain challenging due to limitations of medium- and low-resolution digital elevation models (DEMs), which often produce discontinuous drainage networks and misaligned confluence paths. This study evaluates an integrated improvement framework that combines DEM stream-burning and automatic parameter calibration to enhance the flood-simulation performance of a physically based distributed hydrological model (the Liuxihe Model). The framework was tested in the Beimiaoji Watershed (upper Huaihe River Basin) using 12 observed flood events: one event for parameter calibration via Particle Swarm Optimization (PSO) and 11 events for independent validation. Model performance was assessed using multiple metrics, including the Nash–Sutcliffe Efficiency (NSE), peak error (PE), and peak-timing error (PT). Results indicate that stream-burning substantially improves river-network extraction, and that the combined application of DEM correction and PSO-based calibration markedly enhances model performance. The findings suggest that the proposed, cost-effective correction–calibration pathway can improve operational flood simulations in terrain-sensitive regions without relying on costly high-resolution DEMs, and thus provides a practical reference for similar basins. Full article
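
The PSO-based calibration step described above can be sketched generically: a swarm of candidate parameter sets is scored against one observed event with the Nash–Sutcliffe Efficiency and nudged toward the best solutions found so far. In the sketch below, the placeholder run_model(), the parameter bounds, and the swarm settings are illustrative assumptions standing in for the Liuxihe Model and its real parameter set.

```python
# Minimal sketch of PSO calibration of a hydrological model against an observed
# flood event. run_model(), the bounds, and the swarm settings are placeholders,
# not the Liuxihe Model configuration used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def run_model(params, forcing):
    """Placeholder for a distributed-model run returning simulated discharge."""
    a, b = params
    return a * forcing + b              # stand-in for routed streamflow

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is perfect, below 0 is worse than the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pso_calibrate(obs, forcing, bounds, n_particles=20, n_iter=50,
                  w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_score = x.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_score = x[0].copy(), -np.inf
    for _ in range(n_iter):
        for i in range(n_particles):
            score = nse(run_model(x[i], forcing), obs)
            if score > pbest_score[i]:
                pbest_score[i], pbest[i] = score, x[i].copy()
            if score > gbest_score:
                gbest_score, gbest = score, x[i].copy()
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                           # keep parameters in bounds
    return gbest, gbest_score

if __name__ == "__main__":
    forcing = rng.random(100)            # synthetic rainfall-like input
    obs = 2.0 * forcing + 0.5            # "observed" discharge for the calibration event
    best, best_nse = pso_calibrate(obs, forcing, bounds=([0.0, 0.0], [5.0, 2.0]))
    print(best, round(best_nse, 3))
```
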
23 pages, 6932 KB  
Article
RocSync: Millisecond-Accurate Temporal Synchronization for Heterogeneous Camera Systems
by Jaro Meyer, Frédéric Giraud, Joschua Wüthrich, Marc Pollefeys, Philipp Fürnstahl and Lilian Calvet
Sensors 2026, 26(3), 1036; https://doi.org/10.3390/s26031036 - 5 Feb 2026
Viewed by 219
Abstract
Accurate spatiotemporal alignment of multi-view video streams is essential for a wide range of dynamic-scene applications such as multi-view 3D reconstruction, pose estimation, and scene understanding. However, synchronizing multiple cameras remains a significant challenge, especially in heterogeneous setups combining professional- and consumer-grade devices, visible and infrared sensors, or systems with and without audio, where common hardware synchronization capabilities are often unavailable. This limitation is particularly evident in real-world environments, where controlled capture conditions are not feasible. In this work, we present a low-cost, general-purpose synchronization method that achieves millisecond-level temporal alignment across diverse camera systems while supporting both visible (RGB) and infrared (IR) modalities. The proposed solution employs a custom-built LED Clock that encodes time through red and infrared LEDs, allowing visual decoding of the exposure window (start and end times) from recorded frames for millisecond-level synchronization. We benchmark our method against hardware synchronization and achieve a residual error of 1.34 ms RMSE across multiple recordings. In further experiments, our method outperforms light-, audio-, and timecode-based synchronization approaches and directly improves downstream computer vision tasks, including multi-view pose estimation and 3D reconstruction. Finally, we validate the system in large-scale surgical recordings involving over 25 heterogeneous cameras spanning both IR and RGB modalities. This solution simplifies and streamlines the synchronization pipeline and expands access to advanced vision-based sensing in unconstrained environments, including industrial and clinical applications. Full article
(This article belongs to the Section Sensing and Imaging)
32 pages, 44876 KB  
Article
SDLS: A Two-Stream Architecture with Self-Distillation and Local Streams for Remote Sensing Image Scene Classification
by Xinliang Ma, Junwei Luo, Shuiping Ni, Xiaohong Zhang and Runze Ding
Remote Sens. 2026, 18(3), 498; https://doi.org/10.3390/rs18030498 - 3 Feb 2026
Viewed by 222
Abstract
Remote sensing image scene classification holds significant application value and has long been a research hotspot in remote sensing. However, remote sensing images contain diverse objects and complex backgrounds. Reducing background interference while focusing on key target regions in the images remains a challenge, which limits the potential improvement of classification accuracy. In this paper, a local image generation module (LIGM) is proposed to generate weights for the original images. The resulting local images, generated by weighting the original images, effectively focus on key target regions while suppressing background regions. Based on the LIGM, a two-stream architecture with self-distillation and local streams (SDLS) is proposed. The self-distillation stream extracts features from the original images using a convolutional neural network (CNN) and two MobileNetV2 networks. Furthermore, a multiplex-guided attention (MGA) module is introduced into this stream to facilitate cross-network attention-guided learning between the CNN and MobileNetV2 features. In the local stream, a MobileNetV2 network is employed to extract features from the local images. The classification logits produced by the two streams are fused, resulting in the final SDLS classification score. Experimental results demonstrate that SDLS achieves competitive performance on multiple datasets. Full article
(This article belongs to the Section AI Remote Sensing)
14 pages, 2177 KB  
Article
Adaptive Multi-Camera Fusion and Calibration for Large-Scale Multi-Vehicle Cooperative Simulation Scenarios
by Hui Zhang, Chenyu Xia and Huantao Zeng
Sensors 2026, 26(3), 977; https://doi.org/10.3390/s26030977 - 3 Feb 2026
Viewed by 192
Abstract
In the development of multi-vehicle cooperative hardware-in-the-loop (HIL) simulation platforms based on machine vision, accurate vehicle pose estimation is crucial for achieving efficient cooperative control. However, monocular vision systems inevitably suffer from limited fields of view and insufficient image resolution during target detection, making it difficult to meet the requirements of large-scale, multi-target real-time perception. To address these challenges, this paper proposes an engineering-oriented multi-camera cooperative vision detection method, designed to maximize processing efficiency and real-time performance while maintaining detection accuracy. The proposed approach first projects the imaging results from multiple cameras onto a unified physical plane. By precomputing and caching the image stitching parameters, the method enables fast and parallelized image mosaicking. Experimental results demonstrate that, under typical vehicle speeds and driving angles, the stitched images achieve a 93.41% identification code recognition rate and a 99.08% recognition accuracy. Moreover, with high-resolution image (1440 × 960) inputs, the system can stably output 30 frames per second of stitched image streams, fully satisfying the dual requirements of detection precision and real-time processing for engineering applications. Full article
(This article belongs to the Section Vehicular Sensing)
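
The "precompute and cache the stitching parameters" strategy in this abstract can be illustrated with a plane-projection sketch in OpenCV: each camera's homography to the shared plane is estimated once from calibration correspondences, and every subsequent frame is warped with the cached matrix. The correspondence points, canvas size, and simple overwrite compositing below are illustrative assumptions, not the paper's calibration procedure.

```python
# Minimal sketch: calibrate each camera to the common physical plane once,
# cache the homography, then warp and composite live frames with the cached
# matrices. Correspondences, canvas size, and compositing are placeholders.
import cv2
import numpy as np

def calibrate_camera_to_plane(image_pts, plane_pts):
    """Estimate (once) the homography mapping a camera image onto the common plane."""
    H, _ = cv2.findHomography(np.asarray(image_pts, dtype=np.float32),
                              np.asarray(plane_pts, dtype=np.float32))
    return H

def stitch_frames(frames, cached_H, canvas_size=(1920, 1080)):
    """Warp each camera frame with its cached homography and composite them."""
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, cached_H):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        mask = warped.any(axis=2)
        canvas[mask] = warped[mask]        # later cameras overwrite overlaps
    return canvas

if __name__ == "__main__":
    # Four reference points per camera (image pixels -> plane pixels), assumed known.
    img_pts = [(100, 100), (500, 120), (480, 400), (90, 380)]
    plane_pts = [(0, 0), (400, 0), (400, 300), (0, 300)]
    H = calibrate_camera_to_plane(img_pts, plane_pts)      # cached once per camera
    frame = np.full((480, 640, 3), 128, dtype=np.uint8)    # stand-in camera frame
    mosaic = stitch_frames([frame], [H])
    print(mosaic.shape)                                     # (1080, 1920, 3)
```
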
36 pages, 2499 KB  
Review
Centella asiatica as a Model Biomass for Sustainable Production of Biochemicals via Green Extraction and Purification Technologies: A Comprehensive Field-to-Market Review
by Waqas Razzaq, Jean Baptiste Mazzitelli, Anne Sylvie Fabiano Tixier and Maryline Abert Vian
Molecules 2026, 31(3), 526; https://doi.org/10.3390/molecules31030526 - 2 Feb 2026
Viewed by 322
Abstract
Centella asiatica has emerged as a strategic biomass for the sustainable production of high-value biochemicals at the interface of traditional medicine and modern biotechnology. This review consolidates the current knowledge on its phytochemical diversity, emphasizing triterpenoid saponins—asiaticoside, madecassoside, asiatic acid, and madecassic acid—as core bioactive molecules relevant to pharmaceutical, dermatological, nutraceutical, and functional-ingredient applications. Advances in green extraction technologies, including ultrasound-assisted, microwave-assisted, ohmic-heating, and supercritical CO2 systems, have demonstrated superior efficiency in recovering high-purity biochemicals while significantly reducing solvent use, energy demand, and environmental impact compared with conventional methods. Complementary analytical and standardization platforms, such as HPLC, UPLC, and GC–MS, enable rigorous quality control across the entire value chain, supporting the development of reproducible and regulatory-compliant biochemical extracts. From a biomass valorization and biorefinery perspective, C. asiatica offers multiple metabolite streams that align with circular economy and field-to-market sustainability principles. Key challenges remain, including agronomic variability, scaling up green extraction, and supply chain resilience. However, emerging solutions, such as Good Agricultural and Collection Practices (GACP) guided cultivation, plant tissue culture, metabolic engineering, and integrated biorefinery frameworks, show strong potential for establishing a reliable and environmentally responsible production system. Collectively, C. asiatica represents a model species for sustainable biochemical production, combining scientific efficacy with industrial, economic, and ecological relevance. Full article
16 pages, 1397 KB  
Article
ODEL: An Experience-Augmented Self-Evolving Framework for Efficient Python-to-C++ Code Translation
by Kaiyuan Feng, Furong Peng and Jiayue Wu
Appl. Sci. 2026, 16(3), 1506; https://doi.org/10.3390/app16031506 - 2 Feb 2026
Viewed by 269
Abstract
Automated code translation plays an important role in improving software reusability and supporting system migration, particularly in scenarios where Python implementations need to be converted into efficient C++ programs. However, existing approaches often rely heavily on large external models or static inference pipelines, which limits their ability to improve translation quality over time. To address these challenges, this paper proposes ODEL, an On-Demand Experience-enhanced Learning framework for Python-to-C++ code translation. ODEL adopts a hybrid inference architecture in which a lightweight internal model performs routine translation, while a more capable external model is selectively invoked upon verification failure to conduct error analysis and generate structured experience records. These experience records are accumulated and reused across subsequent translation phases, enabling progressive improvement through a closed-loop workflow that integrates generation, verification, consideration, and experience refinement. Experiments on the HumanEval-X benchmark demonstrate that ODEL significantly improves translation accuracy compared with competitive baselines. Specifically, the framework increases Pass@1 from 71.82% to 81.10% and Pass@10 from 74.30% to 89.02%, and exhibits a consistent performance improvement across multiple translation phases. These results indicate that experience reuse within a continuous task stream can effectively enhance automated code translation without modifying model parameters. Full article
(This article belongs to the Special Issue AI-Enabled Next-Generation Computing and Its Applications)
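
The closed loop described in this abstract (lightweight generation, verification, escalation to a stronger model on failure, and reuse of stored experience) can be sketched as plain control flow. In the sketch below, the translate_light() and analyze_failure() callables, the g++-based verification, and the keyword-matching experience lookup are hypothetical stand-ins, not the ODEL implementation.

```python
# Minimal sketch of a generate -> verify -> escalate -> store-experience loop.
# The model callables, g++-based verification, and keyword-matching experience
# lookup are illustrative assumptions, not the ODEL implementation.
# Running the demo requires a local g++ compiler.
import os
import subprocess
import tempfile

def verify_cpp(cpp_source: str) -> tuple[bool, str]:
    """Compile the candidate translation; return (ok, compiler diagnostics)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "candidate.cpp")
        with open(src, "w") as f:
            f.write(cpp_source)
        proc = subprocess.run(["g++", "-std=c++17", "-c", src, "-o", os.devnull],
                              capture_output=True, text=True)
        return proc.returncode == 0, proc.stderr

def translate_with_experience(python_src, translate_light, analyze_failure,
                              experience_store, max_rounds=3):
    """Closed-loop translation: reuse stored experience, escalate on failure."""
    hints = [e for e in experience_store if any(k in python_src for k in e["keywords"])]
    for _ in range(max_rounds):
        cpp = translate_light(python_src, hints)           # lightweight internal model
        ok, diagnostics = verify_cpp(cpp)
        if ok:
            return cpp
        # Verification failed: ask the stronger external model for an analysis
        # and keep the structured experience record for later tasks.
        record = analyze_failure(python_src, cpp, diagnostics)
        experience_store.append(record)
        hints.append(record)
    return None   # give up after max_rounds

if __name__ == "__main__":
    store = []
    dummy_light = lambda src, hints: "int main() { return 0; }"
    dummy_analyze = lambda src, cpp, diag: {"keywords": ["range"], "advice": diag}
    print(translate_with_experience("print(1)", dummy_light, dummy_analyze, store) is not None)
```
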