
Machine Learning and Knowledge Extraction

Machine Learning and Knowledge Extraction is an international, peer-reviewed, open access, monthly journal on machine learning and applications. See our video on YouTube explaining the MAKE journal concept.

Quartile Ranking JCR - Q1 (Engineering, Electrical and Electronic | Computer Science, Artificial Intelligence | Computer Science, Interdisciplinary Applications)

All Articles (654)

Perception in trellised orchards is often challenged by dense canopy occlusion and overhead plastic coverings, which cause pronounced variations in sky visibility at row terminals. Accurately recognizing row terminals, including both row head and row tail positions, is therefore essential for understanding orchard row structures. This study presents SkySeg-Net, a sky segmentation-based framework for row-terminal recognition in trellised orchards. SkySeg-Net is built on an enhanced multi-scale U-Net architecture and employs ResNeSt residual split-attention blocks as the backbone. To improve feature discrimination under complex illumination and occlusion conditions, the Convolutional Block Attention Module (CBAM) is integrated into the downsampling path, while a Pyramid Pooling Module (PPM) is introduced during upsampling to strengthen multi-scale contextual representation. Sky regions are segmented from both front-view and rear-view camera images, and a hierarchical threshold-based pixel-sum analysis is applied to infer row-terminal locations based on sky-region distribution patterns. To support a comprehensive evaluation, a dedicated trellised vineyard dataset was constructed, featuring front-view and rear-view images and covering three representative grapevine growth stages (BBCH 69–71, 73–77, and 79–89). Experimental results show that SkySeg-Net achieves an mIoU of 91.21% and an mPA of 94.82% for sky segmentation, with a row-terminal recognition accuracy exceeding 98.17% across all growth stages. These results demonstrate that SkySeg-Net provides a robust and reliable visual perception approach for row-terminal recognition in trellised orchard environments.

13 February 2026

Full view of the trellis orchard.
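The abstract describes a hierarchical threshold-based pixel-sum analysis over front-view and rear-view sky masks. The paper's exact thresholds and decision hierarchy are not given here, so the following is a minimal illustrative sketch under assumed threshold values (`t_open`, `t_closed` are hypothetical parameters, not from the paper):

```python
def sky_ratio(mask):
    """Fraction of pixels labelled sky (1) in a binary segmentation mask."""
    total = sum(len(row) for row in mask)
    return sum(map(sum, mask)) / total if total else 0.0

def classify_row_position(front_mask, rear_mask, t_open=0.35, t_closed=0.10):
    """Hierarchical threshold decision on front/rear sky-pixel sums.

    Returns one of: 'row_head', 'row_tail', 'inside_row', 'outside_row'.
    """
    f, r = sky_ratio(front_mask), sky_ratio(rear_mask)
    if f >= t_open and r < t_closed:
        return "row_head"      # open sky ahead, canopy cover behind
    if r >= t_open and f < t_closed:
        return "row_tail"      # canopy cover ahead, open sky behind
    if f < t_closed and r < t_closed:
        return "inside_row"    # plastic/canopy cover in both directions
    return "outside_row"       # sky visible in both directions

# Toy 4x4 masks: front view mostly sky, rear view mostly canopy.
front = [[1, 1, 1, 1], [1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
rear  = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
print(classify_row_position(front, rear))  # row_head
```

In practice the masks would be the per-pixel sky predictions from SkySeg-Net rather than hand-written grids.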

Adverse weather removal aims to restore images degraded by haze, rain, or snow. However, existing unified models often rely on implicit degradation cues, making them vulnerable to inaccurate weather perception and insufficient semantic guidance, which leads to over-smoothing or residual artifacts in real scenes. In this work, we propose AWR-VIP, a prior-guided adverse weather removal framework that explicitly extracts semantic and perceptual priors using a frozen vision–language model (VLM). Given a degraded input, we first employ a degradation-aware prompt extractor to produce a compact set of semantic tags describing key objects and regions, and simultaneously perform weather-type perception by prompting the VLM with explicit weather definitions. Conditioned on the predicted weather type and selected tags, the VLM further generates two levels of restoration guidance: a global instruction that summarizes image-level enhancement goals (e.g., visibility/contrast) and local instructions that specify tag-aware refinement cues (e.g., recover textures for specific regions). These textual outputs are encoded by a text encoder into a pair of priors, which are injected into a UNet-based restorer through global-prior-modulated normalization and instruction-guided attention, enabling weather-adaptive and content-aware restoration. Extensive experiments on a combined benchmark show that AWR-VIP consistently outperforms state-of-the-art methods. Moreover, the VLM-derived priors are plug-and-play and can be integrated into other restoration backbones to further improve performance.

12 February 2026

The flowchart of our proposed AWR-VIP. The VLM-based Semantic and Low-level Priors Generation Pipeline is introduced to guide the weather removal network.
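The "global-prior-modulated normalization" mentioned above resembles a FiLM-style conditioning layer: features are normalized and then scaled/shifted by parameters predicted from the encoded text prior. The sketch below is an assumption about how such a layer might look on a single feature vector (the real model operates on convolutional feature maps, and the `gamma`/`beta` heads are learned):

```python
import math

def prior_modulated_norm(features, gamma, beta, eps=1e-5):
    """Normalize a 1-D feature vector, then scale and shift it with
    (gamma, beta) parameters derived from the global text prior."""
    mean = sum(features) / len(features)
    var = sum((x - mean) ** 2 for x in features) / len(features)
    normed = [(x - mean) / math.sqrt(var + eps) for x in features]
    return [g * x + b for x, g, b in zip(normed, gamma, beta)]

feats = [0.5, 2.0, -1.0, 3.5]
gamma = [1.2] * 4   # in the model, predicted by a head on the encoded prior
beta = [0.1] * 4
out = prior_modulated_norm(feats, gamma, beta)
```

Because `gamma` and `beta` depend on the prior, the same restorer can adapt its feature statistics per weather type and per image.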

Semantic segmentation and deep learning methods have rarely been applied to fractional vegetation cover (FVC) segmentation tasks due to the lack of publicly available datasets for training deep learning models. FVC is a key indicator for assessing vegetation distribution, crop density, and crop responses to water availability and fertilizer application, yet conventional field-based measurement methods are time consuming, costly, labor intensive, and may lack the accuracy required for critical applications such as drought stress evaluation and water productivity. In this paper, we introduced causality-based deep learning techniques for FVC segmentation on a publicly available RGB dataset that consists of four ground cover crops: Phyla nodiflora L., Cynodon dactylon, Frankenia thymifolia Desf., and Oxalis stricta L. By separating causal from spurious correlations in pretrained features, the stepwise intervention and reweighting (SIR) method, applied at different encoder stages, reduced confounding bias and enabled the models to learn more generalizable and task-relevant features. Extensive experiments on the FVC dataset, conducted with and without causality learning, showed that the proposed FCN + ResNet-50 model with causality learning and data augmentation achieved an accuracy of 94.80%, a precision of 94.97%, a recall of 94.35%, and an F1-score of 94.62%, which outperformed non-causal baselines and state-of-the-art transformer-based models including SegFormer and Mask2Former.

11 February 2026

Example images from dataset. (P1) Phyla nodiflora L.; (P2) Cynodon dactylon; (P3) Frankenia thymifolia Desf.; and (P4) Oxalis stricta L.
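Once a segmentation model produces a per-pixel vegetation mask, FVC itself reduces to a pixel ratio. A minimal sketch of that final step (the mask here is hand-written for illustration; in practice it would come from the FCN + ResNet-50 model):

```python
def fractional_vegetation_cover(mask):
    """FVC = vegetation pixels / total pixels, computed from a binary
    segmentation mask (1 = vegetation, 0 = background)."""
    pixels = [p for row in mask for p in row]
    return sum(pixels) / len(pixels)

# Toy 4x4 mask with 10 vegetation pixels out of 16.
mask = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
print(f"FVC = {fractional_vegetation_cover(mask):.2%}")  # FVC = 62.50%
```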

Towards LLM-Driven Cybersecurity in Autonomous Vehicles: A Big Data-Empowered Framework with Emerging Technologies

  • Aristeidis Karras,
  • Leonidas Theodorakopoulos and
  • Alexandra Theodoropoulou
  • + 1 author

Modern Autonomous Vehicles generate large volumes of heterogeneous in-vehicle data, making cybersecurity a critical challenge as adversarial attacks become increasingly adaptive, stealthy, and multi-protocol. Traditional intrusion detection systems often fail under these conditions because of their limited contextual understanding, poor robustness to distribution shifts, and insufficient regulatory transparency. This study introduces LLM-Guardian, a hierarchical intrusion detection framework with decision-making mechanisms that integrates Large Language Models (LLMs) with classical statistical detection theory, optimal transport drift analysis, graph neural networks, and formal uncertainty quantification. LLM-Guardian uses semantic anomaly scoring, conformal prediction for distribution-free confidence calibration, adaptive cumulative sum (CUSUM) sequential testing for low-latency detection, and topology-aware GNN reasoning designed to identify coordinated attacks across CAN, Ethernet, and V2X interfaces. In this work, the framework is empirically evaluated on four heterogeneous CAN-bus datasets, while the Ethernet and V2X components are instantiated at the architectural level and left as directions for future multi-protocol experimentation.

11 February 2026

Architecture of LLM-driven cybersecurity for AVs.
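One concrete component named in the abstract is adaptive CUSUM sequential testing for low-latency detection. The standard one-sided CUSUM recursion is sketched below on a stream of anomaly scores; the parameter values are illustrative, not the paper's, and LLM-Guardian's adaptive variant would tune them online:

```python
def cusum_detector(scores, target_mean=0.0, drift=0.5, threshold=5.0):
    """One-sided CUSUM over a stream of anomaly scores: accumulate
    positive deviations above (target_mean + drift) and raise an alarm
    as soon as the statistic exceeds threshold.

    Returns the 0-based index of the first alarm, or None."""
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + (x - target_mean) - drift)
        if s > threshold:
            return i
    return None

# Benign traffic scores hover near 0; an injected attack shifts the mean.
stream = [0.1, -0.2, 0.3, 0.0, 2.4, 2.1, 2.6, 2.3, 2.5]
print(cusum_detector(stream))  # 6
```

The `drift` term acts as slack so isolated noisy scores do not accumulate, which is what gives CUSUM its low false-alarm rate at a given detection latency.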

Mach. Learn. Knowl. Extr. - ISSN 2504-4990