
Machine Learning and Knowledge Extraction

Machine Learning and Knowledge Extraction is an international, peer-reviewed, open access, monthly journal on machine learning and its applications. See our video on YouTube explaining the MAKE journal concept.

Quartile Ranking JCR - Q1 (Engineering, Electrical and Electronic | Computer Science, Artificial Intelligence | Computer Science, Interdisciplinary Applications)

All Articles (656)

Novel Loss Functions for Improved Data Visualization in t-SNE

  • Sara Nassar,
  • Rachid Hedjam and
  • Samir Brahim Belhaouari

t-distributed Stochastic Neighbor Embedding (t-SNE) is a popular method for projecting high-dimensional data onto a lower-dimensional space while preserving its structure. The technique minimizes the Kullback–Leibler (KL) divergence to align the similarities between points in the original and reduced spaces. While t-SNE is highly effective, it prioritizes local neighborhood preservation, which results in limited separation between distant clusters and inadequate representation of global relationships. To address these limitations, this work introduces two complementary approaches: (1) the Max-Flipped KL Divergence (KLmax) modifies the original divergence by incorporating a contrastive term that enhances the ranking of point similarities through maximum-similarity constraints; (2) the KL-Wasserstein Loss (LKLW) combines the KL divergence with the classic Wasserstein distance, allowing the embedding to benefit from the smooth, geometry-aware transport properties of Wasserstein metrics. Experimental results show that these methods lead to improved separation and better structural clarity in the low-dimensional space compared to standard t-SNE.
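As a rough illustration of the second loss, here is a minimal NumPy/SciPy sketch of a combined KL-plus-Wasserstein objective over pairwise affinity matrices P (original space) and Q (embedding). The weighting `lam` and the use of SciPy's 1-D `wasserstein_distance` over the flattened affinities are illustrative assumptions; the paper's exact LKLW formulation is not given in the abstract.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def kl_divergence(P, Q, eps=1e-12):
    """Standard t-SNE objective KL(P || Q) over pairwise affinities."""
    P = np.clip(P, eps, None)
    Q = np.clip(Q, eps, None)
    return float(np.sum(P * np.log(P / Q)))

def kl_wasserstein_loss(P, Q, lam=0.5):
    """Sketch of a combined loss: KL(P || Q) plus a Wasserstein term.
    'lam' and the 1-D Wasserstein over flattened affinities are
    assumptions for illustration, not the authors' exact formulation."""
    w = wasserstein_distance(P.ravel(), Q.ravel())
    return kl_divergence(P, Q) + lam * w
```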

18 February 2026

Effect of alpha and beta on performance based on Pendigits dataset.

The transition of Large Language Models (LLMs) from centralized clouds to edge environments is critical for addressing privacy concerns, latency bottlenecks, and operational costs. However, existing edge benchmarking frameworks remain tailored to discriminative Deep Learning tasks (e.g., object detection), failing to capture the multidimensional challenges of generative AI, specifically the trade-offs between token generation speed, semantic accuracy, and hardware sustainability. To address this gap, we introduce LEAF (LLM Edge Assessment Framework), a novel evaluation methodology that integrates Circular Economy principles directly into performance metrics. LEAF assesses edge deployments across five synergistic pillars: Circular Economy Score, Energy Efficiency (Joules/Token), Performance Speed (Tokens/Second), Semantic Accuracy (BERTScore), and End-to-End Latency. We validate LEAF through an extensive experimental analysis of five distinct hardware classes, ranging from embedded IoT devices (Raspberry Pi 4 and 5, NVIDIA Jetson Nano) to professional edge servers (NVIDIA T400) and repurposed legacy workstations (NVIDIA GTX 1050 Ti). Utilizing 4-bit quantized models via the Ollama runtime, our results reveal a counterintuitive insight: repurposed consumer hardware significantly outperforms modern purpose-built edge SoCs. The legacy GTX 1050 Ti achieved a 20× speedup over the Raspberry Pi 4 and maintained superior energy-per-task efficiency compared to low-power ARM architectures by minimizing active runtime. These findings challenge the prevailing narrative that newer silicon is essential for Edge AI, demonstrating that sustainable, high-performance inference can be achieved by extending the lifecycle of existing hardware. LEAF thus provides a blueprint for a “Green Edge” ecosystem that balances computational capability with environmental responsibility.
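To make the throughput and energy pillars concrete, the sketch below computes Tokens/Second and Joules/Token from a single measured run. The `InferenceRun` container and its field names are hypothetical; LEAF's Circular Economy Score and BERTScore aggregation are not specified in the abstract, so they are omitted here.

```python
from dataclasses import dataclass

@dataclass
class InferenceRun:
    # Hypothetical measurement record for one prompt on one device.
    tokens_generated: int
    wall_time_s: float      # end-to-end latency, in seconds
    energy_joules: float    # e.g., read from an external power meter

def tokens_per_second(run: InferenceRun) -> float:
    """Performance Speed pillar (Tokens/Second)."""
    return run.tokens_generated / run.wall_time_s

def joules_per_token(run: InferenceRun) -> float:
    """Energy Efficiency pillar (Joules/Token)."""
    return run.energy_joules / run.tokens_generated
```

Note how minimizing active runtime helps on both pillars at once: a faster device finishes the task sooner, so even at higher instantaneous power draw its total energy per task can undercut a slower low-power board, which is the effect the GTX 1050 Ti result illustrates.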

18 February 2026

LEAF architecture.

Perception in trellised orchards is often challenged by dense canopy occlusion and overhead plastic coverings, which cause pronounced variations in sky visibility at row terminals. Accurately recognizing row terminals, including both row head and row tail positions, is therefore essential for understanding orchard row structures. This study presents SkySeg-Net, a sky segmentation-based framework for row-terminal recognition in trellised orchards. SkySeg-Net is built on an enhanced multi-scale U-Net architecture and employs ResNeSt residual split-attention blocks as the backbone. To improve feature discrimination under complex illumination and occlusion conditions, the Convolutional Block Attention Module (CBAM) is integrated into the downsampling path, while a Pyramid Pooling Module (PPM) is introduced during upsampling to strengthen multi-scale contextual representation. Sky regions are segmented from both front-view and rear-view camera images, and a hierarchical threshold-based pixel-sum analysis is applied to infer row-terminal locations based on sky-region distribution patterns. To support a comprehensive evaluation, a dedicated trellised vineyard dataset was constructed, featuring front-view and rear-view images and covering three representative grapevine growth stages (BBCH 69–71, 73–77, and 79–89). Experimental results show that SkySeg-Net achieves an mIoU of 91.21% and an mPA of 94.82% for sky segmentation, with a row-terminal recognition accuracy exceeding 98.17% across all growth stages. These results demonstrate that SkySeg-Net provides a robust and reliable visual perception approach for row-terminal recognition in trellised orchard environments.
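As a rough sketch of the final decision step, the snippet below applies a single sky-pixel-ratio threshold to a binary segmentation mask. The paper's hierarchical thresholds and front/rear-view fusion are not detailed in the abstract, so the rule and the threshold value here are assumptions.

```python
import numpy as np

def is_row_terminal(sky_mask: np.ndarray, ratio_threshold: float = 0.35) -> bool:
    """Declare a row terminal when the fraction of sky pixels in a binary
    mask exceeds a threshold. Single-threshold stand-in for the paper's
    hierarchical pixel-sum analysis; 0.35 is an assumed value."""
    sky_fraction = float(sky_mask.sum()) / sky_mask.size
    return sky_fraction > ratio_threshold
```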

13 February 2026

Full view of the trellis orchard.

Adverse weather removal aims to restore images degraded by haze, rain, or snow. However, existing unified models often rely on implicit degradation cues, making them vulnerable to inaccurate weather perception and insufficient semantic guidance, which leads to over-smoothing or residual artifacts in real scenes. In this work, we propose AWR-VIP, a prior-guided adverse weather removal framework that explicitly extracts semantic and perceptual priors using a frozen vision–language model (VLM). Given a degraded input, we first employ a degradation-aware prompt extractor to produce a compact set of semantic tags describing key objects and regions, and simultaneously perform weather-type perception by prompting the VLM with explicit weather definitions. Conditioned on the predicted weather type and selected tags, the VLM further generates two levels of restoration guidance: a global instruction that summarizes image-level enhancement goals (e.g., visibility/contrast) and local instructions that specify tag-aware refinement cues (e.g., recover textures for specific regions). These textual outputs are encoded by a text encoder into a pair of priors, one global and one local, which are injected into a UNet-based restorer through global-prior-modulated normalization and instruction-guided attention, enabling weather-adaptive and content-aware restoration. Extensive experiments on a combined benchmark show that AWR-VIP consistently outperforms state-of-the-art methods. Moreover, the VLM-derived priors are plug-and-play and can be integrated into other restoration backbones to further improve performance.
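One plausible reading of "global-prior-modulated normalization" is FiLM-style conditioning, sketched below in PyTorch: the text-encoder embedding of the global instruction predicts a per-channel scale and shift for normalized UNet features. The module name, the GroupNorm choice, and the FiLM interpretation are assumptions; the abstract does not give the exact design.

```python
import torch
import torch.nn as nn

class PriorModulatedNorm(nn.Module):
    """FiLM-style sketch: a global text prior modulates normalized features.
    An assumed interpretation of global-prior-modulated normalization."""
    def __init__(self, channels: int, prior_dim: int):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels, affine=False)
        self.to_scale_shift = nn.Linear(prior_dim, 2 * channels)

    def forward(self, x: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) UNet features; prior: (B, prior_dim) text embedding.
        scale, shift = self.to_scale_shift(prior).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + scale[..., None, None]) + shift[..., None, None]
```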

12 February 2026

The flowchart of our proposed AWR-VIP. The VLM-based Semantic and Low-level Priors Generation Pipeline is introduced to guide the weather removal network.

Mach. Learn. Knowl. Extr. - ISSN 2504-4990