Search Results (43)

Search Parameters:
Keywords = weighted caching

15 pages, 694 KB  
Article
Multi-Context Concatenation Across Requests for LLMs
by Ziyi Cao, Jingbin Zhang, Shuai Wang, Shaobo Li, Bingquan Liu and Heng Xie
Axioms 2026, 15(5), 303; https://doi.org/10.3390/axioms15050303 - 22 Apr 2026
Abstract
Reusing separate, pre-filled Key-Value (KV) Caches for multiple contexts has become a common practice in handling multi-context scenarios with Large Language Models. However, this leads to a lack of cross-attention mechanisms between contexts. To address this, we propose CatLLM, the first method that concatenates multiple contexts across requests offline to compensate for this deficiency. Specifically, during offline processing, CatLLM identifies contexts that severely lack cross-attention by incorporating the weighted inner products of Q and K vectors from tokens in an un-concatenated context into an equivalently transformed weighted formulation for concatenated Q and K inner products. This yields a weighting w_i^(A+B) corresponding to the output vector difference, which can then be used to identify contexts with severe cross-attention deficiencies and concatenate them into a single context for KV Cache computation. Experimental results show that, compared to the baseline of separate caching (i.e., no concatenation), fully concatenating all contexts improves the F1 score by 6%. Meanwhile, the proposed method reduces the number of contexts requiring caching from 10 to 7 while still achieving a 3% F1-score improvement, thereby maximizing the performance gain while minimizing the degree of context compression. Full article
(This article belongs to the Section Mathematical Analysis)
32 pages, 823 KB  
Article
A Hybrid Temporal Recommender System Based on Sliding-Window Weighted Popularity and Elite Evolutionary Discrete Particle Swarm Optimization
by Shanxian Lin, Yuichi Nagata and Haichuan Yang
Electronics 2026, 15(8), 1544; https://doi.org/10.3390/electronics15081544 - 8 Apr 2026
Viewed by 267
Abstract
This paper proposes a hybrid non-personalized temporal recommendation framework integrating Sliding-Window Weighted Popularity (SWWP) with Elite Evolutionary Discrete Particle Swarm Optimization (EEDPSO) to address the challenges of extreme data sparsity and temporal dynamics in global popularity-based recommendation. We first formally prove the NP-hardness of the temporal-constrained recommendation problem, justifying the adoption of a metaheuristic approach. The proposed SWWP model employs a dual-scale sliding-window mechanism to balance short-term trend adaptation with long-term periodicity capture. A novel deep integration mechanism couples SWWP with EEDPSO through a “purchase heat” indicator, which guides temporal-aware particle initialization, position updates, and fitness evaluation. Extensive experiments on the Amazon Reviews dataset with extreme sparsity (density < 0.0005%) demonstrate that SWWP achieves an NDCG@20 of 0.245, outperforming nine temporal baselines by at least 13%. Furthermore, under a unified fitness function incorporating temporal prediction accuracy, the SWWP-EEDPSO framework achieves 5.95% higher fitness than vanilla EEDPSO, while significantly outperforming Differential Evolution and Genetic Algorithms. The temporally informed search strategy enables SWWP-EEDPSO to discover recommendations that better align with future user behavior, while maintaining sub-millisecond online query latency (0.52 ms) through offline precomputation and caching, demonstrating practical feasibility for deployment scenarios where periodic offline updates are acceptable. Full article
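The abstract does not give the SWWP weighting scheme itself; as a rough sketch of a dual-window popularity score, one might blend a short-window and a long-window linear decay per interaction. The window lengths, the blend factor `alpha`, and the linear decay are all assumptions for illustration, not the paper's formulation:

```python
from collections import defaultdict

def swwp_scores(events, now, short_win=7.0, long_win=90.0, alpha=0.6):
    """Blend short- and long-window weighted popularity per item.

    events: iterable of (item_id, timestamp) pairs, timestamps in days.
    Each event contributes a linearly decayed weight inside each window;
    events older than a window contribute nothing to that window.
    """
    scores = defaultdict(float)
    for item, t in events:
        age = now - t
        short = max(0.0, 1.0 - age / short_win)   # short-term trend
        long_ = max(0.0, 1.0 - age / long_win)    # long-term periodicity
        scores[item] += alpha * short + (1.0 - alpha) * long_
    return dict(scores)

events = [("a", 99.5), ("a", 98.0), ("b", 40.0), ("b", 99.9), ("c", 10.0)]
ranked = sorted(swwp_scores(events, now=100.0).items(), key=lambda kv: -kv[1])
```

Recent, repeated interactions dominate the ranking, while the long window keeps older items from vanishing entirely.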

16 pages, 965 KB  
Article
Implementation and Feasibility of a Multidisciplinary Endocrine-Led Outpatient Clinic for Cancer Cachexia and Other Forms of Unintentional Weight Loss: A Real-World Observational Study
by Anirudh Murthy, Morgan Simons, Anne Jablonski, Maurice Hurd, Alpana Shukla and Marcus D. Goncalves
Cancers 2026, 18(6), 946; https://doi.org/10.3390/cancers18060946 - 13 Mar 2026
Viewed by 508
Abstract
Purpose: Cachexia, characterized by involuntary weight loss, muscle wasting, and metabolic dysfunction, is prevalent in advanced cancer and chronic illnesses. Despite its impact, outpatient treatment models in the U.S. remain limited and unstandardized. Here, we aim to describe the structure, implementation, patient characteristics, and real-world clinical trajectories of a multidisciplinary clinic for cancer cachexia and other forms of unintentional weight loss within an academic endocrinology practice. Methods: We conducted a retrospective observational cohort study of 103 patients referred to a single-center unintentional weight loss clinic over five years. Patients received comprehensive assessments (weight trajectory, nutrition status, 5× sit-to-stand test, handgrip strength) and personalized interventions including nutrition counseling, resistance training, and pharmacologic therapies. Results: Among 103 patients (median age 69.7 years; 53% male), 64% had cancer, while 36% were referred for non-malignant causes of weight loss or cachexia. Reduced appetite or food intake was reported in 43%, and functional impairment was common, with low handgrip strength in 47% and impaired 5× sit-to-stand performance in 79% of assessed patients. Systemic abnormalities were frequent, including elevated hs-CRP (57%), elevated neutrophil-to-lymphocyte ratio (43%), and hypoalbuminemia (26%). Among patients with available paired follow-up data, the median rate of weight change shifted from −0.5 kg/month prior to enrollment to 0.0 kg/month three months after the initial visit (p < 0.0001). Five-times sit-to-stand performance improved modestly at three months (p = 0.042), while handgrip strength was unchanged. Half of the patients who engaged with the clinic returned for at least one follow-up visit, but there was no identifiable difference between the patients who returned and those who did not.
Conclusions: A structured, multidisciplinary unintentional weight loss clinic in an endocrinology setting was associated with stabilization of weight and modest changes in physical function in this single-center cohort among patients who engaged in follow-up. These findings highlight the successful implementation of integrated outpatient care models and provide practice-based context for future interventions and therapeutic evaluations. Full article
(This article belongs to the Special Issue Gaps in Cancer Cachexia Research)

20 pages, 1052 KB  
Article
Distributed State Estimation for Bilinear Power System Models Based on Weighted Least Absolute Value
by Shijie Gao, Zhihua Deng, Yunzhe Zhang and Pan Wang
Appl. Sci. 2025, 15(24), 13129; https://doi.org/10.3390/app152413129 - 13 Dec 2025
Viewed by 495
Abstract
Accurate, scalable, and outlier-robust state estimation (SE) is critical for large AC power systems with mixed SCADA and PMU measurements. This paper proposes D-BSE-L1, a distributed robust state estimator for the bilinear AC model. The method combines the bilinear state estimation framework with a convex weighted least absolute value (WLAV) loss so that all area subproblems become convex linear or quadratic programs coordinated by ADMM, and a cache-enabled Cholesky factorization is used to accelerate the third-stage linear solves. Simulations on the IEEE 14-, 118-, and 1062-bus systems show that D-BSE-L1 achieves estimation accuracy comparable to its centralized bilinear counterpart. Under severe bad-data conditions, its advantage over weighted least squares with the largest normalized residual test (WLS + LNRT) is pronounced: with 10% 1.5× bad data, the voltage magnitude and angle MAEs are about 62% and 54% of those of WLS + LNRT, and with 5% 5× bad data, they further drop to roughly 43% and 51%, while requiring only about one-tenth of the CPU time. On the 1062-bus system, D-BSE-L1 maintains the MAE of the centralized estimator but reduces runtime from 2.46 s to 0.72 s, providing a scalable, hyperparameter-free, and robust solution for partitioned state estimation in large-scale power grids. Full article
(This article belongs to the Special Issue Applied Machine Learning in Industry 4.0)
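The WLAV objective min_x Σ_i w_i |z_i − H[i]·x| can be approximated without a dedicated LP solver via iteratively reweighted least squares. The sketch below is a generic linear-measurement toy with one gross outlier, not the paper's bilinear three-stage D-BSE-L1 pipeline; the measurement matrix and bad-data value are invented:

```python
import numpy as np

def wlav_estimate(H, z, w, iters=50, eps=1e-6):
    """Approximate min_x sum_i w_i * |z_i - H[i] @ x| by iteratively
    reweighted least squares: each pass solves a weighted LS problem
    with per-row weights w_i / (|residual_i| + eps)."""
    x = np.linalg.lstsq(H, z, rcond=None)[0]     # plain LS start
    for _ in range(iters):
        r = z - H @ x
        W = w / (np.abs(r) + eps)                # downweight large residuals
        A = H.T @ (H * W[:, None])
        b = H.T @ (W * z)
        x = np.linalg.solve(A, b)
    return x

# Four consistent measurements of x_true plus one gross outlier:
H = np.array([[1., 0.], [0., 1.], [1., 1.], [1., 2.], [2., 1.]])
x_true = np.array([1.0, 2.0])
z = H @ x_true
z[2] += 50.0                                     # bad datum
x_hat = wlav_estimate(H, z, np.ones(len(z)))
```

Unlike weighted least squares, which is pulled strongly toward the outlier, the L1-type loss effectively ignores the bad measurement and recovers the true state from the consistent majority.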

26 pages, 4592 KB  
Article
Joint Optimization of Serial Task Offloading and UAV Position for Mobile Edge Computing Based on Multi-Agent Deep Reinforcement Learning
by Mengyuan Tao and Qi Zhu
Appl. Sci. 2025, 15(23), 12419; https://doi.org/10.3390/app152312419 - 23 Nov 2025
Viewed by 882
Abstract
Driven by the proliferation of the Internet of Things (IoT), Mobile Edge Computing (MEC) is a key technology for meeting the low-latency and high-computational demands of future wireless networks. However, ground-based MEC servers suffer from limited coverage and inflexible deployment. Unmanned Aerial Vehicles (UAVs), with their high mobility, can serve as aerial edge servers to extend this coverage. This paper addresses the multi-user serial task offloading problem in cache-assisted UAV-MEC systems by proposing a joint optimization algorithm for service caching, UAV positioning, task offloading, and serial processing order. Under the constraints of physical resources such as UAV cache capacity, heterogeneous computing capabilities, and wireless channel bandwidth, an optimization problem is formulated to minimize the weighted sum of task completion time and user cost. The method first performs service caching based on task popularity and then utilizes the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm to optimize the UAV’s position, task offloading decisions, and serial processing order. The MADDPG algorithm consists of two collaborative agents: a UAV position agent responsible for selecting the optimal UAV position, and a task scheduling agent that determines the serial processing order and offloading decisions for all tasks. Simulation results demonstrate that the proposed algorithm can converge quickly to a stable solution, significantly reducing both task completion time and user cost. Full article
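The service-caching step is described only as "based on task popularity" under a UAV cache-capacity constraint; one hedged reading is a greedy fill by popularity per unit size. The density heuristic, service names, and numbers below are assumptions for illustration:

```python
def cache_services(services, capacity):
    """Greedily fill the cache by popularity per unit size.

    services: list of (name, size, popularity) tuples.
    Returns the names of the cached services, densest first.
    """
    cached, used = [], 0
    by_density = sorted(services, key=lambda s: s[2] / s[1], reverse=True)
    for name, size, popularity in by_density:
        if used + size <= capacity:
            cached.append(name)
            used += size
    return cached

services = [("nav", 2, 10), ("video", 3, 9), ("ocr", 4, 4), ("log", 1, 1)]
cached = cache_services(services, capacity=6)
```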

17 pages, 2306 KB  
Article
Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms
by Heejin Kim, Jeongha Lee and Hyokyung Bahn
Mathematics 2025, 13(22), 3689; https://doi.org/10.3390/math13223689 - 17 Nov 2025
Cited by 1 | Viewed by 3482
Abstract
Large language models (LLMs) have traditionally relegated inference to remote servers, leaving mobile devices as thin clients. Recently, advances in mobile GPUs and NPUs have made on-device inference increasingly feasible, particularly for privacy-sensitive and personalized applications. However, executing LLMs directly on resource-constrained devices exposes severe I/O bottlenecks, as repeated accesses to large weight files can overwhelm limited memory and storage bandwidth. Prior studies have focused on internal mechanisms such as KV caching, while the role of the host OS buffer cache remains underexplored. This paper closes that gap with file-level trace analysis of real-world mobile LLM applications, and identifies three characteristic access patterns: (1) one-time sequential scans during initialization, (2) persistent hot sets (e.g., tokenizers, metadata, indices), and (3) recurring loop accesses to model weight files. Guided by these observations, we propose LLM-aware buffer cache strategies and derive cache-sizing guidelines that relate loop size, hot-set coverage, and storage bandwidth. We further compare smartwatch-class and smartphone-class platforms to clarify feasible model sizes and practical hardware prerequisites for local inference. Our results provide system-level guidance for I/O subsystem design that enables practical on-device LLM inference in future mobile and IoT devices. Full article
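The "recurring loop accesses" pattern explains why a loop-aware policy can beat LRU: when a weight file is scanned cyclically and is slightly larger than the cache, LRU evicts each block just before it is needed again. A minimal simulation, with MRU standing in for a loop-aware policy (the trace and capacity are illustrative, not the paper's workloads):

```python
from collections import OrderedDict

def simulate(policy, trace, capacity):
    """Count cache hits for an access trace under LRU or MRU eviction."""
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)             # mark most recently used
        else:
            if len(cache) >= capacity:
                # LRU evicts the oldest entry, MRU the newest
                cache.popitem(last=(policy == "mru"))
            cache[block] = True
    return hits

# Five passes over a 10-block weight file with an 8-block cache:
trace = list(range(10)) * 5
lru_hits = simulate("lru", trace, capacity=8)
mru_hits = simulate("mru", trace, capacity=8)
```

LRU thrashes on every pass and scores zero hits, while MRU pins most of the loop in the cache.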

14 pages, 1129 KB  
Article
Entropy-Guided KV Caching for Efficient LLM Inference
by Heekyum Kim and Yuchul Jung
Mathematics 2025, 13(15), 2366; https://doi.org/10.3390/math13152366 - 23 Jul 2025
Viewed by 9252
Abstract
Large language models (LLMs), built upon Transformer architectures, have demonstrated remarkable performance in a wide range of natural language processing tasks. However, their practical deployment—especially in long-context scenarios—is often hindered by the computational and memory costs associated with managing the key–value (KV) cache during inference. Optimizing this process is therefore crucial for improving LLM efficiency and scalability. In this study, we propose a novel entropy-guided KV caching strategy that leverages the distribution characteristics of attention scores within each Transformer layer. Specifically, we compute the entropy of attention weights for each head and use the average entropy of all heads within a layer to assess the layer’s contextual importance. Higher-entropy layers—those exhibiting broader attention dispersion—are allocated larger KV cache budgets, while lower-entropy (sink-like) layers are assigned smaller budgets. Instead of selecting different key–value tokens per head, our method selects a common set of important tokens per layer, based on aggregated attention scores, and caches them uniformly across all heads within the same layer. This design preserves the structural integrity of multi-head attention while enabling efficient token selection during the prefilling phase. The experimental results demonstrate that our approach improves cache utilization and inference speed without compromising generation quality. For example, on the Qwen3 4B model, our method reduces memory usage by 4.18% while preserving ROUGE score, and on Mistral 7B v0.1, it reduces decoding time by 46.6%, highlighting entropy-guided layer analysis as a principled mechanism for scalable long-context language modeling. Full article
(This article belongs to the Special Issue Mathematics and Applications)
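The budget-allocation step described above can be sketched directly: compute per-head attention entropy, average over heads (and queries) per layer, and give each layer a share of the total KV budget proportional to that entropy. The clipping constant and the rounding repair are implementation assumptions:

```python
import numpy as np

def layer_budgets(attn, total_budget):
    """Allocate per-layer KV budgets in proportion to mean attention entropy.

    attn: array (layers, heads, queries, keys); each key-row sums to 1.
    """
    ent = -(attn * np.log(np.clip(attn, 1e-12, 1.0))).sum(-1)
    layer_ent = ent.mean(axis=(1, 2))            # average over heads, queries
    share = layer_ent / layer_ent.sum()
    budgets = np.floor(share * total_budget).astype(int)
    budgets[np.argmax(share)] += total_budget - budgets.sum()  # rounding fix
    return budgets

uniform = np.full((2, 3, 4), 0.25)                  # dispersed attention
peaked = np.zeros((2, 3, 4)); peaked[..., 0] = 1.0  # sink-like attention
attn = np.stack([uniform, peaked])   # 2 layers, 2 heads, 3 queries, 4 keys
budgets = layer_budgets(attn, total_budget=100)
```

The dispersed layer receives essentially the whole budget; the sink-like layer, whose heads attend to a single token, gets almost none, matching the allocation rule in the abstract.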

14 pages, 2370 KB  
Article
DP-AMF: Depth-Prior–Guided Adaptive Multi-Modal and Global–Local Fusion for Single-View 3D Reconstruction
by Luoxi Zhang, Chun Xie and Itaru Kitahara
J. Imaging 2025, 11(7), 246; https://doi.org/10.3390/jimaging11070246 - 21 Jul 2025
Cited by 4 | Viewed by 2052
Abstract
Single-view 3D reconstruction remains fundamentally ill-posed, as a single RGB image lacks scale and depth cues, often yielding ambiguous results under occlusion or in texture-poor regions. We propose DP-AMF, a novel Depth-Prior–Guided Adaptive Multi-Modal and Global–Local Fusion framework that integrates high-fidelity depth priors—generated offline by the MARIGOLD diffusion-based estimator and cached to avoid extra training cost—with hierarchical local features from ResNet-32/ResNet-18 and semantic global features from DINO-ViT. A learnable fusion module dynamically adjusts per-channel weights to balance these modalities according to local texture and occlusion, and an implicit signed-distance field decoder reconstructs the final mesh. Extensive experiments on 3D-FRONT and Pix3D demonstrate that DP-AMF reduces Chamfer Distance by 7.64%, increases F-Score by 2.81%, and boosts Normal Consistency by 5.88% compared to strong baselines, while qualitative results show sharper edges and more complete geometry in challenging scenes. DP-AMF achieves these gains without substantially increasing model size or inference time, offering a robust and effective solution for complex single-view reconstruction tasks. Full article
(This article belongs to the Section AI in Imaging)

19 pages, 1891 KB  
Article
Comparative Study on Energy Consumption of Neural Networks by Scaling of Weight-Memory Energy Versus Computing Energy for Implementing Low-Power Edge Intelligence
by Ilpyung Yoon, Jihwan Mun and Kyeong-Sik Min
Electronics 2025, 14(13), 2718; https://doi.org/10.3390/electronics14132718 - 5 Jul 2025
Cited by 8 | Viewed by 7290
Abstract
Energy consumption has emerged as a critical design constraint in deploying high-performance neural networks, especially on edge devices with limited power resources. In this paper, a comparative study is conducted for two prevalent deep learning paradigms—convolutional neural networks (CNNs), exemplified by ResNet18, and transformer-based large language models (LLMs), represented by GPT3-small, Llama-7B, and GPT3-175B. By analyzing how the scaling of memory energy versus computing energy affects the energy consumption of neural networks with different batch sizes (1, 4, 8, 16), it is shown that ResNet18 transitions from a memory energy-limited regime at low batch sizes to a computing energy-limited regime at higher batch sizes due to its extensive convolution operations. On the other hand, GPT-like models remain predominantly memory-bound, with large parameter tensors and frequent key–value (KV) cache lookups accounting for most of the total energy usage. Our results reveal that reducing weight-memory energy is particularly effective in transformer architectures, while improving multiply–accumulate (MAC) efficiency significantly benefits CNNs at higher workloads. We further highlight near-memory and in-memory computing approaches as promising strategies to lower data-transfer costs and enhance power efficiency in large-scale deployments. These findings offer actionable insights for architects and system designers aiming to optimize artificial intelligence (AI) performance under stringent energy budgets on battery-powered edge devices. Full article
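The memory-bound vs. compute-bound transition can be illustrated with a toy energy model in which weight traffic is amortised across the batch (weights are fetched once per batch) while compute scales with per-sample MACs. All constants and workload sizes below are assumed round numbers in arbitrary units, not the paper's measurements:

```python
def energy_per_sample(macs, weight_bytes, batch, e_mac=1.0, e_byte=100.0):
    """Illustrative per-sample energy split: compute grows with MAC count,
    while weight-memory traffic is divided across the batch."""
    compute = macs * e_mac
    memory = weight_bytes * e_byte / batch
    return compute, memory

# Assumed round numbers for a ResNet18-like workload:
regimes = {}
for batch in (1, 4, 16):
    compute, memory = energy_per_sample(macs=2e9, weight_bytes=5e7, batch=batch)
    regimes[batch] = "memory-bound" if memory > compute else "compute-bound"
```

Under these assumed constants the workload is memory-bound at batch 1 and flips to compute-bound as the batch grows, mirroring the ResNet18 behaviour the abstract reports; a GPT-like model with far larger `weight_bytes` would stay memory-bound throughout.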

25 pages, 1339 KB  
Article
Link-State-Aware Proactive Data Delivery in Integrated Satellite–Terrestrial Networks for Multi-Modal Remote Sensing
by Ranshu Peng, Chunjiang Bian, Shi Chen and Min Wu
Remote Sens. 2025, 17(11), 1905; https://doi.org/10.3390/rs17111905 - 30 May 2025
Cited by 1 | Viewed by 1746
Abstract
This paper seeks to address the limitations of conventional remote sensing data dissemination algorithms, particularly their inability to model fine-grained multi-modal heterogeneous feature correlations and adapt to dynamic network topologies under resource constraints. This paper proposes multi-modal-MAPPO, a novel multi-modal deep reinforcement learning (MDRL) framework designed for a proactive data push in large-scale integrated satellite–terrestrial networks (ISTNs). By integrating satellite cache states, user cache states, and multi-modal data attributes (including imagery, metadata, and temporal request patterns) into a unified Markov decision process (MDP), our approach pioneers the application of the multi-actor-attention-critic with parameter sharing (MAPPO) algorithm to ISTNs push tasks. Central to this framework is a dual-branch actor network architecture that dynamically fuses heterogeneous modalities: a lightweight MobileNet-v3-small backbone extracts semantic features from remote sensing imagery, while parallel branches—a multi-layer perceptron (MLP) for static attributes (e.g., payload specifications, geolocation tags) and a long short-term memory (LSTM) network for temporal user cache patterns—jointly model contextual and historical dependencies. A dynamically weighted attention mechanism further adapts modality-specific contributions to enhance cross-modal correlation modeling in complex, time-varying scenarios. To mitigate the curse of dimensionality in high-dimensional action spaces, we introduce a multi-dimensional discretization strategy that decomposes decisions into hierarchical sub-policies, balancing computational efficiency and decision granularity. Comprehensive experiments against state-of-the-art baselines (MAPPO, MAAC) demonstrate that multi-modal-MAPPO reduces the average content delivery latency by 53.55% and 29.55%, respectively, while improving push hit rates by 0.1718 and 0.4248. 
These results establish the framework as a scalable and adaptive solution for real-time intelligent data services in next-generation ISTNs, addressing critical challenges in resource-constrained, dynamic satellite–terrestrial environments. Full article
(This article belongs to the Special Issue Advances in Multi-Source Remote Sensing Data Fusion and Analysis)

17 pages, 748 KB  
Article
Task Offloading Scheme Based on Proximal Policy Optimization Algorithm
by Yutong Ma and Junfeng Tian
Appl. Sci. 2025, 15(9), 4761; https://doi.org/10.3390/app15094761 - 25 Apr 2025
Cited by 2 | Viewed by 1698
Abstract
The rapid development of mobile Internet technology has continuously raised users’ requirements for quality of service (QoS). In mobile edge computing, the task offloading process struggles to balance delay and energy consumption when network bandwidth fluctuates. To address this issue, this paper proposes a task offloading scheme based on the Proximal Policy Optimization (PPO) algorithm. On the basis of the traditional cloud-edge collaborative architecture, the collaborative computing mechanism between edge node devices is further integrated, and the concept of service caching is introduced to reduce duplicate data transmission, lower communication latency and network load, and improve overall system performance. First, this paper constructs an energy efficiency function, with a weighted combination of energy consumption and latency as the core optimization objective. Then, the task offloading process of mobile terminal devices is modeled as a Markov Decision Process (MDP). Finally, the deep reinforcement learning PPO algorithm is used for training and learning, and the model is solved. The simulation results show that the proposed scheme has significant advantages in reducing energy consumption and latency compared to the comparative schemes. Full article
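The weighted energy/latency objective at the heart of such schemes can be illustrated by a single local-vs-edge decision. The channel rate, CPU frequencies, and power figures below are invented placeholders, not the paper's parameters:

```python
def offload_decision(task_bits, cpu_cycles, w=0.5, f_local=1e9, f_edge=8e9,
                     p_local=2.0, p_tx=0.5, rate=20e6):
    """Choose local vs. edge execution by a weighted delay/energy cost:
    cost = w * delay + (1 - w) * device_energy."""
    local_delay = cpu_cycles / f_local
    local_energy = p_local * local_delay
    edge_delay = task_bits / rate + cpu_cycles / f_edge
    edge_energy = p_tx * (task_bits / rate)      # device pays only to transmit
    local_cost = w * local_delay + (1.0 - w) * local_energy
    edge_cost = w * edge_delay + (1.0 - w) * edge_energy
    return "edge" if edge_cost < local_cost else "local"

heavy_compute = offload_decision(task_bits=1e6, cpu_cycles=5e9)
heavy_data = offload_decision(task_bits=2e8, cpu_cycles=1e7)
```

Compute-heavy, data-light tasks go to the edge; data-heavy, compute-light tasks stay local. An RL policy such as PPO learns this trade-off per state rather than evaluating it in closed form.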

17 pages, 432 KB  
Article
Efficient Modeling and Usage of Scratchpad Memory for Artificial Intelligence Accelerators
by Cagla Irmak Rumelili Köksal and Sıddıka Berna Örs Yalçın
Electronics 2025, 14(5), 1032; https://doi.org/10.3390/electronics14051032 - 5 Mar 2025
Cited by 1 | Viewed by 3952
Abstract
Deep learning accelerators play a crucial role in enhancing computation-intensive AI applications. Optimizing system resources—such as shared caches, on-chip SRAM, and data movement mechanisms—is essential for achieving peak performance and energy efficiency. This paper explores the trade-off between last-level cache (LLC) and scratchpad memory (SPM) usage in accelerator-based SoCs. To evaluate this trade-off, we introduce a high-speed simulator for estimating the timing performance of complex SoCs and demonstrate the benefits of SPM utilization. Our work shows that dynamic reconfiguration of the LLC into an SPM with prefetching capabilities reduces cache misses while improving resource utilization, performance, and energy efficiency. With SPM usage, we achieve up to 13× speedup and a 10% reduction in energy consumption for CNN backbones. Additionally, our simulator significantly outperforms state-of-the-art alternatives, running 3000× faster than gem5-SALAM for fixed-weight convolution computations and up to 64,000× faster as weight size increases. These results validate the effectiveness of both the proposed architecture and simulator in optimizing deep learning workloads. Full article
(This article belongs to the Special Issue Recent Advances in AI Hardware Design)

22 pages, 1625 KB  
Article
The Diet of Eleonora’s Falcons (Falco eleonorae) during the Autumn Migration of Passerine Birds across the Aegean Sea
by Dietrich Ristow and Michael Wink
Diversity 2024, 16(9), 538; https://doi.org/10.3390/d16090538 - 2 Sep 2024
Cited by 5 | Viewed by 4106
Abstract
Every year, several hundred million birds cross the Mediterranean on their migration from Eurasia to their wintering quarters in Africa. As many migrants travel at night or at high altitudes, direct observations of bird migration are difficult and thus our information about migrating species, numbers and timing is incomplete. An indirect way to assess autumn migration is the analysis of prey remains of Eleonora’s Falcons (Falco eleonorae). These falcons breed in large colonies on islands in the Mediterranean and on the Canary Islands. Many migrants have to pass these islands on their flight to their African wintering quarters. Eleonora’s Falcons appear to be adapted to the autumn bird migration and raise their young between August and October, when migrating birds are abundant. When nestlings have to be fed, falcons exclusively hunt small birds of 10 to 150 g body mass, whereas they prey mostly on aerial invertebrates (Coleoptera, Hymenoptera, Diptera, Orthoptera, Hemiptera, Odonata, Lepidoptera) from November to July. We studied Eleonora’s Falcons from 1965 to 2001 on a rocky islet, north of Crete, which harboured a colony of about 200 breeding pairs. In 1969, 1971, 1977, and 1988 we systematically monitored and collected the pluckings and cached food items in 22 to 36 nest sites each year. Pluckings were systematically analysed later in Germany using a reference collection of bird feathers for identification. In total, we determined more than 111 prey species (mostly Passerines) comprising more than 13,450 individuals. The top 12 prey species were: Willow Warbler (27.8% of all prey items), Red-backed Shrike (10.7%), Spotted Flycatcher (9.9%), Whinchat (8.8%), Common Whitethroat (5.1%), Wood Warbler (3.8%), Tree Pipit (2.9%), Icterine Warbler (2.5%), Greater Short-toed Lark (2.5%), Northern Wheatear (1.8%), Common Nightingale (1.6%), and European Pied Flycatcher (1.5%).
Eleonora’s Falcons are selective hunters to some degree; thus, the phenology and abundance data derived from the plucking analyses are biased towards slow-flying species or smaller birds (only up to a body mass of 150 g). When the young falcons develop and grow, food demand increases concomitantly. Comparing the total weight of prey over time indicates a correlation with food demand and, consequently, with the number of prey items brought to the nest sites by the falcons. Full article
(This article belongs to the Special Issue 2024 Feature Papers by Diversity’s Editorial Board Members)

11 pages, 533 KB  
Article
Proto-Adapter: Efficient Training-Free CLIP-Adapter for Few-Shot Image Classification
by Naoki Kato, Yoshiki Nota and Yoshimitsu Aoki
Sensors 2024, 24(11), 3624; https://doi.org/10.3390/s24113624 - 4 Jun 2024
Cited by 3 | Viewed by 7351
Abstract
Large vision-language models, such as Contrastive Vision-Language Pre-training (CLIP), pre-trained on large-scale image–text datasets, have demonstrated robust zero-shot transfer capabilities across various downstream tasks. To further enhance the few-shot recognition performance of CLIP, Tip-Adapter augments the CLIP model with an adapter that incorporates a key-value cache model constructed from the few-shot training set. This approach enables training-free adaptation and has shown significant improvements in few-shot recognition, especially with additional fine-tuning. However, the size of the adapter increases in proportion to the number of training samples, making it difficult to deploy in practical applications. In this paper, we propose a novel CLIP adaptation method, named Proto-Adapter, which employs a single-layer adapter of constant size regardless of the amount of training data and even outperforms Tip-Adapter. Proto-Adapter constructs the adapter’s weights based on prototype representations for each class. By aggregating the features of the training samples, it successfully reduces the size of the adapter without compromising performance. Moreover, the performance of the model can be further enhanced by fine-tuning the adapter’s weights using a distance margin penalty, which imposes additional inter-class discrepancy to the output logits. We posit that this training scheme allows us to obtain a model with a discriminative decision boundary even when trained with a limited amount of data. We demonstrate the effectiveness of the proposed method through extensive experiments of few-shot classification on diverse datasets. Full article
(This article belongs to the Section Sensing and Imaging)
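The key property claimed in the abstract is that the adapter's size depends only on the number of classes, not on the number of training samples, because each class is summarized by a single prototype. A minimal sketch of that idea (hypothetical function names; the paper's actual adapter, margin penalty, and blending coefficients are not reproduced here):

```python
import numpy as np

def build_proto_adapter(features, labels, num_classes):
    """Average the L2-normalised few-shot features per class to obtain one
    prototype per class; the prototype matrix serves as the adapter weight.
    Its shape is (num_classes, dim) regardless of the number of shots."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    protos = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        protos[c] = feats[labels == c].mean(axis=0)
    # Re-normalise so the adapter scores are cosine similarities.
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def adapter_logits(protos, query, clip_logits=None, alpha=1.0):
    """Cosine similarity of a query feature to each class prototype,
    optionally blended with CLIP's zero-shot logits (Tip-Adapter style)."""
    q = query / np.linalg.norm(query)
    sims = protos @ q
    return sims if clip_logits is None else clip_logits + alpha * sims
```

Doubling the number of shots per class changes only the averaging step, not the adapter's parameter count, which is the contrast the abstract draws against the cache model in Tip-Adapter.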
27 pages, 3989 KB  
Article
QWLCPM: A Method for QoS-Aware Forwarding and Caching Using Simple Weighted Linear Combination and Proximity for Named Data Vehicular Sensor Network
by Dependra Dhakal and Kalpana Sharma
Electronics 2024, 13(7), 1183; https://doi.org/10.3390/electronics13071183 - 23 Mar 2024
Cited by 1 | Viewed by 1660
Abstract
The named data vehicular sensor network (NDVSN) has become an increasingly important area of research because of the increasing demand for data transmission in vehicular networks. In such networks, ensuring the quality of service (QoS) of data transmission is essential. The NDVSN is a mobile ad hoc network that uses vehicles equipped with sensors to collect and disseminate data. QoS is critical in vehicular networks, as the data transmission must be reliable, efficient, and timely to support various applications. This paper proposes a QoS-aware forwarding and caching algorithm for NDVSNs, called QWLCPM (QoS-aware Forwarding and Caching using Weighted Linear Combination and Proximity Method). QWLCPM utilizes a weighted linear combination and proximity method to determine stable nodes and the best next-hop forwarding path based on various metrics, including priority, signal strength, vehicle speed, global positioning system data, and vehicle ID. Additionally, it incorporates a weighted linear combination method for the caching mechanism to store frequently accessed data based on zone ID, stability, and priority. The performance of QWLCPM is evaluated through simulations and compared with other forwarding and caching algorithms. QWLCPM’s efficacy stems from its holistic decision-making process, incorporating spatial and temporal elements for efficient cache management. Zone-based caching, showcased in different scenarios, enhances content delivery by utilizing stable nodes. QWLCPM’s proximity considerations significantly improve cache hits, reduce delay, and optimize hop count, especially in scenarios with sparse traffic. Additionally, its priority-based caching mechanism enhances hit ratios and content diversity, emphasizing QWLCPM’s substantial network-improvement potential in vehicular environments. These findings suggest that QWLCPM has the potential to greatly enhance QoS in NDVSNs and serve as a promising solution for future vehicular sensor networks. Future research could focus on refining its implementation details, assessing scalability in larger networks, and conducting real-world trials to validate its performance under dynamic conditions. Full article
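The core scoring idea described in the abstract, ranking candidate next-hop (or caching) nodes by a weighted linear combination of normalized per-node metrics, can be sketched as follows. The metric names, weight values, and candidate data here are illustrative assumptions, not the paper's actual parameters:

```python
def node_score(metrics, weights):
    """Weighted linear combination of normalized per-node metrics in [0, 1].
    A higher score indicates a more suitable next-hop or caching node."""
    return sum(weights[name] * metrics[name] for name in weights)

# Hypothetical weights over four normalized metrics (must sum to 1.0 here
# so the combined score also stays in [0, 1]).
weights = {"signal": 0.4, "stability": 0.3, "priority": 0.2, "proximity": 0.1}

# Two hypothetical candidate vehicles with pre-normalized metric values.
candidates = {
    "v1": {"signal": 0.9, "stability": 0.8, "priority": 0.5, "proximity": 0.7},
    "v2": {"signal": 0.6, "stability": 0.9, "priority": 0.9, "proximity": 0.4},
}

# Pick the candidate with the highest combined score as the next hop.
best = max(candidates, key=lambda n: node_score(candidates[n], weights))
```

Tuning the weights shifts the trade-off: emphasizing `signal` and `stability` favors reliable links, while raising the `priority` weight biases both forwarding and caching toward higher-priority content, in line with the priority-based caching behavior the abstract reports.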
(This article belongs to the Special Issue Advances in Wireless Sensor Networks)
