Search Results (2,189)

Search Parameters:
Keywords = e-learning experience

27 pages, 5145 KiB  
Article
An Improved Deep Q-Learning Approach for Navigation of an Autonomous UAV Agent in 3D Obstacle-Cluttered Environment
by Ghulam Farid, Muhammad Bilal, Lanyong Zhang, Ayman Alharbi, Ishaq Ahmed and Muhammad Azhar
Drones 2025, 9(8), 518; https://doi.org/10.3390/drones9080518 - 23 Jul 2025
Abstract
The performance of UAVs executing various mission profiles depends greatly on the choice of planning algorithm. Reinforcement learning (RL) algorithms can be used effectively for robot path planning, but because ties between actions are broken at random, the traditional Q-learning algorithm and its variants suffer from slow convergence and suboptimal path planning in high-dimensional navigational environments. To solve these problems, we propose an improved deep Q-network (DQN) incorporating an efficient tie-breaking mechanism, prioritized experience replay (PER), and L2 regularization. The tie-breaking mechanism improves action selection and ultimately helps generate an optimal trajectory for the UAV in a cluttered 3D environment. To improve convergence speed, prioritized experience replay learns from experiences with high temporal-difference (TD) error instead of sampling stored transitions uniformly during training. This also prioritizes high-reward experiences (e.g., reaching a goal), helping the agent rediscover these valuable states and improve learning. Moreover, L2 regularization encourages smaller weights, yielding more stable and smoother Q-values that reduce erratic action selection and promote smoother UAV flight paths. Finally, the performance of the proposed method is thoroughly compared against the traditional DQN, demonstrating its superior effectiveness.
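The two ingredients named in this abstract, prioritized replay and deterministic tie-breaking, can be sketched in isolation. This is an illustrative sketch, not the authors' implementation: the proportional-priority scheme and the least-visited tie-break rule are assumptions chosen for demonstration.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative only)."""

    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha   # how strongly TD error shapes sampling probability
        self.eps = eps       # keeps zero-error transitions sampleable
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) >= self.capacity:  # overwrite the oldest entry
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        return [self.data[i] for i in idx]

def argmax_with_tie_break(q_values, visit_counts):
    """Break Q-value ties toward the least-visited action instead of at random."""
    best = max(q_values)
    tied = [a for a, q in enumerate(q_values) if q == best]
    return min(tied, key=lambda a: visit_counts[a])
```

High-TD-error transitions are sampled more often, so the agent revisits surprising or rewarding experiences; the tie-break rule replaces the random choice that the abstract identifies as a cause of slow convergence.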

27 pages, 705 KiB  
Article
A Novel Wavelet Transform and Deep Learning-Based Algorithm for Low-Latency Internet Traffic Classification
by Ramazan Enisoglu and Veselin Rakocevic
Algorithms 2025, 18(8), 457; https://doi.org/10.3390/a18080457 - 23 Jul 2025
Abstract
Accurate and real-time classification of low-latency Internet traffic is critical for applications such as video conferencing, online gaming, financial trading, and autonomous systems, where millisecond-level delays can degrade user experience. Existing methods for low-latency traffic classification, which rely on raw temporal features or static statistical analyses, fail to capture the dynamic frequency patterns inherent to real-time applications, hindering accurate resource allocation in heterogeneous networks. This paper proposes a novel framework integrating wavelet transform (WT) and artificial neural networks (ANNs) to address this gap. Unlike prior work, we systematically apply the WT to commonly used temporal features—such as throughput, slope, ratio, and moving averages—transforming them into frequency-domain representations. This approach reveals hidden multi-scale patterns in low-latency traffic, akin to structured noise in signal processing, which traditional time-domain analyses often overlook. These wavelet-enhanced features train a multilayer perceptron (MLP) ANN, enabling dual-domain (time–frequency) analysis. We evaluate our approach on a dataset comprising FTP, video streaming, and low-latency traffic, including mixed scenarios with up to four concurrent traffic types. Experiments demonstrate 99.56% accuracy in distinguishing low-latency traffic (e.g., video conferencing) from FTP and streaming, outperforming k-NN, CNNs, and LSTMs, and 74.2–92.8% accuracy in mixed-traffic scenarios. Notably, our method eliminates reliance on deep packet inspection (DPI), offering ISPs a privacy-preserving and scalable solution for prioritizing time-sensitive traffic. By bridging signal processing and deep learning, this work advances efficient bandwidth allocation and improves quality of service in heterogeneous network environments.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
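The wavelet-feature idea described above can be made concrete with a one-level Haar transform and per-sub-band energy features computed over a throughput window. This is a minimal stand-in: the paper does not specify its wavelet or feature set here, so the Haar choice, the three-level decomposition, and the function names are assumptions.

```python
import math

def haar_dwt(signal):
    """One-level Haar wavelet transform: returns (approximation, detail) coefficients."""
    assert len(signal) % 2 == 0, "pad the window to even length first"
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Relative energy per sub-band: a compact frequency-domain feature vector
    suitable as classifier input (window length must be divisible by 2**levels)."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(sum(d * d for d in detail))   # detail-band energy per level
    feats.append(sum(a * a for a in approx))       # residual approximation energy
    total = sum(feats) or 1.0
    return [f / total for f in feats]
```

A constant throughput trace puts all its energy in the coarsest band, while bursty low-latency traffic spreads energy into the detail bands; that spread is the "structured noise" signature the classifier learns.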

23 pages, 351 KiB  
Entry
Evolutionary Mismatches Inherent in Elementary Education: Identifying the Implications for Modern Schooling Practices
by Kathryne Gruskin, Anthony J. Caserta, Julia Colodny, Stephanie Dickinson-Frevola, Ethan Eisenberg, Glenn Geher, Mariah Griffin, Aileen McCarthy, Sonia Santos, Shayla Thach and Nadia Tamayo
Encyclopedia 2025, 5(3), 105; https://doi.org/10.3390/encyclopedia5030105 - 21 Jul 2025
Viewed by 369
Definition
For the majority of human history, humans lived in subsistence hunter–gatherer tribes. Because cultural evolution over the past few thousand years has outpaced our biological evolution, many of our adaptations are better suited to ancestral conditions than to modern ones. This is known as evolutionary mismatch. While evolutionary mismatches can be seen across many facets of contemporary human life (e.g., diet, exercise, online communication), they are particularly pervasive in our elementary schools. Given the critical role of social learning and cultural transmission, a long history of learning has shaped the evolved learning mechanisms of children. Rather than learning from hands-on, collaborative experiences, as was typical for our ancestors, children today often learn in age-segregated classrooms through passive instruction and standardized curricula. In this entry, eight common school-related issues are identified and the associated evolutionary mismatches are outlined. The goal is to provide educators with a model of how an evolutionary lens can be used to better understand, and potentially improve, modern schooling systems.
(This article belongs to the Section Behavioral Sciences)
27 pages, 1868 KiB  
Article
SAM2-DFBCNet: A Camouflaged Object Detection Network Based on the Heira Architecture of SAM2
by Cao Yuan, Libang Liu, Yaqin Li and Jianxiang Li
Sensors 2025, 25(14), 4509; https://doi.org/10.3390/s25144509 - 21 Jul 2025
Viewed by 165
Abstract
Camouflaged Object Detection (COD) aims to segment objects that are highly integrated with their background, presenting significant challenges such as low contrast, complex textures, and blurred boundaries. Existing deep learning methods often struggle to achieve robust segmentation under these conditions. To address these limitations, this paper proposes a novel COD network, SAM2-DFBCNet, built upon the SAM2 Hiera architecture. Our network incorporates three key modules: (1) the Camouflage-Aware Context Enhancement Module (CACEM), which fuses local and global features through an attention mechanism to enhance contextual awareness in low-contrast scenes; (2) the Cross-Scale Feature Interaction Bridge (CSFIB), which employs a bidirectional convolutional GRU for the dynamic fusion of multi-scale features, effectively mitigating representation inconsistencies caused by complex textures and deformations; and (3) the Dynamic Boundary Refinement Module (DBRM), which combines channel and spatial attention mechanisms to improve boundary localization accuracy and enhance segmentation details. Extensive experiments on three public datasets—CAMO, COD10K, and NC4K—demonstrate that SAM2-DFBCNet outperforms twenty state-of-the-art methods, achieving maximum improvements of 7.4%, 5.78%, and 4.78% in the S-measure (Sα), F-measure (Fβ), and mean E-measure (Eϕ), respectively, while reducing the Mean Absolute Error (M) by 37.8%. These results validate the superior performance and robustness of our approach in complex camouflage scenarios.
(This article belongs to the Special Issue Transformer Applications in Target Tracking)

22 pages, 32971 KiB  
Article
Spatial-Channel Multiscale Transformer Network for Hyperspectral Unmixing
by Haixin Sun, Qiuguang Cao, Fanlei Meng, Jingwen Xu and Mengdi Cheng
Sensors 2025, 25(14), 4493; https://doi.org/10.3390/s25144493 - 19 Jul 2025
Viewed by 240
Abstract
In recent years, deep learning (DL) has demonstrated remarkable capabilities in hyperspectral unmixing (HU) due to its powerful feature representation ability. Convolutional neural networks (CNNs) are effective at capturing local spatial information but limited in modeling long-range dependencies. In contrast, transformer architectures extract global contextual features via multi-head self-attention (MHSA) mechanisms. However, most existing transformer-based HU methods model only spatial or spectral information at a single scale, lacking a unified mechanism to jointly explore spatial and channel-wise dependencies. This limitation is particularly critical for multiscale contextual representation in complex scenes. To address these issues, this article proposes a novel Spatial-Channel Multiscale Transformer Network (SCMT-Net) for HU. Specifically, a compact feature projection (CFP) module first extracts shallow discriminative features. Then, a spatial multiscale transformer (SMT) and a channel multiscale transformer (CMT) are sequentially applied to model contextual relations across spatial dimensions and long-range dependencies among spectral channels. In addition, a multiscale multi-head self-attention (MMSA) module is designed to extract rich multiscale global contextual and channel information, balancing accuracy and efficiency, and an efficient feed-forward network (E-FFN) is introduced to enhance inter-channel information flow and fusion. Experiments on three real hyperspectral datasets (Samson, Jasper, and Apex) and one synthetic dataset showed that SCMT-Net consistently outperformed existing approaches in both abundance estimation and endmember extraction, demonstrating superior accuracy and robustness.
(This article belongs to the Section Sensor Networks)
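Per head, the MHSA mechanism this abstract builds on reduces to scaled dot-product attention: each query position is rewritten as a similarity-weighted average of value vectors. The single-head, plain-Python sketch below shows only that core operation, not SCMT-Net itself.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(Q, K, V):
    """out_i = sum_j softmax_j(Q_i . K_j / sqrt(d)) * V_j
    Q, K, V are lists of vectors; K and V have equal length."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[t] for w, v in zip(weights, V))
                    for t in range(len(V[0]))])
    return out
```

Multi-head attention runs several such maps on learned projections of the input and concatenates the results; the paper's MMSA variant additionally mixes scales within the attention computation.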

20 pages, 1798 KiB  
Article
An Approach to Enable Human–3D Object Interaction Through Voice Commands in an Immersive Virtual Environment
by Alessio Catalfamo, Antonio Celesti, Maria Fazio, A. F. M. Saifuddin Saif, Yu-Sheng Lin, Edelberto Franco Silva and Massimo Villari
Big Data Cogn. Comput. 2025, 9(7), 188; https://doi.org/10.3390/bdcc9070188 - 17 Jul 2025
Viewed by 262
Abstract
Nowadays, the Metaverse faces many challenges. In this context, Virtual Reality (VR) applications allowing voice-based human–3D object interaction are limited by current hardware and software. In fact, adopting Automated Speech Recognition (ASR) systems to interact with 3D objects in VR applications through users’ voice commands presents significant challenges due to the hardware and software limitations of headset devices. This paper aims to bridge this gap by proposing a methodology to address these issues. In particular, we extract Mel-Frequency Cepstral Coefficients (MFCCs), which capture the unique characteristics of the user’s voice, and pass them as input to a Convolutional Neural Network (CNN) model. After that, to integrate the CNN model with a VR application running on a standalone headset such as the Oculus Quest, we converted it into the Open Neural Network Exchange (ONNX) format, a Machine Learning (ML) interoperability open standard. The proposed system demonstrates good performance and represents a foundation for the development of user-centric, effective computing systems, enhancing accessibility to VR environments through voice-based commands. Experiments demonstrate that a native CNN model developed in TensorFlow performs comparably to the corresponding CNN model converted into the ONNX format, paving the way for VR applications running in headsets controlled by the user’s voice.

18 pages, 7391 KiB  
Article
Reliable QoE Prediction in IMVCAs Using an LMM-Based Agent
by Michael Sidorov, Tamir Berger, Jonathan Sterenson, Raz Birman and Ofer Hadar
Sensors 2025, 25(14), 4450; https://doi.org/10.3390/s25144450 - 17 Jul 2025
Viewed by 192
Abstract
Face-to-face interaction is one of the most natural forms of human communication. Unsurprisingly, Video Conferencing (VC) applications have experienced a significant rise in demand over the past decade. With the widespread availability of cellular devices equipped with high-resolution cameras, Instant Messaging Video Call Applications (IMVCAs) now constitute a substantial portion of VC communications. Given the multitude of IMVCA options, maintaining a high Quality of Experience (QoE) is critical. While content providers can measure QoE directly through end-to-end connections, Internet Service Providers (ISPs) must infer QoE indirectly from network traffic—a non-trivial task, especially when most traffic is encrypted. In this paper, we analyze a large dataset collected from the WhatsApp IMVCA, comprising over 25,000 seconds of VC sessions. We apply four Machine Learning (ML) algorithms and a Large Multimodal Model (LMM)-based agent, achieving mean errors of 4.61%, 5.36%, and 13.24% for three popular QoE metrics: BRISQUE, PIQE, and FPS, respectively.

22 pages, 15594 KiB  
Article
Seasonally Robust Offshore Wind Turbine Detection in Sentinel-2 Imagery Using Imaging Geometry-Aware Deep Learning
by Xike Song and Ziyang Li
Remote Sens. 2025, 17(14), 2482; https://doi.org/10.3390/rs17142482 - 17 Jul 2025
Viewed by 227
Abstract
Remote sensing has emerged as a promising technology for large-scale detection and updating of global wind turbine databases. High-resolution imagery (e.g., Google Earth) facilitates the identification of offshore wind turbines (OWTs) but offers limited offshore coverage due to the high cost of capturing vast ocean areas. In contrast, medium-resolution imagery, such as 10-m Sentinel-2, provides broad ocean coverage but depicts turbines only as small bright spots and shadows, making accurate detection challenging. To address these limitations, we propose a novel deep learning approach that captures the variability in OWT appearance and shadows caused by changes in solar illumination and satellite viewing geometry. Our method learns intrinsic, imaging geometry-invariant features of OWTs, enabling robust detection across multi-seasonal Sentinel-2 imagery. The approach is implemented with Faster R-CNN as the baseline and three enhanced extensions: (1) direct integration of imaging parameters, where Geowise-Net incorporates the solar and view angles from satellite metadata to improve geometric awareness; (2) implicit geometry learning, where Contrast-Net employs contrastive learning on seasonal image pairs to capture the variability in turbine appearance and shadows caused by changes in solar and viewing geometry; and (3) a Composite model that integrates the two geometry-aware models to exploit their complementary strengths. All four models were evaluated on Sentinel-2 imagery from offshore regions in China. Ablation experiments showed a progressive improvement in detection performance: Faster R-CNN < Geowise-Net < Contrast-Net < Composite. Seasonal tests demonstrated that the proposed models maintain high performance on summer images, where turbine shadows are significantly shorter than in winter scenes. The Composite model, in particular, showed only a 0.8% difference in F1 score between the two seasons, compared with up to 3.7% for the baseline, indicating strong robustness to seasonal variation. By applying our approach to 887 Sentinel-2 scenes from China’s offshore regions (January 2023–March 2025), we built the China OWT Dataset, mapping 7369 turbines as of March 2025.
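Contrast-Net's seasonal pairing can be illustrated with a standard InfoNCE-style objective: a summer embedding should be most similar to the winter embedding of the same turbine site and dissimilar to embeddings of other sites. The sketch below is a generic formulation under that assumption, not the paper's actual loss; the temperature value and the cosine similarity are illustrative choices.

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def seasonal_contrastive_loss(summer, winter, tau=0.1):
    """InfoNCE over aligned pairs: summer[i] should match winter[i], not winter[j].
    Lower loss means embeddings are season-invariant per site."""
    loss = 0.0
    n = len(summer)
    for i in range(n):
        sims = [math.exp(cosine(summer[i], w) / tau) for w in winter]
        loss += -math.log(sims[i] / sum(sims))
    return loss / n
```

Minimizing this pulls the two seasonal views of each site together in feature space, which is one way to realize the "imaging geometry-invariant features" the abstract describes.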

20 pages, 927 KiB  
Article
An Optimization Model with “Perfect Rationality” for Expert Weight Determination in MAGDM
by Yuetong Liu, Chaolang Hu, Shiquan Zhang and Qixiao Hu
Mathematics 2025, 13(14), 2286; https://doi.org/10.3390/math13142286 - 16 Jul 2025
Viewed by 118
Abstract
Given the evaluation data of all experts in multi-attribute group decision making, this paper establishes an optimization model that learns and determines expert weights by minimizing the sum of the differences between individual evaluations and the overall consistent evaluation result. The paper proves the uniqueness of the solution of the optimization model and rigorously proves that the expert weights obtained by the model have “perfect rationality”, i.e., the weights are inversely proportional to the distance to the “overall consistent scoring point”. Based on this characterization, the optimization problem is further transformed into solving a system of nonlinear equations to obtain the expert weights. Finally, numerical experiments verify the rationality of the model and the feasibility of transforming the problem into a system of nonlinear equations. The experiments demonstrate that the deviation metric for the expert weights produced by the optimization model is significantly lower than that obtained under equal weighting or the entropy weight method, and approaches zero; within numerical tolerance, this confirms the model’s “perfect rationality”. Furthermore, the weights determined by solving the corresponding nonlinear equations coincide exactly with the optimization solution, indicating that a dedicated algorithm grounded in perfect rationality can directly solve the model.
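The stated characterization, weights inversely proportional to each expert's distance from the overall consistent scoring point, suggests a simple fixed-point iteration: alternate between computing the weighted consensus and re-deriving the weights from distances to it. This is an illustrative reading of that property, not the authors' algorithm; the iteration count and the distance floor `eps` are assumptions.

```python
import math

def expert_weights(scores, iters=200, eps=1e-9):
    """Fixed-point sketch: consensus = weighted mean of expert score vectors;
    weight_k is proportional to 1 / distance(expert_k, consensus).
    `scores` is a list of per-expert evaluation vectors of equal length."""
    m = len(scores)
    w = [1.0 / m] * m                       # start from equal weights
    for _ in range(iters):
        consensus = [sum(w[k] * scores[k][j] for k in range(m))
                     for j in range(len(scores[0]))]
        dists = [max(eps, math.dist(scores[k], consensus)) for k in range(m)]
        inv = [1.0 / d for d in dists]
        total = sum(inv)
        w = [x / total for x in inv]        # normalize to sum to 1
    return w
```

Experts whose evaluations sit close to the emerging consensus accumulate weight, while outliers are progressively discounted, which is the qualitative behavior the abstract attributes to the model.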

17 pages, 1514 KiB  
Article
Examining the Flow Dynamics of Artificial Intelligence in Real-Time Classroom Applications
by Zoltán Szűts, Tünde Lengyelné Molnár, Réka Racskó, Geoffrey Vaughan, Szabolcs Ceglédi and Dalma Lilla Dominek
Computers 2025, 14(7), 275; https://doi.org/10.3390/computers14070275 - 14 Jul 2025
Viewed by 349
Abstract
The integration of artificial intelligence (AI) into educational environments is fundamentally transforming the learning process, raising new questions about student engagement and motivation. This empirical study investigates the relationship between AI-based learning support and the experience of flow, defined as the optimal state of deep attention and intrinsic motivation, among university students. Building on Csíkszentmihályi’s flow theory and current models of technology-enhanced learning, we administered a validated, purpose-built AI questionnaire (AIFLQ) to 142 students from two Hungarian universities: the Ludovika University of Public Service and Eszterházy Károly Catholic University. The participants used generative AI tools (e.g., ChatGPT 4, SUNO) during their academic tasks. Based on the results of the Mann–Whitney U test, significant differences were found between students from the two universities in the immersion and balance factors, as well as in the overall flow score, while the AI-related factor showed no statistically significant difference. The sustainability of the flow experience appears to be linked more to pedagogical-methodological factors than to institutional ones, highlighting the importance of instructional support in fostering optimal learning experiences. Demographic variables also influenced the flow experience: in gender comparisons, female students showed significantly higher values on the immersion factor, and according to the Kruskal–Wallis test, educational attainment also affected the flow experience, with students holding higher education degrees achieving higher flow scores. Our findings suggest that the conscious design of AI tools and learning environments, taking into account instructional support and learner characteristics, can promote optimal learning states. This research provides empirical evidence at the intersection of AI and motivational psychology, contributing to both domestic and international discourse in educational psychology and digital pedagogy.
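The Mann–Whitney U statistic used in this study has a direct definition: count, over all cross-group pairs, how often a value from the first group exceeds a value from the second, with ties counting one half. A minimal sketch of the statistic itself follows; in practice one would use a statistics package such as `scipy.stats.mannwhitneyu`, which also supplies the p-value needed for the significance claims above.

```python
def mann_whitney_u(xs, ys):
    """U statistic for the first of two independent samples.
    U counts pairs (x, y) with x > y; ties contribute 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

A useful sanity check: the U statistics of the two samples always sum to `len(xs) * len(ys)`.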

20 pages, 3147 KiB  
Article
Crossed Wavelet Convolution Network for Few-Shot Defect Detection of Industrial Chips
by Zonghai Sun, Yiyu Lin, Yan Li and Zihan Lin
Sensors 2025, 25(14), 4377; https://doi.org/10.3390/s25144377 - 13 Jul 2025
Viewed by 264
Abstract
In resistive polymer humidity sensors, the quality of the resistor chips directly affects performance. Detecting chip defects remains challenging due to the scarcity of defective samples, which limits traditional supervised-learning methods that require abundant labeled data. While few-shot learning (FSL) shows promise for industrial defect detection, existing approaches struggle with mixed-scene conditions (e.g., daytime and night-vision scenes). In this work, we propose a crossed wavelet convolution network (CWCN), comprising a dual-pipeline crossed wavelet convolution training framework (DPCWC) and a loss-calculation module named ProSL. Our method innovatively applies wavelet-transform convolution and prototype learning to industrial defect detection, effectively fusing feature information from multiple scenarios and improving detection performance. Experiments across various few-shot tasks on chip datasets demonstrate the superior detection quality of CWCN, with improvements in mAP ranging from 2.76% to 16.43% over other FSL methods. In addition, experiments on the open-source dataset NEU-DET further validate the proposed method.
(This article belongs to the Section Sensing and Imaging)

15 pages, 289 KiB  
Project Report
Characteristics of Authentic Construction Learning Experiences to Enable Accurate Consideration of Cost-Effective Alternatives
by Karan R. Patil, Steven K. Ayer, Kieren H. McCord, Logan A. Perry, Wei Wu, Jeremi S. London and Andrew R. Kline
Buildings 2025, 15(14), 2446; https://doi.org/10.3390/buildings15142446 - 11 Jul 2025
Viewed by 187
Abstract
Authentic learning opportunities that simulate full-scale design and construction using real materials provide valuable experiential learning environments for construction and civil engineering students, challenging them to apply theoretical building concepts in a realistic, hands-on context. However, the excessive cost of the real building materials required for this mode of education puts it out of reach for the vast majority of students. As a result, educational researchers have explored cost-effective alternatives that provide experiential learning through activities using mock-up materials (e.g., plastic straws and popsicle sticks) or through simulations of these experiences using immersive technologies (e.g., virtual or augmented reality). While some of these alternatives approximate the environment and others provide physical interaction with mock-up materials, the lack of authenticity in the building materials introduces apparent differences between the “authentic” learning environments and their cost-effective approximations. Therefore, this research aims to identify the learning processes reported by students and faculty who participated in authentic learning experiences, in order to understand the ways in which this mode of education offers unique value to construction education. Their interview responses illustrated characteristics of authentic learning experiences believed to be critical to the learning process, including working in groups, interdisciplinary participants, and the use of real construction materials. Although some of these characteristics are intrinsically linked to the use of real materials, others do not explicitly refer to interaction with real materials. This may point to specific aspects of authentic learning that educational researchers can replicate or enhance to provide cost-effective learning environments, such as virtual or augmented reality. The contribution of this paper is in identifying the characteristics of authentic learning experiences that may guide educational investment and research innovations aiming to replicate some of these learning experiences through more accessible learning environments.

15 pages, 3425 KiB  
Article
Designing Cross-Domain Sustainability Instruction in Higher Education: A Mixed-Methods Study Using AHP and Transformative Pedagogy
by Wan-Ting Xie, Shang-Tse Ho and Han-Chien Lin
Sustainability 2025, 17(14), 6380; https://doi.org/10.3390/su17146380 - 11 Jul 2025
Viewed by 226
Abstract
This study proposes an interdisciplinary instructional model tailored for Functional Ecological Carbon (FEC) education, combining Electronic, Mobilize, and Ubiquitous (E/M/U) learning principles with the Practical Transformational Teaching Method (PTtM). The research adopts a mixed-methods framework, utilizing the Analytic Hierarchy Process (AHP) to prioritize teaching objectives and interpret student evaluations, alongside qualitative insights from reflective journals, open-ended surveys, and focus group discussions. The results indicate that hands-on experience, interdisciplinary collaboration, and context-aware applications play a critical role in fostering ecological awareness and responsibility among students. Notably, modules such as biosafety testing and water purification prompted transformative engagement with sustainability issues. The study contributes to sustainability education by integrating a decision-analytic structure with reflective learning and intelligent instructional strategies. The proposed model provides valuable implications for educators and policymakers designing interdisciplinary sustainability curricula in smart learning environments.

20 pages, 2750 KiB  
Article
E-InMeMo: Enhanced Prompting for Visual In-Context Learning
by Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima and Hajime Nagahara
J. Imaging 2025, 11(7), 232; https://doi.org/10.3390/jimaging11070232 - 11 Jul 2025
Viewed by 245
Abstract
Large-scale models trained on extensive datasets have become the standard due to their strong generalizability across diverse tasks. In-context learning (ICL), widely used in natural language processing, leverages these models by providing task-specific prompts without modifying their parameters. This paradigm is increasingly being adapted for computer vision, where models receive an input–output image pair, known as an in-context pair, alongside a query image to illustrate the desired output. However, the success of visual ICL largely hinges on the quality of these prompts. To address this, we propose Enhanced Instruct Me More (E-InMeMo), a novel approach that incorporates learnable perturbations into in-context pairs to optimize prompting. Through extensive experiments on standard vision tasks, E-InMeMo demonstrates superior performance over existing state-of-the-art methods. Notably, it improves mIoU scores by 7.99 for foreground segmentation and by 17.04 for single object detection when compared to the baseline without learnable prompts. These results highlight E-InMeMo as a lightweight yet effective strategy for enhancing visual ICL. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
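The prompting scheme described in the abstract can be pictured as a canvas: the in-context pair sits in one half, the query in the other, and a learnable perturbation is applied to the pair before the model fills in the missing output. A minimal sketch under assumed shapes and layout; the grid arrangement and the `build_prompt` helper are illustrative, not the paper's exact formulation:

```python
import numpy as np

def build_prompt(context_img, context_label, query_img, perturbation):
    """Assemble a visual ICL prompt: the in-context (input, output) pair,
    nudged by a learnable perturbation, is stitched above the query image.
    The blank cell is where the model is expected to paint its prediction."""
    perturbed_pair = np.clip(
        np.concatenate([context_img, context_label], axis=1) + perturbation,
        0.0, 1.0)
    query_row = np.concatenate([query_img, np.zeros_like(query_img)], axis=1)
    return np.concatenate([perturbed_pair, query_row], axis=0)

# Toy 8x8 grayscale images; the perturbation spans the whole in-context row.
H = W = 8
canvas = build_prompt(np.zeros((H, W)), np.ones((H, W)),
                      np.full((H, W), 0.5), 0.1 * np.ones((H, 2 * W)))
```

In training, only the perturbation would receive gradients; the large frozen model and the prompt layout stay fixed, which is what keeps the approach lightweight.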
Show Figures

Figure 1

30 pages, 5474 KiB  
Article
WHU-RS19 ABZSL: An Attribute-Based Dataset for Remote Sensing Image Understanding
by Mattia Balestra, Marina Paolanti and Roberto Pierdicca
Remote Sens. 2025, 17(14), 2384; https://doi.org/10.3390/rs17142384 - 10 Jul 2025
Abstract
The advancement of artificial intelligence (AI) in remote sensing (RS) increasingly depends on datasets that offer rich and structured supervision beyond traditional scene-level labels. Although existing benchmarks for aerial scene classification have facilitated progress in this area, their reliance on single-class annotations restricts their application to more flexible, interpretable and generalisable learning frameworks. In this study, we introduce WHU-RS19 ABZSL, an attribute-based extension of the widely adopted WHU-RS19 dataset. This new version comprises 1005 high-resolution aerial images across 19 scene categories, each annotated with a vector of 38 features. These cover objects (e.g., roads and trees), geometric patterns (e.g., lines and curves) and dominant colours (e.g., green and blue), and are defined through expert-guided annotation protocols. To demonstrate the value of the dataset, we conduct baseline experiments using deep learning models adapted for multi-label classification (ResNet18, VGG16, InceptionV3, EfficientNet and ViT-B/16), designed to capture the semantic complexity characteristic of real-world aerial scenes. The results, measured in terms of macro F1-score, range from 0.7385 for ResNet18 to 0.7608 for EfficientNet-B0. In particular, EfficientNet-B0 and ViT-B/16 are the top performers in overall macro F1-score and consistency across attributes, while all models show a consistent decline in performance on infrequent or visually ambiguous categories. This confirms that accurately predicting semantic attributes in complex scenes is feasible. By enriching a standard benchmark with detailed, image-level semantic supervision, WHU-RS19 ABZSL supports a variety of downstream applications, including multi-label classification, explainable AI, semantic retrieval, and attribute-based zero-shot learning (ZSL). It thus provides a reusable, compact resource for advancing the semantic understanding of remote sensing and multimodal AI. Full article
(This article belongs to the Special Issue Remote Sensing Datasets and 3D Visualization of Geospatial Big Data)
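The macro F1-score reported for these baselines averages per-attribute F1 without weighting by frequency, which is exactly why rare or ambiguous attributes drag the overall number down. A minimal sketch with hypothetical binary attribute vectors (two attributes, three images), not data from the dataset:

```python
def macro_f1(y_true, y_pred):
    """Macro F1 over attribute columns: compute F1 per attribute, then take
    the unweighted mean, so an infrequent attribute counts as much as a
    common one. Inputs are lists of equal-length 0/1 attribute vectors."""
    n_attrs = len(y_true[0])
    f1s = []
    for a in range(n_attrs):
        tp = sum(t[a] and p[a] for t, p in zip(y_true, y_pred))
        fp = sum((not t[a]) and p[a] for t, p in zip(y_true, y_pred))
        fn = sum(t[a] and (not p[a]) for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n_attrs

# Toy annotations: attribute 0 is predicted perfectly, attribute 1 misses once.
y_true = [[1, 0], [1, 1], [0, 1]]
y_pred = [[1, 0], [1, 0], [0, 1]]
score = macro_f1(y_true, y_pred)  # (1.0 + 2/3) / 2
```

One missed positive on the second attribute already pulls the macro average well below the per-image accuracy, mirroring the decline the abstract reports on infrequent categories.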