Search Results (29)

Search Parameters:
Keywords = adaptive disparity refinement

23 pages, 29759 KiB  
Article
UAV-Satellite Cross-View Image Matching Based on Adaptive Threshold-Guided Ring Partitioning Framework
by Yushi Liao, Juan Su, Decao Ma and Chao Niu
Remote Sens. 2025, 17(14), 2448; https://doi.org/10.3390/rs17142448 - 15 Jul 2025
Viewed by 121
Abstract
Cross-view image matching between UAV and satellite platforms is critical for geographic localization but remains challenging due to domain gaps caused by disparities in imaging sensors, viewpoints, and illumination conditions. To address these challenges, this paper proposes an Adaptive Threshold-guided Ring Partitioning Framework (ATRPF) for UAV–satellite cross-view image matching. Unlike conventional ring-based methods with fixed partitioning rules, ATRPF innovatively incorporates heatmap-guided adaptive thresholds and learnable hyperparameters to dynamically adjust ring-wise feature extraction regions, significantly enhancing cross-domain representation learning through context-aware adaptability. The framework synergizes three core components: brightness-aligned preprocessing to reduce illumination-induced domain shifts, hybrid loss functions to improve feature discriminability across domains, and keypoint-aware re-ranking to refine retrieval results by compensating for neural networks’ localization uncertainty. Comprehensive evaluations on the University-1652 benchmark demonstrate the framework’s superiority; it achieves 82.50% Recall@1 and 84.28% AP for UAV→Satellite geo-localization, along with 90.87% Recall@1 and 80.25% AP for Satellite→UAV navigation. These results validate the framework’s capability to bridge UAV–satellite domain gaps while maintaining robust matching precision under heterogeneous imaging conditions, providing a viable solution for practical applications such as UAV navigation in GNSS-denied environments. Full article
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)
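As a rough illustration of adaptive ring partitioning in general (not the published ATRPF), the sketch below pools a feature map into concentric rings centred on the peak of an activation heatmap, with ring boundaries taken from distance quantiles rather than fixed radii. The quantile thresholds, the mean pooling, and the array shapes are assumptions standing in for the paper's heatmap-guided adaptive thresholds and learnable hyperparameters.

```python
import numpy as np

def adaptive_ring_pool(features, heatmap, n_rings=4):
    """Pool a feature map into concentric rings centred on the heatmap peak.

    features: (C, H, W) array, heatmap: (H, W) activation map.
    Ring boundaries are distance quantiles, so ring extents adapt to the
    image instead of using fixed radii. Returns (n_rings, C) mean features.
    """
    C, H, W = features.shape
    cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.hypot(ys - cy, xs - cx).ravel()

    # Adaptive thresholds: distance quantiles rather than hand-picked radii.
    edges = np.quantile(dist, np.linspace(0.0, 1.0, n_rings + 1))
    ring_id = np.clip(np.searchsorted(edges, dist, side="right") - 1, 0, n_rings - 1)

    flat = features.reshape(C, -1)
    return np.stack([flat[:, ring_id == r].mean(axis=1) for r in range(n_rings)])

# Toy usage: a 64-channel feature map with a random activation heatmap.
feat = np.random.rand(64, 32, 32).astype(np.float32)
heat = np.random.rand(32, 32)
print(adaptive_ring_pool(feat, heat).shape)  # (4, 64)
```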

30 pages, 11197 KiB  
Article
Few-Shot Unsupervised Domain Adaptation Based on Refined Bi-Directional Prototypical Contrastive Learning for Cross-Scene Hyperspectral Image Classification
by Xuebin Tang, Hanyi Shi, Chunchao Li, Cheng Jiang, Xiaoxiong Zhang, Lingbin Zeng and Xiaolei Zhou
Remote Sens. 2025, 17(13), 2305; https://doi.org/10.3390/rs17132305 - 4 Jul 2025
Viewed by 378
Abstract
Hyperspectral image cross-scene classification (HSICC) faces substantial challenges due to spectral shift across scenes and the difficulty of obtaining labels. Unsupervised domain adaptation has proven effective in tackling this issue, but it has a fundamental limitation: it narrows the disparity between source and target domains by relying on fully labeled source data and unlabeled target data. In many cases, however, even source-domain labels are costly to obtain, making the extensive labeling used in prior work impractical. In this work, we investigate an extreme but realistic scenario in which unsupervised domain adaptation methods must handle HSICC tasks with only sparsely labeled source data, namely, few-shot unsupervised domain adaptation. We propose an end-to-end refined bi-directional prototypical contrastive learning (RBPCL) framework for overcoming the HSICC problem with only a few labeled samples in the source domain. RBPCL captures category-level semantic features of hyperspectral data and performs feature alignment through in-domain refined prototypical self-supervised learning and bi-directional cross-domain prototypical contrastive learning. Furthermore, our framework introduces a class-balanced multicentric dynamic prototype strategy to generate more robust and representative prototypes. To facilitate prototype contrastive learning, we employ a Siamese-style distance metric loss function to aggregate intra-class features while increasing the discrepancy between inter-class features. Finally, extensive experiments and ablation analysis conducted on two public cross-scene data pairs and three pairs of self-collected ultralow-altitude hyperspectral datasets under different illumination conditions verify the effectiveness of our method, which will further enhance the practicality of hyperspectral intelligent sensing technology. Full article
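To make the prototypical contrastive idea concrete, here is a minimal, hedged sketch: class prototypes are computed from a few labelled embeddings, and each embedding is pulled toward its own class prototype and pushed away from the others via a softmax over cosine similarities. The temperature value and the plain class-mean prototypes are assumptions; they do not reproduce the paper's class-balanced multicentric dynamic prototype strategy or its Siamese-style loss.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(feats, labels, prototypes, temperature=0.1):
    """Pull each embedding toward its class prototype, push it from the rest.

    feats: (N, D) L2-normalised embeddings, labels: (N,) class ids,
    prototypes: (K, D) L2-normalised class prototypes.
    """
    logits = feats @ prototypes.t() / temperature   # (N, K) scaled cosine similarities
    return F.cross_entropy(logits, labels)

# Toy usage: prototypes as class means of a few labelled source samples.
torch.manual_seed(0)
feats = F.normalize(torch.randn(8, 16), dim=1)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
protos = F.normalize(torch.stack(
    [feats[labels == k].mean(dim=0) for k in range(4)]), dim=1)
print(prototypical_contrastive_loss(feats, labels, protos).item())
```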

19 pages, 17180 KiB  
Article
Adaptive Support Weight-Based Stereo Matching with Iterative Disparity Refinement
by Alexander Richter, Till Steinmann, Andreas Reichenbach and Stefan J. Rupitsch
Sensors 2025, 25(13), 4124; https://doi.org/10.3390/s25134124 - 2 Jul 2025
Viewed by 329
Abstract
Real-time 3D reconstruction in minimally invasive surgery improves depth perception and supports intraoperative decision-making and navigation. However, endoscopic imaging presents significant challenges, such as specular reflections, low-texture surfaces, and tissue deformation. We present a novel, deterministic and iterative stereo-matching method based on adaptive support weights that is tailored to these constraints. The algorithm is implemented in CUDA and C++ to enable real-time performance. We evaluated our method on the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) dataset and a custom synthetic dataset using the mean absolute error (MAE), root mean square error (RMSE), and frame rate as metrics. On SCARED datasets 8 and 9, our method achieves MAEs of 3.79 mm and 3.61 mm, achieving 24.9 FPS on a system with an AMD Ryzen 9 5950X and NVIDIA RTX 3090. To the best of our knowledge, these results are on par with or surpass existing deterministic stereo-matching approaches. On synthetic data, which eliminates real-world imaging errors, the method achieves an MAE of 140.06 μm and an RMSE of 251.9 μm, highlighting its performance ceiling under noise-free, idealized conditions. Our method focuses on single-shot 3D reconstruction as a basis for stereo frame stitching and full-scene modeling. It provides accurate, deterministic, real-time depth estimation under clinically relevant conditions and has the potential to be integrated into surgical navigation, robotic assistance, and augmented reality workflows. Full article
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)
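For readers unfamiliar with adaptive support weights, the sketch below shows the classic formulation (in the spirit of Yoon and Kweon): per-window weights combine intensity similarity and spatial proximity, and the weighted absolute-difference cost is minimised over disparities. This is a slow, plain-NumPy illustration of the underlying aggregation idea only; the window size, gamma parameters, and SAD cost are assumptions, and it is not the authors' iterative CUDA/C++ implementation.

```python
import numpy as np

def asw_disparity(left, right, max_disp=32, win=5, gamma_c=0.1, gamma_s=7.0):
    """Brute-force adaptive-support-weight stereo matching on grayscale images.

    Window weights combine intensity similarity and spatial proximity; the
    weighted absolute-difference cost is minimised over disparities
    (winner takes all). left/right: rectified (H, W) arrays with values in [0, 1].
    """
    H, W = left.shape
    r = win // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    w_spatial = np.exp(-np.hypot(ys, xs) / gamma_s)          # proximity term

    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(r, H - r):
        for x in range(r, W - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            w_ref = w_spatial * np.exp(-np.abs(ref - left[y, x]) / gamma_c)
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                tgt = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                w_tgt = w_spatial * np.exp(-np.abs(tgt - right[y, x - d]) / gamma_c)
                w = w_ref * w_tgt
                cost = np.sum(w * np.abs(ref - tgt)) / np.sum(w)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Smoke test: a right image that is the left image shifted by 3 pixels.
L = np.random.rand(40, 60)
R = np.roll(L, -3, axis=1)
print(asw_disparity(L, R, max_disp=8)[20, 30])  # expect 3
```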

22 pages, 2884 KiB  
Review
Research on Medical Image Segmentation Based on SAM and Its Future Prospects
by Kangxu Fan, Liang Liang, Hao Li, Weijun Situ, Wei Zhao and Ge Li
Bioengineering 2025, 12(6), 608; https://doi.org/10.3390/bioengineering12060608 - 3 Jun 2025
Viewed by 1186
Abstract
The rapid advancement of prompt-based models in natural language processing and image generation has revolutionized the field of image segmentation. The introduction of the Segment Anything Model (SAM) has further invigorated this domain with its unprecedented versatility. However, its applicability to medical image segmentation remains uncertain due to significant disparities between natural and medical images, which demand careful consideration. This study comprehensively analyzes recent efforts to adapt SAM for medical image segmentation, including empirical benchmarking and methodological refinements aimed at bridging the gap between SAM’s capabilities and the unique challenges of medical imaging. Furthermore, we explore future directions for SAM in this field. While direct application of SAM to complex, multimodal, and multi-target medical datasets may not yet yield optimal results, insights from these efforts provide crucial guidance for developing foundational models tailored to the intricacies of medical image analysis. Despite existing challenges, SAM holds considerable potential to demonstrate its unique advantages and robust capabilities in medical image segmentation in the near future. Full article
(This article belongs to the Special Issue Advances in Medical 3D Vision: Voxels and Beyond)

20 pages, 7105 KiB  
Article
Small-Target Detection Algorithm Based on STDA-YOLOv8
by Cun Li, Shuhai Jiang and Xunan Cao
Sensors 2025, 25(9), 2861; https://doi.org/10.3390/s25092861 - 30 Apr 2025
Viewed by 491
Abstract
Due to the inherent limitations of detection networks and the imbalance in training data, small-target detection has always been a challenging issue in the field of target detection. To address the issues of false positives and missed detections in small-target detection scenarios, a new algorithm based on STDA-YOLOv8 is proposed for small-target detection. A novel network architecture for small-target detection is designed, incorporating a Contextual Augmentation Module (CAM) and a Feature Refinement Module (FRM) to enhance the detection performance for small targets. The CAM introduces multi-scale dilated convolutions, where convolutional kernels with different dilation rates capture contextual information from various receptive fields, enabling more accurate extraction of small-target features. The FRM performs adaptive feature fusion in both channel and spatial dimensions, significantly improving the detection precision for small targets. Addressing the issue of a significant disparity in the number of annotations between small and larger objects in existing classic public datasets, a new data augmentation method called Copy–Reduce–Paste is introduced. Ablation and comparative experiments conducted on the proposed STDA-YOLOv8 model demonstrate that on the VisDrone dataset, its accuracy improved by 5.3% compared to YOLOv8, reaching 93.5%; on the PASCAL VOC dataset, its accuracy increased by 5.7% compared to YOLOv8, achieving 94.2%, outperforming current mainstream target detection models and small-target detection algorithms like QueryDet, effectively enhancing small-target detection capabilities. Full article
(This article belongs to the Section Sensor Networks)
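The Contextual Augmentation Module described above builds on parallel convolutions with different dilation rates; the sketch below shows that general pattern in PyTorch, with parallel dilated 3x3 branches fused by a 1x1 convolution. The channel counts, dilation rates, and fusion choice are assumptions for illustration, not the STDA-YOLOv8 design.

```python
import torch
import torch.nn as nn

class MultiDilationContext(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates.

    Each branch sees a different receptive field; concatenating the branches
    and fusing them with a 1x1 convolution injects multi-scale context,
    which is the general idea behind context-augmentation blocks.
    """
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: augment a 128-channel feature map from a detector neck.
feats = torch.randn(1, 128, 40, 40)
print(MultiDilationContext(128)(feats).shape)  # torch.Size([1, 128, 40, 40])
```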

20 pages, 528 KiB  
Article
Advancing Photovoltaic Transition: Exploring Policy Frameworks for Renewable Energy Communities
by Francesca Giuliano and Andrea Pronti
Solar 2025, 5(1), 10; https://doi.org/10.3390/solar5010010 - 14 Mar 2025
Viewed by 837
Abstract
In the decarbonization process, the solar energy sector will play a crucial role, representing one of the key technologies for reducing greenhouse gas emissions. In Italy, photovoltaics stands out as the fastest-growing energy sector, thanks to the combination of favorable climatic conditions, supportive policies, and a growing interest in renewable energy sources. In this context, renewable energy communities (RECs) emerge as potential strategic tools for promoting the development of photovoltaics nationally and at the European level. Therefore, this study aims to examine the policy and regulatory frameworks governing RECs in Europe and Italy, highlighting their impact on the establishment, operation, and evolution of these communities. Through a critical analysis of legislative documents at both the European and national levels, this research identifies the key factors shaping the growth and functionality of RECs, such as governance structures, economic incentives, and social inclusivity. This study underscores the dual influence of comprehensive regulation and a certain degree of flexibility in fostering RECs’ adaptability to diverse contexts. Additionally, it identifies existing challenges, including regional implementation disparities, legal ambiguities, and potential conflicts with other renewable energy policies. The findings contribute to the ongoing discourse on decentralized energy systems, providing insights for policymakers to refine frameworks and maximize RECs’ contributions to sustainable energy transitions. Full article

101 pages, 7201 KiB  
Systematic Review
Challenging Cognitive Load Theory: The Role of Educational Neuroscience and Artificial Intelligence in Redefining Learning Efficacy
by Evgenia Gkintoni, Hera Antonopoulou, Andrew Sortwell and Constantinos Halkiopoulos
Brain Sci. 2025, 15(2), 203; https://doi.org/10.3390/brainsci15020203 - 15 Feb 2025
Cited by 16 | Viewed by 11617
Abstract
Background/Objectives: This systematic review integrates Cognitive Load Theory (CLT), Educational Neuroscience (EdNeuro), Artificial Intelligence (AI), and Machine Learning (ML) to examine their combined impact on optimizing learning environments. It explores how AI-driven adaptive learning systems, informed by neurophysiological insights, enhance personalized education for K-12 students and adult learners. This study emphasizes the role of Electroencephalography (EEG), Functional Near-Infrared Spectroscopy (fNIRS), and other neurophysiological tools in assessing cognitive states and guiding AI-powered interventions to refine instructional strategies dynamically. Methods: This study reviews n = 103 papers on the integration of CLT principles with AI and ML in educational settings. It evaluates the progress made in neuroadaptive learning technologies, especially the real-time management of cognitive load, personalized feedback systems, and the multimodal applications of AI. In addition, this research examines key hurdles such as data privacy, ethical concerns, algorithmic bias, and scalability issues while pinpointing best practices for robust and effective implementation. Results: The results show that AI and ML significantly improve learning efficacy by automatically managing cognitive load, providing personalized instruction, and dynamically adapting learning pathways based on real-time neurophysiological data. Deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), alongside Support Vector Machines (SVMs), improve classification accuracy, making AI-powered adaptive learning systems more efficient and scalable. Multimodal approaches that combine EEG with fMRI, Electrocardiography (ECG), and Galvanic Skin Response (GSR) enhance system robustness by mitigating signal variability and noise-related limitations. Despite these advances, practical implementation challenges remain, including ethical considerations, data security risks, and accessibility disparities across learner demographics. Conclusions: AI and ML have substantial potential to redefine learning efficacy, provided that solid ethical frameworks, inclusive design, and scalable methodologies guide their use. Future studies will be needed to refine pre-processing techniques, expand the variety of datasets, and advance multimodal neuroadaptive learning toward high-accuracy, affordable, and ethically responsible AI-driven educational systems. AI-enhanced education should become inclusive, equitable, and effective across diverse learning populations, surmounting technological limitations and ethical dilemmas. Full article

26 pages, 858 KiB  
Review
Artificial Intelligence: An Untapped Opportunity for Equity and Access in STEM Education
by Shalece Kohnke and Tiffanie Zaugg
Educ. Sci. 2025, 15(1), 68; https://doi.org/10.3390/educsci15010068 - 11 Jan 2025
Cited by 4 | Viewed by 7708
Abstract
Artificial intelligence (AI) holds tremendous potential for promoting equity and access to science, technology, engineering, and mathematics (STEM) education, particularly for students with disabilities. This conceptual review explores how AI can address the barriers faced by this underrepresented group by enhancing accessibility and supporting STEM practices like critical thinking, inquiry, and problem solving, as evidenced by tools like adaptive learning platforms and intelligent tutors. Results show that AI can positively influence student engagement, achievement, and motivation in STEM subjects. By aligning AI tools with Universal Design for Learning (UDL) principles, this paper highlights how AI can personalize learning, improve accessibility, and close achievement gaps in STEM content areas. Furthermore, the natural intersection of STEM principles and standards with the AI4K12 guidelines justifies the logical need for AI–STEM integration. Ethical concerns, such as algorithmic bias (e.g., unequal representation in training datasets leading to unfair assessments) and data privacy risks (e.g., potential breaches of sensitive student data), require critical attention to ensure AI systems promote equity rather than exacerbate disparities. The findings suggest that while AI presents a promising avenue for creating inclusive STEM environments, further research conducted with intentionality is needed to refine AI tools and ensure they meet the diverse needs of students with disabilities to access STEM. Full article
(This article belongs to the Special Issue Application of AI Technologies in STEM Education)

18 pages, 43610 KiB  
Article
Reliable and Effective Stereo Matching for Underwater Scenes
by Lvwei Zhu, Ying Gao, Jiankai Zhang, Yongqing Li and Xueying Li
Remote Sens. 2024, 16(23), 4570; https://doi.org/10.3390/rs16234570 - 5 Dec 2024
Viewed by 1355
Abstract
Stereo matching plays a vital role in underwater environments, where accurate depth estimation is crucial for applications such as robotics and marine exploration. However, underwater imaging presents significant challenges, including noise, blurriness, and optical distortions that hinder effective stereo matching. This study develops two specialized stereo matching networks: UWNet and its lightweight counterpart, Fast-UWNet. UWNet utilizes self- and cross-attention mechanisms alongside an adaptive 1D-2D cross-search to enhance cost volume representation and refine disparity estimation through a cascaded update module, effectively addressing underwater imaging challenges. Due to the need for timely responses in underwater operations by robots and other devices, real-time processing speed is critical for task completion. Fast-UWNet addresses this challenge by prioritizing efficiency, eliminating the reliance on the time-consuming recurrent updates commonly used in traditional methods. Instead, it directly converts the cost volume into a set of disparity candidates and their associated confidence scores. Adaptive interpolation, guided by content and confidence information, refines the cost volume to produce the final accurate disparity. This streamlined approach achieves an impressive inference speed of 0.02 s per image. Comprehensive tests conducted in diverse underwater settings demonstrate the effectiveness of both networks, showcasing their ability to achieve reliable depth perception. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data for Oceanography)
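The Fast-UWNet idea of reading disparity candidates and confidences directly off the cost volume, instead of running recurrent updates, can be sketched roughly as follows. The softmax over negated costs, the top-k selection, and the confidence-weighted blend are illustrative assumptions, not the published network.

```python
import numpy as np

def disparity_candidates(cost_volume, k=3):
    """Convert a cost volume into top-k disparity candidates with confidences.

    cost_volume: (D, H, W) matching costs (lower = better match).
    Softmax over the negated costs gives a per-pixel distribution over
    disparities; the k most probable disparities and their probabilities
    serve as candidates and confidence scores.
    """
    logits = -cost_volume
    prob = np.exp(logits - logits.max(axis=0, keepdims=True))
    prob /= prob.sum(axis=0, keepdims=True)                  # (D, H, W)

    top = np.argsort(prob, axis=0)[::-1][:k]                 # (k, H, W) candidate disparities
    conf = np.take_along_axis(prob, top, axis=0)             # (k, H, W) confidences

    # Confidence-weighted blend of the candidates as a final disparity guess.
    fused = (top * conf).sum(axis=0) / conf.sum(axis=0)
    return top, conf, fused

cv = np.random.rand(64, 48, 64)          # toy 64-level cost volume
cands, confs, disp = disparity_candidates(cv)
print(cands.shape, confs.shape, disp.shape)
```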

19 pages, 627 KiB  
Review
Practices, Challenges, and Future of Digital Transformation in Smallholder Agriculture: Insights from a Literature Review
by Yuyang Yuan and Yong Sun
Agriculture 2024, 14(12), 2193; https://doi.org/10.3390/agriculture14122193 - 30 Nov 2024
Cited by 6 | Viewed by 8112
Abstract
Smallholder farmers play a crucial role in global agricultural development. The digital transformation of smallholder agriculture can enhance productivity, increase farmers’ income, ensure food security, and promote sustainable rural development. However, existing studies often fail to analyze the holistic nature of this transformation and lack a systematic review of the relevant literature. Therefore, this study aims to provide a comprehensive presentation of the current studies on the digital transformation of smallholder agriculture through logical synthesis and reflective summarization, thereby offering valuable academic insights and practical guidance for the digital transformation of smallholder farming. This study constructs an analytical framework centered on “government–technology–smallholders” using a literature review methodology, systematically examining the main practices, challenges, and future strategies for the digital transformation of smallholder agriculture. Our review reveals that current practices primarily focus on digital agricultural production, rural e-commerce, and agricultural information exchange. We identify key challenges at the government, technical, and smallholder levels, including inadequate digital agriculture policies, limited availability of digital applications, difficulties in adapting uniform technologies to the diverse contexts of smallholders, insufficient resources and endowment among smallholder farmers, significant group disparities, and constraints imposed by social and cultural factors. To enhance the digital transformation of smallholder agriculture, it is essential to improve the supply of policy resources, increase attention to and responsiveness toward smallholder needs, and refine digital governance policies. Additionally, we must develop user-friendly digital applications that cater to the varied digital needs of farmers, reduce access costs, enhance digital literacy, foster an inclusive environment for digital agricultural development, and respect and integrate the social and cultural contexts of smallholder communities. This study deepens the understanding of digital transformation in smallholder agriculture and provides theoretical insights and practical guidance for policymakers, technology developers, and smallholder communities. It contributes to sustainable agricultural development and supports rural revitalization and shared prosperity. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)

16 pages, 4359 KiB  
Article
Adaptive Kernel Convolutional Stereo Matching Recurrent Network
by Jiamian Wang, Haijiang Sun and Ping Jia
Sensors 2024, 24(22), 7386; https://doi.org/10.3390/s24227386 - 20 Nov 2024
Viewed by 1043
Abstract
For binocular stereo matching, the current state-of-the-art methods use an iterative structure based on GRUs. Methods in this class have shown high performance on both high-resolution images and standard benchmarks. However, simply replacing cost aggregation with GRU iterations leaves the cost volume used for disparity calculation lacking non-local geometric and contextual information. To address this, this paper proposes a new GRU-iteration-based adaptive kernel convolution deep recurrent network architecture for stereo matching. It introduces a kernel convolution-based adaptive multi-scale pyramid pooling (KAP) module that fully considers the spatial correlation between pixels, and adds a new matching attention (MAR) mechanism to refine the matching cost volume before it enters the iterative network, enhancing the pixel-level representation ability of the image and improving the overall generalization ability of the network. The proposed AKC-Stereo network improves on its base network: on the SceneFlow dataset, the EPE of AKC-Stereo reaches 0.45, an improvement of 0.02 over the base network, and on the KITTI 2015 dataset, AKC-Stereo outperforms the base network by 5.6% on the D1-all metric. Full article
(This article belongs to the Section Sensor Networks)
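The GRU-based iterative structure that this and several other results in this listing rely on can be illustrated with a generic convolutional GRU update step: the hidden state is refreshed from cost/context features, and a small head predicts a residual disparity correction each iteration. Channel sizes and the update head are assumptions; this is a generic sketch, not the AKC-Stereo network.

```python
import torch
import torch.nn as nn

class ConvGRUDisparityUpdater(nn.Module):
    """One convolutional GRU step that refines a running disparity map."""
    def __init__(self, hidden=64, inp=64):
        super().__init__()
        self.convz = nn.Conv2d(hidden + inp, hidden, 3, padding=1)
        self.convr = nn.Conv2d(hidden + inp, hidden, 3, padding=1)
        self.convq = nn.Conv2d(hidden + inp, hidden, 3, padding=1)
        self.head = nn.Conv2d(hidden, 1, 3, padding=1)   # predicts a disparity delta

    def forward(self, h, x, disp):
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.convz(hx))                # update gate
        r = torch.sigmoid(self.convr(hx))                # reset gate
        q = torch.tanh(self.convq(torch.cat([r * h, x], dim=1)))
        h = (1 - z) * h + z * q
        return h, disp + self.head(h)                    # residual disparity update

# Iterative refinement loop with assumed shapes.
upd = ConvGRUDisparityUpdater()
h = torch.zeros(1, 64, 32, 64)
x = torch.randn(1, 64, 32, 64)                           # cost/context features
disp = torch.zeros(1, 1, 32, 64)
for _ in range(4):
    h, disp = upd(h, x, disp)
print(disp.shape)
```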

31 pages, 6352 KiB  
Article
Design Thinking Approach to Create Impact Assessment Tool: Cities2030 Case Study
by Elina Mikelsone and Iveta Cīrule
Sustainability 2024, 16(21), 9593; https://doi.org/10.3390/su16219593 - 4 Nov 2024
Viewed by 1910
Abstract
This paper presents the development and testing of an impact assessment tool for the Cities2030 project aimed at transforming city-region food systems to align with the European Union’s Food2030 policy and the European Green Deal. This study highlights the importance of sustainable urban food systems, focusing on food security, environmental sustainability, and public health. Using a design thinking approach, this research emphasizes co-creation, stakeholder engagement, and iterative refinement, developing a flexible, multi-dimensional framework adaptable to diverse city-region contexts. Through collaboration with 65 stakeholders, this tool was tailored to meet the socio-economic and environmental needs of different regions. Case studies from Cities2030 partner cities demonstrate its effectiveness in fostering cross-sectoral collaboration, enhancing community participation, and driving food system innovations. Key findings reveal measurable impacts across social, environmental, and economic dimensions, while addressing challenges like regional disparities in data collection and the need for improved long-term tracking of sustainability metrics. This study concludes by underscoring the role of adaptive, inclusive strategies in assessing urban food systems’ sustainability and resilience and suggests that the tool’s framework could be applied to other urban sustainability areas, such as energy and water management. Full article

16 pages, 6078 KiB  
Article
Matchability and Uncertainty-Aware Iterative Disparity Refinement for Stereo Matching
by Junwei Wang, Wei Zhou, Yujun Tang and Hanming Guo
Appl. Sci. 2024, 14(18), 8457; https://doi.org/10.3390/app14188457 - 19 Sep 2024
Viewed by 1221
Abstract
After significant progress in stereo matching, the pursuit of robust and efficient ill-posed-region disparity refinement methods remains challenging. To further improve the performance of disparity refinement, in this paper, we propose the matchability and uncertainty-aware iterative disparity refinement neural network. Firstly, a new matchability and uncertainty decoder (MUD) is proposed to decode the matchability mask and disparity uncertainties, which are used to evaluate the reliability of feature matching and estimated disparity, thereby reducing the susceptibility to mismatched pixels. Then, based on the proposed MUD, we present two modules: the uncertainty-preferred disparity field initialization (UFI) and the masked hidden state global aggregation (MGA) modules. In the UFI, a multi-disparity window scan-and-select method is employed to provide a further initialized disparity field and more accurate initial disparity. In the MGA, the adaptive masked disparity field hidden state is globally aggregated to extend the propagation range per iteration, improving the refinement efficiency. Finally, the experimental results on public datasets show that the proposed model achieves a reduction up to 17.9% in disparity average error and 16.9% in occluded outlier proportion, respectively, demonstrating its more practical handling of ill-posed regions. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
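A common way to obtain both a disparity estimate and an uncertainty measure from a matching-cost distribution, which is the kind of signal the matchability and uncertainty decoder above is designed to expose, is a soft-argmax for the disparity and the entropy of the distribution as the uncertainty. The sketch below shows that construction; the entropy-based uncertainty is an assumption offered for illustration, not the paper's MUD decoder.

```python
import numpy as np

def soft_disparity_and_uncertainty(cost_volume):
    """Soft-argmax disparity plus an entropy-based uncertainty map.

    cost_volume: (D, H, W) matching costs (lower = better).
    Returns (disparity, uncertainty), both (H, W). High entropy means the
    cost distribution is flat, i.e. the match is ambiguous (ill-posed region).
    """
    D = cost_volume.shape[0]
    logits = -cost_volume
    p = np.exp(logits - logits.max(axis=0, keepdims=True))
    p /= p.sum(axis=0, keepdims=True)

    disparities = np.arange(D).reshape(D, 1, 1)
    disp = (p * disparities).sum(axis=0)                        # expected disparity
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(D)  # normalised to [0, 1]
    return disp, entropy

cv = np.random.rand(32, 24, 40)          # toy 32-level cost volume
d, u = soft_disparity_and_uncertainty(cv)
print(d.shape, u.shape)
```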

21 pages, 3307 KiB  
Review
Multi-Criteria Decision Analysis for Sustainable Oil and Gas Infrastructure Decommissioning: A Systematic Review of Criteria Involved in the Process
by Xin Wei and Jin Zhou
Sustainability 2024, 16(16), 7205; https://doi.org/10.3390/su16167205 - 22 Aug 2024
Cited by 1 | Viewed by 1951
Abstract
The decommissioning of oil and gas (O&G, hereafter) facilities presents complex challenges when addressing the diverse needs of stakeholders. By synthesizing information from previous Multi-Criteria Decision Analysis (MCDA, hereafter) studies on decommissioning projects, this study aims to do the following: (a) formulate a structured set of criteria adaptable to MCDA for both offshore and onshore O&G decommissioning, (b) identify and analyze the evolving trends and regional disparities in MCDA for decommissioning, and (c) explore current O&G onshore decommissioning procedures and map specific criteria to these processes. Following a systematic literature review approach, this study analyzed 63 references across four stages from 2006 to 2024 and identified 158 criteria. These criteria were consolidated into a framework of 22 factors across dimensions comprising technical, environmental, societal, financial, health and safety considerations, and additional concerns from stakeholders. This study observed a significant focus shift from technical aspects to environmental considerations in decommissioning practices from 2011 onwards, reflecting growing awareness of sustainability. It also revealed regional differences, such as the technical emphasis in the North Sea and environmental concerns in Australia. Furthermore, this study refined O&G onshore decommissioning procedures and identified criteria gaps for further research, particularly in societal impact regarding public resource availability, recreational opportunities, and operating company reputation. The study provides a robust foundation for the development of future MCDA frameworks tailored to O&G infrastructure decommissioning projects, thus supporting long-term environmental and social sustainability. Full article
(This article belongs to the Section Sustainable Engineering and Science)
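To make the multi-criteria scoring concrete, a minimal weighted-sum MCDA sketch is shown below, comparing two hypothetical decommissioning options against the dimensions the review identifies. The criteria weights and option scores are invented for illustration and do not come from the reviewed studies.

```python
# Minimal weighted-sum MCDA sketch; criteria, weights and scores are illustrative only.
criteria_weights = {"technical": 0.25, "environmental": 0.30,
                    "societal": 0.15, "financial": 0.20, "health_safety": 0.10}

# Scores on a common 0-10 scale for two hypothetical decommissioning options.
options = {
    "full_removal":   {"technical": 6, "environmental": 8, "societal": 7,
                       "financial": 4, "health_safety": 6},
    "leave_in_place": {"technical": 8, "environmental": 5, "societal": 5,
                       "financial": 9, "health_safety": 7},
}

for name, scores in options.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: weighted score = {total:.2f}")
```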

16 pages, 722 KiB  
Article
Dialogues with AI: Comparing ChatGPT, Bard, and Human Participants’ Responses in In-Depth Interviews on Adolescent Health Care
by Jelle Fostier, Elena Leemans, Lien Meeussen, Alix Wulleman, Shauni Van Doren, David De Coninck and Jaan Toelen
Future 2024, 2(1), 30-45; https://doi.org/10.3390/future2010003 - 11 Mar 2024
Cited by 3 | Viewed by 3113
Abstract
This study explores the feasibility of large language models (LLMs) like ChatGPT and Bard as virtual participants in health-related research interviews. The goal is to assess whether these models can function as a “collective knowledge platform” by processing extensive datasets. Framed as a “proof of concept”, the research involved 20 interviews with both ChatGPT and Bard, portraying personas based on parents of adolescents. The interviews focused on physician–patient–parent confidentiality issues across fictional cases covering alcohol intoxication, STDs, ultrasound without parental knowledge, and mental health. Conducted in Dutch, the interviews underwent independent coding and comparison with human responses. The analysis identified four primary themes—privacy, trust, responsibility, and etiology—from both AI models and human-based interviews. While the main concepts aligned, nuanced differences in emphasis and interpretation were observed. Bard exhibited less interpersonal variation compared to ChatGPT and human respondents. Notably, AI personas prioritized privacy and age more than human parents. Recognizing disparities between AI and human interviews, researchers must adapt methodologies and refine AI models for improved accuracy and consistency. This research initiates discussions on the evolving role of generative AI in research, opening avenues for further exploration. Full article
