Article

Robust Occluded Object Detection in Multimodal Autonomous Driving: A Fusion-Aware Learning Framework

1 School of New Energy and Intelligent Networked Automobile, University of Sanya, Sanya 572022, China
2 Faculty of Mechanical Engineering, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
3 New Energy and Intelligent Vehicle Engineering Research Center of Hainan Province, Sanya 572022, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(1), 245; https://doi.org/10.3390/electronics15010245
Submission received: 5 November 2025 / Revised: 26 December 2025 / Accepted: 31 December 2025 / Published: 5 January 2026

Abstract

Reliable occluded object detection remains a persistent core challenge for autonomous driving perception systems, particularly in complex urban scenarios where targets are predominantly partially or fully obscured by static obstacles or dynamic agents. Conventional single-modality detectors often fail to capture adequate discriminative cues for robust recognition, while existing multimodal fusion strategies typically lack explicit occlusion modeling and effective feature completion mechanisms, ultimately degrading performance in safety-critical operating conditions. To address these limitations, we propose a novel Fusion-Aware Occlusion Detection (FAOD) framework that integrates explicit visibility reasoning with implicit cross-modal feature reconstruction. Specifically, FAOD leverages synchronized red–green–blue (RGB), light detection and ranging (LiDAR), and optional radar/infrared inputs, employs a visibility-aware attention mechanism to infer target occlusion states, and embeds a cross-modality completion module to reconstruct missing object features via complementary non-occluded modal information; it further incorporates an occlusion-aware data augmentation and annotation strategy to enhance model generalization across diverse occlusion patterns. Extensive evaluations on four benchmark datasets demonstrate that FAOD achieves state-of-the-art performance, including a +8.75% occlusion-level mean average precision (OL-mAP) improvement over existing methods on heavily occluded objects ($O = 2$) on the nuScenes dataset, while maintaining real-time efficiency. These findings confirm FAOD’s potential to advance reliable multimodal perception for next-generation autonomous driving systems in safety-critical environments.

1. Introduction

Occlusion has long been considered one of the most critical challenges in object detection for autonomous driving, especially in dense and dynamic urban environments [1]. Target objects may be partially obscured by static obstacles such as vehicles, poles, or roadside infrastructure; completely invisible due to severe full occlusion; or involved in complex interactive occlusions where multiple dynamic agents overlap. Such scenarios substantially deteriorate perception performance by distorting object appearances, blurring spatial boundaries, and introducing high levels of uncertainty in both localization and classification.
Recent advances in multimodal perception have opened promising avenues to mitigate these issues. By integrating complementary sensory modalities—RGB cameras, LiDAR sensors, and radar/infrared (IR) systems—perception systems can leverage distinct advantages: RGB imagery provides fine-grained texture and semantic cues; LiDAR offers precise 3D geometric measurements; and radar/IR sensors ensure resilience under adverse weather or low-illumination conditions [2]. This synergy allows for the construction of more comprehensive and robust object representations, particularly in scenarios characterized by severe occlusion or degraded visibility.

1.1. Motivation and Problem

Despite these benefits, existing multimodal fusion methods remain insufficient for reliable occlusion handling. First, while middle-fusion transformers such as BEVFormer can effectively align multimodal features in a shared bird’s-eye-view (BEV) space, they lack a dedicated mechanism to explicitly identify occluded regions and orchestrate targeted, directional information flow from unoccluded modalities for recovering missing object cues [3]. Second, feature alignment and cross-modal completion strategies are often inadequate, leading to fragmented or inconsistent representations across modalities, and recent BEV-space fusion frameworks such as BEVFusion primarily optimize geometric consistency and efficiency rather than explicit occlusion reasoning [4]. Third, most state-of-the-art fusion pipelines suffer from high computational complexity and limited scalability, hindering their deployment in real-time, safety-critical autonomous driving systems [1]. These limitations underscore the need for new frameworks that can explicitly reason about occlusion while efficiently exploiting multimodal complementarities.
We address these gaps by coupling (i) explicit visibility estimation, (ii) geometry-aware cross-modal attentive completion, and (iii) occlusion-adaptive fusion and calibration within a single trainable objective, designed to preserve efficiency for deployment.

1.2. Approach Overview

At a high level, FAOD introduces a visibility-guided multimodal detector. It first estimates multi-granular visibility cues, then uses geometry-aware cross-modal attention to complete features for occluded regions, and finally performs occlusion-adaptive fusion and calibrated post-processing. All components are trained end-to-end under a unified objective; the architectural details are provided in Section 6.

1.3. Contributions

In this work, we propose a novel framework termed Fusion-Aware Occlusion Detection (FAOD), which tightly integrates explicit occlusion modeling with implicit cross-modal feature reconstruction. The main contributions of this study are summarized as follows:
  • Explicit visibility reasoning for occlusion-aware BEV detection: We propose FAOD as a unified multimodal detection framework that explicitly models occlusion/visibility as learnable variables, including an instance-level occlusion classification and a region-level visibility map. These signals are supervised by occlusion-aware objectives and geometric consistency constraints, and are further used to guide downstream feature completion, fusion, and confidence scoring, rather than relying on implicit BEV aggregation.
  • Visibility-guided directed cross-modal attention (CMA) for alignment and feature completion: We design a geometry-aware CMA module that performs asymmetric, visibility-driven information transfer (donor → recipient): when a target modality is heavily occluded, complementary less-occluded modalities are selectively attended to reconstruct missing BEV features and align cross-modal representations. This goes beyond symmetric BEV fusion, enabling targeted restoration of occluded object regions.
  • Occlusion-aware dynamic fusion and score calibration at inference: FAOD couples visibility estimation with adaptive modality weighting and occlusion-aware post-processing. Fusion weights are adjusted conditioned on occlusion severity and modality reliability, while occlusion-aware Soft-NMS and confidence calibration mitigate false suppression of heavily occluded objects, improving detection stability under partial and complete occlusions.
  • Occlusion-oriented augmentation/labeling and comprehensive benchmarking with deployment considerations: To evaluate FAOD under controlled occlusion levels, we develop an occlusion-centric augmentation and labeling pipeline that explicitly accounts for different visibility regimes. Extensive experiments on four representative datasets (nuScenes, KITTI-MOD, DENSE, and JRDB) show consistent gains over strong baselines, and we additionally adopt streamlined fusion components to maintain practical efficiency for real-time safety-critical deployment.

2. Related Work

2.1. Single-Modality Occluded Detection

Under occlusion, purely image-based detectors rely on spatial context, part/instance completion, and occlusion-aware heads to stabilize recall in crowded scenes. CityPersons highlighted occlusion as a first-class challenge for pedestrian detection and catalyzed research on modeling visible parts and suppressing inter-instance interference [5]. Representative lines include explicit occlusion reasoning within detectors (e.g., Occlusion-Aware Region-based Convolutional Neural Network (R-CNN)) and repulsive/overlap-aware losses that discourage proposals from drifting toward neighboring instances in crowds [6,7]. Several single-modality image detectors have revisited occlusion from the perspective of feature design and loss shaping. For road-scene RGB detection, RE-YOLOv5 enhances the receptive field and introduces deformable feature extraction to better localize heavily occluded vehicles and pedestrians in dense traffic [8], while monocular 3D pipelines with thermodynamic-inspired object losses have reported improved accuracy on distant and partially occluded targets [9].
Meanwhile, LiDAR-based detectors benefit from accurate 3D geometry and can be comparatively robust to appearance changes, yet suffer at long range and under self-occlusion where returns become sparse. Milestones range from voxelized or pillarized backbones (SECOND, PointPillars) to modern center-based heads (CenterPoint) and hybrid point–voxel designs (PV-RCNN) [10,11,12,13]. Occlusion-aware formulations further leverage the notion of “measurable” geometry; for instance, WYSIWYG constrains learning to what is physically observable, decoupling prediction from unmeasurable, fully occluded regions [14]. Sparsity-aware backbones such as SPBA-Net emphasize robust aggregation under extremely sparse returns, and depth-guided monocular detectors like MonoDFNet refine regression for partially visible objects [15,16]. Despite these advances on both RGB and LiDAR fronts, these approaches remain fundamentally single-modality and cannot exploit complementary sensing when one modality is severely degraded, which motivates the multimodal, occlusion-aware design adopted in this work.

2.2. Multimodal Fusion Strategies

To overcome the limitations of single sensors, numerous multimodal fusion strategies have been proposed. Early fusion methods operate at the data level. PointPainting injects per-pixel semantics into points before 3D detection, yielding a simple pipeline but making the system sensitive to semantic noise and camera–LiDAR projection errors [17].
Among feature-level or middle-fusion methods, MVX-Net couples image and LiDAR streams at the feature level; 3D-CVF performs cross-view projection/gating to align modalities in 3D/BEV; and more recent UVTR/UniTR families unify features in a shared BEV/voxel space and exchange information via attention, improving geometric consistency and efficiency [18,19,20,21,22]. Recent LiDAR–camera fusion architectures further refine BEV-space multimodal detection. Zhao et al. propose SimpleBEV, an improved LiDAR–camera fusion backbone that jointly reasons over LiDAR BEV and image features, achieving gains on KITTI and nuScenes compared with earlier middle-fusion designs [23]. MSAFusion introduces a multi-sensor adaptive BEV fusion scheme that adjusts modality contributions according to scene context and improves detection robustness in dense traffic [24]. SpaRC integrates sparse 4D imaging radar with cameras via spatially aligned fusion and attention, showing clear benefits under adverse weather and low-visibility conditions where radar complements RGB [25]. Samfusion employs a sensor-adaptive multimodal fusion mechanism that explicitly downweights degraded modalities under rain, fog, and low light [26].
Among decision-level or late-fusion strategies, candidate-level fusion (e.g., CLOCs) combines per-modal proposals/scores with weaker sensitivity to alignment noise but provides limited capacity to recover fine-grained details in heavy occlusion [27]. TransFusion and BEVFormer aggregate multi-view/multimodal evidence with queries anchored in the object or BEV space, capturing long-range dependencies while preserving cross-view geometry, and have become a dominant paradigm for robust fusion [3,4,28]. However, these fusion pipelines still lack a dedicated, learnable visibility variable that explicitly drives cross-modal completion and occlusion-aware fusion, which is the focus of FAOD.

2.3. Occlusion Modeling

Explicit supervision of occlusion masks/levels or depth ordering improves reliability under partial and full occlusion by exposing the detector to visibility as a separate variable that gates features and calibrates scores [6,14,29].
In monocular 3D detection, modeling geometric/pose uncertainty in projection helps recover partially occluded instances (e.g., GUPNet), while cross-view attention in BEVFormer-style architectures maintains spatial coherence despite missing observations [3,30].
RadarOcc predicts a dense 3D occupancy grid from 4D imaging radar (optionally combined with LiDAR), explicitly modeling occluded free space and demonstrating robustness in fog and rain [31]. GS-Occ3D builds on the Occ3D benchmark by introducing visibility-aware labels and a geometry-guided network, enabling more accurate reconstruction of occupied, free, and occluded voxels from multi-view images [32]. LinkOcc extends occupancy prediction temporally by linking LiDAR–camera features across frames for 4D panoptic occupancy, which helps recover occluded regions in crowded scenes [33]. More recently, a BEV occlusion mitigation framework has been proposed to minimize the occlusion effect on multi-view camera perception by combining multi-sensor cues and layered depth reasoning [34]. While these works explicitly model visibility or occupancy, they typically do not couple visibility estimation directly with cross-modal completion and the detection head in a single objective, as FAOD does.

2.4. Feature Alignment and Completion

Middle-level fusion must handle parallax, time misalignment, and calibration errors. Cross-view mapping with gating (3D-CVF) and unification in BEV/voxel space (UVTR/UniTR/BEV-style fusion) reduce reprojection artifacts and enable stable geometry-aware interaction [3,18,20,21,22].
Methods such as EPNet and PointAugmenting inject image semantics into point features to increase separability in sparse regions; DeepInteraction and Cross-Modal Transformer variants deploy attention to pass information bidirectionally across modalities and to align latent spaces under occlusion/missing-data, improving robustness in O = 1 / 2 regimes [35,36,37,38,39].
Beyond early BEVFusion-style alignment, several recent works focus on more precise cross-modal feature alignment and completion. High-performance sparse fusion frameworks project image features only at sparse LiDAR locations and refine them with attention in BEV, reducing computation while preserving alignment quality [40]. FusionFormer combines BEV-space LiDAR–camera fusion with a temporal transformer, enforcing temporal consistency of multi-sensory features across frames [41]. Wang et al. introduce a BEV fusion method based on mutual deformable attention and temporal aggregation, explicitly aligning LiDAR and camera features across depth and time [42]. These methods primarily target alignment quality and temporal smoothing, whereas FAOD ties cross-modal attention and completion directly to an explicit occlusion signal and learns them jointly with the detection losses.

2.5. Datasets and Metrics

KITTI provides occlusion/truncation tags and remains a foundational benchmark for 2D/3D detection; nuScenes offers a rich multi-sensor suite (6 cameras + LiDAR + radar) and a 10-class evaluation set; JRDB focuses on mobile robotics with 360° cameras and 3D LiDAR in crowded indoor/outdoor scenes; DENSE targets adverse weather and low-visibility conditions with additional sensing (e.g., radar/thermal) [43,44,45,46].
The standard mean average precision (mAP) and the nuScenes Detection Score (NDS) are widely adopted. For occlusion, occlusion-level mAP (OL-mAP)—computed over subsets partitioned by the visibility ratio or official occlusion flags—has emerged as a sensitive measure of robustness under O = 1 / 2 conditions [44].
Overall, prior BEV and multimodal detectors either (a) lack explicit visibility modeling, (b) fuse modalities without reliability- or occlusion-aware weighting, (c) decouple completion from detection, or (d) rely on calibration-sensitive alignment without safeguards against misalignment. FAOD addresses these limitations by unifying visibility estimation, directed cross-modal completion with geometric bias, and occlusion-adaptive fusion/calibration within one objective.

3. Problem Formulation

3.1. Sensor Inputs and Metadata

The multimodal input comprises synchronized RGB images, LiDAR point clouds, and IR maps, together with calibration parameters; temporal alignment and deskew pre-processing are applied. The RGB image, denoted as $I_{\mathrm{RGB}} \in \mathbb{R}^{H \times W \times 3}$, is accompanied by intrinsic parameters $K$, extrinsic transform $T_E^{\mathrm{Cam}} \in \mathrm{SE}(3)$, and a timestamp $t$. The LiDAR point cloud is represented as $P_{\mathrm{LiDAR}} = \{ p_i = (x_i, y_i, z_i, [\mathrm{int}_i, \mathrm{ring}_i, t_i^{\mathrm{rel}}]) \}_{i=1}^{N}$. We keep the intensity $\mathrm{int}$, the laser $\mathrm{ring}$, and the relative sample time $t^{\mathrm{rel}}$, and record $T_E^{\mathrm{LiDAR}}$, the scan duration $\Delta t$, and whether multi-sweep accumulation is used. The IR map is expressed as $T_{\mathrm{IR}} \in \mathbb{R}^{H \times W [\times C]}$. Its metadata comprise intrinsic parameters and resolution, extrinsics $T_E^{\mathrm{IR}}$, sampling rate, and the beam model for range–velocity mapping.
For temporal alignment and deskewing, let $E$ denote the unified world frame. We align all sensor timestamps to the LiDAR mid-scan time $t_0$ and deskew LiDAR by continuous poses $T(t)$:
$\tilde{p}_i = T(t_0)\, T(t_i)^{-1}\, p_i .$
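As a sketch of this deskew step, assuming a continuous-pose provider `pose_fn` (a hypothetical helper returning a 4×4 homogeneous pose for any timestamp):

```python
import numpy as np

def deskew_points(points, times, pose_fn, t0):
    """Apply p~_i = T(t0) T(t_i)^{-1} p_i to motion-compensate each LiDAR point.

    points  : (N, 3) xyz coordinates
    times   : (N,) per-point timestamps
    pose_fn : t -> 4x4 homogeneous pose T(t) (assumed continuous-pose provider)
    t0      : mid-scan reference time
    """
    T0 = pose_fn(t0)
    out = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, times)):
        T_rel = T0 @ np.linalg.inv(pose_fn(t))      # T(t0) T(t_i)^{-1}
        out[i] = (T_rel @ np.append(p, 1.0))[:3]    # homogeneous multiply, drop w
    return out
```

With a constant (identity) pose the points are returned unchanged; with a moving platform, points sampled away from $t_0$ are re-expressed at the reference time.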

3.2. Frames, Projections, and Gridding

All modalities are geometrically aligned with a unified coordinate frame $E$ or a common BEV representation for spatial consistency. For the camera projection (LiDAR → image), a 3D point $p$ in the LiDAR frame is first transformed into the camera coordinate system as $X = T_{\mathrm{Cam} \leftarrow \mathrm{LiDAR}} [p; 1]$. The homogeneous pixel $\tilde{u} = K [R \,|\, t]\, X$ is normalized to $u = (u, v) = (\tilde{u}/\tilde{w}, \tilde{v}/\tilde{w})$. For BEV mapping (points/voxels → BEV), the ground plane is discretized according to $(i, j) = (\lfloor (x - x_{\min})/s \rfloor, \lfloor (y - y_{\min})/s \rfloor)$, and features are pooled/encoded along the vertical dimension $z$ to obtain $F_{\mathrm{BEV}}$.
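Both mappings can be illustrated in a few lines (a minimal sketch; the 4×4 transform is assumed to fold $[R \,|\, t]$ into the camera-from-LiDAR matrix):

```python
import numpy as np

def project_lidar_to_image(p, T_cam_from_lidar, K):
    """LiDAR -> image: X = T [p; 1], u~ = K X, then perspective normalization."""
    X = T_cam_from_lidar @ np.append(p, 1.0)   # camera-frame homogeneous point
    u_tilde = K @ X[:3]
    return u_tilde[0] / u_tilde[2], u_tilde[1] / u_tilde[2]

def bev_index(x, y, x_min, y_min, s):
    """Ground-plane discretization: (i, j) = (floor((x - x_min)/s), floor((y - y_min)/s))."""
    return int((x - x_min) // s), int((y - y_min) // s)
```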

3.3. Instance-Level Annotations and Visibility

Each object instance is associated with a semantic category label $c \in \mathcal{C}$, where $\mathcal{C}$ denotes the predefined set of target classes (e.g., pedestrian, cyclist, passenger vehicle, large vehicle, traffic facility, etc.). The category label characterizes the semantic attributes of an object and serves as one of the fundamental prediction variables in multimodal detection tasks. Depending on the dataset configuration, the cardinality $|\mathcal{C}|$ can range from a small set of classes (e.g., three in KITTI) to a richer taxonomy (e.g., ten in nuScenes) and may be extended to support additional categories in more complex scenarios. To ensure cross-modality consistency, the category labels are defined and indexed in a unified manner across RGB images, LiDAR point clouds, and IR/radar annotations, enabling the detection model to align and share a common semantic space among heterogeneous sensors. Moreover, to evaluate robustness under occlusion, the category labels are further combined with visibility levels and bounding box annotations, which facilitates fine-grained performance analysis under varying occlusion conditions.
In addition to the semantic category label, each object is also described by a 3D bounding box $b = (x, y, z, w, h, l, \theta)$ with optional velocity $v$. The occlusion level is defined as $O \in \{0, 1, 2\}$ and unified via a visible ratio:
$v_{\mathrm{img}} = \dfrac{\#\,\text{visible pixels}}{\#\,\text{box pixels}}, \qquad v_{\mathrm{pc}} = \dfrac{\#\,\text{valid in-box points}}{\#\,\text{expected points}},$
$v_{\mathrm{ratio}} = \gamma\, v_{\mathrm{img}} + (1 - \gamma)\, v_{\mathrm{pc}}, \quad \gamma \in [0, 1],$
with thresholds
$O = \begin{cases} 0, & v_{\mathrm{ratio}} \ge 0.75, \\ 1, & 0.25 \le v_{\mathrm{ratio}} < 0.75, \\ 2, & v_{\mathrm{ratio}} < 0.25. \end{cases}$
Here, $v_{\mathrm{img}}$ denotes the fraction of visible pixels within the 2D bounding box in the image plane, $v_{\mathrm{pc}}$ denotes the fraction of valid LiDAR points inside the 3D box relative to the expected number of points, and $v_{\mathrm{ratio}}$ is a weighted combination of the two, with $\gamma$ controlling the relative contribution of image and point-cloud visibility. The thresholds above assign $O = 0$ to mostly visible objects, $O = 1$ to partially occluded objects, and $O = 2$ to heavily or fully occluded ones.
The cutoffs at 0.75 and 0.25 follow common three-level occlusion protocols in driving benchmarks, where roughly three quarters of the object area being visible corresponds to “non-occluded”, and less than one quarter corresponds to “heavily occluded”. The intermediate band $[0.25, 0.75)$ provides a sufficiently wide regime of partially occluded samples for learning while keeping the semantic interpretation of each level clear. Since $v_{\mathrm{ratio}}$ is a convex combination of $v_{\mathrm{img}}$ and $v_{\mathrm{pc}}$, increasing $\gamma$ shifts the occlusion decision towards image-based visibility, whereas decreasing $\gamma$ emphasizes LiDAR-based visibility. The sensitivity of $v_{\mathrm{ratio}}$ to $\gamma$ is bounded by $|v_{\mathrm{img}} - v_{\mathrm{pc}}|$; when the two modalities broadly agree, moderate changes of $\gamma$ do not alter the assigned occlusion level $O$, and only strong disagreements lead to boundary cases.
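The visibility fusion and thresholding can be expressed directly (a small sketch; the default $\gamma = 0.5$ is an arbitrary choice, not a value fixed by the text):

```python
def occlusion_level(v_img, v_pc, gamma=0.5):
    """Map image/point-cloud visibility to occlusion level O via the 0.75/0.25 cutoffs."""
    v_ratio = gamma * v_img + (1.0 - gamma) * v_pc
    if v_ratio >= 0.75:
        return 0   # mostly visible
    if v_ratio >= 0.25:
        return 1   # partially occluded
    return 2       # heavily / fully occluded
```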
Optional fine-grained labels include a pixel/BEV visibility mask $M_{\mathrm{vis}} \in [0, 1]$, a depth-order/occlusion graph $G$ (edges “occluder → occluded”), and sensor-availability flags.

3.4. Sample Organization and Occlusion-Aware Sampling

Under heavy occlusion, temporal windows with motion compensation are employed to increase visibility and maintain spatiotemporal continuity across frames.
To further address dataset imbalance, stratified sampling is applied to balance samples across occlusion levels ($O = 0/1/2$), or to oversample partially and fully occluded instances ($O \in \{1, 2\}$), preventing domination by non-occluded samples.
For the assignment, both anchor-free and anchor-based strategies are adapted to handle occluded samples. In the anchor-free setting, a top-k dynamic assignment with center/distance priors is commonly used. For anchor-based methods, intersection over union (IoU) thresholds are relaxed for high-O samples, and additional center biases are introduced. Define a composite cost (anchor-free example):
$C = \lambda_{\mathrm{cls}}\, C_{\mathrm{cls}} + \lambda_{\mathrm{box}}\, C_{\mathrm{box}} + \lambda_{\mathrm{occ}}\, \phi(O),$
where $\phi(O)$ downweights penalties for highly occluded instances.
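A toy rendering of this composite cost; the 1.0/0.5/0.25 schedule for $\phi(O)$ is an assumed example, not the paper's calibrated values:

```python
def assignment_cost(c_cls, c_box, O,
                    lam_cls=1.0, lam_box=1.0, lam_occ=1.0):
    """C = lam_cls*C_cls + lam_box*C_box + lam_occ*phi(O).

    phi(O) shrinks the occlusion penalty for highly occluded instances
    (the 1.0/0.5/0.25 schedule is an illustrative choice).
    """
    phi = {0: 1.0, 1: 0.5, 2: 0.25}[O]
    return lam_cls * c_cls + lam_box * c_box + lam_occ * phi
```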

4. Task Objective and Subtasks

The learning objective follows the detection mapping introduced in Section 6 (Equation (9)), which transforms multimodal sensory inputs into instance-level predictions of category, 3D geometry, and occlusion state.
For readability, we describe the learning objective through three coupled subtasks rather than listing all loss terms in the main text: (i) visibility estimation that predicts instance-level occlusion and a region-level visibility cue, (ii) visibility-guided cross-modal completion that reconstructs missing BEV evidence using complementary modalities, and (iii) modality-aware detection that dynamically fuses completed features and decodes $(c, b)$ with calibrated confidence. The full formulations of Subtasks A/B/C (including all losses and constraints) are provided in Appendix A.

5. Overall Optimization Objective and Training Strategy

The visibility, completion, and fusion modules are trained jointly so the network learns not only to detect objects, but also to estimate occlusion and recover missing information in a coordinated manner. This section summarizes the global objective and the strategies used to emphasize low-visibility cases while keeping training stable.

5.1. Global Objective

We jointly optimize detection, multi-granular visibility estimation, and cross-modal completion using
$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{det}} + \lambda_A \mathcal{L}_A + \lambda_B \mathcal{L}_B,$
where $\mathcal{L}_{\mathrm{det}}$ is the detection objective (classification + box regression), $\mathcal{L}_A$ is the visibility-estimation objective (including occlusion classification and region-level visibility constraints), and $\mathcal{L}_B$ is the completion objective. The complete definitions of $\mathcal{L}_{\mathrm{det}}$, $\mathcal{L}_A$, and $\mathcal{L}_B$ and all constituent losses are given in Appendix A.

5.2. Strategy I: Occlusion-Aware Reweighting

This strategy upweights hard samples so partially and fully occluded instances contribute more strongly during training. Concretely, for difficult cases ($O \in \{1, 2\}$), we amplify the occlusion-related objectives and the completion consistency:
$\lambda_{\mathrm{occ}}(O) = \lambda_{\mathrm{occ}}^{0}\,\bigl(1 + \beta_{\mathrm{occ}} \cdot \mathbb{1}[O \in \{1, 2\}]\bigr),$
$\lambda_{\mathrm{rec}}(O) = \lambda_{\mathrm{rec}}^{0}\,\bigl(1 + \beta_{\mathrm{rec}} \cdot \mathbb{1}[O = 2]\bigr),$
with stronger amplification for $O = 2$ to enforce completion consistency under full occlusion.
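These reweighting rules amount to a one-line lookup per sample (the $\beta$ values here are illustrative, not the tuned ones):

```python
def occlusion_loss_weights(O, lam_occ0=1.0, lam_rec0=1.0,
                           beta_occ=0.5, beta_rec=1.0):
    """lam(O) = lam0 * (1 + beta * 1[condition]); betas are assumed example values."""
    lam_occ = lam_occ0 * (1.0 + beta_occ * float(O in (1, 2)))
    lam_rec = lam_rec0 * (1.0 + beta_rec * float(O == 2))
    return lam_occ, lam_rec
```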
In addition, we optionally employ robust multi-task balancing (e.g., homoscedastic uncertainty weighting) and gradient normalization techniques (e.g., GradNorm/PCGrad) to prevent a single objective (typically classification) from dominating optimization. The explicit formulation used in our implementation is provided in Appendix B.

5.3. Strategy II: Spatiotemporal Consistency (Stable Completion with Multi-Frame Aggregation)

To stabilize completion across time, we enforce that point clouds and feature maps corresponding to the same physical object remain consistent across neighboring frames. This reduces flicker and overfitting to single-frame noise, which becomes noticeable when visibility is low.
Given a temporal window T = { t K , , t + K } and poses T ( · ) , we incorporate (i) point-level consistency under ego-motion and (ii) feature-level consistency via geometric warping. The complete equations (including the Chamfer-like point loss and feature warping loss) are reported in Appendix B. In practice, we first converge a single-frame model, and then introduce the temporal consistency terms together with multi-frame aggregation.

5.4. Strategy III: Post-Processing (Occlusion-Aware NMS and Calibration)

Beyond the core network, we apply occlusion-aware post-processing to avoid suppressing hard occluded true positives and to calibrate confidence scores. The key idea is to soften suppression and adjust score calibration when a hypothesis is predicted as heavily occluded, because occlusion increases localization uncertainty and reduces IoU overlap.
We adopt occlusion-aware Soft-NMS and occlusion-conditioned temperature scaling (and, optionally, uncertainty-aware NMS). To keep the main text lightweight, the full post-processing equations are given in Appendix B.
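A compact sketch of occlusion-aware Gaussian Soft-NMS; the `occ_relief` factor that widens the decay kernel for occluded hypotheses is an assumed heuristic standing in for the full Appendix B formulation:

```python
import math

def iou(a, b):
    """IoU of two axis-aligned boxes [x1, y1, x2, y2]."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def soft_nms_occ(boxes, scores, p_occ, sigma=0.5, occ_relief=2.0, score_thr=0.01):
    """Gaussian Soft-NMS with occlusion-conditioned decay.

    p_occ[i] is the predicted probability that box i is heavily occluded;
    higher p_occ widens the Gaussian kernel so suppression is gentler.
    Returns indices of kept boxes in selection order.
    """
    scores = list(scores)
    live = list(range(len(boxes)))
    keep = []
    while live:
        m = max(live, key=lambda i: scores[i])
        if scores[m] < score_thr:
            break
        keep.append(m)
        live.remove(m)
        for j in live:
            s = sigma * (1.0 + occ_relief * p_occ[j])   # wider kernel when occluded
            scores[j] *= math.exp(-(iou(boxes[m], boxes[j]) ** 2) / s)
    return keep
```

With a strict score threshold, an overlapping duplicate survives only when it is flagged as occluded, which is exactly the false-suppression behavior this step is meant to avoid.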

5.5. Training Pipeline and Curriculum

We recommend a staged training pipeline. Training proceeds as follows: (i) train L det to stability; (ii) enable L A for visibility estimation; and (iii) activate L B together with temporal consistency (if used). The occlusion curriculum increases the synthetic occlusion strength ρ from ρ 0 to ρ max (linear/cosine schedule) while ramping up the completion weight. To improve robustness to missing sensors, we randomly drop modalities (guided by sensor-availability flags) so the completion and fusion modules generalize across sensor degradation. Unless otherwise stated, all weights and thresholds are treated as learnable or scheduled hyperparameters, supporting reproducibility and systematic ablations across datasets and sensor configurations.
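The curriculum and modality dropout can be sketched as follows (the linear ramp endpoints and drop probability are assumed example values):

```python
import random

def occlusion_strength(epoch, total_epochs, rho0=0.1, rho_max=0.7):
    """Linearly ramp synthetic occlusion strength rho over training
    (rho0/rho_max are assumed example values; a cosine schedule also fits)."""
    frac = min(1.0, epoch / max(1, total_epochs - 1))
    return rho0 + frac * (rho_max - rho0)

def drop_modalities(available, p_drop=0.2, seed=0):
    """Randomly drop sensor streams for robustness, keeping at least one."""
    rng = random.Random(seed)
    kept = [m for m in available if rng.random() > p_drop]
    return kept if kept else [rng.choice(available)]
```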

6. Method

6.1. Overall Architecture

FAOD is an occlusion-robust multimodal detector that links modality-specific encoding → occlusion-aware representation → cross-modal attentive completion → multi-task detection in an end-to-end pipeline. To make the data flow easier to follow at first glance, Figure 1 provides a conceptual view of how visibility estimation, cross-modal completion, and occlusion-aware fusion operate together. Figure 2 then presents the detailed module design and training/inference signals.
Formally, given synchronized sensory inputs from RGB cameras $I_{\mathrm{RGB}}$, LiDAR point clouds $P_{\mathrm{LiDAR}}$, and optionally radar/IR maps $T_{\mathrm{IR}}$, the objective is to learn a detection function:
$f : (I_{\mathrm{RGB}}, P_{\mathrm{LiDAR}}, T_{\mathrm{IR}}) \to \{(c, b, O)\},$
where $c \in \mathcal{C}$ denotes the semantic object category, $b = (x, y, z, w, h, l, \theta)$ defines the 3D bounding box that includes spatial position, dimensions, and orientation, and $O \in \{0, 1, 2\}$ represents the occlusion state corresponding to no occlusion, partial occlusion, and full occlusion. FAOD comprises (i) modality-specific encoders for RGB, LiDAR, and IR/radar; (ii) an occlusion-aware feature extractor producing multi-granular visibility signals; (iii) CMA for selective fusion and completion; and (iv) a multi-task head that predicts $(c, b, O)$ with occlusion-adaptive fusion and decoding (see Figure 2).

6.2. Feature Extraction Modules

With a ResNet/Swin backbone and an FPN, the image encoder produces multi-scale features $\{F_l^{\mathrm{img}}\}_{l=1}^{L}$, where $F_l^{\mathrm{img}} \in \mathbb{R}^{H_l \times W_l \times C_l}$. For alignment with BEV/point features, a perspective or learnable view transform is applied:
$\tilde{F}_l^{\mathrm{img}} = \mathcal{T}_{\mathrm{view}}\bigl(F_l^{\mathrm{img}};\, K,\, T_E^{\mathrm{Cam}}\bigr).$
For LiDAR, the voxel pathway (VoxelNet/SECOND) builds a tensor $V \in \mathbb{R}^{D \times H \times W \times C}$ and yields BEV features $F_{\mathrm{BEV}} \in \mathbb{R}^{H_b \times W_b \times C_b}$ via 3D/2D convolutions. The point pathway (PointNet++) aggregates raw points $P = \{p_i\}$ to $F^{\mathrm{pc}} \in \mathbb{R}^{N \times C_p}$, then pools to BEV with a voxel/grid operator $\Gamma(\cdot)$:
$\tilde{F}^{\mathrm{pc}} = \Gamma\bigl(P, F^{\mathrm{pc}}\bigr) \in \mathbb{R}^{H_b \times W_b \times \tilde{C}_b}.$
For IR/radar, a lightweight CNN/Transformer produces $F^{\mathrm{ir}} \in \mathbb{R}^{H \times W \times C}$; geometric calibration maps it to the unified view:
$\tilde{F}^{\mathrm{ir}} = \Phi\bigl(F^{\mathrm{ir}};\, T_E^{\mathrm{IR}}\bigr).$
The aligned main-scale maps $F_{\mathrm{RGB}}, F_{\mathrm{LiDAR}}, F_{\mathrm{IR}} \in \mathbb{R}^{H_b \times W_b \times C}$ are then used by subsequent modules in a common BEV/grid domain.

6.3. Occlusion-Aware Submodules

FAOD augments the backbone with auxiliary occlusion branches that provide explicit visibility cues for downstream completion and fusion. The goal of these submodules is to estimate, for each candidate and region in the scene, how strongly it is occluded, so that later stages can selectively trust or discount modality evidence.
At each candidate (instance or BEV grid), the occlusion branch outputs an instance probability $p_O = (p_0, p_1, p_2)$ for $O \in \{0, 1, 2\}$ and a region-level visibility map $A_{\mathrm{vis}} \in [0, 1]^{H_b \times W_b}$. Instance-level occlusion is trained with class-balanced cross-entropy or Focal loss, and the visibility map is supervised by BCE with total-variation (TV) regularization, as defined in Subtask A.
We obtain a semantic-guided visibility map by concatenating RGB and LiDAR features and projecting to a single channel:
$A_{\mathrm{vis}} = \sigma\bigl(\mathrm{Conv}_{1 \times 1}([F_{\mathrm{RGB}} \,\|\, F_{\mathrm{LiDAR}}])\bigr),$
where $[\cdot \,\|\, \cdot]$ denotes channel-wise concatenation.
Given a coarse fused map $F_{\mathrm{fusion}}$, multi-head self-attention with positional encoding $P$ and geometric bias $B_{\mathrm{geo}}$ is applied using the following geometry-aware attention operator:
$\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d}} + B_{\mathrm{geo}}\right) V.$
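A single-head numpy sketch of this geometry-aware operator, with the bias added to the scaled logits before the softmax:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geo_attention(Q, K, V, B_geo):
    """Attn(Q, K, V) = softmax(Q K^T / sqrt(d) + B_geo) V  (single head)."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + B_geo   # geometric bias shifts attention
    return softmax(logits, axis=-1) @ V
```

A strongly negative bias on a key column effectively masks it out, which is how geometric priors can steer attention toward plausible neighborhoods.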

6.4. Cross-Modal Attention and Completion

Once visibility has been estimated, FAOD uses cross-modal attention to transfer information from less occluded “donor” modalities to more occluded “target” modalities. Intuitively, this module aims to complete or refine target features in regions where they are unreliable, by borrowing geometry-consistent evidence from other sensors.
Given a target modality $F_a$ and a donor modality $F_b$, queries, keys, and values are obtained by linear projections:
$Q = W_Q F_a, \quad K = W_K F_b, \quad V = W_V F_b .$
The attended target features are then computed by the operator in Equation (14):
$\hat{F}_a = \mathrm{Attn}(Q, K, V) .$
For occlusion-gated mixing, let M occ = 1 A vis . A modality reliability score r m ( 0 , 1 ) (estimated from density/SNR/texture/motion blur) yields a donor weight
ω b = exp { κ ( 1 A vis ) r b } m { RGB , LiDAR , IR } exp { κ ( 1 A vis ) r m } ,
and the completed target features are updated by
F a comp = F a + ω b M occ ( F ^ a F a ) .
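The donor weighting and gated residual update can be sketched at a single spatial location as follows. This is a scalar illustration under assumed values (the temperature `kappa=2.0` and the reliability numbers are hypothetical), not the paper's implementation.

```python
import math

def donor_weights(reliability, a_vis, kappa=2.0):
    """ω_m = exp{κ(1−A_vis)·r_m} / Σ_m' exp{κ(1−A_vis)·r_m'} at one location.

    reliability: dict modality -> r_m in (0,1); a_vis: visibility in [0,1];
    kappa is a temperature (the value 2.0 is an assumed illustration).
    """
    logits = {m: kappa * (1.0 - a_vis) * r for m, r in reliability.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {m: math.exp(v) / z for m, v in logits.items()}

def complete_feature(f_a, f_hat_a, a_vis, omega_b):
    """F_a^comp = F_a + ω_b · M_occ · (F̂_a − F_a), with M_occ = 1 − A_vis."""
    m_occ = 1.0 - a_vis
    return [fa + omega_b * m_occ * (fh - fa) for fa, fh in zip(f_a, f_hat_a)]
```

Note the two limiting behaviors: at full visibility ($A_{\mathrm{vis}} = 1$) the gate closes and the target features pass through unchanged, while at full occlusion the update moves the target toward the attended donor features in proportion to the donor weight.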

6.5. Detection Head and Occlusion-Aware Fusion

The final detection stage converts the completed multimodal features into class, box, and occlusion predictions, while adaptively weighting each modality according to visibility and reliability. This head ties together the preceding modules and determines how much each sensor contributes to the final decision at each spatial location.
On the fused representation $F_{\mathrm{final}}$, a multi-task head predicts class, box, and occlusion:
$$\{p_c, \hat{b}, \hat{O}\} = \mathrm{Head}\left(F_{\mathrm{final}}\right).$$
The detection objective follows Subtask C, with $\mathcal{L}_{\mathrm{det}} = \lambda_{\mathrm{cls}} \mathcal{L}_{\mathrm{cls}} + \lambda_{\mathrm{box}} \mathcal{L}_{\mathrm{box}} + \lambda_{\mathrm{occ}} \mathcal{L}_{\mathrm{occ}}$; the formulations of $\mathcal{L}_{\mathrm{cls}}$, $\mathcal{L}_{\mathrm{box}}$ (IoU/DIoU + $\ell_1$ with periodic angle), and $\mathcal{L}_{\mathrm{occ}}$ are defined there and not repeated here.
Occlusion-aware dynamic fusion computes, at each location $x$, per-modality weights via a learnable gate $g_m$ (e.g., a two-layer MLP over $[M_{\mathrm{occ}}(x), r_m(x), \mathrm{GAP}(F_m)]$); the resulting logits are normalized to weights:
$$g_m(x) = \mathrm{MLP}_m\left([M_{\mathrm{occ}}(x), r_m(x), \mathrm{GAP}(F_m)]\right),$$
$$\alpha_m(x) = \frac{\exp\{g_m(x)\}}{\sum_{m' \in \{\mathrm{RGB}, \mathrm{LiDAR}, \mathrm{IR}\}} \exp\{g_{m'}(x)\}},$$
$$F_{\mathrm{final}}(x) = \sum_{m \in \{\mathrm{RGB}, \mathrm{LiDAR}, \mathrm{IR}\}} \alpha_m(x)\, F_m^{[\mathrm{comp}]}(x).$$
Here, $M_{\mathrm{occ}}$ is the occlusion mask defined earlier, $r_m \in (0, 1)$ denotes modality reliability, and $\mathrm{GAP}(\cdot)$ is global average pooling. Higher occlusion (larger $M_{\mathrm{occ}}$) and higher reliability $r_m$ increase $\alpha_m$, prioritizing robust modalities (e.g., LiDAR/IR) when needed.
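The gating-and-mixing step above can be sketched per location as a softmax over modality gate logits followed by a weighted sum. In FAOD the logits come from the per-modality MLPs; here they are supplied directly, and all feature values are illustrative.

```python
import math

def dynamic_fusion(features, gate_logits):
    """F_final(x) = Σ_m α_m(x)·F_m(x), with α_m(x) a softmax over gate logits.

    features: dict modality -> completed feature vector at one location x;
    gate_logits: dict modality -> g_m(x). In FAOD, g_m comes from a small MLP
    over [M_occ(x), r_m(x), GAP(F_m)]; here the logits are given directly.
    """
    mods = list(features)
    top = max(gate_logits[m] for m in mods)
    exps = {m: math.exp(gate_logits[m] - top) for m in mods}
    z = sum(exps.values())
    alpha = {m: exps[m] / z for m in mods}
    dim = len(next(iter(features.values())))
    fused = [sum(alpha[m] * features[m][j] for m in mods) for j in range(dim)]
    return fused, alpha
```

Because the weights are location-dependent, a scene can lean on LiDAR in one region (e.g., behind a visual occluder) while trusting RGB elsewhere, rather than committing to one global mixing ratio.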

6.6. Training and Implementation Notes

All modalities are aligned to a unified BEV/grid; for multi-frame inputs, pose-based registration and LiDAR deskew are applied. Synthetic occlusion $T_{\mathrm{occ}}(\rho)$ is applied with strength $\rho$ gradually increased, and $\lambda_{\mathrm{rec}}$ is ramped up in tandem to stabilize completion learning.
The overall objective follows the global formulation in Section 5 (Equation (6)). For samples with $O \in \{1, 2\}$, $\lambda_{\mathrm{occ}}$ and $\lambda_{\mathrm{rec}}$ are increased; homoscedastic uncertainty weights $\{\sigma_k\}$ may be used for task balancing. During inference, occlusion-aware Soft-NMS (weaker suppression for $O = 2$), temperature scaling, and variable IoU thresholds are used to reduce misses and over-suppression under heavy occlusion.
The pipeline forms a causal loop from visibility estimation → cross-modal completion → dynamic fusion. The heatmap $A_{\mathrm{vis}}$ localizes occlusions, CMA performs geometry-consistent targeted recovery, and $\alpha_m$ assigns modality weights based on occlusion and reliability. The multi-task head jointly optimizes these components, yielding robust 3D detection under partial and full occlusion.

7. Experiments

7.1. Overall Performance and Stratified Analysis

Across four benchmarks, FAOD consistently outperforms multimodal and occlusion-specialized baselines. Averaged over three independent runs (distinct random seeds), the absolute improvements are typically + 3 % to + 7 % in mAP and + 2 to + 5 in NDS, with statistical significance verified by bootstrap testing ( p < 0.05 ) .
To assess robustness under occlusion, results are partitioned by the unified visibility thresholds into $O \in \{0, 1, 2\}$ (non-occluded, partially occluded, and severely/near-completely occluded) and reported as OL-mAP per subset. For non-occluded cases $(O = 0)$, FAOD matches or slightly exceeds strong baselines by +0.5% to +2%, indicating no loss of upper-bound accuracy. For partially occluded cases $(O = 1)$, gains of +4% to +9% are observed, attributable to visibility guidance and cross-modal completion that mitigate weak texture and sparse points. For severely/near-completely occluded cases $(O = 2)$, the gains are most pronounced (+8% to +15%); recall increases more than precision, consistent with CMA and dynamic fusion recovering detectability under low-information conditions.
Category-wise and scale-wise analyses indicate especially notable improvements in small/distant classes (e.g., pedestrian, cyclist). Scale binning shows larger OL-mAP gains for small-to-medium objects, consistent with FAOD’s ability to compensate for sparse LiDAR returns and weak image cues. This trend is also aligned with typical road scenes, where small agents are the first to disappear behind occluders and the last to provide clean geometry.
The region-level visibility map A vis attains higher IoU and lower cross-entropy against ground-truth M vis than unsupervised baselines. The Spearman correlation between A vis and classification confidence is also higher, supporting its use in score calibration. In practice, this correlation matters more for ( O = 1 , 2 ) : the score needs to reflect “how much evidence is really there”, otherwise the post-processing step tends to discard the hard positives.
On DENSE nighttime/low-light/rain–snow subsets, FAOD’s advantage widens (OL-mAP gains of about + 6 % at O = 1 and + 12 % at O = 2 ). On JRDB’s crowded indoor scenes with long occlusion chains, FAOD maintains stable recall. These are the cases where symmetric fusion is easily confused by missing or noisy cues; the visibility-gated completion and occlusion-aware calibration are simply more forgiving, and the improvements show up consistently in the occlusion-stratified metrics.

7.2. Baseline Comparison and Component Contributions

Compared with multimodal baselines (e.g., PointPainting, MVX-Net, UVTR), PointPainting is vulnerable to noisy image semantics, particularly at O = 2 ; FAOD suppresses unreliable channels via A vis and r m , reducing false positives. MVX-Net/UVTR degrade under alignment errors or missing modalities; FAOD’s geometric bias B geo and gated fusion α m show greater robustness.
Versus occlusion-specialized baselines (e.g., GUPNet, ORN, DetZero), single-modality/view occlusion reasoning is limited under complete occlusion; FAOD’s cross-modal completion transfers information directionally (donor→recipient), reconstructing features for invisible recipients. In dense crowds, ORN/DetZero’s reliance on ordering/logic graphs is less robust to annotation noise; FAOD with L rank yields smoother behavior.
Ablations show consistent trends. Removing the occlusion branch ( L occ + L vis ) notably degrades OL-mAP at O = 2 and weakens A vis , limiting CMA completion. Removing the geometric bias B geo hurts more in high-parallax camera–radar/camera–LiDAR settings. Removing reliability gating r m increases mis-fusion in low-light/sparse segments and reduces the variance of α m . Disabling consistency/contrastive losses ( L rec , L align ) during the occlusion curriculum leads to over-completion or local overfitting with larger per-subset variance. Temporal consistency (optional) further improves O = 2 recall at modest latency cost.

7.3. Performance Analysis: Efficiency, Resources, and Deployability

We use three model scales—FAOD-S (small), FAOD-M (medium), and FAOD-L (large). Unless otherwise stated, the latency breakdown reports the large scale (FAOD-L). Under a common protocol (nuScenes, single GPU, FP16, batch = 1), we report latency and key resource metrics in Table 1 and Table 2. CMA and the image backbone dominate compute; reducing image resolution/backbone width and triggering CMA sparsely (e.g., ROI-based) provide the largest speedups.
Speed–accuracy trade-offs are shown in Table 3. FAOD-M reduces latency by ≈39% vs. FAOD-L while losing ≈2.8 pts on O = 2 , suitable for online use; FAOD-L favors offline high-accuracy.
In this context, FAOD-S can be regarded as the lightweight variant targeting resource-constrained or embedded deployments. Compared with FAOD-L, it reduces latency from 94.7 ms to 41.0 ms (about a 57 % decrease) and peak memory (see Table 2) at the cost of several points in mAP and OL-mAP ( O = 2 ) . Such a trade-off is acceptable for many automotive ECUs where on-board compute and memory are limited. On automotive-grade SoCs, additional gains are expected from TensorRT/ONNX engines, mixed precision, and moderate backbone width scaling; a full evaluation of FAOD-S on embedded hardware is left for future work.
Engine-level optimizations reduce memory and improve throughput (Table 4); e.g., TensorRT yields 20– 25 % throughput gains.
Calibration and post-processing analyses on nuScenes val are given in Table 5. Temperature scaling improves calibration (ECE/Brier), and occlusion-aware Soft-NMS further improves detection under heavy occlusion ( O = 2 ; higher OL-mAP) together with overall mAP.
Robustness to modality dropout at inference is summarized in Table 6. LiDAR is critical under strong occlusion; RGB/IR remain complementary in low light and sparse-point regimes.
Efficiency impacts of key components are shown in Table 7. CMA yields the largest accuracy gains with moderate cost; B geo and r m are highly cost-effective for high-occlusion accuracy.
FAOD delivers statistically significant gains on aggregate and occlusion-stratified metrics across four benchmarks, with the largest improvements at O = 2 due to cross-modal completion and adaptive gating. Efficiency-wise, FAOD traces a clear Pareto frontier via image/BEV resolution and sparse attention, enabling both offline and online deployments. Interpretability ( A vis , α m maps) and better calibration (temperature scaling) support practical deployment and safety analyses.

7.4. Occlusion-Stratified Results on nuScenes

To rigorously assess robustness under varying degrees of occlusion, the nuScenes validation set is stratified into three visibility tiers—non-occluded ( O = 0 ) , partially occluded ( O = 1 ) , and heavily occluded ( O = 2 ) —and OL-mAP is reported for each tier. Overall, FAOD-L attains the best or tied-best performance across all tiers and exhibits a smaller degradation as occlusion increases than both multimodal and occlusion-specialized baselines (Figure 3).
For ( O = 0 ) , FAOD-L achieves an OL-mAP of 71.0 , exceeding the mean of four baselines (69.25) by + 1.75 percentage points (pp) and outperforming the best baseline (70.0) by + 1.0 pp. This suggests that introducing explicit visibility reasoning does not come at the cost of peak accuracy: when observations are clean, the model largely behaves like a strong BEV fusion detector rather than “over-correcting” what is already reliable. For ( O = 1 ) , FAOD-L reaches 63.0 , improving over the baseline mean (56.5) by + 6.5 pp and over the strongest baseline (58.0) by + 5.0 pp. In many nuScenes scenes, partial occlusion is the more common and also the more confusing case: one modality may still carry a usable fragment (e.g., a contour in RGB), while another becomes sparse or locally corrupted (e.g., missing returns in LiDAR). The visibility heatmap helps here by damping unreliable regions and letting the fusion focus on the parts that are still trustworthy; CMA then supplies complementary cues where the target stream is weak, instead of mixing all modalities symmetrically in BEV. For ( O = 2 ) , FAOD-L attains 53.0 , surpassing the baseline mean (44.25) by + 8.75 pp and the strongest baseline (46.0) by + 7.0 pp. The improvement is strongest under ( O = 2 ) and is driven mainly by recall: in these cases, the detector often needs to work with very limited evidence (a few points, a small edge fragment, or intermittent responses). Visibility-gated CMA, together with reliability weighting, reconstructs discriminative features only where information is genuinely missing, making the remaining cues usable without spreading artifacts across the scene. This also makes post-processing less brittle, because a hard true positive under severe occlusion may not achieve the “nice” overlap pattern that standard suppression heuristics assume.
The tiered results also hint at what kind of situations in nuScenes FAOD benefits from. Under ( O = 1 ) , the gain tends to come from cases that are partially blocked but still geometrically consistent—for instance, an agent visible in one stream while partially missing in another due to occluders or viewpoint. The directed (donor → recipient) completion is especially useful in this regime: it transfers information from the less-occluded donor stream to the occluded target stream, which is a different behavior from symmetric BEV aggregation. Under ( O = 2 ) , detections are closer to the decision boundary. Here, the visibility gating prevents the completion module from “guessing everywhere”, and the occlusion-aware calibration/NMS helps avoid over-suppressing these low-IoU, low-confidence but correct hypotheses. In short, ( O = 1 ) benefits more from selective restoration, while ( O = 2 ) benefits from both restoration and a more forgiving confidence/suppression policy.
Figure 4, Figure 5 and Figure 6 provide qualitative comparisons between the baseline fusion model (BEVFormer) and FAOD under heavy occlusion. For each scene, the top image shows the baseline result, while the bottom image shows FAOD. In scenarios where target objects are largely invisible in RGB and only sparsely observed in LiDAR, the baseline often fails to form meaningful responses, leading to missed detections or fragmented hypotheses. In contrast, FAOD produces more coherent BEV activations and more stable object predictions. The visibility cues highlight occluded regions, while cross-modal attention selectively transfers complementary geometric information from less-occluded modalities, resulting in more complete object representations.
Degradation with occlusion is quantified by $\Delta_{\mathrm{occ}} = \text{OL-mAP}(O{=}0) - \text{OL-mAP}(O{=}2)$. PointPainting: 26 pp; MVX-Net: 25 pp; UVTR: 26 pp; DetZero: 23 pp; FAOD-L: 18 pp. Relative to the best baseline (DetZero, 23 pp), FAOD-L reduces the penalty by 5 pp (a relative reduction of 21.7%), yielding a flatter performance–occlusion curve and stronger cross-tier consistency.

8. Discussion

8.1. Generalization Capability

The proposed FAOD framework demonstrates strong generalization ability when deployed in previously unseen environments and across novel object categories. By leveraging multimodal feature representations and explicit occlusion reasoning, the model is less reliant on dataset-specific appearance patterns, thereby enhancing robustness in diverse urban scenarios. Experimental results across four heterogeneous benchmarks confirm that FAOD can effectively adapt to varying sensor configurations and scene geometries without significant performance degradation.

8.2. Robustness to Occlusion Types

A key strength of our approach lies in its robustness against different types of occlusion. In addition to handling static and partial occlusions, FAOD exhibits stable performance in highly dynamic conditions where occlusions are caused by moving vehicles, pedestrians, or other agents. The explicit visibility reasoning module enables reliable estimation of occlusion levels, while the cross-modal feature completion mechanism recovers object representations even when large portions are visually obscured.

8.3. Computational Efficiency

Practical deployment in autonomous driving requires a balance between accuracy and efficiency. In this work, all runtime and resource measurements are obtained on a single NVIDIA RTX 3090 GPU under the evaluation protocol described in the Experiments section (FP16, batch = 1 , with image resolution and LiDAR sweeps as specified for each FAOD-S/M/L configuration). The reference implementation has a compact model size of about 110 MB of learnable parameters, which fits comfortably within the memory budgets of current GPU and automotive SoC platforms.
Under this protocol, the three model scales trace a clear accuracy–latency frontier: FAOD-L targets offline or high-compute settings, FAOD-M offers a favorable trade-off between accuracy and speed, and FAOD-S is explicitly designed as a lightweight variant for resource-constrained or embedded deployments, using lower image resolution, fewer LiDAR sweeps, and narrower backbones while preserving most of the occlusion-stratified gains. Engine-level optimizations such as TensorRT/ONNX conversion, mixed-precision execution, operator fusion, and sparsified (e.g., ROI-triggered) attention further reduce latency and memory footprint. A detailed quantitative evaluation on specific automotive-grade embedded hardware is left for future work.

8.4. Limitations and Future Directions

Despite its effectiveness, the current framework has several limitations. First, the present FAOD implementation assumes fixed and precise extrinsic calibration between cameras, LiDAR, and IR/radar. The geometric bias term B geo and the BEV projections are computed directly from these calibration parameters. In practice, LiDAR misalignment (e.g., due to mechanical tolerances, thermal drift, or mounting vibrations) can distort cross-modal attention and reliability gating, and we do not yet explicitly model or correct such effects. Future variants could incorporate calibration-robust feature encodings, online refinement of extrinsics, or uncertainty-aware fusion that downweights modalities suspected to be misaligned.
FAOD is evaluated in a single-frame setting and does not yet include explicit temporal modeling. Without temporal aggregation, the method cannot fully exploit motion cues and cross-frame visibility to stabilize occlusion estimates or recover objects that are only intermittently visible under heavy occlusion. A natural extension is to aggregate BEV features over short, pose-compensated temporal windows and apply a lightweight temporal attention module on top of the existing BEV representation, together with temporal consistency losses to regularize completion in heavily occluded scenes.
The current study focuses on passive sensing with a fixed sensor layout. We do not consider active strategies such as view planning, adaptive sensor scheduling, or dynamic exposure control, which may further mitigate severe occlusion and adverse-weather degradation. Exploring these directions, together with temporal reasoning and calibration-aware fusion, is left for future work.

9. Conclusions

In this work, we proposed FAOD, a novel Fusion-Aware Occlusion Detection framework designed to address the persistent challenge of object detection under occlusion in autonomous driving systems. By integrating explicit visibility reasoning with implicit cross-modal feature completion, FAOD is capable of reconstructing object representations even in highly cluttered and visually degraded scenarios. A central innovation of our approach lies in the attention-guided multimodal fusion mechanism, which dynamically aligns heterogeneous features from RGB, LiDAR, and infrared/radar modalities to maximize complementary strengths while mitigating occlusion-induced information loss.
Extensive experiments on four representative autonomous driving benchmarks demonstrate that FAOD achieves state-of-the-art performance across a wide range of occlusion conditions, including partial and full occlusions, static and dynamic obstacles, and diverse sensor configurations. Notably, the framework maintains both high accuracy and computational efficiency, reaching real-time inference rates with a compact model size, which highlights its potential for practical deployment in safety-critical driving environments.
Beyond empirical performance, FAOD contributes a methodological foundation that can generalize to multimodal perception research. Its explicit occlusion modeling, modality-aware feature reconstruction, and attention-driven alignment are not confined to detection; they could also support occlusion-aware tracking, improve the reliability of motion forecasting, and refine occupancy prediction, in addition to aiding cooperative multi-agent perception. In each of these tasks, the same principle applies: reasoning about which signals are missing and selectively completing them with information from other modalities can make the system more robust. More broadly, FAOD exemplifies a practical paradigm for dealing with incomplete multimodal data, offering a transferable approach that extends beyond autonomous driving and remains relevant wherever sensor degradation or partial observability pose challenges.
Looking ahead, future research directions include incorporating temporal reasoning to leverage motion dynamics across video sequences, as well as exploring active perception strategies that adapt sensor utilization to occlusion severity. By advancing towards these goals, FAOD can serve as a stepping stone for the development of next-generation robust, reliable, and intelligent perception systems for autonomous driving and broader real-world applications.

Author Contributions

Conceptualization, Z.L. and B.S.; methodology, Z.L.; software, Z.L.; validation, Z.L. and B.S.; formal analysis, Z.L.; investigation, Z.L.; resources, B.S.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, B.S.; visualization, Z.L.; supervision, B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used in this study are from publicly available open-source autonomous driving datasets.

Acknowledgments

The authors thank the anonymous referees for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Full Task Objective and Loss Formulations

Appendix A.1. Subtask A: Occlusion Classification (Visibility Estimation)

This subtask predicts the instance-level occlusion label $O$ and a region-level visibility heatmap $A_{\mathrm{vis}} \in [0, 1]^{\Omega}$ ($\Omega$: pixels or BEV grids) for downstream gating and completion. The input features include multi-scale image features $\{F_l^{\mathrm{img}}\}$, BEV/voxel features $F_{\mathrm{pc}}$, IR/Radar features $F_{\mathrm{ir}}$, and geometric statistics $\psi$ (point density, view angle, and count of neighboring occluders).
The instance-level occlusion loss adopts a class-balanced cross-entropy or Focal loss:
$$\mathcal{L}_{\mathrm{occ}} = -\sum_{k \in \{0, 1, 2\}} \alpha_k (1 - p_k)^{\gamma}\, [O = k] \log p_k.$$
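A minimal sketch of the instance-level focal term follows; the indicator $[O = k]$ leaves only the true-class term, so the sum collapses to a single evaluation. The `alpha`/`gamma` defaults are common choices, not values reported in the paper.

```python
import math

def focal_occlusion_loss(p, o_true, alpha=(1.0, 1.0, 1.0), gamma=2.0):
    """L_occ = −α_k (1−p_k)^γ log p_k for the true occlusion level k = O.

    p: predicted probabilities (p0, p1, p2) over O ∈ {0, 1, 2};
    o_true: ground-truth level. With γ = 0 this reduces to (class-balanced)
    cross-entropy; γ > 0 down-weights already well-classified samples.
    """
    pk = p[o_true]
    return -alpha[o_true] * (1.0 - pk) ** gamma * math.log(pk)
```

Class balancing via $\alpha_k$ matters here because heavily occluded instances ($O = 2$) are typically rare relative to visible ones.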
For region-level visibility prediction, a binary cross-entropy term with total-variation (TV) regularization is used:
$$\mathcal{L}_{\mathrm{vis}} = \mathrm{BCE}\left(A_{\mathrm{vis}}, M_{\mathrm{vis}}\right) + \lambda_{\mathrm{tv}} \sum_{x \in \Omega} \left\| \nabla A_{\mathrm{vis}}(x) \right\|_1.$$
Depth-order consistency is enforced using an occlusion graph $\mathcal{G}$:
$$\mathcal{L}_{\mathrm{rank}} = \sum_{u \to v} \max\left(0,\, d(v) - d(u) + \delta\right) + \beta \max\left(0,\, \overline{A_{\mathrm{vis}}}(v) - \overline{A_{\mathrm{vis}}}(u) + \epsilon\right),$$
where $d(\cdot)$ is mean depth and $\overline{A_{\mathrm{vis}}}(\cdot)$ is instance-wise average visibility.
Cross-modal visibility consistency is encouraged through an $L_1$ projection term:
$$\mathcal{L}_{\mathrm{cons}} = \left\| A_{\mathrm{vis}}^{\mathrm{img}} - \Pi\left(A_{\mathrm{vis}}^{\mathrm{pc}}\right) \right\|_1,$$
where $\Pi(\cdot)$ is the projection operator from the point cloud to the image plane.
The Subtask-A objective combines all components as
$$\mathcal{L}_A = \lambda_{\mathrm{occ}} \mathcal{L}_{\mathrm{occ}} + \lambda_{\mathrm{vis}} \mathcal{L}_{\mathrm{vis}} + \lambda_{\mathrm{rank}} \mathcal{L}_{\mathrm{rank}} + \lambda_{\mathrm{cons}} \mathcal{L}_{\mathrm{cons}}.$$

Appendix A.2. Subtask B: Feature Restoration (Cross-Modal Completion)

Within occluded regions indicated by A vis , missing semantics/geometry are reconstructed using complementary modalities.
For modality reliability, let $m \in \{\mathrm{RGB}, \mathrm{LiDAR}, \mathrm{IR}\}$. We compute statistics $\phi_m$ (texture strength, SNR, point density, motion blur) and
$$r_m = \sigma\left(w_m^{\top} \phi_m + b_m\right) \in (0, 1).$$
We then form donor weights:
$$\omega_m = \frac{\exp\{\kappa (1 - A_{\mathrm{vis}})\, r_m\}}{\sum_{m'} \exp\{\kappa (1 - A_{\mathrm{vis}})\, r_{m'}\}},$$
where $\kappa > 0$ is a temperature and $r_m \in (0, 1)$ are modality reliability scalars.
For CMA completion, given target $F_a$ (Query) and donor $F_b$ (Key/Value), after geometric alignment (projection or learnable deformation $\Delta x$):
$$A_{a \to b} = \mathrm{softmax}\!\left(\frac{Q(F_a)\, K(F_b)^{\top} + B_{\mathrm{geo}}}{\sqrt{d}}\right),$$
$$\hat{F}_a = A_{a \to b}\, V(F_b),$$
with geometric bias $B_{\mathrm{geo}}$. Occlusion-gated mixing is
$$F_a^{\mathrm{comp}} = F_a + M_{\mathrm{occ}} \odot (\hat{F}_a - F_a), \quad M_{\mathrm{occ}} = 1 - A_{\mathrm{vis}}.$$
With synthetic occlusion $T_{\mathrm{occ}}$ (Cutout/Copy-Paste, ray-drop), consistency and teacher supervision are enforced: the teacher on clean samples provides $y^* = (p^*, b^*)$, while the student on occluded samples outputs $\hat{y} = (\hat{p}, \hat{b})$, with
$$\mathcal{L}_{\mathrm{rec}} = \mathrm{KL}\left(\hat{p} \,\|\, p^*\right) + \lambda_{\mathrm{box}} \left\| \hat{b} - b^* \right\|_1.$$
We additionally use an instance-wise InfoNCE:
$$\mathcal{L}_{\mathrm{align}} = -\log \frac{\exp\left(\langle z_a, z_b \rangle / \tau\right)}{\sum_{b'} \exp\left(\langle z_a, z_{b'} \rangle / \tau\right)}.$$
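A small sketch of the instance-wise InfoNCE term: one anchor embedding is contrasted against its same-instance positive from another modality and a set of other-instance negatives. Embeddings are assumed $\ell_2$-normalized, as is typical; the temperature value is illustrative.

```python
import math

def info_nce(z_a, z_pos, negatives, tau=0.1):
    """L_align = −log exp(⟨z_a,z_b⟩/τ) / Σ_{b'} exp(⟨z_a,z_{b'}⟩/τ).

    z_a: anchor embedding (one modality); z_pos: same-instance embedding
    from the other modality; negatives: other-instance embeddings.
    """
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    # Positive first, then negatives; stabilize the softmax by max-shifting.
    logits = [dot(z_a, z_pos) / tau] + [dot(z_a, z) / tau for z in negatives]
    top = max(logits)
    exps = [math.exp(l - top) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

The loss is near zero when the cross-modal pair is far more similar than any negative, and grows quickly once a negative outranks the positive.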
Boundary smoothness is enforced as
$$\mathcal{L}_{\mathrm{smooth}} = \left\| \nabla F_a^{\mathrm{comp}} \right\|_1.$$
The Subtask-B objective is
$$\mathcal{L}_B = \lambda_{\mathrm{rec}} \mathcal{L}_{\mathrm{rec}} + \lambda_{\mathrm{align}} \mathcal{L}_{\mathrm{align}} + \lambda_{\mathrm{smooth}} \mathcal{L}_{\mathrm{smooth}}.$$

Appendix A.3. Subtask C: Modality-Aware Detection (Dynamic Fusion and Decoding)

Dynamic fusion weights modality features by visibility-guided reliability:
$$\alpha_m = \frac{\exp\{\eta (1 - A_{\mathrm{vis}})\, r_m\}}{\sum_{m'} \exp\{\eta (1 - A_{\mathrm{vis}})\, r_{m'}\}},$$
$$F_{\mathrm{fused}} = \sum_m \alpha_m\, F_m^{[\mathrm{comp}]},$$
emphasizing robust modalities (LiDAR/IR) under heavy occlusion.
For decoding, the detector predicts category $c$ and box $b$, while the occlusion state $O$ from Subtask A is used for occlusion-aware fusion and post-processing. Classification uses Focal or class-balanced cross-entropy:
$$\mathcal{L}_{\mathrm{cls}} = -\sum_{c \in \mathcal{C}} \alpha_c (1 - p_c)^{\gamma}\, [c = c^*] \log p_c.$$
Regression uses IoU/distance-IoU (DIoU) + $\ell_1$ with a periodic angle loss:
$$\mathcal{L}_{\mathrm{box}} = \mathcal{L}_{\mathrm{IoU}}(b, b^*) + \mu_{\Delta} \left\| \Delta d \right\|_1 + \mu_{\theta}\, \ell_{\mathrm{angle}}(\theta, \theta^*), \quad \ell_{\mathrm{angle}} = \left| \sin(\theta - \theta^*) \right| + \left| \cos(\theta - \theta^*) \right|.$$
The detection objective is
$$\mathcal{L}_{\mathrm{det}} = \lambda_{\mathrm{cls}} \mathcal{L}_{\mathrm{cls}} + \lambda_{\mathrm{box}} \mathcal{L}_{\mathrm{box}}.$$

Appendix B. Additional Training and Post-Processing Details

This appendix provides the explicit formulations used for optional robust multi-task balancing, temporal consistency, and occlusion-aware post-processing.

Appendix B.1. Robust Multi-Task Balancing (Optional)

We employ homoscedastic uncertainty weighting to balance task gradients:
$$\mathcal{L}_{\mathrm{total}} = \sum_k \left( \frac{1}{2 \sigma_k^2}\, \mathcal{L}_k + \log \sigma_k \right), \quad k \in \{\mathrm{det}, A, B\},$$
where $\sigma_k$ are learnable scalars. In practice, this complements the occlusion-level amplification in Equations (7) and (8). We also use GradNorm/PCGrad to equalize per-task gradient norms within a mini-batch, preventing domination (e.g., by $\mathcal{L}_{\mathrm{cls}}$).
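The weighting above can be sketched directly; parameterizing by $\log \sigma_k$ (a standard trick, assumed here) keeps $\sigma_k > 0$ during optimization without constraints.

```python
import math

def homoscedastic_total(losses, log_sigmas):
    """L_total = Σ_k [ L_k / (2σ_k²) + log σ_k ] for k ∈ {det, A, B}.

    losses: per-task loss values; log_sigmas: learnable log σ_k scalars.
    Raising σ_k down-weights a noisy task but pays a log σ_k penalty,
    preventing the trivial solution σ_k → ∞.
    """
    total = 0.0
    for lk, ls in zip(losses, log_sigmas):
        sigma = math.exp(ls)
        total += lk / (2.0 * sigma * sigma) + ls
    return total
```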

Appendix B.2. Spatiotemporal Consistency (Optional)

Given poses $T(\cdot)$, a source point $p_s$ at time $s$ is mapped to the reference frame $t$ by
$$\tilde{p}_{s \to t} = T(t)\, T(s)^{-1}\, p_s.$$
Within a target box region $S$, we use a Chamfer-like loss:
$$\mathcal{L}_{\text{temp-pc}} = \sum_{p \in S_t} \min_{q \in \tilde{S}_{s \to t}} \left\| p - q \right\|_1 + \sum_{q \in \tilde{S}_{s \to t}} \min_{p \in S_t} \left\| q - p \right\|_1,$$
where $S_t$ denotes the set of points inside the target box at frame $t$; $S_s$ is the in-box set at frame $s$; $\tilde{S}_{s \to t} = \{ T(t)\, T(s)^{-1} p_s \mid p_s \in S_s \}$; and $\| \cdot \|_1$ denotes the $\ell_1$ norm.
For feature-level consistency (image/BEV), let $\mathcal{W}_{s \to t}(\cdot)$ be a cross-frame warp (by geometry/pose):
$$\mathcal{L}_{\text{temp-feat}} = \sum_l \left\| F_t^{(l)} - \mathcal{W}_{s \to t}\left(F_s^{(l)}\right) \right\|_1.$$
This term can be merged into $\mathcal{L}_B$ with weight $\lambda_{\mathrm{temp}}$.

Appendix B.3. Occlusion-Aware Post-Processing (Optional)

In occlusion-aware Soft-NMS, given candidate $i$ and another $j \neq i$, we apply Gaussian score decay with an occlusion-adaptive width:
$$s_j \leftarrow s_j \cdot \exp\!\left( -\frac{\mathrm{IoU}(b_i, b_j)^2}{\sigma^2(O_j)} \right), \quad \sigma(O) = \sigma_0 + \Delta\sigma\, [O = 2].$$
Here, $\sigma(O)$ enlarges the decay width for fully occluded hypotheses and thus softens suppression.
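The occlusion-adaptive decay can be sketched as a one-line update per candidate pair. The `sigma0`/`delta_sigma` values below are illustrative placeholders, not the paper's tuned hyperparameters.

```python
import math

def soft_nms_decay(score, iou, occ_level, sigma0=0.3, delta_sigma=0.3):
    """s_j ← s_j · exp(−IoU(b_i,b_j)² / σ²(O_j)), σ(O) = σ0 + Δσ·[O = 2].

    A wider σ for O = 2 decays overlapping heavily occluded boxes more
    gently, so hard-but-correct hypotheses survive suppression.
    """
    sigma = sigma0 + (delta_sigma if occ_level == 2 else 0.0)
    return score * math.exp(-(iou ** 2) / (sigma ** 2))
```

Non-overlapping boxes (IoU = 0) are untouched, and for the same overlap an $O = 2$ box retains strictly more of its score than a visible one.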
For occlusion-aware thresholds and temperature scaling, classification logits are rescaled by an occlusion-dependent temperature:
$$s_{\mathrm{cal}} = \mathrm{sigm}\left( z / \tau_{\mathrm{cal}}(O) \right), \quad \tau_{\mathrm{cal}}(O) = \tau_0 + \Delta\tau \cdot [O = 2],$$
where $\mathrm{sigm}(x) = 1 / (1 + e^{-x})$ and $z$ is the classification logit. The NMS IoU threshold is lowered under full occlusion:
$$\tau_{\mathrm{nms}}(O) = \tau_0 - \Delta_{\mathrm{nms}} \cdot [O = 2].$$
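Both occlusion-dependent rules are simple enough to sketch directly; the `tau0`/`delta` defaults are illustrative, not the paper's values.

```python
import math

def calibrated_score(z, occ_level, tau0=1.0, delta_tau=0.5):
    """s_cal = sigm(z / τ_cal(O)), τ_cal(O) = τ0 + Δτ·[O = 2].

    A larger temperature for O = 2 pulls confident logits toward 0.5,
    reflecting the weaker evidence behind heavily occluded detections.
    """
    tau = tau0 + (delta_tau if occ_level == 2 else 0.0)
    return 1.0 / (1.0 + math.exp(-z / tau))

def nms_iou_threshold(occ_level, tau0=0.7, delta_nms=0.2):
    """τ_nms(O) = τ0 − Δ_nms·[O = 2]: lower IoU threshold under full occlusion."""
    return tau0 - (delta_nms if occ_level == 2 else 0.0)
```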
Ties can be broken in favor of higher-O boxes.
For uncertainty-aware NMS, if the regressor outputs a covariance $\Sigma$, we use the Mahalanobis distance between centers $c_i, c_j$:
$$D_M(b_i, b_j) = \sqrt{ (c_i - c_j)^{\top}\, \Sigma^{-1}\, (c_i - c_j) },$$
and threshold or decay by $D_M$; $\Sigma$ can be enlarged for higher $O$ to reflect occlusion-induced uncertainty.
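For 2-D BEV centers the Mahalanobis distance admits a closed-form sketch; the helper below assumes a symmetric, invertible $2 \times 2$ covariance and is illustrative rather than the paper's implementation.

```python
def mahalanobis_center_distance(ci, cj, cov):
    """D_M = sqrt((c_i − c_j)^T Σ⁻¹ (c_i − c_j)) for 2-D centers.

    cov: symmetric 2×2 covariance [[a, b], [b, d]], inverted in closed form.
    Enlarging Σ for higher O shrinks D_M, softening suppression as intended.
    """
    dx, dy = ci[0] - cj[0], ci[1] - cj[1]
    a, b = cov[0]
    d = cov[1][1]
    det = a * d - b * b
    # Σ⁻¹ = (1/det) · [[d, −b], [−b, a]], expanded into the quadratic form.
    dsq = (d * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det
    return dsq ** 0.5
```

With $\Sigma = I$ this reduces to Euclidean distance; inflating $\Sigma$ (e.g., for $O = 2$) shrinks $D_M$, so uncertain occluded boxes are merged or suppressed less aggressively.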

References

  1. Feng, D.; Haase-Schütz, C.; Rosenbaum, L.; Hertlein, H.; Gläeser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1341–1360. [Google Scholar] [CrossRef]
  2. Li, Y.; Ibanez-Guzman, J. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  3. Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Dai, J. BEVFormer: Learning bird’s-eye-view representation from LiDAR-Camera via spatiotemporal transformers. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 47, 2020–2036. [Google Scholar] [CrossRef] [PubMed]
  4. Liu, Z.; Tang, H.; Amini, A.; Yang, X.; Mao, H.; Rus, D.L.; Han, S. BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 2774–2781. [Google Scholar] [CrossRef]
  5. Zhang, S.; Benenson, R.; Schiele, B. Citypersons: A diverse dataset for pedestrian detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3213–3221. [Google Scholar]
  6. Zhang, S.; Wen, L.; Bian, X.; Lei, Z.; Li, S.Z. Occlusion-aware R-CNN: Detecting pedestrians in a crowd. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 637–653. [Google Scholar]
  7. Wang, X.; Xiao, T.; Jiang, Y.; Shao, S.; Sun, J.; Shen, C. Repulsion loss: Detecting pedestrians in a crowd. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 7774–7783. [Google Scholar]
  8. Li, T.; Xiong, X.; Zhang, Y.; Fan, X.; Zhang, Y.; Huang, H.; Hu, D.; He, M.; Liu, Z. RE-YOLOv5: Enhancing Occluded Road Object Detection via Visual Receptive Field Improvements. Sensors 2025, 25, 2518. [Google Scholar] [CrossRef] [PubMed]
  9. Liu, G.; Xie, X.; Yu, Q. Monocular 3D object detection with thermodynamic loss and decoupled instance depth. Connect. Sci. 2024, 36, 2316022. [Google Scholar] [CrossRef]
  10. Yan, Y.; Mao, Y.; Li, B. SECOND: Sparsely embedded convolutional detection. Sensors 2018, 18, 3337. [Google Scholar] [CrossRef] [PubMed]
  11. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast Encoders for Object Detection From Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705. [Google Scholar]
  12. Yin, T.; Zhou, X.; Krahenbuhl, P. Center-based 3D object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 19–25 June 2021; pp. 11784–11793. [Google Scholar]
  13. Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 14–19 June 2020; pp. 10529–10538. [Google Scholar]
  14. Hu, P.; Ziglar, J.; Held, D.; Ramanan, D. What you see is what you get: Exploiting visibility for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 14–19 June 2020; pp. 11001–11009. [Google Scholar]
  15. Sha, H.; Gao, Q.; Zeng, H.; Li, K.; Li, W.; Zhang, X.; Wang, X. SPBA-Net point cloud object detection with sparse attention and box aligning. Sci. Rep. 2024, 14, 28420. [Google Scholar] [CrossRef] [PubMed]
  16. Gao, Y.; Wang, P.; Li, X.; Sun, M.; Di, R.; Li, L.; Hong, W. MonoDFNet: Monocular 3D Object Detection with Depth Fusion and Adaptive Optimization. Sensors 2025, 25, 760. [Google Scholar] [CrossRef] [PubMed]
  17. Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. PointPainting: Sequential fusion for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 14–19 June 2020; pp. 4604–4612. [Google Scholar]
  18. Sindagi, V.A.; Zhou, Y.; Tuzel, O. MVX-Net: Multimodal VoxelNet for 3D object detection. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7276–7282. [Google Scholar]
  19. Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3D proposal generation and object detection from view aggregation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar]
  20. Yoo, J.H.; Kim, Y.; Kim, J.; Choi, J.W. 3D-CVF: Generating joint camera and LiDAR features using cross-view spatial feature fusion for 3D object detection. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2020; pp. 720–736. [Google Scholar]
  21. Li, Y.; Chen, Y.; Qi, X.; Li, Z.; Sun, J.; Jia, J. Unifying voxel-based representation with transformer for 3D object detection. Adv. Neural Inf. Process. Syst. 2022, 35, 18442–18455. [Google Scholar]
  22. Wang, H.; Tang, H.; Shi, S.; Li, A.; Li, Z.; Schiele, B.; Wang, L. UniTR: A unified and efficient multi-modal transformer for bird’s-eye-view representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 6792–6802. [Google Scholar]
  23. Zhao, Y.; Gong, Z.; Zheng, P.; Zhu, H.; Wu, S. SimpleBEV: Improved LiDAR-camera fusion architecture for 3D object detection. arXiv 2024, arXiv:2411.05292. [Google Scholar]
  24. Wang, M.; Wang, H.; Li, Y.; Chen, L.; Cai, Y.; Shao, Z. MSAFusion: Object Detection Based on Multi-Sensor Adaptive Fusion under BEV. IEEE Trans. Instrum. Meas. 2025; Early Access. [Google Scholar] [CrossRef]
  25. Wolters, P.; Gilg, J.; Teepe, T.; Herzog, F.; Fent, F.; Rigoll, G. SpaRC: Sparse Radar-Camera Fusion for 3D Object Detection. arXiv 2024, arXiv:2411.19860. [Google Scholar]
26. Palladin, E.; Dietze, R.; Narayanan, P.; Bijelic, M.; Heide, F. SAMFusion: Sensor-adaptive multimodal fusion for 3D object detection in adverse weather. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2024; pp. 484–503. [Google Scholar]
  27. Pang, S.; Morris, D.; Radha, H. CLOCs: Camera-LiDAR object candidates fusion for 3D object detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 10386–10393. [Google Scholar]
  28. Bai, X.; Hu, Z.; Zhu, X.; Huang, Q.; Chen, Y.; Fu, H.; Tai, C.L. TransFusion: Robust LiDAR-Camera fusion for 3D object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 1090–1099. [Google Scholar]
  29. Shi, S.; Wang, X.; Li, H. PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 770–779. [Google Scholar]
  30. Lu, Y.; Ma, X.; Yang, L.; Zhang, T.; Liu, Y.; Chu, Q.; Ouyang, W. Geometry uncertainty projection network for monocular 3D object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual Event, 11–17 October 2021; pp. 3111–3121. [Google Scholar]
  31. Ding, F.; Wen, X.; Zhu, Y.; Li, Y.; Lu, C.X. RadarOcc: Robust 3D occupancy prediction with 4D imaging radar. Adv. Neural Inf. Process. Syst. 2024, 37, 101589–101617. [Google Scholar]
32. Ye, B.; Qin, M.; Zhang, S.; Gong, M.; Zhu, S.; Zhao, H.; Zhao, H. GS-Occ3D: Scaling vision-only occupancy reconstruction with Gaussian splatting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025; pp. 25925–25937. [Google Scholar]
  33. Ouyang, W.; Xu, Z.; Shen, B.; Wang, J.; Xu, Y. LinkOcc: 3D semantic occupancy prediction with temporal association. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 1374–1384. [Google Scholar] [CrossRef]
  34. Kumar, S.; Truong, H.; Sharma, S.; Sistu, G.; Scanlan, T.; Grua, E.; Eising, C. Minimizing Occlusion Effect on Multi-View Camera Perception in BEV with Multi-Sensor Fusion. arXiv 2025, arXiv:2501.05997. [Google Scholar] [CrossRef]
  35. Huang, T.; Liu, Z.; Chen, X.; Bai, X. EPNet: Enhancing point features with image semantics for 3D object detection. In European Conference on Computer Vision; Springer International Publishing: Cham, Switzerland, 2020; pp. 35–52. [Google Scholar]
36. Wang, C.; Ma, C.; Zhu, M.; Yang, X. PointAugmenting: Cross-modal augmentation for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 19–25 June 2021; pp. 11794–11803. [Google Scholar]
  37. Yang, Z.; Chen, J.; Miao, Z.; Li, W.; Zhu, X.; Zhang, L. DeepInteraction: 3D object detection via modality interaction. Adv. Neural Inf. Process. Syst. 2022, 35, 1992–2005. [Google Scholar]
  38. Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; Zhang, X. Cross-modal transformer: Towards fast and robust 3D object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 18268–18278. [Google Scholar]
  39. Zhou, F.; Chen, H. Cross-modal translation and alignment for survival analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 21485–21494. [Google Scholar]
  40. Zhang, H.; Liang, L.; Zeng, P.; Song, X.; Wang, Z. SparseLIF: High-performance sparse LiDAR-camera fusion for 3D object detection. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2024; pp. 109–128. [Google Scholar]
  41. Hu, C.; Zheng, H.; Li, K.; Xu, J.; Mao, W.; Luo, M.; Wang, L.; Chen, M.; Peng, Q.; Liu, K.; et al. FusionFormer: A multi-sensory fusion in bird’s-eye-view and temporal consistent transformer for 3D object detection. arXiv 2023, arXiv:2309.05257. [Google Scholar]
  42. Wang, J.; Li, F.; An, Y.; Zhang, X.; Sun, H. Toward robust LiDAR-camera fusion in BEV space via mutual deformable attention and temporal aggregation. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 5753–5764. [Google Scholar] [CrossRef]
  43. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
  44. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 14–19 June 2020; pp. 11621–11631. [Google Scholar]
  45. Martin-Martin, R.; Patel, M.; Rezatofighi, H.; Shenoi, A.; Gwak, J.; Frankel, E.; Savarese, S. JRDB: A dataset and benchmark of egocentric robot visual perception of humans in built environments. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 6748–6765. [Google Scholar] [CrossRef] [PubMed]
  46. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 14–19 June 2020; pp. 11682–11692. [Google Scholar]
Figure 1. Conceptual pipeline of FAOD. Multimodal features are first extracted by modality-specific encoders. The visibility estimation module outputs an instance-level occlusion score O and a region-level visibility map A_vis, which act as control signals to (i) gate cross-modal feature completion and (ii) modulate occlusion-aware fusion (modality weighting) before the final detection head.
Figure 2. Overview of FAOD. From left to right: (i) Modality-specific encoding extracts features from the RGB, LiDAR and IR streams, producing F_RGB, F_LiDAR, F_IR (shown as F_Resnet, F_LiDAR, F_IR in the diagram). (ii) Occlusion-aware representation estimates a multi-granular visibility signal: an instance-level occlusion score and a region-level visibility map A_vis, supervised by BCE + TV and class-balanced losses. (iii) Cross-modal attentive completion performs geometry-aware CMA and occlusion-gated mixing. Queries come from a target modality and keys/values from a donor modality; a geometric bias B_geo preserves cross-view consistency. The mixing mask is M_occ = 1 − A_vis, optionally modulated by the modality reliability r_m. (iv) Multi-task detection fuses the completed features with dynamic weights α_m to obtain F_final, and predicts the category c, the 3D box b = (x, y, z, w, h, l, θ), and the occlusion level O. Training uses L_det together with L_vis, L_rank, L_cons, L_rec, L_align, L_smooth; inference applies occlusion-aware calibration and Soft-NMS.
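The occlusion-gated mixing described in the Figure 2 caption — geometry-aware cross-modal attention whose output is blended with the target features under the mask M_occ = 1 − A_vis — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `cross_modal_complete`, the flat (N, d) feature layout, and the toy shapes are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_complete(f_tgt, f_src, b_geo, a_vis):
    """Occlusion-gated cross-modal completion (illustrative sketch).

    f_tgt : (N, d) target-modality features (queries)
    f_src : (M, d) donor-modality features (keys/values)
    b_geo : (N, M) geometric bias added to the attention logits
    a_vis : (N,)   region-level visibility in [0, 1]
    """
    d = f_tgt.shape[-1]
    # geometry-aware cross-modal attention: bias preserves cross-view consistency
    logits = f_tgt @ f_src.T / np.sqrt(d) + b_geo
    attended = softmax(logits, axis=-1) @ f_src
    # mixing mask M_occ = 1 - A_vis: occluded regions borrow donor features,
    # fully visible regions keep their own features untouched
    m_occ = (1.0 - a_vis)[:, None]
    return (1.0 - m_occ) * f_tgt + m_occ * attended
```

With a_vis = 1 the output reduces to the target features; with a_vis = 0 it is entirely the attended donor reconstruction, matching the gating behaviour the caption describes.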
Figure 3. nuScenes validation: occlusion-stratified mAP (OL-mAP).
Figure 4. Scene 1.
Figure 5. Scene 2.
Figure 6. Scene 3.
Table 1. Latency breakdown (nuScenes, single GPU, FP16, batch = 1 ).
| Component | Latency (ms) | Share (%) |
|---|---|---|
| Image backbone + FPN | 38.4 | 41.0 |
| LiDAR voxel/BEV | 21.7 | 23.2 |
| IR/Radar branch | 4.8 | 5.1 |
| CMA (with B_geo) | 17.6 | 18.8 |
| Head + Occl-SoftNMS | 12.2 | 13.0 |
| Total (FAOD-L) | 94.7 | 100 |
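A per-stage breakdown like Table 1 can be produced with a simple timing harness over a sequential pipeline. The sketch below uses wall-clock timing only; the paper's numbers are single-GPU FP16 measurements, which would additionally require device synchronisation before each timestamp. `profile_pipeline` and the (name, fn) stage structure are illustrative assumptions, not the authors' tooling.

```python
import time

def profile_pipeline(stages, inputs, warmup=2, iters=10):
    """Mean per-stage latency (ms) and percentage share for a sequential pipeline.

    stages : list of (name, fn); each fn consumes the previous stage's output.
    """
    # untimed warm-up passes so one-off initialisation cost is excluded
    for _ in range(warmup):
        x = inputs
        for _, fn in stages:
            x = fn(x)
    totals = {name: 0.0 for name, _ in stages}
    for _ in range(iters):
        x = inputs
        for name, fn in stages:
            t0 = time.perf_counter()
            x = fn(x)
            totals[name] += (time.perf_counter() - t0) * 1000.0  # ms
    per_stage = {k: v / iters for k, v in totals.items()}
    total = max(sum(per_stage.values()), 1e-12)  # guard against a zero total
    return {k: (v, 100.0 * v / total) for k, v in per_stage.items()}
```

The shares by construction sum to 100%, as in the rightmost column of Table 1.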
Table 2. Key resource metrics (same protocol).
| Metric | Value |
|---|---|
| FPS (= 1000 / total latency) | 10.6 |
| Params (M) | 79.2 |
| FLOPs (G) | 321.5 |
| Peak memory (GB) | 9.3 |
Table 3. Pareto trade-off across model scales (same protocol).
| Config | Img Short | BEV Step | Sweeps | mAP | NDS | OL-mAP (O = 2) | Lat. (ms) | FPS |
|---|---|---|---|---|---|---|---|---|
| FAOD-L | 960 | 0.20 | 10 | 66.0 | 68.1 | 52.8 | 94.7 | 10.6 |
| FAOD-M | 800 | 0.25 | 5 | 64.3 | 66.7 | 50.0 | 58.1 | 17.2 |
| FAOD-S | 640 | 0.30 | 3 | 62.5 | 64.9 | 47.2 | 41.0 | 24.4 |
Table 4. Memory and throughput with/without engine optimizations.
| Variant | Peak Mem (GB) | TRT/ONNX | Throughput (FPS) |
|---|---|---|---|
| FAOD-L (native) | 9.3 | No | 10.6 |
| FAOD-L + TensorRT | 7.8 | Yes | 12.9 |
| FAOD-M (native) | 6.1 | No | 17.2 |
| FAOD-S (native) | 4.7 | No | 24.4 |
Table 5. Effect of calibration and occlusion-aware NMS (lower is better for ECE/Brier; higher is better for mAP/OL-mAP).
| Setting | ECE ↓ | Brier ↓ | mAP ↑ | OL-mAP (O = 2) ↑ |
|---|---|---|---|---|
| No temperature scaling and standard NMS | 4.6% | 0.158 | 65.3 | 50.9 |
| + Temperature scaling (τ constant) | 2.9% | 0.141 | 65.8 | 51.6 |
| + Occl-SoftNMS (σ(O) adaptive) | 2.8% | 0.140 | 66.0 | 52.8 |
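One plausible reading of the two calibration steps in Table 5 — a constant temperature τ applied to the raw detection logits, followed by a Soft-NMS whose Gaussian decay width σ(O) grows with the occlusion level so that heavily occluded candidates are suppressed less aggressively — can be sketched as below. The 1-D interval IoU (a stand-in for 3D box IoU), the linear σ(O) form, and the values of `tau`, `sigma0`, and `sigma_gain` are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def iou_1d(a, b):
    """IoU of 1-D intervals (toy stand-in for 3D box IoU)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def occl_soft_nms(boxes, scores, occ, tau=1.5, sigma0=0.5, sigma_gain=0.5):
    """Temperature-scaled, occlusion-adaptive Soft-NMS (illustrative sketch).

    occ holds per-box occlusion levels O; sigma(O) = sigma0 + sigma_gain * O
    widens the Gaussian decay for occluded candidates.
    """
    # temperature scaling: treat scores as pre-sigmoid logits
    s = 1.0 / (1.0 + np.exp(-np.asarray(scores, float) / tau))
    order = list(np.argsort(-s))
    keep, out_scores = [], s.copy()
    while order:
        i = order.pop(0)
        keep.append(i)
        for j in order:
            ov = iou_1d(boxes[i], boxes[j])
            sig = sigma0 + sigma_gain * occ[j]
            out_scores[j] *= np.exp(-(ov ** 2) / sig)  # Gaussian score decay
        order.sort(key=lambda j: -out_scores[j])       # re-rank survivors
    return keep, out_scores
```

Under this sketch, two overlapping candidates with the same score end up with a higher retained score when the suppressed one is marked occluded, which is the qualitative effect Table 5 attributes to the adaptive σ(O).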
Table 6. Robustness to modality dropout at inference (nuScenes val).
| Dropped Modality | ΔmAP | ΔOL-mAP (O = 2) |
|---|---|---|
| IR/Radar off | −0.6 | −1.1 |
| RGB off | −3.9 | −6.4 |
| LiDAR off | −12.7 | −18.9 |
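The graceful degradation in Table 6 presupposes a fusion rule that renormalises its modality weights α_m when a sensor stream is absent, rather than feeding zeros into fixed weights. A minimal sketch, assuming softmax weights over per-modality reliability scores r_m (the function name and dict layout are hypothetical):

```python
import numpy as np

def fuse_with_dropout(features, reliability, available):
    """Reliability-weighted fusion that renormalises over available modalities.

    features    : dict modality -> (d,) feature vector
    reliability : dict modality -> scalar reliability score r_m
    available   : set of modalities present at inference time
    """
    mods = [m for m in features if m in available]
    if not mods:
        raise ValueError("at least one modality must be available")
    logits = np.array([reliability[m] for m in mods])
    # softmax over surviving modalities only, so the weights always sum to 1
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()
    return sum(a * features[m] for a, m in zip(alpha, mods))
```

Dropping a modality then redistributes its weight across the remaining streams, which is consistent with the small ΔmAP when the auxiliary IR/Radar branch is switched off versus the large drop without LiDAR.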
Table 7. Ablations on efficiency and occlusion accuracy (deltas vs. full FAOD).
| Variant | ΔmAP | ΔOL-mAP (O = 2) | ΔLatency (ms) |
|---|---|---|---|
| w/o CMA | −2.8 | −6.9 | −13.7 |
| w/o B_geo | −1.3 | −3.2 | −2.1 |
| w/o r_m/gating | −1.7 | −4.1 | −0.8 |
Share and Cite

Li, Z.; Singh, B. Robust Occluded Object Detection in Multimodal Autonomous Driving: A Fusion-Aware Learning Framework. Electronics 2026, 15, 245. https://doi.org/10.3390/electronics15010245