Article

D-Know: Disentangled Domain Knowledge-Aided Learning for Open-Domain Continual Object Detection

1 School of Computer Science and Technology, Xi’an Jiaotong University, Xi’an 710049, China
2 Shaanxi Province Key Laboratory of Big Data Knowledge Engineering, Xi’an Jiaotong University, Xi’an 710049, China
3 School of Continuing Education, Xi’an Jiaotong University, Xi’an 710049, China
4 MIGU Video Co., Ltd., Shanghai 201206, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12723; https://doi.org/10.3390/app152312723
Submission received: 31 October 2025 / Revised: 25 November 2025 / Accepted: 28 November 2025 / Published: 1 December 2025

Abstract

Continual learning for open-vocabulary object detection aims to enable pretrained vision–language detectors to adapt to diverse specialized domains while preserving their zero-shot generalization capabilities. However, existing methods primarily focus on mitigating catastrophic forgetting, often neglecting the substantial domain shifts commonly encountered in real-world applications. To address this critical oversight, we pioneer Open-Domain Continual Object Detection (OD-COD), a new paradigm that requires detectors to continually adapt across domains with significant stylistic gaps. We propose Disentangled Domain Knowledge-Aided Learning (D-Know) to tackle this challenge. This framework explicitly disentangles domain-general priors from category-specific adaptation, managing them dynamically in a scalable domain knowledge base. Specifically, D-Know first learns domain priors in a self-supervised manner and then leverages these priors to facilitate category-specific adaptation within each domain. To rigorously evaluate this task, we construct OD-CODB, the first dedicated benchmark spanning six domains with substantial visual variations. Extensive experiments demonstrate that D-Know achieves superior performance, surpassing current state-of-the-art methods by an average of 4.2% mAP under open-domain continual settings while maintaining strong zero-shot generalization. Furthermore, experiments under the few-shot setting confirm D-Know’s superior data efficiency.
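The abstract describes a two-stage scheme: domain priors learned first in a self-supervised manner, then category-specific adapters trained per domain, with both stored in a scalable domain knowledge base. The minimal sketch below illustrates only the bookkeeping side of such a knowledge base; it is not the authors' implementation, and all names (`DomainKnowledgeBase`, `add_domain`, `lookup`) and the fallback-to-zero-shot behavior are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a domain knowledge base that
# disentangles a shared, self-supervised domain prior from category-specific
# adapter parameters, one entry per continually learned domain.

class DomainKnowledgeBase:
    """Maps each adapted domain to its prior and its per-category adapters."""

    def __init__(self):
        # domain name -> {"prior": <domain-general prior>,
        #                 "adapters": {category -> adapter params}}
        self.entries = {}

    def add_domain(self, domain, prior):
        # Stage 1: register the domain-general prior (assumed to come from
        # self-supervised learning on the new domain's images).
        self.entries[domain] = {"prior": prior, "adapters": {}}

    def add_adapter(self, domain, category, params):
        # Stage 2: category-specific adaptation, conditioned on the prior.
        self.entries[domain]["adapters"][category] = params

    def lookup(self, domain, category):
        # Retrieve (prior, adapter) for detection; an unseen domain yields
        # (None, None), i.e. fall back to the pretrained zero-shot detector,
        # which is how the abstract's retained generalization could surface.
        entry = self.entries.get(domain)
        if entry is None:
            return None, None
        return entry["prior"], entry["adapters"].get(category)


if __name__ == "__main__":
    kb = DomainKnowledgeBase()
    kb.add_domain("clipart", prior={"style": "flat-color"})
    kb.add_adapter("clipart", "dog", {"scale": 1.0})
    prior, adapter = kb.lookup("clipart", "dog")
    print(prior, adapter)          # known domain and category
    print(kb.lookup("x-ray", "gun"))  # unseen domain -> zero-shot fallback
```

Keeping priors and adapters in separate slots per domain is what makes the store "scalable" in the abstract's sense: adding a new domain appends an entry without touching previously learned ones.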
Keywords: object detection; open-vocabulary continual learning; domain prior learning

Share and Cite

MDPI and ACS Style

He, B.; Yan, C.; Kou, Y.; Wang, Y.; Lv, X.; Du, H.; Xie, Y. D-Know: Disentangled Domain Knowledge-Aided Learning for Open-Domain Continual Object Detection. Appl. Sci. 2025, 15, 12723. https://doi.org/10.3390/app152312723

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.