Multidomain Object Detection Framework Using Feature Domain Knowledge Distillation

This framework improves object detection in low-light images using unsupervised knowledge distillation. It transfers feature knowledge from the high-luminance domain to the low-luminance domain without adding any computational cost at test time. A novel object-level discriminator keeps the distillation focused on meaningful regions. The method outperforms traditional enhancement-plus-detection pipelines, making it well suited to surveillance, autonomous driving, and other low-visibility scenarios.

Fig. 4. Comparison between state-of-the-art object detection techniques, from left to right: (a) ground truth, (b) RetinaNet, (c) YOLO-v3, (d) SFT-Net, and (e) the proposed RMD-Net. Our proposed method shows significant improvement over these techniques.

Technology Overview
This study introduces a novel unsupervised feature domain knowledge distillation (KD) framework designed to improve object detection performance in low-luminance images. The framework combines generative adversarial networks (GANs) with a Criterion network trained on sufficient-luminance images to guide feature transfer. A key innovation is the region-based multiscale discriminator, which operates at the object level rather than on the global image context, enabling better feature extraction across luminance domains. The approach incurs no additional computational cost during testing and allows the detection and domain adaptation tasks to be learned jointly.
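
Below is a minimal training-step sketch of this idea in PyTorch. It assumes a frozen "criterion" network that extracts features from sufficient-luminance images, a student detector that returns both backbone features and detections for low-luminance inputs, and a feature-level discriminator; all module names, interfaces, and hyperparameters (e.g., lambda_adv) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    # Scores whether a backbone feature map comes from the bright (criterion)
    # or the dark (student) domain.
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, feat):
        return self.net(feat)  # raw logit; > 0 means "looks like bright-domain features"

def kd_training_step(student, criterion_net, discriminator, detection_loss,
                     dark_images, targets, bright_images,
                     opt_student, opt_disc, lambda_adv=0.1):
    # One adversarial distillation step: the discriminator separates bright-domain
    # features from dark-domain ones, while the student learns to make its
    # low-luminance features indistinguishable from the criterion features.
    with torch.no_grad():
        bright_feats = criterion_net(bright_images)   # frozen target feature domain
    dark_feats, detections = student(dark_images)     # backbone features + detection outputs

    # 1) Discriminator update: bright features -> 1, dark features -> 0.
    d_real = discriminator(bright_feats)
    d_fake = discriminator(dark_feats.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # 2) Student update: detection loss plus an adversarial term that pulls
    #    dark-domain features toward the bright-domain feature distribution.
    loss_adv = F.binary_cross_entropy_with_logits(
        discriminator(dark_feats), torch.ones_like(d_fake))
    loss_student = detection_loss(detections, targets) + lambda_adv * loss_adv
    opt_student.zero_grad(); loss_student.backward(); opt_student.step()
    return loss_d.item(), loss_student.item()

At test time only the student detector runs, so the criterion network and discriminator add no inference cost, which matches the "no additional computational cost during testing" property described above.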

Applications & Benefits
This method significantly enhances object detection in poorly illuminated environments, outperforming state-of-the-art methods in both low- and high-luminance domains. It is especially applicable to surveillance, autonomous driving, and nighttime imaging, where traditional models fail because of limited visual clarity. The technique also avoids reliance on manual annotation, making it cost-effective and scalable. By focusing on object-level features, the system maintains high detection accuracy even against complex or dim backgrounds. Overall, it offers a robust and efficient solution to real-world low-light object detection challenges.

Abstract:
Object detection techniques have been widely studied, utilized in various works, and have exhibited robust performance on images with sufficient luminance. However, these approaches typically struggle to extract valuable features from low-luminance images, which often exhibit blurriness and a dim appearance, leading to detection failures. To overcome this issue, we introduce an innovative unsupervised feature domain knowledge distillation (KD) framework. The proposed framework enhances the generalization capability of neural networks across both low- and high-luminance domains without incurring additional computational costs during testing. This improvement is made possible through the integration of generative adversarial networks and our proposed unsupervised KD process. Furthermore, we introduce a region-based multiscale discriminator designed to discern feature domain discrepancies at the object level rather than from the global context. This bolsters the joint learning process of object detection and feature domain distillation tasks. Both qualitative and quantitative assessments show that the proposed method, empowered by the region-based multiscale discriminator and the unsupervised feature domain distillation process, can effectively extract beneficial features from low-luminance images, outperforming other state-of-the-art approaches in both low- and sufficient-luminance domains.
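
An illustrative sketch of what a region-based multiscale discriminator could look like in PyTorch: instead of judging the whole feature map, it pools object-level regions (via torchvision's roi_align) from feature maps at several scales and classifies each region's domain. The class name, layer sizes, ROI resolution, and the assumption that box proposals are available per image are my own choices, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision.ops import roi_align

class RegionMultiscaleDiscriminator(nn.Module):
    def __init__(self, channels_per_scale=(256, 256, 256), roi_size=7):
        super().__init__()
        self.roi_size = roi_size
        # One small domain-classifier head per feature-pyramid scale.
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c, 128, 3, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
            )
            for c in channels_per_scale
        ])

    def forward(self, feature_maps, rois, strides):
        # feature_maps: list of (B, C, H, W) tensors from different scales.
        # rois: (K, 5) tensor of [batch_idx, x1, y1, x2, y2] boxes in image coordinates.
        # strides: downsampling factor of each feature map relative to the input image.
        logits = []
        for fmap, head, stride in zip(feature_maps, self.heads, strides):
            region_feats = roi_align(fmap, rois, output_size=self.roi_size,
                                     spatial_scale=1.0 / stride, aligned=True)
            logits.append(head(region_feats))   # (K, 1) domain logits at this scale
        return torch.cat(logits, dim=1)         # (K, num_scales)

Scoring pooled object regions rather than the entire feature map keeps the adversarial signal concentrated on the areas that matter for detection, which is the stated motivation for discerning domain discrepancies at the object level.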

Multidomain Object Detection Framework Using Feature Domain Knowledge Distillation
Authors: Da-Wei Jaw, Shih-Chia Huang, Zhi-Hui Lu, Benjamin C. M. Fung, Sy-Yen Kuo
Year: 2024
Source publication: IEEE Transactions on Cybernetics, Volume 54, Issue 8, August 2024
Subfield (highest percentile): Control and Systems Engineering, 99th percentile (#4 of 375)

https://ieeexplore.ieee.org/document/10243073