
DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions

In this paper, we present a novel approach to improving object detection performance in inclement weather conditions.


Architecture of the proposed DSNet. The framework consists of two subnetworks: the restoration subnetwork and the detection subnetwork. The feature recovery (FR) module is activated only during the training phase, and the restored image matches the foggy input image in size.

Technology Overview
Our DSNet model, which can be trained end-to-end to jointly learn visibility enhancement, object classification, and object localization, is composed of two subnetworks: the detection subnet and the restoration subnet. The detection subnet is built on RetinaNet, and the restoration subnet is formed by attaching the feature recovery (FR) module to the last feature extraction layer of the third residual block of the detection network. Experimental results on both synthetic and natural foggy datasets show that the proposed approach achieves the best detection performance among the compared methods.
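The dual-subnet design described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear "conv" stand-ins, dimensions, and class/module names are all assumptions. What it shows is the structural idea the overview describes: shared feature extraction layers feed both the detection head (classification and localization) and the FR restoration head, and the FR branch is active only during training, producing a restored output the same size as the foggy input.

```python
import numpy as np

rng = np.random.default_rng(0)


def conv_like(x, w):
    # Stand-in for a convolutional feature-extraction layer:
    # a linear map followed by a nonlinearity on flattened features.
    return np.tanh(x @ w)


class DSNetSketch:
    """Toy dual-subnet: shared feature layers feed a detection head and a
    feature recovery (FR) restoration head that runs only during training."""

    def __init__(self, in_dim=64, feat_dim=32, num_classes=3):
        self.w_shared = rng.standard_normal((in_dim, feat_dim)) * 0.1
        self.w_cls = rng.standard_normal((feat_dim, num_classes)) * 0.1  # classification head
        self.w_box = rng.standard_normal((feat_dim, 4)) * 0.1            # localization head
        self.w_fr = rng.standard_normal((feat_dim, in_dim)) * 0.1        # FR module

    def forward(self, foggy, training=True):
        # Shared feature extraction layers (shared by both subnets).
        feats = conv_like(foggy, self.w_shared)
        out = {
            "cls": feats @ self.w_cls,  # detection subnet: class scores
            "box": feats @ self.w_box,  # detection subnet: box regression
        }
        if training:
            # FR module is activated only in the training phase; its output
            # has the same size as the foggy input.
            out["restored"] = feats @ self.w_fr
        return out


model = DSNetSketch()
foggy = rng.standard_normal((2, 64))  # two flattened "foggy image" vectors
train_out = model.forward(foggy, training=True)
infer_out = model.forward(foggy, training=False)
```

At inference time the restoration branch is skipped entirely, so detection runs at the speed of the backbone detector alone.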

Applications & Benefits
Qualitative and quantitative evaluations of the compared methods show that our DSNet is significantly more accurate than the other models while maintaining a high speed.

Abstract:
Over the past half-decade, object detection approaches based on convolutional neural networks have been widely studied and successfully applied in many computer vision applications. However, detecting objects in inclement weather conditions remains a major challenge because of poor visibility. In this article, we address the object detection problem in the presence of fog by introducing a novel dual-subnet network (DSNet) that can be trained end-to-end and jointly learn three tasks: visibility enhancement, object classification, and object localization. DSNet achieves its performance gains through two subnetworks: a detection subnet and a restoration subnet. We employ RetinaNet as the backbone network (also called the detection subnet), which is responsible for learning to classify and locate objects. The restoration subnet is designed by sharing feature extraction layers with the detection subnet and adopting a feature recovery (FR) module for visibility enhancement. Experimental results show that our DSNet achieved 50.84 percent mean average precision (mAP) on a synthetic foggy dataset that we composed and 41.91 percent mAP on a public natural foggy dataset (Foggy Driving dataset), outperforming many state-of-the-art object detectors and combinations of dehazing and detection methods while maintaining a high speed.
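Joint learning of the three tasks named in the abstract implies a combined training objective. The sketch below is an assumption about how such an objective could be composed, not the paper's actual loss: plain cross-entropy stands in for RetinaNet's focal loss, and the `restore_weight` balancing factor is hypothetical. It illustrates the general pattern of summing a classification term, a localization term, and a restoration term so that gradients from visibility enhancement flow back into the shared feature layers.

```python
import numpy as np


def joint_loss(cls_logits, cls_targets, box_pred, box_targets,
               restored, clear, restore_weight=1.0):
    """Combined objective: classification + localization + restoration."""
    # Classification: cross-entropy over softmax scores
    # (RetinaNet itself uses focal loss; plain CE keeps the sketch short).
    shifted = cls_logits - cls_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(cls_targets)), cls_targets] + 1e-12).mean()

    # Localization: smooth-L1-style loss on box regression targets.
    diff = np.abs(box_pred - box_targets)
    loc = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5).mean()

    # Restoration: MSE between the FR module's restored output and the
    # clear reference image (available only for training pairs).
    restore = ((restored - clear) ** 2).mean()

    return ce + loc + restore_weight * restore
```

Setting `restore_weight` to zero recovers a detection-only objective, which is one way to see why the restoration branch can be dropped at inference time without affecting the detector.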


DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions
Authors: Shih-Chia Huang; Trung-Hieu Le; Da-Wei Jaw
Year: 2020
Source publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subfield ranking: 1/510 (99th percentile) in Applied Mathematics (2019)

https://ieeexplore.ieee.org/document/9022905
