MDSFE: Multiscale Deep Stacking Fusion Enhancer Network for Visual Data Enhancement

Abstract

With the rapid development of visual sensors and artificial intelligence (AI), the volume of video and image data has increased dramatically, especially in the era of AI-enabled intelligent transportation. Under low-light imaging conditions, however, a camera captures only weak scene-reflected light, so the visual data inevitably suffers from noise, low contrast, and poor brightness. This degradation hinders vision-based traffic situational awareness, traffic safety management, and automatic/autonomous vehicles. To guarantee high-quality visual data, we propose the multiscale deep stacking fusion enhancer (termed MDSFE) to enhance low-light images. In particular, MDSFE consists of four components: a coarse extraction module (C-EM), a coarse attention fusion module (C-AFM), a multiscale dense enhancement module (M-DEM), and a fine encoder–decoder fusion module (F-EDFM). The combination of these modules strengthens the network's feature mapping and representation abilities. Experimental results on both synthetic and real-world scenes show that the proposed method delivers superior enhancement under different imaging conditions and also improves detection precision in low-light environments.
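The abstract describes a four-stage pipeline: C-EM, C-AFM, M-DEM, and F-EDFM applied in sequence to a low-light input. The sketch below illustrates that composition in PyTorch; the module internals (channel width, attention form, multiscale structure, and output activation) are not specified in the abstract, so the `ConvBlock` stand-ins, the `ch=32` width, and the sigmoid output are assumptions for illustration only, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Placeholder conv-ReLU block standing in for each module's internals."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class MDSFESketch(nn.Module):
    """Hypothetical sequential composition of C-EM -> C-AFM -> M-DEM -> F-EDFM."""
    def __init__(self, ch=32):  # channel width is an assumption
        super().__init__()
        self.c_em = ConvBlock(3, ch)     # coarse extraction module
        self.c_afm = ConvBlock(ch, ch)   # coarse attention fusion module
        self.m_dem = ConvBlock(ch, ch)   # multiscale dense enhancement module
        self.f_edfm = ConvBlock(ch, 3)   # fine encoder-decoder fusion module

    def forward(self, low_light):
        feat = self.c_em(low_light)
        feat = self.c_afm(feat)
        feat = self.m_dem(feat)
        # Sigmoid keeps the enhanced image in [0, 1]; also an assumption.
        return torch.sigmoid(self.f_edfm(feat))

# Usage: enhance a dummy low-light RGB image.
model = MDSFESketch()
enhanced = model(torch.rand(1, 3, 256, 256))
print(enhanced.shape)  # torch.Size([1, 3, 256, 256])
```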

Publication
IEEE Transactions on Instrumentation and Measurement

Jingxiang Qu
PhD

My research interests include equivariant learning, multimodal/graph learning, and their applications to real-world problems.