Detecting Line Segments in Motion-blurred Images with Events

Huai Yu, Hao Li, Wen Yang, Lei Yu, Gui-Song Xia
Wuhan University

[arXiv] [Data] [Code]


Figure 1. Performance visualization of the proposed FE-LSD and its competitors. For better visualization, both the detected and labelled line segments are drawn on the sharp image at the end of the camera exposure.

Introduction


Making line segment detectors more reliable under motion blur is one of the most important challenges for practical applications such as visual SLAM and 3D reconstruction. Existing line segment detection methods suffer severe performance degradation when motion blur occurs, failing to accurately detect and localize line segments. Event data, by contrast, exhibit minimal blur and strong edge awareness at high temporal resolution, making them complementary to images and potentially beneficial for reliable line segment recognition. To robustly detect line segments over motion blurs, we propose to leverage this complementary information of images and events. To this end, we first design a general frame-event feature fusion network to extract and fuse detailed image textures and low-latency event edges, which consists of a channel-attention-based shallow fusion module and a self-attention-based dual hourglass module. We then feed the fused feature map to two state-of-the-art wireframe parsing networks to detect line segments. In addition, we contribute a synthetic and a real dataset for line segment detection, i.e., FE-Wireframe and FE-Blurframe, with pairwise motion-blurred images and events. Extensive experiments on both datasets demonstrate the effectiveness of the proposed method. When tested on the real dataset, our method achieves 63.3% mean structural average precision (msAP) with the model pre-trained on FE-Wireframe and fine-tuned on FE-Blurframe, improving by 32.6 and 11.3 points over models trained on synthetic data only and real data only, respectively.
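The page does not specify the exact event representation fed into the fusion network, but a common choice for bringing an asynchronous event stream into a frame-based network is the voxel grid of Gehrig et al. (reference 5). A minimal NumPy sketch, assuming events arrive as (t, x, y, polarity) tuples:

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a spatio-temporal voxel grid.

    `events` is an (N, 4) array of (t, x, y, polarity) rows, time-sorted,
    with polarity in {+1, -1}. Each event's polarity is spread over the two
    adjacent temporal bins by linear interpolation, following the
    representation of Gehrig et al. (illustrative sketch, not the authors'
    exact preprocessing).
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to the continuous bin axis [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.minimum(left + 1, num_bins - 1)
    w_right = t_norm - left
    # Unbuffered accumulation so repeated (bin, y, x) indices all count.
    np.add.at(voxel, (left, y, x), p * (1.0 - w_right))
    np.add.at(voxel, (right, y, x), p * w_right)
    return voxel
```

The resulting (num_bins, H, W) tensor can be concatenated with the blurred image channels as input to a shallow fusion module.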




Figure 2. FE-LSD network structure

FE-LSD Statistics


Table 1. Quantitative result comparisons on synthetic FE-Wireframe dataset (left) and real FE-Blurframe dataset (right).

Download

1. FE-LSD [Code & pretrained models]
2. Datasets [FE-Wireframe and FE-Blurframe]

Results on real-world data with motion blur and HDR issues


Line detection demo on real videos with diverse motion blur and illumination. Left: HAWP; right: FE-HAWP. The videos are captured by a handheld event camera (DAVIS 346), whose APS frames and events are well aligned both temporally and geometrically. For better visualization, we overlay the detected line segments on the APS videos.

Line segment matching based on HAWP and FE-HAWP


Line segment matching using the official HAWP (on APS images) and FE-HAWP (on APS images and events) under diverse motion blur and illumination. Both use the same matching algorithm, which is based on endpoint optical flow and exploits the temporal continuity of the sequence.
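As a rough illustration of endpoint-based matching between consecutive frames (a simplified stand-in for the endpoint-optical-flow matcher described above, not the authors' implementation), segments can be greedily paired by mean endpoint distance:

```python
import numpy as np

def match_line_segments(lines_prev, lines_curr, max_endpoint_dist=10.0):
    """Greedily match line segments across consecutive frames.

    Each line is ((x1, y1), (x2, y2)). Exploiting sequence continuity,
    a segment is paired with the unused candidate whose endpoints (in
    either order) are closest on average; pairs whose mean endpoint
    distance exceeds `max_endpoint_dist` pixels are rejected.
    Hypothetical helper for illustration only.
    """
    a = np.asarray(lines_prev, dtype=float)  # (N, 2, 2)
    b = np.asarray(lines_curr, dtype=float)  # (M, 2, 2)
    matches, used = [], set()
    for i, seg in enumerate(a):
        best_j, best_cost = -1, np.inf
        for j, cand in enumerate(b):
            if j in used:
                continue
            # Endpoint order is arbitrary: try both and keep the cheaper.
            direct = np.linalg.norm(seg - cand, axis=1).mean()
            flipped = np.linalg.norm(seg - cand[::-1], axis=1).mean()
            cost = min(direct, flipped)
            if cost < best_cost:
                best_j, best_cost = j, cost
        if best_j >= 0 and best_cost <= max_endpoint_dist:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

In the actual pipeline, the previous frame's endpoints would first be propagated by optical flow before this nearest-endpoint assignment.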

PL-VINS demos based on HAWP and FE-HAWP


PL-VINS demos using HAWP (left) and FE-HAWP (right) detections on real-world data with motion blur and HDR issues. The system using HAWP fails immediately after launch, leaving the estimated pose unchanged.

References


  1. Holistically-Attracted Wireframe Parsing [paper]
    Nan Xue, Tianfu Wu, Song Bai, Fudong Wang, Gui-Song Xia, Liangpei Zhang, Philip H.S. Torr
    IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  2. ULSD: Unified line segment detection across pinhole, fisheye, and spherical cameras [paper]
    Hao Li, Huai Yu, Jinwang Wang, Wen Yang, Lei Yu and Sebastian Scherer
    ISPRS Journal of Photogrammetry and Remote Sensing (ISPRS P&RS), 2021.
  3. Line Segment Detection Using Transformers without Edges [paper]
    Yifan Xu, Weijian Xu, David Cheung, Zhuowen Tu
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
  4. End-to-End Wireframe Parsing [paper]
    Yichao Zhou, Haozhi Qi, Yi Ma
    Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
  5. End-to-end learning of representations for asynchronous event-based data [paper]
    Daniel Gehrig, Antonio Loquercio, Konstantinos G. Derpanis and Davide Scaramuzza
    Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
  6. ESIM: an open event camera simulator [paper]
    Henri Rebecq, Daniel Gehrig, Davide Scaramuzza
    Conference on Robot Learning (CoRL), 2018.
  7. Motion deblurring with real events [paper]
    Fang Xu, Lei Yu, Bishan Wang, Wen Yang, Gui-Song Xia, Xu Jia, Zhendong Qiao, Jian-zhuo Liu
    IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
  8. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line Features [paper]
    Qiang Fu, Jialong Wang, Hongshan Yu, Islam Ali, Feng Guo, Yijia He, Hong Zhang
    arXiv preprint arXiv:2009.07462, 2020.