
Event-assisted Low-Light Video Object Segmentation

CVPR 2024

University of Science and Technology of China

Abstract
In the realm of video object segmentation (VOS), the challenge of operating under low-light conditions persists, resulting in notably degraded image quality and compromised accuracy when comparing query and memory frames for similarity computation. Event cameras, characterized by their high dynamic range and ability to capture motion information of objects, offer promise in enhancing object visibility and aiding VOS methods under such low-light conditions. This paper introduces a pioneering framework tailored for low-light VOS, leveraging event camera data to elevate segmentation accuracy. Our approach hinges on two pivotal components: the Adaptive Cross-Modal Fusion (ACMF) module, aimed at extracting pertinent features while fusing image and event modalities to mitigate noise interference, and the Event-Guided Memory Matching (EGMM) module, designed to rectify the issue of inaccurate matching prevalent in low-light settings. Additionally, we present the creation of a synthetic LLE-DAVIS dataset and the curation of a real-world LLE-VOS dataset, encompassing frames and events. Experimental evaluations corroborate the efficacy of our method across both datasets, affirming its effectiveness in low-light scenarios.
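To make the cross-modal fusion idea concrete, below is a minimal, illustrative PyTorch sketch of attention-gated fusion of image and event feature maps, loosely in the spirit of the ACMF module described above. The class name, layer choices, and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: attention-weighted fusion of image and event
# feature maps, so that noisy low-light image regions can be down-weighted
# in favour of event features (and vice versa). Not the paper's code.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Each gate sees both modalities and predicts per-pixel weights
        # for one of them before the final fusion convolution.
        self.image_gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.event_gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, image_feat: torch.Tensor, event_feat: torch.Tensor) -> torch.Tensor:
        # image_feat, event_feat: (B, C, H, W) features from separate encoders.
        joint = torch.cat([image_feat, event_feat], dim=1)
        weighted_image = image_feat * self.image_gate(joint)
        weighted_event = event_feat * self.event_gate(joint)
        return self.fuse(torch.cat([weighted_image, weighted_event], dim=1))

if __name__ == "__main__":
    # Dummy usage with random features standing in for encoder outputs.
    fusion = CrossModalFusion(channels=64)
    img = torch.randn(1, 64, 120, 160)
    evt = torch.randn(1, 64, 120, 160)
    print(fusion(img, evt).shape)  # torch.Size([1, 64, 120, 160])
```

The gating design reflects the general intuition stated in the abstract: when the image modality is degraded by low light, the event modality can dominate the fused representation, and conversely where events are sparse or noisy.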
Key words
Video Object Segmentation, Low-light Video, Imaging Modalities, Real-world Datasets, Segmentation Accuracy, Low Light Conditions, High Dynamic Range, Dynamic Vision Sensor, Matching Module, Time Step, Imaging Data, Image Features, Quantitative Results, Convolutional Layers, Event Data, Semantic Segmentation, Segmentation Task, Video Sequences, Two-stage Method, Graph Convolutional Network, Event Stream, Image Encoder, Binary Cross-entropy Loss, Memory Bank, One-stage Methods, Attention Map, Normal Light Conditions