High-throughput event-based and frame-based convolutions for event-cameras
| Main Authors: | de Souza Rosa, Leandro; Dinale, Aiko; Bamford, Simeon; Bartolozzi, Chiara; Glover, Arren |
|---|---|
| Format: | Video Journal |
| Language: | English |
| Published: | 2022 |
| Online Access: | https://zenodo.org/record/6476382 |
Contents:
- Event cameras are promising sensors for online and real-time vision tasks, due to their high temporal resolution, low latency, and the elimination of redundant static data. Many vision algorithms use some form of spatial convolution (i.e. spatial pattern detection) as a fundamental component, but additional consideration must be taken for event cameras, as the visual signal is asynchronous and sparse. While elegant methods have been proposed for event-based convolutions, they are unsuitable for real scenarios due to their inefficient processing pipelines and the resulting low event throughput. This paper presents an efficient implementation based on decoupling the event-based computations from the computationally heavy convolution ones, increasing the maximum event processing rate by 15.92x, to over 10 million events/second, while still maintaining the event-based paradigm of asynchronous input and output. Results on public datasets with modern 640x480 event-camera recordings show that the proposed implementation achieves real-time processing with minimal impact on the convolution result, while the prior state of the art incurs a latency of over 1 second per event.
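The decoupling idea described in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the class, threshold mechanism, and periodic `process` call are illustrative assumptions): cheap O(1) per-event updates accumulate into a surface, while the heavy convolution runs separately in batches, and output events are emitted asynchronously wherever the response crosses a threshold.

```python
# Hypothetical sketch of decoupled event-based convolution.
# Per-event work stays O(1); the expensive convolution is batched.
import numpy as np

H, W = 480, 640  # 640x480 sensor resolution, as in the abstract


class DecoupledConv:
    def __init__(self, kernel, threshold=1.0):
        self.kernel = kernel          # spatial pattern to detect
        self.threshold = threshold    # output-event firing threshold (assumed)
        self.surface = np.zeros((H, W))  # accumulates events between batches

    def add_event(self, x, y, polarity):
        # Cheap, asynchronous per-event update: keeps throughput high.
        self.surface[y, x] += 1.0 if polarity else -1.0

    def process(self):
        # Heavy convolution, decoupled from event arrival (e.g. run at a
        # fixed rate or after enough events have accumulated).
        kh, kw = self.kernel.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(self.surface, ((ph, ph), (pw, pw)))
        out = np.zeros_like(self.surface)
        for i in range(kh):
            for j in range(kw):
                out += self.kernel[i, j] * padded[i:i + H, j:j + W]
        self.surface[:] = 0.0
        # Emit output events where the response exceeds the threshold,
        # preserving the asynchronous event-based output paradigm.
        ys, xs = np.nonzero(np.abs(out) > self.threshold)
        return list(zip(xs.tolist(), ys.tolist()))
```

A usage pattern would be to call `add_event` from the camera's event stream and `process` from a separate worker, so the per-event path never waits on the convolution.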