Learning to Control Camera Exposure via Reinforcement Learning

Kyunghyun Lee, Ukcheol Shin, Byeong-Uk Lee

Abstract

Adjusting camera exposure under arbitrary lighting conditions is the first and essential step in ensuring the functionality of computer vision applications; a poorly adjusted camera exposure can result in severe performance degradation. Previous camera exposure control methods require multiple convergence steps and time-consuming processing under dynamic lighting conditions. In this paper, we propose a new camera exposure control framework that exploits deep reinforcement learning to achieve instant convergence and real-time processing. The proposed framework consists of four contributions: 1) a simplified training ground that simulates the diverse and dynamic lighting changes of the real world, 2) a flickering and image attribute-aware reward design, along with a lightweight state-action design for real-time processing, 3) a static-to-dynamic lighting curriculum learning method that gradually improves the agent's exposure-adjusting capability, and 4) domain randomization techniques that overcome the limitations of the training ground and achieve seamless generalization in the wild. As a result, our proposed method instantly adjusts camera exposure within five steps, with a real-time processing time of less than 1 ms. Moreover, the images acquired by our method are well-exposed and show superiority in numerous computer vision tasks, such as feature extraction and object detection.

CVPR 2024
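
To make the state-action-reward description in the abstract concrete, the sketch below shows one possible shape of such an exposure-control loop. It is an illustrative assumption, not the authors' implementation: the function names (compute_state, compute_reward, exposure_step), the histogram-based state, the mean-intensity exposure target, and the flicker penalty weight are all hypothetical choices.

```python
import numpy as np

# Hypothetical sketch of an RL-style camera exposure-control step.
# All names and constants below are illustrative assumptions, not the paper's code.

def compute_state(image, exposure):
    """Lightweight state: a 16-bin intensity histogram, the mean intensity,
    and the current exposure value (image assumed normalized to [0, 1])."""
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
    return np.concatenate([hist, [image.mean(), exposure]])

def compute_reward(image, action, prev_action,
                   target_mean=0.5, flicker_weight=0.1):
    """Reward that favors well-exposed images (mean intensity near a target)
    and penalizes abrupt exposure changes that would cause flickering."""
    exposure_term = -abs(image.mean() - target_mean)
    flicker_term = -flicker_weight * abs(action - prev_action)
    return exposure_term + flicker_term

def exposure_step(exposure, action, min_exp=1e-4, max_exp=1.0):
    """Apply a multiplicative exposure adjustment chosen by the agent."""
    return float(np.clip(exposure * (1.0 + action), min_exp, max_exp))
```

Under these assumptions, a trained policy would map compute_state(...) to an exposure action once per frame; because the state is a small fixed-size vector, a compact policy network can plausibly evaluate in well under 1 ms per step, which is consistent with the real-time claim in the abstract.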