Omar Research

3D vision with non-simultaneous exposure cameras

Non-simultaneous exposure of pixels introduces geometric distortions that make classical structure-from-motion (SfM) algorithms imprecise or even unusable, as is the case with rolling-shutter cameras or line-scan systems. Our work takes these distortions into account and/or compensates for them.
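As an illustration of the distortion involved (a generic sketch, not the method of any of the papers below), the standard rolling-shutter model exposes each image row v at its own time t(v) = t0 + v·τ, so a camera moving during readout shears straight scene lines. The function name and the constant-speed assumption here are hypothetical:

```python
def rolling_shutter_x(x0, v, speed_px_per_row):
    """Horizontal image position of a point on row v whose true
    (global-shutter) position is x0, assuming the camera pans at a
    constant apparent speed expressed in pixels per row of readout."""
    return x0 + speed_px_per_row * v

# A vertical edge at x0 = 100 is imaged as a slanted line:
column = [rolling_shutter_x(100.0, v, 0.05) for v in (0, 240, 480)]
print(column)  # [100.0, 112.0, 124.0]
```

This row-dependent shift is exactly what global-shutter SfM cannot model, and it is why constraints such as the straightness of scene lines can be exploited to undo the distortion.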

Rolling Shutter Homography and its applications (T-PAMI 2020)

Rolling Shutter SfM using analogies with non-rigid SfM (IJCV 2020)

Rolling Shutter compensation using a straightness constraint (CVPR 2018)

Calibration of panoramic line scan cameras (CVIU 2019)

3D vision using plenoptic cameras

The design of plenoptic cameras is usually complex and relies on the precise placement of optical elements. Several calibration procedures have been proposed, but they rely on simplified models, on reconstructed images to extract features, or on multiple calibrations when several types of micro-lenses are used. We propose a new calibration method and explore the use of such cameras in 3D vision.

Blur Aware Calibration of Multi-Focus Plenoptic Camera (CVPR 2020)

3D reconstruction by camera and MMW radar fusion

The combination of a camera and a depth sensor is widely used for 3D reconstruction. Most often, the depth sensor is a LiDAR, a structured-light system, or a time-of-flight camera. We show that, with fine modeling and precise calibration, a millimeter-wave (MMW) camera/radar system provides a good-quality 3D map.

Accurate calibration of a camera/radar rig (ICRA 2015)

3D reconstruction using a camera/radar sensor (Sensors 2015)

Current projects

Ptolémée: Automatic deep brain segmentation using MRI and deep learning (I am a partner of this project, led by Prof. J-J Lemaire from the Clermont-Ferrand University Hospital (CHU))

Three-dimensional tracking of monodisperse TRAjectories by Quantitative measurements (TRAQ) (I am a partner of this ANR project, coordinated by Prof. P. Biwolé)