# KITTI Dataset for 3D Object Detection

This page provides specific tutorials about the usage of MMDetection3D for the KITTI dataset.

**Note**: This tutorial currently covers only LiDAR-based and multi-modality 3D detection methods. Content related to monocular methods will be added later.

## Prepare dataset

You can download KITTI 3D detection data [HERE](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) and unzip all zip files. As with preparing datasets in general, it is recommended to symlink the dataset root to `$MMDETECTION3D/data`.

The folder structure should be organized as follows before our processing.

```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
```

### Create KITTI dataset

To create KITTI point cloud data, we load the raw point cloud data and generate the relevant annotations, including object labels and bounding boxes. We also generate the point cloud of each single training object in the KITTI dataset and save it as a `.bin` file in `data/kitti/kitti_gt_database`. Meanwhile, `.pkl` info files are also generated for training and validation. Subsequently, create KITTI data by running:

```bash
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```

Note that if your local disk does not have enough space for saving the converted data, you can change `--out-dir` to another directory. The folder structure after processing should be as below:

```
kitti
├── ImageSets
│   ├── test.txt
│   ├── train.txt
│   ├── trainval.txt
│   ├── val.txt
├── testing
│   ├── calib
│   ├── image_2
│   ├── velodyne
│   ├── velodyne_reduced
├── training
│   ├── calib
│   ├── image_2
│   ├── label_2
│   ├── velodyne
│   ├── velodyne_reduced
├── kitti_gt_database
│   ├── xxxxx.bin
├── kitti_infos_train.pkl
├── kitti_infos_val.pkl
├── kitti_dbinfos_train.pkl
├── kitti_infos_test.pkl
├── kitti_infos_trainval.pkl
├── kitti_infos_train_mono3d.coco.json
├── kitti_infos_trainval_mono3d.coco.json
├── kitti_infos_test_mono3d.coco.json
├── kitti_infos_val_mono3d.coco.json
```
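To sanity-check the conversion, you can load one of the generated info files with Python's `pickle` module. The minimal sketch below assumes the default output paths above and assumes the info file stores a list of per-frame dicts, matching the structure described next:

```python
import pickle

# Load the generated training info file (default --out-dir assumed).
with open('./data/kitti/kitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

print(len(infos))              # number of training frames
frame = infos[0]
print(frame['point_cloud'])    # {'num_features': 4, 'velodyne_path': ...}
print(frame['annos']['name'])  # ground-truth class names for this frame
```

Each generated file is described below.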
- `kitti_gt_database/xxxxx.bin`: point cloud data included in each 3D bounding box of the training dataset
- `kitti_infos_train.pkl`: training dataset infos, where each frame info contains the following details:
  - info['point_cloud']: {'num_features': 4, 'velodyne_path': velodyne_path}.
  - info['annos']: {
    - location: the bottom center (x, y, z) of each object in the referenced camera coordinate system (in meters), an Nx3 array
    - dimensions: height, width, length (in meters), an Nx3 array
    - rotation_y: the rotation ry around the Y-axis in camera coordinates, within [-pi, pi], a length-N array
    - name: ground truth name array, a length-N array
    - difficulty: KITTI difficulty (Easy, Moderate, Hard)
    - group_ids: used for multi-part objects
    }
  - (optional) info['calib']: {
    - P0: camera0 projection matrix after rectification, a 3x4 array
    - P1: camera1 projection matrix after rectification, a 3x4 array
    - P2: camera2 projection matrix after rectification, a 3x4 array
    - P3: camera3 projection matrix after rectification, a 3x4 array
    - R0_rect: rectifying rotation matrix, a 4x4 array
    - Tr_velo_to_cam: transformation from Velodyne coordinates to camera coordinates, a 4x4 array
    - Tr_imu_to_velo: transformation from IMU coordinates to Velodyne coordinates, a 4x4 array
    }
  - (optional) info['image']: {'image_idx': idx, 'image_path': image_path, 'image_shape': image_shape}.

**Note**: `info['annos']` is in the referenced camera coordinate system. For more details, please refer to [the KITTI paper](http://www.cvlibs.net/publications/Geiger2013IJRR.pdf).

The core functions used to generate `kitti_infos_xxx.pkl` and `kitti_infos_xxx_mono3d.coco.json` are [get_kitti_image_info](https://github.com/open-mmlab/mmdetection3d/blob/7873c8f62b99314f35079f369d1dab8d63f8a3ce/tools/data_converter/kitti_data_utils.py#L140) and [get_2d_boxes](https://github.com/open-mmlab/mmdetection3d/blob/7873c8f62b99314f35079f369d1dab8d63f8a3ce/tools/data_converter/kitti_converter.py#L378), respectively. Please refer to [kitti_converter.py](https://github.com/open-mmlab/mmdetection3d/blob/7873c8f62b99314f35079f369d1dab8d63f8a3ce/tools/data_converter/kitti_converter.py) for more details.

## Train pipeline

A typical train pipeline of 3D detection on KITTI is as below.

```python
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,  # x, y, z, intensity
        use_dim=4,  # x, y, z, intensity
        file_client_args=file_client_args),
    dict(
        type='LoadAnnotations3D',
        with_bbox_3d=True,
        with_label_3d=True,
        file_client_args=file_client_args),
    dict(type='ObjectSample', db_sampler=db_sampler),
    dict(
        type='ObjectNoise',
        num_try=100,
        translation_std=[1.0, 1.0, 0.5],
        global_rot_range=[0.0, 0.0],
        rot_range=[-0.78539816, 0.78539816]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.78539816, 0.78539816],
        scale_ratio_range=[0.95, 1.05]),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='PointShuffle'),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
```

- Data augmentation:
  - `ObjectNoise`: apply noise to each GT object in the scene.
  - `RandomFlip3D`: randomly flip the input point cloud horizontally or vertically.
  - `GlobalRotScaleTrans`: apply global rotation, scaling, and translation to the input point cloud (see the sketch after this list).
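To make the geometry of these augmentations concrete, here is a minimal NumPy sketch of the flip and global rotation/scaling steps. It is an illustration rather than the actual MMDetection3D implementation, and it assumes LiDAR points stored as an Nx4 array (x, y, z, intensity) and boxes stored as `(x, y, z, dx, dy, dz, yaw)` rows:

```python
import numpy as np

def random_flip_bev(points, boxes, flip_ratio=0.5, rng=np.random):
    """Horizontal BEV flip: mirror the scene across the X-axis."""
    if rng.rand() < flip_ratio:
        points[:, 1] = -points[:, 1]  # y -> -y
        boxes[:, 1] = -boxes[:, 1]    # box center y
        boxes[:, 6] = -boxes[:, 6]    # yaw flips sign
    return points, boxes

def global_rot_scale(points, boxes,
                     rot_range=(-0.78539816, 0.78539816),
                     scale_range=(0.95, 1.05), rng=np.random):
    """Rotate the whole scene around the Z-axis, then scale it isotropically."""
    angle = rng.uniform(*rot_range)
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    points[:, :2] = points[:, :2] @ rot.T  # rotate x, y; keep z and intensity
    boxes[:, :2] = boxes[:, :2] @ rot.T    # rotate box centers
    boxes[:, 6] += angle                   # shift box headings

    scale = rng.uniform(*scale_range)
    points[:, :3] *= scale
    boxes[:, :6] *= scale                  # centers and sizes scale together
    return points, boxes
```

Note that the boxes must be transformed consistently with the points, which is why the yaw angle flips sign under mirroring and shifts under global rotation.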
## Evaluation

An example to evaluate PointPillars with 8 GPUs using KITTI metrics is as follows:

```shell
bash tools/dist_test.sh configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/latest.pth 8 --eval bbox
```

## Metrics

KITTI evaluates 3D object detection performance using mean Average Precision (mAP) and Average Orientation Similarity (AOS). Please refer to its [official website](http://www.cvlibs.net/datasets/kitti/eval_3dobject.php) and the [original paper](http://www.cvlibs.net/publications/Geiger2012CVPR.pdf) for more details. We also adopt this approach for evaluation on KITTI. An example of printed evaluation results is as follows:

```
Car AP@0.70, 0.70, 0.70:
bbox AP:97.9252, 89.6183, 88.1564
bev  AP:90.4196, 87.9491, 85.1700
3d   AP:88.3891, 77.1624, 74.4654
aos  AP:97.70, 89.11, 87.38
Car AP@0.70, 0.50, 0.50:
bbox AP:97.9252, 89.6183, 88.1564
bev  AP:98.3509, 90.2042, 89.6102
3d   AP:98.2800, 90.1480, 89.4736
aos  AP:97.70, 89.11, 87.38
```

## Testing and making a submission

An example to test PointPillars on KITTI with 8 GPUs and generate a submission to the leaderboard is as follows:

```shell
mkdir -p results/kitti-3class

./tools/dist_test.sh configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/latest.pth 8 --out results/kitti-3class/results_eval.pkl --format-only --eval-options 'pklfile_prefix=results/kitti-3class/kitti_results' 'submission_prefix=results/kitti-3class/kitti_results'
```

After generating `results/kitti-3class/kitti_results/xxxxx.txt` files, you can submit these files to the KITTI benchmark. Please refer to the [KITTI official website](http://www.cvlibs.net/datasets/kitti/index.php) for more details.
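Each generated `xxxxx.txt` follows the KITTI label format, one detection per line: object type, truncation, occlusion, observation angle `alpha`, 2D box `(x1, y1, x2, y2)`, 3D dimensions `(h, w, l)`, location `(x, y, z)` in the rectified camera frame, `rotation_y`, and a confidence score. A minimal sketch of writing one such line, with made-up values for illustration:

```python
# Hypothetical detection; all values are made up for illustration only.
det = dict(name='Car', truncated=0.0, occluded=0, alpha=-1.57,
           bbox=(387.63, 181.54, 423.81, 203.12),  # x1, y1, x2, y2 in pixels
           dimensions=(1.52, 1.63, 3.88),          # h, w, l in meters
           location=(1.84, 1.47, 8.41),            # x, y, z in the rectified camera frame
           rotation_y=-1.56, score=0.92)

# Format the 16 whitespace-separated fields of the KITTI results format.
line = ('{name} {truncated:.2f} {occluded} {alpha:.2f} '
        '{bbox[0]:.2f} {bbox[1]:.2f} {bbox[2]:.2f} {bbox[3]:.2f} '
        '{dimensions[0]:.2f} {dimensions[1]:.2f} {dimensions[2]:.2f} '
        '{location[0]:.2f} {location[1]:.2f} {location[2]:.2f} '
        '{rotation_y:.2f} {score:.4f}').format(**det)

with open('results/kitti-3class/kitti_results/000000.txt', 'a') as f:
    f.write(line + '\n')
```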