A Detailed Tutorial on Training Your Own Dataset with OpenPCDet!
Table of Contents
- Preface
- 1. Converting pcd to bin
- 2. Installing and Using the labelCloud Tool
- 3. Training
- Adapting the code
- Rewriting pcdet/datasets/custom/custom_dataset.py
- Creating and editing tools/cfgs/dataset_configs/custom_dataset.yaml
- Creating and editing tools/cfgs/custom_models/pointrcnn.yaml
- Other adjustments
- Dataset preprocessing
- Dataset training
- Visualization test
- Getting object dimensions
- 4. Summary
Preface
Over the past few days I have been trying to train my own dataset (non-KITTI format) on the OpenPCDet platform. It finally runs end to end, so I am recording the training process here.
1. Converting pcd to bin
My own point cloud data is in pcd format, while the KITTI-style training pipeline expects bin files.
The conversion boils down to reading each .pcd file and dumping raw float32 (x, y, z, intensity) records:
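A minimal sketch is shown below, assuming the `open3d` library for reading PCD files. Note that `open3d` drops the intensity field, so a zero-filled intensity column is appended; adapt the reader if your clouds carry real intensity values.

```python
# pcd_to_bin.py -- minimal sketch, assuming open3d is installed (pip install open3d).
import os
import numpy as np
import open3d as o3d

def pcd_to_bin(pcd_path, bin_path):
    pcd = o3d.io.read_point_cloud(pcd_path)                      # load the .pcd file
    xyz = np.asarray(pcd.points, dtype=np.float32)               # (N, 3) coordinates
    intensity = np.zeros((xyz.shape[0], 1), dtype=np.float32)    # placeholder intensity
    points = np.hstack([xyz, intensity])                         # (N, 4), as KITTI expects
    points.tofile(bin_path)                                      # raw float32 dump

if __name__ == '__main__':
    pcd_dir, bin_dir = 'pcd', 'bin'                              # hypothetical folder names
    os.makedirs(bin_dir, exist_ok=True)
    for name in sorted(os.listdir(pcd_dir)):
        if name.endswith('.pcd'):
            out = os.path.join(bin_dir, name.replace('.pcd', '.bin'))
            pcd_to_bin(os.path.join(pcd_dir, name), out)
```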
2. Installing and Using the labelCloud Tool
Pull the source:
git clone https://github.com/ch-sa/labelCloud.git
Install the dependencies:
pip install -r requirements.txt
Start the program:
python labelCloud.py
The following interface appears after startup:
Configure the settings page as needed; I generate the label data in KITTI format here:
After annotation is finished, label files are generated in the corresponding directory:
A label file looks roughly like this:
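Each line of a KITTI-style label file describes one object: class, truncation, occlusion, alpha, a 2D bbox (four values, unused without images), the dimensions h w l, the location x y z, and rotation_y. An illustrative line (the values here are made up):

```
Car 0.00 0 0.00 0.00 0.00 0.00 0.00 1.60 1.78 4.12 2.31 -1.05 0.85 1.57
```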
3. Training
Adapting the code
Copy the pcdet/datasets/kitti folder to pcdet/datasets/custom, then copy pcdet/utils/object3d_kitti.py to pcdet/utils/object3d_custom.py.
Copy the data/kitti folder to data/custom and adapt the training data inside; the structure is as follows (the copy commands are sketched after this tree):
```
custom
├── ImageSets
│   ├── train.txt
│   └── val.txt
├── training
│   ├── velodyne
│   └── label_2
└── testing
    └── velodyne
```
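For reference, the copy and rename steps above amount to something like the following (run from the repository root; the rename of kitti_dataset.py is my assumption, since the next section rewrites custom_dataset.py):

```bash
cp -r pcdet/datasets/kitti pcdet/datasets/custom
mv pcdet/datasets/custom/kitti_dataset.py pcdet/datasets/custom/custom_dataset.py
cp pcdet/utils/object3d_kitti.py pcdet/utils/object3d_custom.py
cp -r data/kitti data/custom
```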
Rewriting pcdet/datasets/custom/custom_dataset.py
```python
import copy
import pickle
import os
from pathlib import Path  # needed by create_groundtruth_database below

import numpy as np
from skimage import io

from . import custom_utils
from ...ops.roiaware_pool3d import roiaware_pool3d_utils
from ...utils import box_utils, common_utils, object3d_custom
from ..dataset import DatasetTemplate


class CustomDataset(DatasetTemplate):
    def __init__(self, dataset_cfg, class_names, training=True, root_path=None, logger=None, ext='.bin'):
        super().__init__(
            dataset_cfg=dataset_cfg, class_names=class_names, training=training,
            root_path=root_path, logger=logger
        )
        self.split = self.dataset_cfg.DATA_SPLIT[self.mode]
        self.root_split_path = os.path.join(self.root_path, ('training' if self.split != 'test' else 'testing'))

        split_dir = os.path.join(self.root_path, 'ImageSets', (self.split + '.txt'))
        self.sample_id_list = [x.strip() for x in open(split_dir).readlines()] if os.path.exists(split_dir) else None
        self.custom_infos = []
        self.include_custom_data(self.mode)
        self.ext = ext

    def include_custom_data(self, mode):
        if self.logger is not None:
            self.logger.info('Loading Custom dataset.')
        custom_infos = []

        for info_path in self.dataset_cfg.INFO_PATH[mode]:
            info_path = self.root_path / info_path
            if not info_path.exists():
                continue
            with open(info_path, 'rb') as f:
                infos = pickle.load(f)
                custom_infos.extend(infos)

        self.custom_infos.extend(custom_infos)

        if self.logger is not None:
            self.logger.info('Total samples for CUSTOM dataset: %d' % (len(custom_infos)))

    def get_infos(self, num_workers=16, has_label=True, count_inside_pts=True, sample_id_list=None):
        import concurrent.futures as futures

        # process a single scene
        def process_single_scene(sample_idx):
            print('%s sample_idx: %s' % (self.split, sample_idx))
            info = {}
            # point cloud info: number of features and sample index
            pc_info = {'num_features': 4, 'lidar_idx': sample_idx}
            info['point_cloud'] = pc_info

            # no images; calibs are only needed to transform the labels
            type_to_id = {'Car': 1, 'Pedestrian': 2, 'Cyclist': 3}
            if has_label:
                # read the labels of this sample to build the object list
                obj_list = self.get_label(sample_idx)
                annotations = {}
                # refer to 'object3d_custom' (no truncated, occluded, alpha, bbox fields)
                annotations['name'] = np.array([obj.cls_type for obj in obj_list])
                # lhw in camera coords: the kitti-style labels are camera-frame
                # (see object3d_custom.py: h=label[8], w=label[9], l=label[10])
                annotations['dimensions'] = np.array([[obj.l, obj.h, obj.w] for obj in obj_list])
                annotations['location'] = np.concatenate([obj.loc.reshape(1, 3) for obj in obj_list])
                annotations['rotation_y'] = np.array([obj.ry for obj in obj_list])

                num_objects = len([obj.cls_type for obj in obj_list if obj.cls_type != 'DontCare'])
                num_gt = len(annotations['name'])
                index = list(range(num_objects)) + [-1] * (num_gt - num_objects)
                annotations['index'] = np.array(index, dtype=np.int32)

                loc = annotations['location'][:num_objects]
                dims = annotations['dimensions'][:num_objects]
                rots = annotations['rotation_y'][:num_objects]
                # camera -> lidar: the points are already in lidar coords,
                # but the labels are in camera coords and must be transformed
                loc_lidar = self.get_calib(loc)
                l, h, w = dims[:, 0:1], dims[:, 1:2], dims[:, 2:3]
                # bottom center -> object center
                loc_lidar[:, 2] += h[:, 0] / 2
                # build gt_boxes_lidar, see https://zhuanlan.zhihu.com/p/152120636
                gt_boxes_lidar = np.concatenate([loc_lidar, l, w, h, (np.pi / 2 - rots[..., np.newaxis])], axis=1)
                annotations['gt_boxes_lidar'] = gt_boxes_lidar

                info['annos'] = annotations

            return info

        sample_id_list = sample_id_list if sample_id_list is not None else self.sample_id_list
        # use a thread pool to speed things up
        with futures.ThreadPoolExecutor(num_workers) as executor:
            infos = executor.map(process_single_scene, sample_id_list)
        # infos is a list in which each element describes one frame
        return list(infos)

    def get_calib(self, loc):
        """
        This calibration differs from the kitti dataset. The transform formula
        of labelCloud (ROOT/labelCloud/io/labels/kitti.py) when importing labels:

            if self.transformed:
                centroid = centroid[2], -centroid[0], centroid[1] - 2.3
            dimensions = [float(v) for v in line_elements[8:11]]
            if self.transformed:
                dimensions = dimensions[2], dimensions[1], dimensions[0]
            bbox = BBox(*centroid, *dimensions)
        """
        loc_lidar = np.concatenate([
            np.array((float(loc_obj[2]), float(-loc_obj[0]), float(loc_obj[1] - 2.3)),
                     dtype=np.float32).reshape(1, 3)
            for loc_obj in loc
        ])
        return loc_lidar

    def get_label(self, idx):
        label_file = self.root_split_path / 'label_2' / ('%s.txt' % idx)
        assert label_file.exists()
        return object3d_custom.get_objects_from_label(label_file)

    def get_lidar(self, idx, getitem):
        """
        Loads the point cloud of one sample as np.array(N, 4).
        Note: root_split_path is a plain string after __init__ (os.path.join)
        but a Path after set_split, hence the two branches below.
        """
        if getitem:
            lidar_file = self.root_split_path + '/velodyne/' + ('%s.bin' % idx)
        else:
            lidar_file = self.root_split_path / 'velodyne' / ('%s.bin' % idx)
        return np.fromfile(str(lidar_file), dtype=np.float32).reshape(-1, 4)

    def set_split(self, split):
        super().__init__(
            dataset_cfg=self.dataset_cfg, class_names=self.class_names, training=self.training,
            root_path=self.root_path, logger=self.logger
        )
        self.split = split
        self.root_split_path = self.root_path / ('training' if self.split != 'test' else 'testing')

        split_dir = self.root_path / 'ImageSets' / (self.split + '.txt')
        self.sample_id_list = [x.strip() for x in open(split_dir).readlines()] if split_dir.exists() else None

    # create the gt database for data augmentation (gt_sampling)
    def create_groundtruth_database(self, info_path=None, used_classes=None, split='train'):
        import torch

        database_save_path = Path(self.root_path) / ('gt_database' if split == 'train' else ('gt_database_%s' % split))
        db_info_save_path = Path(self.root_path) / ('custom_dbinfos_%s.pkl' % split)

        database_save_path.mkdir(parents=True, exist_ok=True)
        all_db_infos = {}

        # open 'custom_infos_train.pkl'
        with open(info_path, 'rb') as f:
            infos = pickle.load(f)

        # for each frame (.bin file)
        for k in range(len(infos)):
            print('gt_database sample: %d/%d' % (k + 1, len(infos)))
            info = infos[k]
            sample_idx = info['point_cloud']['lidar_idx']
            points = self.get_lidar(sample_idx, False)
            annos = info['annos']
            names = annos['name']
            gt_boxes = annos['gt_boxes_lidar']

            num_obj = gt_boxes.shape[0]
            point_indices = roiaware_pool3d_utils.points_in_boxes_cpu(
                torch.from_numpy(points[:, 0:3]), torch.from_numpy(gt_boxes)
            ).numpy()  # (nboxes, npoints)

            for i in range(num_obj):
                filename = '%s_%s_%d.bin' % (sample_idx, names[i], i)
                filepath = database_save_path / filename
                gt_points = points[point_indices[i] > 0]
                gt_points[:, :3] -= gt_boxes[i, :3]
                with open(filepath, 'w') as f:
                    gt_points.tofile(f)

                if (used_classes is None) or names[i] in used_classes:
                    db_path = str(filepath.relative_to(self.root_path))  # gt_database/xxxxx.bin
                    db_info = {'name': names[i], 'path': db_path, 'gt_idx': i,
                               'box3d_lidar': gt_boxes[i], 'num_points_in_gt': gt_points.shape[0]}
                    if names[i] in all_db_infos:
                        all_db_infos[names[i]].append(db_info)
                    else:
                        all_db_infos[names[i]] = [db_info]

        # print the number of samples per class in the database
        for k, v in all_db_infos.items():
            print('Database %s: %d' % (k, len(v)))

        with open(db_info_save_path, 'wb') as f:
            pickle.dump(all_db_infos, f)

    @staticmethod
    def generate_prediction_dicts(batch_dict, pred_dicts, class_names, output_path=None):
        """
        Args:
            batch_dict:
                frame_id:
            pred_dicts: list of pred_dicts
                pred_boxes: (N, 7), Tensor
                pred_scores: (N), Tensor
                pred_labels: (N), Tensor
            class_names:
            output_path:
        """
        def get_template_prediction(num_samples):
            ret_dict = {
                'name': np.zeros(num_samples), 'alpha': np.zeros(num_samples),
                # placeholder 2D boxes (no images/calib); kept so the txt writer below works
                'bbox': np.zeros([num_samples, 4]),
                'dimensions': np.zeros([num_samples, 3]), 'location': np.zeros([num_samples, 3]),
                'rotation_y': np.zeros(num_samples), 'score': np.zeros(num_samples),
                'boxes_lidar': np.zeros([num_samples, 7])
            }
            return ret_dict

        def generate_single_sample_dict(batch_index, box_dict):
            pred_scores = box_dict['pred_scores'].cpu().numpy()
            pred_boxes = box_dict['pred_boxes'].cpu().numpy()
            pred_labels = box_dict['pred_labels'].cpu().numpy()
            # empty template dict; pred_scores.shape[0] is num_samples
            pred_dict = get_template_prediction(pred_scores.shape[0])
            # if num_samples is zero, return the empty dict
            if pred_scores.shape[0] == 0:
                return pred_dict

            # no calibration files here; note the upstream box_utils helper
            # normally also takes a calib argument, so this call may need adapting
            pred_boxes_camera = box_utils.boxes3d_lidar_to_kitti_camera(pred_boxes)
            pred_dict['name'] = np.array(class_names)[pred_labels - 1]
            pred_dict['alpha'] = -np.arctan2(-pred_boxes[:, 1], pred_boxes[:, 0]) + pred_boxes_camera[:, 6]
            pred_dict['dimensions'] = pred_boxes_camera[:, 3:6]
            pred_dict['location'] = pred_boxes_camera[:, 0:3]
            pred_dict['rotation_y'] = pred_boxes_camera[:, 6]
            pred_dict['score'] = pred_scores
            pred_dict['boxes_lidar'] = pred_boxes
            return pred_dict

        annos = []
        for index, box_dict in enumerate(pred_dicts):
            frame_id = batch_dict['frame_id'][index]
            single_pred_dict = generate_single_sample_dict(index, box_dict)
            single_pred_dict['frame_id'] = frame_id
            annos.append(single_pred_dict)

            # write the predictions of this frame to output_path as a .txt file
            if output_path is not None:
                cur_det_file = output_path / ('%s.txt' % frame_id)
                with open(cur_det_file, 'w') as f:
                    bbox = single_pred_dict['bbox']
                    loc = single_pred_dict['location']
                    dims = single_pred_dict['dimensions']  # lhw -> hwl: lidar -> camera

                    for idx in range(len(bbox)):
                        print('%s -1 -1 %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f'
                              % (single_pred_dict['name'][idx], single_pred_dict['alpha'][idx],
                                 bbox[idx][0], bbox[idx][1], bbox[idx][2], bbox[idx][3],
                                 dims[idx][1], dims[idx][2], dims[idx][0],
                                 loc[idx][0], loc[idx][1], loc[idx][2],
                                 single_pred_dict['rotation_y'][idx], single_pred_dict['score'][idx]), file=f)
        return annos

    def __len__(self):
        if self._merge_all_iters_to_one_epoch:
            return len(self.sample_id_list) * self.total_epochs
        return len(self.custom_infos)

    def __getitem__(self, index):
        """
        Reads the 'velodyne' folder as point clouds and the 'label_2' folder
        as labels; returns a dict ready for prepare_data.
        """
        if self._merge_all_iters_to_one_epoch:
            index = index % len(self.custom_infos)

        info = copy.deepcopy(self.custom_infos[index])
        sample_idx = info['point_cloud']['lidar_idx']
        get_item_list = self.dataset_cfg.get('GET_ITEM_LIST', ['points'])

        input_dict = {
            'frame_id': self.sample_id_list[index],
        }

        # 'annos' was generated by get_infos above
        if 'annos' in info:
            annos = info['annos']
            annos = common_utils.drop_info_with_name(annos, name='DontCare')
            gt_names = annos['name']
            gt_boxes_lidar = annos['gt_boxes_lidar']
            input_dict.update({
                'gt_names': gt_names,
                'gt_boxes': gt_boxes_lidar
            })

        if 'points' in get_item_list:
            points = self.get_lidar(sample_idx, True)
            input_dict['points'] = points

        data_dict = self.prepare_data(data_dict=input_dict)
        return data_dict


def create_custom_infos(dataset_cfg, class_names, data_path, save_path, workers=4):
    dataset = CustomDataset(dataset_cfg=dataset_cfg, class_names=class_names, root_path=data_path, training=False)
    train_split, val_split = 'train', 'val'

    train_filename = save_path / ('custom_infos_%s.pkl' % train_split)
    val_filename = save_path / ('custom_infos_%s.pkl' % val_split)  # not generated here (no evaluation)
    trainval_filename = save_path / 'custom_infos_trainval.pkl'     # not generated here
    test_filename = save_path / 'custom_infos_test.pkl'

    print('------------------------Start to generate data infos------------------------')
    dataset.set_split(train_split)
    custom_infos_train = dataset.get_infos(num_workers=workers, has_label=True, count_inside_pts=True)
    with open(train_filename, 'wb') as f:
        pickle.dump(custom_infos_train, f)
    print('Custom info train file is saved to %s' % train_filename)

    dataset.set_split('test')
    custom_infos_test = dataset.get_infos(num_workers=workers, has_label=False, count_inside_pts=False)
    with open(test_filename, 'wb') as f:
        pickle.dump(custom_infos_test, f)
    print('Custom info test file is saved to %s' % test_filename)

    print('------------------------Start create groundtruth database for data augmentation------------------------')
    dataset.set_split(train_split)
    # use the train infos to generate the gt_database
    dataset.create_groundtruth_database(train_filename, split=train_split)
    print('------------------------Data preparation done------------------------')


if __name__ == '__main__':
    import sys
    if sys.argv.__len__() > 1 and sys.argv[1] == 'create_custom_infos':
        import yaml
        from easydict import EasyDict
        dataset_cfg = EasyDict(yaml.safe_load(open(sys.argv[2])))
        ROOT_DIR = (Path(__file__).resolve().parent / '../../../').resolve()
        create_custom_infos(
            dataset_cfg=dataset_cfg,
            class_names=['Car', 'Pedestrian', 'Cyclist'],
            data_path=ROOT_DIR / 'data' / 'custom',
            save_path=ROOT_DIR / 'data' / 'custom'
        )
```
Creating and editing tools/cfgs/dataset_configs/custom_dataset.yaml
```yaml
DATASET: 'CustomDataset'
DATA_PATH: '../data/custom'

# If this config file is modified then pcdet/models/detectors/detector3d_template.py:
# Detector3DTemplate::build_networks:model_info_dict needs to be modified.
POINT_CLOUD_RANGE: [-70.4, -40, -3, 70.4, 40, 1]  # x=[-70.4, 70.4], y=[-40, 40], z=[-3, 1]

DATA_SPLIT: {
    'train': train,
    'test': val
}

INFO_PATH: {
    'train': [custom_infos_train.pkl],
    'test': [custom_infos_val.pkl],
}

GET_ITEM_LIST: ["points"]
FOV_POINTS_ONLY: True

POINT_FEATURE_ENCODING: {
    encoding_type: absolute_coordinates_encoding,
    used_feature_list: ['x', 'y', 'z', 'intensity'],
    src_feature_list: ['x', 'y', 'z', 'intensity'],
}

# Same as pv_rcnn [DATA_AUGMENTOR]
DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          # Note 'USE_ROAD_PLANE'
          USE_ROAD_PLANE: False
          DB_INFO_PATH:
              - custom_dbinfos_train.pkl  # pcdet/datasets/augmentor/database_sampler.py:line 26
          PREPARE: {
              filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
              filter_by_difficulty: [-1],
          }
          SAMPLE_GROUPS: ['Car:20', 'Pedestrian:15', 'Cyclist:15']
          NUM_POINT_FEATURES: 4
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, 0.0]
          LIMIT_WHOLE_SCENE: True

        - NAME: random_world_flip
          ALONG_AXIS_LIST: ['x']

        - NAME: random_world_rotation
          WORLD_ROT_ANGLE: [-0.78539816, 0.78539816]

        - NAME: random_world_scaling
          WORLD_SCALE_RANGE: [0.95, 1.05]

DATA_PROCESSOR:
    - NAME: mask_points_and_boxes_outside_range
      REMOVE_OUTSIDE_BOXES: True

    - NAME: shuffle_points
      SHUFFLE_ENABLED: {
          'train': True,
          'test': False
      }

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.05, 0.05, 0.1]
      MAX_POINTS_PER_VOXEL: 5
      MAX_NUMBER_OF_VOXELS: {
          'train': 16000,
          'test': 40000
      }
```
Creating and editing tools/cfgs/custom_models/pointrcnn.yaml
```yaml
CLASS_NAMES: ['Car']
# CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']

DATA_CONFIG:
    _BASE_CONFIG_: /home/zonlin/CRLFnet/src/site_model/src/LidCamFusion/OpenPCDet/tools/cfgs/dataset_configs/custom_dataset.yaml
    _BASE_CONFIG_RT_: /home/zonlin/CRLFnet/src/site_model/src/LidCamFusion/OpenPCDet/tools/cfgs/dataset_configs/custom_dataset.yaml

    DATA_PROCESSOR:
        - NAME: mask_points_and_boxes_outside_range
          REMOVE_OUTSIDE_BOXES: True

        - NAME: sample_points
          NUM_POINTS: {
              'train': 16384,
              'test': 16384
          }

        - NAME: shuffle_points
          SHUFFLE_ENABLED: {
              'train': True,
              'test': False
          }

MODEL:
    NAME: PointRCNN

    BACKBONE_3D:
        NAME: PointNet2MSG
        SA_CONFIG:
            NPOINTS: [4096, 1024, 256, 64]
            RADIUS: [[0.1, 0.5], [0.5, 1.0], [1.0, 2.0], [2.0, 4.0]]
            NSAMPLE: [[16, 32], [16, 32], [16, 32], [16, 32]]
            MLPS: [[[16, 16, 32], [32, 32, 64]],
                   [[64, 64, 128], [64, 96, 128]],
                   [[128, 196, 256], [128, 196, 256]],
                   [[256, 256, 512], [256, 384, 512]]]
        FP_MLPS: [[128, 128], [256, 256], [512, 512], [512, 512]]

    POINT_HEAD:
        NAME: PointHeadBox
        CLS_FC: [256, 256]
        REG_FC: [256, 256]
        CLASS_AGNOSTIC: False
        USE_POINT_FEATURES_BEFORE_FUSION: False
        TARGET_CONFIG:
            GT_EXTRA_WIDTH: [0.2, 0.2, 0.2]
            BOX_CODER: PointResidualCoder
            BOX_CODER_CONFIG: {
                'use_mean_size': True,
                'mean_size': [
                    [3.9, 1.6, 1.56],
                    [0.8, 0.6, 1.73],
                    [1.76, 0.6, 1.73]
                ]
            }
        LOSS_CONFIG:
            LOSS_REG: WeightedSmoothL1Loss
            LOSS_WEIGHTS: {
                'point_cls_weight': 1.0,
                'point_box_weight': 1.0,
                'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
            }

    ROI_HEAD:
        NAME: PointRCNNHead
        CLASS_AGNOSTIC: True

        ROI_POINT_POOL:
            POOL_EXTRA_WIDTH: [0.0, 0.0, 0.0]
            NUM_SAMPLED_POINTS: 512
            DEPTH_NORMALIZER: 70.0

        XYZ_UP_LAYER: [128, 128]
        CLS_FC: [256, 256]
        REG_FC: [256, 256]
        DP_RATIO: 0.0
        USE_BN: False

        SA_CONFIG:
            NPOINTS: [128, 32, -1]
            RADIUS: [0.2, 0.4, 100]
            NSAMPLE: [16, 16, 16]
            MLPS: [[128, 128, 128],
                   [128, 128, 256],
                   [256, 256, 512]]

        NMS_CONFIG:
            TRAIN:
                NMS_TYPE: nms_gpu
                MULTI_CLASSES_NMS: False
                NMS_PRE_MAXSIZE: 9000
                NMS_POST_MAXSIZE: 512
                NMS_THRESH: 0.8
            TEST:
                NMS_TYPE: nms_gpu
                MULTI_CLASSES_NMS: False
                NMS_PRE_MAXSIZE: 9000
                NMS_POST_MAXSIZE: 100
                NMS_THRESH: 0.85

        TARGET_CONFIG:
            BOX_CODER: ResidualCoder
            ROI_PER_IMAGE: 128
            FG_RATIO: 0.5
            SAMPLE_ROI_BY_EACH_CLASS: True
            CLS_SCORE_TYPE: cls
            CLS_FG_THRESH: 0.6
            CLS_BG_THRESH: 0.45
            CLS_BG_THRESH_LO: 0.1
            HARD_BG_RATIO: 0.8
            REG_FG_THRESH: 0.55

        LOSS_CONFIG:
            CLS_LOSS: BinaryCrossEntropy
            REG_LOSS: smooth-l1
            CORNER_LOSS_REGULARIZATION: True
            LOSS_WEIGHTS: {
                'rcnn_cls_weight': 1.0,
                'rcnn_reg_weight': 1.0,
                'rcnn_corner_weight': 1.0,
                'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
            }

    POST_PROCESSING:
        RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
        SCORE_THRESH: 0.1
        OUTPUT_RAW_SCORE: False
        EVAL_METRIC: kitti
        NMS_CONFIG:
            MULTI_CLASSES_NMS: False
            NMS_TYPE: nms_gpu
            NMS_THRESH: 0.1
            NMS_PRE_MAXSIZE: 4096
            NMS_POST_MAXSIZE: 500

OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 2
    NUM_EPOCHS: 80

    OPTIMIZER: adam_onecycle
    LR: 0.01
    WEIGHT_DECAY: 0.01
    MOMENTUM: 0.9

    MOMS: [0.95, 0.85]
    PCT_START: 0.4
    DIV_FACTOR: 10
    DECAY_STEP_LIST: [35, 45]
    LR_DECAY: 0.1
    LR_CLIP: 0.0000001

    LR_WARMUP: False
    WARMUP_EPOCH: 1

    GRAD_NORM_CLIP: 10
```
Other adjustments
Update the class names, dataset paths, and the point cloud range POINT_CLOUD_RANGE in the files above to match your own data.
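To choose a POINT_CLOUD_RANGE that actually covers your data, a quick sketch for inspecting the extents of one converted frame (the sample path is hypothetical):

```python
import numpy as np

# load one converted frame and print its x/y/z extents
pts = np.fromfile('data/custom/training/velodyne/000000.bin', dtype=np.float32).reshape(-1, 4)
print('min xyz:', pts[:, :3].min(axis=0))
print('max xyz:', pts[:, :3].max(axis=0))
```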
In pcdet/datasets/__init__.py, add the following code to register the new dataset:
```python
from .custom.custom_dataset import CustomDataset

__all__ = {
    # ... keep the existing entries ...
    'CustomDataset': CustomDataset
}
```
With all of the above in place, you can preprocess the dataset and start training.
Dataset preprocessing
python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml
This also generates the per-object .bin files under the gt_database folder; the data/custom folder structure becomes:
```
custom
├── ImageSets
│   ├── train.txt
│   └── val.txt
├── training
│   ├── velodyne
│   └── label_2
├── testing
│   └── velodyne
├── gt_database
│   └── xxxxx.bin
├── custom_infos_train.pkl
├── custom_infos_test.pkl
└── custom_dbinfos_train.pkl
```
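As a quick sanity check that the generated infos look right, a minimal sketch (file names as produced by create_custom_infos above; run from the repo root):

```python
import pickle

# load the train infos produced by the preprocessing step
with open('data/custom/custom_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

print('frames:', len(infos))
# each gt box is (x, y, z, dx, dy, dz, heading) in lidar coordinates
print(infos[0]['annos']['gt_boxes_lidar'])
```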
Dataset training
python tools/train.py --cfg_file tools/cfgs/custom_models/pointrcnn.yaml --batch_size=2 --epochs=300
Visualization test
cd into the tools folder and run:
python demo.py --cfg_file cfgs/custom_models/pointrcnn.yaml --data_path ../data/custom/testing/velodyne/ --ckpt ../output/custom_models/pointrcnn/default/ckpt/checkpoint_epoch_300.pth
Adjust the paths above to your own setup. The inference results are shown below (I annotated 50-odd point cloud frames of ships at a lock gate):
The results look quite good.
Getting object dimensions
With KITTI-style inference on the OpenPCDet platform, the dx, dy, dz of a predicted bbox approximate the real dimensions of the object.
For our point cloud data, these correspond to the ship's length, width, and height respectively. (If this is unclear, see OpenPCDet's coordinate definitions.)
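For example, a minimal sketch for reading the sizes out of the prediction dicts (following the pred_dicts layout used by tools/demo.py; variable names are illustrative):

```python
# pred_dicts comes out of model.forward(data_dict) as in tools/demo.py;
# each box is (x, y, z, dx, dy, dz, heading) in lidar coordinates
boxes = pred_dicts[0]['pred_boxes'].cpu().numpy()  # (N, 7)
for x, y, z, dx, dy, dz, heading in boxes:
    # dx/dy/dz approximate the object's length/width/height in meters
    print('size (l, w, h): %.2f x %.2f x %.2f m' % (dx, dy, dz))
```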
4. Summary
At this point, training a custom dataset on the OpenPCDet platform is essentially complete. Special thanks to 树和猫, with whom I discussed custom-dataset training at length. He originally started following me through my yolov5 article series, and now I follow him through this OpenPCDet custom-dataset series; it really brought home that technical sharing comes full circle.
References:
https://blog.csdn.net/m0_68312479/article/details/126201450
https://blog.csdn.net/jin15203846657/article/details/122949271
https://blog.csdn.net/hihui1231/article/details/124903276
https://github.com/OrangeSodahub/CRLFnet/tree/master/src/site_model/src/LidCamFusion/OpenPCDet
https://blog.csdn.net/weixin_43464623/article/details/116718451
If this post was useful to you, a like, favorite, and follow would be much appreciated!
2022-10-24 11:12:53