LiDAR Laser distributions

Here are a few LiDAR laser distributions and properties:

LiDAR Model    vtop (deg)   vbottom (deg)   hRes @ 10 Hz (deg)
VLS-128          15          -25              0.2
HDL-64S2          2          -24.33           0.16
HDL-32E          10.64       -30.67           0.16
VLP-32c          15          -25              0.2
VLP-32MR         15          -25              0.2
Pandar40         15          -25              0.2
Pandar64         15          -25              0.2
PandarQT         52.121      -52.121          0.15
RS-LiDAR-32      15          -25              0.2
OS0-32           45          -45              0.35
OS0-64           45          -45              0.35
OS1-16           16.6        -16.6            0.35
OS1-64           16.6        -16.6            0.35
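If the beams were uniformly spaced between vbottom and vtop, the per-beam elevation angles could be sketched as below. Note that uniform spacing is an assumption for illustration only: several of the sensors above (e.g. the VLP-32c) use non-uniform beam layouts, so consult the datasheet for the real angles. The channel count is implied by the model name.

```python
def beam_elevations(v_top, v_bottom, channels):
    # Elevation angle of each laser, assuming UNIFORM vertical spacing
    # between the bottom-most and top-most beams (an approximation).
    step = (v_top - v_bottom) / (channels - 1)
    return [v_bottom + i * step for i in range(channels)]

# OS0-32: vtop = 45, vbottom = -45, 32 channels
angles = beam_elevations(45.0, -45.0, 32)
print(round(angles[0], 3), round(angles[-1], 3))  # -45.0 45.0
print(round(angles[1] - angles[0], 3))            # ~2.903 deg between beams
```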

Per-model visualizations of the laser distributions accompany the original notes (the red line in each marks the 0-degree line); the images are not reproduced here.

How to use webviz in Docker

  1. Clone the webviz project: git clone
  2. Build the image and give it a name. The Docker image contains only the basic dependencies; no code or compiled resources.
  3. Run a container in interactive mode from the newly created image, mount the webviz code, and attach the container to the host network.
  4. Once inside the container, go to the mounted resource /webviz, install the dependencies, build, and run the server.
  5. You’ll see a message indicating the webviz server URL: ℹ 「wds」: Project is running at http://localhost:8081/
  6. Open your browser (Chrome appears to be the only fully supported one) and direct it to that URL, adding app at the end: http://localhost:8081/app/
  7. Drag and drop a rosbag onto the browser’s window. You can also run rosbridge to communicate with the host roscore: roslaunch rosbridge_server rosbridge_websocket.launch
  8. Select the topics to be displayed. Be sure to select the correct coordinate frame.

Running Gitlab runners locally

These are the steps:

  1. Install Docker.
  2. Set up the GitLab repository and install gitlab-runner.
  3. Prepare or get the .gitlab-ci.yml to test.
  4. Execute a runner on the desired job.


Install Docker

Don’t forget the post-install steps (e.g. adding your user to the docker group so Docker can run without sudo).

Set up the repository and install gitlab-runner

Prepare or get .gitlab-ci.yml

Suppose the following configuration (the before_script/script keys are reconstructed from the job layout):

image: "ruby:2.5"

before_script:
  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs
  - ruby -v
  - which ruby
  - gem install bundler --no-document
  - bundle install --jobs $(nproc)  "${FLAGS[@]}"

rspec:
  script:
    - bundle exec rspec

rubocop:
  script:
    - bundle exec rubocop

There are two jobs defined:

  • rspec
  • rubocop

Execute the runner on a job

In a terminal, execute:

gitlab-runner exec docker rspec

This will execute the job rspec in a clean Docker container.

To define a timeout, use:

--timeout TIME

To pass environment variables, use --env (e.g. --env "KEY=value").

Git creating features

Creating features from fork

Important note

Don’t forget to FIRST create an issue upstream describing the fix/feature.


These are the steps involved:

  1. Fork in Github.
  2. Clone fork locally.
  3. Set upstream.
  4. Create features/fixes in local branches.
  5. Sign commits.
  6. Push to your fork.
  7. Create PR upstream.

Fork to your own account/organization

Simply press the Fork button at the top right of the GitHub repository page, and select where to fork.

Clone your fork locally

Clone the default branch in the current directory:
git clone --recursive

If you wish to clone a different branch:
git clone --recursive --branch BRANCH_NAME

If you wish to clone to a directory with a different name:
git clone --recursive DESIRED_DIR_NAME

Set upstream repository

Set upstream (original source):
git remote add upstream

Confirm remotes:
git remote -v

> origin (fetch)
> origin (push)
> upstream (fetch)
> upstream (push)

In this example:
origin is your fork.
upstream is the original repository your fork was created from.

NOTE: Names other than origin or upstream can be used. Just be careful to follow the same naming when pulling/pushing commits.

Create features/fixes

In the Autoware case, please always create new features from the master branch.

Sign commits

GPG Sign

Set up your GPG keys following the GitHub article on commit signing:

Signoff commits

Sign off your commits using git commit -s (short for --signoff).

If you have an older git version, the -s flag might not be available. You can either update git by building from source, or use a PPA:

sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version

Update fork

git checkout master
git fetch upstream
git merge upstream/master

Create feature branch

git checkout -b feature/awesome_stuff

Push to our fork

git push origin feature/awesome_stuff

Once finished

Create the PR from the GitHub website, targeting the master branch.

Yolo3 training notes

Yolo3 custom training notes

To train a model, YOLO training code expects:

  • Images
  • Labels
  • NAMES File
  • CFG file
  • train.txt file
  • test.txt file
  • DATA file
  • Pretrained weights (optional)

Images and Labels

The images and labels should be located in the same directory. Each image and label is related to its counterpart by filename.
For image 001.jpg, the corresponding label should be named 001.txt

Label format

The label file is a plain text file with one bounding box per line. The columns are separated by spaces, in the following format:

classID x y width height

x, y, width, and height are normalized coordinates with values from 0 to 1 (pixel values divided by the image width or height).
x and y are the coordinates of the center of the bounding box.

Yolo includes the following Python helper function (in scripts/voc_label.py) to easily achieve that:

def convert(size, box):
    # size = (image_width, image_height); box = (xmin, xmax, ymin, ymax) in pixels
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1  # box center; the -1 offsets 1-based pixel coordinates
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw   # normalize everything by the image dimensions
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)
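As a concrete check of the format (the convert helper is redefined here so the snippet runs standalone; the box values are made up for illustration):

```python
# Redefinition of darknet's convert() helper so this snippet runs standalone.
def convert(size, box):
    # size = (image_width, image_height); box = (xmin, xmax, ymin, ymax)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = ((box[0] + box[1]) / 2.0 - 1) * dw
    y = ((box[2] + box[3]) / 2.0 - 1) * dh
    w = (box[1] - box[0]) * dw
    h = (box[3] - box[2]) * dh
    return (x, y, w, h)

# A 200x200 px box spanning x:100-300, y:50-250 in a 640x480 image:
x, y, w, h = convert((640, 480), (100, 300, 50, 250))
print("0 %.6f %.6f %.6f %.6f" % (x, y, w, h))  # one label line for class 0
```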

Names file

This file contains the label string for each class: the first line corresponds to class 0, the second line to class 1, and so on.
i.e. Contents of classes.names:

classA
classB
classC

This would create the following relationship:

Class ID (labels)   Class identifier
0                   classA
1                   classB
2                   classC
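A training script maps the names file to class IDs simply by line index. A minimal sketch (the classes.names contents are taken from the example above and written to a throwaway file just for the demo):

```python
import os
import tempfile

def load_names(path):
    # The list index of each name is its YOLO class ID.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Demo: write the example classes.names contents and load them back.
tmp = tempfile.NamedTemporaryFile("w", suffix=".names", delete=False)
tmp.write("classA\nclassB\nclassC\n")
tmp.close()
names = load_names(tmp.name)
print(names)                  # ['classA', 'classB', 'classC']
print(names.index("classB"))  # 1
os.unlink(tmp.name)
```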

CFG file

This file is the darknet network configuration file. To simplify the explanation, these are the modifications required for training:

GPU memory available

# Training
 batch=64 #Number of images to move to GPU memory on each batch.

Number of classes

The number of classes should be set in each of the [yolo] sections of the CFG file:

classes=NUM_CLASSES

The value (1, 2, 3, ...) should match the number of entries in the names file.

Number of Filters

Before each [yolo] section, the number of filters in the preceding [convolutional] layer should also be updated to match the following formula:

filters=(classes + 5) * 3

For instance, for 3 classes:

classes=3
filters=24
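In YOLOv3 each [yolo] layer uses 3 anchor boxes, and each anchor predicts 4 box coordinates, 1 objectness score, and one score per class, which is where filters = (classes + 5) * 3 comes from. A quick sanity check:

```python
def yolo_conv_filters(num_classes, anchors_per_scale=3):
    # 4 box coords + 1 objectness + num_classes class scores, per anchor.
    return (num_classes + 5) * anchors_per_scale

print(yolo_conv_filters(3))   # 24
print(yolo_conv_filters(80))  # 255 (the COCO default found in yolov3.cfg)
```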

train.txt file

This plain text file lists the images that will be used for training. Each line should contain the absolute path to an image.


test.txt file

In the same way as the train.txt file, this text file contains the paths to the images used for testing, one per line.
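Both list files can be generated with a short script. This is a sketch; the 90/10 split, the extensions, and writing the lists next to the images are assumptions to adjust for your dataset:

```python
import os
import random
import tempfile

def write_splits(image_dir, train_ratio=0.9, exts=(".jpg", ".png")):
    """Write train.txt / test.txt with absolute image paths, one per line."""
    images = sorted(
        os.path.join(os.path.abspath(image_dir), f)
        for f in os.listdir(image_dir) if f.lower().endswith(exts)
    )
    random.shuffle(images)  # randomize which images land in each split
    cut = int(len(images) * train_ratio)
    for name, subset in (("train.txt", images[:cut]), ("test.txt", images[cut:])):
        with open(os.path.join(image_dir, name), "w") as f:
            f.write("\n".join(subset) + "\n")
    return len(images), cut

# Demo on a throwaway directory holding 10 empty fake images:
d = tempfile.mkdtemp()
for i in range(10):
    open(os.path.join(d, "%03d.jpg" % i), "w").close()
total, n_train = write_splits(d)
print(total, n_train)  # 10 9
```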


DATA file

This plain text file summarizes the dataset using the following format:

classes= 20
train  = /home/user/dataset/train.txt
valid  = /home/user/dataset/test.txt
names = /home/user/dataset/classes.names
backup = /home/user/dataset/backup


To start training, pass the DATA file, the CFG file, and (optionally) pretrained weights to darknet:

./darknet detector train file.data file.cfg darknet53.conv.74

How to setup industrial_ci locally and test Autoware

industrial_ci will build, install, and test every package in an isolated manner inside a clean Docker image. In this way, missing dependencies (system packages and other ROS packages) can be easily spotted and fixed before publishing a package. This eases the deployment of Autoware (or any ROS package).

Running locally instead of on the cloud (travis-ci) speeds up the build time.

Autoware and industrial_ci require two different catkin workspaces.


Requirements:

  • Docker installed and working.
  • Direct connection to the Internet (no proxy). See below for proxy instructions.


  1. Install catkin tools: $ sudo apt-get install python-catkin-tools.
  2. Clone Autoware (if you don’t have it already): $ git clone (if you wish to test a specific branch, change to that branch or use -b).
  3. Create a directory for a new workspace at the same level as Autoware, with a src subdirectory (in this example catkin_ws, with the base dir being home ~): ~/$ mkdir -p catkin_ws/src && cd catkin_ws/src.
  4. Initialize that workspace by running catkin_init_workspace inside src: ~/catkin_ws/src$ catkin_init_workspace
  5. Clone industrial_ci inside catkin_ws/src: ~/catkin_ws/src$ git clone
  6. The directory structure should look as follows:
  6. The directory structure should look as follows:
├── Autoware
│   ├── ros
│   │   └── src
│   └── .travis.yml
├── catkin_ws
          └── src
               └── industrial_ci
  7. Go to catkin_ws and build industrial_ci: ~/catkin_ws$ catkin config --install && catkin b industrial_ci && source install/setup.bash.
  8. Once finished, move to the Autoware directory: ~/catkin_ws$ cd ~/Autoware.
  9. Run industrial_ci, either:
  • Using rosrun industrial_ci run_ci ROS_DISTRO=kinetic ROS_REPO=ros or rosrun industrial_ci run_ci ROS_DISTRO=indigo ROS_REPO=ros. This method manually specifies the distribution and repository sources.
  • ~/Autoware$ rosrun industrial_ci run_travis . This will parse the .travis.yml and run in a similar fashion to travis-ci.

For more detailed info, see the industrial_ci repository documentation.

How to run behind a proxy

Configure Docker to use the proxy:

Ubuntu 14.04

Edit the file /etc/default/docker, go to the proxy section, and change the values:

# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://proxy.address:port"
export https_proxy="https://proxy.address:port"

Execute in a terminal sudo service docker restart.

Ubuntu 16.04

  1. Create the config directory: $ sudo mkdir -p /etc/systemd/system/docker.service.d
  2. Create the http-proxy.conf file inside it: $ sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf
  3. Paste the following text and edit it with your proxy values:

[Service]
Environment="HTTP_PROXY=http://proxy.address:port"
Environment="HTTPS_PROXY=https://proxy.address:port"

  4. Save the file, then reload and restart Docker: $ sudo systemctl daemon-reload && sudo systemctl restart docker

Modifications to industrial_ci

  1. Add the following lines in ~/catkin_ws/src/industrial_ci/industrial_ci/src/ at line 217:


Traffic Light recognition


Requirements:

  • Vector Map
  • NDT working
  • Calibration publisher
  • TF between camera and localizer

Traffic light recognition is split in two parts:

  1. feat_proj finds the ROIs of the traffic signals in the current camera FOV.
  2. region_tlr checks each ROI and publishes the result; it also publishes the /tlr_superimpose_image image with the traffic lights overlaid.
    2a. region_tlr_ssd is a deep-learning-based detector.

Launch Feature Projection

roslaunch road_wizard feat_proj.launch camera_id:=/camera0

Launch HSV classifier

roslaunch road_wizard traffic_light_recognition.launch camera_id:=/camera0 image_src:=/image_XXXX

SSD Classifier

roslaunch road_wizard traffic_light_recognition_ssd.launch camera_id:=/camera0 image_src:=/image_XXXX network_definition_file:=/PATH_TO_NETWORK_DEFINITION/deploy.prototxt pretrained_model_file:=/PATH_TO_MODEL/Autoware_tlrSSD.caffemodel use_gpu:=true gpu_device_id:=0

Autoware Full Stack to achieve autonomous driving

Velodyne – BaseLink TF

roslaunch runtime_manager setup_tf.launch x:=1.2 y:=0.0 z:=2.0 yaw:=0.0 pitch:=0.0 roll:=0.0 frame_id:=/base_link child_frame_id:=/velodyne period_in_ms:=10

Robot Model

roslaunch model_publisher vehicle_model.launch


Point Cloud Map

rosrun map_file points_map_loader noupdate PCD_FILES_SEPARATED_BY_SPACES

Vector Map

rosrun map_file vector_map_loader CSV_FILES_SEPARATED_BY_SPACES

World-Map TF

roslaunch world_map_tf.launch

Example launch file (From Moriyama data)

  <!-- world map tf -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_map" args="14771 84757 -39 0 0 0 /world /map 10" />

The static_transform_publisher args are: x y z yaw pitch roll frame_id child_frame_id period_in_ms.

roslaunch PATH/TO/MAP_WORLD_TF.launch

Voxel Grid Filter

roslaunch points_downsampler points_downsample.launch node_name:=voxel_grid_filter

Ground Filter

roslaunch points_preprocessor ring_ground_filter.launch node_name:=ring_ground_filter point_topic:=/points_raw


GNSS

roslaunch gnss_localizer nmea2tfpose.launch plane:=7

NDT Matching

roslaunch ndt_localizer ndt_matching.launch use_openmp:=False use_gpu:=False get_height:=False

Mission Planning

rosrun lane_planner lane_rule

rosrun lane_planner lane_stop

roslaunch lane_planner lane_select.launch enablePlannerDynamicSwitch:=False

roslaunch astar_planner obstacle_avoid.launch avoidance:=False avoid_distance:=13 avoid_velocity_limit_mps:=4

roslaunch autoware_connector vel_pose_connect.launch topic_pose_stamped:=/ndt_pose topic_twist_stamped:=/estimate_twist sim_mode:=False

roslaunch astar_planner velocity_set.launch use_crosswalk_detection:=False enable_multiple_crosswalk_detection:=False points_topic:=points_no_ground enablePlannerDynamicSwitch:=False

roslaunch waypoint_maker waypoint_loader.launch multi_lane_csv:=/path/to/saved_waypoints.csv decelerate:=1

roslaunch waypoint_follower pure_pursuit.launch is_linear_interpolation:=True publishes_for_steering_robot:=False

roslaunch waypoint_follower twist_filter.launch

How to setup IMU XSense MTI

Check the assigned USB device to the IMU using dmesg

[ 8808.219908] usb 3-3: new full-speed USB device number 28 using xhci_hcd
[ 8808.237513] usb 3-3: New USB device found, idVendor=2639, idProduct=0013
[ 8808.237522] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 8808.237527] usb 3-3: Product: MTi-300 AHRS
[ 8808.237531] usb 3-3: Manufacturer: Xsens
[ 8808.237534] usb 3-3: SerialNumber: 03700715
[ 8808.265957] usbcore: registered new interface driver usbserial
[ 8808.265982] usbcore: registered new interface driver usbserial_generic
[ 8808.265999] usbserial: USB Serial support registered for generic
[ 8808.268037] usbcore: registered new interface driver xsens_mt
[ 8808.268048] usbserial: USB Serial support registered for xsens_mt
[ 8808.268063] xsens_mt 3-3:1.1: xsens_mt converter detected
[ 8808.268112] usb 3-3: xsens_mt converter now attached to ttyUSB0
  1. Change the permissions of the device: chmod a+rw /dev/ttyUSB0 (it is probably ttyUSB0; change it according to your setup).
  2. In an Autoware-sourced terminal, configure the IMU to publish raw sensor data at 100 Hz: rosrun xsens_driver -m 2 -f 100. Then, to publish the data (also in a sourced terminal): rosrun xsens_driver _device:=/dev/ttyUSB0 _baudrate:=115200
  3. Confirm data is actually arriving with rostopic echo /imu_raw in a different terminal.

How to launch NDT Localization


Before starting, make sure you have:

  1. PCD Map (.pcd)
  2. Vector Map (.csv)
  3. TF File (.launch)

How to start localization

  1. Launch Autoware’s Runtime Manager.
  2. Go to the Setup tab.
  3. Click the TF and Vehicle Model buttons. This creates the transformation between the localizer (Velodyne) and the base_link frame (the car’s tires).
  4. Go to the Map tab.
  5. Click the Ref button in the PointCloud section.
  6. Select ALL the PCD files that form the map, then click Open.
  7. Click the Point Cloud button to the left. A bar below will show the progress of the load; wait until it’s complete. Do the same for the Vector Map, but this time select all the CSV files. Finally, load the TF for the map.
  8. Go to the Simulation tab.
  9. Click the Ref button and load a ROSBAG.
  10. Click Play and, once the ROSBAG has started to play, immediately press Pause. This step is required to set the time source to simulated instead of real.
  11. If your rosbag contains /velodyne_packets instead of /points_raw, go to the Sensing tab and launch the LiDAR node corresponding to your sensor to decode the packets into points_raw. The corresponding calibration YAML files are located in ${AUTOWARE_PATH}/ros/src/sensing/drivers/lidar/packages/velodyne/velodyne_pointcloud/params/; select the correct one for your sensor.
  12. In the Sensing tab, inside the Points Downsampler section, click voxel_grid_filter.
  13. Go to the Computing tab and click the [app] button next to ndt_matching inside the Localization section. Make sure Initial Pos is selected.
  14. Click the ndt_matching checkbox.
  15. Launch RViz using the button below in the Runtime Manager, and load the default.rviz configuration file located in ${AUTOWARE_PATH}/ros/src/.config/rviz.
  16. In RViz, click the 2D Pose Estimate button in the top bar.
  17. Click on the initial position to start localization, and drag to give an initial pose.
  18. If the initial position and pose are correct, the car model should now appear in the correct position. If the model starts spinning, try giving a new initial position and pose.