Apollo messages

This post provides an overview of the messages used by Apollo, as captured from the demo rosbag for testing the perception module.

To do:

  • Investigate protobuf message definitions for variable types
  • Investigate contents of standard ROS type messages

Standard message types

sensor_msgs/Image
sensor_msgs/PointCloud2
tf2_msgs/TFMessage

These messages use the standard ROS message format for their respective types.

Custom message types

Apollo uses a custom version of ROS that replaces the msg message description language with a Real Time Publish Subscribe (RTPS) and Google protobuf based messaging protocol for publishing and receiving the following messages.

pb_msgs/Chassis
pb_msgs/ContiRadar
pb_msgs/EpochObservation
pb_msgs/GnssBestPose
pb_msgs/GnssEphemeris
pb_msgs/GnssStatus
pb_msgs/Gps
pb_msgs/Imu
pb_msgs/InsStat
pb_msgs/LocalizationEstimate

Messages using these types cannot be interpreted by the standard rosbag tools, as only the header is compatible. However, their content can be dumped as text using read_messages().

Message topics

The demo rosbag contains the following topics:

topics: /apollo/canbus/chassis 5851 msgs : pb_msgs/Chassis
/apollo/localization/pose 5856 msgs : pb_msgs/LocalizationEstimate
/apollo/sensor/camera/traffic/image_long 471 msgs : sensor_msgs/Image
/apollo/sensor/camera/traffic/image_short 469 msgs : sensor_msgs/Image
/apollo/sensor/conti_radar 789 msgs : pb_msgs/ContiRadar
/apollo/sensor/gnss/best_pose 59 msgs : pb_msgs/GnssBestPose
/apollo/sensor/gnss/corrected_imu 5838 msgs : pb_msgs/Imu
/apollo/sensor/gnss/gnss_status 59 msgs : pb_msgs/GnssStatus
/apollo/sensor/gnss/imu 11630 msgs : pb_msgs/Imu
/apollo/sensor/gnss/ins_stat 118 msgs : pb_msgs/InsStat
/apollo/sensor/gnss/odometry 5848 msgs : pb_msgs/Gps
/apollo/sensor/gnss/rtk_eph 49 msgs : pb_msgs/GnssEphemeris
/apollo/sensor/gnss/rtk_obs 352 msgs : pb_msgs/EpochObservation
/apollo/sensor/velodyne64/compensator/PointCloud2 587 msgs : sensor_msgs/PointCloud2
/tf 11740 msgs : tf2_msgs/TFMessage
/tf_static 1 msg : tf2_msgs/TFMessage
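
For reference, the listing above can be reproduced programmatically with the rosbag Python API. A minimal sketch, assuming the same bag file path used by the dump script below:

import rosbag

# Print each topic with its message count and type, similar to rosbag info.
bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')
info = bag.get_type_and_topic_info()
for topic, topic_info in sorted(info.topics.items()):
    print('%s  %d msgs : %s' % (topic, topic_info.message_count, topic_info.msg_type))
bag.close()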

Example messages

All custom messages were read using the following script, within the Apollo docker:

import rosbag

# Bag captured from the Apollo demo drive
bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')

# Topic whose messages will be dumped to one text file each
read_topic = '/apollo/canbus/chassis'

counter = 0

for topic, msg, t in bag.read_messages():
    if topic == read_topic:
        out_file = 'messages/' + str(counter) + '.txt'
        with open(out_file, 'w') as f:
            f.write(str(msg))
        counter += 1

bag.close()

Note that this will not work outside of the Apollo docker – the files will be empty.
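
As a variation, read_messages() also accepts a topics filter, which avoids iterating over every message in the bag. A small self-contained sketch of the same dump, assuming the same bag path and output directory:

import rosbag

# Same dump as above, but letting read_messages() do the topic filtering.
bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')
for counter, (topic, msg, t) in enumerate(bag.read_messages(topics=['/apollo/canbus/chassis'])):
    with open('messages/' + str(counter) + '.txt', 'w') as f:
        f.write(str(msg))
bag.close()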

/apollo/canbus/chassis

Of custom protobuf type pb_msgs/Chassis.

engine_started: true
engine_rpm: 0.0
speed_mps: 0.261111110449
odometer_m: 0.0
fuel_range_m: 0
throttle_percentage: 14.9950408936
brake_percentage: 20.5966281891
steering_percentage: 4.36170196533
steering_torque_nm: -0.75
parking_brake: false
driving_mode: COMPLETE_AUTO_DRIVE
error_code: NO_ERROR
gear_location: GEAR_DRIVE
header {
    timestamp_sec: 1513807824.58
    module_name: "chassis"
    sequence_num: 96620
}
signal {
    turn_signal: TURN_NONE
    horn: false
}
chassis_gps {
    latitude: 37.416912
    longitude: -122.016053333
    gps_valid: true
    year: 17
    month: 12
    day: 20
    hours: 22
    minutes: 10
    seconds: 23
    compass_direction: 270.0
    pdop: 0.8
    is_gps_fault: false
    is_inferred: false
    altitude: -42.5
    heading: 285.75
    hdop: 0.4
    vdop: 0.6
    quality: FIX_3D
    num_satellites: 20
    gps_speed: 0.89408
}

/apollo/localization/pose

Of custom protobuf type pb_msgs/LocalizationEstimate.

header {
timestamp_sec: 1513807826.05
module_name: "localization"
sequence_num: 96460
}
pose {
    position {
    x: 587068.814494
    y: 4141577.34872
    z: -31.0619329279
    }
    orientation {
        qx: 0.0354450020653
        qy: 0.0137914670665
        qz: -0.608585615062
        qw: -0.792576177036
    }
    linear_velocity {
        x: -1.08479686092
        y: 0.30964034124
        z: -0.00187507107555
    }
    linear_acceleration {
        x: -2.10530393576
        y: 0.553837321635
        z: 0.170289232445
    }
    angular_velocity {
        x: 0.00630864843147
        y: 0.0111583669994
        z: 0.0146261464379
    }
    heading: 2.88124066533
    linear_acceleration_vrf {
        x: -0.0137881469155
        y: 2.15869303259
        z: 0.328470914294
    }
    angular_velocity_vrf {
        x: 0.0120972352232
        y: -0.00428235807315
        z: 0.0146133729179
    }
    euler_angles {
        x: 0.0213395674976
        y: 0.0730372234802
        z: -3.40194464185
    }
}
measurement_time: 1513807826.04

/apollo/sensor/camera/traffic/image_long

This is a standard sensor_msgs/Image message.

/apollo/sensor/camera/traffic/image_short

This is a standard sensor_msgs/Image message.

/apollo/sensor/conti_radar

Of custom protobuf type pb_msgs/ContiRadar.

header {
timestamp_sec: 1513807824.52
module_name: "conti_radar"
sequence_num: 12971
}
contiobs {
header {
timestamp_sec: 1513807824.52
module_name: "conti_radar"
sequence_num: 12971
}
clusterortrack: false
obstacle_id: 0
longitude_dist: 107.6
lateral_dist: -17.2
longitude_vel: -0.25
lateral_vel: -0.75
rcs: 7.5
dynprop: 1
longitude_dist_rms: 0.371
lateral_dist_rms: 0.616
longitude_vel_rms: 0.371
lateral_vel_rms: 0.616
probexist: 1.0
meas_state: 2
longitude_accel: 0.23
lateral_accel: 0.0
oritation_angle: 0.0
longitude_accel_rms: 0.794
lateral_accel_rms: 0.005
oritation_angle_rms: 1.909
length: 2.8
width: 2.4
obstacle_class: 1
}
…[99 more contiobs]…
object_list_status {
nof_objects: 100
meas_counter: 8464
interface_version: 0
}

/apollo/sensor/gnss/best_pose

Of custom protobuf type pb_msgs/GnssBestPose.

header {
timestamp_sec: 1513807825.02
}
measurement_time: 1197843043.0
sol_status: SOL_COMPUTED
sol_type: NARROW_INT
latitude: 37.4169108497
longitude: -122.016059063
height_msl: 2.09512365051
undulation: -32.0999984741
datum_id: WGS84
latitude_std_dev: 0.0114300707355
longitude_std_dev: 0.00970683153719
height_std_dev: 0.0248824004084
base_station_id: "0"
differential_age: 2.0
solution_age: 0.0
num_sats_tracked: 13
num_sats_in_solution: 13
num_sats_l1: 13
num_sats_multi: 11
extended_solution_status: 33
galileo_beidou_used_mask: 0
gps_glonass_used_mask: 51

/apollo/sensor/gnss/corrected_imu

Of custom protobuf type pb_msgs/Imu.

This appears to be empty, with just a timestamp header.

/apollo/sensor/gnss/gnss_status

Of custom protobuf type pb_msgs/GnssStatus.

header {
timestamp_sec: 1513807826.02
}
solution_completed: true
solution_status: 0
position_type: 50
num_sats: 14

/apollo/sensor/gnss/imu

Of custom protobuf type pb_msgs/Imu.

header {
timestamp_sec: 1513807824.58
}
measurement_time: 1197843042.56
measurement_span: 0.00499999988824
linear_acceleration {
x: -0.172816216946
y: -0.864528119564
z: 9.75685194135
}
angular_velocity {
x: -0.000550057197804
y: 0.00203638196634
z: 0.00155888550527
}

/apollo/sensor/gnss/ins_stat

Of custom protobuf type pb_msgs/InsStat.

header {
timestamp_sec: 1513807826.0
}
ins_status: 3
pos_type: 56

/apollo/sensor/gnss/odometry

Of custom protobuf type pb_msgs/Gps.

header {
timestamp_sec: 1513807824.58
}
localization {
position {
x: 587069.287353
y: 4141577.22403
z: -31.0546750054
}
orientation {
qx: 0.0399296504032
qy: 0.0164343412444
qz: -0.606054063971
qw: -0.79425059458
}
linear_velocity {
x: -0.231635539107
y: 0.0795322332148
z: 0.00147877897123
}
}

/apollo/sensor/gnss/rtk_eph

Of custom protobuf type pb_msgs/GnssEphemeris.

gnss_type: GPS_SYS
keppler_orbit {
gnss_type: GPS_SYS
sat_prn: 23
gnss_time_type: GPS_TIME
week_num: 1980
af0: -0.000219020526856
af1: 0.0
af2: 0.0
iode: 43.0
deltan: 5.23450375238e-09
m0: -1.97123659751
e: 0.0118623136077
roota: 5153.61836624
toe: 345600.0
toc: 345600.0
cic: 1.30385160446e-07
crc: 266.5
cis: -8.56816768646e-08
crs: -9.28125
cuc: -7.13393092155e-07
cus: 5.16884028912e-06
omega0: 2.20992318653
omega: -2.40973031377
i0: 0.943533454733
omegadot: -8.16355433052e-09
idot: -2.10723063176e-11
accuracy: 2
health: 0
tgd: -2.04890966415e-08
iodc: 43.0
}
glonass_orbit {
gnss_type: GLO_SYS
slot_prn: 10
gnss_time_type: GLO_TIME
toe: 339318.0
frequency_no: -7
week_num: 1980
week_second_s: 339318.0
tk: 3990.0
clock_offset: 1.41123309731e-05
clock_drift: 9.09494701773e-13
health: 0
position_x: -16965363.2812
position_y: -15829665.5273
position_z: -10698784.1797
velocity_x: -994.89402771
velocity_y: -1087.91637421
velocity_z: 3184.79061127
accelerate_x: -2.79396772385e-06
accelerate_y: -3.72529029846e-06
accelerate_z: -1.86264514923e-06
infor_age: 0.0
}

/apollo/sensor/gnss/rtk_obs

Of custom protobuf type pb_msgs/EpochObservation.

receiver_id: 0
gnss_time_type: GPS_TIME
gnss_week: 1980
gnss_second_s: 339042.6
sat_obs_num: 13
sat_obs {
sat_prn: 31
sat_sys: GPS_SYS
band_obs_num: 2
band_obs {
band_id: GPS_L1
frequency_value: 0.0
pseudo_type: CORSE_CODE
pseudo_range: 22104170.8773
carrier_phase: 116158209.251
loss_lock_index: 173
doppler: -2880.74853516
snr: 173.0
}
band_obs {
band_id: GPS_L2
frequency_value: 0.0
pseudo_type: CORSE_CODE
pseudo_range: 22104174.5591
carrier_phase: 90512897.445
loss_lock_index: 140
doppler: -2244.73510742
snr: 140.0
}
}

…[13 more sat_obs]…

/apollo/sensor/velodyne64/compensator/PointCloud2

This is a standard sensor_msgs/PointCloud2 message.

/tf

This is a standard tf2_msgs/TFMessage message. The messages alternate between transforms from frame_id: world to child_frame_id: localization and from frame_id: world to child_frame_id: novatel.
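
To see this alternation, the /tf messages in the bag can be inspected with a short script. A sketch, reusing the bag path from the dump script above:

import rosbag

# Print the frame pairs carried by the first few /tf messages.
bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')
for i, (topic, msg, t) in enumerate(bag.read_messages(topics=['/tf'])):
    for transform in msg.transforms:
        print('%s -> %s' % (transform.header.frame_id, transform.child_frame_id))
    if i >= 3:
        break
bag.close()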

/tf_static

This is a standard tf2_msgs/TFMessage message, for the frame_id: novatel to velodyne64 transform.

Setup of Allied Vision Camera with VimbaSDK and ROS node

How to set up and use the avt_vimba_camera node

Assuming an x86_64 system and a GigE PoE camera.

http://wiki.ros.org/avt_vimba_camera

Driver Setup

  1. Download the Vimba SDK from Allied Vision website https://www.alliedvision.com/en/products/software.html
  2. Extract the contents of the file
  3. Go to Vimba_2_1/VimbaGigETL/
  4. Execute sudo ./Install.sh. This will add a couple of files inside /etc/profile.d
    • VimbaGigETL_64bit.sh
    • VimbaGigETL_32bit.sh
  5. The SDK will ask you to logout and login again. If you don’t wish to do so, execute source /etc/profile.d/VimbaGigETL_64bit.sh
  6. Connect the camera.
  7. You’re ready to use the camera.

Change Camera IP Address

Initially the camera runs in DHCP mode. If you wish to change this:

  1. Go to Vimba_2_1/Tools/Viewer/Bin/x86_64bit/
  2. Execute sudo -E ./VimbaViewer
  3. Right Click the camera and Press Open CONFIG
  4. Go to the tree and open the GigE root and the Configuration subnode
  5. Change IP Configuration Mode to Persistent
  6. Move to the Persistent tree and change the Persistent IP Address to the desired value.
  7. Finally Click on IP Configuration Apply and click the Execute button
  8. Close the CONFIG MODE window
  9. After a few moments, the camera will appear showing the selected IP.

View Image Stream from Camera

  1. Go to Vimba_2_1/Tools/Viewer/Bin/x86_64bit/
  2. Execute sudo -E ./VimbaViewer
  3. Right Click the camera while in the viewer
  4. Select Open FULL ACCESS
  5. A new Window will appear, press the Blue Play button

ROS Node setup

  1. Once the driver is working, install the node using sudo apt-get install ros-xxxx-avt-vimba-camera, where xxxx represents your ROS distro.
  2. From Autoware launch in a sourced terminal roslaunch runtime_manager avt_camera.launch guid:=SERIALNUMBER or roslaunch runtime_manager avt_camera.launch ip:=xxx.xxx.xxx.xxx

The serial number or IP address used above can be obtained from the configuration tool described earlier.

Dynamic configuration

The avt_vimba_camera package supports dynamic configuration.

Once the node is running execute rosrun dynamic_reconfigure dynparam get /avt_camera to get all the supported parameters.

To change one execute:
rosrun dynamic_reconfigure dynparam set /avt_camera exposure_auto_max 500000

This will update the maximum auto exposure time to 500 ms.
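
The same change can also be made from Python via the dynamic_reconfigure client API. A minimal sketch, using the /avt_camera node name and exposure_auto_max parameter from the commands above:

import rospy
import dynamic_reconfigure.client

# Connect to the camera node's dynamic_reconfigure server and update one parameter.
rospy.init_node('avt_camera_config', anonymous=True)
client = dynamic_reconfigure.client.Client('/avt_camera', timeout=10)
client.update_configuration({'exposure_auto_max': 500000})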

For extra details on dynamic_reconfigure check: http://wiki.ros.org/dynamic_reconfigure

Continental ARS 308-21 SSAO Radar setup

Originally written by Ekim Yurtsever.

This is a short setup guide for the Continental ARS 308-21 SSAO radar node. The guide covers the following topics:

Contents

  1. Requirements and the hardware setup
  2. A brief introduction to CAN bus communication using Linux
  3. Radar configuration
  4. Receiving CAN messages on ROS
  5. Using the Autoware RADAR Node

 

1. Requirements

Hardware

  1. Continental ARS 308-21 SSAO Radar
  2. 12V DC power supply
  3. CAN interface device
  4. CAN adaptor*

Software

  • Ubuntu 14.04 or above
  • ROS
  • ROS package: socketcan_interface (http://wiki.ros.org/socketcan_interface)

*: An adaptor is needed to terminate the CAN circuit. From the technical documentation of the Continental ARS 308-2C/-21:

"Since no termination resistors are included in the radar sensor ARS 308-2C and ARS 308-21, two 120 Ohm terminal resistors have to be connected to the network (separately or integrated in the CAN interface of the corresponding unit)."

 

Hardware setup

  1. Connect the devices as shown below.

Fig 1. Hardware setup.

  2. Turn on the power supply. The device should start working, with audible operation.

2. A brief introduction to CAN bus communication using Linux

There are various ways to communicate with a CAN bus. On Linux, SocketCAN is one of the most widely used CAN drivers. It is open source, ships with the kernel, and can be used with many devices. If you would rather use a different vendor's driver, please refer to that driver's manual for communicating over the CAN bus.

First load the drivers. The device sends its messages at a specific bitrate; if this is not matched, the stream will not be synchronized, so the CAN link must be configured (via ip link) with the bitrate of the device. The bitrate for this device is fixed at 500,000 bit/s and cannot be changed. Below is an example snippet for setting up the CAN device can1:

$ sudo modprobe can_dev
$ sudo modprobe can
$ sudo modprobe can_raw
$ sudo ip link set can1 type can bitrate 500000
$ sudo ifconfig can1 up

Now the connection between the computer and the sensor is established.

To inspect the information sent by the device, a user-friendly tool package called can-utils can be used with SocketCAN to access the messages via the driver.

Install can-utils:

$ sudo apt-get install can-utils

Display the CAN messages from can1:

$ candump can1

A stream of CAN messages should be received at this point. An example of the message stream is shown below:

The CAN messages sent from the device have to be converted into meaningful information. CAN messages carry identifiers that indicate the content of each message. The identifiers and contents of the messages sent by the radar are shown below:

0x300 and 0x301 are the input signals. The ego-vehicle speed and yaw rate can be sent to the device. If this information is provided, the radar will return the detected objects' positions and speeds relative to the ego-vehicle. If this information is not sent, the radar will assume that it is stationary.

0x600, 0x701 and 0x702 are the output messages of the radar. The structure of these messages is given below:

3. Configuring the radar

The radar must first be configured to send tracked object information. This device does not transmit raw data from the radar scans. Instead, its microcontroller reads the raw sensing data and detects/tracks objects with its own algorithm (this algorithm is not accessible). The sensor sends the detected/tracked object information over the CAN bus.

The default behavior of the device is to send detected objects (not tracked ones). In order to receive the tracked object messages, 0x60A, 0x60B and 0x60C, a configuration message has to be sent. The following command sends the configuration message using can-utils:

$ cansend can1 200#0832000200000000 

Now 0x60A, 0x60B and 0x60C messages can be received instead of 0x600, 0x701 and 0x702. We can check this by dumping the CAN stream on the terminal screen with the following command again:

$ candump can1 

The stream should include 0x60A, 0x60B and 0x60C messages now.
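
If you prefer Python over can-utils, the same configuration frame can be sent with the python-can package. This package is not used elsewhere in this guide, so treat the following as an optional sketch:

import can  # python-can

# Send the 0x200 configuration frame (payload 08 32 00 02 00 00 00 00),
# equivalent to: cansend can1 200#0832000200000000
bus = can.interface.Bus(channel='can1', bustype='socketcan')
msg = can.Message(arbitration_id=0x200,
                  data=[0x08, 0x32, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00],
                  is_extended_id=False)
bus.send(msg)
bus.shutdown()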

4. Receiving CAN messages on ROS

The ROS package socketcan_interface is needed to receive CAN messages in ROS. This package works with SocketCAN.

Install socketcan_interface:

$ sudo apt-get install ros-kinetic-socketcan-interface 

Test the communication in ROS:

$ rosrun socketcan_interface socketcan_dump can1

This should display the received messages in ROS. An example is shown below:

With socketcan_interface a driver for this device could be developed. However, there is already a ready-made CAN driver in ROS called ros_canopen. Install this package with the following command:

$ sudo apt-get install ros-kinetic-ros-canopen

This package will be used to publish the CAN messages received from the device in the ROS environment.

The socketcan_to_topic node in the socketcan_bridge package can be used to publish topics from the CAN stream. First start a ROS core, then launch this node with the name of the CAN port as an argument (e.g. can1):

$ roscore
$ rosrun socketcan_bridge socketcan_to_topic_node _can_device:="can1"

This will publish a topic called “received_messages”. Check the messages with the following command:

$ rostopic echo /received_messages

This should show the received messages. We are interested in the “id” and “data” fields. An example of the received_messages is shown below.
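
To pull out the id and data fields programmatically, a minimal rospy subscriber can be used. A sketch, assuming the can_msgs/Frame message type published by socketcan_bridge and a Python 2 / ROS Kinetic environment:

#!/usr/bin/env python
import rospy
from can_msgs.msg import Frame

def callback(frame):
    # Keep only the tracked-object messages mentioned above (0x60A-0x60C).
    if frame.id in (0x60A, 0x60B, 0x60C):
        # frame.data is an 8-byte string under Python 2.
        data_hex = ''.join('%02X' % ord(b) for b in frame.data)
        rospy.loginfo('id=0x%03X dlc=%d data=%s', frame.id, frame.dlc, data_hex)

if __name__ == '__main__':
    rospy.init_node('conti_radar_listener')
    rospy.Subscriber('received_messages', Frame, callback)
    rospy.spin()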

Running Apollo 2.0 – GPU

This is a brief guide to getting Apollo 2.0 up and running. It is based on the Apollo README with additional setup for the Perception modules.

Prerequisites

  • Ubuntu 16.04 (also works on 14.04).
  • Nvidia GPU. Install the drivers as described here. You don’t need CUDA installed (it’s included in the Apollo docker). On 16.04 you will need a reasonably new version; the steps below are tested using 390.25. The Apollo-recommended 375.39 will not work on 16.04, but will work on 14.04. However, as this requires a newer GCC version that breaks the build system, it is much easier to go straight to the 390.25 driver.

Download code and Docker image

  1. Get the code: git clone https://github.com/ApolloAuto/apollo.git
  2. If you don’t have Docker already: ./apollo/docker/scripts/install_docker.sh
  3. Then log out and log back in again.
  4. Pull the docker image. The dev_start.sh script downloads the docker image (or updates it if already downloaded) and starts the container. cd apollo/ ./docker/scripts/dev_start.sh

Install Nvidia graphics drivers in the Docker image

  1. Check which driver you are using (in host) with nvidia-smi.
  2. First we need to enter the container with root privileges so we can install the matching graphics drivers: docker exec -it apollo_dev /bin/bash, then wget http://us.download.nvidia.com/XFree86/Linux-x86_64/***.**/NVIDIA-Linux-x86_64-***.**.run, where ***.** is the driver version running on your host system. Note: Disregard the Apollo instructions to upgrade to GCC 4.9. Not only is it unnecessary with newer versions of the Nvidia drivers, but it will make the build fail. Stick with GCC 4.8.4, which comes in the Docker image.
  3. Now install the drivers: chmod +x NVIDIA-Linux-x86_64-***.**.run, then ./NVIDIA-Linux-x86_64-***.**.run -a --skip-module-unload --no-kernel-module --no-opengl-files. Hit ‘enter’ to go with the default choices where prompted. Once done, check that the driver is working with nvidia-smi.
  4. To create a new image with your changes, check what the container ID of your image is (on the host): docker ps -l
  5. Use the resulting container ID with the following command to create a new image (on the host): docker commit CONTAINER_ID apolloauto/apollo:NEW_DOCKER_IMAGE_TAG where CONTAINER_ID is the container ID you found before, and NEW_DOCKER_IMAGE_TAG is the name you choose for your Apollo GPU image.

Build Apollo in your new Docker image

  1. To get into your new docker image, use the following: ./docker/scripts/dev_start.sh -l -t NEW_DOCKER_IMAGE_TAG followed by ./docker/scripts/dev_into.sh
  2. Now you should be able to build the GPU version of Apollo: ./apollo.sh clean followed by ./apollo.sh build_gpu

Run Apollo!

  1. From within the docker image, start Apollo: scripts/bootstrap.sh
  2. Check that Dreamview is running at http://localhost:8888.
  3. Set up in Dreamview by selecting the setup mode, vehicle, and map in the top right. For the sample data rosbag, select “Standard”, “Mkz8” and “Sunnyvale Big Loop”.
  4. Start the rosbag in the docker container with rosbag play path/to/rosbag.bag.
  5. Once you see the vehicle moving in Dreamview, pause the rosbag with the space bar.
  6. Wait a few seconds for the perception, prediction and traffic light modules to load.
  7. Resume playing the rosbag with the spacebar.

Once the rosbag has finished playing, to play it again you first have to shut down with scripts/bootstrap.sh stop and then repeat the above from step 1 (otherwise the time discrepancy stops the modules from working).

Rosbag Record from Rosbag Play, timestamps out of sync

When recording data from a previously recorded rosbag instead of sensor data, the clock might become a problem.
rosbag record sets the clock to the time at which the new rosbag is being created, but the original message timestamps are not updated, causing the clock in the rosbag and the topic timestamps to be out of sync.

To fix this, add /clock to the list of recorded topics when recording the new rosbag. This will keep the clock of the original rosbag instead of creating a new one.

Example:

<launch>

  <param name="use_sim_time" value="true" />

  <node pkg="rosbag" type="play" name="rosbagplay" output="screen" required="true" args="--clock /PATH/TO/ROSBAGTOPLAY"/>

  <node pkg="rosbag" type="record" name="rosbagrecord" output="screen" required="true" args="/clock LISTOFTOPICS -O /PATH/TO/OUTPUTFILE"/>

</launch>


Traffic Light recognition

Prerequisites:

  • Vector Map
  • NDT working
  • Calibration publisher
  • Tf between camera and localizer

Traffic light recognition is split into two parts:

  1. feat_proj finds the ROIs of the traffic signals in the current camera FOV.
  2. region_tlr checks each ROI and publishes the result; it also publishes the /tlr_superimpose_image image with the traffic lights overlaid.
    2a. region_tlr_ssd is a deep learning based detector.

Launch Feature Projection

roslaunch road_wizard feat_proj.launch camera_id:=/camera0

Launch HSV classifier

roslaunch road_wizard traffic_light_recognition.launch camera_id:=/camera0 image_src:=/image_XXXX

SSD Classifier

roslaunch road_wizard traffic_light_recognition_ssd.launch camera_id:=/camera0 image_src:=/image_XXXX network_definition_file:=/PATH_TO_NETWORK_DEFINITION/deploy.prototxt pretrained_model_file:=/PATH_TO_MODEL/Autoware_tlrSSD.caffemodel use_gpu:=true gpu_device_id:=0

How to install SSD Caffe for Autoware

Caffe Prerequisites

  1. sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
  2. sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
  3. sudo apt-get install libatlas-base-dev

Clone SSD fork of Caffe

  1. Go to your home directory
  2. Clone the code: git clone https://github.com/weiliu89/caffe.git ssdcaffe
  3. Move inside the directory cd ssdcaffe
  4. Checkout compatible API version git checkout 5365d0dccacd18e65f10e840eab28eb65ce0cda7
  5. Create config file cp Makefile.config.example Makefile.config
  6. Start building make
  7. Once completed execute make distribute
  8. Compile Autoware, Cmake will detect SSD Caffe and compile the SSD nodes.
  9. To test, download the object detection models from: http://ertl.jp/~amonrroy/ssd_models/ssd500.zip http://ertl.jp/~amonrroy/ssd_models/ssd300.zip The 300 model will run faster but won’t provide good results at farther distances. In contrast, the 500 model requires more computing power but will detect objects at lower resolutions (farther objects).
  10. In Autoware’s RTM, use the [app] button next to ssd_unc in the Computing tab to select the correct image input source and the model path.
  11. Launch the node and play a rosbag with image data.
  12. In Rviz add the ImageViewer Panel
  13. Select the Image topic and the Object Rect topic

How to setup Nvidia Drivers and CUDA in Ubuntu

Disabling Nouveau, if required (login loop or low-res mode); otherwise skip to the next section

  1. Confirm nouveau is loaded

lsmod | grep -i nouveau

You’ll see the text nouveau in the 4th column, if loaded

video XXXX Y nouveau

  2. If nouveau is loaded then blacklist it. Create the file blacklist-nouveau.conf in /etc/modprobe.d/

sudo nano /etc/modprobe.d/blacklist-nouveau.conf

And add the following text

blacklist nouveau
options nouveau modeset=0
  3. Execute sudo update-initramfs -u

  4. Restart

NVIDIA Driver Setup

  1. Download the RUN file from NVidia’s website https://www.geforce.com/drivers You’ll have a file named similarly to NVIDIA-Linux-x86_64-XXX.YY.run

  2. Assign execution permissions chmod +x NVIDIA-Linux-x86_64-XXX.YY.run

  3. Move to a virtual console pressing Ctrl+Alt+F1 and login

  4. Terminate the X Server executing sudo service lightdm stop

  5. Run the installer sudo ./NVIDIA-Linux-x86_64-XXX.YY.run (If you are running on a laptop run instead sudo ./NVIDIA-Linux-x86_64-XXX.YY.run --no-opengl-files)

  6. Follow the instructions from the wizard. At the end, do not allow the wizard to modify the X configuration.

  7. Once back in the console, execute sudo service lightdm start. The GUI should be displayed. Login.

  8. To confirm everything is set run in a terminal nvidia-smi

CUDA Setup

  1. Download the CUDA Installation RUN File from https://developer.nvidia.com/cuda-downloads You’ll have a file named similarly to cuda_X.0.YY.Z_linux.run

  2. Assign execution permissions chmod +x cuda_X.0.YY.Z_linux.run

  3. Run the installer sudo ./cuda_X.0.YY.Z_linux.run

  4. Follow the instructions on screen. DO NOT install the NVIDIA driver included. Install the CUDA samples in your home directory.

  5. Once finished, confirm everything is OK: go to your home directory and execute cd NVIDIA_CUDA-X.Y_Samples/1_Utilities/deviceQuery, matching X.Y to your CUDA version, e.g. for CUDA 9.0: cd NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery

  6. Compile the sample running make

  7. Run the sample with ./deviceQuery; you should see the details about your GPU(s) and CUDA setup.

Autoware Full Stack to achieve autonomous driving

Velodyne – BaseLink TF

roslaunch runtime_manager setup_tf.launch x:=1.2 y:=0.0 z:=2.0 yaw:=0.0 pitch:=0.0 roll:=0.0 frame_id:=/base_link child_frame_id:=/velodyne period_in_ms:=10

Robot Model

roslaunch model_publisher vehicle_model.launch

PCD Map

rosrun map_file points_map_loader noupdate PCD_FILES_SEPARATED_BY_SPACES

VectorMap

rosrun map_file vector_map_loader CSV_FILES_SEPARATED_BY_SPACES

World-Map TF

roslaunch world_map_tf.launch

Example launch file (From Moriyama data)

<launch>
  <!-- world map tf -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_map" args="14771 84757 -39 0 0 0 /world /map 10" />
</launch>

roslaunch PATH/TO/MAP_WORLD_TF.launch

Voxel Grid Filter

roslaunch points_downsampler points_downsample.launch node_name:=voxel_grid_filter

Ground Filter

roslaunch points_preprocessor ring_ground_filter.launch node_name:=ring_ground_filter point_topic:=/points_raw

NMEATOPOSE (IF GNSS available)

roslaunch gnss_localizer nmea2tfpose.launch plane:=7

NDT Matching

roslaunch ndt_localizer ndt_matching.launch use_openmp:=False use_gpu:=False get_height:=False

Mission Planning

rosrun lane_planner lane_rule

rosrun lane_planner lane_stop

roslaunch lane_planner lane_select.launch enablePlannerDynamicSwitch:=False

roslaunch astar_planner obstacle_avoid.launch avoidance:=False avoid_distance:=13 avoid_velocity_limit_mps:=4

roslaunch autoware_connector vel_pose_connect.launch topic_pose_stamped:=/ndt_pose topic_twist_stamped:=/estimate_twist sim_mode:=False

roslaunch astar_planner velocity_set.launch use_crosswalk_detection:=False enable_multiple_crosswalk_detection:=False points_topic:=points_no_ground enablePlannerDynamicSwitch:=False

roslaunch waypoint_maker waypoint_loader.launch multi_lane_csv:=/path/to/saved_waypoints.csv decelerate:=1

roslaunch waypoint_follower pure_pursuit.launch is_linear_interpolation:=True publishes_for_steering_robot:=False

roslaunch waypoint_follower twist_filter.launch

How to setup IMU XSense MTI

Check which USB device has been assigned to the IMU using dmesg:

[ 8808.219908] usb 3-3: new full-speed USB device number 28 using xhci_hcd
[ 8808.237513] usb 3-3: New USB device found, idVendor=2639, idProduct=0013
[ 8808.237522] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 8808.237527] usb 3-3: Product: MTi-300 AHRS
[ 8808.237531] usb 3-3: Manufacturer: Xsens
[ 8808.237534] usb 3-3: SerialNumber: 03700715
[ 8808.265957] usbcore: registered new interface driver usbserial
[ 8808.265982] usbcore: registered new interface driver usbserial_generic
[ 8808.265999] usbserial: USB Serial support registered for generic
[ 8808.268037] usbcore: registered new interface driver xsens_mt
[ 8808.268048] usbserial: USB Serial support registered for xsens_mt
[ 8808.268063] xsens_mt 3-3:1.1: xsens_mt converter detected
[ 8808.268112] usb 3-3: xsens_mt converter now attached to ttyUSB0
  1. Change the permissions of the device: chmod a+rw /dev/ttyUSB0. It is probably ttyUSB0, but change it according to your setup.
  2. In an Autoware sourced terminal execute: rosrun xsens_driver mtdevice.py -m 2 -f 100 (this configures the IMU to publish raw data from the sensor at 100 Hz). To publish data, execute (in a sourced terminal): rosrun xsens_driver mtnode.py _device:=/dev/ttyUSB0 _baudrate:=115200
  3. Confirm data is actually arriving using rostopic echo /imu_raw in a different terminal.