
Carla Multi Sensor Fusion


This ROS package is a bridge that enables two-way communication between ROS and CARLA: information from the CARLA server is translated into ROS topics, and, in the same way, messages sent between ROS nodes are translated into commands applied in CARLA.

Features

  • Provide Sensor Data (Lidar, Semantic lidar, Cameras (depth, segmentation, RGB, DVS), GNSS, Radar, IMU)
  • Provide Object Data (Transforms (via tf), Traffic light status, Visualization markers, Collision, Lane invasion)
  • Control AD Agents (Steer/Throttle/Brake)
  • Control CARLA (Play/pause simulation, Set simulation parameters)

Commands for Use Case 1

# start carla
cd /opt/Carla; ./Carla.sh

# start the ROS bridge with an example ego vehicle
ros2 launch carla_ros_bridge carla_ros_bridge_with_example_ego_vehicle.launch.py

# for scan matching point cloud odom
ros2 launch multi_sensor_fusion scan_matching.launch.py

# for gps to odom
ros2 launch multi_sensor_fusion gps_odom.launch.py

# for odom fusion
ros2 launch multi_sensor_fusion odom_fusion.launch.py

# for plot juggler
ros2 launch multi_sensor_fusion plot_juggler.launch.py

Commands for Use Case 2

# start carla
cd /opt/Carla; ./Carla.sh

# start the ROS bridge with an example ego vehicle
ros2 launch carla_ros_bridge carla_ros_bridge_with_example_ego_vehicle.launch.py

# generate traffic
cd /opt/Carla/PythonAPI/examples; python generate_traffic.py -n 30


# for radar safety
ros2 launch multi_sensor_fusion radar_safety.launch.py

ROS2 Graph

(see rosgraph.png)

Sensor Fusion EKF

(see sensor_fusion.gif)

Available sensors for fusion

  1. Lidar (point cloud data) -> (x, y) (to be improved)
  2. GPS (latitude, longitude) -> (x, y) (low accuracy)
  3. Odometry -> (x, y) (random noise)
  4. Camera -> (x, y)
  5. IMU -> (orientation: yaw, pitch, roll)

Sensor Fusion

  1. Extended Kalman Filter (EKF)
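As a rough illustration of the fusion step, here is a minimal constant-velocity Kalman filter that fuses noisy (x, y) fixes (e.g. GPS or scan-matching odometry). This is a sketch only: the EKF in this package handles a nonlinear motion/measurement model, and the noise parameters `q` and `r` below are assumptions.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.1, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x: state [px, py, vx, vy], P: 4x4 covariance,
    z: (x, y) measurement (e.g. a GPS or odometry fix)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)   # process noise (assumed)
    R = r * np.eye(2)   # measurement noise (assumed)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# fuse noisy fixes around a vehicle moving at 1 m/s along x
x = np.zeros(4)
P = np.eye(4)
rng = np.random.default_rng(0)
for t in range(1, 51):
    z = np.array([t * 0.1, 0.0]) + rng.normal(0, 0.5, 2)
    x, P = kf_step(x, P, z, dt=0.1)
print(np.round(x[:2], 2))
```

The filtered position converges near the true final position (5.0, 0.0) despite 0.5 m measurement noise.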

Collision Tracking

  1. If a vehicle is approaching head-on, predict in how many seconds the collision will occur
  2. Predict its future position using the fused sensor estimate
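The time-to-collision estimate can be sketched as a simple range-over-closing-speed calculation. The values below are hypothetical; the actual node derives range and relative speed from the radar returns.

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact for an object approaching head-on.
    Returns None if the object is not closing."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

# object 21 m ahead, closing at 7 m/s -> 3.0 s to react
ttc = time_to_collision(21.0, 7.0)
print(ttc)
```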

Object Detection

conda activate ai

python tools/demo.py --source inference/images

Commands for object detection and lane segmentation

/zenoh-bridge-ros2dds -l tcp/0.0.0.0:7447


conda activate ai

python opencv_publisher.py

ros2 run rqt_image_view rqt_image_view

python object_lane_detection_node.py
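Before frames reach the detector they are typically resized to the network's square input while preserving aspect ratio. Below is a minimal YOLO-style letterbox sketch in plain numpy; the exact preprocessing in YOLOP's demo.py (and the pad value 114) may differ, and in practice cv2.resize would replace the nearest-neighbour indexing.

```python
import numpy as np

def letterbox(img, size=640, pad=114):
    """Resize an HxWx3 frame to size x size, preserving aspect
    ratio and filling the borders with a constant pad value."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resize via index maps (cv2.resize in practice)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # paste the resized frame centered on a padded canvas
    out = np.full((size, size, 3), pad, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
square = letterbox(frame)
print(square.shape)
```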

Results

# single images

# with cpu

Done. (2.191s)
inf : (0.2139s/frame)   nms : (0.0011s/frame)

# with gpu
Done. (2.041s)
inf : (0.0525s/frame)   nms : (0.0606s/frame)


# video frames

# with gpu
Done. (44.252s)
inf : (0.0243s/frame)   nms : (0.0032s/frame)

# with cpu
Done. (84.558s)
inf : (0.1766s/frame)   nms : (0.0010s/frame)
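From the timings above, the GPU speedup on the inference step works out to roughly 4x for single images and 7x for video frames:

```python
# inference seconds/frame taken from the results above
single_cpu, single_gpu = 0.2139, 0.0525
video_cpu, video_gpu = 0.1766, 0.0243

single_speedup = single_cpu / single_gpu   # ~4.1x
video_speedup = video_cpu / video_gpu      # ~7.3x
print(round(single_speedup, 1), round(video_speedup, 1))
```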