The Indoor Positioning System detects the position, orientation and identity of multiple vehicles simultaneously. The position and orientation are updated at 50 Hz.
The Indoor Positioning System works with active light sources (LEDs) on the vehicles and a camera looking down from the ceiling. The LEDs are detected based on their high brightness. The camera is set to a very short exposure (~100 microseconds). Thus, the ambient light creates a very small signal (almost black), while the LEDs still appear as white dots. The short exposure time also eliminates the problem of motion blur: at a top speed of 4 m/s, the vehicle travels only 0.4 mm during the exposure.
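The blur bound quoted above is simply speed times exposure time; a quick check of the arithmetic (the function name is illustrative, not part of the IPS code):

```cpp
// Worst-case motion blur: distance travelled during one exposure.
// 4 m/s top speed and ~100 microseconds exposure are the figures from the text.
double motion_blur_m(double speed_mps, double exposure_s) {
    return speed_mps * exposure_s;   // 4.0 * 100e-6 = 0.0004 m = 0.4 mm
}
```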
Other light sources and reflective surfaces on the vehicle can create false signals and must be covered with tape. This includes the connectors on the Raspberry Pi and the status LEDs on the motor speed controller.
The three outer LEDs indicate the vehicle's pose and are permanently illuminated.
TODO explain calibration jig
The central LED flashes in a pattern that is different for each vehicle. The patterns are chosen such that sampling effects do not create ambiguous signals. TODO reference MA Isabelle Tuelleners, cf. Nyquist–Shannon sampling theorem.
| Vehicle ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Period of flashing (frames) | 4 | 7 | 10 | 13 | 16 | 7 | 10 | 13 | 16 | 19 | 10 | 13 | 16 | 19 | 22 | 13 | 16 | 19 | 22 | 25 | 16 | 19 | 22 | 25 | 28 |
| LED illumination duration (frames) | 2 | 2 | 2 | 2 | 2 | 5 | 5 | 5 | 5 | 5 | 8 | 8 | 8 | 8 | 8 | 11 | 11 | 11 | 11 | 11 | 14 | 14 | 14 | 14 | 14 |
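Note that the period alone is not unique (for example, a period of 16 frames occurs for vehicles 5, 9, 13, 17 and 21), so the illumination duration is needed to disambiguate. A minimal lookup sketch, with the table values from above (the function name is hypothetical, not part of the IPS code):

```cpp
#include <map>
#include <utility>

// Map from (flash period, illumination duration), both in frames, to vehicle ID.
// The table above follows a regular pattern: each group of five vehicles shares
// a duration, and the five periods within a group are the base periods
// {4, 7, 10, 13, 16} shifted up by 3 frames per group.
int vehicle_id_from_pattern(int period_frames, int on_frames) {
    static const std::map<std::pair<int, int>, int> table = [] {
        std::map<std::pair<int, int>, int> t;
        const int periods[]   = {4, 7, 10, 13, 16};
        const int durations[] = {2, 5, 8, 11, 14};
        int id = 1;
        for (int row = 0; row < 5; ++row)       // rows share a duration
            for (int col = 0; col < 5; ++col)   // periods shift by 3 per row
                t[{periods[col] + 3 * row, durations[row]}] = id++;
        return t;
    }();
    auto it = table.find({period_frames, on_frames});
    return it != table.end() ? it->second : -1;   // -1: unknown pattern
}
```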
The IPS software has two major components, the LED detection and the IPS pipeline. The LED detection reads images from the camera at 50 Hz and extracts the list of image coordinates for all visible LEDs. These LedPoints are published via DDS. The raw images are not saved or published, as this would create impractically large data volumes. The IPS pipeline processes the LED points and publishes VehicleObservations, which include the vehicle's position, orientation, identity and a timestamp.
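The two message types described above can be pictured roughly as follows. The field names are illustrative; the actual DDS type definitions in the repository may differ:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch of the two DDS message types; field names are
// assumptions, not the real IDL definitions.
struct LedPoint {
    float x;             // image coordinates, pixels
    float y;
};

struct LedPoints {
    uint64_t stamp_ns;   // capture time of the camera frame
    std::vector<LedPoint> points;
};

struct VehicleObservation {
    uint8_t  vehicle_id; // identity decoded from the flashing pattern
    uint64_t stamp_ns;
    double   x;          // position on the driving surface, metres
    double   y;
    double   yaw;        // orientation, radians
};
```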
The LED detection uses the OpenCV functions cv::threshold and cv::findContours to find the centers of the LEDs.
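The idea behind those two calls can be sketched without OpenCV: binarize the image against a brightness threshold (what cv::threshold does), then find each connected bright region and take its centroid (the real code gets blob outlines from cv::findContours; the flood fill below is a dependency-free stand-in):

```cpp
#include <cstdint>
#include <queue>
#include <vector>

struct Point2f { float x, y; };

// Dependency-free sketch of the LED detection: threshold, then one centroid
// per connected bright blob. img is a row-major 8-bit grayscale image.
std::vector<Point2f> detect_leds(const std::vector<uint8_t>& img,
                                 int width, int height, uint8_t thresh) {
    std::vector<Point2f> centers;
    std::vector<bool> visited(img.size(), false);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            if (visited[i] || img[i] <= thresh) continue;
            // Flood-fill one bright region, accumulating its centroid.
            long sx = 0, sy = 0, n = 0;
            std::queue<int> q;
            q.push(i);
            visited[i] = true;
            while (!q.empty()) {
                int j = q.front(); q.pop();
                int px = j % width, py = j / width;
                sx += px; sy += py; ++n;
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int nx = px + dx[k], ny = py + dy[k];
                    int nj = ny * width + nx;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height &&
                        !visited[nj] && img[nj] > thresh) {
                        visited[nj] = true;
                        q.push(nj);
                    }
                }
            }
            centers.push_back({float(sx) / n, float(sy) / n});
        }
    return centers;
}
```

Because the exposure is so short, a single fixed threshold is enough to separate the LEDs from the nearly black background.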
The IPS pipeline performs a relatively complex data processing task. To make the software easier to understand, it is broken down into independent, stateless processing steps.
The UndistortPoints function transforms the image/pixel coordinates to metric coordinates on the driving surface (floor). The calculation is based on a calibration image. See https://git.rwth-aachen.de/CPM/Project/Lab/software/tree/master/matlab_scripts/camera_calibration
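If the calibration yields a planar homography H from pixel coordinates to floor coordinates, applying it looks like the sketch below. This is an assumption for illustration; the actual MATLAB calibration may use a richer model that also corrects lens distortion:

```cpp
#include <array>

struct FloorPoint { double x, y; };   // metres on the driving surface

// Sketch of the pixel -> floor mapping, assuming calibration produced a
// 3x3 planar homography H. Shows only the projective part; a full model
// would undistort the pixel coordinates first.
FloorPoint pixel_to_floor(const std::array<std::array<double, 3>, 3>& H,
                          double u, double v) {
    double X = H[0][0] * u + H[0][1] * v + H[0][2];
    double Y = H[1][0] * u + H[1][1] * v + H[1][2];
    double W = H[2][0] * u + H[2][1] * v + H[2][2];
    return {X / W, Y / W};   // homogeneous normalisation
}
```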
The DetectVehicles function groups points into vehicles and classifies each point as front, center, back_left or back_right. Points that do not appear to belong to any vehicle are discarded.
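One plausible way to classify the three outer points is purely geometric. The sketch below assumes the two back LEDs sit closer to each other than either does to the front LED, and a y-up coordinate frame; the real DetectVehicles logic follows the actual vehicle geometry and may differ:

```cpp
struct Pt { double x, y; };

struct OuterLeds { Pt front, back_left, back_right; };

static double d2(Pt a, Pt b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Hypothetical classification of the three outer LEDs: the closest pair is
// taken as the back pair, the remaining point as the front. Left vs right is
// decided by the cross product of the heading (back midpoint towards front)
// with the vector to p; positive means p lies to the left (y-up frame).
OuterLeds classify_outer(Pt a, Pt b, Pt c) {
    Pt front = c, p = a, q = b;
    if (d2(a, c) < d2(a, b) && d2(a, c) < d2(b, c)) { front = b; p = a; q = c; }
    else if (d2(b, c) < d2(a, b) && d2(b, c) < d2(a, c)) { front = a; p = b; q = c; }
    Pt mid{(p.x + q.x) / 2, (p.y + q.y) / 2};
    double hx = front.x - mid.x, hy = front.y - mid.y;
    double cross = hx * (p.y - mid.y) - hy * (p.x - mid.x);
    if (cross > 0) return {front, p, q};
    return {front, q, p};
}
```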