The Indoor Positioning System (IPS) detects the position and orientation (pose) and the identity of multiple vehicles simultaneously. The poses are updated at 50 Hz.

Optics

The Indoor Positioning System works with active light sources (LEDs) on the vehicles and a camera looking down from the ceiling. The LEDs are detected based on their high brightness. The camera is set to a very short exposure (~100 microseconds). Thus, the ambient light creates a very small signal (almost black), while the LEDs still appear as white dots. The short exposure time also eliminates the problem of motion blur: at a top speed of 4 m/s, the vehicle travels only 0.4 mm during the exposure.

Other light sources and reflective surfaces on the vehicle can create false signals and must be covered with tape. This includes the connectors on the Raspberry Pi and the status LEDs on the motor speed controller.

Vehicle Pose

The outer three LEDs indicate the vehicle pose and are permanently illuminated. The section Pose Calibration describes how the LED positions are related to the reference pose.
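
As a rough sketch of the idea (the exact LED layout and reference pose are defined in the Pose Calibration section; the layout assumed below, two rear LEDs and one front LED, and all names are only illustrative), the pose can be thought of as the mid point between the two rear LEDs plus a direction vector towards the front LED:

#include <cmath>
#include <opencv2/core.hpp>

struct Pose2D { double x; double y; double yaw; };

// Illustrative only: assumes two LEDs at the back and one at the front.
Pose2D pose_from_leds(const cv::Point2d& back_left,
                      const cv::Point2d& back_right,
                      const cv::Point2d& front)
{
    // Mid point of the two back LEDs (cf. the calibration features back_x, back_y).
    const cv::Point2d back = 0.5 * (back_left + back_right);

    // Vector from the back mid point to the front LED (direction_x, direction_y).
    const cv::Point2d direction = front - back;

    return { back.x, back.y, std::atan2(direction.y, direction.x) };
}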

...

The central LED flashes in a pattern that is different for each vehicle. The patterns are chosen such that sampling effects do not create ambiguous signals (cf. the Nyquist–Shannon sampling theorem). See our paper Vision-Based Real-Time Indoor Positioning System for Multiple Vehicles for more information. The table below lists the flashing pattern of each vehicle.

Vehicle ID                             1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25
Period of flashing (frames)            4   7  10  13  16   7  10  13  16  19  10  13  16  19  22  13  16  19  22  25  16  19  22  25  28
LED illumination duration (frames)     2   2   2   2   2   5   5   5   5   5   8   8   8   8   8  11  11  11  11  11  14  14  14  14  14
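
As a rough sketch of the identification step (the actual implementation in the lab software may differ), a vehicle can be identified by measuring the flashing period and the illumination duration of its central LED, both in camera frames, and looking the pair up in the table above:

#include <array>
#include <cstddef>

struct FlashPattern { int period; int duration; };  // both in number of frames

// The table above, indexed by vehicle ID - 1.
constexpr std::array<FlashPattern, 25> patterns = {{
    {4, 2},  {7, 2},  {10, 2},  {13, 2},  {16, 2},
    {7, 5},  {10, 5}, {13, 5},  {16, 5},  {19, 5},
    {10, 8}, {13, 8}, {16, 8},  {19, 8},  {22, 8},
    {13, 11},{16, 11},{19, 11}, {22, 11}, {25, 11},
    {16, 14},{19, 14},{22, 14}, {25, 14}, {28, 14}
}};

// Returns the vehicle ID (1..25), or 0 if the observed pattern is unknown.
int identify_vehicle(int period, int duration)
{
    for (std::size_t i = 0; i < patterns.size(); ++i)
        if (patterns[i].period == period && patterns[i].duration == duration)
            return static_cast<int>(i) + 1;
    return 0;
}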

...

The IPS software has two major components, the LED detection and the IPS pipeline. The LED detection reads images from the camera at 50 Hz and extracts the list of image coordinates for all visible LEDs. These LedPoints are published via DDS. The raw images are not saved or published, as this would create impractically large data volumes. The IPS pipeline processes the LED points and publishes VehicleObservations, which include the vehicle's position, orientation, identity and a timestamp.
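
The exact message definitions are part of the lab software's DDS/IDL files; purely as a hypothetical illustration of what flows between the two components (the field names below are assumptions, not the real definitions):

#include <cstdint>
#include <vector>

// Illustrative only: the actual DDS types in the lab software may differ.
struct ImagePoint
{
    double x;   // pixel coordinates of one detected LED
    double y;
};

struct LedPoints
{
    std::uint64_t time_stamp;          // when the camera frame was taken
    std::vector<ImagePoint> points;    // all visible LEDs in that frame
};

struct VehicleObservation
{
    std::uint8_t  vehicle_id;          // identity decoded from the flashing pattern
    double        x, y;                // position on the floor, in meters
    double        yaw;                 // orientation, in radians
    std::uint64_t time_stamp;          // time the pose refers to
};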

LED Detection

The LED detection uses the OpenCV functions cv::threshold, cv::findContours and cv::moments to find the centers of the LEDs.
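
A minimal sketch of how these three functions fit together is shown below; the threshold value and contour parameters are assumptions, not necessarily the values used in the lab software.

#include <opencv2/imgproc.hpp>
#include <vector>

// Extracts the image coordinates of the LED centers from one camera frame,
// assuming an 8-bit grayscale image.
std::vector<cv::Point2d> detect_leds(const cv::Mat& frame)
{
    // Separate the bright LED spots from the almost-black background.
    cv::Mat binary;
    cv::threshold(frame, binary, 127, 255, cv::THRESH_BINARY);

    // Each connected white blob is one LED.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // The centroid of each blob, computed from its image moments, is the LED center.
    std::vector<cv::Point2d> centers;
    for (const auto& contour : contours)
    {
        const cv::Moments m = cv::moments(contour);
        if (m.m00 > 0.0)
            centers.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    return centers;
}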

...

The IPS pipeline performs a relatively complex data processing task. To make the software easier to understand, it is broken down into independent, stateless processing steps.

The UndistortPoints function transforms the image/pixel coordinates to metric coordinates on the driving surface (floor). The calculation is based on a calibration image. See https://git.rwth-aachen.de/CPM/Project/Lab/software/tree/master/matlab_scripts/camera_calibration
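
The exact mapping is produced by the linked MATLAB calibration scripts. Purely as an illustration of the idea, the following sketch assumes a planar homography between the image plane and the floor, estimated from four reference points in the calibration image (the coordinates below are made up):

#include <opencv2/calib3d.hpp>
#include <vector>

// Illustration only: the real mapping comes from the camera_calibration scripts.
std::vector<cv::Point2f> undistort_points(const std::vector<cv::Point2f>& pixels)
{
    // Four reference points: where they appear in the calibration image (pixels)
    // and where they lie on the driving surface (meters). Values are made up.
    const std::vector<cv::Point2f> image_ref = {{100.f, 80.f}, {1820.f, 80.f}, {1820.f, 1120.f}, {100.f, 1120.f}};
    const std::vector<cv::Point2f> floor_ref = {{0.f, 4.f},    {4.5f, 4.f},    {4.5f, 0.f},      {0.f, 0.f}};

    const cv::Mat H = cv::findHomography(image_ref, floor_ref);

    // Map the detected LED points from pixel coordinates to floor coordinates.
    std::vector<cv::Point2f> floor_points;
    cv::perspectiveTransform(pixels, floor_points, H);
    return floor_points;
}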

...

To accurately implement this definition of the pose, another measurement and calibration procedure is performed. First we need a means of accurately placing the vehicle on the floor, such that the true pose is known. The vehicle is clamped into a piece of wood, which extends the vehicle's local x/y coordinate system. This makes it simple to manually align the vehicle with the checkerboard pattern.

While the IPS is running, the vehicle is placed in various poses on the floor, following a particular calibration sequence. The calibration features (back_x, back_y, direction_x, direction_y) are recorded from the running IPS. A linear calibration is then fitted using least squares.
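
As a sketch of the fitting step, assume the calibration model is an affine function of the four recorded features (the actual model in the lab software may differ). The least-squares coefficients can then be computed, for example, with cv::solve:

#include <opencv2/core.hpp>
#include <vector>

struct CalibrationSample
{
    double back_x, back_y, direction_x, direction_y;  // recorded from the running IPS
    double true_x, true_y;                            // known pose set via the checkerboard
};

// Fits true_x = c0 + c1*back_x + c2*back_y + c3*direction_x + c4*direction_y
// (and the same for true_y) in the least-squares sense.
// Requires at least 5 samples; returns a 5x2 matrix of coefficients.
cv::Mat fit_linear_calibration(const std::vector<CalibrationSample>& samples)
{
    cv::Mat A(static_cast<int>(samples.size()), 5, CV_64F);
    cv::Mat b(static_cast<int>(samples.size()), 2, CV_64F);
    for (int i = 0; i < A.rows; ++i)
    {
        const CalibrationSample& s = samples[i];
        A.at<double>(i, 0) = 1.0;
        A.at<double>(i, 1) = s.back_x;
        A.at<double>(i, 2) = s.back_y;
        A.at<double>(i, 3) = s.direction_x;
        A.at<double>(i, 4) = s.direction_y;
        b.at<double>(i, 0) = s.true_x;
        b.at<double>(i, 1) = s.true_y;
    }

    cv::Mat coefficients;
    cv::solve(A, b, coefficients, cv::DECOMP_SVD);  // least-squares solution
    return coefficients;
}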

...