It’s clear that AR tracking technology has come a long way since we started TWNKLS seven years ago. Back then, the available computer vision technologies allowed us to recognize and track small artificial patterns (‘markers’), and Natural Feature Tracking (NFT) was just becoming possible on mobile devices. At that time, a physical marker was always needed, and the augmentation could not be much larger than two to three times the size of the marker.
Fast forward to now, and we can show augmentations of almost unlimited size and at the right scale, using the Visual-Inertial Odometry (VIO) tracking techniques present in ARKit, ARCore and HoloLens.
Yet one piece of the puzzle is still missing from these platforms (although HoloLens has a rudimentary implementation): the ability to accurately recognize where the device is located within large areas.
The benefits of indoor positioning with VIO tracking:
- Pedestrian wayfinding in large buildings such as airports and hospitals
- Showing maintenance instructions (logged maintenance, future maintenance) in 3D on large, real-world installations, such as those in the process industry, manufacturing, or engine rooms
- Showing live sensor information about material flows in pipes and vessels
- Increasing the situational awareness of (private/public) security officers and their commanders
Requirements for indoor positioning with VIO tracking:
For these applications it must be possible to locate the user’s device in an indoor environment and to track the device while it moves.
This differs from object tracking in that the environment can be assumed to be static, which makes it possible to use the device’s inertial sensors (accelerometer, gyroscope) to help with the tracking.
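To illustrate why those inertial sensors help, here is a minimal sketch (not TWNKLS code) of a classic complementary filter: the gyroscope gives fast but drifting orientation updates, while the accelerometer, in a static environment, gives a noisy but drift-free tilt reference from gravity. The sample values and `alpha` weight are illustrative assumptions.

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Fuse the integrated gyroscope rate (fast, but drifts) with the
    accelerometer's gravity-based tilt estimate (noisy, but drift-free)."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Simulated samples: the device is held still at 10 degrees of pitch,
# while a small constant gyro bias would make pure integration drift away.
pitch = 0.0
for _ in range(500):
    gyro_rate = 0.5       # deg/s of bias, no real rotation
    accel_pitch = 10.0    # accelerometer reads the true tilt
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt=0.01)

print(round(pitch, 1))    # settles near the true 10 degrees despite the bias
```

With the accelerometer term removed (`alpha=1.0`), the same loop would drift without bound; the fusion is what keeps the estimate anchored.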
The starting point for such wide-area indoor place recognition and tracking is generally a 3D scan of the space. This can be made with photography/photogrammetry, LIDAR scanners, or a combination of both.
Indoor positioning with VIO tracking at TWNKLS HQ:
At TWNKLS we have been researching an indoor positioning system for handheld devices as well as smart eyewear, where the 3D scan is made using photogrammetry. The following video shows a first test:
- The large area on the left shows a top-down view of a 3D reconstruction of the TWNKLS office. The top right shows the view of the VIO tracker, and the bottom right shows the tracked camera’s view of the 3D reconstruction from its estimated viewpoint.
- At the start of the video the coffee corner is recognized, and the system starts tracking.
- As the camera moves around the central meeting rooms of the office, the coordinate system of the camera can be seen to move to the left, down and then to the right again, which matches the movement in the real world.
- When a previously mapped area comes into view and is recognized, the drift that has built up during VIO tracking is corrected immediately.
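The drift correction in the last step can be sketched as follows (an illustrative example, not the actual implementation): the VIO tracker reports poses in its own slowly drifting frame, and whenever place recognition returns an absolute pose for the same instant, a correction transform is estimated that maps the VIO frame into the map frame and is then applied to all later VIO poses. The pose values below are made up, and for simplicity only translational drift is shown; the same 4x4 transforms also carry rotational drift.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous transform with the given translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

vio_pose = translation(5.2, 0.0, 3.1)   # drifted VIO estimate at recognition time
map_pose = translation(5.0, 0.0, 3.0)   # absolute pose from place recognition

# Correction that maps poses from the drifting VIO frame into the map frame:
correction = map_pose @ np.linalg.inv(vio_pose)

# A later VIO pose, still expressed in the drifting frame:
later_vio = translation(6.2, 0.0, 3.1)
corrected = correction @ later_vio
print(np.round(corrected[:3, 3], 2))    # → [6. 0. 3.], drift offset removed
```

In practice the correction would be applied (or blended in over a few frames to avoid a visible jump) until the next recognition event produces a fresh one.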
This demo was recorded with our system running in Unity3D on a laptop, using an external camera. CPU usage for the tracking is only about 15%, and the tracker has even run on an Intel M3 Compute Stick and an NVIDIA Jetson TX1.
This system is under active development, and we expect a first deployment in a project in the near future.
This project was made possible through funding from the Dutch Ministry of Economic Affairs.