In this concept study, I wanted to find out to what extent a smartphone's position sensors can be used to detect simple gestures performed by moving the phone through space. To that end, I looked in depth at the state of the art in sensor fusion (Kalman filtering) using the accelerometer, gyroscope, and magnetometer.
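To illustrate the fusion idea, here is a minimal complementary filter, a common lightweight stand-in for a full Kalman filter: the gyroscope gives smooth but drifting rate data, the accelerometer gives a noisy but absolute attitude reference, and the two are blended. All names, the blending factor, and the single-axis simplification are illustrative assumptions, not the original implementation.

```javascript
// Complementary filter for a single tilt axis (pitch).
// samples: [{ gyroRate, ax, az }] with gyroRate in rad/s and
// ax/az in m/s^2; dt is the sample interval in seconds.
function complementaryFilter(samples, dt, alpha = 0.98) {
  let angle = 0; // estimated pitch in radians
  const history = [];
  for (const { gyroRate, ax, az } of samples) {
    // Accelerometer yields an absolute (but noisy) pitch reference
    // from the direction of gravity.
    const accelAngle = Math.atan2(ax, az);
    // Integrate the gyro rate for smoothness, then pull the estimate
    // gently toward the accelerometer reference to cancel drift.
    angle = alpha * (angle + gyroRate * dt) + (1 - alpha) * accelAngle;
    history.push(angle);
  }
  return history;
}
```

A full Kalman filter additionally tracks the uncertainty of each source and weights them optimally, but the blend-two-references structure is the same.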

By calculating the smartphone's orientation, the resulting linear acceleration could be isolated and integrated twice to reconstruct the movement in three-dimensional space. An algorithm then determined the underlying plane of the motion and projected the trace onto it in two dimensions. Finally, a library for simple shape recognition was used to identify shapes in the resulting images.
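The motion-reconstruction step can be sketched as follows: world-frame linear acceleration (orientation already applied, gravity already removed) is integrated twice to obtain a 3-D position trace. The trapezoidal integration and all names here are assumptions for illustration, not the original pipeline.

```javascript
// accel: array of { x, y, z } world-frame linear accelerations in
// m/s^2; dt: sample interval in seconds. Returns the position trace
// starting at the origin.
function integrateTwice(accel, dt) {
  const pos = [{ x: 0, y: 0, z: 0 }];
  let vel = { x: 0, y: 0, z: 0 };
  let prev = { x: 0, y: 0, z: 0 };
  for (const a of accel) {
    for (const k of ["x", "y", "z"]) {
      // First integration (trapezoidal rule): acceleration -> velocity.
      vel[k] += 0.5 * (prev[k] + a[k]) * dt;
    }
    // Second integration: velocity -> position.
    const last = pos[pos.length - 1];
    pos.push({
      x: last.x + vel.x * dt,
      y: last.y + vel.y * dt,
      z: last.z + vel.z * dt,
    });
    prev = a;
  }
  return pos;
}
```

The resulting 3-D trace would then be flattened onto its best-fit plane before shape recognition.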

The approach worked only to a limited extent, as the sensors available at the time were not precise enough. In particular, slight accelerations were barely registered, while strong accelerations were reported excessively, which became a problem especially during deceleration at the end of a gesture. In addition, integration error accumulated steadily as drift. Performing the calculations in the web container of a hybrid app also pushed against its performance limits.
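The drift problem has a simple back-of-the-envelope form: a constant sensor bias passes through the double integration and grows quadratically in the position estimate. The bias value below is an illustrative assumption.

```javascript
// Position error caused by a constant accelerometer bias after the
// double integration: the integral of a constant b, twice, is
// 0.5 * b * t^2.
function positionDrift(biasMs2, seconds) {
  return 0.5 * biasMs2 * seconds * seconds;
}
// A modest 0.05 m/s^2 bias already yields ~0.6 m of drift after 5 s,
// which is larger than many in-air gestures themselves.
```

This quadratic growth is why unaided double integration of consumer-grade accelerometer data degrades within seconds.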