Android Sensor Fusion Tutorial

While working on my master's thesis, I gained some experience with the sensors in Android devices and thought I'd share it with other Android developers stumbling over my blog. In my work I was developing a head-tracking component for a prototype system. Since it had to adapt audio output to the orientation of the user's head, it needed to respond quickly and be accurate at the same time.

I used my Samsung Galaxy S2 and decided to use its gyroscope in conjunction with the accelerometer and the magnetic field sensor in order to measure the user's head rotations both quickly and accurately. To achieve this I implemented a complementary filter to get rid of the gyro drift and the signal noise of the accelerometer and magnetometer. The following tutorial describes in detail how it's done.

There are already several tutorials on how to get sensor data from the Android API, so I'll skip the details on Android sensor basics and focus on the sensor fusion algorithm. The Android API Reference is also a very helpful entry point regarding the acquisition of sensor data. This tutorial is based on the Android API version 10 (platform 2.3.3), by the way.

This article is divided into two parts. The first part covers the theoretical background of a complementary filter for sensor signals as described by Shane Colton here. The second part describes the implementation in the Java programming language. Everybody who thinks the theory is boring and wants to start programming right away can skip directly to the second part. The first part is interesting for people who develop on platforms other than Android, iOS for example, and want to get better results out of their devices' sensors.

Update (March 22, 2012):
I’ve created a small Android project which contains the whole runnable code from this tutorial. You can download it here:

Update (April 4, 2012):
Added a small bugfix in the example's GUI code.

Update (July 9, 2012):
Added a bugfix regarding angle transitions between 179° <–> -179°. Special thanks to J.W. Alexandar Qiu, who pointed it out and published the solution!

Update (September 25, 2012):
Published the code under the MIT-License (license note added in code), which allows you to do with it pretty much everything you want. No need to ask me first 😉

Sensor Fusion via Complementary Filter

Before we start programming, I want to explain briefly how our sensor fusion approach works. The common way to get the attitude of an Android device is to use the SensorManager.getOrientation() method to get the three orientation angles. These angles are based on the accelerometer and magnetometer output. In simple terms, the accelerometer provides the gravity vector (the vector pointing towards the centre of the earth) and the magnetometer works as a compass. The information from both sensors suffices to calculate the device's orientation. However, both sensor outputs are inaccurate, especially the output from the magnetic field sensor, which includes a lot of noise.

The gyroscope in the device is far more accurate and has a very short response time. Its downside is the dreaded gyro drift. The gyro provides the angular rotation speeds for all three axes. To get the actual orientation, those speed values need to be integrated over time. This is done by multiplying the angular speeds by the time interval between the last and the current sensor output. This yields a rotation increment. The sum of all rotation increments yields the absolute orientation of the device. During this process small errors are introduced in each iteration. These small errors add up over time, resulting in a constant slow rotation of the calculated orientation: the gyro drift.
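The integration step described above can be sketched as follows. Note that this is a simplified illustration which integrates the three axes independently; the actual implementation works with rotation matrices, and the class and method names here are my own:

```java
public class GyroIntegration {
    // Conversion factor from nanoseconds (Android sensor timestamps) to seconds.
    private static final float NS2S = 1.0f / 1000000000.0f;

    // Multiplies the angular speeds (rad/s) by the time interval between the
    // last and the current sensor output and adds the resulting rotation
    // increment to the running orientation.
    public static float[] integrate(float[] orientation, float[] angularSpeed,
                                    long lastTimestampNs, long nowTimestampNs) {
        float dt = (nowTimestampNs - lastTimestampNs) * NS2S;
        float[] result = orientation.clone();
        for (int i = 0; i < 3; i++) {
            result[i] += angularSpeed[i] * dt; // rotation increment per axis
        }
        return result;
    }
}
```

Each call adds one rotation increment; the small per-step errors accumulate in exactly the same way, which is where the drift comes from.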

To avoid both gyro drift and noisy orientation, the gyroscope output is applied only for orientation changes in short time intervals, while the magnetometer/accelerometer data is used as support information over long periods of time. This is equivalent to low-pass filtering of the accelerometer and magnetic field sensor signals and high-pass filtering of the gyroscope signals. The overall sensor fusion and filtering looks like this:

So what exactly does high-pass and low-pass filtering of the sensor data mean? The sensors provide their data at (more or less) regular time intervals. Their values can be shown as signals in a graph with time as the x-axis, similar to an audio signal. The low-pass filtering of the noisy accelerometer/magnetometer signal (accMagOrientation in the above figure) yields orientation angles averaged over time within a constant time window.

Later in the implementation, this is accomplished by slowly introducing new values from the accelerometer/magnetometer to the absolute orientation:

// low-pass filtering: every time a new sensor value is available
// it is weighted with a factor and added to the absolute orientation
accMagOrientation = (1 - factor) * accMagOrientation + factor * newAccMagValue;

The high-pass filtering of the integrated gyroscope data is done by replacing the filtered high-frequency component from accMagOrientation with the corresponding gyroscope orientation values:

fusedOrientation =
    (1 - factor) * newGyroValue    // high-frequency component
    + factor * newAccMagValue;     // low-frequency component

In fact, this is already our finished complementary filter.
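Putting the two snippets together, one complete fusion step per sensor update could look like the sketch below. The class name and the coefficient value 0.98 are assumptions for illustration; FILTER_COEFFICIENT here plays the role of (1 - factor) from the pseudocode above, i.e. it weights the gyro component:

```java
public class ComplementaryFilter {
    // Assumed value, to be tuned per device: weights the gyro-based
    // (high-frequency) component; (1 - FILTER_COEFFICIENT) weights the
    // accelerometer/magnetometer (low-frequency) component.
    public static final float FILTER_COEFFICIENT = 0.98f;

    // One fusion step per sensor update, applied per axis (angles in radians).
    public static float[] fuse(float[] gyroOrientation, float[] accMagOrientation) {
        float[] fused = new float[3];
        for (int i = 0; i < 3; i++) {
            fused[i] = FILTER_COEFFICIENT * gyroOrientation[i]
                     + (1.0f - FILTER_COEFFICIENT) * accMagOrientation[i];
        }
        return fused;
    }
}
```

With a coefficient close to 1, quick rotations are dominated by the gyro signal, while over many iterations the accelerometer/magnetometer orientation slowly pulls the result back and cancels the drift.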

Assuming that the device is turned 90° in one direction and after a short time turned back to its initial position, the intermediate signals in the filtering process would look something like this:

Notice the gyro drift in the integrated gyroscope signal. It results from the small irregularities in the original angular speed. Those little deviations add up during the integration and cause an additional undesirable slow rotation of the gyroscope-based orientation.
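One detail worth mentioning here is the 179° <–> -179° transition bugfix from the July 2012 update above: the orientation angles wrap around at ±180°, so naively filtering across that boundary treats a tiny -2° step as a huge +358° jump. A common fix is to normalize angle differences before filtering; the following is a sketch with my own method name, not necessarily the exact code from the download:

```java
public class AngleWrap {
    // Maps an angle difference in degrees into (-180, 180] so that the
    // step from 179 deg to -179 deg counts as -2 deg rather than +358 deg.
    public static float normalizeDelta(float deltaDegrees) {
        while (deltaDegrees > 180f) deltaDegrees -= 360f;
        while (deltaDegrees <= -180f) deltaDegrees += 360f;
        return deltaDegrees;
    }
}
```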

204 thoughts on “Android Sensor Fusion Tutorial”

  1. Sampreeti, 2014-12-27 8.36 am

    I found your tutorial very interesting. I am doing research which requires me to record limb movements. I am planning to do it using an accelerometer and gyroscope. Is there any way I can save the recorded data along with the date and time of day? (I need the time at which every movement takes place.)


    1. admin, 2015-01-03 12.03 am

      Hi Sampreeti,
      you can record any data produced by the sensors. You simply have to provide the required memory, and after each measurement you save the sensor data alongside the current timestamp. But keep in mind that the sensor fusion described in this tutorial only returns the orientation of the device, not its linear movement. But if you had several sensors (or sensor sets) and the distance between them, you could calculate the movement of a specific limb.
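      A minimal sketch of such a recorder, as plain Java (the class is my own invention; in an Android app you would call record(...) from onSensorChanged(), passing the event's values and a timestamp such as System.currentTimeMillis()):

```java
import java.util.ArrayList;
import java.util.List;

public class SensorRecorder {
    public static class Sample {
        public final long timestampMs;
        public final float[] values;
        Sample(long timestampMs, float[] values) {
            this.timestampMs = timestampMs;
            this.values = values.clone(); // copy: Android reuses the event array
        }
    }

    private final List<Sample> samples = new ArrayList<>();

    // Stores one sensor reading together with its timestamp.
    public void record(long timestampMs, float[] values) {
        samples.add(new Sample(timestampMs, values));
    }

    public int size() { return samples.size(); }
    public Sample get(int i) { return samples.get(i); }
}
```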

      The practicality of MEMS sensors (such as common accelerometers or gyroscopes in mobile devices) depends heavily on the accuracy of the data you want to record. Maybe you don’t even require any sensor fusion and an accelerometer would suffice. However, if you require high data accuracy like in a motion capturing system, I doubt that MEMS sensors will fit your needs. In such a case I would recommend a computer vision approach.
      Good luck with your research,

  2. Saurabh Gupta, 2015-01-22 2.17 pm

    Using this approach, can I find the magnetic heading of the device? Will it be more accurate?
    My understanding is that the azimuth, which is orientation[0], will be the magnetic heading?

    1. admin, 2015-02-18 12.26 pm

      The magnetometer is used for azimuth stabilisation. This sensor provides the magnetic heading. However, I'm not sure whether azimuth (or orientation[0]) = 0° equals north. Maybe you will have to do some post-processing.

  3. allan, 2015-02-06 10.23 am

    i have liked your tutorial

    1. admin, 2015-02-18 12.30 pm

      Thanks 🙂

  4. allan, 2015-02-06 10.26 am

    I would like to know how exactly I can get all the sensor data at once. I am trying to develop a pothole detection system.

    1. admin, 2015-02-18 12.36 pm

      Please clarify your requirement. What do you mean by getting all the sensor data at once? You need to collect the data from the sensors at some point.

  5. Marcel Blanck, 2015-02-15 2.32 pm

    A truly excellent tutorial! Thank you for it!

    1. admin, 2015-02-18 12.31 pm

      I'm glad it's being well received. Thanks for the praise 🙂
      All the best!

  6. Jake Smith, 2015-02-23 7.32 pm

    Thanks again for your tutorial. I loved your explanations, which made a lot of sense. The part I'm struggling with is getting OpenGL to do what I want. I don't have a lot of experience with it. I'm attempting to follow what I see here:, and I somewhat understand the necessity of using quaternions rather than Euler rotation/orientation. Would you be able to help me understand how to get a quaternion out of the resulting vector of this work and apply that to my view in “OpenGL land”?

    I have light reflecting off of a sphere rather than a cube for a more subtle proof of perspective change. My renderer is here:

  7. Tuan Dinh, 2015-03-08 7.45 am


    I found your tutorial very interesting and useful. Is it possible to track linear motion? One more thing I don't understand: the fused data from the complementary filter, is it the accelerometer data without noise or the gyroscope data without drift?

    1. admin, 2015-03-10 3.37 pm


      The question about linear motion tracking is asked very often here. The answer is that I don't see a reliable way to track linear motion with the sensor data we use in this tutorial. Most people who try to track linear motion start off with the accelerometer, which is not possible with current state-of-the-art MEMS sensors. Perhaps there will be more accurate sensors one day. But as for now, I don't think it's possible.

      The fused data is composed of both: low-pass filtered accelerometer/magnetometer data (not entirely without noise, only the high frequency parts removed) and high-pass filtered gyro data (removing the drift while the system moves slowly).

  8. Sebastian, 2015-03-19 3.52 pm

    Thanks for this very insightful tutorial. Really well and clearly written. Regarding TUAN DINH's question about linear motion tracking, I was wondering whether it is possible to track the movement, or rather the distance covered, over very short time intervals (60 ms).

    Suppose I track a defined point with my device's camera. After the point has been detected initially, I would like to capture the movement of the device in order to detect this point more easily afterwards. The procedure would look roughly like this:

    1. Initial detection of the point.
    2. Movement of the device is determined (via sensors).
    3. Integration of the movement data into the calculation for tracking the point. (The area of interest should be narrowed down this way.)
    4. Detection of the point.
    5. Repeat from 2.

    One iteration takes about 45-60 ms. Are the integration errors still relevant within such small time intervals?


    1. admin, 2015-03-20 1.13 am

      Hi Sebastian,
      I'm very glad you liked the tutorial 🙂

      Regarding your question, I first have a few counter-questions:
      1: Did I understand correctly that you want to stabilise the positioning of the device with the camera tracking? The question then is which is more accurate: your camera tracking or the doubly integrated acceleration. I wouldn't put my hand in the fire for it, but I would rather side with the camera tracking. And then you wouldn't need the accelerometer any more if it is less accurate anyway.
      2: Do you really only need the distance difference from the accelerometer integration in each iteration, or do you also need the device position you calculated in the previous iterations?

      In principle, not many errors should accumulate within a 60 ms time window. But it should still be treated with caution, because in such a short time the device cannot cover much distance either. Therefore the integration result will also be a very small value. And as far as accuracy is concerned, what matters is the ratio between error and useful data, not the absolute error (and the error component is doubly integrated as well!).

      Those would be my first thoughts on it. The bottom line is: it's hard to answer. You would have to try it out :-/

  9. Chintan Shah, 2015-05-18 10.36 pm

    Hello Paul,

    This is a fantastic post about Complementary Filters.

    I would really appreciate if you could please give some pointers on the doubts that I have.

    I have raw sensor data from the accelerometer, gyroscope and magnetometer from Android. I am getting this data through an app called Sensor Fusion. Now I want to process this data offline in Matlab to get an accurate orientation. Is it possible to obtain code for the functions that you have used, such as SensorManager.getRotationMatrix? If I am not wrong, these functions come from Android, but is it possible to see what they are doing so I can implement them in Matlab?

    Many thanks.

    Best Regards


    1. Paul, 2015-05-19 7.42 pm

      Hi Chintan,
      the only way to get the code behind Android APIs like SensorManager.getRotationMatrix I can think of is to look into the Android code itself. It's open source:
      You could pull the source from Google's git repository, search for the SensorManager in it and look into the method in question.
      Best Regards,

      1. Chintan Shah, 2015-05-20 2.10 pm

        Hello Paul

        Thank you for your reply.

        I am new to this topic and need some clarification about some terms.

        1. So the idea behind fusion is to get a better orientation estimate, is that correct? And for that people use either a Kalman or a complementary filter.
        2. I saw the popular video on YouTube by David Sachs, and he says that in order to obtain linear acceleration, we need to estimate gravity and subtract it from the raw acceleration. How do I estimate gravity from raw acceleration? I came across the following link; please tell me if it is correct.
        3. And how do I estimate heading?

        I would appreciate if you could please point me in right direction.

        Best Regards


      2. Paul, 2015-05-24 10.11 pm

        Hi Chintan,
        1. Correct.
        2. Yes, the described method looks like a sound way to get the acceleration without gravitational influence.
        3. How would you define the term ‘heading’? Do you mean the overall orientation of the device or rather the azimuth (i.e. the ‘compass’ component of the orientation)? In both cases the answer is already at hand (see sensor fusion or determination of orientation).
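        As an aside, the gravity estimation confirmed in point 2 is typically a simple low-pass filter on the raw accelerometer signal, with the estimate then subtracted to obtain the linear acceleration. A sketch (the class name and the smoothing factor 0.8 are assumptions, not values from this tutorial):

```java
public class LinearAcceleration {
    private static final float ALPHA = 0.8f; // assumed low-pass smoothing factor
    private final float[] gravity = new float[3];

    // Low-pass filters the raw accelerometer signal to estimate gravity,
    // then subtracts that estimate to get the linear acceleration.
    public float[] update(float[] rawAcc) {
        float[] linear = new float[3];
        for (int i = 0; i < 3; i++) {
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * rawAcc[i];
            linear[i] = rawAcc[i] - gravity[i];
        }
        return linear;
    }
}
```

        For a device at rest, the gravity estimate converges to the constant accelerometer reading and the linear acceleration approaches zero.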

      3. Chintan Shah, 2015-05-27 10.17 am

        Hello Paul

        Thank you for your reply. I just have one last question. I came across TYPE_ROTATION_VECTOR, which outputs orientation in terms of quaternions. Have you looked at this output, and does it behave exactly the same on all devices?

        I still think I should implement my own Orientation estimation but if the TYPE_ROTATION_VECTOR does the same job then not sure if it is worth spending time on orientation estimation.

        Best Regards


      4. Paul, 2015-05-28 7.11 pm

        At the time I wrote the article, the rotation vector sensor type was not available in Android. I had no chance to evaluate its behaviour and the quality of its output. You should try it out, do some tests and evaluate whether the data quality meets the requirements of your application. It could save you some time.
        Best Regards

      5. Chintan Shah, 2015-05-29 4.20 pm

        Hello Paul,

        I have implemented your code in Matlab and it works really well. I just have a few questions:

        I walked a few steps in a straight line and recorded raw data from the accelerometer, magnetometer and gyro and then passed them to the complementary filter.

        The only problem is that I was expecting the Yaw angle to be nearly constant but it deviates towards the end and I found that the magnetometer data also deviates towards the end. How can I fix this?

        If possible, can I send you the plots by email?

        Many thanks.

        Best Regards


      6. Samuel, 2015-06-05 1.30 am

        Hi Paul and Chintan

        I am trying to understand how the rotation vector behaves in order to estimate the orientation of the wrist using an Android Wear device. It tells you the angle you have rotated with respect to a fixed frame in the three axes. Have you found something relevant, Chintan? Regards.


  10. KV, 2015-05-29 11.00 pm

    How would I make this tutorial translate to the direction that a human is facing? Sort of like how in Google Maps the blue arrow is pointing in the direction of the user’s phone.

    1. Yang, 2015-10-14 9.43 am

      I also want to know.

  11. zonglinyang, 2015-08-01 8.40 am

    Thank you very much for what you do.

  12. brayan, 2016-01-26 5.40 pm

    Hello Paul,
    just a quick question: I need to measure the angle of a horizontal wall. Can I use sensor fusion for that?
    thank you beforehand

  13. Dhruvkumar Patel, 2016-02-02 8.53 pm

    Hey, I am doing research on getting position data from Android Wear and displaying it in a visualizer. I need your help. How do I calculate position using accelerometer sensor data from Android Wear?


    1. Paul, 2016-03-15 1.55 pm

      I’m not sure what you mean by ‘android wear’. You cannot calculate a precise location from accelerometer data. Please read the previous comments. I’ve addressed the problem several times.

  14. Whutzn, 2016-03-19 11.04 am

    Hey, thanks for your tutorial. I would like to get a relatively reliable position with an Android phone. Could I use the fused accelerometer data and absolute orientation to do that?

    1. Paul, 2016-05-16 2.16 am

      I get this question a lot and I’m afraid that’s a whole different requirement that cannot be met with the standard sensors in an android device.


  16. AmyCheng, 2016-03-27 10.24 am

    Hi~~thank u for this tutorial !! But I have a question that spent me lots of time to think.
    I don’t understand the meaning of “fusion of AccMagOrientation and GyroOrientation”
    I mean that, when you hold an android cellphone stably(local coordinate z points to sky and y points forward) and walk around a playground (facing order: north->west->south->east->north), the yaw angle from AccMagOrientation will be 0 -> -90 ->-180 -> 179(-181+360) -> 90 -> 0
    However, the yaw angle from GyroOrientation will always be 0, because deltaMatrix/deltavector is calculated in cellphone’s local frame(because gyrosensor’s values are in local frame). So I don’t understand the meaning of “fusion of AccMagOrientation and GyroOrientation”~

    Thank u very much!!!

    1. Paul, 2016-05-16 2.23 am

      The gyro data is rotation speed, which is angle per time. That's why the gyro speed values are integrated over time and transformed into the target frame of reference used by the sensor fusion algorithm. All data needs to be processed so that it follows the same conventions; otherwise it cannot be combined in a reasonable way.

  17. Jeffry Johnston, 2016-03-28 6.25 am

    Let’s say I relied solely on gyro data alone, without using the accelerometer or magnetometer. I’d be curious to know how much time it might take for the gyro drift to accumulate an error of over 1 degree on an axis. I realize this might be dependent on a lot of factors, but what did you observe during your experiments?

    1. Paul, 2016-05-16 2.14 am

      In my case that depended on how much you moved the device. But the drift manifested pretty quickly, almost always within several seconds up to half a minute.

  18. mclng, 2016-04-13 11.34 am

    Great article.
    What about using Sensor.TYPE_ROTATION_VECTOR ?

    1. Paul, 2016-05-16 2.12 am

      I don’t know. How Accurate is Sensor.TYPE_ROTATION_VECTOR? Do you have any experience with it?

  19. Doyun kim, 2016-07-18 12.55 am

    Hi, thank you for your interesting code.
    Could you please explain how to use the quaternion and how to convert the quaternion into an orientation without gimbal lock?
    You didn't explain the quaternion in the article.
    Also, could you please explain in detail what matrixMultiplication and float[] o in the article mean?

    1. Paul, 2016-07-24 12.31 am

      I didn’t explain quaternions because I didn’t use them in my original implementation. I considered quaternions but decided against them for simplicity’s sake.
      I would have to explain quaternions in a separate article first. I planned for such an article but unfortunately I currently haven’t the time for further writing.
      matrixMultiplication, as the name suggests, multiplies two 4×4 matrices (stored in two flat float arrays, each of size 16). See here for the mathematical background:
      The only occurence of float[]o I can find in my code is the parameter of getRotationMatrixFromOrientation. This function expects an orientation vector (yaw, pitch, and roll angles stored in a float array of size 3) and generates a rotation matrix that corresponds to the euler angles stored in float[]o.
      I hope that explains the code parts in question.
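      For readers who want to reimplement the helper outside Android (e.g. in Matlab, as discussed above), a straightforward row-major version matching the description above might look like this. It is a sketch of the behaviour, not the exact code from the download:

```java
public class MatrixMath {
    // Multiplies two 4x4 matrices stored row-major in flat float arrays
    // of length 16 and returns the product as a new array of length 16.
    public static float[] matrixMultiplication(float[] a, float[] b) {
        float[] result = new float[16];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++) {
                    sum += a[row * 4 + k] * b[k * 4 + col];
                }
                result[row * 4 + col] = sum;
            }
        }
        return result;
    }
}
```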

  20. daniel, 2016-07-20 7.11 am

    Hi Paul where can the sampling rate be modified?


  21. daniel, 2016-07-23 3.38 pm

    Hi Paul, please disregard the previous comment.

    Could you please go into further detail on how you got the coefficient and how 33 Hz yields a time period of 30 s?
    And in regards to the signal quality, how did you verify it?

    Much appreciated!

    1. Paul, 2016-07-24 12.17 am

      Hi Daniel,
      I estimated the coefficient heuristically. There was no exact calculation involved. I tested it in the context of my target application and the given value worked the best for me. The coefficient must be a value between 0.0 and 1.0, and it determines the ratio between the low-pass and high-pass components of the complementary filter (i.e. the cutoff frequency of the filter).
      In the article the time period is 30 ms (milliseconds), not 30 s. The time period is calculated as follows: T = 1/f, which in our case is T = 1/33 Hz ≈ 0.030 s = 30 ms.
      Best Regards 🙂

  22. daniel, 2016-07-24 6.46 am

    Hi Paul,

    Thanks for the clarification!

  23. daniel, 2016-09-20 7.30 pm

    Hi Paul,

    In regards to the sampling rate, does setting it to 30 Hz (~30 ms) remove the need for time interpolation?

  24. daniel, 2016-09-20 8.04 pm

    Also, would calculating the magnitude of the fused values and then applying a fast Fourier transform to it act as a good feature for data mining?
