Hi Everyone,

I'm currently using Madgwick's popular filter for my own AHRS project. After some experimentation with sensors and tuning of the beta gain, I noticed some unpredictable filter behavior. For example, Euler angles computed from the quaternion unexpectedly drifted to 20 degrees off the reference angle (in a static scenario!).

After setting the magnetometer input to (0,0,0), the problem remained. Then I also zeroed the gyroscope inputs (0,0,0), and the problem still remained. My conclusion was that it had to be related to the accelerometer inputs.

After experimenting with real sensors, I moved to artificial accelerometer input data and set up a test bed for Madgwick's algorithm (MadgwickTests on GitHub). I found that the fast inverse square root in Madgwick's algorithm leads to huge instabilities when noisy measurements are applied. As an example, consider the pitch and roll angles shown in the following image:

The input for the filter was: gyro = (0,0,0), acc = (2.5, 2.5, 9.15) ± 0.1 uniform noise (see test.c for details). As you can see, the pitch and roll angles (red/green, in degrees) of the original implementation show unacceptable variations.

Remarkably more stable and accurate pitch/roll angles (blue/magenta) were achieved by swapping the inverse square root implementation. I used the following code from "Accurate and Fast InvSqrt" for computing the inverse square root of float x:

unsigned int i = 0x5F1F1412 - (*(unsigned int*)&x >> 1); /* bit-level initial guess */
float tmp = *(float*)&i;
float y = tmp * (1.69000231f - 0.714158168f * x * tmp * tmp); /* one refinement step */

Please see the GitHub commit in which I've added some code for switching between the original (0), the proposed (1) and the reference (2) inverse square root implementations.
I hope that my investigations are helpful to improve the accuracy of Madgwick's C filter implementation.

Cheers, Tobias

Tags: AHRS, IMU, Madgwick


Replies to This Discussion

Thank you for this optimisation. I will test it this evening.

I have some questions about using this algorithm.

I plan to use it with an Atmel board (XMEGA-A1 or XMEGA-B1) and the ATAVRSBIN1 9-DOF sensor board (http://www.atmel.com/Images/doc8354.pdf).

How fast should I read the sensors and perform the filter update?
* Should I do this using interrupt-based code?
* How do I choose the "right" parameters for the filter?


Thank you for your help.

Hi neoraptor,

The frequency at which you need to read the sensors and compute the orientation estimate depends on your application. I read a MARG sensor cluster at 200 Hz, which seems to be enough for my MAV prototype (a 40 cm-class quadrotor).

Whether to use interrupt-based code also depends on your application. If the main task of the controller is running the orientation filter, then synchronous reading via I2C with delay loops might be sufficient. If you want to perform multiple other computations in "parallel", interrupt-based code might be preferable.

Choosing the "right" parameters for the filter depends on the noise and biases of your sensor readings; suitable values can be found experimentally.


Cheers, Tobias

Thank you for your answer, Tobias. I don't have much experience in this field, so I am slowly discovering how it works.

Here is a capture of my raw sensor data as well as the output of the filter.

[IMG]http://data.imagup.com/multi/1168635168_Mag.png[/IMG]
[IMG]http://data.imagup.com/multi/1168635168_Acc.png[/IMG]
[IMG]http://data.imagup.com/multi/1168635168_Gyro.png[/IMG]

and my filter output:
[img]http://data.imagup.com/multi/1168635168_output.png[/img]

What I noticed is that it takes time to reach the right value (stabilize):
[IMG]http://data.imagup.com/12/1168636311.png[/IMG]

My settings were:
* a beta gain of 10 and a sample frequency of 1000 Hz
* BMA150 bandwidth set to 1500 Hz
* ITG3200 bandwidth set to 256 Hz and sample rate to 1500 Hz
* my main loop looks like this:

while (1) {
    delay_us(666); /* 1/1500 Hz is about 666 us */
    readSensors();
    updateFilter();
}


If I use a beta of 0.5, my system is not stable anymore :S
[IMG]http://data.imagup.com/10/1168641203.png[/IMG]

Do you have a range of correct settings for beta? Is it required to calibrate the sensors, and how is this done?

Hi,

The most important thing is to identify the bias of your gyroscope. You can do this by keeping the device static, sampling N gyro measurements, computing the average, and subtracting this average (the bias) from each subsequent gyro reading. So you need two states: an initialization state in which you identify the biases, and an operational state in which you subtract the bias from each new reading.


When using the Madgwick filter, you should also make sure that you convert the gyro values to rad/s (not deg/s) before feeding them into the filter. I suspect the latter is the problem that leads to overshooting when you reduce the beta gain.

Regards,
Tobias

Hi,

Thank you for your answer. It helps me a lot!

I just noticed that the values from my gyro were in deg/s instead of rad/s! I will try this correction this evening and see how it reacts.

But correct me if I am wrong: shouldn't the gyro bias be corrected by the complementary filter itself? Otherwise, it might be difficult to have the device calibrated each time (for an end user).

I will test with and without calibration to see how it is handled.

Best regards,

Jonathan

Yes, the gyro bias is corrected by the feedback step of the complementary filter, but you will achieve better results when the bias is known a priori.

Cheers, Tobias

OK, the problem was my 64-bit machine: there, long is 8 bytes wide instead of 4, so the following code line fails:

https://github.com/TobiasSimon/MadgwickTests/blob/master/MadgwickAH...

The portable solution is to use the fixed-width int32_t type from <stdint.h>:

float halfx = 0.5f * x;
float y = x;
int32_t i = *(int32_t *)&y;       /* reinterpret the float's bits */
i = 0x5f3759df - (i >> 1);        /* magic-constant initial guess */
y = *(float *)&i;
y = y * (1.5f - (halfx * y * y)); /* one Newton-Raphson step */
return y;

© 2014   Created by Chris Anderson.