AHRS and Sensor Fusion
An Attitude and Heading Reference System (AHRS) processes information from the Inertial Measurement Unit (IMU) to provide reliable roll, pitch, and yaw angles. It is a fundamental part of any flight controller, whether for drones, fixed-wing aircraft, or other vehicles.
Euler Angles
A gyroscope reports angular rates with respect to the sensor's own axes. So if our sensor is mounted to a drone which is pitching, rolling, and yawing, then the measured rates will be in the frame of the rotating sensor (i.e., the aircraft reference frame), NOT a fixed world reference frame. In order to combine angles from different sensors, they all need to be in the same reference frame. For this purpose it makes sense to use a fixed frame.
Euler angles are three angles used to describe orientation with respect to a fixed coordinate system. The idea is that we can represent any orientation in 3D space by performing three successive rotations about the axes of our fixed coordinate system. The order of the rotations matters, so we need to be consistent with it.
The fixed coordinate system used for Euler angles is called the "inertial" or "world" coordinate system. This coordinate system is chosen as a global reference frame that remains fixed in space. In the Euler angle representation known as the "roll-pitch-yaw" convention, the rotations are applied successively about the fixed axes of the inertial coordinate system: first roll about the x-axis, then pitch about the y-axis, then yaw about the z-axis.
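A quick numerical sketch (our own illustration, not from the article) shows why rotation order matters: composing the same three elementary rotations in two different orders gives two different orientations.

```python
import numpy as np

def rot_x(a):  # roll about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # pitch about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # yaw about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

roll, pitch, yaw = 0.3, 0.5, 0.7  # example angles in radians

# ZYX (yaw-pitch-roll) order vs XYZ order: the resulting matrices differ.
R_zyx = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
R_xyz = rot_x(roll) @ rot_y(pitch) @ rot_z(yaw)
print(np.allclose(R_zyx, R_xyz))  # prints False: rotation order matters
```

This is why a convention (such as ZYX, used later in this article) must be stated and applied consistently.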
You can read more about how we translate sensor readings to a fixed reference frame in Part 5 of our series on How to Write your own Flight Controller Software.
One of the problems with Euler angles is that for certain specific values the transformation exhibits discontinuities, a phenomenon called gimbal lock (see below).
Inertial Coordinate System and NED
The inertial coordinate system is closely related to the NED (North-East-Down) coordinate system, which is a commonly used local-level coordinate system in navigation and aerospace applications.
The inertial coordinate system is commonly approximated by the Earth-Centered Earth-Fixed (ECEF) coordinate system, a global reference frame with its origin located at the center of the Earth. Strictly speaking, ECEF rotates with the Earth, while a truly inertial frame (such as the Earth-Centered Inertial, ECI, frame) does not; for short-duration flight applications the difference is usually negligible. The x-axis typically points towards the intersection of the equator and prime meridian (0 degrees latitude, 0 degrees longitude). The y-axis points eastward along the equator, and the z-axis points along the Earth's rotational axis, typically aligned with the North Pole.
The NED coordinate system, on the other hand, is a local-level coordinate system that is commonly used for navigation purposes. It is defined with respect to a specific location or observer on the Earth's surface. The origin of the NED coordinate system is typically set at the observer's position, and the NED axes are aligned such that N points towards true north, E points east, and D points down, towards the center of the Earth.
The NED coordinate system is relative to the observer’s position and orientation, while the inertial coordinate system is fixed with respect to the Earth. However, the relationship between the two coordinate systems can be established through appropriate coordinate transformations.
To convert between NED and inertial coordinates, one typically needs to consider the observer’s position and the rotation of the Earth. This transformation involves accounting for the observer’s latitude, longitude, and altitude, as well as considering the Earth’s rotation rate. For these reasons, we will stick with the inertial reference frame.
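To make the NED-to-ECEF relationship concrete, here is a minimal sketch (our assumption, not from the article) of the standard rotation from ECEF axes to the local NED frame at a given latitude and longitude. Altitude only shifts the origin, not the axis directions, and Earth-rotation effects are ignored here.

```python
import numpy as np

def ecef_to_ned_matrix(lat_rad, lon_rad):
    """Rotation matrix taking ECEF vectors into the local NED frame."""
    sp, cp = np.sin(lat_rad), np.cos(lat_rad)
    sl, cl = np.sin(lon_rad), np.cos(lon_rad)
    return np.array([
        [-sp * cl, -sp * sl,  cp],    # North axis
        [     -sl,       cl,  0.0],   # East axis
        [-cp * cl, -cp * sl, -sp],    # Down axis
    ])

# At 0° latitude, 0° longitude, the ECEF +x axis points straight "up"
# for the local observer, so in NED it maps to [0, 0, -1].
R = ecef_to_ned_matrix(0.0, 0.0)
print(R @ np.array([1.0, 0.0, 0.0]))
```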
Sensor Fusion
Sensor fusion is the process of combining sensory data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. With the gyroscope and accelerometer, we have two angle sensors that should be providing the same data but with different error characteristics: the integrated gyroscope drifts over time, while the accelerometer is drift-free but noisy. The concept is to combine, or fuse, the data in such a way as to cancel these errors and produce an accurate angle that we can use.
A free parameter is defined as a variable in the sensor fusion algorithm which cannot be determined by the model and must be estimated experimentally or theoretically. A detailed explanation of the free parameters and the sensor fusion algorithms available are provided in Part 7 of our series on writing your own flight control software.
Gimbal Lock
Gimbal lock is a physical and mathematical phenomenon. For physical gimbals, it results in the loss of one degree of freedom and occurs when two gimbal axes are parallel.
Both the Apollo 11 and Apollo 13 missions had gimbal lock incidents. On these spacecraft, a set of physical gimbals was used for the IMU. The NASA engineers knew about the issue, and their "solution" was to have the flight computer flash a gimbal lock warning at 70° and then freeze the IMU at 85°, at which point the IMU was no longer available. In this situation, the spacecraft would have to be moved away from the gimbal lock position, and then the IMU manually realigned using the stars as a reference.
In our context, gimbal lock is a problem that arises from the use of Euler Angles to represent the rotation matrix. For Tait-Bryan (ZYX rotation order) angles, you get gimbal lock when the pitch is ±π/2 radians (i.e., ±90°). At this pitch the orientation of the sensor cannot be uniquely represented using Euler Angles.
An AHRS that uses Euler Angles will always fail to produce reliable angle estimates when the pitch approaches 90 degrees. This is an intrinsic issue for Euler Angles and can only be solved by switching to a different representation method — cue quaternions.
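The degeneracy is easy to demonstrate numerically. The sketch below (our own illustration) builds ZYX rotation matrices and shows that at pitch = +90°, only the difference between yaw and roll is observable, so two distinct (roll, yaw) pairs yield exactly the same orientation.

```python
import numpy as np

def R_zyx(roll, pitch, yaw):
    """Rotation matrix for ZYX (Tait-Bryan) Euler angles."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

pitch = np.pi / 2  # the singular pitch value
R1 = R_zyx(roll=0.2, pitch=pitch, yaw=0.5)   # yaw - roll = 0.3
R2 = R_zyx(roll=0.4, pitch=pitch, yaw=0.7)   # yaw - roll = 0.3 again
print(np.allclose(R1, R2))  # prints True: the two attitudes are identical
```

Away from ±90° pitch, the same two (roll, yaw) pairs produce different matrices; the ambiguity exists only at the singularity.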
Quaternions
Unlike Euler Angles, quaternions are difficult to visualise. There are a lot of detailed explanations regarding quaternions already available, so we will keep our explanation as high level as possible.
Mathematically, quaternions are described as hypercomplex numbers of rank 4, made up of four scalar components (q0, q1, q2, and q3) and three imaginary units (i, j, and k). The four scalar components are sometimes known as Euler parameters; don't confuse these with Euler angles.
Quaternions have 4 dimensions: one real dimension (q0) and 3 imaginary dimensions (q1, q2, and q3). The three imaginary units i, j, and k each square to −1 and are mutually perpendicular.
There are two conventions for quaternions, Hamilton and JPL. The difference between the two conventions is the relation between the three imaginary bases: the Hamilton convention defines ijk = −1, while JPL defines ijk = 1. As a consequence, the multiplication of quaternions, and the transformation between quaternions and other rotation parameterizations, differ with the quaternion convention used.
As we are using quaternions only to represent rotations, we normalise them to a unit magnitude. In axis-angle terms, the quaternion for a rotation by angle θ about a unit axis (x, y, z) is:
q = cos(θ/2) + i(x · sin(θ/2)) + j(y · sin(θ/2)) + k(z · sin(θ/2))
  = [cos(θ/2)  x · sin(θ/2)  y · sin(θ/2)  z · sin(θ/2)]^T
where θ is the rotation angle and (x, y, z) is the unit vector along the rotation axis.
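The axis-angle formula translates directly into code. This short sketch (our own, with an illustrative function name) builds a unit quaternion from a rotation angle about an axis:

```python
import math

def axis_angle_to_quat(axis, theta):
    """Return [q0, q1, q2, q3] = [cos(θ/2), x·sin(θ/2), y·sin(θ/2), z·sin(θ/2)]."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)   # normalise the axis first
    x, y, z = x / n, y / n, z / n
    s, c = math.sin(theta / 2.0), math.cos(theta / 2.0)
    return [c, x * s, y * s, z * s]

# A 90° rotation about the z-axis:
q = axis_angle_to_quat((0.0, 0.0, 1.0), math.pi / 2)
print([round(v, 4) for v in q])  # [0.7071, 0.0, 0.0, 0.7071]
```

The result always has unit magnitude, as required for representing rotations.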
We will use the Hamilton quaternion notation (this seems to be the consensus convention, apart from the folks at JPL of course). The components q1 to q3 are called the vector part of the quaternion, while q0 is the scalar part. Quaternions are often represented as:
q = q0 + q1i + q2j + q3k = [q0 q1 q2 q3]^T = [qw qx qy qz]^T
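To show the Hamilton convention (ijk = −1) in action, here is a hedged sketch of quaternion multiplication and of rotating a vector via q·(0, v)·q*, using the [qw qx qy qz] component order above:

```python
import math

def quat_mul(a, b):
    """Hamilton-convention quaternion product, [w, x, y, z] order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return [
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ]

def quat_conj(q):
    w, x, y, z = q
    return [w, -x, -y, -z]

def rotate(q, v):
    """Rotate vector v by unit quaternion q using q * (0, v) * q*."""
    p = [0.0, v[0], v[1], v[2]]
    w, x, y, z = quat_mul(quat_mul(q, p), quat_conj(q))
    return [x, y, z]

# A 90° rotation about z takes the x-axis onto the y-axis:
half = math.pi / 4
q = [math.cos(half), 0.0, 0.0, math.sin(half)]
print(rotate(q, [1.0, 0.0, 0.0]))  # approximately [0, 1, 0]
```

Under the JPL convention the product formula changes sign in the cross terms, which is why mixing libraries that use different conventions is a common source of bugs.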
We like quaternions because rotations with unit quaternions are less computationally intensive than with Euler Angles, and they don't suffer from Gimbal Lock.
For these reasons, it makes sense to use quaternions to provide sensor fusion updates to your flight controller.