## Saturday, February 27, 2010

### True Physics

One of the vital organs of a physics engine is the code that integrates Newton's laws of motion. There are several techniques which employ approximations of calculus in order to numerically solve this problem. I am proposing a scheme which can solve this problem without approximation.

In this article you will learn
• How the stepping equations are derived from Newton's laws.
• A good method for doing this approximation, which is common in many third party physics engines.
• The Kinematic Integrator method, which allows you to use unapproximated calculus in the force calculations. This enables extremely large time steps.
In this article I will just be talking about the linear motion, not the angular motion. If you want to see a pretty decent way to integrate the angular portion check out the last section of Quaternions: How.

Newton's Laws

Given in short:
1. Without external influences, objects preserve their state of motion. If the object is at rest, it remains so. If it is moving, it will continue to move at the same speed, in the same direction.
2. If a force acts on an object, the object will change its state of motion. The amount of change induced by the force is related to the mass of the object. i.e.
F = ma
3. If one body applies a force on another body, then the other body applies an equal and opposite force on the first body.
The third law is taken care of during resolution of collisions, and applications of forces, etc. The first two laws are handled in the integration stage.

The net force acting on a body is directly proportional to acceleration. Acceleration is defined as the time derivative of velocity. In order to get the velocity of an object, given the acceleration, we need to integrate.

Velocity is an important portion of the physical state of an object. Several forces require knowledge of the velocity. However, the most important quantity is the position, since this determines where we place the object in the world. The velocity is defined as the time derivative of position, so in order to get the position we must integrate the velocity.

Since the information we start with is acceleration, and the information we desire is velocity and position, then we must integrate the acceleration two times. For this reason the equations of motion are usually second order equations. In order to fully solve a second order equation, we need to know two quantities. These two quantities are usually the initial position, and the initial velocity.

How do we integrate?

When it comes to physics engines I always start my integration by taking a derivative. This may seem counterintuitive, since an integral seems to be the opposite of a derivative. But with physics, the thing we are integrating can be expressed in terms of derivatives, and nothing is as easy to integrate as a derivative.

The equations of motion can be expressed in terms of two differential equations

a = ∂v/∂t

v = ∂x/∂t

In order to see how we solve these equations numerically, first consider the standard mechanism for taking a derivative.

∂f/∂t = (f1 - f0) / (t1 - t0) ; t1 → t0

Which defines the value of the derivative at t0.

On a computer, we have finite precision arithmetic, and so we cannot allow t1 to get arbitrarily close to t0. Therefore, we can only approximate derivatives as finite differences.

∂f/∂t ≈ (f1 - f0) / (t1 - t0) ; t1 = t0 + Δt

Here Δt represents our time step. Obviously, the smaller the time step is, the closer we get to representing a true derivative. Making this simple approximation to the representation of the calculus has introduced an ambiguity. Is this the value of the derivative at t0 or t1, or at some point in between? If we choose t0, then the approximation is called a forward, or explicit derivative. If we choose t1, then the approximation is called a reverse, or implicit derivative. The quality of the representation of the equations of motion ends up being tied to our choice.

If we choose to use only explicit derivatives to represent the equations of motion, then we have

a0 = (v1 - v0) / Δt

v0 = (x1 - x0) / Δt

In these expressions, we know we are using a forward derivative because the a and the v on the left hand sides are evaluated at t0. We can rewrite these equations, so that we can find the values of x and v at time t1, given that we know what the values are at time t0.

v1 = v0 + a0Δt

x1 = x0 + v0Δt

These equations are called the explicit Euler stepping equations. We see that if the value of a is zero, then the equations will maintain the motion of an object, satisfying the first law. The solitary occurrence of a is the contribution from the second law, given that you calculate a as the net Force divided by mass.
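As a concrete sketch, here is one explicit Euler step in 1D (the `State` struct and function name are illustrative, not from any particular engine; in 3D the same update runs per axis):

```cpp
#include <cassert>
#include <cmath>

// Minimal 1D state for linear motion (illustrative names).
struct State { float x; float v; };

// One explicit Euler step: both updates read values from the START of the step.
State ExplicitEulerStep(State s, float a0, float dt)
{
    State out;
    out.v = s.v + a0 * dt;   // v1 = v0 + a0*dt
    out.x = s.x + s.v * dt;  // x1 = x0 + v0*dt  (note: the OLD velocity)
    return out;
}
```

Notice that with a0 = 0 the step leaves the velocity alone and moves the position at that constant velocity, which is the first law in action.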

As it turns out, the explicit Euler stepping equations are very crappy. The acceleration ends up introducing energy into the system, causing it to be very volatile. This volatility can be managed by making the time step smaller, but this increases the computational expense.

The current gold standard for integration in high profile physics engines is the semi-implicit, or symplectic, Euler stepping equations. These equations represent the acceleration with an explicit derivative, but the velocity is represented with an implicit derivative.

a0 = (v1 - v0) / Δt

v1 = (x1 - x0) / Δt

Since on the left hand side the acceleration is evaluated at t0, it is a forward, or explicit derivative. Since the velocity on the left hand side is evaluated at t1, it is a reverse, or implicit derivative. The combination of these two gives us the name semi-implicit. These equations can be rearranged so that x and v at t1 are functions of their values at t0.

v1 = v0 + a0Δt

x1 = x0 + v1Δt

These equations are solvable, so long as we evaluate the velocity equation first, so that we have the result to plug into the position equation.
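In code the only change from explicit Euler is that the position update reads the freshly updated velocity, which is why the velocity equation must run first (illustrative names again):

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// One semi-implicit (symplectic) Euler step.
// The velocity update runs first so the NEW velocity feeds the position update.
State SemiImplicitEulerStep(State s, float a0, float dt)
{
    State out;
    out.v = s.v + a0 * dt;    // v1 = v0 + a0*dt
    out.x = s.x + out.v * dt; // x1 = x0 + v1*dt  (note: the NEW velocity)
    return out;
}
```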

So how accurate are these stepping equations?

The semi-implicit Euler technique is a first order method, which means that the error is proportional to the time step. If you cut the time step in half, you roughly cut the error in half.

Also, the semi-implicit Euler method obeys a different property, which makes it useful. It conserves energy on average. What this means is that, although the error you accumulate over the course of the time step might increase or decrease the effective energy of the system, over time the energy will average out to a constant. This property is where the symplectic term comes from. I'm not going to explain what symplectic means, but if you are interested, search for Hamiltonian dynamics, the equations of which are used to prove that the semi-implicit Euler method conserves energy.

We see in this instance that there is a defining difference between accuracy and stability. The semi-implicit Euler method may not be the most accurate method ever, but it is stable since it conserves energy. On the user end, we don't tend to see the tiny inaccuracies, but we do tend to notice if the physics wigs out and everything explodes. Therefore, we would like our inaccuracies to NOT lead to instabilities.

Can you have perfect integration?

With numerical integration schemes, such as Euler based techniques, the only way we can get perfect integration is if we let the time step become zero, which we know we can't do. However, it is possible to achieve perfect integration in a computational simulation, and I will tell you how, but we will need to make a slight departure from the standard development of numerical integration.

Let's start with a very simple example of perfect integration.

Consider that you are writing the game Lunar Lander where you must use thrusters on a spacecraft to guide it safely to the landing pad. In this game, the only forces that come into play are the gravitational force, and the thrusters on the spacecraft.

Over the course of a single time step, we know which thrusters are on. We consider that the force due to a thruster is constant over the course of the time step. We also use a constant force for the gravitational force. Thus we have a full knowledge of the value of the force at all intermediate points during the simulation time step.

Add all of the forces together to get the total force. Dividing the total force by the mass will give you the acceleration.

a = (Fgrav + Fthruster + ...) / m

Now, we need not resort to any approximation of calculus in order to solve this problem. Since we know the form of the forces (constant) at all points during the simulation time step, we can solve the problem directly - analytically. The true solution and stepping equations without approximation are given by

v1 = v0 + aΔt

x1 = x0 + v0Δt + (1/2)aΔt²

Anyone who has taken a physics class, even a beginning physics class, has seen these equations before, even if they don't recognize them here. They are called the kinematic equations of uniform acceleration, and they are the primary feature of almost all beginning physics problems, especially problems involving a baseball, basketball, or any other type of sports related sphere.

Here the a is constant, so we don't need to use a subscript to specify the point in time where we are evaluating it. These equations look almost like the stepping equations we were using before, except the position equation has an extra term involving acceleration. These equations are perfect, meaning their accuracy is not limited by the size of the time step. In a real application the upper bound on the size of the time step would not be related to the accuracy of the integration, but rather to the ability to perform collision with the landscape. Collision issues aside however, this method will give you the right answer, no matter how big the time step is.
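A sketch of the constant-acceleration stepping equations (illustrative names). Because the step is exact, one big step lands exactly where a sequence of smaller steps does:

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// Exact step under CONSTANT acceleration: correct for any dt.
State KinematicStep(State s, float a, float dt)
{
    State out;
    out.v = s.v + a * dt;
    out.x = s.x + s.v * dt + 0.5f * a * dt * dt;
    return out;
}
```

For example, with a = -10 (gravity-ish), one step of dt = 2 gives the same state as two steps of dt = 1.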

If these stepping equations are perfect, why don't we use them? The answer is that these equations are only perfect if the forces are really constant. If we use these equations to simulate forces that are not constant, e.g. the spring force, we will not get very good results. If we decide to use these equations to step the physics of our system forward in time, we have essentially moved the approximation out of the calculus and into the representation of the forces. And the approximation applied to the forces ends up having worse results than applying the approximation to the calculus.

In short: the kinematic equations rock for forces that are constant, but really suck for forces that aren't.

If we can't use the kinematic equations all of the time, should we ditch them all together? Is it possible for constant forces to use the kinematic equations, and non-constant forces to use the semi-implicit Euler equations? Then our integration would be perfect with respect to the constant forces, at least. Under such a scheme, how would we go about mixing constant and non-constant forces?

Consider once again the semi-implicit Euler equations. We can plug the velocity equation into the position equation, since they both involve v1. The result is similar in appearance to the kinematic equations
v1 = v0 + a0Δt

x1 = x0 + v0Δt + a0Δt²

We now see a very striking resemblance to the kinematic equations. We actually have three terms that match exactly: the first term in the velocity equation, and the first two terms in the position equation. Only the terms that involve acceleration are different. It can be shown, in the general case, that the three matching terms will always match, regardless of the form of the acceleration. The form of the acceleration only affects the terms which involve acceleration.

We now have a clue as to how we might combine a constant force with a non-constant force.

v1 = v0 + ( aCΔt + aNCΔt )

x1 = x0 + v0Δt + ( (1/2) aCΔt² + aNCΔt² )

Thus, the constant force gets the benefit of using the kinematic equations, and the non-constant force can use the semi-implicit Euler technique. You may see some simplification that can take place, since the acceleration terms contain common factors. Now, it is possible to analytically solve for the acceleration terms given other forms of acceleration. These terms may not have common factors, so I chose to keep the acceleration terms separated.
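A sketch of the mixed step, where aC is the constant acceleration and aNC the non-constant one (the names are mine, not standard):

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// Mixed step: the constant part aC uses the exact kinematic terms,
// while the non-constant part aNC falls back to the semi-implicit Euler terms.
State MixedStep(State s, float aC, float aNC, float dt)
{
    State out;
    out.v = s.v + (aC * dt + aNC * dt);
    out.x = s.x + s.v * dt + (0.5f * aC * dt * dt + aNC * dt * dt);
    return out;
}
```

With aNC = 0 this reduces to the exact kinematic step; with aC = 0 it reduces to the semi-implicit Euler step.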

The Kinematic Integrator:

The kinematic integration method is given as follows

v1 = v0 + dv

x1 = x0 + v0Δt + dx

Which will give us the new position and velocity, given we supply the value of the previous position and velocity as well as the two integral parameters dx and dv. So what are dx and dv? They are generalizations of the terms which involve integrals of acceleration.

In Euler based integration techniques, you usually have a unique method to calculate each different kind of force. There might be a method for a spring force, a method for a pulse, etc. The resulting force is divided by the mass, and accumulated with whatever existing acceleration is already acting on the object.

Using the kinematic integration technique, the entire point of the force calculation method is not to calculate the force, but to calculate dv and dx. These are, likewise, accumulated with whatever existing dv's and dx's that were previously acting on an object.

The point of this generalization is to move the calculus out of the stepping equations, and into the force calculation, so that each different kind of force can give the true result of the acceleration integrals. This leaves the stepping equations in a form that is not dependent on the form of the input forces.

A method which calculates a constant force will return the exact dv and dx of a constant force. A method which calculates a spring force will return the exact dv and dx of a spring force. This is really great, but what do we do if we don't know how to calculate the true integral of a given force? Simple, we just go back to using the semi-implicit Euler technique. We have already expanded this technique out into kinematic form, so we know what dv and dx are.

dv = a0Δt

dx = a0Δt² = dvΔt
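A sketch of the generic stepping function, together with the semi-implicit Euler fallback expressed as a dv/dx pair (illustrative names; a real engine would accumulate the dv's and dx's from every force before stepping):

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// Generic kinematic-integrator step: the calculus lives inside dv and dx,
// which each force computes (exactly, when it can).
State KinematicIntegratorStep(State s, float dv, float dx, float dt)
{
    State out;
    out.v = s.v + dv;
    out.x = s.x + s.v * dt + dx;
    return out;
}

// Fallback dv/dx for a force we only know how to Euler-integrate.
void EulerFallback(float a0, float dt, float& dv, float& dx)
{
    dv = a0 * dt;
    dx = dv * dt; // a0*dt^2, matching the expanded semi-implicit form
}
```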

The spring force

So how do we calculate the dv and dx of a spring? As an example, consider a spring which connects a mass to the origin. The spring has a spring constant k, which determines how tight it is. The force acting on the mass has the form

F = -kx = -mω²x

Here ω is the angular frequency of the spring, and the reason for defining it is that the motion of the spring is given by a sinusoidal function of the following form

x = A sin(ωt) + B cos(ωt)

We can take the derivative of this expression to find the functional form of the velocity. Then we can use x0 and v0 as our initial conditions in order to fix the values of A and B.

v1 = v0 cos(ωΔt) - x0ω sin(ωΔt)

x1 = v0/ω sin(ωΔt) + x0 cos(ωΔt)

These are the true, un-approximated results of the integration. We can use these to find the values of dv and dx, merely by equating these results with the left hand side of the kinematic integrator equations. We find that

dv = v0 ( cos(ωΔt) - 1 ) - x0 ( ω sin(ωΔt) )

dx = v0 ( sin(ωΔt)/ω - Δt ) + x0 ( cos(ωΔt) - 1 )

If the parameters of your spring do not change, and you are using a fixed time step, then the coefficients of x0 and v0 do not change, and should only need to be calculated once.

Again, the accuracy of the stepping equations does NOT depend on the size of your time step, because we are using the exact result. I have compared the numerical result of the kinematic integrator with the exact result in the case of a spring, and there is literally zero error - no matter what the time step is, or how long the system runs.
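A sketch of a spring's exact dv/dx, transcribing the two formulas above (illustrative names; assumes ω > 0, and recall x1 = x0 + v0Δt + dx and v1 = v0 + dv):

```cpp
#include <cassert>
#include <cmath>

// Exact dv/dx for a spring anchored at the origin with angular frequency w.
// With a fixed w and dt, the four coefficients could be precomputed once.
void SpringDvDx(float x0, float v0, float w, float dt, float& dv, float& dx)
{
    float c = cosf(w * dt);
    float s = sinf(w * dt);
    dv = v0 * (c - 1.0f) - x0 * (w * s);
    dx = v0 * (s / w - dt) + x0 * (c - 1.0f);
}
```

Plugging these into the stepping equations reproduces the closed-form sinusoid exactly, for any dt.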

Multiple forces:

Now, there is a problem that arises when you try to apply two forces that depend on position or velocity. They both are integrated without a knowledge of the other. Therefore they are oblivious to the changes in intermediate position or velocity caused by the other force. These changes would have altered the final result, had they been taken into account.

A good example of this is the example of having two springs.

If one spring acts alone, the result of the kinematic integrator is exact, and so the result is always correct, regardless of the size of the time step. The story is different when two springs act simultaneously.

The system begins to accumulate error, due to the misrepresentation of the forces. However, the error is very small, almost insignificant. In a test of such a system, I calculated that the error of the kinematic integrator was 12,000 times smaller than the error from the semi-implicit Euler method. Unlike the semi-implicit Euler method, however, the kinematic integrator is not guaranteed to preserve energy. Therefore these very small inaccuracies build up, and over a very long time, the system diverges. Now, the introduction of even the smallest amount of friction to the system would conceal these tiny inaccuracies, however, there ends up being a better way.

The error due to multiple forces is actually an error in the calculus. We can play the trick where we move the error out of the calculus and into the representation of the forces, by making an approximation of the force. The approximation I have in mind ends up introducing less error than the error due to multiple forces. Also, this approximation tends to remove energy from the system, which makes the system stable. With this method, you would still calculate dv exactly, like before, but when you calculate dx, you do it like this

dx = (1/2) dvΔt

Using this method to calculate dx actually beats calculating it exactly in cases where multiple forces that depend on position are in play. This method is called the average acceleration method. In a test on a system with two springs, the average acceleration method removed about 5% of the energy from the system after about 3 hours of simulated time, and the error it introduced was 58 million times smaller than the error introduced by the semi-implicit Euler method.

It is recommended that forces which depend on position or velocity use the average acceleration method of calculating dx.
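A sketch of a step that uses the average acceleration method: dv is whatever exact value the force calculation produced, and dx is derived from it (illustrative names):

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// For position/velocity dependent forces: keep the exact dv,
// but replace the exact dx with (1/2)*dv*dt (average acceleration).
// This tends to bleed a little energy, which is stabilizing.
State AverageAccelStep(State s, float dvExact, float dt)
{
    State out;
    out.v = s.v + dvExact;
    out.x = s.x + s.v * dt + 0.5f * dvExact * dt;
    return out;
}
```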

Conclusion:

Using the kinematic integrator allows you to remove the process of integration from the stepping equations, and push it down into the force calculation. This enables you to tailor the integration method to each different type of force. Having this freedom allows you to provide exact solutions, if they are attainable. If the solutions are not attainable, you can still use the semi-implicit Euler method as a standby.

This method introduces a trivial amount of extra computation, i.e. two or three operations per force applied. In the spectrum of numerical methods, this method has roughly the same computational expense as a first order method, and achieves results that far exceed the more expensive fourth order methods.

Obviously this method only has utility if you know the form of the forces acting during a given simulation step, and also if the forces are integrable. Happily, most if not all of the forces that are used in run-time simulation are integrable.

With some systems it is possible to completely eliminate error. In others, the error is a very small fraction of the error of other methods that have comparable computational expense. Thus, with the kinematic integrator you can use much larger time steps, and expect higher accuracy from integration.

## Thursday, February 25, 2010

### Quaternions: Why

Before digging into the deep guts of the quaternion, I want to talk a little bit about complex numbers. Complex numbers are really only a step away from quaternions, and people are much less nervous around complex numbers. So it's a pretty good place to start.

A Bit About Complex Numbers:

A complex number has a real and an imaginary component.

z = x + iy

The presence of the i makes these two components linearly independent. Thus we can begin to think of a complex number as a 2D vector. In fact it is convenient to represent a complex number as a (real, imaginary) pair.

z = (x, y)

We know that we can add and subtract complex numbers

z1 + z2 = (x1 + x2, y1 + y2)

z1 - z2 = (x1 - x2, y1 - y2)

and we achieve the same result as adding and subtracting 2D vectors.

Complex numbers can do something that 2D vectors can't. You can multiply them.

z1z2 = (x1x2 - y1y2, x1y2 + y1x2)

The existence of the multiplication rule promotes the complex numbers from a mere vector space to an algebra. An algebra is a linear space that has a multiplication rule.
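The multiplication rule is easy to transcribe into code; here is a minimal sketch (illustrative names), which you can use to check, for example, that i times i gives (-1, 0):

```cpp
#include <cassert>
#include <cmath>

// A complex number as a (real, imaginary) pair.
struct Complex { float x; float y; };

// The complex multiplication rule: (x1x2 - y1y2, x1y2 + y1x2).
Complex CMul(Complex a, Complex b)
{
    Complex r;
    r.x = a.x * b.x - a.y * b.y; // real part
    r.y = a.x * b.y + a.y * b.x; // imaginary part
    return r;
}
```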

If we want the complex numbers to represent a 2D vector space we are missing something, and that's a dot product. In a 2D vector space, we have a dot product that tells us the length of the vectors, (or the length squared of the vectors to be more precise.)

But wait!! We can determine the magnitude squared of a complex number as well:

|z|² = zz* = (x² + y², 0) = x² + y²

Here the * represents a complex conjugate, which merely flips the sign of the imaginary component. The magnitude squared of a complex number is identical to the result of taking the dot product of a 2D vector with itself. Using this as your starting point, you can show that the vector dot product, defined in terms of a complex algebra is

A•B = (1/2) (AB* + BA*)

When endowed with a dot product, the complex numbers truly do contain a full and complete representation of a 2D vector space, with the added bonus of having a well defined way of multiplying the vectors together.
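A sketch of that dot product formula (illustrative names): the imaginary parts of AB* and BA* cancel, leaving exactly the 2D dot product:

```cpp
#include <cassert>
#include <cmath>

struct Complex { float x; float y; };

Complex CMul(Complex a, Complex b)
{
    return Complex{ a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x };
}

// Complex conjugate: flip the sign of the imaginary component.
Complex Conj(Complex a) { return Complex{ a.x, -a.y }; }

// A.B = (1/2)(AB* + BA*), whose result is purely real.
float CDot(Complex a, Complex b)
{
    Complex s = CMul(a, Conj(b));
    Complex t = CMul(b, Conj(a));
    return 0.5f * (s.x + t.x);
}
```

In particular CDot(z, z) recovers |z|², matching the magnitude formula above.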

In other words, 2D vectors are not complex numbers, but complex numbers ARE 2D vectors which have the added power of multiplication.

So What Was Hamilton Thinking?

Often, when one is looking up information about quaternions, the story comes up about how W.R. Hamilton is walking along one day, and while crossing a bridge he suddenly comes up with the formula for quaternions. He then promptly vandalizes the bridge and goes on his merry way.

How does someone just come up with quaternions? I will tell you.

Hamilton was aware of the fact that the algebra of complex numbers supplies a natural mechanism for multiplying 2D vectors, and he was wondering: how would a person multiply 3D vectors? We add and subtract 3D vectors, and we have a dot product for 3D vectors, but how do you multiply 3D vectors?

Now, some people might be thinking: what about the cross product? Why can't the cross product be the multiplication rule for 3D vectors? The answer is: the cross product definitely holds a clue, but it is not the entire answer. Besides, it wasn't until after the invention of the quaternion that we even had a cross product, so Hamilton didn't know about it.

So, as Hamilton is crossing the bridge it dawns on him how we might multiply 3D vectors together, and he writes the multiplication rules for the 3D basis vectors on the bridge. These multiplication rules are what we are talking about when we say the "definition of a quaternion."

So How Did Hamilton Come Up With the Formula?

Remember how we defined the 2D dot product in terms of the complex multiplication rule? Hamilton similarly decided that if a 3D multiplication rule exists, the result of multiplying a vector with its conjugate should be a real number that is equal to the length squared of the vector. So, however the multiplication works, it should result in the following formula

VV† = x² + y² + z²

The meaning of the † here is similar to that of the complex conjugate, but here it flips the signs of all of the basis vectors.

V† = -xI - yJ - zK

There are nine terms in the product, when the vector is expressed in terms of its 3 components

VV† = -x²I² - y²J² - z²K² - xy(IJ + JI) - xz(IK + KI) - yz(JK + KJ)

By not combining the IJ and the JI terms, I am stating that they are possibly different.

While crossing the bridge Hamilton realized that he could satisfy his initial postulate about the multiplication of 3D vectors, if the basis vectors satisfied the following multiplication rules.

I² = J² = K² = -1

IJK = -1

In other words, if he applied these rules, the first 3 terms would fall out correctly, and the last 6 terms would vanish.

Hamilton was pretty stoked about this, because this meant that you could add, and subtract 3D vectors, but now you could also multiply them together!

So Where Does the Fourth Component Come From?

Using Hamilton's multiplication rules for the basis vectors, we can define a general multiplication rule for the product of arbitrary 3D vectors.

AB = -A·B + A×B

The first term is a scalar, and the second term is a 3 component vector, four components in all. It was probably this surprising discovery that the multiplication rules summoned the existence of a 4th component that prompted Hamilton to call them quaternions, which literally means "a group of 4."

The existence of the 4th component in a general quaternion does not change the original meaning of the vector portion. The vector portion of a quaternion is truly a 3D vector, in every possible identification. It adds, subtracts, and has a dot product like a standard 3D vector. We can use it in every possible way that we can use any 3D vector. The only magic here is that we now have a multiplication law. This is a good thing, since we can use this multiplication law to make meaningful geometric statements about 3D vectors.
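A sketch of this product for pure-vector quaternions (illustrative names): the scalar slot receives minus the dot product and the vector slot receives the cross product, so for the basis vectors it reproduces Hamilton's rules (e.g. II = -1 and IJ = K):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x; float y; float z; };
struct PureQuat { float w; Vec3 v; };  // (scalar, vector) result

// Product of two pure-vector quaternions: AB = -A.B + AxB.
PureQuat PureVectorProduct(Vec3 a, Vec3 b)
{
    PureQuat q;
    q.w   = -(a.x * b.x + a.y * b.y + a.z * b.z); // minus the dot product
    q.v.x = a.y * b.z - a.z * b.y;                // the cross product
    q.v.y = a.z * b.x - a.x * b.z;
    q.v.z = a.x * b.y - a.y * b.x;
    return q;
}
```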

So What Do Quaternions Have To Do With Rotations?

At the very heart of the definition of quaternion multiplication lies the postulate that it must somehow represent the length of the vector. The fundamental definition of a rotation is that it is a transformation which does not change the length of a vector. Thus the definition of quaternion multiplication is very intimately connected to the concept of rotation.

Let's build the rotation from the ground up. To begin with, we know that quaternion multiplication from the right is not the same as quaternion multiplication from the left. Thus, the general form of a transformation would look something like this

v' = AvB

Where we have two transformation quaternions, one acting on the right, and one acting on the left.

Since v is a 3D vector, we don't care what the scalar part is. However, the transformation should also not care. This means that the transformation should not change the value of the scalar part. If the scalar part starts at zero, the transformation should leave it zero. Placing this restriction on the general form of the transformation leads to the following condition.

B = A†

And so our transformation law now looks like this

v' = AvA†

Finally, we require that the length of v is not changed by the transformation. This can be stated using Hamilton's initial postulate of 3D vector multiplication

v'v'† = vv†

AvA† (AvA†)† = AvA†Av†A†

We see that the only way our condition can be satisfied is if AA† = 1. In other words A must be normalized. An arbitrary normalized quaternion takes the form

N = cos(α) + n sin(α)

Here n is a normalized 3D vector.

If we use this N to apply the transformation, we will see that we have successfully rotated v around the axis n by an angle of 2α. The reason for the factor of 2, is because there are 2 factors of N acting on v. To take this into account, we generally define rotation quaternions in terms of a half-angle.

r = cos(θ/2) + n sin(θ/2)

Conclusion:

You now know the why behind quaternions. A 3D vector is not a quaternion, but a quaternion IS a 3D vector, with a multiplication law that requires an additional scalar component. Go now, and unleash your newfound power upon the helpless masses.

## Wednesday, February 24, 2010

### Quaternions: How

Game developers have a love-hate relationship with quaternions. Most understand that they are somehow mathematically superior to Euler-Angles for representing rotations. Most also know that they are more compact, and easier to interpolate than rotation matrices. Other than that, quaternions remain a mystery.

I have a very deep and powerful love relationship with quaternions. For me, knowing why they work is more important than knowing how they work. However, most programmers need to see the how of quaternions before they care to ask about why. I will attempt to bestow my quaternion mojo upon you at this time, if you are willing. For now, I will just talk about HOW.

We will discuss the basic things that any game programmer actually does with quaternions, such as:

• Representing a rotation using an axis and an angle
• Rotating vectors
• Concatenating rotations
• Blending between an initial and final rotation
• Integrating quaternions using an angular velocity

How to Represent a Quaternion:

A quaternion can be represented by 4 numbers. A common struct definition might be

```
struct Quat
{
float x;
float y;
float z;
float w;
};
```
The x, y, and z components of a quaternion represent coordinates in standard 3D space. Really. Some people might try to make it more complicated than that, but they are mistaken. When using a quaternion to represent a standard vector, it is customary, but unnecessary, to set the w component to zero. It is convenient to define the quaternion with the w last, so that you can easily typecast it to a 3D vector. A quaternion IS a vector, in every sense of the term vector.

You may ask: if a quaternion IS a vector, why all the fuss? That is an excellent question the answer of which plumbs the very foundations of the universe itself. (You think I'm joking, but I'm not!!) I will give you the answer when I'm ready to discuss the why of the quaternion.

When talking about quaternions it is convenient to use (scalar, vector) notation. The scalar I'm referring to is the w component, and the vector is the x, y, and z components. For instance, I might use the following notation

q = (w, V)

In order to use a quaternion to represent a rotation, you need to know the angle θ of the rotation, and the axis n around which you are rotating. The axis n is a normalized 3D vector, and the angle θ is e.g. a float. The rotation quaternion is defined as:

r = ( cos(θ/2), n sin(θ/2) )

Looking at this, you may think that it almost makes sense; you wonder why θ/2, and not just θ? This is a wonderful question, and the answer is both beautiful and profound. (You think I'm joking, but I'm not!!) But I'm not answering why yet, I'm just discussing how. One thing that is very important about this formula: the angle is in radians, NOT degrees!

To generate a quaternion with a given axis and angle, you may envision creating a function like this

```
Quat QuatFromAxisAngle(const Vector& axis, float angleInRadians)
{
Quat result;
float angle = angleInRadians / 2.0f;
float sinAngle = sin(angle);
Vector n = VecNormalize(axis);
result.w = cos(angle);
result.x = n.x * sinAngle;
result.y = n.y * sinAngle;
result.z = n.z * sinAngle;
return result;
}
```
You now have a quaternion that represents a rotation.

How To Rotate a Vector With a Quaternion:

Quaternions aren't just a set of 4 numbers, they are an algebra. This means that there is a procedure defined to add and multiply these quantities. You can perform a vector rotation using a formula involving quaternion multiplication. But I'm not going to talk about that yet, because most of the time you aren't going to use the quaternion rotation formula.

The first rule of rotating vectors with quaternions is: don't rotate vectors with quaternions! Quaternions are great for representing rotations, but when you get to the point in your code where you are doing the actual rotation computation, it's better to convert to a matrix form. Using the quaternion multiplication rule to rotate a vector isn't too expensive, but nothing is more efficient than a matrix for transforming vectors. In fact, it is cheaper to convert the quaternion to a matrix, and use the matrix to rotate the vector, than to use the quaternion formula.

As it turns out, most transforming of vectors is done by the graphics library and not by you. However, transformations are ubiquitously represented by matrices in graphics libraries. If you want to send your rotation to the graphics library, you will need to convert it to a matrix. Therefore it is essential for you to know how to convert a quaternion to a matrix.

To convert a quaternion to a matrix use this function:

```
Matrix MatrixFromQuaternion(const Quat& q)
{
    Matrix result;
    //helper quantities - calculate these up front
    //to avoid redundancies
    float xSq = q.x * q.x;
    float ySq = q.y * q.y;
    float zSq = q.z * q.z;
    float wSq = q.w * q.w;
    float twoX = 2.0f * q.x;
    float twoY = 2.0f * q.y;
    float twoW = 2.0f * q.w;
    float xy = twoX * q.y;
    float xz = twoX * q.z;
    float yz = twoY * q.z;
    float wx = twoW * q.x;
    float wy = twoW * q.y;
    float wz = twoW * q.z;
    //fill in the first row
    result.m00 = wSq + xSq - ySq - zSq;
    result.m01 = xy - wz;
    result.m02 = xz + wy;
    //fill in the second row
    result.m10 = xy + wz;
    result.m11 = wSq - xSq + ySq - zSq;
    result.m12 = yz - wx;
    //fill in the third row
    result.m20 = xz - wy;
    result.m21 = yz + wx;
    result.m22 = wSq - xSq - ySq + zSq;
    return result;
}
```
This function does not assume that the input quaternion is normalized. A quaternion only represents a rotation if it is normalized. If it is not normalized, then there is also a uniform scale that accompanies the rotation. Normalizing a quaternion is similar to normalizing a vector. You just have to take into account that the quaternion has 4 components. To be explicit, here is a function to normalize a quaternion.

```
Quat QuatNormalize(const Quat& q)
{
    Quat result;
    float sq = q.x * q.x;
    sq += q.y * q.y;
    sq += q.z * q.z;
    sq += q.w * q.w;
    //detect badness
    assert(sq > 0.1f);
    float inv = 1.0f / sqrt(sq);
    result.x = q.x * inv;
    result.y = q.y * inv;
    result.z = q.z * inv;
    result.w = q.w * inv;
    return result;
}
```
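As a sanity check on the quaternion-to-matrix conversion (a self-contained sketch with minimal stand-in types; a row-major 3x3 matrix acting on column vectors is assumed here): a quarter turn about z should produce a matrix whose first column is (0, 1, 0), i.e. it sends the x axis onto the y axis.

```cpp
#include <cmath>

// Minimal stand-ins, for illustration only.
struct Quat { float w, x, y, z; };
struct Mat3 { float m[3][3]; };  // row-major, acts on column vectors

Mat3 MatFromQuat(const Quat& q)
{
    // Same formula as the article's conversion function above.
    float xSq = q.x*q.x, ySq = q.y*q.y, zSq = q.z*q.z, wSq = q.w*q.w;
    float xy = 2.0f*q.x*q.y, xz = 2.0f*q.x*q.z, yz = 2.0f*q.y*q.z;
    float wx = 2.0f*q.w*q.x, wy = 2.0f*q.w*q.y, wz = 2.0f*q.w*q.z;
    return Mat3{{
        { wSq + xSq - ySq - zSq, xy - wz,               xz + wy               },
        { xy + wz,               wSq - xSq + ySq - zSq, yz - wx               },
        { xz - wy,               yz + wx,               wSq - xSq - ySq + zSq },
    }};
}
```

Feeding in the quarter-turn quaternion (√2/2, 0, 0, √2/2), the top-left entry comes out 0, the entry below it 1, and the bottom-right entry 1: x has been rotated onto y, and z is left alone.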

Now, you may be wondering how I came up with this magic formula to convert quaternions to matrices. I can do it because I know how to use the quaternion rotation formula. You may also be wondering why you would ever mess around with quaternions if you are just going to convert them to a matrix anyway. Keep reading... you will see.

How to Concatenate Rotations:

When you multiply two transformation matrices together, the result (aside from any numerical error) is also a transformation matrix. Transformation matrices can rotate, translate, scale, and skew. However, in many cases the only operation being performed by a transformation matrix is a rotation.

If you multiply two 3x3 rotation matrices together, there are 27 multiplications to evaluate. If you multiply two quaternions together, there are only 16. That count can be optimized a bit further, but even with the naive approach you can perform nearly 1.7 quaternion multiplies in the time of one matrix multiply. Thus, if you know your transformations only involve rotations, using quaternions is a very good thing.

The order of quaternion multiplication is important, so you need to keep track. Just remember, the first rotation is on the right, and the second rotation is on the left.

qT = q2q1

This multiplication is a quaternion multiplication. The way that quaternion multiplication is defined is one of the things that makes quaternions good at representing rotations.

You can express the quaternion multiplication in terms of standard vector operations, such as dot and cross products. Writing each quaternion as a scalar part and a vector part, Aq = (a, A) and Bq = (b, B):

AqBq = (ab - A·B, aB + bA + A×B)

A function which multiplies two quaternions together may be defined in terms of the components as follows:

```
Quat QuatMultiply(const Quat& q1, const Quat& q2)
{
    Quat result;
    result.w = q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z;
    result.x = q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y;
    result.y = q1.w*q2.y + q1.y*q2.w + q1.z*q2.x - q1.x*q2.z;
    result.z = q1.w*q2.z + q1.z*q2.w + q1.x*q2.y - q1.y*q2.x;
    return result;
}
```
This is just an expanded version of the vector operations given previously.
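For example (a self-contained sketch; the RotZ helper is my own, built from the axis-angle formula): concatenating two quarter turns about the same axis should give a half turn. Note the half angle at work again; a quarter turn quaternion carries a 45° angle inside its sines and cosines.

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Hamilton product: (w1w2 - v1.v2, w1 v2 + w2 v1 + v1 x v2)
Quat QuatMultiply(const Quat& q1, const Quat& q2)
{
    Quat r;
    r.w = q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z;
    r.x = q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y;
    r.y = q1.w*q2.y + q1.y*q2.w + q1.z*q2.x - q1.x*q2.z;
    r.z = q1.w*q2.z + q1.z*q2.w + q1.x*q2.y - q1.y*q2.x;
    return r;
}

// Rotation of 'angle' radians about z: (cos(a/2), 0, 0, sin(a/2)).
Quat RotZ(float angle)
{
    return Quat{ std::cos(angle*0.5f), 0.0f, 0.0f, std::sin(angle*0.5f) };
}
```

Multiplying RotZ(π/2) by itself gives (0, 0, 0, 1), which matches RotZ(π): a half turn, just as two quarter turns should be.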

How to Interpolate a Quaternion:

This is a huge topic, but I will boil it down for you and give you the goods.

Interpolating a quaternion is useful when smoothly varying between an initial and final rotation. Interpolation is good for finding arbitrary in-between values of rotation. This is employed especially in character animation systems. It is possible to interpolate rotation matrices, but the interpolated matrix may not be a size and shape preserving matrix. Needless to say, interpolating a quaternion is a bajillion times easier than interpolating rotation matrices.

There is one interesting property of quaternions that comes into play when dealing with interpolation. If a rotation can be represented by a quaternion q, then the quaternion -q also represents the same rotation. Why is that? I'm not going to explain it right now, other than to say that it is connected to the very fabric of reality. (You think I'm joking but I'm not!) What you need to worry about is which one of these quaternions you are going to use.

To describe the difference between q and -q, consider turning a quarter turn to your left. Essentially this is the same as turning three quarters of a turn to your right. One turn is the "short" turn and the other is the "long" one. When representing a static orientation, it is irrelevant whether a quaternion represents the short or the long path, because the object just sits in the final position and you never see the in-between values. However, when you are blending, it surely does make a difference.

When blending between an initial and a final quaternion, there is some ambiguity as to whether we are taking the "short" way or the "long" way. It is almost always right to blend along the shortest path. Given the two input quaternions, you can determine which way the blend will go by examining the sign of their 4D dot product. If the sign is negative, then you know you are going to be blending the long way.

So, what do you do if you find out that you are blending the long way? Simply flip the sign on one of your input quaternions. Remember, q and -q represent the same rotation. Flipping the sign on one of your inputs will flip the sign of the 4D dot product.
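The check and the sign flip together look something like this (a self-contained sketch; the function name is my own):

```cpp
struct Quat { float w, x, y, z; };

// Returns f, sign-flipped if necessary, so that blending from i to f
// takes the short way around. Remember: -f is the same rotation as f.
Quat PickShortestPath(const Quat& i, Quat f)
{
    float dot = i.w*f.w + i.x*f.x + i.y*f.y + i.z*f.z;
    if (dot < 0.0f)
    {
        f.w = -f.w; f.x = -f.x; f.y = -f.y; f.z = -f.z;
    }
    return f;
}
```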

Now that we have discussed that little tid-bit, let's move on to interpolation formulas.

There are a few interpolation formulas, but two main ones. NLerp is a linear interpolation of the components, followed by a normalization of the interpolated quaternion to ensure that it represents a rotation. Slerp is a spherical interpolation, which interpolates in a spherical space rather than in the Cartesian space of the coordinates. The interpolant of the Slerp function moves at a constant angular speed, while the interpolant of the NLerp has some nonlinear acceleration.

Here's the quick and dirty: don't mess around with Slerp, even though you think it might be the more "correct" thing to do. It is too expensive, and has too many special cases that need to be considered. There are some complicated schemes that try to closely approximate the Slerp function, but it just isn't worth it. Just use NLerp, especially in computationally strapped code.

In fact, I'm not even going to show you how to Slerp. You can consult Google if you really want to know.

Here is a blending function that uses NLerp

```
Quat QuatBlend(const Quat& i, const Quat& f, float blend)
{
    Quat result;
    float dot = i.w*f.w + i.x*f.x + i.y*f.y + i.z*f.z;
    float blendI = 1.0f - blend;
    if(dot < 0.0f)
    {
        Quat tmpF;
        tmpF.w = -f.w;
        tmpF.x = -f.x;
        tmpF.y = -f.y;
        tmpF.z = -f.z;
        result.w = blendI*i.w + blend*tmpF.w;
        result.x = blendI*i.x + blend*tmpF.x;
        result.y = blendI*i.y + blend*tmpF.y;
        result.z = blendI*i.z + blend*tmpF.z;
    }
    else
    {
        result.w = blendI*i.w + blend*f.w;
        result.x = blendI*i.x + blend*f.x;
        result.y = blendI*i.y + blend*f.y;
        result.z = blendI*i.z + blend*f.z;
    }
    result = QuatNormalize(result);
    return result;
}
```
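As a worked example (a self-contained sketch; it inlines the same dot-product flip and normalization as the function above): blending halfway between the identity and a quarter turn about z should land on an eighth turn, 45°, i.e. w = cos(22.5°) ≈ 0.9239 and z = sin(22.5°) ≈ 0.3827.

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

Quat QuatBlend(const Quat& i, const Quat& f, float t)
{
    // Flip f if needed so we blend the short way around.
    float dot = i.w*f.w + i.x*f.x + i.y*f.y + i.z*f.z;
    float sign = (dot < 0.0f) ? -1.0f : 1.0f;
    Quat r;
    r.w = (1.0f - t)*i.w + t*sign*f.w;
    r.x = (1.0f - t)*i.x + t*sign*f.x;
    r.y = (1.0f - t)*i.y + t*sign*f.y;
    r.z = (1.0f - t)*i.z + t*sign*f.z;
    // Normalize so the result represents a rotation again.
    float inv = 1.0f / std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    r.w *= inv; r.x *= inv; r.y *= inv; r.z *= inv;
    return r;
}
```

At the halfway point the lerp and the normalization conspire nicely: the result is exactly the 45° rotation, (0.9239, 0, 0, 0.3827).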
This function has a singularity when the difference between the initial and final quaternions is a 180 degree rotation, because the axis of rotation for the blend becomes ambiguous. You could detect this case and pick, say, the "up" vector as the axis of the blend, or you could break the blend into a few steps. This singularity shows up in any interpolation scheme, not just NLerp.

How to Integrate a Quaternion:

Updating the dynamical state of a rigid body is referred to as integration. If you represent the orientation of this body with a quaternion, you will need to know how to update it. This is done with the following quaternion formula.

q' = Δq q

We calculate Δq using a 3D vector ω whose magnitude represents the angular velocity, and whose direction represents the axis of the angular velocity. We also use the time step Δt over which the velocity should be applied. Δq is still a rotation quaternion, and has the same form involving sines and cosines of a half angle. We use the angular velocity and time step to construct a vector θ, whose magnitude is the half angle, and whose direction is the axis.

θ = ωΔt/2

Note: I've included the factor of 1/2, which shows up inside the trig functions of the rotation quaternion. Expressing the rotation quaternion in terms of this vector you have

Δq = ( cos|θ|, (θ/|θ|) sin|θ| )

This works well; however, the formula becomes numerically unstable as |θ| approaches zero. If we detect that |θ| is small, we can safely use the Taylor series expansions of the sin and cos functions. The "low angle" version of this formula is

Δq = ( 1 - |θ|²/2, θ - θ|θ|²/6 )

We use only the first terms of the Taylor series expansions, so we should ensure that the next term of the cosine expansion is smaller than machine precision before using the "low angle" version. That is, we require

|θ|⁴/24 < ε

Here is a sample function for integrating a quaternion with a given angular velocity and time step

```
Quat QuatIntegrate(const Quat& q, const Vector& omega, float deltaT)
{
    Quat deltaQ;
    Vector theta = VecScale(omega, deltaT * 0.5f);
    float thetaMagSq = VecMagnitudeSq(theta);
    float s;
    if(thetaMagSq * thetaMagSq / 24.0f < MACHINE_SMALL_FLOAT)
    {
        deltaQ.w = 1.0f - thetaMagSq / 2.0f;
        s = 1.0f - thetaMagSq / 6.0f;
    }
    else
    {
        float thetaMag = sqrt(thetaMagSq);
        deltaQ.w = cos(thetaMag);
        s = sin(thetaMag) / thetaMag;
    }
    deltaQ.x = theta.x * s;
    deltaQ.y = theta.y * s;
    deltaQ.z = theta.z * s;
    return QuatMultiply(deltaQ, q);
}
```
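To see it in action (a self-contained sketch with minimal stand-in types; the small-angle threshold constant is a stand-in for MACHINE_SMALL_FLOAT): spinning at π radians per second about z for one second, starting from the identity, should land on a half turn, the quaternion (0, 0, 0, 1), listing w first.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Quat QuatMultiply(const Quat& q1, const Quat& q2)
{
    Quat r;
    r.w = q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z;
    r.x = q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y;
    r.y = q1.w*q2.y + q1.y*q2.w + q1.z*q2.x - q1.x*q2.z;
    r.z = q1.w*q2.z + q1.z*q2.w + q1.x*q2.y - q1.y*q2.x;
    return r;
}

Quat QuatIntegrate(const Quat& q, const Vec3& omega, float deltaT)
{
    // theta = omega * deltaT / 2: the half-angle vector derived above.
    Vec3 t{ omega.x*deltaT*0.5f, omega.y*deltaT*0.5f, omega.z*deltaT*0.5f };
    float magSq = t.x*t.x + t.y*t.y + t.z*t.z;
    Quat dq;
    float s;
    if (magSq * magSq / 24.0f < 1e-8f)  // 1e-8f: assumed small-float threshold
    {
        // Low-angle branch: Taylor series of cos and sin(x)/x.
        dq.w = 1.0f - magSq / 2.0f;
        s = 1.0f - magSq / 6.0f;
    }
    else
    {
        float mag = std::sqrt(magSq);
        dq.w = std::cos(mag);
        s = std::sin(mag) / mag;
    }
    dq.x = t.x * s;
    dq.y = t.y * s;
    dq.z = t.z * s;
    return QuatMultiply(dq, q);
}
```

Here theta comes out as (0, 0, π/2), so Δq = (cos(π/2), 0, 0, sin(π/2)) = (0, 0, 0, 1), and multiplying by the identity leaves exactly that half-turn quaternion.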

This is basically it! You now know how to accomplish all of the main tasks that any game programmer will usually bump up against relating to quaternions. If you are brave, you can move on to my next post, which covers a lot of details concerning the WHY of quaternions.