Thursday, June 24, 2010

Blending a Walk/Run cycle

My friend, Rory Aguilar, posed an interesting problem to me:

"Suppose you have a walk animation, and a run animation, and you want to blend them together in such a way that you can smoothly go from walking to running without the feet sliding all over the place."

(Rory is kind of girly, so Rory quotes are highlighted in pink.)

This is very interesting for character animation, since it becomes possible to smoothly vary the speed of a character, while maintaining realistic movement.

Let's assume some things about our underlying animation system:

1) it is possible to get the instantaneous pose on any frame, or intermediate frame, of a particular animation
2) it is possible to do this for multiple animations
3) it is possible to perform a weighted blend of the resulting poses on a per bone basis

Now, let's assume some things about the walk and the run animations:

1) both the walk and the run animations represent a single loop of the cycle
2) the walk animation does not need to have the same number of frames as the run animation. It will likely have many more frames.
3) the run animation does not need to cover the same amount of distance as the walk animation. It will likely cover a longer distance, since when you run you have a longer stride.
4) for now, we will assume that the footfalls happen at the same fraction, e.g. the right foot hits the ground 45% of the way through the animation on both animations.
5) The animations will contain translations off of the origin. In other words, we are not going to remove the translations off the origin, and then add them in after the fact.

Ok, we're ready to go. Let's dive on in.

Step 1 : define the blend parameter

We are going to define a blend parameter b, that defines how much we are walking or how much we are running. If b = 0, then we are walking. If b = 1, then we are running.

You can relate the blend parameter to the variable speed you desire your character to go. Before doing this you need to calculate the speed of the walk animation, and the speed of the run animation. To calculate the speed of an animation that has N frames, runs at 30 fps and covers a distance d measured in feet, you do the following

speed = d / (N / 30) = 30d / N feet per second

If you want the speed to be in units other than feet per second, then you can multiply this by an appropriate conversion factor. For instance, if you want to express the speed in miles per hour, then the speed becomes

speed = (30d / N)(3600 / 5280) mph

You can express the blend parameter in terms of the variable velocity v, and the velocities of the walk and run animations as follows

b = (v - vw) / (vr - vw)
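As a sketch of these two calculations (the function and variable names here are my own, and 30 fps is assumed):

```cpp
#include <cassert>
#include <cmath>

// Speed of a looping animation: distance covered per cycle divided by the
// duration of the cycle, assuming 30 frames per second.
float animSpeed(float distFeet, float numFrames)
{
    return distFeet / (numFrames / 30.0f);
}

// Blend parameter b: 0 at the walk speed, 1 at the run speed.
float blendParam(float desiredSpeed, float walkSpeed, float runSpeed)
{
    return (desiredSpeed - walkSpeed) / (runSpeed - walkSpeed);
}
```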

Step 2 : unify the duration of the animations

When you are blending the two animations together, you wouldn't want to blend the beginning of the walk animation with the end of the run animation, because it would result in nonsense. However, the run animation is likely to end much sooner than the walk animation. What we would like to achieve is some type of temporal correspondence between the frames of the walk animation and the frames of the run animation. One way to achieve this is to slow down the run animation, so that it ends on precisely the same frame as the walk animation. The rate that achieves this is

rate = Nr / Tw

This ratio represents the rate required to play the number of frames in the run animation Nr within the time it takes to complete a walk cycle Tw.

The time it takes to complete a walk cycle is

Tw = Nw / 30

So the rate is

rate = 30 Nr / Nw

This rate represents the rate of the run animation when the blend value b is equal to 0, i.e. we are in a full walk. For intermediate blend values, we not only need to slow down the run animation, we also need to speed up the walk animation. Blending the rates linearly, the blended rates for both animations are given as

ratew = 30 [ (1 - b) + b (Nw / Nr) ]

rater = 30 [ (1 - b)(Nr / Nw) + b ]

If you plug in a value of b=0 into these equations, we see that the rate of the walk animation is 30 fps, and the rate of the run animation is exactly what we had calculated before. If we plug in a value of b=1 we see that the run animation rate is now 30 fps, and the rate of the walk animation is modified.

You can use these rate equations to calculate the animation frame that should be evaluated, given the time delta of the loop Δt measured in seconds, and the animation frame evaluated on the previous loop.

frame = (frameprev + rate Δt) mod N

Using this calculation, you should be able to vary the value of b from one loop to the next, and still maintain temporal correspondence between the frames of the two animations, e.g. both animations should always cycle on the same loop, and the footfalls should always occur at the same time.
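Here is a small sketch of the rate calculation and frame advance described above (a linear blend of the rates is assumed, and the names are mine):

```cpp
#include <cassert>
#include <cmath>

// Blended playback rates, in frames per second, for walk and run cycles with
// Nw and Nr frames. The ratio of the two rates is always Nw/Nr, so the
// fractional positions frame/Nw and frame/Nr stay locked together.
float walkRate(float b, float Nw, float Nr)
{
    return 30.0f * ((1.0f - b) + b * Nw / Nr);
}

float runRate(float b, float Nw, float Nr)
{
    return 30.0f * ((1.0f - b) * Nr / Nw + b);
}

// Advance an animation frame by dt seconds at the given rate, wrapping at N
// so the cycle loops.
float advanceFrame(float frame, float rate, float dt, float N)
{
    frame += rate * dt;
    while (frame >= N) frame -= N;
    return frame;
}
```

At b = 0 this gives 30 fps for the walk and 30·Nr/Nw fps for the run, which is the full-walk rate calculated before; at b = 1 the run plays at 30 fps.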

Step 3 : unify the distance traversed by the animations

The footfalls of the independent animations are now happening at the same time, but the animations are not in the same place when this happens. As stated before, the stride of a run animation is probably longer than that of a walk animation.

To synchronize the position of the character across the two animations, I will define a "target bone". This bone can be used to define the floor plane, the position of the character, the forward direction, and the up direction, on each frame of the animation.

We will need to know the full joint matrix for the target bone (T), both in animation space TA, and in world space TW. These two are related by the animation origin AO through matrix multiplication

TW = AO TA

Each animation will have a unique independent animation origin, and the value of TA is the animation space representation of the target bone, evaluated at the frame determined by the rate calculated in the previous section. Therefore each animation will have a unique result for the value of TW

As we blend from one animation to the other, we want to transition which animation controls the world space value of the target bone matrix. To do this, we will blend the world space representation of the target bone.

Blending a matrix is kind of tricky. If the matrix is composed of only translates and rotates, it is possible to decompose the matrix into a translate vector, and a rotation quaternion, both of which can be blended, and then recomposed back into a matrix.
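A minimal sketch of that decompose-blend-recompose idea (the types and names are mine; the rotation blend here is a normalized lerp, which is adequate for the small angular differences between corresponding walk and run poses):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

// Blend the translation parts of two decomposed matrices.
Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
}

// Blend the rotation parts with a normalized lerp (nlerp).
Quat nlerp(Quat a, const Quat& b, float t)
{
    // Flip one quaternion if needed so we blend along the shorter arc.
    float d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (d < 0.0f) { a.w = -a.w; a.x = -a.x; a.y = -a.y; a.z = -a.z; }
    Quat r = { a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
               a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
    float len = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    r.w /= len; r.x /= len; r.y /= len; r.z /= len;
    return r;
}
```

Blend the translation and rotation separately, then recompose the blended pair back into a matrix.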

If we want the individual animations target bone to be the same as the blended target bone in world space, we will need to move the respective animation origins.

Using the blended world space values of the target bone matrix, we can determine the incremental offset that needs to be applied to each of the animation origins. Use the formula

TWblended = (Δ AO) TA

And solve for Δ.

Δ = TWblended (AO TA)⁻¹ = TWblended TW⁻¹

Now apply Δ to each animation origin to incrementally update it.

Finally, you need to find the animation origin for the blended animation. This can be done using the same blend function that was used for the world space target bone matrix.

You are tracking and updating the animation origins of the walk and run animations independently, but the origin you use to actually position the animation in world space is the blended origin.

Using this method, the blend parameter will determine which animation has more control over the actual world space position of the character. If we are in a full walk, then the run origin will continuously move backward, in order to conform to the position set by the walk animation. When we are in a full run, the walk animation origin will continuously move forward to conform to the position set by the run animation.

Step 4 : The next level

We should now have a walk/run blend, with the only constraint on the animator being that the footfalls happen at the same fractional intervals on both animations. However, placing constraints on animators never turns out well in practice, so let's discuss how we might remove the "fractional interval" constraint on footfalls.

Let's say we break the animation up into a set of events:

Presumably, we start the cycle with the left foot down, and the right foot stepping forward. On some frame, the right foot becomes stationary relative to the ground. Later the left foot becomes stationary relative to the ground, and finally the animation ends.

We can split the animation up into these four segments. Both the walk and the run animations should have these segments, and they should occur in the same order. We can either visually determine which frame each footfall happens on, and tag it, or we can have an automated process determine this for us. Either way, let's assume we know the exact number of frames in each of the four sub-intervals of each animation.

The only portion of our process that needs to be modified is the rate calculation. Now, instead of requiring that both animations end at the same time, we require that they reach the first event at the same time.

After reaching the first event, we recalculate the rates using the number of frames between the first and second events.

After reaching the second event, we recalculate the rates using the number of frames between the second and third events.

After reaching the third event, we recalculate the rates using the number of frames between the third and fourth events.
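The segmented rate calculation might look something like this (the segment frame counts are assumed to have been tagged ahead of time; the names are mine):

```cpp
#include <cassert>

// Blended rates for each footfall segment k, using the frame counts of that
// segment (between events k and k+1) in place of the whole cycle lengths.
// NwSeg and NrSeg hold the walk and run frame counts of the four segments.
void segmentRates(float b, const float NwSeg[4], const float NrSeg[4],
                  float walkRates[4], float runRates[4])
{
    for (int k = 0; k < 4; ++k)
    {
        walkRates[k] = 30.0f * ((1.0f - b) + b * NwSeg[k] / NrSeg[k]);
        runRates[k]  = 30.0f * ((1.0f - b) * NrSeg[k] / NwSeg[k] + b);
    }
}
```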

Thursday, March 18, 2010

Narrow Phase: Sweeping a Sphere Against a Polygon

Before sweeping a sphere against a polygon, you should determine if the sphere is already intersecting the polygon. I will discuss the case where the sphere is not initially intersecting the polygon.

We will use a 2D representation of the hypothetical setup, since it's easier to draw 2D pictures.

In this setup, there is a sphere with radius r located at point c, and a relative velocity vector v that represents the difference of velocity between the sphere and the polygon. This relative velocity can be considered as the velocity of the sphere from the point of view of the polygon. The velocity is given in units that represent how much the sphere will move over the course of a given frame. Thus, at the end of the frame, the center of the sphere should be positioned at the tip of the velocity vector. The polygon lies entirely within some plane, and we are projecting the setup in such a way that the plane occupies a single line. The polygon itself is the bold section of the line. The normal n of the plane is pointing out of the front side of the polygon.

Before determining collision with the polygon, we will first determine collision with the plane. The first step is to find the point on the sphere that will first come into contact with the plane. This can be done by scaling the normal of the plane by the radius of the sphere, and subtracting this from the center of the sphere.

p1 = c - r n

We will sweep the point p1 forward. The sweep is accomplished by finding a line that passes through p1, and is parallel to v. We then find the point where the line intersects the plane. The point on the plane is the point where the sphere will first make contact with the plane.

The point p2 can be calculated using the following expression

p2 = p1 + s v, with s = (d - n·p1) / (n·v)

Here d is the plane parameter, and can be found by dotting the plane normal with any point that resides on the plane, such as one of the vertices of the polygon.

Before calculating p2, check if s is less than 0, or greater than 1. If it is then the collision is not going to happen during this frame, so you can return a false on your collision routine.

If s is between 0 and 1, then go ahead and calculate p2 and continue with the operation.
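Putting the plane sweep together (struct and function names are mine):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Sweep p1 (the lowest point of the sphere, c - r*n) along v against the
// plane n·p = d. Returns false if the crossing is outside this frame; on
// success, p2 is the first contact point on the plane.
bool sweepPointToPlane(const Vec3& p1, const Vec3& v,
                       const Vec3& n, float d, Vec3* p2)
{
    float denom = dot(n, v);
    if (denom == 0.0f) return false;        // moving parallel to the plane
    float s = (d - dot(n, p1)) / denom;
    if (s < 0.0f || s > 1.0f) return false; // not during this frame
    p2->x = p1.x + s*v.x;
    p2->y = p1.y + s*v.y;
    p2->z = p1.z + s*v.z;
    return true;
}
```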

The next step is to find a point inside the polygon that is nearest to p2. The method for doing this is dependent on the polygon. If p2 already resides within the polygon, then the nearest point to p2 is just p2. Otherwise, the nearest point lies on the boundary of the polygon.

NOTE: Finding the nearest point on the polygon to p2 does not actually work in 3D, since it is possible to have components of velocity that are tangent to the polygon plane. To correct for this, you should try to find the point on the polygon that is closest to the ray that is cast from p1. This is done by finding which feature of the polygon (vertex, edge, or face) is closest to the line, and then finding the closest point on this feature to the line.

The point p3 is the point on the polygon that will first be touched by the sphere.

Now we will sweep the point p3 backward, and see if it touches the sphere. This sweep is performed by finding a line that passes through p3 and is parallel with v. This line will intersect the sphere in 0, 1, or 2 places.

The formula for this sweep involves a square root. If the stuff we are trying to take the square root of is negative, then there is no intersection. So, we first check to see if this stuff is negative. Letting m = p3 - c, the stuff in question is

(v·m)² - (v·v)(m·m - r²)

If this is negative, then there is no collision and we can return false. Otherwise, we will continue to calculate the fraction of the time step t at which the collision takes place

t = [ v·m - √( (v·m)² - (v·v)(m·m - r²) ) ] / (v·v)

This quantity should not be less than 0. It may however be greater than 1. If it is, then there will be a collision, but not during this time interval, so return false. If however, t is less than 1, you can proceed to determine the point p4 that represents the point that the sphere will first come into contact with the polygon.
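Here is the backward sweep written out (again with my own names; the quadratic comes from solving |p3 - t v - c|² = r² and keeping the smaller root):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Fraction of the time step at which the sphere (center c, radius r, moving
// along v) first touches the point p3. Returns a negative value when there
// is no contact during this frame.
float sweepSphereToPoint(const Vec3& c, float r, const Vec3& v, const Vec3& p3)
{
    Vec3 m = { p3.x - c.x, p3.y - c.y, p3.z - c.z };
    float vv = dot(v, v);
    float vm = dot(v, m);
    float disc = vm*vm - vv*(dot(m, m) - r*r);
    if (disc < 0.0f) return -1.0f;          // the swept line misses the sphere
    float t = (vm - std::sqrt(disc)) / vv;  // earliest touch
    if (t > 1.0f) return -1.0f;             // touch happens after this frame
    return t;
}
```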

At this point, we can move the sphere forward so that it just touches the polygon. The task now is to determine how to resolve this collision, by adjusting the velocity of the sphere. Once we have determined the new, resolved, velocity we continue to sweep the sphere for the remainder of the time step - checking for collisions with other polygons along the way.

Tuesday, March 9, 2010

Collision Overview

Collision detection is one of the very key components of a physics engine. However, collision detection on its own is a much broader topic. It is important to know what kinds of information your collision detection code should be providing, and the different means of accessing this information.

Generally, the primary thing we want the collision system to do is notify us if two objects are intersecting. As a secondary part of this computation, we often would like to know how to keep the objects from intersecting.

There are a couple of methods I have used, when writing collision systems, to deliver this information.

The first is a direct query method, which has the possible functional prototype

bool CollisionSystem::CollideObjects(const PhysObj& A, const PhysObj& B, ColResData* resolve = NULL);

Such a function would return true if the two objects were intersecting. We may optionally pass in a data structure to retrieve information about how to resolve the collision.

The second method is a bit more automatic. All collision information is passed to the user through callback functions. The collision system accumulates all motion of all objects, and automatically determines which objects need collision processing. If a collision event occurs, a callback function is called.

typedef void (*CollisionHandler)(PhysObj&, PhysObj&, ColResData*);
void collisionHandler(PhysObj& A, PhysObj& B, ColResData* resolve);

The physics objects are not passed in as const because the collision handler may change the state of these objects in order to resolve the collision.

We may want to have several different collision event handlers. For instance, we may want to handle an event which corresponds to objects touching, or an event which corresponds to the moment that they cease to touch. There are several different objects in the world, and we may want a different handler for different object types. Therefore, when we provide the collision system with collision handlers, we must supply the object types, and event types for which the collision handler applies

bool CollisionSystem::AddHandler(CollisionHandler handler, int AType, int BType, ColEventType event);

This function passes in a collision handler, and an int for the types of objects A and B. An int is used so that the user may map these types to any meaning desired. If the function returns false, it was not able to map the collision handler, probably because there is already a handler mapped to the given types and event. The ColEventType is an enum that may contain events like

enum ColEventType
{
    COL_NEAR_BEGIN,      // the objects have just gotten near to each other
    COL_NEAR,            // the objects are near to each other
    COL_INTERSECT_BEGIN, // the objects have just begun to intersect
    COL_INTERSECTING,    // the objects are intersecting
    COL_RESOLVE,         // we want to resolve the collision
    COL_SEPARATE,        // the objects have just separated
    COL_FAR_BEGIN        // the objects have just moved away from each other
};

We may want to add more collision events, but these cover many of the bases. We want to know if two objects have just gotten near to each other, if they are near to each other, if they have just begun to intersect, if they are intersecting, if we want to resolve the collision, if they have just separated, and if they have just moved away from each other.

The way that we actually process collisions is broken up into three distinct portions
  1. Broadphase - rough estimation if objects are even close to each other
  2. Midphase - determines which parts of a complex object might be colliding
  3. Narrowphase - performs collision on convex primitives, and calculates resolution information

In the following posts, I will discuss each of these topics.

Thursday, March 4, 2010

Quaternion Tricks

Trick #1 : Optimizing the Quaternion Rotation Formula

The quaternion rotation formula is given as

(0, v') = R(0, v)R*

where R* denotes the conjugate of R.

This formula can be expressed in terms of standard 3D vector products such as scaling, dot products, and cross products. This equation involves two quaternion products, each of which requires 16 multiplies and 12 additions. Thus there are 56 operations in all.

We can express the vector cross product in terms of the quaternion product. We know that if you switch the factors in a cross product, the product switches sign. Knowing this, you might accept without proof the following formula.

(0, qV × pV) = (1/2) (qp - pq)

Indeed if you flip the order of the arguments, the product will change sign.

We will use this relationship, in a rearranged form, on the first two factors in the quaternion rotation formula.

R(0, v) = (0, 2RV × v) + (0, v)R

Inserting the expression on the right hand side into the quaternion rotation formula yields

(0, v') = (0, 2RV × v)R* + (0, v)RR*

When examining the first term on the right hand side, it can be shown that the scalar portion is always zero. For the second term, we can use the fact that the rotation quaternion R obeys the identity RR* = 1. Since all 3 terms have zero scalar component, we can now rewrite this expression in terms of vector operations.

v' = v + 2[ RS( RV × v ) + RV × ( RV × v ) ]

This expression employs 18 multiplications and 12 adds, totalling 30 unique operations in all. This optimisation basically cuts the expense of the quaternion rotation formula in half.

Here is a function that employs this optimisation

Vector rotateVector(const Vector& v, const Quat& q)
{
    Vector result;
    float x1 = q.y*v.z - q.z*v.y;
    float y1 = q.z*v.x - q.x*v.z;
    float z1 = q.x*v.y - q.y*v.x;

    float x2 = q.w*x1 + q.y*z1 - q.z*y1;
    float y2 = q.w*y1 + q.z*x1 - q.x*z1;
    float z2 = q.w*z1 + q.x*y1 - q.y*x1;

    result.x = v.x + 2.0f*x2;
    result.y = v.y + 2.0f*y2;
    result.z = v.z + 2.0f*z2;

    return result;
}

Let's express the scalar and vector portions of the quaternion in terms of the axis and angle of the rotation

RS = cos( θ/2 )

RV = n sin( θ/2 )

Plugging these values into the optimized quaternion rotation formula, and using a little bit of trigonometric magic (double angle identity), the formula reduces to

v' = v + [ sin(θ) (n × v) + ( 1 - cos(θ) ) n × (n × v) ]

Using the vector triple product identity, and the fact that n is normalized, we can reduce it a bit further.

v' = cos(θ)v + sin(θ)(n × v) + (1 - cos(θ)) (n·v) n

This operation has 19 mults and 12 adds, totalling 31 operations. We didn't do anything to optimize the formula, but we were able to transform it into a form involving axis and angle.
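For reference, the axis-angle form as a function (I define a bare Vector struct here; in practice you would use your engine's type):

```cpp
#include <cassert>
#include <cmath>

struct Vector { float x, y, z; };

// Rotation of v about the normalized axis n by 'angle', using
// v' = cos(θ)v + sin(θ)(n × v) + (1 - cos(θ))(n·v)n
Vector rotateAboutAxis(const Vector& v, const Vector& n, float angle)
{
    float c = std::cos(angle);
    float s = std::sin(angle);
    Vector nxv = { n.y*v.z - n.z*v.y, n.z*v.x - n.x*v.z, n.x*v.y - n.y*v.x };
    float ndv = n.x*v.x + n.y*v.y + n.z*v.z;
    Vector result;
    result.x = c*v.x + s*nxv.x + (1.0f - c)*ndv*n.x;
    result.y = c*v.y + s*nxv.y + (1.0f - c)*ndv*n.y;
    result.z = c*v.z + s*nxv.z + (1.0f - c)*ndv*n.z;
    return result;
}
```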

Trick #2: Determining a Rotation From an Initial and Final Vector

One of the questions which I am asked most frequently is how to find a rotation which takes some initial vector into some final vector. There are several variants of this problem, including "I am pointing my gun at Bob, and I would like to point my gun at Jim." I have seen this problem solved in many different ways, many times involving inverse trig functions. We are going to solve this problem with quaternions - and it's really easy!

We are going to call the initial vector A, and the final vector B. Both A and B are normalized 3D vectors.

We are going to multiply these two 3D vectors together, and if you remember, this is accomplished with a quaternion multiplication. To reiterate, our input vectors are just standard 3D vectors, but the output of the product is a quaternion, which represents the product of the inputs. Since the ordering matters, we are going to place A on the right, and B on the left.

BA = -A·B - A × B

We can relate the dot and cross products to functions involving the angle between A and B. If A and B are normalized, then their product has the following form

BA = - cos(θ) - n sin(θ)

The angle θ is the angle between the initial and final vector, and the vector n is a normalized vector that is perpendicular to the input vectors. This is the exact form of a rotation quaternion. Don't worry about the minus sign, since we can flip the sign without changing the rotation we are representing. In fact, this almost represents the quaternion that we are looking for. We need a quaternion with a half angle instead of a full angle.

Instead of multiplying the initial vector A by the final vector B, let's instead find a vector that represents a halfway rotation between A and B. This is very easy to do - just find the average of A and B!

H = (1/2) (A + B)

Now, this H is not normalized, and it needs to be. If we are going to normalize it anyway, we don't need to worry about the factor of 1/2. The normalized version of H is

H = (A + B) / |A + B|

Knowing that the angle between A and H is half of the angle between A and B, we can use our previous result to show that the rotation which takes A into B is given by

R = HA

Here is a function that will give you the rotation quaternion that will rotate some initial vector into some final vector

Quat getRotationQuat(const Vector& from, const Vector& to)
{
    Quat result;

    Vector H = VecAdd(from, to);
    H = VecNormalize(H);

    result.w = VecDot(from, H);
    result.x = from.y*H.z - from.z*H.y;
    result.y = from.z*H.x - from.x*H.z;
    result.z = from.x*H.y - from.y*H.x;
    return result;
}

It doesn't get much easier than that folks.

Trick #3: Tangent Space Compression / Quaternion Maps

Tangent space is a set of 3 vectors - normal, tangent, and bi-normal - which are used in lighting calculations such as bump mapping, or anisotropic shading techniques, like fur or hair shaders.

The tangent space is a representation of the surface of the mesh. The tangent space vectors are stored per-vertex, and interpolated across the polygon for per-pixel operations. If you use floats to represent the components of the tangent space vectors, then you are dedicating 3 x 3 x 4 bytes = 36 bytes per vertex to the storage of tangent space.

The tangent space vectors form an ortho-normal frame. Consider the rotation which will transform the ortho-normal frame of tangent space into the standard frame in world space. If this rotation is expressed as a matrix, then the 3 columns of this matrix represent the 3 tangent space vectors.

If you can express a rotation as a matrix, then you can also express it as a quaternion. This immediately reduces the tangent space representation to 4 components = 16 bytes. When decompressing the vertex, use the quat->matrix conversion, and pull out the 3 columns of the resulting matrix as your tangent space vectors. The quat->matrix conversion costs 13 multiplies and 15 subtractions. It may be worthwhile to determine if it is possible to optimize this conversion for shader operations.

If you want to save more space, you can go even further, if you have the time - computationally speaking. Since rotation quaternions are normalized, there are only 3 degrees of freedom instead of 4. Thus you only really need to store 3 components of the quaternion, and the 4th can be calculated using the formula
w = sqrt(1 - x² - y² - z²)

This brings the tangent space size down to 12 bytes.
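The decompression might look like this (my own sketch; it assumes the quaternion was stored with w ≥ 0, flipping its sign before compression if necessary, and it pulls the three columns straight out of the standard quat->matrix conversion):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rebuild w from the stored (x, y, z), then emit the three columns of the
// rotation matrix as the tangent, bi-normal, and normal.
void decompressTangentSpace(float x, float y, float z,
                            Vec3* tangent, Vec3* binormal, Vec3* normal)
{
    float w = std::sqrt(std::max(0.0f, 1.0f - x*x - y*y - z*z));
    *tangent  = { 1 - 2*(y*y + z*z), 2*(x*y + w*z),     2*(x*z - w*y)     };
    *binormal = { 2*(x*y - w*z),     1 - 2*(x*x + z*z), 2*(y*z + w*x)     };
    *normal   = { 2*(x*z + w*y),     2*(y*z - w*x),     1 - 2*(x*x + y*y) };
}
```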

Finally, if you are super hard pressed for space, you can take advantage of the fact that the components of a quaternion are within the range [-1,1], which may not require 32 bits to have a sufficient representation. You may possibly use 16 bit floats to represent these components. Doing so will reduce the size further, to 6 bytes.

With a 6-byte tangent space representation, it seems conceivable to employ quaternion texture maps, which store the tangent space directly in the texture, rather than using bump map data to perturb the interpolated tangent vectors. You could then apply a texture compression, like 3DC which gets 2:1 compression. Now tangent space only consumes 3-bytes of texture, and 0-bytes on the vertex. If you were previously using bump maps, and you replace them with quaternion maps, then you aren't really increasing the amount of texture memory used, and you have successfully removed all 36 bytes of tangent space data from the vertex.

Of course, doing quaternion map lookups in the pixel shader would also mean doing the quat->matrix conversion in the pixel shader - or, alternatively recasting the pixel shader lighting equation in terms of quaternion algebra, rather than matrix algebra - whichever ends up being cheaper. To gain perspective on the cost of this conversion, you can perform 5 quat->matrix conversions in roughly the same amount of time as a single matrix multiply. A single quat->matrix conversion is about as costly as transforming 3 vectors with a matrix.

The quaternion map would take the place of bump maps altogether, and would provide the added feature of having fine tuned control over tangent space on a per-pixel level. For anisotropic shaders, like hair, fur, and velvet shaders, you could introduce swirls and wiggles in the middle of a polygon.

There are two rather large problems facing the idea of a quaternion map: 1) a lot of people get freaked out when you use the word quaternion. 2) I can't off the top of my head conceive of a way for an artist or content creator to generate such a map.

At any rate, it seems like a very intriguing idea.

Saturday, February 27, 2010

True Physics

One of the vital organs of a physics engine is the code that integrates Newton's laws of motion. There are several techniques which employ approximations of calculus in order to numerically solve this problem. I am proposing a scheme which can solve this problem without approximation.

In this article you will learn
  • How the stepping equations are derived from Newton's laws.
  • A good method for doing this approximation, which is very common in many third party physics engines.
  • The Kinematic Integrator method, which allows you to use unapproximated calculus in the force calculations. This enables extremely large time steps.
In this article I will just be talking about the linear motion, not the angular motion. If you want to see a pretty decent way to integrate the angular portion check out the last section of Quaternions: How.

Newton's Laws

Given in short:
  1. Without external influences, objects preserve their state of motion. If the object is at rest, it remains so. If it is moving it will continue to move at that same speed.
  2. If a force acts on an object, the object will change its state of motion. The amount of change induced by the force is related to the mass of the object. i.e.
    F = ma
  3. If one body applies a force on another body, then the other body applies an equal and opposite force on the first body
The third law is taken care of during resolution of collisions, and applications of forces, etc. The first two laws are handled in the integration stage.

The net force acting on a body is directly proportional to acceleration. Acceleration is defined as the time derivative of velocity. In order to get the velocity of an object, given the acceleration, we need to integrate.

Velocity is an important portion of the physical state of an object. Several forces require knowledge of the velocity. However, the most important quantity is the position, since this determines where we place the object in the world. The velocity is defined as the time derivative of position, so in order to get the position we must integrate the velocity.

Since the information we start with is acceleration, and the information we desire is velocity and position, then we must integrate the acceleration two times. For this reason the equations of motion are usually second order equations. In order to fully solve a second order equation, we need to know two quantities. These two quantities are usually the initial position, and the initial velocity.

How do we integrate?

When it comes to physics engines I always start my integration by taking a derivative. This may seem counterintuitive, since an integral seems to be the opposite of a derivative. But with physics, the thing we are integrating can be expressed in terms of derivatives, and nothing is as easy to integrate as a derivative.

The equations of motion can be expressed in terms of two differential equations

a = ∂v/∂t

v = ∂x/∂t

In order to see how we solve these equations numerically, first consider the standard mechanism for taking a derivative.

∂f/∂t = (f1 - f0) / (t1 - t0) ; t1 → t0

Which defines the value of the derivative at t0.

On a computer, we have finite precision arithmetic, and so we cannot allow t1 to get arbitrarily close to t0. Therefore, we can only approximate derivatives as finite differences.

∂f/∂t ≈ (f1 - f0) / (t1 - t0) ; t1 = t0 + Δt

Here Δt represents our time step. Obviously, the smaller the time step is, the closer we get to representing a true derivative. Making this simple approximation to the representation of the calculus has introduced an ambiguity. Is this the value of the derivative at t0 or t1, or at some point in between? If we choose t0, then the approximation is called a forward, or explicit derivative. If we choose t1, then the approximation is called a reverse, or implicit derivative. The quality of the representation of the equations of motion ends up being tied to our choice.

If we choose to use only explicit derivatives to represent the equations of motion, then we have

a0 = (v1 - v0) / Δt

v0 = (x1 - x0) / Δt

In these expressions, we know we are using a forward derivative because the a and the v on the left hand sides are evaluated at t0. We can rewrite these equations, so that we can find the values of x and v at time t1, given that we know what the values are at time t0.

v1 = v0 + a0Δt

x1 = x0 + v0Δt

These equations are called the explicit Euler stepping equations. We see that if the value of a is zero, then the equations will maintain the motion of an object, satisfying the first law. The solitary occurrence of a is the contribution from the second law, given that you calculate a as the net Force divided by mass.
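As code, an explicit Euler step is just (the names are mine):

```cpp
#include <cassert>

// Explicit Euler: both updates use the values from the start of the step.
void explicitEulerStep(float& x, float& v, float a, float dt)
{
    float v0 = v;       // velocity at t0
    v = v0 + a * dt;    // v1 = v0 + a0*dt
    x = x + v0 * dt;    // x1 = x0 + v0*dt -- note the OLD velocity
}
```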

As it turns out, the explicit Euler stepping equations are very crappy. The acceleration ends up introducing energy into the system, causing it to be very volatile. This volatility can be managed by making the time step smaller, but this increases the computational expense.

The current gold standard for integration in high profile physics engines is the semi-implicit, or symplectic, Euler stepping equations. These equations represent the acceleration with an explicit derivative, but the velocity is represented with an implicit derivative.

a0 = (v1 - v0) / Δt

v1 = (x1 - x0) / Δt

Since on the left hand side, the acceleration is evaluated at t0, it is a forward or explicit derivative. Since the velocity on the left hand side is evaluated at t1 it is a reverse, or implicit derivative. The combination of these two gives us the name semi-implicit. These equations can be rearranged so that x and v at t1 are functions of their values at t0

v1 = v0 + a0Δt

x1 = x0 + v1Δt

These equations are solvable, so long as we evaluate the velocity equation first, so that we have the result to plug into the position equation.
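In code, the only difference from explicit Euler is the order of the two updates (the names are mine):

```cpp
#include <cassert>

// Semi-implicit (symplectic) Euler: update velocity first, then move the
// position with the NEW velocity.
void semiImplicitEulerStep(float& x, float& v, float a, float dt)
{
    v += a * dt;   // v1 = v0 + a0*dt
    x += v * dt;   // x1 = x0 + v1*dt
}
```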

So how accurate are these stepping equations?

The semi-implicit Euler technique is a first order method, with a per-step error tied to the second power of the time step. If you cut the time step in half, you quadruple the accuracy of each individual step.

Also, the semi-implicit Euler method obeys a different property, which makes it useful. It conserves energy on average. What this means is that, although the error you accumulate over the course of the time step might increase or decrease the effective energy of the system, over time the energy will average out to a constant. This property is where the symplectic term comes from. I'm not going to explain what symplectic means, but if you are interested, search for Hamiltonian dynamics, the equations of which are used to prove that the semi-implicit Euler method conserves energy.

We see in this instance that there is a defining difference between accuracy and stability. The semi-implicit Euler method may not be the most accurate method ever, but it is stable since it conserves energy. On the user end, we don't tend to see the tiny inaccuracies, but we do tend to notice if the physics wigs out and everything explodes. Therefore, we would like our inaccuracies to NOT lead to instabilities.

Can you have perfect integration?

With numerical integration schemes, such as Euler based techniques, the only way we can get perfect integration is if we let the time step become zero, which we know we can't do. However, it is possible to achieve perfect integration in a computational simulation, and I will tell you how, but we will need to make a slight departure from the standard development of numerical integration.

Let's start with a very simple example of perfect integration.

Consider that you are writing the game Lunar Lander where you must use thrusters on a spacecraft to guide it safely to the landing pad. In this game, the only forces that come into play are the gravitational force, and the thrusters on the spacecraft.

Over the course of a single time step, we know which thrusters are on. We consider that the force due to a thruster is constant over the course of the time step. We also use a constant force for the gravitational force. Thus we have a full knowledge of the value of the force at all intermediate points during the simulation time step.

Add all of the forces together to get the total force. Dividing the total force by the mass will give you the acceleration.

a = (Fgrav + Fthruster + ...) / m

Now, we need not resort to any approximation of calculus in order to solve this problem. Since we know the form of the forces (constant) at all points during the simulation time step, we can solve the problem directly - analytically. The true solution and stepping equations without approximation are given by

v1 = v0 + aΔt

x1 = x0 + v0Δt + (1/2)aΔt²

Anyone who has taken a physics class, even a beginning one, should recognize these equations; if you have taken a beginning physics class, you have seen them before, even if you don't recognize them. They are called the kinematic equations of uniform acceleration, and they are the primary feature of almost all beginning physics problems, especially problems involving a baseball, basketball, or any other type of sports related sphere.

Here the a is constant, so we don't need to use a subscript to specify the point in time where we are evaluating it. These equations look almost like the stepping equations we were using before, except the position equation has an extra term involving acceleration. These equations are perfect, which means that there is no upper bound of accuracy that is related to the size of the time step. In a real application the upper bound on the size of the time step would not be related to the accuracy of the integration, but rather to the ability to perform collision with the landscape. Collision issues aside however, this method will give you the right answer, no matter how big the time step is.
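A sketch of the constant-acceleration step, showing the step-size independence claimed above: one full step lands exactly where two half steps do (the `State` struct and function name are mine):

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// Exact step for a constant acceleration a:
// x1 = x0 + v0*dt + (1/2)*a*dt^2,  v1 = v0 + a*dt
State KinematicStep(State s, float a, float dt)
{
    s.x += s.v * dt + 0.5f * a * dt * dt;
    s.v += a * dt;
    return s;
}
```

Because the equations are exact for constant forces, any subdivision of the time step produces the same end state.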

If these stepping equations are perfect, why don't we use them? The answer is that these equations are only perfect if the forces are really constant. If we use these equations to simulate forces that are not constant, e.g. the spring force, we will not get very good results. If we decide to use these equations to step the physics of our system forward in time, we have essentially moved the approximation out of the calculus and into the representation of the forces. And the approximation applied to the forces ends up having worse results than applying the approximation to the calculus.

In short: the kinematic equations rock for forces that are constant, but really suck for forces that aren't.

If we can't use the kinematic equations all of the time, should we ditch them altogether? Is it possible for constant forces to use the kinematic equations, and non-constant forces to use the semi-implicit Euler equations? Then our integration would be perfect with respect to the constant forces, at least. Under such a scheme, how would we go about mixing constant and non-constant forces?

Consider once again the semi-implicit Euler equation. We can plug the velocity equation into the position equation, since they both involve v1. The result is similar in appearance to the kinematic equations

v1 = v0 + a0Δt

x1 = x0 + v0Δt + a0Δt²

We now see a very striking resemblance to the kinematic equations. We actually have three terms that match exactly. The first term in the velocity equation, and the first two terms in the position equation. Only the terms that involve acceleration are different. It can be shown in a general case, that the three matching terms will always match, regardless of the form of acceleration. The form of the acceleration only affects the terms which involve acceleration.

We now have a clue as to how we might combine a constant force with a non-constant force.

v1 = v0 + ( aCΔt + aNCΔt )

x1 = x0 + v0Δt + ( (1/2) aCΔt² + aNCΔt² )

Thus, the constant force gets the benefit of using the kinematic equations, and the non-constant force can use the semi-implicit Euler technique. You may see some simplification that can take place, since the acceleration terms contain common factors. Now, it is possible to analytically solve for the acceleration terms given other forms of acceleration. These terms may not have common factors, so I chose to keep the acceleration terms separated.

The Kinematic Integrator:

The kinematic integration method is given as follows

v1 = v0 + dv

x1 = x0 + v0Δt + dx

These will give us the new position and velocity, given that we supply the value of the previous position and velocity as well as the two integral parameters dx and dv. So what are dx and dv? They are generalizations of the terms which involve integrals of acceleration.

In Euler based integration techniques, you usually have a unique method to calculate each different kind of force. There might be a method for a spring force, and a method for a pulse, etc. The resulting force is divided by the mass, and accumulated with whatever existing acceleration is already acting on the object.

Using the kinematic integration technique, the entire point of the force calculation method is not to calculate the force, but to calculate dv and dx. These are, likewise, accumulated with whatever existing dv's and dx's that were previously acting on an object.

The point of this generalization is to move the calculus out of the stepping equations, and into the force calculation, so that each different kind of force can give the true result of the acceleration integrals. This leaves the stepping equations in a form that is not dependent on the form of the input forces.

A method which calculates a constant force will return the exact dv and dx of a constant force. A method which calculates a spring force will return the exact dv and dx of a spring force. This is really great, but what do we do if we don't know how to calculate the true integral of a given force? Simple, we just go back to using the semi-implicit Euler technique. We have already expanded this technique out into kinematic form, so we know what dv and dx are.

dv = a0Δt

dx = a0Δt² = dvΔt
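Putting the pieces together, here is a minimal sketch of the kinematic integrator interface (the names `State`, `Deltas`, `AccumulateEuler`, and `KinematicStep` are mine, not from any real engine): each force method accumulates its own dv and dx, and the stepping equations consume the totals.

```cpp
#include <cassert>
#include <cmath>

struct State { float x; float v; };

// Accumulated integral contributions from all forces this step.
struct Deltas { float dv = 0.0f; float dx = 0.0f; };

// Fallback for forces we cannot integrate analytically:
// the semi-implicit Euler contribution, in kinematic form.
void AccumulateEuler(Deltas& d, float a0, float dt)
{
    float dv = a0 * dt;
    d.dv += dv;
    d.dx += dv * dt; // dx = a0*dt^2 = dv*dt
}

// The kinematic stepping equations:
// v1 = v0 + dv,  x1 = x0 + v0*dt + dx
State KinematicStep(State s, const Deltas& d, float dt)
{
    s.x += s.v * dt + d.dx;
    s.v += d.dv;
    return s;
}
```

With only the Euler fallback accumulated, a step of this integrator reproduces the semi-implicit Euler result exactly, which is what the expansion above promised.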

The spring force

So how do we calculate the dv and dx of a spring? As an example, consider a spring which connects a mass to the origin. The spring has a spring constant k, which determines how tight it is. The force acting on the mass has the form

F = -kx = -mω²x

Here ω is the angular frequency of the spring, and the reason for defining it is that the form of motion of the spring is given by a sinusoidal function of the following form

x = A sin(ωt) + B cos(ωt)

We can take the derivative of this expression to find the functional form of the velocity. Then we can use x0 and v0 as our initial conditions in order to fix the values of A and B.

v1 = v0 cos(ωΔt) - x0ω sin(ωΔt)

x1 = v0/ω sin(ωΔt) + x0 cos(ωΔt)

These are the true, un-approximated results of the integration. We can use these to find the values of dv and dx, merely by equating these results with the left hand side of the kinematic integrator equations. We find that

dv = v0 ( cos(ωΔt) - 1 ) - x0 ( ω sin(ωΔt) )

dx = v0 ( sin(ωΔt)/ω - Δt ) + x0 ( cos(ωΔt) - 1 )

If the parameters of your spring do not change, and you are using a fixed time step, then the coefficients of x0 and v0 do not change, and should only need to be calculated once.
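A sketch of the spring contribution, assuming a spring anchored at the origin as described above (the function name `AccumulateSpring` is mine). We can check it directly against the analytic solution x(t) = x0 cos(ωt) + (v0/ω) sin(ωt):

```cpp
#include <cassert>
#include <cmath>

// Exact integral contributions for the spring force F = -m*w^2*x,
// where w is the angular frequency. The coefficients of x0 and v0
// depend only on w and dt, so with a fixed time step they can be
// computed once and reused.
void AccumulateSpring(float& dv, float& dx,
                      float x0, float v0, float w, float dt)
{
    float c = std::cos(w * dt);
    float s = std::sin(w * dt);
    dv += v0 * (c - 1.0f) - x0 * (w * s);
    dx += v0 * (s / w - dt) + x0 * (c - 1.0f);
}
```

Feeding these deltas into the kinematic stepping equations reproduces the analytic sinusoid to within floating-point precision, regardless of dt.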

Again, the accuracy of the stepping equations does NOT depend on the size of your time step, because we are using the exact result. I have compared the numerical result of the kinematic integrator with the exact result in the case of a spring, and there is literally zero error - no matter what the time step is, or how long the system runs.

Multiple forces:

Now, there is a problem that arises when you try to apply two forces that depend on position or velocity. They both are integrated without a knowledge of the other. Therefore they are oblivious to the changes in intermediate position or velocity caused by the other force. These changes would have altered the final result, had they been taken into account.

A good example of this is the example of having two springs.

If one spring acts alone, the result of the kinematic integrator is exact, and so the result is always correct, regardless of the size of the time step. The story is different when two springs act simultaneously.

The system begins to accumulate error, due to the misrepresentation of the forces. However, the error is very small, almost insignificant. In a test of such a system, I calculated that the error of the kinematic integrator was 12,000 times smaller than the error from the semi-implicit Euler method. Unlike the semi-implicit Euler method, however, the kinematic integrator is not guaranteed to preserve energy. Therefore these very small inaccuracies build up, and over a very long time, the system diverges. Now, the introduction of even the smallest amount of friction to the system would conceal these tiny inaccuracies, however, there ends up being a better way.

The error due to multiple forces is actually an error in the calculus. We can play the trick where we move the error out of the calculus and into the representation of the forces, by making an approximation of the force. The approximation I have in mind ends up introducing less error than the error due to multiple forces. Also, this approximation tends to remove energy from the system, which makes the system stable. With this method, you would still calculate dv exactly, like before, but when you calculate dx, you do it like this

dx = (1/2) dvΔt

Using this method to calculate dx actually beats out calculating it exactly in cases where multiple forces that depend on position are in play. This method is called the average acceleration method. On a test performed on a system with two springs, the average acceleration method removed about 5% of the energy from the system after about 3 hours of simulated time. In the test I ran of this sample system, the error introduced by the average acceleration method was 58 million times smaller than the error introduced by the semi-implicit Euler method.

It is recommended that forces which depend on position or velocity use the average acceleration method of calculating dx.
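The recommendation above amounts to a one-line change in the force method (the function name here is mine, just to make the rule explicit):

```cpp
#include <cassert>
#include <cmath>

// Average-acceleration approximation: dv is still the exact
// integral of the force, but dx is derived from dv instead of
// being integrated on its own. This tends to bleed a little
// energy, which stabilizes systems with several position- or
// velocity-dependent forces.
float AverageAccelerationDx(float dv, float dt)
{
    return 0.5f * dv * dt;
}
```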


Using the kinematic integrator allows you to remove the process of integration from the stepping equations, and push it down into the force calculation. This enables you to tailor the integration method to each different type of force. Having this freedom allows you to provide exact solutions, if they are attainable. If the solutions are not attainable, you can still use the semi-implicit Euler method as a standby.

This method introduces a trivial amount of extra computation, i.e. two or three operations per force applied. In the spectrum of numerical methods, this method has roughly the same computational expense as a first order method, and achieves results that far exceed the more expensive fourth order methods.

Obviously this method only has utility if you know the form of the forces acting during a given simulation step, and also if the forces are integrable. Happily, most if not all of the forces that are used in run-time simulation are integrable.

With some systems it is possible to completely eliminate error. In others, the error is a very small fraction of the error of other methods that have comparable computational expense. Thus, with the kinematic integrator you can use much larger time steps, and expect higher accuracy from integration.

Thursday, February 25, 2010

Quaternions: Why

Before digging into the deep guts of the quaternion, I want to talk a little bit about complex numbers. Complex numbers are really only a step away from quaternions, and people are much less nervous around complex numbers. So it's a pretty good place to start.

A Bit About Complex Numbers:

A complex number has a real and an imaginary component.

z = x + iy

The presence of the i makes these two components linearly independent. Thus we can begin to think of a complex number as a 2D vector. In fact it is convenient to represent a complex number as a (real, imaginary) pair.

z = (x, y)

We know that we can add and subtract complex numbers

z1 + z2 = (x1 + x2, y1 + y2)

z1 - z2 = (x1 - x2, y1 - y2)

and we achieve the same result as adding and subtracting 2D vectors.

Complex numbers can do something that 2D vectors can't. You can multiply them.

z1z2 = (x1x2 - y1y2, x1y2 + y1x2)

The existence of the multiplication rule promotes the complex numbers from a mere vector space to an algebra. An algebra is a linear space that has a multiplication rule.
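As a concrete sketch, the multiplication rule above can be written out in a few lines (the `Complex` struct and `Mul` helper here are hypothetical, purely for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Complex { float re, im; };

// The complex multiplication rule:
// z1*z2 = (x1x2 - y1y2, x1y2 + y1x2)
Complex Mul(Complex a, Complex b)
{
    return { a.re * b.re - a.im * b.im,
             a.re * b.im + a.im * b.re };
}
```

A quick sanity check: multiplying (0, 1) by itself gives (-1, 0), which is just i² = -1.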

If we want the complex numbers to represent a 2D vector space we are missing something, and that's a dot product. In a 2D vector space, we have a dot product that tells us the length of the vectors, (or the length squared of the vectors to be more precise.)

But wait!! We can determine the magnitude squared of a complex number as well:

|z|² = zz* = (x² + y², 0) = x² + y²

Here the * represents a complex conjugate, which merely flips the sign of the imaginary component. The magnitude squared of a complex number is identical to the result of taking the dot product of a 2D vector with itself. Using this as your starting point, you can show that the vector dot product, defined in terms of a complex algebra is

A•B = (1/2) (AB* + BA*)

When endowed with a dot product, the complex numbers truly do contain a full and complete representation of a 2D vector space, with the added bonus of having a well defined way of multiplying the vectors together.

In other words, 2D vectors are not complex numbers, but complex numbers ARE 2D vectors which have the added power of multiplication.
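The dot-product identity above can be checked directly with the standard library's std::complex (the helper name `ComplexDot` is mine):

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// 2D dot product expressed entirely in complex arithmetic:
// A.B = (1/2)(A*conj(B) + B*conj(A)). The imaginary parts
// cancel, leaving the familiar x1x2 + y1y2.
float ComplexDot(std::complex<float> a, std::complex<float> b)
{
    return (0.5f * (a * std::conj(b) + b * std::conj(a))).real();
}
```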

So What Was Hamilton Thinking?

Often, when one is looking up information about quaternions, the story comes up about how W.R. Hamilton is walking along one day, and while crossing a bridge he suddenly comes up with the formula for quaternions. He then promptly vandalizes the bridge and goes on his merry way.

How does someone just come up with quaternions? I will tell you.

Hamilton was aware of the fact that the algebra of complex numbers supplies a natural mechanism for multiplying 2D vectors, and he was wondering: How would a person multiply 3D vectors? We add and subtract 3D vectors, and we have a dot product for 3D vectors, but how do you multiply 3D vectors?

Now, some people might be thinking, what about the cross product? Why can't the cross product be the multiplication rule for 3D vectors? The answer is: the cross product definitely holds a clue, but it is not the entire answer. Besides, it wasn't until after the invention of the quaternion that we even had a cross product, so Hamilton didn't know about it.

So, as Hamilton is crossing the bridge it dawns on him how we might multiply 3D vectors together, and he writes the multiplication rules for the 3D basis vectors on the bridge. These multiplication rules are what we are talking about when we say the "definition of a quaternion"

So How Did Hamilton Come Up With the Formula?

Remember how we defined the 2D dot product in terms of the complex multiplication rule? Hamilton similarly decided that if a 3D multiplication rule exists, the result of multiplying a vector with its conjugate should be a real number that is equal to the length squared of the vector. So, however the multiplication works, it should result in the following formula

VV† = x² + y² + z²

The meaning of the † here is similar to that of the complex conjugate, but here it flips the signs of all of the basis vectors.

V† = -xI - yJ - zK

There are nine terms in the product, when the vector is expressed in terms of its 3 components

VV† = -x²I² - y²J² - z²K² - xy(IJ + JI) - xz(IK + KI) - yz(JK + KJ)

By not combining the IJ and the JI terms, I am stating that they are possibly different.

While crossing the bridge Hamilton realized that he could satisfy his initial postulate about the multiplication of 3D vectors, if the basis vectors satisfied the following multiplication rules.

I² = J² = K² = -1

IJK = -1

In other words, if he applied these rules, the first 3 terms would fall out correctly, and the last 6 terms would vanish.

Hamilton was pretty stoked about this, because this meant that you could add, and subtract 3D vectors, but now you could also multiply them together!

So Where Does the Fourth Component Come From?

Using Hamilton's multiplication rules for the basis vectors, we can define a general multiplication rule for the product of arbitrary 3D vectors.

AB = -A•B + A×B

The first term is a scalar, and the second term is a 3 component vector, four components in all. It was probably this surprising discovery that the multiplication rules summoned the existence of a 4th component that prompted Hamilton to call them quaternions, which literally means "a group of 4."

The existence of the 4th component in a general quaternion does not change the original meaning of the vector portion. The vector portion of a quaternion is truly a 3D vector, in every possible identification. It adds, subtracts, and has a dot product like a standard 3D vector. We can use it in every possible way that we can use any 3D vector. The only magic here is that we now have a multiplication law. This is a good thing, since we can use this multiplication law to make meaningful geometric statements about 3D vectors.
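A minimal standalone sketch of this multiplication rule (the names `Vec3`, `PureProduct`, and `Multiply` are mine): the scalar part is the negated dot product, and the vector part is the cross product.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Result of multiplying two pure-vector quaternions:
// AB = (-A.B, AxB)
struct PureProduct { float scalar; Vec3 vec; };

PureProduct Multiply(Vec3 a, Vec3 b)
{
    PureProduct q;
    q.scalar = -(a.x * b.x + a.y * b.y + a.z * b.z); // -dot
    q.vec.x  = a.y * b.z - a.z * b.y;                // cross
    q.vec.y  = a.z * b.x - a.x * b.z;
    q.vec.z  = a.x * b.y - a.y * b.x;
    return q;
}
```

Plugging in the basis vectors recovers Hamilton's rules: I times J yields K with zero scalar part, and I times I yields -1 with zero vector part.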

So What Do Quaternions Have To Do With Rotations?

At the very heart of the definition of the quaternion multiplication lies the postulate that it must somehow represent the length of the vector. The fundamental definition of a rotation is that it is a transformation which does not change the length of a vector. Thus the definition of quaternion multiplication is very intimately connected to the concept of rotation.

Let's build the rotation from the ground up. To begin with, we know that quaternion multiplication from the right is not the same as quaternion multiplication from the left. Thus, the general form of a transformation would look something like this

v' = AvB

Where we have two transformation quaternions, one acting on the right, and one acting on the left.

Since v is a 3D vector, we don't care what the scalar part is. However, the transformation should also not care. This means that the transformation should not change the value of the scalar part. If the scalar part starts at zero, the transformation should leave it zero. Placing this restriction on the general form of the transformation leads to the following condition.

B = A†

And so our transformation law now looks like this

v' = AvA†

Finally, we require that the length of v is not changed by the transformation. This can be stated using Hamilton's initial postulate of 3D vector multiplication

v'†v' = v†v

(AvA†)†(AvA†) = Av†(A†A)vA†

We see that the only way our condition can be satisfied is if A†A = 1. In other words, A must be normalized. An arbitrary normalized quaternion takes the form

N = cos(α) + n sin(α)

Here n is a normalized 3D vector.

If we use this N to apply the transformation, we will see that we have successfully rotated v around the axis n by an angle of 2α. The reason for the factor of 2, is because there are 2 factors of N acting on v. To take this into account, we generally define rotation quaternions in terms of a half-angle.

r = cos(θ/2) + n sin(θ/2)


You now know the why behind quaternions. A 3D vector is not a quaternion, but a quaternion IS a 3D vector, with a multiplication law that requires an additional scalar component. Go now, and unleash your newfound power upon the helpless masses.

Wednesday, February 24, 2010

Quaternions: How

Game developers have a love-hate relationship with quaternions. Most understand that they are somehow mathematically superior to Euler-Angles for representing rotations. Most also know that they are more compact, and easier to interpolate than rotation matrices. Other than that, quaternions remain a mystery.

I have a very deep and powerful love relationship with quaternions. For me, knowing why they work is more important than knowing how they work. However, most programmers need to see the how of quaternions before they care to ask about why. I will attempt to bestow my quaternion mojo upon you at this time, if you are willing. For now, I will just talk about HOW.

We will discuss the basic things that any game programmer actually does with quaternions, such as:

  • Representing a rotation using an axis and an angle
  • Rotating vectors
  • Concatenating rotations
  • Blending between an initial and final rotation
  • Integrating quaternions using an angular velocity
How to Represent a Quaternion:

A quaternion can be represented by 4 numbers. A common struct definition might be

struct Quat { float x; float y; float z; float w; };
The x, y, and z components of a quaternion represent coordinates in standard 3D space. Really. Some people might try to make it more complicated than that, but they are mistaken. When using a quaternion to represent a standard vector, it is customary, but unnecessary, to set the w component to zero. It is convenient to define the quaternion with the w last, so that you can easily typecast it to a 3D vector. A quaternion IS a vector, in every sense of the term vector.

You may ask: if a quaternion IS a vector, why all the fuss? That is an excellent question the answer of which plumbs the very foundations of the universe itself. (You think I'm joking, but I'm not!!) I will give you the answer when I'm ready to discuss the why of the quaternion.

When talking about quaternions it is convenient to use (scalar, vector) notation. The scalar I'm referring to is the w component, and the vector is the x, y, and z components. For instance, I might use the following notation

q = (w, V)

In order to use a quaternion to represent a rotation, you need to know the angle θ of the rotation, and the axis n around which you are rotating. The axis n is a normalized 3D vector, and the angle θ is e.g. a float. The rotation quaternion is defined as:

r = ( cos(θ/2), n sin(θ/2) )

Looking at this, you may think that it almost makes sense; you wonder why θ/2, and not just θ? This is a wonderful question, and the answer is both beautiful and profound. (You think I'm joking, but I'm not!!) But I'm not answering why yet, I'm just discussing how. One thing that is very important about this formula: the angle is in radians, NOT degrees!

To generate a quaternion with a given axis and angle, you may envision creating a function like this

Quat QuatFromAxisAngle(const Vector& axis, float angleInRadians)
{
    Quat result;
    float angle = angleInRadians / 2.0f;
    float sinAngle = sin(angle);
    Vector n = VecNormalize(axis);
    result.w = cos(angle);
    result.x = n.x * sinAngle;
    result.y = n.y * sinAngle;
    result.z = n.z * sinAngle;
    return result;
}
You now have a quaternion that represents a rotation.

How To Rotate a Vector With a Quaternion:

Quaternions aren't just a set of 4 numbers, they are an algebra. This means that there is a procedure defined to add and multiply these quantities. You can perform a vector rotation using a formula involving quaternion multiplication. But I'm not going to talk about that yet, because most of the time you aren't going to use the quaternion rotation formula.

The first rule of rotating vectors with quaternions is: don't rotate vectors with quaternions! Quaternions are great for representing rotations, but when you get to the point in your code where you are doing the actual rotation computation, it's better to convert to a matrix form. Using the quaternion multiplication rule to rotate a vector isn't too expensive, but nothing is more efficient than a matrix for transforming vectors. In fact, it is cheaper to convert the quaternion to a matrix, and use the matrix to rotate the vector, than to use the quaternion formula.

As it turns out, most transforming of vectors is done by the graphics library and not by you. However, transformations are ubiquitously represented by matrices in graphics libraries. If you want to send your rotation to the graphics library, you will need to convert it to a matrix. Therefore it is essential for you to know how to convert a quaternion to a matrix.

To convert a quaternion to a matrix use this function:

Matrix MatrixFromQuaternion(const Quat& q)
{
    Matrix result;

    //helper quantities - calculate these up front
    //to avoid redundancies
    float xSq = q.x * q.x;
    float ySq = q.y * q.y;
    float zSq = q.z * q.z;
    float wSq = q.w * q.w;
    float twoX = 2.0f * q.x;
    float twoY = 2.0f * q.y;
    float twoW = 2.0f * q.w;
    float xy = twoX * q.y;
    float xz = twoX * q.z;
    float yz = twoY * q.z;
    float wx = twoW * q.x;
    float wy = twoW * q.y;
    float wz = twoW * q.z;

    //fill in the first row
    result.m00 = wSq + xSq - ySq - zSq;
    result.m01 = xy - wz;
    result.m02 = xz + wy;

    //fill in the second row
    result.m10 = xy + wz;
    result.m11 = wSq - xSq + ySq - zSq;
    result.m12 = yz - wx;

    //fill in the third row
    result.m20 = xz - wy;
    result.m21 = yz + wx;
    result.m22 = wSq - xSq - ySq + zSq;

    return result;
}
This function does not assume that the input quaternion is normalized. A quaternion only represents a rotation if it is normalized. If it is not normalized, then there is also a uniform scale that accompanies the rotation. Normalizing a quaternion is similar to normalizing a vector. You just have to take into account that the quaternion has 4 components. To be explicit, here is a function to normalize a quaternion.

Quat QuatNormalize(const Quat& q)
{
    Quat result;
    float sq = q.x * q.x;
    sq += q.y * q.y;
    sq += q.z * q.z;
    sq += q.w * q.w;

    //detect badness
    assert(sq > 0.1f);

    float inv = 1.0f / sqrt(sq);
    result.x = q.x * inv;
    result.y = q.y * inv;
    result.z = q.z * inv;
    result.w = q.w * inv;
    return result;
}

Now, you may be wondering how I came up with this magic formula to convert quaternions to matrices. I can do it, because I know how to use the quaternion rotation formula. You may also be wondering why you would ever mess around with quaternions if you just are going to convert it to a matrix anyway. Keep reading... you will see.

How to Concatenate Rotations:

When you multiply two transformation matrices together, the result (aside from any numerical error) is also a transformation matrix. Transformation matrices can rotate, translate, scale, and skew. However, in many cases the only operation being performed by a transformation matrix is a rotation.

If you multiply two 3x3 rotation matrices together, there are 27 multiplications that need to be evaluated. If you multiply 2 quaternions together, there are only 16. This number can actually be optimized a bit, but even with a naive approach you can perform roughly 1.7 quaternion multiplies in the time of one matrix multiply. Thus, if you know your transformations only involve rotations, using a quaternion is a very good thing.

The order of quaternion multiplication is important, so you need to keep track. Just remember, the first rotation is on the right, and the second rotation is on the left.

qT = q2q1

This multiplication is a quaternion multiplication. The way that quaternion multiplication is defined is one of the things that makes quaternions good at representing rotations.

You can express the quaternion multiplication in terms of standard vector operations, such as dot and cross products.

AqBq = (ab - A•B, aB + bA + A×B)

where Aq = (a, A) and Bq = (b, B).

A function which multiplies two quaternions together may be defined in terms of the components as follows:

Quat QuatMultiply(const Quat& q1, const Quat& q2)
{
    Quat result;
    result.w = q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z;
    result.x = q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y;
    result.y = q1.w*q2.y + q1.y*q2.w + q1.z*q2.x - q1.x*q2.z;
    result.z = q1.w*q2.z + q1.z*q2.w + q1.x*q2.y - q1.y*q2.x;
    return result;
}
This is just an expanded version of the vector operations given previously.
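A quick standalone sanity check of the ordering rule: composing two 45-degree turns about z with the Hamilton product gives a single 90-degree turn. The helper names `Mul` and `ZRotation` are mine, just for this sketch.

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

// Hamilton product in component form.
Quat Mul(const Quat& q1, const Quat& q2)
{
    Quat r;
    r.w = q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z;
    r.x = q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y;
    r.y = q1.w*q2.y + q1.y*q2.w + q1.z*q2.x - q1.x*q2.z;
    r.z = q1.w*q2.z + q1.z*q2.w + q1.x*q2.y - q1.y*q2.x;
    return r;
}

// Rotation about the z axis by an angle in radians (half-angle form).
Quat ZRotation(float angle)
{
    return { 0.0f, 0.0f, std::sin(angle * 0.5f), std::cos(angle * 0.5f) };
}
```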

How to Interpolate a Quaternion:

This is a very huge topic, but I will boil it down for you and give you the goods.

Interpolating a quaternion is useful when smoothly varying between an initial and final rotation. Interpolation is good for finding arbitrary in-between values of rotation. This is employed especially in character animation systems. It is possible to interpolate rotation matrices, but the interpolated matrix may not be a size and shape preserving matrix. Needless to say, interpolating a quaternion is a bajillion times easier than interpolating rotation matrices.

There is one interesting property of quaternions that comes into play when dealing with interpolation. If a rotation can be represented by a quaternion q, then the quaternion -q also represents the same rotation. Why is that? I'm not going to explain it right now, other than to say that it is connected to the very fabric of reality. (You think I'm joking but I'm not!) What you need to worry about is which one of these quaternions you are going to use.

To describe the difference between q and -q, consider that you turn a quarter turn to your left. Essentially this is the same as turning 3/4 turn to your right. One turn is the "short" turn and the other is the "long" one. When representing a static orientation it is irrelevant if a quaternion represents the short, or long path, because it just sits in the final position and you don't get to see the in-between values. However, when you are blending it surely does make a difference.

When blending between an initial and a final quaternion, there is some ambiguity as to whether we are taking the "short" way or the "long" way. It seems like the right thing to always blend on the shortest path. Given the two input quaternions, it is possible to determine which way we are going to blend. You can check this by examining the sign of the 4D dot product of the inputs. If the sign is negative, then you know you are going to be blending the long way.

So, what do you do if you find out that you are blending the long way? Simply flip the sign on one of your input quaternions. Remember, q and -q represent the same rotation. Flipping the sign on one of your inputs will flip the sign of the 4D dot product.

Now that we have discussed that little tid-bit, let's move on to interpolation formulas.

There are a few different interpolation formulas, but two main ones. NLerp is a linear interpolation of the components, followed by a normalization of the interpolated quaternion to ensure that it still represents a rotation. Slerp is a spherical interpolation, which interpolates in a spherical space rather than in the Cartesian space of the components. The interpolant of the Slerp function moves at a constant angular speed, while the interpolant of the NLerp has some nonlinear acceleration.

Here's the quick and dirty: Don't mess around with Slerp, even though you think it might be the more "correct" thing to do. It is too expensive, and has too many special cases that need to be considered. There are some complicated schemes that try to closely approximate the Slerp function, but it just isn't worth it. Just use NLerp, especially in computationally strapped code.

In fact, I'm not even going to show you how to SLerp. You can consult google if you really want to know.

Here is a blending function that uses NLerp

Quat QuatBlend(const Quat& i, const Quat& f, float blend)
{
    Quat result;
    float dot = i.w*f.w + i.x*f.x + i.y*f.y + i.z*f.z;
    float blendI = 1.0f - blend;

    if(dot < 0.0f)
    {
        // Negate the final quaternion so the blend takes the short way
        Quat tmpF;
        tmpF.w = -f.w;
        tmpF.x = -f.x;
        tmpF.y = -f.y;
        tmpF.z = -f.z;

        result.w = blendI*i.w + blend*tmpF.w;
        result.x = blendI*i.x + blend*tmpF.x;
        result.y = blendI*i.y + blend*tmpF.y;
        result.z = blendI*i.z + blend*tmpF.z;
    }
    else
    {
        result.w = blendI*i.w + blend*f.w;
        result.x = blendI*i.x + blend*f.x;
        result.y = blendI*i.y + blend*f.y;
        result.z = blendI*i.z + blend*f.z;
    }

    result = QuatNormalize(result);
    return result;
}
This function has a singularity when the difference between the initial and final quaternions is a 180 degree rotation, because the axis of rotation for the blend becomes ambiguous. You could detect this case and pick the "up" vector for the axis of the blend, or you could break the blend up into a few steps. This singularity shows up in any interpolation scheme, not just NLerp.
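As a quick sanity check, here is a compact, self-contained restatement of that blending function (the minimal Quat struct and the name Blend are mine, with the normalization inlined). Blending halfway between the identity and a 90 degree rotation should land on a 45 degree rotation, and feeding in the negated final quaternion should give the same answer thanks to the sign flip:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// NLerp with the shortest-path sign flip, normalization inlined.
Quat Blend(const Quat& i, const Quat& f, float t)
{
    float dot = i.w*f.w + i.x*f.x + i.y*f.y + i.z*f.z;
    float sign = (dot < 0.0f) ? -1.0f : 1.0f;   // take the short way

    Quat r = { (1.0f - t)*i.w + t*sign*f.w,
               (1.0f - t)*i.x + t*sign*f.x,
               (1.0f - t)*i.y + t*sign*f.y,
               (1.0f - t)*i.z + t*sign*f.z };

    // Normalize so the result is a rotation again
    float mag = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    r.w /= mag; r.x /= mag; r.y /= mag; r.z /= mag;
    return r;
}
```

The halfway result comes out as (cos 22.5°, 0, 0, sin 22.5°), i.e. a 45 degree rotation about z, which is exactly what you would hope for.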

How to Integrate a Quaternion:

Updating the dynamical state of a rigid body is referred to as integration. If you represent the orientation of this body with a quaternion, you will need to know how to update it. This is done with the following quaternion formula.

q' = Δq q

We calculate Δq using a 3D vector ω whose magnitude represents the angular velocity, and whose direction represents the axis of the angular velocity. We also use the time step Δt over which the velocity should be applied. Δq is still a rotation quaternion, and has the same form involving sines and cosines of a half angle. We use the angular velocity and time step to construct a vector θ, whose magnitude is the half angle, and whose direction is the axis.

θ = ωΔt/2

Note: I've included the factor of 1/2, which shows up inside the trig functions of the rotation quaternion. Expressing the rotation quaternion in terms of this vector you have

Δq = ( cos(|θ|), (θ/|θ|) sin(|θ|) )

This works well; however, the formula becomes numerically unstable as |θ| approaches zero. If we detect that |θ| is small, we can safely use the Taylor series expansions of the sin and cos functions instead. The "low angle" version of this formula is

Δq = ( 1 - |θ|²/2, θ - θ|θ|²/6 )

We use only the first couple of terms of each Taylor series expansion, so we should ensure that the largest neglected term is below machine precision before we use the "low angle" version. That term, from the cosine expansion, is

|θ|⁴/24 < ε

Here is a sample function for integrating a quaternion with a given angular velocity and time step

Quat QuatIntegrate(const Quat& q, const Vector& omega, float deltaT)
{
    Quat deltaQ;
    Vector theta = VecScale(omega, deltaT * 0.5f);
    float thetaMagSq = VecMagnitudeSq(theta);
    float s;

    if(thetaMagSq * thetaMagSq / 24.0f < MACHINE_SMALL_FLOAT)
    {
        // "Low angle" branch: Taylor series for cos(x) and sin(x)/x
        deltaQ.w = 1.0f - thetaMagSq / 2.0f;
        s = 1.0f - thetaMagSq / 6.0f;
    }
    else
    {
        float thetaMag = sqrt(thetaMagSq);
        deltaQ.w = cos(thetaMag);
        s = sin(thetaMag) / thetaMag;
    }

    deltaQ.x = theta.x * s;
    deltaQ.y = theta.y * s;
    deltaQ.z = theta.z * s;

    return QuatMultiply(deltaQ, q);
}
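If you want to convince yourself the integrator behaves, here is a self-contained restatement you can compile and poke at. The Quat and Vec3 structs, the QuatMul helper, and the 1e-6 threshold (standing in for MACHINE_SMALL_FLOAT) are my own assumptions. Applying a constant angular velocity of π rad/s about z for one second, in small steps, should accumulate to a half turn:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

// Hamilton product a*b.
Quat QuatMul(const Quat& a, const Quat& b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Same scheme as the integrator above, restated so it compiles on its own.
Quat QuatIntegrate(const Quat& q, const Vec3& omega, float dt)
{
    const Vec3 theta = { omega.x*dt*0.5f, omega.y*dt*0.5f, omega.z*dt*0.5f };
    const float magSq = theta.x*theta.x + theta.y*theta.y + theta.z*theta.z;
    Quat dq;
    float s;

    if(magSq * magSq / 24.0f < 1e-6f)   // "low angle" Taylor branch
    {
        dq.w = 1.0f - magSq / 2.0f;
        s = 1.0f - magSq / 6.0f;
    }
    else
    {
        const float mag = std::sqrt(magSq);
        dq.w = std::cos(mag);
        s = std::sin(mag) / mag;
    }

    dq.x = theta.x * s;
    dq.y = theta.y * s;
    dq.z = theta.z * s;
    return QuatMul(dq, q);
}
```

One hundred steps of 0.01 seconds at ω = (0, 0, π) take the identity to roughly (0, 0, 0, 1), which is the quaternion for a 180 degree rotation about z, so the Taylor branch is holding up.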

This is basically it! You now know how to accomplish all of the main tasks that any game programmer will usually bump up against relating to quaternions. If you are brave, you can move on to my next post, which covers a lot of details concerning the WHY of quaternions.