Physics 101 #3: Solvers
The term “solver” in the physics simulation realm usually refers to an algorithm devised to solve (hence the name) a problem, usually expressed as a system of equations. Solvers are an integral part of any physics engine, so much so that oftentimes the term is used to refer to the entire engine.
But why would we want to solve a system of equations, in the first place? Well, as we saw in part #2, a collision between two rigidbodies can be expressed as an equation. All kinds of constraints can be, in fact. Solve the equation, and you know what to do to enforce the constraint.
Most constraints used in games (of which collisions are just one kind) involve two bodies: hinges, telescopic joints, ball-and-socket joints, etc. And if all we have is just one constraint, finding a body configuration that meets the condition imposed by the constraint is relatively easy.
As soon as multiple constraints affect the same body, we have multiple equations with shared unknowns that must be simultaneously satisfied. And thus we need a solver.
Roughly speaking, there are two main categories of solvers: direct and iterative. Direct solvers are more accurate but slower. Iterative solvers have a number of advantages that make them widely used for interactive/realtime physics, so we will focus on them. Most literature you can find on the subject is pretty mathy, so I will try to stick to plain English here.
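To make the distinction concrete, here is a toy sketch in plain Python (the coefficients are made up for illustration, not taken from any real constraint setup). A direct method computes the exact answer in one shot; for a 2×2 system, Cramer's rule is one such method:

```python
# Two equations sharing the unknowns x and y (hypothetical coefficients):
#   2x + 1y = 3
#   1x + 3y = 4
a11, a12, b1 = 2.0, 1.0, 3.0
a21, a22, b2 = 1.0, 3.0, 4.0

# A direct solver produces the exact solution in a fixed number of steps.
# Cramer's rule for a 2x2 system:
det = a11 * a22 - a12 * a21
x = (b1 * a22 - a12 * b2) / det
y = (a11 * b2 - b1 * a21) / det
print(x, y)  # 1.0 1.0
```

With thousands of coupled contact constraints, this kind of exact approach gets expensive fast, which is part of why realtime engines lean on iterative methods instead.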
The naive iterative solver (which is actually very good).
Suppose you have two bodies (A and B) overlapping -colliding- with a third one (C). This generates two contact constraints, that is, two equations: A vs C and B vs C.
So we start by solving A vs C. This moves both A and C (by calculating penalty forces, impulses, or any other means) so that they no longer overlap each other. However this causes C to sink a little bit deeper into B. So next we solve B vs C, which separates that pair… but pushes C back into A. No problem: we simply solve A vs C a second time.
Nice, A and C no longer overlap, just like the first time we solved A vs C. But wait… as a result, B vs C overlap again! Should we try solving B vs C a second time…?
Guess what? solving B vs C will probably make A and C overlap again. But don’t give up just yet! Maybe they will overlap a little bit less than the first time. Hoping to improve things a bit more, we can solve A vs C a third time, and that will of course force us to solve B vs C a third time too. Repeat –iterate– the process of solving both equations a fourth, and maybe a fifth time. The error (amount of overlap) for each pair will hopefully be reduced after each iteration, and eventually both equations will be simultaneously satisfied.
While this method may seem a bit headstrong at first (let’s just solve all equations sequentially over and over until it all lines up nicely somehow), it’s actually a very good way to solve a lot of problems: it’s the Gauss-Seidel method.
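Here is a minimal sketch of the idea on a toy linear system (hypothetical numbers, chosen so the iteration converges; a real engine would be solving contact equations via impulses rather than algebra like this). The defining trait of Gauss-Seidel is that each equation is solved in turn, immediately reusing the freshest values:

```python
# Toy system (diagonally dominant, so Gauss-Seidel converges):
#   2x + y = 3
#   x + 3y = 4
x, y = 0.0, 0.0  # initial guess

for iteration in range(20):
    # Solve equation 1 for x using the current y...
    x = (3.0 - y) / 2.0
    # ...then solve equation 2 for y using the x we JUST updated.
    y = (4.0 - x) / 3.0

print(round(x, 6), round(y, 6))  # 1.0 1.0
```

Each sweep shrinks the remaining error, just like each round of re-solving A vs C and B vs C shrinks the leftover overlap.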
At this point, though, we need to introduce an important concept.
In the previous case, what would have happened if we set up the three bodies in such a way that solving B vs C after solving A vs C reverted A vs C to the exact same configuration it had at the start? Say A and B are static -not allowed to move- and C is sandwiched in between…
In other words, what if things do not improve at all no matter how many times we iterate? What if we are forever stuck repeating the same process over and over, undoing any progress every time we try to move forward?
Then we’d say the problem does not converge. If each iteration makes things worse instead of improving them, then the problem diverges.
If we find a method that can reach a solution by iterating only 5 times instead of 10, we would say it converges faster. Fast convergence means less work needed to arrive at a solution. So it is a very desirable thing to have.
Another solver (which is better in some ways, but worse in others).
When using Gauss-Seidel, we were applying the solution given by each equation immediately, before moving on to the next. What if, instead of using the previous equation’s solution as input for the next one, we solved all equations using the previous iteration’s solution as their common starting point, and then averaged the results? Would that work?
Yes it would, and it has a name too: the averaged Jacobi method.
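A sketch on the same kind of toy system as before (made-up numbers again; also note that in this example each equation owns its own unknown, so no actual averaging step is needed — the “averaged” part matters when several equations push on the same unknown, in which case their results are averaged):

```python
# Toy system:
#   2x + y = 3
#   x + 3y = 4
x, y = 0.0, 0.0  # initial guess

for iteration in range(40):
    # Both equations read only the PREVIOUS iteration's values,
    # so they could be solved in parallel...
    new_x = (3.0 - y) / 2.0
    new_y = (4.0 - x) / 3.0
    # ...and the results are applied all at once at the end of the sweep.
    x, y = new_x, new_y

print(round(x, 6), round(y, 6))  # 1.0 1.0, but needs more sweeps than Gauss-Seidel
```

The only structural difference from the Gauss-Seidel sketch is that the new values are held back until the whole sweep is done.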
How does it compare to Gauss-Seidel? Well, for starters it converges much more slowly. In the above animation you can see it takes Jacobi roughly 6 iterations to converge. It only took 3 Gauss-Seidel iterations to achieve the same.
However, the Jacobi method has a couple of nice properties. First, since all equations share the same inputs at the start of each iteration, you can solve them all independently of each other.
If you were allowed to use a classroom of 25 students to solve a system of 25 equations, all students could work simultaneously, solving one equation each, and they’d finish 25 times faster than they would using Gauss-Seidel. Isn’t that great?
Sometimes the parallel nature of Jacobi makes up for its slower convergence. Since you can do more iterations in the same amount of time, it doesn’t matter as much that each iteration is less effective. In fact if the amount of equations is large enough and you can parallelize the process, Jacobi will get the job done faster.
The second good thing about averaged Jacobi is a consequence of the first: it is order-independent. Gauss-Seidel gives slightly different end results depending on the order in which you solve the equations, but the averaged Jacobi method always returns the same result.
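A quick way to see this, using the same toy system as before (everything here is illustrative; both sweeps start from the same guess, and we just swap the order of the two equations):

```python
# Toy system: 2x + y = 3  (equation 0),  x + 3y = 4  (equation 1).

def gauss_seidel_sweep(order, x=0.0, y=0.0):
    # Each solved equation immediately overwrites the shared state.
    for eq in order:
        if eq == 0:
            x = (3.0 - y) / 2.0
        else:
            y = (4.0 - x) / 3.0
    return x, y

def jacobi_sweep(order, x=0.0, y=0.0):
    # Every equation reads the same old state; results applied afterwards.
    results = {}
    for eq in order:
        if eq == 0:
            results[0] = (3.0 - y) / 2.0
        else:
            results[1] = (4.0 - x) / 3.0
    return results[0], results[1]

print(gauss_seidel_sweep([0, 1]) == gauss_seidel_sweep([1, 0]))  # False
print(jacobi_sweep([0, 1]) == jacobi_sweep([1, 0]))              # True
```

Since Jacobi never lets one equation see another’s fresh result mid-sweep, shuffling the order changes nothing, which is handy for determinism and for parallel scheduling.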
The coolest thing about iterative solvers is that you can stop iterating at any moment, and still get a solution. The more iterations you let them perform, the better results they will yield. So the amount of iterations acts as a slider between speed (few iterations) and quality (lots of iterations). How convenient for game development!
In the next post we will see how to apply this knowledge to our position-based physics engine for Unity3D, Obi (See our Obi assets on the Asset Store).