Next gen collision system - Animation progress part 4 (Patreon)
(Attached video: FurryVNE - Custom collision system.mp4)
Hi all!
We're proud to present our new automatic collision system for FurryVNE that will go along with the new interaction systems coming later this year.
Motivation
When we implemented physical tails/ears/shafts/balls, we had to figure out how these parts were supposed to collide with the character. In the initial implementation we demonstrated for the physical tails/ears, we simply offloaded this responsibility to the user, who had to set up colliders manually for these parts to collide with the character. However, this never really sat well with me. It felt like a very technical thing that the program should figure out for itself.
Coming up with an automatic system that can provide a reasonable degree of accuracy is challenging. Usually, when adding colliders to a character manually (in any engine), you use crude shapes such as spheres, boxes and capsules. While these shapes are very performant (hence their prevalence in realtime physics engines), they don't approximate the shape of characters that well. For most use cases this is OK, as you don't need them to be that precise for the physical simulation to be believable. But when you're dealing with characters that are supposed to interact with each other more delicately (such as in our app), it is our opinion that these crude shapes are simply not good enough.
(Example of how other physics engines use crude shapes to approximate a body.)
If we were using shapes like these for the entire body, you wouldn't be able to, let's say, stroke a shaft (or balls, for that matter) along the body of another character without it either noticeably colliding with invisible objects (the crude colliders) or noticeably clipping through the model, breaking the immersion. While clipping is challenging to deal with regardless of the implementation you choose, it is especially prominent when relying solely on these crude shapes.
One possible solution would be to add many more, smaller colliders (rather than relying on a single collider per limb), approximating the body more closely. This, however, costs performance, and if the character is updated (through inflation, for example), many of these colliders would have to be recomputed. Furthermore, efficiently filling a character with such shapes is a small science of its own: there are countless approaches, none guaranteed to give an ideal result, since in the end you are still working with crude shapes rather than the actual shape of the body.
Another approach could have been to use a mesh collider for the entire character, but computing mesh colliders is extremely expensive. If we did it every frame (as we would have to for it to stay aligned with the character animation), framerates would dwindle to below 10 FPS on many systems. I suppose it would be possible to slice the mesh into smaller sections, bake each one separately and only update the ones in need of refreshing. While that would remove the need to rebuild the mesh collider(s) every frame, any collider that eventually did need updating (because of inflation, let's say) would still require re-computation, causing a noticeable performance hit.
Lastly, Unity has for years not supported combining rigid bodies (which would be required to actually get physical behavior) with mesh colliders, since they simply cost too much performance, unless you use the convex option for the mesh collider. That, however, removes the benefit of using one in the first place, as it no longer approximates the surface precisely.
(Example of convex mesh collider in green. As you can see, it hardly approximates the actual surface of this example bunny model anymore.)
Given all these problems, we decided to create our own system for collision with the body.
Our method - Part 1
Signed distance functions (SDFs) are functions that define a boundary in space. They take a point (plus whatever parameters describe the shape) as input and output the distance from that point to the boundary, with the sign telling you whether the point lies inside or outside. The function itself looks different depending on what object it describes. In the case of a sphere, it simply takes a point, the sphere's center and a radius; from these, the distance follows from simple vector math.
SDFs don't even need to describe ordinary Euclidean shapes; the point is that whatever the SDF describes, it takes inputs and returns the distance to the defined boundary.
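As a concrete example (a minimal Python sketch for illustration, not code from our app), a sphere's SDF is just the distance to the center minus the radius:

```python
import numpy as np

def sdf_sphere(point, center, radius):
    """Signed distance from `point` to a sphere: negative inside,
    zero on the surface, positive outside."""
    return np.linalg.norm(point - center) - radius

# A point 3 units from the origin, queried against a unit sphere:
# the result is 2.0, i.e. the point lies 2 units outside the boundary.
d = sdf_sphere(np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]), 1.0)
```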
These functions have been around for a long time; their earliest uses in 3D graphics go back as far as the 1980s.
If you have an SDF, you can render it using a technique known as raymarching. The idea behind raymarching is simple - given a ray (usually generated from a pixel of the camera's viewplane), you keep moving along that ray until it hits a surface according to the SDF. In sphere tracing (a specific optimized implementation of raymarching), the distance you advance along the ray at each step is the value returned by the SDF - the distance to the nearest surface - and you keep stepping until that value drops below a small threshold, meaning the ray has reached the surface.
(A visual explanation of sphere tracing, a form of raymarching. The black line is the ray, the black points are where the SDF is sampled, and the red lines are the SDF boundary itself.)
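To make the loop concrete, here is a minimal CPU-side sketch of sphere tracing (illustrative Python with names of our own choosing, not our actual shader code):

```python
import numpy as np

def sphere_trace(origin, direction, sdf, max_steps=128, hit_eps=1e-3, max_dist=100.0):
    """March along a ray, stepping by the SDF value at each sample.
    Returns the hit point, or None if nothing was hit within max_dist."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)                 # distance to the nearest surface
        if d < hit_eps:            # close enough: treat as a hit
            return p
        t += d                     # safe step: we cannot overshoot the surface
        if t > max_dist:
            break
    return None

# Example: trace toward the unit sphere from the earlier snippet.
hit = sphere_trace(np.array([3.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
                   lambda p: np.linalg.norm(p) - 1.0)
```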
SDFs don't need to be analytical equations describing shapes; they can also be pre-baked textures, where each texel (2D pixel, or voxel in 3D) stores the distance from that location to the defined shape. By computing, for each voxel, the signed distance to the nearest point of the mesh, you can pre-bake any 3D model into such a 3D texture (this is essentially the inverse of techniques like marching cubes, which extract a mesh from a distance field). This texture can then be used as the SDF, meaning you can render it using raymarching.
(Example of objects baked into 3D textures and then rendered using raymarching, borrowed from this blog. NOTE! These objects are not polygonal! The geometry actually rendered for each object is just a box, but thanks to raymarching in the shader over the box's surface, we get these amazing shapes from the 3D texture.)
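For illustration, baking boils down to sampling distances over a regular grid. The sketch below is hypothetical Python that uses the analytical sphere SDF from earlier in place of a real mesh-distance query, just to show the idea:

```python
import numpy as np

def bake_sdf_grid(sdf, bounds_min, bounds_max, resolution=32):
    """Sample an SDF on a regular 3D grid, producing a 'texture' of distances.
    A real mesh bake would instead compute, per voxel, the signed distance to
    the nearest triangle of the mesh (ideally in a parallel GPU pass)."""
    xs = np.linspace(bounds_min[0], bounds_max[0], resolution)
    ys = np.linspace(bounds_min[1], bounds_max[1], resolution)
    zs = np.linspace(bounds_min[2], bounds_max[2], resolution)
    grid = np.empty((resolution, resolution, resolution), dtype=np.float32)
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            for k, z in enumerate(zs):
                grid[i, j, k] = sdf(np.array([x, y, z]))
    return grid

# Bake the unit sphere into a 32^3 distance texture.
texture = bake_sdf_grid(lambda p: np.linalg.norm(p) - 1.0,
                        (-2.0, -2.0, -2.0), (2.0, 2.0, 2.0))
```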
GPUs are well suited to operating on 3D textures, meaning a 3D texture can be generated from a mesh quickly and efficiently using the GPU's vast parallel computing power.
Our method - Part 2
Ok, so you have signed distance fields, you can pre-bake any 3D model into a 3D texture to be used as the SDF, and you can render them using raymarching. That's great. But what does any of this have to do with collisions? Well, that's the thing - since these 3D textures contain the distance to the boundary, a single texture lookup tells us whether a point is inside or outside the geometry, and at what distance. If the point is intersecting the geometry and we want to move it out, we simply move it along the gradient of the SDF by the distance the SDF returned. For further optimization, this gradient can be pre-computed and baked into the SDF texture alongside the distance, meaning we require essentially no computation at runtime: one texture lookup gives us all the information needed to resolve the collision properly. Now, that is AMAZING! We get everything we need from a single texture lookup - no complex or performance-demanding computations - while also offering far higher fidelity for the physical shape than simple, crude shapes!
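Roughly, the per-point resolution step could look like the Python sketch below (our own illustration with hypothetical names, assuming both the distance and a normalized gradient have been baked into the texture; a nearest-texel lookup is used for brevity):

```python
import numpy as np

def resolve_point(point, distance_tex, gradient_tex, bounds_min, voxel_size, margin=0.0):
    """Push `point` out of the baked shape using one (pair of) texture lookups.
    distance_tex: 3D array of signed distances; gradient_tex: matching array of
    unit gradients (shape [N, N, N, 3]) pointing toward increasing distance."""
    # Convert the world-space point to a voxel index (nearest-texel lookup).
    idx = np.clip(((np.asarray(point) - np.asarray(bounds_min)) / voxel_size).astype(int),
                  0, np.array(distance_tex.shape) - 1)
    d = distance_tex[idx[0], idx[1], idx[2]]
    if d >= margin:
        return point                      # outside (plus margin): no collision
    n = gradient_tex[idx[0], idx[1], idx[2]]
    return point + (margin - d) * n       # move out along the baked gradient
```

In practice the lookup would be a hardware-filtered (trilinearly interpolated) sample of the 3D texture rather than a nearest-texel read, but the logic is the same.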
Overview
Demonstration
(A visualization of the SDF shapes in Maya by using raymarching for the rendering. The shapes approximate the body mesh much better than simple boxes and spheres.)
(Turn-around of the shapes.)
(Closeup of the SDF in Maya's butt area.)
(Closeup of the SDF in Maya's chest area.)
(Closeup of the SDF in Maya's hand area.)
(Closeup of the SDF in Maya's hoof area.)
(Doing shaft collision against another character.)
(More shaft collisions against another character.)
(Colliders follow limbs as demonstrated here.)
(Balls collision against another character's torso.)
(Updating the SDF in realtime to adapt to inflation, at low performance cost. This would not be possible with mesh colliders at anywhere near this cost.)
(Stroking shaft, self collision.)
(More self collisions.)
Limitations
I might point out that this is a system designed for collisions with shafts, balls and physical bone chains (tails, hair, etc.). It is not a system for doing limb-to-limb collisions. However, the SDFs could be used to make posing easier: since we can query them with low performance impact, we could design our systems to "feel" along another character and place nodes so they don't intersect. This will be a future challenge.
Public video demonstration
https://twitter.com/furryodes/status/1514640916782456838
https://www.furaffinity.net/view/46777613/
https://mega.nz/file/Jp4QAawB#DAi6Tk6WtPxL2HtyJmkoYeKLo1JtRL934P2pXdRhHPQ
Summary
We've explained the problems with existing collision systems and why we chose to implement our own. Our system offers far higher fidelity than the default colliders while retaining excellent performance.
- odes