Rune Skovbo Johansen

I can finally work on my procedural creature parametrization again, now on a much better foundation.

After five weeks of working on the parametrization matrix math and custom tools, the results are coming together. Here's a demo of my derived parametrization tool.

A nice aspect of the math behind this, which is not conveyed in the video, is that the more high-level parameters I create, the better the results also become when rolling random creatures by setting all parameters to random values.

There are more details on the concept and the math behind it in this post from a few days ago:
mastodon.gamedev.place/@runevi

I had a scare today when, after working more on the parametrization, the matrix numbers produced by the final pseudoinverse became very large, causing the creature meshes to "explode" and even Unity to crash. (1/2)

After digging a bit online, it seems that in the singular value decomposition used as part of the pseudoinverse, it's common to set singular values below a given threshold to zero to avoid instabilities. The Math.NET pseudoinverse implementation I use doesn't provide any control over this, so I reimplemented the method with a threshold parameter. I had to set it super large (0.01) to avoid the issues, but then it seems to work. (2/2)

It seemed like I had to set it to a larger threshold the more parameters I added, so we'll see how things progress. I must admit I don't fully understand the pseudoinverse yet, or the consequences of the large threshold I'm using. Hopefully I can keep working around the issues without the core functionality being undermined. (3/2)

Here's a video showing what I'm talking about regarding instability of the pseudoinverse and how I'm working around it. But I'd like to understand better what's going on.
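For readers who'd rather see the workaround as code than video, here is a minimal sketch of a truncated-SVD pseudoinverse in C# using Math.NET Numerics. The class and method names are placeholders, and the code mirrors the description above rather than the actual tool's implementation:

```csharp
using MathNet.Numerics.LinearAlgebra;

static class TruncatedPinv
{
    // Pseudoinverse via SVD, zeroing singular values below a threshold
    // instead of inverting them (inverting a tiny singular value is what
    // produces the huge matrix numbers).
    public static Matrix<double> PseudoInverse(Matrix<double> a, double threshold)
    {
        var svd = a.Svd(computeVectors: true);

        // Build Σ⁺: reciprocal of each singular value above the threshold,
        // zero for the rest.
        var sigmaPlus = Matrix<double>.Build.Dense(a.ColumnCount, a.RowCount);
        for (int i = 0; i < svd.S.Count; i++)
        {
            if (svd.S[i] > threshold)
                sigmaPlus[i, i] = 1.0 / svd.S[i];
        }

        // A⁺ = V Σ⁺ Uᵀ
        return svd.VT.Transpose() * sigmaPlus * svd.U.Transpose();
    }
}
```

Many implementations make this cutoff relative to the largest singular value rather than absolute, which may be part of why a fixed 0.01 seems to need to grow as more parameters are added.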

@runevision perhaps the stability would be better if the sums went in the other direction, so that low level values were arranged in a tree and equal to the sum of every parent node along a path to the tree root.

Along the lines of the pseudoinverse, I think the idea is just to give up on having an answer in the undetermined dimensions, setting those components to zero.

@kepeken The sums in the other direction are what the pseudoinverse calculates, so to speak. It's not intuitive to author the values that way from the beginning.

I have learned about a concept called equilibration which might be useful, but it seems very niche with only academic sources about it that are a bit above my head.

@runevision Thanks for helping me understand better. I think you will not find a method to invert that matrix in those cases, because its singularity is telling you something valid and important.

If the group averages you have defined are accidentally overlapping in the information they provide, such that there is not enough information to determine all the parameters, that makes the matrix singular. That is not a problem with the algorithm, it is a sign that different/more groups are needed.
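As a toy illustration of that point (C# with Math.NET Numerics; the numbers are invented, not taken from the tool), two group-average rows that overlap almost entirely give the matrix a near-zero singular value, and inverting that value is what blows up the pseudoinverse:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class OverlapDemo
{
    static void Main()
    {
        // Each row averages a group of low-level parameters (invented numbers).
        // Rows 1 and 2 cover almost the same group, so they carry nearly the
        // same information: the matrix is close to singular.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 0.5, 0.5,  0.0  },
            { 0.5, 0.49, 0.01 },
            { 0.0, 0.0,  1.0  },
        });

        // The smallest singular value is tiny compared to the others.
        Console.WriteLine(a.Svd(computeVectors: false).S);

        // Its reciprocal shows up in the pseudoinverse, whose entries end up
        // far larger than anything in the original matrix.
        Console.WriteLine(a.PseudoInverse());
    }
}
```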

@runevision There might be some constraints that you want on the parameter values other than the group averages. If the pseudo-inverse is helping, it is helping because it implicitly applies those constraints, and furthermore they would be linear constraints.

If you can figure out what they are, it could improve the reliability if they were made explicit. The implicit constraints applied by the pseudoinverse are that certain sums are zero, namely the dot products with the SVD columns whose singular values are zero.
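A rough sketch of what those implicit constraints look like (C# with Math.NET Numerics; the matrix and target vector are invented stand-ins for the group-average setup): the minimum-norm solution returned by the pseudoinverse has zero dot product with each V column whose singular value is zero.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ImplicitConstraints
{
    static void Main()
    {
        // Invented stand-ins: a maps low-level parameters to group averages,
        // b holds the desired group-average values. Columns 2 and 3 are
        // identical, so a has a null-space direction (0, 1, -1).
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 0.5, 0.25, 0.25 },
            { 0.0, 0.5,  0.5  },
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

        // Minimum-norm solution picked by the pseudoinverse.
        var x = a.PseudoInverse() * b;

        var svd = a.Svd(computeVectors: true);
        for (int i = 0; i < svd.VT.RowCount; i++)
        {
            // Rows of Vᵀ past the last nonzero singular value span the null space.
            double sigma = i < svd.S.Count ? svd.S[i] : 0.0;
            if (sigma < 1e-10)
            {
                // The implicit constraint: this dot product comes out as zero.
                Console.WriteLine(x.DotProduct(svd.VT.Row(i)));
            }
        }
    }
}
```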

@runevision Did you try inspecting the condition number of the original matrix?

@lisyarus Does that refer to a specific calculation, or do you just mean the same way as I did for the pseudoinverse matrix? I didn't, but given all the numbers in the original matrix are small (between 0 and 1) and given how matrix-vector multiplication works, it should be very stable, similar to how the pseudoinverse is stable once it only has small numbers.

@runevision @lisyarus I think the condition number is analogously defined for the pseudoinverse matrix A^+, so very small values in A may result in very large values in A^+, I would think.

@shanecelis @lisyarus Ok, if I understand right: the L2 norm of the input matrix, the L2 norm of the pseudoinverse matrix, and the product of those two norms (the condition number) are by default 0.87, 1192.46, 1040.80. If I set the two small singular values to zero, the numbers change to 0.87, 1.45, 1.27.
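For reference, those numbers can be reproduced roughly like this with Math.NET Numerics, using a random matrix as a stand-in for the real one:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ConditionNumberCheck
{
    static void Main()
    {
        // Random stand-in for the real group-average matrix.
        var a = Matrix<double>.Build.Random(6, 10);
        var aPlus = a.PseudoInverse();

        // The L2 (operator) norm of a matrix is its largest singular value.
        double normA = a.L2Norm();
        double normAPlus = aPlus.L2Norm();

        // Condition number as the product of the two norms...
        Console.WriteLine($"{normA} * {normAPlus} = {normA * normAPlus}");

        // ...which should match the condition number reported by the SVD factorization.
        Console.WriteLine(a.Svd().ConditionNumber);
    }
}
```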

@runevision This is just a hunch, but it seems to me that you might not have to use the SVD algorithm for the pseudoinverse. One of the simpler methods may be adequate for your matrices.

Avoiding rows that are near duplicates, or in other words very similar (close to being linearly dependent), may be enough to keep you out of trouble. If you've got an overdetermined system, you need to make sure it's well-conditioned.

I'm sure someone more expert can comment on this.
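The simpler methods alluded to here would presumably be something like the normal-equations form of the pseudoinverse; a sketch under the assumption of full column (or row) rank, not code from the actual tool:

```csharp
using MathNet.Numerics.LinearAlgebra;

static class NormalEquationsPinv
{
    // Valid when a has full column rank (overdetermined system with
    // independent columns): A⁺ = (AᵀA)⁻¹ Aᵀ.
    public static Matrix<double> LeftPseudoInverse(Matrix<double> a)
    {
        var at = a.Transpose();
        return (at * a).Inverse() * at;
    }

    // The mirror form for full row rank (underdetermined system with
    // independent rows): A⁺ = Aᵀ (AAᵀ)⁻¹.
    public static Matrix<double> RightPseudoInverse(Matrix<double> a)
    {
        var at = a.Transpose();
        return at * (a * at).Inverse();
    }
}
```

The caveat is that forming AᵀA (or AAᵀ) roughly squares the condition number, so this only pays off when the rows really are far from linearly dependent; otherwise the truncated-SVD route is the safer one.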

@runevision Incredible! How do you compute the automatic weights?

@lisyarus Thanks! I actually just described the Auto Weights over on reddit here, so I'll just link to that:
reddit.com/r/proceduralgenerat

@runevision really awesome! is it constrained to quadrupeds?

@runevision what program/s are you using? Or framework to do this?

@psyhackological This is made in Unity as custom editor tooling. I'm using a math library called Math.NET for matrix calculations and such.

@runevision Have you considered, instead of manually building these groupings, just making N "good" animals plus X random animals, having people rate them, and training a Neural Network to control the thing? In the end the NN would arbitrate the relationships between moving one parameter and how others respond.

@benthroop I need to do it manually because I need meaningful parameters that I can intentionally set to not just get some random good creature, but a creature with the specific traits needed for a given situation in the game. I wrote more about automatic vs manual parametrization in my blog post here from January: blog.runevision.com/2025/01/pr

blog.runevision.com · Procedural creature progress 2021 - 2024 · For my game The Big Forest I want to have creatures that are both procedurally generated and animated, which, as expected, is quite a resea...

@runevision I'm not implying random creatures. You are making all these contingent rules about which param ranges are valid with other param ranges. I'm just saying instead of the rules being explicit, train an NN to adjust and constrain the other params as you adjust the primary one. It's still user driven, just a different "algorithm".

@runevision Ah ok I read through your blog post. You have thought of Everything. Gradient Descent technique! Very cool. You're so close to training an NN though! Seems you haven't taken that final step, maybe because you anticipate it won't work?

@benthroop Hmm, I don't quite understand (your previous reply). In the video in this thread I make a "tallness" parameter and figure out which lower-level parameters it should be derived from, and control (the relationship is two-way). If I hadn't created it, there wouldn't be any tallness parameter. The low-level parameters are too low-level to control individually in a meaningful way. This is why I'm using my tool to create higher-level ones ("primary" as I think you call it).