Rune Skovbo Johansen

I'm working on a tool that attempts to tweak the parameters of my creature parametrization to match the look of a reference 3D model. It does this solely by looking at the silhouettes and applying the parameter changes that make the silhouettes match more closely. 🧵

It's not very fast; the video above is playing at 20x speed. There are 106 parameters that each need to be tested in both the positive and negative direction every iteration before the algorithm chooses which changes to actually keep. (It's gradient descent in 106-dimensional space.)
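(The per-parameter probing described above amounts to central-difference gradient estimation. A minimal Python sketch of that loop, assuming a scalar `penalty` function over the parameter vector; the names are illustrative, not from the actual tool:)

```python
import numpy as np

def estimate_gradient(params, penalty, eps=1e-3):
    """Probe each parameter in the positive and negative direction
    (central differences) to estimate the penalty gradient."""
    grad = np.zeros_like(params)
    for i in range(len(params)):       # 106 parameters -> 212 evaluations
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (penalty(params + step) - penalty(params - step)) / (2 * eps)
    return grad

def descend(params, penalty, lr=0.01, iterations=200):
    """Plain gradient descent: step against the estimated gradient."""
    for _ in range(iterations):
        params = params - lr * estimate_gradient(params, penalty)
    return params
```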

The gradient descent is based on a penalty function which looks at the pixels along the reference silhouette outline and checks how far away the outline of the parametric model's corresponding silhouette is. It adds all those distances together, and that's the penalty. That's why the legs are the first thing to grow here: changing them reduces the most distances at once, and after that the neck length provides the most bang for the buck.
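(A penalty of this shape is cheap to evaluate with a distance transform. A minimal sketch, assuming boolean outline masks; this is one plausible implementation, not necessarily how the tool computes it:)

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def silhouette_penalty(reference_outline, model_outline):
    """Sum, over every pixel on the reference silhouette outline, the
    distance to the nearest pixel on the parametric model's outline.
    Both inputs are boolean 2D arrays where True marks outline pixels."""
    # distance_transform_edt measures distance to the nearest zero pixel,
    # so invert the model outline to measure distance to its True pixels.
    dist_to_model = distance_transform_edt(~model_outline)
    return dist_to_model[reference_outline].sum()
```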

Currently the algorithm doesn't produce results as nice as what I can do by tweaking parameters manually, but I hope I can get it close with some further algorithm tweaks. Tweaking parameters manually to try to match a given reference model is quite slow and tedious, which is why I'm attempting to automate it.

Why do I need to match reference 3D models in the first place? In order to have more data about which combinations of parameter values are realistic, so I can make better-informed decisions about how to further refine the parametric model and make it higher-level without accidentally restricting it too much.

A bunch of tweaks later, and the creature converges way faster and to a better result. The ears look funny but I think it's because they are pulling double duty as both ears and horns, since the parametrization can't create horns currently. Would probably be fixed if I could hide the horns on the reference model.

I also tried implementing the 'Adam' optimization algorithm based on this guide (machinelearningmastery.com/ada) but, uhh, the results are not so good. Maybe I'm doing it wrong.
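(For reference, the standard Adam update from guides like the one linked looks roughly like this; a sketch reusing a gradient callable like the estimator above, not a diagnosis of what went wrong here:)

```python
import numpy as np

def adam(params, gradient, lr=0.01, beta1=0.9, beta2=0.999,
         eps=1e-8, iterations=200):
    """Standard Adam: per-parameter step sizes from running estimates of
    the gradient's first and second moments."""
    m = np.zeros_like(params)  # first moment (mean of gradients)
    v = np.zeros_like(params)  # second moment (mean of squared gradients)
    for t in range(1, iterations + 1):
        g = gradient(params)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction for early steps
        v_hat = v / (1 - beta2 ** t)
        params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params
```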

If anyone knows of popular, proven techniques for dynamically adjusting the step size in gradient descent, or for handling the approach to a minimum (switch to binary search?), or even for escaping local minima, let me know!

@runevision gradient descent? this means you're using machine learning (the actually useful parts), which means you can slap "AI" onto this thing and get all the investors lined up!

Seriously though, very nice.

@aras Haha, right. :) Seriously though, there's no neural network or training data set (only one example model to look at); it's just gradient descent in the same sense as a raindrop sliding down a mountain in the direction of the steepest slope. But yeah, for some reason it's categorized as machine learning, I guess. But then every physics system is machine learning too.

@runevision look at ADAM for step sizes. You could also try L-BFGS (not certain how it will do on 106 params). Can try random restarts to avoid local minima.
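(L-BFGS is easy to try via SciPy, since it can estimate the gradient by finite differences itself; a sketch with a dummy objective standing in for the silhouette penalty:)

```python
import numpy as np
from scipy.optimize import minimize

def penalty(params):
    """Stand-in for the silhouette penalty (hypothetical)."""
    return np.sum(params ** 2)

x0 = np.zeros(106)  # current parameter vector
# With jac=None, L-BFGS-B approximates the gradient via finite differences,
# so only the scalar penalty function is needed.
result = minimize(penalty, x0, method="L-BFGS-B")
best_params = result.x
```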

@avi I'm looking into ADAM but all the descriptions I could find assume familiarity with stochastic gradient descent. And when I look into that, I can't figure out how to map the terminology to what I'm doing, or if it even applies. In the image here (from Wikipedia) I don't know what the summand Qi functions are supposed to correspond to. I have many parameters/dimensions (106) but I only have one evaluation function. I also don't know if I have anything corresponding to the "i-th observation".
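(For context, the Wikipedia formula in question is presumably the standard SGD objective, a loss that decomposes into a sum over n observations,

Q(w) = \frac{1}{n} \sum_{i=1}^{n} Q_i(w),

where each SGD step uses the gradient of a single summand: w := w - \eta \, \nabla Q_i(w).)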

@runevision I think you can just treat yours as a degenerate case where you have exactly 1 observation, at which point SGD = GD. The momentum and adaptive learning rates from Adam should still apply.

@runevision alternatively, and maybe more conventionally, you could consider each pixel distance to be a separate observation.
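(Concretely, under that view each reference-outline pixel is one observation, and a stochastic penalty evaluates a random minibatch of pixels instead of the whole outline; a sketch, assuming the distance-field formulation from earlier, with hypothetical names:)

```python
import numpy as np

def stochastic_penalty(reference_pixels, model_distance_field, batch_size=256):
    """Sum distances over a random minibatch of reference-outline pixels,
    a noisy but cheap estimate of the full penalty (the SGD view).
    reference_pixels is an (N, 2) array of pixel coordinates."""
    idx = np.random.choice(len(reference_pixels), batch_size, replace=False)
    batch = reference_pixels[idx]
    total = model_distance_field[batch[:, 0], batch[:, 1]].sum()
    return total * len(reference_pixels) / batch_size  # rescale to full size
```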

@runevision As an aside, I've always wanted games with complicated character creators to have this kind of thing where you give it a photo of something as the target.

@pervognsen Right, that would be cool for such games. To be clear though, what I'm showing here is internal tooling to help me develop my creature generator, not something I'll actually be shipping in the game.

@runevision I get that, it just reminded me of the corresponding problem with character creation sliders, which are always a nightmare if you want to get anything good out of them. Dragon's Dogma 2 had an interesting take on human-in-the-loop gradient descent for character creation where they'd show you a 3x3 grid of nearby variations of the current face and then you could select the one that was an improvement in the direction you wanted, and it would repeat until you were satisfied.
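(A sketch of that grid-refinement loop, with `render` and `choose` standing in for the game's UI; names and details are guesses, not Dragon's Dogma 2's actual implementation:)

```python
import numpy as np

def grid_refine(params, render, choose, step=0.1, rounds=20):
    """Show the current face plus eight nearby variations, let the user
    pick one, recenter on it, and repeat until the rounds run out."""
    for _ in range(rounds):
        candidates = [params] + [
            params + np.random.uniform(-step, step, params.shape)
            for _ in range(8)                     # 3x3 grid: current + 8
        ]
        picked = choose([render(c) for c in candidates])  # user's pick (index)
        params = candidates[picked]
    return params
```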