So it turns out I was doing something dumb in that Vulkan benchmark post I shared yesterday (not considering that I may have been hitting a perf bottleneck that was skewing results...)
I've since updated the tests, re-ran everything, and posted an updated article on my site:
Hoooorraaaayy for extremely public mistakes!
Sigh... when you spend hours constructing a performance test... and then Twitter points out a mistake you made that renders it all useless within about 12 hours of you posting it.
Not sad that I was wrong (I post things publicly so that people let me know when I'm being dumb),
but maaaannn do I wish I had spent less time getting those borked numbers.
In my free time, I've been building a quick test project to benchmark different ways of handling transform data in Vulkan.
I've posted a write up of the results of that benchmark on my blog:
Key takeaway - push constants don't scale as well as UBOs or SSBOs when you're talking about tens of thousands of meshes.
My life right now is trying to get people to understand this:
How is everyone approaching setting content budgets with regards to loading times?
Seems reasonable to run some initial tests to see how many MB you can load on target hardware in a given amount of time, plus stress tests (trying to load like 10k small files at once, and different types of data) to get a feel for what you can reasonably do, and then set initial guidelines based on those numbers.
Has anyone approached this differently?
@khalladay Well, I figured I'd need instancing support anyway but, tbh, I just didn't fancy coding two code paths ;) Same reason I'm going down the vertex pull path not the normal vertex push, reduces code complexity in the long run (I hope)
Has everyone settled on what the "right" way to handle matrix data in Vulkan is?
Are push constants getting totally filled with model matrix data (this seems less than ideal?), is everyone keeping multiple large UBOs that store model matrices to minimize binding (or one large SSBO?),
or for the most part do things just get their own unique UBO for their model matrix?
I'm looking for a way to write a memory allocator that will bucket allocations based on which system is using that memory (i.e. have a bucket for UI, Animation, Gameplay, etc.).
But the kicker is that I can't modify the call sites of any allocation, and the current allocator interface only asks for an alloc size and alignment.
I've had someone suggest manually walking the stack to figure out what's making the alloc call... are there any other options?
I make graphics tech for mobile games, and dabble in engine dev as needed. Shared pointers must die. Random tutorials and thoughts @ http://kylehalladay.com