One option might be to actually reduce the entire screen down into a smaller set of repeating patterns, Cinepak-style, to cut the number of totally unique symbols in the Huffman tree. But meh, that sounds like effort and I'm not convinced it would save me much.

One other thing I've noticed is that I get slightly better compression results by length-limiting my Huffman tree, since at some point the codes become longer than it would take to just encode the full block directly.
A dual-color block takes all of four bytes (32 bits) total to encode, so I cap code lengths at 32 bits now. Past that it just encodes the block itself rather than the Huffman code.
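The cutoff itself is trivial; here's a sketch of the decision, with my own invented names and an assumed 1-bit "raw block follows" escape flag (the actual encoder's framing may differ):

```cpp
// Sketch of the length-limit escape described above (names are invented).
// A dual-color 4x4 block costs 4 bytes raw: two palette indices plus a
// 16-bit mask choosing between them for each of the 16 pixels = 32 bits.
constexpr int kRawBlockBits = 32;
constexpr int kEscapeFlagBits = 1;  // assumed 1-bit "raw block follows" flag

// Bits a block occupies in the stream: the Huffman code if it fits under
// the cap, otherwise the escape flag plus the raw block itself.
int encodedBits(int huffmanCodeLength) {
    return huffmanCodeLength <= kRawBlockBits
               ? huffmanCodeLength
               : kEscapeFlagBits + kRawBlockBits;
}
```

Any code longer than 32 bits would cost more than the data it represents, so past that point the escape is always the cheaper choice.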

Though I'm a little less worried about how big it is and more focused on making sure it can decode super fast so I can write a playback library for my fantasy console LOL

My shitty Cronch video codec playing the Sonic 06 opening. At the moment the raw 320x200 3k+ frame video sits at nearly 200MB, and the cronched video is 30MB (not as good as it could be but still a savings)
streamable.com/yef0e

OK so now the video plays from start to finish, and it's... visually legible.
But man, it's all squashed into one corner of the frame and the colors are really bad.
Time to do some digging 🤔

Think I fixed it; it was some slightly wrong logic in how I serialized my Huffman codes. Amazing it still managed to play as far into the stream as it did haha

Debugging off-by-one errors in playing back video from a continuous stream that only happen several hundred frames in and that *could* potentially be at the individual bit level is

really tough apparently

w h e w
Working on my new Huffman-coded retro video compression.
Uncompressed would be 320x200 at 8 bits per pixel, paletted: 64000 bytes per frame.
My compressed bitstream is sitting at about 2000 bytes per frame.
That's a ~32:1 compression ratio.
Hot d a m n.

Huffman coding is cool because everyone tells you how to build a Huffman tree using a fancy priority-queue based algorithm that iteratively pops the two least-used items and builds a tree node out of them, but nobody tells you that you can just put your shit in a flat list, sort by ref count, and call it a day.

Weird quirk of my C++ project compared to my C# attempt:
Compiled in Debug mode, massively slower than my C# version.
Compiled in Release mode, ludicrously faster than my C# version.

Interesting performance gap lol.

Nice, got my little console app opening an MP4 file, receiving frames, and rescaling/converting them to 320x200 RGBA32 frames. Next step: generate frame palettes, conform frames to palettes, and then compress each frame!

One of the things I'm doing at the moment is porting over from my original hacky-ass C# app to a C++ app and compiling in libavcodec, instead of the really bad solution I used for the C# app, which was: call out to an external FFmpeg binary -> generate PNG frames -> load PNG frames.
The new C++ version will just decode that shit *directly*.

The other thing I wanna do is look into improving palette selection, and also look into huffman coding the aforementioned 4x4 blocks for further reduced size.

Video is restricted to 256-color paletted, 320x200 (which is the resolution of mode 1 of my fantasy console's graphics system). It encodes 4x4 blocks of pixels, each of which can be unchanged since the last frame, a single-color fill, dual-color, or full/uncompressed. Still lots of work to do.
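A rough sketch of how that per-block mode decision might look; this is not the actual Cronch code, and the type and mode names are invented:

```cpp
#include <array>
#include <cstdint>
#include <set>

// Sketch of the per-block mode decision described above.
// Each 4x4 block of 8-bit palette indices is classified as:
//   Skip       - identical to the same block in the previous frame
//   SingleFill - every pixel is the same palette index
//   DualColor  - exactly two distinct palette indices
//   Raw        - anything else, stored uncompressed (16 bytes)
enum class BlockMode { Skip, SingleFill, DualColor, Raw };

using Block = std::array<uint8_t, 16>;  // 4x4 block of palette indices

BlockMode classify(const Block& cur, const Block& prev) {
    if (cur == prev) return BlockMode::Skip;
    std::set<uint8_t> colors(cur.begin(), cur.end());
    if (colors.size() == 1) return BlockMode::SingleFill;
    if (colors.size() == 2) return BlockMode::DualColor;
    return BlockMode::Raw;
}
```

The modes are checked cheapest-first: a skip or fill block costs almost nothing in the stream, a dual-color block costs four bytes, and only blocks with three or more colors fall through to the full 16-byte raw encoding.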

Current status: messing with a Cinepak/Smacker inspired video codec for a fantasy console I'm working on :)
streamable.com/808av

So I guess virtual calls are gonna be potentially much more expensive as a side effect of fixes for Spectre / Meltdown.
And suddenly Mike Acton's "typical C++ bullshit" slides are now perhaps more relevant than ever.

Well, my PC has bluescreened twice in a row related to NVidia services :c

Interesting. Looks like my device supports a minimum OpenSL buffer size of 240 samples at 48 kHz (5 ms), yet does not expose the "android.hardware.audio.low_latency" feature, so Unity opts not to use the OpenSL audio path.

That's quite unfortunate.

@KillaMaaki It probably wouldn't be so bad for a non-VR app, but it becomes much more noticeable in VR I think.

Wow..... so apparently audio latency in Unity is unusably bad, at least on a Moto G4. On "best latency" vehicle crash sounds are playing easily 200ms after the actual impact has happened.
That's almost a quarter of a second later. It sounds horrible.

Oh OK. The culprit was "Use 32-bit display buffer". 16-bit buffer it is then.

Now, I suppose my hubris was in assuming that mobile hardware would be capable of going beyond the color depth of a SNES without breaking a sweat, but I suppose I ask too much of a phone 🤷

Gamedev Mastodon
