#postgresql

44 posts · 40 participants · 10 posts today

Movim is officially dropping support for MySQL ⚠️

It is too difficult to maintain compatibility with all the quirks and peculiarities of this database 😔 MySQL support was already broken for a while (migrations not running, broken queries).

Don't worry, we are still fully compatible with PostgreSQL (which is the recommended one) and MariaDB. You can find both databases in all the major distributions 😊

This will greatly simplify and streamline the development of the project ✨

🐘 PostgreSQL Pro Tip: Meta-commands can transform your data import workflow!

Just shared a new tutorial from my "Master Meta-Commands in PostgreSQL" course focusing on importing data into tables with minimal effort.

Learn how to import data and leverage PostgreSQL's meta-commands for data migration.

Watch and level up: link.illustris.org/importdatai
#PostgreSQL #DatabaseTips #DataImport #SQL #LinkedInLearning

link.illustris.org · Importing data into tables | LinkedIn Learning, formerly Lynda.com
After watching this video, you will have a better understanding of how to import data into tables in Postgres. Knowing how to bring data into Postgres lets you integrate it with other tools and platforms.
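For context, here's a minimal sketch of the kind of import the tutorial presumably covers, using psql's \copy meta-command; the table, columns, and file name below are made up for illustration.

-- hypothetical staging table for the CSV being imported
CREATE TABLE staging_events (
    id         integer,
    label      text,
    created_at timestamptz
);

-- \copy is the client-side meta-command: psql reads events.csv locally and
-- streams it through COPY, so no server filesystem access is needed
\copy staging_events FROM 'events.csv' WITH (FORMAT csv, HEADER true)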

This is a typical FreshPorts daily database backup being rsync'd from AWS to the #homelab :

It just so happens that the output of pg_dump for the #PostgreSQL database keeps the data in more or less the same order each time. So the actual daily transfer SEEMS to be only the new data.

On disk, I rely upon #ZFS compression for the space savings. That compression also speeds up disk throughput - the CPU can decompress faster than the disk can provide the data.

In today's case, the amount copied down seems to be 360MB.

This speed-up by rsync, which recognizes what has and has not already been transferred, is also why I do not dump in compressed mode: a compressed dump changes throughout whenever the underlying data changes, which would defeat rsync's delta transfer.

dumping freshports.org
receiving incremental file list
postgresql/
postgresql/freshports.org.dump
3,621,784,708 100% 77.73MB/s 0:00:44 (xfr#1, to-chk=1/4)
postgresql/globals.sql
3,963 100% 58.64kB/s 0:00:00 (xfr#2, to-chk=0/4)

Number of files: 4 (reg: 2, dir: 2)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 2
Total file size: 3,621,788,671 bytes
Total transferred file size: 3,621,788,671 bytes
Literal data: 355,867,046 bytes
Matched data: 3,265,921,625 bytes
File list size: 142
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 481,559
Total bytes received: 356,171,297

sent 481,559 bytes received 356,171,297 bytes 6,925,298.17 bytes/sec
total size is 3,621,788,671 speedup is 10.15

I always wanted to play with osm2pgsql. Getting it running was easier than I thought. I now have access to crucial information, like knowing I'd have to travel 3200km to reach the bakery which is furthest from my house but still on the Australian mainland.
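For a rough idea of the kind of query behind that, here's a sketch assuming the default planet_osm_point table that osm2pgsql creates (with the shop and name columns from the default style) and a made-up home coordinate; the mainland-only filter is left out.

-- furthest bakery from a hypothetical home location, in kilometres;
-- osm2pgsql imports geometries in EPSG:3857 by default, hence the transform
SELECT name,
       ST_Distance(
           ST_Transform(way, 4326)::geography,
           ST_SetSRID(ST_MakePoint(151.21, -33.87), 4326)::geography
       ) / 1000.0 AS km_away
FROM planet_osm_point
WHERE shop = 'bakery'
ORDER BY km_away DESC
LIMIT 1;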

Object-oriented or relational? Why not both?

For many years, we tried to fit business data into a normalised table structure. We used Object-Relational Mappers, which was a constant battle over how to map models that didn't quite fit.

Then document databases like MongoDB came along and gained traction.

Still, many people wanted the guarantees they had in relational databases, and they also wanted to reuse the muscle memory built around operations and other tooling.

Now we have the choice, with the #JSONB data type implemented by #PostgreSQL and then by MySQL and SQLite.

The B in JSONB stands for binary. It looks like JSON, it quacks like JSON, but it's not stored as JSON text. And thanks to that, it's powerful.

When you store JSON data in a JSONB column, it is parsed, tokenised, and stored in a tree-like structure. Types and the hierarchical structure are preserved, and thanks to that, you can index it and query it efficiently.
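A minimal sketch of what that buys you in practice, using a hypothetical orders table:

-- the payload is parsed and stored in JSONB's binary tree form, not as text
CREATE TABLE orders (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb NOT NULL
);

-- a GIN index over the document makes containment queries cheap
CREATE INDEX orders_payload_idx ON orders USING gin (payload);

INSERT INTO orders (payload)
VALUES ('{"customer": "Alice", "total": 42.5, "paid": true}');

-- the number stays a number and the boolean a boolean, so containment (@>)
-- and extraction (->>) work against the preserved structure and types
SELECT payload->>'customer' AS customer
FROM orders
WHERE payload @> '{"paid": true}';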

I'm super happy that in recent years I have had the opportunity to use PostgreSQL and JSONB, first in Marten and now in Pongo. I haven't looked back. JSONB has its cons, but for most typical line-of-business applications, they're negligible.

I finally wrote an intro in #ArchitectureWeekly about how JSONB works. Check it out, tell me how you like it, and share it with your friends!

And most importantly, play with it on your own 😊

architecture-weekly.com/p/post

Architecture Weekly · PostgreSQL JSONB - Powerful Storage for Semi-Structured Data, by Oskar Dudycz

psql:create_tract.sql:17: ERROR: cannot change name of view column "geoid" to "GEOID"
HINT: Use ALTER VIEW ... RENAME COLUMN ... to change name of view column instead.

Good hint; escaping (double-quoting the identifier) added to fix the case-sensitivity issue.
#PostgreSQL
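For anyone hitting the same error: PostgreSQL folds unquoted identifiers to lower case, so geoid and "GEOID" end up as different column names once quoting enters the picture, and CREATE OR REPLACE VIEW refuses to rename an existing column. A sketch of the HINT's suggested fix, with the view name guessed from create_tract.sql and the column names taken from the error message:

-- rename the existing view column to the quoted, case-sensitive spelling
ALTER VIEW tract RENAME COLUMN geoid TO "GEOID";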