Cursed homelab update:

Server #2 now has two fancy new 8TB NAS drives in it, but I forgot to grab a new fan while getting them, so it's down a fan. I had the wizard wheeze of putting a duct thing in the case and using the PSU's fan to draw air, but:
1. Corsair's PSUs are so efficient that this one doesn't spin up its fan
2. The Linux driver for it doesn't let you set the fan speed

So I have a random fan hooked up to a PSU blowing air on the drives for now.

I'm also trying to do the whole recovery as non-destructively as possible: the system disk has been copied to temporary storage on one of the NAS drives, and I'm currently copying(-ish) the Ceph OSD's storage onto the other. Then I can wipe the 6 other drives in the system, turn two into a mirrored pair of system drives, and let the rest become OSDs, including the two new 8TB NAS drives ... once they're done being a lifeboat for the only remaining storage on that server.
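
The copy step, for the curious, is roughly this sort of thing (a sketch only: device names and mount points are made up, and plain dd stands in for whatever smarter tool you might actually reach for):

# park the system disk as an image on the first NAS drive (names are placeholders)
dd if=/dev/sda of=/mnt/nas1/system-disk.img bs=4M status=progress
# same idea for the OSD's block device, onto the second NAS drive
dd if=/dev/sdb of=/mnt/nas2/osd-0.img bs=4M status=progress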

Speaking of storage stuff: big copy parts 3 and 4 are happening: copying all the storage my _partner_ cares about into Ceph, which is on the last MD-RAID array. So the rough plan is:
1. Copy everything that is self-contained other than the Big Folder ✅
2. Copy parts of the Big Folder ✅
3. Repartition server #2 adding ~16TB to the array 🚧
4. Copy the remaining parts of the Big Folder
5. Turn the RAID members over to OSDs (roughly as sketched below)
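
Per drive, that looks something like this (a sketch only; device names are placeholders, and I'm assuming LVM-backed OSDs via ceph-volume):

# once a member's data is safe in Ceph, pull it out of the MD array
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm --zero-superblock /dev/sdc1
wipefs -a /dev/sdc
# then hand the whole disk to Ceph as a new OSD
ceph-volume lvm create --data /dev/sdc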

Oh yeah, it's all coming together!

#homelab #ceph #raid

I wonder if a sufficiently large #RAID 0 array of #floppy drives could match the throughput of an #SSD, and if so, how many drives it would take.

Let's see…

The Crucial T705 does sequential IO at 13387 MB/s, or 109666304 kbps. A floppy drive can do sequential IO at 250 kbps.

So, an array of 438665 floppy drives should be about the same speed, and would store about 602 GB. Not bad.
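
Checking that arithmetic (treating 1 MB as 1024 KB and a 1.44 MB floppy as 1440 KiB; shell arithmetic is enough here):

echo $((13387 * 1024 * 8)) kbps              # SSD throughput: 109666304
echo $((13387 * 1024 * 8 / 250)) drives      # floppies needed to match it: 438665
echo $((438665 * 1440 / 1024 / 1024)) GiB    # total capacity: ~602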

Now, how do I connect 438665 floppy drives to one computer? 🤔

ICE Raid Resources - Housing Not Handcuffs
housingnothandcuffs.org/icerai

> ICE is now targeting homeless people. This inhumane directive is just one page of the Trump administration’s anti-Black, anti-homeless, anti-migrant, anti-trans and anti-poor agenda.

Housing Not Handcuffs · ICE Raid Resources for Homeless Shelters and Service Providers
View ICE raids resources, including what to do before, during, and after a raid, know your rights materials, messaging guidance and more.

Have you just bought a new Synology NAS server and are wondering, while setting it up, which RAID you should use? In this very compact post, I want to describe which options you have when setting up your NAS server and which RAID I decided on.

Synology NAS server: RAID type during setup grellmann.net/welches-raid-fue

#Synology #Raid #NAS
Replied to Andreas Bulling

@abulling
From a different point of view, and given the fact that one can host a server on any machine:

Components or machines should avoid as many parts as possible from the big tech bros. That means all of the recommendations before should be ignored.
AMD instead of Intel and Nvidia, as AMD is more compatible with Linux and actively supports it.

I'll keep my current stuff, as it does not make sense to throw functional parts away, but I'll consider this for the next set.

What do you think?

Continued thread

BTW - if any of you has a recommendation for a 19" mini rack server (shorter mounting depth), I'd love to hear about it.

I'd like to try #proxmox for the typical services #homeassistant #nextcloud #paperlessngx #adblocker #vpn etc

Ideally #ssd or #nvme in a #raid, plus separate disks for storage (also raid). IOMMU support as well.

Doesn't have to be the newest version. A slightly older and used one is also fine.

#boost welcome - thanks a lot!

Wow, do I ever regret buying #OWC's #SoftRAID. It lost track of one drive, forcing me to initialize and rebuild it. While rebuilding, it bitched about another drive that kept disconnecting. Once the rebuild finished, I swapped enclosures (different manufacturer) for the supposedly disconnecting drive. SoftRAID didn't know anything about the drive, forcing me to reinitialize it and rebuild yet again … that takes days for an 8 TB drive. Worst #RAID software ever.

Continued thread

the #Pine64 #SOQuartz has a nice lil trick: an M.2 slot that i fit with a 6-port SATA controller thingy

basically, i get to add 6 hard drives to this, giving me the option to #RAID (basically, turn several disks into just one, with some built-in redundancy and protection)

i've ordered 6 2.5" hard drives from a laptop repair shop in Ireland on eBay that should arrive any day ✨ again, trying to reduce e-waste here :)
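
once they arrive, turning them into one array is roughly this (just a sketch: device names are placeholders, and the RAID level and filesystem are guesses rather than decisions):

# one md device out of six disks; RAID 6 survives two of them dying
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage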

Ever accidentally blow away your /boot partition?

dpkg -S /boot

will list which packages put files there. Then you can do an 'apt-get --reinstall install <pkgs>' to fix it... This is Debian. Probably similar for pacman, apk, emerge, etc.
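
Roughly, as a one-liner (untested and off the top of my head; run the apt-get end as root):

# every package owning something under /boot, reinstalled in one go
dpkg -S '/boot/*' | cut -d: -f1 | tr ', ' '\n' | sort -u | xargs apt-get install --reinstall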

So, why do I mention this?

Was replacing a failed drive (md RAID 1 root) in my garage "Artoo" unit and was duping the partitions from the working drive to the new SSD.

I guess I dd'ed the new empty /boot to the old working /boot. I was aiming for the UEFI partition...
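
(Something like this, in other words, with if= and of= pointed the wrong way; device and partition names invented for illustration:)

# what I meant to do: clone the working EFI partition onto the new disk
dd if=/dev/sda1 of=/dev/sdb1
# what actually ran, give or take: the new, empty /boot over the working one
dd if=/dev/sdb2 of=/dev/sda2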

Big dummy! LOL

Oh well. No one died, move along, nothing to see here. System is back up again now and in sleep mode for the rest of the night.

#Linux #Debian #mdadm

How I switched us.slackware.nl from lilo to grub

Today I finally switched my US server from lilo to grub as its bootloader.

The reason for doing it now is that I usually do not have a remote IP console (KVM) to that physical server, which is located in a US datacenter whereas I live in Europe. This server's storage is configured as software RAID1.

alien.slackbook.org/blog/how-i
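
The gist for a software RAID1 box, give or take (a sketch, not the exact steps from the post; assumes BIOS booting, with /dev/sda and /dev/sdb as the two RAID members):

# put GRUB in the MBR of both members so the box still boots if either disk dies
grub-install /dev/sda
grub-install /dev/sdb
grub-mkconfig -o /boot/grub/grub.cfg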

hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.

I like ZFS, but the fact that it's not in-tree in the kernel is an issue for me. What I need most is reliability and stability (specifically regarding parity); integrity is the need. Read/write don't have to be blazingly fast (not that I'd be mad about it).

I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; in the VM, it's formatted to XFS. That "seems" fine in limited testing thus far (and seems fast, so it does seem like the defaults got the striping right), but I kind of hate how I have multiple levels of abstraction here.

I don't think there's been any change on the #BTRFS front re: RAID-like array stability (I like and use BTRFS for single-disk filesystems), although I would love for that to be different.

I'm open to #LVM, etc., or whatever might help me stay in-tree and up to date. Thank you! Boosts appreciated and welcome.

#techPosting