RAID for home PC
I have 2 new HDDs and I am thinking about making a RAID array. I opted for RAID 1 because it is the most reliable one. Now I need to choose which method to use: BIOS or software (Windows, Linux)? I think that hardware RAID is overkill for a home PC. What do you think?
Regarding software RAID: if my Windows or Linux install breaks, will my data be OK if I just reinstall the OS? The OS will be on an NVMe drive, not the HDDs.
Comments
Maybe someone can correct me, but a mirrored ZFS pool may be more suitable than mdadm?
It is more resilient to power loss than mdadm, so you won't lose data.
Personally I'd do software. My concern with BIOS RAID is that if your computer dies and you replace it with a newer/current model that has a different controller chip, will it still handle the array OK? Probably, but maybe not. Commercial RAID cards have a longer lifecycle than consumer devices.
If you are referring to BIOS RAID as a BIOS option on a consumer motherboard, I would use software RAID.
What consumer motherboards offer is not real hardware RAID and it's not reliable. It's commonly called "fakeRAID".
The software option will depend on what you are planning to run on that machine. If you plan to use it as a server, keep it simple and just use mdadm on Linux. If you plan to use it for gaming or Windows applications, just create the RAID in Windows.
ZFS could be an option, but it comes with a planning penalty: you can't simply add another drive and expand the array. So unless you are willing to add at least 2 drives at a time to expand your mirrored pool, stick with mdadm and you'll have more flexibility to grow both the array and the RAID level by adding drives (i.e. 2 drives RAID1, +1 drive RAID5, +2 or more drives RAID6, etc.).
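For what it's worth, the mdadm path above looks roughly like this. A sketch only: the device names (/dev/sdb, /dev/sdc, /dev/sdd) and array name (/dev/md0) are placeholders, so check your own devices with lsblk first, since creation is destructive:

```shell
# Create a RAID1 mirror from two empty drives (destroys their contents!)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and persist its config across reboots
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Later: add a third drive and reshape the mirror into RAID5
mdadm --add /dev/md0 /dev/sdd
mdadm --grow /dev/md0 --level=5 --raid-devices=3
```

This is the flexibility being described: the reshape runs online, while the filesystem stays mounted.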
Keep it simple, mdadm raid 1
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
You use HDD?
I got rid of those. I don't use RAID anymore nowadays. Just a single SSD is fine, with a regular backup session of critical data (aka porn).
♻ Amitz day is October 21.
♻ Join Nigh sect by adopting my avatar. Let us spread the joys of the end.
Performance-wise, RAID 1 will work well on anything; RAID 5 and RAID 10 gain much more in performance. That said, Linux mdadm is very good and the documentation on recovery is great and extensive. If you have that option, it's ideal. But I would choose motherboard BIOS RAID 1 over Windows RAID 1.
Wait, why was your home raided again for your PC? Did they seize it?
SmartHost™ - Intelligent Hosting! - Multiple Locations - US/EU! - Join our Resale Program
https://www.smarthost.net - sales@smarthost.net - Ultra-Fast NVME SSD KVM VPS - $2.95/month!
he got caught with some mdmadm
Personally I run Btrfs RAID1 with transparent filesystem compression on two-drive arrays. In years past I used mdadm RAID10,f2 for the mirroring, with performance gains!
What kind of RAM are you using?
mdadm software RAID1 if you're using desktop non-ECC RAM.
If you have plenty of ECC RAM to spare (like 6-8 GB excess), ZFS mirrors are pretty nice.
Just regular RAM, not ECC. I didn't even know that ZFS could only be used with ECC.
Of course ZFS does not require ECC memory, but it is highly recommended, since ZFS makes heavy use of RAM.
If some data in your RAM becomes corrupted and ZFS writes it to your mirrored drives, it most likely won't end well.
BTW, you cannot really compare md RAID and ZFS, as they operate on different layers.
md operates at the block level and can mirror any filesystem of your choice across two drives. That is the only thing it does, and it does it very simply and reliably.
In contrast, ZFS is a feature-rich filesystem, one of its features being the ability to mirror a pool across multiple drives.
If you only want to mirror, md RAID is most likely the easiest approach. If you know how to handle, manage, and recover ZFS pools, you get a really mighty toolkit.
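For comparison with the mdadm route, a basic two-drive ZFS mirror is also short to set up. A sketch; the pool name "tank" and device names are placeholders, and in practice you'd use stable /dev/disk/by-id/ paths rather than /dev/sdX:

```shell
# Create a mirrored pool named "tank" from two whole drives
zpool create tank mirror /dev/sdb /dev/sdc

# Compression is a per-dataset property, enabled with one command
zfs set compression=lz4 tank

# Verify both sides of the mirror are ONLINE
zpool status tank
```

Unlike md, there is no separate mkfs/mount step: the pool is a filesystem and mounts itself at /tank by default.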
it-df.net: IT-Service David Froehlich | Individual network and hosting solutions | AS39083 | RIPE LIR services (IPv4, IPv6, ASN)
I know that's not exactly what you asked, but I wouldn't bother creating a RAID for a personal computer at all. Hard drive failures are extremely rare under normal desktop or notebook usage. As long as you keep daily backups, a drive failure shouldn't matter much (compared to its likelihood and the time it takes to fix), even if the device is used for your daily work. (Servers, though, would be a totally different story.)
In my entire life I have had only one drive failure on any of my workstations, back in 2008 (or 2009?), and zero drive failures in any of the desktops or notebooks I manage for my clients. And that one failure was partially my fault: it was a first-gen Intel SSD, back when SSDs were still rather "experimental" and new. I knew what risk I was taking, and it's much less likely to happen today, at least when you buy quality SSDs or HDDs. Usually you will replace your computer before the disks' end of life is reached.
Alwyzon - Virtual Servers in Austria starting at 4,49 €/month (excl. VAT)
I use torrents too much, and as far as I know that affects hard drive life. And you know, nowadays HDDs are not the drives they were in the 00s. I still have a 160GB Seagate HDD working that I bought in 2006. Just a week ago I read a lot of reviews; people say modern HDDs live only 3-4 years. My latest HDDs were bought in 2015; they're still working, but SMART says one is almost dead.
@BlaZe is this true ?
Yes, if you're torrenting a lot it can prematurely wear out both HDDs and consumer SSDs. One idea is to get a cheap, used, enterprise 10k SAS HDD (plus an HBA) just for torrenting/seeding; those are pretty sturdy. Enterprise PCIe/U.2 NVMe drives can have very high DWPD, too. Or just get a dedicated seedbox.
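Whatever drives you land on, it's worth checking the SMART data mentioned above yourself rather than guessing. A sketch using smartmontools; /dev/sda and /dev/nvme0 are placeholder device names:

```shell
# Quick overall health self-assessment (PASSED / FAILED)
smartctl -H /dev/sda

# Full attribute dump: on HDDs watch Reallocated_Sector_Ct,
# Current_Pending_Sector and Power_On_Hours
smartctl -a /dev/sda

# NVMe drives report a different log; "Percentage Used" tracks wear
smartctl -a /dev/nvme0
```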
I do have to say that since I started drinking the ansible kool-aid, I worry a lot less about drive failure. Backups are still necessary, of course, but the main value of RAID in reducing downtime is somewhat obviated by the ease with which I can provision a new server, whether rented or owned.
Well, my missus' Wave 256GB SSD (laptop) died yesterday. I also killed a 1.8" SSD a few years back. What I hate is that they just stop. At least with rust, a short sharp shock often got them running enough to (partially) extract data - especially the ancient MFM types.
Of course, I could only nag so often about keeping a copy of her stuff on a pendrive.
Not that I practice what I preach!
Glad that I've got most of my photos on a RAID1 (mdadm) NAS.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
I am curious about Unraid; I'm planning to use 4 NVMe drives.
⭕ A simple uptime dashboard using UptimeRobot API https://upy.duo.ovh
⭕ Currently using VPS from BuyVM, GreenCloudVPS, Gullo's, Hetzner, HostHatch, InceptionHosting, LetBox, MaxKVM, MrVM, VirMach.
Unraid doesn't pass TRIM, last I heard. Most folks use it with an array of spinners plus an SSD cache. For 4x NVMe, perhaps a ZFS pool of mirrors (RAID10), depending on your needs.
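With four drives, the pool-of-mirrors layout is just two mirror vdevs in one create command. A sketch with placeholder pool ("fast") and NVMe device names; ZFS stripes across the two mirrored pairs automatically:

```shell
# "RAID10" in ZFS terms: two mirrored pairs, striped together
zpool create fast \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1

# Unlike Unraid, ZFS does pass TRIM: run it manually or enable autotrim
zpool trim fast
zpool set autotrim=on fast
```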
Thank you for the info, I will read up on ZFS then.