RAID 10 vs RAID 5

evnix OG
edited January 2020 in Technical

I have seen quite a few providers offer different variations of RAID
Is it more like availability vs performance?
The obvious question being which one is more "Prem".

Also, I don't intend to start a ? war

My Personal Blog 🌟🌟 Backups for $1 🔥🚀

Comments

  • InceptionHosting Hosting Provider OG

    For performance VPS hosting 10

    For storage VPS hosting 5 or 6

    With NVMe or even sata SSD I guess, 1 or 10 over 5

    5 = more storage but slower

    10 = better performance, less storage.

    Thanked by (1)evnix

    https://inceptionhosting.com
    Please do not use the PM system here for Inception Hosting support issues.

  • @evnix said:
    I have seen quite a few providers offer different variations of RAID
    Is it more like availability vs performance?
    The obvious question being which one is more "Prem".

    Also, I don't intend to start a ? war

    https://www.acronis.com/en-us/articles/whats-raid10-and-why-should-i-use-it/

    TL;DR: RAID 10 if you need faster performance with redundancy; RAID 5 if you need more space with redundancy while sacrificing write speed.

    Btw, when are we going to see your offers? Excited.

    Thanked by (1)evnix
  • SpryServers_Tab Hosting Provider OG

    If you want a little more redundancy and speed, but want the storage benefits of raid-5, go with RAID-50. (If you have enough drives)

    Thanked by (1)evnix

    Tab Fitts | Founder/CEO - Spry Servers
    SSD Shared Hosting || VPS || Dedicated Servers || Network Status || PHX1 LG || DAL1 LG || || AS398646 || 1-844-799-HOST (4678)

  • @SpryServers_Tab said:
    If you want a little more redundancy and speed, but want the storage benefits of raid-5, go with RAID-50. (If you have enough drives)

    Wouldn't 5+0 offer less storage due to more than 1 parity? Or am I not thinking right?

  • PureVoltage Hosting Provider OG

    I would strongly avoid RAID 5/50 without a RAID card.
    We have some large NVMe systems with 12-24 4TB drives in RAID 50, and performance is not nearly as good as it could be; however, it's what our client wanted and needs.

    I highly suggest RAID 10 unless you really need the extra storage space.

    PureVoltage - Custom Dedicated Servers Dual E5-2680v3 64gb ram 1TB nvme 100TB/10g $145
    New York Colocation - Amazing pricing 1U-48U+

  • ionswitch_stan OG Retired
    edited January 2020

    @seriesn said:
    Wouldn't 5+0 offer less storage due to more than 1 parity? Or am I not thinking right?

    RAID 1 mirrors two disks; RAID 10 stripes across those mirrored pairs. Simple math: 10 8 TiB disks in RAID 10 would net 10*8/2 usable, or 40 TiB. Worst case this config survives a single disk failure (a second failure in the same mirror punctures the array); best case it survives 5 disk failures, one per mirror.

    RAID 5 would net (10-1)*8 TiB, or 72 TiB, and supports exactly one disk failing, worst case and best case alike.

    RAID 50 (two 5-disk RAID 5 spans) would net 2*(5-1)*8 TiB, or 64 TiB usable. Worst case you can sustain one disk failing; best case two, one per span.
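As a quick sanity check, the capacity and fault-tolerance arithmetic for 10 x 8 TiB disks can be sketched in a few lines (a rough model with illustrative function names; RAID 50 is assumed to be striped over two equal RAID 5 spans):

```python
# Usable capacity (TiB) and fault tolerance for n equal-size disks.
# "worst" = failures always survivable; "best" = failures survivable
# if they land on the most favorable disks.

def raid10(n, size):
    usable = n * size / 2          # half the raw space goes to mirrors
    return usable, 1, n // 2       # a 2nd failure in one mirror is fatal

def raid5(n, size):
    return (n - 1) * size, 1, 1    # one disk of parity, one failure max

def raid50(n, size, groups=2):
    per_group = n // groups        # each span is an independent RAID 5
    usable = groups * (per_group - 1) * size
    return usable, 1, groups       # at most one failure per span

print(raid10(10, 8))  # (40.0, 1, 5)
print(raid5(10, 8))   # (72, 1, 1)
print(raid50(10, 8))  # (64, 1, 2)
```

So RAID 50 buys back capacity over RAID 10 at the cost of a much worse guaranteed failure budget during a rebuild.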

    Ionswitch.com | High Performance VPS in Seattle and Dallas since 2018

  • mfs OG
    edited January 2020

    I'd generally avoid RAID-5 (it's 2020 already!) and go for RAID-1, RAID-10, or RAID-6, depending on your requirements; RAIDZ3 only if your OCD is severe enough

    Thanked by (2)vimalware bikegremlin
  • RAID 0. You make backups, right? :smiley:

    The best all-around RAID level is 10. Reads and Writes are pretty well balanced. Definitely the best for high write environments.

    RAID 5 and 6 are best for high read environments and lots of small to medium drives. Of the two, RAID 6 is probably the only one that should be used for critical data.

    RAID 5 has the problem of only supporting a single disk failure, and it probably shouldn't be used with high-capacity drives. Rebuilding an array hammers the drives and increases the likelihood of a second drive failure.
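The rebuild risk can be put into rough numbers. A back-of-the-envelope sketch, assuming the often-quoted 10^-14 unrecoverable-read-error (URE) rate per bit for consumer drives and independent errors (both simplifications; the drive counts and sizes are illustrative, not from this thread):

```python
import math

def rebuild_ure_probability(drive_tb, surviving_drives, ure_per_bit=1e-14):
    """P(at least one URE) while reading every surviving drive in full,
    as a RAID 5 rebuild must after a single disk failure."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    # 1 - (1 - p)^n, computed stably for tiny p and huge n
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

# One failed disk in a 10-drive RAID 5 of 4 TB drives: read the other 9.
print(round(rebuild_ure_probability(4, 9), 2))  # → 0.94
```

Under these assumptions a large-drive RAID 5 rebuild is more likely than not to hit a URE somewhere, which is the usual argument for RAID 6 or RAID 10 with big disks.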

    Then there is software RAID 5, which can be vulnerable to the RAID 5 write hole. Btrfs, in particular, had a problem with its RAID 5 implementation. Of course, RAID 5 on a card without a battery-backed cache has the same problem.

    If you're thinking "hardware RAID card!", keep in mind it is itself a point of failure, and the array becomes trash if it dies. Granted, this is for the truly paranoid, but layering a software mirror on top of RAID arrays on two different cards gets around this. I haven't seen anyone duplex RAID cards, but I have seen people lose arrays because the card died.

    Then there are things like ZFS which have all sorts of different things going on.

  • @ionswitch_stan said:

    @seriesn said:
    Wouldn't 5+0 offer less storage due to more than 1 parity? Or am I not thinking right?

    RAID 1 mirrors two disks; RAID 10 stripes across those mirrored pairs. Simple math: 10 8 TiB disks in RAID 10 would net 10*8/2 usable, or 40 TiB. Worst case this config survives a single disk failure (a second failure in the same mirror punctures the array); best case it survives 5 disk failures, one per mirror.

    RAID 5 would net (10-1)*8 TiB, or 72 TiB, and supports exactly one disk failing, worst case and best case alike.

    RAID 50 (two 5-disk RAID 5 spans) would net 2*(5-1)*8 TiB, or 64 TiB usable. Worst case you can sustain one disk failing; best case two, one per span.

    Okay so I was thinking right. Running on 3 hours of sleep.

  • cybertech OG Benchmark King

    For RAID 10 on SSD, hardware RAID or not? Is performance better with a hardware RAID card or without?

    I bench YABS 24/7/365 unless it's a leap year.

  • @cybertech said:
    When it is raid 10 on SSD, hardware raid or not? Is performance better with hardware raid or without?

    "It depends". There are many hardware and software raid implementations.

    Thanked by (1)cybertech


  • cybertech OG Benchmark King

    @ionswitch_stan said:

    @cybertech said:
    When it is raid 10 on SSD, hardware raid or not? Is performance better with hardware raid or without?

    "It depends". There are many hardware and software raid implementations.

    I ask this in pursuit of the perfect VPS haha


  • ionswitch_stan OG Retired
    edited January 2020

    @cybertech said:
    I ask this in pursuit of the perfect VPS haha

    If perfect means the fastest at a reasonable cost, and speed at the cost of space, nothing is going to beat NVMe. You started the 'Share Some Monster Benches' thread with a VPS that had 2.4 GB/s read/write. There is a tiny bit of headroom above that; you likely won't see much above 3.2 GB/s.

    I'm not aware of anyone on LET/LES selling hardware RAID NVMe, nor anyone running more than two-disk RAID 1 (I could be wrong).

    Thanked by (2)cybertech poisson


  • @ionswitch_stan said:

    @cybertech said:
    I ask this in pursuit of the perfect VPS haha

    If perfect means the fastest at a reasonable cost, and speed at the cost of space, nothing is going to beat NVMe. You started the 'Share Some Monster Benches' thread with a VPS that had 2.4G/s read/write. There is a tiny bit of headroom above that... You likely wont see much above 3.2G/s.

    Im not aware of anyone on LET/LES selling hardware RAID NVME, nor anyone running more two disk RAID1 (I could be wrong).

    NVMe is sufficiently beastly for performance. I would just do software RAID 10 to mitigate failures; there is no need for NVMe to do RAID for performance reasons.

    Also, hardware RAID caching can mess with performance. I have seen massive see-saws in IOPS with hardware RAID, and I suggest, for peace of mind, just going NVMe.

    Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow

  • cybertech OG Benchmark King
    edited January 2020

    @ionswitch_stan said:

    @cybertech said:
    I ask this in pursuit of the perfect VPS haha

    If perfect means the fastest at a reasonable cost, and speed at the cost of space, nothing is going to beat NVMe. You started the 'Share Some Monster Benches' thread with a VPS that had 2.4G/s read/write. There is a tiny bit of headroom above that... You likely wont see much above 3.2G/s.

    Im not aware of anyone on LET/LES selling hardware RAID NVME, nor anyone running more two disk RAID1 (I could be wrong).

    Yes, the same VPS is doing 3.4 GB/s now on dd.

    I was asking about the normal SSD in RAID 10 though; NVMe is too fast and too hot for me to comprehend right now :p


  • @poisson said: Also, hardware raid caching can mess with performance. I have seen massive see saws in iops with hardware raid and I suggest for peace of mind, just go NVMe.

    Many SSDs have an onboard cache that is faster (in both throughput and latency) than a RAID controller's battery-backed cache. It's not uncommon with older RAID controllers (more than a few years old) to absolutely need to disable the controller cache.


  • @cybertech said:

    Yes the same vps is doing 3.4GB/s now on DD.

    I was asking about the normal SSD in RAID 10 though; NVMe is too fast and too hot for me to comprehend right now :p

    Yo! If you want man, I can cache the shit out of your box to fake that number :lol:

    dd tests are overrated and should be taken lightly unless you notice major system slowdown combined with high I/O latency.

  • With today's HDD sizes, a single 10TB HDD has about the same failure rate as a RAID 5 array.
    Kimsufi is still good for backups, especially with nice enterprise HGST drives.

  • cybertech OG Benchmark King
    edited January 2020

    @seriesn said:

    Yo! If you want man, I can cache the shit of your box to fake that number :lol:

    DD rest is over rated and should be taken lightly unless you notice major system slow down combined with high io latency.

    Isn't caching good for performance? Although I don't know what you can use to cache SSD/NVMe other than RAM.

    I agree with the overall view on performance, and personally I'm leaning towards limiting IOPS on every VPS so the node stays stable even under full occupancy (and I, as one of the guests, wouldn't bump into I/O issues). But would this work efficiently, say each VPS is given a healthy 1K IOPS, as opposed to allowing full access to I/O so every request gets completed ASAP to free up resources?
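On the per-VPS IOPS cap: with KVM/libvirt that sort of limit is typically set per virtual disk via the `<iotune>` element in the domain XML. A sketch only; the image path is illustrative, and the 1000 echoes the 1K IOPS figure above:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- cap this guest at roughly 1K IOPS, reads and writes combined -->
    <total_iops_sec>1000</total_iops_sec>
  </iotune>
</disk>
```

Whether a hard cap beats free-for-all scheduling depends on the workload mix: caps trade burst speed for predictable latency under full occupancy.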


  • WSS Retired

    @vimalware said:
    With today's HDD sizes, a single 10TB hdd has about the same failure rate as a RAID5 array.
    Kimsufi is still good for backups , especially with nice enterprise HGST drives.

    That's why I only run RAID0 120GB Deskstars. Live dangerously.

    Thanked by (2)corey dahartigan

    My pronouns are asshole/asshole/asshole. I will give you the same courtesy.

  • @WSS said: 120GB Deskstars

    I had 75GXP's back in the day.

    Thanked by (1)WSS


  • @cybertech said:

    Isn't caching good for performance? Although I don't know what you can use to cache SSD/NVMe other than RAM?

    I agree with overall view on performance, and personally I'm kind of leaning towards limiting IOPS in every VPS so the node stays stable even under full occupancy (and I as one of the guest wouldn't bump into I/O issues). But would this work efficiently, say each VPS is given a healthy 1K IOPS, as opposed to allowing full access to I/O so every request gets completed ASAP to free up resources?

    RAID caching? Nope. Wild swings are dangerous, man.


  • KermEd OG
    edited January 2020

    FWIW, for my home use I've been running RAID 10 with 10x3TB drives, a PCI RAID card, and an external USB storage device for the final backup, for about 6 years now. I used to do off-site backups to the cloud as well, but that became overly redundant.

    The data isn't critical, but it's stuff I'd rather keep around. The tiring part is remembering to replace drives every so often. And you need a decent RAID card; I used the cheap ones a few times and they always ended up dying or losing data, so I finally bought something mid-grade. No regrets though, data hoarding is fun.

    The reason I went with RAID10 is performance.
