NAS disk problem

I have a 4-year-old Asustor 4-bay NAS. Three 6 TB disks are set up in a RAID 5 configuration; originally they were all WD Red disks. During the last couple of months, two of the disks have failed (bad sectors and a couple of other S.M.A.R.T. errors). I've replaced them with 6 TB Red Pro disks. I thought that made sense, because they seem to be a bit more reliable (5 years of warranty instead of 3 years for the Reds). They also run at 7200 RPM instead of 5400, although I won't notice a difference until I also replace the third Red. (Or perhaps I won't notice the difference at all.)

However, now I see on the internet that the Pros are not really recommended for such small NASes, so maybe it's a waste of money. I would probably see better performance if I installed an SSD as a cache for the RAID, which the Asustor supports. And even more important: do I actually need better performance from the NAS? To be honest, I'm not sure I can justify saying yes to that question.

A couple of questions:

1. The NAS is on 24/7. Is 4 years a reasonable lifespan for a standard NAS disk?
2. Would you recommend going for a Red Pro if/when the remaining Red fails?

pibbur

PS. BTW, it's actually quite fascinating how easy replacing a RAID disk is. I just take out the offending one and put in the replacement, and the NAS takes over rebuilding the RAID. No need to turn off the NAS or to release the disk in the admin software. Initially I couldn't believe it was that easy, but it actually is. DS.
 
I would think that 4 years for a standard NAS disk, in a home setup that I'd guess is not being thrashed that hard, is pretty reasonable. As to whether specific models that are perhaps better adapted to server use are worthwhile, I'll leave that to someone who has more experience dealing with data storage. I know that the cloud service Backblaze achieved major savings by using cheaper consumer drives in their data centres, and they say it's worked out well for them.

What I did want to suggest is to be careful if you're using RAID as a type of backup. When a RAID rebuild goes wrong (and they do), you can lose the whole shebang. RAID is more intended to help with availability and data consistency, but it's very fragile if it's being relied upon to safeguard the data.
 
I have backups as well. 2 USB disks. What I don't have are backups outside my house. I probably should do something about that.

pibbur who no longer thinks backups are cowardly.
 
Does your NAS have link aggregation, and are you using it? Otherwise, you might not really see a benefit in speed when going from 5400 rpm to 7200 rpm. Are you getting sub-gigabit speeds during large file transfers? There are theoretical speed decreases when writing to RAID 5, but if so, that is more likely down to your RAID controller than to drive speed. If you can get a sustained 110 MB/s transfer on a large file, you're probably doing fine.
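If you want an actual number, from a Linux box (or the NAS's own shell) something like this gives a quick sequential-write figure - the mount point is just an assumption:

Code:
# Write a 4 GB test file to the mounted share; a sustained rate around 110 MB/s
# means the gigabit link, not the disks, is the bottleneck.
dd if=/dev/zero of=/mnt/nas/speedtest.bin bs=1M count=4096 conv=fdatasync status=progress
rm /mnt/nas/speedtest.bin

Copying one big file in Explorer and watching the reported rate tells you much the same thing.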

See here for reference to your concerns.

I tend to prefer software RAID myself, for my use or for home in general. If your hardware fails in an enterprise environment, you're usually equipped for that; at home, the benefit is that recovering your data isn't hardware dependent. That's also where you may see a benefit from running faster drives. Although I'm using mirrored SSDs for caching (when not making large 200 GB+ transfers), the cached data still has to be moved over to the array every night. Since I'm running a parity-drive setup (unRAID), every write to the array has to wait for the drive to complete a rotation before it also writes to the parity drive, which can cut the theoretical maximum almost in half.

There are also other things to consider for drive longevity, such as spindown time. You should be able to set this via your NAS settings, or possibly through the WD firmware tools; as far as I am aware, the Reds were designed to allow this to be adjusted. There is of course a real power saving from spinning down and parking the drive, but if it's set to do this a few seconds after every disk event (rather than staying spun up for a few minutes until all events are completed), that adds up to huge added stress on the drives. This is one of the weaknesses of the WD Blues I'm using; however, I've modified the spindown times on them through a software tool and have been running them in a NAS setup for the past few years without issue. You should be able to see the park frequency on your old failing disks through CrystalDiskInfo or something similar, I believe. I'd check whether it is unusually high. Because one Red drive failing? Maybe. Two around the same time is suspect, unless other things are at play, such as your RAID controller or heat.
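If you'd rather check from a command line than CrystalDiskInfo, smartmontools reports the same counters - a rough sketch, where the device name is just whatever the pulled disk shows up as:

Code:
# Needs the smartmontools package; /dev/sdb is an assumption.
sudo smartctl -A /dev/sdb | grep -Ei 'load_cycle|start_stop|reallocated|pending'
# A Load_Cycle_Count in the hundreds of thousands on a ~4-year-old drive points to
# an aggressive head-park timer; on WD drives, idle3-tools can read (and change) it:
sudo idle3ctl -g /dev/sdb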
 
You are not providing enough information for useful comments. System reads/writes (physical seeks) play a role in disk durability, as does time spent spinning. I've had old drives last 15 years (15 years of spinning with light read/write loads), and their replacement was due to the age of the technology (density/performance), not because the disks failed. Then again, I've also had disks fail after 2 years (the Samsung 1 TB drives were horrible with regard to longevity - but they were all the rage due to good performance when they first hit the market).
-
I've heard mixed things about the Reds and had a few 4 TB ones, but I ended up settling on Hitachi drives when I filled out with 4 TB drives; now I go with Seagate 10 TB drives for most of my hard drive needs (SEST10000NM0) - they have been fast, low power, cool and reliable so far (but I don't stress my disks the way a data center might).
-
So you asked several questions:
Are the Red Pro 6 TB drives reliable? Not sure. I think they looked OK when I researched them, but I was mostly looking at 4, 8 and 10 TB drives and didn't do a deep dive into the 6 TB models, so I don't have a reliable answer here - but if I remember correctly, they were high on power consumption.
-
Should drives last 4 years? Trivially, if the load is light - they should last 10+ years in many cases. But it is always the luck of the draw: some will fail after 2 or 3 years, others will last 20+. I would say 4 years is pretty easy if the load is light.
-
Will an SSD help with your performance? Depends on your workload. In some cases it will only hurt; in other cases it will result in a massive improvement. It comes down to your cache hit rate.
-
Do you need an SSD? No clue - are you having a performance issue? Remember that under certain loads the SSD will actually hurt your performance.
-
Generally, if I remember correctly, 6 TB was a bad drive size, but I forget how I came to that conclusion - some of it was reliability and some was cost per byte.
-
 
The NAS is dead and gone. I suspected that might be the case, among other things because the disks it reported as failing work flawlessly under Windows (disregarding the possibility that Windows is wrong about that). Additionally, it suddenly reported problems with the 3rd RAID disk as well. I reinitialized it and got it working. Sort of. There were significant problems writing to the disks, creating users and more, so I tried the almost-always-working solution and rebooted it, and now it refuses to start. I don't think I want to spend more time on that one.

So I have to get a new NAS. I would have liked to get a new Asustor, mainly because I know that system, but my pusher doesn't carry that brand anymore. For reasonably priced 4-bay models they offer:

  1. Qnap TS-431X2
  2. Qnap TS-453Be-4G
  3. Synology NAS DS418Play
  4. Synology NAS DS918+

They also have a Qnap TS-431P2, but their customers don't seem to like that one. Besides, it's white.

pibbur who now has to read reviews and maybe also listen to the watchers.
 
For a NAS, what I do (and recommend) is get a cheap PC and run ZFS. There are some advantages here. First, it is not proprietary, so if the hardware dies you can just move the disks to a new enclosure. Second, ZFS keeps block checksums, so if (for example) a realloc occurs on the drive (which zeros the block), ZFS will detect that the block holds bad data and rewrite it when you do a scrub. Third, you can use the box as your web machine if you currently use a separate machine for that purpose. (It won't be very good as a primary gaming machine, since many games require Windows, but many will also work on Linux, either via Wine, Steam's equivalent of Wine, or a native port.)
-
Disadvantage: it depends on what spare equipment you have lying around. The software is free, but you do need the PC and an enclosure big enough to hold the drives. If you do 4+1 (personally I recommend 4+2 or 8+2, but that takes more drives), then you need a case that holds 6 drives (though you can use an SSD for the boot disk). There are many cases that will hold 6 drives, and most motherboards have 6 or more SATA connections. If the system is not a gaming system and does not need a dedicated GPU, then the PSU can be 300 or 400 watts. 8 GB of RAM is fine, but if you are going to browse the web on it you might want 16 GB (browsers can eat up 8 GB pretty easily - at least Chrome does - but I leave it running and only reboot once every few months when a security patch requires it).
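To give an idea of how little there is to it, here is a minimal sketch of the 4+2 layout (pool name and device names are placeholders - in practice use the /dev/disk/by-id/ paths so the pool survives devices being renumbered):

Code:
# Six-disk raidz2 pool: four data + two parity, roughly the equivalent of RAID 6.
sudo zpool create tank raidz2 sda sdb sdc sdd sde sdf
sudo zfs create tank/media     # datasets instead of partitions
sudo zpool scrub tank          # walks every block and rewrites any that fail their checksum
sudo zpool status -v tank      # shows scrub progress and repaired/errored counts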
-
If you don't want to go this route, I can't really advise you on a NAS box, but I'd look for one that runs Linux underneath and uses software RAID, so that the next time the hardware fails you don't suffer data loss.
--
For those who care: I do game on a Linux box with a 1070. It works well for the games supported by Linux, but alas there are some games I would not attempt on Linux (like The Witcher 3 - though today it appears to run very well under Wine, it didn't at release).

 
The NAS is dead and gone. I suspected that might be the case, among other things because the disks it reported as failing work flawlessly under Windows (disregarding the possibility that Windows is wrong about that). Additionally, it suddenly reported problems with the 3rd RAID disk as well. [...]

That sounds about right. Even with software RAID solutions, a similar issue can often turn out to be the cables rather than the drives.

We've had a lot of success with the Synology systems, and they seem to give great performance.
 

I agree with what @you; is saying, but I can also see enjoying the simplicity of a dedicated NAS. For one of my storage projects, I went the in-between route.

You can pick up NAS-style enclosures with 4 hot-swap bays that take standard x86 hardware - for example, the ones Chenbro offers.

I have one running an older Atom CPU. Instead of using FreeNAS, which can be a little more demanding on hardware, you can always look at installing OpenMediaVault, which is Debian based, uses a web interface like a traditional NAS, can run on older hardware, and supports plugins - for instance if you'd also like to use it as a media server, or add ZFS capabilities to it.

These enclosures usually accept a standard mini-ITX board, so depending on the hardware you get for it, you could be spending the same as or even less than a traditional NAS box, but have a lot more flexibility and computing power.

EDIT: Here's the little beast. Only using 2 bays at the moment.

 
Isn't it just a fancy name for a case? I realize that it is a case with a hot-swap backplane, but more or less it is just a case - or am I missing something?

 
@you;: Lots of what you say makes sense. But at the moment I feel more like buying a commercially available NAS box, because it seems simpler to set up and maintain. When I was younger I would most likely have gone for a Linux machine, but these days... at the age of 64... maybe I've grown lazy.

Here and now, the Synologies seem like the way to go. They get very good reviews, and @Caddy; seems to like them. On the downside, they do have a proprietary OS, which makes me a bit uncomfortable. OTOH, reviewers like it very much.

pibbur who still hasn't made up his mind, but probably will do so within the next 24 hours.

PS. I have an old Core 2 machine, currently running Windows XP (not connected to the net) and Fedora. I could reinstall that one as a Linux server. (I thought having an XP machine would be nice for compatibility reasons, but actually I haven't used it for years.) But it's 10 years old, with only 4 GB of RAM, and I don't know whether it's suitable for a server now. DS.

PPS: I say thanks to both @you; and @Caddy;. Good advice. DS.

PPPS. I consulted the web page of my pusher, and it seems I can get an OK motherboard/Intel i5/8 GB of RAM for a reasonable price, about 50% of what a NAS box would cost me. I have the cabinet, the disks, and I (probably) have a GPU. So maybe... *ponders* DS.
 
The Core 2 is fast enough; 4 GB of RAM would probably work OK (especially if you don't run X), but 8 wouldn't hurt.
-
If you go with Ubuntu you can install ZFS as an optional package; for other distributions I think you have to download ZFS and build it (though some may provide prebuilt packages - not sure - the issue is that ZFS needs hooks into the kernel).
-
BTW, except for the most current branch, ZFS does not yet support TRIM on Linux (FreeBSD has had it for years and years - one advantage of FreeNAS, which is based on BSD), so be aware of that issue if you use SSDs with ZFS.
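For reference, the Ubuntu route is a one-liner, while elsewhere it goes through a source/DKMS build (package name as far as I remember):

Code:
# Ubuntu 16.04+: prebuilt module, just install the userland tools.
sudo apt install zfsutils-linux
# Other distros rebuild the module against each new kernel, so expect a recompile
# after kernel updates.
sudo modprobe zfs && zpool status   # "no pools available" just means it loaded fine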

 
Well, I would also have mentioned the DIY route, which seems redundant at this point. I'm sure a good quality proprietary box can serve you perfectly well. I just enjoy the tinkering, and I have an irrational (in terms of best use of time) desire for my devices not to be black-boxed, so that I can indulge that impulse.
 
My preference for DIY comes from the superiority of the software solutions and block-level checksums. Shamefully, many commercial solutions are inferior - they could do better, they just haven't bothered to improve the solution they built many years earlier.
-
I actually wrote a software RAID system for a company I worked for (we had block-level checksums but did not do block-level repairs - we did, however, do block-level fixes on reads).
-
Generally, I just like the idea that if the hardware fails, I'm not held hostage by the vendor to recover my data.

 
Isn't it just a fancy name for a case? I realize that it is a case with a hot-swap backplane, but more or less it is just a case - or am I missing something?

Pretty much just a case. But if there wasn't any demand, they wouldn't be selling updated models on a regular basis. Probably the biggest advantage is the usability for its size. I don't think the scale shows well in the picture, but it's small enough to stick in a backpack (a little over two Xbox One S consoles stacked, in size). Sure, there are lots of mini-ITX cases, but it's pretty hard to find one this size that has 4 x 3.5" bays (or 3 x 5.25" if you're going to put in a 4/5-bay hot-swap unit). On top of the 4 bays, it has an ultrabay/drive caddy slot (which currently has an SSD in it) for flexibility. It also uses a smaller external PSU with surge protection, so you're a lot less limited in where you put it - like on a bookshelf. Although it would work well as one, it's not my main storage unit; I'm planning on setting this one up at a friend's house for automatic off-site backups. On top of that, if someone is looking for hardware anyway, there are a few out there sold complete.
 
Hmmm.

I also have my openSUSE Linux machine - my previous gaming machine - so it certainly has sufficient power. I could try to see how that would work, just as an experiment; unlike buying a new NAS, it won't cost me any money. If it works OK, I can always get a cheap dedicated server at a later stage. I have four 6 TB disks available. If I had bought a NAS box, I would have set them up in either a RAID 5 or RAID 10 configuration. I see that ZFS supports both, although "ZFS uses odd (to someone familiar with hardware RAID) terminology" (according to http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/).
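If I've understood that page correctly, the rough mapping is something like this (pool and device names are just for illustration):

Code:
# RAID 5  ~  raidz1 (single parity):   zpool create tank raidz1 sda sdb sdc sdd
# RAID 6  ~  raidz2 (double parity)
# RAID 10 ~  a stripe of mirrors:      zpool create tank mirror sda sdb mirror sdc sdd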

Sadly, I don't know as much about Linux as I used to. So if you would answer a few questions, that would be great - just to point me in the right direction.

  1. How would I access the RAID from Windows clients? Samba? I assume I can set up the usual home/public/other shares.
  2. A NAS box typically comes with two gigabit network interfaces, which can be linked. Is that possible with a Linux/ZFS setup?
  3. A NAS box can be set up to use an SSD as a cache (the Synology I'm considering supports M.2 drives for that purpose). Can that be done with Linux/ZFS?
  4. Applications: On my NAS I've been using, among others, the following applications: an iTunes server and a Plex server. Are these available under Linux? I won't ask about mysql-server and source control, as I would be very surprised if those were not available.
pibbur who no doubt will come up with other questions.

PS: No promises. I'm still leaning towards an easy-to-set-up commercial NAS box. DS.
 
By way of a quick answer to some of your questions, you might like to have a look at the OpenMediaVault distro.

https://www.openmediavault.org/

It does most of what you ask for out of the box. DAAP is, I believe, the Apple standard for media sharing, so I'd guess its DAAP server acts as the iTunes server.

I'm not sure if it has easy options for SSD caching, though I'm sure that could be accomplished (I believe it's a feature of the ZFS filesystem). TBH, I'd wonder if there's really much benefit to that, unless you're using it for performance-intensive workloads - running websites off it, and so forth.
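If it does end up on ZFS, the caching part is a single command - a sketch, with the device name obviously depending on the SSD, and it only pays off when the working set is bigger than RAM:

Code:
# Attach an SSD as a read cache (L2ARC) to an existing pool.
sudo zpool add tank cache /dev/disk/by-id/ata-YOUR_SSD
# A separate small "log" (SLOG) device only helps synchronous writes (NFS, databases),
# not ordinary Samba copies:
# sudo zpool add tank log /dev/disk/by-id/ata-ANOTHER_SSD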

Another, possibly stronger, candidate distro would be FreeNAS, if you don't mind BSD.

https://www.freenas.org
 
Interesting, but both of those require fresh installations, right (or wrong)? What I wanted to do, as an experiment, was to use my already installed openSUSE Linux. As far as I understand it, I can install ZFS on the disks without installing a completely new OS?

pibbur
 
Oh, right. I was thinking of just using the Suse machine with a fresh build, rather than adapting the current installation. It's definitely possible to do it all manually - all the services in those distros can be installed and configured individually. I think ZFS on Linux should be easy enough to set up on Suse. Then Samba/CIFS for file sharing, DAAP for iTunes, Plex, and so on. I'm not sure of the best way to set up link aggregation, but tracking down guides for these things should be fairly quick on Google.
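For the Samba part, the usual shares are just a few lines of smb.conf - a minimal sketch, assuming the pool is called tank and your Linux user is pibbur:

Code:
sudo zfs create tank/shared
# Then add a share section to /etc/samba/smb.conf:
#   [shared]
#      path = /tank/shared
#      read only = no
#      valid users = pibbur
sudo smbpasswd -a pibbur     # Samba keeps its own password database
sudo systemctl restart smb   # the service is "smb" on openSUSE, "smbd" on Debian/Ubuntu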
 
With openSUSE I think you have to download and build ZFS (it is not hard, I've done it before), whereas Ubuntu integrates ZFS, as does FreeBSD (FreeNAS). The reason you have to build it is that it needs hooks into the specific kernel you are running; Ubuntu provides a prebuilt module. The reason Ubuntu does this and other distributions don't is a licensing issue: ZFS is under Sun's CDDL license, not the GPL, and GPL purists complain that it should not be distributed with the kernel. Ubuntu's lawyers say it is legal since they distribute it as a loadable module rather than building it into the kernel itself.
-
As for your question about the network: it can be done with Linux (it is not a ZFS/filesystem issue, it is a network issue), but I've never had the need, so I'm not sure of the magic to bond two interfaces to a single address. An alternative solution is to use a 10 Gb interface if you are concerned that you are getting close to the 1 Gb limit (Samba is pretty inefficient...).
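From what I've read, the Linux side is the kernel's "bonding" driver. A rough, untested sketch with systemd-networkd (interface names are assumptions, the switch has to speak LACP/802.3ad, and note that a single client transfer still tops out at one link's speed):

Code:
# /etc/systemd/network/bond0.netdev
[NetDev]
Name=bond0
Kind=bond
[Bond]
Mode=802.3ad

# /etc/systemd/network/10-bond-slaves.network
[Match]
Name=eth1 eth2
[Network]
Bond=bond0

# /etc/systemd/network/20-bond0.network
[Match]
Name=bond0
[Network]
DHCP=yes

openSUSE's YaST network module also has a bonding dialog that sets up the same thing.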

 