All three manufacturers started quietly selling relatively small HDDs, from 2 TB upward, built on SMR technology.
On the English-speaking Internet and in the media, such actions have been criticized, and rightly so, I think. In Russia, the THG resource (thg.ru) published an article on the subject.
The thg.ru text also refers to Alan Brown, a network administrator at the UCL Mullard Space Science Laboratory, who got to the bottom of the situation. He found that during RAID rebuilds, performed when a new drive is added to an existing array and data is rewritten to rebalance access, the system drops the new WD Red HDDs from the array. The original wording, "the system takes the new WD Red HDDs out of its control," is rather vague, but in context it means exactly that: the array rejects the drive.
At the same time, Brown notes: "The WD40EFAX drive I filled with zeros averaged 40 MB/s, but started out at 120 MB/s."
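The throughput drop described above is consistent with how drive-managed SMR behaves: writes land quickly in a CMR-like media cache and slow down once the drive must rewrite shingled zones. A rough sketch of that effect (the cache size and the slow rate are illustrative assumptions, not figures from the source):

```python
# Illustrative model of sequential-write throughput on a drive-managed
# SMR disk: writes are fast until the media cache fills, then drop to
# the shingled rewrite rate. All numbers here are hypothetical.

def avg_throughput_mb_s(total_gb, cache_gb, fast_mb_s, slow_mb_s):
    """Average MB/s for writing total_gb, the first cache_gb at fast_mb_s."""
    fast_part = min(total_gb, cache_gb) * 1024      # MB absorbed by the cache
    slow_part = max(total_gb - cache_gb, 0) * 1024  # MB written at shingle speed
    seconds = fast_part / fast_mb_s + slow_part / slow_mb_s
    return (fast_part + slow_part) / seconds

# A full-disk fill is dominated by the slow shingled rate:
print(round(avg_throughput_mb_s(4000, 40, 120, 40), 1))  # → 40.3
```

With these assumed numbers, a 4 TB fill that starts at 120 MB/s averages out to roughly the 40 MB/s Brown observed, because the fast cache covers only a tiny fraction of the disk.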
In the case of ZFS, the resilver is not a linear, block-level end-to-end scan: it hops around the entire disk as each file's parity is restored. This appears to cause another issue on the WD40EFAX, where a request to check a sector that has not yet been written causes the drive to internally log a "sector ID not found" (IDNF) error and return a hardware I/O error over the interface to the host system.
RAID controllers (hardware or software, RAID5/6 or ZFS) will quite reasonably decide after a few such errors that the drive has failed and eject it from the array, if a timeout has not already caused them to do so.
This is certainly in line with what I've noticed: the resilver runs at around 100 MB/s for about 40 minutes, after which the drives "die", and die again if I try to restart the resilver. However, if I leave them alone for an hour or so, they work for another 40 minutes before dropping off again.
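The eject behavior Brown describes can be sketched as a simple error-threshold policy. This is a hypothetical model of a RAID layer's decision, not actual md or ZFS code, and the threshold value is an assumption:

```python
# Hypothetical sketch of a RAID layer's failure policy: after a few
# consecutive I/O errors (e.g. IDNF from an SMR drive busy with internal
# housekeeping), the drive is marked failed and dropped from the array.

ERROR_THRESHOLD = 3  # illustrative; real controllers use their own limits

def process_io_results(results, threshold=ERROR_THRESHOLD):
    """Return the index at which the drive is ejected, or None.

    `results` is a sequence of booleans: True = I/O ok, False = error.
    """
    consecutive_errors = 0
    for i, ok in enumerate(results):
        if ok:
            consecutive_errors = 0
        else:
            consecutive_errors += 1
            if consecutive_errors >= threshold:
                return i  # drive ejected here
    return None

# A stretch of clean resilver I/O, then a burst of IDNF errors:
io = [True] * 8 + [False, False, False]
print(process_io_results(io))  # → 10
```

Under such a policy the observed pattern makes sense: the drive behaves normally until its internal housekeeping backs up, then a short burst of errors gets it ejected, and after an idle hour it works again for a while.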
It's hard to imagine what exactly made thg.ru do this. One can only guess whether it was pressure from advertisers. In any case, a situation in which popular drives designed specifically for NAS use are silently replaced with significantly less suitable ones, at the same price and with unchanged specifications, deserves attention.
In a discussion group, Brown wrote:
I've just purchased 3 WD Reds to replace aging drives in a ZFS array. ALL THREE are failing during resilvering with IDNF (sector ID not found) errors:
As far as I can understand, the problem is as follows: WD Red EFAX drives use SMR and have a 256 MB cache, while EFRX drives do not use SMR (they are conventional CMR drives) and have a 64 MB cache.
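Since the specifications were not updated, the model suffix is the practical way to tell the two apart. A small helper based on the EFAX/EFRX distinction described above (the mapping covers only the suffixes named in the text; anything else is left unclassified):

```python
# Classify a WD Red model string by its suffix, per the EFAX/EFRX
# distinction above. Suffixes not covered by the text return None.

WD_RED_SUFFIXES = {
    "EFAX": ("SMR", 256),  # shingled recording, 256 MB cache
    "EFRX": ("CMR", 64),   # conventional recording, 64 MB cache
}

def classify_wd_red(model):
    """Return (recording_tech, cache_mb) for a model like 'WD40EFAX'."""
    for suffix, info in WD_RED_SUFFIXES.items():
        if model.upper().endswith(suffix):
            return info
    return None

print(classify_wd_red("WD40EFAX"))  # → ('SMR', 256)
print(classify_wd_red("WD40EFRX"))  # → ('CMR', 64)
```

For a drive already installed, the model string can be read from the drive's identify data (for example via smartmontools) and passed to a check like this.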
Toshiba has several such models, and Seagate has several such series.
Source: habr.com