Building an Inexpensive Linux Home NAS System

Like many other MacBook Pro users, I faced the problem of insufficient internal storage. To be more precise, the rMBP I used daily was equipped with only a 256 GB SSD, which, of course, stopped being enough fairly quickly.

And when I started recording videos during my flights, the situation only got worse. The footage from such flights ran to 50+ GB, and my poor 256 GB SSD filled up very quickly, forcing me to buy a 1 TB external drive. However, within a year it could no longer keep up with the volume of data I was generating, not to mention that the lack of redundancy and backups made it unsuitable for storing important information.

So, at one point, I decided to build a large-capacity NAS in the hope that this system would last at least a couple of years without requiring another upgrade.

I wrote this article primarily as a reminder of exactly what and how I did in case I need to do it again. I hope it will be useful for you if you decide to do the same.

Maybe it's easier to buy?

So, we know what we want to get; the question is how to get it.

I first looked at commercial solutions, in particular at Synology, which is widely considered to make the best consumer-grade NAS systems on the market. However, the cost was quite high: the cheapest 4-bay system runs $300+ and does not include hard drives. In addition, the hardware inside such a kit is not particularly impressive, which calls its real performance into question.

Then I thought: why not build a NAS server yourself?

Finding the Right Server

If you are going to build such a server, the first thing you need is the right hardware. A used server should be fine for this build, as storage tasks don't require much performance. The essentials are a decent amount of RAM, several SATA connectors, and good network cards. Since the server will be running in my home, the noise level also matters.

I started my search on eBay. Although I found many used Dell PowerEdge R410/R210s under $100 there, having worked in a server room, I knew these 1U units were far too noisy for home use. Tower servers are usually quieter, but unfortunately few of them were listed on eBay, and those were all either expensive or underpowered.

The next place to look was Craigslist, where I found someone selling a used HP ProLiant N40L for only $75! I was familiar with these servers, which usually cost around $300 even used, so I emailed the seller hoping the listing was still current. Once I learned that it was, I went to San Mateo without a second thought to pick up the server, and I liked it at first sight. It had minimal wear and, aside from a bit of dust, everything was in great shape.

[Photo: the server, immediately after purchase]

And here is the specification of the kit I purchased:

  • CPU: AMD Turion(tm) II Neo N40L Dual-Core Processor (64-bit)
  • RAM: 8 GB non-ECC RAM (installed by previous owner)
  • Flash: 4GB USB Drive
  • SATA Connectors: 4 + 1
  • NIC: 1Gbps on-board NIC

Needless to say, despite being several years old, this server's specification still beats most NAS options on the market, especially in terms of RAM. A little later I even upgraded it to 16 GB of ECC RAM, for more buffer space and better data protection.

Selecting hard drives

Now that we have an excellent working system, it remains to choose hard drives for it. Obviously, for that $75 I got only the server itself with no HDDs, which did not surprise me.

After doing a bit of research, I found that WD Red HDDs are the best fit for 24/7 NAS duty. I turned to Amazon and bought four of them at 3 TB each. In fact, you can use any HDDs you prefer, but make sure they are all the same size and speed; this will help you avoid RAID performance issues in the long run.

System Setup

I suspect many people would use FreeNAS for a build like this, and there's nothing wrong with that. However, even though it could have been installed on my server, I preferred CentOS, since ZFS on Linux is ready for a production environment and, in general, I am more familiar with managing a Linux server. Also, I wasn't interested in the fancy interface and features FreeNAS provides; a RAIDZ array and AFP sharing were enough for me.

Installing CentOS onto the USB drive is quite simple: boot the installer, select the USB drive as the installation target, and the wizard will guide you through the remaining steps.

RAID assembly

After successfully installing CentOS, I also installed ZFS on Linux by following the steps listed here.
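For reference, on CentOS this boils down to adding the ZFS on Linux yum repository and installing the packages from it. The exact repository RPM depends on your CentOS release, so treat the following as a rough sketch rather than the literal commands:

# Add the ZFS on Linux release repository as described in the project's
# documentation, then install the packages; kernel-devel is needed so the
# module can be built for the running kernel.
$ sudo yum install epel-release
$ sudo yum install kernel-devel zfs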

With that done, I loaded the ZFS kernel module:

$ sudo modprobe zfs

And created a RAIDZ1 array with the zpool command:

$ sudo zpool create data raidz1 ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609145 ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609146 ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609147 ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609148
$ sudo zpool add data log ata-SanDisk_Ultra_II_240GB_174204A06001-part5
$ sudo zpool add data cache ata-SanDisk_Ultra_II_240GB_174204A06001-part6

Note that I'm using hard drive IDs here instead of the kernel device names (sdX) to reduce the chance of the pool failing to mount after a reboot because a drive letter changed.
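To see which stable identifiers correspond to which drives before building the pool, you can list them like this (which sdX each ID points to will, of course, differ from system to system):

# Stable device names live under /dev/disk/by-id and are symlinks to the sdX devices
$ ls -l /dev/disk/by-id/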

I also added a ZIL and an L2ARC cache on a separate SSD, splitting that SSD into two partitions: 5 GB for the ZIL and the rest for the L2ARC.
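The partitioning itself isn't shown here; below is a hedged sketch of one way to carve up an empty SSD for this purpose. The device path matches the one above, but this layout would produce partitions 1 and 2 rather than the part5/part6 numbering I actually used, so adjust it to your own disk:

# Illustrative only: wipes the SSD and creates a 5 GB log partition plus a
# cache partition taking up the rest of the drive.
$ sudo parted -s /dev/disk/by-id/ata-SanDisk_Ultra_II_240GB_174204A06001 mklabel gpt
$ sudo parted -s /dev/disk/by-id/ata-SanDisk_Ultra_II_240GB_174204A06001 mkpart primary 1MiB 5GiB
$ sudo parted -s /dev/disk/by-id/ata-SanDisk_Ultra_II_240GB_174204A06001 mkpart primary 5GiB 100%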

As for RAIDZ1, it can survive the failure of one disk. Many argue that this pooling option should not be used because a second disk could fail during the RAID rebuild, which would mean data loss. I ignored this recommendation because I regularly back up important data to a remote machine, so even the failure of the entire array would only affect the availability of the data, not its safety. If you are not able to make backups, a solution like RAIDZ2 or RAID10 would be a better choice.

You can verify that the pool was created successfully by running:

$ sudo zpool status

and

$ sudo zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
data                               510G  7.16T   140K  /mnt/data

By default, ZFS mounts the newly created pool directly under / (as /data in this case), which is generally undesirable. You can change this by running:

$ sudo zfs set mountpoint=/mnt/data data

From here, you can create one or more datasets to store your data. I created two: one for Time Machine backups and one for general shared file storage. I limited the Time Machine dataset with a 512 GB quota to keep it from growing indefinitely.
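A sketch of roughly what that looked like, using the dataset names that show up in the AFP configuration later in this article:

# One dataset for general file sharing, one for Time Machine capped at 512 GB
$ sudo zfs create data/datong
$ sudo zfs create data/datong_time_machine_backups
$ sudo zfs set quota=512G data/datong_time_machine_backups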

Optimization

$ sudo zfs set compression=on data

This command enables ZFS compression. Compression uses very little CPU but can greatly improve I/O throughput, so enabling it is always recommended.
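You can check the effect later with:

# Shows the compression algorithm in use and the compression ratio achieved so far
$ sudo zfs get compression,compressratio data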

$ sudo zfs set relatime=on data

This reduces the number of atime updates, which in turn cuts down the IOPS generated when files are accessed.

By default, ZFS on Linux uses 50% of physical memory for the ARC. In my case, with a fairly small total number of files and no other applications running on the server, this can safely be raised to about 90%.

$ cat /etc/modprobe.d/zfs.conf 
options zfs zfs_arc_max=14378074112
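If the ZFS module is already loaded, the new limit can usually be applied at runtime as well, without a reboot, via the module parameter exposed in sysfs:

# Apply the new ARC limit (in bytes) immediately
$ echo 14378074112 | sudo tee /sys/module/zfs/parameters/zfs_arc_max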

Then, using arc_summary.py, you can verify that the changes have taken effect:

$ python arc_summary.py
...
ARC Size:				100.05%	11.55	GiB
	Target Size: (Adaptive)		100.00%	11.54	GiB
	Min Size (Hard Limit):		0.27%	32.00	MiB
	Max Size (High Water):		369:1	11.54	GiB
...

Setting up recurring tasks

I used systemd-zpool-scrub to set up systemd timers that scrub the pool once a week, and zfs-auto-snapshot to automatically create snapshots every 15 minutes, every hour, and every day.
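For reference, a weekly scrub timer is just a small pair of systemd units; the sketch below shows roughly what such a package sets up (unit names and paths are illustrative):

# /etc/systemd/system/zpool-scrub@.service
[Unit]
Description=Scrub ZFS pool %i

[Service]
Type=oneshot
ExecStart=/usr/sbin/zpool scrub %i

# /etc/systemd/system/zpool-scrub@.timer
[Unit]
Description=Weekly scrub of ZFS pool %i

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

The timer would then be enabled with systemctl enable zpool-scrub@data.timer and started with systemctl start zpool-scrub@data.timer.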

Netatalk installation

Netatalk is an open source implementation of AFP (Apple Filing Protocol). Following the official installation instructions for CentOS, I had the RPM package compiled and installed in just a couple of minutes.

Configuration

$ cat /etc/netatalk/afp.conf
;
; Netatalk 3.x configuration file
;

[Global]
; Global server settings
mimic model = TimeCapsule6,106

; [Homes]
; basedir regex = /home

; [My AFP Volume]
; path = /path/to/volume

; [My Time Machine Volume]
; path = /path/to/backup
; time machine = yes

[Datong's Files]
path = /mnt/data/datong
valid users = datong

[Datong's Time Machine Backups]
path = /mnt/data/datong_time_machine_backups
time machine = yes
valid users = datong

Note that vol dbnest was a major improvement in my case: by default, Netatalk writes its CNID database to the root file system, which was undesirable since my root file system lives on the USB drive and is therefore relatively slow. Enabling vol dbnest stores the database in the root of each volume instead, which in this case sits on the ZFS pool and is an order of magnitude faster.
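If memory serves, the option belongs in the [Global] section of afp.conf, along these lines:

[Global]
mimic model = TimeCapsule6,106
; keep each volume's CNID database inside the volume itself
vol dbnest = yes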

Enabling Ports in the Firewall

$ sudo firewall-cmd --permanent --zone=public --add-service=mdns
$ sudo firewall-cmd --permanent --zone=public --add-port=afpovertcp/tcp

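One thing the listing above skips: the permanent rules only take effect after a firewall reload, and the AFP and Bonjour daemons themselves need to be running. Assuming the standard service names on CentOS 7, that looks roughly like this:

# Reload the firewall so the permanent rules become active
$ sudo firewall-cmd --reload
# Enable and start Netatalk (AFP) and Avahi (Bonjour advertisement)
$ sudo systemctl enable netatalk avahi-daemon
$ sudo systemctl start netatalk avahi-daemon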
If everything was set up correctly, your machine should show up in Finder, and Time Machine should work as well.

Additional settings
SMART monitoring

It is a good idea to monitor the health of your disks so you can react before they fail completely.

$ sudo yum install smartmontools
$ sudo systemctl start smartd
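smartd is configured through smartd.conf (its location varies by distribution). A hedged example directive that monitors all detected drives, e-mails root on problems, and runs a short self-test every night at 02:00:

# /etc/smartd.conf (illustrative)
DEVICESCAN -a -m root -s (S/../.././02)

It is also worth enabling the service at boot with systemctl enable smartd.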

Daemon for UPS

apcupsd monitors the charge of an APC UPS and shuts the system down when the charge becomes critically low.

$ sudo yum install epel-release
$ sudo yum install apcupsd
$ sudo systemctl enable apcupsd
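The important knobs live in /etc/apcupsd/apcupsd.conf; for a USB-connected APC unit a minimal sketch looks something like this (the thresholds are a matter of taste):

# /etc/apcupsd/apcupsd.conf (excerpt, illustrative)
UPSCABLE usb
UPSTYPE usb
DEVICE
# Shut down when less than 10% battery or 5 minutes of runtime remain
BATTERYLEVEL 10
MINUTES 5

After editing the file, start the daemon and check the UPS status with apcaccess.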

Hardware upgrade

A week after setting up the system, I started worrying more and more about the non-ECC memory installed in the server. Besides, with ZFS, extra memory for buffering is very useful. So I turned to Amazon again, bought two Kingston 8 GB DDR3 ECC modules for $80 each, and replaced the desktop RAM the previous owner had installed. The system booted on the first try without any problems, and I confirmed that ECC support was active:

$ dmesg | grep ECC
[   10.492367] EDAC amd64: DRAM ECC enabled.

Results

The result made me very happy. I can now saturate the server's 1 Gbps LAN connection while copying files, and Time Machine works flawlessly. So, overall, I'm happy with the setup.

The total cost:

  1. 1 * HP ProLiant N40L = $75
  2. 2 * 8 GB ECC RAM = $174
  3. 4 * WD Red 3 TB HDD = $440

Total = $689

Now I can say that the price was worth it.

Do you build your own NAS servers?

Source: habr.com
