Tuesday, August 23, 2011

Additional Datastores for your ESXi 4.1 server via NFS running on CentOS 6

I recently ran out of storage for creating new server instances in our test environment, and the IBM 300X series has maxed out. What I did was introduce an NFS datastore and mount it on ESXi using the vSphere Client running on Windows. There are a couple of chores to do before you can perform this. In my case, I have a spare server running CentOS 6 with enough SATA ports to get the job done. So basically, the idea was to prep a new hard drive and configure it to be shared over NFS. Let's get down to the details.

First, make a directory to serve as the NFS export and assign permissions. Open up write permissions on this directory if you'd like anyone to be able to write to it, but be careful: there are security implications, since anyone who mounts the share will be able to write to it:

# mkdir /nfs
# chmod a+w /nfs

Now we need to install the NFS server packages. We will include a package named “rpcbind”, which is a renamed reimplementation of the “portmap” service. Note that “rpcbind” may not need to be running if you are going to use NFSv4 only, but it is a dependency of the “nfs-utils” package.

# yum -y install nfs-utils rpcbind

Make sure the required services are configured to start at boot; “rpcbind” and “nfslock” should be on by default anyway:

# chkconfig nfs on
# chkconfig rpcbind on
# chkconfig nfslock on
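
If you want to double-check, a quick look at the chkconfig output will do; all three services should show as on for the multi-user runlevels:

# chkconfig --list | egrep 'nfs|rpcbind'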

Configure APF Firewall for NFS

Rather than disabling the firewall, it is a good idea to configure NFS to work with APF (a front-end for iptables). For NFSv3 we need to lock several daemons related to rpcbind/portmap to statically assigned ports, and then open those ports in the INPUT chain for inbound traffic. Fortunately, for NFSv4 this is greatly simplified: in a basic configuration TCP 2049 should be the only inbound port required.

First edit the “/etc/sysconfig/nfs” file and uncomment these directives. You can customize the ports if you wish but I will stick with the defaults:

# vi /etc/sysconfig/nfs

RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020

We now need to modify the APF firewall configuration to allow access to the NFS ports. For simplicity I did not run “iptables” commands to insert the rules by hand; in my case APF manages iptables for me.


# vi /etc/apf/conf.apf

Look for the IG_TCP_CPORTS directive and add these TCP ports to the existing list, then do the same for the UDP ports in IG_UDP_CPORTS:

IG_TCP_CPORTS="111,662,875,892,2049,32803"
IG_UDP_CPORTS="111,662,875,892,2049,32769"

Save the file and restart APF so the new rules are applied:

# service apf restart
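
If you'd like to sanity-check that the rules actually landed in iptables after the restart, grepping the INPUT chain for one of the NFS ports is a quick way to do it (this assumes APF places its allow rules in the INPUT chain, which is the default behaviour as far as I know):

# iptables -L INPUT -n | grep 2049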

Now we need to edit “/etc/exports” and add the path to publish over NFS. In this example I will make the NFS export available to clients on the 192.168.10.0 subnet. I will also allow read/write access, specify synchronous writes, and allow root access (“no_root_squash”). Asynchronous writes are considered safe with NFSv3 clients and would allow higher performance if you desire. Allowing root access is potentially a security risk, but as far as I know it is necessary for VMware ESXi.

# vi /etc/exports

/nfs 192.168.10.0/255.255.255.0(rw,sync,no_root_squash)

Configure SELinux for NFS Export

Rather than disable SELinux, it is a good idea to configure it to allow remote clients to access files exported over NFS. This is fairly simple and involves setting an SELinux boolean with the “setsebool” utility. In this example we’ll use the read/write boolean, “nfs_export_all_rw”; there is also “nfs_export_all_ro” to allow read-only NFS exports and “use_nfs_home_dirs” to allow home directories to be exported.

# setsebool -P nfs_export_all_rw 1
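
The -P flag makes the change persistent across reboots. To confirm the boolean is actually set, you can read it back; it should report “on”:

# getsebool nfs_export_all_rw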

Now we will start the NFS services:

# service rpcbind start
# service nfs start
# service nfslock start
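
At this point “rpcinfo” should show mountd, status and nlockmgr registered on the static ports we set in “/etc/sysconfig/nfs” (662, 892, 32803/32769), plus portmapper on 111 and nfs on 2049, which is a good way to confirm the daemons and the firewall rules agree:

# rpcinfo -p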

If at any point you add or remove exports in the “/etc/exports” file, run “exportfs” again to update the export table:

# exportfs -a
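
You can also list what the server is actually exporting; the “/nfs” path and the allowed subnet should appear here before you try mounting anything from ESXi:

# showmount -e localhost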

Implement TCP Wrappers for Greater Security

TCP Wrappers give us finer-grained control over which hosts may reach the listening daemons on the NFS server, beyond what iptables alone provides. Keep in mind that TCP Wrappers checks “hosts.allow” first, then “hosts.deny”, and the first match determines access. If there is no match in either file, access is permitted.

Append a rule with a subnet or domain name appropriate for your environment to restrict allowable access. Domain names are implemented with a preceding period, such as “.mydomain.com” without the quotations. The subnet can also be specified like “192.168.10.” if desired instead of including the netmask.

# vi /etc/hosts.allow

mountd: 192.168.10.0/255.255.255.0

Append these directives to the “hosts.deny” file to deny access from all other domains or networks:

# vi /etc/hosts.deny

portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

And that should just about do it. No restarts should be necessary to apply the TCP Wrappers configuration. I was able to connect with both my Ubuntu NFSv4 and VMware ESXi NFSv3 clients without issues. If you’d like to check activity and see the different NFS versions in use, simply type:

# watch -n 1 "nfsstat" 
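
On the ESXi side I added the datastore through the vSphere Client (Configuration > Storage > Add Storage > Network File System), but it can also be done from the console or vMA with “esxcfg-nas”. A rough sketch, assuming the NFS server is at 192.168.10.5 and you want the datastore labelled “nfs-datastore” (substitute your own IP and label):

# esxcfg-nas -a -o 192.168.10.5 -s /nfs nfs-datastore
# esxcfg-nas -l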

Troubleshooting:
If you encounter errors while attaching/adding the new NFS datastore to your ESXi host via vCenter, try the following steps:

1. Run the "setup" command, un-tick the "Enable firewall" option, and save.
2. Edit the /etc/selinux/config file, change the line SELINUX=enforcing to SELINUX=disabled, then save and close (this only takes effect after a reboot; running "setenforce 0" will at least switch to permissive mode for the current session).
3. Restart the rpcbind, nfs and nfslock services (the exact commands are shown below).
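
For step 3, that is simply:

# service rpcbind restart
# service nfs restart
# service nfslock restart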

Try again and cheers!!!
