The first version of the Network File System, or NFS for short, was published by Sun Microsystems in 1985. The name is a bit misleading because today, NFS is not an actual file system but the protocol spoken between the clients and the servers that hold the data.
The Network File System (NFS) protocol was designed to allow several client machines to transparently access a file system on a single server. One of the design goals was to enable a broad range of operating systems and processor architectures to implement NFS. Most operating systems have extensive native support for NFS, including Linux and macOS, but also more “exotic” systems such as FreeBSD or Solaris. Newer versions of Windows have native support for mounting NFS.
Today, only two versions of the NFS protocol remain in use: version 3, published in 1995, and version 4, published in 2000. NFS 3 is still by far the most common version of the protocol and is the only one supported natively by Windows clients.
What are the advantages of NFS?
There really aren’t many. Being such a dated protocol, NFS hasn’t been able to adapt to the ever-changing needs of today’s storage users. It has become the lowest common denominator of storage: almost all operating systems can access NFS version 3 storage.
What are the disadvantages of NFS?
Most of the disadvantages of NFS stem from the fact that it was designed decades ago and for communication with a single server:
- NFS doesn’t support failover. Failover must be handled at the IP level and often causes interruptions, delays, and user-visible errors (“stale file handle”).
- NFS has no load balancing. This is understandable for a protocol designed around a single server, but with the emergence of scale-out workloads, the lack of load balancing in a scale-out storage system is a big issue.
- NFS has no checksums. Omitted mostly because checksums were computationally expensive in the 1990s, their absence is a major issue today. Checksums protect your data in transit, and modern processors have hardware instructions for computing them. Even newer versions of the NFS protocol lack checksums.
Is NAS the same as NFS?
The short answer is: no. The Network File System – despite its name – is a protocol to access a file system that is located on a remote server. NAS means network-attached storage and has become synonymous with remote file system storage. NFS is one of the protocols used to access a NAS storage system over the network, and the most common one in the Linux/Unix world. In the Windows world, SMB/CIFS is the primary protocol for accessing NAS storage.
What is the difference between a local file system and NFS?
The obvious difference is that local storage, or direct-attached storage (DAS), lives inside a single machine and can’t be shared with other hosts. Often, Linux’s NFS server is used to share a local file system via NFS with other hosts. However, there are subtle differences in how a local file system and NFS behave.
On Linux and Unix machines, both local file systems and NFS look and “feel” the same to users and applications when files are accessed only from a single host (with a few minor exceptions like xattrs). However, when files are accessed from multiple hosts, there are some very important differences in the behavior. The main reason for the differences in behavior is local caching.
NFS allows clients to cache metadata, such as directory listings and file names, as well as data (the actual file contents). When you write to a file, the data is first cached on the local machine and written to the server at a later point in time. This is a great feature, as it hides network latencies and allows the client to batch smaller writes. However, when client A writes to a file and client B then tries to read it, the data might still be in client A’s cache. Client B would see, for example, a shorter file or just zeros. Similar effects can occur when client A has just created a file while client B still has the old directory contents cached: in that case, client B doesn’t see the file at all.
Distributed applications (those running on more than one server at the same time) that run on NFS must be able to work with the so-called “close-to-open” consistency of NFS: NFS flushes the local caches when an application closes a file, and the close call returns only after the data has been written to the server. The next application that opens the file (on a different client machine) is therefore guaranteed to read the latest data from the server. File locking has the same effect: NFS flushes the data on unlock and guarantees that the next application that locks the same region of the file on a different client will get the latest data from the server. This is called “unlock-to-lock” consistency.