What is the Network File System (NFS)?

Sun Microsystems published the first version of the Network File System, or NFS for short, in 1985. The name is a bit misleading: today, NFS is not an actual file system but the protocol spoken between clients and the servers that hold the data.

The Network File System (NFS) protocol was designed to allow several client machines to transparently access a file system on a single server. One of the design goals was to enable a broad range of operating systems and processor architectures to implement NFS. Most operating systems have extensive native support for NFS, including Linux and macOS, but also more “exotic” systems such as FreeBSD or Solaris. Newer versions of Windows have native support for mounting NFS.

Today, only two versions of the NFS protocol remain in common use: version 3, published in 1995, and version 4, published in 2000. NFS version 3 is still by far the most common version of the protocol and is the only one supported by the Windows NFS client.

Illustration of a Linux NFS client communicating with a Linux NFS server (nfsd, in-kernel NFS server) and the components of the Linux kernel involved.

What are the advantages of NFS?

There really aren’t many. As such a dated protocol, NFS hasn’t been able to adapt to the ever-changing needs of today’s storage users. Its main advantage is ubiquity: almost all operating systems can access NFS version 3 storage, which makes it the lowest common denominator of storage.

What are the disadvantages of NFS?

Most of the disadvantages of NFS stem from the fact that it was designed decades ago and for communication with a single server:

  • NFS doesn’t support failover. Failover must be done at the IP level and often causes interruptions, delays, and user-visible errors (stale file handles).
  • NFS has no load balancing. This is understandable for a protocol designed around a single server, but with the emergence of scale-out workloads, the lack of load balancing in a scale-out storage system is a big issue.
  • NFS has no checksums. They were omitted mostly because of the high cost of computing checksums in the 1990s, but this is a major issue today: checksums protect your data in transit, and modern processors have hardware instructions for them. Even newer versions of the NFS protocol lack them.
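To make the last point concrete, here is a minimal sketch of the kind of in-transit checksum the NFS protocol lacks. The payload and the choice of CRC-32 are illustrative assumptions; real end-to-end protection covers every message on the wire.

```python
import zlib

# Illustrative payload: data travelling from a client to a storage server.
payload = b"file contents travelling over the network"

# The sender computes a CRC-32 checksum and ships it with the data.
checksum = zlib.crc32(payload)

# The receiver recomputes the checksum over what arrived; a match means
# the data survived the trip intact.
received = payload
assert zlib.crc32(received) == checksum

# A single corrupted byte changes the checksum, so the receiver can
# detect the damage and request a retransmit.
corrupted = b"File" + payload[4:]
assert zlib.crc32(corrupted) != checksum
```

With plain NFS, no such check exists at the protocol level, so silent corruption on the network path goes unnoticed.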

Is NAS the same as NFS?

The short answer is: No. The Network File System – despite its name – is a protocol to access a file system that is located on a remote server. NAS means network-attached storage and has become synonymous with remote file system storage (read more about NAS here). NFS is one of the protocols to access a NAS storage system over the network and the most common one in the Linux/Unix world. In the Windows world, SMB/CIFS is the primary protocol to access NAS storage.

What’s the difference between SMB and NFS?

The two protocols serve roughly the same purpose: making a remote file system accessible to clients via a computer network. Sun Microsystems developed NFS as an open standard targeted at Unix environments (Sun developed Solaris, which was a Unix system). SMB/CIFS, on the other hand, was developed by Microsoft for its Windows operating system. So, your client’s operating system dictates which protocol you use: Unix/Linux means NFS, and Windows means SMB/CIFS. There are drivers to mount NFS on Windows and SMB on Linux, but they are more of a last resort.

What is the difference between a local file system and NFS?

The obvious difference is that local storage, or direct-attached storage (DAS), lives inside a single machine and can’t be shared with other hosts. Often, Linux’s NFS server is used to share a local file system via NFS with other hosts. However, there are subtle differences in how a local file system and NFS behave.

On Linux and Unix machines, both local file systems and NFS look and “feel” the same to users and applications when files are accessed only from a single host (with a few minor exceptions like xattrs). However, when files are accessed from multiple hosts, there are some very important differences in the behavior. The main reason for the differences in behavior is local caching.

NFS allows clients to cache metadata, such as directory listings and file names, as well as data (the actual file contents). When you write to a file, the data is first cached on the local machine and written to the server at a later point in time. This is a great feature as it hides network latency and allows the client to batch smaller writes. However, when client A writes to a file and client B then tries to read it, the data might still be in the cache on client A. Client B would see, for example, a shorter file or just zeros. Similar effects can occur when client A has just created a file while client B still has the old directory contents cached: in that case, client B doesn’t see the file.

Distributed applications (those running on more than one server at the same time) that run on NFS must be able to work with the so-called “close-to-open” consistency of NFS. NFS flushes the local caches when an application closes a file; the close call returns only after the data has been written to the server. The next application that opens the file (on a different client machine) is guaranteed to read the latest data from the server. File locking has the same effect: NFS flushes the data on unlock and guarantees that the next application that locks the same region of the file on a different client will read the latest data from the server. This is called “unlock-to-lock” consistency.
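The close-to-open pattern can be sketched as follows. The path is hypothetical, and both “clients” run here in one process for illustration; on a real NFS deployment each block would run on a different machine mounting the same export.

```python
import os
import tempfile

# Hypothetical shared file; on a real deployment this would live on an
# NFS mount such as /mnt/nfs/..., shared by clients A and B.
shared_path = os.path.join(tempfile.gettempdir(), "status.txt")

# "Client A": write, then close. On NFS, close() returns only after the
# locally cached data has been flushed to the server.
with open(shared_path, "w") as f:
    f.write("job done\n")
# File is closed here, so the data is now on the server.

# "Client B": opening the file AFTER A's close() is what makes NFS
# revalidate the cache, so B is guaranteed to see A's data. Reading
# while A still has the file open carries no such guarantee.
with open(shared_path) as f:
    print(f.read())  # job done
```

The key discipline for distributed applications is in the ordering: the reader must open after the writer has closed, not merely after the writer has called write().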

What are RPC and XDR, and how are they related to NFS?

NFS is the protocol that describes how to access a remote file system over the network. The operations of the NFS protocol look familiar to anyone who has seen the POSIX file system calls, e.g. NFSPROC_LOOKUP or NFSPROC_MKDIR.
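To illustrate that familiarity, here is a rough, simplified mapping from POSIX-style calls to the NFS version 3 procedures (RFC 1813) a Linux client typically issues for them. The mapping is a sketch: the real kernel client caches aggressively and may skip, batch, or reorder these calls.

```python
# Simplified, illustrative mapping of POSIX-style operations to NFSv3
# procedures. The procedure names are from RFC 1813; the mapping itself
# is an approximation of what a Linux client sends on a cold cache.
POSIX_TO_NFS3 = {
    "stat()":    ["GETATTR"],
    "mkdir()":   ["MKDIR"],
    "open()":    ["LOOKUP", "ACCESS"],  # one LOOKUP per path component
    "read()":    ["READ"],
    "write()":   ["WRITE", "COMMIT"],   # COMMIT when flushing cached writes
    "readdir()": ["READDIRPLUS"],       # or plain READDIR
    "unlink()":  ["REMOVE"],
}

for call, procs in POSIX_TO_NFS3.items():
    print(f"{call:10} -> {', '.join('NFSPROC3_' + p for p in procs)}")
```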

To execute these operations, or procedures, on a remote server, Sun invented the Remote Procedure Call protocol SunRPC (standardized today as ONC RPC, usually just called RPC). This protocol is not concerned with the actual application, in this case NFS; it is a generic protocol for sending messages to and from a remote server and for calling remote procedures, which simply means telling a remote server to perform a specific operation.

XDR is a language, or standard, for encoding the messages in binary form that are sent back and forth between clients and servers when doing RPCs. One important goal of XDR was to ensure interoperability between the different Unix flavors and, more importantly, between the many different processor architectures that existed in the 1980s and 1990s, e.g. to ensure that little-endian and big-endian systems can communicate with each other.
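Two of XDR’s encoding rules (from RFC 4506) can be sketched in a few lines: integers are four bytes in big-endian byte order, and strings are a length prefix followed by the bytes, zero-padded to a four-byte boundary. The helper names below are our own, not part of any XDR library.

```python
import struct

def xdr_uint32(value: int) -> bytes:
    # XDR encodes unsigned integers as 4 bytes, big-endian (">").
    return struct.pack(">I", value)

def xdr_string(s: str) -> bytes:
    # XDR strings: 4-byte length prefix, the bytes themselves, then
    # zero padding up to the next 4-byte boundary.
    data = s.encode("utf-8")
    padding = (4 - len(data) % 4) % 4
    return xdr_uint32(len(data)) + data + b"\x00" * padding

# A little-endian and a big-endian machine both produce these exact
# bytes, which is what made XDR interoperable across architectures.
print(xdr_uint32(1).hex())        # 00000001
print(xdr_string("file1").hex())  # 0000000566696c6531000000
```

Every argument and result of an NFS procedure, such as the file name passed to NFSPROC_LOOKUP, is serialized with rules like these before it goes on the wire.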

Although Quobyte supports versions 3 and 4 of NFS, it also comes with native drivers with built-in failover, load balancing, parallel I/O, and end-to-end checksums. Because of these features, Quobyte avoids the performance bottlenecks of NFS, making it a more suitable option for most storage needs.

Learn More

Talk to Us

We are here to answer all of your questions about how Quobyte can benefit your organization.

Are you ready to chat? Want a live demo?