The term parallel file system is used in two ways that mean very different things. The first use refers to the ability to do IO in parallel to multiple servers. The second use is found mostly in high-performance computing and refers to specific IO patterns.
Parallel IO means that a client can reach several storage servers directly and in parallel, taking advantage of their aggregated bandwidth. Often, parallel IO also removes bottlenecks like NFS gateways and improves load distribution. This use of the term is often associated with pNFS (short for parallel NFS), and most high-performance or scale-out file systems offer parallel IO.
The opposite of a parallel file system is one where a client talks to a single server or gateway. Any NFS-based system (except those that explicitly offer pNFS) is such a centralized storage system.
A file system with parallel IO is a must-have for demanding throughput workloads such as 4K video streaming/transcoding/editing, image processing, or big data analytics, just to name a few. However, small-file workloads also benefit from the client communicating directly with the servers that hold the data, rather than going through an NFS gateway that adds another network hop of latency.
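The direct-to-server access described above can be sketched in a few lines. The sketch below is purely illustrative: the in-memory dicts stand in for real storage servers, and the round-robin stripe placement is an assumption, not the layout algorithm of any particular file system. In a real parallel file system the client would consult a layout (e.g. a pNFS layout) to find which server holds each stripe, then read from that server directly, with no gateway in between.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical setup: one file striped across three "servers"
# (plain dicts here), 4-byte stripes placed round-robin.
STRIPE = 4
N_SERVERS = 3
data = b"parallel IO spreads load"
servers = [dict() for _ in range(N_SERVERS)]
for off in range(0, len(data), STRIPE):
    servers[(off // STRIPE) % N_SERVERS][off] = data[off:off + STRIPE]

def read_stripe(offset):
    # The client computes (or looks up) which server owns this
    # stripe and reads it from that server directly.
    return offset, servers[(offset // STRIPE) % N_SERVERS][offset]

# All stripes are fetched concurrently, so the achievable
# bandwidth is the sum over the servers, not a single link.
with ThreadPoolExecutor(max_workers=N_SERVERS) as pool:
    parts = dict(pool.map(read_stripe, range(0, len(data), STRIPE)))

reassembled = b"".join(parts[off] for off in sorted(parts))
```

The point of the sketch is the fan-out: every stripe request goes straight to the server that stores it, which is exactly the bottleneck an NFS gateway would reintroduce.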
In high-performance computing (HPC), parallel file systems allow distributed applications to read and, more importantly, write to a single file from many clients at the same time without locking, i.e. in parallel. This very specific IO pattern is mostly found in research and is often associated with MPI. If you don't know what MPI or MPI-IO is, chances are high that you don't need a parallel file system in the HPC sense.