ParaView Client-Distributed Server Mode

Example of Client-Distributed Server mode

Perhaps the most popular use of ParaView is in the Single Client-Multiple Server mode, where a single user connects to a cluster of systems to process their data in parallel. This offers many advantages over standalone or single-server mode. ParaView is specifically designed for optimum performance in parallel and offers many tools to assist with this. By default, ParaView will use the IceT library to composite the images, but this can be disabled in favor of the simpler MPI compositor. However, since the IceT compositor offers improvements in quality, performance, and features, this tutorial will focus on it.

Setup

Normal Setup

  1. On the "Server" system, make sure the data is available and ParaView is installed and configured for use with MPI.
  2. On the "Server" system, run the following command:
    mpirun -np <number of nodes> pvserver
    • Note: mpirun may not be the MPI launch program on your system. Replace it with the proper one if necessary.
  3. On the "Client" system, run the following command:
    pvclient -sh=<server>

At this point the two should be connected, and you are ready to go.
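As a concrete sketch of the two steps above: the process count and host name below are hypothetical, and `mpirun` may need to be replaced by your system's MPI launcher. The commands are printed as a dry run for review rather than executed.

```shell
# Dry run: print the launch commands for review instead of executing them.
NP=8                          # hypothetical number of server processes
SERVER=cluster.example.com    # hypothetical server hostname
echo "on the server: mpirun -np $NP pvserver"
echo "on the client: pvclient -sh=$SERVER"
```

Run the printed commands on the respective systems; the server must be started before the client attempts to connect.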

Reverse Connection Setup

When the remote system will not accept incoming connections (usually due to a firewall), the reverse connection option can be used to force the server to connect to the client.

  1. On the "Client" system, run the following command:
    pvclient -rc
  2. On the "Server" system, make sure the data is available and ParaView is installed and configured for use with MPI.
  3. On the "Server" system, run the following command:
    mpirun -np <number of nodes> pvserver -rc -ch=<client>
    • Note: mpirun may not be the MPI launch program on your system. Replace it with the proper one if necessary.
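The reverse-connection sequence can be sketched the same way (host names and process count are again hypothetical). Note that the client must be started first so that it is listening when the server connects back.

```shell
# Dry run for the reverse-connection variant (client listens, server dials out).
NP=8                              # hypothetical number of server processes
CLIENT=workstation.example.com    # hypothetical client hostname
echo "first, on the client:  pvclient -rc"
echo "then, on the server:   mpirun -np $NP pvserver -rc -ch=$CLIENT"
```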

Usage

ParaView Client Server Options

When operating in distributed server mode, ParaView behaves quite differently than in the usual client mode.

The first noticeable difference is that the Open Data option no longer browses your local machine, but rather shows files on the remote system's Node 0. In this usage mode, all of the data loading and computation will be performed on the remote system, leaving your local memory and CPU virtually untouched. This is ideal for a low-end client that may not have enough local memory for the selected data.

Several new options are also available to configure how ParaView behaves in this mode. At startup you will be presented with the General tab of the application settings, where these options appear.

  • Composite - If this is checked, then whenever the dataset is above the size specified with the slider, the rendering will be performed remotely and frames will be sent back to the client. If the dataset is below this, or the option is unchecked, then the computed geometry will be sent back to the client for local rendering.
    • This option may be disabled if the server was unable to access a display. If this is the case, check the DISPLAY environment variable and make sure you have proper access to the running X session. Alternatively, you can compile ParaView with Mesa support and use the --use-offscreen-rendering flag (specified on the server) to remove this requirement, but performance will be significantly slower.
  • Subsample Rate - If checked, then in addition to the usual LOD operations used while the visualization is being interacted with, the screen will be subsampled by the specified amount. This reduces the rendering time and bandwidth required, resulting in a significantly more interactive display.
    • Note: This option only applies when the server is performing the rendering. If the client is rendering (because the data is too small or the function is disabled) this option does not apply.
  • Squirt Compression - ParaView uses the Squirt compression algorithm for all frames sent between the client and server. The slider sets the number of bits of precision kept in the images: 24-bit (the maximum) keeps full 8-bit red, green, and blue channels, while 10-bit (the minimum) is highly compressed.
  • Enable Ordered Compositing - This check box is normally left unchecked. When enabled, the compositor performs an extra step to ensure the depth buffer is preserved correctly, which is required for correct transparency effects and volume rendering. It adds significant extra processing to the rendering and is not normally needed.
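If compositing is disabled because the server nodes lack a display, a Mesa-enabled build can be launched with the offscreen flag mentioned above. A dry-run sketch, with a hypothetical process count:

```shell
# Offscreen rendering removes the need for an X display on the server
# nodes, at a significant performance cost (software rendering via Mesa).
NP=8    # hypothetical number of server processes
CMD="mpirun -np $NP pvserver --use-offscreen-rendering"
echo "$CMD"    # review, then run on the server
```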

Notes about Data

ParaView was designed for use with parallel data and has many important features for it. If, however, your data is not parallel, ParaView will attempt to partition it for you upon loading. This means that non-parallel data will load very slowly as it is broken into chunks and divided amongst the active nodes. It is divided in one of two ways:

  • Structured Data - The data will simply be divided into roughly equal IJK blocks, with each process taking one. For rectilinear grids this is usually good enough, but for curvilinear grids it can lead to some non-optimal regions.
  • Unstructured Data - The cells will be split into blocks of equal size and handed off to each process. This often leads to poor performance as no attempt to maintain spatial locality is made.

When working in this mode, a few new Filters are available to assist in repartitioning the data.

  • ParaView All to N - This filter will let you reduce the number of processes with data. This can be useful if you are going to load multiple datasets and do not want two datasets to reside on a process.
  • ParaView Balance - This filter re-performs the initial unstructured balancing, simply spreading the cell array evenly amongst the processes.
  • ParaView D3 - By far the best method for redistributing unstructured data, it maintains spatial locality and load balancing. This is also the only filter that will compute ghost cells.
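As a sketch of applying D3 from a script, the filter is also available through ParaView's Python interface; the data path below is hypothetical, and pvbatch ships with ParaView. The script is written to a file and the launch command printed for review rather than executed.

```shell
# Write a small pvbatch script that repartitions a dataset with D3.
cat > repartition.py <<'EOF'
from paraview.simple import *

reader = OpenDataFile('/path/to/data.vtu')  # hypothetical data path
d3 = D3(Input=reader)                       # spatially-aware repartition; also generates ghost cells
Show(d3)
EOF
echo "run on the server: mpirun -np 8 pvbatch repartition.py"
```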

Notes about Rendering

Because of how IceT and ParaView work together, each node in the cluster needs unrestricted access to all other nodes in the cluster. The nodes continually cooperate on both processing and rendering (compositing) and exchange a lot of data, so a high-speed network (such as InfiniBand) is extremely beneficial. The resulting image is brought back to the "Node 0" process, which relays it to the client.

Because of this, network connectivity to the client is only required by the Node 0 process. Most versions of MPI allow specification of a hostfile, and Node 0 will be the first entry in the hostfile. This is useful if your cluster nodes do not have external network connectivity and only the "head" node does. By specifying the head node as Node 0, you can still use a remote client on another system.
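A sketch of using a hostfile so that the externally reachable head node becomes Node 0. The host names are hypothetical, and hostfile syntax varies between MPI implementations; the launch command is printed for review rather than executed.

```shell
# List the head node first so MPI rank 0 (ParaView's Node 0) lands on it.
cat > hosts.txt <<'EOF'
head.example.com
node01
node02
EOF
echo "mpirun -np 3 --hostfile hosts.txt pvserver"   # dry run; review, then execute
```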

Notes about Ghost Cells

Ghost Cells are crucial for a distributed system, and by default ParaView will not generate them. The ParaView D3 filter can be used to generate them and to properly load-balance the data. Without proper ghost cells there will be visible "seams" in the data in many visualization modes.

  • Note: Because the Quadric Clustering filter does not make use of ghost cells, any LOD operations (such as those used while interacting with the display) will show seams; these should vanish when the full-resolution data is displayed.

Back to ParaView