Chapter 5. Maintaining ONC3/NFS

This chapter provides information about maintaining ONC3/NFS. It explains how to change the default number of NFS daemons and modify automatic mounter maps. It also gives suggestions for using alternative mounting techniques and avoiding mount point conflicts, and it describes how to modify and delete CacheFS file systems.

This chapter contains these sections:

“Changing the Number of NFS Server Daemons”
“Temporary NFS Mounting”
“Modifying the Automatic Mounter Maps”
“Mount Point Conflicts”
“Modifying CacheFS File System Parameters”
“Displaying Information About Cached File Systems”
“Deleting a CacheFS File System”

Changing the Number of NFS Server Daemons

Systems set up for NFS normally run several server daemons, called nfsd. These daemons accept RPC calls for NFS and for the Network Lock Manager from clients.

Prior to the IRIX 6.5.22 release, the number of nfsd daemons had to be controlled manually through the /etc/config/nfsd.options file, and quite often the default number of NFS server daemons was inadequate for the amount of NFS traffic on the server. This resulted in degraded NFS performance on clients. Starting with the IRIX 6.5.22 release, nfsd daemons are spawned dynamically to match the load on the server. Refer to the section "DYNAMIC NFS DAEMONS" in the nfsd(1M) man page for more information about how to control nfsd behavior on your server.
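
For example, on a server running a release prior to 6.5.22, the daemon count was raised by editing the options file and then restarting the NFS daemons. The following is only a hypothetical sketch; the exact contents of /etc/config/nfsd.options, the default value, and the appropriate restart procedure depend on your configuration and release, so consult nfsd(1M) before editing the file:

# cat /etc/config/nfsd.options
4
# echo 16 > /etc/config/nfsd.options

The new value takes effect the next time the NFS daemons are started, for example at the next reboot.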

You can also monitor the dynamic nfsd behavior with the nfsstat command; the -d option, added in the IRIX 6.5.22 release, displays statistics about individual NFS daemons, including average call service times and utilization. Note that the utilization statistics include both CPU time and time spent waiting for disk I/O to complete, and are therefore a more accurate guide to NFS daemon usage than the CPU utilization reported by the ps(1) command. Sample output from the nfsstat -d command follows:

% nfsstat -d
NFS daemons:
 pid      queue   calls     Tsvc(us) %busy
 41613    pool    4247      60065    0    
 42025    pool    4543      38474    0    
 42035    pool    2774      60859    0    
 42044    pool    1758      98317    0    
 42045    pool    2756      57639    0    
 42048    pool    1666      75941    0    
 42052    pool    2855      39747    0    
 43674    pool    2810      50598    1    
 43686    pool    961       99239    1    
 43690    pool    796       106054   1    
 43692    pool    2153      70689    1    
 43693    pool    2108      59758    1    
 43694    pool    683       83709    1    
 43699    pool    684       117302   1    
 43720    pool    13        152984   0 

The output columns are described in the following list.

queue 

Queue number, or "pool" if the thread is in the idle pool. The queues are NFS request queues; there is one queue per ccNUMA node.

calls 

Cumulative number of calls processed by an nfsd process.

Tsvc 

Average time, in microseconds, taken to service one request.

%busy 

Ratio, expressed as a percentage, between the time spent processing requests and the total lifetime of an nfsd process.

The %busy value goes to 0 over time because, on a non-dedicated NFS server, the time spent waiting for work is much larger than the time spent doing work. Eventually, threads that remain in the idle pool as watchdogs accumulate so much idle time that further activity makes little difference to their %busy value.

In continuous mode (nfsstat -C), the numbers are rate-converted.
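
For example, the per-daemon display can be combined with continuous mode as follows (whether an interval argument is accepted, and how the options may be grouped, depends on your nfsstat version; see nfsstat(1M)):

% nfsstat -d -C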

For more information, see the nfsstat(1M) and nfsd(1M) man pages.

Temporary NFS Mounting

In cases where an NFS client requires directories not listed in its /etc/fstab file, you can use manual mounting to temporarily make the NFS resource available. With temporary mounting, you need to supply all the necessary information to the mount program through the command line. As with any mount, a temporarily mounted directory requires that a mount point be created before mounting can occur.

For example, to mount /usr/demos from the server redwood to a local mount point /n/demos with read-only, hard, interrupt, and background options, as superuser, enter these commands:

mkdir -p /n/demos
mount -o ro,hard,intr,bg redwood:/usr/demos /n/demos

A temporarily mounted directory remains in effect until the system is rebooted or until the superuser manually unmounts it. Use this method for one-time mounts.
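
When you no longer need the temporary mount, unmount it manually as superuser:

umount /n/demos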

Modifying the Automatic Mounter Maps

You can modify the automatic mounter maps at any time. AutoFS accepts map modifications without requiring a restart of the daemon. Simply make the change and run the command /usr/etc/autofs -v. This command reconciles any differences between /etc/mtab and the current map information.
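
For example, after changing an entry in one of your maps (the map /etc/auto.home is used here only as an illustration), run:

/usr/etc/autofs -v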

With automount, some of your modifications take effect the next time the automatic mounter accesses the map, and others take effect when the system is rebooted. Whether or not rebooting is required depends on the type of map you modify and the kind of modification you introduce.

Rebooting is generally the most effective way to restart automount. (See the automount(1M) man page.)

Modifying the Master Map

automount consults the master map only at startup time. A modification to the master map (/etc/auto.master) takes effect only after the system has been rebooted or automount is restarted (see “Modifying Direct Maps”). With AutoFS, the change takes effect after the autofs command is run.

Modifying Indirect Maps

You can modify, delete, or add to indirect maps (the files listed in /etc/auto.master or /etc/auto_master) at any time. Any change takes effect the next time the map is used, which is the next time a mount is requested.
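
For reference, each entry in an indirect map associates a simple key with a server location. A hypothetical entry in an indirect map such as /etc/auto.home might look like this (the key jmy and the server bitbucket are illustrative only):

jmy        bitbucket:/export/home/jmy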

Modifying Direct Maps

Each entry in a direct map is an automount or autofs mount point. The daemon mounts itself at these mount points at startup and, with AutoFS, when the autofs command is run. With AutoFS, changes to the attributes of a map entry are noticed immediately, because AutoFS stays in sync with the /etc/mtab file; to add or remove a key from a map, however, you must rerun the autofs command.

With automount, adding or deleting an entry in a direct map takes effect only after you have gracefully killed and restarted the automount daemon or rebooted the system. However, except for the name of the mount point, you can modify direct map entries while automount is running. The modifications take effect when the entry is next mounted, because automount consults the direct maps whenever a mount must be done.
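
For reference, a direct map entry uses a full path name as its key. A hypothetical entry in a direct map such as /etc/auto.direct might look like this (the map file name and the server redwood are illustrative only):

/usr/src        redwood:/usr/src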

For instance, with automount, suppose you modify your direct map so that the directory /usr/src is mounted from a different server. If /usr/src is not currently mounted, the new entry takes effect immediately when you try to access it. If it is currently mounted, you can wait until automatic unmounting takes place and then access it. If this is not satisfactory, unmount the directory with the umount command, notify automount that the mount table has changed with the command /etc/killall -HUP automount, and then access the directory. The mount is then done from the new server. However, if you want to delete the entry, you must gracefully kill and restart the automount daemon. The automount process must be killed with the SIGTERM signal:

/etc/killall -TERM automount

You can then manually restart automount or reboot the system.


Note: If gracefully killing and manually restarting automount does not work, rebooting the system should always work.


Mount Point Conflicts

You can cause a mount conflict by mounting one directory on top of another. For example, say you have a local partition mounted on /home, and you want automount to mount other home directories. If the automount maps specify /home as a mount point, automount hides the local home partition when it mounts over that directory.

The solution is to mount the server's /home partition somewhere else, such as /export/home. You need an entry in /etc/fstab like this:

/dev/home    /export/home    xfs rw,raw=/dev/rhome 0 0

This example assumes that the master file contains a line similar to this:

/home        /etc/auto.home

It also assumes an entry in /etc/auto.home like this:

terra        terra:/export/home

where terra is the name of the system.
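
With these entries in place, accessing a path under /home triggers the automatic mounter, while the local partition itself remains reachable at /export/home. For example, listing the directory below causes terra:/export/home to be mounted at /home/terra:

% ls /home/terra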

Modifying CacheFS File System Parameters


Note: Before changing parameters for a cache, you must unmount all file systems in the cache directory by using the umount command.

The following command changes the value of one or more parameters:

cfsadmin -u -o parameter_list cache_directory


Note: You can only increase the size of a cache, either by number of blocks or number of inodes. If you want to make a cache smaller, you must remove it and re-create it with new values.

The following commands unmount /local/cache3 and change the maxfiles parameter to 85%:

# umount /local/cache3
# cfsadmin -u -o maxfiles=85 /local/cache3

Displaying Information About Cached File Systems

The following command returns information about all file systems cached under the specified cache directory.

cfsadmin -l cache_directory 


Note: The block size reported by cfsadmin is in 8 KB blocks.

The following command shows information about the cache directory named /usr/cache/nabokov:

# cfsadmin -l /usr/cache/nabokov

  cfsadmin: list cache FS information
     Version        2   4  50
   maxblocks     90% (1745743 blocks)
    hiblocks     85% (1648757 blocks)
   lowblocks     75% (1454786 blocks)
   maxfiles      90% (188570 files)
    hifiles      85% (178094 files)
   lowfiles      75% (157142 files)
  neteng:_old-root-6.2_usr_local_lib:_usr_local_lib
  neteng:_usr_annex:_usr_annex
  bitbucket:_b_jmy:_usr_people_jmy_work
  neteng:_old-root-6.2_usr_local_bin:_usr_local_bin

This example shows file systems from two servers, neteng and bitbucket, cached under a single cache directory.

Deleting a CacheFS File System

The following command deletes a file system in a cache:

cfsadmin -d cache_id cache_directory


Note: Before deleting a cached file system, you must unmount all the cached file systems for that cache directory.

The cache ID is part of the information returned by cfsadmin -l. For example, in the cfsadmin -l output shown in the previous section, neteng:_usr_annex:_usr_annex is the cache ID of one of the cached file systems.

The following commands unmount a cached file system and delete it from the cache:

# umount /usr/work 
# cfsadmin -d _dev_dsk_c0t1d0s7 /local/cache1

You can delete all file systems in a particular cache by using all as an argument to the -d option. The following command deletes all file systems cached under /local/cache1:

# cfsadmin -d all /local/cache1

The all argument to -d also deletes the specified cache directory.