Chapter 2. Planning ONC3/NFS Service

To plan the ONC3/NFS service for your environment, it is important to understand how ONC3/NFS processes work and how they can be configured. This chapter provides prerequisite information on ONC3/NFS processes and their configuration options. It also explains the conditions under which certain options are recommended.

This chapter explores these variables in the following sections:

File System Export Process

Access to files on an NFS server is provided by means of the exportfs command (see the exportfs(1M) reference page). The exportfs command reads the file /etc/exports for a list of file systems and directories to be exported from the server to NFS clients. Normally, exportfs is executed at system startup by the /etc/init.d/network script. It can also be executed by the superuser from a command line while the server is running. Exported file systems must be local to the server. A file system that is NFS-mounted from another server cannot be exported (see “NFS Mount Restrictions” in Chapter 1 regarding multihop).

This section describes various aspects of the export process in the following subsections:

Customizing exportfs

The exportfs command has several options used to configure its operation. Four of these options are briefly described below. For more complete information on exportfs options, see the exportfs(1M) reference page.

–a 

(all) Export all resources listed in /etc/exports.

–i 

(ignore) Do not use the options set in the /etc/exports file.

–u 

(unexport) Terminate exporting designated resources.

–v 

(verbose) Display any output messages during execution.

Invoking exportfs without options reports the file systems that are currently exported.
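A typical export cycle might look like the following sketch. The directory name is hypothetical; see exportfs(1M) for the authoritative option list.

```shell
# Export all resources listed in /etc/exports, verbosely:
exportfs -av

# Stop exporting one directory (hypothetical path):
exportfs -u /reports

# With no options, report what is currently exported:
exportfs
```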

Operation of /etc/exports and Other Export Files

Exporting starts when exportfs reads the file /etc/exports for a list of file systems and directories to be exported from the server. As it executes, exportfs writes a list of file systems it successfully exported, and information on how they were exported, in the /etc/xtab file. Anytime the /etc/exports file is changed, exportfs must be executed to update the /etc/xtab file. If an entry is not listed in /etc/xtab, it has not been exported, even if it is listed in /etc/exports.

In addition to the /etc/xtab file, the server maintains a record of the exported resources that are currently mounted and the names of clients that have mounted them. The record is maintained in a file called /etc/rmtab. Each time a client mounts a directory, an entry is added to the server's /etc/rmtab file. The entry is removed when the directory is unmounted. The information contained in the /etc/rmtab file can be viewed using the showmount command.


Note: The information in /etc/rmtab may not be current, since clients can unmount file systems without informing the server.
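To inspect the mount record, the showmount command can be used as sketched below (the -a and -e forms are described in the showmount(1M) reference page):

```shell
# List the clients recorded in /etc/rmtab:
showmount

# List clients together with the directories they have mounted:
showmount -a

# List the server's export list:
showmount -e
```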


/etc/exports Options

There are a number of export options for managing the export process. Some commonly used export options are briefly described below. For a complete explanation of options, see the exports(4) reference page.

ro 

(read only) Export this file system with read-only privileges.

rw 

(read, write) Export this file system with read and write privileges. rw is the default.

rw=  

(read mostly) Export this file system read-only to all clients except those listed.


Note: Directories are exported either ro or rw, not both ways. The option specified first is used.


anon= 

(anonymous UID) If a request comes from the user root (UID = 0), use the specified UID as the effective UID instead. By default, the effective UID is nobody (UID = –2). Specifying a UID of –1 disables access by unknown users or by root on a host not specified by the root option. Use the root option to permit accesses by the user root.

root= 

Give superuser privileges to root users of NFS-mounted directories on systems specified in root access list. By default, root is set to none.

access= 

Grant mount privileges to a specified list of clients only. Clients can be listed individually or as an NIS netgroup (see netgroup(4)).

nohide 

(IRIX enhancement) By default, the contents of a child file system are hidden when only the parent file system is mounted. Allow access to this file system without performing a separate mount if its parent file system is mounted.

wsync 

(IRIX enhancement for NFS2 only) Perform all write operations to disk before sending an acknowledgment to the client. Overrides delayed writes. (See “NFS Input/Output Management” in Chapter 1 for details.)

32bitclients 

Causes the server to mask off the high-order 32 bits of directory cookies in NFS3 directory operations. This option may be required when clients run 32-bit operating systems such as IRIX 5.3.

When a file system or directory is exported without specifying options, the default options are rw and anon=nobody.

Sample /etc/exports File

A default version of the /etc/exports file is shipped with NFS software and stored in /etc/exports when NFS is installed. You must add your own entries to the default version as part of the NFS setup procedure (given in “Setting Up the NFS Server” in Chapter 4). This sample /etc/exports illustrates entries and how to structure them with various options:

/dev/root     -ro
/reports      -access=finance,rw=susan,nohide
/usr          -nohide
/usr/demos    -nohide,ro,access=client1:client2:client3
/usr/catman   -nohide

In this sample /etc/exports, the first entry exports the root directory (/) with read-only privileges. The second entry exports a separate file system, /reports, read-only to the netgroup finance, with write permission specified for susan. Because nohide is specified, users who mount the root directory can also access the /reports, /usr, /usr/demos, and /usr/catman file systems (access to /reports remains limited to the finance netgroup).

The fourth entry uses the access list option. It specifies that client1, client2, and client3 are authorized to access /usr/demos with read-only privileges. To avoid possible problems, client1, client2, and client3 should be fully qualified domain names.


Note: If you are using an access list to export to a client with multiple network interfaces, the /etc/exports file must contain all names associated with the client's interfaces. For example, a client named octopus with two interfaces needs two entries in the /etc/exports file, typically octopus and gate-octopus.
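An entry for such a multihomed client might look like this hypothetical fragment:

```shell
# Client "octopus" has two interfaces; both names must appear:
/reports    -access=octopus:gate-octopus
```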

The fifth entry is an example of an open file system. It exports /usr/catman to the entire world with read-write access (the default when neither ro nor rw is specified).

Efficient Exporting

Consider these suggestions for setting up exports on your NFS service:

  • Use the ro option unless clients must write to files. This reduces accidental removal or changes to data.

  • In secure installations, set anon to –1 to disable root on any client (except those specified in the root option) from accessing the designated directory as root.

  • Be cautious with your use of the root option.

  • If you are using NIS, consider using netgroups for long access lists.

  • Use nohide to export related but separate file systems to minimize the number of mounts clients must perform.

  • Use wsync when minimizing risk to data is more important than optimizing performance (NFS2 only).

  • If you are serving NFS3 to Solaris, IRIX 5.3, or other clients with 32-bit operating systems, you may need the 32bitclients option.
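A hypothetical /etc/exports fragment that applies several of these suggestions (the netgroup and paths are examples, not defaults):

```shell
# trusted_hosts is a hypothetical NIS netgroup
/usr/share   -ro,access=trusted_hosts           # read-only unless writes are needed
/work        -rw,anon=-1,access=trusted_hosts   # anon=-1 blocks unknown and root users
/work/proj   -nohide,rw,access=trusted_hosts    # reachable through the /work mount
```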

Security Caveats for Exporting

The following security caveats should be observed in setting up your NFS service:

  • If two directories in the same file system are exported with different access controls (for example, one exported rw and the other ro, or one ro and the other with root=xx), the export options on one of the exported directories can be circumvented with the export options of the other by guessing its file handles.

  • A user can access the root of the file system (and any file in it) with the export options of any of that file system's exported directories. This is done by guessing the file handle of the file system root from the file handle of a file within the exported directory.

Currently, there are no solutions to these problems. As a precaution, SGI recommends that you use the access=client option to ensure security. This way, file system access on the server is restricted to well-known hosts.


Note: NFS performance is degraded if the access option lists too many hosts, because the access check is performed on every NFS RPC transaction.


/etc/fstab Mount Process

An NFS client mounts directories at startup via /etc/fstab entries, or by executing the mount command. The mount command can be executed during the client's boot sequence, from a command line entry, or graphically, using the System Manager tool. The mount command supports the NFS3 protocol if that protocol is also running on the server.

Mounts must reference directories that are exported by a network server and mount points that exist on the client. Directories that serve as mount points may or may not be empty. If using the System Manager for NFS mounting, the mount points must be empty. If the directory is not empty, the original contents are hidden and inaccessible while the NFS resources remain mounted.

This section discusses aspects of the /etc/fstab mount process in these subsections:

Customizing mount and umount Commands

The mount and umount commands have many options for customizing mounting and unmounting that can apply to either XFS or NFS file systems. Several commonly used options are briefly described below in their NFS context (see mount(1M) for full details).

–t type 

(type) Set the type of directories to be mounted or unmounted. type can be nfs2 for the NFS2 protocol, nfs3 for the NFS3 protocol, or nfs or nfs3pref for mounts that attempt the NFS3 protocol but fall back to NFS2 if the attempt fails. The mount command, by default, uses nfs3pref. To mount with NFS3, the server must support NFS3.

–a 

(all) Attempt to mount all directories listed in /etc/fstab, or unmount all directories listed in /etc/mtab. If a filesystem type has been specified with the -t option, all filesystems of that type are mounted or unmounted.

–h hostname 

(host) Attempt to mount all directories listed in /etc/fstab that are remote-mounted from the server hostname, or unmount directories listed in /etc/mtab that are remote-mounted from server hostname.

-b list 

(all but) Attempt to mount or unmount all file systems listed in /etc/fstab except those associated with the directories in list. list contains one or more comma-separated directory names.

–o options 

(options) Use these options, instead of the options in /etc/fstab.
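The following sketch shows how these options might combine in practice; the server name and paths are hypothetical:

```shell
# Mount every NFS entry in /etc/fstab:
mount -a -t nfs

# Unmount all directories mounted from server "redwood":
umount -h redwood

# Mount one directory with NFS2 explicitly, overriding /etc/fstab options:
mount -t nfs2 -o ro,bg redwood:/usr/demos /n/demos
```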

Operation of /etc/fstab and Other Mount Files

Mounting typically occurs when the mount command reads the /etc/fstab file. Each NFS entry in /etc/fstab contains up to six fields. An NFS entry has this format:

file_system directory type options frequency pass

where:

file_system 

is the remote server directory to be mounted.

directory 

is the mount point on the client where the directory is attached.

type 

is the file system type. This can be nfs2 for the NFS2 protocol, nfs3 for the NFS3 protocol, or nfs or nfs3pref for mounts that attempt the NFS3 protocol but fall back to NFS2 if the attempt fails.

options 

is mount options (see “/etc/fstab Options” in this chapter).

frequency 

is always set to zero (0) for NFS and CacheFS entries.

pass 

is always set to zero (0) for NFS and CacheFS entries.

The mount command maintains a list of successfully mounted directories in the file /etc/mtab. When mount successfully completes a task, it automatically updates the /etc/mtab file. It removes the /etc/mtab entry when the directory is unmounted. The contents of the /etc/mtab file can be viewed using the mount command without any options. See the mount(1M) reference page for more details.

/etc/fstab Options

There are several options for configuring mounts. When you use these options, it is important to understand that export options (specified on a server) override mount options, in the sense that the more restrictive options take precedence. NFS /etc/fstab options are briefly described below (see the fstab(4) reference page for complete information):

ro 

Read-only permissions are set for files in this directory.

rw 

Read write permissions are set for files in this directory (default).

hard 

Specifies how the client should handle access attempts if the server fails. If the NFS server fails while a directory is hard-mounted, the client keeps trying to complete the current NFS operation until the server responds (default).

soft 

Alternative to hard mounting. If the NFS server fails while a directory is soft-mounted, the client attempts a limited number of tries to complete the current NFS operation before returning an error.

nointr 

(non-interruptible) Prevents users from interrupting NFS operations. The default setting is off (that is, operations are interruptible).

bg 

(background) Mounting is performed as a background task if the first attempt fails. The default setting is off.

fg 

(foreground) Mounting is performed as a foreground task. The default setting is on.

private 

(IRIX enhancement) Uses local file and record locking instead of a remote lock manager and minimizes delayed write flushing. Diskless clients are the primary users of this option.

proto 

Specifies the protocol that NFS uses. Available options are udp and tcp. The default setting is udp.

rsize 

(read size) Changes the read buffer to the size specified (default is 8K for NFS2, 32K for NFS3).

wsize 

(write size) Changes the write buffer to the size specified (default is 8K for NFS2, 32K for NFS3).

timeo 

(NFS timeout) Sets a new timeout limit (default is 0.11 seconds).

retrans 

(retransmit) Sets the number of times an NFS operation is retried (default is 5).

port 

Specifies an alternative UDP port number for NFS on the server (default port number is 2049).

noauto 

Tells mount –a to ignore this /etc/fstab entry.

grpid 

Allows files created in a file system to have the parent directory's group ID, not the process' group ID.

nosuid 

Turns setuid execution off for nonsuperusers (default is off).

nodev 

Disallows access to character and block special files (default is off).

vers=n 

Use NFS protocol version n (accepted values are 2 and 3). For example, use vers=2 to specify NFS2. By default, NFS3 is tried; if the mount is unsuccessful, NFS2 is then tried.

In addition to these options, /etc/fstab also offers several options dedicated to attribute caching. Using these options, you can direct NFS to cache file attributes, such as size and ownership, to avoid unnecessary network activity. See the fstab(4) reference page for more details.
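A hypothetical /etc/fstab entry combining several of these options might look like the fragment below (server name and paths are examples):

```shell
# Soft mount, background retry, NFS2, more retries before erroring out:
redwood:/archive  /n/archive  nfs  soft,bg,retrans=8,vers=2 0 0
```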

Sample /etc/fstab File

NFS entries in /etc/fstab are designated by the nfs identifier, while XFS (local file systems) entries are designated by xfs. This sample /etc/fstab file includes a typical NFS entry:

/dev/root           /         xfs rw,raw=/dev/rroot 0 0
redwood:/usr/demos  /n/demos  nfs ro,bg 0 0

In this example, the NFS directory /usr/demos on server redwood is mounted at mount point /n/demos on the client system with read-only (ro) permissions (see Figure 1-2). Mounting executes as a background task (bg) if it didn't succeed the first time. By default, if the server fails after the mount has taken place, the client attempts to complete any NFS transactions indefinitely (hard) or until it receives an interrupt.

Efficient Mounting with /etc/fstab

Some recommendations for /etc/fstab mounting are:

  • Use conventional mounting for clients that are inoperable without NFS directories (such as diskless workstations) and for directories that need to be mounted most of the time.

  • The intr option is no longer needed. Specify nointr if NFS operations are not to be interrupted.

  • The bg option should always be specified to expedite the boot process if a server is unavailable when the client is booting. In other words, a client hangs until the server comes back up unless you specify bg.

  • If you use nohide when exporting file systems on the server, the client can mount the top-most directory in the exported file system hierarchy. This gives access to all related file systems while reducing individual mount calls and the complexity of the /etc/fstab file.

  • Use private when the NFS directory on the server is not shared among multiple NFS clients.

  • Do not put NFS mount points in the root (/) directory of a client. Mount points in the root directory can slow the performance of the client and can cause the client to be unusable when the server is unavailable.
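As a hypothetical illustration of these recommendations, an entry such as the following keeps the mount point out of the root directory, uses the default hard mount, retries in the background, and marks an unshared directory private:

```shell
# Hard mount (default), background retry, unshared data:
redwood:/usr/people/ann  /n/ann  nfs  bg,private 0 0
```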

Automatic Mount Process

The automatic mounters (automount and autofs) dynamically mount NFS directories on a client when a user references the directory. They can be set up to execute when a client is booted, or can be executed by the superuser from a command line while the client is running.

To start an automatic mounter at boot time, either the automount or autofs flag must be set to on (see the chkconfig(1M) reference page for details). If the flag is on, the automatic mounter is invoked by the /etc/init.d/network script and started with any automount or autofs options specified in the /etc/config/automount.options or /etc/config/autofs.options file, respectively.


Note: autofs and automount cannot co-exist on the same system. If both are chkconfig'd on, autofs is configured.
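A sketch of checking and setting these flags with chkconfig (see chkconfig(1M)):

```shell
# Check which automatic mounter flags are currently set:
chkconfig | egrep 'automount|autofs'

# Enable autofs and disable automount so only autofs starts at boot:
chkconfig automount off
chkconfig autofs on
```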

This section discusses aspects of the automounter commands in these subsections:

Customizing automount Commands

The automount command offers many options that allow you to configure its operation (for a complete description, see the automount(1M) reference page). Some commonly used options are:

–D 

Assign a value to an environment variable.

–f 

Read the specified local master file before the NIS master map.

–m 

Do not read the NIS master map.

–M 

Use the specified directory as the automount mount point.

–n 

Disable dynamic mounts.

–T 

Trace and display each NFS call.

–tl n 

Maintain the mount for a specified duration of client inactivity (default duration is 5 minutes).

–tm n 

Wait a specified interval between mount attempts (default interval is 30 seconds).

–tp n 

Hold information about server availability in a cache for a specified time (default interval is 5 seconds).

–tw n 

Wait a specified interval between attempts to unmount file systems that have exceeded cache time (default interval is 60 seconds).

–v 

Display any output messages during execution.

autofsd and autofs Command Options

The autofs command installs AutoFS mount points and associates a map with each mount point (for a complete description, see the autofs(1M) reference page).

-t duration 

Specify a duration, in seconds, that the file system should remain mounted when not in use (default interval is 5 minutes).

-v  

Display any output message during AutoFS mounts and unmounts.

autofsd and autofs share the configuration options file /etc/config/autofs.options.

The autofsd command answers filesystem mount requests and uses the local files or name service maps to locate the filesystems. For a complete description, see the autofsd(1M) reference page. Options for autofsd are:

-m n 

Make autofsd multithreaded, where n is the number of available threads (maximum 16).

-p priority 

Set process priority.

-tp duration 

Specify how long, in seconds, the return of a query of server availability will remain cached. The default is 5 seconds.

-v 

Log status messages to the console.

-D name=value 

Assign a value to the indicated AutoFS map substitution variable.

-T 

Trace each RPC call by expanding and displaying call output.
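A hypothetical autofsd invocation using these options:

```shell
# Run with 8 service threads and log status messages to the console:
autofsd -m 8 -v
```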

Operation of Automatic Mounter Files and Maps

Just as the conventional mount process reads /etc/fstab and writes to /etc/mtab, automount and autofs can be set up to read input files for mounting information. automount and autofs also record their mounts in the /etc/mtab file and remove /etc/mtab entries when they unmount directories.

Details of the automatic mounters automount and autofs are explained in these subsections:

automount Files and Maps

By default, when automount executes at boot time, it reads the /etc/config/automount.options file for initial operating parameters. The /etc/config/automount.options file can contain the complete information needed by the automounter, or it can direct automount to a set of files that contain customized automounting instructions. /etc/config/automount.options cannot have comments in it.

The default version of /etc/config/automount.options is:

-v /hosts -hosts -nosuid,nodev

This /etc/config/automount.options directs automount to execute with the verbose (–v) option. It also specifies that automount should use /hosts as its daemon mount point. When a user accesses a file or directory under /hosts, the –hosts argument directs automount to use the pathname component that follows /hosts as the name of the NFS server. All accessible file systems exported by the server are mounted to the default mount point /tmp_mnt/hosts/hostname with the nosuid and nodev options.

For example, if the system redwood has the following entry in /etc/exports:

/
/usr 	-ro,nohide

and a client system is using the default /etc/config/automount.options file, as above, then executing the following command on the client lists the contents of the directory /usr on redwood:

ls -l /hosts/redwood/usr/*

automount Mount Points

Mount points for automount serve the same function as mount points in conventional NFS mounting. They are the access point in the client's file system where a remote NFS directory is attached. There are two major differences between automount mount points and conventional NFS mount points.

With automount, mount points are automatically created and removed as needed by the automount program. When the automount program is started, it reads configuration information from /etc/config/automount.options, additional automount maps, or both, and creates all mount points needed to support the specified configuration.

By default, automount mounts everything in the directory /tmp_mnt and creates a link between the mounted directory in /tmp_mnt and the accessed directory. For example, in the default configuration, mounts take place under /tmp_mnt/hosts/hostname. The automounter creates a link from the access point /hosts/hostname to the actual mount point under /tmp_mnt/hosts/hostname. The command ls /hosts/redwood/tmp displays the contents of server redwood's /tmp directory. You can change the default root mount point with the automount –M option.

autofs Files and Maps

By default, when autofs executes at boot time, it reads the /etc/config/autofs.options and /etc/auto_master files for initial operating parameters. /etc/config/autofs.options cannot have comments in it.

The default version of /etc/config/autofs.options is:

-v 

This /etc/config/autofs.options directs autofs to execute with the verbose (–v) option.

The default /etc/auto_master contains:

/hosts -hosts -nosuid,nodev

This file specifies that autofs should use /hosts as its daemon mount point. When a user accesses a file or directory under /hosts, the –hosts argument directs autofs to use the pathname component that follows /hosts as the name of the NFS server. All accessible file systems exported by the server are mounted to the default mount point /hosts/hostname with the nosuid and nodev options.

For example, if the system redwood has the following entries in /etc/exports:

/
/usr -ro,nohide

and a client system is using the default /etc/auto_master file, as above, then executing the following command on the client lists the contents of the directory /usr on redwood:

ls -l /hosts/redwood/usr/*

autofs Mount Points

Mount points for autofs serve the same function as mount points in conventional NFS mounting. They are the access point in the client's file system where a remote NFS directory is attached. Noticeably different from automount, autofs performs mounts in place; it does not link /hosts with /tmp_mnt.

With AutoFS, mount points are automatically created and removed as needed by the autofs program. When the autofs program is started, it reads configuration information from /etc/config/autofs.options, additional autofs maps, or both, and creates all mount points needed to support the specified configuration.

The autofs command installs AutoFS mount points and associates an AutoFS map with each mount point. The AutoFS file system monitors attempts to access directories within it and notifies the autofsd daemon. The daemon uses the map to locate a file system, and then mounts it at the point of reference within the AutoFS file system. Maps can be assigned to an AutoFS mount using an entry in the /etc/auto_master map or they can be combined in another map file referenced in an /etc/auto_master entry.

About Automatic Mounter Map Types

The automount and autofs features use various maps, discussed in the following subsections:

Master Maps for the Automatic Mounter

The master map is the first file read by the automount or autofs program. There is only one master map on a client. It specifies the types of supported maps, the name of each map to be used, and options that apply to the entire map (if any). By convention, the master map is called /etc/auto.master with automount (but the name can be changed) and /etc/auto_master with autofs (this name cannot be changed).

With automount, for complex automatic mounter configurations, a master map can be specified in the /etc/config/automount.options file. For example, /etc/config/automount.options might contain:

-v -f /etc/auto.master

The automount master map can be a local file or an NIS database file. For autofs, it must be a local file named /etc/auto_master. The master map contains three fields: mount point, map name, and map options. A crosshatch (#) at the beginning of a line indicates a comment line. A sample of master map entries is:

#Mount Point     Map Name              Map Options
/hosts           -hosts                -nosuid,nodev
/net             /etc/auto_irix.misc   -nosuid
/home            /etc/auto_home        -timeo=20
/-               /etc/auto_direct      -ro
/net2            /etc/indirect2        -ro,vers=2

The mount point field serves two purposes. It determines whether a map is a direct or indirect map, and it provides mount point information. A slash followed by a dash (/–) in the mount point field designates a direct map. It signals the automatic mounter to use the mount points specified in the direct map for mounting this map. For example, to mount the fourth entry in the sample above, the automatic mounter gets a mount point specification from the direct map /etc/auto_direct. In the fifth entry, an entire indirect map, including all its entries, is declared to use the NFS version 2 protocol. The default for automount is NFS version 2; the default for autofs is NFS version 3, and if NFS3 is not available on the server, the mount falls back to NFS version 2.

A directory name in the mount point field designates an indirect map. It specifies the mount point the automatic mounter should use when mounting this map. For example, the second entry in the sample above tells the automatic mounter to mount the indirect map /etc/auto_irix.misc at mount point /net. A mount point for direct and indirect maps can be several directory levels deep.

The map name field in a master map specifies the full name and location of the map. Notice that –hosts is considered an indirect map whose mount point is /hosts. The –hosts map mounts all the exported file systems from a server. If frequent access to just a single file system is required for a server with many exports that do not use the -nohide option, it is more efficient to access that file system with a map entry that mounts just that file system.

The map options field can be used to specify any options that should apply to the entire map. Options set in a master map can be overridden by options set for a particular entry within a map.

Direct Maps for the Automatic Mounter

Direct maps allow mounted directories to be distributed throughout a client's local file system. They contain the information that the automatic mounter needs to determine when, what, and how to mount a remote NFS directory. You can have as many direct maps as needed. A direct map is typically called /etc/auto.mapname (for automount) or /etc/auto_mapname (for autofs), where mapname is some logical name that reflects the map's contents.

All direct maps contain three fields: directory, options, and location. An example of an /etc/auto_direct direct map is:

#Directory          Options   Location
/usr/local/tools    -nodev    ivy:/usr/cooltools
/usr/frame                    redwood:/usr/frame
/usr/games          -nosuid   peach:/usr/games 

In a direct map, users access the NFS directory with the pathname that is identical to the directory field value in the direct map. For example, a user gives the command cd /usr/local/tools to mount /usr/cooltools from server ivy as specified in the direct map /etc/auto_direct. Notice that the directory field in a direct map can include several subdirectory levels.

The options field can be used to set options for an entry in the direct map. Options set within a map for an individual entry override the general option set for the entire map in the master map. The location field contains the NFS server's name and the remote directory to mount.

Indirect Maps for the Automatic Mounter

Indirect maps allow remotely mounted directories to be housed under a specified shared top-level location on the client's file system. They contain the specific information the automatic mounter program needs to determine when, what, and how to NFS mount a remote directory. You can have as many indirect maps as needed.

An indirect map is typically called /etc/auto.mapname (for automount) and /etc/auto_mapname (for autofs), where mapname is some logical name that reflects the map's contents. Indirect maps can be grouped according to logical characteristics. For example, in the master map above, the indirect map /etc/auto_home, indicated by the mount point /home, can include mounting information for all home directories on various servers.

Indirect maps contain three fields: directory, options, and location. Entries might look something like this for the /etc/auto_home indirect map:

#Directory    Options       Location
willow                      willow:/usr/people
rudy          -nosuid       pine:/usr/people/rudy
bruiser       -ro,nointr    ivy:/usr/people/bruiser
jinx          -ro,vers=2    jinx:/usr

With an indirect map, user access to an NFS directory is always relative to the mount point specified in the master map entry for the indirect map. That is, the directory is the concatenation of the mount-point field in the master map and the directory field in the indirect map. For example, given our sample /etc/auto_master and indirect map /etc/auto_home, a user gives the command cd /home/willow to access the NFS directory willow:/usr/people.

If a user changes the current working directory to the /home directory and tries to list its contents, the directory appears empty unless a subdirectory of /home, such as /home/willow, was previously accessed, thereby mounting /home subdirectories. Access to the mount point of an indirect map only shows information for mounts currently in effect; it does not trigger mounts, as with direct maps. Users must access a subdirectory to trigger a mount.
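A hypothetical session with the sample /etc/auto_home map illustrates this behavior:

```shell
ls /home            # appears empty; listing does not trigger any mounts
cd /home/willow     # accessing a subdirectory triggers the mount of willow:/usr/people
ls /home            # now shows willow, since its mount is in effect
```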

The directory field in an indirect map is limited to one subdirectory level. Additional subdirectory levels for indirect maps must be indicated in the mount point field in the master map, or on the command line for automount.

The options field can be used to set options for an entry in the indirect map. For example, the fourth entry attempts to mount using the NFS2 protocol; all other entries are unaffected. Options set within a map for an entry override the general options set for the entire map in the master map. The location field contains the NFS server's name and the remote directory to mount.

Effective Automatic Mounting

Some recommendations for automatic mounting are:

  • Use the automatic mounter when the overhead of a mount operation is not important, when a file system is used more often than the automatic mounter time limit (5 minutes by default), or when file systems are used infrequently. Although directories that are used infrequently do not consume local or remote resources, they can slow down applications that report on file systems, such as df.

  • The default configuration in /etc/config/automount.options or /etc/auto_master is usually sufficient because it allows access to all systems. It performs the minimal number of mounts necessary when it is used in conjunction with the nohide export option on the server.

  • Use indirect maps whenever possible. Direct maps create more /etc/mtab entries, which means more mounts are performed, so system overhead is increased. With indirect maps, mounts occur when a process references a subdirectory of the daemon or map mount point. With direct maps, when a process reads a directory containing one or more direct mount points, all of the file systems are mounted at the mount points. This can result in a flurry of unintended mounting activity when direct mount points are used in well-traveled directories.

  • Try not to mount direct map mount points into routinely accessed directories. This can cause unexpected mount activity and slow down system performance.

  • Use a direct rather than an indirect map when directories cannot be grouped, but must be distributed throughout the local file system.

  • Plan and test maps on a small group of clients before using them for a larger group. Some changes to the automount environment require that systems be rebooted (see Chapter 5, “Maintaining ONC3/NFS” for details on changing the map environment).

Planning a CacheFS File System

CacheFS is a file system layered above other standard IRIX file systems and is installed as part of the ONC3/NFS software package. CacheFS automatically stores consistent local copies of an NFS file system in a cache on a local disk, shifting part of the typical server burden to the local machine. The original, or back, file system acts as the authoritative source of data, and the front file system acts as a specially managed cache. Either the XFS or EFS file system type can be used for the front file system.


Note: The default directory for the cache on the front file system is /cache.

The back file system can be any of the types nfs2, nfs, nfs3, iso9660, cdfs, hfs, kfs, or dos.

CacheFS is most useful in a “read-mostly” file system, such as /usr/local or /usr/share/man. Once data has been cached, file read and read-only directory operations are as fast as those on a local disk (XFS file systems). Write performance, however, is closer to an NFS write operation.

Planning and setting up a CacheFS configuration is similar to planning and setting up an NFS client-server configuration. For detailed information, refer to the cachefs(4) reference page. To administer CacheFS, see “Cached File System Administration”. Instructions for setting up the CacheFS file system are given on page 68. This section discusses recent additions and options to CacheFS and contains the following subsections:

Customizing CacheFS

CacheFS-specific options have been added to the conventional mount command and /etc/fstab file and are described in this section. For the complete description of these commands and files, refer to “/etc/fstab Mount Process”. The cfsadmin and cfsstat commands are new with CacheFS (see cfsadmin(1M) and cfsstat(1M)).

mount and umount Options for CacheFS

When mounting and unmounting a CacheFS file system, the following option is used for CacheFS. For descriptions of the other options, see “Customizing mount and umount Commands”.

–t type 

(type) Set the type of file systems to be mounted or unmounted. type is cachefs for all CacheFS mounting.
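As an illustration, a CacheFS file system might be mounted from the command line as follows (the server name neteng and the paths are hypothetical; the backfstype and cachedir options are described under “/etc/fstab Additions for CacheFS” below):

```
# Cache the server's manual pages on a local disk (superuser only;
# server name and paths are examples)
mount -t cachefs -o backfstype=nfs,cachedir=/cache \
    neteng:/usr/share/man /usr/share/man
```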

/etc/fstab Additions for CacheFS

For an example of a fundamental /etc/fstab file and an explanation of the /etc/fstab mount process, see “/etc/fstab Mount Process”. The /etc/fstab file also has several added options that are used with CacheFS for mounting and unmounting.

Any mount options not recognized by CacheFS are passed to the back file system mount if one is performed.

These added options for CacheFS are:

backfstype=file_system_type 


Specifies the back file system type (nfs2, nfs, nfs3, iso9660, cdfs, hfs, kfs, and dos). If this option is not specified, the back file system type is determined from the file system name. File system names of the form hostname:path are assumed to be of the type nfs.

backpath=path 

Specifies the path where the back file system is already mounted. If this argument is not specified, CacheFS determines a mount point for the back file system.

cachedir=directory 


Specifies the name of the cache directory.

cacheid=ID 

Allows you to assign a string to identify each separate cached file system. If you do not specify the cacheid value, CacheFS generates one. You need the cache ID when you delete a cached file system with cfsadmin –d. A cache ID you choose is easier to remember than one automatically generated. The cfsadmin command with the –l option includes the cacheid value in its display.

write-around | non-shared 


Determines the write modes for CacheFS. In the write-around mode, as writes are made to the back file system, the affected file is purged from the cache.

In the non-shared mode, all writes are made to both the front and back file systems, and the file remains in the cache.

Either mode can be used in an environment where more than one client may be writing to the same file, in spite of what the names imply. File locking is required to ensure consistency in this case. In both modes, file locking is performed through the back file system. The default mode is non-shared.

noconst 

Disables consistency checking between the front and back file systems. Use noconst when the back file system and cache file system are read-only. Otherwise, always allow consistency checking. The default is to enable consistency checking.

If none of the files in the back file system are to be modified, you can specify the noconst option when mounting the cached file system. Changes to the back file system may not be reflected in the cached file system.

private 

Causes file and record locking to be performed locally. Additionally, files remain cached when file and record locking are performed. By default, files are not cached when file and record locking are performed and all file and record locking is handled by the back file system.

local-access 

Causes the front file system to interpret the access mode bits used for access checking (see chmod(1)). By default, the back file system interprets the access mode bits used for access checking to ensure data integrity.

purge 

Remove any cached information for the specified file system.

suid | nosuid 

Allow set-uid execution (suid, the default) or disallow it (nosuid).

bg 

Causes mount to run in the background if the back file system mount times out.

disconnect 

Causes the cache file system to operate in disconnected mode when the back file system fails to respond. This allows read access to files already cached to be performed from the front file system even when the back file system does not respond.
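Combining several of these options, a hypothetical /etc/fstab entry for a CacheFS mount might look like the following (the server name neteng and the paths are placeholders):

```
neteng:/usr/share/man /usr/share/man cachefs backfstype=nfs,cachedir=/cache,write-around,ro 0 0
```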

Consistency Checking with mount Command in CacheFS

To ensure that the cached directories and files are kept up to date, CacheFS periodically checks consistency of files stored in the cache. To check consistency, CacheFS compares the current modification time to the previous modification time; if the modification times are different, all data and attributes for the directory or file are purged from the cache and new data and attributes are retrieved from the back file system.

When an operation on a directory or file is requested, CacheFS checks to see if it is time to verify consistency. If so, CacheFS obtains the modification time from the back file system and performs the comparison. If the write mode is write-around, CacheFS checks on every operation.

Table 2-1 provides more information on mount consistency checking parameters.

Table 2-1. Consistency Checking Arguments for the -o mount Option

acdirmin=n

Specifies that cached attributes are held for at least n seconds after a directory update. After n seconds, if the directory modification time on the back file system has changed, all information about the directory is purged and new data is retrieved from the back file system. The default for n is 30 seconds.

acdirmax=n

Specifies that cached attributes are held for no more than n seconds after a directory update. After n seconds, the directory is purged from the cache and new data is retrieved from the back file system. The default for n is 30 seconds.

acregmin=n

Specifies that cached attributes are held for at least n seconds after file modification. After n seconds, if the file modification time on the back file system has changed, all information about the file is purged and new data is retrieved from the back file system. The default for n is 30 seconds.

acregmax=n

Specifies that cached attributes are held for no more than n seconds after a file modification. After n seconds, all file information is purged from the cache. The default for n is 30 seconds.

actimeo=n

Sets acregmin, acregmax, acdirmin, and acdirmax to n.


Cached File System Administration

The cfsadmin command is used to administer the cached file system on the local system. It can be used to:

  • Create a cached file system.

  • List the contents and statistics about the cache.

  • Delete the cached file system.

  • Modify the resource parameters when the file system is unmounted.

The cfsadmin command works on a cache directory, which is the directory where the cache is actually stored. A pathname in the front file system identifies the cache directory. See cfsadmin(1M) for more details.


Note: If the default resource parameters are acceptable (see “Cache Resource Parameters in CacheFS”), it is not necessary to run cfsadmin to create the cache. The cache is created with default parameters when the first mount is performed.

The syntax for the cfsadmin command is:

cfsadmin -c [ -o cacheFS-parameters ] cache_directory 
cfsadmin -d [ cache_ID | all ] cache_directory
cfsadmin -l cache_directory
cfsadmin -u [ -o cacheFS-parameters ] cache_directory
cfsadmin -C cache_directory 

The options and their parameters are:

-c 

Create a cache under the directory specified by cache_directory. This directory must not exist prior to cache creation.

-d 

Delete the cached file system identified by cache_ID and release its cache resources, or delete all cached file systems in the cache if you specify all.

-l 

List the file systems that are stored in the specified cache directory. A listing provides the cache_ID and statistics about resource utilization and cache resource parameters.

-u 

Update the resource parameters of the specified cache directory. The parameter values (specified with the -o option) can only be increased; to decrease the values, you must remove the cache, then re-create it. All file systems in the cache must be unmounted when you use this option. Changes take effect the next time you mount the file system in the cache directory.

Using the -u option without the -o option resets all parameters to their default values.

cache_ID 

Specifies an identifying name for the file system that is cached. If you do not specify a cache_ID, CacheFS assigns a unique identifier.

-o options 

Specifies the CacheFS resource parameters. Multiple resource parameters must be separated by commas. The following section describes the cache resource parameters.

-C 

Convert an existing cache to the new format. This consists of converting the cache_IDs from their old form to the new form.
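As a usage sketch (the cache directory name and parameter value are examples), a cache limited to 80 percent of the front file system's blocks could be created and later listed with:

```
cfsadmin -c -o maxblocks=80 /cache    # create the cache with a block limit
cfsadmin -l /cache                    # list cached file systems and statistics
```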

Cache Resource Parameters in CacheFS

The default values for the cache parameters are for a cache that uses the entire front file system for caching. To limit the cache to only a portion of the front file system, you should change the parameter values.

Any parameter may be changed at any time. The change does not take effect, however, until all file systems for the affected cache have been unmounted and remounted.

Table 2-2 shows the parameters for space and file allocation.

Table 2-2. CacheFS Parameters

Parameters for Space Allocation    Parameters for File Allocation
maxblocks                          maxfiles
hiblocks                           hifiles
lowblocks                          lowfiles

Table 2-3 shows the default values for the cache parameters. These defaults devote the full resources of the front file system to caching.

Table 2-3. Default Values of Cache Parameters

Cache Parameter    Default Value
maxblocks          90%
hiblocks           85%
lowblocks          75%
maxfiles           90%
hifiles            85%
lowfiles           75%

The maxblocks parameter sets the maximum number of blocks, expressed as a percentage, that CacheFS is allowed to claim within the front file system. The maxblocks percentage is relative to the total number of blocks on the front file system, not what has been allocated by CacheFS. The maxfiles parameter sets the maximum percentage of available inodes (number of files) CacheFS can claim.


Note: The maxblocks and maxfiles parameters do not guarantee the resources will be available for CacheFS—they set maximums. If you allow the front file system to be used for purposes other than CacheFS, there may be fewer blocks or files available to CacheFS than you intend.

The hiblocks parameter sets the high water mark for disk usage, and lowblocks sets the low water mark, both expressed as a percentage of the total number of blocks available to CacheFS. The hifiles and lowfiles parameters set the corresponding high and low water marks for the number of files. When the high water mark for blocks or files is reached, CacheFS begins removing cached files to bring usage back within the established percentages.
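As a worked example with hypothetical numbers, the following sketch computes the block limits that the default parameters would impose on a front file system of 100000 total blocks:

```shell
# Hypothetical front file system of 100000 total blocks with the
# default parameters (maxblocks=90, hiblocks=85, lowblocks=75).
total_blocks=100000
max=$((total_blocks * 90 / 100))   # maxblocks: CacheFS may claim up to 90000
hi=$((total_blocks * 85 / 100))    # hiblocks: high water mark at 85000
low=$((total_blocks * 75 / 100))   # lowblocks: low water mark at 75000
echo "max=$max hi=$hi low=$low"
```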

The maxblocks, maxfiles, hiblocks, hifiles, lowblocks and lowfiles values apply to the entire front file system, not file systems you have cached under the front file system.


Note: Using the whole front file system solely for caching eliminates the need to change the maxblocks, maxfiles, hiblocks, hifiles or the corresponding low parameters.

CacheFS allows the cache to grow to the maximum size specified, provided you have not reduced available resources by using part of the front file system for other storage purposes.

cfsstat Command

The cfsstat command displays and reinitializes statistics about CacheFS. It must be used as the superuser. For more information, refer to the cfsstat(1M) reference page.

CacheFS Tunable Parameters

The CacheFS tunable parameters are used to fine-tune the performance of CacheFS file opens and reads. The CacheFS tunable parameters are contained in the file /var/sysgen/mtune/cachefs. They can be modified with the systune command (see systune(1M)).
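For example, the readahead depth might be examined and changed as follows (the value is illustrative; setting a parameter requires superuser privilege, and systune -i enters interactive mode; see systune(1M) for exact usage):

```
systune cachefs_readahead    # display the current value
systune -i                   # then, at the prompt: cachefs_readahead 2
```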

The tunable parameters for CacheFS, along with their descriptions, are listed in Table 2-4.

Table 2-4. CacheFS Tunable Parameters

cachefs_readahead

Controls the number of blocks to read ahead of the current block being read. The readaheads are read asynchronously. The size of the block is the preferred I/O size of the front file system.

cachefs_max_threads

Controls the maximum number of asynchronous I/O daemons allowed to run for each CacheFS file system.

fileheader_cache_size

Controls the number of file headers containing CacheFS metadata that are cached. The effectiveness of file header caching can be monitored with cfsstat -b.

replacement_timeout

Controls the time in seconds between successive cache snapshots made by the replacement daemon.

The parameters' default, minimum, and maximum values are listed in Table 2-5.

Table 2-5. CacheFS Tunable Parameter Values

Parameter                Default Value    Minimum Value    Maximum Value
cachefs_readahead        1                0                10
cachefs_max_threads      5                1                10
fileheader_cache_size    512              0                8192
replacement_timeout      600              30               86400