Chapter 3. Miser Batch Processing System

Miser is a resource management facility that provides deterministic batch scheduling of applications with known time and space requirements without requiring static partitioning of system resources. When Miser is given a job, it searches through the time/space pool that it manages to find an allocation that best fits the job's resource requirements.

Miser has an extensive administrative interface that allows most parameters to be modified without requiring a restart. Miser runs as a separate trusted process. All communication to Miser, either from the kernel or the user, is done through a series of Miser commands. Miser accepts requests for process scheduling, process state changes, and batch system configuration control, and returns values and status information for those requests.


Read Me First

The sections in this chapter contain information about installing and configuring Miser software on your system. Read them in the order they are listed here:

  1. For a general description of Miser, see “Miser Overview”.

  2. To install the Miser package, see “Enabling or Disabling Miser”.

  3. For information on how to configure the Miser queues, see “Miser Configuration”.

  4. For information on submitting Miser jobs, see “Submitting Miser Jobs”.

  5. For information on Miser man pages, see “Miser Man Pages”.

Miser Overview

Miser manages a set of time/space pools. The time component of the pool defines how far into the future Miser can schedule jobs. The space component of the pool is the set of resources against which a job can be scheduled. This component can vary with time.

A system pool represents the set of resources (number of CPUs and physical memory) that is available to Miser. A set of user-defined pools represents resources against which jobs can be scheduled. The resources owned by the user pools cannot exceed the total resources available to Miser. Resources managed by Miser are available to non-Miser applications when they are unused by a scheduled job.

Associated with each pool is a definition of the pool resources, a set of jobs allocating resources from the pool, and a policy that controls the scheduling of jobs. The collection of the resource pool, jobs scheduled, and policy is called a queue.

The queues allow for fine-grained resource management of the batch system. The resources allotted to a queue can vary with time. For example, a queue can be configured to manage 5 CPUs during the day and 20 during the night. The use of multiple queues allows the resources to be partitioned among different users of a batch system. For example, on a 24 CPU system, it is possible to define two queues: one that has 16 CPUs and another that has 6 CPUs (assuming that 2 CPUs have been kept outside the control of Miser). It is possible to restrict access to queues to particular users or groups of users on a system to enforce this resource partition.

The policy defines the way a block of time/space is searched to satisfy the resource request made by the application. Miser has two policies: “default” and “repack.” Default is the first fit policy: once a job is scheduled, its start and end time remain constant, and if an earlier job finishes ahead of schedule, the start and end times of later scheduled jobs do not change. Repack also uses first fit initially, but it additionally maintains the order of the scheduled jobs and attempts to reschedule them to pull them ahead in time in the event of a job's early termination.

Users submit jobs to the queue using the miser_submit command, which specifies the queue to which the job should be attached and a resource request to be made against the queue. Each Miser job is an IRIX process group. The resource request is a tuple of time and space. The time is the total CPU wall-clock time if run on a single CPU. The space is the logical number of CPUs and the physical memory required. The request is passed to Miser, and Miser schedules the job against the queue's resources using the policy attached to the queue. Miser returns a start and end time for the job to the user.

When a job's start time has not yet arrived, the job is in batch state. A job in batch state has lower priority than any non-weightless process. A job in batch state may execute if the system has idle resources; it is said to run opportunistically. When the specified execution time arrives, the job state is changed to batch critical, and the job then has priority over any non-realtime process. The time spent executing in batch state does not count against the time that has been requested and scheduled. While the process is in batch critical state, it is guaranteed the physical memory and CPUs that it requested. The process is terminated if it exceeds its time allotment or uses more physical memory than it had requested.

A job with the static flag specified that was scheduled with the default policy will only run when the segment is scheduled to run. It will not run earlier even if idle resources are available to the job. If a job is scheduled with the repack policy, it may run earlier.

About Logical Number of CPUs

When a job is scheduled by Miser, it requests that a number of CPUs and some amount of memory be reserved for use by the job. When the time period during which these resources were reserved for the job arrives, Miser reserves specific CPUs and some amount of logical swap space for the job.

There are a number of issues that affect CPU allocation for a job. When a job becomes batch critical, Miser will try to find a dense cluster of nodes. If it fails to find such a cluster, it will assign the threads of the job to any free CPUs that are available. These CPUs may be located at distant parts of the system.

The Effect of Reservation of CPUs on Interactive Processes

The way in which Miser handles the reservation of CPUs is one of its strengths. Miser controls and reserves CPUs based on a logical number, not on physical CPUs. This provides Miser with flexibility in how it controls CPU resources.

Interactive and batch processes that run opportunistically are allowed to use all CPUs in a system that have not been reserved for Miser jobs. If new jobs are submitted, Miser attempts to schedule the jobs based on the amount of logical resources still available to Miser. As a result, CPUs could become reserved by Miser, and the interactive processes would no longer be able to execute on the newly reserved CPUs. However, if a resource is not being used by Miser, the resource is free to be used by any other application. Miser will claim the resource when it needs it.

About Miser Memory Management

While Miser only reserves CPUs when they are needed, memory must be reserved before it is needed.

When Miser is started, it is told the number of CPUs and the amount of memory that it will be able to reserve for use by jobs. The number of CPUs is a logical number. When a Miser job becomes batch critical, it is assigned a set of CPUs. Until a Miser job requires a CPU (in other words, until a process or thread is ready to run), the CPU is available to the rest of the system. When a Miser job's thread begins executing, any non-Miser thread running on that CPU is preempted and resumes on a CPU where no Miser thread is currently running.

Memory resources are quite different from CPU resources. The memory that Miser reserves for jobs is called logical swap space. Logical swap space is defined as the sum of physical memory (less the space occupied by the kernel) and all the swap devices.

When Miser begins, it needs to reserve memory for its jobs. However, it does not need to reserve physical memory; it simply needs to ensure that there is enough physical memory plus swap to move non-Miser jobs' memory to. Miser does this by reserving logical swap equal to the memory that it requires.

Only jobs that are submitted to Miser are able to use allocations of the logical swap space that was reserved for Miser. However, any physical memory that is not being used by Miser is free to be used by any other application. Miser will claim the physical memory when it needs it.

How Miser Management Affects Users

If a user submits a job to Miser, that job will have an allocation of resources reserved for the requested time period. The job will not have to compete for system resources. As a result, the job should complete more quickly and have more stable run-times than it would if run as an interactive job. However, there is a cost. Because Miser is space sharing the resources, the job must wait until its scheduled reservation period before the requested resources will be reserved. Prior to that time, the non-static job may run opportunistically, competing with the interactive workload, but at a lower priority than the interactive workload.

If a user is working interactively, the user will not have full access to all of the system resources. The user's interactive processes will have access to all of the unreserved CPUs on the system, but the processes will only have a limited amount of logical swap space available for memory allocation. The amount of logical swap space available for non-Miser jobs is the amount not reserved by Miser when it was started.

Miser Configuration

The central configurable aspect of Miser is the set of queues. The Miser queues define the resources allocated to Miser.

The configuration of Miser consists of the following:

  • Set up the Miser system queue definition file. Every Miser system must have a Miser system queue definition file. The vector definition in this file specifies the maximum resources available to the vector definition of any other queue.

  • Define the queues by setting up the Miser user queue definition file.

  • Enumerate all the queues that will be part of the Miser system by setting up the Miser configuration file.

  • Set up the Miser commandline options file to define the maximum CPUs and memory that can be managed by Miser.

Setting Up the Miser System Queue Definition File

The Miser system queue definition file (/etc/miser_system.conf) defines the resources managed by the system pool and the maximum duration of the pool. The resources and duration of every other queue must be less than or equal to those of the system queue, so the system queue sets the upper limit on the resources that a job can request. A Miser system queue must be configured.

Valid tokens are as follows:

POLICY name 

The policy is always “none” as the system queue has no policy.

QUANTUM time 

The size of the quantum, in seconds. The quantum is the granularity with which the time/space pool is divided; all segment START and END values are expressed as counts of quanta. It is specified in both the system queue definition file and in the user queue definition file and must be the same in both files.

NSEG number 

The number of resource segments.

SEGMENT 

Defines the beginning of a new segment of the vector definition. Each new segment must begin with the SEGMENT token. Each segment must contain at a minimum the number of CPUs, memory, and wall-clock time.

START number 

The number of quanta from 0 that the segment begins at. The origin for time is 00:00 Thursday, January 1st 1970 local time.

Miser maps the start and end times to the current time by repeating the queue forward until the current day. For example, a 24-hour queue always begins at midnight of the current day.

END number 

The number of quanta from 0 that the segment ends at.

NCPUS number 

The number of CPUs.

MEMORY amount 

The amount of memory, specified by an integer followed by an optional unit of k for kilobytes, m for megabytes, or g for gigabytes. If no unit is specified, the default is bytes.

The following system queue definition file defines a queue that has a quantum of 20 seconds and 1 element in the vector definition. The start and end times of each multiple are specified in quanta, not in seconds.

The segment defines a resource multiple beginning at 00:00 and ending at 00:20, with 1 CPU and 5 megabytes of memory.

POLICY none # System queue has no policy
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 1
 
SEGMENT
START 0
END 60 # Number of quanta (20min*60sec) / 20
NCPUS 1
MEMORY 5m
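END values like the one above are simply durations divided by the quantum. The arithmetic can be sketched as follows (quanta is an illustrative helper, not part of Miser):

```python
def quanta(seconds, quantum=20):
    """Convert a wall-clock duration in seconds to a quantum count."""
    if seconds % quantum != 0:
        raise ValueError("duration must be a whole number of quanta")
    return seconds // quantum

# END value for the 20-minute system segment above
print(quanta(20 * 60))        # 60
# END value for a 48-hour queue (see the configuration examples)
print(quanta(48 * 60 * 60))   # 8640
```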

Setting Up the Miser User Queue Definition File

The Miser user queue definition file (/etc/miser_default.conf) defines the CPUs, the physical memory, the policy name, and the resource pool of the queue. The file consists of a header that specifies the policy of the queue, the number of resource segments, and the quantum used by the queue.

Access to a queue is controlled by the file permissions of the queue definition file. Read permission allows a user to examine the contents of the queue using the miser_qinfo command. Execute permission allows a user to schedule a job on a queue using the miser_submit command. Write permission allows a user to modify the resources of a queue using the miser_move and miser_reset commands.

The default user queue definition file can be used as a template for other user queue definition files. Each Miser queue has a separate queue definition file, which is named in the overall Miser configuration file (/etc/miser.conf ).

Users schedule against the resources managed by the user queues, not against the system queue. If the duration specified by a user queue is less than that specified by the system queue, the user queue will be repeated again and again (for example, the system queue specifies one week and the user queue specifies 24 hours). If the user queue does not divide evenly into the system queue (for example, the system queue is 6 quanta and the user queue is 5), the user queue will not repeat evenly.

Valid tokens are as follows:

POLICY name 

The name of the policy that will be used to schedule applications submitted to the queue. The two valid policies are “default” and “repack.” Default is the first fit policy; it specifies that once a job is scheduled, its start and end time remain constant. Repack maintains the order of the scheduled jobs and attempts to reschedule the jobs to pull them ahead in time in the event of a job's early termination. Note that both policies initially use the first fit method when scheduling a job.

QUANTUM time 

The size of the quantum, in seconds. The quantum is the granularity with which the time/space pool is divided; all segment START and END values are expressed as counts of quanta. It is specified in both the system queue definition file and in the user queue definition file and must be the same in both files.

NSEG number 

The number of resource segments.

SEGMENT 

Defines the beginning of a new segment of the vector definition. Each new segment must begin with the SEGMENT token. Each segment must contain at a minimum the number of CPUs, memory, and wall-clock time.

START number 

The number of quanta from 0 that the segment begins at. The origin for time is 00:00 Thursday, January 1st 1970 local time.

Miser maps the start and end times to the current time by repeating the queue forward until the current day. For example, a 24-hour queue always begins at midnight of the current day.

END number 

The number of quanta from 0 that the segment ends at.

NCPUS number 

The number of CPUs.

MEMORY amount 

The amount of memory, specified by an integer followed by an optional unit of k for kilobytes, m for megabytes, or g for gigabytes. If no unit is specified, the default is bytes.
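The MEMORY syntax can be sketched as a small parser. This assumes binary units (k = 1024 bytes, and so on); parse_memory is an illustrative helper, not part of Miser:

```python
def parse_memory(spec):
    """Parse a MEMORY value: an integer plus an optional k/m/g unit.

    Without a unit the value is taken as bytes, as described above.
    Binary units (1k = 1024 bytes) are an assumption for illustration."""
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    spec = spec.strip().lower()
    if spec and spec[-1] in units:
        return int(spec[:-1]) * units[spec[-1]]
    return int(spec)

print(parse_memory("5m"))    # 5242880 bytes
print(parse_memory("100"))   # 100 bytes
```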

The following user queue definition file defines a queue using the policy named “default”. It has a quantum of 20 seconds and 3 elements in the vector definition. The start and end times of each multiple are specified in quanta, not in seconds.

  • The first segment defines a resource multiple beginning at 00:00:00 and ending at 00:50:00, with 50 CPUs and 100 MB of memory.

  • The second segment defines a resource multiple beginning at 00:51:40 and ending at 01:01:40, with 50 CPUs and 100 MB of memory.

  • The third segment defines a resource multiple beginning at 01:02:00 and ending at 01:03:20, also with 50 CPUs and 100 MB of memory.

    POLICY default
    QUANTUM 20
    NSEG 3
     
    SEGMENT
    START 0
    END 150 # (50min*60sec) / 20
    NCPUS 50
    MEMORY 100m
     
    SEGMENT
    START 155 # ((51min*60sec)+40sec) / 20
    END 185 # ((1h*60min*60sec)+(1min*60sec)+40sec) / 20
    NCPUS 50
    MEMORY 100m
     
    SEGMENT
    START 186 # ((1h*60min*60sec)+(2min*60sec)) / 20
    END 190 # ((1h*60min*60sec)+(3min*60sec)+20sec) / 20
    NCPUS 50
    MEMORY 100m
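To check segment boundaries against wall-clock times, a quantum count can be converted back to an hh:mm:ss offset. A sketch using the 20-second quantum (quanta_to_hms is an illustrative helper, not a Miser command):

```python
def quanta_to_hms(n, quantum=20):
    """Convert a quantum count to an hh:mm:ss wall-clock offset."""
    s = n * quantum
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}"

# Boundaries of the three segments above
print(quanta_to_hms(150))  # 00:50:00
print(quanta_to_hms(155))  # 00:51:40
print(quanta_to_hms(190))  # 01:03:20
```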

Setting Up the Miser Configuration File

The Miser configuration file (/etc/miser.conf) lists the names of all Miser queues and the path name of the queue definition file for each queue. This file enumerates all the queue names and their queue definition files.

Every Miser configuration file must include as one of the queues the Miser system queue that defines the resources of the system pool. The Miser system queue is identified by the queue name “system.”

Valid tokens are as follows:

QUEUE queue_name queue_definition_file_path
 

The queue_name identifies the queue when using any interface to Miser. The queue name must be between 1 and 8 characters long. The queue name “system” is used to designate the Miser system queue.

The following is a sample Miser configuration file:

# Miser config file
QUEUE system /hosts/foobar/usr/local/data/system.conf
QUEUE user /hosts/foobar/usr/local/data/usr.conf

Setting Up the Miser Commandline Options File

The Miser commandline options file (/etc/config/miser.options) defines the maximum CPUs and memory that can be managed by Miser.

The -c flag defines the maximum number of CPUs that Miser can use. This value is the maximum number of CPUs that any resource segment of the system queue can reserve.

The -m flag defines the maximum memory that Miser can use. This value is the maximum memory that any resource segment of the system queue can reserve. The memory reserved for Miser comes from physical memory, so the amount of memory given to Miser should be less than the total physical memory, leaving enough memory for kernel use. Also, the system should have at least as much swap space configured as the memory given to Miser, so that when Miser memory is fully in use there is enough swap space to move non-Miser processes out of the way.
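The memory and swap constraints above can be expressed as a quick sanity check. The figures used here are illustrative, not recommendations:

```python
def check_miser_memory(miser_mem, phys_mem, kernel_mem, swap):
    """Check the -m constraints described above (all values in MB)."""
    ok_phys = miser_mem < phys_mem - kernel_mem   # leave memory for kernel use
    ok_swap = swap >= miser_mem                   # room to move non-Miser work out
    return ok_phys and ok_swap

# 160 MB for Miser on a machine with 256 MB RAM, 32 MB kernel use, 256 MB swap
print(check_miser_memory(160, 256, 32, 256))   # True
```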

The following example sets the -c and -m values in the commandline options file to 1 and 5 megabytes, respectively:

-f /etc/miser.conf -v -d -c 1 -m 5m 

The -v flag specifies verbose mode, which results in additional output.

The -d flag specifies debug mode. When this mode is specified, the application does not relinquish control of the tty (that is, it does not become a daemon). This mode is useful in conjunction with the -v flag to figure out why Miser may not be starting up correctly.


Note: The -C flag can be used to release any Miser reserved resources after the Miser daemon is killed and before it is restarted. For additional information, see the miser(1) man page.


Configuration Recommendations

The configuration of Miser is site dependent. The following guidelines may be helpful:

  • The system must be balanced for interactive/batch use. One suggestion is to keep at least one or two processors outside the control of Miser at all times. These two processors will act as the interactive portion of the system when all of the Miser managed CPUs are reserved. For an interactive load, you typically want the load average for the CPUs to be less than 2.0. Keep this in mind as you adjust for the optimal number of free CPUs.

  • The amount of free logical swap should be balanced against the number of free CPUs. When you have a system with N CPUs, you should also have an appropriate amount of memory to be used by processes running on those N CPUs. Also, many system administrators like to back up this memory with swap space. If you think of the free CPUs as a separate system and provide memory and swap space accordingly, interactive work should perform well. Remember that the free memory not reserved by Miser is logical swap space (the combination of physical memory and the swap devices).

  • Be careful when using virtual swap. When no Miser application is running, time-share processes can consume all of physical memory. When Miser runs, it begins to reclaim physical memory and swaps out time-share processes. If the system is using virtual swap, there may be no physical swap to move the process to, and at that point the time-share process may be terminated.

Miser Configuration Examples

In the examples used in this section, the system has 12 CPUs and 160 MB available to user programs.

Example 1:

In this example, the system is dedicated to batch scheduling with one queue, 24 hours a day.

The first step is to define a system queue. You must decide how long you want the system queue to be. The length of the system queue defines the maximum duration of any job submitted to the system. For this system, you have determined that the maximum duration for any one job can be 48 hours, so you define the system vector to have a duration of 48 hours.

# The system queue /usr/local/miser/system.conf
POLICY none # System queue has no policy
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 1
 
SEGMENT
NCPUS 12
MEMORY 160m
START 0
END 8640 # Number of quanta (48h*60 min*60 sec) / 20

The next step is to define a user queue.

# The user queue /usr/local/miser/physics.conf
POLICY default # First fit, once scheduled maintains start/end time
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 1
 
SEGMENT
NCPUS 12
MEMORY 160m
START 0
END 8640 # Number of quanta (48h*60 min*60 sec) / 20

The last step is to define a Miser configuration file:

# Miser config file
QUEUE system /usr/local/miser/system.conf
QUEUE physics /usr/local/miser/physics.conf

Example 2:

In the following example, the system is dedicated to batch scheduling, 24 hours a day, and split between two user groups: chemistry and physics. The system must be divided between them with a ratio of 75% for physics and 25% for chemistry.

The system queue is identical to the one given in Example 1.

The physics user queue appears as follows:

# The physics queue /usr/local/miser/physics.conf
POLICY default # First fit, once scheduled maintains start/end time
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 1
 
SEGMENT
NCPUS 8
MEMORY 120m
START 0
END 8640 # Number of quanta (48h*60min*60sec) / 20

Next, you define the chemistry queue:

# The chemistry queue /usr/local/miser/chemistry.conf
POLICY default # First fit, once scheduled maintains start/end time
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 1
 
SEGMENT
NCPUS 4
MEMORY 40m
START 0
END 8640 # Number of quanta (48h*60min*60sec) / 20

To restrict access to each queue, you create the user group physics and the user group chemistry. You then set the permissions on the physics queue definition file to execute only for group physics and similarly for the chemistry queue.

Having defined the physics and chemistry queues, you can now define the Miser configuration file. (The chemistry queue is named chem because queue names are limited to 8 characters.)

# Miser configuration file
QUEUE system /usr/local/miser/system.conf
QUEUE physics /usr/local/miser/physics.conf
QUEUE chem /usr/local/miser/chemistry.conf

Example 3:

In this example, the system is dedicated to time-sharing in the morning and to batch use in the evening. The evening is 8:00 P.M. to 4:00 A.M., and the morning is 4:00 A.M. to 8:00 P.M.

First you define the system queue.

# The system queue /hosts/foobar/usr/local/data/system.conf
POLICY none # System queue has no policy
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 2
 
SEGMENT
NCPUS 12
MEMORY 160m
START 0
END 720 # (4h*60min*60sec) / 20
 
SEGMENT
NCPUS 12
MEMORY 160m
START 3600 # (8pm is 20 hours from 0, so 20h*60min*60sec) / 20
END 4320

Next, you define the batch queue:

# User queue
POLICY repack # Repacks jobs (FIFO) if a job finishes early
QUANTUM 20 # Default quantum set to 20 seconds
NSEG 2
 
SEGMENT
NCPUS 12
MEMORY 160m
START 0
END 720 # (4h*60min*60sec) / 20
 
SEGMENT
NCPUS 12
MEMORY 160m
START 3600 # (8pm is 20 hours from 0, so 20h*60min*60sec) / 20
END 4320
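The START and END values in this example map times of day onto quanta from midnight. A sketch of that conversion (tod_to_quantum is an illustrative helper, not a Miser command):

```python
def tod_to_quantum(hour, minute=0, quantum=20):
    """Map a time of day to its quantum offset from midnight."""
    return (hour * 3600 + minute * 60) // quantum

print(tod_to_quantum(4))    # 720  (4:00 A.M., end of the first segment)
print(tod_to_quantum(20))   # 3600 (8:00 P.M., start of the second segment)
print(tod_to_quantum(24))   # 4320 (midnight, end of the second segment)
```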

The last step is to define a Miser configuration file:

# Miser config file
QUEUE system /usr/local/miser/system.conf
QUEUE user /usr/local/miser/usr.conf

Enabling or Disabling Miser

The following steps are required to set up the Miser batch processing system:

  1. Use the inst(1M) utility to install the eoe.sw.miser subsystem from your IRIX distribution media.

  2. Modify the Miser configuration files as appropriate for your site. For information on the Miser configuration files, see “Miser Configuration Examples”.

    After the Miser configuration files are modified appropriately, Miser can be selected for boot-time startup with the chkconfig(1) command and the system can be rebooted, or Miser can be started directly by root with the command /etc/init.d/miser start. When starting Miser manually without rebooting, the chkconfig command must be issued first or Miser will not start up.

  3. To enable Miser manually, use the following command sequence:

    chkconfig miser on
    /etc/init.d/miser start

  4. Miser can be stopped at any time by root. To disable Miser, use the following command sequence:

    /etc/init.d/miser stop
    /etc/init.d/miser cleanup

Running Miser jobs are not stopped, and their committed resources cannot be reclaimed until the jobs terminate. If you are going to restart Miser after stopping it, you do not need to run the miser cleanup command.


Note: The Miser -C flag can be used to release any Miser reserved resources after the Miser daemon is killed and before it is restarted.


Submitting Miser Jobs

The command to submit a job so that it is managed by Miser is as follows:

miser_submit -q queue -o c=cpus,m=memory,t=time[,static] command
miser_submit -q queue -f file command

-q queue 

Specifies the name of the queue against which to schedule the application.

-o c=cpus,m=memory,t=time[,static] 

Specifies a block of resources. The CPUs must be an integer up to the maximum number of CPUs available to the queue being scheduled against. The memory consists of an integer followed by a unit of k for kilobyte, m for megabyte, or g for gigabyte. If no unit is specified, the default is bytes. Time can be specified either as an integer followed by a unit specifier of h for hours, m for minutes, or s for seconds, or by a string in the format hh:mm:ss.

A job with the static flag specified that was scheduled with the default policy will only run when the segment is scheduled to run. It will not run earlier even if idle resources are available to the job. If a job is scheduled with the repack policy, it may run earlier.
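The t= time formats described above can be sketched as a small parser; parse_time is a hypothetical helper for illustration, not part of miser_submit:

```python
import re

def parse_time(spec):
    """Parse the t= formats described above into seconds.

    Accepts an integer with an h/m/s unit (e.g. "2h") or hh:mm:ss."""
    m = re.fullmatch(r"(\d+):(\d{1,2}):(\d{1,2})", spec)
    if m:
        h, mi, s = map(int, m.groups())
        return h * 3600 + mi * 60 + s
    m = re.fullmatch(r"(\d+)([hms])", spec)
    if m:
        return int(m.group(1)) * {"h": 3600, "m": 60, "s": 1}[m.group(2)]
    raise ValueError(f"bad time specification: {spec!r}")

print(parse_time("2h"))        # 7200
print(parse_time("01:30:00"))  # 5400
```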

-f file 

File that specifies a list of resource segments. This flag allows greater control over the scheduling parameters of a job.

command 

Specifies a script or program name.

For additional information, see the miser_submit(1) and miser_submit(4) man pages.

Querying Miser About Job Schedule/Description

The command to query Miser about the schedule/description of a submitted job is as follows:

miser_jinfo -j bid [-d]

The bid is the ID of the Miser job and is the process group ID of the job. The -d flag prints the job description including job owner and command.

Note that when the system is being used heavily, Miser swapping can take some time. Therefore, the Miser job may not begin processing immediately after it is submitted.

For additional information, see the miser_jinfo(1) man page.

Querying Miser About Queues

The command to query Miser for information on Miser queues, queue resource status, and a list of jobs scheduled against a queue is as follows:

miser_qinfo -Q|-q queue [-j]|-a

The -Q flag returns a list of currently configured Miser queue names. The -q flag returns the free resources associated with the specified queue name. The -j flag returns the list of jobs currently scheduled against the queue. The -a flag returns a list of all scheduled jobs, ordered by job ID, in all configured Miser queues and also produces a brief description of the job.

For additional information, see the miser_qinfo(1) man page.

Moving a Block of Resources

The command to move a block of resources from one queue to another is as follows:

miser_move -s srcq -d destq -f file 
miser_move -s srcq -d destq -o s=start,e=end,c=CPUs,m=memory

This command removes a tuple of space from the source queue's vector and adds it to the destination queue's vector, beginning at the start time and ending at the end time. The resources added or removed do not change the vector definition, and are, therefore, temporary. The command returns a table that lists the start and end times of each resource transfer and the amount of resources transferred.

The -s and -d flags specify the names of any valid Miser queues. The -f flag contains a resource block specification. The -o flag specifies a block of resources to be moved. The start and end times are relative to the current time. The CPUs are an integer up to the maximum free CPUs associated with a queue. The memory is an integer with an identifier of k for kilobyte, m for megabyte, or g for gigabyte.


Note: The resource transfer is temporary. If Miser is killed or crashes, the resources transferred are lost, and Miser will be unable to restart.

For additional information, see the miser_move(1) and miser_move(4) man pages.

Resetting Miser

The command to reset Miser with a new configuration file is as follows:

miser_reset -f file

This command forces a running version of Miser to use a new configuration file (specified by -f file). The new configuration will succeed only if all scheduled jobs can be successfully scheduled against the new configuration.

For additional information, see the miser_reset(1) man page.

Terminating a Miser Job

The miser_kill command is used to terminate a job submitted to Miser. This command both terminates the process and contacts the Miser daemon to free any resources currently committed to the submitted process. For additional information, see the miser_kill(1) man page.

Miser and Batch Management Systems

This section discusses the differences between a Miser job and a batch job from a batch management system such as the Network Queuing Environment (NQE) or Load Share Facility (LSF).

Miser and batch management systems such as NQE each lack certain key characteristics. For Miser, these characteristics are features to protect and manage the Miser session. For batch management systems, the ability to guarantee resources is lacking. However, these two systems used together provide a much more capable solution, provided the batch management system supports the Miser scheduler.

If your site does not need the job management and protection provided by a batch management system, then Miser alone may be an adequate batch system. However, most production-quality environments require the support and protection provided by batch systems such as NQE or LSF. These sites should run a batch management system in cooperation with the Miser scheduler.

Miser Man Pages

The man command provides online help on all resource management commands. To view a man page online, type man commandname.

User-Level Man Pages

The following user-level man pages are provided with Miser software:

miser(1)

Miser resource manager; starts the miser daemon.

miser_jinfo(1)

Queries Miser about the schedule and description of a submitted job.

miser_kill(1)

Kills a Miser job.

miser_move(1)

Moves a block of resources from one queue to another.

miser_qinfo(1)

Queries information on miser queues, queue resource status, and list of jobs scheduled against a queue.

miser_reset(1)

Resets miser with a new configuration file.

miser_submit(1)

Submits a job to a miser queue.

File Format Man Pages

The following file format man pages are provided with Miser software:

miser(4)

Miser configuration files

miser_move(4)

Miser resource transfer list

miser_submit(4)

Miser resource schedule list

Miscellaneous Man Pages

The following miscellaneous man pages are provided with Miser software:

miser(5)

Miser Resource Manager overview