Cuby on clusters

Cuby supports execution of calculations on clusters managed by several common job management utilities (queue systems). A Cuby calculation can be configured (either from the input file or from the commandline) not to run immediately but to be submitted to the queue system. The calculation is then executed on a node of the cluster allocated by the system, and the results are copied back to the directory from which it was submitted (this requires strict adherence to the rule of one calculation – one directory).

Supported software

Currently, the Portable Batch System (PBS), Sun Grid Engine (SGE) and Slurm queue systems (and possibly also their derivatives) are supported.

Cuby provides a unified interface to these systems; the minor differences between them are handled internally.

Job setup

To use this feature, configure it at the root level of the input file. A typical setup is:

queue_submit: yes # Submit to queue rather than running immediately. Also controlled from the commandline by the -q switch
queue_system: pbs # Type of the queue system
queue_scratch_dir: /scratch/$USER # Path to scratch directory - set up according to your environment
queue_jobname: TestJob # Name of the job passed to the queue system
queue_name: q_test # Name of the queue to submit to
queue_walltime_hrs: 10 # Walltime limit (in hours)

Some of these keywords can be configured in the config file so that they do not have to be repeated in every input.
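For example, the environment-specific defaults could be kept in the config file, so that each input only has to set the job-specific keywords. A minimal sketch, using the keyword names shown above (the config file location depends on your installation):

```yaml
# Cluster-wide defaults (example values - adjust to your environment)
queue_system: pbs
queue_scratch_dir: /scratch/$USER
queue_name: q_test
```

The remaining keywords, such as queue_jobname and queue_walltime_hrs, are then set per calculation in the input file.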

Job submission and execution

When Cuby is run with such an input, it submits the job and terminates immediately.
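Assuming an input file named input.yaml and that the executable is called cuby4, submission can be sketched as follows; the -q switch is equivalent to setting queue_submit: yes in the input:

```shell
# Submit the calculation to the queue instead of running it locally
cuby4 -q input.yaml
```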

The queue system then executes the job on a node of the cluster. The contents of the calculation directory are copied to the scratch directory (into a subdirectory unique to the job) and another instance of Cuby is run there.
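Conceptually, the script executed on the node does something like the following. This is only an illustrative sketch of the steps described above, not the actual script Cuby generates; the variable names are hypothetical:

```shell
# Illustrative sketch of the job's life cycle on the node (hypothetical names)
JOBDIR=$QUEUE_SCRATCH_DIR/cuby_job_$JOB_ID   # unique scratch subdirectory
mkdir -p $JOBDIR
cp -r $SUBMIT_DIR/* $JOBDIR                  # copy input files to scratch
cd $JOBDIR
cuby4 input.yaml > LOG                       # run the calculation on the node
cp -r $JOBDIR $SUBMIT_DIR/RESULTS            # copy results back on completion
```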

Obtaining results

After the calculation finishes, all the output is copied back to the location from which the job was submitted, into the RESULTS subdirectory (this can be changed in the input). By default, the output of Cuby is saved to a file named LOG.