6.1.2.1 Specifying parallel execution

The following additional options for the molpro command may be used to specify and control parallel execution.
-n | --tasks tasks/tasks_per_node:smp_threads
tasks specifies the number of Global Array processes to be set up; the default is 1. tasks_per_node sets the number of GA processes to run on each node, where appropriate; the default is installation dependent. In some environments (e.g., IBM running under LoadLeveler, or a PBS batch job), the value given by -n is capped at the maximum allowed by the environment; in such circumstances it can be useful to give a very large number for -n so that the number of processes is controlled by the batch job specification. smp_threads relates to the use of OpenMP shared-memory parallelism and specifies the maximum number of OpenMP threads that will be opened; the default is 1. Any of these three components may be omitted, and appropriate combinations allow GA-only, OpenMP-only, or mixed parallelism.
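As a minimal sketch, the three components of the -n argument decompose as follows; this uses ordinary shell parameter expansion, and the variable names are illustrative only, not part of molpro:

```shell
# Decompose a -n argument of the form tasks/tasks_per_node:smp_threads,
# e.g. as given in: molpro -n 8/4:2 input.inp
spec="8/4:2"
tasks=${spec%%/*}             # text before the first "/"  -> 8
rest=${spec#*/}               # text after the first "/"   -> 4:2
tasks_per_node=${rest%%:*}    # text before the ":"        -> 4
smp_threads=${rest##*:}       # text after the ":"         -> 2
echo "$tasks GA processes, $tasks_per_node per node, $smp_threads OpenMP threads"
```

Omitting a component simply drops the corresponding separator, e.g. -n 8 (GA-only) or -n 8/4 (no OpenMP threads specified).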
-N | --task-specification user1:node1:tasks1,user2:node2:tasks2,...
node1, node2 etc. specify the host names of the nodes on which to run. On most parallel systems, node1 defaults to the local host name, and there is no default for node2 and higher. On Cray T3E and IBM SP systems, and on systems running under the PBS batch system, if -N is not specified, nodes are obtained from the system in the standard way. tasks1, tasks2 etc. may be used to control the number of tasks on each node as a more flexible alternative to -n / tasks_per_node. If omitted, they are each set equal to -n / tasks_per_node. user1, user2 etc. give the username under which processes are to be created. Most of these parameters may be omitted in favour of the usually sensible default values.
-G | --global-memory memory
Some parts of the program make use of Global Arrays for holding and communicating temporary data structures. This option sets the amount of memory to allocate in total across all processors for such activities.
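The options above can be combined on a single command line. The following are hypothetical invocations, assuming molpro is on the PATH; the input file names, node names, usernames, and memory value are placeholders, not values prescribed by the program:

```shell
# 8 GA processes in total, 4 per node, 2 OpenMP threads per process:
molpro -n 8/4:2 h2o.inp

# Explicit node list: 4 tasks on each of node1 and node2, run as user1:
molpro -N user1:node1:4,user1:node2:4 h2o.inp

# 8 GA processes with an explicit total Global Array memory allocation:
molpro -n 8 -G 1000 h2o.inp
```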




P.J. Knowles and H.-J. Werner
molpro@tc.bham.ac.uk
Jan 15, 2002