Université Paul Sabatier - Bat. 3R1b4 - 118 route de Narbonne 31062 Toulouse Cedex 09, France



Computational Resources

In addition to the resources available from the national supercomputing center (GENCI) and from the regional computing mesocenter CALMIP, the LCPQ operates a local cluster. The cluster comprises several types of nodes, whose configurations match the needs of the different teams. In total it offers 117 nodes, i.e. 1460 cores / 2232 threads, and more than 7.7 TB of RAM:

Picture of the LCPQ cluster. The different types of nodes are gathered in a single rack.

For parallel computing:

- 68 HP Moonshot nodes: 8 cores and 16 GB of RAM per node.

- 3 AMD Barcelona nodes: 48 cores and 128 GB of RAM per node.

- 14 Intel Sandy Bridge nodes: 16 cores and 64 GB of RAM per node.

- 7 Intel Ivy Bridge nodes: 20 cores and 64 GB of RAM per node.

- 9 Intel Haswell nodes: 24 cores and 128 GB of RAM per node.

- 1 Intel Broadwell node: 28 cores and 128 GB of RAM.

For monolithic computing:

- 4 Intel Westmere nodes: between 8 and 32 cores and between 48 and 256 GB of RAM per node.

- 4 Intel Sandy Bridge nodes: 8 cores, 128 GB of RAM and dedicated local hard disks per node.

- 1 Intel Ivy Bridge node: 20 cores and 128 GB of RAM.

- 2 Intel Haswell nodes: 4 cores, 192 GB of RAM and dedicated SSDs per node.

- 3 Intel Haswell nodes: 8 cores, 512 GB of RAM and dedicated SSDs per node.

- 1 Intel Broadwell node: 16 cores, 512 GB of RAM and dedicated SSDs.
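As a sanity check, the totals quoted above can be tallied from this inventory. Since the Westmere nodes have between 8 and 32 cores each, only a range can be verified for their contribution:

```python
# Tally the LCPQ cluster inventory listed above.
# Each entry is (node_count, cores_per_node); the Westmere nodes are
# handled separately because their per-node core count varies.
parallel = [
    (68, 8),   # HP Moonshot
    (3, 48),   # AMD Barcelona
    (14, 16),  # Intel Sandy Bridge
    (7, 20),   # Intel Ivy Bridge
    (9, 24),   # Intel Haswell
    (1, 28),   # Intel Broadwell
]
monolithic = [
    (4, 8),    # Intel Sandy Bridge
    (1, 20),   # Intel Ivy Bridge
    (2, 4),    # Intel Haswell
    (3, 8),    # Intel Haswell
    (1, 16),   # Intel Broadwell
]
westmere_nodes = 4                      # 8 to 32 cores each

nodes = sum(n for n, _ in parallel + monolithic) + westmere_nodes
known_cores = sum(n * c for n, c in parallel + monolithic)

print(nodes)        # 117, matching the stated total
print(known_cores)  # 1396 cores, excluding the Westmere nodes
# The quoted 1460-core total leaves 1460 - 1396 = 64 cores for the
# four Westmere nodes, consistent with their 8-32 core range.
```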

For storage:

- A 33 TB BeeGFS filesystem available on every node.

- A 15 TB archiving space available to users.

Computing nodes are managed by the SLURM workload manager.
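Jobs are therefore submitted to the cluster as SLURM batch scripts. Below is a minimal sketch of such a script; the partition name, resource sizes, and program name are hypothetical and depend on the local configuration:

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # name shown by squeue
#SBATCH --nodes=1                # request a single node
#SBATCH --ntasks=8               # 8 tasks (fits, e.g., a Moonshot node)
#SBATCH --mem=16G                # memory for the whole job
#SBATCH --time=01:00:00          # wall-time limit (hh:mm:ss)
#SBATCH --partition=parallel     # hypothetical partition name

srun ./my_program                # launch the program under SLURM
```

The script is submitted with `sbatch job.sh` and running jobs can be monitored with `squeue -u $USER`.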