
Computational Resources

In addition to the resources available from the national supercomputing facilities (GENCI) and from the regional computing mesocenter CALMIP, the LCPQ operates a local cluster. The cluster is built from several types of nodes, whose configurations match the needs of the different teams. Here is a brief description of the cluster, totaling 120 nodes, i.e. 1640 cores / 2672 threads, and more than 10 TB of RAM:

Picture of the LCPQ cluster. Different types of nodes are gathered within a unique rack.

For parallel computing:


- 64 HP Moonshot nodes: 8 cores and 16 GB of RAM per node.
- 2 AMD Barcelona nodes: 48 cores and 128 GB of RAM per node.
- 14 Intel Sandy Bridge nodes: 16 cores and 64 GB of RAM per node.
- 7 Intel Ivy Bridge nodes: 20 cores and 64 GB of RAM per node.
- 9 Intel Haswell nodes: 24 cores and 128 GB of RAM per node.
- 4 Intel Broadwell nodes: 28 cores and 128 GB of RAM per node.
- 5 Intel Skylake nodes: 32 cores and 192 GB of RAM per node.
- 1 AMD EPYC node: 32 cores and 256 GB of RAM.
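
For reference, jobs on these nodes are submitted through the SLURM workload manager mentioned below. Here is a minimal, hypothetical sketch of a batch script for a multi-node parallel job; the resource values and the executable name are placeholders, and the actual limits depend on the local SLURM configuration. Since sbatch reads the #SBATCH directives from comment lines at the top of the script, the script body can be written in Python (a plain shell script would work the same way):

```python
#!/usr/bin/env python3
# Hypothetical SLURM batch script for a parallel job.
# sbatch parses the #SBATCH comment lines before the first statement.
#SBATCH --job-name=parallel_example
#SBATCH --nodes=2                # e.g. two of the 32-core Skylake nodes
#SBATCH --ntasks-per-node=32     # one task per core
#SBATCH --time=01:00:00

import subprocess

# srun launches one instance of the (placeholder) executable per task.
subprocess.run(["srun", "./my_parallel_code"], check=True)
```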

For monolithic (single-node) computing:


- 1 Intel Westmere node: 12 cores and 192 GB of RAM.
- 4 Intel Sandy Bridge nodes: 8 cores and 128 GB of RAM per node, with dedicated local disks.
- 1 Intel Ivy Bridge node: 20 cores and 128 GB of RAM.
- 2 Intel Haswell nodes: 4 cores and 192 GB of RAM per node, with dedicated SSDs.
- 3 Intel Haswell nodes: 8 cores and 512 GB of RAM per node, with dedicated SSDs.
- 1 Intel Broadwell node: 8 cores and 512 GB of RAM, with dedicated SSDs.
- 1 Intel Broadwell node: 16 cores and 512 GB of RAM, with dedicated SSDs.
- 1 Intel Skylake node: 8 cores and 384 GB of RAM, with dedicated SSDs.
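
A single-node, memory-intensive job follows the same pattern as the sketch above, with one task and an explicit memory request. The values below are again placeholders:

```python
#!/usr/bin/env python3
# Hypothetical SLURM batch script for a single-node, large-memory job.
#SBATCH --job-name=bigmem_example
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # matches an 8-core large-memory node
#SBATCH --mem=500G               # leaves headroom on a 512 GB node
#SBATCH --time=24:00:00

import subprocess

# Placeholder executable; a threaded code could use the 8 allocated cores.
subprocess.run(["./my_serial_code"], check=True)
```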

For storage:


- A 33 TB BeeGFS filesystem, available on every node.
- A 15 TB archiving space, available to users.
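
As a quick sanity check before writing large files, the free space on the shared filesystem can be queried from Python's standard library. The mount point /beegfs used below is an assumption, not the actual path:

```python
#!/usr/bin/env python3
# Report free space on the shared filesystem (mount point is an assumption).
import shutil

total, used, free = shutil.disk_usage("/beegfs")
print(f"BeeGFS: {free / 1e12:.1f} TB free of {total / 1e12:.1f} TB")
```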

Computations are managed by the SLURM workload manager. The cluster is hosted in the Université Paul Sabatier datacenter.
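
SLURM's standard query tools show the state of the cluster at any time; for instance, sinfo can list each node with its core count and memory. A small sketch wrapping it from Python:

```python
#!/usr/bin/env python3
# List every node with its core count and memory, via SLURM's sinfo.
import subprocess

# -N: one line per node; -o: custom format (%N name, %c CPUs, %m memory in MB)
result = subprocess.run(
    ["sinfo", "-N", "-o", "%N %c %m"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```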